In digital photography, when we hear the expression “more megapixels” we tend to believe that more is better. But in this case “more” doesn’t necessarily mean “better”. Photographic quality depends on a multitude of factors, the number of pixels being just one of them. Each pixel value has a quality that can be described in terms of geometrical accuracy, dynamic range, color accuracy, noise and artifacts. The quality of each pixel value also depends on the number of photodetectors used to determine it, the sophistication of the in-camera image processing software, the quality of the lens and sensor combination, the file format used to store it, the size of the photodiode, and the quality of the camera components. Each sensor and camera design has its compromises.
The number of pixel locations on the sensor and the ability of the lens to match the sensor's resolution determine geometrical (spatial) accuracy. How this is measured is explained in the resolution topic on this site. Interpolation will neither improve geometrical accuracy nor create what hasn't been captured.
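To see why interpolation cannot add detail, consider a minimal sketch (the function and values are purely illustrative): bilinear upsampling only blends samples the sensor already captured, so every new pixel is a weighted average of its neighbors, never new scene information.

```python
def bilinear_upsample_1d(row, factor):
    """Linearly interpolate a 1-D row of pixel values by an integer factor.
    Every output value is a blend of two captured values."""
    out = []
    for i in range(len(row) - 1):
        for k in range(factor):
            t = k / factor
            out.append(row[i] * (1 - t) + row[i + 1] * t)
    out.append(row[-1])
    return out

captured = [10, 200, 10]                    # a thin bright line, barely resolved
print(bilinear_upsample_1d(captured, 2))    # [10, 105.0, 200, 105.0, 10]
```

The upsampled row has more pixels, but the inserted values (105.0) are simply averages of existing neighbors; the thin line is no better resolved than before.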
Conventional sensors, using a color filter array, have only one photodiode per pixel location. This means that each color channel has missing pixels, which are estimated by demosaicing algorithms and which cause color inaccuracies around edges. Increasing the number of pixel locations on the sensor reduces the visibility of these artifacts. Foveon sensors (to be explained in a later post) have three photodetectors per pixel location, which allows them to achieve higher color accuracy by eliminating demosaicing artifacts. Unfortunately this kind of technology is available only in a few cameras, and their sensitivities are currently lower than those of conventional sensors.
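A minimal sketch of the estimation step may help. Assuming a Bayer RGGB pattern and simple bilinear demosaicing (real in-camera pipelines are far more sophisticated), the missing green value at a red or blue site is estimated from the four green neighbors:

```python
# Hypothetical sketch of bilinear demosaicing for the green channel of a
# Bayer (RGGB) mosaic. At red and blue sites the green value is missing
# and is estimated as the average of the green neighbors.
def estimate_green(mosaic, y, x):
    """Average the up/down/left/right neighbors, which are all green
    sites at any red or blue location of an RGGB pattern."""
    h, w = len(mosaic), len(mosaic[0])
    neighbors = [mosaic[j][i]
                 for j, i in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                 if 0 <= j < h and 0 <= i < w]
    return sum(neighbors) / len(neighbors)

# 3x3 raw mosaic, one measured value per pixel location; the center (1,1)
# is a red site in RGGB, so its green component must be interpolated.
raw = [[ 50, 120,  52],
       [118,  60, 122],
       [ 51, 119,  49]]
print(estimate_green(raw, 1, 1))   # (120 + 119 + 118 + 122) / 4 = 119.75
```

At a sharp edge the four neighbors disagree, so this average is wrong, which is exactly where the color artifacts mentioned above appear.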
The size of a photodiode is very important for dynamic range. This size is determined by the size of the pixel location and the fill factor. Higher quality sensors are more accurate and can output a larger dynamic range, which can be preserved by storing the pixel values in a RAW image file. In order to increase the dynamic range, some cameras use two photodiodes per pixel location, each with its own role: the more sensitive one measures the shadows and the less sensitive one measures the highlights.
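The dual-photodiode idea can be sketched as follows; the gain ratio and clipping level are made-up illustrative numbers, not taken from any real sensor:

```python
# Hypothetical sketch: the high-sensitivity reading resolves shadows but
# clips in the highlights; the low-sensitivity reading keeps the highlights.
# Combining the two extends the usable dynamic range.
FULL_WELL = 255      # clipping level of each photodiode reading (assumed)
GAIN_RATIO = 4       # high-sensitivity diode is 4x more sensitive (assumed)

def combine(high, low):
    """Use the high-sensitivity reading unless it clipped; otherwise
    scale the low-sensitivity reading up to the same exposure scale."""
    if high < FULL_WELL:
        return high
    return low * GAIN_RATIO

print(combine(80, 20))    # shadows: high reading unclipped, use it -> 80
print(combine(255, 150))  # highlights: high reading clipped, use 150 * 4 -> 600
```

The combined value 600 lies well beyond what a single photodiode clipping at 255 could record, which is the dynamic-range gain.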
The pixel value consists of two components:
- what you want to see (the actual measurement of the value in the scene)
- what you do not want to see (noise).
A pixel has better quality when the part you want to see is large relative to the part you don't. The noise depends on the quality of the sensor and the size of its pixel locations. Noise also increases when you raise the ISO sensitivity.
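A back-of-the-envelope calculation shows why larger pixel locations tend to be less noisy. Photon shot noise, one unavoidable noise source, grows as the square root of the collected signal, so a photodiode that collects more photons achieves a higher signal-to-noise ratio. The photon counts below are illustrative, not measured from any real sensor:

```python
import math

def shot_noise_snr(photons):
    """SNR limited by photon shot noise alone: signal / sqrt(signal)."""
    return photons / math.sqrt(photons)

small_photosite = 1600    # photons collected by a small compact-camera pixel (assumed)
large_photosite = 25600   # photons collected by a larger DSLR pixel (assumed)

print(shot_noise_snr(small_photosite))   # 40.0
print(shot_noise_snr(large_photosite))   # 160.0
```

Collecting 16 times more photons yields only 4 times the SNR, but that factor is still the main reason larger pixels look cleaner.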
Photographic quality across different types of sensors and cameras can't be compared directly because there is no single objective standard. For instance, a 3 megapixel Foveon-type sensor uses 9 million photodetectors in 3 million pixel locations. The resulting quality is higher than that of a 3 megapixel conventional image, but lower than that of a 9 megapixel one, and it also depends on the ISO level at which you compare. Likewise, a 6 megapixel Fujifilm Super CCD image is based on measurements in 3 million pixel locations. Its quality is higher than a 3 megapixel image, but lower than a 6 megapixel image. A 6 megapixel digital compact image will be of lower quality than a 6 megapixel digital SLR image with larger pixels. Determining an "equivalent" resolution is tricky at best.
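The arithmetic behind these comparisons can be tabulated in a few lines; the figures simply restate the counts mentioned above:

```python
# Pixel locations vs. total photodetectors for the sensor types discussed.
sensors = {
    "3 MP Foveon":           {"locations": 3_000_000, "detectors_per_location": 3},
    "3 MP conventional CFA": {"locations": 3_000_000, "detectors_per_location": 1},
    "9 MP conventional CFA": {"locations": 9_000_000, "detectors_per_location": 1},
}

for name, s in sensors.items():
    total = s["locations"] * s["detectors_per_location"]
    print(f"{name}: {total:,} photodetectors in {s['locations']:,} locations")
```

The 3 MP Foveon and the 9 MP conventional sensor use the same number of photodetectors, yet produce very different images, which is why photodetector count alone cannot rank them.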