Conventional wisdom in photography holds that, where sensors are concerned, size matters. I would invite you to set that assumption aside for a moment and resist the temptation to judge a camera's image quality by its sensor size alone. While that may be true in the world of man-made optics today, it is not as true in nature. Economics aside, what drives our march toward higher resolution, greater color depth, and wider dynamic range is how we perceive the world through our own eyes.
Our eyes see far more detail and a much broader scale of gray and contrast than any camera technology brought to market so far. The ability of modern technology to acquire and store visual data at the highest quality, given a sensor of a certain size, is limited only by the technology itself.
We judge camera image quality with our eyes. How the eye first collects an image, and how that information is then processed in parallel across the visual centers of the brain, is how we see, and therefore how we visually judge our camera technology.
The retina of the eye is really several layers of brain tissue, and it is the human equivalent of the digital camera sensor. In addition to collecting and sending data, the retina spatially encodes (compresses) the data it passes to the optic nerve: there are around 100 times more photoreceptor cells than there are cells in the layer just below it, the ganglion cell layer. What is amazing by our technical standards is that the human retina is only about 22mm in diameter. In camera terms, that is a sensor with a 22mm image circle, much smaller than a full-frame sensor with an image circle of around 43mm. Granted, the eye's field of photoreceptors is not limited to a rectangular area within that circle. Still, the eye has around 120+ million photoreceptors involved in acquisition, the equivalent of a 120+ megapixel camera sensor acquiring, processing, sending, and writing image data to storage. The eye divides that job roughly as 120 million receptors gathering light-sensitivity information and around 7 million receptors gathering light-frequency information. In camera lingo, that would be like allocating 120 million pixels to collect luminance information and 7 million to collect color information.
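The spatial-encoding numbers above can be sanity-checked with a little arithmetic. A minimal sketch in Python, using the approximate counts quoted in the text; the ganglion-cell count here is an assumption implied by the ~100:1 ratio, not a figure from the text:

```python
# Approximate anatomical figures from the paragraph above.
photoreceptors = 120_000_000   # receptors acquiring light data (~120 MP)
cones = 7_000_000              # color-sensitive receptors
ganglion_cells = 1_200_000     # assumed output-layer count, giving ~100:1

# Spatial encoding (compression) at the retina: receptors per output cell.
compression_ratio = photoreceptors // ganglion_cells
print(f"retinal compression: ~{compression_ratio}:1")   # ~100:1

# The luminance/color split, in camera terms.
print(f"luminance pixels: {photoreceptors:,}")
print(f"color pixels:     {cones:,}")
```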
So imagine, if you will, a camera sensor designed to leverage 120 million pixels for acquiring fine detail and contrast, including low light and shadow, plus 7 million pixels for sampling just the color information, then merging the two across the whole spectrum, all within the image circle of a Micro Four Thirds sensor.
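To see why such a luminance-heavy split could pay off, here is a rough storage comparison in Python, analogous to chroma subsampling in video. The bytes-per-pixel figures are illustrative assumptions for this hypothetical sensor, not any real format:

```python
# Hypothetical sensor from the paragraph above: 120 MP luminance, 7 MP color.
luma_pixels = 120_000_000
chroma_pixels = 7_000_000

# Assumed illustrative encoding: 3 bytes/pixel for plain RGB everywhere,
# versus 1 byte of luminance per pixel plus 2 bytes of color per color sample.
full_rgb = luma_pixels * 3
split = luma_pixels * 1 + chroma_pixels * 2

print(f"full RGB:          {full_rgb / 1e6:.0f} MB")
print(f"luma/chroma split: {split / 1e6:.0f} MB")
print(f"savings:           {(1 - split / full_rgb) * 100:.0f}%")
```

Under these toy assumptions the split stores roughly a third of the data, which is the same intuition behind 4:2:0 chroma subsampling: the eye tolerates coarse color far better than coarse luminance.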
The hard part for camera technology to emulate is the adaptive side of vision. The human eye has a static contrast ratio of around 100:1, but because of the very sophisticated way the human visual system adapts to changing light conditions, we effectively have a dynamic contrast ratio closer to 1,000,000:1. That is the human miracle of sight, and it is the goalpost our camera technologies are always chasing.
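Those contrast ratios are easier to compare with camera spec sheets when converted to photographic stops, where each stop is a doubling of the light range. A quick sketch:

```python
import math

def ratio_to_stops(ratio):
    """Convert a contrast ratio (e.g. 100 for 100:1) to photographic stops."""
    return math.log2(ratio)

# Figures from the paragraph above.
print(f"static eye (100:1):         ~{ratio_to_stops(100):.1f} stops")
print(f"adaptive eye (1,000,000:1): ~{ratio_to_stops(1_000_000):.1f} stops")
```

That works out to roughly 6.6 stops of static contrast versus nearly 20 stops with adaptation, which is why even cameras boasting 14 or 15 stops of dynamic range still fall short of what we experience by eye.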
So one might argue that it is not the size of the sensor or the size of the pixels that limits image quality at all. Instead, it is the number of pixels, how those pixels are assigned to collect data, how the processor handles all that information, and lastly how, and how quickly, that information is compressed and stored. Historically, our cameras have generally leveraged surface area to gain light sensitivity and image depth through the channel. But that might be only a limitation of our technologies for now.
Think about it: if you took the weight and mass of the eyes and the brain, then subtracted everything in the brain not needed for sight, you would end up with something pretty small and light, including a pretty decent fixed lens.