As the world advances through digital evolution, including in image-capturing devices such as cameras, the way we perceive our surroundings has improved, with fine detail visible in every part of an object. The development of these digital devices is fascinating, and a term like “megapixel” is on everyone's lips whenever we analyse a camera today.
But have you ever wondered whether the organ that lets us view and discuss all that fine pixel detail can be compared on the same terms? In other words: what is the resolution of the human eye?
In today’s digital cameras, you have orthogonal pixels: they’re laid out uniformly across the sensor (in fact, in a nearly perfect grid), and a filter (usually the “Bayer” filter, named after Bryce Bayer, the scientist who came up with the usual color array) delivers red, green, and blue pixels.
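The Bayer arrangement mentioned above can be sketched in a few lines. This is a minimal illustration (not any camera vendor's actual implementation): in the common RGGB layout, each 2×2 cell holds one red, two green, and one blue filter, so half of all photosites sense green.

```python
def bayer_pattern(rows, cols):
    """Return the RGGB Bayer mosaic as a grid of 'R'/'G'/'B' labels."""
    cell = [["R", "G"], ["G", "B"]]  # the repeating 2x2 RGGB tile
    return [[cell[r % 2][c % 2] for c in range(cols)] for r in range(rows)]

for row in bayer_pattern(4, 4):
    print(" ".join(row))
```

Printing a 4×4 patch shows the alternating R/G and G/B rows; a real camera then interpolates (“demosaics”) the missing two colors at every pixel.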
Talking about the eye, imagine a sensor with a huge number of pixels, about 120 million. There’s a higher density of pixels in the center of the sensor, and only about 6 million of those sensors are filtered to enable color sensitivity. And of course, only about 100,000 sense for blue! Oh, and by the way, this sensor isn’t made flat, but in fact, semi-spherical, so that a very simple lens can be used without distortions; real camera lenses have to project onto a flat surface, which is less natural given the spherical nature of a simple lens (in fact, better lenses usually contain a few aspherical elements).
This is about 22 mm diagonal on average, just a bit larger than a Micro Four Thirds sensor, but the spherical shape means the surface area is around 1100 mm^2, a bit larger than a full-frame 35mm camera sensor. The highest pixel resolution on a 35mm sensor is on the Canon 5Ds, which stuffs 50.6 Mpixels into about 860 mm^2.
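Using only the figures quoted above, we can compare average pixel densities. Note this is a rough sketch: the retinal figure is an average, and in reality the fovea packs far more receptors per square millimeter than the periphery.

```python
# Figures from the text: ~120M receptors over ~1100 mm^2 of retina,
# vs. the Canon 5Ds' 50.6 Mpixels over ~860 mm^2 of sensor.
eye_pixels, eye_area_mm2 = 120e6, 1100
cam_pixels, cam_area_mm2 = 50.6e6, 860

eye_density = eye_pixels / eye_area_mm2   # receptors per mm^2
cam_density = cam_pixels / cam_area_mm2   # photosites per mm^2

print(f"eye:    {eye_density:,.0f} per mm^2")
print(f"camera: {cam_density:,.0f} per mm^2")
```

Even averaged over its whole surface, the retina comes out nearly twice as dense as one of the densest full-frame sensors on the market.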
ABOUT HUMAN EYE
The eye is not shaped like a perfect sphere, rather it is a fused two-piece unit, composed of the anterior segment and the posterior segment. The anterior segment is made up of the cornea, iris and lens. The cornea is transparent and more curved, and is linked to the larger posterior segment, composed of the vitreous, retina, choroid and the outer white shell called the sclera. The cornea is typically about 11.5 mm (0.3 in) in diameter, and 1/2 mm (500 μm) in thickness near its center. The posterior chamber constitutes the remaining five-sixths; its diameter is typically about 24 mm. The cornea and sclera are connected by an area termed the limbus. The iris is the pigmented circular structure concentrically surrounding the center of the eye, the pupil, which appears to be black. The size of the pupil, which controls the amount of light entering the eye, is adjusted by the iris’ dilator and sphincter muscles.
Light energy enters the eye through the cornea, through the pupil and then through the lens. The lens shape is changed for near focus (accommodation) and is controlled by the ciliary muscle. Photons of light falling on the light-sensitive cells of the retina (photoreceptor cones and rods) are converted into electrical signals that are transmitted to the brain by the optic nerve and interpreted as sight and vision.
DYNAMIC RANGE OF HUMAN EYE
The retina has a static contrast ratio of around 100:1 (about 6.5 f-stops). As the eye moves rapidly to acquire a target (saccades), it re-adjusts its exposure via the iris, which changes the size of the pupil. Initial dark adaptation takes place within approximately four seconds of profound, uninterrupted darkness; full adaptation through adjustments in the retinal rod photoreceptors is 80% complete in thirty minutes. The process is nonlinear and multifaceted, so an interruption by light exposure requires restarting dark adaptation all over again. Full adaptation depends on good blood flow; thus dark adaptation may be hampered by retinal disease, poor vascular circulation and high-altitude exposure.
The human eye can detect a luminance range of 10^14, or one hundred trillion (100,000,000,000,000) to one (about 46.5 f-stops), from 10^-6 cd/m^2, or one millionth (0.000001) of a candela per square meter, to 10^8 cd/m^2, or one hundred million (100,000,000) candelas per square meter. This range does not include looking at the midday sun (10^9 cd/m^2) or a lightning discharge.
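The f-stop figures quoted for the eye follow directly from the contrast ratios, since each f-stop is a doubling of light: stops = log2(ratio). A quick check of both numbers from the text:

```python
from math import log2

static_ratio = 100    # the retina's static contrast ratio
full_ratio   = 1e14   # total adaptable range, 1e-6 to 1e8 cd/m^2

print(f"static:  {log2(static_ratio):.1f} stops")   # ~6.6, i.e. "about 6.5"
print(f"overall: {log2(full_ratio):.1f} stops")     # ~46.5
```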
According to Dr. Roger Clark, a digital and film imaging professional, who is also involved in several outer space imaging NASA projects, the resolution of the human eye is approximately 576 megapixels—which is a conservative estimate because this only pegs the viewing angle at 120 degrees, and the human eye actually spans a bigger field of view of 180 degrees.
However, this is a far, far cry from the technology that is being produced today. So even considering that it is just 120 degrees, our eyes are still a long way from catching up with our tech.
The full computation is as follows: a 120-degree by 120-degree field of view, at 60 arcminutes per degree and 0.3 arcminutes per “pixel”, gives 120 × 120 × 60 × 60 / (0.3 × 0.3) = 576,000,000, or 576 megapixels.
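Dr. Clark's back-of-the-envelope estimate can be written out step by step, with each factor labeled:

```python
field_deg      = 120   # assumed field of view, per side (a conservative figure)
arcmin_per_deg = 60    # arcminutes in one degree
eye_res_arcmin = 0.3   # the eye's angular resolution, in arcminutes

# Linear "pixels" per side, then squared for the full field:
pixels = (field_deg * arcmin_per_deg / eye_res_arcmin) ** 2
print(f"{pixels / 1e6:.0f} megapixels")  # 576 megapixels
```

Note the quadratic effect: extending the field to the full 180 degrees would multiply the result by (180/120)^2 = 2.25, which is why 576 megapixels is called a conservative estimate.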
But wait: one has to remember that the eye is not a digital imaging device. The human eye does not capture images like a digital camera. Instead of taking a snapshot, the eye is constantly moving, and the brain stitches together these stimuli to form the images we see.
Further, Stevens noted that only 7 of those 576 megapixels matter. Why? Because while we have a 180-degree field of vision, we can detect high resolution only within about 2 degrees of the center of our gaze, a region called foveal vision.
Lastly, the number of pixels is only one element in determining resolution quality. Other factors matter too, including lighting, distance, and spatial resolution. In the case of spatial resolution, the number of pixels on the screen remains the same even when an object goes out of focus, yet our perception of the image quality goes down.
SOURCE – Futurism