As we all know, image sensors are an integral part of any digital camera: they capture the image. Two technologies are available, CMOS and CCD, and neither has a decisive advantage over the other. These sensors essentially convert incident light into an electronic signal. That signal is then processed and compressed into a file format before finally being written to the card. All of this happens so fast that hundreds of such captures would complete in the time it takes me to write these sentences about them!
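If you like to think in code, here is a toy, runnable sketch of that pipeline. Every function in it is a made-up stand-in of mine, not any real camera's API:

```python
# A toy, runnable sketch of the capture pipeline described above.
# Every function here is a hypothetical stand-in, not a real camera API.

def read_sensor():
    # Incident light -> electronic signal: here, a dummy 2x2 grid of
    # photosite readings (0-255).
    return [[120, 200], [80, 255]]

def process(raw):
    # "Analyze" the raw signal; a real pipeline would demosaic,
    # denoise, apply white balance, and so on.
    return [[min(255, v) for v in row] for row in raw]

def compress(image):
    # Compress into a file format; real cameras produce JPEG or RAW.
    return bytes(v for row in image for v in row)

def write_to_card(path, data):
    # Finally, the file is written to the card.
    with open(path, "wb") as card:
        card.write(data)

write_to_card("photo.bin", compress(process(read_sensor())))
```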
But my main concern here is not image sensors themselves, but a basic comparison between them and our eyes, since our eyes are the only instrument we have for seeing anything at all. After a little research (just googling everything!), I came to know that the eye's procedure is quite different from that of CCD or CMOS image sensors, and from film too. The eye, remarkable machine that it is, works not only as an image sensor, but as a powerful focusing system with a wide range of apertures and a self-cleaning mechanism that does not obstruct sight. Anatomically, the eye is a hollow ball that admits and regulates light at one end and senses it and signals the brain at the other. The pupil works as a diaphragm, controlling the amount of light entering the eye; it can range from about 2 mm to 8 mm in diameter, a huge 16-fold change in area.
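To see where the 16-fold figure comes from: the light gathered scales with the area of the aperture, and a 4x change in diameter means a 16x change in area. A quick check:

```python
import math

def pupil_area(diameter_mm):
    # Area of a circular aperture: pi * r^2
    return math.pi * (diameter_mm / 2) ** 2

constricted = pupil_area(2.0)   # bright light, ~2 mm
dilated = pupil_area(8.0)       # darkness, ~8 mm
print(dilated / constricted)    # -> 16.0, the 16-fold change in light admitted
```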
Going slightly off topic, try a small but fun experiment on yourself; it does no harm. Take a torch, or any handy small light source. Stand as near to a mirror as you can, ideally in complete darkness. Switch the torch on and point it towards your eye, in a way that it does not obstruct your view in the mirror. Keep switching the light source on and off and watch your eye adjust to the light by the rapid expansion and contraction of the pupil. The point here is that, depending on how much light there is, the iris contracts or dilates until it arrives at a proper exposure.
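In camera terms, the iris is running an automatic exposure feedback loop. Below is a minimal sketch of such a loop; the target level, step sizes and starting diameter are numbers I made up for illustration, not physiology:

```python
def auto_expose(scene_brightness, target=0.5, steps=20):
    """Iteratively adjust a circular aperture until the sensed level
    approaches a target, the way an auto-exposure loop (or iris) does.
    All constants here are illustrative, not physiological."""
    diameter = 5.0  # mm, starting pupil size
    for _ in range(steps):
        sensed = scene_brightness * diameter ** 2  # light gathered ~ area
        if sensed > target:
            diameter *= 0.9   # too bright: constrict
        else:
            diameter *= 1.1   # too dark: dilate
        diameter = min(8.0, max(2.0, diameter))  # anatomical limits
    return diameter

print(auto_expose(0.005))  # dim scene -> pupil dilates to the 8 mm limit
print(auto_expose(0.5))    # bright scene -> pupil constricts to the 2 mm limit
```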
In optical terms, the anatomical pupil is the eye's aperture and the iris is the aperture stop. The light then passes through the lens, which, just like the lens of a camera, focuses it. With exposure and focusing taken care of, we can gradually proceed to color vision. Our eye is a perfect, interrelated system of about 40 individual subsystems, including the retina, pupil, iris, cornea, lens and optic nerve. For instance, the retina has approximately 137 million special cells that respond to light and send messages to the brain. About 130 million of these cells are rod-shaped and handle black-and-white vision; the other seven million are cone-shaped and allow us to see in color. These rod and cone cells can be regarded as the biological equivalent of a digital image sensor.
How do our eyes process colors, and why don't they need white balance? Let us put it this way: in the first place, our eyes don't know colors at all; there is nothing intrinsically "red" about red light. What our eyes actually do is respond to different wavelengths. Roughly, the spectrum from 380 nm to 740 nm is detectable by a normal human eye. The color-sensing cone cells fall into three categories, according to the range of wavelengths within that spectrum that they sense:
Long (L): cones with peak sensitivity near 564–580 nm.
Medium (M): cones with peak sensitivity near 534–545 nm.
Short (S): cones with peak sensitivity near 420–440 nm.
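To see how three overlapping sensitivities can together encode a single wavelength, here is a toy model. I am using bell-shaped (Gaussian) curves as a crude stand-in for the real cone sensitivity curves, which are neither Gaussian nor equally wide; only the peak wavelengths come from the list above:

```python
import math

# Toy cone model: Gaussian responses are a crude stand-in for the real
# sensitivity curves. Only the peak wavelengths come from the list above;
# the 40 nm width is an illustrative guess.
CONES = {"L": 572, "M": 540, "S": 430}  # approximate peak wavelengths, nm

def cone_responses(wavelength_nm, width=40.0):
    return {name: math.exp(-((wavelength_nm - peak) / width) ** 2)
            for name, peak in CONES.items()}

print(cone_responses(580))  # orange-ish light: strong L, weaker M, ~no S
print(cone_responses(460))  # blue-ish light: mostly S, very little M or L
```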
This processing can be explained well by comparison with Photoshop. In Photoshop, every shade of every color can be described by a combination of R, G and B values, each between 0 and 255, and from their ratios we see the respective color right there in the color-picker dialog. Similarly, our brain gets input from the three types of cones, long, medium and short, and by combining them it derives a color. For my readers' kind information, though, these three wavelength categories do not map exactly onto R, G and B! A table of each color's approximate wavelength range is below.
| Color  | Wavelength Interval |
|--------|---------------------|
| Red    | ~700–630 nm         |
| Orange | ~630–590 nm         |
| Yellow | ~590–560 nm         |
| Green  | ~560–490 nm         |
| Blue   | ~490–450 nm         |
| Violet | ~450–400 nm         |
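That table translates directly into a little lookup; the band boundaries below are the table's and nothing more:

```python
# Wavelength-to-name lookup built straight from the table above.
# Boundaries in nm, listed from short to long wavelengths.
BANDS = [(400, 450, "violet"), (450, 490, "blue"), (490, 560, "green"),
         (560, 590, "yellow"), (590, 630, "orange"), (630, 700, "red")]

def color_name(wavelength_nm):
    for low, high, name in BANDS:
        if low <= wavelength_nm < high:
            return name
    return "outside the visible bands in the table"

print(color_name(572))  # -> yellow
print(color_name(460))  # -> blue
```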
So the exact color/wavelength seen could (maybe!) be calculated mathematically as a weighted average of the wavelengths reported by each and every cone cell.
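Taking that speculation literally, here is what such a toy calculation could look like, reusing the made-up Gaussian cone model from above. To be clear, this is just my back-of-envelope idea made runnable, not how the visual system actually computes color:

```python
import math

CONES = {"L": 572, "M": 540, "S": 430}  # toy peak wavelengths, nm (as before)

def estimated_wavelength(responses):
    # Weighted average of each cone's peak wavelength, weighted by how
    # strongly that cone responded -- the back-of-envelope idea above.
    total = sum(responses.values())
    return sum(CONES[name] * r for name, r in responses.items()) / total

# Example: responses to ~580 nm light, from the earlier toy model
responses = {name: math.exp(-((580 - peak) / 40.0) ** 2)
             for name, peak in CONES.items()}
print(estimated_wavelength(responses))  # -> roughly 563 nm, near the input
```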
After getting to know the color (wavelength) sensing system of the eye, let's take an overall view of its functions. Here, I would prefer to make just plain statements.
Our eye is just a light-sensing system which, given a proper exposure, assists the brain in recognizing colors.
Our brain distinguishes between the ambient light and the light emitted or reflected by an object. For example, once I know my car is red, it will not look magenta in dim light or orange under a yellow ambience. Cameras have to do this explicitly with white balance; see the sketch after these statements.
The less the light, the less our capacity to differentiate colors; as we approach complete darkness, we distinguish only dark from bright, black from white.
The brain just analyses and computes over the sensed wavelengths; how a specific wavelength, or a specific range of wavelengths, came to be given a name is a question for the linguistic sciences.
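Since discounting the ambient light is exactly what white balance does in a camera, here is a minimal sketch of one classic approach, the gray-world algorithm: assume the scene averages out to neutral gray and scale each channel accordingly. The two-pixel "image" is made-up data:

```python
# Gray-world white balance: assume the scene averages to neutral gray,
# and scale each channel so its mean matches the overall mean.
# The two-pixel "image" is made-up data for illustration.
image = [(200, 150, 100), (180, 140, 90)]  # warm/yellowish cast (R, G, B)

means = [sum(px[c] for px in image) / len(image) for c in range(3)]
gray = sum(means) / 3
gains = [gray / m for m in means]

balanced = [tuple(min(255, round(v * g)) for v, g in zip(px, gains))
            for px in image]
print(balanced)  # channel means are now roughly equal: the cast is removed
```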
On this note, I sign off from this post. No new topic is wandering around in my brain right now, but I am sure to come up with one.