From ISRO's site -- "The Terrain Mapping Camera (TMC): The aim of this instrument is to completely map the topography of the moon. The camera works in the visible region of the electromagnetic spectrum and captures black and white stereo images."
Why is it that the camera, despite being capable of working across the entire visible spectrum, will give its stereo output in B&W? It should be possible to produce colored stereo output — or is it that color is of no importance for terrain mapping?
Thanks in advance.
It is a question of sensor fidelity (or sensitivity) and need. Note that the ability to sense all regions of the spectrum is different from the ability to discriminate between them. Let me explain.
Usually, color pictures are taken as follows. Millions of square sensors are packed onto the surface of a chip. The conductivity of these sensors depends on the intensity of light falling on them, independent of the light's color. Now one of two things is done:
a. Three neighboring sensors are each covered with a filter which allows only red, green, or blue (or maybe cyan, magenta, and yellow) to pass, and the light intensity values are read.
b. Three neighboring sensors are made of materials sensitive to red, green, or blue (or maybe CMY), and the light intensity values are read.
Now the problem is that when you have sensors arranged in such a mosaic, they measure the R, G, and B components at slightly different positions, and those readings must then be combined. This leads to a loss of resolution. If you are willing to settle for a B&W picture instead, you can do one of two things:
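To make the resolution loss concrete, here is a small sketch (my own illustration, not anything from the TMC's design) of a color-filter mosaic: each photosite only records intensity where its filter passes light, so every color channel is sampled at just a fraction of the full grid.

```python
import numpy as np

# Hypothetical 4x4 grid of light intensities (what a bare sensor would see).
scene = np.arange(16, dtype=float).reshape(4, 4)

# A simplified repeating R/G/B filter pattern laid over the sensor
# (real chips usually use a Bayer RGGB mosaic, but the argument is the same).
pattern = np.array([["R", "G", "B", "R"],
                    ["G", "B", "R", "G"],
                    ["B", "R", "G", "B"],
                    ["R", "G", "B", "R"]])

# Each color channel is captured at only some of the 16 photosites,
# so per-channel resolution is roughly a third of the full grid.
for color in "RGB":
    samples = scene[pattern == color]
    print(color, "sampled at", samples.size, "of", scene.size, "sites")
```

The missing values for each channel have to be interpolated from neighbors before the three channels can be combined into one color image.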
A. Make all pixels sensitive only to intensity information. You have tripled the resolution.
B. Combine the areas of the R, G, B pixels into one large pixel which is sensitive only to intensity information. In this case, for the same resolution as the color image before, you have increased the sensor sensitivity by a huge amount. Note that the sensitivity of a sensor depends on its size: in general, a bigger sensor means much, much better sensitivity. This is the difference between point-and-shoot digital cameras and digital SLRs. A point-and-shoot (say 10 megapixels) has all of those millions of sensors packed into the area of a fingernail. This is why they give really sucky pictures at night — the sensors are not big enough to gather enough light. DSLRs, on the other hand, have the same number of pixels packed into an area the size of a traditional film negative. That much larger area gives a much better picture at night.
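To put rough numbers on the sensor-size point, here is a back-of-the-envelope sketch. The chip dimensions are illustrative stand-ins (a typical compact-camera chip versus a full-frame chip), not specs of any particular camera:

```python
# Rough per-photosite area comparison (illustrative numbers, not exact specs).
compact_area_mm2 = 7.2 * 5.3      # a typical small point-and-shoot chip
dslr_area_mm2    = 36.0 * 24.0    # a full-frame chip, the size of 35 mm film
pixels           = 10e6           # 10 megapixels on each

compact_pixel = compact_area_mm2 / pixels   # mm^2 of light gathered per pixel
dslr_pixel    = dslr_area_mm2 / pixels

# Each DSLR photosite collects roughly this many times more light:
print(round(dslr_pixel / compact_pixel, 1))  # → 22.6
```

At the same pixel count, every photosite on the larger chip gathers over twenty times as much light, which is where the low-light advantage comes from.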
In space-based applications (including the Mars rovers Spirit and Opportunity), there is a single camera whose pixels are sensitive only to the intensity of light. Since most of the photographs they take are of static objects, there is a rotating filter in front of the sensor: first a red filter is placed and a picture is taken, then blue, then green. The three frames are then combined to make a color image. ISRO might have omitted the rotating filters. So even though the camera can, in theory, capture color images, ISRO is perhaps not interested in them and is using it as a highly sensitive, high-resolution B&W camera.
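The filter-wheel approach above amounts to taking three grayscale exposures and stacking them into the three channels of a color image. A minimal sketch (hypothetical frames, not rover data):

```python
import numpy as np

# Three grayscale exposures taken through a rotating filter wheel.
# The scene is static, so the frames line up pixel-for-pixel.
red_frame   = np.full((2, 2), 200, dtype=np.uint8)
green_frame = np.full((2, 2), 120, dtype=np.uint8)
blue_frame  = np.full((2, 2),  40, dtype=np.uint8)

# Stack along a third axis to get a standard H x W x 3 color image.
color_image = np.stack([red_frame, green_frame, blue_frame], axis=-1)
print(color_image.shape)  # → (2, 2, 3)
```

Note that every channel here has the sensor's full resolution — the trade-off versus a filter mosaic is that three separate exposures are needed, which only works for static scenes.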
You might find this interesting. I have included excerpts. The actual article is fascinating and longer.
Digital Secrets: How Spirit Makes Great Photos
Perhaps most important, the sensors on Spirit's CCDs are bigger, explained Patrick Myles, director of corporate communication at the Dalsa Corporation, which built the CCDs for all of the rover's cameras (Spirit has nine altogether, including hazard avoidance cameras and a microscopic imager).
A Sony DSC-F717, with a street price of around $600, has 5.2 million sensors (or 5 megapixels) on a chip that is 8.8 by 6.6 millimeters (or .35 by .26 inches). The Pancam has just a million sensors spread across a chip that's 12 by 12 millimeters -- nearly a half-inch square.
The Pancam does not make a color picture directly. Instead, it records light versus dark in shades of gray. As with other CCD cameras used in high-end astrophotography, such as on the Hubble Space Telescope, a series of filters are applied to gather multiple images that are then blended together.
In the most basic application of this process, three images are gathered of a scene, one each recording red, green and blue light. Those are then put together with special software to create a color picture.
A consumer digital camera uses a single coated filter to make the transition from photon reality to electrons and then digital information.
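Plugging the excerpt's numbers into a quick back-of-the-envelope calculation (my own arithmetic, not from the article) shows just how much larger each Pancam photosite is:

```python
# Per-photosite area comparison using the numbers quoted above.
sony_pixels,   sony_area_mm2   = 5.2e6, 8.8 * 6.6    # Sony DSC-F717
pancam_pixels, pancam_area_mm2 = 1.0e6, 12.0 * 12.0  # Spirit's Pancam

sony_pixel_area   = sony_area_mm2 / sony_pixels      # mm^2 per photosite
pancam_pixel_area = pancam_area_mm2 / pancam_pixels

# Each Pancam photosite is roughly this many times larger:
print(round(pancam_pixel_area / sony_pixel_area, 1))  # → 12.9
```

Roughly thirteen times the light-gathering area per pixel — which is exactly the resolution-for-sensitivity trade described earlier.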