By Pierre Markuse
Why do objects in space sometimes have different colors in different images?
You have probably seen many stunning and spectacular images of objects in space: galaxies, star clusters, supernova remnants, various nebulae, and many, many more. Often those images are very colorful, but sometimes images of the same object look very different. Why is that?
I see your true colors shining through…
Have a look at these images of the famous “Pillars of Creation” in the Eagle Nebula (M16, NGC 6611). The image on the left was taken by the Hubble Space Telescope, the image on the right by the MPG/ESO 2.2-meter telescope. As you can see, the colors are quite different. But how is that possible?
How we get color
Both images are actually composite images made from three monochromatic images taken through filters for different wavelengths. Those monochromatic images were then assigned (mapped) to one of the three color channels in a digital image (red, green, and blue), which, combined, result in a color image.
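The channel mapping described above can be sketched in a few lines of Python with NumPy. The frames here are random placeholders standing in for real monochromatic filter images (which in practice would be loaded from FITS files):

```python
import numpy as np

# Hypothetical 4x4 monochromatic frames, one per filter, values 0.0-1.0.
# Real data would come from e.g. FITS files, one image per filter.
red_frame = np.random.rand(4, 4)
green_frame = np.random.rand(4, 4)
blue_frame = np.random.rand(4, 4)

# Assign (map) each filter image to one channel of an RGB composite.
rgb = np.dstack([red_frame, green_frame, blue_frame])
print(rgb.shape)  # (4, 4, 3): height, width, and the three color channels
```

Which monochromatic frame lands in which channel is a free choice, which is exactly why the same object can look so different in different composites.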
The images taken by those two telescopes used filters for different wavelengths, so after being combined the resulting images naturally look different, although they show the same object.
In the image you can see both versions of the “Pillars of Creation” and below them the monochromatic images used for each channel. The color of the text more or less represents the color of the light mapped to the channel.
But how would the object look to the naked eye?
In most cases, deep-space objects appear very faint because of their great distances from Earth. Since our eyes (unlike a camera) cannot extend their exposure time to gather more light, colors are usually not very saturated, if visible to the naked eye at all. Some objects are simply too faint to trigger the cones in our eyes (the photoreceptor cells responsible for color vision), leaving us with a monochromatic view generated by the rods, which are far more sensitive and respond to much less light.
But if we did have super-sensitive color vision, the “Pillars of Creation” would look more like the MPG/ESO 2.2-meter telescope image to our naked eyes.
And here is why
The Hubble Space Telescope image used filters with wavelengths of 673 nm, 657 nm, and 502 nm and mapped these images to the red, green, and blue channels. And while 673 nm is indeed a reddish color mapped to the red channel, they mapped 657 nm, another reddish color, to the green channel and 502 nm, a greenish color, to the blue channel. The resulting color image is stunning and beautiful, but not what you would see with your naked eyes.
The MPG/ESO 2.2-meter telescope image used filters with wavelengths of 651 nm, 539 nm, and 451 nm and mapped the images to the red, green, and blue channels. As you can see, these three wavelengths are within the color range of the channel they are mapped to and therefore, when combined, result in an image close to what you could see with your naked eyes. That is, if you had super-vision sensitive enough to see colors at all.
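The difference between the two mappings can be made concrete with a small check. The wavelength ranges below are rough, illustrative boundaries for what we perceive as red, green, and blue; only the filter wavelengths themselves come from the article:

```python
# Rough wavelength ranges (nm) perceived as each color; the boundaries
# are approximate and chosen for illustration only.
channel_range = {"red": (620, 750), "green": (495, 570), "blue": (450, 495)}

def is_natural_mapping(mapping):
    """True if every filter wavelength falls inside its channel's color range."""
    return all(lo <= wl <= hi
               for ch, wl in mapping.items()
               for lo, hi in [channel_range[ch]])

hubble = {"red": 673, "green": 657, "blue": 502}  # filter-to-channel mapping
eso = {"red": 651, "green": 539, "blue": 451}

print(is_natural_mapping(hubble))  # False: e.g. 657 nm (red light) sits in the green channel
print(is_natural_mapping(eso))     # True: each filter matches its channel's color
```

This is why the MPG/ESO composite approximates natural color while the Hubble composite does not.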
The image below shows a linear representation of the visible light spectrum. Besides filters for the visible part of the spectrum, many telescopes can also take images in the ultraviolet and infrared. Since we cannot see ultraviolet or infrared light, any mapping of those wavelengths to one of the channels of a digital image will always result in an image that looks different from what you would see with your naked eyes.
Aren’t most color images fake then?
Some people would say the colors in the Hubble image are fake, because the image doesn’t reflect what the object would look like to the human eye.
And while it’s true that it does look different, the images aren’t fake!
The colors in the images reflect certain chemicals within the object and are therefore based on actual data. Sometimes it is possible to map the color of a certain narrowband filter to the corresponding channel of a digital RGB image. Take the Hubble image, for example. The 673 nm filter image (representing ionized sulfur) lies within the red part of the spectrum and is indeed mapped to the red channel. But then we have the 657 nm filter image (representing ionized nitrogen and hydrogen-alpha), which is also in the red part of the spectrum. You could map this data to the red channel as well, but then you wouldn’t be able to distinguish between the represented chemicals in the final image. So instead, they mapped the 657 nm filter image to the green channel, making it possible to see the distribution of those chemicals, but also creating an image that looks different from what you would see with your naked eyes. This different view can help you better see and understand the structure of objects.
Therefore, you shouldn’t see these images as fakes. An image of you in the infrared part of the spectrum is still you, and real, although it looks a lot different from a “normal” photograph, which uses the visible part of the spectrum.
But wait, there is more…
Using narrowband filters and a monochromatic camera is not the only way to take images. Many astronomers, especially amateurs, instead use color cameras (sometimes standard DSLR cameras, sometimes cameras intended for astrophotography) to take RGB (natural color) images.
And while a digital image has only three color channels (red, green, and blue), you are of course not limited to assigning just one set of data to each of those channels. You can even combine both methods (color cameras, and filters with monochromatic cameras) in a single image. Take a look at these images of the Lagoon Nebula:
In these images you can see how different the same object can look using different data. The top image is a natural color image taken with a color camera; the bottom image is a 50/50 blend of a natural color image with an image using filters focusing on hydrogen, oxygen, and sulfur (an HOS image). This means that each of the three color channels in the digital image gets its data from two sources.
The red channel combines the red light from the natural color image with the hydrogen narrowband image, the green channel combines the green light from the natural color image with the oxygen narrowband image, and the blue channel combines the blue light from the natural color image with the sulfur narrowband image.
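That per-channel blend can be sketched as a simple weighted average. The frames below are random placeholders for real natural-color and narrowband data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical single-channel frames, normalized to 0.0-1.0.
natural = {ch: rng.random((4, 4)) for ch in ("red", "green", "blue")}
narrowband = {el: rng.random((4, 4)) for el in ("hydrogen", "oxygen", "sulfur")}

# 50/50 blend per channel, HOS style: hydrogen into red, oxygen into
# green, sulfur into blue, each averaged with the natural-color channel.
blend = np.dstack([
    0.5 * natural["red"] + 0.5 * narrowband["hydrogen"],
    0.5 * natural["green"] + 0.5 * narrowband["oxygen"],
    0.5 * natural["blue"] + 0.5 * narrowband["sulfur"],
])
```

The 0.5/0.5 weights match the 50/50 blend described above; other ratios would shift the result toward either the natural color or the narrowband view.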
As you can see, the result looks a lot different from the pure natural color image and shows more of the structure of the nebula.
Which data is assigned to which channel can be chosen freely. There are, however, combinations that are more popular than others. One of them is the mapping used in this image, assigning hydrogen to red, oxygen to green, and sulfur to blue (HOS).
Another combination, made famous by images from the Hubble Space Telescope and therefore often called the “Hubble Palette” or “Hubble Colors,” assigns sulfur to red, hydrogen to green, and oxygen to blue (SHO). An example of an image in this “Hubble Palette” is the Hubble image of the “Pillars of Creation” you saw earlier.
So what is better – taking monochromatic or RGB images?
There are upsides and downsides to both methods. Using a DSLR or a color camera intended for astrophotography saves time, because you don’t have to take three separate images to combine into an RGB image. Using narrowband filters and a monochromatic camera gives you more freedom in which wavelengths you image, but because you “catch” less light you will end up with much longer exposure times. Both methods can yield very good results; which one is better depends heavily on what you are going to image, the type of image you want, and the time you are able and willing to invest in a single image.
An image of the galaxy NGC 1512, demonstrating how different the same object can look in a whole set of different wavelengths from ultraviolet to infrared.
Take a look here to learn more about colors in astrophotography:
More on the Pillars of Creation:
The full-size Hubble image can be found here:
The full-size MPG/ESO 2.2-meter telescope image can be found here:
About Pierre Markuse
Interested in all things related to science, Pierre Markuse loves to write and does so mostly on Google+, where he writes featured collections on Astronomy and Astrophysics as well as Space/Space Technology. Always impressed by the public’s interest in science, he knows the questions raised by images and how to explain them to a curious audience.
Aside from his love for space, Pierre also likes to take a look at Earth and processes image data from Earth observing satellites like Landsat or Sentinel to share on his Space Technology collection and Flickr.