Photoshopping the Universe
By Travis A. Rector
Astronomers produce beautiful images by manipulating raw telescope data, but such processing makes images more accurate, not misrepresentative of reality.
DOI: 10.1511/2017.124.46
When people interact with cosmic images, nearly all of their first questions are about authenticity: Are the images real? Is this what I would see if I were standing next to this object?
In a world made surreal with the magic of science-fiction special effects and digital image manipulation, there is a need to know that what we are seeing is real, and that these fantastic cosmic starscapes are places that truly existed. These images are of real objects in outer space. They aren’t creations of a graphic artist’s imagination. But how a telescope “sees” is radically different from how our eyes see. Telescopes give us superhuman vision. In most cases they literally make the invisible visible. All astronomical images are translations of what the telescope can see into something that our human eyes can see. But how is it done? This is a question that has challenged astronomers and astrophotographers for decades. Many people have developed and refined techniques to take the data generated by professional-grade telescopes and turn them into color images. Along the way we’ve worked to develop a visual language to better convey an understanding of what these pictures show.
[Image credit: NASA, ESA, SSC, CXC, and STScI.]
Once telescopes collect astronomical data, the data must undergo a series of additional processing steps to be turned into color images. This is where programs such as Adobe Photoshop come in. Unfortunately, “photoshop” has become a verb for manipulating an image, often in a negative or devious sense. Nevertheless, Photoshop and other image-editing software are used to make astronomical images without any nefarious intentions or outcomes.
After calibrating data from a telescope (or telescopes, as often data from more than one are used to make an image), the next step is converting the data into grayscale images. (Usually the telescope’s camera can’t see color—that part comes later.) In these images, every pixel has a numerical value between zero and 255: Zero is pure black, 255 is pure white, and everything in between is a shade of gray, with lower numbers being darker. These display values must be whole numbers. But in the raw data from the telescope, each pixel records how much light hit it, and that value need not be a whole number, nor is it confined to such a small range. So we use a mathematical function to convert the actual pixel values into the range of zero to 255. This is often referred to as the scaling, or stretch, function.
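To make the idea concrete, here is a minimal sketch in Python with NumPy of what a simple linear scaling function might look like. The function name and the cutoff values are illustrative assumptions, not part of any observatory’s actual pipeline:

```python
import numpy as np

def linear_stretch(data, vmin, vmax):
    """Map raw floating-point pixel values onto 0-255 grayscale.

    vmin and vmax choose which data values become pure black and
    pure white; everything outside that window is clipped.
    """
    scaled = (data - vmin) / (vmax - vmin)   # normalize to the 0..1 range
    scaled = np.clip(scaled, 0.0, 1.0)       # clip values outside the window
    return (scaled * 255).astype(np.uint8)   # quantize to whole numbers

# Raw detector counts (floats) become an 8-bit grayscale image:
# gray = linear_stretch(raw_counts, vmin=0.0, vmax=3000.0)
```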
The numeric values of the pixels can be used, for example, to precisely measure the brightness of a star or the temperature of gas. Also, the dynamic range of a telescope is usually much greater than that of your eyes. Dynamic range is defined as the ratio of the brightest object in an image to the faintest. It turns out that 256 shades of gray are usually sufficient for our eyes to differentiate brightness levels. But telescopes can do much better. Therefore, if we want to look at the data as an image, we need to translate what the telescope sees into something that works for our eyes. Each chunk of data (the data set from each filter, energy range, or waveband) is converted into its own grayscale image with a scaling function. Often, astronomers choose a different scaling function for each data set to highlight the detail in the darker and brighter areas of each image. Once you have a grayscale image for each data set, the next step is to combine them to create the color image.
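Because each data set may get its own scaling function, the choice of stretch is a per-filter decision. The sketch below, again illustrative Python rather than any real pipeline, shows a few common stretch curves; gentler curves such as log or asinh lift faint detail out of the shadows without saturating bright cores:

```python
import numpy as np

# A few common stretch curves, each mapping normalized 0..1 input to 0..1 output.
STRETCHES = {
    "linear": lambda x: x,
    "sqrt":   np.sqrt,
    "log":    lambda x: np.log1p(1000.0 * x) / np.log(1001.0),
    "asinh":  lambda x: np.arcsinh(10.0 * x) / np.arcsinh(10.0),
}

def to_grayscale(data, vmin, vmax, stretch="asinh"):
    """Normalize one filter's data, apply a stretch, and quantize to 8 bits."""
    x = np.clip((data - vmin) / (vmax - vmin), 0.0, 1.0)
    return (STRETCHES[stretch](x) * 255).astype(np.uint8)

# Each filter's data set can get its own choice, for example:
# red_layer  = to_grayscale(h_alpha_data, 0, 5000, stretch="asinh")
# blue_layer = to_grayscale(oiii_data,    0,  800, stretch="log")
```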
Many people think of Photoshop as an image manipulation program designed to change what a picture looks like—think of magazine covers showing celebrities who don’t seem to age. But it does much more than that. In particular, it’s useful for combining multiple grayscale images to create a single color image. Each grayscale image is loaded as a separate layer. The layers are shifted, rotated, and rescaled so that the images are aligned. The brightness and contrast of each layer are separately fine-tuned to better bring out the detail in the bright and dark areas. Each layer is then given a color, and the layers are stacked together to produce the preliminary color image. Photoshop lets astronomers combine as many layers as they wish, which allows for complex images to be made. This is especially useful when we create images with data from multiple telescopes.
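The layering idea can be sketched outside Photoshop as well. The following Python snippet is a schematic stand-in for the colorize-and-stack workflow, not the author’s actual procedure; it uses “screen” blending (one of Photoshop’s standard layer modes) so overlapping light builds up without hard clipping, and the filter names in the usage comment are hypothetical:

```python
import numpy as np

def colorize_and_stack(layers, colors):
    """Tint each aligned 8-bit grayscale layer with an RGB color, then
    combine the tinted layers with screen blending: 1 - (1-a)(1-b)."""
    result = np.zeros(layers[0].shape + (3,))
    for gray, rgb in zip(layers, colors):
        tinted = (gray[..., None] / 255.0) * (np.asarray(rgb) / 255.0)
        result = 1.0 - (1.0 - result) * (1.0 - tinted)   # screen blend
    return (result * 255).astype(np.uint8)

# e.g., three narrowband filters mapped to reddish, greenish, and bluish hues:
# color_image = colorize_and_stack(
#     [h_alpha_layer, oiii_layer, sii_layer],
#     [(255, 60, 40), (60, 255, 80), (70, 90, 255)])
```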
[Image credit: NOAO/AURA/AUI/NSF; Local Group Survey Team and T. A. Rector, University of Alaska Anchorage.]
Photoshop is also used to “clean” the image, removing defects that are not real. These defects are artifacts that appear in the image because of how the telescope or camera functions. It is similar to removing the red-eye effect from photographs taken with a flash. When we remove artifacts from an astronomical image, we do so carefully, so as not to alter the real structure. This process can be difficult and tedious; often it takes more time than the rest of the image-making process.
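Some of this cleanup can be automated. As one simple illustration (not the author’s actual workflow, where much of this is done by hand in Photoshop), isolated bright specks such as residual cosmic-ray hits can be flagged against a local median and patched, leaving the surrounding real structure untouched:

```python
import numpy as np
from scipy.ndimage import median_filter

def clean_specks(image, nsigma=5.0):
    """Flag isolated bright specks (e.g., leftover cosmic-ray hits) by
    comparing each pixel with its local median, then patch them with
    that median so real structure is left alone."""
    image = np.asarray(image, dtype=float)
    smooth = median_filter(image, size=5)
    residual = image - smooth
    noise = 1.4826 * np.median(np.abs(residual))  # robust noise estimate (MAD)
    bad = residual > nsigma * noise               # only sharp positive outliers
    cleaned = image.copy()
    cleaned[bad] = smooth[bad]
    return cleaned, bad
```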
[Image credit: NASA/ESA/the Hubble SM4 ERO Team.]
What are some of the defects that are removed? Sometimes cosmic rays, asteroids, and satellite trails are not fully removed during the data processing. They appear as specks or streaks in the image. A common problem found in visible-light images is called a charge bleed. Because each pixel collects the electric charge created by the light that hits it, we can think of every individual pixel as an electricity bucket. If a bright object, such as a star, is observed for too long, the charge it generates will “spill out” of the pixels near the center of the star and spread into adjacent pixels. We can use Photoshop to remove these bleed defects. If we didn’t, it would look as if laser beams were shooting out of the bright stars, which is definitely not happening. Another instrumental effect, called diffraction spikes, is noticeable in bright stars. Diffraction spikes are not caused by the camera but by the telescope itself. As light enters the telescope, it is slightly spread out (or, more precisely, diffracted) by the structure that holds up the secondary mirror at the top of the telescope. The light spreads out along this structure, causing bright stars to appear to have lines sticking out of them. Unlike charge bleeds, diffraction spikes are usually not removed from the final image. Because the telescope itself produces the spikes (and not the digital camera), these artifacts have been present in astronomical images for as long as such images have been made. They can therefore serve as a visual cue that tells your brain you’re looking at an astronomical image. In fact, they serve this purpose so well that artists sometimes put diffraction spikes in their drawings or paintings of bright stars.
Another defect that astronomers occasionally have to remove is a noticeable ring around very bright stars. If a star’s light is intense enough, it can reflect off of optics inside the telescope and camera and produce a halo around the star; these are known as internal reflections. Astronomers find these reflections particularly challenging to remove because they are often large and can overlap structures in the image that we don’t want to change. They can also have complex shapes that vary depending on where the star is in the image.
[Image credit: NASA/CXC/SAO/JPL-Caltech.]
Many astronomical images are created by combining several smaller images, and this creates another need for editing. These multipanel images are made when the telescope looks at one portion of the sky (called a pointing) and then moves to look at another, adjacent portion. Or there can be more than one detector inside the instrument. For example, the Mosaic camera at Kitt Peak National Observatory in Arizona has eight detectors, so each pointing produces eight images. Variations in the sensitivity of each detector are removed when the data are calibrated, but not perfectly. This can leave seams where the images overlap, or gaps if the images don’t align properly along the edges. The brightness of each image can be fine-tuned in Photoshop so that the seams or gaps are virtually undetectable. Small gaps can be filled in with additional data from other observations and then blended in with the rest of the image.
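The seam-matching step has a simple core idea: Measure how the overlapping strip of sky differs in brightness between two panels, then shift one panel to compensate. Here is a hedged Python sketch of that idea, with illustrative slice choices rather than any observatory’s real mosaicking code:

```python
import numpy as np

def match_background(panel_a, panel_b, overlap_a, overlap_b):
    """Estimate the brightness offset between two mosaic panels from
    their shared overlap region and shift panel_b to remove the seam.

    overlap_a and overlap_b are index slices selecting the same patch
    of sky as it appears in each panel."""
    # The median is robust to stars that happen to fall in the overlap.
    offset = np.median(panel_a[overlap_a]) - np.median(panel_b[overlap_b])
    return panel_b + offset

# e.g., panels sharing a 50-pixel-wide strip along adjoining edges:
# panel_b = match_background(panel_a, panel_b,
#                            (slice(None), slice(-50, None)),
#                            (slice(None), slice(0, 50)))
```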
Just as important a question in processing images is what we do not do with Photoshop. Our goal with each image is to show how a telescope (or telescopes) sees a celestial object. In many cases, we also want to illustrate a new scientific result. We assign colors to each filter in a way that aims to be pleasing to the viewer and intuitive to understand, adding to the information the image conveys. For example, it can be distracting to make images of purple or green stars, because stars are normally red, orange, yellow, white, or blue (as seen through visible-light broadband filters). Likewise, unusual colors for recognizable objects, such as spiral galaxies, can be distracting. For less familiar images, such as an x-ray image of the area around a black hole, there is more flexibility in the colors used. Undoubtedly, unusual colors such as bright greens can help attract attention to an image. But garish colors can also distract from the overall point. Strong colors can also affect the longevity of the image: You might enjoy an image now, but is it something you would want to print and hang on your wall? Will it look as good 10 years from now as it does today?
[Image credit: T. A. Rector (University of Alaska Anchorage) and H. Schweiker (WIYN and NOAO/AURA/NSF).]
Another item on the “don’t” list is modifying the actual structure in the image. We don’t add or remove stars. We don’t enlarge or slim down galaxies by manipulating their proportions or aspect ratio. As tempting as it may be, filter effects that modify the structure are generally not used. Sometimes an image might be slightly sharpened to counter the blurring effects of stacking multiple images together, but that’s pretty much the only such manipulation. Adjustments to color, brightness, or contrast are applied to the entire image; for example, we don’t brighten one part of the image so it stands out more. If one star looks brighter than another, that’s because it really is brighter. We might rotate or crop an image to highlight key details. We don’t, however, deliberately crop to remove or hide a particular object so as to change the scientific narrative.
An essential part of the scientific process is to be explicit when describing how an experiment is done or how a conclusion is reached. That way other scientists can recreate your experiment and analysis to see if they achieve similar results. Because these images are often used to illustrate science, we adhere to the same principle when describing them. Most astronomical images from professional observatories include details about the observations used to make an image: the telescopes, cameras, and filters used; the number and lengths of the exposures; the dates of the observations; the size and rotation of the image; the location of the object; and the people involved in completing the observations, processing the data, and making the image.
Using a specially developed image metadata standard, called Astronomical Visualization Metadata (AVM), this information can also be embedded into the image. AVM is an easy way to learn about the details of an image. It also allows you to do cool things, such as show where the object is located in the sky using software such as Microsoft’s WorldWide Telescope or Google Sky. For many observatories, including all of NASA’s telescopes, you can also download the raw data from their archives.
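For instance, the third-party Python package pyavm (not mentioned in the article, and assuming its documented interface) can read AVM tags embedded in an image and embed them into another; the filenames here are hypothetical:

```python
from pyavm import AVM  # third-party package: pyavm

# Read the AVM metadata embedded in a released image
avm = AVM.from_image('eagle_nebula.jpg')     # hypothetical filename
print(avm.Spatial.Equinox)                   # coordinate-system details
print(avm.Spatial.ReferenceValue)            # sky position of the reference pixel

# Embed the same metadata into a reprocessed version of the image
avm.embed('eagle_nebula_v2.jpg', 'eagle_nebula_v2_tagged.jpg')
```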
The principles we follow produce images that are scientifically valid and show real objects in space as seen by our telescopes. But there is also a subjective, creative element to producing images. Although many scientists are reluctant to think of themselves as artists, there is nonetheless some artistry involved in making an appealing astronomical image.
Excerpted with permission from Coloring the Universe by Travis Rector, Kimberly Arcand, and Megan Watzke, published by the University of Alaska Press. © Travis A. Rector, Kimberly Arcand, and Megan Watzke. All rights reserved.