A Short Course Book
Sensors, Pixels and Image Sizes

Image Sensors and Color

RGB uses additive colors. When all three are mixed in equal amounts they form white. When red and green overlap they form yellow, and so on.
Maxwell (top) and his actual photograph of the tartan ribbon taken in 1861 (bottom).
The photosites on an image sensor have red, green, and blue filters intermingled across them in patterns designed to yield sharper images and truer colors. The patterns vary, but the most popular is the Bayer mosaic pattern shown here.
It may be surprising, but pixels on an image sensor only capture brightness, not color. They record the gray scale—a series of tones ranging from pure white to pure black. How the camera creates a color image from the brightness recorded by each pixel is an interesting story with its roots in the distant past.

The gray scale, seen best in black and white photos, contains a range of tones from pure black to pure white.

When photography was first invented in the 1840s, it could only record black and white images. The search for a color process was long and arduous, and a lot of hand coloring went on in the interim (causing one photographer to comment "So you have to know how to paint after all!"). One major breakthrough came in 1861, when James Clerk Maxwell demonstrated that color photographs could be created using black and white film and red, green, and blue filters. He had the photographer Thomas Sutton photograph a tartan ribbon three times, each time with a different color filter over the lens. The three black and white images were then projected onto a screen with three different projectors, each equipped with the same color filter used to take the image being projected. When brought into alignment, the three projected images formed a full-color photograph. Over a century later, image sensors work much the same way.
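Maxwell's three-exposure principle maps directly onto how digital images are stored today: projecting the three filtered black and white images in register is equivalent to stacking three grayscale arrays as the red, green, and blue channels of one color image. The sketch below uses NumPy and made-up brightness values purely to illustrate the idea.

```python
import numpy as np

# Hypothetical 2x2 grayscale exposures, one per filter (values 0-255).
# The numbers are invented for illustration only.
red_exposure   = np.array([[200,  50], [120,  80]], dtype=np.uint8)
green_exposure = np.array([[180,  60], [110,  90]], dtype=np.uint8)
blue_exposure  = np.array([[ 40, 200], [100,  70]], dtype=np.uint8)

# "Projecting" the three filtered images in alignment is equivalent to
# stacking them as the R, G, and B channels of a single color image.
color_image = np.dstack([red_exposure, green_exposure, blue_exposure])

print(color_image.shape)  # (2, 2, 3): height, width, three color channels
```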

Colors in a photographic image are usually based on the three primary colors red, green, and blue (RGB). This is called the additive color system because colors are created by mixing the three colors. The RGB system is used whenever light is projected to form colors, as it is on a display monitor (or in your eye). Another color system uses cyan, magenta, yellow, and black (CMYK) to create colors. This system is used in almost all printers because printed images are seen by reflected light. It's called subtractive because each ink absorbs, or subtracts, certain colors from the light falling on the page, so only the remaining colors are reflected to your eye.
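The relationship between the two systems is that each subtractive primary is the complement of one additive primary: cyan absorbs red, magenta absorbs green, and yellow absorbs blue. The following is a minimal, idealized sketch of that pairing (real printer color management is far more involved and is not shown here):

```python
def rgb_to_cmy(r, g, b):
    """Idealized conversion on a 0-255 scale: each subtractive primary
    (cyan, magenta, yellow) is simply the complement of one additive
    primary. This illustrates the pairing only; real printing also uses
    a black (K) ink and profile-based color management."""
    return 255 - r, 255 - g, 255 - b

# Pure red light corresponds to ink that absorbs green and blue:
print(rgb_to_cmy(255, 0, 0))  # (0, 255, 255): magenta plus yellow
```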

Since daylight is made up of red, green, and blue light, placing red, green, and blue filters over individual pixels on the image sensor can create color images just as they did for Maxwell in 1861. Using a process called interpolation, the camera computes the actual color of each pixel by combining the color it captured directly through its own filter with the other two colors captured by the pixels around it.

Because each pixel on the sensor has a color filter that only lets through one color, a captured image records the brightness of the red, green, and blue pixels separately. (There are usually twice as many photosites with green filters because the human eye is more sensitive to that color, so green color accuracy is more important.) Illustration courtesy of Foveon at www.foveon.com.

To create a full color image, the camera's image processor calculates, or interpolates, the actual color of each pixel by looking at the brightness of the colors recorded by it and the pixels around it. Here the full color of some green pixels is about to be interpolated from the colors of the eight pixels surrounding them.
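The simplest form of this interpolation, often called bilinear demosaicing, just averages the nearest neighbors that did record the missing color. The sketch below assumes a common Bayer layout (alternating R G / G B rows) and invented brightness values; real cameras use more sophisticated, proprietary algorithms.

```python
import numpy as np

# A tiny Bayer-filtered capture: each photosite records only one channel.
# Assumed layout: even rows alternate R G, odd rows alternate G B.
# The brightness values are made up for illustration.
raw = np.array([
    [10, 20, 30, 40],
    [50, 60, 70, 80],
    [15, 25, 35, 45],
    [55, 65, 75, 85],
], dtype=float)

def interpolate_green_at_red(raw, y, x):
    """Estimate the missing green value at a red photosite by averaging
    its four green neighbors (above, below, left, right) -- the simplest
    of many demosaicing schemes."""
    neighbors = [raw[y - 1, x], raw[y + 1, x], raw[y, x - 1], raw[y, x + 1]]
    return sum(neighbors) / 4

# In this layout red photosites sit at even row, even column; (2, 2) is one.
print(interpolate_green_at_red(raw, 2, 2))  # mean of 70, 75, 25, 45 -> 53.75
```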

There are at least 256 tones captured for each color (red, green, and blue). Only one tone at the shadow (black) end of the range and one at the highlight (white) end are pure and have no detail.
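Those 256 tones per channel come from storing each channel in 8 bits, and combining the three channels yields the familiar figure of roughly 16.7 million possible colors:

```python
# Each channel stores 8 bits, giving 256 brightness levels per color.
levels_per_channel = 2 ** 8

# Every pixel combines one red, one green, and one blue level.
total_colors = levels_per_channel ** 3

print(levels_per_channel, total_colors)  # 256 16777216
```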

Each time you take a picture, millions of calculations are made in just a few seconds. It's these calculations that make it possible for the camera to interpolate, preview, capture, compress, filter, store, transfer, and display the image. All of these calculations are performed in the camera by an image processor that's similar to the one in your desktop computer, but dedicated to this single task. How well your processor performs its functions is critical to the quality of your images but it's hard to evaluate advertising claims about these devices. To most of us these processors are mysterious black boxes about which advertisers can say anything they want. The proof is in the pictures.

Cameras with the latest programmable image processors can be programmed by camera companies to perform a variety of functions. Currently these functions include in-camera photo editing and special effects such as red-eye removal, image enhancement, picture borders, stitching together panoramas, removing blur caused by camera shake, and much more.

When a camera company programs its processors, its goal isn't to exactly reproduce a scene's colors. Instead, using a process called color rendering, its goal is to create what the programmers believe will be a pleasing reproduction. Frequently the contrast and color saturation are boosted, especially in the midtones, and specular highlights are compressed for printing and viewing on typical displays. The processed images can be so distinctive that it's possible for some people to tell when an image was taken with a Canon or Nikon camera.
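The midtone boost can be pictured as a tone curve applied to each channel. The toy S-curve below is only a sketch of the idea; actual camera rendering pipelines are proprietary and far more elaborate.

```python
def render_tone(v, contrast=1.2):
    """Toy midtone contrast boost: stretch channel values (0..1) away
    from middle gray, clipping at the ends. Real color rendering uses
    carefully tuned curves, not this simple linear stretch."""
    return min(1.0, max(0.0, 0.5 + (v - 0.5) * contrast))

# Midtones are pushed apart; values near the ends clip toward 0 or 1.
print(round(render_tone(0.4), 3), round(render_tone(0.6), 3))
```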
