Complete Color Integrity

Comments on Calibrating Digital Images

Calibration is the process of characterizing the camera or the film scanner so that the images it produces correspond to the colors and variations of color in the original scene with acceptable accuracy. Note that this does not mean the image will be what we want as a final product, only that it accurately represents the original scene.

We find that it is best to calibrate a camera or scanner using a target having a stepped grayscale for which the RGB pixel values are known for each step. The quality of the grayscale is important as the steps should be uniformly gray with no tinting. The calibration results from comparing the camera's image of this grayscale with the known values for each step. This comparison involves aligning three elements for each of the three channels (R,G,B) so that the camera image and the known grayscale values best match. The three elements are 1) a white balance adjustment to account for the lighting used to take the camera image, 2) a blackpoint adjustment, and 3) a "curve" which describes how the camera or film responds to increasing light levels.

Of these three elements, the first is specific to the lighting and will be different for different lighting conditions. The second typically results from a combination of several factors and will be different for images taken under different conditions. Only the third element depends just on the digital camera or film being tested, and so only the third actually is a calibration of the digital camera or film. These third corrections, the "curves," need to be applied to all images from that digital camera or that type of film. Blackpoint adjustments and white balance (color balance) differ from image to image. These cannot be corrected automatically as part of the calibration but must be determined as required for each different photographic situation; sometimes for each image.

As we discuss in several other places in this document, the first adjustment, establishing color balance, is done by "adding and removing black" from the color channels and so does not affect color integrity. The second adjustment, blackpoint, is done by "adding or removing white" from the color channels and so it also does not affect color integrity.
Since neither of these adjustments affects color integrity, only the third element actually causes the calibration to establish color integrity in the image. Calibration methods that are currently in use often make the mistake of including the first and/or second element as part of the calibration. The first and second elements need to be accounted for while calculating the calibration, but should not form part of the calibration itself.
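As a minimal sketch of this separation (our own illustration, not the actual code in our plug-ins, and with invented numbers throughout): model the camera reading of each grayscale step in one channel as reading = gain * step**gamma + black, and fit gain (white balance), blackpoint, and gamma by brute-force least squares. Only gamma is the calibration; gain and black are accounted for during the fit but belong to the particular image, not the camera.

```python
def fit_channel(known, readings):
    """Return (gain, black, gamma) minimizing squared error over the steps."""
    best = None
    for g100 in range(40, 301):               # gamma from 0.40 to 3.00
        gamma = g100 / 100.0
        for b100 in range(0, 21):             # blackpoint from 0.00 to 0.20
            black = b100 / 100.0
            # With gamma and black fixed, the best gain has a closed form
            # (least squares through the origin).
            xs = [k ** gamma for k in known]
            ys = [r - black for r in readings]
            gain = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)
            err = sum((gain * x - y) ** 2 for x, y in zip(xs, ys))
            if best is None or err < best[0]:
                best = (err, gain, black, gamma)
    return best[1:]

# Known target values for a 6-step grayscale (0.0 = black, 1.0 = white) and
# simulated camera readings generated with gain=0.9, black=0.05, gamma=0.5.
known = [0.02, 0.1, 0.25, 0.5, 0.75, 1.0]
readings = [0.9 * k ** 0.5 + 0.05 for k in known]
gain, black, gamma = fit_channel(known, readings)
```

The fit recovers the gamma that should be kept and applied to all images from this camera, while the recovered gain and black are simply discarded after the fit.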

We have a web page that goes into detail on grayscale calibration and describes how it can be done in Photoshop. Our ColorNeg and ColorPos plug-ins for Photoshop provide several methods of grayscale calibration: using known grayscales as described above, but also using grayscales in which the target pixel values for the steps are not known, and even a "grayscale" selected from the natural grays available in many images. The plug-ins also have a non-grayscale calibration method called "FilmType," which uses the fact that, shown a series of images with varying loss of color integrity, most people will select the image with color integrity as the most natural. The curves that produce the most natural image are thus a good estimate of the calibration curves. The calibration methods in these plug-ins pre-date the work that led to this Complete Color Integrity page and will likely be improved as a result of this work, but in their current state they still work reasonably well with the new concepts.

All of our calibrations use gamma curves for the calibration of films, cameras, and scanners. It is certainly possible to use other, more complicated curves for calibration, and since gamma curves are nearly as old as photography itself, it is valid to question using them in this age of computers. But our experience is that using a more complicated form rarely results in improved performance of the system. With film the actual calibration curves are never known with high precision due to variations in processing and handling. A system of gamma curves is much more forgiving of variations in the film than are more complicated curves, particularly where the complicated curve shape is different for Red, Green, and Blue. As for digital cameras, if the image data is available in truly raw form, the "curve" for most of them does not depart much from a straight line (or a gamma of 1) except possibly for the very brightest part of the image – a small range of 1/3 stop or so – and there is very little that can be done about that brightest, somewhat unstable part in any event. In fact, the main problem with digital cameras is getting the raw camera image data into a computer image file without having the program doing the transfer "improve" the image as the transfer is made. We hope to have a web page dealing with this problem very soon. Meanwhile, the literature on our ColorPos plug-in has some suggestions.

One obvious question about the above is why we use just a grayscale. This is a color calibration – don't you need color patches to do an adequate calibration? In the past we have explained why using color patches is not necessary. In a digital image gray is composed of equal amounts of Red, Green, and Blue. We calibrate the camera or film so that the responses to Red, Green, and Blue are equal on each step of a grayscale. This fully defines the way the film or sensor responds to Blue light as it goes from dark to light, and the same is true for Green light and for Red light. But as you will learn below, it is also true that you cannot do a better calibration than one using a grayscale, although many – probably most – digital camera makers claim to.
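The reasoning can be sketched in a few lines (the camera's per-channel gammas below are invented for illustration): simulate a camera whose channels have different gamma responses, recover each channel's curve from a single gray step, and note that the inverse curves also restore a colored pixel that the calibration never saw.

```python
import math

# Hypothetical per-channel responses of a simulated camera.
cam_gamma = (0.45, 0.50, 0.55)

def camera(rgb):
    """Simulated raw camera response: each channel sees only its own light."""
    return tuple(v ** g for v, g in zip(rgb, cam_gamma))

# Recover each channel's gamma from one known gray step (equal R, G, B).
step = 0.5
found = [math.log(r) / math.log(step) for r in camera((step, step, step))]

# The inverse curves, derived from gray alone, also restore a colored pixel.
color = (0.60, 0.30, 0.20)
restored = tuple(r ** (1.0 / g) for r, g in zip(camera(color), found))
```

Because each channel's dark-to-light response is independent of the other channels, the grayscale fully determines the correction for every color.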

"Acceptable Accuracy" in Calibration

The key to proper calibration is in understanding exactly what "acceptable accuracy" must mean for calibration to work properly. Cameras – digital cameras or film cameras – are simply not capable of producing a completely visually accurate record of the colors in a scene, but a camera can produce images with colors that are completely visually realistic. This is because the sensitivity of the eye to colored light is different from the camera's sensitivity to colored light. Generally the two see colors as nearly the same, often visually identical, but quite often there is a visible difference. An object you see as a particular shade of blue the camera might record as a slightly more greenish blue. But this difference is not consistent. There may be another, different object that you see as an identical shade of blue to the first blue object, but the camera records that object as a slightly more reddish blue than you see. It is important to realize that these differences cannot be corrected by adjusting the camera image to be slightly less green to agree with the first object, because then the second object would have an even more pronounced reddish cast. Likewise, making the camera image slightly less red would correct the second object, but then the first object would appear even more greenish. The phenomenon that leads to these differences in the way the camera sees and the way you see is called metamerism. It is the same phenomenon that sometimes causes an item of clothing to be a different color under store lighting than it is when you get it out in daylight. If you want to more fully understand metamerism, you can do no better than looking in the standard textbook on color, Roy S. Berns' "Billmeyer and Saltzman's Principles of Color Technology."

Just above, metamerism made the camera see as two different shades two color patches which we see as matching. The reverse of this is just as common. The camera can see two color patches as identical in color while to us one patch may be slightly more red than the other. Again, it is impossible to "fix" this situation. If the two patches have an identical color in the camera's image there is no way to tell whether or not that color came from a color patch that we see as more red so there is no way to know whether or not the color should be adjusted to be more red. That is, there is no way in general to know which sort of color "patch" that color came from. If we are taking a picture of a target with colored patches, it is possible to use our special knowledge to tell exactly which color patch is which. And that is where camera calibrators get into trouble. They can "calibrate" the colored patches to be different than what the camera actually sees.

Metamerism is a natural phenomenon. The eye routinely has to deal with metameric effects as the character of the light changes in a scene. Even daylight has large changes in character (technically "in spectrum") as cloud conditions change, as natural reflections alter the light, as time of day alters the light, etc. And so metameric color shifts are usually seen as natural unless the shift is uncommonly large or there is direct comparison between the image and the original scene. Another property of metameric color shifts is that they are consistent whether the patch is brightly or dimly lighted. If the camera sees a particular blue object in bright light as slightly redder than you do, then in dim light from the same light source it will also see the same object as slightly redder than you do. This consistency in the light/dark behavior of metamerism is important. As a result metamerically shifted colors will behave as is required for color integrity and will look natural even though they may not match the colors of the original scene. So, for an image to have color of "acceptable accuracy" the key is to have the image give visually the same color for a given colored object whether the object is brightly or more dimly lit in the scene. This means that in calibration of a camera or film, metameric effects can be ignored and in fact it is best to do so. As we have shown above, trying to compensate for metameric effects at best just decreases the metameric color shifts of some colors at the expense of increasing the metameric shifts of other colors. Worse, as we have shown in the section on profiling, some methods of "calibration" that are in common use can actually destroy color integrity in such a way that it is irretrievable.
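As a small illustration of this consistency (the object color and shift factors below are invented): model the metameric shift as a fixed per-channel factor in the camera's response to one object's reflected light. The proportional shift is then identical at every illumination level, which is exactly the behavior color integrity requires.

```python
# How the eye sees one object (linear RGB) and the camera's metameric shift
# for that object, expressed as fixed per-channel factors.
eye_color = (0.20, 0.35, 0.60)
shift = (1.00, 0.96, 1.05)   # camera sees this object slightly more blue

def seen_by_eye(level):      # level = illumination intensity
    return tuple(level * c for c in eye_color)

def seen_by_camera(level):
    return tuple(level * c * s for c, s in zip(eye_color, shift))

def camera_vs_eye(level):
    """Per-channel ratio of camera response to eye response."""
    return tuple(c / e for c, e in zip(seen_by_camera(level), seen_by_eye(level)))
```

At any brightness the camera/eye ratio per channel is the same, so the object looks "slightly more blue" to the camera in bright light and in dim light alike – shifted, but natural.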

Calibration and Color Profiling

In the present state of digital imaging nearly all attempts at calibration of digital cameras or film are referred to as profiling. This is because most of the methods used are based on the ICC Color Management system, in which profiles are used to characterize various imaging devices. There are very serious problems with this approach. This is not a fault of the ICC Color Management system itself, which is in fact well-designed, ingenious, and very successfully applied in color-managing displays, printers, and commercial printing. The problem is that when it comes to cameras the ICC system is widely misunderstood and has often been grossly misapplied.

The Pitfalls of Using Profiling as Camera Calibration

Given a digital image – that is, a computer file representing an image – the purpose of the ICC Color Management system is to make that file visually appear as similar as possible when rendered on various devices, CRT displays, LCD displays, ink-jet printers, laser printers, commercial printing services, etc. It does this by using profiles that describe how each such device responds to image data to produce color images. "Device" as used here is a very rigid term. When you use a different type of paper in a printer it becomes a different "device" and requires a different profile even though you think of it as the same printer. As a CRT ages its behavior changes so that it becomes a different "device" and needs a different profile to make its displayed colors acceptably accurate. ICC profiling also extends to scanners. Scanners have a built-in light source that is controlled by the scanner. A page that is scanned in a scanner today can be expected to produce the same result if it is scanned in that same scanner next week. So, the scanner is a "device" and can be profiled. If we use ICC profiles to scan a page on a scanner, view it on a display, and print it on an ink-jet printer we can expect that the displayed and the printed page will look very much like the original paper page that was scanned.

In trying to extend this to cameras we first find that a camera is really not a "device." The source of illumination is not constant as is the case with scanners. While some photographic studio or photographic laboratory situations can be exceptions, for general photography the light can come from many different sources at many different intensities and can be altered by sky conditions, reflection, and in many other ways before it reaches the subject. Properly applied, the ICC system would require a different profile for each one of these myriad light sources. Since that is impossible, the typical approach has been to pick one light source and profile for just that one source. This is given an official appearance by choosing an official-looking light source, typically D50 or D65, and going to great pains to be exact about it. But neither D50 nor D65 illumination matches any real lighting condition, and so any real image that is "calibrated" by using a D50 camera profile will not be accurate, and can be very inaccurate, even for images taken in, say, daylight. To compensate for this, fudge factors that have nothing to do with ICC profiling are introduced, but the whole package is still called "profiling" so that it appears to be under the wing of ICC. Apparently this makes everyone feel better.

This would be bad enough, but there is another current trend that is even worse. In dealing with cameras, film or digital, or scanners we are (nearly) always dealing with systems of three primary colors, the additive Red, Green, Blue set or sometimes the subtractive Cyan, Magenta, Yellow set. The ICC has profile formats which are designed to deal with the special considerations required for three-primary systems. But the big triumph of ICC Color Management has been in dealing with printers, both desktop printers and large printing-service systems. Color printers are commonly based on multi-color systems rather than three-primary systems. The most common multi-color system is CMYK (Cyan-Magenta-Yellow-blacK), but printers often use even more colors. The ICC has profile formats which are designed to deal with the special considerations of these multi-color systems. These multi-color profiles naturally have a lot of control over color matching, and can allow quite precise choice of inks which best match the various colors and tones throughout much of the visible range. This is possible because when there are more than three colors of ink, some visible colors and tones typically can be represented by many different combinations of the inks, and in other cases colors and tones can be represented that would not be possible if just three of the ink colors were used. Where multiple choices are possible the choice may be made on economic grounds.

At the start, traditional standard color charts were used for checking and calibrating digital cameras. These charts had, in one form or another, numerous patches of color as well as a grayscale, and colorimetric measurements had been made on the various color patches which defined each of their colors and thereby which RGB values each patch ideally should have in a computer representation. When using specified lighting and one of these color charts to calibrate a digital camera to a three-primary color ICC profile it was possible to get a reasonably close match to most of the color RGB values, but naturally, due to metamerism, some patches might closely match while some others might be visibly different. At some time in the development of digital cameras someone had the bright idea of using the ICC multi-color profiles intended for printers with these three-primary RGB systems. By choosing one of the most detailed multi-color profiles they found they could actually "calibrate" the camera so that every one of the color patches and grayscale steps was an exact match! Thus the color chart would be duplicated exactly, and of course the "superior" color ability of the system became a major selling point. The problem is that this approach is completely fallacious. Given correct lighting conditions the resulting profile will indeed make images of the color chart that are perfect matches to the original. At the same time, the profile will make the color response of the camera worse for nearly everything other than the color chart itself. With a camera profiled in this patchwork-quilt manner colors can shift several times going from darker to brighter illumination – color integrity is not only lost but becomes nearly impossible to regain.

The serious problem described above really should be obvious, at least to persons with a mathematical background as presumably would be the case for persons setting up profiling calibrations. We can be generous and assume it really must not be that obvious rather than conclude that we have a lot of disingenuous people who have programmed calibration profiles for digital cameras. Being generous, let me try to explain the problem. Nearly all digital cameras have a single sensor chip with an array of identical light sensitive cells. To make the chip color sensitive, a tiny red, green, or blue filter is placed over each sensor cell. Since all the sensors are identical that means that the response to light as it goes from darker to lighter is identical for all cells regardless of color. In addition, the chip is normally designed so that the output it produces is proportional to the light intensity it sees (that is, it is linear) over all or nearly all of the range of interest. Therefore, any calibration should deviate very little from linear for all three color channels. Furthermore, there is no crosstalk between the color channels that is of a predictable nature. The red channel simply measures the amount of red light and that measurement is not influenced in any predictable way by the amount of light that the blue and green sensors see. For this reason any "calibration" which tries to adjust the red value of a pixel differently for different readings in the corresponding blue and green pixels is a false calibration. This in effect changes the near-linear "curve" of the red (or other color) channel to what might be described as lumpy, and with a lumpiness that varies according to the color levels sensed in the other channels. This plays havoc with color integrity, which demands a smooth transition from dark to light.
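A short sketch of the contrast (all numbers invented): a smooth per-channel curve leaves an object's channel ratios the same from dark to light, while a "calibration" that keys the red output to the green level gives the same object a different hue at different brightnesses.

```python
def per_channel(rgb, gamma=1.0):
    """Legitimate calibration: one smooth curve per channel (near-linear)."""
    return tuple(v ** gamma for v in rgb)

def patchwork(rgb):
    """False calibration: red is 'corrected' according to the green level."""
    r, g, b = rgb
    r *= 1.15 if g > 0.5 else 0.90   # lumpy, cross-channel red adjustment
    return (r, g, b)

bright = (0.8, 0.6, 0.4)                 # one object, brightly lit
dim = tuple(0.5 * v for v in bright)     # the same object at half the light

# Red/green ratio (a rough stand-in for hue) under each treatment.
smooth_bright = per_channel(bright)[0] / per_channel(bright)[1]
smooth_dim = per_channel(dim)[0] / per_channel(dim)[1]
lumpy_bright = patchwork(bright)[0] / patchwork(bright)[1]
lumpy_dim = patchwork(dim)[0] / patchwork(dim)[1]
```

The per-channel treatment keeps the hue constant from dim to bright; the cross-channel treatment makes the object change hue as the light level changes, which is exactly the loss of color integrity described above.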

This can be confusing because the Bayer interpolation usually used on digital camera images does adjust the red value of a pixel differently according to the readings of nearby blue and green pixels. But this is done according to the geometric relationship of several surrounding pixels and the values each pixel is sensing. Geometry within the image is the key. A specific pair of blue and green pixels will influence the red pixel value in different ways depending upon the geometry of placement of the pixel values within the image. The calibration form we treat above requires that the red pixel be changed the same way any time a particular (R,G,B) color is found. In fact Bayer interpolation generally does damage to color integrity, but Bayer interpolation has its largest influence at edges within the image and does little to the larger areas of similar color to which the eye is most sensitive. Bayer interpolation is necessary and the effect it has is not very harmful visually.
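To make the distinction concrete, here is a simple bilinear sketch of red-channel Bayer interpolation on an invented RGGB mosaic: the red estimate depends on where the pixel sits in the mosaic pattern, not on a fixed rule keyed to the pixel's own (R,G,B) color.

```python
# 4x4 raw RGGB mosaic (invented values): even rows are R G R G,
# odd rows are G B G B.
raw = [
    [10, 5, 12, 5],
    [ 5, 3,  5, 3],
    [14, 5, 16, 5],
    [ 5, 3,  5, 3],
]

def red_at(y, x):
    """Bilinear red estimate at (y, x); pixels away from the border only."""
    if y % 2 == 0 and x % 2 == 0:
        return raw[y][x]                              # red site: direct reading
    if y % 2 == 0:
        return (raw[y][x - 1] + raw[y][x + 1]) / 2    # green site in a red row
    if x % 2 == 0:
        return (raw[y - 1][x] + raw[y + 1][x]) / 2    # green site in a blue row
    return (raw[y - 1][x - 1] + raw[y - 1][x + 1] +   # blue site: diagonals
            raw[y + 1][x - 1] + raw[y + 1][x + 1]) / 4
```

The same neighboring red values combine in different ways (horizontally, vertically, or diagonally) purely because of pixel position, which is what distinguishes this from a color-keyed "calibration."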

Finally, and primarily because I get asked about this, a comment on the transfer matrices which play a part in ICC profiles. These are linear transformations which are intended to accurately convert colors from their expression in one set of three primaries, r, g, b, to another set of three primaries r', g', b'. We will assume for the moment that the primaries r', g', b' are known for the target system. In order for the transformation to be meaningful, the three primaries r, g, b of our camera or film also need to be known. They aren't. So, the use of transfer matrices with film or digital cameras is basically meaningless and can be harmful.

To explain, a CRT display uses phosphors which radiate the red, green, and blue colors each at a very specific wavelength of light. This direct connection between the primaries and specific wavelengths gives the CRT very specific values of r,g,b. With film and digital cameras there is no direct connection between the primaries and a specific wavelength of light. For example, the red filters used in a digital camera pass a wide band of wavelengths of light throughout the red region of the spectrum. When single-wavelength "primaries" are given for a digital camera they represent averages (integral averages) over the band of wavelengths actually passed by the filters, and moreover the average is weighted according to a standard light source (D50 for example). If the many different light sources which the camera will actually experience were used in the calculation instead of D50, the result would be a whole collection of different r,g,b "primary" values, each of which is as valid a characterization of the camera primaries as any other. All of the resulting rgb systems are very similar, and it makes much more sense to just use the target r',g',b' as the primaries for the camera rather than converting to r',g',b' from some artificially averaged r,g,b for the camera that is bound to be incorrect most of the time. The situation is basically the same for film cameras; see our Color Negative FAQ for more detail.
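A numerical sketch of this averaging (the filter passband and source curves below are invented, not measured data): the quoted single-wavelength "primary" is the source-weighted mean wavelength over the filter's passband, and it moves when the light source changes.

```python
# Red-region wavelengths in nm, a hypothetical red filter passband, and two
# hypothetical light sources: one flat, one "warmer" (more power at long
# wavelengths).
wavelengths = list(range(580, 701, 10))
filt = [0.0, 0.1, 0.3, 0.6, 0.9, 1.0, 0.9, 0.6, 0.3, 0.1, 0.0, 0.0, 0.0]
flat = [1.0] * 13
warm = [0.4 + 0.05 * i for i in range(13)]

def mean_wavelength(source):
    """Source-weighted average wavelength passed by the filter."""
    weights = [f * s for f, s in zip(filt, source)]
    return sum(wl * w for wl, w in zip(wavelengths, weights)) / sum(weights)
```

Under the flat source the "primary" sits at the filter's center; under the warmer source it shifts toward longer wavelengths, so neither value is the camera's true primary.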

By this I do not mean that transforms via transfer matrices are never useful. In situations where the difference in primaries is larger and/or the source has a fixed lighting system, transfer matrices are very useful. But for a typical general-use camera or film the transform is wishful thinking at best and may well degrade the image rather than improving its accuracy.
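For reference, the operation itself is simple (the matrix below is invented; the point above is that a camera's true matrix entries are not knowable in general): a transfer matrix is a 3x3 linear map from colors in one set of primaries to another, and being linear it at least preserves the dark-to-light scaling that color integrity depends on.

```python
# A made-up 3x3 transfer matrix from (r, g, b) to (r', g', b').
M = [
    [0.95, 0.04, 0.01],
    [0.02, 0.96, 0.02],
    [0.01, 0.03, 0.96],
]

def transform(rgb):
    """Apply the transfer matrix to a color: out[i] = sum_j M[i][j] * rgb[j]."""
    return tuple(sum(M[i][j] * rgb[j] for j in range(3)) for i in range(3))
```

Because the map is linear, transform(k * c) equals k * transform(c) for any brightness factor k; the harm comes not from the operation but from using matrix entries that do not correspond to the camera's actual primaries.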


Much of the material in this document and the analyses are original with us. This document is
Copyright © 2009 by C F Systems
If you plan to use our original material or analyses in any published form, please contact us at for terms and conditions. Before-the-fact terms typically require no more than appropriate acknowledgement.
