Complete Color Integrity

Color Integrity from the Viewpoint of Basic Physics and Mathematics
The physics and mathematics of adding black and adding white

When we speak of "adding black" in this document we mean "adding" in the sense of adding black paint to a sample of colored paint. In terms of a useful physical model, this can be thought of as taking a patch of color − for instance, a pixel − which has an area A and putting black paint over some fraction b of the area. Black reflects no light, so only the remaining colored area (1−b)A reflects light. That remaining area is part of the original patch, so its color stays the same, but since only a fraction (1−b) of the patch shows, it has only (1−b) of its former brightness. If the light source illuminating the original colored patch were instead dimmed to (1−b) of its former brightness, the effect would be exactly the same as adding the black paint. Either way the colored patch or pixel keeps the same color but reflects only (1−b) of the light it did originally. Note that in both cases the initial color and lightness of the patch make no difference: whatever the starting color and lightness, the color stays the same and the brightness is reduced to (1−b) of its original value.

What does this mean mathematically in terms of pixel values? Each pixel in a digital image is described by its Red, Green, and Blue (R,G,B) components, the three primary colors of digital imaging, and each pixel represents a small point in the original scene. For a calibrated image, ideally the R component of each pixel is proportional to the amount of red light coming from that point, G is proportional to the green light, and B to the blue. R, G, and B are normalized so that when R = G = B the color is neutral gray, ranging all the way from black to white. (This normalization is more commonly known as "white balance" or "color balance.") Because R, G, and B are each proportional to their respective components in the light source, they individually behave exactly as described above for the entire pixel. When the fraction (1−b) is applied to (R,G,B) the result is ((1−b)R, (1−b)G, (1−b)B). If we want to dim the entire image without losing color integrity we simply apply the fraction (1−b) to each component of every pixel in the image. This is the same as "adding black paint" to every pixel. But since we are working with a computer image and not with paint, we can also lighten the image by the same method in reverse, effectively "removing black paint" from every pixel: where a fraction (1−b) < 1 darkens the image, a factor a > 1 lightens it: (aR,aG,aB). Note that both "adding black" and "removing black" − that is, (aR,aG,aB) for any a > 0 − preserve the color of the pixel. This means that using this method we can lighten some parts of an image and darken others, selectively and in as many places as we choose, and still retain color integrity; that is, the colors will not change.
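As a minimal sketch of this "adding black" arithmetic (the function name and sample values are our own, and the pixel values are assumed to be linear and normalized to the range 0 to 1):

```python
def scale_linear_rgb(pixel, a):
    """'Add black' (a < 1) or 'remove black' (a > 1) by multiplying each
    linear-light component by the same factor a. The R:G:B ratios, and
    therefore the color, are unchanged."""
    r, g, b = pixel
    return (a * r, a * g, a * b)

# Dimming the light source to half brightness, b = 0.5, a = (1 - b):
darker = scale_linear_rgb((0.8, 0.4, 0.2), 0.5)
# darker is (0.4, 0.2, 0.1); the ratio R/G is still 2, so the color is unchanged.
```

The same function darkens with a < 1 and lightens with a > 1, which is exactly why the operation can be applied selectively to different regions without disturbing color integrity.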

When we try to apply this very simple rule in Photoshop we immediately run into a problem. The R, G, and B in Photoshop are typically (almost always) "gamma-encoded." We have gone into the historical reasons for this elsewhere, but Photoshop (and most of digital imaging) stores image pixels as (R^γ, G^γ, B^γ) instead of (R,G,B), and so the image pixel components are no longer proportional to the red, green, and blue components of light represented by that pixel. As luck would have it (and clearly luck it was rather than planning) this form still works, sort of, with our simple "adding black" operation. If we take (aR^γ, aG^γ, aB^γ) and let a = c^γ, then we have (c^γ R^γ, c^γ G^γ, c^γ B^γ) = ((cR)^γ, (cG)^γ, (cB)^γ). So, multiplying the R, G, and B components of a gamma-encoded pixel by a, we still get lightening or darkening of the image that preserves color integrity, although the fractional amount of darkening or lightening is c = a^(1/γ). In particular this means that color-integrity-preserving lightening or darkening of the image can be done in Photoshop by using the highlights control of the Levels tool. (The rest of the functions of the Levels tool damage color integrity, however.)
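This serendipity can be checked numerically. A short sketch, assuming a pure power-law encoding (real profiles such as sRGB add a linear segment near black, which is exactly why the trick fails for them):

```python
GAMMA = 1 / 2.2   # the document's exponent: stored value = (linear value) ** GAMMA

def encode(r):
    """Linear light -> gamma-encoded pixel value (pure power law)."""
    return r ** GAMMA

def decode(v):
    """Gamma-encoded pixel value -> linear light."""
    return v ** (1 / GAMMA)

c = 0.5               # desired darkening factor for the *linear* light
a = c ** GAMMA        # the multiplier to apply to the *encoded* value
v = encode(0.64)      # some gamma-encoded pixel component

# Multiplying the encoded value by a = c**gamma darkens the linear light
# by exactly c, as the identity (c**g)(R**g) = ((cR)**g) predicts:
assert abs(decode(a * v) - c * 0.64) < 1e-12
```

Note that the multiplier a applied to encoded values and the physical darkening c it produces are different numbers; confusing the two is a common source of error.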

This bit of serendipity does not always work. Those who have changed to L* encoding in place of gamma encoding in the hope of getting more accurate color (if you have done this, you will know it) will be thrilled to learn that the above trick does not work with L* encoding, nor does it work with certain other profiles, such as sRGB, which do not adhere to pure gamma encoding. In those cases Levels highlights − and the few other tools in Photoshop that function by multiplying pixel components by a constant fraction − do not preserve color integrity. In fact we are hard-pressed to come up with any means within Photoshop of properly "adding or removing black" without first converting out of these profiles. Our ColorNeg and ColorPos plug-ins do function correctly with L* encoding, however.

We have noted that the "adding or removing black" operation, which mathematically is multiplying each of the Red, Green, and Blue light intensity values of the pixels by the same constant fraction, is exactly equivalent to what would happen if we brightened or dimmed the source lighting for the original scene. Suppose that instead of brightening the source lighting we shifted its color so that the red component of the source became brighter while the green and blue components stayed the same. Our camera or film would then record the image as too red. To compensate we need to leave the Green and Blue pixel values alone but decrease the Red pixel values to offset the increase in red source lighting. If the red light component increased by 5%, then the red light reflected by the objects in the scene will increase by 5% and the Red pixel values of the image will be 1.05 times as large. To correct this we divide the Red pixel values by 1.05, so that (1/1.05)(1.05R) = R and the Red pixel values are restored to their correct values. Doing this is called white balance or color balance. To correctly adjust the color balance of a (calibrated) image, all of the Red pixel intensities in the image must be multiplied by the same fraction, and the same type of adjustment, with different fractions, may be required for the Green and Blue pixel intensities. The highlights adjustment in the Photoshop Levels tool may be used to do this, operating individually on the color channels. Again, this also works with gamma-encoded pixels, but not with other non-linear encodings such as L*.
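A sketch of this correction (a hypothetical helper, assuming linear, calibrated pixel values):

```python
def white_balance(pixel, source_shift):
    """Undo a color cast by dividing each channel by the factor by which
    that component of the source lighting changed."""
    return tuple(c / s for c, s in zip(pixel, source_shift))

# A neutral gray (0.5, 0.5, 0.5) recorded under a source whose red
# component is 5% too bright comes out as (0.525, 0.5, 0.5).
recorded = (1.05 * 0.5, 0.5, 0.5)
corrected = white_balance(recorded, (1.05, 1.0, 1.0))
# corrected is neutral gray again: R = G = B = 0.5
```

The same per-channel constant must be used across the whole image (or at least across everything lit by the same source) for the balance to be physically meaningful.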

We have dealt with adding and removing black. How about adding and removing white? By "adding white" we again mean adding in the sense of adding white paint to a pixel patch of color. In this case the patch area is again A and we cover a fraction d of the patch (pixel) with white paint. With black paint we reflected no light, but with white paint it is necessary to explain in more detail what we mean by "white." In digital imaging, white is taken to mean the brightest pixel value possible. If we call this maximum value W, then white is the condition where R_w = G_w = B_w = W. Returning to our patch A covered with fraction d of white paint, the original pixel (R,G,B) will become ((1−d)R+dW, (1−d)G+dW, (1−d)B+dW). This is an "addition of black" − a dimming of the original color where the white partly covers the patch − combined with the same constant quantity of white added to each color component for the white portion of the patch.

This model applies to fog, where something − like real fog − is between the main subject matter of a scene and the camera. If we consider typical white fog, droplets between the subject and the camera reflect white light themselves while they block light coming from the subject. Light from the subject passes between the fog droplets, however, so taking d as the fraction of the area that the fog droplets actually obscure, the physical situation matches the mathematical model. This model for "adding white" also applies to specular (mirror) reflections in the highlights, where the nature of the surfaces and the angles cause the light source to be partially or completely reflected directly instead of diffusely, as is required to show the color of an object. Where the transition to specular reflection is only partial, the surface still diffusely reflects some colored light, which is added to the specular white light. There is a fraction d involved just as for fog, with the value of d changing with the transition to full specular reflection.

Returning to the "adding white" form ((1−d)R+dW, (1−d)G+dW, (1−d)B+dW), we can see that this is really a combination of adding white − the +dW in each of the three color component terms − and adding black to account for the part of the color patch which we have covered with the white paint − the (1−d) multiplier in each of the three terms. For the fog relationship the "black" part and the "white" part of the terms are related through the fractional area d, but in the generalized form of the black/white transformation for (R,G,B) we have (aR+dW, aG+dW, aB+dW). Note that a < 1 is adding black, a > 1 is removing black, d > 0 is adding white, and d < 0 is removing white. Since this transformation involves only the addition or removal of black or white, the transformed colors have color integrity. The appearance of a calibrated image in which the pixels have been altered in this way will retain the natural appearance of the colors. To use this form, the same a and d constants need to be applied to each of the R, G, and B components of a pixel, but there is no requirement that a and d be the same for all pixels; indeed they can be varied as required to produce different effects for different parts of the image, and the image will still retain its color integrity.
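A sketch of this black/white transform on a single linear pixel (W and the sample values are our own assumptions):

```python
W = 1.0   # the maximum (white) pixel value; values assumed normalized to [0, 1]

def black_white(pixel, a, d):
    """(R,G,B) -> (aR + dW, aG + dW, aB + dW).
    a < 1 adds black, a > 1 removes black; d > 0 adds white, d < 0 removes it."""
    return tuple(a * c + d * W for c in pixel)

# 20% white fog over the patch: d = 0.2, a = (1 - d) = 0.8
foggy = black_white((0.8, 0.4, 0.2), 0.8, 0.2)
# Inverting x -> 0.8x + 0.2 gives x -> 1.25x - 0.25, which lifts the fog:
clear = black_white(foggy, 1.25, -0.25)
# clear is the original (0.8, 0.4, 0.2) again (up to rounding)
```

Because the transform is invertible in this way, "removing white" (d < 0) is just as legitimate an operation as adding it.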

For the above we assumed we were adding pure white, R_w = G_w = B_w = W. Suppose instead that R_w, G_w, and B_w are a little different so that the white has a tint, R_w = d_r W, G_w = d_g W, and B_w = d_b W. Then (aR+d_r dW, aG+d_g dW, aB+d_b dW) is adding tinted white, which is also a natural situation. For this form, R_w, G_w, and B_w should of course be kept the same over larger areas to represent tinted fog, but a and d are under no more constraint than for the black/white generalized form.

Finally, we can make use of a completely generalized form (a_r aR+d_r dW, a_g aG+d_g dW, a_b aB+d_b dW). Here in addition to tinted fog we also include color balance or white balance. For this form d_r, d_g, and d_b should again be kept the same over larger areas, and in addition a_r, a_g, and a_b determine color balance and should be kept the same over areas that are subject to the same lighting. But once again, a and d are under no more constraint than for the black/white generalized form.

This final generalized form

(R,G,B) −> (a_r aR + d_r dW, a_g aG + d_g dW, a_b aB + d_b dW)

is a complete representation of adjustments that can be made without affecting color integrity. Note that the (R,G,B) used here and throughout this section are quantities that are proportional to the normalized light intensities of red, green, and blue. That is, the (R,G,B) is linear and not gamma-encoded. Furthermore, while the "adding black" form serendipitously worked directly with gamma encoded pixels, the generalized form definitely does not. It is necessary to linearize encoded (R, G, B) values before use, regardless of their encoding.
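As a sketch, here is the generalized form applied to an sRGB-encoded pixel, linearizing first with the standard sRGB transfer functions (the helper names and sample parameters are our own):

```python
def srgb_to_linear(v):
    """Standard sRGB decoding: linear segment near black, power law above."""
    return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

def linear_to_srgb(r):
    """Inverse of srgb_to_linear."""
    return 12.92 * r if r <= 0.0031308 else 1.055 * r ** (1 / 2.4) - 0.055

W = 1.0  # white level in linear space

def generalized(pixel_srgb, a=(1.0, 1.0, 1.0), d=(0.0, 0.0, 0.0)):
    """(R,G,B) -> (a_r R + d_r W, a_g G + d_g W, a_b B + d_b W),
    computed in linear light and then re-encoded.  The a triple carries
    lightening/darkening and color balance; the d triple carries the
    (possibly tinted) white term."""
    lin = (srgb_to_linear(v) for v in pixel_srgb)
    out = (ac * c + dc * W for c, ac, dc in zip(lin, a, d))
    return tuple(linear_to_srgb(min(max(v, 0.0), 1.0)) for v in out)

# Identity parameters return the pixel unchanged (up to rounding):
p = generalized((0.8, 0.4, 0.2))
```

The decode-transform-re-encode round trip is the whole point: the color-integrity arithmetic is only valid on the linear values in the middle step.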

Investigating the common blackpoint or shadow adjustment led to much of the above. Some form of this adjustment needs to be made to nearly all digital images:

(R,G,B) −> (R−b, G−b, B−b)
(R,G,B) −> (R−R0, G−G0, B−B0)

Various reasons are given for making this adjustment, typically having to do with defects in the response to light of camera sensors, film, or other equipment. While this may in fact be true, it is also true that the blackpoint adjustment is the same as the "removing white" operation found above, so that it effectively lifts fog from the image − white fog in the first form, tinted fog in the second. For all practical purposes this adjustment is not available within Photoshop itself for normal gamma-encoded images, although the "shadows" (earlier versions) or "black" (later versions) adjustment in the ACR (Adobe Camera Raw) plug-in appears to perform the first form above. This blackpoint adjustment is very often applied directly to a gamma-encoded image, losing color integrity from the start.
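A sketch of the blackpoint adjustment as fog removal (the helper name and the sample fog level are our own; the values must be linear, not gamma-encoded):

```python
def remove_fog(pixel_linear, fog=(0.0, 0.0, 0.0)):
    """Blackpoint adjustment = 'removing white': subtract the fog level
    from each channel, clamping at zero.  A neutral fog uses the same
    value b for all three channels (the first form); a tinted fog uses
    separate values (R0, G0, B0) (the second form)."""
    return tuple(max(0.0, c - f) for c, f in zip(pixel_linear, fog))

# Lifting a uniform 10% white fog (the first form):
clean = remove_fog((0.9, 0.5, 0.3), (0.1, 0.1, 0.1))
# clean is (0.8, 0.4, 0.2) (up to rounding)
```

Strictly, a full fog removal would also rescale by 1/(1−d) afterward so that white stays at W, which is the "removing black" half of the generalized transform.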

Since the addition of white actually is a mathematical addition, you might think that to add white in Photoshop it would be easy to simply put in a layer which is white − even a tinted white − and then blend a certain percentage of that with the image. After all,
(R, G, B) + k (R_w, G_w, B_w) = (R + kR_w, G + kG_w, B + kB_w)
Except that again Photoshop works directly with gamma-encoded pixel values:
(R^γ, G^γ, B^γ) + k (R_w^γ, G_w^γ, B_w^γ) = (R^γ + kR_w^γ, G^γ + kG_w^γ, B^γ + kB_w^γ)
and that result is not equal to the required ((R + kR_w)^γ, (G + kG_w)^γ, (B + kB_w)^γ).
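The failure is easy to demonstrate numerically. A sketch assuming a pure power-law γ of 2.2 and sample values of our own:

```python
GAMMA = 2.2           # encoded value = (linear value) ** (1 / GAMMA)

def enc(x):
    """Linear light -> gamma-encoded value."""
    return x ** (1 / GAMMA)

lin = (0.5, 0.2, 0.1)   # a linear-light pixel
k, W = 0.2, 1.0         # add 20% of full white

# Correct: add the white in linear light, then encode the result.
correct = tuple(enc(c + k * W) for c in lin)

# Naive: add the encoded white to the encoded pixel, i.e. a white
# layer blended directly in gamma space.
naive = tuple(enc(c) + k * enc(W) for c in lin)

# The two disagree, and not by a constant offset: the red channel is
# off by more than the blue, so the color itself shifts, not just the
# lightness.
```

Here the red-channel error is several times the blue-channel error, which is precisely the hue shift that destroys color integrity.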

Much of the material in this document and the analyses are original with us. This document is
Copyright © 2009 by C F Systems
If you plan to use our original material or analyses in any published form, please contact us at for terms and conditions. Before-the-fact terms typically require no more than appropriate acknowledgement.
