# Color loss during colorspace conversions

Why are conversions from a smaller colorspace to a “larger” one often lossy? How can a smaller gamut have colors that cannot be represented in a larger (overlapping) gamut?

Color values are quantized by the bit depth of the format used to store the data. For example, most standard formats and modern displays use eight bits per color channel, per pixel, sometimes referred to as 24-bit True Color. Eight bits give 2^8 = 256 levels, which is why RGB colors are typically represented by three values in the range 0 to 255. Unless the conversion of a particular color lands exactly on an integer value in the target space, it has to be rounded to the nearest representable value. (Incidentally, dithering is used to make this rounding less deterministic, thereby reducing banding.)
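A minimal sketch of this quantization step, assuming color values are stored as fractions of full intensity (the function name and the 0.0-to-1.0 scale are illustrative, not from any particular library):

```python
def quantize(value, bits=8):
    """Round a color value in [0.0, 1.0] to the nearest representable level."""
    levels = 2 ** bits - 1          # 255 for an 8-bit channel
    return round(value * levels)    # nearest integer level, 0..255

# 0.5 has no exact 8-bit representation: 0.5 * 255 = 127.5, which must round.
print(quantize(0.5))   # 128
print(quantize(1/3))   # 85
```

Any input that does not land exactly on one of the 256 levels is silently nudged to its nearest neighbor; that nudge is the rounding error the rest of this article is about.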

To illustrate, let us define some fictional color spaces in one dimension.

Let Profile1 cover the entire range from “true black” to “true white,” represented arbitrarily as the black and white values of our displays. Let it be a 4-bit space, with 2^4 = 16 possible levels. A gradient in this space would look something like this:
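Profile1's sixteen levels can be written out explicitly. Here they are expressed as 8-bit display values, assuming 0 is “true black” and 255 is “true white”:

```python
# The 16 levels of the 4-bit Profile1 space, as 8-bit display values.
# Level 0 maps to true black (0), level 15 to true white (255).
profile1 = [round(level * 255 / 15) for level in range(16)]
print(profile1)
# → [0, 17, 34, 51, 68, 85, 102, 119, 136, 153, 170, 187, 204, 221, 238, 255]
```

Note the spacing: adjacent levels are 17 display values apart, so any color falling between two levels is off by up to half that gap.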

Let Profile2 cover only half the range of Profile1, such that the darkest value it can represent is a dark gray, and the lightest is a light gray. It is also 4-bit. A gradient in this space would look something like this:

If we wish to convert the gradient image from the smaller Profile2 to the larger Profile1, we have to “map” each value from Profile2 to a new value in Profile1. If we did not, and simply changed the profile while preserving the color numbers, the colors would shift: in this case the contrast would increase. Let us see what happens when we map each of the colors in the Profile2 gradient to its nearest color in Profile1:
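This nearest-match mapping can be sketched directly, assuming Profile2 covers the middle half of Profile1's range (0.25 to 0.75 on a 0-to-1 brightness scale — an illustrative placement, since the article only says “half the range”):

```python
LEVELS = 16  # both profiles are 4-bit

def profile2_to_absolute(level):
    """Absolute brightness (0.0..1.0) of a Profile2 level (0..15),
    assuming Profile2 spans the middle half of the range."""
    return 0.25 + (level / (LEVELS - 1)) * 0.5

def absolute_to_profile1(value):
    """Nearest Profile1 level (0..15) for an absolute brightness."""
    return round(value * (LEVELS - 1))

mapped = [absolute_to_profile1(profile2_to_absolute(l)) for l in range(LEVELS)]
print(mapped)
# → [4, 4, 5, 5, 6, 6, 7, 7, 8, 8, 9, 9, 10, 10, 11, 11]
```

Every pair of adjacent Profile2 levels collapses onto a single Profile1 level: sixteen distinct input grays become only eight distinct output grays, and the lost distinctions cannot be recovered.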

Even though Profile1 is a larger color space, this is certainly not a lossless conversion.
In fact, because Profile1 is larger yet has only the same 16 levels with which to represent colors, the gaps between adjacent available colors are wider. Even if the two spaces were the same size but covered different ranges, the colors each level represented would differ, and you would still have to snap each color to its nearest match rather than reproduce it exactly. This inaccuracy is what dithering helps with: it trades one kind of error for another, introducing noise to approximate colors that cannot be represented directly. Here is the same conversion from Profile2 to Profile1, this time using dithering:
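One simple way to dither is 1-D error diffusion: rather than rounding each sample independently, carry each sample's rounding error forward into the next one, so the output is correct on average even when no single level is. This is a minimal sketch of the idea, not necessarily the exact algorithm used for the image above:

```python
LEVELS = 16  # 4-bit target space

def convert_with_dither(absolute_values):
    """Quantize absolute brightnesses (0.0..1.0) to 4-bit levels,
    diffusing each sample's rounding error into the next sample."""
    out, error = [], 0.0
    for v in absolute_values:
        target = v * (LEVELS - 1) + error        # ideal level plus carried error
        level = max(0, min(LEVELS - 1, round(target)))
        error = target - level                   # remember what rounding discarded
        out.append(level)
    return out

# A flat gray at 0.55 has no exact 4-bit level (0.55 * 15 = 8.25).
# Dithering alternates between levels 8 and 9 so the average stays at 8.25.
print(convert_with_dither([0.55] * 8))
```

Plain rounding would output level 8 everywhere, shifting the whole region darker; the dithered version is noisier per pixel but faithful on average, which is exactly the trade described above.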

The grayscale space simplifies the illustration, and the limited bit depth exaggerates the effect. Nevertheless, the exact same process affects conversions between all color spaces. The higher the bit depth, the more precision is available, and the closer to “lossless” the conversion can be. (You must still, of course, contend with color gamut, and the tradeoffs that are made to fit one gamut to another.)