Most monitors that I see these days support 32-bit color depth. I can tell the difference between 16-bit, 24-bit, and 32-bit color depth but I am wondering if the human eye could tell the difference between 40-bit, 48-bit, etc. color depth?
You have to be a little careful with the definitions.
24 bit per pixel and 32 bit per pixel
“24 bit” usually means 24 bits total per pixel, with 8 bits per channel for red, green and blue, or 16,777,216 total colours. This is sometimes referred to as 24 bit RGB.
“32 bit” also usually means 32 bits total per pixel, and 8 bits per channel, with an additional 8 bit alpha channel that’s used for transparency. 16,777,216 colours again. This is sometimes referred to as 32 bit RGBA.
24 bit and 32 bit can mean the same thing, in terms of possible colours. It’s also worth noting that transparency doesn’t need to be sent to your display, because displays are opaque (you can’t see through your display to what’s behind it, unless you’re Tony Stark).
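To make the arithmetic concrete, here's a quick sketch (the function name `rgb_colours` is just for illustration) showing why 24 bpp RGB and 32 bpp RGBA have the same number of possible colours:

```python
# Number of distinct colours for a given RGB bit layout.
# 24 bpp (8 bits per channel) and 32 bpp RGBA share the same colour
# count, because the extra 8 bits are alpha, not colour.
def rgb_colours(bits_per_channel: int) -> int:
    return (2 ** bits_per_channel) ** 3  # R x G x B combinations

print(rgb_colours(8))  # 16777216 -- for both 24 bpp RGB and 32 bpp RGBA
```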
32 bit per channel
Also, 32 bit occasionally means 32 bits per channel (128 bits total per pixel). And, a lot of the time 32 bit per channel uses floating point numbers, rather than integers. (I’m happy to add more detail on floating point vs integer, if you’d like.)
The OpenEXR format supports 32 bit float channels. That might sound excessive, but it’s often used for VFX and rendered material, where heavy processing or colour correction may be involved, and large files are less of a concern.
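A rough sketch of why float channels matter for that kind of work (the helper `to_uint8` is hypothetical, not part of any format): integer channels clip everything outside their range, while a float channel can hold over-bright values for later processing.

```python
def to_uint8(x: float) -> int:
    """Quantise a linear-light value to an 8-bit integer channel."""
    return max(0, min(255, round(x * 255)))

highlight = 3.5             # an over-bright HDR highlight, in linear light
print(to_uint8(highlight))  # 255 -- clipped; detail above 1.0 is lost
print(highlight)            # 3.5 -- a float channel keeps the full value
```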
16 bit per pixel or 16 bit per channel?
“16 bit” can typically mean two different things: 16 bits per pixel or 16 bits per channel. 16 bits per pixel works out to be 65,536 possible colours, and it definitely looks worse than 24 bits per pixel. 16 bits per channel means 281,474,976,710,656 total colours — well beyond human perception, but handy for processing.
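As an illustration of the 16 bits per pixel case, here's a sketch of the common RGB565 packing (5 bits red, 6 bits green, 5 bits blue), which is one typical way those 16 bits are laid out:

```python
# Pack an 8-bit-per-channel colour into 16 bpp RGB565:
# 5 bits red, 6 bits green, 5 bits blue = 65,536 possible colours.
def pack_rgb565(r: int, g: int, b: int) -> int:
    return ((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3)

print(hex(pack_rgb565(255, 255, 255)))  # 0xffff
print(2 ** 16)  # 65536 colours at 16 bits per pixel
print(2 ** 48)  # 281474976710656 colours at 16 bits per channel
```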
RGB or CMYK?
All the above information assumes you’re working with RGB or RGBA images. If an image is CMYK, it could be 8 bit per channel and 32 bit per pixel, with 8 bits each for the cyan, magenta, yellow and black channels.
I am wondering if the human eye could tell the difference between 40-bit, 48-bit, etc. color depth?
I think 8 bit per channel (24 bit per pixel) is on the fringe of what the human eye can easily distinguish, but that’s only part of the story. Processing can cause rounding and clipping, so additional colour depth can push errors beyond the point where humans and display technology can see them.
That is one of the reasons why it’s common for RAW camera formats to be 10, 12 or even 14 bits per channel, which works out to be 30, 36 or 42 bits per pixel. It’s also common for those working on photos to import RAW to a 16 bit per channel document for further manipulation. Pro video recording can be 10 bits per channel or higher, too.
And, in cases where you might not think there’s any processing going on, there might be — colour management alone can introduce additional processing.
8 bits per channel means there are only 256 levels of intensity per channel, which really isn’t much. Common causes for rounding errors to become visible when using 8 bits per channel:
- Using gradients stacked on top of each other, where layers aren’t 100% opacity.
- Gradients drawn without decent dithering.
- Shadows with large blurs.
- Blurred objects.
- Blending modes and other compositing of two or more layers.
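The rounding loss from stacked operations can be demonstrated with a toy round trip (not any real editor's pipeline, just an illustration): darken every 8-bit level by 50%, then brighten by 2x. Because the intermediate result is quantised to an integer, odd levels are destroyed, which is exactly the kind of error that shows up as banding.

```python
# Darken by 50%, then brighten by 2x, quantising in between as an
# 8-bit pipeline would. Odd levels collapse onto their even neighbours.
def roundtrip_8bit(v: int) -> int:
    darker = round(v * 0.5)      # quantised 8-bit intermediate
    return min(255, darker * 2)  # brighten back, clamped to 255

survivors = {roundtrip_8bit(v) for v in range(256)}
print(len(survivors))  # 129 -- roughly half of the 256 levels survive
```

With float intermediates instead of the quantised `darker`, all 256 levels would survive the round trip.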
bpp vs bpc
Now might be a good time to mention the shorthand that’s often used: bits per pixel is written as bpp, and bits per channel as bpc. It’s common to write 8bpc or 32bpp etc when talking about these things, to remove the ambiguity of saying “8 bit” or “32 bit” on its own.
Dynamic range and gamma
Dynamic range should also be factored in. It’s typical for displays to target sRGB (a transfer curve close to a gamma of 2.2). A wider dynamic range means the same number of possible values is stretched further apart, so more colour resolution is needed.
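To show what the gamma curve buys you, here's a sketch of the standard sRGB encoding function (the constants are from the sRGB specification; the function name is mine). It spends more of the 256 codes on dark tones, where eyes are most sensitive:

```python
# sRGB transfer function: linear light in [0, 1] -> encoded value in [0, 1].
# The piecewise curve is roughly gamma 2.2 overall.
def srgb_encode(linear: float) -> float:
    if linear <= 0.0031308:
        return 12.92 * linear          # linear segment near black
    return 1.055 * (linear ** (1 / 2.4)) - 0.055

# A dim linear value (5% brightness) still gets a usefully high 8-bit code,
# rather than being crushed into the bottom few levels:
print(round(srgb_encode(0.05) * 255))  # 63
```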
Is 8 bits per channel enough for final asset delivery?
Yes, most of the time, depending on the use.
Is 8 bits per channel enough for creation?
Sometimes, but often it is not.
Please note: I’ve added some more information relating to digijim and Warren Young’s comments below.