Is a DOT composed of many PIXELS?

I’m trying to understand the difference between DPI and PPI.

I learned that a DOT is the smallest physical entity a device can show or a printer can print, and that a DOT may consist of R, G, B elements.

A pixel is the smallest amount of information within a digital image.

So, does that mean that in a coloured picture each R, G, B value is an individual pixel?

If so, then each dot consists of more than one pixel, am I right?

If I’m right, is there some attribute like pixels per dot?

I like the reasoning behind this question a lot. I will bend a rigorous analysis a little for the sake of making this answer as simple (and practical) as possible.

Each dot consists of more than one pixel… Is there some attribute like pixel per dot?

This could, to some extent, be the other way around: one pixel formed by several dots.

And my short answer is Yes. There are some correlations.

A dot. To be or not to be

A printed “dot” (as the basic unit of a printer) can have only 2 states: either it is printed or it is not.

A pixel is not only a digital “dot”; it can contain different levels of information. The most basic type of pixel is a monochromatic 1-bit pixel. That is the same case: either you have a black pixel or you have a white pixel.

If you use a monochromatic bitmap, the relationship can be exactly 1 to 1: one black pixel = one printed dot.

Halftones

Most of the time we do not use a monochromatic image.

If I have a pixel that can have, for example, 3 values (white, gray, black), I could reproduce this using a 2×1 dot grid: 0 dots = white, 1 dot = gray, 2 dots = black.

This means that the reproducible levels of gray depend on how many dots we assign to match the depth of the pixel.
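The 3-level example above can be sketched in a few lines. This is just an illustration of the idea; the `render_cell` function and the `#`/`.` notation are my own, not anything a real RIP uses.

```python
def render_cell(level):
    """Render a 3-level gray pixel as a 2x1 dot grid.

    level: 0 = white, 1 = gray, 2 = black
    '#' means a printed dot, '.' means no dot.
    """
    patterns = {0: "..", 1: "#.", 2: "##"}
    return patterns[level]

for level in (0, 1, 2):
    print(level, render_cell(level))
# 0 ..
# 1 #.
# 2 ##
```

The pixel’s value simply becomes a count of printed dots inside the cell, which is the core idea behind amplitude-modulated halftoning.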

Normally in commercial print we have 8-bit images producing our printed output. If we have a basic grid of 16×16 dots, we can have 256 combinations of dots, giving us 256 levels of gray.

That is the basic relation you are looking for n_n

It is not a direct dependency (it is an optimization issue), so the relationship is neither exact nor carved in stone. But in commercial print you will find these numbers together: 300ppi, 150lpi, 2400dpi (150×16=2400).

Things are a bit more complicated than that, but that relationship is the basis for optimizing these conversions.
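The numbers above fit together in a simple chain of arithmetic. Here it is spelled out, using the 16×16 cell and the common 2× rule of thumb mentioned in the text:

```python
lpi = 150              # halftone screen: lines per inch
grid = 16              # dots per side of one halftone cell
dpi = lpi * grid       # printer resolution: 150 * 16 = 2400
gray_levels = grid**2  # 16 * 16 = 256 dot combinations per cell
ppi = 2 * lpi          # rule-of-thumb file resolution: 2 * 150 = 300

print(dpi, gray_levels, ppi)  # 2400 256 300
```

So 2400dpi is not an arbitrary printer spec: it is exactly what you need to render 256 gray levels per 150lpi cell.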

I need to finish a paper and video about this. I am preparing physical tests, macro images, etc.

There are some other variables to consider, for example, screen angle.

Let us analyze the commercial-print case (300ppi, 150lpi, 2400dpi) a bit more.

16×150=2400 is a direct transformation when your screen angle is 0°, and it is the easiest to understand.

But we use other angles too. For a halftone screen at 45°, we need a file resolution of at least 212ppi (150×√2 ≈ 212).
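One way to see where the 212 comes from: a screen cell rotated by an angle projects onto the pixel grid with its spacing stretched by cos θ + sin θ. This formula is my own way of expressing the geometry, not something quoted from a standard, so treat it as a sketch:

```python
import math

def min_ppi(lpi, angle_deg):
    """Minimum file resolution for a halftone screen rotated by
    angle_deg, assuming the requirement scales with how the rotated
    cell projects onto the pixel grid (cos + sin of the angle)."""
    theta = math.radians(angle_deg)
    return lpi * (abs(math.cos(theta)) + abs(math.sin(theta)))

print(round(min_ppi(150, 0)))   # 150
print(round(min_ppi(150, 45)))  # 212
```

At 0° the factor is 1 (one pixel per line), and at 45° it peaks at √2, which is exactly the 150 → 212 jump mentioned above.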

Double the resolution

So, why do we use 300ppi instead of 150ppi when we have 150lpi?

Here is a simulation of a 150lpi screen at 0°. Watch the red circle.

On the left, we have a 150ppi file. The circle could only start growing, for example, from the center of a cell.

On the right, we have a 300ppi file. Now the RIP has better information on where to start growing the circle. Both are 150lpi, but the extra information helped a bit to produce a better halftone; after that, the extra information is lost.

Pixelation

If we use a lower resolution, for example 75ppi, each line-dot is repeated 2× horizontally and 2× vertically, and this will be noticeable as pixelation.

In normal halftone screens for commercial print we need:

• Enough pixels assigned to each line to produce enough different shades of gray (16×150=2400).

• A workable, optimized range of pixels assigned to produce a nice line-dot: 212–300ppi on a 150lpi output. We can push this down to 150ppi in some cases.

A lot of other things to consider

If we want to get into the rough details, here are some other things to consider:

• Halftone or dither

• Viewing distance

• Type of paper

• Print technique

• The pixel on electronic devices

• Pixel density

• Sensors

• What really is a pixel

• Types of pixels

• etc.

Error diffusion

That was the easy part.

On inkjet printers (and other systems) we do not use a line screen. We shoot the dots directly onto the paper.

Error diffusion shoots “random” amounts of ink droplets according to the percentage of the color it wants to reproduce.

But the droplets do not need to fill a grid, so the printer can shoot some droplets and then shoot a different amount of droplets when there is new color information next to them.

Think of the difference from the other approach: using LPI is like “a military formation”, but here we have “a bunch of civilian dots playing around”. They produce an overall shade, but no formation is detectable.
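To make “civilian dots” concrete: the classic textbook error-diffusion algorithm is Floyd–Steinberg dithering. As noted below, the printer’s actual internal algorithm is unknown to me, so this is only the standard illustration of the technique, not what any specific printer does:

```python
def floyd_steinberg(gray):
    """Turn a 2D list of 0..255 gray values into 0/1 dots,
    pushing each pixel's quantization error onto its unvisited
    neighbors (Floyd-Steinberg weights 7/16, 3/16, 5/16, 1/16)."""
    h, w = len(gray), len(gray[0])
    img = [row[:] for row in gray]       # working copy (floats creep in)
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            old = img[y][x]
            new = 255 if old >= 128 else 0
            out[y][x] = 1 if new else 0  # 1 = printed droplet
            err = old - new
            if x + 1 < w:
                img[y][x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    img[y + 1][x - 1] += err * 3 / 16
                img[y + 1][x] += err * 5 / 16
                if x + 1 < w:
                    img[y + 1][x + 1] += err * 1 / 16
    return out

# A flat 50% gray patch comes out as roughly half the dots printed,
# scattered with no visible formation:
patch = [[128] * 8 for _ in range(8)]
dots = floyd_steinberg(patch)
print(sum(map(sum, dots)), "of 64 dots printed")
```

Notice there is no cell grid anywhere: each dot decision depends on its neighbors’ leftover error, which is exactly why no fixed pattern is detectable.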

This means that the same 300ppi file will have a bit more final detail when printed on a photographic inkjet printer than in a magazine (remember that the extra information is lost in order to produce a nice 150lpi dot).

This also means that you can use a 200ppi image and still have more detail than the 150lpi counterpart.

But as this is random it would be impossible to say “this droplet corresponds to this pixel.”

I do not know the internal algorithm used to produce the “randomness percentage”, but there is a chance there is a 16×16 “grid” or a 256-unit maximum somewhere in the mathematics of it. They need to produce some density of droplets shot relative to one maximum unit.

Just a note on joojaa’s comment about “a pixel is not a little dot”

If we treat a pixel just as an array of digital information, the trick is how to convert this information between information systems.

If our system A supports 1-bit information (2 states) and our target system B also supports 1-bit information per unit, the relation is 1 to 1.

If our system A supports 2-bit information and our target system B only supports 1-bit information, we need to grab two units to reproduce the same amount of information as system A.

And so on…

There is a direct correlation between a pixel’s depth and a dot array in terms of information.
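That information argument reduces to one line of arithmetic. The function name is mine, just to illustrate the point being made:

```python
def units_needed(pixel_bits, unit_bits=1):
    """How many units of unit_bits each are needed to carry the
    information of one pixel of pixel_bits (ceiling division)."""
    return -(-pixel_bits // unit_bits)

print(units_needed(1))  # 1 -> the 1-to-1 monochrome case
print(units_needed(2))  # 2 -> two 1-bit units for a 2-bit pixel
print(units_needed(8))  # 8 -> 8-bit pixel into 1-bit units
```

Note this counts pure information capacity; a real halftone cell spends far more dots than this (256 for an 8-bit pixel) because each dot encodes amplitude, not a bit.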