Will scaling down incrementally hurt quality?

In Photoshop, will there be a difference in quality when a raster is scaled down by 75% in a single step, as opposed to being scaled down by 50% twice? In both cases, the final size will be the same: 25% of the original.

The reason I ask is that sometimes I want to scale down an image that I know has been scaled down previously. I hate having to press Ctrl+Z (undo) a hundred times to get back to the state where the image was at its original size. If the final quality is not affected, I’d rather just scale the image down right then and there.

Answer

It’s community wiki, so you can fix this terrible, terrible post.


Grrr, no LaTeX. 🙂 I guess I’ll just have to do the best I can.


Definition:

We’ve got an image (PNG, or another lossless* format) named A of size A_x by A_y. Our goal is to scale it by p = 50%.

Image (“array”) B will be a “directly scaled” version of A. It will be produced in B_s = 1 step.

A × p = B_(B_s) = B_1

Image (“array”) C will be an “incrementally scaled” version of A. It will be produced in C_s = 2 steps.

A × p ≅ C_(C_s) = C_2


The Fun Stuff:

A × p = B_1 = B_0 × p (where B_0 = A)

C_1 = C_0 × p^(1/C_s) (where C_0 = A)

A × p ≅ C_2 = C_1 × p^(1/C_s)

Do you see those fractional powers? They will theoretically degrade quality with raster images (rasters inside vectors depend on the implementation). How much? We shall figure that out next…
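As a quick sanity check on those fractional powers, here is a small sketch (plain Python, nothing Photoshop-specific) that computes the per-step factor p^(1/C_s) for a few step counts:

```python
# Per-step scale factor needed so that C_s equal steps multiply
# out to an overall scale of p (here p = 50%).
p = 0.5

for steps in (1, 2, 10):
    per_step = p ** (1 / steps)  # p^(1/C_s)
    print(f"{steps:2d} step(s): scale by {per_step:.4f} ({per_step:.1%}) each time")
```

Only the one-step case gives you p itself; the fractional roots (≈70.7% for two steps, ≈93.3% for ten) are irrational, which is exactly where the round-off trouble comes from.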


The Good Stuff:

C_e = 0 if p^(1/C_s) ∈ ℤ

C_e = C_s if p^(1/C_s) ∉ ℤ

where C_e represents the maximum error (worst-case scenario), due to integer round-off errors.

Now, everything depends on the downscaling algorithm (Super Sampling, Bicubic, Lanczos sampling, Nearest Neighbor, etc).

If we’re using Nearest Neighbor (the worst algorithm for anything where quality matters), the “true maximum error” (C_t) will be equal to C_e. If we’re using any of the other algorithms, it gets complicated, but it won’t be as bad. (If you want a technical explanation of why it won’t be as bad as Nearest Neighbor, I can’t give you one, because it’s just a guess. NOTE: Hey, mathematicians! Fix this up!)


Love thy neighbor:

Let’s make an “array” of images D with D_x = 100, D_y = 100, and D_s = 10. p is still the same: p = 50%.

Nearest Neighbor algorithm (terrible definition, I know):

N(I, p) = mergeXYDuplicates(floorAllImageXYs(I_(x,y) × p), I), where only the coordinates x,y themselves are being multiplied; not their color (RGB) values! I know you can’t really do that in math, and this is exactly why I’m not THE LEGENDARY MATHEMATICIAN of the prophecy.

(mergeXYDuplicates() keeps only the bottom-most/left-most x,y “elements” in the original image I for all the duplicates it finds, and discards the rest.)
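The pseudocode above can be made runnable. Here is a minimal Python sketch of that forward-mapping Nearest Neighbor (an illustration of the definition above, not Photoshop’s actual implementation), with the image as a dict from (x, y) to a color value; `setdefault` plays the role of mergeXYDuplicates() by keeping the first source pixel, in sorted coordinate order, that lands on each target coordinate:

```python
import math

def nearest_neighbor(image, p):
    """Forward-map N(I, p): scale only the coordinates by p and floor them.

    image: dict mapping (x, y) -> color value; p: scale factor, 0 < p <= 1.
    When several source pixels collapse onto one target coordinate,
    setdefault keeps the first one in sorted (x, y) order, which stands in
    for mergeXYDuplicates() keeping the bottom-/left-most duplicate.
    """
    out = {}
    for x, y in sorted(image):
        out.setdefault((math.floor(x * p), math.floor(y * p)), image[(x, y)])
    return out

# A 4x4 "image" whose value records its original coordinate:
src = {(x, y): (x, y) for x in range(4) for y in range(4)}
dst = nearest_neighbor(src, 0.5)
print(sorted(dst))   # the four surviving target coordinates
print(dst[(1, 1)])   # which of the sources (2,2)..(3,3) survived the merge
```

Four source pixels collapse onto each target pixel here, and three of them are simply discarded, which is why this algorithm is so lossy.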

Let’s take a random pixel: the one at (39,23) in D_0. Then apply D_(n+1) = N(D_n, p^(1/D_s)) = N(D_n, ~93.3%) over and over (0.5^(1/10) ≈ 0.933). On each coordinate, that works out to:

c_(n+1) = floor(c_n × ~93.3%)

c_1 = floor((39,23) × ~93.3%) = floor((36.3,21.4)) = (36,21)

c_2 = floor((36,21) × ~93.3%) = (33,19)

c_3 = (30,17)

c_4 = (27,15)

c_5 = (25,13)

c_6 = (23,12)

c_7 = (21,11)

c_8 = (19,10)

c_9 = (17,9)

c_10 = (15,8)
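The iteration above is easy to reproduce in a few lines of Python, applying c_(n+1) = floor(c_n × p^(1/D_s)) to each coordinate:

```python
import math

p, steps = 0.5, 10
f = p ** (1 / steps)   # per-step factor, ~0.9330 (~93.3%)

c = (39, 23)
for n in range(steps):
    c = (math.floor(c[0] * f), math.floor(c[1] * f))
    print(f"c_{n + 1} = {c}")   # final line: c_10 = (15, 8)
```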

If we did a simple scale down only once, we’d have:

b_1 = floor((39,23) × 50%) = floor((19.5,11.5)) = (19,11)

Let’s compare b and c:

b_1 = (19,11)

c_10 = (15,8)

That’s an error of (4,3) pixels! Let’s try this with the corner pixel (99,99), and account for the actual size in the error. I won’t do all the math here again, but I’ll tell you it becomes (46,46), an error of (3,3) from what it should be, (49,49).

Let’s combine these results with the first one: the pixel (39,23) drifted by (4,3), while the image as a whole (judging by its corner) only shrank by an extra (3,3). So the “real error”, the distortion relative to the rest of the image, is (4,3) − (3,3) = (1,0). Imagine if this happens with every pixel… it may end up making a difference. Hmm… Well, there’s probably a better example. 🙂
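That “real error” arithmetic can be checked mechanically. This sketch computes the drift for both the interior pixel (39,23) and the corner pixel (99,99), then subtracts the corner’s drift to get the distortion relative to the image as a whole:

```python
import math

def incremental(pt, p=0.5, steps=10):
    """Scale a coordinate pair down in `steps` floored increments."""
    f = p ** (1 / steps)
    x, y = pt
    for _ in range(steps):
        x, y = math.floor(x * f), math.floor(y * f)
    return x, y

def direct(pt, p=0.5):
    """Scale a coordinate pair down in a single floored step."""
    return math.floor(pt[0] * p), math.floor(pt[1] * p)

def drift(pt):
    """How far the incremental result falls short of the direct one."""
    bx, by = direct(pt)
    cx, cy = incremental(pt)
    return bx - cx, by - cy

inner = drift((39, 23))    # (4, 3)
corner = drift((99, 99))   # (3, 3)
print(inner, corner)
print((inner[0] - corner[0], inner[1] - corner[1]))   # "real error": (1, 0)
```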


Conclusion:

If your image is originally large, one extra downscale won’t really matter, unless you do multiple downscales (see “Real-world example” below).

With Nearest Neighbor, the error gets worse by a maximum of one pixel per incremental step (down). If you do ten downscales, your image will be slightly degraded in quality.


Real-world example:

Downscaled by 1% incrementally using Super Sampling:

[Images: Original; Downscaled ×1; Downscaled ×10; Zoom into Downscaled ×1; Zoom into Downscaled ×10]

As you can see, Super Sampling “blurs” the image if applied a number of times. This is “good” if you’re doing one downscale, but bad if you’re doing it incrementally.
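The compounding blur is easy to see even without Photoshop. As a loose analogy for repeated averaging-based resampling (not Photoshop’s actual Super Sampling), here is a 1-D sketch: each pass of a 3-tap box filter averages a sample with its neighbors, and ten passes spread a single bright pixel across 21 samples:

```python
def box_filter(signal):
    """One pass of a 3-tap box blur (zero padding at the edges)."""
    padded = [0.0] + signal + [0.0]
    return [(padded[i] + padded[i + 1] + padded[i + 2]) / 3
            for i in range(len(signal))]

# A single bright pixel in the middle of a dark row:
signal = [0.0] * 41
signal[20] = 1.0

once = box_filter(signal)
ten = signal
for _ in range(10):
    ten = box_filter(ten)

print(sum(1 for v in once if v > 0))   # support after 1 pass: 3 samples
print(sum(1 for v in ten if v > 0))    # support after 10 passes: 21 samples
```

The total brightness is preserved, but it spreads a little wider with every pass, which is the 1-D version of what the ×10 image shows.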


*Depending on the editor and the format, this could potentially make a difference, so I’m keeping it simple and calling it lossless.

Attribution
Source: Link, Question Author: JoJo, Answer Author: muntoo (4 revs)
