In Photoshop, will there be a difference in quality when a raster is scaled down 75% once, as opposed to being scaled down 50% twice? In both cases, the final size will be the same: 25% of the original.

The reason I ask is because sometimes I want to scale down an image that I know has been scaled down previously. I hate having to CTRL+Z (undo) a hundred times back to the state where the image was at its original size. If final quality is not affected, I’d rather just scale the image down right there and then.

**Answer**

It’s community wiki, so *you* can fix this terrible, terrible post.

Grrr, no LaTeX. 🙂 I guess I’ll just have to do the best I can.

## Definition:

We’ve got an image (PNG, or another lossless* format) named *A* of size *A_x* by *A_y*. Our goal is to scale it by *p = 50%*.

Image (“array”) *B* will be a “directly scaled” version of *A*. It will have *B_s = 1* number of steps.

*A = B_{B_s} = B_1*

Image (“array”) *C* will be an “incrementally scaled” version of *A*. It will have *C_s = 2* number of steps.

*A ≅ C_{C_s} = C_2*

## The Fun Stuff:

*A = B_1 = B_0 × p*

*C_1 = C_0 × p^{1 ÷ C_s}*

*A ≅ C_2 = C_1 × p^{1 ÷ C_s}*

Do you see those fractional powers? They will theoretically degrade quality with raster images (rasters inside vectors depend on the implementation). How much? We shall figure that out next…
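A quick numeric sketch (my own illustration in plain Python, not from the post) of what those fractional powers look like for *p = 50%*:

```python
# Per-step scale factor for an incremental downscale:
# each of the C_s steps must scale by p ** (1 / C_s) so that the
# steps multiply back out to the overall target factor p.
p = 0.50  # target overall scale (50%)

for steps in (1, 2, 10):
    per_step = p ** (1 / steps)
    print(f"{steps:2d} step(s): scale by {per_step:.4f} each time")
```

With 2 steps each factor is ~0.7071, and with 10 it is ~0.9330 — irrational numbers, so pixel coordinates keep landing between pixels and have to be rounded at every step.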

## The Good Stuff:

*C_e = 0* if *p^{1 ÷ C_s} ∈ ℤ*

*C_e = C_s* if *p^{1 ÷ C_s} ∉ ℤ*

Where *e* represents the maximum error (worst case scenario), due to integer round-off errors.

Now, everything depends on the downscaling algorithm (Super Sampling, Bicubic, Lanczos sampling, Nearest Neighbor, etc).
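To get a feel for why the choice of algorithm matters so much, here’s a toy 1-D sketch (my own illustration, not from the post): downscaling an alternating dark/bright “image” by 50% with Nearest Neighbor versus simple pair-averaging (a crude stand-in for Super Sampling):

```python
signal = [0, 100, 0, 100, 0, 100, 0, 100]  # alternating dark/bright pixels

# Nearest Neighbor at 50%: keep every second sample, discard the rest.
nearest = [signal[i * 2] for i in range(len(signal) // 2)]

# Pair-averaging (crude stand-in for Super Sampling): average each pair.
averaged = [(signal[i * 2] + signal[i * 2 + 1]) // 2
            for i in range(len(signal) // 2)]

print(nearest)   # every bright pixel was simply thrown away
print(averaged)  # brightness is preserved, detail is blurred
```

Nearest Neighbor produces `[0, 0, 0, 0]` (all the bright pixels vanish), while averaging produces `[50, 50, 50, 50]` — blurred, but no information dropped outright.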

If we’re using Nearest Neighbor (the *worst* algorithm for anything of any quality), the “true maximum error” (*C_t*) will be equal to *C_e*. If we’re using any of the other algorithms, it gets complicated, but it won’t be as bad. (If you want a technical explanation of why it won’t be as bad as Nearest Neighbor, I can’t give you one because it’s just a guess. NOTE: Hey mathematicians! Fix this up!)

## Love thy neighbor:

Let’s make an “array” of images *D* with *D_x = 100*, *D_y = 100*, and *D_s = 10*. *p* is still the same: *p = 50%*.

Nearest Neighbor algorithm (terrible definition, I know):

*N(I, p) = mergeXYDuplicates(floorAllImageXYs(I_{x,y} × p), I)*, where only the *x,y*’s themselves are being multiplied; not their color (RGB) values! I know you can’t really do that in math, and this is exactly why I’m not **THE LEGENDARY MATHEMATICIAN** of the prophecy.

(*mergeXYDuplicates()* keeps only the bottom-most/left-most *x,y* “elements” in the original image *I* for all the duplicates it finds, and discards the rest.)
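Here’s one way to read that definition as code (a sketch under my own interpretation — the dict representation and the scan order are assumptions, not from the post): every coordinate is multiplied by *p* and floored, and when two source pixels collide on the same target coordinate, only the first survives.

```python
from math import floor

def nearest_neighbor(image, p):
    """image: dict mapping (x, y) -> color. Returns the downscaled dict.

    Floors every coordinate times p; when several source pixels land on
    the same target coordinate, the first one scanned (assuming y-then-x
    ascending order) wins -- this plays the role of mergeXYDuplicates().
    """
    result = {}
    for (x, y) in sorted(image, key=lambda c: (c[1], c[0])):
        target = (floor(x * p), floor(y * p))
        if target not in result:          # keep first, discard duplicates
            result[target] = image[(x, y)]
    return result

# A 4x4 gradient image scaled by p = 50% collapses to 2x2:
src = {(x, y): x + 4 * y for x in range(4) for y in range(4)}
dst = nearest_neighbor(src, 0.5)
print(sorted(dst))  # [(0, 0), (0, 1), (1, 0), (1, 1)]
```

Note that three out of every four source pixels are discarded outright — that is the whole “algorithm”.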

Let’s take a random pixel: *D_0[39,23]*. Then apply *D_{n+1} = N(D_n, p^{1 ÷ D_s}) = N(D_n, ~93.3%)* over and over.

*c_{n+1} = floor(c_n × ~93.3%)*

*c _{1} = floor((39,23) × ~93.3%) = floor((36.3,21.4)) = (36,21)*

*c _{2} = floor((36,21) × ~93.3%) = (33,19)*

*c _{3} = (30,17)*

*c _{4} = (27,15)*

*c _{5} = (25,13)*

*c _{6} = (23,12)*

*c _{7} = (21,11)*

*c _{8} = (19,10)*

*c _{9} = (17,9)*

*c _{10} = (15,8)*

If we did a simple scale down only once, we’d have:

*b _{1} = floor((39,23) × 50%) = floor((19.5,11.5)) = (19,11)*

Let’s compare *b* and *c*:

*b _{1} = (19,11)*

*c _{10} = (15,8)*

That’s an error of *(4,3)* pixels! Let’s try this with the end pixels *(99,99)*, and account for the actual size in the error. I won’t do all the math here again, but I’ll tell you it becomes *(46,46)*, an error of *(3,3)* from what it should be, *(49,49)*.

Let’s combine these results with the original: the “real error” is *(1,0)* . ~~Imagine if this happens with every pixel… it may end up making a difference.~~ Hmm… Well, there’s probably a better example. 🙂
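The whole worked example above can be replayed in a few lines of plain Python (my own check, using the same floor-at-every-step rule as the post):

```python
from math import floor

p, steps = 0.5, 10
per_step = p ** (1 / steps)       # ~93.3% per incremental step

def incremental(pixel):
    """Scale a coordinate down in 10 steps, flooring after every step."""
    x, y = pixel
    for _ in range(steps):
        x, y = floor(x * per_step), floor(y * per_step)
    return (x, y)

def direct(pixel):
    """Scale a coordinate down to 50% in a single step."""
    x, y = pixel
    return (floor(x * p), floor(y * p))

print(incremental((39, 23)), direct((39, 23)))  # (15, 8) vs (19, 11)
print(incremental((99, 99)), direct((99, 99)))  # (46, 46) vs (49, 49)
```

The repeated flooring is exactly where the incremental error accumulates.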

## Conclusion:

If your image is originally a large size, it won’t really matter, unless you do multiple downscales (see “Real-world example” below).

It gets worse by a maximum of one pixel per incremental step (down) in Nearest Neighbor. If you do ten downscales, your image will be slightly degraded in quality.

## Real-world example:


Downscaled by 1% incrementally using Super Sampling:

As you can see, the Super Sampling “blurs” it if applied a number of times. This is “good” if you’re doing one downscale. This is *bad* if you’re doing it incrementally.

\*Depending on the editor and the format, this *could* potentially make a difference, so I’m keeping it simple and calling it lossless.

**Attribution**

*Source: Link, Question Author: JoJo, Answer Author: muntoo (4 revs)*