What is the internal precision used by Adobe Illustrator? Or put another way, what internal representation does Illustrator use to store coordinate data?
Try this (I’m using CS6):
- Zoom in as close as Illustrator allows (6400% in CS6).
- Draw a circle.
- Resize the circle to 0.001 × 0.001 pt.
- Resize the circle to 10 × 10 pt.
Here’s what I get when I do that:
This is obviously an extreme example, but I have experienced similar distortions in more realistic scenarios when objects have gone through extensive editing. To avoid this, I have had to resort in some cases to working at an enlarged scale and only resizing my artwork to final size at the very end.
Clearly, Illustrator’s internal precision is not infinite. It appears that coordinates are ultimately rounded to an integer at some small scale (I assume some fraction of a point), such that rounding errors can accumulate, eventually culminating in visible distortion.
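The suspected behaviour is easy to simulate. Here is a Python sketch that snaps coordinates to a fixed grid and then scales back up; the 1/1024 pt quantum is purely a guess for illustration — Illustrator's real quantum (if any) is exactly what I'm asking about:

```python
# Hypothetical sketch: simulate coordinates snapped to a fixed grid.
# The 1/1024 pt quantum is an assumption for illustration only.
QUANTUM = 1.0 / 1024

def snap(v: float) -> float:
    """Round a coordinate to the nearest grid step."""
    return round(v / QUANTUM) * QUANTUM

# x-coordinates of four distinct anchor points on a 0.001 pt wide circle
xs = [0.0, 0.0002929, 0.0007071, 0.001]

# Snapping collapses them onto at most two grid positions...
snapped = [snap(x) for x in xs]

# ...and scaling back up to 10 pt magnifies the snapping error 10000x.
scaled = [x * 10_000 for x in snapped]
print(scaled)  # distinct anchors have merged: visible distortion
```

Once anchors that should be distinct have merged onto the grid, no amount of scaling back up can recover them — which matches the kind of distortion shown above.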
Can anyone shed some light on this?
No computer application has infinite precision. Even CAD applications do not have a much larger accuracy scale. They do, however, allow you to change the working unit, which changes your zero-point location, whereas Illustrator always calculates everything in points. You should probably read What Every Computer Scientist Should Know About Floating-Point Arithmetic, which by now applies to every computer user.
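A minimal Python illustration of the core issue from that paper: most decimal fractions have no exact binary representation, so error is present from the very first operation and accumulates from there:

```python
# 0.1 cannot be represented exactly in binary floating point,
# so even adding it to itself ten times drifts off target.
total = 0.0
for _ in range(10):
    total += 0.1

print(total)         # slightly less than 1.0
print(total == 1.0)  # False: the accumulated error is visible
```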
There is nothing wrong with drawing at scale; that is essentially what CAD applications do, they just have a facility for telling the user about it. The application internally only stores numbers; the interpretation of those numbers is up to the frontend.
Anyway, I cannot replicate your problem:
Image 1: a 0.001 pt circle progressively scaled up by ×10 (image from CC 2017, but the results in CS5 are comparable)
I can, however, replicate the problem on CS6. After tracking down a CS6 installation, I can verify that this does indeed happen.
I can even round-trip a full double-precision float value through a coordinate, for example:
in  0100000001110111011010111110101110000101000111101011100001010010
out 0100000001110111011010111110110000000000000000000000000000000000
So it seems that the internal value is certainly not a double-precision float. A single-precision float, perhaps? That would make sense for an early version of hardware-acceleration code!
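For anyone who wants to poke at those bit patterns themselves, here is a small Python sketch. It only decodes the two bit strings into doubles and compares the observed error against what an IEEE single-precision round-trip would produce; it makes no claim about what Illustrator actually does internally:

```python
import struct

def bits_to_double(bits: str) -> float:
    """Interpret a 64-character bit string as a big-endian IEEE 754 double."""
    return struct.unpack('>d', int(bits, 2).to_bytes(8, 'big'))[0]

sent = bits_to_double(
    '0100000001110111011010111110101110000101000111101011100001010010')
got = bits_to_double(
    '0100000001110111011010111110110000000000000000000000000000000000')

# The value Illustrator returned differs from the one it was given,
# and the error is orders of magnitude larger than a double-precision
# ULP at this magnitude, so the stored value cannot be a full double.
print(sent, got, abs(got - sent))

# For comparison: round-trip the input through an IEEE binary32 single.
single = struct.unpack('>f', struct.pack('>f', sent))[0]
print(single, abs(single - sent))
```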
I still have to test whether that is the case in CC 2017, though.