Why should I ever use Unicode’s special characters for Roman numerals?

This answers a question that arose in the comments on this question about the Unicode characters for Roman numerals:

Why is this necessary or preferred over the usual way of typing ai, ai-ai, ai-ai-ai, vee-ai, etc.?

To start from the beginning: Unicode’s Number Forms block contains code points for Roman numerals (U+2160 – U+217F) that at first glance look very similar to standard capital Latin letters or combinations thereof. For example, U+2165 (Roman Numeral Six) looks very much like VI (Latin Capital Letter V followed by Latin Capital Letter I).
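To get an overview of what this block actually contains, here is a minimal Python sketch (my own illustration, assuming only the standard unicodedata module) that lists the range together with the official character names:

```python
import unicodedata

# List the Roman numeral code points in the Number Forms block
# (U+2160–U+216F upper case, U+2170–U+217F lower case).
for cp in range(0x2160, 0x2180):
    ch = chr(cp)
    print(f"U+{cp:04X}  {ch}  {unicodedata.name(ch)}")
```

Running it shows, for instance, U+2165 Ⅵ ROMAN NUMERAL SIX alongside its lower-case counterpart U+2175 ⅵ SMALL ROMAN NUMERAL SIX.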

Thus, the question arises why one should not just use plain Latin letters to represent those numerals and, e.g., type Louis VII instead of Louis Ⅶ. Obviously, using no special characters avoids compatibility issues with fonts that do not support them. But even if I know that the text will be rendered with a font that does support these characters, why should I bother using them?

Answer

In many fonts you will indeed find hardly any difference between using the Unicode characters for Roman numerals and just composing them from standard Latin letters. For example, the following shows Louis VII (top) and Louis Ⅶ (bottom, using code points for Roman numerals) rendered with FreeSans:

[Image: Louis VII composed from Latin capitals (top) vs. Louis Ⅶ using the Roman numeral code point (bottom), rendered in FreeSans]

Apart from a tiny difference in spacing, which was probably not intentional, the output is identical.

Here is the same text rendered with DejaVu Sans:

[Image: the same comparison rendered in DejaVu Sans]

While the characters still look identical, there is a considerable difference in spacing. It may be a matter of taste whether the latter is preferable for Roman numerals, but it certainly wouldn’t be a good choice of kerning for regular all-caps text.

Linux Libertine goes one step further:

[Image: the same comparison rendered in Linux Libertine]

Here the Roman numerals are slightly smaller than the capital letters, thus matching the font’s Arabic numerals. Most importantly, they are connected, reproducing a feature often found in hand-drawn Roman numerals.

Now, some may still argue that there aren’t any improvements in the above or that they aren’t worth the effort. So here is a case where not using the Unicode characters produces horrible results:

[Image: the same comparison in a font whose Roman numeral glyphs are considerably smaller than its capital letters, where composing the numeral from capitals looks badly out of place]

(Note that the small size of the numerals reflects some actual historic typesetting.) Something similar may occur with script or calligraphic fonts.

Without dedicated Unicode code points for Roman numerals, solving the latter problem would only be possible by:

  • Using a complex OpenType feature (or similar) that tries to detect whether a sequence of capital letters is a Roman numeral. This will inevitably cause problems with words that are also valid Roman numerals (see the sketch after this list).

  • Using a simple OpenType feature that needs to be manually activated for every Roman numeral.

  • Using Unicode’s Private Use Area. Compatibility issues are likely to ensue even when switching between two fonts that both support Roman numerals.
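To illustrate the first item, here is a minimal Python sketch of what such automatic detection amounts to (the regular expression and the sample words are my own illustration), and why it misfires on ordinary capitalised words and abbreviations:

```python
import re

# A common pattern for well-formed Roman numerals in subtractive notation.
ROMAN = re.compile(r"M{0,3}(CM|CD|D?C{0,3})(XC|XL|L?X{0,3})(IX|IV|V?I{0,3})")

def looks_like_roman(token: str) -> bool:
    """True if a run of capital letters would be treated as a Roman numeral."""
    return bool(token) and ROMAN.fullmatch(token) is not None

# Genuine numerals are detected as intended ...
print([t for t in ("VII", "XIV", "MCMXC") if looks_like_roman(t)])
# ... but so are ordinary words and abbreviations written in capitals.
print([t for t in ("MIX", "LIV", "CD", "DC", "MM") if looks_like_roman(t)])
```

All tokens in the second list are perfectly valid Roman numerals, so no pattern, however elaborate, can decide from the letters alone whether MIX means the English word or the number 1009.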

From Unicode’s point of view, the huge semantic difference between capital Latin letters and Roman numerals should already have sufficed for a separate encoding of Roman numerals.
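This semantic difference is visible in the character properties themselves: the dedicated code points carry a numeric value and merely decompose to the letter sequence under compatibility normalization. A minimal Python sketch (again just an illustration using the standard unicodedata module):

```python
import unicodedata

# The dedicated code point knows its value; the look-alike letters do not.
print(unicodedata.numeric("\u2166"))            # Ⅶ (ROMAN NUMERAL SEVEN) -> 7.0
print(unicodedata.normalize("NFKC", "\u2166"))  # compatibility-decomposes to "VII"

try:
    unicodedata.numeric("V")                    # plain LATIN CAPITAL LETTER V
except ValueError:
    print("the Latin letter has no numeric value")
```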

Attribution
Source : Link , Question Author : Wrzlprmft , Answer Author : Wrzlprmft
