# Why Do Computers Use the Binary Number System (0,1)?

Why don’t they use the ternary number system (0, 1, 2), or some other number system, instead?

Since we’re in Computer Science, I’ll answer this way: they don’t.

What do we mean by a “computer”? There are many definitions, but in theoretical computer science, the most common is the Turing machine.

A Turing machine is defined by several components: a state set, a transition table, a set of halting states, and, importantly for our discussion, an alphabet. The alphabet is the set of symbols the machine can read as input and write to its tape. (You could have different input and tape alphabets, but let’s not worry about that for now.)

So, I can make a Turing machine with input alphabet $\{0,1\}$, or $\{a,b\}$, or $\{0,1,2\}$,
or $\{\uparrow,\downarrow\}$. It doesn’t matter. The fact is, I can use any alphabet I choose to encode data.

So, I can say that $0001001$ is 9, or I can say that $\uparrow \uparrow \uparrow \downarrow \uparrow \uparrow \downarrow$ is 9 (mapping $\uparrow$ to 0 and $\downarrow$ to 1). It doesn’t matter, since they’re just symbols we can distinguish.
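To make this concrete, here’s a minimal sketch in Python (the function name `decode` is my own, just for illustration): a decoder that reads a string over *any* two-symbol alphabet as a binary number. The symbols themselves carry no meaning; only the mapping does.

```python
def decode(symbols, zero, one):
    """Interpret a string over a two-symbol alphabet as a binary number."""
    # Translate each symbol to the digit it stands for, then read base 2.
    bits = "".join("0" if s == zero else "1" for s in symbols)
    return int(bits, 2)

print(decode("0001001", "0", "1"))  # 9
print(decode("↑↑↑↓↑↑↓", "↑", "↓"))  # also 9
```

Swapping the alphabet changes nothing about what is computed, only what marks appear on the tape.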

The trick is that binary is enough. Any sequence of bits can be interpreted as a number, so you can convert from binary to any other system and back.
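As a sketch of that conversion (helper names `to_base` and `from_base` are mine, not standard library functions): any non-negative integer round-trips between bases without losing information.

```python
def to_base(n, b):
    """Represent a non-negative integer as base-b digits, most significant first."""
    if n == 0:
        return [0]
    digits = []
    while n:
        n, r = divmod(n, b)
        digits.append(r)
    return digits[::-1]

def from_base(digits, b):
    """Reassemble a digit list in base b into an integer."""
    n = 0
    for d in digits:
        n = n * b + d
    return n

# Round trip: binary string -> integer -> ternary digits -> same integer.
n = int("1001", 2)           # 9
ternary = to_base(n, 3)      # [1, 0, 0]
assert from_base(ternary, 3) == n
```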

But, it turns out unary is enough too. You can encode 9 as 111111111. This isn’t particularly efficient, but it has the same computational power.
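A unary encoding is almost embarrassingly simple to sketch (again, the function names here are just for illustration). The inefficiency is easy to see: unary needs $n$ symbols where binary needs about $\log_2 n$.

```python
def to_unary(n):
    """Encode a non-negative integer as n copies of the symbol '1'."""
    return "1" * n

def from_unary(s):
    """Decode a unary string by counting its symbols."""
    return len(s)

assert to_unary(9) == "111111111"
assert from_unary("111111111") == 9
# 9 takes nine unary symbols but only four binary digits ("1001").
```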

Things get even crazier when you look into alternate models of computation, like the Lambda calculus. Here, you can view numbers as functions. In fact, you can view everything as functions. Things are encoded not as bits, 0s and 1s, but as closed mathematical functions with no mutable state. See the Church numerals for how you can do numbers this way.
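Python’s lambdas are enough to sketch Church numerals: the numeral for $n$ is the function that applies its argument $f$ exactly $n$ times. (The names `zero`, `succ`, `add`, and `to_int` are conventional in presentations of the encoding, but this particular sketch is mine.)

```python
# A Church numeral n takes a function f and returns "apply f n times".
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))
add  = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

def to_int(n):
    """Recover a Python int by applying 'increment' to 0 n times."""
    return n(lambda x: x + 1)(0)

three = succ(succ(succ(zero)))
nine = add(three)(add(three)(three))
assert to_int(nine) == 9
```

No bits anywhere: the number 9 is represented purely as function application, which is exactly the point.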

The point is that using 0s and 1s is a hardware-specific choice, and an arbitrary one at that. Which encoding you use isn’t particularly relevant to computer science, outside of a few subfields like operating systems or networking.