Computer Number System
Binary - Or base 2. There are only two digits in binary, 0 and 1. Because computers use sequences of
switches that can be on or off (each one called a bit), base 2 suits them very well. Math in base 2 is
pathetically simple, but incredibly time-consuming.
Octal - Or base 8. Uses the digits 0 to 7. Each octal digit corresponds to exactly three bits, and there are
eight bits in a byte, which is used very often in the computer field. (A bit is great, but it's too small to hold
any useful data, thus the byte is used.) Math in octal is more complicated than decimal.
Decimal - Or base 10. Uses the digits 0-9. I'm sure you're familiar with this system. Computers only
display numbers in decimal; they actually do all their work in binary. Math is quite simple with this number
system, although some may argue.
Hexadecimal - Or base 16. Uses the digits 0-F. Yes, I said F. Because there are 16 values per
placeholder, six new digits had to be created. Those digits are A, B, C, D, E, and F (original, isn't it?).
"A" has a value of 10, "B" is 11, and so on. We use this system in the computer field as a means of viewing
lots of data much faster. As you can see in the chart below (abridged here), the hexadecimal column is the
smallest because each placeholder can handle more data. Very useful when looking at raw computer data.
Math in hexadecimal is not very simple compared to decimal.

decimal   binary     octal   hex
0         0          0       0
1         1          1       1
2         10         2       2
...       ...        ...     ...
9         1001       11      9
10        1010       12      A
15        1111       17      F
16        10000      20      10
...       ...        ...     ...
255       11111111   377     FF
You may wonder why I only counted up to 256 values (don't forget 0). Well, 256 is a very common number
in the computer field because it is a natural power of 2 (the base computers use). 256 is 2 to the 8th power
(8 bits in a byte). Got it?
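If you want to reproduce the whole chart yourself, a few lines of Python will print all 256 rows (this sketch
is mine, using Python's built-in number formatting):

for n in range(256):
    # decimal, binary, octal, and hexadecimal forms of the same value
    print(f'{n:>3}  {n:>8b}  {n:>3o}  {n:>2X}')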
The term computer numbering formats refers to the schemes implemented in digital computer and
calculator hardware and software to represent numbers. A common mistake made by non-specialist
computer users is a certain misplaced faith in the infallibility of numerical computations.
For example, if one multiplies ⅓ by 3, one might perhaps expect to get a result of exactly 1, which is the
correct answer when applying an exact rational number or algebraic model. In practice, however, the result
on a digital computer or calculator may prove to be something such as 0.9999999999999999 (as one might
find when doing the calculation on paper) or, in certain cases, perhaps 0.99999999923475.
The latter result seems to indicate a bug, but it is actually an unavoidable consequence of the use of a binary
floating-point approximation. Decimal floating-point, computer algebra systems, and certain bignum
systems would give either the answer of 1 or 0.9999999999999999...
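You can see this effect directly in Python, which uses 64-bit binary floating-point for its float type:

print(sum([0.1] * 10))  # prints 0.9999999999999999, not 1.0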
Bits
The concept of a bit can be understood as a value of either 1 or 0, on or off, yes or no, or encoded by a
switch or toggle of some kind. A single bit must represent one of two states:

0 or 1
While a single bit, on its own, is able to represent only two values, a string of two bits together is able to
represent four unique values:

00, 01, 10, 11

A series of three binary digits can likewise designate twice as many distinct values as the two-bit string:

000, 001, 010, 011, 100, 101, 110, 111
As the number of bits within a sequence goes up, the number of possible 0 and 1 combinations increases
exponentially. The examples above show that a single bit allows only two value-combinations, while two
bits combined can make four separate values; three bits yield eight possibilities, and the amount of possible
combinations doubles with each binary digit added:
1 bit  = 2 values
2 bits = 4 values
3 bits = 8 values
...
b bits = 2^b values (that is, 2^b = N)
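A small Python sketch makes the doubling visible by enumerating the combinations:

from itertools import product

for b in (1, 2, 3, 4):
    # every possible string of b binary digits
    combos = [''.join(bits) for bits in product('01', repeat=b)]
    print(b, 'bits:', len(combos), 'values')  # len(combos) is always 2**b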
Bytes
A byte is a sequence of eight bits or binary digits that can represent one of 256 possible values. Modern
computers process information in 8-bit units, or some other multiple thereof (such as 16, 32, or 64 bits) at a
time. A group of 8 bits is now widely used as a fundamental unit, and has been given the name of octet. A
computer's smallest addressable memory unit (a byte) is typically an octet, so the word byte is now generally
understood to mean an octet.
Nibbles
A unit of four bits, or half an octet, is often called a nibble (or nybble). It can encode 16 different values,
such as the numbers 0 to 15. Any arbitrary sequence of bits could be used in principle, but in practice the
most common scheme is:

0000 = 0    1000 = 8
0001 = 1    1001 = 9
0010 = 2    1010 = 10
0011 = 3    1011 = 11
0100 = 4    1100 = 12
0101 = 5    1101 = 13
0110 = 6    1110 = 14
0111 = 7    1111 = 15
This order (rather than Gray code) is used because it is a positional notation, like the decimal notation that
humans are more used to. For example, given the decimal number:
7531
Each digit in the number represents a value from 0 to 9 (hence ten different possible values) which is why
this is called a decimal or base-10 number. Each digit also has a weight of a power of ten associated with its
position.
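Written out positionally, that is:

7531
= (7 × 10^3) + (5 × 10^2) + (3 × 10^1) + (1 × 10^0)
= 7000 + 500 + 30 + 1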
Similarly, in the binary number encoding scheme mentioned above, the (decimal) value 13 is encoded as:
1101
Each bit can only have a value of 1 or 0 (hence only two possible values) so this is a binary, or base-2
number. Accordingly, the positional weighting is as follows:
1101
= (1 × 2^3) + (1 × 2^2) + (0 × 2^1) + (1 × 2^0)
= (1 × 8) + (1 × 4) + (0 × 2) + (1 × 1)
= 13 decimal
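A quick Python check of the same expansion:

print(int('1101', 2))  # 13: int() converts from any base
print((1 * 2**3) + (1 * 2**2) + (0 * 2**1) + (1 * 2**0))  # 13 again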
Notice the values of powers of 2 used here: 1, 2, 4, 8. Experienced computer programmers generally know
the powers of 2 up to the 16th power because they use them often:
2^0 = 1      2^8  = 256
2^1 = 2      2^9  = 512
2^2 = 4      2^10 = 1,024
2^3 = 8      2^11 = 2,048
2^4 = 16     2^12 = 4,096
2^5 = 32     2^13 = 8,192
2^6 = 64     2^14 = 16,384
2^7 = 128    2^15 = 32,768
             2^16 = 65,536
Sometimes, in this context (and unlike the SI International System of Units) the value 2^10 = 1,024 is referred
to as Kilo, or simply K (sometimes referred to as Kibibyte), so any higher powers of 2 are often conveniently
referred to as multiples of that value:

2^11 = 2 K
2^12 = 4 K
Similarly, the value 2^20 = 1,024 × 1,024 = 1,048,576 is referred to as a Meg, or simply M (sometimes
referred to as Mebibyte):

2^21 = 2 M
2^22 = 4 M
and the value 2^30 is referred to as a Gig, or simply G (sometimes referred to as Gibibyte).
However, in December 1998 the International Electrotechnical Commission produced new units for these
power-of-two values, in order to bring prefixes such as kilo- and mega- back to their SI definitions. (See
Binary prefix.)
(There is another subtlety in this discussion. If we use 16 bits, we can have 65,536 different values, but the
values run from 0 to 65,535. Humans start counting at one; machines start counting from zero, since it is
easier to program them that way. This detail often causes confusion.)
The binary scheme just outlined defines a simple way to count with bits, but it has a few restrictions:
• You can only perform simple arithmetic within the bounds of the number of bits that you have. That
is, if you are working with 16 bits at a time, you can't perform arithmetic that gives a result of 65,536
or more.
• There is no way to represent fractions with this scheme. You can only work with non-fractional
(integer) quantities.
• There is no way to represent negative numbers with this scheme. All the numbers are zero or positive
(unsigned).
Despite these limitations, such unsigned integer numbers are very useful in computers for counting things
one-by-one. They are very simple for the computer to manipulate.
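A small sketch of what that first restriction looks like in practice, using a mask to keep results to 16 bits
(the mask trick is mine; the wraparound it shows is exactly what fixed-width hardware does):

MASK = 0xFFFF              # 16 bits of 1s
print((65535 + 1) & MASK)  # 0: the true result, 65,536, no longer fits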
Why binary?
• The logic that computers use is Boolean logic which is a two-valued logic, and thus the two states of
a binary system can relate directly to the two states of a Boolean logical system.
• It was easier to make hardware which can distinguish between two values than multiple values.
Imagine a light switch compared to a clock.
• Binary is slightly more efficient than decimal. Many early computers used decimal (usually in
binary-coded decimal representation). This seemingly-natural approach was eventually largely
abandoned due to the increase in processing circuitry (as compared to binary), which reduced
reliability.
• Other bases have been tried. A few experimental computers have been built with ternary (base 3)
representation, as it was thought it might be more efficient than binary.[1]
Octal and hex are a convenient way to represent binary numbers, as used by computers. People who work
close to the hardware often need to write out binary quantities, but in practice writing out a binary number
such as 1001001101010001 is tedious and prone to errors. Therefore, binary quantities are written in a base-8
("octal") or, much more commonly, a base-16 ("hexadecimal" or "hex") number format.
In the decimal system, there are 10 digits (0 through 9) which combine to form numbers as follows:

0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 ...

In an octal system, there are only 8 digits (0 through 7), which count like this:

0 1 2 3 4 5 6 7 10 11 12 13 14 15 16 17 20 21 22 23 24 25 26 ...

That is, an octal "10" is the same as a decimal "8", an octal "20" is a decimal "16", and so on.
In a hex system, there are 16 digits (0 through 9 followed, by convention, with A through F):
0 1 2 3 4 5 6 7 8 9 A B C D E F 10 11 12 13 14 15 16 17 18 19 1A 1B...
That is, a hex "10" is the same as a decimal "16" and a hex "20" is the same as a decimal "32".
Each of these number systems is a positional system, but while decimal weights are powers of 10, the octal
weights are powers of 8 and the hex weights are powers of 16. To convert from hex or octal to decimal, for
each digit one multiplies the value of the digit by the weight of its position and then adds the results. For
example:
octal 756
= (7 × 8^2) + (5 × 8^1) + (6 × 8^0)
= (7 × 64) + (5 × 8) + (6 × 1)
= 448 + 40 + 6 = decimal 494
hex 3b2
= (3 × 16^2) + (11 × 16^1) + (2 × 16^0)
= (3 × 256) + (11 × 16) + (2 × 1)
= 768 + 176 + 2 = decimal 946
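Python's built-in int() accepts a base, which makes a handy check of the hand conversions above:

print(int('756', 8))   # 494
print(int('3b2', 16))  # 946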
Because 8 is 2^3, an octal digit has a perfect correspondence to a 3-bit binary value:
000 = octal 0
001 = octal 1
010 = octal 2
011 = octal 3
100 = octal 4
101 = octal 5
110 = octal 6
111 = octal 7
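In other words, you can convert binary to octal just by reading off 3-bit groups. A quick Python sketch:

n = 0b100101110        # the groups are 100 101 110
print(format(n, 'o'))  # 456: one octal digit per 3-bit group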
Conversion of numbers from hex or octal to decimal can also be done by using the following pattern, where
the first digit in the number is multiplied by the number's base and added to the second digit. To convert
numbers with three digits or more, the pattern is just continued.
hex A1
d1 * base + d2 =
10 * 16 + 1 = decimal 161
hex 129
d1=1
d2=2
d3=9
base=16
(d1 * base + d2) * base + d3 =
( 1 * 16 + 2) * 16 + 9 = decimal 297
The same method can be applied to conversion of binary and octal numbers:

binary 1011
d1=1
d2=0
d3=1
d4=1
base=2

((d1 * base + d2) * base + d3) * base + d4 =
((1 * 2 + 0) * 2 + 1) * 2 + 1 = decimal 11

octal 1232
d1=1
d2=2
d3=3
d4=2
base=8

((1 * 8 + 2) * 8 + 3) * 8 + 2 = decimal 666
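This pattern is known as Horner's method, and it takes only a few lines of Python (to_decimal is just an
illustrative name of my own):

def to_decimal(digits, base):
    # Repeatedly multiply the running value by the base and add the next digit.
    value = 0
    for d in digits:
        value = value * base + d
    return value

print(to_decimal([10, 1], 16))      # hex A1 -> 161
print(to_decimal([1, 2, 9], 16))    # hex 129 -> 297
print(to_decimal([1, 0, 1, 1], 2))  # binary 1011 -> 11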
Binary numbers have no inherent way of representing negative numbers in a computer. In order to create
these "signed integers", a few different systems have been developed. In each, a special bit is set aside as the
"sign bit", which is usually the leftmost (most significant) bit. If the sign bit is 1 the number is negative; if 0,
positive.
Sign-magnitude
Sign-magnitude representation is possibly the simplest way to represent a signed number. The representation
consists of one sign bit and other bits denoting the magnitude, or absolute value, of the number. For
example, using 4 bits:
0101 = +5
1101 = -5
Ones' complement
In ones' complement, the inverse of a number is formed by complementing each bit — that is, performing a
bitwise NOT operation. For example:
0101 = +5
1010 = -5
A side effect of both this and the previous system (and one of the reasons these systems are not often used
for computing) is that there are two representations for zero. In ones' complement:
0000 = +0
1111 = -0
In sign-magnitude:
0000 = +0
1000 = -0
Two's complement
Two's complement is the most widely used system in modern computing. To form the two's complement,
take the bitwise NOT of the number and add 1. For example:
0101 = +5
1011 = -5
Thus, the full set of 4-bit two's complement values is:

0000 =  0    1000 = -8
0001 = +1    1001 = -7
0010 = +2    1010 = -6
0011 = +3    1011 = -5
0100 = +4    1100 = -4
0101 = +5    1101 = -3
0110 = +6    1110 = -2
0111 = +7    1111 = -1
Using this system, 16 bits will encode numbers from −32,768 to 32,767, while 32 bits will encode
−2,147,483,648 to 2,147,483,647.
The great advantage of the two's complement system is that most operations are not dependent on the sign of
the operands and, furthermore, are identical to operations on unsigned binary integers. For example, adding
+5 and -5:

  0101  (+5)
+ 1011  (-5)
------
 10000

However, since we have taken the numbers as 4 bits long, the leading 1 is discarded, and we have the
expected result of 0.
The fact that most operations work no matter the sign of the operands can be explained through the
congruence of numbers modulo 2^n; e.g. 15 ≡ -1 (mod 16). Computers generally use a fixed number of bits
for binary numbers, and thus such a system is ideal. Essentially the only difference between two's complement
numbers and unsigned numbers is how they are displayed and compared.
One quirk of two's complement is that the lowest encodable number (e.g. -32768 for 16-bit numbers)
appears to be its own negative. However, this rarely causes problems.
As a number composed entirely of 1s (such as 11111111) equates to -1 in two's complement notation, many
programming languages use -1 for true and 0 for false.
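Python's integers are arbitrary-precision, but you can mimic a fixed-width two's complement encoding by
masking (a small sketch; encode is my own helper name):

BITS = 4
MASK = (1 << BITS) - 1  # 0b1111

def encode(value):
    # Two's complement encoding of a signed value in BITS bits.
    return value & MASK

print(format(encode(-5), '04b'))                        # 1011
print(format((encode(5) + encode(-5)) & MASK, '04b'))   # 0000: +5 + -5 wraps to 0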
REPRESENTING FRACTIONS IN BINARY
Fixed-point numbers
Fixed-point formats are often used in business calculations (such as with spreadsheets or COBOL), where
floating-point with insufficient precision is unacceptable when dealing with money. It is helpful to study
fixed-point formats to see how fractions can be stored in binary.
A number of bits sufficient for the precision and range required must be chosen to store the fractional and
integer parts of a number. For example, using a 32-bit format, 16 bits might be used for the integer and 16
for the fraction.
The fractional bits continue the pattern set by the integer bits: if the eight's bit is followed by the four's bit,
then the two's bit, then the one's bit, then of course the next bit is the half's bit, then the quarter's bit, then the
⅛'s bit, et cetera.
Examples:

 0.1    (binary) = 0.5   (decimal)
 0.111  (binary) = 0.875 (decimal)
101.11  (binary) = 5.75  (decimal)

However, using this form of encoding means that some numbers cannot be represented exactly in binary. For
example, for the fraction 1/5 (in decimal, this is 0.2), the closest one can get with 8 fractional bits is:

0.00110011 (binary) = 0.19921875 (decimal)
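A minimal Python sketch of that approximation, assuming a format with 8 fractional bits (the names here
are illustrative):

SCALE = 2 ** 8                 # 8 fractional bits
stored = round(0.2 * SCALE)    # the integer actually stored: 51 = 0b00110011
print(stored, stored / SCALE)  # 51 0.19921875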
And even with more digits, an exact representation is impossible. Consider the number ⅓. If you were to
write the number out as a decimal (0.333333...) it would continue indefinitely. If you were to stop at any
point, the number written would not exactly represent the number ⅓.
The point is: some fractions cannot be expressed exactly in binary notation... not unless you use a special
trick. The trick is to store a fraction as two numbers, one for the numerator and one for the denominator,
and then use rational arithmetic to add, subtract, multiply, and divide them. However, rational arithmetic will
not let you do higher math (such as square roots) with fractions, nor will it help you if the lowest common
denominator of two fractions is too big a number to handle. This is why there are advantages to using the
fixed-point notation for fractional numbers.
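Python's standard fractions module implements exactly this numerator/denominator trick:

from fractions import Fraction

print(Fraction(1, 3) * 3)                  # 1, exactly
print(Fraction(1, 10) + Fraction(2, 10))   # 3/10, with no rounding error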
Floating-point numbers
While both unsigned and signed integers are used in digital systems, even a 32-bit integer is not enough to
cover the full range of numbers a calculator can handle, and that's not even including fractions. To
approximate the greater range and precision of real numbers, we have to abandon signed integers and fixed-
point numbers and go to a "floating-point" format.
In the decimal system, we are familiar with floating-point numbers of the form:
1.1030402E5

which means "1.1030402 times 1 followed by 5 zeroes". We have a certain numeric value (1.1030402)
known as a "significand", multiplied by a power of 10 (E5, meaning 10^5 or 100,000), known as an
"exponent". If we have a negative exponent, that means the number is multiplied by a 1 that many places to
the right of the decimal point. For example:

1.1030402E-5 = 1.1030402 × 10^-5 = 0.000011030402
The advantage of this scheme is that by using the exponent we can get a much wider range of numbers, even
if the number of digits in the significand, or the "numeric precision", is much smaller than the range. Similar
binary floating-point formats can be defined for computers. There are a number of such schemes; the most
popular has been defined by the IEEE (Institute of Electrical and Electronics Engineers). The IEEE 754
standard specification defines a 64-bit floating-point format with:
• an 11-bit binary exponent, using "excess-1023" format. Excess-1023 means the exponent appears as
an unsigned binary integer from 0 to 2047, and you have to subtract 1023 from it to get the actual
signed value
• a 52-bit significand, also an unsigned binary number, defining a fractional value with a leading
implied "1"
• a sign bit, giving the sign of the number.
Let's see what this format looks like by showing how such a number would be stored in 8 bytes of memory:
byte 0: S x10 x9 x8 x7 x6 x5 x4
byte 1: x3 x2 x1 x0 m51 m50 m49 m48
byte 2: m47 m46 m45 m44 m43 m42 m41 m40
byte 3: m39 m38 m37 m36 m35 m34 m33 m32
byte 4: m31 m30 m29 m28 m27 m26 m25 m24
byte 5: m23 m22 m21 m20 m19 m18 m17 m16
byte 6: m15 m14 m13 m12 m11 m10 m9 m8
byte 7: m7 m6 m5 m4 m3 m2 m1 m0
where "S" denotes the sign bit, "x" denotes an exponent bit, and "m" denotes a significand bit. Once the bits
here have been extracted, they are converted with the computation:
This scheme provides numbers valid out to about 15 decimal digits, with the following range of numbers:
maximum minimum
positive 1.797693134862231E+308 4.940656458412465E-324
negative -4.940656458412465E-324 -1.797693134862231E+308
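You can pull these fields apart in Python with the standard struct module (a small sketch):

import struct

bits = int.from_bytes(struct.pack('>d', -5.75), 'big')
sign = bits >> 63                  # S
exponent = (bits >> 52) & 0x7FF    # x, in excess-1023 form
mantissa = bits & ((1 << 52) - 1)  # m, the 52 fraction bits

value = (-1) ** sign * (1 + mantissa / 2 ** 52) * 2 ** (exponent - 1023)
print(value)  # -5.75, reconstructed from its own bits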
The spec also defines several special values that are not defined numbers, and are known as NaNs, for "Not
A Number". These are used by programs to designate invalid operations and the like. You will rarely
encounter them and NaNs will not be discussed further here. Some programs also use 32-bit floating-point
numbers. The most common scheme uses a 23-bit significand with a sign bit, plus an 8-bit exponent in
"excess-127" format, giving seven valid decimal digits.
byte 0: S x7 x6 x5 x4 x3 x2 x1
byte 1: x0 m22 m21 m20 m19 m18 m17 m16
byte 2: m15 m14 m13 m12 m11 m10 m9 m8
byte 3: m7 m6 m5 m4 m3 m2 m1 m0
maximum minimum
positive 3.402823E+38 2.802597E-45
negative -2.802597E-45 -3.402823E+38
Such floating-point numbers are known as "reals" or "floats" in general, but with a number of inconsistent
variations, depending on context:
A 32-bit float value is sometimes called a "real32" or a "single", meaning "single-precision floating-point
value".
A 64-bit float is sometimes called a "real64" or a "double", meaning "double-precision floating-point value".
The term "real" without any elaboration generally means a 64-bit value, while the term "float" similarly
generally means a 32-bit value.
Once again, remember that bits are bits. If you have eight bytes stored in computer memory, it might be a
64-bit real, two 32-bit reals, or four signed or unsigned integers, or some other kind of data that fits into
eight bytes.
The only difference is how the computer interprets them. If the computer stored four unsigned integers and
then read them back from memory as a 64-bit real, it almost always would be a perfectly valid real number,
though it would be junk data.
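The struct module can demonstrate this too: pack four 16-bit unsigned integers into 8 bytes, then read the
same bytes back as a double:

import struct

raw = struct.pack('>4H', 1, 2, 3, 4)  # four unsigned 16-bit integers
value, = struct.unpack('>d', raw)     # the same 8 bytes, read as a real
print(value)                          # a valid double, but junk data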
So now our computer can handle positive and negative numbers with fractional parts. However, even with
floating-point numbers you run into some of the same problems that you did with integers:
• As with integers, you only have a finite range of values to deal with. Granted, it is a much bigger
range of values than even a 32-bit integer, but if you keep multiplying numbers you'll eventually get
one bigger than the real value can hold and have a numeric overflow.
If you keep dividing you'll eventually get one with a negative exponent too big for the real value to hold and
have a numeric underflow. Remember that a negative exponent gives the number of places to the right of the
decimal point and means a really small number. The maximum real value is sometimes called "machine
infinity", since that's the biggest value the computer can wrap its little silicon brain around.
• A related problem is that you have only limited "precision" as well. That is, you can only represent
about 15 decimal digits with a 64-bit real. If the result of a multiply or a divide has more digits than
that, they're just dropped and the computer doesn't inform you of an error (see the sketch after this
list).
This means that if you add a very small number to a very large one, the result is just the large one. The small
number was too small to even show up in 15 or 16 digits of resolution, and the computer effectively discards
it. If you are performing computations and you start getting really insane answers from things that normally
work, you may need to check the range of your data. It is possible to "scale" the values to get more accurate
results. It also means that if you do floating-point computations, there's likely to be a small error in the result
since some lower digits have been dropped. This effect is unnoticeable in most cases, but if you do some
math analysis that requires lots of computations, the errors tend to build up and can throw off the results.
People who use computers for doing serious math understand these errors very well, and have
methods for minimizing the effects of such errors, as well as for estimating how big the errors are. By the
way, this "precision" problem is not the same as the "range" problem at the top of this list. The range issue
deals with the maximum size of the exponent, while the precision issue deals with the number of digits that
can fit into the significand.
• Another more obscure error that creeps in with floating-point numbers is the fact that the significand
is expressed as a binary fraction that doesn't necessarily perfectly match a decimal fraction.
That is, if you want to do a computation on a decimal fraction that is a neat sum of reciprocal powers of two,
such as 0.75, the binary number that represents this fraction will be 0.11, or ½ + ¼, and all will be fine.
Unfortunately, in many cases you can't get a sum of these "reciprocal powers of 2" that precisely matches a
specific decimal fraction, and the results of computations will be very slightly off, way down in the very
small parts of a fraction. For example, the decimal fraction "0.1" is equivalent to an infinitely repeating
binary fraction: 0.000110011 ...
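Here is a small Python sketch of all three issues from this list (overflow, lost precision, and inexact binary
fractions):

print(1e308 * 10)        # overflow: inf ("machine infinity")
print(1e16 + 1 == 1e16)  # True: the 1 is too small to show up in the result
print(0.1 + 0.2)         # 0.30000000000000004: binary can't hold 0.1 exactly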
Low-level programmers have to worry about unsigned and signed, fixed-point and floating-point numbers.
They have to write quite different code, with different opcodes and operands, to add two floating-point
numbers compared to the code to add two integers.
However, high-level programming languages such as LISP and Python offer an abstract number that may be
an expanded type such as rational, bignum, or complex. Programmers in LISP or Python (among others)
have some assurance that their program code will Do The Right Thing with mathematical operations. Due to
operator overloading, mathematical operations on any number (whether signed, unsigned, rational, floating-
point, fixed-point, integral, or complex) are written exactly the same way. Other languages, such as REXX
and Java, provide decimal floating-point, which avoids many "unexpected" results. One drawback in Java,
though, is its lack of native support for unsigned integer types.
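To return to the operator-overloading point: in Python, for example, the same operators work across numeric
types without any change to the code:

from fractions import Fraction

print(2 ** 100)                         # bignum: an exact 31-digit integer
print(Fraction(1, 2) + Fraction(1, 3))  # rational: 5/6, exactly
print((1 + 2j) * (1 - 2j))              # complex: (5+0j)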