4 Floating Point


Floating point representation

(Unsigned) Fixed-point representation


The numbers are stored with a fixed number of bits for the integer part
and a fixed number of bits for the fractional part.

Suppose we have 8 bits to store a real number, where 5 bits store the
integer part and 3 bits store the fractional part:

Example: (10111.011)_2

Place values: 2^4  2^3  2^2  2^1  2^0 . 2^-1  2^-2  2^-3

Smallest number: (00000.001)_2 = 0.125

Largest number: (11111.111)_2 = 31.875
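As a quick check of these values, here is a minimal Python sketch (not from the original slides; the helper name fixed_point_value is made up for illustration) that interprets an 8-bit string as an unsigned fixed-point number with 5 integer bits and 3 fractional bits:

    def fixed_point_value(bits):
        # Interpret an 8-character bit string as (5 integer bits).(3 fractional bits)
        integer, fraction = bits[:5], bits[5:]
        value = float(int(integer, 2))               # integer part
        for i, b in enumerate(fraction, start=1):
            value += int(b) * 2.0 ** -i              # fractional part
        return value

    print(fixed_point_value("00000001"))   # 0.125   (smallest nonzero number)
    print(fixed_point_value("11111111"))   # 31.875  (largest number)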


(Unsigned) Fixed-point representation
Suppose we have 64 bits to store a real number, where 32 bits store the
integer part and 32 bits store the fractional part:
(a_31 … a_2 a_1 a_0 . b_1 b_2 b_3 … b_32)_2 = Σ_{i=0}^{31} a_i 2^i + Σ_{i=1}^{32} b_i 2^-i

= a_31 × 2^31 + a_30 × 2^30 + ⋯ + a_0 × 2^0 + b_1 × 2^-1 + b_2 × 2^-2 + ⋯ + b_32 × 2^-32

Smallest number:
a_i = 0 ∀i and b_1, b_2, …, b_31 = 0 and b_32 = 1 → 2^-32 ≈ 10^-10

Largest number:
a_i = 1 ∀i and b_i = 1 ∀i → 2^31 + ⋯ + 2^0 + 2^-1 + ⋯ + 2^-32 ≈ 10^9
(Unsigned) Fixed-point representation
Suppose we have 64 bits to store a real number, where 32 bits store the
integer part and 32 bits store the fractional part:

(a_31 … a_2 a_1 a_0 . b_1 b_2 b_3 … b_32)_2 = Σ_{i=0}^{31} a_i 2^i + Σ_{i=1}^{32} b_i 2^-i

Smallest number → ≈ 10^-10

Largest number → ≈ 10^9

[Number line from 0 to ∞ marking the smallest and largest representable values]
(Unsigned) Fixed-point representation
Range: difference between the largest and smallest numbers possible.
More bits for the integer part ⟶ increase range

Precision: smallest possible difference between any two numbers


More bits for the fractional part ⟶ increase precision

(a_2 a_1 a_0 . b_1 b_2 b_3)_2   OR   (a_1 a_0 . b_1 b_2 b_3 b_4)_2

Wherever we put the binary point, there is a trade-off between the
amount of range and precision. It can be hard to decide how much
you need of each!

Fix: Let the binary point “float”


Floating-point numbers
A floating-point number can represent numbers of different order of
magnitude (very large and very small) with the same number of fixed
digits.

In general, in the binary system, a floating number can be expressed as

x = ±q × 2^m
𝑞 is the significand, normally a fractional value in the range [1.0,2.0)

𝑚 is the exponent
Floating-point numbers
Numerical Form:

x = ±q × 2^m = ±(b_0 . b_1 b_2 b_3 … b_n)_2 × 2^m

b_1 b_2 … b_n : fractional part of the significand (n digits)

b_i ∈ {0,1}
Exponent range: m ∈ [L, U]
Precision: p = n + 1
“Floating” the binary point
(1011.1)_2 = 1×8 + 0×4 + 1×2 + 1×1 + 1×(1/2) = (11.5)_10

(10111)_2 = 1×16 + 0×8 + 1×4 + 1×2 + 1×1 = (23)_10
          = (1011.1)_2 × 2^1 = (23)_10

(101.11)_2 = 1×4 + 0×2 + 1×1 + 1×(1/2) + 1×(1/4) = (5.75)_10
           = (1011.1)_2 × 2^-1 = (5.75)_10

Move “binary point” to the left by one bit position: Divide the decimal
number by 2
Move “binary point” to the right by one bit position: Multiply the decimal
number by 2
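In Python, this scaling by a power of two is exactly what the standard-library function math.ldexp(x, n) = x × 2^n does; a quick check of the two cases above (illustration only, not from the slides):

    import math

    x = 0b10111 / 2             # (1011.1)_2 = 11.5
    print(math.ldexp(x, 1))     # binary point moved right by one bit: 23.0
    print(math.ldexp(x, -1))    # binary point moved left by one bit: 5.75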
Converting floating points
Convert (39.6875)_10 = (100111.1011)_2 into floating point
representation

(39.6875)_10 = (100111.1011)_2 = (1.001111011)_2 × 2^5
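A rough way to check this normalization in Python is the standard-library function math.frexp, which returns a significand in [0.5, 1.0); shifting it by one factor of 2 gives the 1.f convention used in these slides (sketch for illustration only):

    import math

    x = 39.6875
    q, m = math.frexp(x)      # x = q * 2**m with 0.5 <= q < 1.0
    print(q, m)               # 0.6201171875 6
    print(q * 2, m - 1)       # 1.240234375 5   ->  x = (1.001111011)_2 × 2^5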
Normalized floating-point numbers
Normalized floating point numbers are expressed as

x = ±(1. b_1 b_2 b_3 … b_n)_2 × 2^m = ±(1.f)_2 × 2^m


where f is the fractional part of the significand, m is the exponent, and
b_i ∈ {0,1}.

Hidden bit representation:

The first bit to the left of the binary point, b_0 = 1, does not need to be
stored, since its value is fixed.
This representation "adds" 1 bit of precision (we will show some exceptions
later, including the representation of the number zero).
Iclicker question
Determine the normalized floating point representation
(1.f)_2 × 2^m of the decimal number x = 47.125 (f in binary
representation and m in decimal)

A) (1.01110001)_2 × 2^5
B) (1.01110001)_2 × 2^4
C) (1.01111001)_2 × 2^5
D) (1.01111001)_2 × 2^4
Normalized floating-point numbers
x = ±q × 2^m = ±(1. b_1 b_2 b_3 … b_n)_2 × 2^m = ±(1.f)_2 × 2^m

• Exponent range: m ∈ [L, U]

• Precision: p = 𝑛 + 1

• Smallest positive normalized FP number:

UFL = 2^L

• Largest positive normalized FP number:

OFL = 2^(U+1) (1 − 2^-p)


Normalized floating point number scale

[Number line from −∞ to +∞, centered at 0, showing the normalized floating point numbers]
Floating-point numbers: Simple example
A "toy" number system can be represented as x = ±(1. b_1 b_2)_2 × 2^m
for m ∈ [−4,4] and b_i ∈ {0,1}.
(1.00)_2 × 2^0 = 1.0      (1.00)_2 × 2^1 = 2.0      (1.00)_2 × 2^2 = 4.0
(1.01)_2 × 2^0 = 1.25     (1.01)_2 × 2^1 = 2.5      (1.01)_2 × 2^2 = 5.0
(1.10)_2 × 2^0 = 1.5      (1.10)_2 × 2^1 = 3.0      (1.10)_2 × 2^2 = 6.0
(1.11)_2 × 2^0 = 1.75     (1.11)_2 × 2^1 = 3.5      (1.11)_2 × 2^2 = 7.0

(1.00)_2 × 2^3 = 8.0      (1.00)_2 × 2^4 = 16.0     (1.00)_2 × 2^-1 = 0.5
(1.01)_2 × 2^3 = 10.0     (1.01)_2 × 2^4 = 20.0     (1.01)_2 × 2^-1 = 0.625
(1.10)_2 × 2^3 = 12.0     (1.10)_2 × 2^4 = 24.0     (1.10)_2 × 2^-1 = 0.75
(1.11)_2 × 2^3 = 14.0     (1.11)_2 × 2^4 = 28.0     (1.11)_2 × 2^-1 = 0.875

(1.00)_2 × 2^-2 = 0.25    (1.00)_2 × 2^-3 = 0.125   (1.00)_2 × 2^-4 = 0.0625
(1.01)_2 × 2^-2 = 0.3125  (1.01)_2 × 2^-3 = 0.15625 (1.01)_2 × 2^-4 = 0.078125
(1.10)_2 × 2^-2 = 0.375   (1.10)_2 × 2^-3 = 0.1875  (1.10)_2 × 2^-4 = 0.09375
(1.11)_2 × 2^-2 = 0.4375  (1.11)_2 × 2^-3 = 0.21875 (1.11)_2 × 2^-4 = 0.109375
Same steps are performed to obtain the negative numbers. For simplicity, we
will show only the positive numbers in this example.
x = ±(1. b_1 b_2)_2 × 2^m for m ∈ [−4,4] and b_i ∈ {0,1}

• Smallest normalized positive number:


(1.00)_2 × 2^-4 = 0.0625
• Largest normalized positive number:
(1.11)_2 × 2^4 = 28.0
• Any number 𝑥 closer to zero than 0.0625 would UNDERFLOW to
zero.

• Any number x outside the range −28.0 and +28.0 would
OVERFLOW to infinity.
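The whole positive half of this toy system is small enough to enumerate by brute force; a short Python sketch (not part of the slides) reproduces the table above and the underflow/overflow limits:

    # Enumerate every positive number x = (1.b1 b2)_2 * 2**m with m in [-4, 4]
    values = sorted(
        (1 + b1 / 2 + b2 / 4) * 2.0 ** m
        for m in range(-4, 5)
        for b1 in (0, 1)
        for b2 in (0, 1)
    )
    print(len(values))                  # 36 positive machine numbers
    print(min(values), max(values))     # 0.0625 28.0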
Machine epsilon
• Machine epsilon (𝜖% ): is defined as the distance (gap) between 1 and the
next larger floating point number.

x = ±(1. b_1 b_2)_2 × 2^m for m ∈ [−4,4] and b_i ∈ {0,1}

(1.00)_2 × 2^0 = 1        (1.01)_2 × 2^0 = 1.25

ε_m = (0.01)_2 × 2^0 = 0.25
Machine numbers: how are floating point
numbers stored?
Floating-point number representation
What do we need to store when representing floating point
numbers in a computer?
x = ±(1.f)_2 × 2^m

x = | ± |   m    |     f      |
    sign  exponent  significand

Initially, different floating-point representations were used in computers,
generating inconsistent program behavior across different machines.

Around the 1980s, computer manufacturers started adopting a standard
representation for floating-point numbers: the IEEE (Institute of Electrical and
Electronics Engineers) 754 Standard.
Floating-point number representation
Numerical form:

x = ±(1.f)_2 × 2^m

Representation in memory:

x = | s |   c    |     f      |
    sign  exponent  significand

x = (−1)^s (1.f)_2 × 2^(c − shift)        m = c − shift


Precisions:
(Finite representation: not all numbers can be represented exactly!)
IEEE-754 Single precision (32 bits):

x = |   s   |  c = m + 127  |      f      |
      sign      exponent      significand
     (1-bit)     (8-bit)        (23-bit)

IEEE-754 Double precision (64 bits):

x = |   s   |  c = m + 1023  |      f      |
      sign       exponent      significand
     (1-bit)     (11-bit)        (52-bit)
Special Values:
x = (−1)^s (1.f)_2 × 2^m = | s | c | f |

1) Zero:
x = | s | 000…000 | 0000……0000 |

2) Infinity: +∞ (s = 0) and −∞ (s = 1)

x = | s | 111…111 | 0000……0000 |

3) NaN: (results from operations with undefined results)

x = | s | 111…111 | anything ≠ 00…00 |

Note that the exponent 𝑐 = 000 … 000 and 𝑐 = 111 … 111 are reserved
for these special cases, which limits the exponent range for the other numbers.
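These special values also arise naturally from ordinary double-precision arithmetic in Python (a small illustration, not from the slides):

    big = 1e308 * 10            # exceeds the largest double -> overflows to inf
    print(big)                  # inf
    print(-big)                 # -inf
    print(big - big)            # inf - inf has no defined result -> nan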
IEEE-754 Single Precision (32-bit)
x = (−1)^s (1.f)_2 × 2^m

|   s   |  c = m + 127  |      f      |
  sign      exponent      significand
 (1-bit)     (8-bit)        (23-bit)

𝑠 = 0: positive sign, 𝑠 = 1: negative sign

Reserved exponent number for special cases:


c = (11111111)_2 = 255 and c = (00000000)_2 = 0

Therefore 0 < c < 255


The largest exponent is U = 254 − 127 = 127
The smallest exponent is L = 1 − 127 = −126
IEEE-754 Single Precision (32-bit)
x = (−1)^s (1.f)_2 × 2^m

Example: Represent the number x = −67.125 using the IEEE Single-
Precision Standard

67.125 = (1000011.001)_2 = (1.000011001)_2 × 2^6


c = 6 + 127 = 133 = (10000101)_2

1 10000101 00001100100000 … 000

1-bit 8-bit 23-bit
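One way to check this encoding is to repack the value as a 32-bit float with Python's struct module and print the raw bits (a verification sketch, not part of the slides):

    import struct

    x = -67.125
    bits = struct.unpack('>I', struct.pack('>f', x))[0]   # raw 32-bit pattern
    print(f'{bits:032b}')
    # 11000010100001100100000000000000
    # = 1 | 10000101 | 00001100100000000000000   (sign, exponent, significand)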


IEEE-754 Single Precision (32-bit)
x = (−1)^s (1.f)_2 × 2^m = | s | c | f |        c = m + 127

• Machine epsilon (ε_m): is defined as the distance (gap) between 1
and the next larger floating point number.
(1)_10       = 0 | 01111111 | 00000000000000000000000

(1)_10 + ε_m = 0 | 01111111 | 00000000000000000000001

ε_m = 2^-23 ≈ 1.2 × 10^-7

• Smallest positive normalized FP number:


UFL = 2^L = 2^-126 ≈ 1.2 × 10^-38

• Largest positive normalized FP number:


OFL = 2^(U+1) (1 − 2^-p) = 2^128 (1 − 2^-24) ≈ 3.4 × 10^38
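These single-precision constants can be queried with NumPy's finfo (shown as a quick check; NumPy is not otherwise assumed in these slides):

    import numpy as np

    info = np.finfo(np.float32)
    print(info.eps)     # machine epsilon 2**-23 ≈ 1.19e-07
    print(info.tiny)    # smallest positive normalized number 2**-126 ≈ 1.18e-38
    print(info.max)     # largest finite number ≈ 3.40e+38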
IEEE-754 Double Precision (64-bit)
x = (−1)^s (1.f)_2 × 2^m

|   s   |  c = m + 1023  |      f      |
  sign       exponent      significand
 (1-bit)     (11-bit)        (52-bit)

𝑠 = 0: positive sign, 𝑠 = 1: negative sign

Reserved exponent number for special cases:


c = (11111111111)_2 = 2047 and c = (00000000000)_2 = 0

Therefore 0 < c < 2047


The largest exponent is U = 2046 − 1023 = 1023
The smallest exponent is L = 1 − 1023 = −1022
IEEE-754 Double Precision (64-bit)
x = (−1)^s (1.f)_2 × 2^m = | s | c | f |        c = m + 1023

• Machine epsilon (ε_m): is defined as the distance (gap) between 1
and the next larger floating point number.
(1)_10       = 0 | 0111…111 | 000000000000…000000000

(1)_10 + ε_m = 0 | 0111…111 | 000000000000…000000001

ε_m = 2^-52 ≈ 2.2 × 10^-16

• Smallest positive normalized FP number:


UFL = 2^L = 2^-1022 ≈ 2.2 × 10^-308

• Largest positive normalized FP number:


OFL = 2^(U+1) (1 − 2^-p) = 2^1024 (1 − 2^-53) ≈ 1.8 × 10^308
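For double precision, the same constants are exposed by Python's standard library as sys.float_info (quick check, not from the slides):

    import sys

    print(sys.float_info.epsilon)   # 2**-52 ≈ 2.220446049250313e-16
    print(sys.float_info.min)       # 2**-1022 ≈ 2.2250738585072014e-308
    print(sys.float_info.max)       # ≈ 1.7976931348623157e+308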
Normalized floating point number scale
(double precision)

[Number line from −∞ to +∞, centered at 0, showing the double-precision normalized range]
Subnormal (or denormalized) numbers
• Noticeable gap around zero, present in any floating point system, due to
  normalization
  - The smallest possible significand is 1.00
  - The smallest possible exponent is L
• Relax the requirement of normalization, and allow the leading digit to be zero,
  but only when the exponent is at its minimum (m = L)
• Computations with subnormal numbers are often slow.

Representation in memory (another special case):

x = | s | c = 000…000 | f |

Numerical value:

x = (−1)^s (0.f)_2 × 2^L

Note that this is a special case: the exponent m is not evaluated as
m = c − shift = −shift. Instead, the exponent is set to the lower bound,
m = L.
Subnormal (or denormalized) numbers
IEEE-754 Single precision (32 bits):
c = (00000000)_2 = 0
Exponent set to 𝑚 = −126
Smallest positive subnormal FP number: 2^-23 × 2^-126 ≈ 1.4 × 10^-45

IEEE-754 Double precision (64 bits):


c = (00000000000)_2 = 0
Exponent set to m = −1022
Smallest positive subnormal FP number: 2^-52 × 2^-1022 ≈ 4.9 × 10^-324

Allows for more gradual underflow to zero (however, subnormal numbers
don't have as many accurate digits as normalized numbers).
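Gradual underflow is easy to observe with ordinary Python doubles (illustration only):

    import sys

    smallest_normal = sys.float_info.min    # 2**-1022
    print(smallest_normal / 2)              # 2**-1023: subnormal, still nonzero
    print(2.0 ** -1074)                     # smallest positive subnormal, 5e-324
    print(2.0 ** -1074 / 2)                 # no smaller subnormal exists -> 0.0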
Summary for Single Precision
x = (−1)^s (1.f)_2 × 2^m = | s | c | f |        m = c − 127

Stored binary      Significand
exponent (c)       fraction (f)       Value

00000000           0000…0000          zero
00000000           any f ≠ 0          (−1)^s (0.f)_2 × 2^-126
00000001           any f              (−1)^s (1.f)_2 × 2^-126
⋮                  ⋮                  ⋮
11111110           any f              (−1)^s (1.f)_2 × 2^127
11111111           any f ≠ 0          NaN
11111111           0000…0000          infinity
Example
Determine the single-precision representation of the decimal number
𝑥 = 37.625
• Convert the decimal number to binary: (37.625)_10 = (100101.101)_2

Power:       2^5    2^4    2^3    2^2    2^1    2^0    2^-1   2^-2   2^-3
Value:       32     16     8      4      2      1      0.5    0.25   0.125
Bit:         1      0      0      1      0      1      1      0      1
Remainder:   37.625 → 5.625 → 5.625 → 5.625 → 1.625 → 1.625 → 0.625 → 0.125 → 0.125 → 0

• Convert the binary number to the normalized FP representation (1.f)_2 × 2^m

(100101.101)_2 = (1.00101101)_2 × 2^5

s = 0        f = 00101101…00        m = 5

c = m + 127 = 132 = (10000100)_2

0 10000100 00101101000000000000000
What is the equivalent decimal number?
0 00000000 00000000000000000000000

1 11111111 00000000000000000000000

0 11111111 11111111110000111111111

0 00000000 11110000000000000000000

0 01111111 00000000000000000000000
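A small decoding helper (hypothetical name decode_float32, written for illustration with the struct module) can be used to check bit patterns like these by reinterpreting them as single-precision values; here it is verified against the worked example 37.625 from the previous slide:

    import struct

    def decode_float32(pattern):
        # Reinterpret a 32-character bit string (spaces allowed) as an IEEE-754 single
        as_int = int(pattern.replace(' ', ''), 2)
        return struct.unpack('>f', struct.pack('>I', as_int))[0]

    print(decode_float32('0 10000100 00101101000000000000000'))   # 37.625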
Iclicker question
A number system can be represented as x = ±(1. b_1 b_2 b_3)_2 × 2^m
for m ∈ [−5,5] and b_i ∈ {0,1}.

1) What is the smallest positive normalized FP number:


a) 0.0625 b) 0.09375 c) 0.03125 d) 0.046875 e) 0.125

2) What is the largest positive normalized FP number:


a) 28 b) 60 c) 56 d) 32

3) How many additional numbers (positive and negative) can be


represented when using subnormal representation?
a) 7 b) 14 c) 3 d) 6 e) 16

4) What is the smallest positive subnormal number?


a) 0.00390625 b) 0.00195313 c) 0.03125 d) 0.0136719

5) Determine machine epsilon


a) 0.0625 b) 0.00390625 c) 0.0117188 d) 0.125
A number system can be represented as x = ±(1. b_1 b_2 b_3 b_4)_2 × 2^m
for m ∈ [−6,6] and b_i ∈ {0,1}.

1) Let’s say you want to represent the decimal number 19.625 using the
binary number system above. Can you represent this number exactly?

2) What is the range of integer numbers that you can represent exactly using
this binary system?
Iclicker question
Determine the decimal number corresponding to the
following single-precision machine number:
1 10011001 00000000000000000000001

A) 67,108,872

B) −67,108,872

C) 67,108,864

D) −67,108,864
Iclicker question
Determine the double-precision machine representation
of the decimal number 𝑥 = −37.625

A) 1 10000100000 00101101000000 … 0

B) 1 10000000100 00101101000000 … 0

C) 0 10000100000 00101101000000 … 0

D) 0 10000000100 00101101000000 … 0

(52-bit)
