CH 10


Chapter 10

Error Detection
and
Correction
10.1

Copyright The McGraw-Hill Companies, Inc. Permission required for reproduction or display.

Note

Data can be corrupted during transmission.
Some applications require that errors be detected and corrected.

10.2

10-1 INTRODUCTION

Let us first discuss some issues related, directly or indirectly, to error detection and correction.

Topics discussed in this section:
Types of Errors
Redundancy
Detection Versus Correction
Forward Error Correction Versus Retransmission
Coding
Modular Arithmetic

10.3

Note

In a single-bit error, only 1 bit in the data unit has changed.

10.4

Figure 10.1 Single-bit error

10.5

Note

A burst error means that 2 or more bits in the data unit have changed.

10.6

Figure 10.2 Burst error of length 8

10.7

Note

To detect or correct errors, we need to send extra (redundant) bits with data.

10.8

Figure 10.3 The structure of encoder and decoder

10.9

Note

In this book, we concentrate on block codes; we leave convolution codes to advanced texts.

10.10

Note

In modulo-N arithmetic, we use only the integers in the range 0 to N - 1, inclusive.

10.11

Figure 10.4 XORing of two single bits or two words

10.12

10-2 BLOCK CODING

In block coding, we divide our message into blocks, each of k bits, called datawords. We add r redundant bits to each block to make the length n = k + r. The resulting n-bit blocks are called codewords.

Topics discussed in this section:
Error Detection
Error Correction
Hamming Distance
Minimum Hamming Distance
10.13

Figure 10.5 Datawords and codewords in block coding

10.14

Example 10.1
The 4B/5B block coding discussed in Chapter 4 is a good
example of this type of coding. In this coding scheme,
k = 4 and n = 5. As we saw, we have 2^k = 16 datawords
and 2^n = 32 codewords. We saw that 16 out of 32
codewords are used for message transfer and the rest are
either used for other purposes or unused.

10.15

Figure 10.6 Process of error detection in block coding

10.16

Example 10.2
Let us assume that k = 2 and n = 3. Table 10.1 shows the
list of datawords and codewords. Later, we will see
how to derive a codeword from a dataword.
Assume the sender encodes the dataword 01 as 011 and
sends it to the receiver. Consider the following cases:
1. The receiver receives 011. It is a valid codeword. The
receiver extracts the dataword 01 from it.

10.17

Example 10.2 (continued)


2. The codeword is corrupted during transmission, and
111 is received. This is not a valid codeword and is
discarded.
3. The codeword is corrupted during transmission, and
000 is received. This is a valid codeword. The receiver
incorrectly extracts the dataword 00. Two corrupted
bits have made the error undetectable.

10.18

Table 10.1 A code for error detection (Example 10.2)
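A minimal Python sketch of the detection process of Figure 10.6; the dataword/codeword pairs below are the ones the examples imply for Table 10.1 (00->000, 01->011, 10->101, 11->110):

    # Error detection by table lookup: only listed codewords are accepted.
    code = {"00": "000", "01": "011", "10": "101", "11": "110"}
    valid = {cw: dw for dw, cw in code.items()}

    def receive(codeword):
        # a valid codeword yields its dataword; anything else is discarded (None)
        return valid.get(codeword)

    print(receive("011"))   # '01'  (case 1 of Example 10.2)
    print(receive("111"))   # None  (case 2: the error is detected and the word discarded)
    print(receive("000"))   # '00'  (case 3: two errors produce another valid codeword)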

10.19

Note

An error-detecting code can detect only the types of errors for which it is
designed; other types of errors may remain undetected.

10.20

Figure 10.7 Structure of encoder and decoder in error correction

10.21

Example 10.3
Let us add more redundant bits to Example 10.2 to see if
the receiver can correct an error without knowing what
was actually sent. We add 3 redundant bits to the 2-bit
dataword to make 5-bit codewords. Table 10.2 shows the
datawords and codewords. Assume the dataword is 01.
The sender creates the codeword 01011. The codeword is
corrupted during transmission, and 01001 is received.
First, the receiver finds that the received codeword is not
in the table. This means an error has occurred. The
receiver, assuming that there is only 1 bit corrupted, uses
the following strategy to guess the correct dataword.
10.22

Example 10.3 (continued)


1. Comparing the received codeword with the first
codeword in the table (01001 versus 00000), the
receiver decides that the first codeword is not the one
that was sent because there are two different bits.
2. By the same reasoning, the original codeword cannot
be the third or fourth one in the table.
3. The original codeword must be the second one in the
table because this is the only one that differs from the
received codeword by 1 bit. The receiver replaces
01001 with 01011 and consults the table to find the
dataword 01.
10.23

Table 10.2 A code for error correction (Example 10.3)
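A hedged sketch of the minimum-distance decoding strategy of Example 10.3, using the codewords the examples give for Table 10.2 (00->00000, 01->01011, 10->10101, 11->11110):

    code = {"00": "00000", "01": "01011", "10": "10101", "11": "11110"}

    def distance(x, y):
        # number of bit positions in which the two words differ
        return sum(a != b for a, b in zip(x, y))

    def correct(received):
        # choose the dataword whose codeword is closest to the received word
        return min(code, key=lambda dw: distance(code[dw], received))

    print(correct("01001"))  # '01': 01011 is the only codeword at distance 1, as in Example 10.3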

10.24

Note

The Hamming distance between two words is the number of differences
between corresponding bits.

10.25

Example 10.4
Let us find the Hamming distance between two pairs of
words.
1. The Hamming distance d(000, 011) is 2 because 000 XOR 011 = 011 (two 1s).
2. The Hamming distance d(10101, 11110) is 3 because 10101 XOR 11110 = 01011 (three 1s).

10.26

Note

The minimum Hamming distance is the smallest Hamming distance between
all possible pairs in a set of words.

10.27

Example 10.5
Find the minimum Hamming distance of the coding
scheme in Table 10.1.
Solution
We first find all Hamming distances:
d(000, 011) = 2, d(000, 101) = 2, d(000, 110) = 2,
d(011, 101) = 2, d(011, 110) = 2, d(101, 110) = 2.

The dmin in this case is 2.

10.28

Example 10.6
Find the minimum Hamming distance of the coding
scheme in Table 10.2.
Solution
We first find all the Hamming distances:
d(00000, 01011) = 3, d(00000, 10101) = 3, d(00000, 11110) = 4,
d(01011, 10101) = 4, d(01011, 11110) = 3, d(10101, 11110) = 3.

The dmin in this case is 3.
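Both definitions are easy to check in a few lines of Python; a minimal sketch, using the codewords that appear in Examples 10.2 through 10.7:

    from itertools import combinations

    def hamming_distance(x, y):
        # count the positions where the two words differ
        return sum(a != b for a, b in zip(x, y))

    def d_min(codewords):
        # smallest distance over all pairs of codewords
        return min(hamming_distance(a, b) for a, b in combinations(codewords, 2))

    print(hamming_distance("000", "011"))               # 2  (Example 10.4)
    print(hamming_distance("10101", "11110"))           # 3
    print(d_min(["000", "011", "101", "110"]))          # 2  (Example 10.5)
    print(d_min(["00000", "01011", "10101", "11110"]))  # 3  (Example 10.6)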

10.29

Note

To guarantee the detection of up to s errors in all cases, the minimum
Hamming distance in a block code must be dmin = s + 1.

10.30

Example 10.7
The minimum Hamming distance for our first code
scheme (Table 10.1) is 2. This code guarantees detection
of only a single error. For example, if the third codeword
(101) is sent and one error occurs, the received codeword
does not match any valid codeword. If two errors occur,
however, the received codeword may match a valid
codeword and the errors are not detected.

10.31

Example 10.8
Our second block code scheme (Table 10.2) has dmin = 3.
This code can detect up to two errors. Again, we see that
when any of the valid codewords is sent, two errors create
a codeword which is not in the table of valid codewords.
The receiver cannot be fooled.
However, some combinations of three errors change a
valid codeword to another valid codeword. The receiver
accepts the received codeword and the errors are
undetected.

10.32

Figure 10.8 Geometric concept for finding dmin in error detection

10.33

Figure 10.9 Geometric concept for finding dmin in error correction

10.34

Note

To guarantee correction of up to t errors in all cases, the minimum Hamming
distance in a block code must be dmin = 2t + 1.

10.35

Example 10.9
A code scheme has a Hamming distance dmin = 4. What is
the error detection and correction capability of this
scheme?
Solution
This code guarantees the detection of up to three errors
(s = 3), but it can correct up to one error. In other words,
if this code is used for error correction, part of its capability
is wasted. Error correction codes need to have an odd
minimum distance (3, 5, 7, ...).
10.36

10-3 LINEAR BLOCK CODES

Almost all block codes used today belong to a subset called linear block codes. A linear block code is a code in which the exclusive OR (addition modulo-2) of two valid codewords creates another valid codeword.

Topics discussed in this section:
Minimum Distance for Linear Block Codes
Some Linear Block Codes

10.37

Note

In a linear block code, the exclusive OR (XOR) of any two valid codewords
creates another valid codeword.

10.38

Example 10.10
Let us see if the two codes we defined in Table 10.1 and
Table 10.2 belong to the class of linear block codes.
1. The scheme in Table 10.1 is a linear block code
because the result of XORing any codeword with any
other codeword is a valid codeword. For example, the
XORing of the second and third codewords creates the
fourth one.
2. The scheme in Table 10.2 is also a linear block code.
We can create all four codewords by XORing two
other codewords.
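The claim is quick to verify mechanically; a small Python check over the codeword sets used in the examples:

    def is_linear(codewords):
        # a code is linear if the XOR of any two codewords is again a codeword
        def xor(a, b):
            return "".join("1" if x != y else "0" for x, y in zip(a, b))
        return all(xor(a, b) in codewords for a in codewords for b in codewords)

    print(is_linear({"000", "011", "101", "110"}))          # True (Table 10.1)
    print(is_linear({"00000", "01011", "10101", "11110"}))  # True (Table 10.2)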
10.39

Example 10.11
In our first code (Table 10.1), the numbers of 1s in the
nonzero codewords are 2, 2, and 2. So the minimum
Hamming distance is dmin = 2. In our second code (Table
10.2), the numbers of 1s in the nonzero codewords are 3,
3, and 4. So in this code we have dmin = 3.

10.40

Note

A simple parity-check code is a single-bit error-detecting code in which
n = k + 1 with dmin = 2.

10.41

Table 10.3 Simple parity-check code C(5, 4)

10.42

Figure 10.10 Encoder and decoder for simple parity-check code

10.43

Example 10.12
Let us look at some transmission scenarios. Assume the
sender sends the dataword 1011. The codeword created
from this dataword is 10111, which is sent to the receiver.
We examine five cases:
1. No error occurs; the received codeword is 10111. The
syndrome is 0. The dataword 1011 is created.
2. One single-bit error changes a1. The received
codeword is 10011. The syndrome is 1. No dataword
is created.
3. One single-bit error changes r0. The received codeword
is 10110. The syndrome is 1. No dataword is created.
10.44

Example 10.12 (continued)


4. An error changes r0 and a second error changes a3 .
The received codeword is 00110. The syndrome is 0.
The dataword 0011 is created at the receiver. Note that
here the dataword is wrongly created due to the
syndrome value.
5. Three bits (a3, a2, and a1) are changed by errors.
The received codeword is 01011. The syndrome is 1.
The dataword is not created. This shows that the simple
parity check, guaranteed to detect one single error, can
also find any odd number of errors.
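A minimal sketch of the even-parity encoder and decoder described above; the codeword layout a3 a2 a1 a0 r0 reproduces the cases of Example 10.12:

    def encode(data):                  # data = [a3, a2, a1, a0]
        r0 = data[0] ^ data[1] ^ data[2] ^ data[3]
        return data + [r0]             # append the parity bit

    def decode(received):              # returns the dataword, or None if discarded
        syndrome = 0
        for bit in received:           # syndrome = parity of the whole received word
            syndrome ^= bit
        return None if syndrome else received[:4]

    print(encode([1, 0, 1, 1]))        # [1, 0, 1, 1, 1] -> 10111
    print(decode([1, 0, 0, 1, 1]))     # None: syndrome 1 (case 2)
    print(decode([0, 0, 1, 1, 0]))     # [0, 0, 1, 1]: two errors go undetected (case 4)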

10.45

Note

A simple parity-check code can detect an odd number of errors.

10.46

Note

All Hamming codes discussed in this book have dmin = 3.
The relationship between m and n in these codes is n = 2^m - 1.

10.47

Figure 10.11 Two-dimensional parity-check code

10.48

Figure 10.11 Two-dimensional parity-check code

10.49

Figure 10.11 Two-dimensional parity-check code

10.50

Table 10.4 Hamming code C(7, 4)

10.51

Figure 10.12 The structure of the encoder and decoder for a Hamming code

10.52

Table 10.5 Logical decision made by the correction logic analyzer

10.53

Example 10.13
Let us trace the path of three datawords from the sender
to the destination:
1. The dataword 0100 becomes the codeword 0100011.
The codeword 0100011 is received. The syndrome is
000; the final dataword is 0100.
2. The dataword 0111 becomes the codeword 0111001.
The codeword is corrupted during transmission. The
syndrome is 011. After flipping b2 (changing the 1
to 0), the final dataword is 0111.
3. The dataword 1101 becomes the codeword 1101000.
The codeword is corrupted by two errors during
transmission. The syndrome is 101. After flipping b0,
we get 0000, the wrong dataword. This shows that our
code cannot correct two errors.
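A hedged Python sketch of a C(7, 4) Hamming encoder/decoder. The parity equations below reproduce the codewords of Example 10.13 (0100 -> 0100011, 0111 -> 0111001, 1101 -> 1101000); the exact equations behind Table 10.4 and the syndrome table of Table 10.5 may be written differently in the book.

    def encode(data):                      # data = [a3, a2, a1, a0]
        a3, a2, a1, a0 = data
        r2 = a1 ^ a0 ^ a3                  # assumed parity equations (see note above)
        r1 = a3 ^ a2 ^ a1
        r0 = a2 ^ a1 ^ a0
        return [a3, a2, a1, a0, r2, r1, r0]

    def syndrome(word):                    # word = [b3, b2, b1, b0, q2, q1, q0]
        b3, b2, b1, b0, q2, q1, q0 = word
        return (b1 ^ b0 ^ b3 ^ q2, b3 ^ b2 ^ b1 ^ q1, b2 ^ b1 ^ b0 ^ q0)

    # Build the correction table by flipping each single bit of a valid codeword;
    # in a linear code the syndrome depends only on the error pattern.
    zero = encode([0, 0, 0, 0])
    table = {}
    for i in range(7):
        err = zero[:]
        err[i] ^= 1
        table[syndrome(err)] = i

    def decode(word):
        s = syndrome(word)
        if s != (0, 0, 0):                 # flip the single bit the syndrome points to
            word = word[:]
            word[table[s]] ^= 1
        return word[:4]                    # the recovered dataword a3 a2 a1 a0

    codeword = encode([0, 1, 0, 0])        # [0, 1, 0, 0, 0, 1, 1], i.e. 0100011
    codeword[1] ^= 1                       # corrupt one bit
    print(decode(codeword))                # [0, 1, 0, 0]: the single error is corrected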
10.54

Example 10.14
We need a dataword of at least 7 bits. Calculate values of
k and n that satisfy this requirement.
Solution
We need to make k = n - m greater than or equal to 7, or 2^m - 1 - m >= 7.
1. If we set m = 3, the result is n = 2^3 - 1 and k = 7 - 3,
or 4, which is not acceptable.
2. If we set m = 4, then n = 2^4 - 1 = 15 and k = 15 - 4 =
11, which satisfies the condition. So the code is
C(15, 11).
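The same search can be written as a small helper (a sketch; the function name is invented here):

    def hamming_parameters(k_needed):
        # find the smallest number of check bits m with (2^m - 1) - m >= k_needed
        m = 1
        while (2**m - 1) - m < k_needed:
            m += 1
        n = 2**m - 1
        return n, n - m                 # (n, k)

    print(hamming_parameters(7))        # (15, 11) -> the C(15, 11) code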
10.55

Figure 10.13 Burst error correction using Hamming code

10.56

10-4 CYCLIC CODES

Cyclic codes are special linear block codes with one extra property. In a cyclic code, if a codeword is cyclically shifted (rotated), the result is another codeword.

Topics discussed in this section:
Cyclic Redundancy Check
Hardware Implementation
Polynomials
Cyclic Code Analysis
Advantages of Cyclic Codes
Other Cyclic Codes
10.57

Table 10.6 A CRC code with C(7, 4)

10.58

Figure 10.14 CRC encoder and decoder

10.59

Figure 10.15 Division in CRC encoder

10.60

Figure 10.16 Division in the CRC decoder for two cases
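The division in Figures 10.15 and 10.16 can be sketched in Python as repeated XOR of the divisor against the leading 1s of the dividend. The divisor 1011 used below is an assumption for the C(7, 4) code of Table 10.6; substitute the actual generator if it differs.

    def xor_divide(bits, divisor):
        bits = bits[:]                         # work on a copy
        for i in range(len(bits) - len(divisor) + 1):
            if bits[i] == 1:                   # only subtract (XOR) when the leading bit is 1
                for j, d in enumerate(divisor):
                    bits[i + j] ^= d
        return bits[-(len(divisor) - 1):]      # the remainder (r = len(divisor) - 1 bits)

    def crc_encode(dataword, divisor):
        remainder = xor_divide(dataword + [0] * (len(divisor) - 1), divisor)
        return dataword + remainder            # codeword = dataword followed by the CRC

    def crc_check(codeword, divisor):
        return all(b == 0 for b in xor_divide(codeword, divisor))  # True if the syndrome is 0

    divisor = [1, 0, 1, 1]                     # assumed generator (x^3 + x + 1)
    cw = crc_encode([1, 0, 0, 1], divisor)
    print(cw, crc_check(cw, divisor))          # a valid codeword checks out
    cw[1] ^= 1                                 # corrupt one bit
    print(crc_check(cw, divisor))              # False: the error is detected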

10.61

Figure 10.17 Hardwired design of the divisor in CRC

10.62

Figure 10.18 Simulation of division in CRC encoder

10.63

Figure 10.19 The CRC encoder design using shift registers

10.64

Figure 10.20 General design of encoder and decoder of a CRC code

10.65

Figure 10.21 A polynomial to represent a binary word

10.66

Figure 10.22 CRC division using polynomials

10.67

Note

The divisor in a cyclic code is normally called the generator polynomial
or simply the generator.

10.68

Note

In a cyclic code,
If s(x) != 0, one or more bits is corrupted.
If s(x) = 0, either
a. No bit is corrupted, or
b. Some bits are corrupted, but the decoder failed to detect them.
10.69

Note

In a cyclic code, those e(x) errors that are divisible by g(x) are not caught.

10.70

Note

If the generator has more than one term and the coefficient of x^0 is 1,
all single errors can be caught.

10.71

Example 10.15
Which of the following g(x) values guarantees that a
single-bit error is caught? For each case, what is the
error that cannot be caught?
a. x + 1
b. x^3
c. 1
Solution
a. No x^i can be divisible by x + 1. Any single-bit error can
be caught.
b. If i is equal to or greater than 3, x^i is divisible by g(x).
All single-bit errors in positions 1 to 3 are caught.
c. All values of i make x^i divisible by g(x). No single-bit
error can be caught. This g(x) is useless.
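A small check of the three cases, treating polynomials over GF(2) as Python integers (bit k is the coefficient of x^k); gf2_remainder is a helper written here, not a library call:

    def gf2_remainder(dividend, divisor):
        # modulo-2 polynomial division: XOR the shifted divisor into the dividend
        while dividend and dividend.bit_length() >= divisor.bit_length():
            dividend ^= divisor << (dividend.bit_length() - divisor.bit_length())
        return dividend

    generators = {"x + 1": 0b11, "x^3": 0b1000, "1": 0b1}
    for name, g in generators.items():
        # a single-bit error in position i is x^i; it slips through when g(x) divides it
        missed = [i for i in range(8) if gf2_remainder(1 << i, g) == 0]
        print(name, "misses positions:", missed)
    # x + 1 misses nothing, x^3 misses every i >= 3, and g(x) = 1 misses every
    # position, matching cases a, b, and c above.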
10.72

Figure 10.23 Representation of two isolated single-bit errors using polynomials

10.73

Note

If a generator cannot divide x^t + 1 (t between 0 and n - 1),
then all isolated double errors can be detected.

10.74

Example 10.16
Find the status of the following generators related to two
isolated, single-bit errors.
a. x + 1   b. x^4 + 1   c. x^7 + x^6 + 1   d. x^15 + x^14 + 1
Solution
a. This is a very poor choice for a generator. Any two
errors next to each other cannot be detected.
b. This generator cannot detect two errors that are four
positions apart.
c. This is a good choice for this purpose.
d. This polynomial cannot divide x^t + 1 if t is less than
32,768. A codeword with two isolated errors up to
32,768 bits apart can be detected by this generator.
10.75

Note

A generator that contains a factor of x + 1 can detect all odd-numbered errors.

10.76

Note

All burst errors with L <= r will be detected.
All burst errors with L = r + 1 will be detected with probability 1 - (1/2)^(r-1).
All burst errors with L > r + 1 will be detected with probability 1 - (1/2)^r.

10.77

Example 10.17
Find the suitability of the following generators in relation
to burst errors of different lengths.
a. x^6 + 1
b. x^18 + x^7 + x + 1
c. x^32 + x^23 + x^7 + 1
Solution
a. This generator can detect all burst errors with a length
less than or equal to 6 bits; 3 out of 100 burst errors
with length 7 will slip by; 16 out of 1000 burst errors of
length 8 or more will slip by.

10.78

Example 10.17 (continued)


b. This generator can detect all burst errors with a length
less than or equal to 18 bits; 8 out of 1 million burst
errors with length 19 will slip by; 4 out of 1 million
burst errors of length 20 or more will slip by.
c. This generator can detect all burst errors with a length
less than or equal to 32 bits; 5 out of 10 billion burst
errors with length 33 will slip by; 3 out of 10 billion
burst errors of length 34 or more will slip by.

10.79

Note

A good polynomial generator needs to have the following characteristics:
1. It should have at least two terms.
2. The coefficient of the term x^0 should be 1.
3. It should not divide x^t + 1, for t between 2 and n - 1.
4. It should have the factor x + 1.
10.80

Table 10.7 Standard polynomials

10.81

10-5 CHECKSUM

The last error detection method we discuss here is called the checksum. The checksum is used in the Internet by several protocols although not at the data link layer. However, we briefly discuss it here to complete our discussion on error checking.

Topics discussed in this section:
Idea
One's Complement
Internet Checksum
10.82

Example 10.18
Suppose our data is a list of five 4-bit numbers that we
want to send to a destination. In addition to sending these
numbers, we send the sum of the numbers. For example,
if the set of numbers is (7, 11, 12, 0, 6), we send (7, 11, 12,
0, 6, 36), where 36 is the sum of the original numbers.
The receiver adds the five numbers and compares the
result with the sum. If the two are the same, the receiver
assumes no error, accepts the five numbers, and discards
the sum. Otherwise, there is an error somewhere and the
data are not accepted.

10.83

Example 10.19
We can make the job of the receiver easier if we send the
negative (complement) of the sum, called the checksum.
In this case, we send (7, 11, 12, 0, 6, -36). The receiver
can add all the numbers received (including the
checksum). If the result is 0, it assumes no error;
otherwise, there is an error.

10.84

Example 10.20
How can we represent the number 21 in one's
complement arithmetic using only four bits?

Solution
The number 21 in binary is 10101 (it needs five bits). We
can wrap the leftmost bit and add it to the four rightmost
bits. We have (0101 + 1) = 0110 or 6.

10.85

Example 10.21
How can we represent the number -6 in one's
complement arithmetic using only four bits?
Solution
In one's complement arithmetic, the negative or
complement of a number is found by inverting all bits.
Positive 6 is 0110; negative 6 is 1001. If we consider only
unsigned numbers, this is 9. In other words, the
complement of 6 is 9. Another way to find the
complement of a number in one's complement arithmetic
is to subtract the number from 2^n - 1 (16 - 1 in this case).
10.86

Example 10.22
Let us redo Example 10.19 using one's complement
arithmetic. Figure 10.24 shows the process at the sender
and at the receiver. The sender initializes the checksum
to 0 and adds all data items and the checksum (the
checksum is considered as one data item and is shown in
color). The result is 36. However, 36 cannot be expressed
in 4 bits. The extra two bits are wrapped and added with
the sum to create the wrapped sum value 6. In the figure,
we have shown the details in binary. The sum is then
complemented, resulting in the checksum value 9 (15 - 6
= 9). The sender now sends six data items to the receiver,
including the checksum 9.
10.87

Example 10.22 (continued)


The receiver follows the same procedure as the sender. It
adds all data items (including the checksum); the result
is 45. The sum is wrapped and becomes 15. The wrapped
sum is complemented and becomes 0. Since the value of
the checksum is 0, this means that the data is not
corrupted. The receiver drops the checksum and keeps
the other data items. If the checksum is not zero, the
entire packet is dropped.
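A minimal sketch of this 4-bit one's-complement procedure; the two printed values reproduce the sender's checksum (9) and the receiver's result (0) from Example 10.22:

    def wrap(value, bits=4):
        mask = (1 << bits) - 1
        while value >> bits:                      # fold any carry back into the low bits
            value = (value & mask) + (value >> bits)
        return value

    def checksum(items, bits=4):
        return wrap(sum(items), bits) ^ ((1 << bits) - 1)   # complement of the wrapped sum

    data = [7, 11, 12, 0, 6]
    c = checksum(data + [0])                      # sender: checksum word initialized to 0
    print(c)                                      # 9
    print(wrap(sum(data + [c])) ^ 0b1111)         # receiver: 0 means no error detected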

10.88

Figure 10.24 Example 10.22

10.89

Note

Sender site:
1. The message is divided into 16-bit words.
2. The value of the checksum word is set to 0.
3. All words including the checksum are added using one's complement addition.
4. The sum is complemented and becomes the checksum.
5. The checksum is sent with the data.

10.90

Note

Receiver site:
1. The message (including checksum) is divided into 16-bit words.
2. All words are added using one's complement addition.
3. The sum is complemented and becomes the new checksum.
4. If the value of checksum is 0, the message is accepted; otherwise, it is rejected.
10.91

Example 10.23
Let us calculate the checksum for a text of 8 characters
("Forouzan"). The text needs to be divided into 2-byte
(16-bit) words. We use ASCII (see Appendix A) to change
each byte to a 2-digit hexadecimal number. For example,
F is represented as 0x46 and o is represented as 0x6F.
Figure 10.25 shows how the checksum is calculated at the
sender and receiver sites. In part a of the figure, the value
of partial sum for the first column is 0x36. We keep the
rightmost digit (6) and insert the leftmost digit (3) as the
carry in the second column. The process is repeated for
each column. Note that if there is any corruption, the
checksum recalculated by the receiver is not all 0s. We
leave this as an exercise.
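A sketch of the 16-bit Internet checksum procedure from the two notes above, applied to the text of this example. The value printed for "Forouzan" (0x7038) is computed by the code itself, not quoted from Figure 10.25:

    def internet_checksum(data: bytes) -> int:
        if len(data) % 2:                       # pad to a whole number of 16-bit words
            data += b"\x00"
        total = 0
        for i in range(0, len(data), 2):
            total += (data[i] << 8) | data[i + 1]
            total = (total & 0xFFFF) + (total >> 16)   # wrap the carry (one's complement add)
        return ~total & 0xFFFF                  # complement of the sum is the checksum

    print(hex(internet_checksum(b"Forouzan")))  # 0x7038
    # At the receiver, adding this checksum word to the data words gives a sum of
    # 0xFFFF, whose complement is 0 when the data is intact.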
10.92

Figure 10.25 Example 10.23

10.93
