Digital Communication Systems by Simon Haykin-102
The set of vectors {e_i, i = 0, 1, …, 2^k – 1} defined in (10.21) is called a coset of the code. In
other words, a coset has exactly 2^k elements that differ at most by a code vector. Thus, an
(n, k) linear block code has 2^(n–k) possible cosets. In any event, multiplying both sides of
(10.21) by the matrix HT and again using (10.16), we get
e_i H^T = eH^T + c_i H^T
        = eH^T                                          (10.22)
which is independent of the index i. Accordingly, we may say:
Each coset of the code is characterized by a unique syndrome.
We may put Properties 1 and 2 in perspective by expanding (10.20). Specifically, with the
matrix H having the systematic form given in (10.14), where the matrix P is itself defined
by (10.8), we find from (10.20) that the (n – k) elements of the syndrome s are linear
combinations of the n elements of the error pattern e, as shown by
s_0 = e_0 + e_{n–k} p_{00} + e_{n–k+1} p_{10} + … + e_{n–1} p_{k–1,0}
s_1 = e_1 + e_{n–k} p_{01} + e_{n–k+1} p_{11} + … + e_{n–1} p_{k–1,1}
⋮
s_{n–k–1} = e_{n–k–1} + e_{n–k} p_{0,n–k–1} + … + e_{n–1} p_{k–1,n–k–1}        (10.23)
This set of (n – k) linear equations clearly shows that the syndrome contains information
about the error pattern and may, therefore, be used for error detection. However, it should
be noted that the set of equations (10.23) is underdetermined, in that we have more
unknowns than equations. Accordingly, there is no unique solution for the error pattern.
Rather, there are 2^k error patterns that satisfy (10.23) and, therefore, result in the same
syndrome, in accordance with Property 2 and (10.22). In particular, with 2^(n–k) possible
syndrome vectors, the information contained in the syndrome s about the error pattern e is
not enough for the decoder to compute the exact value of the transmitted code vector.
Nevertheless, knowledge of the syndrome s reduces the search for the true error pattern e
from 2^n to 2^k possibilities. Given these possibilities, the decoder has the task of making
the best selection from the coset corresponding to s.
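The coset and syndrome properties are easy to verify numerically. The sketch below (assuming NumPy is available) uses the generator and parity-check matrices of the (7,4) Hamming code, the example developed later in this section, and confirms that all 2^k elements of a coset produce the same syndrome, as in (10.22):

```python
import numpy as np

# Parity-check matrix H = [I_3 | P^T] of the (7,4) Hamming code
H = np.array([[1, 0, 0, 1, 0, 1, 1],
              [0, 1, 0, 1, 1, 1, 0],
              [0, 0, 1, 0, 1, 1, 1]])

# Generator matrix G = [P | I_4] of the same code
G = np.array([[1, 1, 0, 1, 0, 0, 0],
              [0, 1, 1, 0, 1, 0, 0],
              [1, 1, 1, 0, 0, 1, 0],
              [1, 0, 1, 0, 0, 0, 1]])

def syndrome(v, H):
    """s = v H^T over GF(2)."""
    return tuple(v @ H.T % 2)

# All 2^k codewords: every message m gives c = mG (mod 2)
codewords = [(np.array(m) @ G) % 2 for m in np.ndindex(2, 2, 2, 2)]

# Fix one error pattern e; the coset {e + c_i} has 2^k elements ...
e = np.array([0, 0, 1, 0, 0, 0, 0])
coset = [(e + c) % 2 for c in codewords]

# ... and every element of the coset yields the same syndrome
syndromes = {syndrome(v, H) for v in coset}
print(syndromes)   # a single syndrome: {(0, 0, 1)}
```

Since the coset's common syndrome equals eH^T, the 16 coset members are exactly the error patterns the decoder cannot distinguish from the syndrome alone.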
Minimum Distance Considerations
That is, the minimum distance is the same as the smallest Hamming weight of the difference
between any pair of code vectors. From the closure property of linear block codes, the sum
(or difference) of two code vectors is another code vector. Accordingly, we may also state:
The minimum distance of a linear block code is the smallest Hamming weight
of the nonzero code vectors in the code.
The minimum distance dmin is related to the structure of the parity-check matrix H of the
code in a fundamental way. From (10.16) we know that a linear block code is defined by
the set of all code vectors for which cH^T = 0, where H^T is the transpose of the parity-check
matrix H. Let the matrix H be expressed in terms of its columns as shown by

H = [h_1  h_2  …  h_n]                                  (10.24)

Then, for a code vector c to satisfy the condition cH^T = 0, the vector c must have ones in
such positions that the corresponding rows of H^T sum to the zero vector 0. However, by
definition, the number of ones in a code vector is the Hamming weight of the code vector.
Moreover, the smallest Hamming weight of the nonzero code vectors in a linear block
code equals the minimum distance of the code. Hence, we have another useful result stated
as follows:
The minimum distance of a linear block code is defined by the minimum
number of rows of the matrix HT whose sum is equal to the zero vector.
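Both characterizations of the minimum distance can be checked by brute force on a small code. The sketch below (NumPy assumed) uses the (7,4) Hamming code matrices that appear later in this section and computes d_min two ways: as the smallest weight over the nonzero codewords, and as the smallest number of columns of H summing to zero:

```python
import numpy as np
from itertools import combinations

# Systematic matrices of the (7,4) Hamming code, used as an illustration
G = np.array([[1, 1, 0, 1, 0, 0, 0],
              [0, 1, 1, 0, 1, 0, 0],
              [1, 1, 1, 0, 0, 1, 0],
              [1, 0, 1, 0, 0, 0, 1]])
H = np.array([[1, 0, 0, 1, 0, 1, 1],
              [0, 1, 0, 1, 1, 1, 0],
              [0, 0, 1, 0, 1, 1, 1]])

# d_min as the smallest Hamming weight over the nonzero codewords
weights = [int(((np.array(m) @ G) % 2).sum())
           for m in np.ndindex(2, 2, 2, 2) if any(m)]
d_min = min(weights)

# Equivalently: the minimum number of columns of H (rows of H^T)
# whose GF(2) sum is the zero vector
n = H.shape[1]
d_min_cols = next(r for r in range(1, n + 1)
                  if any(not (H[:, list(cols)].sum(axis=1) % 2).any()
                         for cols in combinations(range(n), r)))

print(d_min, d_min_cols)   # both equal 3 for this code
```

The exhaustive search is only feasible for small n, but it makes the equivalence of the two statements concrete.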
From this discussion, it is apparent that the minimum distance dmin of a linear block code
is an important parameter of the code. Specifically, dmin determines the error-correcting
capability of the code. Suppose an (n,k) linear block code is required to detect and correct,
over a binary symmetric channel, all error patterns whose Hamming weight is less than
or equal to t. That is, if a code vector ci in the code is transmitted and the received
vector is r = ci + e, we require that the decoder output ĉ = ci whenever the error pattern e
has a Hamming weight

w(e) ≤ t
We assume that the 2^k code vectors in the code are transmitted with equal probability. The
best strategy for the decoder then is to pick the code vector closest to the received vector r;
that is, the one for which the Hamming distance d(ci,r) is the smallest. With such a
strategy, the decoder will be able to detect and correct all error patterns of Hamming weight
w(e) ≤ t, provided that the minimum distance of the code is equal to or greater than 2t + 1. We
may demonstrate the validity of this requirement by adopting a geometric interpretation of
the problem. In particular, the transmitted 1-by-n code vector and the 1-by-n received
vector are represented as points in an n-dimensional space. Suppose that we construct two
spheres, each of radius t, around the points that represent code vectors ci and cj under two
different conditions:
1. Let these two spheres be disjoint, as depicted in Figure 10.6a. For this condition to
be satisfied, we require that d(ci,cj) ≥ 2t + 1. If, then, the code vector ci is transmitted
and the Hamming distance d(ci,r) ≤ t, it is clear that the decoder will pick ci, as it is
the code vector closest to the received vector r.
2. If, on the other hand, the Hamming distance d(ci,cj) ≤ 2t, the two spheres around ci
and cj intersect, as depicted in Figure 10.6b. In this second situation, we see that if ci
[Figure 10.6 Spheres of radius t around code vectors ci and cj: (a) disjoint spheres, d(ci,cj) ≥ 2t + 1; (b) intersecting spheres, d(ci,cj) ≤ 2t.]
is transmitted, there exists a received vector r such that the Hamming distance
d(ci,r) ≤ t, yet r is as close to cj as it is to ci. Clearly, there is now the possibility of
the decoder picking the vector cj, which is wrong.
We may summarize the ideas presented thus far by saying:
An (n,k) linear block code has the power to correct all error patterns of weight t
or less if, and only if, d(ci,cj) ≥ 2t + 1 for all ci and cj.
By definition, however, the smallest distance between any pair of code vectors in a code is
the minimum distance dmin of the code. We may, therefore, go on to state:
An (n,k) linear block code of minimum distance dmin can correct up to t errors
if, and only if,
t ≤ ⌊(dmin – 1)/2⌋                                      (10.25)

where ⌊·⌋ denotes the largest integer less than or equal to the enclosed
quantity.
The condition described in (10.25) is important because it gives the error-correcting
capability of a linear block code a quantitative meaning.
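The floor operation in (10.25) translates directly into integer arithmetic. A short sketch, using a few standard dmin values for illustration (Hamming codes have dmin = 3; the (23,12) Golay code has dmin = 7):

```python
# Error-correcting capability t = floor((dmin - 1) / 2), as in (10.25)
def correctable_errors(d_min: int) -> int:
    return (d_min - 1) // 2

# Hamming codes (dmin = 3) correct single errors; the (23,12) Golay
# code (dmin = 7) corrects up to 3 errors per codeword.
for d in (3, 4, 5, 7):
    print(d, correctable_errors(d))   # 3->1, 4->1, 5->2, 7->3
```

Note that dmin = 4 buys no extra correction over dmin = 3; the extra distance is instead useful for simultaneous detection of double errors.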
Syndrome Decoding
We are now ready to describe a syndrome-based decoding scheme for linear block codes.
Let c_1, c_2, …, c_{2^k} denote the 2^k code vectors of an (n, k) linear block code. Let r denote
the received vector, which may have one of 2^n possible values. The receiver has the task of
partitioning the 2^n possible received vectors into 2^k disjoint subsets D_1, D_2, …, D_{2^k} in
such a way that the ith subset D_i corresponds to code vector c_i for 1 ≤ i ≤ 2^k. The received
vector r is decoded into c_i if it is in the ith subset. For the decoding to be correct, r must be
in the subset that belongs to the code vector c_i that was actually sent.
The 2k subsets described herein constitute a standard array of the linear block code. To
construct it, we exploit the linear structure of the code by proceeding as follows:
1. The 2^k code vectors are placed in a row with the all-zero code vector c1 as the
leftmost element.
2. An error pattern e2 is picked and placed under c1, and a second row is formed by
adding e2 to each of the remaining code vectors in the first row; it is important that
the error pattern chosen as the first element in a row has not previously appeared in
the standard array.
3. Step 2 is repeated until all the possible error patterns have been accounted for.

[Figure 10.7 Standard array for an (n, k) block code:]
    c_1 = 0       c_2             c_3             …   c_i             …   c_{2^k}
    e_2           c_2 + e_2       c_3 + e_2       …   c_i + e_2       …   c_{2^k} + e_2
    e_3           c_2 + e_3       c_3 + e_3       …   c_i + e_3       …   c_{2^k} + e_3
    ⋮
    e_j           c_2 + e_j       c_3 + e_j       …   c_i + e_j       …   c_{2^k} + e_j
    ⋮
    e_{2^{n–k}}   c_2 + e_{2^{n–k}}   c_3 + e_{2^{n–k}}   …   c_i + e_{2^{n–k}}   …   c_{2^k} + e_{2^{n–k}}
Figure 10.7 illustrates the structure of the standard array so constructed. The 2^k columns of
this array represent the disjoint subsets D_1, D_2, …, D_{2^k}. The 2^(n–k) rows of the array
represent the cosets of the code, and their first elements e_2, …, e_{2^{n–k}} are called coset
leaders.
For a given channel, the probability of decoding error is minimized when the most
likely error patterns (i.e., those with the largest probability of occurrence) are chosen as
the coset leaders. In the case of a binary symmetric channel, the smaller we make the
Hamming weight of an error pattern, the more likely it is for an error to occur.
Accordingly, the standard array should be constructed with each coset leader having the
minimum Hamming weight in its coset.
We are now ready to describe a decoding procedure for linear block codes:
1. For the received vector r, compute the syndrome s = rH^T.
2. Within the coset characterized by the syndrome s, identify the coset leader (i.e., the
error pattern with the largest probability of occurrence); call it e0.
3. Compute the code vector
c = r + e0 (10.26)
as the decoded version of the received vector r.
This procedure is called syndrome decoding.
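The three-step procedure above can be sketched in a few lines. The sketch below (NumPy assumed) uses the (7,4) Hamming code of this section; the syndrome table is built by visiting error patterns in order of increasing Hamming weight, so each syndrome is associated with its minimum-weight coset leader:

```python
import numpy as np
from itertools import combinations

# Matrices of the (7,4) Hamming code, used here as an illustration
G = np.array([[1, 1, 0, 1, 0, 0, 0],
              [0, 1, 1, 0, 1, 0, 0],
              [1, 1, 1, 0, 0, 1, 0],
              [1, 0, 1, 0, 0, 0, 1]])
H = np.array([[1, 0, 0, 1, 0, 1, 1],
              [0, 1, 0, 1, 1, 1, 0],
              [0, 0, 1, 0, 1, 1, 1]])
n = 7

def syndrome(v):
    return tuple(v @ H.T % 2)

# Build the syndrome table: error patterns visited in order of
# increasing weight, so each syndrome keeps its lightest coset leader.
coset_leader = {}
for w in range(n + 1):
    for pos in combinations(range(n), w):
        e = np.zeros(n, dtype=int)
        e[list(pos)] = 1
        coset_leader.setdefault(syndrome(e), e)

def decode(r):
    """Steps 1-3: compute s = rH^T, look up the coset leader, add it to r."""
    e0 = coset_leader[syndrome(r)]
    return (r + e0) % 2

# Usage: a single-bit error in a transmitted codeword is corrected
c = (np.array([1, 0, 1, 1]) @ G) % 2   # c = [1 0 0 1 0 1 1]
r = c.copy(); r[2] ^= 1                 # flip the third bit
print(decode(r))                        # recovers c: [1 0 0 1 0 1 1]
```

Storing only the 2^(n–k) coset leaders, rather than the full standard array, is what makes syndrome decoding practical: the table here has 8 entries instead of 128.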
For the (7,4) Hamming code considered here, the generator matrix is

        [ 1 1 0 ⋮ 1 0 0 0 ]
    G = [ 0 1 1 ⋮ 0 1 0 0 ]
        [ 1 1 1 ⋮ 0 0 1 0 ]
        [ 1 0 1 ⋮ 0 0 0 1 ]
          \__P__/ \__I_k__/

which conforms to the systematic structure of (10.12).
The corresponding parity-check matrix is given by

        [ 1 0 0 ⋮ 1 0 1 1 ]
    H = [ 0 1 0 ⋮ 1 1 1 0 ]
        [ 0 0 1 ⋮ 0 1 1 1 ]
         \_I_{n–k}_/ \_P^T_/
The operative property embodied in this equation is that the columns of the parity-check
matrix H consist of all the nonzero m-tuples, where m = n – k = 3.
With k = 4, there are 2^k = 16 distinct message words, which are listed in Table 10.1. For
a given message word, the corresponding codeword is obtained by using (10.13). Thus, the
application of this equation results in the 16 codewords listed in Table 10.1.
In Table 10.1, we have also listed the Hamming weights of the individual codewords in
the (7,4) Hamming code. Since the smallest of the Hamming weights for the nonzero
codewords is 3, it follows that the minimum distance of the code is 3, which is what it
should be by definition. Indeed, all Hamming codes have the property that the minimum
distance dmin = 3, independent of the value assigned to the number of parity bits m.
To illustrate the relation between the minimum distance dmin and the structure of the
parity-check matrix H, consider the codeword 0110100. In matrix multiplication, defined
by (10.16), the nonzero elements of this codeword “sift” out the second, third, and fifth
columns of the matrix H, yielding
    [0]   [0]   [0]   [0]
    [1] + [0] + [1] = [0]
    [0]   [1]   [1]   [0]
We may perform similar calculations for the remaining 14 nonzero codewords. We thus
find that the smallest number of columns in H that sums to zero is 3, reconfirming the
defining condition dmin = 3.
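This "sifting" of columns is simple to verify mechanically. A short sketch (NumPy assumed, H as given above):

```python
import numpy as np

# The nonzero positions of the codeword 0110100 select the second,
# third, and fifth columns of H, which must sum to zero over GF(2).
H = np.array([[1, 0, 0, 1, 0, 1, 1],
              [0, 1, 0, 1, 1, 1, 0],
              [0, 0, 1, 0, 1, 1, 1]])

c = np.array([0, 1, 1, 0, 1, 0, 0])
selected = H[:, c == 1]            # columns 2, 3 and 5 (1-indexed)
print(selected.sum(axis=1) % 2)    # [0 0 0]
print(c @ H.T % 2)                 # cH^T = [0 0 0], confirming (10.16)
```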
An important property of binary Hamming codes is that they satisfy the condition of
(10.25) with the equality sign, assuming that t = 1. Thus, assuming single-error patterns,
we may formulate the error patterns listed in the right-hand column of Table 10.2. The
corresponding eight syndromes, listed in the left-hand column, are calculated in
accordance with (10.20). The zero syndrome signifies no transmission errors.
Suppose, for example, the code vector [1110010] is sent and the received vector is
[1100010] with an error in the third bit. Using (10.19), the syndrome is calculated to be
                          [ 1 0 0 ]
                          [ 0 1 0 ]
                          [ 0 0 1 ]
    s = [1 1 0 0 0 1 0]   [ 1 1 0 ]
                          [ 0 1 1 ]
                          [ 1 1 1 ]
                          [ 1 0 1 ]

      = [0 0 1]
From Table 10.2 the corresponding coset leader (i.e., error pattern with the highest
probability of occurrence) is found to be [0010000], indicating correctly that the third bit
of the received vector is erroneous. Thus, adding this error pattern to the received vector,
in accordance with (10.26), yields the correct code vector actually sent.
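The arithmetic of this worked example is easy to reproduce. A small sketch (NumPy assumed, H as given above):

```python
import numpy as np

# Worked example: codeword [1110010] is sent; [1100010] is received,
# with an error in the third bit.
H = np.array([[1, 0, 0, 1, 0, 1, 1],
              [0, 1, 0, 1, 1, 1, 0],
              [0, 0, 1, 0, 1, 1, 1]])

r = np.array([1, 1, 0, 0, 0, 1, 0])
s = r @ H.T % 2
print(s)                           # [0 0 1]

# Table 10.2 maps syndrome 001 to the coset leader 0010000;
# adding it to r, per (10.26), recovers the transmitted codeword.
e0 = np.array([0, 0, 1, 0, 0, 0, 0])
print((r + e0) % 2)                # [1 1 1 0 0 1 0]
```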
Table 10.2 Syndromes and error patterns for the (7,4) Hamming code

    Syndrome    Error pattern
    000         0000000
    100         1000000
    010         0100000
    001         0010000
    110         0001000
    011         0000100
    111         0000010
    101         0000001