$$\mathbf{A}_1^{-1} =
\begin{bmatrix}
0 & 0 & 1 & 0 & 1 & 1\\
1 & 0 & 1 & 0 & 0 & 1\\
1 & 1 & 1 & 0 & 0 & 0\\
1 & 1 & 0 & 0 & 1 & 0\\
0 & 1 & 0 & 0 & 1 & 1\\
0 & 1 & 1 & 1 & 0 & 1
\end{bmatrix}$$
Using the given value of $\mathbf{A}_2$ and the value of $\mathbf{A}_1^{-1}$ just found, the matrix product $\mathbf{A}_2\mathbf{A}_1^{-1}$ is given by

$$\mathbf{A}_2\mathbf{A}_1^{-1} =
\begin{bmatrix}
1 & 0 & 0 & 1 & 1 & 0\\
0 & 0 & 0 & 1 & 1 & 1\\
0 & 0 & 1 & 1 & 1 & 0\\
0 & 1 & 0 & 1 & 1 & 0
\end{bmatrix}$$
Finally, using (10.144), the generator matrix of the (10, 3, 5) LDPC code is defined by

$$\mathbf{G} =
\left[\begin{array}{cccccc|cccc}
1 & 0 & 0 & 1 & 1 & 0 & 1 & 0 & 0 & 0\\
0 & 0 & 0 & 1 & 1 & 1 & 0 & 1 & 0 & 0\\
0 & 0 & 1 & 1 & 1 & 0 & 0 & 0 & 1 & 0\\
0 & 1 & 0 & 1 & 1 & 0 & 0 & 0 & 0 & 1
\end{array}\right]$$

where the first six columns constitute the coefficient matrix $\mathbf{A}_2\mathbf{A}_1^{-1}$ and the last four columns constitute the identity matrix $\mathbf{I}_k$.
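As a quick sanity check on this construction (not part of the text), the generator matrix can be exercised numerically. The following Python sketch, assuming NumPy is available, encodes one 4-bit message with G; all arithmetic is modulo 2.

```python
import numpy as np

# Generator matrix G = [A2 A1^{-1} | I_k] of the (10, 3, 5) LDPC code,
# copied from the result derived above.
G = np.array([[1, 0, 0, 1, 1, 0, 1, 0, 0, 0],
              [0, 0, 0, 1, 1, 1, 0, 1, 0, 0],
              [0, 0, 1, 1, 1, 0, 0, 0, 1, 0],
              [0, 1, 0, 1, 1, 0, 0, 0, 0, 1]], dtype=int)

def encode(m, G):
    """Encode message vector m into a codeword, modulo 2."""
    return m @ G % 2

m = np.array([1, 0, 1, 1])   # an arbitrary 4-bit message
c = encode(m, G)
print(c)                     # systematic form: the last k = 4 bits reproduce m
```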
It is important to recognize that the LDPC code described in this example is intended only to illustrate the procedure involved in generating such a code. In practice, the block length n is orders of magnitude larger than that considered in this example. Moreover, in constructing the matrix A, we may constrain every pair of columns to have an overlap (i.e., the inner product of any two columns of matrix A) of at most one; such a constraint, over and above the regularity constraints, is expected to improve the performance of LDPC codes. Unfortunately, with a block length as small as that considered in this example, it is difficult to satisfy this additional requirement.
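The column-overlap constraint is easy to test mechanically. The sketch below is a hypothetical helper, not from the text: it computes the maximum inner product over all pairs of distinct columns of a candidate matrix A. A return value of at most 1 means no two columns share more than one row of 1s, which is equivalent to the Tanner graph being free of length-4 cycles.

```python
import numpy as np
from itertools import combinations

def max_column_overlap(A):
    """Largest inner product between any two distinct columns of A."""
    A = np.asarray(A)
    return max(int(A[:, i] @ A[:, j])
               for i, j in combinations(range(A.shape[1]), 2))

# Usage: the overlap constraint is satisfied when
# max_column_overlap(A) <= 1.
```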
Among these properties, the minimum distance of the member codes is of particular
interest. From Section 10.4, we recall that the minimum distance of a linear block code is
the smallest Hamming distance between any pair of code vectors in the code. In contrast,
we say:
Over an ensemble of LDPC codes, the minimum distance of a member code in
the ensemble is naturally a random variable.
Elsewhere,18 it is shown that, as the block length n increases for fixed $t_c \ge 3$ and $t_r > t_c$, the probability distribution of the minimum distance can be overbounded by a function that approaches a unit step function at a fixed fraction, denoted by $\delta_{t_c t_r}$, of the block length n. Thus, for large n, practically all the LDPC codes in the ensemble have a minimum distance of at least $n\,\delta_{t_c t_r}$. Table 10.7 presents the rate and $\delta_{t_c t_r}$ of LDPC codes for different values of the weight-pair (tc, tr). From this table we see that for tc = 3 and tr = 6, the code rate r attains its highest value of 1/2 and the fraction $\delta_{t_c t_r}$ attains its smallest value; hence the preferred choice of tc = 3 and tr = 6 in constructing the LDPC code.
Table 10.7 The rate and fractional term $\delta_{t_c t_r}$ of LDPC codes for varying weight-pairs*

  tc    tr    Rate r    δ(tc, tr)
  5     6     0.167     0.255
  4     5     0.2       0.210
  3     4     0.25      0.122
  4     6     0.333     0.129
  3     5     0.4       0.044
  3     6     0.5       0.023
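The Rate column of Table 10.7 follows from the relation r = 1 − tc/tr for a regular LDPC code, assuming a full-rank parity-check matrix; the snippet below (added here as a check, not from the text) reproduces that column.

```python
# Rate r = 1 - tc/tr of a regular LDPC code with weight-pair (tc, tr),
# assuming the parity-check matrix has full rank.
for tc, tr in [(5, 6), (4, 5), (3, 4), (4, 6), (3, 5), (3, 6)]:
    print(f"tc={tc}, tr={tr}: r = {1 - tc/tr:.3f}")
```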
The decoding algorithm has two alternating steps: a horizontal step and a vertical step, which run along the rows and columns of matrix A, respectively. In the course of decoding, two probabilistic quantities associated with the nonzero elements of matrix A are alternately updated. One quantity, denoted by $P_{ij}^x$, defines the probability that bit j is symbol x (i.e., symbol 0 or 1), given the information derived via checks performed in the horizontal step except for check i. The second quantity, denoted by $Q_{ij}^x$, defines the probability that check i is satisfied given that bit j is fixed at the value x and the other bits have the probabilities $P_{ij'}$, where $j' \in \mathcal{J}(i)\backslash j$, with $\mathcal{J}(i)$ denoting the set of bits that participate in check i.
The LDPC decoding algorithm then proceeds as follows.19
Initialization
The variables $P_{ij}^0$ and $P_{ij}^1$ are set equal to the a priori probabilities $p_j^0$ and $p_j^1$ of symbols 0 and 1, respectively, with $p_j^0 + p_j^1 = 1$ for all j.
Horizontal Step
In the horizontal step of the algorithm, we run through the checks i. To this end, define

$$\Delta P_{ij} = P_{ij}^0 - P_{ij}^1$$

For each (i, j) pair corresponding to a nonzero element of matrix A, compute

$$\Delta Q_{ij} = \prod_{j' \in \mathcal{J}(i)\backslash j} \Delta P_{ij'}$$

Hence, set

$$Q_{ij}^0 = \frac{1}{2}\left(1 + \Delta Q_{ij}\right)$$
$$Q_{ij}^1 = \frac{1}{2}\left(1 - \Delta Q_{ij}\right)$$
Vertical Step
In the vertical step of the algorithm, the values of the probabilities $P_{ij}^0$ and $P_{ij}^1$ are updated using the quantities computed in the horizontal step. In particular, for each bit j, compute

$$P_{ij}^0 = \alpha_{ij}\, p_j^0 \prod_{i' \in \mathcal{J}(j)\backslash i} Q_{i'j}^0$$
$$P_{ij}^1 = \alpha_{ij}\, p_j^1 \prod_{i' \in \mathcal{J}(j)\backslash i} Q_{i'j}^1$$

where $\mathcal{J}(j)\backslash i$ denotes the set of checks in which bit j participates, excluding check i, and the scaling factor $\alpha_{ij}$ is chosen so as to satisfy the condition

$$P_{ij}^0 + P_{ij}^1 = 1 \quad \text{for all } (i, j)$$
In the vertical step, we may also update the pseudo-posterior probabilities:

$$P_j^0 = \alpha_j\, p_j^0 \prod_{i \in \mathcal{J}(j)} Q_{ij}^0$$
$$P_j^1 = \alpha_j\, p_j^1 \prod_{i \in \mathcal{J}(j)} Q_{ij}^1$$

where the scaling factor $\alpha_j$ is chosen so that $P_j^0 + P_j^1 = 1$.
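The steps above translate directly into array operations. The following Python sketch is a minimal illustration, assuming NumPy, a parity-check matrix A of 0s and 1s, and a priori probabilities p1[j] = Pr{bit j = 1}; the function name and the dense-matrix representation are choices made here for clarity, not part of the text.

```python
import numpy as np

def decode_iteration(A, p1, P1):
    """One horizontal step plus one vertical step of the probabilistic
    LDPC decoder described above.

    A  : (m, n) parity-check matrix of 0s and 1s
    p1 : (n,)   a priori probabilities p_j^1 = Pr{bit j = 1}
    P1 : (m, n) current probabilities P_ij^1 (meaningful where A[i,j] == 1)

    Returns the updated P1 and the pseudo-posteriors P_j^1.
    """
    mask = A == 1

    # Horizontal step: dP_ij = P_ij^0 - P_ij^1 = 1 - 2 P_ij^1; dQ_ij is the
    # product of dP_ij' over the bits j' in check i, excluding bit j itself.
    # (Dividing out the own entry assumes no dP is exactly zero; a careful
    # implementation would use left/right partial products instead.)
    dP = np.where(mask, 1.0 - 2.0 * P1, 1.0)   # 1.0 is neutral in products
    dQ = dP.prod(axis=1, keepdims=True) / dP   # excludes bit j itself
    Q0 = 0.5 * (1.0 + dQ)
    Q1 = 0.5 * (1.0 - dQ)

    # Vertical step: P_ij^x = alpha_ij * p_j^x * product of Q_i'j^x over the
    # checks i' on bit j, excluding check i; alpha_ij enforces
    # P_ij^0 + P_ij^1 = 1.
    Q0m = np.where(mask, Q0, 1.0)
    Q1m = np.where(mask, Q1, 1.0)
    P0_new = (1.0 - p1) * Q0m.prod(axis=0, keepdims=True) / Q0m
    P1_new = p1 * Q1m.prod(axis=0, keepdims=True) / Q1m
    P1 = np.where(mask, P1_new / (P0_new + P1_new), 0.0)

    # Pseudo-posteriors: the same products taken over all checks on bit j.
    post0 = (1.0 - p1) * Q0m.prod(axis=0)
    post1 = p1 * Q1m.prod(axis=0)
    return P1, post1 / (post0 + post1)
```

Per the initialization step, P1 starts as np.where(A == 1, p1, 0.0); one would typically iterate decode_iteration and hard-threshold the returned pseudo-posteriors at 0.5, accepting the tentative decisions once they satisfy all the parity checks or an iteration limit is reached.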
a. The degree distribution of the variable nodes in the irregular code's Tanner graph is described by

$$\lambda(X) = \sum_{d=1}^{d_N} \lambda_d X^{d-1} \qquad (10.145)$$

where X denotes a variable node in the code's Tanner graph, $\lambda_d$ denotes the fraction of variable nodes with degree d in the graph, and $d_N$ denotes the maximum degree of a variable node in the graph.
b. Correspondingly, the degree distribution of the check nodes in the irregular code’s
Tanner graph is described by
$$\rho(X) = \sum_{d=1}^{d_c} \rho_d X^{d-1} \qquad (10.146)$$
where X denotes a check node in the code's Tanner graph, $\rho_d$ denotes the fraction of check nodes with degree d in the graph, and $d_c$ denotes the maximum degree of a check node in the graph.
The irregular LDPC code embodies the regular LDPC code as a special case. Specifically,
(10.145) and (10.146) simplify as follows for the variable and check nodes of a regular
LDPC code, respectively:
$$\lambda(X) = X^{d_N - 1} \quad \text{for } \lambda_d = 1 \text{ and } d = d_N \qquad (10.147)$$

and

$$\rho(X) = X^{d_c - 1} \quad \text{for } \rho_d = 1 \text{ and } d = d_c \qquad (10.148)$$
By exploiting the two degree distributions of (10.145) and (10.146) for the variable and
check nodes, respectively, irregular LDPC codes are commonly constructed on the basis of
their Tanner graphs. Such an approach is exemplified by the irregular LDPC codes
reported in Richardson et al. (2001) and Richardson and Urbanke (2001).21
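A quick numerical check of (10.145) and (10.146): since $\lambda_d$ and $\rho_d$ are fractions of nodes, they must sum to one, i.e., $\lambda(1) = \rho(1) = 1$, and a regular code collapses each polynomial to a single term. The Python sketch below uses an illustrative, made-up distribution; it is not a distribution from the text.

```python
def degree_poly(fractions, X):
    """Evaluate sum_d fractions[d] * X**(d - 1), per (10.145)/(10.146).
    fractions maps degree d to the fraction of nodes with that degree."""
    return sum(f * X ** (d - 1) for d, f in fractions.items())

# Illustrative irregular variable-node distribution (made up for the demo):
lam = {2: 0.5, 3: 0.3, 8: 0.2}
assert abs(degree_poly(lam, 1.0) - 1.0) < 1e-12   # fractions sum to 1

# Regular special case with degree 3: lambda(X) = X**2, cf. (10.147)
regular = {3: 1.0}
print(degree_poly(regular, 0.5))                  # 0.25 = 0.5**2
```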
In the different approaches to channel coding described up to this point in the chapter, the
one common feature that describes them all may be summarized as follows:
Encoding is performed separately from modulation in the transmitter, and
likewise for decoding and detection in the receiver.
Moreover, error control is provided by transmitting additional redundant bits in the code,
which has the effect of lowering the information bit rate per channel bandwidth. That is,
bandwidth efficiency is traded for increased power efficiency.
To attain a more effective utilization of available resources, namely bandwidth and
power, coding and modulation would have to be treated as a combined (single) entity. We
may deal with this new paradigm by invoking the statement
Coding is redefined as the process of imposing certain patterns on the
modulated signal in the transmitter.
Indeed, this definition includes the traditional idea of parity-check coding.
Trellis codes for band-limited channels result from the treatment of modulation and
coding as a combined entity rather than as two separate operations. The combination itself
is referred to as trellis-coded modulation (TCM).22 This form of signaling has three basic
requirements:
1. The number of signal points in the constellation used is larger than what is required for
the modulation format of interest with the same data rate; the additional signal points
allow redundancy for forward error-control coding without sacrificing bandwidth.
2. Convolutional coding is used to introduce a certain dependency between successive
signal points, such that only certain patterns or sequences of signal points are
permitted for transmission.
3. Soft-decision decoding is performed in the receiver, in which the permissible
sequence of signals is modeled as a trellis structure; hence the name trellis codes.
Haykin_ch10_pp3.fm Page 676 Friday, January 4, 2013 5:03 PM
Requirement 3 is the result of using an enlarged signal constellation. By increasing the size
of the constellation, the probability of symbol error increases for a fixed SNR. Hence, with
hard-decision demodulation, we would face a performance loss before we begin. Performing
soft-decision decoding on the combined code and modulation trellis ameliorates this problem.
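To make requirements 1 and 2 concrete, here is a toy Python sketch; it is emphatically not Ungerboeck's actual code. QPSK's two bits per symbol are kept, but the constellation is enlarged to 8-PSK: one input bit stays uncoded while the other drives a 2-state rate-1/2 convolutional encoder, and the three resulting bits select a phase. The encoder taps and the bit-to-phase mapping are invented purely for illustration.

```python
import cmath

def tcm_encode(bits):
    """Toy TCM: pairs of input bits -> 8-PSK symbols.
    Per symbol, bit u1 is uncoded; bit u0 feeds a 2-state rate-1/2
    convolutional encoder whose outputs are u0 and u0 XOR (previous u0).
    The three resulting bits index one of eight phases."""
    state = 0
    symbols = []
    for k in range(0, len(bits), 2):
        u1, u0 = bits[k], bits[k + 1]
        c0, c1 = u0, u0 ^ state        # convolutional output pair
        state = u0                     # shift-register update
        index = 4 * u1 + 2 * c1 + c0   # 3 bits -> signal number 0..7
        symbols.append(cmath.exp(1j * cmath.pi * index / 4))
    return symbols

print(tcm_encode([1, 0, 0, 1]))
```

The dependency between successive signal points (requirement 2) comes entirely from the encoder state; an actual design would choose the bit-to-point mapping by set partitioning, as in Figure 10.37 below.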
In an AWGN channel, we look to the following approach:
Maximum likelihood decoding of trellis codes consists of finding that particular
path through the trellis with minimum squared Euclidean distance to the
received sequence.
Thus, in the design of trellis codes, the emphasis is on maximizing the Euclidean distance
between code vectors (equivalently codewords) rather than maximizing the Hamming
distance of an error-correcting code. The reason for this approach is that, except for
conventional coding with binary PSK and QPSK, maximizing the Hamming distance is
not the same as maximizing the squared Euclidean distance. Accordingly, in what follows,
the Euclidean distance between code vectors is adopted as the distance measure of
interest. Moreover, while a more general treatment is possible, the discussion is (by
choice) confined to the case of two-dimensional constellations of signal points. The
implication of such a choice is to restrict the development of trellis codes to multilevel
amplitude and/or phase modulation schemes such as M-ary PSK and M-ary QAM.
[Figure 10.37 Partitioning of the 8-PSK constellation, which shows that d0 < d1 < d2, where $d_0 = 2\sin(\pi/8) = \sqrt{2-\sqrt{2}}$, $d_1 = \sqrt{2}$, and $d_2 = 2$.]
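The three distances quoted in Figure 10.37 can be verified numerically. The sketch below, assuming a unit-radius 8-PSK constellation, computes the minimum intra-subset Euclidean distance at each level of the partitioning.

```python
import math, cmath

points = [cmath.exp(1j * math.pi * k / 4) for k in range(8)]  # unit 8-PSK

def min_distance(subset):
    """Smallest Euclidean distance between distinct points of a subset."""
    return min(abs(points[a] - points[b])
               for a in subset for b in subset if a < b)

full = range(8)        # 8-PSK:        d0 = 2 sin(pi/8) ~ 0.765
qpsk = range(0, 8, 2)  # QPSK subset:  d1 = sqrt(2)     ~ 1.414
bpsk = range(0, 8, 4)  # BPSK subset:  d2 = 2
print(min_distance(full), min_distance(qpsk), min_distance(bpsk))
```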