BCH Notes
There are no prime trinomials of degree 8; in fact, there are no prime trinomials of degree 8m for any m.
Since the rows of H are linearly independent, there are 2^8 syndromes. There are

1 + (15 choose 1) + (15 choose 2) = 1 + 15 + 105 = 121 < 2^7 < 2^8

error patterns of weight ≤ 2. This code does not achieve the Hamming bound.
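The count above can be checked directly; a quick arithmetic sketch in Python:

```python
# Count error patterns of weight <= 2 in a length-15 binary code and compare
# with the number of syndromes (2^8) and the Hamming-bound target (2^7).
from math import comb

patterns = 1 + comb(15, 1) + comb(15, 2)  # weights 0, 1, 2
assert patterns == 121
assert patterns < 2**7 < 2**8  # too few patterns to meet the Hamming bound exactly
```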
A systematic parity-check matrix can be found using the generator polynomial.
Modified error trapping can be used for the (15, 7, 5) binary BCH code.
Any 2-bit error pattern can be rotated into the 8 check positions. However, two error-trapping passes may be needed.
[Matrix row: 1 α^4 α^8 · · · α^56]
The three possible channel alphabets are GF(2) , GF(22) , and GF(24):
The BCH codes corresponding to these channel alphabets are
• (15,7) binary BCH code over GF(2) (presented earlier in lecture)
• (15,9) BCH code over GF(4)
• (15,11) Reed-Solomon code over GF(16)
The blocklengths in symbols are 15; blocklengths in bits are 15, 30, and 60.
The final two rows, corresponding to the conjugate α^4 over GF(4), are redundant.
(Row 7 equals row 1 + row 2, while row 8 equals row 2.)
Thus g(x) = f1(x)f2(x)f3(x) has degree 6 ⇒ (15, 9, 5) code over GF(4).
When we delete the redundant rows of H , we obtain the parity-check matrix of the
(15, 7) 2EC BCH code over GF(2) shown earlier.
Vandermonde matrix
Definition: The µ × µ Vandermonde matrix V(X1, . . . , Xµ) is

V = [ 1          1          · · ·  1
      X1         X2         · · ·  Xµ
      ...        ...               ...
      X1^{µ−1}   X2^{µ−1}   · · ·  Xµ^{µ−1} ]
BCH bound
Theorem: A BCH code whose parity-check matrix has d − 1 rows has dmin ≥ d.
Proof: Every set of d − 1 columns of H is linearly independent over GF(q^m).
To see this, consider the submatrix consisting of columns i1, . . . , i_{d−1}:

det [ α^{i1 b}          · · ·  α^{i_{d−1} b}
      α^{i1(b+1)}       · · ·  α^{i_{d−1}(b+1)}
      ...                      ...
      α^{i1(b+d−2)}     · · ·  α^{i_{d−1}(b+d−2)} ]

  = (α^{i1 b} · · · α^{i_{d−1} b}) · det [ 1              · · ·  1
                                           α^{i1}         · · ·  α^{i_{d−1}}
                                           ...                   ...
                                           α^{(d−2)i1}    · · ·  α^{(d−2)i_{d−1}} ]  ≠ 0

This determinant is nonzero because α^{i1} ≠ 0, . . . , α^{i_{d−1}} ≠ 0 and the second matrix is a Vandermonde matrix with distinct columns.
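The Vandermonde determinant identity behind this proof, det V = ∏_{i<j} (Xj − Xi), can be sanity-checked numerically; the sketch below uses exact integer arithmetic (the argument over GF(q^m) uses the same identity):

```python
from itertools import combinations

def det(M):
    """Determinant by cofactor expansion along the first row (fine for small matrices)."""
    if len(M) == 1:
        return M[0][0]
    total = 0
    for j in range(len(M)):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(minor)
    return total

X = [2, 3, 5, 7]  # arbitrary distinct values
V = [[x ** i for x in X] for i in range(len(X))]  # rows 1, X, X^2, X^3

product = 1
for i, j in combinations(range(len(X)), 2):
    product *= X[j] - X[i]

assert det(V) == product  # nonzero whenever the X's are distinct
```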
EE 387 Notes #7, Page 16
Design of BCH codes
Codewords of BCH code have zeroes that are d − 1 consecutive powers of α.
Conjugates over channel alphabet GF(q) are also zeroes.
The degree of the generator polynomial is the total number of conjugates.
Example: Channel alphabet GF(2), decoder alphabet GF(26).
The first six conjugacy classes, represented by exponents:
{0} {1, 2, 4, 8, 16, 32} {3, 6, 12, 24, 48, 33}
{5, 10, 20, 40, 17, 34} {7, 14, 28, 56, 49, 35} {9, 18, 36}
• d∗ = 5 requires 4 powers. Exponents {1, 2, 3, 4} ⇒ 12 conjugates.
• d∗ = 9 requires 8 powers. Exponents {1, . . . , 8} ⇒ 24 conjugates.
• d∗ = 11 requires 10 powers. Exponents {1, . . . , 10} ⇒ 27 conjugates.
• d∗ = 4 requires 3 powers.
◦ Exponents {1, 2, 3} ⇒ 12 conjugates.
◦ Better: {0, 1, 2} ⇒ 7 conjugates (expurgated code)
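The conjugacy classes above are cyclotomic cosets mod 63; a small Python sketch reproduces them and the d∗ = 5 count:

```python
# Cyclotomic coset of s mod n under multiplication by q = 2.
def coset(s, n=63, q=2):
    c, e = set(), s % n
    while e not in c:
        c.add(e)
        e = (e * q) % n
    return sorted(c)

assert coset(1) == [1, 2, 4, 8, 16, 32]
assert coset(3) == [3, 6, 12, 24, 33, 48]
assert coset(9) == [9, 18, 36]

# d* = 5 needs zeroes alpha^1, ..., alpha^4: classes of 1 and 3, 12 conjugates total.
assert len(set(coset(1)) | set(coset(3))) == 12
```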
[Matrix row: 1 α^4 α^8 · · · α^{1016}]
Generator polynomials lcm(f1(x), . . . , f4(x)) have coefficients from subfields.
GF(2^2) = {0, 1, α^85, α^170} = {00, 01, D6, D7}
GF(2^4) = {0, 1, α^17, . . . , α^238} = span{01, 0B, 98, D6}
The syndrome s(x) = r(x) mod g(x) used for error detection has a similar circuit.
Reed-Solomon encoder
A Reed-Solomon code with d∗ = 8 has the following generator polynomial:
g(x) = (x + α^{−3})(x + α^{−2})(x + α^{−1})(x + 1)(x + α^{+1})(x + α^{+2})(x + α^{+3})
     = x^7 + 6b x^6 + 09 x^5 + 9e x^4 + 9e x^3 + 09 x^2 + 6b x + 1

Since the reciprocals of its zeroes are also zeroes, g(x) is its own mirror image.
Thus the encoder corresponding to g(x) has only 3 distinct scalers.
[Encoder circuit: input m(x), scalers 9e, 09, 6b, output c(x).]
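The mirror-image property of g(x) can be checked by multiplying out its zeroes. The sketch below assumes GF(2^8) built with primitive polynomial x^8 + x^4 + x^3 + x^2 + 1 (0x11d); the hex coefficient values depend on the field representation, but the palindromic coefficient pattern does not:

```python
# Multiply out g(x) = prod_{i=-3}^{3} (x + alpha^i) over GF(2^8) and verify
# that its coefficient sequence is a palindrome (self-reciprocal polynomial).
def gf_mul(a, b):
    p = 0
    while b:
        if b & 1:
            p ^= a
        a <<= 1
        if a & 0x100:
            a ^= 0x11d  # x^8 + x^4 + x^3 + x^2 + 1 (assumed representation)
        b >>= 1
    return p

alpha = [1]
for _ in range(254):
    alpha.append(gf_mul(alpha[-1], 2))  # alpha[i] = alpha^i

def poly_mul(f, g):
    """Polynomial product, coefficients listed constant term first."""
    out = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[i + j] ^= gf_mul(a, b)
    return out

g = [1]
for i in range(-3, 4):
    g = poly_mul(g, [alpha[i % 255], 1])  # factor (x + alpha^i)

assert len(g) == 8       # degree 7
assert g == g[::-1]      # mirror-image coefficients, as claimed
```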
Syndrome equations
Error locators are elements of the decoder alphabet GF(q m).
Error magnitudes are elements of the channel alphabet GF(q).
Important special case: Yi = 1 for channel alphabet GF(2).
For today, assume narrow-sense BCH code (b = 1) with d = 2t + 1.
Partial syndromes are constants in system of 2t equations in 2ν unknowns:
S1 = Y1X1 + · · · + Yν Xν
S2 = Y1X1^2 + · · · + Yν Xν^2
  ...
S2t = Y1X1^{2t} + · · · + Yν Xν^{2t}
This is an algebraic system of equations of degree 2t.
Goal: reduce to one-variable polynomial equation with ν solutions.
Define the error locator polynomial

Λ(x) = (1 − X1x)(1 − X2x) · · · (1 − Xν x) = 1 + Λ1x + · · · + Λν x^ν .

The zeroes of Λ(x) are X1^{−1}, . . . , Xν^{−1}, the reciprocals of the error locators.
The degree of Λ(x) is the number of errors. Elegant!
PGZ in general
By definition of the error locator polynomial, Λ(Xi^{−1}) = 0:

1 + Λ1Xi^{−1} + · · · + Λν Xi^{−ν} = 0   (i = 1, . . . , ν)
Lemma: Suppose that there are ν ≤ t symbol errors. Then Mν is nonsingular, but
Mµ is singular for µ > ν .
Matrices that are constant along anti-diagonals are called Hankel matrices.
Mµ = V(X1, . . . , Xµ) · diag(Y1X1, . . . , YµXµ) · V(X1, . . . , Xµ)^T ,

where V is the Vandermonde matrix defined earlier.
The Forney algorithm, Yi = −Ω(Xi^{−1}) / Λ′(Xi^{−1}), is a “closed form” solution for step 5.
Another trick to reduce encoder complexity is to choose b so that the generator polynomial is reversible — inverses of
zeroes are also zeroes — so half as many scalers are needed in the encoding circuit.
3EC Reed-Solomon decoding (1)
The partial syndromes defined by Sj = r(α^j) for j = 0, . . . , 5 satisfy the equations

S0 = Y1 + Y2 + Y3
S1 = Y1X1 + Y2X2 + Y3X3
S2 = Y1X1^2 + Y2X2^2 + Y3X3^2
S3 = Y1X1^3 + Y2X2^3 + Y3X3^3
S4 = Y1X1^4 + Y2X2^4 + Y3X3^4
S5 = Y1X1^5 + Y2X2^5 + Y3X3^5
where X1, X2, X3 are error location numbers and Y1, Y2, Y3 are error magnitudes.
The coefficients of the error locator polynomial Λ(x) satisfy the linear equations:
[ S0  S1  S2 ] [ Λ3 ]   [ S3 ]
[ S1  S2  S3 ] [ Λ2 ] = [ S4 ]
[ S2  S3  S4 ] [ Λ1 ]   [ S5 ]
Starting from the partial syndromes S0, S1, . . . , S5 , approximately 30 Galois field
multiplications and 3 Galois field divisions are needed to perform decoding.
This estimate does not count the effort needed to find the zeroes of Λ(x).
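The linear system above follows from Λ(Xi^{−1}) = 0: multiplying by YiXi^{j+3} and summing over i gives S_{j+3} = Λ1S_{j+2} + Λ2S_{j+1} + Λ3S_j in characteristic 2. A sketch over GF(2^4) (primitive polynomial x^4 + x + 1 assumed, with made-up locators and magnitudes) verifies the three rows:

```python
# Verify the 3EC syndrome recursion over GF(2^4).
def gf_mul(a, b):
    p = 0
    while b:
        if b & 1:
            p ^= a
        a <<= 1
        if a & 0x10:
            a ^= 0x13  # x^4 + x + 1 (assumed representation)
        b >>= 1
    return p

def gf_pow(a, e):
    r = 1
    for _ in range(e):
        r = gf_mul(r, a)
    return r

alpha = [1]
for _ in range(14):
    alpha.append(gf_mul(alpha[-1], 2))

X = [alpha[2], alpha[7], alpha[11]]  # error location numbers (made up)
Y = [alpha[1], alpha[5], alpha[9]]   # error magnitudes (made up)

# Partial syndromes S_j = sum_i Y_i X_i^j for j = 0..5
S = [0] * 6
for j in range(6):
    for i in range(3):
        S[j] ^= gf_mul(Y[i], gf_pow(X[i], j))

# Lambda coefficients: elementary symmetric functions of the locators (char 2)
L1 = X[0] ^ X[1] ^ X[2]
L2 = gf_mul(X[0], X[1]) ^ gf_mul(X[0], X[2]) ^ gf_mul(X[1], X[2])
L3 = gf_mul(gf_mul(X[0], X[1]), X[2])

# Row j of the matrix equation: S_{j+3} = L1 S_{j+2} + L2 S_{j+1} + L3 S_j
for j in range(3):
    assert S[j + 3] == gf_mul(L1, S[j + 2]) ^ gf_mul(L2, S[j + 1]) ^ gf_mul(L3, S[j])
```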
Horner’s method evaluates S1 = r(α) and S3 = r(α^3) using scalers α and α^3.

Multiplication by α and α^3 uses the matrices (α^5 + α^2 + 1 = 0):

     [ 0 1 0 0 0 ]         [ 0 0 0 1 0 ]
     [ 0 0 1 0 0 ]         [ 0 0 0 0 1 ]
Mα = [ 0 0 0 1 0 ] , Mα3 = [ 1 0 1 0 0 ]
     [ 0 0 0 0 1 ]         [ 0 1 0 1 0 ]
     [ 1 0 1 0 0 ]         [ 0 0 1 0 1 ]

[Circuit for computing r(α) and r(α^3).]
Chien search
The Chien search is a clever method for finding zeroes of the error locator
polynomial by brute force.
The Chien search evaluates Λ(α^i) for i = 1, 2, . . . , n using ν constant multiplications instead of ν general multiplications.

Key idea: use ν state variables Q1, . . . , Qν such that at time i

Qj = Λj α^{ji} ,  j = 1, . . . , ν .

Each state variable is updated by multiplication by a constant:

Qj → Qj α^j ,  i = 1, . . . , n .

The sum of the state variables at time i is Σ_{j=1}^{ν} Qj = Λ(α^i) − 1.
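The update-and-sum loop can be sketched over GF(2^4) (primitive polynomial x^4 + x + 1 assumed; the two-error example is made up):

```python
# Chien search for the zeroes of Lambda(x) = (1 - X1 x)(1 - X2 x) over GF(2^4).
def gf_mul(a, b):
    p = 0
    while b:
        if b & 1:
            p ^= a
        a <<= 1
        if a & 0x10:
            a ^= 0x13  # x^4 + x + 1 (assumed representation)
        b >>= 1
    return p

alpha = [1]
for _ in range(14):
    alpha.append(gf_mul(alpha[-1], 2))
n = 15

X1, X2 = alpha[3], alpha[10]       # error locators (made up)
L = [X1 ^ X2, gf_mul(X1, X2)]      # Lambda_1 = X1 + X2, Lambda_2 = X1 X2 (char 2)
Q = list(L)                        # state variables Q_j = Lambda_j alpha^{j i}

zeros = []
for i in range(1, n + 1):
    for j in range(2):
        Q[j] = gf_mul(Q[j], alpha[j + 1])  # constant multiplication by alpha^{j+1}
    if Q[0] ^ Q[1] == 1:           # sum = Lambda(alpha^i) - 1, so zero when sum = 1
        zeros.append(i)

# Zeroes are the reciprocals of the locators: alpha^{-3} = alpha^{12}, alpha^{-10} = alpha^5.
assert zeros == [5, 12]
```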
[Chien search circuits: registers Λ1, . . . , Λt with constant scalers α, α^2, . . . , α^t; the sum Σ drives the /ERRLOC output. Variants with outputs /ERRLOC0 and /ERRLOC1 test two positions per clock.]
The cascade of two scalers for α^i may be slightly slower than one scaler for α^{2i}.
In both cases, the left-hand side is the coefficient of x^{ν+j−1} in the polynomial product S(x)Λ(x).
We can define partial syndromes Si for every i > 0 . However, the decoder can compute only the first 2t values.
Formal derivative
We can obtain a closed formula for Yi in terms of Ω(x), Λ(x), and Xi .
First we need the notion of the formal derivative of a polynomial.
Definition: The formal derivative of

f(x) = f0 + f1x + f2x^2 + · · · + fnx^n

is the polynomial

f′(x) = f1 + 2f2x + · · · + nfnx^{n−1} .
Next we use the definition Λ(x) = ∏_{l=1}^{ν} (1 − Xlx) to compute S(x)Λ(x):

S(x)Λ(x) = Σ_{i=1}^{ν} YiXi^b ∏_{l≠i} (1 − Xlx) − Σ_{i=1}^{ν} YiXi^b (Xix)^{2t} ∏_{l≠i} (1 − Xlx)
When we evaluate Λ′(x) at Xi^{−1}, only one term in the sum is nonzero:

Λ′(Xi^{−1}) = −Xi ∏_{j≠i} (1 − XjXi^{−1}) .
Similarly, the value of Ω(x) at Xi^{−1} includes only one term from the sum:

Ω(Xi^{−1}) = Σ_{l=1}^{ν} YlXl^b ∏_{j≠l} (1 − XjXi^{−1}) = YiXi^b ∏_{j≠i} (1 − XjXi^{−1}) .
Thus

Ω(Xi^{−1}) / Λ′(Xi^{−1}) = YiXi^b / (−Xi)   ⇒   Yi = −Xi^{−(b−1)} Ω(Xi^{−1}) / Λ′(Xi^{−1}) .
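In characteristic 2 the formal derivative is especially simple: the integer factor j in jf_j means f_j added to itself j times, so every even-degree term vanishes. A minimal sketch:

```python
# Formal derivative over a field of characteristic 2
# (coefficients listed constant term first).
def formal_derivative_char2(f):
    return [f[j] if j % 2 == 1 else 0 for j in range(1, len(f))]

# Example: f(x) = 1 + x + x^2 + x^3  ->  f'(x) = 1 + x^2
assert formal_derivative_char2([1, 1, 1, 1]) == [1, 0, 1]
```

This is what makes evaluating Λ′(α^i) cheap: only the odd-degree coefficients of Λ(x) contribute.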
[Forney algorithm circuit: registers Ω1, . . . , Ωt and Λ′1, . . . , Λ′t with scalers α, α^2, . . . , α^t and α^{b−1}; the sums give Ω(α^i) and Λ′(α^i), whose quotient yields Yi.]
Unless Si = 0 for i ≤ t , and not all Si = 0 , in which case an uncorrectable error has occurred.
4. Finalize:

∆ ← A22(0); Λ(x) ← ∆^{−1}A22(x); Ω(x) ← ∆^{−1}t(x).

The quotient ⌊s(x)/t(x)⌋ is defined by s(x) = ⌊s(x)/t(x)⌋ t(x) + r(x), where deg r(x) < deg t(x).
Ω(x) = 04 F4 16 CC F2 BA ,  Λ(x) = 01 7C 95 B7 09 DA 82
Op count: mul = 169, div = 25
[Table: Λ and T register contents of the Berlekamp-Massey shift-register implementation at iterations 1–12.]
[Plots: LFSR length L versus iteration k of the Berlekamp-Massey algorithm, for ν = 3 and ν = 4 errors.]
X2 = (Λ1/Λ2)(U1 + 1) = X1 + Λ1/Λ2 .
If 2^m is not too large, we can store a table of zeroes of u^2 + u + c.
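Such a table can be built by enumeration; a sketch for GF(2^4) (primitive polynomial x^4 + x + 1 assumed):

```python
# For each u, u^2 + u is some c; record u as a zero of u^2 + u + c.
def gf_mul(a, b):
    p = 0
    while b:
        if b & 1:
            p ^= a
        a <<= 1
        if a & 0x10:
            a ^= 0x13  # x^4 + x + 1 (assumed representation)
        b >>= 1
    return p

root = {}
for u in range(16):
    c = gf_mul(u, u) ^ u
    root.setdefault(c, u)  # u and u + 1 give the same c; keep one representative

# Exactly half of the field elements are of the form u^2 + u (the trace-zero
# elements), so u^2 + u + c is solvable for 8 of the 16 possible c.
assert len(root) == 8
for c, u in root.items():
    assert gf_mul(u, u) ^ u == c
```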
Erasure correction
Erasures are special received symbols used to represent uncertainty. Examples:
• Demodulator erases a symbol when signal quality is poor.
• Lower level decoder erases symbols of codeword that has an uncorrectable error.
Theorem: A block code can correct up to d∗ − 1 erasures.
Proof: If the number of erasures is less than d∗, then there is only one codeword that agrees with the received sequence in the unerased positions: two distinct codewords differ in at least d∗ positions, so they cannot both match r there.
[Figure: codewords c1 and c2 and received word r with < d∗ erasures; both codewords cannot agree with r in the unerased positions.]
[Figure: subframes 1-3 protected by checkframes 1 and 2.]
When only one subframe is lost, it can be replaced by XORing the other subframes.
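The XOR recovery can be sketched directly; the frame contents below are made up:

```python
# Single-lost-subframe recovery by XOR parity.
def xor_frames(frames):
    out = bytes(len(frames[0]))
    for f in frames:
        out = bytes(a ^ b for a, b in zip(out, f))
    return out

subframes = [b"\x11\x22", b"\x33\x44", b"\x55\x66"]
check = xor_frames(subframes)  # checkframe = XOR of all subframes

# Lose subframe 1; XOR the surviving subframes with the checkframe to restore it.
recovered = xor_frames([subframes[0], subframes[2], check])
assert recovered == subframes[1]
```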
If the number of errors is ≤ ⌊(d∗ − 1 − ρ)/2⌋ then

dH(ĉ, r) ≤ ⌊ρ/2⌋ + ⌊(d∗ − 1 − ρ)/2⌋ ≤ ρ/2 + (d∗ − 1 − ρ)/2 = (d∗ − 1)/2 .
This shows that r is within the decoding sphere of ĉ.
Error and erasure correction: Berlekamp-Massey
1. Compute erasure locator polynomial Γ(x) = ∏_{l=1}^{ρ} (1 − Ulx).
2. Compute partial syndrome polynomial S(x) using 0 for erased locations.
3. Compute modified syndrome polynomial Ξ(x) = S(x)Γ(x) mod x^{2t}. Modified syndromes are Ξ1, . . . , Ξ2t−ρ.
4. Run Berlekamp-Massey algorithm with the modified syndromes Ξ1, . . . , Ξ2t−ρ to find the error-locator polynomial Λ(x) of degree ≤ (2t − ρ)/2.
5. Use the modified key equation to find error evaluator polynomial Ω(x):
Ω(x) = S(x)Λ(x)Γ(x) mod x^{2t} = S(x)Ψ(x) mod x^{2t}
where Ψ(x) = Λ(x)Γ(x) is the error-and-erasure locator polynomial.
6. Use modified Forney algorithm to compute error magnitudes:
Yi = −Xi^{1−b} Ω(Xi^{−1}) / Ψ′(Xi^{−1}) ,   El = −Ul^{1−b} Ω(Ul^{−1}) / Ψ′(Ul^{−1})
for i = 1, . . . , ν and l = 1, . . . , ρ.
Then use errors-and-erasures decoding, where missing check symbols are erased.