
Implementation of APP decoders on dual code trellis

2008, Electronics Letters

For high rate k/n convolutional codes (k/n > 0.5), a trellis based implementation of a posteriori probability (APP) decoders is less complex on the dual code trellis owing to its branch complexity ($2^{n-k}$) being lower than that of the code trellis ($2^k$). The log scheme used for APP decoders is not attractive for practical implementation owing to heavy quantisation requirements. As an alternative, presented is an arc hyperbolic tangent (AHT) scheme for implementing the dual APP decoder. The trellis based implementation of this AHT dual APP decoder is discussed and some fundamental differences between primal APP and dual APP decoders that have an effect on a quantised implementation are reported.

S. Srinivasan and S.S. Pietrobon

Introduction: A symbol-by-symbol decoding algorithm working on the dual code was first presented in [1] for equally likely symbols and hard decision outputs. A soft input soft output (SISO) version was first presented in [2]. The numerical computation requirements of a dual APP (DAPP) decoder are different from those of a normal APP decoder. Consequently, finding a suitable metric representation scheme (like the log scheme for APP decoders) is not straightforward. Also problematic are the approximations to be used for a suboptimal, reduced complexity implementation. In this Letter, we describe a trellis based implementation of the DAPP decoder using an arc hyperbolic tangent (AHT) scheme. We point out some key differences between a normal APP decoder and a DAPP decoder with regard to suboptimal implementations and explain why approximations like max-log are not suitable in DAPP decoding.

Dual trellis APP decoding: Let u be the information sequence entering a k/n systematic convolutional encoder for the code C. The encoder transmits a codeword v of length N bits (corresponding to N/n trellis stages) into a memoryless additive white Gaussian noise (AWGN) channel.
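The branch complexity figures quoted in the abstract can be made concrete with a small sketch (the function name and example rates below are my own illustration, not from the Letter):

```python
# Per-state branch counts on the primal (2^k) and dual (2^(n-k))
# trellises for a rate k/n code; the dual trellis has fewer
# branches per state exactly when k/n > 0.5.
def branch_counts(k, n):
    return 2**k, 2**(n - k)

primal, dual = branch_counts(2, 3)  # rate 2/3: 4 primal vs 2 dual branches
```

At rate 1/2 the two counts coincide, consistent with 0.5 being the nominal cross-over rate.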
Given the corresponding received sequence y, and using the notations L(I) and L(O) for the input and output likelihood ratios (LRs) of the SISO decoder, we define for $0 \le i < N$: the channel LR $L_i^c(I) \triangleq p(y_i|v_i=1)/p(y_i|v_i=0)$, the a priori LR $L_i'(I) \triangleq p(u_i=1)/p(u_i=0)$, the intrinsic LR $L_i(I) = L_i^c(I)L_i'(I)$, and the APP LR $L_i(O) \triangleq p(u_i=1|\mathbf{y})/p(u_i=0|\mathbf{y}) = L_i(I)L_i'(O)$, where $L_i'(O)$ is the extrinsic LR. The trellis of the reciprocal dual code $C^\perp$ [3], generated by the parity check matrix H, is used for DAPP decoding. We have (see [6] for the derivation)

$$L_i(O) = L_i(I)\left(\frac{1 - Q_i^1/(Q_i^0\,\Gamma_i)}{1 + Q_i^1/(Q_i^0\,\Gamma_i)}\right) \quad (1)$$

In the above, $\Gamma_i$ is the DAPP bit metric given by

$$\Gamma_i = \frac{1 - L_i(I)}{1 + L_i(I)} = \tanh(\lambda_i(I)/2) \quad (2)$$

where the lower case notation $\lambda_i(I)$ is the log likelihood ratio (LLR) of the intrinsic values (using the negative log domain), i.e. $\lambda_i(I) = -\ln L_i(I)$, and

$$Q_i^b = \sum_{s_1\in S_t}\ \sum_{\substack{s_2\in S_A(s_1)\\ b_{i-nt}(s_1,s_2)=b}} A_t(s_1)\,\Gamma_t(s_1,s_2)\,B_{t+1}(s_2) \quad (3)$$

where the trellis stage index is $t = \lfloor i/n\rfloor$ ($0 \le t < N/n$), $S_t$ refers to the set of possible starting states at time t, $S_A(s)$ refers to the set of possible next states from state s, $b_j(s_1,s_2)$ ($0 \le j < n$) is the jth bit on the branch from $s_1$ to $s_2$, $A_t(s)$ and $B_t(s)$ are the forward and backward state metrics, and $\Gamma_t(s_1,s_2)$ is the branch metric of the branch $s_1 \to s_2$, given by

$$\Gamma_t(s_1,s_2) = \prod_{j=0}^{n-1} \Gamma_{nt+j}^{\,b_j(s_1,s_2)}$$

The forward and backward state metrics follow the usual definitions [4].

Log domain implementation: Since the DAPP bit metric $\Gamma_i$ can take both positive and negative values, the branch and state metrics can also be of both signs. Thus, in a log domain implementation of a DAPP decoder, we would have to perform both the addition equivalent operation E (also known as max* in the +ln domain) and the subtraction equivalent operation Ē. For the addition of two non-negative real numbers A and B, represented in the negative log domain as a and b ($a = -\ln A$, $b = -\ln B$), we have $A + B$: $a \mathbin{E} b = \min(a,b) - \ln(1 + e^{-|a-b|})$. The second term, $-\ln(1 + e^{-|d|})$, where d is the difference between the log domain values, is called the correction term. We also define another log domain operator that gives the absolute difference between two non-negative real numbers: $|A - B|$: $a \mathbin{\bar{E}} b = \min(a,b) - \ln(1 - e^{-|a-b|})$. Unlike the correction term in the E operation, which only takes values in the range $[-\ln 2, 0]$, the correction term for the Ē operation, $-\ln(1 - e^{-|d|})$, can take any value in the range $[0, \infty)$. Moreover, this correction term starts asymptotically from infinity, taking extremely large values for small differences between the values of a and b.

Similar to the log representation used for normal APP decoder implementation, a sign-magnitude (SM) scheme was suggested by Montorsi and Benedetto [5]. In this scheme, every number in the decoder is represented by its sign and the logarithm of its magnitude. Here, the multiplications are simply implemented by additions. However, the addition, which is required for all state metric calculations, is quite complex to implement because of the requirement to accommodate Ē operations, whose quantised implementation is not straightforward owing to the asymptotic region. Therefore, a fixed point implementation is not very attractive for the SM scheme. As an alternative to this SM scheme, we introduced the arc hyperbolic tangent (AHT) representation scheme in [6].

AHT representation: In this scheme, an inverse transformation ($\tanh^{-1}$) is used to represent all the metric values in the decoder. An important feature is that the AHT DAPP decoder takes as input the LLR values for the channel soft information and a priori information and gives the extrinsic and APP LLR values as output, similar to a normal log-APP SISO decoder. Here, a number x is represented as $L^{-1}(x)$, where (for $|x| \le 1$) $L^{-1}(x) = -\ln\left(\frac{1-x}{1+x}\right) = 2\tanh^{-1} x$. Evidently, $L(x) = \tanh(x/2)$. So, using (2), we have $L^{-1}(\Gamma_i) = L^{-1}(L(\lambda_i(I))) = \lambda_i(I)$. Thus the DAPP bit metric (2) is translated back to an LLR. Also note that the positive and negative metrics are handled in the same way. For the sake of brevity, we give in the following the equivalent arithmetic operations in the AHT domain (for proofs, see [6]).

Arithmetic operations: Here, the operations between two different metrics $\lambda_1$ and $\lambda_2$ are defined. The subscripts do not refer to time, but to two different values. Note that the AHT domain operations are expressed in terms of the log domain operations E and Ē. For multiplication, we have

$$L^{-1}(L(\lambda_1)L(\lambda_2)) = (\lambda_1 \mathbin{E} \lambda_2) - (0.0 \mathbin{E} (\lambda_1+\lambda_2)) = \mathrm{sign}(\lambda_1)\,\mathrm{sign}(\lambda_2)\min(|\lambda_1|,|\lambda_2|) - \ln(1+e^{-|\lambda_1-\lambda_2|}) + \ln(1+e^{-|\lambda_1+\lambda_2|})$$

The above operation is similar to the check node computation performed in low density parity check (LDPC) decoders. However, the min-sum approximation, where the correction term is omitted, causes heavy performance loss in an AHT DAPP decoder. For addition, with the condition $|L(\lambda_1) + L(\lambda_2)| \le 1$, we have

$$L^{-1}(L(\lambda_1)+L(\lambda_2)) = \lambda_1+\lambda_2 - \ln\left(\frac{3+e^{\lambda_1}+e^{\lambda_2}-e^{\lambda_1+\lambda_2}}{3+e^{-\lambda_1}+e^{-\lambda_2}-e^{-(\lambda_1+\lambda_2)}}\right) \approx \lambda_1+\lambda_2$$

For $|L(\lambda_1)|, |L(\lambda_2)| < 0.5$, the approximation above causes no noticeable performance loss and results in a significant reduction in complexity. For division, we have

$$L^{-1}(L(\lambda_1)/L(\lambda_2)) = (\lambda_1 \mathbin{\bar{E}} \lambda_2) - (0.0 \mathbin{\bar{E}} (\lambda_1+\lambda_2))$$

The above AHT operations are used to implement (3) on the dual trellis. The LLR extrinsic values are directly calculated by the AHT DAPP decoder as $L^{-1}(Q_i^1/(Q_i^0\,\Gamma_i)) = \lambda_i'(O)$, since $\lambda_i'(O) = -\ln\left(\frac{1 - Q_i^1/(Q_i^0\,\Gamma_i)}{1 + Q_i^1/(Q_i^0\,\Gamma_i)}\right)$ from (1). Note that, unlike [3], the AHT decoder does not compute partial products for branch metrics.
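As a compact numerical sketch of the operators defined in this Letter (the function names and test values below are mine, not from the Letter), the negative log domain E and Ē operators and the AHT domain multiplication and division can be written as:

```python
import math

def op_E(a, b):
    # Addition-equivalent operator in the negative log domain:
    # for A = e^-a, B = e^-b, returns -ln(A + B)
    # = min(a, b) - ln(1 + e^-|a-b|); the correction term lies in [-ln 2, 0].
    return min(a, b) - math.log1p(math.exp(-abs(a - b)))

def op_E_bar(a, b):
    # Subtraction-equivalent operator: returns -ln|A - B|
    # = min(a, b) - ln(1 - e^-|a-b|); here the correction term is unbounded
    # and blows up as a approaches b (the "asymptotic region").
    return min(a, b) - math.log(1.0 - math.exp(-abs(a - b)))

def aht_multiply(l1, l2):
    # AHT-domain product: returns z such that
    # tanh(z/2) = tanh(l1/2) * tanh(l2/2), in the expanded sign-min form
    # with both correction terms kept (cf. the LDPC check node update).
    s = math.copysign(1.0, l1) * math.copysign(1.0, l2)
    return (s * min(abs(l1), abs(l2))
            - math.log1p(math.exp(-abs(l1 - l2)))
            + math.log1p(math.exp(-abs(l1 + l2))))

def aht_divide(l1, l2):
    # AHT-domain quotient (l1 Ebar l2) - (0.0 Ebar (l1 + l2)),
    # sketched here for 0 < l1 < l2 only, so that the underlying
    # ratio tanh(l1/2) / tanh(l2/2) stays inside (0, 1).
    return op_E_bar(l1, l2) - op_E_bar(0.0, l1 + l2)
```

Both AHT operations can be checked against direct tanh arithmetic; for instance, aht_multiply(1.0, 2.0) agrees with 2·tanh⁻¹(tanh(0.5)·tanh(1.0)).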
Instead, the corresponding bit metric ($\Gamma_i$) is divided out in the calculation of the extrinsic LLR, thereby saving some computation. Note also that the AHT domain decoder requires only two Ē operations per decoded bit, which are required for implementing the one division in the extrinsic LLR calculation. Normalisation of the state metrics in the AHT domain is needed to maintain the values in the valid range $[-1, 1]$, and can be done by a simple multiplication instead of a division. The AHT domain decoder thus saves, to a great extent, the complexity of implementing the Ē function.

Suboptimal implementations: A practical implementation of APP decoders always uses some approximations to reduce the complexity and to allow a coarser quantisation. The max-log approximation, in which the correction term for the E operation is neglected, is widely used in APP decoders. Additionally, for moderate to high SNRs, the bit width used for representing the state metrics is also quite small. On the other hand, in a DAPP decoder implementation, regardless of the scheme used (SM or AHT), there are some key aspects that differ from a normal APP decoder.

1. As a consequence of the orthogonality between the primal and dual codewords, the metric values of all the paths along the dual trellis are equally important for successful decoding, regardless of the SNR. Hence, a path selection approach similar to Viterbi decoding is not applicable here.

2. Since the LLR magnitudes increase with increasing SNR, in a normal APP decoder the most likely (trellis) path metric is far removed from the other path metric values. Because of this, the minimum quantisation required to decode reliably becomes coarser and coarser, thereby also allowing approximations like max-log to perform without any severe performance degradation.
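The benign behaviour of max-log in a normal APP decoder can be illustrated numerically (the helper name and metric values below are my own illustration, written in the +ln domain counterpart of the E operator, not from the Letter):

```python
import math

def max_star(a, b):
    # Jacobian logarithm in the +ln domain: ln(e^a + e^b)
    # = max(a, b) + ln(1 + e^-|a-b|); max-log simply drops the second term.
    return max(a, b) + math.log1p(math.exp(-abs(a - b)))

# Widely separated path metrics (high SNR in a normal APP decoder):
# the neglected correction term is tiny, so max-log is near-exact.
err_far = max_star(10.0, 0.0) - max(10.0, 0.0)   # about 4.5e-5

# Nearly equal path metrics: the correction term approaches ln 2,
# so discarding it introduces a non-negligible error per operation.
err_near = max_star(0.1, 0.0) - max(0.1, 0.0)    # about 0.64
```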
On the other hand, in the DAPP decoder, as we increase the SNR, the various metrics get closer to each other in magnitude (see the definition of $\Gamma_i$ in (2)). However, the magnitude differences between the various metrics still have to be preserved for reliable decoding. Because of this phenomenon, the minimum quantisation required for reliable decoding becomes finer and finer with increasing SNR.

3. In iterative decoders, as the effective SNR increases over the iterations, the a priori LLR values also increase in magnitude. In DAPP decoding, this again causes numerical problems owing to the bit metrics becoming extremely close to magnitude one. As a consequence, depending on the code used, there is a threshold SNR beyond which even floating point models start to be inadequate in resolution. To overcome this phenomenon, magnitude limiting of the extrinsic and intrinsic values may be necessary even in floating point simulation models.

Conclusion: Implementing a DAPP decoder requires a large number of quantisation bits and large look-up tables to achieve the same performance as an APP decoder. The AHT scheme reduces the quantisation requirements significantly. Even so, because of the nature of the DAPP metrics, many approximations that work well in log-APP decoding perform very poorly in a DAPP decoder. Considering the above, we point out that a fair comparison of decoding complexity between the primal and dual domains should take into account the quantisation requirements for all the stages in the decoder. Thus, in practice, the cross-over code rate above which the DAPP decoder holds a complexity advantage is considerably higher than 0.5.

© The Institution of Engineering and Technology 2008
7 November 2007
Electronics Letters online no: 20083181
doi: 10.1049/el:20083181

S. Srinivasan and S.S. Pietrobon (Institute for Telecommunications Research, University of South Australia, Mawson Lakes, SA 5095, Australia)
E-mail: [email protected]

S.S.
Pietrobon: Also with Small World Communications, 6 First Avenue, Payneham South, SA 5070, Australia

References

1 Hartmann, C.R.P., and Rudolph, L.D.: 'An optimum symbol-by-symbol decoding rule for linear codes', IEEE Trans. Inf. Theory, 1976, IT-22, pp. 514–517
2 Hagenauer, J., Offer, E., and Papke, L.: 'Iterative decoding of binary block and convolutional codes', IEEE Trans. Inf. Theory, 1996, 42, pp. 429–445
3 Riedel, S.: 'MAP decoding of convolutional codes using reciprocal dual codes', IEEE Trans. Inf. Theory, 1998, 44, pp. 1176–1187
4 Bahl, L.R., Cocke, J., Jelinek, F., and Raviv, J.: 'Optimal decoding of linear codes for minimizing symbol error rate', IEEE Trans. Inf. Theory, 1974, IT-20, pp. 284–287
5 Montorsi, G., and Benedetto, S.: 'An additive version of the SISO algorithm for the dual code'. IEEE Int. Symp. Information Theory, Washington, USA, June 2001, p. 27
6 Sudharshan, S., and Pietrobon, S.S.: 'Arc hyperbolic tangent representation scheme for APP decoders working on the dual code'. Int. Symp. on Turbo Codes and Related Topics, Munich, Germany, April 2006, paper 106

ELECTRONICS LETTERS 10th April 2008 Vol. 44 No. 8