Electronics Letters, 2008
For high-rate k/n convolutional codes (k/n > 0.5), a trellis-based implementation of a posteriori probability (APP) decoders is less complex on the dual-code trellis, owing to its branch complexity (2^(n-k)) being lower than that of the code trellis (2^k). The log scheme used for APP decoders is not attractive for practical implementation owing to heavy quantisation requirements. As an alternative, an arc hyperbolic tangent (AHT) scheme for implementing the dual-APP decoder is presented. The trellis-based implementation of this AHT dual-APP decoder is discussed, and some fundamental differences between primal-APP and dual-APP decoders that affect a quantised implementation are reported.
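As a rough illustration of the branch-complexity comparison above (a sketch with assumed example rates, not code from the paper), the following tabulates the per-state branch counts 2^k for the code trellis and 2^(n-k) for the dual trellis:

```python
# Per-state branch counts for a rate-k/n convolutional code:
# 2^k on the code (primal) trellis versus 2^(n-k) on the dual-code trellis.
for k, n in [(1, 2), (2, 3), (4, 5), (7, 8), (9, 10)]:
    primal = 2 ** k        # branches leaving each state of the code trellis
    dual = 2 ** (n - k)    # branches leaving each state of the dual trellis
    print(f"rate {k}/{n}: primal {primal:4d}, dual {dual}")
```

For k/n > 0.5 the dual count is the smaller of the two, which is the motivation for dual-APP decoding of high-rate codes.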
IEEE Transactions on Information Theory, 2010
This paper deals with a posteriori probability (APP) decoding of high-rate convolutional codes, using the dual code's trellis. After deriving the dual APP (DAPP) algorithm from the APP relation, its trellis-based implementation is addressed. The challenge involved in practical implementation of a DAPP decoder is then highlighted. Metric representation schemes similar to the log domain used for log-APP decoding are shown to be unattractive for DAPP decoding due to quantization requirements. After explaining the nature of the DAPP metrics, an arc hyperbolic tangent (AHT) scheme is proposed and its equivalent arithmetic operations derived. By using an efficient approximation, an addition is translated to an addition in the AHT domain. Efficient techniques for normalization and extrinsic log-likelihood ratio (LLR) calculation are presented which reduce implementation complexity significantly. Simulation results with different high-rate codes are given to show that the AHT-DAPP decoder performs similarly to a log-APP decoder and at the same time performs better than a decoder for a punctured code. A fully fixed-point model of an AHT-DAPP decoder is shown to perform close to an optimum decoder. The decoding complexities of the log-APP and AHT-DAPP decoders are listed and compared for several rate-k/(k+1) codes. It is shown that the AHT-DAPP decoder starts to be less complex from a code rate of 7/8. When compared against a max-log-APP decoder, the AHT-DAPP decoder is less complex at a code rate of 9/10 and above.
Index Terms: Arc hyperbolic tangent (AHT), a posteriori probability (APP), complexity, convolutional code, dual code, fixed point, high rate, implementation, maximum a posteriori probability (MAP) decoder.
I. INTRODUCTION. Communication and recording applications increasingly require error control codes of high rate that give high power gains with a reasonable implementation complexity. In situations where the bandwidth expansion due to coding should be minimized, high-rate convolutional codes are used. This is typically the case when high data rates are used or when bandwidth is scarce. Among high-rate convolutional codes, rate-k/(k+1) codes are the most commonly used. Iterative decoding techniques that achieve performance close to the Shannon limit require the component decoders to be of a
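The AHT name comes from the arc hyperbolic tangent (artanh). As background only, the sketch below shows the textbook tanh/artanh relation that underlies dual-domain soft decoding of a single parity constraint, together with a common low-complexity approximation; it is not the paper's DAPP metric recursion or its specific addition approximation.

```python
import math

def parity_extrinsic_llr(llrs):
    """Extrinsic LLR of one parity constraint: L = 2 * artanh( prod_i tanh(L_i / 2) )."""
    prod = 1.0
    for L in llrs:
        prod *= math.tanh(L / 2.0)
    prod = max(min(prod, 1.0 - 1e-12), -(1.0 - 1e-12))  # keep artanh finite
    return 2.0 * math.atanh(prod)

def parity_extrinsic_llr_approx(llrs):
    """Low-complexity approximation: product of signs times the minimum magnitude."""
    sign = 1.0
    for L in llrs:
        if L < 0:
            sign = -sign
    return sign * min(abs(L) for L in llrs)

print(parity_extrinsic_llr([1.2, -0.7, 2.5]))        # approx. -0.31
print(parity_extrinsic_llr_approx([1.2, -0.7, 2.5])) # -0.7
```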
2020 IEEE Wireless Communications and Networking Conference (WCNC), 2020
Decoding using the dual trellis is considered a potential technique to increase the throughput of soft-input soft-output decoders for high-coding-rate convolutional codes. However, the dual Log-MAP algorithm suffers from a high decoding complexity. More specifically, the source of complexity is the soft-output unit, which has to handle a high number of extrinsic values in parallel. In this paper, we present a new low-complexity sub-optimal decoding algorithm using the dual trellis, namely the dual Max-Log-MAP algorithm, suited for high-coding-rate convolutional codes. A complexity analysis and simulation results are provided to compare the dual Max-Log-MAP and dual Log-MAP algorithms. Despite a minor loss of about 0.2 dB in performance, the dual Max-Log-MAP algorithm significantly reduces the decoder complexity, making it a first-choice algorithm for high-throughput, high-rate decoding of convolutional and turbo codes.
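The performance gap mentioned above stems from the generic Max-Log simplification: the exact Log-MAP combining ln(e^a + e^b) is replaced by max(a, b), dropping the correction term. A minimal sketch of that simplification (generic, not the dual-trellis soft-output unit itself):

```python
import math

def max_star(a, b):
    """Exact Log-MAP combining: ln(e^a + e^b) = max(a, b) + ln(1 + e^-|a-b|)."""
    return max(a, b) + math.log1p(math.exp(-abs(a - b)))

def max_log(a, b):
    """Max-Log-MAP approximation: keep only the maximum."""
    return max(a, b)

for a, b in [(0.3, 0.1), (2.0, -1.5), (4.0, 3.9)]:
    print(f"a={a}, b={b}: exact {max_star(a, b):.4f}, max-log {max_log(a, b):.4f}")
```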
IEEE Transactions on Communications, 1999
Soft-decision maximum-likelihood decoding of convolutional codes over GF(q) can be accomplished by searching an error-trellis for the minimum-weight error sequence. The error-trellis is obtained by a syndrome-based construction. Its structure lends itself particularly well to the application of expedited search procedures. The method for carrying out such error-trellis-based decoding is formulated as four algorithms. Three of these algorithms are aimed at reducing the worst-case computational complexity, whereas applying the fourth algorithm reduces the average computational complexity under low to moderate channel noise levels. The syndrome decoder achieves substantial worst-case and average computational gains in comparison with the conventional maximum-likelihood decoder, namely the Viterbi decoder, which searches for the most likely codeword directly within the code.
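The syndrome-decoding principle behind the error-trellis can be pictured with a deliberately simplified binary block-code example: compute the syndrome of the received word, then find the lowest-weight error pattern that produces it. The brute-force search below (using the (7,4) Hamming code) is only a stand-in for the paper's error-trellis search over GF(q):

```python
import itertools
import numpy as np

# Parity-check matrix of the binary (7,4) Hamming code (illustrative choice).
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def min_weight_error(received):
    """Return the lowest-weight error pattern whose syndrome matches the received word."""
    syndrome = H @ received % 2
    if not syndrome.any():
        return np.zeros_like(received)
    for weight in range(1, received.size + 1):
        for positions in itertools.combinations(range(received.size), weight):
            e = np.zeros_like(received)
            e[list(positions)] = 1
            if np.array_equal(H @ e % 2, syndrome):
                return e

r = np.array([1, 0, 1, 1, 0, 0, 1])
e = min_weight_error(r)
print("error estimate:", e, "corrected word:", (r + e) % 2)
```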
2019 IEEE 30th International Symposium on Personal, Indoor and Mobile Radio Communications (PIMRC Workshops)
Puncturing a low-rate convolutional code to generate a high-rate code has drawbacks in terms of hardware implementation. In fact, a maximum a posteriori (MAP) decoder based on the original trellis will then have a decoding throughput close to that of the non-punctured mother code. A solution to overcome this limitation is to perform MAP decoding on the dual trellis of a high-rate equivalent convolutional code. In the literature, dual trellis construction is only reported for specific punctured codes with rate k/(k+1). In this paper, we propose a multi-step method to construct the equivalent dual code, defined by the corresponding dual trellis, for any punctured code. First, the equivalent non-systematic generator matrix of the high-rate punctured code is derived. Then, the reciprocal parity-check matrix for the construction of the dual trellis is deduced. As a result, we show that the dual-MAP algorithm applied on the newly constructed dual trellis yields the same performance as the original MAP algorithm while allowing the decoder to achieve a higher throughput. When applied to turbo codes, this method enables highly efficient implementations of high-throughput, high-rate turbo decoders.
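As a hedged side note on why the dual trellis pays off here: the puncturing pattern fixes the equivalent rate k/n, which in turn fixes the primal (2^k) versus dual (2^(n-k)) branch counts per state. The patterns below are illustrative examples for a rate-1/2 mother code, not the paper's construction method:

```python
import numpy as np

def punctured_rate(pattern):
    """pattern: 2 x P binary matrix over one puncturing period of a rate-1/2 mother
    code; a 1 keeps the coded bit, a 0 punctures it. Returns (k, n) of the equivalent code."""
    pattern = np.asarray(pattern)
    k = pattern.shape[1]       # information bits per period
    n = int(pattern.sum())     # transmitted bits per period
    return k, n

for name, pattern in [("pattern A", [[1, 1], [1, 0]]),
                      ("pattern B", [[1, 1, 1, 1], [1, 0, 0, 0]])]:
    k, n = punctured_rate(pattern)
    print(f"{name}: rate {k}/{n}, primal branches 2^{k} = {2**k}, dual branches 2^{n-k} = {2**(n-k)}")
```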
Tail-biting trellises are the simplest decoding graphs with cycles. Trellis representations not only reveal the code structure, but also lead to efficient trellis-based decoding algorithms. Existing Circular Viterbi Algorithms (CVAs) are non-convergent and sub-optimal. In this paper, decoding of a convolutional code is performed over a Rayleigh channel, and it is shown that the net path metric of each tail-biting path is lower-bounded during the decoding process of the CVA. This lower-bounding property can be applied to remove unnecessary iterations of the existing CVA, resulting in a bounded, or convergent, CVA-based Maximum Likelihood (ML) decoder. A comparison between the existing two-phase ML decoder and the convergent CVA decoder over a Rayleigh channel shows that the proposed algorithm has higher efficiency and lower decoding complexity.
1998
in [67, 109]. The first serious study of trellis structure and trellis construction for linear block codes was due to Wolf. In his 1978 paper [109], Wolf presented the first method for constructing trellises for linear block codes and proved that an N-section trellis diagram for a q-ary (N, K) linear block code has at most q^min(K, N-K) states. He also presented a method for labeling the states based on the parity-check matrix of a code. Right after Wolf's work, Massey presented a simple but elegant paper [67] in which he gave a precise definition of a code trellis, derived some fundamental properties, and provided implications of the trellis structure for encoding and decoding of codes. However, these early works on trellis representation of linear block codes did not arouse much enthusiasm, and for the next 10 years there was basically no research in this area. There are two major reasons for this inactive period. First, most coding theorists at that time believed that block codes did not have a simple trellis structure like convolutional codes, and that maximum-likelihood decoding of linear block codes using the Viterbi algorithm was practically impossible except for very short block codes. Second, since almost all linear block codes are constructed algebraically or based on finite geometries, it was the belief of many coding theorists that algebraic decoding was the only way to decode these codes. These two beliefs seriously hindered the development of efficient soft-decision decoding methods for linear block codes and their application to error control in digital communications. This led to a general belief that block codes are inferior to convolutional codes and hence not useful. In fact, for more than two decades, most practicing communication engineers believed that the rate-1/2 convolutional code of constraint length 7 with Viterbi decoding was the only effective error control coding scheme for digital communications, except perhaps for ARQ schemes. To achieve higher reliability for certain applications, such as NASA's satellite and deep-space communications, this convolutional code concatenated with a Reed-Solomon outer code was thought to be the best solution. It was really Forney's paper in 1988 [24] that aroused enthusiasm for research in the trellis structure of linear block codes. In this paper, Forney showed that some block codes, such as Reed-Muller (RM) codes and some lattice codes, do have relatively simple trellis structures, and he presented a method for constructing such trellises.
Chapters 10 through 13 present various trellis-based decoding algorithms, old and new. Chapter 10 first discusses the application of the well-known Viterbi decoding algorithm to linear block codes, optimum sectionalization of a code trellis to minimize computational complexity, and design issues for IC (integrated circuit) implementation of a Viterbi decoder. Then it presents a new decoding algorithm for convolutional codes, named the differential trellis decoding (DTD) algorithm.
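Wolf's state bound quoted above is straightforward to evaluate; a minimal sketch, with the example codes chosen purely for illustration:

```python
def wolf_state_bound(q, N, K):
    """Wolf's bound: an N-section trellis of a q-ary (N, K) linear block code
    has at most q^min(K, N-K) states."""
    return q ** min(K, N - K)

print(wolf_state_bound(2, 7, 4))    # (7,4) Hamming code: at most 2^3 = 8 states
print(wolf_state_bound(2, 23, 12))  # (23,12) Golay code: at most 2^11 = 2048 states
```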
IEEE Transactions on Information Theory, 2000
EURASIP Journal on Wireless Communications and Networking, 2012
Sequential decoding can achieve a very low computational complexity and short decoding delay when the signal-to-noise ratio (SNR) is relatively high. In this article, a low-complexity, high-throughput decoding architecture based on a sequential decoding algorithm is proposed for convolutional codes. Parallel Fano decoders are scheduled to the codewords in parallel input buffers according to buffer occupancy, so that the processing capabilities of the Fano decoders can be fully utilized, resulting in high decoding throughput. A discrete-time Markov chain (DTMC) model is proposed to analyze the decoding architecture. The relationship between the input data rate, the clock speed of the decoder and the input buffer size can be easily established via the DTMC model. Different scheduling schemes and decoding modes are proposed and compared. The novel high-throughput decoding architecture is shown to incur 3-10% of the computational complexity of Viterbi decoding at a relatively high SNR.
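The buffer-occupancy scheduling idea can be sketched in a few lines; the function below is an illustrative stand-in (names and structure assumed, not the paper's DTMC-analyzed architecture): each idle Fano decoder is assigned to the fullest non-empty input buffer.

```python
def schedule(idle_decoders, buffer_occupancy):
    """Assign idle decoders to the fullest non-empty buffers; returns (decoder, buffer) pairs."""
    fullest_first = sorted(range(len(buffer_occupancy)),
                           key=lambda b: buffer_occupancy[b], reverse=True)
    nonempty = [b for b in fullest_first if buffer_occupancy[b] > 0]
    return list(zip(idle_decoders, nonempty))

# Decoder 0 takes buffer 2 (7 codewords waiting), decoder 1 takes buffer 3 (5 waiting).
print(schedule(idle_decoders=[0, 1], buffer_occupancy=[3, 0, 7, 5]))
```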
arXiv (Cornell University), 2022
Recently, rate-1/n zero-terminated and tail-biting convolutional codes (ZTCCs and TBCCs) with cyclic-redundancy-check (CRC)-aided list decoding have been shown to closely approach the random-coding union (RCU) bound for short blocklengths. This paper designs CRCs for rate-(n − 1)/n CCs with short blocklengths, considering both the ZT and TB cases. The CRC design seeks to optimize the frame error rate (FER) performance of the code resulting from the concatenation of the CRC and the CC. Utilization of the dual trellis proposed by Yamada et al. lowers the complexity of CRC-aided serial list Viterbi decoding (SLVD) of ZTCCs and TBCCs. CRC-aided SLVD of the TBCCs closely approaches the RCU bound at a blocklength of 128. This paper also explores the complexity-performance trade-off for three decoders: a multi-trellis approach, a single-trellis approach, and a modified single-trellis approach with pre-processing using the Wrap-Around Viterbi Algorithm (WAVA).
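The CRC-aided selection step of list decoding can be sketched generically: walk the candidate list in likelihood order and keep the first sequence whose CRC check passes. The toy polynomial and candidate bit sequences below are assumptions for illustration, not the CRC designs or SLVD implementation from the paper:

```python
def crc_passes(bits, poly):
    """Binary polynomial division over GF(2); True if the remainder is zero."""
    bits = list(bits)
    for i in range(len(bits) - len(poly) + 1):
        if bits[i]:
            for j, p in enumerate(poly):
                bits[i + j] ^= p
    return not any(bits)

def crc_aided_pick(candidates, poly):
    """Return the most likely candidate that satisfies the CRC, or None (erasure)."""
    for cand in candidates:            # candidates assumed sorted best-first
        if crc_passes(cand, poly):
            return cand
    return None

poly = [1, 0, 1, 1]                    # toy CRC polynomial x^3 + x + 1
candidates = [[1, 0, 1, 1, 0, 1, 0],   # best path, fails the CRC
              [1, 0, 0, 0, 1, 0, 1]]   # second-best path, passes the CRC
print(crc_aided_pick(candidates, poly))
```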
IEEE Transactions on Information Theory, 1994