Turbo Codes


Implementation of Turbo Decoders

Submitted by: Sharad Kaushik (03714802809), Shashank Vaish (05714802809), Pratik Gauri (11014802809), Abhishek Jain (13314802809)

Objective

Error Correction Codes
Turbo Codes Technology
Turbo Decoding
Turbo Codes Performance
Turbo Coding Applications

Introduction

Motivation

Can we achieve error-free communication, or come as close to it as possible? Can we reach the Shannon limit?

Objectives

Study channel coding
Understand channel capacity
Find ways to increase the data rate
Provide a reliable communication link

Shannon Theory

For every combination of bandwidth (W), channel type, signal power (S) and received noise power (N), there is a theoretical upper limit on the data transmission rate R for which error-free transmission is possible. This limit is called the channel capacity, or Shannon capacity.
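As a quick numerical illustration of this limit (a minimal sketch assuming an AWGN channel; the bandwidth and SNR figures below are arbitrary examples, not from the slides):

import math

def shannon_capacity(bandwidth_hz, signal_power, noise_power):
    # Shannon capacity of an AWGN channel: C = W * log2(1 + S/N), in bits per second
    return bandwidth_hz * math.log2(1 + signal_power / noise_power)

# Example: W = 1 MHz, S/N = 10 (i.e. 10 dB)
print(shannon_capacity(1e6, 10, 1))   # about 3.46 Mbit/s

No code can exceed this rate with an arbitrarily low error rate; turbo codes are remarkable because they come very close to it.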

Shannon Theory

Sets a limit on the energy efficiency of a code
Founded information theory
Stated the maximum data rate of a channel
Relates the achievable error rate to the signal power

Error Correction Mechanisms

Backward error correction (ARQ)
- Error detection capability only
- Communication cost (retransmissions)
- Problematic for real-time traffic
Forward error correction
- Detection and correction of errors
- More complex receivers
- DSP cost

Forward Error Correction

Block codes
- Data is split into blocks
- Checks are contained within each block
Convolutional codes
- Operate on streamed bits
- Involve memory
Turbo codes
- Built from convolutional codes
- Special properties

Coding advantages
[Figure: bit error probability Pb (from about 10^-3 down to 10^-8) versus Eb/N0 in dB; the horizontal gap between the uncoded and coded curves at a given error rate is the coding gain]

Coding disadvantages

More bandwidth due to redundant bits
Processing delay
Design complexity

Turbo Codes History


IEEE International Communications Conference, 1993, Geneva: Berrou and Glavieux, "Near Shannon Limit Error-Correcting Coding: Turbo Codes". It provided virtually error-free communication at data-rate/power efficiencies beyond what most experts thought possible.

Turbo Codes History

Double the data throughput at a given power, or operate at half the power
The two authors were little known, and most assumed there was an error in their calculations
Once the results were confirmed, many companies adopted turbo codes and new companies were founded: TurboConcept and iCoding
Within 0.5 dB of the Shannon limit at Pe = 10^-6

Turbo codes

Parallel concatenated

The k-bit block is encoded N times with different orderings (permutations)
The probability that the sequence remains RTZ (return to zero) in every encoder is about 1/2^(N*v), where v is the encoder memory
With 2 encoders this pseudo-randomness gives error probabilities around 10^-5
The permutations are chosen to improve the minimum distance dmin

Turbo Encoder
[Diagram: the input X is sent directly as the systematic part of the codeword, fed into a first RSC encoder producing parity Y1, and passed through a (pseudo-random) interleaver into a second RSC encoder producing parity Y2]
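A minimal sketch of this parallel concatenation (the memory-2 RSC with generators (1, 5/7) octal and the small fixed permutation are illustrative assumptions, not the encoder from the slides; trellis termination is omitted):

def rsc_parity(bits):
    # Parity stream of a memory-2 recursive systematic convolutional (RSC) encoder
    # with feedback polynomial 1 + D + D^2 and feedforward polynomial 1 + D^2.
    s1 = s2 = 0
    parity = []
    for u in bits:
        a = u ^ s1 ^ s2          # recursive (feedback) bit
        parity.append(a ^ s2)    # parity output bit
        s2, s1 = s1, a           # shift the register
    return parity

def turbo_encode(bits, perm):
    # Rate-1/3 parallel concatenation: systematic bits X, parity Y1 from the data
    # in natural order, parity Y2 from the interleaved data.
    x = list(bits)
    y1 = rsc_parity(x)
    y2 = rsc_parity([x[i] for i in perm])
    return x, y1, y2

# Example with an illustrative 8-bit permutation
x, y1, y2 = turbo_encode([1, 0, 1, 1, 0, 0, 1, 0], [3, 0, 6, 1, 7, 4, 2, 5])
print(x, y1, y2)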

The interleaver's function is to permute low-weight codewords in one encoder into high-weight codewords for the other encoder.

Row-column interleaver: data is written row-wise and read column-wise. While very simple, it provides little randomness.
Helical interleaver: data is written row-wise and read diagonally.
Odd-even interleaver: first, the bits are left uninterleaved and encoded, but only the odd-positioned coded bits are stored. Then, the bits are scrambled and encoded, but now only the even-positioned coded bits are stored. Odd-even interleavers can be used when the second encoder produces one output bit per input bit.

Input (written row-wise into a 3 x 5 array):
X1  X2  X3  X4  X5
X6  X7  X8  X9  X10
X11 X12 X13 X14 X15

Row-column interleaver output (read column-wise):
X1 X6 X11 X2 X7 X12 X3 X8 X13 X4 X9 X14 X5 X10 X15

Helical interleaver output (read diagonally):
X11 X7 X3 X14 X10 X1 X12 X8 X4 X15 X6 X2 X13 X9 X5

Odd-even interleaver
Encoder output without interleaving (only the odd-positioned parity bits Y are stored):
X1  X2  X3  X4  X5  X6  X7  X8  X9  X10 X11 X12 X13 X14 X15
Y1  -   Y3  -   Y5  -   Y7  -   Y9  -   Y11 -   Y13 -   Y15

Encoder output with row-column interleaving (only the even-positioned parity bits Z are stored; Zi is the parity bit produced for data bit Xi):
X1  X2  X3  X4  X5  X6  X7  X8  X9  X10 X11 X12 X13 X14 X15
-   Z6  -   Z2  -   Z12 -   Z8  -   Z4  -   Z14 -   Z10 -

Final output of the encoder (one parity bit per data bit):
Y1 Z6 Y3 Z2 Y5 Z12 Y7 Z8 Y9 Z4 Y11 Z14 Y13 Z10 Y15
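A sketch of the row-column permutation used in the tables above (the helper name is ours; writing the 15 symbols row-wise into a 3 x 5 array and reading column-wise reproduces the first output sequence):

def row_column_interleave(seq, rows, cols):
    # Write the sequence row-wise into a rows x cols array, then read it out column-wise.
    assert len(seq) == rows * cols
    return [seq[r * cols + c] for c in range(cols) for r in range(rows)]

data = ["X%d" % i for i in range(1, 16)]
print(row_column_interleave(data, rows=3, cols=5))
# ['X1', 'X6', 'X11', 'X2', 'X7', 'X12', 'X3', 'X8', 'X13', 'X4', 'X9', 'X14', 'X5', 'X10', 'X15']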

Turbo Decoding

Criterion

For n probabilistic processors working together to estimate common symbols, all of them should eventually agree on those symbols, with probabilities as good as a single (optimal) decoder could achieve.

Turbo Decoder

The inputs to the decoders are the log-likelihood ratios (LLR) of the individual symbols d. The LLR of a symbol d is defined (Berrou) as

L(d) = log [ P(d = 1 | observation) / P(d = 0 | observation) ]

Turbo Decoder

The SISO decoder re-evaluates the LLR, utilizing the local redundancies Y1 and Y2 to improve the confidence. The value z is the extrinsic value determined by the same decoder; it is negative if d is 0 and positive if d is 1. The updated LLR is fed into the other decoder, which calculates its own z and updates the LLR, and so on for several iterations. After several iterations, both decoders converge to a value for that symbol.
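The bookkeeping behind this exchange is commonly written as a decomposition of each decoder's output LLR (standard notation, not taken verbatim from the slides): for decoder 1,

L_1(d_k) = L_c\,y_k + L_{e,2}(d_k) + L_{e,1}(d_k), \qquad L_c = \frac{2}{\sigma^2}

where L_c y_k is the channel value for the systematic bit, L_{e,2} is the extrinsic information received from decoder 2 (used as a priori information), and L_{e,1} is the new extrinsic value z that decoder 1 passes back to decoder 2.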

Turbo Decoding

Assume

Ui : modulated bit, in {0, 1}
Yi : received value, the output of a correlator; can take any value (soft)
The turbo decoder input is the log-likelihood ratio
For each data bit, calculate the LLR given the received sequence
R(ui) = log [ P(Yi | Ui = 1) / P(Yi | Ui = 0) ]
For BPSK, R(ui) = 2*Yi / sigma^2
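The BPSK expression follows directly from the Gaussian channel model (a short derivation, assuming antipodal transmission of +1/-1 and noise variance sigma^2):

R(u_i) = \ln\frac{p(Y_i \mid U_i = 1)}{p(Y_i \mid U_i = 0)}
       = \ln\frac{\exp\big(-(Y_i - 1)^2 / 2\sigma^2\big)}{\exp\big(-(Y_i + 1)^2 / 2\sigma^2\big)}
       = \frac{(Y_i + 1)^2 - (Y_i - 1)^2}{2\sigma^2}
       = \frac{2\,Y_i}{\sigma^2}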

Turbo Decoding

Compare the LLR output to see whether the estimate leans towards 0 or 1, then take the hard decision (HD).

Soft in/ Soft out processor

At the heart of the decoder
Represents all possible states of the encoder (trellis)
The number of states at a given clock tick is 2^n, where n is the number of flip-flops in the shift register
The trellis shows:

the current state
the possible paths leading to this state

(a small enumeration of such a trellis is sketched after this list)
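A small sketch that enumerates the trellis of the memory-2 RSC assumed in the earlier encoder sketch (n = 2 flip-flops, hence 2^2 = 4 states); each entry gives the next state and the parity label from which the branch metrics are later computed:

def rsc_trellis():
    # (state, input bit) -> (next state, parity bit) for a memory-2 RSC
    # with feedback 1 + D + D^2 and feedforward 1 + D^2.
    trellis = {}
    for state in range(4):                  # 2^2 = 4 states
        s1, s2 = (state >> 1) & 1, state & 1
        for u in (0, 1):
            a = u ^ s1 ^ s2                 # recursive feedback bit
            trellis[(state, u)] = ((a << 1) | s1, a ^ s2)
    return trellis

for (state, u), (nxt, parity) in sorted(rsc_trellis().items()):
    print("state %d, input %d -> next state %d, parity %d" % (state, u, nxt, parity))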

SISO

Label all branches with a branch metric (a function of the processor inputs)
Obtain the LLR for each data bit by traversing the trellis
Two algorithms:

Soft-Output Viterbi Algorithm (SOVA)
Maximum A Posteriori (MAP)

Log MAP
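The Log-MAP variant runs MAP in the logarithmic domain; its core operation is the max* (Jacobian logarithm) sketched below, and Max-Log-MAP simply drops the correction term in exchange for lower complexity:

import math

def max_star(a, b):
    # max*(a, b) = ln(e^a + e^b) = max(a, b) + ln(1 + e^(-|a - b|))
    return max(a, b) + math.log1p(math.exp(-abs(a - b)))

print(max_star(1.0, 0.5))   # ~1.474; Max-Log-MAP would return just max(1.0, 0.5) = 1.0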

Turbo Codes Performance

Turbo Codes Applications


Deep space exploration

ESA's SMART-1 probe
JPL equipped Pathfinder (1997)

Mobile 3G systems

In use in Japan: UMTS (NTT DoCoMo)
Turbo codes: pictures/video/mail
Convolutional codes: voice

Conclusion : End of Search

Turbo codes approach the theoretical limits with only a small gap
Gave rise to new codes: Low-Density Parity-Check (LDPC) codes
Decoding delay still needs improvement

Reference

[1] University of South Australia, Institute for Telecommunications Research, Turbo Coding Research Group. http://www.itr.unisa.edu.au/~steven/turbo/
[2] S. A. Barbulescu and S. S. Pietrobon. Turbo codes: A tutorial on a new class of powerful error correction coding schemes. Part I: Code structures and interleaver design. J. Elec. and Electron. Eng., Australia, 19:129-142, September 1999.
