CHAPTER ONE
Introduction
1.1 Background
Wireless communication, as the name suggests, is the transmission of information from one place to another without wires, and it is replacing much of today's wired transmission. Research in wireless communication remains an active field, and the goal of most work in this area is to find more effective communication methods. Wireless communication lets users move freely without worrying about how their data is transferred, and it has dramatically changed the concept of information transfer in homes and offices.
Some of the key advantages gained by wireless communication are:
Increased efficiency – improved communications lead to faster transfer of information within businesses and between partners/customers.
Always in reach – there is no need to carry cables or adaptors in order to access data in your office or home.
Greater flexibility and mobility for users – workers in an office do not need to sit at dedicated PCs; they can be networked together wirelessly.
Reduced costs – compared to wired communication, wireless systems are usually cheaper to use and easier to install and maintain.
The basic building blocks of a typical wireless communication system are shown in Figure
1.1.
In the figure, the source block is the source of the information that we need to transmit from one place to another. It can be a voltage signal, a voice signal or anything else. This information is first converted to digital data and then source encoded. Source encoding reduces the amount of data in the signal, which reduces the bandwidth required to transmit it. The data then proceeds to the channel encoder block, which adds extra bits to the data to help correct errors inflicted on it by fading and noise. This thesis considers two channel-encoding techniques, which are explained in later chapters. The modulator modulates the message signal onto the transmission frequency so that the signal is ready for transmission.
When the signal is transmitted through the wireless channel it faces multiple problems. One major problem is fading. Fading can be caused by natural weather disturbances, such as rainfall, snow, fog, hail and extremely cold air over a warm earth. Fading can also be created by man-made disturbances, such as irrigation, or by multipath propagation. All these factors introduce errors into the transmitted data. At the receiver the signal is further corrupted by noise, since the receiver antenna is designed to receive any signal present within a certain frequency range and noise is also present in that range. The data is then passed to the demodulator, which converts the signal back from its modulated form, and to the channel decoder, which helps to recover the original data from the signal degraded by channel fading and noise. This is done by using the redundant bits that were added by the channel encoder. After recovery the signal is passed to the source decoder, which converts it back to its original form. With the advancements in wireless communication many multiplexing techniques were introduced; Frequency Division Multiplexing (FDM), Time Division Multiplexing (TDM) and Code Division Multiplexing (CDM) are some of them. Orthogonal Frequency Division Multiplexing (OFDM) is an advanced form of FDM that is studied in this thesis [1].
Most transmission channels are frequency selective. This means that the frequency components of the input signal are affected differently by the channel. In other words, the channel's transfer function H(f) is not flat over the whole frequency band but behaves differently for different frequencies. This introduces inter-symbol interference (ISI) in the received signal, and an equalizer (which filters the received signal to cancel the ISI) becomes necessary to deal with it. ISI is a time-domain manifestation of the frequency selectivity.
Figure 2: A single-carrier system using the whole bandwidth W versus a multicarrier system dividing it into subbands of width W/n.
The difference is whether the whole band is used as a single carrier or divided into small bands with multiple carriers used for transmission. In a single-carrier system the whole available spectrum is used for transmission of a single message signal; the short symbol duration leads to ISI due to multipath signals. In a multicarrier system the available spectrum is divided into many narrow bands and the data is split into parallel streams, each transmitted in a separate band. OFDM is an example of a multicarrier system in which each carrier has a very narrow bandwidth and hence a low symbol rate. This gives the signal a high tolerance to multipath delay spread, since the delay spread must be very long to cause significant inter-symbol interference.
In a multipath fading channel the subcarriers arrive at the receiver with different amplitudes, and channel coding can be used across the subcarriers to correct the errors of the weak subcarriers [3].
Error correction plays a significant role in OFDM systems, since OFDM together with error-correcting techniques helps to deal with fading channels. Error correction helps recover faded information by imposing a relation between the information and the transmitted code such that errors occurring in the channel can be removed at the receiver. In particular, we will evaluate Linear Block codes and Reed-Solomon codes. We chose these techniques because of the different categories to which they belong and their implementation styles (systematic and non-systematic).
1.3 Objective
1.3.1 General objective of the thesis
The general objective of this thesis is:
To evaluate the performance of error correction techniques for an OFDM system through MATLAB simulation.
1.3.2 Specific objectives of the thesis
The specific objectives of this thesis are:
To design an OFDM system
To design different error correction techniques for the OFDM system
To analyze the differences between the error correction techniques applied to the OFDM system
1.4 Motivation
OFDM is a technique for transmitting a data stream over a number of subcarriers. It overcomes the problem of inter-symbol interference by transmitting a number of narrowband subcarriers together with a guard interval. However, this gives rise to another problem: the subcarriers arrive at the receiver with different amplitudes. Some carriers may be detected without error, while the errors will be concentrated in the few subcarriers with small amplitude. Channel coding can be used across the subcarriers to correct the errors of the weak subcarriers, and overcoming this problem is the motivation for this thesis.
This also allows us to investigate the effect of code rate on error correction performance. The system parameters and channel characteristics were kept the same in order to test the codes under identical conditions.
This work is organized as follows. We start with the modulation of the data. After modulation we apply the particular channel coding scheme to enhance the system's error detection and correction abilities. After channel coding we convert the serial data stream into parallel streams, depending on the desired FFT size and the number of subcarriers used. After converting the data into parallel streams we apply the IFFT to each stream to convert the data from the frequency domain to the time domain. We then add a guard interval to give some protection between successive symbols. The performance of OFDM depends on the modulation scheme, the channel coding scheme and the channel used. We focus on the channel coding schemes and their performance in OFDM systems with QPSK modulation and a fading channel.
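As an illustration of this chain, the following is a minimal MATLAB sketch of the transmitter steps described above (QPSK mapping, serial-to-parallel conversion, IFFT and guard interval). It assumes the Communications Toolbox function pskmod, a 64-point IFFT and a 16-sample cyclic prefix; channel coding and pilot insertion are omitted for brevity, and the parameter values are illustrative rather than taken from the thesis scripts.
nFFT = 64; nCP = 16; nSym = 100;                   % FFT size, guard samples, OFDM symbols
bits  = randi([0 1], 2*nFFT*nSym, 1);              % source bits (2 bits per QPSK symbol)
symi  = bi2de(reshape(bits, 2, []).', 'left-msb'); % group bits into QPSK symbol indices
qpsk  = pskmod(symi, 4, pi/4, 'gray');             % QPSK modulation
par   = reshape(qpsk, nFFT, nSym);                 % serial-to-parallel: one column per OFDM symbol
txT   = ifft(par, nFFT);                           % IFFT: frequency domain -> time domain
txCP  = [txT(end-nCP+1:end, :); txT];              % prepend cyclic prefix as guard interval
txSig = txCP(:);                                   % parallel-to-serial stream for transmission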
Chapter 3 gives brief details about OFDM systems, the multicarrier modulation technique and its applications and advantages.
Chapter 4 describes channel coding and the different procedures involved in the coding process.
Chapter 5 explains the research methodology and the performance evaluation of error correcting techniques for OFDM systems.
Chapter 6 presents the simulation results for the different coding schemes, validity threats, the conclusion and future work.
CHAPTER TWO
Literature Review
In [4] a comparison of turbo codes and convolutional codes for broadband fixed wireless access (FWA) systems is made. Another similar work is presented in [17]. It focuses on the OFDM system's shortcomings in terms of the high peak-to-average power ratio (PAPR) that causes nonlinear distortion in the transmitted signal, degrading the performance of the system.
Another work focuses on OFDM tones that are severely degraded when spectral nulls are present in the channel, so forward error correction (FEC) techniques with Code-Spread OFDM (CSOFDM) are used to spread the data symbols across the frequency band before OFDM modulation.
Another work focuses on ATM cell oriented data transmission. This method enables accurate
comparison of coding and interleaving techniques and optimization of OFDM system
parameters.
Another related work focuses on the effect of various concatenated forward error correction (FEC) codes on the performance of a wireless OFDM system. The study is done on a recorded audio signal under an additive white Gaussian noise (AWGN) channel. The results show that the audio message was received effectively under noisy channel conditions.
The work done in this thesis and the related works above follow the same context (the comparison of bit error rate (BER) versus signal-to-noise ratio (SNR)), but our contribution is to compare the considered techniques within a single study, which none of the related works discussed above does.
CHAPTER THREE
OFDM Systems
OFDM is a special type of multicarrier modulation in which a single data stream is transmitted over a number of lower-rate subcarriers. In OFDM a high-rate bit stream is split into, say, N parallel bit streams of lower rate and each of them is modulated using one of N orthogonal subcarriers. OFDM increases the immunity against frequency-selective fading or narrowband interference. A single fade causes the entire link to fail in a single-carrier system, while only a few subcarriers are disturbed in a multicarrier system like OFDM. Error correction coding can then be used to correct those few errors.
To realize the overlapping multicarrier technique, however, the crosstalk between subcarriers must be reduced, which demands orthogonality between the different modulated carriers. Figure 2.1 shows a comparison of single-carrier, non-overlapping multicarrier and overlapping orthogonal multicarrier modulation.
In a conventional frequency-division multiplex system the signal can be received using conventional filters and demodulators, but guard bands must be introduced between the different carriers in the frequency domain, which lowers the spectrum efficiency.
Some of the main advantages of OFDM are its multipath delay spread tolerance, its robustness against frequency-selective fading, and its efficient spectral usage obtained by allowing signals to overlap in the frequency domain. Additionally, for an OFDM system the modulation and demodulation can be done using IFFT and FFT operations, which are computationally efficient.
OFDM will be covered in more detail in this chapter. Chapter 4 will cover the Channel
coding Techniques of Linear Block Coding and Reed-Solomon Coding. Chapter 5 will cover
the implementation methodology and chapter 6 will cover the simulation results.
3.1 OFDM Signal
OFDM is similar to the FDM technique except that in OFDM the N subcarriers are made orthogonal to each other over the symbol (frame) duration Ts. By orthogonality of the carriers we mean that the carrier frequencies satisfy the following requirement:
F_k = F_0 + k/T_s,    k = 1, 2, ..., N − 1        (3.1)
Equation (3.1), when translated to the time domain, means that there must be an integer number of cycles of each carrier over the duration Ts.
3.1.1 Orthogonality
Signals are orthogonal if they are mutually independent of each other. Orthogonality is a
property that allows multiple information signals to be transmitted perfectly over a common
channel and detected, without interference. Many common multiplexing schemes are
inherently orthogonal. Time division multiple access (TDMA) allows transmission of multiple information signals over a single frequency channel by assigning a unique time slot to each separate information signal. During each time slot only the signal from a single source is transmitted, preventing any interference between the multiple information sources; because of this, TDMA is orthogonal in nature. In the frequency domain most FDM systems are orthogonal, as the separate transmission signals are well spaced out in frequency, preventing interference. Although these methods are orthogonal, the term OFDM has been reserved for a special form of FDM. The subcarriers in an OFDM signal are spaced as closely as is theoretically possible while maintaining orthogonality between them. An OFDM system with four subcarriers is shown in Figure 3.2.
For orthogonal functions, if any two different functions within the set are multiplied and integrated over a symbol period, the result is zero.
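A quick numerical check of this property is sketched below; it integrates the product of two subcarriers defined as in Equation (3.1) over one symbol period. The sample rate, base frequency and symbol duration are illustrative values, not parameters from the thesis.
Ts = 1e-3; F0 = 10e3; fs = 1e6;          % symbol duration, base frequency, sample rate (illustrative)
t  = (0:1/fs:Ts-1/fs).';                 % one symbol period
k1 = 3; k2 = 5;                          % two different subcarrier indices
s1 = cos(2*pi*(F0 + k1/Ts)*t);           % subcarrier k1
s2 = cos(2*pi*(F0 + k2/Ts)*t);           % subcarrier k2
trapz(t, s1.*s2)                         % approximately zero: different subcarriers are orthogonal
trapz(t, s1.*s1)                         % approximately Ts/2: a subcarrier is not orthogonal to itself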
If subcarrier k carries n_k bits, the total number of bits per OFDM symbol is
Σ_{k=0}^{N−1} n_k = M        (3.7)
If the transmitter uses 4-QAM on each channel, then 𝑛𝑘 = 2, 𝑘 = 0 … 𝑁 − 1 and 𝑀 =
2𝑁 since each transmitted symbol carries two bits worth of information. Similarly, if 128
QAM is substituted for 4-QAM, then 𝑛𝑘 = 7, 𝑘 = 0 … 𝑁 − 1 and 𝑀 = 7𝑁.
When using a two-dimensional signal format, the transmitted point in the signal constellation
for subcarrier k (which codes 𝑛𝑘 bits) is written 𝑑[𝑘] = 𝑥[𝑘] + 𝑗 𝑦[𝑘], where
x[k]represents the in-phase component and y[k] is the quadrature component. The subcarriers
are spaced in frequency at the symbol rate to keep them orthogonal. Hence, an OFDM
symbol duration of T seconds results in a subcarrier spacing of 1/T Hz. Longer symbol
durations allow to pack the subcarriers closer together in frequency.
We can describe the band pass continuous time transmitted waveform D(t) during one
symbol interval as:
D(t) = Σ_{k=0}^{N−1} ( x[k] cos(2π f_k t) + y[k] sin(2π f_k t) )        (3.8)
where f_0 is the base frequency, Δt the serial symbol duration, Δf = 1/(NΔt) the subcarrier spacing and f_k = f_0 + kΔf the frequency of the k-th subcarrier. Note that the individual subcarriers are separated by Δf = 1/(NΔt) = 1/T, exactly the amount needed to keep them orthogonal.
Using Euler's identity e^{jθ} = cos(θ) + j sin(θ), we may rewrite Equation (3.8) as follows:
D(t) = Re{ Σ_{k=0}^{N−1} d[k] e^{j2π f_k t} }        (3.9)
A single real function can entirely describe a band pass signal. However, two such functions,
or one complex one, are required for a baseband description. We can translate D(t) into a
baseband signal by sampling it at intervals of ∆𝑡 and maintaining the complex notation.
Keeping in mind that 𝑤𝑘 = 2𝜋𝑓𝑘 gives:
D[n] = Σ_{k=0}^{N−1} d[k] e^{j2πnk/N}        (3.10)
In fact, we can rewrite Equation (3.10) as
D[n] = DFT{d[n]}        (3.11)
Modulating the carrier signal with D[n] effectively windows the signal with a rectangular
baseband shaping function h(t) which is defined mathematically as:
h(t) = 1/T for 0 ≤ t < T, and 0 otherwise        (3.12)
This rectangular pulse shape imposes a sinc frequency response on each subchannel. This sinc function, sin(ωT)/(ωT), has zeros at multiples of 1/T = Δf, satisfying the Nyquist condition for ISI-free transmission. Hence, in theory, the receiver can recover all of the transmitted symbols.
So, the IFFT and FFT operations are used to achieve OFDM modulation and demodulation.
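The following small sketch illustrates this point: the subcarrier symbols recovered by the FFT match the ones fed to the IFFT. No channel, noise or cyclic prefix is included, and the values are illustrative.
N = 64;                                                   % number of subcarriers
d = (2*randi([0 1],N,1)-1) + 1i*(2*randi([0 1],N,1)-1);   % random QPSK-like subcarrier symbols
x = ifft(d, N);                                           % OFDM modulation (frequency -> time)
dHat = fft(x, N);                                         % OFDM demodulation (time -> frequency)
max(abs(dHat - d))                                        % ~0 up to numerical precision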
3.3 Designing of an OFDM System
The design of an OFDM system requires a trade-off between various parameters, as in all communication system designs. Usually the input parameters to the design are the bit rate, the available bandwidth and the maximum delay spread introduced by the channel. The design involves calculation of the symbol duration, guard time, number of subcarriers, and the modulation and coding schemes, among other factors.
3.3.1 Guard Time and Cyclic Extension
OFDM is resistant to the multipath delay spread that causes inter-symbol interference (ISI) in wireless networks. Making the symbol duration larger reduces the effect of delay spread; this is done by converting the high-rate data signal into lower-rate data signals. ISI is eliminated by the introduction of a guard time, by making the guard time duration larger than the estimated delay spread in the channel. If the guard period is left empty, the orthogonality of the subcarriers no longer holds, i.e., Inter-Carrier Interference (ICI) comes into the picture. In order to eliminate both the ISI and the ICI, part of the OFDM symbol called the 'cyclic prefix (CP)' is cyclically extended into the guard period. The orthogonality of the subcarriers is preserved within the FFT interval, so that delayed versions of the OFDM symbol always contain an integer number of cycles within that interval. The cyclic prefix is a copy of the last part of the OFDM symbol which is prepended to the transmitted symbol (see Figure 2.3) and removed at the receiver before demodulation. The cyclic prefix should be as long as the significant part of the channel impulse response experienced by the transmitted signal.
The cyclic prefix thus brings a double benefit. First, it avoids ISI. Second, it converts the linear convolution with the channel impulse response into a cyclic convolution. Since a cyclic convolution in the time domain translates to a multiplication in the frequency domain, the subcarriers remain orthogonal and there is no ICI. By setting the guard time larger than the delay spread and cyclically extending the OFDM signal into the guard period we can eliminate ISI and ICI. The guard time is made longer than the maximum delay spread introduced by the channel, but it cannot be made very large since no information bits are transmitted during the guard time. The symbol duration must be fixed in such a way that the overhead associated with the guard time is minimal. This can be achieved by making the symbol duration much longer than the guard time. However, a large symbol duration means more subcarriers and thus causes implementation complexity and increased peak-to-average power problems. A practical design choice for the symbol duration is therefore around 5-6 times the guard time.
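The sketch below illustrates the two benefits on a toy two-tap channel (the channel and signal values are illustrative, not from the thesis): after removing the cyclic prefix, the channel acts as a circular convolution, so each subcarrier is recovered with a single complex division and no ICI appears.
N = 64; nCP = 16;                             % FFT size and cyclic prefix length
h = [1; 0.5];                                 % toy channel impulse response, shorter than the CP
d = exp(1i*(pi/4)*(2*randi([0 3], N, 1)+1));  % QPSK subcarrier symbols
x = ifft(d, N);                               % OFDM symbol in the time domain
xcp = [x(end-nCP+1:end); x];                  % add the cyclic prefix
y = conv(xcp, h);                             % linear convolution with the channel
y = y(nCP+1:nCP+N);                           % drop the CP: equivalent to circular convolution
dHat = fft(y, N) ./ fft(h, N);                % one-tap equalization per subcarrier
max(abs(dHat - d))                            % ~0: orthogonality preserved, no ISI/ICI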
Each subcarrier then only needs to be compensated for the single complex gain/phase encountered by it. Even if some subcarriers are completely lost due to fading, the user data can be recovered by proper coding and interleaving at the transmitter.
3.4.3 High Spectral Efficiency
OFDM allows the subcarriers to overlap in the frequency domain while at the same time keeping them orthogonal to each other. The total bandwidth required for N subcarriers is:
BW_total = (N + 1)/T_s        (3.13)
For large values of N, the total bandwidth required can be approximated as
BW_total ≈ N/T_s        (3.14)
For serial transmission of the same data the required bandwidth is:
BW_total = 2N/T_s        (3.15)
Compared to the single carrier serial transmission, we get a spectral gain of nearly 100% in
OFDM.
3.4.4 Efficient Modulation and Demodulation
Modulation and demodulation of the sub-carriers is done using IFFT and FFT methods
respectively, which are computationally efficient. The modulation and demodulation in
digital domain avoids the need of high frequency stable oscillators.
3.5 Applications of OFDM
Due to recent advances of digital signal processing (DSP) and very large scale integrated
circuit (VLSI) technologies, the initial obstacles of OFDM implementation such as massive
complex computation, and high speed memory do not exist anymore. The use of FFT
algorithms eliminates arrays of sinusoidal generators and coherent demodulation required in
parallel data systems and makes the implementation cost effective. OFDM has recently been
applied widely in wireless communication systems due to its high data rate transmission
capability with high bandwidth efficiency and its robustness to multi-path delay. It has been
used in wireless LAN standards such as the American IEEE 802.11a and the European equivalent HIPERLAN/2, in multimedia wireless services such as Japanese Multimedia Mobile Access Communications, in very high speed digital subscriber line (VDSL; 100 Mbps), digital audio broadcasting (DAB), and high definition television (HDTV) terrestrial broadcasting.
CHAPTER FOUR
Channel coding and system design
Channel coding belongs to the class of signal transformations designed to improve communication performance by enabling the transmitted signal to better withstand the effects of various channel impairments such as noise, fading and jamming. The goal of channel coding is to improve the bit error rate (BER) performance of power-limited and/or band-limited channels by adding redundancy to the transmitted data [5].
We have already explained how OFDM avoids the problem of inter-symbol interference by transmitting a number of narrowband subcarriers together with a guard time. This gives rise to another problem: in multipath fading channels all subcarriers arrive at the receiver with different amplitudes, and in fact some subcarriers may be completely lost because of deep fades. Even though most subcarriers may be detected without errors, the overall BER will be largely dominated by the few subcarriers with the smallest amplitudes. Forward error correction coding is essential to avoid this domination by the weakest subcarriers. By coding across the subcarriers, errors of the weak subcarriers can be corrected up to a limit that depends on the error control code and the channel.
Channel codes are a very important component of any modern digital communication system. They are used to improve the performance of a communications system when other means of improvement, such as increasing transmitter power or using a more sophisticated demodulator, are impractical. We discuss linear block codes and Reed-Solomon codes in the following sections.
4.1. Linear Block Codes
A block code is defined as a code in which k symbols are input and n symbols are output, and is denoted as an (n, k) code. If the input is k symbols, then there are 2^k distinct messages. For each k input symbols the output is n symbols, known as a code word, where n is greater than k. Figure 3.1 shows the input and output of a block code and the size of the code word formed.
A block code of length n with 2^k code words is called a linear (n, k) code if and only if its 2^k code words form a k-dimensional subspace of the vector space of all n-tuples over the Galois field GF(2). Since there are n output bits there are 2^n possible combinations, but not all of them are code words; only 2^k of them are. The remaining combinations usually appear when there is an erroneous transmission and code words are corrupted by changed bits. Code words have to fulfil certain rules. One of them is that the sum of any two code words is also a code word and, being a linear vector space, the code has a basis, so all code words can be obtained as linear combinations of the basis vectors.
4.1.1. Encoding
We design a generator matrix for encoding of linear block coding. To start with we can
designate {g0, g1... gk-1} as the basis vectors. It means that we can represent the coding
operation as matrix multiplication. We can make a generator matrix as
G = [ g_0 ; g_1 ; ... ; g_{k−1} ]        (4.1)
G is a k × n matrix. If m = (m_0, m_1, ..., m_{k−1}) is the input sequence of dimension 1 × k, the corresponding output is the code word
mG = m_0 g_0 + m_1 g_1 + ... + m_{k−1} g_{k−1}        (4.2)
Note that the all-zero sequence must be a code word, and the minimum distance of the code equals the smallest weight among the nonzero code words. We have a vector space of dimension k embedded in a vector space of dimension n, the set of all n-tuples. Associated with every linear block code generator G is a matrix H, called the parity check matrix, whose rows span the null space of G. Then if c is a code word,
cH^T = 0        (4.3)
This code has a dual code for which H is the generator matrix. If G is the generator for an (n, k) code then H is the generator for an (n, n−k) code.
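A tiny sketch of Equations (4.2) and (4.3) is given below, using the rate-1/3 generator matrix that appears later in Chapter 5 (Equation (5.3)). The parity-check matrix is built by hand from the systematic form, so this is an illustrative construction rather than code from the thesis.
G = [1 0 1 0 1 0;
     0 1 0 1 1 1];          % (6,2) generator matrix in the form G = [I_2  P]
P = G(:, 3:6);              % parity part of G
H = [P.' eye(4)];           % a parity-check matrix whose rows span the null space of G
m = [1 1];                  % a k-bit message
c = mod(m*G, 2)             % code word c = mG over GF(2), Equation (4.2)
mod(c*H.', 2)               % all zeros: cH^T = 0, Equation (4.3)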
When we do the encoding it is often convenient to have the original data explicitly evident in the code word. Coding of this sort is called systematic encoding. For the codes we consider it is always possible to determine a generator matrix such that the encoding is systematic: simply perform row reductions and column reordering on G until an identity matrix is revealed. We can thus write G as
G = [P  I_k]        (4.4)
The parity check matrix can be used to get some useful information about the code. The following theorem holds for linear block codes:
"Let a linear block code C have a parity check matrix H. The minimum distance of C is equal to the smallest positive number of columns of H which are linearly dependent."
On the receiver side let ϒ be the received vector; the syndrome is defined as:
S = ϒH^T        (4.6)
If the received vector ϒ is a code word the syndrome S is equal to zero, and if it is not a code word the syndrome is non-zero.
If the received vector is an erroneous code word corrupted by an error vector e, then
ϒ=𝐶+е (4.7)
𝑆 = (𝐶 + е)𝐻𝑇 = е𝐻 𝑇 (4.8)
4.1.2. Decoding
To understand the process of decoding we start with the concept of maximum likelihood detection.
4.1.3. Maximum likelihood detection
Before talking about decoding, we should introduce a probabilistic criterion for decoding and show that it is equivalent to finding the closest code word. Given a received vector ϒ, the decision rule that minimizes the probability of error is to find the code word c_i which maximizes P(c = c_i | ϒ). This is called the maximum a posteriori decision rule. According to Bayes' rule,
P(C|ϒ) = P(C) P(ϒ|C) / P(ϒ)        (4.9)
where P(ϒ) is the probability of observing the vector ϒ. Since P(ϒ) is independent of c, maximizing P(c|ϒ) is equivalent to maximizing the term P(c)P(ϒ|c). Assuming that each code word is chosen with equal probability, maximizing P(c)P(ϒ|c) is then equivalent to maximizing P(ϒ|c).
A code word selected on the basis of maximizing P(ϒ|c) is said to be selected according to the maximum likelihood criterion. We define P(ϒ|c) as
P(ϒ|C) = ∏_{i=1}^{n} P(ϒ_i|C_i)        (4.10)
Assuming a binary symmetric channel with crossover probability p, we have
P(ϒ_i|C_i) = 1 − p if C_i = ϒ_i, and p if C_i ≠ ϒ_i        (4.11)
Then
P(ϒ|c) = ∏_{i=1}^{n} P(ϒ_i|C_i) = (1 − p)^{n − d(c,ϒ)} p^{d(c,ϒ)} = (1 − p)^n (p/(1 − p))^{d(c,ϒ)}        (4.12)
This shows that to maximize P(ϒ|c) we should select the c which is closest to ϒ, since 0 ≤ p/(1 − p) ≤ 1 for p ≤ 1/2. Under our assumptions, the maximum likelihood criterion is therefore the minimum distance criterion, so we should choose the error vector of lowest weight.
4.1.4. The standard array and syndrome table decoding
Suppose a transmitter transmits a code word c and the receiver receives r = c + e. We assume that error sequences with lower weight are more probable than error sequences with higher weight. We want to determine a decoded word c' such that the error sequence e' satisfying r = c' + e' has minimum weight.
One possible way of doing this is to create a standard array. We form the standard array by
writing down a list of all possible code words in a row with all-zero code word being the first
term. From the remaining n-tuples which have not already been used in the standard array,
select one of smallest weight. Write this down as the coset leader under the all-zero code
word. On this row, add the coset leader to each code word at the top of the column. Select
another small weight from the remaining and repeat the same step until all the n-tuples are
filled in the standard array. The standard array satisfies the following properties:
There are 2^k columns and 2^{n−k} rows; therefore an (n, k) code is capable of correcting 2^{n−k} error patterns (the coset leaders).
The difference of any two vectors in the same row is a code word in C, and all vectors in the same row have the same syndrome.
No two vectors in the same row are identical.
Every vector appears exactly once in the standard array.
An example of a (7, 3) code is shown here, where
G = [0 1 1 1 1 0 0; 1 0 1 1 0 1 0; 1 1 0 1 0 0 1]        (4.13)
The standard array is shown in Table 4.1 below:
Table 4.1: Standard array
Assume that the sequence r = 0011011 is received at the output of the channel. Looking up this sequence in the standard array, we find that the corresponding coset leader is e = 0101000, and the sequence is decoded as c = 0110011.
Each error pattern shown in the syndrome decoding table is a coset leader in the standard array. We can summarize the decoding steps as follows (a short MATLAB sketch is given after the steps):
Compute the syndrome 𝑆 = ϒ𝐻 𝑇 .
Look up the error pattern e using s in the syndrome decoding table.
Then calculate decoded code word using 𝐶 = ϒ + е
The first k bits of the decoded code word are the message signal.
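The following is a hedged MATLAB sketch of these steps for the (7, 3) example above, assuming the Communications Toolbox functions syndtable and bi2de behave as described. Note that when several error patterns of the same weight share a syndrome, the table may pick a coset leader different from the one listed in Table 4.1.
n = 7; k = 3;
G = [0 1 1 1 1 0 0;
     1 0 1 1 0 1 0;
     1 1 0 1 0 0 1];              % example (7,3) generator matrix, G = [P  I_3]
P = G(:, 1:4);
H = [eye(4) P.'];                 % parity-check matrix: mod(G*H.',2) is all zeros
T = syndtable(H);                 % coset leaders indexed by syndrome value
r = [0 0 1 1 0 1 1];              % received word from the example
s = mod(r*H.', 2);                % step 1: syndrome S = rH^T
e = T(bi2de(s, 'left-msb')+1, :); % step 2: look up the error pattern (coset leader)
c = mod(r + e, 2)                 % step 3: decoded code word C = r + e
msg = c(5:7)                      % message bits from the systematic positions (identity columns of G)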
4.2 Reed-Solomon Codes
These error correcting techniques are block-based and are used to correct errors in various
Systems including storage devices, wireless or mobile communications, satellite
communications etc.
The mechanism by which Reed-Solomon codes correct errors is that the encoder adds redundant bits to the digital input data block. The decoder attempts to correct and restore the original data by removing the errors introduced in transmission, whether caused by noise in the channel or by scratches on a CD. There are different families of Reed-Solomon codes, each with its own ability to correct a certain number and type of errors.
These codes are linear, form a subclass of BCH codes, and can be denoted as RS(n, k) codes with s-bit symbols (each symbol is represented by s bits).
An n-symbol code word is made from the k data symbols of s bits each by appending n − k parity symbols. The decoder can correct up to t symbol errors in a code word, where 2t = n − k.
Given a symbol size s, the possible code word length n for a Reed-Solomon code is n = 2s -1.
For example, the maximum possible length of a code with 8-bit symbols (s=8) is 255 bytes.
R-S codes perform really well against burst noise. For example, consider an (n, k) = (255, 247) R-S code, where each symbol is made up of m = 8 bits (one byte). Since n − k = 8, this code can correct any four symbol errors in a block of 255 symbols. Imagine the presence of a noise burst lasting 25 bit durations and disturbing one block of data during transmission, as illustrated in Figure 4.7.
In this example, notice that a burst of noise that lasts for a duration of 25 continuous bits must
disturb exactly four symbols. The RS decoder for the (255, 247) code corrects any four-
symbol errors regardless of damage type. In other words, when a decoder corrects a byte, it
replaces the incorrect byte with the correct one, regardless of number of bit errors in the
symbol. Thus if a symbol is wrong, it might as well be wrong in all of its bit positions. This
gives an RS code a tremendous burst-noise advantage over binary codes, even allowing for
the interleaving of binary codes. In this example, if the 25-bit noise disturbance had occurred
in a random fashion rather than as a contiguous burst, it should be clear that many more than
four symbols would be affected (as many as 25 symbols might be corrupted). Of course, that
would be beyond the capability of the RS (255, 247) code.
4.2.1. Encoding
The encoding of an RS code starts with understanding the code word it forms. Equation (4.14) defines Reed-Solomon (RS) codes in terms of the parameters n, k, t, and any positive integer m > 2:
(n, k) = (2^m − 1, 2^m − 1 − 2t)        (4.14)
Where n - k = 2t is the number of parity symbols, and t is the symbol-error correcting
capability of the code. The generating polynomial for an RS code takes the following form
[12]:
g(X) = g_0 + g_1 X + g_2 X^2 + ... + g_{2t−1} X^{2t−1} + X^{2t}        (4.15)
The degree of the generator polynomial is equal to the number of parity symbols. Consider as
an example the (7, 3) double-symbol-error correcting RS code. We describe the
generator polynomial in terms of its 2t = n - k = 4 roots, as follows:
g(X) = (X − α)(X − α^2)(X − α^3)(X − α^4)        (4.16)
This can be simplified using the addition and multiplication operations defined over the finite field GF(2^3):
g(X) = X^4 − α^3 X^3 + α^0 X^2 − α^1 X + α^3        (4.17)
Since Reed-Solomon codes are cyclic codes, encoding in systematic form is analogous to the binary encoding procedure [11]. It can be thought of as shifting a message polynomial m(X) into the rightmost k stages of a code word register and then appending a parity polynomial p(X) by placing it in the leftmost n − k stages. Therefore we multiply m(X) by X^{n−k}, which right-shifts the message polynomial by n − k positions. The next step is dividing X^{n−k} m(X) by the generator polynomial g(X), which is written in the following form:
X^{n−k} m(X) = q(X) g(X) + p(X)        (4.18)
where q(X) and p(X) are the quotient and remainder polynomials, respectively. Equation (4.18) can be expressed as:
p(X) = X^{n−k} m(X)  modulo  g(X)        (4.19)
The resulting code word polynomial, U(X) can be written as
𝑈(𝑥) = 𝑝(𝑥) + 𝑥 𝑛−𝑘 𝑚(𝑥) (4.20)
A valid code word is of the following form:
𝑈(𝑥) = 𝑔(𝑥) 𝑚(𝑥) (4.21)
When the code word is evaluated at any root of the generator polynomial g(X), it should produce zero. For example, for the RS(7, 3) code whose four roots are α, α^2, α^3, α^4, the code word U(X) evaluated at these roots must be zero:
U(α) = U(α^2) = U(α^3) = U(α^4) = 0        (4.22)
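As a small illustration of the (7, 3) example, the sketch below builds the generator polynomial and one systematic code word, assuming the Communications Toolbox functions rsgenpoly, gf and rsenc behave as described; the message symbols are arbitrary illustrative values.
m = 3; n = 2^m - 1; k = 3;      % RS(7,3): 3-bit symbols, t = 2
gp  = rsgenpoly(n, k)           % generator polynomial with roots alpha, alpha^2, alpha^3, alpha^4
msg = gf([1 3 5], m);           % three message symbols in GF(2^3) (illustrative values)
cw  = rsenc(msg, n, k)          % 7-symbol systematic code word (message plus 4 parity symbols)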
4.2.2. Decoding
Reed-Solomon algebraic decoding procedures can correct errors within the code's error correcting capability. A decoder can correct up to (n − k)/2 symbol errors.
When a code word is decoded and the errors are not more than the error correcting capability
limit, then the original transmitted code word will always be recovered. If this is not the case
then there are two possibilities:
The decoder will detect that it cannot recover the original code word and
indicate this fact.
The decoder will mis-decode and recover an incorrect code word without
any indication.
The probability of either outcome depends on the particular Reed-Solomon code and on the number and distribution of errors. If there is an erroneous transmission, the received corrupted code word polynomial r(X) is represented by the sum of the transmitted code word polynomial and the error-pattern polynomial, as in Equation (4.23):
r(X) = U(X) + e(X)        (4.23)
An important difference between non-binary decoding of r(X) and binary decoding is that in binary decoding the decoder only needs to find the error locations [11]. In binary decoding the value at an error location is simply flipped, because it is either 0 or 1. When dealing with non-binary symbols, as in this case, we must not only find the error locations but also determine the correct symbol values at those locations. We start with syndrome decoding.
Syndrome Decoding
The syndrome is the result of a parity check performed on r to determine whether r is a valid member of the code word set [12]. The syndrome S is zero if r is a member of the code word set; any nonzero value of S indicates the presence of errors. Similar to the binary case, the syndrome S is made up of n − k symbols, {S_i}, i = 1, ..., n − k.
In the case of the RS(7, 3) code, there are four symbols in every syndrome vector. Their values can be computed from the received polynomial r(X). Equation (4.21) is rewritten below:
U(X) = g(X)m(X)        (4.24)
The above equation states that the roots of the generator polynomial g(X) must also be roots of the code word polynomial U(X). Since r(X) = U(X) + e(X), r(X) evaluated at each of the roots of g(X) yields zero when the received polynomial is a valid code word; any error results in a nonzero value. The computation of a syndrome symbol can be described as:
S_i = r(X)|_{X=α^i} = r(α^i),    i = 1, ..., n − k        (4.25)
Each syndrome symbol Si will be equal to 0 if r(X) is a valid code word.
An essential property of the standard array is that all vectors in the same row (coset) have the same syndrome [12]. This also holds for RS codes: the syndrome value is the same whether it is obtained by evaluating e(X) or r(X) at the roots of g(X).
Error Location
Let there be ν errors in a code word, at positions X^{j1}, X^{j2}, ..., X^{jν}. Then the error polynomial e(X) can be written as follows:
e(X) = e_{j1} X^{j1} + e_{j2} X^{j2} + ... + e_{jν} X^{jν}        (4.26)
The error locations are indexed by j, and the first, second, ..., ν-th errors are denoted by the indices 1, 2, ..., ν. To correct the corrupted code word, each error location X^{jl}, where l = 1, 2, ..., ν, and its error value e_{jl} must be determined. We define an error-locator number as β_l = α^{jl}. Next, we obtain the n − k = 2t syndrome symbols by substituting α^i into the received polynomial for i = 1, 2, ..., 2t:
S_1 = r(α) = e_{j1} β_1 + e_{j2} β_2 + ... + e_{jν} β_ν
S_2 = r(α^2) = e_{j1} β_1^2 + e_{j2} β_2^2 + ... + e_{jν} β_ν^2
⋮
S_2t = r(α^{2t}) = e_{j1} β_1^{2t} + e_{j2} β_2^{2t} + ... + e_{jν} β_ν^{2t}        (4.27)
These 2t equations cannot be solved in the usual way because they are nonlinear; instead, a Reed-Solomon decoding algorithm is used. The presence of an error is signalled when a nonzero syndrome vector has been calculated. Next, it is necessary to find the location of the error or errors. An error-locator polynomial σ(X) can be defined as follows:
σ(X) = (1 + β_1 X)(1 + β_2 X) ⋯ (1 + β_ν X) = 1 + σ_1 X + σ_2 X^2 + ... + σ_ν X^ν        (4.28)
The roots of σ(X) are 1/β_1, 1/β_2, ..., 1/β_ν; the reciprocals of these roots are the error-locator numbers of the error pattern e(X). Using autoregressive modelling techniques [12], we form a matrix from the syndromes, in which the first t syndromes are used to predict the next syndrome. That is,
[ S_1      S_2      S_3      ...  S_{t−1}   S_t      ] [ σ_t     ]   [ −S_{t+1}  ]
[ S_2      S_3      S_4      ...  S_t       S_{t+1}  ] [ σ_{t−1} ]   [ −S_{t+2}  ]
[ ⋮                                                  ] [ ⋮       ] = [ ⋮         ]
[ S_{t−1}  S_t      S_{t+1}  ...  S_{2t−3}  S_{2t−2} ] [ σ_2     ]   [ −S_{2t−1} ]
[ S_t      S_{t+1}  S_{t+2}  ...  S_{2t−2}  S_{2t−1} ] [ σ_1     ]   [ −S_{2t}   ]        (4.29)
We apply the autoregressive model of Equation (4.29) using the largest-dimensioned matrix with a nonzero determinant. To solve for the error-locator polynomial σ(X) and its coefficients (σ_1 and σ_2 for the double-error-correcting example), we define:
A = [ S_1      S_2      ...  S_t
      S_2      S_3      ...  S_{t+1}
      ⋮
      S_{t−1}  S_t      ...  S_{2t−2}
      S_t      S_{t+1}  ...  S_{2t−1} ]        (4.30)

X = [ σ_t, σ_{t−1}, ..., σ_2, σ_1 ]^T        (4.31)

b = [ −S_{t+1}, −S_{t+2}, ..., −S_{2t−1}, −S_{2t} ]^T        (4.32)
We solve for x as
𝑋 = 𝐴−1 𝑏 (4.33)
The roots of σ(X) are the reciprocals of the error locations. Once these roots are found, the error locations are known. To locate an error, we test each field element with the σ(X) polynomial; an element X is a root if it results in σ(X) = 0.
Once we know the number of error locations we know the length of the error polynomial defined in Equation (4.26), rewritten below:
e(X) = e_{j1} X^{j1} + e_{j2} X^{j2} + ... + e_{jν} X^{jν}        (4.34)
A received vector is decoded correctly as long as the distance between the received vector and the transmitted code word is less than the distance between the received vector and any other code word. A linear block code with minimum distance d_min can correct all error patterns of weight less than or equal to t = ⌊(d_min − 1)/2⌋, where t is called the error-correction capability of the code; it is the upper bound on the weight of error patterns that can always be corrected.
A Reed-Solomon code has minimum distance d = n − k + 1; the decoder can detect up to d − 1 = n − k symbol errors and correct up to ⌊d/2⌋ = t symbol errors.
4.4 System design
In this section we discuss the work done in this thesis. We start with the research methodology and then explain the implementation. Our main target was to test OFDM systems with channel coding and evaluate the performance based on the improvement in BER. We describe the way this research was carried out and its implementation technique.
Two MATLAB functions, 'modem.pskdemod()' and 'demodulate()', are used to perform the QPSK demodulation. The MATLAB implementation of the OFDM receiver is given in appendix A-5.
4.6.3 Linear Block Coding
We used the MATLAB function 'encode()', which supports linear block codes. We need to provide the message length k, the code word length n and the generator matrix for computing the code word. The encoder and decoder for linear block coding are given in appendix A-6 and A-7.
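A hedged sketch of this step is shown below, using the generator matrix of Equation (5.3) and assuming the Communications Toolbox syntax encode(msg,n,k,'linear/binary',genmat) and the matching decode call; the message length is an arbitrary illustrative value and no channel errors are applied.
n = 6; k = 2;                        % (6,2) code, rate 1/3
genmat = [1 0 1 0 1 0;
          0 1 0 1 1 1];              % generator matrix from Equation (5.3)
msg    = randi([0 1], 100*k, 1);     % example message bits (length a multiple of k)
coded  = encode(msg, n, k, 'linear/binary', genmat);   % linear block encoding
rx     = coded;                      % placeholder: OFDM modulation, channel and demodulation go here
dec    = decode(rx, n, k, 'linear/binary', genmat);    % linear block decoding
isequal(dec, msg)                    % sanity check (true when no channel errors are applied)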
4.6.4 Reed-Solomon Coding
For Reed-Solomon encoding we made some changes to the binary data. We first converted the binary data to the decimal symbols required by the Reed-Solomon encoder. The 'fec.rsenc()' function is used to create the Reed-Solomon encoder and the 'encode()' function is used to encode the data. After encoding, the data is converted back to binary. The Reed-Solomon encoder and decoder functions are given in appendix A-8 and A-9.
At the decoder side the data is once again converted from binary to decimal symbols. The 'fec.rsdec()' function is used to create the Reed-Solomon decoder and the 'decode()' function is used to decode the data. After that the data is converted back to binary.
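The sketch below mirrors this binary-to-symbol conversion in a hedged way, using rsenc/rsdec instead of the older fec.rsenc/fec.rsdec objects used in the appendix; the data length is an illustrative value and no channel errors are introduced.
m = 4; n = 15; k = 5;                                      % RS(15,5) with 4-bit symbols, rate 1/3
bits    = randi([0 1], 40*k*m, 1);                         % example binary data
symbols = bi2de(reshape(bits, m, []).', 'left-msb');       % group bits into 4-bit symbols
msg     = gf(reshape(symbols, k, []).', m);                % k-symbol message words over GF(2^m)
cw      = rsenc(msg, n, k);                                % Reed-Solomon encoding
dec     = rsdec(cw, n, k);                                 % Reed-Solomon decoding (no errors here)
symBack = reshape(double(dec.x).', [], 1);                 % decoded symbols back in serial order
rx_bits = reshape(de2bi(symBack, m, 'left-msb').', [], 1); % convert symbols back to bits
isequal(rx_bits, bits)                                     % sanity check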
CHAPTER FIVE
Simulation Results and Discussion
This chapter discusses the results of simulations performed to investigate the performance of
the considered error control codes for OFDM systems. On the basis of simulations we have
tried to explain the results and discussed the reasons leading to such results.
5.1. Simulation setup
To keep the conditions the same for all simulations, the input data, the fading channel and the noise are generated once and saved. The input data is kept the same for the testing of all coding schemes. The total number of carriers in our OFDM system is 64, of which 52 are used. We have used Eb/No (energy per bit to noise power ratio) in our testing. Eb/No is useful when comparing the BER of digital modulation schemes without taking the bandwidth into account.
The Rayleigh fading [15] channel is generated once and saved, and the same fading is used for all simulations. The fading is defined by the following MATLAB expression [15]:
Rayleigh fading = sqrt(0.5*(randn(N,1) + 1i*randn(N,1)))        (5.1)
where 'randn()' generates values from the standard normal distribution and N is the length of the fading signal.
The random noise is also generated once using the 'randn()' function and saved. It is loaded in the receiver section, its power is adjusted according to the given Eb/No, and it is then added to the received signal. In this way the same AWGN is added to the data for all simulations. Using the same fading and AWGN yields the same error patterns for all the coding techniques.
The final Eb/No changes due to the following reasons:
1. The number of bits after channel coding is larger than the number of bits before coding, which decreases Eb/No.
2. The number of modulated symbols is less than the total number of bits, which increases Eb/No. In our case QPSK is used, so two bits form one modulated symbol and there are four possible modulated signals.
3. There are 64 OFDM carriers in our simulation, of which 52 are used. The unused carriers decrease the effective Eb/No.
4. After cyclic prefix addition, 16 samples are repeated at the start of each symbol, so each symbol grows from 64 to 80 samples. This also decreases Eb/No.
So the new adjusted Eb/No is given by:
Eb/No (adjusted) = Eb/No + 10 log10(code rate) + 10 log10(log2(M)) + 10 log10( data bits / (data bits + pilot bits + carrier bits) )        (5.2)
where 'code rate' is the rate of the error control code (equal to 1 for uncoded OFDM) and M is the number of signals in the modulation scheme (for QPSK, M = 4). Points 3 and 4 above are combined to obtain the last term of Equation (5.2). A similar form of adjusted Eb/No is given in [16]; we have changed the equation according to our requirements.
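As a hedged numeric example, the adjustment of Equation (5.2) can be evaluated in MATLAB with the thesis parameters (QPSK, 52 used carriers out of 80 time-domain samples per OFDM symbol, and a rate-1/3 code); the vector of nominal Eb/No values is illustrative.
EbNo      = 0:2:14;                                    % nominal Eb/No values in dB (illustrative)
code_rate = 5/15;                                      % rate-1/3 channel code (k/n)
M         = 4;                                         % QPSK
EbNo_adj  = EbNo + 10*log10(code_rate) ...
                 + 10*log10(log2(M)) ...
                 + 10*log10(52/80)                     % adjusted Eb/No as in Equation (5.2)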
The 'biterr()' function of MATLAB is used to compute the BER and the number of errors by comparing the binary data. We use BER versus Eb/No plots to evaluate the coding schemes. The three main MATLAB scripts for the three channel coding schemes are given in appendix A-1, A-2 and A-3.
The bits input to the encoder are matched to the bits output of the decoder. BER is defined as
the ratio of total number of unmatched bits to the total number of bits.
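For example, with illustrative bit vectors, the computation looks like this (assuming the Communications Toolbox function biterr used in the appendix scripts):
tx = [0 1 1 0 1 0 0 1].';            % transmitted bits
rx = [0 1 0 0 1 0 1 1].';            % received bits with two errors
[numErr, ber] = biterr(rx, tx)       % numErr = 2, ber = 2/8 = 0.25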
5.2 Results for our OFDM system design
The result of the OFDM system simulation is shown in Figure 5.1 below.
Figure 5.1: Bit error probability curve of the OFDM system (simulated and theoretical), BER versus Eb/No (dB).
From the figure, as the energy per bit to noise power ratio increases, the bit error rate decreases gradually. We can further decrease the bit error rate by inserting a guard interval in the OFDM system block diagram, since inter-symbol interference and inter-carrier interference decrease when a guard interval is inserted.
5.3. Results for Linear Block Codes
The simulation is done for a code rate of 1/3. The generator matrix used for the code was:
G_{1/3} = [1 0 1 0 1 0; 0 1 0 1 1 1]        (5.3)
The generator matrix takes two bits as input and outputs six bits, so the code rate is 1/3. This generator was selected as the simplest possible generator matrix. The result for linear block codes at code rate 1/3 is shown in Figure 5.2.
Figure 5.2: Bit error probability curve using Linear Block Coding and OFDM, BER versus Eb/No (dB).
From the above figure, the bit error rate decreases as the energy per bit to noise power ratio increases; for example, when Eb/No is approximately 9.5 dB the bit error rate is 10^-2, and when Eb/No is 12 dB the bit error rate is 10^-3.
5.4 Results for Reed-Solomon coding
A code rate of 1/3 was also used for testing the Reed-Solomon encoder with a symbol length of 4 bits. The code used was the (15, 5) code, giving a code rate of 1/3.
The results of Reed-Solomon coding with OFDM are shown in Figure 5.3.
Figure 5.3: Bit error probability curve using RS Block Coding and OFDM, BER versus Eb/No (dB).
From the above figure, the bit error rate decreases as the energy per bit to noise power ratio increases; for example, when Eb/No is approximately 4 dB the bit error rate is 10^-2, and when Eb/No is 8 dB the bit error rate is 10^-3.
Figure: Bit error probability curve using uncoded OFDM, BER versus Eb/No (dB).
From the above figure, the bit error rate decreases as the energy per bit to noise power ratio increases; for example, when Eb/No is 14 dB the bit error rate is 10^-2, and when Eb/No is approximately 17 dB the bit error rate is 10^-3.
5.6 Comparison between uncoded OFDM, linear block coding and Reed-Solomon coding
Table 3: Comparison between uncoded OFDM, linear block coding and Reed-Solomon coding
CHAPTER SIX
Conclusions and Recommendation
6.1 Conclusion
Linear block codes are very simple to implement. They show good performance, work well at lower code rates, and are highly suitable for applications with low complexity requirements and modest performance requirements. Reed-Solomon codes are more difficult to implement than linear block codes, but their performance is better and more consistent, since they can handle long bursts of errors. They showed good performance at the tested code rates. For systems where very high performance is required, Reed-Solomon codes are recommended; for systems that do not allow a complicated structure, linear block codes are the preferable solution.
6.2 Recommendation
Future work could compare other error correcting techniques. Another direction is to study the trade-off between performance and complexity as the number of bits input to the encoder is increased.
References
[1] Wikipedia. Fading Channels [online] (last modified February 2013). http://en.wikibooks.org/wiki/Communication_Systems/Fading_Channels
[2] Hui Liu and Guoqing Li, OFDM-Based Broadband Wireless Networks: Design & Optimization, Wiley, New Jersey, pp. 1-30.
[3] Bahai, A., and B. Saltzberg, Multicarrier Digital Communications: Theory and Applications of OFDM. New York: Kluwer Academic/Plenum Publishers, 1999.
[4] Ioannis A. Chatzigeorgiou, "A Comparison of Convolutional and Turbo Coding Schemes for Broadband FWA Systems", CiteSeer, DOI=10.1.1.100.6680.
[5] HeauJo Kang, "Performance Analysis of OFDM System using Adaptive FEC Code Control Technique for Mobile Wireless Communication", International Conference on Intelligent Pervasive Computing, Korea, 2007.
[6] Kumar S. and Gupta R., "Performance Comparison of Different Forward Error Correction Coding Techniques for Wireless Communication Systems", IJCST, Vol. 2, Issue 3, September 2011.
[7] Robertson, P. and Kaiser, S., "The Effects of Doppler Spreads in OFDM(A) Mobile Radio Systems", 50th Vehicular Technology Conference, Amsterdam, September 1999, VTC 1999-Fall.
[8] Haas, R. and Belfiore, J.C., "A Time-Frequency Well-localized Pulse for Multiple Carrier Transmission", Wireless Personal Communications, Vol. 5, Issue 1, pp. 1-18, July 1997, DOI:10.1023/A:1008859809455.
[9] Clark, G. and Cain, J., Error-Correction Coding for Digital Communications. New York: Plenum Publishing Corporation, 1988.
[10] Hagenauer, J. and Hoeher, P., "A Viterbi Algorithm with Soft-Decision Outputs and Its Applications", Proc. GLOBECOM '89, Dallas, Texas, November 1989, pp. 1680-1686.
[11] Sklar, B., Digital Communications: Fundamentals and Applications, Second Edition. Upper Saddle River, NJ: Prentice-Hall, 2001.
[12] Blahut, R. E., Theory and Practice of Error Control Codes. Reading, MA: Addison-Wesley, 1983.
[13] Weinstein, S.B. and Ebert, P.M., "Data Transmission by Frequency Division Multiplexing using the Discrete Fourier Transform", IEEE Transactions on Communications, Vol. 19, Issue 5, pp. 628-634, Oct. 1971.
Appendix
%%Appendix 1.1. for OFDM system design
clear all
close all
clc
nbits = 208000;
modlevel = 2 ;
nbitpersym = 52; % number of bits per qam OFDM symbol (same as the number of subcarriers for 16-qam)
nsym = 10^4; % number of symbols
len_fft = 64; % fft size
sub_car = 52; % number of data subcarriers
EbNo = 0:2:15;
EsNo= EbNo+10*log10(52/64)+ 10*log10(64/80) +10*log10(4);
snr=EsNo - 10*log10((64/80));
M = modem.qammod('M',16); % modulation object
% Generating data
t_data=randint(nbitpersym*nsym*4,1,2);
qamdata=bi2de(reshape(t_data,4,520000).','left-msb');
maping = bin2gray(qamdata,'qam',16);
% modulating data
mod_data =1/sqrt(10)* modulate(M,maping);
% serial to parallel conversion
par_data = reshape(mod_data,nbitpersym,nsym).';
% pilot insertion
pilot_ins_data=[zeros(nsym,6) par_data(:,[1:nbitpersym/2]) zeros(nsym,1) par_data(:,[nbitpersym/2+1:nbitpersym]) zeros(nsym,5)];
% inverse Fourier transform: frequency domain data to time domain
IFFT_data =ifft(fftshift(pilot_ins_data.')).';
a=max(max(abs(IFFT_data)));
IFFT_data=IFFT_data./a; % normalization
LB_BER=[];
for ii=1:length(EbNo)
ii
% OFDM Reciever
[ demod_Data1 ] = OFDM_reciever(chan_data,EbNo(ii),fading,nbitpersym,nsym );
% Channel Decoding
[ decoded_data ] = LBlock_Decoder( demod_Data1,n,k,genmat );
% Error Calculation
[no_of_error(ii),LB_BER(ii)]=biterr(decoded_data,t_data2); % error rate calculation
end
% plotting the result
%close all
EbNo_LB=EbNo+10*log10(k/n)+10*log10(2)+10*log10(52/80);
figure, semilogy(EbNo_LB,LB_BER,'--or');
grid on, xlabel('Eb/No(dB)'), ylabel('BER');
title('Bit error probability curve using Linear Block Coding and OFDM');
legend('BER LB Coding')
% %For code rate =1/3
EbNo_LB1b3=EbNo_LB;
LB_BER1b3=LB_BER;
save('LBdata_1b3.mat','LB_BER1b3','EbNo_LB1b3');
%For code rate =1/2
% EbNo_LB1b2=EbNo_LB;
% LB_BER1b2=LB_BER;
% save('LBdata_1b2.mat','LB_BER1b2','EbNo_LB1b2');
nbitpersym = 52; % number of bits per OFDM symbol (same as the number of subcarriers for BPSK)
% nsym = 13500; number of symbols
len_fft = 64; % fft size
sub_car = 52; % number of data subcarriers
EbNo = 0:16;
% Channel Coding setting
% For Code rate = 1/3
% n=15;
% k=5;
% For Code rate = 1/2
% n=15;
% k=7;
% For Code rate = 2/3
n=15;
k=10;
% Data generation
t_data2 = randint(18720,1);
% These two steps are to match the exact input length required by
si=floor(length(t_data2)/(log2(n+1)*104*k))*(log2(n+1)*104*k);
t_data2=t_data2(1:si);
% Channel Coding
[ t_data1 ] = RS_Encoder( t_data2,n,k);
% OFDM Transmitter
nsym = length(t_data1)/104;
[ ser_data ] = OFDM_transmitter(t_data1,nbitpersym,nsym); % ser_data is transmitted data
% Passing through channel, applying fading.
fading = sqrt(0.5*(randn(19200,1) + 1i*randn(19200,1))); % using fixed (once generated and saved) fading data for all simulations
fading = fading(1:size(ser_data)); % Fading signal length adjusted to match passing signal
chan_data = fading.*ser_data; % Applying Channel Fading
% Reciever end
no_of_error=[];
RS_BER=[];
for ii=1:length(EbNo)
ii
% OFDM Reciever
[ demod_Data ] = OFDM_reciever(chan_data,EbNo(ii),fading,nbitpersym,nsym );
% Channel Decoding
[ decoded_output ] = RS_Decoder(demod_Data,n,k);
% Error Calculation
[no_of_error(ii),RS_BER(ii)]=biterr(decoded_output,t_data2)
end
%close all
%load uncoded_OFDM.mat
EbNo_RS=EbNo+10*log10(k/n)+10*log10(2)+10*log10(52/80);
figure, semilogy(EbNo_RS,RS_BER,'--or');
grid on, xlabel('Eb/No'), ylabel('BER');
title('Bit error probability curve using RS Block Coding and OFDM');
legend('BER RS Coding')
% % %For code rate =1/3
% EbNo_RS1b3=EbNo_RS;
% RS_BER1b3=RS_BER;
% save('RSdata_1b3.mat','RS_BER1b3','EbNo_RS1b3');
%For code rate =1/2
% EbNo_RS1b2=EbNo_RS;
% RS_BER1b2=RS_BER;
% save('RSdata_1b2.mat','RS_BER1b2','EbNo_RS1b2');
sigp_dB = 10*log10(norm(chan_data,2)^2/numel(chan_data)); % signal power in dB
noisep_dB = sigp_dB-EbNo; % noise power in dB
noisep = 10^(noisep_dB/10); % noise power in linear scale
noise = randn(100000,1); % using fixed (once generated and saved) noise for all simulations
% Noise is scaled by the standard deviation to achieve the target SNR; its length is
% adjusted according to the signal length
% noise = sqrt(noisep)*noise(1:length(chan_data),1);
noise = sqrt(noisep)*(noise(1:length(chan_data),1)+noise(1:length(chan_data),1)*1i);
chan_awgn= chan_data+noise; % Noise addition in signal
% Error Calculation
[no_of_error(ii),OFDM_BER(ii)]=biterr(demod_Data1,t_data2) ; % error rate calculation
end
% plotting the result
figure,
EbNo_uncoded=EbNo+ 10*log10(2)+10*log10(52/64);
semilogy(EbNo_uncoded,OFDM_BER,'--.r')
grid on, xlabel('Eb/No'), ylabel('BER');
title('Bit error probability curve using uncoded OFDM');
save('uncoded_OFDM.mat','OFDM_BER','EbNo_uncoded');