
EE 371C Wireless Communications Lab

Lecture Notes R. W. Heath Jr.

2 Introduction to digital communication

• Objective: Explain why digital communication is relevant.


• Objective: Explain the components of a digital communication system.
• Objective: Define relevant terminology in digital communications.
• Principles of communication
− Transmitter: generates information (“source”)
− Channel : propagation medium
− Receiver : processes information (“sink”)
• Analog vs. digital communication
− Analog - source is a continuous-time waveform
− Digital - source is a digital a.k.a., binary sequence
† Both analog and digital communication systems actually send a continuous-time waveform.
The difference is the source.
† Digital communication uses a finite number of possible waveforms.
• Why digital communication?
− Suitable for digital data
− Use high-quality reproducible digital components
· Analog components have variable specifications, which affects their continuous-valued outputs
· Digital components are built from analog components, but their output is typically independent of those specifications thanks to the discrete levels
− Security
− Robustness to noise
− Easy to support multiple rates (i.e., multiplexing)
− Easy to change + reconfigure (e.g., SDR : software defined radio) though note that much is
still in hardware
− Major advantage : use digital signal processing (DSP) to take advantage of Moore’s law
· Lower power and cost
· Analog components are not advancing as quickly as digital
• Relevant applications of wireless digital communication
− Cellular Communication Networks
− Paging Networks
− Broadcast Radio
− Broadcast Television
− Wireless Local Area Networks (WLANs)

Figure 1: The components of a typical digital communication system.

− Broadband Wireless Access Networks


− Personal Area Networks (PANs)
− Satellite Systems

2 Components of a Digital Communication System

• Objective: Explain components of a digital communication system.


• Objective: Explain terminology.
• Overview of components (overview is representative but not complete, more pieces will be filled
in throughout the course)
− Transmitter: Source → Source encoder → Channel code → Modulation →
− Channel: Analog front end → Propagation channel → Analog front end →
− Receiver: Demodulation → Decoding → Source reconstruction → Sink
• Source: origin of information {s[n]}
• Source encoder: purpose is to perform compression
− Can be lossy
− Can be lossless
• Source decoder: purpose is to uncompress the original source, performs reconstruction
− Perfect reconstruction for lossless codes (e.g. zip encoding)
− Imperfect reconstruction for lossy codes (e.g. jpeg)
• Encryption: purpose is to convert information into something that is hard to understand by an
unintended recipient
− Encryption is one step in a security protocol that may include multiple levels of security
− Well known example is public key encryption (which requires exchange of auxiliary infor-
mation in the form of a key)

• Decryption: purpose is to remove encryption to make the transmitted signal readable.
• Channel code : basic idea is to add redundancy to correct channel errors (i.e., use error correction
or error detection)
− May perform error correction, e.g., a repetition code
− May perform error detection, e.g., a CRC (cyclic redundancy check)
− Code rate = (# of uncoded bits) / (# of coded bits)
− Types of forward error correction (FEC) also known as error control codes (ECC)
1. Block code (developed in the 50’s) - encode a block of bits to produce another block.
For example a R = 1/2 block code takes 5 bits in and produces 10 bits out. Block
codes are good for burst errors. Example is Reed-Solomon code used in DSL.
2. Convolutional code (developed 60’s - 70’s) - convolve data with multiple finite impulse
response (FIR) filters in the binary field and interleave the results. Note that convolu-
tional codes are good for random but not burst errors. Used in GSM, IEEE 802.11g,
IEEE 802.16, etc. Typically combined with interleaving. Puncturing used to control
the rate.
· Example: the (5, 7) rate-1/2 code (generator polynomials 101 and 111 in binary)
  c[2n] = i[n] ⊕ i[n − 2]
  c[2n + 1] = i[n] ⊕ i[n − 1] ⊕ i[n − 2]
· Constraint length of 3 (memory of 2 bits)
· More than 2 bit errors within a decoding window is a problem
3. Trellis code (developed in the 80's) - generalization of a convolutional code where
modulation and coding are combined. Used in V.34 modems.
4. Turbo code (invented in ’93, developed in the 90’s - 00’s) - tricky extension of convo-
lution coding
· Uses convolutional code with specially designed IIR filters
· Uses a deep random interleaver between inputs
· Use special iterative maximum a posteriori decoder
· Very good error protection, but need large block length (e.g., ∼ 10,000)
· Used in the 3G WCDMA system, others under consideration
5. LDPC (low density parity check) code (developed in the 60’s, rediscovered in ’00s) - a
block code with large block length and some sparse parity checks
· Carefully designed combined with special iterative MAP decoder
· Works with smaller block lengths (e.g., ∼ 1,000)
· Decoding is more complex
· Optional in the IEEE 802.11n standard, IEEE 802.16e
• Decoding: Reverse the encoding operation. Removes redundant information and corrects or
detects errors.
• Modulation: most generally, the process of mapping bits to analog waveforms
  c[n] (output of the channel code) −→ Modulator −→ m(t) (a voltage or current waveform)

− The mapping from bits to waveforms may be linear or nonlinear.
− It can be memoryless or with memory.
− Modulator often consists of the following
bits → symbol mapping → constellation mapping → pulse shaping filter → D/A → m(t)
− Symbol mapping: bits are grouped into symbols, e.g., the bit stream 01 11 10 11 · · · is grouped as s1 = 01, s2 = 11, s3 = 10, . . .
− Constellation mapping: symbols are mapped to points in a constellation C = {c1, c2, ..., cN}
  Cardinality |C| = 2^k = N, where k is the number of bits per symbol
  e.g.) QPSK constellation: C = { (1/√2)(±1 ± j) }
  e.g.) PAM: BPSK or 2-PAM maps 0 → +1 and 1 → −1, hence the constellation is C = {+1, −1}
† m(t) = Σ_n (−1)^{c[n]} g(t − nT), where g(t) is the transmitted pulse (i.e., the pulse-shaping filter)
• Demodulation: use received samples to infer transmitted sequence
− May use an estimate of the channel
− May include equalization
− May include detection
− May include sequence detection
− Two types of detection methods
1. Hard decision - each coded bit is demodulated as 1 or 0
e.g.)
Tx 0 1 0
Mod 1 −1 1
Rx −0.2 0.2 0.5
Demod 1 0 0
Here, only received symbol & no other info are used.
2. Soft decision - no hard decisions are made; instead soft decision made corresponding
to “distance” between received sequence & sequence corresponding to 0 or 1 bit trans-
mission
e.g.) C = {(+1, +1, +1), (−1, −1, −1)}

Tx 0
Mod (1, 1, 1)
Rx (1, 0.5, −0.5)
By soft decision, the decision is done by comparing “distance” between Rx symbol &
(+1, +1, +1), (−1, −1, −1).
Demod:
d2 (Rx, (+1, +1, +1)) = 02 + (1/2)2 + (3/2)2 = 2.5,
d2 (Rx, (−1, −1, −1)) = 22 + (3/2)2 + (1/2)2 = 6.5,
Thus, 0 is chosen.
† Soft decision is used for sequence detection and with more sophisticated types of
error control decoding algorithms.

• Channel
− Analog Front End: (after the D/A at the TX, before the A/D at the RX) performs all analog functions needed to produce the transmitted waveform and to preprocess the received waveform before reception
  · Filtering: used to bandlimit signals and to remove out-of-band noise and interference
  · Mixing: used to upconvert signals to a carrier frequency and to downconvert or remove the carrier frequency
− Propagation Channel
· Includes EM propagation effects such as reflection, transmission, diffraction, and scat-
tering
· Includes fading (random fluctuations of the signal envelope)
· Includes noise - random disturbance that degrades the signal

→ Equivalent system - includes all distortion resulting from the propagation channel
− Types of channels:
1. Additive noise: y(t) = x(t) + v(t), where v(t) models the disturbance due to thermal noise caused by the random motion of electrons
2. Multipath model (channel modeled as an LTI system): y(t) = ∫_0^L h(τ) x(t − τ) dτ + v(t)
3. Multiplicative (scalar channel): y(t) = h(t) x(t) + v(t)

2 Fundamental observation: communication signals are bandlimited.

• Objective: Explain the connection between communication and DSP.


• The transmitted signal is bandlimited
• The received signal is bandlimited.
• Implications of bandlimited property
− Use Nyquist’s Sampling Theorem to represent signals in discrete-time
− Use a byproduct of Nyquist to model the channel in discrete-time
• The bandlimited property makes the connection between communication and digital signal pro-
cessing (DSP)

2 DSP Preliminaries

• Objective: Describe signal types that we will encounter in this course.


• Types of Signals
− Deal with both real R or complex C signals
− Continuous-time signals: x(t) (t ∈ R)
− Discrete-time signals: x[n] (n ∈ Z)
− Digital signals xQ [n] where the possible values of xQ [n] come from a finite set
− We deal primarily with discrete-time signals but in practice the signals implemented in DSP,
FPGA, or ASIC are digital. This often requires designing special algorithms to take into
account fixed point arithmetic.

2 Stochastic Processes

• Objective: Explain the application of random processes in wireless communications.


• Objective: Calculate mean, covariance, correlation of a stochastic process using expectation or
ergodicity.
• Loose definition of a random process or stochastic process (used interchangeably)
− A random process is a probabilistic model for a random signal, sequence, or function
− Characterizes an ensemble of random signals, not just a single realization
− Very technical area with a deep mathematical structure
• Why use stochastic processes?
− The transmitted information in a communication system is well modeled as a random process
− Thermal noise, one of the most important sources of errors in a communication system, is well modeled as a random process
− The propagation channel, since it is fundamentally time-varying, is also well modeled as a random process; it is hard to model accurately using a deterministic approach. These models are used primarily for analysis and simulations.
• Probability review
− For continuous-valued random variables, let fx(t) denote the probability density function of x and Fx(α) = ∫_{−∞}^{α} fx(t) dt denote the cumulative distribution function, so that P(x ≤ α) = Fx(α)
− For discrete-valued random variables, let X denote the set of outcomes of x, let m ∈ {1, 2, . . . , |X|} index the values of that random variable, and let px[m] denote the probability mass function of x for the mth outcome in X
− Mean of a random variable
  · Continuous-valued: Ex = ∫_{−∞}^{∞} x fx(x) dx
  · Discrete-valued: Ex = Σ_{m∈{1,2,...,|X|}} X[m] px[m]
− Variance: σ 2 = E|x|2 − (Ex)2
• Random processes
− A random process is basically a collection of indexed random variables

· For a continuous-time random process x(t), t is the continuous-time index and x(t) is
a random variable at time t ∈ R
· For a discrete-time random process x[n], n is the integer index and x[n] is the random
variable at time n
− Focus explanation on continuous-valued discrete-time random processes (DTRP) for sim-
plicity
− A discrete-time random process is basically a sequence of random variables
− There are different characterizations of a random process based on what is known
· First order characterization of a DTRP is knowing the distribution for each n, i.e.
Fx[n] (α)
· Second order characterization of a DTRP is knowing the joint distribution between
x[n1 ] and x[n2 ] for any choice of n1 and n2 : Fx[n1 ],x[n2 ] (α1 , α2 ) (includes the first
order as a special case)
· Higher order characterizations can be similarly defined
· A random process is fully characterized by knowledge of all its higher order distribu-
tions
• Independent identically distributed (i.i.d.) random process
− Most common kind of random process we will consider
− All the collections of random variables (x[n1 ], x[n2 ], . . . , x[nk ]) for k > 1 are independent

Fx[n1 ],x[n2 ],...,x[nm ] (α1 , α2 , . . . , αm ) = Fx[n1 ] (α1 ) Fx[n2 ] (α2 ) · · · Fx[nm ] (αm ) .

− Second: all the random variables are identically distributed thus

Fx[nk ] (αk ) = Fx (αk )

for all x[n1 ], x[n2 ], . . . , x[nk ]. Thus

Fx[n1 ],x[n2 ],...,x[nm ] (α1 , α2 , . . . , αm ) = Fx (α1 ) Fx (α2 ) · · · Fx (αm ) .

• Moments of a stochastic process


− The mean of a random process is
  mx[n] = E{x[n]} = ∫_{z=−∞}^{∞} z f_{x[n]}(z) dz.
− The correlation of a random process is
  Rxx[n1, n2] = E{x[n1] x*[n2]} = ∫_{z1=−∞}^{∞} ∫_{z2=−∞}^{∞} z1 z2* f_{x[n1],x[n2]}(z1, z2) dz1 dz2,
where we use * to denote the complex conjugate


− The covariance of a random process is

Cxx [n1 , n2 ] = Ex[n1 ]x[n2 ] {x[n1 ]x∗ [n2 ]} − mx[n1 ] m∗x[n2 ] .

Note that if the process is zero-mean Cxx [n1 , n2 ] = Rxx [n1 , n2 ]. Most processes we deal
with are zero mean.

− The variance is
σx2 [n] = Cxx [n, n].
The variance is the measure of the power of a random signal
− A process is called uncorrelated if
  Cxx[n1, n2] = 0 for n1 ≠ n2   (1)

• Wide sense stationarity


− A type of stationarity, e.g. a characterization of a random process based on its moments
− A wide sense stationary random process {x[n]} satisfies:
· The mean is constant mx[n] [n] = mx .
· The covariance (or equivalently the correlation) is only a function of the difference

Cxx [n1 , n2 ] = Cxx [n2 − n1 ].

− No assumptions are made about joint probability distributions associated with a WSS ran-
dom process
− By construction an i.i.d. random process is also WSS
• Ergodicity
− Under an ergodicity assumption, sample averages converge to expectations. Deep subject.
− Basically can observe one realization and can use it to estimate the moments for the entire
random process
− WSS RP are typically ergodic (necessary & sufficient conditions provided by Slutsky’s
theorem)
− Practical question: suppose you obtain an observation x[0], x[1], . . . , x[N − 1]. Possible estimators are
  mx = lim_{N→∞} (1/N) Σ_{n=0}^{N−1} x[n]   (2)
and
  Rxx[k] = lim_{N→∞} (1/(N − k)) Σ_{n=0}^{N−1−k} x[n] x*[n + k].   (3)
The covariance follows in a similar manner (a short numerical sketch of these sample estimators is given below)
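† A minimal Python/NumPy sketch of these sample estimators applied to one realization; the i.i.d. complex Gaussian process and its variance of 2 are illustrative assumptions, not values specified in the notes:

    import numpy as np

    rng = np.random.default_rng(0)
    N = 100000
    # one realization of an i.i.d. complex Gaussian process (variance 2, assumed for illustration)
    x = rng.normal(0, 1, N) + 1j * rng.normal(0, 1, N)

    # sample-mean estimate of m_x (eq. (2), without the limit)
    m_hat = x.mean()

    # sample estimate of R_xx[k] (eq. (3)) for a given lag k
    def corr_est(x, k):
        N = len(x)
        return np.sum(x[:N - k] * np.conj(x[k:])) / (N - k)

    print(m_hat)                    # near 0
    print(corr_est(x, 0).real)      # near 2 (the variance)
    print(abs(corr_est(x, 3)))      # near 0 for an i.i.d. process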


• i.i.d. Gaussian random process
− Most common model for additive thermal noise (AWGN)
− Refers to a i.i.d. random process where each sample is chosen from a Gaussian distribution
− The real Gaussian distribution is written in shorthand as N(m, σ²), where
  fv(x) = (1/√(2πσ²)) exp( −(x − m)²/(2σ²) ).

− Circularly symmetric complex Gaussian distribution Nc(m, σ²)
  · Real and imaginary parts are independent with the same variance but possibly different means
  · The distribution can be written as
    fv(x) = (1/(πσ²)) exp( −|x − m|²/σ² ),
  where x is complex
− The relevant moments are

mx = m (4)
2
Cxx [n] = σ δ[n]. (5)

2 Relevant Fourier transforms

• Objective: Define and compute various Fourier transforms.


• Several Fourier transforms arise in this course
• Continuous-time Fourier transform (CTFT)
− A (well-behaved) continuous-time function x(t) and its Fourier transform X(f ) are related
by the synthesis and analysis equations
Z
Analysis X(f ) = x(t)e−j2πf t dt (6)
t
Z
Synthesis x(t) = X(f )ej2πf t df. (7)
f

• Continuous-time Fourier series (CTFS)


− Consider a periodic signal x(t). The period, T > 0 is the smallest real number such that
x(t) = x(t + T ). The continuous-time Fourier series (CTFS) of a periodic signal x(t) can
be defined

1 T
Z

Analysis X[n] = x(t)e−j T nt dt (8)
T 0

X 2π
Synthesis x(t) = X[n]ej T nt (9)
n=−∞

− Define the Fourier transform of a periodic signal as
  Analysis: X(f) = Σ_n X[n] δ( f − n/T )   (10)
  Synthesis: x(t) = ∫_f X(f) e^{j2πft} df.

• Discrete-time Fourier transform (DTFT)

− For a well-behaved discrete-time signal,
  Analysis: X(e^{j2πf}) = Σ_{n=−∞}^{∞} x[n] e^{−j2πfn}   (11)
  Synthesis: x[n] = ∫_{−1/2}^{1/2} X(e^{j2πf}) e^{j2πfn} df.   (12)

The DTFT is also often written in radians using ω = 2πf . Perhaps the most important
difference between the CTFT and DTFT is that with the DTFT, f is only defined on any
finite interval of unit length. Typically f ∈ (−1/2, 1/2] or f ∈ [0, 1).
• Also deal with the Discrete Fourier Transform (will be defined later)

2 Sampling

• Objective: Determine the Nyquist sampling frequency for a given signal.


• Objective: Explain the theory behind sampling and reconstruction.
• Objective: Demonstrate aliasing.
• Nyquist sampling theorem: let xc(t) be a bandlimited signal, which means that Xc(f) = 0 for |f| ≥ fN. Then xc(t) is uniquely determined by its samples {x[n] = xc(nT)}_{n=−∞}^{∞} if the sampling frequency satisfies
  fs := 1/T > 2 fN   (13)
where fN is the Nyquist frequency and 2 fN is generally known as the Nyquist rate. Further,
  xc(t) = Σ_n x[n] sin(π(t − nT)/T) / (π(t − nT)/T).
Note that xc(nT) = x[n]. (A short numerical demonstration of sampling and aliasing is given at the end of this section.)


• Relation between the continuous-time signal and the discrete-time signal in the sampling theorem:
  X(e^{j2πf}) = (1/T) Σ_n Xc( (f − n)/T ).
− Note that X(e^{j2πf}) is periodic, as we expect.
− Pay attention to the scaling factor. Why do we have the scaling factor?
• If the sampling frequency satisfies the Nyquist rate, 1/T > 2 fN, then X(e^{j2πf}) = (1/T) Xc(f/T) for f ∈ [−1/2, 1/2).
• If the sampling frequency does not satisfy the Nyquist rate, 1/T < 2 fN, then X(e^{j2πf}) ≠ (1/T) Xc(f/T) on the fundamental interval: the shifted copies overlap and aliasing occurs.
• Illustration of sampling in the time domain.
• Illustration of sampling in the frequency domain.
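† A small Python/NumPy sketch demonstrating sampling and aliasing; the tone frequency and sampling rates are arbitrary illustrative choices:

    import numpy as np

    f0 = 3.0                      # tone frequency in Hz (arbitrary choice)
    for fs in (10.0, 4.0):        # 10 Hz satisfies fs > 2*f0; 4 Hz does not
        n = np.arange(32)
        x = np.cos(2 * np.pi * f0 * n / fs)       # samples x[n] = xc(n/fs)
        # locate the strongest positive frequency via a zero-padded FFT
        X = np.abs(np.fft.rfft(x, 1024))
        f_hat = np.argmax(X) * fs / 1024
        print(f"fs = {fs} Hz -> apparent tone at {f_hat:.2f} Hz")
    # with fs = 10 Hz the tone appears near 3 Hz; with fs = 4 Hz it aliases to near 1 Hz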

2 Discrete-time Processing of Bandlimited Continuous-Time Signals


Figure 2: Illustration of sampling in the time domain. Essentially periodic samples of xc (t) in (a) form the
values for the discrete-time signal x[n] in (b).

• Objective: Determine equivalent combination of sampling, digital filtering, and reconstruction


to process a bandlimited continuous-time signal with discrete-time signal processing.
• Discrete-time Processing of Bandlimited Signals: let xc(t) be bandlimited with maximum frequency fN and y(t) = ∫ h(τ) xc(t − τ) dτ. Suppose that the sampling period T is chosen such that T < 1/(2 fN). Then
  y(t) = Σ_n y[n] sinc( (t − nT)/T )
where
  y[n] = Σ_{m=−∞}^{∞} h[m] x[n − m],   x[n] = xc(nT),
and
  h[n] = T hl(nT)   (14)
       = T ∫_τ h(τ) sin(π(nT − τ)/T) / (π(nT − τ)/T) dτ.   (15)

• Discrete-time equivalent system

Figure 3: Illustration of sampling in the frequency domain. (a) Magnitude spectrum of a continuous-time signal. (b) Corresponding magnitude spectrum of the discrete-time signal when the Nyquist criterion is satisfied. (c) Corresponding magnitude spectrum of the discrete-time signal when the Nyquist criterion is not satisfied; in this example fs > fN but fs < 2 fN, so aliasing occurs.
Figure 4: Generation of a bandlimited signal using a discrete-to-continuous (D/C) converter: xc(t) = Σ_n x[n] sin(π(t − nT)/T) / (π(t − nT)/T).

− hl (t) is the low-pass filtered channel, i.e. h(t) filtered with an ideal low-pass filter with
cut-off frequency of fN .
− h[n] is the equivalent channel.

Figure 5: Sampling a signal yc(t) using a continuous-to-discrete-time (C/D) converter: y[n] = yc(nT).

− The frequency response of the equivalent channel is
  H(e^{j2πf}) = T · (1/T) L_{fN}(f/T) Hc(f/T) = L_{fN}(f/T) Hc(f/T)
for f ∈ [−1/2, 1/2), where L_{fN}(f) is a low-pass filter with cutoff frequency fN.


Figure 6: (a) A continuous-time LTI system with a bandlimited input. (b) Generating a bandlimited input
using a discrete-to-continuous converter, followed by processing with an LTI system, followed by sampling
with a continuous-to-discrete converter. (c) The discrete-time equivalent system.

2 Bandwidth of a signal

• Objective: Define and calculate various real measures of bandwidth.


• A signal’s bandwidth is determined from its Fourier transform or power spectrum (for random
signals)
• Bandwidth definition differs depending on whether the signal is baseband or passband
− Baseband signals have energy that is nonzero for frequencies near the origin.

− Passband signals have energy that is nonzero in a frequency band concentrated about f = ±fc, where fc is the carrier frequency and fc ≫ 0.
− All wireless signals are passband.
• Measures of bandwidth
− Absolute: smallest f2 − f1 such that Px (f ) = 0 for f > f2 and Px (f ) = 0 for f < f1
− Half-power (or 3 dB) bandwidth: smallest f2 − f1 such that Px(f1)/Px(fmax) = 0.5 and Px(f2)/Px(fmax) = 0.5, where fmax is the frequency where Px(f) achieves its maximum value.
− X dB bandwidth: extension of the 3 dB bandwidth to X dB.
− Fractional containment: let X% be the containment. Then the bandwidth is the smallest f2 − f1 such that ∫_{f2}^{∞} Px(f) df / ∫ Px(f) df = (1 − X)/2 and ∫_{−∞}^{f1} Px(f) df / ∫ Px(f) df = (1 − X)/2.

Figure 7: Illustration of two different notions of bandwidth in baseband and passband signals.

2 How to define the frequency response of a random signal?

• Objective: Define and calculate the power spectrum of a wide sense stationary stochastic pro-
cess.
• Random signals do not have a conventional definition of a spectrum. For example, if xc(t) is a random process, what does the Fourier transform of xc(t) mean? Is it well defined?
• The autocovariance function provides a definition of spectrum for WSS random signals.
• For continuous-time random processes, the power spectrum is defined using the CTFT as
Z ∞
Px (f ) = Cxx (t)e−j2πf t dt.
−∞

• For discrete-time signals, it is defined in a similar way using the DTFT as



X
Px (ej2πf ) = Cxx [n]e−j2πf n .
n=−∞

• Power spectrum is real and non-negative due to conjugate symmetry.

2 Complex envelope representation of wireless passband signals

• Objective: Use the complex envelope to represent a passband signal at baseband.


• Wireless communication signals are all real passband signals
• Most passband signals in wireless systems (except perhaps ultra-wideband) satisfy the narrowband assumption B ≪ fc. Essentially their bandwidth is much smaller than the carrier frequency.
• Very general way to write a passband signal carrier frequency fc

xp (t) = A(t) cos(2πfc t + φ(t))


= A(t) cos(φ(t)) cos(2πfc t) − A(t) sin(φ(t)) sin(2πfc t)
= xi (t) cos(2πfc t) − xq (t) sin(2πfc t)

xp (t) = A(t) cos(2πfc t + φ(t))


= xi (t) cos(2πfc t) − xq (t) sin(2πfc t)
n o
= Re x(t)ej2πfc t

• Complex envelope or complex baseband signal: x(t) = xi (t) + jxq (t)


• Upconversion takes a baseband signal and produces a passband signal

xp (t) = Re(x(t)) cos(2πfc t) − Im(x(t)) sin(2πfc t)


1
Xp (f ) = (X(f − fc ) + X ∗ (−f + fc ))
2

Figure 8: Mathematical block diagram model for direct upconversion of a baseband signal to a passband signal. A real analog front end may not perform a direct conversion but rather may use a series of upconversions and downconversions.

Figure 9: Shorthand for the block diagram in Fig. 8.

• Downconversion takes a passband signal and produces a baseband signal:
  yp(t) cos(2πfc t) = (1/2) yi(t) + (1/2) yi(t) cos(4πfc t) − (1/2) yq(t) sin(4πfc t)
  −yp(t) sin(2πfc t) = (1/2) yq(t) − (1/2) yq(t) cos(4πfc t) − (1/2) yi(t) sin(4πfc t);
the double-frequency terms are removed by low-pass filtering.

• Equivalently, yb(t) = 2 l(t) ⋆ ( yp(t) e^{−j2πfc t} ), where l(t) is a suitably chosen low-pass filter





Figure 10: Mathematical block diagram for direct downconversion of a passband signal to a baseband
signal. The real analog may not perform a direct conversion but rather may have a series of up conversions
and downconversions. Additionally there is typically a low-noise amplifier and a bandlimiting filter just
after the antenna and a series of gain control circuits.


Figure 11: Shorthand for the block diagram in Fig. 10.

• To obtain the final signal we need a suitably chosen low-pass filter and a gain of 2

2 Baseband equivalent channel

• Objective: Use the complex envelope to represent a passband channel at baseband.


• Linear-time invariant approach to the wireless channel
− A good model for the wireless propagation channel is a linear time-varying system
Z
yp (t) = hp (t; τ )xp (τ )dτ.

· Linearity from propagation medium; time-varying from mobility of transmitter, re-


ceiver, or objects
− Most wireless systems designed so that the channel is time-invariant in a small interval
Z
yp (t) = h(t − τ )xp (τ )dτ . (16)

• The complex baseband equivalent channel is the one that satisfies


Z
yb (t) = he (t − τ )xb (τ )dτ (17)

for some suitable he (t)


• Equivalently n o
yp (t) = < (he (t) ? x(t))ej2πfc t (18)

• Deriving he (t) in the frequency domain


− Note that Yp (f ) = H(f )Xp (f ) thus can deal with Yp (f ) = Hp (f )Xp (f )
− Hp(f) = P(f) H(f), where
  P(f) = 1 for f ∈ [−fc − W/2, −fc + W/2] or [fc − W/2, fc + W/2], and P(f) = 0 otherwise

− Note that
  Yp(f) = (1/2) He(f − fc) X(f − fc) + (1/2) He(−f − fc) X(−f − fc)   (19)
        = Hp(f) Xp(f)   (20)
        = [ (1/2) Hb(f − fc) + (1/2) Hb(−f − fc) ] · [ (1/2) X(f − fc) + (1/2) X(−f − fc) ]   (21)
        = (1/4) Hb(f − fc) X(f − fc) + (1/4) Hb(−f − fc) X(−f − fc)   (22)
− Thus He(f) = (1/2) Hb(f), or equivalently 2 He(f) = Hb(f)

Figure 12: Illustration of convolution between the passband signal and the channel. In this case the total
response of the channel H(f ) is illustrated.

• In the time domain


  he(t) = (1/2) · 2 l(t) ⋆ [ (p(t) ⋆ h(t)) e^{−j2πfc t} ]   (23)
        = l(t) ⋆ [ (p(t) ⋆ h(t)) e^{−j2πfc t} ]   (24)

• Discrete-time complex baseband equivalent channel

h[n] = T he (nT )

2 Introduction to digital modulation

• Objective: Explain the basic concept of digital modulation.


• Digital modulation is generally the mapping from a sequence of bits to a passband waveform

Figure 13: Illustration of convolution between the passband signal and the channel.

• With the complex baseband equivalent, digital modulation is equivalent to mapping a sequence
of bits to a bandlimited complex baseband waveform

2 Linear complex amplitude modulation techniques

• Objective: Explain the basic concept of digital modulation.


• In this class we consider a special kind of linear modulations generated from linear combinations
of shifted amplitude modulated pulses.
• This is general enough to include several practical linear modulation schemes of interest
− QAM - quadrature amplitude modulation used in IEEE 802.11g, IEEE 802.16
− PSK - phase shift keying used in EDGE
− QPSK - quadrature phase shift keying (same as 4-QAM)
− PAM - pulse amplitude modulation, used in wireline systems

Figure 14: Frequency domain illustration of the transformation from H(f ) to Hp (f ) to Hb (f ) to He (f ).

• Mathematically, the block diagram generates the following complex baseband signal:
  x(t) = √Ex Σ_{n=−∞}^{∞} s[n] gtx(t − nT)


− s[n] is the complex symbol at discrete time n.


− Symbols are generated from symbol mapping, which takes a group of bits and maps it to a point s in a constellation C
  C = {s0, ..., s_{M−2}, s_{M−1}},   |C| = M = 2^{# of bits/symbol}
− T is the symbol period
− 1/T is the symbol rate (e.g., on the order of Msymbols/s)
− b/T → data rate (bit rate), where b is the number of bits per symbol
− s[n] −→ pulse shaping −→ Σ_n s[n] gtx(t − nT)
− gtx (t) is the transmit pulse shaping filter, very similar to a reconstruction filter
− Ex is the transmit energy

2 Constellations

• Objective: Draw a constellation diagram including bit labels.


• Objective: Calculate the energy of a constellation and normalize it.
• Example constellations we will consider in this course
− BPSK (2-PAM): C = {+1, −1} (i.e., si = (−1)^b, where b is the bit, 0 or 1)
− 4-QAM, QPSK: C = {1 + j, −1 + j, 1 − j, −1 − j}
− 4-PSK: C = {1, j, −j, −1}
† Bit mapping is done in a systematic manner (e.g., Gray coding)
− M-PAM: M is a power of 2
− M-QAM: M is a power of 4
  · (√M-PAM) × (√M-PAM)
  · In wireless communications, 4-QAM, 16-QAM, and 64-QAM are most common.
− M-PSK: uses the M roots of unity; M is practically chosen to be a power of 2
  · C = { e^{j2πk/M} }_{k=0}^{M−1}


Figure 15: Several different constellations with generally accepted bit to symbol mappings based on Gray
labeling.
· 8-PSK is used in EDGE
• Bit labeling is typically done according to the Gray coding principle. This means that adjacent
symbols differ by a single binary digit.
• Constellation energy
− Definition: the constellation energy is defined as Es := E{ |s[n]|² }
− We use zero-mean constellations; otherwise there would be a DC component that causes problems with transmission and reception.
− Assume that s[n] is drawn i.i.d. with probability mass function Ps[m]; then
  Es = E{ |s[n]|² } = Σ_{m=0}^{M−1} |sm|² Ps[m]
− If the symbols are equally likely and zero mean, as is usually assumed,
  Es = (1/M) Σ_{m=0}^{M−1} |sm|²
− We always normalize C such that Es = 1, to control transmit power. The reason is that we use Ex to control / model the transmit gain.
− To normalize the constellation, find an α such that
  Cα = {α s0, ..., α s_{M−2}, α s_{M−1}}
has unit energy (a short numerical sketch of this normalization is given below)
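† A short Python/NumPy sketch of constellation normalization for 16-QAM; starting from the unnormalized ±1, ±3 grid is an illustrative assumption:

    import numpy as np

    # 16-QAM constellation on the integer grid {±1, ±3} x {±1, ±3} (before normalization)
    pts = np.array([a + 1j * b for a in (-3, -1, 1, 3) for b in (-3, -1, 1, 3)])

    Es = np.mean(np.abs(pts) ** 2)    # average energy with equally likely symbols
    alpha = 1 / np.sqrt(Es)           # scaling that makes the constellation unit energy
    C = alpha * pts

    print(Es)                          # 10 for this grid
    print(np.mean(np.abs(C) ** 2))     # 1.0 after normalization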

2 Computing the bandwidth, energy, and power of x(t)

• Objective: Calculate the bandwidth, energy, and power of a general amplitude modulated sig-
nal.
• Bandwidth

p ∞
X
x(t) = Ex s[n]gtx (t − nT )
n=−∞

− We cannot compute Fourier Transform of this as s[n] is discrete-time random process,


whereas x(t) is continuous-time random process. In fact x(t) is a cyclostationary random
process.
− We can define the power spectral density of signals like x(t) as
  Px(f) = Ex E{ |s[n]|² } |Gtx(f)|²
− With a unit-energy constellation,
  Px(f) = Ex |Gtx(f)|²

− |Gtx (f )|2 determines the bandwidth of x(t)


• Energy / Power
− We always assume ∫_{−∞}^{∞} |Gtx(f)|² df = 1
− We do this because we don't want to add any energy to the signal other than through Ex

− The transmit energy (in Joules) is ∫_{−∞}^{∞} Px(f) df = Ex
− The transmit power (in Watts) is Ex / T

2 Additive white Gaussian noise (AWGN) channel

• Objective: Define AWGN, white noise.

v(t)
(AWGN)

x(t) y(t)=x(t)+v(t)
(Baseband signal)

• AWGN is a good model for the impairments due to thermal noise present in any wireless com-
munication system
• x(t) is the complex baseband signal, v(t) is the AWGN, y(t) is the observation
• v(t) AWGN means
− Noise is additive
− v(t) is an independent and identically distributed (i.i.d.) complex Gaussian random variable:
  v(t) ∼ Nc(0, σ²),  Re{v(t)} ∼ N(0, σ²/2),  Im{v(t)} ∼ N(0, σ²/2),
and E[Re{v(t)} Im{v(t)}] = 0
− σ² = No = k Te, where k = 1.38 × 10⁻²³ J/K is Boltzmann's constant and Te is the effective noise temperature of the device in Kelvin. Assume Te = 290 K in the absence of other information.
− The noise is white because it is i.i.d. This means Rv(τ) = σ² δ(τ), or equivalently Pv(f) = σ².
− Signal-to-noise ratio:
  SNR = E{|x(t)|²} / E{|v(t)|²} = Ex / No
2 Overview of the AWGN transmitter and receiver

• Objective: Describe key components of the transmitter and receiver.


• It is possible to show using detection theory that the AWGN receiver has the following form.
− grx (t) is the receiver pulse-shaping filter. Performs bandlimiting among other functions.
− C/D samples the received signal at the symbol rate of 1/T

Tx: bits → symbol mapping → create pulse train → gtx(t) → × √Ex → x(t)
Rx: y(t) → grx(t) → C/D → y[n] → detection → inverse symbol mapping → bits

− Detection is the process of producing a good guess about s[n]


− The inverse symbol mapping determines the bit sequence corresponding to the detected
symbol ŝ[n] output from the detection block.
• How do we choose each component?

2 Pulse-shape design

• Objective: Determine properties of the optimal receiver pulse-shape for the AWGN channel.
• Claim: the grx(t) that maximizes the received SNR is optimal. This can be proven from detection theory.
• Consider the received signal (let ⊗ stand for convolution):
  y(t) = √Ex grx(t) ⊗ gtx(t) ⊗ ( Σ_m s[m] δ(t − mT) ) + grx(t) ⊗ v(t)

• Let g(t) := grx (t) ⊗ gtx (t)


• After sampling we have
  y[n] = √Ex Σ_m s[m] g((n − m)T) + grx(t) ⊗ v(t)|_{t=nT}
− Signal energy: E_s{ |√Ex s[n] g(0)|² } = Ex |g(0)|²
− Noise energy: E_v{ | grx(t) ⊗ v(t)|_{t=nT} |² } = No ∫ |Grx(f)|² df

− Intersymbol interference energy:
  E_s{ | √Ex Σ_{m≠n} s[m] g((n − m)T) |² } = Ex Σ_{m≠0} |g(mT)|²

• The quality of the receiver can be measured by the SINR:
  SINR := Ex |g(0)|² / ( No ∫ |Grx(f)|² df + Ex Σ_{m≠0} |g(mT)|² )

• Use a series of inequalities to find the SINR-maximizing design:
  Ex |g(0)|² / ( No ∫ |Grx(f)|² df + Ex Σ_{m≠0} |g(mT)|² )
    ≤ Ex |g(0)|² / ( No ∫ |Grx(f)|² df )
    ≤ Ex ∫ |gtx(t)|² dt ∫ |Grx(f)|² df / ( No ∫ |Grx(f)|² df )
    = Ex / No   (25)
• Using the Cauchy-Schwarz inequality,
  |g(0)|² = | ∫ grx(−t) gtx(t) dt |²   (26)
          ≤ ∫ |gtx(t)|² dt · ∫ |grx(t)|² dt
          = ∫ |grx(t)|² dt
          = ∫ |Grx(f)|² df   (27)
− Equality holds when grx(t) = gtx*(−t)
− Matched filter: grx(t) = gtx*(−t) (for a real pulse, grx(t) = gtx(−t))
• Zero-ISI condition: Ex Σ_{m≠0} |g(mT)|² = 0, equivalent to
  g(nT) = c δ[n].
Note that with our normalization of gtx(t), c = g(0) = ∫ |gtx(t)|² dt = 1. Do such pulse shapes exist?

2 Nyquist pulse-shapes

• Objective: Describe tradeoffs between bandwidth, excess bandwidth, and sampling frequency of
the raised cosine pulseshape.
• Objective: Define Nyquist pulse criterion and determine if a given pulse shape satisfies that
criterion.

• A necessary and sufficient condition for
  g(nT) = δ[n]
is that its Fourier transform satisfies
  Σ_k G( f + k/T ) = T,
or equivalently, with gd[n] := g(nT) ↔ Gd(e^{j2πf}),
  Gd(e^{j2πf}) = (1/T) Σ_k G( (f + k)/T ) = 1.

• Standard example is the sinc function

sin(πt/T )
gsync (t) =
πt/T

− Must be truncated in practice


− Is non-causal, but can be made causal through sufficient delay
− Difficult to approximate steep band edges
− Wide main pulse implies lots of sensitivity to timing errors
− Slow decay 1̃/t

Figure 16: Illustration of the raised cosine pulse-shape in time and frequency domains for different choices
of roll-off. Replace with Matlab simulation

• When 1/T is less than the Nyquist rate we get aliasing!

• The standard example is the raised cosine pulse shape:
  Grc(f) = T,  |f| ≤ (1 − α)/(2T)
  Grc(f) = (T/2) [ 1 + cos( (πT/α)( |f| − (1 − α)/(2T) ) ) ],  (1 − α)/(2T) ≤ |f| ≤ (1 + α)/(2T)
  Grc(f) = 0,  |f| > (1 + α)/(2T)
  grc(t) = [ sin(πt/T) / (πt/T) ] · [ cos(παt/T) / (1 − 4α²t²/T²) ]
− α is the rolloff factor, 0 ≤ α ≤ 1. Sometimes β is used instead of α. Often expressed as a percentage of excess bandwidth.
− (1 + α)/(2T) is the absolute bandwidth of the raised cosine pulse
− (1 + α)/T is the Nyquist rate of x(t) with a raised cosine pulse shape
− 1/T is the rate at which x(t) is sampled
− α = 0 is the sinc pulse
− The width of the main lobe is controlled by α
− Decay is at best ~1/t³
• Recall G(f) = Gtx(f) Grx(f), and for a matched filter Grx(f) = G*tx(f)
• In practice use the square-root raised cosine for gtx(t) and grx(t):
  gsqrc(t) = (4α / (π√T)) · [ cos((1 + α)πt/T) + sin((1 − α)πt/T) / (4αt/T) ] / [ 1 − (4αt/T)² ]
(A short numerical sketch of the raised cosine pulse follows below.)
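† A small Python/NumPy sketch that evaluates the raised cosine pulse and checks the zero-ISI property g(nT) = δ[n] for a few roll-off factors; the epsilon guard for the removable singularity is a simplification that is adequate for this check:

    import numpy as np

    def grc(t, T=1.0, alpha=0.5):
        # raised cosine pulse g_rc(t); clamp the denominator near its removable singularity
        t = np.asarray(t, dtype=float)
        num = np.sinc(t / T) * np.cos(np.pi * alpha * t / T)   # np.sinc(x) = sin(pi x)/(pi x)
        den = 1.0 - (2.0 * alpha * t / T) ** 2
        den = np.where(np.abs(den) < 1e-8, 1e-8, den)
        return num / den

    T = 1.0
    for alpha in (0.0, 0.25, 1.0):
        samples = grc(np.arange(-4, 5) * T, T, alpha)   # g(nT) for n = -4..4
        print(alpha, np.round(samples, 3))              # ~[0,...,0,1,0,...,0] in each case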

2 Maximum likelihood detection

• Objective: Derive the ML detector for the AWGN channel.


• Objective: Implement the ML detection algorithm.
• Using a matched filter, and if the composite pulse shape satisfies the Nyquist criterion, then
  y[n] = √Ex Σ_m s[m] g((n − m)T) + grx(t) ⊗ v(t)|_{t=nT}
       = √Ex s[n] + v[n],
where v[n] is i.i.d. complex Gaussian noise with distribution Nc(0, No) (this follows because grx(t) is a matched filter and forms a Nyquist pulse shape).
• Detection problem is to find a good guess of s[n] given an observation of y[n]
• Maximum likelihood is a detection rule that maximizes the likelihood of the observation under
the hypothesis that a given symbol was transmitted.
− Likelihood function: f_{y|s}(y[n] | s[n] = s)
− f_{y|s}(x|s) is the conditional probability density function of y[n] given s[n] = s
− For the AWGN channel, given s[n] = s, y[n] is complex Gaussian with mean √Ex s and variance No:
  f_{y|s}(x|s) = (1/(πNo)) exp( −|x − √Ex s|² / No )

• The ML detector solves the optimization
  ŝ[n] = arg max_{s∈C} f_{y|s}(y[n] | s[n] = s)
       = arg max_{s∈C} (1/(πNo)) exp( −|y[n] − √Ex s|² / No )
       = arg max_{s∈C} ln f_{y|s}(y[n] | s[n] = s)
       = arg min_{s∈C} |y[n] − √Ex s|² / No
       = arg min_{s∈C} |y[n] − √Ex s|²
• That is, ML detection for the AWGN channel solves
  ŝ[n] = arg min_{s∈C} |y[n] − √Ex s|²

• Voronoi regions of the constellation C with AWGN detection:
  V_{sl} = { y : |y − √Ex sl|² < |y − √Ex sk|² for all sk ∈ C, k ≠ l }
(A short numerical sketch of ML detection follows below.)
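† A minimal Python/NumPy sketch of ML (minimum-distance) detection for QPSK in AWGN; Ex, No, and the block length are illustrative values:

    import numpy as np

    rng = np.random.default_rng(1)

    # unit-energy QPSK constellation
    C = np.array([1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j]) / np.sqrt(2)

    Ex, No, N = 1.0, 0.1, 10
    s = C[rng.integers(len(C), size=N)]
    v = np.sqrt(No / 2) * (rng.normal(size=N) + 1j * rng.normal(size=N))
    y = np.sqrt(Ex) * s + v

    # ML detection: minimize |y[n] - sqrt(Ex) * s|^2 over the constellation
    d2 = np.abs(y[:, None] - np.sqrt(Ex) * C[None, :]) ** 2
    s_hat = C[np.argmin(d2, axis=1)]

    print(np.mean(s_hat != s))   # symbol error rate of this short run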

2 Probability of symbol error analysis

• Objective: Derive the probability of symbol error for the ML detector in an AWGN channel.
• Objective: Calculate the probability of symbol error.
• Performance of a detector is measured by the probability of error
• Can be computed in closed form in some cases, estimated via bounds in most cases
• Interested in the probability of symbol error since symbol detection is employed (can also derive
probability of bit error, frame error, etc)
• For equally likely symbols in AWGN, the probability of symbol error is
  Pe(Ex/No) = (1/M) Σ_{m=0}^{M−1} Pe|sm(Ex/No).
− It is a function of the SNR Ex/No
  Pe|sm(Ex/No) = Pr{ sm is detected incorrectly | sm was transmitted }
               = Σ_{l=0, l≠m}^{M−1} Pr{ sm is decoded as sl | sm was transmitted }   (28)


Figure 17: Voronoi regions for the 4-QAM constellation are the quadrants of the complex plane. Any point
y[n] that falls in a Voronoi region is mapped to the corresponding symbol that generate that region through
the ML detection process.

• This is hard to compute in practice, so we use a bound based on the pairwise error probability, defined as
  Pr{sm → sl} = Pr{ |y − √Ex sm| > |y − √Ex sl| }
              = Q( √( (Ex/No) · ‖sm − sl‖² / 2 ) )
− Note:
  Q(x) = (1/√(2π)) ∫_x^∞ e^{−t²/2} dt ≤ (1/2) e^{−x²/2}
− Use the Chernoff bound to simplify
• The PEP assumes only two symbols in the constellation thus provides an upper bound

Pr {sm is decoded as sl |sm was transmitted, m 6= l} ≤ Pr {sm → sl }

• Using the union bound on the probability of error,
  Pe|sm(Ex/No) ≤ Σ_{l=0, l≠m}^{M−1} Pr{sm → sl}
• Substituting into the bound,
  Pe(Ex/No) ≤ (1/M) Σ_{m=0}^{M−1} Σ_{l=0, l≠m}^{M−1} Q( √( (Ex/No) ‖sm − sl‖² / 2 ) ).
• Define the minimum distance of the constellation as
  d²min = min_{l≠m} ‖sl − sm‖²
• Using the minimum distance,
  Pe(Ex/No) ≤ (1/M) Σ_{m=0}^{M−1} (M − 1) Q( √( (Ex/No) d²min / 2 ) )
            = (M − 1) Q( √( (Ex/No) d²min / 2 ) )
− This is known as the union bound and is loose at low SNRs

• For M-QAM (with a unit-energy constellation),
  d²min = 6/(M − 1).
Substituting,
  Pe,QAM(Ex/No) ≤ (M − 1) Q( √( (Ex/No) · 3/(M − 1) ) ).
• Exact form:
  Pe,QAM(Ex/No) = 4 (1 − 1/√M) Q( √( (Ex/No) · 3/(M − 1) ) ) − 4 (1 − 1/√M)² [ Q( √( (Ex/No) · 3/(M − 1) ) ) ]².
(A short numerical comparison of the bound and the exact form follows below.)
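† A short Python sketch comparing the union bound and the exact symbol error expression for 16-QAM over a few SNRs; the SNR values are arbitrary:

    import numpy as np
    from math import erfc, sqrt

    def Q(x):
        # Gaussian tail function written via erfc
        return 0.5 * erfc(x / sqrt(2))

    def pe_qam_union(M, snr):
        return (M - 1) * Q(sqrt(snr * 3.0 / (M - 1)))

    def pe_qam_exact(M, snr):
        q = Q(sqrt(snr * 3.0 / (M - 1)))
        a = 1 - 1 / sqrt(M)
        return 4 * a * q - 4 * a * a * q * q

    for snr_db in (5, 10, 15, 20):
        snr = 10 ** (snr_db / 10)
        print(snr_db, pe_qam_union(16, snr), pe_qam_exact(16, snr))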

2 Implementing pulse-shaping at the transmitter

• Objective: Implement discrete-time pulse shaping at the transmitter using upsampling.


• Objective: Apply upsampling identities to implement and simplify transmit pulse shaping
• Thus far we have considered the following QAM transmitter

• This structure does not map to a realizable implementation


− Mix of analog and digital implementations
− Would like to do as much as possible in digital
− There is no discrete-to-continuous (D/A) converter
− gtx (t) may have bandwidth greater than 1/2T
• Approach is to shift as many functions as possible into discrete-time
• First observation: x(t) is bandlimited. Let Tx be some sampling period such that 1/Tx is greater than the Nyquist rate of x(t). Then, by the Nyquist sampling theorem, there exists a discrete-time sequence {c[n]} such that
  x̃(t) = Σ_{n=−∞}^{∞} c[n] sin(π(t − nTx)/Tx) / (π(t − nTx)/Tx),
where x̃(t) = x(t)/√Ex.
• Second observation:
X
c[n] = s[m]gtx (nTx − mT ) (29)
m

• Suppose that Tx = T /L for some integer L for duration of lecture (extensions possible though
more complex)
X∞
c[n] = s[m]gtx (nTx − mLTx )
m=−∞

• Let gtx [n] := gtx (nTx )

X
c[n] = s[m]gtx [n − mL]
m=−∞

• Note that

!
X
c[n] = s[m]δ[n − mL] ⊗ gtx [n]
m=−∞

• But this can be implemented using the upsampling (↑L) operation:
  s̄[n] = Σ_k s[k] δ[n − kL],   S̄(e^{j2πf}) = S(e^{j2πfL})


Figure 18: Upsampling of a discrete-time signal.

• This can be implemented more efficiently using multi-rate DSP theory (a direct upsample-and-filter sketch is given after Fig. 20):
− Let gtx^{(l)}[k] := gtx[kL + l] denote the lth polyphase component
− Note that
  c[n] = Σ_{l=0}^{L−1} Σ_k ( Σ_{m=−∞}^{∞} s[m] gtx^{(l)}[k − m] ) δ[n − kL − l]


Figure 19: An implementation of transmit pulse shaping simplified using upsampling identities.


Figure 20: An implementation of transmit pulse shaping using upsampling.
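† A minimal Python/NumPy sketch of transmit pulse shaping by upsampling and filtering, as in Fig. 19; a truncated, unit-energy sinc stands in for the square-root raised cosine purely for illustration:

    import numpy as np

    rng = np.random.default_rng(2)

    L = 8                                    # samples per symbol, Tx = T / L
    C = np.array([1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j]) / np.sqrt(2)
    s = C[rng.integers(4, size=20)]          # symbol sequence s[n]

    # discrete-time transmit pulse g_tx[n] = g_tx(n Tx): truncated sinc, normalized to unit energy
    n = np.arange(-4 * L, 4 * L + 1)
    gtx = np.sinc(n / L)
    gtx /= np.sqrt(np.sum(np.abs(gtx) ** 2))

    # upsample by L (insert L-1 zeros between symbols), then filter: c[n] = (s_bar * g_tx)[n]
    s_bar = np.zeros(len(s) * L, dtype=complex)
    s_bar[::L] = s
    c = np.convolve(s_bar, gtx)

    print(len(s), len(s_bar), len(c))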

2 Implementing pulse-shaping at the receiver

• Objective: Implement discrete-time pulse shaping at the receiver using oversampling combined
with downsampling.
• Objective: Apply downsampling identities to implement and simplify receive pulse shaping
• Thus far we have considered the following QAM receiver

• This structure does not map to a flexible implementation


− Requires analog matched filtering
− Take advantage of flexible digital processing
− Does not lend itself to various synchronization tasks
• How can we implement pulse-shaping in discrete-time?
• Let Tz = T /M for some integer M . If 1/Tz > Nyquist rate then this is called oversampling
• Suppose that we implement the analog filter using discrete-time processing
• We have already solved this problem using discrete-time processing of a bandlimited continuous-
time signal


Figure 21: An implementation of receive matched filtering in continuous-time using discrete-time process-
ing.
g[n] = Tz grx (nTz )
For simplicity we neglect the Tz term since receiver scaling does not matter
grx [n] := grx (nTz )

• Simplify using the downsampling (↓M) operation:
  y[n] = z[nM]
  Y(e^{j2πf}) = (1/M) Σ_{m=0}^{M−1} Z( e^{j2π(f − m)/M} )


Figure 22: Downsampling of a discrete-time signal.

• Thus if z[n] = z(nTz ) we can simplify with downsampling


X
y[n] = grx [k]z[nM − k]
k


Figure 23: An implementation of receive matched filtering in continuous-time using discrete-time process-
ing.

• Note that the straightforward convolution results in lots of extra computations that are wasted
because we toss those coefficients
• Use downsampling equivalence to reduce computations
X
y[n] = z[mM ]g[n − m]
m
!
X X
= z[m] g[p]δ[nM − m − pM ]
m p
X
= z[m]ḡ[nM − m]
m

• Using the transformation k = pM − m,
  y[n] = Σ_k grx[k] z[nM − k]
       = Σ_k z[k] grx[nM − k]
       = Σ_p Σ_{m=0}^{M−1} z[pM − m] grx[nM − pM + m]
       = Σ_p Σ_{m=0}^{M−1} z[pM − m] grx^{(m)}[n − p],
where grx^{(m)}[n] := grx[nM + m] is the mth polyphase component of the receive filter.


Figure 24: Exchange of downsampling and filtering operations.

2 Changing the sampling rate

• Objective: Increase and decrease the effective sample rate of a discrete-time signal using multi-
rate DSP.
• Can generalize transmit and receive pulse shaping to T = M Ts /L using the concept of resam-
pling
• Consider a bandlimited signal x(t) sampled with sampling period T such that Nyquist is satisfied
• Increasing the sampling rate by L
− Have x[n] = x(nT) but would like z[n] = x(nT/L)
  x(t) = Σ_n x[n] sin(π(t − nT)/T) / (π(t − nT)/T)
thus
  x(nT/L) = Σ_m x[m] sin(π(nT/L − mT)/T) / (π(nT/L − mT)/T)
          = Σ_m x[m] sin(π(n − mL)/L) / (π(n − mL)/L).

Figure 25: Increasing the sampling rate by an integer factor L. Also known as interpolation.

• Decreasing the sampling rate by M
− Have x[n] = x(nT) but would like a signal sampled at MT
− To avoid aliasing, filter with a low-pass filter first and then downsample:
  x̃[n] = (1/M) Σ_m x[m] sin(π(nM − m)/M) / (π(nM − m)/M)

Figure 26: Decreasing the sampling rate by an integer factor M . Also known as decimation.

• Changing the sampling rate by L/M
− First expand the rate by L, then downsample by M
− Combine the low-pass filters, with R = max(L, M):
  x̃[n] = (L/R) Σ_m x[m] sin(π(nM − mL)/R) / (π(nM − mL)/R)
(A short resampling sketch is given below.)
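† A short rational-resampling example using SciPy's polyphase resampler; the signal and the factor L/M = 3/2 are illustrative choices:

    import numpy as np
    from scipy.signal import resample_poly

    fs = 100.0                          # original sampling rate (arbitrary)
    t = np.arange(0, 1, 1 / fs)
    x = np.cos(2 * np.pi * 3.0 * t)     # a 3 Hz tone, well within Nyquist

    # change the rate by L/M = 3/2: upsample by 3, low-pass filter, downsample by 2
    y = resample_poly(x, up=3, down=2)

    print(len(x), len(y))               # 100 -> 150 samples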

2 Challenge of communication in frequency flat channels

• Objective: Explain different impairments caused by the frequency flat wireless communication
channel.
• Thus far we have considered transmission and reception in an ideal channel using the following
structure
• What happens in a frequency flat channel?
− The baseband channel is h(τ) = α e^{jφ} δ(τ − τd)
− In the frequency domain, H(f) = α e^{−j2πfτd + jφ}
− Frequency flat because |H(f )| is constant

Figure 27: Changing the sampling rate by a factor of L/M . Also known as resampling.

Figure 28: Summary of the transmitter and receiver structure.


Figure 29: A frequency flat wireless channel consisting of AWGN, attenuation by α, phase shift by φ, and
delay by τd .
• Consider the signal after matched filtering and sampling, with 0 < τd < T:
  y[n] = √Ex α e^{jφ} Σ_m s[m] g((n − m)T − τd) + v[n]
       = √Ex α e^{jφ} s[n] g(τd)   (desired)
         + √Ex α e^{jφ} Σ_{m≠n} s[m] g((n − m)T − τd)   (ISI)
         + v[n]   (noise)
− Sampling at the incorrect point creates intersymbol interference!


− Need symbol synchronization or equalization to compensate

• Suppose that τd = dT for some integer d:
  y[n] = √Ex α e^{jφ} Σ_m s[m] g((n − m)T − τd) + v[n]
       = √Ex α e^{jφ} Σ_m s[m] g((n − m − d)T) + v[n]
       = √Ex α e^{jφ} s[n − d] + v[n]

− Sampling at integer offsets creates unknown delay


− Need frame synchronization to find the beginning of our data
• Suppose τd = 0:
  y[n] = √Ex α e^{jφ} s[n] + v[n]

− Sampling and removing delay leaves distortion due to α and φ


− Some modulation schemes are robust to such distortion (e.g., differential QPSK)
− For general linear modulation, need channel estimation and equalization
• General algorithmic approaches to mitigating channel distortions
− Burst mode - operating on bursts or frames of data, often applied at start of data collection
− Continuous mode - operating on continuous data acquisition, often applied to adaptively
compensate for distortions

2 Symbol synchronization or timing recovery

• Objective: Formulate the discrete-time symbol synchronization problem.


• Objective: Solve the symbol synchronization problem using maximum output energy maximiza-
tion.
• Different approaches to symbol synchronization

Figure 30: Different options for correcting symbol timing.

• Assume that g(t) is a Nyquist pulseshape


• Two general approaches depending on the sampling rate
− Oversampling method suitable when M is large
− Resampling or interpolation method suitable when M is small, e.g. M = 2


• The best approach would be to formulate and solve for the maximum likelihood symbol timing
estimate, somewhat complex in practice
• Output maximization is one criteria for symbol timing that has a simpler implementation
− Consider the continuous-time output of the matched filter y(t) followed by sampling at
nT + τ
Jopt (τ ) = E |y (nT + τ )|2
− The maximum output energy solution is

τ̂d = arg max Jopt (τ )


τ ∈[0,T )

− Why does this work?


  E{ |y(nT + τ)|² } = Ex Σ_m |g(mT + τ − τd)|² + σv² ≤ Ex |g(0)|² + σv²

− How to solve?
  · Differentiate the expectation and approximate:
    (d/dτ) Jopt(τ) = E{ (d/dτ) |y(nT + τ)|² }
                   = E{ y(nT + τ) (d/dτ) y*(nT + τ) } + E{ y*(nT + τ) (d/dτ) y(nT + τ) }
                   = 2 Re E{ y(nT + τ) (d/dτ) y*(nT + τ) }
  · Approximate the derivative with a discrete-time difference: (d/dτ) y*(nT + τ) ≈ y*(nT + τ + δ) − y*(nT + τ − δ)
  · Replace E{·} by a time average:
    (d/dτ) Jopt(τ) ≈ (1/P) Σ_{n=0}^{P−1} 2 Re{ y(nT + τ) ( y*(nT + τ + δ) − y*(nT + τ − δ) ) }

  · Assume oversampling or interpolation, then discretize τ and δ on {0, 1, . . . , M − 1}:
    dJ[k] = Σ_{n=0}^{P−1} 2 Re{ rover[nM + k] ( rover*[nM + k + δ] − rover*[nM + k − δ] ) }
  For k = 0, . . . , M − 1, choose kopt = arg min_k |dJ[k]| (the zero of the derivative near the energy maximum) and τ̂d = kopt T / M.
• Once an estimate of the offset τ̂d is obtained then it is possible to incorporate the delay in to the
discrete-time matched filter to reduce processing complexity.
• How do we estimate the error rate after synchronization? For pulse amplitude modulation, with the union bound,
  Pe ≤ (M − 1) Q( √( (3/(M − 1)) · Ex |g(τd − τ̂d)|² / ( Ex Σ_{m≠0} |g(mT − (τd − τ̂d))|² + N0 ) ) )

2 Frame synchronization

• Objective: Formulate the frame synchronization problem.


• Objective: Solve the frame synchronization problem using energy detector or a correlator.
• The objective is to solve for d after symbol synchronization:
  y[n] = √Ex α e^{jφ} s[n − d] + v[n]

• Radios with a training phase: each frame begins with L training symbols followed by P − L data symbols, repeated frame after frame
− Suppose that {t[n]}_{n=0}^{Nt−1} is a known training sequence
− A correlation-based detector correlates with the training sequence,
  J[n] = | Σ_{k=0}^{Nt−1} t*[k] y[n + k] |²,
and solves for
  d̂ = arg max_n J[n]
(a numerical sketch is given after Fig. 31)
− Periodic insertion of the training symbols can improve performance and resistance to falsely detecting the frame synchronization sequence inside the data packet:
  Jp[n] = | Σ_{i=0}^{T−1} Σ_{k=0}^{Nt−1} t*[k] y[n + k + iP] |²,
where P is the length of the frame and T is the number of periods

− Special sequences with low correlation can improve performance
· Length-16 Frank sequence:
  t[n] = t[mq + k] = e^{j2πrkm/q}, 0 ≤ k, m < q, (r, q) = 1,
where 0 ≤ n ≤ q² − 1 and q is any integer. For our purpose we can take q = 4 and r = 3 to obtain a length-16 QPSK sequence (you may need to phase shift to get a QPSK sequence from a 4-PSK sequence).
· Neuman-Hoffman binary code

(−1, −1, −1, −1, −1, −1, 1, 1, −1, −1, 1, −1, 1)

− Receiver with frame synchronization and symbol synchronization


Figure 31: Receiver with symbol synchronization based on oversampling, frame synchronization, and chan-
nel estimation.
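† A small Python/NumPy sketch of correlation-based frame synchronization; the training here is a random QPSK sequence (a Frank or Neuman-Hofman sequence would give lower sidelobes), and the delay, gain, and noise level are made-up illustration values:

    import numpy as np

    rng = np.random.default_rng(3)

    Nt = 16
    train = np.exp(1j * (np.pi / 2) * rng.integers(4, size=Nt) + 1j * np.pi / 4)

    # received burst: unknown delay d, complex gain h, plus noise
    d, h, P = 23, 0.8 * np.exp(1j * 0.7), 200
    y = np.sqrt(0.01 / 2) * (rng.normal(size=P) + 1j * rng.normal(size=P))
    data = np.exp(1j * (np.pi / 2) * rng.integers(4, size=P - d - Nt) + 1j * np.pi / 4)
    y[d:d + Nt] += h * train
    y[d + Nt:] += h * data

    # frame synchronization: J[n] = |sum_k t*[k] y[n+k]|^2, pick the argmax
    J = np.array([np.abs(np.vdot(train, y[n:n + Nt])) ** 2 for n in range(P - Nt)])
    d_hat = int(np.argmax(J))
    print(d, d_hat)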

2 Frequency flat channel estimation

• Objective: Formulate and solve the frequency-flat channel estimation problem.


• Objective: Implement detection with channel estimation.

• Let the complex scalar h = α e^{jθ} √Ex be the unknown channel
• Objective of channel estimation is to solve for h
• Formal approach would be to derive an optimal estimator like the maximum likelihood estimate
or another estimate under some assumption of pilot sequences or training sequences,
• Consider least-squares channel estimation (maximum likelihood with gaussian noise)
• Minimize the squared error, assuming training after frame synchronization:
  J(a) = Σ_{n=0}^{Nt−1} | y[n] − a t[n] |²
Differentiating with respect to a* and solving gives
  ĥ = ( Σ_{n=0}^{Nt−1} t*[n] y[n] ) / ( Σ_{n=0}^{Nt−1} t*[n] t[n] )

− Top term is the correlator - can jointly perform frame synchronization and channel estima-
tion!
− Bottom term corrects for scaling
• To correct for h simply include scaling in ML detector |y[n] − hs[n]|2
• To simplify detection, note that if h ≠ 0,
  |y[n] − h s[n]|² = |h|² · |y[n]/h − s[n]|²
→ we can equalize by dividing by h, since the positive scaling |h|² does not change the minimizing symbol (a numerical sketch of estimation and equalization follows at the end of this section)


• Receiver with symbol synchronization, frame synchronization, and equalization
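† A minimal Python/NumPy sketch of least-squares flat-channel estimation followed by equalization and detection; the channel gain, noise level, and block lengths are illustrative assumptions:

    import numpy as np

    rng = np.random.default_rng(4)

    C = np.array([1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j]) / np.sqrt(2)
    Nt, Nd = 16, 100
    t = C[rng.integers(4, size=Nt)]                 # training
    s = C[rng.integers(4, size=Nd)]                 # data
    h = 0.7 * np.exp(1j * 1.1)                      # unknown flat channel
    No = 0.02

    x = np.concatenate([t, s])
    y = h * x + np.sqrt(No / 2) * (rng.normal(size=x.size) + 1j * rng.normal(size=x.size))

    # least-squares channel estimate from the training portion
    h_hat = np.vdot(t, y[:Nt]) / np.vdot(t, t)      # sum t*[n] y[n] / sum |t[n]|^2

    # equalize by dividing by h_hat, then detect with minimum distance
    z = y[Nt:] / h_hat
    s_hat = C[np.argmin(np.abs(z[:, None] - C[None, :]) ** 2, axis=1)]
    print(abs(h - h_hat), np.mean(s_hat != s))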

2 Challenge of communication in frequency selective channels

• Objective: Explain different impairments caused by the frequency selective wireless communi-
cation channel.
Figure 32: A frequency selective wireless channel consisting of convolution with an impulse response fol-
lowed by AWGN.

• Frequency selective channel hc (τ ) models multipath delay spread


− Called frequency selective because Hc (f ) is not in general flat
− With specular multipath, hc(τ) = Σ_k αk e^{jφk} δ(τ − τk)
− A reasonable assumption is that hc(τ) is causal and FIR
• Let
  h(t) = √Ex ∫_{s=0}^{τmax} hc(s) g(t − s) ds
and let h[n] be the discrete-time equivalent channel, h[n] = T h(nT)
• After matched filtering,
  y[n] = Σ_{l=0}^{L} h[l] s[n − l] + v[n].

• Frequency selective channels create intersymbol interference


• There are many ways to detect the transmitted signal
− Maximum likelihood sequence estimation (MLSE) - solve for {s[n]}_{n=0}^{N−1} jointly. Efficiently implemented with the Viterbi algorithm (highest complexity)
− Linear equalizer (LE) - find a filter f[n] such that
  f[n] ⊗ h[n] ≈ δ[n − nd]

(low complexity with the worst performance)


− Decision feedback equalizer (DFE) - takes LE + subtract tentative estimate of symbol

2 Least squares estimation of intersymbol interference channels

• Objective: Estimate the channel coefficients using the least-squares method.


• Several different estimation criteria, least squares is among the most common
• Suppose that {t[n]}_{n=0}^{Nt−1} is a known training sequence
• Least-squares problem:
  {ĥ[0], ĥ[1], . . . , ĥ[L]} = arg min_{a[0],a[1],...,a[L]} Σ_{n=L}^{Nt−1} | y[n] − Σ_{l=0}^{L} a[l] t[n − l] |².

• Write in matrix form y = T a, where
  y = ( y[L], y[L+1], . . . , y[Nt − 1] )ᵀ,   a = ( a[0], a[1], . . . , a[L] )ᵀ,
and T is the (Nt − L) × (L + 1) Toeplitz training matrix whose row for index n is ( t[n], t[n − 1], . . . , t[n − L] ), n = L, . . . , Nt − 1.

• The least squares solution, assuming T*T is invertible, is
  ĥ = (T*T)^{−1} T* y.
− Need T to be square or tall: Nt − L ≥ L + 1, or equivalently Nt ≥ 2L + 1
− Need the training to be sufficiently exciting
• The squared error of the least-squares solution is
  ‖h − ĥ‖² = ‖h − (T*T)^{−1} T* y‖²
(A short numerical sketch of least-squares channel estimation follows below.)
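† A short Python/NumPy sketch of least-squares estimation of an FIR (intersymbol interference) channel from BPSK training; the channel taps, training length, and noise level are made up for illustration:

    import numpy as np

    rng = np.random.default_rng(5)

    L, Nt = 3, 40
    h = (rng.normal(size=L + 1) + 1j * rng.normal(size=L + 1)) / 2   # unknown FIR channel
    t = np.sign(rng.normal(size=Nt)) + 0j                            # BPSK training
    No = 0.01

    y = np.convolve(t, h)[:Nt] + np.sqrt(No / 2) * (rng.normal(size=Nt) + 1j * rng.normal(size=Nt))

    # build the (Nt - L) x (L + 1) training matrix T with rows [t[n], t[n-1], ..., t[n-L]]
    T = np.array([t[n - np.arange(L + 1)] for n in range(L, Nt)])
    y_vec = y[L:Nt]

    # least-squares channel estimate, equivalent to (T^* T)^{-1} T^* y
    h_hat, *_ = np.linalg.lstsq(T, y_vec, rcond=None)
    print(np.round(np.abs(h - h_hat), 3))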

2 Least-square equalizer from a channel estimate

• Objective: Derive and compute the least-squares equalizer using an estimate of the channel.
• An equalizer is a filter that removes frequency selective distortion
• A good equalizer, in the absence of noise, satisfies
  ŝ[n − nd] = Σ_{l=0}^{Lf} f_{nd}[l] y[n − l]   (30)
or equivalently
  Σ_{l=0}^{Lf} f_{nd}[l] ĥ[n − l] ≈ δ[n − nd].   (31)
− nd is the delay, a design parameter
− Generally nd > 0 improves performance
• One approach to computing the least-squares equalizer is to use the channel estimate
• Consider the least-squares FIR inverse that minimizes the squared error with the estimated channel:
  Jf[nd] = Σ_{n=0}^{Lf+L} | δ[n − nd] − Σ_{l=0}^{Lf} f_{nd}[l] ĥ[n − l] |²

• Build a linear equation e_{nd} = Ĥ f_{nd}, where Ĥ is the (Lf + L + 1) × (Lf + 1) convolution (Toeplitz) matrix whose columns are shifted copies of (ĥ[0], ĥ[1], . . . , ĥ[L])ᵀ, f_{nd} = (f[0], . . . , f[Lf])ᵀ, and e_{nd} is the unit vector with a 1 in entry nd + 1.
• The least squares solution to min ‖e_{nd} − Ĥ f_{nd}‖² is
  f̂_{nd} = (Ĥ*Ĥ)^{−1} Ĥ* e_{nd}.
• Ĥ is tall, (Lf + L + 1) × (Lf + 1), and always full rank if ĥ[0] ≠ 0
• Choose nd based on the resulting squared error
  Jf[nd] = ‖e_{nd} − Ĥ f̂_{nd}‖² = ‖e_{nd} − Ĥ (Ĥ*Ĥ)^{−1} Ĥ* e_{nd}‖²
(A short numerical sketch follows after Fig. 33.)


Figure 33: QAM receiver with channel estimation and linear equalization.
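† A minimal Python/NumPy sketch of the indirect least-squares equalizer design: build the convolution matrix from a channel estimate and solve for f; the example channel taps, equalizer length, and delay are illustrative assumptions:

    import numpy as np

    def ls_equalizer(h_hat, Lf, nd):
        # build the (Lf+L+1) x (Lf+1) convolution matrix H and solve H f ~= e_nd
        L = len(h_hat) - 1
        rows = Lf + L + 1
        H = np.zeros((rows, Lf + 1), dtype=complex)
        for col in range(Lf + 1):
            H[col:col + L + 1, col] = h_hat
        e = np.zeros(rows, dtype=complex)
        e[nd] = 1.0
        f, *_ = np.linalg.lstsq(H, e, rcond=None)
        J = np.linalg.norm(e - H @ f) ** 2
        return f, J

    h_hat = np.array([1.0, 0.5, 0.2 + 0.1j])
    f, J = ls_equalizer(h_hat, Lf=8, nd=4)
    print(np.round(np.convolve(f, h_hat), 2))   # approximately a delta delayed by nd
    print(J)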

2 Least-square equalizer directly from the observed data

• Objective: Derive and compute the least-squares equalizer without first estimating the channel.

• Consider the received signal
  y[n] = Σ_{l=0}^{L} h[l] s[n − l] + v[n]
where s[n] = t[n] for n = 0, 1, . . . , Nt − 1.


• After equalization with delay nd,
  ŝ[n − nd] ≈ Σ_{l=0}^{Lf} f_{nd}[l] y[n − l].   (32)
• Suppose that s[n] = t[n] for n = 0, 1, . . . , Nt − 1 is the known training data; then
  ŝ[n − nd] = t[n − nd] for n = nd, nd + 1, . . . , nd + Nt − 1
• Rewriting with knowledge of the training data,
  t[n] ≈ Σ_{l=0}^{Lf} f_{nd}[l] y[n + nd − l]
for n = 0, 1, . . . , Nt − 1.
• Squared error:
  Jf[nd] = Σ_{n=0}^{Nt−1} | t[n] − Σ_{l=0}^{Lf} f_{nd}[l] y[n + nd − l] |²

• Write in matrix form t = Y_{nd} f_{nd}, where
  t = ( t[0], t[1], . . . , t[Nt − 1] )ᵀ,   f_{nd} = ( f_{nd}[0], . . . , f_{nd}[Lf] )ᵀ,
and Y_{nd} is the Nt × (Lf + 1) matrix whose row for index n is ( y[n + nd], y[n + nd − 1], . . . , y[n + nd − Lf] ), n = 0, . . . , Nt − 1.
• The least squares solution to min ‖t − Y_{nd} f_{nd}‖², under the assumption that Y_{nd} is full rank (which is reasonable in the presence of noise), is
  f̂_{nd} = (Y*_{nd} Y_{nd})^{−1} Y*_{nd} t.
• Optimize the delay nd based on the squared error
  Jf[nd] = ‖t − Y_{nd} f̂_{nd}‖² = ‖t − Y_{nd} (Y*_{nd} Y_{nd})^{−1} Y*_{nd} t‖²

Figure 34: QAM receiver with direct equalizer estimation and linear equalization.

• Direct versus indirect methods


− Direct method avoids chain of estimation error
− Indirect method can design equalizers of arbitrary order
− Indirect method uses channel estimate, which may anyway be useful
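The direct method reduces to a single least-squares fit against the known training. A minimal sketch under stated assumptions (nd ≥ Lf and enough received samples); the function name is hypothetical.

    import numpy as np

    def direct_ls_equalizer(y, t, Lf, nd):
        """LS equalizer of order Lf trained directly on received data y and known training t.

        Assumes nd >= Lf and len(y) >= len(t) + nd so all indices below are valid.
        """
        y = np.asarray(y)
        t = np.asarray(t)
        Nt = len(t)
        # Row for index n holds [y[n+nd], y[n+nd-1], ..., y[n+nd-Lf]]
        Y = np.array([y[n + nd - np.arange(Lf + 1)] for n in range(Nt)])
        f, *_ = np.linalg.lstsq(Y, t, rcond=None)
        return f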

2 Overview of Impairment Caused by Frequency Offset

• Objective: Explain origin of frequency offset and discuss nature of the impairment.
• What is it? Frequency offset occurs when the receiver's carrier frequency f̂_c differs from the transmitter's carrier frequency f_c, and the difference is unknown!

(Figure: the transmitter upconverts with e^{j2πf_c t} while the receiver downconverts with e^{−j2πf̂_c t}.)

• Let f_e = f_c − f̂_c and ε = T f_e

• Flat fading channel with frequency offset (after the matched filter)

  y[n] = e^{j2πεn} h s[n] + v[n]

− ε is generally small but unknown
− Rotates constellation by e^{j2πεn}
• Frequency selective fading channel with frequency offset (after the matched filter)

  y[n] = e^{j2πεn} Σ_{l=0}^{L} h[l] s[n − l] + v[n]

− Rotation occurs after the convolution


− Impacts channel estimation and thus equalization
• Frequency offset estimation and correction is called frequency offset synchronization
• Complicates frame synchronization and channel estimation
• Frequency offset synchronization methods
− Blind methods use properties of the received signal

− Non-blind methods use some form of training to estimate the offset and correct

2 Frequency offset estimation using periodic training

• Objective: Use the Moose method to perform joint frequency offset estimation and frame syn-
chronization.

(Figure: frame structure, consisting of two identical training blocks of N_t symbols each at the start of the frame, followed by the data symbols.)

• Let training start at n = 0


• For L ≤ n ≤ N_t − 1

  y[n] = e^{j2πεn} Σ_{l=0}^{L} h[l] s[n − l] + v[n]

  y[n + N_t] = e^{j2πε(n+N_t)} Σ_{l=0}^{L} h[l] s[n + N_t − l] + v[n + N_t]

  Note that s[n + N_t] = s[n] = t[n] for n = 0, 1, . . . , N_t − 1, so

  y[n + N_t] = e^{j2πεN_t} e^{j2πεn} Σ_{l=0}^{L} h[l] t[n − l] + v[n + N_t]
             ≈ e^{j2πεN_t} y[n].   (33)

• Least squares is one solution, but ε appears in the exponent, which would require a nonlinear least squares solution
• As an alternative, consider a simplification of the least squares problem

  J(a) = Σ_{n=L}^{N_t−1} ‖ y[n + N_t] − a y[n] ‖²

• Solution is

  â = ( Σ_{n=L}^{N_t−1} y*[n] y[n + N_t] ) / ( Σ_{n=L}^{N_t−1} |y[n]|² )

− Can neglect the denominator since only the phase of â is of interest
• Simple frequency offset estimator

  ε̂ = arg( Σ_{n=L}^{N_t−1} y*[n] y[n + N_t] ) / (2πN_t)

or

  f̂_e = arg( Σ_{n=L}^{N_t−1} y*[n] y[n + N_t] ) / (2πT N_t)

where arg denotes the principal phase of the argument.

• Estimate of ε will only be accurate for |εN_t| ≤ 1/2, or equivalently

  |ε| ≤ 1/(2N_t)

or

  |f_e| ≤ 1/(2T N_t).

• Example: 1 Msymbol/s QAM signal, f_c = 2 GHz, N_t = 10

  max |f_e| = 1/(2T N_t) = (1/2)(10^6/10) Hz = 50 kHz

• Self-referenced frame synchronization

  d̂ = arg max_d | Σ_{n=L}^{N_t−1} y*[n + d] y[n + d + N_t] |
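A minimal numpy sketch of the Moose-style estimator and the self-referenced frame synchronizer above; function names and the exact indexing window are illustrative assumptions.

    import numpy as np

    def moose_cfo_estimate(y, Nt, L):
        """Normalized frequency offset estimate from two repeated length-Nt training blocks.

        Assumes y is aligned so the first training block starts at index 0; the first L samples
        of the block are skipped to avoid the channel's edge effects. f_e = eps_hat / T.
        """
        n = np.arange(L, Nt)
        corr = np.sum(np.conj(y[n]) * y[n + Nt])
        return np.angle(corr) / (2 * np.pi * Nt)

    def moose_frame_sync(y, Nt, L, max_delay):
        """Self-referenced frame sync: maximize the repetition correlation over the delay d."""
        n = np.arange(L, Nt)
        metric = [np.abs(np.sum(np.conj(y[n + d]) * y[n + d + Nt])) for d in range(max_delay)]
        return int(np.argmax(metric))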

2 Motivation for frequency domain linear equalization

• Objective: Explain why frequency domain equalization is difficult.


• Received signal with intersymbol interference w/out noise
  y[n] = Σ_{l=0}^{L} h[l] s[n − l]

• In the frequency domain

  Y(e^{j2πf}) = H(e^{j2πf}) S(e^{j2πf})

• The ideal zero-forcing equalizer is

  F(e^{j2πf}) = 1 / H(e^{j2πf})

• Why not equalize in the frequency domain?


− Need the whole sequence {s[n]} to compute S(e^{jω})

− In practice we only have a few samples of s[n] and cannot wait to acquire all of s[n]
− Further h[n] is not time invariant over a long period
• Simplify frequency domain equalization
− Use specially designed s[n] to enable frequency domain equalization
− Leverage principles of the discrete Fourier transform (DFT)

2 Background on the discrete Fourier transform

• Objective: Calculate the DFT and IDFT of a sequence.


• Objective: Calculate the circular convolution of two sequences.

• The DFT is a basis expansion for finite-length signals
  Analysis :  X[k] = Σ_{n=0}^{N−1} x[n] e^{−j(2π/N)kn},   k = 0, 1, ..., N − 1

  Synthesis : x[n] = (1/N) Σ_{k=0}^{N−1} X[k] e^{j(2π/N)kn},   n = 0, 1, ..., N − 1

  (N : length of the signal)

• The DFT can be computed efficiently with the Fast Fourier transform for N a power of 2 and
certain other special cases
• Circular shift property of the DFT

  If X_1[k] = e^{−j2π(k/N)m} X[k], then x_1[n] = x[((n − m))_N] for 0 ≤ n ≤ N − 1, and 0 otherwise

• Products in the frequency domain become circular convolution in discrete time

  Y[k] = H[k]S[k]  ↔  y[n] = Σ_{l=0}^{N−1} h[l] s[((n − l))_N]

• Circular convolution
− To illustrate, let the length of h[n] be L + 1 and the length of s[n] be N > L
− Zero-pad h[n] with N − 1 − L zeros

  y[n] = Σ_{l=0}^{N−1} h[l] s[((n − l))_N]
       = Σ_{l=0}^{L} h[l] s[((n − l))_N]
       = Σ_{l=0}^{n} h[l] s[n − l] + Σ_{l=n+1}^{L} h[l] s[n + N − l]   for 0 ≤ n < L
       = Σ_{l=0}^{L} h[l] s[n − l]                                     for n ≥ L

(Figure: linear convolution of sequences supported on 0, . . . , n_1 and 0, . . . , n_2 has length n_1 + n_2 + 1, while circular convolution of a length-(L + 1) sequence with a length-N sequence produces a length-N sequence.)
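A quick numpy check of the property above (a toy example with assumed values): multiplying DFTs and inverting reproduces the circular convolution.

    import numpy as np

    N = 8
    rng = np.random.default_rng(1)
    s = rng.standard_normal(N) + 1j * rng.standard_normal(N)
    h = np.array([1.0, 0.4, 0.2])          # L + 1 = 3 taps; fft(h, N) zero-pads to length N

    y_fft = np.fft.ifft(np.fft.fft(h, N) * np.fft.fft(s, N))
    y_circ = np.array([sum(h[l] * s[(n - l) % N] for l in range(len(h))) for n in range(N)])
    print(np.allclose(y_fft, y_circ))      # True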
2 Single carrier frequency domain equalization

• Objective: Use cyclic prefix to convert a linear convolution to a circular convolution and equal-
ize in frequency domain.
• Need to make a linear convolution look circular
• Let s[n], n = 0, ..., N − 1 be N QAM symbols for transmission
• Create the following signal
− Add a cyclic prefix of length Lc

w[n] = s[n + N − Lc ] n = 0, 1, . . . , Lc − 1

− Append N samples of data symbols

w[n + Lc ] = s[n] n = 0, 1, . . . , N − 1

• After convolution with an L + 1 tap channel


  y[n] = Σ_{l=0}^{L} h[l] w[n − l]

• Neglecting the first L_c terms of the convolution (discarding the cyclic prefix), and assuming L_c ≥ L,
  for n = 0, 1, . . . , N − 1

  y[n + L_c] = Σ_{l=0}^{L} h[l] w[n + L_c − l]
             = Σ_{l=0}^{L} h[l] s[((n − l))_N]   (34)

• Can use frequency domain equalization!

ŝ[n] = IDF T {Y [k]/H[k]}

• Can use standard symbol timing, frame timing, and channel estimation algorithms.
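A minimal single-carrier frequency domain equalization sketch in numpy; the function names and the toy 3-tap channel are illustrative assumptions, and noise is omitted so the zero-forcing divide recovers the block exactly.

    import numpy as np

    def scfde_tx(s, Lc):
        """Prepend a length-Lc cyclic prefix to a block of N QAM symbols."""
        return np.concatenate([s[-Lc:], s])

    def scfde_rx(y, h, N, Lc):
        """Discard the cyclic prefix and apply a zero-forcing frequency domain equalizer."""
        ybar = y[Lc:Lc + N]                  # cyclic prefix removal
        return np.fft.ifft(np.fft.fft(ybar) / np.fft.fft(h, N))

    # Toy noise-free check with a hypothetical 3-tap channel
    N, Lc = 16, 4
    rng = np.random.default_rng(2)
    s = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], N)
    h = np.array([1.0, 0.3 + 0.2j, 0.1])
    y = np.convolve(scfde_tx(s, Lc), h)[:N + Lc]
    print(np.allclose(scfde_rx(y, h, N, Lc), s))    # True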

2 Orthogonal frequency division multiplexing (OFDM)

• Objective: Describe the OFDM digital modulation scheme.


• Objective: Derive the OFDM receiver and prove that it works.
• Single carrier frequency domain equalization uses a cyclic prefix to allow frequency domain
equalization
− It requires a DFT and IDFT at the receiver
• OFDM is a type of multicarrier modulation that also allows frequency domain equalization
− Key difference versus SC-FDE is that the IDFT operation is moved to the transmitter

Figure 35: Receiver with cyclic prefix removal and a frequency domain equalizer.
• Advantages of OFDM
− Allows simple frequency domain equalization at the receiver
− Allows sophisticated types of adaptive modulation where information is adapted to the fre-
quency response of the channel
− Obtains diversity against fading when combined with error control coding
• Applications of OFDM
− WiFi - IEEE 802.11g, IEEE 802.11a, IEEE 802.11n
− WiMax - IEEE 802.16
− Mobile broadband wireless - IEEE 802.20
− Digital video broadcast - DVB (used in Europe), DVB-h
• OFDM transmitter

Figure 36: Block diagram of an OFDM transmitter. In practice the IDFT would instead be implemented with an IFFT.
− Suppose we want to transmit {s[n]}_{n=0}^{N−1}
− Let Lc be the length of the cyclic prefix

− The OFDM transmitter sends the samples

  w[n] = (1/N) Σ_{m=0}^{N−1} s[m] e^{j2π m(n−L_c)/N},   n = 0, ..., N + L_c − 1

− Note that w[n] = w[n + N] for n = 0, 1, . . . , L_c − 1 (this is a cyclic prefix)


− Note that w[n + Lc ] = IDF T {s[n]} (this is the data)
− N is the number of subcarriers or tones
• OFDM receiver

Figure 37: Block diagram of an OFDM receiver.


− The observed signal after matched filtering and downsampling is

  y[n] = Σ_{l=0}^{L} h[l] w[n − l] + v[n]

− The receiver discards the first L_c samples to form, for n = 0, 1, . . . , N − 1 (neglecting noise),

  ȳ[n] = y[n + L_c]
       = Σ_{l=0}^{L} h[l] w[n + L_c − l]
       = (1/N) Σ_{l=0}^{L} h[l] Σ_{m=0}^{N−1} s[m] e^{j2π m(n + L_c − L_c − l)/N}
       = (1/N) Σ_{l=0}^{L} h[l] Σ_{m=0}^{N−1} s[m] e^{j2π mn/N} e^{−j2π ml/N}
       = (1/N) Σ_{m=0}^{N−1} ( Σ_{l=0}^{L} h[l] e^{−j2π ml/N} ) s[m] e^{j2π mn/N}

− The receiver then takes the DFT of these samples

  Ȳ[k] = DFT[ȳ[n]]
       = Σ_{n=0}^{N−1} ȳ[n] e^{−j2πkn/N}
       = (1/N) Σ_{n=0}^{N−1} Σ_{m=0}^{N−1} ( Σ_{l=0}^{L} h[l] e^{−j2πml/N} ) s[m] e^{j2πmn/N} e^{−j2πkn/N}
       = (1/N) Σ_{m=0}^{N−1} ( Σ_{l=0}^{L} h[l] e^{−j2πml/N} ) s[m] ( Σ_{n=0}^{N−1} e^{j2πn(m−k)/N} )   ← Σ_{n=0}^{N−1} e^{j2πn(m−k)/N} = N δ[m − k]
       = ( Σ_{l=0}^{L} h[l] e^{−j2πkl/N} ) s[k]
       = H[k] s[k]

− Equalization simply requires dividing Ȳ [k] by H[k]


− Terminology / parameters in OFDM
· T : sample period
· T (N + Lc ): OFDM symbol period
· Subcarrier spacing: ∆c = BW/N = 1/(NT)
· Guard interval: L_c T
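A minimal OFDM modulator/demodulator sketch built from the derivation above; the function names, subcarrier count, and toy channel are illustrative assumptions, and noise is omitted.

    import numpy as np

    def ofdm_mod(s, Lc):
        """One OFDM symbol: IDFT of the N QAM symbols followed by a length-Lc cyclic prefix."""
        x = np.fft.ifft(s)
        return np.concatenate([x[-Lc:], x])

    def ofdm_demod(y, h, N, Lc):
        """Remove the cyclic prefix, take the DFT, and divide by H[k] on each subcarrier."""
        Y = np.fft.fft(y[Lc:Lc + N])
        return Y / np.fft.fft(h, N)

    # Toy noise-free check: a 3-tap channel is undone by the one-tap frequency domain equalizer
    N, Lc = 64, 16
    rng = np.random.default_rng(3)
    s = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], N)
    h = np.array([0.8, 0.4 - 0.3j, 0.2j])
    y = np.convolve(ofdm_mod(s, Lc), h)[:N + Lc]
    print(np.allclose(ofdm_demod(y, h, N, Lc), s))    # True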

2 Channel Estimation in OFDM

• Objective: Describe different approaches for channel estimation in an OFDM system.


• Time-domain approach
− Suppose that {s[n]}_{n=0}^{N−1} is a known training symbol
− Apply the usual least-squares channel estimator, using the known transmitted samples {w[n]}_{n=0}^{N+L_c−1} as training
− Take the DFT of the result to find H[k]
• Frequency-domain with training symbol
− Suppose that {s[n]}_{n=0}^{N−1} is a known training symbol
− Estimate H[k] as

  Ĥ[k] = Ȳ[k] / S[k]
− This is noisy - may need some averaging
• Frequency-domain with pilots
− Suppose that pilots s[n1 ], s[n2 ], . . . are known
− Estimate H[k] at the pilot subcarriers as

  Ĥ[n_k] = Ȳ[n_k] / S[n_k]
− Use interpolation to find remaining H[k]
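A minimal sketch of the pilot-based approach just described, using simple linear interpolation across subcarriers; the function name and the choice of interpolation are assumptions, not a prescribed method.

    import numpy as np

    def pilot_channel_estimate(Ybar, pilot_idx, pilot_vals, N):
        """Estimate H[k] on all N subcarriers from a set of known pilot subcarriers.

        Ybar       : received frequency-domain OFDM symbol (length N)
        pilot_idx  : indices n_k of the pilot subcarriers
        pilot_vals : transmitted pilot values S[n_k]
        """
        H_p = Ybar[np.asarray(pilot_idx)] / np.asarray(pilot_vals)   # LS estimates at the pilots
        k = np.arange(N)
        # Linear interpolation of real and imaginary parts across the remaining subcarriers
        return np.interp(k, pilot_idx, H_p.real) + 1j * np.interp(k, pilot_idx, H_p.imag)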

2 Frequency Offset Estimation in OFDM
• Objective: Describe and implement carrier frequency offset estimation for an OFDM system.
• General OFDM transmitter

Figure 38: Block diagram of an OFDM transmitter. In practice the IDFT would instead be implemented with an IFFT.
• General OFDM receiver

(Figure: general OFDM receiver consisting of the RF front end, sampling, sample timing, frequency offset correction, symbol timing correction, cyclic prefix removal, serial-to-parallel conversion, FFT, channel estimation, per-subcarrier equalization, and QAM demapping/detection.)

• OFDM is very sensitive to carrier frequency offset


• Modification of Moose's method by Schmidl and Cox - basic idea is to create periodicity within one
  OFDM symbol instead of repeating multiple OFDM symbols
• As an example, consider turning off the odd subcarriers

  w[n] = (1/N) Σ_{m=0}^{N/2−1} s[2m] e^{j2π·2m(n−L_c)/N} = (1/N) Σ_{m=0}^{N/2−1} s[2m] e^{j2π m(n−L_c)/(N/2)}

  ⇒ for L_c ≤ n < L_c + N/2,   w[n] = w[n + N/2]

(Figure: the resulting OFDM training symbol consists of a cyclic prefix followed by two identical halves of length N/2.)

• Consider the received signal

  y[n] = e^{j2πεn} Σ_{l=0}^{L} h[l] w[n − l] + v[n]

• Because of the periodicity in w[n], for n = L_c, L_c + 1, . . . , N/2 − 1

  y[n + N/2] ≈ e^{j2πεN/2} y[n]

• Simple frequency offset estimator follows as before

  ε̂ = arg( Σ_{n=L}^{N/2−1} y*[n + L_c] y[n + L_c + N/2] ) / (πN)

• Note that frame and frequency offset synchronization can be performed jointly
• Maximum estimable offset is |ε| ≤ 1/(2(N/2)) = 1/N. This is called fine offset correction.
• How is it possible to correct for larger frequency offsets?
• The fine correction algorithm finds the smallest ε̂ such that the residual ε − ε̂ is an integer multiple of 2/N
• With a larger frequency offset, after correction

  y[n] = e^{j2π nm/(N/2)} Σ_{l=0}^{L} h[l] w[n − l] + v[n]
       = e^{j2π·2nm/N} Σ_{l=0}^{L} h[l] w[n − l] + v[n]

• In the frequency domain after cyclic prefix removal


  Y[k] = e^{j2π·2L_c m/N} H[((k − 2m))_N] S[((k − 2m))_N] + V[k]

• Schmidl-Cox correction algorithm

− Let {S_1[n]}_{n=0}^{N−1} denote the QPSK coefficients for the first training symbol
− Let {S_2[n] = S_1[n]V[n]}_{n=0}^{N−1} denote the coefficients for the second training symbol, where
  V[n] = S_2[n]S_1*[n] is a known QPSK differential training sequence
− Note that after fine offset correction

  Y_1[k] = e^{j2π·2L_c m/N} H[((k − 2m))_N] S_1[((k − 2m))_N] + V_1[k]

  Y_2[k] = e^{j2π·4L_c m/N} H[((k − 2m))_N] S_2[((k − 2m))_N] + V_2[k]

− Using the structure of S_2[k], for even k

  Y_2[k] Y_1*[k] = e^{j2π·2L_c m/N} |H[((k − 2m))_N]|² V[((k − 2m))_N] + noise

− Formulate a least squares problem like

  J(α, m) = Σ_{k=0}^{N/2−1} ‖ Y_2[2k] Y_1*[2k] − α V[((2k − 2m))_N] ‖²

  Solving for α gives

  α = ( Σ_{k=0}^{N/2−1} V*[((2k − 2m))_N] Y_2[2k] Y_1*[2k] ) / ( Σ_{k=0}^{N/2−1} |V[((2k − 2m))_N]|² )

  The best m is the one that minimizes the squared error

  arg min_{m=0,1,...,N/2−1} J(α, m)

2 Introduction to Multi-Antenna Communication

• Objective: Explain different multiple antenna concepts and the gains they provide.


• Objective: Extend the single antenna work in this course to the multi-antenna setting.
• Motivation for multiple antennas
− Antennas provide another design degree of freedom
− Can be used at the transmitter, receiver, or both
• Applications, why is this relevant
− Multiple antennas are used on almost all base stations in cellular systems
− Used in cordless phone base stations and in WiFi, at both access points and clients
− Most next generation wireless standards will employ multiple antennas in a direct way
• Terminology
− Single-input single-output (SISO), the usual link we have considered in class
− Multiple-input single-output (MISO), often called transmit diversity
− Single-input multiple-output (SIMO), often called receive diversity
− Multiple-input multiple-output (MIMO), called MIMO or spatial multiplexing
• Receive diversity (SIMO)
− Basic idea is to add extra receive antennas
− Multi-antenna processing can be analog and digital
− Gains include array gain (resistance to noise) and diversity gain (resistance to fading)
− Example: Block diagram for a QAM system exploiting SIMO with receive processing in
digital and multi-antenna linear equalization

− Mathematical description

y1 [n] = h1 s[n] + v1 [n]


y2 [n] = h2 s[n] + v2 [n]

or in matrix form

y[n] = hs[n] + v[n]

• Transmit diversity (MISO)


− Idea is to obtain diversity benefit from different transmit antennas
− Requires transmitting different signals on different antennas
− Example: Transmit delay diversity in a QAM system
− Mathematical description

y[n] = h1 s1 [n] + h2 s2 [n] + v[n]


= h1 s[n] + h2 s[n − 1] + v[n] (35)

• Spatial multiplexing (MIMO)


− Idea is to obtain capacity benefit by sending different data from different antennas
− Uses a concept called spatial multiplexing, multiplex data in space

y1 [n] = h11 s1 [n] + h12 s2 [n] + v1 [n]


y2 [n] = h21 s1 [n] + h22 s2 [n] + v2 [n]

or in matrix form

y[n] = Hs[n] + v[n]

2 Decoding the IEEE 802.11a wireless standard

• Objective: Explain the organization of the IEEE 802.11a standard.


• Objective: Find relevant information from the standard and implement in the lab.
• IEEE 802 is a group that develops LAN and MAN standards, focusing on the PHY, MAC, and
LINK layers
− Examples include ethernet, token ring, WLAN, WiMAX, etc.
− http://grouper.ieee.org/groups/802/index.html
• 802.11 is WLAN working group (members develop standards + vote)
• 802.11 subgroups
− 802.11: 1/2Mbps in 2.4GHz band, FHSS or DSSS
− 802.11a: extension to the 5 GHz band, 54 Mbps, OFDM
− 802.11b: (WiFi) DSSS with 11Mbps in 2.4GHz band
− 802.11g: similar to 802.11a but for 2.4GHz
− 802.11n: MIMO enhancement, 100-200Mbps (still under development)

− 802.11h, i, e, s,...
• Why develop standards?
− Need compatibility between equipment vendors
− Allow for competition, reduces prices for consumers
• Why should we study a standard?
− To see how all the tricks we discussed manifest in practice
− Standards are well thought out - it’s in the details
• Standards under consideration
− ANSI/IEEE Std. 802.11, 1999 edition
− IEEE Std. 802.11a-1999
• How to read the standard?
− Look at the table of contents
− Decode vocabulary
− Look at criteria
• IEEE 802.11a specifications (very similar to IEEE 802.11g but I have the IEEE 802.11a stan-
dard)
− MAC protocol - control access to medium, etc.
− PHY protocol - waveform, coding, modulation, etc.
• Overview
− WLAN: 5.15-5.25, 5.25-5.35, 5.785-5.825GHz, UNII band
− Payloads: 6, 9, 12, 18, 24, 36, 48, 54 Mbps (6, 12, and 24Mbps are mandatory)
− Modulation: BPSK, QPSK, 16-QAM, 64-QAM
− N = 64-point FFT, 48 data tones, 4 pilots
− Coding: rate 1/2, 2/3, 3/4 convolutional codes with puncturing
• Transmission of data (see page 7, Figure 107)
− PPDU - PHY convergence layer protocol (Frame format)

(Figure: PPDU frame format, consisting of the PLCP preamble (12 symbols; used for AGC, synchronization, and training), the SIGNAL field (1 OFDM symbol; carries information about the data to come), and the DATA field (variable number of OFDM symbols).)

• Construction of an OFDM data symbol

(Figure: each OFDM data symbol consists of a guard interval T_GI followed by the FFT interval T_FFT, so T = T_GI + T_FFT.)

− Key timing-related parameters:
  · N_SD (data subcarriers): 48
  · N_SP (pilot subcarriers): 4
  · N_ST (total subcarriers): 52
  · ∆F (subcarrier spacing): 0.3125 MHz (20 MHz/64)
  · T_FFT: 3.2 µs (1/∆F)
  · T_GI: 0.8 µs (T_FFT/4)
  · T_GI2: 1.6 µs (T_FFT/2)
  · T_SYM: 4 µs (T_GI + T_FFT)
  · T_SIGNAL: 4 µs (T_GI + T_FFT)
  · T_SHORT: 8 µs (10 × T_FFT/4)
  · T_LONG: 8 µs (T_GI2 + 2 × T_FFT)
  · T_PREAMBLE: 16 µs (T_SHORT + T_LONG)
• Possible data rates (used for adaptive modulation)

• PLCP preamble used for synchronization (Section 17.3.3)
− The PLCP preamble field is used for synchronization

− t1 to t10 denote short training symbols and T1 and T2 denote long training symbols
− Short OFDM training symbol consists of 12 subcarriers
− Long OFDM training symbol consists of 53 subcarriers (including a zero value at dc)
• DATA field (Section 17.3.5)
− The DATA field contains the SERVICE field, the PSDU, the TAIL bits (6bits), and the PAD
bits.
− Pad bits: first pad the bits so that the DATA field contains an integer number of OFDM
symbols
− Scrambling: all bits in the DATA field are scrambled with a length 127 frame-synchronous
scrambler.
· The frame synchronous scrambler uses the generator polynomial S(x) as

S(x) = x7 + x4 + 1

− Convolutional encoding: the DATA field, composed of SERVICE, PSDU, tail, and pad
parts, shall be coded with a convolutional encoder of coding rate R = 1/2, 2/3 or 3/4,
corresponding to the desired data rate. Viterbi decoding is recommended.
· The convolutional encoder shall use the industry-standard generator polynomials.
· The bit denoted as “A” shall be output from the encoder before the bit denoted as “B”.
− Puncturing: the higher rates are derived from the rate-1/2 code by employing "puncturing". Puncturing is
a procedure for omitting some of the encoded bits in the transmitter and inserting a dummy
"zero" metric into the convolutional decoder on the receive side in place of the omitted bits.
− Interleaving: defined by a two-step permutation
1. Adjacent coded bits → mapped onto nonadjacent subcarriers

2. Adjacent coded bits → mapped alternately onto less and more significant bits (LSB and
MSB) on the constellation
− Subcarrier modulation mapping: the OFDM subcarriers shall be modulated by using
BPSK, QPSK, 16-QAM, or 64-QAM modulation.
· According to Gray-encoded constellation mapping, the output values, d, are formed by

  d = (I + jQ) × K_MOD

  where K_MOD is a modulation-dependent normalization factor.

− Input-output mapping of IFFT

2 Decoding the GSM standard

• Objective: Explain the components of the GSM standard.


• Objective: Justify the framing and slot structure.
• What does GSM stand for?
− Global System for Mobile communications
− (French) Groupe Special Mobile
• What is GSM?
− 2nd generation cellular system (1st was analog)
− First generation of the standard started in 1982, ended in 1988
− Motivation was to provide uniform cellular coverage in Europe, replacing low-capacity analog
systems
− Release in phases
· Phase 1: 1990 basic voice

· Phase 2: 1993 added SMS, fax, etc. data
· Phase 2+: evolution of GSM (EDGE, 2.5G)
− Next phase is third generation system 3G called Wideband CDMA (UMTS)
· Uses the same network infrastructure
• GSM is a standard that includes many frequency bands, vocoder, data rates, etc.
• What is included in the GSM standard?

(Figure: GSM network architecture: the mobile station (MS, consisting of terminal equipment, mobile termination, and the SIM, the subscriber identity module) connects over the Um air interface to a BTS; BTSs connect over the Abis interface to a BSC, together forming the BSS; the BSS connects to the MSC, which interworks with the HLR, VLR, AuC, and a gateway MSC.)

− BTS - base transceiver station


− BSC - base station controller

− BSS - base station subsystem
− MSC - mobile switching center, routes calls to/from users
− HLR - home location register, stores subscriber-specific information
− AuC - authentication center
− VLR - visitor location register
• Where is the GSM standard?
− www.etsi.org
− Standard can be downloaded for free but it helps to know the document numbers
· ETSI TS 100 959 v.8.4.0 Digital cellular telecommunications system (Phase 2+); Mod-
ulation contains modulation information
· ETSI TS 100 910 v.8.9.0 Digital cellular telecommunications system (Phase 2+); Radio
transmission and reception
· ETSI TS 100 908 v.8.9.0 Digital cellular telecommunications system (Phase 2+); Mul-
tiplexing and multiple access on the radio path
· ETSI TS 100 573 v.8.7.0 Digital cellular telecommunications system (Phase 2+); Phys-
ical layer on the radio path; general description
− Is about 5,000 pages in print
• GSM is a TDMA/FDMA system
− TDMA: time division multiple access
· Resource divided into different periods over time-slots
− FDMA: frequency division multiple access
· resource divided into different carrier
• Basic Tx/Rx

(Figure: basic GSM transmitter and receiver. Transmit chain: speech coder, channel coder, interleaver, cipher, burst assembly, GMSK modulator, RF. Receive chain: RF, channel estimator and equalizer (Viterbi), decipher, deinterleaver, channel decoder, speech decoder.)

• GSM uses GMSK modulation or 8-PSK


− Gaussian minimum shift keying
− Non-linear modulation with special properties
· Near constant envelope
· Allows use of a nonlinear amplifier
· Complex baseband waveform

  x(t) = e^{j(φ(t) + φ_0)}

  where

  φ(t) = (π/2) Σ_k s[k] Φ(t − kT),   Φ(xT) = G(x − 1/2) − G(x − 3/2)

  G(x) = x ∫_{−∞}^{x} (1/(√(2π)σ)) e^{−t²/(2σ²)} dt + (σ/√(2π)) e^{−x²/(2σ²)},   σ = √(ln 2)/(2π · 0.3)
− Introduce ISI at Tx
− Can be written as an equivalent linearized QPSK system with differential encoding
• GSM framing structure is quite interesting
− Normal burst

  (Figure: normal burst structure: 3 tail bits, 58 data bits, 26-bit training sequence, 58 data bits, 3 tail bits.)

  · 8 different training sequences are defined
  · Each is a 16-bit core, cyclically extended by 5 bits
· Good auto-correlation
· Low cross-correlation
− Frequency correction burst (sent on a control channel)

· GMSK looks like a sine wave


− Access burst

  (Figure: access burst structure: 8 tail bits, 41-bit synchronization sequence, 36 data bits, and a 68.25-bit guard period (252 µs).)

· Sync used for training + offset


· Guard period for timing
− Synchronization burst

  (Figure: synchronization burst structure: 39 bits, a 67-bit extended training sequence, 39 bits.)

· Extra-long training data for sync + channel estimation + training

2 Introduction to Propagation and Fading Channels

• Objective: Define the mechanisms of propagation.


• Objective: Distinguish between large-scale + small-scale fading.
• Simplified channel with finite paths

y(t) = h0 x(t − d0 /c) + h1 x(t − d1 /c) + h2 x(t − d2 /c) + v(t)


Figure 39: Simplified Channel with Finite Paths


− h0 , h1 , h2 are the gains for the existing paths in the channel
− d0 , d1 , d2 are the distances travelled by the wave from Tx to Rx along paths
− c is the speed of light
− v(t) is the noise process in the channel
• A general model for a wireless communication channel is
  y(t) = ∫_τ h(t, τ) x(t − τ) dτ + v(t)

− h(t, τ ) is the baseband equivalent time-varying impulse response of the channel (can as-
sume bandlimited)
− t is the time index, τ is the lag
− Models the combination of different propagation effects in the channel
− Creates fluctuations of the signal over time called fading and intersymbol interference
− Includes many different propagation effects
• There are several different mechanisms of propagation all leading to the presence of multiple RF
paths (or multi-path) between the transmitter and receiver
− Transmission - wave propagation through the medium, typically refers to the line-of-sight
path but also includes propagation through objects
· Always first component received
· Typically strongest component received

− Reflection - wave impinging on an object with dimensions ≫ λ
− Scattering - wave impinging on an object with irregularities ≈ λ ("roughness")
− Diffraction - “bending” of wave around large structure with irregularities (coverage in urban
area). Stronger with low frequencies (larger wavelengths)

Figure 40: Mechanisms of propagation


• The effects of propagation determine the channel between the transmitter and receiver and thus
have a significant impact on radio link performance
− Determines the received signal strength and thus the signal-to-noise ratio, throughput, and
probability of error
− Determines L, i.e. the length of the channel, which determines how much equalization is
required, how much training is required
− Determines the frequency of training, how often must the channel be reestimated
− Figures substantially in the physical layer design of the communication system
• Propagation modeling
− A propagation model is a mathematical model (typically stochastic) for either the propaga-
tion channel or some function of the propagation channel
− Parameters of the model are derived or “inspired” by measurements or electromagnetic
simulation
− May be used for system design, algorithm evaluation, comparisons, coverage prediction,
etc.
− Historically the first propagation models were for models for received signal strength as a
function of location for narrowband signals Pr (d)
− There are “standardized” channel models used by standards bodies for the purpose of com-
paring different physical layer algorithms. They are reproducible.
− Models are often derived from measurement data using model fitting (e.g. in ADSP we fit
a pole-zero model to a given impulse response)

2 Modeling received signal strength

• Objective: Describe the components of a typical model for received signal strength.
• Perhaps the first propagation models were for the received signal strength as a function of loca-
tion Pr (d)
• Purpose of model is to account for typical behavior as a function of distance not to predict a
specific path loss in a given environment
• For the flat fading model Pr (d) = Ps |h(d)|2
• Model should include the following phenomena
− Large-scale fading - Median signal strength as a function of distance, captures general decay
of received signal, variation of the signal power over larger distances on the order of 100’s
of wavelengths, shadowing by buildings, etc. Captures general loss in signal as a function
of distance.
− Small-scale fading - Variation of the signal power over small distances, on the order of
wavelengths (λ's), as a result of constructive + destructive multi-path interference. Captures more subtle
multi-path effects.

Figure 41: Illustration of the variation of received signal strength as a function of location.

• Typical mathematical model of received signal strength in Watts

Pr (d) = αlarge αsmall Pt

Pr (d) [dB] = αlarge [dB] + αsmall [dB] + Pt [dB]

− αlarge [dB] = 10 log10 αlarge : attenuation in dB, often modeled as a random variable
− αsmall [dB] = 10 log10 αsmall : attenuation in dB, often modeled as a random variable
− Pt [dB]: transmit power in dB
− Path loss is Pr (d)[dB] − Pt [dB]

2 Review of dB, dBm, Watts, milliWatts

• Objective: Calculate gain and power in linear, dB, and dBm scales.
• Signal strength is often measured in Watts or dBW (referenced to 1 Watt)

P [dBW ] = 10 log10 P [W ]/1W

We usually omit the W when there is no chance of confusion

G[dB] = 10 log10 G

where G is unitless.
• Because mobile wireless systems typically use < 1 Watt of power, milliWatts or dBmW are
often used instead
P [dBmW ] = 10 log10 P [mW ]/1mW
We call this dBm when there is no chance for confusion.
• The following conversion is useful

P [dBm] = P [dB] + 30dB

• Example: 2 W = 3 dBW = 33 dBm = 2,000 mW


• Example: Consider
Pr [W ] = αPt [W ]
in dB
Pr [dB] = α[dB] + Pt [dB]
in dBm
Pr [dBm] = α[dB] + Pt [dBm]
Notice that α retains its dB units!
• Example: 0dBm + 3dB = 3dBm
• Algebraic manipulations of quantities in dB and dBm
− dB + dB = dB
− dBm + dBm = dBm
− dB + dBm = dBm
− dB − dB = dB
− dBm − dBm = dB
− dB − dBm = dBm

2 Large scale versus small scale fading processes

• Objective: Explain motivation for large scale versus small scale fading phenomena
• Both large-scale and small-scale propagation phenomena are important
− Large-scale influence system planning, link budget, captures “typical” loss in received sig-
nal strength as a function of distance
− Small-scale influences physical layer link design, modulation schemes, captures local con-
structive and destructive multi-path effects

Figure 42: Large-scale versus small-scale propagation characterization


2 Large scale fading

• Objective: Calculate received signal strength, path loss, and received SNR as a function of dis-
tance for the Friis freespace equation, log-distance equation, and log-normal Gaussian models.
• Large-scale fading typically models received signal strength
• The Friis free space equation is the foundation of most large scale fading models

  P_r(d) = P_t G_t G_r λ² / ((4π)² d² L)

or more commonly in dB notation (we assume dB from now on unless otherwise noted)

  P_r(d)[dB] = P_t[dB] + G_t[dB] + G_r[dB] + 20 log₁₀ λ − 20 log₁₀ 4π − 20 log₁₀ d − 10 log₁₀ L

− Pr (d) is the received power as a function of distance d


− Pt is the transmit power
− Gt and Gr are the transmit and receive antenna gains (assume 1 unless given)
− λ is the wavelength of the carrier
− L is a loss factor

• Observations about Friis
− If the distance doubles, the received power drops by 6 dB
− If the wavelength doubles, the received power increases by 6 dB
− If the frequency doubles (inversely proportional to the wavelength), then the received power
drops by 6 dB
• The Friis free space equation is too optimistic for mobile systems
• Need to account for the typical (mean or median) signal and variation of the signal at a given
distance d → path loss models
• Log distance path loss model with shadowing (everything in dB now)

Pr (d) = Pt − L(d)

L(d) = Ls (d0 ) + L̄(d) + Xσ


− d0 is a reference distance in the far field (typically greater than about 2λ), typically 1m
− L(d) is the path loss as a function of d
− Ls (d0 ) is the reference path loss (calculate using Friis if not given)
− L̄(d) = 10n log10 (d/d0 ) is the mean (or median) path loss with exponent n ∈ [1, 5]
− X_σ is a zero mean Gaussian random variable with variance σ², called log-normal because
it is Gaussian in the dB (log) scale and log-normal in the linear scale. Typically σ ∈ [5, 8.5] dB.
− The parameters σ and n would be found from measurements
• Other common models include the ground-bounce model (includes height of transmitter and
receiver, has n = 4 decay), and more empirically derived models like the Okamura model or
Hata models.
• Signal-to-noise ratio at a distance d

SN R(d) = Pr (d)[dB] − 10 log10 No W

where W is the bandwidth and No = kTe


• Utility of path loss equations is system design - not propagation prediction!
• Example calculation
  Suppose that P_t = 100 mW = 20 dBm, P_r(d_0) = 10 dBm, d_0 = 1 m, n = 3

  1. Calculate P̄_r(100 m) (the average)

     P̄_r(d) = P_t − L̄(d)
     L̄(d) = L_s(d_0) + 10n log₁₀(d/d_0), with L_s(d_0) = P_t[dBm] − P_r(d_0) = 10 dB
     ⇒ L̄(100 m) = L_s(d_0) + 30 log₁₀ 100 = 10 dB + 60 dB = 70 dB
     P̄_r(d) = P_t − L_s(d_0) − 30 log₁₀(d/d_0)
     ⇒ P̄_r(100 m) = 20 dBm − 70 dB = −50 dBm

  2. Suppose that σ = 8 dB. What is the probability that P_r(100 m) < −60 dBm?

     Prob{P_r(100 m) < −60 dBm} = Prob{−50 dBm + X_σ < −60 dBm},   X_σ ∼ N(0, σ²)
                                = Prob{X_σ < −10 dB}
                                = Prob{X_σ > 10 dB}
                                = Prob{X_σ/σ > 10 dB/σ}
                                = Q(10 dB / 8 dB) = Q(1.25)
                                = 0.106, i.e., roughly 10% outage

  3. What d is required to ensure that P_r(d) > −65 dBm with probability 95%?

     Prob{P_r(d) ≤ −65 dBm} = 0.05
     ⇒ Prob{P_t − L_s(d_0) − 10n log₁₀(d/d_0) + X_σ < −65 dBm} = 0.05
     ⇒ Prob{X_σ/σ ≤ (−65 dBm − P_t + L_s(d_0) + 10n log₁₀(d/d_0))/σ} = 0.05
     ⇒ Prob{X_σ/σ > (65 dBm + P_t − L_s(d_0) − 10n log₁₀(d/d_0))/σ} = 0.05
     ⇒ Q(α) = 0.05 ⇒ α = 1.65
     ⇒ 65 dBm + P_t − L_s(d_0) − 10n log₁₀(d/d_0) = 1.65σ
     ⇒ 65 + 20 − 10 − 30 log₁₀ d = 1.65 × 8 = 13.2
     ⇒ 30 log₁₀ d = 61.8
     ⇒ d ≈ 114 m
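The calculation above can be checked numerically. This is a minimal sketch using the same assumed parameters (Pt = 20 dBm, Ls(d0) = 10 dB, n = 3, σ = 8 dB); the function names are illustrative.

    import math

    def qfunc(x):
        """Gaussian Q-function Q(x)."""
        return 0.5 * math.erfc(x / math.sqrt(2))

    def mean_rx_power_dbm(pt_dbm, ls_d0_db, n, d, d0=1.0):
        """Mean received power under the log-distance model (inputs in dBm/dB)."""
        return pt_dbm - (ls_d0_db + 10 * n * math.log10(d / d0))

    pr_100 = mean_rx_power_dbm(20.0, 10.0, 3, 100.0)       # -50 dBm
    p_outage = qfunc((pr_100 - (-60.0)) / 8.0)             # Prob{Pr(100 m) < -60 dBm} ~ 0.106
    print(pr_100, p_outage)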

2 Small Scale Fading Overview

• Objective: Describe the factors that influence small-scale fading.


• Objective: Justify the framing and slot structure.
• Factors that influence small-scale fading
− Small-scale fading - fluctuation in the envelope of the received signal over small distance
≈λ
− Frequency-selective fading: smearing of Rx signal due to multipath propagation
− Time-selective fading: variation of channel over time due to mobility of Tx and/or Rx
− Equivalent discrete-time system models for each case
· Flat/Slow: y[n] = h s[n] + v[n]
· Flat/Fast: y[n] = h[n] s[n] + v[n]
· Frequency selective/Slow: y[n] = Σ_l h[l] s[n − l] + v[n]
· Frequency selective/Fast: y[n] = Σ_l h[l, n] s[n − l] + v[n]
− Key questions to answer about small-scale fading:

(Figure: classification of small-scale fading: flat versus frequency-selective fading depending on the signal bandwidth, and slow versus fast fading depending on the symbol period relative to the channel variation.)

1. How do we know which case we are in?


2. How can we mitigate the effects?
− A wide sense stationary uncorrelated scattering assumption (WSSUS) allows us to make
simplifying statements about the channel

2 Frequency Selective Fading

• Objective: Determine when a given propagation environment is frequency selective or frequency


flat.
• Variation of channel amplitude with frequency
• As multipath intensity increases, frequency selectivity increases
• Multipath intensity profile or power delay profile measures energy as a function of lag and is
found via measurements
Sm (τ ) = E|h(t, τ )|2

• Spaced-frequency correlation function measures the correlation as a function of the difference


between subcarriers
  R_m(∆f) = F{S_m(τ)}

− Mean excess delay

  τ̄ = ( ∫₀^∞ S_m(τ) τ dτ ) / ( ∫₀^∞ S_m(τ) dτ )

− Second moment (the mean of τ²)

  ⟨τ²⟩ = ( ∫₀^∞ S_m(τ) τ² dτ ) / ( ∫₀^∞ S_m(τ) dτ )

− Root mean square (rms) delay spread

  σ_τ = √( ⟨τ²⟩ − (τ̄)² )

− Coherence bandwidth

  B_c ≈ 1/(5σ_τ)
• Coherence Bandwidth Bc is the range of frequencies over which the channel remains fairly
constant
• A channel is frequency flat if the symbol time

  T ≫ σ_τ

or equivalently if the bandwidth of the signal

  B ≪ B_c

• Measure Sm (τ ) by estimating several channel realizations and estimating the average power by
replacing the expectation with the sample mean.
• Measure Rm (∆f ) by sending sinusoids at ∆f = f1 − f2 and estimating the correlation between
their respective channels
• Mitigation of frequency selective fading: linear equalization, decision feedback equalization,
maximum likelihood sequence detection
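A minimal sketch of computing the rms delay spread and coherence bandwidth from a sampled power delay profile; the two-path profile values are a hypothetical example, not measured data.

    import numpy as np

    def rms_delay_spread(S, tau):
        """RMS delay spread of a sampled power delay profile S(tau)."""
        S, tau = np.asarray(S, float), np.asarray(tau, float)
        p = S / S.sum()                    # normalize the profile
        tau_mean = np.sum(p * tau)         # mean excess delay
        tau_sq = np.sum(p * tau ** 2)      # second moment
        return np.sqrt(tau_sq - tau_mean ** 2)

    # Hypothetical two-path profile: equal-power paths at 0 and 2 microseconds
    sigma_tau = rms_delay_spread([1.0, 1.0], [0.0, 2e-6])   # 1 microsecond
    print(sigma_tau, 1 / (5 * sigma_tau))                   # coherence bandwidth ~ 200 kHz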

2 Time Selective Fading

• Objective: Determine when a given propagation environment is time selective or non-time-


selective.
• Variation of the channel with respect to time
• Often measured using the coherence time T_c of the channel, the interval over which the channel remains fairly
constant
• Spaced-time correlation function measures the correlation as a function of the difference in time
∆t = t2 − t1
Rd (∆t) = Eh(t2 , τ )h∗ (t1 , τ )
• Doppler power spectrum measures energy as a function of Doppler frequency

  S_d(f) = F{R_d(∆t)}

• Doppler spread can be defined in a similar way as rms delay spread. In mobile channels we
often use the maximum Doppler shift (as a worst case)
  f_m = v f_c / c
• Coherence time

  T_c ≈ 1/f_m

• A channel can be assumed to be time invariant (not time selective) if

  T ≪ T_c

or equivalently

  B ≫ f_m

• Note: we often consider the packet length

  K T ≪ T_c,   with K = # of symbols

• Estimate coherence time using either the maximum expected Doppler frequency or by estimating
the coherence time directly
• Mitigation of time-selective fading:
− More robust modulation schemes
− Increase the symbol rate
− Adaptive estimation and tracking
− Error control coding and interleaving

2 Example small-scale fading calculation

• Objective: Be able to determine which of the four channel combinations is applicable under a
given set of assumptions.
• Example: What kind of fading is experienced by GSM?
− BW=200kHz, T = 3.7µsec, burst time= 0.57ms
− Carrier frequency= 890 − 915, 935 − 960MHz
− rms delay spread
· NYC: 1300ns(avg.), 3500ns(max) @ 900MHz
· Indoor: 10-50ns @ 1500MHz
− Mobility
· Train(300km/h), car(120km/h), pedestrian(4km/h)
− Is GSM frequency selective?
  · σ_τ = 3.5 µs → B_c ≈ 1/(5σ_τ) ≈ 57 kHz, while the signal BW = 200 kHz → frequency-selective
− Is GSM time selective?
  · Take f_c = 1.9 GHz
  · Train at 300 km/h: 300 km/h × (1 h/3600 s) × (1000 m/km) ≈ 83 m/s → f_m = v f_c/c ≈ 525 Hz
  · T_c ≈ 1/f_m ≈ 1.9 ms
  · Burst is ≈ 0.5 ms, so the channel is roughly constant over a burst

2 Probability of Error Analysis in Rayleigh Fading Channels

• Objective: Calculate a bound on the probability of symbol error for Rayleigh fading channels.
• The coherence time & coherence bandwidth determine the equivalent system model used to
design the receiver
− Flat/Slow: y[n] = h s[n] + v[n]
  · Estimate ŝ[n] = h^{−1} y[n]
  · SNR = |h|² E_s/N_0
  · If |h| is small, the BER will increase. We can use diversity techniques.
− Flat/Fast: y[n] = h[n]s[n] + v[n]

· Estimate h[n] maybe using h[n − 1] (Channel tracking)
· Consider error control coding, interleaving, or time diversity
− Frequency selective/Slow: y[n] = Σ_l h[l] s[n − l] + v[n]
  · Estimate h[l] at the Rx and equalize
− Frequency selective/Fast: y[n] = Σ_l h[l, n] s[n − l] + v[n]
  · Use adaptive loops to estimate h[l, n] and equalize
• The impact of fading in the channel on symbol error rate depends on the type of fading mitigation
techniques employed
− For complex systems this may be estimated using Monte Carlo simulations
− For some special cases the average probability of error can be calculated
• Consider estimating the symbol error rate in a flat fading channel with discrete-time model
  y[n] = √E_x · h[n] s[n] + v[n]

− Assuming independent detection this model works for both slow fading and fast fading
− We explicitly factor the Ex term out of the equivalent channel to make SNR computations
easier
− Model h[n] as an i.i.d. random variable
• Instantaneous symbol error rate for M-QAM is a random variable

  P_e(E_x/N_0 | h[n]) = 4(1 − 1/√M) Q( √( |h[n]|² (3/(M−1)) E_x/N_0 ) )
                        − 4(1 − 1/√M)² Q²( √( |h[n]|² (3/(M−1)) E_x/N_0 ) )

• Alternatively, often measure the average probability of error

  P_e(E_x/N_0) = E_{h[n]}[ P_e(E_x/N_0 | h[n]) ]
               = ∫_x P_e(E_x/N_0 | x) f_x(x) dx

• Rayleigh fading channel is the most common flat fading model, typically a worst-case model
− h[n] ∼ N_C(0, 1) ⇒ h[n] = h_R[n] + j h_I[n] with h_R[n] and h_I[n] i.i.d. N(0, 1/2)
− |h[n]| ∼ Rayleigh (envelope)
− |h[n]|² ∼ exponential, i.e., chi-square with 2 degrees of freedom
• Calculation of P_e(E_x/N_0) for a fading channel
  1. Union upper bound on the probability of error

     P_e(E_x/N_0 | h[n]) ≤ α Q( √( E_x d²_min |h[n]|² / (2N_0) ) )

  2. Chernoff bound: Q(x) ≤ e^{−x²/2}

     P_e(SNR) ≤ ∫_x e^{−E_s d²_min |x|²/(4N_0)} f_x(x) dx

  3. Tricky way to evaluate, using the property that

     f_x(x) = (1/π) e^{−|x|²}  →  ∫_x f_x(x) dx = 1

     ∫_x e^{−E_s d²_min |x|²/(4N_0)} (1/π) e^{−|x|²} dx
       = ∫_x (1/π) e^{−|x|² (E_s d²_min/(4N_0) + 1)} dx
       = ( E_s d²_min/(4N_0) + 1 )^{−1} ∫_x ( (E_s d²_min/(4N_0) + 1)/π ) e^{−|x|² (E_s d²_min/(4N_0) + 1)} dx
       = ( E_s d²_min/(4N_0) + 1 )^{−1}

     ⇒ P_e(SNR) ≤ 1 / ( E_s d²_min/(4N_0) + 1 ) ≈ SNR^{−1}

Figure 43: Probability of error curve comparing Gaussian and Rayleigh fading.

• Impact of Rayleigh fading is that much more SNR is required to achieve a target probability of
symbol error
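The SNR penalty can be seen quickly in a Monte Carlo simulation. This is a minimal sketch, assuming unit-energy QPSK, N0 = 1, and zero-forcing detection; it is illustrative, not a calibrated link simulation.

    import numpy as np

    rng = np.random.default_rng(4)
    num, snr_db = 200_000, 15.0
    Es = 10 ** (snr_db / 10)                                       # with N0 = 1

    s = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], num) / np.sqrt(2)            # QPSK
    v = (rng.standard_normal(num) + 1j * rng.standard_normal(num)) / np.sqrt(2)     # CN(0,1) noise
    h = (rng.standard_normal(num) + 1j * rng.standard_normal(num)) / np.sqrt(2)     # Rayleigh fading

    def qpsk_detect(z):
        return (np.sign(z.real) + 1j * np.sign(z.imag)) / np.sqrt(2)

    ser_fading = np.mean(qpsk_detect((np.sqrt(Es) * h * s + v) / h) != s)
    ser_awgn = np.mean(qpsk_detect(np.sqrt(Es) * s + v) != s)
    print(ser_fading, ser_awgn)    # fading needs far more SNR for the same target error rate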

2 Diversity as a means to reduce impairments caused by fading

• Objective: Identify several sources of diversity.


• Diversity refers to sources of independent fading

Figure 44: Illustration of different diversity sources fading independently. The idea is to use a diversity
scheme to ride the peaks and avoid the valleys.

• The idea is to exploit diversity by sending information on different diversity branches


− Coherence time ↔ Receive symbols at different times
− Coherence frequency ↔ Receive symbols at different frequencies
− Coherence space ↔ Receive symbols at different points in space
• Diversity may be explicit or implicit
• Receive antenna diversity - space the antennas sufficiently far apart: roughly λ/2 to λ for headsets, on the order of 10λ at the base station
− Explicit form of diversity where multiple antennas are used for reception
− Exploit: combining, beamforming, or general DSP
− Easier to exploit antennas at the receiver, multiple transmit antennas require complex space-
time coding
− Receive diversity is used on most base stations in cellular systems, switched diversity is
used in IEEE 802.11
• Polarization diversity
− Explicit: send or receive data on multiple polarizations
− Exploit: similar to spatial diversity. Provides diversity advantage with small headsets (spac-
ing not required)
− Used on most base stations to accommodate different mobile device orientations
• Pattern (or angle) diversity
− Explicit: receive on multiple patterns, instead of multiple receive antennas
− Exploited: similar as array processing
− Less extensively studied than other forms of diversity
• Time diversity: receive data at different times
− Explicit: send data at multiple times. using coding and interleaving. ARQ.
− Exploited: similar as array processing but depends on the coding scheme
− Used in the GSM system (speech coded and interleaved over four GSM bursts)
• Frequency (or path) diversity
− Explicit: send data on multiple frequencies
− Implicit: use frequency selective equalizer
− Used in the GSM system, IEEE 802.11, IS-95, WCDMA, etc.
− Degree of diversity obtained depends on the type of receiver employed

2 Diversity combining schemes and symbol error rate performance

• Objective: Describe two different techniques for exploiting diversity channels.


• Objective: Demonstrate analytically the symbol error rate performance improvement that de-
rives from using diversity.
• Consider receive antenna diversity

  y_1[n] = √E_s h_1[n] s[n] + v_1[n]
  ...
  y_M[n] = √E_s h_M[n] s[n] + v_M[n]

• Selection diversity: process the received signal from the diversity branch with the largest instan-
taneous SNR

  z[n] = √E_x h_{m*}[n] s[n] + v_{m*}[n],   with m* = arg max_m |h_m[n]|²

• Maximum ratio combining - the SNR-maximizing combination of all diversity sources

  z[n] = w* [y_1[n], . . . , y_M[n]]^T = w* y

  Find w to maximize the SNR → solve max_w E_s |w*h|² / (N_0 ‖w‖²)

  Note that |⟨w, h⟩|² ≤ ‖w‖² ‖h‖² (Cauchy-Schwarz inequality)

  ⇒ Equality iff w = h, so w*h = Σ_{m=1}^{M} h*_m[n] h_m[n] = Σ_{m=1}^{M} |h_m[n]|²

  Then z[n] = √E_s ( Σ_{m=1}^{M} |h_m[n]|² ) s[n] + w* v[n],  and SNR = E_s |w*h|² / (N_0 ‖w‖²) = ‖h‖² E_s/N_0
• Evaluating the symbol error probability

  P_e(E_x/N_0 | h_1[n], ..., h_M[n]) ≤ α Q( √( E_x d²_min Σ_{m=1}^{M} |h_m[n]|² / (2N_0) ) )

  P_e(E_x/N_0) ≤ E_h[ α Q( √( E_s d²_min Σ_{m=1}^{M} |h_m[n]|² / (2N_0) ) ) ]
               ≤ E_h[ α e^{−E_s d²_min Σ_{m=1}^{M} |h_m[n]|² / (4N_0)} ]

  For an M-dimensional vector h ∼ N_C(0, R), f(h) = (1/(π^M |R|)) e^{−h* R^{−1} h}.  Hence,

  E_h[ e^{−E_s d²_min h*h/(4N_0)} ] = ∫···∫ e^{−E_s d²_min h*h/(4N_0)} (1/π^M) e^{−h*h} dh
    = ( E_s d²_min/(4N_0) + 1 )^{−M} ∫···∫ (1/(π^M |R|)) e^{−h* R^{−1} h} dh,   with R = ( E_s d²_min/(4N_0) + 1 )^{−1} I
    = 1 / ( E_s d²_min/(4N_0) + 1 )^M

  Consequently,

  P_e(E_x/N_0) ≤ α / ( E_s d²_min/(4N_0) + 1 )^M
Figure 45: Illustration of the probability of error for different orders of diversity assuming independent
Rayleigh fading.
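A minimal numpy sketch of the two combining schemes above; the function names and the (M, K) array layout are assumptions made for illustration.

    import numpy as np

    def selection_combining(Y, H):
        """Per symbol, keep the branch with the largest instantaneous channel gain and equalize it.

        Y, H : (M, K) arrays of received samples and channel gains for M branches and K symbols.
        """
        best = np.argmax(np.abs(H) ** 2, axis=0)
        cols = np.arange(Y.shape[1])
        return Y[best, cols] / H[best, cols]

    def mrc_combining(Y, H):
        """Maximum ratio combining: weight each branch by its conjugate channel gain (w = h)."""
        return np.sum(np.conj(H) * Y, axis=0) / np.sum(np.abs(H) ** 2, axis=0)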

2 Link Budget

• Objective: Construct a link budget and use it to determine the achievable range or other system parameters.
• A link budget is an accounting of how power is “spent” in the link
• Helps determine cell range or area under different assumptions
− Type of service - influences target BER, data rate
− Type of environment (indoor, outdoor) - determines assumptions about channel
− System configuration - antennas, available power, losses, gains
− Coverage probability - determines allowable outage
− Cost - determines quality of hardware and associated parameters
• Typical link budget
− Transmit power
− Antenna gain
− Effective noise temperature
− Interference power
− Cable and body losses
− Model specific parameters like path-loss exponent, frequency, reference distance
− Small-scale fade margin - depends on amount of diversity available
− Shadowing variance
− Large-scale fade margin
− Range
• Typically one computes a link budget to find the range, but it can be used to back-solve for
essentially any parameter, e.g., required transmit power, antenna gain, noise figure, or diversity
order
• UMTS link budget calculation
− Go through link budget example
− Discuss how budget “adds up”

2 Cellular reuse design concept

• Objective: Describe benefits of cellular reuse.


• Objective: Calculate S/I under different reuse assumptions.
• In wireless systems, bandwidth is scarce → reuse bandwidth as much as possible
• Cellular reuse is a concept that allows frequencies to be reused through geographic separation
− A given bandwidth is divided into F carrier frequencies
Figure 46: Example link budget from http://www.umtsworld.com/technology/linkbudget.htm for the uplink
of the UMTS cellular system.

− Frequencies are split up (evenly) among the N cells in a cluster, so each cell gets K = F/N carriers


− All frequencies are reused in every cluster
• Classic example is the hexagonal tessellation
− R is the radius of a cell
− D is the co-channel reuse distance
− N = i² + j² + ij is the number of cells per cluster, where i, j are integers

− Q = D/R = √(3N) is the co-channel reuse ratio
• S/I background
− Measures the signal-to-interference ratio at the worst point in the cell
− May be different on uplink versus downlink

81
Figure 47: Illustration of the hexagonal tessellation.

− Does not include noise


• S/I calculations
  S/I = S / Σ_k I_k   (36)

Ik is the k − th interferer with distance Dk . Typically consider the first tier of interferers.
− Assume log-distance path loss model Pr (d) = Pt (d/d0 )−n

  S/I = R^{−n} / Σ_k D_k^{−n}

− Uplink S/I calculation (generally easier) with hex cells and first tier of interference. All
interfering base stations are distance D away

  S/I = R^{−n} / (6 D^{−n}) = Q^n / 6
− Downlink S/I calculation (generally harder) with hex cells and first tier of interference

  S/I ≈ R^{−n} / ( 2(D − R)^{−n} + 2(D + R)^{−n} + 2D^{−n} )

or

  S/I ≈ 1 / ( 2(Q − 1)^{−n} + 2(Q + 1)^{−n} + 2Q^{−n} )

Assuming all interferers are at distance D − R (worst case)

  S/I ≈ R^{−n} / (6(D − R)^{−n}) = (Q − 1)^n / 6
• Example of S/I calculations as a function of reuse
− Hex: N = 3, Q = 3, n = 2, S/Iup = 1.8dB,S/Idown ≈ −1.7dB
− Hex: N = 3, Q = 3, n = 4, S/Iup = 11.3dB,S/Idown ≈ 4.25dB
− Hex: N = 7, Q = 4.58, n = 4, S/Iup = 18.6dB,S/Idown ≈ 14.4dB
• SINR calculations

  S/(I + Noise) = ( (S/I)^{−1} + (S/Noise)^{−1} )^{−1}   (37)

• Larger reuse versus smaller reuse


− Larger reuse → better S/I, poor trunking efficiency (few channels)
− Smaller reuse → worse S/I, better trunking efficiency (more channels)
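A minimal sketch of the first-tier S/I expressions above for hexagonal reuse; the function names are illustrative, and the printed values correspond to the N = 7, n = 4 example.

    import math

    def si_uplink_db(N, n):
        """First-tier uplink S/I in dB for hexagonal cluster size N and path-loss exponent n."""
        Q = math.sqrt(3 * N)                      # co-channel reuse ratio Q = D/R
        return 10 * math.log10(Q ** n / 6)

    def si_downlink_worstcase_db(N, n):
        """Worst-case downlink S/I in dB, with all six first-tier interferers at distance D - R."""
        Q = math.sqrt(3 * N)
        return 10 * math.log10((Q - 1) ** n / 6)

    print(si_uplink_db(7, 4), si_downlink_worstcase_db(7, 4))   # ~18.7 dB and ~14.4 dB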

2 Improving S/I

• Objective: Summarize different strategies for interference reduction.


• Sectoring

Figure 48: Illustration of sectoring using either three sectors or six sectors.

− One approach for reducing interference is to employ cell sectoring courtesy of directional
antennas
− Basic idea is to divide up frequencies among different sectors
− Reduces the number of first tier interferers (a 60° sector has one strong interferer, a 120° sector
has two strong interferers)
− Often expressed as N × K
· Uplink S/I calculation

  S/I = R^{−n} / (2 D^{−n}) = Q^n / 2

· Downlink S/I calculation (worst case)

  S/I ≈ R^{−n} / (2(D − R)^{−n}) = (Q − 1)^n / 2
− Example of S/I calculations as a function of reuse w/ sectors
· Hex: N = 3, Q = 3, n = 2, 3 sectors, S/Iup = 6.5dB,S/Idown ≈ 3dB
· Hex: N = 3, Q = 3, n = 4, 3 sectors, S/Iup = 16dB,S/Idown ≈ 9dB
· Hex: N = 7, Q = 4.58, n = 4, 3 sectors, S/Iup = 23dB,S/Idown ≈ 19dB

− Issues with sectors
· Loss in trunking efficiency since the frequencies are divided among the sectors
· Sectors are not quite orthogonal in the presence of multipath interference
· Need to consider handoff between sectors
• Multiuser detection
− Refers to a broad class of signal processing techniques for mitigating effects of interference
− Typically use a model that includes structure of the interference, i.e. does not assume it is
Gaussian noise

  y[n] = √E_x h_0 s_0[n] + √E_x Σ_{k=1}^{I} h_k s_k[n] + v[n]

− Many different types of multiuser receivers


· Joint receivers - solve a maximum likelihood detection problem for {s_k}_{k=0}^{I}
· Successive canceling receivers - detect one or more users, subtract their detected con-
tribution, repeat
· Linear receivers - attempt to filter out the interference
− For multiuser receivers to work, typically some additional degrees of freedom are required
· Use code division multiple access. Each user spreads information across a distinct
signature, or spreading code
· Use multiple antennas. Generally can cancel M − 1 interferers with M antennas

2 Handoff

• Objective: Describe different handoff strategies in cellular networks.


• Users that move in cellular systems transition from one cell to the next
• Network needs to automatically transition users from one cell to the next without dropping their
current call
• Handoff (or handover) is the term for passing a user from one cell to the next
• Handoff is most often mobile assisted
− Mobile monitors the signal strength of the desired base station
− Monitor signal strength of pilots or beacons from neighboring base stations
− Informs desired base station of its intention to handoff
− Base station notifies neighboring base station, prepares to switch call
• Some hysteresis is used to avoid rapid handoffs back and forth
− Mobility may play a role in handoff decision process
• CDMA systems use soft handoff, which is a form of macro-diversity
