Chapter No.1


DIGITAL COMMUNICATION

LECTURE NO 1

10 February 2018, by Dr. Muhammad Adnan


TEXT BOOK:
DIGITAL COMMUNICATIONS: Fundamentals and Applications (2nd Edition)
by BERNARD SKLAR

REFERENCE BOOKS
• Modern Digital and Analog Communication Systems, 4th Edition, by B. P. Lathi and Zhi Ding
• An Introduction to Analog and Digital Communications, 2nd Edition, by Simon Haykin and Michael Moher
• Digital Communications, 5th Edition, by J. G. Proakis and Masoud Salehi
• Digital and Analog Communication Systems, 8th Edition, by Leon W. Couch
Analog Communication vs. Digital Communication
• Digital transmission offers data-processing options and other flexibilities not available with analog transmission, e.g., data compression, error correction, and equalization.
• During a finite interval of time, a DCS sends a waveform from a finite set of possible waveforms, in contrast to an analog communication system.
• In a DCS, the objective is not to reproduce the transmitted waveform but to determine, from a noise-perturbed signal, which waveform from the finite set was sent by the transmitter.

• Digital techniques lend themselves naturally to signal-processing functions that protect against jamming and interference.
• A DCS makes it possible to increase system capacity (through modulation and multiple-access techniques).
• A DCS helps to improve the quality of today's communication systems.
• Digital hardware implementation is flexible and permits the use of microprocessors, mini-processors, digital switching, and VLSI.
• The use of LSI and VLSI in the design of components and systems has resulted in lower cost.
• Digital multiplexing techniques (Time and Code Division Multiple Access) are easier to implement than analog techniques such as Frequency Division Multiple Access.
• A DCS can deal with different types of data: voice, video, and text.
• A DCS is capable of packet switching.
Disadvantages & Performance Criterion

Disadvantages
• Requires reliable synchronization
• Requires A/D conversion at a high rate
• Requires larger bandwidth

Performance Criterion
• Probability of error, or bit error rate (BER)
• Digital techniques need to distinguish between discrete symbols, allowing regeneration rather than amplification.
DIGITAL COMMUNICATION SYSTEM (DCS)
CLASSIFICATION OF SIGNALS

• Deterministic and Random Signals
• Periodic and Non-periodic Signals
• Analog and Discrete Signals
• Energy and Power Signals
• The Unit Impulse Function
DETERMINISTIC AND RANDOM SIGNALS

o A deterministic signal has no uncertainty with respect to its value at any time. It can be expressed in the form of an explicit expression, e.g., x(t) = cos(t).

o A random signal has some degree of uncertainty before the signal actually occurs.

o A random signal may exhibit certain regularities that can be described in terms of probabilities and statistical averages.
Periodic and Non-periodic Signals

A signal x(t) is called periodic in time if there exists a constant T0 > 0 such that

x(t) = x(t + T0)   for -∞ < t < ∞

where t denotes time and T0 is the period of x(t).
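As a quick numerical illustration of this definition (a NumPy sketch, not part of the original lecture), the condition x(t) = x(t + T0) can be checked for x(t) = cos(t), whose period is T0 = 2π:

```python
import numpy as np

# Check the periodicity condition x(t) = x(t + T0) for x(t) = cos(t),
# whose period is T0 = 2*pi.
T0 = 2 * np.pi
t = np.linspace(-10, 10, 1001)

x = np.cos(t)
x_shifted = np.cos(t + T0)

# The two waveforms agree (to floating-point precision) at every sample.
assert np.allclose(x, x_shifted)
```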
Analog and Discrete Signals

o An analog signal x(t) is a continuous function of time; that is, x(t) is uniquely defined for all t.

o A discrete signal x(kT) is one that exists only at discrete times; it is characterized by a sequence of numbers defined for each time kT, where k is an integer and T is a fixed time interval.
ENERGY AND POWER SIGNALS

o An energy signal has finite energy but zero average power.

o A power signal has finite average power and infinite energy.

o In general, periodic signals and random signals are classified as power signals, while non-periodic, deterministic signals are classified as energy signals.
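The distinction can be made concrete with a rough numerical sketch (NumPy assumed; not from the lecture): a unit pulse of width 1 s has finite energy and vanishing average power over a long window, while a unit-amplitude sinusoid has finite average power but energy that grows with the window:

```python
import numpy as np

# Energy signal (a 1-second unit pulse) vs. power signal (a unit cosine),
# both observed over a long window of T seconds.
dt = 1e-3
T = 1000.0
t = np.arange(-T / 2, T / 2, dt)

pulse = np.where(np.abs(t) < 0.5, 1.0, 0.0)   # energy signal
sine = np.cos(2 * np.pi * t)                  # power signal

E_pulse = np.sum(pulse**2) * dt   # finite energy, about 1 joule
P_pulse = E_pulse / T             # average power -> 0 as T grows

E_sine = np.sum(sine**2) * dt     # energy grows without bound with T
P_sine = E_sine / T               # finite average power, about 0.5 watt
```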
THE UNIT IMPULSE FUNCTION

The Dirac delta function δ(t), or impulse function, is an abstraction: an infinitely large-amplitude pulse with zero pulse width and unity weight (area under the pulse), concentrated at the point where its argument is zero.

∫ δ(t) dt = 1   (integral over -∞ < t < ∞)
δ(t) = 0   for t ≠ 0
δ(t) is unbounded at t = 0

Sifting or Sampling Property

∫ x(t) δ(t - t0) dt = x(t0)   (integral over -∞ < t < ∞)
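The sifting property can be visualized numerically (a NumPy sketch, not from the lecture) by approximating δ(t - t0) with a very narrow unit-area Gaussian: the integral then picks out the value x(t0):

```python
import numpy as np

# Approximate delta(t - t0) by a narrow unit-area Gaussian and verify
# that integrating x(t) against it returns x(t0) (the sifting property).
dt = 1e-4
t = np.arange(-5, 5, dt)
t0, sigma = 1.0, 0.01

x = np.cos(t)  # test signal
delta_approx = np.exp(-(t - t0)**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

sifted = np.sum(x * delta_approx) * dt  # approximately x(t0) = cos(1)
```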


SPECTRAL DENSITIES

The spectral density of a signal characterizes the distribution of the signal's energy or power in the frequency domain.

This concept is particularly important when considering filtering in communication systems, where the signal and noise must be evaluated at the filter output.

The energy spectral density (ESD) or the power spectral density (PSD) is used in the evaluation.
Energy Spectral Density (ESD)

Energy spectral density describes the signal energy per unit bandwidth, measured in joules/hertz.

Represented as ψx(f), it is the squared magnitude spectrum:

ψx(f) = |X(f)|²

According to Parseval's theorem, the energy of x(t) is:

Ex = ∫ x²(t) dt = ∫ |X(f)|² df   (integrals over -∞ < t < ∞ and -∞ < f < ∞)

Therefore:

Ex = ∫ ψx(f) df   (integral over -∞ < f < ∞)

The energy spectral density is symmetrical in frequency about the origin, so the total energy of the signal x(t) can also be expressed as:

Ex = 2 ∫ ψx(f) df   (integral over 0 ≤ f < ∞)
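Parseval's relation can be checked numerically with the DFT (a NumPy sketch, not part of the lecture): the energy of a finite-duration signal computed in the time domain equals the energy computed from its spectrum:

```python
import numpy as np

# Check Parseval's theorem numerically: time-domain energy equals
# frequency-domain energy for a truncated decaying exponential.
dt = 1e-3
t = np.arange(0, 1, dt)
x = np.exp(-5 * t)  # a finite-energy signal, truncated to [0, 1)

E_time = np.sum(x**2) * dt

X = np.fft.fft(x) * dt            # approximate continuous-time spectrum
df = 1 / (len(x) * dt)            # frequency resolution
E_freq = np.sum(np.abs(X)**2) * df
```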
Power Spectral Density (PSD)

The power spectral density (PSD) function Gx(f) of the periodic signal x(t) is a real, even, and nonnegative function of frequency that gives the distribution of the power of x(t) in the frequency domain. The PSD is represented as:

Gx(f) = Σ |Cn|² δ(f - n f0)   (sum over n from -∞ to ∞)

The average power of a periodic signal x(t) is:

Px = (1/T0) ∫ x²(t) dt = Σ |Cn|²   (integral from -T0/2 to T0/2; sum over n from -∞ to ∞)

Using the PSD, the average normalized power of a real-valued signal is:

Px = ∫ Gx(f) df = 2 ∫ Gx(f) df   (first integral over -∞ < f < ∞, second over 0 ≤ f < ∞)
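A small NumPy sketch (not from the lecture) illustrates the line spectrum: for x(t) = cos(2πt/T0) the Fourier-series coefficients are C±1 = 1/2, so Σ|Cn|² = 1/4 + 1/4 = 0.5, matching the average power computed directly in time:

```python
import numpy as np

# Fourier-series coefficients of x(t) = cos(2*pi*t/T0) and the average
# power computed two ways: in time, and from the |Cn|^2 line spectrum.
dt = 1e-4
T0 = 1.0
t = np.arange(0, T0, dt)
x = np.cos(2 * np.pi * t / T0)

# Cn = (1/T0) * integral over one period of x(t) e^{-j 2 pi n t / T0}
n = np.arange(-3, 4)
Cn = np.array([np.sum(x * np.exp(-1j * 2 * np.pi * k * t / T0)) * dt / T0
               for k in n])

Px_time = np.sum(x**2) * dt / T0   # average power in the time domain
Px_coeff = np.sum(np.abs(Cn)**2)   # power summed over the line spectrum
```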
Autocorrelation
Autocorrelation of an Energy Signal
Correlation is a matching process; autocorrelation refers to the matching of a
signal with a delayed version of itself.
The autocorrelation function of a real-valued energy signal x(t) is defined as:

Rx(τ) = ∫ x(t) x(t + τ) dt   for -∞ < τ < ∞   (integral over -∞ < t < ∞)

The autocorrelation function Rx(τ) provides a measure of how closely the signal matches a copy of itself as the copy is shifted τ units in time.
Rx(τ) is not a function of time; it is only a function of the time difference τ between the waveform and its shifted copy.
The autocorrelation function of a real-valued energy signal has the following properties:

1. Rx(τ) = Rx(-τ)             symmetrical about zero
2. Rx(τ) ≤ Rx(0) for all τ    maximum value occurs at the origin
3. Rx(τ) ↔ ψx(f)              autocorrelation and ESD form a Fourier transform pair, as designated by the double-headed arrow
4. Rx(0) = ∫ x²(t) dt         value at the origin is equal to the energy of the signal (integral over -∞ < t < ∞)
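Properties 1, 2, and 4 can be verified numerically for a rectangular pulse (a NumPy sketch, not part of the lecture):

```python
import numpy as np

# Autocorrelation of a unit rectangular pulse, checked against the listed
# properties: even symmetry, maximum at the origin, Rx(0) = signal energy.
dt = 1e-3
t = np.arange(-2, 2, dt)
x = np.where(np.abs(t) < 0.5, 1.0, 0.0)  # unit rectangular pulse

Rx = np.correlate(x, x, mode="full") * dt  # Rx(tau) on a grid of lags
mid = len(Rx) // 2                          # index of tau = 0

E = np.sum(x**2) * dt                       # energy of the pulse
```
The result is the familiar triangular autocorrelation of a rectangular pulse, peaking at τ = 0 with value equal to the pulse energy.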
Autocorrelation of a Power Signal
Autocorrelation function of a real-valued power signal x(t) is defined as:

Rx(τ) = lim (T→∞) (1/T) ∫ x(t) x(t + τ) dt   for -∞ < τ < ∞   (integral from -T/2 to T/2)

When the power signal x(t) is periodic with period T0, the autocorrelation function can be expressed as:

Rx(τ) = (1/T0) ∫ x(t) x(t + τ) dt   for -∞ < τ < ∞   (integral from -T0/2 to T0/2)

The autocorrelation function of a real-valued periodic signal has the following properties, similar to those of an energy signal:

1. Rx(τ) = Rx(-τ)             symmetrical about zero
2. Rx(τ) ≤ Rx(0) for all τ    maximum value occurs at the origin
3. Rx(τ) ↔ Gx(f)              autocorrelation and PSD form a Fourier transform pair
4. Rx(0) = (1/T0) ∫ x²(t) dt  value at the origin is equal to the average power of the signal (integral from -T0/2 to T0/2)
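For a unit-amplitude cosine the periodic autocorrelation works out to Rx(τ) = 0.5 cos(2πτ/T0), so Rx(0) equals the average power of 0.5 W. A NumPy sketch (not from the lecture) confirms this:

```python
import numpy as np

# Periodic autocorrelation of x(t) = cos(2*pi*t/T0), computed by averaging
# over one period; Rx(0) should equal the average power (0.5 W).
dt = 1e-4
T0 = 1.0
t = np.arange(0, T0, dt)
x = np.cos(2 * np.pi * t / T0)

def Rx(tau):
    # (1/T0) * integral over one period of x(t) x(t + tau)
    return np.sum(x * np.cos(2 * np.pi * (t + tau) / T0)) * dt / T0

P_avg = np.sum(x**2) * dt / T0  # average power of the cosine
```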
Random Signals
1. Random Variables
• All useful message signals appear random; that is, the receiver does not know, a priori, which of the possible waveforms has been sent.

• Let a random variable X(A) represent the functional relationship between a random event A and a real number.

• The (cumulative) distribution function FX(x) of the random variable X is given by

FX(x) = P(X ≤ x)   (1.24)

• Another useful function relating to the random variable X is the probability density function (pdf):

pX(x) = dFX(x) / dx   (1.25)
1.1 Ensemble Averages
• The first moment of a probability distribution of a random variable X is called the mean value mX, or expected value, of X:

mX = E{X} = ∫ x pX(x) dx   (integral over -∞ < x < ∞)

• The second moment of a probability distribution is the mean-square value of X:

E{X²} = ∫ x² pX(x) dx   (integral over -∞ < x < ∞)

• Central moments are the moments of the difference between X and mX; the second central moment is the variance of X:

Var(X) = E{(X - mX)²} = ∫ (x - mX)² pX(x) dx   (integral over -∞ < x < ∞)

• The variance is equal to the difference between the mean-square value and the square of the mean:

Var(X) = E{X²} - (E{X})²
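These moment relations can be checked by sampling (a NumPy sketch, not part of the lecture). For X uniform on [0, 1] the mean is 1/2, the mean-square value is 1/3, and the variance is 1/3 - 1/4 = 1/12:

```python
import numpy as np

# Sample-based check of the moment relations for X ~ Uniform(0, 1):
# E{X} = 1/2, E{X^2} = 1/3, Var(X) = E{X^2} - (E{X})^2 = 1/12.
rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 1_000_000)

mean = x.mean()             # first moment, E{X}
mean_sq = (x**2).mean()     # second moment, E{X^2}
var = mean_sq - mean**2     # variance via the mean-square relation
```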
2. Random Processes
• A random process X(A, t) can be viewed as a function of two
variables: an event A and time.
2.1 Statistical Averages of a Random
Process
• A random process whose distribution functions are continuous can be
described statistically with a probability density function (pdf).

• A partial description consisting of the mean and autocorrelation function


are often adequate for the needs of communication systems.

• Mean of the random process X(t):

E{X(tk)} = ∫ x pX(x) dx = mX(tk)   (1.30)   (integral over -∞ < x < ∞)

• Autocorrelation function of the random process X(t):

Rx(t1, t2) = E{X(t1) X(t2)} = ∫∫ xt1 xt2 p(xt1, xt2) dxt1 dxt2   (1.31)   (both integrals over -∞ to ∞)
2.2 Stationarity
• A random process X(t) is said to be stationary in the strict sense if
none of its statistics are affected by a shift in the time origin.

• A random process is said to be wide-sense stationary (WSS) if two


of its statistics, its mean and autocorrelation function, do not vary
with a shift in the time origin.

E{X(t)} = mx= a constant (1.32)

Rx(t1,t2) = Rx (t1 – t2) (1.33)


2.3 Autocorrelation of a Wide-Sense
Stationary Random Process

• For a wide-sense stationary process, the autocorrelation


function is only a function of the time difference τ = t1 – t2;
Rx(τ) = E{X(t) X(t + τ)} for - ∞ < τ < ∞ (1.34)
• Properties of the autocorrelation function of a real-valued wide-sense stationary process:
1. Rx(τ) = Rx(-τ)             symmetrical about zero
2. Rx(τ) ≤ Rx(0) for all τ    maximum value occurs at the origin
3. Rx(τ) ↔ Gx(f)              autocorrelation and PSD form a Fourier transform pair
4. Rx(0) = E{X²(t)}           value at the origin is equal to the average power of the signal
3. Time Averaging and Ergodicity

• When a random process belongs to a special class, known as an


ergodic process, its time averages equal its ensemble averages.

• The statistical properties of such processes can be determined by


time averaging over a single sample function of the process.

• A random process is ergodic in the mean if


mX = lim (T→∞) (1/T) ∫ x(t) dt   (1.35a)   (integral from -T/2 to T/2)

• It is ergodic in the autocorrelation function if

Rx(τ) = lim (T→∞) (1/T) ∫ x(t) x(t + τ) dt   (1.35b)   (integral from -T/2 to T/2)
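Ergodicity in the mean can be illustrated with the classic random-phase sinusoid X(t) = cos(2πt + θ), θ uniform on [0, 2π): the time average over one long sample function and the ensemble average at a fixed time both come out to zero. A NumPy sketch (not part of the lecture):

```python
import numpy as np

# Time average of one sample function vs. ensemble average at a fixed
# time, for the random-phase sinusoid X(t) = cos(2*pi*t + theta).
rng = np.random.default_rng(1)
dt = 1e-3
t = np.arange(0, 100, dt)  # long observation window (100 periods)

theta = rng.uniform(0, 2 * np.pi)
time_avg = np.mean(np.cos(2 * np.pi * t + theta))  # one realization, averaged in time

# ensemble average at a fixed time t = 0 over many realizations
thetas = rng.uniform(0, 2 * np.pi, 100_000)
ensemble_avg = np.mean(np.cos(thetas))
```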
4. Power Spectral Density &
Autocorrelation

• A random process X(t) can generally be classified as a power


signal having a power spectral density (PSD) GX(f )

• Principal features of PSD functions:
1. Gx(f) ≥ 0              always real valued and nonnegative
2. Gx(f) = Gx(-f)         even in f, for X(t) real valued
3. Gx(f) ↔ Rx(τ)          PSD and autocorrelation form a Fourier transform pair
4. Px = ∫ Gx(f) df        relationship between average normalized power and PSD (integral over -∞ < f < ∞)
5. Noise in Communication Systems
• The term noise refers to unwanted electrical signals that are
always present in electrical systems
• Can describe thermal noise as a zero-mean Gaussian random
process.
• A Gaussian process n(t) is a random function whose amplitude at any arbitrary time t is statistically characterized by the Gaussian probability density function:

p(n) = (1/(σ√(2π))) exp[-(1/2)(n/σ)²]   (1.40)

where σ² is the variance of n.
• The normalized or standardized Gaussian density function of a
zero-mean process is obtained by assuming unit variance.
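A quick sampling check of the zero-mean, unit-variance model (a NumPy sketch, not part of the lecture):

```python
import numpy as np

# Draw samples of a zero-mean Gaussian process with sigma = 1 (the
# normalized density) and check the empirical mean and variance.
rng = np.random.default_rng(2)
n = rng.normal(loc=0.0, scale=1.0, size=1_000_000)

emp_mean = n.mean()  # should be close to 0
emp_var = n.var()    # should be close to 1
```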
5.1 White Noise
• The primary spectral characteristic of thermal noise is that its
power spectral density is the same for all frequencies of
interest in most communication systems
• The power spectral density of white noise is flat:

Gn(f) = N0/2 watts/hertz   (1.42)

• The autocorrelation function of white noise is:

Rn(τ) = F⁻¹{Gn(f)} = (N0/2) δ(τ)   (1.43)

• The average power Pn of white noise is infinite:

Pn = ∫ (N0/2) df = ∞   (1.44)   (integral over -∞ < f < ∞)
• The effect on the detection process of a channel with additive
white Gaussian noise (AWGN) is that the noise affects each
transmitted symbol independently.

• Such a channel is called a memoryless channel.

• The term additive means that the noise is simply superimposed or added to the signal.
Signal Transmission through
Linear Systems
• A system can be characterized equally well in the time domain or the frequency domain; techniques will be developed in both domains.

• The system is assumed to be linear and time invariant.

• It is also assumed that there is no stored energy in the


system at the time the input is applied
1. Impulse Response
• The linear time-invariant system or network is characterized in the time domain by its impulse response h(t), the response to a unit impulse δ(t):

h(t) = y(t) when x(t) = δ(t)   (1.45)

• The response of the network to an arbitrary input signal x(t) is found by the convolution of x(t) with h(t):

y(t) = x(t) * h(t) = ∫ x(τ) h(t - τ) dτ   (1.46)   (integral over -∞ < τ < ∞)

• The system is assumed to be causal, which means that there can be no output prior to the time t = 0 when the input is applied.

• The convolution integral can then be expressed as:

y(t) = ∫ x(τ) h(t - τ) dτ   (1.47)   (integral over -∞ < τ ≤ t)
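A discrete-time sketch of the convolution (NumPy assumed; not part of the lecture) for a causal RC-type impulse response h(t) = (1/RC) e^(-t/RC) u(t): convolving with a unit impulse returns h(t) itself, exactly as eq. (1.45) states:

```python
import numpy as np

# Discrete approximation of y(t) = x(t) * h(t). Feeding an approximate
# delta(t) through the system should reproduce the impulse response h(t).
dt = 1e-3
RC = 0.1
t = np.arange(0, 1, dt)
h = np.exp(-t / RC) / RC  # causal impulse response, zero for t < 0

x = np.zeros_like(t)
x[0] = 1 / dt             # discrete approximation of delta(t): area = 1

y = np.convolve(x, h)[: len(t)] * dt  # y(t) = x(t) * h(t)
```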
2. Frequency Transfer Function
• The frequency-domain output signal Y(f) is obtained by taking the Fourier transform:

Y(f) = H(f) X(f)   (1.48)

• The frequency transfer function, or frequency response, is defined as:

H(f) = Y(f) / X(f)   (1.49)

H(f) = |H(f)| e^(jθ(f))   (1.50)

• The phase response is defined as:

θ(f) = tan⁻¹ [Im{H(f)} / Re{H(f)}]   (1.51)

2.1. Random Processes and Linear Systems

• If a random process forms the input to a time-invariant linear system, the output will also be a random process.

• The input power spectral density Gx(f) and the output power spectral density Gy(f) are related as:

Gy(f) = Gx(f) |H(f)|²   (1.53)
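Eq. (1.53) can be evaluated numerically (a NumPy sketch, not from the lecture) for white input noise, Gx(f) = N0/2, passing through a first-order low-pass filter H(f) = 1/(1 + j2πfRC): the output PSD keeps the input level at DC and rolls off at high frequency:

```python
import numpy as np

# Output PSD Gy(f) = Gx(f)|H(f)|^2 for a flat (white) input PSD through
# a first-order RC low-pass filter.
N0 = 2.0
RC = 1e-3
f = np.linspace(-5000, 5000, 2001)  # frequency grid, includes f = 0

Gx = np.full_like(f, N0 / 2)                 # flat input PSD
H = 1 / (1 + 1j * 2 * np.pi * f * RC)        # RC low-pass response
Gy = Gx * np.abs(H)**2                       # eq. (1.53)
```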


3. Distortionless Transmission
What is the required behavior of an ideal
transmission line?
• The output signal from an ideal transmission line may have some
time delay and different amplitude than the input
• It must have no distortion—it must have the same shape as the
input.
• For ideal distortionless transmission:

Output signal in time domain:       y(t) = K x(t - t0)   (1.54)
Output signal in frequency domain:  Y(f) = K X(f) e^(-j2πf t0)   (1.55)
System transfer function:           H(f) = K e^(-j2πf t0)   (1.56)
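A numerical sketch of eqs. (1.54)-(1.56) (NumPy assumed; not part of the lecture): applying H(f) = K e^(-j2πf t0) in the frequency domain to a periodic test tone produces exactly K x(t - t0), which for the DFT is a circular shift:

```python
import numpy as np

# Apply the distortionless transfer function H(f) = K exp(-j2*pi*f*t0)
# in the frequency domain and verify the output is K * x(t - t0).
K, t0 = 2.0, 0.1
dt = 1e-3
t = np.arange(0, 1, dt)
x = np.sin(2 * np.pi * 3 * t)  # 3 full periods, so circular shift = true shift

f = np.fft.fftfreq(len(t), dt)
H = K * np.exp(-1j * 2 * np.pi * f * t0)
y = np.fft.ifft(np.fft.fft(x) * H).real

shift = int(round(t0 / dt))          # delay expressed in samples
expected = K * np.roll(x, shift)     # K x(t - t0), circularly shifted
```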

• The overall system response must have a constant magnitude


response
• The phase shift must be linear with frequency
• All of the sig al’s frequency components must also arrive with
identical time delay in order to add up correctly
• Time delay t0 is related to the phase shift  and the radian
frequency  = 2f by:
t0(seconds)=(radians)/2f(radians/seconds) (1.57a)

• Another characteristic often used to measure delay distortion


of a signal is called envelope delay or group delay:
(f) = -1/2  (d(f) / df) (1.57b)
3.1. Ideal Filters
• The transfer function of an ideal low-pass filter with bandwidth Wf = fu hertz can be written as:

H(f) = |H(f)| e^(-jθ(f))   (1.58)

where

|H(f)| = 1 for |f| < fu, and 0 for |f| ≥ fu   (1.59)

e^(-jθ(f)) = e^(-j2πf t0)   (1.60)

Figure 1.11(b): Ideal low-pass filter


• The impulse response of the ideal low-pass filter is:

h(t) = F⁻¹{H(f)}
     = ∫ H(f) e^(j2πf t) df   (1.61)   (integral over -∞ < f < ∞)
     = ∫ e^(-j2πf t0) e^(j2πf t) df   (integral from -fu to fu)
     = ∫ e^(j2πf (t - t0)) df   (integral from -fu to fu)
     = 2fu · sin[2πfu(t - t0)] / [2πfu(t - t0)]
     = 2fu sinc[2fu(t - t0)]   (1.62)
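The closed form (1.62) can be cross-checked by integrating the brick-wall spectrum numerically (a NumPy sketch, not from the lecture; `np.sinc(x)` is sin(πx)/(πx), matching the sinc convention above):

```python
import numpy as np

# Numerically invert the ideal low-pass spectrum and compare with the
# closed-form impulse response h(t) = 2*fu*sinc(2*fu*(t - t0)).
fu, t0 = 10.0, 0.05
df = 1e-3
f = np.arange(-fu, fu, df) + df / 2  # midpoint-rule frequency grid
t = np.linspace(-0.5, 0.5, 101)

# numerical inverse Fourier transform of H(f) = exp(-j2*pi*f*t0), |f| < fu
h_num = (np.exp(1j * 2 * np.pi * np.outer(t - t0, f)).sum(axis=1) * df).real

h_closed = 2 * fu * np.sinc(2 * fu * (t - t0))  # eq. (1.62)
```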
• The transfer functions of the ideal band-pass filter (Figure 1.11(a)) and the ideal high-pass filter (Figure 1.11(c)) are defined in the same way.
3.2. Realizable Filters
• The simplest example of a realizable low-pass filter is an RC filter:

H(f) = 1 / (1 + j2πf RC) = e^(-jθ(f)) / √(1 + (2πf RC)²)   (1.63)

Figure 1.12
• The phase characteristic of the RC filter is shown in Figure 1.13.
• There are several useful approximations to the ideal low-pass filter characteristic; one of these is the Butterworth filter:

|Hn(f)| = 1 / √(1 + (f/fu)^(2n)),   n ≥ 1   (1.65)

• Butterworth filters are popular because they are the best approximation to the ideal, in the sense of maximal flatness in the filter passband.
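Two properties of eq. (1.65) are easy to confirm numerically (a NumPy sketch, not part of the lecture): |Hn(fu)| = 1/√2 (the half-power point) for every order n, and higher n gives a sharper roll-off toward the ideal brick wall:

```python
import numpy as np

# Butterworth magnitude response |Hn(f)| = 1/sqrt(1 + (f/fu)^(2n)).
fu = 1000.0

def butter_mag(f, n):
    return 1.0 / np.sqrt(1.0 + (f / fu)**(2 * n))

mag2_at_cutoff = butter_mag(fu, 2)       # order 2, at f = fu
mag8_at_cutoff = butter_mag(fu, 8)       # order 8, at f = fu
mag2_stop = butter_mag(2 * fu, 2)        # order 2, one octave above cutoff
mag8_stop = butter_mag(2 * fu, 8)        # order 8, one octave above cutoff
```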
4. Bandwidth Of Digital Data
4.1 Baseband versus Bandpass
• An easy way to translate the spectrum of a low-pass or baseband signal x(t) to a higher frequency is to multiply, or heterodyne, the baseband signal with a carrier wave cos(2πfc t).
• xc(t) is called a double-sideband (DSB) modulated signal:

xc(t) = x(t) cos(2πfc t)   (1.70)

• From the frequency-shifting theorem:

Xc(f) = (1/2) [X(f - fc) + X(f + fc)]   (1.71)

• Generally, the carrier-wave frequency is much higher than the bandwidth of the baseband signal: fc >> fm, and therefore WDSB = 2fm.
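The frequency-shifting theorem can be seen directly in a DFT spectrum (a NumPy sketch, not from the lecture): heterodyning a baseband tone at fm with a carrier at fc produces spectral lines at fc - fm and fc + fm, i.e., a DSB bandwidth of 2fm:

```python
import numpy as np

# Heterodyne a baseband tone at fm with a carrier at fc and locate the
# resulting spectral lines at fc - fm and fc + fm.
dt = 1e-4
t = np.arange(0, 1, dt)
fm, fc = 5.0, 100.0

x = np.cos(2 * np.pi * fm * t)         # baseband signal
xc = x * np.cos(2 * np.pi * fc * t)    # DSB modulated signal, eq. (1.70)

X = np.abs(np.fft.rfft(xc)) / len(t)   # normalized magnitude spectrum
freqs = np.fft.rfftfreq(len(t), dt)

peaks = freqs[X > 0.1]                 # spectral lines above threshold
```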
4.2 Bandwidth
• Theorems of communication and information theory are based on the assumption of strictly bandlimited channels.

• The mathematical description of a real signal does not permit the signal to be both strictly duration limited and strictly bandlimited.
4.2 Bandwidth
• All bandwidth criteria have in common the attempt to specify a measure of the width, W, of a nonnegative real-valued spectral density defined for all frequencies |f| < ∞.

• The single-sided power spectral density for a single heterodyned pulse xc(t) takes the analytical form:

Gx(f) = T [sinc (f - fc)T]²   (1.73)

where fc is the carrier frequency and T is the pulse duration.
Different Bandwidth Criteria

(a) Half-power bandwidth
(b) Equivalent rectangular or noise-equivalent bandwidth
(c) Null-to-null bandwidth
(d) Fractional power containment bandwidth
(e) Bounded power spectral density
(f) Absolute bandwidth
