LECTURE NOTES
ON
SIGNALS AND SYSTEMS
(17CA04303)
EDITED BY
MR. R. RAVI KUMAR M.E.,(PH.D)
ASSOCIATE PROFESSOR
Course objectives:
To study signals and systems.
To analyse signals and systems (continuous and discrete) using time-domain and frequency-
domain methods.
To understand the stability of systems through the concept of ROC.
To know various transform techniques in the analysis of signals and
systems.
Learning Outcomes:
Analyze the spectral characteristics of continuous-time periodic and aperiodic signals using
Fourier analysis
Classify systems based on their properties & determine the response of LSI
system using convolution
Apply the Laplace transform and Z- transform for analysis of continuous-
time and discrete- time signals and systems
UNIT I
INTRODUCTION TO SIGNALS & SYSTEMS: Definition and classification of Signal and
Systems (Continuous time and Discrete-time), Elementary signals such as Dirac delta, unit step,
ramp, sinusoidal and exponential and operations on signals. Analogy between vectors and
signals, Orthogonality, Mean Square error
FOURIER SERIES: Trigonometric & Exponential, concept of discrete spectrum.
UNIT II
FOURIER TRANSFORM:
CONTINUOUS TIME FOURIER TRANSFORM: Definition, Computation and properties of
Fourier Transform for different types of signals. Statement and proof of sampling theorem of
low-pass signals.
DISCRETE TIME FOURIER TRANSFORM: Definition, Computation and properties of
Fourier Transform for different types of signals.
UNIT III
SIGNAL TRANSMISSION THROUGH LINEAR SYSTEMS: Linear system, impulse
response, Response of a linear system, linear time-invariant (LTI) system, linear time variant
(LTV) system, Transfer function of a LTI system. Filter characteristics of linear systems.
Distortionless transmission through a system, Signal bandwidth, system bandwidth, Ideal LPF,
HPF and BPF characteristics, Causality and Paley-Wiener criterion for physical realization,
Relationship between bandwidth and rise time. Energy and Power Spectral Densities
UNIT IV
LAPLACE TRANSFORM: Definition, ROC-Properties, Inverse Laplace transforms-the S-
plane and BIBO stability-Transfer functions-System Response to standard signals-Solution of
differential equations with initial conditions.
UNIT V
The Z–TRANSFORM: Derivation and definition-ROC-Properties
Z-TRANSFORM PROPERTIES: Linearity, time shifting, change of scale, Z-domain
differentiation, differencing, accumulation, convolution in discrete time, initial and final value
theorems-Poles and Zeros in Z –plane, The inverse Z-Transform
SYSTEM ANALYSIS: Transfer function-BIBO stability-System Response to standard signals-
Solution of difference equations with initial conditions.
TEXT BOOKS:
1. B. P. Lathi, “Linear Systems and Signals”, 2nd Edition, Oxford University Press.
2. A. V. Oppenheim, A. S. Willsky and S. H. Nawab, “Signals and Systems”, 2nd Edition, Pearson.
3. A. Ramakrishna Rao, “Signals and Systems”, TMH, 2008.
REFERENCES:
1. Simon Haykin and Barry Van Veen, “Signals & Systems”, 2nd Edition, Wiley.
2. B. P. Lathi, “Signals, Systems & Communications”, BS Publications, 2009.
3. Michel J. Robert, “Fundamentals of Signals and Systems”, MGH International Edition, 2008.
4. C. L. Philips, J. M. Parr and Eve A. Riskin, “Signals, Systems and Transforms”, 3rd Edition,
Pearson Education.
UNIT-I
SIGNALS & SYSTEMS
Signal: A signal is a function of one or more independent variables (such as time) that conveys
information about a physical phenomenon.
Example: voice signal, video signal, signals on telephone wires, EEG, ECG etc.
System: A system is a device or combination of devices that operates on signals and produces a
corresponding response. The input to a system is called the excitation and the output is called
the response. Alternatively, a system is a combination of sub-units which interact with each
other to achieve a common objective.
For one or more inputs, the system can have one or more outputs.
Impulse Function
The impulse function is denoted by δ(t) and is defined as
δ(t) = { ∞, t = 0 ; 0, t ≠ 0 }
with ∫_{−∞}^{∞} δ(t) dt = 1
Ramp Signal
Parabolic Signal
Signum Function
sgn(t) = 2u(t) – 1
Exponential Signal
Rectangular Signal
Triangular Signal
Sinusoidal Signal
where T0 = 2π/ω0
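The elementary signals above can also be generated numerically. The following Python sketch (the time grid, the decay constant a = 2 and ω0 = 2π are illustrative assumptions, not values from these notes) evaluates the unit step, ramp, signum, exponential and sinusoidal signals on a common grid.

    import numpy as np

    t = np.linspace(-5, 5, 1001)            # illustrative time grid

    u    = np.where(t >= 0, 1.0, 0.0)       # unit step u(t)
    r    = t * u                            # ramp r(t) = t u(t)
    sgn  = 2*u - 1                          # signum, using sgn(t) = 2u(t) - 1
    x_e  = np.exp(-2*t) * u                 # decaying exponential e^(-2t) u(t), a = 2 assumed
    w0   = 2*np.pi                          # assumed omega_0, so T0 = 2*pi/w0 = 1 s
    x_s  = np.sin(w0*t)                     # sinusoid with period T0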
Classification of Signals:
Let x(t) = t². Then x(−t) = (−t)² = t² = x(t).
∴ t² is an even function.
Example 2: As shown in the following diagram, the rectangular function satisfies x(t) = x(−t),
so it is also an even function.
where
A signal is said to be periodic if it satisfies the condition x(t) = x(t + T) or x(n) = x(n + N).
Where
The above signal will repeat for every time interval T0, hence it is periodic with period T0.
NOTE: A signal cannot be both an energy signal and a power signal simultaneously. Also, a
signal may be neither an energy signal nor a power signal.
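As a rough numerical illustration of this note, the energy E = ∫|x(t)|² dt and the power P = lim(1/2T)∫|x(t)|² dt can be approximated on a finite grid. The window length and the two test signals below are assumptions made only for this Python sketch: e^(−|t|) comes out with finite energy and essentially zero power, while cos(2πt) has finite power but energy that grows with the window.

    import numpy as np

    T  = 50.0                                 # half-length of the observation window (assumed)
    t  = np.linspace(-T, T, 200001)

    x_energy = np.exp(-np.abs(t))             # e^{-|t|}: an energy signal
    x_power  = np.cos(2*np.pi*t)              # cos(2*pi*t): a power signal

    E = np.trapz(x_energy**2, t)              # approximates the total energy, about 1
    P = np.trapz(x_power**2, t) / (2*T)       # approximates the average power, about 0.5
    print(E, P)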
1. Amplitude
2. Time
Amplitude Scaling
Addition
Addition of two signals is nothing but addition of their corresponding amplitudes. This
can be best explained by using the following example:
Subtraction
Multiplication
Time Shifting
Time Scaling
x(At) is the time-scaled version of the signal x(t), where A is always positive.
Note: u(at) = u(t); time scaling is not applicable to the unit step function.
Time Reversal
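A compact way to see these amplitude and time operations is to evaluate a signal at transformed arguments. The Python sketch below uses an assumed triangular pulse x(t) purely for illustration.

    import numpy as np

    def x(t):                            # assumed test signal: unit triangular pulse
        return np.maximum(1.0 - np.abs(t), 0.0)

    t = np.linspace(-5, 5, 1001)

    y_amp   = 3 * x(t)                   # amplitude scaling: 3 x(t)
    y_add   = x(t) + np.sin(t)           # addition of two signals
    y_shift = x(t - 2)                   # time shifting: x(t - 2) is x(t) delayed by 2
    y_comp  = x(2 * t)                   # time scaling: x(2t) is compressed by a factor of 2
    y_rev   = x(-t)                      # time reversal: x(-t) is the mirror image of x(t)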
Classification of Systems:
Example:
y(t) = x2(t)
Solution:
which is not equal to a1 y1(t) + a2 y2(t). Hence the system is nonlinear.
A system is said to be time variant if its input and output characteristics vary with time.
Otherwise, the system is considered as time invariant.
The condition for a time invariant system is: y(n, t) = y(n − t)
The condition for a time variant system is: y(n, t) ≠ y(n − t)
Example:
y(n) = x(-n)
If a system is both linear and time variant, then it is called a linear time variant (LTV) system.
If a system is both linear and time invariant, then it is called a linear time invariant (LTI)
system.
For the present value t = 0, the system output is y(0) = 2x(0). Here, the output depends only
upon the present input. Hence the system is memoryless or static.
For present value t=0, the system output is y(0) = 2x(0) + 3x(-3).
Here x(−3) is a past value of the input, so the system requires memory to produce this
output. Hence, the system is a dynamic system.
A system is said to be causal if its output depends upon present and past inputs, and does not
depend upon future input.
For non causal system, the output depends upon future inputs also.
For present value t=1, the system output is y(1) = 2x(1) + 3x(-2).
Here, the system output only depends upon present and past inputs. Hence, the system is causal.
For present value t=1, the system output is y(1) = 2x(1) + 3x(-2) + 6x(4) Here, the system output
depends upon future input. Hence the system is non-causal system.
A system is said to be invertible if the input of the system appears at the output.
∴ Y(S) = X(S)
→ y(t) = x(t)
Hence, the system is invertible.
A system is said to be stable only when the output is bounded for every bounded input. If the
output is unbounded for some bounded input, then the system is said to be unstable.
Let the input be u(t) (a bounded unit-step input); then the output y(t) = u²(t) = u(t) is a bounded
output. Hence the system is stable.
Let the input be u(t) (a bounded unit-step input); then the output y(t) = ∫u(t)dt is a ramp signal,
which is unbounded because the amplitude of the ramp is not finite: it goes to infinity as
t → ∞. Hence the system is unstable.
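The two stability examples above can be checked numerically. The finite simulation horizon in the sketch below is an assumption, and the growing maximum of the integrator output only suggests (rather than proves) unboundedness.

    import numpy as np

    t  = np.linspace(0, 100, 10001)
    dt = t[1] - t[0]
    u  = np.ones_like(t)                      # bounded input u(t) = 1 for t >= 0

    y_sq  = u**2                              # y(t) = u^2(t): stays at 1, bounded
    y_int = np.cumsum(u) * dt                 # y(t) = integral of u: a ramp, keeps growing

    print(y_sq.max(), y_int.max())            # about 1.0 versus about 100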
Vector
A vector has magnitude and direction. The vector itself is denoted by bold face type and its
magnitude by light face type.
Example: V is a vector with magnitude V. Consider two vectors V1 and V2 as shown in the
following diagram. Let the component of V1 along V2 be given by C12V2. The component of
the vector V1 along the vector V2 can be obtained by taking a perpendicular from the end of V1
to the vector V2, as shown in the diagram:
V1= C12V2 + Ve
But this is not the only way of expressing vector V1 in terms of V2. The alternate possibilities
are:
V1 = C1V2 + Ve1
V1 = C2V2 + Ve2
The error signal is minimum when the component value C12 is chosen properly. If C12 = 0, then
the two signals are said to be orthogonal.
V1 · V2 = |V1| |V2| cos θ, where θ is the angle between V1 and V2
V1 · V2 = V2 · V1
Signal
The concept of orthogonality can be applied to signals. Let us consider two signals f1(t) and f2(t).
Similar to vectors, you can approximate f1(t) in terms of f2(t) as
One possible way of minimizing the error is to integrate the error over the interval t1 to t2.
However, this step alone does not reduce the error to an appreciable extent, because positive
and negative errors can cancel. This can be corrected by taking the square of the error function.
where ε is the mean square value of the error signal. To find the value of C12 which minimizes
the error, set dε/dC12 = 0.
The derivative of the terms which do not contain C12 is zero.
A complete set of orthogonal vectors is referred to as orthogonal vector space. Consider a three
dimensional vector space as shown below:
Consider a vector A at a point (X1, Y1, Z1). Consider three unit vectors (VX, VY, VZ) in the
directions of the X, Y and Z axes respectively. Since these unit vectors are mutually orthogonal,
they satisfy
The vector A can be represented in terms of its components and unit vectors as
Any vectors in this three dimensional space can be represented in terms of these three unit
vectors only.
If you consider an n-dimensional space, then any vector A in that space can be represented in
terms of n mutually orthogonal unit vectors. The component of A along any unit vector VG is
A · VG ...............(3)
Substitute equation 2 in equation 3.
Let us consider a set of n mutually orthogonal functions x1(t), x2(t), ..., xn(t) over the
interval t1 to t2. As these functions are orthogonal to each other, any two signals xj(t), xk(t) have
to satisfy the orthogonality condition, i.e.
Consider a function f(t); it can be approximated in this orthogonal signal space by adding the
components along the mutually orthogonal signals, i.e.
The component which minimizes the mean square error can be found by
All terms that do not contain Ck are zero, i.e., in the summation only the r = k term remains and
all other terms are zero.
The average of the square of the error function fe(t) is called the mean square error. It is denoted
by ε (epsilon).
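As a numerical illustration of approximating a function over an orthogonal set and of the mean square error, the Python sketch below computes each coefficient as Ck = ∫ f(t) xk(t) dt / ∫ xk²(t) dt. The square-wave target, the interval (0, 2π) and the sine basis {sin kt} are assumptions of this sketch; the error shrinks as more terms of the set are included.

    import numpy as np

    t = np.linspace(0, 2*np.pi, 20001)
    f = np.sign(np.sin(t))                            # assumed target: a square wave

    approx = np.zeros_like(t)
    for k in range(1, 8):                             # orthogonal set {sin(kt)} on (0, 2*pi)
        xk = np.sin(k*t)
        Ck = np.trapz(f*xk, t) / np.trapz(xk*xk, t)   # component of f along xk
        approx += Ck * xk

    mse = np.trapz((f - approx)**2, t) / (2*np.pi)    # mean square error over the interval
    print(mse)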
If this holds for k = 1, 2, ..., then f(t) is said to be orthogonal to each and every function of the orthogonal set.
This set is incomplete without f(t). It becomes closed and complete set when f(t) is included.
f(t) can be approximated with this orthogonal set by adding the components along mutually
orthogonal signals i.e.
If f1(t) and f2(t) are two complex functions, then f1(t) can be expressed in terms of f2(t) as
Fourier series:
To represent any periodic signal x(t), Fourier developed an expression called the Fourier
series. It expresses the signal as an infinite sum of sines and cosines, or of complex exponentials.
The Fourier series uses the orthogonality condition.
Jean Baptiste Joseph Fourier, a French mathematician and physicist, was born in
Auxerre, France. He initiated the study of Fourier series, Fourier transforms and their
applications to problems of heat transfer and vibrations. The Fourier series, the Fourier
transform and Fourier's law are named in his honour.
We know that
by Euler's formula,
Hence in equation 2, the integral is zero for all values of k except at k = n. Put k = n in
equation 2.
Replace n by k
Linearity Property
sin nω0t and sin mω0t are orthogonal over the interval (t0, t0 + 2π/ω0). So {sin ω0t, sin 2ω0t, ...}
forms an orthogonal set. This set is not complete without {cos nω0t}, because this cosine set is
also orthogonal to the sine set. So to complete the set we must include both cosine and sine terms.
Now the complete orthogonal set contains all cosine and sine terms, i.e. {sin nω0t, cos nω0t} where
n = 0, 1, 2...
Equation 1 represents exponential Fourier series representation of a signal f(t) over the interval
(t0, t0+T). The Fourier coefficient is given as
Consider a periodic signal x(t), the TFS & EFS representations are given below respectively
Similarly,
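Before the worked problems, here is a rough numerical check of the exponential Fourier series coefficient Fn = (1/T) ∫_T x(t) e^(−jnω0t) dt. The square-wave test signal and the period T = 2π are assumptions made only for this Python sketch.

    import numpy as np

    T  = 2*np.pi                                   # assumed period
    w0 = 2*np.pi / T
    t  = np.linspace(0, T, 20001)
    x  = np.sign(np.sin(t))                        # assumed periodic signal: square wave

    def Fn(n):                                     # exponential Fourier coefficient
        return np.trapz(x * np.exp(-1j*n*w0*t), t) / T

    print([np.round(Fn(n), 3) for n in range(-3, 4)])   # nonzero only for odd n, about -2j/(n*pi)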
Problems
1. A continuous-time signal x ( t ) is shown in the following figure. Sketch and label each
of the following signals.
Sol:
2. Determine whether the following signals are energy signals, power signals, or
neither.
And by using
we obtain
Sol:
UNIT – II
CONTINUOUS TIME FOURIER TRANSFORM
The main drawback of the Fourier series is that it is applicable only to periodic signals. Many
naturally occurring signals are nonperiodic (aperiodic) and cannot be represented using a
Fourier series. To overcome this shortcoming, Fourier developed a mathematical model to
transform signals between the time (or spatial) domain and the frequency domain and vice
versa, which is called the 'Fourier transform'.
Fourier transform has many applications in physics and engineering such as analysis of
LTI systems, RADAR, astronomy, signal processing etc.
In the limit as T → ∞, Δf approaches the differential df, kΔf becomes a continuous variable f,
and the summation becomes an integration.
FT of GATE Function
FT of Impulse Function:
FT of Exponentials:
FT of Signum Function :
Linearity Property:
Figure 3:
Let gs(t) be the sampled signal. Its Fourier Transform Gs(ω) is given by
Aliasing:
Aliasing is a phenomenon in which the high-frequency components of the sampled signal
overlap and interfere with each other because of inadequate sampling (ωs < 2ωm).
Aliasing leads to distortion in the recovered signal. This is the reason why the sampling
frequency should be at least twice the bandwidth of the signal.
Oversampling:
In practice, signals are oversampled, i.e. fs is chosen significantly higher than the Nyquist rate,
to avoid aliasing.
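A small numerical demonstration of aliasing (the 7 Hz tone and the 10 Hz sampling rate are assumptions of this sketch): sampling below the Nyquist rate makes a 7 Hz cosine indistinguishable from a 3 Hz cosine.

    import numpy as np

    f_sig = 7.0                                  # tone frequency in Hz (assumed)
    fs    = 10.0                                 # sampling rate below the Nyquist rate 2*f_sig
    n     = np.arange(20)

    samples = np.cos(2*np.pi*f_sig*n/fs)         # samples of the 7 Hz tone
    alias   = np.cos(2*np.pi*(fs - f_sig)*n/fs)  # samples of a 3 Hz tone

    print(np.allclose(samples, alias))           # True: the two tones give identical samples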
Problems
1. Find the Fourier transform of the rectangular pulse signal x(t) defined by
Hence we obtain
The following figure shows the Fourier transform of the given signal x(t)
Hence, we get
Fig: (a) Signal x(t) (b) Fourier transform X(w) of x(t)
Thus, the Fourier transform of a unit impulse train is also a similar impulse train. The following
figure shows the Fourier transform of a unit impulse train
We know that
Note that sgn(t) is an odd function, and therefore its Fourier transform is a pure imaginary
function of w
x[n] = (1/2π) ∫_{2π} X(e^jω) e^{jωn} dω -----------------(1)
Where
X(e^jω) = Σ_{n=−∞}^{∞} x[n] e^{−jωn} ------------------(2)
Equations (1) and (2) give the Fourier representation of the signal. Equation (1) is referred to
as the synthesis equation or the inverse discrete time Fourier transform (IDTFT), and equation (2)
is the Fourier transform, the analysis equation. The Fourier transform of a signal is in general a
complex valued function, so we can write X(e^jω) = |X(e^jω)| e^{j∠X(e^jω)},
where |X(e^jω)| is the magnitude and ∠X(e^jω) is the phase of X(e^jω). We also use the term Fourier
spectrum, or simply the spectrum, to refer to X(e^jω). Thus |X(e^jω)| is called the magnitude spectrum
and ∠X(e^jω) is called the phase spectrum. From equation (2) we can see that X(e^jω) is a periodic
function of ω with period 2π, i.e. X(e^{j(ω+2π)}) = X(e^jω). We can therefore interpret (1) as giving
the Fourier coefficients in the representation of the periodic function X(e^jω). In Fourier series
analysis our attention is on the periodic function; here we are concerned with the representation of
the signal x[n]. So the roles of the two equations are interchanged compared to the Fourier series
analysis of periodic signals.
Now we show that if we put equation (2) in equation (1) we indeed get the signal x[n] back. Let
x̂[n] = (1/2π) ∫_{2π} ( Σ_{m=−∞}^{∞} x[m] e^{−jωm} ) e^{jωn} dω
where we have substituted X(e^jω) from (2) into equation (1) and called the result x̂[n].
Since we have used n as the index on the left-hand side, we have used m as the index variable for the
sum defining the Fourier transform. Under our assumption that the sequence x[n] is absolutely
summable, we can interchange the order of integration and summation. The inner integral of
e^{jω(n−m)} over an interval of length 2π equals 2π when m = n and 0 otherwise, so x̂[n] = x[n].
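The analysis/synthesis pair (1)-(2) can be checked numerically for a short sequence. The test sequence and the frequency grid in the Python sketch below are assumptions, and the integral in (1) is approximated by the trapezoidal rule.

    import numpy as np

    x = np.array([1.0, 2.0, 3.0, 2.0, 1.0])      # assumed finite sequence, starting at n = 0
    n = np.arange(len(x))

    w = np.linspace(-np.pi, np.pi, 4001)         # one period of the DTFT
    X = np.array([np.sum(x*np.exp(-1j*wk*n)) for wk in w])    # analysis equation (2)

    # synthesis equation (1): x[m] = (1/(2*pi)) * integral of X(e^{jw}) e^{jwm} dw
    x_rec = np.array([np.trapz(X*np.exp(1j*m*w), w) for m in n]) / (2*np.pi)
    print(np.round(x_rec.real, 6))               # recovers [1, 2, 3, 2, 1]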
Example: Let
The Fourier transform of this signal is periodic in ω with period 2π, and is given by
Now consider a periodic sequence x[n] with period N and with the Fourier series representation
Let x[n] and y[n] be two signals; their DTFTs are denoted by X(e^jω) and Y(e^jω) respectively.
The notation x[n] ↔ X(e^jω) is used to say that the left-hand side is the signal x[n] whose DTFT
X(e^jω) is given on the right-hand side.
From this, it follows that Re X(e^jω) is an even function of ω and Im X(e^jω) is an odd
function of ω. Similarly, the magnitude of X(e^jω) is an even function and the phase angle is
an odd function. Furthermore,
The impulse train on the right-hand side reflects the dc or average value that can result from
summation.
For example, the Fourier transform of the unit step x[n] = u[n] can be obtained by using the
accumulation property.
6. Time Reversal
7. Time Expansion
For discrete-time signals, however, the scaling factor must be an integer. Let us define a signal
x_(k)[n] with k a positive integer,
For k > 1, the signal is spread out and slowed down in time, while its Fourier transform is
compressed.
Example: Consider the sequence x[n] displayed in the figure (a) below. This sequence can be
related to the simpler sequence y[n] as shown in (b).
As can be seen from the figure below, y[n] is a rectangular pulse with N1 = 2; its Fourier
transform is given by
8. Differentiation in Frequency
The right-hand side of the above equation is the Fourier transform of −jn x[n]. Therefore,
multiplying both sides by j, we see that
9. Parseval’s Relation
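Parseval's relation for the DTFT states that Σ_n |x[n]|² = (1/2π) ∫_{2π} |X(e^jω)|² dω. It can be verified numerically; the short test sequence in the Python sketch below is an assumption.

    import numpy as np

    x = np.array([1.0, -0.5, 0.25, 2.0])                  # assumed finite sequence
    n = np.arange(len(x))

    w = np.linspace(-np.pi, np.pi, 8001)
    X = np.array([np.sum(x*np.exp(-1j*wk*n)) for wk in w])

    lhs = np.sum(np.abs(x)**2)                            # energy computed in the time domain
    rhs = np.trapz(np.abs(X)**2, w) / (2*np.pi)           # energy computed from the spectrum
    print(lhs, rhs)                                       # the two values agree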
UNIT – III
SIGNAL TRANSMISSION THROUGH LINEAR SYSTEMS
Linear Systems:
Example:
y(t) = 2x(t)
Solution:
Impulse Response:
The impulse response of a system is its response to the input δ(t) when the system is
initially at rest. The impulse response is usually denoted h(t). In other words, if the input to an
initially at rest system is δ(t), then the output is named h(t).
δ(t) → [ system ] → h(t)
Linear Time Variant (LTV) and Linear Time Invariant (LTI) Systems
If a system is both linear and time variant, then it is called a linear time variant (LTV) system.
If a system is both linear and time invariant, then it is called a linear time invariant (LTI)
system.
Response of a continuous-time LTI system and the convolution integral
(i)Impulse Response:
The impulse response h(t) of a continuous-time LTI system (represented by T) is defined to
be the response of the system when the input is δ(t), that is,
--------(2)
Since the system is linear, the response y(t) of the system to an arbitrary input x(t) can be
expressed as
--------(3)
--------(4)
Substituting Eq. (4) into Eq. (3), we obtain
-------(5)
Equation (5) indicates that a continuous-time LTI system is completely characterized by its impulse
response h(t).
(iii)Convolution Integral:
Equation (5) defines the convolution of two continuous-time signals x(t) and h(t), denoted
by
-------(6)
Equation (6) is commonly called the convolution integral. Thus, we have the fundamental
result that the output of any continuous-time LTI system is the convolution of the input x(t) with
the impulse response h(t) of the system. The following figure illustrates the definition of the impulse
response h(t) and the relationship of Eq. (6).
Thus, the step response s(t) can be obtained by integrating the impulse response h(t).
Differentiating the above equation with respect to t, we get
Thus, the impulse response h(t) can be determined by differentiating the step response s(t).
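Both facts can be checked numerically. In the Python sketch below, the exponential impulse response h(t) = e^(−t) u(t) is an assumed example; the convolution integral is approximated by a Riemann sum and the step response by a running integral of h(t).

    import numpy as np

    dt = 0.001
    t  = np.arange(0, 10, dt)
    h  = np.exp(-t)                          # assumed impulse response h(t) = e^{-t} u(t)
    x  = np.ones_like(t)                     # unit-step input u(t)

    y = np.convolve(x, h)[:len(t)] * dt      # y(t) = integral of x(tau) h(t - tau), approximated
    s = np.cumsum(h) * dt                    # step response s(t) = integral of h, here 1 - e^{-t}

    print(np.max(np.abs(y - s)))             # essentially zero: the two computations agree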
Transmission is said to be distortionless if the input and output have identical wave
shapes, i.e., in distortionless transmission, the input x(t) and output y(t) satisfy the condition
y(t) = K x(t − td), where K is a constant (gain) and td is the time delay.
Taking the Fourier transform of both sides,
Y(ω) = K FT[x(t − td)] = K X(ω) e^{−jω td}
Thus, distortionless transmission of a signal x(t) through a system with impulse response h(t) is
achieved when
H(ω) = Y(ω)/X(ω) = K e^{−jω td}, i.e., h(t) = K δ(t − td)
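A quick numerical sanity check of this condition (the Gaussian test pulse, the gain K = 2 and the delay td = 1.5 s are assumptions of this sketch): convolving with a discrete stand-in for K δ(t − td) returns the input scaled by K and delayed by td, i.e. the wave shape is preserved.

    import numpy as np

    dt = 0.01
    t  = np.arange(0, 10, dt)
    x  = np.exp(-(t - 3.0)**2)               # assumed test pulse

    K, td = 2.0, 1.5                         # gain and delay of the distortionless system
    h = np.zeros_like(t)
    h[int(round(td/dt))] = K / dt            # discrete stand-in for K*delta(t - td)

    y = np.convolve(x, h)[:len(t)] * dt      # output of the system
    print(np.allclose(y, K*np.exp(-(t - 3.0 - td)**2), atol=1e-3))   # True: same shape, scaled and delayed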
A physical transmission system may have amplitude and phase responses as shown below:
FILTERING
One of the most basic operations in any signal processing system is filtering. Filtering is
the process by which the relative amplitudes of the frequency components in a signal are
changed or perhaps some frequency components are suppressed. As we saw in the preceding
section, for continuous-time LTI systems, the spectrum of the output is that of the input
multiplied by the frequency response of the system. Therefore, an LTI system acts as a filter on
the input signal. Here the word "filter" is used to denote a system that exhibits some sort of
frequency-selective behavior.
Fig: Magnitude responses of ideal filters (a) Ideal Low-Pass Filter (b)Ideal High-Pass Filter
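An ideal low-pass filter cannot be realized exactly, but its frequency-selective action is easy to mimic on sampled data by zeroing spectral components above a cutoff. The sampling rate, the two tone frequencies and the 50 Hz cutoff in the Python sketch below are assumptions made only for illustration.

    import numpy as np

    fs = 1000.0                                         # assumed sampling rate in Hz
    t  = np.arange(0, 1, 1/fs)
    x  = np.sin(2*np.pi*10*t) + np.sin(2*np.pi*200*t)   # 10 Hz tone plus 200 Hz tone

    X = np.fft.rfft(x)
    f = np.fft.rfftfreq(len(x), 1/fs)
    H = (f <= 50.0).astype(float)                       # ideal low-pass magnitude response
    y = np.fft.irfft(H * X, n=len(x))                   # output spectrum = H(f) times input spectrum

    print(np.allclose(y, np.sin(2*np.pi*10*t), atol=1e-8))   # True: only the 10 Hz tone remains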
UNIT – IV
LAPLACE TRANSFORM
We know that for a continuous-time LTI system with impulse response h(t), the output y(t) of the
system to a complex exponential input of the form e^{st} is
A. Definition:
The function H(s) is referred to as the Laplace transform of h(t). For a general continuous-time
signal x(t), the Laplace transform X(s) is defined as
We know that
Dirichlet's conditions are used to define the existence of the Laplace transform, i.e.:
The function f has a finite number of maxima and minima.
There must be a finite number of discontinuities in the signal f in the given interval of
time.
It must be absolutely integrable in the given interval of time, i.e.
Linearity Property
Region of convergence.
The range of variation of σ for which the Laplace transform converges is called the region of
convergence.
If x(t) is absolutely integrable and of finite duration, then the ROC is the entire s-plane.
If x(t) is a two-sided signal, then the ROC is the intersection of the two regions (a strip in the s-plane).
Example 1: Find the Laplace transform and ROC of x(t) = e^{−at} u(t)
Example 2: Find the Laplace transform and ROC of x(t) = e^{at} u(−t)
Example 3: Find the Laplace transform and ROC of x(t) = e^{−at} u(t) + e^{at} u(−t)
ROC: −a < Re{s} < a
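Example 1 can be cross-checked symbolically. The Python sketch below assumes SymPy's laplace_transform and a > 0; it returns the transform together with the abscissa of convergence that defines the ROC.

    import sympy as sp

    t, s = sp.symbols('t s')
    a = sp.symbols('a', positive=True)

    F, abscissa, cond = sp.laplace_transform(sp.exp(-a*t), t, s)
    print(F, abscissa)          # the transform is 1/(s + a); it converges for Re(s) > -a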
A system is said to be stable when all the poles of its transfer function lie in the left half of
the s-plane.
A system is said to be unstable when at least one pole of its transfer function lies in
the right half of the s-plane.
A system is said to be marginally stable when at least one pole of its transfer function
lies on the jω axis of the s-plane.
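The three cases can be checked directly from the pole locations of a transfer function; the second-order denominator in the Python sketch below is an assumed example.

    import numpy as np

    den   = [1.0, 3.0, 2.0]            # assumed denominator: s^2 + 3s + 2
    poles = np.roots(den)              # poles at s = -1 and s = -2

    if np.all(poles.real < 0):
        verdict = "stable"             # every pole in the left half of the s-plane
    elif np.any(poles.real > 0):
        verdict = "unstable"           # at least one pole in the right half-plane
    else:
        verdict = "marginally stable"  # pole(s) on the jw axis, none in the right half-plane
    print(poles, verdict)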
UNIT – V
Z-TRANSFORM
Analysis of discrete-time LTI systems can be done using z-transforms. The z-transform is a
powerful mathematical tool to convert difference equations into algebraic equations.
The bilateral (two sided) z-transform of a discrete time signal x(n) is given as
The unilateral (one sided) z-transform of a discrete time signal x(n) is given as
Z-transform may exist for some signals for which Discrete Time Fourier Transform (DTFT) does
not exist.
Z-transform of a discrete time signal x(n) can be represented with X(Z), and it is defined as
The above equation represents the relation between Fourier transform and Z-transform
Inverse Z-transform:
Z-Transform Properties:
Linearity Property:
Convolution Property
Correlation Property
The initial value and final value theorems of the z-transform are defined for causal signals.
For a causal signal x(n), the initial value theorem states that
This is used to find the initial value of the signal without taking inverse z-transform
This is used to find the final value of the signal without taking inverse z-transform
The range of variation of z for which the z-transform converges is called the region of convergence
(ROC) of the z-transform.
If x(n) is a finite duration causal sequence or right sided sequence, then the ROC is the entire
z-plane except at z = 0.
If x(n) is a finite duration anti-causal sequence or left sided sequence, then the ROC is the
entire z-plane except at z = ∞.
If x(n) is an infinite duration causal sequence, the ROC is the exterior of the circle with radius a,
i.e. |z| > a.
If x(n) is an infinite duration anti-causal sequence, the ROC is the interior of the circle with radius
a, i.e. |z| < a.
If x(n) is a finite duration two sided sequence, then the ROC is the entire z-plane except at z = 0
and z = ∞.
The plot of the ROC has two cases, a > 1 and a < 1, since the value of a is not known.
In the transfer function H[Z], the order of the numerator cannot be greater than the order of the
denominator.
Inverse Z transform:
Three different methods are listed below; a sketch of the partial fraction method follows the list.
1. Partial fraction method
2. Power series method
3. Long division method
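Here is a sketch of the partial fraction method using SciPy's signal.residuez; the example X(z) and its causal ROC are assumptions made only for this illustration. The residues and poles give x[n] = Σ r_i p_i^n u[n], which matches the first terms that long division of the same X(z) would produce.

    import numpy as np
    from scipy.signal import residuez

    # assumed example: X(z) = 1 / (1 - 0.75 z^-1 + 0.125 z^-2), ROC |z| > 0.5 (causal)
    b = [1.0]
    a = [1.0, -0.75, 0.125]

    r, p, k = residuez(b, a)                     # X(z) = sum of r_i / (1 - p_i z^-1)
    n = np.arange(8)
    x = sum(ri * pi**n for ri, pi in zip(r, p)).real   # causal inverse: x[n] = sum of r_i p_i^n

    print(np.round(x, 4))                        # 1, 0.75, 0.4375, ... as long division would give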
For z not equal to zero or infinity, each term in X(z) will be finite and consequently X(z) will
converge. Note that X(z) includes both positive powers of z and negative powers of z. Thus,
from the result we conclude that the ROC of X(z) is 0 < |z| < ∞.
Sol:
From the above equation we see that there is a pole of (N − 1)th order at z = 0 and a pole at z = a.
Since x[n] is a finite sequence and is zero for n < 0, the ROC is |z| > 0. The N roots of the
numerator polynomial are at