Unit-1: Signals & Systems

Table of Contents
1.1 FUNDAMENTALS OF SIGNALS
  1.1.1 Basic Definitions of Signals & Systems
    Signals
    Systems
  1.1.2 Classification of Signals
    Continuous & Discrete Time Signals
      Continuous-Time Signals
      Discrete-Time Signals
    Analog and Digital Signals
      Analog Signals
      Digital Signals
  1.1.3 Properties & Classification of Signals Based on Properties
    Even & Odd Signals and Sequences
      Even Signals and Sequences
      Odd Signals & Sequences
    Periodic & Non-Periodic Signals & Sequences
      Periodic Signals
      Non-Periodic Signals
      Periodic Sequences
      Sum of Periodic Functions
    Energy Signals and Power Signals
      Continuous-Time Signals
      Discrete-Time Signals
    Causal & Non-Causal Signals and Sequences
  1.1.4 Elementary Signals
    DC Signal
    Sinusoidal Signals
    Unit Step Signal
    Signum Function
    Rectangular Pulse
    Delta or Unit Impulse Function δ(t)
    Unit Ramp Signal
    Complex Exponential Signals
    Sinc Function
    Relation between Step, Ramp and Delta Function
  1.1.5 Operations on Signals & Sequences
    Time Shift
    Time Reversal
    Time Scaling
    Combination of Operations
    Decimation and Expansion
    Amplitude Scaling
    Amplitude Shifting
    Addition and Subtraction
    Multiplication
    Differentiation and Integration

1.2 FUNDAMENTALS OF SYSTEMS
  1.2.1 Continuous-Time and Discrete-Time Systems
  1.2.2 Interconnects of Systems
  1.2.3 Basic System Properties
    Systems with and without Memory
    Invertibility and Inverse System
    Causality
    Stability
    Time Invariance
    Linearity
  1.2.4 Convolution
    Convolution in Discrete Time or Convolution Sum
      Definition and Properties of Convolution Sum
    Continuous-Time Convolution / Convolution Integral
  1.2.5 Correlation
    Auto-correlation Function
      Properties of Auto-correlation Function of Energy Signal
      Auto-correlation Function of Power Signal
    Cross-correlation Function
      Properties of Cross-correlation Function of Energy and Power Signals

1.3 ANALOGY BETWEEN VECTORS AND SIGNALS
  1.3.1 Vector
    Dot Product of Two Vectors
  1.3.2 Signal
  1.3.3 Orthogonal Vector Space
  1.3.4 Orthogonal Signal Space
  1.3.5 Mean Square Error
  1.3.6 Orthogonality in Complex Functions

1.4 FOURIER SERIES
  1.4.1 Fourier Series Representation of Continuous-Time Periodic Signals
    Trigonometric Fourier Series (TFS)
    Exponential Fourier Series (EFS)
    Relation Between Trigonometric and Exponential Fourier Series
    Convergence of the Fourier Series
    Properties of the Continuous-Time Fourier Series
      Linearity property
      Time shifting property
      Time reversal property
      Time scaling property
      Time differentiation property
      Time integration property
      Convolution theorem or property
      Modulation or multiplication property
      Parseval's relation or theorem or property


1.1 FUNDAMENTALS OF SIGNALS


1.1.1 Basic Definitions of Signals & Systems
Signals
A signal is a quantitative description of a physical phenomenon, event or process. A signal can be
represented in many ways. In all the cases the information in a signal is contained in a pattern of variations
of some form.

“Signals are represented mathematically as a function of one or more independent variables, which
convey information on the nature of physical phenomenon.”
Some common examples include
1. Electrical current or voltage in a circuit.
2. Daily closing value of a share of stock last week.
3. An audio signal: continuous-time in its original form, or discrete-time when stored on a CD.
For convenience, let us restrict our attention to one-dimensional signals defined as single-valued functions of time, which means that at every instant of time the function has a unique value. The value may be real or complex; if the value is real, we call the signal a real-valued signal, otherwise we call it a complex-valued signal. In both cases, the independent variable (time) is real-valued.
If the function depends on a single variable, the signal is said to be one-dimensional. If the function depends on two or more variables, the signal is said to be multi-dimensional.
Systems
A system is an entity that processes one or more input signals in order to produce one or more output
signals. There is no unique purpose for a system; rather, the purpose depends on the application of interest.
In an automatic speaker recognition system, the function of the system is to extract the information to
recognize/identify the speaker. In a communication system, the function of the system is to transport the
information from source to destination.

“A system may be defined as a set of elements or functional blocks which are connected together and produce an output in response to an input signal. The response or output of the system depends upon the transfer function of the system.”

The interaction between a system and its associated signals can be shown schematically as follows


In the same way, a communication system can be shown as a block of this form.

A control system, which can manipulate the output of the system adaptively based on a feedback path, is shown below.

1.1.2 Classification of signals


Continuous & Discrete Time Signals
Continuous-Time Signals
In this case the independent variable time is continuous, and thus these signals are defined for a continuum of values of time. Such a signal may be defined as a mathematically continuous function. Here the independent variable time is represented as 't', where t is a real-valued variable denoting time, i.e., t ∈ ℝ. A continuous-time signal is represented as x(t) and is shown in the following figure.

Thus from the figure we can say that a continuous signal will have some value at every instant of time.
Discrete-Time Signals
In this case the independent variable takes discrete values, and thus the signals are defined only for discrete values of time. A discrete signal is defined only at certain time instants. The amplitude between two time instants is simply not defined (its value may exist, but it is not defined). So the dependent variable (amplitude) is continuous in a discrete-time signal, while the independent variable (time) is discrete, i.e., n ∈ ℤ. Here the independent variable time is denoted by 'n', and the signal is represented as x[n]. These are also called sequences.
Mathematically, a discrete-time signal or sequence is written, for example, as
x[n] = {…, 0, 0, 1, 2, 0, −1, 1, 2, 0, 0, …}
                ↑
where the arrow indicates the sample at n = 0; in the above example, the value of x[n] at n = 0 is 1.

Analog and digital signals


Analog Signals
A signal with a continuous dependent variable is said to be continuous-valued (e.g., a voltage waveform). A continuous-valued CT signal is said to be analog (e.g., a voltage waveform), i.e., it takes a continuous range of values at every instant of time.
Digital Signals
A signal with a discrete dependent variable is said to be discrete valued (e.g., digital image). A discrete-
valued DT signal is said to be digital (e.g., digital audio). In this case both independent variable (time) and
dependent variable (amplitude) are discrete in nature.
1.1.3 Properties & Classification of Signals based on Properties
1. Symmetry - Even & Odd Signals and Sequences
2. Periodicity - Periodic & Non-Periodic Signals and Sequences
3. Causality - Causal & Non-Causal Signals and Sequences
4. Deterministic & Random Signals and Sequences
5. Energy & Power Signals and Sequences


Even & odd signals and Sequences


Even Signals and Sequences
These are also called as symmetric signals. A continuous time signal is said to be even signals if it
satisfies the following condition
x(-t) = x(t) for all values of t.
A sequence x is said to be even if it satisfies
x(-n) = x(n) for all n.
Geometrically, the graph of an even signal is symmetric about the vertical axis (the time origin).

Odd Signals & Sequences


These are also called as anti-symmetric/ asymmetric signals.
A continuous time signal is said to be odd signal if it satisfies the following condition
x(-t )= -x(t) for all values of t.
A sequence x is said to be odd if it satisfies
x(n) = −x(−n) for all n.
Geometrically, the graph of an odd signal is antisymmetric about the origin.
 These definitions of even and odd signals are only valid for real valued signals.
 In the case of complex valued signal, we may speak of conjugate symmetry.
 A complex valued signal is said to be conjugate symmetric if it satisfies the following condition
x(-t)=x*(t)
where ‘*’ denotes complex conjugation.
 Let
x(t)=a(t)+j b(t)
where a(t) is real part of x(t) and b(t) is imaginary part.
From the above equations it is clear that a complex valued signal x(t) is conjugate symmetric if its real
part is even and its imaginary part is odd.


An important fact is that any signal can be decomposed into a sum of two signals, one of which is even and the other odd. Every function x has a unique representation of the form x(t) = xe(t) + xo(t), where the functions xe and xo are even and odd, respectively. In particular, the functions xe and xo are given by
xe(t) = [x(t) + x(−t)]/2   and   xo(t) = [x(t) − x(−t)]/2
The functions xe and xo are called the even part and odd part of x, respectively.
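The decomposition is easy to verify numerically. The short Python sketch below is only an illustration (the test signal and the symmetric sampling grid are assumptions, not taken from these notes); it computes the even and odd parts of a sampled signal directly from the two formulas above.

    import numpy as np

    # Even/odd decomposition: xe(t) = [x(t) + x(-t)]/2, xo(t) = [x(t) - x(-t)]/2.
    # A symmetric time grid is used so that reversing the sample order gives x(-t).
    t = np.linspace(-2, 2, 401)       # assumed symmetric grid
    x = t + t**2                      # assumed test signal

    x_rev = x[::-1]                   # samples of x(-t)
    xe = 0.5 * (x + x_rev)            # even part (here equals t**2)
    xo = 0.5 * (x - x_rev)            # odd part  (here equals t)

    assert np.allclose(xe + xo, x)    # the two parts reconstruct x
    assert np.allclose(xe, xe[::-1])  # xe is even
    assert np.allclose(xo, -xo[::-1]) # xo is odd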

Periodic & Non-periodic signals & Sequences


Periodic Signals
 A signal x(t) is said to be periodic if it satisfies the condition
x(t)=x(t+T) for all t
where T is a positive constant.
 Here, the signal x(t) repeats itself after a period T.
 The smallest value of T that satisfies the above equation is called as fundamental period of x(t).
 The fundamental period T defines the duration of one complete cycle of x(t).
 The reciprocal of the fundamental period T is called the fundamental frequency of the periodic signal
x(t).
 The fundamental frequency describes how frequently the periodic signal x(t) repeats itself.
 We can define fundamental frequency as
f =1/T.
 The frequency ‘f’ is measured in hertz (Hz) or cycles per second.
 The angular frequency, measured in radians per second, is defined by
ω = 2π/T
Non-Periodic Signals
 These are also called as aperiodic signals.
 A signal x(t) is said to be an aperiodic or non-periodic signal if it doesn’t satisfy the periodicity
condition.
i.e., there is no positive value of T for which x(t) = x(t + T) holds for all t.
Periodic Sequences
 A discrete time signal x[n] is said to be periodic if it satisfies the condition
x[n]=x[n+N] for all integer values of n
where N is positive integer.
 The smallest value of integer N for which the above equation is satisfied is called the fundamental
period of the sequence x[n].
 The fundamental angular frequency or simply fundamental frequency of x[n] is defined by
ω = 2π/N
Non-Periodic Sequences


 If a sequence doesn’t satisfy the equation x[n]=x[n+N], then the sequence is said to be non-
periodic sequence.
Sum of periodic functions
Let x1 and x2 be periodic functions with fundamental periods T1 and T2, respectively. Then the sum y = x1 + x2 is a periodic function if and only if the ratio T1/T2 is a rational number (i.e., the quotient of two integers). Suppose that T1/T2 = q/r, where q and r are coprime integers (i.e., they have no common factors); then the fundamental period of y is rT1 (or equivalently qT2, since rT1 = qT2). Note that rT1 is simply the least common multiple of T1 and T2. Although the above theorem only directly addresses the case of the sum of two functions, the case of N functions (where N > 2) can be handled by applying the theorem repeatedly N − 1 times.
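As a quick numerical illustration of this rule (a minimal sketch; the helper function and the example periods below are assumptions, not part of the original notes), the fundamental period of a sum can be obtained from the rational ratio T1/T2:

    from fractions import Fraction

    def period_of_sum(T1, T2):
        # y = x1 + x2 is periodic iff T1/T2 is rational; if T1/T2 = q/r in lowest
        # terms, the fundamental period of y is r*T1 = q*T2 (the LCM of T1 and T2).
        ratio = Fraction(T1).limit_denominator(10**6) / Fraction(T2).limit_denominator(10**6)
        q, r = ratio.numerator, ratio.denominator
        return r * T1

    print(period_of_sum(2, 3))        # 6   (T1 = 2 s, T2 = 3 s)
    print(period_of_sum(0.5, 0.75))   # 1.5 (T1/T2 = 2/3, so 3*T1 = 2*T2 = 1.5)

Here limit_denominator is only a practical guard against floating-point representation; if T1/T2 is truly irrational, the sum is not periodic and this sketch does not apply.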
Periodic Signal and Sequence; Aperiodic Signal and Sequence

Deterministic & Random signals and sequences


 Deterministic signal is a signal about which there is no uncertainty with respect to its value at any
time.
 A deterministic signal can be modeled as completely specified functions of time.
 All the periodic signals like square wave shown as an example in the above case are the examples
for deterministic signals
 The pattern of this type of signal is regular and can be characterized mathematically.
 The nature and amplitude of such signal at any time can be predicted.
 Random signal is a signal about which there is uncertainty before its actual occurrence.
 The occurrence of this signal is random in nature.
 Pattern of such signal is quite irregular.
 Random signals may be viewed as an ensemble (group) of signals, with each member having a different waveform.
 The exact characteristics of such a signal cannot be predicted in advance.
 However, each member of the ensemble has a probability of occurrence.
 Noise is an example of random signal.


Random Signal and Sequence

Energy Signals and Power Signals


Continuous-Time Signals
 In an electrical system, a signal may represent voltage or current.
 Consider a voltage v(t) developed across a resistor R, producing a current i(t).
 The instantaneous power dissipated in this resistor is defined by
p(t) = v²(t)/R   or   p(t) = R i²(t)
 In both the cases, the instantaneous power is proportional to squared amplitude of the signal.
 In signal analysis we define power in terms of 1-ohm resistor.
 So regardless of whether a given signal x(t) is voltage or current we can express the instantaneous
power as
p(t) = x²(t)
 Now, we can define the total energy of the continuous time signal as
E = lim_{T→∞} ∫_{−T/2}^{T/2} x²(t) dt = ∫_{−∞}^{∞} x²(t) dt
 Average power is defined as


P = lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} x²(t) dt

 The average power of the periodic signal x(t) of fundamental period T is given by
P = (1/T) ∫_{−T/2}^{T/2} x²(t) dt

 The square root of the average power P is called the root mean square (rms) value of the signal
x(t).

Discrete time signals


 In case of discrete time signals the integrals are replaced by corresponding sums.
 The total energy of x[n] is defined by

E x
n 
2
[n]

 Average power is defined as


P = lim_{N→∞} (1/(2N)) Σ_{n=−N}^{N} x²[n]

 The average power in a periodic sequence x[n] with fundamental period N is given by
1 N 1
P   x2 [ n ]
N n 0

 A signal is referred to as an energy signal, if and only if the total energy of the signal satisfies the
condition
0 < E < ∞ (and consequently P = 0)
 In the same way x(t) is said to be a power signal, if and only if the average power satisfies the
condition
0 < P < ∞ (and consequently E → ∞)

 Signals for which both E and P are unbounded are neither energy nor power signals, e.g., x(t) = t.
 The energy and power classifications of signals are mutually exclusive.
 All periodic and random signals are referred to as power signals, whereas signals that are both deterministic and non-periodic are energy signals (energy signals are of limited duration).
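A short numerical sketch of this classification follows (the two test sequences and the index range are assumed purely for illustration):

    import numpy as np

    N = 500
    n = np.arange(-N, N + 1)

    x1 = np.zeros(n.shape)                 # x1[n] = (0.5)**n u[n]: an energy signal
    x1[n >= 0] = 0.5 ** n[n >= 0]
    x2 = np.cos(0.1 * np.pi * n)           # periodic cosine: a power signal

    E1 = np.sum(np.abs(x1) ** 2)           # finite (tends to 1/(1 - 0.25) = 4/3)
    P1 = E1 / (2 * N)                      # tends to 0 as N grows
    E2 = np.sum(np.abs(x2) ** 2)           # grows without bound as N grows
    P2 = E2 / (2 * N)                      # tends to A**2/2 = 0.5

    print(E1, P1)                          # ~1.33,  ~0.001
    print(E2, P2)                          # ~500,   ~0.5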
Causal & non-Causal Signals and Sequences
A signal x is said to be right-sided if, for some (finite) real constant t0, the following condition holds
x(t) = 0 for all t < t0
(i.e., x is only potentially nonzero to the right of t0). An example of a right-sided signal is shown below.

A signal x is said to be causal if


x(t) = 0 for all t < 0.
A causal signal is a special case of a right-sided signal.
A signal x is said to be left-sided if, for some (finite) real constant t0, the following condition holds
x(t) = 0 for all t > t0
(i.e., x is only potentially nonzero to the left of t0). An example of a left-sided signal is shown below.


Similarly, a signal x is said to be anti-causal if


x(t) = 0 for all t > 0.
An anti-causal signal is a special case of a left-sided signal.
A signal that is both left sided and right sided is said to be finite duration (or time limited). An example of
a finite duration signal is shown below.

A signal that is neither left sided nor right sided is said to be two sided. An example of a two-sided signal
is shown below.

1.1.4 Elementary Signals


Some of the standard signals used in signals and systems are as follows:
1. DC signal
2. Sinusoidal signal
3. Unit step signal
4. Signum function
5. Rectangular pulse
6. Delta or unit impulse function
7. Unit ramp signal
8. Exponential signal
9. Sinc function
DC signal
Continuous time dc signal
A dc signal is having constant amplitude for all the values of time. A dc signal is shown in the
following figure. The amplitude of this signal is independent of time.
A dc signal can be defined as
x(t)=A for all values of t.
Discrete time dc signal
It is a sequence of samples, each of amplitude A, extending from −∞ to ∞.
A dc sequence is defined as
x[n]=A for-∞ < n < ∞.


Sinusoidal Signals
Continuous time signal
The sinusoidal signals include sine and cosine signals. Mathematically these signals are represented as
A sine signal x(t) =A sin(ωt) = A sin(2πft)
A cosine signal x(t)=A cos(ωt) = A cos(2πft)
Discrete Time Signals
A discrete time sinusoidal waveform is denoted by
Sine sequence x[n] = Asin(ωn)
Cosine Sequence x[n] = Acos(ωn)
where A → amplitude, f → frequency, ω → angular frequency = 2πf.
DC Signal and Sequence; Sine Signal and Sequence

Unit Step Signal


Continuous Time Unit Step Signal
The unit step signal has constant amplitude of unity for zero and positive values of time (t). It has zero
amplitude for negative values of time.
Mathematically it is represented as
u(t) = { 1 for t ≥ 0; 0 for t < 0 }
Discrete Time Unit Step Signal
A discrete-time unit step signal is denoted by u[n]. Its value is unity for n ≥ 0, while for negative values of n its value is zero.
u[n] = { 1 for n ≥ 0; 0 for n < 0 }
The above equation can be written in the form of sequence as


u[n] = {…, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, …} (the first 1 occurring at n = 0)

Unit Step Signal and Sequence; Cosine Signal and Sequence

Signum Function
Continuous Time Signum Function
The Signum function is shown in the following figure. Mathematically Signum function is given as
sgn(t) = { 1 for t > 0; −1 for t < 0 }
The Signum function is odd or antisymmetric function.
Discrete Time Signum Function
A discrete time Signum function can be obtained by sampling the continuous time Signum function.
Its value is +1 for positive values of n and -1 for negative values of n. Mathematically it is given as
sgn(n) = { 1 for n > 0; −1 for n < 0 }
Signum Function


Rectangular Pulse
A rectangular pulse of unit amplitude and unit duration is shown in the following figure. It is centered about the y-axis, i.e., about t = 0. Mathematically it is represented as
rect(t) = { 1 for −1/2 ≤ t ≤ 1/2; 0 otherwise }
The general rectangular pulse, having amplitude A over a duration T, is given as
A rect(t/T) = { A for −T/2 ≤ t ≤ T/2; 0 otherwise }
In the above expression, t/T shows that it is a function of time, T represents the width of the rectangular pulse, and A represents the amplitude. The rectangular pulse is an even function.

Delta or Unit Impulse Function δ(t)


The delta function is an extremely important function used for the analysis of communication
systems. The impulse response of the system is its response to a delta function applied at the input.
The delta function is shown in the following figure. It is present only at t = 0; its width tends to 0, and its amplitude at t = 0 is infinitely large, so that the area under the pulse is unity. Due to this unit area, it is called a unit impulse function:
δ(t) = 0 for t ≠ 0, with δ(t) → ∞ at t = 0
The area under the unit impulse is given as
∫_{−∞}^{∞} δ(t) dt = 1

The impulse δ(t) is also referred to as the Dirac delta function.

A graphical description of impulse δ[n] is shown in the figure.


There are two important properties for delta function they are as follows,
1. Shifting property
2. Replication property
Shifting Property: The shifting property of the delta function states that
∫_{−∞}^{∞} x(t) δ(t − tm) dt = x(tm)


where δ(t − tm) represents the time-shifted delta function. This delta function is present only at t = tm.
The RHS represents the value of x(t) at t = tm.
This result indicates that the area under the product of a function with an impulse is equal to the value of
that function at the instant where the impulse is located.
Replication Property
This property states that the convolution of any function x(t) with delta function yields the same
function. The sign * in the below given equation represents convolution.
𝑥(𝑡) ∗ 𝛿(𝑡) = 𝑥(𝑡)

Discrete Time Impulse Function/Unit Sample Sequence


The discrete-time version of the unit impulse signal is the unit sample sequence.
A discrete-time unit impulse function is denoted by δ[n]. Its amplitude is 1 at n = 0, and for all other values of n its amplitude is zero.
δ[n] = { 1 for n = 0; 0 for n ≠ 0 }
In the sequence form it can be represented as
δ[n] = {……,0,0,0,1,0,0,0,….}

Unit Ramp Signal


A continuous time ramp signal is denoted by r(t). Mathematically, it is represented as
r(t) = { t for t ≥ 0; 0 for t < 0 }
A discrete-time ramp signal is denoted as ur[n]. Its value increases linearly with the sample number n. Mathematically it is represented as
ur[n] = { n for n ≥ 0; 0 for n < 0 }

Complex Exponential Signals


Continuous-Time

A continuous-time complex exponential signal is of the form
x(t) = C e^{at}
where C and a are, in general, complex numbers. Depending on these parameters, the complex exponential signal may exhibit different characteristics.

Real Exponential signals


 The signal x(t) = C e^{at} may be called a real exponential signal, if the parameters C and a are real.
 Depending on the value of ‘a’, the real exponential signals may behave in two different ways.
 If the value of ‘a’ is positive then as time increases, x(t) is a growing exponential, as shown in the
figure.
 If the value of ‘a’ is negative then as time increases, x(t) is a decaying exponential, as shown in
figure.
 Also, note that for a = 0, x(t) is a constant.
Now consider the general complex exponential C e^{at}, where C = |C| e^{jθ} is expressed in polar form and a = r + jω0 is expressed in rectangular form. Then
C e^{at} = |C| e^{jθ} e^{(r + jω0)t} = |C| e^{rt} e^{j(ω0t + θ)} = |C| e^{rt} [cos(ω0t + θ) + j sin(ω0t + θ)]
Thus, for r = 0, the real and imaginary parts of a complex exponential are sinusoidal.
For r > 0, they are sinusoidal signals multiplied by a growing exponential.
For r < 0, they are sinusoidal signals multiplied by a decaying exponential.
Damped signal – sinusoidal signals multiplied by decaying exponentials are commonly referred to as damped signals.

Discrete-time complex exponential

A discrete-time complex exponential signal or sequence is defined by
x[n] = C α^n
where C and α are, in general, complex numbers. This can be alternatively expressed as
x[n] = C e^{βn}
where α = e^{β}.

Real Exponential Signals


If C and α are real, we have a real exponential signal
x[n] = C α^n
Real exponential signal x[n] = C α^n for: (a) α > 1; (b) 0 < α < 1; (c) −1 < α < 0; (d) α < −1.


Consider a complex exponential C α^n, where C = |C| e^{jθ} and α = |α| e^{jω0}. Then
C α^n = |C| |α|^n cos(ω0n + θ) + j |C| |α|^n sin(ω0n + θ)
Thus, for |α| = 1, the real and imaginary parts of a complex exponential are sinusoidal.
For |α| < 1, they are sinusoidal signals multiplied by a decaying exponential.
For |α| > 1, they are sinusoidal signals multiplied by a growing exponential.
Sinc Function
The cardinal sine function, called the sinc function or sinc pulse, is mathematically expressed as follows:
sinc(x) = sin(πx)/(πx) for x ≠ 0
where x is the independent variable. It can be shown that sinc(x) = 1 at x = 0, and sinc(x) = 0 at x = ±1, ±2, ±3, ….
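Most of the elementary signals above can be generated in one line each. The Python sketch below is only illustrative (the time grid and the test points are assumptions); it also checks the stated values of the sinc function. Note that numpy's sinc is already the normalized sin(πx)/(πx).

    import numpy as np

    t = np.linspace(-5, 5, 2001)

    u    = np.where(t >= 0, 1.0, 0.0)                          # unit step u(t)
    r    = np.where(t >= 0, t, 0.0)                            # unit ramp r(t)
    sgn  = np.where(t > 0, 1.0, np.where(t < 0, -1.0, 0.0))    # signum function
    rect = np.where(np.abs(t) <= 0.5, 1.0, 0.0)                # unit rectangular pulse
    sinc = np.sinc(t)                                          # sin(pi*t)/(pi*t)

    print(np.sinc(0.0))                   # 1.0, as stated above
    print(np.sinc(np.array([1, 2, 3])))   # ~[0, 0, 0]: zeros at nonzero integers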


Relation between Step, Ramp and Delta Function


The relations between step, ramp and delta functions are as follows
i. Relation between unit step and unit ramp signals
The relation between the unit step and unit ramp functions can be written as follows:
(d/dt) r(t) = u(t)   or   ∫ u(t) dt = r(t)
ii. Relation between unit step and delta functions
The relation between the step and delta functions is given as
(d/dt) u(t) = δ(t)   or   ∫ δ(t) dt = u(t)
iii. Relation between unit ramp and delta functions
The relation between the unit ramp and delta functions is given as
r(t) = ∬ δ(t) dt dt   or   δ(t) = (d²/dt²) r(t)
From the above three relations we can summarize these relationships as
δ(t) --integrate--> u(t) --integrate--> r(t)
r(t) --differentiate--> u(t) --differentiate--> δ(t)

1.1.5 Operations on Signals & Sequences


Time Shift
For any t0 ∈ ℝ and n0 ∈ ℤ, the time-shift operation is defined as
x(t) → x(t − t0)
x[n] → x[n − n0]
If t0 > 0, the time shift is known as a "delay". If t0 < 0, the time shift is known as an "advance".

Example. In below figure, the left image shows a continuous-time signal x(t). A time shifted version x(t -
2) is shown in the right image.

Time Reversal
Time reversal is defined as
x(t) → x(−t)
x[n] → x[−n]
which can be interpreted as a flip about the "y-axis".


Time Scaling
Time scaling is the operation where the time variable t is multiplied by a constant a:
x(t) → x(at), a > 0
If a > 1, the time scale of the resultant signal is "decimated" or "compressed" (sped up). If 0 < a < 1, the time scale of the resultant signal is "expanded" (slowed down).

Combination of Operations
In general, a linear operation (in time) on a signal x(t) can be expressed as y(t) = x(at − b), a, b ∈ ℝ. There are two methods to describe the output signal y(t) = x(at − b).
Method A: "Shift, then Scale" (Recommended)
1. Define v(t) = x(t − b),
2. Define y(t) = v(at) = x(at − b).
Method B: "Scale, then Shift"
1. Define v(t) = x(at),
2. Define y(t) = v(t − b/a) = x(at − b).
Example.
For the signal x(t) shown in the following figure, sketch x(3t − 5).

Example.

For the signal x(t) shown in Figure, sketch x(1 - t).
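The figures for these two examples are not reproduced here, but the "shift, then scale" recipe is easy to check numerically. In the sketch below, x(t) is an assumed triangular pulse on [0, 2] (an illustrative choice only), and y(t) = x(3t − 5) is built exactly as in Method A:

    import numpy as np

    # Assumed example signal: a triangular pulse supported on [0, 2].
    def x(t):
        return np.interp(t, [0, 1, 2], [0, 1, 0], left=0.0, right=0.0)

    def v(t):            # step 1 (shift):  v(t) = x(t - 5)
        return x(t - 5)

    def y(t):            # step 2 (scale):  y(t) = v(3t) = x(3t - 5)
        return v(3 * t)

    # y(t) is nonzero only where 0 <= 3t - 5 <= 2, i.e. 5/3 <= t <= 7/3
    print(y(np.array([5/3, 2.0, 7/3])))    # [0. 1. 0.]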

Decimation and Expansion


Decimation and expansion are standard discrete-time signal processing operations.
Decimation.
Decimation is defined as
yD[n] = x[Mn], for some integer M.
M is called the decimation factor.
Expansion.
Expansion is defined as
yE[n] = { x[n/L], n an integer multiple of L; 0, otherwise }
L is called the expansion factor.

Examples of decimation and expansion for M = 2 and L = 2.
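The following sketch carries out both operations for M = 2 and L = 2 on an assumed short test sequence (the sample values are illustrative only):

    import numpy as np

    x = np.array([1, 2, 3, 4, 5, 6])

    M = 2
    y_dec = x[::M]                        # decimation: yD[n] = x[Mn]

    L = 2
    y_exp = np.zeros(L * len(x), dtype=x.dtype)
    y_exp[::L] = x                        # expansion: x[n/L] at multiples of L, 0 elsewhere

    print(y_dec)                          # [1 3 5]
    print(y_exp)                          # [1 0 2 0 3 0 4 0 5 0 6 0]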

Amplitude Scaling
Amplitude scaling maps the input signal x to the output signal y as given by
y(t) = ax(t),


where a is a real number.


Geometrically, the output signal y is expanded/compressed in amplitude and/or reflected about the
horizontal axis.

Amplitude Shifting
Amplitude shifting maps the input signal x to the output signal y as given by
y(t) = x(t)+b,
where b is a real number.
Geometrically, amplitude shifting adds a vertical displacement to x.

Addition and Subtraction


Let x1(t) and x2(t) be two continuous time signals, then the addition of these two signals is given as
y(t)=x1(t)+x2(t)
Similarly, subtraction is given as y(t)= x1(t)-x2(t)
The same procedure may be followed for Discrete time signals
Multiplication
Let x1(t) and x2(t) be two continuous time signals, then the multiplication of these two signals is given as

y(t) = x1(t) · x2(t)   and, for sequences,   y[n] = x1[n] · x2[n]


Differentiation and Integration

Let x(t) be a continuous time signal, then the differentiation of the signal x(t) with respect to time is given
as
y(t) = dx(t)/dt
Similarly, integration can be expressed as
y(t) = ∫_{−∞}^{t} x(τ) dτ

“Differentiation and integration of a signal x(t) cannot be directly applied to discrete-time signals, but similar operations, namely the difference and the accumulation, do exist.”

Let x[n] be a discrete-time signal; then the difference operation is given as
y[n] = x[n] − x[n − 1]
Similarly, the accumulation operation is given as
y[n] = Σ_{k=−∞}^{n} x[k]
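A small numerical sketch of these two discrete-time operations follows (the test sequence is assumed, and x[n] is taken to be zero for n < 0):

    import numpy as np

    x = np.array([1.0, 3.0, 6.0, 10.0])

    x_prev = np.concatenate(([0.0], x[:-1]))   # x[n-1], with x[-1] = 0
    diff = x - x_prev                          # first difference: [1, 2, 3, 4]
    acc  = np.cumsum(x)                        # accumulation:     [1, 4, 10, 20]

    # Like differentiation and integration, the two operations undo each other:
    print(np.cumsum(diff))                             # recovers x
    print(acc - np.concatenate(([0.0], acc[:-1])))     # also recovers x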

1.2 FUNDAMENTALS OF SYSTEMS


A system is a quantitative description of a physical process which transforms signals (at its “input") to
signals (at its “output"). More precisely, a system is a “black box" (viewed as a mathematical abstraction)
that deterministically transforms input signals into output signals. In this chapter, we will study the
properties of systems.
1.2.1 Continuous-Time and Discrete-Time Systems
A continuous-time system is a system in which continuous-time input signals are applied and results in
continuous-time output signals.
A discrete-time system is a system in which discrete-time input signals are applied and results in
discrete-time output signals.

Simple Examples of Systems


Example 1 Consider the RC circuit in below figure


The current i(t) is proportional to the voltage drop across the resistor:
i(t) = [vs(t) − vc(t)] / R
The current through the capacitor is
i(t) = C dvc(t)/dt

Examples of systems: (a) a system with input voltage vs(t) and output voltage v0(t); (b) a system with input equal to the force f(t) and output equal to the velocity v(t).

Equating the right-hand sides of the above equations, we obtain a differential equation describing the relationship between the input and output:
dvc(t)/dt + (1/RC) vc(t) = (1/RC) vs(t)
Example 2 Consider the system in Fig. (b), where the force f(t) is the input and the velocity v(t) is the output. Let m denote the mass of the car and ρv(t) the resistance due to friction. Equating the acceleration with the net force divided by the mass, we obtain
dv(t)/dt = (1/m)[f(t) − ρv(t)],   i.e.,   dv(t)/dt + (ρ/m) v(t) = (1/m) f(t)
The above equations for the two systems are two examples of first-order linear differential equations of the form
dy(t)/dt + a y(t) = b x(t)
Example 3 Consider a simple model for the balance in a bank account from month to month. Let y[n]
denote the balance at the end of nth month, and suppose that y[n] evolves from month to month according
the equation
y[n] 1.01y[n 1] x[n] ,
or
y[n]1.01y[n 1] x[n] ,
where x[n] is the net deposit (deposits minus withdraws) during the nth month 1.01y[n 1] models the fact
that we accrue 1% interest each month.
Above equation is an example of the first-order linear difference equation, that is,


y[n]ay[n 1] bx[n] .


1.2.2 Interconnects of Systems

Interconnection of systems. (a) A series or cascade interconnection of two systems; (b) A parallel
interconnection of two systems; (c) Combination of both series and parallel systems.
1.2.3 Basic System Properties
Systems with and without Memory
A system is memoryless if its output for each value of the independent variable at a given time depends only on the input at that same time. For example,
y[n] = (2x[n] − x²[n])²

is memoryless.
A resistor is a memoryless system, since the input current and output voltage has
the relationship
v(t) = Ri(t) ,
where R is the resistance.
One particularly simple memoryless system is the identity system, whose output is identical to its input,
that is
y(t) = x(t) , or y[n] = x[n]
An example of a discrete-time system with memory is an accumulator or summer.
y[n] = Σ_{k=−∞}^{n} x[k] = Σ_{k=−∞}^{n−1} x[k] + x[n] = y[n − 1] + x[n], or
y[n] = y[n − 1] + x[n]


Another example is a delay: y[n] = x[n − 1].

A capacitor is an example of a continuous-time system with memory:
v(t) = (1/C) ∫_{−∞}^{t} i(τ) dτ

Invertibility and Inverse System


A system is said to be invertible if distinct inputs lead to distinct outputs.

Examples of non-invertible systems


y[n] 0,
the system produces zero output sequence for any input sequence.
y(t) x2 (t),
in which case, one cannot determine the sign of the input from the knowledge of the output.
An encoder in a communication system is an example of an invertible system; that is, the input to the encoder must be exactly recoverable from the output.
Causality
A system is causal if the output at any time depends only on the values of the input at present time and in
the past. Such a system is often referred to as being non-anticipative, as the system output does not
anticipate future values of the input.
The RC circuit discussed above is causal, since the capacitor voltage responds only to the present and past values of the source voltage. The motion of a car is causal, since it does not anticipate future actions of the driver.
The following expressions describe systems that are not causal:
y[n] = x[n] − x[n + 1],
and
y(t) = x(t + 1).
All memoryless systems are causal, since the output responds only to the current value of input.
Example Determine the Causality of the two systems


(1) y[n] = x[−n]
(2) y(t) = x(t) cos(t + 1)
Solution
System (1) is not causal, since when n < 0, e.g. n = −4, we see that y[−4] = x[4], so that the output at this time depends on a future value of the input.
System (2) is causal. The output at any time equals the input at the same time multiplied by a number that varies with time.
Stability
A stable system is one in which small inputs lead to responses that do not diverge. More formally, if the input to a stable system is bounded, then the output must also be bounded and therefore cannot diverge.
Examples of stable systems and unstable systems

The two systems in the previous example are stable systems.


The accumulator y[n] = Σ_{k=−∞}^{n} x[k] is not stable, since the sum grows continuously even if x[n] is bounded.

Check the stability of the two systems
System 1: y(t) = t x(t)
System 2: y(t) = e^{x(t)}

System 1 is not stable, since a constant input x(t) = 1 yields y(t) = t, which is not bounded – no matter what finite constant we pick, |y(t)| will exceed that constant for some t.
System 2 is stable. Assume the input is bounded, |x(t)| < B, or −B < x(t) < B, for all t. We then see that y(t) is bounded: e^{−B} < y(t) < e^{B}.
Time Invariance
A system is time invariant if a time shift in the input signal results in an identical time shift in the output
signal. Mathematically, if the system output is y(t) when the input is x(t) , a time invariant system will have
an output of y( t  t0 ) when input is x( t  t0 ) .

Examples
 The system y(t) = sin[x(t)] is time invariant.
 The system y[n] = n x[n] is not time invariant. This can be demonstrated by using a counterexample. Consider the input signal x1[n] = δ[n], which yields y1[n] = 0. However, the input


x2[n] = δ[n − 1] yields the output y2[n] = n δ[n − 1] = δ[n − 1]. Thus, while x2[n] is the shifted version of x1[n], y2[n] is not the shifted version of y1[n]. A numerical version of this counterexample is sketched after this list.
 The system y(t) = x(2t) is not time invariant. To check, use a counterexample. Consider the signal x1(t) shown in the following figure (a); the resulting output y1(t) is depicted in Fig. (b). If the input is shifted by 2, that is, if we consider x2(t) = x1(t − 2) as shown in Fig. (c), we obtain the resulting output y2(t) = x2(2t) shown in Fig. (d). It is clearly seen that y2(t) ≠ y1(t − 2), so the system is not time invariant.
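The impulse counterexample for y[n] = n x[n] can be reproduced in a few lines (the finite index range below is an assumption made only so the signals can be stored as arrays):

    import numpy as np

    n = np.arange(-5, 6)
    delta = lambda k: (n == k).astype(float)   # shifted unit impulse delta[n - k]
    system = lambda x: n * x                   # the system y[n] = n*x[n]

    y1 = system(delta(0))          # input delta[n]     -> output is all zeros
    y2 = system(delta(1))          # input delta[n - 1] -> output is delta[n - 1]
    y1_shifted = np.roll(y1, 1)    # what a time-invariant system would have produced

    print(np.array_equal(y2, y1_shifted))      # False -> the system is not time invariant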

Linearity
The system is linear if
The response to x1(t) + x2(t) is y1(t) + y2(t) – additivity property
The response to a x1(t) is a y1(t) – scaling or homogeneity property.

The two properties defining a linear system can be combined into a single statement:
Continuous time: a x1(t) + b x2(t) → a y1(t) + b y2(t),
Discrete time: a x1[n] + b x2[n] → a y1[n] + b y2[n].

Here a and b are any complex constants.


Superposition property: If xk[n], k = 1, 2, 3, …, are a set of inputs with corresponding outputs yk[n], k = 1, 2, 3, …, then the response to a linear combination of these inputs, given by
x[n] = Σk ak xk[n] = a1 x1[n] + a2 x2[n] + a3 x3[n] + …
is
y[n] = Σk ak yk[n] = a1 y1[n] + a2 y2[n] + a3 y3[n] + …

which holds for linear systems in both continuous and discrete time.
For a linear system, zero input leads to zero output.
1.2.4 Convolution
Linear time invariant (LTI) systems are good models for many real-life systems, and they have properties
that lead to a very powerful and effective theory for analyzing their behaviour. In the followings, we want
to study LTI systems through its characteristic function, called the impulse response.
Convolution in Discrete Time or Convolution Sum
To begin with, let us consider discrete-time signals. Denote by h[n] the "impulse response" of an LTI system S. The impulse response, as it is named, is the response of the system to a unit impulse input. Recall the definition of a unit impulse:
δ[n] = { 1, n = 0; 0, n ≠ 0 }

We have shown that
x[n] δ[n − n0] = x[n0] δ[n − n0]
Using this fact, we get the following equalities:
x[n] δ[n] = x[0] δ[n]            (n0 = 0)
x[n] δ[n − 1] = x[1] δ[n − 1]      (n0 = 1)
x[n] δ[n − 2] = x[2] δ[n − 2]      (n0 = 2)
⋮
Adding these equalities gives
x[n] ( Σ_{k=−∞}^{∞} δ[n − k] ) = Σ_{k=−∞}^{∞} x[k] δ[n − k]

The term on the left-hand side is
x[n] Σ_{k=−∞}^{∞} δ[n − k] = x[n],
because Σ_{k=−∞}^{∞} δ[n − k] = 1 for all n. The sum on the right-hand side is
Σ_{k=−∞}^{∞} x[k] δ[n − k]
Therefore, equating the left-hand side and right-hand side yields
x[n] = Σ_{k=−∞}^{∞} x[k] δ[n − k]

In other words, for any signal x[n], we can always express it as a sum of impulses!
Next, suppose we know that the impulse response of an LTI system is h[n]. We want to determine the output
y[n]. To do so, we first express x[n] as a sum of impulses

x[n] = Σ_{k=−∞}^{∞} x[k] δ[n − k]
For each impulse δ[n − k], we can determine its response, because for an LTI system
δ[n − k] → h[n − k]
Consequently, we have
x[n] = Σ_{k=−∞}^{∞} x[k] δ[n − k]   →   y[n] = Σ_{k=−∞}^{∞} x[k] h[n − k]
This equation,
y[n] = Σ_{k=−∞}^{∞} x[k] h[n − k],
is known as the convolution sum, or convolution in discrete time.


Definition and Properties of Convolution Sum
Given a signal x[n] and the impulse response of an LTI system h[n], the convolution between x[n] and h[n] is defined as
y[n] = Σ_{k=−∞}^{∞} x[k] h[n − k]
We denote convolution as y[n] = x[n] * h[n].

Equivalent form: Letting m = n − k, we can show that
y[n] = Σ_{k=−∞}^{∞} x[k] h[n − k] = Σ_{m=−∞}^{∞} x[n − m] h[m] = Σ_{k=−∞}^{∞} x[n − k] h[k]

The following “standard" properties can be proved easily:


1. Commutative: x[n] * h[n] = h[n] * x[n]
2. Associative: x[n] * (h1[n] * h2[n]) = (x[n] * h1[n]) * h2[n]
3. Distributive: x[n] * (h1[n] + h2[n]) = (x[n] * h1[n]) + (x[n] * h2[n])

To evaluate convolution, there are three basic steps:


1. Flip
2. Shift
3. Multiply and Add
Example 1. Consider the signal x[n] and the impulse response h[n] shown below.


Let's compute the output y[n] one sample at a time. First, consider y[0]:
y[0] = Σ_{k=−∞}^{∞} x[k] h[0 − k] = Σ_{k=−∞}^{∞} x[k] h[−k] = 1
Note that h[−k] is the flipped version of h[k], and Σk x[k] h[−k] is the multiply-add between x[k] and h[−k].
To calculate y[1], we flip h[k] to get h[−k], shift h[−k] to get h[1 − k], and multiply-add to get
y[1] = Σ_{k=−∞}^{∞} x[k] h[1 − k]

To calculate a discrete linear convolution directly:
Convolve the two sequences x[n] = {a, b, c} and h[n] = {e, f, g}.
Convolved output = [ea, eb + fa, ec + fb + ga, fc + gb, gc]

Note: if the two sequences have m and n samples respectively, then the resulting convolved sequence will have (m + n − 1) samples.
Example: convolve the two sequences x[n] = {1, 2, 3} and h[n] = {−1, 2, 2}.


Convolved output: y[n] = [−1, −2 + 2, −3 + 4 + 2, 6 + 4, 6] = [−1, 0, 3, 10, 6]
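The same answer is produced by numpy's built-in discrete linear convolution, which performs the flip, shift, and multiply-add steps internally:

    import numpy as np

    x = np.array([1, 2, 3])
    h = np.array([-1, 2, 2])

    y = np.convolve(x, h)      # linear convolution sum
    print(y)                   # [-1  0  3 10  6], with m + n - 1 = 5 samples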

Continuous-time Convolution/Convolutional Integral


Thus far we have been focusing on the discrete-time case. The continuous-time case, in fact, is analogous to the discrete-time case. For continuous-time signals, the signal decomposition is
x(t) = ∫_{−∞}^{∞} x(τ) δ(t − τ) dτ
The result is obtained by chopping up the signal x(t) into sections of width Δ and taking the sum.
Recall the definition of the unit pulse δΔ(t); we can define a signal x̂(t) as a linear combination of delayed pulses of height x(kΔ):
x̂(t) = Σ_{k=−∞}^{∞} x(kΔ) δΔ(t − kΔ) Δ
Taking the limit as Δ → 0, we obtain the integral of the above equation, in which, when Δ → 0,
 the summation approaches an integral
 kΔ → τ and x(kΔ) → x(τ)
 Δ → dτ
 δΔ(t − kΔ) → δ(t − τ)
By substituting the above values, we can express x(t) as a linear combination of continuous impulses:
x(t) = ∫_{−∞}^{∞} x(τ) δ(t − τ) dτ
and consequently, the continuous-time convolution is defined as
y(t) = ∫_{−∞}^{∞} x(τ) h(t − τ) dτ
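Numerically, the convolution integral can be approximated exactly in the spirit of this derivation: the signals are chopped into slices of width dt and the convolution sum of the samples is scaled by dt. The sketch below uses an assumed rectangular-pulse input and an assumed exponential impulse response, for which the integral can also be done by hand:

    import numpy as np

    dt = 0.001
    t  = np.arange(0, 5, dt)

    x = ((t >= 0) & (t < 1)).astype(float)   # assumed input: unit-width rectangular pulse
    h = np.exp(-t)                           # assumed impulse response: e^{-t} u(t)

    y = np.convolve(x, h)[:len(t)] * dt      # Riemann-sum approximation of the integral

    # Analytically, y(t) = 1 - e^{-t} for 0 <= t < 1; compare at t = 0.5:
    print(y[int(0.5 / dt)], 1 - np.exp(-0.5))   # both ~0.3935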


1.2.5 Correlation
Correlation is a measure of similarity between two signals. The general formula for correlation is
∫_{−∞}^{∞} x1(t) x2(t − τ) dt

There are two types of correlation:


 Auto correlation
 Cross correlation
Auto Correlation Function
It is defined as correlation of a signal with itself. Auto correlation function is a measure of similarity
between a signal & its time delayed version. It is represented with R(τ).
Consider a signal x(t). The auto-correlation function of x(t) with its time-delayed version is given by


R11(τ) = R(τ) = ∫_{−∞}^{∞} x(t) x(t − τ) dt = ∫_{−∞}^{∞} x(t) x(t + τ) dt
where τ = the searching, scanning, or delay parameter.
If the signal is complex, then the auto-correlation function is given by
R11(τ) = R(τ) = ∫_{−∞}^{∞} x(t) x*(t − τ) dt = ∫_{−∞}^{∞} x(t + τ) x*(t) dt

Properties of Auto-correlation Function of Energy Signal


 Auto-correlation exhibits conjugate symmetry, i.e., R(τ) = R*(−τ)
 The auto-correlation function of an energy signal at the origin, i.e., at τ = 0, is equal to the total energy of that signal, which is given as
R(0) = E = ∫_{−∞}^{∞} |x(t)|² dt
 The auto-correlation function is maximum at τ = 0, i.e., |R(τ)| ≤ R(0) ∀ τ
 R(τ) = x(τ) ∗ x(−τ)
 The auto-correlation function and the energy spectral density are Fourier transform pairs, i.e., F.T.[R(τ)] = ψ(ω), where
ψ(ω) = ∫_{−∞}^{∞} R(τ) e^{−jωτ} dτ

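The first three properties can be checked numerically. In the sketch below, the test signal x(t) = e^{−|t|} and the sampling grid are assumptions chosen only for illustration:

    import numpy as np

    dt = 0.001
    t  = np.arange(-10, 10, dt)
    x  = np.exp(-np.abs(t))                      # assumed energy signal

    R = np.correlate(x, x, mode='full') * dt     # R(tau) for lags from -T to T
    E = np.sum(x**2) * dt                        # total energy (analytically 1)

    print(np.isclose(R[len(x) - 1], E))          # R(0) equals the energy -> True
    print(np.argmax(R) == len(x) - 1)            # maximum occurs at tau = 0 -> True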

Auto-correlation Function of Power Signal


The auto-correlation function of a periodic power signal with period T is given by
R(τ) = lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} x(t) x*(t − τ) dt
 The auto-correlation of a power signal exhibits conjugate symmetry, i.e., R(τ) = R*(−τ)
 The auto-correlation function of a power signal at τ = 0 (at the origin) is equal to the total power of that signal, i.e., R(0) = ρ
 The auto-correlation function of a power signal is maximum at τ = 0, i.e., |R(τ)| ≤ R(0) ∀ τ
 R(τ) = x(τ) ∗ x(−τ)
 The auto-correlation function and the power spectral density are Fourier transform pairs, i.e., F.T.[R(τ)] = S(ω), where
S(ω) = ∫_{−∞}^{∞} R(τ) e^{−jωτ} dτ

Cross Correlation Function


Cross correlation is the measure of similarity between two different signals.
Consider two signals x1(t) and x2(t). The cross correlation of these two signals R12(τ) is given by
R_{12}(\tau) = \int_{-\infty}^{\infty} x_1(t)\, x_2(t-\tau)\, dt = \int_{-\infty}^{\infty} x_1(t+\tau)\, x_2(t)\, dt
If the signals are complex, then

R_{12}(\tau) = \int_{-\infty}^{\infty} x_1(t)\, x_2^*(t-\tau)\, dt = \int_{-\infty}^{\infty} x_1(t+\tau)\, x_2^*(t)\, dt

R_{21}(\tau) = \int_{-\infty}^{\infty} x_2(t)\, x_1^*(t-\tau)\, dt = \int_{-\infty}^{\infty} x_2(t+\tau)\, x_1^*(t)\, dt
Properties of Cross Correlation Function of Energy and Power Signals

• Cross-correlation exhibits conjugate symmetry, i.e. R_{12}(\tau) = R_{21}^*(-\tau).
• Cross-correlation is not commutative like convolution, i.e. R_{12}(\tau) \ne R_{21}(\tau).
• If R_{12}(0) = 0, i.e. if \int_{-\infty}^{\infty} x_1(t)\, x_2^*(t)\, dt = 0, then the two signals are said to be orthogonal.
• For power signals, if \lim_{T \to \infty} \frac{1}{T} \int_{-T/2}^{T/2} x_1(t)\, x_2^*(t)\, dt = 0, then the two signals are said to be orthogonal.
• The cross-correlation function corresponds to multiplying the spectrum of one signal by the complex conjugate of the spectrum of the other, i.e. R_{12}(\tau) \leftrightarrow X_1(\omega)\, X_2^*(\omega). This is also called the correlation theorem.
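For sampled, periodic sequences the correlation theorem takes a discrete (circular) form: the DFT of the circular cross-correlation equals X1 multiplied by the conjugate of X2. A sketch under that assumption, with arbitrary random test sequences:

import numpy as np

rng = np.random.default_rng(0)
x1 = rng.standard_normal(8)           # assumed test sequences
x2 = rng.standard_normal(8)
N = len(x1)

# Circular cross-correlation R12[m] = sum over n of x1[n] * conj(x2[(n - m) mod N])
R12 = np.array([np.sum(x1 * np.conj(np.roll(x2, m))) for m in range(N)])

# Correlation theorem (discrete form): DFT{R12} = X1 * conj(X2)
lhs = np.fft.fft(R12)
rhs = np.fft.fft(x1) * np.conj(np.fft.fft(x2))
print(np.allclose(lhs, rhs))          # True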

1.3 ANALOGY BETWEEN VECTORS AND SIGNALS


There is a perfect analogy between vectors and signals.
1.3.1 Vector
A vector contains magnitude and direction. The name of a vector is denoted by bold face type and its magnitude by light face type.
Example: V is a vector with magnitude V. Consider two vectors V1 and V2 as shown in the following diagram. Let the component of V1 along V2 be C12V2. The component of a vector V1 along the vector V2 can be obtained by drawing a perpendicular from the end of V1 onto the vector V2, as shown in the diagram:

The vector V1 can be expressed in terms of vector V2


V1= C12V2 + Ve
Where Ve is the error vector.

But this is not the only way of expressing vector V1 in terms of V2. The alternate possibilities are:

V1 = C1V2 + Ve1

V1 = C2V2 + Ve2

The error vector is smallest when the component C12V2 is the projection of V1 onto V2. If C12 = 0, then the two vectors are said to be orthogonal.
Dot Product of Two Vectors
V_1 \cdot V_2 = V_1 V_2 \cos\theta, \qquad \theta = \text{angle between } V_1 \text{ and } V_2

V_1 \cdot V_2 = V_2 \cdot V_1

The component of V_1 along V_2 = V_1 \cos\theta = \frac{V_1 \cdot V_2}{V_2}

From the diagram, the component of V_1 along V_2 = C_{12} V_2

\frac{V_1 \cdot V_2}{V_2} = C_{12} V_2

\Rightarrow C_{12} = \frac{V_1 \cdot V_2}{V_2^2}
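The same computation in NumPy, for two assumed example vectors, gives C12, the component of V1 along V2, and the error vector:

import numpy as np

V1 = np.array([3.0, 4.0])               # assumed example vectors
V2 = np.array([5.0, 0.0])

C12 = np.dot(V1, V2) / np.dot(V2, V2)   # C12 = (V1 . V2) / V2^2
component = C12 * V2                    # component of V1 along V2
Ve = V1 - component                     # error vector

print(C12)                # 0.6
print(component, Ve)      # [3. 0.] [0. 4.]
print(np.dot(Ve, V2))     # 0.0 -> the error vector is perpendicular to V2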

1.3.2 Signal
The concept of orthogonality can be applied to signals. Let us consider two signals f_1(t) and f_2(t). Similar to vectors, you can approximate f_1(t) in terms of f_2(t) as

f_1(t) = C_{12} f_2(t) + f_e(t) \quad (t_1 < t < t_2)

\Rightarrow f_e(t) = f_1(t) - C_{12} f_2(t)

One possible way of minimizing the error is to integrate it over the interval t_1 to t_2:

\frac{1}{t_2 - t_1} \int_{t_1}^{t_2} f_e(t)\, dt = \frac{1}{t_2 - t_1} \int_{t_1}^{t_2} \left[ f_1(t) - C_{12} f_2(t) \right] dt

However, this step does not reduce the error to an appreciable extent, since positive and negative errors cancel. This can be corrected by taking the square of the error function:

\varepsilon = \frac{1}{t_2 - t_1} \int_{t_1}^{t_2} f_e^2(t)\, dt = \frac{1}{t_2 - t_1} \int_{t_1}^{t_2} \left[ f_1(t) - C_{12} f_2(t) \right]^2 dt

Where ε is the mean square value of the error signal. To find the value of C_{12} which minimizes the error, we set

\frac{d\varepsilon}{dC_{12}} = 0

\Rightarrow \frac{d}{dC_{12}} \left[ \frac{1}{t_2 - t_1} \int_{t_1}^{t_2} \left[ f_1(t) - C_{12} f_2(t) \right]^2 dt \right] = 0

\Rightarrow \frac{1}{t_2 - t_1} \int_{t_1}^{t_2} \left[ \frac{d}{dC_{12}} f_1^2(t) - \frac{d}{dC_{12}} 2 f_1(t)\, C_{12} f_2(t) + \frac{d}{dC_{12}} C_{12}^2 f_2^2(t) \right] dt = 0

The derivative of the terms which do not contain C_{12} is zero.

\Rightarrow \frac{1}{t_2 - t_1} \int_{t_1}^{t_2} \left[ -2 f_1(t) f_2(t) + 2 C_{12} f_2^2(t) \right] dt = 0

\Rightarrow C_{12} = \frac{\int_{t_1}^{t_2} f_1(t) f_2(t)\, dt}{\int_{t_1}^{t_2} f_2^2(t)\, dt}

If the \int_{t_1}^{t_2} f_1(t) f_2(t)\, dt component is zero, then the two signals are said to be orthogonal.

Put C_{12} = 0 to get the condition for orthogonality:

0 = \frac{\int_{t_1}^{t_2} f_1(t) f_2(t)\, dt}{\int_{t_1}^{t_2} f_2^2(t)\, dt}

\Rightarrow \int_{t_1}^{t_2} f_1(t) f_2(t)\, dt = 0 is the orthogonality condition.
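A classic numerical illustration of this formula: approximate a square wave f1(t) by f2(t) = sin t over (0, 2π). Evaluating C12 on a grid gives C12 ≈ 4/π. The square-wave example below is an assumed illustration, not taken from the text:

import numpy as np

dt = 1e-4
t = np.arange(0, 2*np.pi, dt)

f1 = np.where(t < np.pi, 1.0, -1.0)   # square wave over one period (assumed)
f2 = np.sin(t)

C12 = (np.sum(f1 * f2) * dt) / (np.sum(f2**2) * dt)
print(C12, 4/np.pi)                   # both ~ 1.2732

# Orthogonality check: the same square wave and cos(t) are orthogonal on (0, 2*pi)
print(np.sum(f1 * np.cos(t)) * dt)    # ~ 0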

1.3.3 Orthogonal Vector Space


A complete set of orthogonal vectors is referred to as orthogonal vector space. Consider a three-dimensional
vector space as shown below:

Consider a vector A at a point (X1, Y1, Z1). Consider three unit vectors (VX, VY, VZ) in the directions of the X, Y, Z axes respectively. Since these unit vectors are mutually orthogonal, they satisfy

V_X \cdot V_X = V_Y \cdot V_Y = V_Z \cdot V_Z = 1

V_X \cdot V_Y = V_Y \cdot V_Z = V_Z \cdot V_X = 0

You can write the above conditions as

V_a \cdot V_b = \begin{cases} 1 & a = b \\ 0 & a \ne b \end{cases}

The vector A can be represented in terms of its components and unit vectors as

A = X_1 V_X + Y_1 V_Y + Z_1 V_Z \quad \ldots\ldots (1)

Any vector in this three-dimensional space can be represented in terms of these three unit vectors only.
If you consider an n-dimensional space, then any vector A in that space can be represented as

A = X_1 V_X + Y_1 V_Y + Z_1 V_Z + \cdots + N_1 V_N \quad \ldots\ldots (2)

As the magnitude of the unit vectors is unity, for any vector A:
The component of A along the X axis = A \cdot V_X
The component of A along the Y axis = A \cdot V_Y
The component of A along the Z axis = A \cdot V_Z
Similarly, for an n-dimensional space, the component of A along some G axis = A \cdot V_G \quad \ldots\ldots (3)

Substituting equation (2) in equation (3):

C_G = (X_1 V_X + Y_1 V_Y + Z_1 V_Z + \cdots + G_1 V_G + \cdots + N_1 V_N) \cdot V_G
    = X_1 V_X \cdot V_G + Y_1 V_Y \cdot V_G + Z_1 V_Z \cdot V_G + \cdots + G_1 V_G \cdot V_G + \cdots + N_1 V_N \cdot V_G
    = G_1 \quad \text{since } V_G \cdot V_G = 1

If V_G \cdot V_G \ne 1, i.e. V_G \cdot V_G = k, then

A \cdot V_G = G_1 V_G \cdot V_G = G_1 k

\Rightarrow G_1 = \frac{A \cdot V_G}{k}
1.3.4 Orthogonal Signal Space
Let us consider a set of n mutually orthogonal functions x1(t), x2(t)... xn(t) over the interval t1 to t2. As these
functions are orthogonal to each other, any two signals xj(t), xk(t) have to satisfy the orthogonality
condition. i.e.
\int_{t_1}^{t_2} x_j(t)\, x_k(t)\, dt = 0 \quad \text{where } j \ne k

Let \int_{t_1}^{t_2} x_k^2(t)\, dt = K_k

Let f(t) be a function; it can be approximated with this orthogonal signal space by adding the components along the mutually orthogonal signals, i.e.

f(t) = C_1 x_1(t) + C_2 x_2(t) + \cdots + C_n x_n(t) + f_e(t)
     = \sum_{r=1}^{n} C_r x_r(t) + f_e(t)

f_e(t) = f(t) - \sum_{r=1}^{n} C_r x_r(t)

Mean square error \ \varepsilon = \frac{1}{t_2 - t_1} \int_{t_1}^{t_2} f_e^2(t)\, dt

 = \frac{1}{t_2 - t_1} \int_{t_1}^{t_2} \left[ f(t) - \sum_{r=1}^{n} C_r x_r(t) \right]^2 dt

The component which minimizes the mean square error can be found from

\frac{d\varepsilon}{dC_1} = \frac{d\varepsilon}{dC_2} = \cdots = \frac{d\varepsilon}{dC_k} = 0

Let us consider \frac{d\varepsilon}{dC_k} = 0:

\frac{d}{dC_k} \left[ \frac{1}{t_2 - t_1} \int_{t_1}^{t_2} \left[ f(t) - \sum_{r=1}^{n} C_r x_r(t) \right]^2 dt \right] = 0

All terms that do not contain C_k are zero; i.e., in the summation only the r = k term survives.

\int_{t_1}^{t_2} -2 f(t)\, x_k(t)\, dt + 2 C_k \int_{t_1}^{t_2} x_k^2(t)\, dt = 0

\Rightarrow C_k = \frac{\int_{t_1}^{t_2} f(t)\, x_k(t)\, dt}{\int_{t_1}^{t_2} x_k^2(t)\, dt}

\Rightarrow \int_{t_1}^{t_2} f(t)\, x_k(t)\, dt = C_k K_k
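The formula for C_k can be applied term by term, which is exactly what orthogonality buys us. A sketch that reuses the assumed square wave and the orthogonal set {sin t, sin 2t, sin 3t} on (0, 2π):

import numpy as np

dt = 1e-4
t = np.arange(0, 2*np.pi, dt)
f = np.where(t < np.pi, 1.0, -1.0)               # function to approximate (assumed)

basis = [np.sin((k + 1) * t) for k in range(3)]  # x1(t), x2(t), x3(t)

C = []
for xk in basis:
    Kk = np.sum(xk**2) * dt                      # K_k = integral of x_k^2
    Ck = (np.sum(f * xk) * dt) / Kk              # C_k from the formula above
    C.append(Ck)

print(np.round(C, 4))    # ~ [1.2732, 0.0, 0.4244], i.e. 4/pi, 0, 4/(3*pi)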

1.3.5 Mean Square Error


The average of the square of the error function f_e(t) is called the mean square error. It is denoted by ε (epsilon).

\varepsilon = \frac{1}{t_2 - t_1} \int_{t_1}^{t_2} f_e^2(t)\, dt

 = \frac{1}{t_2 - t_1} \int_{t_1}^{t_2} \left[ f(t) - \sum_{r=1}^{n} C_r x_r(t) \right]^2 dt

 = \frac{1}{t_2 - t_1} \left[ \int_{t_1}^{t_2} f^2(t)\, dt + \sum_{r=1}^{n} C_r^2 \int_{t_1}^{t_2} x_r^2(t)\, dt - 2 \sum_{r=1}^{n} C_r \int_{t_1}^{t_2} x_r(t)\, f(t)\, dt \right]

You know that C_r^2 \int_{t_1}^{t_2} x_r^2(t)\, dt = C_r \int_{t_1}^{t_2} x_r(t)\, f(t)\, dt = C_r^2 K_r

\therefore \varepsilon = \frac{1}{t_2 - t_1} \left[ \int_{t_1}^{t_2} f^2(t)\, dt + \sum_{r=1}^{n} C_r^2 K_r - 2 \sum_{r=1}^{n} C_r^2 K_r \right]

 = \frac{1}{t_2 - t_1} \left[ \int_{t_1}^{t_2} f^2(t)\, dt - \sum_{r=1}^{n} C_r^2 K_r \right]

 = \frac{1}{t_2 - t_1} \left[ \int_{t_1}^{t_2} f^2(t)\, dt - \left( C_1^2 K_1 + C_2^2 K_2 + \cdots + C_n^2 K_n \right) \right]

The above equation is used to evaluate the mean square error.
1.3.6 Orthogonality in Complex Functions
If f_1(t) and f_2(t) are two complex functions, then f_1(t) can be expressed in terms of f_2(t) as f_1(t) \approx C_{12} f_2(t) with negligible error,

where C_{12} = \frac{\int_{t_1}^{t_2} f_1(t)\, f_2^*(t)\, dt}{\int_{t_1}^{t_2} |f_2(t)|^2\, dt}

and f_2^*(t) = complex conjugate of f_2(t).

If f_1(t) and f_2(t) are orthogonal, then C_{12} = 0:

\frac{\int_{t_1}^{t_2} f_1(t)\, f_2^*(t)\, dt}{\int_{t_1}^{t_2} |f_2(t)|^2\, dt} = 0

\Rightarrow \int_{t_1}^{t_2} f_1(t)\, f_2^*(t)\, dt = 0

The above equation represents the orthogonality condition for complex functions.
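For instance, harmonically related complex exponentials (used heavily in the next section) satisfy this condition over one period. A quick numerical check with assumed values m = 1, n = 2 and T = 2π:

import numpy as np

dt = 1e-4
T = 2 * np.pi                      # one period, so w0 = 1
t = np.arange(0, T, dt)

f1 = np.exp(1j * 1 * t)            # e^{j 1 w0 t}
f2 = np.exp(1j * 2 * t)            # e^{j 2 w0 t}

inner = np.sum(f1 * np.conj(f2)) * dt
print(abs(inner))                            # ~ 0  -> orthogonal, so C12 = 0
print(abs(np.sum(f1 * np.conj(f1)) * dt))    # ~ 2*pi (= T) when the functions coincide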

1.4 FOURIER SERIES


By 1807, Fourier had completed a work showing that series of harmonically related sinusoids were useful in representing the temperature distribution of a body. He claimed that any periodic signal could be represented by such a series, the Fourier series. He also obtained a representation for aperiodic signals as weighted integrals of sinusoids, the Fourier transform.
It is advantageous in the study of LTI systems to represent signals as linear combinations of basic
signals that possess the following two properties:
• The set of basic signals can be used to construct a broad and useful class of signals.
• The response of an LTI system to each signal should be simple enough in structure to provide us with a convenient representation for the response of the system to any signal constructed as a linear combination of the basic signals.
Both of these properties are provided by Fourier analysis.
The Fourier series expresses a signal as an infinite sum of sines and cosines, or of complex exponentials. The Fourier series relies on the orthogonality condition.
1.4.1 Fourier Series Representation of Continuous Time Periodic Signals
A signal is said to be periodic if it satisfies the condition x (t) = x (t + T) or x (n) = x (n + N).
Where T = fundamental time period,
ω0= fundamental frequency = 2π/T
There are two basic periodic signals:
• x(t) = cos ω0t, x(t) = sin ω0t (sinusoidal), and
• x(t) = e^{jω0t} (complex exponential)

These two signals are periodic with period T = 2π/ω0.
A set of harmonically related complex exponentials can be represented as {ϕk(t)}:

\phi_k(t) = \{ e^{jk\omega_0 t} \} = \{ e^{jk(2\pi/T)t} \}, \quad k = 0, \pm 1, \pm 2, \ldots \quad \ldots (1)

All these signals are periodic with period T.
According to the orthogonal signal space approximation, the approximation of a function x(t) with these mutually orthogonal functions is given by

x(t) = \sum_{k=-\infty}^{\infty} a_k e^{jk\omega_0 t}

where a_k = Fourier coefficient = coefficient of approximation.


Trigonometric Fourier Series (TFS)

sin(nω0t) and cos(nω0t) are orthogonal over the interval (t_0, t_0 + \frac{2\pi}{\omega_0}). So sin(ω0t), sin(2ω0t), ... forms an orthogonal set. This set is not complete without cos(nω0t), because the cosine set is also orthogonal to the sine set. So, to complete this set we must include both cosine and sine terms. Now the complete orthogonal set contains all cosine and sine terms, i.e. sin(nω0t) and cos(nω0t), where n = 0, 1, 2, ...

∴ Any function x(t) in the interval (t_0, t_0 + \frac{2\pi}{\omega_0}) can be represented as

x(t) = a_0 \cos(0\,\omega_0 t) + a_1 \cos(1\,\omega_0 t) + \cdots + a_n \cos(n\omega_0 t) + \cdots
     + b_0 \sin(0\,\omega_0 t) + b_1 \sin(1\,\omega_0 t) + \cdots + b_n \sin(n\omega_0 t) + \cdots
   = a_0 + a_1 \cos(\omega_0 t) + \cdots + a_n \cos(n\omega_0 t) + b_1 \sin(\omega_0 t) + \cdots + b_n \sin(n\omega_0 t)

\therefore x(t) = a_0 + \sum_{n=1}^{\infty} \left[ a_n \cos(n\omega_0 t) + b_n \sin(n\omega_0 t) \right] \quad (t_0 \le t \le t_0 + T)

where

a_0 = \frac{\int_{t_0}^{t_0+T} x(t) \cdot 1\, dt}{\int_{t_0}^{t_0+T} 1^2\, dt} = \frac{1}{T} \int_{t_0}^{t_0+T} x(t)\, dt

a_n = \frac{\int_{t_0}^{t_0+T} x(t) \cos(n\omega_0 t)\, dt}{\int_{t_0}^{t_0+T} \cos^2(n\omega_0 t)\, dt}

b_n = \frac{\int_{t_0}^{t_0+T} x(t) \sin(n\omega_0 t)\, dt}{\int_{t_0}^{t_0+T} \sin^2(n\omega_0 t)\, dt}

Here \int_{t_0}^{t_0+T} \sin^2(n\omega_0 t)\, dt = \int_{t_0}^{t_0+T} \cos^2(n\omega_0 t)\, dt = \frac{T}{2}

\therefore a_n = \frac{2}{T} \int_{t_0}^{t_0+T} x(t) \cos(n\omega_0 t)\, dt

b_n = \frac{2}{T} \int_{t_0}^{t_0+T} x(t) \sin(n\omega_0 t)\, dt
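These coefficient formulas can be evaluated numerically for a concrete periodic signal. A sketch with an assumed unit-amplitude square wave of period T = 2π; for this signal a0 = an = 0 and bn = 4/(nπ) for odd n:

import numpy as np

T = 2 * np.pi
w0 = 2 * np.pi / T
dt = 1e-4
t = np.arange(0, T, dt)
x = np.where(t < np.pi, 1.0, -1.0)          # square wave, one period (assumed)

a0 = (1 / T) * np.sum(x) * dt
print('a0 =', round(a0, 4))                 # ~ 0
for n in range(1, 6):
    an = (2 / T) * np.sum(x * np.cos(n * w0 * t)) * dt
    bn = (2 / T) * np.sum(x * np.sin(n * w0 * t)) * dt
    print(n, round(an, 4), round(bn, 4))    # bn ~ 4/(n*pi) for odd n, else ~ 0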
Exponential Fourier Series (EFS)

Consider the set of complex exponential functions \{ e^{jn\omega_0 t} \}, n = 0, \pm 1, \pm 2, \ldots, which is orthogonal over the interval (t_0, t_0 + T), where T = \frac{2\pi}{\omega_0}. This is a complete set, so it is possible to represent any function x(t) as shown below:

x(t) = C_0 + C_1 e^{j\omega_0 t} + \cdots + C_n e^{jn\omega_0 t} + \cdots + C_{-1} e^{-j\omega_0 t} + C_{-2} e^{-j2\omega_0 t} + \cdots + C_{-n} e^{-jn\omega_0 t} + \cdots

\therefore x(t) = \sum_{n=-\infty}^{\infty} C_n e^{jn\omega_0 t} \quad (t_0 \le t \le t_0 + T)

The above equation represents the exponential Fourier series representation of a signal x(t) over the interval (t_0, t_0 + T). The Fourier coefficient is given as

C_n = \frac{\int_{t_0}^{t_0+T} x(t)\, \left( e^{jn\omega_0 t} \right)^* dt}{\int_{t_0}^{t_0+T} e^{jn\omega_0 t}\, \left( e^{jn\omega_0 t} \right)^* dt}

\Rightarrow C_n = \frac{1}{T} \int_{t_0}^{t_0+T} x(t)\, e^{-jn\omega_0 t}\, dt
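The Cn formula lends itself to the same kind of numerical check. For the assumed square wave used above, Cn ≈ -2j/(nπ) for odd n and ≈ 0 otherwise, consistent with Cn = (an - jbn)/2 from the relation given in the next subsection:

import numpy as np

T = 2 * np.pi
w0 = 2 * np.pi / T
dt = 1e-4
t = np.arange(0, T, dt)
x = np.where(t < np.pi, 1.0, -1.0)          # same assumed square wave

for n in range(-3, 4):
    Cn = (1 / T) * np.sum(x * np.exp(-1j * n * w0 * t)) * dt
    print(n, np.round(Cn, 4))   # ~ -2j/(n*pi) for odd n, ~ 0 for even n and n = 0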
Relation Between Trigonometric and Exponential Fourier Series


Consider a periodic signal x(t); its TFS and EFS representations are, respectively,

x(t) = a_0 + \sum_{n=1}^{\infty} \left[ a_n \cos(n\omega_0 t) + b_n \sin(n\omega_0 t) \right] \quad (t_0 \le t \le t_0 + T)

x(t) = \sum_{n=-\infty}^{\infty} C_n e^{jn\omega_0 t} \quad (t_0 \le t \le t_0 + T)

The coefficients are related by

a_n = C_n + C_{-n}

b_n = j (C_n - C_{-n})

C_0 = a_0

C_n = \frac{a_n - j b_n}{2}

C_{-n} = \frac{a_n + j b_n}{2}
Convergence of the Fourier Series
1. Over any period, x(t) must be absolutely integrable, that is

\int_{T} |x(t)|\, dt < \infty

This guarantees that each coefficient a_k will be finite. A periodic function that violates the first Dirichlet condition is x(t) = \frac{1}{t}, \ 0 < t \le 1.

2. In any finite interval of time, x(t) is of bounded variation; that is, there are no more than a finite
number of maxima and minima during a single period of the signal.
An example of a function that meets Condition 1 but not Condition 2 is

x(t) = \sin\!\left( \frac{2\pi}{t} \right), \quad 0 < t \le 1
3. In any finite interval of time, there are only a finite number of discontinuities. Furthermore, each of
these discontinuities is finite.
An example that violates this condition is a function defined as
x(t) = 1, \quad 0 \le t < 4,
x(t) = \tfrac{1}{2}, \quad 4 \le t < 6,
x(t) = \tfrac{1}{4}, \quad 6 \le t < 7,
x(t) = \tfrac{1}{8}, \quad 7 \le t < 7.5, \ \text{etc.}

4. One class of periodic signals that are representable through Fourier series is those signals which
have finite energy over a period,
\int_{T} |x(t)|^2\, dt < \infty

Properties of the Continuous-Time Fourier Series


Let x_1(t) and x_2(t) be two periodic signals with period T and with Fourier series coefficients C_n and D_n, respectively.

Linearity property
The linearity property states that, if x_1(t) \leftrightarrow C_n and x_2(t) \leftrightarrow D_n, then a x_1(t) + b x_2(t) \leftrightarrow a C_n + b D_n.
Proof:

FS[a x_1(t) + b x_2(t)] = \frac{1}{T} \int_{t_0}^{t_0+T} [a x_1(t) + b x_2(t)]\, e^{-jn\omega_0 t}\, dt

 = a\, \frac{1}{T} \int_{t_0}^{t_0+T} x_1(t)\, e^{-jn\omega_0 t}\, dt + b\, \frac{1}{T} \int_{t_0}^{t_0+T} x_2(t)\, e^{-jn\omega_0 t}\, dt

 = a C_n + b D_n
Time Shifting property
The time shifting property states that, if x(t) \leftrightarrow C_n, then x(t - t_0) \leftrightarrow e^{-jn\omega_0 t_0} C_n.
Proof:

x(t) = \sum_{n=-\infty}^{\infty} C_n e^{jn\omega_0 t}

x(t - t_0) = \sum_{n=-\infty}^{\infty} C_n e^{jn\omega_0 (t - t_0)}
 = \sum_{n=-\infty}^{\infty} \left[ C_n e^{-jn\omega_0 t_0} \right] e^{jn\omega_0 t}
 = FS^{-1}\left[ C_n e^{-jn\omega_0 t_0} \right]
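The property is easy to verify numerically: compute Cn for a sampled periodic signal and for a shifted copy, and compare their ratio with e^{-jnω0 t0}. A sketch under assumed values (T = 2π, t0 = 0.5, and a simple two-harmonic test signal):

import numpy as np

T = 2 * np.pi
w0 = 2 * np.pi / T
t0 = 0.5
dt = 1e-4
t = np.arange(0, T, dt)

x = lambda u: 1 + np.cos(w0 * u) + 0.5 * np.sin(3 * w0 * u)   # assumed test signal

def coeff(sig, n):
    # C_n = (1/T) * integral of sig(t) e^{-j n w0 t} dt over one period (Riemann sum)
    return (1 / T) * np.sum(sig * np.exp(-1j * n * w0 * t)) * dt

for n in (1, 3):
    Cn = coeff(x(t), n)
    Dn = coeff(x(t - t0), n)      # coefficients of the shifted signal
    print(n, np.round(Dn / Cn, 4), np.round(np.exp(-1j * n * w0 * t0), 4))
    # the two printed values agree up to small numerical error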

Time Reversal property


The time reversal property states that, if x(t) \leftrightarrow C_n, then x(-t) \leftrightarrow C_{-n}.
Proof:

x(-t) = \sum_{n=-\infty}^{\infty} C_n e^{jn\omega_0 (-t)}

Let n = -p:

x(-t) = \sum_{p=-\infty}^{\infty} C_{-p}\, e^{jp\omega_0 t}

Replacing p by n:

x(-t) = \sum_{n=-\infty}^{\infty} C_{-n}\, e^{jn\omega_0 t} = FS^{-1}\left[ C_{-n} \right]

Time scaling property


The time scaling property states that, if x(t) \leftrightarrow C_n, then x(at) \leftrightarrow C_n, with the fundamental frequency changed from \omega_0 to a\omega_0.
Proof:

x(t) = \sum_{n=-\infty}^{\infty} C_n e^{jn\omega_0 t}

x(at) = \sum_{n=-\infty}^{\infty} C_n e^{jn\omega_0 a t}
 = \sum_{n=-\infty}^{\infty} C_n e^{jn(a\omega_0) t}
 = FS^{-1}\left[ C_n \right]

That is, the Fourier series coefficients are unchanged, but the harmonics now lie at multiples of a\omega_0.

Time differential property


The time differential property states that, if x(t) \leftrightarrow C_n, then \frac{dx(t)}{dt} \leftrightarrow jn\omega_0 C_n.
Proof:

x(t) = \sum_{n=-\infty}^{\infty} C_n e^{jn\omega_0 t}

\frac{dx(t)}{dt} = \sum_{n=-\infty}^{\infty} C_n \frac{d}{dt} e^{jn\omega_0 t}
 = \sum_{n=-\infty}^{\infty} C_n (jn\omega_0)\, e^{jn\omega_0 t}
 = FS^{-1}\left[ jn\omega_0 C_n \right]

Time integration property


The time integration property states that, if x(t) \leftrightarrow C_n, then \int_{-\infty}^{t} x(\tau)\, d\tau \leftrightarrow \frac{C_n}{jn\omega_0} (if C_0 = 0).
Proof:

x(t) = \sum_{n=-\infty}^{\infty} C_n e^{jn\omega_0 t}

\int_{-\infty}^{t} x(\tau)\, d\tau = \int_{-\infty}^{t} \sum_{n=-\infty}^{\infty} C_n e^{jn\omega_0 \tau}\, d\tau
 = \sum_{n=-\infty}^{\infty} C_n \int_{-\infty}^{t} e^{jn\omega_0 \tau}\, d\tau
 = \sum_{n=-\infty}^{\infty} C_n \left[ \frac{e^{jn\omega_0 \tau}}{jn\omega_0} \right]_{-\infty}^{t}
 = \sum_{n=-\infty}^{\infty} \frac{C_n}{jn\omega_0}\, e^{jn\omega_0 t} = FS^{-1}\left[ \frac{C_n}{jn\omega_0} \right]

Convolution theorem or property


The convolution theorem or property states that the Fourier series coefficients of the (periodic) convolution of two time-domain functions x_1(t) and x_2(t) equal the product of their Fourier series coefficients (scaled by T), i.e. "convolution of two functions in the time domain is equivalent to multiplication of their Fourier coefficients in the frequency domain".
The convolution property states that, if x_1(t) \leftrightarrow C_n and x_2(t) \leftrightarrow D_n, then x_1(t) * x_2(t) \leftrightarrow T\, C_n D_n.
Proof:

FS[x_1(t) * x_2(t)] = \frac{1}{T} \int_{t_0}^{t_0+T} [x_1(t) * x_2(t)]\, e^{-jn\omega_0 t}\, dt
 = \frac{1}{T} \int_{0}^{T} [x_1(t) * x_2(t)]\, e^{-jn\omega_0 t}\, dt

where the periodic convolution is x_1(t) * x_2(t) = \int_{0}^{T} x_1(\tau)\, x_2(t-\tau)\, d\tau

FS[x_1(t) * x_2(t)] = \frac{1}{T} \int_{0}^{T} \left[ \int_{0}^{T} x_1(\tau)\, x_2(t-\tau)\, d\tau \right] e^{-jn\omega_0 t}\, dt

Substituting t - \tau = p, we have dt = dp:

FS[x_1(t) * x_2(t)] = \frac{1}{T} \int_{0}^{T} x_1(\tau) \left[ \int_{0}^{T} x_2(p)\, e^{-jn\omega_0 (p+\tau)}\, dp \right] d\tau

 = T \left[ \frac{1}{T} \int_{0}^{T} x_1(\tau)\, e^{-jn\omega_0 \tau}\, d\tau \right] \left[ \frac{1}{T} \int_{0}^{T} x_2(p)\, e^{-jn\omega_0 p}\, dp \right]

 = T\, C_n D_n
Modulation or Multiplication property
The modulation or multiplication property states that, if x_1(t) \leftrightarrow C_n and x_2(t) \leftrightarrow D_n, then x_1(t)\, x_2(t) \leftrightarrow \sum_{l=-\infty}^{\infty} C_l D_{n-l}.
Proof:

FS[x_1(t)\, x_2(t)] = \frac{1}{T} \int_{t_0}^{t_0+T} x_1(t)\, x_2(t)\, e^{-jn\omega_0 t}\, dt

 = \frac{1}{T} \int_{t_0}^{t_0+T} \left[ \sum_{l=-\infty}^{\infty} C_l\, e^{jl\omega_0 t} \right] x_2(t)\, e^{-jn\omega_0 t}\, dt

 = \frac{1}{T} \int_{t_0}^{t_0+T} \sum_{l=-\infty}^{\infty} C_l\, x_2(t)\, e^{-j(n-l)\omega_0 t}\, dt

Interchanging the order of integration and summation,

FS[x_1(t)\, x_2(t)] = \sum_{l=-\infty}^{\infty} C_l \left[ \frac{1}{T} \int_{t_0}^{t_0+T} x_2(t)\, e^{-j(n-l)\omega_0 t}\, dt \right]

 = \sum_{l=-\infty}^{\infty} C_l D_{n-l}

Parseval’s Relation or Theorem or property



1 t0 T
If x1 ( t )  Cn and x2 ( t )  Dn , then
T t0
x1 ( t )x*
2
( t )dt  
n 
Cn Dn* and


1 t0 T
T t0

2
x( t ) dt  Cn2 if x1 ( t )  x2 ( t )  x( t )
n 

Proof:

x( t )  C
n 
n e j n0t

1 t0 T 1 t0 T    

T 0t

 x1 ( t )x*
2
( t )
 dt    
T 0  n 
t
 Cn e j n0t  x*2 ( t ) dt
 
Interchanging the order of int egration and summation

1 t0 T  1 t0 T  * 

T 0
t

 x1 ( t )x*
2
( t )
 dt  
n 
C n  
T 0t  x 2 ( t )e j n0t  dt 

 *
 1 t0 T 
  Cn    x2 ( t )e  j n0 t  dt 
n   T t0 

  C [D
n 
n n ]*

if x1 ( t )  x2 ( t )  x( t ), Then

1 t0 T
T t0 
 x1 ( t )x*
( t )
 dt  
n 
Cn [Cn ]*

1 t0 T
 
2
x( t ) dt  Cn2
T 0
t
n 

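Parseval's relation can be confirmed numerically as well: the average power computed in the time domain matches the sum of |Cn|^2 over the harmonics present. A sketch with an assumed test signal of period T = 2π containing only a few harmonics:

import numpy as np

T = 2 * np.pi
w0 = 2 * np.pi / T
dt = 1e-4
t = np.arange(0, T, dt)

x = 1 + np.cos(w0 * t) + 0.5 * np.sin(3 * w0 * t)   # assumed test signal

# Time-domain average power: (1/T) * integral of |x(t)|^2 dt
P_time = (1 / T) * np.sum(np.abs(x)**2) * dt

# Frequency-domain power: sum of |C_n|^2 over the harmonics present
P_freq = 0.0
for n in range(-3, 4):
    Cn = (1 / T) * np.sum(x * np.exp(-1j * n * w0 * t)) * dt
    P_freq += abs(Cn)**2

print(round(P_time, 4), round(P_freq, 4))   # both ~ 1.625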