SS Unit 1
Table of Contents
1.1 FUNDAMENTALS OF SIGNALS ........ 1
“Signals are represented mathematically as functions of one or more independent variables, which
convey information about the nature of a physical phenomenon.”
Some common examples include
1. Electrical current or voltage in a circuit.
2. Daily closing value of a share of stock last week.
3. An audio signal: continuous-time in its original form, or discrete-time when stored
on a CD.
For convenience, let us restrict our attention to one-dimensional signals defined as single-valued
functions of time, which means that at every instant of time there is a unique value of the function. This
value may be real or complex: if the value is real, we call the signal a real-valued signal; otherwise we
call it a complex-valued signal. In both cases the independent variable, time, is real-valued.
If the function depends on a single variable, the signal is said to be one-dimensional. If
the function depends on two or more variables, the signal is said to be multi-dimensional.
Systems
A system is an entity that processes one or more input signals in order to produce one or more output
signals. There is no unique purpose for a system; rather, the purpose depends on the application of interest.
In an automatic speaker recognition system, the function of the system is to extract the information to
recognize/identify the speaker. In a communication system, the function of the system is to transport the
information from source to destination.
“A system may be defined as a set of elements or functional blocks which are connected together and
produce an output in response to an input signal. The response or output of the system depends upon the
transfer function of the system.”
The interaction between a system and its associated signals can be shown schematically as follows
And a control System which can manipulate the output of the systems adaptively based on a feedback path
is shown below
Thus from the figure we can say that a continuous signal will have some value at every instant of time.
Discrete-Time Signals
In this case the independent variable takes discrete values, and thus the signal is defined only for discrete
values of time. A discrete signal is defined only at certain time instants; the amplitude between two time
instants is simply not defined (its value may exist, but it is not defined). So dependent variables such as
amplitude are continuous in discrete-time signals; only the independent variable, time, is discrete, i.e.,
n is an integer. Here the independent variable time is denoted by ‘n’, and the signal is represented as x[n].
Such signals are also called sequences.
Mathematically, a discrete-time signal or a sequence is denoted as
x[n] = {…, 0, 0, 1, 2, 0, −1, 1, 2, 0, 0, …}
                ↑
where the arrow indicates the value of x[n] at n = 0; in the above example the value of x[n] at n = 0 is 1.
An important fact is that any signal can be decomposed into the sum of two signals, one of which is even
and the other odd. Every function x has a unique representation of the form x(t) = xe(t) + xo(t), where the
functions xe and xo are even and odd, respectively. In particular, xe and xo are given by
xe(t) = [x(t) + x(−t)]/2 and xo(t) = [x(t) − x(−t)]/2
The functions xe and xo are called the even part and odd part of x, respectively.
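The even/odd decomposition above can be checked numerically. A minimal sketch in Python (not from the notes), using samples of x(t) on a grid symmetric about t = 0 so that reversing the array gives x(−t):

```python
import numpy as np

def even_odd_parts(x):
    """Split samples of x(t), taken on a grid symmetric about 0, into
    xe(t) = [x(t) + x(-t)]/2 and xo(t) = [x(t) - x(-t)]/2."""
    x_rev = x[::-1]              # samples of x(-t) on a symmetric grid
    xe = (x + x_rev) / 2
    xo = (x - x_rev) / 2
    return xe, xo

t = np.linspace(-1, 1, 201)      # symmetric grid
x = np.exp(t)                    # neither even nor odd
xe, xo = even_odd_parts(x)
# xe = cosh(t) (even), xo = sinh(t) (odd), and xe + xo reconstructs x
```

For x(t) = e^t this recovers the familiar split e^t = cosh(t) + sinh(t).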
If a sequence does not satisfy the equation x[n] = x[n + N] for any positive integer N, then the sequence is
said to be a non-periodic (aperiodic) sequence.
Sum of periodic functions
Let x1 and x2 be periodic functions with fundamental periods T1 and T2, respectively. Then the sum y
= x1 + x2 is a periodic function if and only if the ratio T1/T2 is a rational number (i.e., the quotient of
two integers). Suppose that T1/T2 = q/r, where q and r are coprime integers (i.e., they have no common
factors); then the fundamental period of y is rT1 (or equivalently qT2, since rT1 = qT2). (Note that rT1
is simply the least common multiple of T1 and T2.) Although the above theorem directly addresses only
the case of the sum of two functions, the case of N functions (where N > 2) can be handled by applying
the theorem repeatedly, N − 1 times.
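The q/r recipe above is easy to mechanize. A small sketch (mine, not from the notes) using Python's exact rational arithmetic:

```python
from fractions import Fraction

def sum_period(T1, T2):
    """Fundamental period of x1 + x2 when T1/T2 is rational.

    If T1/T2 = q/r in lowest terms, the period is r*T1 = q*T2,
    i.e. the least common multiple of T1 and T2."""
    T1, T2 = Fraction(T1), Fraction(T2)
    ratio = T1 / T2
    q, r = ratio.numerator, ratio.denominator  # lowest terms
    assert r * T1 == q * T2                    # sanity check: rT1 = qT2
    return r * T1

# Example: T1 = 3, T2 = 2 -> T1/T2 = 3/2, so the sum has period 2*3 = 3*2 = 6
```

For irrational ratios (e.g. T1 = 1, T2 = π) no such common period exists and the sum is aperiodic; this sketch assumes rational inputs.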
Periodic Signal and Sequence Aperiodic Signal and Sequence
The average power of the periodic signal x(t) of fundamental period T is given by
P = (1/T) ∫_{−T/2}^{T/2} x²(t) dt
The square root of the average power P is called the root mean square (rms) value of the signal
x(t).
Department of ECE, SVCE, TPT. 7 | Page
Unit-1: Signals & Systems
The average power in a periodic sequence x[n] with fundamental period N is given by
P = (1/N) Σ_{n=0}^{N−1} x²[n]
A signal is referred to as an energy signal, if and only if the total energy of the signal satisfies the
condition
0 < E < ∞ and P = 0
In the same way x(t) is said to be a power signal, if and only if the average power satisfies the
condition
0 < P < ∞ and E → ∞
Signals for which both E → ∞ and P → ∞ are neither energy nor power signals, e.g., x(t) = t.
The energy and power classification of signals are mutually exclusive.
All periodic signals and all random signals are referred to as power signals, whereas signals that are both
deterministic and non-periodic are energy signals (energy signals are of limited duration).
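The average-power formula above can be approximated numerically over one period. A minimal sketch (not from the notes), using a Riemann sum; for A sin(ω₀t) it should recover the textbook values P = A²/2 and rms = A/√2:

```python
import numpy as np

def avg_power(x, T, n=100000):
    """Approximate P = (1/T) * integral of x(t)^2 over one period
    by a Riemann sum on n uniformly spaced samples."""
    t = np.linspace(0.0, T, n, endpoint=False)
    return float(np.mean(x(t)**2))

A, T = 2.0, 1.0
P = avg_power(lambda t: A * np.sin(2 * np.pi * t / T), T)  # expect A^2/2
rms = np.sqrt(P)                                           # expect A/sqrt(2)
```

Since 0 < P < ∞ here, the sinusoid is classified as a power signal.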
Causal & non-Causal Signals and Sequences
A signal x is said to be right sided if, for some (finite) real constant t0, the following condition holds:
x(t) = 0 for all t < t0
(i.e., x is only potentially nonzero to the right of t0). An example of a right-sided signal is shown below.
A signal that is neither left sided nor right sided is said to be two sided. An example of a two-sided signal
is shown below.
Sinusoidal Signals
Continuous time signal
The sinusoidal signals include sine and cosine signals. Mathematically these signals are represented as
A sine signal x(t) =A sin(ωt) = A sin(2πft)
A cosine signal x(t)=A cos(ωt) = A cos(2πft)
Discrete Time Signals
A discrete time sinusoidal waveform is denoted by
Sine sequence x[n] = Asin(ωn)
Cosine Sequence x[n] = Acos(ωn)
Where A → Amplitude
f → Frequency
ω → Angular frequency = 2πf.
DC Signal and Sequence Sine Signal and Sequence
u[n] = {…, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, …}
Signum Function
Continuous Time Signum Function
The Signum function is shown in the following figure. Mathematically Signum function is given as
sgn(t) = +1 for t > 0
         −1 for t < 0
The signum function is an odd, or antisymmetric, function.
Discrete Time Signum Function
A discrete time Signum function can be obtained by sampling the continuous time Signum function.
Its value is +1 for positive values of n and -1 for negative values of n. Mathematically it is given as
sgn[n] = +1 for n > 0
         −1 for n < 0
Signum Function
Rectangular Pulse
A rectangular pulse of unit amplitude and duration is shown in the following figure. It is centered
about y-axis i.e. about 0. Mathematically it is represented as
rect(t) = 1 for −1/2 ≤ t ≤ 1/2
          0 otherwise
The general rectangular pulse, having amplitude A over duration T, is given as
A rect(t/T) = A for −T/2 ≤ t ≤ T/2
              0 otherwise
In the above expression, t/T shows that the pulse is a function of time, T represents the width of the rectangular
pulse, and A represents the amplitude. The rectangular pulse is an even function.
∫_{−∞}^{∞} δ(t) dt = 1
∫_{−∞}^{∞} x(t) δ(t − t₀) dt = x(t₀)
where δ(t − t₀) represents the time-shifted delta function, which is present only at t = t₀; the
right-hand side is the value of x(t) at t = t₀.
This result indicates that the area under the product of a function with an impulse is equal to the value of
that function at the instant where the impulse is located.
Replication Property
This property states that the convolution of any function x(t) with delta function yields the same
function. The sign * in the below given equation represents convolution.
𝑥(𝑡) ∗ 𝛿(𝑡) = 𝑥(𝑡)
The unit ramp sequence is defined as
u_r[n] = n for n ≥ 0
         0 for n < 0
Thus, for r = 0, the real and imaginary parts of a complex exponential are sinusoidal.
For r > 0, they are sinusoidal signals multiplied by a growing exponential.
For r < 0, they are sinusoidal signals multiplied by a decaying exponential.
Damped signal – sinusoidal signals multiplied by decaying exponentials are commonly referred to as
damped signals.
x[n] = C αⁿ
where C and α are, in general, complex numbers. This can alternatively be expressed as
x[n] = C e^{βn}
where α = e^{β}.
Thus, for |α| = 1, the real and imaginary parts of a complex exponential are sinusoidal.
For |α| < 1, they are sinusoidal signals multiplied by a decaying exponential.
For |α| > 1, they are sinusoidal signals multiplied by a growing exponential.
Sinc Function
The cardinal sine function called sinc function or sinc pulse is mathematically expressed as under
sinc(x) = sin(πx)/(πx) for x ≠ 0
where x is the independent variable. One can show that sinc(x) = 1 at x = 0 (as a limit),
and sinc(x) = 0 at x = ±1, ±2, ±3, …
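NumPy happens to implement exactly this normalized sinc, so the stated values are easy to verify. A quick sketch (not from the notes):

```python
import numpy as np

# np.sinc is the normalized sinc: np.sinc(x) = sin(pi*x)/(pi*x), with sinc(0) = 1
x = np.array([0.0, 1.0, 2.0, 3.0, 0.5])
vals = np.sinc(x)
# sinc(0) = 1; zeros at the nonzero integers; sinc(1/2) = sin(pi/2)/(pi/2) = 2/pi
```

Note that some texts use the unnormalized sin(x)/x, whose zeros fall at multiples of π instead of the integers.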
Time Shifting
Time shifting is defined as
x(t) → x(t − t₀)
x[n] → x[n − n₀]
If t₀ > 0, the time shift is known as a “delay”. If t₀ < 0, the time shift is known as an “advance”.
Example. In the figure below, the left image shows a continuous-time signal x(t). A time-shifted version
x(t − 2) is shown in the right image.
Time Reversal
Time reversal is defined as
x(t) → x(−t)
x[n] → x[−n]
Time Scaling
Time scaling is the operation where the time variable t is multiplied by a constant a:
x(t) → x(at), a > 0
If a > 1, the time scale of the resultant signal is “decimated” or “compressed” (speed up). If 0 < a < 1, the
time scale of the resultant signal is “expanded” (slowed down).
Combination of Operations
In general, a linear operation (in time) on a signal x(t) can be expressed as y(t) = x(at − b), with a, b real.
There are two methods to describe the output signal y(t) = x(at − b).
Method A: “Shift, then Scale” (recommended)
1. Define v(t) = x(t − b),
2. Define y(t) = v(at) = x(at − b).
Method B: “Scale, then Shift”
1. Define v(t) = x(at),
2. Define y(t) = v(t − b/a) = x(at − b).
Example.
For the signal x(t) shown in following figure, sketch x(3t - 5).
Example.
y_E[n] = x[n/L] for n an integer multiple of L
         0 otherwise
L is called the expansion factor.
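The expansion (upsampling) operation above inserts L − 1 zeros between successive samples. A minimal sketch (not from the notes):

```python
import numpy as np

def expand(x, L):
    """y[n] = x[n/L] when L divides n, else 0 (insert L-1 zeros between samples)."""
    y = np.zeros(len(x) * L, dtype=float)
    y[::L] = x                      # place original samples at multiples of L
    return y

y = expand(np.array([1.0, 2.0, 3.0]), 3)
# -> [1, 0, 0, 2, 0, 0, 3, 0, 0]
```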
Amplitude Scaling
Amplitude scaling maps the input signal x to the output signal y as given by
y(t) = ax(t),
where a is a real number.
Amplitude Shifting
Amplitude shifting maps the input signal x to the output signal y as given by
y(t) = x(t)+b,
where b is a real number.
Geometrically, amplitude shifting adds a vertical displacement to x.
y(t) = x1(t) * x2(t)    y[n] = x1[n] * x2[n]
Let x(t) be a continuous time signal, then the differentiation of the signal x(t) with respect to time is given
as
y(t) = dx(t)/dt
Similarly, integration can be expressed as
y(t) = ∫_{−∞}^{t} x(τ) dτ
“Differentiation and integration of signals x(t) cannot be directly applied to discrete-time signals, but
similar operations, namely difference and accumulation, do exist.”
Let x[n] be the discrete-time signal; then the difference operation is given as
y[n]=x[n]-x[n-1]
Similarly, the accumulation operation is given as,
y[n] = Σ_{k=−∞}^{n} x[k]
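The difference and accumulation operations above are inverses of one another, which is easy to demonstrate on a finite sequence (a sketch, not from the notes; samples before n = 0 are taken as zero):

```python
import numpy as np

x = np.array([1.0, 3.0, 6.0, 10.0])

# First difference y[n] = x[n] - x[n-1], with x[-1] = 0
diff = x - np.concatenate(([0.0], x[:-1]))

# Accumulation y[n] = running sum of x[k] for k <= n
acc = np.cumsum(x)

# Applying the difference to the accumulated sequence recovers x
recovered = acc - np.concatenate(([0.0], acc[:-1]))
```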
The current i(t) is proportional to the voltage drop across the resistor
i(t) = [v_s(t) − v_c(t)] / R
The current through the capacitor is
i(t) = C dv_c(t)/dt
Equating the right-hand sides of above Eqs. we obtain a differential equation describing the relationship
between the input and output
dv_c(t)/dt + (1/RC) v_c(t) = (1/RC) v_s(t)
Example 2. Consider the system in Fig. (b), where the force f(t) is the input and the velocity v(t) is the
output. Let m denote the mass of the car and ρ the coefficient of resistance due to friction. Equating the
acceleration with the net force divided by mass, we obtain
dv(t)/dt = (1/m)[f(t) − ρ v(t)]    ⇒    dv(t)/dt + (ρ/m) v(t) = (1/m) f(t)
The above Eqs. of two systems are two examples of first-order linear differential equations of the form
dy(t)/dt + a y(t) = b x(t)
Example 3. Consider a simple model for the balance in a bank account from month to month. Let y[n]
denote the balance at the end of the nth month, and suppose that y[n] evolves from month to month according
to the equation
y[n] = 1.01 y[n − 1] + x[n],
or
y[n] − 1.01 y[n − 1] = x[n],
where x[n] is the net deposit (deposits minus withdrawals) during the nth month, and 1.01 y[n − 1] models the fact
that we accrue 1% interest each month.
The above equation is an example of a first-order linear difference equation, that is,
y[n] + a y[n − 1] = b x[n]
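The bank-balance recursion can be simulated directly. A small sketch (mine, not from the notes):

```python
def balance(y0, deposits, rate=0.01):
    """Iterate y[n] = (1 + rate)*y[n-1] + x[n] starting from balance y0."""
    y = y0
    history = []
    for x in deposits:
        y = (1 + rate) * y + x   # last month's balance accrues interest, then deposit
        history.append(y)
    return history

# Start from 0 and deposit 100 each month for 3 months:
# 100, then 1.01*100 + 100 = 201, then 1.01*201 + 100 = 303.01
h = balance(0.0, [100.0, 100.0, 100.0])
```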
Interconnection of systems. (a) A series or cascade interconnection of two systems; (b) A parallel
interconnection of two systems; (c) Combination of both series and parallel systems.
1.2.3 Basic System Properties
Systems with and without Memory
A system is memoryless if its output for each value of the independent variable at a given time depends
only on the input at that same time. For example,
y[n] = (2x[n] − x²[n])²
is memoryless.
A resistor is a memoryless system, since the input current and output voltage have
the relationship
v(t) = Ri(t) ,
where R is the resistance.
One particularly simple memoryless system is the identity system, whose output is identical to its input,
that is
y(t) = x(t) , or y[n] = x[n]
An example of a discrete-time system with memory is an accumulator or summer.
y[n] = Σ_{k=−∞}^{n} x[k] = Σ_{k=−∞}^{n−1} x[k] + x[n] = y[n − 1] + x[n]
System 1 is not stable, since the constant input x(t) = 1 yields y(t) = t, which is not bounded: no matter
what finite constant we pick, |y(t)| will exceed that constant for some t.
System 2 is stable. Assume the input is bounded, |x(t)| ≤ B, or −B ≤ x(t) ≤ B for all t. We then see
that y(t) is bounded: e^{−B} ≤ y(t) ≤ e^{B}.
Time Invariance
A system is time invariant if a time shift in the input signal results in an identical time shift in the output
signal. Mathematically, if the system output is y(t) when the input is x(t), a time-invariant system will have
an output of y(t − t₀) when the input is x(t − t₀).
Examples
The system y(t) = sin[x(t)] is time invariant.
The system y[n] = n x[n] is not time invariant. This can be demonstrated by a counterexample.
Consider the input signal x₁[n] = δ[n], which yields y₁[n] = 0. However, the input
x₂[n] = δ[n − 1] yields y₂[n] = n δ[n − 1] = δ[n − 1], which is not a time-shifted version of y₁[n];
hence the system is not time invariant.
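The counterexample for y[n] = n x[n] can be carried out numerically. A sketch (not from the notes), evaluating the system on a finite grid of n:

```python
import numpy as np

n = np.arange(-5, 6)

def system(x_of_n):
    """y[n] = n * x[n], evaluated on the grid n above."""
    return n * x_of_n(n)

delta = lambda n: (n == 0).astype(float)        # x1[n] = delta[n]
delta_shift = lambda n: (n == 1).astype(float)  # x2[n] = delta[n-1]

y1 = system(delta)        # n * delta[n] = 0 for all n
y2 = system(delta_shift)  # n * delta[n-1] = delta[n-1]

# Shifting y1 by one sample still gives all zeros, which differs from y2,
# so the system is not time invariant.
```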
Linearity
The system is linear if:
1. The response to x1(t) + x2(t) is y1(t) + y2(t) — additivity property
2. The response to a x1(t) is a y1(t) — scaling (homogeneity) property
The two properties defining a linear system can be combined into a single statement:
Continuous time: a x1(t) + b x2(t) → a y1(t) + b y2(t)
Discrete time: a x1[n] + b x2[n] → a y1[n] + b y2[n]
which holds for linear systems in both continuous and discrete time.
For a linear system, zero input leads to zero output.
1.2.4 Convolution
Linear time invariant (LTI) systems are good models for many real-life systems, and they have properties
that lead to a very powerful and effective theory for analyzing their behaviour. In the followings, we want
to study LTI systems through its characteristic function, called the impulse response.
Convolution in Discrete Time or Convolution Sum
To begin with, let us consider discrete-time signals. Denote by h[n] the “impulse response” of an LTI system
S. The impulse response, as it is named, is the response of the system to a unit impulse input. Recall the
definition of a unit impulse:
δ[n] = 1, n = 0
       0, n ≠ 0
For any signal x[n],
Σ_{k=−∞}^{∞} x[k] δ[n − k] = x[n],
because δ[n − k] equals 1 only at k = n and 0 otherwise, so the only surviving term in the
sum on the left-hand side is x[n].
In other words, for any signal x[n], we can always express it as a sum of impulses!
Next, suppose we know that the impulse response of an LTI system is h[n]. We want to determine the output
y[n]. To do so, we first express x[n] as a sum of impulses
x[n] = Σ_{k=−∞}^{∞} x[k] δ[n − k]
For each impulse δ[n − k], we can determine its response, because for an LTI system
δ[n − k] → h[n − k]
Consequently, we have
x[n] = Σ_{k=−∞}^{∞} x[k] δ[n − k]  →  y[n] = Σ_{k=−∞}^{∞} x[k] h[n − k]
This equation,
y[n] = Σ_{k=−∞}^{∞} x[k] h[n − k],
is the convolution sum. Substituting m = n − k (and then renaming m back to k) shows that the
convolution is commutative:
Σ_{k=−∞}^{∞} x[k] h[n − k] = Σ_{m=−∞}^{∞} x[n − m] h[m] = Σ_{k=−∞}^{∞} x[n − k] h[k]
Let's compute the output y[n] one by one. First, consider y[0]:
y[0] = Σ_k x[k] h[0 − k] = Σ_k x[k] h[−k] = 1
Note that h[−k] is the flipped version of h[k], and Σ_k x[k] h[−k] is the multiply-add between x[k] and
h[−k].
To calculate y[1], we flip h[k] to get h[−k], shift h[−k] to get h[1 − k], and multiply-add to get
y[1] = Σ_k x[k] h[1 − k]
The resulting convolution output is y[n] = [−1, −2+2, −3+4+2, 6+4, 6] = [−1, 0, 3, 10, 6].
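The convolution sum is implemented directly by `np.convolve`. The input sequences for this worked example are not visible in the extracted text; the x and h below are my assumption, chosen because they reproduce the stated output:

```python
import numpy as np

# Assumed inputs (not shown in the notes) that reproduce [-1, 0, 3, 10, 6]
x = np.array([1, 2, 3])
h = np.array([-1, 2, 2])

# Full convolution sum: y[n] = sum_k x[k] * h[n-k], length len(x)+len(h)-1
y = np.convolve(x, h)
# -> [-1, 0, 3, 10, 6]
```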
The result is obtained by chopping up the signal x(t) into sections of width Δ and taking the sum.
Recall the definition of the unit pulse δ_Δ(t); we can define a signal x̂(t) as a linear combination of
delayed pulses of height x(kΔ):
x̂(t) = Σ_{k=−∞}^{∞} x(kΔ) δ_Δ(t − kΔ) Δ
Taking the limit as Δ → 0, the summation approaches an integral, with
kΔ → τ,  x(kΔ) → x(τ),  Δ → dτ,  δ_Δ(t − kΔ) → δ(t − τ)
By substituting these values, we can express x(t) as a linear combination of continuous impulses:
x(t) = ∫_{−∞}^{∞} x(τ) δ(t − τ) dτ
and consequently, the continuous-time convolution is defined as
y(t) = ∫_{−∞}^{∞} x(τ) h(t − τ) dτ
1.2.5 Correlation
Correlation is a measure of similarity between two signals. The general formula for the cross-correlation
of two energy signals is
R₁₂(τ) = ∫_{−∞}^{∞} x₁(t) x₂*(t − τ) dt
At zero lag, the autocorrelation of a signal equals its energy:
R(0) = E = ∫_{−∞}^{∞} |x(t)|² dt
The autocorrelation function and the spectral density form a Fourier-transform pair:
ψ(ω) = ∫_{−∞}^{∞} R(τ) e^{−jωτ} dτ    (energy spectral density, for energy signals)
S(ω) = ∫_{−∞}^{∞} R(τ) e^{−jωτ} dτ    (power spectral density, for power signals)
Two energy signals are said to be orthogonal if
∫_{−∞}^{∞} x₁(t) x₂*(t) dt = 0
For power signals, if lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} x₁(t) x₂*(t) dt = 0, then the two signals are said
to be orthogonal.
The cross-correlation function corresponds to the multiplication of the spectrum of one signal with the
complex conjugate of the spectrum of the other signal, i.e.,
R₁₂(τ) ↔ X₁(ω) X₂*(ω)
This is also called the correlation theorem.
But this is not the only way of expressing vector V 1 in terms of V2. The alternate possibilities are:
V1=C1V2+Ve1
V1=C2V2+Ve2
The error signal is minimum when C₁₂V₂ is the component (projection) of V₁ along V₂. If C₁₂ = 0, the two vectors are said to be orthogonal.
Dot Product of Two Vectors
V₁ · V₂ = |V₁||V₂| cos θ
θ = angle between V₁ and V₂
V₁ · V₂ = V₂ · V₁
The component of V₁ along V₂ = |V₁| cos θ = (V₁ · V₂)/|V₂|
From the diagram, the component of V₁ along V₂ = C₁₂|V₂|, so
C₁₂|V₂| = (V₁ · V₂)/|V₂|
⇒ C₁₂ = (V₁ · V₂)/|V₂|²
1.3.2 Signal
The concept of orthogonality can be applied to signals. Let us consider two signals f 1(t) and f2(t). Similar to
vectors, you can approximate f1(t) in terms of f2(t) as
f1(t) = C12 f2(t) + fe(t) for (t1 < t < t2)
⇒ fe(t) = f1(t) – C12 f2(t)
One possible way of minimizing the error is to integrate it over the interval t₁ to t₂:
ε = [1/(t₂ − t₁)] ∫_{t₁}^{t₂} f_e(t) dt
  = [1/(t₂ − t₁)] ∫_{t₁}^{t₂} [f₁(t) − C₁₂ f₂(t)] dt
However, this step does not reduce the error to an appreciable extent, since positive and negative errors
can cancel. This can be corrected by taking the square of the error function:
ε = [1/(t₂ − t₁)] ∫_{t₁}^{t₂} f_e²(t) dt
  = [1/(t₂ − t₁)] ∫_{t₁}^{t₂} [f₁(t) − C₁₂ f₂(t)]² dt
where ε is the mean square value of the error signal. To find the value of C₁₂ which minimizes the error,
we require
dε/dC₁₂ = 0
i.e.
d/dC₁₂ { [1/(t₂ − t₁)] ∫_{t₁}^{t₂} [f₁(t) − C₁₂ f₂(t)]² dt } = 0
Expanding,
[1/(t₂ − t₁)] ∫_{t₁}^{t₂} [ d/dC₁₂ f₁²(t) − d/dC₁₂ 2 f₁(t) C₁₂ f₂(t) + d/dC₁₂ C₁₂² f₂²(t) ] dt = 0
The derivative of the terms which do not contain C₁₂ is zero, so
[1/(t₂ − t₁)] ∫_{t₁}^{t₂} [ −2 f₁(t) f₂(t) + 2 C₁₂ f₂²(t) ] dt = 0
⇒ C₁₂ = [ ∫_{t₁}^{t₂} f₁(t) f₂(t) dt ] / [ ∫_{t₁}^{t₂} f₂²(t) dt ]
If C₁₂ = 0, i.e.
∫_{t₁}^{t₂} f₁(t) f₂(t) dt = 0,
then the two signals are said to be orthogonal.
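The formula for C₁₂ can be evaluated numerically with Riemann sums. A sketch (mine, not from the notes), checking both the orthogonal case and a case where the projection is exact:

```python
import numpy as np

def c12(f1, f2, t1, t2, n=200000):
    """C12 = integral of f1*f2 over [t1, t2] divided by integral of f2^2,
    both approximated by Riemann sums on n uniform samples."""
    t = np.linspace(t1, t2, n, endpoint=False)
    dt = (t2 - t1) / n
    return np.sum(f1(t) * f2(t)) * dt / (np.sum(f2(t)**2) * dt)

# sin(t) and sin(2t) are orthogonal over a full period, so C12 is (numerically) 0
c_orth = c12(np.sin, lambda t: np.sin(2 * t), 0.0, 2 * np.pi)

# Approximating f1(t) = 3 sin(t) in terms of f2(t) = sin(t) recovers C12 = 3
c_self = c12(lambda t: 3 * np.sin(t), np.sin, 0.0, 2 * np.pi)
```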
Consider a vector A at a point (X1, Y1, Z1). Consider three - unit vectors (VX, VY, VZ) in the direction of X,
Y, Z axis respectively. Since these unit vectors are mutually orthogonal, it satisfies that
V_X · V_X = V_Y · V_Y = V_Z · V_Z = 1
V_X · V_Y = V_Y · V_Z = V_Z · V_X = 0
You can write above conditions as
V_a · V_b = 1 for a = b
            0 for a ≠ b
The vector A can be represented in terms of its components and unit vectors as
A = X₁V_X + Y₁V_Y + Z₁V_Z ………..(1)
Any vectors in this three-dimensional space can be represented in terms of these three unit vectors only.
If you consider n dimensional space, then any vector A in that space can be represented as
∫_{t₁}^{t₂} x_j(t) x_k(t) dt = 0 for j ≠ k
Let ∫_{t₁}^{t₂} x_k²(t) dt = K_k
t1
Let f(t) be a function; it can be approximated within this orthogonal signal space by adding its components along
the mutually orthogonal signals, i.e.,
f(t) ≅ C₁x₁(t) + C₂x₂(t) + … + Cₙxₙ(t) + f_e(t)
     = Σ_{r=1}^{n} C_r x_r(t) + f_e(t)
⇒ f_e(t) = f(t) − Σ_{r=1}^{n} C_r x_r(t)
Mean square error ε = [1/(t₂ − t₁)] ∫_{t₁}^{t₂} f_e²(t) dt
 = [1/(t₂ − t₁)] ∫_{t₁}^{t₂} [ f(t) − Σ_{r=1}^{n} C_r x_r(t) ]² dt
The component which minimizes the mean square error can be found by
dε/dC₁ = dε/dC₂ = … = dε/dC_k = 0
Let us consider dε/dC_k = 0:
d/dC_k { [1/(t₂ − t₁)] ∫_{t₁}^{t₂} [ f(t) − Σ_{r=1}^{n} C_r x_r(t) ]² dt } = 0
All terms that do not contain C_k are zero; i.e., in the summation only the r = k term remains, and all other terms vanish.
∫_{t₁}^{t₂} −2 f(t) x_k(t) dt + 2 C_k ∫_{t₁}^{t₂} x_k²(t) dt = 0
⇒ C_k = [ ∫_{t₁}^{t₂} f(t) x_k(t) dt ] / [ ∫_{t₁}^{t₂} x_k²(t) dt ]
⇒ ∫_{t₁}^{t₂} f(t) x_k(t) dt = C_k K_k
Substituting back into the mean square error,
ε = [1/(t₂ − t₁)] ∫_{t₁}^{t₂} [ f(t) − Σ_{r=1}^{n} C_r x_r(t) ]² dt
 = [1/(t₂ − t₁)] [ ∫_{t₁}^{t₂} f²(t) dt + Σ_{r=1}^{n} C_r² ∫_{t₁}^{t₂} x_r²(t) dt − 2 Σ_{r=1}^{n} C_r ∫_{t₁}^{t₂} x_r(t) f(t) dt ]
You know that C_r² ∫_{t₁}^{t₂} x_r²(t) dt = C_r² K_r and C_r ∫_{t₁}^{t₂} x_r(t) f(t) dt = C_r² K_r. Hence
ε = [1/(t₂ − t₁)] [ ∫_{t₁}^{t₂} f²(t) dt + Σ_{r=1}^{n} C_r² K_r − 2 Σ_{r=1}^{n} C_r² K_r ]
 = [1/(t₂ − t₁)] [ ∫_{t₁}^{t₂} f²(t) dt − Σ_{r=1}^{n} C_r² K_r ]
 = [1/(t₂ − t₁)] [ ∫_{t₁}^{t₂} f²(t) dt − (C₁²K₁ + C₂²K₂ + … + Cₙ²Kₙ) ]
The above equation is used to evaluate the mean square error.
1.3.6 Orthogonality in Complex Functions
If f₁(t) and f₂(t) are two complex functions, then f₁(t) can be expressed in terms of f₂(t) as
f₁(t) ≅ C₁₂ f₂(t)
with negligible error, where
C₁₂ = [ ∫_{t₁}^{t₂} f₁(t) f₂*(t) dt ] / [ ∫_{t₁}^{t₂} |f₂(t)|² dt ]
and f₂*(t) is the complex conjugate of f₂(t).
If f₁(t) and f₂(t) are orthogonal, then C₁₂ = 0:
[ ∫_{t₁}^{t₂} f₁(t) f₂*(t) dt ] / [ ∫_{t₁}^{t₂} |f₂(t)|² dt ] = 0
⇒ ∫_{t₁}^{t₂} f₁(t) f₂*(t) dt = 0
The above equation represents the orthogonality condition for complex functions.
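The complex orthogonality condition is exactly what makes the exponentials e^{jnω₀t} a useful basis in the next section. A numerical sketch (not from the notes) checking it for two harmonics over one period:

```python
import numpy as np

def inner(f1, f2, t1, t2, n=100000):
    """Approximate the integral of f1(t) * conj(f2(t)) over [t1, t2]."""
    t = np.linspace(t1, t2, n, endpoint=False)
    dt = (t2 - t1) / n
    return np.sum(f1(t) * np.conj(f2(t))) * dt

T = 2 * np.pi
f1 = lambda t: np.exp(1j * t)       # e^{j t}
f2 = lambda t: np.exp(2j * t)       # e^{j 2t}

ip = inner(f1, f2, 0.0, T)          # distinct harmonics: orthogonal, so ~0
ip_self = inner(f1, f1, 0.0, T)     # <f1, f1> = integral of 1 over the period = T
```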
∴ Any function x(t) in the interval (t₀, t₀ + T), where T = 2π/ω₀, can be represented as
x(t) = a₀ cos(0·ω₀t) + a₁ cos(1·ω₀t) + … + aₙ cos(nω₀t) + …
     + b₀ sin(0·ω₀t) + b₁ sin(1·ω₀t) + … + bₙ sin(nω₀t) + …
     = a₀ + a₁ cos(ω₀t) + … + aₙ cos(nω₀t) + b₁ sin(ω₀t) + … + bₙ sin(nω₀t)
∴ x(t) = a₀ + Σ_{n=1}^{∞} [ aₙ cos(nω₀t) + bₙ sin(nω₀t) ]   (t₀ ≤ t ≤ t₀ + T)
where
a₀ = [ ∫_{t₀}^{t₀+T} x(t)·1 dt ] / [ ∫_{t₀}^{t₀+T} 1² dt ] = (1/T) ∫_{t₀}^{t₀+T} x(t) dt
aₙ = [ ∫_{t₀}^{t₀+T} x(t) cos(nω₀t) dt ] / [ ∫_{t₀}^{t₀+T} cos²(nω₀t) dt ]
bₙ = [ ∫_{t₀}^{t₀+T} x(t) sin(nω₀t) dt ] / [ ∫_{t₀}^{t₀+T} sin²(nω₀t) dt ]
Here
∫_{t₀}^{t₀+T} sin²(nω₀t) dt = ∫_{t₀}^{t₀+T} cos²(nω₀t) dt = T/2
2
∴ aₙ = (2/T) ∫_{t₀}^{t₀+T} x(t) cos(nω₀t) dt
bₙ = (2/T) ∫_{t₀}^{t₀+T} x(t) sin(nω₀t) dt
Exponential Fourier Series (EFS)
Consider a set of complex exponential functions e^{jnω₀t}, n = 0, ±1, ±2, …, which is orthogonal over
the interval (t₀, t₀ + T), where T = 2π/ω₀. This is a complete set, so it is possible to represent any function
x(t) as shown below:
x(t) = C₀ + C₁ e^{jω₀t} + … + Cₙ e^{jnω₀t} + …
     + C₋₁ e^{−jω₀t} + C₋₂ e^{−j2ω₀t} + … + C₋ₙ e^{−jnω₀t} + …
∴ x(t) = Σ_{n=−∞}^{∞} Cₙ e^{jnω₀t}   (t₀ ≤ t ≤ t₀ + T)
Above equation represents exponential Fourier series representation of a signal x( t ) over the interval
t0 ,t0 T . The Fourier coefficient is given as
Cₙ = [ ∫_{t₀}^{t₀+T} x(t) (e^{jnω₀t})* dt ] / [ ∫_{t₀}^{t₀+T} e^{jnω₀t} (e^{jnω₀t})* dt ]
∴ Cₙ = (1/T) ∫_{t₀}^{t₀+T} x(t) e^{−jnω₀t} dt
aₙ = Cₙ + C₋ₙ
bₙ = j(Cₙ − C₋ₙ)
C₀ = a₀
Cₙ = (aₙ − jbₙ)/2
C₋ₙ = (aₙ + jbₙ)/2
Convergence of the Fourier Series
1. Over any period, x(t) must be absolutely integrable, that is
∫_T |x(t)| dt < ∞
This guarantees each coefficient aₖ will be finite. A periodic function that violates the first Dirichlet
condition is
x(t) = 1/t, 0 < t ≤ 1
2. In any finite interval of time, x(t) is of bounded variation; that is, there are no more than a finite
number of maxima and minima during a single period of the signal.
An example of a function that meets Condition 1 but not Condition 2 is
x(t) = sin(2π/t), 0 < t ≤ 1
3. In any finite interval of time, there are only a finite number of discontinuities. Furthermore, each of
these discontinuities is finite.
An example that violates this condition is a function defined as
x(t) = 1, 0 ≤ t < 4,
x(t) = 1/2, 4 ≤ t < 6,
x(t) = 1/4, 6 ≤ t < 7,
x(t) = 1/8, 7 ≤ t < 7.5, etc.
4. One class of periodic signals that are representable through Fourier series is those signals which
have finite energy over a period,
∫_T |x(t)|² dt < ∞
= a (1/T) ∫_{t₀}^{t₀+T} x₁(t) e^{−jnω₀t} dt + b (1/T) ∫_{t₀}^{t₀+T} x₂(t) e^{−jnω₀t} dt
= aCₙ + bDₙ
Time Shifting property
The time shifting property states that, if x(t) ↔ Cₙ, then x(t − t₀) ↔ e^{−jnω₀t₀} Cₙ.
Proof:
x(t) = Σ_{n=−∞}^{∞} Cₙ e^{jnω₀t}
x(t − t₀) = Σ_{n=−∞}^{∞} Cₙ e^{jnω₀(t−t₀)}
          = Σ_{n=−∞}^{∞} [Cₙ e^{−jnω₀t₀}] e^{jnω₀t}
∴ FS{x(t − t₀)} = Cₙ e^{−jnω₀t₀}
Time Reversal property
The time reversal property states that, if x(t) ↔ Cₙ, then x(−t) ↔ C₋ₙ.
Proof:
x(−t) = Σ_{n=−∞}^{∞} Cₙ e^{−jnω₀t}
Let n = −p:
x(−t) = Σ_{p=−∞}^{∞} C₋ₚ e^{jpω₀t}
Replacing p by n:
x(−t) = Σ_{n=−∞}^{∞} C₋ₙ e^{jnω₀t}
∴ FS{x(−t)} = C₋ₙ
Differentiation property
The differentiation property states that, if x(t) ↔ Cₙ, then dx(t)/dt ↔ jnω₀ Cₙ.
Proof:
dx(t)/dt = Σ_{n=−∞}^{∞} Cₙ d(e^{jnω₀t})/dt
         = Σ_{n=−∞}^{∞} Cₙ (jnω₀) e^{jnω₀t}
∴ FS{dx(t)/dt} = jnω₀ Cₙ
Integration property
The integration property states that, if x(t) ↔ Cₙ, then ∫_{−∞}^{t} x(τ) dτ ↔ Cₙ/(jnω₀).
Proof:
x(t) = Σ_{n=−∞}^{∞} Cₙ e^{jnω₀t}
∫_{−∞}^{t} x(τ) dτ = ∫_{−∞}^{t} Σ_{n=−∞}^{∞} Cₙ e^{jnω₀τ} dτ
= Σ_{n=−∞}^{∞} Cₙ ∫_{−∞}^{t} e^{jnω₀τ} dτ
= Σ_{n=−∞}^{∞} Cₙ e^{jnω₀t}/(jnω₀)
= Σ_{n=−∞}^{∞} [Cₙ/(jnω₀)] e^{jnω₀t}
∴ FS{∫_{−∞}^{t} x(τ) dτ} = Cₙ/(jnω₀)
Convolution property
The convolution property states that, if x₁(t) ↔ Cₙ and x₂(t) ↔ Dₙ, then x₁(t) * x₂(t) ↔ T Cₙ Dₙ.
Proof:
FS{x₁(t) * x₂(t)} = (1/T) ∫_{t₀}^{t₀+T} [x₁(t) * x₂(t)] e^{−jnω₀t} dt
                  = (1/T) ∫_{0}^{T} [x₁(t) * x₂(t)] e^{−jnω₀t} dt
For periodic signals, the (periodic) convolution is
x₁(t) * x₂(t) = ∫_{0}^{T} x₁(τ) x₂(t − τ) dτ
Hence
FS{x₁(t) * x₂(t)} = (1/T) ∫_{0}^{T} [ ∫_{0}^{T} x₁(τ) x₂(t − τ) dτ ] e^{−jnω₀t} dt
Substitute t − τ = p, so that dt = dp:
FS{x₁(t) * x₂(t)} = (1/T) ∫_{0}^{T} x₁(τ) [ ∫_{0}^{T} x₂(p) e^{−jnω₀(p+τ)} dp ] dτ
= T [ (1/T) ∫_{0}^{T} x₁(τ) e^{−jnω₀τ} dτ ] [ (1/T) ∫_{0}^{T} x₂(p) e^{−jnω₀p} dp ]
= T Cₙ Dₙ
Modulation or Multiplication property
The Modulation or Multiplication property states that, if x₁(t) ↔ Cₙ and x₂(t) ↔ Dₙ, then
x₁(t) x₂(t) ↔ Σ_{l=−∞}^{∞} C_l D_{n−l}
Proof:
FS{x₁(t) x₂(t)} = (1/T) ∫_{t₀}^{t₀+T} x₁(t) x₂(t) e^{−jnω₀t} dt
Substituting x₁(t) = Σ_{l=−∞}^{∞} C_l e^{jlω₀t},
= (1/T) ∫_{t₀}^{t₀+T} [ Σ_{l=−∞}^{∞} C_l e^{jlω₀t} ] x₂(t) e^{−jnω₀t} dt
Interchanging the order of integration and summation,
FS{x₁(t) x₂(t)} = Σ_{l=−∞}^{∞} C_l [ (1/T) ∫_{t₀}^{t₀+T} x₂(t) e^{−j(n−l)ω₀t} dt ]
= Σ_{l=−∞}^{∞} C_l D_{n−l}
Parseval's theorem
Parseval's theorem states that, if x₁(t) ↔ Cₙ and x₂(t) ↔ Dₙ, then
(1/T) ∫_{t₀}^{t₀+T} x₁(t) x₂*(t) dt = Σ_{n=−∞}^{∞} Cₙ Dₙ*
and, if x₁(t) = x₂(t) = x(t),
(1/T) ∫_{t₀}^{t₀+T} |x(t)|² dt = Σ_{n=−∞}^{∞} |Cₙ|²
Proof:
x₁(t) = Σ_{n=−∞}^{∞} Cₙ e^{jnω₀t}
(1/T) ∫_{t₀}^{t₀+T} x₁(t) x₂*(t) dt = (1/T) ∫_{t₀}^{t₀+T} [ Σ_{n=−∞}^{∞} Cₙ e^{jnω₀t} ] x₂*(t) dt
Interchanging the order of integration and summation,
(1/T) ∫_{t₀}^{t₀+T} x₁(t) x₂*(t) dt = Σ_{n=−∞}^{∞} Cₙ (1/T) ∫_{t₀}^{t₀+T} x₂*(t) e^{jnω₀t} dt
= Σ_{n=−∞}^{∞} Cₙ [ (1/T) ∫_{t₀}^{t₀+T} x₂(t) e^{−jnω₀t} dt ]*
= Σ_{n=−∞}^{∞} Cₙ Dₙ*
If x₁(t) = x₂(t) = x(t), then
(1/T) ∫_{t₀}^{t₀+T} x(t) x*(t) dt = Σ_{n=−∞}^{∞} Cₙ Cₙ*
∴ (1/T) ∫_{t₀}^{t₀+T} |x(t)|² dt = Σ_{n=−∞}^{∞} |Cₙ|²
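Parseval's theorem can be verified numerically for a simple periodic signal. A sketch (mine, not from the notes); the test signal is chosen so the expected power is easy to compute by hand (1 + 2²/2 + 3²/2 = 7.5):

```python
import numpy as np

T = 2 * np.pi
w0 = 2 * np.pi / T
t = np.linspace(0.0, T, 100000, endpoint=False)
dt = T / len(t)

x = 1.0 + 2.0 * np.cos(t) + 3.0 * np.sin(2 * t)   # test periodic signal

# Exponential Fourier coefficients Cn = (1/T) * integral x(t) e^{-j n w0 t} dt
N = 5
Cn = np.array([np.sum(x * np.exp(-1j * n * w0 * t)) * dt / T
               for n in range(-N, N + 1)])

power_time = np.sum(x**2) * dt / T       # (1/T) * integral of |x|^2 over a period
power_freq = np.sum(np.abs(Cn)**2)       # sum of |Cn|^2
# Parseval: the two power computations agree (both equal 7.5 here)
```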