ELEC301 2x-Week1 Rev2


Orthogonal Bases

Transforms and Orthogonal Bases

We now turn back to linear algebra to understand transforms, which map signals between
different “domains”

Recall that signals can be interpreted as vectors in a vector space (linear algebra)

We now review the concept of a basis for a vector space

As we will see, different signal transforms (and “domains”) correspond to different bases

Caveat: This is not a course on linear algebra!

2
Basis
DEFINITION: A basis {b_k} for a vector space V is a collection of vectors from V that are
linearly independent and span V

Span: All vectors in V can be represented as a linear combination of the basis vectors {b_k}_{k=0}^{N−1}

x = Σ_{k=0}^{N−1} α_k b_k = α_0 b_0 + α_1 b_1 + · · · + α_{N−1} b_{N−1},   ∀ x ∈ V

Linearly independent: None of the basis vectors can be represented as a linear combination of
the other basis vectors

Dimension of V : The number of vectors in the basis (N in the above)

Fact: The dimension of R^N and C^N equals N (we will focus on these spaces)

3
Basis Matrix
Stack the basis vectors b_k as columns into the N × N basis matrix

B = [b_0 | b_1 | · · · | b_{N−1}]

Stack the scalar weights α_k into an N × 1 column vector

a = [α_0, α_1, …, α_{N−1}]^T

We can now write a linear combination of basis elements as the matrix/vector product

x = α_0 b_0 + α_1 b_1 + · · · + α_{N−1} b_{N−1} = Σ_{k=0}^{N−1} α_k b_k = [b_0 | b_1 | · · · | b_{N−1}] [α_0, α_1, …, α_{N−1}]^T = B a

4
Orthogonal and Orthonormal Bases

DEFINITION: An orthogonal basis {b_k}_{k=0}^{N−1} for a vector space V is a basis whose
elements are mutually orthogonal

⟨b_k, b_l⟩ = 0,   k ≠ l

DEFINITION: An orthonormal basis {b_k}_{k=0}^{N−1} for a vector space V is a basis whose
elements are mutually orthogonal and normalized (in the 2-norm)

⟨b_k, b_l⟩ = 0,   k ≠ l

‖b_k‖_2 = 1

5
Example: Orthogonal and Orthonormal Bases in R2
   

b_0 = [ ],   b_1 = [ ]

B = [ ]

6
Inverse of a Matrix

DEFINITION: The inverse of a square matrix A is a matrix A^{−1} such that

A A^{−1} = A^{−1} A = I

where I is the identity matrix of the same size as A

If B is a basis matrix and x = B a, then a = B^{−1} x

7
Inverse of an Orthonormal Basis Matrix
When the basis matrix B contains an orthonormal basis {b_k}_{k=0}^{N−1}, its inverse B^{−1} is
trivial to calculate

Fact: B^{−1} = B^H (recall that ^H denotes the complex conjugate transpose)

Terminology: B is a unitary matrix (aka complex-valued orthogonal)

To prove, write out B^H B and use the fact that the columns of B are orthonormal

B^H B = [b_0^H; b_1^H; …; b_{N−1}^H] [b_0 | b_1 | · · · | b_{N−1}] = I   (the N × N identity matrix)

8
Signal Representation by Orthonormal Basis
Given an orthonormal basis {b_k}_{k=0}^{N−1} and orthonormal basis matrix B, we have the following
signal representation for any signal x

x = B a = Σ_{k=0}^{N−1} α_k b_k   (synthesis)

a = B^H x   or, for each k,   α_k = ⟨x, b_k⟩   (analysis)

Synthesis: Build up the signal x as a linear combination of the basis elements bk weighted by
the weights αk

Analysis: Compute the weights αk such that the synthesis produces x; the weight αk measures
the similarity between x and the basis element bk

9
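The synthesis/analysis pair above can be checked numerically. The course's demos use Matlab; below is a minimal NumPy sketch of the same idea (the random QR-generated basis and all variable names are illustrative choices, not taken from the slides):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 4

# Build an orthonormal basis for R^N by orthogonalizing a random matrix (QR);
# the columns of B are the basis vectors b_k.
B, _ = np.linalg.qr(rng.standard_normal((N, N)))

x = rng.standard_normal(N)       # any signal in R^N

a = B.T @ x                      # analysis: alpha_k = <x, b_k>  (B^H = B^T for real B)
x_hat = B @ a                    # synthesis: x = B a

print(np.allclose(x_hat, x))     # perfect reconstruction: True
```

The same two lines work for any orthonormal basis matrix, which is exactly why orthonormal bases "make life easy."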
Summary
Orthonormal bases make life easy

Given an orthonormal basis {b_k}_{k=0}^{N−1} and orthonormal basis matrix B, we have the following
signal representation for any signal x

x = B a = Σ_{k=0}^{N−1} α_k b_k   (synthesis)

a = B^H x   or, for each k,   α_k = ⟨x, b_k⟩   (analysis)

In signal processing, we say that the vector a is the transform of the signal x with respect to the
orthonormal basis {b_k}_{k=0}^{N−1}

Clearly the transform a contains all of the information in the signal x (and vice versa)

10
Eigenanalysis

We continue borrowing from linear algebra by recalling the eigenvectors and eigenvalues of a
matrix

Applying this point of view to circulant matrices (LTI systems for finite-length signals) will lead
to an amazing result that ties together many threads of thought

Caveat: This is not a course on linear algebra!

2
Eigenvectors and Eigenvalues
DEFINITION: Given a square matrix A, the vector v is an eigenvector with eigenvalue λ if

A v = λ v

Geometric intuition: Multiplying an eigenvector v by the matrix A does not change its direction;
it changes only its strength by the factor λ ∈ C

Example in R^2:

A = [3 1; 1 3],   v = [1; −1],   λ = 2

A v = [3 1; 1 3][1; −1] = [2; −2] = 2 v

3
Eigendecomposition
An N × N matrix A has N eigenvectors and N eigenvalues (not necessarily distinct, though)

Stack the N eigenvectors {v_m}_{m=0}^{N−1} as columns into an N × N matrix

V = [v_0 | v_1 | · · · | v_{N−1}]

Place the N eigenvalues {λ_m}_{m=0}^{N−1} on the diagonal of an N × N diagonal matrix

Λ = diag(λ_0, λ_1, …, λ_{N−1})

Then we can write

A V = V Λ

4
Diagonalization
Recall the eigendecomposition of a matrix A

AV = VΛ

When the eigenvector matrix V is invertible, we can multiply both sides of the
eigendecomposition on the left by V^{−1} to obtain

V^{−1} A V = V^{−1} V Λ = I Λ = Λ

We say that the eigenvector matrix V diagonalizes the matrix A

V^{−1} A V = Λ

Much easier to multiply a vector by Λ than by A! (We simply scale each entry)

Clearly V and Λ contain all of the information in A (and vice versa)

5
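Diagonalization is easy to check numerically, here using the 2 × 2 example matrix from the eigenvector slide (a NumPy sketch standing in for the course's Matlab):

```python
import numpy as np

# The symmetric matrix from the example slide; its eigenvalues are 4 and 2.
A = np.array([[3.0, 1.0],
              [1.0, 3.0]])

lam, V = np.linalg.eig(A)        # columns of V are eigenvectors, lam the eigenvalues
Lam = np.diag(lam)

# Diagonalization: V^{-1} A V = Lambda, equivalently A = V Lambda V^{-1}
print(np.allclose(np.linalg.inv(V) @ A @ V, Lam))  # True
print(np.allclose(A, V @ Lam @ np.linalg.inv(V)))  # True
```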
Summary

Multiplying an eigenvector v by the matrix A does not change its direction; it changes only its
strength by the factor λ

The eigenvectors/values contain all of the information in the matrix A (and vice versa)

Diagonalization by eigendecomposition

V^{−1} A V = Λ

or, equivalently,

A = V Λ V^{−1}

6
Eigenanalysis of
LTI Systems
(Finite-Length Signals)
LTI Systems for Finite-Length Signals

x → H → y

y = Hx

For length-N signals, H is an N × N circulant matrix with entries

[H]_{n,m} = h[(n − m)_N]

where h is the impulse response

Goal: Calculate the eigenvectors and eigenvalues of H

Eigenvectors v are input signals that emerge at the system output unchanged (except for a
scaling by the eigenvalue λ) and so are somehow “fundamental” to the system

2
Eigenvectors of LTI Systems
Fact: The eigenvectors of a circulant matrix (LTI system) are the complex harmonic sinusoids

s_k[n] = e^{j(2π/N)kn}/√N = (1/√N)[cos((2π/N)kn) + j sin((2π/N)kn)],   0 ≤ n, k ≤ N − 1

[Figure: the k = 2 sinusoids cos((2π/16)kn) and sin((2π/16)kn) pass through H and emerge
scaled by λ_k; block diagram: s_k → H → λ_k s_k]

3
Harmonic Sinusoids are Eigenvectors of LTI Systems

s_k → H → λ_k s_k

Prove that harmonic sinusoids are the eigenvectors of LTI systems simply by computing the
circular convolution with input s_k and applying the periodicity of the harmonic sinusoids

s_k[n] ⊛ h[n] = Σ_{m=0}^{N−1} s_k[(n − m)_N] h[m] = Σ_{m=0}^{N−1} (e^{j(2π/N)k(n−m)_N}/√N) h[m]

= Σ_{m=0}^{N−1} (e^{j(2π/N)k(n−m)}/√N) h[m] = (e^{j(2π/N)kn}/√N) Σ_{m=0}^{N−1} e^{−j(2π/N)km} h[m]

= (Σ_{m=0}^{N−1} e^{−j(2π/N)km} h[m]) (e^{j(2π/N)kn}/√N) = λ_k s_k[n]

4
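This fact can be verified numerically: build a small circulant matrix H from an impulse response (the h below is an arbitrary illustrative choice) and check that a harmonic sinusoid passes through unchanged except for scaling by its DFT coefficient:

```python
import numpy as np

N = 8
h = np.array([1.0, 0.5, 0.25, 0.0, 0.0, 0.0, 0.0, 0.0])  # arbitrary impulse response

# Circulant matrix: [H]_{n,m} = h[(n - m) mod N]
n, m = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
H = h[(n - m) % N]

k = 3                                                          # any frequency index
s_k = np.exp(2j * np.pi * k * np.arange(N) / N) / np.sqrt(N)   # harmonic sinusoid
lam_k = np.sum(h * np.exp(-2j * np.pi * k * np.arange(N) / N)) # unnormalized DFT of h

print(np.allclose(H @ s_k, lam_k * s_k))   # s_k is an eigenvector with eigenvalue lam_k: True
```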
Eigenvalues of LTI Systems
The eigenvalue λ_k ∈ C corresponding to the sinusoid eigenvector s_k is called the frequency
response at frequency k since it measures how the system "responds" to s_k

λ_k = Σ_{n=0}^{N−1} h[n] e^{−j(2π/N)kn} = ⟨h, s_k⟩ = H_u[k]   (unnormalized DFT)

Recall properties of the inner product: λ_k grows/shrinks as h and s_k become more/less similar

[Figure: the k = 2 sinusoids cos((2π/16)kn) and sin((2π/16)kn) pass through H and emerge
scaled by λ_k]

5
Eigenvector Matrix of Harmonic Sinusoids
Stack the N normalized harmonic sinusoids {s_k}_{k=0}^{N−1} as columns into an N × N complex
orthonormal basis matrix

S = [s_0 | s_1 | · · · | s_{N−1}]

The row-n, column-k entries of S have a very simple structure: [S]_{n,k} = e^{j(2π/N)kn}/√N

real part: cos((2π/N)kn)/√N     imaginary part: sin((2π/N)kn)/√N

Example: Eigenvector matrix for N = 16

Note the symmetries: [S]_{n,k} = [S]_{k,n}, or S = S^T (same sinusoids on rows/columns)
6
Diagonal Matrix of Eigenvalues

The eigenvalues are the frequency response (unnormalized DFT of the impulse response)

λ_k = Σ_{n=0}^{N−1} h[n] e^{−j(2π/N)kn} = ⟨h, s_k⟩ = H_u[k]   (unnormalized DFT)

Place the N eigenvalues {λ_k}_{k=0}^{N−1} on the diagonal of an N × N matrix

Λ = diag(λ_0, λ_1, …, λ_{N−1}) = diag(H_u[0], H_u[1], …, H_u[N − 1])

7
Eigendecomposition and Diagonalization of an LTI System
Given the
• circulant LTI system matrix H
• fixed matrix of harmonic sinusoid eigenvectors S (corresponds to DFT/IDFT)
• diagonal matrix of eigenvalues Λ (frequency response; changes with H)
we can write

H = S Λ S^H

Example for N = 16 (Note: only plotting the real part of S, S^H, and Λ)

[Figure: y = H x = S Λ S^H x; the LTI system factors into DFT (S^H), frequency response (Λ), and IDFT (S)]

8
Summary

Harmonic sinusoids are the eigenfunctions of LTI systems for finite-length signals
(circulant matrices)

Therefore, the discrete Fourier transform (DFT) is the natural tool for studying LTI systems for
finite-length signals

Frequency response H[k] equals the unnormalized DFT of the impulse response h[n]

Diagonalization by eigendecomposition implies

H = S Λ S^H

9
Discrete Fourier Transform
(DFT)
Discrete Fourier Transform
Another cornerstone of this course in particular and signal processing in general

Jean Baptiste Joseph Fourier (21 March 1768 – 16 May 1830) had the radical idea of proposing
that “all” signals could be represented as a linear combination of sinusoids

Amazingly, it’s true (at least in CN )!

Suggestion: Re-watch the lectures on Sinusoids from Week 1

2
Recall: Signal Representation by Orthonormal Basis
Given an orthonormal basis {b_k}_{k=0}^{N−1} for C^N and orthonormal basis matrix B, we have the
following signal representation for any signal x

x = B a = Σ_{k=0}^{N−1} α_k b_k   (synthesis)

a = B^H x   or   α_k = ⟨x, b_k⟩   (analysis)

Synthesis: Build up the signal x as a linear combination of the basis elements bk weighted by
the weights αk

Analysis: Compute the weights αk such that the synthesis produces x; the weight αk measures
the similarity between x and the basis element bk

3
Harmonic Sinusoids are an Orthonormal Basis
Recall the length-N normalized complex harmonic sinusoids (normalized!)

s_k[n] = e^{j(2π/N)kn}/√N = (1/√N)[cos((2π/N)kn) + j sin((2π/N)kn)],   0 ≤ n, k ≤ N − 1

[Figure: cos((2π/16)kn)/√N and sin((2π/16)kn)/√N for k = 2]

Recall that harmonic sinusoids are mutually orthogonal and normalized

⟨s_k, s_l⟩ = 0, k ≠ l,     ‖s_k‖_2 = 1

It is easy to show that N orthonormal vectors in an N-dimensional space must form an orthonormal basis


4
Orthonormal Basis Matrix of Harmonic Sinusoids
Stack the N normalized harmonic sinusoids {s_k}_{k=0}^{N−1} as columns into an N × N complex matrix

S = [s_0 | s_1 | · · · | s_{N−1}]

The row-n, column-k entries of S have a very simple structure: [S]_{n,k} = e^{j(2π/N)kn}/√N

real part: cos((2π/N)kn)/√N     imaginary part: sin((2π/N)kn)/√N

Example: Eigenvector matrix for N = 16

Note the symmetries: [S]_{n,k} = [S]_{k,n}, or S = S^T (same sinusoids on rows/columns)
5
Inverse of Orthonormal Basis Matrix of Harmonic Sinusoids
The row-n, column-k entries of S have a very simple structure

[S]_{n,k} = e^{j(2π/N)kn}/√N

S is unitary, and so its inverse matrix S^{−1} = S^H

Entries of the inverse matrix

[S^{−1}]_{n,k} = [S^H]_{n,k} = (e^{j(2π/N)nk}/√N)^* = e^{−j(2π/N)kn}/√N = ([S]_{n,k})^*

Thanks to the symmetry of S, we have the interesting equalities: S^{−1} = S^H = S^*

6
Signal Representation by Harmonic Sinusoids
Given the normalized complex harmonic sinusoids {s_k}_{k=0}^{N−1} and the orthonormal basis matrix S,
we define the (normalized) discrete Fourier transform (DFT) for any signal x ∈ C^N

Analysis (Forward Normalized DFT)

X = S^H x

X[k] = ⟨x, s_k⟩ = Σ_{n=0}^{N−1} x[n] e^{−j(2π/N)kn}/√N

Synthesis (Inverse Normalized DFT)

x = S X

x[n] = Σ_{k=0}^{N−1} X[k] e^{j(2π/N)kn}/√N

7
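The normalized DFT can be built directly from the basis matrix S and checked against NumPy's (unnormalized) fft; this is a sketch, with all variable names illustrative:

```python
import numpy as np

N = 16
n = np.arange(N)

# Orthonormal basis matrix of harmonic sinusoids: [S]_{n,k} = e^{j 2 pi k n / N} / sqrt(N)
S = np.exp(2j * np.pi * np.outer(n, n) / N) / np.sqrt(N)

x = np.random.default_rng(1).standard_normal(N)

X = S.conj().T @ x               # analysis:  X = S^H x  (normalized forward DFT)
x_hat = S @ X                    # synthesis: x = S X    (normalized inverse DFT)

# Matches numpy's unnormalized FFT divided by sqrt(N), and reconstructs x exactly
print(np.allclose(X, np.fft.fft(x) / np.sqrt(N)))   # True
print(np.allclose(x_hat, x))                        # True
```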
Interpretation: Signal Representation by Harmonic Sinusoids
Analysis (Forward DFT)
• Choose the DFT coefficients X[k] such that the synthesis produces the signal x
• The weight X[k] measures the similarity between x and the harmonic sinusoid sk
• Therefore, X[k] measures the “frequency content” of x at frequency k

X[k] = ⟨x, s_k⟩ = Σ_{n=0}^{N−1} x[n] e^{−j(2π/N)kn}/√N

Synthesis (Inverse DFT)


• Build up the signal x as a linear combination of harmonic sinusoids sk weighted by the
DFT coefficients X[k]

x[n] = Σ_{k=0}^{N−1} X[k] e^{j(2π/N)kn}/√N

8
Example: Signal Representation by Harmonic Sinusoids
Analysis (Forward DFT)
• Choose the DFT coefficients X[k] such that the synthesis produces the signal x
• X[k] measures the similarity between x and the harmonic sinusoid sk
• Therefore, X[k] measures the “frequency content” of x at frequency k
• Even if the signal x is real-valued, the DFT coefficients X will be complex, in general

X[k] = ⟨x, s_k⟩ = Σ_{n=0}^{N−1} x[n] e^{−j(2π/N)kn}/√N

[Figure: a real time-domain signal x[n] and its DFT magnitude |X[k]| (frequency domain)]

9
The Unnormalized DFT
Normalized forward and inverse DFT

X[k] = Σ_{n=0}^{N−1} x[n] e^{−j(2π/N)kn}/√N

x[n] = Σ_{k=0}^{N−1} X[k] e^{j(2π/N)kn}/√N

Unnormalized forward and inverse DFT is more popular in practice (we will use both)

X_u[k] = Σ_{n=0}^{N−1} x[n] e^{−j(2π/N)kn}

x[n] = (1/N) Σ_{k=0}^{N−1} X_u[k] e^{j(2π/N)kn}

10
Summary
The discrete Fourier transform (DFT) is an orthonormal basis transformation based on the
harmonic sinusoids

s_k[n] = e^{j(2π/N)kn}/√N

The DFT maps signals from the “time domain” (x[n]) to the “frequency domain” (X[k])

The DFT coefficient X[k] measures the similarity between the time signal x and the harmonic
sinusoid sk with frequency k

The set of DFT coefficients X contains all of the information in the signal x (and vice versa)

Do not confuse the normalized and unnormalized DFTs! The normalized DFT is more elegant,
but the unnormalized DFT is much more popular in practice

11
Discrete Fourier Transform
Examples
Discrete Fourier Transform in Matlab

Useful Matlab commands: fft, fftshift, semilogy. Click here to view a video
demonstration.

2
Discrete Fourier Transform
Properties
Properties of the DFT
Normalized forward and inverse DFT

X[k] = Σ_{n=0}^{N−1} x[n] e^{−j(2π/N)kn}/√N

x[n] = Σ_{k=0}^{N−1} X[k] e^{j(2π/N)kn}/√N

Unnormalized forward and inverse DFT is more popular in practice (we will use both)

X_u[k] = Σ_{n=0}^{N−1} x[n] e^{−j(2π/N)kn}

x[n] = (1/N) Σ_{k=0}^{N−1} X_u[k] e^{j(2π/N)kn}

2
DFT Pairs

If x[n] and X[k] are such that

X[k] = Σ_{n=0}^{N−1} x[n] e^{−j(2π/N)kn}/√N

x[n] = Σ_{k=0}^{N−1} X[k] e^{j(2π/N)kn}/√N

then we say they are a DFT pair

x[n] ←DFT→ X[k]

3
The DFT is Periodic
The DFT is of finite length N, but it can also be interpreted as periodic with period N

X[k] = X[k + lN],   l ∈ Z

Proof:

X[k + lN] = Σ_{n=0}^{N−1} x[n] e^{−j(2π/N)(k+lN)n}/√N = Σ_{n=0}^{N−1} x[n] (e^{−j(2π/N)kn}/√N) e^{−j2πln} = X[k]  ✓

[Figure: |X[k]| for a DFT of length N = 16, plotted over several periods of k]
4
DFT Frequencies
X[k] = ⟨x, s_k⟩ = Σ_{n=0}^{N−1} x[n] e^{−j(2π/N)kn}/√N

X[k] measures the similarity between the time signal x and the harmonic sinusoid s_k

Therefore, X[k] measures the "frequency content" of x at frequency

ω_k = (2π/N) k

[Figure: |X[k]| for 0 ≤ k ≤ N − 1]

5
DFT Frequencies and Periodicity
Periodicity of the DFT means we can treat frequencies mod N

X[k] measures the "frequency content" of x at frequency ω_k = (2π/N)(k)_N

Example: X[N − 1] = X[(−1)_N] measures the "frequency content" of x at the (same) frequencies

ω_{N−1} = (2π/N)(N − 1) ≡ −2π/N

[Figure: |X[k]| plotted over several periods of k]
6
DFT Frequency Ranges
Periodicity of the DFT means every length-N interval of k carries the same information

Typical interval 1: 0 ≤ k ≤ N − 1 corresponds to frequencies ω_k in the interval 0 ≤ ω < 2π

[Figure: |X[k]| for 0 ≤ k ≤ N − 1]

Typical interval 2: −N/2 ≤ k ≤ N/2 − 1 corresponds to frequencies ω_k in the interval −π ≤ ω < π

[Figure: |X[k]| for −N/2 ≤ k ≤ N/2 − 1]

7
The Inverse DFT is Periodic
x[n] = Σ_{k=0}^{N−1} X[k] e^{j(2π/N)kn}/√N

The time signal produced by the inverse DFT (synthesis) is periodic with period N

x[n] = x[n + mN],   m ∈ Z

Proof:

x[n + mN] = Σ_{k=0}^{N−1} X[k] e^{j(2π/N)k(n+mN)}/√N = Σ_{k=0}^{N−1} X[k] (e^{j(2π/N)kn}/√N) e^{j2πkm} = x[n]  ✓

This should not be surprising, since the harmonic sinusoids are periodic with period N

8
DFT and Circular Shift

If x[n] and X[k] are a DFT pair, then

x[(n − m)_N] ←DFT→ e^{−j(2π/N)km} X[k]

Proof: Use the change of variables r = (n − m)_N

Σ_{n=0}^{N−1} x[(n − m)_N] e^{−j(2π/N)kn}/√N = Σ_{r=0}^{N−1} x[r] e^{−j(2π/N)k(r+m)}/√N = e^{−j(2π/N)km} Σ_{r=0}^{N−1} x[r] e^{−j(2π/N)kr}/√N

= e^{−j(2π/N)km} X[k]  ✓

9
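The circular-shift property is easy to check numerically (a NumPy sketch; the shift m = 3 and the random signal are arbitrary illustrative choices, and the property holds equally for the unnormalized DFT used here):

```python
import numpy as np

N = 16
rng = np.random.default_rng(2)
x = rng.standard_normal(N)
m = 3                                    # circular shift amount

k = np.arange(N)
lhs = np.fft.fft(np.roll(x, m))          # DFT of x[(n - m)_N]
rhs = np.exp(-2j * np.pi * k * m / N) * np.fft.fft(x)

print(np.allclose(lhs, rhs))             # True
```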
DFT and Modulation

If x[n] and X[k] are a DFT pair, then

e^{j(2π/N)ln} x[n] ←DFT→ X[(k − l)_N]

Proof:

Σ_{n=0}^{N−1} x[n] e^{j(2π/N)ln} e^{−j(2π/N)kn}/√N = Σ_{n=0}^{N−1} x[n] e^{−j(2π/N)(k−l)n}/√N = X[(k − l)_N]  ✓

10
DFT and Circular Convolution

x → h → y

y[n] = x[n] ⊛ h[n] = Σ_{m=0}^{N−1} h[(n − m)_N] x[m]

If

x[n] ←DFT→ X_u[k],   h[n] ←DFT→ H_u[k],   y[n] ←DFT→ Y_u[k]

then

Y_u[k] = H_u[k] X_u[k]

Circular convolution in the time domain = multiplication in the frequency domain

11
DFT and Circular Convolution – Proof
x → h → y

y[n] = x[n] ⊛ h[n] = Σ_{m=0}^{N−1} h[(n − m)_N] x[m]

Proof:

Y_u[k] = Σ_{n=0}^{N−1} y[n] e^{−j(2π/N)kn} = Σ_{n=0}^{N−1} (Σ_{m=0}^{N−1} h[(n − m)_N] x[m]) e^{−j(2π/N)kn}

= Σ_{m=0}^{N−1} x[m] (Σ_{n=0}^{N−1} h[(n − m)_N] e^{−j(2π/N)kn}) = Σ_{m=0}^{N−1} x[m] (Σ_{r=0}^{N−1} h[r] e^{−j(2π/N)k(r+m)})

= (Σ_{m=0}^{N−1} x[m] e^{−j(2π/N)km}) (Σ_{r=0}^{N−1} h[r] e^{−j(2π/N)kr}) = X_u[k] H_u[k]  ✓

12
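The theorem can be sanity-checked numerically: compute the circular convolution directly from its definition and via the product of unnormalized DFTs (a NumPy sketch; the random signals are arbitrary):

```python
import numpy as np

N = 16
rng = np.random.default_rng(3)
x = rng.standard_normal(N)
h = rng.standard_normal(N)

# Direct circular convolution: y[n] = sum_m h[(n - m)_N] x[m]
y_direct = np.array([sum(h[(n - m) % N] * x[m] for m in range(N)) for n in range(N)])

# Frequency-domain route: Y_u[k] = H_u[k] X_u[k]
y_fft = np.fft.ifft(np.fft.fft(h) * np.fft.fft(x)).real

print(np.allclose(y_direct, y_fft))      # True
```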
The DFT is Linear

It is trivial to show that if

x_1[n] ←DFT→ X_1[k],   x_2[n] ←DFT→ X_2[k]

then

α_1 x_1[n] + α_2 x_2[n] ←DFT→ α_1 X_1[k] + α_2 X_2[k]

13
The DFT is Complex Valued
Even if the signal x[n] is real-valued, the DFT is complex-valued, in general

[Figure: a real signal x[n] with plots of Re(X[k]), Im(X[k]), |X[k]|, and ∠X[k]]

14
DFT Symmetry Properties (1)

The harmonic sinusoid basis elements s_k[n] = e^{j(2π/N)kn} of the DFT have symmetry properties:

Re(e^{j(2π/N)kn}) = cos((2π/N)kn)   (even function)

Im(e^{j(2π/N)kn}) = sin((2π/N)kn)   (odd function)

These induce corresponding symmetry properties on X[k] around the frequency k = 0

Even signal/DFT:   x[n] = x[(−n)_N],   X[k] = X[(−k)_N]

Odd signal/DFT:   x[n] = −x[(−n)_N],   X[k] = −X[(−k)_N]

15
DFT Symmetry Properties (2)
x[n]               X[k]                 Re(X[k])   Im(X[k])   |X[k]|   ∠X[k]
real               X[−k] = X[k]^*       even       odd        even     odd
real & even        real & even          even       zero       even     0 or π
real & odd         imaginary & odd      zero       odd        even     ±π/2
imaginary          X[−k] = −X[k]^*      odd        even       even     odd
imaginary & even   imaginary & even     zero       even       even     ±π/2
imaginary & odd    real & odd           odd        zero       even     0 or π

16
DFT Symmetry Properties (3)

Prove that if x[n] is real, then X[−k] = X[k]^*

Simply compute X[k]^* and use the fact that x[n] is real

X[k]^* = (Σ_{n=0}^{N−1} x[n] e^{−j(2π/N)kn}/√N)^* = Σ_{n=0}^{N−1} x[n]^* e^{+j(2π/N)kn}/√N = Σ_{n=0}^{N−1} x[n] e^{−j(2π/N)(−k)n}/√N = X[−k]  ✓

Easy to continue on to prove that Re(X[−k]) = Re(X[k]) (that is, the real part of X[k] is even)
by taking the real part of both sides of the equation X[−k] = X[k]^*

17
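This conjugate-symmetry property can be checked numerically for a real signal (a NumPy sketch; negative indices are taken mod N as in the slides):

```python
import numpy as np

N = 16
x = np.random.default_rng(4).standard_normal(N)   # real-valued signal
X = np.fft.fft(x)

k = np.arange(N)
X_neg_k = X[(-k) % N]                     # X[-k], interpreted mod N: X[(-k)_N]

print(np.allclose(X_neg_k, X.conj()))     # X[-k] = X[k]^* : True
print(np.allclose(X_neg_k.real, X.real))  # real part of X[k] is even: True
```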
DFT Symmetry Properties (4)
Example: Real-valued signal x[n]

[Figure: x[n] with plots of Re(X[k]) (even), Im(X[k]) (odd), |X[k]| (even), and ∠X[k] (odd)]
18
DFT Symmetry Properties (5)
Example: Real-valued signal x[n], but plotting X[k] using the Matlab fftshift command

[Figure: the same DFT plots re-indexed so that frequency k = 0 sits at the center of the axis]
19
Duality of the DFT

Note that the inverse and forward DFT formulas are identical except for conjugation of the
harmonic sinusoids

X[k] = Σ_{n=0}^{N−1} x[n] e^{−j(2π/N)kn}/√N

x[n] = Σ_{k=0}^{N−1} X[k] e^{j(2π/N)kn}/√N

Thus, any DFT property that is true for x[n] is also true for X[−k]

Example: If X[k] is real, then Re(x[n]) is even and Im(x[n]) is odd

20
Summary

DFT and inverse DFT are periodic

Useful to index X[k] by −N/2 ≤ k ≤ N/2 − 1 (frequencies −π ≤ (2π/N)k < π) as well as by
0 ≤ k ≤ N − 1 (frequencies 0 ≤ (2π/N)k < 2π)

Circular convolution in time becomes simple multiplication in frequency

DFT has useful symmetry properties

21
Fast Fourier Transform
(FFT)
Cost to Compute the Discrete Fourier Transform
Recall the (unnormalized) DFT of the time signal x[n]

X_u[k] = Σ_{n=0}^{N−1} x[n] e^{−j(2π/N)kn},   0 ≤ k ≤ N − 1

What is the computational cost of the DFT?

Number of multiplies: Must multiply x[n] e^{−j(2π/N)kn} for each value of
n = 0, 1, …, N − 1 and k = 0, 1, …, N − 1 ⇒ N² total multiplies

Number of additions: Must sum the N products x[n] e^{−j(2π/N)kn} for each value of
k = 0, 1, …, N − 1 ⇒ N(N − 1) ≈ N² total adds

Total computational cost of the DFT: N² adds and N² multiplies – O(N²) complexity

2
Fast Fourier Transform
O(N²) computational complexity is too high for many important applications; it is not
uncommon to have N = 10⁷ or more

Important step forward in 1965: Cooley and Tukey “discovered” the fast Fourier transform
(FFT), which lowers the computational complexity to O(N log N )

Example: For N = 10⁷

2N² = 2 × 10¹⁴   vs.   2N log₂ N ≈ 4.65 × 10⁸

It turns out that Gauss invented the FFT in 1805 (Heideman, Johnson, Burrus, 1984)

3
Fast Fourier Transform
There are many different kinds of FFTs; here we will study the simplest:
the radix-2, decimation-in-time FFT

Clearly we can use the same methods to speed up both the forward and inverse DFT (by duality);
we will work with the forward DFT (and drop the subscript u for unnormalized)
X[k] = Σ_{n=0}^{N−1} x[n] e^{−j(2π/N)kn}

To keep the notation clean, define the twiddle factor W_N = e^{−j(2π/N)} ∈ C

X[k] = Σ_{n=0}^{N−1} x[n] W_N^{kn}

4
Twiddle Factors are Periodic


Note that the twiddle factors WN = e−j N are periodic in n and k

k(n+N ) (k+N )n
WNkn = WN = WN

Proof
2π 2π 2π
e−j N kn = e−j N k(n+N ) = e−j N (k+N )n

5
FFT – Step 1
In the radix-2, decimation-in-time FFT, the signal length N is a power of 2

The FFT is a divide-and-conquer algorithm: we will split the length-N DFT into two length-N/2
DFTs and then iterate; each split will save on computations

We will work out the specific example of an N = 8 DFT, but the ideas extend to any
power-of-two length

Step 1: Break the signal x[n] into two sub-signals:
• even samples: x[2n]
• odd samples: x[2n + 1],   n = 0, 1, …, N/2 − 1

X[k] = Σ_{n=0}^{N−1} x[n] W_N^{kn} = Σ_{n=0}^{N/2−1} x[2n] W_N^{k(2n)} + Σ_{n=0}^{N/2−1} x[2n + 1] W_N^{k(2n+1)}

6
FFT – Step 2
Step 2: Reorganize the two sums into two length-N/2 DFTs

X[k] = Σ_{n=0}^{N−1} x[n] W_N^{kn} = Σ_{n=0}^{N/2−1} x[2n] W_N^{k(2n)} + Σ_{n=0}^{N/2−1} x[2n + 1] W_N^{k(2n+1)}

= Σ_{n=0}^{N/2−1} x[2n] W_N^{2kn} + W_N^k Σ_{n=0}^{N/2−1} x[2n + 1] W_N^{2kn}

Note that W_N^{2kn} = e^{−j(2π/N)2kn} = e^{−j(2π/(N/2))kn} = W_{N/2}^{kn}, and so we have …

Term 1 = E[k] = Σ_{n=0}^{N/2−1} x[2n] W_{N/2}^{kn} = the N/2-point DFT of the even samples of x[n]

Term 2 = W_N^k O[k] = W_N^k Σ_{n=0}^{N/2−1} x[2n + 1] W_{N/2}^{kn} = W_N^k times the N/2-point DFT of the odd samples of x[n]

7
FFT – Step 3
Step 3: Not so fast! We need to evaluate

X[k] = Σ_{n=0}^{N/2−1} x[2n] W_{N/2}^{kn} + W_N^k Σ_{n=0}^{N/2−1} x[2n + 1] W_{N/2}^{kn} = E[k] + W_N^k O[k]

for the entire range k = 0, 1, …, N − 1 and not just k = 0, 1, …, N/2 − 1

Periodicity of the twiddle factors implies that E[k] and O[k] are also periodic with period N/2

E[k + N/2] = Σ_{n=0}^{N/2−1} x[2n] W_{N/2}^{(k+N/2)n} = Σ_{n=0}^{N/2−1} x[2n] W_{N/2}^{kn} = E[k]

and similarly

O[k + N/2] = O[k]

8
FFT – The Result
X[k] = E[k] + W_N^k O[k],   k = 0, 1, …, N − 1

9
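The derivation above translates directly into a short recursive implementation. This is a sketch, not the course's code; `fft_radix2` is an illustrative name, and the second half of the output uses W_N^{k+N/2} = −W_N^k together with the N/2-periodicity of E and O:

```python
import numpy as np

def fft_radix2(x):
    """Radix-2, decimation-in-time FFT for power-of-two lengths."""
    N = len(x)
    if N == 1:
        return np.asarray(x, dtype=complex)
    E = fft_radix2(x[0::2])              # N/2-point DFT of even samples
    O = fft_radix2(x[1::2])              # N/2-point DFT of odd samples
    k = np.arange(N // 2)
    W = np.exp(-2j * np.pi * k / N)      # twiddle factors W_N^k
    # X[k] = E[k] + W_N^k O[k];  X[k + N/2] = E[k] - W_N^k O[k]  (the butterfly)
    return np.concatenate([E + W * O, E - W * O])

x = np.random.default_rng(5).standard_normal(8)
print(np.allclose(fft_radix2(x), np.fft.fft(x)))   # True
```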
FFT – Iterate
Divide and Conquer! Break the two length-N/2 DFTs into four length-N/4 DFTs

10
FFT – Divided and Conquered

Iteration ends when we reach a length-2 DFT (here N/4 = 2)

Length-2 DFT has a lovely butterfly structure

11
FFT of Length 8

12
Computational Savings
FFT (N = 8): Multiplies and adds required

2 × 8 = 16 multiplies

3 × 8 = 24 adds

DFT (N = 8): Multiplies and adds required

8² = 64 multiplies

8² − 8 = 56 adds

In general, since a length-2^q FFT consists of q stages, the total number of multiplies
and adds scales as

N log₂ N ≪ N²

13
Summary
The FFT has been called the “most important computational algorithm of our generation”

The field of digital signal processing exploded after its introduction (1965)

Why it works:
• Symmetry and periodicity of sinusoids
• Divide and conquer

There are many different kinds of FFTs for different lengths and different situations

Rice University’s resident FFT expert: Prof. C. Sidney Burrus

14
Fast Convolution
Cost to Compute a Circular Convolution
Recall the circular convolution of two length-N time signals x[n] and h[n]

y[n] = x[n] ⊛ h[n] = Σ_{m=0}^{N−1} h[(n − m)_N] x[m],   0 ≤ n ≤ N − 1

What is the computational cost of circular convolution?

Number of multiplies: Must multiply h[(n − m)_N] x[m] for each value of
m = 0, 1, …, N − 1 and n = 0, 1, …, N − 1 ⇒ N² total multiplies

Number of additions: Must sum the N products h[(n − m)_N] x[m] for each value of
n = 0, 1, …, N − 1 ⇒ N(N − 1) ≈ N² total adds

Total computational cost: N² adds and N² multiplies – O(N²) complexity

2
Circular Convolution via the FFT
We can reduce the computational cost substantially if we move to the frequency domain using
the DFT as computed by the FFT

Step 1: Compute H[k] and X[k] of h[n] and x[n], respectively


• Computational cost = O(N log N )

Step 2: Multiply Y [k] = H[k] × X[k]


• Computational cost = O(N )

Step 3: Compute y[n] via the inverse DFT of Y [k]


• Computational cost = O(N log N )

Total computational cost: O(N log N) ≪ O(N²)

3
Extension to Fast Convolution of Infinite-Length, Finite-Duration Signals

Applications tend to use (at least implicitly) infinite-length convolution more often than circular
convolution

Fortunately there is a clever way to trick a circular convolution into performing an infinite-length
convolution

Basic idea: zero pad the signals such that any circular wrap-around effects are zeroed out by the
zero padding

4
Duration of Convolution
Recall that, if x has duration D_x samples and h has duration D_h samples, then the infinite-length
convolution y = x ∗ h has duration at most D_x + D_h − 1 samples (proof by picture is simple)

[Figure: x[n] with D_x = 24, h[n] with D_h = 14, and y[n] with D_y = D_x + D_h − 1 = 37]
5
Extension to Fast Convolution of Infinite-Length, Finite-Duration Signals
If x has duration Dx samples and h has duration Dh samples, then the infinite-length
convolution y = x ∗ h has duration at most Dx + Dh − 1 samples

If we zero pad both x and h to length D_x + D_h − 1 and compute their circular convolution
y′ = x_zp ⊛ h_zp …

Then the nonzero entries of the circular convolution y′ will agree with those of the infinite-length
convolution y

6
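The zero-padding trick can be sketched as follows (NumPy; the durations 24 and 14 match the figure's example, while the random signals themselves are illustrative — `np.fft.fft(x, L)` zero-pads x to length L):

```python
import numpy as np

rng = np.random.default_rng(6)
x = rng.standard_normal(24)      # duration D_x = 24
h = rng.standard_normal(14)      # duration D_h = 14

L = len(x) + len(h) - 1          # D_x + D_h - 1 = 37

# Zero pad both signals to length L; then circular convolution = linear convolution
y_fast = np.fft.ifft(np.fft.fft(x, L) * np.fft.fft(h, L)).real

print(np.allclose(y_fast, np.convolve(x, h)))   # matches direct linear convolution: True
```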
Summary

Convolution computation can be expensive – O(N 2 ) complexity

Fast convolution is much faster – only O(N log N ) complexity

Can compute circular convolution by multiplying DFTs computed via the FFT

Can compute finite-duration, infinite-length convolution by zero padding before FFT

7
More Orthogonal Bases
Recall: Signal Representation by Orthonormal Basis
Given an orthonormal basis {b_k}_{k=0}^{N−1} for C^N and orthonormal basis matrix B, we have the
following signal representation for any signal x

x = B a = Σ_{k=0}^{N−1} α_k b_k   (synthesis)

a = B^H x   or   α_k = ⟨x, b_k⟩   (analysis)

Synthesis: Build up the signal x as a linear combination of the basis elements bk weighted by
the weights αk

Analysis: Compute the weights αk such that the synthesis produces x; the weight αk measures
the similarity between x and the basis element bk

2
More Orthonormal Bases
The DFT is the right transform to study LTI systems; the frequency domain arises naturally

DFT is based on complex-valued basis elements (the harmonic sinusoids)

Challenge 1: A signal x ∈ R^N has N complex DFT coefficients (real and imaginary parts of
each DFT coefficient)
• This is a problem in compression applications, where we would like to approximate a smooth signal x
by just a few DFT coefficients (2× redundancy)

Challenge 2: Some signals are best represented neither in the time domain nor the frequency
domain
• For example, in a domain in between time and frequency (“time-frequency”, like a musical score)

Due to these and other challenges, there has been significant interest in developing additional
orthonormal basis transformations beyond the DFT

3
Discrete Cosine Transform (DCT)
A DFT-like transform but using real-valued basis functions (dct in Matlab)
• There are actually several different DCTs; here we present just one of them (the “DCT-II”)
• DCT is the transform inside JPEG image compression and MPEG video compression

DCT basis functions of length N (orthogonal)

d_k[n] = cos((π/N)(n + 1/2)k),   0 ≤ n, k ≤ N − 1

Example: N = 32, k = 3

[Figure: the basis function d_3[n] for 0 ≤ n ≤ 31]

4
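The orthogonality of the DCT-II basis functions is easy to check numerically (a NumPy sketch; note the columns of D below are orthogonal but not yet unit-norm, so the Gram matrix is diagonal rather than the identity):

```python
import numpy as np

N = 32
n = np.arange(N)
k = np.arange(N)

# DCT-II basis functions d_k[n] = cos((pi/N)(n + 1/2)k), stacked as columns
D = np.cos(np.pi / N * np.outer(n + 0.5, k))

# Gram matrix: diagonal, since the d_k are mutually orthogonal
G = D.T @ D
print(np.allclose(G, np.diag(np.diag(G))))   # True
```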
DCT Orthogonal Basis Matrix
DCT basis matrix compared to the real/imaginary parts of the DFT basis matrix (N = 16)

[Figure: real part of the DFT basis matrix, imaginary part of the DFT basis matrix, and the DCT basis matrix]

5
DCT vs. DFT for Compression
DFT (real and imaginary parts) and DCT of a test signal

[Figure: a test signal, its DCT coefficients, and the real and imaginary parts of its DFT coefficients]

6
Between Time and Frequency
Some signals are best represented neither in the time domain nor the frequency domain

For example, many transient signals (audio, speech, images, etc.) are best represented in a
domain in between time and frequency (“time-frequency”, like a musical score)

7
Haar Wavelet Transform
Haar wavelet transform (1910): Key departure from DFT and DCT
• Basis functions are local (short duration, "local waves")
• Basis functions are multiscale (many different durations) edge detectors (derivatives)

[Figure: fine-scale, mid-scale, and coarse-scale Haar wavelets of length 16]

8
Haar Wavelet Transform Basis Matrix
Wavelets are inside JPEG2000 image compression and many image analysis/processing systems

Haar wavelet basis matrix compared to the real/imaginary parts of the DFT basis matrix
(N = 16)

[Figure: real part of the DFT basis matrix, imaginary part of the DFT basis matrix, and the Haar wavelet basis matrix]

9
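One standard way to build an orthonormal Haar basis is the Kronecker-product recursion sketched below (NumPy; `haar_matrix` is an illustrative name, and this particular construction is an assumption, not taken from the slides):

```python
import numpy as np

def haar_matrix(N):
    """Orthonormal Haar wavelet basis (as rows) for N a power of two,
    built by the standard Kronecker-product recursion."""
    H = np.array([[1.0]])
    while H.shape[0] < N:
        n = H.shape[0]
        H = np.vstack([np.kron(H, [1.0, 1.0]),             # averaging (coarser scales)
                       np.kron(np.eye(n), [1.0, -1.0])])   # differencing (wavelets)
    # Normalize each row to unit 2-norm
    return H / np.linalg.norm(H, axis=1, keepdims=True)

W = haar_matrix(16)
print(np.allclose(W @ W.T, np.eye(16)))   # orthonormal basis: True
```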
Short-Time Fourier Transform (1)

STFT analyzes how a signal’s frequency content changes over time – local Fourier analysis

Given a signal x[n] to analyze and a window w[n]


• Window the signal with the window shifted to time m: x[n] w[n − m]
• Compute the DFT of the windowed signal and stack as a column in an STFT matrix
• Plot the STFT matrix as a 2D image (imagesc in Matlab, for example)

10
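The three steps above can be sketched directly (NumPy; `stft`, the Hann window, and the hop size are all illustrative choices, not the course's code):

```python
import numpy as np

def stft(x, win, hop):
    """Columns are DFTs of windowed segments x[s:s+L] * win; only full frames are taken."""
    L = len(win)
    starts = range(0, len(x) - L + 1, hop)
    return np.stack([np.fft.fft(x[s:s + L] * win) for s in starts], axis=1)

# A toy signal whose frequency jumps halfway through
n = np.arange(256)
x = np.concatenate([np.cos(2 * np.pi * 0.05 * n[:128]),
                    np.cos(2 * np.pi * 0.20 * n[128:])])

S = stft(x, win=np.hanning(32), hop=16)
spectrogram = np.abs(S) ** 2      # |STFT|^2; display as a 2D image (imagesc/imshow)
print(S.shape)                    # (32, 15): 32 frequency bins by 15 time frames
```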
Short-Time Fourier Transform (2)
STFT analyzes how a signal’s frequency content changes over time – local Fourier analysis

11
Short-Time Fourier Transform (3)
|STFT|² is called the spectrogram (spectrogram in Matlab)

The STFT can be configured to be an orthonormal basis, but this is generally not done in practice

12
Summary

The DFT is fundamental, especially for studying LTI systems


• Only the DFT is defined in terms of basis functions that are eigenfunctions of LTI systems
(harmonic sinusoids)

But other orthogonal bases play important roles in diverse applications, especially signal analysis
and compression

Examples: DCT, wavelets, short-time Fourier transform, and beyond

13
