
Massachusetts Institute of Technology

Department of Electrical Engineering and Computer Science


6.341: Discrete-Time Signal Processing
OpenCourseWare 2006
Lecture 20
The Goertzel Algorithm and the Chirp Transform

Reading: Sections 9.0 - 9.2 and 9.6 in Oppenheim, Schafer & Buck (OSB).

In the previous lecture we discussed a well-known class of algorithms for computing the DFT
efficiently. While the fast Fourier transform's various incarnations have gained considerable
popularity, careful selection of an appropriate algorithm for computing the DFT in practice
need not be limited to choosing between these so-called fast implementations. We'll therefore
focus in this lecture on two other techniques for computing the DFT: the Goertzel algorithm
and the chirp transform.
Before going further, think about the following question:
Given a signal x[n] with corresponding 8-point DFT X[k], what is the most efficient
way, in terms of total multiplications required, to compute X[5] from x[n]?
To start, note that using a radix-2 FFT algorithm isn't the best choice; judicious application of
the canonical DFT equation is enough to beat it. Computing X[5] requires roughly 8 \log_2 8 = 24
complex multiplications when employing a radix-2 FFT algorithm, since X[k] for k = 0, 1, \ldots, 7
must first be computed. However, calculating

X[5] = \sum_{n=0}^{7} x[n] e^{-j(2\pi/8)5n}

using OSB Equation 8.11 requires only 8 complex multiplications and is strikingly simple compared
to the radix-2 FFT algorithms, depicted in OSB Figure 9.10. This example illustrates the point
that while the FFT algorithms may be useful for a wide class of problems, they are by no means
the most efficient choice in all situations.
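
As a concrete (non-OSB) illustration of this count, the following Python sketch evaluates the
single bin X[5] directly and compares it against NumPy's FFT; the function name single_bin_dft
and the test signal are illustrative only.

    import numpy as np

    def single_bin_dft(x, k):
        # Direct evaluation of X[k]: one complex multiplication per sample, N in total.
        N = len(x)
        n = np.arange(N)
        return np.sum(x * np.exp(-2j * np.pi * k * n / N))

    x = np.random.randn(8) + 1j * np.random.randn(8)   # arbitrary 8-point signal
    print(single_bin_dft(x, 5))       # 8 complex multiplications
    print(np.fft.fft(x)[5])           # same value, but all 8 bins get computed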

The Goertzel Algorithm

We'll now discuss the Goertzel algorithm, an efficient method (in terms of multiplications) for
computing X[k] for a given k. The derivation of the algorithm, which is developed in OSB
Section 9.2, begins by noting that the DFT can be formulated in terms of a convolution.
X[k] = \sum_{n=0}^{N-1} x[n] W_N^{nk},                    W_N = e^{-j 2\pi/N}

     = \sum_{r=0}^{N-1} x[r] W_N^{-k(N-r)},               (using W_N^{-kN} = 1)

     = \big( x[n] * W_N^{-nk} u[n] \big)\Big|_{n=N}.

Specifically, processing a signal x[n] through an LTI filter with impulse response h[n] = W_N^{-nk} u[n]
and evaluating the result, y_k[n], at n = N will give the corresponding N-point DFT coefficient
X[k] = y_k[N]. This LTI filtering process is illustrated below.

x[n] ---> [ h[n] = W_N^{-nk} u[n] ] ---> y_k[n] = x[n] * W_N^{-nk} u[n]


Representing the filter by its z-transform, we have

x[n] ---> [ \sum_{m=0}^{\infty} W_N^{-mk} z^{-m} ] ---> y_k[n],

and since

H(z) = \sum_{m=0}^{\infty} W_N^{-mk} z^{-m}

     = \frac{1 - W_N^{-k} z^{-1}}{1 - W_N^{-k} z^{-1}} \sum_{m=0}^{\infty} W_N^{-mk} z^{-m}

     = \frac{\sum_{m=0}^{\infty} W_N^{-mk} z^{-m} - \sum_{m=0}^{\infty} W_N^{-(m+1)k} z^{-(m+1)}}{1 - W_N^{-k} z^{-1}}

     = \frac{W_N^{0} z^{0}}{1 - W_N^{-k} z^{-1}}

     = \frac{1}{1 - W_N^{-k} z^{-1}},

the filtering operation can be equivalently performed by the system

x[n] ---> [ \frac{1}{1 - W_N^{-k} z^{-1}} ] ---> y_k[n],

with the associated flow graph depicted in OSB Figure 9.10.
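
For intuition, this first-order system corresponds to the recursion y_k[n] = W_N^{-k} y_k[n-1] + x[n].
A minimal Python sketch (not from OSB; the name goertzel_first_order is illustrative) runs the
recursion through n = N, with x[N] = 0, and returns y_k[N] = X[k]:

    import numpy as np

    def goertzel_first_order(x, k):
        # y_k[n] = W_N^{-k} y_k[n-1] + x[n], run through n = N with x[N] = 0;
        # X[k] = y_k[N].
        N = len(x)
        w = np.exp(2j * np.pi * k / N)    # W_N^{-k}
        y = 0.0
        for sample in x:
            y = w * y + sample
        return w * y                      # the n = N step, with x[N] = 0

    x = np.random.randn(8) + 1j * np.random.randn(8)
    print(goertzel_first_order(x, 5), np.fft.fft(x)[5])   # should agree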


Noting that

H(z) = \frac{1}{1 - W_N^{-k} z^{-1}}

     = \frac{1 - W_N^{k} z^{-1}}{\left(1 - W_N^{-k} z^{-1}\right)\left(1 - W_N^{k} z^{-1}\right)}

     = \frac{1 - W_N^{k} z^{-1}}{1 - 2\cos\left(\frac{2\pi k}{N}\right) z^{-1} + z^{-2}},

the filtering operation can also be equivalently performed by

x[n] ---> [ \frac{1 - W_N^{k} z^{-1}}{1 - 2\cos\left(\frac{2\pi k}{N}\right) z^{-1} + z^{-2}} ] ---> y_k[n],

with the associated flow graph depicted in OSB Figure 9.20.


Since we're only evaluating the output of this filter at y_k[N], the multiplier W_N^{k} need only
be used at time n = N. With this in mind, the algorithm requires N real multiplications
and a single complex multiplication to compute X[k] for a given k. This compares favorably
to N complex multiplications as required by the canonical DFT equation and approximately
N \log_2 N complex multiplications as required by the radix-2 FFT algorithms, when computing
X[k] for a given k. A final note about the Goertzel algorithm: since it is not restricted to
computing \sum_{n=0}^{N-1} x[n] W_N^{nk} for integer values of k, the algorithm has the convenient
property that it can efficiently compute X(e^{j\omega}) for an arbitrary choice of \omega.
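
To make the bookkeeping concrete, here is a minimal Python sketch of the second-order structure
(an illustrative implementation consistent with the discussion above, not code from OSB; the name
goertzel is arbitrary). The real-coefficient denominator recursion runs for n = 0, \ldots, N with
x[N] = 0, and the complex numerator multiplication by W_N^{k} is applied only once, at n = N:

    import numpy as np

    def goertzel(x, k):
        # Denominator recursion s[n] = x[n] + 2cos(2 pi k / N) s[n-1] - s[n-2],
        # run for n = 0, ..., N with x[N] = 0.
        N = len(x)
        coeff = 2.0 * np.cos(2 * np.pi * k / N)
        s_prev, s_prev2 = 0.0, 0.0
        for sample in x:
            s = sample + coeff * s_prev - s_prev2
            s_prev2, s_prev = s_prev, s
        s_N = coeff * s_prev - s_prev2            # the n = N step (x[N] = 0)
        # Numerator: the single complex multiplication by W_N^k, applied only at n = N.
        return s_N - np.exp(-2j * np.pi * k / N) * s_prev

    x = np.random.randn(8)                        # for real x, the loop is all real arithmetic
    print(goertzel(x, 5), np.fft.fft(x)[5])       # should agree to rounding error

Note that the state recursion involves only the real coefficient 2\cos(2\pi k/N); the complex
factor enters once at the end, consistent with the multiplication count quoted above.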

The Chirp Transform Algorithm

The chirp transform algorithm, which is derived in detail in OSB Subsection 9.6.2, computes
X(e^{j\omega}) for uniformly-spaced samples of \omega anywhere along the unit circle, as depicted in OSB
Figure 9.25. The algorithm therefore computes

X(e^{j(\omega_0 + k\Delta\omega)}) = \sum_{n=0}^{N-1} x[n] e^{-j(\omega_0 + k\Delta\omega)n},     k = 0, 1, \ldots, M-1,

or equivalently,

X(e^{j\omega_k}) = \sum_{n=0}^{N-1} x[n] e^{-j\omega_k n},     k = 0, 1, \ldots, M-1,

where

\omega_k = \omega_0 + k\Delta\omega,     k = 0, 1, \ldots, M-1.

Defining W = e^{-j\Delta\omega}, this becomes

X(e^{j\omega_k}) = \sum_{n=0}^{N-1} x[n] e^{-j\omega_0 n} W^{nk}

                = \sum_{n=0}^{N-1} x[n] e^{-j\omega_0 n} W^{n^2/2} W^{k^2/2} W^{-(k-n)^2/2}

                = W^{k^2/2} \sum_{n=0}^{N-1} \left( e^{-j\omega_0 n} W^{n^2/2} x[n] \right) W^{-(k-n)^2/2}.

By defining g[n] = e^{-j\omega_0 n} W^{n^2/2} x[n], the above equation is equivalently represented as

X(e^{j\omega_k}) = W^{k^2/2} \left( g[n] * W^{-n^2/2} \right) \Big|_{n=k},

which leads to the block diagram depicted in OSB Figure 9.26. This system isn't realizable
as-is, since it is neither causal nor stable. If, however, we restrict ourselves to operating on a
finite-length causal signal x[n], the system can be implemented, since we're also only interested
in its output y[n] over a given, finite time interval. There are several possible causal realizations
of this, and in each case the implementation relies on appropriate bookkeeping.
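
One possible causal realization, assuming a finite-length causal x[n] as above, is sketched below
in Python (illustrative only, not OSB's implementation; the name chirp_transform and the test
values are arbitrary). The chirp W^{-n^2/2} is retained only over the indices n = -(N-1), \ldots, M-1
that the M output samples actually require:

    import numpy as np

    def chirp_transform(x, omega0, delta_omega, M):
        # Modulate / chirp-convolve / modulate, following the structure above:
        # X(e^{j omega_k}) = W^{k^2/2} * ( g[n] * W^{-n^2/2} ) evaluated at n = k.
        N = len(x)
        n = np.arange(N)
        g = x * np.exp(-1j * omega0 * n) * np.exp(-1j * delta_omega * n**2 / 2)  # g[n]
        m = np.arange(-(N - 1), M)                       # chirp indices actually needed
        h = np.exp(1j * delta_omega * m**2 / 2)          # W^{-m^2/2}
        y = np.convolve(g, h)                            # finite linear convolution
        k = np.arange(M)
        # y's index 0 corresponds to chirp index -(N-1), so sample at k + N - 1.
        return np.exp(-1j * delta_omega * k**2 / 2) * y[k + N - 1]   # W^{k^2/2} * (...)

    # Check against direct evaluation of the DTFT samples (arbitrary test values):
    x = np.random.randn(16)
    M, w0, dw = 8, 0.3, 0.05
    n = np.arange(16)
    direct = np.array([np.sum(x * np.exp(-1j * (w0 + kk * dw) * n)) for kk in range(M)])
    print(np.allclose(chirp_transform(x, w0, dw, M), direct))        # True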
There may also be some question as to how implementing X(e^{j\omega_k}) as in OSB Figure 9.26 could
be practically useful. Note, however, that for a given \Delta\omega, the analysis parameter \omega_0 appears only
at the first modulator block. As long as the frequency sample spacing \Delta\omega is fixed, the system
can therefore be easily adjusted to compute frequency samples beginning at any point on
the unit circle. Another advantage of this technique is that it computes DTFT samples using a
fixed convolution block, a property which is convenient when using discrete-time analog hardware,
such as charge-coupled devices (CCDs).
