IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 42, NO.6, JUNE 1994
REFERENCES
[1] R. G. Vaughan, N. L. Scott, and D. R. White, "The theory of bandpass sampling," IEEE Trans. Signal Processing, vol. 39, no. 9, Sept. 1991.
[2] A. Kohlenberg, "Exact interpolation of band-limited functions," J. Appl. Phys., vol. 24, Dec. 1953.
[3] D. A. Linden, "A discussion of sampling theorems," in Proc. IRE, vol. 47, 1959, pp. 1219-1226.
[4] O. D. Grace and S. P. Pitt, "Sampling and interpolation of bandlimited signals by quadrature methods," J. Acoust. Soc. Amer., vol. 48, no. 6, 1970.
[5] M. A. Poletti and A. J. Coulson, "On quadrature and uniform bandpass sampling," under review.
[6] D. W. Rice and K. H. Wu, "Quadrature sampling with high dynamic range," IEEE Trans. Aerosp. Electron. Syst., vol. AES-18, no. 4, Nov. 1982.
[7] W. M. Waters and B. R. Jarrett, "Bandpass signal sampling and coherent detection," IEEE Trans. Aerosp. Electron. Syst., vol. AES-18, no. 4, Nov. 1982.
[8] A. J. Coulson, R. G. Vaughan, and M. A. Poletti, "Interpolation in bandpass sampling," in Proc. Int. Symp. Signal Processing Applicat. (ISSPA-92), Aug. 1992, pp. 23-26.
Fig. 5. Spectra of a uniform quadrature sampled bandpass signal.
$e^{j[n\pi - \pi/2]}$. The time-domain description is the band-limited Hilbert transformer

$$s_Q(t) = \frac{2(-1)^n \sin^2(\pi B t)}{\pi t}. \tag{16}$$
Applying the method of Section III-B, it is found that the time delay in the sampling of the interpolant which ensures constructive aliasing in the spectra of the digital quadrature interpolant is $T_Q = -\frac{1}{4B}$. This time-shift is equal and opposite to the initial time difference between the sample streams. Using this result to digitize $s_Q(t)$, the digital quadrature interpolant is found to be the delayed, digital Hilbert transformer

(17)
The two post-interpolation sample streams are time-aligned; thus, following Section III-C, the digitized frequency-shifting function $y^{(1)} = (-1)^p$ applied to both sample streams will produce baseband $I_{b1}(t)$ and $Q_{b1}(t)$.

This result corresponds to those described by Rice and Wu [6], and by Waters and Jarrett [7], but for half the latter sampling rate. The above derivation shows that the previous methods for producing uniform quadrature sampling are special cases of the above method for general second-order sampling. It is only in the special case of uniform quadrature sampling that the combined effect of second-order bandpass sampling, interpolant under-sampling, and digital frequency-shifting produces two time-aligned, digital sample streams.
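The role of the alternating-sign frequency-shifting function can be illustrated with a minimal numerical sketch (all parameters below are illustrative, not taken from this correspondence): multiplying a sample stream by $(-1)^n$ shifts its spectrum by half the sampling rate, moving a tone's DFT peaks by $N/2$ bins.

```python
import numpy as np

N = 64
n = np.arange(N)
x = np.cos(2 * np.pi * 10 * n / N)   # tone centered on DFT bin 10 (and mirror bin 54)
y = (-1.0) ** n * x                  # alternating-sign frequency-shifting function

def peak_bins(v):
    # the two largest-magnitude DFT bins
    return set(np.argsort(np.abs(np.fft.fft(v)))[-2:])

print(peak_bins(x))  # bins {10, 54}
print(peak_bins(y))  # bins {22, 42}: each peak moved by N/2 = 32 (mod N)
```

The shifted peaks land at $10 + 32 = 42$ and $54 + 32 \equiv 22 \pmod{64}$, i.e., a half-sample-rate frequency shift with no filtering.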
V. CONCLUSION
It is possible to frequency-shift a bandpass signal by using bandpass sampling and the appropriate interpolants. Where the signal is second-order sampled, the frequency-shifted signal is phase-shifted relative to the original signal. This phase-shift may be varied by changing the sample streams' separation.

It has been shown that interpolants for frequency-shifting a bandpass signal using second-order bandpass sampling can be implemented digitally. It has also been shown how previous digital implementations of quadrature interpolants are a special case of the general, second-order digital interpolants.
Polar Coordinate Quantizers That
Minimize Mean-Squared Error
Stephen D. Voran and Louis L. Scharf
Abstract-A quantizer for complex data is defined by a partition of
the complex plane and a representation point associated with each cell of
the partition. A polar coordinate quantizer independently quantizes the
magnitude and phase angle of complex data. We derive design equations
for minimum mean-squared error polar coordinate quantizers and report some interesting theoretical results on their performance, including
performance limits for "phase-only" representations. The results provide
a concrete example of a biased estimator whose mean-squared error is
smaller than that of any unbiased estimator. Quantizer design examples
show the relative importance of magnitude and phase encoding.
I. INTRODUCTION
A quantizer for complex data partitions a region of interest in the complex plane into a finite number of cells and assigns a representation point to each. The most general complex quantization problem is illustrated in Fig. 1. That is, $\hat z = z_i$ whenever $z \in C_i$. The quantizer design problem is to determine the cells $C_i$ and the quantized representations $z_i$ so that $E[e]$, the expected value of the error function $e$, is minimized.

If the error function increases monotonically with $|z - \hat z|$, then it is clear that the $C_i$ should form a nearest neighbor partition of the $z_i$. If the error function is further restricted to the squared error function $e(z, \hat z) = |z - \hat z|^2$, it is easy to show that each representation point $z_i$ must be the conditional mean of $z$ when $z$ is in the corresponding
Manuscript received February 16, 1993; revised August 18, 1993. This work was supported by the Office of Naval Research under Contract N00014-89-J-1070 and by the NSF Center for Optoelectronic Computing Systems at the University of Colorado, under Contract 8622236. The associate editor coordinating the review of this paper and approving it for publication was Prof. Tamal Bose.
The authors are with the Department of Electrical and Computer Engineering, University of Colorado, Boulder, CO, USA.
IEEE Log Number 9400039.
1053-587X/94$04.00 © 1994 IEEE
Fig. 1. Quantizer. Quantization rule: $z \in C_i \Rightarrow \hat z = z_i$. Quantization problem: $\min_{\{C_i\}_1^N, \{z_i\}_1^N} E[e(z, \hat z)]$. (1.1)
quantization cell $C_i$. This pair of results defines what is commonly called the Lloyd-Max quantizer [1], [2], although Lukaszewicz and Steinhaus deserve recognition [3]. The Lloyd-Max (LM) equations are $\hat z = z_i$ when $z \in C_i$, where

$$C_i = \{z : |z - z_i|^2 \le |z - z_j|^2,\ j = 1 \text{ to } N,\ j \ne i\}; \qquad z_i = E[z \mid z \in C_i].$$

The conditional mean estimator (CME) $\hat z$ is unbiased. That is, $E[\hat z] = E[z]$. The LM equations are necessary but not sufficient conditions for the minimization of $E|z - \hat z|^2$. In the case where $z$ is a real scalar, Fleischer [4] has shown that, if the density function for $z$, $f_z(z)$, satisfies the weak convexity condition

$$\frac{d^2}{dz^2}\ln(f_z(z)) < 0 \quad \forall z \tag{1.2}$$

then the LM equations are necessary and sufficient for the minimization of $E|z - \hat z|^2$. Because the LM equations are coupled, an iterative solution is necessary in all but the most trivial cases.

By constraining the cell shapes to be some regular shape and possibly constraining the representation points, we can find quantizers which are easy to implement but which have larger average error than does the LM quantizer. In this correspondence we investigate a "sector-annulus" partition of the complex plane which allows independent quantization of magnitude and phase angle in a polar representation of complex data. This representation may be advantageous for directional data and data from quadrature demodulators. Our results generalize the results of Bucklew and Gallagher [5], [6] to include arbitrary distributions and "nonasymptotic" bit rates. Our results also provide a concrete example of a biased estimator whose mean-squared error is smaller than that of any unbiased estimator.

II. POLAR COORDINATE QUANTIZERS

By selecting an appropriate partition of the complex plane, it is possible to quantize the complex data $z = re^{j\phi}$ by independently quantizing the magnitude (or radius) of $z$, $r = |z|$, and the phase angle of $z$, $\phi = \arg(z)$. The result is that the data $re^{j\phi}$ is represented as $\hat re^{j\hat\phi}$, where $\hat r$ and $\hat\phi$ are quantized versions of $r$ and $\phi$, respectively, and $\hat re^{j\hat\phi}$ approximates the original data $re^{j\phi}$ as closely as possible. In the development that follows, we assume a stochastic model for the data and design the polar coordinate quantizer that minimizes the average quantization noise energy, or mean-squared error, of the quantizer:

$$\xi^2 = E|re^{j\phi} - \hat re^{j\hat\phi}|^2 \tag{2.1}$$

$$\hat r = r_i \quad \text{when } t_i < r \le t_{i+1}; \qquad \hat\phi = \phi_i \quad \text{when } d_i < \phi \le d_{i+1}.$$

Note that independent quantization of $r$ and $\phi$ permits us to replace the general cell $C_i$ by the intervals $(t_i, t_{i+1}]$ for $r$ and $(d_i, d_{i+1}]$ for $\phi$.

Note that in general, $\xi^2$ is parameterized by both the number of phase representation points $N_p = 2^p$ and the number of magnitude representation points $N_m = 2^m$. When $b$ bits are available, one should examine these error expressions for different $(m, p)$ pairs that satisfy $m + p = b$ to find the quantization scheme that makes the most efficient use of these $b$ bits. Section IV provides two examples that demonstrate the relative importance of magnitude and phase encoding.

III. MINIMUM MEAN-SQUARED ERROR POLAR QUANTIZER DESIGN

We assume that the data $re^{j\phi}$ comes from a stochastic source, that the probability density functions for $r$ and $\phi$ are known, and that $r$ and $\phi$ are statistically independent. We then obtain
$$\xi^2 = E|re^{j\phi} - \hat re^{j\hat\phi}|^2 = E(r^2) - 2E(r\hat r)E\cos(\phi - \hat\phi) + E(\hat r^2) = E(r^2) - 2\alpha E(r\hat r) + E(\hat r^2); \qquad \alpha = E[\cos(\phi - \hat\phi)]. \tag{3.1}$$
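The separability used in (3.1) can be spot-checked by Monte Carlo. The crude rounding "quantizers" below are placeholders chosen only to exercise the identity — they are not designs from this correspondence — and the independence of $r$ and $\phi$ is what lets the cross term factor:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 500_000
r = rng.rayleigh(1.0, N)                       # independent magnitude samples
phi = rng.uniform(0.0, 2.0 * np.pi, N)         # independent phase samples
r_hat = np.round(r * 2) / 2                    # placeholder magnitude quantizer
phi_hat = np.round(phi / (np.pi / 2)) * (np.pi / 2)  # placeholder phase quantizer

# Left side: direct complex MSE.  Right side: the separated form of (3.1).
lhs = np.mean(np.abs(r * np.exp(1j * phi) - r_hat * np.exp(1j * phi_hat)) ** 2)
alpha = np.mean(np.cos(phi - phi_hat))
rhs = np.mean(r**2) - 2 * alpha * np.mean(r * r_hat) + np.mean(r_hat**2)
print(abs(lhs - rhs))   # small; differs only by sampling error
```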
Since $\hat r$ is a function only of $r$ and since $r$ and $\hat r$ are positive quantities, the minimization of $\xi^2$ requires the maximization of $\alpha = E[\cos(\phi - \hat\phi)]$. The parameter $\alpha$ depends only on $\phi$ and $\hat\phi$ and is the natural performance measure for the quantization of $\phi$ and $e^{j\phi}$ in the context of this problem. The estimator $\hat\phi$ that maximizes $\alpha = E[\cos(\phi - \hat\phi)]$ is also the estimator that minimizes the mean-squared error (MSE) of a phasor quantizer. That is,

$$E[|e^{j\phi} - e^{j\hat\phi}|^2] = E[2(1 - \cos(\phi - \hat\phi))] = 2(1 - \alpha). \tag{3.2}$$

Therefore, when $\hat\phi$ maximizes $\alpha = E[\cos(\phi - \hat\phi)]$, the corresponding phasor estimator $e^{j\hat\phi}$ is the minimum mean-squared error (MMSE) estimator of the phasor $e^{j\phi}$. This means that $e^{j\hat\phi}$ is a conditional mean estimator (CME). As every CME is unbiased, we see that $e^{j\hat\phi}$ is an unbiased estimator of $e^{j\phi}$:

$$E[e^{j\hat\phi}] = E[e^{j\phi}]. \tag{3.3}$$
If the phase $\phi$ happens to be uniformly distributed on $(0, 2\pi]$, then $E[e^{j\phi}] = 0$. But, more generally, we can write $\hat z = \hat re^{j\hat\phi}$ and note that the mean of $\hat z$ is

$$E[\hat z] = E[\hat r]E[e^{j\hat\phi}]. \tag{3.4}$$

This will equal $E[z]$ iff $E[\hat r] = E[r]$. As we shall see, $E[\hat r] \ne E[r]$ except in the case of infinitely fine phase quantization. So, generally, the estimator $\hat r$ is a biased estimator of $r$ and $\hat z$ is a biased estimator of $z$, but these biased estimators have smaller variance and smaller MSE than unbiased estimators would have. When the phase quantizer is infinitely fine, then $\alpha = 1$ and the MSE $\xi^2$ is just the MSE of a magnitude quantizer:

$$\xi^2 = E[(r - \hat r)^2] \quad \text{when } \alpha = 1. \tag{3.5}$$
Phase Quantizer Design: We assume the quantizer design described by

$$\hat\phi = \phi_k \quad \text{when } d_k < \phi \le d_{k+1} \tag{3.6}$$

and $k = 0, 1, \ldots, N_p - 1$. The free parameters in this design are the thresholds $\{d_k\}_0^{N_p-1}$ and the representation points $\{\phi_k\}_0^{N_p-1}$. The expected cosine error becomes

$$\alpha = E[\cos(\phi - \hat\phi)] = \int_0^{2\pi} \cos(\phi - \hat\phi) f_\phi(\phi)\, d\phi = \sum_{k=0}^{N_p-1} \left[ \cos(\phi_k) \int_{d_k}^{d_{k+1}} \cos(\phi) f_\phi(\phi)\, d\phi + \sin(\phi_k) \int_{d_k}^{d_{k+1}} \sin(\phi) f_\phi(\phi)\, d\phi \right]. \tag{3.7}$$
A necessary condition for $\alpha$ to be maximized is

$$\nabla_q\alpha = 0, \qquad q^T = [d_0, d_1, \ldots, d_{N_p-1}, \phi_0, \phi_1, \ldots, \phi_{N_p-1}]. \tag{3.8}$$

This results in two sets of equations. The first is

$$0 = \frac{\partial\alpha}{\partial d_k} = f_\phi(d_k)[\cos(d_k - \phi_{k-1}) - \cos(d_k - \phi_k)].$$

If $f_\phi(d_k) = 0$, then $d_k$ can be moved without consequence. Hence, we assume $f_\phi(d_k) \ne 0$ and obtain

$$d_k = \frac{\phi_{k-1} + \phi_k}{2}, \quad k = 1 \text{ to } N_p - 1; \qquad d_0 = \frac{\phi_0 + \phi_{N_p-1} - 2\pi}{2}; \qquad d_{N_p} = d_0 + 2\pi. \tag{3.9}$$
Fig. 2. Correspondence between the optimum quantizer and the LM quantizer.
The second set of equations is

$$0 = \frac{\partial\alpha}{\partial\phi_k} \Longrightarrow 0 = \cos(\phi_k)E[\sin(\phi) \mid d_k \le \phi < d_{k+1}]P[d_k \le \phi < d_{k+1}] - \sin(\phi_k)E[\cos(\phi) \mid d_k \le \phi < d_{k+1}]P[d_k \le \phi < d_{k+1}]$$

$$\phi_k = \tan^{-1}\left\{\frac{E[\sin(\phi) \mid d_k \le \phi < d_{k+1}]}{E[\cos(\phi) \mid d_k \le \phi < d_{k+1}]}\right\}; \qquad k = 0 \text{ to } N_p - 1 \tag{3.10}$$

where $\tan^{-1}$ denotes the four-quadrant arctangent function.

The quantizer that maximizes $\alpha$ must satisfy equations (3.9) and (3.10) for all $k$. We compute the solution by an iterative technique. First, uniformly spaced $\{d_k\}$ and $\{\phi_k\}$ are assumed. We then alternately adjust $\{\phi_k\}$ to satisfy (3.10) and $\{d_k\}$ to satisfy (3.9). At each iteration, the vector $q$ moves closer to a critical point since one of the two optimality criteria is enforced. Thus the algorithm converges to a critical point and, in practice, this critical point maximizes $\alpha$.

Phase-Only Quantizer: If we quantize only phase and not magnitude, then the best prior choice for $\hat r$ is the choice that minimizes

$$\xi^2 = E(r^2) - 2\alpha E(r)\hat r + \hat r^2. \tag{3.11}$$

That is,

$$\hat r = \alpha E(r). \tag{3.12}$$

Then the MSE of the phase-only quantizer is

$$\xi^2 = E(r^2) - 2\alpha^2E^2(r) + \alpha^2E^2(r) = E(r^2) - \alpha^2E^2(r). \tag{3.13}$$

If the phase quantizer is infinitely fine, then $\alpha = 1$ and $\xi^2 = \mathrm{var}(r)$. This is as small as the MSE can be for phase-only quantization. If the phase is uniformly distributed on $(0, 2\pi]$, then the optimum phase quantizer is a uniform quantizer and $\alpha = \mathrm{sinc}(\pi/N_p)$.

Magnitude Quantizer Design: We have seen that the optimum phase quantizer design is independent of the magnitude distribution. We simply satisfy (3.9) and (3.10) and then calculate $\alpha$ using (3.7). On the other hand, the design of the optimum magnitude quantizer is dependent on the phase quantizer design. This is evident from (3.1). We begin our study of the optimum magnitude quantizer by studying equation (3.1) in the special case where phase quantization is arbitrarily fine, meaning that $\alpha = 1$ and also meaning that the MSE $\xi^2$ is just the MSE $\xi_{\tilde r}^2$ of a magnitude quantizer:

$$\xi^2 = E[(r - \tilde r)^2] = \xi_{\tilde r}^2. \tag{3.14}$$

(We are using $\tilde r$ in place of $\hat r$ to indicate that $\tilde r$ is a quantizer for $r$ when $\alpha = 1$; it is not the optimum quantizer for $r$ when $\alpha < 1$.) In this case, the Lloyd-Max quantizer $\tilde r$ is found in the usual way: define $\tilde r = r_k$ whenever $t_k < r \le t_{k+1}$ for $k = 0, 1, \ldots, N_m - 1$. Then, following the procedure outlined for the phase quantizer, the LM equations are

$$t_k = \frac{r_k + r_{k-1}}{2}, \quad k = 1 \text{ to } N_m - 1; \qquad t_0 = 0,\ t_{N_m} = \infty; \qquad r_k = E[r \mid t_k < r \le t_{k+1}]. \tag{3.15}$$

The solution for $r_k$ is, of course, the CME of $r$ given $t_k < r \le t_{k+1}$. The CME estimator $\tilde r$ is unbiased:

$$E(\tilde r) = E(r). \tag{3.16}$$

The design of an LM quantizer starts with uniformly spaced $\{t_k\}$ and $\{r_k\}$ and alternately enforces the two conditions in (3.15). The parameters $\{t_k\}$ and $\{r_k\}$ converge so that both conditions are simultaneously satisfied. Fleischer's weak convexity condition (1.2) on $f_r(r)$ guarantees that the LM conditions are also sufficient.

What relevance does the LM quantizer $\tilde r$ have to the problem at hand when $\alpha < 1$? To answer this question, we rewrite the equation for $\xi^2$ as

$$\xi^2 = E[r^2] - 2\alpha E[r\hat r] + E[\hat r^2] = E[(r - \alpha\hat r)^2] + (1 - \alpha^2)E[\hat r^2]. \tag{3.17}$$

We are denoting the quantizer by $\hat r$ to distinguish it from the LM quantizer $\tilde r$. In general, $\hat r$ will have its own set of representations $q_k$ and thresholds $s_k$. However, they may always be written in terms of the corresponding representations and thresholds for the LM quantizer. That is, as illustrated in Fig. 2, the $(q_k, s_k)$ for the optimum quantizer may be placed in 1:1 correspondence with the $(r_k, t_k)$ of the LM quantizer. This means that the quantizer $\hat r$ may actually be referenced to the LM quantizer:

$$\hat r = q_k \quad \text{whenever } s_k < r \le s_{k+1}. \tag{3.18}$$

However, rather than test $s_k < r \le s_{k+1}$, we may simply test $t_k < r \le t_{k+1}$ and transform $r_k$:

$$\hat r = q_k \quad \text{whenever } t_k < r \le t_{k+1}. \tag{3.19}$$

This makes $\hat r$ a nonlinear function of the LM quantizer $\tilde r$. Now we exploit the orthogonality of $r - \tilde r$ to any function of $\tilde r$ (such as $\hat r$), give and take $\tilde r$ in the first term of $\xi^2$, and write

$$\xi^2(r, \hat r) = E[(r - \tilde r)^2] + E[(\tilde r - \alpha\hat r)^2] + (1 - \alpha^2)E[\hat r^2] \tag{3.20}$$

or

$$\xi^2(r, \hat r) = \xi_{\tilde r}^2 + \xi^2(\tilde r, \hat r). \tag{3.21}$$

We have used the arguments $(r, \hat r)$ and $(\tilde r, \hat r)$ to denote that in one case $\xi^2$ is the MSE between $re^{j\phi}$ and $\hat re^{j\hat\phi}$ and in the other case $\xi^2$ is the MSE between $\tilde re^{j\phi}$ and $\hat re^{j\hat\phi}$. The term $\xi_{\tilde r}^2 = E[(r - \tilde r)^2]$ is just the MSE of an LM magnitude quantizer.

It is not hard to show that the solution for $\hat r$ that minimizes $\xi^2(\tilde r, \hat r)$, and hence minimizes $\xi^2(r, \hat r)$, is

$$\hat r = \alpha\tilde r = \alpha r_k \quad \text{whenever } t_k < r \le t_{k+1}. \tag{3.22}$$
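The whole design procedure of this section can be sketched empirically in a few lines: alternate (3.9) and (3.10) for the phase quantizer, run the Lloyd-Max iteration (3.15) for the magnitude, and apply the shrinkage (3.22). Sample-based conditional means stand in for the pdf integrals; the distributions and sizes are illustrative choices (uniform phase, Rayleigh magnitude, $N_p = N_m = 8$), not values from the text.

```python
import numpy as np

rng = np.random.default_rng(1)
Np, Nm = 8, 8                                    # 3 bits of phase, 3 bits of magnitude
phi = rng.uniform(-np.pi, np.pi, 200_000)        # illustrative: uniform phase
r = rng.rayleigh(1.0, 200_000)                   # illustrative: Rayleigh magnitude

# Phase quantizer: alternate the midpoint partition (3.9), implemented as
# nearest-neighbor assignment in angle, and the empirical circular CME (3.10).
phik = -np.pi + 2 * np.pi * (np.arange(Np) + 0.5) / Np
for _ in range(30):
    cell = np.argmax(np.cos(phi[:, None] - phik[None, :]), axis=1)       # (3.9)
    for k in range(Np):
        m = cell == k
        if m.any():
            phik[k] = np.arctan2(np.sin(phi[m]).mean(), np.cos(phi[m]).mean())  # (3.10)
cell = np.argmax(np.cos(phi[:, None] - phik[None, :]), axis=1)
phi_hat = phik[cell]
alpha = float(np.mean(np.cos(phi - phi_hat)))    # empirical alpha of (3.7)

# Magnitude quantizer: Lloyd-Max iteration (3.15), then the shrinkage (3.22).
rk = np.quantile(r, (np.arange(Nm) + 0.5) / Nm)
for _ in range(50):
    t = np.concatenate(([0.0], 0.5 * (rk[:-1] + rk[1:]), [np.inf]))      # thresholds
    idx = np.clip(np.searchsorted(t, r) - 1, 0, Nm - 1)
    rk = np.array([r[idx == k].mean() if (idx == k).any() else rk[k] for k in range(Nm)])
t = np.concatenate(([0.0], 0.5 * (rk[:-1] + rk[1:]), [np.inf]))
idx = np.clip(np.searchsorted(t, r) - 1, 0, Nm - 1)
r_hat = alpha * rk[idx]                          # (3.22): shrink the LM levels by alpha

mse = np.mean(np.abs(r * np.exp(1j * phi) - r_hat * np.exp(1j * phi_hat)) ** 2)
snr = 10 * np.log10(np.mean(r ** 2) / mse)       # SNR as defined in (4.1)
print(round(alpha, 4), round(snr, 1))
```

For uniform phase the iteration settles on the uniform quantizer, so the printed $\alpha$ should land near $\mathrm{sinc}(\pi/N_p) \approx 0.9745$, as derived above.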
TABLE I
QUANTIZER SNR IN dB

                              p, bits of phase encoding
                          0     1     2      3      4      5      6
m, bits of          0    0.0*  1.7*  4.4*   5.9    6.5    6.6    6.7
magnitude           1    0.0   2.0   6.0*   9.2*  10.7   11.2   11.3
encoding            2    0.0   2.2   6.6   10.8*  13.3*  14.2   14.5
                    3    0.0   2.2   6.9   11.7   15.0*  16.5*  17.0
                    4    0.0   2.2   7.0   11.9   15.6   17.5*  18.1

TABLE II
QUANTIZER SNR IN dB

                              p, bits of phase encoding
                          0     1     2      3      4      5      6
m, bits of          0    0.0*  2.9*  3.8    4.5    4.7    4.8    4.8
magnitude           1    0.0   4.5*  6.4*   8.2*   9.0    9.2    9.2
encoding            2    0.0   5.3   8.0   11.4*  13.3*  14.0   14.2
                    3    0.0   5.6   8.7   12.2   16.9*  18.8   19.4
                    4    0.0   5.6   9.0   13.9   18.9*  22.5   24.1
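The $m = 0$ row of Table I can be checked in closed form: for uniform phase the optimum phase quantizer is uniform with $\alpha = \mathrm{sinc}(\pi/N_p)$, and for Rayleigh magnitude $E^2(r)/E(r^2) = \pi/4$, so the phase-only MSE (3.13) gives $\mathrm{SNR} = -10\log_{10}(1 - \alpha^2\pi/4)$. A short check (assuming, per Section IV, Rayleigh magnitude and uniform phase in the first example):

```python
import numpy as np

# Closed-form phase-only SNR for Rayleigh magnitude and uniform phase,
# using alpha = sinc(pi/Np) and xi^2 = E(r^2) - alpha^2 E^2(r) from (3.13).
row = []
for p in range(7):
    Np = 2 ** p
    alpha = np.sinc(1.0 / Np)        # np.sinc(x) = sin(pi x)/(pi x)
    snr = -10 * np.log10(1 - alpha**2 * np.pi / 4)
    row.append(round(snr, 1))
print(row)  # [0.0, 1.7, 4.4, 5.9, 6.5, 6.6, 6.7]
```

This reproduces the $m = 0$ row of Table I to the printed precision.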
The quantizer $\hat r$ is a worse approximation to $r$ than the LM quantizer $\tilde r$. Furthermore, it is biased, and the corresponding polar coordinate quantizer is biased:

$$E[\hat r] = \alpha E[\tilde r] = \alpha E[r], \qquad E[\hat z] = E[\alpha\tilde re^{j\hat\phi}] = \alpha E[z]. \tag{3.23}$$

However, this bias in $\hat z$ is compensated by low variance, which makes bias-squared plus variance for $\hat z$ smaller than the variance of any competing unbiased estimator. The MSE of the quantizer $\hat z = \hat re^{j\hat\phi}$ is

$$\xi^2 = E[|re^{j\phi} - \hat re^{j\hat\phi}|^2] = E[r^2] - 2\alpha E[r\hat r] + E[\hat r^2] = E[r^2] - \alpha^2E[\tilde r^2] = E[r^2] - \alpha^2\sum_{k=0}^{N_m-1} P_k r_k^2 \ge \xi_{\tilde r}^2. \tag{3.24}$$

This shows that the polar coordinate quantizer cannot work better than an LM quantizer for magnitude used in conjunction with an infinitely fine phase quantizer. When phase quantization is finite, however, it outperforms the LM quantizer for magnitude.

IV. EXAMPLES

We have derived MMSE polar coordinate quantizer designs for specific complex data distributions. Two examples are given here. The case of complex data $z$ with independent, identically distributed Gaussian real and imaginary parts (alternately, $|z|$ is Rayleigh and is independent from $\arg(z)$, which is uniform) has been treated in [6], and our results concur. Table I shows MMSE quantizer signal-to-noise ratios (SNRs) for a range of $p$ (bits of phase encoding) and $m$ (bits of magnitude encoding) values. SNR is defined as

$$\mathrm{SNR} = 10\log_{10}\left\{\frac{E|re^{j\phi}|^2}{E|re^{j\phi} - \hat re^{j\hat\phi}|^2}\right\}. \tag{4.1}$$

The single asterisk on the $(b+1)$st antidiagonal of the table indicates the best SNR given the constraint $p + m = b$.

For a second example, we assume a phase distribution of complex data clustered near the real line. The phase angles are drawn from a truncated Laplacian distribution. The mean of the Laplacian distribution takes the values 0 or $\pi$ with equal probability. The resulting probability density function is

$$f_\phi(\phi) = \frac{c}{a}\,e^{-|\phi - \mu|/a}, \qquad \mu \in \{0, \pi\}. \tag{4.2}$$

We select $\sigma = 1/2$ for the phase distribution. We assume that the magnitude of the data is gamma distributed and independent of the phase angle distribution, and we select $\sigma = 1$ for the magnitude distribution.

The results of this correspondence allow us to design first a phase quantizer and then a magnitude quantizer for the complex data $z = re^{j\phi}$. The resulting quantizer SNRs for this data distribution are given in Table II. As in the previous example, the asterisks indicate the highest SNR for a fixed antidiagonal line, $m + p = b$. Because the phase distribution is tightly clustered around 0 and $\pi$, phase encoding is not as difficult as in the previous example. When only one bit is available, phase-only representation is optimum. When two bits are used, one bit of magnitude encoding along with one bit of phase encoding will save 0.7 dB of SNR compared with two bits of phase-only encoding. As the total bit constraint is increased, we trace out a somewhat irregular staircase towards the southeast corner of the table.

V. CONCLUSION

We have derived design equations for the polar coordinate quantizer that minimizes mean-squared error. This 2-D quantizer for complex data can be implemented as a pair of scalar quantizers which function independently. The phase angle quantizer is an LM, conditional mean quantizer which can be designed with no knowledge of the magnitude data. We have provided necessary conditions for the optimality of a phase quantizer. In practice, these conditions behave like necessary and sufficient conditions. The magnitude quantizer turns out to be a modified LM quantizer, and the modification is a function of the phase quantizer used. The design given is both necessary and sufficient for optimum performance. In general, the total complexity of the quantizer may be appropriately divided between the phase portion and the magnitude portion so that best performance is attained. The examples show that the appropriate division is dependent on the complex data distribution.
REFERENCES
[1] S. P. Lloyd, "Least squares quantization in PCM," unpublished memorandum, Bell Laboratories, 1957.
[2] J. Max, "Quantizing for minimum distortion," IRE Trans. Inform. Theory, vol. IT-6, pp. 7-12, 1960.
[3] A. Gersho, "Principles of quantization," IEEE Trans. Circuits Syst., vol. CAS-25, pp. 427-436, 1978.
[4] P. E. Fleischer, "Sufficient conditions for achieving minimum distortion in a quantizer," IEEE Int. Convention Rec., 1964, pt. I, pp. 104-111.
[5] J. A. Bucklew and N. C. Gallagher, Jr., "Quantization schemes for bivariate Gaussian random variables," IEEE Trans. Inform. Theory, vol. IT-25, no. 3, Sept. 1979.
[6] J. A. Bucklew and N. C. Gallagher, Jr., "Two-dimensional quantization of bivariate circularly symmetric densities," IEEE Trans. Inform. Theory, vol. IT-25, no. 6, Nov. 1979.
A Recurrence Relation for the Product of the Nonzero
Eigenvalues of Singular Symmetric Toeplitz Matrices

Jean Laroche

Abstract—This correspondence presents an extension of a well-known recurrence relation for symmetric Toeplitz matrices to the case of incomplete-rank matrices. It is shown that the product of the nonzero eigenvalues of the matrix of order p+1 can be obtained from the product of the nonzero eigenvalues of the matrix of order p and the so-called minimum-norm prediction vector introduced by Kumaresan and Tufts in the context of parameter estimation.

Manuscript received April 21, 1993; revised September 20, 1993. The associate editor coordinating the review of this paper and approving it for publication was Dr. James Zeidler.
The author is with Telecom Paris, Department Signal, Paris, France.
IEEE Log Number 9400416.

I. INTRODUCTION

This correspondence presents an extension of a classical result concerning the determinant of symmetric nonnegative Toeplitz matrices to the case when the matrices are singular. Let us define the $(p+1)$ by $(p+1)$ nonnegative Toeplitz symmetric matrix

$$M_p = \begin{bmatrix} r_0 & r_1 & \cdots & r_p \\ r_1 & r_0 & \cdots & r_{p-1} \\ \vdots & & \ddots & \vdots \\ r_p & r_{p-1} & \cdots & r_0 \end{bmatrix}.$$

It is possible [1] to associate to $M_p$ a linear prediction vector $a_p$ and a linear prediction error $\sigma_p^2$ defined by

$$\begin{bmatrix} r_0 & r_1 & \cdots & r_p \\ r_1 & r_0 & \cdots & r_{p-1} \\ \vdots & & \ddots & \vdots \\ r_p & r_{p-1} & \cdots & r_0 \end{bmatrix} \begin{bmatrix} 1 \\ a_1^p \\ \vdots \\ a_p^p \end{bmatrix} = \begin{bmatrix} \sigma_p^2 \\ 0 \\ \vdots \\ 0 \end{bmatrix} \tag{1}$$

with $a_p = (a_0^p, a_1^p, \ldots, a_p^p)^t$ and $a_0^p = 1$. A well-known result on full-rank symmetric Toeplitz matrices is [2]

$$\frac{\mathrm{Det}(M_{p+1})}{\mathrm{Det}(M_p)} = \sigma_{p+1}^2. \tag{2}$$

This correspondence presents an extension of this result to the case of symmetric Toeplitz matrices of incomplete rank $r \le p$. Symmetric Toeplitz matrices of incomplete rank arise in various situations; e.g., the autocorrelation matrix of a signal containing $r$ constant-amplitude complex sinusoids is a symmetric Toeplitz matrix of rank $r$ [3]. In such cases, $\mathrm{Det}(M_p)$ is null for $r \le p$, and (2) cannot be applied. We will show that in this case, the preceding relation becomes

$$\frac{\prod_{i=1}^{r}\lambda_i^{(p+1)}}{\prod_{i=1}^{r}\lambda_i^{(p)}} = \|a_{p+1}^{mn}\|^2 \tag{3}$$

in which $\lambda_i^{(p)}$ is the $i$th nonzero eigenvalue of matrix $M_p$, and $a_{p+1}^{mn}$ is the minimum-norm vector in the null-space of $M_{p+1}$ under the constraint $a_0^{p+1} = 1$, as introduced by Kumaresan and Tufts [4] in the context of parameter estimation.

II. DERIVATION OF THE RECURRENCE

In all that follows, we will suppose that the symmetric Toeplitz matrices $M_p$ have the same incomplete rank $r$ for every $p \ge r$, $M_{p+1}$ being obtained by adding a new coefficient $r_{p+1}$ and completing $M_p$ to a symmetric Toeplitz matrix.

Furthermore, for any scalar $\delta$, we will define $M_p(\delta) \triangleq M_p + \delta I_p$, $I_p$ being the $p+1$ by $p+1$ identity matrix. Because $M_p$ is supposed to be of rank $r \le p$, $\mathrm{Det}(M_p(\delta))$ is a polynomial in $\delta$ whose roots include the opposites of the $r$ nonzero eigenvalues of $M_p$ along with $p + 1 - r$ null roots. We have

$$\mathrm{Det}(M_p(\delta)) = \delta^{p+1-r}Q_p(\delta)$$

in which $Q_p(\delta)$ is a polynomial of degree $r$ whose roots are all nonzero. The constant term $Q_p(0)$ is therefore the product of the $r$ nonzero eigenvalues of matrix $M_p$. We now proceed to determine $Q_p(0)$.

$M_{p+1}(\delta)$ can be written as

$$M_{p+1}(\delta) = \begin{bmatrix} r_0 + \delta & \bar r_p^t \\ \bar r_p & M_p(\delta) \end{bmatrix} \qquad \text{with } \bar r_p \triangleq (r_{p+1}, r_p, r_{p-1}, \ldots, r_1)^t.$$

A well-known result on bordered matrices [5] yields

$$\mathrm{Det}(M_{p+1}(\delta)) = \mathrm{Det}(M_p(\delta))\left[(r_0 + \delta) - \bar r_p^tM_p^{-1}(\delta)\bar r_p\right]$$

or equivalently

$$\delta Q_{p+1}(\delta) = Q_p(\delta)\left[(r_0 + \delta) - \bar r_p^tM_p^{-1}(\delta)\bar r_p\right]. \tag{4}$$

Note that $M_p(\delta)$ is supposed to be full-rank (which means that $-\delta$ should not be equal to any of the eigenvalues of $M_p$).

Because $M_{p+1}$ is singular, there exists at least one linear prediction vector $a_{p+1}$ satisfying (1) with $\sigma_{p+1}^2 = 0$ [6]. We will denote by $u_p$ the vector obtained by reversing the order of the last coefficients of $a_{p+1}$:

$$u_p = (a_{p+1}^{p+1}, a_p^{p+1}, \ldots, a_1^{p+1})^t. \tag{5}$$

Because $M_p$ is centro-symmetric, we have

$$M_pu_p = -\bar r_p. \tag{6}$$

Inserting (6) into $\bar r_p^tM_p^{-1}(\delta)\bar r_p$, we obtain successively

$$\bar r_p^tM_p^{-1}(\delta)\bar r_p = -\bar r_p^tu_p + \delta\bar r_p^tM_p^{-1}(\delta)u_p = r_0 - \delta\|u_p\|^2 + \delta^2u_p^tM_p^{-1}(\delta)u_p \tag{7}$$

in which we used the fact that $\bar r_p^tu_p = -r_0$, and the symmetry of $M_p(\delta)$. $\|\cdot\|$ refers to the standard Euclidean norm. Inserting (7) into (4), we obtain

$$Q_{p+1}(\delta) = Q_p(\delta)\left[1 + \|u_p\|^2 - \delta u_p^tM_p^{-1}(\delta)u_p\right]. \tag{8}$$

Note that (6) is valid whether $M_p$ is singular or nonsingular. By contrast, the equality $\bar r_p^tu_p = -r_0$ is valid only if $M_p$ is singular. We now evaluate $\delta u_p^tM_p^{-1}(\delta)u_p$.
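The recurrence can be checked numerically at $\delta = 0$, where it reduces to the statement (3) that the ratio of nonzero-eigenvalue products equals $\|a_{p+1}^{mn}\|^2$. The two-sinusoid autocorrelation, the order $p$, and the frequencies below are illustrative choices, not values from the correspondence:

```python
import numpy as np

# Autocorrelation of two real sinusoids -> symmetric Toeplitz matrices of rank 4.
def toeplitz_M(p):
    k = np.arange(p + 1)
    rk = 2.0 * np.cos(0.9 * k) + 3.0 * np.cos(1.7 * k)
    return rk[np.abs(k[:, None] - k[None, :])]

def nonzero_eig_product(M, tol=1e-8):
    lam = np.linalg.eigvalsh(M)
    return np.prod(lam[np.abs(lam) > tol])

p = 5                                   # p >= r = 4, so M_p is singular
Mp, Mp1 = toeplitz_M(p), toeplitz_M(p + 1)

# Minimum-norm vector a in the null space of M_{p+1} with a[0] = 1.
lam, V = np.linalg.eigh(Mp1)
B = V[:, np.abs(lam) < 1e-8]            # orthonormal null-space basis
w = B[0, :]                             # first-row coefficients of the basis
a = B @ (w / np.dot(w, w))              # min-norm combination achieving a[0] = 1

ratio = nonzero_eig_product(Mp1) / nonzero_eig_product(Mp)
print(bool(np.isclose(ratio, np.dot(a, a), rtol=1e-6)))  # True
```

The minimum-norm construction works because the basis columns are orthonormal, so minimizing the coefficient norm subject to the first-entry constraint is a one-line projection.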