A New Algorithm for Solving Toeplitz Systems of Equations

Frank de Hoog
Division of Mathematics and Statistics, CSIRO,
P.O. Box 1965, Canberra City, ACT 2601, Australia

In memory of James H. Wilkinson

Submitted by Gene H. Golub
ABSTRACT
We present some recurrences that are the basis for an algorithm to invert an n × n Toeplitz system of linear equations with computational complexity O(n log² n). The recurrences used are closely related to the Zohar-Trench and Bareiss algorithms but do not have any obvious connection with other asymptotically fast algorithms for the inversion of Toeplitz systems.
1. INTRODUCTION
The problem of finding the solution of the system of linear equations

    T_n y = h,                                                     (1.1)

where y, h ∈ C^n and T_n ∈ C^{n×n} is a Toeplitz matrix, arises in many data processing applications, such as the solution of the truncated Wiener-Hopf equations [7, 11]. Since a Toeplitz matrix has the form

    T_n = (t_{ij}),   t_{ij} = t_{i-j},   i, j = 1, ..., n,        (1.2)

so that the entries along the diagonal and parallel to the diagonal are constant, it is not surprising that the structure of (1.2) can be exploited to obtain efficient algorithms for the solution of (1.1). In fact, if the principal
minors of T_n are nonzero,

    det(T_k) ≠ 0,   k = 1, ..., n,                                 (1.3)
then a number of algorithms [1, 16, 17, 18] are available that will solve (1.1) using only O(n²) operations rather than the O(n³) required for Gaussian elimination. For problems for which the condition number of T_k, k = 1, ..., n, is not large, these algorithms have been found to be an efficient way to solve (1.1). Fortunately, many problems in practice have additional structure. For example, if T_n is well conditioned and positive definite, then T_k, k = 1, ..., n, are well conditioned and positive definite. If, however, such additional structure is not present, the above algorithms can become unstable. This difficulty may have been overcome by Sweet [15], who has proposed an O(n²) algorithm based on orthogonal factorization of T_n.
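As a point of reference, Levinson-type O(n²) solvers of this kind are readily available in standard libraries; the following minimal Python sketch (the numerical values are arbitrary illustrations, not taken from this paper) solves a small system T_n y = h with scipy.linalg.solve_toeplitz and checks the residual against the explicitly formed matrix.

```python
import numpy as np
from scipy.linalg import toeplitz, solve_toeplitz

# A small, arbitrarily chosen Toeplitz system T_n y = h.
c = np.array([4.0, 1.0, 0.5, 0.25])   # first column: t_0, t_1, t_2, t_3
r = np.array([4.0, 2.0, 1.0, 0.5])    # first row:    t_0, t_-1, t_-2, t_-3
h = np.ones(4)

y = solve_toeplitz((c, r), h)         # O(n^2) Levinson-type solve
T = toeplitz(c, r)                    # dense T_n, formed only for checking
print(np.allclose(T @ y, h))          # True
```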
Recently, a number of asymptotically fast algorithms [3, 4, 13] have been proposed for the solution of (1.1) that require only O(n log² n) operations. Brent, Gustavson, and Yun [4] use the fact that any algorithm for the calculation of entries in the Padé table can be used to calculate the solutions of

    T_n a_n = α_n e_1,                                             (1.4a)
    T_n b_n = β_n e_n,                                             (1.4b)

where

    e_1 = (1, 0, ..., 0)^T,   e_n = (0, 0, ..., 1)^T,

and α_n, β_n ∈ C. As explained in Section 2, a_n and b_n completely specify T_n^{-1}. In [4], two fast algorithms are presented for the calculation of entries in the Padé table and both could be used to solve (1.4a, b). Unfortunately the algorithms are not necessarily stable even if T_n is positive definite. The reason is that the algorithms described in [4] really solve Hankel problems and will become unstable if the leading principal submatrices of the corresponding Hankel matrix become ill conditioned. For example, if we consider
    \tilde{T} = \begin{bmatrix} 1+ε & 1 & 1 \\ 1 & 1+ε & 1 \\ 1 & 1 & 1+ε \end{bmatrix},

then the corresponding Hankel matrix is

    \tilde{H} = \begin{bmatrix} 1 & 1 & 1+ε \\ 1 & 1+ε & 1 \\ 1+ε & 1 & 1 \end{bmatrix},

and clearly its principal submatrix of order two is ill conditioned if ε is small.
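A short numerical check of this remark (an illustration only; the value of ε below is arbitrary) confirms that the order-two leading block of the Hankel counterpart is nearly singular even though the Toeplitz matrix itself is positive definite:

```python
import numpy as np

eps = 1e-8                            # any small positive value will do
T = np.array([[1 + eps, 1, 1],
              [1, 1 + eps, 1],
              [1, 1, 1 + eps]])       # the Toeplitz matrix of the example
H = T[::-1, :]                        # its Hankel counterpart (rows reversed)

# The 2x2 leading principal submatrix of H has determinant eps, so its
# condition number grows like 1/eps while T stays positive definite.
print(np.linalg.det(H[:2, :2]))       # ~ 1e-8
print(np.linalg.cond(H[:2, :2]))      # ~ 4e8
```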
The algorithms given by Bitmead and Anderson [3] and Morf [13] are nearly identical and are based on the concept of the displacement rank of a matrix as developed in [6, 10]. The drawback of this approach is that the hidden constant in the O(n log² n) estimate is very large. Specifically, Sexton [14] estimates that the operation count in the Bitmead-Anderson algorithm is bounded by 6300 n log₂² n + O(n log n) when T_n is symmetric. Although this may well be an overestimate [2], it would appear that the algorithm in its present state of development is not practical.
In this paper, we present another asymptotically fast O(n log² n) algorithm for the solution of (1.1) which does not appear to make contact with the concepts underlying the algorithms given in [4] and [3, 13]. Since the hidden constant in the asymptotic estimate is moderate and the algorithm is closely related to the Zohar-Trench [16-18] and Bareiss [1] algorithms, it is hoped that the present approach (or a modification of it) will lead to a practical algorithm when n is large.

The paper is organized as follows. In Section 2 preliminary results and notation are presented. The algorithm is then presented in Section 3. Finally, some concluding remarks are made in Section 4.
2. PRELIMINARIES
In the sequel we assume that (1.3) holds. Indeed, as the recurrences described will become unstable if one or more of the T_k, k = 1, ..., n, become nearly singular (i.e. have a large condition number relative to that of T_n), we are really only concerned with the application of the recurrences to matrices for which this is not the case.
The key to many of the O(n²) algorithms is the fact that the inverse of T_n is completely specified by the solutions of (1.4a, b) (essentially the first and last columns of T_n^{-1}). Specifically,
    T_n^{-1} = \frac{1}{γ_n} ( L_n U_n − W_n G_n ),                (2.1)

where γ_n = α_n = β_n,

    L_n = \begin{bmatrix} 1 & & & \\ a_{n2} & 1 & & \\ \vdots & \ddots & \ddots & \\ a_{nn} & \cdots & a_{n2} & 1 \end{bmatrix},
    U_n = \begin{bmatrix} 1 & b_{n,n-1} & \cdots & b_{n1} \\ & 1 & \ddots & \vdots \\ & & \ddots & b_{n,n-1} \\ & & & 1 \end{bmatrix},

    W_n = \begin{bmatrix} 0 & & & \\ b_{n1} & 0 & & \\ \vdots & \ddots & \ddots & \\ b_{n,n-1} & \cdots & b_{n1} & 0 \end{bmatrix},
    G_n = \begin{bmatrix} 0 & a_{nn} & \cdots & a_{n2} \\ & 0 & \ddots & \vdots \\ & & \ddots & a_{nn} \\ & & & 0 \end{bmatrix},

is the Gohberg-Semencul [8] formulation of the Trench formula. It is easy to verify that (2.1) can also be rewritten as

    T_n^{-1} = \frac{1}{γ_n} ( U_n L_n − G_n W_n ).                (2.2)
The advantages associated with (2.1) and (2.2) are fairly clear. Firstly, they allow a representation of the inverse that requires only O(n) storage. Secondly, as pointed out by Jain [9] and others, the factors U_n, L_n, W_n, and G_n are Toeplitz, and hence the solution T_n^{-1} h can be calculated in only O(n log n) operations using the fast Fourier transform.
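To make the remark concrete, the following Python sketch applies the representation (2.1) to a given vector using FFT-based products with the four triangular Toeplitz factors. The function names and array layout are mine; a and b stand for a_n and b_n normalised as in (1.4a, b) with a_{n1} = b_{nn} = 1, and gamma is γ_n = α_n = β_n.

```python
import numpy as np

def _conv(u, v):
    """Full linear convolution via the FFT; output is complex."""
    m = len(u) + len(v) - 1
    nfft = 1 << max((m - 1).bit_length(), 1)
    return np.fft.ifft(np.fft.fft(u, nfft) * np.fft.fft(v, nfft))[:m]

def lower_toeplitz_matvec(col, v):
    """y = L v, with L lower triangular Toeplitz, first column `col`."""
    return _conv(col, v)[:len(v)]

def upper_toeplitz_matvec(row, v):
    """y = U v, with U upper triangular Toeplitz, first row `row`."""
    n = len(v)
    return _conv(row, v[::-1])[:n][::-1]

def apply_toeplitz_inverse(a, b, gamma, h):
    """Evaluate T_n^{-1} h from the factors of (2.1):
    gamma * T_n^{-1} = L_n U_n - W_n G_n."""
    Uh = upper_toeplitz_matvec(b[::-1], h)               # U_n: first row (b_nn, ..., b_n1)
    LUh = lower_toeplitz_matvec(a, Uh)                   # L_n: first column (a_n1, ..., a_nn)
    Gh = upper_toeplitz_matvec(np.r_[0.0, a[:0:-1]], h)  # G_n: first row (0, a_nn, ..., a_n2)
    WGh = lower_toeplitz_matvec(np.r_[0.0, b[:-1]], Gh)  # W_n: first column (0, b_n1, ..., b_n,n-1)
    return (LUh - WGh) / gamma
```

Each factor is applied as part of a convolution of length at most 2n − 1, so a single application of T_n^{-1} costs O(n log n), which is the point made above.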
In what follows, we shall find it convenient to associate a polynomial with a vector and vice versa. Thus, with a vector

    p = (p_1, ..., p_n)^T

we associate the polynomial

    P(x) = \sum_{j=1}^{n} p_j x^{j-1},

and conversely. By \bar{p} we mean the vector whose components are the complex conjugates of those of p, and we denote by \bar{P}(x) the polynomial associated with \bar{p}. Finally, we define the "truncated" vector

    p^{(k)} = (p_1, ..., p_k)^T

and associate with p^{(k)} the polynomial P^{(k)}(x).
Now consider the polynomials A_k(x) and B_k(x), k = 1, ..., n, associated with the solutions of

    T_k a_k = α_k e_1,   k = 1, ..., n,                            (2.3a)
    T_k b_k = β_k e_k,   k = 1, ..., n,                            (2.3b)

where α_k and β_k are such that the first and last components of a_k and b_k respectively are one (i.e. a_{k1} = 1 = b_{kk}). It is well known (see for example [12]) and easy to verify that for k = 0, ..., n − 1,

    \begin{bmatrix} A_{k+1}(x) \\ B_{k+1}(x) \end{bmatrix}
      = \begin{bmatrix} 1 & -γ_{k+1} x \\ -ω_{k+1} & x \end{bmatrix}
        \begin{bmatrix} A_k(x) \\ B_k(x) \end{bmatrix},            (2.4)

where

    A_0(x) = 1,   B_0(x) = 1/x,

    γ_1 = 0,   γ_{k+1} = η_k / β_k,   k > 0,                       (2.5a)
    ω_1 = 0,   ω_{k+1} = ξ_k / α_k,   k > 0,                       (2.5b)
    α_k = \sum_{j=0}^{k-1} t_{-j} a_{k,j+1},   k > 0,              (2.5c)
    β_k = \sum_{j=1}^{k} t_{k-j} b_{kj},   k > 0,                  (2.5d)
    η_k = \sum_{j=0}^{k-1} t_{k-j} a_{k,j+1},   k > 0,             (2.5e)
    ξ_k = \sum_{j=1}^{k} t_{-j} b_{kj},   k > 0.                   (2.5f)
The advantage of the scaling of a_k and b_k becomes apparent when it is noted that

    α_{k+1} = β_{k+1} = α_k − ξ_k η_k / α_k.                       (2.6)

Since the condition det(T_k) ≠ 0, k = 1, ..., n, ensures that α_k ≠ 0 and β_k ≠ 0, k = 1, ..., n, the recurrences are well defined.
Equations (2.4), (2.5), and (2.6) are essentially the basis of the Zohar-Trench algorithm [16-18] and provide a decomposition of the form (2.1) in 2n² + O(n) multiplications and a similar number of additions. In addition, if T_n is Hermitian, then η_k = \bar{ξ}_k, B_k(x) = x^{k-1} \bar{A}_k(1/x), k = 1, ..., n, and the computational complexity is reduced by a factor of two.
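For concreteness, the recurrences (2.4)-(2.6) translate into the short O(n²) routine sketched below. The names and array conventions are mine; col = (t_0, t_1, ..., t_{n-1}) and row = (t_0, t_{-1}, ..., t_{1-n}) are the first column and first row of T_n, and (1.3) is assumed so that all divisions are well defined.

```python
import numpy as np

def levinson_type_recurrence(col, row):
    """Run (2.4)-(2.6).  Returns a_n, b_n (normalised so a_n1 = b_nn = 1),
    alpha_n (= gamma_n of (2.1)) and the coefficient lists gamma, omega."""
    n = len(col)
    a = np.array([1.0 + 0j])               # a_1
    b = np.array([1.0 + 0j])               # b_1
    alpha = complex(col[0])                # alpha_1 = beta_1 = t_0
    gamma, omega = [0.0], [0.0]            # gamma_1 = omega_1 = 0
    for k in range(1, n):
        eta = np.dot(col[k:0:-1], a)       # (2.5e): sum_j t_{k+1-j} a_{kj}
        xi = np.dot(row[1:k + 1], b)       # (2.5f): sum_j t_{-j} b_{kj}
        g, w = eta / alpha, xi / alpha     # (2.5a, b), using alpha_k = beta_k
        a, b = (np.r_[a, 0] - g * np.r_[0, b],   # A_{k+1} = A_k - g x B_k
                np.r_[0, b] - w * np.r_[a, 0])   # B_{k+1} = x B_k - w A_k
        alpha = alpha - xi * eta / alpha         # (2.6)
        gamma.append(g)
        omega.append(w)
    return a, b, alpha, gamma, omega
```

The returned a, b and α_n are exactly the quantities needed in (2.1), so this routine together with the previous sketch gives a complete O(n²) solver.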
We shall also require some knowledge of an algorithm due to Bareiss [1] for the solution of (1.1). For a description of this algorithm it is convenient to use the "shift" matrix Z_k = (z^{(k)}_{ij}) defined by

    z^{(k)}_{ij} = \begin{cases} 1 & \text{if } j − i = k, \\ 0 & \text{otherwise.} \end{cases}

Thus, premultiplication of a matrix by Z_k shifts the rows of the matrix up k places with zero fill.
The Bareiss algorithm is as follows. Let

    T^{(1)} := T =: T^{(-1)},   h^{(1)} := h =: h^{(-1)},          (2.7)

and for k = 2, ..., n define

    θ_k = t^{(k-1)}_{1k} / t^{(1-k)}_{nn},                         (2.8a)
    δ_k = t^{(1-k)}_{n,n+1-k} / t^{(k-1)}_{11},                    (2.8b)
    T^{(k)} = T^{(k-1)} − θ_k Z_{k-1} T^{(1-k)},                   (2.8c)
    T^{(-k)} = T^{(1-k)} − δ_k Z_{1-k} T^{(k-1)},                  (2.8d)
    h^{(k)} = h^{(k-1)} − θ_k Z_{k-1} h^{(1-k)},                   (2.8e)
    h^{(-k)} = h^{(1-k)} − δ_k Z_{1-k} h^{(k-1)}.                  (2.8f)
The application of (2.8) to (1.1) yields two systems of linear equations:

    T^{(n)} y = h^{(n)},                                           (2.9a)
    T^{(-n)} y = h^{(-n)}.                                         (2.9b)
Since the structure of (2.8) is such that

    t^{(k)}_{ij} = 0,   0 < j − i < k,                             (2.10a)
    t^{(-k)}_{ij} = 0,   0 < i − j < k,                            (2.10b)

it follows that T^{(n)} is lower triangular while T^{(-n)} is upper triangular. Thus, Equations (2.9a, b) can be solved by forward and backward substitution respectively. Furthermore, as the first n + 1 − k rows of T^{(k)} and the last n + 1 − k rows of T^{(-k)} retain their Toeplitz structure, the steps (2.8c, d) can be performed quite efficiently. Specifically, the calculation of T^{(n)} and T^{(-n)} requires 2n² + O(n) multiplications and a similar number of additions.
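A direct, unoptimised transcription of (2.7)-(2.8) into Python is given below (the names are mine, and the multipliers are read off from the entries used in (2.8a, b) above). Because it updates full n × n matrices it costs O(n³) as written; the 2n² + O(n) count quoted in the text comes from updating only the Toeplitz rows of each iterate.

```python
import numpy as np

def shift_up(A, k):
    """Premultiplication by Z_k: shift rows up k places (down for k < 0),
    filling with zeros.  Works for matrices and for vectors."""
    B = np.zeros_like(A)
    if k >= 0:
        B[:A.shape[0] - k] = A[k:]
    else:
        B[-k:] = A[:A.shape[0] + k]
    return B

def bareiss(T, h):
    """Steps (2.7)-(2.8); returns T^(n), T^(-n), h^(n), h^(-n).
    Condition (1.3) is assumed, so the pivots below are nonzero."""
    n = len(h)
    Tp, Tm = T.astype(complex), T.astype(complex)       # T^(1), T^(-1)
    hp, hm = h.astype(complex), h.astype(complex)
    for k in range(2, n + 1):
        theta = Tp[0, k - 1] / Tm[-1, -1]               # (2.8a)
        delta = Tm[-1, n - k] / Tp[0, 0]                # (2.8b)
        Tp, Tm = (Tp - theta * shift_up(Tm, k - 1),     # (2.8c)
                  Tm - delta * shift_up(Tp, 1 - k))     # (2.8d)
        hp, hm = (hp - theta * shift_up(hm, k - 1),     # (2.8e)
                  hm - delta * shift_up(hp, 1 - k))     # (2.8f)
    return Tp, Tm, hp, hm
```

Forward substitution with T^(n) (or backward substitution with T^(-n)) then yields y, and, as the lemma below makes precise, the multipliers theta, delta reproduce the coefficients of the Levinson recurrence.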
Our interest in the Bareiss algorithm stems from the fact that the multipliers θ_k, δ_k, k = 2, ..., n, are equal to the coefficients γ_k, ω_k, k = 2, ..., n, in the Levinson recurrence (2.4).
LEMMA 2.1.  For k = 2, ..., n, let γ_k, ω_k be defined by (2.4)-(2.6) and θ_k, δ_k be defined by (2.8). Then

    θ_k = ω_k,   k = 2, ..., n,

and

    δ_k = γ_k,   k = 2, ..., n.
Proof.  Consider the solution of (1.4a, b) where α_n and β_n have been chosen such that a_{n1} = b_{nn} = 1. From (2.4),

    a_{nn} = −γ_n

and

    b_{n1} = −ω_n.

Now apply the Bareiss algorithm to (1.4a). Clearly,

    h^{(n)} = α_n e_1,   h^{(-n)} = α_n (1, −δ_2, ..., −δ_n)^T,

and hence from (2.9a, b)

    a_{n1} = α_n / t^{(n)}_{11} = 1,   a_{nn} = −α_n δ_n / t^{(-n)}_{nn} = −γ_n.

Similarly, on applying the Bareiss algorithm to (1.4b) we obtain

    b_{n1} = −β_n θ_n / t^{(n)}_{11} = −ω_n,   b_{nn} = β_n / t^{(-n)}_{nn} = 1.

From (2.6), α_n = β_n, and hence

    a_{nn} = −δ_n = −γ_n

and

    b_{n1} = −θ_n = −ω_n.

We have therefore established the result when k = n. However, θ_k, δ_k depend only on t_j, −k < j < k, and are unchanged if the Bareiss algorithm is applied to a problem of dimension n + 1. As n is arbitrary, the result follows for k = 2, ..., n.  ∎
We conclude this section by deriving a recurrence relationship for some polynomials associated with the Bareiss algorithm. Let

    M_k(x) = \sum_j m_{kj} x^j,   k = 1, ..., n,

and

    V_k(x) = \sum_j v_{kj} x^j,   k = 1, ..., n,

be rational functions defined by

    M_1(x) := \sum_{|j| < n} t_{-j} x^j =: V_1(x)

and

    M_k(x) = M_{k-1}(x) − ω_k x^{k-1} V_{k-1}(x),                  (2.11a)
    V_k(x) = V_{k-1}(x) − γ_k x^{1-k} M_{k-1}(x).                  (2.11b)

From Lemma 2.1 and Equations (2.8), (2.10), and (2.11) we find that

    t^{(k)}_{ij} = m_{k, j-i},   i ≤ n − k + 1,
    t^{(-k)}_{ij} = v_{k, j-i},   i ≥ k,
    m_{kj} = 0,   0 < j < k,
    v_{kj} = 0,   −k < j < 0,

and

    γ_k = v_{k-1, 1-k} / m_{k-1, 0},   ω_k = m_{k-1, k-1} / v_{k-1, 0},   k = 2, ..., n.
For k = 1, ..., n, we can now write

    M_k(x) = x^k Q_k(x) + S_k(1/x),
    V_k(x) = P_k(x) + x^{-k} R_k(1/x),

where P_k(x), Q_k(x), R_k(x), S_k(x) are polynomials defined by

    P_0(x) := \sum_{j=0}^{n-1} t_{-j} x^j,                         (2.12a)
    Q_0(x) := \sum_{j=1}^{n-1} t_{-j} x^j,                         (2.12b)
    R_0(x) := \sum_{j=1}^{n-1} t_j x^j,                            (2.12c)
    S_0(x) := \sum_{j=0}^{n-1} t_j x^j,                            (2.12d)

    \begin{bmatrix} P_k(x) \\ Q_k(x) \end{bmatrix}
      := \begin{bmatrix} 1 & -γ_k \\ -ω_k/x & 1/x \end{bmatrix}
         \begin{bmatrix} P_{k-1}(x) \\ Q_{k-1}(x) \end{bmatrix},   (2.13a)

    \begin{bmatrix} R_k(x) \\ S_k(x) \end{bmatrix}
      := \begin{bmatrix} 1/x & -γ_k/x \\ -ω_k & 1 \end{bmatrix}
         \begin{bmatrix} R_{k-1}(x) \\ S_{k-1}(x) \end{bmatrix},   (2.13b)

and

    γ_k = R_{k-1}(0) / S_{k-1}(0) = r_{k-1,0} / s_{k-1,0},         (2.14a)
    ω_k = Q_{k-1}(0) / P_{k-1}(0) = q_{k-1,0} / p_{k-1,0}.         (2.14b)
Equations (2.12)-(2.14) can be used as a basis for calculating the γ_k, ω_k, k = 2, ..., n, required in the Levinson recurrence (2.4). If we take into account that only n − k coefficients need to be retained in P_k(x), Q_k(x), R_k(x), S_k(x) during the computation, then the calculation is equivalent to the Bareiss algorithm. Thus 2n² + O(n) multiplications are required to calculate γ_k, ω_k, k = 2, ..., n, and a further n² + O(n) multiplications are required to calculate A_n(x), B_n(x) using (2.4). This makes a total of 3n² + O(n) multiplications, which is to be compared with the 2n² + O(n) multiplications required to implement (2.4)-(2.6). However, Equations (2.12)-(2.14) are completely uncoupled from (2.4), and this fact will be exploited.
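The uncoupled recursion is easy to transcribe; in the Python sketch below (my naming; coefficients are stored lowest power first, and (1.3) is assumed) the division by x in (2.13) appears as dropping a constant term that vanishes by the choice of γ_k and ω_k.

```python
import numpy as np

def gamma_omega_from_pqrs(col, row):
    """Compute gamma_k, omega_k, k = 1, ..., n, from (2.12)-(2.14).
    col = (t_0, ..., t_{n-1}) and row = (t_0, t_{-1}, ..., t_{1-n}) are the
    first column and first row of T_n."""
    n = len(col)
    P = np.asarray(row, dtype=complex)           # (2.12a): sum_{j>=0} t_{-j} x^j
    Q = np.r_[0.0, row[1:]].astype(complex)      # (2.12b): sum_{j>0}  t_{-j} x^j
    R = np.r_[0.0, col[1:]].astype(complex)      # (2.12c): sum_{j>0}  t_j   x^j
    S = np.asarray(col, dtype=complex)           # (2.12d): sum_{j>=0} t_j   x^j
    gamma, omega = [], []
    for k in range(1, n + 1):
        g = R[0] / S[0]                          # (2.14a)
        w = Q[0] / P[0]                          # (2.14b)
        gamma.append(g)
        omega.append(w)
        # (2.13a, b); the division by x is a shift, since the constant
        # terms of Q - w P and R - g S vanish by the choice of g and w.
        P, Q, R, S = (P - g * Q,
                      np.r_[(Q - w * P)[1:], 0.0],
                      np.r_[(R - g * S)[1:], 0.0],
                      S - w * R)
    return gamma, omega
```

The lists gamma and omega can then be substituted into (2.4), or into the product (3.1) of the next section, to recover A_n(x) and B_n(x).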
3. A DOUBLING STRATEGY FOR TOEPLITZ SYSTEMS
In this section we describe a fast algorithm for the calculation of a_n, b_n, the solutions of (1.4a, b). This then provides the decomposition (2.1) of the inverse T_n^{-1}, which can then be used to calculate the solution of (1.1).
From (2.4),

    \begin{bmatrix} A_k(x) \\ B_k(x) \end{bmatrix}
      = \begin{bmatrix} 1 & -γ_k x \\ -ω_k & x \end{bmatrix}
        \begin{bmatrix} 1 & -γ_{k-1} x \\ -ω_{k-1} & x \end{bmatrix}
        \cdots
        \begin{bmatrix} 1 & -γ_1 x \\ -ω_1 & x \end{bmatrix}
        \begin{bmatrix} 1 \\ 1/x \end{bmatrix},                    (3.1)

where γ_k, ω_k, k = 1, ..., n, can be calculated using (2.12)-(2.14) (note that γ_1 = ω_1 = 0). Let

    \begin{bmatrix} C_k(x) & x D_k(x) \\ E_k(x) & x F_k(x) \end{bmatrix}
      := \begin{bmatrix} 1 & -γ_k x \\ -ω_k & x \end{bmatrix}
         \cdots
         \begin{bmatrix} 1 & -γ_1 x \\ -ω_1 & x \end{bmatrix}.

Clearly, C_k(x), D_k(x), E_k(x), F_k(x) are polynomials of degree k − 1, and

    A_k(x) = C_k(x) + D_k(x),                                      (3.2a)
    B_k(x) = E_k(x) + F_k(x).                                      (3.2b)

Thus, a knowledge of C_n(x), D_n(x), E_n(x), F_n(x) immediately yields the required A_n(x), B_n(x).
Our aim will be to apply a doubling (or divide and conquer) technique to the product (3.1). On examining (2.12)-(2.14) it is clear that γ_j, ω_j, j = 1, ..., k, depend only on the first k coefficients of P_0(x), Q_0(x), R_0(x), and S_0(x) (i.e. depend only on p_0^{(k)}, q_0^{(k)}, r_0^{(k)}, and s_0^{(k)}). When we wish to emphasize this dependence we shall write it explicitly. Furthermore, since γ_{l+j}, ω_{l+j}, j = 1, ..., k, depend on p_l^{(k)}, q_l^{(k)}, r_l^{(k)}, s_l^{(k)} in exactly the same way as γ_j, ω_j, j = 1, ..., k, depend on p_0^{(k)}, q_0^{(k)}, r_0^{(k)}, s_0^{(k)}, we may write

    \begin{bmatrix} C_k(x, p_l^{(k)}, q_l^{(k)}, r_l^{(k)}, s_l^{(k)}) & x D_k(x, p_l^{(k)}, q_l^{(k)}, r_l^{(k)}, s_l^{(k)}) \\
                    E_k(x, p_l^{(k)}, q_l^{(k)}, r_l^{(k)}, s_l^{(k)}) & x F_k(x, p_l^{(k)}, q_l^{(k)}, r_l^{(k)}, s_l^{(k)}) \end{bmatrix}
      = \begin{bmatrix} 1 & -γ_{l+k} x \\ -ω_{l+k} & x \end{bmatrix}
        \cdots
        \begin{bmatrix} 1 & -γ_{l+1} x \\ -ω_{l+1} & x \end{bmatrix}.
Consequently,

    \begin{bmatrix} C_n(x, p_0^{(n)}, q_0^{(n)}, r_0^{(n)}, s_0^{(n)}) & x D_n(x, p_0^{(n)}, q_0^{(n)}, r_0^{(n)}, s_0^{(n)}) \\
                    E_n(x, p_0^{(n)}, q_0^{(n)}, r_0^{(n)}, s_0^{(n)}) & x F_n(x, p_0^{(n)}, q_0^{(n)}, r_0^{(n)}, s_0^{(n)}) \end{bmatrix}
      = \begin{bmatrix} C_{n-k}(x, p_k^{(n-k)}, q_k^{(n-k)}, r_k^{(n-k)}, s_k^{(n-k)}) & x D_{n-k}(x, p_k^{(n-k)}, q_k^{(n-k)}, r_k^{(n-k)}, s_k^{(n-k)}) \\
                        E_{n-k}(x, p_k^{(n-k)}, q_k^{(n-k)}, r_k^{(n-k)}, s_k^{(n-k)}) & x F_{n-k}(x, p_k^{(n-k)}, q_k^{(n-k)}, r_k^{(n-k)}, s_k^{(n-k)}) \end{bmatrix}
        \begin{bmatrix} C_k(x, p_0^{(k)}, q_0^{(k)}, r_0^{(k)}, s_0^{(k)}) & x D_k(x, p_0^{(k)}, q_0^{(k)}, r_0^{(k)}, s_0^{(k)}) \\
                        E_k(x, p_0^{(k)}, q_0^{(k)}, r_0^{(k)}, s_0^{(k)}) & x F_k(x, p_0^{(k)}, q_0^{(k)}, r_0^{(k)}, s_0^{(k)}) \end{bmatrix}.      (3.3)
Clearly, (3.3) can be used as the basis of a doubling strategy provided that the vectors p_k^{(n-k)}, q_k^{(n-k)}, r_k^{(n-k)}, s_k^{(n-k)} can be calculated efficiently from p_0, q_0, r_0, s_0. It is easy to verify, from (2.13a, b) and the definition of C_k(x), D_k(x), E_k(x), F_k(x), that

    \begin{bmatrix} P_k(x) \\ Q_k(x) \end{bmatrix}
      = \begin{bmatrix} C_k(1/x) & D_k(1/x) \\ x^{-1} E_k(1/x) & x^{-1} F_k(1/x) \end{bmatrix}
        \begin{bmatrix} P_0(x) \\ Q_0(x) \end{bmatrix}             (3.4a)

and

    \begin{bmatrix} R_k(x) \\ S_k(x) \end{bmatrix}
      = \begin{bmatrix} x^{-k} C_k(x) & x^{-k} D_k(x) \\ x^{1-k} E_k(x) & x^{1-k} F_k(x) \end{bmatrix}
        \begin{bmatrix} R_0(x) \\ S_0(x) \end{bmatrix},            (3.4b)

from which P_k^{(n-k)}(x), Q_k^{(n-k)}(x), R_k^{(n-k)}(x), S_k^{(n-k)}(x) can be calculated by retaining only the first n − k coefficients.
We are now in a position to present an algorithm for the calculation of the product (3.1) when the vectors p_0, q_0, r_0, s_0 are defined by (2.12). For clarity it is presented in a stepwise fashion.
ALGORITHM.

1. If n = 1, then

    C_1(x) = C_1(x, p_0, q_0, r_0, s_0) = 1,
    D_1(x) = D_1(x, p_0, q_0, r_0, s_0) = −r_{00}/s_{00},
    E_1(x) = E_1(x, p_0, q_0, r_0, s_0) = −q_{00}/p_{00},
    F_1(x) = F_1(x, p_0, q_0, r_0, s_0) = 1.

2. If n > 1, let l = ⌈log₂ n⌉ − 1, k = 2^l. Call the algorithm to provide

    C_k(x) = C_k(x, p_0^{(k)}, q_0^{(k)}, r_0^{(k)}, s_0^{(k)}),
    D_k(x) = D_k(x, p_0^{(k)}, q_0^{(k)}, r_0^{(k)}, s_0^{(k)}),
    E_k(x) = E_k(x, p_0^{(k)}, q_0^{(k)}, r_0^{(k)}, s_0^{(k)}),
    F_k(x) = F_k(x, p_0^{(k)}, q_0^{(k)}, r_0^{(k)}, s_0^{(k)}).

3. Calculate p_k^{(n-k)}, q_k^{(n-k)}, r_k^{(n-k)}, s_k^{(n-k)} using (3.4a, b).

4. Call the algorithm to provide

    C_{n-k}(x) = C_{n-k}(x, p_k^{(n-k)}, q_k^{(n-k)}, r_k^{(n-k)}, s_k^{(n-k)}),
    D_{n-k}(x) = D_{n-k}(x, p_k^{(n-k)}, q_k^{(n-k)}, r_k^{(n-k)}, s_k^{(n-k)}),
    E_{n-k}(x) = E_{n-k}(x, p_k^{(n-k)}, q_k^{(n-k)}, r_k^{(n-k)}, s_k^{(n-k)}),
    F_{n-k}(x) = F_{n-k}(x, p_k^{(n-k)}, q_k^{(n-k)}, r_k^{(n-k)}, s_k^{(n-k)}).

5. Calculate C_n(x), D_n(x), E_n(x), F_n(x) using Equation (3.3).
Of course, once C_n(x), D_n(x), E_n(x), and F_n(x) have been calculated, A_n(x) and B_n(x) can be obtained from (3.2a, b).
The potential advantage of the algorithm is that steps 2 and 4, which essentially consist of polynomial multiplication, can be implemented using the fast Fourier transform. There are of course many ways to implement the fast Fourier transform. However, if we use the fast Fourier transform based on powers of 2, then it is not difficult to verify that the above algorithm can be implemented using 5n log₂² n + O(n log n) multiplications and 10n log₂² n + O(n log n) additions.
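A compact, unoptimised Python transcription of the doubling algorithm is sketched below. All names and array conventions are mine; coefficients are stored lowest power first, polynomial products use a radix-2 FFT as suggested above, the correlations implement the coefficient extraction behind (3.4a), and the splitting k = 2^{⌈log₂ m⌉−1} mirrors step 2. It is meant as a working illustration of the recursion rather than a tuned implementation.

```python
import numpy as np

def _pmul(u, v):
    """Polynomial product via a radix-2 FFT; coefficients lowest power first."""
    m = len(u) + len(v) - 1
    nfft = 1 << max((m - 1).bit_length(), 1)
    return np.fft.ifft(np.fft.fft(u, nfft) * np.fft.fft(v, nfft))[:m]

def _padd(u, v):
    """Add coefficient arrays of (possibly) different lengths."""
    if len(u) < len(v):
        u, v = v, u
    w = u.copy()
    w[:len(v)] += v
    return w

def _corr(c, v, off, m):
    """First m coefficients of the non-negative-power part of C(1/x) x^{-off} V(x),
    i.e. result[i] = sum_l c[l] v[i + off + l]; this is the bookkeeping for (3.4a)."""
    w = _pmul(c[::-1], v)
    return w[len(c) - 1 + off: len(c) - 1 + off + m]

def cdef_product(p, q, r, s):
    """Steps 1-5 of the algorithm: given the first m coefficients of
    P_0, Q_0, R_0, S_0, return the coefficients of C_m, D_m, E_m, F_m."""
    m = len(p)
    if m == 1:                                          # step 1
        one = np.ones(1, complex)
        return one, np.array([-r[0] / s[0]]), np.array([-q[0] / p[0]]), one
    k = 1 << ((m - 1).bit_length() - 1)                 # step 2: k = 2^(ceil(log2 m) - 1)
    C1, D1, E1, F1 = cdef_product(p[:k], q[:k], r[:k], s[:k])
    # step 3: first m - k coefficients of P_k, Q_k, R_k, S_k via (3.4a, b)
    pk = _corr(C1, p, 0, m - k) + _corr(D1, q, 0, m - k)
    qk = _corr(E1, p, 1, m - k) + _corr(F1, q, 1, m - k)
    rk = _padd(_pmul(C1, r), _pmul(D1, s))[k:m]
    sk = _padd(_pmul(E1, r), _pmul(F1, s))[k - 1:m - 1]
    # step 4
    C2, D2, E2, F2 = cdef_product(pk, qk, rk, sk)
    # step 5: entrywise form of the 2x2 product (3.3)
    xD2, xF2 = np.r_[0.0, D2], np.r_[0.0, F2]           # multiplication by x
    C = _padd(_pmul(C2, C1), _pmul(xD2, E1))
    D = _padd(_pmul(C2, D1), _pmul(xD2, F1))
    E = _padd(_pmul(E2, C1), _pmul(xF2, E1))
    F = _padd(_pmul(E2, D1), _pmul(xF2, F1))
    return C, D, E, F

def solve_for_a_b(col, row):
    """Assemble a_n, b_n from (2.12), the doubling algorithm and (3.2a, b).
    col = (t_0, ..., t_{n-1}) is the first column of T_n and
    row = (t_0, t_{-1}, ..., t_{1-n}) its first row."""
    n = len(col)
    p = np.asarray(row, complex)
    q = np.r_[0.0, row[1:]].astype(complex)
    r = np.r_[0.0, col[1:]].astype(complex)
    s = np.asarray(col, complex)
    C, D, E, F = cdef_product(p, q, r, s)
    return _padd(C, D)[:n], _padd(E, F)[:n]             # coefficients of A_n, B_n
```

For example, for col = row = (2, 1) the returned coefficient vectors are a = (1, −1/2) and b = (−1/2, 1), in agreement with the direct recurrences of Section 2.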
If further structure is available, the above algorithm can be simplified and the operation count reduced. If T_n is Hermitian, then P_0(x) = \bar{S}_0(x), Q_0(x) = \bar{R}_0(x), and γ_k = \bar{ω}_k, k = 1, ..., n. From (2.12), (2.14), (3.1),

    P_k(x) = \bar{S}_k(x),   k = 1, ..., n,

    Q_k(x) = \bar{R}_k(x),   k = 1, ..., n,

and

    E_k(x) = x^{k-1} \bar{D}_k(1/x),   F_k(x) = x^{k-1} \bar{C}_k(1/x),   k = 1, ..., n.

It is now easy to devise a modification to the algorithm which reduces the complexity to (5/2)n log₂² n + O(n log n) multiplications and 5n log₂² n + O(n log n) additions.
4. CONCLUDING REMARKS
We have presented an algorithm for the solution of Toeplitz systems of equations that has complexity O(n log² n) and for which the hidden constant in the complexity estimate is moderate. Specifically, the algorithm requires 5n log₂² n + O(n log n) and (5/2)n log₂² n + O(n log n) multiplications for the general and Hermitian cases respectively.

If in addition the equations are real, these complexity estimates can be approximately halved, although of course the multiplication count is still in terms of complex multiplications. However, it is clear that in practice n needs to be quite large in order that algorithms such as the present one become competitive from a complexity point of view.
It has been shown by Sweet [15] that the Bareiss algorithm is as stable as
Gaussian elimination for positive definite matrices. Cybenko [5] has established the same result for the Zohar-Trench algorithm. The present algorithm
uses the Bareiss algorithm to calculate the coefficients in the Levinson
recurrence (2.4) and then calculates the solution of (1.4) using the Levinson
recurrence. We therefore expect that our algorithm is also as stable as
Gaussian elimination without pivoting for positive definite matrices.
The author wishes to thank Adam Bojanczyk and Richard Brent for a number of helpful discussions during the course of this work.
REFERENCES

1  E. H. Bareiss, Numerical solution of linear equations with Toeplitz and vector Toeplitz matrices, Numer. Math. 13:404-424 (1969).
2  R. R. Bitmead, private communication.
3  R. R. Bitmead and B. D. O. Anderson, Asymptotically fast solution of Toeplitz and related systems of linear equations, Linear Algebra Appl. 34:103-116 (1980).
4  R. P. Brent, F. G. Gustavson, and D. Y. Y. Yun, Fast solution of Toeplitz systems of equations and computation of Padé approximants, J. Algorithms 1:259-295 (1980).
5  G. Cybenko, The numerical stability of the Levinson-Durbin algorithm for Toeplitz systems of equations, SIAM J. Sci. Statist. Comput. 1:303-319 (1980).
6  B. Friedlander, M. Morf, T. Kailath, and L. Ljung, New inversion formulas for matrices classified in terms of their distance from Toeplitz matrices, Linear Algebra Appl. 27:31-60 (1979).
7  I. C. Gohberg and I. A. Feldman, Convolution Equations and Projection Methods for their Solution, Translations of Mathematical Monographs, Vol. 41, Amer. Math. Soc., Providence, R.I., 1974.
8  I. C. Gohberg and A. A. Semencul, On the inversion of finite Toeplitz matrices and their continuous analogs, Mat. Issled. 2:201-233 (1972).
9  A. K. Jain, An efficient algorithm for a large Toeplitz set of linear equations, IEEE Trans. Acoust. Speech Signal Process. 27:612-615 (1979).
10 T. Kailath, S. Y. Kung, and M. Morf, Displacement ranks of matrices and linear equations, J. Math. Anal. Appl. 68:395-407 (1979).
11 T. Kailath, A. Vieira, and M. Morf, Inverses of Toeplitz operators, innovations, and orthogonal polynomials, SIAM Rev. 20:106-119 (1978).
12 N. Levinson, The Wiener RMS error criterion in filter design and prediction, J. Math. Phys. 25:261-278 (1947).
13 M. Morf, Doubling algorithms for Toeplitz and related equations, in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, Denver, 1980, pp. 954-959.
14 H. Sexton, An analysis of an algorithm of Bitmead and Anderson for the inversion of Toeplitz systems, Naval Ocean Systems Center Technical Report No. 756, 1982.
15 D. R. Sweet, Numerical methods for Toeplitz matrices, Ph.D. Thesis, Dept. of Computing Science, Univ. of Adelaide, 1982.
16 W. Trench, An algorithm for the inversion of finite Toeplitz matrices, J. Soc. Indust. Appl. Math. 12:515-522 (1964).
17 S. Zohar, The algorithm of W. F. Trench, J. Assoc. Comput. Mach. 16:592-601 (1969).
18 S. Zohar, The solution of a Toeplitz set of linear equations, J. Assoc. Comput. Mach. 21:272-276 (1974).
Received 1 February 1984; revised 13 September 1985