
5.3 The Power Method


The power method is very good at approximating the extremal eigenvalues
of the matrix, that is, the eigenvalues having largest and smallest module,
denoted by λ1 and λn respectively, as well as their associated eigenvectors.
Solving such a problem is of great interest in several real-life applica-
tions (geoseismic, machine and structural vibrations, electric network analy-
sis, quantum mechanics,...) where the computation of λn (and its associated
eigenvector xn ) arises in the determination of the proper frequency (and the
corresponding fundamental mode) of a given physical system. We shall come
back to this point in Section 5.12.
Having approximations of λ1 and λn can also be useful in the analysis of
numerical methods. For instance, if A is symmetric and positive definite, one
can compute the optimal value of the acceleration parameter of the Richardson
method and estimate its error reducing factor (see Chapter 4), as well as
perform the stability analysis of discretization methods for systems of ordinary
differential equations (see Chapter 11).

5.3.1 Approximation of the Eigenvalue of Largest Module

Let A ∈ Cn×n be a diagonalizable matrix and let X ∈ Cn×n be the matrix
of its right eigenvectors xi, for i = 1, . . . , n. Let us also suppose that the
eigenvalues of A are ordered as

|λ1| > |λ2| ≥ |λ3| ≥ . . . ≥ |λn|, (5.16)

where λ1 has algebraic multiplicity equal to 1. Under these assumptions, λ1
is called the dominant eigenvalue of matrix A.
Given an arbitrary initial vector q(0) ∈ Cn of unit Euclidean norm, consider
for k = 1, 2, . . . the following iteration based on the computation of powers of
matrices, commonly known as the power method:

z(k) = A q(k−1),
q(k) = z(k)/‖z(k)‖2,     (5.17)
ν(k) = (q(k))H A q(k).
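In MATLAB, one sweep of (5.17) can be sketched in a few lines (an illustrative
fragment only; q0 and kmax stand for a unit-norm starting vector and an iteration
count, and the complete implementation, with the stopping test (5.26), is
Program 26 below):

q = q0;                  % q0: arbitrary initial vector with norm(q0) = 1
for k = 1:kmax
   z  = A*q;             % z^(k) = A q^(k-1)
   q  = z/norm(z);       % q^(k) = z^(k)/||z^(k)||_2
   nu = q'*A*q;          % nu^(k) = (q^(k))^H A q^(k)
end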

Let us analyze the convergence properties of method (5.17). By induction on
k one can check that

q(k) = A^k q(0) / ‖A^k q(0)‖2,   k ≥ 1. (5.18)

This relation explains the role played by the powers of A in the method.
Because A is diagonalizable, its eigenvectors xi form a basis of Cn ; it is thus
possible to represent q(0) as
q(0) = Σ_{i=1}^{n} αi xi,   αi ∈ C, i = 1, . . . , n. (5.19)

Moreover, since Axi = λi xi , we have


A^k q(0) = α1 λ1^k ( x1 + Σ_{i=2}^{n} (αi/α1)(λi/λ1)^k xi ),   k = 1, 2, . . . (5.20)

Since |λi /λ1 | < 1 for i = 2, . . . , n, as k increases the vector Ak q(0) (and
thus also q(k) , due to (5.18)), tends to assume an increasingly significant
component in the direction of the eigenvector x1 , while its components in the
other directions xj decrease. Using (5.18) and (5.20), we get

q(k) = α1 λ1^k (x1 + y(k)) / ‖α1 λ1^k (x1 + y(k))‖2 = µk (x1 + y(k)) / ‖x1 + y(k)‖2,

where µk is the sign of α1 λ1^k and y(k) denotes a vector that vanishes as k → ∞.
As k → ∞, the vector q(k) thus aligns itself along the direction of eigen-
vector x1 , and the following error estimate holds at each step k.

Theorem 5.6 Let A ∈ Cn×n be a diagonalizable matrix whose eigenvalues
satisfy (5.16). Assuming that α1 ≠ 0, there exists a constant C > 0 such that

‖q̃(k) − x1‖2 ≤ C |λ2/λ1|^k,   k ≥ 1, (5.21)

where

q̃(k) = q(k) ‖A^k q(0)‖2 / (α1 λ1^k) = x1 + Σ_{i=2}^{n} (αi/α1)(λi/λ1)^k xi,   k = 1, 2, . . . (5.22)

Proof. Since A is diagonalizable, without loss of generality we can pick the
nonsingular matrix X in such a way that its columns have unit Euclidean length,
that is, ‖xi‖2 = 1 for i = 1, . . . , n. From (5.20) it thus follows that

‖x1 + Σ_{i=2}^{n} (αi/α1)(λi/λ1)^k xi − x1‖2 = ‖Σ_{i=2}^{n} (αi/α1)(λi/λ1)^k xi‖2

≤ ( Σ_{i=2}^{n} (αi/α1)^2 (λi/λ1)^{2k} )^{1/2} ≤ |λ2/λ1|^k ( Σ_{i=2}^{n} (αi/α1)^2 )^{1/2},

that is, (5.21) with C = ( Σ_{i=2}^{n} (αi/α1)^2 )^{1/2}. ♦

Estimate (5.21) expresses the convergence of the sequence q̃(k) towards x1 .


Therefore the sequence of Rayleigh quotients

((q̃(k))H A q̃(k)) / ‖q̃(k)‖2^2 = (q(k))H A q(k) = ν(k)

will converge to λ1. As a consequence, lim_{k→∞} ν(k) = λ1, and the convergence
will be faster when the ratio |λ2/λ1| is smaller.
If the matrix A is real and symmetric it can be proved, always assuming
that α1 ≠ 0, that (see [GL89], pp. 406-407)

|λ1 − ν(k)| ≤ |λ1 − λn| tan^2(θ0) |λ2/λ1|^{2k},   (5.23)

where cos(θ0) = |x1^T q(0)| ≠ 0. Inequality (5.23) shows that the convergence
of the sequence ν(k) to λ1 is quadratic with respect to the ratio |λ2/λ1| (we
refer to Section 5.3.3 for numerical results).
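This quadratic decay can be checked numerically with a few MATLAB lines (an
illustrative sketch only: the symmetric test matrix, the starting vector and the
number of steps are arbitrary choices, and eig is used solely to provide reference
eigenvalues):

% Compare the error |lambda1 - nu^(k)| with the rate |lambda2/lambda1|^(2k)
A   = [4 1 0; 1 3 1; 0 1 2];     % symmetric, with distinct eigenvalues
lam = sort(eig(A),'descend');    % reference values lambda1 > lambda2 > lambda3 > 0
q   = [1; 1; 1]/sqrt(3);
for k = 1:10
   z  = A*q; q = z/norm(z);
   nu = q'*A*q;
   fprintf('%2d  %10.3e  %10.3e\n', k, abs(lam(1)-nu), abs(lam(2)/lam(1))^(2*k));
end

The two printed columns should be observed to decrease at essentially the same
rate, in agreement with (5.23).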
We conclude the section by providing a stopping criterion for the iteration
(5.17). For this purpose, let us introduce the residual at step k

r(k) = Aq(k) − ν (k) q(k) , k ≥ 1,


and, for ε > 0, the matrix εE(k) = −r(k) (q(k))H ∈ Cn×n with ‖E(k)‖2 = 1.
Since

εE(k) q(k) = −r(k),   k ≥ 1, (5.24)
we obtain (A + εE(k)) q(k) = ν(k) q(k). As a result, at each step of the power
method ν(k) is an eigenvalue of the perturbed matrix A + εE(k). From (5.24) and
from definition (1.20) it also follows that ε = ‖r(k)‖2 for k = 1, 2, . . . Plugging
this identity back into (5.10) and approximating the partial derivative in (5.10)
by the incremental ratio |λ1 − ν(k)|/ε, we get

|λ1 − ν(k)| ≃ ‖r(k)‖2 / |cos(θλ)|,   k ≥ 1, (5.25)

where θλ is the angle between the right and the left eigenvectors, x1 and y1 ,
associated with λ1. Notice that, if A is a Hermitian matrix, then cos(θλ) = 1,
so that (5.25) yields an estimate which is analogous to (5.13).
In practice, in order to employ the estimate (5.25) it is necessary at each
step k to replace | cos(θλ )| with the module of the scalar product between two
approximations q(k) and w(k) of x1 and y1 , computed by the power method.
The following a posteriori estimate is thus obtained

|λ1 − ν(k)| ≃ ‖r(k)‖2 / |(w(k))H q(k)|,   k ≥ 1. (5.26)

Examples of applications of (5.26) will be provided in Section 5.3.3.



5.3.2 Inverse Iteration

In this section we look for an approximation of the eigenvalue of a matrix
A ∈ Cn×n which is closest to a given number µ ∈ C, where µ ∉ σ(A). For this,
the power iteration (5.17) can be applied to the matrix Mµ^{−1} = (A − µI)^{−1},
yielding the so-called inverse iteration or inverse power method. The number
µ is called a shift.
The eigenvalues of Mµ^{−1} are ξi = (λi − µ)^{−1}; let us assume that there exists
an integer m such that

|λm − µ| < |λi − µ|,   ∀i = 1, . . . , n and i ≠ m. (5.27)

This amounts to requiring that the eigenvalue λm which is closest to µ has
multiplicity equal to 1. Moreover, (5.27) shows that ξm is the eigenvalue of
Mµ^{−1} with largest module; in particular, if µ = 0, λm turns out to be the
eigenvalue of A with smallest module.
Given an arbitrary initial vector q(0) ∈ Cn of unit Euclidean norm, for
k = 1, 2, . . . the following sequence is constructed:
(A − µI) z(k) = q(k−1),
q(k) = z(k)/‖z(k)‖2,     (5.28)
σ(k) = (q(k))H A q(k).

Notice that the eigenvectors of Mµ are the same as those of A since
Mµ = X (Λ − µIn) X^{−1}, where Λ = diag(λ1, . . . , λn). For this reason, the
Rayleigh quotient in (5.28) is computed directly on the matrix A (and not
on Mµ^{−1}). The main difference with respect to (5.17) is that at each step
k a linear system with coefficient matrix Mµ = A − µI must be solved. For
numerical convenience, the LU factorization of Mµ is computed once and for all
at k = 1, so that at each step only two triangular systems are to be solved,
with a cost of the order of n^2 flops.
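Schematically, the resulting organization of one step of (5.28) reads as follows
(a sketch anticipating Program 27 below; lu is the built-in MATLAB factorization,
with permutation matrix P such that P Mµ = LU, and q holds the current iterate):

M = A - mu*eye(size(A));   % M_mu, factored once and for all at k = 1
[L,U,P] = lu(M);           % P*M = L*U (partial pivoting)
% ... then, at each step k, only two triangular solves are needed:
y = L\(P*q);               % forward substitution
z = U\y;                   % backward substitution: z solves (A - mu*I) z = q
q = z/norm(z);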
Although more computationally expensive than the power method (5.17),
the inverse iteration has the advantage that it can converge to any desired
eigenvalue of A (namely, the one closest to the shift µ). Inverse iteration
is thus ideally suited for refining an initial estimate µ of an eigenvalue of A,
which can be obtained, for instance, by applying the localization techniques
introduced in Section 5.1. Inverse iteration can also be effectively employed
to compute the eigenvector associated with a given (approximate) eigenvalue,
as described in Section 5.8.1.
In view of the convergence analysis of the iteration (5.28) we assume that A is
diagonalizable, so that q(0) can be represented in the form (5.19). Proceeding
in the same way as in the power method, we let
q̃(k) = xm + Σ_{i=1, i≠m}^{n} (αi/αm)(ξi/ξm)^k xi,

where xi are the eigenvectors of Mµ^{−1} (and thus also of A), while αi are as
in (5.19). As a consequence, recalling the definition of ξi and using (5.27), we
get

lim_{k→∞} q̃(k) = xm,   lim_{k→∞} σ(k) = λm.

Convergence will be faster when µ is closer to λm. Under the same assumptions
made for proving (5.26), the following a posteriori estimate can be obtained
for the approximation error on λm

|λm − σ(k)| ≃ ‖r̃(k)‖2 / |(w̃(k))H q(k)|,   k ≥ 1, (5.29)

where r̃(k) = A q(k) − σ(k) q(k) and w̃(k) is the k-th iterate of the inverse power
method to approximate the left eigenvector associated with λm.

5.3.3 Implementation Issues

The convergence analysis of Section 5.3.1 shows that the effectiveness of the
power method strongly depends on the dominant eigenvalues being well-
separated (that is, |λ2|/|λ1| ≪ 1). Let us now analyze the behavior of
iteration (5.17) when two dominant eigenvalues of equal module exist (that
is, |λ2| = |λ1|). Three cases must be distinguished:
1. λ2 = λ1 : the two dominant eigenvalues are coincident. The method is still
convergent, since for k sufficiently large (5.20) yields

A^k q(0) ≃ λ1^k (α1 x1 + α2 x2),

which is an eigenvector of A. For k → ∞, the sequence q̃(k) (after a
suitable redefinition) converges to a vector lying in the subspace spanned
by the eigenvectors x1 and x2, while the sequence ν(k) still converges to
λ1.
2. λ2 = −λ1: the two dominant eigenvalues are opposite. In this case the
eigenvalue of largest module can be approximated by applying the power
method to the matrix A^2. Indeed, for i = 1, . . . , n, λi(A^2) = [λi(A)]^2,
so that λ1^2 = λ2^2 and the analysis falls into the previous case, where the
matrix is now A^2 (see the sketch after this list).
3. λ2 = λ̄1: the two dominant eigenvalues are complex conjugate. Here, un-
damped oscillations arise in the sequence of vectors q(k) and the power
method is not convergent (see [Wil65], Chapter 9, Section 12).
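For case 2, a minimal sketch using Program 26 below might read (the tolerance
and the iteration cap are illustrative values):

% lambda2 = -lambda1: apply the power method to A^2, whose dominant
% eigenvalue is lambda1^2 = lambda2^2 (case 1 above then applies to A^2)
[lsq,x,iter,relres] = powerm(A*A, z0, 1e-10, 500);
abslambda1 = sqrt(lsq);    % modulus of the two dominant eigenvalues of A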
As for the computer implementation of (5.17), it is worth noting that
normalizing the vector q(k) to 1 prevents overflow (when |λ1| > 1) and
underflow (when |λ1| < 1) in (5.20). We also point out that the requirement
α1 ≠ 0 (which is a priori impossible to fulfil when no information about the
eigenvector x1 is available) is not essential for the actual convergence of the
algorithm.

Indeed, although it can be proved that, working in exact arithmetic, the
sequence (5.17) converges to the pair (λ2, x2) if α1 = 0 (see Exercise 10), in
practice the (unavoidable) rounding errors ensure that the vector q(k) contains
a non-null component also in the direction of x1. This allows the eigenvalue λ1
to "show up" and the power method to quickly converge to it.
An implementation of the power method is given in Program 26. Here and
in the following algorithm, the convergence check is based on the a posteriori
estimate (5.26).
Here and in the remainder of the chapter, the input data z0, tol and nmax
are the initial vector, the tolerance for the stopping test and the maximum
admissible number of iterations, respectively. In output, lambda is the
approximate eigenvalue, relres is the vector containing the sequence
{‖r(k)‖2/|cos(θλ)|} (see (5.26)), whilst x and iter are the approximation of
the eigenvector x1 and the number of iterations taken by the algorithm to
converge, respectively.

Program 26 - powerm : Power method


function [lambda,x,iter,relres]=powerm(A,z0,tol,nmax)
%POWERM Power method
%   [LAMBDA,X,ITER,RELRES]=POWERM(A,Z0,TOL,NMAX) computes the
%   eigenvalue LAMBDA of largest module of the matrix A and the
%   corresponding eigenvector X of unit norm. TOL specifies the tolerance
%   of the method. NMAX specifies the maximum number of iterations.
%   Z0 specifies the initial guess. ITER is the iteration number at which
%   X is computed.
q=z0/norm(z0); q2=q;
relres=tol+1; iter=0; z=A*q;
while relres(end)>=tol & iter<=nmax
   q=z/norm(z); z=A*q;
   lambda=q'*z; x=q;
   z2=q2'*A; q2=z2/norm(z2); q2=q2';
   y1=q2; costheta=abs(y1'*x);
   if costheta >= 5e-2
      iter=iter+1;
      temp=norm(z-lambda*q)/costheta;
      relres=[relres; temp];
   else
      fprintf('Multiple eigenvalue'); break;
   end
end
return
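As an illustration of its use, the first run of Example 5.3 below corresponds to
a call of the following form (the tolerance and the maximum number of iterations
are illustrative choices):

A  = [15 -2 2; 1 10 -3; -2 1 0];   % matrix A of (5.30)
z0 = [1; 1; 1];                    % initial datum z^(0)
[lambda,x,iter,relres] = powerm(A,z0,1e-10,200);
% lambda approximates the dominant eigenvalue 14.103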
A coding of the inverse power method is provided in Program 27. The
input parameter mu is the initial approximation of the eigenvalue. In output,
sigma is the approximation of the computed eigenvalue and relres is a vector
containing the sequence {‖r̃(k)‖2/|(w̃(k))H q(k)|} (see (5.29)). The LU
factorization (with partial pivoting) of the matrix Mµ is carried out using the
MATLAB intrinsic function lu.

Program 27 - invpower : Inverse power method


function [sigma,x,iter,relres]=invpower(A,z0,mu,tol,nmax)
%INVPOWER Inverse power method
%   [SIGMA,X,ITER,RELRES]=INVPOWER(A,Z0,MU,TOL,NMAX) computes the
%   eigenvalue SIGMA of the matrix A closest to the shift MU (the
%   eigenvalue of smallest module if MU=0) and the corresponding
%   eigenvector X of unit norm. TOL specifies the tolerance of the
%   method. NMAX specifies the maximum number of iterations. Z0
%   specifies the initial guess. ITER is the iteration number at which
%   X is computed.
M=A-mu*eye(size(A)); [L,U,P]=lu(M);
q=z0/norm(z0); q2=q'; sigma=[];
relres=tol+1; iter=0;
while relres(end)>=tol & iter<=nmax
   iter=iter+1;
   b=P*q;
   y=L\b; z=U\y;
   q=z/norm(z); z=A*q; sigma=q'*z;
   b=q2'; y=U'\b; w=L'\y;
   q2=w'*P; q2=q2/norm(q2); costheta=abs(q2*q);
   if costheta>=5e-2
      temp=norm(z-sigma*q)/costheta; relres=[relres,temp];
   else
      fprintf('Multiple eigenvalue'); break;
   end
   x=q;
end
return
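As an illustration, the inverse power run at the end of Example 5.3 below
(shift µ = 0, which selects the eigenvalue of smallest module) corresponds to
a call of the form (tolerance and iteration cap are again illustrative):

A  = [15 -2 2; 1 10 -3; -2 1 0];   % matrix A of (5.30)
z0 = [1; 1; 1]/sqrt(3);            % initial datum q^(0)
[sigma,x,iter,relres] = invpower(A,z0,0,1e-10,200);
% sigma approximates the eigenvalue of smallest module, 0.512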

Example 5.3 Let us consider the following matrices

A = [ 15  −2   2          V = [ −0.944  0.393  −0.088
       1  10  −3                −0.312  0.919   0.309     (5.30)
      −2   1   0 ],              0.112  0.013   0.947 ].

Matrix A has the following eigenvalues (to five significant figures): λ1 = 14.103,
λ2 = 10.385 and λ3 = 0.512, while the corresponding eigenvectors are the vector
columns of matrix V.
To approximate the pair (λ1, x1), we have run Program 26 with initial datum
z(0) = [1, 1, 1]^T. After 71 iterations of the power method the absolute errors are
|λ1 − ν(71)| = 2.2341·10^{−10} and ‖x1 − x1^(71)‖∞ = 1.42·10^{−11}.
In a second run, we have used z(0) = x2 + x3 (notice that with this choice
α1 = 0). After 215 iterations the absolute errors are |λ1 − ν(215)| = 4.26·10^{−14}
and ‖x1 − x1^(215)‖∞ = 1.38·10^{−14}.
Figure 5.2 (left) shows the reliability of the a posteriori estimate (5.26). The se-
quences |λ1 − ν (k) | (solid line) and the corresponding a posteriori estimates (5.26)
5.4 The QR Iteration 199

(dashed line) are plotted as a function of the number of iterations (in abscissae).
Notice the excellent agreement between the two curves.

[Fig. 5.2. Comparison between the a posteriori error estimate and the actual
absolute error for matrix A in (5.30) (left); convergence curves for the power
method applied to matrix A in (5.31) in its symmetric (S) and nonsymmetric
(NS) forms (right)]
Let us now consider the matrices

A = [ 1  3  4            T = [ 8  1  6
      3  1  2                  3  5  7     (5.31)
      4  2  1 ],               4  9  2 ],

where A has the following spectrum: λ1 = 7.047, λ2 = −3.1879 and λ3 = −0.8868
(to five significant figures).
It is interesting to compare the behaviour of the power method when computing λ1
for the symmetric matrix A and for its similar matrix M = T^{−1} A T, where T is the
nonsingular (and nonorthogonal) matrix in (5.31).
Running Program 26 with z(0) = [1, 1, 1]T , the power method converges to the eigen-
value λ1 in 18 and 30 iterations, for matrices A and M, respectively. The sequence
of absolute errors |λ1 − ν (k) | is plotted in Figure 5.2 (right) where (S) and (NS) refer
to the computations on A and M, respectively. Notice the rapid error reduction in
the symmetric case, according to the quadratic convergence properties of the power
method (see Section 5.3.1).
We finally employ the inverse power method (5.28) to compute the eigenvalue of
smallest module λ3 = 0.512 of matrix A in (5.30). Running Program 27 with
q(0) = [1, 1, 1]^T/√3, the method converges in 9 iterations, with absolute errors
|λ3 − σ(9)| = 1.194·10^{−12} and ‖x3 − x3^(9)‖∞ = 4.59·10^{−13}. •

5.4 The QR Iteration


In this section we present some iterative techniques for simultaneously ap-
proximating all the eigenvalues of a given matrix A. The basic idea consists of
reducing A, by means of suitable similarity transformations, into a form for
which the calculation of the eigenvalues is easier than on the starting matrix.

The problem would be satisfactorily solved if the unitary matrix U of
the Schur decomposition theorem 1.5, such that T = U^H A U, T being upper
triangular and with tii = λi(A) for i = 1, . . . , n, could be determined in a
direct way, that is, with a finite number of operations. Unfortunately, it is
a consequence of Abel's theorem that, for n ≥ 5, the matrix U cannot be
computed in an elementary way (see Exercise 8). Thus, our problem can be
solved only by resorting to iterative techniques.
The reference algorithm in this context is the QR iteration method, which is
examined here only in the case of real matrices. (For some remarks on the
extension of the algorithms to the complex case, see [GL89], Section 5.2.10
and [Dem97], Section 4.2.1).
Let A ∈ Rn×n; given an orthogonal matrix Q(0) ∈ Rn×n and letting T(0) =
(Q(0))^T A Q(0), for k = 1, 2, . . ., until convergence, the QR iteration consists of:

determine Q(k), R(k) such that
   Q(k) R(k) = T(k−1)   (QR factorization);
then, let                                          (5.32)
   T(k) = R(k) Q(k).

At each step k ≥ 1, the first phase of the iteration is the factorization of the
matrix T(k−1) into the product of an orthogonal matrix Q(k) with an upper
triangular matrix R(k) (see Section 5.6.3). The second phase is a simple matrix
product. Notice that

T(k) = R(k) Q(k) = (Q(k))^T (Q(k) R(k)) Q(k) = (Q(k))^T T(k−1) Q(k)
     = (Q(0) Q(1) · · · Q(k))^T A (Q(0) Q(1) · · · Q(k)),   k ≥ 0,     (5.33)

i.e., every matrix T(k) is orthogonally similar to A. This is particularly relevant
for the stability of the method, since, as shown in Section 5.2, the conditioning
of the matrix eigenvalue problem for T(k) is not worse than it is for A (see
also [GL89], p. 360).
A basic implementation of the QR iteration (5.32), assuming Q(0) = In , is
examined in Section 5.5, while a more computationally efficient version, start-
ing from T(0) in upper Hessenberg form, is described in detail in Section 5.6.
If A has real eigenvalues, distinct in module, it will be seen in Section 5.5 that
the limit of T(k) is an upper triangular matrix (with the eigenvalues of A on
the main diagonal). However, if A has complex eigenvalues the limit of T(k)
cannot be an upper triangular matrix T. Indeed, if it were, T would necessarily
have real eigenvalues, although it is similar to A.
Failure to converge to a triangular matrix may also happen in more general
situations, as addressed in Example 5.9.
For this, it is necessary to introduce variants of the QR iteration (5.32),
based on deflation and shift techniques (see Section 5.7 and, for a more detailed

discussion of the subject, [GL89], Chapter 7, [Dat95], Chapter 8 and [Dem97],
Chapter 4).
These techniques allow for T(k) to converge to an upper quasi-triangular ma-
trix, known as the real Schur decomposition of A, for which the following result
holds (for the proof we refer to [GL89], pp. 341-342).

Property 5.8 Given a matrix A ∈ Rn×n, there exists an orthogonal matrix
Q ∈ Rn×n such that

Q^T A Q = [ R11  R12  . . .  R1m
             0   R22  . . .  R2m
             ..   ..   . .    ..     (5.34)
             0    0   . . .  Rmm ],

where each block Rii is either a real number or a matrix of order 2 having
complex conjugate eigenvalues, and
Q = lim_{k→∞} ( Q(0) Q(1) · · · Q(k) ),     (5.35)

Q(k) being the orthogonal matrix generated by the k-th factorization step of
the QR iteration (5.32).

The QR iteration can also be employed to compute all the eigenvectors
of a given matrix. For this purpose, we describe in Section 5.8 two possible
approaches, one based on the coupling between (5.32) and the inverse iteration
(5.28), the other working on the real Schur form (5.34).

5.5 The Basic QR Iteration

In the basic version of the QR method, one sets Q(0) = In in such a way
that T(0) = A. At each step k ≥ 1 the QR factorization of the matrix T(k−1)
can be carried out using the modified Gram-Schmidt procedure introduced in
Section 3.4.3, with a cost of the order of 2n^3 flops (for a full matrix A). The
following convergence result holds (for the proof, see [GL89], Theorem 7.3.1,
or [Wil65], pp. 517-519).
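A bare-bones sketch of this basic iteration is given below; note that the built-in
MATLAB function qr (based on Householder transformations) is used here in
place of the modified Gram-Schmidt procedure mentioned above, and nmax is an
arbitrary iteration count:

T = A;               % Q^(0) = I_n, so that T^(0) = A
for k = 1:nmax
   [Q,R] = qr(T);    % factorization T^(k-1) = Q^(k) R^(k)
   T = R*Q;          % T^(k) = R^(k) Q^(k), orthogonally similar to A
end
% for k large, the diagonal of T approximates the eigenvalues of A
% (see Property 5.9 below)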

Property 5.9 (Convergence of QR method) Let A ∈ Rn×n be a matrix
with real eigenvalues such that

|λ1| > |λ2| > . . . > |λn|.



Then

lim_{k→+∞} T(k) = [ λ1  t12  . . .  t1n
                     0   λ2  t23  . . .
                     ..   ..   . .   ..     (5.36)
                     0    0  . . .   λn ].

As for the convergence rate, we have

|t(k)_{i,i−1}| = O( |λi/λi−1|^k ),   i = 2, . . . , n, for k → +∞. (5.37)

Under the additional assumption that A is symmetric, the sequence {T(k)}
tends to a diagonal matrix.
If the eigenvalues of A, although being distinct, are not well-separated, it
follows from (5.37) that the convergence of T(k) towards a triangular matrix
can be quite slow. With the aim of accelerating it, one can resort to the
so-called shift technique, which will be addressed in Section 5.7.

Remark 5.2 It is always possible to reduce the matrix A into a triangular
form by means of an iterative algorithm employing nonorthogonal similarity
transformations. In such a case, the so-called LR iteration (known also as
Rutishauser method, [Rut58]) can be used, from which the QR method has
actually been derived (see also [Fra61], [Wil65]). The LR iteration is based on
the factorization of the matrix A into the product of two matrices L and R,
respectively unit lower triangular and upper triangular, and on the
(nonorthogonal) similarity transformation

L^{−1} A L = L^{−1} (LR) L = RL.

The rare use of the LR method in practical computations is due to the loss
of accuracy that can arise in the LR factorization because of the increase in
module of the upper diagonal entries of R. This aspect, together with the
details of the implementation of the algorithm and some comparisons with
the QR method, is examined in [Wil65], Chapter 8. □

Example 5.4 We apply the QR method to the symmetric matrix A ∈ R4×4 such
that aii = 4, for i = 1, . . . , 4, and aij = 4 + i − j for i < j ≤ 4, whose eigenvalues are
(to three significant figures) λ1 = 11.09, λ2 = 3.41, λ3 = 0.90 and λ4 = 0.59. After
20 iterations, we get

T(20) = [ 11.09           6.44·10^{−10}   −3.62·10^{−15}   9.49·10^{−15}
          6.47·10^{−10}   3.41             1.43·10^{−11}   4.60·10^{−16}
          1.74·10^{−21}   1.43·10^{−11}    0.90            1.16·10^{−4}
          2.32·10^{−25}   2.68·10^{−15}    1.16·10^{−4}    0.58 ].
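This computation can be reproduced with a few lines (a sketch reusing the loop
given in Section 5.5 above; the rule aij = 4 + i − j for i < j, extended
symmetrically, amounts to aij = 4 − |i − j|):

A = 4 - abs((1:4)' - (1:4));   % a_ii = 4, a_ij = 4+i-j for i<j, symmetric
T = A;
for k = 1:20                   % 20 QR steps, as in the example
   [Q,R] = qr(T); T = R*Q;
end
% diag(T) now approximates (11.09, 3.41, 0.90, 0.59), with small
% off-diagonal entries, as displayed in T(20) above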
