Chapter 2: Linear Algebraic Equations


2.1 Introduction
One of the commonest numerical tasks facing engineers is the solution of sets of linear algebraic equations of the form

a11x1 + a12x2 + a13x3 = b1
a21x1 + a22x2 + a23x3 = b2        (2.1)
a31x1 + a32x2 + a33x3 = b3

commonly written

Ax = b        (2.2)

where A is a 'matrix' and x and b are 'vectors'.


In these equations the aij are constant known quantities, as are the bi. The problem is to determine the unknown xi. In this chapter we shall consider two different solution techniques, usually termed 'direct' and 'iterative' methods. The direct methods are considered first and are based on row by row 'elimination' of terms, a process usually called 'Gaussian elimination'.

2.2 Gaussian elimination
We begin with a specific set of equations:

 10x1 +  x2 -  5x3 = 1    (a)
-20x1 + 3x2 + 20x3 = 2    (b)        (2.3)
  5x1 + 3x2 +  5x3 = 6    (c)

To 'eliminate' terms, we could, for example, multiply equation (a) by two and add to equation (b). This would produce an equation from which the term in x1 had been eliminated. Similarly, we could multiply equation (a) by 0.5 and subtract from equation (c). This would also eliminate the term in x1, leaving an equation in (at most) x2 and x3.

We could formally write this process

(b) - (-20/10) × (a):      5x2 + 10x3 = 4        (d)
                                                 (2.4)
(c) - (5/10) × (a):      2.5x2 + 7.5x3 = 5.5     (e)

One more step of the same procedure would be

(e) - (2.5/5) × (d):             2.5x3 = 3.5     (2.5)

Thus, for sets of N simultaneous equations, however big N might be, after N steps of this process a single equation involving only the unknown xN would remain. Working backwards from eq. 2.5 - a procedure usually called 'back-substitution' - x3 can first be found as 3.5/2.5 or 1.4. Knowing x3, substitution in eq. 2.4(d) gives x2 as -2.0 and finally substitution in eq. 2.3(a) gives x1 as 1.0. Writing the back-substitution process in terms of matrices and vectors, we have

[10   1    -5 ] {x1}   { 1 }
[ 0   5    10 ] {x2} = { 4 }        (2.6)
[ 0   0   2.5 ] {x3}   {3.5}

or

Ux = y        (2.7)

The matrix U is called an 'upper triangular matrix' and it is clear that such matrices will be very convenient in linear equation work.
In a similar way, if we had the system of equations

[l11   0    0 ] {x1}   {y1}
[l21  l22   0 ] {x2} = {y2}        (2.8)
[l31  l32  l33] {x3}   {y3}

or

Lx = y        (2.9)

it would be relatively easy to calculate x given L and y. The matrix L is called a 'lower triangular matrix', and the process of finding x in eqs 2.9 is called 'forward-substitution'. The direct methods we shall discuss all involve, in some way or another, matrices like L and U.
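
Before moving on, it may help to see one of these processes spelt out as code. The following is a minimal sketch, with array names of our own choosing (not the library names used later in this chapter), of back-substitution applied to the numbers of eqs 2.6; forward-substitution is the mirror image, running down the rows from the first equation instead of up from the last.

      PROGRAM BAKSUB
C      SKETCH ONLY : BACK-SUBSTITUTION APPLIED TO EQS 2.6
      PARAMETER (N=3)
      REAL U(N,N),X(N),Y(N)
C      UPPER TRIANGLE AND RIGHT-HAND SIDE OF EQS 2.6
      DATA U/10.,0.,0.,1.,5.,0.,-5.,10.,2.5/
      DATA Y/1.,4.,3.5/
C      LAST EQUATION GIVES X(N) DIRECTLY; WORK UPWARDS FROM THERE
      DO 1 I = N,1,-1
      X(I) = Y(I)
      DO 2 J = I + 1,N
    2 X(I) = X(I) - U(I,J)*X(J)
    1 X(I) = X(I)/U(I,I)
C      PRINTS 1.0  -2.0  1.4 AS IN THE TEXT
      WRITE(6,*) (X(I),I=1,N)
      STOP
      END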

Example 2.1. Hand solution of three equations

Use Gaussian elimination to solve the following set of equations:

2x1 - 3x2 +  x3 =  7
 x1 -  x2 - 2x3 = -2
3x1 +  x2 -  x3 =  0

Solution 2.1

Eliminate the first column

2x1 -   3x2 +    x3 =   7
      0.5x2 - 2.5x3 =  -5.5
      5.5x2 - 2.5x3 = -10.5

Eliminate the second column

2x1 -   3x2 +    x3 =   7
      0.5x2 - 2.5x3 =  -5.5
               25x3 =  50

Back-substitute

x3 = 50/25 = 2
x2 = (-5.5 + 2.5(2))/0.5 = -1
x1 = (7 - 2 + 3(-1))/2 = 1

Program 2.1. Gaussian elimination

For our first computer program in this chapter, let us construct a program for Gaussian elimination following the steps in eqs 2.4 and 2.5. The code is listed as Program 2.1 and, apart from simple counters, involves the following nomenclature:

Simple variables
N        Number of equations to be solved

Variable length arrays
A        Coefficient matrix of the equations
B        Right-hand side vector

PARAMETER restriction
IN >= N
I
      PROGRAM P21
C
C      PROGRAM 2.1   GAUSSIAN ELIMINATION
C
C      ALTER NEXT LINE TO CHANGE PROBLEM SIZE
C
      PARAMETER (IN=20)
C
      REAL A(IN,IN),B(IN)
C
      READ (5,*) N
      READ (5,*) ((A(I,J),J=1,N),I=1,N)
      READ (5,*) (B(I),I=1,N)
      WRITE(6,*) ('************** GAUSSIAN ELIMINATION **************')
      WRITE(6,*)
      WRITE(6,*) ('COEFFICIENT MATRIX')
      CALL PRINTA(A,IN,N,N,6)
      WRITE(6,*)
      WRITE(6,*) ('RIGHT HAND SIDE VECTOR')
      CALL PRINTV(B,N,6)
      WRITE(6,*)
C      CONVERT TO UPPER TRIANGULAR FORM
      DO 1 K = 1,N - 1
      IF (ABS(A(K,K)).GT.1.E-6) THEN
      DO 2 I = K + 1,N
      X = A(I,K)/A(K,K)
      DO 3 J = K + 1,N
      A(I,J) = A(I,J) - A(K,J)*X
    3 CONTINUE
      B(I) = B(I) - B(K)*X
    2 CONTINUE
      ELSE
      WRITE(6,*) ('ZERO PIVOT FOUND IN LINE: ')
      WRITE(6,*) K
      STOP
      END IF
    1 CONTINUE
      WRITE(6,*) ('MODIFIED MATRIX')
      CALL PRINTA(A,IN,N,N,6)
      WRITE(6,*)
      WRITE(6,*) ('MODIFIED RIGHT HAND SIDE VECTOR')
      CALL PRINTV(B,N,6)
      WRITE(6,*)
C      BACK SUBSTITUTION
      DO 5 I = N,1,-1
      SUM = B(I)
      IF (I.LT.N) THEN
      DO 6 J = I + 1,N
      SUM = SUM - A(I,J)*B(J)
    6 CONTINUE
      END IF
      B(I) = SUM/A(I,I)
    5 CONTINUE
C      PRINT THE RESULTS
      WRITE(6,*) ('SOLUTION VECTOR')
      CALL PRINTV(B,N,6)
      STOP
      END

In eqs 2.4 and 2.5 it can be seen that the elements of the original arrays A and b are progressively altered during the calculation. In Program 2.1, once the terms in x have been calculated, they are stored in b since the original b has been lost anyway.

In passing it should be noted that eqs 2.4 involve division by the coefficient a11 (equal to 10 in this case) while eqs 2.5 involve division by the modified coefficient a22 (equal to 5 in this case). These coefficients akk, where 1 <= k <= n, are called the 'pivots' and it will be clear that they might be zero, either at the beginning of the elimination process, or during it. We shall return to this problem later, but for the moment shall merely check whether akk is or has become zero and stop the calculation if this is so. Input details are shown in Fig. 2.1(a) with output in Fig. 2.1(b).

The program begins by reading in N, A and b. In the section commented 'convert to upper triangular form' the check is first made to see if akk is greater than 'zero' (a small number in this case). Rows 2 to N are then processed according to eqs 2.4 and 2.5 and the modified A and b printed out for comparison with eqs 2.6.
Number of equations      N
                         3
Coefficient matrix       (A(I,J),J=1,N),I=1,N
                         10.0   1.0   -5.0
                        -20.0   3.0   20.0
                          5.0   3.0    5.0
Right-hand side          B(I),I=1,N
                         1.0   2.0   6.0

Fig. 2.1(a) Input data for Program 2.1

************** GAUSSIAN ELIMINATION **************

COEFFICIENT MATRIX
 .1000E+02  .1000E+01 -.5000E+01
-.2000E+02  .3000E+01  .2000E+02
 .5000E+01  .3000E+01  .5000E+01

RIGHT HAND SIDE VECTOR
 .1000E+01  .2000E+01  .6000E+01

MODIFIED MATRIX
 .1000E+02  .1000E+01 -.5000E+01
-.2000E+02  .5000E+01  .1000E+02
 .5000E+01  .2500E+01  .2500E+01

MODIFIED RIGHT HAND SIDE VECTOR
 .1000E+01  .4000E+01  .3500E+01

SOLUTION VECTOR
 .1000E+01 -.2000E+01  .1400E+01

Fig. 2.1(b) Results from Program 2.1

Note that only the numbers in the upper triangular part of A are of any significance. The back-substitution calculation is then performed, leaving the original unknowns x stored in b, which is printed out.

2.2.1 Observations on the elimination process

In Program 2.1, some terms below the diagonal in matrix U (eq. 2.7) have been computed even though they are known in advance to be 'zero'. This is obviously work which need not be done. Further, during the conversion of A to upper triangular form, it was necessary to operate also on b. Therefore if equations with the same coefficients A have to be solved for different b, which are not known in advance as is often the case, the conversion of A to triangular form would be necessary for every b using this method.

We therefore seek a way of implementing Gaussian elimination so that multiple right-hand side b vectors can be processed after only a single 'decomposition' of A to triangular form. Such methods involve 'factorisation' of A into triangular matrix components. For example, it can be shown that matrix A can always be written as the product

A = LU        (2.10)

where L is a lower triangular matrix and U an upper triangular matrix, in the forms

    [l11   0    0 ]
L = [l21  l22   0 ]        (2.11)
    [l31  l32  l33]

and

    [u11  u12  u13]
U = [ 0   u22  u23]        (2.12)
    [ 0    0   u33]

The numbers lii and uii are arbitrary except that their product is known. For example

l11 u11 = a11        (2.13)

It is conventional to assume that either lkk or ukk are unity, hence, typically

[a11  a12  a13]   [ 1    0   0] [u11  u12  u13]
[a21  a22  a23] = [l21   1   0] [ 0   u22  u23]        (2.14)
[a31  a32  a33]   [l31  l32  1] [ 0    0   u33]

is a usual statement of LU factorisation which is elaborated in the next section.

2.3 Equation solution using LU factorisation

When the triangular factors L and U in eqs 2.10 and 2.14 have been computed, equation solution proceeds as follows:

Ax = b        (2.15)

or

LUx = b

We now let

Ux = y        (2.16)

Hence

Ly = b        (2.17)

Since L and b are known, and L does not depend on b, this process is simply the 'forward-substitution' we saw in eqs 2.9. Once eq. 2.17 has been solved for y, eq. 2.16 is then the 'back-substitution' described by eq. 2.7. A solution algorithm will therefore consist of three phases, namely a factorisation (eq. 2.15) followed by a forward-substitution (eq. 2.17) and a back-substitution (eq. 2.16). The procedures of factorisation and forward- and back-substitution will be used in other contexts in this book and elsewhere, so that it makes sense to code them as library subroutines. They are called LUFAC, SUBFOR and SUBBAC respectively and their actions and parameters are described in Appendix 1 with full listings in Appendix 2.
Equations 2.14 are evaluated as follows:

Row 1:    u11 = a11,  u12 = a12,  u13 = a13

This shows that with unity on the diagonal of L, the first row of U is simply a copy of the first row of A. Subroutine LUFAC therefore begins by nulling L (called LOWTRI) and U (called UPTRI) and by copying the first row of A into UPTRI.

Row 2:    l21 u11 = a21    hence    l21 = a21/u11

Having found l21, u22 and u23 can be computed since

l21 u12 + u22 = a22    hence    u22 = a22 - l21 u12

and

l21 u13 + u23 = a23    hence    u23 = a23 - l21 u13

Row 3:    l31 u11 = a31    hence    l31 = a31/u11

l31 u12 + l32 u22 = a32    hence    l32 = (a32 - l31 u12)/u22

Having found l31 and l32, u33 can be computed from

l31 u13 + l32 u23 + u33 = a33    hence    u33 = a33 - l31 u13 - l32 u23

Subroutine LUFAC carries out these operations in two parts, commented 'lower triangular components' and 'upper triangular components' respectively. A 'zero' pivot is tested for in the same way as was done in Program 2.1.
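
These row-by-row formulae extend naturally to any N. As an illustration only (a sketch with names of our own choosing, not the library routine LUFAC whose listing is in Appendix 2), a factorisation subroutine following this pattern, with the zero-pivot test omitted for brevity, might read:

      SUBROUTINE SKETLU(A,UP,LOW,IA,N)
C      SKETCH OF LU FACTORISATION WITH UNITY ON THE DIAGONAL OF LOW
C      (ILLUSTRATION ONLY - NOT THE LIBRARY SUBROUTINE LUFAC)
      REAL A(IA,*),UP(IA,*),LOW(IA,*)
      DO 1 I = 1,N
      DO 1 J = 1,N
      UP(I,J) = 0.
      LOW(I,J) = 0.
    1 CONTINUE
      DO 6 I = 1,N
      LOW(I,I) = 1.
C      ROW I OF UP, AS IN THE ROW-BY-ROW FORMULAE ABOVE
      DO 3 J = I,N
      SUM = A(I,J)
      DO 2 K = 1,I - 1
    2 SUM = SUM - LOW(I,K)*UP(K,J)
    3 UP(I,J) = SUM
C      COLUMN I OF LOW, DIVIDING BY THE PIVOT UP(I,I)
      DO 5 J = I + 1,N
      SUM = A(J,I)
      DO 4 K = 1,I - 1
    4 SUM = SUM - LOW(J,K)*UP(K,I)
    5 LOW(J,I) = SUM/UP(I,I)
    6 CONTINUE
      RETURN
      END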

Example 2.2

Use LU factorisation to solve the following set of equations:

2x1 - 3x2 +  x3 =  7
 x1 -  x2 - 2x3 = -2
3x1 +  x2 -  x3 =  0

Solution 2.2

Factorise the coefficient matrix into upper and lower triangular matrices, hence

A = LU

[2  -3   1]   [ 1    0   0] [u11  u12  u13]
[1  -1  -2] = [l21   1   0] [ 0   u22  u23]
[3   1  -1]   [l31  l32  1] [ 0    0   u33]

Solving for lij and uij gives

    [ 1    0   0]
L = [0.5   1   0]
    [1.5  11   1]

    [2  -3     1 ]
U = [0  0.5  -2.5]
    [0   0    25 ]

Forward-substitution:

Ly = b

[ 1    0   0] {y1}   { 7}
[0.5   1   0] {y2} = {-2}
[1.5  11   1] {y3}   { 0}

y1 = 7,  y2 = -2 - 0.5(7) = -5.5,  y3 = 0 - 1.5(7) - 11(-5.5) = 50

Back-substitution:

Ux = y

[2  -3     1 ] {x1}   {  7 }
[0  0.5  -2.5] {x2} = {-5.5}
[0   0    25 ] {x3}   { 50 }

x3 = 50/25 = 2,  x2 = (-5.5 + 2.5(2))/0.5 = -1,  x1 = (7 - 2 + 3(-1))/2 = 1

Program 2.2. Equation solution by LU factorisation

The program which implements the process is listed as Program 2.2. It involves the nomenclature:

Simple variables
N        Number of equations to be solved

Variable length arrays
A        Coefficient matrix of the equations
B        Right-hand side vector
UPTRI    Upper triangular factor of A
LOWTRI   Lower triangular factor of A

PARAMETER restriction
IN >= N
      PROGRAM P22
C
C      PROGRAM 2.2   GAUSSIAN ELIMINATION USING L*U FACTORISATION
C
C      ALTER NEXT LINE TO CHANGE PROBLEM SIZE
C
      PARAMETER (IN=20)
C
      REAL A(IN,IN),UPTRI(IN,IN),LOWTRI(IN,IN),B(IN)
C
      READ (5,*) N
      READ (5,*) ((A(I,J),J=1,N),I=1,N)
      READ (5,*) (B(I),I=1,N)
      WRITE(6,*) ('** GAUSSIAN ELIMINATION USING LU FACTORISATION **')
      WRITE(6,*)
      WRITE(6,*) ('COEFFICIENT MATRIX')
      CALL PRINTA(A,IN,N,N,6)
      WRITE(6,*)
      WRITE(6,*) ('RIGHT HAND SIDE VECTOR')
      CALL PRINTV(B,N,6)
      WRITE(6,*)
      CALL LUFAC(A,UPTRI,LOWTRI,IN,N)
      WRITE(6,*) ('UPPER TRIANGULAR FACTORS')
      CALL PRINTA(UPTRI,IN,N,N,6)
      WRITE(6,*)
      WRITE(6,*) ('LOWER TRIANGULAR FACTORS')
      CALL PRINTA(LOWTRI,IN,N,N,6)
      WRITE(6,*)
      CALL SUBFOR(LOWTRI,IN,B,N)
      CALL SUBBAC(UPTRI,IN,B,N)
      WRITE(6,*) ('SOLUTION VECTOR')
      CALL PRINTV(B,N,6)
      STOP
      END

Input data are as in Fig. 2.2(a), with output in Fig. 2.2(b).

The program simply consists of reading in N, A and b followed by three subroutine calls to LUFAC, SUBFOR and SUBBAC. The lower and upper triangular factors are printed out, followed by the solution which has overwritten the original right-hand side in b.

Number of equations      N
                         3
Coefficient matrix       (A(I,J),J=1,N),I=1,N
                         10.0   1.0   -5.0
                        -20.0   3.0   20.0
                          5.0   3.0    5.0
Right-hand side          B(I),I=1,N
                         1.0   2.0   6.0

Fig. 2.2(a) Input data for Program 2.2

** GAUSSIAN ELIMINATION USING LU FACTORISATION **

COEFFICIENT MATRIX
 .1000E+02  .1000E+01 -.5000E+01
-.2000E+02  .3000E+01  .2000E+02
 .5000E+01  .3000E+01  .5000E+01

RIGHT HAND SIDE VECTOR
 .1000E+01  .2000E+01  .6000E+01

UPPER TRIANGULAR FACTORS
 .1000E+02  .1000E+01 -.5000E+01
 .0000E+00  .5000E+01  .1000E+02
 .0000E+00  .0000E+00  .2500E+01

LOWER TRIANGULAR FACTORS
 .1000E+01  .0000E+00  .0000E+00
-.2000E+01  .1000E+01  .0000E+00
 .5000E+00  .5000E+00  .1000E+01

SOLUTION VECTOR
 .1000E+01 -.2000E+01  .1400E+01

Fig. 2.2(b) Results from Program 2.2

2.3.1 Observations on the solution process by factorisation

Comparison of outputs in Figs 2.1(b) and 2.2(b) will show that the upper triangular factor of A in Fig. 2.2(b) is precisely the same as the modified upper triangular part of A in Fig. 2.1(b). Thus Programs 2.1 and 2.2 have much in common. However, were many b vectors to be processed, it would merely be necessary to call LUFAC once in Program 2.2 and to create a small loop reading in each new b and calling SUBFOR and SUBBAC to produce the solutions. Since the time taken in LUFAC is substantially more than that taken in SUBFOR and SUBBAC, this yields great economies as N increases.

Inspection of the arithmetic in LUFAC will show that storage could be saved by overwriting A by LOWTRI and UPTRI, and this will be done in subsequent programs. It has not been implemented in this first factorisation program in an attempt to make the computational process as clear as possible.
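
The multiple right-hand side loop mentioned above might be sketched as follows, reusing the subroutine calls already present in Program 2.2 (the loop count NRHS and the repeated READ are our own assumptions about how the extra data would be supplied):

C      ONE FACTORISATION FOLLOWED BY MANY RIGHT-HAND SIDES (SKETCH)
      CALL LUFAC(A,UPTRI,LOWTRI,IN,N)
      DO 1 IRHS = 1,NRHS
      READ (5,*) (B(I),I=1,N)
      CALL SUBFOR(LOWTRI,IN,B,N)
      CALL SUBBAC(UPTRI,IN,B,N)
      CALL PRINTV(B,N,6)
    1 CONTINUE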

2.4 Equations with symmetrical coefficient matrix

If the coefficients of the A matrix satisfy the condition

aij = aji        (2.18)

that matrix is said to be symmetrical. For example the matrix

    [16   4   8]
A = [ 4   5  -4]        (2.19)
    [ 8  -4  22]

has symmetrical coefficients. If subroutine LUFAC is used to factorise this matrix, the result will be found to be

    [ 1     0    0]
L = [0.25   1    0]        (2.20)
    [0.5  -1.5   1]
and

    [16   4   8]
U = [ 0   4  -6]        (2.21)
    [ 0   0   9]

If the rows of U are then divided by uii, we have

     [1  0.25   0.5]
U1 = [0   1   -1.5 ]        (2.22)
     [0   0     1  ]

and it can be seen that lij = (U1)ji, that is U1 = LT. The scaling of U to U1 is accomplished in matrix terms by

U = D U1        (2.23)

where D is the diagonal matrix:

    [16  0  0]
D = [ 0  4  0]        (2.24)
    [ 0  0  9]

Thus, if A is a symmetrical matrix, we can write

A = LU = L D U1 = L D LT        (2.25)

Since the terms in LT (the 'transpose' of L) can be inferred from the terms in L, it will be sufficient to compute only L (or LT), involving approximately half the work in the LU factorisation of unsymmetrical matrices.
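
The computation of L and D can be arranged so that nothing but the lower triangle of A and the vector D is ever touched. The following subroutine is a minimal sketch of such an arrangement with names of our own choosing; it is not the library routine LDLT used in Program 2.3, which stores UT instead.

      SUBROUTINE SKETLD(A,D,IA,N)
C      SKETCH OF LDLT FACTORISATION OF A SYMMETRIC MATRIX
C      (ILLUSTRATION ONLY - NOT THE LIBRARY SUBROUTINE LDLT)
C      ON EXIT THE STRICT LOWER TRIANGLE OF A HOLDS L AND
C      D HOLDS THE DIAGONAL TERMS OF EQ. 2.24
      REAL A(IA,*),D(*)
      DO 4 J = 1,N
      D(J) = A(J,J)
      DO 1 K = 1,J - 1
    1 D(J) = D(J) - D(K)*A(J,K)*A(J,K)
      DO 3 I = J + 1,N
      S = A(I,J)
      DO 2 K = 1,J - 1
    2 S = S - A(I,K)*D(K)*A(J,K)
    3 A(I,J) = S/D(J)
    4 CONTINUE
      RETURN
      END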

Example 2.3

Use LDLT factorisation to solve the symmetric equations

 3x1 - 2x2 +  x3 =  3
-2x1 + 3x2 + 2x3 = -3
  x1 + 2x2 + 2x3 =  2

Solution 2.3

Factorise the coefficient matrix into upper and lower triangular matrices, hence

A = LU

[ 3  -2  1]   [ 1    0   0] [u11  u12  u13]
[-2   3  2] = [l21   1   0] [ 0   u22  u23]
[ 1   2  2]   [l31  l32  1] [ 0    0   u33]

Solving for lij and uij gives

    [  1      0    0]
L = [-0.667   1    0]
    [ 0.333  1.6   1]

    [3  -2      1   ]
U = [0  1.667  2.667]
    [0   0    -2.6  ]

Divide each row of U by the diagonal term

     [1  -0.667  0.333]
U1 = [0    1     1.6  ] = LT
     [0    0      1   ]

where

    [3    0      0  ]
D = [0  1.667    0  ]
    [0    0    -2.6 ]

Hence

U = D U1 = D LT

and

L D LT x = b

Forward-substitution:

Ly = b

[  1      0    0] {y1}   { 3}
[-0.667   1    0] {y2} = {-3}
[ 0.333  1.6   1] {y3}   { 2}

y1 = 3,  y2 = -3 + 3(0.667) = -1,  y3 = 2 - 0.333(3) + 1.6 = 2.6

Back-substitution:

D LT x = y

[3  -2      1   ] {x1}   { 3 }
[0  1.667  2.667] {x2} = {-1 }
[0   0    -2.6  ] {x3}   {2.6}

x3 = -1,  x2 = (-1 + 2.667)/1.667 = 1,  x1 = (3 + 2 + 1)/3 = 2

Program 2.3. Symmetrical equation solution by LDLT factorisation

Program 2.3 enables the solution of systems of equations with symmetrical coefficients using LDLT factorisation. It resembles Program 2.2 very closely, with the new decomposition routine LDLT replacing LUFAC. However, this time storage economies are made by overwriting the original A by its factor (UT in this case).

The nomenclature used in the program is as follows:

Simple variables
N        Number of equations to be solved

Variable length arrays
A        Symmetrical coefficient matrix (overwritten by its upper triangular factor UT)
D        Diagonal terms of eq. 2.24 stored as a vector
B        Right-hand side vector (overwritten by the solution)

PARAMETER restriction
IN >= N
      PROGRAM P23
C
C      PROGRAM 2.3   GAUSSIAN ELIMINATION
C      USING A=L*D*LT FACTORISATION    LT OVERWRITES A
C
C      ALTER NEXT LINE TO CHANGE PROBLEM SIZE
C
      PARAMETER (IN=20)
C
      REAL A(IN,IN),D(IN),B(IN)
C
      READ (5,*) N
      READ (5,*) ((A(I,J),J=1,N),I=1,N)
      READ (5,*) (B(I),I=1,N)
      WRITE(6,*) ('** GAUSSIAN ELIMINATION USING LDLT FACTORISATION **')
      WRITE(6,*) ('************** SYMMETRICAL EQUATIONS **************')
      WRITE(6,*)
      WRITE(6,*) ('COEFFICIENT MATRIX')
      CALL PRINTA(A,IN,N,N,6)
      WRITE(6,*)
      WRITE(6,*) ('RIGHT HAND SIDE VECTOR')
      CALL PRINTV(B,N,6)
      WRITE(6,*)
      CALL LDLT(A,IN,D,N)
      WRITE(6,*) ('UT MATRIX')
      DO 1 I = 1,N
    1 WRITE(6,100) (A(I,J),J=1,I)
      WRITE(6,*)
      WRITE(6,*) ('DIAGONAL TERMS')
      CALL PRINTV(D,N,6)
      CALL LDLFOR(A,IN,B,N)
      DO 2 I = 1,N
      DO 2 J = 1,N
    2 A(I,J) = A(I,J)/D(I)
      CALL SUBBAC(A,IN,B,N)
      WRITE(6,*)
      WRITE(6,*) ('SOLUTION VECTOR')
      CALL PRINTV(B,N,6)
  100 FORMAT(5E12.4)
      STOP
      END
Number of equations      N
                         3
Coefficient matrix       (A(I,J),J=1,N),I=1,N
                         16.0   4.0    8.0
                          4.0   5.0   -4.0
                          8.0  -4.0   22.0
Right-hand side          B(I),I=1,N
                         4.0   2.0   5.0

Fig. 2.3(a) Input data for Program 2.3

** GAUSSIAN ELIMINATION USING LDLT FACTORISATION **
************** SYMMETRICAL EQUATIONS **************

COEFFICIENT MATRIX
 .1600E+02  .4000E+01  .8000E+01
 .4000E+01  .5000E+01 -.4000E+01
 .8000E+01 -.4000E+01  .2200E+02

RIGHT HAND SIDE VECTOR
 .4000E+01  .2000E+01  .5000E+01

UT MATRIX
 .1600E+02
 .4000E+01  .4000E+01
 .8000E+01 -.6000E+01  .9000E+01

DIAGONAL TERMS
 .1600E+02  .4000E+01  .9000E+01

SOLUTION VECTOR
-.2500E+00  .1000E+01  .5000E+00

Fig. 2.3(b) Results from Program 2.3

The input data are in Fig. 2.3(a) and the output results in Fig. 2.3(b). The program reads in the number of equations to be solved, the full (symmetrical) coefficient matrix and the right-hand side vector.

Subroutine LDLT forms the matrix UT in eq. 2.21, which overwrites A, and the diagonal terms of matrix D in eq. 2.24, which are stored in a vector D. Both A and D are printed out.

In the program, forward-substitution is then accomplished using the special subroutine LDLFOR because UT is taking the place of L. Matrix UT can then be scaled, using D, to give U1 as in eq. 2.22. Conventional back-substitution using SUBBAC completes the process and the results are printed.

A useful by-product of this factorisation is that the determinant of the coefficient matrix A can be found as the product of the components of D, that is 16 x 4 x 9 in this case, or 576.

2.4.1 Quadratic form and positive definiteness

A 'quadratic form' is a second degree expression in n variables of the form

Q(x) = a11x1^2 + 2a12x1x2 + ... + 2a1nx1xn
     + a22x2^2 + ... + 2a2nx2xn
     + ...
     + annxn^2

Usually, terms like 2a12x1x2 are separated into two terms

2a12x1x2 = a12x1x2 + a12x2x1
         = a12x1x2 + a21x2x1

so that a symmetrical matrix version can be written

Q(x) = a11x1^2  + a12x1x2  + ... + a1nx1xn
     + a21x2x1  + a22x2^2  + ... + a2nx2xn
     + ...
     + an1xnx1  + an2xnx2  + ... + annxn^2

This quadratic form is 'positive' if it is equal to or greater than zero for all values of its variables. A positive form which is zero only for the values x1 = x2 = x3 = ... = xn = 0 is said to be 'positive definite'. A positive quadratic form which is not positive definite is said to be 'positive semi-definite'.

With our usual definition of vectors and matrices, the quadratic form can be written compactly as

Q(x) = xT A x

where A is the 'matrix of the quadratic form Q(x)'. Q(x) is 'singular' or 'nonsingular' according to whether the determinant of A is zero or nonzero.

For the quadratic form xT A x to be positive definite, the quantities

      |a11  a12|   |a11  a12  a13|         |a11 ... a1n|
a11,  |        |,  |a21  a22  a23|,  ... , | ...    ...|
      |a21  a22|   |a31  a32  a33|         |an1 ... ann|

must all be positive.

Example 2.4

Show that the quadratic form

            [ 1   2  -2] {x1}
{x1 x2 x3}  [ 2   5  -4] {x2} =   x1^2  + 2x1x2 - 2x1x3
            [-2  -4   5] {x3}   + 2x2x1 + 5x2^2 - 4x2x3
                                - 2x3x1 - 4x3x2 + 5x3^2

is positive definite.

Solution 2.4

The three quantities

     |1  2|        | 1   2  -2|
1,   |    | = 1,   | 2   5  -4| = 1
     |2  5|        |-2  -4   5|

are all positive. The quadratic form is

Q(x) = (x1 + 2x2 - 2x3)^2 + x2^2 + x3^2

which can only be zero if x2 = x3 = 0 and hence x1 = 0 as well.

2.4.2 Cholesky's method

When the coefficient matrix A is symmetric and positive definite, a slightly different factorisation can be obtained by forcing U to be the transpose of L. This factorisation can be written

A = L LT        (2.26)

or

[a11  a12  a13]   [l11   0    0 ] [l11  l21  l31]
[a21  a22  a23] = [l21  l22   0 ] [ 0   l22  l32]        (2.27)
[a31  a32  a33]   [l31  l32  l33] [ 0    0   l33]

In this case l11^2 = a11 and so l11 = sqrt(a11), and in each row a square root evaluation is necessary. For the A given by eq. 2.19 the Cholesky factors are

    [4   0  0]        [4  1   2]
L = [1   2  0],  LT = [0  2  -3]        (2.28)
    [2  -3  3]        [0  0   3]

This method is programmed in the next section, which deals with the situation when A has a special structure.
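
In code, the square-root recurrence for a full matrix might be sketched as follows (our own illustration; the library routine CHOFAC used in the next section works on banded storage instead). The lower triangle of A is overwritten by L:

      SUBROUTINE CHOSKT(A,IA,N)
C      SKETCH OF CHOLESKY FACTORISATION OF A FULL SYMMETRIC
C      POSITIVE DEFINITE MATRIX : LOWER TRIANGLE OVERWRITTEN BY L
      REAL A(IA,*)
      DO 4 J = 1,N
      DO 1 K = 1,J - 1
    1 A(J,J) = A(J,J) - A(J,K)*A(J,K)
      A(J,J) = SQRT(A(J,J))
      DO 3 I = J + 1,N
      DO 2 K = 1,J - 1
    2 A(I,J) = A(I,J) - A(I,K)*A(J,K)
    3 A(I,J) = A(I,J)/A(J,J)
    4 CONTINUE
      RETURN
      END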

Example 2.5

Use Cholesky factorisation to solve the system of equations

16x1 + 4x2 +  8x3 =  16
 4x1 + 5x2 -  4x3 =  18
 8x1 - 4x2 + 22x3 = -22

Solution 2.5

The coefficient matrix is positive definite, hence factorise as

A = L LT

[16   4   8]   [l11   0    0 ] [l11  l21  l31]
[ 4   5  -4] = [l21  l22   0 ] [ 0   l22  l32]
[ 8  -4  22]   [l31  l32  l33] [ 0    0   l33]

Solving for lij gives

    [4   0  0]
L = [1   2  0]
    [2  -3  3]

Forward-substitution:

Ly = b

[4   0  0] {y1}   { 16}
[1   2  0] {y2} = { 18}
[2  -3  3] {y3}   {-22}

y1 = 16/4 = 4,  y2 = (18 - 4)/2 = 7,  y3 = (-22 - 8 + 21)/3 = -3

Back-substitution:

LT x = y

[4  1   2] {x1}   { 4}
[0  2  -3] {x2} = { 7}
[0  0   3] {x3}   {-3}

x3 = -1,  x2 = (7 - 3)/2 = 2,  x1 = (4 - 2 + 2)/4 = 1

2.5 Banded equations

In many engineering applications, equations have to be solved when the coefficients have a 'banded' structure (see, for example, Chapter 8). This means that the coefficients are clustered around the diagonal stretching from the top left-hand corner of the matrix A to the bottom right-hand corner. A typical example is given below:

    [a11  a12  a13   0    0    0 ]
    [a21  a22  a23  a24   0    0 ]
A = [a31  a32  a33  a34  a35   0 ]        (2.29)
    [ 0   a42  a43  a44  a45  a46]
    [ 0    0   a53  a54  a55  a56]
    [ 0    0    0   a64  a65  a66]

in which the leading diagonal runs from a11 to a66. In this case there are never more than two non-zero coefficients to either side of the leading diagonal in any row. The 'bandwidth' of this system is said to be 5. If the coefficients are symmetrical, only the leading diagonal and two more coefficients per row need to be stored and operated on. In this case the 'half bandwidth' is said to be 2.

If we wish to store only the lower triangular factors of the symmetrical A, the coefficients of interest are

    [l11   0    0    0    0    0 ]
    [l21  l22   0    0    0    0 ]
L = [l31  l32  l33   0    0    0 ]        (2.30)
    [ 0   l42  l43  l44   0    0 ]
    [ 0    0   l53  l54  l55   0 ]
    [ 0    0    0   l64  l65  l66]

and it can be seen that out of 36 potential locations, only 15 are nonzero. A more economical method of storage is to keep the band in a rectangular array, by shifting the rows to obtain the structure

     [ 0    0   l11]
     [ 0   l21  l22]
LB = [l31  l32  l33]        (2.31)
     [l42  l43  l44]
     [l53  l54  l55]
     [l64  l65  l66]

This is still slightly inefficient in that zeros which are not required occur in the first two rows, but it has the advantage that there are three terms (the half bandwidth plus 1) in each row, which makes programming rather easier.
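
The bookkeeping behind eq. 2.31 is just an index shift: coefficient aij of the lower triangle (with i - j <= IW) is held in LB(I,J-I+IW+1), so the leading diagonal always occupies column IW+1. A minimal sketch of the packing, with array names of our own choosing, is:

C      SKETCH : PACK THE LOWER BAND OF A SYMMETRICAL MATRIX A
C      INTO THE RECTANGULAR STORE LB OF EQ. 2.31
      DO 1 I = 1,N
      DO 1 K = 1,IW + 1
    1 LB(I,K) = 0.0
      DO 2 I = 1,N
      DO 2 J = MAX(1,I - IW),I
    2 LB(I,J - I + IW + 1) = A(I,J)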

Program 2.4. Symmetrical equation solution by Cholesky's method

Program 2.4 illustrates the Cholesky solution process for symmetrical, banded equations. It makes use of two library routines, CHOFAC and CHOSUB, which perform the LLT factorisation and the combined forward- and back-substitutions respectively. The steps followed are just eqs 2.15 to 2.17 again with LT in place of U.

      PROGRAM P24
C
C      PROGRAM 2.4   CHOLESKY FACTORISATION
C      WITH BANDED STORAGE
C
C      ALTER NEXT LINE TO CHANGE PROBLEM SIZE
C
      PARAMETER (IN=20,IIW=10)
C
      REAL LB(IN,IIW),B(IN)
C
      READ (5,*) N,IW
      IWP1 = IW + 1
      READ (5,*) ((LB(I,J),J=1,IWP1),I=1,N)
      READ (5,*) (B(I),I=1,N)
      WRITE(6,*) ('*** CHOLESKY FACTORISATION WITH BANDED STORAGE ***')
      WRITE(6,*) ('************ SYMMETRICAL EQUATIONS ***************')
      WRITE(6,*)
      WRITE(6,*) ('BANDED COEFFICIENT MATRIX')
      DO 1 I = 1,N
    1 WRITE(6,100) (LB(I,J),J=1,IWP1)
      WRITE(6,*)
      WRITE(6,*) ('RIGHT HAND SIDE VECTOR')
      CALL PRINTV(B,N,6)
      WRITE(6,*)
      CALL CHOFAC(LB,IN,N,IW)
      WRITE(6,*) ('LT IN BAND FORM')
      CALL PRINTA(LB,IN,N,IWP1,6)
      WRITE(6,*)
      CALL CHOSUB(LB,IN,B,N,IW)
      WRITE(6,*) ('SOLUTION VECTOR')
      CALL PRINTV(B,N,6)
  100 FORMAT(5E12.4)
      STOP
      END

Number of equations      N   IW
and half bandwidth       3    2

Coefficient matrix       (LB(I,J),J=1,IW+1),I=1,N
                         0.0   0.0   16.0
                         0.0   4.0    5.0
                         8.0  -4.0   22.0
Right-hand side          B(I),I=1,N
                         4.0   2.0   5.0

Fig. 2.4(a) Input data for Program 2.4
*** CHOLESKY FACTORISATION WITH BANDED STORAGE ***
************ SYMMETRICAL EQUATIONS ***************

BANDED COEFFICIENT MATRIX
 .0000E+00  .0000E+00  .1600E+02
 .0000E+00  .4000E+01  .5000E+01
 .8000E+01 -.4000E+01  .2200E+02

RIGHT HAND SIDE VECTOR
 .4000E+01  .2000E+01  .5000E+01

LT IN BAND FORM
 .0000E+00  .0000E+00  .4000E+01
 .0000E+00  .1000E+01  .2000E+01
 .2000E+01 -.3000E+01  .3000E+01

SOLUTION VECTOR
-.2500E+00  .1000E+01  .5000E+00

Fig. 2.4(b) Results from Program 2.4


Input data are listed in Fig. 2.4(a) and output results in Fig. 2.4(b). The nomenclature is as follows:

Simple variables
N        Number of equations to be solved
IW       Half bandwidth of symmetrical banded coefficient matrix

Variable size arrays
LB       Lower triangular components of A stored as a rectangular band matrix with leading diagonal terms A(K,K) stored in LB(K,IW+1)
B        Right-hand side vector - overwritten by solution

PARAMETER restrictions
IN >= N
IIW >= IW

The program is extremely simple given the subroutine library. The number of equations and half bandwidth are read in, followed by the lower 'half' of A in the appropriate form of eq. 2.31. CHOFAC then forms the factor, which is printed out, and a call to CHOSUB completes the substitution phase; the results, held in B, are printed.

2.6 Compact storage for variable bandwidths

It is quite common to encounter symmetrical equation coefficient matrices which have the following structure:

    [a11   0    0    0   a15   0 ]
    [ 0   a22  a23   0    0    0 ]
A = [ 0   a32  a33   0   a35  a36]        (2.32)
    [ 0    0    0   a44  a45   0 ]
    [a51   0   a53  a54  a55  a56]
    [ 0    0   a63   0   a65  a66]

The system is banded, with a half bandwidth of 4, but also has a significant number of zeros within the band. If the progress of factorisation is monitored, it will be found that, in the lower triangle, zeros lying closer to the diagonal than a nonzero coefficient in that row will become nonzero during the calculation (will be 'filled in') whereas zeros lying further away from the diagonal than the outermost nonzero coefficient in that row will remain zero throughout the calculation and need not be stored or processed.

In the upper triangle, the same can be said of columns. The outermost extent of the coefficients which need to be processed forms a stepped profile over the columns and, because of its appearance in the upper triangle, this profile is sometimes called the 'skyline' of the coefficients.

In the case of the coefficients in eq. 2.32 only 14 actually need be stored, compared with 30 in a fixed bandwidth method such as that used in Program 2.4. The penalty incurred by using a variable bandwidth technique is that a count must be kept of the number of coefficients to be processed in each row (column).

Program 2.5. 'Skyline' solution of symmetrical equations by Cholesky's method

Program 2.5 illustrates the procedure, using a Cholesky factorisation. The 14 components to be processed in eq. 2.32 would be stored in a vector as follows:

A = (a11  a22  a23  a33  a44  a15  0  a35  a45  a55  a36  0  a56  a66)        (2.33)

      PROGRAM P25
C
C      PROGRAM 2.5   CHOLESKY FACTORISATION
C      SKYLINE STORAGE OF A AS A VECTOR
C
C      ALTER NEXT LINE TO CHANGE PROBLEM SIZE
C
      PARAMETER (IA=50,IB=20)
C
      REAL A(IA),B(IB)
      INTEGER KDIAG(IB)
C
      READ (5,*) N
      READ (5,*) (KDIAG(I),I=1,N)
      IR = KDIAG(N)
      READ (5,*) (A(I),I=1,IR)
      READ (5,*) (B(I),I=1,N)
      WRITE(6,*) ('** CHOLESKY FACTORISATION WITH SKYLINE STORAGE ***')
      WRITE(6,*) ('*********** SYMMETRICAL EQUATIONS ****************')
      WRITE(6,*)
      WRITE(6,*) ('COEFFICIENT VECTOR')
      CALL PRINTV(A,IR,6)
      WRITE(6,*)
      WRITE(6,*) ('DIAGONAL LOCATIONS')
      WRITE(6,100) (KDIAG(I),I=1,N)
      WRITE(6,*)
      WRITE(6,*) ('RIGHT HAND SIDE VECTOR')
      CALL PRINTV(B,N,6)
      WRITE(6,*)
      CALL SKYFAC(A,N,KDIAG)
      CALL SKYSUB(A,B,N,KDIAG)
      WRITE(6,*) ('SOLUTION VECTOR')
      CALL PRINTV(B,N,6)
  100 FORMAT(10I6)
      STOP
      END
The extra information about the number of components per row (column) is kept in an integer vector called KDIAG. This could contain the number of components per row, or a running total, representing the 'address' of each leading diagonal component of the original array in A, which is the method used in this program. Thus the KDIAG associated with eqs 2.32 and 2.33 would be

KDIAG = {1  2  4  5  10  14}        (2.34)

reflecting that there are 1, 1, 2, 1, 5 and 4 coefficients to be processed in rows 1 to 6 respectively.
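
With this convention the diagonal term aii lives at position KDIAG(I) of the vector, and an upper triangle coefficient aij (with i <= j) lives at position KDIAG(J) - J + I, provided the stored segment of column j reaches up as far as row i. A hypothetical helper function (ours, not part of the library) makes the addressing explicit:

      REAL FUNCTION GETA(A,KDIAG,I,J)
C      SKETCH : FETCH A(I,J), I.LE.J, FROM THE SKYLINE VECTOR
      REAL A(*)
      INTEGER KDIAG(*)
C      ITOP IS THE HIGHEST ROW STORED IN COLUMN J
      IF (J.EQ.1) THEN
      ITOP = 1
      ELSE
      ITOP = J - (KDIAG(J) - KDIAG(J-1)) + 1
      END IF
      IF (I.LT.ITOP) THEN
C      ABOVE THE SKYLINE : NOT STORED, KNOWN TO BE ZERO
      GETA = 0.0
      ELSE
      GETA = A(KDIAG(J) - J + I)
      END IF
      RETURN
      END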
To test the program, we return to our familiar set of three simultaneous equations, leading to the input data of Fig. 2.5(a) and the output results of Fig. 2.5(b).

Number of equations      N
                         3
Skyline locations        KDIAG(I),I=1,N
                         1  3  6
Coefficient vector       (A(I),I=1,IR)
                         16.0  4.0  5.0  8.0  -4.0  22.0
Right-hand side          B(I),I=1,N
                         4.0  2.0  5.0

Fig. 2.5(a) Input data for Program 2.5

** CHOLESKY FACTORISATION WITH SKYLINE STORAGE ***
*********** SYMMETRICAL EQUATIONS ****************

COEFFICIENT VECTOR
 .1600E+02  .4000E+01  .5000E+01  .8000E+01 -.4000E+01  .2200E+02

DIAGONAL LOCATIONS
     1     3     6

RIGHT HAND SIDE VECTOR
 .4000E+01  .2000E+01  .5000E+01

SOLUTION VECTOR
-.2500E+00  .1000E+01  .5000E+00

Fig. 2.5(b) Results from Program 2.5

The nomenclature used is as follows:

Simple variables
N        The number of equations to be solved
IR       The total number of coefficients to be processed

Variable length arrays
A        Equation coefficients stored as a vector
B        Right-hand side - overwritten by the solution
KDIAG    'Address' vector holding positions of diagonal terms

PARAMETER restrictions
IA >= IR
IB >= N

In the program, N is first read in, followed by the 'address' vector KDIAG as described in eq. 2.34.
Then the equation coefficients are read in, row by row, followed by the right-hand side vector. Calls to the 'skyline' factorisation and substitution routines SKYFAC and SKYSUB complete the solution process and the results, stored in B, can be printed.

2.7 Pivoting

In the solution of unsymmetrical equations using conventional Gaussian elimination (Program 2.1) or LU factorisation (Program 2.2) we side-stepped the problem of what to do should a leading diagonal component of the coefficient matrix be zero to start with, or become zero during the solution process. In the next program, we illustrate how to cope with this by using a row interchange technique usually called 'pivoting'. We saw that the crucial terms or pivots lay on the leading diagonal, and so the technique involves searching for the largest (absolute) coefficient in the rows not so far processed, and moving it into the leading diagonal position.

For example, returning to eqs 2.3, it can be seen that the largest coefficients occur initially in row 2, and so this would be interchanged with row 1 to yield

20x3 + 3x2 - 20x1 = 2
-5x3 +  x2 + 10x1 = 1        (2.35)
 5x3 + 3x2 +  5x1 = 6

After one step of elimination (or factorisation) we would have

20x3 +    3x2 - 20x1 = 2
       1.75x2 +  5x1 = 1.5        (2.36)
       2.25x2 + 10x1 = 5.5

The largest coefficient in rows 2 to N is now in row 3 and so this would be interchanged with row 2 to yield

20x3 - 20x1 +    3x2 = 2
       10x1 + 2.25x2 = 5.5        (2.37)
        5x1 + 1.75x2 = 1.5

leading to the final factorisation

20x3 - 20x1 +     3x2 =  2
       10x1 +  2.25x2 =  5.5        (2.38)
              0.625x2 = -1.25

and the solution as before, but in the order

(x2, x1, x3) = (-2.0, 1.0, 1.4)

The process just described is actually called 'partial pivoting'. An extension could be made to 'total pivoting' by extending the search for the maximum modulus element to both rows and columns.
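
Inside an elimination program, the search and interchange at step k can be sketched as below (a fragment for insertion in the elimination loop of Program 2.1, with the temporary T our own choice of name); subroutine LUPFAC in the next program does the equivalent job while also recording the new row order:

C      SKETCH OF A PARTIAL PIVOT SEARCH AT ELIMINATION STEP K
      IBIG = K
      DO 1 I = K + 1,N
      IF (ABS(A(I,K)).GT.ABS(A(IBIG,K))) IBIG = I
    1 CONTINUE
C      SWAP ROWS K AND IBIG OF A AND B IF A BIGGER PIVOT WAS FOUND
      IF (IBIG.NE.K) THEN
      DO 2 J = 1,N
      T = A(K,J)
      A(K,J) = A(IBIG,J)
    2 A(IBIG,J) = T
      T = B(K)
      B(K) = B(IBIG)
      B(IBIG) = T
      END IF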
Program 2.6. Equation solution by LU factorisation with pivoting

In this program the interchanges are carried out in a special LU factorisation using subroutine LUPFAC. This stores the new row order in an integer array ROW for use in the substitution phase, carried out by subroutine LUPSOL. The factorisation arithmetic remains the same.

      PROGRAM P26
C
C      PROGRAM 2.6   LU FACTORISATION USING PIVOTING
C
C      ALTER NEXT LINE TO CHANGE PROBLEM SIZE
C
      PARAMETER (IN=20)
C
      REAL A(IN,IN),B(IN),X(IN)
      INTEGER ROW(IN)
C
      READ (5,*) N
      READ (5,*) ((A(I,J),J=1,N),I=1,N)
      READ (5,*) (B(I),I=1,N)
      WRITE(6,*) ('****** LU FACTORISATION WITH PIVOTING ******')
      WRITE(6,*)
      WRITE(6,*) ('COEFFICIENT MATRIX')
      CALL PRINTA(A,IN,N,N,6)
      WRITE(6,*)
      WRITE(6,*) ('RIGHT HAND SIDE VECTOR')
      CALL PRINTV(B,N,6)
      WRITE(6,*)
      CALL LUPFAC(A,IN,N,ROW)
      CALL LUPSOL(A,IN,B,X,N,ROW)
      WRITE(6,*) ('REORDERED ROW NUMBERS')
      WRITE(6,1000) (ROW(I),I=1,N)
      WRITE(6,*)
      WRITE(6,*) ('SOLUTION VECTOR')
      CALL PRINTV(X,N,6)
 1000 FORMAT(10I5)
      STOP
      END

Number of equations      N
                         3
Coefficient matrix       (A(I,J),J=1,N),I=1,N
                         10.0   1.0   -5.0
                        -20.0   3.0   20.0
                          5.0   3.0    5.0
Right-hand side          B(I),I=1,N
                         1.0   2.0   6.0

Fig. 2.6(a) Input data for Program 2.6
****** LU FACTORISATION WITH PIVOTING ******

COEFFICIENT MATRIX
 .1000E+02  .1000E+01 -.5000E+01
-.2000E+02  .3000E+01  .2000E+02
 .5000E+01  .3000E+01  .5000E+01

RIGHT HAND SIDE VECTOR
 .1000E+01  .2000E+01  .6000E+01

REORDERED ROW NUMBERS
    2    3    1

SOLUTION VECTOR
 .1000E+01 -.2000E+01  .1400E+01

Fig. 2.6(b) Results from Program 2.6

Input data are shown in Fig. 2.6(a) and output results in Fig. 2.6(b). The nomenclature used is as follows:

Simple variables
N        The number of equations to be solved

Variable length arrays
A        The matrix of coefficients (overwritten by the factors L and U)
B        The right-hand side
X        The solution
ROW      The new row ordering

PARAMETER restriction
IN >= N

The program reads in N, A and B, calls the factorisation and substitution routines and outputs the new row order and the solution.

For equations with unsymmetrical coefficients, pivoting is to be recommended. Even if a pivot does not become zero in a conventional elimination, a 'small' pivot is undesirable because of potential 'round-off' errors (see Chapter 1). Should the pivot become 'zero', the set of equations is singular or very nearly so. Fortunately, for equations with symmetrical coefficients which are 'positive definite' (see Section 2.4.1), pivoting is not necessary and Programs 2.3 to 2.5 can be used.

2.7.1 Ill-conditioning

Despite our best efforts to select optimum pivots, a set of linear equations may not be solvable accurately by any of the methods we have just described, especially when 'hand' calculation techniques are employed. When this is so, the set of equations is said to be 'ill-conditioned'. Fortunately, even modern electronic 'hand' calculators are capable of working to many decimal places of accuracy and so the conditioning of sets of equations is not of such great importance as it was when 'hand' calculations implied only a few decimal places of accuracy.

A very well-known example of ill-conditioned equations arises in the form of the 'Hilbert matrix' obtained from polynomial curve fitting. The set of equations so derived takes the form

[1/2  1/3  1/4  1/5  1/6 ] {x1}   {1}
[1/3  1/4  1/5  1/6  1/7 ] {x2}   {1}
[1/4  1/5  1/6  1/7  1/8 ] {x3} = {1}
[1/5  1/6  1/7  1/8  1/9 ] {x4}   {1}
[1/6  1/7  1/8  1/9  1/10] {x5}   {1}
In Table 2.1, numerical solutions rounded off to 4, 5, 6, 7 and 8 decimal places are compared with the true solution.

Table 2.1

                     x1        x2         x3         x4         x5
True solution        30       -420       1680      -2520       1260
4 decimal places    -8.8       90.5       ...        ...        ...
5 decimal places    55.3     -746.5     2854.9    -4105.2       ...
6 decimal places    13.1     -229.9     1442.7    -1696.4       ...
7 decimal places    25.9     -373.7     1524.4    -2318.6     1171.4
8 decimal places    29.65    -416.02    1666.62   -2502.69    1252.39

It can be seen that something like 8 decimal place accuracy is necessary for an answer adequate for engineering purposes. When using 'mainframe' computers with 32-bit words, engineers are advised to use 'double precision', i.e. 64-bit words, which should achieve something like 15 decimal place accuracy and be sufficient for most practical purposes.
Although mathematical measures of ill-conditioning, called 'condition numbers', can be computed, the process is expensive and not used in engineering practice. By far the best checks on solution accuracy are independent checks on the physical consistency of the results.

2.8 Equations with prescribed 'variables'

Consider a common engineering occurrence of the system of equations Ax = b where A represents the stiffness of an elastic solid, b the forces applied to the solid and x the resulting displacements. In some cases we will know all the components of b and have to calculate all the components of x, but in others we may be told that some of the displacements are known in advance. In this event we could eliminate the known components of x from the system of equations, but this involves some quite elaborate coding to reshuffle the rows and columns of A. A simple scaling procedure can be used instead to produce the desired result without modifying the structure of A.

Suppose that in our symmetrical system

     [16   4   8] {x1}
Ax = [ 4   5  -4] {x2} = b        (2.39)
     [ 8  -4  22] {x3}

we know that x2 has the value 5, while b1 and b3 have the values 4 and 5 as before. We force x2 to be 5 by scaling a22 up by adding a large number to it, say 10^10. The purpose of this is to make the diagonal term in row 2 much larger than the off-diagonal terms. That is, row 2 reads

4x1 + (5 + 10^10)x2 - 4x3 = b2        (2.40)

and the term in x2 swamps the other two terms. To get the desired result of x2 = 5, it is clear that on the right-hand side we should put the value (5 + 10^10) × 5. We could also use this technique to make x2 'zero' by putting any number which is orders of magnitude less than 10^10 in the right-hand side b2 position.
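
For a full (unpacked) matrix the whole technique amounts to two statements, sketched here with variable names of our own (Program 2.7 below does the same thing on the skyline vector, locating the diagonal through KDIAG):

C      'BIG SPRING' SKETCH : PRESCRIBE X(NO) = VAL
      A(NO,NO) = A(NO,NO) + 1.E10
      B(NO) = A(NO,NO)*VAL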

Program 2.7. Equation solution with prescribed 'variables'

This program illustrates the process, using as a starting point Program 2.5. The input (see Fig. 2.7(a)) and program description are very similar to those for Program 2.5. Additional data involve the number of values to be fixed (NFIX), followed by the particular number of each fixed variable (NO(I)) and the required value (VAL(I)). In the case considered here, one variable is to be fixed, namely the value x2 which is equated to 5.

The leading diagonal terms in the compacted storage vector A can be located using KDIAG as shown and the appropriate entry in A has the number 10^10 added to it.
      PROGRAM P27
C
C      PROGRAM 2.7   CHOLESKY FACTORISATION
C      WITH SKYLINE STORAGE
C      PRESCRIBED VARIABLES BY 'BIG SPRINGS'
C
C      ALTER NEXT LINE TO CHANGE PROBLEM SIZE
C
      PARAMETER (IA=50,IB=20,INO=5)
C
      REAL A(IA),B(IB),VAL(INO)
      INTEGER KDIAG(IB),NO(INO)
C
      READ (5,*) N
      READ (5,*) (KDIAG(I),I=1,N)
      IR = KDIAG(N)
      READ (5,*) (A(I),I=1,IR)
      READ (5,*) (B(I),I=1,N)
      READ (5,*) NFIX
      READ (5,*) (NO(I),VAL(I),I=1,NFIX)
      WRITE(6,*) ('** CHOLESKY FACTORISATION WITH SKYLINE STORAGE ***')
      WRITE(6,*) ('*********** SYMMETRICAL EQUATIONS ****************')
      WRITE(6,*) ('*********** PRESCRIBED VARIABLES *****************')
      WRITE(6,*)
      WRITE(6,*) ('COEFFICIENT VECTOR')
      CALL PRINTV(A,IR,6)
      WRITE(6,*)
      WRITE(6,*) ('DIAGONAL LOCATIONS')
      WRITE(6,100) (KDIAG(I),I=1,N)
      WRITE(6,*)
      WRITE(6,*) ('RIGHT HAND SIDE VECTOR')
      CALL PRINTV(B,N,6)
      WRITE(6,*)
      WRITE(6,*) ('FREEDOM NUMBER AND PRESCRIBED VALUE')
      DO 1 I = 1,NFIX
    1 WRITE(6,200) NO(I),VAL(I)
      WRITE(6,*)
      DO 2 I = 1,NFIX
      A(KDIAG(NO(I))) = A(KDIAG(NO(I))) + 1.E10
    2 B(NO(I)) = A(KDIAG(NO(I)))*VAL(I)
      CALL SKYFAC(A,N,KDIAG)
      CALL SKYSUB(A,B,N,KDIAG)
      WRITE(6,*) ('SOLUTION VECTOR')
      CALL PRINTV(B,N,6)
  100 FORMAT(10I6)
  200 FORMAT(I5,E12.4)
      STOP
      END
Number of equations      N
                         3
Skyline locations        KDIAG(I),I=1,N
                         1  3  6
Coefficient vector       (A(I),I=1,IR)
                         16.0  4.0  5.0  8.0  -4.0  22.0
Right-hand side          B(I),I=1,N
                         4.0  2.0  5.0
Number of                NFIX
prescribed variables     1
Number and               NO(I),VAL(I),I=1,NFIX
magnitude of each        2  5.0

Fig. 2.7(a) Input data for Program 2.7

** CHOLESKY FACTORISATION WITH SKYLINE STORAGE ***
*********** SYMMETRICAL EQUATIONS ****************
*********** PRESCRIBED VARIABLES *****************

COEFFICIENT VECTOR
 .1600E+02  .4000E+01  .5000E+01  .8000E+01 -.4000E+01  .2200E+02

DIAGONAL LOCATIONS
     1     3     6

RIGHT HAND SIDE VECTOR
 .4000E+01  .2000E+01  .5000E+01

FREEDOM NUMBER AND PRESCRIBED VALUE
    2   .5000E+01

SOLUTION VECTOR
-.1917E+01  .5000E+01  .1833E+01

Fig. 2.7(b) Results from Program 2.7

The appropriate term in B is then set to the augmented diagonal multiplied by the desired x2, namely 5.0. The results are in Fig. 2.7(b).

2.9 Iterative methods

In previous sections of this chapter, 'direct' solution methods for systems of linear equations have been described. By 'direct' we meant that the solution method led to the answer in a fixed number of arithmetic operations, and that the results were as 'exact' as the computer hardware permitted, differing from the true solution only by the round-off inherent in the machine precision.

The remaining sections of this chapter deal with 'iterative' methods of solution. These begin with a first guess at the answer, which is then successively corrected by a repeated, or 'iterative', process. In contrast to direct techniques, no guarantee can be made in advance of the number of operations required, and different iterative processes will be found to converge to the true solution at quite different rates.

The question whether convergence will be achieved at all is outside the scope of this book. In general it can be stated that equations with a coefficient matrix which is 'diagonally dominant' are likely to have convergent iterative solutions. Roughly defined, this means that the diagonal term in any row is greater than the sum of the off-diagonal terms. As we saw earlier, it is fortunate that in many engineering applications this diagonal dominance exists.

Since it is very unlikely that the user's starting guess will be accurate, the solution will normally become gradually closer and closer to the converged one, and the opportunity is afforded for user intervention in that a decision whether the solution is 'close enough' to the desired one is possible. For example, if the solution after an iteration has in some sense hardly changed from the previous iteration, the process can be terminated. Thus, in general terms, the number of arithmetic operations to reach a solution is not known in advance for these methods.

2.9.1 The iterative process

Suppose we are looking for an iterative method of solving the equations

2x1 +  x2 = 4
 x1 + 2x2 = 5        (2.41)

The central feature of iterative processes is to arrange to have the unknowns occurring on both sides of the equation. For example, we could write eqs 2.41 as

2x1 = 4 - x2
2x2 = 5 - x1        (2.42)

and then make a guess at (x1, x2) on the right-hand side to see if it fits the left-hand side. For this purpose it would be more convenient to have the simple variables x1 and x2 on the left-hand side, so we can write

x1 = 2   - 0.5x2
x2 = 2.5 - 0.5x1        (2.43)

Now we make a guess at (x1, x2) on the right-hand side, say (1.0, 1.0). Equations 2.43 then yield

{x1}   {1.5}
{x2} = {2.0}        (2.44)

so the guess was wrong. However, if this simple iteration process were to converge, we could speculate that eqs 2.44 represent a better solution than the initial guess, and substitute it on the right-hand side of eqs 2.43. This gives

{x1}   {1.0 }
{x2} = {1.75}        (2.45)
and the process is continued, with the results

Iteration number     0     1     2      3       4
x1                  1.0   1.5   1.0   1.125   1.0        (2.46)
x2                  1.0   2.0   1.75  2.0     1.9375

In this trivial example, it is clear that the iterative process is leading to convergence to the true solution x1 = 1.0, x2 = 2.0.

A general statement of eqs 2.43 for systems of equations Ax = b is obtained by scaling the coefficients of A and b such that the leading diagonal terms of A are unity

[ 1   a12  a13] {x1}   {b1}
[a21   1   a23] {x2} = {b2}        (2.47)
[a31  a32   1 ] {x3}   {b3}

The coefficient matrix may now be split into

A = I - L - U        (2.48)

in which

    [  0     0   0]
L = [-a21    0   0]        (2.49)
    [-a31  -a32  0]

and

    [0  -a12  -a13]
U = [0    0   -a23]        (2.50)
    [0    0     0 ]

Note that these triangular matrices L and U have nothing to do with the 'LU' factors we met previously in direct methods. They are additive components of A, not factors. With these definitions, the system Ax = b may be written

(I - L - U)x = b        (2.51)

or

x = b + (L + U)x        (2.52)

in which the unknowns now appear on both sides of the equations, leading to the iterative scheme

x^(k+1) = b + (L + U)x^k        (2.53)
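
Written out component by component on the unscaled A and b, one sweep of eq. 2.53 is simply the following (a sketch with names of our own; Program 2.8 instead scales A first and uses matrix-vector library calls):

C      SKETCH OF ONE JACOBI SWEEP OF EQ. 2.53
      DO 1 I = 1,N
      S = B(I)
      DO 2 J = 1,N
      IF (J.NE.I) S = S - A(I,J)*X(J)
    2 CONTINUE
    1 XNEW(I) = S/A(I,I)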
Example 2.6

Solve the equations

16x1 + 4x2 +  8x3 = 4
 4x1 + 5x2 -  4x3 = 2
 8x1 - 4x2 + 22x3 = 5

by simple iteration.

Solution 2.6

Divide each equation by the diagonal term and rearrange as

x^(k+1) = b + (L + U)x^k

hence

{x1}^(k+1)   {0.25  }   [   0      -0.25   -0.5 ] {x1}^k
{x2}       = {0.4   } + [-0.8        0      0.8 ] {x2}
{x3}         {0.2273}   [-0.3636   0.1818    0  ] {x3}

Let

{x1}^0   {1}
{x2}   = {1}
{x3}     {1}

hence

{x1}^1   {-0.5   }
{x2}   = { 0.4   }
{x3}     { 0.0455}

and

{x1}^2   {0.1273}          {x1}   {-0.25}
{x2}   = {0.8364},  ... ,  {x2} = { 1.0 }  (after many iterations)
{x3}     {0.4818}          {x3}   { 0.5 }

In this case, convergence is very slow (see Program 2.8). To operate this scheme on a computer, we can see that the library modules we shall need are merely a matrix-vector multiplication routine to compute (L + U)x and a vector addition to add b to the result of the multiply. A check on convergence would also be helpful. This method is called 'Jacobi iteration'.
Program 2.8. Equation solution by Jacobi iteration

Our first program involving an iteration method uses the straightforward Jacobi technique just described. The chosen set of equations has a symmetric coefficient matrix, with a strong leading diagonal

16x1 + 4x2 +  8x3 = 4
 4x1 + 5x2 -  4x3 = 2        (2.54)
 8x1 - 4x2 + 22x3 = 5

The convergence criterion adopted is a simple one, namely that the change in the largest component of the solution vector shall not be greater than a small proportion of that component, called the 'tolerance'.

      PROGRAM P28
C
C      PROGRAM 2.8   JACOBI ITERATION
C
C      ALTER NEXT LINE TO CHANGE PROBLEM SIZE
C
      PARAMETER (IN=20)
C
      REAL A(IN,IN),LPU(IN,IN),B(IN),X(IN),XNEW(IN),TEMP(IN)
C
      READ (5,*) N,((A(I,J),J=1,N),I=1,N)
      READ (5,*) (B(I),I=1,N)
      READ (5,*) (X(I),I=1,N)
      READ (5,*) TOL,ITS
      WRITE(6,*) ('*************** JACOBI ITERATION *****************')
      WRITE(6,*)
      WRITE(6,*) ('COEFFICIENT MATRIX')
      CALL PRINTA(A,IN,N,N,6)
      WRITE(6,*)
      WRITE(6,*) ('RIGHT HAND SIDE VECTOR')
      CALL PRINTV(B,N,6)
      WRITE(6,*)
      DO 1 I = 1,N
      DIAG = A(I,I)
      DO 2 J = 1,N
    2 A(I,J) = A(I,J)/DIAG
    1 B(I) = B(I)/DIAG
      CALL NULL(LPU,IN,N,N)
      DO 3 I = 1,N
      DO 3 J = 1,N
    3 IF (I.NE.J) LPU(I,J) = -A(I,J)
      WRITE(6,*) ('FIRST FEW ITERATIONS')
      ITERS = 0
    4 ITERS = ITERS + 1
      CALL MVMULT(LPU,IN,X,N,N,TEMP)
      CALL VECADD(B,TEMP,XNEW,N)
      CALL CHECON(XNEW,X,N,TOL,ICON)
      IF (ITERS.LE.5) CALL PRINTV(X,N,6)
      IF (ICON.EQ.0.AND.ITERS.LT.ITS) GO TO 4
      WRITE(6,*)
      WRITE(6,*) ('ITERATIONS TO CONVERGENCE')
      WRITE(6,*) ITERS
      WRITE(6,*)
      WRITE(6,*) ('SOLUTION VECTOR')
      CALL PRINTV(X,N,6)
      STOP
      END
Number of equations      N
                         3
Coefficient matrix       (A(I,J),J=1,N),I=1,N
                         16.0   4.0    8.0
                          4.0   5.0   -4.0
                          8.0  -4.0   22.0
Right-hand side          B(I),I=1,N
                         4.0   2.0   5.0
Initial guess of         X(I),I=1,N
solution                 1.0   1.0   1.0
Tolerance                TOL
                         1.E-5
Iteration limit          ITS
                         100

Fig. 2.8(a) Input data for Program 2.8
*************** JACOBI ITERATION *****************

COEFFICIENT MATRIX
 .1600E+02  .4000E+01  .8000E+01
 .4000E+01  .5000E+01 -.4000E+01
 .8000E+01 -.4000E+01  .2200E+02

RIGHT HAND SIDE VECTOR
 .4000E+01  .2000E+01  .5000E+01

FIRST FEW ITERATIONS
-.5000E+00  .4000E+00  .4545E-01
 .1273E+00  .8364E+00  .4818E+00
-.2000E+00  .6836E+00  .3331E+00
-.8744E-01  .8264E+00  .4243E+00
-.1688E+00  .8094E+00  .4093E+00

ITERATIONS TO CONVERGENCE
51

SOLUTION VECTOR
-.2500E+00  .1000E+01  .5000E+00

Fig. 2.8(b) Results from Program 2.8

The initial guessed solution is (x1, x2, x3) = (1.0, 1.0, 1.0). The program data are listed in Fig. 2.8(a). Apart from integer counters and temporary storage the variables used are

Simple variables
N        Number of equations to be solved
TOL      Convergence tolerance, defined above
ITERS    Number of iterations carried out
ITS      Maximum number of iterations

Variable length arrays
A        Coefficient matrix of equations
LPU      The matrix L + U (eqs 2.53)
B        Right-hand side vector
X, XNEW  The solution to Ax = b on successive iterations

PARAMETER restriction
IN >= N

The number of equations to be solved, the coefficient matrix, the right-hand side vector and the initial guess vector are read in, followed by the convergence tolerance and an upper limit to the number of iterations.

The program then scales the original coefficients and right-hand sides by dividing by the diagonal coefficient in each row (called DIAG in the program). The library routine NULL is used to fill the matrix LPU with zeros, and then LPU, as defined in eqs 2.53, is formed.

The guessed solution, X, is input as data and the iteration loop entered. As called for by eqs 2.53 the library routine MVMULT multiplies LPU by the current solution X and puts the temporary result in TEMP. This can then be added to the right-hand side B using library routine VECADD, to give a new solution called XNEW. This can be compared with X by using the convergence library subroutine CHECON. If XNEW agrees with X to the specified tolerance, ICON is set to 1 and the iterations terminated. Note that the CHECON subroutine also updates X to XNEW.

The program prints out the number of iterations taken and the values of the solution vector for the first 5 iterations and at convergence. This output is shown in Fig. 2.8(b), illustrating that 51 iterations are required to give a result accurate to about 4 places of decimals using this simple technique.

2.9.2 Very sparse systems

An obvious advantage of iteration methods occurs when the set of equation coefficients is very 'sparse', that is there are very few nonzero entries. In the case of direct methods, we saw that 'fill-in' occurred in the coefficient matrix during solution, except outside the 'skyline' (Section 2.6), whereas in iterative methods, the coefficient matrix retains its sparsity pattern throughout the calculation. In such cases, one would not wish to retain the matrix form for A in programs like Program 2.8 but rather use specialised coding involving pointers to the nonzero terms in the coefficient matrix, as sketched below.
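
As an indication of what such coding might look like, the sketch below multiplies a sparse matrix by a vector when only the nonzeros are stored row by row; the layout (VA for the nonzero values, JA for their column numbers, IA(I) for the position in VA where row I starts) is a hypothetical one of our own, not a convention used elsewhere in this book:

      SUBROUTINE SPMVM(VA,JA,IA,X,Y,N)
C      SKETCH : Y = A*X WITH ONLY THE NONZEROS OF A STORED
      REAL VA(*),X(*),Y(*)
      INTEGER JA(*),IA(*)
      DO 2 I = 1,N
      Y(I) = 0.0
C      RUN ALONG THE STORED ENTRIES OF ROW I
      DO 1 K = IA(I),IA(I+1) - 1
    1 Y(I) = Y(I) + VA(K)*X(JA(K))
    2 CONTINUE
      RETURN
      END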

2.9.3 The Gauss-Seidel method

In Jacobi's method, all of the components of x^(k+1) (called XNEW in Program 2.8) are evaluated using all the components of x^k (X in the program). Thus the new information is obtained entirely in terms of the old.

However, after the first row of eqs 2.53 has been evaluated, there is a new x1^(k+1) available which is presumably a better approximation to the solution than x1^k. In the Gauss-Seidel technique, the new value x1^(k+1) is immediately substituted for x1^k in the 'old' solution x. After evaluation of row 2, x2^(k+1) is substituted for x2^k and so on. The convergence of this process operating on eqs 2.41 is

Iteration number     0     1      2       3
x1                  1.0   1.5    1.125   1.03125        (2.55)
x2                  1.0   1.75   1.9375  1.98437

which is clearly better than that in eqs 2.46.

The Gauss-Seidel iteration can be written

(I - L)x^(k+1) = b + Ux^k        (2.56)

which is possible because in the operation Ux^k, the evaluation of row i does not depend on xi, x(i-1), etc., and so they can be updated as they become available.

It will be obvious by comparing eqs 2.56 with 2.53 that for non-sparse systems the right-hand side can be computed again by using a matrix-vector multiply and a vector addition, leading to

(I - L)x^(k+1) = y        (2.57)

These equations take the form

[ 1    0   0] {x1}^(k+1)   {y1}
[a21   1   0] {x2}       = {y2}        (2.58)
[a31  a32  1] {x3}         {y3}

which is just one of the processes we encountered in 'direct' methods when we called it 'forward-substitution'. Library subroutine SUBFOR is available to carry out this task.
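
Because each new component can overwrite the old one at once, a Gauss-Seidel sweep needs no second solution vector at all. A sketch on the unscaled A and b (names our own; Program 2.9 below works instead with the scaled matrices of eqs 2.56):

C      SKETCH OF ONE GAUSS-SEIDEL SWEEP : X UPDATED IN PLACE,
C      SO X(J) FOR J.LT.I IS ALREADY THE NEW VALUE
      DO 2 I = 1,N
      S = B(I)
      DO 1 J = 1,N
      IF (J.NE.I) S = S - A(I,J)*X(J)
    1 CONTINUE
    2 X(I) = S/A(I,I)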

Example 2.7

Solve the equations

16x1 + 4x2 +  8x3 = 4
 4x1 + 5x2 -  4x3 = 2
 8x1 - 4x2 + 22x3 = 5

using Gauss-Seidel iteration.

Solution 2.7

Divide each equation by the diagonal term and rearrange as

(I - L)x^(k+1) = b + Ux^k

Hence

[  1        0      0] {x1}^(k+1)   {0.25  }   [0  -0.25  -0.5] {x1}^k
[ 0.8       1      0] {x2}       = {0.4   } + [0    0     0.8] {x2}
[ 0.3636  -0.1818  1] {x3}         {0.2273}   [0    0      0 ] {x3}

Let

{x1}^0   {1}
{x2}   = {1}
{x3}     {1}

then

[  1        0      0] {x1}^1   {-0.5   }
[ 0.8       1      0] {x2}   = { 1.2   }
[ 0.3636  -0.1818  1] {x3}     { 0.2273}

Hence by forward-substitution

{x1}^1   {-0.5}
{x2}   = { 1.6}
{x3}     { 0.7}

and then

[  1        0      0] {x1}^2   {-0.5   }
[ 0.8       1      0] {x2}   = { 0.96  }
[ 0.3636  -0.1818  1] {x3}     { 0.2273}

and also by forward-substitution

{x1}^2   {-0.5   }          {x1}   {-0.25}
{x2}   = { 1.36  },  ... ,  {x2} = { 1.0 }  (after many iterations)
{x3}     { 0.6564}          {x3}   { 0.5 }

Convergence is still slow (see Program 2.9), but not as slow as in Jacobi's method.

Program 2.9. Equation solution by Gauss-Seidel iteration

This program illustrates the coding of the Gauss-Seidel process. It clearly resembles Program 2.8 quite closely and the input data are the same (Fig. 2.9(a)). The same scaling of A is necessary but matrix U is required rather than LPU. At the same time as U is formed from A, the matrix I - L overwrites A. In the iteration loop, the right-hand side of eqs 2.56, called XNEW, is formed by successive calls to MVMULT and VECADD. The forward-substitution then takes place using SUBFOR, leaving the new solution in XNEW. As before, convergence is checked using CHECON. The results are listed in Fig. 2.9(b) and show that the number of iterations required to achieve the same tolerance has dropped to 30, compared to Jacobi's 51.

      PROGRAM P29
C
C      PROGRAM 2.9   GAUSS-SEIDEL ITERATION
C
C      ALTER NEXT LINE TO CHANGE PROBLEM SIZE
C
      PARAMETER (IN=20)
C
      REAL A(IN,IN),U(IN,IN),B(IN),X(IN),XNEW(IN)
C
      READ (5,*) N,((A(I,J),J=1,N),I=1,N)
      READ (5,*) (B(I),I=1,N)
      READ (5,*) (X(I),I=1,N)
      READ (5,*) TOL,ITS
      WRITE(6,*) ('*********** GAUSS-SEIDEL ITERATION ***************')
      WRITE(6,*)
      WRITE(6,*) ('COEFFICIENT MATRIX')
      CALL PRINTA(A,IN,N,N,6)
      WRITE(6,*)
      WRITE(6,*) ('RIGHT HAND SIDE VECTOR')
      CALL PRINTV(B,N,6)
      WRITE(6,*)
      DO 1 I = 1,N
      DIAG = A(I,I)
      DO 2 J = 1,N
    2 A(I,J) = A(I,J)/DIAG
    1 B(I) = B(I)/DIAG
      CALL NULL(U,IN,N,N)
      DO 3 I = 1,N - 1
      DO 3 J = I + 1,N
      U(I,J) = -A(I,J)
    3 A(I,J) = 0.0
      WRITE(6,*) ('FIRST FEW ITERATIONS')
      ITERS = 0
    4 ITERS = ITERS + 1
      CALL MVMULT(U,IN,X,N,N,XNEW)
      CALL VECADD(B,XNEW,XNEW,N)
      CALL SUBFOR(A,IN,XNEW,N)
      CALL CHECON(XNEW,X,N,TOL,ICON)
      IF (ITERS.LE.5) CALL PRINTV(X,N,6)
      IF (ICON.EQ.0.AND.ITERS.LT.ITS) GO TO 4
      WRITE(6,*)
      WRITE(6,*) ('ITERATIONS TO CONVERGENCE')
      WRITE(6,*) ITERS
      WRITE(6,*)
      WRITE(6,*) ('SOLUTION VECTOR')
      CALL PRINTV(X,N,6)
      STOP
      END

Number of equations         N
                            3
Coefficient matrix          (A(I,J),J=1,N),I=1,N
                            16.0   4.0   8.0
                             4.0   5.0  -4.0
                             8.0  -4.0  22.0
Right-hand side             B(I),I=1,N
                             4.0   2.0   5.0
Initial guess of solution   X(I),I=1,N
                             1.0   1.0   1.0
Tolerance                   TOL
                            1.E-5
Iteration limit             ITS
                            100

Fig. 2.9(a) Input data for Program 2.9


***********  GAUSS-SEIDEL ITERATION  ***********

COEFFICIENT MATRIX
  .1600E+02   .4000E+01   .8000E+01
  .4000E+01   .5000E+01  -.4000E+01
  .8000E+01  -.4000E+01   .2200E+02

RIGHT HAND SIDE VECTOR
  .4000E+01   .2000E+01   .5000E+01

FIRST FEW ITERATIONS
 -.5000E+00   .1600E+01   .7000E+00
 -.5000E+00   .1360E+01   .6564E+00
 -.4182E+00   .1260E+01   .6084E+00
 -.3691E+00   .1182E+01   .5764E+00
 -.3337E+00   .1128E+01   .5537E+00

ITERATIONS TO CONVERGENCE
 30

SOLUTION VECTOR
 -.2500E+00   .1000E+01   .5000E+00

Fig. 2.9(b) Results from Program 2.9

2.9.4 Successive overrelaxation

In this technique, the difference between successive iterations is multiplied by a scalar parameter ω called the 'overrelaxation factor':

   x^(k+1) = x^k + ω(x̃^(k+1) - x^k)          (2.59)

where x̃^(k+1) denotes the value produced by the basic iteration before relaxation. For the Gauss-Seidel method, ω = 1, but in general 1 ≤ ω < 2. Thus larger than usual changes are enforced in the solution as it proceeds.
From the simple Jacobi method (eq. 2.53) we have

   ωx̃^(k+1) = ωb + (ωL + ωU)x^k          (2.60)

and from eq. 2.59

   x^(k+1) = (1 - ω)x^k + ωx̃^(k+1)          (2.61)

so that

   x^(k+1) - (1 - ω)x^k = ωb + (ωL + ωU)x^k          (2.62)

or, using updating of the solution,

   (I - ωL)x^(k+1) = ωb + [(1 - ω)I + ωU]x^k          (2.63)

Equations 2.63 have similar properties to eqs 2.56 in that the evaluation of row i on the right-hand side does not depend on x_(i-1) etc., and so updating is possible.
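In component form, eqs 2.63 simply blend the Gauss-Seidel value for each unknown with its current value in the proportion ω : (1 - ω). A free-form Fortran sketch of one such sweep (again an illustration, not library code):

      ! One SOR sweep: GS holds the Gauss-Seidel value for row I.
      ! OMEGA = 1.0 recovers Gauss-Seidel; 1.0 < OMEGA < 2.0 over-relaxes.
      SUBROUTINE SORSWEEP(A, B, X, N, OMEGA)
        INTEGER, INTENT(IN) :: N
        REAL, INTENT(IN) :: A(N,N), B(N), OMEGA
        REAL, INTENT(INOUT) :: X(N)
        INTEGER :: I, J
        REAL :: GS
        DO I = 1, N
          GS = B(I)
          DO J = 1, N
            IF (J /= I) GS = GS - A(I,J)*X(J)
          END DO
          GS = GS/A(I,I)
          X(I) = (1.0 - OMEGA)*X(I) + OMEGA*GS   ! eq. 2.61
        END DO
      END SUBROUTINE SORSWEEP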

Program 2.10. Equation solution by successive overrelaxation

The algorithm will clearly be very similar to the Gauss-Seidel one and is illustrated in Program 2.10, with its input data in Fig. 2.10(a).
There is one extra input quantity, OMEGA, which is the overrelaxation scalar, but no other major changes. First A and b are scaled as usual and b given the multiple ω as required by eqs 2.63. Using library routine MSMULT, A is then multiplied by the scalar ω, and U and A organised as before. In order to end up with (I - ωL) and [(1 - ω)I + ωU] on the left- and right-hand sides of eqs 2.63, the diagonals of A and of U are replaced by 1.0 and 1.0 - ω respectively.

      PROGRAM P210
C
C      PROGRAM 2.10   SUCCESSIVE OVERRELAXATION
C
C      ALTER NEXT LINE TO CHANGE PROBLEM SIZE
C
      PARAMETER (IN=20)
C
      REAL A(IN,IN),U(IN,IN),B(IN),X(IN),XNEW(IN)
C
      READ (5,*) N,((A(I,J),J=1,N),I=1,N)
      READ (5,*) (B(I),I=1,N)
      READ (5,*) (X(I),I=1,N)
      READ (5,*) TOL,ITS,OMEGA
      WRITE(6,*) ('***********  SUCCESSIVE OVERRELAXATION  ***********')
      WRITE(6,*)
      WRITE(6,*) ('COEFFICIENT MATRIX')
      CALL PRINTA(A,IN,N,N,6)
      WRITE(6,*)
      WRITE(6,*) ('RIGHT HAND SIDE VECTOR')
      CALL PRINTV(B,N,6)
      WRITE(6,*)
      WRITE(6,*) ('OVERRELAXATION SCALAR')
      WRITE(6,100) OMEGA
      WRITE(6,*)
      DO 1 I = 1,N
      DIAG = A(I,I)
      DO 2 J = 1,N
    2 A(I,J) = A(I,J)/DIAG
    1 B(I) = OMEGA*B(I)/DIAG
      CALL NULL(U,IN,N,N)
      CALL MSMULT(A,IN,OMEGA,N,N)
      DO 3 I = 1,N
      DO 3 J = I + 1,N
      U(I,J) = -A(I,J)
    3 A(I,J) = 0.0
      DO 5 I = 1,N
      A(I,I) = 1.0
    5 U(I,I) = 1.0 - OMEGA
      WRITE(6,*) ('FIRST FEW ITERATIONS')
      ITERS = 0
    4 ITERS = ITERS + 1
      CALL MVMULT(U,IN,X,N,N,XNEW)
      CALL VECADD(B,XNEW,XNEW,N)
      CALL SUBFOR(A,IN,XNEW,N)
      CALL CHECON(XNEW,X,N,TOL,ICON)
      IF (ITERS.LE.5) CALL PRINTV(X,N,6)
      IF (ICON.EQ.0.AND.ITERS.LT.ITS) GO TO 4
      WRITE(6,*)
      WRITE(6,*) ('ITERATIONS TO CONVERGENCE')
      WRITE(6,*) ITERS
      WRITE(6,*)
      WRITE(6,*) ('SOLUTION VECTOR')
      CALL PRINTV(X,N,6)
  100 FORMAT(F10.2)
      STOP
      END

Number of equations         N
                            3
Coefficient matrix          (A(I,J),J=1,N),I=1,N
                            16.0   4.0   8.0
                             4.0   5.0  -4.0
                             8.0  -4.0  22.0
Right-hand side             B(I),I=1,N
                             4.0   2.0   5.0
Initial guess of solution   X(I),I=1,N
                             1.0   1.0   1.0
Tolerance                   TOL
                            1.E-5
Iteration limit             ITS
                            100
Overrelaxation scalar       OMEGA
                            1.5

Fig. 2.10(a) Input data for Program 2.10


i

t
I
*** t**** ***

COEFFICIENT UATRIX
.16008+02 .40008+01
** * * *** ** * * *
succEssrvE oVERRELAXATION

.8000E+01

F\
.4000E+01 .5000E+01 -.40008+ol,
.80008+01 -.40008+o1 .2200F,+O2

RIGHT HAND STDE VECTOR


It .40008+01 .2000E+01 .5000E+o1
I
OVERREI"AXATION SCAI,AR
1. 50

FIRST FEI'I ITERATIONS


-.12508+01 .28008+01 .12858+O1
-.10158+01 .19518+01 .7852E+00
-.4427E+OO .1094E+O1 .4877E+00
-.1795E+OO .8538E+OO .4279E+OO
-.1763E+OO .89818+OO .46818+00

ITERATIONS TO CONVERGENCE
lo

SOLUTION VECTOR
-.2500E+O0 .1000E+01 .5000E+00

Fig. 2.10(b) Resultsfrom Program2.10

Iteration proceeds using exactly the same loop as for Gauss-Seidel. In the case of the input shown in Fig. 2.10(a), the number of iterations to convergence has dropped to 18 (see Fig. 2.10(b)). Figure 2.11 shows the variation of iteration count with overrelaxation scalar ω. For these equations the optimum ω is about 1.4, but this will be difficult to predict.
[Figure: number of iterations to convergence (10 to 130) plotted against overrelaxation factor ω (1.0 to 2.0)]

Fig. 2.11 Influence of overrelaxation factor ω on the rate of convergence of SOR

2.10 Gradient methods

The methods described in the previous sections are sometimes called 'stationary' methods because there is no attempt made in them to modify the convergence process according to a measure of the error in the trial solution. In gradient methods, by contrast, an error function is repeatedly evaluated, and used to generate new trial solutions. In the following discussion we shall confine our interest to equations with symmetric, positive definite coefficient matrices. Although it will soon be apparent that in this case the gradient methods are very simple to program, the mathematical reasoning behind them is somewhat involved. For example, the reader is referred to Jennings (1977) pp. 212-216 for a reasonably concise exposition in engineering terms.

2.10.1 The method of 'steepest descent'

For any trial solution x^k, the error or 'residual' will clearly be expressible in terms of

   r^k = b - Ax^k          (2.64)

In the method of steepest descent, the error implicit in r^k is minimised according to the following algorithm:

   u^k = Ar^k                                   (a)

   α_k = (r^k)^T r^k / (r^k)^T u^k              (b)
                                                          (2.65)
   x^(k+1) = x^k + α_k r^k                      (c)

   r^(k+1) = r^k - α_k u^k                      (d)

To implement this algorithm making use of a subroutine library, it will be helpful to be able to multiply a matrix by a vector in step (a), to calculate vector inner or 'dot' products in step (b), to multiply a vector by a scalar in steps (c) and (d), and to add and subtract vectors in steps (c) and (d) respectively. These five operations are catered for in the library by subroutines MVMULT, VDOTV, VSMULT, VECADD and VECSUB.
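Using the intrinsic functions MATMUL and DOT_PRODUCT in place of these library routines, the whole of eqs 2.64 and 2.65 can be sketched in a few lines of free-form Fortran. (The stopping test on the residual norm below is an assumption made for brevity; Program 2.11 instead tests the change in x using CHECON.)

      ! Steepest descent for a symmetric positive definite matrix A.
      SUBROUTINE STEEPEST(A, B, X, N, TOL, ITS, ITERS)
        INTEGER, INTENT(IN) :: N, ITS
        REAL, INTENT(IN) :: A(N,N), B(N), TOL
        REAL, INTENT(INOUT) :: X(N)
        INTEGER, INTENT(OUT) :: ITERS
        REAL :: R(N), U(N), ALPHA
        R = B - MATMUL(A, X)                            ! eq. 2.64
        DO ITERS = 1, ITS
          U = MATMUL(A, R)                              ! step (a)
          ALPHA = DOT_PRODUCT(R, R)/DOT_PRODUCT(R, U)   ! step (b)
          X = X + ALPHA*R                               ! step (c)
          R = R - ALPHA*U                               ! step (d)
          IF (SQRT(DOT_PRODUCT(R, R)) < TOL) RETURN
        END DO
      END SUBROUTINE STEEPEST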

Program 2.11. Equation solution by 'steepest descent'

The program which implements eqs 2.65 is listed as Program 2.11. It uses the same data as most of the previous iterative programs (Fig. 2.12(a)), requiring as input the number of equations to be

      PROGRAM P211
C
C      PROGRAM 2.11   STEEPEST DESCENT
C
C      ALTER NEXT LINE TO CHANGE PROBLEM SIZE
C
      PARAMETER (IN=20)
C
      REAL A(IN,IN),B(IN),X(IN),R(IN),U(IN),TEMP(IN),XNEW(IN)
C
      READ (5,*) N,((A(I,J),J=1,N),I=1,N)
      READ (5,*) (B(I),I=1,N)
      READ (5,*) (X(I),I=1,N)
      READ (5,*) TOL,ITS
      WRITE(6,*) ('***************  STEEPEST DESCENT  ***************')
      WRITE(6,*)
      WRITE(6,*) ('COEFFICIENT MATRIX')
      CALL PRINTA(A,IN,N,N,6)
      WRITE(6,*)
      WRITE(6,*) ('RIGHT HAND SIDE VECTOR')
      CALL PRINTV(B,N,6)
      WRITE(6,*)
      WRITE(6,*) ('STARTING VECTOR')
      CALL PRINTV(X,N,6)
      WRITE(6,*)
      CALL MVMULT(A,IN,X,N,N,TEMP)
      CALL VECSUB(B,TEMP,R,N)
      WRITE(6,*) ('FIRST FEW ITERATIONS')
      ITERS = 0
    1 ITERS = ITERS + 1
      CALL MVMULT(A,IN,R,N,N,U)
      CALL VDOTV(R,R,UP,N)
      CALL VDOTV(R,U,DOWN,N)
      ALPHA = UP/DOWN
      CALL VECCOP(R,TEMP,N)
      CALL VSMULT(TEMP,ALPHA,N)
      CALL VECADD(X,TEMP,XNEW,N)
      CALL VSMULT(U,ALPHA,N)
      CALL VECSUB(R,U,R,N)
      CALL CHECON(XNEW,X,N,TOL,ICON)
      IF (ITERS.LE.5) CALL PRINTV(X,N,6)
      IF (ICON.EQ.0.AND.ITERS.LT.ITS) GO TO 1
      WRITE(6,*)
      WRITE(6,*) ('ITERATIONS TO CONVERGENCE')
      WRITE(6,*) ITERS
      WRITE(6,*)
      WRITE(6,*) ('SOLUTION VECTOR')
      CALL PRINTV(X,N,6)
      STOP
      END
Number of equations         N
                            3
Coefficient matrix          (A(I,J),J=1,N),I=1,N
                            16.0   4.0   8.0
                             4.0   5.0  -4.0
                             8.0  -4.0  22.0
Right-hand side             B(I),I=1,N
                             4.0   2.0   5.0
Initial guess of solution   X(I),I=1,N
                             1.0   1.0   1.0
Tolerance                   TOL
                            1.E-5
Iteration limit             ITS
                            100

Fig. 2.12(a) Input data for Program 2.11

***************  STEEPEST DESCENT  ***************

COEFFICIENT MATRIX
  .1600E+02   .4000E+01   .8000E+01
  .4000E+01   .5000E+01  -.4000E+01
  .8000E+01  -.4000E+01   .2200E+02

RIGHT HAND SIDE VECTOR
  .4000E+01   .2000E+01   .5000E+01

STARTING VECTOR
  .1000E+01   .1000E+01   .1000E+01

FIRST FEW ITERATIONS
  .9133E-01   .8864E+00   .2049E+00
 -.8584E-01   .7540E+00   .4263E+00
 -.1305E+00   .7658E+00   .3976E+00
 -.1538E+00   .8081E+00   .4512E+00
 -.1715E+00   .8257E+00   .4297E+00

ITERATIONS TO CONVERGENCE
 61

SOLUTION VECTOR
 -.2500E+00   .9999E+00   .5000E+00

Fig. 2.12(b) Results from Program 2.11

solved (N), the equation left- and right-hand sides (A and B) and a starting guess X to compute the initial value of r in eqs 2.64. The iteration tolerance (TOL) and limit (ITS) complete the data.
The operations involved in eqs 2.64 are first performed, namely the matrix-vector multiply Ax^0 and its subtraction from b to form R. The iterations in eqs 2.65 can then proceed, involving MVMULT (step (a)) and two dot products giving α (ALPHA in step (b)). Vector R is temporarily copied into TEMP and step (c) completed by VSMULT and VECADD. It remains only to complete step (d) by VSMULT and VECSUB, to check the tolerance on X using CHECON, and to continue the iteration or stop if convergence has been achieved. The results are shown in Fig. 2.12(b), where it can be seen that 61 iterations are required to achieve the desired accuracy.

2.10.2 The method of 'conjugate gradients'

The results obtained by running Program 2.11 show that the method of steepest descent is not competitive with the other iteration methods tried so far. However, it can be radically improved if the descent vectors are made mutually 'conjugate' with respect to A. That is, descent vectors p satisfy

   (p^i)^T A p^j = 0   for i ≠ j          (2.66)

The equivalent algorithm to eqs 2.65 becomes

   p^0 = r^0 = b - Ax^0   (to start the process)

   u^k = Ap^k                                       (a)

   α_k = (r^k)^T r^k / (p^k)^T u^k                  (b)

   x^(k+1) = x^k + α_k p^k                          (c)          (2.67)

   r^(k+1) = r^k - α_k u^k                          (d)

   β_k = (r^(k+1))^T r^(k+1) / (r^k)^T r^k          (e)

   p^(k+1) = r^(k+1) + β_k p^k                      (f)
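The same intrinsics give a compact sketch of eqs 2.67 (free-form Fortran, an illustration only; as before, the residual-norm stopping test is an assumed simplification):

      ! Conjugate gradients for a symmetric positive definite matrix A.
      SUBROUTINE CONJGRAD(A, B, X, N, TOL, ITS, ITERS)
        INTEGER, INTENT(IN) :: N, ITS
        REAL, INTENT(IN) :: A(N,N), B(N), TOL
        REAL, INTENT(INOUT) :: X(N)
        INTEGER, INTENT(OUT) :: ITERS
        REAL :: R(N), P(N), U(N), ALPHA, BETA, RR, RRNEW
        R = B - MATMUL(A, X)                ! r0, to start the process
        P = R                               ! p0 = r0
        RR = DOT_PRODUCT(R, R)
        DO ITERS = 1, ITS
          U = MATMUL(A, P)                  ! step (a)
          ALPHA = RR/DOT_PRODUCT(P, U)      ! step (b)
          X = X + ALPHA*P                   ! step (c)
          R = R - ALPHA*U                   ! step (d)
          RRNEW = DOT_PRODUCT(R, R)
          IF (SQRT(RRNEW) < TOL) RETURN
          BETA = RRNEW/RR                   ! step (e)
          P = R + BETA*P                    ! step (f)
          RR = RRNEW
        END DO
      END SUBROUTINE CONJGRAD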

Program 2.12. Equation solution by conjugate gradients

The programming of this algorithm clearly involves more steps, but uses the same library routines as the previous program. The process is listed as Program 2.12, which begins with the usual declarations of arrays following the nomenclature of eqs 2.67. The data are unchanged (Fig. 2.13(a)). The starting procedure leads to r^0 using MVMULT and VECSUB, and r^0 is copied into p^0 using VECCOP.
The iteration process then proceeds. Vector u^k is computed by MVMULT (step (a)) and ALPHA as required by step (b). Vector p is copied into temporary store TEMP and step (c) leads to the updated X, called XNEW as before. Routines VSMULT and VECSUB complete step (d) which gives the new r, and β (BETA) involves the new numerator UP2. It remains to complete step (f) by calls to VSMULT and VECADD and to check convergence using CHECON.

      PROGRAM P212
C
C      PROGRAM 2.12   CONJUGATE GRADIENTS
C
C      ALTER NEXT LINE TO CHANGE PROBLEM SIZE
C
      PARAMETER (IN=20)
C
      REAL A(IN,IN),B(IN),X(IN),R(IN),U(IN),TEMP(IN),P(IN),XNEW(IN)
C
      READ (5,*) N,((A(I,J),J=1,N),I=1,N)
      READ (5,*) (B(I),I=1,N)
      READ (5,*) (X(I),I=1,N)
      READ (5,*) TOL,ITS
      WRITE(6,*) ('***************  CONJUGATE GRADIENTS  **************')
      WRITE(6,*)
      WRITE(6,*) ('COEFFICIENT MATRIX')
      CALL PRINTA(A,IN,N,N,6)
      WRITE(6,*)
      WRITE(6,*) ('RIGHT HAND SIDE VECTOR')
      CALL PRINTV(B,N,6)
      WRITE(6,*)
      WRITE(6,*) ('STARTING VECTOR')
      CALL PRINTV(X,N,6)
      WRITE(6,*)
      CALL MVMULT(A,IN,X,N,N,TEMP)
      CALL VECSUB(B,TEMP,R,N)
      CALL VECCOP(R,P,N)
      WRITE(6,*) ('FIRST FEW ITERATIONS')
      ITERS = 0
    1 ITERS = ITERS + 1
      CALL MVMULT(A,IN,P,N,N,U)
      CALL VDOTV(R,R,UP,N)
      CALL VDOTV(P,U,DOWN,N)
      ALPHA = UP/DOWN
      CALL VECCOP(P,TEMP,N)
      CALL VSMULT(TEMP,ALPHA,N)
      CALL VECADD(X,TEMP,XNEW,N)
      CALL VSMULT(U,ALPHA,N)
      CALL VECSUB(R,U,R,N)
      CALL VDOTV(R,R,UP2,N)
      BETA = UP2/UP
      CALL VSMULT(P,BETA,N)
      CALL VECADD(R,P,P,N)
      CALL CHECON(XNEW,X,N,TOL,ICON)
      IF (ITERS.LE.5) CALL PRINTV(X,N,6)
      IF (ICON.EQ.0.AND.ITERS.LT.ITS) GO TO 1
      WRITE(6,*)
      WRITE(6,*) ('ITERATIONS TO CONVERGENCE')
      WRITE(6,*) ITERS
      WRITE(6,*)
      WRITE(6,*) ('SOLUTION VECTOR')
      CALL PRINTV(X,N,6)
      STOP
      END

Number of equations         N
                            3
Coefficient matrix          (A(I,J),J=1,N),I=1,N
                            16.0   4.0   8.0
                             4.0   5.0  -4.0
                             8.0  -4.0  22.0
Right-hand side             B(I),I=1,N
                             4.0   2.0   5.0
Initial guess of solution   X(I),I=1,N
                             1.0   1.0   1.0
Tolerance                   TOL
                            1.E-5
Iteration limit             ITS
                            100

Fig. 2.13(a) Input data for Program 2.12


***************  CONJUGATE GRADIENTS  **************

COEFFICIENT MATRIX
  .1600E+02   .4000E+01   .8000E+01
  .4000E+01   .5000E+01  -.4000E+01
  .8000E+01  -.4000E+01   .2200E+02

RIGHT HAND SIDE VECTOR
  .4000E+01   .2000E+01   .5000E+01

STARTING VECTOR
  .1000E+01   .1000E+01   .1000E+01

FIRST FEW ITERATIONS
  .9133E-01   .8864E+00   .2049E+00
 -.1283E+00   .7444E+00   .4039E+00
 -.2500E+00   .1000E+01   .5000E+00
 -.2500E+00   .1000E+01   .5000E+00

ITERATIONS TO CONVERGENCE
 4

SOLUTION VECTOR
 -.2500E+00   .1000E+01   .5000E+00

Fig. 2.13(b) Results from Program 2.12

When the program is run, the results are as shown in Fig. 2.13(b). The iteration count has dropped to 4, making this the most successful method so far. In fact, theoretically, in perfect arithmetic, the method would converge in N (3 in this case) steps.

2.10.3 Convergence of iteration methods

Figure 2.14 illustrates how the five iterative methods described in this chapter converge on the solution (x1, x2) = (1.0, 2.0) of eq. 2.41. Also shown are contours of error in the solution, defined where the error in x1 and x2 reaches 10 per cent and 20 per cent respectively, the measure of total error being √[e(x1)² + e(x2)²]. The methods with poorest convergence properties (Jacobi and steepest descent) are seen to be furthest from normal to the error contours in the vicinity of the true solution.

2.10.4 Preconditioning

An increasingly popular process for large scale computation which should be mentioned is called 'preconditioning'. In the solution of the usual system

   Ax = b          (2.68)

a preconditioning matrix is applied so that

   PAx = Pb          (2.69)

It will be clear that if P = A⁻¹ the preconditioning process would be perfect, and no further work would be necessary. Although this level of prior knowledge can be discounted, it turns out that relatively crude approximations to A⁻¹ can be very effective as preconditioners, and enable the solution of large systems of equations by conjugate gradient methods in far less than N iterations.
[Figure: convergence paths of the five iterative methods in the (x1, x2) plane, closing on the solution; the SOR path shown uses ω = 1.2, and a 20% error contour is marked]

Fig. 2.14 Convergence of iterative methods
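The crudest approximation of all takes P to be the inverse of the diagonal of A (diagonal, or 'Jacobi', preconditioning), which amounts to the row scaling already used in the iterative programs of this chapter. A free-form Fortran sketch follows (an illustration only; in practice, with conjugate gradients, a symmetric form of the preconditioning would be used so that the preconditioned matrix remains symmetric):

      ! Apply the diagonal preconditioner P = diag(1/A(I,I)) to both
      ! sides of Ax = b, as in eq. 2.69.
      SUBROUTINE DIAGPRE(A, B, N)
        INTEGER, INTENT(IN) :: N
        REAL, INTENT(INOUT) :: A(N,N), B(N)
        INTEGER :: I
        REAL :: D
        DO I = 1, N
          D = 1.0/A(I,I)        ! assumes a nonzero diagonal
          A(I,:) = D*A(I,:)
          B(I) = D*B(I)
        END DO
      END SUBROUTINE DIAGPRE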

2.11 Comparison of direct and iterative methods

In the era of scalar digital computing, say up to 1980, it could be stated that in the majority of cases direct solution was to be preferred to iterative solution. Exceptions were possible for very sparse systems, and some ill-conditioned systems, but the degree of security offered by direct solution was attractive. It has been shown in the examples of iterative solution presented in this chapter that a very wide range of efficiencies (as measured by iteration count to convergence and work per iteration) is possible, and so the amount of time consumed in the solution of a system of equations is rather unpredictable.
However, the widespread use of 'vector' or 'parallel' processing computers has led to a revision of previous certainties about equation solution. Comparison of Program 2.12 (conjugate gradient technique) with Program 2.4 (LU decomposition) will show that the former consists almost entirely of rather straightforward operations on vectors which can be processed very quickly by non-scalar machines. Even if the coefficient matrix A is sparse or banded, the MVMULT operation is vectorisable or parallelisable. In contrast, the LU factorisation is seen to contain more complicated code involving conditional statements and variable length loops which present greater difficulties to the programmer attempting to optimise code on a non-scalar machine. It can therefore be said that algorithm choice for linear equation solution is far from simple for large systems of equations, and depends strongly upon machine architecture. For small systems of equations, direct methods are still attractive.

2.12 Exercises

1  Solve the set of simultaneous equations

   6x1 + 3x2 + 2x3 = 18
   2x1 + 3x2 + 3x3 = 17
    x1 + 2x2 + 2x3 = 11

   Answer: x1 = 1, x2 = 2, x3 = 3.

2  Solve the system by LU factorisation.

   [The 3 × 3 system and its worked answer are not legible in this scan.]
3  Solve the symmetrical equations

    9.3746x1 + 3.0416x2 - 2.4371x3 = 9.2333
    3.0416x1 + 6.1832x2 + 1.2163x3 = 8.2049
   -2.4371x1 + 1.2163x2 + 8.4429x3 = 3.9339

   by LDL^T decomposition, and hence find the determinant of the coefficient matrix.

   Answer: x1 = 0.8964, x2 = 0.7651, x3 = 0.6145.
   The terms in D are D11 = 9.3746, D22 = 5.1964, D33 = 7.0341 and so the determinant of the coefficient matrix is 342.66.

4  Solve the symmetrical system

    5x1 + 6x2 - 2x3 - 2x4 = 1
    6x1 - 5x2 - 2x3 + 2x4 = 0
   -2x1 - 2x2 + 3x3 -  x4 = 0
   -2x1 + 2x2 -  x3 - 3x4 = 0

   Answer: x1 = 0.12446, x2 = 0.07725, x3 = 0.11159, x4 = -0.06867.

5  Solve the symmetrical system

     x1 + 2x2 - 2x3 +  x4 =  4
    2x1 + 5x2 - 2x3 + 3x4 =  7
   -2x1 - 2x2 + 5x3 + 3x4 = -1
     x1 + 3x2 + 3x3 + 2x4 =  0

   Answer: x1 = 2, x2 = -1, x3 = -1, x4 = 2.

6  Attempt to solve Exercises 4 and 5 by Cholesky's method (Program 2.4).

   Answer: Square roots of negative numbers will arise.

7  Solve the symmetrical banded system:

   | 4  2  0  0 | {x1}     {4}
   | 2  8  2  0 | {x2}  =  {0}
   | 0  2  8  2 | {x3}     {0}
   | 0  0  2  4 | {x4}     {0}

   Answer: x1 = 1.156, x2 = -0.311, x3 = 0.089, x4 = -0.044.

8  Solve the following system using elimination with partial pivoting:

   [The 4 × 4 system and the fractional answers are not legible in this scan.]

   The interchanged row order is 4, 2, 1, 3.

9  Solve the following equations using elimination with partial pivoting:

    x1 + 2x2 + 3x3 =  2
   3x1 + 6x2 +  x3 = 14
    x1 +  x2 +  x3 =  2

   Answer: x1 = 1, x2 = 2, x3 = -1.

10  Attempt to solve Exercise 9 without pivoting.

    Answer: Zero pivot found in row 2.

11  Solve the equations

    20x1 +  2x2 -  x3 = 25
     2x1 + 13x2 - 2x3 = 30
      x1 +   x2 +  x3 =  2

    using (a) Jacobi and (b) Gauss-Seidel iterations, with a starting guess x1 = x2 = x3 = 0.

    Answer: x1 = 1, x2 = 2, x3 = -1.

    In case (a) the first five iterations give

    x1         x2         x3
    0          0          0
    1.25       2.308      2
    1.119      2.423     -1.558
    0.9298     1.8959    -1.5423
    0.9833     1.9274    -0.8257
    1.01598    2.02939   -0.91067

    whereas in case (b) they are:

    x1         x2         x3
    0          0          0
    1.25       2.1154    -1.3654
    0.97019    1.94837   -0.91857
    1.00923    2.01111   -1.02034
    0.99787    1.99720   -0.99507
    1.00053    2.00068   -1.00120
12  Compare the iteration counts, for a solution tolerance of 1 × 10^-5, in the solution of Exercise 11 by the following methods: Jacobi, Gauss-Seidel, SOR (ω = 1.2), steepest descent, conjugate gradients.

    Answer: 16, 10, 30, does not converge, does not converge.
    The last two methods are suitable only for symmetric, positive definite systems.

13  Solve Exercises 4 and 5 by the method of conjugate gradients.

    Answer: For a tolerance of 1 × 10^-5, the solution is obtained in 5 iterations in both cases.

14  Check that the solution vectors {1.22  -1.000  3.037}^T and {1.222  -0.75  3.33}^T are both solutions to the system

    9x1 + 9x2 + 8x3 = 26
    9x1 + 8x2 + 7x3 = 24
    8x1 + 7x2 + 6x3 = 21

    to within a tolerance of 0.1. Find the true solution. What do these results imply about the system of equations?

    Answer: True solution {1.0  1.0  1.0}^T.
    The system is ill-conditioned.

2.13 Further reading

Bickley, W.G. and Thompson, R.S.H.G. (1964). Matrices, Their Meaning and Manipulation, English Universities Press, London.
Conte, S. and de Boor, C. (1980). Elementary Numerical Analysis, 3rd edn, McGraw-Hill, New York.
Duff, I. (1977). A survey of sparse matrix research, Proc. IEEE, 65, pp. 500-535.
Evans, D. (ed.) (1985). Sparsity and its Applications, Cambridge University Press, Cambridge.
Fox, L. (1964). An Introduction to Numerical Linear Algebra, Clarendon Press, Oxford.
Frazer, R.A., Duncan, W.J. and Collar, A.R. (1938). Elementary Matrices and some Applications to Dynamics and Differential Equations, Cambridge University Press, Cambridge.
Froberg, C.E. (1969). Introduction to Numerical Analysis, 2nd edn, Addison-Wesley, Reading, Massachusetts.
Gladwell, I. and Wait, R. (eds) (1979). A Survey of Numerical Methods for Partial Differential Equations, Oxford University Press, Oxford.
Householder, A. (1965). The Theory of Matrices in Numerical Analysis, Ginn, Boston.
Jennings, A. (1977). Matrix Computation for Engineers and Scientists, Wiley, Chichester.
Varga, R.S. (1962). Matrix Iterative Analysis, Prentice-Hall, Englewood Cliffs, New Jersey.
Wilkinson, J.H. and Reinsch, C. (1971). Linear Algebra, Handbook for Automatic Computation, Vol. 2, Springer-Verlag, New York.
