Chapter 2

Linear Algebraic Equations

2.1 Introduction
One of the commonest numerical tasks facing engineers is the solution of sets of linear algebraic equations of the form

    a11x1 + a12x2 + ... + a1nxn = b1
    a21x1 + a22x2 + ... + a2nxn = b2   (2.1)
      :      :            :      :
    an1x1 + an2x2 + ... + annxn = bn

commonly written

    Ax = b   (2.2)
2.2 Gaussian elimination
We begin with a specific set of equations:
One more step of the same procedure would be

    2.5x3 = 3.5   (2.5)
Thus, for sets of N simultaneous equations, however big N might be, after N steps of this process a single equation involving only the unknown xN would remain. Working backwards from eq. 2.5, a procedure usually called 'back-substitution', x3 can first be found as 3.5/2.5 or 1.4. Knowing x3, substitution in eq. 2.4(d) gives x2 as -2.0 and finally substitution in eq. 2.3(a) gives x1 as 1.0. Writing the back-substitution process in terms of matrices and vectors, we have

    | 10  1  -5  | | x1 |   | 1   |
    |  0  5  10  | | x2 | = | 4   |   (2.6)
    |  0  0  2.5 | | x3 |   | 3.5 |

or

    Ux = y   (2.7)

The matrix U is called an 'upper triangular matrix' and it is clear that such matrices will be very convenient in linear equation work.
In a similar way, if we had the system of equations

    | l11  0   0  | | x1 |   | y1 |
    | l21 l22  0  | | x2 | = | y2 |   (2.8)
    | l31 l32 l33 | | x3 |   | y3 |

or

    Lx = y   (2.9)

it would be relatively easy to calculate x given L and y. The matrix L is called a 'lower triangular matrix', and the process of finding x in eqs 2.9 is called 'forward-substitution'. The direct methods we shall discuss all involve, in some way or another, matrices like L and U.
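The two substitution processes can be sketched in a few lines. The book's own implementations are FORTRAN library subroutines; the following Python fragment is only an illustration of the same arithmetic, using the triangular system of eq. 2.6 as a check.

```python
def forward_substitution(L, b):
    # Solve L y = b for y, where L is lower triangular (eq. 2.9 style).
    n = len(b)
    y = [0.0] * n
    for i in range(n):
        s = sum(L[i][j] * y[j] for j in range(i))
        y[i] = (b[i] - s) / L[i][i]
    return y

def back_substitution(U, y):
    # Solve U x = y for x, where U is upper triangular (eq. 2.7 style).
    n = len(y)
    x = [0.0] * n
    for i in reversed(range(n)):
        s = sum(U[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (y[i] - s) / U[i][i]
    return x

# The upper triangular system of eq. 2.6:
U = [[10.0, 1.0, -5.0],
     [0.0, 5.0, 10.0],
     [0.0, 0.0, 2.5]]
x = back_substitution(U, [1.0, 4.0, 3.5])
print(x)  # close to [1.0, -2.0, 1.4], the solution quoted in the text
```

Note that each loop visits only the already-computed unknowns, which is why triangular systems are so cheap to solve.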
Example 2.1

Solve the equations

    2x1 - 3x2 +  x3 = 7
     x1 -  x2 - 2x3 = -2
    3x1 +  x2 -  x3 = 0

by Gaussian elimination.

Solution 2.1

Elimination reduces the system to upper triangular form:

    2x1 - 3x2 +    x3 = 7
         0.5x2 - 2.5x3 = -5.5
                  25x3 = 50

whence back-substitution gives

    x3 = 50/25 = 2
    x2 = (-5.5 + 2.5(2))/0.5 = -1
    x1 = (7 + 3(-1) - 2)/2 = 1
Simple variables

N    Number of equations to be solved

PARAMETER restriction

IN ≥ N
      PROGRAM P21
C
C      PROGRAM 2.1   GAUSSIAN ELIMINATION
C
C      ALTER NEXT LINE TO CHANGE PROBLEM SIZE
C
      PARAMETER (IN=20)
C
      REAL A(IN,IN),B(IN)
C
      READ (5,*) N
In eqs 2.4 and 2.5 it can be seen that the elements of the original arrays A and b are progressively altered during the calculation. In Program 2.1, once the terms in x have been calculated, they are stored in b since the original b has been lost anyway.

In passing it should be noted that eqs 2.4 involve division by the coefficient a11 (equal to 10 in this case) while eqs 2.5 involve division by the modified coefficient a22 (equal to 5 in this case). These coefficients akk, where 1 ≤ k ≤ n, are called the 'pivots' and it will be clear that they might be zero, either at the beginning of the elimination process, or during it. We shall return to this problem later, but for the moment shall merely check whether akk is or has become zero and stop the calculation if this is so. Input details are shown in Fig. 2.1(a) with output in Fig. 2.1(b).

The program begins by reading in N, A and b. In the section commented 'convert to upper triangular form' a check is first made to see if akk is greater than 'zero' (a small number in this case). Rows 2 to N are then processed according to eqs 2.4 and 2.5 and the modified A and b printed out for comparison with eqs 2.6. Note that only the numbers in the upper triangular
Number of equations      N
                         3
Coefficient matrix       (A(I,J), J=1,N), I=1,N
                         10.0   1.0  -5.0
                        -20.0   3.0  20.0
                          5.0   3.0   5.0
Right-hand side          B(I), I=1,N
                          1.0   2.0   6.0

Fig. 2.1(a) Input data for Program 2.1
COEFFICIENT MATRIX
 .1000E+02  .1000E+01 -.5000E+01
-.2000E+02  .3000E+01  .2000E+02
 .5000E+01  .3000E+01  .5000E+01

RIGHT HAND SIDE VECTOR
 .1000E+01  .2000E+01  .6000E+01

MODIFIED MATRIX
 .1000E+02  .1000E+01 -.5000E+01
-.2000E+02  .5000E+01  .1000E+02
 .5000E+01  .2500E+01  .2500E+01

MODIFIED RIGHT HAND SIDE VECTOR
 .1000E+01  .4000E+01  .3500E+01

SOLUTION VECTOR
 .1000E+01 -.2000E+01  .1400E+01

Fig. 2.1(b) Results from Program 2.1
part of A are of any significance. The back-substitution calculation is then performed, leaving the original unknowns x stored in b, which is printed out.
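The arithmetic Program 2.1 performs can be mirrored compactly. The sketch below is an illustrative Python translation, not the book's FORTRAN listing; the zero-pivot check is reduced to a simple magnitude test.

```python
def gauss_solve(A, b):
    # Gaussian elimination without pivoting, then back-substitution.
    # A and b are overwritten, as in Program 2.1.
    n = len(b)
    for k in range(n - 1):
        if abs(A[k][k]) < 1e-12:
            raise ZeroDivisionError("zero pivot encountered")
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]       # elimination multiplier
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    x = [0.0] * n
    for i in reversed(range(n)):        # back-substitution
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x

# The system of Fig. 2.1(a):
A = [[10.0, 1.0, -5.0], [-20.0, 3.0, 20.0], [5.0, 3.0, 5.0]]
b = [1.0, 2.0, 6.0]
print(gauss_solve(A, b))  # close to [1.0, -2.0, 1.4], cf. Fig. 2.1(b)
```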
2.2.1 Observations on the elimination process
In Program 2.1, some terms below the diagonal in matrix U (eq. 2.7) have been computed even though they are known in advance to be 'zero'. This is obviously work which need not be done. Further, during the conversion of A to upper triangular form, it was necessary to operate also on b. Therefore if equations with the same coefficients A have to be solved for different b, which are not known in advance as is often the case, the conversion of A to triangular form would be necessary for every b using this method.
We therefore seek a way of implementing Gaussian elimination so that multiple right-hand side b vectors can be processed after only a single 'decomposition' of A to triangular form. Such methods involve 'factorisation' of A into triangular matrix components. For example, it can be shown that matrix A can always be written as the product

    A = LU   (2.10)
where L is a lower triangular matrix and U an upper triangular matrix, in the forms

    L = | l11  0   0  |
        | l21 l22  0  |   (2.11)
        | l31 l32 l33 |

and

    U = | u11 u12 u13 |
        |  0  u22 u23 |   (2.12)
        |  0   0  u33 |

The numbers l11 and u11 are arbitrary except that their product is known. For example

    l11u11 = a11   (2.13)

Choosing the diagonal terms of L to be unity,

    | a11 a12 a13 |   |  1   0   0 | | u11 u12 u13 |
    | a21 a22 a23 | = | l21  1   0 | |  0  u22 u23 |   (2.14)
    | a31 a32 a33 |   | l31 l32  1 | |  0   0  u33 |

is a usual statement of LU factorisation, which is elaborated in the next section.
    Ax = b   (2.15)

or

    LUx = b

We now let

    Ux = y   (2.16)

Hence

    Ly = b   (2.17)

Since L and b are known, and L does not depend on b, this process is simply the 'forward-substitution' we saw in eq. 2.9. Once eq. 2.17 has been solved for y, eq. 2.16 is then the 'back-substitution' described by eq. 2.7. A solution algorithm will therefore consist of three phases, namely a factorisation (eq. 2.15) followed by a forward-substitution (eq. 2.17) and a back-substitution (eq. 2.16). The procedures of factorisation and forward- and back-substitution will be used in other contexts in this book and elsewhere, so that it makes sense to code them as library subroutines. They are called LUFAC, SUBFOR and SUBBAC respectively and their actions and parameters are described in Appendix 1 with full listings in Appendix 2.
Row 2:    l21u11 = a21    hence l21 = a21/u11

Having found l21, u22 and u23 can be computed since

    l21u12 + u22 = a22    hence u22 = a22 - l21u12

and

    l21u13 + u23 = a23    hence u23 = a23 - l21u13

Row 3:    l31u11 = a31    hence l31 = a31/u11

    l31u12 + l32u22 = a32    hence l32 = (a32 - l31u12)/u22

Having found l31 and l32, u33 can be computed from

    l31u13 + l32u23 + u33 = a33    hence u33 = a33 - l31u13 - l32u23
Subroutine LUFAC carries out these operations in two parts, commented 'lower triangular components' and 'upper triangular components' respectively. A 'zero' pivot is tested for in the same way as was done in Program 2.1.
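The recurrences just derived can be collected into a small routine. The sketch below illustrates in Python what a factorisation of the eq. 2.14 (unit-diagonal) type computes; it is not the book's FORTRAN LUFAC and, for brevity, omits the zero-pivot test.

```python
def lu_factorise(A):
    # Doolittle LU factorisation: L has unit diagonal, as in eq. 2.14.
    n = len(A)
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i, n):        # row i of U
            U[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(i))
        for j in range(i + 1, n):    # column i of L
            L[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(i))) / U[i][i]
    return L, U

# The coefficient matrix of Example 2.2:
L, U = lu_factorise([[2.0, -3.0, 1.0], [1.0, -1.0, -2.0], [3.0, 1.0, -1.0]])
```

For this matrix the factors agree with the worked solution that follows: l21 = 0.5, l31 = 1.5, l32 = 11, u22 = 0.5, u23 = -2.5 and u33 = 25.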
Example 2.2

    2x1 - 3x2 +  x3 = 7
     x1 -  x2 - 2x3 = -2
    3x1 +  x2 -  x3 = 0
Solution 2.2

    | 2 -3  1 |   |  1   0  0 | | u11 u12 u13 |
    | 1 -1 -2 | = | l21  1  0 | |  0  u22 u23 |
    | 3  1 -1 |   | l31 l32 1 | |  0   0  u33 |

giving

    L = |  1   0  0 |
        | 0.5  1  0 |
        | 1.5 11  1 |

    U = | 2 -3    1   |
        | 0  0.5 -2.5 |
        | 0  0   25   |

Forward-substitution:

    Ly = b

    |  1   0  0 | | y1 |   |  7 |
    | 0.5  1  0 | | y2 | = | -2 |
    | 1.5 11  1 | | y3 |   |  0 |

    y1 = 7,  y2 = -2 - 0.5(7) = -5.5
    y3 = 0 - 1.5(7) - 11(-5.5) = 50

Back-substitution:

    Ux = y

    x3 = 50/25 = 2,  x2 = (-5.5 + 2.5(2))/0.5 = -1,  x1 = (7 + 3(-1) - 2)/2 = 1
Simple variables

N    Number of equations to be solved

PARAMETER restriction

IN ≥ N
      PROGRAM P22

Number of equations      N
                         3

COEFFICIENT MATRIX
 .1000E+02  .1000E+01 -.5000E+01
-.2000E+02  .3000E+01  .2000E+02
 .5000E+01  .3000E+01  .5000E+01

SOLUTION VECTOR
 .1000E+01 -.2000E+01  .1400E+01
Comparison of outputs in Figs 2.1(b) and 2.2(b) will show that the upper triangular factor of A in Fig. 2.2(b) is precisely the same as the modified upper triangular part of A in Fig. 2.1(b). Thus Programs 2.1 and 2.2 have much in common. However, were many b vectors to be processed, it would merely be necessary to call LUFAC once in Program 2.2 and to create a small loop reading in each new b and calling SUBFOR and SUBBAC to produce the solutions. Since the time taken in LUFAC is substantially more than that taken in SUBFOR and SUBBAC, this yields great economies as N increases.

Inspection of the arithmetic in LUFAC will show that storage could be saved by overwriting A by the lower and upper triangular factors, and this will be done in subsequent programs. It has not been implemented in this first factorisation program in an attempt to make the computational process as clear as possible.
2.4 Equations with symmetrical coefficient matrix

If the coefficients of the A matrix satisfy the condition

    aij = aji   (2.18)

the matrix is said to be 'symmetrical'. For example, the symmetrical matrix

    A = | 16  4   8 |
        |  4  5  -4 |   (2.19)
        |  8 -4  22 |

has the LU factors

    L = |  1    0   0 |
        | 0.25  1   0 |   (2.20)
        | 0.5  -1.5 1 |
and

    U = | 16  4   8 |
        |  0  4  -6 |   (2.21)
        |  0  0   9 |

If the rows of U are then divided by the leading diagonal terms, we have

    U1 = | 1  0.25  0.5 |
         | 0  1    -1.5 |   (2.22)
         | 0  0     1   |

and it can be seen that lij = u1ji. The scaling of U to U1 is accomplished in matrix terms by

    U = DU1   (2.23)

where D is the diagonal matrix

    D = | 16  0  0 |
        |  0  4  0 |   (2.24)
        |  0  0  9 |
Thus, if A is a symmetrical matrix, we can write

    A = LU = LDU1 = LDLT   (2.25)

Since the terms in LT (the 'transpose' of L) can be inferred from the terms in L, it will be sufficient to compute only L (or LT), involving approximately half the work of the LU factorisation of unsymmetrical matrices.
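The LDLT idea of eq. 2.25 can be sketched directly. The Python fragment below is an illustration of the recurrences only; the book's LDLT subroutine is FORTRAN and uses a different storage scheme.

```python
def ldlt(A):
    # Factorise a symmetric A into L (unit diagonal) and a diagonal d,
    # so that A = L * diag(d) * L-transpose (eq. 2.25).
    n = len(A)
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    d = [0.0] * n
    for i in range(n):
        d[i] = A[i][i] - sum(L[i][k] ** 2 * d[k] for k in range(i))
        for j in range(i + 1, n):
            L[j][i] = (A[j][i] - sum(L[j][k] * L[i][k] * d[k]
                                     for k in range(i))) / d[i]
    return L, d

# The symmetrical matrix of eq. 2.19:
L, d = ldlt([[16.0, 4.0, 8.0], [4.0, 5.0, -4.0], [8.0, -4.0, 22.0]])
print(d)  # [16.0, 4.0, 9.0], the diagonal terms of eq. 2.24
```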
Example 2.3

Solve the equations

     3x1 - 2x2 +  x3 = 3
    -2x1 + 3x2 + 2x3 = -3
      x1 + 2x2 + 2x3 = 2

using LDLT factorisation.
Solution 2.3

Factorise the coefficient matrix into lower and upper triangular matrices, hence

    A = LU

    L = |  1      0   0 |
        | -0.667  1   0 |
        |  0.333  1.6 1 |

    U = | 3 -2     1     |
        | 0  1.667 2.667 |
        | 0  0    -2.6   |

Divide each row of U by the diagonal term:

    U1 = | 1 -0.667 0.333 |
         | 0  1     1.6   | = LT
         | 0  0     1     |

where

    D = | 3  0     0   |
        | 0  1.667 0   |
        | 0  0    -2.6 |

Hence

    U = DU1 = DLT

Forward-substitution:

    Ly = b

    |  1      0   0 | | y1 |   |  3 |
    | -0.667  1   0 | | y2 | = | -3 |
    |  0.333  1.6 1 | | y3 |   |  2 |

    y1 = 3,  y2 = -3 + 0.667(3) = -1
    y3 = 2 - 0.333(3) + 1.6 = 2.6

Back-substitution:

    DLTx = y

    | 3 -2     1     | | x1 |   |  3  |
    | 0  1.667 2.667 | | x2 | = | -1  |
    | 0  0    -2.6   | | x3 |   | 2.6 |

    x3 = 2.6/(-2.6) = -1,  x2 = (-1 + 2.667)/1.667 = 1
    x1 = (3 + 2 + 1)/3 = 2
Simple variables

N    Number of equations to be solved

PARAMETER restriction

IN ≥ N
C
C      PROGRAM 2.3   GAUSSIAN ELIMINATION
C      USING A=L*D*LT FACTORISATION   LT OVERWRITES A
C
C      ALTER NEXT LINE TO CHANGE PROBLEM SIZE
C
      PARAMETER (IN=20)
      REAL A(IN,IN),D(IN),B(IN)
      READ (5,*) N
      READ (5,*) ((A(I,J),J=1,N),I=1,N)
      READ (5,*) (B(I),I=1,N)
      WRITE(6,*) ('** GAUSSIAN ELIMINATION USING LDLT FACTORISATION **')
      WRITE(6,*) ('************** SYMMETRICAL EQUATIONS **************')
      WRITE(6,*)
      WRITE(6,*) ('COEFFICIENT MATRIX')
      CALL PRINTA(A,IN,N,N,6)
      WRITE(6,*)
      WRITE(6,*) ('RIGHT HAND SIDE VECTOR')
      CALL PRINTV(B,N,6)
      WRITE(6,*)
      CALL LDLT(A,IN,D,N)
      WRITE(6,*) ('LT MATRIX')
      DO 1 I=1,N
    1 WRITE(6,100) (A(I,J),J=1,I)
      WRITE(6,*)
      WRITE(6,*) ('DIAGONAL TERMS')
      CALL PRINTV(D,N,6)
      CALL LDLFOR(A,IN,B,N)
      DO 2 I = 1,N
      DO 2 J = 1,N
    2 A(I,J) = A(I,J)/D(I)
      CALL SUBBAC(A,IN,B,N)
      WRITE(6,*)
      WRITE(6,*) ('SOLUTION VECTOR')
      CALL PRINTV(B,N,6)
  100 FORMAT(5E12.4)
      STOP
      END
Number of equations      N
                         3
Coefficient matrix       (A(I,J), J=1,N), I=1,N
                         16.0   4.0   8.0
                          4.0   5.0  -4.0
                          8.0  -4.0  22.0
Right-hand side          B(I), I=1,N
                          4.0   2.0   5.0

Fig. 2.3(a) Input data for Program 2.3
COEFFICIENT MATRIX
 .1600E+02  .4000E+01  .8000E+01
 .4000E+01  .5000E+01 -.4000E+01
 .8000E+01 -.4000E+01  .2200E+02

LT MATRIX
 .1600E+02
 .4000E+01  .4000E+01
 .8000E+01 -.6000E+01  .9000E+01

DIAGONAL TERMS
 .1600E+02  .4000E+01  .9000E+01

SOLUTION VECTOR
-.2500E+00  .1000E+01  .5000E+00

Fig. 2.3(b) Results from Program 2.3
A symmetrical matrix A is said to be 'positive definite' if the quadratic form Q(x) is positive for any non-zero vector x. Written out in full,

    Q(x) = a11x1² + 2a12x1x2 + ... + 2a1nx1xn
                  +  a22x2² + ... + 2a2nx2xn
                  + ...
                  +  annxn²

Since A is symmetrical, each cross term may be split, for example

    2a12x1x2 = a12x1x2 + a12x2x1 = a12x1x2 + a21x2x1

so that

    Q(x) = a11x1²  + a12x1x2 + ... + a1nx1xn
         + a21x2x1 + a22x2²  + ... + a2nx2xn
         + ...
         + an1xnx1 + an2xnx2 + ... + annxn²
For A to be positive definite, the determinants

    | a11 |,   | a11  a12 |,   | a11  a12  a13 |,   ...,   | a11  ...  a1n |
               | a21  a22 |    | a21  a22  a23 |           |  :         :  |
                               | a31  a32  a33 |           | an1  ...  ann |

must all be positive.
Example 2.4

Show that the quadratic form

    {x1 x2 x3} |  1  2 -1 | | x1 |
               |  2  5 -4 | | x2 |
               | -1 -4  6 | | x3 |

is positive definite.

Solution 2.4

The determinants

    | 1 | = 1,   | 1  2 | = 1,   |  1  2 -1 |
                 | 2  5 |        |  2  5 -4 | = 1
                                 | -1 -4  6 |

are all positive, so the quadratic form is positive definite.
When the coefficient matrix A is symmetric and positive definite, a slightly different factorisation can be obtained by forcing U to be the transpose of L. This factorisation can be written

    A = LLT   (2.26)
or

    | a11 a12 a13 |   | l11  0   0  | | l11 l21 l31 |
    | a21 a22 a23 | = | l21 l22  0  | |  0  l22 l32 |   (2.27)
    | a31 a32 a33 |   | l31 l32 l33 | |  0   0  l33 |

In this case l11² = a11 and so l11 = √a11, and in each row a square root evaluation is necessary. For the A given by eq. 2.19 the Cholesky factors are

    L = | 4  0  0 |        LT = | 4  1  2 |
        | 1  2  0 |             | 0  2 -3 |   (2.28)
        | 2 -3  3 |             | 0  0  3 |

This method is programmed in the next section, which deals with the situation when A has a special structure.
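A Python sketch of the Cholesky recurrence may make the square-root pattern of eq. 2.27 concrete. This is an illustration only; the book's own implementation is the banded FORTRAN program of the next section.

```python
import math

def cholesky(A):
    # Factorise a symmetric positive definite A as L * L-transpose (eq. 2.26).
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i):
            L[i][j] = (A[i][j] - sum(L[i][k] * L[j][k]
                                     for k in range(j))) / L[j][j]
        # one square root per row, as noted in the text
        L[i][i] = math.sqrt(A[i][i] - sum(L[i][k] ** 2 for k in range(i)))
    return L

# The matrix of eq. 2.19:
L = cholesky([[16.0, 4.0, 8.0], [4.0, 5.0, -4.0], [8.0, -4.0, 22.0]])
print(L)  # the factors of eq. 2.28: rows (4,0,0), (1,2,0), (2,-3,3)
```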
Example 2.5

Solve the equations

    16x1 + 4x2 +  8x3 = 16
     4x1 + 5x2 -  4x3 = 18
     8x1 - 4x2 + 22x3 = -22

using Cholesky factorisation.

Solution 2.5

    A = LLT

    L = | 4  0  0 |
        | 1  2  0 |
        | 2 -3  3 |

Forward-substitution:

    Ly = b

    | 4  0  0 | | y1 |   |  16 |
    | 1  2  0 | | y2 | = |  18 |
    | 2 -3  3 | | y3 |   | -22 |

    y1 = 4,  y2 = (18 - 4)/2 = 7,  y3 = (-22 - 2(4) + 3(7))/3 = -3

Back-substitution:

    LTx = y

    x3 = -3/3 = -1,  x2 = (7 + 3(-1))/2 = 2,  x1 = (4 - 2 + 2)/4 = 1
2.5 Banded equations

The coefficient matrices arising in many engineering problems often have a 'banded' structure (see, for example, Chapter 8). This means that the coefficients are clustered around the 'leading diagonal' stretching from the top left-hand corner of the matrix to the bottom right, for example

    A = | a11 a12 a13  0   0   0  |
        | a21 a22 a23 a24  0   0  |
        | a31 a32 a33 a34 a35  0  |   (2.29)
        |  0  a42 a43 a44 a45 a46 |
        |  0   0  a53 a54 a55 a56 |
        |  0   0   0  a64 a65 a66 |
In this case there are never more than two non-zero coefficients to either side of the leading diagonal in any row. The 'bandwidth' of this system is said to be 5. If the coefficients are symmetrical, only the leading diagonal and two more coefficients per row need to be stored and operated on. In this case the 'half bandwidth' is said to be 2.
If we wish to store only the lower triangular factors of the symmetrical A, the coefficients of interest are

    L = | l11  0   0   0   0   0  |
        | l21 l22  0   0   0   0  |
        | l31 l32 l33  0   0   0  |   (2.30)
        |  0  l42 l43 l44  0   0  |
        |  0   0  l53 l54 l55  0  |
        |  0   0   0  l64 l65 l66 |

and it can be seen that out of 36 potential locations, only 15 are non-zero. A more economical method of storage is to keep the band in a rectangular array, by shifting the rows to obtain the structure

    LB = |  0   0  l11 |
         |  0  l21 l22 |
         | l31 l32 l33 |   (2.31)
         | l42 l43 l44 |
         | l53 l54 l55 |
         | l64 l65 l66 |
      PROGRAM P24
C
C      PROGRAM 2.4   CHOLESKY FACTORISATION
C      WITH BANDED STORAGE
C
C      ALTER NEXT LINE TO CHANGE PROBLEM SIZE
Number of equations      N
and half bandwidth       IW
                         3   2
Coefficient matrix       (LB(I,J), J=1,IW+1), I=1,N
                         0.0   0.0  16.0
                         0.0   4.0   5.0
                         8.0  -4.0  22.0
Right-hand side          B(I), I=1,N
                         4.0   2.0   5.0

Fig. 2.4(a) Input data for Program 2.4
**** CHOLESKY FACTORISATION WITH BANDED STORAGE ****
************* SYMMETRICAL EQUATIONS ****************

LT IN BAND FORM
 .0000E+00  .0000E+00  .4000E+01
 .0000E+00  .1000E+01  .2000E+01
 .2000E+01 -.3000E+01  .3000E+01

SOLUTION VECTOR
-.2500E+00  .1000E+01  .5000E+00

Fig. 2.4(b) Results from Program 2.4
Simple variables

N     Number of equations to be solved
IW    Half bandwidth of symmetrical banded coefficient matrix

Variable size arrays

LB    Lower triangular components of A stored as a rectangular band matrix
      with leading diagonal terms A(K,K) stored in LB(K,IW+1)
B     Right-hand side vector, overwritten by the solution

PARAMETER restrictions

IN ≥ N
IIW ≥ IW
The program is extremely simple given the subroutine library. The number of equations and half bandwidth are read in, followed by the lower 'half' of A in the appropriate form of eq. 2.31, and then the resulting factor is printed out. A call to CHOSUB completes the substitution phase and the results, held in B, are printed.
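The correspondence between the full matrix of eq. 2.30 and the shifted rectangular store of eq. 2.31 reduces to a simple indexing rule. The sketch below uses Python with 0-based indices (the book's FORTRAN arrays are 1-based), so the diagonal term of row i sits in column IW of that row.

```python
def band_get(LB, iw, i, j):
    # Fetch l(i, j) of the lower triangle from the rectangular band
    # store of eq. 2.31 (0-based indices, half bandwidth iw).
    if j > i or i - j > iw:
        return 0.0              # above the diagonal or outside the band
    return LB[i][iw + j - i]

# Band store of the matrix used in Fig. 2.4(a), half bandwidth 2:
LB = [[0.0, 0.0, 16.0],
      [0.0, 4.0, 5.0],
      [8.0, -4.0, 22.0]]
print(band_get(LB, 2, 2, 0))  # a(3,1) of the full matrix: 8.0
```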
2.6 Skyline storage

Even within a band, many coefficients may still be zero. A more refined strategy stores, in each row, only the coefficients between the first non-zero entry and the leading diagonal, so that the stored profile forms a variable-height 'skyline', for example

    A = | a11  0   0   0  a15  0  |
        |  0  a22 a23  0   0   0  |
        |  0  a32 a33  0  a35  0  |   (2.32)
        |  0   0   0  a44 a45  0  |
        | a51  0  a53 a54 a55 a56 |
        |  0   0   0   0  a65 a66 |
The extra information about the number of components per row (column) is kept in an integer vector called KDIAG. This could contain the number of components per row or, alternatively, a running total representing the 'address' of each leading diagonal component of the original array in A, which is the method used in this program. Thus the KDIAG associated with eqs 2.32 and 2.33 would be

    KDIAG = { 1  2  4  5  10  12 }   (2.34)
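The 'address' arithmetic implied by KDIAG can be sketched as a lookup function. The Python fragment below is an illustration only (the book's Program 2.5 does this in FORTRAN); it uses 1-based equation numbers against a 0-based Python list, and the small 3 x 3 system of Fig. 2.5(a) as a check.

```python
def skyline_get(a, kdiag, i, j):
    # Fetch A(i, j), with i >= j, from the skyline vector 'a'.
    # kdiag[i-1] is the 1-based address of the diagonal term of row i.
    loc = kdiag[i - 1] - (i - j)                 # 1-based vector position
    first = kdiag[i - 2] + 1 if i > 1 else 1     # first stored entry of row i
    if j > i or loc < first:
        return 0.0                               # outside the stored skyline
    return a[loc - 1]

# Data of Fig. 2.5(a): KDIAG = (1, 3, 6)
a = [16.0, 4.0, 5.0, 8.0, -4.0, 22.0]
kdiag = [1, 3, 6]
print(skyline_get(a, kdiag, 3, 1))  # a(3,1) = 8.0
```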
Number of equations      N
                         3
Skyline location         KDIAG(I), I=1,N
                         1  3  6
Coefficient vector       A(I), I=1,IR
                         16.0  4.0  5.0  8.0  -4.0  22.0
Right-hand side          B(I), I=1,N
                         4.0  2.0  5.0

Fig. 2.5(a) Input data for Program 2.5
DIAGONAL LOCATIONS
     1     3     6

SOLUTION VECTOR
-.2500E+00  .1000E+01  .5000E+00
Simple variables

N     The number of equations to be solved
IR    The total number of coefficients to be processed

PARAMETER restrictions

IA ≥ IR
IB ≥ N
In the program, N is first read in, followed by the 'address' vector KDIAG as described in eq. 2.34.
2.7 Pivoting

In the solution of unsymmetrical equations using conventional Gaussian elimination (Program 2.1) or LU factorisation (Program 2.2) we side-stepped the problem of what to do should a leading diagonal component of the coefficient matrix be zero to start with, or become zero during the solution process. In the next program, we illustrate how to cope with this by using a row interchange technique usually called 'pivoting'. We saw that the crucial terms or pivots lay on the leading diagonal, and so the technique involves searching for the largest (absolute) coefficient in the rows not so far processed, and moving it into the leading diagonal position.

For example, returning to eqs 2.3, it can be seen that the largest coefficients occur initially in row 2, and so this would be interchanged with row 1 to yield

    20x3 + 3x2 - 20x1 = 2
    -5x3 +  x2 + 10x1 = 1   (2.35)
     5x3 + 3x2 +  5x1 = 6
After one step of elimination (or factorisation) we would have

    20x3 + 3x2    - 20x1 = 2
          1.75x2 +  5x1 = 1.5   (2.36)
          2.25x2 + 10x1 = 5.5

The largest coefficient in rows 2 to N is now in row 3, and so this would be interchanged with row 2 to yield

    20x3 - 20x1 + 3x2    = 2
          10x1 + 2.25x2 = 5.5   (2.37)
           5x1 + 1.75x2 = 1.5

After a further step of elimination we would have

    20x3 - 20x1 + 3x2     = 2
          10x1 + 2.25x2  = 5.5   (2.38)
                 0.625x2 = -1.25

whence back-substitution gives (x1, x2, x3) = (1.0, -2.0, 1.4) as before.
The process just described is actually called 'partial pivoting'. An extension could be made to 'total pivoting' by extending the search for the maximum modulus element to both rows and columns.
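Partial pivoting amounts to one extra search-and-swap per elimination step. The Python sketch below illustrates the idea (the book's Program 2.6 is FORTRAN and, as the worked example shows, also tracks the reordering of rows; this sketch swaps rows of the working arrays only and pivots on the current column).

```python
def gauss_solve_pivoting(A, b):
    # Gaussian elimination with partial pivoting: at each step the row
    # holding the largest remaining pivot-column entry (in absolute value)
    # is swapped into the pivot position.
    n = len(b)
    for k in range(n - 1):
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        A[k], A[p] = A[p], A[k]
        b[k], b[p] = b[p], b[k]
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    x = [0.0] * n
    for i in reversed(range(n)):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x

# The system of eqs 2.3, solved safely even though row order changes:
A = [[10.0, 1.0, -5.0], [-20.0, 3.0, 20.0], [5.0, 3.0, 5.0]]
print(gauss_solve_pivoting(A, [1.0, 2.0, 6.0]))  # close to [1.0, -2.0, 1.4]
```

The same routine also copes with a zero leading pivot, which Program 2.1 would have to reject.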
      CALL PRINTV(X,N,6)
 1000 FORMAT (10I5)
      STOP
      END

Number of equations      N
                         3
Coefficient matrix       (A(I,J), J=1,N), I=1,N
                         10.0   1.0  -5.0
                        -20.0   3.0  20.0
                          5.0   3.0   5.0
Right-hand side          B(I), I=1,N
                          1.0   2.0   6.0

Fig. 2.6(a) Input data for Program 2.6
****** LU FACTORISATION WITH PIVOTING ******

COEFFICIENT MATRIX
 .1000E+02  .1000E+01 -.5000E+01
-.2000E+02  .3000E+01  .2000E+02
 .5000E+01  .3000E+01  .5000E+01

RIGHT HAND SIDE VECTOR
 .1000E+01  .2000E+01  .6000E+01

REORDERED ROW NUMBERS
     2     3     1

SOLUTION VECTOR
 .1000E+01 -.2000E+01  .1400E+01

Fig. 2.6(b) Results from Program 2.6
Input data are shown in Fig. 2.6(a) and output results in Fig. 2.6(b). The nomenclature used is as follows:

Simple variables

N     The number of equations to be solved

Variable length arrays

A     The matrix of coefficients (overwritten by the factors L and U)
B     The right-hand side
X     The solution
ROW   The new row ordering

PARAMETER restriction

IN ≥ N
2.7.1 Ill-conditioning

Despite our best efforts to select optimum pivots, a set of linear equations may not be solvable accurately by any of the methods we have just described, especially when 'hand' calculation techniques are employed. When this is so, the set of equations is said to be 'ill-conditioned'. Fortunately, even modern electronic 'hand' calculators are capable of working to many decimal places of accuracy, and so the conditioning of sets of equations is not of such great importance as it was when 'hand' calculations implied only a few decimal places of accuracy.
A very well-known example of ill-conditioned equations arises in the form of the 'Hilbert matrix' obtained from polynomial curve fitting. The set of equations so derived takes the form

    | 1    1/2  1/3  1/4  1/5 | | x1 |   | b1 |
    | 1/2  1/3  1/4  1/5  1/6 | | x2 |   | b2 |
    | 1/3  1/4  1/5  1/6  1/7 | | x3 | = | b3 |
    | 1/4  1/5  1/6  1/7  1/8 | | x4 |   | b4 |
    | 1/5  1/6  1/7  1/8  1/9 | | x5 |   | b5 |

In Table 2.1, numerical solutions rounded off to 4, 5, 6, 7 and 8 decimal places are compared with the true solution. It can be seen that something like 8 decimal place
accuracy is required before a reasonable solution is obtained (see Table 2.1).

Suppose now that in the set of equations

    | 16  4   8 | | x1 |
    |  4  5  -4 | | x2 | = b   (2.39)
    |  8 -4  22 | | x3 |

we wish the solution to give x2 = 5. One way of achieving this is to add a very large number, say 10^10, to the diagonal coefficient a22, so that the second equation becomes

    4x1 + (5 + 10^10)x2 - 4x3 = b2   (2.40)

and the term in x2 swamps the other two terms. To get the desired result of x2 = 5, it is clear that on the right-hand side we should put the value (5 + 10^10) × 5.
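The 'big number' device can be checked numerically. The Python sketch below (an illustration, not the book's skyline program) augments a22 of eqs 2.39, sets the corresponding right-hand side term, and solves by plain Gaussian elimination; the assumed b for the unconstrained freedoms is the (4, 2, 5) vector used elsewhere in this chapter.

```python
def solve(A, b):
    # Plain Gaussian elimination with back-substitution; helper for
    # this sketch only (no pivoting).
    n = len(b)
    for k in range(n - 1):
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    x = [0.0] * n
    for i in reversed(range(n)):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x

A = [[16.0, 4.0, 8.0],
     [4.0, 5.0, -4.0],
     [8.0, -4.0, 22.0]]
b = [4.0, 2.0, 5.0]
big = 1.0e10
A[1][1] += big           # augment the diagonal of the prescribed freedom
b[1] = A[1][1] * 5.0     # right-hand side = augmented diagonal x desired value
x = solve(A, b)
print(round(x[1], 6))    # 5.0: the prescribed value is recovered
```

The remaining unknowns then satisfy the other equations with x2 fixed at 5.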
      READ (5,*) N
      READ (5,*) (KDIAG(I),I=1,N)
      IR = KDIAG(N)
      READ (5,*) (A(I),I=1,IR)
      READ (5,*) (B(I),I=1,N)
      READ (5,*) NFIX
      READ (5,*) (NO(I),VAL(I),I=1,NFIX)
      WRITE(6,*) ('** CHOLESKY FACTORISATION WITH SKYLINE STORAGE **')
      WRITE(6,*) ('************* SYMMETRICAL EQUATIONS *************')
      WRITE(6,*) ('************* PRESCRIBED VARIABLES **************')
      WRITE(6,*)
      WRITE(6,*) ('COEFFICIENT VECTOR')
      CALL PRINTV(A,IR,6)
      WRITE(6,*)
      WRITE(6,*) ('DIAGONAL LOCATIONS')
      WRITE(6,100) (KDIAG(I),I=1,N)
      WRITE(6,*)
      WRITE(6,*) ('RIGHT HAND SIDE VECTOR')
      CALL PRINTV(B,N,6)
      WRITE(6,*)
      WRITE(6,*) ('FREEDOM NUMBER AND PRESCRIBED VALUE')
      DO 1 I=1,NFIX
    1 WRITE(6,200) NO(I),VAL(I)
      WRITE(6,*)
      DO 2 I=1,NFIX
      A(KDIAG(NO(I))) = A(KDIAG(NO(I))) + 1.E10
    2 B(NO(I)) = A(KDIAG(NO(I)))*VAL(I)
      CALL SKYFAC(A,N,KDIAG)
      CALL SKYSUB(A,B,N,KDIAG)
      WRITE(6,*) ('SOLUTION VECTOR')
      CALL PRINTV(B,N,6)
  100 FORMAT(10I6)
  200 FORMAT(I5,E12.4)
      STOP
      END
Number of equations      N
                         3
Skyline locations        KDIAG(I), I=1,N
                         1  3  6

The right-hand side entry for the prescribed freedom is set to the augmented diagonal multiplied by the desired x2, namely 5.0. The results are shown in Fig. 2.7(b).
2.8 Iterative methods

In the previous sections of this chapter, 'direct' solution methods for systems of linear equations have been described. By 'direct' we mean that the solution method leads to the answer in a fixed number of arithmetic operations, so that, subject to rounding errors, the result is as 'exact' as the computer hardware permits.

The remaining sections deal with 'iterative' methods of solution. These begin with a first guess at the answer, which is then successively corrected by some technique, and the various methods will be found to converge at different rates to the true solution.
and then make a guess at (x1, x2) on the right-hand side to see if it fits the left-hand side. For this purpose it would be more convenient to have the simple variables x1 and x2 on the left-hand side, so we can write

    x1 = 2   - 0.5x2
    x2 = 2.5 - 0.5x1   (2.43)

Now we make a guess at (x1, x2) on the right-hand side, say (1.0, 1.0). Equations 2.43 then yield

    { x1 }   { 1.5 }
    { x2 } = { 2.0 }   (2.44)

and, repeating the process with these new values,

    { x1 }   { 1.0  }
    { x2 } = { 1.75 }   (2.45)
Iteration number     0     1     2      3      4

   x1               1.0   1.5   1.0    1.125  1.0
   x2               1.0   2.0   1.75   2.0    1.9375
In more general terms, after scaling each equation by its leading diagonal coefficient so that the diagonal terms are unity, a set of three equations can be written

    |  1  a12 a13 | | x1 |   | b1 |
    | a21  1  a23 | | x2 | = | b2 |   (2.47)
    | a31 a32  1  | | x3 |   | b3 |

The coefficient matrix may now be split into

    A = I - L - U   (2.48)

in which

    L = |  0    0   0 |
        | -a21  0   0 |   (2.49)
        | -a31 -a32 0 |

and

    U = | 0 -a12 -a13 |
        | 0   0  -a23 |   (2.50)
        | 0   0    0  |
Example 2.6
l6x1+4x2* 8x.:4
4x1+5x2- 4xt:2
8 x r- 4 x 2 * 2 2 x t : 5
by simple iteration.
Solution 2.6

    x(k+1) = b + (L + U)x(k)

hence, after scaling by the diagonal terms,

    { x1 }(k+1)   { 0.25   }   |  0      -0.25   -0.5 | { x1 }(k)
    { x2 }      = { 0.4    } + | -0.8     0       0.8 | { x2 }
    { x3 }        { 0.2273 }   | -0.3636  0.1818  0   | { x3 }

Let

    { x1 }(0)   { 1 }
    { x2 }    = { 1 }
    { x3 }      { 1 }

hence

    { x1 }(1)   { -0.5    }
    { x2 }    = {  0.4    }
    { x3 }      {  0.0455 }

and

    { x1 }(2)   { 0.1273 }                 { -0.25 }
    { x2 }    = { 0.8364 }   leading to    {  1    }   (after many iterations)
    { x3 }      { 0.4818 }                 {  0.5  }
In this case, convergence is very slow (see Program 2.8). To operate this scheme on a computer, we can see that the library modules we shall need are merely a matrix-vector multiplication routine to compute (L + U)x and a vector addition to add b to the result of the multiply. A check on convergence would also be helpful. This method is called 'Jacobi iteration'.
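The whole Jacobi scheme can be sketched compactly. The Python fragment below is an illustration of the arithmetic of Program 2.8, not its FORTRAN listing; the convergence test here compares the largest change against the largest solution component, in the spirit of the tolerance check described for the program.

```python
def jacobi(A, b, x0, tol=1e-5, max_iters=100):
    # Jacobi iteration: every new component is computed from the old vector.
    n = len(b)
    x = list(x0)
    for iters in range(1, max_iters + 1):
        xnew = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i))
                / A[i][i] for i in range(n)]
        change = max(abs(xnew[i] - x[i]) for i in range(n))
        x = xnew
        if change <= tol * max(abs(v) for v in x):
            return x, iters
    return x, max_iters

A = [[16.0, 4.0, 8.0], [4.0, 5.0, -4.0], [8.0, -4.0, 22.0]]
b = [4.0, 2.0, 5.0]
x, iters = jacobi(A, b, [1.0, 1.0, 1.0])
print(x)  # converges towards (-0.25, 1.0, 0.5)
```

The first iterate reproduces the (-0.5, 0.4, 0.0455) vector of Solution 2.6.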
    16x1 + 4x2 +  8x3 = 4
     4x1 + 5x2 -  4x3 = 2   (2.54)
     8x1 - 4x2 + 22x3 = 5

The convergence criterion adopted is a simple one, namely that the change in the largest component of the solution vector shall not be greater than a small proportion of that component, called the 'tolerance'.
      PROGRAM P28

Number of equations      N
                         3
FIRST FEW ITERATIONS
-.5000E+00  .4000E+00  .4545E-01
 .1273E+00  .8364E+00  .4818E+00
-.2000E+00  .6836E+00  .3331E+00
-.8744E-01  .8264E+00  .4243E+00
-.1688E+00  .8094E+00  .4093E+00

SOLUTION VECTOR
-.2500E+00  .1000E+01  .5000E+00

Fig. 2.8(b) Results from Program 2.8
The initial guessed solution is (x1, x2, x3) = (1.0, 1.0, 1.0). The program data are listed in Fig. 2.8(a). Apart from integer counters and temporary storage, the variables used are

Simple variables

N       Number of equations to be solved
TOL     Convergence tolerance, defined above
ITERS   Number of iterations carried out
ITS     Maximum number of iterations

PARAMETER restrictions

IN ≥ N

The number of equations to be solved, the coefficient matrix, the right-hand side vector and the initial vector are read in, followed by the convergence tolerance and an upper limit to the number of iterations.

The program then scales the original coefficients and right-hand sides by dividing by the diagonal coefficient in each row (called DIAG in the program). The library routine NULL is used to fill the matrix LPU with zeros, and then LPU, as defined in eqs 2.53, is formed.

The guessed solution, X, is input as data and the iteration loop entered. As called for by eqs 2.53, the library routine MVMULT multiplies LPU by the current solution X and puts the temporary result in XNEW. This can then be added to the right-hand side B using library routine VECADD, to give a new solution still called XNEW. This can be compared with X by using the convergence library subroutine CHECON. If XNEW agrees with X to the specified tolerance, ICON is set to 1 and the iterations terminated. Note that the CHECON subroutine also updates X to XNEW.

The program prints out the number of iterations taken and the values of the solution vector for the first 5 iterations and at convergence. This output is shown in Fig. 2.8(b), illustrating that 51 iterations are required to give a result accurate to about 4 places of decimals using this simple technique.
In Jacobi's method, all of the components of x(k+1) (called XNEW in Program 2.8) are evaluated using all the components of x(k) (X in the program). Thus the new information is obtained entirely in terms of the old.

However, after the first row of eqs 2.53 has been evaluated, there is a new x1(k+1) available which is presumably a better approximation to the solution than x1(k). In the Gauss-Seidel technique, the new value x1(k+1) is immediately substituted for x1(k) in the 'old' solution x. After evaluation of row 2, x2(k+1) is substituted for x2(k) and so on. The convergence of this process operating on eqs 2.41 is
Iteration number     0     1      2       3        4

   x1               1.0   1.5    1.125   1.0313   1.0078
   x2               1.0   1.75   1.9375  1.9844   1.9961

In matrix terms the Gauss-Seidel process may be written

    |  1   0   0 | { x1 }(k+1)   { b1 }   | 0 -a12 -a13 | { x1 }(k)
    | a21  1   0 | { x2 }      = { b2 } + | 0   0  -a23 | { x2 }   (2.58)
    | a31 a32  1 | { x3 }        { b3 }   | 0   0    0  | { x3 }
Example 2.7

Solve the equations

    16x1 + 4x2 +  8x3 = 4
     4x1 + 5x2 -  4x3 = 2
     8x1 - 4x2 + 22x3 = 5

by Gauss-Seidel iteration.

Solution 2.7

Scale each row by its diagonal term and apply eq. 2.58, starting from

    { x1 }(0)   { 1 }
    { x2 }    = { 1 }
    { x3 }      { 1 }
Hence

    |  1       0       0 | { x1 }(1)   { -0.5    }
    |  0.8     1       0 | { x2 }    = {  1.2    }
    |  0.3636 -0.1818  1 | { x3 }      {  0.2273 }

and by forward-substitution

    { x1 }(1)   { -0.5 }
    { x2 }    = {  1.6 }
    { x3 }      {  0.7 }

Similarly

    |  1       0       0 | { x1 }(2)   { -0.5    }
    |  0.8     1       0 | { x2 }    = {  0.96   }
    |  0.3636 -0.1818  1 | { x3 }      {  0.2273 }

and also by forward-substitution

    { x1 }(2)   { -0.5   }                { -0.25 }
    { x2 }    = {  1.36  }   leading to   {  1    }   (after many iterations)
    { x3 }      {  0.656 }                {  0.5  }
Convergence is still slow (see Program 2.9), but not as slow as in Jacobi's method.
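The immediate-substitution idea is a one-line change from Jacobi: the sweep updates the solution vector in place. The Python sketch below illustrates what Program 2.9 computes (the program itself, listed next, is FORTRAN and works through forward-substitution on the scaled system).

```python
def gauss_seidel(A, b, x0, tol=1e-5, max_iters=100):
    # Gauss-Seidel iteration: each new component immediately replaces
    # the old one within the current sweep.
    n = len(b)
    x = list(x0)
    for iters in range(1, max_iters + 1):
        xold = list(x)
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]
        if max(abs(x[i] - xold[i]) for i in range(n)) \
                <= tol * max(abs(v) for v in x):
            return x, iters
    return x, max_iters

A = [[16.0, 4.0, 8.0], [4.0, 5.0, -4.0], [8.0, -4.0, 22.0]]
b = [4.0, 2.0, 5.0]
x, iters = gauss_seidel(A, b, [1.0, 1.0, 1.0])
print(x)  # converges towards (-0.25, 1.0, 0.5)
```

The first sweep reproduces the (-0.5, 1.6, 0.7) vector of Solution 2.7.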
      PROGRAM P29
C
C      PROGRAM 2.9   GAUSS-SEIDEL ITERATION
C
C      ALTER NEXT LINE TO CHANGE PROBLEM SIZE
C
      PARAMETER (IN=20)
C
      REAL A(IN,IN),U(IN,IN),B(IN),X(IN),XNEW(IN)
C
      READ (5,*) N,((A(I,J),J=1,N),I=1,N)
      READ (5,*) (B(I),I=1,N)
      READ (5,*) (X(I),I=1,N)
      READ (5,*) TOL,ITS
      WRITE(6,*) ('COEFFICIENT MATRIX')
      CALL PRINTA(A,IN,N,N,6)
      WRITE(6,*)
      WRITE(6,*) ('RIGHT HAND SIDE VECTOR')
      CALL PRINTV(B,N,6)
      WRITE(6,*)
      DO 1 I = 1,N
      DIAG = A(I,I)
      DO 2 J = 1,N
    2 A(I,J) = A(I,J)/DIAG
    1 B(I) = B(I)/DIAG
      CALL NULL(U,IN,N,N)
      DO 3 I = 1,N
      DO 3 J = I + 1,N
      U(I,J) = -A(I,J)
    3 A(I,J) = 0.0
      WRITE(6,*) ('FIRST FEW ITERATIONS')
      ITERS = 0
    4 ITERS = ITERS + 1
      CALL MVMULT(U,IN,X,N,N,XNEW)
      CALL VECADD(B,XNEW,XNEW,N)
      CALL SUBFOR(A,IN,XNEW,N)
      CALL CHECON(XNEW,X,N,TOL,ICON)
      IF (ITERS.LE.5) CALL PRINTV(X,N,6)
      IF (ICON.EQ.0.AND.ITERS.LT.ITS) GO TO 4
      WRITE(6,*)
      WRITE(6,*) ('ITERATIONS TO CONVERGENCE')
      WRITE(6,*) ITERS
      WRITE(6,*)
      WRITE(6,*) ('SOLUTION VECTOR')
      CALL PRINTV(X,N,6)
      STOP
      END
Number of equations      N
                         3
Coefficient matrix       (A(I,J), J=1,N), I=1,N
                         16.0   4.0   8.0
                          4.0   5.0  -4.0
                          8.0  -4.0  22.0
Right-hand side          B(I), I=1,N
                          4.0   2.0   5.0
Initial guess of         X(I), I=1,N
solution                  1.0   1.0   1.0
Tolerance                TOL
                         1.E-5
Iteration limit          ITS
                         100

Fig. 2.9(a) Input data for Program 2.9
COEFFICIENT MATRIX
 .1600E+02  .4000E+01  .8000E+01
 .4000E+01  .5000E+01 -.4000E+01
 .8000E+01 -.4000E+01  .2200E+02

ITERATIONS TO CONVERGENCE
    30

SOLUTION VECTOR
-.2500E+00  .1000E+01  .5000E+00

Fig. 2.9(b) Results from Program 2.9
2.9.4 Successive overrelaxation

The Gauss-Seidel correction can be scaled by a factor ω, giving

    x(k+1) = x(k) + ω(x̄(k+1) - x(k))   (2.59)

where x̄(k+1) is the value a Gauss-Seidel step would deliver. For the Gauss-Seidel method ω = 1, but in general 1 < ω < 2; thus larger corrections than those of Gauss-Seidel are made. Rearranging eq. 2.59,

    x(k+1) = (1 - ω)x(k) + ωx̄(k+1)   (2.61)

so that, after the same diagonal scaling as before, the iteration can be arranged as eqs 2.63, with (I + ωL) operating on x(k+1) on the left-hand side and [(1 - ω)I + ωU] operating on x(k) on the right.
There is one extra input quantity, OMEGA, which is the overrelaxation scalar, but no other major changes. First A and b are scaled as usual and b given the multiple ω as required by eqs 2.63. Using library routine MSMULT, A is then multiplied by the scalar ω and U and A organised as before. In order to end up with (I + ωL) and [(1 - ω)I + ωU] on the left- and right-hand sides of eqs 2.63, the diagonals of A and of U are replaced by 1.0 and 1.0 - ω respectively.
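In component form the overrelaxed sweep of eq. 2.59 is again a small change to Gauss-Seidel. The Python sketch below illustrates the idea (it is not Program 2.10, which works in matrix terms in FORTRAN as just described).

```python
def sor(A, b, x0, omega, tol=1e-5, max_iters=100):
    # Successive overrelaxation (eq. 2.59): each Gauss-Seidel value is
    # blended with the previous value through the scalar omega.
    n = len(b)
    x = list(x0)
    for iters in range(1, max_iters + 1):
        xold = list(x)
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            gs = (b[i] - s) / A[i][i]          # the Gauss-Seidel value
            x[i] = (1.0 - omega) * x[i] + omega * gs
        if max(abs(x[i] - xold[i]) for i in range(n)) \
                <= tol * max(abs(v) for v in x):
            return x, iters
    return x, max_iters

A = [[16.0, 4.0, 8.0], [4.0, 5.0, -4.0], [8.0, -4.0, 22.0]]
b = [4.0, 2.0, 5.0]
x, iters = sor(A, b, [1.0, 1.0, 1.0], 1.4)
print(x)  # converges towards (-0.25, 1.0, 0.5)
```

With omega = 1.0 the routine reduces exactly to Gauss-Seidel.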
      PROGRAM P210

Number of equations      N
                         3
Coefficient matrix       (A(I,J), J=1,N), I=1,N
                         16.0   4.0   8.0
                          4.0   5.0  -4.0
                          8.0  -4.0  22.0
Right-hand side          B(I), I=1,N
                          4.0   2.0   5.0
Initial guess of         X(I), I=1,N
solution                  1.0   1.0   1.0
Tolerance                TOL
                         1.E-5
Iteration limit          ITS
                         100
Overrelaxation scalar    OMEGA
                         1.5

Fig. 2.10(a) Input data for Program 2.10
********** SUCCESSIVE OVERRELAXATION **********

COEFFICIENT MATRIX
 .1600E+02  .4000E+01  .8000E+01
 .4000E+01  .5000E+01 -.4000E+01
 .8000E+01 -.4000E+01  .2200E+02

ITERATIONS TO CONVERGENCE
    18

SOLUTION VECTOR
-.2500E+00  .1000E+01  .5000E+00

Fig. 2.10(b) Results from Program 2.10
Iteration proceeds using exactly the same loop as for Gauss-Seidel. In the case of the input shown in Fig. 2.10(a), the number of iterations to convergence has dropped to 18 (see Fig. 2.10(b)). Figure 2.11 shows the variation of iteration count with the overrelaxation scalar ω. For these equations the optimum ω is about 1.4, but in general this will be difficult to predict.
Fig. 2.11 Number of iterations to convergence as a function of the overrelaxation scalar ω (Program 2.10)
For any trial solution x(k), the error or 'residual' will clearly be expressible in terms of

    r(k) = b - Ax(k)   (2.64)

The steepest descent process then proceeds by the steps

    u(k) = A r(k)                          (a)
    α(k) = (r(k) · r(k))/(r(k) · u(k))     (b)   (2.65)
    x(k+1) = x(k) + α(k) r(k)              (c)
    r(k+1) = r(k) - α(k) u(k)              (d)
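Steps (a) to (d) of eqs 2.65 translate almost line for line into code. The Python sketch below illustrates the same loop that Program 2.11 (FORTRAN, listed next) performs; A is assumed symmetric positive definite, as the method requires.

```python
def steepest_descent(A, b, x0, tol=1e-5, max_iters=500):
    # Steepest descent (eqs 2.64-2.65) for symmetric positive definite A.
    n = len(b)
    x = list(x0)
    r = [b[i] - sum(A[i][j] * x[j] for j in range(n))
         for i in range(n)]                               # eq. 2.64
    for iters in range(1, max_iters + 1):
        u = [sum(A[i][j] * r[j] for j in range(n))
             for i in range(n)]                           # step (a)
        alpha = sum(ri * ri for ri in r) / \
                sum(r[i] * u[i] for i in range(n))        # step (b)
        xnew = [x[i] + alpha * r[i] for i in range(n)]    # step (c)
        if max(abs(xnew[i] - x[i]) for i in range(n)) \
                <= tol * max(abs(v) for v in xnew):
            return xnew, iters
        x = xnew
        r = [r[i] - alpha * u[i] for i in range(n)]       # step (d)
    return x, max_iters

A = [[16.0, 4.0, 8.0], [4.0, 5.0, -4.0], [8.0, -4.0, 22.0]]
b = [4.0, 2.0, 5.0]
x, iters = steepest_descent(A, b, [1.0, 1.0, 1.0])
print(x)  # converges towards (-0.25, 1.0, 0.5)
```

Note that step (d) updates the residual without re-forming b - Ax, saving a matrix-vector product per iteration.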
      PROGRAM P211
C
C      PROGRAM 2.11   STEEPEST DESCENT
C
C      ALTER NEXT LINE TO CHANGE PROBLEM SIZE
C
      PARAMETER (IN=20)
C
      REAL A(IN,IN),B(IN),X(IN),R(IN),U(IN),TEMP(IN),XNEW(IN)
C
      READ (5,*) N,((A(I,J),J=1,N),I=1,N)
      READ (5,*) (B(I),I=1,N)
      READ (5,*) (X(I),I=1,N)
      READ (5,*) TOL,ITS
      WRITE(6,*) ('*************** STEEPEST DESCENT ****************')
      WRITE(6,*)
      WRITE(6,*) ('COEFFICIENT MATRIX')
      CALL PRINTA(A,IN,N,N,6)
      WRITE(6,*)
      WRITE(6,*) ('RIGHT HAND SIDE VECTOR')
      CALL PRINTV(B,N,6)
      WRITE(6,*)
      WRITE(6,*) ('STARTING VECTOR')
      CALL PRINTV(X,N,6)
      WRITE(6,*)
      CALL MVMULT(A,IN,X,N,N,TEMP)
      CALL VECSUB(B,TEMP,R,N)
      WRITE(6,*) ('FIRST FEW ITERATIONS')
      ITERS = 0
    1 ITERS = ITERS + 1
      CALL MVMULT(A,IN,R,N,N,U)
      CALL VDOTV(R,R,UP,N)
      CALL VDOTV(R,U,DOWN,N)
      ALPHA = UP/DOWN
      CALL VECCOP(R,TEMP,N)
      CALL VSMULT(TEMP,ALPHA,N)
      CALL VECADD(X,TEMP,XNEW,N)
      CALL VSMULT(U,ALPHA,N)
      CALL VECSUB(R,U,R,N)
      CALL CHECON(XNEW,X,N,TOL,ICON)
      IF (ITERS.LE.5) CALL PRINTV(X,N,6)
      IF (ICON.EQ.0.AND.ITERS.LT.ITS) GO TO 1
      WRITE(6,*)
      WRITE(6,*) ('ITERATIONS TO CONVERGENCE')
      WRITE(6,*) ITERS
      WRITE(6,*)
      WRITE(6,*) ('SOLUTION VECTOR')
      CALL PRINTV(X,N,6)
      STOP
      END
Number of equations          N
                             3

*************** STEEPEST DESCENT ***************

COEFFICIENT MATRIX
 .1600E+02  .4000E+01  .8000E+01
 .4000E+01  .5000E+01 -.4000E+01
 .8000E+01 -.4000E+01  .2200E+02

STARTING VECTOR
 .1000E+01  .1000E+01  .1000E+01

ITERATIONS TO CONVERGENCE
61

SOLUTION VECTOR
-.2500E+00  .9999E+00  .5000E+00

Fig. 2.12(b) Results from Program 2.11
The input data consist of the number of equations to be solved (N), the equation left- and right-hand sides (A and B) and a starting guess to compute the initial value of r in eq. 2.64. The iteration tolerance (TOL) and limit (ITS) complete the data.
The operations involved in eq. 2.64 are first performed, namely the matrix-vector multiply Ax⁰ and its subtraction from b to form R. The iterations in eqs 2.65 can then proceed, involving MVMULT (step (a)) and two dot products giving α (ALPHA in step (b)). Vector R is temporarily copied into TEMP and step (c) completed by VSMULT and VECADD. It remains only to complete step (d) by VSMULT and VECSUB, to check the tolerance on X using CHECON, and to continue the iteration or stop if convergence has been achieved. The results are shown in Fig. 2.12(b), where it can be seen that 61 iterations are required to achieve the desired accuracy.
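Steps (a)-(d) of eqs 2.65 translate almost line for line into other languages. The following Python/NumPy sketch is an illustration, not the book's Fortran; its change-based convergence test approximates, but is not identical to, CHECON:

```python
import numpy as np

def steepest_descent(A, b, x0, tol=1e-5, max_its=1000):
    """Steepest descent for a symmetric positive definite A,
    following steps (a)-(d) of eqs 2.65."""
    x = x0.astype(float)
    r = b - A @ x                        # initial residual, eq. 2.64
    for its in range(1, max_its + 1):
        u = A @ r                        # (a)
        alpha = (r @ r) / (r @ u)        # (b)
        x_new = x + alpha * r            # (c)
        r = r - alpha * u                # (d)
        # stop when the relative change in x is below the tolerance
        if np.max(np.abs(x_new - x)) <= tol * np.max(np.abs(x_new)):
            return x_new, its
        x = x_new
    return x, max_its

A = np.array([[16.0, 4.0, 8.0], [4.0, 5.0, -4.0], [8.0, -4.0, 22.0]])
b = np.array([4.0, 2.0, 5.0])
x, its = steepest_descent(A, b, np.ones(3))
print(x, its)   # x close to [-0.25, 1.0, 0.5]; needs many more iterations than SOR
```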
The results obtained by running Program 2.11 show that the method of steepest descent is not competitive with the other iteration methods tried so far. However, it can be radically improved if the descent vectors are made mutually 'conjugate' with respect to A, that is, if the descent vectors p satisfy (p^j)ᵀA p^k = 0 for j ≠ k. Starting with p⁰ = r⁰ = b - Ax⁰, the conjugate gradient iteration proceeds by the steps

   u^k = A p^k                                (a)
   α_k = (r^k)ᵀr^k / (p^k)ᵀu^k                (b)
   x^(k+1) = x^k + α_k p^k                    (c)
   r^(k+1) = r^k - α_k u^k                    (d)
   β_k = (r^(k+1))ᵀr^(k+1) / (r^k)ᵀr^k        (e)
   p^(k+1) = r^(k+1) + β_k p^k                (f)
      PROGRAM P212
C
C      PROGRAM 2.12   CONJUGATE GRADIENTS
C
C      ALTER NEXT LINE TO CHANGE PROBLEM SIZE
C
      PARAMETER (IN=20)
C
      REAL A(IN,IN),B(IN),X(IN),R(IN),U(IN),TEMP(IN),P(IN),XNEW(IN)
C
      READ (5,*) N,((A(I,J),J=1,N),I=1,N)
      READ (5,*) (B(I),I=1,N)
      READ (5,*) (X(I),I=1,N)
      READ (5,*) TOL,ITS
      WRITE (6,*) ('*************** CONJUGATE GRADIENTS ***************')
      WRITE (6,*)
      WRITE (6,*) ('COEFFICIENT MATRIX')
      CALL PRINTA(A,IN,N,N,6)
      WRITE (6,*)
      WRITE (6,*) ('RIGHT HAND SIDE VECTOR')
      CALL PRINTV(B,N,6)
      WRITE (6,*)
      WRITE (6,*) ('STARTING VECTOR')
      CALL PRINTV(X,N,6)
      WRITE (6,*)
      CALL MVMULT(A,IN,X,N,N,TEMP)
      CALL VECSUB(B,TEMP,R,N)
      CALL VECCOP(R,P,N)
      WRITE (6,*) ('FIRST FEW ITERATIONS')
      ITERS = 0
    1 ITERS = ITERS + 1
      CALL MVMULT(A,IN,P,N,N,U)
      CALL VDOTV(R,R,UP,N)
      CALL VDOTV(P,U,DOWN,N)
      ALPHA = UP/DOWN
      CALL VECCOP(P,TEMP,N)
      CALL VSMULT(TEMP,ALPHA,N)
      CALL VECADD(X,TEMP,XNEW,N)
      CALL VSMULT(U,ALPHA,N)
      CALL VECSUB(R,U,R,N)
      CALL VDOTV(R,R,UP2,N)
      BETA = UP2/UP
      CALL VSMULT(P,BETA,N)
      CALL VECADD(R,P,P,N)
      CALL CHECON(XNEW,X,N,TOL,ICON)
      IF (ITERS.LE.5) CALL PRINTV(X,N,6)
      IF (ICON.EQ.0.AND.ITERS.LT.ITS) GO TO 1
      WRITE (6,*)
      WRITE (6,*) ('ITERATIONS TO CONVERGENCE')
      WRITE (6,*) ITERS
      WRITE (6,*)
      WRITE (6,*) ('SOLUTION VECTOR')
      CALL PRINTV(X,N,6)
      STOP
      END
Number of equations          N
                             3
Coefficient matrix           (A(I,J),J=1,N),I=1,N
                             16.0   4.0   8.0
                              4.0   5.0  -4.0
                              8.0  -4.0  22.0
Right-hand side              B(I),I=1,N
                             4.0   2.0   5.0
Initial guess of solution    X(I),I=1,N
                             1.0   1.0   1.0
Tolerance                    TOL
                             1.E-5
Iteration limit              ITS
                             100

Fig. 2.13(a) Input data for Program 2.12
*************** CONJUGATE GRADIENTS ***************

COEFFICIENT MATRIX
 .1600E+02  .4000E+01  .8000E+01
 .4000E+01  .5000E+01 -.4000E+01
 .8000E+01 -.4000E+01  .2200E+02

STARTING VECTOR
 .1000E+01  .1000E+01  .1000E+01

ITERATIONS TO CONVERGENCE
4

SOLUTION VECTOR
-.2500E+00  .1000E+01  .5000E+00

Fig. 2.13(b) Results from Program 2.12
When the program is run, the results are as shown in Fig. 2.13(b). The iteration count has dropped to 4, making this the most successful method so far. In fact, theoretically, in perfect arithmetic, the method would converge in N (3 in this case) steps.
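The conjugate gradient loop of Program 2.12 can be sketched in Python/NumPy as follows (an illustration, not the book's Fortran; the change-based convergence test stands in for CHECON, and an extra zero-residual guard avoids a division by zero that exact convergence would otherwise cause):

```python
import numpy as np

def conjugate_gradients(A, b, x0, tol=1e-5, max_its=100):
    """Conjugate gradients for symmetric positive definite A,
    steps (a)-(f), starting with p0 = r0."""
    x = x0.astype(float)
    r = b - A @ x          # initial residual
    p = r.copy()           # first descent direction
    rr = r @ r
    for its in range(1, max_its + 1):
        u = A @ p                       # (a)
        alpha = rr / (p @ u)            # (b)
        x_new = x + alpha * p           # (c)
        r = r - alpha * u               # (d)
        rr_new = r @ r
        if np.max(np.abs(x_new - x)) <= tol * np.max(np.abs(x_new)) \
                or rr_new == 0.0:
            return x_new, its
        beta = rr_new / rr              # (e)
        p = r + beta * p                # (f)
        rr = rr_new
        x = x_new
    return x, max_its

A = np.array([[16.0, 4.0, 8.0], [4.0, 5.0, -4.0], [8.0, -4.0, 22.0]])
b = np.array([4.0, 2.0, 5.0])
x, its = conjugate_gradients(A, b, np.ones(3))
print(x, its)   # x close to [-0.25, 1.0, 0.5] in only a few iterations
```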
2.10.3 Convergence of iteration methods
Figure 2.14 illustrates how the five iterative methods described in this chapter converge on the solution (x1, x2) = (1.0, 2.0) of eq. 2.41. Also shown are contours of error in the solution, defined in terms of the errors in x1 and x2, where the measure of total error is √(e(x1)² + e(x2)²). The methods with poorest convergence properties (Jacobi and steepest descent) are seen to be furthest from normal to the error contours in the vicinity of the true solution.
2.10.4 Preconditioning
   Ax = b   (2.68)
Fig. 2.14 Convergence of iterative methods (SOR shown with ω = 1.2; 20% error contour marked)
2.12 Exercises

1  Solve the set of simultaneous equations

   6x1 + 3x2 + 6x3 = 30
   2x1 + 3x2 + 3x3 = 17
    x1 + 2x2 + 2x3 = 11

   Answer: x1 = 1, x2 = 2, x3 = 3.
2  Solve the equations

   [coefficient matrix and right-hand side illegible in the source]

   by LU factorisation.

   Answer: x1 = x2 = x3 = 1.
3  Solve the symmetrical equations

    9.3746x1 + 3.0416x2 - 2.4371x3 = 9.2333
    3.0416x1 + 6.1832x2 + 1.2163x3 = 8.2047
   -2.4371x1 + 1.2163x2 + 8.4429x3 = 3.9339

   by LDLᵀ decomposition, and hence find the determinant of the coefficient matrix.

   Answer: x1 = 0.8964, x2 = 0.7651, x3 = 0.6145.
   The terms in D are D11 = 9.3746, D22 = 5.1964, D33 = 7.0341, and so the determinant of the coefficient matrix is 342.66.
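The determinant claim is easy to check numerically. The following Python/NumPy sketch (an illustration, not the book's Fortran routines) performs an LDLᵀ factorisation without pivoting and multiplies the diagonal terms:

```python
import numpy as np

A = np.array([[ 9.3746, 3.0416, -2.4371],
              [ 3.0416, 6.1832,  1.2163],
              [-2.4371, 1.2163,  8.4429]])

# LDL^T factorisation of a symmetric matrix, no pivoting:
# d[j]  = a_jj - sum_k L_jk^2 d_k
# L_ij  = (a_ij - sum_k L_ik L_jk d_k) / d_j   for i > j
n = A.shape[0]
Lm = np.eye(n)
d = np.zeros(n)
for j in range(n):
    d[j] = A[j, j] - Lm[j, :j] ** 2 @ d[:j]
    for i in range(j + 1, n):
        Lm[i, j] = (A[i, j] - Lm[i, :j] * Lm[j, :j] @ d[:j]) / d[j]

print(d)            # approx [9.3746, 5.1964, 7.0341]
print(np.prod(d))   # determinant of A, approx 342.66
```

Since det L = 1, the determinant of A is simply the product of the D terms.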
4  Solve the symmetrical system

    5x1 + 6x2 - 2x3 - 2x4 = 1
    6x1 - 5x2 - 2x3 + 2x4 = 0
   -2x1 - 2x2 + 3x3 -  x4 = 0
   -2x1 + 2x2 -  x3 - 3x4 = 0

   Answer: x1 = 0.12446, x2 = 0.07725, x3 = 0.11159, x4 = -0.06867.
5  Solve the symmetrical system

     x1 + 2x2 - 2x3 +  x4 =  4
    2x1 + 5x2 - 2x3 + 3x4 =  7
   -2x1 - 2x2 + 5x3 + 3x4 = -1
     x1 + 3x2 + 3x3 + 2x4 =  0

   Answer: x1 = 2, x2 = -1, x3 = -1, x4 = 2.
6  Attempt to solve Exercises 4 and 5 by Cholesky's method (Program 2.4).

   Answer: Square roots of negative numbers will arise.
Solve the symmetrical banded system

   | 4  2  0  0 | {x1}   {4}
   | 2  8  2  0 | {x2} = {0}
   | 0  2  8  2 | {x3}   {0}
   | 0  0  2  4 | {x4}   {0}

   Answer: x1 = 1.156, x2 = -0.311, x3 = 0.089, x4 = -0.044.
11  Solve the equations

    20x1 +  2x2 -  x3 = 25
     2x1 + 13x2 - 2x3 = 30
      x1 +   x2 +  x3 =  2

    using (a) Jacobi and (b) Gauss-Seidel iterations, with a starting guess x1 = x2 = x3 = 0.

    Answer: x1 = 1, x2 = 2, x3 = -1.
    The early iterates are:

    (a) Jacobi
        x1        x2        x3
        0         0         0
        1.25      2.308     2.0
        1.119     2.423    -1.558
        0.9298    1.8959   -1.5423
        0.9833    1.9274   -0.8257
        1.01598   2.02939  -0.91067

    (b) Gauss-Seidel
        x1        x2        x3
        0         0         0
        1.25      2.1154   -1.3654
        0.97019   1.94837  -0.91857
        1.00923   2.01111  -1.02034
        0.99787   1.99720  -0.99507
        1.00053   2.00068  -1.00120
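The tabulated iterates can be checked with a few lines of Python/NumPy (a sketch using the component-wise Jacobi and Gauss-Seidel updates for the equations of Exercise 11, not the book's Fortran):

```python
import numpy as np

A = np.array([[20.0, 2.0, -1.0], [2.0, 13.0, -2.0], [1.0, 1.0, 1.0]])
b = np.array([25.0, 30.0, 2.0])

def jacobi_step(x):
    # every component is updated from the previous iterate only
    return np.array([(b[i] - A[i] @ x + A[i, i] * x[i]) / A[i, i]
                     for i in range(3)])

def gauss_seidel_step(x):
    # each component immediately uses the freshest values available
    x = x.copy()
    for i in range(3):
        x[i] = (b[i] - A[i] @ x + A[i, i] * x[i]) / A[i, i]
    return x

xj = np.zeros(3)
for _ in range(2):
    xj = jacobi_step(xj)
print(xj)   # third Jacobi row: approx [1.119, 2.423, -1.558]

xg = gauss_seidel_step(np.zeros(3))
print(xg)   # second Gauss-Seidel row: approx [1.25, 2.1154, -1.3654]
```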
12  Compare the iteration counts, for a solution tolerance of 1 x 10^-5, in the solution of Exercise 11 by the following methods: Jacobi, Gauss-Seidel, SOR (ω = 1.2), steepest descent, conjugate gradients.

    Answer: 16, 10, 30, does not converge, does not converge.
    The last two methods are suitable only for symmetric, positive definite systems.
14  Check that the solution vectors {1.22  -1.000  3.037}ᵀ and {1.222  -0.75  3.33}ᵀ are both solutions to the system

    9x1 + 9x2 + 8x3 = 26
    9x1 + 8x2 + 7x3 = 24
    8x1 + 7x2 + 6x3 = 21

    to within a tolerance of 0.1. Find the true solution. What do these results imply about the system of equations?

    Answer: True solution {1.0  1.0  1.0}ᵀ. The system is ill-conditioned.
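A quick numerical experiment (a Python/NumPy sketch, not part of the book's programs) illustrates the ill-conditioning: the computed solution is (1, 1, 1), the condition number is large, and a right-hand-side perturbation of order 0.1 shifts the solution by more than 2:

```python
import numpy as np

A = np.array([[9.0, 9.0, 8.0], [9.0, 8.0, 7.0], [8.0, 7.0, 6.0]])
b = np.array([26.0, 24.0, 21.0])

x = np.linalg.solve(A, b)
print(x)                    # true solution, close to [1, 1, 1]
print(np.linalg.cond(A))    # large condition number for a 3 x 3 system

# a right-hand-side perturbation of order 0.1 moves the solution a long way
x_pert = np.linalg.solve(A, b + np.array([0.1, -0.1, 0.1]))
shift = np.max(np.abs(x_pert - x))
print(shift)                # 2.1: a 0.1 data change gives a 2.1 solution change
```

This amplification of small data changes into large solution changes is exactly what makes two quite different vectors appear to "solve" the system.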