Linear System of Equations: Gauss Elimination Method
Gauss Elimination method
Number of flops for the Gauss Elimination method: forward elimination requires approximately 2n^3/3 flops, and back substitution requires approximately n^2 flops, for an n*n system.
Gauss Elimination method
%----Back substitution---------
% Aug = augmented matrix [A b] after forward elimination; Nb = n+1
x=zeros(n,1);
x(n)=Aug(n,Nb)/Aug(n,n);
for k=n-1:-1:1
for j=k+1:n
Aug(k,Nb)=Aug(k,Nb)-Aug(k,j)*x(j);
% computes b(k)-sum(akj*xj)
end
x(k)=Aug(k,Nb)/Aug(k,k);
end
Cost of computing
Example 1:
Estimate the time required to carry out back substitution on a system of
500 equations in 500 unknowns, on a computer where elimination takes
1 second.
For Elimination:
approx. flops, fe = 2n^3/3 = 2*500^3/3
time needed, te = 1 sec
For Back substitution:
approx. flops, fb = n^2 = 500^2
time needed, tb = ?
Since time is proportional to the flop count, tb/te = fb/fe, so
tb = te*fb/fe = (500^2)/(2*500^3/3) = 3/(2*500) = 0.003 sec
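The same estimate written out as a small MATLAB check (variable names are illustrative, not from the slides):

n  = 500;
fe = 2*n^3/3;   % elimination flops
fb = n^2;       % back-substitution flops
te = 1;         % sec, given
tb = te*fb/fe   % = 3/(2*500) = 0.003 sec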
Cost of computing
Example 2:
Assume that your computer can solve a 200*200 linear system Ax=b in
1 second by the Gauss elimination method. Estimate the time required to
solve four systems of 500 equations in 500 unknowns with the same
coefficient matrix, using the LU factorization method.
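A sketch of the estimate, using the same flop counts as Example 1: LU factorization costs about 2n^3/3 flops once, and each subsequent solve with the stored factors costs about 2n^2 flops (forward plus back substitution).

rate = (2*200^3/3)/1;             % flops per second implied by the 200*200 timing
t = (2*500^3/3 + 4*2*500^2)/rate  % approx. 16 sec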
Definition
Let xc be an approximate solution of the linear system Ax=b.
Error magnification factor = ( ||x - xc|| / ||x|| ) / ( ||b - A*xc|| / ||b|| )
where x is the exact solution.
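As an illustrative sketch (the matrix hilb(6) and this setup are assumptions for demonstration, not from the slides), the factor can be computed for a system whose exact solution is known:

A = hilb(6); x = ones(6,1); b = A*x;  % exact solution known by construction
xc = A\b;                             % computed (approximate) solution
emf = (norm(x-xc,inf)/norm(x,inf))/(norm(b-A*xc,inf)/norm(b,inf))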
Condition number
Definition
The condition number of a square matrix A, cond(A), is the maximum
possible error magnification factor for solving Ax=b, over all right-hand sides b.
Theorem
The condition number of an n*n matrix A is cond(A) = ||A|| * ||A^-1||.
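The theorem can be checked numerically in MATLAB (a sketch using the infinity norm; the sample matrix is reused from the pivoting example below):

A = [10 -7 0; -3 2 6; 5 -1 5];
norm(A,inf)*norm(inv(A),inf)  % ||A||*||A^-1||
cond(A,inf)                   % built-in condition number, should agree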
Pivoting
• If, in addition to the rows below the pivot, the columns to the right of the
pivot element are also searched for the largest entry and columns are switched,
this is called complete pivoting.
Gauss Elimination with partial pivoting
clc; close all; format long e;
% A= coefficient matrix % b=right hand side vector
% x=solution vector
A=[10 -7 0; -3 2 6; 5 -1 5]; b=[7;4;6];
[m,n]=size(A);
if m~=n, error('Matrix A must be square');end
Nb=n+1;
Aug=[A b]; % Augmented matrix, combines A and b
%----forward elimination-------
for k=1:n-1 % from row 1 to n-1
%-----------partial pivoting------------
[big, big_pos]=max(abs(Aug(k:n,k)));
pivot_pos=big_pos+k-1;
if pivot_pos~=k
Aug([k pivot_pos],:)=Aug([pivot_pos k],:);
end
%--------------------------------------
for j=k+1:n
multiplier=Aug(j,k)/Aug(k,k);
Aug(j,k:Nb)=Aug(j,k:Nb)-multiplier*Aug(k,k:Nb);
end
end
%----Back substitution---------
x=zeros(n,1);
x(n)=Aug(n,Nb)/Aug(n,n);
for k=n-1:-1:1
for j=k+1:n
Aug(k,Nb)=Aug(k,Nb)-Aug(k,j)*x(j);
% computes b(k)-sum(akj*xj)
end
x(k)=Aug(k,Nb)/Aug(k,k);
end
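For the sample system above (A=[10 -7 0; -3 2 6; 5 -1 5], b=[7;4;6]), the exact solution is x = (0, -1, 1)^T, which the script should reproduce to machine precision.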
Scaling
Tridiagonal systems
function [xout]=tridiagonal(a,b,c,d)
% a = sub-diagonal (a(1) unused), b = main diagonal,
% c = super-diagonal (c(N) unused), d = right-hand side vector
N=length(b);
%---forward elimination------------
for k=2:N
multiplier=a(k)/b(k-1);
b(k)=b(k)-multiplier*c(k-1);
d(k)=d(k)-multiplier*d(k-1);
end
%-----back substitution----------
x(N)=d(N)/b(N);
for k=N-1:-1:1
x(k)=(d(k)-c(k)*x(k+1))/b(k);
end
xout=x.';
%---------------------------------
Main:
format long e; clc; clear all; close all;
a=[0 -1 -1 -1]; b=[2.04 2.04 2.04 2.04];
c=[-1 -1 -1 0]; d=[40.8 0.8 0.8 200.8];
[xout]=tridiagonal(a,b,c,d)
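A quick sanity check (a sketch, not part of the original slides): assemble the same tridiagonal system as a dense matrix and compare against the backslash solution.

T = diag(b) + diag(c(1:end-1),1) + diag(a(2:end),-1);
norm(T\d(:) - tridiagonal(a,b,c,d), inf)  % should be near machine precision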
Jacobi method
function [xout iter_number]=Jacobi(A,b,max_iter,TOL)
N=length(b);
d=diag(A); % takes the main diagonal elements only
x=zeros(N,1); % initial guess vector
for k=1:max_iter
x=(b-(A-diag(d))*x)./d;
maxresidual=norm(b-A*x,inf);
if maxresidual<TOL
break;
end
end
iter_number=k;
xout=x;
%----------------------------------------------------
Main:
format long e; clc; clear all; close all;
A=[3 -0.1 -0.2; 0.1 7 -0.3; 0.3 -0.2 10];
b=[7.85;-19.3; 71.4];
max_iter=1000;TOL=1e-5;
[xc iter_number]=Jacobi(A,b,max_iter,TOL);
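Since A is strictly diagonally dominant, the Jacobi iteration is guaranteed to converge; for this system it approaches the exact solution x = (3, -2.5, 7)^T.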
Gauss Seidel method
function [xout iter_number]=GaussSeidel(A,b,max_iter,TOL)
N=length(b);
d=diag(diag(A)); % diagonal matrix D holding the main diagonal of A
x=zeros(N,1); % initial guess vector
U=triu(A,1); % above the main diagonal
L=tril(A,-1); % below the main diagonal
for k=1:max_iter
bL=b-U*x;
for j=1:N
x(j)=(bL(j)-L(j,:)*x)./d(j,j);
end
maxresidual=norm(b-A*x,inf);
if maxresidual<TOL
break;
end
end
iter_number=k;
xout=x;
%---------------------------------------------------------
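Usage sketch (same test system as the Jacobi example; not part of the original slides):

A=[3 -0.1 -0.2; 0.1 7 -0.3; 0.3 -0.2 10]; b=[7.85;-19.3;71.4];
[xc iter_number]=GaussSeidel(A,b,1000,1e-5)

Because each updated x(j) is used immediately within a sweep, Gauss-Seidel typically converges in fewer iterations than Jacobi on diagonally dominant systems.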
SOR method
function [xout iter_number]=SOR(A,b,relax,max_iter,TOL)
% relax = relaxation parameter omega; relax=1 reduces to Gauss-Seidel
N=length(b); d=diag(diag(A));
x=zeros(N,1); % initial guess vector
U=triu(A,1); % above the main diagonal
L=tril(A,-1); % below the main diagonal
for k=1:max_iter
bL=relax*(b-U*x)+(1-relax)*d*x;
for j=1:N
x(j)=(bL(j)-relax*L(j,:)*x)./d(j,j);
end
maxresidual=norm(b-A*x,inf);
if maxresidual<TOL
break;
end
end
iter_number=k;
xout=x;
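Usage sketch (the relaxation value 1.1 is an illustrative assumption):

A=[3 -0.1 -0.2; 0.1 7 -0.3; 0.3 -0.2 10]; b=[7.85;-19.3;71.4];
[xc iter_number]=SOR(A,b,1.1,1000,1e-5)

Values 1 < relax < 2 (over-relaxation) can accelerate convergence relative to Gauss-Seidel.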
Sparse matrix computations
A =
[  3    1    0    0    0    0    0    0    0    0    0   1/2
   1    3    1    0    0    0    0    0    0    0   1/2   0
   0    1    3    1    0    0    0    0    0   1/2   0    0
   0    0    1    3    1    0    0    0   1/2   0    0    0
   0    0    0    1    3    1    0   1/2   0    0    0    0
   0    0    0    0    1    3    1    0    0    0    0    0
   0    0    0    0    0    1    3    1    0    0    0    0
   0    0    0    0   1/2   0    1    3    1    0    0    0
   0    0    0   1/2   0    0    0    1    3    1    0    0
   0    0   1/2   0    0    0    0    0    1    3    1    0
   0   1/2   0    0    0    0    0    0    0    1    3    1
  1/2   0    0    0    0    0    0    0    0    0    1    3 ]

The 12*12 matrix A has 3 on the main diagonal, 1 on the sub- and super-diagonals, and 1/2 in the anti-diagonal positions (i, n+1-i), except in the two middle rows, where those positions already hold a 1.
One double precision floating point number requires 8 bytes (64 bit) of storage.
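As a sketch (assuming even n; this construction is illustrative, not from the slides), the matrix above can be stored sparsely so that memory grows like O(n) rather than O(n^2):

n = 12; e = ones(n,1);
A = spdiags([e 3*e e], -1:1, n, n);  % 1's around 3's on the main diagonal
C = fliplr(spdiags(e/2, 0, n, n));   % 1/2 on the anti-diagonal
A = A + C;
A(n/2,n/2+1) = 1; A(n/2+1,n/2) = 1;  % undo the overlap in the two middle rows
whos A                               % sparse storage uses far fewer bytes than full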
Sparse matrix computations
• So it is clear that, for a problem of this size, Gauss elimination on a fully
stored (dense) matrix is not a computation that can be finished overnight.
Rank Deficiency
Kronecker-Capelli theorem: the system Ax=b has at least one solution if and only if rank(A) = rank([A b]); if so, the solution is unique exactly when rank(A) equals the number of unknowns n.
Rank Deficiency
Example 1:

A=[ 1 -2 3 ;
    2 4 -1 ;
   -1 -14 11 ];
b=[ 5;
    7;
    2];
Ab=[A b]; % Augmented matrix
RA=rank(A);
RAb=rank(Ab);
xc=A\b

Solution:

RA = 2
RAb = 3
Warning: Matrix is singular to working precision.
xc =
   NaN
  -Inf
  -Inf

Since RA=2 < RAb=3, the system is inconsistent and has no solution, which is why the backslash solve breaks down.
Rank Deficiency
Example 2:

A=[ 1 -2 3 ;
    2 4 -1 ;
   -1 -14 11 ];
b=[ 5;
    7;
    1];
Ab=[A b]; % Augmented matrix
RA=rank(A);
RAb=rank(Ab);
xc1=A\b
xc2=pinv(A)*b

Solution:

RA = 2
RAb = 2
Warning: Matrix is singular to working precision.
xc1 =
   NaN
   NaN
   NaN
xc2 =
   2.131455399061033e+000
   1.107981220657277e+000
   1.694835680751174e+000

Since RA = RAb = 2 < 3, the system is consistent but rank-deficient, so it has infinitely many solutions; pinv(A)*b returns the minimum-norm solution among them.