
LINEAR ALGEBRAIC EQUATIONS
NUMERICAL ANALYSIS
Definition
• The general form of linear algebraic equations:

 [A]{X} = {B}, where [A] is the n × n matrix of coefficients, {X} is the column vector of unknowns, and {B} is the column vector of constants.


Special types of square matrices
• Examples include symmetric, diagonal, identity, upper triangular, lower triangular, and banded matrices.
Cramer’s rule
• A solution technique that is best suited to small numbers of equations.
• The determinant D of a 3 × 3 matrix is defined as:

$$D = a_{11}\begin{vmatrix} a_{22} & a_{23} \\ a_{32} & a_{33} \end{vmatrix} - a_{12}\begin{vmatrix} a_{21} & a_{23} \\ a_{31} & a_{33} \end{vmatrix} + a_{13}\begin{vmatrix} a_{21} & a_{22} \\ a_{31} & a_{32} \end{vmatrix}$$

where each 2 × 2 determinant (minor) is evaluated as, for example:

$$\begin{vmatrix} a_{22} & a_{23} \\ a_{32} & a_{33} \end{vmatrix} = a_{22}a_{33} - a_{23}a_{32}$$

• Cramer’s rule: each unknown in a system of linear algebraic equations may be expressed as a fraction of two determinants with denominator D and with the numerator obtained from D by replacing the column of coefficients of the unknown in question by the constants b1, b2, …, bn.
$$x_1 = \frac{\begin{vmatrix} b_1 & a_{12} & a_{13} \\ b_2 & a_{22} & a_{23} \\ b_3 & a_{32} & a_{33} \end{vmatrix}}{D}; \quad
x_2 = \frac{\begin{vmatrix} a_{11} & b_1 & a_{13} \\ a_{21} & b_2 & a_{23} \\ a_{31} & b_3 & a_{33} \end{vmatrix}}{D}; \quad
x_3 = \frac{\begin{vmatrix} a_{11} & a_{12} & b_1 \\ a_{21} & a_{22} & b_2 \\ a_{31} & a_{32} & b_3 \end{vmatrix}}{D}$$
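A minimal Python/NumPy sketch of this rule (the function name `cramer` and the near-singularity tolerance are illustrative choices, not part of the original notes):

```python
import numpy as np

def cramer(A, b):
    """Solve A x = b by Cramer's rule (practical only for small systems)."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    D = np.linalg.det(A)
    if abs(D) < 1e-12:
        raise ValueError("Determinant is (near) zero; Cramer's rule fails.")
    x = np.empty(len(b))
    for i in range(len(b)):
        Ai = A.copy()
        Ai[:, i] = b                 # replace column i of coefficients by the constants
        x[i] = np.linalg.det(Ai) / D
    return x
```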
Example 1
• Use Cramer’s rule to solve:

$$\begin{cases} 0.3x_1 + 0.52x_2 + x_3 = -0.01 \\ 0.5x_1 + x_2 + 1.9x_3 = 0.67 \\ 0.1x_1 + 0.3x_2 + 0.5x_3 = -0.44 \end{cases}$$
SOLUTION
• Computing the determinant:
$$D = 0.3\begin{vmatrix} 1 & 1.9 \\ 0.3 & 0.5 \end{vmatrix} - 0.52\begin{vmatrix} 0.5 & 1.9 \\ 0.1 & 0.5 \end{vmatrix} + 1\begin{vmatrix} 0.5 & 1 \\ 0.1 & 0.3 \end{vmatrix} = -0.0022$$
• Applying Cramer’s rule:

$$x_1 = \frac{0.03278}{-0.0022} = -14.9; \quad x_2 = \frac{0.0649}{-0.0022} = -29.5; \quad x_3 = \frac{-0.04356}{-0.0022} = 19.8$$
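As a quick standalone check (a sketch, not part of the original slides), the same numbers can be reproduced with NumPy:

```python
import numpy as np

A = np.array([[0.3, 0.52, 1.0],
              [0.5, 1.0,  1.9],
              [0.1, 0.3,  0.5]])
b = np.array([-0.01, 0.67, -0.44])

D = np.linalg.det(A)                 # ≈ -0.0022
x = np.empty(3)
for i in range(3):
    Ai = A.copy()
    Ai[:, i] = b                     # numerator determinant for unknown i
    x[i] = np.linalg.det(Ai) / D
print(D, x)                          # ≈ -0.0022, [-14.9, -29.5, 19.8]
```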
Gauss Elimination
• The elimination of unknowns by combining equations is an algebraic approach that can be illustrated for a set of two equations:

$$a_{11}x_1 + a_{12}x_2 = b_1$$
$$a_{21}x_1 + a_{22}x_2 = b_2$$

• The basic strategy is to multiply the equations by constants so that one of the unknowns will be eliminated when the two equations are combined; for instance, multiplying the first equation by a21 and the second by a11 makes the coefficients of x1 equal.

• Subtracting the first (scaled) equation from the second one eliminates x1:

$$(a_{11}a_{22} - a_{12}a_{21})x_2 = a_{11}b_2 - a_{21}b_1$$

• Solve for x2 and substitute the result back into either original equation to solve for x1:

$$x_2 = \frac{a_{11}b_2 - a_{21}b_1}{a_{11}a_{22} - a_{12}a_{21}}; \qquad x_1 = \frac{a_{22}b_1 - a_{12}b_2}{a_{11}a_{22} - a_{12}a_{21}}$$
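A small sketch of these closed-form 2 × 2 formulas (the numeric system below is made up purely for illustration):

```python
def solve_2x2(a11, a12, a21, a22, b1, b2):
    """Eliminate x1 from two equations, then back-substitute (no pivoting)."""
    det = a11 * a22 - a12 * a21      # common denominator of both unknowns
    x2 = (a11 * b2 - a21 * b1) / det
    x1 = (a22 * b1 - a12 * b2) / det
    return x1, x2

# Example: 3x1 + 2x2 = 18 and -x1 + 2x2 = 2  ->  x1 = 4, x2 = 3
print(solve_2x2(3, 2, -1, 2, 18, 2))
```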
Naive Gauss elimination
• Forward Elimination of Unknowns: The first phase is designed to reduce the
set of equations to an upper triangular system.
• Back Substitution: solve the last equation for xn first, then substitute the result back into the preceding equations to obtain the remaining unknowns (both phases are sketched in the code below).
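A minimal sketch of both phases (no pivoting, so nonzero diagonal pivots are assumed; the function name `naive_gauss` is illustrative):

```python
import numpy as np

def naive_gauss(A, b):
    """Solve A x = b by forward elimination followed by back substitution."""
    A = np.array(A, dtype=float)
    b = np.array(b, dtype=float)
    n = len(b)

    # Forward elimination: reduce the system to upper triangular form.
    for k in range(n - 1):
        for i in range(k + 1, n):
            factor = A[i, k] / A[k, k]     # assumes pivot A[k, k] != 0
            A[i, k:] -= factor * A[k, k:]
            b[i] -= factor * b[k]

    # Back substitution: solve for x_n first, then work upward.
    x = np.empty(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x
```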
Example 2
Use Gauss elimination to solve:

$$\begin{cases} 3x_1 - 0.1x_2 - 0.2x_3 = 7.85 \\ 0.1x_1 + 7x_2 - 0.3x_3 = -19.3 \\ 0.3x_1 - 0.2x_2 + 10x_3 = 71.4 \end{cases}$$

Carry six significant figures during the computation.

SOLUTION
• The initial augmented matrix (coefficients and right-hand-side constants) is:

$$[A]^{(0)} = \left[\begin{array}{ccc|c} 3 & -0.1 & -0.2 & 7.85 \\ 0.1 & 7 & -0.3 & -19.3 \\ 0.3 & -0.2 & 10 & 71.4 \end{array}\right]$$
Example 2 (cont.)
• Eliminate x1 from the 2nd and the 3rd rows:
3 −0.1 −0.2 : 7.85
A( ) = 0 7.00333 −0.293333 : −19.5617 
1
 
0 −0.190000 10.0200 : 70.6150
• Eliminate x2 from the 3rd row:
3 −0.1 −0.2 : 7.85
A( ) = 0 7.00333 −0.293333 : −19.5617 
2
 
0 0 10.0120 : 70.0843

• Solve these equations by back substitution:

$$\{X\} = \begin{Bmatrix} x_1 & x_2 & x_3 \end{Bmatrix}^T = \begin{Bmatrix} 3.00000 & -2.50000 & 7.00000 \end{Bmatrix}^T$$
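The same solution can be reproduced with the elimination sketch above or, as an independent check (not part of the original example), directly with NumPy:

```python
import numpy as np

A = np.array([[3.0, -0.1, -0.2],
              [0.1,  7.0, -0.3],
              [0.3, -0.2, 10.0]])
b = np.array([7.85, -19.3, 71.4])

print(np.linalg.solve(A, b))   # ≈ [ 3.  -2.5  7.]
```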
LU decomposition
• LU decomposition methods separate the time-consuming elimination of the
matrix [A] from the manipulations of the right-hand side {B}.
• [A] is factored or “decomposed” into lower [L] and upper [U] triangular matrices.

$$[A]\{X\} = \{B\}, \qquad [A] = [L][U]$$

where the solution is then obtained in two triangular steps:

$$[L]\{D\} = \{B\} \quad \text{(forward substitution for the intermediate vector } \{D\}\text{)}$$
$$[U]\{X\} = \{D\} \quad \text{(back substitution for } \{X\}\text{)}$$
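A sketch of this two-stage solve, assuming the factors [L] and [U] are already available (both solvers divide by the diagonal entry, so unit and non-unit diagonals are handled alike):

```python
import numpy as np

def forward_substitution(L, b):
    """Solve [L]{D} = {B} for {D}, with L lower triangular."""
    n = len(b)
    d = np.empty(n)
    for i in range(n):
        d[i] = (b[i] - L[i, :i] @ d[:i]) / L[i, i]
    return d

def back_substitution(U, d):
    """Solve [U]{X} = {D} for {X}, with U upper triangular."""
    n = len(d)
    x = np.empty(n)
    for i in range(n - 1, -1, -1):
        x[i] = (d[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

def lu_solve(L, U, b):
    """Two triangular solves in place of a full elimination per right-hand side."""
    return back_substitution(U, forward_substitution(L, np.asarray(b, dtype=float)))
```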
LU decomposition
• Doolittle method:

$$[L] = \begin{bmatrix} 1 & 0 & \cdots & 0 \\ l_{21} & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ l_{n1} & l_{n2} & \cdots & 1 \end{bmatrix}; \quad
[U] = \begin{bmatrix} u_{11} & u_{12} & \cdots & u_{1n} \\ 0 & u_{22} & \cdots & u_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & u_{nn} \end{bmatrix}$$

where

$$\begin{cases}
u_{1j} = a_{1j} & (1 \le j \le n) \\[4pt]
l_{i1} = \dfrac{a_{i1}}{u_{11}} & (2 \le i \le n) \\[4pt]
u_{ij} = a_{ij} - \displaystyle\sum_{k=1}^{i-1} l_{ik}u_{kj} & (2 \le i \le j \le n) \\[4pt]
l_{ij} = \dfrac{1}{u_{jj}}\left(a_{ij} - \displaystyle\sum_{k=1}^{j-1} l_{ik}u_{kj}\right) & (2 \le j \le i \le n)
\end{cases}$$
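A direct transcription of these recurrences into Python (a sketch with no pivoting, so it assumes the pivots u_jj are nonzero):

```python
import numpy as np

def doolittle(A):
    """Factor A = L U with a unit diagonal on L (Doolittle method)."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    L = np.eye(n)
    U = np.zeros((n, n))
    for i in range(n):
        # Row i of U: u_ij = a_ij - sum_k l_ik u_kj   (j >= i)
        for j in range(i, n):
            U[i, j] = A[i, j] - L[i, :i] @ U[:i, j]
        # Column i of L: l_ji = (a_ji - sum_k l_jk u_ki) / u_ii   (j > i)
        for j in range(i + 1, n):
            L[j, i] = (A[j, i] - L[j, :i] @ U[:i, i]) / U[i, i]
    return L, U
```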
LU decomposition
• Crout method:

 l11 0 0 1 u12 u1n 


l l 0 0 1 u2 n 
 L =  21 22 ; U  =  
   
   
 ln1 ln 2 lnn  0 0 1 

where

$$\begin{cases}
l_{i1} = a_{i1} & (1 \le i \le n) \\[4pt]
u_{1j} = \dfrac{a_{1j}}{l_{11}} & (2 \le j \le n) \\[4pt]
l_{ij} = a_{ij} - \displaystyle\sum_{k=1}^{j-1} l_{ik}u_{kj} & (2 \le j \le i \le n) \\[4pt]
u_{ij} = \dfrac{1}{l_{ii}}\left(a_{ij} - \displaystyle\sum_{k=1}^{i-1} l_{ik}u_{kj}\right) & (2 \le i \le j \le n)
\end{cases}$$
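An analogous sketch for the Crout recurrences (unit diagonal on [U] instead of [L]; again no pivoting, so the pivots l_jj must be nonzero):

```python
import numpy as np

def crout(A):
    """Factor A = L U with a unit diagonal on U (Crout method)."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    L = np.zeros((n, n))
    U = np.eye(n)
    for j in range(n):
        # Column j of L: l_ij = a_ij - sum_k l_ik u_kj   (i >= j)
        for i in range(j, n):
            L[i, j] = A[i, j] - L[i, :j] @ U[:j, j]
        # Row j of U: u_ji = (a_ji - sum_k l_jk u_ki) / l_jj   (i > j)
        for i in range(j + 1, n):
            U[j, i] = (A[j, i] - L[j, :j] @ U[:j, i]) / L[j, j]
    return L, U
```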
Example 3
Use LU decomposition (Doolittle method) to solve:

$$\begin{cases} 2x_1 + 2x_2 - 3x_3 = 9 \\ -4x_1 - 3x_2 + 4x_3 = -15 \\ 2x_1 + x_2 + 2x_3 = 3 \end{cases}$$
SOLUTION
• Decomposing [A] into [L] and [U]:
 2 2 −3  1 0 0  2 2 −3
[ A] = [ L][U ]   −4 −3 4  =  −2 1 0   0 1 −2 
    
 2 1 2   1 −1 1  0 0 3
• Substituting back to obtain results:
 1 0 0  9  9
   
[ L]{D} = {B}   −2 1 0  {D} =  −15   {D} =  3
 
 1 −1 1  3  −3
   
 2 2 −3  9  2
   
[U ]{ X } = {D}   0 1 −2  { X } =  3  { X } =  1
 
 0 0 3  −3  −1
   
Cholesky decomposition
• A decomposition of a symmetric, positive-definite matrix into the product of a
lower triangular matrix (C) and its conjugate transpose (CT).
[A] = [C][CT]
• A symmetric matrix is positive-definite when all its leading principal minors (sub-determinants) are positive.

$$[C] = \begin{bmatrix} c_{11} & 0 & \cdots & 0 \\ c_{21} & c_{22} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ c_{n1} & c_{n2} & \cdots & c_{nn} \end{bmatrix}$$

where

$$\begin{cases}
c_{11} = \sqrt{a_{11}} \\[4pt]
c_{i1} = \dfrac{a_{i1}}{c_{11}} & (2 \le i \le n) \\[4pt]
c_{kk} = \sqrt{a_{kk} - \displaystyle\sum_{j=1}^{k-1} c_{kj}^2} & (2 \le k \le n) \\[4pt]
c_{ik} = \dfrac{1}{c_{kk}}\left(a_{ik} - \displaystyle\sum_{j=1}^{k-1} c_{ij}c_{kj}\right) & (k+1 \le i \le n)
\end{cases}$$
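A sketch of these recurrences (it assumes [A] is symmetric and positive-definite; `numpy.linalg.cholesky` computes the same lower-triangular factor and can serve as a cross-check):

```python
import numpy as np

def cholesky(A):
    """Factor a symmetric positive-definite A as C @ C.T, C lower triangular."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    C = np.zeros((n, n))
    for k in range(n):
        # Diagonal entry: c_kk = sqrt(a_kk - sum_j c_kj^2)
        C[k, k] = np.sqrt(A[k, k] - C[k, :k] @ C[k, :k])
        # Entries below the diagonal in column k
        for i in range(k + 1, n):
            C[i, k] = (A[i, k] - C[i, :k] @ C[k, :k]) / C[k, k]
    return C
```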
Example 4
Use Cholesky decomposition to solve:

$$\begin{cases} x_1 + x_2 - x_3 = 1 \\ x_1 + 2x_2 = 2 \\ -x_1 + 4x_3 = 3 \end{cases}$$
SOLUTION
• [A] is symmetric and positive-definite since:

$$1 > 0; \quad \begin{vmatrix} 1 & 1 \\ 1 & 2 \end{vmatrix} = 1 > 0; \quad \begin{vmatrix} 1 & 1 & -1 \\ 1 & 2 & 0 \\ -1 & 0 & 4 \end{vmatrix} = 2 > 0$$

• Decomposing [A] into [C][CT]:

$$[A] = \begin{bmatrix} 1 & 1 & -1 \\ 1 & 2 & 0 \\ -1 & 0 & 4 \end{bmatrix} \;\Rightarrow\; [C] = \begin{bmatrix} 1 & 0 & 0 \\ 1 & 1 & 0 \\ -1 & 1 & \sqrt{2} \end{bmatrix}$$

• Substituting back to obtain results:
 1 0 0  1  1 
     
[C ]{D} = {B}   1 1 0  {D} = 2   {D} =  1 
 −1 1  3  
 2    3 2 
 1 1 −1  1   3 
     
[C T ]{ X } = {D}  0 1 1 { X } =  1   { X } =  −1 2 
0 0    32
 2   3 2   
