US - TMC - 04 - Linear Equations - 2024
In matrix form: A x = b
MATHEMATICAL BACKGROUND
Mathematical Background
A matrix consists of a rectangular array of elements
represented by a single symbol.
[A] is the shorthand notation for the matrix and aij
designates an individual element of the matrix.
A horizontal set of elements is called a row and a
vertical set is called a column. The first subscript i
always designates the number of the row in which the
element lies. The second subscript j designates the
column.
A matrix with n rows and m columns is said to have a
dimension of n by m (or n × m), and is referred to as an
n by m matrix.
Matrices where n = m are called square matrices.
The diagonal consisting of the elements aii is termed the
principal or main diagonal of the matrix.
Special Types of Square Matrices
Matrix Operating Rules
Addition
cij = aij + bij
Subtraction
dij = eij − fij
Existence and Solutions
If M = N (a square coefficient matrix) and [A] is
nonsingular, a unique solution exists.
Example:
PART A
ELIMINATION METHODS
Method of Elimination
The basic strategy is to successively solve one of the
equations of the set for one of the unknowns and to
eliminate that variable from the remaining equations by
substitution.
1. Graphical Method
Solve each equation for x2 in slope-intercept form:

x2 = -(a11/a12) x1 + b1/a12
x2 = -(a21/a22) x1 + b2/a22

Each equation is a straight line with slope -(a11/a12) or
-(a21/a22) and intercept b1/a12 or b2/a22; plotting both
lines, the solution is read off at their intersection.
Graphical solution of a set of two simultaneous linear
algebraic equations: the intersection of the lines
represents the solution.
Figure 9.2: graphical depiction of singular and
ill-conditioned systems: (a) no solution, (b) infinite
solutions, and (c) an ill-conditioned system where the
slopes are so close that the point of intersection is
difficult to detect visually.
2. Determinants and Cramer’s Rule
[A]{X} = {B}
where [A] is the coefficient matrix:
Determinant

For a 3 × 3 coefficient matrix the determinant D is

    | a11 a12 a13 |
D = | a21 a22 a23 |
    | a31 a32 a33 |

The 2 × 2 minors along the first row are

D11 = | a22 a23 | = a22 a33 - a32 a23
      | a32 a33 |

D12 = | a21 a23 | = a21 a33 - a31 a23
      | a31 a33 |

D13 = | a21 a22 | = a21 a32 - a31 a22
      | a31 a32 |

Expanding along the first row,

D = a11 D11 - a12 D12 + a13 D13
Cramer’s rule

Cramer’s rule expresses the solution of a system of
linear equations as ratios of determinants of the array
of coefficients. The unknown x1 is the determinant of [A]
with its first column replaced by {B}, divided by D:

     | b1 a12 a13 |
x1 = | b2 a22 a23 | / D
     | b3 a32 a33 |

x2 and x3 follow by replacing the second and third
columns, respectively.
Example #1
Example #2
Use Cramer’s rule to solve
0.3x1 + 0.52x2 + x3 = −0.01
0.5x1 + x2 + 1.9x3 = 0.67
0.1x1 + 0.3x2 + 0.5x3 = −0.44
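Example #2 can be checked with a short Python sketch of Cramer's rule (assuming NumPy is available; the variable names are illustrative):

```python
import numpy as np

# Cramer's rule for Example #2: replace column i of [A] with {B},
# take the determinant, and divide by D = det(A).
A = np.array([[0.3, 0.52, 1.0],
              [0.5, 1.0,  1.9],
              [0.1, 0.3,  0.5]])
b = np.array([-0.01, 0.67, -0.44])

D = np.linalg.det(A)                      # determinant of the coefficient matrix
x = []
for i in range(3):
    Ai = A.copy()
    Ai[:, i] = b                          # column i replaced by the RHS vector
    x.append(np.linalg.det(Ai) / D)
print(x)                                  # approx [-14.9, -29.5, 19.8]
```

Note how small D is (about -0.0022): Cramer's rule makes near-singularity visible, but its cost grows factorially with system size, which is why elimination methods are preferred beyond a handful of equations.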
3. Naive Gauss Elimination
Extension of method of elimination to large sets of
equations by developing a systematic scheme or
algorithm to eliminate unknowns and to back substitute.
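The two phases, forward elimination and back substitution, can be sketched in Python as follows (a teaching sketch, not robust library code; it assumes no pivot ever becomes zero, hence "naive"):

```python
# Naive Gauss elimination: forward elimination followed by back substitution.
# Assumes the pivot elements A[k][k] never become zero.

def naive_gauss(A, b):
    n = len(b)
    A = [row[:] for row in A]   # work on copies, leave inputs intact
    b = b[:]
    # forward elimination: zero out the entries below each pivot
    for k in range(n - 1):
        for i in range(k + 1, n):
            factor = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= factor * A[k][j]
            b[i] -= factor * b[k]
    # back substitution: solve the resulting upper triangular system
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x

# the system from Example #2 again
print(naive_gauss([[0.3, 0.52, 1.0], [0.5, 1.0, 1.9], [0.1, 0.3, 0.5]],
                  [-0.01, 0.67, -0.44]))   # approx [-14.9, -29.5, 19.8]
```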
4. Gauss-Jordan Elimination
Basic facts about linear equations
Techniques for Improving Solutions
Use of more significant figures.
1. Pivoting. If a pivot element is zero, the
normalization step leads to division by zero. The same
problem can arise when the pivot element is close to
zero. The problem can be avoided by:
Partial pivoting: switching the rows so that the largest
available element becomes the pivot element.
Complete pivoting: searching all rows and columns for the
largest element, then switching both rows and columns.
2. Scaling
Example #4
Use Gauss elimination to solve
0.0003x1 + 3.0000x2 = 2.0001
1.0000x1 + 1.0000x2 = 1.0000
Note that in this form the first pivot element, a11 = 0.0003, is very close to zero.
Then repeat the computation, but partial pivot by reversing the order of the
equations. The exact solution is x1 = 1/3 and x2 = 2/3.
(a) Solve the following set of equations using Gauss elimination and a pivoting
strategy:
2x1 + 100,000x2 = 100,000
x1 + x2 = 2
(b) Repeat the solution after scaling the equations so that the maximum
coefficient in each row is 1.
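Part (a) of the pivoting strategy can be sketched by extending the elimination loop with a row swap (illustrative code, not from the slides; only the pivot search differs from naive elimination):

```python
# Gauss elimination with partial pivoting: before eliminating column k,
# swap in the row (k..n-1) with the largest magnitude entry in that column.

def gauss_pivot(A, b):
    n = len(b)
    A = [row[:] for row in A]
    b = b[:]
    for k in range(n - 1):
        # partial pivoting: pick the largest pivot candidate in column k
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        if p != k:
            A[k], A[p] = A[p], A[k]
            b[k], b[p] = b[p], b[k]
        for i in range(k + 1, n):
            factor = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= factor * A[k][j]
            b[i] -= factor * b[k]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x

# Example #4: without pivoting the tiny pivot 0.0003 amplifies round-off;
# with pivoting the answer is close to the exact x1 = 1/3, x2 = 2/3.
print(gauss_pivot([[0.0003, 3.0], [1.0, 1.0]], [2.0001, 1.0]))
```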
LU DECOMPOSITION
If [L] is a lower triangular matrix and [U] is an upper
triangular matrix, then [A]{X} = {B} can be decomposed
into two matrices [L] and [U] such that
[L][U] = [A]
so that
[L][U]{X} = {B}
LU Decomposition

Crout’s Algorithm (the diagonal of [U] is set to one,
u_ii = 1):

l_ij = a_ij − Σ (k = 1 to j−1) l_ik u_kj              for i ≥ j

u_ij = ( a_ij − Σ (k = 1 to i−1) l_ik u_kj ) / l_ii   for i < j
jj k 1
Solving linear equations
Since [L][U] = [A], we have A x = (LU) x = L (U x) = b.
Let y = U x; solve L y = b for y by forward substitution,
then solve U x = y for x by back substitution.
LU Decomposition

The factors produced by forward elimination (Doolittle
form, with l_ii = 1):

l21 = a21 / a11      a’22 = a22 − l21 a12
l31 = a31 / a11      a’23 = a23 − l21 a13
l32 = a’32 / a’22    a’33 = a33 − l31 a13 − l32 a’23
Example #6
Use LU decomposition to solve
3x1 − 0.1x2 − 0.2x3 = 7.85
0.1x1 + 7x2 − 0.3x3 = −19.3
0.3x1 − 0.2x2 + 10x3 = 71.4
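A Python sketch of the Doolittle factorization and the two substitution passes, applied to Example #6 (function names are illustrative):

```python
# Doolittle LU decomposition (1s on the diagonal of L) followed by
# forward and backward substitution, applied to Example #6.

def lu_decompose(A):
    n = len(A)
    U = [row[:] for row in A]
    L = [[float(i == j) for j in range(n)] for i in range(n)]
    for k in range(n - 1):
        for i in range(k + 1, n):
            L[i][k] = U[i][k] / U[k][k]          # store the multiplier
            for j in range(k, n):
                U[i][j] -= L[i][k] * U[k][j]
    return L, U

def lu_solve(L, U, b):
    n = len(b)
    y = [0.0] * n
    for i in range(n):                           # forward: L y = b
        y[i] = b[i] - sum(L[i][j] * y[j] for j in range(i))
    x = [0.0] * n
    for i in range(n - 1, -1, -1):               # backward: U x = y
        x[i] = (y[i] - sum(U[i][j] * x[j] for j in range(i + 1, n))) / U[i][i]
    return x

L, U = lu_decompose([[3, -0.1, -0.2], [0.1, 7, -0.3], [0.3, -0.2, 10]])
print(lu_solve(L, U, [7.85, -19.3, 71.4]))       # approx [3, -2.5, 7]
```

Once [L] and [U] are computed, additional right-hand sides cost only the two substitution passes, which is the main practical advantage over repeating the full elimination.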
• Properties of the determinant: since
det(A) = det(L) det(U) and det(L) = 1 for a unit lower
triangular [L],

det(A) = det(U) = u11 u22 ... uNN = Π (j = 1 to N) u_jj
Example #7
Employ LU decomposition to determine the matrix inverse for the system from
the previous example.
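The idea behind Example #7 is that column i of the inverse solves A x = e_i, so one factorization serves all N right-hand sides. A sketch using NumPy (whose `solve` is itself LU-based, via LAPACK):

```python
import numpy as np

# Matrix inverse column by column: column i of A^{-1} is the solution of
# A x = e_i, where e_i is the i-th column of the identity matrix.
A = np.array([[3.0, -0.1, -0.2],
              [0.1,  7.0, -0.3],
              [0.3, -0.2, 10.0]])
Ainv = np.column_stack([np.linalg.solve(A, e) for e in np.eye(3)])
print(np.allclose(A @ Ainv, np.eye(3)))   # → True
```

With an explicit [L][U] factorization (as in the previous sketch), each column would cost only one forward and one backward substitution.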
PART C
Tridiagonal Systems
A tridiagonal system has a bandwidth of 3:

| f1 g1       | | x1 |   | r1 |
| e2 f2 g2    | | x2 | = | r2 |
|    e3 f3 g3 | | x3 |   | r3 |
|       e4 f4 | | x4 |   | r4 |
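Such systems are solved in O(n) operations by the Thomas algorithm, a specialization of Gauss elimination that touches only the three bands. A sketch (array layout is an assumption: `e[0]` and `g[-1]` are unused):

```python
# Thomas algorithm for a tridiagonal system with sub-diagonal e,
# main diagonal f, super-diagonal g, and right-hand side r.

def thomas(e, f, g, r):
    n = len(f)
    f, r = f[:], r[:]                     # work on copies
    for i in range(1, n):                 # forward elimination
        m = e[i] / f[i - 1]
        f[i] -= m * g[i - 1]
        r[i] -= m * r[i - 1]
    x = [0.0] * n
    x[-1] = r[-1] / f[-1]
    for i in range(n - 2, -1, -1):        # back substitution
        x[i] = (r[i] - g[i] * x[i + 1]) / f[i]
    return x

# illustrative system with exact solution (1, 1, 1)
print(thomas([0, 1, 1], [2, 2, 2], [1, 1, 0], [3, 4, 3]))  # → [1.0, 1.0, 1.0]
```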
Example #8
Cholesky Decomposition
Cholesky decomposition is used for symmetric matrices.
This algorithm is based on the fact that a symmetric
matrix can be decomposed, as in
[A] = [L][L]^T
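The factorization [A] = [L][L]^T can be built row by row from its defining equations. A minimal sketch, assuming [A] is symmetric positive definite (otherwise the square root fails):

```python
import math

# Cholesky decomposition A = L L^T for a symmetric positive definite
# matrix; returns the lower triangular factor L.

def cholesky(A):
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(A[i][i] - s)   # diagonal entry
            else:
                L[i][j] = (A[i][j] - s) / L[j][j]  # below-diagonal entry
    return L

print(cholesky([[4.0, 2.0], [2.0, 5.0]]))   # → [[2.0, 0.0], [1.0, 2.0]]
```

Only half of [A] is needed and only [L] is stored, which roughly halves both the storage and the operation count compared with a general LU decomposition.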
Gauss–Seidel method
Example #10
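A sketch of the Gauss-Seidel iteration in Python (a fixed iteration count is an illustrative simplification; in practice one stops on a relative-error tolerance):

```python
# Gauss-Seidel iteration: sweep the rows, always using the most recently
# updated values. Converges for diagonally dominant systems such as the
# one from Example #6.

def gauss_seidel(A, b, iters=25):
    n = len(b)
    x = [0.0] * n
    for _ in range(iters):
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]    # new value used immediately
    return x

A = [[3, -0.1, -0.2], [0.1, 7, -0.3], [0.3, -0.2, 10]]
print(gauss_seidel(A, [7.85, -19.3, 71.4]))   # approx [3, -2.5, 7]
```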
Example
Example
Successive over-relaxation
Symmetric successive over-relaxation
Jacobi method
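For contrast with Gauss-Seidel, a sketch of the Jacobi method, where every update in a sweep uses only values from the previous sweep (same illustrative fixed iteration count):

```python
# Jacobi iteration: compute a whole new vector from the old one each sweep,
# rather than updating components in place as Gauss-Seidel does.

def jacobi(A, b, iters=50):
    n = len(b)
    x = [0.0] * n
    for _ in range(iters):
        x_new = [0.0] * n
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x_new[i] = (b[i] - s) / A[i][i]
        x = x_new                          # swap in the whole new vector
    return x

A = [[3, -0.1, -0.2], [0.1, 7, -0.3], [0.3, -0.2, 10]]
print(jacobi(A, [7.85, -19.3, 71.4]))      # approx [3, -2.5, 7]
```

Jacobi typically converges more slowly than Gauss-Seidel on the same system, but its sweeps are independent component-wise, which makes it easier to parallelize.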
Example #11
Any Questions?