US - TMC - 04 - Linear Equations - 2024


TMC: Linear Equations

Presenter: Dr. Ha Viet Uyen Synh.


MOTIVATION

In matrix form: A x = b
MATHEMATICAL BACKGROUND
A matrix consists of a rectangular array of elements
represented by a single symbol.
[A] is the shorthand notation for the matrix and aij
designates an individual element of the matrix.
A horizontal set of elements is called a row and a
vertical set is called a column. The first subscript i
always designates the number of the row in which the
element lies. The second subscript j designates the
column.
A matrix with n rows and m columns is said to have a
dimension of n by m (or n × m) and is referred to as an
n by m matrix.
Matrices where n = m are called square matrices.
The diagonal consisting of the elements aii is termed the
principal or main diagonal of the matrix.
Special Types of Square Matrices
Matrix Operating Rules
Addition
cij = aij + bij

Subtraction
dij = eij − fij

Addition is commutative:

[A] + [B] = [B] + [A]
Addition and subtraction are also associative, that is,
([A] + [B]) + [C] = [A] + ([B] + [C])

The multiplication of a matrix [A] by a scalar g is obtained by
multiplying every element of [A] by g, so that [B] = g[A] has
elements bij = g aij.
A Simple Method for Multiplying Two Matrices
Matrix Operating Rules
The transpose of a matrix involves transforming its rows into columns
and its columns into rows.

The trace of a matrix is the sum of the elements on its principal
diagonal. It is designated as tr[A].
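These rules map directly onto NumPy array operations. A minimal sketch (the matrices here are arbitrary illustrations, not taken from the slides):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.array([[5.0, 6.0],
              [7.0, 8.0]])
g = 2.0

C = A + B          # addition: c_ij = a_ij + b_ij
D = A - B          # subtraction: d_ij = a_ij - b_ij
E = g * A          # scalar multiplication: every element times g
F = A @ B          # matrix product: rows of A dotted with columns of B

print(np.allclose(A + B, B + A))  # addition is commutative -> True
print(A.T)                        # transpose: rows become columns
print(np.trace(A))                # trace: sum of the main-diagonal elements
```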

Existence and Solutions

For an M × N system $A x = b$:

M = N (square matrix): if A is nonsingular, a unique solution exists.

M < N: more variables than equations; the general solution requires the
null space of A, i.e. the solutions of $A x = 0$ (found, e.g., via the SVD).

M > N: one can ask for the least-squares solution, e.g., from the normal
equations $(A^T A)\,x = A^T b$.
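A minimal NumPy sketch of the three cases (the example matrices are ours, chosen only for illustration):

```python
import numpy as np

# M = N, nonsingular: a unique solution exists.
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([9.0, 8.0])
x_unique = np.linalg.solve(A, b)

# M < N: more variables than equations; the null space of A x = 0
# is spanned by the rows of Vt beyond rank(A) (here rank 1 of 3).
A_wide = np.array([[1.0, 2.0, 3.0]])
U, s, Vt = np.linalg.svd(A_wide)
null_space = Vt[1:, :].T

# M > N: least-squares solution, equivalent to solving (A^T A) x = A^T b.
A_tall = np.array([[1.0, 1.0],
                   [1.0, 2.0],
                   [1.0, 3.0]])
b_tall = np.array([1.0, 2.0, 2.0])
x_ls, residuals, rank, sv = np.linalg.lstsq(A_tall, b_tall, rcond=None)
```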

PART A

ELIMINATION METHODS
Method of Elimination
The basic strategy is to successively solve one of the
equations of the set for one of the unknowns and to
eliminate that variable from the remaining equations by
substitution.

The elimination of unknowns can be extended to systems with more
than two or three equations; however, the method becomes extremely
tedious to solve by hand.

1. Graphical Method

For two equations:

$$a_{11}x_1 + a_{12}x_2 = b_1$$
$$a_{21}x_1 + a_{22}x_2 = b_2$$

Solve both equations for x2:

$$x_2 = -\left(\frac{a_{11}}{a_{12}}\right)x_1 + \frac{b_1}{a_{12}} \quad\Longrightarrow\quad x_2 = (\text{slope})\,x_1 + \text{intercept}$$

$$x_2 = -\left(\frac{a_{21}}{a_{22}}\right)x_1 + \frac{b_2}{a_{22}}$$
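A short matplotlib sketch of this construction; the coefficients are example values of our choosing, picked so the lines cross at a clean point:

```python
import numpy as np
import matplotlib.pyplot as plt

# Example system (our choice): 3*x1 + 2*x2 = 18 and -x1 + 2*x2 = 2.
a11, a12, b1 = 3.0, 2.0, 18.0
a21, a22, b2 = -1.0, 2.0, 2.0

x1 = np.linspace(0.0, 8.0, 100)
plt.plot(x1, -(a11 / a12) * x1 + b1 / a12, label="equation 1")
plt.plot(x1, -(a21 / a22) * x1 + b2 / a22, label="equation 2")
plt.xlabel("$x_1$")
plt.ylabel("$x_2$")
plt.legend()
plt.show()   # the lines intersect at the solution (x1, x2) = (4, 3)
```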

Graphical solution of a set of two simultaneous linear algebraic
equations; the intersection of the lines represents the solution.

Figure 9.2: Graphical depiction of singular and ill-conditioned
systems: (a) no solution, (b) infinite solutions, and (c) an
ill-conditioned system where the slopes are so close that the point
of intersection is difficult to detect visually.

2. Determinants and Cramer’s Rule

The determinant can be illustrated for a set of three equations:

$$[A]\{X\} = \{B\}$$

where [A] is the coefficient matrix:

$$[A] = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix}$$
Determinant

$$D = \begin{vmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{vmatrix}$$

The minors are:

$$D_{11} = \begin{vmatrix} a_{22} & a_{23} \\ a_{32} & a_{33} \end{vmatrix} = a_{22}a_{33} - a_{32}a_{23}$$

$$D_{12} = \begin{vmatrix} a_{21} & a_{23} \\ a_{31} & a_{33} \end{vmatrix} = a_{21}a_{33} - a_{31}a_{23}$$

$$D_{13} = \begin{vmatrix} a_{21} & a_{22} \\ a_{31} & a_{32} \end{vmatrix} = a_{21}a_{32} - a_{31}a_{22}$$

and the determinant is the expansion along the first row:

$$D = a_{11}D_{11} - a_{12}D_{12} + a_{13}D_{13}$$
Cramer’s rule
Cramer’s rule expresses the solution of a system of linear equations
in terms of ratios of determinants of the array of coefficients of
the equations.

For example, x1 would be computed as:

$$x_1 = \frac{\begin{vmatrix} b_1 & a_{12} & a_{13} \\ b_2 & a_{22} & a_{23} \\ b_3 & a_{32} & a_{33} \end{vmatrix}}{D}$$

Example #1
Example #2
Use Cramer’s rule to solve
0.3x1 + 0.52x2 + x3 = −0.01
0.5x1 + x2 + 1.9x3 = 0.67
0.1x1 + 0.3x2 + 0.5x3 = −0.44
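A minimal NumPy sketch of Cramer's rule applied to this system (using NumPy's determinant rather than hand expansion):

```python
import numpy as np

A = np.array([[0.3, 0.52, 1.0],
              [0.5, 1.0,  1.9],
              [0.1, 0.3,  0.5]])
b = np.array([-0.01, 0.67, -0.44])

D = np.linalg.det(A)
x = np.empty(3)
for i in range(3):
    Ai = A.copy()
    Ai[:, i] = b                   # replace column i with the right-hand side
    x[i] = np.linalg.det(Ai) / D   # Cramer's rule: x_i = det(A_i) / D
print(x)
```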
3. Naive Gauss Elimination
Extension of method of elimination to large sets of
equations by developing a systematic scheme or
algorithm to eliminate unknowns and to back substitute.

As in the case of the solution of two equations, the technique for
n equations consists of two phases:
- Forward elimination of unknowns
- Back substitution
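A minimal sketch of both phases (no pivoting, hence "naive"; a zero pivot will fail, as discussed under the pitfalls below):

```python
import numpy as np

def naive_gauss(A, b):
    """Solve A x = b by forward elimination followed by back substitution."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    # Phase 1: forward elimination of unknowns.
    for k in range(n - 1):
        for i in range(k + 1, n):
            factor = A[i, k] / A[k, k]
            A[i, k:] -= factor * A[k, k:]
            b[i] -= factor * b[k]
    # Phase 2: back substitution on the upper-triangular system.
    x = np.empty(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x
```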

4. Gauss-Jordan Elimination
Basic facts about linear equations

Interchanging any two rows of A and b does not change the solution x.

Replacing a row by a linear combination of itself and any other row
does not change x.

Interchanging columns permutes the solution.


Gauss-Jordan Elimination
When an unknown is eliminated, it is eliminated from all other
equations rather than just the subsequent ones.

All rows are normalized by dividing them by their pivot elements.

The elimination step results in an identity matrix.

Consequently, it is not necessary to employ back substitution to
obtain the solution.
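A minimal sketch of the procedure just described (no pivoting, for clarity):

```python
import numpy as np

def gauss_jordan(A, b):
    """Reduce [A | b] to [I | x]: normalize each pivot row, then eliminate
    the unknown from ALL other rows, so no back substitution is needed."""
    Ab = np.hstack([A.astype(float), b.astype(float).reshape(-1, 1)])
    n = len(b)
    for k in range(n):
        Ab[k] = Ab[k] / Ab[k, k]           # normalize row k by its pivot
        for i in range(n):
            if i != k:
                Ab[i] -= Ab[i, k] * Ab[k]  # eliminate x_k from every other row
    return Ab[:, -1]                       # last column now holds the solution
```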
Example #3
Pitfalls of Elimination Methods

Division by zero. It is possible that during both the elimination and
back-substitution phases a division by zero can occur.

Round-off errors. Accumulated round-off can make the system
effectively singular, or numerical instability can make the answer
wrong.

Ill-conditioned systems. Systems where small changes in coefficients
result in large changes in the solution. Alternatively, this happens
when two or more equations are nearly identical, so that a wide range
of answers approximately satisfies the equations. Since round-off
errors can induce small changes in the coefficients, these changes
can lead to large solution errors.

Techniques for Improving Solutions

1. Use of more significant figures.

2. Pivoting. If a pivot element is zero, the normalization step leads
to division by zero. The same problem may arise when the pivot element
is close to zero. The problem can be avoided by:
Partial pivoting. Switching the rows so that the largest available
element is the pivot element.
Complete pivoting. Searching all rows and columns for the largest
element and then switching both rows and columns.

3. Scaling.
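A minimal sketch of Gauss elimination with partial pivoting (row switching only; complete pivoting would also search columns):

```python
import numpy as np

def gauss_partial_pivot(A, b):
    """Gauss elimination with partial pivoting and back substitution."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    for k in range(n - 1):
        # Partial pivoting: pick the row with the largest entry in column k.
        p = k + np.argmax(np.abs(A[k:, k]))
        if p != k:
            A[[k, p]] = A[[p, k]]
            b[[k, p]] = b[[p, k]]
        for i in range(k + 1, n):
            factor = A[i, k] / A[k, k]
            A[i, k:] -= factor * A[k, k:]
            b[i] -= factor * b[k]
    x = np.empty(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x
```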

Example #4
Use Gauss elimination to solve
0.0003x1 + 3.0000x2 = 2.0001
1.0000x1 + 1.0000x2 = 1.0000
Note that in this form the first pivot element, a11 = 0.0003, is very close to zero.
Then repeat the computation, but partial pivot by reversing the order of the
equations. The exact solution is x1 = 1/3 and x2 = 2/3.

Multiplying the first equation by 1/(0.0003) yields

x1 + 10,000x2 = 6667

which can be used to eliminate x1 from the second equation:

−9999x2 = −6666

This gives x2 = 2/3. Without pivoting, back-substituting this rounded
value into the first equation produces a badly erroneous x1; with the
order of the equations reversed, both unknowns are obtained accurately.
Example #5
Problem Statement.

(a) Solve the following set of equations using Gauss elimination and a pivoting
strategy:
2x1 + 100,000x2 = 100,000
x1 + x2 = 2

(b) Repeat the solution after scaling the equations so that the maximum
coefficient in each row is 1.

(c) Finally, use the scaled coefficients to determine whether pivoting
is necessary. However, actually solve the equations with the original
coefficient values. For all cases, retain only three significant
figures. Note that the correct answers are x1 = 1.00002 and
x2 = 0.99998 or, for three significant figures, x1 = x2 = 1.00.
Solution.
(a) Without scaling, forward elimination is applied to give
2x1 + 100,000x2 = 100,000
−50,000x2 = −50,000
which can be solved by back substitution for
x2 = 1.00
x1 = 0.00
Although x2 is correct, x1 is 100 percent in error because of round-off.

(b) Scaling transforms the original equations to
0.00002x1 + x2 = 1
x1 + x2 = 2
Therefore, the rows should be pivoted to put the greatest value on the diagonal.
x1 + x2 = 2
0.00002x1 + x2 = 1
Forward elimination yields
x1 + x2 = 2
x2 = 1.00
which can be solved for
x1 = x2 = 1
Thus, scaling leads to the correct answer.
(c) The scaled coefficients indicate that pivoting is necessary. We therefore pivot
but retain the original coefficients to give
x1 + x2 = 2
2x1 + 100,000x2 = 100,000
Forward elimination yields
x1 + x2 = 2
100,000x2 = 100,000
which can be solved for the correct answer: x1 = x2 = 1. Thus, scaling was useful
in determining whether pivoting was necessary, but the equations themselves
did not require scaling to arrive at a correct result.
PART B

LU DECOMPOSITION
If
[L] is a lower triangular matrix and
[U] is an upper triangular matrix,
then [A]{X}={B} can be decomposed into two matrices [L] and [U]
such that
[L][U]=[A]
[L][U]{X}={B}

Similar to the first phase of Gauss elimination, consider
[U]{X}={D}
[L]{D}={B}
[L]{D}={B} is used to generate an intermediate
vector {D} by forward substitution
Then, [U]{X}={D} is used to get {X} by backward
substitution.

LU Decomposition

Crout’s Algorithm

Set $\ell_{ii} = 1$ for all $i$.

For each $j = 1, 2, 3, \ldots, N$:

(a) for $i = 1, 2, \ldots, j$:

$$u_{ij} = a_{ij} - \sum_{k=1}^{i-1} \ell_{ik}\, u_{kj}$$

(b) for $i = j+1, j+2, \ldots, N$:

$$\ell_{ij} = \frac{1}{u_{jj}} \left( a_{ij} - \sum_{k=1}^{j-1} \ell_{ik}\, u_{kj} \right)$$
Solving linear equations
LU = A
Thus A x = (LU) x = L (U x) = b
Let y = U x, then solve y in L y = b by forward
substitution
Solve x in U x = y by backward substitution
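A minimal sketch of Crout's algorithm and the two substitution passes (no pivoting; the function names are ours). The demo system is the one from Example #6 below:

```python
import numpy as np

def crout_lu(A):
    """Crout-style LU factorization with a unit diagonal on L (no pivoting)."""
    n = A.shape[0]
    L = np.eye(n)
    U = np.zeros((n, n))
    for j in range(n):
        for i in range(j + 1):                           # step (a): fill U
            U[i, j] = A[i, j] - L[i, :i] @ U[:i, j]
        for i in range(j + 1, n):                        # step (b): fill L
            L[i, j] = (A[i, j] - L[i, :j] @ U[:j, j]) / U[j, j]
    return L, U

def lu_solve_triangular(L, U, b):
    """Solve L y = b by forward substitution, then U x = y by back substitution."""
    n = len(b)
    y = np.empty(n)
    for i in range(n):
        y[i] = b[i] - L[i, :i] @ y[:i]
    x = np.empty(n)
    for i in range(n - 1, -1, -1):
        x[i] = (y[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

A = np.array([[3.0, -0.1, -0.2],
              [0.1,  7.0, -0.3],
              [0.3, -0.2, 10.0]])
b = np.array([7.85, -19.3, 71.4])
L, U = crout_lu(A)
print(np.allclose(L @ U, A))              # True: the factorization reproduces A
print(lu_solve_triangular(L, U, b))       # approximately [3.0, -2.5, 7.0]
```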
LU Decomposition

For a 3 × 3 system, the entries of [L] come from the elimination
multipliers and the entries of [U] from the reduced coefficients:

$$\ell_{21} = \frac{a_{21}}{a_{11}}, \qquad \ell_{31} = \frac{a_{31}}{a_{11}}$$

$$a'_{22} = a_{22} - \ell_{21}a_{12}, \qquad a'_{23} = a_{23} - \ell_{21}a_{13}$$

$$a'_{32} = a_{32} - \ell_{31}a_{12}, \qquad \ell_{32} = \frac{a'_{32}}{a'_{22}}$$

$$a'_{33} = a_{33} - \ell_{31}a_{13} - \ell_{32}a'_{23}$$
Example #6
Use LU decomposition to solve
3x1 − 0.1x2 − 0.2x3 = 7.85
0.1x1 + 7x2 − 0.3x3 = −19.3
0.3x1 − 0.2x2 + 10x3 = 71.4

Inverting a matrix
When solving systems of equations, b is usually treated as a vector
with a length equal to the height of matrix A. Instead of the vector
b, suppose we have a matrix B, where B is an n-by-p matrix, so that
we are trying to find a matrix X (also an n-by-p matrix):
AX = LUX = B
We can use the same algorithm presented earlier to solve for each
column of matrix X. Now suppose that B is the identity matrix of
size n. It follows that the resulting X must be the inverse of A.
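A minimal sketch using SciPy's LU routines (assuming SciPy is available): factor A once, then solve against each column of the identity.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

A = np.array([[3.0, -0.1, -0.2],
              [0.1,  7.0, -0.3],
              [0.3, -0.2, 10.0]])
n = A.shape[0]

lu, piv = lu_factor(A)                                  # one O(n^3) factorization
columns = [lu_solve((lu, piv), e) for e in np.eye(n)]   # one cheap solve per e_j
A_inv = np.column_stack(columns)

print(np.allclose(A @ A_inv, np.eye(n)))   # True
```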
Compute det(A)

• Definition of the determinant:

$$\det(A) = \sum_{P} (-1)^{P}\, a_{1,i_1}\, a_{2,i_2} \cdots a_{n,i_n}$$

where the sum runs over all permutations $(i_1, i_2, \ldots, i_n)$ of
the column indices and $(-1)^P$ is the sign of the permutation.

• Properties of the determinant: since det(LU) = det(L) det(U) and
det(L) = 1 for a unit-diagonal [L],

$$\det(A) = \det(U) = \prod_{j=1}^{N} u_{jj}$$
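A minimal SciPy sketch (SciPy's `lu` pivots, so the permutation's sign, det(P) = ±1, must also be included):

```python
import numpy as np
from scipy.linalg import lu

A = np.array([[3.0, -0.1, -0.2],
              [0.1,  7.0, -0.3],
              [0.3, -0.2, 10.0]])
P, L, U = lu(A)                             # A = P @ L @ U, L has a unit diagonal
detA = np.linalg.det(P) * np.prod(np.diag(U))
print(np.isclose(detA, np.linalg.det(A)))   # True
```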
Example #7
Employ LU decomposition to determine the matrix inverse for the system from
the previous example.
PART C

SPECIAL MATRICES & GAUSS-SEIDEL METHOD
Special Matrices and Gauss-Seidel

Certain matrices have particular structures that can be exploited to
develop efficient solution schemes.

A banded matrix is a square matrix that has all elements equal to
zero, with the exception of a band centered on the main diagonal.
These matrices typically occur in the solution of differential
equations.

The dimensions of a banded system can be quantified by two
parameters: the band width BW and the half-bandwidth HBW. These two
values are related by BW = 2·HBW + 1.

Gauss elimination and conventional LU decomposition are inefficient
for banded equations because they spend effort on the zeros outside
the band; moreover, pivoting is usually unnecessary for such systems,
so specialized schemes can be much faster.

Tridiagonal Systems
A tridiagonal system has a bandwidth of 3:

$$\begin{bmatrix} f_1 & g_1 & & \\ e_2 & f_2 & g_2 & \\ & e_3 & f_3 & g_3 \\ & & e_4 & f_4 \end{bmatrix} \begin{Bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{Bmatrix} = \begin{Bmatrix} r_1 \\ r_2 \\ r_3 \\ r_4 \end{Bmatrix}$$

• An efficient LU decomposition method, called the Thomas algorithm,
can be used to solve such a system. The algorithm consists of three
steps: decomposition, forward substitution, and back substitution,
and it has all the advantages of LU decomposition.
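A minimal sketch of the Thomas algorithm in the e/f/g/r notation above (no pivoting, which is typically safe for diagonally dominant tridiagonal systems):

```python
import numpy as np

def thomas(e, f, g, r):
    """Solve a tridiagonal system. e: sub-diagonal (e[0] unused),
    f: main diagonal, g: super-diagonal (g[-1] unused), r: right-hand side."""
    n = len(f)
    f = np.asarray(f, dtype=float).copy()
    r = np.asarray(r, dtype=float).copy()
    # Decomposition / forward elimination in O(n).
    for k in range(1, n):
        factor = e[k] / f[k - 1]
        f[k] -= factor * g[k - 1]
        r[k] -= factor * r[k - 1]
    # Back substitution.
    x = np.empty(n)
    x[-1] = r[-1] / f[-1]
    for k in range(n - 2, -1, -1):
        x[k] = (r[k] - g[k] * x[k + 1]) / f[k]
    return x
```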

Example #8
Cholesky Decomposition
Cholesky decomposition is used for symmetric matrices.
This algorithm is based on the fact that a symmetric
matrix can be decomposed, as in
$$[A] = [L][L]^{T}$$

For the kth row:

$$\ell_{ki} = \frac{a_{ki} - \sum_{j=1}^{i-1} \ell_{ij}\,\ell_{kj}}{\ell_{ii}}, \qquad i = 1, 2, \ldots, k-1$$

$$\ell_{kk} = \sqrt{a_{kk} - \sum_{j=1}^{k-1} \ell_{kj}^{2}}$$
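A minimal sketch of the row-by-row formulas above (assumes [A] is symmetric positive definite; no error checking):

```python
import numpy as np

def cholesky(A):
    """Return lower-triangular L with A = L @ L.T."""
    n = A.shape[0]
    L = np.zeros((n, n))
    for k in range(n):
        for i in range(k):          # off-diagonal entries of row k
            L[k, i] = (A[k, i] - L[i, :i] @ L[k, :i]) / L[i, i]
        L[k, k] = np.sqrt(A[k, k] - L[k, :k] @ L[k, :k])
    return L
```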
Example #9
Gauss-Seidel Method
Iterative or approximate methods provide an alternative
to the elimination methods. The Gauss-Seidel method is
the most commonly used iterative method.

The system [A]{X}={B} is reshaped by solving the first equation for
x1, the second equation for x2, the third for x3, …, and the nth
equation for xn. For conciseness, we will limit ourselves to a 3×3
set of equations.

Gauss–Seidel method

For a 3 × 3 system, each iteration sweeps through the equations,
always using the most recently computed values:

$$x_1 = \frac{b_1 - a_{12}x_2 - a_{13}x_3}{a_{11}}, \qquad x_2 = \frac{b_2 - a_{21}x_1 - a_{23}x_3}{a_{22}}, \qquad x_3 = \frac{b_3 - a_{31}x_1 - a_{32}x_2}{a_{33}}$$
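A minimal sketch for a general n × n system (the function name and convergence test are ours; convergence is guaranteed for diagonally dominant matrices):

```python
import numpy as np

def gauss_seidel(A, b, tol=1e-8, max_iter=100):
    """Iterate, using each newly computed x_i immediately within the sweep."""
    n = len(b)
    x = np.zeros(n)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
            x[i] = (b[i] - s) / A[i, i]
        if np.max(np.abs(x - x_old)) < tol:
            break
    return x

# The diagonally dominant system from Example #6 converges quickly:
A = np.array([[3.0, -0.1, -0.2],
              [0.1,  7.0, -0.3],
              [0.3, -0.2, 10.0]])
b = np.array([7.85, -19.3, 71.4])
print(gauss_seidel(A, b))   # approximately [3.0, -2.5, 7.0]
```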
Example #10
Successive over-relaxation
Symmetric successive over-relaxation
Jacobi method
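For contrast with Gauss-Seidel, a minimal Jacobi sketch: every update in a sweep uses only values from the previous sweep (with over-relaxation one would additionally blend the new and old iterates through a factor ω):

```python
import numpy as np

def jacobi(A, b, tol=1e-8, max_iter=200):
    """Jacobi iteration: x_new = D^{-1} (b - R x), with D the diagonal of A
    and R its off-diagonal part; each sweep is fully parallelizable."""
    D = np.diag(A)
    R = A - np.diagflat(D)
    x = np.zeros_like(b, dtype=float)
    for _ in range(max_iter):
        x_new = (b - R @ x) / D
        if np.max(np.abs(x_new - x)) < tol:
            return x_new
        x = x_new
    return x
```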
Example #11
Any Questions?

[email protected]
