Direct Methods for Solving Linear Systems

Numerical Analysis (Math 3313)
First Semester 2018-2019
Chapter 6
Direct Methods for Solving Linear Systems

In this chapter we study methods of solving large linear systems of equations. Linear systems
are used in many problems in engineering and science, as well as with applications of mathematics
to the social sciences and the quantitative study of business and economic problems. In particular
linear systems are used in solving differential equations numerically.
By the term direct methods we mean methods that theoretically give the exact solution to the
system in a finite number of steps such as the Gaussian elimination. In practice, of course, the
solution obtained will be contaminated by the round-off error that is involved with the arithmetic
being used. Analyzing the effect of this round-off error and determining ways to keep it under control
will be a major component of this chapter. Direct methods are in contrast to iterative methods that
we will study in Chapter 7.

6.1 Linear Systems of Equations


A linear system of m equations in n variables (unknowns) has the form
E1 : a11 x1 + a12 x2 + · · · + a1n xn = b1
E2 : a21 x1 + a22 x2 + · · · + a2n xn = b2
..
.
Em : am1 x1 + am2 x2 + · · · + amn xn = bm .
We use three operations to simplify the linear system given in this section:
(1) Equation Ei can be multiplied by any nonzero constant λ with the resulting equation used in
place of Ei . This operation is denoted (λEi ) → (Ei ).
(2) Equation Ej can be multiplied by any constant λ and added to equation Ei with the resulting
equation used in place of Ei . This operation is denoted (Ei + λEj ) → (Ei ).
(3) Equations Ei and Ej can be transposed in order. This operation is denoted (Ei ) ↔ (Ej ).
By a sequence of these operations, a linear system will be systematically transformed into a new
linear system that is more easily solved and has the same solutions.
When performing the operations, we do not need to write out the full equations at each step or
to carry the variables x1 , x2 , · · · , xn through the calculations, provided they always remain in the same
column. The only variation from system to system occurs in the coefficients of the unknowns and
in the values on the right side of the equations. For this reason, a linear system is often replaced
by a matrix, which contains all the information about the system that is necessary to determine its
solution, but in a compact form, and one that is easily represented in a computer.

Matrices and vectors
A matrix is a rectangular array of numbers or other mathematical objects arranged in rows and
columns. It has the general form
 
a11 a12 · · · a1n
 a21 a22 · · · a2n 
A =  .. ..  .
 
..
 . . ··· . 
am1 am2 · · · amn

In a matrix, both the entries themselves and their positions in the array are important.
The size of a matrix is defined by the number of rows and columns that it contains. A matrix
with m rows and n columns is called an m × n matrix or m−by−n matrix, while m and n are called
its dimensions. The individual items in a matrix are called its elements or entries.
A square matrix is an n × n matrix. The main or principal diagonal of a square matrix is the
line of elements aii , i = 1, · · · , n, from upper left to lower right.
A 1 × n matrix y = [y1 y2 · · · yn ] is called an n-dimensional row vector, and an n × 1 matrix
$$x = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix}$$
is called an n-dimensional column vector.

Special types of square matrices


(1) A diagonal matrix is a square matrix in which all elements off the main diagonal are zero:
$$A = \begin{pmatrix} a_{11} & 0 & \cdots & 0 \\ 0 & a_{22} & \ddots & \vdots \\ \vdots & \ddots & \ddots & 0 \\ 0 & \cdots & 0 & a_{nn} \end{pmatrix}.$$

(2) An upper triangular matrix is a square matrix in which all the elements below the main diagonal
are zero:
$$A = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ 0 & a_{22} & \cdots & a_{2n} \\ \vdots & \ddots & \ddots & \vdots \\ 0 & \cdots & 0 & a_{nn} \end{pmatrix}.$$

(3) A lower triangular matrix is a square matrix in which all the elements above the main diagonal
are zero:
$$A = \begin{pmatrix} a_{11} & 0 & \cdots & 0 \\ a_{21} & a_{22} & \ddots & \vdots \\ \vdots & \vdots & \ddots & 0 \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{pmatrix}.$$

Matrix form of a linear system
A linear system
a11 x1 + a12 x2 + · · · + a1n xn = b1
a21 x1 + a22 x2 + · · · + a2n xn = b2
..
.
am1 x1 + am2 x2 + · · · + amn xn = bm
can be written in matrix form as
Ax = b,
where
$$A = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{pmatrix}, \quad b = \begin{pmatrix} b_1 \\ b_2 \\ \vdots \\ b_m \end{pmatrix}, \quad x = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix}.$$

The Elimination Method


Definition. (Elementary row operations)
The following three operations are called the elementary row operations:

(1) Multiplying any row Ri of a matrix by a number λ ≠ 0. This operation will be denoted by
Ri → λRi .

(2) Adding a multiple λ of a row Ri to another row Rj . This operation will be denoted by
Rj → (Rj + λRi ).

(3) Interchanging the order of two rows Ri and Rj . This operation will be denoted by Rj ↔ Ri .
 
Example 1. Let
$$A = \begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{pmatrix}.$$
If we multiply the second row of A by 2, then we obtain the new matrix
$$B = \begin{pmatrix} 1 & 2 & 3 \\ 8 & 10 & 12 \\ 7 & 8 & 9 \end{pmatrix}.$$
If we add −2 times the first row to the third row, then we obtain the new matrix
$$C = \begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 5 & 4 & 3 \end{pmatrix}.$$
If we interchange the order of the second and third rows, then we obtain the new matrix
$$D = \begin{pmatrix} 1 & 2 & 3 \\ 7 & 8 & 9 \\ 4 & 5 & 6 \end{pmatrix}.$$

Definition. (Gaussian elimination)


Gaussian elimination is a method to solve a linear system Ax = b by transforming the matrix A to
an upper triangular matrix by applying elementary row operations to the augmented matrix [A : b].

Remark. Applying elementary row operations to the augmented matrix [A : b] does not change the
solution of a linear system Ax = b.


Definition. (Back-substitution)
Let A be an upper triangular matrix. Back-substitution is solving the linear system Ax = b by
finding the unknown xn from the last equation, then substituting xn into the (n − 1)st equation to
find the unknown xn−1 and so on.
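The two stages just defined, forward elimination followed by back-substitution, can be sketched in Python. This is an illustrative sketch, not part of the notes; the function name `gauss_solve` is my own, and it assumes no pivot is ever zero (pivoting is treated in Section 6.2).

```python
# Illustrative sketch: naive Gaussian elimination followed by
# back-substitution, assuming every pivot a[i][i] is nonzero.

def gauss_solve(A, b):
    """Solve Ax = b. A is a list of n row-lists, b a list of n floats."""
    n = len(A)
    A = [row[:] for row in A]        # work on copies
    b = b[:]
    # Forward elimination: zero out the entries below each pivot A[i][i].
    for i in range(n):
        for k in range(i + 1, n):
            m = A[k][i] / A[i][i]    # multiplier for row k
            for j in range(i, n):
                A[k][j] -= m * A[i][j]
            b[k] -= m * b[i]
    # Back-substitution: solve the triangular system from the bottom up.
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x
```

Applied to the worked example that follows (2x1 + 2x2 − 2x3 = 2, 3x1 + 5x2 + x3 = 5, x1 + 2x2 = 6), this returns x1 = −12, x2 = 9, x3 = −4.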

Example. Use Gaussian elimination and back-substitution to solve the linear system

2x1 + 2x2 − 2x3 = 2,


3x1 + 5x2 + x3 = 5,
x1 + 2x2 = 6.

Solution.


Exercises
Page 368: 3-10

6.2 Pivoting Strategies


In Gaussian elimination, to eliminate the elements below a diagonal element aii , we use the row
operations
$$R_k \longrightarrow R_k - \frac{a_{ki}}{a_{ii}} R_i, \quad k = i + 1, i + 2, \cdots, n.$$
Thus problems occur when a diagonal element aii (called the pivot element) is zero or close to zero.
To avoid such problems we may interchange rows and columns so that the element aii has the largest
magnitude. This process is called complete pivoting. If we interchange rows only, then the process
is called partial pivoting.

Definition. (Partial pivoting)


Partial pivoting is interchanging rows so that |aii | ≥ |aji | for j = i + 1, i + 2, · · · , n, before eliminating
the elements below a diagonal element aii .
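The partial-pivoting step can be added to the elimination sketch as follows. Again this is my own illustrative code (`gauss_solve_pivot` is not a name from the notes): before eliminating column i, it swaps in the row whose entry in column i has the largest magnitude, which keeps every multiplier at most 1 in size.

```python
# Sketch of Gaussian elimination with partial pivoting.

def gauss_solve_pivot(A, b):
    n = len(A)
    A = [row[:] for row in A]
    b = b[:]
    for i in range(n):
        # Partial pivoting: pick the row p >= i maximizing |A[p][i]|.
        p = max(range(i, n), key=lambda k: abs(A[k][i]))
        if p != i:
            A[i], A[p] = A[p], A[i]
            b[i], b[p] = b[p], b[i]
        for k in range(i + 1, n):
            m = A[k][i] / A[i][i]        # |m| <= 1 after pivoting
            for j in range(i, n):
                A[k][j] -= m * A[i][j]
            b[k] -= m * b[i]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x
```

On the system of Example 1 below (exact solution x1 = x2 = x3 = 1), double precision with pivoting recovers the solution essentially exactly; the notes' four-digit arithmetic is where the unpivoted method visibly degrades.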

Example 1. Use Gaussian elimination and back-substitution to solve the linear system

−0.002x1 + 4x2 + 4x3 = 7.998,


−2x1 + 2.906x2 − 5.387x3 = −4.481,
3x1 − 4.031x2 − 3.112x3 = −4.143.

Use four-digit arithmetic with rounding.

Solution.


Remark. Exact solution: x1 = x2 = x3 = 1.

Example 2. Use Gaussian elimination with partial pivoting and back-substitution to solve the linear
system
−0.002x1 + 4x2 + 4x3 = 7.998,
−2x1 + 2.906x2 − 5.387x3 = −4.481,
3x1 − 4.031x2 − 3.112x3 = −4.143.
Use four-digit arithmetic with rounding.

Solution.


Remark. We use pivoting to minimize round-off error and to avoid division by zero.

Gauss-Jordan Method
The Gauss-Jordan method is an elimination method that transforms the coefficient matrix A of a linear
system Ax = b to the identity matrix I. As a result of this process, the right-hand side b will be
transformed to the solution of the system.
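The method described above can be sketched as follows (an illustrative sketch, not from the notes; `gauss_jordan_solve` is my own name, and no pivoting is performed, so nonzero pivots are assumed throughout):

```python
# Gauss-Jordan sketch: reduce the augmented matrix [A : b] until the
# left block becomes the identity; the right column is then x.

def gauss_jordan_solve(A, b):
    n = len(A)
    M = [row[:] + [bv] for row, bv in zip(A, b)]   # augmented matrix [A : b]
    for i in range(n):
        piv = M[i][i]
        M[i] = [v / piv for v in M[i]]             # scale so the pivot becomes 1
        for k in range(n):
            if k != i and M[k][i] != 0:
                m = M[k][i]                        # eliminate above AND below
                M[k] = [vk - m * vi for vk, vi in zip(M[k], M[i])]
    return [row[-1] for row in M]                  # right-hand column is x
```

Unlike plain Gaussian elimination, each pivot clears its entire column, so no back-substitution is needed afterward.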

Example. Use Gauss-Jordan method to solve the linear system

2x1 + 2x2 − 2x3 = 2,


3x1 + 5x2 + x3 = 5,
x1 + 2x2 = 6.

Solution.


Remarks.

(1) Pivoting is normally used with Gauss-Jordan method to preserve arithmetic accuracy.

(2) To solve linear systems with the same coefficient matrix A, but with different right-hand sides
b(1) , b(2) , · · · , b(k) , we apply the row operations to

[A : b(1) , b(2) , · · · , b(k) ]

and reduce it to
[U : b̃(1) , b̃(2) , · · · , b̃(k) ].
Then the solutions of the systems are obtained by applying back-substitution to the equivalent
systems
U x = b̃(j) , j = 1, 2, · · · , k.
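Remark (2) can be sketched in Python as below. This is my own illustration (`gauss_solve_multi` is not a name from the notes): one forward-elimination pass is shared by all the right-hand sides, then back-substitution is applied once per right-hand side.

```python
# Sketch: eliminate once on [A : b(1), ..., b(k)], then back-substitute
# per right-hand side. Assumes nonzero pivots (no pivoting shown).

def gauss_solve_multi(A, bs):
    """Solve Ax = b for each right-hand side b in the list bs."""
    n = len(A)
    M = [row[:] + [b[i] for b in bs] for i, row in enumerate(A)]
    for i in range(n):                      # one elimination pass for all systems
        for k in range(i + 1, n):
            m = M[k][i] / M[i][i]
            M[k] = [vk - m * vi for vk, vi in zip(M[k], M[i])]
    sols = []
    for c in range(len(bs)):                # back-substitution per right-hand side
        x = [0.0] * n
        for i in range(n - 1, -1, -1):
            s = sum(M[i][j] * x[j] for j in range(i + 1, n))
            x[i] = (M[i][n + c] - s) / M[i][i]
        sols.append(x)
    return sols
```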

Example. Solve the systems Ax = b(i) by Gaussian elimination if


     
$$A = \begin{pmatrix} 4 & -1 & 3 \\ 2 & 3 & 7 \\ 3 & 6 & -1 \end{pmatrix}, \quad b^{(1)} = \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix}, \quad b^{(2)} = \begin{pmatrix} -2 \\ 1 \\ 3 \end{pmatrix}.$$

Solution.


Scaling
Scaling is the operation of adjusting the coefficients of a set of equations so that they are all of the
same order of magnitude. To scale a linear system, we divide each equation by the magnitude of the
largest coefficient in each equation. That is, if
$$A = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{pmatrix}$$
and α_j = max{|a_{j1}|, · · · , |a_{jn}|}, then
$$[A \mid b] \xrightarrow{\text{scaling}} \begin{pmatrix} a_{11}/\alpha_1 & a_{12}/\alpha_1 & \cdots & a_{1n}/\alpha_1 & \big| & b_1/\alpha_1 \\ a_{21}/\alpha_2 & a_{22}/\alpha_2 & \cdots & a_{2n}/\alpha_2 & \big| & b_2/\alpha_2 \\ \vdots & \vdots & & \vdots & \big| & \vdots \\ a_{m1}/\alpha_m & a_{m2}/\alpha_m & \cdots & a_{mn}/\alpha_m & \big| & b_m/\alpha_m \end{pmatrix}.$$
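The scaling transformation above is simple to express in code. A minimal sketch (my own function name `scale_system`, dividing each row of [A | b] by the largest coefficient magnitude in that row of A):

```python
# Scaling sketch: each equation is divided by the magnitude of its
# largest coefficient, so all coefficients end up of size at most 1.

def scale_system(A, b):
    scaled_A, scaled_b = [], []
    for row, bv in zip(A, b):
        alpha = max(abs(v) for v in row)    # largest |coefficient| in this row
        scaled_A.append([v / alpha for v in row])
        scaled_b.append(bv / alpha)
    return scaled_A, scaled_b
```

For the example that follows, the row scale factors are 200, 100, and 50.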

Example. Consider the system Ax = b, where


   
$$A = \begin{pmatrix} 200 & 1 & 1 \\ -1 & 100 & 2 \\ -50 & 3 & -3 \end{pmatrix}, \quad b = \begin{pmatrix} 202 \\ 101 \\ -50 \end{pmatrix}.$$

(a) Solve the system by Gaussian elimination with pivoting.

(b) Solve the system by Gaussian elimination with pivoting and scaling.

Use three-digit arithmetic with rounding.

Solution.


Remark. The exact solution of the above example is x1 = x2 = x3 = 1. Thus we see that scaling
minimizes round-off error.

Exercises
Page 379: 1-34

6.3 Linear Algebra and Matrix Inversion


In this section we consider some algebra associated with matrices and show how it can be used to
solve problems involving linear systems.

Matrix Inversion
Definition. (1) An n × n matrix A is said to be nonsingular (or invertible) if there exists an n × n
matrix A−1 such that A−1 A = AA−1 = I.

(2) The matrix A−1 is called the inverse of A.

(3) A matrix without an inverse is called singular (or noninvertible).

How can we find the inverse of a matrix?


The Gauss-Jordan method can be used to find the inverse of a given matrix A. We augment the
given matrix A with the identity matrix of the same size. Then we reduce A to the identity matrix
I by row operations, performing the same operations on I. As a result I will be transformed to A−1 .
$$[A \mid I] \xrightarrow{\text{Gauss-Jordan}} [I \mid A^{-1}].$$
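This augmented-matrix procedure can be sketched in Python (my own illustration; `invert` is not a name from the notes, no pivoting is performed, and nonzero pivots are assumed):

```python
# Invert A by Gauss-Jordan on [A | I]: when the left block reaches I,
# the right block holds A^{-1}.

def invert(A):
    n = len(A)
    # augment A with the identity matrix of the same size
    M = [A[i][:] + [1.0 if j == i else 0.0 for j in range(n)] for i in range(n)]
    for i in range(n):
        piv = M[i][i]
        M[i] = [v / piv for v in M[i]]      # make the pivot 1
        for k in range(n):
            if k != i:
                m = M[k][i]                  # clear column i everywhere else
                M[k] = [vk - m * vi for vk, vi in zip(M[k], M[i])]
    return [row[n:] for row in M]            # right half is A^{-1}
```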
 
Example. Find the inverse of the matrix
$$A = \begin{pmatrix} 2 & 2 & -2 \\ 0 & -1 & 4 \\ 0 & 2 & 1 \end{pmatrix}.$$
Solution.


Remark. It is more efficient to use Gaussian elimination instead of Gauss-Jordan method to find
A−1 . In this case we solve the systems AA−1 = I.
 
Example. Use Gaussian elimination to find the inverse of the matrix
$$A = \begin{pmatrix} 2 & 2 & -2 \\ 0 & -1 & 4 \\ 0 & 2 & 1 \end{pmatrix}.$$
Solution.

Remarks.

(1) If A−1 exists, then the solution of the system Ax = b is given by x = A−1 b. However, this is not
the preferred method for solving Ax = b, since it takes more work than other methods.

(2) The inverse of a matrix is important for theoretical reasons.

Exercises
Page 390: 5-8, 13-18


6.4 The Determinant of a Matrix
The determinant of a matrix provides existence and uniqueness results for linear systems having the
same number of equations and unknowns. We will denote the determinant of a square matrix A by
det(A), but it is also common to use the notation |A|.
Cramer’s rule is a method for solving linear systems by determinants. However, it is too inefficient
for practical use: it requires many more multiplications and divisions than any other method
considered here.
In fact, the determinant of a matrix can be found by applying Gaussian elimination.

Definition. (Matrix determinant)


Suppose A is a square matrix.

(1) If A = [a] is a 1 × 1 matrix, then det(A) = a.

(2) If A is an n × n matrix with n > 1, then the minor Mij is the determinant of the (n − 1) × (n − 1)
submatrix of A obtained by deleting the ith row and jth column of the matrix A.

(3) The cofactor Aij associated with Mij is defined by Aij = (−1)i+j Mij .

(4) The determinant of the n × n matrix A, when n > 1, is given either by
$$\det(A) = \sum_{j=1}^{n} a_{ij} A_{ij} = \sum_{j=1}^{n} (-1)^{i+j} a_{ij} M_{ij}, \quad \text{for any } i = 1, 2, \cdots, n,$$
or by
$$\det(A) = \sum_{i=1}^{n} a_{ij} A_{ij} = \sum_{i=1}^{n} (-1)^{i+j} a_{ij} M_{ij}, \quad \text{for any } j = 1, 2, \cdots, n.$$

Although it appears that there are 2n different definitions of det(A), depending on which row or
column is chosen, all definitions give the same numerical result. The flexibility in the definition is
used in the following example. It is most convenient to compute det(A) across the row or down the
column with the most zeros.
 
Example. Find det(A) if
$$A = \begin{pmatrix} 2 & 2 & -2 \\ 4 & 3 & 0 \\ 0 & 2 & 1 \end{pmatrix}.$$
Solution.


Theorem 1. (Properties of the determinant)

 
(1) If
$$U = \begin{pmatrix} u_{11} & u_{12} & \cdots & u_{1n} \\ 0 & u_{22} & \cdots & u_{2n} \\ \vdots & & \ddots & \vdots \\ 0 & \cdots & 0 & u_{nn} \end{pmatrix}$$
is an upper triangular matrix, then det(U ) is the product of its diagonal elements; det(U ) = u11 u22 · · · unn .

(2) If
$$L = \begin{pmatrix} l_{11} & 0 & \cdots & 0 \\ l_{21} & l_{22} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ l_{n1} & l_{n2} & \cdots & l_{nn} \end{pmatrix}$$
is a lower triangular matrix, then det(L) is the product of its diagonal elements; that is, det(L) = l11 l22 · · · lnn .

(3) The determinant of a matrix is not changed when a multiple of one row is added to another of
the matrix.

(4) Interchanging two rows of a matrix changes the sign of its determinant.

(5) Multiplying a row of a matrix by a constant multiplies the value of its determinant by the same
constant. In other words, if B is obtained from A by applying the row operation Ri −→ cRi ,
then det(B) = c det(A).

(6) If A and B are n × n matrices, then det(AB) = det(A) det(B).


(7) If A has an inverse, then det(A−1 ) = 1/det(A).
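Properties (1), (3), and (4) together justify computing det(A) by Gaussian elimination, as mentioned at the start of this section. A sketch (my own code, here using partial pivoting for the row choice): row additions leave the determinant unchanged, each row swap flips its sign, and the final triangular matrix contributes the product of its diagonal.

```python
# det(A) via Gaussian elimination with partial pivoting.

def det(A):
    n = len(A)
    A = [row[:] for row in A]
    sign = 1.0
    for i in range(n):
        p = max(range(i, n), key=lambda k: abs(A[k][i]))
        if A[p][i] == 0.0:
            return 0.0                      # whole column is zero: singular
        if p != i:
            A[i], A[p] = A[p], A[i]
            sign = -sign                    # property (4): a swap flips the sign
        for k in range(i + 1, n):
            m = A[k][i] / A[i][i]
            for j in range(i, n):
                A[k][j] -= m * A[i][j]      # property (3): determinant unchanged
    prod = sign
    for i in range(n):
        prod *= A[i][i]                     # property (1): product of the diagonal
    return prod
```

For the 3 × 3 example used in this section, this gives det(A) = −18.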
 
Example. Find det(A) if
$$A = \begin{pmatrix} 2 & 2 & -2 \\ 4 & 3 & 0 \\ 0 & 2 & 1 \end{pmatrix}.$$
Solution.


The following theorem gives relations between nonsingularity, Gaussian elimination, linear sys-
tems, and determinants.
Theorem 2. The following statements are equivalent for any n × n matrix A:
(1) The equation Ax = 0 has the unique solution x = 0.
(2) The system Ax = b has a unique solution for any n−dimensional column vector b.
(3) The matrix A is nonsingular; that is, A−1 exists.
(4) det(A) ≠ 0.
(5) Gaussian elimination with row interchanges can be performed on the system Ax = b for any
n−dimensional column vector b.

Exercise
Page 399: 1-8,11-12

6.5 Matrix Factorization


Gaussian elimination is the principal tool in the direct solution of linear systems of equations, so it
should be no surprise that it appears in other guises. In this section we will see that the steps used
to solve a system of the form Ax = b can be used to factor a matrix. The factorization is particularly
useful when it has the form A = LU , where L is a lower triangular matrix and U is an upper triangular
matrix. Although not all matrices have this type of factorization, many matrices that occur in
applications of numerical analysis do.
To solve the linear system Ax = b, we may solve LU x = b. Define y = U x. Then LU x = b implies
Ly = b. The two systems
Ly = b and U x = y
are easy to solve. We use forward substitution to solve Ly = b. Then we use back-substitution to
solve U x = y for x, which is the solution of Ax = b.
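The two triangular solves can be sketched directly (my own illustration; `forward_sub`, `back_sub`, and `lu_solve` are not names from the notes):

```python
# Forward substitution for L y = b, then back-substitution for U x = y.

def forward_sub(L, b):
    n = len(b)
    y = [0.0] * n
    for i in range(n):                      # solve top to bottom
        s = sum(L[i][j] * y[j] for j in range(i))
        y[i] = (b[i] - s) / L[i][i]
    return y

def back_sub(U, y):
    n = len(y)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):          # solve bottom to top
        s = sum(U[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (y[i] - s) / U[i][i]
    return x

def lu_solve(L, U, b):
    return back_sub(U, forward_sub(L, b))
```

Each triangular solve costs only O(n²) operations, which is why factoring once and reusing L and U pays off when many right-hand sides must be solved.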

LU Decomposition from Gaussian elimination


Consider a square matrix
$$A = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{pmatrix}.$$
We can use Gaussian elimination without row interchanges to write A in the form A = LU , where L
is a lower triangular matrix and U is an upper triangular matrix.
Applying a row operation to A is equivalent to multiplying A by an elementary matrix; that is, a
matrix obtained by applying the same row operation to the identity matrix. To reduce a matrix
to an upper triangular matrix, we need m = n(n − 1)/2 row operations; that is, we have
$$A = (E_m E_{m-1} \cdots E_2 E_1)^{-1} U.$$

The matrix U is the upper triangular matrix we obtain by applying Gaussian elimination to A.
The matrix L = (Em Em−1 · · · E2 E1 )−1 is formed from the ratios used in Gaussian elimination: it has
1’s on its diagonal and the elimination multipliers below it, the first column being
$$L = \begin{pmatrix} 1 & 0 & \cdots & 0 \\ a_{21}/a_{11} & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1}/a_{11} & \cdots & & 1 \end{pmatrix},$$
with the later columns holding the corresponding ratios taken from the successively reduced matrices.
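Recording the multipliers during elimination gives both factors at once. A sketch of this (my own code, name `lu_decompose`; it assumes no row interchanges are needed, i.e. no zero pivot appears):

```python
# LU from Gaussian elimination without row interchanges: U is the
# reduced matrix, L holds 1's on the diagonal and the multipliers below.

def lu_decompose(A):
    n = len(A)
    U = [row[:] for row in A]
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for i in range(n):
        for k in range(i + 1, n):
            m = U[k][i] / U[i][i]           # the ratio a_ki / a_ii at this stage
            L[k][i] = m                     # record it below the diagonal of L
            for j in range(i, n):
                U[k][j] -= m * U[i][j]
    return L, U
```

For the example matrix below, the recorded multipliers are 2, 3/2, and −3/4.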
 
Example. Use Gaussian elimination to find the LU decomposition of the matrix
$$A = \begin{pmatrix} 2 & 1 & 2 \\ 4 & 4 & 1 \\ 3 & 0 & 2 \end{pmatrix}.$$
Solution.



Permutation Matrices
If we interchange rows during the Gaussian elimination, then LU = Â, where  can be obtained
from A by interchanging rows. We will now consider the modifications that must be made when row
interchanges are required. We begin the discussion with the introduction of a class of matrices that
are used to rearrange, or permute, rows of a given matrix.
Definition. (Permutation Matrix)
An n × n permutation matrix P is a matrix obtained by rearranging the rows of the n × n identity
matrix In .
 
Example. The matrix
$$P = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{pmatrix}$$
is a 3 × 3 permutation matrix.
Lemma 1. A matrix P is a permutation matrix if and only if each row of P contains all 0 entries
except for a single 1, and, in addition, each column of P also contains all 0 entries except for a single
1.

At the end of Section 6.4 we saw that for any nonsingular matrix A, the linear system Ax = b
can be solved by Gaussian elimination, with the possibility of row interchanges. If we knew the row
interchanges that were required to solve the system by Gaussian elimination, we could arrange the
original equations in an order that would ensure that no row interchanges are needed. Hence there is
a rearrangement of the equations in the system that permits Gaussian elimination to proceed without
row interchanges. This implies that for any nonsingular matrix A , a permutation matrix P exists
for which the system
P Ax = P b
can be solved without row interchanges. As a consequence, this matrix P A can be factored into
P A = LU.

The Permuted LU Factorization


A practical method to construct a permuted LU factorization of a given matrix A would proceed
as follows. First set up P = L = I as n × n identity matrices. The matrix P will keep track of the
permutations performed during the Gaussian Elimination process, while the entries of L below the
diagonal are gradually replaced by the ratios used in Gaussian elimination. Each time two rows of
A are interchanged, the same two rows of P will be interchanged. Moreover, any pair of entries that
both lie below the diagonal in these same two rows of L must also be interchanged, while entries lying
on and above its diagonal need to stay in their place. At a successful conclusion to the procedure, A
will have been converted into the upper triangular matrix U , while L and P will assume their final
form.
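The bookkeeping described above can be sketched as follows (my own illustration, name `plu_decompose`; here the row to swap in is chosen by partial pivoting, i.e. largest magnitude, rather than only when a pivot is zero):

```python
# Permuted LU sketch: P and L start as identities; every row swap in the
# elimination is mirrored in P and in the below-diagonal part of L,
# so that P A = L U holds at the end.

def plu_decompose(A):
    n = len(A)
    U = [row[:] for row in A]
    P = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for i in range(n):
        p = max(range(i, n), key=lambda k: abs(U[k][i]))   # pivot row choice
        if p != i:
            U[i], U[p] = U[p], U[i]
            P[i], P[p] = P[p], P[i]
            for j in range(i):              # swap only entries BELOW the diagonal
                L[i][j], L[p][j] = L[p][j], L[i][j]
        for k in range(i + 1, n):
            m = U[k][i] / U[i][i]
            L[k][i] = m
            for j in range(i, n):
                U[k][j] -= m * U[i][j]
    return P, L, U
```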
Example. Find the permutation matrix P so that P A can be factored into the product LU , where
$$A = \begin{pmatrix} 1 & 2 & -1 \\ 2 & 4 & 2 \\ -3 & -5 & 6 \end{pmatrix}.$$


Solution.


Crout Reduction
Crout reduction is the process of writing a matrix A as A = LU , where
$$U = \begin{pmatrix} 1 & u_{12} & u_{13} & \cdots & u_{1n} \\ 0 & 1 & u_{23} & \cdots & u_{2n} \\ \vdots & \ddots & \ddots & \ddots & \vdots \\ \vdots & & \ddots & 1 & u_{n-1,n} \\ 0 & \cdots & \cdots & 0 & 1 \end{pmatrix} \quad \text{and} \quad L = \begin{pmatrix} l_{11} & 0 & 0 & \cdots & 0 \\ l_{21} & l_{22} & 0 & \cdots & 0 \\ \vdots & \vdots & \ddots & & \vdots \\ l_{n1} & l_{n2} & \cdots & & l_{nn} \end{pmatrix}.$$
To find L and U , we multiply L and U and equate the result to A.
To illustrate the method let us consider the case n = 3; that is let us consider
 
a11 a12 a13
A = a21 a22 a23  .
a31 a32 a33
In this case L and U have the forms
   
1 u12 u13 l11 0 0
U = 0 1 u23  and L = l21 l22 0  .
0 0 1 l31 l32 l33
Using LU = A, we get
   
l11 l11 u12 l11 u13 a11 a12 a13
l21 l21 u12 + l22 l21 u13 + l22 u23  = a21 a22 a23  .
l31 l31 u12 + l32 l31 u13 + l32 u23 + l33 a31 a32 a33
This yields 9 equations in 9 unknowns. These equations can be solved as follows:
First ↓: l11 = a11 , l21 = a21 , l31 = a31 .
Second →: l11 u12 = a12 ⇒ u12 = a12 /l11 ;  l11 u13 = a13 ⇒ u13 = a13 /l11 .
Third ↓: l21 u12 + l22 = a22 ⇒ l22 = a22 − l21 u12 ;  l31 u12 + l32 = a32 ⇒ l32 = a32 − l31 u12 .
Fourth →: l21 u13 + l22 u23 = a23 ⇒ u23 = (a23 − l21 u13 )/l22 .
Fifth ↓: l31 u13 + l32 u23 + l33 = a33 ⇒ l33 = a33 − l31 u13 − l32 u23 .
Generalizing the above calculations we find
$$l_{ij} = a_{ij} - \sum_{k=1}^{j-1} l_{ik} u_{kj}, \quad j = 1, 2, \cdots, n; \; i = j, j+1, \cdots, n,$$
$$u_{ij} = \frac{1}{l_{ii}} \left[ a_{ij} - \sum_{k=1}^{i-1} l_{ik} u_{kj} \right], \quad i = 1, 2, \cdots, n-1; \; j = i+1, i+2, \cdots, n.$$


Remarks.

1. In the above two formulas for lij and uij we use the convention $\sum_{k=1}^{m} c_k = 0$ for m < 1.

2. The above formulas are not independent. We first use the first formula for j = 1 and the
second formula for i = 1; then we apply the formulas for j = 2 and i = 2, and so on.
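The alternating order described in remark (2) can be sketched as follows (my own illustration, name `crout`; it assumes no diagonal entry l_ss turns out to be zero):

```python
# Crout reduction sketch: for each stage s, compute column s of L with
# the first formula, then row s of U with the second, alternating as in
# remark (2). U gets 1's on its diagonal; L carries the l_ii.

def crout(A):
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    U = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for s in range(n):
        for i in range(s, n):               # column s of L (first formula, j = s)
            L[i][s] = A[i][s] - sum(L[i][k] * U[k][s] for k in range(s))
        for j in range(s + 1, n):           # row s of U (second formula, i = s)
            U[s][j] = (A[s][j] - sum(L[s][k] * U[k][j] for k in range(s))) / L[s][s]
    return L, U
```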
 
Example 1. Use Crout reduction to find the LU decomposition of
$$A = \begin{pmatrix} 2 & 1 & 2 \\ 4 & 4 & 1 \\ 3 & 0 & 2 \end{pmatrix}.$$
Solution.


   
Example 2. Use Crout reduction to solve the linear system Ax = b if
$$A = \begin{pmatrix} 2 & 1 & 2 \\ 4 & 4 & 1 \\ 3 & 0 & 2 \end{pmatrix}, \quad b = \begin{pmatrix} 6 \\ -4 \\ 8 \end{pmatrix}.$$
Solution.

Exercise
Page 409: 1-9

