MODIBBO ADAMA UNIVERSITY (MAU), YOLA

Department of Mathematics
MTH 201: LINEAR ALGEBRA I LECTURE NOTE (2019/2020)
By: Musa Abdullahi - Email: [email protected] - Google Classroom Code: x3ztzjn

Course Contents:
Matrices: algebra of matrices. Vector spaces over the real field: subspaces, linear independence, basis and dimension. Linear transformations and their representation by matrices: range, null space, rank. Singular and non-singular transformations.

Text:
1. Howard Anton & Chris Rorres: Elementary Linear Algebra and Applications 11th Edition.
2. Advanced Engineering Mathematics by H. K. Dass 2nd Edition.

1.0 MATRICES
Let us consider a set of simultaneous equations,
x + 2y + 3z + 5t = 0
4x + 2y + 5z + 7t = 0
3x + 4y + 2z + 6t = 0
Now, writing the coefficients of x, y, z, t from the above equations and enclosing them within brackets, we get
A = [1 2 3 5; 4 2 5 7; 3 4 2 6]
The above system of numbers, arranged in a rectangular array of rows and columns and bounded by brackets, is called a Matrix. More generally, we make the following definition.
▪ Definition: A Matrix is a rectangular array of numbers. The numbers in the array are called
the entries in the matrix.

1.1 VARIOUS TYPES OF MATRICES


(a) Row Matrix. If a matrix has only one row and any number of columns, it is called a Row Matrix, e.g., [2 4 6].
(b) Column Matrix. A matrix having one column and any number of rows is called a Column Matrix, e.g., [1; 2; 3].
(c) Null Matrix or Zero Matrix. Any matrix in which all the elements are zeros is called a Zero Matrix or Null Matrix, e.g., [0 0 0 0; 0 0 0 0].
(d) Square Matrix. A matrix in which the number of rows is equal to the number of columns is called a Square Matrix, e.g., [2 4; 8 1].

(e) Diagonal Matrix. A square matrix is called a diagonal matrix if all its non-diagonal elements are zero, e.g., [1 0 0; 0 3 0; 0 0 4].
(f) Scalar Matrix. A diagonal matrix in which all the diagonal elements are equal to the same scalar, say k, is called a scalar matrix, e.g., [-6 0 0; 0 -6 0; 0 0 -6].
(g) Unit or Identity Matrix. A square matrix is called a unit matrix if all the diagonal elements are unity and the non-diagonal elements are zero, e.g., [1 0 0; 0 1 0; 0 0 1].
(h) Symmetric Matrix. A square matrix is called symmetric if, for all values of i and j, aij = aji, i.e., A^T = A, e.g., [a h g; h b f; g f c].
(i) Skew-Symmetric Matrix. A square matrix is called skew-symmetric if
1. aij = -aji for all values of i and j, i.e., A^T = -A, and
2. all diagonal elements are zero, e.g., [0 -h -g; h 0 -f; g f 0].
(j) Transpose of a Matrix. If in a given matrix A we interchange the rows and the corresponding columns, the new matrix obtained is called the transpose of the matrix A and is denoted by A^T or A', e.g.,
A = [2 3 4; 1 0 5; 6 7 8],  A^T = [2 1 6; 3 0 7; 4 5 8]
(k) Orthogonal Matrix. A square matrix A is called an orthogonal matrix if the product of the matrix A and its transpose A^T is the identity matrix, i.e.,
A A^T = I
If, in addition, det(A) = 1, the orthogonal matrix A is called proper.
(l) Triangular Matrix (Echelon Form). A square matrix all of whose elements below the leading diagonal are zero is called an upper triangular matrix. A square matrix all of whose elements above the leading diagonal are zero is called a lower triangular matrix, e.g.,
Lower triangular: [2 0 0; 4 1 0; 5 6 7]      Upper triangular: [2 6 7; 0 1 4; 0 0 7]
(m) Singular Matrix. If the determinant of a matrix is zero, the matrix is known as a singular matrix, e.g., A = [1 2; 3 6] is singular because det(A) = 6 - 6 = 0.

1.2 ADDITION OF MATRICES
If A and B are two matrices of the same order, then their sum A + B is defined as the matrix each element of which is the sum of the corresponding elements of A and B.
Thus, if A = [4 2 5; 1 3 -6] and B = [1 0 2; 3 1 4], then
A + B = [4+1 2+0 5+2; 1+3 3+1 -6+4] = [5 2 7; 4 4 -2]
In general, if A = [aij] and B = [bij], then A + B = [aij + bij].

Example 1. Write the matrix A given below as the sum of a symmetric and a skew-symmetric matrix.
A = [1 2 4; -2 5 3; -1 6 3]
Solution. A = [1 2 4; -2 5 3; -1 6 3]; on transposing, we get A^T = [1 -2 -1; 2 5 6; 4 3 3].
On adding A and A^T, we have
A + A^T = [1 2 4; -2 5 3; -1 6 3] + [1 -2 -1; 2 5 6; 4 3 3] = [2 0 3; 0 10 9; 3 9 6]   (1)
On subtracting A^T from A, we get
A - A^T = [1 2 4; -2 5 3; -1 6 3] - [1 -2 -1; 2 5 6; 4 3 3] = [0 4 5; -4 0 -3; -5 3 0]   (2)
Adding (1) and (2), we have
2A = [2 0 3; 0 10 9; 3 9 6] + [0 4 5; -4 0 -3; -5 3 0]
A = (1/2)[2 0 3; 0 10 9; 3 9 6] + (1/2)[0 4 5; -4 0 -3; -5 3 0]
A = [symmetric matrix] + [skew-symmetric matrix]

• Properties of Matrix Addition: Only matrices of the same order can be added or subtracted.
(a) Commutative Law. A + B = B + A
(b) Associative Law. A + ( B + C ) = ( A + B) + C
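
The decomposition of Example 1 and the addition laws above can be checked numerically. The following is a minimal illustrative sketch (not part of the lecture note), assuming Python with the NumPy library is available:

import numpy as np

A = np.array([[1, 2, 4],
              [-2, 5, 3],
              [-1, 6, 3]])

P = (A + A.T) / 2   # symmetric part: P.T equals P
Q = (A - A.T) / 2   # skew-symmetric part: Q.T equals -Q

print(np.allclose(P, P.T), np.allclose(Q, -Q.T))   # True True
print(np.allclose(P + Q, A))                       # True: A = P + Q
B = np.array([[5, 2, 7], [4, 4, -2], [0, 0, 0]])   # any matrix of the same order
print(np.allclose(A + B, B + A))                   # True: commutative law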

1.3 SUBTRACTION OF MATRICES


The difference of two matrices is a matrix, each element of which is obtained by subtracting the elements
of the second matrix from the corresponding element of the first.

A - B = [aij - bij]
Thus [8 6 4; 1 2 0] - [3 5 1; 7 6 2] = [8-3 6-5 4-1; 1-7 2-6 0-2] = [5 1 3; -6 -4 -2]

1.4 SCALAR MULTIPLE OF A MATRIX
If a matrix is multiplied by a scalar quantity k, then each element is multiplied by k, i.e., if
A = [2 3 4; 4 5 6; 6 7 9], then 3A = 3[2 3 4; 4 5 6; 6 7 9] = [3x2 3x3 3x4; 3x4 3x5 3x6; 3x6 3x7 3x9] = [6 9 12; 12 15 18; 18 21 27]
Example 2. If A = [0 2 0; 1 0 3; 1 1 2] and B = [1 2 1; 2 1 0; 0 0 3], find:
(a) 2A + 3B
(b) 3A - 4B
Solution
(a) 2A + 3B = 2[0 2 0; 1 0 3; 1 1 2] + 3[1 2 1; 2 1 0; 0 0 3] = [0 4 0; 2 0 6; 2 2 4] + [3 6 3; 6 3 0; 0 0 9] = [3 10 3; 8 3 6; 2 2 13]
(b) 3A - 4B = 3[0 2 0; 1 0 3; 1 1 2] - 4[1 2 1; 2 1 0; 0 0 3] = [0 6 0; 3 0 9; 3 3 6] - [4 8 4; 8 4 0; 0 0 12] = [-4 -2 -4; -5 -4 9; 3 3 -6]
1.5 MATRIX MULTIPLICATION
The product of two matrices A and B is only possible if the number of columns in A is equal to the
number of rows in B.
• Properties of Matrix Multiplication
1. Multiplication of matrices is not commutative, i.e., in general AB ≠ BA.
2. Matrix multiplication is associative, i.e., A(BC) = (AB)C.
3. Matrix multiplication is distributive with respect to addition: A(B + C) = AB + AC.
4. Multiplication of a matrix A by the unit matrix: AI = IA = A.
5. The multiplicative inverse of a matrix A exists if det(A) ≠ 0, and A·A⁻¹ = A⁻¹·A = I.

Example 3. If A = [1 -2 3; 2 3 -1; -3 1 2] and B = [1 0 2; 0 1 2; 1 2 0], form the products AB and BA, and show that AB ≠ BA.
Solution.
Here: AB = [1 -2 3; 2 3 -1; -3 1 2][1 0 2; 0 1 2; 1 2 0] = [4 4 -2; 1 1 10; -1 5 -4]
BA = [1 0 2; 0 1 2; 1 2 0][1 -2 3; 2 3 -1; -3 1 2] = [-5 0 7; -4 5 3; 5 4 1]
Hence AB ≠ BA.
Example 4. If A = [1 2; -2 3], B = [2 1; 2 3] and C = [-3 1; 2 0], verify that:
(a) A(BC) = (AB)C
(b) A(B + C) = AB + AC.

Solution. We have
AB = [1 2; -2 3][2 1; 2 3] = [6 7; 2 7],   BC = [2 1; 2 3][-3 1; 2 0] = [-4 2; 0 2]
AC = [1 2; -2 3][-3 1; 2 0] = [1 1; 12 -2],   B + C = [2-3 1+1; 2+2 3+0] = [-1 2; 4 3]
(a) (AB)C = [6 7; 2 7][-3 1; 2 0] = [-4 6; 8 2] and
A(BC) = [1 2; -2 3][-4 2; 0 2] = [-4 6; 8 2], so (AB)C = A(BC).
(b) A(B + C) = [1 2; -2 3][-1 2; 4 3] = [7 8; 14 5] and
AB + AC = [6+1 7+1; 2+12 7-2] = [7 8; 14 5], so A(B + C) = AB + AC.
1 2 2 
Example 5.
 
If A = 2 1 2 show that A2 − 4 A − 5I = 0 where I , 0 are the unit and null
 
 2 2 1 
matrix of order 3 Respectively. Use this result to find A−1.
 1 2 2   1 2 2  9 8 8 
Solution. A =  2 1 2   2 1 2  = 8 9 8 
2

 2 2 1   2 2 1  8 8 9 
9 8 8   1 2 2  1 0 0  0 0 0 
A − 4 A − 5I = 8 9 8  − 4  2 1 2  − 5 0 1 0  = 0 0 0 
2  
8 8 9   2 2 1  0 0 1  0 0 0 
A2 − 4 A − 5I = 0  5I = A2 − 4 A
−1
Multiplying by A , we get 5 A−1 = A − 4I
1 2 2 1 0 0   −3 2 2 
=  2 1 2  − 4 0 1 0  =  2 −3 2 
 
 2 2 1  0 0 1   2 2 −3
 −3 2 2 
A =  2 −3 2 
1
 −1

5
 2 2 −3
0 2  
Example 6.

Determine the values of  ,  ,  when   −  is orthogonal.

 −  
0 2    0   
Solution.

Let A =   −  and A =  2 
T
 −  

 −     −  

If A is orthogonal, then AAT = I .


0 2   0    1 0 0 
  −   2  −   = 0 1 0 

 −     −   0 0 1 

Page | 5
 4 2 +  2 2 2 −  2 −2 2 +  2  1 0 0 
 2 
  2 − 
2
2 + 2 + 2  2 −  2 −  2  = 0 1 0 
− 2 +  2 2 −  2 − 2  2 +  2 +  2  0 0 1 

Equating the corresponding elements, we have

4 2 +  2 = 1  1 1
   = ,  =
2 −  = 0
2 2
6 3
1 1 1
But  2 +  2 +  2 = 1 as  = ,  = ,  =
6 3 2
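
The products and identities in Examples 3, 5 and 6 can be verified numerically. A minimal illustrative sketch (not part of the lecture note), assuming Python with NumPy is available:

import numpy as np

# Example 5: A^2 - 4A - 5I = 0 and A^{-1} = (A - 4I)/5
A = np.array([[1, 2, 2], [2, 1, 2], [2, 2, 1]])
I = np.eye(3)
print(np.allclose(A @ A - 4 * A - 5 * I, 0))     # True
print(np.allclose(A @ ((A - 4 * I) / 5), I))     # True: (A - 4I)/5 is A^{-1}

# Example 6: with alpha = 1/sqrt(2), beta = 1/sqrt(6), gamma = 1/sqrt(3)
a, b, g = 1 / np.sqrt(2), 1 / np.sqrt(6), 1 / np.sqrt(3)
M = np.array([[0, 2 * b, g], [a, b, -g], [a, -b, g]])
print(np.allclose(M @ M.T, np.eye(3)))           # True: M is orthogonal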

2.1 DETERMINANTS
The notion of a determinant arises from the process of eliminating the unknowns of simultaneous linear equations. Consider the two linear equations in x:
a1 x + b1 = 0   (1)
a2 x + b2 = 0   (2)
From (1), x = -b1/a1.
Substituting this value of x in (2), we get the eliminant
a2(-b1/a1) + b2 = 0, or a1 b2 - a2 b1 = 0   (3)
From (1) and (2), by suppressing x, the eliminant is written as
|a1 b1; a2 b2| = 0   (4)
Each quantity a1, b1, a2, b2 is called an element or a constituent of the determinant.
From (3) and (4), since both expressions are eliminants, we equate them:
|a1 b1; a2 b2| = a1 b2 - a2 b1
a1 b2 - a2 b1 is called the expansion of the determinant |a1 b1; a2 b2|.
Example 1. Expand the determinant |3 2; 6 7|.
Solution. |3 2; 6 7| = (3 x 7) - (2 x 6) = 21 - 12 = 9
Definition. If A is a square matrix, then the minor of entry aij is denoted by M ij and is defined to
be the determinant of the submatrix that remains after the ith row and jth column are deleted from A .
The number (-1)^(i+j) Mij is denoted by Cij and is called the cofactor of entry aij.
Example 2. Find the minors and cofactors of A = [3 1 -4; 2 5 6; 1 4 8].
Solution. The minor of entry a11 is the determinant of the submatrix obtained by deleting row 1 and column 1:
M11 = |5 6; 4 8| = 40 - 24 = 16
The cofactor of a11 is
C11 = (-1)^(1+1) M11 = M11 = 16
Similarly, the minor of entry a32 is
M32 = |3 -4; 2 6| = 18 + 8 = 26
The cofactor of a32 is
C32 = (-1)^(3+2) M32 = -M32 = -26
Remark. Note that a minor Mij and its corresponding cofactor Cij are either the same or negatives of each other, and that the relating sign (-1)^(i+j) is either +1 or -1 in accordance with the pattern in the "checkerboard" array
[+ - + - +; - + - + -; + - + - +; - + - + -; ...]
For example, C11 = M11, C21 = -M21, C22 = M22, and so forth. Thus, it is never really necessary to evaluate (-1)^(i+j) to calculate Cij; you can simply compute the minor Mij and then adjust the sign in accordance with the checkerboard pattern.
Example 3. Expand the determinant |6 2 3; 2 3 5; 4 2 1|.
Solution. |6 2 3; 2 3 5; 4 2 1| = 6(cofactor of 6) + 2(cofactor of 2) + 3(cofactor of 3)
= 6(3 - 10) - 2(2 - 20) + 3(4 - 12)
= -42 + 36 - 24 = -30
Theorem. If A is an n x n matrix, then regardless of which row or column of A is chosen, the number obtained by multiplying the entries in that row or column by the corresponding cofactors and adding the resulting products is always the same.
Definition. If A is an n x n matrix, then the number obtained by multiplying the entries in any row or column of A by the corresponding cofactors and adding the resulting products is called the determinant of A, and the sums themselves are called cofactor expansions of A. That is,
det(A) = a1j C1j + a2j C2j + ... + anj Cnj
and det(A) = ai1 Ci1 + ai2 Ci2 + ... + ain Cin
Example 4. Cofactor Expansion Along the First Row
Find the determinant of the matrix
A = [3 1 0; -2 -4 3; 5 4 -2]
Solution. By cofactor expansion along the first row,
det(A) = |3 1 0; -2 -4 3; 5 4 -2| = 3|-4 3; 4 -2| - 1|-2 3; 5 -2| + 0|-2 -4; 5 4|
= 3(-4) - (-11) + 0 = -1

Example 5. Cofactor Expansion Along the First Column

Using the matrix in the example above, evaluate det(A) by cofactor expansion along the first column of A.
det(A) = |3 1 0; -2 -4 3; 5 4 -2| = 3|-4 3; 4 -2| - (-2)|1 0; 4 -2| + 5|1 0; -4 3|
= 3(-4) - (-2)(-2) + 5(3) = -1
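
The cofactor-expansion definition can be turned directly into a (very inefficient but instructive) computation. A minimal sketch, not part of the lecture note, assuming Python with NumPy; the function name det_by_cofactors is ours:

import numpy as np

def det_by_cofactors(A):
    """Determinant by cofactor expansion along the first row, as defined above."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    total = 0.0
    for j in range(n):
        minor = np.delete(np.delete(A, 0, axis=0), j, axis=1)   # delete row 1 and column j+1
        total += (-1) ** j * A[0, j] * det_by_cofactors(minor)   # a_{1j} * C_{1j}
    return total

A = [[3, 1, 0], [-2, -4, 3], [5, 4, -2]]
print(det_by_cofactors(A), np.linalg.det(A))   # both -1 (up to rounding)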
Example 6. Smart Choice of Row or Column

If A is the 4 x 4 matrix
A = [1 0 0 -1; 3 1 2 2; 1 0 -2 1; 2 0 0 1]
then to find det(A) it will be easiest to use cofactor expansion along the second column, since it has the most zeros:
det(A) = 1 · |1 0 -1; 1 -2 1; 2 0 1|
For the 3 x 3 determinant, expand along its second column, since it has the most zeros:
det(A) = 1 · (-2)|1 -1; 2 1| = -2(1 + 2) = -6
Theorem (Determinant of a Triangular Matrix): If A is an n x n triangular matrix (upper triangular, lower triangular, or diagonal), then det(A) is the product of the entries on the main diagonal of the matrix; that is, det(A) = a11 a22 ... ann.

Example 7.
|a11 0 0 0; a21 a22 0 0; a31 a32 a33 0; a41 a42 a43 a44| = a11 |a22 0 0; a32 a33 0; a42 a43 a44| = a11 a22 |a33 0; a43 a44| = a11 a22 a33 a44
A Technique for Evaluating 3 x 3 Determinants (Rule of Sarrus)
After writing the determinant, repeat the first two columns to its right:
a11 a12 a13 | a11 a12
a21 a22 a23 | a21 a22
a31 a32 a33 | a31 a32
Then
|a11 a12 a13; a21 a22 a23; a31 a32 a33| = (a11 a22 a33 + a12 a23 a31 + a13 a21 a32) - (a13 a22 a31 + a11 a23 a32 + a12 a21 a33)

Example 8. Evaluate the determinant
|1 2 3; -4 5 6; 7 -8 9| = [45 + 84 + 96] - [105 - 48 - 72] = 240

• Properties of Determinants
Let A be an n x n matrix. Then
1. If B is the matrix that results when a single row or single column of A is multiplied by a scalar
k , then det( B) = k det( A).
2. If B is the matrix that results when two rows or two columns of A are interchanged, then
det( B) = − det( A).
3. If B is the matrix that results when a multiple of one row of A is added to another or when a
multiple of one column is added to another, then det( B) = det( A).

Example 9. Using Row Reduction to Evaluate a Determinant

Evaluate det(A) where A = [0 1 5; 3 -6 9; 2 6 1].
We will reduce A to row echelon form (which is upper triangular) and then apply the theorem on the determinant of a triangular matrix.
det(A) = |0 1 5; 3 -6 9; 2 6 1|
= -|3 -6 9; 0 1 5; 2 6 1|          (the first and second rows of A were interchanged)
= -3|1 -2 3; 0 1 5; 2 6 1|         (a common factor of 3 from the first row was taken through the determinant sign)
= -3|1 -2 3; 0 1 5; 0 10 -5|       (-2 times the first row was added to the third row)
= -3|1 -2 3; 0 1 5; 0 0 -55|       (-10 times the second row was added to the third row)
= -3(-55)|1 -2 3; 0 1 5; 0 0 1|    (a common factor of -55 from the last row was taken through the determinant sign)
= -3(-55)(1) = 165
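
Properties 1-3 and the theorem on triangular matrices are exactly what the following sketch tracks while it reduces the matrix. It is an illustrative implementation only (not part of the lecture note), assuming Python with NumPy; the function name det_by_row_reduction is ours:

import numpy as np

def det_by_row_reduction(A):
    """Reduce A to upper triangular form, tracking how each row operation
    changes the determinant (properties 1-3), then multiply the diagonal."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    factor = 1.0
    for i in range(n):
        if A[i, i] == 0:                         # interchange rows: sign changes
            rows = np.flatnonzero(A[i:, i])
            if rows.size == 0:
                return 0.0
            r = i + rows[0]
            A[[i, r]] = A[[r, i]]
            factor = -factor
        for r in range(i + 1, n):                # adding a multiple of a row changes nothing
            A[r] -= (A[r, i] / A[i, i]) * A[i]
    return factor * np.prod(np.diag(A))

A = [[0, 1, 5], [3, -6, 9], [2, 6, 1]]
print(det_by_row_reduction(A), np.linalg.det(A))   # both 165 (up to rounding)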

2.2 PIVOTAL CONDENSATION

The condensation process of reducing an nth order determinant to an (n-1)th order determinant is shown below.
Consider the nth order determinant
D = |a1 b1 c1 d1 ...; a2 b2 c2 d2 ...; a3 b3 c3 d3 ...; a4 b4 c4 d4 ...; ...; an bn cn dn ...|
This can be reduced to
D = (1/a1^(n-2)) D', where D' is the (n-1)th order determinant whose entries are the 2 x 2 determinants
|a1 b1; a2 b2|  |a1 c1; a2 c2|  |a1 d1; a2 d2|  ...   (first row)
|a1 b1; a3 b3|  |a1 c1; a3 c3|  |a1 d1; a3 d3|  ...   (second row)
...
|a1 b1; an bn|  |a1 c1; an cn|  |a1 d1; an dn|  ...   (last row)
Thus, the nth order determinant is condensed to an (n-1)th order determinant. Repeated application of this method ultimately results in a determinant of 2nd order, which can be evaluated. If the leading element a1 is zero, it can be made non-zero by interchanging columns (each interchange changes the sign of the determinant).
Example 10. Condense the following determinants to second order and hence evaluate them:
(i) |2 1 3 5; 4 -2 7 6; -8 3 1 0; 5 7 2 -6|
(ii) |0 4 1 2; 5 3 7 8; 4 1 2 3; 1 2 5 5|

Solution.
(i) |2 1 3 5; 4 -2 7 6; -8 3 1 0; 5 7 2 -6|
= (1/2^(4-2)) |(-4-4) (14-12) (12-20); (6+8) (2+24) (0+40); (14-5) (4-15) (-12-25)|
= (1/4)|-8 2 -8; 14 26 40; 9 -11 -37|
= (2x2/4)|-4 1 -4; 7 13 20; 9 -11 -37|   (taking 2 out of each of the first two rows)
= (1/(-4)^(3-2)) |(-52-7) (-80+28); (44-9) (148+36)|
= -(1/4)|-59 -52; 35 184| = (1/4)|59 52; 35 184| = |59 13; 35 46|
= (59 x 46) - (13 x 35) = 2259

(ii) As the leading element is zero, interchanging the 1st and 2nd columns, we get
|0 4 1 2; 5 3 7 8; 4 1 2 3; 1 2 5 5| = -|4 0 1 2; 3 5 7 8; 1 4 2 3; 2 1 5 5|
= -(1/4^(4-2)) |20 25 26; 16 7 10; 4 18 16|
= -(4x2/16)|5 25 13; 4 7 5; 1 18 8|   (taking 4 out of the first column and 2 out of the third column)
= -(1/2)(1/5^(3-2)) |(35-100) (25-52); (90-25) (40-13)|
= -(1/10)|-65 -27; 65 27| = (1/10)|65 27; 65 27| = 0
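
The condensation scheme above can be written as a short program. The sketch below is illustrative only (not part of the lecture note), assumes Python with NumPy, and uses our own function name chio_det:

import numpy as np

def chio_det(A):
    """Evaluate det(A) by repeated pivotal condensation as described above."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    sign, scale = 1.0, 1.0
    while n > 2:
        if A[0, 0] == 0:                            # make the leading element non-zero
            col = np.flatnonzero(A[0])[0]           # first column with a non-zero leading entry
            A[:, [0, col]] = A[:, [col, 0]]
            sign = -sign                            # each interchange changes the sign
        scale /= A[0, 0] ** (n - 2)                 # the 1/a1^(n-2) factor
        # entry (i, j) of the condensed determinant is |a11 a1,j+1; a_{i+1,1} a_{i+1,j+1}|
        A = A[0, 0] * A[1:, 1:] - np.outer(A[1:, 0], A[0, 1:])
        n -= 1
    return sign * scale * (A[0, 0] * A[1, 1] - A[0, 1] * A[1, 0])

D1 = [[2, 1, 3, 5], [4, -2, 7, 6], [-8, 3, 1, 0], [5, 7, 2, -6]]
D2 = [[0, 4, 1, 2], [5, 3, 7, 8], [4, 1, 2, 3], [1, 2, 5, 5]]
print(chio_det(D1), chio_det(D2))   # 2259.0 and 0.0, as in Example 10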

Theorem 2. A square matrix A is invertible if and only if det(A) ≠ 0.

Theorem 3. If A is invertible, then
det(A⁻¹) = 1/det(A)
Definition (ADJOINT OF A SQUARE MATRIX). If A is any n x n matrix and Cij is the cofactor of aij, then the matrix
[C11 C12 ... C1n; C21 C22 ... C2n; ... ; Cn1 Cn2 ... Cnn]
is called the matrix of cofactors from A. The transpose of this matrix is called the adjoint of A and is denoted by adj(A).

• Property of the Adjoint Matrix

1. The product of a matrix A and its adjoint is equal to the unit matrix multiplied by the determinant of A, i.e., A·adj(A) = adj(A)·A = det(A)·I.

2.3 INVERSE OF A MATRIX


Theorem 4. Inverse of a Matrix Using its Adjoint
If A is an invertible matrix, then
A⁻¹ = (1/det(A)) adj(A)

Example 1. If A = [3 -3 4; 2 -3 4; 0 -1 1], find A⁻¹.
Solution. det(A) = 3(-3 + 4) + 3(2 - 0) + 4(-2 - 0) = 3 + 6 - 8 = 1
The matrix formed by the cofactors of A is
C = [1 -2 -2; -1 3 3; 0 -4 -3] and Adj(A) = C^T = [1 -1 0; -2 3 -4; -2 3 -3]
Therefore, A⁻¹ = (1/det(A)) Adj(A) = [1 -1 0; -2 3 -4; -2 3 -3]

Example 2. Let A = [3 2 -1; 1 6 3; 2 -4 0], find A⁻¹.

Solution. det(A) = 3(12) - 2(-6) - (-16) = 64.
The matrix of cofactors is
C = [12 6 -16; 4 2 16; 12 -10 16] and adj(A) = C^T = [12 4 12; 6 2 -10; -16 16 16]
Thus, A⁻¹ = (1/det(A)) adj(A) = (1/64)[12 4 12; 6 2 -10; -16 16 16]
2.4 ELEMENTARY TRANSFORMATIONS (ROW OPERATIONS)
Any one of the following operations on a matrix is called an elementary transformation:
i. Interchanging any two rows (or columns). This transformation is indicated by Rij if the ith and jth rows are interchanged.
ii. Multiplication of the elements of any row Ri (or column) by a non-zero scalar quantity k, denoted (k·Ri).
iii. Addition of a constant multiple of the elements of any row Rj to the corresponding elements of any other row Ri, denoted (Ri + k·Rj).

2.5 GAUSSIAN ELIMINATION AND GAUSS JORDAN ELIMINATION


When doing row reduction, two forms are particularly convenient:
i. Row Echelon Form (REF)
ii. Reduced Row Echelon Form (RREF)

• Row Echelon Form: A matrix is in Row Echelon Form if:


(a) If a row does not consist entirely of zeros, then the first non-zero number in the row is a 1. This is called a PIVOT or a LEADING 1.
(b) If there are any rows that consist entirely of zeros, then they are grouped together at the bottom of the matrix.
(c) In any two successive rows that do not consist entirely of zeros, the pivot in the lower row occurs farther to the right than the pivot in the higher row.
(d) There are zeros in the column below each pivot (there is nothing below a pivot in the last row).

The process of using row operations to get a matrix in REF is called GAUSSIAN ELIMINATION.

• Reduced Row Echelon Form: A matrix is in RREF if:

(a) the matrix is already in REF, with one additional criterion:
(b) each column that contains a pivot has zeros everywhere else in that column.

The process of using row operations to get a matrix in RREF is called GAUSS-JORDAN ELIMINATION.

Example 1. Matrices in REF:

(a) [1 4 -3 7; 0 1 6 2; 0 0 1 5]   (b) [1 1 0; 0 1 0; 0 0 0]   (c) [0 1 2 6 0; 0 0 1 -1 0; 0 0 0 0 1]

Example 2. Matrices in RREF:

(a) [1 0 0 4; 0 1 0 7; 0 0 1 -1]   (b) [1 0 0; 0 1 0; 0 0 1]   (c) [0 1 -2 0 1; 0 0 0 1 3; 0 0 0 0 0]

Example 3. Reduce the following matrix to upper triangular form (REF):
[1 2 3; 2 5 7; 3 1 2]
Solution.
[1 2 3; 2 5 7; 3 1 2] → [1 2 3; 0 1 1; 0 -5 -7]   (R2 - 2R1, R3 - 3R1)
→ [1 2 3; 0 1 1; 0 0 -2]   (R3 + 5R2)

Example 4. Reduce the following matrix to echelon form (REF):

[1 2 -3 0; 2 4 -2 2; 3 6 -4 3]
Solution.
[1 2 -3 0; 2 4 -2 2; 3 6 -4 3] → [1 2 -3 0; 0 0 4 2; 0 0 5 3]   (R2 - 2R1, R3 - 3R1)
→ [1 2 -3 0; 0 0 4 2; 0 0 0 2]   (4R3 - 5R2)

1 3 3 
 
Example 5. Transform  2 4 10  into a unit matrix (RREF).
 3 8 4 
Solution.
1 3 3  1 3 3  1 3 3  1 3 3 
 2 4 10  0 −2 4  R − 2 R 0 −2 4  R − 2 R 0 1 −2  − 1 R
    2 1   2 1   2 2
 3 8 4  0 −1 −5 R3 − 3R1 0 −1 −5 R3 − 3R1 0 −1 −5 
1 0 9  R1 − 3R2 1 0 9  1 0 0  R1 − 9 R3
0 1 −2  0 1 −2  0 1 0  R + 2 R
      2 3

0 0 −7  R3 + R2 0 0 1  − 17 R3 0 0 1 

2.6 ELEMENTARY MATRICES

Definition. A matrix E is called an elementary matrix if it can be obtained from an identity matrix by performing a single elementary row operation.
Example 6. Listed below are three elementary matrices and the operations that produce them:
[1 0; 0 -3]   (-3R2)
[1 0 0 0; 0 0 0 1; 0 0 1 0; 0 1 0 0]   (R2 ↔ R4)
[1 0 3; 0 1 0; 0 0 1]   (R1 + 3R3)

2.7 USING ROW OPERATIONS TO FIND THE INVERSE (Gauss-Jordan Method)
To find the inverse of an invertible matrix A, find a sequence of elementary row operations that reduces A to the identity and then perform that same sequence of operations on In to obtain A⁻¹. We want to reduce A to the identity matrix by row operations and simultaneously apply these operations to In to produce A⁻¹. To accomplish this, we adjoin the identity matrix to the right side of A, thereby producing a partitioned matrix of the form
[A | In]
Then we apply row operations to this matrix until the left side is reduced to In; these operations will convert the right side to A⁻¹, so the final matrix has the form
[In | A⁻¹]

Example 1. Find the inverse of A = [1 2 3; 2 5 3; 1 0 8].
Solution.
[1 2 3 | 1 0 0; 2 5 3 | 0 1 0; 1 0 8 | 0 0 1]
→ [1 2 3 | 1 0 0; 0 1 -3 | -2 1 0; 0 -2 5 | -1 0 1]   (R2 → R2 - 2R1, R3 → R3 - R1)
→ [1 2 3 | 1 0 0; 0 1 -3 | -2 1 0; 0 0 -1 | -5 2 1]   (R3 → R3 + 2R2)
→ [1 2 3 | 1 0 0; 0 1 -3 | -2 1 0; 0 0 1 | 5 -2 -1]   (R3 → -R3)
→ [1 2 0 | -14 6 3; 0 1 0 | 13 -5 -3; 0 0 1 | 5 -2 -1]   (R1 → R1 - 3R3, R2 → R2 + 3R3)
→ [1 0 0 | -40 16 9; 0 1 0 | 13 -5 -3; 0 0 1 | 5 -2 -1]   (R1 → R1 - 2R2)
Therefore A⁻¹ = [-40 16 9; 13 -5 -3; 5 -2 -1].

3.1 REAL VECTOR SPACE


Definition. Let V be an arbitrary nonempty set of objects on which two operations are defined;
addition, and multiplication by numbers called scalars. By addition we mean a rule for
associating with each pair of objects u and v in V an object u + v , called the sum of u
and v ; by scalar multiplication we mean a rule for associating with each scalar k and each
objects u in V an object ku , called the scalar multiple of u by k . If the following axioms are
satisfied by all objects u, v, w in V and all scalars k and m , then we call V a vector space and we
call the objects in V vectors.
1. If u and v are objects in V , then u + v is in V .
2. u+ v = v+u
3. u + ( v + w) = (u + v) + w
4. There is an object 0 in V, called a zero vector for V, such that 0 + u = u + 0 = u for all u in V.
5. For each u in V , there is an object −u in V , called a negative of u , such that
u + (−u) = (−u) + u = 0 .
6. If k is any scalar and u is any object in V , then ku is in V .
7. k (u + v) = ku + kv
8. (k + m)u = ku + mu
9. k (mu) = (km)u
10. 1 u = u
3.1.1 EXAMPLES OF VECTOR SPACE
Example 1. The Zero Vector Space
Let V consist of a single object, which we denote by 0 , and define
0 + 0 = 0 and k0 = 0 for every scalar k.
It is easy to check that all vector space axioms are satisfied. We call this the zero vector
space.
Example 2. Rn is a Vector Space
Let V = R n , and define the vector space operations on V to be the usual operations
of addition and scalar multiplication of n-tuples; that is,
u + v = (u1, u2, ..., un) + (v1, v2, ..., vn) = (u1 + v1, u2 + v2, ..., un + vn)
ku = k(u1, u2, ..., un) = (ku1, ku2, ..., kun)
The set V = Rⁿ is closed under addition and scalar multiplication because the foregoing operations produce n-tuples as their end result, and these operations satisfy all the vector space axioms.
Example 3. The Vector Space of 2 x 2 Matrices
Let V be the set of 2 x 2 matrices with real entries, and take the vector space operations on V to be the usual operations of matrix addition and scalar multiplication; that is,
u + v = [u11 u12; u21 u22] + [v11 v12; v21 v22] = [u11+v11 u12+v12; u21+v21 u22+v22]
ku = k[u11 u12; u21 u22] = [ku11 ku12; ku21 ku22]
The set V is closed under addition and scalar multiplication because the foregoing operations produce 2 x 2 matrices as the end result. Thus, we can clearly see that Axioms 2, 3, 4, 5, 7, 8, 9 and 10 hold.
Theorem. Let V be a vector space, u a vector in V, and k a scalar; then:
(a) 0u = 0
(b) k0 = 0
(c) (-1)u = -u
(d) If ku = 0, then k = 0 or u = 0

Proof:
(a) We can write
0u + 0u = (0 + 0)u = 0u
By axiom 5 the vector 0u has a negative, -0u. Adding this negative to both sides above yields
[0u + 0u] + (-0u) = 0u + (-0u)
0u + [0u + (-0u)] = 0u + (-0u)
0u + 0 = 0
0u = 0
(b) Since 0 + 0 = 0 (axiom 4), we have
k0 = k[0 + 0] = k0 + k0
By axiom 5 the vector k0 has a negative, -k0. Adding this negative to both sides above yields
k0 + (-k0) = [k0 + k0] + (-k0) = k0 + [k0 + (-k0)]
0 = k0 + 0 = k0
(c) To prove (-1)u = -u, we must show that u + (-1)u = 0:
u + (-1)u = 1u + (-1)u = (1 + (-1))u = 0u = 0
(d) Suppose ku = 0 and k ≠ 0; then there exists a scalar k⁻¹ such that k⁻¹k = 1.
Hence, u = 1u = (k⁻¹k)u = k⁻¹(ku) = k⁻¹0 = 0.
3.2 SUBSPACES
Definition
A subset W of a vector space V is called a subspace of V if W is itself a vector space
under the addition and scalar multiplication defined on V .
3.2.1 Examples of Subspaces
Example 1. The Zero Subspace
If V is any vector space, and if W = {0} is the subset of V that consists of the zero vector only, then W is closed under addition and scalar multiplication since
0 + 0 = 0 and k0 = 0 for every scalar k.
We call W the zero subspace of V.

Example 2. Subspaces of Mnn
We know that the sum of two symmetric n x n matrices is symmetric and that a scalar multiple of a symmetric n x n matrix is symmetric. Thus, the set of symmetric n x n matrices is closed under addition and scalar multiplication and hence is a subspace of Mnn. Similarly, the sets of upper triangular matrices, lower triangular matrices, and diagonal matrices are subspaces of Mnn.

Example 3. The Subspace of All Polynomials


Recall that a polynomial is a function that can be expressed in the form
p(x) = a0 + a1x + ... + anxⁿ
where a0, a1, ..., an are constants. It is evident that the sum of two polynomials is a polynomial and that a constant times a polynomial is a polynomial. Thus, the set W of all polynomials is closed under addition and scalar multiplication and hence is a subspace of F(-∞, ∞); it is denoted by P∞.
4.1 LINEAR COMBINATION
Definition. If w is a vector in a vector space V, then w is said to be a linear combination of the vectors v1, v2, ..., vr in V if w can be expressed in the form
w = k1v1 + k2v2 + ... + krvr
where k1, k2, ..., kr are scalars. These scalars are called the coefficients of the linear combination.
Definition (Spanning)
If S = {w1, w2, ..., wr} is a nonempty set of vectors in a vector space V, then the subspace W of V that consists of all possible linear combinations of the vectors in S is called the subspace of V generated by S, and we say that the vectors w1, w2, ..., wr span W. We denote this subspace as
W = span{w1, w2, ..., wr} or W = span(S)

Example 1. Linear Combinations
Consider the vectors
u = (1, 2, − 1) and v = (6, 4, 2) in R3 .
Show that w = (9, 2, 7) is a linear combination of u and v , and that z = (4, − 1, 8) is
not a linear combination of u and v .

Solution. For w to be a linear combination of u and v , there must be scalars k1 and k2 such
that w = k1u + k2 v ; that is,
(9, 2, 7) = k1 (1, 2, − 1) + k2 (6, 4, 2) = ( k1 + 6k2 , 2k1 + 4k2 , − k1 + 2k2 )
Equating corresponding components gives
k1 + 6k2 = 9
2k1 + 4k2 = 2
−k1 + 2k2 = 7
Solving the system using Gaussian elimination gives k1 = −3, k2 = 2 , so
w = −3u + 2 v
Similarly, for z to be a linear combination of u and v , there must be scalars k1 and k2 such that
z = k1u + k2 v ; that is,
(4, − 1, 8) = k1 (1, 2, − 1) + k2 (6, 4, 2) = ( k1 + 6k2 , 2k1 + 4k2 , − k1 + 2k2 )
Equating corresponding components gives
k1 + 6k2 = 4
2k1 + 4k2 = − 1
−k1 + 2k2 = 8
This system of equations is inconsistent, so no such scalars k1 and k2 exist. Consequently, z is
not a linear combination of u and v .
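
Deciding whether a vector is a linear combination amounts to testing whether a linear system is consistent, which can be done numerically. A minimal sketch (not part of the lecture note), assuming Python with NumPy:

import numpy as np

u = np.array([1, 2, -1])
v = np.array([6, 4, 2])
M = np.column_stack([u, v])                  # columns are the candidate vectors

for target in (np.array([9, 2, 7]), np.array([4, -1, 8])):
    k, _, _, _ = np.linalg.lstsq(M, target, rcond=None)
    is_combo = np.allclose(M @ k, target)    # consistent system <=> linear combination
    print(target, is_combo, k if is_combo else None)
# (9, 2, 7)  -> True, k = [-3, 2], so w = -3u + 2v
# (4, -1, 8) -> False, so z is not a linear combination of u and v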

Example 2. Determine whether the vector v = (3, 9, -4, -2) is a linear combination of the vectors
u1 = (1, -2, 0, 3), u2 = (2, 3, 0, -1) and u3 = (2, -1, 2, 1).
Solution. Set the vector v as a linear combination of the ui's using the scalars k1, k2 and k3; that is, v = k1u1 + k2u2 + k3u3:
(3, 9, -4, -2) = k1(1, -2, 0, 3) + k2(2, 3, 0, -1) + k3(2, -1, 2, 1)
= (k1 + 2k2 + 2k3, -2k1 + 3k2 - k3, 2k3, 3k1 - k2 + k3)
Equating corresponding components gives
k1 + 2k2 + 2k3 = 3
-2k1 + 3k2 - k3 = 9
2k3 = -4
3k1 - k2 + k3 = -2
The above system is consistent (verify), and solving for the unknowns gives
k1 = 1, k2 = 3 and k3 = -2
Therefore, the vector v is a linear combination of the ui's, and hence
v = u1 + 3u2 - 2u3
Example 3. In each part express the vector as a linear combination of
p1 = 2 + x + 4x², p2 = 1 - x + 3x², and p3 = 3 + 2x + 5x²
(a) -9 - 7x - 15x²   (b) 6 + 11x + 6x²
Solution
(a) Set -9 - 7x - 15x² = k1p1 + k2p2 + k3p3
= k1(2 + x + 4x²) + k2(1 - x + 3x²) + k3(3 + 2x + 5x²)
= (4k1 + 3k2 + 5k3)x² + (k1 - k2 + 2k3)x + (2k1 + k2 + 3k3)
Equating corresponding coefficients gives
4k1 + 3k2 + 5k3 = -15
k1 - k2 + 2k3 = -7
2k1 + k2 + 3k3 = -9
Solving the system using Gaussian elimination gives k1 = -2, k2 = 1, k3 = -2. Hence, we write
-9 - 7x - 15x² = -2p1 + p2 - 2p3
(b) Set 6 + 11x + 6x² = k1p1 + k2p2 + k3p3
= (4k1 + 3k2 + 5k3)x² + (k1 - k2 + 2k3)x + (2k1 + k2 + 3k3)
Equating corresponding coefficients gives
4k1 + 3k2 + 5k3 = 6
k1 - k2 + 2k3 = 11
2k1 + k2 + 3k3 = 6
Solving the system using Gaussian elimination gives k1 = 4, k2 = -5, k3 = 1. Hence, we write
6 + 11x + 6x² = 4p1 - 5p2 + p3
Example 4. Testing for Spanning
Determine whether the vectors v1 = (1, 1, 2), v2 = (1, 0, 1), v3 = (2, 1, 3) span the vector space R³.
Solution. We need to determine whether an arbitrary vector b = (b1, b2, b3) in R³ can be expressed as a linear combination
b = k1v1 + k2v2 + k3v3
of the vectors v1, v2, and v3. Expressing this equation in terms of components gives
(b1, b2, b3) = k1(1, 1, 2) + k2(1, 0, 1) + k3(2, 1, 3)
= (k1 + k2 + 2k3, k1 + k3, 2k1 + k2 + 3k3)
which gives the system
k1 + k2 + 2k3 = b1
k1 + k3 = b2
2k1 + k2 + 3k3 = b3

Now we check whether the system is consistent for all values of b1, b2, and b3. The system is consistent for every choice of b if and only if its coefficient matrix has a nonzero determinant. In this case the determinant is zero (verify), so v1, v2, and v3 do not span R³.
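
The determinant test just used is easy to automate. A minimal sketch (not part of the lecture note), assuming Python with NumPy; the function name spans_R3 is ours:

import numpy as np

def spans_R3(v1, v2, v3):
    """k1*v1 + k2*v2 + k3*v3 = b is solvable for every b exactly when the
    coefficient matrix (columns v1, v2, v3) has a nonzero determinant."""
    M = np.column_stack([v1, v2, v3])
    return not np.isclose(np.linalg.det(M), 0)

print(spans_R3([1, 1, 2], [1, 0, 1], [2, 1, 3]))   # False (Example 4)
print(spans_R3([2, 2, 2], [0, 0, 3], [0, 1, 1]))   # True  (Example 6(a) below)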

Example 5. Determine whether the following polynomials span P2:

p1 = 1 - x + 2x², p2 = 3 + x, p3 = 5 - x + 4x², p4 = -2 - 2x + 2x²
Solution. We must determine whether an arbitrary polynomial b in P2, with coefficient vector (b1, b2, b3), can be expressed as a linear combination
b = k1p1 + k2p2 + k3p3 + k4p4
of the polynomials p1, p2, p3, and p4. If we substitute the expressions for p1, p2, p3, and p4 into the above equation and equate corresponding coefficients, we obtain the system
2k1 + 4k3 + 2k4 = b1
-k1 + k2 - k3 - 2k4 = b2
k1 + 3k2 + 5k3 - 2k4 = b3
This system is not consistent for all values of b1, b2, b3 (the coefficient matrix has rank 2, so some right-hand sides give an inconsistent system); therefore the polynomials p1, p2, p3, and p4 do not span P2.

Example 6. In each part, determine whether the vectors span R³.
(a) v1 = (2, 2, 2), v2 = (0, 0, 3), v3 = (0, 1, 1)
(b) v1 = (2, -1, 3), v2 = (4, 1, 2), v3 = (8, -1, 8)
Solution
(a) We need to determine whether an arbitrary vector b = (b1, b2, b3) in R³ can be expressed as a linear combination
b = k1v1 + k2v2 + k3v3
of the vectors v1, v2, and v3. Expressing this equation in terms of components gives
(b1, b2, b3) = k1(2, 2, 2) + k2(0, 0, 3) + k3(0, 1, 1)
= (2k1, 2k1 + k3, 2k1 + 3k2 + k3)
which gives the system
2k1 = b1
2k1 + k3 = b2
2k1 + 3k2 + k3 = b3
The above system is consistent for every b1, b2, b3 (verify), and so the vectors v1, v2, and v3 span R³.

5.1 LINEAR DEPENDENCE AND INDEPENDENCE

Definition. If S = {v1, v2, ..., vr} is a set of two or more vectors in a vector space V, then S is said to be a linearly independent set if no vector in S can be expressed as a linear combination of the others. A set that is not linearly independent is said to be linearly dependent.

Theorem. A nonempty set S = {v1, v2, ..., vr} in a vector space V is linearly independent if and only if the only coefficients satisfying the vector equation
k1v1 + k2v2 + ... + krvr = 0
are k1 = 0, k2 = 0, ..., kr = 0.
Example 1. Linear Independence in R³
Determine whether the vectors v1 = (1, -2, 3), v2 = (5, 6, -1), v3 = (3, 2, 1) are linearly independent or dependent in R³.
Solution. The linear independence or dependence of these vectors is determined by whether the vector equation
k1v1 + k2v2 + k3v3 = 0
can be satisfied with coefficients that are not all zero. Rewriting the equation in component form, we have
k1(1, -2, 3) + k2(5, 6, -1) + k3(3, 2, 1) = (0, 0, 0)
Equating corresponding components on the two sides yields the homogeneous linear system
k1 + 5k2 + 3k3 = 0
-2k1 + 6k2 + 2k3 = 0
3k1 - k2 + k3 = 0
Solving the system gives
k1 = -t/2, k2 = -t/2, k3 = t
This shows that the system has nontrivial solutions and hence the vectors are linearly dependent.
Example 2. Linear Independence in R⁴
Determine whether the vectors
v1 = (1, 2, 2, -1), v2 = (4, 9, 9, -4), v3 = (5, 8, 9, -5)
in R⁴ are linearly dependent or linearly independent.
Solution. The linear independence or linear dependence of these vectors is determined by whether there exist nontrivial solutions of the vector equation
k1v1 + k2v2 + k3v3 = 0
or, equivalently,
k1(1, 2, 2, -1) + k2(4, 9, 9, -4) + k3(5, 8, 9, -5) = (0, 0, 0, 0)
Equating corresponding components on the two sides yields the homogeneous linear system
k1 + 4k2 + 5k3 = 0
2k1 + 9k2 + 8k3 = 0
2k1 + 9k2 + 9k3 = 0
-k1 - 4k2 - 5k3 = 0
This system has only the trivial solution k1 = 0, k2 = 0, k3 = 0 (verify), from which we conclude that v1, v2, and v3 are linearly independent.
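
Whether the homogeneous system above has nontrivial solutions can also be read off from the rank of the matrix whose columns are the given vectors. A minimal sketch (not part of the lecture note), assuming Python with NumPy; the function name linearly_independent is ours:

import numpy as np

def linearly_independent(vectors):
    """k1*v1 + ... + kr*vr = 0 has only the trivial solution exactly when the
    matrix with the vectors as columns has full column rank."""
    M = np.column_stack(vectors)
    return np.linalg.matrix_rank(M) == M.shape[1]

print(linearly_independent([(1, -2, 3), (5, 6, -1), (3, 2, 1)]))           # False (Example 1)
print(linearly_independent([(1, 2, 2, -1), (4, 9, 9, -4), (5, 8, 9, -5)])) # True  (Example 2)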
Example 3. Linear Independence of Polynomials
Determine whether the polynomials
p1 = 1 - x, p2 = 5 + 3x - 2x², p3 = 1 + 3x - x²
are linearly dependent or linearly independent in P2.
Solution. The linear independence or dependence of these vectors is determined by whether the vector equation
k1p1 + k2p2 + k3p3 = 0
can be satisfied with coefficients that are not all zero. Writing the above equation in its polynomial form, we have
(k1 + 5k2 + k3) + (-k1 + 3k2 + 3k3)x + (-2k2 - k3)x² = 0
The linear dependence or independence of the given polynomials hinges on whether the following linear system has a nontrivial solution:
k1 + 5k2 + k3 = 0
-k1 + 3k2 + 3k3 = 0
-2k2 - k3 = 0
The system has nontrivial solutions (verify), and therefore the set {p1, p2, p3} is linearly dependent.
Example 4. For which real values of λ do the following vectors form a linearly dependent set in R³?
v1 = (λ, -1/2, -1/2), v2 = (-1/2, λ, -1/2), v3 = (-1/2, -1/2, λ)
Solution. Suppose there are constants a, b, and c such that
a(λ, -1/2, -1/2) + b(-1/2, λ, -1/2) + c(-1/2, -1/2, λ) = (0, 0, 0)
Equating corresponding components on the two sides yields the homogeneous linear system
[λ -1/2 -1/2; -1/2 λ -1/2; -1/2 -1/2 λ][a; b; c] = [0; 0; 0]
The determinant of the coefficient matrix is
λ³ - (3/4)λ - 1/4 = (λ - 1)(λ + 1/2)²
This equals zero if and only if λ = 1 or λ = -1/2. Thus, the vectors are linearly dependent for these two values of λ and linearly independent for all other values.

6.1 RANK OF A MATRIX

Definition. The rank of a matrix A is said to be r if
(a) it has at least one nonzero minor of order r, and
(b) every minor of A of order higher than r is zero.
The number r obtained is called the rank of A, and we write Rank(A) = r or ρ(A) = r.
• NORMAL FORM (CANONICAL FORM)
By performing elementary transformations, any nonzero matrix A can be reduced to one of the following four forms, called the normal form of A:
(i) Ir   (ii) [Ir 0]   (iii) [Ir; 0]   (iv) [Ir 0; 0 0]
Example 1. Find the rank of the following matrix by reducing it to its normal form.
A = [1 2 3 4; 2 1 4 3; 3 0 5 -10]
Solution.
[1 2 3 4; 2 1 4 3; 3 0 5 -10]
→ [1 2 3 4; 0 -3 -2 -5; 0 -6 -4 -22]   (R2 - 2R1, R3 - 3R1)
→ [1 2 3 4; 0 1 2/3 5/3; 0 1 2/3 11/3]   (-1/3 R2, -1/6 R3)
→ [1 2 3 4; 0 1 2/3 5/3; 0 0 0 2]   (R3 - R2)
→ [1 2 3 4; 0 1 2/3 5/3; 0 0 0 1]   (1/2 R3)
→ [1 0 0 0; 0 1 2/3 5/3; 0 0 0 1]   (C2 - 2C1, C3 - 3C1, C4 - 4C1)
→ [1 0 0 0; 0 1 0 0; 0 0 0 1]   (C3 - 2/3 C2, C4 - 5/3 C2)
→ [1 0 0 0; 0 1 0 0; 0 0 1 0] = [I3 0]   (C3 ↔ C4)
Therefore, the normal form of A is [I3 0] and the rank of A is given by ρ(A) = 3.

Example 2. Find the rank of the following matrix; hence, reduce it to its normal form.

A = [1 2 -1 3; 4 1 2 1; 3 -1 1 2; 1 2 0 1]
Solution.
[1 2 -1 3; 4 1 2 1; 3 -1 1 2; 1 2 0 1]
→ [1 2 -1 3; 0 -7 6 -11; 0 -7 4 -7; 0 0 1 -2]   (R2 - 4R1, R3 - 3R1, R4 - R1)
→ [1 2 -1 3; 0 -7 6 -11; 0 0 -2 4; 0 0 1 -2]   (R3 - R2)
→ [1 2 -1 3; 0 -7 6 -11; 0 0 -2 4; 0 0 0 0]   (2R4 + R3)
→ [1 0 0 0; 0 -7 6 -11; 0 0 -2 4; 0 0 0 0]   (C2 - 2C1, C3 + C1, C4 - 3C1)
→ [1 0 0 0; 0 -7 0 0; 0 0 -2 4; 0 0 0 0]   (C3 + 6/7 C2, C4 - 11/7 C2)
→ [1 0 0 0; 0 1 0 0; 0 0 1 4; 0 0 0 0]   (-1/7 C2, -1/2 C3)
→ [1 0 0 0; 0 1 0 0; 0 0 1 0; 0 0 0 0] = [I3 0; 0 0]   (C4 - 4C3)
Therefore, the normal form of A is [I3 0; 0 0] and the rank of A is given by ρ(A) = 3.

Example 3.
Find the rank of the matrix
[-1 2 3 -2; 2 -5 1 2; 3 -8 5 2; 5 -12 -1 6]
Solution.
[-1 2 3 -2; 2 -5 1 2; 3 -8 5 2; 5 -12 -1 6]
→ [-1 2 3 -2; 0 -1 7 -2; 0 -2 14 -4; 0 -2 14 -4]   (R2 + 2R1, R3 + 3R1, R4 + 5R1)
→ [-1 2 3 -2; 0 -1 7 -2; 0 0 0 0; 0 0 0 0]   (R3 - 2R2, R4 - 2R2)
Hence, Rank = number of nonzero rows = 2.
Example 4.
Find the rank of A where
A = [1 3 1 -2 -3; 1 4 3 -1 -4; 2 3 -4 -7 -3; 3 8 1 -7 -8]
Solution. Reduce A to echelon form:
[1 3 1 -2 -3; 1 4 3 -1 -4; 2 3 -4 -7 -3; 3 8 1 -7 -8]
→ [1 3 1 -2 -3; 0 1 2 1 -1; 0 -3 -6 -3 3; 0 -1 -2 -1 1]   (R2 - R1, R3 - 2R1, R4 - 3R1)
→ [1 3 1 -2 -3; 0 1 2 1 -1; 0 0 0 0 0; 0 0 0 0 0]   (R3 + 3R2, R4 + R2)
Therefore, ρ(A) = 2.
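
The ranks found by hand in Examples 1-4 can be cross-checked numerically. A minimal sketch (not part of the lecture note), assuming Python with NumPy:

import numpy as np

A1 = [[1, 2, 3, 4], [2, 1, 4, 3], [3, 0, 5, -10]]
A3 = [[-1, 2, 3, -2], [2, -5, 1, 2], [3, -8, 5, 2], [5, -12, -1, 6]]
A4 = [[1, 3, 1, -2, -3], [1, 4, 3, -1, -4], [2, 3, -4, -7, -3], [3, 8, 1, -7, -8]]

# NumPy computes the rank from a singular value decomposition; the answers agree
# with counting nonzero rows after reduction to echelon form.
print(np.linalg.matrix_rank(A1))   # 3 (Example 1)
print(np.linalg.matrix_rank(A3))   # 2 (Example 3)
print(np.linalg.matrix_rank(A4))   # 2 (Example 4)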

6.2 BASIS AND DIMENSION
Definition. If S = {v1, v2, ..., vn} is a set of vectors in a finite-dimensional vector space V, then S is called a basis for V if:

(a) S spans V.
(b) S is linearly independent.

Example 1. The Standard Basis for Rⁿ

The standard unit vectors
e1 = (1, 0, 0, ..., 0), e2 = (0, 1, 0, ..., 0), ..., en = (0, 0, 0, ..., 1)
span Rⁿ since every vector v = (v1, v2, ..., vn) in Rⁿ can be expressed as
v = v1e1 + v2e2 + ... + vnen
which is a linear combination of the vectors e1, e2, ..., en. To illustrate the linear independence of the standard unit vectors in Rⁿ, consider the standard unit vectors in R³:
i = (1, 0, 0), j = (0, 1, 0), k = (0, 0, 1)
We must show that the only coefficients satisfying the vector equation
k1i + k2j + k3k = 0
are k1 = 0, k2 = 0, k3 = 0. This is evident by writing this equation in its component form:
(k1, k2, k3) = (0, 0, 0)
Thus, the standard unit vectors form a basis for Rⁿ that we call the standard basis for Rⁿ.
Example 2. The Standard Basis for Pn

Show that S = {1, x, x², ..., xⁿ} is a basis for the vector space Pn of polynomials of degree n or less.
Solution. We need to show that the polynomials in S span Pn and are linearly independent.
Let us denote these polynomials by
p0 = 1, p1 = x, p2 = x², ..., pn = xⁿ
The polynomials p0, p1, ..., pn span the vector space Pn since each polynomial p in Pn can be written as
p = a0 + a1x + ... + anxⁿ
which is a linear combination of 1, x, x², ..., xⁿ. We can denote this by writing
Pn = span{1, x, x², ..., xⁿ}
Next, we must show that the only coefficients satisfying the vector equation
a0p0 + a1p1 + a2p2 + ... + anpn = 0
or, equivalently,
a0 + a1x + a2x² + ... + anxⁿ = 0
are
a0 = a1 = a2 = ... = an = 0
Thus, the polynomials p0, p1, ..., pn are linearly independent, and therefore they form a basis for Pn that we call the standard basis for Pn.

Example 3. The Standard Basis for Mmn

Show that the matrices
M1 = [1 0; 0 0], M2 = [0 1; 0 0], M3 = [0 0; 1 0], M4 = [0 0; 0 1]
form a basis for the vector space M22 of 2 x 2 matrices.
Solution. We must show that the matrices span M22 and are linearly independent.
To prove that the matrices span M22, we need to show that every 2 x 2 matrix
B = [a b; c d]
can be expressed as
c1M1 + c2M2 + c3M3 + c4M4 = B   (1)
and to prove linear independence, we must show that the equation
c1M1 + c2M2 + c3M3 + c4M4 = 0   (2)
has only the trivial solution, where 0 is the 2 x 2 zero matrix.
The matrix forms of equations (1) and (2) are
c1[1 0; 0 0] + c2[0 1; 0 0] + c3[0 0; 1 0] + c4[0 0; 0 1] = [a b; c d]
and
c1[1 0; 0 0] + c2[0 1; 0 0] + c3[0 0; 1 0] + c4[0 0; 0 1] = [0 0; 0 0]
which can be rewritten as
[c1 c2; c3 c4] = [a b; c d]  and  [c1 c2; c3 c4] = [0 0; 0 0]
Since the first equation has the solution
c1 = a, c2 = b, c3 = c, c4 = d
the matrices span M22, and since the second equation has only the trivial solution
c1 = 0, c2 = 0, c3 = 0, c4 = 0
the matrices are linearly independent. This proves that M1, M2, M3, M4 form a basis for M22.

Example 4. Show that the vectors
v1 = (1, 2, 1), v2 = (2, 9, 0), v3 = (3, 3, 4)
form a basis for R³.

Solution. We need to show that these vectors are linearly independent and span R³.
Suppose there exist constants c1, c2, and c3 such that
c1v1 + c2v2 + c3v3 = 0
By equating corresponding components on the two sides, we have the following system:
c1 + 2c2 + 3c3 = 0
2c1 + 9c2 + 3c3 = 0
c1 + 4c3 = 0
Form the coefficient matrix and reduce to echelon form:
[1 2 3; 2 9 3; 1 0 4] → [1 2 3; 0 5 -3; 0 -2 1]   (R2 - 2R1, R3 - R1)
→ [1 2 3; 0 5 -3; 0 0 -1/5]   (R3 + 2/5 R2)
Since the echelon matrix has no zero rows, the homogeneous system has only the trivial solution, so the vectors are linearly independent; three linearly independent vectors in the three-dimensional space R³ also span it, and so v1, v2, and v3 form a basis for R³.

Definition. If S = {v1, v2, ..., vn} is a basis for a vector space V, and
v = c1v1 + c2v2 + ... + cnvn
is the expression for a vector v in terms of the basis S, then the scalars c1, c2, ..., cn are called the coordinates of v relative to the basis S. The vector (c1, c2, ..., cn) in Rⁿ constructed from these coordinates is called the coordinate vector of v relative to S; it is denoted by
(v)s = (c1, c2, ..., cn)
Example 5. Consider the vectors
v1 = (1, 2, 1), v2 = (2, 9, 0), v3 = (3, 3, 4)
(a) Find the coordinate vector of v = (5, -1, 9) relative to the basis S = {v1, v2, v3}.
(b) Find the vector v in R³ whose coordinate vector relative to S is (v)s = (-1, 3, 2).

Solution.
(a) To find (v)s we must first express v as a linear combination of the vectors in S; that is, we must find values of c1, c2, and c3 such that
v = c1v1 + c2v2 + c3v3
or, in terms of components,
(5, -1, 9) = c1(1, 2, 1) + c2(2, 9, 0) + c3(3, 3, 4)
Equating corresponding components gives
c1 + 2c2 + 3c3 = 5
2c1 + 9c2 + 3c3 = -1
c1 + 4c3 = 9
Solving this system, we obtain c1 = 1, c2 = -1, c3 = 2. Therefore,
(v)s = (1, -1, 2)
(b) Using the definition of (v)s, we obtain
v = (-1)v1 + 3v2 + 2v3
= (-1)(1, 2, 1) + 3(2, 9, 0) + 2(3, 3, 4) = (11, 31, 7)
Definition. The dimension of a finite dimensional vector space V is denoted by dim(V ) and is
defined to be the number of vectors in a basis for V . In addition, the zero-vector space is
defined to have dimension zero.
Example 3. Let W be the space generated by the polynomials
V1 = t³ - 2t² + 4t + 1
V2 = 2t³ - 3t² + 9t - 1
V3 = t³ + 6t - 5
V4 = 2t³ - 5t² + 7t + 5
Find a basis and the dimension of W.
Solution.
The coordinate vectors of the given polynomials relative to the basis (t³, t², t, 1) are respectively
(V1) = (1, -2, 4, 1)
(V2) = (2, -3, 9, -1)
(V3) = (1, 0, 6, -5)
(V4) = (2, -5, 7, 5)
Form a matrix of the coordinate vectors and reduce it to echelon form:
1 −2 4 1  1 −2 4 1  1 −2 4 1
 2 −3 9 −1 0 1 1 −3 R − 2 R 0 1 1 −3
    2 1 
1 0 6 −5 0 2 2 −6  R3 − R1 0 0 0 0  R3 − 2 R2
     
 2 −5 7 5  0 −1 −1 3  R4 − 2 R1 0 0 0 0  R4 + R2

The nonzero rows 1, − 2, 4, 1 and  0, 1, 1, − 3 of the echelon matrix form a basis of the space
generated by the coordinate vectors, and so the corresponding polynomials
t 3 − 2t 2 + 4t + 1 and t 2 + t − 3 form a basis of W . Thus, dim(W ) = 2 .
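
The same computation can be carried out on the coordinate vectors with a computer algebra system. A minimal sketch (not part of the lecture note), assuming Python with the SymPy library; note that rref() may return a different, but equivalent, basis of the same row space as the hand reduction above:

import sympy as sp

# Coordinate vectors of V1..V4 relative to the basis (t^3, t^2, t, 1)
M = sp.Matrix([[1, -2, 4, 1],
               [2, -3, 9, -1],
               [1, 0, 6, -5],
               [2, -5, 7, 5]])

R, pivots = M.rref()                 # reduced row echelon form and pivot columns
print(len(pivots))                   # 2, so dim(W) = 2
print([R.row(i) for i in range(len(pivots))])   # two nonzero rows: a basis of the row space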

7.1 LINEAR TRANSFORMATION OR LINEAR MAPPING


Let V and U be vector spaces over the same field K. A mapping F : V → U is called a linear mapping or linear transformation if:
(a) F(v + w) = F(v) + F(w) for any v, w ∈ V.
(b) F(kv) = kF(v) for any k ∈ K and v ∈ V.
(c) Equivalently, F(mv + nw) = mF(v) + nF(w) for all scalars m, n ∈ K and any vectors v, w ∈ V.

Example 1.
Let F : R³ → R be a mapping defined by F(x, y, z) = 2x - 3y + 4z. Show that F is a linear map.

Solution.
(a) Suppose v, w ∈ R³, where v = (a1, a2, a3) and w = (b1, b2, b3); then
v + w = (a1, a2, a3) + (b1, b2, b3) = (a1 + b1, a2 + b2, a3 + b3)
F(v + w) = F(a1 + b1, a2 + b2, a3 + b3) = 2(a1 + b1) - 3(a2 + b2) + 4(a3 + b3)
= 2a1 + 2b1 - 3a2 - 3b2 + 4a3 + 4b3
= (2a1 - 3a2 + 4a3) + (2b1 - 3b2 + 4b3)
= F(v) + F(w)
(b) If k ∈ K and v ∈ R³, then
kv = k(a1, a2, a3) = (ka1, ka2, ka3)
F(kv) = F(ka1, ka2, ka3) = 2ka1 - 3ka2 + 4ka3 = k(2a1 - 3a2 + 4a3) = kF(v)
(c) If m, n ∈ K and v, w ∈ R³, then mv + nw = (ma1 + nb1, ma2 + nb2, ma3 + nb3), and
F(mv + nw) = F(ma1 + nb1, ma2 + nb2, ma3 + nb3)
= 2(ma1 + nb1) - 3(ma2 + nb2) + 4(ma3 + nb3)
= (2ma1 - 3ma2 + 4ma3) + (2nb1 - 3nb2 + 4nb3)
= m(2a1 - 3a2 + 4a3) + n(2b1 - 3b2 + 4b3)
= mF(v) + nF(w)
Thus, F is linear.
Definition. Kernel
Let F : U → V be a linear map. The kernel of F, denoted by ker(F), is the set of all vectors in U that are mapped to the zero vector in V, i.e., ker(F) = {u ∈ U : F(u) = 0}.

Definition. Image
The image of F, denoted by Im(F), is the set of all vectors of the form F(u) ∈ V where u ∈ U. That is, Im(F) = {v ∈ V : F(u) = v for some u ∈ U}.

Theorem. Let F : V → U be a linear map. Then
(a) dim(V) = nullity(F) + rank(F)
(b) dim(ker F) = nullity(F)
(c) dim(Im F) = rank(F)
(d) dim(V) = dim(ker F) + dim(Im F)

Example 1.
Let T : R³ → R³ be a linear mapping defined by T(x, y, z) = (x + 2y - z, y + z, x + y - 2z).
Find a basis and the dimension of:
(a) the image of T
(b) the kernel of T
Solution.
(a) Using the usual basis for R³: e1 = (1, 0, 0), e2 = (0, 1, 0), e3 = (0, 0, 1). The images of the usual basis vectors generate the image of T. Then:
T(e1) = T(1, 0, 0) = (1 + 0 - 0, 0 + 0, 1 + 0 - 0) = (1, 0, 1)
T(e2) = T(0, 1, 0) = (0 + 2 - 0, 1 + 0, 0 + 1 - 0) = (2, 1, 1)
T(e3) = T(0, 0, 1) = (0 + 0 - 1, 0 + 1, 0 + 0 - 2) = (-1, 1, -2)
We then form a matrix whose rows are the generators of the image of T and reduce it to echelon form:
[1 0 1; 2 1 1; -1 1 -2] → [1 0 1; 0 1 -1; 0 1 -1]   (R2 - 2R1, R3 + R1)
→ [1 0 1; 0 1 -1; 0 0 0]   (R3 - R2)
Thus, the nonzero rows (1, 0, 1), (0, 1, -1) form a basis for the image of T. Hence
dim(Im T) = rank(T) = 2
(b) To determine a basis of the kernel of T, we seek the set of all vectors (x, y, z) such that T(x, y, z) = (0, 0, 0), i.e.,
T(x, y, z) = (x + 2y - z, y + z, x + y - 2z) = (0, 0, 0)
Setting corresponding components equal to each other, we have:
x + 2y - z = 0
y + z = 0
x + y - 2z = 0
Writing the system in matrix form and reducing it to echelon form, we have
1 2 −1 1 2 −1 1 2 −1
0 1 1  0 1 1  0 1 1 
     
1 1 −2  0 −1 −1 R3 − R1 0 0 0  R3 + R2
The nonzero rows are (1, 2, − 1) and (0, 1, 1) which corresponds to
x + 2y − z = 0 (1)
y+z = 0 (2)
From equation (2): y = −z (3)
Put (3) in (1): x = 3z
Then the vector
 x  3z  3
 y  =  − z  = z  −1
     
 z   z   1 
Page | 27
Therefore, the kernel of T = Nullity of T is the subspace of 3
spanned by (3, − 1, 1) .
That is (3, − 1, 1) is a basis of T and so Nullity of T is 1 . Hence
dim(U ) = Nullity of T + rank of T
= 1 + 2 = 3
Example 2.
Consider the map F : R⁴ → R² defined by F(x, y, z, w) = (2x + y + z + w, x + z - w). Find a basis and the dimension of:
(a) the image of F
(b) the kernel of F
Solution.
(a) Using the usual basis for R⁴:
e1 = (1, 0, 0, 0), e2 = (0, 1, 0, 0), e3 = (0, 0, 1, 0), e4 = (0, 0, 0, 1).
The images of the usual basis vectors generate the image of F. Then:
F(e1) = F(1, 0, 0, 0) = (2, 1)
F(e2) = F(0, 1, 0, 0) = (1, 0)
F(e3) = F(0, 0, 1, 0) = (1, 1)
F(e4) = F(0, 0, 0, 1) = (1, -1)
We then form a matrix whose rows are the generators of the image of F and reduce it to echelon form:
[2 1; 1 0; 1 1; 1 -1] → [1 0; 2 1; 1 1; 1 -1]   (R1 ↔ R2)
→ [1 0; 0 1; 0 1; 0 -1]   (R2 - 2R1, R3 - R1, R4 - R1)
→ [1 0; 0 1; 0 0; 0 0]   (R3 - R2, R4 + R2)
Thus, the nonzero rows (1, 0) and (0, 1) form a basis for the image of F. Hence
dim(Im F) = rank(F) = 2.
(b) To determine a basis of the kernel of F, we seek the set of all vectors (x, y, z, w) ∈ R⁴ such that F(x, y, z, w) = 0. That is,
(2x + y + z + w, x + z - w) = (0, 0)
Equating corresponding components gives
2x + y + z + w = 0   (1)
x + z - w = 0        (2)
From (2), we have
x = w - z   (3)
Substituting (3) in (1), we obtain
y = z - 3w   (4)
Now z and w are free variables; they can be chosen arbitrarily. Let z = t and w = h, so that
[x; y; z; w] = [-t + h; t - 3h; t; h] = t[-1; 1; 1; 0] + h[1; -3; 0; 1]
The vectors (-1, 1, 1, 0) and (1, -3, 0, 1) form a basis of the kernel of F, and therefore nullity(F) = 2. Hence
dim(R⁴) = nullity(F) + rank(F) = 2 + 2 = 4
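
Because F is given by a matrix relative to the usual bases, its rank and nullity can be checked numerically. A minimal sketch (not part of the lecture note), assuming Python with NumPy:

import numpy as np

# Matrix of F(x, y, z, w) = (2x + y + z + w, x + z - w) relative to the usual bases
A = np.array([[2, 1, 1, 1],
              [1, 0, 1, -1]])

rank = np.linalg.matrix_rank(A)      # dim(Im F)
nullity = A.shape[1] - rank          # dim(ker F) = dim(R^4) - rank
print(rank, nullity)                 # 2 2

# The kernel basis vectors found above can be checked directly:
for u in ([-1, 1, 1, 0], [1, -3, 0, 1]):
    print(A @ np.array(u))           # [0 0] in both cases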
8.1 MATRICES AND LINEAR OPERATORS
Let T be a linear operator on a vector space V over a field K, and suppose {e1, e2, ..., en} is a basis of V. Now T(e1), T(e2), ..., T(en) are vectors in V, and so each is a linear combination of the elements of the basis {ei}:
T(e1) = a11e1 + a12e2 + ... + a1nen
T(e2) = a21e1 + a22e2 + ... + a2nen
...
T(en) = an1e1 + an2e2 + ... + annen

Definition.
The transpose of the above matrix of coefficients, denoted by [T]e or [T], is called the matrix representation of T relative to the basis {ei}, i.e.,
[T]e = [a11 a21 ... an1; a12 a22 ... an2; ... ; a1n a2n ... ann]
Example 1.
Let T be the linear operator on R² defined by T(x, y) = (3x - 4y, x + 5y). Find the matrix representation of T in the basis e1 = (1, 0), e2 = (0, 1).
Solution
T(x, y) = (3x - 4y, x + 5y)
Now,
T(e1) = T(1, 0) = (3, 1)
Writing T(e1) as a linear combination of the ei's using scalars a and b, that is:
(3, 1) = ae1 + be2 = a(1, 0) + b(0, 1) = (a, b)
Equating corresponding components, we have
a = 3, b = 1   (1)
Similarly,
T(e2) = T(0, 1) = (-4, 5)
Writing T(e2) as a linear combination of the ei's using scalars c and d, that is:
(-4, 5) = ce1 + de2 = c(1, 0) + d(0, 1) = (c, d)
Equating corresponding components, we have
c = -4, d = 5   (2)
Thus, the coefficient matrix is
[a b; c d] = [3 1; -4 5]
Therefore, the matrix representation of T (the transpose of the coefficient matrix) is
[T]e = [3 -4; 1 5]
Example 2.
Find the matrix representation of the operator T(x, y) = (5x + y, 3x - 2y) relative to the basis f1 = (1, 2) and f2 = (2, 3).
Solution.
T(x, y) = (5x + y, 3x - 2y)
Now,
T(f1) = T(1, 2) = (7, -1)
Writing T(f1) as a linear combination of the fi's using scalars a and b, that is:
(7, -1) = af1 + bf2 = a(1, 2) + b(2, 3) = (a + 2b, 2a + 3b)
Equating corresponding components, we have
a + 2b = 7    (1)
2a + 3b = -1  (2)
Solving these two equations, we obtain:
a = -23, b = 15
Similarly,
T(f2) = T(2, 3) = (13, 0)
Writing T(f2) as a linear combination of the fi's using scalars c and d, that is:
(13, 0) = cf1 + df2 = c(1, 2) + d(2, 3) = (c + 2d, 2c + 3d)
Equating corresponding components, we have
c + 2d = 13   (3)
2c + 3d = 0   (4)
Solving these equations, we have:
c = -39, d = 26
Thus, the coefficient matrix is
[a b; c d] = [-23 15; -39 26]
Therefore, the matrix representation of T (the transpose of the coefficient matrix) is
[T]f = [-23 -39; 15 26]

Page | 30

You might also like
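
Solving for the coordinates of each T(fi) relative to the given basis, as done by hand in Examples 1 and 2, can be automated. A minimal sketch (not part of the lecture note), assuming Python with NumPy; the function name matrix_representation is ours:

import numpy as np

def matrix_representation(T, basis):
    """Column i of the representation is the coordinate vector of T(f_i)
    relative to the given basis; this is the transpose of the coefficient
    matrix written row by row in the definition above."""
    B = np.column_stack(basis)                      # basis vectors as columns
    cols = [np.linalg.solve(B, T(f)) for f in basis]
    return np.column_stack(cols)

T = lambda v: np.array([5 * v[0] + v[1], 3 * v[0] - 2 * v[1]])
print(matrix_representation(T, [np.array([1, 2]), np.array([2, 3])]))
# [[-23. -39.]
#  [ 15.  26.]]   (compare Example 2)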