Department of Mathematics
MTH 201: LINEAR ALGEBRA I LECTURE NOTE (2019/2020)
By: Musa Abdullahi - Email: [email protected] - Google Classroom Code: x3ztzjn
Course Contents:
Matrices: algebra of matrices. Vector spaces over the real field: subspaces, linear independence, basis and dimension. Linear transformations and their representation by matrices; range, null space, rank. Singular and nonsingular transformations.
Text:
1. Howard Anton & Chris Rorres: Elementary Linear Algebra and Applications 11th Edition.
2. Advanced Engineering Mathematics by H. K. Dass 2nd Edition.
1.0 MATRICES
Let us consider a set of simultaneous equations,
x + 2y + 3z + 5t = 0
4x + 2y + 5z + 7t = 0
3x + 4y + 2z + 6t = 0
Now, writing the coefficients of x, y, z, t from the above equations and enclosing them within brackets, we get
A = [1 2 3 5; 4 2 5 7; 3 4 2 6]
The above system of numbers arranged in a rectangular array in rows and columns and bounded by the
brackets is called a Matrix. More generally, we make the following definition.
▪ Definition: A Matrix is a rectangular array of numbers. The numbers in the array are called
the entries in the matrix.
(e) Diagonal Matrix. A square matrix is called a diagonal matrix if all its non-diagonal elements are zero, e.g., [1 0 0; 0 3 0; 0 0 4]
(f) Scalar Matrix. A diagonal matrix in which all the diagonal elements are equal to a scalar, say k, is called a scalar matrix, e.g., [−6 0 0; 0 −6 0; 0 0 −6]
(g) Unit or Identity Matrix. A square matrix is called a unit matrix if all the diagonal elements are unity and the non-diagonal elements are zero, e.g., [1 0 0; 0 1 0; 0 0 1]
(h) Symmetric Matrix. A square matrix is called symmetric if, for all values of i and j, a_ij = a_ji, i.e., A^T = A, e.g., [a h g; h b f; g f c]
(i) Skew-Symmetric Matrix. A square matrix is called skew-symmetric if
1. a_ij = −a_ji for all values of i and j, i.e., A^T = −A.
2. All its diagonal elements are zero, e.g., [0 −h −g; h 0 −f; g f 0]
(j) Transpose of a Matrix. If in a given matrix A we interchange the rows and the corresponding columns, the new matrix obtained is called the transpose of A and is denoted by A^T or A', e.g.,
A = [2 3 4; 1 0 5; 6 7 8], A^T = [2 1 6; 3 0 7; 4 5 8]
(k) Orthogonal Matrix. A square matrix A is called orthogonal if the product of A and its transpose A^T is an identity matrix:
A A^T = I
If |A| = 1, the matrix A is called proper.
(l) Triangular Matrix. (Echelon Form) A square matrix all of whose elements below the leading diagonal are zero is called an upper triangular matrix. A square matrix all of whose elements above the leading diagonal are zero is called a lower triangular matrix, e.g.,
[2 6 7; 0 1 4; 0 0 7] (upper triangular), [2 0 0; 4 1 0; 5 6 7] (lower triangular)
(m) Singular Matrix. If the determinant of a matrix is zero, the matrix is known as a singular matrix, e.g., [1 2; 3 6] is singular, because |A| = 6 − 6 = 0
1.2 ADDITION OF MATRICES
If A and B be two matrices of the same order, then their sum, A + B is defined as the matrix, each
element of which is the sum of the corresponding elements of A and B .
Thus, if A = [4 2 5; 1 3 −6] and B = [1 0 2; 3 1 4],
then A + B = [4+1 2+0 5+2; 1+3 3+1 −6+4] = [5 2 7; 4 4 −2]
If A = [a_ij] and B = [b_ij], then A + B = [a_ij + b_ij]
Example 1. Write matrix A given below as the sum of a symmetric and a skew symmetric matrix.
A = [1 2 4; −2 5 3; −1 6 3]
Solution. A = [1 2 4; −2 5 3; −1 6 3]; on transposing, we get A^T = [1 −2 −1; 2 5 6; 4 3 3]
On adding A and A^T, we have A + A^T = [1 2 4; −2 5 3; −1 6 3] + [1 −2 −1; 2 5 6; 4 3 3] = [2 0 3; 0 10 9; 3 9 6]   (1)
On subtracting A^T from A, we get A − A^T = [1 2 4; −2 5 3; −1 6 3] − [1 −2 −1; 2 5 6; 4 3 3] = [0 4 5; −4 0 −3; −5 3 0]   (2)
Adding (1) and (2), we have
2A = [2 0 3; 0 10 9; 3 9 6] + [0 4 5; −4 0 −3; −5 3 0]
A = (1/2)[2 0 3; 0 10 9; 3 9 6] + (1/2)[0 4 5; −4 0 −3; −5 3 0]
the first matrix being symmetric and the second skew-symmetric.
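The decomposition above can be checked numerically. The following sketch (not part of the original notes; NumPy is assumed available) splits A into its symmetric part (A + A^T)/2 and skew-symmetric part (A − A^T)/2:

```python
import numpy as np

# Every square matrix A splits as A = P + Q with P symmetric, Q skew-symmetric.
A = np.array([[1, 2, 4],
              [-2, 5, 3],
              [-1, 6, 3]], dtype=float)

P = (A + A.T) / 2   # symmetric part: P^T == P
Q = (A - A.T) / 2   # skew-symmetric part: Q^T == -Q

print(P + Q)        # recovers A exactly
```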
• Properties of Matrix Addition: Only matrices of the same order can be added or subtracted.
(a) Commutative Law. A + B = B + A
(b) Associative Law. A + ( B + C ) = ( A + B) + C
A − B = [a_ij − b_ij]
Thus [8 6 4; 1 2 0] − [3 5 1; 7 6 2] = [8−3 6−5 4−1; 1−7 2−6 0−2] = [5 1 3; −6 −4 −2]
1.4 SCALAR MULTIPLE OF A MATRIX
If a matrix is multiplied by a scalar quantity k, then each element is multiplied by k, i.e., if
A = [2 3 4; 4 5 6; 6 7 9], then 3A = 3[2 3 4; 4 5 6; 6 7 9] = [3×2 3×3 3×4; 3×4 3×5 3×6; 3×6 3×7 3×9] = [6 9 12; 12 15 18; 18 21 27]
Example 2. If A = [0 2 0; 1 0 3; 1 1 2] and B = [1 2 1; 2 1 0; 0 0 3], find:
(a) 2A + 3B
(b) 3A − 4B
Solution
(a) 2A + 3B = 2[0 2 0; 1 0 3; 1 1 2] + 3[1 2 1; 2 1 0; 0 0 3] = [0 4 0; 2 0 6; 2 2 4] + [3 6 3; 6 3 0; 0 0 9] = [3 10 3; 8 3 6; 2 2 13]
(b) 3A − 4B = 3[0 2 0; 1 0 3; 1 1 2] − 4[1 2 1; 2 1 0; 0 0 3] = [0 6 0; 3 0 9; 3 3 6] − [4 8 4; 8 4 0; 0 0 12] = [−4 −2 −4; −5 −4 9; 3 3 −6]
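A quick numerical cross-check of Example 2 (a sketch, assuming NumPy; not part of the notes). Scalar multiples and sums act entrywise, so both parts are one-liners:

```python
import numpy as np

A = np.array([[0, 2, 0], [1, 0, 3], [1, 1, 2]])
B = np.array([[1, 2, 1], [2, 1, 0], [0, 0, 3]])

print(2 * A + 3 * B)   # matches the result of part (a)
print(3 * A - 4 * B)   # matches the result of part (b)
```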
1.5 MATRIX MULTIPLICATION
The product of two matrices A and B is only possible if the number of columns in A is equal to the
number of rows in B.
• Properties of Matrix Multiplication
1. Multiplication of matrices is not commutative, i.e., AB ≠ BA in general.
2. Matrix Multiplication is associative. i.e. A( BC ) = ( AB)C
Example 3. If A = [1 −2 3; 2 3 −1; −3 1 2] and B = [1 0 2; 0 1 2; 1 2 0], form the products AB and BA, and show that AB ≠ BA.
Solution.
Here: AB = [1 −2 3; 2 3 −1; −3 1 2][1 0 2; 0 1 2; 1 2 0] = [4 4 −2; 1 1 10; −1 5 −4]
BA = [1 0 2; 0 1 2; 1 2 0][1 −2 3; 2 3 −1; −3 1 2] = [−5 0 7; −4 5 3; 5 4 1]
Hence AB ≠ BA.
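The non-commutativity in Example 3 can be verified directly (a sketch assuming NumPy; `@` is matrix multiplication):

```python
import numpy as np

A = np.array([[1, -2, 3], [2, 3, -1], [-3, 1, 2]])
B = np.array([[1, 0, 2], [0, 1, 2], [1, 2, 0]])

AB = A @ B   # matches [4 4 -2; 1 1 10; -1 5 -4]
BA = B @ A   # matches [-5 0 7; -4 5 3; 5 4 1]
print(np.array_equal(AB, BA))   # False: AB != BA
```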
Example 4. If A = [1 2; −2 3], B = [2 1; 2 3] and C = [−3 1; 2 0],
verify that: (a) A(BC) = (AB)C
(b) A(B + C) = AB + AC.
Solution. We have
AB = [1 2; −2 3][2 1; 2 3] = [6 7; 2 7],  BC = [2 1; 2 3][−3 1; 2 0] = [−4 2; 0 2]
AC = [1 2; −2 3][−3 1; 2 0] = [1 1; 12 −2],  B + C = [2−3 1+1; 2+2 3+0] = [−1 2; 4 3]
(a) (AB)C = [6 7; 2 7][−3 1; 2 0] = [−4 6; 8 2] and
A(BC) = [1 2; −2 3][−4 2; 0 2] = [−4 6; 8 2], so (AB)C = A(BC).
(b) A(B + C) = [1 2; −2 3][−1 2; 4 3] = [7 8; 14 5] and
AB + AC = [6+1 7+1; 2+12 7−2] = [7 8; 14 5], so A(B + C) = AB + AC.
Example 5. If A = [1 2 2; 2 1 2; 2 2 1], show that A^2 − 4A − 5I = 0, where I and 0 are the unit and null matrices of order 3 respectively. Use this result to find A^(−1).
Solution. A^2 = [1 2 2; 2 1 2; 2 2 1][1 2 2; 2 1 2; 2 2 1] = [9 8 8; 8 9 8; 8 8 9]
A^2 − 4A − 5I = [9 8 8; 8 9 8; 8 8 9] − 4[1 2 2; 2 1 2; 2 2 1] − 5[1 0 0; 0 1 0; 0 0 1] = [0 0 0; 0 0 0; 0 0 0]
A^2 − 4A − 5I = 0, so 5I = A^2 − 4A
Multiplying by A^(−1), we get 5A^(−1) = A − 4I
= [1 2 2; 2 1 2; 2 2 1] − 4[1 0 0; 0 1 0; 0 0 1] = [−3 2 2; 2 −3 2; 2 2 −3]
A^(−1) = (1/5)[−3 2 2; 2 −3 2; 2 2 −3]
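Example 5 can be confirmed numerically; the sketch below (NumPy assumed, not part of the notes) checks both the matrix identity and the inverse obtained from it:

```python
import numpy as np

A = np.array([[1, 2, 2], [2, 1, 2], [2, 2, 1]])
I = np.eye(3)

# A satisfies its quadratic: A^2 - 4A - 5I = 0
print(np.allclose(A @ A - 4 * A - 5 * I, 0))   # True

# hence A^{-1} = (A - 4I)/5
Ainv = (A - 4 * I) / 5
print(np.allclose(A @ Ainv, I))                # True
```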
Example 6. Determine the values of α, β, γ for which the matrix A = [0 2β γ; α β −γ; α −β γ] is orthogonal.
Solution. Let A = [0 2β γ; α β −γ; α −β γ]; then A^T = [0 α α; 2β β −β; γ −γ γ], and A A^T = I requires
[4β^2 + γ^2   2β^2 − γ^2   −2β^2 + γ^2; 2β^2 − γ^2   α^2 + β^2 + γ^2   α^2 − β^2 − γ^2; −2β^2 + γ^2   α^2 − β^2 − γ^2   α^2 + β^2 + γ^2] = [1 0 0; 0 1 0; 0 0 1]
From 4β^2 + γ^2 = 1 and 2β^2 − γ^2 = 0, we get β = ±1/√6, γ = ±1/√3.
But α^2 + β^2 + γ^2 = 1, so α = ±1/√2.
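A numerical check of Example 6 (a sketch, not part of the notes; NumPy assumed): with the positive roots α = 1/√2, β = 1/√6, γ = 1/√3, the matrix is indeed orthogonal.

```python
import numpy as np

a, b, g = 1 / np.sqrt(2), 1 / np.sqrt(6), 1 / np.sqrt(3)
A = np.array([[0, 2 * b, g],
              [a, b, -g],
              [a, -b, g]])

# orthogonality: A A^T = I (up to floating-point rounding)
print(np.allclose(A @ A.T, np.eye(3)))   # True
```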
2.1 DETERMINANTS
The notion of determinants arises from the process of eliminating the unknowns of simultaneous linear equations. Consider the two linear equations in x,
a1 x + b1 = 0   (1)
a2 x + b2 = 0   (2)
From (1), x = −b1/a1.
Substituting this value of x in (2), we get the eliminant
a2(−b1/a1) + b2 = 0, or a1 b2 − a2 b1 = 0   (3)
From (1) and (2), by suppressing x, the eliminant is written as
|a1 b1; a2 b2| = 0   (4)
Each quantity a1 , b1 , a2 , b2 is called an element or a constituent of the determinant.
From (3) and (4), we know that both expressions are eliminants, so we equate them:
|a1 b1; a2 b2| = a1 b2 − a2 b1
a1 b2 − a2 b1 is called the expansion of the determinant |a1 b1; a2 b2|.
Example 1. Expand the determinant |3 2; 6 7|.
Solution. |3 2; 6 7| = (3 × 7) − (2 × 6) = 21 − 12 = 9
Definition. If A is a square matrix, then the minor of entry aij is denoted by M ij and is defined to
be the determinant of the submatrix that remains after the ith row and jth column are deleted from A .
The number (−1)i + j M ij is denoted by Cij and is called the cofactor of entry aij .
Example 2. Find the minors and cofactors of A = [3 1 −4; 2 5 6; 1 4 8]
Solution. The minor of entry a11 is
M11 = |5 6; 4 8| = 40 − 24 = 16
C11 = (−1)^(1+1) M11 = M11 = 16
Similarly, the minor of entry a32 is
M32 = |3 −4; 2 6| = 18 + 8 = 26
The cofactor of a32 is
C32 = (−1)^(3+2) M32 = −M32 = −26
Remark. Note that a minor M_ij and its corresponding cofactor C_ij are either the same or negatives of each other, and that the relating sign (−1)^(i+j) is either +1 or −1 in accordance with the pattern in the "checkerboard" array
[+ − + − +; − + − + −; + − + − +; − + − + −; ⋯]
For example, C11 = M11, C21 = −M21, C22 = M22, and so forth. Thus, it is never really necessary to compute (−1)^(i+j); the sign of a cofactor can be read directly from this pattern.
If A is the 4×4 matrix
A = [1 0 0 −1; 3 1 2 2; 1 0 −2 1; 2 0 0 1]
then to find det(A) it will be easiest to use cofactor expansion along the second column, since it has the most zeros:
det(A) = 1 · |1 0 −1; 1 −2 1; 2 0 1|
For the 3×3 determinant, expand along its second column, since it has the most zeros:
det(A) = 1 · (−2) |1 −1; 2 1| = −2(1 + 2) = −6
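As a cross-check of the cofactor expansion (a sketch assuming NumPy; not part of the notes):

```python
import numpy as np

A = np.array([[1, 0, 0, -1],
              [3, 1, 2, 2],
              [1, 0, -2, 1],
              [2, 0, 0, 1]])

print(round(np.linalg.det(A)))   # -6, as found by expanding along column 2
```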
Theorem (Determinant of a Triangular Matrix): If A is an n×n triangular matrix (upper triangular, lower triangular, or diagonal), then det(A) is the product of the entries on the main diagonal of the matrix; that is, det(A) = a11 a22 ⋯ ann.
Example 7.
|a11 0 0 0; a21 a22 0 0; a31 a32 a33 0; a41 a42 a43 a44| = a11 |a22 0 0; a32 a33 0; a42 a43 a44| = a11 a22 |a33 0; a43 a44| = a11 a22 a33 a44
A Technique for Evaluating 3×3 Determinants (Rule of Sarrus)
After writing the determinant, repeat its first two columns to the right; the determinant is then the sum of the products along the three left-to-right diagonals minus the sum of the products along the three right-to-left diagonals.
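Since the repeated-columns diagram does not reproduce well in plain text, here is the rule written out as a function (a sketch; the name `sarrus` is ours): the three "down-right" diagonal products are added and the three "down-left" ones subtracted.

```python
def sarrus(m):
    """Determinant of a 3x3 matrix m (list of rows) by the Rule of Sarrus."""
    return (m[0][0] * m[1][1] * m[2][2]      # down-right diagonals
          + m[0][1] * m[1][2] * m[2][0]
          + m[0][2] * m[1][0] * m[2][1]
          - m[0][2] * m[1][1] * m[2][0]      # down-left diagonals
          - m[0][0] * m[1][2] * m[2][1]
          - m[0][1] * m[1][0] * m[2][2])

# the matrix of Example 2 (minors and cofactors section)
print(sarrus([[3, 1, -4], [2, 5, 6], [1, 4, 8]]))   # 26
```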
• Properties of Determinants
Let A be an n n matrix. Then
1. If B is the matrix that results when a single row or single column of A is multiplied by a scalar
k , then det( B) = k det( A).
2. If B is the matrix that results when two rows or two columns of A are interchanged, then
det( B) = − det( A).
3. If B is the matrix that results when a multiple of one row of A is added to another or when a
multiple of one column is added to another, then det( B) = det( A).
= −3 |1 −2 3; 0 1 5; 0 10 −5|   (−2 times the first row was added to the third row)
= −3 |1 −2 3; 0 1 5; 0 0 −55|   (−10 times the second row was added to the third row)
= −3(−55)(1) = 165
Consider the nth-order determinant
D = |a1 b1 c1 d1 ⋯; a2 b2 c2 d2 ⋯; a3 b3 c3 d3 ⋯; ⋯; an bn cn dn ⋯|
This can be reduced to the (n − 1)th-order determinant
D = (1/a1^(n−2)) |p2 q2 r2 ⋯; p3 q3 r3 ⋯; ⋯; pn qn rn ⋯|
where pi = |a1 b1; ai bi|, qi = |a1 c1; ai ci|, ri = |a1 d1; ai di|, and so on; each entry is the 2×2 determinant formed from the leading element a1 and the corresponding entries of row i.
Thus, the nth-order determinant is condensed to a determinant of order (n − 1). Repeated application of this method ultimately results in a determinant of 2nd order, which can be evaluated. If the leading element is zero, it can be made non-zero by interchanging the columns.
Example 10. Condense the following determinants to second order and hence evaluate them:
(i) |2 1 3 5; 4 −2 7 6; −8 3 1 0; 5 7 2 −6|   (ii) |0 4 1 2; 5 3 7 8; 4 1 2 3; 1 2 5 5|
Solution.
(i) |2 1 3 5; 4 −2 7 6; −8 3 1 0; 5 7 2 −6|
= (1/2^2) |−4−4 14−12 12−20; 6+8 2+24 0+40; 14−5 4−15 −12−25|
= (1/4) |−8 2 −8; 14 26 40; 9 −11 −37|
= (2·2/4) |−4 1 −4; 7 13 20; 9 −11 −37|   (taking 2 common from each of the first two rows)
= (1/(−4)^(3−2)) |−52−7 −80+28; 44−9 148+36|
= −(1/4) |−59 −52; 35 184|
= −(1/4)(−59 × 184 − (−52) × 35) = −(1/4)(−10856 + 1820) = 2259
(ii) As the leading element is zero, interchanging the 1st and 2nd columns, we get
|0 4 1 2; 5 3 7 8; 4 1 2 3; 1 2 5 5| = −|4 0 1 2; 3 5 7 8; 1 4 2 3; 2 1 5 5|
= −(1/4^2) |20 25 26; 16 7 10; 4 18 16|
= −(4·2/16) |5 25 13; 4 7 5; 1 18 8|   (taking 4 common from the first column and 2 from the third)
= −(1/2)(1/5^(3−2)) |35−100 25−52; 90−25 40−13|
= −(1/10) |−65 −27; 65 27| = −(1/10)(−65 × 27 + 27 × 65) = 0
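The condensation results can be cross-checked numerically (a sketch assuming NumPy; not part of the notes):

```python
import numpy as np

D1 = np.array([[2, 1, 3, 5], [4, -2, 7, 6], [-8, 3, 1, 0], [5, 7, 2, -6]])
D2 = np.array([[0, 4, 1, 2], [5, 3, 7, 8], [4, 1, 2, 3], [1, 2, 5, 5]])

print(round(np.linalg.det(D1)))   # 2259, as in (i)
print(round(np.linalg.det(D2)))   # 0, as in (ii)
```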
Therefore, A^(−1) = (1/det(A)) Adj(A).
Example 2. Let A = [3 2 −1; 1 6 3; 2 −4 0]; find A^(−1).
Solution. det(A) = 3(12) − 2(−6) − (−16) = 64.
The matrix of cofactors is
C = [12 6 −16; 4 2 16; 12 −10 16] and adj(A) = C^T = [12 4 12; 6 2 −10; −16 16 16]
Thus, A^(−1) = (1/det(A)) adj(A) = (1/64)[12 4 12; 6 2 −10; −16 16 16]
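A quick check of Example 2 (a sketch assuming NumPy): multiplying A by adj(A)/det(A) should give the identity.

```python
import numpy as np

A = np.array([[3, 2, -1], [1, 6, 3], [2, -4, 0]])
adjA = np.array([[12, 4, 12], [6, 2, -10], [-16, 16, 16]])

print(round(np.linalg.det(A)))                  # 64
print(np.allclose(A @ adjA / 64, np.eye(3)))    # True
```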
2.4 ELEMENTARY TRANSFORMATIONS (ROW OPERATIONS)
Any one of the following operations on a matrix is called an elementary transformation.
i. Interchanging any two rows (or columns). This transformation is indicated by R_ij if the ith and jth rows are interchanged.
ii. Multiplication of the elements of any row R_i (or column) by a non-zero scalar quantity k, denoted k·R_i.
iii. Addition of a constant multiple of the elements of any row R_j to the corresponding elements of any other row R_i, denoted R_i + k·R_j.
The process of using row operations to get a matrix in REF is called GAUSSIAN ELIMINATION.
The process of using row operations to get a matrix in RREF is called GAUSS-JORDAN ELIMINATION.
The following matrices are in row echelon form:
(a) [1 4 −3 7; 0 1 6 2; 0 0 1 5]   (b) [1 1 0; 0 1 0; 0 0 0]   (c) [0 1 2 6 0; 0 0 1 −1 0; 0 0 0 0 1]
The following matrices are in reduced row echelon form:
(a) [1 0 0 4; 0 1 0 7; 0 0 1 −1]   (b) [1 0 0; 0 1 0; 0 0 1]   (c) [0 1 −2 0 1; 0 0 0 1 3; 0 0 0 0 0]
Example 3. Reduce the following matrix to upper triangular form (REF):
[1 2 3; 2 5 7; 3 1 2]
Solution.
[1 2 3; 2 5 7; 3 1 2] → [1 2 3; 0 1 1; 0 −5 −7]   (R2 − 2R1, R3 − 3R1) → [1 2 3; 0 1 1; 0 0 −2]   (R3 + 5R2)
Example 5. Transform [1 3 3; 2 4 10; 3 8 4] into a unit matrix (RREF).
Solution.
[1 3 3; 2 4 10; 3 8 4] → [1 3 3; 0 −2 4; 0 −1 −5]   (R2 − 2R1, R3 − 3R1)
→ [1 3 3; 0 1 −2; 0 −1 −5]   (−(1/2)R2)
→ [1 0 9; 0 1 −2; 0 0 −7]   (R1 − 3R2, R3 + R2)
→ [1 0 9; 0 1 −2; 0 0 1]   (−(1/7)R3)
→ [1 0 0; 0 1 0; 0 0 1]   (R1 − 9R3, R2 + 2R3)
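The elimination in Example 5 can be automated. The sketch below is a minimal Gauss-Jordan routine (the name `gauss_jordan` is ours; it assumes every pivot it meets is non-zero, as happens here, and does no row swapping):

```python
import numpy as np

def gauss_jordan(M):
    """Reduce a square matrix to RREF by scaling each pivot row to 1
    and clearing the pivot column above and below it."""
    M = M.astype(float).copy()
    n = M.shape[0]
    for i in range(n):
        M[i] = M[i] / M[i, i]                  # make the pivot 1
        for j in range(n):
            if j != i:
                M[j] = M[j] - M[j, i] * M[i]   # clear column i in row j
    return M

A = np.array([[1, 3, 3], [2, 4, 10], [3, 8, 4]])
print(gauss_jordan(A))   # reduces to the 3x3 identity, as in Example 5
```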
Page | 13
2.7 USING ROW OPERATION TO FIND INVERSE (Gauss – Jordan Method)
To find the inverse of an invertible matrix A, find a sequence of elementary row operations that reduces A to the identity and then perform that same sequence of operations on I_n to obtain A^(−1). We want to reduce A to the identity matrix by row operations and simultaneously apply these operations to I_n to produce A^(−1). To accomplish this, we adjoin the identity matrix to the right side of A, thereby producing a partitioned matrix of the form
[A | I_n]
Then we apply row operations to this matrix until the left side is reduced to I_n; these operations will convert the right side to A^(−1), so the final matrix will have the form
[I_n | A^(−1)].
Example 1. Find the inverse of A = [1 2 3; 2 5 3; 1 0 8]
Solution.
[1 2 3 | 1 0 0; 2 5 3 | 0 1 0; 1 0 8 | 0 0 1]
→ [1 2 3 | 1 0 0; 0 1 −3 | −2 1 0; 0 −2 5 | −1 0 1]   (R2 → −2R1 + R2, R3 → −R1 + R3)
→ [1 2 3 | 1 0 0; 0 1 −3 | −2 1 0; 0 0 −1 | −5 2 1]   (R3 → 2R2 + R3)
→ [1 2 3 | 1 0 0; 0 1 −3 | −2 1 0; 0 0 1 | 5 −2 −1]   (R3 → −R3)
→ [1 2 0 | −14 6 3; 0 1 0 | 13 −5 −3; 0 0 1 | 5 −2 −1]   (R1 → −3R3 + R1, R2 → 3R3 + R2)
→ [1 0 0 | −40 16 9; 0 1 0 | 13 −5 −3; 0 0 1 | 5 −2 −1]   (R1 → −2R2 + R1)
Therefore, A^(−1) = [−40 16 9; 13 −5 −3; 5 −2 −1]
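As a check of Example 1 (a sketch assuming NumPy), the inverse found by Gauss-Jordan elimination agrees with `numpy.linalg.inv`:

```python
import numpy as np

A = np.array([[1, 2, 3], [2, 5, 3], [1, 0, 8]])
Ainv = np.array([[-40, 16, 9], [13, -5, -3], [5, -2, -1]])

print(np.allclose(np.linalg.inv(A), Ainv))   # True
print(A @ Ainv)                              # the 3x3 identity
```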
Theorem. Let V be a vector space, u a vector in V, and k a scalar; then:
(a) 0u = 0
(b) k0 = 0
(c) (−1)u = −u
(d) If ku = 0, then k = 0 or u = 0
Proof:
(a) We can write
0u + 0u = (0 + 0)u = 0u
By axiom 5 the vector 0u has a negative, −0u . Adding this negative to both sides above
0u + 0u + (−0u) = 0u + (−0u)
yields 0u + 0 = 0
0u = 0
(b) By axiom 2, with u = 0 and v = 0, we have 0 + 0 = 0; then
k0 = k[0 + 0] = k0 + k0
By axiom 5 the vector k0 has a negative, −k0. Adding this negative to both sides above yields
k0 + (−k0) = [k0 + k0] + (−k0) = k0 + [k0 + (−k0)]
0 = k0
(c) To prove (−1)u = − u , we must show that u + (−1)u = 0 .
u + (−1)u = 1u + (−1)u = (1 + (−1))u
= 0u = 0
(d) Suppose ku = 0 and k ≠ 0; then there exists a scalar k^(−1) such that k^(−1) k = 1.
Hence, u = 1u = (k^(−1) k)u = k^(−1)(ku) = k^(−1) 0 = 0
3.2 SUBSPACES
Definition
A subset W of a vector space V is called a subspace of V if W is itself a vector space
under the addition and scalar multiplication defined on V .
3.2.1 Examples of Subspaces
Example 1. The Zero Subspace
If V is any vector space, and if W = {0} is the subset of V that consists of the zero vector only, then W is closed under addition and scalar multiplication since
0 + 0 = 0 and k0 = 0 for every scalar k.
We call W the zero subspace of V .
Example 2. Subspaces of M nn
We know that the sum of two symmetric n n matrices is symmetric and that a scalar
multiple of a symmetric n n matrix is symmetric. Thus, the set of symmetric n n matrices
is closed under addition and scalar multiplication and hence is a subspace of M nn .
Similarly, the sets of upper triangular matrices, lower triangular matrices, and diagonal
matrices are subspaces of M nn .
Example 1. Linear Combinations
Consider the vectors
u = (1, 2, − 1) and v = (6, 4, 2) in R3 .
Show that w = (9, 2, 7) is a linear combination of u and v , and that z = (4, − 1, 8) is
not a linear combination of u and v .
Solution. For w to be a linear combination of u and v , there must be scalars k1 and k2 such
that w = k1u + k2 v ; that is,
(9, 2, 7) = k1 (1, 2, − 1) + k2 (6, 4, 2) = ( k1 + 6k2 , 2k1 + 4k2 , − k1 + 2k2 )
Equating corresponding components gives
k1 + 6k2 = 9
2k1 + 4k2 = 2
−k1 + 2k2 = 7
Solving the system using Gaussian elimination gives k1 = −3, k2 = 2 , so
w = −3u + 2 v
Similarly, for z to be a linear combination of u and v , there must be scalars k1 and k2 such that
z = k1u + k2 v ; that is,
(4, − 1, 8) = k1 (1, 2, − 1) + k2 (6, 4, 2) = ( k1 + 6k2 , 2k1 + 4k2 , − k1 + 2k2 )
Equating corresponding components gives
k1 + 6k2 = 4
2k1 + 4k2 = − 1
−k1 + 2k2 = 8
This system of equations is inconsistent, so no such scalars k1 and k2 exist. Consequently, z is
not a linear combination of u and v .
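Both parts of Example 1 can be settled at once by least squares (a sketch assuming NumPy; not part of the notes): the combination exists exactly when the least-squares residual is zero.

```python
import numpy as np

u = np.array([1.0, 2.0, -1.0])
v = np.array([6.0, 4.0, 2.0])
M = np.column_stack([u, v])              # columns are u and v

k, *_ = np.linalg.lstsq(M, np.array([9.0, 2.0, 7.0]), rcond=None)
print(k)                                 # close to [-3, 2]: w = -3u + 2v
print(np.allclose(M @ k, [9, 2, 7]))     # True

k2, *_ = np.linalg.lstsq(M, np.array([4.0, -1.0, 8.0]), rcond=None)
print(np.allclose(M @ k2, [4, -1, 8]))   # False: z is not a combination
```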
Now, we check whether the system is consistent for all values of b1, b2, and b3. The system is consistent if and only if its coefficient matrix has a nonzero determinant. In this case the determinant is zero (verify), so v1, v2, and v3 do not span R^3.
of the polynomials p1 , p 2 , p3 , and p 4 . If we substitute the expressions for p1 , p 2 , p3 , and p 4
into the above equation and equate corresponding coefficients, we obtain the system
k1 + 5k2 + 3k3 = 0
−2k1 + 6k2 + 2k3 = 0
3k1 − k2 + k3 = 0
Solving the system gives
k1 = −(1/2)t, k2 = −(1/2)t, k3 = t.
This shows that the system has nontrivial solutions and hence the vectors are linearly
dependent.
Example 2. Linear Independence in R^4
Determine whether the vectors
v1 = (1, 2, 2, −1), v2 = (4, 9, 9, −4), v3 = (5, 8, 9, −5)
in R^4 are linearly dependent or linearly independent.
Solution. The linear independence or linear dependence of these vectors is determined by
whether there exist nontrivial solutions of the vector equation
k1 v1 + k2 v 2 + k3 v3 = 0
Or, equivalently,
k1 (1, 2, 2, − 1) + k2 (4, 9, 9, − 4) + k3 (5, 8, 9, − 5) = (0, 0, 0, 0)
Equating corresponding components on the two sides yields the homogeneous linear system
k1 + 4k2 + 5k3 = 0
2k1 + 9k2 + 8k3 = 0
2k1 + 9k2 + 9k3 = 0
−k1 − 4k2 − 5k3 = 0
The above system has only the trivial solution k1 = 0, k2 = 0, k3 = 0 from which we
conclude that v1 , v 2 , and v 3 are linearly independent.
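Equivalently (a sketch assuming NumPy), the homogeneous system has only the trivial solution exactly when the coefficient matrix has full column rank:

```python
import numpy as np

M = np.array([[1, 4, 5],
              [2, 9, 8],
              [2, 9, 9],
              [-1, -4, -5]])

# rank 3 = number of vectors -> only the trivial solution -> independent
print(np.linalg.matrix_rank(M))   # 3
```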
Example 3. Linear Independence of Polynomials
Determine whether the polynomials
p1 = 1 − x, p 2 = 5 + 3x − 2 x 2 , p3 = 1 + 3 x − x 2
are linearly dependent or linearly independent in P2 .
Solution. The linear independence or dependence of these vectors is determined by whether the
vector equation
k1p1 + k2p 2 + k3p3 = 0
Can be satisfied with coefficients that are not all zero. Writing the above equation in its
polynomial form, we have
(k1 + 5k2 + k3) + (−k1 + 3k2 + 3k3)x + (−2k2 − k3)x^2 = 0
The linear dependence or independence of the given polynomial hinges on whether the
following linear system has nontrivial solution:
k1 + 5k2 + k3 = 0
−k1 + 3k2 + 3k3 = 0
−2k2 − k3 = 0
Thus, the system has a nontrivial solution (verify), and therefore the set {p1, p2, p3} is linearly dependent.
Example 4. For which real values of λ do the following vectors form a linearly dependent set in R^3?
v1 = (λ, −1/2, −1/2), v2 = (−1/2, λ, −1/2), v3 = (−1/2, −1/2, λ)
Solution. Suppose there are constants a, b, and c such that
a(λ, −1/2, −1/2) + b(−1/2, λ, −1/2) + c(−1/2, −1/2, λ) = (0, 0, 0)
Equating corresponding components on the two sides yields the homogeneous linear system
[λ −1/2 −1/2; −1/2 λ −1/2; −1/2 −1/2 λ][a; b; c] = [0; 0; 0]
This system has a nontrivial solution exactly when the determinant of its coefficient matrix,
λ^3 − (3/4)λ − 1/4 = (λ − 1)(λ + 1/2)^2,
equals zero. This happens if and only if λ = 1 or λ = −1/2. Thus, the vectors are linearly dependent for these two values of λ and linearly independent for all other values.
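A numerical spot-check of Example 4 (a sketch assuming NumPy): the coefficient matrix is singular exactly at λ = 1 and λ = −1/2.

```python
import numpy as np

def coeff_det(lam):
    M = np.array([[lam, -0.5, -0.5],
                  [-0.5, lam, -0.5],
                  [-0.5, -0.5, lam]])
    return np.linalg.det(M)

print(abs(coeff_det(1.0)) < 1e-9)    # True: dependent at lam = 1
print(abs(coeff_det(-0.5)) < 1e-9)   # True: dependent at lam = -1/2
print(coeff_det(2.0))                # nonzero (about 6.25): independent
```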
Example 2. Find the rank of the following matrix; hence, reduce it to its normal form.
A = [1 2 −1 3; 4 1 2 1; 3 −1 1 2; 1 2 0 1]
Solution.
[1 2 −1 3; 4 1 2 1; 3 −1 1 2; 1 2 0 1]
→ [1 2 −1 3; 0 −7 6 −11; 0 −7 4 −7; 0 0 1 −2]   (R2 − 4R1, R3 − 3R1, R4 − R1)
→ [1 2 −1 3; 0 −7 6 −11; 0 0 −2 4; 0 0 1 −2]   (R3 − R2)
→ [1 2 −1 3; 0 −7 6 −11; 0 0 −2 4; 0 0 0 0]   (2R4 + R3)
→ [1 0 0 0; 0 −7 6 −11; 0 0 −2 4; 0 0 0 0]   (C2 − 2C1, C3 + C1, C4 − 3C1)
→ [1 0 0 0; 0 −7 0 0; 0 0 −2 4; 0 0 0 0]   (C3 + (6/7)C2, C4 − (11/7)C2)
→ [1 0 0 0; 0 1 0 0; 0 0 1 4; 0 0 0 0]   (−(1/7)C2, −(1/2)C3)
→ [1 0 0 0; 0 1 0 0; 0 0 1 0; 0 0 0 0]   (C4 − 4C3)
Therefore, the normal form of A is [I3 0; 0 0], and the rank of A is ρ(A) = 3.
Example 3.
Find the rank of the matrix
−1 2 3 −2
2 −5 1 2
3 −8 5 2
5 −12 −1 6
Solution.
[−1 2 3 −2; 2 −5 1 2; 3 −8 5 2; 5 −12 −1 6]
→ [−1 2 3 −2; 0 −1 7 −2; 0 −2 14 −4; 0 −2 14 −4]   (R2 + 2R1, R3 + 3R1, R4 + 5R1)
→ [−1 2 3 −2; 0 −1 7 −2; 0 0 0 0; 0 0 0 0]   (R3 − 2R2, R4 − 2R2)
Hence, rank = number of nonzero rows = 2.
Example 4.
Find the rank of A where
1 3 1 −2 −3
1 4 3 −1 −4
A=
2 3 −4 −7 −3
3 8 1 −7 −8
Solution. Reduce A to echelon form:
[1 3 1 −2 −3; 1 4 3 −1 −4; 2 3 −4 −7 −3; 3 8 1 −7 −8]
→ [1 3 1 −2 −3; 0 1 2 1 −1; 0 −3 −6 −3 3; 0 −1 −2 −1 1]   (R2 − R1, R3 − 2R1, R4 − 3R1)
→ [1 3 1 −2 −3; 0 1 2 1 −1; 0 0 0 0 0; 0 0 0 0 0]   (R3 + 3R2, R4 + R2)
Therefore, ρ(A) = 2.
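The ranks in Examples 2-4 can be confirmed in one stroke (a sketch assuming NumPy; not part of the notes):

```python
import numpy as np

A2 = np.array([[1, 2, -1, 3], [4, 1, 2, 1], [3, -1, 1, 2], [1, 2, 0, 1]])
A3 = np.array([[-1, 2, 3, -2], [2, -5, 1, 2], [3, -8, 5, 2], [5, -12, -1, 6]])
A4 = np.array([[1, 3, 1, -2, -3], [1, 4, 3, -1, -4],
               [2, 3, -4, -7, -3], [3, 8, 1, -7, -8]])

print(np.linalg.matrix_rank(A2))   # 3
print(np.linalg.matrix_rank(A3))   # 2
print(np.linalg.matrix_rank(A4))   # 2
```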
6.2 BASIS AND DIMENSION
Definition. If S = {v1, v2, …, vn} is a set of vectors in a finite-dimensional vector space V, then S is called a basis for V if S spans V and S is linearly independent.
Example. Show that S = {1, x, x^2, …, x^n} is a basis for the vector space Pn of polynomials of degree n or less.
Solution. We need to show that the polynomials in S span Pn and are linearly independent.
Let us denote these polynomials by
p0 = 1, p1 = x, p2 = x^2, …, pn = x^n
The polynomials p0, p1, …, pn span the vector space Pn since each polynomial p in Pn can be written as
p = a0 + a1 x + ⋯ + an x^n
which is a linear combination of 1, x, x^2, …, x^n. We can denote this by writing
Pn = span{1, x, x^2, …, x^n}
Next, we must show that the only coefficients satisfying the vector equation
a0 p0 + a1 p1 + a2 p2 + ⋯ + an pn = 0
Or equivalently,
a0 + a1 x + a2 x^2 + ⋯ + an x^n = 0
are
a0 = a1 = a2 = ⋯ = an = 0
Thus, the polynomials p0, p1, …, pn are linearly independent, and therefore they form a basis for Pn, which we call the standard basis for Pn.
Example. Show that the matrices
M1 = [1 0; 0 0], M2 = [0 1; 0 0], M3 = [0 0; 1 0], M4 = [0 0; 0 1]
form a basis for the vector space M22 of 2×2 matrices.
Solution. We must show that the matrices span M 22 and are linearly independent.
To prove that the matrices span M 22 , we need to show that every 2 2 matrix
a b
B=
c d
Can be expressed as
c1M 1 + c2 M 2 + c3 M 3 + c4 M 4 = B (1)
and to prove linear independence, we must show that the equation
c1M 1 + c2 M 2 + c3 M 3 + c4 M 4 = 0 (2)
has only the trivial solution, where 0 is the 2 2 zero matrix;
the matrix forms of equations (1) and (2) are
c1[1 0; 0 0] + c2[0 1; 0 0] + c3[0 0; 1 0] + c4[0 0; 0 1] = [a b; c d]
and
c1[1 0; 0 0] + c2[0 1; 0 0] + c3[0 0; 1 0] + c4[0 0; 0 1] = [0 0; 0 0]
which can be rewritten as
[c1 c2; c3 c4] = [a b; c d] and [c1 c2; c3 c4] = [0 0; 0 0]
The first equation is satisfied by c1 = a, c2 = b, c3 = c, c4 = d, so the matrices span M22; the second forces c1 = c2 = c3 = c4 = 0, so they are linearly independent. Hence M1, M2, M3, M4 form a basis for M22.
Example. Show that the vectors v1 = (1, 2, 1), v2 = (2, 9, 0), v3 = (3, 3, 4) form a basis for R^3.
Solution. We need to show that these vectors are linearly independent and span R^3.
Suppose there exist constants c1 , c2 , and c3 such that
c1 v1 + c2 v 2 + c3 v 3 = 0
by equating corresponding components on the two sides, we have the following system
c1 + 2c2 + 3c3 = 0
2c1 + 9c2 + 3c3 = 0
c1 + 4c3 = 0
Form the coefficient matrix and reduce it to echelon form:
[1 2 3; 2 9 3; 1 0 4] → [1 2 3; 0 5 −3; 0 −2 1]   (R2 − 2R1, R3 − R1) → [1 2 3; 0 5 −3; 0 0 −1/5]   (R3 + (2/5)R2)
Since the echelon matrix has no zero rows, the vectors are linearly independent, and so the vectors v1, v2, and v3 form a basis for R^3.
Definition. If S = {v1, v2, …, vn} is a basis for a vector space V, and
v = c1 v1 + c2 v2 + ⋯ + cn vn
is the expression for a vector v in terms of the basis S, then the scalars c1, c2, …, cn are called the coordinates of v relative to the basis S. The vector (c1, c2, …, cn) in R^n constructed from these coordinates is called the coordinate vector of v relative to S; it is denoted by
(v)_S = (c1, c2, …, cn)
Example 5. Consider the vectors
v1 = (1, 2, 1), v 2 = (2, 9, 0), v 3 = (3, 3, 4)
(a) Find the coordinate vector of v = (5, −1, 9) relative to the basis S = {v1, v2, v3}.
The nonzero rows (1, −2, 4, 1) and (0, 1, 1, −3) of the echelon matrix form a basis of the space generated by the coordinate vectors, and so the corresponding polynomials
t^3 − 2t^2 + 4t + 1 and t^2 + t − 3 form a basis of W. Thus, dim(W) = 2.
(c) F(mv + nw) = F(mv) + F(nw) = mF(v) + nF(w) for all scalars m, n ∈ K and any vectors v, w ∈ V.
Example 1.
Let F : R^3 → R be the mapping defined by F(x, y, z) = 2x − 3y + 4z. Show that F is a linear map.
Solution.
(a) Suppose v, w R 3 , where v = (a1 , a2 , a3 ) and w = (b1 , b2 , b3 ) , then
v + w = (a1 , a2 , a3 ) + (b1 , b2 , b3 ) = (a1 + b1 , a2 + b2 , a3 + b3 )
F (v + w) = F (a1 + b1 , a2 + b2 , a3 + b3 ) = 2(a1 + b1 ) − 3(a2 + b2 ) + 4(a3 + b3 )
= 2a1 + 2b1 − 3a2 − 3b2 + 4a3 + 4b3
= (2a1 − 3a2 + 4a3 ) + (2b1 − 3b2 + 4b3 )
= F (v) + F ( w)
(b) If k ∈ K and v ∈ R^3, then
kv = k (a1 , a2 , a3 ) = (ka1 , ka2 , ka3 )
F (kv) = F (ka1 , ka2 , ka3 ) = 2ka1 − 3ka2 + 4ka3 = k (2a1 − 3a2 + 4a3 )
= kF (v)
(c) If m, n ∈ K and v, w ∈ R^3, then mv + nw = (ma1 + nb1, ma2 + nb2, ma3 + nb3), so
F(mv + nw) = 2(ma1 + nb1) − 3(ma2 + nb2) + 4(ma3 + nb3) = m(2a1 − 3a2 + 4a3) + n(2b1 − 3b2 + 4b3) = mF(v) + nF(w)
Hence F is a linear map.
Definition. Image
Let F : V → U be a linear map. The image of F, denoted Im(F), is the set of all vectors in U of the form F(v), where v ∈ V.
That is, Im(F) = {u ∈ U : F(v) = u for some v ∈ V}.
Theorem. Let F : V → U be a linear map. Then
(a) dim(V) = nullity(F) + rank(F)
(b) dim(ker F) = nullity(F)
(c) dim(Im F) = rank(F)
(d) dim(V) = dim(ker F) + dim(Im F)
Example 1.
Let T : R^3 → R^3 be a linear mapping defined by T(x, y, z) = (x + 2y − z, y + z, x + y − 2z).
Find a basis and the dimension of:
(a) Image of T
(b) Kernel of T
Solution.
(a) Using the usual basis for R^3: e1 = (1, 0, 0), e2 = (0, 1, 0), e3 = (0, 0, 1). The image of the usual basis under T generates the image of T. Then:
T (e1 ) = T (1, 0, 0) = (1 + 0 + 0, 0 + 0, 1 + 0 − 0) = (1, 0, 1)
T (e2 ) = T (0, 1, 0) = (0 + 2 − 0, 1 + 0, 0 + 1 − 0) = (2, 1, 1)
T (e3 ) = T (0, 0, 1) = (0 + 0 − 1, 0 + 1, 0 + 0 − 2) = ( −1, 1, − 2)
We then form a matrix whose rows are the generators of image of T and then reduce
to echelon form. i.e.
[1 0 1; 2 1 1; −1 1 −2] → [1 0 1; 0 1 −1; 0 1 −1]   (R2 − 2R1, R3 + R1) → [1 0 1; 0 1 −1; 0 0 0]   (R3 − R2)
Thus, the nonzero rows (1, 0, 1) and (0, 1, −1) form a basis for the image of T. Hence
dim(Im T) = rank(T) = 2
(b) To determine the basis of kernel of T , we seek the set of all vectors ( x, y, z ) such that
T ( x, y, z ) = (0, 0, 0) , i.e.
T ( x, y, z ) = ( x + 2 y − z, y + z, x + y − 2 z ) = (0, 0, 0)
Set corresponding components equal each other, we have:
x + 2y − z = 0
y+z = 0
x + y − 2z = 0
Writing the system in matrix form and reducing it to echelon form, we have
[1 2 −1; 0 1 1; 1 1 −2] → [1 2 −1; 0 1 1; 0 −1 −1]   (R3 − R1) → [1 2 −1; 0 1 1; 0 0 0]   (R3 + R2)
The nonzero rows are (1, 2, −1) and (0, 1, 1), which correspond to
x + 2y − z = 0 (1)
y+z = 0 (2)
From equation (2): y = −z (3)
Put (3) in (1): x = 3z
Then the vector
[x; y; z] = [3z; −z; z] = z[3; −1; 1]
Therefore, the kernel of T is the subspace of R^3 spanned by (3, −1, 1).
That is, {(3, −1, 1)} is a basis of ker T, and so the nullity of T is 1. Hence
dim(V) = nullity(T) + rank(T) = 1 + 2 = 3
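Example 1 can be verified through the standard matrix of T (a sketch assuming NumPy; rows are the coefficients of x, y, z in each component of T):

```python
import numpy as np

# standard matrix of T(x, y, z) = (x + 2y - z, y + z, x + y - 2z)
M = np.array([[1, 2, -1],
              [0, 1, 1],
              [1, 1, -2]])

rank = np.linalg.matrix_rank(M)
nullity = M.shape[1] - rank
print(rank, nullity)               # 2 1, and rank + nullity = dim(R^3) = 3
print(M @ np.array([3, -1, 1]))    # [0 0 0]: (3, -1, 1) lies in ker T
```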
Example 2.
Consider the map F : R^4 → R^2 defined by F(x, y, z, w) = (2x + y + z + w, x + z − w). Find a basis and the dimension of:
(a) Image of F
(b) Kernel of F
Solution.
(a) Using the usual basis for R^4:
e1 = (1, 0, 0, 0), e2 = (0, 1, 0, 0), e3 = (0, 0, 1, 0), e4 = (0, 0, 0, 1).
The image of the usual basis under F generates the image of F: F(e1) = (2, 1), F(e2) = (1, 0), F(e3) = (1, 1), F(e4) = (1, −1). Then:
[2 1; 1 0; 1 1; 1 −1] → [1 0; 2 1; 1 1; 1 −1]   (R1 ↔ R2) → [1 0; 0 1; 0 1; 0 −1]   (R2 − 2R1, R3 − R1, R4 − R1) → [1 0; 0 1; 0 0; 0 0]   (R3 − R2, R4 + R2)
Thus, the nonzero rows (1, 0) and (0, 1) form a basis for the image of F. Hence,
dim(Im F) = rank(F) = 2.
(b) To determine a basis of the kernel of F, we seek the set of all vectors (x, y, z, w) ∈ R^4 such that F(x, y, z, w) = 0. That is,
(2 x + y + z + w, x + z − w) = (0, 0)
Equating corresponding components gives
2x + y + z + w = 0 (1)
x +z−w = 0 (2)
From (2), we have
x = w− z (3)
Substituting (3) in (1), we obtain
y = z − 3w (4)
Now, z and w are free variables, they can be chosen arbitrarily. Let z = t and w = h .
such that the vectors
[x; y; z; w] = [−t + h; t − 3h; t; h] = t[−1; 1; 1; 0] + h[1; −3; 0; 1]
The vectors (−1, 1, 1, 0) and (1, −3, 0, 1) form a basis of the kernel of F, and therefore the nullity of F is 2. Hence,
dim(V) = nullity(F) + rank(F) = 2 + 2 = 4
8.1 MATRICES AND LINEAR OPERATORS
Let T be a linear operator on a vector space V over a field K, and suppose {e1, e2, …, en} is a basis of V. Now T(e1), T(e2), …, T(en) are vectors in V, and so each is a linear combination of the elements of the basis {ei}:
T(e1) = a11 e1 + a12 e2 + ⋯ + a1n en
T(e2) = a21 e1 + a22 e2 + ⋯ + a2n en
⋯
[T]_e = [3 −4; 1 5]
Example 2.
Find the matrix representation of the operator T ( x, y) = (5x + y, 3x − 2 y) relative to the basis
f1 = (1, 2) and f 2 = (2, 3) .
Solution.
T ( x, y) = (5x + y, 3x − 2 y)
Now,
T ( f1 ) = T (1, 2) = (7, − 1)
Writing T ( f1 ) as a linear combination of the f i ' s using scalars a and b , that is:
(7, − 1) = af1 + bf 2 = a(1, 2) + b(2, 3)
= (a + 2b, 2a + 3b)
Equating corresponding components, we have
a + 2b = 7 (1)
2a + 3b = −1 (2)
Solving these two equations, we obtain:
a = − 23, b = 15
Similarly,
T ( f 2 ) = T (2, 3) = (13, 0)
Writing T ( f 2 ) as a linear combination of the f i ' s using scalars c and d , that is:
(13, 0) = cf1 + df 2 = c(1, 2) + d (2, 3)
= (c + 2d , 2c + 3d )
Equating corresponding components, we have
c + 2d = 13 (1)
2c + 3d = 0 (2)
Solving these equations, we have:
c = −39, d = 26
Thus, the coefficient matrix is
[a b; c d] = [−23 15; −39 26]
and, since the coordinate vectors of T(f1) and T(f2) form the columns of the matrix representation,
[T]_f = [−23 −39; 15 26]
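The computation of [T]_f can be organized as two linear solves (a sketch assuming NumPy; the coordinates of T(fi) in the basis {f1, f2} form the ith column):

```python
import numpy as np

F = np.column_stack([(1, 2), (2, 3)])   # basis vectors f1, f2 as columns

def T(v):
    x, y = v
    return np.array([5 * x + y, 3 * x - 2 * y])

# solve F c = T(fi) for the coordinate vector of T(fi) in the basis
cols = [np.linalg.solve(F, T(f)) for f in ((1, 2), (2, 3))]
Tf = np.column_stack(cols)
print(Tf)   # [[-23, -39], [15, 26]] up to float formatting
```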