
Linear Algebra


Dr. Suresh Kumar, Department of Mathematics, BITS-Pilani, Pilani Campus


Note: Some concepts of Linear Algebra are briefly described here just to help the students. Therefore,
the following study material is expected to be useful but not exhaustive for the Mathematics-II course. For
a detailed study, the students are advised to attend the lecture/tutorial classes regularly, and to consult the
textbook prescribed in the handout of the course.

Chapter 2 (2.1-2.4)
Elementary row operations
There are three elementary row operations:
(1) Interchanging two rows Ri and Rj (symbolically written as Ri <-> Rj)
(2) Multiplying a row Ri by a non-zero number k (symbolically written as Ri -> kRi)
(3) Adding a constant multiple k of a row Rj to a row Ri (symbolically written as Ri -> Ri + kRj)
To see how row transformations are applied, consider the matrix

    [ 4  8 10 ]
A = [ 1  2  3 ].
    [ 3  5  6 ]

Applying R1 <-> R2, we obtain

    [ 1  2  3 ]
A ~ [ 4  8 10 ].
    [ 3  5  6 ]

Applying R2 -> (1/2)R2, we obtain

    [ 1  2  3 ]
A ~ [ 2  4  5 ].
    [ 3  5  6 ]

Applying R2 -> R2 - 2R1 and R3 -> R3 - 3R1, we get

    [ 1  2  3 ]
A ~ [ 0  0 -1 ].
    [ 0 -1 -3 ]
Note. The matrices resulting from the row transformation(s) are known as row equivalent matrices.
That is why the sign of equivalence (~) is used after applying row transformations. So we can write

    [ 4  8 10 ]   [ 1  2  3 ]   [ 1  2  3 ]   [ 1  2  3 ]
A = [ 1  2  3 ] ~ [ 4  8 10 ] ~ [ 2  4  5 ] ~ [ 0  0 -1 ].
    [ 3  5  6 ]   [ 3  5  6 ]   [ 3  5  6 ]   [ 0 -1 -3 ]
Notice that row equivalent matrices can be obtained from each other by applying suitable row transformation(s), but row equivalent matrices need not be equal.
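The three elementary row operations above can be sketched as small helper functions; this is an illustrative sketch with our own function names (not the course's), run on the matrix A of the example with exact rational arithmetic.

```python
# Sketch of the three elementary row operations, applied to the example matrix A.
from fractions import Fraction

def swap(M, i, j):             # Ri <-> Rj
    M[i], M[j] = M[j], M[i]

def scale(M, i, k):            # Ri -> k*Ri, with k != 0
    M[i] = [k * x for x in M[i]]

def add_multiple(M, i, j, k):  # Ri -> Ri + k*Rj
    M[i] = [x + k * y for x, y in zip(M[i], M[j])]

A = [[Fraction(x) for x in row] for row in [[4, 8, 10], [1, 2, 3], [3, 5, 6]]]
swap(A, 0, 1)                  # R1 <-> R2
scale(A, 1, Fraction(1, 2))    # R2 -> (1/2)R2
add_multiple(A, 1, 0, -2)      # R2 -> R2 - 2R1
add_multiple(A, 2, 0, -3)      # R3 -> R3 - 3R1
```

Running the four operations in the order used in the text reproduces the final row equivalent matrix of the example.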


Row Echelon Form


A matrix is said to be in row echelon form[1] (REF) if
(i) all the zero rows, if any, lie at the bottom, and
(ii) the leading entry (the first non-zero entry) in any row is 1, and its column lies to the right of the
column of the leading entry of the preceding row.
In addition, if all the entries above the leading entries are 0, then the matrix is said to be in reduced
row echelon form (RREF).

    [ 1 1 3 ]   [ 1 1 3 ]
Ex. [ 0 1 5 ],  [ 0 0 1 ],  [ 1 1 3 ]   all are in REF.
    [ 0 0 1 ]   [ 0 0 0 ]   [ 0 0 1 ]

    [ 1 0 0 ]   [ 1 3 0 ]   [ 1 0 3 ]
Ex. [ 0 1 0 ],  [ 0 0 1 ],  [ 0 1 5 ]   all are in RREF.
    [ 0 0 1 ]   [ 0 0 0 ]   [ 0 0 0 ]

The following example illustrates how we find the REF and RREF of a given matrix.
Ex. Find the RREF of the matrix

    [ 2 4 5 ]
A = [ 1 2 3 ].
    [ 3 5 6 ]

Sol. We have

    [ 2 4 5 ]
A = [ 1 2 3 ].
    [ 3 5 6 ]

Applying R1 <-> R2, we obtain

    [ 1 2 3 ]
A ~ [ 2 4 5 ].
    [ 3 5 6 ]

Applying R2 -> R2 - 2R1 and R3 -> R3 - 3R1, we get

    [ 1  2  3 ]
A ~ [ 0  0 -1 ].
    [ 0 -1 -3 ]

Applying R2 <-> R3, we obtain

    [ 1  2  3 ]
A ~ [ 0 -1 -3 ].
    [ 0  0 -1 ]

Applying R2 -> -R2 and R3 -> -R3, we get

    [ 1 2 3 ]
A ~ [ 0 1 3 ].  Notice that this is an REF of A.
    [ 0 0 1 ]
[1] Dictionary meaning of echelon: A formation of troops in which each unit is positioned successively to the left or
right of the rear unit to form an oblique or steplike line.


Applying R2 -> R2 - 3R3 and R1 -> R1 - 3R3, we get

    [ 1 2 0 ]
A ~ [ 0 1 0 ].
    [ 0 0 1 ]

Finally, applying R1 -> R1 - 2R2, we get

    [ 1 0 0 ]
A ~ [ 0 1 0 ],
    [ 0 0 1 ]

the RREF of A.
Useful Tip: From the above example, one may notice that for getting RREF of a matrix we make use
of first row to make zeros in the first column, second row to make zeros in the second column and so on.
Note: The REF of a matrix is not unique, but its RREF is unique.
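The procedure in the tip above (use the first row to clear the first pivot column, the second row to clear the second, and so on) can be sketched as a short routine; `rref` is our own illustrative name, and Fraction arithmetic keeps every entry exact.

```python
# Sketch of reduction to RREF: pick a pivot, scale it to 1, clear its column.
from fractions import Fraction

def rref(M):
    M = [[Fraction(x) for x in row] for row in M]
    rows, cols = len(M), len(M[0])
    r = 0
    for c in range(cols):
        # find a pivot in column c at or below row r
        p = next((i for i in range(r, rows) if M[i][c] != 0), None)
        if p is None:
            continue
        M[r], M[p] = M[p], M[r]
        M[r] = [x / M[r][c] for x in M[r]]      # make the leading entry 1
        for i in range(rows):                   # clear the rest of column c
            if i != r and M[i][c] != 0:
                M[i] = [a - M[i][c] * b for a, b in zip(M[i], M[r])]
        r += 1
        if r == rows:
            break
    return M

A = [[2, 4, 5], [1, 2, 3], [3, 5, 6]]
R = rref(A)
```

For the example above, the routine returns the 3x3 identity matrix, in agreement with the hand computation.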

Inverse of a Matrix
Let A be a matrix with m rows, say R1, R2, ..., Rm, and B be a matrix with n columns, say C1, C2, ..., Cn.
Then the product matrix AB is of order m x n, and

       [ R1 ]                     [ R1C1 R1C2 ... R1Cn ]
       [ R2 ]                     [ R2C1 R2C2 ... R2Cn ]
A.B =  [ .. ] [C1 C2 ... Cn]  =   [ ...  ...  ...  ... ]  = AB.
       [ Rm ]                     [ RmC1 RmC2 ... RmCn ]

If we interchange two rows in A say R1 R2 , then the first two rows of AB also get interchanged.
Similarly, it is easy to see that applying any of the other two row operations in A is equivalent to applying
the same row operation in AB. Thus, we conclude that applying any elementary row operation in the
matrix A is equivalent to applying the same elementary row operation in the matrix AB. Hence, if R is
any row operation, then R(AB) = R(A)B. Note that the matrix B is left unchanged. We make use of
this fact to find the inverse of a matrix.
Let A be a given non-singular matrix of order n x n. To find the inverse of A, first we write A = In A.
In this identity, we apply elementary row operations on the left-hand-side matrix A in such a way
that it transforms to In, the RREF of A. As discussed above, the same row operations apply to the
first matrix In on the right-hand side; suppose it transforms to a matrix B. Then we have In = BA.
Therefore, A^(-1) = B. This method for obtaining the inverse of a matrix is called the Gauss-Jordan method.
Note: One may also use elementary column operations (Ci <-> Cj, Ci -> kCi and Ci -> Ci + kCj) to find
A^(-1). In this case, we write A = AIn and apply elementary column operations to obtain In = AB, so that
A^(-1) = B. It may be noted that we cannot mix row and column operations when
finding A^(-1) by the Gauss-Jordan method.
Ex. Use the Gauss-Jordan method to find the inverse of the matrix

    [ 2 4 5 ]
A = [ 1 2 3 ].
    [ 3 5 6 ]

Sol. We write A = I3 A, and therefore

[ 2 4 5 ]   [ 1 0 0 ]
[ 1 2 3 ] = [ 0 1 0 ] A.
[ 3 5 6 ]   [ 0 0 1 ]

Applying R1 <-> R2, we obtain

[ 1 2 3 ]   [ 0 1 0 ]
[ 2 4 5 ] = [ 1 0 0 ] A.
[ 3 5 6 ]   [ 0 0 1 ]

Applying R2 -> R2 - 2R1 and R3 -> R3 - 3R1, we get

[ 1  2  3 ]   [ 0  1 0 ]
[ 0  0 -1 ] = [ 1 -2 0 ] A.
[ 0 -1 -3 ]   [ 0 -3 1 ]

Applying R2 <-> R3, we obtain

[ 1  2  3 ]   [ 0  1 0 ]
[ 0 -1 -3 ] = [ 0 -3 1 ] A.
[ 0  0 -1 ]   [ 1 -2 0 ]

Applying R2 -> -R2 and R3 -> -R3, we get

[ 1 2 3 ]   [  0 1  0 ]
[ 0 1 3 ] = [  0 3 -1 ] A.
[ 0 0 1 ]   [ -1 2  0 ]

Applying R2 -> R2 - 3R3 and R1 -> R1 - 3R3, we get

[ 1 2 0 ]   [  3 -5  0 ]
[ 0 1 0 ] = [  3 -3 -1 ] A.
[ 0 0 1 ]   [ -1  2  0 ]

Finally, applying R1 -> R1 - 2R2, we get

[ 1 0 0 ]   [ -3  1  2 ]
[ 0 1 0 ] = [  3 -3 -1 ] A.
[ 0 0 1 ]   [ -1  2  0 ]

Hence

         [ -3  1  2 ]
A^(-1) = [  3 -3 -1 ].
         [ -1  2  0 ]

Useful Tip: To find the inverse of A, first write A = In A. Then change the left-hand-side matrix A to its
RREF by applying suitable row transformations, so that In = BA and A^(-1) = B.
Note: You might be familiar with the fact that the inverse of a square matrix A exists if and only if A is non-singular,
that is, |A| ≠ 0. Note that the RREF of a non-singular matrix is always a unit matrix.
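The Gauss-Jordan method above can be sketched compactly: row-reduce the augmented matrix [A | I] until the left half becomes I, and read A^(-1) from the right half. The name `inverse` is ours, and the sketch assumes A is non-singular.

```python
# Sketch of the Gauss-Jordan inverse: reduce [A | I] so the left block becomes I.
from fractions import Fraction

def inverse(A):
    n = len(A)
    # augment A with the identity matrix
    M = [[Fraction(x) for x in row] + [Fraction(int(i == j)) for j in range(n)]
         for i, row in enumerate(A)]
    for c in range(n):
        p = next(i for i in range(c, n) if M[i][c] != 0)  # assumes A non-singular
        M[c], M[p] = M[p], M[c]
        M[c] = [x / M[c][c] for x in M[c]]   # leading entry -> 1
        for i in range(n):                   # clear the rest of column c
            if i != c and M[i][c] != 0:
                M[i] = [a - M[i][c] * b for a, b in zip(M[i], M[c])]
    return [row[n:] for row in M]

A = [[2, 4, 5], [1, 2, 3], [3, 5, 6]]
Ainv = inverse(A)
```

For the worked example this reproduces the hand-computed inverse, and multiplying A by the result gives the identity matrix.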
Note: You know that when a system of n linear equations in n variables is represented in the matrix
form AX = B, then A is the n x n matrix of the coefficients of the variables, and the solution of the system
reads as X = A^(-1)B, provided A^(-1) exists. If the number of equations is not equal to the number of
variables, then A is not a square matrix, and therefore A^(-1) is not defined. In what follows, we present
a general strategy for solving a system of linear equations. First we introduce the concept of the rank of a
matrix.

Linear Algebra

Dr. Suresh Kumar, BITS-Pilani

Rank of a Matrix
Let A be a matrix of order m x n. Then the rank of A, denoted by rank(A), is defined as the number of
non-zero rows in the REF of the matrix A.

Ex. Find the rank of the matrix

    [ 2 4 5 ]
A = [ 1 2 3 ].
    [ 3 5 6 ]

Sol. Applying suitable row transformations, we obtain

    [ 2 4 5 ]   [ 1 2 3 ]
A = [ 1 2 3 ] ~ [ 0 1 3 ].
    [ 3 5 6 ]   [ 0 0 1 ]

We see that the REF of A carries three non-zero rows. So the rank of A is 3.


Useful Tip: To find rank of a matrix A, find REF of A. Then rank of A is the number of non-zero rows
in REF of A.
Additional Information: The number of non-zero rows in the REF of A is, in fact, defined as the row rank
of A. Similarly, the number of non-zero rows in the REF of A' (the transpose of A) is defined as the column rank
of A. Further, it can be established that the row rank is always equal to the column rank. It follows that
the rank of A is less than or equal to the minimum of m (the number of rows in A) and n (the number of columns in
A).
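The tip above amounts to forward elimination followed by counting pivot rows; a minimal sketch (the name `rank` is ours) with exact Fraction arithmetic:

```python
# Sketch of rank as the number of non-zero (pivot) rows in a row echelon form.
from fractions import Fraction

def rank(M):
    M = [[Fraction(x) for x in row] for row in M]
    r = 0
    for c in range(len(M[0])):
        p = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if p is None:
            continue
        M[r], M[p] = M[p], M[r]
        for i in range(r + 1, len(M)):          # eliminate below the pivot
            f = M[i][c] / M[r][c]
            M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
        if r == len(M):
            break
    return r   # number of pivot rows found
```

For the example above this gives rank 3; for a matrix whose rows are all multiples of one row it gives rank 1.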

Solution of a System of Linear Equations


A system of m linear equations in n unknown variables x1 , x2 ,...., xn is given by
a11 x1 + a12 x2 + ........... + a1n xn = b1 ,
a21 x1 + a22 x2 + ........... + a2n xn = b2 ,
..........................
am1 x1 + am2 x2 + ........... + amn xn = bm .
Here aij and bi (i = 1, 2, ..., m and j = 1, 2, ..., n) are constants. The matrix form of this system of
linear equations reads as
AX = B,
where

    [ a11 a12 ... a1n ]       [ x1 ]           [ b1 ]
A = [ a21 a22 ... a2n ],  X = [ x2 ]  and  B = [ b2 ].
    [ ... ... ... ... ]       [ .. ]           [ .. ]
    [ am1 am2 ... amn ]       [ xn ]           [ bm ]

The system AX = B is said to be non-homogeneous provided B ≠ O. Further, AX = O is called a
homogeneous system. The matrix

[ a11 a12 ... a1n  b1 ]
[ a21 a22 ... a2n  b2 ]
[ ... ... ... ...  .. ],
[ am1 am2 ... amn  bm ]


which is formed by inserting the column of matrix B next to the columns of A, is known as augmented
matrix of the matrices A and B. We shall denote it by [A : B]. The following theorem tells us about
the consistency of the system AX = B.
Theorem: The system AX = B of linear equations has a
(i) unique solution if rank(A) = rank([A : B]) = n,
(ii) infinitely many solutions if rank(A) = rank([A : B]) < n,
(iii) no solution if rank(A) ≠ rank([A : B]).
From this theorem, we deduce the following.
If B = O, then obviously rank(A) = rank([A : B]). It implies that the homogeneous system AX = O
always has at least one solution. Further, it has the unique trivial solution X = O if rank(A) = n,
and infinitely many solutions if rank(A) < n.
To find rank([A : B]), we find the REF of the augmented matrix [A : B]. From the REF of [A : B], we
can immediately read off rank(A). Then, using the above theorem, we decide the nature of the
solution of the system. In case a solution exists, it can be derived using the REF of the matrix
[A : B], as illustrated in the following example.
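The theorem can be sketched as a small classifier that compares rank(A), rank([A : B]) and the number of unknowns n; the helper names are ours.

```python
# Sketch of the consistency theorem: classify AX = B by comparing ranks.
from fractions import Fraction

def rank(M):
    M = [[Fraction(x) for x in row] for row in M]
    r = 0
    for c in range(len(M[0])):
        p = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if p is None:
            continue
        M[r], M[p] = M[p], M[r]
        for i in range(r + 1, len(M)):          # eliminate below the pivot
            f = M[i][c] / M[r][c]
            M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
        if r == len(M):
            break
    return r

def classify(A, B):
    n = len(A[0])                               # number of unknowns
    aug = [row + [b] for row, b in zip(A, B)]   # the augmented matrix [A : B]
    ra, raug = rank(A), rank(aug)
    if ra != raug:
        return "no solution"
    return "unique solution" if ra == n else "infinitely many solutions"

# the system of the next worked example: 2x+3y+4z=11, x+5y+7z=15, 3x+11y+13z=25
verdict = classify([[2, 3, 4], [1, 5, 7], [3, 11, 13]], [11, 15, 25])
```

Here `verdict` comes out as "unique solution", matching case (i) of the theorem.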
Ex. Test the consistency of the following system of equations and find the solution, if it exists.
2x + 3y + 4z = 11,
x + 5y + 7z = 15,
3x + 11y + 13z = 25.

Sol. Considering the matrix form AX = B of the given system, we have X = [x, y, z]' and the augmented
matrix

          [ 2  3  4 11 ]
[A : B] = [ 1  5  7 15 ].
          [ 3 11 13 25 ]
Applying R1 <-> R2, we obtain

          [ 1  5  7 15 ]
[A : B] ~ [ 2  3  4 11 ].
          [ 3 11 13 25 ]

Applying R2 -> R2 - 2R1 and R3 -> R3 - 3R1, we have

          [ 1  5   7  15 ]
[A : B] ~ [ 0 -7 -10 -19 ].
          [ 0 -4  -8 -20 ]

Applying R2 -> (-1/7)R2, we have

          [ 1  5    7    15 ]
[A : B] ~ [ 0  1  10/7 19/7 ].
          [ 0 -4   -8   -20 ]

Applying R3 -> R3 + 4R2, we have

          [ 1 5    7     15  ]
[A : B] ~ [ 0 1  10/7   19/7 ].
          [ 0 0 -16/7  -64/7 ]

Applying R3 -> (-7/16)R3, we have

          [ 1 5   7    15  ]
[A : B] ~ [ 0 1 10/7  19/7 ].
          [ 0 0   1     4  ]
This is the REF of [A : B], which contains three non-zero rows. So rank([A : B])= 3. Also, we see that
REF of the matrix A contains three non-zero rows. So rank(A)= 3. Further, there are three variables in
the given system. So rank(A)=rank([A : B]) = 3. Hence, the given system of equations is consistent and
has a unique solution.
From the REF of [A : B], the given system of equations is equivalent to
x + 5y + 7z = 15,
y + (10/7)z = 19/7,
z = 4.
From the third equation, we have z = 4. Inserting z = 4 into the second equation, we obtain y = -3.
Finally, plugging z = 4 and y = -3 into the first equation, we get x = 2. Hence, the solution of the given
system is x = 2, y = -3 and z = 4.
Note: In the above example, first we found the REF of the matrix [A : B]. Then we wrote
the reduced system of equations and found the solution using back substitution. This approach is called
the Gauss Elimination Method. If we use the RREF of [A : B] to obtain the solution, the approach is
called the Gauss-Jordan Method. For illustration of this method, we start with the REF of the matrix
[A : B] as obtained above. We have

          [ 1 5   7    15  ]
[A : B] ~ [ 0 1 10/7  19/7 ].
          [ 0 0   1     4  ]

Applying R2 -> R2 - (10/7)R3 and R1 -> R1 - 7R3, we get

          [ 1 5 0 -13 ]
[A : B] ~ [ 0 1 0  -3 ].
          [ 0 0 1   4 ]

Applying R1 -> R1 - 5R2, we get

          [ 1 0 0  2 ]
[A : B] ~ [ 0 1 0 -3 ].
          [ 0 0 1  4 ]

The RREF of [A : B] yields x = 2, y = -3 and z = 4.
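The Gauss-Jordan route just taken can be sketched in a few lines: reduce [A : B] to RREF with exact Fractions and read the solution off the last column (`rref` is our own illustrative name).

```python
# Sketch of the Gauss-Jordan solve: RREF of the augmented matrix, then read x, y, z.
from fractions import Fraction

def rref(M):
    M = [[Fraction(x) for x in row] for row in M]
    r = 0
    for c in range(len(M[0])):
        p = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if p is None:
            continue
        M[r], M[p] = M[p], M[r]
        M[r] = [x / M[r][c] for x in M[r]]      # make the leading entry 1
        for i in range(len(M)):                 # clear the rest of the column
            if i != r and M[i][c] != 0:
                M[i] = [a - M[i][c] * b for a, b in zip(M[i], M[r])]
        r += 1
        if r == len(M):
            break
    return M

aug = [[2, 3, 4, 11], [1, 5, 7, 15], [3, 11, 13, 25]]
solution = [row[-1] for row in rref(aug)]       # [x, y, z]
```

The last column of the RREF reproduces the solution x = 2, y = -3, z = 4 obtained above.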
Ex. Test the consistency of the following system of equations:
x + y + 2z + w = 5,
2x + 3y - z - 2w = 2,
4x + 5y + 3z = 7.
Sol. Here the augmented matrix is

          [ 1 1  2  1 5 ]
[A : B] = [ 2 3 -1 -2 2 ].
          [ 4 5  3  0 7 ]


Using suitable row transformations (you can do it), we find

          [ 1 0  7  5 0 ]
[A : B] ~ [ 0 1 -5 -4 0 ].
          [ 0 0  0  0 1 ]

We see that rank([A : B]) = 3, but rank(A) = 2. So the given system of equations is inconsistent,
that is, it has no solution.
Note: From the above example, you can understand why there does not exist a solution
when rank(A) ≠ rank([A : B]). For, look at the reduced system of equations, which reads as
x + 7z + 5w = 0,
y - 5z - 4w = 0,
0 = 1.
Obviously, the third equation is absurd.
Also, notice that initially we are given three equations in four variables. An initial guess says that there
would exist infinitely many solutions of the system, but we find that there does not exist even a single
solution. So you should keep in mind that a system with more variables than equations need not
possess a solution.
Ex. Test the consistency of the following system of equations:
x + 2y + z = 1,
3x + y - 2z = -1,
y + λz = 1,
where λ is a constant.
Sol. Here the augmented matrix is

          [ 1 2  1  1 ]
[A : B] = [ 3 1 -2 -1 ].
          [ 0 1  λ  1 ]

Using suitable row transformations (you can do it), we find

          [ 1 2  1     1  ]
[A : B] ~ [ 0 1  1    4/5 ].
          [ 0 0 λ-1   1/5 ]

We see that rank([A : B]) = 3 irrespective of the value of λ, but rank(A) = 2 if λ = 1 and rank(A) = 3
if λ ≠ 1. So the given system of equations is consistent with rank([A : B]) = 3 = rank(A) when λ ≠ 1,
and possesses a unique solution in terms of λ. The reduced system of equations reads as
x + 2y + z = 1,
y + z = 4/5,
(λ - 1)z = 1/5,
which gives

z = 1/(5(λ - 1)),   y = 4/5 - 1/(5(λ - 1)),   x = -3/5 + 1/(5(λ - 1)).


Ex. Test the consistency of the following system of equations and find the solution, if it exists:
6x1 - 12x2 - 5x3 + 16x4 - 2x5 = -53,
-3x1 + 6x2 + 3x3 - 9x4 + x5 = 29,
-4x1 + 8x2 + 3x3 - 10x4 + x5 = 33.
Sol. Here the augmented matrix is

          [  6 -12 -5  16 -2 -53 ]
[A : B] = [ -3   6  3  -9  1  29 ].
          [ -4   8  3 -10  1  33 ]

Using suitable row transformations (you can do it), we find

          [ 1 -2 0  1 0 -4 ]
[A : B] ~ [ 0  0 1 -2 0  5 ].
          [ 0  0 0  0 1  2 ]

We see that rank([A : B]) = 3 = rank(A) < 5 (the number of variables in the system). So the given
system of equations has infinitely many solutions. The reduced system of equations is
x1 - 2x2 + x4 = -4,
x3 - 2x4 = 5,
x5 = 2.
The second and fourth columns in the RREF of [A : B] do not carry leading entries, and correspond to
the variables x2 and x4, which we treat as independent (free) variables. Let x2 = b and x4 = d. So from the
reduced system of equations, we get

x1 = 2b - d - 4,   x2 = b,   x3 = 2d + 5,   x4 = d,   x5 = 2.

Hence, the complete solution set is

{(x1, x2, x3, x4, x5) = (2b - d - 4, b, 2d + 5, d, 2) : b, d ∈ R}.
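A quick check (our own, with the equation signs as reconstructed here; minus signs are easily lost in typesetting) that the parametric family above satisfies all three original equations for several values of the free parameters b and d:

```python
# Substitute the parametric solution into each equation; every residual should be 0.
def residuals(b, d):
    x1, x2, x3, x4, x5 = 2*b - d - 4, b, 2*d + 5, d, 2
    return (6*x1 - 12*x2 - 5*x3 + 16*x4 - 2*x5 + 53,   # equation 1
            -3*x1 + 6*x2 + 3*x3 - 9*x4 + x5 - 29,      # equation 2
            -4*x1 + 8*x2 + 3*x3 - 10*x4 + x5 - 33)     # equation 3

checks = [residuals(b, d) for b in (-2, 0, 3) for d in (-1, 0, 5)]
```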

Row vectors, linear combinations and row space


The rows of a matrix are treated as row vectors. For example, consider the matrix

    [ 2  3  4 ]
A = [ 1  5  7 ].
    [ 3 11 13 ]

Then there are three row vectors given by [2, 3, 4], [1, 5, 7] and [3, 11, 13].
Sum of any scalar multiples of row vectors is called a linear combination while the set of all linear
combinations of the row vectors is called row space of the matrix.
For example, if a, b and c are any three real numbers, then the expression
a[2, 3, 4] + b[1, 5, 7] + c[3, 11, 13] = [2a + b + 3c, 3a + 5b + 11c, 4a + 7b + 13c]
is a linear combination of the vectors [2, 3, 4], [1, 5, 7] and [3, 11, 13], while the set
{[2a + b + 3c, 3a + 5b + 11c, 4a + 7b + 13c] : a, b, c ∈ R}

of all linear combinations of the row vectors of A is the row space of A.


Recall that two matrices are row equivalent if one can be derived from the other by applying suitable
row transformation(s). The row spaces of two row equivalent matrices are always the same. After all, a row
space is nothing but the set of all linear combinations of the row vectors of a matrix, and each row
operation produces rows that are themselves linear combinations of the original rows; hence row
equivalent matrices give rise to the same set of linear combinations, and so to the same row space.
This fact is quite useful for determining a simple form of the row space of a matrix.
We know the RREF of a matrix is row equivalent to it, and is unique as well. So we shall prefer to use the
RREF of the given matrix to write its row space. For example, consider the matrix

    [ 2  3  4 ]
A = [ 1  5  7 ].
    [ 3 11 13 ]

Its RREF is

[ 1 0 0 ]
[ 0 1 0 ].
[ 0 0 1 ]

So the row space of A reads as
{a[1, 0, 0] + b[0, 1, 0] + c[0, 0, 1] = [a, b, c] : a, b, c ∈ R}.
Ex. Determine whether the row vector [5, 17, -20] is in the row space of the matrix

    [  3  1 -2 ]
P = [  4  0  1 ].
    [ -2  4 -3 ]

Sol. We need to check whether there exist three real numbers a, b and c such that
[5, 17, -20] = a[3, 1, -2] + b[4, 0, 1] + c[-2, 4, -3].
This gives the following system of linear equations:
3a + 4b - 2c = 5,
a + 4c = 17,
-2a + b - 3c = -20.
Here the augmented matrix is

          [  3 4 -2   5 ]   [ 1 0 0  5 ]
[A : B] = [  1 0  4  17 ] ~ [ 0 1 0 -1 ].
          [ -2 1 -3 -20 ]   [ 0 0 1  3 ]

So we get a = 5, b = -1, c = 3, and
[5, 17, -20] = 5[3, 1, -2] - [4, 0, 1] + 3[-2, 4, -3].
Thus, [5, 17, -20] is a linear combination of the row vectors of P, and hence is in the row space of P.
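The membership test above can be sketched as solving for the coefficients a, b, c by reducing the augmented matrix of the system to RREF (entries as reconstructed here; `rref` is our own name).

```python
# Solve for the coefficients a, b, c of the linear combination exactly.
from fractions import Fraction

def rref(M):
    M = [[Fraction(x) for x in row] for row in M]
    r = 0
    for c in range(len(M[0])):
        p = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if p is None:
            continue
        M[r], M[p] = M[p], M[r]
        M[r] = [x / M[r][c] for x in M[r]]      # make the leading entry 1
        for i in range(len(M)):                 # clear the rest of the column
            if i != r and M[i][c] != 0:
                M[i] = [a - M[i][c] * b for a, b in zip(M[i], M[r])]
        r += 1
        if r == len(M):
            break
    return M

# columns: the coefficients of a, b, c; last column: the target vector's entries
aug = [[3, 4, -2, 5], [1, 0, 4, 17], [-2, 1, -3, -20]]
coeffs = [row[-1] for row in rref(aug)]         # [a, b, c]
```

The last column reproduces the coefficients a = 5, b = -1, c = 3 found above.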


Linearly independent row vectors


The rows, or row vectors, of a matrix are said to be linearly independent (LI) if no row vector
belongs to the row space of the remaining row vectors, that is, if no row vector is a linear combination
of the remaining row vectors. Equivalently, the row vectors are LI if their linear combination equals the zero vector
only when all the scalars are 0. Row vectors which are not LI are said to be linearly dependent (LD).
To determine the linear independence of the row vectors of a matrix, set a linear combination of the row
vectors equal to the zero vector, and determine the values of the scalars. If all the scalars must be 0, the vectors are LI.
Ex. Test the linear independence of the row vectors of the matrix

    [  3  1 -2 ]
P = [  4  0  1 ].
    [ -2  4 -3 ]

Sol. We need to find three real numbers a, b and c such that

a[3, 1, -2] + b[4, 0, 1] + c[-2, 4, -3] = [0, 0, 0].

This gives the following system of linear equations:
3a + 4b - 2c = 0,
a + 4c = 0,
-2a + b - 3c = 0.
Here the augmented matrix is

          [  3 4 -2 0 ]   [ 1 0 0 0 ]
[A : B] = [  1 0  4 0 ] ~ [ 0 1 0 0 ].
          [ -2 1 -3 0 ]   [ 0 0 1 0 ]

So we get a = 0, b = 0, c = 0. Thus, all the scalars are 0, which shows that the row vectors of the matrix
P are LI.
Ex. Test the linear independence of the row vectors of the matrix

    [ 3 1 2 ]
P = [ 4 0 1 ].
    [ 7 1 3 ]

Sol. We need to find three real numbers a, b and c such that
a[3, 1, 2] + b[4, 0, 1] + c[7, 1, 3] = [0, 0, 0].
This gives the following system of linear equations:
3a + 4b + 7c = 0,
a + c = 0,
2a + b + 3c = 0.
Here the augmented matrix is

          [ 3 4 7 0 ]   [ 1 0 1 0 ]
[A : B] = [ 1 0 1 0 ] ~ [ 0 1 1 0 ].
          [ 2 1 3 0 ]   [ 0 0 0 0 ]

So we get a = -c and b = -c, where c is arbitrary. Since c can be chosen non-zero, not all the scalars
need be 0, which shows that the row vectors of the matrix P are not LI. Notice that
the third row in P is the sum of the first two rows. That is why the row vectors of P are linearly dependent.
Note: We can talk about the linear independence of the rows of a matrix by looking at the rank of the
matrix as well. If rank of a matrix is equal to the number of its rows, then the rows of the matrix are
LI. At this stage, you should understand the interplay of row operations, RREF, rank, linear system of
equations and linear independence of rows.
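The rank test in the note above can be sketched directly: the rows are LI exactly when the rank equals the number of rows (matrix entries as reconstructed here; `rank` is our own name).

```python
# LI test via rank: rank == number of rows  <=>  rows linearly independent.
from fractions import Fraction

def rank(M):
    M = [[Fraction(x) for x in row] for row in M]
    r = 0
    for c in range(len(M[0])):
        p = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if p is None:
            continue
        M[r], M[p] = M[p], M[r]
        for i in range(r + 1, len(M)):          # eliminate below the pivot
            f = M[i][c] / M[r][c]
            M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
        if r == len(M):
            break
    return r

P1 = [[3, 1, -2], [4, 0, 1], [-2, 4, -3]]   # first example: rows LI
P2 = [[3, 1, 2], [4, 0, 1], [7, 1, 3]]      # second example: row3 = row1 + row2
```

Here rank(P1) = 3 (rows LI) while rank(P2) = 2 (rows LD), matching the two worked examples.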

Homework: Do the problems of exercises 2.1 to 2.4 from the textbook.


Chapter 3 (3.4)
Eigenvalues and Eigenvectors
A real number λ is an eigenvalue of an n-square matrix A iff there exists a non-zero n-vector X
such that AX = λX, or (A - λIn)X = 0. The non-zero vector X is called an eigenvector of A corresponding to the eigenvalue λ. Since the non-zero vector X is a non-trivial solution of the homogeneous system
(A - λIn)X = 0, we must have |A - λIn| = 0. This equation, known as the characteristic equation of A,
yields the eigenvalues of A. So to find the eigenvalues of A, we solve the equation |A - λIn| = 0.
The eigenvectors of A corresponding to λ are the non-trivial solutions (X ≠ 0) of the homogeneous
system (A - λIn)X = 0.
The set Eλ = {X : AX = λX} is known as the eigenspace of λ. Note that Eλ contains all eigenvectors
of A corresponding to the eigenvalue λ, in addition to the vector X = 0, since A0 = 0. Of course, by
definition X = 0 is not an eigenvector of A.

Ex. Find the eigenvalues and eigenvectors of

A = [ 12 -51 ].
    [  2 -11 ]

Sol. Here, the characteristic equation of A, that is, |A - λI2| = 0, reads as

| 12-λ   -51  |
|  2    -11-λ | = 0.

This leads to a quadratic equation in λ given by
λ^2 - λ - 30 = 0.
Its roots are λ = 6, -5, the eigenvalues of A.
Now, the eigenvectors corresponding to λ = 6 are the non-trivial solutions X of the homogeneous
system (A - 6I2)X = 0. So to find the eigenvectors of A corresponding to the eigenvalue λ = 6, we need to
solve the homogeneous system:

[ 6 -51 ] [ x1 ]   [ 0 ]
[ 2 -17 ] [ x2 ] = [ 0 ].

Applying R1 -> (1/6)R1, we get

[ 1 -17/2 ] [ x1 ]   [ 0 ]
[ 2  -17  ] [ x2 ] = [ 0 ].

Applying R2 -> R2 - 2R1, we get

[ 1 -17/2 ] [ x1 ]   [ 0 ]
[ 0    0  ] [ x2 ] = [ 0 ].

So the system reduces to x1 - (17/2)x2 = 0. Letting x2 = a, we get x1 = (17/2)a. So
[x1, x2] = [(17/2)a, a] = (1/2)a[17, 2].
So the eigenvectors corresponding to λ = 6 are non-zero multiples of the vector [17, 2]. The eigenspace
corresponding to λ = 6, therefore, is E6 = {a[17, 2] : a ∈ R}.
Likewise, to find the eigenvectors corresponding to λ = -5, we solve the homogeneous system
(A + 5I2)X = 0, that is,

[ 17 -51 ] [ x1 ]   [ 0 ]
[  2  -6 ] [ x2 ] = [ 0 ].


Applying R1 -> (1/17)R1 and R2 -> (1/2)R2, we obtain

[ 1 -3 ] [ x1 ]   [ 0 ]
[ 1 -3 ] [ x2 ] = [ 0 ].

Applying R2 -> R2 - R1, we have

[ 1 -3 ] [ x1 ]   [ 0 ]
[ 0  0 ] [ x2 ] = [ 0 ].

So the system reduces to x1 - 3x2 = 0. Letting x2 = a, we get x1 = 3a. So
[x1, x2] = [3a, a] = a[3, 1].
So the eigenvectors corresponding to λ = -5 are non-zero multiples of the vector [3, 1]. The eigenspace
corresponding to λ = -5, therefore, is E-5 = {a[3, 1] : a ∈ R}.
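The 2x2 computation above can be sketched directly from the characteristic quadratic t^2 - tr(A)t + det(A) = 0, followed by a check that AX = λX for the stated eigenvectors (matrix entries as reconstructed here).

```python
# Eigenvalues of a 2x2 matrix from the characteristic quadratic, plus a direct check.
A = [[12, -51], [2, -11]]

tr = A[0][0] + A[1][1]                        # trace
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]   # determinant
disc = tr * tr - 4 * det                      # discriminant of the quadratic
eigs = sorted([(tr - disc ** 0.5) / 2, (tr + disc ** 0.5) / 2])

def is_eigenpair(M, lam, X):
    # check M X = lam X componentwise (small tolerance for float arithmetic)
    return all(abs(sum(M[i][j] * X[j] for j in range(2)) - lam * X[i]) < 1e-9
               for i in range(2))
```

Here `eigs` comes out as the pair -5 and 6, and both [17, 2] (for λ = 6) and [3, 1] (for λ = -5) pass the eigenpair check.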

Ex. Find the eigenvalues and eigenvectors of

    [ -4  8 -12 ]
A = [  6 -6  12 ].
    [  6 -8  14 ]

Sol. Here, the characteristic equation of A, that is, |A - λI3| = 0, reads as

| -4-λ    8    -12  |
|   6   -6-λ    12  | = 0.
|   6    -8   14-λ  |

This leads to a cubic equation in λ given by
λ^3 - 4λ^2 + 4λ = 0.
Its roots are λ = 0, 2, 2, the eigenvalues of A.
Let us first find the eigenvectors of A corresponding to λ = 0. For this, we need to find the non-zero
solutions X of the homogeneous system (A - 0I3)X = 0, that is,

[ -4  8 -12 ] [ x1 ]   [ 0 ]
[  6 -6  12 ] [ x2 ] = [ 0 ].
[  6 -8  14 ] [ x3 ]   [ 0 ]

Using suitable row operations, we find

[ 1 0  1 ] [ x1 ]   [ 0 ]
[ 0 1 -1 ] [ x2 ] = [ 0 ].
[ 0 0  0 ] [ x3 ]   [ 0 ]

So the reduced system of equations is

x1 + x3 = 0,   x2 - x3 = 0.

Letting x3 = a, we get x1 = -a and x2 = a. So

[x1, x2, x3] = [-a, a, a] = a[-1, 1, 1].

Thus, the eigenvectors of A corresponding to λ = 0 are non-zero multiples of the vector X1 = [-1, 1, 1].
The eigenspace corresponding to λ = 0, therefore, is E0 = {aX1 : a ∈ R}.


Now, let us find the eigenvectors of A corresponding to λ = 2. For this, we need to find the non-zero
solutions X of the homogeneous system (A - 2I3)X = 0, that is,

[ -6  8 -12 ] [ x1 ]   [ 0 ]
[  6 -8  12 ] [ x2 ] = [ 0 ].
[  6 -8  12 ] [ x3 ]   [ 0 ]

Using suitable row operations, we find

[ 1 -4/3 2 ] [ x1 ]   [ 0 ]
[ 0   0  0 ] [ x2 ] = [ 0 ].
[ 0   0  0 ] [ x3 ]   [ 0 ]

So the system reduces to
x1 - (4/3)x2 + 2x3 = 0.
Letting x2 = a and x3 = b, we get x1 = (4/3)a - 2b. So
[x1, x2, x3] = [(4/3)a - 2b, a, b] = [(4/3)a, a, 0] + [-2b, 0, b] = (1/3)a[4, 3, 0] + b[-2, 0, 1].
Thus, the eigenvectors corresponding to λ = 2 are non-trivial linear combinations of the vectors X2 =
[4, 3, 0] and X3 = [-2, 0, 1]. So E2 = {aX2 + bX3 : a, b ∈ R} is the eigenspace corresponding to λ = 2.
Note: The algebraic multiplicity of an eigenvalue is defined as the number of times it repeats. In the
above example, the eigenvalue λ = 2 repeats two times, so its algebraic multiplicity is 2. Also, we get two
linearly independent eigenvectors X2 = [4, 3, 0] and X3 = [-2, 0, 1] corresponding to λ = 2. The following example shows that there may not exist as many linearly independent eigenvectors as the algebraic
multiplicity of an eigenvalue.

Ex. If

    [ 0 1 0 ]
A = [ 0 0 1 ],
    [ 0 0 0 ]

then the eigenvalues of A are λ = 0, 0, 0. The eigenvectors corresponding to λ = 0
are non-zero multiples of the vector X = [1, 0, 0]. The eigenspace corresponding to λ = 0, therefore, is
E0 = {a[1, 0, 0] : a ∈ R}. Please try this example yourself. Notice that there is only one linearly independent eigenvector X = [1, 0, 0] corresponding to the repeated eigenvalue (repeating thrice) λ = 0.
Note: One can easily prove the following properties of eigenvalues.
(i) The sum of the eigenvalues of a matrix A is equal to the trace of A, that is, the sum of the diagonal elements of A.
(ii) The product of the eigenvalues of a matrix A is equal to the determinant of A. It further implies that the determinant of a matrix vanishes iff at least one eigenvalue of the matrix is 0.
(iii) If λ is an eigenvalue of A, then λ^m is an eigenvalue of A^m for any positive integer m; 1/λ is an eigenvalue
of the inverse A^(-1) (provided A is non-singular); and λ - k is an eigenvalue of A - kI for any real number k.
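A quick numeric check (our own) of properties (i) and (ii) for the 3x3 example above, with the matrix entries as reconstructed here; its eigenvalues are 0, 2, 2.

```python
# Verify: sum of eigenvalues = trace, product of eigenvalues = determinant.
A = [[-4, 8, -12], [6, -6, 12], [6, -8, 14]]
eigs = [0, 2, 2]

trace = sum(A[i][i] for i in range(3))   # should equal 0 + 2 + 2 = 4

def det3(M):
    # cofactor expansion along the first row
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
          - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
          + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))
```

Since one eigenvalue is 0, the determinant must vanish, illustrating the remark in property (ii).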

Diagonalization
A square matrix A is said to be similar to a matrix B if there exists a non-singular matrix P such that
P 1 AP = B. In case, B is a diagonal matrix, we say that A is a diagonalizable matrix. Thus, a square
matrix A is diagonalizable if there exists a non-singular matrix P such that P 1 AP = D, where D is a
diagonal matrix.


Suppose an n-square matrix A has n linearly independent eigenvectors X1, X2, ..., Xn corresponding
to the eigenvalues λ1, λ2, ..., λn. Let P = [X1 X2 ... Xn]. Then we have

                                                                [ λ1  0 ...  0 ]
AP = [AX1 AX2 ... AXn] = [λ1X1 λ2X2 ... λnXn] = [X1 X2 ... Xn]  [  0 λ2 ...  0 ]  = PD.
                                                                [ .. .. ... .. ]
                                                                [  0  0 ... λn ]

This shows that if we construct P from the eigenvectors of A, then A is diagonalizable, and P^(-1)AP = D has
the eigenvalues of A at the diagonal places.
Note: If A has n distinct eigenvalues, then it can be proved that there exist n linearly independent
eigenvectors of A, and consequently A is diagonalizable. However, there may exist n linearly independent
eigenvectors even if A has repeated eigenvalues, as we have seen earlier. Such a matrix is also, of course,
diagonalizable. If A does not have n linearly independent eigenvectors, it is not diagonalizable.






Ex. If

A = [ 12 -51 ],  then  P = [ 17 3 ]  and  P^(-1)AP = [ 6  0 ].  (Verify!)
    [  2 -11 ]             [  2 1 ]                  [ 0 -5 ]

Ex. If

    [ -4  8 -12 ]            [ 4 -2 -1 ]                 [ 2 0 0 ]
A = [  6 -6  12 ],  then P = [ 3  0  1 ]  and P^(-1)AP = [ 0 2 0 ].  (Verify!)
    [  6 -8  14 ]            [ 0  1  1 ]                 [ 0 0 0 ]
Note: If A is a diagonalizable matrix, that is, P^(-1)AP = D or A = PDP^(-1), then for any positive integer
n, we have A^n = P D^n P^(-1). For,
A^2 = (PDP^(-1))^2 = PDP^(-1)PDP^(-1) = PD^2P^(-1).
Likewise, A^3 = PD^3P^(-1). So in general, A^n = PD^nP^(-1).
This result can be utilized to evaluate powers of a diagonalizable matrix easily.

Ex. Determine A^2, where

    [ -4  8 -12 ]
A = [  6 -6  12 ].
    [  6 -8  14 ]

Sol. Here

    [ 4 -2 -1 ]             [  1 -1  2 ]          [ 2 0 0 ]
P = [ 3  0  1 ],   P^(-1) = [  3 -4  7 ]  and D = [ 0 2 0 ].
    [ 0  1  1 ]             [ -3  4 -6 ]          [ 0 0 0 ]

So

                     [ 4 -2 -1 ] [ 4 0 0 ] [  1 -1  2 ]   [ -8  16 -24 ]
A^2 = PD^2P^(-1) =   [ 3  0  1 ] [ 0 4 0 ] [  3 -4  7 ] = [ 12 -12  24 ].
                     [ 0  1  1 ] [ 0 0 0 ] [ -3  4 -6 ]   [ 12 -16  28 ]
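The computation A^2 = P D^2 P^(-1) above can be sketched with a plain triple-loop matrix product (matrices as reconstructed here; `matmul` is our own helper).

```python
# Compute A^2 two ways: via the diagonalization and directly, and compare.
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

A    = [[-4, 8, -12], [6, -6, 12], [6, -8, 14]]
P    = [[4, -2, -1], [3, 0, 1], [0, 1, 1]]
Pinv = [[1, -1, 2], [3, -4, 7], [-3, 4, -6]]
D2   = [[4, 0, 0], [0, 4, 0], [0, 0, 0]]   # D^2, where D = diag(2, 2, 0)

A2 = matmul(matmul(P, D2), Pinv)
```

The result agrees with multiplying A by itself directly, which is the point of the identity A^n = P D^n P^(-1).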

Homework: Do exercise 3.4 from the textbook.


Chapter 4 (4.1-4.7)
Before going through the concept of real vector space, you must be familiar with the following axioms
(mathematical statements without proof) satisfied by the real numbers.
1. Closure property of addition: The sum of any two real numbers is a real number, that is, a, b ∈ R
implies a + b ∈ R. We also say that R is closed with respect to addition, or that the real numbers satisfy the
closure property with respect to addition.
2. Commutative property of addition: Real numbers are commutative in addition, that is, a + b =
b + a for all a, b ∈ R.
3. Associative property of addition: Real numbers are associative in addition, that is, a + (b + c) =
(a + b) + c for all a, b, c ∈ R.
4. Additive identity: The real number 0 is the additive identity of the real numbers, that is, a + 0 =
a = 0 + a for all a ∈ R.
5. Additive inverse: An additive inverse exists for every real number. Given any a ∈ R, we have -a ∈ R
such that a + (-a) = 0 = (-a) + a. So -a is the additive inverse of a.
6. Closure property of multiplication: The product of any two real numbers is a real number, that
is, a, b ∈ R implies a.b ∈ R. We also say that R is closed with respect to multiplication, or that the real
numbers satisfy the closure property with respect to multiplication.
7. Commutative property of multiplication: Real numbers are commutative in multiplication,
that is, a.b = b.a for all a, b ∈ R.
8. Associative property of multiplication: Real numbers are associative in multiplication, that
is, a.(b.c) = (a.b).c for all a, b, c ∈ R.
9. Multiplicative identity: The real number 1 is the multiplicative identity of the real numbers, that is,
a.1 = a = 1.a for all a ∈ R.
10. Multiplicative inverse: A multiplicative inverse exists for every non-zero real number. Given any
non-zero a ∈ R, we have 1/a ∈ R such that a.(1/a) = 1 = (1/a).a. So 1/a is the multiplicative inverse
of a.
11. Multiplication is distributive over addition:
For any a, b, c ∈ R, we have a.(b + c) = a.b + a.c (Left Distributive Law) and
(b + c).a = b.a + c.a (Right Distributive Law).
Note: In the real numbers, division is only right distributive over addition. For example,
(4 + 5)/7 = 4/7 + 5/7,
but
7/(4 + 5) ≠ 7/4 + 7/5.


Real Vector Space

A non-empty set V is said to be a real vector space if there are defined two operations, called vector
addition and scalar multiplication and denoted by ⊕ and ⊙ respectively, such that for all u, v, w ∈ V and
a, b ∈ R, the following properties are satisfied:
1. u ⊕ v ∈ V (Closure property)
2. u ⊕ v = v ⊕ u (Commutative property)
3. (u ⊕ v) ⊕ w = u ⊕ (v ⊕ w) (Associative property)
4. There exists some element 0 ∈ V such that u ⊕ 0 = u = 0 ⊕ u. (Existence of additive identity)
5. There exists -u ∈ V such that u ⊕ (-u) = 0 = (-u) ⊕ u. (Existence of additive inverse)
6. a ⊙ u ∈ V
7. a ⊙ (u ⊕ v) = a ⊙ u ⊕ a ⊙ v
8. (a + b) ⊙ u = a ⊙ u ⊕ b ⊙ u
9. (ab) ⊙ u = a ⊙ (b ⊙ u)
10. 1 ⊙ u = u
Note: Elements of the vector space V are called vectors, while those of R are called scalars. In what
follows, a vector space shall mean a real vector space.
Note: (i) Any scalar multiplied with the zero vector gives the zero vector, that is, a ⊙ 0 = 0. For clarity,
let us use + and . symbols in place of ⊕ and ⊙ respectively. Then we have
a.0 = a.0 + 0 = a.0 + a.0 + (-(a.0)) = a.(0 + 0) + (-(a.0)) = a.0 + (-(a.0)) = 0.
(ii) The scalar 0 multiplied with any vector gives the zero vector, that is, 0 ⊙ u = 0. For,
0.u = (0 + 0).u = 0.u + 0.u, and adding -(0.u) to both sides gives 0.u = 0.
(iii) (-1) ⊙ u gives the additive inverse -u of u. For,
(-1).u + u = (-1).u + 1.u = (-1 + 1).u = 0.u = 0.
Ex. The set R of all real numbers is a vector space with respect to the following operations:
u ⊕ v = u + v, (Vector Addition)
a ⊙ u = au, (Scalar Multiplication)
for all a, u, v ∈ R.
Sol. In this case, V = R, and all the properties of a vector space are easily verifiable using the axioms
satisfied by the real numbers.
Note: The set V = {0}, carrying only one real number namely 0, is a real vector space with respect to
the operations mentioned in the above example. Think! It is easy!
Ex. The set R^2 = R x R = {[x1, x2] : x1, x2 ∈ R} of all ordered pairs of real numbers is a vector space
with respect to the following operations:
[x1, x2] ⊕ [y1, y2] = [x1 + y1, x2 + y2], (Vector Addition)
a ⊙ [x1, x2] = [ax1, ax2], (Scalar Multiplication)
for all a ∈ R and [x1, x2], [y1, y2] ∈ R^2.
Sol. Let u = [x1, x2], v = [y1, y2] and w = [z1, z2] be members of V = R2, and a, b be any two real
numbers. Then we have the following properties:
1. Closure Property:
u ⊕ v = [x1, x2] ⊕ [y1, y2] = [x1 + y1, x2 + y2] ∈ R2 since x1 + y1 and x2 + y2 are real numbers.
2. Commutative Property:
u ⊕ v = [x1, x2] ⊕ [y1, y2] = [x1 + y1, x2 + y2],
v ⊕ u = [y1, y2] ⊕ [x1, x2] = [y1 + x1, y2 + x2].
But [x1 + y1, x2 + y2] = [y1 + x1, y2 + x2] since addition of real numbers is commutative.
∴ u ⊕ v = v ⊕ u.
3. Associative Property:
(u ⊕ v) ⊕ w = [x1 + y1, x2 + y2] ⊕ [z1, z2] = [(x1 + y1) + z1, (x2 + y2) + z2],
u ⊕ (v ⊕ w) = [x1, x2] ⊕ [y1 + z1, y2 + z2] = [x1 + (y1 + z1), x2 + (y2 + z2)].
Since addition of real numbers is associative, we have
[(x1 + y1) + z1, (x2 + y2) + z2] = [x1 + (y1 + z1), x2 + (y2 + z2)].
It implies that (u ⊕ v) ⊕ w = u ⊕ (v ⊕ w).
4. Existence of identity:
There exists 0 = [0, 0] ∈ R2 such that
u ⊕ 0 = [x1, x2] ⊕ [0, 0] = [x1 + 0, x2 + 0] = [x1, x2] = u,
0 ⊕ u = [0, 0] ⊕ [x1, x2] = [0 + x1, 0 + x2] = [x1, x2] = u.
So u ⊕ 0 = u = 0 ⊕ u. Therefore [0, 0] is the additive identity in R2.
5. Existence of inverse: There exists −u = [−x1, −x2] ∈ R2 such that
u ⊕ (−u) = [x1, x2] ⊕ [−x1, −x2] = [x1 − x1, x2 − x2] = [0, 0] = 0,
(−u) ⊕ u = [−x1, −x2] ⊕ [x1, x2] = [−x1 + x1, −x2 + x2] = [0, 0] = 0.
So u ⊕ (−u) = 0 = (−u) ⊕ u. This shows that −u = [−x1, −x2] is the additive inverse of u = [x1, x2]
in R2.
6. a ⊙ u = a ⊙ [x1, x2] = [ax1, ax2] ∈ R2.

7. a ⊙ (u ⊕ v) = a ⊙ [x1 + y1, x2 + y2] = [a(x1 + y1), a(x2 + y2)] = [ax1 + ay1, ax2 + ay2],
a ⊙ u ⊕ a ⊙ v = [ax1, ax2] ⊕ [ay1, ay2] = [ax1 + ay1, ax2 + ay2].
∴ a ⊙ (u ⊕ v) = a ⊙ u ⊕ a ⊙ v.
8. (a + b) ⊙ u = [(a + b)x1, (a + b)x2] = [ax1 + bx1, ax2 + bx2],
a ⊙ u ⊕ b ⊙ u = [ax1, ax2] ⊕ [bx1, bx2] = [ax1 + bx1, ax2 + bx2].
∴ (a + b) ⊙ u = a ⊙ u ⊕ b ⊙ u.
9. (ab) ⊙ u = [(ab)x1, (ab)x2],
a ⊙ (b ⊙ u) = a ⊙ [bx1, bx2] = [a(bx1), a(bx2)].
But [(ab)x1, (ab)x2] = [a(bx1), a(bx2)] since multiplication of real numbers is associative.
So (ab) ⊙ u = a ⊙ (b ⊙ u).
10. 1 ⊙ u = 1 ⊙ [x1, x2] = [1.x1, 1.x2] = [x1, x2] = u.
Hence R2 is a real vector space.
Note: In general, the set Rn = {[x1, x2, ......, xn] : xi ∈ R, i = 1, 2, ..., n} of all ordered n-tuples of real
numbers is a vector space with respect to the following operations:
[x1, x2, ...., xn] ⊕ [y1, y2, ......, yn] = [x1 + y1, x2 + y2, ....., xn + yn], (Vector Addition)
a ⊙ [x1, x2, ....., xn] = [ax1, ax2, ......, axn], (Scalar Multiplication)
for all a ∈ R and [x1, x2, ....., xn], [y1, y2, ......, yn] ∈ Rn.
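As an aside (not part of the notes), the componentwise definitions mean that each vector space axiom in Rn reduces to the corresponding axiom of real arithmetic; a small Python sketch can check a couple of them numerically:

```python
def vec_add(u, v):
    """Vector addition in R^n: componentwise sum."""
    return [x + y for x, y in zip(u, v)]

def scal_mul(a, u):
    """Scalar multiplication in R^n: multiply each component by a."""
    return [a * x for x in u]

u, v = [1.0, 2.0, 3.0], [4.0, 5.0, 6.0]
# Commutativity of vector addition (property 2)
assert vec_add(u, v) == vec_add(v, u)
# Distributivity a(u + v) = au + av (property 7)
assert scal_mul(2.0, vec_add(u, v)) == vec_add(scal_mul(2.0, u), scal_mul(2.0, v))
```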
Ex. The set Mmn = {[aij]m×n : aij ∈ R} of all m × n matrices with real entries is a vector space with
respect to the following operations:
[aij]m×n ⊕ [bij]m×n = [aij + bij]m×n, (Vector Addition)
a ⊙ [aij]m×n = [aaij]m×n, (Scalar Multiplication)
for all a ∈ R and [aij]m×n, [bij]m×n ∈ Mmn. Notice that vector addition is the usual addition of matrices, and
scalar multiplication is the usual scalar multiplication of matrices.
Sol. Let u = [aij]m×n, v = [bij]m×n and w = [cij]m×n be members of V = Mmn, and a, b be any two real
numbers. Then we have the following properties:
1. Closure Property:
u ⊕ v = [aij]m×n ⊕ [bij]m×n = [aij + bij]m×n ∈ Mmn.
2. Commutative Property:
u ⊕ v = [aij]m×n ⊕ [bij]m×n = [aij + bij]m×n,
v ⊕ u = [bij]m×n ⊕ [aij]m×n = [bij + aij]m×n.
Since aij + bij = bij + aij, we have u ⊕ v = v ⊕ u.
3. Associative Property:
(u ⊕ v) ⊕ w = [aij + bij]m×n ⊕ [cij]m×n = [(aij + bij) + cij]m×n,
u ⊕ (v ⊕ w) = [aij]m×n ⊕ [bij + cij]m×n = [aij + (bij + cij)]m×n.
Since (aij + bij) + cij = aij + (bij + cij), we get (u ⊕ v) ⊕ w = u ⊕ (v ⊕ w).
4. Existence of identity:
There exists 0 = [0]m×n ∈ Mmn such that
u ⊕ 0 = [aij]m×n ⊕ [0]m×n = [aij + 0]m×n = [aij]m×n = u,
0 ⊕ u = [0]m×n ⊕ [aij]m×n = [0 + aij]m×n = [aij]m×n = u.
So u ⊕ 0 = u = 0 ⊕ u. Therefore, [0]m×n, the null matrix of order m × n, is the additive identity
in Mmn.

5. Existence of inverse: There exists −u = [−aij]m×n ∈ Mmn such that
u ⊕ (−u) = [aij]m×n ⊕ [−aij]m×n = [aij − aij]m×n = [0]m×n = 0,
(−u) ⊕ u = [−aij]m×n ⊕ [aij]m×n = [−aij + aij]m×n = [0]m×n = 0.
So u ⊕ (−u) = 0 = (−u) ⊕ u. This shows that −u = [−aij]m×n is the additive inverse of u = [aij]m×n
in Mmn.
6. a ⊙ u = a ⊙ [aij]m×n = [aaij]m×n ∈ Mmn.
7. a ⊙ (u ⊕ v) = a ⊙ [aij + bij]m×n = [a(aij + bij)]m×n = [aaij + abij]m×n,
a ⊙ u ⊕ a ⊙ v = [aaij]m×n ⊕ [abij]m×n = [aaij + abij]m×n.
∴ a ⊙ (u ⊕ v) = a ⊙ u ⊕ a ⊙ v.
8. (a + b) ⊙ u = [(a + b)aij]m×n = [aaij + baij]m×n,
a ⊙ u ⊕ b ⊙ u = [aaij]m×n ⊕ [baij]m×n = [aaij + baij]m×n.
∴ (a + b) ⊙ u = a ⊙ u ⊕ b ⊙ u.
9. (ab) ⊙ u = [(ab)aij]m×n,
a ⊙ (b ⊙ u) = a ⊙ [baij]m×n = [a(baij)]m×n.
But [(ab)aij]m×n = [a(baij)]m×n since multiplication of real numbers is associative.
So (ab) ⊙ u = a ⊙ (b ⊙ u).
10. 1 ⊙ u = [1.aij]m×n = [aij]m×n = u.
Hence Mmn is a real vector space.
Ex. The set Pn = {a0 + a1x + .... + anx^n : ai ∈ R} of all polynomials in x of degree at most n with
real coefficients is a vector space with respect to the following operations:
(a0 + a1x + .... + anx^n) ⊕ (b0 + b1x + .... + bnx^n) = (a0 + b0) + (a1 + b1)x + .... + (an + bn)x^n, (Vector Addition)
a ⊙ (a0 + a1x + .... + anx^n) = aa0 + aa1x + .... + aanx^n, (Scalar Multiplication)
for all a ∈ R and all polynomials in Pn. Notice that vector addition is the usual addition of polynomials, and scalar
multiplication is the usual scalar multiplication of polynomials.

Sol. Please do yourself following the procedure given in the previous example(s).
Ex. The set F = {f : f is a well-defined real valued function on [0, 1]} is a vector space with respect to
the following operations:
f ⊕ g = f + g, (Vector Addition)
a ⊙ f = af, (Scalar Multiplication)
for all a ∈ R and f, g ∈ F.
Sol. Please do yourself following the procedure given in the previous example(s).

Ex. Show that the set F = {f : f is a well-defined real valued function on [0, 1] and f(1/2) = 1} is not a
vector space with respect to the following operations:
f ⊕ g = f + g, (Vector Addition)
a ⊙ f = af, (Scalar Multiplication)
for all a ∈ R and f, g ∈ F.
Sol. Let f, g ∈ F. Then by definition of F, we have
f(1/2) = 1, g(1/2) = 1.
Next, by definition of vector addition, we find
(f ⊕ g)(1/2) = (f + g)(1/2) = f(1/2) + g(1/2) = 1 + 1 = 2 ≠ 1.
So f ⊕ g ∉ F. It means vector addition fails in F. Consequently, F is not a vector space.
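A quick numerical illustration (not from the notes; the two sample functions are hypothetical choices that satisfy f(1/2) = 1):

```python
f = lambda x: x + 0.5              # f(1/2) = 1, so f is in the set
g = lambda x: 2 * x * x + 0.5      # g(1/2) = 1, so g is in the set
h = lambda x: f(x) + g(x)          # the pointwise sum f + g

assert f(0.5) == 1 and g(0.5) == 1
assert h(0.5) == 2                 # (f+g)(1/2) = 2 != 1: the sum escapes the set
```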
Ex. The set R+ of positive real numbers is a vector space with respect to the operations:
u ⊕ v = uv (Vector Addition)
a ⊙ u = u^a (Scalar Multiplication)
for all a ∈ R and u, v ∈ R+. Here vector addition of u and v is defined by their product, while scalar
multiplication of a and u is defined by u raised to the power a. So be careful while verifying the properties.
Sol. Let u, v and w be members of V = R+, and a, b be any two real numbers. Note that u, v and w
are positive real numbers while a and b are any real numbers. Then we have the following properties:
1. Closure Property:
u ⊕ v = uv ∈ R+.
2. Commutative Property:
u ⊕ v = uv = vu = v ⊕ u.
3. Associative Property:
(u ⊕ v) ⊕ w = (uv) ⊕ w = (uv)w = u(vw) = u ⊕ (vw) = u ⊕ (v ⊕ w).
4. Existence of identity:
1 ∈ R+ (denoting the positive real number 1 by 0) such that
u ⊕ 0 = u ⊕ 1 = u.1 = u,
0 ⊕ u = 1 ⊕ u = 1.u = u.
So u ⊕ 0 = u = 0 ⊕ u. Therefore, the positive real number 1 is the additive identity in R+.
5. Existence of inverse:
Since u is a positive real number, 1/u is also a positive real number (denoting 1/u by −u) such
that
u ⊕ (−u) = u ⊕ (1/u) = u.(1/u) = 1 = 0,
(−u) ⊕ u = (1/u) ⊕ u = (1/u).u = 1 = 0.
So u ⊕ (−u) = 0 = (−u) ⊕ u. This shows that −u = 1/u is the additive inverse of u in R+.
6. a ⊙ u = u^a ∈ R+.
7. a ⊙ (u ⊕ v) = a ⊙ (uv) = (uv)^a = u^a v^a = u^a ⊕ v^a = a ⊙ u ⊕ a ⊙ v.
8. (a + b) ⊙ u = u^(a+b) = u^a u^b = u^a ⊕ u^b = a ⊙ u ⊕ b ⊙ u.
9. (ab) ⊙ u = u^(ab) = (u^b)^a = a ⊙ (u^b) = a ⊙ (b ⊙ u).

10. 1 ⊙ u = u^1 = u.
Hence R+ is a real vector space.
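To build intuition for these exotic operations, one can test a few of the axioms numerically (an illustration, not part of the notes):

```python
import math

add = lambda u, v: u * v           # "vector addition" in R+ is the product
mul = lambda a, u: u ** a          # "scalar multiplication" is exponentiation

u, v = 2.0, 5.0
a, b = 3.0, -1.5
assert math.isclose(add(u, v), add(v, u))                        # commutativity
assert math.isclose(mul(a + b, u), add(mul(a, u), mul(b, u)))    # (a+b).u = a.u "+" b.u
assert math.isclose(mul(a, mul(b, u)), mul(a * b, u))            # a.(b.u) = (ab).u
assert math.isclose(add(u, 1.0), u)                              # identity is the number 1
assert math.isclose(add(u, 1.0 / u), 1.0)                        # inverse of u is 1/u
```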
Ex. Show that the set R of real numbers is a vector space with respect to the operations:
u ⊕ v = (u^5 + v^5)^(1/5) (Vector Addition)
a ⊙ u = a^(1/5) u (Scalar Multiplication)
for all a, u, v ∈ R. Further, the principal fifth root is to be considered in both the operations.
Sol. Let u, v and w be members of V = R, and a, b be any two real numbers. Then we have the following
properties:
1. Closure Property:
u ⊕ v = (u^5 + v^5)^(1/5) ∈ R.
2. Commutative Property:
u ⊕ v = (u^5 + v^5)^(1/5) = (v^5 + u^5)^(1/5) = v ⊕ u.
3. Associative Property:
(u ⊕ v) ⊕ w = ((u^5 + v^5)^(1/5)) ⊕ w = ((u^5 + v^5) + w^5)^(1/5) = (u^5 + (v^5 + w^5))^(1/5) = u ⊕ (v^5 + w^5)^(1/5) =
u ⊕ (v ⊕ w).
4. Existence of identity:
0 ∈ R (denoting the real number 0 by 0) such that
u ⊕ 0 = (u^5 + 0^5)^(1/5) = u,
0 ⊕ u = (0^5 + u^5)^(1/5) = u.
So u ⊕ 0 = u = 0 ⊕ u. Therefore, the real number 0 is the additive identity in R.
5. Existence of inverse:
−u ∈ R such that
u ⊕ (−u) = (u^5 − u^5)^(1/5) = 0 = 0,
(−u) ⊕ u = (−u^5 + u^5)^(1/5) = 0 = 0.
So u ⊕ (−u) = 0 = (−u) ⊕ u. This shows that −u is the additive inverse of u in R.
6. a ⊙ u = a^(1/5) u ∈ R.
7. a ⊙ (u ⊕ v) = a ⊙ ((u^5 + v^5)^(1/5)) = a^(1/5)(u^5 + v^5)^(1/5) = (au^5 + av^5)^(1/5) = (a^(1/5)u) ⊕ (a^(1/5)v) = a ⊙ u ⊕ a ⊙ v.
8. (a + b) ⊙ u = (a + b)^(1/5) u = (au^5 + bu^5)^(1/5) = (a^(1/5)u) ⊕ (b^(1/5)u) = a ⊙ u ⊕ b ⊙ u.
9. (ab) ⊙ u = (ab)^(1/5) u = a^(1/5)(b^(1/5)u) = a^(1/5)(b ⊙ u) = a ⊙ (b ⊙ u).
10. 1 ⊙ u = 1^(1/5) u = u.
Hence R is a real vector space.
Ex. Show that the set R of real numbers is not a vector space with respect to the operations:
u ⊕ v = (u^5 + v^5)^(1/5) (Vector Addition)
a ⊙ u = au (Scalar Multiplication)
for all a, u, v ∈ R. Further, the principal fifth root is to be considered in vector addition.
Sol. Property 8 is not satisfied. Please do yourself.
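A numeric counterexample for property 8 (my illustration, assuming the operations above): with a = b = 1 and u = 1, (a + b) ⊙ u = 2 but a ⊙ u ⊕ b ⊙ u = 2^(1/5).

```python
add = lambda u, v: (u**5 + v**5) ** (1 / 5)   # fifth-root "vector addition" (u, v >= 0 here)
mul = lambda a, u: a * u                      # ordinary scalar multiplication

a, b, u = 1.0, 1.0, 1.0
lhs = mul(a + b, u)                # (a+b).u = 2
rhs = add(mul(a, u), mul(b, u))    # a.u "+" b.u = 2**(1/5) ~ 1.149
assert abs(lhs - rhs) > 0.5        # property 8 fails decisively
```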

Homework: Do exercise 4.1 from the textbook.

Subspace
If V is a vector space and W is a subset of V such that W is also a vector space under the same operations
as in V , then W is called a subspace of V .
Ex. The set W = {[x1, 0] : x1 ∈ R} is a vector space under the operations of vector addition and scalar
multiplication as we considered earlier in R2. Also, W is a subset of R2. So W is a subspace of R2.
Note: For checking W to be a subspace of a vector space V , we need to verify 10 properties of vector
space in W . The following theorem suggests that verification of two properties is enough.
Theorem: A subset W of a vector space V is a subspace of V if and only if W is closed with respect to
vector addition and scalar multiplication.
Proof: First consider that W is a subspace of V. Then by definition of subspace, W is a vector space. So
by definition of vector space, W is closed with respect to vector addition and scalar multiplication.
Conversely assume that the subset W of the vector space V is closed with respect to vector addition
and scalar multiplication. So u ⊕ v ∈ W and a ⊙ u ∈ W for all u, v ∈ W and a ∈ R. Choosing a = 0, we
get 0 ⊙ u ∈ W. But 0 ⊙ u = 0. So 0 ∈ W, that is, the additive identity exists in W. Again, choosing a = −1,
we get (−1) ⊙ u = −u ∈ W. So additive inverses exist in W. The commutative and associative properties
with regard to vector addition, and all the properties related to scalar multiplication, follow in W
for two reasons: (i) members of W are members of V, and (ii) V is a vector space.
Note: It is also easy to show that a subset W of a vector space V is a subspace of V if and only if
a ⊙ u ⊕ b ⊙ v ∈ W for all a, b ∈ R and u, v ∈ W.
Ex. Show that the set W = {[x1, 0] : x1 ∈ R} is a subspace of R2.
Sol. Let u = [x1, 0], v = [y1, 0] ∈ W and a ∈ R. Then we have
u ⊕ v = [x1, 0] ⊕ [y1, 0] = [x1 + y1, 0] ∈ W,
a ⊙ u = a ⊙ [x1, 0] = [ax1, 0] ∈ W.
This shows that W is closed with respect to vector addition and scalar multiplication. Hence W is a
subspace of R2.
Ex. The set W = {(a, b, a + 2b) : a, b ∈ R} is a subspace of R3.
Sol. Please do yourself.
Ex. Verify whether W = {[x, y] : x − y = 0, x, y ∈ R} is a subspace of R2.
Sol. Let u = [x1, x2], v = [y1, y2] ∈ W and a ∈ R. Then x1 − x2 = 0 and y1 − y2 = 0. We have
u ⊕ v = [x1, x2] ⊕ [y1, y2] = [x1 + y1, x2 + y2] ∈ W,
since (x1 + y1) − (x2 + y2) = (x1 − x2) + (y1 − y2) = 0. Also,
a ⊙ u = a ⊙ [x1, x2] = [ax1, ax2] ∈ W,

since ax1 − ax2 = a(x1 − x2) = a.0 = 0. This shows that W is closed with respect to vector addition and
scalar multiplication. Hence W is a subspace of R2.
Ex. Verify whether W = {[x, y] : y = x^2, x, y ∈ R} is a subspace of R2.
Sol. We have [1, 1], [2, 4] ∈ W but
[1, 1] ⊕ [2, 4] = [1 + 2, 1 + 4] = [3, 5] ∉ W,
since 5 ≠ 3^2. So W is not a subspace of R2.
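The same counterexample, run as a tiny membership check in Python (my illustration, not from the notes):

```python
in_W = lambda p: p[1] == p[0] ** 2   # membership test for W = {[x, y] : y = x^2}

u, v = (1, 1), (2, 4)
s = (u[0] + v[0], u[1] + v[1])       # u + v = (3, 5)
assert in_W(u) and in_W(v)           # both summands lie in W
assert not in_W(s)                   # but their sum does not: 5 != 9
```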



Ex. Verify whether W = {[p q; r s] : ps − qr ≠ 0, p, q, r, s ∈ R} is a subspace of M22.
Sol. We have u = [1 0; 0 1], v = [−1 0; 0 −1] ∈ W but
u ⊕ v = [1 0; 0 1] ⊕ [−1 0; 0 −1] = [1 − 1, 0 + 0; 0 + 0, 1 − 1] = [0 0; 0 0] ∉ W,
since [0 0; 0 0] is a singular matrix. So W is not a subspace of M22.


Ex. If A is an n-square matrix and λ is an eigenvalue of A, then the eigenspace Eλ = {X : AX = λX} of
λ is a subspace of Rn.
Sol. Let a ∈ R and X1, X2 ∈ Eλ. Then
A(X1 + X2) = AX1 + AX2 = λX1 + λX2 = λ(X1 + X2). So X1 + X2 ∈ Eλ.
Also, A(aX1) = aAX1 = aλX1 = λ(aX1). So aX1 ∈ Eλ.
Thus, Eλ is a subspace of Rn.
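A concrete check (my hypothetical example, not from the notes): A = [[2, 0], [0, 3]] has eigenvalue λ = 2 with eigenspace spanned by [1, 0], and sums and scalar multiples of eigenvectors stay inside it.

```python
def matvec(A, x):
    """Multiply matrix A by column vector x."""
    return [sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(A))]

A, lam = [[2, 0], [0, 3]], 2
x1, x2 = [1, 0], [5, 0]                       # both satisfy A X = 2 X
s = [a + b for a, b in zip(x1, x2)]
assert matvec(A, s) == [lam * c for c in s]   # X1 + X2 is still in E_lambda
t = [4 * c for c in x1]
assert matvec(A, t) == [lam * c for c in t]   # 4 X1 is still in E_lambda
```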
Note: If V is any vector space, then {0} and V are its trivial subspaces. Any other subspace is called a
proper subspace of V .
Ex. Let W1 and W2 be two subspaces of a vector space V.
(i) Show that W1 ∩ W2 is a subspace of V.
(ii) Give an example to show that W1 ∪ W2 need not be a subspace of V.
(iii) Show that W1 ∪ W2 is a subspace of V iff either W1 ⊆ W2 or W2 ⊆ W1.
Sol. (i) Since W1 and W2 are subspaces of V, at least the zero vector 0 lies in W1 ∩ W2. So W1 ∩ W2 ≠ ∅.
Let u, v ∈ W1 ∩ W2, and a, b ∈ R. Then a ⊙ u ⊕ b ⊙ v ∈ W1 and a ⊙ u ⊕ b ⊙ v ∈ W2 since W1 and W2
both are subspaces of V. It follows that a ⊙ u ⊕ b ⊙ v ∈ W1 ∩ W2. This shows that W1 ∩ W2 is a subspace of V.
(ii) W1 = {[x1, 0] : x1 ∈ R} and W2 = {[0, x2] : x2 ∈ R} both are subspaces of R2. Also,
[1, 0], [0, 1] ∈ W1 ∪ W2. But [1, 0] ⊕ [0, 1] = [1 + 0, 0 + 1] = [1, 1] ∉ W1 ∪ W2. So W1 ∪ W2 is not
closed with respect to vector addition, and consequently it is not a subspace of R2.
(iii) If W1 ⊆ W2 or W2 ⊆ W1, then W1 ∪ W2 = W2 or W1 ∪ W2 = W1. But W1 and W2 both are
subspaces of V. Thus, in both cases W1 ∪ W2 is a subspace of V.

Conversely assume that W1 ∪ W2 is a subspace of V. We need to prove that either W1 ⊆ W2 or
W2 ⊆ W1. On the contrary, assume that neither W1 ⊆ W2 nor W2 ⊆ W1. So there must exist some
u ∈ W1 and v ∈ W2 such that u ∉ W2 and v ∉ W1.
Now u, v ∈ W1 ∪ W2. So u ⊕ v ∈ W1 ∪ W2, by the closure property of vector addition in W1 ∪ W2. It
implies that either u ⊕ v ∈ W1 or u ⊕ v ∈ W2. Let u ⊕ v ∈ W1. Also, u ∈ W1, so −u ∈ W1. Then by the closure
property of vector addition in W1, we have (−u) ⊕ (u ⊕ v) = v ∈ W1, a contradiction since v ∉ W1.
The case u ⊕ v ∈ W2 similarly contradicts u ∉ W2. Thus, either W1 ⊆ W2 or W2 ⊆ W1.
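Part (ii) can be checked concretely with the two coordinate axes of R2 (an illustration, not from the notes):

```python
in_W1 = lambda p: p[1] == 0            # x-axis: {[x1, 0]}
in_W2 = lambda p: p[0] == 0            # y-axis: {[0, x2]}
in_union = lambda p: in_W1(p) or in_W2(p)

u, v = (1, 0), (0, 1)
s = (u[0] + v[0], u[1] + v[1])         # u + v = (1, 1)
assert in_union(u) and in_union(v)
assert not in_union(s)                 # the union is not closed under addition
```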
Note: Hereafter, we shall use the symbols + and . for vector addition and scalar multiplication
respectively in all vector spaces.

Homework: Do exercise 4.2 from the textbook.

Span of a Set
Let S be any subset of a vector space V. Then the set of all linear combinations of finitely many
members of S is called the span of S, and is denoted by span(S) or L(S). Therefore, we have
L(S) = span(S) = {a1v1 + a2v2 + ........ + anvn : ai ∈ R, vi ∈ S, i = 1, 2, ....., n}.
Ex. If S = {[1, 0], [0, 1]}, then L(S) = {a[1, 0] + b[0, 1] : a, b ∈ R} = {[a, b] : a, b ∈ R} = R2.
Note: Recall that the row space of a matrix is the set of all sums of scalar multiples of the row vectors of
the matrix. So the row space of a matrix is nothing but the span of its row vectors. Also, we know that the
row spaces of two row equivalent matrices are the same. This fact can be used to simplify the span of subsets
of the vector spaces Rn, Pn and Mmn, as illustrated in the following examples.
Ex. Show that the span of the set S = {[2, 3, 4], [1, 5, 7], [3, 11, 13]} is R3.
Sol. By definition, the span of the given set S reads as
L(S) = {a[2, 3, 4] + b[1, 5, 7] + c[3, 11, 13] : a, b, c ∈ R} = {[2a + b + 3c, 3a + 5b + 11c, 4a + 7b + 13c] : a, b, c ∈ R}.
For the simplified span, we use suitable row operations, and find
[2  3  4 ]     [1 0 0]
[1  5  7 ]  ~  [0 1 0]
[3 11 13]      [0 0 1]
∴ L(S) = {a[1, 0, 0] + b[0, 1, 0] + c[0, 0, 1] : a, b, c ∈ R} = {[a, b, c] : a, b, c ∈ R} = R3.
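Since the three rows reduce to the identity, an equivalent check is that their determinant is non-zero; a short exact computation (my illustration, using Fractions to avoid rounding):

```python
from fractions import Fraction as F

def det3(M):
    """Determinant of a 3x3 matrix by cofactor expansion along the first row."""
    (a, b, c), (d, e, f), (g, h, i) = M
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

S = [[F(2), F(3), F(4)], [F(1), F(5), F(7)], [F(3), F(11), F(13)]]
assert det3(S) == -16   # non-zero, so the rows are LI and span R^3
```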


Ex. Find the simplified span of the set S = {x^3 − 1, x^2 − x, x − 1} in P3.
Sol. By definition, the span of the given set S reads as
L(S) = {a(x^3 − 1) + b(x^2 − x) + c(x − 1) : a, b, c ∈ R} = {ax^3 + bx^2 + (c − b)x − a − c : a, b, c ∈ R}.
For the simplified span, we write the coefficients of the polynomials as row vectors, and then use suitable row
operations. We find
[1 0  0 −1]     [1 0 0 −1]
[0 1 −1  0]  ~  [0 1 0 −1]
[0 0  1 −1]     [0 0 1 −1]

∴ L(S) = {a(x^3 − 1) + b(x^2 − 1) + c(x − 1) : a, b, c ∈ R} = {ax^3 + bx^2 + cx − a − b − c : a, b, c ∈ R}.



 
 

Ex. Find the simplified span of the set S = {[1 1; 0 0], [0 0; 1 1], [1 0; 0 1]} in M22.
Sol. By definition, the span of the given set S reads as
L(S) = {a[1 1; 0 0] + b[0 0; 1 1] + c[1 0; 0 1] : a, b, c ∈ R} = {[a + c, a; b, b + c] : a, b, c ∈ R}.
For the simplified span, we write each matrix as a continuous row vector (writing both rows of each matrix
in one row), and then use suitable row operations. We find
[1 1 0 0]     [1 0 0  1]
[0 0 1 1]  ~  [0 1 0 −1]
[1 0 0 1]     [0 0 1  1]
∴ L(S) = {a[1 0; 0 1] + b[0 1; 0 −1] + c[0 0; 1 1] : a, b, c ∈ R} = {[a, b; c, a − b + c] : a, b, c ∈ R}.

Ex. The span of the empty set is defined to be the singleton set containing the zero vector, that is,
span(∅) = {0}.
Theorem: Let S be a subset of a vector space V. Then prove the following:
(i) L(S) is a subset of V.
(ii) L(S) is a subspace of V.
(iii) L(S) is the minimal subspace of V containing S.
Proof: (i) Let u ∈ L(S). Then
u = c1u1 + c2u2 + ..... + cmum
for some ci ∈ R and ui ∈ S, where i = 1, 2, ..., m. Now, c1 ∈ R and u1 ∈ S ⊆ V. So c1u1 ∈ V, by
property 6 (scalar multiplication) of vector space. Likewise c2u2 ∈ V. It follows that c1u1 + c2u2 ∈ V,
by property 1 (vector addition) of vector space. Thus, repeated use of the scalar multiplication and vector
addition properties of vector space yields
u = c1u1 + c2u2 + ..... + cmum ∈ V.
Therefore, L(S) is a subset of V.
(ii) Let u, v ∈ L(S) and a ∈ R. Then u and v both are linear combinations of finitely many members
of S:
u = c1u1 + c2u2 + ..... + cmum,
v = d1v1 + d2v2 + ....... + dnvn,
for some ci, dj ∈ R and ui, vj ∈ S, where i = 1, 2, ..., m and j = 1, 2, ...., n. It follows that
u + v = c1u1 + c2u2 + ..... + cmum + d1v1 + d2v2 + ....... + dnvn ∈ L(S),
au = (ac1)u1 + (ac2)u2 + ..... + (acm)um ∈ L(S).
This shows that L(S) is closed with respect to vector addition and scalar multiplication. Also, by part
(i), L(S) is a subset of V. So L(S) is a subspace of V.

(iii) For any u ∈ S, we can write u = 1.u. So every member of S can be written as a linear combination
of members of S. So S ⊆ L(S). In part (ii), we have shown that L(S) is a subspace of V. To show
that L(S) is the minimal subspace of V containing S, it remains to prove that L(S) ⊆ W, where W is any
subspace of V containing S.
Let u ∈ L(S). Then
u = c1u1 + c2u2 + ..... + cmum
for some ci ∈ R and ui ∈ S, where i = 1, 2, ..., m. Now W contains S. Also, W being a subspace of V is a
vector space. So by the properties of scalar multiplication and vector addition, it follows that
u = c1u1 + c2u2 + ..... + cmum ∈ W.
Thus, L(S) ⊆ W. This completes the proof.

Homework: Do exercise 4.3 from the textbook.

Linear Independence
A finite non-empty subset S = {v1 , v2 , ......, vn } of a vector space V is said to be linearly dependent (LD)
if and only if there exist real numbers a1 , a2 , ......, an not all zero such that a1 v1 + a2 v2 + .......... + an vn = 0.
If S is not LD, that is, a1 v1 + a2 v2 + .......... + an vn = 0 implies a1 = 0, a2 = 0, ......, an = 0, then S is
said to be linearly independent (LI).
Ex. The set S = {[1, 0], [0, 1]} is LI in R2 .
Sol. Here we have two vectors v1 = [1, 0] and v2 = [0, 1]. To check LI/LD, we let
a1v1 + a2v2 = 0
⟹ a1[1, 0] + a2[0, 1] = [0, 0]
⟹ [a1, 0] + [0, a2] = [0, 0]
⟹ [a1, a2] = [0, 0]
⟹ a1 = 0, a2 = 0.
Thus, the set S = {[1, 0], [0, 1]} is LI in R2.


Ex. The set S = {[1, 2], [2, 4]} is LD in R2.
Sol. Here we have two vectors v1 = [1, 2] and v2 = [2, 4]. To check LI/LD, we let
a1v1 + a2v2 = 0
⟹ a1[1, 2] + a2[2, 4] = [0, 0]
⟹ [a1, 2a1] + [2a2, 4a2] = [0, 0]
⟹ [a1 + 2a2, 2a1 + 4a2] = [0, 0]
⟹ a1 + 2a2 = 0, 2a1 + 4a2 = 0
⟹ a1 + 2a2 = 0.

We see a non-trivial solution, a1 = −2, a2 = 1. Thus, the set S = {[1, 2], [2, 4]} is LD in R2.
Note: In fact, a1 + 2a2 = 0 has infinitely many non-trivial solutions. But for LD, the existence of one
non-trivial solution is sufficient.
Theorem: Two vectors in a vector space are LD if and only if one vector is a scalar multiple of the other.
Proof: Let v1 and v2 be two vectors of a vector space V. Suppose v1 and v2 are LD. Then there exist
real numbers a1 and a2 such that at least one of a1 and a2 is non-zero (say a1 ≠ 0), and
a1v1 + a2v2 = 0.
Since a1 ≠ 0, we have
v1 = −(a2/a1)v2.
This shows that v1 is a scalar multiple of v2.
Conversely assume that v1 is a scalar multiple of v2, that is, v1 = αv2 for some real number α. Then
we have
(−1)v1 + αv2 = 0.
We see that the linear combination of v1 and v2 is 0, where the scalar −1 with v1 is non-zero. So v1 and
v2 are LD.
Ex. Verify whether the set S = {[3, −1, 1], [−5, −2, 2], [2, 2, −1]} is LI in R3.
Sol. Here we have three vectors v1 = [3, −1, 1], v2 = [−5, −2, 2] and v3 = [2, 2, −1]. To check LI/LD, we let
a1v1 + a2v2 + a3v3 = 0
⟹ a1[3, −1, 1] + a2[−5, −2, 2] + a3[2, 2, −1] = [0, 0, 0]
⟹ [3a1 − 5a2 + 2a3, −a1 − 2a2 + 2a3, a1 + 2a2 − a3] = [0, 0, 0].
This gives the following homogeneous system of equations:
3a1 − 5a2 + 2a3 = 0,
−a1 − 2a2 + 2a3 = 0,
a1 + 2a2 − a3 = 0.
Here the augmented matrix is
           [ 3 −5  2 : 0]
[A : B] =  [−1 −2  2 : 0]
           [ 1  2 −1 : 0]
Using suitable row transformations, we find
           [1 0 0 : 0]
[A : B] ~  [0 1 0 : 0]
           [0 0 1 : 0]

So we get the trivial solution, a1 = 0, a2 = 0 and a3 = 0.
Thus, the set S = {[3, −1, 1], [−5, −2, 2], [2, 2, −1]} is LI in R3.
Note: The row reduction approach can also be applied to test the linear independence of vectors of
Rn. Just write the given vectors as the rows of a matrix, say A, and find the rank of A. If the rank of A is
equal to the number of rows (vectors), then the given vectors are LI. For example, consider the set
S = {[3, −1, 1], [−5, −2, 2], [2, 2, −1]}. The matrix with rows as the vectors of S reads as
     [ 3 −1  1]
A =  [−5 −2  2]
     [ 2  2 −1]
Using suitable row operations, we find
     [ 3 −1  1]     [1 0 0]
A =  [−5 −2  2]  ~  [0 1 0]
     [ 2  2 −1]     [0 0 1]
So the rank of A is 3. It implies that the set S = {[3, −1, 1], [−5, −2, 2], [2, 2, −1]} is LI.
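The rank test above can be automated; here is a minimal sketch (my illustration, not from the notes) of Gauss-Jordan elimination over exact rationals that counts the non-zero rows:

```python
from fractions import Fraction as F

def rank(M):
    """Row-reduce a copy of M over exact rationals and return the number of pivots."""
    M = [row[:] for row in M]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue                                   # no pivot in this column
        M[r], M[piv] = M[piv], M[r]                    # swap pivot row up
        M[r] = [x / M[r][c] for x in M[r]]             # scale pivot to 1
        for i in range(len(M)):
            if i != r and M[i][c] != 0:                # clear the rest of the column
                M[i] = [a - M[i][c] * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

S = [[F(3), F(-1), F(1)], [F(-5), F(-2), F(2)], [F(2), F(2), F(-1)]]
assert rank(S) == 3    # rank equals the number of vectors, so S is LI
```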
Ex. Any set containing the zero vector is always LD, while a singleton set containing a non-zero vector is LI.
Sol. Let S = {0, v1, v2, ......, vn} be a set containing the 0 vector. Then the expression
1.0 + 0.v1 + 0.v2 + ....... + 0.vn = 0
shows that S is LD.
Next, consider a set A = {v1} carrying a single non-zero vector. Then a1v1 = 0 gives a1 = 0 since
v1 ≠ 0. So A is LI.
Theorem: A finite set S containing at least two vectors is LD iff some vector in S can be expressed as
a linear combination of the other vectors in S.
Proof: Let S = {v1, v2, ......, vn} be a set containing at least two vectors. Suppose S is LD. Then there
exist real numbers a1, a2, ......, an, not all zero (say some am ≠ 0), such that
a1v1 + a2v2 + ..... + am−1vm−1 + amvm + am+1vm+1 + ..... + anvn = 0.
It can be rewritten as
vm = −(a1/am)v1 − (a2/am)v2 − .... − (am−1/am)vm−1 − (am+1/am)vm+1 − ...... − (an/am)vn.
This shows that vm is a linear combination of the remaining vectors.
Conversely assume that some vector vm of S is a linear combination of the remaining vectors of S, that
is, there exist some real numbers b1, b2, ....., bm−1, bm+1, ........, bn such that
vm = b1v1 + b2v2 + ....... + bm−1vm−1 + bm+1vm+1 + ...... + bnvn.
It can be rewritten as
b1v1 + b2v2 + ....... + bm−1vm−1 + (−1)vm + bm+1vm+1 + ...... + bnvn = 0.

We see that the scalar with the vector vm is −1, which is non-zero. It follows that the set S is LD.
Theorem: A non-empty finite subset S of a vector space V is LI iff every vector v ∈ L(S) can be
expressed uniquely as a linear combination of the members of S.
Proof: Let S = {v1, v2, ......, vn} be a subset of the vector space V. Suppose S is LI and v is any member
of L(S). Then v can be expressed as a linear combination of the members of S. For uniqueness, let
a1v1 + a2v2 + ......... + anvn = v,
b1v1 + b2v2 + ......... + bnvn = v.
Subtracting the two expressions, we get
(a1 − b1)v1 + (a2 − b2)v2 + ......... + (an − bn)vn = 0.
Then the linear independence of the set S = {v1, v2, ......, vn} implies that a1 − b1 = 0, a2 − b2 = 0, ....,
an − bn = 0, that is, a1 = b1, a2 = b2, ....., an = bn. This proves the uniqueness.
Conversely assume that every vector v ∈ L(S) can be expressed uniquely as a linear combination of
the members of S. To prove that S = {v1, v2, ......, vn} is LI, let
a1v1 + a2v2 + ......... + anvn = 0.
Also, we have
0.v1 + 0.v2 + ......... + 0.vn = 0.
The above two expressions represent 0 ∈ L(S) as linear combinations of members of the set S. So by
uniqueness, we must have a1 = 0, a2 = 0, ......., an = 0. It follows that S is LI.
Note: An infinite subset S of a vector space V is LI iff every finite subset of S is LI. For example, the
set S = {1, x, x^2, .......} is an infinite LI set in P (the vector space of all polynomials).

Homework: Do exercise 4.4 from the textbook.

Basis
A subset B of a vector space V is a basis of V if B is LI and L(B) = V . Therefore, the basis set B is LI
and generates or spans the vector space V .
Ex. The set B = {[1, 0], [0, 1]} is a basis of R2 , called the standard basis of R2 .
For, B = {[1, 0], [0, 1]} is LI since a[1, 0] + b[0, 1] = [0, 0] yields a = 0, b = 0.
Also, L(B) = R2 since any [x1, x2] ∈ R2 can be written as
[x1 , x2 ] = x1 [1, 0] + x2 [0, 1],
a linear combination of the members of the set B = {[1, 0], [0, 1]}.
Ex. The set B = {[1, 0, 0], [0, 1, 0], [0, 0, 1]} is the standard basis of R3 .

Ex. The set B = {[1, 2, 1], [2, 3, 1], [1, 2, 3]} is a basis of R3.
Sol. Using suitable row transformations, we find
[1 2 1]     [1 0 0]
[2 3 1]  ~  [0 1 0]
[1 2 3]     [0 0 1]
This shows that B is LI. Also, L(B) = {a[1, 0, 0] + b[0, 1, 0] + c[0, 0, 1] : a, b, c ∈ R} = {[a, b, c] :
a, b, c ∈ R} = R3. So B is a basis of R3.

 
 
 

Ex. The set B = {[1 0; 0 0], [0 1; 0 0], [0 0; 1 0], [0 0; 0 1]} is the standard basis of M22.
Ex. The set B = {1, x, x^2, ........, x^n} is the standard basis of Pn.
Sol. Any polynomial in x of degree at most n is, of course, a linear combination of the members of the set
B = {1, x, x^2, ........, x^n}. So L(B) = Pn.
Also, B = {1, x, x^2, ........, x^n} is LI since a0.1 + a1x + ............ + anx^n = 0 = 0.1 + 0.x + ............ + 0.x^n
gives a0 = 0, a1 = 0, ........., an = 0.
Thus B = {1, x, x^2, ........, x^n} is a basis of Pn, also known as the standard basis of Pn.
Ex. The empty set ∅ is a basis of the trivial vector space V = {0}.
Theorem: If B1 is a finite basis of a vector space V, and B2 is any other basis of V, then B2 has the same
number of vectors as B1.
Note: A vector space may have infinitely many finite bases. However, the number of vectors in each
basis is the same, as suggested by the above theorem. This fact leads to the following definition.

Dimension
The number of elements in the basis of a vector space is called its dimension. Further, a vector space
with finite dimension is called a finite dimensional vector space.
Ex. The basis B = {[1, 0], [0, 1]} of R2 carries two vectors. So dim(R2) = 2.
Ex. The basis set B = {[1 0; 0 0], [0 1; 0 0], [0 0; 1 0], [0 0; 0 1]} of M22 carries four vectors. So dim(M22) = 4.
The following theorem gives the idea to find a basis of a vector space from a spanning set.
Theorem: Any maximal LI subset of a spanning set of a vector space forms a basis of the vector space.

LI test method to find a maximal LI subset
Write the given vectors as the columns of a matrix, and find its REF. The columns carrying the leading
entries correspond to the LI vectors.

Ex. Find a maximal LI subset of the set S = {[1, 0], [2, 1], [1, 5]}.
Sol. Writing the given vectors as the columns, we get the matrix
[1 2 1]
[0 1 5]
Using suitable row operations, we find
[1 2 1]  ~  [1 0 −9]
[0 1 5]     [0 1  5]
We see that the first two columns carry the leading entries. So the corresponding set of vectors {[1, 0], [2, 1]}
is a maximal LI subset of the given set S of vectors.
Note. It should be noted that the maximal LI set given by the LI test method is not unique. For, the
set {[1, 0], [1, 5]} is also a maximal LI subset of S. So the LI test method just serves the purpose of providing
a maximal LI subset of the given set of vectors.
Ex. The set S = {[1, 0], [2, 1], [1, 5]} spans R2. Find a basis of R2 from this set.
Sol. As shown in the previous example, {[1, 0], [2, 1]} is a maximal LI subset of S. So it forms a basis of R2.
Theorem: Every LI subset of a vector space can be extended to form a basis of the vector space.
Ex. The set S = {[1, 3, 7]} is a LI subset of R3. It is easy to verify that the extended set {[1, 3, 7], [0, 1, 0], [0, 0, 1]}
is a basis of R3.
Interesting Note: If we remove one or more vectors from a basis of a vector space, it is no longer a
spanning set of the vector space. Further, if we insert one or more vectors into the basis, it is no longer
LI. Thus, the number of elements in a basis cannot vary.
Theorem: If W is a subspace of a vector space V, then dim(W) ≤ dim(V).
Ex. Find a basis of the subspace of R4 spanned by the set S = {[3, 5, 2, 4], [1, 2, 2, 1], [1, 2, 1, 1]}.
Sol. We need to find a maximal LI subset of S = {[3, 5, 2, 4], [1, 2, 2, 1], [1, 2, 1, 1]}. Using suitable
row operations, we find
[3 5 2 4]     [1 0 0  3]
[1 2 2 1]  ~  [0 1 0 −1]
[1 2 1 1]     [0 0 1  0]
This shows that S is LI. So S itself is a basis of the subspace of R4.
Note: The row vectors of the row equivalent matrix of the basis vectors also form a basis. So in the above
example, the set {[1, 0, 0, 3], [0, 1, 0, −1], [0, 0, 1, 0]} is also a basis of the subspace of R4.

Homework: Do exercise 4.5 and 4.6 from the textbook.

Coordinatization
Let B = (v1, v2, ..........., vn) be an ordered basis (an ordered n-tuple of vectors) of a vector space V.
Suppose v ∈ V. Then there exist real numbers a1, a2, ......, an such that v = a1v1 + a2v2 + ......... + anvn.
The n-vector [v]B = [a1, a2, ....., an] is called the coordinatization of v with respect to B. We also say that
v is expressed in B-coordinates.
Ex. The set B = ([1, 0], [0, 1]) is an ordered basis of R2. Then [5, 4]B = [5, 4] since [5, 4] = 5[1, 0] + 4[0, 1].
Ex. The set C = ([0, 1], [1, 0]) is an ordered basis of R2. Then [5, 4]C = [4, 5] since [5, 4] = 4[0, 1] + 5[1, 0].
Ex. Let V be a subspace of R3 spanned by the ordered basis B = ([2, −1, 3], [3, 2, 1]) and [5, −6, 11] ∈ V.
Then [5, −6, 11]B = [4, −1].
Sol. Let [5, −6, 11] = a[2, −1, 3] + b[3, 2, 1]. This yields the following system of linear equations:
2a + 3b = 5;    −a + 2b = −6;    3a + b = 11.
Writing the augmented matrix and applying suitable row transformations, we find
[ 2 3 :  5]     [1 0 :  4]
[−1 2 : −6]  ~  [0 1 : −1]
[ 3 1 : 11]     [0 0 :  0]
This gives a = 4 and b = −1. So [5, −6, 11]B = [4, −1].
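Since only two unknowns appear, the first two equations determine a and b (here via Cramer's rule), and the third equation serves as a consistency check; a small exact computation (my illustration, not from the notes):

```python
from fractions import Fraction as F

# First two equations: 2a + 3b = 5 and -a + 2b = -6
det = F(2 * 2 - 3 * (-1))              # determinant of the 2x2 coefficient block
a = F(5 * 2 - 3 * (-6)) / det          # Cramer's rule for a
b = F(2 * (-6) - 5 * (-1)) / det       # Cramer's rule for b
assert (a, b) == (4, -1)
assert 3 * a + b == 11                 # third equation is consistent, so the vector lies in V
```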

Transition Matrix
Let V be a non-trivial n-dimensional vector space with ordered bases B and C. Let P be the n-square matrix
whose ith column is [bi]C, where bi is the ith basis vector in B. Then P is called the transition matrix
from B-coordinates to C-coordinates, or from B to C. For any vector v ∈ V, it can be proved that P[v]B = [v]C.
Ex. The sets B = ([1, 0, 0], [0, 1, 0], [0, 0, 1]) and C = ([1, 5, 1], [1, 6, −6], [1, 3, 14]) are two ordered bases of
R3. Then
    [−102  20  3]
P = [  67 −13 −2]
    [  36  −7 −1]
For, writing the C-basis vectors as the columns of a matrix and row reducing, we find
[1  1  1 : 1 0 0]     [1 0 0 : −102  20  3]
[5  6  3 : 0 1 0]  ~  [0 1 0 :   67 −13 −2]
[1 −6 14 : 0 0 1]     [0 0 1 :   36  −7 −1]
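Because B is the standard basis here, P is just the inverse of the matrix whose columns are the C-basis vectors; a sketch of the Gauss-Jordan computation over exact rationals (my illustration, not from the notes):

```python
from fractions import Fraction as F

def inverse(M):
    """Reduce the augmented matrix [M | I] to [I | M^-1] by Gauss-Jordan elimination."""
    n = len(M)
    A = [[F(x) for x in row] + [F(int(i == j)) for j in range(n)]
         for i, row in enumerate(M)]
    for c in range(n):
        piv = next(i for i in range(c, n) if A[i][c] != 0)   # assumes M is invertible
        A[c], A[piv] = A[piv], A[c]
        A[c] = [x / A[c][c] for x in A[c]]                   # scale pivot row
        for i in range(n):
            if i != c and A[i][c] != 0:
                A[i] = [a - A[i][c] * b for a, b in zip(A[i], A[c])]
    return [row[n:] for row in A]

# Columns of M are the C-basis vectors [1,5,1], [1,6,-6], [1,3,14].
M = [[1, 1, 1], [5, 6, 3], [1, -6, 14]]
P = inverse(M)
assert P == [[-102, 20, 3], [67, -13, -2], [36, -7, -1]]
```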

Theorem: Suppose B, C and D are ordered bases for a non-trivial finite dimensional vector space V .
Let P be transition matrix from B to C, and Q be transition matrix from C to D. Then QP is the
transition matrix from B to D.
Theorem: If A is an n-square diagonalizable matrix, that is, there exists a non-singular matrix P such
that P^(−1)AP = D, and B is an ordered basis of Rn consisting of eigenvectors of A (or column vectors of
P), then for any v ∈ Rn, we have D[v]B = [Av]B.

For, if S is the standard basis of Rn, then P is the transition matrix from B to S. So P[v]B = [v]S, and hence
D[v]B = P^(−1)AP[v]B = P^(−1)A[v]S = P^(−1)[Av]S = [Av]B.

Homework: Do exercise 4.7 from the textbook.

Chapter 5 (5.1-5.4)
Linear Transformation
Let V and W be vector spaces. Then a function f : V → W is said to be a linear transformation (LT)
iff for all v1, v2 ∈ V and c ∈ R, we have f(v1 + v2) = f(v1) + f(v2) and f(cv1) = cf(v1).
Ex. f : Mmn → Mnm given by f(A) = A^T is a LT.
Sol. Let A, B ∈ Mmn and a ∈ R. Then we have
f(A + B) = (A + B)^T = A^T + B^T = f(A) + f(B) and f(aA) = (aA)^T = aA^T = af(A).
This shows that f is a LT.
Ex. f : R^3 → R^3 given by f([x1, x2, x3]) = [x1, x2, -x3] is a LT.
Sol. Let v1 = [x1, x2, x3], v2 = [y1, y2, y3] ∈ R^3 and a ∈ R. Then we have
f(v1 + v2) = f([x1 + y1, x2 + y2, x3 + y3]) = [x1 + y1, x2 + y2, -(x3 + y3)],
f(v1) + f(v2) = [x1, x2, -x3] + [y1, y2, -y3] = [x1 + y1, x2 + y2, -(x3 + y3)].
Therefore, f(v1 + v2) = f(v1) + f(v2).
Also, f(av1) = f([ax1, ax2, ax3]) = [ax1, ax2, -ax3] = a[x1, x2, -x3] = af(v1).
This shows that f is a LT.
Ex. Let A be an m × n matrix. Then f : R^n → R^m given by f(X) = AX is a LT.
Sol. Let X1, X2 ∈ R^n and a ∈ R. Then we have
f(X1 + X2) = A(X1 + X2) = AX1 + AX2 = f(X1) + f(X2) and f(aX1) = A(aX1) = aAX1 = af(X1).
This shows that f is a LT.
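The two defining identities can be spot-checked numerically. The sketch below (using an illustrative 2 × 3 matrix of our own choosing) tests both conditions for the map X ↦ AX on particular vectors:

```python
from fractions import Fraction as F

# An arbitrary 2x3 matrix: f maps R^3 to R^2 via f(X) = AX.
A = [[F(1), F(-2), F(0)],
     [F(3), F(4), F(-1)]]

def f(X):
    return [sum(A[i][j] * X[j] for j in range(3)) for i in range(2)]

def add(u, v):
    return [a + b for a, b in zip(u, v)]

def scale(c, u):
    return [c * a for a in u]

X1, X2, c = [F(1), F(0), F(2)], [F(-1), F(5), F(3)], F(7)
print(f(add(X1, X2)) == add(f(X1), f(X2)))   # True: additivity
print(f(scale(c, X1)) == scale(c, f(X1)))    # True: homogeneity
```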
Note: A LT f : V → V is called a linear operator on V.
Theorem: If L : V → W is a LT, then L(0) = 0 and L(a1 v1 + a2 v2) = a1 L(v1) + a2 L(v2) for all a1, a2 ∈ R
and v1, v2 ∈ V.
Proof: Let u ∈ V. Then we have
L(0) = L(u + (-u)) = L(u) + L(-u) = L(u) + L((-1)u) = L(u) - L(u) = 0.
Next, L(a1 v1 + a2 v2) = L(a1 v1) + L(a2 v2) = a1 L(v1) + a2 L(v2) since a1 v1, a2 v2 ∈ V and L is a LT.
Theorem: The composition of two LT is also a LT, that is, if L1 : V1 → V2 and L2 : V2 → V3 are LT,
then L2 ∘ L1 : V1 → V3 is a LT.
Proof: Let u, v ∈ V1 and a ∈ R. Given that L1 : V1 → V2 and L2 : V2 → V3 are LT, we have
(L2 ∘ L1)(u + v) = L2(L1(u + v)) = L2(L1(u) + L1(v)) = L2(L1(u)) + L2(L1(v)) = (L2 ∘ L1)(u) + (L2 ∘ L1)(v),
(L2 ∘ L1)(au) = L2(L1(au)) = L2(aL1(u)) = aL2(L1(u)) = a(L2 ∘ L1)(u).
This shows that L2 ∘ L1 is a LT.
Theorem: If L : V → W is a LT, and V1 and W1 are subspaces of V and W respectively, then
L(V1) = {L(v) : v ∈ V1} is a subspace of W and L^{-1}(W1) = {v ∈ V : L(v) ∈ W1} is a subspace of V.
Proof: To prove that L(V1) = {L(v) : v ∈ V1} is a subspace of W, let L(u), L(v) ∈ L(V1) and a ∈ R.
Then u, v ∈ V1, and we have
L(u) + L(v) = L(u + v) ∈ L(V1) since u + v ∈ V1.
Also, aL(u) = L(au) ∈ L(V1) since au ∈ V1.
This shows that L(V1) is a subspace of W.
Likewise, it is easy to prove that L^{-1}(W1) = {v ∈ V : L(v) ∈ W1} is a subspace of V. (please try!)
Homework: Do exercise 5.1 from the textbook.


Matrix of a Linear Transformation

Let B = (v1, v2, ..., vn) and C = (w1, w2, ..., wm) be ordered bases of two non-trivial vector spaces
V and W respectively. If L : V → W is a LT, then there exists a unique matrix A_BC of order m × n,
known as the matrix of the LT L, such that A_BC [v]_B = [L(v)]_C for all v ∈ V. Furthermore, the ith column
of A_BC is [L(vi)]_C.
Ex. Find the matrix of the LT L : P_3 → R^3 given by L(a3 x^3 + a2 x^2 + a1 x + a0) = [a0 + a1, 2a2, a3 - a0]
with respect to the ordered bases B = (x^3, x^2, x, 1) for P_3 and C = ([1, 0, 0], [0, 1, 0], [0, 0, 1]) for R^3. Hence
compute [L(5x^3 - x^2 + 3x + 2)]_C.

Sol. A_BC = [[L(x^3)]_C, [L(x^2)]_C, [L(x)]_C, [L(1)]_C] =

[ 0  0  1   1 ]
[ 0  2  0   0 ]
[ 1  0  0  -1 ]

Now [5x^3 - x^2 + 3x + 2]_B = (5, -1, 3, 2). Therefore,

[L(5x^3 - x^2 + 3x + 2)]_C = A_BC [5x^3 - x^2 + 3x + 2]_B = [5, -2, 3]^T.
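A quick check of this computation (the matrix and coordinate vector below are reconstructed from the example, with minus signs restored from the print):

```python
# Matrix of L with respect to B = (x^3, x^2, x, 1) and the standard basis of R^3.
A_BC = [[0, 0, 1, 1],
        [0, 2, 0, 0],
        [1, 0, 0, -1]]

# Coordinates of 5x^3 - x^2 + 3x + 2 in B.
vB = [5, -1, 3, 2]

result = [sum(row[j] * vB[j] for j in range(4)) for row in A_BC]
print(result)   # [5, -2, 3]
```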
Theorem: Let L : V → W be a LT where V and W are non-trivial vector spaces. Let A_BC be the matrix
of L with respect to the ordered bases B and C. Suppose D and E are any other ordered bases of V and
W respectively. Let P be the transition matrix from B to D, and Q be the transition matrix from C to E.
Then the matrix A_DE of L with respect to the bases D and E is A_DE = Q A_BC P^{-1}.
Note: (i) Please do example 5 from the text book.
(ii) If L : V → V is a linear operator on V, then A_BB = P^{-1} A_CC P, where B and C are two ordered bases
of V, and P is the transition matrix from B to C.
(iii) If A_BC and A_CD are respectively the matrices of the linear transformations L1 : V1 → V2 and
L2 : V2 → V3, where B, C and D are ordered bases of V1, V2 and V3 respectively, then the matrix of
L2 ∘ L1 : V1 → V3 is A_CD A_BC.
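Note (ii) can be illustrated with a small operator of our own choosing: L([x, y]) = [y, x] on R^2, with B the standard basis and C = ([1, 1], [1, -1]). Then A_BB = [[0, 1], [1, 0]], A_CC = [[1, 0], [0, -1]], and the transition matrix from B to C is P = [[1/2, 1/2], [1/2, -1/2]]:

```python
from fractions import Fraction as F

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

A_BB = [[F(0), F(1)], [F(1), F(0)]]            # L([x, y]) = [y, x] in the standard basis
A_CC = [[F(1), F(0)], [F(0), F(-1)]]           # same operator in basis C = ([1,1], [1,-1])
P = [[F(1, 2), F(1, 2)], [F(1, 2), F(-1, 2)]]  # transition matrix from B to C
P_inv = [[F(1), F(1)], [F(1), F(-1)]]          # its inverse (C-vectors as columns)

print(matmul(P_inv, matmul(A_CC, P)) == A_BB)  # True: A_BB = P^{-1} A_CC P
```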

Homework: Do exercise 5.2 from the textbook.

Kernel and Range

Let L : V → W be a LT. Then the sets Ker(L) = {v ∈ V : L(v) = 0} and Range(L) = {L(v) : v ∈ V} are
respectively defined as the kernel and range of L.
Ex. If L : V → W is a LT, then Ker(L) = {v ∈ V : L(v) = 0} is a subspace of V and Range(L) =
{L(v) : v ∈ V} is a subspace of W.
Sol. We know that 0 ∈ V and L(0) = 0. So 0 ∈ Ker(L) and L(0) ∈ Range(L). This shows that Ker(L)
and Range(L) both are non-empty. To show that Ker(L) is a subspace of V, assume that u, v ∈ Ker(L) and
a ∈ R. Then L(u) = 0 and L(v) = 0. Also, L being a LT, we have L(u + v) = L(u) + L(v) = 0 + 0 = 0.
So u + v ∈ Ker(L). Next, L(au) = aL(u) = a0 = 0. So au ∈ Ker(L). Thus, Ker(L) is a subspace of V.
Likewise, it is easy to show that Range(L) = {L(v) : v ∈ V} is a subspace of W. (please try!)
Ex. If L : R^3 → R^3 is given by L([x1, x2, x3]) = [x1, x2, 0], then find Ker(L) and Range(L).
Sol. Let [x1, x2, x3] ∈ Ker(L). Then by definition of kernel, we have
L([x1, x2, x3]) = [0, 0, 0] or [x1, x2, 0] = [0, 0, 0]. Thus, we get x1 = 0, x2 = 0 and x3 is any real number.


So Ker(L) = {[0, 0, x3] : x3 ∈ R}.

Given that L([x1, x2, x3]) = [x1, x2, 0], we have Range(L) = {[x1, x2, 0] : x1, x2 ∈ R}.
Ex. If L : P_3 → P_2 is given by L(ax^3 + bx^2 + cx + d) = 3ax^2 + 2bx + c, then Ker(L) = {d : d ∈ R}, the set
of constant polynomials, and Range(L) = {3ax^2 + 2bx + c : a, b, c ∈ R} = P_2.
For, let ax^3 + bx^2 + cx + d ∈ Ker(L). Then L(ax^3 + bx^2 + cx + d) = 0, or 3ax^2 + 2bx + c = 0, which gives
a = b = c = 0 while d may be any real number.

Ex. Let L : R^5 → R^4 be given by L(X) = AX, where

    [ -8   4   16  -32   0 ]
A = [  4  -2  -10   22   4 ]
    [ -2   1    5  -11   7 ]
    [  6  -3  -15   33   7 ]

Find bases of Ker(L) and Range(L).

Sol. Here Ker(L) is the solution set of AX = 0. Row reduction gives

    [ 1  -1/2  0  -2  0 ]
A ~ [ 0   0    1  -3  0 ]
    [ 0   0    0   0  1 ]
    [ 0   0    0   0  0 ]

It follows that x1 = (1/2)x2 + 2x4, x3 = 3x4 and x5 = 0. Let x2 = a and x4 = b. Then
[x1, x2, x3, x4, x5] = [(1/2)a + 2b, a, 3b, b, 0] = a[1/2, 1, 0, 0, 0] + b[2, 0, 3, 1, 0].
Hence Ker(L) = {[(1/2)a + 2b, a, 3b, b, 0] : a, b ∈ R}, and
{[1/2, 1, 0, 0, 0], [2, 0, 3, 1, 0]} is a basis of Ker(L).

The range of L is spanned by the column vectors of A. Since the pivots in the row echelon form of A
occur in the first, third and fifth columns, the corresponding columns of A itself are LI and span the
column space of A. So {[-8, 4, -2, 6], [16, -10, 5, -15], [0, 4, 7, 7]} is a basis of Range(L).
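Since several minus signs in A are easy to lose in print, the kernel basis can be verified directly: both basis vectors (scaling the first by 2 to stay with integers) must satisfy AX = 0. A minimal sketch, using the matrix A as reconstructed above:

```python
A = [[-8, 4, 16, -32, 0],
     [4, -2, -10, 22, 4],
     [-2, 1, 5, -11, 7],
     [6, -3, -15, 33, 7]]

def matvec(M, x):
    return [sum(M[i][j] * x[j] for j in range(5)) for i in range(4)]

# 2*[1/2, 1, 0, 0, 0] and [2, 0, 3, 1, 0] should both lie in Ker(L).
print(matvec(A, [1, 2, 0, 0, 0]))   # [0, 0, 0, 0]
print(matvec(A, [2, 0, 3, 1, 0]))   # [0, 0, 0, 0]
```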
Theorem: If L : R^n → R^m is a LT with matrix A with respect to any bases of R^n and R^m, then
(i) dim(Range(L)) = Rank(A),
(ii) dim(Ker(L)) = n - Rank(A),
(iii) dim(Ker(L)) + dim(Range(L)) = n.
Verify this theorem for the previous example (important!). A generalized version of part (iii) of the
above theorem is given by the dimension theorem.
Dimension Theorem: If L : V → W is a LT and V is finite dimensional, then
dim(Ker(L)) + dim(Range(L)) = dim(V).
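For the previous example the theorem can be checked by computing Rank(A) with a small fraction-based Gaussian elimination (a sketch of our own, not a library routine): the rank comes out 3, so dim(Ker(L)) = 5 - 3 = 2, and indeed 3 + 2 = 5.

```python
from fractions import Fraction as F

def rank(M):
    # Row reduce over exact fractions and count the pivots.
    M = [[F(x) for x in row] for row in M]
    rows, cols, r = len(M), len(M[0]), 0
    for c in range(cols):
        pivot = next((i for i in range(r, rows) if M[i][c] != 0), None)
        if pivot is None:
            continue
        M[r], M[pivot] = M[pivot], M[r]
        for i in range(rows):
            if i != r and M[i][c] != 0:
                factor = M[i][c] / M[r][c]
                M[i] = [a - factor * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

A = [[-8, 4, 16, -32, 0],
     [4, -2, -10, 22, 4],
     [-2, 1, 5, -11, 7],
     [6, -3, -15, 33, 7]]

n = 5
print(rank(A))       # 3 = dim(Range(L))
print(n - rank(A))   # 2 = dim(Ker(L))
```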

Homework: Do exercise 5.3 from the textbook.


One-to-One and Onto Linear Transformation

A LT L : V → W is said to be one-to-one iff L(v1) = L(v2) implies v1 = v2 for all v1, v2 ∈ V. Further, it is
said to be onto iff for any w ∈ W, there exists some vector v ∈ V such that L(v) = w. So L : V → W is
onto iff Range(L) = W.
Ex. Show that L : R^3 → R^3 given by L([x1, x2, x3]) = [x1, x2, 5x3] is both one-to-one and onto.
Sol. Let v1 = [x1, x2, x3], v2 = [y1, y2, y3] ∈ V = R^3. Then
L(v1) = L(v2) implies L([x1, x2, x3]) = L([y1, y2, y3]), that is, [x1, x2, 5x3] = [y1, y2, 5y3].
So x1 = y1, x2 = y2, x3 = y3 and v1 = v2. Hence, L is one-to-one.
Let w = [z1, z2, z3] ∈ W = R^3. Then v = [z1, z2, (1/5)z3] is such that
L(v) = L([z1, z2, (1/5)z3]) = [z1, z2, z3] = w. This shows that L is onto.
Ex. L : R^3 → R^2 given by L([x1, x2, x3]) = [x1, x2] is onto but not one-to-one.
Sol. Here V = R^3 and W = R^2. Let w = [y1, y2] ∈ R^2. Then for any real number y3, we have
v = [y1, y2, y3] ∈ R^3 such that L(v) = L([y1, y2, y3]) = [y1, y2] = w. This shows that L is onto.
Now, [1, 2, 4], [1, 2, 6] ∈ R^3 are such that L([1, 2, 4]) = [1, 2] = L([1, 2, 6]), but [1, 2, 4] ≠ [1, 2, 6]. So L is not
one-to-one.
Ex. L : R^2 → R^3 given by L([x1, x2]) = [x1 - x2, x1 + x2, x1] is one-to-one but not onto.
Sol. Here V = R^2 and W = R^3. Let v1 = [x1, x2], v2 = [y1, y2] ∈ R^2 be such that L(v1) = L(v2), that is,
L([x1, x2]) = L([y1, y2]) or [x1 - x2, x1 + x2, x1] = [y1 - y2, y1 + y2, y1]. This leads to x1 = y1 and x2 = y2. So
[x1, x2] = [y1, y2] or v1 = v2. Thus, L is one-to-one.
Next, consider [1, 2, 3] ∈ R^3. Let [x1, x2] ∈ R^2 be such that L([x1, x2]) = [1, 2, 3] or [x1 - x2, x1 + x2, x1] = [1, 2, 3].
This yields the following system of equations:
x1 - x2 = 1, x1 + x2 = 2, x1 = 3,
which has no solution (the first two equations force x1 = 3/2, contradicting x1 = 3). This in turn implies
that [1, 2, 3] ∈ R^3 has no pre-image in R^2. Hence L is not onto.
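The inconsistency of this system is easy to confirm: solve the first two equations and substitute into the third (a minimal sketch):

```python
from fractions import Fraction as F

# x1 - x2 = 1 and x1 + x2 = 2 give x1 = 3/2, x2 = 1/2.
x1 = (F(1) + F(2)) / 2
x2 = (F(2) - F(1)) / 2
print(x1 - x2 == 1 and x1 + x2 == 2)   # True: first two equations hold
print(x1 == 3)                          # False: third equation fails
```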
Theorem: A LT L : V → W is one-to-one iff Ker(L) = {0}.
Proof: First assume that L : V → W is one-to-one. We shall prove that Ker(L) = {0}. Let v ∈ Ker(L).
Then L(v) = 0, by definition of kernel. Also, we know that L(0) = 0. So L(v) = L(0). Given that L is
one-to-one, it follows that v = 0. So Ker(L) = {0}.
Conversely, assume that Ker(L) = {0}. To prove that L is one-to-one, let v1, v2 ∈ V be such that
L(v1) = L(v2). Then we have L(v1) - L(v2) = 0 or L(v1 - v2) = 0 since L is a LT. Further, L(v1 - v2) = 0
implies that v1 - v2 ∈ Ker(L). But Ker(L) = {0}. So we get v1 - v2 = 0 or v1 = v2. Hence, L is one-to-one.
Theorem: A LT L : V → W, with W finite dimensional, is onto iff dim(Range(L)) = dim(W).
Proof: By definition of onto LT, L : V → W is onto iff Range(L) = W. Further, Range(L) is a subspace
of W, and a subspace of a finite dimensional vector space coincides with the whole space iff the two have
the same dimension. It follows that L : V → W is onto iff dim(Range(L)) = dim(W).
Theorem: If L : V → W is one-to-one and S = {v1, v2, ..., vn} is a LI subset of V,
then L(S) = {L(v1), L(v2), ..., L(vn)} is LI in W.
Proof: To prove that L(S) = {L(v1), L(v2), ..., L(vn)} is LI in W, let a1, a2, ..., an be real numbers
(scalars) such that
a1 L(v1) + a2 L(v2) + ... + an L(vn) = 0.
Since L is a LT and L(0) = 0, we have
L(a1 v1 + a2 v2 + ... + an vn) = L(0).


It is given that L is one-to-one. So we get

a1 v1 + a2 v2 + ... + an vn = 0.
Also, it is given that S = {v1, v2, ..., vn} is a LI subset of V. So we have a1 = 0, a2 = 0, ..., an = 0.
Hence L(S) = {L(v1), L(v2), ..., L(vn)} is LI in W.
Theorem: If L : V → W is onto and S spans V, then L(S) spans W.
Proof: Let S = {v1, v2, ..., vn} span V and let w ∈ W. Since L : V → W is onto, there exists some v ∈ V
such that L(v) = w. Now v ∈ V and S = {v1, v2, ..., vn} spans V. So there exist real numbers (scalars)
a1, a2, ..., an such that
v = a1 v1 + a2 v2 + ... + an vn.
Since L is a LT, we have w = L(v) = a1 L(v1) + a2 L(v2) + ... + an L(vn).
This shows that L(S) spans W.

Homework: Do exercise 5.4 from the textbook.
