
Linear Algebra (MA - 102)

Lecture - 12
Eigenvalues and Eigenvectors

G. Arunkumar

May 9, 2022

The cofactor matrix
1 Definition: Let A = (aij) be an n × n matrix. The cofactor of aij, denoted
by cof aij, is defined as

cof aij = (−1)^{i+j} det Aij,

where Aij is the (n − 1) × (n − 1) matrix obtained from A by deleting the i-th row
and the j-th column.

2 The cofactor matrix of A, denoted by cof A, is the matrix

cof A = (cof aij).

Theorem
For any n × n matrix A,

A (cof A)^t = (det A) I = (cof A)^t A.

In particular, if det A is nonzero, then A^{-1} = (1/det A) (cof A)^t.
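As a quick illustration (added to these notes, not part of the original slides), here is a minimal numpy sketch that builds the cofactor matrix of a small invertible matrix and checks the identity A (cof A)^t = (det A) I; the helper name cofactor_matrix is an ad hoc choice for this example.

```python
import numpy as np

def cofactor_matrix(A):
    """Return the cofactor matrix (cof aij) of a square matrix A."""
    n = A.shape[0]
    C = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            # Aij: delete row i and column j of A, then take the determinant.
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return C

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
C = cofactor_matrix(A)

# A (cof A)^t = (det A) I
print(np.allclose(A @ C.T, np.linalg.det(A) * np.eye(3)))      # True
# If det A != 0, then A^{-1} = (1/det A) (cof A)^t
print(np.allclose(np.linalg.inv(A), C.T / np.linalg.det(A)))   # True
```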
Cramer’s Rule:

Cramer’s Rule: Suppose


     
\begin{pmatrix}
a_{11} & a_{12} & \cdots & a_{1n} \\
a_{21} & a_{22} & \cdots & a_{2n} \\
\vdots & \vdots &        & \vdots \\
a_{n1} & a_{n2} & \cdots & a_{nn}
\end{pmatrix}
\begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix}
=
\begin{pmatrix} b_1 \\ b_2 \\ \vdots \\ b_n \end{pmatrix}

is a system of n linear equations in n unknowns x1, x2, . . . , xn.


Suppose the coefficient matrix A = (aij) is invertible. Let Cj be the matrix
obtained from A by replacing the j-th column of A by b = (b1, b2, . . . , bn)^t. Then for
j = 1, 2, . . . , n,

xj = det Cj / det A.

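Here is a small numpy sketch of Cramer's rule (an illustration added to the notes; the helper name cramer is an ad hoc choice), compared against np.linalg.solve.

```python
import numpy as np

def cramer(A, b):
    """Solve Ax = b via Cramer's rule, assuming det(A) != 0."""
    det_A = np.linalg.det(A)
    x = np.empty(len(b))
    for j in range(len(b)):
        Cj = A.copy()
        Cj[:, j] = b                      # replace the j-th column of A by b
        x[j] = np.linalg.det(Cj) / det_A  # xj = det(Cj) / det(A)
    return x

A = np.array([[2.0, 1.0, -1.0],
              [1.0, 3.0,  2.0],
              [1.0, 0.0,  1.0]])
b = np.array([1.0, 4.0, 2.0])

print(cramer(A, b))
print(np.allclose(cramer(A, b), np.linalg.solve(A, b)))   # True
```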
Area of a Parallelogram and determinant

Figure: parallelogram spanned by the vectors (a, c) and (b, d); its fourth vertex is (a + b, c + d).
Area of the parallelogram = (a + b)(c + d) − 2bc − ac − bd = ad − bc = det(A),

where A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}.

Theorem
The volume of the parallelepiped spanned by the vectors v1, . . . , vn in
Rn equals |det(v1, . . . , vn)|, where |a| denotes the absolute value of a.

Reference: See Section 6.4.3 of S. Kumaresan, Linear Algebra - A Geometric Approach, Prentice Hall of India (2000).
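For a concrete check of the area formula (an addition to the slides), the snippet below compares |det A| with the area computed independently as base times height, using the vectors u = (a, c) and v = (b, d) as the columns of A.

```python
import numpy as np

u = np.array([3.0, 1.0])    # (a, c)
v = np.array([1.0, 2.0])    # (b, d)

A = np.column_stack([u, v])           # A = [[a, b], [c, d]]
det_area = abs(np.linalg.det(A))      # |ad - bc|

# Independent check: area = base * height, where the height is the
# distance from v to the line spanned by u.
base = np.linalg.norm(u)
proj = (u @ v) / (u @ u) * u          # orthogonal projection of v onto u
height = np.linalg.norm(v - proj)

print(det_area, base * height)        # both 5.0 (up to rounding)
```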
Chapter 4: Linear Transformations

Definition and examples of linear transformations


Rank-Nullity theorem for linear transformation
The rank-nullity theorem: Examples and its consequences
Matrices and Linear Transformations

Linear Transformations: Introduction
1 Let A be an m × n matrix with real entries.
2 Then A “acts” on the n-dimensional space Rn by left multiplication: if
v ∈ Rn, then Av ∈ Rm.
3 In other words, A defines a function

TA : Rn −→ Rm , TA (v ) = Av .

4 By properties of matrix multiplication, TA satisfies the following conditions:


i. TA (v + w ) = TA (v ) + TA (w )
ii. TA (cv ) = cTA (v )
where c ∈ R and v , w ∈ Rn .
5 We say that TA respects the two operations in the vector space Rn .
6 In this lecture, we study such maps between vector spaces; a quick numerical check of properties i. and ii. above is sketched below.
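The snippet below (an illustration added to these notes) builds TA for a random 3 × 2 matrix and checks properties i. and ii. numerically.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 2))       # a 3 x 2 matrix, so T_A : R^2 -> R^3

def T_A(v):
    return A @ v                      # left multiplication by A

v = np.array([1.0, -2.0])
w = np.array([0.5, 3.0])
c = 4.0

print(np.allclose(T_A(v + w), T_A(v) + T_A(w)))   # i.  T_A(v + w) = T_A(v) + T_A(w)
print(np.allclose(T_A(c * v), c * T_A(v)))        # ii. T_A(cv) = c T_A(v)
```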
Linear Transformations
Definition
Let V , W be vector spaces over R. A linear transformation T : V −→ W is a
function satisfying

T (v + w ) = T (v ) + T (w ) and T (cv ) = cT (v )

where v , w ∈ V and c ∈ R.

Remark: If T : V → W is a linear transformation, then T(0) = 0. (Indeed,
T(0) = T(0 + 0) = T(0) + T(0), which forces T(0) = 0.)

Exercise: Prove that T : V → W is a linear transformation if and only if
T(αv1 + βv2) = αT(v1) + βT(v2)
for all α, β ∈ R and all v1, v2 ∈ V.
i. For any pair of vector spaces V , W over F, the “zero map” T0 : V → W
defined as T0 (v ) = 0 for all v ∈ V , is clearly a linear transformation.
Linear Transformations: Examples

Can you now think of another linear map from a vector space V to itself?

ii. The identity map I : V → V, defined as I(v) = v for all v ∈ V, is clearly a
linear map.

Linear Transformations: Examples

v. Rotation
Fix θ and define T : R2 −→ R2 by

T\begin{pmatrix} x \\ y \end{pmatrix}
= \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}
\begin{pmatrix} x \\ y \end{pmatrix}
= \begin{pmatrix} x\cos\theta - y\sin\theta \\ x\sin\theta + y\cos\theta \end{pmatrix}.

Then T(e1) = (cos θ, sin θ)^t and T(e2) = (− sin θ, cos θ)^t. Thus T rotates
the whole space by θ. (Draw a picture to convince yourself of this).
vi. Let T : R2 → R2 be defined as T(x, y) = (x, −y). Then T is a linear
transformation which represents the reflection about the x-axis. Similarly,
S : R2 → R2 defined as S(x, y) = (−x, y) is a linear transformation
which represents the reflection about the y-axis.

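As a numerical aside (not from the original slides), the code below applies the rotation matrix for θ = π/2 to e1 and e2, confirming T(e1) = (cos θ, sin θ)^t and T(e2) = (− sin θ, cos θ)^t, and then applies the reflection from example vi.

```python
import numpy as np

theta = np.pi / 2
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

e1 = np.array([1.0, 0.0])
e2 = np.array([0.0, 1.0])
print(R @ e1)    # ~ [cos(theta), sin(theta)] = [0, 1]
print(R @ e2)    # ~ [-sin(theta), cos(theta)] = [-1, 0]

# Reflection about the x-axis (example vi): T(x, y) = (x, -y)
T = np.array([[1.0,  0.0],
              [0.0, -1.0]])
print(T @ np.array([2.0, 3.0]))   # [2, -3]
```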
Linear Transformations: Examples

vii. The map T : R → R given by T(x) = x^2 is not linear (why?).


viii. Let U ⊆ Rn be an open set and f : U → Rm be differentiable at a point
p ∈ U. Then its total derivative is a linear transformation dfp : Rn → Rm given by

dfp(x1, . . . , xn) = x1 (∂f/∂x1)(p) + · · · + xn (∂f/∂xn)(p).

Motivation

Here F = R or C.

Let A and B be n × n matrices. Recall that we say A is similar/conjugate to
B (or A and B are similar) if there exists an invertible matrix P such that
P^{-1}AP = B.

Exercise: If A is similar to B, then det(A) = det(B) and trace(A) = trace(B).

Suppose A = (aij) is an n × n matrix with entries in F. Is A similar to a
matrix D which is “relatively simple”? That is, does there exist an
invertible matrix P such that P^{-1}AP = D and D is “simple”?
Computation of A^k, for instance A^{100}:
(PAP^{-1})^2 = (PAP^{-1})(PAP^{-1}) = PA^2P^{-1}.
In general, (PAP^{-1})^k = PA^kP^{-1} and A^k = P^{-1}(PAP^{-1})^kP.

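To preview why similarity helps (a sketch added to the notes, assuming A is similar to a diagonal matrix, which is the topic of diagonalization treated later): if A = PDP^{-1} with D diagonal, then A^{100} = PD^{100}P^{-1}, and D^{100} only requires raising the diagonal entries to the 100th power.

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])            # eigenvalues 5 and 2, so A is diagonalizable

eigvals, P = np.linalg.eig(A)         # columns of P are eigenvectors; A = P D P^{-1}
D100 = np.diag(eigvals ** 100)        # D^100: just raise the diagonal entries to the 100th power

A100_fast = P @ D100 @ np.linalg.inv(P)
A100_slow = np.linalg.matrix_power(A, 100)
print(np.allclose(A100_fast, A100_slow))   # True
```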
Eigenvalues and Eigenvectors
1 Definition. Let A ∈ Mn×n(F). A scalar λ ∈ F is said to be an eigenvalue or
characteristic value of A if there exists a nonzero v ∈ Fn such that

Av = λv .

In this case, the vector v ≠ 0 is called an eigenvector or characteristic vector
of A corresponding to the eigenvalue λ.
2 The eigenvalue tells whether the special vector v is stretched or shrunk or
reversed or left unchanged when it is multiplied by A. We may find λ = 2 or
1/2 or −1 or 1.
3 Why “eigen”? “Eigen” is German for “own” or “characteristic”.
4 If v, w are eigenvectors of A corresponding to the eigenvalue λ, then
show that v + w (if nonzero) and cv for 0 ≠ c ∈ F are also eigenvectors of A
corresponding to the eigenvalue λ:
A(v + w) = Av + Aw = λv + λw = λ(v + w),
A(cv) = cAv = cλv = λ(cv).
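A quick numerical illustration (added to the notes): np.linalg.eig returns eigenvalue/eigenvector pairs, and Av = λv can be checked directly.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigvals, eigvecs = np.linalg.eig(A)   # eigvecs[:, k] pairs with eigvals[k]

for k in range(len(eigvals)):
    lam = eigvals[k]
    v = eigvecs[:, k]
    print(lam, np.allclose(A @ v, lam * v))   # 3.0 True and 1.0 True (order may vary)
```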
1 Thus

Eλ := {v : v is an eigenvector corresponding to eigenvalue λ} ∪ {0}

is a subspace of Fn called the eigenspace corresponding to the eigenvalue λ.


2 Let I denote the identity matrix of order n × n.
3 By definition, λ ∈ F is an eigenvalue of A
4 ⇔ Av = λv = λ(Iv) for some v ≠ 0
5 ⇔ (λI − A)v = 0 for some v ≠ 0
6 ⇔ 0 ≠ v ∈ null space of (λI − A)
⇔ det(λI − A) = 0.
Proposition
Let A = (aij) be an n × n matrix. Then the following are equivalent:
(i) λ is an eigenvalue of A;
(ii) det(λI − A) = 0;
(iii) (λI − A) is not invertible.

Moreover, Eλ = nullspace of λI − A.
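The following sketch (illustrative, not from the slides) checks the proposition numerically for one eigenvalue of a small matrix: det(λI − A) vanishes, λI − A is not invertible, and a null-space vector of λI − A (computed here via the SVD) is an eigenvector.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
lam = 3.0                                        # an eigenvalue of A

M = lam * np.eye(2) - A
print(np.isclose(np.linalg.det(M), 0.0))         # (ii) det(lam I - A) = 0
print(np.linalg.matrix_rank(M) < 2)              # (iii) lam I - A is not invertible

# E_lam = null space of (lam I - A): take the right-singular vector
# belonging to the (numerically) zero singular value.
_, s, Vt = np.linalg.svd(M)
v = Vt[-1]
print(np.allclose(A @ v, lam * v))               # (i) v is an eigenvector for lam
```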
Thus eigenvalues are roots of the polynomial det(xI − A), denoted by pA (x),
called the characteristic polynomial of A.

Exercise: Show that pA(x) = det(xI − A) is a monic polynomial of degree n, i.e.,
the coefficient of the term x^n is 1. (Hint: use induction on n, the order of the
square matrix A.)

Hence A can have at most n distinct eigenvalues.


An eigenvalue can also be zero! Then Av = 0v = 0 means that the
eigenvector v is in the null space of A.

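As an added illustration, np.poly returns the coefficients of det(xI − A) from the highest power down, so for an n × n matrix the leading coefficient is 1 and the degree is n, and its roots match the eigenvalues.

```python
import numpy as np

A = np.array([[0.0,   1.0, 0.0],
              [0.0,   0.0, 1.0],
              [6.0, -11.0, 6.0]])          # a 3 x 3 example

coeffs = np.poly(A)    # ~ [1., -6., 11., -6.]: x^3 - 6x^2 + 11x - 6, monic of degree 3
print(coeffs)

print(np.sort(np.roots(coeffs)))           # ~ [1., 2., 3.]
print(np.sort(np.linalg.eigvals(A)))       # ~ [1., 2., 3.]: the eigenvalues of A
```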
Eigenvalues and Eigenvectors: Examples

1 Example 0: Let A be a diagonal matrix with scalars µ1, . . . , µn on the
diagonal. We write this as A = diag(µ1, . . . , µn). Then Aei = µi ei for
1 ≤ i ≤ n, and so e1, . . . , en are eigenvectors of A with corresponding
eigenvalues µ1, . . . , µn.

Example 1 [Reflection matrix]: Consider A = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}.

Computation of eigenvalues: the characteristic polynomial of A is

det(xI − A) = det \begin{pmatrix} x & -1 \\ -1 & x \end{pmatrix} = x^2 − 1.

Therefore λ is an eigenvalue of A ⇔ λ^2 − 1 = 0 ⇔ λ = ±1.


Computation of E1: Suppose λ = 1. Then 0 ≠ v = (x1, x2) ∈ E1 ⇔ (A − I)v = 0, which
gives −x1 + x2 = 0. Thus E1 = {(c, c) : c ∈ F}. Set v1 := (c, c) for c ≠ 0.
Computation of E−1: Suppose λ = −1. Then
0 ≠ v = (x1, x2) ∈ E−1 ⇔ (A + I)v = 0. Thus

\begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}.

This gives x1 + x2 = 0. Thus E−1 = {(c, −c) : c ∈ F}. Set v2 := (c, −c) for c ≠ 0.

Figure: Reflection across the line spanned by v1 (Av1 = v1, Av2 = −v2).
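A numerical check of this example (added for illustration): the eigenvectors that numpy returns for A are scalar multiples of v1 = (1, 1) and v2 = (1, −1).

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [1.0, 0.0]])

eigvals, eigvecs = np.linalg.eig(A)
print(eigvals)                       # 1 and -1 (order may vary)
print(eigvecs)                       # columns ~ (1, 1)/sqrt(2) and (1, -1)/sqrt(2)

v1 = np.array([1.0,  1.0])
v2 = np.array([1.0, -1.0])
print(np.allclose(A @ v1, v1))       # A v1 = v1
print(np.allclose(A @ v2, -v2))      # A v2 = -v2
```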
Existence of eigenvalues depends on F; moreover, a matrix need not have any eigenvalues at all.

Example 2 [Rotation matrix]: Let A = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}.

The characteristic polynomial of A is

det(xI − A) = det \begin{pmatrix} x & 1 \\ -1 & x \end{pmatrix} = x^2 + 1.

Thus λ is an eigenvalue of A ⇔ λ^2 + 1 = 0. Hence A has no eigenvalues if F = R.
If F = C, then A has two eigenvalues, ±i, where i^2 = −1.

Exercise 2: Let F = C. Prove that [1, −i]^t is an eigenvector corresponding to i and
[−i, 1]^t is an eigenvector corresponding to −i.

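For a numerical sanity check (an addition to the slides), numpy works over the complex numbers, where the rotation matrix does have the eigenvalues ±i, and the vectors from Exercise 2 can be verified directly.

```python
import numpy as np

A = np.array([[0.0, -1.0],
              [1.0,  0.0]])

print(np.linalg.eigvals(A))           # [0.+1.j  0.-1.j], i.e. +i and -i

v = np.array([1.0, -1.0j])            # [1, -i]^t
w = np.array([-1.0j, 1.0])            # [-i, 1]^t
print(np.allclose(A @ v, 1j * v))     # True: A v = i v
print(np.allclose(A @ w, -1j * w))    # True: A w = -i w
```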
Eigenvalues and Eigenvectors of Linear Operators

1 We can define eigenvalues and eigenvectors for linear operators too.


2 Definition. Let V be a vector space over F and let T : V → V be a linear
operator. A scalar λ ∈ F is said to be an eigenvalue of T if there is a nonzero
vector v ∈ V such that T (v ) = λv .
3 We say that v is an eigenvector of T with eigenvalue λ.
4 Similarly,

Eλ := {v : v is an eigenvector of T corresponding to eigenvalue λ} ∪ {0}

is a subspace of V called the eigenspace of T corresponding to the eigenvalue λ.


5 Let A be an n × n matrix over F. The eigenvalues and eigenvectors of A are
the eigenvalues and the eigenvectors of the linear map TA : Fn → Fn defined
by TA (v ) = Av , v ∈ Fn .

Distinct Eigenvalues
Theorem
Let T : V → V be a linear operator. Let λ1 , . . . , λn ∈ F be distinct eigenvalues of
T and let v1 , . . . , vn be corresponding eigenvectors. Then v1 , v2 , . . . , vn are linearly
independent.

Proof:
1 Use induction on n, the case n = 1 being clear.
2 Let n > 1. Let c1, c2, . . . , cn ∈ F be such that
c1v1 + c2v2 + · · · + cnvn = 0.    (1)
3 Apply T to equation (1) to get
c1λ1v1 + c2λ2v2 + · · · + cnλnvn = 0.    (2)

4 Now, (2) − λ1 × (1) implies

c2(λ2 − λ1)v2 + · · · + cn(λn − λ1)vn = 0.
1 Note that

(T − λ1 I )vi = Tvi − λ1 vi = λi vi − λ1 vi = (λi − λ1 )vi

for i = 2, . . . , n.
2 Hence v2, . . . , vn are eigenvectors of T − λ1I with eigenvalues λ2 − λ1, . . . , λn − λ1.
3 Since λ1, λ2, . . . , λn are distinct, λ2 − λ1, . . . , λn − λ1 are distinct and nonzero. By the
induction hypothesis, v2, . . . , vn are linearly independent, so ci(λi − λ1) = 0 for
i = 2, . . . , n, and hence c2 = · · · = cn = 0. Substituting these values in (1) gives
c1v1 = 0, hence c1 = 0 too.

Corollary: Let A ∈ Mm×m(F). Let λ1, . . . , λn ∈ F be distinct eigenvalues of A and
let v1, . . . , vn be corresponding eigenvectors. Then v1, v2, . . . , vn are linearly
independent.

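To illustrate the corollary numerically (an addition to the slides), the sketch below takes a matrix with distinct eigenvalues and checks that the matrix whose columns are the corresponding eigenvectors has full rank, i.e., that the eigenvectors are linearly independent.

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [0.0, 0.0, 5.0]])       # upper triangular: distinct eigenvalues 2, 3, 5

eigvals, V = np.linalg.eig(A)         # columns of V are eigenvectors
print(eigvals)                        # 2, 3, 5 (order may vary)

# v1, v2, v3 linearly independent  <=>  V has rank 3  <=>  det(V) != 0
print(np.linalg.matrix_rank(V))               # 3
print(not np.isclose(np.linalg.det(V), 0.0))  # True
```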
