

Chapter 10

Eigenvectors and Eigenvalues

10.1 Definitions
Let $A$ be an $n \times n$ matrix. An eigenvector of $A$ is a vector $v \in \mathbb{R}^n$, with $v \neq 0_n$, such that $Av = \lambda v$ for some scalar $\lambda \in \mathbb{R}$ (which might be $0$). The scalar $\lambda$ is called the eigenvalue of $A$ associated to the eigenvector $v$.

Example. Let $A = \begin{pmatrix} 1 & 2 \\ 4 & 3 \end{pmatrix}$. The vector $v = \begin{pmatrix} 1 \\ 2 \end{pmatrix}$ is an eigenvector of $A$, with corresponding eigenvalue $5$, since
\[
\begin{pmatrix} 1 & 2 \\ 4 & 3 \end{pmatrix} \begin{pmatrix} 1 \\ 2 \end{pmatrix} = \begin{pmatrix} 5 \\ 10 \end{pmatrix} = 5 \begin{pmatrix} 1 \\ 2 \end{pmatrix}.
\]
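If you want to check such a calculation numerically, the short sketch below (my own illustration using NumPy, not part of the original notes) verifies that $Av = 5v$ and lists all eigenvalues of $A$.

```python
import numpy as np

A = np.array([[1, 2],
              [4, 3]])
v = np.array([1, 2])

# Verify the defining property A v = lambda v with lambda = 5.
print(A @ v)                 # [ 5 10], which is 5 * v

# eigvals returns all eigenvalues: -1 and 5 (in some order).
print(np.linalg.eigvals(A))
```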

Geometrically, it is clear that the eigenvectors of the linear transformation $t_A : x \mapsto Ax$ are the position vectors of points on fixed lines through the origin (except for the origin itself), and the eigenvalues are the corresponding stretch factors, at least in the case of eigenvalues $\lambda \neq 0$.

10.2 Eigenvectors corresponding to the eigenvalue 0


From the definitions, saying that $0$ is an eigenvalue of $A$ means the same as saying that there exists a nonzero vector $v$ such that $Av = 0_n$. But this implies that $A$ is not invertible (since if $A$ is invertible then $Ax = 0_n$ has the unique solution $x = A^{-1} 0_n = 0_n$). In fact the converse is also true, namely that if $A$ is not invertible then $Ax = 0_n$ has a nonzero solution $x$ (we are not going to prove this here, but we saw it was true in the case $n = 2$, when we examined singular $2 \times 2$ matrices). Thus we have the

Useful Fact. An $n \times n$ matrix $A$ has $0$ as one of its eigenvalues if and only if $A$ is not invertible.
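
As a quick numerical illustration of the Useful Fact (my own example, using NumPy): a matrix whose second row is twice its first is clearly not invertible, and $0$ appears among its eigenvalues.

```python
import numpy as np

# Not invertible: the second row is twice the first.
A = np.array([[1., 2.],
              [2., 4.]])

print(np.linalg.det(A))      # 0.0 (up to rounding), so A is not invertible
print(np.linalg.eigvals(A))  # the eigenvalues are 0 and 5
```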

10.3 Finding all eigenvalues
Given a matrix, it turns out to be easiest to first calculate the eigenvalues, and then the
eigenvectors.
Theorem 10.1. Let $A$ be an $n \times n$ matrix. Then the eigenvalues of $A$ are those real numbers $\lambda$ which have the property that $\det(A - \lambda I_n) = 0$.
Proof. Strictly speaking, we have only proved the property of determinants that we need (namely that $A$ is invertible if and only if $\det A \neq 0$) for $n \leq 2$, and stated that the same holds for $n = 3$, and for general $n$ if you read the optional section (§8.4) of Chapter 8. So strictly speaking we will only have proved this theorem for $n \leq 2$, though the same argument works for general $n$, as you will see in MTH5112: Linear Algebra I:
\[
\begin{aligned}
\lambda \text{ is an eigenvalue of } A &\iff \exists\, v \neq 0_n \text{ such that } Av - \lambda v = (A - \lambda I_n)v = 0_n \\
&\iff (A - \lambda I_n) \text{ is not invertible} \\
&\iff \det(A - \lambda I_n) = 0.
\end{aligned}
\]

The expression $\det(A - xI_n)$ is called the characteristic polynomial of $A$ (it is a polynomial in $x$, of degree $n$), so another way to state the Theorem above is to say that the eigenvalues of $A$ are the zeros of the characteristic polynomial of $A$. (Recall that the zeros of a function $f(x)$ are the values of $x$ such that $f(x) = 0$.) Eigenvalues are usually counted by multiplicity, so that if the characteristic polynomial is, say, $-(x-4)^2(x+3)$ we would say that the eigenvalues are $-3, 4, 4$.
 
Example 1. Let $A = \begin{pmatrix} 1 & 2 \\ 4 & 3 \end{pmatrix}$. Then the characteristic polynomial of $A$ is
\[
f(x) = \det(A - xI_2) = \det\left( \begin{pmatrix} 1 & 2 \\ 4 & 3 \end{pmatrix} - \begin{pmatrix} x & 0 \\ 0 & x \end{pmatrix} \right) = \det \begin{pmatrix} 1-x & 2 \\ 4 & 3-x \end{pmatrix}
= (1-x)(3-x) - 8 = x^2 - 4x - 5 = (x+1)(x-5).
\]
The only zeros of $f(x)$ are $-1$ and $5$, so by Theorem 10.1 the only eigenvalues of $A$ are $-1$ and $5$.
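
As a cross-check (a sketch using NumPy): `numpy.poly` computes the coefficients of the characteristic polynomial of a square matrix in the monic convention $\det(xI_n - A)$, which has the same zeros as $\det(A - xI_n)$, and `numpy.roots` finds those zeros.

```python
import numpy as np

A = np.array([[1, 2],
              [4, 3]])

coeffs = np.poly(A)       # [ 1. -4. -5.], i.e. x^2 - 4x - 5
print(coeffs)
print(np.roots(coeffs))   # [ 5. -1.], the eigenvalues of A
```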
 
Example 2. Let $A = \begin{pmatrix} 1 & 2 & -1 \\ 0 & 1 & 4 \\ 0 & 0 & 3 \end{pmatrix}$. Then the characteristic polynomial of $A$ is
\[
f(x) = \det(A - xI_3) = \det \begin{pmatrix} 1-x & 2 & -1 \\ 0 & 1-x & 4 \\ 0 & 0 & 3-x \end{pmatrix}
= (1-x) \det \begin{pmatrix} 1-x & 4 \\ 0 & 3-x \end{pmatrix} - 0 + 0
= (1-x)\bigl((1-x)(3-x) - 0\bigr) = -(x-1)^2(x-3).
\]

The only zeros of $f(x)$ are $1$ and $3$, so by Theorem 10.1 the only distinct eigenvalues of $A$ are $1$ and $3$. In this example, the characteristic polynomial has a repeated root, and the eigenvalues of $A$ are actually $1, 1, 3$. This does have some mathematical significance. Using language that will be defined in MTH5112: Linear Algebra I, this means that the eigenvectors having eigenvalue $1$, together with the zero vector, could potentially form a 2-dimensional subspace of $\mathbb{R}^3$, though in this example they only form a 1-dimensional subspace.
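
Numerically, the dimension of this eigenspace is the nullity of $A - I_3$, which is $3$ minus the rank; a quick check (my own sketch using NumPy):

```python
import numpy as np

A = np.array([[1., 2., -1.],
              [0., 1.,  4.],
              [0., 0.,  3.]])

# The eigenspace for lambda = 1 is the null space of A - I.
M = A - np.eye(3)
print(np.linalg.matrix_rank(M))  # 2, so the eigenspace has dimension 3 - 2 = 1
```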

Remarks.

1. Let $n = 2$ or $3$, and let $A$ be an $n \times n$ matrix, with characteristic polynomial $f(x)$. It is not difficult to see that $f(x)$ is a polynomial with coefficients in $\mathbb{R}$, and that the degree of $f(x)$ is $n$ (the highest degree term in $f(x)$ comes from the product of the terms down the main diagonal of $A$, so the coefficient of $x^n$ is $\pm 1$ [$(-1)^n$ in fact]). Such an $f(x)$ has at most $n$ real zeros, and so $A$ has at most $n$ distinct real eigenvalues.

2. A polynomial of odd degree with coefficients in $\mathbb{R}$ always has at least one real zero. (This follows from the Intermediate Value Theorem [see your MTH4100: Calculus I notes], since the graph of $y = f(x)$ is either above the $x$-axis as $x$ tends to $+\infty$ and below the $x$-axis as $x$ tends to $-\infty$, or vice versa.) Hence it follows that every $3 \times 3$ (real) matrix has at least one (real) eigenvalue, and so every linear transformation of $\mathbb{R}^3$ (except for the zero transformation) fixes at least one line through the origin.

10.4 Finding eigenvectors


Let $n = 2$ or $3$, and let $A$ be an $n \times n$ matrix with an eigenvalue $\lambda$. How do we find one (or all) eigenvectors $v$ of $A$ with $Av = \lambda v$? The answer is that we solve a system of $n$ equations in $n$ unknowns. We have:
\[
Av = \lambda v \iff Av - \lambda v = 0_n \iff Av - \lambda I_n v = 0_n \iff (A - \lambda I_n)v = 0_n.
\]

If $n = 2$, to obtain the eigenvectors of $A$ corresponding to the eigenvalue $\lambda$ we solve
\[
(A - \lambda I_2) \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix} \quad \text{for } \begin{pmatrix} x \\ y \end{pmatrix} \in \mathbb{R}^2 \text{ with } \begin{pmatrix} x \\ y \end{pmatrix} \neq \begin{pmatrix} 0 \\ 0 \end{pmatrix}.
\]
If $n = 3$, to obtain the eigenvectors of $A$ corresponding to the eigenvalue $\lambda$ we solve
\[
(A - \lambda I_3) \begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix} \quad \text{for } \begin{pmatrix} x \\ y \\ z \end{pmatrix} \in \mathbb{R}^3 \text{ with } \begin{pmatrix} x \\ y \\ z \end{pmatrix} \neq \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}.
\]
Example 1. Let $A = \begin{pmatrix} 1 & 2 \\ 4 & 3 \end{pmatrix}$.

In the previous subsection we found that the eigenvalues of $A$ are $-1$ and $5$. We now determine the eigenvectors corresponding to the eigenvalue $-1$. We solve:
\[
(A - (-1)I_2) \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix},
\]
that is to say,
\[
\left( \begin{pmatrix} 1 & 2 \\ 4 & 3 \end{pmatrix} - \begin{pmatrix} -1 & 0 \\ 0 & -1 \end{pmatrix} \right) \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix},
\]
in other words,
\[
\begin{pmatrix} 2 & 2 \\ 4 & 4 \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}.
\]
This is the system of equations:
\[
\begin{aligned}
2x + 2y &= 0 \\
4x + 4y &= 0,
\end{aligned}
\]
which, when reduced to echelon form, is:
\[
\begin{aligned}
2x + 2y &= 0 \\
0 &= 0.
\end{aligned}
\]
Thus $y$ can be any real number $r$, and then $2x + 2r = 0$, so $x = -r$. Thus the set of all eigenvectors of $A$ corresponding to the eigenvalue $-1$ is
\[
\left\{ \begin{pmatrix} -r \\ r \end{pmatrix} : r \in \mathbb{R},\ r \neq 0 \right\}.
\]
One such eigenvector is $\begin{pmatrix} -1 \\ 1 \end{pmatrix}$. We can check our calculation, as shown below.
\[
A \begin{pmatrix} -1 \\ 1 \end{pmatrix} = \begin{pmatrix} 1 & 2 \\ 4 & 3 \end{pmatrix} \begin{pmatrix} -1 \\ 1 \end{pmatrix} = \begin{pmatrix} 1 \\ -1 \end{pmatrix} = (-1) \begin{pmatrix} -1 \\ 1 \end{pmatrix}.
\]
It is left as an exercise for you to compute the eigenvectors of A corresponding to the
other eigenvalue, 5.
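
For comparison, the same eigenvectors can be obtained numerically (a sketch assuming SciPy is available): they span the null space of $A - (-1)I_2 = A + I_2$.

```python
import numpy as np
from scipy.linalg import null_space

A = np.array([[1., 2.],
              [4., 3.]])

# Eigenvectors for lambda = -1 span the null space of A + I.
ns = null_space(A + np.eye(2))
v = ns[:, 0]
print(v)           # proportional to (-1, 1)
print(A @ v + v)   # [0. 0.]: confirms A v = -v
```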
Example 2. Let $A = \begin{pmatrix} 1 & 2 & -1 \\ 0 & 1 & 4 \\ 0 & 0 & 3 \end{pmatrix}$.
We have found that the characteristic polynomial of $A$ is $f(x) = -(x-1)^2(x-3)$, and thus that the eigenvalues of $A$ are $1$ and $3$ (the real zeros of $f(x)$), with $1$ being repeated. We now find all the eigenvectors of $A$ corresponding to the eigenvalue $3$. We solve:
\[
(A - 3I_3) \begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix},
\]
that is to say,
\[
\begin{pmatrix} 1-3 & 2 & -1 \\ 0 & 1-3 & 4 \\ 0 & 0 & 3-3 \end{pmatrix} \begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix},
\]
which, in other words, is
\[
\begin{pmatrix} -2 & 2 & -1 \\ 0 & -2 & 4 \\ 0 & 0 & 0 \end{pmatrix} \begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}.
\]
This is the system of equations:
\[
\begin{aligned}
-2x + 2y - z &= 0 \\
-2y + 4z &= 0 \\
0 &= 0.
\end{aligned}
\]
These equations are already in echelon form, so we can solve them by setting $z$ to be $r$ (representing any real number) and deducing by back substitution that $y = 2r$ (from the second equation) and then that $x = \frac{3}{2}r$ from the first equation. Thus the set of all eigenvectors of $A$ corresponding to the eigenvalue $3$ is
\[
\left\{ \begin{pmatrix} \frac{3}{2}r \\ 2r \\ r \end{pmatrix} : r \in \mathbb{R},\ r \neq 0 \right\}.
\]
One such eigenvector is $b := \begin{pmatrix} 3 \\ 4 \\ 2 \end{pmatrix}$. (Check this!)
Thus the line through the origin with vector equation $\mathbf{r} = \mu b$ ($\mu \in \mathbb{R}$) is fixed by the linear transformation $t_A$ represented by $A$, and a point on this line with position vector $v$ is mapped by $t_A$ to the point that has position vector $3v$.
It is left as an exercise for you to compute the eigenvectors of A corresponding to
the other eigenvalue, 1.
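
Again this can be checked numerically (a sketch assuming SciPy): the eigenvectors for $\lambda = 3$ span the null space of $A - 3I_3$.

```python
import numpy as np
from scipy.linalg import null_space

A = np.array([[1., 2., -1.],
              [0., 1.,  4.],
              [0., 0.,  3.]])

ns = null_space(A - 3 * np.eye(3))
# Rescale so that the last entry is 1; this recovers (3/2, 2, 1).
print(ns[:, 0] / ns[2, 0])
```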

10.5 Eigenvectors and eigenvalues for linear transformations of the plane

We revisit rotations and reflexions, axes stretches, dilations and shears in $\mathbb{R}^2$, to see how eigenvectors and eigenvalues are involved.

Rotations. No real eigenvalues, as no fixed lines, except for the case of a rotation through $m\pi$ for some $m \in \mathbb{Z}$, when every nonzero vector is an eigenvector with eigenvalue $(-1)^m$, and the rotation matrix is $(-1)^m I_2$. (In general, an anticlockwise rotation through $\theta$ has complex eigenvalues $e^{i\theta}$ and $e^{-i\theta}$.)
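
A numerical check of the parenthetical remark (my own sketch): the eigenvalues of a rotation matrix really are $e^{\pm i\theta}$.

```python
import numpy as np

theta = np.pi / 3
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

print(np.linalg.eigvals(R))   # approximately 0.5 + 0.866j and 0.5 - 0.866j
print(np.exp(1j * theta))     # e^{i theta} = 0.5 + 0.866j, for comparison
```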

Reflexions. Here every (nonzero) vector in the direction $u$ of the mirror is an eigenvector with eigenvalue $+1$, and every (nonzero) vector orthogonal to $u$ is an eigenvector with eigenvalue $-1$.
 
Axes stretches. The matrix $\begin{pmatrix} a & 0 \\ 0 & d \end{pmatrix}$ (with $a$ and $d$ nonzero and $a \neq d$) has the position vector of every point on the $x$-axis (except the origin) as an eigenvector with eigenvalue $a$, and the position vector of every point on the $y$-axis (except the origin) as an eigenvector with eigenvalue $d$.
 
Dilations. The matrix $\begin{pmatrix} a & 0 \\ 0 & a \end{pmatrix}$ (with $a > 0$) has $a$ as an eigenvalue and every nonzero vector as an eigenvector corresponding to $a$.
 
Shears. Consider for example $\begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}$. By an easy calculation the only eigenvalue is $+1$, and the eigenvectors corresponding to this eigenvalue are the vectors
\[
\left\{ \begin{pmatrix} t \\ 0 \end{pmatrix} : t \in \mathbb{R},\ t \neq 0 \right\},
\]
that is, the position vectors of all points on the $x$-axis other than the origin. In general, all shears (in 2 dimensions) have characteristic polynomial $x^2 - 2x + 1 = (x-1)^2$, and thus eigenvalues $1, 1$. However, all eigenvectors of a shear lie along a single line (origin excluded).
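
A quick numerical confirmation (a sketch): the shear has the repeated eigenvalue $1$, and the eigenvectors found all point along the $x$-axis.

```python
import numpy as np

S = np.array([[1., 1.],
              [0., 1.]])

eigenvalues, eigenvectors = np.linalg.eig(S)
print(eigenvalues)    # [1. 1.], the eigenvalue 1 counted twice
print(eigenvectors)   # both columns are (numerically) parallel to (1, 0)
```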

10.6 Rotations and reflexions in $\mathbb{R}^3$


It was mentioned in an earlier remark (without proof) that every rigid motion of $\mathbb{R}^3$ which fixes the origin is represented by an orthogonal matrix, that is to say a matrix $A$ with the property that $AA^T = I_3 = A^T A$ (where $A^T$ denotes the transpose of $A$). It follows that $(\det A)^2 = 1$ and hence that $\det A = \pm 1$.
Rigid motions of $\mathbb{R}^3$ represented by matrices $A$ which have $\det A = +1$ are called orientation-preserving (they send right-handed triples of vectors to right-handed triples of vectors), and those which have $\det A = -1$ are called orientation-reversing.
As was mentioned in another earlier remark, every linear transformation of $\mathbb{R}^3$ has a fixed line (since the characteristic polynomial is a cubic and therefore has a real root). If the linear transformation is a rigid motion, the corresponding eigenvalue must be $+1$ or $-1$, since rigid motions preserve distances.
Let $A$ be a $3 \times 3$ matrix representing a rigid motion of $\mathbb{R}^3$ fixing the origin.

Case 1. $A$ has $+1$ as an eigenvalue.

Let $u \neq 0_3$ be an eigenvector with eigenvalue $+1$. Now $A$ maps the plane $\Pi$ through the origin orthogonal to $u$ to itself. If this map of $\Pi$ to itself is a rotation then $A$ represents a rotation of $\mathbb{R}^3$ around the axis which has direction $u$. If the map of $\Pi$ to itself is a reflexion, with mirror a line in $\Pi$ having direction $v$, then $A$ represents a reflexion of $\mathbb{R}^3$ with mirror the plane containing $u$ and $v$.

Case 2. $A$ has $-1$ as an eigenvalue.

Let $u \neq 0_3$ be an eigenvector with eigenvalue $-1$. Once again, $A$ maps the plane $\Pi$ through the origin orthogonal to $u$ to itself. If this map of $\Pi$ to itself is a reflexion, then $\Pi$ contains an eigenvector $v$ of $A$ with eigenvalue $+1$, corresponding to the direction of the mirror in $\Pi$, and an eigenvector $w$ orthogonal to $v$ with eigenvalue $-1$. Thus $u, v, w$ are an orthogonal set of vectors and $A$ sends $u \mapsto -u$, $v \mapsto v$, $w \mapsto -w$, and so $A$ is a rotation about the direction of $v$ through an angle $\pi$. If, on the other hand, the map of $\Pi$ to itself is a rotation, there is not much more to say except that $A$ represents the composition of a reflexion in the plane $\Pi$ followed by a rotation about the axis orthogonal to $\Pi$.
Remark. It follows from the analysis above that every orientation-preserving rigid motion of $\mathbb{R}^3$ which fixes the origin is a rotation about some axis, but that (unlike the case for $\mathbb{R}^2$) an orientation-reversing rigid motion of $\mathbb{R}^3$ which fixes the origin is not necessarily a reflexion.
We finish by exhibiting examples of $3 \times 3$ matrices representing rotations and reflexions.
Example 1. The $3 \times 3$ matrix which represents a rotation through angle $\theta$ around the $z$-axis is
\[
\begin{pmatrix} \cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{pmatrix},
\]
and similarly the $3 \times 3$ matrices which represent rotations through angle $\theta$ around the $x$- and $y$-axes are respectively
\[
\begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos\theta & -\sin\theta \\ 0 & \sin\theta & \cos\theta \end{pmatrix} \quad \text{and} \quad \begin{pmatrix} \cos\theta & 0 & \sin\theta \\ 0 & 1 & 0 \\ -\sin\theta & 0 & \cos\theta \end{pmatrix}.
\]
In these examples we look at the origin from a point somewhere along the positive portion of the relevant axis. These are rotations through an angle of $\theta$ anticlockwise from this point of view. (If we look at the origin from the opposite direction then these rotations appear as anticlockwise rotations through $-\theta$.)
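
The claimed properties of these matrices are easy to verify numerically; below is a sketch (the helper name `rotation_z` is my own) checking orthogonality, determinant $+1$, and that the axis direction is an eigenvector with eigenvalue $+1$.

```python
import numpy as np

def rotation_z(theta):
    """Rotation through angle theta about the z-axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.],
                     [s,  c, 0.],
                     [0., 0., 1.]])

R = rotation_z(np.pi / 4)

print(np.allclose(R @ R.T, np.eye(3)))  # True: R is orthogonal
print(np.linalg.det(R))                 # 1.0: orientation-preserving
print(R @ np.array([0., 0., 1.]))       # [0. 0. 1.]: the axis is fixed
```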
Example 2. The reflexion $s_n = s_\Pi$ in the plane $\Pi$ through the origin with normal vector $n \neq 0_3$ sends each vector $x$ to:
\[
s_\Pi(x) = x - 2\,\frac{x \cdot n}{|n|^2}\, n.
\]
To see this, draw a picture like the one we used in our calculation of the distance from the point $X$ with position vector $x$ to the plane $\Pi$ (the point with position vector $x - \frac{x \cdot n}{|n|^2} n$ is the point of $\Pi$ which is closest to $X$).

If the equation of the plane $\Pi$ is $ax + by + cz = 0$, we can take $n = a\mathbf{i} + b\mathbf{j} + c\mathbf{k}$, and so:
\[
s_\Pi(x\mathbf{i} + y\mathbf{j} + z\mathbf{k}) = x\mathbf{i} + y\mathbf{j} + z\mathbf{k} - \frac{2(ax + by + cz)}{a^2 + b^2 + c^2}(a\mathbf{i} + b\mathbf{j} + c\mathbf{k}).
\]
Thus in particular
\[
s_\Pi(\mathbf{i}) = \mathbf{i} - \frac{2a}{a^2 + b^2 + c^2}(a\mathbf{i} + b\mathbf{j} + c\mathbf{k}) = \frac{1}{a^2 + b^2 + c^2}\bigl((b^2 + c^2 - a^2)\mathbf{i} - 2ab\,\mathbf{j} - 2ac\,\mathbf{k}\bigr),
\]
and so the first column of the matrix $S_\Pi$ representing $s_\Pi$ is
\[
\frac{1}{a^2 + b^2 + c^2} \begin{pmatrix} b^2 + c^2 - a^2 \\ -2ab \\ -2ca \end{pmatrix}.
\]

Similarly, one can compute the second and third columns, to give the following matrix, which you are not expected to memorise for the examination for this module!
\[
S_\Pi = \frac{1}{a^2 + b^2 + c^2} \begin{pmatrix} b^2 + c^2 - a^2 & -2ab & -2ac \\ -2ab & c^2 + a^2 - b^2 & -2bc \\ -2ac & -2bc & a^2 + b^2 - c^2 \end{pmatrix}.
\]

Note that
\[
S_\Pi(n) = S_\Pi \begin{pmatrix} a \\ b \\ c \end{pmatrix} = \begin{pmatrix} -a \\ -b \\ -c \end{pmatrix} = -n,
\]
which is exactly what one would expect since $n$ is orthogonal to the plane $\Pi$. Now each vector $x$ in $\mathbb{R}^3$ can be written uniquely as $x = u + v$ where $u$ and $n$ are collinear and $v$ is parallel to (and thus in) $\Pi$, so that $v$ is orthogonal to $n$. (We have $u = \frac{x \cdot n}{|n|^2} n = (x \cdot \hat{n})\hat{n}$ and $v = x - (x \cdot \hat{n})\hat{n}$, where $\hat{n}$ is the unit vector in the direction of $n$. You should prove the assertions of the previous sentence, and also try to prove that $u$ and $v$ are the unique vectors with this property.) Then $s_\Pi(u) = -u$ and $s_\Pi(v) = v$, so that $u$ is an eigenvector of $s_\Pi$ with corresponding eigenvalue $-1$ (unless $u = 0$), and $v$ is an eigenvector of $s_\Pi$ with corresponding eigenvalue $1$ (unless $v = 0$). Moreover, $s_\Pi(x) = s_\Pi(u + v) = -u + v$, so that (exercise for you) $x$ is an eigenvector of $s_\Pi$ when exactly one of the conditions $u = 0$ and $v = 0$ holds.
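
These facts about $S_\Pi$ can all be verified numerically; the sketch below (the helper name `reflection_matrix` is my own) builds $S_\Pi = I_3 - \frac{2}{|n|^2} n n^T$, which is the matrix form of the formula for $s_\Pi$, and checks that $S_\Pi n = -n$, $\det S_\Pi = -1$, and the eigenvalues are $-1, 1, 1$.

```python
import numpy as np

def reflection_matrix(n):
    """Matrix of the reflexion in the plane through the origin with normal n."""
    n = np.asarray(n, dtype=float)
    # s(x) = x - 2 (x.n / |n|^2) n  is the linear map  I - (2 / |n|^2) n n^T.
    return np.eye(3) - 2.0 * np.outer(n, n) / np.dot(n, n)

n = np.array([1., 2., 2.])            # normal to the plane x + 2y + 2z = 0
S = reflection_matrix(n)

print(S @ n)                          # [-1. -2. -2.], i.e. -n
print(np.linalg.det(S))               # -1.0: orientation-reversing
print(np.sort(np.linalg.eigvals(S)))  # [-1.  1.  1.]
```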
All of what we have done above generalises readily to reflexions in a hyperplane $\Pi$ of $\mathbb{R}^n$ orthogonal to the vector $n \neq 0_n$. I leave the reader to work out the necessary details.

Exercise. Check that the formula for $S_\Pi$ gives the right answer when $\Pi$ is the $(x, y)$-plane (the plane defined by the equation $z = 0$). Also check that the transformation $s_\Pi$ defined at the start of Example 2 is indeed a linear map.
