Eigenvectors and Eigenvalues

10.1 Definitions
Let A be an n × n matrix. An eigenvector of A is a vector v ∈ R^n, with v ≠ 0_n, such
that Av = λv for some scalar λ ∈ R (which might be 0). The scalar λ is called the
eigenvalue of A associated to the eigenvector v.
Example. Let $A = \begin{pmatrix} 1 & 2 \\ 4 & 3 \end{pmatrix}$. The vector $v = \begin{pmatrix} 1 \\ 2 \end{pmatrix}$ is an eigenvector of A, with corresponding eigenvalue 5, since
$$\begin{pmatrix} 1 & 2 \\ 4 & 3 \end{pmatrix}\begin{pmatrix} 1 \\ 2 \end{pmatrix} = \begin{pmatrix} 5 \\ 10 \end{pmatrix} = 5\begin{pmatrix} 1 \\ 2 \end{pmatrix}.$$
Useful Fact. An n × n matrix A has 0 as one of its eigenvalues if and only if A is not
invertible.
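A short justification (added here for completeness; it uses only the definition above and the link between invertibility and homogeneous systems):

0 is an eigenvalue of A ⇐⇒ ∃ v ≠ 0_n such that Av = 0 · v = 0_n
⇐⇒ the homogeneous system Av = 0_n has a nonzero solution
⇐⇒ A is not invertible.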
10.3 Finding all eigenvalues
Given a matrix, it turns out to be easiest to first calculate the eigenvalues, and then the
eigenvectors.
Theorem 10.1. Let A be an n × n matrix. Then the eigenvalues of A are those real
numbers λ which have the property that det(A − λI_n) = 0.
Proof. Strictly speaking, we have only proved the properties of determinants we need
(namely that A is not invertible if and only if det A = 0) for n ≤ 2, and stated that the same
holds for n = 3, and for general n if you read the optional section (§8.4) of Chapter 8.
So strictly speaking we will only have proved this theorem for n ≤ 2, though the same
argument works for general n, as you will see in MTH5112: Linear Algebra I:

λ is an eigenvalue of A ⇐⇒ ∃ v ≠ 0_n such that Av − λv = (A − λI_n)v = 0_n
⇐⇒ (A − λI_n) is not invertible
⇐⇒ det(A − λI_n) = 0.
Example 1. Let $A = \begin{pmatrix} 1 & 2 \\ 4 & 3 \end{pmatrix}$. The characteristic polynomial of A is
$$f(x) = \det(A - xI_2) = \begin{vmatrix} 1-x & 2 \\ 4 & 3-x \end{vmatrix} = (1-x)(3-x) - 8 = x^2 - 4x - 5 = (x+1)(x-5).$$
The only zeros of f(x) are −1 and 5, so by Theorem 10.1 the only eigenvalues of A are
−1 and 5.
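As a quick numerical sanity check (an addition here, not part of the notes; it assumes the NumPy library is available):

    import numpy as np

    A = np.array([[1, 2],
                  [4, 3]])
    # eigvals computes the eigenvalues numerically (up to floating-point error)
    print(np.linalg.eigvals(A))   # approximately [-1.  5.]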
Example 2. Let $A = \begin{pmatrix} 1 & 2 & -1 \\ 0 & 1 & 4 \\ 0 & 0 & 3 \end{pmatrix}$. Then the characteristic polynomial of A is
$$f(x) = \det(A - xI_3) = \begin{vmatrix} 1-x & 2 & -1 \\ 0 & 1-x & 4 \\ 0 & 0 & 3-x \end{vmatrix} = (1-x)\begin{vmatrix} 1-x & 4 \\ 0 & 3-x \end{vmatrix} - 0 + 0$$
$$= (1-x)\bigl((1-x)(3-x) - 0\bigr) = -(x-1)^2(x-3).$$
The only zeros of f (x) are 1 and 3, so by Theorem 10.1 the only distinct eigenvalues of
A are 1 and 3. In this example, the characteristic polynomial has a repeated root, and
the eigenvalues of A are actually 1, 1, 3. This does have some mathematical significance.
Using language that will be defined in MTH5112: Linear Algebra I, this means that the
eigenvectors having eigenvalue 1, together with the zero vector, could potentially form
a 2-dimensional subspace of R3 , though in this example they only form a 1-dimensional
subspace.
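As a supplementary check (not part of the notes; it assumes the SymPy library is available), the eigenvalues of this matrix can be computed symbolically, each with its algebraic multiplicity:

    from sympy import Matrix

    A = Matrix([[1, 2, -1],
                [0, 1, 4],
                [0, 0, 3]])
    # eigenvals() returns each eigenvalue together with its algebraic multiplicity,
    # i.e. how many times it occurs as a root of the characteristic polynomial
    print(A.eigenvals())   # expected: {1: 2, 3: 1}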
Remarks.
2. A polynomial of odd degree with coefficients in R always has at least one real
zero. (This follows from the Intermediate Value Theorem [see your MTH4100:
Calculus I notes], since the graph of y = f (x) is either above the x-axis as x tends
to +∞ and below the x-axis as x tends to −∞, or vice versa.) Hence it follows
that every 3 × 3 (real) matrix has at least one (real) eigenvalue, and so every
linear transformation of R^3 (except for the zero transformation) fixes at least one line
through the origin.
Once an eigenvalue λ of A is known, the corresponding eigenvectors are found by noting that

Av = λv ⇐⇒ Av − λv = 0_n
⇐⇒ Av − λI_n v = 0_n
⇐⇒ (A − λI_n)v = 0_n.
If n = 2, to obtain the eigenvectors of A corresponding to the eigenvalue λ we solve
$$(A - \lambda I_2)\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix} \quad \text{for } \begin{pmatrix} x \\ y \end{pmatrix} \in \mathbb{R}^2 \text{ with } \begin{pmatrix} x \\ y \end{pmatrix} \neq \begin{pmatrix} 0 \\ 0 \end{pmatrix}.$$
If n = 3, to obtain the eigenvectors of A corresponding to the eigenvalue λ we solve
$$(A - \lambda I_3)\begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix} \quad \text{for } \begin{pmatrix} x \\ y \\ z \end{pmatrix} \in \mathbb{R}^3 \text{ with } \begin{pmatrix} x \\ y \\ z \end{pmatrix} \neq \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}.$$
Example 1. Let
$$A = \begin{pmatrix} 1 & 2 \\ 4 & 3 \end{pmatrix}.$$
In the previous subsection we found that the eigenvalues of A are −1 and 5. We now
determine the eigenvectors corresponding to eigenvalue −1. We solve:
$$(A - (-1)I_2)\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix},$$
that is to say,
$$\left(\begin{pmatrix} 1 & 2 \\ 4 & 3 \end{pmatrix} - \begin{pmatrix} -1 & 0 \\ 0 & -1 \end{pmatrix}\right)\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix},$$
in other words,
$$\begin{pmatrix} 2 & 2 \\ 4 & 4 \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}.$$
This is the system of equations:
$$\begin{aligned} 2x + 2y &= 0 \\ 4x + 4y &= 0, \end{aligned}$$
which, when reduced to echelon form, is:
$$\begin{aligned} 2x + 2y &= 0 \\ 0 &= 0. \end{aligned}$$
Thus y can be any real number r, and then 2x + 2r = 0, so x = −r. Thus the set of all
eigenvectors of A corresponding to the eigenvalue −1 is
$$\left\{ \begin{pmatrix} -r \\ r \end{pmatrix} : r \in \mathbb{R},\ r \neq 0 \right\}.$$
One such eigenvector is $\begin{pmatrix} -1 \\ 1 \end{pmatrix}$. We can check our calculation, as shown below.
$$A\begin{pmatrix} -1 \\ 1 \end{pmatrix} = \begin{pmatrix} 1 & 2 \\ 4 & 3 \end{pmatrix}\begin{pmatrix} -1 \\ 1 \end{pmatrix} = \begin{pmatrix} 1 \\ -1 \end{pmatrix} = (-1)\begin{pmatrix} -1 \\ 1 \end{pmatrix}.$$
It is left as an exercise for you to compute the eigenvectors of A corresponding to the
other eigenvalue, 5.
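If you would like a numerical check of your answer to this exercise (an addition here, not part of the notes; it assumes NumPy is available), np.linalg.eig returns the eigenvalues together with a matrix whose columns are corresponding eigenvectors:

    import numpy as np

    A = np.array([[1, 2],
                  [4, 3]])
    # eig returns the eigenvalues and a matrix whose columns are corresponding
    # eigenvectors, normalised to length 1, so each column is a scalar multiple
    # of an eigenvector found by hand
    eigenvalues, eigenvectors = np.linalg.eig(A)
    print(eigenvalues)    # approximately [-1.  5.]
    print(eigenvectors)   # each column is an eigenvector for the eigenvalue in the same position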
Example 2. Let
$$A = \begin{pmatrix} 1 & 2 & -1 \\ 0 & 1 & 4 \\ 0 & 0 & 3 \end{pmatrix}.$$
We have found that the characteristic polynomial of A is f(x) = −(x − 1)^2(x − 3), and
thus that the eigenvalues of A are 1 and 3 (the real zeros of f(x)), with 1 being repeated.
We now find all the eigenvectors of A corresponding to the eigenvalue 3. We solve:
$$(A - 3I_3)\begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix},$$
that is to say,
$$\begin{pmatrix} 1-3 & 2 & -1 \\ 0 & 1-3 & 4 \\ 0 & 0 & 3-3 \end{pmatrix}\begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix},$$
which, in other words, is
$$\begin{pmatrix} -2 & 2 & -1 \\ 0 & -2 & 4 \\ 0 & 0 & 0 \end{pmatrix}\begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}.$$
This is the system of equations:
$$\begin{aligned} -2x + 2y - z &= 0 \\ -2y + 4z &= 0 \\ 0 &= 0. \end{aligned}$$
These equations are already in echelon form, so we can solve them by setting z to be
r (representing any real number) and deducing by back substitution that y = 2r (from
the second equation) and then that x = (3/2)r from the first equation. Thus the set of all
eigenvectors of A corresponding to the eigenvalue 3 is
$$\left\{ \begin{pmatrix} \tfrac{3}{2}r \\ 2r \\ r \end{pmatrix} : r \in \mathbb{R},\ r \neq 0 \right\}.$$
One such eigenvector is $b := \begin{pmatrix} 3 \\ 4 \\ 2 \end{pmatrix}$. (Check this!)
Thus the line through the origin with vector equation r = µb (µ ∈ R) is fixed by the
linear transformation tA represented by A, and a point on this line with position vector
v is mapped by tA to the point that has position vector 3v.
It is left as an exercise for you to compute the eigenvectors of A corresponding to
the other eigenvalue, 1.
Rotations. A rotation of R^2 about the origin has no real eigenvalues, since it fixes no
lines through the origin, except in the case of a rotation through mπ for some m ∈ Z,
when every nonzero vector is an eigenvector with eigenvalue (−1)^m, and the rotation
matrix is (−1)^m I_2. (In general, an anticlockwise rotation through θ has complex
eigenvalues e^{iθ} and e^{−iθ}.)
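To see where these complex eigenvalues come from (a supplementary computation, using the standard matrix of an anticlockwise rotation through θ about the origin):
$$\det\!\left(\begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix} - xI_2\right) = (\cos\theta - x)^2 + \sin^2\theta = x^2 - 2x\cos\theta + 1,$$
whose roots are x = cos θ ± i sin θ = e^{±iθ}; these are real only when sin θ = 0, that is, when θ is a multiple of π.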
Reflexions. Here every (nonzero) vector in the direction u of the mirror is an eigen-
vector with eigenvalue +1, and every (nonzero) vector orthogonal to u is an eigenvector
with eigenvalue −1.
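For instance (a supplementary example), the reflexion of R^2 in the line y = x has matrix $\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$, and
$$\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}\begin{pmatrix} 1 \\ 1 \end{pmatrix} = \begin{pmatrix} 1 \\ 1 \end{pmatrix}, \qquad \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}\begin{pmatrix} 1 \\ -1 \end{pmatrix} = \begin{pmatrix} -1 \\ 1 \end{pmatrix} = -\begin{pmatrix} 1 \\ -1 \end{pmatrix},$$
so the vector along the mirror has eigenvalue +1 and the vector orthogonal to the mirror has eigenvalue −1, as claimed.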
Axes stretches. The matrix $\begin{pmatrix} a & 0 \\ 0 & d \end{pmatrix}$ (with a and d nonzero and a ≠ d) has the
position vector of every point on the x-axis (except the origin) as an eigenvector with
eigenvalue a, and the position vector of every point on the y-axis (except the origin) as
an eigenvector with eigenvalue d.
Dilations. The matrix $\begin{pmatrix} a & 0 \\ 0 & a \end{pmatrix}$ (with a > 0) has a as an eigenvalue and every
nonzero vector as an eigenvector corresponding to a.
Shears. Consider for example $\begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}$. By an easy calculation the only eigenvalue
is +1, and the eigenvectors corresponding to this eigenvalue are the vectors
$$\left\{ \begin{pmatrix} t \\ 0 \end{pmatrix} : t \in \mathbb{R},\ t \neq 0 \right\},$$
that is, the position vectors of all points on the x-axis other than the origin. In general,
all shears (in 2 dimensions) have characteristic polynomial x^2 − 2x + 1 = (x − 1)^2, and
thus eigenvalues 1, 1. However, all eigenvectors of a shear lie along a single line (origin
excluded).
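A brief justification of the general claim (a supplementary remark): a shear along the x-axis has matrix $\begin{pmatrix} 1 & k \\ 0 & 1 \end{pmatrix}$ for some k ≠ 0, and
$$\det\!\left(\begin{pmatrix} 1 & k \\ 0 & 1 \end{pmatrix} - xI_2\right) = (1 - x)^2 = x^2 - 2x + 1;$$
a shear along any other line through the origin is represented by the same matrix with respect to a suitably chosen basis, and changing basis does not change the characteristic polynomial (a fact you will meet in MTH5112: Linear Algebra I).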
reflexion, with mirror a line in Π having direction v, then A represents a reflexion of R3
with mirror the plane containing u and v.
If the equation of the plane Π is ax + by + cz = 0, we can take n = ai + bj + ck, and so:
$$s_\Pi(xi + yj + zk) = xi + yj + zk - 2\,\frac{ax + by + cz}{a^2 + b^2 + c^2}\,(ai + bj + ck).$$
Thus in particular
$$s_\Pi(i) = i - 2\,\frac{a}{a^2 + b^2 + c^2}\,(ai + bj + ck) = \frac{1}{a^2 + b^2 + c^2}\bigl((b^2 + c^2 - a^2)i - 2abj - 2ack\bigr).$$
Similarly, one can compute the second and third columns, to give the following matrix,
which you are not expected to memorise for the examination for this module!
$$S_\Pi = \frac{1}{a^2 + b^2 + c^2}\begin{pmatrix} b^2 + c^2 - a^2 & -2ab & -2ac \\ -2ab & c^2 + a^2 - b^2 & -2bc \\ -2ac & -2bc & a^2 + b^2 - c^2 \end{pmatrix}.$$
Note that
$$S_\Pi\, n = S_\Pi \begin{pmatrix} a \\ b \\ c \end{pmatrix} = \begin{pmatrix} -a \\ -b \\ -c \end{pmatrix} = -n,$$
which is exactly what one would expect since n is orthogonal to the plane Π. Now each
vector x in R3 can be written uniquely as x = u + v where u and n are collinear and v is
parallel to (and thus in) Π, so that v is orthogonal to n. (We have u = ((x·n)/|n|²) n = (x·n̂)n̂
and v = x − (x·n̂)n̂, where n̂ is the unit vector in the direction of n. You should prove
the assertions of the previous sentence, and also try to prove that u and v are the unique
vectors with this property.) Then sΠ (u) = −u and sΠ (v) = v, so that u is an eigenvector
of sΠ with corresponding eigenvalue −1 (unless u = 0), and v is an eigenvector of sΠ
with corresponding eigenvalue 1 (unless v = 0). Moreover, sΠ (x) = sΠ (u + v) = −u + v,
so that (exercise for you) x is an eigenvector of sΠ when exactly one of the conditions
u = 0 and v = 0 holds.
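The following NumPy sketch (an illustration added here, not part of the notes; the coefficients a, b, c below are arbitrary sample values) builds the matrix S_Π from the formula above and checks numerically that it sends n to −n and fixes a vector lying in Π:

    import numpy as np

    a, b, c = 1.0, 2.0, -2.0          # sample coefficients of the plane ax + by + cz = 0
    D = a**2 + b**2 + c**2
    S = (1.0 / D) * np.array([
        [b**2 + c**2 - a**2, -2*a*b,             -2*a*c],
        [-2*a*b,             c**2 + a**2 - b**2, -2*b*c],
        [-2*a*c,             -2*b*c,             a**2 + b**2 - c**2],
    ])

    n = np.array([a, b, c])           # normal vector to the plane
    v = np.array([2.0, -1.0, 0.0])    # a vector in the plane: a*2 + b*(-1) + c*0 = 0
    print(S @ n)   # approximately -n, i.e. n is an eigenvector with eigenvalue -1
    print(S @ v)   # approximately  v, i.e. v is an eigenvector with eigenvalue +1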
All of what we have done above generalises readily to reflexions in a hyperplane Π
of R^n orthogonal to the vector n ≠ 0_n. I leave the reader to work out the necessary
details.
Exercise. Check that the formula for SΠ gives the right answer when Π is the (x, y)
plane (the plane defined by the equation z = 0). Also check that the transformation sΠ
defined at the start of Example 2 is indeed a linear map.