Chapter 8 - Further Matrix Algebra: 8.1 - Eigenvalues and Eigenvectors
A non-zero vector v satisfying

Av = λv

for some scalar λ is called an eigenvector of A. The scalar λ is the corresponding eigenvalue.
To find the eigenvalues of a square matrix A it suffices to consider the characteristic polynomial,

det(A − λI).

The roots of this polynomial are the eigenvalues of A. Note that even if the matrix consists of real numbers, its eigenvalues may be complex numbers.
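As a quick numerical illustration (a minimal sketch, not from the notes; the matrix is a made-up example), numpy can produce the coefficients of the characteristic polynomial, and a real matrix can indeed have complex eigenvalues:

```python
import numpy as np

# A real matrix with no real eigenvalues (a made-up example):
# its characteristic polynomial is det(A - lambda*I) = lambda^2 + 1.
A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])

print(np.poly(A))             # [1. 0. 1.] -- coefficients of lambda^2 + 1
print(np.linalg.eig(A)[0])    # the complex conjugate pair +i and -i
```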
To find the eigenvectors corresponding to each eigenvalue one fixes a choice of the eigenvalue λ, then solves the linear system

(A − λI)v = 0.
There may be more than one linearly independent eigenvector for a given eigenvalue. Also, if the eigenvalue is a complex number, then the corresponding eigenvector will be complex as well.
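In practice one rarely solves (A − λI)v = 0 by hand; the following sketch (with a made-up example matrix) computes eigenpairs with numpy and then checks that each returned column really does lie in the null space of A − λI:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])                   # made-up symmetric example

lams, V = np.linalg.eig(A)                   # eigenvalues and eigenvectors together
for lam, v in zip(lams, V.T):                # each column of V is an eigenvector
    residual = (A - lam * np.eye(2)) @ v     # should be the zero vector
    print(lam, np.allclose(residual, 0.0))
```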
For the identity matrix I we have

I v = 1 ⋅ v,

hence every non-zero vector is an eigenvector with corresponding eigenvalue 1. If D is a diagonal matrix with diagonal entries d1, d2, …, dn, then each standard basis vector ei is an eigenvector with eigenvalue di.
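A quick check of the diagonal case (the diagonal entries here are chosen arbitrarily):

```python
import numpy as np

# For a diagonal matrix D, each basis vector e_i satisfies D e_i = d_i e_i.
D = np.diag([3.0, -1.0, 5.0])                # arbitrary diagonal entries
for i in range(3):
    e = np.zeros(3)
    e[i] = 1.0                               # the standard basis vector e_i
    print(np.allclose(D @ e, D[i, i] * e))   # True for every i
```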
The eigenvalues of an orthogonal matrix are in general complex numbers, but for any eigenvalue λ of an orthogonal matrix P, the magnitude |λ| is always 1. For example, if
$$P = R(\theta) = \begin{pmatrix} \cos(\theta) & \sin(\theta) \\ -\sin(\theta) & \cos(\theta) \end{pmatrix}$$

then the eigenvalues are $e^{i\theta}$ and $e^{-i\theta}$ with corresponding eigenvectors

$$\begin{pmatrix} -i \\ 1 \end{pmatrix} \quad\text{and}\quad \begin{pmatrix} i \\ 1 \end{pmatrix}.$$
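A numerical check of this example (the angle is an arbitrary test value):

```python
import numpy as np

theta = 0.7                                   # arbitrary test angle
P = np.array([[np.cos(theta),  np.sin(theta)],
              [-np.sin(theta), np.cos(theta)]])

lams = np.linalg.eig(P)[0]
print(np.allclose(np.abs(lams), 1.0))         # |lambda| = 1, as claimed

# Direct check of one eigenpair: P (-i, 1)^t = e^{i theta} (-i, 1)^t.
v = np.array([-1j, 1.0])
print(np.allclose(P @ v, np.exp(1j * theta) * v))
```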
8.3 - Change of basis
Suppose that {v1, …, vn} are column vectors of dimension n. Then we call these a basis if the matrix with columns vi is invertible. In particular the vectors e1, …, en form the canonical basis of $\mathbb{R}^n$. Any vector w can be written uniquely in terms of a basis,

w = λ1 v1 + ⋯ + λn vn.
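Finding the coefficients λi amounts to solving a linear system whose coefficient matrix has the basis vectors as columns; a small sketch with made-up vectors:

```python
import numpy as np

v1, v2 = np.array([1.0, 1.0]), np.array([1.0, -1.0])   # a made-up basis of R^2
V = np.column_stack([v1, v2])                          # invertible, so a basis

w = np.array([3.0, 1.0])
lam = np.linalg.solve(V, w)            # coefficients of w in the basis {v1, v2}
print(lam)                             # [2. 1.], i.e. w = 2 v1 + 1 v2
print(np.allclose(lam[0] * v1 + lam[1] * v2, w))
```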
We won't study bases any further in this course, but the point of them is that changing the basis changes the perspective on a problem. By changing the basis we can make the problem easier.
Suppose M can be written in the diagonal form M = Q D Q⁻¹ for some diagonal matrix D. Then the columns of the matrix Q are all eigenvectors for M. In fact the columns of Q form a basis. The corresponding eigenvalue of the i-th column of Q is the i-th value on the diagonal of D.

Conversely, to diagonalise M one finds its eigenvectors and checks whether they form a basis. If they do form a basis then write the eigenvectors as the columns of a matrix V; then M = V D V⁻¹, where D is the diagonal matrix of the corresponding eigenvalues.
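A sketch of this procedure in numpy (the matrix M is a made-up example with distinct eigenvalues, so its eigenvectors do form a basis):

```python
import numpy as np

M = np.array([[4.0, 1.0],
              [2.0, 3.0]])             # made-up example; eigenvalues 5 and 2

lams, V = np.linalg.eig(M)             # eigenvectors as the columns of V
D = np.diag(lams)

# The eigenvectors form a basis exactly when V is invertible,
# and in that case M = V D V^{-1}.
print(np.allclose(M, V @ D @ np.linalg.inv(V)))   # True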
If all the eigenvalues of a symmetric matrix A are strictly positive then we say that A is positive definite.
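This definition translates directly into a numerical test; a minimal sketch (the helper name and tolerance are my own choices, not from the notes):

```python
import numpy as np

def is_positive_definite(A, tol=1e-12):
    # eigvalsh is numpy's eigenvalue routine for symmetric matrices;
    # it returns real eigenvalues in ascending order, so it suffices
    # to check that the smallest one is strictly positive.
    return bool(np.linalg.eigvalsh(A)[0] > tol)

print(is_positive_definite(np.array([[2.0, 1.0], [1.0, 2.0]])))  # True: eigenvalues 1, 3
print(is_positive_definite(np.array([[1.0, 2.0], [2.0, 1.0]])))  # False: eigenvalues -1, 3
```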
8.6 - Diagonalisation of symmetric matrices
Theorem

If A is an n × n real symmetric matrix then there exists a choice of eigenvectors

v1, v2, …, vn

such that

$$v_i \cdot v_j = \begin{cases} 1 & \text{if } i = j, \\ 0 & \text{otherwise.} \end{cases}$$

Then A = P D Pᵗ, where

1. D is the diagonal matrix with diagonal entries di, for di the eigenvalue corresponding to vi, and
2. P is the orthogonal matrix with columns vi.
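numpy's eigh routine handles exactly this situation: for a symmetric matrix it returns real eigenvalues together with an orthonormal set of eigenvectors, so the theorem can be verified directly (a sketch with a made-up symmetric matrix):

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])        # made-up real symmetric matrix

d, P = np.linalg.eigh(A)               # eigh assumes a symmetric matrix
D = np.diag(d)

print(np.allclose(P.T @ P, np.eye(3)))   # columns orthonormal: v_i . v_j = delta_ij
print(np.allclose(A, P @ D @ P.T))       # A = P D P^t
```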
Given a quadratic form Q(x1, …, xn) = xᵗ A x for a real symmetric matrix A, we may therefore write A = P D Pᵗ for some diagonal matrix D and some orthogonal matrix P, whose columns are eigenvectors of A. Multiplying this out we find

$$Q(x_1, \dots, x_n) = d_1 (p_{11} x_1 + \cdots + p_{n1} x_n)^2 + d_2 (p_{12} x_1 + \cdots + p_{n2} x_n)^2 + \cdots + d_n (p_{1n} x_1 + \cdots + p_{nn} x_n)^2.$$
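Each bracket is a component of Pᵗx, so the expansion can be checked numerically (made-up data):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])             # made-up symmetric matrix
d, P = np.linalg.eigh(A)

x = np.array([0.3, -1.2])              # arbitrary test point
Q_direct = x @ A @ x                   # Q(x) = x^t A x
# The i-th bracket p_{1i} x_1 + ... + p_{ni} x_n is the i-th entry of P^t x,
# so the expansion above is sum_i d_i ((P^t x)_i)^2.
Q_expanded = np.sum(d * (P.T @ x) ** 2)
print(np.allclose(Q_direct, Q_expanded))   # True
```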
Consider the solution set of Q(x1, x2) = 1 in the case n = 2 (a computational sketch of this case analysis follows the list):

- if both d1, d2 > 0 then the solution set is an ellipse with axes (p11, p21) and (p12, p22) and respective radii $d_1^{-1/2}$ and $d_2^{-1/2}$,
- if d1 < 0 < d2 then the solution set is a hyperbola, with axes (p11, p21) and (p12, p22),
- if d1 = 0 < d2 then the solution set is a pair of parallel lines pointing in the direction (p11, p21),
- if both d1, d2 ≤ 0 then the solution set is empty.
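A minimal classifier implementing these cases (the function name and tolerance handling are my own; exact comparison with zero is replaced by a small threshold, since floating-point eigenvalues are rarely exactly zero):

```python
import numpy as np

def classify_conic(A, tol=1e-12):
    """Classify the solution set of x^t A x = 1 for a symmetric 2x2 matrix A."""
    d1, d2 = np.linalg.eigvalsh(A)        # eigenvalues in ascending order
    if d1 > tol:
        return "ellipse"
    if d1 < -tol and d2 > tol:
        return "hyperbola"
    if abs(d1) <= tol and d2 > tol:
        return "pair of parallel lines"
    return "empty"                        # both eigenvalues <= 0

print(classify_conic(np.array([[2.0, 0.0], [0.0, 3.0]])))    # ellipse
print(classify_conic(np.array([[1.0, 0.0], [0.0, -1.0]])))   # hyperbola
print(classify_conic(np.diag([0.0, 2.0])))                   # pair of parallel lines
```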