UNIT II: Eigen Values and Eigen Vectors
Example: Find the eigen values and eigen vectors of the following matrix
$$A = \begin{bmatrix} 1 & 0 & -1 \\ 1 & 2 & 1 \\ 2 & 2 & 3 \end{bmatrix}$$
Solution:
Step I:
$$A = \begin{bmatrix} 1 & 0 & -1 \\ 1 & 2 & 1 \\ 2 & 2 & 3 \end{bmatrix},\qquad A - \lambda I = \begin{bmatrix} 1-\lambda & 0 & -1 \\ 1 & 2-\lambda & 1 \\ 2 & 2 & 3-\lambda \end{bmatrix}$$
Step II: $|A - \lambda I| = 0$, i.e.
$$\begin{vmatrix} 1-\lambda & 0 & -1 \\ 1 & 2-\lambda & 1 \\ 2 & 2 & 3-\lambda \end{vmatrix} = 0$$
$$\lambda^3 - S_1\lambda^2 + S_2\lambda - |A| = 0$$
$$S_1 = 1 + 2 + 3 = 6$$
$$S_2 = \begin{vmatrix} 2 & 1 \\ 2 & 3 \end{vmatrix} + \begin{vmatrix} 1 & -1 \\ 2 & 3 \end{vmatrix} + \begin{vmatrix} 1 & 0 \\ 1 & 2 \end{vmatrix} = 4 + 5 + 2 = 11$$
$$|A| = 1(6 - 2) - 0 + (-1)(2 - 4) = 4 + 2 = 6$$
$$\lambda^3 - 6\lambda^2 + 11\lambda - 6 = 0$$
$\lambda = 1, 2, 3$ are the eigen values of A.
Step III: $(A - \lambda I)X = 0$, i.e.
$$\begin{bmatrix} 1-\lambda & 0 & -1 \\ 1 & 2-\lambda & 1 \\ 2 & 2 & 3-\lambda \end{bmatrix}\begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}$$
For $\lambda = 1$:
$$\begin{bmatrix} 0 & 0 & -1 \\ 1 & 1 & 1 \\ 2 & 2 & 2 \end{bmatrix}\begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix} \Rightarrow x + y = 0,\ z = 0$$
Let $y = k$, so $x = -k$, $z = 0$ and
$$X_1 = \begin{bmatrix} -k \\ k \\ 0 \end{bmatrix} = k\begin{bmatrix} -1 \\ 1 \\ 0 \end{bmatrix}$$
For $\lambda = 2$:
$$\begin{bmatrix} -1 & 0 & -1 \\ 1 & 0 & 1 \\ 2 & 2 & 1 \end{bmatrix}\begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix} \Rightarrow x + z = 0,\ 2x + 2y + z = 0$$
Let $z = 2k$, so $x = -2k$, $y = k$ and
$$X_2 = \begin{bmatrix} -2k \\ k \\ 2k \end{bmatrix} = k\begin{bmatrix} -2 \\ 1 \\ 2 \end{bmatrix}$$
For $\lambda = 3$:
$$\begin{bmatrix} -2 & 0 & -1 \\ 1 & -1 & 1 \\ 2 & 2 & 0 \end{bmatrix}\begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix} \Rightarrow 2x + z = 0,\ 2x + 2y = 0$$
Let $x = k$, so $y = -k$, $z = -2k$ and
$$X_3 = \begin{bmatrix} k \\ -k \\ -2k \end{bmatrix} = k\begin{bmatrix} 1 \\ -1 \\ -2 \end{bmatrix}$$
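As an optional numerical cross-check, here is a minimal NumPy sketch (illustrative, not part of the hand computation) that confirms the eigen values and verifies the eigen vectors found above:

```python
# Numerical check of the example above using NumPy.
import numpy as np

A = np.array([[1, 0, -1],
              [1, 2,  1],
              [2, 2,  3]])

eigvals, eigvecs = np.linalg.eig(A)
print(np.sort(eigvals.real))   # expected: [1. 2. 3.]

# Verify the hand-computed eigen vectors satisfy A x = lambda x.
for lam, x in [(1, np.array([-1, 1, 0])),
               (2, np.array([-2, 1, 2])),
               (3, np.array([1, -1, -2]))]:
    assert np.allclose(A @ x, lam * x)
```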
Example: Find the eigen values and eigen vectors of the following matrix
$$A = \begin{bmatrix} 1 & 1 & 1 \\ 0 & 2 & 1 \\ 0 & 0 & 3 \end{bmatrix}$$
Solution:-
Step I:
$$A = \begin{bmatrix} 1 & 1 & 1 \\ 0 & 2 & 1 \\ 0 & 0 & 3 \end{bmatrix}$$
The given matrix is upper triangular, so its diagonal elements are the eigen values of A.
$\therefore \lambda = 1, 2, 3$ are the eigen values of A.
To find the eigen vectors:
Step II: $(A - \lambda I)X = 0$, i.e.
$$\begin{bmatrix} 1-\lambda & 1 & 1 \\ 0 & 2-\lambda & 1 \\ 0 & 0 & 3-\lambda \end{bmatrix}\begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}$$
For $\lambda = 1$:
$$\begin{bmatrix} 0 & 1 & 1 \\ 0 & 1 & 1 \\ 0 & 0 & 2 \end{bmatrix}\begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix} \Rightarrow y + z = 0,\ z = 0$$
So $y = z = 0$ and $x$ is free. Let $x = k$, so
$$X_1 = \begin{bmatrix} k \\ 0 \\ 0 \end{bmatrix} = k\begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}$$
For $\lambda = 2$:
$$\begin{bmatrix} -1 & 1 & 1 \\ 0 & 0 & 1 \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix} \Rightarrow -x + y + z = 0,\ z = 0$$
Let $y = k$, so $x = k$, $z = 0$ and
$$X_2 = \begin{bmatrix} k \\ k \\ 0 \end{bmatrix} = k\begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix}$$
For $\lambda = 3$:
$$\begin{bmatrix} -2 & 1 & 1 \\ 0 & -1 & 1 \\ 0 & 0 & 0 \end{bmatrix}\begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix} \Rightarrow -2x + y + z = 0,\ -y + z = 0$$
Let $z = k$, so $y = k$, $x = k$ and
$$X_3 = \begin{bmatrix} k \\ k \\ k \end{bmatrix} = k\begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix}$$
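A small NumPy sketch (illustrative only) confirms the triangular-matrix shortcut used in Step I and the eigen vectors found above:

```python
# Illustrative check: for a triangular matrix the eigen values are
# exactly the diagonal entries.
import numpy as np

A = np.array([[1, 1, 1],
              [0, 2, 1],
              [0, 0, 3]])
print(np.sort(np.linalg.eigvals(A).real))   # [1. 2. 3.] (the diagonal)

# The hand-computed eigen vectors satisfy A x = lambda x.
for lam, x in [(1, np.array([1, 0, 0])),
               (2, np.array([1, 1, 0])),
               (3, np.array([1, 1, 1]))]:
    assert np.allclose(A @ x, lam * x)
```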
Example 1.16: Find the eigen values and eigen vectors of the following matrix
$$A = \begin{bmatrix} 1 & -6 & -4 \\ 0 & 4 & 2 \\ 0 & -6 & -3 \end{bmatrix}$$
Solution:-
Step I:
$$A = \begin{bmatrix} 1 & -6 & -4 \\ 0 & 4 & 2 \\ 0 & -6 & -3 \end{bmatrix},\qquad A - \lambda I = \begin{bmatrix} 1-\lambda & -6 & -4 \\ 0 & 4-\lambda & 2 \\ 0 & -6 & -3-\lambda \end{bmatrix}$$
Step II: $|A - \lambda I| = 0$, i.e.
$$\begin{vmatrix} 1-\lambda & -6 & -4 \\ 0 & 4-\lambda & 2 \\ 0 & -6 & -3-\lambda \end{vmatrix} = 0$$
$$\lambda^3 - S_1\lambda^2 + S_2\lambda - |A| = 0$$
Here $S_1 = 1 + 4 - 3 = 2$, $S_2 = 0 - 3 + 4 = 1$ and $|A| = 0$, so
$$\lambda^3 - 2\lambda^2 + \lambda = 0 \Rightarrow \lambda(\lambda - 1)^2 = 0$$
$\therefore \lambda = 1, 1, 0$ are the eigen values of A.
Step III: $(A - \lambda I)X = 0$, i.e.
$$\begin{bmatrix} 1-\lambda & -6 & -4 \\ 0 & 4-\lambda & 2 \\ 0 & -6 & -3-\lambda \end{bmatrix}\begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}$$
For $\lambda = 0$:
$$\begin{bmatrix} 1 & -6 & -4 \\ 0 & 4 & 2 \\ 0 & -6 & -3 \end{bmatrix}\begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}$$
$$\Rightarrow x - 6y - 4z = 0,\quad 4y + 2z = 0,\quad -6y - 3z = 0$$
Let $y = -k$; then $z = 2k$ and $x = 6y + 4z = 2k$, so
$$X_1 = \begin{bmatrix} 2k \\ -k \\ 2k \end{bmatrix} = k\begin{bmatrix} 2 \\ -1 \\ 2 \end{bmatrix}$$
For $\lambda = 1$:
$$\begin{bmatrix} 0 & -6 & -4 \\ 0 & 3 & 2 \\ 0 & -6 & -4 \end{bmatrix}\begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix} \Rightarrow 3y + 2z = 0$$
Let $x = t$ and $z = k$; then $y = -\frac{2k}{3}$, so
$$X = \begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} t \\ -2k/3 \\ k \end{bmatrix} = t\begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix} + k\begin{bmatrix} 0 \\ -2/3 \\ 1 \end{bmatrix}$$
so
$$X_2 = \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix},\qquad X_3 = \begin{bmatrix} 0 \\ -2/3 \\ 1 \end{bmatrix}$$
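A short sketch (illustrative only) confirming that the repeated eigen value $\lambda = 1$ here has geometric multiplicity 2, which is why two linearly independent eigen vectors $X_2$ and $X_3$ exist:

```python
# Geometric multiplicity check for Example 1.16:
# geometric multiplicity = n - rank(A - lambda*I) = 3 - 1 = 2.
import numpy as np

A = np.array([[1, -6, -4],
              [0,  4,  2],
              [0, -6, -3]])
I = np.eye(3)

print(np.linalg.matrix_rank(A - 1 * I))   # 1, so geometric multiplicity = 2
# Both X2 and X3 (here X3 scaled by 3) are eigen vectors for lambda = 1.
for x in (np.array([1, 0, 0]), np.array([0, -2, 3])):
    assert np.allclose(A @ x, x)
```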
Exercise
1. Find the eigen values and the eigen vector for the lowest eigen value of the following matrices:
$$1)\ \begin{bmatrix} 4 & 6 & 6 \\ 1 & 3 & 2 \\ -1 & -4 & -3 \end{bmatrix}\quad 2)\ \begin{bmatrix} 4 & 2 & -2 \\ -5 & 3 & 2 \\ -2 & 4 & 1 \end{bmatrix}\quad 3)\ \begin{bmatrix} 0 & 2 & 0 \\ 3 & -2 & 3 \\ 0 & 3 & 0 \end{bmatrix}\quad 4)\ \begin{bmatrix} 1 & 0 & 0 \\ 0 & 3 & -3 \\ 0 & -1 & 3 \end{bmatrix}\quad 5)\ \begin{bmatrix} 1.5 & 0 & 1 \\ -0.5 & 0.5 & -0.5 \\ -0.5 & 0 & 0 \end{bmatrix}$$
2. Find the eigen values and the eigen vector for the highest eigen value of the following matrices:
$$1)\ \begin{bmatrix} 1 & 1 & -2 \\ -1 & 2 & 1 \\ 0 & 1 & -1 \end{bmatrix}\quad 2)\ \begin{bmatrix} 1 & 0 & -4 \\ 0 & 5 & 4 \\ -4 & 4 & 3 \end{bmatrix}$$
3. Find the eigen values and eigen vectors of the following matrices:
$$1)\ \begin{bmatrix} 1 & 1 & 1 \\ 0 & 2 & 1 \\ 0 & 0 & 3 \end{bmatrix}\quad 2)\ \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 1 & -3 & 3 \end{bmatrix}\quad 3)\ \begin{bmatrix} 14 & -10 \\ 5 & -1 \end{bmatrix}$$
4. Find the eigen values and eigen vectors for the repeated eigen values of the following matrices:
$$1)\ \begin{bmatrix} 4 & -1 & 1 \\ -1 & 4 & -1 \\ 1 & -1 & 4 \end{bmatrix}\quad 2)\ \begin{bmatrix} 3 & 1 & 1 \\ 1 & 3 & -1 \\ 1 & -1 & 3 \end{bmatrix}\quad 3)\ \begin{bmatrix} 2 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 2 \end{bmatrix}\quad 4)\ \begin{bmatrix} 1 & 2 & 3 \\ 2 & 4 & 6 \\ 3 & 6 & 9 \end{bmatrix}$$
Applications of eigen values and eigen vectors:
5. (In deformation analysis) the principal directions are the eigenvectors, and the percentage deformation in each principal direction is the corresponding eigenvalue.
6. Oil companies frequently use eigenvalue analysis to explore land for oil. Oil,
dirt, and other substances all give rise to linear systems which have different
eigenvalues, so eigenvalue analysis can give a good indication of where oil
reserves are located. Oil companies place probes around a site to pick up the
waves that result from a huge truck used to vibrate the ground. The waves are
changed as they pass through the different substances in the ground. The
analysis of these waves directs the oil companies to possible drilling sites.
7. Eigenvalues are not only used to explain natural occurrences, but also to
discover new and better designs for the future. Some of the results are quite
surprising. If you were asked to build the strongest column that you could to
support the weight of a roof using only a specified amount of material, what
shape would that column take? Most of us would build a cylinder like most
other columns that we have seen. However, Steve Cox of Rice University and
Michael Overton of New York University proved, based on the work of J.
Keller and I. Tadjbakhsh, that the column would be stronger if it was largest at
the top, middle, and bottom. At points partway from either end, the
column could be smaller because the column would not naturally buckle there
anyway. Does that surprise you? This new design was discovered through the
study of the eigenvalues of the system involving the column and the weight
from above. Note that this column would not be the strongest design if any
significant pressure came from the side, but when a column supports a roof, the
vast majority of the pressure comes directly from above.
8. Very (very, very) roughly then, the eigenvalues of a linear mapping are a
measure of the distortion induced by the transformation, and the eigenvectors
tell you about how the distortion is oriented. It is precisely this rough picture
which makes PCA (Principal Component Analysis, a statistical procedure)
very useful.
9. Using singular value decomposition for image compression. This is a note
explaining how you can compress an image by throwing away the small
eigenvalues of $AA^T$. It takes an 8 megapixel image of an Allosaurus and
shows how the image looks after compressing by
selecting 1, 10, 25, 50, 100 and 200 of the largest singular values. (A short
code sketch of this idea appears after this list.)
10. Deriving Special Relativity is more natural in the language of linear algebra. In
fact, Einstein's second postulate really states that "Light is an eigenvector of the
Lorentz transform." This document goes over the full derivation in detail.
11. Spectral Clustering. Whether it's in plants and biology, medical imaging,
business and marketing, understanding the connections between fields on
Facebook, or even criminology, clustering is an extremely important part of
modern data analysis. It allows people to find important subsystems or patterns
inside noisy data sets. One such method is spectral clustering, which uses the
eigenvalues of the graph of a network. Even the eigenvector of the second
smallest eigenvalue of the Laplacian matrix allows us to find the two largest
clusters in a network. (See the spectral-clustering sketch after this list.)
12. Dimensionality Reduction/PCA. The principal components correspond to the
largest eigenvalues of $A^TA$, and this yields the least-squares projection
onto a smaller-dimensional hyperplane; the eigenvectors become the axes
of the hyperplane. Dimensionality reduction is extremely useful in machine
learning and data analysis as it allows one to understand where most of the
variation in the data comes from. (A PCA sketch appears after this list.)
13. Low rank factorization for collaborative prediction. This is what Netflix does (or
once did) to predict what rating you'll give a movie you have not yet
watched. It uses the SVD and throws away the smallest eigenvalues
of $A^TA$.
14. The Google PageRank algorithm. The eigenvector corresponding to the largest
eigenvalue of the graph of the internet is how the pages are ranked. (A toy
power-iteration sketch appears after this list.)
15. When you watch a movie on a screen (TV, movie theater, ...), though the
picture you see is actually 2D, you do not lose much information from
the 3D real world it is capturing. That is because the principal eigen vector lies
close to the 2D plane in which the picture is being captured, and any small loss of
information (depth) is inferred automatically by our brain. (This is the reason we
most often take photos with the camera facing us directly, not from the
top of the head.) Each scene requires certain aspects of the image to be
enhanced; that is the reason the camera operator chooses his or her camera
angle to capture most of those visual aspects (apart from the colour of costumes,
the background scene and the background music).
16. If you eat pizza, french fries, or any other food, you are typically translating its
taste into sour, sweet, bitter, salty, hot, etc., the principal components of taste,
though in reality the way a food is prepared is formulated in terms of
ingredient ratios (sugar, flour, butter, and the tens to hundreds of things that go into
making a specific food). However, our mind transforms all such
information into the principal components of taste (an eigen vector whose
components are sour, bitter, sweet, hot, ...) automatically, along with the food's
texture and smell. So we use eigen vectors every day in many situations without
realizing that this is how we learn about a system more effectively. Our brain
simply transforms all the ingredients, cooking methods and the final food product
internally into some very effective eigen vector whose elements are taste sub-parts,
smell and visual appearance. (All the ingredients and their quantities, along with
the cooking procedure, represent some transformation matrix A, and we can find
some principal eigen vector(s) V with elements taste + smell + appearance + touch
related by the linear transformation AV = wV, where w represents a scalar eigen
value and V an eigen vector.) (Top wine tasters probably have a bigger
taste + smell + appearance eigen vector, with much bigger eigen values in each
dimension. This concept can be extended to any field of study.)
17. If we take pictures of a person from many angles (front, back, top, side, ...) on a
daily basis and would like to measure the changes in the entire body as one
grows, we can get the most information from the front angle, with the axis of the
camera perpendicular to the line passing from the crown of the head to a point
between the feet. This axis/camera angle captures the most useful
information for measuring a person's outer physical changes as they age.
This axis becomes a principal eigen vector with the largest eigen
value. (Note: the images that we capture directly from the top of the
person give much less useful information than the camera directly
facing him or her in this situation. That is the reason we use the PCA (Principal
Component Analysis) technique to determine the most effective eigen vectors and
related eigen values, capturing most of the needed information without
bothering about all the remaining axes of data capture.) Hopefully this helps in
understanding why and how we use eigen vectors and eigen values for better
perception in whatever we do day to day. Eigen vectors represent those
axes of perception along which we can know, understand and perceive things
around us in very effective ways.
18. Finally, it boils down to the person-to-person differences in
consciously or sub-consciously building and refining such principal eigen vectors
and related eigen values in each field of learning; these differentiate one person
from another (e.g. musicians, artists, scientists, mathematicians, camera operators,
directors, teachers, doctors, engineers, parents, stock market brokers, weather
forecasters, ...).
19. In aeronautical engineering eigenvalues may determine whether the flow over a
wing is laminar or turbulent.
20. In electrical engineering they may determine the frequency response of an
amplifier or the reliability of a national power system.
21. In structural mechanics eigenvalues may determine whether an automobile is
too noisy or whether a building will collapse in an earthquake.
22. In probability they may determine the rate of convergence of a Markov process.
23. In ecology they may determine whether a food web will settle into a steady
equilibrium.
24. In numerical analysis they may determine whether a discretization of a
differential equation will get the right answer or how fast a conjugate gradient
iteration will converge.
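To make point 9 concrete, here is a minimal sketch of rank-k compression via the SVD; the image is a random stand-in, not the Allosaurus photo mentioned above:

```python
# Rank-k image compression: keep only the k largest singular values.
import numpy as np

img = np.random.rand(512, 512)          # stand-in for a grayscale image
U, s, Vt = np.linalg.svd(img, full_matrices=False)

def compress(k):
    # Rank-k approximation: discard all but the k largest singular values.
    return (U[:, :k] * s[:k]) @ Vt[:k, :]

for k in (1, 10, 25, 50, 100, 200):
    approx = compress(k)
    err = np.linalg.norm(img - approx) / np.linalg.norm(img)
    print(f"k = {k:3d}  relative error = {err:.3f}")
```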
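For point 11, a toy sketch of spectral clustering on an assumed six-node graph; the sign pattern of the Fiedler vector separates the two clusters:

```python
# Spectral clustering on a toy graph: two triangles joined by one edge.
import numpy as np

A = np.zeros((6, 6))                    # adjacency matrix
for i, j in [(0,1), (1,2), (0,2), (3,4), (4,5), (3,5), (2,3)]:
    A[i, j] = A[j, i] = 1

L = np.diag(A.sum(axis=1)) - A          # graph Laplacian
eigvals, eigvecs = np.linalg.eigh(L)    # eigh: L is symmetric, ascending order
fiedler = eigvecs[:, 1]                 # second-smallest eigenvalue's vector
print(fiedler > 0)                      # True/False marks the two clusters
```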
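For point 12, a minimal PCA sketch on assumed random data, using the eigenvectors of the centred covariance matrix:

```python
# PCA: principal components are eigenvectors of the covariance matrix
# (proportional to A^T A for mean-centred data), ordered by eigenvalue.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))           # 200 samples, 5 features
Xc = X - X.mean(axis=0)                 # centre the data

cov = Xc.T @ Xc / (len(Xc) - 1)
eigvals, eigvecs = np.linalg.eigh(cov)  # ascending order
order = np.argsort(eigvals)[::-1]       # largest eigenvalues first

k = 2                                   # keep the top-2 principal axes
W = eigvecs[:, order[:k]]
X_reduced = Xc @ W                      # projection onto the hyperplane
print(X_reduced.shape)                  # (200, 2)
```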
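For point 14, a toy PageRank sketch with an assumed four-page link graph, computing the dominant eigenvector by power iteration:

```python
# PageRank on a toy 4-page web: the ranking vector is the dominant
# eigenvector of the damped link matrix, found by power iteration.
import numpy as np

# Column-stochastic link matrix: column j holds page j's outgoing links.
M = np.array([[0,   0,   1, 0.5],
              [1/3, 0,   0, 0  ],
              [1/3, 0.5, 0, 0.5],
              [1/3, 0.5, 0, 0  ]])
d, n = 0.85, 4
G = d * M + (1 - d) / n * np.ones((n, n))   # damped "Google matrix"

r = np.ones(n) / n
for _ in range(100):                        # power iteration
    r = G @ r
    r /= r.sum()
print(r)                                    # PageRank scores
```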
Algebraic Multiplicity:-
Let A be an 𝑛 × 𝑛 matrix with eigen value 𝜆. The algebraic multiplicity of 𝜆 is
the number of times 𝜆 is repeated as a root of the characteristic polynomial.
Geometric Multiplicity:-
Geometric multiplicity of an eigen value is the number of linearly independent
eigen vectors associated with it.
Note:-
1) Let A be an 𝑛 × 𝑛 matrix with eigen value 𝜆.
The geometric multiplicity of 𝜆 is 𝑛 − 𝜌(𝐴 − 𝜆𝐼), where 𝜌 denotes the rank.
2) If the algebraic multiplicity of each eigen value is equal to its geometric
multiplicity, then the matrix is diagonalizable.
3) If all eigen values of matrix A are distinct, then A is diagonalizable.
4) The modal matrix P is formed by grouping the eigen vectors of A into a square matrix.
5) The diagonal matrix D is known as the spectral matrix; it has the eigen values of A as
its diagonal elements.
(A short computational sketch of Notes 1 and 2 follows.)
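The following sketch (illustrative, with assumed numerical tolerances) applies Notes 1 and 2 directly:

```python
# Diagonalizability test: an eigenvalue's geometric multiplicity is
# n - rank(A - lambda*I); A is diagonalizable when it matches the
# algebraic multiplicity for every eigenvalue.
import numpy as np

def is_diagonalizable(A, tol=1e-9):
    n = A.shape[0]
    eigvals = np.linalg.eigvals(A)
    for lam in np.unique(np.round(eigvals, 6)):
        algebraic = np.sum(np.abs(eigvals - lam) < 1e-6)
        geometric = n - np.linalg.matrix_rank(A - lam * np.eye(n), tol=tol)
        if algebraic != geometric:
            return False
    return True

print(is_diagonalizable(np.array([[1., 1.], [0., 1.]])))  # False (Jordan block)
print(is_diagonalizable(np.array([[2., 0.], [0., 2.]])))  # True
```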
Example 1: Determine whether the matrix
$$A = \begin{bmatrix} 1 & 1 & 3 \\ 1 & 5 & 1 \\ 3 & 1 & 1 \end{bmatrix}$$
is diagonalizable. If diagonalizable, find the modal matrix and the diagonal matrix.
Solution: The characteristic equation is given by
$$\lambda^3 - S_1\lambda^2 + S_2\lambda - |A| = 0$$
$$S_1 = 1 + 5 + 1 = 7$$
$$S_2 = \begin{vmatrix} 5 & 1 \\ 1 & 1 \end{vmatrix} + \begin{vmatrix} 1 & 3 \\ 3 & 1 \end{vmatrix} + \begin{vmatrix} 1 & 1 \\ 1 & 5 \end{vmatrix} = 4 - 8 + 4 = 0$$
$$|A| = 1(5 - 1) - 1(1 - 3) + 3(1 - 15) = 4 + 2 - 42 = -36$$
Therefore the characteristic equation is
$$\lambda^3 - 7\lambda^2 + 36 = 0$$
Solving, we get $\lambda = -2, 3, 6$.
Here all eigen values of matrix A are distinct
∴ Given matrix A is diagonalizable.
To find the modal and diagonal matrices, consider
$$[A - \lambda I]X = 0 \;\Rightarrow\; \begin{bmatrix} 1-\lambda & 1 & 3 \\ 1 & 5-\lambda & 1 \\ 3 & 1 & 1-\lambda \end{bmatrix}\begin{bmatrix} x \\ y \\ z \end{bmatrix} = 0$$
For $\lambda = -2$:
$$\begin{bmatrix} 3 & 1 & 3 \\ 1 & 7 & 1 \\ 3 & 1 & 3 \end{bmatrix}\begin{bmatrix} x \\ y \\ z \end{bmatrix} = 0 \;\Rightarrow\; 3x + y + 3z = 0,\quad x + 7y + z = 0$$
By Cramer's rule,
$$\frac{x}{\begin{vmatrix} 1 & 3 \\ 7 & 1 \end{vmatrix}} = \frac{-y}{\begin{vmatrix} 3 & 3 \\ 1 & 1 \end{vmatrix}} = \frac{z}{\begin{vmatrix} 3 & 1 \\ 1 & 7 \end{vmatrix}}$$
$$\frac{x}{-20} = \frac{-y}{0} = \frac{z}{20} = t\ (\text{say})$$
$$x = -20t,\quad y = 0,\quad z = 20t$$
$$\begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} -20t \\ 0 \\ 20t \end{bmatrix} = 20t\begin{bmatrix} -1 \\ 0 \\ 1 \end{bmatrix} \qquad\therefore X_1 = \begin{bmatrix} -1 \\ 0 \\ 1 \end{bmatrix}$$
Similarly, for $\lambda = 3$ and $\lambda = 6$ we get
$$X_2 = \begin{bmatrix} -1 \\ 1 \\ -1 \end{bmatrix},\qquad X_3 = \begin{bmatrix} 1 \\ 2 \\ 1 \end{bmatrix}$$
$$\therefore \text{Modal matrix } P = \begin{bmatrix} -1 & -1 & 1 \\ 0 & 1 & 2 \\ 1 & -1 & 1 \end{bmatrix},\qquad D = \begin{bmatrix} -2 & 0 & 0 \\ 0 & 3 & 0 \\ 0 & 0 & 6 \end{bmatrix}$$
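An optional numerical check (a sketch, not part of the hand solution): with the modal matrix P built from the eigen vectors, $P^{-1}AP$ reproduces the spectral matrix D:

```python
# Verify P^{-1} A P = D for Example 1.
import numpy as np

A = np.array([[1, 1, 3],
              [1, 5, 1],
              [3, 1, 1]])
P = np.array([[-1, -1, 1],
              [ 0,  1, 2],
              [ 1, -1, 1]])

D = np.linalg.inv(P) @ A @ P
print(np.round(D, 10))   # diag(-2, 3, 6)
```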
Example: Find the matrix P which transforms the matrix $A = \begin{bmatrix} 2 & 1 \\ 2 & 3 \end{bmatrix}$ into diagonal form.
Solution:
$$A = \begin{bmatrix} 2 & 1 \\ 2 & 3 \end{bmatrix}$$
The characteristic equation is given by
$$\lambda^2 - S_1\lambda + |A| = 0,\qquad S_1 = 2 + 3 = 5,\qquad |A| = 6 - 2 = 4$$
$$\therefore \lambda^2 - 5\lambda + 4 = 0 \Rightarrow \lambda = 1,\ 4$$
Consider $[A - \lambda I]X = 0$:
$$\begin{bmatrix} 2-\lambda & 1 \\ 2 & 3-\lambda \end{bmatrix}\begin{bmatrix} x \\ y \end{bmatrix} = 0$$
For $\lambda = 1$:
$$\begin{bmatrix} 1 & 1 \\ 2 & 2 \end{bmatrix}\begin{bmatrix} x \\ y \end{bmatrix} = 0 \quad\text{gives}\quad X_1 = \begin{bmatrix} 1 \\ -1 \end{bmatrix}$$
For $\lambda = 4$:
$$\begin{bmatrix} -2 & 1 \\ 2 & -1 \end{bmatrix}\begin{bmatrix} x \\ y \end{bmatrix} = 0 \quad\text{gives}\quad X_2 = \begin{bmatrix} 1 \\ 2 \end{bmatrix}$$
$$\therefore \text{Modal matrix } P = \begin{bmatrix} 1 & 1 \\ 2 & -1 \end{bmatrix},\qquad D = \begin{bmatrix} 4 & 0 \\ 0 & 1 \end{bmatrix}$$
(Here $X_2$ is taken as the first column of P, matching the order of the eigen values in D.)
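The same numerical check works here (an illustrative sketch):

```python
# Verify P^{-1} A P = D for the 2x2 example.
import numpy as np

A = np.array([[2, 1], [2, 3]])
P = np.array([[1, 1], [2, -1]])
print(np.linalg.inv(P) @ A @ P)   # diag(4, 1)
```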
Exercise
State the nature of the following quadratic forms (a sketch of the eigenvalue-sign test follows the list):
1) $3x^2 + 3y^2 + 3z^2 + 2xy + 2xz - 2yz$
2) $4x^2 + 3y^2 + z^2 - 8xy - 6yz + 4zx$
3) $8x^2 + 7y^2 + 3z^2 - 12xy - 8yz + 4zx$
4) $6x^2 + 3y^2 + 3z^2 - 2yz + 4zx - 4xy$
5) $x^2 + 3y^2 + 3z^2 - 2yz$
6) $2x^2 - 6xy + z^2$
7) $5x^2 + 26y^2 + 2yz + 6xz + 4xy$
8) $x^2 + 2y^2 + 3z^2 - 4yz + 6xy + 2xz$
9) $x^2 - 2y^2 - 3z^2 + 5xy$
10) $x^2 + 2z^2 + 8yz + 6xy + 4xz$
11) $5x^2 + 26y^2 + 10z^2 + 4yz + 6xy + 14xz$
12) $8x^2 + 7y^2 + 3z^2 - 8yz + 12xy + 4xz$
13) $-2x^2 - y^2 - 3z^2$
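One standard way to answer these questions (assumed here, since the test itself is not restated in this section) is to examine the signs of the eigen values of the symmetric matrix of the quadratic form:

```python
# Nature of Q(x) = x^T A x from the signs of the eigenvalues of A.
import numpy as np

def nature(A, tol=1e-9):
    lam = np.linalg.eigvalsh(A)          # eigenvalues of symmetric A
    if np.all(lam > tol):   return "positive definite"
    if np.all(lam < -tol):  return "negative definite"
    if np.all(lam >= -tol): return "positive semidefinite"
    if np.all(lam <= tol):  return "negative semidefinite"
    return "indefinite"

# Form 1): 3x^2 + 3y^2 + 3z^2 + 2xy + 2xz - 2yz
A1 = np.array([[3,  1,  1],
               [1,  3, -1],
               [1, -1,  3]])
print(nature(A1))   # eigenvalues 1, 4, 4, so positive definite
```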
Example: Express the following quadratic form in canonical form using a linear
transformation: $Q(x) = 6x_1^2 + 3x_2^2 + 3x_3^2 - 4x_1x_2 + 4x_1x_3 - 2x_2x_3$.
Solution:
$$Q(x) = \begin{bmatrix} x_1 & x_2 & x_3 \end{bmatrix}\begin{bmatrix} 6 & -2 & 2 \\ -2 & 3 & -1 \\ 2 & -1 & 3 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix},\qquad A = \begin{bmatrix} 6 & -2 & 2 \\ -2 & 3 & -1 \\ 2 & -1 & 3 \end{bmatrix}$$
Consider $A = IAI$:
$$\begin{bmatrix} 6 & -2 & 2 \\ -2 & 3 & -1 \\ 2 & -1 & 3 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} A \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$$
$R_2 + \frac{1}{3}R_1$, $R_3 - \frac{1}{3}R_1$:
$$\begin{bmatrix} 6 & -2 & 2 \\ 0 & 7/3 & -1/3 \\ 0 & -1/3 & 7/3 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ 1/3 & 1 & 0 \\ -1/3 & 0 & 1 \end{bmatrix} A \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$$
$C_2 + \frac{1}{3}C_1$, $C_3 - \frac{1}{3}C_1$:
$$\begin{bmatrix} 6 & 0 & 0 \\ 0 & 7/3 & -1/3 \\ 0 & -1/3 & 7/3 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ 1/3 & 1 & 0 \\ -1/3 & 0 & 1 \end{bmatrix} A \begin{bmatrix} 1 & 1/3 & -1/3 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$$
$R_3 + \frac{1}{7}R_2$:
$$\begin{bmatrix} 6 & 0 & 0 \\ 0 & 7/3 & -1/3 \\ 0 & 0 & 16/7 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ 1/3 & 1 & 0 \\ -2/7 & 1/7 & 1 \end{bmatrix} A \begin{bmatrix} 1 & 1/3 & -1/3 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$$
$C_3 + \frac{1}{7}C_2$:
$$\begin{bmatrix} 6 & 0 & 0 \\ 0 & 7/3 & 0 \\ 0 & 0 & 16/7 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ 1/3 & 1 & 0 \\ -2/7 & 1/7 & 1 \end{bmatrix} A \begin{bmatrix} 1 & 1/3 & -2/7 \\ 0 & 1 & 1/7 \\ 0 & 0 & 1 \end{bmatrix}$$
This is of the form $D = P^T A P$ with
$$P = \begin{bmatrix} 1 & 1/3 & -2/7 \\ 0 & 1 & 1/7 \\ 0 & 0 & 1 \end{bmatrix},\qquad D = \begin{bmatrix} 6 & 0 & 0 \\ 0 & 7/3 & 0 \\ 0 & 0 & 16/7 \end{bmatrix}$$
so the canonical form is $Q = 6y_1^2 + \frac{7}{3}y_2^2 + \frac{16}{7}y_3^2$.
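A quick numerical verification of the reduction (a sketch; the matrices are copied from the steps above):

```python
# Verify the congruent reduction: P^T A P = D = diag(6, 7/3, 16/7).
import numpy as np

A = np.array([[ 6, -2,  2],
              [-2,  3, -1],
              [ 2, -1,  3]], dtype=float)
P = np.array([[1, 1/3, -2/7],
              [0, 1,    1/7],
              [0, 0,    1  ]])
print(P.T @ A @ P)   # approximately diag(6, 2.3333, 2.2857)
```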
Exercise
Express the following quadratic forms in canonical form using a linear transformation:
1. $Q(x) = 10x_1^2 + 2x_2^2 + 5x_3^2 - 4x_1x_2 - 10x_1x_3 + 6x_2x_3$
2. $Q(x) = 3x_1^2 + 2x_2^2 + x_3^2 + 4x_1x_2 - 2x_1x_3 + 6x_2x_3$
3. $Q(x) = 2x_1^2 + 9x_2^2 + 6x_3^2 + 8x_1x_2 + 6x_1x_3 + 8x_2x_3$
4. $Q(x) = 6x_1^2 + 3x_2^2 + 14x_3^2 + 4x_1x_2 + 18x_1x_3 + 4x_2x_3$
5. $Q(x) = 3x_1^2 + 5x_2^2 + 3x_3^2 - 2x_1x_2 + 2x_1x_3 - 2x_2x_3$
6. $Q(x) = x_1^2 + 6x_2^2 + 18x_3^2 + 4x_1x_2 + 8x_1x_3 - 4x_2x_3$
7. $Q(x) = 2x_1^2 + x_2^2 - 3x_3^2 + 12x_1x_2 - 4x_1x_3 - 8x_2x_3$
8. $Q(x) = 2x_1^2 + 7x_2^2 + 5x_3^2 - 8x_1x_2 + 4x_1x_3 - 10x_2x_3$
9. $Q(x) = 3x_1^2 + 3x_2^2 + 3x_3^2 + 6x_1x_2 + 2x_1x_3 - 2x_2x_3$
10. $Q(x) = x_1^2 + 2x_2^2 - 7x_3^2 - 4x_1x_2 + 8x_1x_3$
11. $Q(x) = 10x_1^2 + x_2^2 + x_3^2 - 6x_1x_2 + 6x_1x_3 - 2x_2x_3$
Example: Reduce the quadratic form with matrix $A = \begin{bmatrix} 3 & -1 & 1 \\ -1 & 5 & -1 \\ 1 & -1 & 3 \end{bmatrix}$, i.e. $Q(x) = 3x_1^2 + 5x_2^2 + 3x_3^2 - 2x_1x_2 + 2x_1x_3 - 2x_2x_3$, to canonical form by orthogonal transformation. The eigen values are $\lambda = 2, 3, 6$; for $\lambda = 2$ and $\lambda = 3$, proceeding as in the earlier examples gives $X_1 = [1, 0, -1]^T$ and $X_2 = [1, 1, 1]^T$.
For $\lambda = 6$: $(A - \lambda I)X = 0$ gives
$$\begin{bmatrix} -3 & -1 & 1 \\ -1 & -1 & -1 \\ 1 & -1 & -3 \end{bmatrix}\begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}$$
$$-3x - y + z = 0,\qquad -x - y - z = 0$$
$$X_3 = \begin{bmatrix} 1 \\ -2 \\ 1 \end{bmatrix}$$
$$\therefore D = \begin{bmatrix} 2 & 0 & 0 \\ 0 & 3 & 0 \\ 0 & 0 & 6 \end{bmatrix},\qquad M = \begin{bmatrix} 1 & 1 & 1 \\ 0 & 1 & -2 \\ -1 & 1 & 1 \end{bmatrix}$$
Normalizing the columns of M, the orthogonal transformation $X = PY$ is
$$\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 1/\sqrt{2} & 1/\sqrt{3} & 1/\sqrt{6} \\ 0 & 1/\sqrt{3} & -2/\sqrt{6} \\ -1/\sqrt{2} & 1/\sqrt{3} & 1/\sqrt{6} \end{bmatrix}\begin{bmatrix} y_1 \\ y_2 \\ y_3 \end{bmatrix}$$
$$x_1 = \frac{1}{\sqrt{2}}y_1 + \frac{1}{\sqrt{3}}y_2 + \frac{1}{\sqrt{6}}y_3$$
$$x_2 = \frac{1}{\sqrt{3}}y_2 - \frac{2}{\sqrt{6}}y_3$$
$$x_3 = -\frac{1}{\sqrt{2}}y_1 + \frac{1}{\sqrt{3}}y_2 + \frac{1}{\sqrt{6}}y_3$$
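An optional check of this example (the matrix A is inferred from the $A - 6I$ step above; a sketch, not part of the hand solution):

```python
# Verify the orthogonal diagonalization: P is orthogonal and
# P^T A P = diag(2, 3, 6).
import numpy as np

A = np.array([[ 3, -1,  1],
              [-1,  5, -1],
              [ 1, -1,  3]], dtype=float)

s2, s3, s6 = np.sqrt(2), np.sqrt(3), np.sqrt(6)
P = np.array([[ 1/s2, 1/s3,  1/s6],
              [ 0,    1/s3, -2/s6],
              [-1/s2, 1/s3,  1/s6]])

assert np.allclose(P.T @ P, np.eye(3))   # P is orthogonal
print(np.round(P.T @ A @ P, 10))         # diag(2, 3, 6)
```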
Exercise
Reduce the following quadratic forms to canonical form by orthogonal transformation:
1. $Q(x) = x_1^2 + 3x_2^2 + 3x_3^2 - 2x_2x_3$
2. $Q(x) = 2(x_1^2 + x_1x_2 + x_2^2)$
3. $Q(x) = 7x_1^2 + 10x_2^2 + 7x_3^2 - 4x_1x_2 + 2x_1x_3 + 4x_2x_3$
4. $Q(x) = 2x_1^2 + 2x_2^2 + 2x_3^2 - 2x_1x_3$
5. $Q(x) = 2x_1^2 + 2x_2^2 - x_3^2 - 8x_1x_2 + 4x_1x_3 - 4x_2x_3$
6. $Q(x) = 7x_1^2 - 8x_2^2 - 8x_3^2 + 8x_1x_2 - 8x_1x_3 - 2x_2x_3$
7. $Q(x) = 3x_1^2 - 2x_2^2 - x_3^2 - 4x_1x_2 + 8x_1x_3 + 12x_2x_3$
8. $Q(x) = 8x_1^2 + 7x_2^2 + 3x_3^2 - 12x_1x_2 + 4x_1x_3 - 8x_2x_3$
9. $Q(x) = x_1^2 + 4x_2^2 + 9x_3^2 + 4x_1x_2 + 6x_1x_3 + 12x_2x_3$