Linear Algebra
Linear Algebra
Linear Algebra
Monographs # 1
Constantin Udriste
LINEAR ALGEBRA
Ioana Boca
Linear Algebra
Monographs # 2
* Monographs, 2000
Neither the book nor any part may be reproduced or transmitted in any form
or by any means, electronic or mechanical, including photocopying, microfilming
or by any information storage and retrieval system, without the permission in
writing of the publisher.
Preface
This textbook covers the standard linear algebra material taught at the University
Politehnica of Bucharest, and is designed for a 1-semester course.
The prerequisites are highschool algebra and geometry.
Chapters 14 are intended to introduce first year students to the basic notions
of vector space, linear transformation, eigenvectors and eigenvalues, bilinear and
quadratic forms, and to the usual linear algebra techniques.
The linear algebra language is used in Chapters 5, 6, 7 to present some notions and
results on vectors, straight lines and planes, transformations of coordinate systems.
We end with some exam samples. Each sample involves facts from two or more
chapters.
The topics treated in this book and the presentation of the material are similar
to those in several of the first authors previous works [19][25]. Parts of some linear
algebra sections follow [1]. The selection of topics and problems relies on the teaching
experience of the authors at the University Politehnica of Bucharest, including lectures
and seminars taught in English at the Department of Engineering Sciences.
The publication of this volume was supported by MEN Grant #21815, 28.09.98,
CNCSU-31; this support provided the oportunity to include the present textbook in
the University Lectures Series published by the Editorial House of Balkan Society of
Geometers.
We wish to thank our colleagues for helpful discussions on the problems and topics
treated in this book and on our teaching activities. Any further suggestions will be
greatly appreciated.
The authors
July 12, 2000
iii
Contents
1 Vector Spaces
1
Vector Spaces . . . . . . . . . . . . . . . .
2
Vector Subspaces . . . . . . . . . . . . . .
3
Linear Dependence. Linear Independence
4
Bases and Dimension . . . . . . . . . . . .
5
Coordinates.
Isomorphisms. Change of Coordinates . .
6
Euclidean Vector Spaces . . . . . . . . . .
7
Orthogonality . . . . . . . . . . . . . . . .
8
Problems . . . . . . . . . . . . . . . . . .
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
1
1
4
7
9
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
11
15
18
22
2 Linear Transformations
1
General Properties . . . . . . . . . . . . . .
2
Kernel and Image . . . . . . . . . . . . . . .
3
The Matrix of a Linear Transformation . .
4
Particular Endomorphisms . . . . . . . . . .
5
Endomorphisms of Euclidean Vector Spaces
6
Isometries . . . . . . . . . . . . . . . . . . .
7
Problems . . . . . . . . . . . . . . . . . . .
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
25
25
29
31
34
37
41
43
. . . . . . . . . .
. . . . . . . . . .
. . . . . . . . . .
. . . . . . . . . .
Euclidean Spaces
. . . . . . . . . .
. . . . . . . . . .
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
45
45
47
51
54
61
63
66
. . . . . . . . . . . . . . . . .
. . . . . . . . . . . . . . . . .
67
67
70
. . . . . . . . . . . . . . . . .
. . . . . . . . . . . . . . . . .
. . . . . . . . . . . . . . . . .
72
76
79
vi
5 Free Vectors
1
Free Vectors . . . . . . . . . .
2
Addition of Free Vectors . . .
3
Multiplication by Scalars . .
4
Collinearity and Coplanarity
5
Inner Product in V3 . . . . .
6
Vector (cross) Product in V3
7
Mixed Product . . . . . . . .
8
Problems . . . . . . . . . . .
CONTENTS
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
81
81
83
84
85
87
90
92
94
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
95
95
96
97
100
101
102
104
107
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
109
109
110
113
115
116
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
Exam Samples
119
Bibliography
131
Chapter 1
Vector Spaces
1
Vector Spaces
The vector space structure is one of the most important algebraic structures.
The basic models for (real) vector spaces are the spaces of ndimensional row or
column matrices:
a1
..
Mn,1 (R) = v = . ; aj R, j = 1, n .
an
We will identify Rn with either one of M1,n (R) or Mn,1 (R). A row (column)
matrix is also called a row (column) vector.
The definition of matrix multiplication makes the use of column vectors more
convenient for us. We will also write a column vector in the form t [a1 , . . . , an ] or
t
(a1 , . . . , an ), as the transpose of a row vector, in order to save space.
An abstract vector space is endowed with two operations: addition of vectors
and multiplication by scalars, where the scalars will be the elements of a field. For
the examples above, these are just the usual matrix addition and multiplication of a
matrix by a real number:
a1
b1
a1 + b1
.. ..
..
. + . =
.
an
bn
an + bn
a1
ka1
k ... = ... ,
an
kan
Let us first recall the definition of a field.
1
where k R .
(a, b) a + b
(a, b) ab ,
(d) K = Q[ 2] = {a + b 2 ; a, b Q}.
DEFINITION 1.2 A vector space V over a field K (or a Kvector space) is a set
endowed with two laws of composition:
(a) addition : V V V, (v, w) v + w
(b) multiplication by scalars : K V V, (k, v) kv
satisfying the following axioms:
(i) (V, +) is an abelian group
(ii) multiplication by scalars is associative with respect to multiplication in K:
k(lv) = (kl)v,
k, l K, v V
v V
k, l K, v, w V.
The elements of K are usually called scalars and the elements of V vectors. A
vector space over C is called a complex vector space; a vector space over R is called
a real vector space. When K is not specified, we understand K = R or K = C.
1. VECTOR SPACES
for all k K, f, g V.
(vi) The solution set of an algebraic linear homogeneous system with n unknowns
and coefficients in K is a Kvector space with the operations induced from Kn .
(vii) The solution set of an ordinary linear homogeneous differential equation is a
real vector space with addition of functions and multiplication of functions by
scalars.
(viii) The set K[X] of all polynomials with coefficients in K is a Kvector space.
THEOREM 1.3 Let V be a Kvector space. Then V has the following properties:
(i) 0K v = 0V , v V
(ii) k 0V = 0V , k K
(iii) (1) v = v, v V.
Proof. To see (i) we use the distributive law to write
0K v + 0K v = (0K + 0K ) v = 0K v + 0V .
Now 0K v cancels out (in the group (V, +)), so we obtain 0K v = 0V . Similarly,
k 0V + k 0V = k(0V + 0V ) = k 0V
implies k 0V = 0V .
For (iii), v + (1) v = (1 + (1)) v = 0K v = 0V . Hence (1) v is the additive
inverse of v. QED
v = 0V or k = 0K
(iv) k v = l v, v 6= 0V
= k = l
(v) k v = k w, k 6= 0K
= v = w
We leave the proof of the corollary as an exercise; all properties follow easily from
Theorem 1.3 and the definition of a vector space.
We will usually write 0 instead of 0K or 0V , since in general it is clear whether
we refer to the number or to the vector zero.
Vector Subspaces
REMARKS 2.2
(ii) W is a vector subspace if and only if W is a vector space with the operations
induced from V.
(iii) W is a vector subspace if and only if
k, l K, u, w V = k u + l w W .
Examples of vector subspaces
(i) {0V } and V are vector subspaces of V. Any other vector subspace is called a
proper vector subspace.
(ii) The straight lines through the origin are proper vector subspaces of R2 .
(iii) The straight lines and the planes through the origin are proper vector subspaces
of R3 .
(iv) The solution set of an algebraic linear homogeneous system with n unknowns is
a vector subspace of Kn .
2. VECTOR SUBSPACES
(v) The set of odd functions and the set of even functions are vector subspaces of
the space of real functions defined on the interval (a, a), a R?+ .
DEFINITION 2.3 (i) Let v1 , . . . , vp V. A linear combination of the vectors
v1 , . . . , vp is a vector of the form
w = k1 v1 + . . . + kp vp ,
kj K .
p N? , kj K, vj S
p
X
ki ui Span S and w =
Then ku + lv =
p
X
i=1
(kki ) ui +
lj wj Span S, with ki , lj
j=1
i=1
K, ui , wj S.
q
X
q
X
j=1
(ii) Exercise !
(iii) The implication from the right to the left is obvious. For the other one, assume
by contradiction that none of the inclusions holds. Then we can take u1 W1 \ W2 ,
u2 W2 \ W1 .
Now u1 , u2 W1 W2 implies u1 + u2 W1 W2 since W1 W2 is a vector
subspace.
But either u1 + u2 W1 or u1 + u2 W2 contradicts u2
/ W1 or u1
/ W2
respectively. Therefore W1 W2 or W2 W1 . QED
REMARKS 2.7
f (x) f (x)
f (x) + f (x)
+
, for any f V and
2
2
(ii) Set
D1 = {(x, y) | 2x + y = 0}
D2 = {(x, y) | x y = 0}
Then R2 = D1 D2 .
ki vi = 0 .
i=1
(k1 , . . . , kp ) 6= (0, . . . , 0), then the vectors v1 , . . . , vp are linearly dependent; if the
relation implies (k1 , . . . , kp ) = (0, . . . , 0), then the vectors are linearly independent.
REMARKS 3.2 (i) Let S be an arbitrary nonempty subset of V. Then S is linearly
dependent if and only if S contains a linearly dependent subset.
(ii) Let S be an arbitrary nonempty subset of V. Then S is linearly independent
if and only if all finite subsets of S are linearly independent.
(iii) Let v1 , . . . , vn Km . Denote by A = [v1 , . . . , vn ] Mm,n (K), the matrix
whose columns are v1 , . . . , vn . Then v1 , . . . , vn are linearly independent if and only if
the system
x1
A ... = 0
xn
admits only the trivial solution, and this is equivalent to rank A = n.
Examples
(i) {0} is linearly dependent; if 0 S, then S is linearly dependent.
(ii) If v V, v 6= 0, then {v} is linearly independent.
(iii) A set {v1 , v2 } of two vectors is linearly dependent if and only if either v1 = 0 or
else v2 is a scalar multiple of v1 .
ki vi = 0 .
i=1
p
X
ki
vi Span (S {v1 }) .
k
i=2 1
q
X
cj wj
j=1
q
X
(cj )wj = 0 ,
j=1
p
X
i=1
that
p+1
X
(3.1)
kj wj = 0 .
j=1
kj wj =
p+1
X
j=1
kj
p
X
i=1
aij vi =
p
X
i=1
p+1
X
j=1
kj aij vi .
Then
p
X
i=1
(3.2)
p+1
X
kj aij vi = 0 is equivalent to
j=1
p+1
X
kj aij = 0 ,
i = 1, . . . , p ,
j=1
vi 6= vj
if i 6= j .
10
COROLLARY 4.4 Any nonzero finitely generated vector space admits a finite basis.
Using Prop.3.4, the next consequences of the theorem are straightforward.
COROLLARY 4.5 (i) Any linearly independent subset of a finitely generated vector
space is finite.
(ii) Any basis of a finitely generated vector space is finite.
THEOREM 4.6 Let V be a finitely generated vector space, V 6= {0}. Then any
two bases have the same number of elements.
Proof. Let B, B0 be two bases of V. Assume that B has n elements and B0 has
n0 elements. We have V = Span B, since the basis B spans V. By Proposition 3.4,
no linearly independent subset of V could have more than n elements. The basis B0
is linearly independent, therefore n0 n.
The same argument works for B0 instead of B, yielding n n0 . Therefore n = n0 .
QED
DEFINITION 4.7 Let V be finitely generated.
If V 6= {0}, then the dimension of V is the number of vectors in a basis of V. The
dimension is denoted by dim V.
If V = {0}, then dim {0} = 0 by definition.
Finitely generated vector spaces are also called finite dimensional vector spaces.
The other vector spaces, which have infinite bases are called infinite dimensional.
The dimension of a finite dimensional vector space is a natural number; dim V = 0
if and only if V = {0}.
When it is necessary to specify the field, we write dimK V instead of dim V. For
example, dimC C = 1, dimR C = 2, since C can be regarded as a complex vector
space, as well as a real vector space.
Examples
(i) e1 = (1, 0, . . . , 0);
and dim Kn = n.
e2 = (0, 1, . . . , 0); . . .
5. COORDINATES.
11
COROLLARY 4.9 Let L, S be finite subsets of V such that L is linearly independent and V = Span (S). Then:
(i) card L dim V card S;
(ii) card L = dim V if and only if L is a basis;
(iii) card S = dim V if and only if S is a basis.
PROPOSITION 4.10 If W is a vector subspace of V and dim V = n, n 1, then
W is finitedimensional and dim W n. Equality holds only if W = V.
Proof. Assume W 6= {0} and let v1 W, v1 6= 0. Then {v1 } is linearly independent.
If W = Span {v1 }, then we are done. If W 6 Span {v1 }, then there exists
v2 W \ Span {v1 }. Proposition 3.3 (ii) applies for L = {v1 }, v = v2 , so {v1 , v2 } is
linearly independent.
Now either W = Span{v1 , v2 } or there exists v3 W\Span{v1 , v2 }. In the latter
case we apply again Proposition 3.3 (ii) and we continue the process. The process
must stop after at most n steps, otherwise at step n + 1 we would find vn+1 such that
{v1 , . . . , vn , vn+1 } is linearly independent, which contradicts Proposition 3.4.
By the above procedure we found a basis of W which contains at most n vectors,
thus W is finite dimensional and dim V n.
Assume that dim W = n. Then any basis of W is linearly independent in V and
contains n elements; by the previous corollary it is also a basis of V. QED
THEOREM 4.11 If U, W are finitedimensional vector subspaces of V, then U +
W and U W are finite dimensional and
dim U + dim W = dim (U + W) + dim (U W) .
Sketch of proof. The conclusion is obvious if U W = U or U W = W. If not,
assume U W 6= {0} and let {v1 , . . . , vp } be a basis of U W. Then, there exist
up+1 , . . . , up+q U and wp+1 , . . . , wp+r W such that
{v1 , . . . , vp , up+1 , . . . , up+q } is a basis of U,
{v1 , . . . , vp , wp+1 , . . . , wp+r } is a basis of W,
and show that {v1 , . . . , vp , up+1 , . . . , up+q , wp+1 , . . . , wp+r } is a basis of U + W.
The idea of proof is similar if U W = {0}. QED
Coordinates.
Isomorphisms. Change of Coordinates
Let V be a finite dimensional vector space. In this section we are going to make
explicit computations with bases. For, it will be necessary to work with (finite)
ordered sets of vectors. Consequently, a finite basis B of V has three qualities: it is
linearly independent, it spans V, and it is ordered.
12
x = x1 v1 + . . . + xn vn ,
xj K ,
j = 1, . . . , n .
Proof. Suppose that B is a basis. Then every x V can be written in the form
(5.1) since V = Span B. If also x = x01 v1 + . . . x0n vn , then
0 = x x = (x1 x01 )v1 + . . . + (xn x0n )vn .
By the linear independence of B it follows that x1 x01 = . . . = xn x0n = 0.
Conversely, the existence of the representation (5.1) for each vector implies V =
Span B. The uniqueness applied for x = 0 = 0v1 + . . . + 0vn gives the linear independence of B. QED
DEFINITION 5.2 The scalars xj are called the coordinates of x with respect to
the basis B. The column vector
x1
X = ...
xn
is called the coordinate vector associated to x.
DEFINITION 5.3 Let V and W be two vector spaces over the same field K. A
map T : V W compatible with the vector space operations, i.e. satisfying:
(a) T (x + y) = T (x) + T (y) ,
(b) T (kx) = kT (x) ,
x, y V
k K, x V
(T is additive)
(T is homogeneous),
k, l K, x, y V ,
5. COORDINATES.
13
14
LEMMA 5.8 Let A = [aij ] 1 i m Mm,n (K). Then the rank of A is equal to the
1 j n
n
X
cij vi ,
j = 1, . . . , n .
i=1
cnj
T (wj ) is the jth column of C. Then:
det C 6= 0 if and only if rank C = n;
rank C = n if and only if T (B0 ) is linearly independent, by Lemma 5.8;
Lemma 5.6 applies for T and T 1 , thus B0 is linearly independent if and only
if T (B0 ) is linearly independent;
B0 is a basis of W if and only if B0 is linearly independent, by Corollary 4.9,
and the conclusion follows.
QED
n
X
j=1
x0j wj =
n
X
j=1
x0j
n
X
i=1
!
cij vi
n
X
i=1
n
X
cij xj vi .
j=1
15
Unless otherwise specified, K will denote either one of the fields R or C. Euclidean
vector spaces are real or complex vector spaces with an additional operation that will
be used to define the length of a vector, the angle between two vectors and orthogonality in a way which generalizes the usual geometric properties of space vectors in
V3 . The Euclidean vector space V3 will make the object of a separate chapter.
DEFINITION 6.1 Let V be a real or complex vector space. An inner (scalar, dot)
product on V is a map h , i : V V K which associates to each pair (v, w) a
scalar denoted by hv, wi, satisfying the following properties:
(i) Linearity in the first variable:
hv1 + v2 , wi = hv1 , wi + hv2 , wi , v1 , v2 , w W
hkv, wi = khv, wi , k K, v, w W
(additivity)
(homogeneity)
v 6= 0, v V .
REMARKS 6.2 (i) Note that linearity in the first variable means that if w is
fixed, then the resulting function of one variable is a linear transformation from
V into K.
(ii) If K = R, then the second equality in (ii) is equivalent to: hv, kwi = khv, wi,
implying linearity in the second variable too, and (iii) is equivalent to hw, vi =
hv, wi.
(iii) By (i) and (ii) in the definition we deduce the conjugate linearity (linearity in
the real case) in the second variable:
hv, w1 + w2 i = hv, w1 i + hv, w2 i
hv, kwi = khv, wi .
(additivity)
(conjugate homogeneity)
v V
w V = v = 0
v V = w = 0 , since hv, vi = 0 = v = 0.
v V
16
DEFINITION 6.3 A real or a complex vector space endowed with a scalar product
is called a Euclidean vector space.
Examples of canonical Euclidean vector spaces
1) Rn with hx, yi = x1 y1 + . . . + xn yn , where x = (x1 , . . . , xn ), y = (y1 , . . . , yn ).
2) Cn with hx, yi = x1 y1 + . . . + xn yn , where x = (x1 , . . . , xn ), y = (y1 , . . . , yn ).
a, bi = k
ak kbk cos , where k
ak, kbk are the lengths of a
and b
3) V3 with h
f (t)g(t) dt .
a
f (t)g(t) dt .
a
hv + w, v + wi 0 ,
Take =
K .
hv, wi
and expand to obtain (6.1).
hw, wi
hv, wi
. Then
hw, wi
hv, wi
w = 0 by positivity. Thus v, w are linearly dependent.
hw, wi
Conversely, suppose v = w. Then
| hv, wi |2
v V
(positivity)
17
k K, v V
PROPOSITION
6.6 Let V be a Euclidean vector space. Then the function defined
p
by kvk = hv, vi is a norm on V.
Proof. Properties (i) and (ii) in Definition 6.5 are straightforward from the positivity and (conjugate) linearity of the inner product. For (iii)
ku + vk2
u, v V
Note that not all norms satisfy the parallelogram law. For example, let k k :
Rn [0, ), n 2 defined by kxk = max{| x1 |, . . . , | xn |} and take u =
(1, 0, . . . , 0), v = (1, 1, 0, . . . , 0).
DEFINITION 6.8 A vector u V with kuk = 1 is called a unit vector or versor.
Any vector v V \ {0} can be written as v = kvku, where u is a unit vector. The
1
vector u =
v is called the unit vector in the direction of v.
kvk
If v, w V \ {0} and K = R, then the Cauchy inequality is equivalent to
(6.3)
hv, wi
1.
kvk kwk
hv, wi
kvk kwk
18
THEOREM 6.10 Let V be a real or complex normed vector space. The function
d : V V R, defined by d(v, w) = kv wk is a distance (metric) on V, i.e. it
satisfies the following properties:
(i) d(v, w) 0, v, w V and d(v, w) = 0 if and only if v = w.
(ii) d(v, w) = d(w, v), v, w V
(iii) d(u, v) d(u, w) + d(w, v), u, v, w V.
The proof is straightforward from the properties of the norm.
A vector space endowed with a distance (i.e. a map satisfying (i), (ii), (iii) in the
previous theorem) is called a metric space. If the distance is defined by a Euclidean
norm, then it is called a Euclidean distance.
Orthogonality
Let V be a Euclidean vector space. In the last section, we defined the angle between
two nonzero vectors. The definition of orthogonality will be compatible with the
definition of the angle.
DEFINITION 7.1 (i) Two vectors v, w V are called orthogonal (or perpendicular) if hv, wi = 0. We write v w when v and w are orthogonal.
(ii) A subset S 6= is called orthogonal if its vectors are mutually orthogonal, i.e.
hv, wi = 0, v, w S, v 6= w.
(iii) A subset S 6= is called orthonormal if it is orthogonal and each vector of S
is a unit vector (i.e. v S, kvk = 1).
j = 1, . . . , p .
j = 1, . . . , p .
But vj 6= 0, since 0
/ S, so hvj , vj i 6= 0 and it follows that kj = 0, j = 1, . . . , p,
which shows that S is linearly independent. QED
Combining the above result and Corollary 4.8 (iii) we obtain the following result.
COROLLARY 7.3 If dim V = n, n 1, then any orthogonal set of n nonzero
vectors is a basis of V.
7. ORTHOGONALITY
19
m = 1, . . . , n .
00
hvm , wi i
.
hwi , wi i
hv2 , w1 i
hw1 , w1 i
....................................................
hvm , w1 i
hvm , wm1 i
wm = vm
w1 . . .
wm1
hw1 , w1 i
hwm1 , wm1 i
....................................................
hvn , wn1 i
hvn , w1 i
w1 . . .
wn1 ,
wn = vn
hw1 , w1 i
hwn1 , wn1 i
w2 = v2
20
hx, u1 i
hx, un i
u1 + . . . +
un .
hu1 , u1 i
hun , un i
n
X
xi ui .
(see (5.1))
i=1
n
X
xi hui , uj i = xj huj , uj i ,
j = 1, . . . , n ,
i=1
thus
xj =
hx, uj i
,
huj , uj i
j = 1, . . . , n .
QED
w V = hv, vi = 0 = v = 0 .
7. ORTHOGONALITY
21
n
X
i=1
w W, w W
is unique. Also
kvk2 = hv, vi = hw, wi + hw, w i + hw , wi + hw , w i = kwk2 + kw k2 ,
since hw, w i = hw , wi = 0.
QED
n
X
j=1
xj yj and kxk2 =
j=1
n
X
x2j .
j=1
n
X
xj yj and kxk2 =
j=1
n
X
j=1
n X
n
X
j=1
xi yj hui , uj i =
i=1 j=1
i=1 j=1
n X
n
X
i=1 j=1
xi yj ij =
n
X
xj yj .
j=1
QED
| x j |2 .
22
THEOREM 7.11 (GramSchmidt infinite dimensional case) If V is infinite dimensional and L = {v1 , . . . , vk , . . .} V is a countable, infinite, linearly
independent set of distinct elements, then there exists an orthonormal set L0 =
{u1 , . . . , uk , . . .} such that
Span {v1 , . . . , vk } = Span {u1 , . . . , uk } ,
k N .
hv2 , w1 i
hw1 , w1 i
.............................................................
hvk , wk1 i
hvk , w1 i
w1 . . .
wk1 ,
k 2,
wk = vk
hw1 , w1 i
hwk1 , wk1 i
.............................................................
w2 = v2
then uj =
1
wj , k N .
kwj k
Problems
1. Let V be a vector space over the field K and S a nonempty set. We define
F = {f |f : S V},
(f + g)(x) = f (x) + g(x),
(tf )(x) = tf (x),
for all
for all
f, g F,
t K, f F.
s x = xs .
8. PROBLEMS
23
6. Is the set Kn [X] of all polynomials of degree at most n, a vector space over
K? What about the set of the polynomials of degree at least n?
7. Show that the set of all convergent sequences of real (complex) numbers is a
vector space over R (C) with respect to the usual addition of sequences and multiplication of a sequence by a number.
8. Prove that the following sets are real vector spaces with respect to the usual
addition of functions and multiplication of a function by a real number.
1) {f | f : I R, I = interval R, f differentiable}
2) {f | f : I R, I = interval R, f admits antiderivatives}
3) {f | f : [a, b] R, f integrable}.
9. Which of the following pairs of operations define a real vector space structure
on R2 ?
1) (x1 , x2 ) + (y1 , y2 ) = (x1 + x2 , x2 y2 ), k(x1 , x2 ) = (kx1 , kx2 ), k R
2) (x1 , x2 ) + (y1 , y2 ) = (x1 + y1 , x2 + y2 ), k(x1 , x2 ) = (x1 , kx2 ), k R
3) (x1 , x2 ) + (y1 , y2 ) = (x1 + y1 , x2 + y2 ), k(x1 , x2 ) = (kx1 , kx2 ), k R.
10. Let Pn be the real vector space of real polynomial functions of degree at most
n. Study which of the following subsets are vector subspaces, then determine the sum
and the intersection of the vector subspaces you found.
A = {p| p(0) = 0}, B = {p| p(0) = 1}, C = {p| p(1) + p(1) = 0}.
11. Study the linear dependence of the following sets:
1) {1, 1, 1), (0, 3, 1), (1, 2, 2)} R3 ,
0 i
0 1
i 0
1 0
M (C),
,
,
,
2)
i 0
1 0
0 i
0 1
3) {1, x, x2 }, {ex , xex , x2 ex }, {ex , ex , sinh x}, {1, cos2 x, cos 2x} C (R) = the
real vector space of C functions on R.
12. Show that the solution set of a linear homogeneous system with n unknowns
(and coefficients in K) is a vector subspace of Kn . Determine its dimension.
13. Consider V = Kn . Prove that every subspace W of V is the solution set of
some linear homogeneous system with n unknowns.
14. A straight line in R2 is identified to the solution set of a (nontrivial) linear
equation with two unknowns. Similarly, a plane in R3 is identified to the solution
set of a linear equation with three unknowns; a straight line in R3 can be viewed as
the intersection of two planes, so it may be identified to the solution set of a linear
system of rank two, with three unknowns.
(a) Prove that the only proper subspaces of R2 are the straight lines passing
through the origin.
(b) Prove that the only proper subspaces of R3 are the planes passing through the
origin, and the straight lines passing through the origin.
15. Which of the following subsets of R3 are vector subspaces?
D1 :
x
y
z
x1
y
z2
= =
, D2 :
= =
1
2
1
1
1
1
24
20. Let V be a complex vector space. Consider the set V with the same additive
group structure, but scalar multiplication restricted to multiplication by real numbers.
Prove that the set V becomes a real vector space in this way.
Denote this real vector space by R V. Show that
1) R Cn = R2n ,
2) if dimV = n, then dimR V = 2n.
21. Explain why the following maps are not scalar products:
n
P
1) : Rn Rn R, (x, y) =
|xi yi |
i=1
Chapter 2
Linear Transformations
1
General Properties
Throughout this section V and W will be vector spaces over the same field K.
We used linear linear transformations, in particular the notion of isomorphism
for the the identification of an ndimensional vector space with Kn (see Prop. 5.5,
Chap.1). In this chapter we will study linear transformations in more detail.
Recall (Def. 5.3) that a linear transformation T from V into W is a map T :
V W satisfying
(1.1)
(1.2)
T (x + y) = T (x) + T (y) ,
T (kx) = kT (x) ,
x, y V
k K, x V
(additivity)
(homogeneity).
k, l K, x, y V,
(linearity)
x1
X AX , where X = ... ;
A(x) =t(AX) ,
xn
x = (x1 , . . . , xn ) Kn ,
X =tx .
26
2 1
Consider the particular case A = 3 0 M3,2 (R) ; then
1 5
x1
x2
2x1 + x2
3x1
=
x1 + 5x2
and
A(x) = (2x1 + x2 , 3x1 , x1 + 5x2 ) .
Conversely, any map A : Kn Km with the property that each component
of A(x) is a linear combination with constant coefficients of the components of x, is
given by a matrix A Mm,n (K as above; A is the coefficient matrix of t (A(x)), i.e.
if for any x Kn , the ith component of A(x) is k1 x1 + . . . + kn xn , then the ith row
of A is (k1 , . . . , kn ).
If m = n = 1, then A = [a] for some a K, and A : K K, A(x) = ax.
Linear transformations are also called vector space homomorphisms (or morphisms),
or linear operators, or just linear maps. Their compatibility with the operations can
be regarded as a kind of transport of the algebraic structure of V to W.
Note that (1.1) says that T is a homomorphism of additive groups.
A linear map F : V K (i.e. W = K) is also called a linear form on V.
More examples of linear transformations
(i) V = Pn = the vector space of real polynomial functions of degree n, W =
Pn1 and T (p) = p0 .
(ii) V = C 1 (a, b), W = C 0 (a, b), T (f ) = f 0 .
Zb
0
f (t) dt.
a
by (1.3) .
1. GENERAL PROPERTIES
27
On the other hand kv1 +lv2 U since U is a subspace of V. Therefore kw1 +lw2
T (U).
(ii) By the linear dependence of v1 , . . . , vp there exist k1 , . . . , kp , not all zero such
that k1 v1 + . . . + kp vp = 0. Then T (k1 v1 + . . . + kp vp ) = T (0) = 0. Now the linearity
of T implies that k1 T (v1 ) + . . . + kp T (vp ) = 0.
QED
REMARK 1.2 Note that if in (ii) of the previous theorem we replace dependent
by independent, the statement we obtain is no longer true. The linear independence
of vectors is preserved only by injective linear maps (see Chap. 1, Lemma 5.6).
THEOREM 1.3 Assume dim V = n and let B = {e1 , . . . , en } be a basis of V, and
w1 , . . . , wn arbitrary vectors in W.
(i) There is a unique linear transformation T : V W such that
(1.4)
T (ej ) = wj ,
j = 1, . . . , n .
n
X
xj ej , xj K. Define
j=1
T (x) =
n
X
also y =
j=1
n
X
yj ej V, and k, l K. Then
j=1
T (kx + ly) =
n
X
j=1
n
X
xj wj + l
j=1
n
X
yj wj = kT (x) + lT (y) ,
j=1
j=1
j=1
n
X
xj ej , y =
j=1
j=1
n
X
(xj yj )wj = 0. The linear dependence of w1 , . . . , wn implies that xj = yj , j,
j=1
thus x = y.
QED
28
(1.6)
x V
x V, k K .
T 2 = T T , . . . , T n = T n1 T = T T n1 ,
29
k, l K .
QED
Im T
2x1 x2 = y1
2x1 + x2 + 3x3 = y2
is compatible
= R2 .
Note that T is surjective, but not injective. We will see later in this section that
a linear transformation between two finite dimensional spaces of the same dimension
is surjective if and only if it is injective.
DEFINITION 2.4 The dimensions of Im T and Ker T are called the rank and
nullity of T respectively.
PROPOSITION 2.5 Let y Im T . The general solution of the equation
(2.1)
T (x) = y
is the sum of the general solution of T (x) = 0 and a particular solution of (2.1).
30
Proof. Say that dim V = n. Then Ker T is finite dimensional too as a subspace
of V. Let p = dim (Ker T ), 1 p n 1 and B1 = {u1 , . . . , up } a basis of Ker T
(the cases p = 0, p = n are left to the reader). Extend B1 to a basis of V (see Chap.
1, Thm. 4.8):
B = {u1 , . . . , up , v1 , . . . , vnp } .
Denote wj = T (vj ), j = 1, . . . , n p and B2 = {w1 , . . . , wnp }.
If we show that B2 is a basis of Im T , it will follow that dim (Im T ) = n p, and
the proof of the theorem will be done. For, let y Im T . Then y = T (x), for some
x V. We write x in terms of the basis B of V:
x = a1 u1 + . . . + ap up + b1 v1 + . . . + bnp vnp ,
and apply T , using T (ui ) = 0. We obtain:
y = T (x) = b1 T (v1 ) + . . . + bnp T (vnp ) = b1 w1 + . . . + bnp wnp .
This proves that Im T = Span B2 .
Now suppose that
(2.3)
k1 w1 + . . . + knp wnp = 0 ,
k1 , . . . , knp K .
31
Throughout this section V and W denote two finite dimensional vector spaces over
the same field K, of dimensions n and m respectively; T : V W is a linear
transformation.
Let B = {v1 , . . . , vn } and C = {w1 , . . . , wm } be bases of V and W respectively.
DEFINITION 3.1 The matrix T Mm,n (K) whose j 0 th column is the coordinate
column vector of T (vj ), j {1, ..., n}, is called the matrix associated to T (or the
matrix of T ) with respect to the bases B and C.
m
P
So, if for each j {1, ..., n} we write T (vj ) =
tij wi , then T = [tij ] 1 i m .
i=1
1 j n
Examples
(i) T : R2 R3 , T (x) = (2x1 + x2 , 3x1 , x1 + 5x2 ).
Let B = {v1 , v2 }; C = {w1 , w2 w3 } be the canonical bases of R2 and R3 respectively. Denote by T the associated matrix.
T
T = MB
.
,C
2
1
Thus the 1st column of T is 3 and the 2nd column of T is 0 ;
1
5
2 1
T = 3 0 .
1 5
32
n
X
t1j xj , ...,
j=1
n
X
tmj xj ),
j=1
then, according to Definition 3.1, the matrix associated to T w.r.t. the canonical
bases of Kn and Km is obviously the same as the coefficient matrix of the column
t
(T (x)), namely T = [tij ].
So, T acts as left multiplication by T .
Moreover, the next proposition points out that any linear transformation of finite
dimensional vector spaces reduces to left multiplication by a matrix. This is why left
multiplication by a matrix was called the main example.
PROPOSITION 3.3 (the matrix of a linear transformation)
Let V, W, B, C, T , T as in Def.3.1.
n
m
P
P
Let also x V, x =
xj vj , and y W, y =
yi wi such that y = T (x).
j=1
i=1
Denote X =t [x1 , ..., xn ], Y =t [y1 , ..., yn ]. Then T can be written in the matrix form
as:
n
X
(3.4)
Y = T X, i.e. yi =
tij xj , i = 1, ..., m.
j=1
Proof. T (vj ) =
m
P
i=1
Then y = T (x) =
n
P
j=1
xj
m
P
i=1
tij wi =
m P
n
P
(
tij )wi .
i=1 j=1
n
P
j=1
tij xj ,
i =
QED
33
preserves the linear dependence and the linear independence of vectors, and so does
1 . Thus T (v1 ), ..., T (vr ) are linearly independent and any p r + 1 elements of
{T (v1 ), ..., T (vn )} are linear dependent.
On the other hand, Im T = Span (T (v1 ), ..., T (vn )). It follows that {T (v1 ), ..., T (vr )}
is a basis of Im T , therefore dim (Im T ) = r, then we apply the dimension formula.
QED
We notice that the rank of the associated matrix T does not depend on the chosen
bases. Also, the notation rank T = dim (Im T ) makes now more sense, since the
previous result shos that dim (Im T ) is indeed the rank of a certain matrix.
Using the above result, it is not hard to deduce the next corollary.
COROLLARY 3.6 Let T L(V, W), V, W finite dimensional, T = the matrix
of T w.r.t. B and C.
(i) T is surjective if and only if dim W = rank T.
(ii) T is injective if and only if dim V = rank T.
(iii) T is bijective if and only if dim V = dim W = rank T.
In this case T is invertible, and the matrix of T 1 w.r.t. C, B is T 1 .
(iv) If dim V = dim W = n, then:
T is injective T is surjective T is bijective n = rank T.
Note that a particular case of (iv) is the case V = W, i.e. T End(V), V finite
dimensional.
PROPOSITION 3.7 Let T End(V), dim V = n, n N and B = {v1 , . . . , vn }
B0 = {v10 , . . . , vn0 } two bases of V. Denote by A and B the matrices associated to
T w.r.t. the basis B, and B0 respectively.
Then B = C 1 AC, where C is the matrix of change from B to B0 .
Proof. Let C = [cij ], A = [aij ], B = [bij ] and C 1 = [dij ]. Then
vj0 =
(3.5)
n
X
cij vi ,
vj =
i=1
(3.6)
T (vi ) =
n
X
k=1
aki vk ,
n
X
dij vi0 ,
j = 1, ..., n.
i=1
i = 1, ..., n;
T (vj0 ) =
n
X
l=1
blj vl0 ,
j = 1, ..., n.
34
n
X
cij T (vi ) =
i=1
n X
n
X
cij aki vk =
i=1 k=1
n X
n X
n
X
(
dlk aki cij )vl0 ,
n X
n
X
i=1 k=1
cij aki
n
X
dlk vl0 =
l=1
j = 1, ...n.
By the uniqueness of the representation of T (vj0 ) w.r.t. B0 and (3.6), it follows that
n P
n
P
blj =
dlk aki cij , l, j = 1, ..., n, thus B = C 1 AC.
QED
k=1 i=1
DEFINITION 3.8 The matrices A, B Mn,n (K) are called similar if there exists
a nonsingular matrix C Mn,n (K), such that B = C 1 AC.
By the previous proposition, two matrices are similar if and only if they represent
the same endomorphism (with respect to different bases).
REMARKS 3.9 Properties of similar matrices
(i) Similarity of matrices is an equivalence relation on Mn,n (K).
(ii) Similar matrices have the same rank.
(iii) Nonsingular similar matrices have the same determinant. As a consequence, it
makes sense to define the determinant of an endomorphism as the determinant
of the associated matrix with respect to an arbitrary basis.
Particular Endomorphisms
4. PARTICULAR ENDOMORPHISMS
35
Proof. The conclusion follows easily from the previous theorem, by induction.
Note that for the induction step we need F1 + ... + Fp1 to be a projection. To show
this, use Fi2 = Fi , and Fi Fj = 0, i 6= j.
The details of the proof are left to the reader.
QED
THEOREM 4.4 A finite dimensional real vector space V admits a complex structure if and only if dim V is even.
Proof. Suppose dim V = n = 2m. Let B = {e1 , ..., em , em+1 , ..., e2m } be a basis of
V. Define the endomorphism F : V V by
F(ei ) = em+i ,
F(em+i ) = ei ,
i = 1, ..., m.
ki N i (x0 ) = 0.
i=0
p1
Applying N
to this equality we obtain k0 = 0. Next, apply successively N p2 ,
2
. . . , N , N to obtain k1 = ... = kp1 = 0.
QED
p1
Note that Span {x0 , N (x0 ), ...N
(x0 )} is an invariant subspace of V.
36
THEOREM 4.6 If dim V = n 1, T L(V, V), then there exist two subspaces U
and W of V, invariant with respect to T such that:
(i) V = U W;
(ii) T |U is nilpotent;
(iii) T |W is invertible, when W 6= {0}.
Proof. Denote Nk = Ker (T k ) and Rk = Im (T k ), k N? . Obviously, these are
invariant subspaces with respect to T , and
Nk Nk+1 , Rk Rk+1 , k.
Moreover,
(4.7)
1 1
(i) T : R3 R3 defined by the matrix T = 0 1
0 0
2
since T = 0.
1
1 is nilpotent of index 2
1
37
(5.1)
x, y V .
We will accept without proof that all (possibly infinite dimensional) Euclidean
vector spaces we are working with in this course have the property mentioned above
for the finite dimensional case. Then the following definition makes sense.
DEFINITION 5.1 The endomorphism T defined by (5.1) is called the adjoint of
T . If K = R, then T is also called the transpose of T .
REMARKS 5.2
(i) idV = idV .
(ii) Note that (5.1) is eqiuvalent to
hT x, yi = hx, T yi ,
(5.2)
(iii) (T ) = T ,
x, y V .
T End(V).
hT x, T yi = hx, yi ,
x, y V
38
We will see later that the definitions given above for endomorphisms and matrices
are related in a natural way.
PROPOSITION 5.4 Let T , S End(V).
(i) (T S) = S T .
(ii) If T is invertible, then T is invertible too; moreover, (T )1 = (T 1 ) .
(iii) (T + S) = T + S .
(iv) If K = C, k C, then (kT ) = kT .
(v) If K = R, k R, then (kT ) = kT .
Proof.
(i)
by conjugate linearity
x V.
x V.
39
x, y V , C .
x, y V kT xk = kxk ,
x V.
1
kx + yk2 kx yk2 + ikx + iyk2 ikx iyk2 ;
4
If K = R, then
1
kx + yk2 kx yk2 .
4
In the left hand side of each identity replace hx, yi by hT x, T yi, then use kT (x +
cy)k = kx + cyk, c {1, i}, to end up with hx, yi.
QED
hx, yi =
hT T x, yi = hx, yi, x, y V
h(T T idV )x, yi = 0, x, y V
T T = idV .
QED
REMARKS 5.9
(i) If T preserves the inner product, then T is injective. (This may be deduced
either from Prop.5.8 or Thm.5.7. By Prop.5.8, T admits a left inverse, thus it is
injective. It is an easy exercise for the reader to prove that Ker T = {0} using Thm.
5.7.
(ii) If V is finite dimensional, then T preserves the inner product if and only if
T T = T T = idV .
In some books an unitary endomorphism T is defined as an endomorphism which
satisfies T T = T T = idV . This condition is stronger than the one we used in the
definition here, but they are equivalent in the finite dimensional case. The equivalence
comes up easily from Prop.5.8, if we recall that a linear transformation of finite
dimensional vector spaces is injective if and only if it is bijective.
THEOREM 5.10 Let V be finite dimensional, dim V = n, B = {e1 , . . . , en } an
orthonormal basis of V, and T = [tij ] the matrix of T w.r.t. B.
(I) Assume K = C. Then:
(I.i) T is Hermitian
(I.ii) T is skew Hermitian
(I.iii) T is unitary
T is Hermitian.
T is skew Hermitian.
T is unitary.
40
T is symmetric
T is skew symmetric
T is orthogonal
T is symmetric.
T is skew symmetric.
T is orthogonal.
Proof. The proofs of (II) are almost the same as the ones for (I). We will only
prove (I.i), leaving the rest to the reader.
n
P
Denote by [tij ] the matrix of T w.r.t. B. Multiplying T ej =
tkj ek by ei in
k=1
(5.4)
tkj ek , ei
= tij .
(5.5)
X
n
j=1
n
X
j,k=1
n
X
xj ej ,
n
X
x k T ek
k=1
xj xk hT ek , ej i =
n
X
xj xk hej , T ek i
j,k=1
n
X
j,k=1
xj xk tjk =
n
X
xj xk tkj
j,k=1
xk xj tkj = hx, T xi ,
k,j=1
6. ISOMETRIES
41
Isometries
42
x, y V.
7. PROBLEMS
43
Problems
1. Let a
6= 0 be a fixed vector in the space of the free vectors V3 , and the map
T : V3 V3 , T (
x) = a
x
.
1) Show that T is a linear transformation.
2) Show that T is neither injective, nor surjective.
3) Find Ker(T ), Im(T ), and show that Ker(T ) Im(T ) = V3 .
2. Let Pn be the complex vector space of the polynomial functions of degree at
most n. Show that the map T : Pn Pn , defined by T p(x) = p(x + 3) p(x),
x C, is a linear transformation. Is T injective?
3. In each of the following cases determine the matrix associated to T with respect
to the canonical bases, the rank, and the nullity of T .
1) T : R3 C3 , T (x) = ix.
2) T : M22 (K) M22 (K), T (A) = t A.
1 i
3) T : C M22 (C), T (x) = x
.
i 1
4.
1)
2)
3)
Let a
6= 0 be a fixed vector in V3 , and the map T : V3 R, T (x) = h
a, x
i.
Show that T is a linear form.
Study the injectivity and the surjectivity T .
Determine Ker(T ), Im(T ) and their dimensions.
0 1 1
1 0 1
1) T1 = 1 1 2 , 2) T2 = 2 1 2
3 2 1
0 1 2
are the matrices of the transformations, with respect to the basis {w1 = (1, 2, 3),
w2 = (3, 1, 2), w3 = (2, 3, 1)}.
7. 1) Show that the endomorphism T : M33 (R) M33 (R), T (A) = t A is a
symmetric involution.
2) Let V be the real Euclidean vector space of C functions f defined on [a,b],
with f (a) = f (b). Show that the differentiation operator D : V V, D(f ) = f 0 is
antisymmetric.
3) Show that the endomorphism T : R2 R2 ,
T (x) = (x1 cos x2 sin , x1 sin + x2 cos ),
is orthogonal.
R,
44
6 2i 2i
T =
4 + 8i 0
Chapter 3
General Properties
46
We saw in Chap.2, Section 3 that similar matrices represent the same endomorphism
(with respect to different bases). By (1.1), (1.2), (1.3) follows that similar matrices
have the same eigenvalues; if B = C 1 AC and X an eigenvector for A, then C 1 X
is an eigenvector for B corresponding to the same eigenvalue.
Sometimes eigenvectors and eigenvalues are called characteristic (or proper) vectors,
and characteristic (or proper) values, respectively.
DEFINITION 1.4 Let K and denote
S() = {x V | T x = x}.
If is an eigenvalue of T , S() is called the eigenspace of .
The following proposition is immediate; this type of result, that describes eigenvalues without involving eigenvectors will prove very helpful.
PROPOSITION 1.5 (i) S() = Ker (T idV ).
(ii) is an eigenvalue of T if and only if Ker (T idV ) 6= {0}.
From (i) follows that S() is a subspace of V, since it is the kernel of a linear
operator.
Examples
(i) Consider V = C (R) and the endomorphism D : V V, D(f ) = f 0 .
Each R is an eigenvalue of D; it is easy to check that the function f ,
f (x) = ex is an eigenvector for . Moreover, for a fixed we can solve the
differential equation f 0 = f whose solution is
S() = {f | f (x) = cex , x R, for some c R} = Span {f }.
In this example it happens that all eigenspaces are one-dimensional.
(ii) V = R3 , T : V V, T (x, y, z) = (4x + 6y, 3x 5y, 3x 6y + z).
We can check that 1 = 2 is an eigenvalue with a corresponding eigenvector
v1 = (1, 1, 3), and 2 = 1 is an eigenvalue with a corresponding eigenvector
v2 = (2, 1, 0), and another one v3 = (0, 0, 1).
At the end of the next section it will be striaghtforward that that -2 and 1 are the
only eigenvalues, and S(2) = {(x, y, z) | x+y = 0, x2y+z = 0} = Span {v1 },
S(1) = {(x, y, z) | x + 2y = 0} = Span {v2 , v3 }; dim S(2) = 1, S(2) is a
straight line, and dim S(1) = 2, S(1) is a plane.
THEOREM 1.6 (i) For an eigenvector of T corresponds a single eigenvalue.
(ii) Eigenvectors corresponding to distinct eigenvalues are linearly independent.
(iii) S() is an invariant subspace of T .
(iv) Eigenspaces corresponding to two distinct eigenvalues are independent.
Proof. (i) Let v V \ {0} such that T v = v and T v = 1 v. Then ( 1 )v = 0,
v 6= 0 imply = 1 .
47
k1 v1 + . . . + kp vp = 0.
k1 1 v1 + . . . + kp p vp = 0
48
a11
a12
...
a21
a22 . . .
det(A I) =
..
an1
an2
...
= 0.
ann
a1n
a2n
a11
a12
a1n
a22
a2n = 3 + 2 (tr A) J + det A,
P () = a21
a31
a32
a33
.
where tr A = a11 + a22 + a33 , J =
+
+
a21 a22 a31 a33 a32 a33
(ii) Assume B = C 1 AC. Then
det (B I) =
=
det (C 1 AC I)
det C 1 det (A I)det C
=
=
det (C 1 (A I)C)
det (A I).
= X.
(?) AX = X,
we find
(??) AX
49
In (?) we left multiply by tX, then take the transpose of both sides to get tX tAX =
tXX. Then ( )
tXX = 0
=
tXX. Left multiplicatin by tX in (??) implies tX AX
t
t
2
4
6 0
The associated matrix with respect to the canonical basis is T = 3 5 0 ,
3 6 1
and the characteristic polynomial
4
6
0
5
0 = ( + 2)( 1)2 .
P () = 3
3
6
1
Then the eigenvalues are the roots of P (), namely -2 and 1. In order to find the
eigenvectors, for each eigenvalue we solve the system (T I)X = 0. Here, for
= 2, we have to solve:
6x + 6y = 0
x+y =0
3x 3y = 0
(T + 2I)X = 0
x 2y + z = 0
3x 6y + 3z = 0
1
x=
y = X = = 1 , R.
z =
50
The eigenspace of = 2 is the solution space of the above system, namely S(2) =
Span {(1, 1, 1)}
For = 1, solve
3x + 6y = 0
3x 6y = 0 x + 2y = 0
(T I)X = 0
3x 6y = 0
0
2
2
+ 0 , , R.
=
X=
1
0
1
x2 + x3 + x4 = 0
0
2x3 + 3x4 = 0
(T I)X = 0
X =
0 , R.
3x4 = 0
0
The column t [1, 0, 0, 0] represents the constant polynomial 1; S(1) = Span {1} =
{the constant polynomials} ' R.
(iii) Let us determine also the eigenvalues
0 1
A= 1 1
0 1
0
1 .
0
From
= (2 2)
P () = 1 1
0
1
0
1
51
T
MB
= D is diagonal. Let D =
.
..
.
0 0 . . . dn
This means that T ej = dj ej j = 1, ..., n i.e. the vectors in B are eigenvectors of
T , and d1 , ..., dn are the associated (not necessarily distinct) eigenvalues.
Conversely, If B1 = {v1 , ..., vn } is a basis of V such that each vj is an eigenvector,
then T vj = j vj , for some j K. It follows that the matrix of T with respect to
B1 is
1 0 . . . 0
0 2 . . . 0
QED
.
0
0 . . . n
THEOREM 3.4 The dimension of an eigenspace of the endomorphism T is less
or equal to the multiplicity order of the corresponding eigenvalue, as a root of the
characteristic polynomial.
52
T (vj ) =
n
X
akj ej , j = p + 1, ..., n.
k=1
T = 0
...
0
B is
...
..
.
...
...
...
a1p+1
0 app+1
...
...
0 anp+1
...
a1n
,
. . . apn
... ...
. . . ann
p
P
j=1
p
P
j=1
j=1
p
P
j=1
j=1
mj = n.
ordered set B1 = {v1 , ..., vn } whose n elements are chosen such that the first m1 form
53
a basis in S(1 ), the next m2 form a basis in S(2 ), and so on, up to the last mp
elements which form a basis in S(p ). Then the elements of B are distinct eigenvectors
(see Thm. 1.6 (iv)). Moreover, using induction on p, one shows that B is a basis of
V. By Prop. 3.3, this means that T is diagonalizable.
QED
COROLLARY 3.6 If T is diagonalizable and 1 , 2 , . . . , p are its distinct eigenvalues, then
V = S(1 ) . . . S(p ).
It is clear now that not all endomorphisms of finite dimensional spaces (and not
all matrices) are diagonalizable.
From the proof of Theorem 3.5 follows that for diagonalizable endomorphisms
(matrices) the diagonal form is unique up to the order of the diagonal entries. The
diagonalizing matrix is not unique either. Moreover, there are infinitely many diagonalizing matrices corresponding to the same diagonal form.
Diagonalization Algorithm
1) Fix a basis of V and determine the matrix T of T with respect to that basis.
2) Determine the eigenvalues of T by solving the characteristic equation, P () = 0.
3) If K = R and there are non-real roots of P (), then we stop with the conclusion
that T is not diagonalizable.
Otherwise, move to step 4).
4) For each eigenvalue j check whether the multiplicity mj is equal to dim S(j ).
For, it suffices to verify if
mj = n rank (T j I), j.
If there exists at least one j such that mj > n rank (T j I), then we stop; T
is not diagonalizable, by Thm. 3.5.
If all equalities hold, point out that T is diagonalizable and go to step 5).
5) Solve the p systems (T j I)X = 0, where p is the number of distinct eigenvalues. For each system chose mj independent solutions, that represent the coordinates
of vectors of a basis in S(j ). Form a basis of V such that the first m1 vectors form
a basis of S(1 ), the next m2 form a basis of S(2 ), and so on.
6) The matrix of T associated to the basis formed in 4) is diagonal; its diagonal
entries are:
1 , . . . 1 ; 2 , . . . , 2 ; p , . . . , p
where each j appears mj times. Let us denote this diagonal matrix by D.
7) The diagonalizing matrix is C whose columns are the solutions of the systems
in 5), i.e. the coordinate-change matrix from the initial basis to the basis formed by
eigenvectors.
Examples
(i) The endomorphism T : R3 R3 , T (x, y, z) = (4x+6y, 3x5y, 3x6y+z).
studied in the previous sections is diagonalizable. Its matrix with respect to the basis
54
2
0 0
{v1 = (1, 1, 1), v2 = (2, 1, 0), v3 = (0, 0, 1)} is D = 0 2 0 . The
0
0 1
1 2 0
1 0 , which satisfies D = C 1 T C.
diagonalizing matrix is C = 1
1
0 1
(ii) The endomorphism T : P3 P3 , T (p) = q, q(X) = p(X + 1) whose eigehenvalues and eigenspaces were determined in Section
nor is its
2 is not diagonalizable,
1 1 1 1
0 1 2 3
1 0 ...
0 1 ...
..
..
.
.
.
..
0 0 ... ...
The matrix
0 0
0 0
Mk,k (K)
..
.
1
is said to be the Jordan cell (or Jordan block) of order k associated to the scalar .
The Jordan cells of order 1, 2 and 3 respectively
1
1
[],
, 0
0
0 0
are:
0
1 .
J1 0 . . . 0
0 J2 . . . 0
J =
(4.1)
(the canonical Jordan form,)
..
.
0
. . . Js
55
where each Ji is a Jordan cell associated to some scalar i .; 1 , ..., s are not necessarily distinct.
A matrix J of type (4.1) is said to be in Jordan form or to be a Jordan matrix.
We call the matrix A Jordanizable if it is similar to a Jordan matrix; if J =
C 1 AC, then C is called the Jordanizing matrix of A.
If T is the matrix of T with respect to the basis B, and the matrix of T with
respect to some other basis B1 is the Jordan matrix J, let us denote by C the
coordinate-change matrix from B to B1 .Then J = C 1 T C, so the endomorphism
T is Jordanizable if and only if its matrix T is Jordanizable.
The basis B1 is called a Jordan basis.
We saw that if an endomorphism admits a diagonal form, the corresponding basis
is made up of eigenvectors. Let us take a closer look at the Jordan basis.
Let J be a Jordan matrix, which is the matrix of T , {e1 , ..., en } the correspoding
Jordan basis, and J1 the first cell of J. Assume J1 is of dimension k1 2 and its
diagonal entries are equal to 1 K. Then we observe that
(4.2)
56
Vj V, j = 1, . . . , p such that:
(i) dim Vj = mj , j = 1, . . . , p
(ii) V = V1 V2 . . . Vp
(iii) T |Vj = N| + | idV , | = , . . . , where N , . . . , N are nilpotent endomorphisms of various orders.
Proof. for each fixed j {1, . . . , p}, consider the endomorphisms Tj = T j idV
and apply Thm. 4.6, Chap.2 to obtain the subspaces Vj and Wj such that V =
Vj Wj , and Tj |Vj is nilpotent and Tj |Wj is invertible. Since Vj is Tj -invariant,
it follows that it is also T = Tj + j idV -invariant.
Let T |Vj End (Vj ) and T |Wj End (Wj ) be the restrictions of calT to V j
and to Wj respectively. From V = Vj Wj follows
det(T idV ) = det(T |Vj idVj )det(T |Wj idWj ), ; as polynomials in .
Then j is an eigenvalue for T |Vj , of multiplicity mj , since T |Wj j idWj is
invertible. On the other hand, j is the only eigenvalue of T |Vj since 0 is the only
eigenvalue of the nilpotent endomorphism Tj |Vj , by the lemma.
It is clear now that the degree of the polynomial det(T |Vj idVj ) is mj , thus
dim Vj = mj . Therefore (i) and (iii) are proved.
p
P
(ii) is immediate by induction on p, using
mj = n.
j=1
We will accept the next theorem without proof, but we note that the missing proof
relies on Thm. 4.5.
THEOREM 4.6 (Jordan) The endomorphism T admits a Jordan form if and only
if its characteristic polynomial has all its n roots (counted with their multiplicities) in
K.
COROLLARY 4.7 Any endomorphism of a finite dimensional complex vector space
(and any complex matrix) admits a Jordan form.
REMARKS 4.8 We would like to point out in more detail the relationship of the
Jordan form J and the decomposition in Theorem 4.5. In the next set of remarks we
keep the notation used in Thm. 4.5. Denote also the Jordan basis corresponding to
J by B1 , and dj = dim S(j ).
57
(i) The number of Jordan cells having j on the diagonal is dj , i.e. the maximal
number of linearly independent eigenvectors.
(ii) mj = dim Vj = the sum of dimensions of Jordan cells corresponding to j .
(iii) Assuming that the Jordan blocks of J are ordered such that the first d1 correspond to 1 , the next d2 correspond to 2 , and so on, it follows that the first
d1 vectors of B1 form a basis of V1 , the next d2 form a basis of V2 , and so on,
up to the last dp vectors of B1 which form a basis of Vp .
In practice, if all multiplicities are small enough we may use the following algorithm.
Algorithm for Finding the Jordan Form and the Jordan Basis
1) Find the matrix T of the endomorphism with respect to an arbitrary (fixed)
basis.
2) Solve the characteristic equation. If this equation has all n roots in K, then
the endomorphism is Jordanizable, otherwise it is not.
3) Compute dim S(j ) = n rank (T j I) mj . We have already noticed that
the number of Jordan blocks corresponding to the eigenvalue j is equal to dim S(j ).
Sometimes, for small values of mj this fact allows us to figure out the Jordan blocks
corresponding to j (see the Remark below).
If dim S(j ) = mj , then there are mj Jordan cells of dimension 1, corresponding
to j .
4) For each eigenvalue j determine the eigenspace S(j ), by solving the linear
homogeneous system
(Syst.1)
(T j I)X = 0.
(T j I)X = X1 .
(T j I)X = X2 .
(T j I)X = Xk
leads to a contradiction. The number k = k(j ) found in this way represents the
dimension of the largest Jordan cell having j on the diagonal.
6) Pick particular values for the parameters which appear in X1 , . . . , Xk to obtain
the basis vectors corresponding to the part of the Jordan matrix that has j on
the diagonal. These particular values must be chosen such that the compatibility
conditions are all satisfied, and the eigenvectors are linearly independent.
58
[]
0
0
0
1
if dim S() = 1;
0
0
if dim S() = 3.
0
0
0
1
0
0
0
0
0
1
0
0
0
0
0
0
0
0
if dim S() = 3;
0
0
0
m=4
m=2
m=3
0
0
0
0
0
0
if dim S() = 1;
if dim S() = 2.
0
0
0
0
if dim S() = 2;
if dim S() = 1;
or
0
0
0
0
0
0
0
0
0
0
0
0
if dim S() = 2;
0
0
0
0
if dim S() = 4.
0
Note that dim S() distinguishes all above cases except for m = 4, dim S() = 2.
Examples (i) Find the Jordan form and the Jordan basis for the endomorphism
T : R4 R4 ,
T (x) = (3x1 x2 x3 2x4 , x1 + x2 x3 x4 , x1 x4 , x2 + x3 + x4 ).
We apply the algorithm.
1) The matrix of T with respect to the canonical basis is
3 1 1 2
1
1 1 1
.
T =
1
0
0 1
0 1
1
1
59
1 1
0 1
.
J =
1
2
This is the matrix of T with respect to a Jordan basis B = {e1 , e2 , e3 , e4 }, where
e1 , e3 are linearly independent eigenvectors in S(1), e2 is a principal vector for e1 ,
and e4 is an eigenvector in S(2).
We will apply the rest of the algorithm for 1 , then for 2 .
For 1 = 1:
4) Solve
(Syst.1)
(T I)X = 0;
X1 =t [ ],
, R.
e1 and e3 are of the form X1 with and chosen for each of them, such that e1 , e3
are linearly independent, and e1 admits a principal vector.
5) The system
(Syst.2)
(T I)X =t [ ]
, R.
We know from 3) that there is no need to look for more principal vectors.
6) For e1 , choose = 1 = in X1 ; e1 = (1, 0, 0, 1). For e2 , = 1 (like in e1 ),
and pick = 1, = 0, which give e2 = (0, 1, 0, 0).
If in X1 we take = 1, = 0 for e3 , we obtain e3 = (1, 1, 1, 0).
For 2 = 2:
4) The system (T 2I)X = 0 has the general solution t [2a a a 0] a R.
Any nontrivial solution does the job for e4 ; let us take a = 1. Then e4 = (2, 1, 1, 0).
The problem is now completely solved. The Jordanizing matrix is
1
0 1 2
0 1 1 1
.
C=
0
0 1 1
1
0 0 0
We may check that J = C 1 T C, or equivalently, CJ = T C.
Note that we determined the Jordan form J immediately after we found out
dim S(1); then it was clear that an eighenvector associated to the eigenvalue 1 = 1
60
admits either one principal vector, or none, so we stoped looking for principal vectors
after solving (Syst.2). Let us see what would happen if we tried to determine one
more principal vector, namely to solve
(T I) =t [ + + + ].
The compatibility condition for this system is = 0, that contradicts e1 6= 0.
(ii) Determine the Jordan form and the Jordanizing matrix of
4 1 1 1
1
2
0 1
M4,4 (R).
A=
0
0
3 0
0
0
1 3
We find P () = ( 3)4 , 1 = 3, m(3) = 4 = dim R4 , thus A is Jordanizable.
dim S(3) = 4rank (A3I) = 2, thus the Jordan form of A is one of the following:
3 1 0 0
3 1 0 0
0 3 0 0
0 3 1 0
J =
0 0 3 1 or J = 0 0 3 0 .
0 0 0 3
0 0 0 3
We will find out which of J or J is the correct one, while looking for the Jordan
basis. The system
(Syst.1)
(T I)X = 0
+ ],
(T 3I)X =t [
, R.
+ ]
(T 3I)X =t [a b
a + b].
a + b],
1
0 0 0
0
0 1 0
3 1 0 0
0 3 0 0
1
J =
0 0 3 1 will certainly satisfy J = C AC.
0 0 0 3
61
hT v, vi
.
hv, vi
hT v, vi
hT v, vi
hv, T vi
=
=
= .
hv, vi
hv, vi
hv, vi
Thus R.
(ii) Let 1 6= 2 be eigenvalues with corresponding eigenvectors v1 and v2 respectively. From
1 hv1 , v2 i = hT v1 , v2 i = hv1 , T v2 i = 2 hv1 , v2 i = 2 hv1 , v2 i,
follows (1 2 )hv1 , v2 i = 0, i.e. hv1 , v2 i = 0, as 1 6= 2.
(iii) will be proved by induction on n. For n = 1 it is trivially true. Let n 2
and assume (iii) is true for vector spaces of dimension n 1. From (i) follows that
T has at least one real eigenvalue 1 . Let v1 be an eigenvector associated to 1 and
denote U = {v1 } . We know from Chapter 1 that U is a subspace of V; moreover,
V = Span {v1 } U , thus dim U = n 1.
In order to apply the inductive hypothesis for U we need to show that U is T invariant. For, let x U ; then hT x, v1 i = hx, T v1 i = 1 hx, v1 i, thus T x U.
The restriction T |U : U U is Hermitian too, so by the inductive hypothesis
there exists an orthonormal basis {u2 , . . . , un } of U , made up of eigenvectors. Then
1
v1 , u2 , . . . , un } is the required basis of V.
QED.
{
||v1 ||
Translating the theorem in matrix form we obtain
COROLLARY 5.3 Let A Mn,n (C), A Hermitian. Then
62
(i) A is diagonalizable.
(ii) If D is a diagonal form of A, then D Mn,n (R).
(iii) There exists a diagonalizing unitary matrix (in Mn,n (C).)
REMARK 5.4 Let A Mn (R), A symmetric. Since real symmetric matrices are
particular cases of Hermitian matrices, the previous corollary applies, thus all eigenvalues of A are real and A is diagonalizable; there exists a diagonalizing orthogonal
matrix C Mn,n (R). In particular. Theorem 2.4 comes out as corollary of (i).
COROLLARY 5.5 If K = R and T is symmetric, then
(i) Assuming V finite dimensional, all roots of the characteristic polynomial of T
are real.
(ii) Eigenvectors corresponding to distinct eigenvalues are orthogonal.
(iii) If dim(V) = n ∈ N*, then there exists an orthonormal basis of V, made up of
eigenvectors. Thus a symmetric endomorphism is always diagonalizable.
Similarly we can deduce some properties of eigenvalues and eigenvectors of skew
Hermitian and skew symmetric endomorphisms.
REMARK 5.6 (i) The eigenvalues of a skew Hermitian endomorphism are purely
imaginary or zero. Assuming V finite dimensional over R, it follows that all
roots of the characteristic polynomial are purely imaginary or zero.
(ii) Eigenvectors corresponding to distinct eigenvalues of a skew Hermitian endomorphism are orthogonal.
(iii) Note that skew symmetric endomorphisms are not necessarily diagonalizable.
THEOREM 5.7 Let V be a complex (real) Euclidean space and T ∈ End(V) a
unitary (orthogonal) endomorphism.
(i) If there exist any eigenvalues, their absolute value is 1.
(ii) Eigenvectors corresponding to distinct eigenvalues are orthogonal.
(iii) If K = C, V is finite dimensional, and T is unitary, then T is diagonalizable
and V admits an orthonormal basis made up of eigenvectors.
(iv) If K = R, V is finite dimensional, T is orthogonal and the characteristic
polynomial has no other roots in C besides 1 or −1, then T is diagonalizable and
V admits an orthonormal basis made up of eigenvectors.
    A = CJC⁻¹,  A² = CJ²C⁻¹,  . . . ,  Aᵐ = CJᵐC⁻¹.
If A admits the diagonal form, i.e. J = D is a diagonal matrix, then the diagonal
form D of A is even more convenient to use for computing powers of A:
    A = CDC⁻¹,  A² = CD²C⁻¹,  . . . ,  Aᵐ = CDᵐC⁻¹.
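A short numeric sketch of this computation, on an illustrative symmetric matrix (not one from the text):

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [1.0, 2.0]])          # symmetric, hence diagonalizable

    eigvals, C = np.linalg.eig(A)       # columns of C are eigenvectors
    m = 10
    Dm = np.diag(eigvals ** m)          # D^m is computed entrywise
    Am = C @ Dm @ np.linalg.inv(C)

    assert np.allclose(Am, np.linalg.matrix_power(A, m))
    print(Am)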
    P(λ) = Σ_{k=0}^{n} a_k λᵏ,  a_k ∈ K.
Then
    P(A) = a_n Aⁿ + a_{n−1} Aⁿ⁻¹ + . . . + a_1 A + a_0 I.
Writing adj(λI − A) = B_{n−1}λⁿ⁻¹ + . . . + B_1λ + B_0, multiplying the coefficient
identities by the corresponding powers of A and summing, we obtain the telescoping
sum
      P(A) =  AⁿB_{n−1}
            + (−AⁿB_{n−1} + Aⁿ⁻¹B_{n−2})
            + (−Aⁿ⁻¹B_{n−2} + Aⁿ⁻²B_{n−3})
            + . . . . . . . . . . . . . . . . . .
            + (−A²B_1 + AB_0)
            + (−AB_0)
            = 0.
QED.
If the series are convergent, their sums are denoted by f(A) and f(T) respectively,
where f is the function defined by the convergent series f(t) = Σ_m a_m tᵐ, t ∈ K.
f(A) is called a function of matrix or matrix function, and f(T) is called a function of
endomorphism.
On finite dimensional vector spaces, the study of series of endomorphisms reduces
to the study of matrix series. On the other hand, as a consequence of the Cayley–
Hamilton theorem, Σ_m a_m Aᵐ can be expressed as a matrix polynomial Q(A), of
degree n − 1, whose coefficients are numerical series. If Σ_m a_m Aᵐ is convergent,
then the coefficients of Q(A) are convergent numerical series.
If A admits distinct eigenvalues λ1, . . . , λn, then the polynomial of degree n − 1
associated to Σ_m a_m Aᵐ can be written in the Lagrange form
    f(A) = Σ_{j=1}^{n} [(A − λ1 I) . . . (A − λ_{j−1} I)(A − λ_{j+1} I) . . . (A − λ_n I)]
           / [(λ_j − λ1) . . . (λ_j − λ_{j−1})(λ_j − λ_{j+1}) . . . (λ_j − λ_n)] · f(λ_j)
or
    f(A) = Σ_{j=1}^{n} Z_j f(λ_j),
where the matrices Z_j do not depend on f. If the eigenvalues λ1, . . . , λp have
multiplicities m1, . . . , mp, the analogous formula is
    f(A) = Σ_{k=1}^{p} Σ_{j=0}^{m_k − 1} Z_kj f⁽ʲ⁾(λ_k),
where f⁽ʲ⁾ denotes the derivative of order j of the function f, and Z_kj are matrices
independent of f.
The following series are convergent for any matrix A:
    e^A = Σ_{m=0}^{∞} (1/m!) Aᵐ,
    sin A = Σ_{m=0}^{∞} ((−1)ᵐ/(2m+1)!) A²ᵐ⁺¹,
    cos A = Σ_{m=0}^{∞} ((−1)ᵐ/(2m)!) A²ᵐ,
    e^{tA} = Σ_{m=0}^{∞} (tᵐ/m!) Aᵐ,
where t ∈ K.
The last series is very useful in the theory of linear differential equations with
constant coefficients.
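As a sanity check, the truncated series for e^{tA} can be compared against a library implementation; the sketch below assumes numpy and scipy are available, on an illustrative matrix:

    import numpy as np
    from scipy.linalg import expm

    A = np.array([[0.0, 1.0],
                  [-1.0, 0.0]])
    t = 0.5

    series = np.zeros_like(A)
    term = np.eye(2)                    # the m = 0 term of the series
    for m in range(1, 30):
        series = series + term          # accumulate t^m A^m / m!
        term = term @ (t * A) / m

    print(np.allclose(series, expm(t * A)))   # True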
Problems
1. Determine the eigenvalues and the eigenvectors of the endomorphisms in problems 4 - 7, Chapter 2.
2. Let V be the real Euclidean vector space of the continuous real functions on
[0, 2π]. Let T : V → V be the endomorphism defined by
    g = T(f),   g(x) = ∫₀^{2π} [1 + cos(x − t)] f(t) dt,   x ∈ [0, 2π].
1) Show that the subspace Im(T ) is finite dimensional and find an orthogonal
basis of this subspace of V.
2) Determine Ker(T ).
3) Show that T is symmetric; find its eigenvalues and eigenvectors.
3. Determine the diagonal form and the corresponding basis for each of the following
endomorphisms; compute e^A.

                        |  7  4 −1 |                       |  4  6  0 |
    1) A : R³ → R³, A = |  4  7 −1 |   2) A : R³ → R³, A = | −3 −5  0 |
                        | −4 −4  4 |                       | −3 −6  1 |

                        | 1  0  0  1 |
    3) A : R⁴ → R⁴, A = | 0  1  0  0 |
                        | 0  0  1 −2 |
                        | 1  0 −2  5 |
4. Determine the canonical forms (diagonal or Jordan) of the following endomorphisms, and the corresponding bases.

                        | 1  0  3 |                       | 6  6 −15 |
    1) A : R³ → R³, A = | 3  2  3 |   2) A : R³ → R³, A = | 1  5  −5 |
                        | 3  0  1 |                       | 1  2  −2 |

                        | 2 −1 |                          | 0  1  0 |
    3) A : R² → R², A = | 1  4 |     4) A : R³ → R³, A =  | 0  0  1 |
                                                          | 2 −5  4 |

                        | 2  0  0  0 |
    5) A : R⁴ → R⁴, A = | 1  3  1  1 |
                        | 0  0  0  1 |
                        | 1  1  0  2 |
5. For each matrix A in the previous problem compute An , eA , sin A, cos A using
the Cayley-Hamilton theorem.
6. Use the Cayley–Hamilton theorem to determine A⁻¹ and the value of the
matrix polynomial Q(A) = A⁴ + 3A³ − 9A² − 28A, where

        | −1  2  2 |
    A = |  2 −1  2 | .
        |  2  2 −1 |
Chapter 4
Bilinear Forms
A map A : V × V → K is called a bilinear form if it is linear in each argument:
    A(kx + ly, z) = kA(x, z) + lA(y, z),
    A(x, ky + lz) = kA(x, y) + lA(x, z),   ∀x, y, z ∈ V, ∀k, l ∈ K.
Using the notation aij = A(ei, ej), A = [aij] ∈ Mn,n(K), we may describe the
bilinear form A by one of the following equalities:
    A(x, y) = Σ_{i=1}^{n} Σ_{j=1}^{n} aij xi yj
or
    A(x, y) = ᵗX A Y.
QED.
QED.
    A(x, y) = Σ_{i=1}^{n} Σ_{j=1}^{n} aij xi yj = Σ_{j=1}^{n} ( Σ_{i=1}^{n} aij xi ) yj,
which shows that x ∈ Ker A if and only if x is a solution of the linear homogeneous
system
    Σ_{i=1}^{n} aij xi = 0,   j = 1, . . . , n.
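In matrix terms, Ker A is the null space of the transposed Gram matrix; a small sympy sketch on an assumed example matrix:

    from sympy import Matrix

    a = Matrix([[1, 2, 3],
                [2, 4, 6],
                [1, 1, 1]])    # Gram matrix [a_ij] of some bilinear form

    # x in Ker A  <=>  sum_i a_ij x_i = 0 for each j  <=>  t(a) x = 0
    kernel = a.T.nullspace()
    print(kernel)              # basis of Ker A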
Quadratic Forms
Throughout the section V denotes a K-vector space and A a symmetric bilinear form
on V.
DEFINITION 2.1 A map Q : V K is called a quadratic form on V if there
exists a symmetric bilinear form A such that Q(x) = A(x, x), x V.
REMARK 2.2 In the definition of the quadratic form, Q and A are uniquely determined by each other. Obviously Q is determined by A. For the converse, the
following computation gives a formula for A in terms of Q.
    Q(x + y) = A(x + y, x + y) = A(x, x) + A(y, y) + A(x, y) + A(y, x).
From the symmetry of A,
    Q(x + y) = A(x, x) + A(y, y) + 2A(x, y),  therefore
    A(x, y) = (1/2)(Q(x + y) − Q(x) − Q(y)).
The symmetric bilinear form A associated to Q is called the polar of Q.
Example The quadratic form corresponding to the inner product on a real vector
space is the square of the Euclidean norm
Q(x) = hx, xi = kxk2 , x V.
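The polarization formula of Remark 2.2 is easy to test numerically; in the sketch below Q is the squared Euclidean norm, so the recovered polar form must be the dot product:

    import numpy as np

    def Q(x):
        return float(np.dot(x, x))           # Q(x) = <x, x> = ||x||^2

    def polar(Q, x, y):
        return (Q(x + y) - Q(x) - Q(y)) / 2  # recovers the symmetric bilinear form

    x = np.array([1.0, 2.0, -1.0])
    y = np.array([0.5, 0.0, 3.0])
    print(polar(Q, x, y), np.dot(x, y))      # both equal <x, y>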
    Q(x) = A(x, x) = Σ_{i=1}^{n} Σ_{j=1}^{n} aij xi xj = ᵗX A X.
(iii) If dim V = n ∈ N*, then dim U + dim U⊥ ≥ dim V; equality holds if and only
if A|U is nondegenerate.
(iv) If dim V = n ∈ N*, then
    U ⊕ U⊥ = V  ⟺  A|U is nondegenerate.
Proof. (i) Let y1, y2 ∈ U⊥, i.e. A(x, y1) = 0, A(x, y2) = 0, ∀x ∈ U. For any
k, l ∈ K,
    A(x, ky1 + ly2) = kA(x, y1) + lA(x, y2) = 0,
thus ky1 + ly2 ∈ U⊥.
(ii) y ∈ U⊥ implies A(x, y) = 0, ∀x ∈ U; in particular A(ui, y) = 0, i = 1, . . . , p,
since ui ∈ U. Conversely, if x ∈ U, then x = Σ_{i=1}^{p} xi ui, using the given basis
of U. Then
    A(x, y) = Σ_{i=1}^{p} xi A(ui, y) = 0,  i.e. y ∈ U⊥.
QED.
Example Note that if a symmetric bilinear form A (or the quadratic form Q) is
nondegenerate on the space V, it might happen that its restriction to some subspace of V is degenerate. For example, Q(x) = x1² − x2² + x3² is nondegenerate on
R³; however, the restriction of Q to U = {x ∈ R³ | x1 + x2 = 0} is degenerate,
as 0 ≠ x = (1, −1, 0) ∈ Ker(A|U). Indeed, for A(x, y) = x1y1 − x2y2 + x3y3,
A((1, −1, 0), (a, −a, b)) = 0, ∀a, b ∈ R.
    A(x, y) = Σ_{i=1}^{n} ai xi yi,   Q(x) = Σ_{i=1}^{n} ai xi²,
not identically zero (otherwise the problem is solved and nothing needs to be done).
Case 1. There exists i such that aii ≠ 0.
We may assume without loss of generality that i = 1, so we can write
    Q(x) = a11 x1² + 2 Σ_{j=2}^{n} a1j x1 xj + Σ_{i=2}^{n} Σ_{j=2}^{n} aij xi xj.
By factoring out 1/a11 from the terms which contain x1, then completing the square,
we obtain
    Q(x) = (1/a11)(a11 x1 + a12 x2 + . . . + a1n xn)² + Σ_{i=2}^{n} Σ_{j=2}^{n} a′ij xi xj.
Now the change of coordinates x′1 = a11x1 + a12x2 + . . . + a1nxn, x′j = xj
(j = 2, . . . , n), whose matrix of change of basis is the nonsingular matrix

         | 1/a11  −a12/a11  . . .  −a1n/a11 |
    C′ = |   0        1     . . .      0    |
         |  ...      ...    . . .     ...   |
         |   0        0     . . .      1    |

gives
    Q(x) = (1/a11) x′1² + Σ_{i=2}^{n} Σ_{j=2}^{n} a′ij x′i x′j.
If the new quadratic form Σ_{i=2}^{n} Σ_{j=2}^{n} a′ij x′i x′j is identically zero, or in
canonical form, the reduction is done; otherwise we continue by applying the algorithm
to this new quadratic form, which is defined on a vector space of dimension n − 1.
Case 2. aii = 0, i = 1, . . . , n.
As Q is not identically zero, there is at least one element aij ≠ 0, i ≠ j. After the
change of coordinates
    xi = x′i + x′j,   xj = x′i − x′j,   xk = x′k,  k ≠ i, j,
the expression of the quadratic form becomes Q(x) = Σ_{i=1}^{n} Σ_{j=1}^{n} a′ij x′i x′j
with a nonzero square term, since xi xj = x′i² − x′j². The matrix of change of basis
which corresponds to this change of coordinates is the nonsingular matrix

          | 1  . . .  0  . . .  0  . . .  0 |
          | .         .         .         . |
    C″ =  | 0  . . .  1  . . .  1  . . .  0 |
          | .         .         .         . |
          | 0  . . .  1  . . . −1  . . .  0 |
          | .         .         .         . |
          | 0  . . .  0  . . .  0  . . .  1 |

(the entries ±1 off the main diagonal sit in the positions (i, j) and (j, i)), and we
have reduced the problem to Case 1.
Example. Let Q : R³ → R, Q(x) = 2x1x2 + 2x1x3. All diagonal coefficients vanish,
so we follow Case 2 and make the change of coordinates
    x1 = x′1 + x′2,  x2 = x′1 − x′2,  x3 = x′3,
as a12 = 1 ≠ 0. Then Q(x) = 2x′1² − 2x′2² + 2x′1x′3 + 2x′2x′3.
Next, we follow Case 1 and obtain
    Q(x) = (1/2)(2x′1 + x′3)² − 2x′2² + 2x′2x′3 − (1/2)x′3².
The change of coordinates
    x″1 = 2x′1 + x′3,  x″2 = 2x′2 − x′3,  x″3 = x′3
leads to Q(x) = (1/2)x″1² − (1/2)x″2².
Combining the two changes of coordinates we get
    x″1 = x1 + x2 + x3,  x″2 = x1 − x2 − x3,  x″3 = x3,
so that

        | 1/2   1/2   0 |                | 1   1   1 |
    C = | 1/2  −1/2  −1 |   and   C⁻¹ =  | 1  −1  −1 | .
        |  0     0    1 |                | 0   0   1 |

The columns of C represent the coordinates of the vectors of the new basis w.r.t.
the initial basis. Thus the new basis is {f1, f2, f3}, f1 = (1/2)e1 + (1/2)e2,
f2 = (1/2)e1 − (1/2)e2, f3 = −e2 + e3.
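The computation can be verified by checking that the new Gram matrix ᵗCAC is diagonal; the sketch assumes the matrices A and C as reconstructed above:

    from sympy import Matrix, Rational, diag

    A = Matrix([[0, 1, 1],
                [1, 0, 0],
                [1, 0, 0]])               # Gram matrix of Q(x) = 2*x1*x2 + 2*x1*x3

    h = Rational(1, 2)
    C = Matrix([[h,  h,  0],
                [h, -h, -1],
                [0,  0,  1]])             # columns f1, f2, f3

    print(C.T * A * C == diag(h, -h, 0))  # True: canonical form (1/2, -1/2, 0)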
THEOREM 3.2 (Jacobi's method) Let Q : V → R be a quadratic form, and
A = [aij] the associated matrix w.r.t. an arbitrary fixed basis B = {e1, . . . , en}.
Denote Δ0 = 1. If all the determinants

                         | a11  a12 |         | a11  a12  a13 |
    Δ1 = a11,   Δ2 =     | a21  a22 | ,  Δ3 = | a21  a22  a23 | ,  . . . ,  Δn = det A
                                              | a31  a32  a33 |

are not zero, then there exists a basis B1 = {f1, . . . , fn} such that the expression of
Q w.r.t. this new basis is
(3.1)    Q(x) = Σ_{i=1}^{n} (Δ_{i−1}/Δ_i) x′i²,
where x′1, . . . , x′n are the coordinates of x w.r.t. B1.
Proof. We look for the vectors of B1 of the form
(3.2)    fj = c1j e1 + c2j e2 + . . . + cjj ej,
satisfying
(3.3)    A(ei, fj) = 0, if 1 ≤ i < j ≤ n,   A(ei, fi) = 1, i = 1, . . . , n.
For each i, the coefficients c1i, . . . , cii are the solution of the linear system
    c1i a11 + c2i a12 + . . . + cii a1i = 0
    c1i a21 + c2i a22 + . . . + cii a2i = 0
    . . . . . . . . . . . . . . . . . . . . . . .
    c1i a_{i−1,1} + c2i a_{i−1,2} + . . . + cii a_{i−1,i} = 0
    c1i a_{i1} + c2i a_{i2} + . . . + cii a_{ii} = 1,
whose determinant is Δi ≠ 0. By Cramer's rule,

                | a11        . . .  a_{1,i−1}     0 |
          1     | . . .      . . .  . . .       . . |     Δ_{i−1}
    cii = —— ·  | a_{i−1,1}  . . .  a_{i−1,i−1}   0 |  =  ——————— .
          Δi    | a_{i1}     . . .  a_{i,i−1}     1 |       Δi

So the basis B1 = {f1, . . . , fn} is uniquely determined by (3.2), (3.3). It remains to
show that the matrix B = [bij] associated to Q w.r.t. this basis is diagonal, with the
required diagonal entries. For, compute
    bij = A(fi, fj) = . . . = c1i A(e1, fj) + . . . + cii A(ei, fj),  i ≤ j.
By (3.3) this is 0 for i < j, and bii = cii A(ei, fi) = cii = Δ_{i−1}/Δ_i. The matrix B is
symmetric, since A is symmetric (as the polar of a quadratic form), thus bij = 0, for
i > j too.
QED.
Real quadratic forms of constant sign are useful in many applications, for example
in extremum problems for functions of several real variables. This is why it is helpful
to study these forms, and state some methods which indicate whether the sign is
constant or not.
V will denote a real vector space.
DEFINITION 4.1 A quadratic form Q : V → R is said to be:
(i) positive definite if Q(x) > 0, ∀x ∈ V \ {0};
(ii) negative definite if Q(x) < 0, ∀x ∈ V \ {0};
(iii) positive semidefinite if Q(x) ≥ 0, ∀x ∈ V;
(iv) negative semidefinite if Q(x) ≤ 0, ∀x ∈ V;
(v) indefinite if ∃x1, x2 ∈ V such that Q(x1) > 0 and Q(x2) < 0.
The notions defined above are used for symmetric bilinear forms too, with the
obvious meaning. A real symmetric bilinear form A : V V R is positive definite
if the associated quadratic form Q(x) = A(x, x) is positive definite, etc.
Example The inner product on a real vector space is a positive definite symmetric
bilinear form.
From now on, throughout the section we assume that dim V = n ∈ N*.
The following remarks are easy observations relating some of the results in this
chapter.
Suppose that w.r.t. two bases B and B0 of V the quadratic form Q has the canonical
expressions
    Q(x) = Σ_{i=1}^{n} ai xi²   and   Q(x) = Σ_{i=1}^{n} a′i x′i²  respectively.
We may assume that in the expression of Q w.r.t. B (B0) the first p (p′) coefficients
are strictly positive, the next q (q′) are strictly negative, and the last d (d′) are zero.
Moreover, we may assume that ai, a′i ∈ {−1, 0, 1}. Then
(4.1)    Q(x) = Σ_{i=1}^{p} xi² − Σ_{i=p+1}^{p+q} xi² = Σ_{i=1}^{p′} x′i² − Σ_{i=p′+1}^{p′+q′} x′i².
The proof of the inertia law proceeds by considering the subspace given by
    x′_{p′+1} = . . . = x′_n = 0.
Example Reduce to canonical form the quadratic form whose matrix is

        |  1  −4   8 |
    A = | −4   7   4 | .
        |  8   4   1 |

First we use Jacobi's method. The determinants
    Δ0 = 1,  Δ1 = a11 = 1,  Δ2 = | 1 −4; −4 7 | = −9,  Δ3 = det A = −729
are all nonzero. Using formula (3.1) we obtain Q(x) = x′1² − (1/9)x′2² + (1/81)x′3².
We look for the corresponding basis B1 = {f1, f2, f3} of the form (3.2). The coefficients cij, 1 ≤ i ≤ j ≤ n = 3, are given by the solutions of the linear equations (3.3)
as follows.
    A(e1, f1) = 1  ⟹  c11 = 1.
    A(e1, f2) = 0      c12 − 4c22 = 0
    A(e2, f2) = 1     −4c12 + 7c22 = 1      ⟹  c12 = −4/9;  c22 = −1/9.
    A(e1, f3) = 0      c13 − 4c23 + 8c33 = 0
    A(e2, f3) = 0     −4c13 + 7c23 + 4c33 = 0  ⟹  c13 = 8/81;  c23 = 4/81;  c33 = 1/81.
    A(e3, f3) = 1      8c13 + 4c23 + c33 = 1
It follows that f1 = e1, f2 = −(4/9)e1 − (1/9)e2, f3 = (8/81)e1 + (4/81)e2 + (1/81)e3,
and the matrix of change is

        | 1  −4/9  8/81 |
    C = | 0  −1/9  4/81 | .
        | 0    0   1/81 |

Next we will use the eigenvalues method.
The eigenvalues of A are λ1 = 9, λ2 = 9, λ3 = −9. Therefore Q(x) = 9y1² +
9y2² − 9y3², where y1, y2, y3 are the coordinates of x w.r.t. a basis made up of eigenvectors, such that the matrix of change is orthogonal. To determine such a basis
{u1, u2, u3}, pick first some linearly independent eigenvectors. Let us take v1 =
(1, 0, 1), v2 = (1, −2, 0), v3 = (−2, −1, 2). Using the Gram–Schmidt procedure for
the first two (which happen not to be orthogonal, as they correspond to the same
eigenvalue), and normalizing the third, we obtain u1 = (1/√2, 0, 1/√2),
u2 = (1/(3√2))(1, −4, −1), u3 = (1/3)(−2, −1, 2).
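Both methods can be cross-checked numerically, assuming the sign reconstruction of A used above; the Jacobi coefficients Δ_{i−1}/Δ_i and the eigenvalues must give the same signature (+, +, −):

    import numpy as np

    A = np.array([[ 1., -4.,  8.],
                  [-4.,  7.,  4.],
                  [ 8.,  4.,  1.]])

    minors = [1.0]                               # Delta_0 = 1
    for k in (1, 2, 3):
        minors.append(np.linalg.det(A[:k, :k])) # Delta_1, Delta_2, Delta_3
    print([minors[i - 1] / minors[i] for i in (1, 2, 3)])   # [1.0, -1/9, 1/81]

    print(np.linalg.eigvalsh(A))                 # [-9.,  9.,  9.]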
Problems
Chapter 5
Free Vectors
1
Free Vectors
E3 will denote the 3-dimensional space of elementary geometry.
For any two fixed points A, B ∈ E3 consider the oriented segment AB (Fig. 1).
Fig. 1
If A ≠ B, the straight line AB is called the supporting line of the oriented segment
AB. The length (module or norm) of AB is the length of the segment AB and it is
denoted by ‖AB‖.
If A = B we say that AB = AA is the zero oriented segment with the origin at A.
The zero segment has length zero.
Fig. 2
The class of all oriented segments equipollent to a given oriented segment AB is
called the free vector AB̄. Thus AB̄ = {CD | CD ∼ AB}, and AB ∈ AB̄. Any element
of the class is an oriented segment equipollent to AB, called a representative of AB̄.
DEFINITION 1.4 The length (norm) of a free vector ā is the length of a representative AB, AB ∈ ā.
Fig. 3
Fig. 4
Fig. 5
DEFINITION 2.1 (addition of free vectors) Let ā, b̄ ∈ V3 and fix a point P ∈ E3. Define
ā + b̄ as the class of the oriented diagonal PB of the parallelogram PABC, where
PA = ā, PC = b̄.
Addition of free vectors, + : V3 × V3 → V3, is a well defined binary operation,
since it does not depend on the choice of the point P.
THEOREM 2.2 Addition of free vectors has the following properties:
(i) ∀ā, b̄, c̄ ∈ V3, ā + (b̄ + c̄) = (ā + b̄) + c̄ (associativity)
(ii) ∀ā, b̄ ∈ V3, ā + b̄ = b̄ + ā (commutativity)
(iii) ∀ā ∈ V3, ā + 0̄ = 0̄ + ā = ā (0̄ is the identity element)
(iv) ∀ā ∈ V3 ∃ −ā ∈ V3 such that ā + (−ā) = (−ā) + ā = 0̄ (each vector has an
inverse with respect to "+").
Proof. (ii) and (iii) are immediate from the definition.
For (iv) we check easily that −ā is the opposite of ā, i.e. if ā = PA, then
−ā = AP.
If ā, b̄, c̄ are pairwise not collinear, then associativity follows from Fig. 6.
Fig. 6
Fig. 7
The difference x̄ = ā − b̄ is the unique vector with x̄ + b̄ = ā; this is x̄ = ā + (−b̄).
If ā = AB and b̄ = AD, then x̄ = ā − b̄ = DB (Fig. 7).
By the observation which proves (iv), it makes sense to write AB = −BA, for any
A, B ∈ E3.
Multiplication by Scalars
In this section we will define on V3 a natural structure of a real vector space. Addition
of free vectors was already defined. We still need an external operation R × V3 → V3
which satisfies the definition of the vector space.
DEFINITION 3.1 (multiplication of free vectors by scalars, Fig. 8)
Let k ∈ R and ā ∈ V3. Then kā ∈ V3 is defined as follows:
(i) if ā = 0̄ or k = 0, then kā = 0̄;
(ii) if ā ≠ 0̄ and k ≠ 0, then kā is the vector which has the direction of ā, length
|k| ‖ā‖, the sense of ā for k > 0, and the sense of −ā for k < 0.
Note that ā and kā are collinear, and ‖kā‖ = |k| ‖ā‖, ∀k ∈ R, ∀ā ∈ V3.
Fig. 8
THEOREM 3.2 Multiplication of free vectors by real scalars has the following properties:
(i) ∀k, l ∈ R, ∀ā ∈ V3, k(lā) = (kl)ā
(ii) ∀ā ∈ V3, 1·ā = ā
(iii) ∀k, l ∈ R, ∀ā ∈ V3, (k + l)ā = kā + lā
(iv) ∀k ∈ R, ∀ā, b̄ ∈ V3, k(ā + b̄) = kā + kb̄.
Proof. (i) − (iii) are left to the reader.
We will prove (iv) for ā, b̄ not collinear. The collinear case is left to the reader as
well.
Let OA = ā and AB = b̄. Then ā + b̄ = OA + AB = OB.
Suppose k > 0. Then there is A′ such that OA′ = kā and B′ such that OB′ =
k(ā + b̄).
Fig. 9
The triangles ΔOAB and ΔOA′B′ are similar since they have a common angle
whose sides are proportional. This similarity implies AB ∥ A′B′ and ‖A′B′‖ = k‖AB‖.
The same orientation of OA, OA′ and OB, OB′ gives the same orientation of AB and
A′B′, thus A′B′ = kAB (Fig. 9).
The case k < 0 is similar to k > 0, and the case k = 0 is trivial.
QED.
COROLLARY 3.3 V3 is a real vector space.
The collinearity and coplanarity of free vectors are notions we defined geometrically.
In this section we will see they are related to the algebraic structure of a real vector
space that was defined on V3.
PROPOSITION 4.1 If the free vectors ā and b̄ are collinear and ā ≠ 0̄, then there
exists a unique k ∈ R such that b̄ = kā.
Proof. If b̄ has the sense of ā, then
    b̄ = (‖b̄‖/‖ā‖) ā,
since (‖b̄‖/‖ā‖) ā has the direction, sense and length of b̄.
Similarly, if b̄ ≠ 0̄ and the sense of b̄ is opposite to the sense of ā (i.e., b̄ has the sense
of −ā), then b̄ = −(‖b̄‖/‖ā‖) ā, and the existence of k is proved.
For the uniqueness, let k, l ∈ R such that b̄ = kā = lā. Then k = l since ā ≠ 0̄,
by the properties of vector space operations.
QED.
Fig. 10
Fig. 11
QED.
Inner Product in V3
DEFINITION 5.1 The angle of two nonzero free vectors ā = OA, b̄ = OB is the
angle φ ∈ [0, π] determined by the representatives OA and OB (Fig. 12).
Fig. 12
The definition of the angle does not depend on the representatives chosen.
DEFINITION 5.2 Let ū ∈ V3 be a versor, b̄ ≠ 0̄ and φ = ∠(ū, b̄).
The vector πū b̄ = ‖b̄‖ cos φ · ū is called the orthogonal projection of b̄ onto ū.
Fig. 13
The number prū b̄ = ‖b̄‖ cos φ is called the algebraic measure of the orthogonal
projection of b̄ onto ū (Fig. 13).
By definition πū 0̄ = 0̄, prū 0̄ = 0.
Note that
    prū b̄ < 0 ⟺ φ ∈ (π/2, π];   prū b̄ = 0 ⟺ φ = π/2 or b̄ = 0̄.
LEMMA 5.3 For any versor ū, the functions πū : V3 → V3 and prū : V3 → R are
linear transformations.
We leave the proof as an exercise. The picture below (Fig. 14) describes one of
the cases that arise in order to prove additivity, where OA is collinear to ū, OB = b̄,
OC = c̄, OBDC is a parallelogram, BB′ ⊥ OA, CC′ ⊥ OA, DD′ ⊥ OA.
Then OD = b̄ + c̄, πū b̄ = OB′ = C′D′, πū c̄ = OC′, πū(b̄ + c̄) = πū(OD) = OD′.
The additivity of πū follows from OD′ = OC′ + C′D′.
Fig. 14
For the additivity of the scalar product, let ā ≠ 0̄ (the case ā = 0̄ is obvious), and
let ū = ā/‖ā‖ be the versor of ā. Then
    ⟨ā, b̄ + c̄⟩ = ⟨‖ā‖ū, b̄ + c̄⟩ = ‖ā‖⟨ū, b̄ + c̄⟩, by homogeneity.
From the definitions,
    prū b̄ = ⟨ū, b̄⟩,  ∀b̄ ∈ V3.
Then
    ⟨ū, b̄ + c̄⟩ = ⟨ū, b̄⟩ + ⟨ū, c̄⟩ by the previous lemma.
It follows that
    ⟨ā, b̄ + c̄⟩ = ‖ā‖(⟨ū, b̄⟩ + ⟨ū, c̄⟩) = ⟨‖ā‖ū, b̄⟩ + ⟨‖ā‖ū, c̄⟩ = ⟨ā, b̄⟩ + ⟨ā, c̄⟩.
The scalar product defined above is called the canonical scalar product on V3.
COROLLARY 5.5 V3 is a Euclidean vector space.
REMARKS 5.6 1) ‖ā‖ = √⟨ā, ā⟩, ∀ā ∈ V3. Therefore the length of a free vector
is the Euclidean norm of that vector with respect to the canonical scalar product.
2) The Cauchy–Schwarz inequality follows directly from the definition, using:
    |cos φ| ≤ 1  ⟹  |⟨ā, b̄⟩| ≤ ‖ā‖ ‖b̄‖.
3) ⟨ā, b̄⟩ = 0 ⟺ ā = 0̄ or b̄ = 0̄ or ∠(ā, b̄) = π/2.
As in a general Euclidean vector space, the free vectors ā, b̄ are called orthogonal
if ⟨ā, b̄⟩ = 0.
It is convenient to work with orthonormal bases.
The coordinates of a vector with respect to an orthonormal basis are called Euclidean coordinates.
Fig. 15
Let {ī, j̄, k̄} be an orthonormal basis (Fig. 15), i.e. the values of the scalar products
are given by the table

    ⟨,⟩ | ī  j̄  k̄
    ī   | 1  0  0
    j̄   | 0  1  0
    k̄   | 0  0  1

which leads to the canonical expression of the scalar product of ā = a1ī + a2j̄ + a3k̄
and b̄ = b1ī + b2j̄ + b3k̄, namely
    ⟨ā, b̄⟩ = a1b1 + a2b2 + a3b3.
In particular the norm of ā is ‖ā‖ = √(a1² + a2² + a3²) and
    cos φ = ⟨ā, b̄⟩/(‖ā‖ ‖b̄‖) = (a1b1 + a2b2 + a3b3) / (√(a1² + a2² + a3²) √(b1² + b2² + b3²)),
for ā ≠ 0̄, b̄ ≠ 0̄, φ ∈ [0, π].
Vector Product in V3
Let ā, b̄ ∈ V3. If ā ≠ 0̄ and b̄ ≠ 0̄, then θ denotes the angle of ā and b̄.
DEFINITION 6.1 The vector denoted ā × b̄ and defined by (Fig. 16)
    ā × b̄ = ‖ā‖ ‖b̄‖ sin θ · ē,  if ā, b̄ are noncollinear,
    ā × b̄ = 0̄,                  if ā, b̄ are collinear (in particular ā = 0̄ or b̄ = 0̄),
where ē is a versor whose direction is given by ē ⊥ ā, ē ⊥ b̄ and whose sense is given
by the right-hand rule, is called the vector product of ā and b̄.
Fig. 16
The function × : V3 × V3 → V3, (ā, b̄) ↦ ā × b̄, is bilinear. This follows from the
proposition below.
PROPOSITION (Properties of the vector product)
(1) ā × b̄ = 0̄ if and only if ā, b̄ are collinear.
(2) b̄ × ā = −ā × b̄ (anticommutativity)
(3) t(ā × b̄) = (tā) × b̄ = ā × (tb̄), ∀t ∈ R (homogeneity)
(4) ā × (b̄ + c̄) = ā × b̄ + ā × c̄ (additivity or distributivity)
(5) ‖ā × b̄‖² = ‖ā‖² ‖b̄‖² − ⟨ā, b̄⟩² (the Lagrange identity)
(6) If ā, b̄ are not collinear, then ‖ā × b̄‖ represents the area of the parallelogram
determined by ā and b̄.
Proof. (1), (2), (3) are immediate from the definition. For (5) multiply
    sin²θ = 1 − cos²θ
by ‖ā‖² ‖b̄‖². The area of the parallelogram in Fig. 16 is
    ‖ā‖ ‖b̄‖ sin θ = ‖ā × b̄‖,
so (6) is an obvious geometric interpretation of the vector product. For (4) assume
that ā is a versor and let P be a plane perpendicular on ā (Fig. 17). Let also b̄′ and
c̄′ be the projections of b̄ respectively c̄ onto the plane P (i.e. b̄′ is determined by the
intersections with P of the lines passing through O and B and perpendicular to P,
where b̄ = OB).
Then
    ā × b̄ = ā × b̄′,  ā × c̄ = ā × c̄′,
and adding the corresponding relations for b̄′, c̄′ and b̄′ + c̄′ one obtains
    ā × (b̄ + c̄) = ā × b̄ + ā × c̄.
Fig. 17
For the vectors of the canonical basis the table of vector products is

    ×  | ī    j̄    k̄
    ī  | 0̄    k̄   −j̄
    j̄  | −k̄   0̄    ī
    k̄  | j̄   −ī    0̄

which leads, for ā = a1ī + a2j̄ + a3k̄ and b̄ = b1ī + b2j̄ + b3k̄, to

            | ī   j̄   k̄  |
    ā × b̄ = | a1  a2  a3 | = (a2b3 − a3b2)ī + (a3b1 − a1b3)j̄ + (a1b2 − a2b1)k̄.
            | b1  b2  b3 |
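The determinant formula is exactly what numpy.cross computes componentwise; a quick check on two sample vectors:

    import numpy as np

    a = np.array([1.0, 2.0, 3.0])
    b = np.array([0.0, 1.0, 4.0])

    c = np.cross(a, b)        # (a2*b3 - a3*b2, a3*b1 - a1*b3, a1*b2 - a2*b1)
    print(c)                  # [ 5. -4.  1.]
    print(np.dot(c, a), np.dot(c, b))   # both 0: c is orthogonal to a and b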
DEFINITION 6.3 The vector ā × (b̄ × c̄) is called the double vector product of
ā, b̄, c̄.
Fig. 18
The double vector product can be computed by the formula

                  |   b̄        c̄    |
    ā × (b̄ × c̄) = |                 | = ⟨ā, c̄⟩ b̄ − ⟨ā, b̄⟩ c̄.
                  | ⟨ā, b̄⟩   ⟨ā, c̄⟩ |

Note that w̄ = ā × (b̄ × c̄), b̄, c̄ are coplanar (Fig. 18). In general ā × (b̄ × c̄) ≠ (ā × b̄) × c̄,
thus the vector product is not associative.
Mixed Product
DEFINITION The number ⟨ā, b̄ × c̄⟩ is called the mixed product of the free vectors
ā, b̄, c̄.
Fig. 19
If ā, b̄, c̄ are noncoplanar, the modulus of the mixed product is the volume of the
parallelepiped constructed on representatives with common origin of ā, b̄, c̄:
    V = |⟨ā, b̄ × c̄⟩|.
PROPOSITION 7.3 (Properties of the mixed product)
(1) If ā = a1ī + a2j̄ + a3k̄, b̄ = b1ī + b2j̄ + b3k̄, c̄ = c1ī + c2j̄ + c3k̄, then

                  | a1  a2  a3 |
    ⟨ā, b̄ × c̄⟩ =  | b1  b2  b3 | .
                  | c1  c2  c3 |

(2) ⟨ā, b̄ × c̄⟩ = ⟨c̄, ā × b̄⟩ = ⟨b̄, c̄ × ā⟩ = −⟨ā, c̄ × b̄⟩ = ⟨ā × b̄, c̄⟩.
(3) ⟨tā, b̄ × c̄⟩ = ⟨ā, (tb̄) × c̄⟩ = ⟨ā, b̄ × (tc̄)⟩ = t⟨ā, b̄ × c̄⟩.
(4) ⟨ū + v̄, b̄ × c̄⟩ = ⟨ū, b̄ × c̄⟩ + ⟨v̄, b̄ × c̄⟩.
(5) ⟨ā × b̄, c̄ × d̄⟩ = | ⟨ā, c̄⟩  ⟨ā, d̄⟩ |
                      | ⟨b̄, c̄⟩  ⟨b̄, d̄⟩ |
Proof. We will prove only (5), leaving the other proofs to the reader. Note that (1)
is a straightforward computation using the formula for b̄ × c̄ and the known products
of ī, j̄, k̄; (2), (3), (4) are easy consequences of (1). For (5) denote w̄ = ā × b̄. Then
    ⟨w̄, c̄ × d̄⟩ = ⟨d̄, w̄ × c̄⟩ = −⟨d̄, c̄ × w̄⟩   by (2).
But
                           |   ā        b̄    |
    c̄ × w̄ = c̄ × (ā × b̄) =  |                 | = ⟨c̄, b̄⟩ā − ⟨c̄, ā⟩b̄,
                           | ⟨c̄, ā⟩   ⟨c̄, b̄⟩ |
so
    ⟨w̄, c̄ × d̄⟩ = −⟨c̄, b̄⟩⟨d̄, ā⟩ + ⟨c̄, ā⟩⟨d̄, b̄⟩,
and we get
    ⟨ā × b̄, c̄ × d̄⟩ = | ⟨ā, c̄⟩  ⟨ā, d̄⟩ |
                      | ⟨b̄, c̄⟩  ⟨b̄, d̄⟩ | .
QED.
A basis {ā, b̄, c̄} of V3 is said to have positive orientation if ⟨ā, b̄ × c̄⟩ is strictly
positive, and negative orientation if ⟨ā, b̄ × c̄⟩ is strictly negative.
The canonical basis {ī, j̄, k̄} has positive orientation since ⟨ī, j̄ × k̄⟩ = 1.
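A short numeric sketch of the mixed product as a determinant, with volume and orientation read off from it:

    import numpy as np

    a = np.array([1.0, 0.0, 0.0])
    b = np.array([1.0, 2.0, 0.0])
    c = np.array([1.0, 1.0, 3.0])

    mixed = float(np.dot(a, np.cross(b, c)))
    print(mixed)                                # 6.0
    print(np.linalg.det(np.array([a, b, c])))   # the same determinant
    print(abs(mixed))                           # volume; sign > 0: positive orientation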
Problems
1. Let A(1, 2, 0), B(1, 0, 3), C(2, 1, 1) be points in E3. Are these points
collinear? Find the area of the triangle ABC, and the altitude from the base BC to
the vertex A.
2. Given the points A(1, 1, 2), B(2, 3, 0), C(0, 1, 1), D(1, 2, 3), compute:
1) the mixed product ⟨AB, AC × AD⟩; are the points coplanar?
2) the volume of the tetrahedron ABCD;
3) the altitude of the tetrahedron, from the base ACD to the vertex B.
3. Show that
    ā × (b̄ × c̄) + b̄ × (c̄ × ā) + c̄ × (ā × b̄) = 0̄
and that
    ⟨ā × (b̄ × c̄), (b̄ × (c̄ × ā)) × (c̄ × (ā × b̄))⟩ = 0.
4. 2) Solve the equation ā × x̄ = b̄.
5. Let ū, v̄, w̄ be versors such that
    ∠(ū, v̄) = π/2,  ∠(ū, w̄) = π/3,  ∠(v̄, w̄) = π/4.
6. Consider the vectors ā = ī + 4j̄ + 6k̄, b̄ = ī + λj̄ + 3k̄, c̄ = ī + 4j̄. If the
vectors are coplanar, determine the possible values of λ. In this case, decompose ā
along b̄ and c̄.
7. Show that if ā × b̄ + b̄ × c̄ + c̄ × ā = 0̄, then the vectors ā, b̄, c̄ are coplanar.
8. Let ā, b̄, c̄ ∈ V3 be noncoplanar vectors such that ⟨ā, b̄⟩ ≠ 0. Compute
    E = ⟨ (ā × b̄)/⟨ā, b̄⟩ , (ā × (b̄ × c̄))/⟨ā, b̄ × c̄⟩ ⟩ + 2.
9. Given the points A(4, 2, 2), B(3, 1, 1), C(4, 2, 0), determine the vertex D of
the tetrahedron ABCD such that D ∈ Oz and the volume of ABCD is 4. Determine
also the altitude from the base ABC to D.
Chapter 6
Cartesian Frames
We mentioned that a fixed origin O ∈ E3 provides a natural one-to-one correspondence between E3 and V3, assigning to each point M ∈ E3 a unique vector r̄ = OM ∈
V3, called the position vector of M. Each fixed basis of V3 determines a bijective
correspondence between V3 and R³.
Throughout this chapter, O ∈ E3 will be a fixed origin and {ī, j̄, k̄} a fixed orthonormal basis of V3. The axes determined
by ī, j̄, k̄ respectively are called the Cartesian axes of the frame {O; ī, j̄, k̄}. The
Cartesian coordinates of the point M are the algebraic measures of the orthogonal
projections of OM onto the coordinate axes (Fig. 20).
Fig. 20
The coordinate axes are characterized by the equations
    Ox : y = 0, z = 0;   Oy : x = 0, z = 0;   Oz : x = 0, y = 0.
Any two axes determine a plane called a coordinate plane. The coordinate planes
xOy, yOz, zOx are characterized by the equations
    xOy : z = 0,   yOz : x = 0,   zOx : y = 0.
Equations of a Straight Line
Let D be the straight line passing through the point M0, of position vector r̄0, and
parallel to a nonzero free vector ā = lī + mj̄ + nk̄.
The vector ā is called a director vector of D. Any vector of the form kā, k ∈ R \ {0},
is another director vector of D and may be used to obtain the equations of D, instead
of using ā.
Fig. 21
The point M of position vector r̄ lies on D if and only if
    r̄ = r̄0 + tā,  t ∈ R,
which is equivalent to
(2.3)    x = x0 + lt,  y = y0 + mt,  z = z0 + nt,  t ∈ R  (the parametric equations of D in R³)
or to
(2.4)    (x − x0)/l = (y − y0)/m = (z − z0)/n,
with the convention that if a denominator is zero, then the corresponding numerator
is zero too. More precisely,
(1) if l = 0, mn ≠ 0, then
    x = x0,  (y − y0)/m = (z − z0)/n,  and D ∥ yOz;
(2) if l = m = 0, then
    x = x0,  y = y0,  and D ∥ Oz.
Note that ā ≠ 0̄, thus at most two of the coordinates of ā could possibly be zero.
The straight line determined by two distinct points M1(x1, y1, z1), M2(x2, y2, z2)
has the equations
    (x − x1)/(x2 − x1) = (y − y1)/(y2 − y1) = (z − z1)/(z2 − z1).
Fig. 22
The general equation of a plane P is
    ax + by + cz + d = 0,  a² + b² + c² ≠ 0,
with normal vector n̄ = aī + bj̄ + ck̄. In particular
    xOy : z = 0;   yOz : x = 0;   zOx : y = 0.
k̄ is a normal vector for xOy and for any plane P parallel to xOy. Then (Fig. 24)
    P : z = a,  a ∈ R \ {0},  for P ∥ xOy.
Fig. 24
Similarly,
    P : x = a,  a ∈ R \ {0},  for P ∥ yOz;
    P : y = a,  a ∈ R \ {0},  for P ∥ zOx.
The planes perpendicular to the coordinate planes are characterized by
    P : ax + by + d = 0,  a² + b² ≠ 0,  if P ⊥ xOy;
    P : by + cz + d = 0,  b² + c² ≠ 0,  if P ⊥ yOz;
    P : ax + cz + d = 0,  a² + c² ≠ 0,  if P ⊥ zOx.
The planes containing the axes Oz, Ox, Oy are given by ax + by = 0, by + cz = 0,
and ax + cz = 0 respectively.
    M, M1, M2, M3 are coplanar in E3  ⟺  M1M, M1M2, M1M3 are coplanar in V3,
which is equivalent to

(3.3)    | x − x1   y − y1   z − z1 |
         | x2 − x1  y2 − y1  z2 − z1 | = 0
         | x3 − x1  y3 − y1  z3 − z1 |

in R³ or to

(3.4)    | x   y   z   1 |
         | x1  y1  z1  1 |
         | x2  y2  z2  1 | = 0.
         | x3  y3  z3  1 |

Fig. 25
Fig. 26
(3.6)    x = x0 + rl1 + sl2,  y = y0 + rm1 + sm2,  z = z0 + rn1 + sn2,  r, s ∈ R
(the parametric equations of the plane in R³).
The coplanarity of M0M, M0M1, M0M2 is also equivalent to ⟨M0M, v̄1 × v̄2⟩ = 0,
i.e.,

(3.7)    | x − x0  y − y0  z − z0 |
         |   l1      m1      n1   | = 0.
         |   l2      m2      n2   |
Fig. 27
It is known that any two distinct planes are either parallel or they have a common
line.
Let Pi : aix + biy + ciz + di = 0, ai² + bi² + ci² ≠ 0, n̄i = aiī + bij̄ + cik̄, i = 1, 2,
and consider the system
(4.1)    a1x + b1y + c1z + d1 = 0,
         a2x + b2y + c2z + d2 = 0.
The planes P1, P2 are identical ⟺ their equations are equivalent, i.e.,
    a1/a2 = b1/b2 = c1/c2 = d1/d2.
This means that the system is compatible of rank 1. The planes are parallel ⟺
n̄1, n̄2 are collinear but the system is not compatible, i.e.,
    a1/a2 = b1/b2 = c1/c2 ≠ d1/d2.
The planes intersect along a straight line D ⟺ the system (4.1) is compatible of
rank 2. In this case the equations (4.1) represent the line D = P1 ∩ P2.
The set of all planes passing through the straight line D is called the pencil of
planes determined by P1 and P2. The line D is called the axis of the pencil (Fig. 28).
The equation of an arbitrary plane of the pencil is
(4.2)    r(a1x + b1y + c1z + d1) + s(a2x + b2y + c2z + d2) = 0,  r, s ∈ R, r² + s² ≠ 0,
which is also said to be the equation of the pencil.
The set of all planes parallel or coincident to a given plane P1 is called a pencil of
parallel planes, whose equations are of the form
    a1x + b1y + c1z + λ = 0,  λ ∈ R.
Note that from (4.1) one can deduce easily the equations of the line D = P1 ∩ P2
in any of the forms studied previously, using the director vector ā = n̄1 × n̄2 and a
point M0(x0, y0, z0), where (x0, y0, z0) is a particular solution of the system (4.1).
Fig. 28
Fig. 29
The numbers cos α, cos β, cos γ are called the director cosines of (D, ē). From
‖ē‖ = 1 follows
    cos²α + cos²β + cos²γ = 1.
If ē = ā/‖ā‖, ā = lī + mj̄ + nk̄, then
    cos α = l/√(l² + m² + n²),  cos β = m/√(l² + m² + n²),  cos γ = n/√(l² + m² + n²).
Fig. 30
Angles in Space
The angle of two straight lines D1, D2 with director vectors ā = l1ī + m1j̄ + n1k̄ and
b̄ = l2ī + m2j̄ + n2k̄ is given by
    cos θ = ⟨ā, b̄⟩/(‖ā‖ ‖b̄‖) = (l1l2 + m1m2 + n1n2) / (√(l1² + m1² + n1²) √(l2² + m2² + n2²)).
Fig. 31
Note that
(1) D1 ⊥ D2 ⟺ ⟨ā, b̄⟩ = 0 ⟺ l1l2 + m1m2 + n1n2 = 0;
(2) D1 ∥ D2 ⟺ ā × b̄ = 0̄, D1 ≠ D2 ⟺ l1/l2 = m1/m2 = n1/n2, D1 ≠ D2.
Fig. 32
The angle of two planes P1, P2 is measured by the angle of their normal vectors:
    cos θ = ⟨n̄1, n̄2⟩/(‖n̄1‖ ‖n̄2‖) = (a1a2 + b1b2 + c1c2) / (√(a1² + b1² + c1²) √(a2² + b2² + c2²)),  θ ∈ [0, π],
where n̄i = aiī + bij̄ + cik̄, i = 1, 2.
Fig. 33
The angle between the straight line D, of director vector ā = lī + mj̄ + nk̄, and the
plane P, of normal vector n̄ = aī + bj̄ + ck̄, is given by
    sin φ = ⟨ā, n̄⟩/(‖ā‖ ‖n̄‖) = (al + bm + cn) / (√(a² + b² + c²) √(l² + m² + n²)).
Note that
    D ∥ P or D ⊂ P  ⟺  ⟨ā, n̄⟩ = 0;
    D ⊥ P  ⟺  ā × n̄ = 0̄  ⟺  a/l = b/m = c/n.
Distances in Space
The distance from the point M0(x0, y0, z0) to the plane P : ax + by + cz + d = 0 is
(Fig. 34)
    d(M0, P) = |⟨n̄, M1M0⟩|/‖n̄‖ = |ax0 + by0 + cz0 + d| / √(a² + b² + c²),
where M1 is an arbitrary point of P.
Fig. 34
Fig. 35
Let D1, D2 be two noncoplanar straight lines, of director vectors ā1, ā2. The common
perpendicular D of D1 and D2 has the director vector ā = ā1 × ā2.
In order to find the equations of D we may consider the plane P1 determined
by D and D1, and the plane P2 determined by D and D2; since D = P1 ∩ P2, we are
going to use the equations of P1 and P2 (see Section 4, (4.1)).
For i = 1, 2 let Mi be an arbitrary fixed point on Di; it turns out that Pi is the
plane determined by Mi and the noncollinear vectors ā and āi. As we have seen at
the end of Section 3, it follows that
    Pi : ⟨MiM, āi × ā⟩ = 0,  i = 1, 2,
hence
    D :  ⟨M1M, ā1 × ā⟩ = 0,
         ⟨M2M, ā2 × ā⟩ = 0,
where M is the current point of D.
Fig. 36
For the distance, let Q1 be the plane parallel to D2 passing through D1, and let
A ∈ D1, B ∈ D2 be the feet of the common perpendicular; then ‖AB‖ is the distance
from B to Q1. But d(B, Q1) = d(M2, Q1) = d(D2, Q1), since D2 ∥ Q1. On the other
hand d(M2, Q1) is the height of the parallelepiped constructed on the supports of the
vectors M1M2, ā1, ā2, that corresponds to the base determined by ā1, ā2. Then
    d(D1, D2) = |⟨M1M2, ā1 × ā2⟩| / ‖ā1 × ā2‖.
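Both distance formulas translate directly into code; a numeric sketch with illustrative data:

    import numpy as np

    # distance from M0 to the plane a*x + b*y + c*z + d = 0
    def dist_point_plane(M0, a, b, c, d):
        return abs(a*M0[0] + b*M0[1] + c*M0[2] + d) / np.linalg.norm([a, b, c])

    # distance between noncoplanar lines through M1, M2 with directors a1, a2
    def dist_lines(M1, a1, M2, a2):
        n = np.cross(a1, a2)
        return abs(np.dot(M2 - M1, n)) / np.linalg.norm(n)

    print(dist_point_plane(np.array([1.0, 1.0, 1.0]), 1, 2, 2, -1))   # |1+2+2-1|/3
    M1, a1 = np.array([0., 0., 0.]), np.array([1., 2., 3.])
    M2, a2 = np.array([1., 1., 0.]), np.array([2., 1., 1.])
    print(dist_lines(M1, a1, M2, a2))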
Problems
1. Write the equations of a straight line passing through the point A(1, 1, 2) and
parallel to the straight line D.
    1) D : (x − 4)/2 = (y + 2)/3 = (z + 1)/4;
    2) D : x − y − 3z + 2 = 0,  2x − y + 2z − 3 = 0.
2. Compute the distance from the point A(1, 1, 1) to the straight line D.
    1) D : (x − 1)/2 = (y + 1)/1 = (z − 1)/3;
    2) D : x − y + z = 0,  x + y − z = 0.
3. Write the equation of the plane passing through A(1, 1, 1) and perpendicular
to D.
    1) D : x/1 = (y − 1)/2 = (z + 1)/1;
    2) D : x − y = 0,  x + 2y − z + 1 = 0.
4. Write the equation of the plane which passes through the point A(0, 1, 1) and
through the straight line D given as follows:
    1) D : x = 4 + 2t, y = 2 + 3t, z = 1 + 4t;
    2) D : 2x − y + z + 1 = 0,  x + y + z = 0.
5. Consider the planes
    P : x − 2y + 2z − 7 = 0,  Q : 2x − y − 2z + 1 = 0,  R : 2x + 2y + z − 2 = 0.
1) Show that the planes are pairwise perpendicular.
2) Find the common point of these three planes.
3) Find the distance from A(2, 4, 7) to the plane P.
6. Determine the projection of the straight line D onto the plane
P : 2x + 3y + 4z − 10 = 0, where D is given by:
    1) D : x − y − 3z + 2 = 0,  2x − y + 2z − 3 = 0;
    2) D : (x − 4)/2 = (y + 2)/3 = (z + 1)/4;
    3) D : (x − 1)/3 = (y + 2)/2 = (z + 1)/0.
7. Consider the straight lines
    D1 : x/1 = y/2 = z/3,   D2 : (x − 1)/2 = (y − 1)/1 = z/1.
Prove that D1, D2 are noncoplanar, determine the equations of their common
perpendicular, and the distance between D1 and D2.
8. Write down the equation of the plane P which contains the point A(3, 1, 2)
and the line
    D : 2x − y − 3z − 2 = 0,  x + 3y − z + 4 = 0.
Determine also the distance from A to D.
9. Consider the point A(1, 2, 5) and the plane Q : x + 2y + 2z + 1 = 0. Find the
projection of A onto Q, the symmetric of A w.r.t. Q, and the distance from A to Q.
10. Given the points A(3, 1, 3), B(5, 1, 1), C(0, 4, 3), determine:
1) the parametric equations of the straight lines AB and AC;
2) the angle between AB and AC;
3) the distance from A to the straight line BC.
11. If the point M(3, 4, 2) is the projection of the origin onto the plane P, determine the equation of P.
12. Determine the parameters α, β such that the planes
    P : 2x − y + 3z − 1 = 0,  Q : x + 2y − z + β = 0,  R : x + αy − 6z + 10 = 0
1) have exactly one common point;
2) have a common straight line;
3) intersect along three parallel and distinct straight lines.
13. Consider the points A(1, 3, 2), B(1, 2, 1), C(0, 1, 1), D(2, 0, 1), and the
plane P : 2x + y − z − 1 = 0. Which of the given points is on the same side as the
origin, with respect to the plane P?
Chapter 7
Transformations of
Coordinate Systems
Changes of Cartesian frames are related to isometries of R³. Using the group of
isometries of R³ we may define in a natural way the congruence of space figures in
E3. On the other hand, isometries may be described geometrically. It turns out
that the fundamental isometries are: rotation, translation, symmetry w.r.t. a plane,
and symmetry w.r.t. a point. Any isometry is a composite of the isometries
listed above.
Rotations and symmetries are called orthogonal transformations and they actually
correspond to linear orthogonal transformations on V3 ≃ R³.
As we have seen, any isometry is the product of a translation and an orthogonal
transformation.
Let I = T ∘ R be an isometry determined by the frames F = {O; ī, j̄, k̄} and
F′ = {O′; ī′, j̄′, k̄′}. The isometry I is said to be positive (displacement) if the basis
{ī′, j̄′, k̄′} has positive orientation, and negative (antidisplacement) otherwise.
Translations and rotations are positive isometries; symmetries are negative isometries.
We say that the Cartesian frame O′x′y′z′ is obtained by the translation of the Cartesian frame Oxyz if the axes of the new frame O′x′y′z′ are parallel and of the same
sense as the axes of the initial frame (Fig. 37).
The translation T is described by:
    ī′ = ī,  j̄′ = j̄,  k̄′ = k̄.
Fig. 37
If the coordinates of the new origin w.r.t. the initial frame are O′(a, b, c), let us
determine now the relationship between the coordinates x, y, z and x′, y′, z′ of the
same point M w.r.t. each coordinate system. For, note that OM = OO′ + O′M. In
terms of the basis {ī, j̄, k̄}, this relation becomes
    xī + yj̄ + zk̄ = aī + bj̄ + ck̄ + x′ī + y′j̄ + z′k̄,
thus
    x = x′ + a,  y = y′ + b,  z = z′ + c,
or, in matrix form,

    | x |   | a |   | x′ |   | a |   | 1 0 0 | | x′ |
    | y | = | b | + | y′ | = | b | + | 0 1 0 | | y′ | .
    | z |   | c |   | z′ |   | c |   | 0 0 1 | | z′ |
Consider now a change of frame {O; ī, j̄, k̄} → {O; ī′, j̄′, k̄′}, keeping the same origin
O and changing the orthonormal basis {ī, j̄, k̄}
of V3 into another orthonormal basis {ī′, j̄′, k̄′}, such that {ī′, j̄′, k̄′} has positive
orientation. If the coordinates of the new basis vectors w.r.t. the old basis are known
(i.e. the matrix of change is known), then we can express a relationship between the
coordinates x, y, z and x′, y′, z′ of the same point M w.r.t. each coordinate system.
This is immediate since
    OM = xī + yj̄ + zk̄ = x′ī′ + y′j̄′ + z′k̄′.
Denote the matrix of change by R = [aij] ∈ M3,3(R). Then

(2.1)    | x |       | x′ |
         | y | = R   | y′ | .
         | z |       | z′ |

Since both bases are orthonormal, the linear transformation R sending {ī, j̄, k̄} to
{ī′, j̄′, k̄′} is an orthogonal transformation of V3 whose associated matrix w.r.t.
{ī, j̄, k̄} is R. The entries of R can be expressed in terms of inner products of the
basis vectors (see Chapter 1, Section 7) as follows:
    R(ī) = ī′ = ⟨ī′, ī⟩ī + ⟨ī′, j̄⟩j̄ + ⟨ī′, k̄⟩k̄
    R(j̄) = j̄′ = ⟨j̄′, ī⟩ī + ⟨j̄′, j̄⟩j̄ + ⟨j̄′, k̄⟩k̄
    R(k̄) = k̄′ = ⟨k̄′, ī⟩ī + ⟨k̄′, j̄⟩j̄ + ⟨k̄′, k̄⟩k̄.
Since R is orthogonal, R⁻¹ = ᵗR, hence

    | x′ |        | x |
    | y′ | = ᵗR   | y | .
    | z′ |        | z |

The positive orientation of {ī′, j̄′, k̄′} means ⟨ī′, j̄′ × k̄′⟩ = 1. But ⟨ī′, j̄′ × k̄′⟩ = det R,
thus R is a rotation.
If we do not impose the positive orientation of the new frame, then the determinant
of the matrix of change can be −1. This is the case of a symmetry, or a rotation
followed by a symmetry.
Particular cases.
1) Rotation about the Oz axis. Denote by θ the rotation angle (Fig. 38).
Then R is described by
    x = x′ cos θ − y′ sin θ
    y = x′ sin θ + y′ cos θ
    z = z′.
Obviously, the determinant of the associated matrix is +1, thus R is a positive isometry. In particular, a rotation in the xOy plane, of angle θ, is described by
    x = x′ cos θ − y′ sin θ
    y = x′ sin θ + y′ cos θ.
2) Composing such a rotation with a translation of the origin to O′(a, b) in the xOy
plane gives
    x = x′ cos θ − y′ sin θ + a
    y = x′ sin θ + y′ cos θ + b.
Fig. 39
Fig. 40
3) Symmetry S w.r.t. the xOy plane: ī′ = ī, j̄′ = j̄, k̄′ = −k̄.
By xī + yj̄ + zk̄ = x′ī′ + y′j̄′ + z′k̄′ it follows
    S : x = x′,  y = y′,  z = −z′,
or in matrix form

    | x |   | 1 0  0 | | x′ |
    | y | = | 0 1  0 | | y′ | .
    | z |   | 0 0 −1 | | z′ |
Cylindrical Coordinates
Fig. 41
The numbers ρ, θ, z are called the cylindrical coordinates of the point M. The
cylindrical coordinates and the Cartesian coordinates of M are related by
    x = ρ cos θ,  y = ρ sin θ,  z = z.
If we impose ρ > 0, θ ∈ [0, 2π), then the above relations give a one-to-one correspondence between E3 \ Oz and (0, ∞) × [0, 2π) × R.
Coordinate surfaces
    ρ = ρ0 : circular cylinder with generator lines parallel to Oz.
    θ = θ0 : semiplane bounded by Oz.
    z = z0 : plane parallel to xOy, without the point (0, 0, z0).
Coordinate curves
    θ = θ0, z = z0 : semiline parallel to xOy, with the origin on Oz.
    ρ = ρ0, z = z0 : circle whose center is on Oz, contained in a plane parallel to xOy.
    ρ = ρ0, θ = θ0 : line parallel to Oz.
The coordinate curves of different types are orthogonal, so the coordinate surfaces
of different types are orthogonal too.
Consider the point M(ρ, θ, z). The unit vectors ēρ, ēθ, ēz tangent to the coordinate
curves passing through M are pairwise orthogonal. The moving orthonormal frame
{M(ρ, θ, z); ēρ, ēθ, ēz} is called the cylindrical frame (Fig. 42).
Fig. 42
    ēρ = cos θ ī + sin θ j̄
    ēθ = −sin θ ī + cos θ j̄
    ēz = k̄.
These formulas are based on the rule which gives the components of a vector w.r.t. an
orthonormal basis as projections of that vector onto the basis vectors. For example
    ēρ = ⟨ēρ, ī⟩ī + ⟨ēρ, j̄⟩j̄ + ⟨ēρ, k̄⟩k̄ = cos θ ī + sin θ j̄.
Spherical Coordinates
Fig. 43
The numbers r, φ, θ are called the spherical coordinates of the point M. The
Cartesian and the spherical coordinates of M are related by
    x = r sin φ cos θ,  y = r sin φ sin θ,  z = r cos φ.
These formulas provide a one-to-one correspondence between the sets E3 \ Oz and
(0, ∞) × (0, π) × [0, 2π).
Coordinate surfaces
    r = r0 : sphere of radius r0, center at O, without the north and the south poles.
    θ = θ0 : semiplane bounded by Oz.
    φ = φ0 : semicone without vertex (the origin).
Coordinate curves
    θ = θ0, φ = φ0 : semiline with origin at O.
    r = r0, φ = φ0 : circle whose center is on Oz, contained in a plane parallel to xOy.
    θ = θ0, r = r0 : open semicircle.
The coordinate curves of different types are orthogonal, so the coordinate surfaces
of different types are orthogonal too.
Consider the point M(r, φ, θ). The unit vectors ēr, ēφ, ēθ tangent to the coordinate
curves passing through M are pairwise orthogonal. The moving orthonormal frame
{M(r, φ, θ); ēr, ēφ, ēθ} is called the spherical frame (Fig. 44).
Fig. 44
    ēθ = −sin θ ī + cos θ j̄.
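A minimal sketch of the two coordinate changes above (angles in radians):

    import numpy as np

    def from_cylindrical(rho, theta, z):
        return np.array([rho * np.cos(theta), rho * np.sin(theta), z])

    def from_spherical(r, phi, theta):
        return np.array([r * np.sin(phi) * np.cos(theta),
                         r * np.sin(phi) * np.sin(theta),
                         r * np.cos(phi)])

    print(from_cylindrical(2.0, np.pi / 3, 1.0))
    print(from_spherical(1.0, np.pi / 2, 0.0))   # (1, 0, 0): equator, theta = 0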
Problems
1. Consider the straight line
    D : (x − 2)/2 = (y + 1)/1 = (z − 3)/2.
Let ī′ be the unit director vector of D (choose a sense on D). Let j̄′ be a versor
contained in yOz and perpendicular on D, and the versor k̄′ such that {ī′, j̄′, k̄′} is
an orthonormal basis.
Find out the change of basis formulas and compare the orientations of the two
frames.
7. Consider three points given by their cylindrical coordinates:
    A(5, 4π/3, 4),  B(7, π/3, 2),  C(2, 5π/6, 1).
Show that A and B belong to a plane passing through Oz; determine the Cartesian
coordinates of A and C, and the distance d(A, C).
8. Write the following equations in spherical coordinates:
    (x² + y² + z²)² = 3(x² + y²).
Exam Samples
I.
1. Eigenvalues and eigenvectors; characteristic polynomial.
2. Determine the equation of the plane equidistant from the straight lines
    D1 : (x − 1)/1 = (y + 1)/1 = z/1,   D2 : x/2 = y/1 = (z − 1)/1.
II.
1. Bases and dimension.
2. Prove the identity
    ⟨ā × b̄, c̄ × d̄⟩ = | ⟨ā, c̄⟩  ⟨ā, d̄⟩ |
                      | ⟨b̄, c̄⟩  ⟨b̄, d̄⟩ | .
3. Determine the canonical (diagonal or Jordan) form of the matrix
    |  5  −6  −6 |
    | −1   4   2 | .
    |  3  −6  −4 |
III.
1. Scalar product and vector product of free vectors.
2. Choose the basis {1, x, x², x³} in the vector space V of all real polynomials of
degree ≤ 3. Let D denote the differentiation operator and T : V → V, T(p)(x) = xp′(x).
Determine the matrix of each of the following linear transformations: D, T, DT, TD.
Find the eigenvalues and eigenvectors of T.
3. Let V be the real vector space of all polynomials of degree ≤ 2 and
    A(x, y) = ∫₀¹ ∫₀¹ x(t) y(s) dt ds.
Show that A is a bilinear form which is symmetric and positive semidefinite. Find
the matrix of A with respect to the basis {1, t, t²}.
IV.
1. Euclidean vector spaces.
2. Show that if the vector b̄ is perpendicular to c̄, then (ā × b̄) × c̄ = ⟨ā, c̄⟩b̄.
V.
1. Linear transformations: general properties, kernel and image.
2. Show that the area of a figure F contained in the plane P : z = px + qy + l and
the area of its projection F′ onto the xOy plane are related by
    S(F) = √(1 + p² + q²) · S(F′).
3. Show that ⟨f, g⟩ = ∫ₐᵇ f(t)g(t) dt is a scalar product on the real vector space of
all continuous functions on [a, b].
VI.
1. Scalar product and mixed product of free vectors.
2. Let V and W be vector spaces, each of dimension 2, and {e1, e2} a basis in
V and {f1, f2} a basis in W. Let T : V → W, T(e1 + e2) = 3f1 + 9f2, T(3e1 + 2e2) =
7f1 + 23f2, be a linear transformation. Compute T(e2 − e1) and determine the nullity
and rank of T. Determine T⁻¹.
3. Determine an orthogonal matrix C which reduces the quadratic form Q(x) =
2x1² + 4x1x2 + 5x2² to a diagonal form.
VII.
1. The matrix of a linear transformation. Particular endomorphisms.
2. Find the equation of the plane P passing through the point (1,1,1), which is
perpendicular to the planes Q1 : x + y + z + 1 = 0, Q2 : x y + z = 0. Using the
normal vectors of P, Q1 , Q2 and the Gram-Schmidt procedure, find an orthonormal
basis of R3 .
3. Determine the canonical form of
    Q(x) = 3x1² − 5x2² − 7x3² − 8x1x2 + 8x2x3.
VIII.
1. Free vectors: collinearity, coplanarity.
2. Let V be the vector space of all continuous functions on (−∞, ∞) such that
the integral ∫₋∞ˣ f(t) dt exists for all x. Define T : V → V, g = T(f),
g(x) = ∫₋∞ˣ f(t) dt.
Prove that every positive real number λ is an eigenvalue for T and determine the
eigenfunctions corresponding to λ.
3. Let Q : R³ → R, Q(x) = x1x2 + x2x3 + x3x1. Determine the canonical form
of Q and the image of
    D : (x1 − 1)/1 = x2/1 = x3/2
by Q.
IX.
1. Vectors spaces. Vector subspaces.
2. Let T : C3 C3 be represented by the matrix
1 + 2i
i
4
12
1+i
1i
b
.
4
12
1
2i
c
4
12
Determine a, b and c so that T is unitary (for C3 with the usual inner product).
3. Find the projection of
    D : (x − 1)/2 = y/1 = (z − 2)/2
onto
    P : x + y + z = 0
and the symmetric of P with respect to D.
X.
1. The spectrum of endomorphisms on Euclidean spaces.
2. Determine the relation between a, b, c, d, α, β, γ, δ, λ, μ such that the three planes
    ax + by + cz + d = 0,  αx + βy + γz + δ = 0,  λ(ax + by + cz) + μ(αx + βy + γz) = 0
have no points in common.
3. Let {x, y} be a linearly independent set in a Euclidean space V.
Define f : R R by f () = kx yk.
XI.
1. Diagonal form of an endomorphism.
2. Let P : x + y − 2z = 0. Find the equation of the plane Q symmetric to P about
xOy (about the origin O).
3. Let A : R⁴ × R⁴ → R,
    A(x, y) = x1y2 − x2y1 + x1y3 − x3y1 + x1y4 − x4y1 + x4y4.
Find the matrix of A with respect to the basis f1 = (1, 1, 0, 0), f2 = (0, 1, 1, 0),
f3 = (0, 1, 0, 1), f4 = (1, 0, 0, 1).
XII.
1. Basic facts on straight lines and planes.
2. Let P4 denote the space of all polynomials in t of degree at most 3. Find the
matrix representation of T : P4 → P4, y = Tx,
    y(t) = d/dt [ (t² − 1) dx/dt ],
with respect to the basis
    B = { 1, t, (3/2)t² − 1/2, (5/2)t³ − (3/2)t }.
3. Let V consist of all infinite sequences x = {xn} of real numbers for which
the series Σ xn² converges. Define ⟨x, y⟩ = Σ xnyn. Prove that this series converges
absolutely and ⟨x, y⟩ is a scalar product. Compute ⟨x, y⟩ if xn = 1/n and
yn = 1/(n + 1).
XIII.
1. Quadratic forms.
2. Let ā, b̄, c̄ ∈ V3 such that b̄ is perpendicular to c̄. Show that (ā × b̄) × c̄ = ⟨ā, c̄⟩b̄.
3. Let V = C[0, T] and define (Px)(t) = x(0)(1 − t), for 0 ≤ t ≤ T. Show that P
is a projection and determine the range of P.
XIV.
1. Polar, cylindrical and spherical coordinates.
2. Consider the linear transformation T : V → V given by y = Tx, where
    y(t) = ∫₀^{2π} x(s) 4 cos 2(t − s) ds,  and V = L{1, cos s, cos 2s, sin s, sin 2s}.
XV.
1. Equations of a straight line.
2. Let V = L²[−π, π], and V1 = L(A1), V2 = L(A2), where A1 = {1, cos t, cos 2t, . . .},
A2 = {sin t, sin 2t, . . .}. Show that A1, A2 are linearly independent, and the sum of
V1 and V2 is direct.
3. Let T : V → V, g = T(f), g(x) = ∫₋π^π (1 + cos(x − t)) f(t) dt. Find a basis for
T(V).
(a) Determine the kernel of T.
(b) Find the eigenvalues of T.
XVI.
1. Equation of a plane in space.
2. Let V be the real vector space of all functions of the form x(t) = a cos(ωt + φ),
where ω is fixed. Show that B = {cos ωt, sin ωt} is a basis of V. Give examples of
linear transformations on V.
3. Let V be the real Euclidean space of real polynomial functions on [−1, 1]. Determine which of the following linear transformations is symmetric or skew symmetric:
    T(f)(x) = f(−x),  T(f)(x) = f(x) + f(−x),  T(f)(x) = f(x) − f(−x).
XVII.
1. Transformations of Cartesian frames.
2. Let V denote the real vector space of all polynomials in t, of degree at most
four, and define T : V → V by T = D² + 2D + I, where Dx = dx/dt (differential
operator).
(a) Represent T by a matrix w.r.t. the basis {1, t, t², t³, t⁴}.
(b) Represent T² by a matrix.
3. Determine whether ⟨x, y⟩ = (Σ_{i=1}^{n} xi)(Σ_{j=1}^{n} yj) and
⟨x, y⟩ = Σ_{i=1}^{n} xiyi are scalar products or not. When ⟨x, y⟩ is not a scalar
product, explain why.
XVIII.
1. Jordan form of an endomorphism.
2. What is the locus of points with the property that the ratio of distances to two
given planes is constant?
3. In the linear space of all real polynomials, with scalar product
⟨x, y⟩ = ∫₀¹ x(t)y(t) dt, show that y0(t) = 1, y1(t) = √3(2t − 1),
y2(t) = √5(6t² − 6t + 1) form an orthonormal set spanning the
same subspace as {x0, x1, x2}, where xk(t) = tᵏ.
XIX.
1. Free vectors: addition, multiplication of a vector by a scalar.
2. Let V be the vector space of all continuous functions defined on (−∞, ∞), and
such that the integral ∫₋∞ˣ t f(t) dt exists for all real numbers x. Define T : V → V,
g = T(f), g(x) = ∫₋∞ˣ t f(t) dt. Prove that every negative real number λ is a proper
value for T.
XX.
1. Linear transformations on Euclidean spaces.
2. Show that the locus of points equidistant from three pairwise non-parallel
planes is a straight line.
3. Show that the quadratic forms
    Q(x) = 3x1² + 4x2² + 5x3² + 4x1x2 − 4x2x3,
    Q(x) = 2x1² + 5x2² + 5x3² + 4x1x2 − 4x1x3 − 8x2x3
are positive definite.
XXI.
1. Isometries.
2. Write the equations of the straight line D passing through the point (1, 1, 1)
and parallel to the planes P : x − y + z = 0, Q : x + 2y − z = 0. Find the points of P
which are at the distance 2 from D.
3. Define T : C² → C² by
    | y1 |   | a  b | | x1 |
    | y2 | = | c  d | | x2 | .
XXII.
1. Polynomials of matrices. Functions of matrices.
2. Find out the angles between the coordinate axes and the plane P : x + y + 2z = 0.
Determine the symmetric of P with respect to the line D : x/1 = (y − 1)/1 = z/1.
3. Let x, y be vectors in a Euclidean space V and assume that
    ‖λx + (1 − λ)y‖ = ‖x‖,  ∀λ ∈ [0, 1].
Show that x = y.
XXIII.
1. Orthogonality. The Gram–Schmidt orthogonalization procedure.
2. Prove that: if the vectors ā and b̄ are perpendicular to the vector c̄, then
(ā × b̄) × c̄ = 0̄. Give sufficient conditions for the collinearity of (ā × b̄) × c̄ and c̄.
3. Find the canonical form of T : R³ → R³,
        |  7  4 −1 |
    T = |  4  7 −1 | .
        | −4 −4  4 |
XXIV.
1. Linear dependence and independence.
2. Find the angles between the coordinate planes and the straight line
    D : (x − 1)/1 = y/2 = (z + 2)/1.
3. Find the canonical form of
        | 0  1  0 |
    T = | 0  0  1 | .
        | 1 −3  3 |
XXV.
1. Bilinear forms.
2. Prove that the straight lines joining the mid-points of the opposite edges of a
tetrahedron intersect at one point. Express the coordinates of this point in terms of
the coordinates of the vertices of the tetrahedron.
3. Show that the set of all functions xn (t) = eint , n = 0, 1, 2, . . . is linearly
independent in L2 [0, 2].
XXVI.
1. Eigenvalues and eigenvectors of an endomorphism.
2. Given the point A(0, 1, 2) and the line
x+y =0
D:
x + z + 1 = 0,
determine the symmetric of A with respect to D and the symmetric of D w.r.t. A.
3. Let Q : R3 R,
    Q(x) = 4x1² + x2² + 9x3² + 4x1x2 − 12x1x3 − 6x2x3.
Reduce Q to the canonical expression using the eigenvalues method; determine
the corresponding basis.
Use another method of reduction to the canonical form and check the inertia law.
XXVII.
1. The common perpendicular of two noncoplanar straight lines. The distance
between two noncoplanar straight lines.
2. Consider the real vector space V = C (R) and D : V V, D(f ) = f 0 .
Find the eigenvalues and the eigenvectors of D. Is = 2 an eigenvalue? What is the
dimension of S(2)?
3. Let T : R2 [X] R2 [X] be a linear map such that
T (1 + X) = X 2 , T (X + X 2 ) = 1 X, T (1 + X + X 2 ) = X + X 2 .
Find the matrix associated to T w.r.t. canonical basis of R2 [X]. Determine the
dimension and a basis for each of Ker(T ) and Im(T ).
XXVIII.
1. Endomorphisms of Euclidean vector spaces.
2. Consider v = (14, 3, 6) R3 and S = {v1 , v2 , v3 }, where v1 = (3, 0, 7),
v2 = (1, 4, 3), v3 = (2, 2, 2). Determine the orthogonal projection w of v on Span S
and the vector w⊥.
3. Let T : R³ → R³ be the endomorphism whose matrix w.r.t. the canonical basis is
        |  4  6  0 |
    T = | −3 −5  0 | .
        | −3 −6  1 |
Find the diagonal form of T and the corresponding basis. Is T injective? Is T
surjective?
XXIX.
1. The vector space of free vectors: mixed product - properties, geometric interpretation.
2. Let T : R³ → R³ be an endomorphism such that T(1, 0, 0) = (3, 1, 0),
T(0, 0, 1) = (1, 2, 0), Ker T = {(α, 2α, 3α) | α ∈ R}. Determine a basis for Ker T
and Im T respectively; determine also the matrix of T w.r.t. the canonical basis.
3. Given
        | 1  1  2 |
    A = | 1  3  1 | ,
        | 2  1  1 |
find the canonical expression of the quadratic form Q : R³ → R, Q(x) = ᵗXAX, and
the corresponding change of coordinates.
XXX.
1. Matrix polynomials.
2. Consider the vectors v1 = (3, 2, 1), v2 = (1, 2, 1), v3 = (1, 0, 2). Prove that
there exists a unique linear form f : R3 R such that f (v1 ) = 5, f (v2 ) = 3,
f (v3 ) = 6. Write f (x) in terms of the components of x, for x arbitrary in R3 , and
determine an orthonormal basis of Ker f . Is Im f a proper subspace of R?
3. Let M (1, 2, 5) and Q : x + 2y + 2z + 1 = 0. Determine the projection of the
point M onto the plane Q, the symmetric of M w.r.t. Q, the distance from M to Q,
and the symmetric of Q w.r.t. M .
XXXI.
1. Bilinear forms.
2. Use the Cayley–Hamilton theorem to determine the inverse of the matrix
        | −1  2  2 |
    A = |  2 −1  2 |
        |  2  2 −1 |
and the value of the matrix polynomial R(A) = A³ + 3A² − 8A − 27I.
XXXII.
1. Quadratic forms.
2. If ā, b̄, c̄ ∈ V3 are noncoplanar vectors such that ⟨ā, b̄⟩ ≠ 0, show that
    E = ⟨ (ā × b̄)/⟨ā, b̄⟩ , (ā × (b̄ × c̄))/⟨ā, b̄ × c̄⟩ ⟩
does not depend on ā, b̄, c̄.
3. Determine the eigenvalues and the eigenvectors of the linear transformation
T : R³ → R³, T(x) = (x1 − 2x2 − x3, x1 + x2 + x3, x1 − x3).
(a) Is T diagonalizable? (b) Find orthonormal bases in Ker T and Im T.
XXXIII.
1. Kernel and image of a linear transformation.
2. Given the vectors
    ā = ī − j̄ + 3k̄,  b̄ = ī − j̄ + k̄,  c̄ = 3ī + λj̄ − k̄,
find the value of λ ∈ R such that ā, b̄, c̄ are coplanar.
For λ = 2, determine the altitude of the parallelepiped constructed on some representatives with common origin of the vectors ā, b̄, c̄, corresponding to the base whose
sides are the representatives of ā and b̄.
3. Let
        | 4  2  6 |
    T = | 2  1  3 | .
        | 6  3  9 |
XXXIV.
1. Collinearity and coplanarity in V3.
2. (a) Consider the bilinear form A : R³ × R³ → R whose matrix w.r.t. the canonical
basis is
        | 1  2  1 |
    A = | 1  2  3 | .
        | 3  0  1 |
(b) Is A a symmetric bilinear form?
(c) Let Q : R³ → R be a quadratic form,
    Q(x) = x1² + 8x2² + x3² + 16x1x2 + 4x1x3 + 4x2x3.
Determine the canonical form of Q and the corresponding basis, using Jacobi's
method.
3. Prove that the straight lines
    D1 : x/1 = y/2 = z/3,   D2 : (x − 1)/2 = (y − 1)/1 = z/1
are noncoplanar, determine the equations of their common perpendicular and the
distance between D1 and D2.
XXXV.
1. Linear dependence. Linear independence.
2. Consider the straight line D and the plane P of equations
    D : x/1 = y/2 = z/1,   P : x − 2y + z = 0.
(a) Find the projection of D onto P.
(b) If D and P are viewed as vector subspaces of R³, determine D + P, D ∩ P
and an orthonormal basis of P.
3. Given the matrix
        | 1  0  1 |
    A = | 0  2  0 | ,
        | 0  0  3 |
show that
         | 1  0  (3ⁿ − 1)/2 |
    Aⁿ = | 0  2ⁿ      0     | .
         | 0  0      3ⁿ     |
Bibliography
[1] M. Artin, Algebra, Prentice-Hall, 1991.
[2] Gh. Atanasiu, Gh. Munteanu, M. Postolache, Algebră liniară, geometrie analitică și diferențială (in Romanian), Editura All, București, 1994.
[3] V. Balan, Algebră liniară, geometrie analitică și diferențială (in Romanian), Universitatea Politehnica București, 1998.
[4] R.M. Bowen, C.C. Wang, Introduction to Vectors and Tensors, vol. 1-2, Plenum Press, New York, 1976.
[5] V. Brînzănescu, O. Stănășilă, Matematici speciale (in Romanian), Editura All, București, 1994.
[6] J. Dieudonné, Linear Algebra and Geometry, Hermann, Paris, 1969.
[7] O. Dogaru, M. Doroftei, Algebră liniară (in Romanian), Geometry Balkan Press, București, 1998.
[8] Gh. Th. Gheorghiu, Algebră liniară, geometrie analitică și diferențială și programare (in Romanian), Editura Didactică și Pedagogică, București, 1977.
[9] S. Ianuș, Curs de geometrie diferențială (in Romanian), Universitatea București, 1981.
[10] I.D. Ion, R. Nicolae, Algebra (in Romanian), Editura Didactică și Pedagogică, București, 1981.
[11] N. Jacobson, Lectures in Abstract Algebra, II - Linear Algebra, Springer-Verlag, 1975.
[12] W. Klingenberg, Lineare Algebra und Geometrie, Springer-Verlag, Berlin, 1990.
[13] S. Lang, Algebra, Addison-Wesley, 1984.
[14] A.V. Pogorelov, Analytic Geometry, Mir Publishers, Moscow, 1961.
[15] C. Radu, Algebră liniară, geometrie analitică și diferențială (in Romanian), Editura All, București, 1996.
[16] C. Radu, C. Drăgușin, L. Drăgușin, Aplicații de algebră, geometrie și matematici speciale (in Romanian), Editura Didactică și Pedagogică, București, 1991.
[17] L. Smith, Linear Algebra, Springer-Verlag, 1978.
[18] N. Soare, Curs de geometrie (in Romanian), Universitatea București, 1996.
[19] C. Udriște, Linear Algebra, University Politehnica of Bucharest, 1991-1992.
[20] C. Udriște, Problems in Algebra, Geometry and Differential Equations I, II, University Politehnica of Bucharest, 1992.
[21] C. Udriște, Aplicații de algebră, geometrie și ecuații diferențiale (in Romanian), Editura Didactică și Pedagogică, București, 1993.
[22] C. Udriște, Algebră liniară, Geometrie analitică (in Romanian), Geometry Balkan Press, București, 1996.
[23] C. Udriște, O. Dogaru, Algebră liniară, Geometrie analitică (in Romanian), Universitatea Politehnica București, 1991.
[24] C. Udriște, C. Radu, C. Dicu, O. Mălăncioiu, Probleme de algebră, geometrie și ecuații diferențiale (in Romanian), Editura Didactică și Pedagogică, București, 1981.
[25] C. Udriște, C. Radu, C. Dicu, O. Mălăncioiu, Algebră, geometrie și ecuații diferențiale (in Romanian), Editura Didactică și Pedagogică, București, 1982.