
Linear Algebra Qualifying Exam – Fall 2019

(Try six of the eight questions)

Notations: End(V ) – the space of all linear transformations from V to V ; spec(α)
– the set of all eigenvalues of α; F = R or C.

1. Prove that:
(a) Let A ∈ Mm×n (R) and b ∈ Rm . If Ax = b has no solution, then there exists
y ∈ Rm such that AT y = 0 and ⟨y, b⟩ ≠ 0, where ⟨·, ·⟩ is the dot product on Rm .
Answer by OF Let R(A) be the range space of A ∈ Mm×n (R). Since Ax = b has
no solution, we conclude that

b ∉ R(A). (1)

Write
b = b1 + b2 , (2)
where b1 ∈ R(A) and b2 ∈ R(A)⊥ . By (1) and (2), we have

b2 ≠ 0. (3)

Now, using the property of the range space that R(A)⊥ = ker(AT ), we obtain

AT b2 = 0. (4)

Set y = b2 . Then ⟨b2 , b⟩ = ⟨b2 , b1 + b2 ⟩ = ⟨b2 , b2 ⟩ = ∥b2 ∥² ≠ 0, where the last
equality follows from (3). This together with (4) shows that y = b2 satisfies all
the requirements.
(b) Let V be a vector space over a field F and let v1 , . . . , vn (n ≥ 2) be distinct
vectors in V . If there exist α ∈ End(V ) and a linear functional δ on V such
that the matrix

$$\begin{pmatrix}
\delta(v_1) & \delta(v_2) & \delta(v_3) & \cdots & \delta(v_n) \\
\delta\alpha(v_1) & \delta\alpha(v_2) & \delta\alpha(v_3) & \cdots & \delta\alpha(v_n) \\
\delta\alpha^2(v_1) & \delta\alpha^2(v_2) & \delta\alpha^2(v_3) & \cdots & \delta\alpha^2(v_n) \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
\delta\alpha^{n-1}(v_1) & \delta\alpha^{n-1}(v_2) & \delta\alpha^{n-1}(v_3) & \cdots & \delta\alpha^{n-1}(v_n)
\end{pmatrix}$$

is nonsingular, then the set {v1 , v2 , . . . , vn } is linearly independent.


Answer by OF Let a1 , a2 , . . . , an be scalars in F satisfying

a1 v1 + a2 v2 + · · · + an vn = 0. (5)

Applying αk , 1 ≤ k ≤ n − 1, to equation (5) yields

αk (a1 v1 + a2 v2 + · · · + an vn ) = αk (0) = 0 for all 1 ≤ k ≤ n − 1.

Since α ∈ End(V ) is linear, and taking k = 0 to recover (5) itself, we have

a1 αk (v1 ) + a2 αk (v2 ) + · · · + an αk (vn ) = 0 for all 0 ≤ k ≤ n − 1. (6)

Applying δ to equation (6), we have

δ(a1 αk (v1 ) + a2 αk (v2 ) + · · · + an αk (vn )) = 0, 0 ≤ k ≤ n − 1.

Since δ is a linear functional, we can write this as

a1 δαk (v1 ) + a2 δαk (v2 ) + · · · + an δαk (vn ) = 0 for all 0 ≤ k ≤ n − 1. (7)

Thus the linear system (7) can be written in matrix form:

$$\begin{pmatrix}
\delta(v_1) & \delta(v_2) & \delta(v_3) & \cdots & \delta(v_n) \\
\delta\alpha(v_1) & \delta\alpha(v_2) & \delta\alpha(v_3) & \cdots & \delta\alpha(v_n) \\
\delta\alpha^2(v_1) & \delta\alpha^2(v_2) & \delta\alpha^2(v_3) & \cdots & \delta\alpha^2(v_n) \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
\delta\alpha^{n-1}(v_1) & \delta\alpha^{n-1}(v_2) & \delta\alpha^{n-1}(v_3) & \cdots & \delta\alpha^{n-1}(v_n)
\end{pmatrix}
\begin{pmatrix} a_1 \\ a_2 \\ a_3 \\ \vdots \\ a_n \end{pmatrix}
=
\begin{pmatrix} 0 \\ 0 \\ 0 \\ \vdots \\ 0 \end{pmatrix}.$$

Since the matrix on the left is nonsingular, ai = 0 for 1 ≤ i ≤ n. Thus
{v1 , v2 , v3 , . . . , vn } is linearly independent.
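The criterion can be illustrated concretely on R³. A hedged sketch (the matrix M for α, the functional δ, and the choice of vectors are all made up): build the matrix with entries δ(αᵏ(vⱼ)) and check it is nonsingular.

```python
import numpy as np

# Illustration of 1(b) on R^3: alpha is multiplication by a matrix M,
# delta is the functional x -> d @ x. All concrete values are made up.
n = 3
M = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [1.0, 0.0, 0.0]])     # a sample alpha in End(R^3)
d = np.array([1.0, 0.0, 0.0])       # a sample linear functional delta
V = np.eye(n)                       # v_1, v_2, v_3 = standard basis (columns)

# Build the matrix D with D[k, j] = delta(alpha^k (v_j)), 0 <= k <= n-1.
D = np.array([d @ np.linalg.matrix_power(M, k) @ V for k in range(n)])

# If D is nonsingular, the proof shows {v_1, ..., v_n} must be independent.
print(abs(np.linalg.det(D)) > 1e-10)          # D is nonsingular here
print(np.linalg.matrix_rank(V) == n)          # and the v_j are independent
```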

2. Let V be a finite-dimensional vector space and α ∈ End(V ) satisfying:

• pα (x) = (x − 1)2 (x − 2)3 , where pα (x) is the characteristic polynomial of α;
• dim ker(α − 2I) = 1, where I denotes the identity map on V .

(a) Find all possible Jordan canonical forms of α, up to similarity.
(b) Find all possible minimal polynomials of α.
Answer by AN A similar question appeared on the Spring 2020 qualifying
exam.
3. Prove or disprove.
(a) If A ∈ Mn×n (R) such that rank A = 1, then det(I − A) = 1 − tr(A).
Answer by IV We prove that the statement det(I − A) = 1 − tr(A) holds. Let A
be a matrix with rank A = 1. Then, by the Schur theorem (which applies over R
here, since the eigenvalues of a real rank-one matrix are real), one can find a
nonsingular matrix Q such that Q−1 AQ is an upper triangular matrix, which
also has rank one. Write Q−1 AQ = [a1 , . . . , an ] in terms of its columns. Then it
follows from the rank-one property of Q−1 AQ that

ai = αi b, 1 ≤ i ≤ n (8)

for some b = (b1 , . . . , bn )T ≠ 0 in Rn and αi ∈ R, 1 ≤ i ≤ n.
By direct computation, using invariance of the determinant under similarity,
we have

$$\det(I - A) = \det(I - Q^{-1}AQ)
= \det\begin{pmatrix}
1-\alpha_1 b_1 & \times & \cdots & \times \\
0 & 1-\alpha_2 b_2 & \cdots & \times \\
\vdots & & \ddots & \vdots \\
0 & \cdots & 0 & 1-\alpha_n b_n
\end{pmatrix}
= \prod_{i=1}^{n}(1-\alpha_i b_i), \qquad (9)$$

where × denotes some number which may differ at different occurrences. There-
fore it suffices to prove

$$\prod_{i=1}^{n}(1-\alpha_i b_i) = 1 - \sum_{i=1}^{n}\alpha_i b_i = 1 - \operatorname{tr}(Q^{-1}AQ) = 1 - \operatorname{tr}(A). \qquad (10)$$

Now we consider two cases to prove (10).


Case 1: αi bi = 0 for all 1 ≤ i ≤ n.
In this case, we have

$$\prod_{i=1}^{n}(1-\alpha_i b_i) = 1 = 1 - \sum_{i=1}^{n}\alpha_i b_i,$$

and hence (10) is established.


Case 2: αi bi ≠ 0 for some 1 ≤ i ≤ n.
Let i0 be the minimal integer such that αi0 bi0 ≠ 0. By (8), the (i, j) entry of
Q−1 AQ is αj bi ; since Q−1 AQ is upper triangular, its lower-triangular entries
vanish, and hence

αj bi = 0 for all 1 ≤ j < i ≤ n. (11)

Taking j = i0 in the above equation and applying αi0 bi0 ≠ 0 (so that αi0 ≠ 0)
yields

bi = 0 for all i0 < i ≤ n. (12)

Similarly, taking i = i0 in (11) and applying αi0 bi0 ≠ 0 (so that bi0 ≠ 0) yields

αj = 0 for all 1 ≤ j < i0 . (13)

Combining (12) and (13), we obtain that

αi bi = 0 for all i ≠ i0 . (14)

Therefore

$$\prod_{i=1}^{n}(1-\alpha_i b_i) = 1 - \alpha_{i_0} b_{i_0} = 1 - \sum_{i=1}^{n}\alpha_i b_i,$$

and (10) is proved.
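The identity can be spot-checked numerically: every rank-one matrix is an outer product uvᵀ, and the identity det(I − A) = 1 − tr(A) should hold for all of them. A short sketch:

```python
import numpy as np

# Numerical spot-check of 3(a): for a random rank-one A = u v^T,
# det(I - A) should equal 1 - tr(A).
rng = np.random.default_rng(0)
for _ in range(100):
    n = int(rng.integers(2, 8))
    u = rng.standard_normal(n)
    v = rng.standard_normal(n)
    A = np.outer(u, v)                       # rank(A) = 1
    lhs = np.linalg.det(np.eye(n) - A)
    rhs = 1 - np.trace(A)
    assert np.isclose(lhs, rhs)
print("det(I - A) = 1 - tr(A) verified on 100 random rank-one matrices")
```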


(b) Let V be a finite-dimensional inner product space over R. If α ∈ End(V ) such
that ⟨α(v), v⟩ = 0 for all v ∈ V , then α = 0.
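No written answer is recorded for this part. As a sketch suggesting the claim fails over R, consider the 90-degree rotation J on R²: ⟨Jv, v⟩ = −v₂v₁ + v₁v₂ = 0 for every v, yet J ≠ 0 (over C the analogous statement would be true for the Hermitian inner product).

```python
import numpy as np

# Sketch of a counterexample to 3(b) over R: rotation by 90 degrees.
J = np.array([[0.0, -1.0],
              [1.0,  0.0]])
rng = np.random.default_rng(1)
vs = rng.standard_normal((1000, 2))

# <Jv, v> vanishes for every sampled v, but J is not the zero map.
print(np.allclose(np.einsum('ij,ij->i', vs @ J.T, vs), 0))
print(not np.allclose(J, 0))
```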

4. Prove that:
(a) Let V be a finite-dimensional vector space. If β ∈ End(V ) is a projection (i.e.,
β 2 = β), then β is diagonalizable.
Answer by JN Let n = dim(V ) and let β be a projection, so that β 2 = β. Then
Spec(β) ⊆ {0, 1}. Recall that an endomorphism is diagonalizable if and only
if its minimal polynomial splits into distinct linear factors over F. Since the
eigenvalues of β are 0 and/or 1, the minimal polynomial of β is either m1 (t) =
t, m2 (t) = t − 1, or m3 (t) = t(t − 1). Since all three of these (trivially) split
into distinct linear factors, β must be diagonalizable.
Answer by EC Denote the dimension of the linear space V by n and the rank
of B by r ≤ n. By the definition of rank, there exist linearly independent
columns b1 , . . . , br of the matrix B, which form a basis of im(B). Since B 2 = B,
we have
Bbj = bj , 1 ≤ j ≤ r. (15)
Since B is a projection, V = im(B) ⊕ ker(B). Let br+1 , . . . , bn be a basis of
ker(B). Then we have

Bbj = 0, r + 1 ≤ j ≤ n. (16)

Clearly {b1 , . . . , br , br+1 , . . . , bn } is a basis for the linear space V . This together
with (15) and (16) proves that the representation matrix of the projection B with
respect to the basis {b1 , . . . , bn } is a diagonal matrix, and hence the projection
B is diagonalizable.
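Both arguments can be illustrated numerically. A hedged sketch (the change-of-basis matrix S below is made up): an oblique, not necessarily orthogonal, projection P satisfies P² = P and has spectrum {0, 1}, hence is diagonalizable.

```python
import numpy as np

# Illustration of 4(a): an oblique projection built by conjugating a
# diagonal projection with an invertible change of basis.
S = np.array([[1.0, 1.0],
              [0.0, 1.0]])                 # invertible change of basis
D = np.diag([1.0, 0.0])                    # rank-one diagonal projection
P = S @ D @ np.linalg.inv(S)               # an oblique projection

print(np.allclose(P @ P, P))               # P is idempotent
w = np.linalg.eigvals(P)
print(np.allclose(sorted(w.real), [0.0, 1.0]))  # spectrum is {0, 1}
```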
(b) Let V be the complex vector space consisting of matrices A ∈ M2×2 (C) with
tr(A) = 0. If α ∈ End(V ) is defined by

$$\alpha(A) = HA - AH \quad\text{where}\quad H = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix},$$

then α is diagonalizable.
Answer by JN Define

$$e_1 = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}, \quad
e_2 = \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}, \quad
e_3 = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}.$$

One may verify that {e1 , e2 , e3 } is a basis for the complex vector space V con-
sisting of matrices A ∈ M2×2 (C) with tr(A) = 0. By direct computation, we
get the following:

$$\alpha(e_1) = \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix} = 0e_1 + 0e_2 + 0e_3,$$
$$\alpha(e_2) = \begin{pmatrix} 0 & 0 \\ -2 & 0 \end{pmatrix} = 0e_1 - 2e_2 + 0e_3,$$
$$\alpha(e_3) = \begin{pmatrix} 0 & 2 \\ 0 & 0 \end{pmatrix} = 0e_1 + 0e_2 + 2e_3.$$

The matrix representation of α with respect to the basis {e1 , e2 , e3 } is given by

$$\begin{pmatrix} 0 & 0 & 0 \\ 0 & -2 & 0 \\ 0 & 0 & 2 \end{pmatrix}, \qquad (17)$$

which is a diagonal matrix. This proves that the map α on the linear space V
is diagonalizable.
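The computation above can be reproduced mechanically: extract the coordinates of α(eⱼ) in the basis {e1, e2, e3} and assemble the representation matrix, which should be the diagonal matrix (17).

```python
import numpy as np

# Reproduce 4(b): represent alpha(A) = HA - AH on the basis {e1, e2, e3}.
H  = np.array([[1.0, 0.0], [0.0, -1.0]])
e1 = np.array([[1.0, 0.0], [0.0, -1.0]])
e2 = np.array([[0.0, 0.0], [1.0, 0.0]])
e3 = np.array([[0.0, 1.0], [0.0, 0.0]])
basis = [e1, e2, e3]

def coords(A):
    """Coordinates of a trace-zero A = a*e1 + b*e2 + c*e3."""
    return np.array([A[0, 0], A[1, 0], A[0, 1]])

# Columns of the representation matrix are coords(alpha(e_j)).
rep = np.column_stack([coords(H @ e - e @ H) for e in basis])
print(np.allclose(rep, np.diag([0.0, -2.0, 2.0])))   # matches (17)
```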

5. Let V be a finite-dimensional inner product space over F and α ∈ End(V ) be
self-adjoint. Recall that the norm of α is defined by

||α|| = sup{||α(v)|| : v ∈ V and ||v|| = 1}.

(a) Prove that ||α|| = max{|λ| : λ ∈ spec(α)}.

(b) Let t ∈ F and ε > 0. Suppose that v ∈ V is such that ||v|| = 1 and

||α(v) − tv|| < ε.

Prove that α has an eigenvalue λ such that |λ − t| < ε.
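No solution is recorded for problem 5. As a hedged numerical illustration of part (b) for a sample self-adjoint matrix (the matrix and test points below are made up): for every unit vector v and scalar t, some eigenvalue lies within ||Av − tv|| of t.

```python
import numpy as np

# Illustration of 5(b): self-adjoint A, random unit vectors and shifts.
rng = np.random.default_rng(2)
B = rng.standard_normal((4, 4))
A = (B + B.T) / 2                           # a sample self-adjoint operator
eigs = np.linalg.eigvalsh(A)

for _ in range(200):
    v = rng.standard_normal(4)
    v /= np.linalg.norm(v)                  # ||v|| = 1
    t = rng.standard_normal()
    eps = np.linalg.norm(A @ v - t * v)
    # the claim: some eigenvalue lambda satisfies |lambda - t| <= eps
    assert np.min(np.abs(eigs - t)) <= eps + 1e-12
print("an eigenvalue lies within ||Av - tv|| of t, in 200 random trials")
```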

6. Let V be a finite-dimensional inner product space over C and α ∈ End(V ).

(a) Prove that ker(α) = (im(α∗ ))⊥ .


(b) Assume that α is normal. First prove that ||α(v)|| = ||α∗ (v)|| for any v ∈ V ,
and then use it to prove that λ ∈ spec(α) if and only if λ̄ ∈ spec(α∗ ).
(c) Assume that α is normal and λ1 , λ2 are distinct eigenvalues of α with correspond-
ing eigenvectors v1 and v2 . Use (b) to prove that v1 and v2 are orthogonal.
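The claims in problem 6 can be illustrated on a small normal matrix. A hedged sketch (the matrix N below is made up): N commutes with its adjoint, ||Nv|| = ||N*v|| for any v, and eigenvectors for the distinct eigenvalues ±2i come out orthogonal.

```python
import numpy as np

# Illustration of problem 6: a sample normal (in fact scaled-rotation) matrix.
N = np.array([[0.0, -2.0],
              [2.0,  0.0]])
print(np.allclose(N @ N.conj().T, N.conj().T @ N))       # N is normal

rng = np.random.default_rng(3)
v = rng.standard_normal(2)
# part (b): ||N v|| = ||N* v||
print(np.isclose(np.linalg.norm(N @ v), np.linalg.norm(N.conj().T @ v)))

# part (c): eigenvectors for the distinct eigenvalues +-2i are orthogonal
w, V = np.linalg.eig(N)
print(np.isclose(abs(np.vdot(V[:, 0], V[:, 1])), 0.0))
```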

7. Let V be a finite-dimensional inner product space over F with inner product ⟨·, ·⟩.

(a) Prove the Riesz representation theorem: Let f be a linear functional on V .
Then there exists a unique vector u ∈ V such that

f (v) = ⟨v, u⟩ for all v ∈ V.

(b) Let ⟨·, ·⟩1 be another inner product on V . Prove that there is a unique positive
definite α ∈ End(V ) such that

⟨u, v⟩1 = ⟨α(u), v⟩ for all u, v ∈ V.
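Part (b) can be sketched numerically over R with the standard dot product: a second inner product ⟨u, v⟩1 = uᵀGv for a symmetric positive definite G (the matrix G below is made up) is represented by the positive definite map α(u) = Gu.

```python
import numpy as np

# Sketch of 7(b) on R^2: the second inner product is given by an SPD
# Gram matrix G, and alpha is multiplication by G.
G = np.array([[2.0, 1.0],
              [1.0, 3.0]])                  # SPD Gram matrix of <.,.>_1
assert np.all(np.linalg.eigvalsh(G) > 0)    # positive definiteness

rng = np.random.default_rng(4)
u, v = rng.standard_normal(2), rng.standard_normal(2)
lhs = u @ G @ v                             # <u, v>_1
rhs = (G @ u) @ v                           # <alpha(u), v> with alpha = G
print(np.isclose(lhs, rhs))                 # G symmetric, so the two agree
```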

8. Suppose that α ∈ End(C2 ) is defined by α(x, y) = (−4y, x).

(a) Find the singular values s1 and s2 of α.
(b) Find two orthonormal bases {u1 , u2 } and {v1 , v2 } for C2 such that

α(v) = s1 ⟨v, u1 ⟩v1 + s2 ⟨v, u2 ⟩v2 for all v ∈ C2 .
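As a numerical companion: α has matrix A = [[0, −4], [1, 0]], and AᵀA = diag(1, 16), so the singular values are s1 = 4 and s2 = 1. The SVD expansion from part (b) can be checked directly.

```python
import numpy as np

# Problem 8: singular value decomposition of alpha(x, y) = (-4y, x).
A = np.array([[0.0, -4.0],
              [1.0,  0.0]])
U, s, Vh = np.linalg.svd(A)
print(np.allclose(s, [4.0, 1.0]))           # s1 = 4, s2 = 1

# alpha(v) = sum_i s_i <v, u_i> v_i, with u_i the rows of Vh and
# v_i the columns of U (real case, so <v, u_i> = u_i . v):
w = np.array([0.3, -0.7])
recon = sum(s[i] * (Vh[i] @ w) * U[:, i] for i in range(2))
print(np.allclose(A @ w, recon))
```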
