Section 2.2
1. Label the following statements as true or false. Assume that $V$ and $W$ are finite-dimensional vector spaces with ordered bases $\beta$ and $\gamma$, respectively, and $T, U : V \to W$ are linear transformations.
5. Let
\[
\alpha = \left\{ \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix},\ \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix},\ \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix},\ \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix} \right\}, \qquad \beta = \{1, x, x^2\}, \qquad \gamma = \{1\}.
\]
(a) Define $T : M_{2\times 2}(F) \to M_{2\times 2}(F)$ by $T(A) = A^t$. Compute $[T]_\alpha$. (b) Define $T : P_2(R) \to M_{2\times 2}(R)$ by
\[
T(f(x)) = \begin{pmatrix} f'(0) & 2f(1) \\ 0 & f''(3) \end{pmatrix},
\]
where $'$ denotes differentiation. Compute $[T]_\beta^\alpha$. (c) Define $T : M_{2\times 2}(F) \to F$ by $T(A) = \operatorname{tr}(A)$. Compute $[T]_\alpha^\gamma$. (d) Define $T : P_2(R) \to R$ by $T(f(x)) = f(2)$. Compute $[T]_\beta^\gamma$. (e) If
\[
A = \begin{pmatrix} 1 & -2 \\ 0 & 4 \end{pmatrix},
\]
compute $[A]_\alpha$. (f) If $f(x) = 3 - 6x + x^2$, compute $[f(x)]_\beta$. (g) For $a \in F$, compute $[a]_\gamma$.
Solution. (a) We have
\[
T\begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}, \qquad
T\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} = \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix},
\]
\[
T\begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, \qquad
T\begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix} = \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix},
\]
so that
\[
[T]_\alpha = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}.
\]
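As a quick numerical sanity check (not part of the original solution), one can rebuild $[T]_\alpha$ by applying the transpose map to each basis matrix and reading off its coordinates; a minimal numpy sketch, where the helper `coords` is mine:

```python
import numpy as np

# The ordered basis alpha of M_{2x2}(F): E11, E12, E21, E22.
alpha = [np.array([[1, 0], [0, 0]]),
         np.array([[0, 1], [0, 0]]),
         np.array([[0, 0], [1, 0]]),
         np.array([[0, 0], [0, 1]])]

def coords(M):
    # Coordinates relative to alpha: row-major flattening gives (E11, E12, E21, E22).
    return M.flatten()

# Column j of [T]_alpha is the coordinate vector of T(E_j) = E_j^t.
T_alpha = np.column_stack([coords(E.T) for E in alpha])
print(T_alpha)   # [[1 0 0 0], [0 0 1 0], [0 1 0 0], [0 0 0 1]]
```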
(b) We have
\[
T(1) = \begin{pmatrix} 0 & 2 \\ 0 & 0 \end{pmatrix} = 2\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix},
\]
\[
T(x) = \begin{pmatrix} 1 & 2 \\ 0 & 0 \end{pmatrix} = (1)\begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} + 2\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix},
\]
\[
T(x^2) = \begin{pmatrix} 0 & 2 \\ 0 & 2 \end{pmatrix} = 2\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} + 2\begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix},
\]
so that
\[
[T]_\beta^\alpha = \begin{pmatrix} 0 & 1 & 0 \\ 2 & 2 & 2 \\ 0 & 0 & 0 \\ 0 & 0 & 2 \end{pmatrix}.
\]
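The same columns can be checked symbolically by differentiating and evaluating each basis polynomial; a small sympy sketch under the definition of $T$ from part (b) (the helper name `T` is mine):

```python
import sympy as sp

x = sp.symbols('x')
beta = [sp.Integer(1), x, x**2]

def T(f):
    # T(f) = [[f'(0), 2 f(1)], [0, f''(3)]], as defined in part (b).
    return sp.Matrix([[sp.diff(f, x).subs(x, 0), 2 * f.subs(x, 1)],
                      [0, sp.diff(f, x, 2).subs(x, 3)]])

# Row-major flattening of T(beta_j) lists its alpha-coordinates (E11, E12, E21, E22),
# so the columns below reproduce [T]^alpha_beta.
cols = [sp.Matrix(list(T(f))) for f in beta]
print(sp.Matrix.hstack(*cols))   # [[0,1,0],[2,2,2],[0,0,0],[0,0,2]]
```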
(c) We have
\[
T\begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} = 1, \qquad
T\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} = 0, \qquad
T\begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix} = 0, \qquad
T\begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix} = 1,
\]
so that
\[
[T]_\alpha^\gamma = \begin{pmatrix} 1 & 0 & 0 & 1 \end{pmatrix}.
\]
(d) We have $T(1) = 1$, $T(x) = 2$, $T(x^2) = 4$, so that
\[
[T]_\beta^\gamma = \begin{pmatrix} 1 & 2 & 4 \end{pmatrix}.
\]
(e) We have
\[
A = \begin{pmatrix} 1 & -2 \\ 0 & 4 \end{pmatrix} = (1)\begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} + (-2)\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} + 4\begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix},
\]
so that
\[
[A]_\alpha = \begin{pmatrix} 1 \\ -2 \\ 0 \\ 4 \end{pmatrix}.
\]
(f) We have $f(x) = 3 - 6x + x^2 = 3(1) + (-6)x + (1)x^2$, so that
\[
[f(x)]_\beta = \begin{pmatrix} 3 \\ -6 \\ 1 \end{pmatrix}.
\]
(g) We have $[a]_\gamma = (a)$.
and
\[
T(x) = [x]_\beta = \begin{pmatrix} c_1(x) \\ \vdots \\ c_n(x) \end{pmatrix} \in F^n.
\]
This proves that $T : V \to F^n$ defined by
\[
T(x) = [x]_\beta = \begin{pmatrix} c_1(x) \\ \vdots \\ c_n(x) \end{pmatrix}
\]
is linear, since $c_i(cx + y) = c\,c_i(x) + c_i(y)$ for each $i$ implies
\[
T(cx + y) = \begin{pmatrix} c\,c_1(x) + c_1(y) \\ \vdots \\ c\,c_n(x) + c_n(y) \end{pmatrix}
= c\begin{pmatrix} c_1(x) \\ \vdots \\ c_n(x) \end{pmatrix} + \begin{pmatrix} c_1(y) \\ \vdots \\ c_n(y) \end{pmatrix}
= cT(x) + T(y).
\]
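A concrete instance of this linearity is easy to check in $P_2(R)$ with $\beta = \{1, x, x^2\}$; a small sympy sketch (the helper `coords` and the chosen polynomials are mine):

```python
import sympy as sp

x, c = sp.symbols('x'), 5

def coords(f):
    # Coordinate vector relative to the ordered basis {1, x, x^2}.
    return sp.Matrix([sp.expand(f).coeff(x, k) for k in range(3)])

f, g = 3 - 6*x + x**2, 1 + x
# Linearity of the coordinate map: [c f + g] = c [f] + [g].
print(coords(c*f + g) == c*coords(f) + coords(g))   # True
```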
If $n = 1$ then $\{T_1\}$ is linearly independent. Thus, assume now that $n > 1$. Suppose now that we have shown that $c_i = 0$ for all integers $i$ with $1 \le i \le m < n$, for some integer $m$. We will prove that $c_{m+1} = 0$ as well. To see this, note that $T_i(x^{m+1}) = 0$ if $m + 1 < i$ and $T_{m+1}(x^{m+1}) = (m+1)! \ne 0$, implying
\[
0 = \left( \sum_{i=m+1}^{n} c_i T_i \right)\!\left(x^{m+1}\right) = \sum_{i=m+1}^{n} c_i T_i\!\left(x^{m+1}\right) = c_{m+1}(m+1)!,
\]
so $c_{m+1} = 0$.
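Assuming, as the computation suggests, that $T_i(f) = f^{(i)}$ is the $i$th derivative operator, the two facts used above are easy to confirm symbolically; a minimal sympy sketch with $m = 3$ (my choice of example):

```python
import sympy as sp

x = sp.symbols('x')
m = 3

# T_{m+1}(x^{m+1}) = (m+1)!, while T_i(x^{m+1}) = 0 whenever i > m+1.
print(sp.diff(x**(m + 1), x, m + 1))   # 24 = 4!
print(sp.diff(x**(m + 1), x, m + 2))   # 0
```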
Section 2.3
(b) $[T(v)]_\gamma = [T]_\beta^\gamma [v]_\beta$ for all $v \in V$. Answer: True. Explanation: This is true by Theorem 2.14.
For example, if
\[
A = \begin{pmatrix} 1 & 1 \\ 0 & 0 \end{pmatrix},
\]
we have, for the standard ordered basis $\beta = \{(1,0)^t, (0,1)^t\}$,
\[
[L_A]_\beta = \left[\ \left[L_A\begin{pmatrix}1\\0\end{pmatrix}\right]_\beta \,\middle|\, \left[L_A\begin{pmatrix}0\\1\end{pmatrix}\right]_\beta\ \right]
= \left[\ \begin{pmatrix}1\\0\end{pmatrix} \,\middle|\, \begin{pmatrix}1\\0\end{pmatrix}\ \right]
= \begin{pmatrix} 1 & 1 \\ 0 & 0 \end{pmatrix} = A,
\]
and, for the ordered basis $\gamma = \{(1,0)^t, (1,1)^t\}$,
\[
[L_A]_\gamma = \left[\ \left[L_A\begin{pmatrix}1\\0\end{pmatrix}\right]_\gamma \,\middle|\, \left[L_A\begin{pmatrix}1\\1\end{pmatrix}\right]_\gamma\ \right]
= \left[\ \begin{pmatrix}1\\0\end{pmatrix} \,\middle|\, \begin{pmatrix}2\\0\end{pmatrix}\ \right]
= \begin{pmatrix} 1 & 2 \\ 0 & 0 \end{pmatrix} \ne A.
\]
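The second matrix can also be obtained by a change of basis; a minimal numpy sketch, assuming $\gamma = \{(1,0)^t, (1,1)^t\}$ as in the computation above:

```python
import numpy as np

A = np.array([[1, 1],
              [0, 0]])

# Columns of Q are the vectors of the ordered basis gamma = {(1,0)^t, (1,1)^t}.
Q = np.array([[1, 1],
              [0, 1]])

# [L_A]_gamma = Q^{-1} A Q expresses the images of the gamma vectors
# in gamma coordinates.
print(np.linalg.inv(Q) @ A @ Q)   # [[1. 2.], [0. 0.]]
```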
On the other hand, if $\beta = \gamma$ then $[T^2]_\beta = [T]_\beta [T]_\beta = ([T]_\beta)^2$.
(f) $A^2 = I$ implies that $A = I$ or $A = -I$. Answer: False. Explanation: If $A^2 = I$ then $A$ must be a square matrix (for the product $A^2$ to be defined), and $A$ is invertible with $A^{-1} = A^{-1}I = A^{-1}A^2 = A$. Both $\pm I$ have this property, but another example is
\[
A = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix},
\]
since
\[
A^2 = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}.
\]
(g) $T = L_A$ for some matrix $A$. Answer: False, unless $V$ and $W$ are of the form $F^n$ and $F^m$. Explanation: $T(x) = L_A(x) = Ax$ is matrix multiplication, which is defined only when $V$ and $W$ are of the form $F^n$ and $F^m$. In that case, Theorem 2.15(d) tells us that if $T : F^n \to F^m$ is linear, then there exists a unique $m \times n$ matrix $A$ such that $T = L_A$.
(h) $A^2 = O$ implies that $A = O$, where $O$ is the zero matrix. Answer: False in general. Explanation: If $A$ is invertible, then $A = A^{-1}A^2 = A^{-1}O = O$, so we should look for a counterexample $A$ that is not invertible. For instance,
\[
A = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} \ne O, \qquad
A^2 = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} = \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix} = O.
\]
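Both counterexamples, the one from (f) and the one from (h), are quick to verify numerically; a small numpy sketch:

```python
import numpy as np

# (f): A = [[0, 1], [1, 0]] satisfies A^2 = I even though A != I and A != -I.
A = np.array([[0, 1], [1, 0]])
print(A @ A)   # [[1 0], [0 1]]

# (h): N = [[0, 1], [0, 0]] is nonzero but squares to the zero matrix.
N = np.array([[0, 1], [0, 0]])
print(N @ N)   # [[0 0], [0 0]]
```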
(i) $L_{A+B} = L_A + L_B$. Answer: True. Explanation: This is true by Theorem 2.15(c).
(j) If $A$ is square and $A_{ij} = \delta_{ij}$ for all $i$ and $j$, then $A = I$. Answer: True. Explanation: Two matrices are equal iff the entries in the same position are equal, and $I = (\delta_{ij})$ is exactly the identity matrix, where $\delta_{ij}$ is the Kronecker delta.
3. Let $g(x) = 3 + x$. Define $T : P_2(R) \to P_2(R)$ and $U : P_2(R) \to R^3$ by $T(f(x)) = f'(x)g(x) + 2f(x)$ and $U(a + bx + cx^2) = (a + b,\, c,\, a - b)$, respectively. Let $\beta$ and $\gamma$ be the standard ordered bases of $P_2(R)$ and $R^3$, respectively. (a) Compute $[U]_\beta^\gamma$, $[T]_\beta$, and $[UT]_\beta^\gamma$ directly. Then use Theorem 2.11 to verify your result. (b) Let $h(x) = 3 - 2x + x^2$. Compute $[h(x)]_\beta$ and $[U(h(x))]_\gamma$. Then use $[U]_\beta^\gamma$ from (a) and Theorem 2.14 to verify your result.
Solution. First, the standard ordered bases of $P_2(R)$ and $R^3$, respectively, are
\[
\beta = \{1, x, x^2\}, \qquad \gamma = \{(1,0,0),\, (0,1,0),\, (0,0,1)\}.
\]
Thus,
\[
[U]_\beta^\gamma = \Big[\, [U(1)]_\gamma \,\Big|\, [U(x)]_\gamma \,\Big|\, [U(x^2)]_\gamma \,\Big]
= \Big[\, [(1,0,1)]_\gamma \,\Big|\, [(1,0,-1)]_\gamma \,\Big|\, [(0,1,0)]_\gamma \,\Big]
= \begin{pmatrix} 1 & 1 & 0 \\ 0 & 0 & 1 \\ 1 & -1 & 0 \end{pmatrix},
\]
\[
[T]_\beta = \Big[\, [T(1)]_\beta \,\Big|\, [T(x)]_\beta \,\Big|\, [T(x^2)]_\beta \,\Big]
= \Big[\, [2]_\beta \,\Big|\, [(3+x) + 2x]_\beta \,\Big|\, [2x(3+x) + 2x^2]_\beta \,\Big]
= \Big[\, [2]_\beta \,\Big|\, [3 + 3x]_\beta \,\Big|\, [6x + 4x^2]_\beta \,\Big]
= \begin{pmatrix} 2 & 3 & 0 \\ 0 & 3 & 6 \\ 0 & 0 & 4 \end{pmatrix},
\]
\[
[UT]_\beta^\gamma = \Big[\, [UT(1)]_\gamma \,\Big|\, [UT(x)]_\gamma \,\Big|\, [UT(x^2)]_\gamma \,\Big]
= \Big[\, [U(2)]_\gamma \,\Big|\, [U(3 + 3x)]_\gamma \,\Big|\, [U(6x + 4x^2)]_\gamma \,\Big]
= \Big[\, [(2,0,2)]_\gamma \,\Big|\, [(6,0,0)]_\gamma \,\Big|\, [(6,4,-6)]_\gamma \,\Big]
= \begin{pmatrix} 2 & 6 & 6 \\ 0 & 0 & 4 \\ 2 & 0 & -6 \end{pmatrix},
\]
and indeed
\[
[U]_\beta^\gamma\, [T]_\beta = \begin{pmatrix} 1 & 1 & 0 \\ 0 & 0 & 1 \\ 1 & -1 & 0 \end{pmatrix}\begin{pmatrix} 2 & 3 & 0 \\ 0 & 3 & 6 \\ 0 & 0 & 4 \end{pmatrix} = \begin{pmatrix} 2 & 6 & 6 \\ 0 & 0 & 4 \\ 2 & 0 & -6 \end{pmatrix} = [UT]_\beta^\gamma,
\]
as Theorem 2.11 predicts.
For part (b),
\[
[h(x)]_\beta = [3 - 2x + x^2]_\beta = \begin{pmatrix} 3 \\ -2 \\ 1 \end{pmatrix}, \qquad
[U(h(x))]_\gamma = [U(3 - 2x + x^2)]_\gamma = [(1, 1, 5)]_\gamma = \begin{pmatrix} 1 \\ 1 \\ 5 \end{pmatrix}.
\]
Finally, using $[U]_\beta^\gamma$ from (a) and Theorem 2.14, which tells us that $[U(h(x))]_\gamma = [U]_\beta^\gamma [h(x)]_\beta$, we verify the result:
\[
[U]_\beta^\gamma [h(x)]_\beta = \begin{pmatrix} 1 & 1 & 0 \\ 0 & 0 & 1 \\ 1 & -1 & 0 \end{pmatrix}\begin{pmatrix} 3 \\ -2 \\ 1 \end{pmatrix} = \begin{pmatrix} 1 \\ 1 \\ 5 \end{pmatrix} = [U(h(x))]_\gamma.
\]
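As a cross-check, the whole exercise can be reproduced symbolically; a minimal sympy sketch under the definitions above (the helper names `T`, `U`, and `coords` are mine):

```python
import sympy as sp

x = sp.symbols('x')
g = 3 + x
beta = [sp.Integer(1), x, x**2]

def T(f):
    # T(f) = f'(x) g(x) + 2 f(x)
    return sp.expand(sp.diff(f, x) * g + 2 * f)

def U(f):
    # U(a + b x + c x^2) = (a + b, c, a - b)
    a, b, c = (f.coeff(x, k) for k in range(3))
    return sp.Matrix([a + b, c, a - b])

def coords(f):
    # Coordinates relative to beta = {1, x, x^2}.
    return sp.Matrix([sp.expand(f).coeff(x, k) for k in range(3)])

U_mat = sp.Matrix.hstack(*[U(f) for f in beta])
T_mat = sp.Matrix.hstack(*[coords(T(f)) for f in beta])
print(U_mat * T_mat)                       # equals [UT], confirming Theorem 2.11
print(U_mat * coords(3 - 2*x + x**2))      # (1, 1, 5)^t, confirming Theorem 2.14
```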
4. For each of the following parts, let $T$ be the linear transformation defined in the corresponding part of Exercise 5 of Section 2.2. Use Theorem 2.14 to compute the following vectors:
(a) $[T(A)]_\alpha$, where $A = \begin{pmatrix} 1 & 4 \\ -1 & 6 \end{pmatrix}$.
(c) $[T(A)]_\gamma$, where $A = \begin{pmatrix} 1 & 3 \\ 2 & 4 \end{pmatrix}$.
Solution. (a): Here $T(A) = A^t$, and
\[
[A]_\alpha = \begin{pmatrix} 1 \\ 4 \\ -1 \\ 6 \end{pmatrix}.
\]
Hence by Theorem 2.14 we have
\[
[T(A)]_\alpha = [T]_\alpha [A]_\alpha = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}\begin{pmatrix} 1 \\ 4 \\ -1 \\ 6 \end{pmatrix} = \begin{pmatrix} 1 \\ -1 \\ 4 \\ 6 \end{pmatrix}.
\]
(d): We defined the linear transformation $T : P_2(R) \to R$ by $T(f(x)) = f(2)$, for which
\[
[T]_\beta^\gamma = \begin{pmatrix} 1 & 2 & 4 \end{pmatrix}.
\]
And we calculate
\[
[f(x)]_\beta = [6 - x + 2x^2]_\beta = \begin{pmatrix} 6 \\ -1 \\ 2 \end{pmatrix}.
\]
Hence by Theorem 2.14 we have
\[
[T(f(x))]_\gamma = [T]_\beta^\gamma [f(x)]_\beta = \begin{pmatrix} 1 & 2 & 4 \end{pmatrix}\begin{pmatrix} 6 \\ -1 \\ 2 \end{pmatrix} = 12.
\]
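Both Theorem 2.14 computations from this exercise reduce to matrix-vector products that numpy reproduces directly; a small sketch:

```python
import numpy as np

# (a): [T]_alpha for the transpose map, applied to [A]_alpha = (1, 4, -1, 6)^t.
T_alpha = np.array([[1, 0, 0, 0],
                    [0, 0, 1, 0],
                    [0, 1, 0, 0],
                    [0, 0, 0, 1]])
print(T_alpha @ np.array([1, 4, -1, 6]))              # [ 1 -1  4  6]

# (d): [T] = (1 2 4) for evaluation at 2, applied to [f]_beta = (6, -1, 2)^t.
print(np.array([[1, 2, 4]]) @ np.array([6, -1, 2]))   # [12], and indeed f(2) = 12
```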
14. Assume the notation in Theorem 2.13.
(a) Suppose that $z$ is a (column) vector in $F^p$. Use Theorem 2.13(b) to prove that $Bz$ is a linear combination of the columns of $B$. In particular, if $z = (a_1, a_2, \ldots, a_p)^t$, then show that
\[
Bz = \sum_{j=1}^{p} a_j v_j,
\]
where $v_j$ is the $j$th column of $B$.
(b): By part (a), using $A$ instead of $B$ and $v_j$ instead of $z$, we have
\[
u_j = Av_j = \sum_{i=1}^{n} B_{ij} A_i,
\]
where $A_i$ is the $i$th column of $A$. From this we see that column $j$ of $AB$ is a linear combination of the columns of $A$, with the coefficients in the linear combination being the entries of column $j$ of $B$.
(c): Applying the transpose to $wA$ and using part (a) (with $A^t$ instead of $B$ and $w^t$ instead of $z$), we have
\[
(wA)^t = A^t w^t = \sum_{i=1}^{m} a_i w_i,
\]
where $w = (a_1, a_2, \ldots, a_m)$ and $w_i$ is the $i$th column of $A^t$, i.e., $w_i^t$ is the $i$th row of $A$. Hence,
\[
wA = \sum_{i=1}^{m} a_i w_i^t.
\]
From this we see that $wA$ is a linear combination of the rows of $A$, with the coefficients in the linear combination being the coordinates of $w$.
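Parts (a) through (c) are easy to spot-check numerically on random integer matrices; a small numpy sketch (the shapes and seed are my choices):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(-3, 4, size=(3, 4))   # m x n
B = rng.integers(-3, 4, size=(4, 2))   # n x p
z = rng.integers(-3, 4, size=2)        # column vector in F^p
w = rng.integers(-3, 4, size=3)        # row vector in F^m

# (a): Bz combines the columns of B with coefficients from z.
assert np.array_equal(B @ z, sum(z[j] * B[:, j] for j in range(B.shape[1])))

# (b): column j of AB combines the columns of A with coefficients from column j of B.
assert np.array_equal((A @ B)[:, 0], sum(B[i, 0] * A[:, i] for i in range(A.shape[1])))

# (c): wA combines the rows of A with coefficients given by the coordinates of w.
assert np.array_equal(w @ A, sum(w[i] * A[i, :] for i in range(A.shape[0])))
print("all identities check out")
```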
(d): By part (b) we know that (assuming $A$ is an $m \times n$ matrix and $B$ is an $n \times p$ matrix)
\[
x_i = B^t w_i,
\]
where $x_i$ is the $i$th column of $B^t A^t = (AB)^t$ (i.e., $x_i^t$ is the $i$th row of $AB$) and $w_i$ is the $i$th column of $A^t$ (i.e., $w_i^t$ is the $i$th row of $A$). Thus, we can write
\[
w_i = (A_{i1}, A_{i2}, \ldots, A_{in})^t,
\]
so that, by part (a),
\[
x_i = B^t w_i = \sum_{j=1}^{n} A_{ij} (B^t)_j,
\]
where $(B^t)_j$ is the $j$th column of $B^t$, i.e., $((B^t)_j)^t$ is the $j$th row of $B$. Now taking the transpose we have
\[
x_i^t = \sum_{j=1}^{n} A_{ij} \big((B^t)_j\big)^t.
\]
From this representation we see that row $i$ of $AB$, i.e., $x_i^t$, is a linear combination of the rows of $B$, i.e., the $((B^t)_j)^t$ for $j = 1, \ldots, n$, with the coefficients in the linear combination being the entries of row $i$ of $A$, i.e., the $A_{ij}$ for $j = 1, \ldots, n$.
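Part (d) admits the same kind of numerical spot-check; a minimal numpy sketch (the matrices and the row index are my choices):

```python
import numpy as np

A = np.array([[1, 2, 0],
              [0, -1, 3]])    # 2 x 3
B = np.array([[1, 1],
              [2, 0],
              [0, 4]])        # 3 x 2

# Row i of AB is sum_j A[i, j] * (row j of B).
i = 1
row = sum(A[i, j] * B[j, :] for j in range(A.shape[1]))
print(np.array_equal((A @ B)[i, :], row))   # True
```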