
MTH5102 Spring 2019

HW Assignment 3: Sec. 2.2, #1, 5, 8, 14; Sec. 2.3, #1, 3, 4, 14


The due date for this assignment is 2/20/19.

Section 2.2

1. Label the following statements as true or false. Assume that $V$ and $W$ are finite-dimensional vector spaces with ordered bases $\beta$ and $\gamma$, respectively, and that $T, U : V \to W$ are linear transformations.

(a) For any scalar $a$, $aT + U$ is a linear transformation from $V$ to $W$. Answer: True. Explanation: For any $x, y$ in $V$ and scalar $c$ we have
\[
\begin{aligned}
(aT + U)(cx + y) &= a[T(cx + y)] + U(cx + y) \\
&= a[cT(x) + T(y)] + cU(x) + U(y) \\
&= c[aT(x) + U(x)] + aT(y) + U(y) \\
&= c(aT + U)(x) + (aT + U)(y).
\end{aligned}
\]

(b) $[T]_\beta^\gamma = [U]_\beta^\gamma$ implies $T = U$. Answer: True. Explanation: This follows from the corollary to Theorem 2.6 (p. 73).
(c) If $m = \dim(V)$ and $n = \dim(W)$, then $[T]_\beta^\gamma$ is an $m \times n$ matrix. Answer: False. Explanation: Let $\beta = \{v_1, \dots, v_m\}$. Then $[T]_\beta^\gamma = \big[\,[T(v_1)]_\gamma \mid \cdots \mid [T(v_m)]_\gamma\,\big]$ has $m$ columns, and the number of rows it has is $|\gamma| = n$. That is, $[T]_\beta^\gamma$ is an $n \times m$ matrix, which is not $m \times n$ unless $m = n$.
(d) $[T + U]_\beta^\gamma = [T]_\beta^\gamma + [U]_\beta^\gamma$. Answer: True. Explanation: This is true by Theorem 2.8(a).
(e) $\mathcal{L}(V, W)$ is a vector space. Answer: True. Explanation: This is true by Theorem 2.7(b).
(f) $\mathcal{L}(V, W) = \mathcal{L}(W, V)$. Answer: False, unless $V = W$. Explanation: If $V = W$ there is nothing to prove. If $V \neq W$, then, since linear transformations are functions, the domain of each function in $\mathcal{L}(V, W)$ is $V$, whereas the domain of each function in $\mathcal{L}(W, V)$ is $W$, which proves that $\mathcal{L}(V, W) \neq \mathcal{L}(W, V)$.

5. Let
\[
\alpha = \left\{ \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}, \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}, \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix} \right\}, \qquad \beta = \{1, x, x^2\}, \qquad \gamma = \{1\}.
\]

(a) Define $T : M_{2\times 2}(F) \to M_{2\times 2}(F)$ by $T(A) = A^t$. Compute $[T]_\alpha$. (b) Define
\[
T : P_2(R) \to M_{2\times 2}(R) \quad \text{by} \quad T(f(x)) = \begin{pmatrix} f'(0) & 2f(1) \\ 0 & f''(3) \end{pmatrix},
\]
where $'$ denotes differentiation. Compute $[T]_\beta^\alpha$. (c) Define $T : M_{2\times 2}(F) \to F$ by $T(A) = \operatorname{tr}(A)$. Compute $[T]_\alpha^\gamma$. (d) Define $T : P_2(R) \to R$ by $T(f(x)) = f(2)$. Compute $[T]_\beta^\gamma$. (e) If
\[
A = \begin{pmatrix} 1 & -2 \\ 0 & 4 \end{pmatrix},
\]
compute $[A]_\alpha$. (f) If $f(x) = 3 - 6x + x^2$, compute $[f(x)]_\beta$. (g) For $a \in F$, compute $[a]_\gamma$.
Solution. (a) We have
\[
T\begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}, \qquad
T\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} = \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}, \qquad
T\begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, \qquad
T\begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix} = \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}
\]
so that
\[
[T]_\alpha = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}.
\]
(b) We have
\[
T(1) = \begin{pmatrix} 0 & 2 \\ 0 & 0 \end{pmatrix} = 2\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, \qquad
T(x) = \begin{pmatrix} 1 & 2 \\ 0 & 0 \end{pmatrix} = (1)\begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} + 2\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, \qquad
T(x^2) = \begin{pmatrix} 0 & 2 \\ 0 & 2 \end{pmatrix} = 2\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} + 2\begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}
\]
so that
\[
[T]_\beta^\alpha = \begin{pmatrix} 0 & 1 & 0 \\ 2 & 2 & 2 \\ 0 & 0 & 0 \\ 0 & 0 & 2 \end{pmatrix}.
\]

(c) We have
\[
T\begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} = 1, \qquad
T\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} = 0, \qquad
T\begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix} = 0, \qquad
T\begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix} = 1
\]
so that
\[
[T]_\alpha^\gamma = \begin{pmatrix} 1 & 0 & 0 & 1 \end{pmatrix}.
\]
(d) We have $T(1) = 1$, $T(x) = 2$, $T(x^2) = 4$ so that
\[
[T]_\beta^\gamma = \begin{pmatrix} 1 & 2 & 4 \end{pmatrix}.
\]
(e) We have
\[
A = \begin{pmatrix} 1 & -2 \\ 0 & 4 \end{pmatrix} = (1)\begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} + (-2)\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} + (0)\begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix} + (4)\begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}
\]
so that
\[
[A]_\alpha = \begin{pmatrix} 1 \\ -2 \\ 0 \\ 4 \end{pmatrix}.
\]
(f) We have $f(x) = 3 - 6x + x^2 = 3(1) + (-6)x + (1)x^2$ so that
\[
[f(x)]_\beta = \begin{pmatrix} 3 \\ -6 \\ 1 \end{pmatrix}.
\]
(g) For $a \in F$ we have $a = (a)1$ so that
\[
[a]_\gamma = (a).
\]
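As a quick numerical sanity check (an illustration beyond the assigned solution), the matrix $[T]_\alpha$ from part (a) should carry the coordinate vector $[A]_\alpha$ from part (e) to the coordinates of $A^t$. A minimal NumPy sketch, with the ordering of $\alpha$ hard-coded so that coordinates are read off row by row:

```python
import numpy as np

# [T]_alpha from part (a): transposing swaps the two middle coordinates.
T_alpha = np.array([[1, 0, 0, 0],
                    [0, 0, 1, 0],
                    [0, 1, 0, 0],
                    [0, 0, 0, 1]])

# A from part (e); its coordinates in alpha are just its entries, row by row.
A = np.array([[1, -2],
              [0, 4]])
A_coords = A.flatten()                      # [ 1 -2  0  4] = [A]_alpha

# Theorem 2.14 in action: [T]_alpha [A]_alpha = [A^t]_alpha.
assert np.array_equal(T_alpha @ A_coords, A.T.flatten())
print(T_alpha @ A_coords)                   # [ 1  0 -2  4]
```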

8. Let $V$ be an $n$-dimensional vector space with an ordered basis $\beta$. Define $T : V \to F^n$ by $T(x) = [x]_\beta$. Prove that $T$ is linear.
Proof. Let $\beta = \{v_1, \dots, v_n\}$ be the ordered basis for $V$. Then for any $x \in V$ there exist unique scalars $c_1(x), \dots, c_n(x)$ in $F$ such that
\[
x = \sum_{i=1}^n c_i(x)\, v_i
\]
and
\[
T(x) = [x]_\beta = \begin{pmatrix} c_1(x) \\ \vdots \\ c_n(x) \end{pmatrix} \in F^n.
\]
This proves that $T : V \to F^n$ defined by
\[
T(x) = [x]_\beta = \begin{pmatrix} c_1(x) \\ \vdots \\ c_n(x) \end{pmatrix}
\]
for all $x \in V$ is a well-defined function. Now let $x, y \in V$ and $c \in F$. Then
\[
cx + y = c\sum_{i=1}^n c_i(x)\, v_i + \sum_{i=1}^n c_i(y)\, v_i = \sum_{i=1}^n \left[c\,c_i(x) + c_i(y)\right] v_i,
\]
which implies
\[
T(cx + y) = \begin{pmatrix} c\,c_1(x) + c_1(y) \\ \vdots \\ c\,c_n(x) + c_n(y) \end{pmatrix} = c\begin{pmatrix} c_1(x) \\ \vdots \\ c_n(x) \end{pmatrix} + \begin{pmatrix} c_1(y) \\ \vdots \\ c_n(y) \end{pmatrix} = cT(x) + T(y).
\]
This proves that $T$ is linear.
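To illustrate the statement numerically (this is not part of the proof), one can take a concrete space and basis, say $V = R^2$ with the arbitrarily chosen basis $\beta = \{(1,1), (0,1)\}$, compute coordinate vectors by solving a linear system, and check the linearity identity:

```python
import numpy as np

# Columns of B are the basis vectors of beta = {(1, 1), (0, 1)}.
B = np.array([[1.0, 0.0],
              [1.0, 1.0]])

def coords(x):
    """The coordinate map T(x) = [x]_beta, obtained by solving B c = x."""
    return np.linalg.solve(B, x)

x = np.array([2.0, 5.0])
y = np.array([-1.0, 3.0])
c = 4.0

# T(cx + y) = c T(x) + T(y), up to floating-point round-off.
assert np.allclose(coords(c * x + y), c * coords(x) + coords(y))
```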


14. Let $V = P(R)$, and for $j \geq 1$ define $T_j(f(x)) = f^{(j)}(x)$, where $f^{(j)}(x)$ is the $j$th derivative of $f(x)$. Prove that the set $\{T_1, T_2, \dots, T_n\}$ is a linearly independent subset of $\mathcal{L}(V)$ for any positive integer $n$.
Proof. First, we know that $T_1 : V \to V$ defined by $T_1(f(x)) = f^{(1)}(x)$ for any $f \in V$ is a well-defined function, since $f$ is a polynomial and hence $f^{(1)}$ is a polynomial (which follows from the linearity of $d/dx$ and the fact that $d(x^m)/dx = mx^{m-1}$ for any nonnegative integer $m$). The linearity of $T_1$ is proved in a first calculus course. Hence $T_1 \in \mathcal{L}(V)$. Next, we have $T_n = T_1^n$ (the composition of $T_1$ with itself $n$ times) for any positive integer $n$, so that $T_n = T_1^n \in \mathcal{L}(V)$ (by Theorem 2.9, the composition of linear transformations is a linear transformation; this is proved in the next section but is elementary). Now if
\[
\sum_{i=1}^n c_i T_i = 0
\]
for some scalars $c_1, \dots, c_n$ in $R$, then $T_i(x) = 0$ for $2 \leq i \leq n$ and $T_1(x) = 1$, so that
\[
0 = \left( \sum_{i=1}^n c_i T_i \right)(x) = \sum_{i=1}^n c_i T_i(x) = c_1.
\]

Hence $c_1 = 0$; in particular, if $n = 1$ then $\{T_1\}$ is linearly independent. Thus, assume now that $n > 1$. Suppose we have shown that $c_i = 0$ for all integers $i$ with $1 \leq i \leq m < n$ for some integer $m$. We will prove that $c_{m+1} = 0$ as well. To see this, we have $T_i(x^{m+1}) = 0$ if $m + 1 < i$ and $T_{m+1}(x^{m+1}) = (m+1)! \neq 0$, implying
\[
0 = \left( \sum_{i=1}^n c_i T_i \right)(x^{m+1}) = \sum_{i=m+1}^n c_i T_i(x^{m+1}) = c_{m+1}(m+1)!,
\]
which gives $c_{m+1} = 0$. This proves that $c_1 = \cdots = c_n = 0$, and therefore $\{T_1, T_2, \dots, T_n\}$ is a linearly independent subset of $\mathcal{L}(V)$ for any positive integer $n$.
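The key step of the induction can be illustrated numerically (again, not part of the proof): applying the $i$th derivative to $x^{m+1}$ returns the zero polynomial for $i > m + 1$ and the constant $(m+1)!$ for $i = m + 1$. A short sketch with NumPy's polynomial module, for the arbitrarily chosen values $n = 5$, $m = 2$:

```python
import numpy as np
from numpy.polynomial import polynomial as P

n, m = 5, 2
x_pow = [0.0] * (m + 1) + [1.0]       # coefficient vector of x**(m+1) = x**3

for i in range(m + 1, n + 1):
    print(i, P.polyder(x_pow, i))     # i-th derivative of x**(m+1)
# i = 3 prints [6.], i.e., (m+1)! = 3!; every larger i prints the zero
# polynomial, so applying sum(c_i T_i) to x**(m+1) isolates c_{m+1} * (m+1)!.
```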

Section 2.3

1. Label the following statements as true or false. In each part, $V$, $W$, and $Z$ denote vector spaces with ordered (finite) bases $\alpha$, $\beta$, and $\gamma$, respectively; $T : V \to W$ and $U : W \to Z$ denote linear transformations; and $A$ and $B$ denote matrices.

(a) $[UT]_\alpha^\gamma = [T]_\alpha^\beta [U]_\beta^\gamma$. Answer: False, in general. Explanation: By Theorem 2.11 we know that $[UT]_\alpha^\gamma = [U]_\beta^\gamma [T]_\alpha^\beta$, so whenever $[U]_\beta^\gamma$ and $[T]_\alpha^\beta$ do not commute (indeed, the product in the other order need not even be defined) we have $[UT]_\alpha^\gamma \neq [T]_\alpha^\beta [U]_\beta^\gamma$.

(b) $[T(v)]_\beta = [T]_\alpha^\beta [v]_\alpha$ for all $v \in V$. Answer: True. Explanation: This is true by Theorem 2.14.

(c) $[U(w)]_\gamma = [U]_\beta^\gamma [w]_\alpha$ for all $w \in W$. Answer: False, in general. Explanation: Since $\alpha$ is a basis of $V$, not necessarily of $W$, the coordinate vector $[w]_\alpha$ is not even well-defined for $w \in W$.
(d) $[I_V]_\alpha = I$. Answer: True. Explanation: This is true by Theorem 2.12(d).
(e) $[T^2]_\alpha^\beta = \left([T]_\alpha^\beta\right)^2$. Answer: False in general, unless $\beta = \alpha$. Explanation: For $T^2 = TT$ to be defined we must have $W = V$, and then Theorem 2.11 gives $[T^2]_\alpha^\beta = [T]_\beta^\beta [T]_\alpha^\beta$, which need not equal $[T]_\alpha^\beta [T]_\alpha^\beta = \left([T]_\alpha^\beta\right)^2$. For instance, consider $V = R^2$ with
\[
\alpha = \left\{ \begin{pmatrix} 1 \\ 0 \end{pmatrix}, \begin{pmatrix} 0 \\ 1 \end{pmatrix} \right\}, \qquad \beta = \left\{ \begin{pmatrix} 1 \\ 1 \end{pmatrix}, \begin{pmatrix} 0 \\ 1 \end{pmatrix} \right\},
\]
and let $T = L_A$, left multiplication by $A$, where
\[
A = \begin{pmatrix} 1 & 1 \\ 0 & 0 \end{pmatrix}.
\]
Then $L_A\begin{pmatrix} 1 \\ 0 \end{pmatrix} = L_A\begin{pmatrix} 0 \\ 1 \end{pmatrix} = \begin{pmatrix} 1 \\ 0 \end{pmatrix}$, and since $\begin{pmatrix} 1 \\ 0 \end{pmatrix} = \begin{pmatrix} 1 \\ 1 \end{pmatrix} - \begin{pmatrix} 0 \\ 1 \end{pmatrix}$ we have
\[
[L_A]_\alpha^\beta = \begin{pmatrix} 1 & 1 \\ -1 & -1 \end{pmatrix}, \qquad \left([L_A]_\alpha^\beta\right)^2 = \begin{pmatrix} 1 & 1 \\ -1 & -1 \end{pmatrix}\begin{pmatrix} 1 & 1 \\ -1 & -1 \end{pmatrix} = \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix}.
\]
On the other hand, $A^2 = A$, so $T^2 = L_{A^2} = L_A$ and
\[
[T^2]_\alpha^\beta = [L_A]_\alpha^\beta = \begin{pmatrix} 1 & 1 \\ -1 & -1 \end{pmatrix} \neq \left([T]_\alpha^\beta\right)^2.
\]
If instead $\beta = \alpha$, then $[T^2]_\alpha^\alpha = [T]_\alpha^\alpha [T]_\alpha^\alpha = \left([T]_\alpha^\alpha\right)^2$.
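A quick NumPy check of this counterexample (an illustration only), computing $[L_A]_\alpha^\beta$ by solving for coordinates in $\beta$:

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [0.0, 0.0]])
Bmat = np.array([[1.0, 0.0],
                 [1.0, 1.0]])       # columns are the beta basis vectors

M = np.linalg.solve(Bmat, A)        # [L_A] from alpha to beta: [[1, 1], [-1, -1]]
M2 = np.linalg.solve(Bmat, A @ A)   # [L_A^2] in the same pair of bases

print(M @ M)    # [[0. 0.] [0. 0.]]   -- the square of the matrix
print(M2)       # [[ 1.  1.] [-1. -1.]] -- the matrix of the square
```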
(f) $A^2 = I$ implies that $A = I$ or $A = -I$. Answer: False. Explanation: If $A^2 = I$ then, for this multiplication to be defined, $A$ must be a square matrix, and $A^2 = I$ implies that $A^{-1}$ exists with $A^{-1} = A^{-1}I = A^{-1}A^2 = A$. Both $I$ and $-I$ have this property, but another example is
\[
A = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \qquad \text{since} \qquad A^2 = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}.
\]
(g) $T = L_A$ for some matrix $A$. Answer: False, unless $V$ and $W$ are of the form $F^n$ and $F^m$. Explanation: $T(x) = L_A(x) = Ax$ is matrix multiplication, which is defined only when $V$ and $W$ are of the form $F^n$ and $F^m$. In that case, Theorem 2.15(d) tells us that if $T : F^n \to F^m$ is linear then there exists a unique $m \times n$ matrix $A$ such that $T = L_A$.
(h) $A^2 = O$ implies that $A = O$, where $O$ is the zero matrix. Answer: False, in general. Explanation: If $A$ is invertible then $A = A^{-1}A^2 = A^{-1}O = O$, a contradiction; thus we should look for a counterexample $A$ that is not invertible. For instance,
\[
A = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} \neq O, \qquad A^2 = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} = \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix} = O.
\]

(i) $L_{A+B} = L_A + L_B$. Answer: True. Explanation: This is true by Theorem 2.15(c).
(j) If $A$ is square and $A_{ij} = \delta_{ij}$ for all $i$ and $j$, then $A = I$. Answer: True. Explanation: Recall that two matrices are equal iff the entries in the same positions are equal, and that $I = (\delta_{ij})$ is the identity matrix, where $\delta_{ij}$ is the Kronecker delta.

3. Let $g(x) = 3 + x$. Let $T : P_2(R) \to P_2(R)$ and $U : P_2(R) \to R^3$ be the linear transformations respectively defined by
\[
T(f(x)) = f'(x)g(x) + 2f(x) \qquad \text{and} \qquad U(a + bx + cx^2) = (a + b,\, c,\, a - b).
\]
Let $\beta$ and $\gamma$ be the standard ordered bases of $P_2(R)$ and $R^3$, respectively. (a) Compute $[U]_\beta^\gamma$, $[T]_\beta$, and $[UT]_\beta^\gamma$ directly. Then use Theorem 2.11 to verify your result. (b) Let $h(x) = 3 - 2x + x^2$. Compute $[h(x)]_\beta$ and $[U(h(x))]_\gamma$. Then use $[U]_\beta^\gamma$ from (a) and Theorem 2.14 to verify your result.
Solution. First, the standard ordered bases of $P_2(R)$ and $R^3$, respectively, are
\[
\beta = \{1, x, x^2\}, \qquad \gamma = \{(1,0,0), (0,1,0), (0,0,1)\}.
\]

Thus,
\[
\begin{aligned}
[U]_\beta^\gamma &= \big[\, [U(1)]_\gamma \mid [U(x)]_\gamma \mid [U(x^2)]_\gamma \,\big]
= \big[\, [(1,0,1)]_\gamma \mid [(1,0,-1)]_\gamma \mid [(0,1,0)]_\gamma \,\big]
= \begin{pmatrix} 1 & 1 & 0 \\ 0 & 0 & 1 \\ 1 & -1 & 0 \end{pmatrix}, \\[4pt]
[T]_\beta &= \big[\, [T(1)]_\beta \mid [T(x)]_\beta \mid [T(x^2)]_\beta \,\big]
= \big[\, [2]_\beta \mid [(3+x) + 2x]_\beta \mid [2x(3+x) + 2x^2]_\beta \,\big]
= \big[\, [2]_\beta \mid [3 + 3x]_\beta \mid [6x + 4x^2]_\beta \,\big]
= \begin{pmatrix} 2 & 3 & 0 \\ 0 & 3 & 6 \\ 0 & 0 & 4 \end{pmatrix}, \\[4pt]
[UT]_\beta^\gamma &= \big[\, [UT(1)]_\gamma \mid [UT(x)]_\gamma \mid [UT(x^2)]_\gamma \,\big]
= \big[\, [U(2)]_\gamma \mid [U(3+3x)]_\gamma \mid [U(6x+4x^2)]_\gamma \,\big]
= \big[\, [(2,0,2)]_\gamma \mid [(6,0,0)]_\gamma \mid [(6,4,-6)]_\gamma \,\big]
= \begin{pmatrix} 2 & 6 & 6 \\ 0 & 0 & 4 \\ 2 & 0 & -6 \end{pmatrix}, \\[4pt]
[h(x)]_\beta &= [3 - 2x + x^2]_\beta = \begin{pmatrix} 3 \\ -2 \\ 1 \end{pmatrix}, \qquad
[U(h(x))]_\gamma = [U(3 - 2x + x^2)]_\gamma = [(1,1,5)]_\gamma = \begin{pmatrix} 1 \\ 1 \\ 5 \end{pmatrix}.
\end{aligned}
\]

Now, by Theorem 2.11 we know that $[UT]_\beta^\gamma = [U]_\beta^\gamma [T]_\beta$, and we verify this using the above calculations and matrix multiplication:
\[
[U]_\beta^\gamma [T]_\beta = \begin{pmatrix} 1 & 1 & 0 \\ 0 & 0 & 1 \\ 1 & -1 & 0 \end{pmatrix}\begin{pmatrix} 2 & 3 & 0 \\ 0 & 3 & 6 \\ 0 & 0 & 4 \end{pmatrix} = \begin{pmatrix} 2 & 6 & 6 \\ 0 & 0 & 4 \\ 2 & 0 & -6 \end{pmatrix} = [UT]_\beta^\gamma.
\]

Finally, using $[U]_\beta^\gamma$ from (a) and Theorem 2.14, which tells us that $[U(h(x))]_\gamma = [U]_\beta^\gamma [h(x)]_\beta$, we verify the result:
\[
[U]_\beta^\gamma [h(x)]_\beta = \begin{pmatrix} 1 & 1 & 0 \\ 0 & 0 & 1 \\ 1 & -1 & 0 \end{pmatrix}\begin{pmatrix} 3 \\ -2 \\ 1 \end{pmatrix} = \begin{pmatrix} 1 \\ 1 \\ 5 \end{pmatrix} = [U(h(x))]_\gamma.
\]
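Both verifications can also be spot-checked numerically; a small NumPy sketch (an extra illustration, not required by the exercise):

```python
import numpy as np

U = np.array([[1, 1, 0],
              [0, 0, 1],
              [1, -1, 0]])          # [U] from beta to gamma
T = np.array([[2, 3, 0],
              [0, 3, 6],
              [0, 0, 4]])           # [T] in beta
UT = np.array([[2, 6, 6],
               [0, 0, 4],
               [2, 0, -6]])         # [UT] computed directly
h = np.array([3, -2, 1])            # [h(x)] in beta

assert np.array_equal(U @ T, UT)            # Theorem 2.11
assert np.array_equal(U @ h, [1, 1, 5])     # Theorem 2.14
```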

4. For each of the following parts, let $T$ be the linear transformation defined in the corresponding part of Exercise 5 of Section 2.2. Use Theorem 2.14 to compute the following vectors:
(a) $[T(A)]_\alpha$, where $A = \begin{pmatrix} 1 & 4 \\ -1 & 6 \end{pmatrix}$.
(b) $[T(f(x))]_\alpha$, where $f(x) = 4 - 6x + 3x^2$.
(c) $[T(A)]_\gamma$, where $A = \begin{pmatrix} 1 & 3 \\ 2 & 4 \end{pmatrix}$.
(d) $[T(f(x))]_\gamma$, where $f(x) = 6 - x + 2x^2$.
Solution. (a): The linear operator $T : M_{2\times 2}(F) \to M_{2\times 2}(F)$ is defined by $T(A) = A^t$. We already showed that
\[
[T]_\alpha = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}.
\]
And it follows from
\[
A = (1)\begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} + (4)\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} + (-1)\begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix} + (6)\begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}
\]
that
\[
[A]_\alpha = \begin{pmatrix} 1 \\ 4 \\ -1 \\ 6 \end{pmatrix}.
\]
Hence, by Theorem 2.14 we have
\[
[T(A)]_\alpha = [T]_\alpha [A]_\alpha = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}\begin{pmatrix} 1 \\ 4 \\ -1 \\ 6 \end{pmatrix} = \begin{pmatrix} 1 \\ -1 \\ 4 \\ 6 \end{pmatrix}.
\]

(b): We defined the linear transformation
\[
T : P_2(R) \to M_{2\times 2}(R) \qquad \text{by} \qquad T(f(x)) = \begin{pmatrix} f'(0) & 2f(1) \\ 0 & f''(3) \end{pmatrix}
\]
and showed that
\[
[T]_\beta^\alpha = \begin{pmatrix} 0 & 1 & 0 \\ 2 & 2 & 2 \\ 0 & 0 & 0 \\ 0 & 0 & 2 \end{pmatrix}.
\]
And we calculate that
\[
[f(x)]_\beta = [4 - 6x + 3x^2]_\beta = \begin{pmatrix} 4 \\ -6 \\ 3 \end{pmatrix}.
\]
Hence, by Theorem 2.14 we have
\[
[T(f(x))]_\alpha = [T]_\beta^\alpha [f(x)]_\beta = \begin{pmatrix} 0 & 1 & 0 \\ 2 & 2 & 2 \\ 0 & 0 & 0 \\ 0 & 0 & 2 \end{pmatrix}\begin{pmatrix} 4 \\ -6 \\ 3 \end{pmatrix} = \begin{pmatrix} -6 \\ 2 \\ 0 \\ 6 \end{pmatrix}.
\]

(c): We defined the linear transformation $T : M_{2\times 2}(F) \to F$ by $T(A) = \operatorname{tr}(A)$ and computed previously that
\[
[T]_\alpha^\gamma = \begin{pmatrix} 1 & 0 & 0 & 1 \end{pmatrix}.
\]
And we calculate
\[
[A]_\alpha = \left[\begin{pmatrix} 1 & 3 \\ 2 & 4 \end{pmatrix}\right]_\alpha = \begin{pmatrix} 1 \\ 3 \\ 2 \\ 4 \end{pmatrix}.
\]
Hence, by Theorem 2.14 we have
\[
[T(A)]_\gamma = [T]_\alpha^\gamma [A]_\alpha = \begin{pmatrix} 1 & 0 & 0 & 1 \end{pmatrix}\begin{pmatrix} 1 \\ 3 \\ 2 \\ 4 \end{pmatrix} = 5.
\]

(d): We defined $T : P_2(R) \to R$ by $T(f(x)) = f(2)$ and computed
\[
[T]_\beta^\gamma = \begin{pmatrix} 1 & 2 & 4 \end{pmatrix}.
\]
And we calculate
\[
[f(x)]_\beta = [6 - x + 2x^2]_\beta = \begin{pmatrix} 6 \\ -1 \\ 2 \end{pmatrix}.
\]
Hence, by Theorem 2.14 we have
\[
[T(f(x))]_\gamma = [T]_\beta^\gamma [f(x)]_\beta = \begin{pmatrix} 1 & 2 & 4 \end{pmatrix}\begin{pmatrix} 6 \\ -1 \\ 2 \end{pmatrix} = 12.
\]
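All four parts can be checked at once with NumPy (an illustration only), comparing each matrix-vector product from Theorem 2.14 against the transformation applied directly:

```python
import numpy as np

# (a) transpose: [T]_alpha [A]_alpha should give the coordinates of A^t.
T_a = np.array([[1, 0, 0, 0], [0, 0, 1, 0], [0, 1, 0, 0], [0, 0, 0, 1]])
A = np.array([[1, 4], [-1, 6]])
assert np.array_equal(T_a @ A.flatten(), A.T.flatten())   # [ 1 -1  4  6]

# (b) f(x) = 4 - 6x + 3x^2 has coordinates (4, -6, 3).
T_b = np.array([[0, 1, 0], [2, 2, 2], [0, 0, 0], [0, 0, 2]])
print(T_b @ [4, -6, 3])                                   # [-6  2  0  6]

# (c) trace: the product reproduces tr(A) = 5.
T_c = np.array([[1, 0, 0, 1]])
assert (T_c @ [1, 3, 2, 4])[0] == np.trace([[1, 3], [2, 4]])

# (d) evaluation at 2: the product reproduces f(2) = 12.
T_d = np.array([[1, 2, 4]])
print(T_d @ [6, -1, 2])                                   # [12]
```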
14. Assume the notation in Theorem 2.13.
(a) Suppose that $z$ is a (column) vector in $F^p$. Use Theorem 2.13(b) to prove that $Bz$ is a linear combination of the columns of $B$. In particular, if $z = (a_1, a_2, \dots, a_p)^t$, then show that
\[
Bz = \sum_{j=1}^p a_j v_j.
\]
(b) Extend (a) to prove that column $j$ of $AB$ is a linear combination of the columns of $A$ with the coefficients in the linear combination being the entries of column $j$ of $B$.
(c) For any row vector $w \in F^m$, prove that $wA$ is a linear combination of the rows of $A$ with the coefficients in the linear combination being the coordinates of $w$.
(d) Prove the analogous result to (b) about rows: row $i$ of $AB$ is a linear combination of the rows of $B$ with the coefficients in the linear combination being the entries of row $i$ of $A$.
Proof. (a): We can write
\[
z = (a_1, a_2, \dots, a_p)^t = \sum_{j=1}^p a_j e_j,
\]
where $e_j$ is the $j$th standard ordered basis vector of $F^p$. Then by Theorem 2.13(b) we know that $v_j = Be_j$ is the $j$th column of $B$, so that by linearity of left multiplication by $B$, i.e., of $L_B$, we have
\[
Bz = L_B z = L_B\left(\sum_{j=1}^p a_j e_j\right) = \sum_{j=1}^p a_j L_B e_j = \sum_{j=1}^p a_j B e_j = \sum_{j=1}^p a_j v_j.
\]
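Parts (a) and (b) are easy to spot-check in NumPy on a random example (an illustration, not part of the proof):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(-5, 5, size=(3, 4))
B = rng.integers(-5, 5, size=(4, 2))
z = rng.integers(-5, 5, size=2)

# (a) Bz is the linear combination of the columns of B with coefficients z_j.
assert np.array_equal(B @ z, sum(z[j] * B[:, j] for j in range(B.shape[1])))

# (b) Column j of AB is a combination of the columns of A, weighted by column j of B.
j = 1
assert np.array_equal((A @ B)[:, j],
                      sum(B[i, j] * A[:, i] for i in range(A.shape[1])))
```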

(b): By Theorem 2.13(a) we know that (assuming $A$ is an $m \times n$ matrix and $B$ is an $n \times p$ matrix)
\[
u_j = Av_j,
\]
where $u_j$ is the $j$th column of $AB$ and $v_j = Be_j$ is the $j$th column of $B$. Thus, we can write
\[
v_j = (B_{1j}, B_{2j}, \dots, B_{nj})^t.
\]

By part (a), using $A$ instead of $B$ and $v_j$ instead of $z$, we have
\[
u_j = Av_j = \sum_{i=1}^n B_{ij} A_i,
\]
where $A_i$ is the $i$th column of $A$. From this we see that we have proven that column $j$ of $AB$ is a linear combination of the columns of $A$, with the coefficients in the linear combination being the entries of column $j$ of $B$.
(c): Applying the transpose to $wA$ and using part (a) (with $A^t$ instead of $B$ and $w^t$ instead of $z$), we have
\[
(wA)^t = A^t w^t = \sum_{i=1}^m a_i w_i,
\]
where
\[
w = (a_1, a_2, \dots, a_m)
\]
and $w_i$ is the $i$th column vector of $A^t$, i.e., $w_i^t$ is the $i$th row of $A$. Hence,
\[
wA = \sum_{i=1}^m a_i w_i^t.
\]
From this we see that we have proven that $wA$ is a linear combination of the rows of $A$, with the coefficients in the linear combination being the coordinates of $w$.
(d): By part (b) we know that (assuming $A$ is an $m \times n$ matrix and $B$ is an $n \times p$ matrix)
\[
x_i = B^t w_i,
\]
where $x_i$ is the $i$th column of $B^t A^t = (AB)^t$ (i.e., $x_i^t$ is the $i$th row of $AB$) and $w_i$ is the $i$th column of $A^t$ (i.e., $w_i^t$ is the $i$th row of $A$). Thus, we can write
\[
w_i = (A_{i1}, A_{i2}, \dots, A_{in})^t,
\]
and we have by part (b) that
\[
x_i = B^t w_i = \sum_{j=1}^n A_{ij} (B^t)_j,
\]
where $(B^t)_j$ is the $j$th column of $B^t$, i.e., $\big((B^t)_j\big)^t$ is the $j$th row of $B$. Now, taking the transpose, we have
\[
x_i^t = \sum_{j=1}^n A_{ij} \big((B^t)_j\big)^t.
\]
From this representation we see that we have proven that row $i$ of $AB$, i.e., $x_i^t$, is a linear combination of the rows of $B$, i.e., of the $\big((B^t)_j\big)^t$ for $j = 1, \dots, n$, with the coefficients in the linear combination being the entries of row $i$ of $A$, i.e., the $A_{ij}$ for $j = 1, \dots, n$.
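The row statements (c) and (d) can be spot-checked the same way (again, an illustration only):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.integers(-5, 5, size=(3, 4))
B = rng.integers(-5, 5, size=(4, 2))
w = rng.integers(-5, 5, size=3)

# (c) wA is the combination of the rows of A with coefficients the coordinates of w.
assert np.array_equal(w @ A, sum(w[i] * A[i, :] for i in range(A.shape[0])))

# (d) Row i of AB is a combination of the rows of B, weighted by row i of A.
i = 2
assert np.array_equal((A @ B)[i, :],
                      sum(A[i, j] * B[j, :] for j in range(B.shape[0])))
```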
