TUT 2 Solutions


Exercises on orthogonal vectors and subspaces

Problem 16.1: (4.1 #7. Introduction to Linear Algebra: Strang) For every
system of m equations with no solution, there are numbers y1, ..., ym that
multiply the equations so they add up to 0 = 1. This is called Fredholm's
Alternative:
Exactly one of these problems has a solution:
Ax = b   OR   A^T y = 0 with y^T b = 1.
If b is not in the column space of A, it is not orthogonal to the nullspace of
A^T. Multiply the equations x1 - x2 = 1, x2 - x3 = 1 and x1 - x3 = 1 by
numbers y1, y2 and y3 chosen so that the equations add up to 0 = 1.
Solution: Let y1 = 1, y2 = 1 and y3 = -1. Then the left-hand side of
the sum of the equations is:
(x1 - x2) + (x2 - x3) - (x1 - x3) = x1 - x2 + x2 - x3 + x3 - x1 = 0,
and the right-hand side verifies that y^T b = 1:
1 + 1 - 1 = 1.
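As a quick sanity check, the combination can be verified numerically (a short Python sketch; the rows of A and the weights y are taken from the solution above):

```python
# Rows of A for x1 - x2 = 1, x2 - x3 = 1, x1 - x3 = 1, and the weights y.
rows = [(1, -1, 0), (0, 1, -1), (1, 0, -1)]
b = [1, 1, 1]
y = [1, 1, -1]

# y^T A: the weighted sum of the rows -- should be the zero row
yTA = [sum(y[i] * rows[i][j] for i in range(3)) for j in range(3)]
# y^T b: the weighted sum of the right-hand sides -- should be 1
yTb = sum(y[i] * b[i] for i in range(3))
print(yTA, yTb)
```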

Problem 16.2: (4.1 #32.) Suppose I give you four nonzero vectors r, n, c
and l in R^2.
a) What are the conditions for those to be bases for the four fundamental
subspaces C(A^T), N(A), C(A), and N(A^T) of a 2 by 2 matrix?
b) What is one possible matrix A?

Solution:
a) In order for r and n to be bases for N(A) and C(A^T), we must have
r · n = 0,
as the row space and nullspace must be orthogonal. Similarly, in
order for c and l to form bases for C(A) and N(A^T) we need
c · l = 0,
as the column space and the left nullspace are orthogonal. In addition,
we need:

dim N(A) + dim C(A^T) = n   and   dim N(A^T) + dim C(A) = m;

however, in this case n = m = 2, and as the four vectors we are given
are nonzero, both of these equations reduce to 1 + 1 = 2, which is
automatically satisfied.
b) One possible such matrix is A = c r^T.
Note that each column of A will be a multiple of c, so it will have the
desired column space. On the other hand, each row of A will be a
multiple of r^T, so A will have the desired row space. The nullspaces
don't need to be checked, as any matrix with the correct row and
column space will have the desired nullspaces (as the nullspaces are
just the orthogonal complements of the row and column spaces).
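A small numeric sketch of part (b). The sample vectors below are my own choice (not from the text), picked so that r · n = 0 and c · l = 0; the checks confirm n lies in the nullspace of A = c r^T and l in its left nullspace:

```python
r, n = (1, 2), (2, -1)      # r . n = 0, so n should span N(A)
c, l = (3, 1), (1, -3)      # c . l = 0, so l should span N(A^T)

# A = c r^T: entry (i, j) is c_i * r_j
A = [[c[i] * r[j] for j in range(2)] for i in range(2)]

# A n = 0 and l^T A = 0 confirm the nullspace and left nullspace
An = [sum(A[i][j] * n[j] for j in range(2)) for i in range(2)]
lA = [sum(l[i] * A[i][j] for i in range(2)) for j in range(2)]
print(A, An, lA)
```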
Exercises on projections onto subspaces

Problem 15.1: (4.2 #13. Introduction to Linear Algebra: Strang) Suppose A
is the four by four identity matrix with its last column removed; A is four
by three. Project b = (1, 2, 3, 4) onto the column space of A. What shape is
the projection matrix P and what is P?
Solution: P will be four by four since we are projecting a 4-dimensional
vector to another 4-dimensional vector. We will have:

    [ 1 0 0 0 ]
P = [ 0 1 0 0 ]
    [ 0 0 1 0 ]
    [ 0 0 0 0 ]

This can be seen by observing that the column space of A is the wxy-space,
so we just need to zero out the z coordinate of the 4-dimensional vector
(w, x, y, z) we're projecting. The projection of b is therefore:

         [ 1 0 0 0 ] [ 1 ]   [ 1 ]
p = Pb = [ 0 1 0 0 ] [ 2 ] = [ 2 ]
         [ 0 0 1 0 ] [ 3 ]   [ 3 ]
         [ 0 0 0 0 ] [ 4 ]   [ 0 ]
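The projection can be double-checked in a few lines of Python (P and b are taken from the solution above):

```python
# P zeroes out the last coordinate of any 4-vector it multiplies.
P = [[1, 0, 0, 0],
     [0, 1, 0, 0],
     [0, 0, 1, 0],
     [0, 0, 0, 0]]
b = [1, 2, 3, 4]
p = [sum(P[i][j] * b[j] for j in range(4)) for i in range(4)]
print(p)
```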

Problem 15.2: (4.2 #17.) If P^2 = P, show that (I - P)^2 = I - P. For the
matrices A and P from the previous question, P projects onto the column
space of A and I - P projects onto the ____.
Solution:

(I - P)^2 = I^2 - IP - PI + P^2 = I - 2P + P^2 = I - 2P + P = I - P.

Using the matrices A and P from the previous question,

        [ 0 0 0 0 ]
I - P = [ 0 0 0 0 ]
        [ 0 0 0 0 ]
        [ 0 0 0 1 ]

projects onto the left nullspace of A.


Exercises on projection matrices and least squares

Problem 16.1: (4.3 #17. Introduction to Linear Algebra: Strang) Write down
three equations for the line b = C + Dt to go through b = 7 at t = -1,
b = 7 at t = 1, and b = 21 at t = 2. Find the least squares solution
x̂ = (C, D) and draw the closest line.

Solution: The three equations in matrix form are:

[ 1 -1 ] [ C ]   [  7 ]
[ 1  1 ] [ D ] = [  7 ]
[ 1  2 ]         [ 21 ]

The solution x̂ = (9, 4) comes from the normal equations A^T A x̂ = A^T b:

[ 3 2 ] [ C ]   [ 35 ]
[ 2 6 ] [ D ] = [ 42 ]

so the closest line is b = 9 + 4t.
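The normal-equation solve can be reproduced in Python (the data points are from the problem; the 2x2 Cramer's-rule solve is my own sketch):

```python
ts = [-1, 1, 2]
bs = [7, 7, 21]
A = [[1, t] for t in ts]          # columns: all-ones and the times t

# Build A^T A and A^T b
ATA = [[sum(A[k][i] * A[k][j] for k in range(3)) for j in range(2)]
       for i in range(2)]
ATb = [sum(A[k][i] * bs[k] for k in range(3)) for i in range(2)]

# Cramer's rule on the 2x2 system
det = ATA[0][0] * ATA[1][1] - ATA[0][1] * ATA[1][0]
C = (ATb[0] * ATA[1][1] - ATA[0][1] * ATb[1]) / det
D = (ATA[0][0] * ATb[1] - ATb[0] * ATA[1][0]) / det
print(C, D)
```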


Problem 16.2: (4.3 #18.) Find the projection p = Ax̂ in the previous prob-
lem. This gives the three heights of the closest line. Show that the error
vector is e = (2, -6, 4). Why is Pe = 0?
Solution: p = Ax̂ = (5, 13, 17) gives the heights of the closest line. The
error is e = b - p = (2, -6, 4). This error e has Pe = Pb - Pp = p - p = 0.

Problem 16.3: (4.3 #19.) Suppose the measurements at t = -1, 1, 2 are
the errors 2, -6, 4 in the previous problem. Compute x̂ and the closest
line to these new measurements. Explain the answer: b = (2, -6, 4) is
perpendicular to ____ so the projection is p = 0.

Solution: If b = error e then b is perpendicular to the column space of A.
Projection p = 0.

Problem 16.4: (4.3 #20.) Suppose the measurements at t = -1, 1, 2 are
b = (5, 13, 17). Compute x̂ and the closest line and e. The error is e = 0
because this b is ____.

Solution: If b = Ax̂ = (5, 13, 17) then x̂ = (9, 4) and e = 0 since b is in the
column space of A.

Problem 16.5: (4.3 #21.) Which of the four subspaces contains the error
vector e? Which contains p? Which contains x̂? What is the nullspace of
A?
Solution: e is in N(A^T); p is in C(A); x̂ is in C(A^T); N(A) = {0} = zero
vector only.

Problem 16.6: (4.3 #22.) Find the best line C + Dt to fit b = 4, 2, -1, 0, 0 at
times t = -2, -1, 0, 1, 2.

Solution: The least squares equation is

[ 5  0 ] [ C ]   [   5 ]
[ 0 10 ] [ D ] = [ -10 ]

so C = 1, D = -1 and the best line is 1 - t. Symmetric t's => diagonal A^T A.
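Because A^T A is diagonal here, the least squares solution decouples into two one-variable equations; a minimal Python check (data from the problem):

```python
ts = [-2, -1, 0, 1, 2]
bs = [4, 2, -1, 0, 0]

# Symmetric t's => sum(t) = 0, so A^T A is diagonal and the equations decouple:
# 5 C = sum(b) and (sum t^2) D = sum(t * b).
sum_t2 = sum(t * t for t in ts)                    # 10
C = sum(bs) / len(ts)
D = sum(t * b for t, b in zip(ts, bs)) / sum_t2
print(C, D)
```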
Exercises on orthogonal matrices and Gram-Schmidt

Problem 17.1: (4.4 #10.b Introduction to Linear Algebra: Strang)
Orthonormal vectors are automatically linearly independent.
Matrix Proof: Show that Qx = 0 implies x = 0. Since Q may be rectangular,
you can use Q^T but not Q^{-1}.

Solution: By definition, Q is a matrix whose columns are orthonormal,
and so we know that Q^T Q = I (where Q may be rectangular). Then:

Qx = 0  ==>  Q^T Qx = Q^T 0  ==>  Ix = 0  ==>  x = 0.

Thus the nullspace of Q is the zero vector, and so the columns of Q are
linearly independent. There are no non-zero linear combinations of the
columns that equal the zero vector. Thus, orthonormal vectors are auto-
matically linearly independent.

Problem 17.2: (4.4 #18) Given the vectors a, b and c listed below, use the
Gram-Schmidt process to find orthogonal vectors A, B, and C that span
the same space.

a = (1, -1, 0, 0),   b = (0, 1, -1, 0),   c = (0, 0, 1, -1).

Show that {A, B, C} and {a, b, c} are bases for the space of vectors per-
pendicular to d = (1, 1, 1, 1).

Solution: We apply Gram-Schmidt to a, b, c. First, we set

A = a = (1, -1, 0, 0).

Next we find B:

B = b - (A^T b / A^T A) A = (0, 1, -1, 0) + (1/2)(1, -1, 0, 0) = (1/2, 1/2, -1, 0).

Finally, since A^T c = 0, we only subtract the projection of c onto B:

C = c - (B^T c / B^T B) B = (0, 0, 1, -1) + (2/3)(1/2, 1/2, -1, 0) = (1/3, 1/3, 1/3, -1).

We know from the first problem that the elements of the set {A, B, C}
are linearly independent, and each vector is orthogonal to (1, 1, 1, 1). The
space of vectors perpendicular to d is three dimensional (since the row
space of (1, 1, 1, 1) is one-dimensional, and the number of dimensions of
the row space added to the number of dimensions of the nullspace add to
4). Therefore {A, B, C} forms a basis for the space of vectors perpendicular
to d.
Similarly, {a, b, c} is a basis for the space of vectors perpendicular to d
because the vectors are linearly independent, orthogonal to (1, 1, 1, 1), and
because there are three of them.
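The Gram-Schmidt arithmetic can be reproduced exactly with Python's fractions module (a sketch; the helper names are my own):

```python
from fractions import Fraction as F

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def sub_proj(v, u):
    # v minus its projection onto u
    coef = F(dot(u, v), dot(u, u))
    return [x - coef * y for x, y in zip(v, u)]

a = [F(1), F(-1), F(0), F(0)]
b = [F(0), F(1), F(-1), F(0)]
c = [F(0), F(0), F(1), F(-1)]

A = a
B = sub_proj(b, A)
C = sub_proj(sub_proj(c, A), B)
d = [F(1)] * 4
print(B, C)
```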
Exercises on properties of determinants

Problem 18.1: (5.1 #10. Introduction to Linear Algebra: Strang) If the en-
tries in every row of a square matrix A add to zero, solve Ax = 0 to prove
that det A = 0. If those entries add to one, show that det(A - I) = 0. Does
this mean that det A = 1?
Solution: If the entries of every row of A sum to zero, then Ax = 0
when x = (1, ..., 1) since each component of Ax is the sum of the entries
in a row of A. Since A has a non-zero nullspace, it is not invertible and
det A = 0.
If the entries of every row of A sum to one, then the entries in every
row of A - I sum to zero. Hence A - I has a non-zero nullspace and
det(A - I) = 0.
If det(A - I) = 0 it is not necessarily true that det A = 1. For example,
the rows of

A = [ 0 1 ]
    [ 1 0 ]

sum to one but det A = -1.
Problem 18.2: (5.1 #18.) Use row operations and the properties of the
determinant to calculate the three by three "Vandermonde determinant":

    [ 1 a a^2 ]
det [ 1 b b^2 ] = (b - a)(c - a)(c - b).
    [ 1 c c^2 ]

Solution: Using row operations and properties of the determinant, we
have:

    [ 1 a a^2 ]       [ 1   a       a^2     ]
det [ 1 b b^2 ] = det [ 0 b - a   b^2 - a^2 ]
    [ 1 c c^2 ]       [ 0 c - a   c^2 - a^2 ]

                                 [ 1 a  a^2  ]
              = (b - a)(c - a) det [ 0 1  b + a ]
                                 [ 0 1  c + a ]

                                 [ 1 a  a^2  ]
              = (b - a)(c - a) det [ 0 1  b + a ]
                                 [ 0 0  c - b ]

              = (b - a)(c - a)(c - b). ✓
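A numeric spot-check of the identity at sample values (the sample values a, b, c are my own choice):

```python
def det3(M):
    # Cofactor expansion of a 3x3 determinant along the first row
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
          - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
          + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

a, b, c = 2, 5, 11
V = [[1, a, a * a], [1, b, b * b], [1, c, c * c]]
lhs = det3(V)
rhs = (b - a) * (c - a) * (c - b)
print(lhs, rhs)
```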
Exercises on determinant formulas and cofactors

Problem 19.1: Compute the determinant of:

A = [ 0 0 0 1 ]
    [ 1 0 0 0 ]
    [ 0 1 0 0 ]
    [ 0 0 1 0 ]

Which method of computing the determinant do you prefer for this prob-
lem, and why?

Solution: The preferred method is that of using cofactors. Expanding
det A along the first row, the only nonzero entry is the 1 in position
(1, 4), and its cofactor carries the sign (-1)^(1+4) = -1:

              [ 1 0 0 ]
det A = -1 · det [ 0 1 0 ] = -1.
              [ 0 0 1 ]

This is quicker than row exchange: bringing A to the identity takes three
row exchanges (cycle the top row down to the bottom), and each exchange
reverses the sign of the determinant, so

det A = (-1)^3 det I = -1.
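The cofactor expansion used above generalizes to a short recursive routine (a sketch of my own, not from the text):

```python
def det(M):
    # Recursive cofactor expansion along the first row
    if len(M) == 1:
        return M[0][0]
    total = 0
    for j in range(len(M)):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(minor)
    return total

A = [[0, 0, 0, 1],
     [1, 0, 0, 0],
     [0, 1, 0, 0],
     [0, 0, 1, 0]]
print(det(A))
```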
Problem 19.2: (5.2 #33. Introduction to Linear Algebra: Strang) The sym-
metric Pascal matrices have determinant 1. If I subtract 1 from the n, n
entry, why does the determinant become zero? (Use rule 3 or cofactors.)

    [ 1 1  1  1 ]                     [ 1 1  1  1 ]
det [ 1 2  3  4 ] = 1 (known)     det [ 1 2  3  4 ] = 0 (to explain).
    [ 1 3  6 10 ]                     [ 1 3  6 10 ]
    [ 1 4 10 20 ]                     [ 1 4 10 19 ]

Solution: The difference in the n, n entry (in the example, the difference
between 19 and 20) multiplies its cofactor, the determinant of the n - 1 by
n - 1 symmetric Pascal matrix. In our example this matrix is

[ 1 1 1 ]
[ 1 2 3 ]
[ 1 3 6 ]

We're told that this matrix has determinant 1. Since the n, n entry multi-
plies its cofactor positively, the overall determinant drops by 1 to become
0.

Exercises on Cramer's rule, inverse matrix, and volume

Problem 20.1: (5.3 #8. Introduction to Linear Algebra: Strang) Suppose

    [ 1 1 4 ]
A = [ 1 2 2 ] .
    [ 1 2 5 ]

Find its cofactor matrix C and multiply AC^T to find det(A):

    [ 6 -3 0 ]
C = [ _  _  _ ]   and   AC^T = ___ .
    [ _  _  _ ]

If you change a1,3 = 4 to 100, why is det(A) unchanged?


Solution: We fill in the cofactor matrix C and then multiply to obtain
AC^T:

    [  6 -3  0 ]
C = [  3  1 -1 ]
    [ -6  2  1 ]

and

       [ 1 1 4 ] [  6  3 -6 ]   [ 3 0 0 ]
AC^T = [ 1 2 2 ] [ -3  1  2 ] = [ 0 3 0 ] = 3I.
       [ 1 2 5 ] [  0 -1  1 ]   [ 0 0 3 ]

Since AC^T = det(A) I, we have det(A) = 3. If 4 is changed to 100, det(A)
is unchanged because the cofactor of that entry is 0, and thus its value does
not contribute to the determinant.
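Multiplying A by C^T in Python confirms AC^T = 3I (the matrices are taken from the solution above):

```python
A = [[1, 1, 4], [1, 2, 2], [1, 2, 5]]
C = [[6, -3, 0], [3, 1, -1], [-6, 2, 1]]   # cofactor matrix

# A C^T: entry (i, j) is (row i of A) . (row j of C)
ACT = [[sum(A[i][k] * C[j][k] for k in range(3)) for j in range(3)]
       for i in range(3)]
print(ACT)
```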

Problem 20.2: (5.3 #28.) Spherical coordinates ρ, φ, θ satisfy

x = ρ sin φ cos θ,   y = ρ sin φ sin θ   and   z = ρ cos φ.

Find the three by three matrix of partial derivatives:

[ ∂x/∂ρ  ∂x/∂φ  ∂x/∂θ ]
[ ∂y/∂ρ  ∂y/∂φ  ∂y/∂θ ] .
[ ∂z/∂ρ  ∂z/∂φ  ∂z/∂θ ]

Simplify its determinant to J = ρ^2 sin φ. In spherical coordinates,

dV = ρ^2 sin φ dρ dφ dθ

is the volume of an infinitesimal "coordinate box."
Solution: The rows are formed by the partials of x, y, and z with respect
to ρ, φ, and θ:

[ sin φ cos θ   ρ cos φ cos θ   -ρ sin φ sin θ ]
[ sin φ sin θ   ρ cos φ sin θ    ρ sin φ cos θ ]
[ cos φ        -ρ sin φ           0            ]

Expanding its determinant J along the bottom row, we get:

J = cos φ det [ ρ cos φ cos θ   -ρ sin φ sin θ ]
              [ ρ cos φ sin θ    ρ sin φ cos θ ]

    - (-ρ sin φ) det [ sin φ cos θ   -ρ sin φ sin θ ]   + 0
                     [ sin φ sin θ    ρ sin φ cos θ ]

  = cos φ (ρ^2 cos φ sin φ cos^2 θ + ρ^2 cos φ sin φ sin^2 θ)
    + ρ sin φ (ρ sin^2 φ cos^2 θ + ρ sin^2 φ sin^2 θ)

  = cos φ (ρ^2 cos φ sin φ (cos^2 θ + sin^2 θ)) + ρ sin φ (ρ sin^2 φ (cos^2 θ + sin^2 θ))

  = ρ^2 cos^2 φ sin φ + ρ^2 sin^3 φ

  = ρ^2 sin φ (cos^2 φ + sin^2 φ)

  = ρ^2 sin φ.
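A numeric check of J = ρ^2 sin φ at a sample point (the sample values of ρ, φ, θ are my own choice):

```python
import math

rho, phi, th = 2.0, 0.7, 1.3
# The Jacobian matrix from the solution, evaluated numerically
J = [[math.sin(phi) * math.cos(th), rho * math.cos(phi) * math.cos(th), -rho * math.sin(phi) * math.sin(th)],
     [math.sin(phi) * math.sin(th), rho * math.cos(phi) * math.sin(th),  rho * math.sin(phi) * math.cos(th)],
     [math.cos(phi),               -rho * math.sin(phi),                 0.0]]

def det3(M):
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
          - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
          + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

print(det3(J), rho ** 2 * math.sin(phi))
```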

Exercises on eigenvalues and eigenvectors

Problem 21.1: (6.1 #19. Introduction to Linear Algebra: Strang) A three by
three matrix B is known to have eigenvalues 0, 1 and 2. This information
is enough to find three of these (give the answers where possible):

a) The rank of B
b) The determinant of B^T B
c) The eigenvalues of B^T B
d) The eigenvalues of (B^2 + I)^{-1}

Solution:

a) B has 0 as an eigenvalue and is therefore singular (not invertible). Since
B is a three by three matrix, this means that its rank can be at most 2.
Since B has two distinct nonzero eigenvalues, its rank is exactly 2.
b) Since B is singular, det(B) = 0. Thus det(B^T B) = det(B^T) det(B) = 0.
c) There is not enough information to find the eigenvalues of B^T B. For
example, both of these matrices are triangular with eigenvalues 0, 1, 2:

       [ 0 0 0 ]                [ 0 0 0 ]
If B = [ 0 1 0 ]  then  B^T B = [ 0 1 0 ] ,
       [ 0 0 2 ]                [ 0 0 4 ]

           [ 0 1 0 ]                [ 0 0 0 ]
but if B = [ 0 1 0 ]  then  B^T B = [ 0 2 0 ] .
           [ 0 0 2 ]                [ 0 0 4 ]

d) If p(t) is a polynomial and if x is an eigenvector of A with eigenvalue
λ, then
p(A)x = p(λ)x.
We also know that if λ is an eigenvalue of A then 1/λ is an eigenvalue
of A^{-1}. Hence the eigenvalues of (B^2 + I)^{-1} are 1/(0^2 + 1),
1/(1^2 + 1) and 1/(2^2 + 1), or 1, 1/2 and 1/5.
Problem 21.2: (6.1 #29.) Find the eigenvalues of A, B, and C when

    [ 1 2 3 ]        [ 0 0 1 ]            [ 2 2 2 ]
A = [ 0 4 5 ] ,  B = [ 0 2 0 ]  and  C = [ 2 2 2 ] .
    [ 0 0 6 ]        [ 3 0 0 ]            [ 2 2 2 ]

Solution: Since the eigenvalues of a triangular matrix are its diagonal
entries, the eigenvalues of A are 1, 4, and 6. For B we have:

det(B - λI) = (-λ)(2 - λ)(-λ) - 3(2 - λ)
            = (λ^2 - 3)(2 - λ).

Hence the eigenvalues of B are ±√3 and 2. Finally, for C we have:

det(C - λI) = (2 - λ)[(2 - λ)^2 - 4] - 2[2(2 - λ) - 4] + 2[4 - 2(2 - λ)]
            = -λ^3 + 6λ^2 = λ^2(6 - λ).

The eigenvalues of C are 6, 0, and 0.

We can quickly check our answers by computing the determinants of
A and B and by noting that C is singular.
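The eigenvalues of C can be spot-checked without any eigenvalue routine: (1, 1, 1) is an eigenvector for 6, and the equal rows put two independent vectors in the nullspace (a short sketch):

```python
C = [[2, 2, 2], [2, 2, 2], [2, 2, 2]]

# C (1,1,1) = 6 (1,1,1): eigenvalue 6
v = [1, 1, 1]
Cv = [sum(C[i][j] * v[j] for j in range(3)) for i in range(3)]

# Two independent nullspace vectors give the double eigenvalue 0
Cw = [[sum(C[i][j] * w[j] for j in range(3)) for i in range(3)]
      for w in ([1, -1, 0], [1, 0, -1])]
print(Cv, Cw)
```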
Exercises on diagonalization and powers of A

Problem 22.1: (6.2 #6. Introduction to Linear Algebra: Strang) Describe all
matrices S that diagonalize this matrix A (find all eigenvectors):

A = [ 4 0 ]
    [ 1 2 ] .

Then describe all matrices that diagonalize A^{-1}.
Solution: To find the eigenvectors of A, we first find the eigenvalues:

det [ 4 - λ    0   ] = 0  ==>  (4 - λ)(2 - λ) = 0.
    [   1    2 - λ ]

Hence the eigenvalues are λ1 = 4 and λ2 = 2. Using these values, we find
the eigenvectors by solving (A - λI)x = 0:

(A - λ1 I)x = [ 0  0 ] [ y ] = [ 0 ]  ==>  y = 2z,
              [ 1 -2 ] [ z ]   [ 0 ]

thus any multiple of (2, 1) is an eigenvector for λ1.

(A - λ2 I)x = [ 2 0 ] [ y ] = [ 0 ]  ==>  y = 0, z = free variable,
              [ 1 0 ] [ z ]   [ 0 ]

thus any multiple of (0, 1) is an eigenvector for λ2. Therefore the columns
of the matrices S that diagonalize A are nonzero multiples of (2, 1) and
(0, 1). They can appear in either order.
Finally, because A^{-1} = S Λ^{-1} S^{-1}, the same matrices S will diagonalize
A^{-1}.

Problem 22.2: (6.2 #16.) Find Λ and S to diagonalize A:

A = [ .6 .9 ]
    [ .4 .1 ] .

What is the limit of Λ^k as k → ∞? What is the limit matrix of S Λ^k S^{-1}? In
the columns of this matrix you see the ____.
Solution: Since each of the columns of A sums to one, A is a Markov
matrix and definitely has eigenvalue λ1 = 1. The trace of A is .7, so the
other eigenvalue is λ2 = .7 - 1 = -.3. To find S we need to find the
corresponding eigenvectors:

(A - λ1 I)x1 = [ -.4  .9 ] [ y ] = [ 0 ]  ==>  x1 = (9, 4).
               [  .4 -.9 ] [ z ]   [ 0 ]

(A - λ2 I)x2 = [ .9 .9 ] [ y ] = [ 0 ]  ==>  y = -z  ==>  x2 = (1, -1).
               [ .4 .4 ] [ z ]   [ 0 ]

Putting these together, we have:

S = [ 9  1 ]   and   Λ^k → [ 1 0 ]  as k → ∞.
    [ 4 -1 ]               [ 0 0 ]

So

S Λ^k S^{-1} → [ 9  1 ] [ 1 0 ] (1/13) [ 1  1 ] = (1/13) [ 9 9 ]
               [ 4 -1 ] [ 0 0 ]        [ 4 -9 ]          [ 4 4 ] .

In the columns of this matrix you see the steady state vector.
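Repeated multiplication shows A^k = S Λ^k S^{-1} approaching the rank-one steady-state matrix (a Python sketch; 60 iterations is an arbitrary choice, far more than needed since (-.3)^k dies off quickly):

```python
A = [[0.6, 0.9], [0.4, 0.1]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

Ak = [[1.0, 0.0], [0.0, 1.0]]   # start from the identity
for _ in range(60):
    Ak = matmul(Ak, A)
print(Ak)   # both columns approach the steady state (9/13, 4/13)
```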
Exercises on differential equations and eAt

Problem 23.1: (6.3 #14.a Introduction to Linear Algebra: Strang) The ma-
trix in this question is skew-symmetric (A^T = -A):

du   [  0  c -b ]           u1' = c u2 - b u3
-- = [ -c  0  a ] u ,  or   u2' = a u3 - c u1
dt   [  b -a  0 ]           u3' = b u1 - a u2.

Find the derivative of ||u(t)||^2 using the definition:

||u(t)||^2 = u1^2 + u2^2 + u3^2.

What does this tell you about the rate of change of the length of u? What
does this tell you about the range of values of u(t)?
Solution:

d||u(t)||^2 / dt = d(u1^2 + u2^2 + u3^2) / dt
                 = 2 u1 u1' + 2 u2 u2' + 2 u3 u3'
                 = 2 u1 (c u2 - b u3) + 2 u2 (a u3 - c u1) + 2 u3 (b u1 - a u2)
                 = 0.

This means ||u(t)||^2 stays equal to ||u(0)||^2. Because u(t) never changes
length, it always stays on the sphere of radius ||u(0)||.

Problem 23.2: (6.3 #24.) Write A = [ 1 1 ; 0 3 ] as S Λ S^{-1}. Multiply S e^{Λt} S^{-1}
to find the matrix exponential e^{At}. Check your work by evaluating e^{At} and
the derivative of e^{At} when t = 0.

Solution: The eigenvalues of A are λ1 = 1 and λ2 = 3, with correspond-
ing eigenvectors x1 = (1, 0) and x2 = (1, 2). This gives us the following
values for S, Λ, and S^{-1}:

S = [ 1 1 ] ,   Λ = [ 1 0 ] ,   S^{-1} = [ 1 -1/2 ]
    [ 0 2 ]         [ 0 3 ]              [ 0  1/2 ] .

We use these to find e^{At}:

S e^{Λt} S^{-1} = [ 1 1 ] [ e^t    0   ] [ 1 -1/2 ] = [ e^t  .5e^{3t} - .5e^t ] = e^{At}.
                  [ 0 2 ] [ 0    e^{3t} ] [ 0  1/2 ]   [ 0         e^{3t}     ]

Check:

e^{At} = [ e^t  .5e^{3t} - .5e^t ]   equals I when t = 0. ✓
         [ 0         e^{3t}     ]

de^{At}/dt = [ e^t  1.5e^{3t} - .5e^t ]
             [ 0         3e^{3t}      ]

equals A when t = 0. ✓
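The closed form for e^{At} can be compared against the truncated power series Σ (At)^n / n! (a sketch; t = 0.5 and 30 terms are arbitrary choices of mine):

```python
import math

t = 0.5
# Closed form from the solution above
closed = [[math.exp(t), 0.5 * math.exp(3 * t) - 0.5 * math.exp(t)],
          [0.0, math.exp(3 * t)]]

# Power series: sum of (At)^n / n!, built term by term
A = [[1.0, 1.0], [0.0, 3.0]]
term = [[1.0, 0.0], [0.0, 1.0]]      # (At)^0 / 0!
series = [[1.0, 0.0], [0.0, 1.0]]
for n in range(1, 30):
    term = [[sum(term[i][k] * A[k][j] * t / n for k in range(2))
             for j in range(2)] for i in range(2)]
    series = [[series[i][j] + term[i][j] for j in range(2)] for i in range(2)]
print(closed, series)
```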

Exercises on symmetric matrices and positive definiteness

Problem 25.1: (6.4 #10. Introduction to Linear Algebra: Strang) Here is a
quick "proof" that the eigenvalues of all real matrices are real:

False Proof: Ax = λx gives x^T Ax = λ x^T x, so λ = (x^T Ax)/(x^T x) is real.

There is a hidden assumption in this proof which is not justified. Find the
flaw by testing each step on the 90° rotation matrix:

A = [ 0 -1 ]   with λ = i and x = (i, 1).
    [ 1  0 ]

Solution: We can easily confirm that Ax = λx = [ -1 ; i ]. Next, check if
x^T Ax = λ x^T x is true for the 90° rotation matrix:

x^T Ax = [ i 1 ] [ -1 ] = -i + i = 0
                 [  i ]

λ x^T x = i [ i 1 ] [ i ] = i(i^2 + 1) = 0
                    [ 1 ]

so x^T Ax = λ x^T x. ✓

Note that x^T x = 0. Since the next and last step involves dividing by this
term, the hidden assumption must be that x^T x ≠ 0. If x = (a, b) then

x^T x = [ a b ] [ a ] = a^2 + b^2.
                [ b ]

The "proof" assumes that the squares of the components of the eigenvec-
tor cannot sum to zero: a^2 + b^2 ≠ 0. This may be false if the components
are complex.
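Python's built-in complex numbers make the flaw concrete: for x = (i, 1), the unconjugated product x^T x really is zero (a short sketch):

```python
x = [1j, 1.0]
A = [[0, -1], [1, 0]]            # the 90-degree rotation matrix

# Ax should equal i * x, confirming the eigenpair
Ax = [sum(A[i][j] * x[j] for j in range(2)) for i in range(2)]
lam_x = [1j * xi for xi in x]

# x^T x without conjugation: i^2 + 1 = 0, so dividing by it is invalid
xTx = sum(xi * xi for xi in x)
print(Ax, lam_x, xTx)
```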

Problem 25.2: (6.5 #32.) A group of nonsingular matrices includes AB
and A^{-1} if it includes A and B. "Products and inverses stay in the group."
Which of these are groups?
a) Positive definite symmetric matrices A.
b) Orthogonal matrices Q.
c) All exponentials e^{tA} of a fixed matrix A.
d) Matrices D with determinant 1.

Solution:
a) The positive definite symmetric matrices A do not form a group. To
show this, we provide a counterexample in the form of two positive
definite symmetric matrices A and B whose product is not a positive
definite symmetric matrix.

If A = [ 2 1 ]  and  B = [ 1 0 ]   then  AB = [ 2 2 ]  is not symmetric.
       [ 1 2 ]           [ 0 2 ]              [ 1 4 ]

b) The orthogonal matrices Q form a group. If A and B are orthogonal
matrices, then:

A^T A = I  ==>  A^{-1} = A^T  ==>  A^{-1} is orthogonal, and

B^T B = I  ==>  (AB)^T AB = B^T A^T A B = B^T B = I  ==>  AB is orthogonal.

c) The exponentials e^{tA} of a fixed matrix A form a group. For the elements
e^{pA} and e^{qA}:

(e^{pA})^{-1} = e^{-pA} is of the form e^{tA}, and
e^{pA} e^{qA} = e^{(p+q)A} is of the form e^{tA}.

d) The matrices D with determinant 1 form a group. If det A = 1 then
det A^{-1} = 1. If matrices A and B have determinant 1 then their product
also has determinant 1:

det(AB) = det(A) det(B) = 1.


Exercises on positive definite matrices and minima

Problem 27.1: (6.5 #33. Introduction to Linear Algebra: Strang) When A
and B are symmetric positive definite, AB might not even be symmetric,
but its eigenvalues are still positive. Start from ABx = λx and take dot
products with Bx. Then prove λ > 0.
Solution:

ABx = λx
(ABx)^T Bx = (λx)^T Bx
(Bx)^T A^T Bx = λ x^T Bx
(Bx)^T A (Bx) = λ (x^T Bx),

where A^T = A because A is symmetric. Since A is positive definite we
know (Bx)^T A (Bx) > 0 (note Bx ≠ 0, since B is invertible and x ≠ 0), and
since B is positive definite x^T Bx > 0. Hence λ must be positive as well.

Problem 27.2: Find the quadratic form associated with the matrix [ 1 5 ; 7 9 ].
Is this function f(x, y) always positive, always negative, or sometimes
positive and sometimes negative?
Solution: To find the quadratic form, compute x^T Ax:

f(x, y) = [ x y ] [ 1 5 ] [ x ]
                  [ 7 9 ] [ y ]
        = x(x + 5y) + y(7x + 9y)
        = x^2 + 12xy + 9y^2.

This expression can be positive, e.g. when y = 0 and x ≠ 0.

The expression will sometimes be negative because A is not positive
definite. For instance, f(2, -2) = -8. Thus the quadratic form associated
with the matrix A is sometimes positive and sometimes negative. An-
other way to reach this conclusion is to note that det A = 9 - 35 = -26 is
negative and so A is not positive definite.
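Evaluating the quadratic form at the two sample points used above confirms both signs occur (a two-line check):

```python
def f(x, y):
    # Quadratic form x^T A x for A = [[1, 5], [7, 9]]
    return x * x + 12 * x * y + 9 * y * y

print(f(1, 0), f(2, -2))   # positive, then negative
```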
