TUT 2 Solutions
Problem 16.1: (4.1 #7. Introduction to Linear Algebra: Strang) For every
system of m equations with no solution, there are numbers y1, ..., ym that
multiply the equations so they add up to 0 = 1. This is called Fredholm's
Alternative:

Exactly one of these problems has a solution:

    Ax = b    OR    A^T y = 0 with y^T b = 1.

If b is not in the column space of A, it is not orthogonal to the nullspace of
A^T. Multiply the equations x1 - x2 = 1, x2 - x3 = 1 and x1 - x3 = 1 by
numbers y1, y2 and y3 chosen so that the equations add up to 0 = 1.
Solution: Let y1 = 1, y2 = 1 and y3 = -1. Then the left-hand side of
the sum of the equations is:

    (x1 - x2) + (x2 - x3) - (x1 - x3) = x1 - x2 + x2 - x3 + x3 - x1 = 0,

and the right-hand side verifies that y^T b = 1:

    1 + 1 - 1 = 1.
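As a numerical check (a small numpy sketch, not part of the original solution), the multipliers y combine the rows of A to the zero row while combining the right-hand sides to 1:

```python
import numpy as np

# The three equations x1 - x2 = 1, x2 - x3 = 1, x1 - x3 = 1 as Ax = b.
A = np.array([[1, -1, 0],
              [0, 1, -1],
              [1, 0, -1]])
b = np.array([1, 1, 1])

y = np.array([1, 1, -1])  # the multipliers found above

# A^T y = 0 while y^T b = 1, so Ax = b can have no solution.
print(A.T @ y)  # [0 0 0]
print(y @ b)    # 1
```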
Problem 16.2: (4.3 #17. Introduction to Linear Algebra: Strang) Write down
three equations for the line b = C + Dt to go through b = 7 at t = -1,
b = 7 at t = 1, and b = 21 at t = 2. Find the least squares solution
x̂ = (C, D) and draw the closest line.
Problem 16.3: (4.3 #19.) Suppose the measurements at t = -1, 1, 2 are
the errors 2, -6, 4 in the previous problem. Compute x̂ and the closest
line to these new measurements. Explain the answer: b = (2, -6, 4) is
perpendicular to _ _ _ _ so the projection is p = 0.
Solution: If b = Ax = (5, 13, 17) then x̂ = (9, 4) and e = 0 since b is in the
column space of A.
Problem 16.5: (4.3 #21.) Which of the four subspaces contains the error
vector e? Which contains p? Which contains x̂? What is the nullspace of
A?

Solution: e is in N(A^T); p is in C(A); x̂ is in C(A^T); N(A) = {0} = zero
vector only.
Problem 16.6: (4.3 #22.) Find the best line C + Dt to fit b = 4, 2, -1, 0, 0 at
times t = -2, -1, 0, 1, 2.
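The worked solution is not reproduced here, but the least squares fit can be sketched with numpy (np.linalg.lstsq computes the least squares solution of the overdetermined system):

```python
import numpy as np

# Fit the line b = C + Dt to b = 4, 2, -1, 0, 0 at t = -2, -1, 0, 1, 2.
t = np.array([-2., -1., 0., 1., 2.])
b = np.array([4., 2., -1., 0., 0.])

# Each row of A is [1, t_i]; lstsq returns the least squares solution.
A = np.column_stack([np.ones_like(t), t])
(C, D), *_ = np.linalg.lstsq(A, b, rcond=None)
print(C, D)  # the best-fit line is C + D t
```

Because the times are symmetric about 0, the normal equations decouple: C is the mean of the b's and D = (sum of t_i b_i)/(sum of t_i^2).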
Exercises on orthogonal matrices and Gram-Schmidt
    Qx = 0  ⇒  Q^T Qx = Q^T 0  ⇒  Ix = 0  ⇒  x = 0.
Thus the nullspace of Q is the zero vector, and so the columns of Q are
linearly independent. There are no non-zero linear combinations of the
columns that equal the zero vector. Thus, orthonormal vectors are auto-
matically linearly independent.
Problem 17.2: (4.4 #18) Given the vectors a, b and c listed below, use the
Gram-Schmidt process to find orthogonal vectors A, B, and C that span
the same space.
Show that {A, B, C} and {a, b, c} are bases for the space of vectors per-
pendicular to d = (1, 1, 1, 1).
    A = a = (1, -1, 0, 0).

Next we find B:

    B = b - (A^T b / A^T A) A = (0, 1, -1, 0) + (1/2)(1, -1, 0, 0) = (1/2, 1/2, -1, 0).
We know from the first problem that the elements of the set {A, B, C}
are linearly independent, and each vector is orthogonal to (1, 1, 1, 1). The
space of vectors perpendicular to d is three dimensional (since the row
space of (1, 1, 1, 1) is one-dimensional, and the number of dimensions of
the row space added to the number of dimensions of the nullspace add to
4). Therefore {A, B, C} forms a basis for the space of vectors perpendicular
to d.
Similarly, {a, b, c} is a basis for the space of vectors perpendicular to d
because the vectors are linearly independent, orthogonal to (1,1,1,1), and
because there are three of them.
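A numpy sketch of the Gram-Schmidt steps; note that c = (0, 0, 1, -1) is an assumption here, since the original list of vectors did not fully survive in this copy:

```python
import numpy as np

# Gram-Schmidt on a, b, c; c = (0, 0, 1, -1) is assumed, not from the text.
a = np.array([1., -1., 0., 0.])
b = np.array([0., 1., -1., 0.])
c = np.array([0., 0., 1., -1.])
d = np.array([1., 1., 1., 1.])

A = a
B = b - (A @ b) / (A @ A) * A                          # (1/2, 1/2, -1, 0)
C = c - (A @ c) / (A @ A) * A - (B @ c) / (B @ B) * B  # (1/3, 1/3, 1/3, -1)

# A, B, C are mutually orthogonal and each is perpendicular to d.
for v in (A, B, C):
    print(np.round(v, 3), round(v @ d, 12))
```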
Exercises on properties of determinants
Problem 18.1: (5.1 #10. Introduction to Linear Algebra: Strang) If the en-
tries in every row of a square matrix A add to zero, solve Ax = 0 to prove
that det A = 0. If those entries add to one, show that det(A - I) = 0. Does
this mean that det A = 1?
Solution: If the entries of every row of A sum to zero, then Ax = 0
when x = (1, ... , 1) since each component of Ax is the sum of the entries
in a row of A. Since A has a non-zero nullspace, it is not invertible and
detA = 0.
If the entries of every row of A sum to one, then the entries in every
row of A - I sum to zero. Hence A - I has a non-zero nullspace and
det(A - I) = 0.

If det(A - I) = 0 it is not necessarily true that det A = 1. For example,
the rows of

    A = [ 0 1 ]
        [ 1 0 ]

sum to one but det A = -1.
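A quick numpy check of the 2 by 2 example:

```python
import numpy as np

# Rows of A each sum to one, so rows of A - I sum to zero.
A = np.array([[0., 1.],
              [1., 0.]])

print(np.linalg.det(A - np.eye(2)))  # det(A - I) = 0: (1, 1) is in its nullspace
print(np.linalg.det(A))              # det(A) = -1, not 1
```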
Problem 18.2: (5.1 #18.) Use row operations and the properties of the
determinant to calculate the three by three "Vandermonde determinant":
    det [ 1 a a^2 ]
        [ 1 b b^2 ]  =  (b - a)(c - a)(c - b).
        [ 1 c c^2 ]
Solution: Subtracting row 1 from rows 2 and 3 leaves the determinant
unchanged:

    det [ 1 a a^2 ]        [ 1    a        a^2       ]
        [ 1 b b^2 ]  = det [ 0  b - a   b^2 - a^2 ]
        [ 1 c c^2 ]        [ 0  c - a   c^2 - a^2 ]

Factor (b - a) out of row 2 and (c - a) out of row 3:

    = (b - a)(c - a) det [ 1 a a^2   ]
                         [ 0 1 b + a ]
                         [ 0 1 c + a ]

Subtract row 2 from row 3:

    = (b - a)(c - a) det [ 1 a a^2   ]
                         [ 0 1 b + a ]
                         [ 0 0 c - b ]

The matrix is now upper triangular, so the determinant is the product of
the pivots:

    = (b - a)(c - a)(c - b). ✓
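A numerical spot check of the formula (values of a, b, c chosen arbitrarily):

```python
import numpy as np

# Spot check of the 3 by 3 Vandermonde determinant formula.
a, b, c = 2.0, 3.0, 5.0
V = np.array([[1, a, a**2],
              [1, b, b**2],
              [1, c, c**2]])

lhs = np.linalg.det(V)
rhs = (b - a) * (c - a) * (c - b)
print(lhs, rhs)  # both equal (3 - 2)(5 - 2)(5 - 3) = 6
```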
Exercises on determinant formulas and cofactors
to A. Expanding in cofactors along the first row of A:

    det A = 0 · (...) - 0 · (...) + 0 · (...) - 1 · det [ 1 0 0 ]
                                                        [ 0 1 0 ]
                                                        [ 0 0 1 ]

          = -1.
This is quicker than row exchange:
    det A = det [ 0 0 0 1 ]
                [ 1 0 0 0 ]
                [ 0 1 0 0 ]
                [ 0 0 1 0 ]

          = -det [ 1 0 0 0 ]    (exchange rows 1 and 2)
                 [ 0 0 0 1 ]
                 [ 0 1 0 0 ]
                 [ 0 0 1 0 ]

          = det [ 1 0 0 0 ]     (exchange rows 2 and 3)
                [ 0 1 0 0 ]
                [ 0 0 0 1 ]
                [ 0 0 1 0 ]

          = -det I = -1.        (exchange rows 3 and 4)
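A quick numpy confirmation:

```python
import numpy as np

# The permutation matrix from the cofactor expansion above.
A = np.array([[0., 0., 0., 1.],
              [1., 0., 0., 0.],
              [0., 1., 0., 0.],
              [0., 0., 1., 0.]])

print(np.linalg.det(A))  # det(A) = -1, agreeing with both computations
```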
Problem 19.2: (5.2 #33. Introduction to Linear Algebra: Strang) The sym-
metric Pascal matrices have determinant 1. If I subtract 1 from the n, n
entry, why does the determinant become zero? (Use rule 3 or cofactors.)
    det [ 1 1  1  1 ]                 det [ 1 1  1  1 ]
        [ 1 2  3  4 ]  = 1 (known)        [ 1 2  3  4 ]  = 0 (to explain).
        [ 1 3  6 10 ]                     [ 1 3  6 10 ]
        [ 1 4 10 20 ]                     [ 1 4 10 19 ]
Solution: The difference in the n, n entry (in the example, the difference
between 19 and 20) multiplies its cofactor, the determinant of the n-1 by
n-1 symmetric Pascal matrix. In our example this matrix is

    [ 1 1 1 ]
    [ 1 2 3 ]
    [ 1 3 6 ]

We're told that this matrix has determinant 1. Since the n, n entry multi-
plies its cofactor positively, the overall determinant drops by 1 to become
0.
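The claim can be checked numerically; the pascal helper below builds the symmetric Pascal matrix from binomial coefficients (our own construction, not from the text):

```python
import numpy as np
from math import comb

# Symmetric Pascal matrix: entry (i, j) is C(i + j, i).
def pascal(n):
    return np.array([[comb(i + j, i) for j in range(n)] for i in range(n)], float)

P = pascal(4)
Q = P.copy()
Q[-1, -1] -= 1  # subtract 1 from the n, n entry

print(np.linalg.det(P))  # det(P) = 1
print(np.linalg.det(Q))  # det(Q) = 0: it dropped by det(pascal(3)) = 1
```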
Exercises on Cramer's rule, inverse matrix, and volume
The matrix is

    A = [ 1 1 4 ]
        [ 1 2 2 ] .
        [ 1 2 5 ]

Find its cofactor matrix C and multiply AC^T to find det(A):

    C = _ _ _   and   AC^T = _ _ _

Solution:

    C = [  6 -3  0 ]
        [  3  1 -1 ]
        [ -6  2  1 ]

and

    AC^T = [ 1 1 4 ] [  6  3 -6 ]   [ 3 0 0 ]
           [ 1 2 2 ] [ -3  1  2 ] = [ 0 3 0 ] = 3I.
           [ 1 2 5 ] [  0 -1  1 ]   [ 0 0 3 ]

Since AC^T = det(A)I, we have det(A) = 3. If 4 is changed to 100, det(A)
is unchanged because the cofactor of that entry is 0, and thus its value does
not contribute to the determinant.
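The cofactor computation can be verified with a short numpy sketch (the brute-force cofactor loop here is our own, not from the text):

```python
import numpy as np

A = np.array([[1., 1., 4.],
              [1., 2., 2.],
              [1., 2., 5.]])

# Brute-force cofactors: C[i, j] = (-1)^(i+j) times the (i, j) minor.
n = A.shape[0]
C = np.zeros_like(A)
for i in range(n):
    for j in range(n):
        minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
        C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)

print(np.round(C))        # the cofactor matrix above
print(np.round(A @ C.T))  # det(A) I = 3I
```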
The Jacobian of spherical coordinates x = ρ sin φ cos θ, y = ρ sin φ sin θ,
z = ρ cos φ, expanded in cofactors along the third row:

    J = det [ sin φ cos θ   ρ cos φ cos θ   -ρ sin φ sin θ ]
            [ sin φ sin θ   ρ cos φ sin θ    ρ sin φ cos θ ]
            [ cos φ        -ρ sin φ          0             ]

      = cos φ det [ ρ cos φ cos θ  -ρ sin φ sin θ ]
                  [ ρ cos φ sin θ   ρ sin φ cos θ ]
        - (-ρ sin φ) det [ sin φ cos θ  -ρ sin φ sin θ ]
                         [ sin φ sin θ   ρ sin φ cos θ ]  + 0

      = cos φ (ρ^2 cos φ sin φ cos^2 θ + ρ^2 cos φ sin φ sin^2 θ)
        + ρ sin φ (ρ sin^2 φ cos^2 θ + ρ sin^2 φ sin^2 θ)

      = cos φ (ρ^2 cos φ sin φ (cos^2 θ + sin^2 θ)) + ρ sin φ (ρ sin^2 φ (cos^2 θ + sin^2 θ))

      = ρ^2 cos^2 φ sin φ + ρ^2 sin^3 φ

      = ρ^2 sin φ (cos^2 φ + sin^2 φ)

    J = ρ^2 sin φ.
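As a numerical sanity check of J = ρ^2 sin φ (not part of the original solution), the Jacobian can be approximated by central differences:

```python
import numpy as np

# Finite-difference check that the Jacobian determinant is rho^2 sin(phi).
def spherical(p):
    rho, phi, theta = p
    return np.array([rho * np.sin(phi) * np.cos(theta),
                     rho * np.sin(phi) * np.sin(theta),
                     rho * np.cos(phi)])

def jacobian_det(p, h=1e-6):
    # Column j holds the central-difference partials with respect to p[j].
    J = np.column_stack([(spherical(p + h * e) - spherical(p - h * e)) / (2 * h)
                         for e in np.eye(3)])
    return np.linalg.det(J)

p = np.array([2.0, 0.7, 1.1])  # arbitrary (rho, phi, theta)
print(jacobian_det(p), p[0]**2 * np.sin(p[1]))
```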
Exercises on eigenvalues and eigenvectors
a) The rank of B
b) The determinant of B^T B
c) The eigenvalues of B^T B
d) The eigenvalues of (B^2 + I)^{-1}
Solution: The eigenvalues of B^T B are not determined by the eigenvalues
of B alone. For example:

    If B = [ 0 1 ]  then  B^T B = [ 0 0 ]  with eigenvalues 0 and 5.
           [ 0 2 ]                [ 0 5 ]

    If B = [ 0 0 ]  then  B^T B = [ 0 0 ]  with eigenvalues 0 and 4.
           [ 0 2 ]                [ 0 4 ]

Both choices of B have eigenvalues 0 and 2.
    A = [ 1 2 3 ]      B = [ 0 0 1 ]          C = [ 2 2 2 ]
        [ 0 4 5 ] ,        [ 0 2 0 ]   and        [ 2 2 2 ] .
        [ 0 0 6 ]          [ 3 0 0 ]              [ 2 2 2 ]
Problem 22.1: (6.2 #6. Introduction to Linear Algebra: Strang) Describe all
matrices S that diagonalize this matrix A (find all eigenvectors):
    A = [ 4 0 ]
        [ 1 2 ] .

Then describe all matrices that diagonalize A^{-1}.
Solution: To find the eigenvectors of A, we first find the eigenvalues:

    det [ 4 - λ    0   ]  = 0  ⇒  (4 - λ)(2 - λ) = 0.
        [   1    2 - λ ]

For λ1 = 4, (A - 4I)x = 0 gives x1 - 2x2 = 0, so any multiple of (2, 1) is
an eigenvector for λ1. For λ2 = 2, (A - 2I)x = 0 gives x1 = 0, and
thus any multiple of (0, 1) is an eigenvector for λ2. Therefore the columns
of the matrices S that diagonalize A are nonzero multiples of (2, 1) and
(0, 1). They can appear in either order.

Finally, because A^{-1} = S Λ^{-1} S^{-1}, the same matrices S will diagonalize
A^{-1}.
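To confirm, a small numpy check with one choice of S:

```python
import numpy as np

A = np.array([[4., 0.],
              [1., 2.]])

# One valid choice of S: eigenvector columns (2, 1) and (0, 1).
S = np.array([[2., 0.],
              [1., 1.]])

print(np.linalg.inv(S) @ A @ S)                 # diag(4, 2)
print(np.linalg.inv(S) @ np.linalg.inv(A) @ S)  # diag(1/4, 1/2)
```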
    A = [ .6 .9 ]
        [ .4 .1 ] .

What is the limit of A^k as k → ∞? What is the limit matrix of S Λ^k S^{-1}? In
the columns of this matrix you see the _ _ _ _.
Solution: Since each of the columns of A sums to one, A is a Markov
matrix and definitely has eigenvalue λ1 = 1. The trace of A is .7, so the
other eigenvalue is λ2 = .7 - 1 = -.3. To find S we need to find the
corresponding eigenvectors: x1 = (9, 4) for λ1 = 1 and x2 = (1, -1) for
λ2 = -.3. So

    S Λ^k S^{-1} = [ 9  1 ] [ 1     0     ] (1/13) [ 1  1 ]
                   [ 4 -1 ] [ 0  (-.3)^k ]         [ 4 -9 ]

               →   (1/13) [ 9  1 ] [ 1 0 ] [ 1  1 ]  =  (1/13) [ 9 9 ]
                          [ 4 -1 ] [ 0 0 ] [ 4 -9 ]            [ 4 4 ] .
In the columns of this matrix you see the steady state vector.
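A numerical check of the limit (numpy, not part of the original solution):

```python
import numpy as np

A = np.array([[.6, .9],
              [.4, .1]])

# (-.3)^k dies out, so high powers of A approach (1/13) [[9, 9], [4, 4]].
Ak = np.linalg.matrix_power(A, 50)
steady = np.array([[9., 9.],
                   [4., 4.]]) / 13

print(Ak)
print(steady)
```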
Exercises on differential equations and e^{At}
Problem 23.1: (6.3 #14.a Introduction to Linear Algebra: Strang) The ma-
trix in this question is skew-symmetric (A^T = -A):

    du/dt = [  0   c  -b ]
            [ -c   0   a ] u    or    u1' = c u2 - b u3
            [  b  -a   0 ]            u2' = a u3 - c u1
                                      u3' = b u1 - a u2.
    ||u(t)||^2 = u1^2 + u2^2 + u3^2.
What does this tell you about the rate of change of the length of u? What
does this tell you about the range of values of u ( t)?
Solution: The derivative is

    d/dt ||u(t)||^2 = 2 u1 u1' + 2 u2 u2' + 2 u3 u3'
                    = 2 u1 (c u2 - b u3) + 2 u2 (a u3 - c u1) + 2 u3 (b u1 - a u2) = 0,

since every term cancels. The rate of change of the length of u is zero.
This means ||u(t)||^2 stays equal to ||u(0)||^2. Because u(t) never changes
length, it always stays on the sphere of radius ||u(0)||.
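That the length of u is conserved follows from u^T A u = 0 for any skew-symmetric A; a quick numpy check with arbitrary values of a, b, c and u:

```python
import numpy as np

# For skew-symmetric A, the quadratic form u^T A u is always zero,
# so d/dt ||u||^2 = 2 u . u' = 2 u^T A u = 0. Arbitrary a, b, c, u:
a, b, c = 1.3, -0.7, 2.1
A = np.array([[0., c, -b],
              [-c, 0., a],
              [b, -a, 0.]])
u = np.array([0.5, -1.0, 2.0])

print(u @ A @ u)  # zero up to rounding
```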
Check: e^{At} (whose entries involve e^t and e^{3t}) equals I when t = 0. ✓
Exercises on symmetric matrices and positive definiteness
There is a hidden assumption in this proof which is not justified. Find the
flaw by testing each step on the 90 ° rotation matrix:
Solution: Testing the steps on the rotation matrix with its complex
eigenvector x = (i, 1), each quantity in the chain

    λ x^T x = x^T A x = ... = λ x^T x ✓

comes out to 0, because

    x^T x = [ i  1 ] [ i ] = i^2 + 1 = 0.
                     [ 1 ]
Note that x^T x = 0. Since the next and last step involves dividing by this
term, the hidden assumption must be that x^T x ≠ 0. If x = (a, b) then

    x^T x = [ a  b ] [ a ] = a^2 + b^2.
                     [ b ]
The "proof" assumes that the squares of the components of the eigenvec-
tor cannot sum to zero: a^2 + b^2 ≠ 0. This may be false if the components
are complex.
Solution:
a) The positive definite symmetric matrices A do not form a group. To
show this, we provide a counterexample in the form of two positive
definite symmetric matrices A and B whose product is not a positive
definite symmetric matrix.
If

    A = [ 1 1 ]   and   B = [ 1 2 ]   then   AB = [ 3  7 ]
        [ 1 2 ]             [ 2 5 ]               [ 5 12 ]

is not symmetric.
b) The orthogonal matrices Q form a group. If A and B are orthogonal
matrices, then:

    (AB)^T (AB) = B^T A^T A B = B^T B = I,    so AB is orthogonal;
    (A^{-1})^T A^{-1} = (A^T)^T A^T = A A^T = I,    so A^{-1} is orthogonal.

c) The exponentials e^{tA} of a fixed matrix A form a group. For the elements
e^{pA} and e^{qA}:

    (e^{pA})^{-1} = e^{-pA}    is of the form e^{tA};
    e^{pA} e^{qA} = e^{(p+q)A}    is of the form e^{tA}.
If x is an eigenvector of AB with eigenvalue λ, and A is symmetric:

    ABx = λx
    (ABx)^T Bx = (λx)^T Bx
    (Bx)^T A^T Bx = λ x^T Bx
    (Bx)^T A (Bx) = λ (x^T B x).

When A and B are positive definite and x ≠ 0, both quadratic forms are
positive, so every eigenvalue λ of AB is positive.
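The chain above makes each eigenvalue of AB a ratio of two positive quadratic forms when A and B are positive definite. A quick numpy check with one (hypothetical) choice of A and B:

```python
import numpy as np

# Two positive definite symmetric matrices (hypothetical examples).
A = np.array([[2., 1.],
              [1., 2.]])
B = np.array([[3., 1.],
              [1., 1.]])

# The eigenvalues of AB are real and positive, although AB is not symmetric.
eigs = np.linalg.eigvals(A @ B)
print(eigs)
```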
Problem 27.2: Find the quadratic form associated with the matrix

    [ 1 5 ]
    [ 7 9 ] .

Is this function f(x, y) always positive, always negative, or sometimes
positive and sometimes negative?
Solution: To find the quadratic form, compute x^T A x:

    f(x, y) = [ x  y ] [ 1 5 ] [ x ]
                       [ 7 9 ] [ y ]
            = x(x + 5y) + y(7x + 9y)
            = x^2 + 12xy + 9y^2.

Completing the square, f(x, y) = (x + 6y)^2 - 27y^2, so f(1, 0) = 1 > 0 while
f(-6, 1) = -27 < 0: the form is sometimes positive and sometimes negative.
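A small numpy check that the form takes both signs:

```python
import numpy as np

M = np.array([[1., 5.],
              [7., 9.]])

def f(x, y):
    v = np.array([x, y])
    return v @ M @ v  # x^2 + 12xy + 9y^2

print(f(1, 0))   # 1.0  (positive)
print(f(-6, 1))  # -27.0 (negative), so the form is indefinite
```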