Proofs of Norms


Chapter 4

Norms and Matrix Conditioning

4.1 Matrix Norms


Let us review a few facts about vector norms that we will need.
Theorem 4.1.1 (Reverse Triangle Inequality). Let $\| \cdot \| \colon \mathbb{C}^n \to \mathbb{R}$ be a vector norm. Then
\[
\big| \|x\| - \|y\| \big| \le \|x - y\| ,
\]
for all $x, y \in \mathbb{C}^n$.

Proof. By the usual triangle inequality,
\[
\|x\| = \|x - y + y\| \le \|x - y\| + \|y\| .
\]
Thus
\[
\|x\| - \|y\| \le \|x - y\| . \tag{4.1.1}
\]
Reversing the roles of $x$ and $y$,
\[
\|y\| - \|x\| \le \|y - x\| = \|x - y\| .
\]
The last inequality implies
\[
-\|x - y\| \le \|x\| - \|y\| . \tag{4.1.2}
\]
Estimates (4.1.1) and (4.1.2) combine to give
\[
-\|x - y\| \le \|x\| - \|y\| \le \|x - y\| ,
\]
which is equivalent to
\[
\big| \|x\| - \|y\| \big| \le \|x - y\| .
\]

Theorem 4.1.2. Every vector norm $\| \cdot \| \colon \mathbb{C}^n \to \mathbb{R}$ is a uniformly continuous function with respect to the metric $d(x, y) = \|x - y\|_\infty$.

Proof. Let $\varepsilon > 0$ be given. Suppose $x, h = [h_i] \in \mathbb{C}^n$ are arbitrary. Then, using the reverse triangle inequality and the triangle inequality,
\[
\big| \|x + h\| - \|x\| \big| \le \|h\|
= \Big\| \sum_{i=1}^n h_i e_i \Big\|
\le \sum_{i=1}^n |h_i| \, \|e_i\|
\le \max_{1 \le i \le n} |h_i| \sum_{i=1}^n \|e_i\|
= M \|h\|_\infty ,
\]
where $e_j$ is the $j$th canonical basis vector and
\[
M := \sum_{i=1}^n \|e_i\| .
\]
Set $\delta = \frac{\varepsilon}{M}$. Then, if $\|h\|_\infty < \delta$,
\[
\big| \|x + h\| - \|x\| \big| < \varepsilon ,
\]
which proves that $\| \cdot \| \colon \mathbb{C}^n \to \mathbb{R}$ is continuous at $x$. Since $\delta$ is independent of $x$, it is, in fact, uniformly continuous.

Theorem 4.1.3. Suppose that $\| \cdot \| \colon \mathbb{C}^n \to \mathbb{R}$ is a vector norm. There exist positive constants $0 < M_1 \le M_2$ such that
\[
M_1 \|x\|_\infty \le \|x\| \le M_2 \|x\|_\infty , \quad \forall\, x \in \mathbb{C}^n .
\]
In this case, we say that $\| \cdot \|$ and $\| \cdot \|_\infty$ are equivalent.

Proof. Set
\[
S_\infty^{n-1} := \{ x \in \mathbb{C}^n \mid \|x\|_\infty = 1 \} .
\]
$S_\infty^{n-1}$ is closed and bounded and, therefore, compact. Since $\| \cdot \| \colon S_\infty^{n-1} \to \mathbb{R}$ is a continuous function defined on a compact set, there exist points $x_i \in S_\infty^{n-1}$, $i = 1, 2$, such that (the minimum is achieved)
\[
M_1 := \inf_{x \in S_\infty^{n-1}} \|x\| = \|x_1\| > 0 ,
\]
and (the maximum is achieved)
\[
M_2 := \sup_{x \in S_\infty^{n-1}} \|x\| = \|x_2\| > 0 ,
\]
and, clearly, $M_1 \le M_2$. For any arbitrary $y \in \mathbb{C}^n_\star$,
\[
\frac{y}{\|y\|_\infty} \in S_\infty^{n-1} .
\]
It follows that
\[
M_1 \le \Big\| \frac{y}{\|y\|_\infty} \Big\| \le M_2 ,
\]
or, equivalently,
\[
M_1 \|y\|_\infty \le \|y\| \le M_2 \|y\|_\infty .
\]
Since the result is clearly true for $y = 0$, the proof is complete.

Exercise 4.1.4. Since any two vector norms $\| \cdot \|_a, \| \cdot \|_b \colon \mathbb{C}^n \to \mathbb{R}$ are equivalent to the infinity norm $\| \cdot \|_\infty \colon \mathbb{C}^n \to \mathbb{R}$, they are, in fact, equivalent to each other. In other words, given any two vector norms $\| \cdot \|_a, \| \cdot \|_b$, there are positive constants $0 < C_1 \le C_2$ such that
\[
C_1 \|x\|_a \le \|x\|_b \le C_2 \|x\|_a , \quad \forall\, x \in \mathbb{C}^n .
\]
The reader should prove this.
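Though no substitute for the proof, a short NumPy sketch can make the equivalence concrete. For the pair $\| \cdot \|_1$, $\| \cdot \|_\infty$ the sharp constants are $C_1 = 1$ and $C_2 = n$, so the sampled ratios below should stay inside $[1, n]$. (The script and its parameters are illustrative only, not part of the formal development.)

```python
# Sample random complex vectors and observe that ||x||_1 / ||x||_inf
# stays within [1, n], the equivalence constants for this pair of norms.
import numpy as np

rng = np.random.default_rng(0)
n = 5
ratios = []
for _ in range(10_000):
    x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    ratios.append(np.linalg.norm(x, 1) / np.linalg.norm(x, np.inf))

print(min(ratios), max(ratios))  # both land inside [1, 5] for n = 5
```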

Recall the following definition.

Definition 4.1.5. A function $\| \cdot \| \colon \mathbb{C}^{m \times n} \to \mathbb{R}$ is called a matrix norm iff it satisfies

1. $\|A\| = 0$ implies that $A = O \in \mathbb{C}^{m \times n}$, the zero matrix.

2. $\|\alpha A\| = |\alpha| \cdot \|A\|$, for all $\alpha \in \mathbb{C}$ and $A \in \mathbb{C}^{m \times n}$.

3. $\|A + B\| \le \|A\| + \|B\|$, for all $A, B \in \mathbb{C}^{m \times n}$.

Definition 4.1.6. Suppose that $\| \cdot \| \colon \mathbb{C}^{m \times n} \to \mathbb{R}$ is a matrix norm. Then $\| \cdot \|$ is called consistent with respect to the norms $\| \cdot \|_{\mathbb{C}^m} \colon \mathbb{C}^m \to \mathbb{R}$ and $\| \cdot \|_{\mathbb{C}^n} \colon \mathbb{C}^n \to \mathbb{R}$ iff
\[
\|Ax\|_{\mathbb{C}^m} \le \|A\| \, \|x\|_{\mathbb{C}^n} ,
\]
for all $A \in \mathbb{C}^{m \times n}$ and $x \in \mathbb{C}^n$.

Definition 4.1.7. Suppose that $\| \cdot \| \colon \mathbb{C}^{n \times n} \to \mathbb{R}$ is a matrix norm. Then $\| \cdot \|$ is called sub-multiplicative iff, for all $A, B \in \mathbb{C}^{n \times n}$,
\[
\|AB\| \le \|A\| \cdot \|B\| .
\]

Example 4.1.8. For any $A = [a_{i,j}] \in \mathbb{C}^{m \times n}$ define the matrix max norm via
\[
\|A\|_{\max} := \max_{\substack{1 \le i \le m \\ 1 \le j \le n}} |a_{i,j}| ,
\]
and the Frobenius norm via
\[
\|A\|_F := \sqrt{ \sum_{i=1}^m \sum_{j=1}^n |a_{i,j}|^2 } .
\]
These functions are bona fide norms, as the reader should confirm. The matrix max norm is not sub-multiplicative; the Frobenius norm is sub-multiplicative. It can also be shown that the Frobenius norm is consistent with respect to the $\ell^2(\mathbb{C}^m)$ and $\ell^2(\mathbb{C}^n)$ norms. The reader should prove this.
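As a hint toward both claims, here is a minimal NumPy sketch (illustrative only): the all-ones $2 \times 2$ matrix already defeats sub-multiplicativity of the max norm, while a random trial is consistent with the asserted consistency bound $\|Bx\|_2 \le \|B\|_F \|x\|_2$.

```python
# Counterexample for the max norm, plus a spot check of Frobenius consistency.
import numpy as np

A = np.ones((2, 2))
max_norm = lambda M: np.max(np.abs(M))
# ||A A||_max = 2 exceeds ||A||_max * ||A||_max = 1: not sub-multiplicative.
print(max_norm(A @ A), max_norm(A) * max_norm(A))

rng = np.random.default_rng(1)
B = rng.standard_normal((3, 4))
x = rng.standard_normal(4)
# Consistency of the Frobenius norm with the Euclidean vector norms.
print(np.linalg.norm(B @ x) <= np.linalg.norm(B, 'fro') * np.linalg.norm(x))
```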
Definition 4.1.9. Suppose that $\| \cdot \|_{\mathbb{C}^m} \colon \mathbb{C}^m \to \mathbb{R}$ and $\| \cdot \|_{\mathbb{C}^n} \colon \mathbb{C}^n \to \mathbb{R}$ are vector norms. The function $\| \cdot \| \colon \mathbb{C}^{m \times n} \to \mathbb{R}$ defined by
\[
\|A\| := \sup_{x \in \mathbb{C}^n_\star} \frac{\|Ax\|_{\mathbb{C}^m}}{\|x\|_{\mathbb{C}^n}} , \quad \forall\, A \in \mathbb{C}^{m \times n} ,
\]
is called the induced matrix norm with respect to $\| \cdot \|_{\mathbb{C}^m}$ and $\| \cdot \|_{\mathbb{C}^n}$.

Whenever $n = m$, it is understood that the vector norms are the same. We say that $\| \cdot \| \colon \mathbb{C}^{n \times n} \to \mathbb{R}$ is the induced matrix norm with respect to the vector norm $\| \cdot \| \colon \mathbb{C}^n \to \mathbb{R}$ iff
\[
\|A\| := \sup_{x \in \mathbb{C}^n_\star} \frac{\|Ax\|}{\|x\|} , \quad \forall\, A \in \mathbb{C}^{n \times n} ,
\]
dropping the superfluous subscripts on the vector norms.


Remark 4.1.10. We also use the terms operator norm and subordinate matrix
norm interchangeably with the term induced matrix norm.
Example 4.1.11. For any $p \in [1, \infty]$ we define the induced matrix $p$-norm via
\[
\|A\|_p := \sup_{x \in \mathbb{C}^n_\star} \frac{\|Ax\|_{\ell^p(\mathbb{C}^m)}}{\|x\|_{\ell^p(\mathbb{C}^n)}} , \quad A \in \mathbb{C}^{m \times n} .
\]
We will use these frequently in the book.



Theorem 4.1.12. Consider the induced matrix 2-norm $\| \cdot \|_2 \colon \mathbb{C}^{m \times n} \to \mathbb{R}$, defined via
\[
\|A\|_2 = \sup_{x \in \mathbb{C}^n_\star} \frac{\|Ax\|_{\ell^2(\mathbb{C}^m)}}{\|x\|_{\ell^2(\mathbb{C}^n)}} ,
\]
where, recall, $\mathbb{C}^n_\star := \mathbb{C}^n \setminus \{0\}$. Then
\[
\|A\|_2 = \max_{1 \le i \le n} \sqrt{\lambda_i} ,
\]
where the $\lambda_i$ are the eigenvalues of the matrix $B = A^H A$.


Proof. The eigenvalues of $B = A^H A$ are real and non-negative. To see this, let $(\lambda, w)$ be an eigen-pair. Then
\[
0 \le \|Aw\|_{\ell^2(\mathbb{C}^m)}^2 = w^H A^H A w = w^H (\lambda w) = \lambda w^H w = \lambda \|w\|_{\ell^2(\mathbb{C}^n)}^2 .
\]
Hence
\[
\lambda = \frac{\|Aw\|_{\ell^2(\mathbb{C}^m)}^2}{\|w\|_{\ell^2(\mathbb{C}^n)}^2} \ge 0 .
\]
As $B = A^H A$ is Hermitian, it has an orthonormal basis of eigenvectors
\[
S = \{ w_1, \ldots, w_n \} \subset \mathbb{C}^n .
\]
Let $x \in \mathbb{C}^n_\star$ be arbitrary. There exist unique constants $c_1, \ldots, c_n \in \mathbb{C}$, not all zero, such that $x = \sum_{i=1}^n c_i w_i$. Then
\[
\frac{\|Ax\|_{\ell^2(\mathbb{C}^m)}^2}{\|x\|_{\ell^2(\mathbb{C}^n)}^2}
= \frac{x^H A^H A x}{x^H x}
= \frac{\sum_{i=1}^n \lambda_i |c_i|^2}{\sum_{i=1}^n |c_i|^2}
\le \lambda_{\max} \frac{\sum_{i=1}^n |c_i|^2}{\sum_{i=1}^n |c_i|^2}
= \lambda_{\max} ,
\]
where $\max_{1 \le i \le n} \lambda_i =: \lambda_{\max}$. Since the far right hand side is independent of $x$,
\[
\|A\|_2 \le \sqrt{\lambda_{\max}} = \max_{1 \le i \le n} \sqrt{\lambda_i} .
\]
Now let $w_{\max} \in \mathbb{C}^n_\star$ be an eigenvector associated to $\lambda_{\max}$. Then note
\[
\frac{\|A w_{\max}\|_{\ell^2(\mathbb{C}^m)}}{\|w_{\max}\|_{\ell^2(\mathbb{C}^n)}}
= \sqrt{ \frac{w_{\max}^H A^H A w_{\max}}{w_{\max}^H w_{\max}} }
= \sqrt{ \lambda_{\max} \frac{w_{\max}^H w_{\max}}{w_{\max}^H w_{\max}} }
= \sqrt{\lambda_{\max}} .
\]
Thus, in fact, the upper bound is achieved, and we have
\[
\|A\|_2 = \max_{1 \le i \le n} \sqrt{\lambda_i} .
\]

Theorem 4.1.13. Suppose that $\| \cdot \| \colon \mathbb{C}^{m \times n} \to \mathbb{R}$ is the induced norm with respect to the vector norms $\| \cdot \|_{\mathbb{C}^m}$ and $\| \cdot \|_{\mathbb{C}^n}$. Then $\| \cdot \|$ is a bona fide matrix norm. It is consistent with respect to the vector norms $\| \cdot \|_{\mathbb{C}^m}$ and $\| \cdot \|_{\mathbb{C}^n}$ used to define it. If $m = n$, the induced norm is sub-multiplicative. Furthermore,
\[
\|A\| \le C \sqrt{\rho(A^H A)} < \infty ,
\]
for some $C > 0$, for all $A \in \mathbb{C}^{m \times n}$, where, recall,
\[
\rho(B) := \max_{\lambda \in \sigma(B)} |\lambda| , \quad B \in \mathbb{C}^{p \times p} ,
\]
is the spectral radius. Finally,
\[
\|A\| = \sup_{x \in S_{\mathbb{C}^n}^{n-1}} \|Ax\|_{\mathbb{C}^m} ,
\]
where
\[
S_{\mathbb{C}^n}^{n-1} = \{ x \in \mathbb{C}^n \mid \|x\|_{\mathbb{C}^n} = 1 \} .
\]

Proof. (Positive Definiteness:) Suppose that $\|A\| = 0$. Then $\|Ax\|_{\mathbb{C}^m} = 0$, for all $x \in \mathbb{C}^n$. This implies that $Ax = 0$, for all $x \in \mathbb{C}^n$. Thus $A e_i = 0$, for all $i = 1, \ldots, n$. This implies that the columns of $A$ are all zero, which implies that $A$ is the zero matrix.

(Non-negative Homogeneity:) Let $\alpha \in \mathbb{C}$ and $A \in \mathbb{C}^{m \times n}$ be arbitrary. Then
\[
\|\alpha A\| = \sup_{x \in \mathbb{C}^n_\star} \frac{\|\alpha A x\|_{\mathbb{C}^m}}{\|x\|_{\mathbb{C}^n}}
= \sup_{x \in \mathbb{C}^n_\star} |\alpha| \frac{\|Ax\|_{\mathbb{C}^m}}{\|x\|_{\mathbb{C}^n}}
= |\alpha| \sup_{x \in \mathbb{C}^n_\star} \frac{\|Ax\|_{\mathbb{C}^m}}{\|x\|_{\mathbb{C}^n}}
= |\alpha| \, \|A\| ,
\]
where we have used a familiar property of the supremum.

(Consistency:) It helps to prove the consistency next. For any $x \in \mathbb{C}^n_\star$ and $A \in \mathbb{C}^{m \times n}$,
\[
\|A\| = \sup_{w \in \mathbb{C}^n_\star} \frac{\|Aw\|_{\mathbb{C}^m}}{\|w\|_{\mathbb{C}^n}} \ge \frac{\|Ax\|_{\mathbb{C}^m}}{\|x\|_{\mathbb{C}^n}} ,
\]
since the supremum (i.e., the least upper bound) is an upper bound. Hence,
\[
\|Ax\|_{\mathbb{C}^m} \le \|A\| \cdot \|x\|_{\mathbb{C}^n} .
\]
Of course the last inequality holds trivially for $x = 0$.



(Triangle Inequality:) Let $A, B \in \mathbb{C}^{m \times n}$ be arbitrary. Then, using the triangle inequality for vector norms and consistency,
\[
\frac{\|(A + B)x\|_{\mathbb{C}^m}}{\|x\|_{\mathbb{C}^n}}
\le \frac{\|Ax\|_{\mathbb{C}^m} + \|Bx\|_{\mathbb{C}^m}}{\|x\|_{\mathbb{C}^n}}
= \frac{\|Ax\|_{\mathbb{C}^m}}{\|x\|_{\mathbb{C}^n}} + \frac{\|Bx\|_{\mathbb{C}^m}}{\|x\|_{\mathbb{C}^n}}
\le \frac{\|A\| \, \|x\|_{\mathbb{C}^n}}{\|x\|_{\mathbb{C}^n}} + \frac{\|B\| \, \|x\|_{\mathbb{C}^n}}{\|x\|_{\mathbb{C}^n}}
= \|A\| + \|B\| ,
\]
for all $x \in \mathbb{C}^n_\star$. Hence
\[
\|A + B\| = \sup_{x \in \mathbb{C}^n_\star} \frac{\|(A + B)x\|_{\mathbb{C}^m}}{\|x\|_{\mathbb{C}^n}} \le \|A\| + \|B\| .
\]

(Sub-Multiplicativity:) Suppose that $A, B \in \mathbb{C}^{n \times n}$ are arbitrary. Then, using consistency twice,
\[
\frac{\|ABx\|_{\mathbb{C}^n}}{\|x\|_{\mathbb{C}^n}}
\le \frac{\|A\| \cdot \|Bx\|_{\mathbb{C}^n}}{\|x\|_{\mathbb{C}^n}}
\le \frac{\|A\| \cdot \|B\| \, \|x\|_{\mathbb{C}^n}}{\|x\|_{\mathbb{C}^n}}
= \|A\| \cdot \|B\| ,
\]
for all $x \in \mathbb{C}^n_\star$. Hence
\[
\|AB\| = \sup_{x \in \mathbb{C}^n_\star} \frac{\|ABx\|_{\mathbb{C}^n}}{\|x\|_{\mathbb{C}^n}} \le \|A\| \cdot \|B\| .
\]

(Finiteness:) By equivalence of vector norms, there are constants $0 < M_1 \le M_2$ and $0 < C_1 \le C_2$ such that
\[
M_1 \|x\|_{\ell^2(\mathbb{C}^m)} \le \|x\|_{\mathbb{C}^m} \le M_2 \|x\|_{\ell^2(\mathbb{C}^m)} , \quad \forall\, x \in \mathbb{C}^m ,
\]
and, similarly,
\[
C_1 \|x\|_{\ell^2(\mathbb{C}^n)} \le \|x\|_{\mathbb{C}^n} \le C_2 \|x\|_{\ell^2(\mathbb{C}^n)} , \quad \forall\, x \in \mathbb{C}^n .
\]
Therefore, for any $A \in \mathbb{C}^{m \times n}$ and any $x \in \mathbb{C}^n_\star$,
\[
\frac{\|Ax\|_{\mathbb{C}^m}}{\|x\|_{\mathbb{C}^n}} \le \frac{M_2}{C_1} \frac{\|Ax\|_{\ell^2(\mathbb{C}^m)}}{\|x\|_{\ell^2(\mathbb{C}^n)}} .
\]
Therefore,
\[
\|A\| \le \frac{M_2}{C_1} \|A\|_2 ,
\]
and the previous theorem, which shows $\|A\|_2 = \sqrt{\rho(A^H A)}$, gives the desired result.
(Equivalent Definition:) Using non-negative homogeneity,
\[
\|A\| = \sup_{x \in \mathbb{C}^n_\star} \frac{\|Ax\|_{\mathbb{C}^m}}{\|x\|_{\mathbb{C}^n}}
= \sup_{x \in \mathbb{C}^n_\star} \Big\| \frac{1}{\|x\|_{\mathbb{C}^n}} Ax \Big\|_{\mathbb{C}^m}
= \sup_{x \in \mathbb{C}^n_\star} \Big\| A \frac{x}{\|x\|_{\mathbb{C}^n}} \Big\|_{\mathbb{C}^m}
= \sup_{w \in S_{\mathbb{C}^n}^{n-1}} \|Aw\|_{\mathbb{C}^m} .
\]

Theorem 4.1.14. Suppose that $\| \cdot \| \colon \mathbb{C}^{m \times n} \to \mathbb{R}$ is the induced norm with respect to the vector norms $\| \cdot \|_{\mathbb{C}^m}$ and $\| \cdot \|_{\mathbb{C}^n}$. For any $A \in \mathbb{C}^{m \times n}$, there is an element $x \in S_{\mathbb{C}^n}^{n-1}$ such that
\[
\|A\| = \|Ax\|_{\mathbb{C}^m} .
\]
Proof. The function $\|A(\cdot)\|_{\mathbb{C}^m} \colon \mathbb{C}^n \to \mathbb{R}$ is uniformly continuous. The proof of this fact is left to the reader. Granting it, $\|A(\cdot)\|_{\mathbb{C}^m} \colon S_{\mathbb{C}^n}^{n-1} \to \mathbb{R}$ is a uniformly continuous function defined on a compact set. Therefore, there are vectors $x_1, x_2 \in S_{\mathbb{C}^n}^{n-1}$ such that
\[
\sup_{x \in S_{\mathbb{C}^n}^{n-1}} \|Ax\|_{\mathbb{C}^m} = \|Ax_1\|_{\mathbb{C}^m}
\quad \text{and} \quad
\inf_{x \in S_{\mathbb{C}^n}^{n-1}} \|Ax\|_{\mathbb{C}^m} = \|Ax_2\|_{\mathbb{C}^m} .
\]
Since
\[
\|A\| = \sup_{w \in S_{\mathbb{C}^n}^{n-1}} \|Aw\|_{\mathbb{C}^m} ,
\]
the result follows.


Theorem 4.1.15. Suppose that $\| \cdot \| \colon \mathbb{C}^{n \times n} \to \mathbb{R}$ is the induced matrix norm with respect to the vector norm $\| \cdot \| \colon \mathbb{C}^n \to \mathbb{R}$. Then
\[
\|I_n\| = 1
\]
and, for every $A \in \mathbb{C}^{n \times n}$,
\[
\rho(A) \le \|A\| .
\]
Proof. The proof is an exercise.
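A quick numerical illustration of both claims (a sanity check only, not the requested proof); it relies on the fact that NumPy's matrix norms with $p = 1, 2, \infty$ are exactly the induced $p$-norms.

```python
# For a random A, the spectral radius is bounded by each induced p-norm,
# and the identity has induced norm 1.
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((5, 5))
rho = np.max(np.abs(np.linalg.eigvals(A)))
for p in (1, 2, np.inf):
    print(p, rho <= np.linalg.norm(A, p))  # True for each p
print(np.linalg.norm(np.eye(5), 2))        # 1.0
```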

4.2 Conditioning

Definition 4.2.1 (condition number). Suppose that $A \in \mathbb{C}^{n \times n}$ is invertible. The condition number of $A$ with respect to the matrix norm $\| \cdot \| \colon \mathbb{C}^{n \times n} \to \mathbb{R}$ is
\[
\kappa(A) := \|A\| \, \|A^{-1}\| .
\]
Before we get on to the meaning and utility of the condition number, let
us get some elementary properties out of the way.
Proposition 4.2.2. Suppose that $\| \cdot \| \colon \mathbb{C}^{n \times n} \to \mathbb{R}$ is an induced matrix norm with respect to the vector norm $\| \cdot \| \colon \mathbb{C}^n \to \mathbb{R}$, and $A \in \mathbb{C}^{n \times n}$ is invertible. Then
\[
\|A\| \, \|A^{-1}\| =: \kappa(A) \ge 1 .
\]
Furthermore,
\[
\frac{1}{\|A^{-1}\|} \le \|A - B\| ,
\]
for any $B \in \mathbb{C}^{n \times n}$ that is singular. Consequently,
\[
\frac{1}{\kappa(A)} \le \inf_{\det(B) = 0} \frac{\|A - B\|}{\|A\|} .
\]
Proof. Exercise.

There are some nice formulae for and estimates of the condition number
with respect to the induced matrix 2-norm:
Proposition 4.2.3. Suppose that $A \in \mathbb{C}^{n \times n}$ is invertible and $\| \cdot \|_2 \colon \mathbb{C}^{n \times n} \to \mathbb{R}$ is the induced matrix 2-norm.

1. If the singular values of $A$ are $\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_n > 0$, then
\[
\kappa_2(A) := \|A\|_2 \, \|A^{-1}\|_2 = \frac{\sigma_1}{\sigma_n} .
\]

2. If the eigenvalues of $B = A^H A$ are $0 < \mu_1 \le \mu_2 \le \cdots \le \mu_n$, then
\[
\kappa_2(A) = \sqrt{\frac{\mu_n}{\mu_1}} .
\]

3. Denote $\kappa_p(A) = \|A\|_p \cdot \|A^{-1}\|_p$, for $p \in [1, +\infty]$, where $\| \cdot \|_p$ is the induced matrix norm with respect to the vector norm $\| \cdot \|_{\ell^p(\mathbb{C}^n)}$. Then
\[
\kappa_2(A) \le \sqrt{\kappa_1(A) \, \kappa_\infty(A)} .
\]

4. 
\[
\frac{1}{\kappa_2(A)} = \inf_{\det(B) = 0} \frac{\|A - B\|_2}{\|A\|_2} .
\]

5. If, additionally, it is assumed that $A$ is Hermitian, then
\[
\kappa_2(A) = \frac{\max_{\lambda \in \sigma(A)} |\lambda|}{\min_{\lambda \in \sigma(A)} |\lambda|} .
\]

6. If, additionally, it is known that $A$ is HPD with eigenvalues $0 < \lambda_1 \le \lambda_2 \le \cdots \le \lambda_n$, then
\[
\kappa_2(A) = \frac{\lambda_n}{\lambda_1} .
\]

Proof. Exercise.

The first and last formulae give an easy geometric interpretation of the
condition number. It is the ratio of the maximal stretching to the minimal
stretching under the action of the matrix A.
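In the 2-norm this interpretation is exactly the ratio $\sigma_1 / \sigma_n$ of Proposition 4.2.3. The following NumPy sketch (illustrative only) confirms it against the library's built-in condition number.

```python
# kappa_2(A) is sigma_1 / sigma_n, the ratio of the largest to the
# smallest stretching factor of A.
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((4, 4))
s = np.linalg.svd(A, compute_uv=False)   # singular values, descending
print(s[0] / s[-1])                      # sigma_1 / sigma_n
print(np.linalg.cond(A, 2))              # numpy's kappa_2; same value
```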
Definition 4.2.4. Suppose that $A \in \mathbb{C}^{n \times n}$ is invertible and $b \in \mathbb{C}^n$ is given. The residual vector with respect to $x_0 \in \mathbb{C}^n$ is defined as
\[
r = r(x_0) := b - A x_0 = A(x - x_0) ,
\]
where $x = A^{-1} b$. The error vector with respect to $x_0$ is defined as
\[
e = e(x_0) := x - x_0 .
\]
Consequently,
\[
A e = r .
\]
It often happens that we have obtained an approximate solution $x_0 \in \mathbb{C}^n$. We would like to have some measure of the error, but a direct measurement of the error would require the exact solution $x$. The next best thing is the residual, which is an indirect measurement of the error, as the last definition suggests. The next theorem tells us how useful the residual is in determining the relative size of the error.
Theorem 4.2.5. Suppose that $A \in \mathbb{C}^{n \times n}$ is invertible, $b \in \mathbb{C}^n_\star$ is given, and $x = A^{-1} b$. Assume that $\| \cdot \| \colon \mathbb{C}^{n \times n} \to \mathbb{R}$ is the induced matrix norm with respect to the vector norm $\| \cdot \| \colon \mathbb{C}^n \to \mathbb{R}$. Then
\[
\frac{1}{\kappa(A)} \frac{\|r\|}{\|b\|} \le \frac{\|e\|}{\|x\|} \le \kappa(A) \frac{\|r\|}{\|b\|} .
\]

Proof. Since $e = A^{-1} r$, using consistency of the induced norm,
\[
\|e\| = \|A^{-1} r\| \le \|A^{-1}\| \, \|r\| .
\]
Likewise,
\[
\|b\| = \|Ax\| \le \|A\| \, \|x\| ,
\]
which implies
\[
\frac{1}{\|x\|} \le \|A\| \frac{1}{\|b\|} .
\]
Combining the first and third inequalities, we get half of the result:
\[
\frac{\|e\|}{\|x\|} \le \|A\| \, \|A^{-1}\| \frac{\|r\|}{\|b\|} = \kappa(A) \frac{\|r\|}{\|b\|} .
\]
The other half is left as a homework exercise.
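The following sketch (assuming NumPy and SciPy are available) shows why the factor $\kappa(A)$ matters in practice: for the notoriously ill-conditioned Hilbert matrix, a tiny relative residual coexists with a much larger relative error, exactly as Theorem 4.2.5 permits.

```python
# Small residual, large error: the gap is governed by kappa(A).
import numpy as np
from scipy.linalg import hilbert

n = 10
A = hilbert(n)
x = np.ones(n)
b = A @ x
x0 = np.linalg.solve(A, b)                         # computed approximation
r = b - A @ x0
print(np.linalg.norm(r) / np.linalg.norm(b))       # tiny relative residual
print(np.linalg.norm(x - x0) / np.linalg.norm(x))  # much larger relative error
print(np.linalg.cond(A, 2))                        # kappa_2(A) explains the gap
```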

Now, suppose that we wish to find $x \in \mathbb{C}^n$, such that $Ax = b$, where $A \in \mathbb{C}^{n \times n}$ is invertible and $b \in \mathbb{C}^n$ is given. We sometimes call $A$ and $b$ the data of the problem. We can imagine a scenario in which the coefficient matrix $A$ and the vector $b$ are perturbed during the process of storing their values or solving the system of equations. Let $\delta A \in \mathbb{C}^{n \times n}$ and $\delta b \in \mathbb{C}^n$ be known (or estimable) perturbations of the data. The problem that is actually solved is
\[
(A + \delta A)(x + \delta x) = b + \delta b .
\]
The question is this: how large is the relative error? In other words, how large is $\frac{\|\delta x\|}{\|x\|}$? Formally, the perturbation in $x \in \mathbb{C}^n$ is
\[
\delta x = (A + \delta A)^{-1} (b + \delta b) - x ,
\]
provided $A + \delta A$ is invertible. Observe that since $A$ is invertible we have
\[
(A + \delta A)^{-1} = \left( A (I_n + A^{-1} \delta A) \right)^{-1} = (I_n + A^{-1} \delta A)^{-1} A^{-1} .
\]
Therefore, we have reduced the question of the invertibility of $A + \delta A$ to a more general question: given $M \in \mathbb{C}^{n \times n}$, when is $I_n \pm M$ invertible?
Theorem 4.2.6 (Neumann Series). Suppose that $\| \cdot \| \colon \mathbb{C}^{n \times n} \to \mathbb{R}$ is an induced matrix norm with respect to the vector norm $\| \cdot \| \colon \mathbb{C}^n \to \mathbb{R}$. Let $M \in \mathbb{C}^{n \times n}$ with $\|M\| < 1$. Then $I_n - M$ is invertible,
\[
\|(I_n - M)^{-1}\| \le \frac{1}{1 - \|M\|} ,
\]
and
\[
(I_n - M)^{-1} = \sum_{k=0}^{\infty} M^k .
\]

Proof. Using the reverse triangle inequality and consistency, since $\|M\| < 1$, for any $x \in \mathbb{C}^n$,
\[
\|(I_n - M)x\| \ge \|x\| - \|Mx\| \ge (1 - \|M\|) \|x\| .
\]
This inequality implies that, if $(I_n - M)x = 0$, then $x = 0$. Therefore, $I_n - M$ is invertible.

To obtain the norm estimate notice that
\[
1 = \|I_n\| = \|(I_n - M)(I_n - M)^{-1}\|
= \|(I_n - M)^{-1} - M(I_n - M)^{-1}\|
\ge \|(I_n - M)^{-1}\| - \|M\| \, \|(I_n - M)^{-1}\| ,
\]
where we have used the reverse triangle inequality for the matrix norm, which holds as for vector norms, and sub-multiplicativity. The upper bound on the quantity $\|(I_n - M)^{-1}\|$ now follows.

Finally, consider
\[
S_\ell := \sum_{k=0}^{\ell} M^k .
\]
We will show that $S_\ell (I_n - M) \to I_n$ as $\ell \to \infty$. Indeed,
\[
S_\ell (I_n - M) = \sum_{k=0}^{\ell} M^k (I_n - M) = \sum_{k=0}^{\ell} M^k - \sum_{k=0}^{\ell} M^{k+1} = I_n - M^{\ell+1} ,
\]
which shows that, as $\ell \to \infty$,
\[
\|S_\ell (I_n - M) - I_n\| = \|M^{\ell+1}\| \le \|M\|^{\ell+1} \to 0 ,
\]
using the sub-multiplicativity of the induced norm and the fact that $\|M\| < 1$.
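A short numerical illustration of the Neumann series (the scaling and truncation level are chosen for illustration only):

```python
# Partial sums of the Neumann series for (I - M)^{-1} with ||M|| < 1;
# the truncation error decays at least like ||M||^(l+1).
import numpy as np

rng = np.random.default_rng(5)
M = rng.standard_normal((4, 4))
M *= 0.5 / np.linalg.norm(M, 2)          # rescale so that ||M||_2 = 0.5 < 1
exact = np.linalg.inv(np.eye(4) - M)

S, term = np.eye(4), np.eye(4)
for _ in range(30):
    term = term @ M                      # next power M^k
    S += term
print(np.linalg.norm(S - exact, 2))      # on the order of 0.5**31: essentially zero
```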

Corollary 4.2.7. Suppose that $\| \cdot \| \colon \mathbb{C}^{n \times n} \to \mathbb{R}$ is an induced matrix norm with respect to the vector norm $\| \cdot \| \colon \mathbb{C}^n \to \mathbb{R}$. Let $M \in \mathbb{C}^{n \times n}$ with $\|M\| < 1$. Then $I_n + M$ is invertible and
\[
\|(I_n + M)^{-1}\| \le \frac{1}{1 - \|M\|} .
\]
Corollary 4.2.8. Suppose that $\| \cdot \| \colon \mathbb{C}^{n \times n} \to \mathbb{R}$ is an induced matrix norm with respect to the vector norm $\| \cdot \| \colon \mathbb{C}^n \to \mathbb{R}$. If $S \in \mathbb{C}^{n \times n}$ is invertible and $T \in \mathbb{C}^{n \times n}$ satisfies
\[
\|S^{-1}\| \, \|S - T\| < 1 ,
\]
then $T$ is invertible.

Proof. Notice that
\[
T = S \left( I_n - (I_n - S^{-1} T) \right) ,
\]
and, therefore, $T$ will be invertible provided $I_n - (I_n - S^{-1} T)$ is invertible. Defining $M = I_n - S^{-1} T$, we realize that we need $\|M\| < 1$. Observe that
\[
\|M\| = \|I_n - S^{-1} T\| = \|S^{-1}(S - T)\| \le \|S^{-1}\| \, \|S - T\| < 1 .
\]

Let us begin by assuming that $\delta b = 0$.


Theorem 4.2.9. Suppose that $\| \cdot \| \colon \mathbb{C}^{n \times n} \to \mathbb{R}$ is an induced matrix norm with respect to the vector norm $\| \cdot \| \colon \mathbb{C}^n \to \mathbb{R}$. Suppose that $x \in \mathbb{C}^n$ solves $Ax = b$, where $A \in \mathbb{C}^{n \times n}$ is invertible and $b \in \mathbb{C}^n$ is known. Assume that $\delta A \in \mathbb{C}^{n \times n}$ satisfies $\|A^{-1} \delta A\| < 1$ and that $\delta x \in \mathbb{C}^n$ satisfies the perturbed problem
\[
(A + \delta A)(x + \delta x) = b .
\]
Then $\delta x$ is uniquely determined and
\[
\frac{\|\delta x\|}{\|x\|} \le \frac{\kappa(A)}{1 - \kappa(A) \frac{\|\delta A\|}{\|A\|}} \, \frac{\|\delta A\|}{\|A\|} .
\]

Proof. Since $M := -A^{-1} \delta A$ satisfies $\|M\| = \|A^{-1} \delta A\| < 1$, we know that $A + \delta A = A(I_n - M)$ is invertible. Therefore, $\delta x$ exists and is unique. In addition, we have
\[
(A + \delta A)^{-1} = \left( A(I_n - M) \right)^{-1} = (I_n - M)^{-1} A^{-1}
\]
and $\|(I_n - M)^{-1}\| \le \frac{1}{1 - \|M\|}$. Moreover,
\[
\|M\| \le \|A^{-1}\| \, \|\delta A\|
\]
implies
\[
\frac{1}{1 - \|M\|} \le \frac{1}{1 - \|A^{-1}\| \, \|\delta A\|} .
\]
Now
\begin{align*}
\delta x &= (A + \delta A)^{-1} b - A^{-1} b \\
&= (I_n - M)^{-1} A^{-1} b - A^{-1} b \\
&= (I_n - M)^{-1} \left( A^{-1} b - (I_n - M) A^{-1} b \right) \\
&= (I_n - M)^{-1} M A^{-1} b \\
&= (I_n - M)^{-1} M x .
\end{align*}
Consequently,
\[
\|\delta x\| \le \|(I_n - M)^{-1}\| \cdot \|M\| \cdot \|x\|
\le \frac{\|A^{-1}\| \, \|\delta A\|}{1 - \|A^{-1}\| \, \|\delta A\|} \|x\|
= \frac{\kappa(A) \frac{\|\delta A\|}{\|A\|}}{1 - \kappa(A) \frac{\|\delta A\|}{\|A\|}} \|x\| .
\]
The result follows.

To end the discussion, let us see what happens when we perturb both A
and b.

Theorem 4.2.10. Suppose that $\| \cdot \| \colon \mathbb{C}^{n \times n} \to \mathbb{R}$ is an induced matrix norm with respect to the vector norm $\| \cdot \| \colon \mathbb{C}^n \to \mathbb{R}$. Suppose that $x \in \mathbb{C}^n$ solves $Ax = b$, where $A \in \mathbb{C}^{n \times n}$ is invertible and $b \in \mathbb{C}^n$ is known. Assume that $\delta A \in \mathbb{C}^{n \times n}$ satisfies $\|A^{-1} \delta A\| < 1$, $\delta b \in \mathbb{C}^n$ is given, and $\delta x \in \mathbb{C}^n$ satisfies the perturbed problem
\[
(A + \delta A)(x + \delta x) = b + \delta b .
\]
Then $\delta x$ is uniquely determined and
\[
\frac{\|\delta x\|}{\|x\|} \le \frac{\kappa(A)}{1 - \kappa(A) \frac{\|\delta A\|}{\|A\|}} \left( \frac{\|\delta b\|}{\|b\|} + \frac{\|\delta A\|}{\|A\|} \right) .
\]

Proof. Let $M = -A^{-1} \delta A$. We then have that $x = A^{-1} b$ and $x + \delta x = (I_n - M)^{-1} A^{-1} (b + \delta b)$. Therefore,
\begin{align*}
\delta x &= (I_n - M)^{-1} A^{-1} (b + \delta b) - A^{-1} b \\
&= (I_n - M)^{-1} \left( A^{-1} \delta b + A^{-1} b - (I_n - M) A^{-1} b \right) \\
&= (I_n - M)^{-1} \left( A^{-1} \delta b + M A^{-1} b \right) .
\end{align*}
This shows that
\[
\|\delta x\| \le \frac{1}{1 - \kappa(A) \frac{\|\delta A\|}{\|A\|}} \left( \|A^{-1} \delta b\| + \|M A^{-1} b\| \right) .
\]
Notice also that
\[
\|M A^{-1} b\| = \|M x\| \le \|M\| \, \|x\| \le \|A^{-1}\| \cdot \|\delta A\| \cdot \|x\| = \kappa(A) \frac{\|\delta A\|}{\|A\|} \|x\|
\]
and
\[
\|A^{-1} \delta b\| \le \|A^{-1}\| \, \|\delta b\| = \|A^{-1}\| \, \|Ax\| \, \frac{\|\delta b\|}{\|b\|} \le \kappa(A) \frac{\|\delta b\|}{\|b\|} \|x\| .
\]
The previous three inequalities, when combined, yield
\[
\|\delta x\| \le \frac{\kappa(A)}{1 - \kappa(A) \frac{\|\delta A\|}{\|A\|}} \left( \frac{\|\delta b\|}{\|b\|} + \frac{\|\delta A\|}{\|A\|} \right) \|x\| ,
\]
as we intended to show.
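As a hedged numerical check of Theorem 4.2.10 (a sketch with arbitrarily chosen perturbation sizes), one can perturb $A$ and $b$ slightly and compare the observed relative change in the solution against the bound:

```python
# Perturb both A and b and verify the relative error bound of Theorem 4.2.10.
import numpy as np

rng = np.random.default_rng(6)
n = 5
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)
x = np.linalg.solve(A, b)

dA = 1e-8 * rng.standard_normal((n, n))
db = 1e-8 * rng.standard_normal(n)
dx = np.linalg.solve(A + dA, b + db) - x

kappa = np.linalg.cond(A, 2)
rel_A = np.linalg.norm(dA, 2) / np.linalg.norm(A, 2)
rel_b = np.linalg.norm(db) / np.linalg.norm(b)
bound = kappa / (1 - kappa * rel_A) * (rel_b + rel_A)
print(np.linalg.norm(dx) / np.linalg.norm(x) <= bound)  # True
```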

4.3 Problems

1. Suppose that $\| \cdot \| \colon \mathbb{C}^{m \times n} \to \mathbb{R}$ is the induced norm with respect to the vector norms $\| \cdot \|_{\mathbb{C}^m}$ and $\| \cdot \|_{\mathbb{C}^n}$, and $A \in \mathbb{C}^{m \times n}$. Prove that the function $\|A(\cdot)\|_{\mathbb{C}^m} \colon \mathbb{C}^n \to \mathbb{R}$ is uniformly continuous. Use this fact to prove that there is a vector $x \in S_{\mathbb{C}^n}^{n-1}$ such that
\[
\|A\| = \|Ax\|_{\mathbb{C}^m} .
\]

2. Suppose that $\| \cdot \| \colon \mathbb{C}^{n \times n} \to \mathbb{R}$ is the induced norm with respect to the vector norm $\| \cdot \| \colon \mathbb{C}^n \to \mathbb{R}$. Let $A \in \mathbb{C}^{n \times n}$ be invertible. Prove that
\[
\frac{1}{\|A^{-1}\|} = \min_{y \in \mathbb{C}^n_\star} \frac{\|Ay\|}{\|y\|} .
\]

3. Let $A \in \mathbb{C}^{n \times n}$ be nonsingular. Show that the condition numbers $\kappa_1(A)$ and $\kappa_\infty(A)$ will not change after permutation of rows or columns.
4. Suppose that $\| \cdot \| \colon \mathbb{C}^{n \times n} \to \mathbb{R}$ is a matrix norm and $\kappa$ is the condition number defined with respect to it. Let $A \in \mathbb{C}^{n \times n}$ be nonsingular and $0 \ne \alpha \in \mathbb{C}$. Show that $\kappa(\alpha A) = \kappa(A)$.
5. Show that if $Q \in \mathbb{C}^{n \times n}$ is unitary, then $\kappa_2(Q) = 1$.
6. Suppose that $\| \cdot \| \colon \mathbb{C}^{n \times n} \to \mathbb{R}$ is the induced norm with respect to the vector norm $\| \cdot \| \colon \mathbb{C}^n \to \mathbb{R}$ and $\kappa$ is the condition number defined with respect to this norm. Let $A \in \mathbb{C}^{n \times n}$ be invertible. Show that
\[
\kappa(A) \ge \frac{\max_{\lambda \in \sigma(A)} |\lambda|}{\min_{\lambda \in \sigma(A)} |\lambda|} .
\]
Show that if $A^H = A$, then equality holds for the 2-norm.



7. Let $A = S^H S$ with $S \in \mathbb{C}^{n \times n}$ nonsingular. Give an expression for $\kappa_2(A)$ in terms of $\kappa_2(S)$.

8. Let $S, T \in \mathbb{C}^{n \times n}$ with $S$ invertible and $\|T - S\| \cdot \|S^{-1}\| < 1$. Show that $T$ is invertible and
\[
\|T^{-1}\| \le \frac{1}{1 - q} \|S^{-1}\| , \quad q = \|S - T\| \cdot \|S^{-1}\| .
\]
Prove that the set of invertible matrices is open in $\mathbb{C}^{n \times n}$.

9. Suppose that $\| \cdot \| \colon \mathbb{C}^{n \times n} \to \mathbb{R}$ is the induced norm with respect to the vector norm $\| \cdot \| \colon \mathbb{C}^n \to \mathbb{R}$. Let $A \in \mathbb{C}^{n \times n}$ be invertible and suppose $b \in \mathbb{C}^n_\star$. Suppose $x \in \mathbb{C}^n$ satisfies $Ax = b$. Let the perturbations $\delta x, \delta b \in \mathbb{C}^n$ satisfy $A \, \delta x = \delta b$, so that $A(x + \delta x) = b + \delta b$.

(a) Prove the error (or perturbation) estimate
\[
\frac{1}{\kappa(A)} \frac{\|\delta b\|}{\|b\|} \le \frac{\|\delta x\|}{\|x\|} \le \kappa(A) \frac{\|\delta b\|}{\|b\|} .
\]

(b) Show that for any invertible matrix $A$, the upper bound for $\frac{\|\delta x\|}{\|x\|}$ above can be attained for suitable choices of $b$ and $\delta b$.


10. Let $A \in \mathbb{C}^{n \times n}$ be HPD. Define $\| \cdot \|_A \colon \mathbb{C}^n \to \mathbb{R}$ via $\|x\|_A := \sqrt{x^H A x}$. Prove that this is a vector norm, and satisfies the estimates
\[
\sqrt{\lambda_1} \|x\|_2 \le \|x\|_A \le \sqrt{\lambda_n} \|x\|_2 ,
\]
where $0 < \lambda_1 \le \cdots \le \lambda_n$ are the eigenvalues of $A$, with both inequalities attainable (though perhaps not simultaneously) for suitable choices of $x$.

11. For any $x \in \mathbb{C}^n$, prove
\[
\|x\|_\infty \le \|x\|_1 \le n \|x\|_\infty ,
\]
\[
\|x\|_\infty \le \|x\|_2 \le \sqrt{n} \|x\|_\infty ,
\]
\[
\|x\|_2 \le \|x\|_1 \le \sqrt{n} \|x\|_2 .
\]

12. For any $A \in \mathbb{C}^{n \times n}$, prove the following:

(a)
\[
\frac{1}{\sqrt{n}} \|A\|_2 \le \|A\|_1 \le \sqrt{n} \|A\|_2 .
\]

(b) If $A$ is nonsingular,
\[
\frac{1}{n} \le \frac{\kappa_1(A)}{\kappa_2(A)} \le n .
\]
13. Suppose that $\| \cdot \| \colon \mathbb{C}^{n \times n} \to \mathbb{R}$ is the induced norm with respect to the vector norm $\| \cdot \| \colon \mathbb{C}^n \to \mathbb{R}$. Suppose that $A, B \in \mathbb{C}^{n \times n}$, $A$ is non-singular, and $B$ is singular. Prove that
\[
\frac{1}{\kappa(A)} \le \frac{\|A - B\|}{\|A\|} ,
\]
where $\kappa(A) = \|A\| \cdot \|A^{-1}\|$.

Note: This formula is useful in a couple of ways. First, it says that if $A$ is close in norm to a singular matrix $B$, then $\kappa(A)$ will be very large. Thus, nearly singular matrices are ill-conditioned. Second, this formula gives an upper bound on $\kappa(A)^{-1}$.

14. Let $A \in \mathbb{C}^{n \times n}$ be invertible. Prove that
\[
\frac{1}{\kappa_2(A)} = \inf_{\det(B) = 0} \frac{\|A - B\|_2}{\|A\|_2} .
\]

15. Suppose
\[
A = \begin{bmatrix} 1.0000 & 2.0000 \\ 1.0001 & 2.0000 \end{bmatrix} .
\]

(a) Calculate $\kappa_\infty(A) := \|A\|_\infty \cdot \|A^{-1}\|_\infty$ and $\kappa_1(A) := \|A\|_1 \cdot \|A^{-1}\|_1$.

(b) Use the result of problem 13 to obtain upper bounds on $\kappa_\infty(A)^{-1}$ and also on $\kappa_1(A)^{-1}$.

(c) Suppose that you wish to solve $Ax = b$, where $b = \begin{bmatrix} 3.0000 \\ 3.0001 \end{bmatrix}$. Instead of $x$ you obtain the approximation $x_0 = x + \delta x = \begin{bmatrix} 0.0000 \\ 1.5000 \end{bmatrix}$. For this approximation you discover $b_0 = b + \delta b = \begin{bmatrix} 3.0000 \\ 3.0000 \end{bmatrix}$, where $A x_0 = b_0$. Calculate $\|\delta x\|_\infty / \|x\|_\infty$ exactly. (You will need the exact solution, of course.) Then use the general estimate
\[
\frac{\|\delta x\|}{\|x\|} \le \kappa(A) \frac{\|\delta b\|}{\|b\|}
\]
to obtain an upper bound for $\|\delta x\|_\infty / \|x\|_\infty$. How good is $\|\delta b\|_\infty / \|b\|_\infty$ as an indicator of the size of $\|\delta x\|_\infty / \|x\|_\infty$?

16. Suppose
\[
A = \begin{bmatrix} a & b \\ b & a \end{bmatrix} ,
\]
where $a$ and $b$ are real numbers. Show that the subordinate matrix norms satisfy $\|A\|_1 = \|A\|_2 = \|A\|_\infty$.

17. Suppose that
\[
A = \begin{bmatrix} a & b \\ -b & a \end{bmatrix} ,
\]
where $a$ and $b$ are real numbers. Show $\|A\|_2 = (a^2 + b^2)^{1/2}$.

18. Suppose that $\| \cdot \| \colon \mathbb{C}^{n \times n} \to \mathbb{R}$ is the induced norm with respect to the vector norm $\| \cdot \| \colon \mathbb{C}^n \to \mathbb{R}$. Show that if $\lambda$ is an eigenvalue of $A^H A$, where $A \in \mathbb{C}^{n \times n}$, then
\[
0 \le \lambda \le \|A^H\| \, \|A\| .
\]

19. Suppose that $A \in \mathbb{C}^{n \times n}$ is invertible. Show that
\[
\kappa_2(A) = \sqrt{\frac{\lambda_n}{\lambda_1}} ,
\]
where $\lambda_n$ is the largest eigenvalue of $B := A^H A$, and $\lambda_1$ is the smallest eigenvalue of $B$.

20. Suppose that $A$ is HPD with eigenvalues $0 < \lambda_1 \le \lambda_2 \le \cdots \le \lambda_n$. Prove that
\[
\kappa_2(A) = \frac{\lambda_n}{\lambda_1} .
\]

21. Suppose that $A \in \mathbb{C}^{n \times n}$ is invertible. Use the results of problems 18 and 19 to show that
\[
\kappa_2(A) \le \sqrt{\kappa_1(A) \, \kappa_\infty(A)} .
\]
