
22. Banach Spaces III: Calculus


In this section, X and Y will be Banach spaces and U will be an open subset of X.
Notation 22.1 (ε, O, and o notation). Let 0 ∈ U ⊂o X, and f : U → Y be a
function. We will write:
(1) f(x) = ε(x) if lim_{x→0} ‖f(x)‖ = 0.
(2) f(x) = O(x) if there are constants C < ∞ and r > 0 such that
    ‖f(x)‖ ≤ C‖x‖ for all x ∈ B(0, r). This is equivalent to the condition
    that limsup_{x→0} ‖f(x)‖/‖x‖ < ∞, where

        limsup_{x→0} ‖f(x)‖/‖x‖ ≡ lim_{r↓0} sup{‖f(x)‖/‖x‖ : 0 < ‖x‖ ≤ r}.

(3) f(x) = o(x) if f(x) = ε(x)O(x), i.e. lim_{x→0} ‖f(x)‖/‖x‖ = 0.


Example 22.2. Here are some examples of properties of these symbols.
(1) A function f : U ⊂o X → Y is continuous at x0 ∈ U if f(x0 + h) = f(x0) + ε(h).
(2) If f(x) = ε(x) and g(x) = ε(x) then f(x) + g(x) = ε(x).
    Now let g : Y → Z be another function where Z is another Banach space.
(3) If f(x) = O(x) and g(y) = o(y) then g ◦ f(x) = o(x).
(4) If f(x) = ε(x) and g(y) = ε(y) then g ◦ f(x) = ε(x).
22.1. The Differential.
Definition 22.3. A function f : U ⊂o X → Y is differentiable at x0 ∈ U
if there exists a linear transformation Λ ∈ L(X, Y) such that

(22.1)    f(x0 + h) − f(x0) − Λh = o(h).

We denote Λ by f'(x0) or Df(x0) if it exists. As with continuity, f is differentiable
on U if f is differentiable at all points in U.
Remark 22.4. The linear transformation Λ in Definition 22.3 is necessarily unique.
Indeed if Λ1 is another linear transformation such that Eq. (22.1) holds with Λ
replaced by Λ1 , then
(Λ − Λ1 )h = o(h),
i.e.
    limsup_{h→0} ‖(Λ − Λ1)h‖/‖h‖ = 0.

On the other hand, by definition of the operator norm,

    limsup_{h→0} ‖(Λ − Λ1)h‖/‖h‖ = ‖Λ − Λ1‖.
The last two equations show that Λ = Λ1 .
Exercise 22.1. Show that a function f : (a, b) → X is differentiable at t ∈ (a, b)
in the sense of Definition 4.6 iff it is differentiable in the sense of Definition 22.3.
Also show Df (t)v = v f˙(t) for all v ∈ R.

Example 22.5. Assume that GL(X, Y) is non-empty. Then f : GL(X, Y) → GL(Y, X)
defined by f(A) ≡ A⁻¹ is differentiable and

    f'(A)B = −A⁻¹BA⁻¹ for all B ∈ L(X, Y).

Indeed (by Eq. (3.13)),

    f(A + H) − f(A) = (A + H)⁻¹ − A⁻¹ = (A(I + A⁻¹H))⁻¹ − A⁻¹
                    = (I + A⁻¹H)⁻¹A⁻¹ − A⁻¹ = ∑_{n=0}^∞ (−A⁻¹H)ⁿ · A⁻¹ − A⁻¹
                    = −A⁻¹HA⁻¹ + ∑_{n=2}^∞ (−A⁻¹H)ⁿ A⁻¹.

Since

    ‖∑_{n=2}^∞ (−A⁻¹H)ⁿ A⁻¹‖ ≤ ∑_{n=2}^∞ ‖A⁻¹H‖ⁿ ‖A⁻¹‖ ≤ ‖A⁻¹‖³‖H‖²/(1 − ‖A⁻¹H‖),

we find that

    f(A + H) − f(A) = −A⁻¹HA⁻¹ + o(H).
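In finite dimensions this formula is easy to test numerically. The following is a minimal sketch (not part of the original text), assuming NumPy and taking X = Y = Rⁿ, so that GL(X, Y) is the set of invertible n × n matrices; it compares a difference quotient of A → A⁻¹ with −A⁻¹BA⁻¹.

```python
# Hypothetical finite-dimensional sanity check of Example 22.5 (X = Y = R^n).
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))   # an invertible A near the identity
B = rng.standard_normal((n, n))                      # a direction H = t*B

Ainv = np.linalg.inv(A)
predicted = -Ainv @ B @ Ainv                         # f'(A)B = -A^{-1} B A^{-1}

t = 1e-6
finite_diff = (np.linalg.inv(A + t * B) - Ainv) / t  # [f(A + tB) - f(A)] / t

print(np.linalg.norm(finite_diff - predicted))       # small, of order t
```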

22.2. Product and Chain Rules. The following theorem summarizes some basic
properties of the differential.
Theorem 22.6. The differential D has the following properties:
Linearity: D is linear, i.e. D(f + λg) = Df + λDg.
Product Rule: If f : U ⊂o X → Y and A : U ⊂o X → L(Y, Z) are
differentiable at x0 then so is x → (Af)(x) ≡ A(x)f(x) and

    D(Af)(x0)h = (DA(x0)h)f(x0) + A(x0)Df(x0)h.

Chain Rule: If f : U ⊂o X → V ⊂o Y is differentiable at x0 ∈ U, and
g : V ⊂o Y → Z is differentiable at y0 ≡ f(x0), then g ◦ f is differentiable
at x0 and (g ◦ f)'(x0) = g'(y0)f'(x0).
Converse Chain Rule: Suppose that f : U ⊂o X → V ⊂o Y is continuous
at x0 ∈ U, g : V ⊂o Y → Z is differentiable at y0 ≡ f(x0), g'(y0) is invertible,
and g ◦ f is differentiable at x0, then f is differentiable at x0 and

(22.2)    f'(x0) ≡ [g'(y0)]⁻¹(g ◦ f)'(x0).
Proof. For the proof of linearity, let f, g : U ⊂o X → Y be two functions which
are differentiable at x0 ∈ U and c ∈ R, then
(f + cg)(x0 + h) = f(x0) + Df(x0)h + o(h) + c(g(x0) + Dg(x0)h + o(h))
= (f + cg)(x0 ) + (Df (x0 ) + cDg(x0 ))h + o(h),
which implies that (f + cg) is differentiable at x0 and that
D(f + cg)(x0 ) = Df (x0 ) + cDg(x0 ).
For item 2, we have
A(x0 + h)f (x0 + h) = (A(x0 ) + DA(x0 )h + o(h))(f (x0 ) + f 0 (x0 )h + o(h))
= A(x0 )f (x0 ) + A(x0 )f 0 (x0 )h + [DA(x0 )h]f (x0 ) + o(h),
which proves item 2.

Similarly for item 3,


(g ◦ f)(x0 + h) = g(f(x0)) + g'(f(x0))(f(x0 + h) − f(x0)) + o(f(x0 + h) − f(x0))
               = g(f(x0)) + g'(f(x0))(Df(x0)h + o(h)) + o(f(x0 + h) − f(x0))
               = g(f(x0)) + g'(f(x0))Df(x0)h + o(h),
where in the last line we have used the fact that f (x0 + h) − f (x0 ) = O(h) (see Eq.
(22.1)) and o(O(h)) = o(h).
Item 4. Since g is differentiable at y0 = f (x0 ),
g(f (x0 + h)) − g(f (x0 )) = g 0 (f (x0 ))(f (x0 + h) − f (x0 )) + o(f (x0 + h) − f (x0 )).
And since g ◦ f is differentiable at x0 ,
(g ◦ f )(x0 + h) − g(f (x0 )) = (g ◦ f )0 (x0 )h + o(h).
Comparing these two equations shows that

(22.3)    f(x0 + h) − f(x0) = g'(f(x0))⁻¹{(g ◦ f)'(x0)h + o(h) − o(f(x0 + h) − f(x0))}
                            = g'(f(x0))⁻¹(g ◦ f)'(x0)h + o(h) − g'(f(x0))⁻¹ o(f(x0 + h) − f(x0)).
Using the continuity of f, f(x0 + h) − f(x0) is close to 0 if h is close to zero, and
hence ‖o(f(x0 + h) − f(x0))‖ ≤ ½‖f(x0 + h) − f(x0)‖ for all h sufficiently close to
0. (We may replace ½ by any number α > 0 above.) Using this remark, we may
take the norm of both sides of equation (22.3) to find

    ‖f(x0 + h) − f(x0)‖ ≤ ‖g'(f(x0))⁻¹(g ◦ f)'(x0)‖‖h‖ + o(h) + ½‖f(x0 + h) − f(x0)‖

for h close to 0. Solving for ‖f(x0 + h) − f(x0)‖ in this last equation shows that
(22.4) f (x0 + h) − f (x0 ) = O(h).
(This is an improvement, since the continuity of f only guaranteed that f(x0 + h) −
f(x0) = ε(h).) Because of Eq. (22.4), we now know that o(f(x0 + h) − f(x0)) = o(h),
which combined with Eq. (22.3) shows that
f (x0 + h) − f (x0 ) = g 0 (f (x0 ))−1 (g ◦ f )0 (x0 )h + o(h),
i.e. f is differentiable at x0 and f 0 (x0 ) = g 0 (f (x0 ))−1 (g ◦ f )0 (x0 ).
Corollary 22.7. Suppose that σ : (a, b) → U ⊂o X is differentiable at t ∈ (a, b)
and f : U ⊂o X → Y is differentiable at σ(t) ∈ U. Then f ◦ σ is differentiable at t
and
d(f ◦ σ)(t)/dt = f 0 (σ(t))σ̇(t).
Example 22.8. Let us continue on with Example 22.5 but now let X = Y to
simplify the notation. So f : GL(X) → GL(X) is the map f (A) = A−1 and
    f'(A) = −L_{A⁻¹}R_{A⁻¹}, i.e. f' = −L_f R_f,

where L_A B = AB and R_A B = BA for all A, B ∈ L(X). As the reader may easily
check, the maps
A ∈ L(X) → LA , RA ∈ L(L(X))
are linear and bounded. So by the chain and the product rule we find f 00 (A) exists
for all A ∈ L(X) and
f 00 (A)B = −Lf 0 (A)B Rf − Lf Rf 0 (A)B .

More explicitly
(22.5) [f 00 (A)B] C = A−1 BA−1 CA−1 + A−1 CA−1 BA−1 .
Working inductively one shows f : GL(X) → GL(X) defined by f (A) ≡ A−1 is
C ∞.
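Eq. (22.5) can also be checked by a second-order difference quotient in finite dimensions. The sketch below is not from the text; it assumes NumPy, takes X = Rⁿ, and uses illustrative matrices A, B, C. The mixed difference of f(A) = A⁻¹ in the directions B and C approximates [f''(A)B]C.

```python
# Hypothetical finite-difference check of Eq. (22.5) for X = R^n.
import numpy as np

rng = np.random.default_rng(1)
n = 3
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))
B = rng.standard_normal((n, n))
C = rng.standard_normal((n, n))
inv = np.linalg.inv

predicted = inv(A) @ B @ inv(A) @ C @ inv(A) + inv(A) @ C @ inv(A) @ B @ inv(A)

# Second mixed difference of f(A) = A^{-1} in the directions B and C.
t = 1e-4
second_diff = (inv(A + t*B + t*C) - inv(A + t*B) - inv(A + t*C) + inv(A)) / t**2

print(np.linalg.norm(second_diff - predicted))   # small, of order t
```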
22.3. Partial Derivatives.
Definition 22.9 (Partial or Directional Derivative). Let f : U ⊂o X → Y be a
function, x0 ∈ U, and v ∈ X. We say that f is differentiable at x0 in the direction v
iff (d/dt)|_0 f(x0 + tv) =: (∂_v f)(x0) exists. We call (∂_v f)(x0) the directional or partial
derivative of f at x0 in the direction v.
Notice that if f is differentiable at x0 , then ∂v f (x0 ) exists and is equal to f 0 (x0 )v,
see Corollary 22.7.
Proposition 22.10. Let f : U ⊂o X → Y be a continuous function and D ⊂ X be
a dense subspace of X. Assume ∂v f (x) exists for all x ∈ U and v ∈ D, and there
exists a continuous function A : U → L(X, Y ) such that ∂v f (x) = A(x)v for all
v ∈ D and x ∈ U ∩ D. Then f ∈ C 1 (U, Y ) and Df = A.
Proof. Let x0 ∈ U and ε > 0 be such that B(x0, 2ε) ⊂ U and M ≡ sup{‖A(x)‖ : x ∈
B(x0, 2ε)} < ∞ (see footnote 43 below). For x ∈ B(x0, ε) ∩ D and v ∈ D ∩ B(0, ε), by the
fundamental theorem of calculus,

(22.6)    f(x + v) − f(x) = ∫_0^1 (d/dt)f(x + tv) dt = ∫_0^1 (∂_v f)(x + tv) dt = ∫_0^1 A(x + tv) v dt.

For general x ∈ B(x0, ε) and v ∈ B(0, ε), choose xn ∈ B(x0, ε) ∩ D and vn ∈
D ∩ B(0, ε) such that xn → x and vn → v. Then

(22.7)    f(xn + vn) − f(xn) = ∫_0^1 A(xn + tvn) vn dt

holds for all n. The left side of this last equation tends to f(x + v) − f(x) by the
continuity of f. For the right side of Eq. (22.7) we have

    ‖∫_0^1 A(x + tv) v dt − ∫_0^1 A(xn + tvn) vn dt‖ ≤ ∫_0^1 ‖A(x + tv) − A(xn + tvn)‖ ‖v‖ dt + M‖v − vn‖.

It now follows by the continuity of A, the fact that ‖A(x + tv) − A(xn + tvn)‖ ≤ 2M,
and the dominated convergence theorem that the right side of Eq. (22.7) converges to
∫_0^1 A(x + tv) v dt. Hence Eq. (22.6) is valid for all x ∈ B(x0, ε) and v ∈ B(0, ε). We
also see that

(22.8)    f(x + v) − f(x) − A(x)v = ε(v)v,

where ε(v) ≡ ∫_0^1 [A(x + tv) − A(x)] dt. Now

    ‖ε(v)‖ ≤ ∫_0^1 ‖A(x + tv) − A(x)‖ dt ≤ max_{t∈[0,1]} ‖A(x + tv) − A(x)‖ → 0 as v → 0

by the continuity of A. Thus we have shown that f is differentiable and that Df(x) = A(x).

43 It should be noted that, unlike in finite dimensions, closed and bounded sets need not be
compact, so it is not sufficient to choose ε sufficiently small so that B(x0, 2ε) ⊂ U. Here is a
counterexample. Let X ≡ H be a Hilbert space and {e_n}_{n=1}^∞ be an orthonormal set. Define
f(x) ≡ ∑_{n=1}^∞ n φ(‖x − e_n‖), where φ is any continuous function on R such that φ(0) = 1 and φ
is supported in (−1, 1). Notice that ‖e_n − e_m‖² = 2 for all m ≠ n, so that ‖e_n − e_m‖ = √2.
Using this fact it is rather easy to check that for any x0 ∈ H, there is an ε > 0 such that for all
x ∈ B(x0, ε), only one term in the sum defining f is non-zero. Hence f is continuous. However,
f(e_n) = n → ∞ as n → ∞.

22.4. Smooth Dependence of ODE's on Initial Conditions. In this subsection,
let X be a Banach space, U ⊂o X and J be an open interval with 0 ∈ J.
Lemma 22.11. If Z ∈ C(J × U, X) is such that D_xZ(t, x) exists for all (t, x) ∈ J × U
and D_xZ ∈ C(J × U, L(X)), then Z is locally Lipschitz in x, see Definition 5.12.

Proof. Suppose I @@ J and x ∈ U. By the continuity of D_xZ, for every t ∈ I
there is an open neighborhood N_t of t in I and ε_t > 0 such that B(x, ε_t) ⊂ U and

    sup{‖D_xZ(t', x')‖ : (t', x') ∈ N_t × B(x, ε_t)} < ∞.

By the compactness of I, there exists a finite subset Λ ⊂ I such that I ⊂ ∪_{t∈Λ} N_t.
Let ε(x, I) := min{ε_t : t ∈ Λ} and

    K(x, I) ≡ sup{‖D_xZ(t, x')‖ : (t, x') ∈ I × B(x, ε(x, I))} < ∞.

Then by the fundamental theorem of calculus and the triangle inequality,

    ‖Z(t, x1) − Z(t, x0)‖ ≤ (∫_0^1 ‖D_xZ(t, x0 + s(x1 − x0))‖ ds) ‖x1 − x0‖ ≤ K(x, I)‖x1 − x0‖

for all x0, x1 ∈ B(x, ε(x, I)) and t ∈ I.


Theorem 22.12 (Smooth Dependence of ODE's on Initial Conditions). Let X be
a Banach space, U ⊂o X, Z ∈ C(R × U, X) such that D_xZ ∈ C(R × U, L(X)) and
φ : D(Z) ⊂ R × X → X denote the maximal solution operator to the ordinary
differential equation

(22.9)    ẏ(t) = Z(t, y(t)) with y(0) = x ∈ U,

see Notation 5.15 and Theorem 5.21. Then φ ∈ C¹(D(Z), U), ∂_t D_xφ(t, x) exists
and is continuous for (t, x) ∈ D(Z), and D_xφ(t, x) satisfies the linear differential
equation

(22.10)    (d/dt) D_xφ(t, x) = [(D_xZ)(t, φ(t, x))] D_xφ(t, x) with D_xφ(0, x) = I_X

for t ∈ J_x.
Proof. Let x0 ∈ U and J be an open interval such that 0 ∈ J ⊂ J̄ @@ J_{x0},
y0 := y(·, x0)|_J and

    O_ε := {y ∈ BC(J, U) : ‖y − y0‖_∞ < ε} ⊂o BC(J, X).

By Lemma 22.11, Z is locally Lipschitz and therefore Theorem 5.21 is applicable.
By Eq. (5.30) of Theorem 5.21, there exist ε > 0 and δ > 0 such that G :
B(x0, δ) → O_ε defined by G(x) ≡ φ(·, x)|_J is continuous. By Lemma 22.13 below,
for ε > 0 sufficiently small the function F : O_ε → BC(J, X) defined by

(22.11)    F(y) ≡ y − ∫_0^· Z(t, y(t)) dt

is C¹ and

(22.12)    DF(y)v = v − ∫_0^· D_xZ(t, y(t)) v(t) dt.

By the existence and uniqueness Theorem 5.5 for linear ordinary differential
equations, DF(y) is invertible for any y ∈ BC(J, U). By the definition of φ,
F(G(x)) = h(x) for all x ∈ B(x0, δ), where h : X → BC(J, X) is defined by
h(x)(t) = x for all t ∈ J, i.e. h(x) is the constant path at x. Since h is a bounded
linear map, h is smooth and Dh(x) = h for all x ∈ X. We may now apply the
converse to the chain rule in Theorem 22.6 to conclude G ∈ C¹(B(x0, δ), O_ε) and
DG(x) = [DF(G(x))]⁻¹ Dh(x), or equivalently, DF(G(x))DG(x) = h, which in turn
is equivalent to

    D_xφ(t, x) − ∫_0^t [D_xZ(τ, φ(τ, x))] D_xφ(τ, x) dτ = I_X.

As usual this equation implies D_xφ(t, x) is differentiable in t, D_xφ(t, x) is continuous
in (t, x) and D_xφ(t, x) satisfies Eq. (22.10).
Lemma 22.13. Continuing the notation used in the proof of Theorem 22.12, further let

    f(y) ≡ ∫_0^· Z(τ, y(τ)) dτ for y ∈ O_ε.

Then f ∈ C¹(O_ε, BC(J, X)) and for all y ∈ O_ε,

    f'(y)h = ∫_0^· D_xZ(τ, y(τ)) h(τ) dτ =: Λ_y h.

Proof. Let h ∈ BC(J, X) be sufficiently small and τ ∈ J; then by the fundamental
theorem of calculus,

    Z(τ, y(τ) + h(τ)) − Z(τ, y(τ)) = ∫_0^1 D_xZ(τ, y(τ) + rh(τ)) h(τ) dr

and therefore,

    (f(y + h) − f(y) − Λ_y h)(t) = ∫_0^t [Z(τ, y(τ) + h(τ)) − Z(τ, y(τ)) − D_xZ(τ, y(τ)) h(τ)] dτ
                                 = ∫_0^t dτ ∫_0^1 dr [D_xZ(τ, y(τ) + rh(τ)) − D_xZ(τ, y(τ))] h(τ).

Therefore,

(22.13)    ‖f(y + h) − f(y) − Λ_y h‖_∞ ≤ ‖h‖_∞ δ(h)

where

    δ(h) := ∫_J dτ ∫_0^1 dr ‖D_xZ(τ, y(τ) + rh(τ)) − D_xZ(τ, y(τ))‖.

With the aid of Lemmas 22.11 and 5.13,

    (r, τ, h) ∈ [0, 1] × J × BC(J, X) → ‖D_xZ(τ, y(τ) + rh(τ))‖

is bounded for small h provided ε > 0 is sufficiently small. Thus it follows from the
dominated convergence theorem that δ(h) → 0 as h → 0 and hence Eq. (22.13)
implies f'(y) exists and is given by Λ_y. Similarly,

    ‖f'(y + h) − f'(y)‖_op ≤ ∫_J ‖D_xZ(τ, y(τ) + h(τ)) − D_xZ(τ, y(τ))‖ dτ → 0 as h → 0,

showing f' is continuous.
Remark 22.14. If Z ∈ C^k(U, X), then an inductive argument shows that φ ∈
C^k(D(Z), X). For example if Z ∈ C²(U, X) then (y(t), u(t)) := (φ(t, x), D_xφ(t, x))
solves the ODE,

    (d/dt)(y(t), u(t)) = Z̃((y(t), u(t))) with (y(0), u(0)) = (x, Id_X),

where Z̃ is the C¹ vector field defined by

    Z̃(x, u) = (Z(x), D_xZ(x)u).

Therefore Theorem 22.12 may be applied to this equation to deduce: D_x²φ(t, x) and
D_x²φ̇(t, x) exist and are continuous. We may now differentiate Eq. (22.10) to find
that D_x²φ(t, x) satisfies the ODE,

    (d/dt) D_x²φ(t, x) = [(∂_{D_xφ(t,x)} D_xZ)(t, φ(t, x))] D_xφ(t, x) + [(D_xZ)(t, φ(t, x))] D_x²φ(t, x)

with D_x²φ(0, x) = 0.
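In finite dimensions Eq. (22.10) is the classical variational equation and can be checked numerically. The following sketch is not part of the original text; it assumes NumPy/SciPy, takes X = R², and uses an autonomous, illustrative vector field Z. It integrates the flow together with Eq. (22.10) and compares D_xφ(T, x) with a finite-difference Jacobian of x → φ(T, x).

```python
# Hypothetical finite-dimensional illustration of Theorem 22.12.
import numpy as np
from scipy.integrate import solve_ivp

def Z(x):
    return np.array([np.sin(x[1]), np.cos(x[0])])

def DxZ(x):  # Jacobian of Z
    return np.array([[0.0, np.cos(x[1])], [-np.sin(x[0]), 0.0]])

def flow(x0, T=1.0):
    sol = solve_ivp(lambda t, x: Z(x), (0.0, T), x0, rtol=1e-10, atol=1e-12)
    return sol.y[:, -1]

def flow_and_derivative(x0, T=1.0):
    # Solve y' = Z(y) and U' = DxZ(y) U jointly; U is stored row-major in w[2:].
    def rhs(t, w):
        y, U = w[:2], w[2:].reshape(2, 2)
        return np.concatenate([Z(y), (DxZ(y) @ U).ravel()])
    w0 = np.concatenate([x0, np.eye(2).ravel()])
    sol = solve_ivp(rhs, (0.0, T), w0, rtol=1e-10, atol=1e-12)
    return sol.y[:2, -1], sol.y[2:, -1].reshape(2, 2)

x0 = np.array([0.3, -0.2])
y1, U = flow_and_derivative(x0)          # U approximates D_x phi(T, x0)

# Finite-difference Jacobian of x -> phi(T, x) for comparison.
h = 1e-6
J = np.column_stack([(flow(x0 + h*e) - flow(x0 - h*e)) / (2*h) for e in np.eye(2)])
print(np.linalg.norm(J - U))             # small (finite-difference error only)
```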
22.5. Higher Order Derivatives. As above, let f : U ⊂o X → Y be a function.
If f is differentiable on U, then the differential Df of f is a function from U to
the Banach space L(X, Y). If the function Df : U → L(X, Y) is also differentiable
on U, then its differential is D²f = D(Df) : U → L(X, L(X, Y)). Similarly,
D³f = D(D(Df)) : U → L(X, L(X, L(X, Y))) if the differential of D(Df) exists.
In general, let L¹(X, Y) ≡ L(X, Y) and L^k(X, Y) be defined inductively by
L^{k+1}(X, Y) = L(X, L^k(X, Y)). Then (D^k f)(x) ∈ L^k(X, Y) if it exists. It will be
convenient to identify the space L^k(X, Y) with the Banach space defined in the
next definition.
Definition 22.15. For k ∈ {1, 2, 3, . . .}, let M^k(X, Y) denote the set of functions
f : X^k → Y such that
(1) For each i ∈ {1, 2, . . . , k}, the map v ∈ X → f⟨v1, v2, . . . , v_{i−1}, v, v_{i+1}, . . . , vk⟩ ∈ Y
    is linear⁴⁴ for all {vi}_{i=1}^k ⊂ X.
(2) The norm ‖f‖_{M^k(X,Y)} is finite, where

    ‖f‖_{M^k(X,Y)} ≡ sup{ ‖f⟨v1, v2, . . . , vk⟩‖_Y / (‖v1‖‖v2‖ · · · ‖vk‖) : {vi}_{i=1}^k ⊂ X \ {0} }.

⁴⁴ I will routinely write f⟨v1, v2, . . . , vk⟩ rather than f(v1, v2, . . . , vk) when the function f
depends on each of its variables linearly, i.e. when f is a multi-linear function.
Lemma 22.16. There are linear operators j_k : L^k(X, Y) → M^k(X, Y) defined inductively
as follows: j_1 = Id_{L(X,Y)} (notice that M¹(X, Y) = L¹(X, Y) = L(X, Y)) and

    (j_{k+1}A)⟨v0, v1, . . . , vk⟩ = (j_k(Av0))⟨v1, v2, . . . , vk⟩  ∀ vi ∈ X.

(Notice that Av0 ∈ L^k(X, Y).) Moreover, the maps j_k are isometric isomorphisms.

Proof. To get a feeling for what j_k is, let us write out j_2 and j_3 explicitly. If A ∈
L²(X, Y) = L(X, L(X, Y)), then (j_2A)⟨v1, v2⟩ = (Av1)v2, and if A ∈ L³(X, Y) =
L(X, L(X, L(X, Y))), then (j_3A)⟨v1, v2, v3⟩ = ((Av1)v2)v3 for all vi ∈ X.
It is easily checked that j_k is linear for all k. We will now show by induction that
j_k is an isometry and in particular that j_k is injective. Clearly this is true if k = 1
since j_1 is the identity map. For A ∈ L^{k+1}(X, Y),

    ‖j_{k+1}A‖_{M^{k+1}(X,Y)} ≡ sup{ ‖(j_k(Av0))⟨v1, v2, . . . , vk⟩‖_Y / (‖v0‖‖v1‖‖v2‖ · · · ‖vk‖) : {vi}_{i=0}^k ⊂ X \ {0} }
                              = sup{ ‖j_k(Av0)‖_{M^k(X,Y)} / ‖v0‖ : v0 ∈ X \ {0} }
                              = sup{ ‖Av0‖_{L^k(X,Y)} / ‖v0‖ : v0 ∈ X \ {0} }
                              = ‖A‖_{L(X,L^k(X,Y))} ≡ ‖A‖_{L^{k+1}(X,Y)},

wherein the second to last equality we have used the induction hypothesis. This
shows that j_{k+1} is an isometry provided j_k is an isometry.
To finish the proof it suffices to show that j_k is surjective for all k. Again this is
true for k = 1. Suppose that j_k is invertible for some k ≥ 1. Given f ∈ M^{k+1}(X, Y)
we must produce A ∈ L^{k+1}(X, Y) = L(X, L^k(X, Y)) such that j_{k+1}A = f. If such
an equation is to hold, then for v0 ∈ X we would have j_k(Av0) = f⟨v0, · · ·⟩, that
is, Av0 = j_k⁻¹(f⟨v0, · · ·⟩). It is easily checked that A so defined is linear, bounded,
and j_{k+1}A = f.
From now on we will identify L^k with M^k without further mention. In particular,
we will view D^k f as a function on U with values in M^k(X, Y).
Theorem 22.17 (Differentiability). Suppose k ∈ {1, 2, . . .}, D is a dense
subspace of X, and f : U ⊂o X → Y is a function such that (∂_{v1}∂_{v2} · · · ∂_{vl}f)(x)
exists for all x ∈ D ∩ U, {vi}_{i=1}^l ⊂ D, and l = 1, 2, . . . , k. Further assume
there exist continuous functions A_l : U ⊂o X → M^l(X, Y) such
that (∂_{v1}∂_{v2} · · · ∂_{vl}f)(x) = A_l(x)⟨v1, v2, . . . , vl⟩ for all x ∈ D ∩ U, {vi}_{i=1}^l ⊂ D,
and l = 1, 2, . . . , k. Then D^l f(x) exists and is equal to A_l(x) for all x ∈ U and
l = 1, 2, . . . , k.
Proof. We will prove the theorem by induction on k. We have already proved
the theorem when k = 1, see Proposition 22.10. Now suppose that k > 1 and that
the statement of the theorem holds when k is replaced by k − 1. Hence we know
that Dl f (x) = Al (x) for all x ∈ U and l = 1, 2, . . . , k − 1. We are also given that
(22.14) (∂v1 ∂v2 · · · ∂vk f )(x) = Ak (x)hv1 , v2 , . . . , vk i ∀x ∈ U ∩ D, {vi } ⊂ D.
Now we may write (∂v2 · · · ∂vk f )(x) as (Dk−1 f )(x)hv2 , v3 , . . . , vk i so that Eq.
(22.14) may be written as
(22.15)    ∂_{v1}[(D^{k−1}f)(x)⟨v2, v3, . . . , vk⟩] = A_k(x)⟨v1, v2, . . . , vk⟩ ∀ x ∈ U ∩ D, {vi} ⊂ D.

So by the fundamental theorem of calculus, we have that

(22.16)    ((D^{k−1}f)(x + v1) − (D^{k−1}f)(x))⟨v2, v3, . . . , vk⟩ = ∫_0^1 A_k(x + tv1)⟨v1, v2, . . . , vk⟩ dt
for all x ∈ U ∩ D and {vi } ⊂ D with v1 sufficiently small. By the same argument
given in the proof of Proposition 22.10, Eq. (22.16) remains valid for all x ∈ U and
{vi } ⊂ X with v1 sufficiently small. We may write this last equation alternatively
as,
(22.17)    (D^{k−1}f)(x + v1) − (D^{k−1}f)(x) = ∫_0^1 A_k(x + tv1)⟨v1, · · ·⟩ dt.
Hence

    (D^{k−1}f)(x + v1) − (D^{k−1}f)(x) − A_k(x)⟨v1, · · ·⟩ = ∫_0^1 [A_k(x + tv1) − A_k(x)]⟨v1, · · ·⟩ dt,

from which we get the estimate,

(22.18)    ‖(D^{k−1}f)(x + v1) − (D^{k−1}f)(x) − A_k(x)⟨v1, · · ·⟩‖ ≤ ε(v1)‖v1‖

where ε(v1) ≡ ∫_0^1 ‖A_k(x + tv1) − A_k(x)‖ dt. Notice by the continuity of A_k that
ε(v1) → 0 as v1 → 0. Thus it follows from Eq. (22.18) that D^{k−1}f is differentiable
and that (D^k f)(x) = A_k(x).
Example 22.18. Let f : L^*(X, Y) → L^*(Y, X) be defined by f(B) ≡ B⁻¹. We
assume that L^*(X, Y) is not empty. Then f is infinitely differentiable and

(22.19)    (D^k f)(B)⟨V1, V2, . . . , Vk⟩ = (−1)^k ∑_σ {B⁻¹V_{σ(1)}B⁻¹V_{σ(2)}B⁻¹ · · · B⁻¹V_{σ(k)}B⁻¹},

where the sum is over all permutations σ of {1, 2, . . . , k}.

Let me check Eq. (22.19) in the case that k = 2. Notice that we have already
shown that (∂_{V1}f)(B) = Df(B)V1 = −B⁻¹V1B⁻¹. Using the product rule we find
that

    (∂_{V2}∂_{V1}f)(B) = B⁻¹V2B⁻¹V1B⁻¹ + B⁻¹V1B⁻¹V2B⁻¹ =: A_2(B)⟨V1, V2⟩.

Notice that ‖A_2(B)⟨V1, V2⟩‖ ≤ 2‖B⁻¹‖³‖V1‖ · ‖V2‖, so that ‖A_2(B)‖ ≤ 2‖B⁻¹‖³ <
∞. Hence A_2 : L^*(X, Y) → M²(L(X, Y), L(Y, X)). Also

    ‖(A_2(B) − A_2(C))⟨V1, V2⟩‖ ≤ 2‖B⁻¹V2B⁻¹V1B⁻¹ − C⁻¹V2C⁻¹V1C⁻¹‖
                                ≤ 2‖B⁻¹V2B⁻¹V1B⁻¹ − B⁻¹V2B⁻¹V1C⁻¹‖
                                  + 2‖B⁻¹V2B⁻¹V1C⁻¹ − B⁻¹V2C⁻¹V1C⁻¹‖
                                  + 2‖B⁻¹V2C⁻¹V1C⁻¹ − C⁻¹V2C⁻¹V1C⁻¹‖
                                ≤ 2‖B⁻¹‖²‖V2‖‖V1‖‖B⁻¹ − C⁻¹‖
                                  + 2‖B⁻¹‖‖C⁻¹‖‖V2‖‖V1‖‖B⁻¹ − C⁻¹‖
                                  + 2‖C⁻¹‖²‖V2‖‖V1‖‖B⁻¹ − C⁻¹‖.

This shows that

    ‖A_2(B) − A_2(C)‖ ≤ 2‖B⁻¹ − C⁻¹‖{‖B⁻¹‖² + ‖B⁻¹‖‖C⁻¹‖ + ‖C⁻¹‖²}.

Since B → B⁻¹ is differentiable and hence continuous, it follows that A_2(B) is
also continuous in B. Hence by Theorem 22.17, D²f(B) exists and is given as in Eq.
(22.19).
Example 22.19. Suppose that f : R → R is a C^∞ function and
F(x) ≡ ∫_0^1 f(x(t)) dt for x ∈ X ≡ C([0, 1], R) equipped with the norm ‖x‖ ≡
max_{t∈[0,1]} |x(t)|. Then F : X → R is also infinitely differentiable and

(22.20)    (D^k F)(x)⟨v1, v2, . . . , vk⟩ = ∫_0^1 f^{(k)}(x(t)) v1(t) · · · vk(t) dt,

for all x ∈ X and {vi} ⊂ X.


To verify this example, notice that

    (∂_v F)(x) ≡ (d/ds)|_0 F(x + sv) = (d/ds)|_0 ∫_0^1 f(x(t) + sv(t)) dt
              = ∫_0^1 (d/ds)|_0 f(x(t) + sv(t)) dt = ∫_0^1 f'(x(t)) v(t) dt.

Similar computations show that

    (∂_{v1}∂_{v2} · · · ∂_{vk}F)(x) = ∫_0^1 f^{(k)}(x(t)) v1(t) · · · vk(t) dt =: A_k(x)⟨v1, v2, . . . , vk⟩.

Now for x, y ∈ X,

    |A_k(x)⟨v1, v2, . . . , vk⟩ − A_k(y)⟨v1, v2, . . . , vk⟩| ≤ ∫_0^1 |f^{(k)}(x(t)) − f^{(k)}(y(t))| · |v1(t) · · · vk(t)| dt
                                                          ≤ ∏_{i=1}^k ‖vi‖ ∫_0^1 |f^{(k)}(x(t)) − f^{(k)}(y(t))| dt,

which shows that

    ‖A_k(x) − A_k(y)‖ ≤ ∫_0^1 |f^{(k)}(x(t)) − f^{(k)}(y(t))| dt.

This last expression is easily seen to go to zero as y → x in X. Hence A_k is
continuous. Thus we may apply Theorem 22.17 to conclude that Eq. (22.20) is
valid.
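For a concrete check of Eq. (22.20) with k = 1, one can discretize [0, 1] and compare a difference quotient of F with ∫_0^1 f'(x(t))v(t) dt. The following sketch is not part of the text; the grid, f, x and v are illustrative assumptions and integrals are replaced by the trapezoidal rule.

```python
# Hypothetical discretized check of Eq. (22.20) with k = 1.
import numpy as np

t = np.linspace(0.0, 1.0, 2001)
dt = t[1] - t[0]
integral = lambda y: np.sum((y[:-1] + y[1:]) / 2) * dt   # trapezoidal rule on the grid

f, fprime = np.sin, np.cos          # a smooth f and its derivative
x = t**2                            # a path x in C([0, 1], R)
v = np.cos(3 * t)                   # a direction v in C([0, 1], R)

F = lambda path: integral(f(path))  # F(x) = int_0^1 f(x(t)) dt

s = 1e-6
directional = (F(x + s * v) - F(x - s * v)) / (2 * s)    # ~ (d/ds)|_0 F(x + s v)
predicted = integral(fprime(x) * v)                      # int_0^1 f'(x(t)) v(t) dt

print(abs(directional - predicted))                      # agreement up to O(s^2) + rounding
```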
22.6. Contraction Mapping Principle.
Theorem 22.20. Suppose that (X, ρ) is a complete metric space and S : X → X
is a contraction, i.e. there exists α ∈ (0, 1) such that ρ(S(x), S(y)) ≤ αρ(x, y) for
all x, y ∈ X. Then S has a unique fixed point in X, i.e. there exists a unique point
x ∈ X such that S(x) = x.
Proof. For uniqueness suppose that x and x0 are two fixed points of S, then
ρ(x, x0 ) = ρ(S(x), S(x0 )) ≤ αρ(x, x0 ).
Therefore (1 − α)ρ(x, x0 ) ≤ 0 which implies that ρ(x, x0 ) = 0 since 1 − α > 0. Thus
x = x0 .
For existence, let x0 ∈ X be any point in X and define xn ∈ X inductively by
xn+1 = S(xn ) for n ≥ 0. We will show that x ≡ limn→∞ xn exists in X and because
S is continuous this will imply,
    x = lim_{n→∞} x_{n+1} = lim_{n→∞} S(x_n) = S(lim_{n→∞} x_n) = S(x),
showing x is a fixed point of S.
So to finish the proof, because X is complete, it suffices to show {x_n}_{n=1}^∞ is a
Cauchy sequence in X. An easy inductive computation shows, for n ≥ 0, that

    ρ(x_{n+1}, x_n) = ρ(S(x_n), S(x_{n−1})) ≤ αρ(x_n, x_{n−1}) ≤ · · · ≤ αⁿρ(x_1, x_0).

Another inductive argument using the triangle inequality shows, for m > n, that

    ρ(x_m, x_n) ≤ ρ(x_m, x_{m−1}) + ρ(x_{m−1}, x_n) ≤ · · · ≤ ∑_{k=n}^{m−1} ρ(x_{k+1}, x_k).
k=n
Combining the last two inequalities gives (using again that α ∈ (0, 1)),

    ρ(x_m, x_n) ≤ ∑_{k=n}^{m−1} α^k ρ(x_1, x_0) ≤ ρ(x_1, x_0) αⁿ ∑_{l=0}^∞ α^l = ρ(x_1, x_0) αⁿ/(1 − α).

This last equation shows that ρ(x_m, x_n) → 0 as m, n → ∞, i.e. {x_n}_{n=0}^∞ is a
Cauchy sequence.
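The proof is constructive: the fixed point is the limit of the iterates x_{n+1} = S(x_n), with the geometric error bound ρ(x_m, x_n) ≤ ρ(x_1, x_0)αⁿ/(1 − α). A minimal sketch of this iteration (not part of the text), taking the complete metric space X = [0, 1] ⊂ R and an illustrative contraction:

```python
# Hypothetical illustration of the iteration in the proof of Theorem 22.20.
import math

def fixed_point(S, x0, tol=1e-12, max_iter=10_000):
    """Iterate x_{n+1} = S(x_n) until successive iterates are within tol."""
    x = x0
    for _ in range(max_iter):
        x_next = S(x)
        if abs(x_next - x) <= tol:
            return x_next
        x = x_next
    raise RuntimeError("iteration did not converge")

S = lambda x: math.cos(x)      # a contraction on [0, 1]: |S'| = |sin| <= sin(1) < 1
x = fixed_point(S, 0.5)
print(x, abs(S(x) - x))        # the unique fixed point of cos; residual ~ 1e-12
```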
Corollary 22.21 (Contraction Mapping Principle II). Suppose that (X, ρ) is a
complete metric space and S : X −→ X is a continuous map such that S (n) is a
contraction for some n ∈ N. Here

    S^{(n)} ≡ S ◦ S ◦ · · · ◦ S  (n times),

and we are assuming there exists α ∈ (0, 1) such that ρ(S^{(n)}(x), S^{(n)}(y)) ≤ αρ(x, y)
for all x, y ∈ X. Then S has a unique fixed point in X.
Proof. Let T ≡ S (n) , then T : X − → X is a contraction and hence T has a
unique fixed point x ∈ X. Since any fixed point of S is also a fixed point of T, we
see if S has a fixed point then it must be x. Now
T (S(x)) = S (n) (S(x)) = S(S (n) (x)) = S(T (x)) = S(x),
which shows that S(x) is also a fixed point of T. Since T has only one fixed point,
we must have that S(x) = x. So we have shown that x is a fixed point of S and
this fixed point is unique.
Lemma 22.22. Suppose that (X, ρ) is a complete metric space, n ∈ N, Z is a
topological space, and α ∈ (0, 1). Suppose for each z ∈ Z there is a map Sz : X → X
with the following properties:
Contraction property: ρ(S_z^{(n)}(x), S_z^{(n)}(y)) ≤ αρ(x, y) for all x, y ∈ X and
z ∈ Z.
Continuity in z: For each x ∈ X the map z ∈ Z → Sz (x) ∈ X is continu-
ous.
By Corollary 22.21 above, for each z ∈ Z there is a unique fixed point G(z) ∈ X
of Sz .
Conclusion: The map G : Z → X is continuous.
Proof. Let T_z ≡ S_z^{(n)}. If z, w ∈ Z, then
ρ(G(z), G(w)) = ρ(Tz (G(z)), Tw (G(w)))
≤ ρ(Tz (G(z)), Tw (G(z))) + ρ(Tw (G(z)), Tw (G(w)))
≤ ρ(Tz (G(z)), Tw (G(z))) + αρ(G(z), G(w)).
Solving this inequality for ρ(G(z), G(w)) gives

    ρ(G(z), G(w)) ≤ (1 − α)⁻¹ ρ(T_z(G(z)), T_w(G(z))).
Since w → Tw (G(z)) is continuous it follows from the above equation that G(w) →
G(z) as w → z, i.e. G is continuous.

22.7. Inverse and Implicit Function Theorems. In this section, let X be a
Banach space, U ⊂ X be an open set, and F : U → X and ε : U → X be
continuous functions. Question: under what conditions on ε is F(x) := x + ε(x)
a homeomorphism from B_0(δ) to F(B_0(δ)) for some small δ > 0? Let's start by
looking at the one dimensional case first. So for the moment assume that X = R,
U = (−1, 1), and ε : U → R is C¹. Then F will be one to one iff F is monotonic.
This will be the case, for example, if F' = 1 + ε' > 0. This in turn is guaranteed
by assuming that |ε'| ≤ α < 1. (This last condition makes sense on a Banach space
whereas assuming 1 + ε' > 0 is not as easily interpreted.)
Lemma 22.23. Suppose that U = B = B(0, r) (r > 0) is a ball in X and ε : B → X
is a C¹ function such that ‖Dε‖ ≤ α < ∞ on U. Then for all x, y ∈ U we have:

(22.21)    ‖ε(x) − ε(y)‖ ≤ α‖x − y‖.

Proof. By the fundamental theorem of calculus and the chain rule:

    ε(y) − ε(x) = ∫_0^1 (d/dt) ε(x + t(y − x)) dt = ∫_0^1 [Dε(x + t(y − x))](y − x) dt.

Therefore, by the triangle inequality and the assumption that ‖Dε(x)‖ ≤ α on B,

    ‖ε(y) − ε(x)‖ ≤ ∫_0^1 ‖Dε(x + t(y − x))‖ dt · ‖y − x‖ ≤ α‖y − x‖.

Remark 22.24. It is easily checked that if ε : B = B(0, r) → X is C¹ and satisfies
(22.21) then ‖Dε‖ ≤ α on B.
Using the above remark and the analogy to the one dimensional example, one is
led to the following proposition.
Proposition 22.25. Suppose that U = B = B(0, r) (r > 0) is a ball in X, α ∈
(0, 1), ε : U → X is continuous, F(x) ≡ x + ε(x) for x ∈ U, and ε satisfies:

(22.22)    ‖ε(x) − ε(y)‖ ≤ α‖x − y‖ ∀ x, y ∈ B.

Then F(B) is open in X and F : B → V := F(B) is a homeomorphism.

Proof. First notice from (22.22) that

    ‖x − y‖ = ‖(F(x) − F(y)) − (ε(x) − ε(y))‖
            ≤ ‖F(x) − F(y)‖ + ‖ε(x) − ε(y)‖
            ≤ ‖F(x) − F(y)‖ + α‖x − y‖,

from which it follows that ‖x − y‖ ≤ (1 − α)⁻¹‖F(x) − F(y)‖. Thus F is injective
on B. Let V := F(B) and G := F⁻¹ : V → B denote the inverse function, which
exists since F is injective.
We will now show that V is open. For this let x0 ∈ B and z0 = F(x0) =
x0 + ε(x0) ∈ V. We wish to show for z close to z0 that there is an x ∈ B such that
F(x) = x + ε(x) = z, or equivalently x = z − ε(x). Set S_z(x) := z − ε(x); then we are
looking for x ∈ B such that x = S_z(x), i.e. we want to find a fixed point of S_z. We
will show that such a fixed point exists by using the contraction mapping theorem.
Step 1. S_z is contractive for all z ∈ X. In fact for x, y ∈ B,

(22.23)    ‖S_z(x) − S_z(y)‖ = ‖ε(x) − ε(y)‖ ≤ α‖x − y‖.

Step 2. For any δ > 0 such that C := B̄(x0, δ) ⊂ B and z ∈ X such that
‖z − z0‖ < (1 − α)δ, we have S_z(C) ⊂ C. Indeed, let x ∈ C and compute:

    ‖S_z(x) − x0‖ = ‖S_z(x) − S_{z0}(x0)‖
                 = ‖z − ε(x) − (z0 − ε(x0))‖
                 = ‖z − z0 − (ε(x) − ε(x0))‖
                 ≤ ‖z − z0‖ + α‖x − x0‖
                 < (1 − α)δ + αδ = δ,

wherein we have used z0 = F(x0) and (22.22).
Since C is a closed subset of the Banach space X, we may apply the contraction
mapping principle, Theorem 22.20 and Lemma 22.22, to S_z to show there is a
continuous function G : B(z0, (1 − α)δ) → C such that

    G(z) = S_z(G(z)) = z − ε(G(z)) = z − F(G(z)) + G(z),

i.e. F(G(z)) = z. This shows that B(z0, (1 − α)δ) ⊂ F(C) ⊂ F(B) = V. That is,
z0 is in the interior of V. Since F⁻¹|_{B(z0,(1−α)δ)} is necessarily equal to G, which is
continuous, we have also shown that F⁻¹ is continuous in a neighborhood of z0.
Since z0 ∈ V was arbitrary, we have shown that V is open and that F⁻¹ : V → U
is continuous.
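Numerically, the proof's fixed-point scheme inverts F = I + ε directly. The sketch below is not part of the text; it assumes NumPy, takes X = R², and uses an illustrative ε with Lipschitz constant α = 0.3, recovering G(z) = F⁻¹(z) by iterating S_z.

```python
# Hypothetical numerical illustration of the proof of Proposition 22.25 in R^2.
import numpy as np

alpha = 0.3
eps = lambda x: alpha * np.sin(x)     # componentwise; Lipschitz constant <= alpha < 1
F = lambda x: x + eps(x)

def invert_F(z, n_iter=200):
    x = np.zeros_like(z)              # any starting point works
    for _ in range(n_iter):
        x = z - eps(x)                # x_{n+1} = S_z(x_n)
    return x

z = np.array([0.7, -1.2])             # a point in V = F(B)
x = invert_F(z)
print(np.linalg.norm(F(x) - z))       # ~ machine precision: F(G(z)) = z
```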
Theorem 22.26 (Inverse Function Theorem). Suppose X and Y are Banach
spaces, U ⊂o X, f ∈ C k (U → X) with k ≥ 1, x0 ∈ U and Df (x0 ) is invert-
ible. Then there is a ball B = B(x0 , r) in U centered at x0 such that
(1) V = f (B) is open,
(2) f|_B : B → V is a homeomorphism,
(3) g := (f|_B)⁻¹ ∈ C^k(V, B) and

(22.24)    g'(y) = [f'(g(y))]⁻¹ for all y ∈ V.
Proof. Define F(x) ≡ [Df(x0)]⁻¹ f(x + x0) and ε(x) ≡ F(x) − x ∈ X for
x ∈ (U − x0). Notice that 0 ∈ U − x0, DF(0) = I, and that Dε(0) = I − I = 0.
Choose r > 0 such that B̃ ≡ B(0, r) ⊂ U − x0 and ‖Dε(x)‖ ≤ 1/2 for x ∈ B̃. By
Lemma 22.23, ε satisfies (22.22) with α = 1/2. By Proposition 22.25, F(B̃) is open
and F|_{B̃} : B̃ → F(B̃) is a homeomorphism. Let G ≡ F|_{B̃}⁻¹, which we know to be a
continuous map from F(B̃) to B̃.
Since ‖Dε(x)‖ ≤ 1/2 for x ∈ B̃, DF(x) = I + Dε(x) is invertible, see Corollary
3.70. Since H(z) := z is C¹ and H = F ◦ G on F(B̃), it follows from the converse
to the chain rule, Theorem 22.6, that G is differentiable and

    DG(z) = [DF(G(z))]⁻¹ DH(z) = [DF(G(z))]⁻¹.

Since G, DF, and the map A ∈ GL(X) → A⁻¹ ∈ GL(X) are all continuous maps
(see Example 22.5), the map z ∈ F(B̃) → DG(z) ∈ L(X) is also continuous, i.e. G
is C¹.
Let B = B̃ + x0 = B(x0, r) ⊂ U. Since f(x) = [Df(x0)]F(x − x0) and Df(x0) is
invertible (hence an open mapping), V := f(B) = [Df(x0)]F(B̃) is open in X. It
is also easily checked that f|_B⁻¹ exists and is given by

(22.25)    f|_B⁻¹(y) = x0 + G([Df(x0)]⁻¹ y)

for y ∈ V = f(B). This shows that f|_B : B → V is a homeomorphism and it follows
from (22.25) that g := (f|_B)⁻¹ ∈ C¹(V, B). Eq. (22.24) now follows from the chain
rule and the fact that

    f ◦ g(y) = y for all y ∈ V.
Since f' ∈ C^{k−1}(B, L(X)) and i(A) := A⁻¹ is a smooth map by Example 22.18,
g' = i ◦ f' ◦ g is C¹ if k ≥ 2, i.e. g is C² if k ≥ 2. Again using g' = i ◦ f' ◦ g, we may
conclude g' is C² if k ≥ 3, i.e. g is C³ if k ≥ 3. Continuing to bootstrap our way
up we eventually learn g := (f|_B)⁻¹ ∈ C^k(V, B) if f is C^k.
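Unwinding the definitions of F, ε and S_z in the proof, solving f(u) = y near x0 amounts to the frozen-derivative iteration u_{n+1} = u_n + Df(x0)⁻¹(y − f(u_n)). The following finite-dimensional sketch is not part of the text; it assumes NumPy and an illustrative f on R².

```python
# Hypothetical illustration of the fixed-point construction behind Theorem 22.26.
import numpy as np

def f(u):
    return np.array([u[0] + 0.1 * np.sin(u[1]), u[1] + 0.1 * u[0]**2])

def Df(u):  # Jacobian of f
    return np.array([[1.0, 0.1 * np.cos(u[1])], [0.2 * u[0], 1.0]])

x0 = np.array([0.2, -0.1])
A_inv = np.linalg.inv(Df(x0))          # Df(x0)^{-1}, computed once and "frozen"

y = f(x0) + np.array([0.05, -0.03])    # a target value near y0 = f(x0)
u = x0.copy()
for _ in range(100):
    u = u + A_inv @ (y - f(u))         # u_{n+1} = u_n + Df(x0)^{-1}(y - f(u_n))

print(np.linalg.norm(f(u) - y))        # ~ machine precision, so u approximates f|_B^{-1}(y)
```

The contraction constant of this scheme is the α of the proof, so convergence is geometric (unlike Newton's method, which re-evaluates the derivative at every step).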
Theorem 22.27 (Implicit Function Theorem). Now suppose that X, Y, and W
are three Banach spaces, k ≥ 1, A ⊂ X × Y is an open set, (x0, y0) is a
point in A, and f : A → W is a C^k map such that f(x0, y0) = 0. Assume that
D_2f(x0, y0) ≡ D(f(x0, ·))(y0) : Y → W is a bounded invertible linear transformation.
Then there is an open neighborhood U0 of x0 in X such that for all connected
open neighborhoods U of x0 contained in U0, there is a unique continuous function
u : U → Y such that u(x0) = y0, (x, u(x)) ∈ A and f(x, u(x)) = 0 for all x ∈ U.
Moreover u is necessarily C^k and

(22.26)    Du(x) = −D_2f(x, u(x))⁻¹ D_1f(x, u(x)) for all x ∈ U.
Proof. By replacing f by (x, y) → D_2f(x0, y0)⁻¹ f(x, y) if
necessary, we may assume without loss of generality that W = Y and D_2f(x0, y0) =
I_Y. Define F : A → X × Y by F(x, y) ≡ (x, f(x, y)) for all (x, y) ∈ A. Notice that

    DF(x, y) = [ I   D_1f(x, y) ]
               [ 0   D_2f(x, y) ],

which is invertible iff D_2f(x, y) is invertible, and if D_2f(x, y) is invertible then

    DF(x, y)⁻¹ = [ I   −D_1f(x, y)D_2f(x, y)⁻¹ ]
                 [ 0    D_2f(x, y)⁻¹           ].

Since D_2f(x0, y0) = I is invertible, the inverse function theorem guarantees that
there exist neighborhoods U0 of x0 and V0 of y0 such that U0 × V0 ⊂ A, F(U0 × V0)
is open in X × Y, and F|_{(U0×V0)} has a C^k inverse which we call F⁻¹. Let π_2(x, y) ≡ y
for all (x, y) ∈ X × Y and define the C^k function u0 on U0 by u0(x) ≡ π_2 ◦ F⁻¹(x, 0).
Since F⁻¹(x, 0) = (x̃, u0(x)) iff (x, 0) = F(x̃, u0(x)) = (x̃, f(x̃, u0(x))), it follows
that x = x̃ and f(x, u0(x)) = 0. Thus (x, u0(x)) = F⁻¹(x, 0) ∈ U0 × V0 ⊂ A and
f(x, u0(x)) = 0 for all x ∈ U0. Moreover, u0 is C^k being the composition of the C^k
functions x → (x, 0), F⁻¹, and π_2. So if U ⊂ U0 is a connected set containing x0,
we may define u ≡ u0 |U to show the existence of the functions u as described in
the statement of the theorem. The only statement left to prove is the uniqueness
of such a function u.
Suppose that u1 : U → Y is another continuous function such that u1 (x0 ) = y0 ,
and (x, u1 (x)) ∈ A and f (x, u1 (x)) = 0 for all x ∈ U. Let
O ≡ {x ∈ U |u(x) = u1 (x)} = {x ∈ U |u0 (x) = u1 (x)}.
Clearly O is a (relatively) closed subset of U which is not empty since x0 ∈ O.
Because U is connected, if we show that O is also an open set we will have shown

that O = U or equivalently that u1 = u0 on U. So suppose that x ∈ O, i.e.


u0 (x) = u1 (x). For x̃ near x ∈ U,
(22.27)    0 = 0 − 0 = f(x̃, u1(x̃)) − f(x̃, u0(x̃)) = R(x̃)(u1(x̃) − u0(x̃)),
where
(22.28)    R(x̃) ≡ ∫_0^1 D_2f(x̃, u0(x̃) + t(u1(x̃) − u0(x̃))) dt.

From Eq. (22.28) and the continuity of u0 and u1, lim_{x̃→x} R(x̃) = D_2f(x, u0(x)),
which is invertible (indeed, DF(x, u0(x)) is invertible for all x ∈ U0 since F|_{U0×V0} has a
C¹ inverse, and therefore D_2f(x, u0(x)) is invertible for all x ∈ U0).
Thus R(x̃) is invertible for all x̃ sufficiently close to x. Using
Eq. (22.27), this last remark implies that u1 (x̃) = u0 (x̃) for all x̃ sufficiently close
to x. Since x ∈ O was arbitrary, we have shown that O is open.
22.8. More on the Inverse Function Theorem. In this section X and Y will
denote two Banach spaces, U ⊂o X, k ≥ 1, and f ∈ C^k(U, Y). Suppose x0 ∈ U,
h ∈ X, and f'(x0) is invertible; then

    f(x0 + h) − f(x0) = f'(x0)h + o(h) = f'(x0)[h + ε(h)]

where

    ε(h) = f'(x0)⁻¹[f(x0 + h) − f(x0)] − h = o(h).

In fact, by the fundamental theorem of calculus,

    ε(h) = ∫_0^1 (f'(x0)⁻¹ f'(x0 + th) − I) h dt,

but we will not use this here.
Let h, h' ∈ B_X(0, R) and apply the fundamental theorem of calculus to t →
f(x0 + h + t(h' − h)) to conclude

    ε(h') − ε(h) = f'(x0)⁻¹[f(x0 + h') − f(x0 + h)] − (h' − h)
                 = [∫_0^1 (f'(x0)⁻¹ f'(x0 + h + t(h' − h)) − I) dt](h' − h).

Taking norms of this equation gives

    ‖ε(h') − ε(h)‖ ≤ [∫_0^1 ‖f'(x0)⁻¹ f'(x0 + h + t(h' − h)) − I‖ dt] ‖h' − h‖ ≤ α‖h' − h‖

where

(22.29)    α := sup_{x∈B_X(x0,R)} ‖f'(x0)⁻¹ f'(x) − I‖_{L(X)}.

We summarize these comments in the following lemma.


Lemma 22.28. Suppose x0 ∈ U, R > 0, f : B_X(x0, R) → Y is a C¹ function
such that f'(x0) is invertible, α is as in Eq. (22.29), and ε ∈ C¹(B_X(0, R), X) is
defined by

(22.30)    f(x0 + h) = f(x0) + f'(x0)(h + ε(h)).

Then

(22.31)    ‖ε(h') − ε(h)‖ ≤ α‖h' − h‖ for all h, h' ∈ B_X(0, R).

Furthermore if α < 1 (which may be achieved by shrinking R if necessary) then
f'(x) is invertible for all x ∈ B_X(x0, R) and

(22.32)    sup_{x∈B_X(x0,R)} ‖f'(x)⁻¹‖_{L(Y,X)} ≤ (1 − α)⁻¹ ‖f'(x0)⁻¹‖_{L(Y,X)}.

Proof. It only remains to prove Eq. (22.32), so suppose now that α < 1. Then
by Proposition 3.69, f'(x0)⁻¹f'(x) is invertible and

    ‖[f'(x0)⁻¹f'(x)]⁻¹‖ ≤ 1/(1 − α) for all x ∈ B_X(x0, R).

Since f'(x) = f'(x0)[f'(x0)⁻¹f'(x)], this implies f'(x) is invertible and

    ‖f'(x)⁻¹‖ = ‖[f'(x0)⁻¹f'(x)]⁻¹ f'(x0)⁻¹‖ ≤ (1 − α)⁻¹ ‖f'(x0)⁻¹‖ for all x ∈ B_X(x0, R).

Theorem 22.29 (Inverse Function Theorem). Suppose U ⊂o X, k ≥ 1 and f ∈
C^k(U, Y) such that f'(x) is invertible for all x ∈ U. Then:

(1) f : U → Y is an open mapping, in particular V := f(U) ⊂o Y.
(2) If f is injective, then f⁻¹ : V → U is also a C^k map and

        (f⁻¹)'(y) = [f'(f⁻¹(y))]⁻¹ for all y ∈ V.

(3) If x0 ∈ U and R > 0 such that B_X(x0, R) ⊂ U and

        sup_{x∈B_X(x0,R)} ‖f'(x0)⁻¹f'(x) − I‖ = α < 1

    (which may always be achieved by taking R sufficiently small by continuity
    of f'(x)), then f|_{B_X(x0,R)} : B_X(x0, R) → f(B_X(x0, R)) is invertible and
    f|_{B_X(x0,R)}⁻¹ : f(B_X(x0, R)) → B_X(x0, R) is C^k.
(4) Keeping the same hypothesis as in item 3. and letting y0 = f(x0) ∈ Y,

        f(B_X(x0, r)) ⊂ B_Y(y0, ‖f'(x0)‖(1 + α)r) for all r ≤ R

    and

        B_Y(y0, δ) ⊂ f(B_X(x0, (1 − α)⁻¹‖f'(x0)⁻¹‖δ))

    for all δ < δ(x0) := (1 − α)R/‖f'(x0)⁻¹‖.
Proof. Let x0 and R > 0 be as in item 3. above and let ε be as defined in Eq.
(22.30) above, so that for x, x' ∈ B_X(x0, R),

    f(x) = f(x0) + f'(x0)[(x − x0) + ε(x − x0)] and
    f(x') = f(x0) + f'(x0)[(x' − x0) + ε(x' − x0)].

Subtracting these two equations implies

    f(x') − f(x) = f'(x0)[x' − x + ε(x' − x0) − ε(x − x0)]

or equivalently

    x' − x = f'(x0)⁻¹[f(x') − f(x)] + ε(x − x0) − ε(x' − x0).

Taking norms of this equation and making use of Lemma 22.28 implies

    ‖x' − x‖ ≤ ‖f'(x0)⁻¹‖ ‖f(x') − f(x)‖ + α‖x' − x‖,

which implies

(22.33)    ‖x' − x‖ ≤ ‖f'(x0)⁻¹‖/(1 − α) · ‖f(x') − f(x)‖ for all x, x' ∈ B_X(x0, R).

This shows that f|_{B_X(x0,R)} is injective and that f|_{B_X(x0,R)}⁻¹ : f(B_X(x0, R)) →
B_X(x0, R) is Lipschitz continuous because

    ‖f|_{B_X(x0,R)}⁻¹(y') − f|_{B_X(x0,R)}⁻¹(y)‖ ≤ ‖f'(x0)⁻¹‖/(1 − α) · ‖y' − y‖ for all y, y' ∈ f(B_X(x0, R)).

Since x0 ∈ X was chosen arbitrarily, if we know f : U → Y is injective, we then
know that f⁻¹ : V = f(U) → U is necessarily continuous. The remaining assertions
of the theorem now follow from the converse to the chain rule in Theorem 22.6 and
the fact that f is an open mapping (as we shall now show) so that in particular
f(B_X(x0, R)) is open.
Let y ∈ B_Y(0, δ), with δ to be determined later; we wish to solve the equation,
for x ∈ B_X(0, R),

    f(x0) + y = f(x0 + x) = f(x0) + f'(x0)(x + ε(x)).

Equivalently we are trying to find x ∈ B_X(0, R) such that

    x = f'(x0)⁻¹y − ε(x) =: S_y(x).

Now using Lemma 22.28 and the fact that ε(0) = 0,

    ‖S_y(x)‖ ≤ ‖f'(x0)⁻¹y‖ + ‖ε(x)‖ ≤ ‖f'(x0)⁻¹‖‖y‖ + α‖x‖ ≤ ‖f'(x0)⁻¹‖δ + αR.

Therefore if we assume δ is chosen so that

    ‖f'(x0)⁻¹‖δ + αR < R, i.e. δ < (1 − α)R/‖f'(x0)⁻¹‖ := δ(x0),

then S_y : B̄_X(0, R) → B_X(0, R) ⊂ B̄_X(0, R).
Similarly by Lemma 22.28, for all x, z ∈ B̄_X(0, R),

    ‖S_y(x) − S_y(z)‖ = ‖ε(z) − ε(x)‖ ≤ α‖x − z‖,

which shows S_y is a contraction on B̄_X(0, R). Hence by the contraction mapping
principle in Theorem 22.20, for every y ∈ B_Y(0, δ) there exists a unique solution
x ∈ B_X(0, R) such that x = S_y(x) or equivalently

    f(x0 + x) = f(x0) + y.

Letting y0 = f(x0), this last statement implies there exists a unique function g :
B_Y(y0, δ(x0)) → B_X(x0, R) such that f(g(y)) = y for all y ∈ B_Y(y0, δ(x0)). From Eq.
(22.33) it follows that

    ‖g(y) − x0‖ = ‖g(y) − g(y0)‖ ≤ ‖f'(x0)⁻¹‖/(1 − α) · ‖f(g(y)) − f(g(y0))‖ = ‖f'(x0)⁻¹‖/(1 − α) · ‖y − y0‖.

This shows

    g(B_Y(y0, δ)) ⊂ B_X(x0, (1 − α)⁻¹‖f'(x0)⁻¹‖δ)

and therefore

    B_Y(y0, δ) = f(g(B_Y(y0, δ))) ⊂ f(B_X(x0, (1 − α)⁻¹‖f'(x0)⁻¹‖δ))
for all δ < δ(x0).
This last assertion implies f(x0) ∈ f(W)^o for any W ⊂o U with x0 ∈ W. Since
x0 ∈ U was arbitrary, this shows f is an open mapping.

22.8.1. Alternate construction of g. Suppose U ⊂o X and f : U → Y is a C²
function. Then we are looking for a function g(y) such that f(g(y)) = y. Fix an
x0 ∈ U and y0 = f(x0) ∈ Y. Suppose such a g exists and let x(t) = g(y0 + th) for
some h ∈ Y. Then differentiating f(x(t)) = y0 + th implies

    (d/dt) f(x(t)) = f'(x(t))ẋ(t) = h,

or equivalently that

(22.34)    ẋ(t) = [f'(x(t))]⁻¹h = Z(h, x(t)) with x(0) = x0,

where Z(h, x) = [f'(x)]⁻¹h. Conversely, if x solves Eq. (22.34) we have
(d/dt) f(x(t)) = h and hence that

    f(x(1)) = y0 + h.

Thus if we define

    g(y0 + h) := e^{Z(h,·)}(x0),

then f(g(y0 + h)) = y0 + h for all h sufficiently small. This shows f is an open
mapping.
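The following sketch (not part of the text) implements this construction in R², assuming SciPy's ODE solver and an illustrative f: it integrates Eq. (22.34) from x0 up to time 1 and checks that f(x(1)) = y0 + h.

```python
# Hypothetical numerical illustration of the ODE construction of Section 22.8.1.
import numpy as np
from scipy.integrate import solve_ivp

def f(u):
    return np.array([u[0] + 0.1 * np.sin(u[1]), u[1] + 0.1 * u[0]**2])

def Df(u):  # Jacobian of f
    return np.array([[1.0, 0.1 * np.cos(u[1])], [0.2 * u[0], 1.0]])

x0 = np.array([0.2, -0.1])
h = np.array([0.05, -0.03])

rhs = lambda t, x: np.linalg.solve(Df(x), h)     # Z(h, x) = [f'(x)]^{-1} h
sol = solve_ivp(rhs, (0.0, 1.0), x0, rtol=1e-10, atol=1e-12)
x1 = sol.y[:, -1]                                # time-one flow, i.e. g(y0 + h)

print(np.linalg.norm(f(x1) - (f(x0) + h)))       # small: f(g(y0 + h)) = y0 + h
```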

22.9. Applications. A detailed discussion of the inverse function theorem on Banach
and Fréchet spaces may be found in Richard Hamilton's "The Inverse Function
Theorem of Nash and Moser." The applications in this section are taken from
this paper.
Theorem 22.30 (Hamilton's Theorem on p. 110.). Let p : U := (a, b) → V :=
(c, d) be a smooth function with p' > 0 on (a, b). For every g ∈ C⁰_{2π}(R, (c, d)) there
exists a unique function y ∈ C¹_{2π}(R, (a, b)) such that

    ẏ(t) + p(y(t)) = g(t).

Proof. Let Ṽ := C⁰_{2π}(R, (c, d)) ⊂o C⁰_{2π}(R, R) and

    Ũ := {y ∈ C¹_{2π}(R, R) : a < y(t) < b and c < ẏ(t) + p(y(t)) < d for all t} ⊂o C¹_{2π}(R, (a, b)).

The proof will be completed by showing P : Ũ → Ṽ defined by

    P(y)(t) = ẏ(t) + p(y(t)) for y ∈ Ũ and t ∈ R

is bijective.
Step 1. The differential of P is given by P'(y)h = ḣ + p'(y)h, see Exercise
22.7. We will now show that the linear mapping P'(y) is invertible. Indeed let
f = p'(y) > 0; then the general solution to the equation ḣ + fh = k is given by

    h(t) = e^{−∫_0^t f(τ)dτ} h0 + ∫_0^t e^{−∫_τ^t f(s)ds} k(τ) dτ,

where h0 is a constant. We wish to choose h0 so that h(2π) = h0, i.e. so that

    h0 (1 − e^{−c(f)}) = ∫_0^{2π} e^{−∫_τ^{2π} f(s)ds} k(τ) dτ,

where

    c(f) = ∫_0^{2π} f(τ) dτ = ∫_0^{2π} p'(y(τ)) dτ > 0.

The unique solution h ∈ C¹_{2π}(R, R) to P'(y)h = k is therefore given by

    h(t) = (1 − e^{−c(f)})⁻¹ e^{−∫_0^t f(s)ds} ∫_0^{2π} e^{−∫_τ^{2π} f(s)ds} k(τ) dτ + ∫_0^t e^{−∫_τ^t f(s)ds} k(τ) dτ.

Therefore P'(y) is invertible for all y. Hence by the inverse function theorem,
P : Ũ → Ṽ is an open mapping which is locally invertible.
Step 2. Let us now prove P : Ũ → Ṽ is injective. For this suppose y1 , y2 ∈ Ũ
such that P (y1 ) = g = P (y2 ) and let z = y2 − y1 . Since

ż(t) + p(y2 (t)) − p(y1 (t)) = g(t) − g(t) = 0,

if t_m ∈ R is a point where z(t_m) takes on its maximum, then ż(t_m) = 0 and hence

p(y2 (tm )) − p(y1 (tm )) = 0.

Since p is increasing this implies y2 (tm ) = y1 (tm ) and hence z(tm ) = 0. This shows
z(t) ≤ 0 for all t and a similar argument using a minimizer of z shows z(t) ≥ 0 for
all t. So we conclude y1 = y2 .
Step 3. Let W := P (Ũ ), we wish to show W = Ṽ . By step 1., we know W is
an open subset of Ṽ and since Ṽ is connected, to finish the proof it suffices to show
W is relatively closed in Ṽ . So suppose yj ∈ Ũ such that gj := P (yj ) → g ∈ Ṽ .
We must now show g ∈ W, i.e. g = P (y) for some y ∈ W. If tm is a maximizer of
yj , then ẏj (tm ) = 0 and hence gj (tm ) = p(yj (tm )) < d and therefore yj (tm ) < b
because p is increasing. A similar argument applied to the minimizers allows us
to conclude ran(p ◦ y_j) ⊂ ran(g_j) @@ (c, d) for all j. Since g_j is converging uniformly
to g, there exists c < γ < δ < d such that ran(p ◦ y_j) ⊂ ran(g_j) ⊂ [γ, δ] for all j.
Again since p0 > 0,

ran(yj ) ⊂ p−1 ([γ, δ]) = [α, β] @@ (a, b) for all j.

In particular sup{|ẏ_j(t)| : t ∈ R and j} < ∞ since

(22.35)    ẏ_j(t) = g_j(t) − p(y_j(t)) ∈ [γ, δ] − [γ, δ],

which is a compact subset of R. The Ascoli-Arzelà Theorem 3.59 now allows us to
assume, by passing to a subsequence if necessary, that y_j is converging uniformly
to y ∈ C⁰_{2π}(R, [α, β]). It now follows that

    ẏ_j(t) = g_j(t) − p(y_j(t)) → g − p(y)

uniformly in t. Hence we conclude that y ∈ C¹_{2π}(R, R) ∩ C⁰_{2π}(R, [α, β]), ẏ_j → ẏ and
P(y) = g. This has proved that g ∈ W and hence that W is relatively closed in Ṽ.
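Step 1 is explicit enough to compute with. The sketch below is not part of the text; it assumes NumPy and illustrative choices of f = p'(y) > 0 and k, evaluates the displayed solution formula on a grid, and checks that h is 2π-periodic and solves ḣ + fh = k.

```python
# Hypothetical numerical check of Step 1 in the proof of Theorem 22.30.
import numpy as np

N = 4000
t = np.linspace(0.0, 2 * np.pi, N + 1)
dt = t[1] - t[0]
f = 1.5 + np.cos(t)          # an illustrative f = p'(y) > 0
k = np.sin(3 * t) + 0.2      # an illustrative right-hand side

# Cumulative trapezoidal rule: cumtrap(y)[i] ~ integral of y from 0 to t_i.
cumtrap = lambda y: np.concatenate([[0.0], np.cumsum((y[:-1] + y[1:]) / 2) * dt])
Fint = cumtrap(f)            # Fint[i] = int_0^{t_i} f,  c(f) = Fint[-1] > 0
c_f = Fint[-1]

# h(t) = e^{-int_0^t f} (h0 + int_0^t e^{int_0^tau f} k(tau) dtau), h0 chosen so h(2 pi) = h0.
I = cumtrap(np.exp(Fint) * k)
h0 = np.exp(-c_f) * I[-1] / (1.0 - np.exp(-c_f))
h = np.exp(-Fint) * (h0 + I)

print(abs(h[0] - h[-1]))                               # periodicity: ~ 0
residual = np.gradient(h, dt) + f * h - k              # h' + f h - k
print(np.max(np.abs(residual[5:-5])))                  # small away from the endpoints
```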

22.10. Exercises.
Exercise 22.2. Suppose that A : R → L(X) is a continuous function and V : R →
L(X) is the unique solution to the linear differential equation
(22.36) V̇ (t) = A(t)V (t) with V (0) = I.
Assuming that V (t) is invertible for all t ∈ R, show that V −1 (t) ≡ [V (t)]−1 must
solve the differential equation
(22.37)    (d/dt) V⁻¹(t) = −V⁻¹(t)A(t) with V⁻¹(0) = I.
See Exercise 5.14 as well.
Exercise 22.3 (Differential Equations with Parameters). Let W be another Banach
space, U × V ⊂o X × W and Z ∈ C¹(U × V, X). For each (x, w) ∈ U × V, let
t ∈ J_{x,w} → φ(t, x, w) denote the maximal solution to the ODE

(22.38)    ẏ(t) = Z(y(t), w) with y(0) = x

and

    D := {(t, x, w) ∈ R × U × V : t ∈ J_{x,w}}

as in Exercise 5.18.

(1) Prove that φ is C¹ and that D_wφ(t, x, w) solves the differential equation:

    (d/dt) D_wφ(t, x, w) = (D_xZ)(φ(t, x, w), w) D_wφ(t, x, w) + (D_wZ)(φ(t, x, w), w)

with D_wφ(0, x, w) = 0 ∈ L(W, X). Hint: See the hint for Exercise 5.18
with the reference to Theorem 5.21 being replaced by Theorem 22.12.

(2) Also show with the aid of Duhamel's principle (Exercise 5.16) and Theorem
22.12 that

    D_wφ(t, x, w) = D_xφ(t, x, w) ∫_0^t D_xφ(τ, x, w)⁻¹ (D_wZ)(φ(τ, x, w), w) dτ.

Exercise 22.4 (Differential of e^A). Let f : L(X) → L^*(X) be the exponential
function f(A) = e^A. Prove that f is differentiable and that

(22.39)    Df(A)B = ∫_0^1 e^{(1−t)A} B e^{tA} dt.

Hint: Let B ∈ L(X) and define w(t, s) = e^{t(A+sB)} for all t, s ∈ R. Notice that

(22.40)    dw(t, s)/dt = (A + sB) w(t, s) with w(0, s) = I ∈ L(X).

Use Exercise 22.3 to conclude that w is C¹ and that w'(t, 0) ≡ dw(t, s)/ds|_{s=0}
satisfies the differential equation,

(22.41)    (d/dt) w'(t, 0) = A w'(t, 0) + B e^{tA} with w'(0, 0) = 0 ∈ L(X).

Solve this equation by Duhamel's principle (Exercise 5.16) and then apply Proposition
22.10 to conclude that f is differentiable with differential given by Eq. (22.39).
Exercise 22.5 (Local ODE Existence). Let Sx be defined as in Eq. (5.22) from the
proof of Theorem 5.10. Verify that Sx satisfies the hypothesis of Corollary 22.21.
In particular we could have used Corollary 22.21 to prove Theorem 5.10.

Exercise 22.6 (Local ODE Existence Again). Let J = [−1, 1], Z ∈ C 1 (X, X),
Y := C(J, X) and for y ∈ Y and s ∈ J let ys ∈ Y be defined by ys (t) := y(st). Use
the following outline to prove the ODE
(22.42) ẏ(t) = Z(y(t)) with y(0) = x
has a unique solution for small t and this solution is C 1 in x.
(1) If y solves Eq. (22.42) then ys solves
ẏs (t) = sZ(ys (t)) with ys (0) = x
or equivalently

(22.43)    y_s(t) = x + s ∫_0^t Z(y_s(τ)) dτ.

Notice that when s = 0, the unique solution to this equation is y_0(t) = x.
(2) Let F : J × Y → J × Y be defined by

    F(s, y) := (s, y(t) − s ∫_0^t Z(y(τ)) dτ).

Show the differential of F is given by

    F'(s, y)(a, v) = (a, t → v(t) − s ∫_0^t Z'(y(τ)) v(τ) dτ − a ∫_0^· Z(y(τ)) dτ).
(3) Verify F 0 (0, y) : R × Y → R × Y is invertible for all y ∈ Y and notice that
F (0, y) = (0, y).
(4) For x ∈ X, let C_x ∈ Y be the constant path at x, i.e. C_x(t) = x for all
t ∈ J. Use the inverse function Theorem 22.26 to conclude there exist ε > 0
and a C¹ map φ : (−ε, ε) × B(x0, ε) → Y such that

    F(s, φ(s, x)) = (s, C_x) for all (s, x) ∈ (−ε, ε) × B(x0, ε).

(5) Show, for s ≤ ε, that y_s(t) := φ(s, x)(t) satisfies Eq. (22.43). Now define
y(t, x) = φ(ε/2, x)(2t/ε) and show y(t, x) solves Eq. (22.42) for |t| < ε/2
and x ∈ B(x0, ε).
Exercise 22.7. Show P defined in Theorem 22.30 is continuously differentiable
and P 0 (y)h = ḣ + p0 (y)h.
