Calculus on Banach Spaces
Since
$$\sum_{n=2}^{\infty} \left\| (-A^{-1}H)^n \right\| \le \sum_{n=2}^{\infty} \|A^{-1}H\|^n \le \frac{\|A^{-1}\|^2 \|H\|^2}{1 - \|A^{-1}H\|},$$
we find that
$$f(A + H) - f(A) = -A^{-1}HA^{-1} + o(H).$$
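In finite dimensions this expansion can be checked numerically. The sketch below (an illustrative choice of $A$ and $H$, not from the text) compares $f(A + tH) - f(A)$ for $f(A) = A^{-1}$ against the candidate differential $-A^{-1}(tH)A^{-1}$; the normalized remainder shrinks linearly in $t$, consistent with the $o(H)$ estimate.

```python
# Illustrative finite-dimensional check: for f(A) = A^{-1},
# f(A + H) - f(A) = -A^{-1} H A^{-1} + o(H).
import numpy as np

rng = np.random.default_rng(0)
A = np.eye(3) + 0.1 * rng.standard_normal((3, 3))  # invertible: small perturbation of I
H = rng.standard_normal((3, 3))
Ainv = np.linalg.inv(A)

errors = []
for t in [1e-2, 1e-3, 1e-4]:
    exact = np.linalg.inv(A + t * H) - Ainv        # f(A + tH) - f(A)
    linear = -Ainv @ (t * H) @ Ainv                # candidate differential at tH
    errors.append(np.linalg.norm(exact - linear) / t)

print(errors)  # decreases roughly linearly in t, i.e. the remainder is o(H)
```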
22.2. Product and Chain Rules. The following theorem summarizes some basic properties of the differential.

Theorem 22.6. The differential $D$ has the following properties:

Linearity: $D$ is linear, i.e. $D(f + \lambda g) = Df + \lambda Dg$.

Product Rule: If $f : U \subset_o X \to Y$ and $A : U \subset_o X \to L(X, Z)$ are differentiable at $x_0$, then so is $x \to (Af)(x) \equiv A(x)f(x)$ and
$$D(Af)(x_0)h = (DA(x_0)h)f(x_0) + A(x_0)Df(x_0)h.$$

Chain Rule: If $f : U \subset_o X \to V \subset_o Y$ is differentiable at $x_0 \in U$ and $g : V \subset_o Y \to Z$ is differentiable at $y_0 \equiv f(x_0)$, then $g \circ f$ is differentiable at $x_0$ and $(g \circ f)'(x_0) = g'(y_0) f'(x_0)$.

Converse Chain Rule: Suppose that $f : U \subset_o X \to V \subset_o Y$ is continuous at $x_0 \in U$, $g : V \subset_o Y \to Z$ is differentiable at $y_0 \equiv f(x_0)$, $g'(y_0)$ is invertible, and $g \circ f$ is differentiable at $x_0$. Then $f$ is differentiable at $x_0$ and
$$(22.2)\qquad f'(x_0) \equiv [g'(y_0)]^{-1}(g \circ f)'(x_0).$$
Proof. For the proof of linearity, let $f, g : U \subset_o X \to Y$ be two functions which are differentiable at $x_0 \in U$ and let $c \in \mathbb{R}$. Then
$$(f + cg)(x_0 + h) = f(x_0) + Df(x_0)h + o(h) + c\left(g(x_0) + Dg(x_0)h + o(h)\right) = (f + cg)(x_0) + (Df(x_0) + cDg(x_0))h + o(h),$$
which implies that $f + cg$ is differentiable at $x_0$ and that
$$D(f + cg)(x_0) = Df(x_0) + cDg(x_0).$$
For item 2, we have
$$A(x_0 + h)f(x_0 + h) = (A(x_0) + DA(x_0)h + o(h))(f(x_0) + f'(x_0)h + o(h)) = A(x_0)f(x_0) + A(x_0)f'(x_0)h + [DA(x_0)h]f(x_0) + o(h),$$
which proves item 2.
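The product rule can be instantiated numerically in finite dimensions. In the sketch below, $X = \mathbb{R}$, $A(x)$ is a $2 \times 2$ matrix and $f(x)$ a vector in $\mathbb{R}^2$; these particular choices, and the hand-computed derivatives `dA` and `df`, are illustrative rather than from the text.

```python
# Illustrative instance of the product rule:
# D(Af)(x0)h = (DA(x0)h) f(x0) + A(x0) Df(x0) h,  here with X = R.
import numpy as np

def A(x):  # matrix-valued map A : R -> L(R^2)
    return np.array([[np.cos(x), x], [x**2, 1.0]])

def dA(x):  # its derivative, computed by hand
    return np.array([[-np.sin(x), 1.0], [2 * x, 0.0]])

def f(x):  # vector-valued map f : R -> R^2
    return np.array([np.sin(x), np.exp(x)])

def df(x):  # its derivative, computed by hand
    return np.array([np.cos(x), np.exp(x)])

x0, h, eps = 0.7, 1.0, 1e-6
fd = (A(x0 + eps * h) @ f(x0 + eps * h) - A(x0) @ f(x0)) / eps  # finite difference
product_rule = (dA(x0) * h) @ f(x0) + A(x0) @ (df(x0) * h)       # the stated formula
print(np.max(np.abs(fd - product_rule)))  # agreement up to O(eps)
```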
ANALYSIS TOOLS WITH APPLICATIONS 425
More explicitly,
$$(22.5)\qquad [f''(A)B]C = A^{-1}BA^{-1}CA^{-1} + A^{-1}CA^{-1}BA^{-1}.$$
Working inductively one shows that $f : GL(X) \to GL(X)$ defined by $f(A) \equiv A^{-1}$ is $C^\infty$.
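Eq. (22.5) can be verified numerically by a mixed second finite difference of $(A + sB + tC)^{-1}$ in $s$ and $t$ at $0$. All matrices and the step size below are ad hoc choices for illustration.

```python
# Illustrative check of Eq. (22.5):
# d^2/(ds dt)|_{s=t=0} (A + sB + tC)^{-1} = A^{-1}BA^{-1}CA^{-1} + A^{-1}CA^{-1}BA^{-1}.
import numpy as np

rng = np.random.default_rng(1)
A = np.eye(3) + 0.1 * rng.standard_normal((3, 3))  # invertible
B = rng.standard_normal((3, 3))
C = rng.standard_normal((3, 3))
Ai = np.linalg.inv(A)
inv = np.linalg.inv

formula = Ai @ B @ Ai @ C @ Ai + Ai @ C @ Ai @ B @ Ai   # Eq. (22.5)

e = 1e-4  # mixed second difference approximates the mixed partial
mixed = (inv(A + e * B + e * C) - inv(A + e * B) - inv(A + e * C) + Ai) / e**2

rel_err = np.linalg.norm(mixed - formula) / np.linalg.norm(formula)
print(rel_err)  # small relative error
```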
22.3. Partial Derivatives.
Definition 22.9 (Partial or Directional Derivative). Let $f : U \subset_o X \to Y$ be a function, $x_0 \in U$, and $v \in X$. We say that $f$ is differentiable at $x_0$ in the direction $v$ iff $\frac{d}{dt}\big|_{0}\, f(x_0 + tv) =: (\partial_v f)(x_0)$ exists. We call $(\partial_v f)(x_0)$ the directional or partial derivative of $f$ at $x_0$ in the direction $v$.
Notice that if $f$ is differentiable at $x_0$, then $\partial_v f(x_0)$ exists and is equal to $f'(x_0)v$; see Corollary 22.7.
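This relation $\partial_v f(x_0) = f'(x_0)v$ is easy to watch numerically. The concrete map and Jacobian below are illustrative choices, not from the text.

```python
# Illustrative check that the directional derivative equals f'(x0) v
# for a differentiable map f : R^2 -> R^2.
import numpy as np

def f(x):
    return np.array([x[0] * x[1], np.sin(x[0]) + x[1]**2])

def jacobian(x):  # f'(x), computed by hand
    return np.array([[x[1], x[0]], [np.cos(x[0]), 2 * x[1]]])

x0 = np.array([0.3, -1.2])
v = np.array([2.0, 0.5])
t = 1e-6
dir_deriv = (f(x0 + t * v) - f(x0)) / t   # approximates (d/dt)|_0 f(x0 + t v)
gap = np.max(np.abs(dir_deriv - jacobian(x0) @ v))
print(gap)  # O(t)
```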
Proposition 22.10. Let $f : U \subset_o X \to Y$ be a continuous function and $D \subset X$ be a dense subspace of $X$. Assume $\partial_v f(x)$ exists for all $x \in U$ and $v \in D$, and there exists a continuous function $A : U \to L(X, Y)$ such that $\partial_v f(x) = A(x)v$ for all $v \in D$ and $x \in U \cap D$. Then $f \in C^1(U, Y)$ and $Df = A$.
Proof. Let $x_0 \in U$ and choose $\varepsilon > 0$ such that $B(x_0, 2\varepsilon) \subset U$ and $M \equiv \sup\{\|A(x)\| : x \in B(x_0, 2\varepsilon)\} < \infty$ (see footnote 43). For $x \in B(x_0, \varepsilon) \cap D$ and $v \in D \cap B(0, \varepsilon)$, by the fundamental theorem of calculus,
$$(22.6)\qquad f(x + v) - f(x) = \int_0^1 \frac{d f(x + tv)}{dt}\, dt = \int_0^1 (\partial_v f)(x + tv)\, dt = \int_0^1 A(x + tv)\, v\, dt.$$
For general $x \in B(x_0, \varepsilon)$ and $v \in B(0, \varepsilon)$, choose $x_n \in B(x_0, \varepsilon) \cap D$ and $v_n \in D \cap B(0, \varepsilon)$ such that $x_n \to x$ and $v_n \to v$. Then
$$(22.7)\qquad f(x_n + v_n) - f(x_n) = \int_0^1 A(x_n + t v_n)\, v_n\, dt$$
holds for all $n$. The left side of this last equation tends to $f(x + v) - f(x)$ by the continuity of $f$. For the right side of Eq. (22.7) we have
$$\left\| \int_0^1 A(x + tv)\, v\, dt - \int_0^1 A(x_n + t v_n)\, v_n\, dt \right\| \le \int_0^1 \|A(x + tv) - A(x_n + t v_n)\| \|v\|\, dt + M\|v - v_n\|.$$
It now follows from the continuity of $A$, the fact that $\|A(x + tv) - A(x_n + t v_n)\| \le 2M$, and the dominated convergence theorem that the right side of Eq. (22.7) converges to $\int_0^1 A(x + tv)\, v\, dt$. Hence Eq. (22.6) is valid for all $x \in B(x_0, \varepsilon)$ and $v \in B(0, \varepsilon)$. We also see that
$$(22.8)\qquad f(x + v) - f(x) - A(x)v = \varepsilon(v)v,$$
43 It should be noted well that, unlike in finite dimensions, closed and bounded sets need not be compact, so it is not sufficient to choose $\varepsilon$ sufficiently small so that $\overline{B(x_0, 2\varepsilon)} \subset U$. Here is a counterexample. Let $X \equiv H$ be a Hilbert space and $\{e_n\}_{n=1}^\infty$ be an orthonormal set. Define $f(x) \equiv \sum_{n=1}^\infty n\,\phi(\|x - e_n\|)$, where $\phi$ is any continuous function on $\mathbb{R}$ such that $\phi(0) = 1$ and $\phi$ is supported in $(-1, 1)$. Notice that $\|e_n - e_m\|^2 = 2$ for all $m \ne n$, so that $\|e_n - e_m\| = \sqrt{2}$. Using this fact it is rather easy to check that for any $x_0 \in H$, there is an $\varepsilon > 0$ such that for all $x \in B(x_0, \varepsilon)$, only one term in the sum defining $f$ is non-zero. Hence $f$ is continuous. However, $f(e_n) = n \to \infty$ as $n \to \infty$.
where $\varepsilon(v) \equiv \int_0^1 [A(x + tv) - A(x)]\, dt$. Now
$$\|\varepsilon(v)\| \le \int_0^1 \|A(x + tv) - A(x)\|\, dt \le \max_{t \in [0,1]} \|A(x + tv) - A(x)\| \to 0 \text{ as } v \to 0,$$
is $C^1$ and
$$(22.12)\qquad DF(y)v = v - \int_0^{\cdot} D_y Z(t, y(t))\, v(t)\, dt.$$
By the existence and uniqueness Theorem 5.5 for linear ordinary differential equations, $DF(y)$ is invertible for any $y \in BC(J, U)$. By the definition of $\phi$, $F(G(x)) = h(x)$ for all $x \in B(x_0, \delta)$, where $h : X \to BC(J, X)$ is defined by $h(x)(t) = x$ for all $t \in J$, i.e. $h(x)$ is the constant path at $x$. Since $h$ is a bounded linear map, $h$ is smooth and $Dh(x) = h$ for all $x \in X$. We may now apply the converse to the chain rule in Theorem 22.6 to conclude $G \in C^1(B(x_0, \delta), O)$ and $DG(x) = [DF(G(x))]^{-1} Dh(x)$, or equivalently $DF(G(x))DG(x) = h$, which in turn is equivalent to
$$D_x\phi(t, x) - \int_0^t [DZ(\phi(\tau, x))]\, D_x\phi(\tau, x)\, d\tau = I_X.$$
As usual this equation implies Dx φ(t, x) is differentiable in t, Dx φ(t, x) is continuous
in (t, x) and Dx φ(t, x) satisfies Eq. (22.10).
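The conclusion that $D_x\phi$ solves the linear "variational" equation (22.10) can be tested on a one-dimensional example with a known flow. The choice $Z(y) = y^2$ below, with flow $\phi(t, x) = x/(1 - tx)$, and the RK4 integrator are illustrative assumptions, not from the text.

```python
# Illustrative check: for Z(y) = y^2 on R the flow is phi(t, x) = x/(1 - t x),
# so D_x phi(t, x) = 1/(1 - t x)^2.  We integrate the ODE together with the
# variational equation u' = Z'(phi(t, x)) u = 2 phi(t, x) u, u(0) = 1.

def rk4(f, y0, t1, n):
    """Classical 4th-order Runge-Kutta for a system given as a list."""
    h, y, t = t1 / n, list(y0), 0.0
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k1)])
        k3 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k2)])
        k4 = f(t + h, [yi + h * ki for yi, ki in zip(y, k3)])
        y = [yi + h / 6 * (a + 2 * b + 2 * c + d)
             for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]
        t += h
    return y

x, T = 0.5, 1.0
# augmented system: y' = y^2,  u' = 2 y u
y_T, u_T = rk4(lambda t, v: [v[0] ** 2, 2 * v[0] * v[1]], [x, 1.0], T, 1000)

phi = x / (1 - T * x)          # exact flow
dphi = 1 / (1 - T * x) ** 2    # exact D_x phi
print(abs(y_T - phi), abs(u_T - dphi))  # both tiny
```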
Lemma 22.13. Continuing the notation used in the proof of Theorem 22.12, further let
$$f(y) \equiv \int_0^{\cdot} Z(\tau, y(\tau))\, d\tau \quad \text{for } y \in O_\varepsilon.$$
Then $f \in C^1(O_\varepsilon, Y)$ and for all $y \in O_\varepsilon$,
$$f'(y)h = \int_0^{\cdot} D_x Z(\tau, y(\tau))\, h(\tau)\, d\tau =: \Lambda_y h.$$
Proof. Let $h \in Y$ be sufficiently small and $\tau \in J$. Then by the fundamental theorem of calculus,
$$Z(\tau, y(\tau) + h(\tau)) - Z(\tau, y(\tau)) = \int_0^1 D_x Z(\tau, y(\tau) + r h(\tau))\, h(\tau)\, dr,$$
and therefore,
$$(f(y + h) - f(y) - \Lambda_y h)(t) = \int_0^t [Z(\tau, y(\tau) + h(\tau)) - Z(\tau, y(\tau)) - D_x Z(\tau, y(\tau))h(\tau)]\, d\tau = \int_0^t d\tau \int_0^1 dr\, [D_x Z(\tau, y(\tau) + r h(\tau)) - D_x Z(\tau, y(\tau))]\, h(\tau).$$
Therefore,
$$(22.13)\qquad \|f(y + h) - f(y) - \Lambda_y h\|_\infty \le \|h\|_\infty\, \delta(h)$$
where
$$\delta(h) := \int_J d\tau \int_0^1 dr\, \|D_x Z(\tau, y(\tau) + r h(\tau)) - D_x Z(\tau, y(\tau))\|.$$
With the aid of Lemma 22.11 and Lemma 5.13,
$$(r, \tau, h) \in [0, 1] \times J \times Y \to \|D_x Z(\tau, y(\tau) + r h(\tau))\|$$
is bounded for small $h$ provided $\varepsilon > 0$ is sufficiently small. Thus it follows from the dominated convergence theorem that $\delta(h) \to 0$ as $h \to 0$, and hence Eq. (22.13) implies $f'(y)$ exists and is given by $\Lambda_y$. Similarly,
$$\|f'(y + h) - f'(y)\|_{op} \le \int_J \|D_x Z(\tau, y(\tau) + h(\tau)) - D_x Z(\tau, y(\tau))\|\, d\tau \to 0 \text{ as } h \to 0,$$
showing f 0 is continuous.
Remark 22.14. If $Z \in C^k(U, X)$, then an inductive argument shows that $\phi \in C^k(D(Z), X)$. For example, if $Z \in C^2(U, X)$ then $(y(t), u(t)) := (\phi(t, x), D_x\phi(t, x))$ solves the ODE
$$\frac{d}{dt}(y(t), u(t)) = \tilde{Z}((y(t), u(t))) \quad \text{with } (y(0), u(0)) = (x, Id_X),$$
where $\tilde{Z}$ is the $C^1$ vector field defined by
$$\tilde{Z}(x, u) = (Z(x), D_x Z(x)u).$$
Therefore Theorem 22.12 may be applied to this equation to deduce that $D_x^2\phi(t, x)$ and $D_x^2\dot\phi(t, x)$ exist and are continuous. We may now differentiate Eq. (22.10) to find that $D_x^2\phi(t, x)$ satisfies the ODE
$$\frac{d}{dt} D_x^2\phi(t, x) = \left[\left(\partial_{D_x\phi(t, x)} D_x Z\right)(t, \phi(t, x))\right] D_x\phi(t, x) + \left[(D_x Z)(t, \phi(t, x))\right] D_x^2\phi(t, x)$$
with $D_x^2\phi(0, x) = 0$.
22.5. Higher Order Derivatives. As above, let $f : U \subset_o X \to Y$ be a function. If $f$ is differentiable on $U$, then the differential $Df$ of $f$ is a function from $U$ to the Banach space $L(X, Y)$. If the function $Df : U \to L(X, Y)$ is also differentiable on $U$, then its differential is $D^2 f = D(Df) : U \to L(X, L(X, Y))$. Similarly, $D^3 f = D(D(Df)) : U \to L(X, L(X, L(X, Y)))$ if the differential of $D(Df)$ exists. In general, let $L^1(X, Y) \equiv L(X, Y)$ and $L^k(X, Y)$ be defined inductively by $L^{k+1}(X, Y) = L(X, L^k(X, Y))$. Then $(D^k f)(x) \in L^k(X, Y)$ if it exists. It will be convenient to identify the space $L^k(X, Y)$ with the Banach space defined in the next definition.
Definition 22.15. For $k \in \{1, 2, 3, \ldots\}$, let $M_k(X, Y)$ denote the set of functions $f : X^k \to Y$ such that
(1) For $i \in \{1, 2, \ldots, k\}$, the map $v \in X \to f\langle v_1, v_2, \ldots, v_{i-1}, v, v_{i+1}, \ldots, v_k \rangle \in Y$ is linear 44 for all $\{v_j\}_{j \ne i} \subset X$.
(2) The norm $\|f\|_{M_k(X, Y)}$ is finite, where
$$\|f\|_{M_k(X, Y)} \equiv \sup\left\{ \frac{\|f\langle v_1, v_2, \ldots, v_k \rangle\|_Y}{\|v_1\| \|v_2\| \cdots \|v_k\|} : \{v_i\}_{i=1}^k \subset X \setminus \{0\} \right\}.$$
Lemma 22.16. There are linear operators $j_k : L^k(X, Y) \to M_k(X, Y)$ defined inductively as follows: $j_1 = Id_{L(X,Y)}$ (notice that $M_1(X, Y) = L^1(X, Y) = L(X, Y)$) and
$$(j_{k+1}A)\langle v_0, v_1, \ldots, v_k \rangle = (j_k(Av_0))\langle v_1, v_2, \ldots, v_k \rangle \quad \forall v_i \in X.$$
(Notice that $Av_0 \in L^k(X, Y)$.) Moreover, the maps $j_k$ are isometric isomorphisms.

Proof. To get a feeling for what $j_k$ is, let us write out $j_2$ and $j_3$ explicitly. If $A \in L^2(X, Y) = L(X, L(X, Y))$, then $(j_2 A)\langle v_1, v_2 \rangle = (Av_1)v_2$, and if $A \in L^3(X, Y) = L(X, L(X, L(X, Y)))$, then $(j_3 A)\langle v_1, v_2, v_3 \rangle = ((Av_1)v_2)v_3$ for all $v_i \in X$.
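The identification $j_2$ is exactly "uncurrying": a linear map into linear maps becomes a bilinear map of two arguments. A small finite-dimensional illustration (the matrix $M$ and the vectors are ad hoc choices):

```python
# Illustration of j2 : L(X, L(X, Y)) -> M2(X, Y) on X = R^2, Y = R:
# a linear map into linear maps is "uncurried" into a bilinear map.
import numpy as np

M = np.array([[1.0, 2.0], [3.0, 4.0]])

def A(v0):
    """Element of L(X, L(X, Y)): v0 |-> (v1 |-> v0^T M v1)."""
    def inner(v1):
        return v0 @ M @ v1
    return inner

def j2A(v0, v1):
    """The associated bilinear map (j2 A)<v0, v1> = (A v0) v1."""
    return A(v0)(v1)

e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
print(j2A(e1, e2), j2A(e2, e1))  # 2.0 and 3.0: bilinear, but not symmetric
```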
It is easily checked that jk is linear for all k. We will now show by induction that
jk is an isometry and in particular that jk is injective. Clearly this is true if k = 1
since j1 is the identity map. For A ∈ Lk+1 (X, Y ),
44 I will routinely write $f\langle v_1, v_2, \ldots, v_k \rangle$ rather than $f(v_1, v_2, \ldots, v_k)$ when the function $f$ depends on each of its variables linearly, i.e. $f$ is a multi-linear function.
430 BRUCE K. DRIVER †
Hence
$$(D^{k-1}f)(x + v_1) - (D^{k-1}f)(x) - A_k(x)\langle v_1, \cdots \rangle = \int_0^1 [A_k(x + t v_1) - A_k(x)]\langle v_1, \cdots \rangle\, dt,$$
from which we get the estimate
$$(22.18)\qquad \|(D^{k-1}f)(x + v_1) - (D^{k-1}f)(x) - A_k(x)\langle v_1, \cdots \rangle\| \le \varepsilon(v_1)\|v_1\|$$
where $\varepsilon(v_1) \equiv \int_0^1 \|A_k(x + t v_1) - A_k(x)\|\, dt$. Notice by the continuity of $A_k$ that $\varepsilon(v_1) \to 0$ as $v_1 \to 0$. Thus it follows from Eq. (22.18) that $D^{k-1}f$ is differentiable and that $(D^k f)(x) = A_k(x)$.
Example 22.18. Let $f : L^*(X, Y) \to L^*(Y, X)$ be defined by $f(A) \equiv A^{-1}$, where $L^*(X, Y)$ denotes the invertible elements of $L(X, Y)$, which we assume to be non-empty. Then $f$ is infinitely differentiable and
$$(22.19)\qquad (D^k f)(A)\langle V_1, V_2, \ldots, V_k \rangle = (-1)^k \sum_{\sigma} \left\{ B^{-1} V_{\sigma(1)} B^{-1} V_{\sigma(2)} B^{-1} \cdots B^{-1} V_{\sigma(k)} B^{-1} \right\},$$
Combining the last two inequalities gives (using again that $\alpha \in (0, 1)$)
$$\rho(x_m, x_n) \le \sum_{k=n}^{m-1} \alpha^k \rho(x_1, x_0) \le \rho(x_1, x_0)\, \alpha^n \sum_{l=0}^{\infty} \alpha^l = \rho(x_1, x_0)\, \frac{\alpha^n}{1 - \alpha}.$$
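This a priori geometric estimate can be watched numerically. The sketch below (an ad hoc example) iterates the contraction $g(x) = \cos x$ on $[0, 1]$, where $|g'(x)| = |\sin x| \le \sin 1 =: \alpha < 1$, and checks that the error at step $n$ is below $\rho(x_1, x_0)\,\alpha^n/(1 - \alpha)$.

```python
# Illustrative fixed-point iteration for the contraction g(x) = cos x on [0, 1],
# with Lipschitz constant alpha = sin(1) ~ 0.841.
import math

alpha = math.sin(1.0)
x0, x1 = 0.5, math.cos(0.5)
rho10 = abs(x1 - x0)                      # rho(x1, x0)

x = x1
for _ in range(9):                        # reach the 10th iterate x_10
    x = math.cos(x)
x10 = x

xstar = x10
for _ in range(200):                      # essentially the fixed point x*
    xstar = math.cos(xstar)

# letting m -> infinity in the estimate above gives rho(x*, x_n) <= rho10 * alpha^n / (1 - alpha)
bound = rho10 * alpha**10 / (1 - alpha)
print(abs(xstar - x10), bound)            # the actual error sits below the bound
```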
$$(22.25)\qquad f|_B^{-1}(y) = x_0 + G([Df(x_0)]^{-1} y)$$
for $y \in V = f(B)$. This shows that $f|_B : B \to V$ is a homeomorphism, and it follows from (22.25) that $g = (f|_B)^{-1} \in C^1(V, B)$. Eq. (22.24) now follows from the chain rule and the fact that
$$f \circ g(y) = y \quad \text{for all } y \in V.$$
Since $f' \in C^{k-1}(B, L(X))$ and $i(A) := A^{-1}$ is a smooth map by Example 22.18, $g' = i \circ f' \circ g$ is $C^1$ if $k \ge 2$, i.e. $g$ is $C^2$ if $k \ge 2$. Again using $g' = i \circ f' \circ g$, we may
Proof. It only remains to prove Eq. (22.32), so suppose now that $\alpha < 1$. Then by Proposition 3.69, $f'(x_0)^{-1} f'(x)$ is invertible and
$$\left\| \left[f'(x_0)^{-1} f'(x)\right]^{-1} \right\| \le \frac{1}{1 - \alpha} \quad \text{for all } x \in B_X(x_0, R).$$
Since $f'(x) = f'(x_0)\left[f'(x_0)^{-1} f'(x)\right]$, this implies $f'(x)$ is invertible and
$$\|f'(x)^{-1}\| = \left\| \left[f'(x_0)^{-1} f'(x)\right]^{-1} f'(x_0)^{-1} \right\| \le \frac{\|f'(x_0)^{-1}\|}{1 - \alpha} \quad \text{for all } x \in B_X(x_0, R),$$
which implies
$$(22.33)\qquad \|x' - x\| \le \frac{\|f'(x_0)^{-1}\|}{1 - \alpha}\, \|f(x') - f(x)\| \quad \text{for all } x, x' \in B_X(x_0, R).$$
This shows that $f|_{B_X(x_0, R)}$ is injective and that $f|_{B_X(x_0, R)}^{-1} : f\left(B_X(x_0, R)\right) \to B_X(x_0, R)$ is Lipschitz continuous, because
$$\left\| f|_{B_X(x_0, R)}^{-1}(y') - f|_{B_X(x_0, R)}^{-1}(y) \right\| \le \frac{\|f'(x_0)^{-1}\|}{1 - \alpha}\, \|y' - y\| \quad \text{for all } y, y' \in f\left(B_X(x_0, R)\right).$$
Since $x_0 \in X$ was chosen arbitrarily, if we know $f : U \to Y$ is injective, then we know that $f^{-1} : V = f(U) \to U$ is necessarily continuous. The remaining assertions of the theorem now follow from the converse to the chain rule in Theorem 22.6 and the fact that $f$ is an open mapping (as we shall now show), so that in particular $f\left(B_X(x_0, R)\right)$ is open.
Let $y \in B_Y(0, \delta)$, with $\delta$ to be determined later. We wish to solve the equation, for $x \in B_X(0, R)$,
$$f(x_0) + y = f(x_0 + x) = f(x_0) + f'(x_0)(x + \varepsilon(x)).$$
Equivalently, we are trying to find $x \in B_X(0, R)$ such that
$$x = f'(x_0)^{-1} y - \varepsilon(x) =: S_y(x).$$
Now using Lemma 22.28 and the fact that $\varepsilon(0) = 0$,
$$\|S_y(x)\| \le \|f'(x_0)^{-1} y\| + \|\varepsilon(x)\| \le \|f'(x_0)^{-1}\|\, \delta + \alpha\|x\| \le \|f'(x_0)^{-1}\|\, \delta + \alpha R.$$
Therefore, if we assume $\delta$ is chosen so that
$$\|f'(x_0)^{-1}\|\, \delta + \alpha R < R, \quad \text{i.e. } \delta < (1 - \alpha)R / \|f'(x_0)^{-1}\| := \delta(x_0),$$
$f(x(1)) = y_0 + h$. Thus if we define
$$g(y_0 + h) := e^{Z(h, \cdot)}(x_0),$$
then $f(g(y_0 + h)) = y_0 + h$ for all $h$ sufficiently small. This shows $f$ is an open mapping.
where
$$c(f) = \int_0^{2\pi} f(\tau)\, d\tau = \int_0^{2\pi} p'(y(\tau))\, d\tau > 0.$$
The unique solution $h \in C^1_{2\pi}(\mathbb{R}, \mathbb{R})$ to $P'(y)h = k$ is given by
$$h(t) = \left(1 - e^{-c(f)}\right)^{-1} e^{-\int_0^t f(s)\, ds} \int_0^{2\pi} e^{-\int_\tau^{2\pi} f(s)\, ds}\, k(\tau)\, d\tau + \int_0^t e^{-\int_\tau^t f(s)\, ds}\, k(\tau)\, d\tau.$$
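This formula can be sanity-checked in a concrete case. Take $f \equiv 1$ and $k(t) = \cos t$, so $c(f) = 2\pi$, and one can verify by hand that the unique $2\pi$-periodic solution of $\dot h + h = \cos t$ is $h(t) = (\cos t + \sin t)/2$. The quadrature routine and sample point below are ad hoc choices.

```python
# Illustrative check of the displayed formula for f = 1, k = cos:
# the 2*pi-periodic solution of h' + h = cos t is h(t) = (cos t + sin t)/2.
import math

def quad(g, a, b, n=4000):
    """Composite trapezoid rule on [a, b]."""
    h = (b - a) / n
    s = 0.5 * (g(a) + g(b)) + sum(g(a + i * h) for i in range(1, n))
    return s * h

def h_formula(t):
    c = 2 * math.pi                       # c(f) when f = 1
    term1 = (1 - math.exp(-c)) ** -1 * math.exp(-t) * quad(
        lambda tau: math.exp(-(c - tau)) * math.cos(tau), 0.0, c)
    term2 = quad(lambda tau: math.exp(-(t - tau)) * math.cos(tau), 0.0, t)
    return term1 + term2

t = 1.3
exact = (math.cos(t) + math.sin(t)) / 2
print(abs(h_formula(t) - exact))          # quadrature-level error
```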
Therefore $P'(y)$ is invertible for all $y$. Hence, by the implicit function theorem, $P : \tilde{U} \to \tilde{V}$ is an open mapping which is locally invertible.
Step 2. Let us now prove $P : \tilde{U} \to \tilde{V}$ is injective. For this, suppose $y_1, y_2 \in \tilde{U}$ are such that $P(y_1) = g = P(y_2)$ and let $z = y_2 - y_1$. Since $\dot z(t) = p(y_1(t)) - p(y_2(t))$, if $t_m \in \mathbb{R}$ is a point where $z(t_m)$ takes on its maximum, then $\dot z(t_m) = 0$ and hence $p(y_2(t_m)) = p(y_1(t_m))$. Since $p$ is increasing, this implies $y_2(t_m) = y_1(t_m)$ and hence $z(t_m) = 0$. This shows $z(t) \le 0$ for all $t$, and a similar argument using a minimizer of $z$ shows $z(t) \ge 0$ for all $t$. So we conclude $y_1 = y_2$.
Step 3. Let $W := P(\tilde{U})$; we wish to show $W = \tilde{V}$. By Step 1 we know $W$ is an open subset of $\tilde{V}$, and since $\tilde{V}$ is connected, to finish the proof it suffices to show $W$ is relatively closed in $\tilde{V}$. So suppose $y_j \in \tilde{U}$ are such that $g_j := P(y_j) \to g \in \tilde{V}$. We must now show $g \in W$, i.e. $g = P(y)$ for some $y \in \tilde{U}$. If $t_m$ is a maximizer of $y_j$, then $\dot{y}_j(t_m) = 0$ and hence $g_j(t_m) = p(y_j(t_m)) < d$, and therefore $y_j(t_m) < b$ because $p$ is increasing. A similar argument applied to the minimizers then allows us to conclude $\operatorname{ran}(p \circ y_j) \subset \operatorname{ran}(g_j) \subset\subset (c, d)$ for all $j$. Since $g_j$ converges uniformly to $g$, there exist $c < \gamma < \delta < d$ such that $\operatorname{ran}(p \circ y_j) \subset \operatorname{ran}(g_j) \subset [\gamma, \delta]$ for all $j$. Again since $p' > 0$,
22.10. Exercises.
Exercise 22.2. Suppose that A : R → L(X) is a continuous function and V : R →
L(X) is the unique solution to the linear differential equation
(22.36) V̇ (t) = A(t)V (t) with V (0) = I.
Assuming that $V(t)$ is invertible for all $t \in \mathbb{R}$, show that $V^{-1}(t) \equiv [V(t)]^{-1}$ must solve the differential equation
$$(22.37)\qquad \frac{d}{dt} V^{-1}(t) = -V^{-1}(t)A(t) \quad \text{with } V^{-1}(0) = I.$$
See Exercise 5.14 as well.
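The claim of Exercise 22.2 can be confirmed numerically: integrate $\dot V = A(t)V$ and $\dot W = -W A(t)$ side by side and compare $W$ with $V^{-1}$. The particular $A(t)$, time horizon, and step count below are illustrative assumptions.

```python
# Illustrative check of Exercise 22.2: if V' = A(t) V, V(0) = I, then
# W' = -W A(t), W(0) = I, yields W(t) = V(t)^{-1}.
import numpy as np

def rk4_mat(f, Y0, T, n):
    """Classical RK4 for a matrix-valued ODE Y' = f(t, Y)."""
    h, Y, t = T / n, Y0.copy(), 0.0
    for _ in range(n):
        k1 = f(t, Y)
        k2 = f(t + h / 2, Y + h / 2 * k1)
        k3 = f(t + h / 2, Y + h / 2 * k2)
        k4 = f(t + h, Y + h * k3)
        Y = Y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return Y

def A(t):  # some time-dependent coefficient matrix
    return np.array([[0.0, 1.0], [-1.0, 0.3 * np.sin(t)]])

I = np.eye(2)
V = rk4_mat(lambda t, Y: A(t) @ Y, I, 2.0, 2000)      # V' = A V
W = rk4_mat(lambda t, Y: -Y @ A(t), I, 2.0, 2000)     # W' = -W A
gap = np.max(np.abs(W - np.linalg.inv(V)))
print(gap)  # W agrees with V^{-1} to integrator accuracy
```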
Exercise 22.3 (Differential Equations with Parameters). Let $W$ be another Banach space, $U \times V \subset_o X \times W$, and $Z \in C^1(U \times V, X)$. For each $(x, w) \in U \times V$, let $t \in J_{x,w} \to \phi(t, x, w)$ denote the maximal solution to the ODE
$$(22.38)\qquad \dot{y}(t) = Z(y(t), w) \quad \text{with } y(0) = x,$$
and
$$D := \{(t, x, w) \in \mathbb{R} \times U \times V : t \in J_{x,w}\}$$
as in Exercise 5.18.
(1) Prove that $\phi$ is $C^1$ and that $D_w\phi(t, x, w)$ solves the differential equation
$$\frac{d}{dt} D_w\phi(t, x, w) = (D_x Z)(\phi(t, x, w), w)\, D_w\phi(t, x, w) + (D_w Z)(\phi(t, x, w), w)$$
with $D_w\phi(0, x, w) = 0 \in L(W, X)$. Hint: See the hint for Exercise 5.18, with the reference to Theorem 5.21 replaced by Theorem 22.12.
(2) Also show, with the aid of Duhamel's principle (Exercise 5.16) and Theorem 22.12, that
$$D_w\phi(t, x, w) = D_x\phi(t, x, w) \int_0^t D_x\phi(\tau, x, w)^{-1} (D_w Z)(\phi(\tau, x, w), w)\, d\tau.$$
Hint: Let $B \in L(X)$ and define $w(t, s) = e^{t(A + sB)}$ for all $t, s \in \mathbb{R}$. Notice that
$$(22.40)\qquad dw(t, s)/dt = (A + sB)w(t, s) \quad \text{with } w(0, s) = I \in L(X).$$
Use Exercise 22.3 to conclude that $w$ is $C^1$ and that $w'(t, 0) \equiv dw(t, s)/ds|_{s=0}$ satisfies the differential equation
$$(22.41)\qquad \frac{d}{dt} w'(t, 0) = Aw'(t, 0) + Be^{tA} \quad \text{with } w'(0, 0) = 0 \in L(X).$$
Solve this equation by Duhamel's principle (Exercise 5.16) and then apply Proposition 22.10 to conclude that $f$ is differentiable with differential given by Eq. (22.39).
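Solving Eq. (22.41) by Duhamel's principle gives $\left.\frac{d}{ds}\right|_{s=0} e^{t(A+sB)} = \int_0^t e^{(t-\tau)A} B\, e^{\tau A}\, d\tau$, which can be checked numerically. The helper `expm`, the matrix sizes, and all step sizes below are ad hoc choices for illustration.

```python
# Illustrative check: d/ds|_{s=0} e^{t(A+sB)} = int_0^t e^{(t-tau)A} B e^{tau A} dtau.
import numpy as np

def expm(M, terms=30):
    """Matrix exponential via scaling-and-squaring of a truncated Taylor series."""
    s = max(0, int(np.ceil(np.log2(max(np.linalg.norm(M, 1), 1e-16)))) + 1)
    X = M / 2**s
    E = np.eye(M.shape[0])
    P = np.eye(M.shape[0])
    for k in range(1, terms):
        P = P @ X / k
        E = E + P
    for _ in range(s):
        E = E @ E
    return E

rng = np.random.default_rng(2)
A = 0.5 * rng.standard_normal((3, 3))
B = 0.5 * rng.standard_normal((3, 3))
t, n = 1.0, 400

# trapezoid rule for the Duhamel integral
h = t / n
duhamel = np.zeros((3, 3))
prev = expm(t * A) @ B                      # integrand at tau = 0
for i in range(1, n + 1):
    tau = i * h
    cur = expm((t - tau) * A) @ B @ expm(tau * A)
    duhamel += 0.5 * h * (prev + cur)
    prev = cur

s = 1e-6                                    # forward difference in s
fd = (expm(t * (A + s * B)) - expm(t * A)) / s
err = np.max(np.abs(fd - duhamel))
print(err)  # small: the two sides agree
```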
Exercise 22.5 (Local ODE Existence). Let Sx be defined as in Eq. (5.22) from the
proof of Theorem 5.10. Verify that Sx satisfies the hypothesis of Corollary 22.21.
In particular we could have used Corollary 22.21 to prove Theorem 5.10.
Exercise 22.6 (Local ODE Existence Again). Let J = [−1, 1], Z ∈ C 1 (X, X),
Y := C(J, X) and for y ∈ Y and s ∈ J let ys ∈ Y be defined by ys (t) := y(st). Use
the following outline to prove the ODE
(22.42) ẏ(t) = Z(y(t)) with y(0) = x
has a unique solution for small t and this solution is C 1 in x.
(1) If $y$ solves Eq. (22.42) then $y_s$ solves
$$\dot{y}_s(t) = sZ(y_s(t)) \quad \text{with } y_s(0) = x,$$
or equivalently
$$(22.43)\qquad y_s(t) = x + s\int_0^t Z(y_s(\tau))\, d\tau.$$
Notice that when $s = 0$, the unique solution to this equation is $y_0(t) = x$.
(2) Let $F : J \times Y \to J \times Y$ be defined by
$$F(s, y) := \left( s,\; y(t) - s\int_0^t Z(y(\tau))\, d\tau \right).$$
Show the differential of $F$ is given by
$$F'(s, y)(a, v) = \left( a,\; t \to v(t) - s\int_0^t Z'(y(\tau))v(\tau)\, d\tau - a\int_0^t Z(y(\tau))\, d\tau \right).$$
(3) Verify F 0 (0, y) : R × Y → R × Y is invertible for all y ∈ Y and notice that
F (0, y) = (0, y).
(4) For $x \in X$, let $C_x \in Y$ be the constant path at $x$, i.e. $C_x(t) = x$ for all $t \in J$. Use the inverse function Theorem 22.26 to conclude there exist $\varepsilon > 0$ and a $C^1$ map $\phi : (-\varepsilon, \varepsilon) \times B(x_0, \varepsilon) \to Y$ such that
$$F(s, \phi(s, x)) = (s, C_x) \quad \text{for all } (s, x) \in (-\varepsilon, \varepsilon) \times B(x_0, \varepsilon).$$
(5) Show, for $s \le \varepsilon$, that $y_s(t) := \phi(s, x)(t)$ satisfies Eq. (22.43). Now define $y(t, x) = \phi(\varepsilon/2, x)(2t/\varepsilon)$ and show $y(t, x)$ solves Eq. (22.42) for $|t| < \varepsilon/2$ and $x \in B(x_0, \varepsilon)$.
Exercise 22.7. Show $P$ defined in Theorem 22.30 is continuously differentiable and $P'(y)h = \dot{h} + p'(y)h$.