Integration and Series
Abstract
This is a solution manual of selected exercise problems from Analysis on Manifolds, by James R.
Munkres [1]. If you find any typos/errors, please email me at [email protected].
Contents
1 Review of Linear Algebra
3 Review of Topology in R^n
5 The Derivative
14 Rectifiable Sets
15 Improper Integrals
16 Partition of Unity
18 Diffeomorphisms in R^n
19 Proof of the Change of Variables Theorem
23 Manifolds in R^n
26 Multilinear Algebra
27 Alternating Tensors
34 Orientable Manifolds
1 Review of Linear Algebra
A good textbook on linear algebra from the viewpoint of finite-dimensional spaces is Lax [2]. Below,
we relate the results of this section to that reference.
Theorem 1.1 (page 2) corresponds to Lax [2, page 5], Chapter 1, Lemma 1.
Theorem 1.2 (page 3) corresponds to Lax [2, page 6], Chapter 1, Theorem 4.
Theorem 1.5 (page 7) corresponds to Lax [2, page 37], Chapter 4, Theorem 2 and the paragraph below
Theorem 2.
|A · B| ≤ m|A||B|.
Therefore,
|A · B| = max{ |∑_{k=1}^{m} a_{ik} b_{kj}| ; i = 1, · · · , n, j = 1, · · · , p } ≤ m|A||B|.
3. Show that the sup norm on R^2 is not derived from an inner product on R^2. [Hint: Suppose ⟨x, y⟩ is an
inner product on R^2 (not the dot product) having the property that |x| = ⟨x, x⟩^{1/2}. Compute ⟨x ± y, x ± y⟩
and apply to the case x = e_1 and y = e_2.]
Proof. Suppose ⟨·, ·⟩ is an inner product on R^2 having the property that |x| = ⟨x, x⟩^{1/2}, where |x| is the sup
norm. By the polarization identity ⟨x, y⟩ = (1/4)(|x + y|^2 − |x − y|^2), we have
⟨e_1, e_1 + e_2⟩ = (1/4)(|2e_1 + e_2|^2 − |e_2|^2) = (1/4)(4 − 1) = 3/4,
⟨e_1, e_2⟩ = (1/4)(|e_1 + e_2|^2 − |e_1 − e_2|^2) = (1/4)(1 − 1) = 0,
⟨e_1, e_1⟩ = |e_1|^2 = 1.
So ⟨e_1, e_1 + e_2⟩ ≠ ⟨e_1, e_2⟩ + ⟨e_1, e_1⟩, which implies ⟨·, ·⟩ cannot be an inner product. Therefore, our assumption
is false and the sup norm on R^2 is not derived from an inner product on R^2.
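The failure of additivity can also be checked numerically. The sketch below (not part of the original solution) applies the polarization formula to the sup norm with NumPy:

```python
import numpy as np

sup = lambda v: np.max(np.abs(v))  # sup norm on R^2

def polar(x, y):
    # candidate "inner product" obtained from the sup norm via polarization
    return 0.25 * (sup(x + y) ** 2 - sup(x - y) ** 2)

e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
lhs = polar(e1, e1 + e2)             # 3/4
rhs = polar(e1, e2) + polar(e1, e1)  # 0 + 1
print(lhs, rhs)  # additivity fails, so no inner product induces the sup norm
```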
Proof. Write
B = \begin{pmatrix} b_{11} & b_{12} & b_{13} \\ b_{21} & b_{22} & b_{23} \end{pmatrix}.
Then
BA = \begin{pmatrix} b_{11} + b_{12} & 2b_{11} − b_{12} + b_{13} \\ b_{21} + b_{22} & 2b_{21} − b_{22} + b_{23} \end{pmatrix}.
So BA = I_2 if and only if
b_{11} + b_{12} = 1
b_{21} + b_{22} = 0
2b_{11} − b_{12} + b_{13} = 0
2b_{21} − b_{22} + b_{23} = 1.
Plugging −b_{12} = b_{11} − 1 and −b_{22} = b_{21} into the last two equations, we get
3b_{11} + b_{13} = 1
3b_{21} + b_{23} = 1.
So we can exhibit two different left inverses for A:
B_1 = \begin{pmatrix} 0 & 1 & 1 \\ 0 & 0 & 1 \end{pmatrix} and B_2 = \begin{pmatrix} 1 & 0 & −2 \\ 1 & −1 & −2 \end{pmatrix}.
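Both left inverses can be verified mechanically. In the sketch below, the matrix A is reconstructed from the entries of BA displayed above (an inference; A itself is not restated in this excerpt):

```python
import numpy as np

# A inferred from the bilinear entries of BA in the proof (hypothetical reconstruction)
A = np.array([[1, 2], [1, -1], [0, 1]])
B1 = np.array([[0, 1, 1], [0, 0, 1]])
B2 = np.array([[1, 0, -2], [1, -1, -2]])

print(B1 @ A)  # 2x2 identity
print(B2 @ A)  # 2x2 identity
```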
(b)
Proof. By Theorem 2.2, A has no right inverse.
2.
Proof. (a) By Theorem 1.5, n ≥ m and among the n row vectors of A, exactly m are linearly independent.
By applying elementary row operations to A, we can reduce A to the echelon form \begin{pmatrix} I_m \\ 0 \end{pmatrix}.
So we can find a matrix D that is a product of elementary matrices such that
DA = \begin{pmatrix} I_m \\ 0 \end{pmatrix}.
(b) If rank A = m, by part (a) there exists a matrix D that is a product of elementary matrices such that
DA = \begin{pmatrix} I_m \\ 0 \end{pmatrix}.
Let B = [I_m, 0]D; then BA = I_m, i.e. B is a left inverse of A. Conversely, if B is a left inverse of A, it is
easy to see that A as a linear mapping from R^m to R^n is injective. This implies the column vectors of A are
linearly independent, i.e. rank A = m.
(c) A has a right inverse if and only if A^{tr} has a left inverse. By part (b), this implies rank A = rank A^{tr} = n.
4.
Proof. Suppose (D_k)_{k=1}^K is a sequence of elementary matrices such that D_K · · · D_2 D_1 A = I_n. Since
D_K · · · D_2 D_1 A = (D_K · · · D_2 D_1 I_n)A, we can conclude A^{−1} = D_K · · · D_2 D_1 I_n.
5.
Proof. By Theorem 2.14,
A^{−1} = \frac{1}{ad − bc}\begin{pmatrix} d & −b \\ −c & a \end{pmatrix}.
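The 2 × 2 inverse formula from Theorem 2.14 can be confirmed symbolically; a quick SymPy sketch:

```python
import sympy as sp

a, b, c, d = sp.symbols('a b c d')
A = sp.Matrix([[a, b], [c, d]])
A_inv = sp.Matrix([[d, -b], [-c, a]]) / (a*d - b*c)
print(sp.simplify(A * A_inv))  # the 2x2 identity, valid when ad - bc != 0
```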
3 Review of Topology in Rn
2.
Proof. For any closed subset C of Y , f −1 (C) = [f −1 (C) ∩ A] ∪ [f −1 (C) ∩ B]. Since f −1 (C) ∩ A is a closed
subset of A, there must be a closed subset D1 of X such that f −1 (C) ∩ A = D1 ∩ A. Similarly, there is a
closed subset D2 of X such that f −1 (C) ∩ B = D2 ∩ B. So f −1 (C) = [D1 ∩ A] ∪ [D2 ∩ B]. A and B are
closed in X, so D1 ∩ A, D2 ∩ B and [D1 ∩ A] ∪ [D2 ∩ B] are all closed in X. This shows f is continuous.
7.
Proof. (a) Take f (x) ≡ y0 and let g be such that g(y0 ) ̸= z0 but g(y) → z0 as y → y0 .
5 The Derivative
1.
Proof. By definition, \lim_{t→0} \frac{f(a + tu) − f(a)}{t} exists. Consequently,
f'(a; cu) = \lim_{t→0} \frac{f(a + tcu) − f(a)}{t} = c \lim_{t→0} \frac{f(a + tcu) − f(a)}{ct}
exists and is equal to c f'(a; u).
2.
Proof. (a) f(u) = f(u_1, u_2) = \frac{u_1 u_2}{u_1^2 + u_2^2}. So
\frac{f(tu) − f(0)}{t} = \frac{1}{t} · \frac{t^2 u_1 u_2}{t^2(u_1^2 + u_2^2)} = \frac{1}{t} · \frac{u_1 u_2}{u_1^2 + u_2^2}.
So \lim_{(x,y)→0} \frac{|xy|}{\sqrt{x^2 + y^2}} = 0. This shows f(x, y) = |xy| is differentiable at 0 and the derivative is 0. However,
for any fixed y ≠ 0, f(x, y) = |x||y| is not a differentiable function of x at x = 0. So its partial derivative w.r.t. x does not
exist everywhere in a neighborhood of 0, which implies f is not of class C^1 in any neighborhood of 0.
7 The Chain Rule
8 The Inverse Function Theorem
9 The Implicit Function Theorem
10 The Integral over a Rectangle
6.
Proof. (a) Straightforward from the Riemann condition (Theorem 10.3).
(b) Among all the sub-rectangles determined by P, those whose sides contain the newly added point have
a combined volume no greater than (mesh P)(width Q)^{n−1}. So
0 ≤ L(f, P'') − L(f, P) ≤ 2M(mesh P)(width Q)^{n−1}.
The result for upper sums can be derived similarly.
(c) Given ε > 0, choose a partition P' such that U(f, P') − L(f, P') < ε/2. Let N be the number of
partition points in P' and let
δ = \frac{ε}{8MN(width Q)^{n−1}}.
Suppose P has mesh less than δ. The common refinement P'' of P and P' is obtained by adjoining at most
N points to P. So by part (b),
0 ≤ L(f, P'') − L(f, P) ≤ N · 2M(mesh P)(width Q)^{n−1} ≤ 2MN δ (width Q)^{n−1} = ε/4.
Similarly, we can show 0 ≤ U(f, P) − U(f, P'') ≤ ε/4. So
U(f, P) − L(f, P) = [U(f, P) − U(f, P'')] + [L(f, P'') − L(f, P)] + [U(f, P'') − L(f, P'')]
≤ ε/4 + ε/4 + [U(f, P') − L(f, P')]
≤ ε/2 + ε/2
= ε.
This shows that for any given ε > 0, there is a δ > 0 such that U(f, P) − L(f, P) < ε for every partition P of
mesh less than δ.
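A small numerical illustration of this phenomenon (a sketch, not in the original text): for a monotone function such as f(x) = x^2 on [0, 1], the gap U(f, P) − L(f, P) shrinks linearly with the mesh, since on each cell the inf and sup sit at the endpoints.

```python
import numpy as np

def darboux_sums(n):
    # Lower/upper Darboux sums for f(x) = x^2 on [0, 1], uniform partition into n cells
    xs = np.linspace(0.0, 1.0, n + 1)
    fvals = xs ** 2
    widths = np.diff(xs)
    lower = float(np.sum(fvals[:-1] * widths))  # inf on each cell (f increasing)
    upper = float(np.sum(fvals[1:] * widths))   # sup on each cell
    return lower, upper

lo, up = darboux_sums(1000)
print(up - lo)  # = (f(1) - f(0))/n = 0.001, i.e. proportional to the mesh
```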
7.
Proof. (Sufficiency) Note that |∑_R f(x_R)v(R) − A| < ε can be written as
A − ε < ∑_R f(x_R)v(R) < A + ε.
This shows U(f, P) ≤ A + ε and L(f, P) ≥ A − ε. So U(f, P) − L(f, P) ≤ 2ε. By Problem 6, we conclude f
is integrable over Q, with ∫_Q f ∈ [A − ε, A + ε]. Since ε is arbitrary, we conclude ∫_Q f = A.
(Necessity) By Problem 6, for any given ε > 0, there is a δ > 0 such that U(f, P) − L(f, P) < ε for every
partition P of mesh less than δ. For any such partition P, if for each sub-rectangle R determined by P, x_R
is a point of R, we must have
L(f, P) − A ≤ ∑_R f(x_R)v(R) − A ≤ U(f, P) − A.
11 Existence of the Integral
12 Evaluation of the Integral
13 The Integral over a Bounded Set
14 Rectifiable Sets
15 Improper Integrals
16 Partition of Unity
17 The Change of Variables Theorem
18 Diffeomorphisms in Rn
19 Proof of the Change of Variables Theorem
20 Applications of Change of Variables
21 The Volume of a Parallelepiped
1. (a)
Proof. Let v = (a, b, c). Then
X^{tr} X = (I_3, v^{tr}) \begin{pmatrix} I_3 \\ v \end{pmatrix} = I_3 + v^{tr} v = \begin{pmatrix} 1 + a^2 & ab & ac \\ ab & 1 + b^2 & bc \\ ca & cb & 1 + c^2 \end{pmatrix}.
(b)
Proof.
V(X) = \left[ \det^2 I_3 + \det^2\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ a & b & c \end{pmatrix} + \det^2\begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ a & b & c \end{pmatrix} + \det^2\begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 1 \\ a & b & c \end{pmatrix} \right]^{1/2} = (1 + a^2 + b^2 + c^2)^{1/2}.
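The identity behind part (b), det(X^{tr} X) = 1 + a^2 + b^2 + c^2, can be spot-checked numerically (a sketch with arbitrary sample values):

```python
import numpy as np

a, b, c = 0.3, -1.2, 2.0
v = np.array([a, b, c])
X = np.vstack([np.eye(3), v])   # the 4x3 matrix with rows e1, e2, e3, v
gram = X.T @ X                  # equals I3 + v^tr v
lhs = float(np.linalg.det(gram))
rhs = 1 + a*a + b*b + c*c
print(lhs, rhs)  # agree, so V(X) = sqrt(1 + a^2 + b^2 + c^2)
```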
2.
Proof. Let X = (x_1, · · · , x_i, · · · , x_k) and Y = (x_1, · · · , λx_i, · · · , x_k). Then
V(Y) = [∑_{[I]} \det^2 Y_I]^{1/2} = [∑_{[I]} λ^2 \det^2 X_I]^{1/2} = |λ|[∑_{[I]} \det^2 X_I]^{1/2} = |λ|V(X).
3.
Proof. Suppose P is determined by x_1, · · · , x_k. Then V(h(P)) = V(λx_1, · · · , λx_k) = |λ|V(x_1, λx_2, · · · , λx_k) =
· · · = |λ|^k V(x_1, x_2, · · · , x_k) = |λ|^k V(P).
4. (a)
Proof. Straightforward.
(b)
Proof.
||a||^2 ||b||^2 − ⟨a, b⟩^2 = (∑_{i=1}^3 a_i^2)(∑_{j=1}^3 b_j^2) − (∑_{k=1}^3 a_k b_k)^2
= ∑_{i,j=1}^3 a_i^2 b_j^2 − ∑_{k=1}^3 a_k^2 b_k^2 − 2(a_1 b_1 a_2 b_2 + a_1 b_1 a_3 b_3 + a_2 b_2 a_3 b_3)
= ∑_{i,j=1, i≠j}^3 a_i^2 b_j^2 − 2(a_1 b_1 a_2 b_2 + a_1 b_1 a_3 b_3 + a_2 b_2 a_3 b_3)
= (a_1 b_2 − a_2 b_1)^2 + (a_1 b_3 − a_3 b_1)^2 + (a_2 b_3 − a_3 b_2)^2 = ||a × b||^2.
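The computation above is Lagrange's identity; a symbolic check with SymPy:

```python
import sympy as sp

a1, a2, a3, b1, b2, b3 = sp.symbols('a1 a2 a3 b1 b2 b3')
a = sp.Matrix([a1, a2, a3])
b = sp.Matrix([b1, b2, b3])
lhs = a.dot(a) * b.dot(b) - a.dot(b) ** 2   # ||a||^2 ||b||^2 - <a, b>^2
rhs = a.cross(b).dot(a.cross(b))            # ||a x b||^2
print(sp.expand(lhs - rhs))  # 0: Lagrange's identity
```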
5. (a)
Proof. Suppose V1 and V2 both satisfy conditions (i)-(iv). Then by the Gram-Schmidt process, the uniqueness
is reduced to V1 (x1 , · · · , xk ) = V2 (x1 , · · · , xk ), where x1 , · · · , xk are orthonormal.
(b)
Proof. Following the hint, we can assume without loss of generality that W = Rn and the inner product is
the dot product on Rn . Let V (x1 , · · · , xk ) be the volume function, then (i) and (ii) are implied by Theorem
21.4, (iii) is Problem 2, and (iv) is implied by Theorem 21.3: V (x1 , · · · , xk ) = [det(X tr X)]1/2 .
2.
Proof. Let x denote the general point of A. Then Dα(x) is the (k + 1) × k matrix consisting of the k × k
identity matrix I_k with the row (D_1 f(x), D_2 f(x), · · · , D_k f(x)) appended, and by Theorem 21.4,
V(Dα(x)) = \left[1 + ∑_{i=1}^k (D_i f(x))^2\right]^{1/2}.
So v(Y_α) = ∫_A \left[1 + ∑_{i=1}^k (D_i f(x))^2\right]^{1/2}.
3. (a)
Proof. v(Y_α) = ∫_A V(Dα) and ∫_{Y_α} π_i dV = ∫_A (π_i ◦ α)V(Dα). Since Dα(t) = \begin{pmatrix} −a \sin t \\ a \cos t \end{pmatrix}, V(Dα) = |a|. So
v(Y_α) = |a|π, ∫_{Y_α} π_1 dV = ∫_0^π a \cos t · |a| \, dt = 0, and ∫_{Y_α} π_2 dV = ∫_0^π a \sin t · |a| \, dt = 2a|a|.
Hence the centroid is (0, 2a/π).
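The centroid (0, 2a/π) of the semicircular arc can be confirmed by numerical quadrature; the sketch below (with a = 2 as an arbitrary sample value) uses midpoint-rule line integrals:

```python
import numpy as np

# Centroid of the arc alpha(t) = (a cos t, a sin t), 0 <= t <= pi
a = 2.0
n = 200_000
t = (np.arange(n) + 0.5) * np.pi / n   # midpoints of a uniform partition of [0, pi]
dt = np.pi / n
length = a * np.pi                     # integral of the speed |alpha'(t)| = a
x_bar = np.sum(a * np.cos(t) * a * dt) / length
y_bar = np.sum(a * np.sin(t) * a * dt) / length
print(x_bar, y_bar)  # close to (0, 2a/pi)
```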
(b)
Proof. By Example 4, v(Y_α) = 2πa^2 and
∫_{Y_α} π_1 dV = ∫_A x · \frac{a}{\sqrt{a^2 − x^2 − y^2}} = ∫_0^{2π} ∫_0^a \frac{r \cos θ · ar}{\sqrt{a^2 − r^2}} = 0,
∫_{Y_α} π_2 dV = ∫_A y · \frac{a}{\sqrt{a^2 − x^2 − y^2}} = ∫_0^{2π} ∫_0^a \frac{r \sin θ · ar}{\sqrt{a^2 − r^2}} = 0,
∫_{Y_α} π_3 dV = ∫_A \sqrt{a^2 − x^2 − y^2} · \frac{a}{\sqrt{a^2 − x^2 − y^2}} = a^3 π.
Proof. By considering the segment connecting x_1 and x_2, we can find a point ξ ∈ \bar{A} such that V(Dα(ξ))v(A) =
∫_A V(Dα). This shows there is a point ξ of R such that
v(Δ_1(R)) = ∫_A V(Dα) = V(Dα(ξ))v(A) = \frac{1}{2} V(Dα(ξ)) · v(R).
The result for v(Δ_2(R)) is proved similarly.
(b)
Proof. V (Dα) as a continuous function is uniformly continuous on the compact set Q.
(c)
Proof.
\left| A(P) − ∫_Q V(Dα) \right| ≤ ∑_R \left| v(Δ_1(R)) + v(Δ_2(R)) − ∫_R V(Dα) \right|
= ∑_R \left| \frac{1}{2}[V(Dα(ξ_1(R))) + V(Dα(ξ_2(R)))] v(R) − ∫_R V(Dα) \right|
≤ ∑_R ∫_R \left| \frac{V(Dα(ξ_1(R))) + V(Dα(ξ_2(R)))}{2} − V(Dα) \right|.
Given ε > 0, there exists a δ > 0 such that if x_1, x_2 ∈ Q with |x_1 − x_2| < δ, we must have |V(Dα(x_1)) −
V(Dα(x_2))| < ε/v(Q). So for every partition P of Q of mesh less than δ,
\left| A(P) − ∫_Q V(Dα) \right| < ∑_R ∫_R \frac{ε}{v(Q)} = ε.
23 Manifolds in Rn
1.
Proof. In this case, we set U = R and V = M = {(x, x^2) : x ∈ R}. Then α maps U onto V in a one-to-one
fashion. Moreover, we have
(1) α is of class C^∞.
(2) α^{−1}((x, x^2)) = x is continuous, for (x_n, x_n^2) → (x, x^2) as n → ∞ implies x_n → x as n → ∞.
(3) Dα(x) = \begin{pmatrix} 1 \\ 2x \end{pmatrix} has rank 1 for each x ∈ U.
So M is a 1-manifold in R^2 covered by the single coordinate patch α.
2.
Proof. For any point p ∈ S^1 with p ≠ (1, 0), we let U = (0, 2π), V = S^1 − {(1, 0)}, and let α : U → V be defined
by α(θ) = (\cos θ, \sin θ). Then α maps U onto V continuously in a one-to-one fashion. Moreover,
(1) α is of class C^∞.
(2) α^{−1} is continuous, for (\cos θ_n, \sin θ_n) → (\cos θ, \sin θ) as n → ∞ implies θ_n → θ as n → ∞.
(3) Dα(θ) = \begin{pmatrix} −\sin θ \\ \cos θ \end{pmatrix} has rank 1.
So α is a coordinate patch. For p = (1, 0), we consider U = (−π, π), V = S^1 − {(−1, 0)}, and let β : U → V
be defined by β(θ) = (\cos θ, \sin θ). We can prove in a similar way that β is a coordinate patch. Combining
the two patches, we conclude the unit circle S^1 is a 1-manifold in R^2.
(b)
4.
Proof. Let U = A and V = {(x, f(x)) : x ∈ A}. Define α : U → V by α(x) = (x, f(x)). Then α maps U
onto V in a one-to-one fashion. Moreover,
(1) α is of class C^r.
(2) α^{−1} is continuous, for (x_n, f(x_n)) → (x, f(x)) as n → ∞ implies x_n → x as n → ∞.
(3) Dα(x) = \begin{pmatrix} I_k \\ Df(x) \end{pmatrix} has rank k.
So V is a k-manifold in R^{k+1} with a single coordinate patch α.
5.
Proof. For any x ∈ M and y ∈ N, there are a coordinate patch α for x and a coordinate patch β for y,
respectively. Denote by U the domain of α, which is open in R^k, and by W the domain of β, which is open in
either R^l or H^l. Then U × W is open in either R^{k+l} or H^{k+l}, depending on whether W is open in R^l or H^l. This is the
essential reason why we need at least one manifold to have no boundary: if both M and N have boundaries,
U × W may not be open in R^{k+l} or H^{k+l}.
The rest of the proof is routine. We define a map f : U × W → α(U ) × β(W ) by f (x, y) = (α(x), β(y)).
Since α(U ) is open in M and β(W ) is open in N by the definition of coordinate patch, f (U × W ) =
α(U ) × β(W ) is open in M × N under the product topology. f is one-to-one and continuous, since α and β
enjoy such properties. Moreover,
(1) f is of class C r , since α and β are of class C r .
(2) f^{−1} = (α^{−1}, β^{−1}) is continuous since α^{−1} and β^{−1} are continuous.
(3) Df(x, y) = \begin{pmatrix} Dα(x) & 0 \\ 0 & Dβ(y) \end{pmatrix} clearly has rank k + l for each (x, y) ∈ U × W.
Therefore, we conclude M × N is a (k + l)-manifold in R^{m+n}.
6. (a)
Proof. We define α1 : [0, 1) → [0, 1) by α1 (x) = x and α2 : [0, 1) → (0, 1] by α2 (x) = −x + 1. Then it’s easy
to check α1 and α2 are both coordinate patches.
(b)
Proof. Intuitively, I × I cannot be a 2-manifold since it has "corners". For a formal proof, assume I × I is
a 2-manifold of class C^r with r ≥ 1. By Theorem 24.3, ∂(I × I), the boundary of I × I, is a 1-manifold
without boundary of class C^r. Assume α is a coordinate patch of ∂(I × I) whose image includes one of those
corner points. Then Dα cannot exist at that corner point, a contradiction. In conclusion, I × I cannot be a
2-manifold of class C^r with r ≥ 1.
By Theorem 24.4, N is a 3-manifold and T = ∂N is a 2-manifold without boundary.
2.
Proof. We write any point x ∈ R^{n+k} as (x_1, x_2) with x_1 ∈ R^n and x_2 ∈ R^k. We first assume D_{x_1} f(p) has
rank n. Define F(x) = (f(x), x_2); then
DF = \begin{pmatrix} D_{x_1} f & D_{x_2} f \\ 0 & I_k \end{pmatrix}.
So \det DF(p) = \det D_{x_1} f(p) ≠ 0. By the
inverse function theorem, there is an open set U of R^{n+k} containing p such that F carries U in a one-to-one
fashion onto an open set W of R^{n+k} and its inverse function G is of class C^r. Denote by π : R^{n+k} → R^n the
projection π(x) = x_1; then f ◦ G(x) = π ◦ F ◦ G(x) = π(x) on W.
In general, since Df(p) has rank n, there are j_1 < · · · < j_n such that the matrix \frac{∂(f_1, · · · , f_n)}{∂(x_{j_1}, · · · , x_{j_n})} has rank n.
Letting H be the linear map that permutes the coordinates so that x_{j_1}, · · · , x_{j_n} come first, f ◦ H
is of the type considered previously. So, using the notation of the previous paragraph, f ◦ (H ◦ G)(x) = π(x)
on W.
By the lemma and using its notation, ∀p ∈ M = {x : f (x) = 0}, there is a C r -diffeomorphism G
between an open set W of Rn+k and an open set U of Rn+k containing p, such that f ◦ G = π on W . So
U ∩ M = {x ∈ U : f (x) = 0} = G(W ) ∩ (f ◦ G ◦ G−1 )−1 ({0}) = G(W ) ∩ G(π −1 ({0})) = G(W ∩ {0} × Rk ).
Therefore α(x1 , · · · , xk ) := G((0, x1 , · · · , xk )) is a k-dimensional coordinate patch on M about p. Since p is
arbitrarily chosen, we have proved M is a k-manifold without boundary in Rn+k .
Now, for every p ∈ N = {x : f_1(x) = · · · = f_{n−1}(x) = 0, f_n(x) ≥ 0}, there are two cases: f_n(p) > 0 and f_n(p) = 0.
For the first case, by an argument similar to that for M, we can find a C^r-diffeomorphism G_1 between an
open set W of R^{n+k} and an open set U of R^{n+k} containing p, such that f ◦ G_1 = π_1 on W. Here π_1 is the
projection mapping to the first (n − 1) coordinates. So U ∩ N = U ∩ {x : f_1(x) = · · · = f_{n−1}(x) = 0} ∩ {x :
f_n(x) ≥ 0} = G_1(W ∩ {0} × R^{k+1}) ∩ {x ∈ U : f_n(x) ≥ 0}. When U is sufficiently small, by the continuity of
f_n and the fact f_n(p) > 0, we can assume f_n(x) > 0 for all x ∈ U. So U ∩ N = G_1(W ∩ {0} × R^{k+1}).
This shows β(x_1, · · · , x_{k+1}) := G_1((0, x_1, · · · , x_{k+1})) is a (k + 1)-dimensional coordinate patch on N about
p.
For the second case, we note p is necessarily in M . So Df (p) is of rank n and there is a C r -diffeomorphism
G between an open set W of Rn+k and an open set U of Rn+k containing p, such that f ◦ G = π on W .
So U ∩ N = {x ∈ U : f1 (x) = · · · = fn−1 (x) = 0, fn (x) ≥ 0} = G(W ) ∩ (π ◦ G−1 )−1 ({0} × [0, ∞)) =
G(W ∩π −1 ({0}×[0, ∞))) = G(W ∩{0}×[0, ∞)×Rk ). This shows γ(x1 , · · · , xk+1 ) := G((0, xk+1 , x1 , · · · , xk ))
is a (k + 1)-dimensional coordinate patch on N about p.
In summary, we have shown N is a (k + 1)-manifold. Lemma 24.2 shows ∂N = M .
3.
Proof. Define H : R^3 → R^2 by H(x, y, z) = (f(x, y, z), g(x, y, z)). By the theorem proved in Problem 2, if
DH(x, y, z) = \begin{pmatrix} D_x f & D_y f & D_z f \\ D_x g & D_y g & D_z g \end{pmatrix}
has rank 2 for every (x, y, z) ∈ M := {(x, y, z) :
f(x, y, z) = g(x, y, z) = 0}, then M is a 1-manifold without boundary in R^3, i.e. a C^r curve without singularities.
4.
5. (a)
Proof. We write any point x ∈ R^9 as a 3 × 3 matrix with rows x_1 = [x_{11}, x_{12}, x_{13}], x_2 = [x_{21}, x_{22}, x_{23}], and
x_3 = [x_{31}, x_{32}, x_{33}]. Define f_1(x) = ||x_1||^2 − 1, f_2(x) = ||x_2||^2 − 1, f_3(x) = ||x_3||^2 − 1, f_4(x) = ⟨x_1, x_2⟩,
f_5(x) = ⟨x_1, x_3⟩, and f_6(x) = ⟨x_2, x_3⟩. Then O(3) is the solution set of the equation f(x) = 0.
(b)
Proof. We note
Df(x) = \frac{∂(f_1, · · · , f_6)}{∂(x_{11}, x_{12}, x_{13}, x_{21}, x_{22}, x_{23}, x_{31}, x_{32}, x_{33})} = \begin{pmatrix} 2x_{11} & 2x_{12} & 2x_{13} & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 2x_{21} & 2x_{22} & 2x_{23} & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 2x_{31} & 2x_{32} & 2x_{33} \\ x_{21} & x_{22} & x_{23} & x_{11} & x_{12} & x_{13} & 0 & 0 & 0 \\ x_{31} & x_{32} & x_{33} & 0 & 0 & 0 & x_{11} & x_{12} & x_{13} \\ 0 & 0 & 0 & x_{31} & x_{32} & x_{33} & x_{21} & x_{22} & x_{23} \end{pmatrix}.
Since x_1, x_2, x_3 are pairwise orthogonal and non-zero, they are linearly independent. From
the structure of Df, the rows of Df(x) are independent, so Df(x) has rank 6 for x ∈ O(3). By the theorem proved in Problem 2,
O(3) is a 3-manifold without boundary in R^9. Finally, O(3) = {x : f(x) = 0} is clearly bounded and closed,
hence compact.
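The rank computation can be spot-checked at a sample point of O(3); the sketch below evaluates Df at the identity matrix:

```python
import numpy as np

# Rows of Df come from the gradients of the six constraints f1,...,f6 on R^9
def Df(X):
    x1, x2, x3 = X                     # rows of the 3x3 matrix
    Z = np.zeros(3)
    return np.array([
        np.concatenate([2 * x1, Z, Z]),  # grad f1
        np.concatenate([Z, 2 * x2, Z]),  # grad f2
        np.concatenate([Z, Z, 2 * x3]),  # grad f3
        np.concatenate([x2, x1, Z]),     # grad f4 = grad <x1, x2>
        np.concatenate([x3, Z, x1]),     # grad f5 = grad <x1, x3>
        np.concatenate([Z, x3, x2]),     # grad f6 = grad <x2, x3>
    ])

rank = np.linalg.matrix_rank(Df(np.eye(3)))
print(rank)  # 6, so O(3) has dimension 9 - 6 = 3
```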
6.
Proof. The argument is similar to that of Problem 5, and the dimension is n^2 − n − \frac{n(n−1)}{2} = \frac{n(n−1)}{2}.
1-manifold, hence it has measure zero in R2 ). On the set {(t, z) : 0 < t < 2π, |z| < a}, α is smooth and
α−1 (x, y, z) = (t, z) is continuous on S 2 (a) − D. Finally, by the calculation done in the text, the rank of Dα
is 2 on {(t, z) : 0 < t < 2π, |z| < a}.
(Dα)^{tr} Dα = \begin{pmatrix} −(a^2 − z^2)^{1/2} \sin t & (a^2 − z^2)^{1/2} \cos t & 0 \\ \frac{−z \cos t}{(a^2 − z^2)^{1/2}} & \frac{−z \sin t}{(a^2 − z^2)^{1/2}} & 1 \end{pmatrix} \begin{pmatrix} −(a^2 − z^2)^{1/2} \sin t & \frac{−z \cos t}{(a^2 − z^2)^{1/2}} \\ (a^2 − z^2)^{1/2} \cos t & \frac{−z \sin t}{(a^2 − z^2)^{1/2}} \\ 0 & 1 \end{pmatrix} = \begin{pmatrix} a^2 − z^2 & 0 \\ 0 & \frac{a^2}{a^2 − z^2} \end{pmatrix}.
So V(Dα) = a and v(S^2(a)) = ∫_{\{(t,z) : 0 < t < 2π, |z| < a\}} V(Dα) = 4πa^2.
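That V(Dα) is the constant a on this patch can be verified numerically (a sketch; a = 1.5 is an arbitrary sample value):

```python
import numpy as np

a = 1.5

def V_Dalpha(t, z):
    # Columns of D alpha for alpha(t, z) = (sqrt(a^2 - z^2) cos t, sqrt(a^2 - z^2) sin t, z)
    w = np.sqrt(a * a - z * z)
    col_t = np.array([-w * np.sin(t), w * np.cos(t), 0.0])
    col_z = np.array([-z * np.cos(t) / w, -z * np.sin(t) / w, 1.0])
    D = np.column_stack([col_t, col_z])
    return np.sqrt(np.linalg.det(D.T @ D))  # sqrt of the Gram determinant

vals = [V_Dalpha(t, z) for t in (0.3, 2.0) for z in (-0.7, 0.4)]
print(vals)  # each value equals a
```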
4.
Proof. Let (α_j) be a family of coordinate patches that covers M. Then (h ◦ α_j) is a family of coordinate
patches that covers N. Suppose ϕ_1, · · · , ϕ_l is a partition of unity on M that is dominated by (α_j); then
ϕ_1 ◦ h^{−1}, · · · , ϕ_l ◦ h^{−1} is a partition of unity on N that is dominated by (h ◦ α_j). Then
∫_N f dV = ∑_{i=1}^l ∫_N (ϕ_i ◦ h^{−1}) f dV
= ∑_{i=1}^l ∫_{Int U_i} (ϕ_i ◦ h^{−1} ◦ h ◦ α_i)(f ◦ h ◦ α_i) V(D(h ◦ α_i))
= ∑_{i=1}^l ∫_{Int U_i} (ϕ_i ◦ α_i)(f ◦ h ◦ α_i) V(Dα_i)
= ∑_{i=1}^l ∫_M ϕ_i (f ◦ h) dV
= ∫_M f ◦ h \, dV.
6.
Proof. Let L_0 = {x ∈ R^n : x_i > 0}. Then M ∩ L_0 is a manifold, for if α : U → V is a coordinate patch on
M, then α : U ∩ α^{−1}(L_0) → V ∩ L_0 is a coordinate patch on M ∩ L_0. Similarly, if we let L_1 = {x ∈ R^n : x_i < 0},
M ∩ L_1 is a manifold. Theorem 25.4 implies
c_i(M) = \frac{1}{v(M)} ∫_M π_i dV = \frac{1}{v(M)} \left[ ∫_{M ∩ L_0} π_i dV + ∫_{M ∩ L_1} π_i dV \right].
In order to show c_i(M) = 0, it suffices to show (π_i ◦ α_j)V(Dα_j) = −(π_i ◦ f ◦ α_j)V(D(f ◦ α_j)). Indeed,
8. (a)
Proof. Let (α_i) be a family of coordinate patches on M and ϕ_1, · · · , ϕ_l a partition of unity on M dominated
by (α_i). Let (β_j) be a family of coordinate patches on N and ψ_1, · · · , ψ_k a partition of unity on N dominated
by (β_j). Then it's easy to see ((α_i, β_j))_{i,j} is a family of coordinate patches on M × N and (ϕ_m ψ_n)_{1≤m≤l, 1≤n≤k}
is a partition of unity on M × N dominated by ((α_i, β_j))_{i,j}. Then
∫_{M×N} f · g \, dV = ∑_{1≤m≤l, 1≤n≤k} ∫_{M×N} (ϕ_m f)(ψ_n g) dV
= ∑_{1≤m≤l, 1≤n≤k} ∫_{Int U_m × Int V_n} (ϕ_m ◦ α_m · f ◦ α_m) V(Dα_m)(ψ_n ◦ β_n · g ◦ β_n) V(Dβ_n)
= ∑_{1≤m≤l, 1≤n≤k} ∫_{Int U_m} (ϕ_m ◦ α_m · f ◦ α_m) V(Dα_m) ∫_{Int V_n} (ψ_n ◦ β_n · g ◦ β_n) V(Dβ_n)
= \left[ ∫_M f dV \right] \left[ ∫_N g dV \right].
(b)
Proof. Set f = 1 and g = 1 in (a).
(c)
Proof. By (a), v(S 1 × S 1 ) = v(S 1 ) · v(S 1 ) = 4π 2 a2 .
26 Multilinear Algebra
4.
Proof. By Example 1, it is easy to see f and g are not tensors on R4 . h is a tensor: h = ϕ1,1 − 7ϕ2,3 .
5.
(b)
Proof. f ⊗ g(x, y, z, u, v) = 2x1 y2 z2 u2 v1 − 10x1 y2 z2 u3 v1 − x2 y3 z1 u2 v1 + 5x2 y3 z1 u3 v1 .
7.
Proof. Suppose f = ∑_I d_I ϕ_I and g = ∑_J d_J ϕ_J. Then f ⊗ g = (∑_I d_I ϕ_I) ⊗ (∑_J d_J ϕ_J) = ∑_{I,J} d_I d_J ϕ_I ⊗ ϕ_J =
∑_{I,J} d_I d_J ϕ_{I,J}. This shows the four properties stated in Theorem 26.4 characterize the tensor product
uniquely.
8.
Proof. For any x ∈ Rm , T ∗ f (x) = f (T (x)) = f (B · x) = (AB) · x. So the matrix of the 1-tensor T ∗ f on Rm
is AB.
27 Alternating Tensors
1.
Proof. Since h is not multilinear, h is not an alternating tensor. f = ϕ_{1,2} − ϕ_{2,1} + ϕ_{1,1} is a tensor. The only
permutations of {1, 2} are the identity mapping id and σ : σ(1) = 2, σ(2) = 1. So f is alternating if and
only if f^σ(x, y) = −f(x, y). Since f^σ(x, y) = f(y, x) = y_1 x_2 − y_2 x_1 + y_1 x_1 ≠ −f(x, y), we conclude f is not
alternating.
Similarly, g = ϕ_{1,3} − ϕ_{3,2} is a tensor. And g^σ = ϕ_{3,1} − ϕ_{2,3} ≠ −g. So g is not alternating.
3.
Proof. Suppose I = (i1 , · · · , ik ). If {i1 , · · · , ik } ̸= {j1 , · · · , jk } (set equality), then ϕI (aj1 , · · · , ajk ) = 0. If
{i1 , · · · , ik } = {j1 , · · · , jk }, there must exist a permutation σ of {1, 2, · · · , k}, such that I = (i1 , · · · , ik ) =
(jσ(1) , · · · , jσ(k) ). Then ϕI (aj1 , · · · , ajk ) = (sgnσ)(ϕI )σ (aj1 , · · · , ajk ) = (sgnσ)ϕI (ajσ(1) , · · · , ajσ(k) ) = sgnσ.
In summary, we have
ϕ_I(a_{j_1}, · · · , a_{j_k}) = \begin{cases} \mathrm{sgn}\,σ & \text{if there is a permutation } σ \text{ of } \{1, 2, · · · , k\} \text{ such that } I = J_σ = (j_{σ(1)}, · · · , j_{σ(k)}), \\ 0 & \text{otherwise.} \end{cases}
4.
Proof. For any v1 , · · · , vk ∈ V and a permutation σ of {1, · · · , k}.
Proof. (AF)(x, y, z) = −ϕ_1 ∧ ϕ_4 ∧ ϕ_5(x, y, z) = −\det\begin{pmatrix} x_1 & y_1 & z_1 \\ x_4 & y_4 & z_4 \\ x_5 & y_5 & z_5 \end{pmatrix} = −x_1 y_4 z_5 + x_1 y_5 z_4 + x_4 y_1 z_5 − x_4 y_5 z_1 −
x_5 y_1 z_4 + x_5 y_4 z_1.
2.
Proof. Suppose G is a symmetric k-tensor, so that G^σ = G for every permutation σ. Then AG(v_1, · · · , v_k) = ∑_σ (\mathrm{sgn}\,σ)G^σ(v_1, · · · , v_k) = ∑_σ (\mathrm{sgn}\,σ)G(v_1, · · · , v_k) =
[∑_σ (\mathrm{sgn}\,σ)]G(v_1, · · · , v_k). Let e be an elementary permutation. Then σ ↦ e ◦ σ is a bijection of
the permutation group S_k of {1, 2, · · · , k}. So S_k can be divided into two disjoint subsets U_1 and U_2 so that
this map establishes a one-to-one correspondence between U_1 and U_2. By the fact \mathrm{sgn}(e ◦ σ) = −\mathrm{sgn}\,σ, we conclude
∑_σ (\mathrm{sgn}\,σ) = 0. This implies AG = 0.
3.
\frac{1}{l_1! · · · l_k!} A(f_1 ⊗ · · · ⊗ f_k) = f_1 ∧ · · · ∧ f_k
for any k.
4.
Proof. ϕ_{i_1} ∧ · · · ∧ ϕ_{i_k}(x_1, · · · , x_k) = A(ϕ_{i_1} ⊗ · · · ⊗ ϕ_{i_k})(x_1, · · · , x_k) = ∑_σ (\mathrm{sgn}\,σ)(ϕ_{i_1} ⊗ · · · ⊗ ϕ_{i_k})^σ(x_1, · · · , x_k) =
∑_σ (\mathrm{sgn}\,σ)(ϕ_{i_1} ⊗ · · · ⊗ ϕ_{i_k})(x_{σ(1)}, · · · , x_{σ(k)}) = ∑_σ (\mathrm{sgn}\,σ) x_{i_1, σ(1)} · · · x_{i_k, σ(k)} = \det X_I.
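The identity just proved — the wedge of dual basis 1-tensors evaluates to a subdeterminant — can be checked on random data by summing the Leibniz formula directly (a sketch, not part of the original solution):

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 3))   # columns are x_1, x_2, x_3 in R^5
I = (0, 2, 4)                     # rows i_1 < i_2 < i_3 (0-indexed)

def sgn(p):
    # parity of a permutation given as a tuple
    s = 1
    for a in range(len(p)):
        for b in range(a + 1, len(p)):
            if p[a] > p[b]:
                s = -s
    return s

# signed sum over permutations = phi_{i1} ^ phi_{i2} ^ phi_{i3}(x_1, x_2, x_3)
wedge = sum(sgn(p) * np.prod([X[I[j], p[j]] for j in range(3)])
            for p in itertools.permutations(range(3)))
print(wedge, np.linalg.det(X[list(I), :]))  # the two values agree
```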
5.
Proof. Suppose F is a k-tensor. Then
6. (a)
Proof. T^*ψ_I(v_1, · · · , v_k) = ψ_I(T(v_1), · · · , T(v_k)) = ψ_I(B · v_1, · · · , B · v_k). In particular, for \bar{J} = (\bar{j}_1, · · · , \bar{j}_k),
c_{\bar{J}} = ∑_{[J]} c_J ψ_J(e_{\bar{j}_1}, · · · , e_{\bar{j}_k}) = T^*ψ_I(e_{\bar{j}_1}, · · · , e_{\bar{j}_k}) = ψ_I(B · e_{\bar{j}_1}, · · · , B · e_{\bar{j}_k}) = ψ_I(β_{\bar{j}_1}, · · · , β_{\bar{j}_k}), where β_i is
the i-th column of B. So c_{\bar{J}} = \det[β_{\bar{j}_1}, · · · , β_{\bar{j}_k}]_I. Therefore, c_J is the determinant of the matrix consisting
of the i_1, · · · , i_k rows and the j_1, · · · , j_k columns of B, where I = (i_1, · · · , i_k) and J = (j_1, · · · , j_k).
(b)
Proof. T^*f = ∑_{[I]} d_I T^*(ψ_I) = ∑_{[I]} d_I ∑_{[J]} \det B_{I,J} \, ψ_J = ∑_{[J]} (∑_{[I]} d_I \det B_{I,J}) ψ_J, where B_{I,J} is the matrix
consisting of the i_1, · · · , i_k rows and the j_1, · · · , j_k columns of B (I = (i_1, · · · , i_k) and J = (j_1, · · · , j_k)).
29 Tangent Vectors and Differential Forms
1.
Proof. γ_*(t; e_1) = (γ(t); Dγ(t) · e_1) = (γ(t); (γ_1'(t), · · · , γ_n'(t))^{tr}), which is the velocity vector of γ corresponding to the
parameter value t.
2.
Proof. The velocity vector of the curve γ(t) = α(x + tv) corresponding to parameter value t = 0 is calculated
by
\frac{d}{dt}γ(t)\Big|_{t=0} = \lim_{t→0} \frac{α(x + tv) − α(x)}{t} = Dα(x) · v.
So α_*(x; v) = (α(x); Dα(x) · v) = (α(x); \frac{d}{dt}γ(t)|_{t=0}).
3.
Proof. Suppose α : U_α → V_α and β : U_β → V_β are two coordinate patches about p, with α(x) = β(y) = p.
Because R^k is spanned by the vectors e_1, · · · , e_k, the space T_p^α(M) obtained by using α is spanned by the
vectors (p; ∂α(x)/∂x_j), j = 1, · · · , k, and the space T_p^β(M) obtained by using β is spanned by the vectors (p; ∂β(y)/∂y_i),
i = 1, · · · , k. By the chain rule, Dα(x) = Dβ(y) · D(β^{−1} ◦ α)(x).
Since D(β^{−1} ◦ α)(x) is of rank k, the linear space spanned by (∂α(x)/∂x_j)_{j=1}^k agrees with the linear space
spanned by (∂β(y)/∂y_i)_{i=1}^k.
4. (a)
Proof. Suppose α : U → V is a coordinate patch about p, with α(x) = p. Since p ∈ M − ∂M, we can without
loss of generality assume U is an open subset of R^k. By the definition of tangent vector, there exists u ∈ R^k
such that v = Dα(x) · u. For ε sufficiently small, {x + tu : |t| ≤ ε} ⊂ U and γ(t) := α(x + tu) (|t| ≤ ε) has
its image in M. Clearly \frac{d}{dt}γ(t)|_{t=0} = Dα(x) · u = v.
(b)
Proof. Suppose γ : (−ε, ε) → R^n is a parametrized-curve whose image set lies in M. Denote γ(0) by p and
assume α : U → V is a coordinate patch about p. For v := \frac{d}{dt}γ(t)|_{t=0}, we define u = Dα^{−1}(p) · v. Then
5.
Proof. Similar to the proof of Problem 4, with (−ε, ε) changed to [0, ε) or (−ε, 0]. We omit the details.
30 The Differential Operator
2.
and
ω ∧ η = (−xy 2 z 2 − 3x)dx ∧ dy + (2x2 y + xyz)dx ∧ dz + (6x − y 2 z 3 )dy ∧ dz.
So
d(ω ∧ η) = (−2xy 2 z − 2x2 − xz + 6)dx ∧ dy ∧ dz,
(dω) ∧ η = −2x2 dx ∧ dy ∧ dz − xzdx ∧ dy ∧ dz,
and
ω ∧ dη = 2xy 2 zdx ∧ dy ∧ dz − 6dx ∧ dy ∧ dz.
Therefore, (dω) ∧ η − ω ∧ dη = (−2xy 2 z − 2x2 − xz + 6)dx ∧ dy ∧ dz = d(ω ∧ η).
3.
Proof. In R^2, ω = y dx − x dy vanishes at x_0 = (0, 0), but dω = −2 dx ∧ dy does not vanish at x_0. In general,
suppose ω is a k-form defined in an open set A of R^n, with the general form ω = ∑_{[I]} f_I dx_I. If ω vanishes
at each x in a neighborhood of x_0, we must have f_I = 0 in that neighborhood for each I; hence all partial
derivatives D_i f_I vanish at x_0. So dω = ∑_{[I]} df_I ∧ dx_I = ∑_{[I]} (∑_i D_i f_I dx_i) ∧ dx_I
vanishes at x_0.
4.
Proof. dω = d\left(\frac{x}{x^2+y^2}\right) ∧ dx + d\left(\frac{y}{x^2+y^2}\right) ∧ dy = \frac{2xy}{(x^2+y^2)^2} dx ∧ dy + \frac{−2xy}{(x^2+y^2)^2} dx ∧ dy = 0. So ω is closed. Define
θ = \frac{1}{2} \log(x^2 + y^2); then dθ = ω. So ω is exact on A.
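A symbolic check that θ = ½ log(x² + y²) is indeed a primitive of ω (a SymPy sketch):

```python
import sympy as sp

x, y = sp.symbols('x y')
theta = sp.log(x**2 + y**2) / 2
# d(theta) should reproduce omega = x/(x^2+y^2) dx + y/(x^2+y^2) dy
dx_coeff = sp.simplify(sp.diff(theta, x) - x / (x**2 + y**2))
dy_coeff = sp.simplify(sp.diff(theta, y) - y / (x**2 + y**2))
print(dx_coeff, dy_coeff)  # 0 0, so d(theta) = omega
```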
5. (a)
Proof. dω = \frac{−(x^2+y^2)+2y^2}{(x^2+y^2)^2} dy ∧ dx + \frac{x^2+y^2−2x^2}{(x^2+y^2)^2} dx ∧ dy = 0. So ω is closed.
(c)
Then
\det \frac{∂(x, y)}{∂(r, t)} = \det\begin{pmatrix} \cos t & −r \sin t \\ \sin t & r \cos t \end{pmatrix} = r ≠ 0.
By part (b) and the inverse function theorem (Theorem 8.2, the global version), we conclude ϕ is of class
C^∞.
(d)
Proof. Using the transformation given in part (c), we have dx = \cos t \, dr − r \sin t \, dt and dy = \sin t \, dr + r \cos t \, dt.
So ω = [−r \sin t(\cos t \, dr − r \sin t \, dt) + r \cos t(\sin t \, dr + r \cos t \, dt)]/r^2 = dt = dϕ.
(e)
Proof. We follow the hint. Suppose g is a closed 0-form in B. Denote by a the point (−1, 0) of R2 . For
any x ∈ B, let γ(t) : [0, 1] → B be the segment connecting a and x, with γ(0) = a and γ(1) = x. Then by
mean-value theorem (Theorem 7.3), there exists t0 ∈ (0, 1), such that g(a)−g(x) = Dg(a+t0 (x−a))·(a−x).
Since g is closed in B, Dg = 0 in B. This implies g(x) = g(a) for any x ∈ B.
(f)
Proof. First, we note ϕ is not well-defined on all of A, so part (d) cannot be used to prove ω is exact in
A. Assume ω = df in A for some 0-form f. Then d(f − ϕ) = df − dϕ = ω − ω = 0 in B. By part (e),
f − ϕ is constant in B. Since \lim_{y↓0} ϕ(1, y) = 0 and \lim_{y↑0} ϕ(1, y) = 2π, f(1, y) has different limits as
y approaches 0 through positive and negative values. This is a contradiction, since f is a C^1 function defined
everywhere in A.
6.
Proof. dη = ∑_{i=1}^n (−1)^{i−1} D_i f_i \, dx_i ∧ dx_1 ∧ · · · ∧ \widehat{dx_i} ∧ · · · ∧ dx_n = (∑_{i=1}^n D_i f_i) dx_1 ∧ · · · ∧ dx_n. So dη = 0 if and
only if ∑_{i=1}^n D_i f_i = 0. Since D_i f_i(x) = \frac{||x||^2 − m x_i^2}{||x||^{m+2}}, we have ∑_{i=1}^n D_i f_i(x) = \frac{n − m}{||x||^m}. So dη = 0 if and only if m = n.
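The divergence computation can be confirmed with SymPy for n = 3 (a sketch, not part of the original solution):

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3', positive=True)
r = sp.sqrt(x1**2 + x2**2 + x3**2)

def divergence(m):
    # sum_i D_i (x_i / ||x||^m) in R^3; the claim is that this equals (3 - m)/||x||^m
    return sp.simplify(sum(sp.diff(xi / r**m, xi) for xi in (x1, x2, x3)))

print(divergence(3))                          # 0: for m = n = 3 the form is closed
print(sp.simplify(divergence(2) - 1 / r**2))  # 0: for m = 2 the divergence is 1/||x||^2
```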
7.
Proof. By linearity, it suffices to prove the theorem for ω = f dx_I, where I = (i_1, · · · , i_{k−1}) is a (k−1)-tuple
from {1, · · · , n} in ascending order. Indeed, in this case,
h(x) = d(f dx_I)(x)((x; v_1), · · · , (x; v_k)) = \left(∑_{i=1}^n D_i f(x) \, dx_i ∧ dx_I\right)((x; v_1), · · · , (x; v_k)).
Let X = [v_1 · · · v_k]. For each j ∈ {1, · · · , k}, let Y_j = [v_1 · · · \widehat{v_j} · · · v_k]. Then by Theorem 2.15 and Problem 4 of §28,
\det X(i, i_1, · · · , i_{k−1}) = ∑_{j=1}^k (−1)^{j−1} v_{ij} \det Y_j(i_1, · · · , i_{k−1}).
Therefore
h(x) = ∑_{i=1}^n D_i f(x) \det X(i, i_1, · · · , i_{k−1})
= ∑_{i=1}^n ∑_{j=1}^k D_i f(x)(−1)^{j−1} v_{ij} \det Y_j(i_1, · · · , i_{k−1})
= ∑_{j=1}^k (−1)^{j−1} (Df(x) · v_j) \det Y_j(i_1, · · · , i_{k−1}).
Meanwhile, g_j(x) = ω(x)((x; v_1), · · · , \widehat{(x; v_j)}, · · · , (x; v_k)) = f(x) \det Y_j(i_1, · · · , i_{k−1}). So
Also, d ◦ β_{n−1}(G) = d(∑_{i=1}^n (−1)^{i−1} g_i \, dx_1 ∧ · · · ∧ \widehat{dx_i} ∧ · · · ∧ dx_n) = ∑_{i=1}^n (−1)^{i−1} D_i g_i \, dx_i ∧ dx_1 ∧ · · · ∧ \widehat{dx_i} ∧
· · · ∧ dx_n = (∑_{i=1}^n D_i g_i) dx_1 ∧ · · · ∧ dx_n, and β_n ◦ \mathrm{div}(G) = β_n(∑_{i=1}^n D_i g_i) = (∑_{i=1}^n D_i g_i) dx_1 ∧ · · · ∧ dx_n. So
d ◦ β_{n−1} = β_n ◦ \mathrm{div}.
(Proof of Theorem 31.2) We only need to check d ◦ α_1 = β_2 ◦ \mathrm{curl}. Indeed, d ◦ α_1(F) = d(∑_{i=1}^3 f_i dx_i) =
(D_2 f_1 dx_2 + D_3 f_1 dx_3) ∧ dx_1 + (D_1 f_2 dx_1 + D_3 f_2 dx_3) ∧ dx_2 + (D_1 f_3 dx_1 + D_2 f_3 dx_2) ∧ dx_3 = (D_2 f_3 − D_3 f_2)dx_2 ∧
dx_3 + (D_3 f_1 − D_1 f_3)dx_3 ∧ dx_1 + (D_1 f_2 − D_2 f_1)dx_1 ∧ dx_2, and β_2 ◦ \mathrm{curl}(F) = β_2((x; (D_2 f_3 − D_3 f_2)e_1 + (D_3 f_1 −
D_1 f_3)e_2 + (D_1 f_2 − D_2 f_1)e_3)) = (D_2 f_3 − D_3 f_2)dx_2 ∧ dx_3 − (D_3 f_1 − D_1 f_3)dx_1 ∧ dx_3 + (D_1 f_2 − D_2 f_1)dx_1 ∧ dx_2.
So d ◦ α_1 = β_2 ◦ \mathrm{curl}.
2.
Proof. α1 F = f1 dx1 + f2 dx2 and β1 F = f1 dx2 − f2 dx1 .
3. (a)
Proof. Let f be a scalar field in A and F(x) = (x; [f_1(x), f_2(x), f_3(x)]) a vector field in A. Define
ω_F^1 = f_1 dx_1 + f_2 dx_2 + f_3 dx_3 and ω_F^2 = f_1 dx_2 ∧ dx_3 + f_2 dx_3 ∧ dx_1 + f_3 dx_1 ∧ dx_2. Then it is straightforward
to check that dω_F^1 = ω_{\mathrm{curl}\,F}^2 and dω_F^2 = (\mathrm{div}\,F) dx_1 ∧ dx_2 ∧ dx_3. So by the general principle d(dω) = 0, we
have
0 = d(df) = d(ω_{\mathrm{grad}\,f}^1) = ω_{\mathrm{curl\,grad}\,f}^2
and
0 = d(dω_F^1) = d(ω_{\mathrm{curl}\,F}^2) = (\mathrm{div\,curl}\,F) dx_1 ∧ dx_2 ∧ dx_3.
These two equations imply that \mathrm{curl\,grad}\,f = 0 and \mathrm{div\,curl}\,F = 0.
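Both identities rest on the equality of mixed partials; SymPy confirms them symbolically for generic fields (a sketch, not part of the original solution):

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
f = sp.Function('f')(x1, x2, x3)
F = [sp.Function(n)(x1, x2, x3) for n in ('f1', 'f2', 'f3')]

grad = lambda g: [sp.diff(g, v) for v in (x1, x2, x3)]

def curl(G):
    return [sp.diff(G[2], x2) - sp.diff(G[1], x3),
            sp.diff(G[0], x3) - sp.diff(G[2], x1),
            sp.diff(G[1], x1) - sp.diff(G[0], x2)]

div = lambda G: sum(sp.diff(G[i], v) for i, v in enumerate((x1, x2, x3)))

print([sp.simplify(c) for c in curl(grad(f))])  # [0, 0, 0]: curl grad f = 0
print(sp.simplify(div(curl(F))))                # 0: div curl F = 0
```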
4. (a)
Proof. γ_2(αH + βG) = ∑_{i<j} [αh_{ij}(x) + βg_{ij}(x)] dx_i ∧ dx_j = α ∑_{i<j} h_{ij}(x) dx_i ∧ dx_j + β ∑_{i<j} g_{ij}(x) dx_i ∧ dx_j =
αγ_2(H) + βγ_2(G). So γ_2 is a linear mapping. It is also easy to see γ_2 is one-to-one and onto, as the skew-
symmetry of H implies h_{ii} = 0 and h_{ij} + h_{ji} = 0.
(b)
Proof. Suppose F is a vector field in A and H ∈ S(A). We define twist : {vector fields in A} → S(A) by
twist(F )ij = Di fj − Dj fi , and spin : S(A) → {vector fields in A} by spin(H) = (x; (D4 h23 − D3 h24 +
D2 h34 , −D4 h13 + D3 h14 − D1 h34 , D4 h12 − D2 h14 + D1 h24 , −D3 h12 + D2 h13 − D1 h23 )).
5. (a)
Proof. Suppose ω = ∑_{i=1}^n a_i dx_i is a 1-form such that ω(x)(x; v) = ⟨f(x), v⟩. Then ∑_{i=1}^n a_i(x)v_i =
∑_{i=1}^n f_i(x)v_i. Choosing v = e_i, we conclude a_i = f_i. So ω = α_1 F.
(b)
Proof. Suppose ω is an (n − 1)-form such that ω(x)((x; v_1), · · · , (x; v_{n−1})) = ε V(g(x), v_1, · · · , v_{n−1}). Assume
ω has the representation ∑_{i=1}^n a_i \, dx_1 ∧ · · · ∧ \widehat{dx_i} ∧ · · · ∧ dx_n. Then
ω(x)((x; v_1), · · · , (x; v_{n−1})) = ∑_{i=1}^n a_i(x) \det[v_1, · · · , v_{n−1}]_{(1, · · · , \hat{i}, · · · , n)}
= ∑_{i=1}^n (−1)^{i−1}[(−1)^{i−1} a_i(x)] \det[v_1, · · · , v_{n−1}]_{(1, · · · , \hat{i}, · · · , n)}
= \det[a(x), v_1, · · · , v_{n−1}],
where a(x) has i-th component (−1)^{i−1} a_i(x). Since this holds for all v_1, · · · , v_{n−1},
we can conclude \det[a(x), v_1, · · · , v_{n−1}] = \det[g(x), v_1, · · · , v_{n−1}], or equivalently, a(x) = g(x).
Proof. Suppose ω = f dx1 ∧· · ·∧dxn is an n-form such that ω(x)((x; v1 ), · · · , (x; vn )) = ε·h(x)·V (v1 , · · · , vn ).
This is equivalent to f (x)det[v1 , · · · , vn ] = h(x)det[v1 , · · · , vn ]. So f = h and ω = βn h.
Proof. dω = −x \, dx ∧ dy − 3 \, dy ∧ dz, and α^*(ω) = (x ◦ α)(y ◦ α) dα_1 + 2(z ◦ α) dα_2 − (y ◦ α) dα_3 = u^3 v(u \, dv + v \, du) +
2(3u + v)(2u \, du) − u^2(3 \, du + dv) = (u^3 v^2 + 9u^2 + 4uv) du + (u^4 v − u^2) dv. Therefore
Proof. Note α^* y_i = y_i ◦ α = α_i.
5.
Proof. α^*(dy_I) is an l-form in A, so we can write it as α^*(dy_I) = ∑_{[J]} h_J dx_J, where J is an ascending l-tuple
from the set {1, · · · , k}. Fix J = (j_1, · · · , j_l); we have
6. (a)
Proof. We fix x ∈ A and denote α(x) by y. Then G(y) = α_*(F(x)) = (y; Dα(x) · f(x)). Define g(y) =
Dα(x) · f(x) = (Dα · f)(α^{−1}(y)). Then g_i(y) = (∑_{j=1}^n D_j α_i f_j)(α^{−1}(y)) and we have
α^*(α_1 G) = α^*(∑_{i=1}^n g_i dy_i) = ∑_{i=1}^n (g_i ◦ α) dα_i = ∑_{i=1}^n (g_i ◦ α) ∑_{j=1}^n D_j α_i dx_j = ∑_{j=1}^n \left(∑_{i=1}^n D_j α_i (g_i ◦ α)\right) dx_j.
So α^*(α_1 G) = α_1 F if and only if for each j,
f_j = ∑_{i=1}^n D_j α_i (g_i ◦ α) = ∑_{i=1}^n ∑_{k=1}^n D_j α_i D_k α_i f_k = [D_j α_1 \ D_j α_2 \ · · · \ D_j α_n] · Dα · f,
that is, Dα(x)^{tr} · Dα(x) · f(x) = f(x). So α^*(α_1 G) = α_1 F for every F if and only if Dα(x) is an orthogonal matrix for
each x.
(b)
Proof. β_{n−1} F = ∑_{i=1}^n (−1)^{i−1} f_i \, dx_1 ∧ · · · ∧ \widehat{dx_i} ∧ · · · ∧ dx_n and
α^*(β_{n−1} G) = α^*\left(∑_{i=1}^n (−1)^{i−1} g_i \, dy_1 ∧ · · · ∧ \widehat{dy_i} ∧ · · · ∧ dy_n\right)
= ∑_{i=1}^n (−1)^{i−1} (g_i ◦ α) \, α^*(dy_1 ∧ · · · ∧ \widehat{dy_i} ∧ · · · ∧ dy_n)
= ∑_{k=1}^n \left[ ∑_{i=1}^n (−1)^{i−1} \left(∑_{j=1}^n D_j α_i f_j\right) \det \frac{∂(α_1, · · · , \widehat{α_i}, · · · , α_n)}{∂(x_1, · · · , \widehat{x_k}, · · · , x_n)} \right] dx_1 ∧ · · · ∧ \widehat{dx_k} ∧ · · · ∧ dx_n.
23
So α∗ (βn−1 F ) = βn−1 F if and only if for any k ∈ {1, · · · , n},
∑
n
∂(α1 , · · · , αbi , · · · , αn )
fk = (−1)k+i Dj αi fj det
i,j=1
∂(x1 , · · · , x
ck , · · · , xn )
∑ n ∑n
∂(α1 , · · · , αbi , · · · , αn )
= fj (−1)k+i Dj αi det
j=1 i=1
∂(x1 , · · · , x
ck , · · · , xn )
∑
n
= fj δkj detDα
j=1
= fk detDα.
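The step from the double sum to δkj det Dα uses the cofactor expansion of the determinant of Dα with its k-th column replaced by its j-th column. A numerical spot-check of this identity, with a random matrix standing in for Dα (0-based indices below):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
A = rng.standard_normal((n, n))   # stands in for Dα; A[i, j] = D_j α_i

def minor(M, i, k):
    """Delete row i and column k of M."""
    return np.delete(np.delete(M, i, axis=0), k, axis=1)

detA = np.linalg.det(A)
err = 0.0
for k in range(n):
    for j in range(n):
        # sum_i (-1)^{k+i} A[i, j] * det(minor of A at row i, column k)
        s = sum((-1) ** (i + k) * A[i, j] * np.linalg.det(minor(A, i, k))
                for i in range(n))
        expected = detA if j == k else 0.0
        err = max(err, abs(s - expected))
assert err < 1e-9
```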
(c)

Proof. α∗(βnk) = α∗(k dy1 ∧ · · · ∧ dyn) = (k ◦ α) · α∗(dy1 ∧ · · · ∧ dyn) = h · det Dα · dx1 ∧ · · · ∧ dxn and βnh = h dx1 ∧ · · · ∧ dxn. So α∗(βnk) = βnh for all h if and only if det Dα = 1.
7.

Proof. (1) α̃∗(div F ) = div(α̃∗F ). Indeed,

α̃∗(div F )(y) = div F (x) = βn^{−1} ◦ βn(div F )(x) = βn^{−1} ◦ d(βn−1F )(x) = βn^{−1} ◦ d(α∗(βn−1G))(x)
= βn^{−1} ◦ α∗ ◦ d(βn−1G)(x) = βn^{−1} ◦ α∗ ◦ βn(div G)(x).

So

α̃∗(div F )(y) = βn^{−1} ◦ α∗ ◦ βn(div G)(x) = div G(α(x)) = div G(y) = div(α̃∗(F ))(y).
(2) α̃∗(grad h) = grad ◦ α̃∗(h). Indeed,

α̃∗(grad h)(y) = α∗(grad h ◦ α^{−1}(y)) = α∗(grad h(x)) = (y; Dα(x) · [D1h(x), · · · , Dnh(x)]^{tr}) = (y; Dα(x) · (Dh(x))^{tr}),

and

grad ◦ α̃∗(h)(y) = grad(h ◦ α^{−1})(y)
= (y; [D(h ◦ α^{−1})(y)]^{tr})
= (y; [Dh(α^{−1}(y)) · Dα^{−1}(y)]^{tr})
= (y; [Dh(x) · (Dα(x))^{−1}]^{tr}).

Since Dα(x) is orthogonal, (Dα(x))^{−1} = (Dα(x))^{tr}, and therefore

grad ◦ α̃∗(h)(y) = (y; [Dh(x) · (Dα(x))^{tr}]^{tr}) = (y; Dα(x) · (Dh(x))^{tr}) = α̃∗(grad h)(y).
(3) For n = 3, α̃∗(curl F ) = curl(α̃∗F ). Indeed, curl(α̃∗F )(y) = curl G(y), and

α̃∗(curl F )(y) = α∗(curl F (α^{−1}(y)))
= α∗(β2^{−1} ◦ β2 ◦ curl F (x))
= α∗(β2^{−1} ◦ d ◦ α1F (x))
= α∗(β2^{−1} ◦ d ◦ α∗ ◦ α1G(x))
= α∗(β2^{−1} ◦ α∗ ◦ d ◦ α1G(x))
= α∗(β2^{−1} ◦ α∗ ◦ β2 ◦ curl G(x))
= curl G(y),

where the last equality uses part (b) with n = 3.
33 Integrating Forms over Parametrized-Manifolds
1.
Proof.

∫_{Yα} (x2 dx2 ∧ dx3 + x1x3 dx1 ∧ dx3) = ∫_A ( v det[0, 1; 2u, 2v] + u(u² + v² + 1) det[1, 0; 2u, 2v] )
= ∫_A ( −2uv + 2uv(u² + v² + 1) ) = 1.
2.
Proof.

∫_{Yα} (x1 dx1 ∧ dx4 ∧ dx3 + 2x2x3 dx1 ∧ dx2 ∧ dx3)
= ∫_A α∗(−x1 dx1 ∧ dx3 ∧ dx4 + 2x2x3 dx1 ∧ dx2 ∧ dx3)
= ∫_A ( −s det[1, 0, 0; 0, 0, 1; 0, 4(2u − t), 2(t − 2u)] + 2ut det[1, 0, 0; 0, 1, 0; 0, 0, 1] ) ds ∧ du ∧ dt
= ∫_A ( 4s(2u − t) + 2ut )
= 6.
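The two 3 × 3 determinants in the display can be verified symbolically (a sanity check only, not part of the original solution):

```python
import sympy as sp

s, u, t = sp.symbols('s u t')
M1 = sp.Matrix([[1, 0, 0],
                [0, 0, 1],
                [0, 4*(2*u - t), 2*(t - 2*u)]])
M2 = sp.eye(3)

# -s det(M1) + 2ut det(M2) should reduce to 4s(2u - t) + 2ut
integrand = sp.expand(-s*M1.det() + 2*u*t*M2.det())
assert integrand == sp.expand(4*s*(2*u - t) + 2*u*t)
```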
3. (a)
Proof.

∫_{Yα} (1/||x||^m)(x1 dx2 ∧ dx3 − x2 dx1 ∧ dx3 + x3 dx1 ∧ dx2)
= ∫_A (1/||(u, v, (1 − u² − v²)^{1/2})||^m) [ u det ∂(x2, x3)/∂(u, v) − v det ∂(x1, x3)/∂(u, v) + (1 − u² − v²)^{1/2} det ∂(x1, x2)/∂(u, v) ]
= ∫_A ( u det[0, 1; −u/√(1 − u² − v²), −v/√(1 − u² − v²)] − v det[1, 0; −u/√(1 − u² − v²), −v/√(1 − u² − v²)] + (1 − u² − v²)^{1/2} det[1, 0; 0, 1] )
= ∫_A ( u²/√(1 − u² − v²) + v²/√(1 − u² − v²) + √(1 − u² − v²) )
= ∫_A 1/√(1 − u² − v²),

where we used ||(u, v, (1 − u² − v²)^{1/2})|| = 1, so the factor 1/||x||^m drops out. Applying the change of variables u = r cos θ, v = r sin θ (0 ≤ r ≤ 1, 0 ≤ θ < 2π), we have

∫_A 1/√(1 − u² − v²) = ∫_{[0,1]×[0,2π)} (1/√(1 − r²)) det[cos θ, −r sin θ; sin θ, r cos θ] = ∫_{[0,1]×[0,2π)} r/√(1 − r²) = 2π.
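The final change-of-variables step can be confirmed with a short computation (the Jacobian determinant det[cos θ, −r sin θ; sin θ, r cos θ] = r supplies the factor r):

```python
import sympy as sp

r, theta = sp.symbols('r theta', nonnegative=True)
# After the polar change of variables the integrand is r/sqrt(1 - r^2).
inner = sp.integrate(r / sp.sqrt(1 - r**2), (r, 0, 1))
assert inner == 1
# Integrating the constant inner value over theta in [0, 2*pi) gives 2*pi.
assert sp.integrate(inner, (theta, 0, 2*sp.pi)) == 2*sp.pi
```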
(b)
Proof. −2π.
4.
Proof. Suppose η has the representation η = f dx1 ∧ dx2 ∧ · · · ∧ dxk , where dxi is the standard elementary
1-form depending on the standard basis e1 , · · · , ek in Rk . Let a1 ,· · · ,ak be another basis for Rk and define
A = [a1 , · · · , ak ]. Then
η(x)((x; a1 ), · · · , (x; ak )) = f (x)detA.
If the frame (a1 , · · · , ak ) is orthonormal and right-handed, detA = 1. We consequently have
∫_A η = ∫_A f = ∫_{x∈A} η(x)((x; a1), · · · , (x; ak)).
34 Orientable Manifolds
1.
2.
Proof. Let α : Uα → Vα and β : Uβ → Vβ be two coordinate patches and suppose W := Vα ∩Vβ is non-empty.
∀p ∈ W , denote by x and y the points in α−1 (W ) and β −1 (W ) such that α(x) = p = β(y), respectively.
Then
D[(α ◦ r)^{−1} ◦ (β ◦ r)](r^{−1}(y)) = D(α ◦ r)^{−1}(p) · D(β ◦ r)(r^{−1}(y))
= D(r^{−1} ◦ α^{−1})(p) · D(β ◦ r)(r^{−1}(y))
= Dr^{−1}(x) · Dα^{−1}(p) · Dβ(y) · Dr(r^{−1}(y)).
3.
Proof. Denote by n the unit normal field corresponding to the orientation of M . Then [n, T ] is right-handed,
i.e. det[n, T ] > 0.
4.
Proof. ∂α/∂u = (−2π sin(2πu), 2π cos(2πu), 0)^{tr} and ∂α/∂v = (0, 0, 1)^{tr}. We need to find n = (n1, n2, n3)^{tr} such that det[n, ∂α/∂u, ∂α/∂v] > 0, ||n|| = 1, and n ⊥ span{∂α/∂u, ∂α/∂v}. Indeed, ⟨n, ∂α/∂v⟩ = 0 implies n3 = 0, and ⟨n, ∂α/∂u⟩ = 0 implies −n1 sin(2πu) + n2 cos(2πu) = 0. Combined with the condition n1² + n2² + n3² = n1² + n2² = 1 and

det[n1, −2π sin(2πu), 0; n2, 2π cos(2πu), 0; 0, 0, 1] = (n1 cos(2πu) + n2 sin(2πu)) · 2π > 0,

we can solve for n1 and n2: n1 = cos(2πu), n2 = sin(2πu). So the unit normal field corresponding to this orientation of C is given by n = (cos(2πu), sin(2πu), 0)^{tr}. In particular, for u = 0, α(0, v) = (1, 0, v) and n = (1, 0, 0)^{tr}. So n points outwards.
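The three conditions defining n can be checked numerically. The patch itself is not reproduced in this excerpt; α(u, v) = (cos(2πu), sin(2πu), v) is assumed here, which is consistent with the partial derivatives above:

```python
import numpy as np

def check(u):
    # Assumed patch: alpha(u, v) = (cos 2πu, sin 2πu, v), so the tangents are:
    two_pi_u = 2 * np.pi * u
    du = np.array([-2*np.pi*np.sin(two_pi_u), 2*np.pi*np.cos(two_pi_u), 0.0])
    dv = np.array([0.0, 0.0, 1.0])
    n = np.array([np.cos(two_pi_u), np.sin(two_pi_u), 0.0])
    assert abs(n @ du) < 1e-12 and abs(n @ dv) < 1e-12      # n ⟂ tangent plane
    assert abs(np.linalg.norm(n) - 1) < 1e-12               # unit length
    assert np.linalg.det(np.column_stack([n, du, dv])) > 0  # right-handed frame

for u in np.linspace(0, 1, 7):
    check(u)
```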
By Example 5, the orientation of {(x, y, z) : x² + y² = 1, z = 0} is counter-clockwise and the orientation of {(x, y, z) : x² + y² = 1, z = 1} is clockwise.
5.
Proof. We can regard M as a 2-manifold in R³ and apply Example 5. The unit normal vector of M as a 2-manifold is perpendicular to the plane where M lies and points towards us. Example 5 then gives the unit tangent vector field corresponding to the induced orientation of ∂M. Denote by n the unit normal field corresponding to ∂M. If α is a coordinate patch of M, [n, ∂α/∂x1] is right-handed. Since [∂α/∂x1, ∂α/∂x2] is right-handed and ∂α/∂x2 points into M, n points outwards from M.
Alternatively, we can apply Lemma 38.7.
6. (a)
Proof. The meaning of “well-defined” is that if x is covered by more than one coordinate patch of the same coordinate system, the definition of λ(x) is unchanged. More precisely, assume x is covered both by αi1 and αi2, as well as by βj1 and βj2; then det D(αi1^{−1} ◦ βj1)(βj1^{−1}(x)) and det D(αi2^{−1} ◦ βj2)(βj2^{−1}(x)) have the same sign. Indeed,

det D(αi1^{−1} ◦ βj1)(βj1^{−1}(x)) = det D(αi1^{−1} ◦ αi2 ◦ αi2^{−1} ◦ βj2 ◦ βj2^{−1} ◦ βj1)(βj1^{−1}(x))
= det D(αi1^{−1} ◦ αi2)(αi2^{−1}(x)) · det D(αi2^{−1} ◦ βj2)(βj2^{−1}(x)) · det D(βj2^{−1} ◦ βj1)(βj1^{−1}(x)).

Since det D(αi1^{−1} ◦ αi2) > 0 and det D(βj2^{−1} ◦ βj1) > 0, we can conclude det D(αi1^{−1} ◦ βj1)(βj1^{−1}(x)) and det D(αi2^{−1} ◦ βj2)(βj2^{−1}(x)) have the same sign.
(b)
Proof. ∀x, y ∈ M, when x and y are sufficiently close, they can be covered by the same coordinate patches αi and βj. Since det D(αi^{−1} ◦ βj) does not change sign where αi and βj overlap (recall αi^{−1} ◦ βj is a diffeomorphism from an open subset of R^k to an open subset of R^k), λ is constant where αi and βj overlap. In particular, λ is continuous.
(c)
Proof. Since λ is continuous and λ is either 1 or -1, by the connectedness of M , λ must be a constant. More
precisely, as the proof of part (b) has shown, {x ∈ M : λ(x) = 1} and {x ∈ M : λ(x) = −1} are both open
sets. Since M is connected, exactly one of them is empty.
(d)
Proof. This is straightforward from parts (a)–(c).
7.
Proof. By Example 4, the unit normal vector corresponding to the induced orientation of ∂M points outwards
from M . This is a special case of Lemma 38.7.
8.
Proof. We consider a general problem similar to that of Example 4: let M be an n-manifold in R^n, oriented naturally; what is the induced orientation of ∂M?
Suppose h : U → V is a coordinate patch on M belonging to the natural orientation of M , about the
point p of ∂M . Then the map
h ◦ b(x) = h(x1 , · · · , xn−1 , 0)
gives the restricted coordinate patch on ∂M about p. The normal field N = (p ; T ) to ∂M corresponding to
the induced orientation satisfies the condition that the frame
[(−1)^n T (p), ∂h(h^{−1}(p))/∂x1, · · · , ∂h(h^{−1}(p))/∂xn−1]

is right-handed. Since Dh is right-handed, (−1)^n T and (−1)^{n−1} ∂h/∂xn lie on the same side of the tangent plane of M at p. Since ∂h/∂xn points into M, T points outwards from M. Thus, the induced orientation of ∂M is characterized by the normal vector field to ∂M pointing outwards from M. This is essentially Lemma 38.7.
To determine whether or not a coordinate patch on ∂M belongs to the induced orientation of ∂M, we suppose α is a coordinate patch on ∂M about p. Define A(p) = D(h^{−1} ◦ α)(α^{−1}(p)). Then α belongs to the induced orientation if and only if sgn(det A(p)) = (−1)^n. Since Dα(α^{−1}(p)) = Dh(h^{−1}(p)) · A(p), we have

[(−1)^n T (p), Dα(α^{−1}(p))] = [(−1)^n T (p), ∂h(h^{−1}(p))/∂x1, · · · , ∂h(h^{−1}(p))/∂xn−1] · [1, 0; 0, A(p)].

Therefore, α belongs to the induced orientation if and only if [T (p), Dα(α^{−1}(p))] is right-handed.
Back to our particular problem: the unit normal vector to S^{n−1} at p is p/||p||. So α belongs to the orientation of S^{n−1} if and only if [p, Dα(α^{−1}(p))] is right-handed. If α(u) = p, we have

[p, Dα(α^{−1}(p))] =
[u1, 1, 0, · · · , 0, 0;
u2, 0, 1, · · · , 0, 0;
· · · ;
un−1, 0, 0, · · · , 0, 1;
√(1 − ||u||²), −u1/√(1 − ||u||²), −u2/√(1 − ||u||²), · · · , −un−2/√(1 − ||u||²), −un−1/√(1 − ||u||²)].

Plain calculation yields det[p, Dα(α^{−1}(p))] = (−1)^{n+1}/√(1 − ||u||²). So α belongs to the orientation of S^{n−1} if and only if n is odd. Similarly, we can show β belongs to the orientation of S^{n−1} if and only if n is even.
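The determinant formula det[p, Dα(α^{−1}(p))] = (−1)^{n+1}/√(1 − ||u||²) can be spot-checked numerically for small n (random u in the patch domain; not part of the original solution):

```python
import numpy as np

rng = np.random.default_rng(2)

def det_p_Dalpha(n, u):
    # alpha(u) = (u, sqrt(1 - |u|^2)) parametrizes the upper hemisphere of S^{n-1}.
    w = np.sqrt(1 - u @ u)
    p = np.append(u, w)
    Dalpha = np.vstack([np.eye(n - 1), -u / w])  # Jacobian of alpha at u
    return np.linalg.det(np.column_stack([p, Dalpha])), w

for n in range(2, 6):
    u = rng.uniform(-0.3, 0.3, size=n - 1)
    d, w = det_p_Dalpha(n, u)
    assert abs(d - (-1) ** (n + 1) / w) < 1e-10
```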
35 Integrating Forms over Oriented Manifolds

3. (a)
Proof. By Exercise 8 of §34, α and β always belong to different orientations of S n−1 . By Exercise 6 of §34,
α and β belong to opposite orientations of S n−1 .
(b)
Proof. Assume β∗η = −α∗η; then by Theorem 35.2 and part (a),

∫_{S^{n−1}} η = ∫_{S^{n−1}∩{x∈R^n : xn>0}} η + ∫_{S^{n−1}∩{x∈R^n : xn<0}} η = ∫_A α∗η + (−1) ∫_A β∗η = 2 ∫_A α∗η.

Now we show β∗η = −α∗η. Indeed, using our calculation in Exercise 8 of §34, we have

Dα(u) =
[1, 0, · · · , 0, 0;
0, 1, · · · , 0, 0;
· · · ;
0, 0, · · · , 0, 1;
−u1/√(1 − ||u||²), −u2/√(1 − ||u||²), · · · , −un−2/√(1 − ||u||²), −un−1/√(1 − ||u||²)]

and

Dβ(u) =
[1, 0, · · · , 0, 0;
0, 1, · · · , 0, 0;
· · · ;
0, 0, · · · , 0, 1;
u1/√(1 − ||u||²), u2/√(1 − ||u||²), · · · , un−2/√(1 − ||u||²), un−1/√(1 − ||u||²)].
So for any x ∈ A,

α∗η(x) = ∑_{i=1}^n (−1)^{i−1} fi ◦ α(u) det Dα(u)_{(1,··· ,î,··· ,n)} du1 ∧ · · · ∧ dun−1
= { ∑_{i=1}^{n−1} (−1)^{i−1} ui (−1)^{n−1−i} (−ui/√(1 − ||u||²)) + (−1)^{n−1} √(1 − ||u||²) } du1 ∧ · · · ∧ dun−1
= −{ ∑_{i=1}^{n−1} (−1)^{i−1} ui (−1)^{n−1−i} (ui/√(1 − ||u||²)) + (−1)^{n−1} (−1)√(1 − ||u||²) } du1 ∧ · · · ∧ dun−1
= −∑_{i=1}^n (−1)^{i−1} fi ◦ β(u) det Dβ(u)_{(1,··· ,î,··· ,n)} du1 ∧ · · · ∧ dun−1
= −β∗η(x).
(c)

Proof. By our calculation in part (b), we have

∫_A α∗η = ∫_A [ ∑_{i=1}^{n−1} (−1)^{i−1} ui (−1)^{n−i} ui/√(1 − ||u||²) + (−1)^{n−1} √(1 − ||u||²) ]
= (−1)^{n−1} ∫_A [ (∑_{i=1}^{n−1} ui²)/√(1 − ||u||²) + √(1 − ||u||²) ]
= ± ∫_A 1/√(1 − ||u||²) ≠ 0.
36 A Geometric Interpretation of Forms and Integrals
1.
Proof.

β∗(y; bi) = (p; Dβ(y) · bi)
= (p; Dβ(y) · [D(α^{−1} ◦ β)(y)]^{−1} · ai)
= (p; Dβ(y) · D(β^{−1} ◦ α)(x) · ai)
= (p; Dα(x) · ai)
= α∗(x; ai).
Moreover, [b1 , · · · , bk ] = D(β −1 ◦ α)(x)[a1 , · · · , ak ]. Since detD(β −1 ◦ α)(x) > 0, [b1 , · · · , bk ] is right-handed
if and only if [a1 , · · · , ak ] is right-handed.
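For a concrete illustration of this computation, take linear patches α(x) = Ax and β(y) = By (a hypothetical special case, chosen only so that the derivatives are the constant matrices A and B); then b = D(β^{−1} ◦ α)(x) a = B^{−1}Aa, and the two pushforwards agree:

```python
import numpy as np

rng = np.random.default_rng(3)
k = 3
# Hypothetical linear patches alpha(x) = A x and beta(y) = B y; the shift by
# 3*I just keeps the random matrices comfortably invertible.
A = rng.standard_normal((k, k)) + 3*np.eye(k)
B = rng.standard_normal((k, k)) + 3*np.eye(k)

a = rng.standard_normal(k)        # a tangent vector a_i at x
b = np.linalg.solve(B, A @ a)     # b_i = D(beta^{-1} ∘ alpha)(x) a_i = B^{-1} A a_i

# beta_*(y; b) = (p; Dβ·b) coincides with alpha_*(x; a) = (p; Dα·a)
assert np.allclose(B @ b, A @ a)
```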
37 The Generalized Stokes' Theorem

Plain calculation shows ||c|| = √((1 + 3u² + 3v²)/(1 − u² − v²)), so

n = ( −2u/√(1 + 3u² + 3v²), −√((1 − u² − v²)/(1 + 3u² + 3v²)), −2v/√(1 + 3u² + 3v²) )^{tr}.

In particular, at the point α(0, 0) = (0, 2, 0), n = (0, −1, 0)^{tr}, which points inwards into {(x1, x2, x3) : 4(x1)² + (x2)² + 4(x3)² ≤ 4, x2 ≥ 0}. By Example 5 of §34, the tangent vector corresponding to the induced orientation of ∂M is easy to determine.
(b)

Proof. According to the result of part (a), we can choose the following coordinate patch which belongs to the induced orientation of ∂M: β(θ) = (cos θ, 0, sin θ) (0 ≤ θ < 2π). By Theorem 35.2, we have

∫_{∂M} (x2 dx1 + 3x1 dx3) = ∫_{[0,2π)} 3 cos θ · cos θ dθ = 3π.
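The pullback of x2 dx1 vanishes since x2 ◦ β = 0, leaving the term 3 cos² θ dθ, whose integral over [0, 2π) is indeed 3π:

```python
import sympy as sp

theta = sp.symbols('theta')
# integral of 3*cos(theta)^2 over one full period
result = sp.integrate(3*sp.cos(theta)**2, (theta, 0, 2*sp.pi))
assert result == 3*sp.pi
```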
(c)

Proof. dω = −dx1 ∧ dx2 + 3 dx1 ∧ dx3. So

∫_M dω = ∫_M (−dx1 ∧ dx2 + 3 dx1 ∧ dx3),

and carrying out the computation with the coordinate patch of part (a) again gives 3π, in agreement with Stokes' theorem.
5. (a)
Proof. By Stokes' theorem, we have

∫_M dω = ∫_{∂M} ω = ∫_{S²(d)} ω + ∫_{−S²(c)} ω = ∫_{S²(d)} ω − ∫_{S²(c)} ω = b/d − b/c.
(b)
Proof. If dω = 0, we conclude from part (a) that b = 0. This implies ∫_{S²(r)} ω = a. To be continued ...

(c)

Proof. If ω = dη, by part (b) we conclude b = 0. Moreover, Stokes' theorem implies a = ∫_{S²(r)} ω = ∫_{S²(r)} dη = ∫_{∂S²(r)} η = 0, since ∂S²(r) = ∅.
6.
Proof. ∫_M d(ω ∧ η) = ∫_{∂M} ω ∧ η = 0. Since d(ω ∧ η) = dω ∧ η + (−1)^k ω ∧ dη, we conclude ∫_M ω ∧ dη = (−1)^{k+1} ∫_M dω ∧ η. So a = (−1)^{k+1}.
38 Applications to Vector Analysis
1.
Proof. Let M = {x ∈ R³ : c ≤ ||x|| ≤ d}, oriented with the natural orientation. By the divergence theorem,

∫_M (div G) dV = ∫_{∂M} ⟨G, N⟩ dV,

where N is the unit normal vector field to ∂M that points outwards from M. For the coordinate patch for M

x1 = r sin θ cos ϕ, x2 = r sin θ sin ϕ, x3 = r cos θ (c ≤ r ≤ d, 0 ≤ θ < π, 0 ≤ ϕ < 2π),

we have

det ∂(x1, x2, x3)/∂(r, θ, ϕ) = det[sin θ cos ϕ, r cos θ cos ϕ, −r sin θ sin ϕ; sin θ sin ϕ, r cos θ sin ϕ, r sin θ cos ϕ; cos θ, −r sin θ, 0] = r² sin θ.

So ∫_M (div G) dV = ∫ (div G) det ∂(x1, x2, x3)/∂(r, θ, ϕ) = 0. Meanwhile, ∫_{∂M} ⟨G, N⟩ dV = ∫_{S²(d)} ⟨G, Nr⟩ dV − ∫_{S²(c)} ⟨G, Nr⟩ dV, where Nr denotes the outward radial unit normal. So we conclude ∫_{S²(d)} ⟨G, Nr⟩ dV = ∫_{S²(c)} ⟨G, Nr⟩ dV.
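The Jacobian determinant r² sin θ of spherical coordinates, used above, can be verified symbolically:

```python
import sympy as sp

r, th, ph = sp.symbols('r theta phi', positive=True)
# spherical coordinates as in the patch above
x1 = r*sp.sin(th)*sp.cos(ph)
x2 = r*sp.sin(th)*sp.sin(ph)
x3 = r*sp.cos(th)

# Jacobian matrix d(x1, x2, x3)/d(r, theta, phi) and its determinant
J = sp.Matrix([x1, x2, x3]).jacobian([r, th, ph])
assert sp.simplify(J.det()) == r**2*sp.sin(th)
```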
2. (a)
Proof. We let M3 = B^n(ε). Then for ε small enough, M3 is contained in both M1 − ∂M1 and M2 − ∂M2. Applying the divergence theorem, we have, for i = 1, 2,

0 = ∫_{Mi − Int M3} (div G) dV = ∫_{∂Mi} ⟨G, Ni⟩ dV − ∫_{∂M3} ⟨G, N3⟩ dV,

where N3 is the unit outward normal vector field to ∂M3. This shows that, regardless of whether i = 1 or i = 2, ∫_{∂Mi} ⟨G, Ni⟩ dV equals the constant ∫_{∂M3} ⟨G, N3⟩ dV.
(b)
Proof. We have shown that if the origin is contained in M − ∂M, the integral ∫_{∂M} ⟨G, N⟩ dV is a constant. If the origin is not contained in M − ∂M, by the compactness of M we conclude the origin is in the exterior of M. Applying the divergence theorem then implies ∫_{∂M} ⟨G, N⟩ dV = 0. So this integral has only two possible values.
3.
Proof. Four possible values. Apply the divergence theorem (as in Exercise 2) and carry out the computation in the following four cases: 1) both p and q are contained in M − ∂M; 2) p is contained in M − ∂M but q is not; 3) q is contained in M − ∂M but p is not; 4) neither p nor q is contained in M − ∂M.
4.
39 The Poincaré Lemma
2. (a)
Proof. Let ω ∈ Ω^k(B) with dω = 0. Then g∗ω ∈ Ω^k(A) and d(g∗ω) = g∗(dω) = 0. Since A is homologically trivial in dimension k, there exists ω1 ∈ Ω^{k−1}(A) such that dω1 = g∗ω. Then ω2 = (g^{−1})∗(ω1) ∈ Ω^{k−1}(B) and dω2 = d(g^{−1})∗(ω1) = (g^{−1})∗(dω1) = (g^{−1})∗g∗ω = (g ◦ g^{−1})∗ω = ω. Since ω is arbitrary, we conclude B is homologically trivial in dimension k.
(b)
Proof. Let A = [1/2, 1] × [0, π] and B = {(x, y) : 1/2 ≤ √(x² + y²) ≤ 1, y ≥ 0}. Define g : A → B by g(r, θ) = (r cos θ, r sin θ). By the Poincaré lemma, A is homologically trivial in every dimension. By part (a) of this exercise problem, B is homologically trivial in every dimension. But B is clearly not star-convex.
3.
Proof. Let p ∈ A and define X = {x ∈ A : x can be joined to p by a broken-line path in A}. Since R^n is locally convex, it is easy to see that both X and A − X are open subsets of A.
(Sufficiency) Assume A is connected. Then X = A, since X is nonempty. For any closed 0-form f and ∀x ∈ A, denote by γ a broken-line path that joins p and x. By the Newton-Leibniz formula, 0 = ∫_γ df = f (x) − f (p). So f is a constant, i.e. an exact 0-form, on A. Hence A is homologically trivial in dimension 0.
(Necessity) Assume A is not connected. Then A can be decomposed into the disjoint union of two nonempty open subsets, say A1 and A2. Define f = 1 on A1 and f = 0 on A2. Then f is a closed 0-form, but not exact. So A is not homologically trivial in dimension 0.
4.
Proof. Let η = ∑_{[I]} fI dxI + ∑_{[J]} gJ dxJ ∧ dt, where I denotes an ascending (k + 1)-tuple and J denotes an ascending k-tuple, both from the set {1, · · · , n}. Then P η = (−1)^k ∑_{[J]} (LgJ) dxJ and

(P η)(x)((x; v1), · · · , (x; vk)) = (−1)^k ∑_{[J]} (LgJ) det[v1 · · · vk]_J.

Therefore

(−1)^k ∫_{t=0}^{t=1} η(y)((y; w1), · · · , (y; wk), (y; en+1)) = (−1)^k ∑_{[J]} ∫_{t=0}^{t=1} gJ det[v1 · · · vk]_J
= (−1)^k ∑_{[J]} (LgJ) det[v1 · · · vk]_J
= (P η)(x)((x; v1), · · · , (x; vk)).
5. (a)

Proof. This is already proved on page 334 of the book, esp. in the last paragraph.
(b)
Proof. Step 1. We prove the theorem for n = 1. Without loss of generality, we assume p < q. Let A = R¹ − p − q; write A = A0 ∪ A1 ∪ A2, where A0 = (−∞, p), A1 = (p, q), and A2 = (q, ∞). If ω is a closed k-form in A, with k > 0, then ω|A0, ω|A1 and ω|A2 are closed. Since A0, A1, A2 are all star-convex, there are (k − 1)-forms η0, η1 and η2 on A0, A1 and A2, respectively, such that dηi = ω|Ai for i = 0, 1, 2. Define η = ηi on Ai, i = 0, 1, 2. Then η is well-defined and of class C∞, and dη = ω.
Now let f0 be the 0-form in A defined by setting f0(x) = 0 for x ∈ A1 ∪ A2 and f0(x) = 1 for x ∈ A0; let f1 be the 0-form in A defined by setting f1(x) = 0 for x ∈ A0 ∪ A2 and f1(x) = 1 for x ∈ A1. Then f0 and f1 are closed forms, and they are not exact. We show the cosets {f0} and {f1} form a basis for H⁰(A). Given a closed 0-form f in A, the forms f |A0, f |A1, and f |A2 are closed and hence exact. Then there are constants c0, c1, and c2 such that f |Ai = ci, i = 0, 1, 2. It follows that f − (c0 − c2)f0 − (c1 − c2)f1 is the constant c2, hence exact; so {f} = (c0 − c2){f0} + (c1 − c2){f1}.
Now Step 2 tells us that H^k(A) has the same dimension as the de Rham group of R^n with two points deleted, and the induction hypothesis implies that the latter has dimension 0 if k ≠ n − 1, and dimension 2 if k = n − 1. The theorem follows.
6.
Proof. The theorem of Exercise 5 can be restated in terms of forms as follows: Let A = R^n − p − q with n ≥ 1.
(a) If k ≠ n − 1, then every closed k-form on A is exact on A.
(b) There are two closed (n − 1)-forms η1 and η2 such that η1, η2, and η1 − η2 are not exact; and if η is any closed (n − 1)-form on A, then there exist unique scalars c1 and c2 such that η − c1η1 − c2η2 is exact.
References
[1] J. Munkres. Analysis on Manifolds, Westview Press, 1997.
[2] P. Lax. Linear algebra and its applications, 2nd Edition, Wiley-Interscience, 2007.