
Analysis on Manifolds

Solution of Exercise Problems


Yan Zeng
Version 0.1.1, last revised on 2014-03-25.

Abstract
This is a solution manual of selected exercise problems from Analysis on Manifolds, by James R.
Munkres [1]. If you find any typos/errors, please email me at [email protected].

Contents
1 Review of Linear Algebra
2 Matrix Inversion and Determinants
3 Review of Topology in Rn
4 Compact Subspaces and Connected Subspaces of Rn
5 The Derivative
6 Continuously Differentiable Functions
7 The Chain Rule
8 The Inverse Function Theorem
9 The Implicit Function Theorem
10 The Integral over a Rectangle
11 Existence of the Integral
12 Evaluation of the Integral
13 The Integral over a Bounded Set
14 Rectifiable Sets
15 Improper Integrals
16 Partition of Unity
17 The Change of Variables Theorem
18 Diffeomorphisms in Rn
19 Proof of the Change of Variables Theorem
20 Applications of Change of Variables
21 The Volume of a Parallelepiped
22 The Volume of a Parametrized-Manifold
23 Manifolds in Rn
24 The Boundary of a Manifold
25 Integrating a Scalar Function over a Manifold
26 Multilinear Algebra
27 Alternating Tensors
28 The Wedge Product
29 Tangent Vectors and Differential Forms
30 The Differential Operator
31 Application to Vector and Scalar Fields
32 The Action of a Differentiable Map
33 Integrating Forms over Parametrized-Manifolds
34 Orientable Manifolds
35 Integrating Forms over Oriented Manifolds
36 A Geometric Interpretation of Forms and Integrals
37 The Generalized Stokes’ Theorem
38 Applications to Vector Analysis
39 The Poincaré Lemma
40 The deRham Groups of Punctured Euclidean Space
41 Differentiable Manifolds and Riemannian Manifolds
1 Review of Linear Algebra
A good textbook on linear algebra from the viewpoint of finite-dimensional spaces is Lax [2]. Below,
we make connections between the results presented in this section and that reference.
Theorem 1.1 (page 2) corresponds to Lax [2, page 5], Chapter 1, Lemma 1.
Theorem 1.2 (page 3) corresponds to Lax [2, page 6], Chapter 1, Theorem 4.
Theorem 1.5 (page 7) corresponds to Lax [2, page 37], Chapter 4, Theorem 2 and the paragraph below
Theorem 2.

2. (Theorem 1.3, page 5) If A is an n by m matrix and B is an m by p matrix, show that

|A · B| ≤ m|A||B|.

Proof. For any i = 1, · · · , n, j = 1, · · · , p, we have

|∑_{k=1}^m a_{ik} b_{kj}| ≤ ∑_{k=1}^m |a_{ik} b_{kj}| ≤ |A| ∑_{k=1}^m |b_{kj}| ≤ m|A||B|.

Therefore,

|A · B| = max{ |∑_{k=1}^m a_{ik} b_{kj}| : i = 1, · · · , n, j = 1, · · · , p } ≤ m|A||B|.
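As a quick numeric sanity check (outside the written proof), the bound can be tested on random matrices with a short stdlib-Python script; `max_norm` and `matmul` are ad hoc helpers for the sup norm |·| and the matrix product:

```python
import random

def max_norm(M):
    # sup norm of the text: largest absolute value of any entry
    return max(abs(x) for row in M for x in row)

def matmul(A, B):
    p = len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(p)] for i in range(len(A))]

random.seed(0)
n, m, p = 3, 4, 5
for _ in range(100):
    A = [[random.uniform(-1, 1) for _ in range(m)] for _ in range(n)]
    B = [[random.uniform(-1, 1) for _ in range(p)] for _ in range(m)]
    # |A·B| <= m |A| |B|  (small slack for float rounding)
    assert max_norm(matmul(A, B)) <= m * max_norm(A) * max_norm(B) + 1e-12
```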

3. Show that the sup norm on R^2 is not derived from an inner product on R^2. [Hint: Suppose ⟨x, y⟩ is an
inner product on R^2 (not the dot product) having the property that |x| = ⟨x, x⟩^{1/2}. Compute ⟨x ± y, x ± y⟩
and apply to the case x = e_1 and y = e_2.]

Proof. Suppose ⟨·, ·⟩ is an inner product on R^2 having the property that |x| = ⟨x, x⟩^{1/2}, where |x| is the sup
norm. By the equality ⟨x, y⟩ = (1/4)(|x + y|^2 − |x − y|^2), we have

⟨e_1, e_1 + e_2⟩ = (1/4)(|2e_1 + e_2|^2 − |e_2|^2) = (1/4)(4 − 1) = 3/4,
⟨e_1, e_2⟩ = (1/4)(|e_1 + e_2|^2 − |e_1 − e_2|^2) = (1/4)(1 − 1) = 0,
⟨e_1, e_1⟩ = |e_1|^2 = 1.

So ⟨e_1, e_1 + e_2⟩ ≠ ⟨e_1, e_2⟩ + ⟨e_1, e_1⟩, contradicting bilinearity, so ⟨·, ·⟩ cannot be an inner product.
Therefore our assumption is false and the sup norm on R^2 is not derived from an inner product on R^2.
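The failure can also be checked numerically via the parallelogram law |x + y|^2 + |x − y|^2 = 2|x|^2 + 2|y|^2, which every norm derived from an inner product must satisfy; a minimal Python sketch (helper names ad hoc):

```python
def sup_norm(x):
    return max(abs(c) for c in x)

e1, e2 = (1.0, 0.0), (0.0, 1.0)
add = tuple(a + b for a, b in zip(e1, e2))
sub = tuple(a - b for a, b in zip(e1, e2))
# parallelogram law: |x+y|^2 + |x-y|^2 should equal 2|x|^2 + 2|y|^2
lhs = sup_norm(add) ** 2 + sup_norm(sub) ** 2          # = 2 for the sup norm
rhs = 2 * sup_norm(e1) ** 2 + 2 * sup_norm(e2) ** 2    # = 4
assert lhs != rhs   # so the sup norm comes from no inner product
```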

2 Matrix Inversion and Determinants


1. Consider the matrix

A = [ 1  2 ; 1  −1 ; 0  1 ].

(a) Find two different left inverses for A.
(b) Show that A has no right inverse.

(a)

Proof. Let B = [ b_{11} b_{12} b_{13} ; b_{21} b_{22} b_{23} ]. Then

BA = [ b_{11} + b_{12}   2b_{11} − b_{12} + b_{13} ; b_{21} + b_{22}   2b_{21} − b_{22} + b_{23} ].

So BA = I_2 if and only if

b_{11} + b_{12} = 1
b_{21} + b_{22} = 0
2b_{11} − b_{12} + b_{13} = 0
2b_{21} − b_{22} + b_{23} = 1.

Plugging b_{12} = 1 − b_{11} and b_{22} = −b_{21} into the last two equations, we get

3b_{11} + b_{13} = 1
3b_{21} + b_{23} = 1.

So we can take the following two different left inverses for A: B_1 = [ 0 1 1 ; 0 0 1 ] and B_2 = [ 1 0 −2 ; 1 −1 −2 ].
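The two candidate left inverses are easy to verify mechanically; a minimal Python check (`matmul` is an ad hoc helper):

```python
def matmul(A, B):
    p = len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(p)] for i in range(len(A))]

A = [[1, 2], [1, -1], [0, 1]]
B1 = [[0, 1, 1], [0, 0, 1]]
B2 = [[1, 0, -2], [1, -1, -2]]
I2 = [[1, 0], [0, 1]]
# both products should give the 2x2 identity
assert matmul(B1, A) == I2
assert matmul(B2, A) == I2
```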

(b)
Proof. By Theorem 2.2, A has no right inverse.

2.
Proof. (a) By Theorem 1.5, n ≥ m and exactly m of the n row vectors of A are linearly independent.
By applying elementary row operations to A, we can reduce A to the echelon form [ I_m ; 0 ]. So we can
find a matrix D that is a product of elementary matrices such that DA = [ I_m ; 0 ].
(b) If rank A = m, by part (a) there exists a matrix D that is a product of elementary matrices such that
DA = [ I_m ; 0 ]. Let B = [I_m, 0] D; then BA = I_m, i.e. B is a left inverse of A. Conversely, if B is a left
inverse of A, it is easy to see that A, as a linear mapping from R^m to R^n, is injective. This implies the
column vectors of A are linearly independent, i.e. rank A = m.
(c) A has a right inverse if and only if A^{tr} has a left inverse. By part (b), this holds if and only if
rank A = rank A^{tr} = n.
4.

Proof. Suppose (D_k)_{k=1}^K is a sequence of elementary matrices such that D_K · · · D_2 D_1 A = I_n. Since
D_K · · · D_2 D_1 A = (D_K · · · D_2 D_1 I_n) A, we conclude A^{−1} = D_K · · · D_2 D_1 I_n.

5.
Proof. By Theorem 2.14,

A^{−1} = (1/(ad − bc)) [ d  −b ; −c  a ].
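The adjugate formula is easy to confirm on a sample matrix; a short Python check using exact rational arithmetic (`inv2` is an ad hoc helper, not from the text):

```python
from fractions import Fraction

def inv2(a, b, c, d):
    # inverse of [[a, b], [c, d]] via the adjugate formula; det = ad - bc
    det = Fraction(a * d - b * c)
    return [[Fraction(d) / det, Fraction(-b) / det],
            [Fraction(-c) / det, Fraction(a) / det]]

M = [[2, 7], [1, 4]]              # det = 1
Minv = inv2(2, 7, 1, 4)
prod = [[sum(M[i][k] * Minv[k][j] for k in range(2)) for j in range(2)]
        for i in range(2)]
assert prod == [[1, 0], [0, 1]]   # M * Minv is the identity
```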

3 Review of Topology in Rn
2.

Proof. X = R, Y = (0, 1], and A = Y .


3.

Proof. For any closed subset C of Y, f^{−1}(C) = [f^{−1}(C) ∩ A] ∪ [f^{−1}(C) ∩ B]. Since f^{−1}(C) ∩ A is a closed
subset of A, there must be a closed subset D_1 of X such that f^{−1}(C) ∩ A = D_1 ∩ A. Similarly, there is a
closed subset D_2 of X such that f^{−1}(C) ∩ B = D_2 ∩ B. So f^{−1}(C) = [D_1 ∩ A] ∪ [D_2 ∩ B]. A and B are
closed in X, so D_1 ∩ A, D_2 ∩ B and [D_1 ∩ A] ∪ [D_2 ∩ B] are all closed in X. This shows f is continuous.
7.
Proof. (a) Take f (x) ≡ y0 and let g be such that g(y0 ) ̸= z0 but g(y) → z0 as y → y0 .

4 Compact Subspaces and Connected Subspaces of Rn


1.
Proof. (a) Let x_n = (2nπ + π/2)^{−1} and y_n = (2nπ − π/2)^{−1}. Then as n → ∞, |x_n − y_n| → 0 but
|sin(1/x_n) − sin(1/y_n)| = 2.
3.
Proof. The boundedness of X is clear. Since ||e_i − e_j|| = 1 for any i ≠ j, the sequence (e_i)_{i=1}^∞ has no
accumulation point, so X cannot be compact. Also, the fact that ||e_i − e_j|| = 1 for i ≠ j shows each e_i is
an isolated point of X; therefore X is closed. Combined, we conclude X is closed, bounded, and non-compact.

5 The Derivative
1.
Proof. By definition, lim_{t→0} (f(a + tu) − f(a))/t exists and equals f′(a; u). Consequently,

f′(a; cu) = lim_{t→0} (f(a + t(cu)) − f(a))/t = c · lim_{t→0} (f(a + (ct)u) − f(a))/(ct) = c f′(a; u).
2.
Proof. (a) f(u) = f(u_1, u_2) = u_1 u_2 / (u_1^2 + u_2^2) for u ≠ 0. So

(f(tu) − f(0))/t = (1/t) · t^2 u_1 u_2 / (t^2 (u_1^2 + u_2^2)) = (1/t) · u_1 u_2 / (u_1^2 + u_2^2).

In order for lim_{t→0} (f(tu) − f(0))/t to exist, it is necessary and sufficient that u_1 u_2 = 0 and u_1^2 + u_2^2 ≠ 0. So for
the vectors (1, 0) and (0, 1), f′(0; u) exists, and we have f′(0; (1, 0)) = f′(0; (0, 1)) = 0.

(b) Yes, D_1 f(0) = D_2 f(0) = 0.

(c) No, because f is not continuous at 0: lim_{(x,y)→0, y=kx} f(x, y) = lim_{x→0} kx^2/(x^2 + k^2 x^2) = k/(1 + k^2). For k ≠ 0, the limit
is not equal to f(0) = 0.
(d) See (c).
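The two phenomena — vanishing partial derivatives at 0 but no continuity — can be observed numerically; a small Python sketch:

```python
def f(x, y):
    # the function of Problem 2, extended by f(0) = 0
    return 0.0 if (x, y) == (0.0, 0.0) else x * y / (x**2 + y**2)

# the partial-derivative difference quotients at 0 vanish identically
for t in (1e-2, 1e-4, 1e-6):
    assert abs(f(t, 0.0) / t) < 1e-9
    assert abs(f(0.0, t) / t) < 1e-9

# but along the line y = x the function is constantly 1/2, so f is
# discontinuous at the origin
for t in (1e-2, 1e-4, 1e-6):
    assert abs(f(t, t) - 0.5) < 1e-12
```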

6 Continuously Differentiable Functions


1.
Proof. We note

|xy| / √(x^2 + y^2) ≤ (1/2)(x^2 + y^2) / √(x^2 + y^2) = (1/2) √(x^2 + y^2).

So lim_{(x,y)→0} |xy| / √(x^2 + y^2) = 0. This shows f(x, y) = |xy| is differentiable at 0 and the derivative is 0. However,
for any fixed y ≠ 0, f(x, y) = |x||y| is not a differentiable function of x at x = 0. So the partial derivative of f w.r.t. x does not
exist everywhere in a neighborhood of 0, which implies f is not of class C^1 in any neighborhood of 0.
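That the error quotient |f(h)|/|h| tends to 0 — which is what differentiability at 0 with derivative 0 means — can be observed numerically; a small Python sketch:

```python
import math

def f(x, y):
    return abs(x * y)

# along the worst direction x = y = r/sqrt(2), the quotient |f(h)|/|h|
# equals r/2, which tends to 0 with r
for r in (1e-1, 1e-3, 1e-5):
    x = y = r / math.sqrt(2)
    quotient = f(x, y) / math.sqrt(x**2 + y**2)
    assert quotient <= 0.5 * r + 1e-15
```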

7 The Chain Rule
8 The Inverse Function Theorem
9 The Implicit Function Theorem
10 The Integral over a Rectangle
6.
Proof. (a) Straightforward from the Riemann condition (Theorem 10.3).
(b) Among all the sub-rectangles determined by P, those whose sides contain the newly added point have
a combined volume no greater than (mesh P)(width Q)^{n−1}. So

0 ≤ L(f, P′′) − L(f, P) ≤ 2M (mesh P)(width Q)^{n−1}.

The result for upper sums can be derived similarly.
(c) Given ε > 0, choose a partition P′ such that U(f, P′) − L(f, P′) < ε/2. Let N be the number of
partition points in P′ and let

δ = ε / (8MN(width Q)^{n−1}).

Suppose P has mesh less than δ. The common refinement P′′ of P and P′ is obtained by adjoining at most
N points to P. So by part (b),

0 ≤ L(f, P′′) − L(f, P) ≤ N · 2M (mesh P)(width Q)^{n−1} ≤ 2MN δ (width Q)^{n−1} = ε/4.

Similarly, we can show 0 ≤ U(f, P) − U(f, P′′) ≤ ε/4. So

U(f, P) − L(f, P) = [U(f, P) − U(f, P′′)] + [L(f, P′′) − L(f, P)] + [U(f, P′′) − L(f, P′′)]
≤ ε/4 + ε/4 + [U(f, P′) − L(f, P′)]
< ε/2 + ε/2
= ε.

This shows for any given ε > 0, there is a δ > 0 such that U(f, P) − L(f, P) < ε for every partition P of
mesh less than δ.
7.

Proof. (Sufficiency) Note that |∑_R f(x_R) v(R) − A| < ε can be written as

A − ε < ∑_R f(x_R) v(R) < A + ε.

This shows U(f, P) ≤ A + ε and L(f, P) ≥ A − ε. So U(f, P) − L(f, P) ≤ 2ε. By Problem 6, we conclude f
is integrable over Q, with ∫_Q f ∈ [A − ε, A + ε]. Since ε is arbitrary, we conclude ∫_Q f = A.
(Necessity) By Problem 6, for any given ε > 0, there is a δ > 0 such that U(f, P) − L(f, P) < ε for every
partition P of mesh less than δ. For any such partition P, if for each sub-rectangle R determined by P, x_R
is a point of R, we must have

L(f, P) − A ≤ ∑_R f(x_R) v(R) − A ≤ U(f, P) − A.

Since L(f, P) ≤ A ≤ U(f, P), we conclude

|∑_R f(x_R) v(R) − A| ≤ U(f, P) − L(f, P) < ε.

11 Existence of the Integral
12 Evaluation of the Integral
13 The Integral over a Bounded Set
14 Rectifiable Sets
15 Improper Integrals
16 Partition of Unity
17 The Change of Variables Theorem
18 Diffeomorphisms in Rn
19 Proof of the Change of Variables Theorem
20 Applications of Change of Variables
21 The Volume of a Parallelepiped
1. (a)

Proof. Let v = (a, b, c). Then

X^{tr} X = (I_3, v^{tr}) [ I_3 ; v ] = I_3 + v^{tr} v = [ 1+a^2  ab  ac ; ab  1+b^2  bc ; ca  cb  1+c^2 ].

(b)

Proof. We use both methods:

V(X) = [det(X^{tr} · X)]^{1/2} = [(1 + a^2)(1 + b^2 + c^2) − ab · ab + ca · (−ac)]^{1/2} = (1 + a^2 + b^2 + c^2)^{1/2}

and

V(X) = [ det^2 I_3 + det^2[ 1 0 0 ; 0 1 0 ; a b c ] + det^2[ 0 1 0 ; 0 0 1 ; a b c ] + det^2[ 1 0 0 ; 0 0 1 ; a b c ] ]^{1/2} = (1 + c^2 + a^2 + b^2)^{1/2}.
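Both computations of V(X) can be cross-checked numerically for sample values of a, b, c; a Python sketch with an ad hoc `det` helper (Leibniz formula, fine for tiny matrices):

```python
import itertools, math

def det(M):
    # Leibniz formula for a small square matrix
    n = len(M)
    total = 0
    for perm in itertools.permutations(range(n)):
        sign = 1
        for i in range(n):
            for j in range(i + 1, n):
                if perm[i] > perm[j]:
                    sign = -sign
        prod = 1
        for i, p in enumerate(perm):
            prod *= M[i][p]
        total += sign * prod
    return total

a, b, c = 2.0, -1.0, 3.0
X = [[1, 0, 0], [0, 1, 0], [0, 0, 1], [a, b, c]]
# method 1: sqrt(det(X^tr X))
XtX = [[sum(X[k][i] * X[k][j] for k in range(4)) for j in range(3)]
       for i in range(3)]
v1 = math.sqrt(det(XtX))
# method 2: sqrt of the sum of squared 3x3 minors
v2 = math.sqrt(sum(det([X[i] for i in rows]) ** 2
                   for rows in itertools.combinations(range(4), 3)))
expected = math.sqrt(1 + a**2 + b**2 + c**2)
assert abs(v1 - expected) < 1e-9 and abs(v2 - expected) < 1e-9
```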

2.

Proof. Let X = (x_1, · · · , x_i, · · · , x_k) and Y = (x_1, · · · , λx_i, · · · , x_k). Then V(Y) = [∑_{[I]} det^2 Y_I]^{1/2} =
[∑_{[I]} λ^2 det^2 X_I]^{1/2} = |λ| [∑_{[I]} det^2 X_I]^{1/2} = |λ| V(X).

3.
Proof. Suppose P is determined by x1 , · · · , xk . Then V (h(P)) = V (λx1 , · · · , λxk ) = |λ|V (x1 , λx2 , · · · , λxk ) =
· · · = |λ|k V (x1 , x2 , · · · , xk ) = |λ|k V (P).
4. (a)

Proof. Straightforward.

(b)
Proof.

||a||^2 ||b||^2 − ⟨a, b⟩^2 = (∑_{i=1}^3 a_i^2)(∑_{j=1}^3 b_j^2) − (∑_{k=1}^3 a_k b_k)^2
= ∑_{i,j=1}^3 a_i^2 b_j^2 − ∑_{k=1}^3 a_k^2 b_k^2 − 2(a_1 b_1 a_2 b_2 + a_1 b_1 a_3 b_3 + a_2 b_2 a_3 b_3)
= ∑_{i,j=1, i≠j}^3 a_i^2 b_j^2 − 2(a_1 b_1 a_2 b_2 + a_1 b_1 a_3 b_3 + a_2 b_2 a_3 b_3)
= (a_2 b_3 − a_3 b_2)^2 + (a_1 b_3 − a_3 b_1)^2 + (a_1 b_2 − a_2 b_1)^2
= det^2[ a_2 b_2 ; a_3 b_3 ] + det^2[ a_1 b_1 ; a_3 b_3 ] + det^2[ a_1 b_1 ; a_2 b_2 ].
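The identity can be spot-checked on random vectors; a short Python sketch (`det2` is an ad hoc helper):

```python
import random

def det2(p, q, r, s):
    # determinant of [[p, q], [r, s]]
    return p * s - q * r

random.seed(1)
for _ in range(200):
    a = [random.uniform(-2, 2) for _ in range(3)]
    b = [random.uniform(-2, 2) for _ in range(3)]
    lhs = (sum(x * x for x in a) * sum(y * y for y in b)
           - sum(x * y for x, y in zip(a, b)) ** 2)
    rhs = (det2(a[1], b[1], a[2], b[2]) ** 2
           + det2(a[0], b[0], a[2], b[2]) ** 2
           + det2(a[0], b[0], a[1], b[1]) ** 2)
    assert abs(lhs - rhs) < 1e-9
```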

5. (a)

Proof. Suppose V1 and V2 both satisfy conditions (i)-(iv). Then by the Gram-Schmidt process, the uniqueness
is reduced to V1 (x1 , · · · , xk ) = V2 (x1 , · · · , xk ), where x1 , · · · , xk are orthonormal.

(b)
Proof. Following the hint, we can assume without loss of generality that W = Rn and the inner product is
the dot product on Rn . Let V (x1 , · · · , xk ) be the volume function, then (i) and (ii) are implied by Theorem
21.4, (iii) is Problem 2, and (iv) is implied by Theorem 21.3: V (x1 , · · · , xk ) = [det(X tr X)]1/2 .

22 The Volume of a Parametrized-Manifold


1.

Proof. By definition, v(Z_β) = ∫_A V(Dβ). Let x denote the general point of A; let y = α(x), so that
β(x) = h ∘ α(x) = h(y). By the chain rule, Dβ(x) = Dh(y) · Dα(x). Since h is an isometry, Dh(y) is an
orthogonal matrix (Theorem 20.6), so

[V(Dβ(x))]^2 = det(Dα(x)^{tr} Dh(y)^{tr} Dh(y) Dα(x)) = det(Dα(x)^{tr} Dα(x)) = [V(Dα(x))]^2.

So v(Z_β) = ∫_A V(Dβ) = ∫_A V(Dα) = v(Y_α).

2.
Proof. Let x denote the general point of A. Then Dα(x) is the (k + 1) × k matrix

Dα(x) = [ I_k ; D_1 f(x)  D_2 f(x)  · · ·  D_k f(x) ],

whose first k rows form the identity matrix and whose last row is Df(x). By Theorem 21.4,

V(Dα(x)) = [1 + ∑_{i=1}^k (D_i f(x))^2]^{1/2}.

So v(Y_α) = ∫_A [1 + ∑_{i=1}^k (D_i f(x))^2]^{1/2}.

3. (a)

Proof. v(Y_α) = ∫_A V(Dα) and ∫_{Y_α} π_i dV = ∫_A (π_i ∘ α) V(Dα). Since Dα(t) = (−a sin t, a cos t)^{tr},
V(Dα) = |a|. So

v(Y_α) = |a|π,  ∫_{Y_α} π_1 dV = ∫_0^π a cos t · |a| dt = 0,  ∫_{Y_α} π_2 dV = ∫_0^π a sin t · |a| dt = 2a|a|.

Hence the centroid is (0, 2a/π).

(b)

Proof. By Example 4, v(Y_α) = 2πa^2 and

∫_{Y_α} π_1 dV = ∫_A x · a/√(a^2 − x^2 − y^2) = ∫_0^{2π} ∫_0^a (r cos θ · ar)/√(a^2 − r^2) dr dθ = 0,
∫_{Y_α} π_2 dV = ∫_A y · a/√(a^2 − x^2 − y^2) = ∫_0^{2π} ∫_0^a (r sin θ · ar)/√(a^2 − r^2) dr dθ = 0,
∫_{Y_α} π_3 dV = ∫_A √(a^2 − x^2 − y^2) · a/√(a^2 − x^2 − y^2) = a^3 π.

So the centroid is (0, 0, a/2).
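The z-coordinate of the centroid can be confirmed by numerically integrating (π_3 ∘ α) V(Dα) = a·r in polar coordinates; a Python sketch with a = 1 (midpoint rule):

```python
import math

a = 1.0
n = 4000
# midpoint rule for  ∫_0^{2π} ∫_0^a a·r dr dθ = π a³  (the π₃ integral)
integral = 0.0
for i in range(n):
    r = (i + 0.5) * a / n
    integral += a * r * (a / n)     # (π₃∘α)·V(Dα) in polar coordinates
integral *= 2 * math.pi
area = 2 * math.pi * a**2           # v(Y_α) for the upper hemisphere
centroid_z = integral / area
assert abs(centroid_z - a / 2) < 1e-6
```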


4. (a)

Proof. v(∆_1(R)) = ∫_A V(Dα), where A is the (open) triangle in R^2 with vertices (a, b), (a + h, b) and
(a + h, b + h). V(Dα) is a continuous function on the compact set Ā, so it achieves its maximum M and
minimum m on Ā. Let x_1, x_2 ∈ Ā be such that V(Dα(x_1)) = M and V(Dα(x_2)) = m, respectively. Then

v(A) · m ≤ v(∆_1(R)) ≤ v(A) · M.

By considering the segment connecting x_1 and x_2 and applying the intermediate value theorem, we can
find a point ξ ∈ Ā such that V(Dα(ξ)) v(A) = ∫_A V(Dα). This shows there is a point ξ of R such that

v(∆_1(R)) = ∫_A V(Dα) = V(Dα(ξ)) v(A) = V(Dα(ξ)) · (1/2) v(R).

A similar result for v(∆_2(R)) can be proved similarly.
(b)
Proof. V (Dα) as a continuous function is uniformly continuous on the compact set Q.
(c)
Proof.

|A(P) − ∫_Q V(Dα)| ≤ ∑_R |v(∆_1(R)) + v(∆_2(R)) − ∫_R V(Dα)|
= ∑_R |(1/2)[V(Dα(ξ_1(R))) + V(Dα(ξ_2(R)))] v(R) − ∫_R V(Dα)|
≤ ∑_R ∫_R |(V(Dα(ξ_1(R))) + V(Dα(ξ_2(R))))/2 − V(Dα)|.

Given ε > 0, there exists a δ > 0 such that if x_1, x_2 ∈ Q with |x_1 − x_2| < δ, we must have |V(Dα(x_1)) −
V(Dα(x_2))| < ε/v(Q). So for every partition P of Q of mesh less than δ,

|A(P) − ∫_Q V(Dα)| < ∑_R ∫_R ε/v(Q) = ε.

23 Manifolds in Rn
1.

Proof. In this case, we set U = R and V = M = {(x, x^2) : x ∈ R}. Then α maps U onto V in a one-to-one
fashion. Moreover, we have
(1) α is of class C^∞.
(2) α^{−1}((x, x^2)) = x is continuous, for (x_n, x_n^2) → (x, x^2) as n → ∞ implies x_n → x as n → ∞.
(3) Dα(x) = [ 1 ; 2x ] has rank 1 for each x ∈ U.
So M is a 1-manifold in R^2 covered by the single coordinate patch α.
2.

Proof. We let U = H^1 and V = N = {(x, x^2) : x ∈ H^1}. Then β maps U onto V in a one-to-one fashion.
Moreover, we have
(1) β is of class C^∞.
(2) β^{−1}((x, x^2)) = x is continuous.
(3) Dβ(x) = [ 1 ; 2x ] has rank 1 for each x ∈ H^1.
So N is a 1-manifold in R^2 covered by the single coordinate patch β.
3. (a)

Proof. For any point p ∈ S^1 with p ≠ (1, 0), we let U = (0, 2π), V = S^1 − (1, 0), and α : U → V be defined
by α(θ) = (cos θ, sin θ). Then α maps U onto V continuously in a one-to-one fashion. Moreover,
(1) α is of class C^∞.
(2) α^{−1} is continuous, for (cos θ_n, sin θ_n) → (cos θ, sin θ) as n → ∞ implies θ_n → θ as n → ∞.
(3) Dα(θ) = [ −sin θ ; cos θ ] has rank 1.
So α is a coordinate patch. For p = (1, 0), we consider U = (−π, π), V = S^1 − (−1, 0), and β : U → V
defined by β(θ) = (cos θ, sin θ). We can prove in a similar way that β is a coordinate patch. Combined,
we conclude the unit circle S^1 is a 1-manifold in R^2.
(b)

Proof. We claim α^{−1} is not continuous. Indeed, for t_n = 1 − 1/n, α(t_n) → (1, 0) on S^1 as n → ∞, but
α^{−1}(α(t_n)) = t_n → 1 ≠ α^{−1}((1, 0)) = 0 as n → ∞.

4.
Proof. Let U = A and V = {(x, f(x)) : x ∈ A}. Define α : U → V by α(x) = (x, f(x)). Then α maps U
onto V in a one-to-one fashion. Moreover,
(1) α is of class C^r.
(2) α^{−1} is continuous, for (x_n, f(x_n)) → (x, f(x)) as n → ∞ implies x_n → x as n → ∞.
(3) Dα(x) = [ I_k ; Df(x) ] has rank k.
So V is a k-manifold in R^{k+1} with a single coordinate patch α.

5.
Proof. For any x ∈ M and y ∈ N, there is a coordinate patch α for x and a coordinate patch β for y,
respectively. Denote by U the domain of α, which is open in R^k, and by W the domain of β, which is open in
either R^l or H^l. Then U × W is open in either R^{k+l} or H^{k+l}, depending on whether W is open in R^l or H^l. This is the
essential reason why we need at least one manifold to have no boundary: if both M and N have boundaries,
U × W may not be open in R^{k+l} or H^{k+l}.
The rest of the proof is routine. We define a map f : U × W → α(U) × β(W) by f(x, y) = (α(x), β(y)).
Since α(U) is open in M and β(W) is open in N by the definition of coordinate patch, f(U × W) =
α(U) × β(W) is open in M × N under the product topology. f is one-to-one and continuous, since α and β
enjoy such properties. Moreover,
(1) f is of class C^r, since α and β are of class C^r.
(2) f^{−1} = (α^{−1}, β^{−1}) is continuous since α^{−1} and β^{−1} are continuous.
(3) Df(x, y) = [ Dα(x)  0 ; 0  Dβ(y) ] clearly has rank k + l for each (x, y) ∈ U × W.
Therefore, we conclude M × N is a (k + l)-manifold in R^{m+n}.

6. (a)

Proof. We define α1 : [0, 1) → [0, 1) by α1 (x) = x and α2 : [0, 1) → (0, 1] by α2 (x) = −x + 1. Then it’s easy
to check α1 and α2 are both coordinate patches.

(b)
Proof. Intuitively I × I cannot be a 2-manifold since it has “corners”. For a formal proof, assume I × I is
a 2-manifold of class C r with r ≥ 1. By Theorem 24.3, ∂(I × I), the boundary of I × I, is a 1-manifold
without boundary of class C r . Assume α is a coordinate patch of ∂(I × I) whose image includes one of those
corner points. Then Dα cannot exist at that corner point, contradiction. In conclusion, I × I cannot be a
2-manifold of class C r with r ≥ 1.

24 The Boundary of a Manifold


1.

Proof. The equation for the solid torus N in cartesian coordinates is (b − √(x^2 + y^2))^2 + z^2 ≤ a^2, and the
equation for the torus T in cartesian coordinates is (b − √(x^2 + y^2))^2 + z^2 = a^2. Define the open set
O = {(x, y, z) ∈ R^3 : (x, y) ≠ (0, 0)} and f : O → R by f(x, y, z) = a^2 − z^2 − (b − √(x^2 + y^2))^2. Then

Df(x, y, z) = ( 2xb/√(x^2 + y^2) − 2x,  2yb/√(x^2 + y^2) − 2y,  −2z )

has rank 1 at each point of T. By Theorem 24.4, N is a 3-manifold and T = ∂N is a 2-manifold without boundary.
2.

Proof. We first prove a regularization result.


Lemma 24.1. Let f : R^{n+k} → R^n be of class C^r. Assume Df has rank n at a point p. Then there is an open
set W ⊂ R^{n+k} and a C^r-function G : W → R^{n+k} with C^r-inverse such that G(W) is an open neighborhood
of p and f ∘ G : W → R^n is the projection mapping to the first n coordinates.

Proof. We write any point x ∈ R^{n+k} as (x_1, x_2) with x_1 ∈ R^n and x_2 ∈ R^k. We first assume D_{x_1} f(p) has
rank n. Define F(x) = (f(x), x_2); then

DF = [ D_{x_1}f  D_{x_2}f ; 0  I_k ].

So det DF(p) = det D_{x_1} f(p) ≠ 0. By the inverse function theorem, there is an open set U of R^{n+k} containing p
such that F carries U in a one-to-one fashion onto an open set W of R^{n+k}, and its inverse function G is of
class C^r. Denote by π : R^{n+k} → R^n the projection π(x) = x_1; then f ∘ G(x) = π ∘ F ∘ G(x) = π(x) on W.
In general, since Df(p) has rank n, there exist j_1 < · · · < j_n such that the matrix ∂(f_1, · · · , f_n)/∂(x^{j_1}, · · · , x^{j_n}) has rank
n at p. Here x^j denotes the j-th coordinate of x. Define H : R^{n+k} → R^{n+k} as the affine map that permutes
coordinates by swapping the pairs (x^1, x^{j_1}), (x^2, x^{j_2}), · · · , (x^n, x^{j_n}), normalized so that H(p) = p, i.e.
H(x) = (x^{j_1}, x^{j_2}, · · · , x^{j_n}, · · · ) − (p^{j_1}, p^{j_2}, · · · , p^{j_n}, · · · ) + p. Then
H(p) = p and D(f ∘ H)(p) = Df(H(p)) · DH(p) = Df(p) · DH(p). So D_{x_1}(f ∘ H)(p) = ∂(f_1, · · · , f_n)/∂(x^{j_1}, · · · , x^{j_n})(p) and f ∘ H
is of the type considered previously. So, using the notation of the previous paragraph, f ∘ (H ∘ G)(x) = π(x)
on W.

By the lemma and using its notation, for every p ∈ M = {x : f(x) = 0}, there is a C^r-diffeomorphism G
between an open set W of R^{n+k} and an open set U of R^{n+k} containing p, such that f ∘ G = π on W. So
U ∩ M = {x ∈ U : f(x) = 0} = G(W) ∩ (f ∘ G ∘ G^{−1})^{−1}({0}) = G(W) ∩ G(π^{−1}({0})) = G(W ∩ {0} × R^k).
Therefore α(x_1, · · · , x_k) := G((0, x_1, · · · , x_k)) is a k-dimensional coordinate patch on M about p. Since p is
arbitrarily chosen, we have proved M is a k-manifold without boundary in R^{n+k}.
Now, for every p ∈ N = {x : f_1(x) = · · · = f_{n−1}(x) = 0, f_n(x) ≥ 0}, there are two cases: f_n(p) > 0 and f_n(p) = 0.
For the first case, by an argument similar to that for M, we can find a C^r-diffeomorphism G_1 between an
open set W of R^{n+k} and an open set U of R^{n+k} containing p, such that f ∘ G_1 = π_1 on W. Here π_1 is the
projection mapping to the first (n − 1) coordinates. So U ∩ N = U ∩ {x : f_1(x) = · · · = f_{n−1}(x) = 0} ∩ {x :
f_n(x) ≥ 0} = G_1(W ∩ {0} × R^{k+1}) ∩ {x ∈ U : f_n(x) ≥ 0}. When U is sufficiently small, by the continuity of
f_n and the fact f_n(p) > 0, we can assume f_n(x) > 0 for all x ∈ U. So

U ∩ N = U ∩ {x : f_1(x) = · · · = f_{n−1}(x) = 0, f_n(x) > 0}
= G_1(W ∩ {0} × R^{k+1}) ∩ {x ∈ U : f_n(x) > 0}
= G_1(W ∩ {0} × R^{k+1} ∩ G_1^{−1}(U ∩ {x : f_n(x) > 0}))
= G_1([W ∩ G_1^{−1}(U ∩ {x : f_n(x) > 0})] ∩ {0} × R^{k+1}).

This shows β(x_1, · · · , x_{k+1}) := G_1((0, x_1, · · · , x_{k+1})) is a (k + 1)-dimensional coordinate patch on N about
p.
For the second case, we note p is necessarily in M. So Df(p) is of rank n and there is a C^r-diffeomorphism
G between an open set W of R^{n+k} and an open set U of R^{n+k} containing p, such that f ∘ G = π on W.
So U ∩ N = {x ∈ U : f_1(x) = · · · = f_{n−1}(x) = 0, f_n(x) ≥ 0} = G(W) ∩ (π ∘ G^{−1})^{−1}({0} × [0, ∞)) =
G(W ∩ π^{−1}({0} × [0, ∞))) = G(W ∩ {0} × [0, ∞) × R^k). This shows γ(x_1, · · · , x_{k+1}) := G((0, x_{k+1}, x_1, · · · , x_k))
is a (k + 1)-dimensional coordinate patch on N about p.
In summary, we have shown N is a (k + 1)-manifold. Lemma 24.2 shows ∂N = M.
3.

Proof. Define H : R^3 → R^2 by H(x, y, z) = (f(x, y, z), g(x, y, z)). By the theorem proved in Problem 2, if

DH(x, y, z) = [ D_x f  D_y f  D_z f ; D_x g  D_y g  D_z g ]

has rank 2 for every (x, y, z) ∈ M := {(x, y, z) : f(x, y, z) = g(x, y, z) = 0}, then M is a 1-manifold without
boundary in R^3, i.e. a C^r curve without singularities.
4.

Proof. We define f(x) = (f_1(x), f_2(x)) = (||x||^2 − a^2, x_n). Let N = {x : f_1(x) = 0, f_2(x) ≥ 0} = S^{n−1}(a) ∩ H^n
and M = {x : f(x) = 0}. Since

Df(x) = [ 2x_1  2x_2  · · ·  2x_{n−1}  2x_n ; 0  0  · · ·  0  1 ],

which on M (where x_n = 0) equals [ 2x_1  2x_2  · · ·  2x_{n−1}  0 ; 0  0  · · ·  0  1 ],
Df has rank 2 on M; and ∂f_1/∂x = [2x_1, 2x_2, · · · , 2x_n] has rank 1 on N. By the theorem proved in Problem
2, E_+^{n−1}(a) = N is an (n − 1)-manifold whose boundary is the (n − 2)-manifold M. Geometrically, M is
S^{n−2}(a).

5. (a)
 
Proof. We write any point x ∈ R^9 as x = [ x_1 ; x_2 ; x_3 ], where x_1 = [x_{11}, x_{12}, x_{13}], x_2 = [x_{21}, x_{22}, x_{23}], and
x_3 = [x_{31}, x_{32}, x_{33}]. Define f_1(x) = ||x_1||^2 − 1, f_2(x) = ||x_2||^2 − 1, f_3(x) = ||x_3||^2 − 1, f_4(x) = (x_1, x_2),
f_5(x) = (x_1, x_3), and f_6(x) = (x_2, x_3). Then O(3) is the solution set of the equation f(x) = 0.

(b)

Proof. We note

Df(x) = ∂(f_1, · · · , f_6)/∂(x_{11}, x_{12}, x_{13}, x_{21}, x_{22}, x_{23}, x_{31}, x_{32}, x_{33})
= [ 2x_{11}  2x_{12}  2x_{13}  0  0  0  0  0  0 ;
    0  0  0  2x_{21}  2x_{22}  2x_{23}  0  0  0 ;
    0  0  0  0  0  0  2x_{31}  2x_{32}  2x_{33} ;
    x_{21}  x_{22}  x_{23}  x_{11}  x_{12}  x_{13}  0  0  0 ;
    x_{31}  x_{32}  x_{33}  0  0  0  x_{11}  x_{12}  x_{13} ;
    0  0  0  x_{31}  x_{32}  x_{33}  x_{21}  x_{22}  x_{23} ].

Since x_1, x_2, x_3 are pairwise orthogonal and non-zero, they are linearly independent. From the structure
of Df, Df(x) has rank 6 for x ∈ O(3). By the theorem proved in Problem 2, O(3) is a 3-manifold without
boundary in R^9. Finally, O(3) = {x : f(x) = 0} is clearly bounded and closed, hence compact.
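The rank claim can be verified at a sample point of O(3), e.g. the identity matrix, where Df is an explicit integer matrix; a Python sketch with an ad hoc exact-arithmetic `rank` helper:

```python
from fractions import Fraction

def rank(M):
    # Gaussian elimination over the rationals
    M = [[Fraction(x) for x in row] for row in M]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                factor = M[i][c] / M[r][c]
                M[i] = [a - factor * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

# Df at the identity (x1, x2, x3 the standard basis vectors of R^3)
Df = [
    [2, 0, 0, 0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 2, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, 0, 0, 2],
    [0, 1, 0, 1, 0, 0, 0, 0, 0],
    [0, 0, 1, 0, 0, 0, 1, 0, 0],
    [0, 0, 0, 0, 0, 1, 0, 1, 0],
]
assert rank(Df) == 6
```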
6.
Proof. The argument is similar to that of Problem 5: there are n norm conditions and n(n−1)/2 orthogonality
conditions, so the dimension is n^2 − n − n(n−1)/2 = n(n−1)/2.

25 Integrating a Scalar Function over a Manifold


1.
Proof. To see α(t, z) is a coordinate patch, we note that α is one-to-one and onto S^2(a) − D, where D =
{(√(a^2 − z^2), 0, z) : |z| ≤ a} is a closed set and has measure zero in S^2(a) (note D is the image of a
parametrized 1-manifold, hence has measure zero). On the set {(t, z) : 0 < t < 2π, |z| < a}, α is smooth and
α^{−1}(x, y, z) = (t, z) is continuous on S^2(a) − D. Finally, by the calculation done in the text, the rank of Dα
is 2 on {(t, z) : 0 < t < 2π, |z| < a}.

(Dα)^{tr} Dα = [ −(a^2−z^2)^{1/2} sin t   (a^2−z^2)^{1/2} cos t   0 ; (−z cos t)/(a^2−z^2)^{1/2}   (−z sin t)/(a^2−z^2)^{1/2}   1 ] · [ −(a^2−z^2)^{1/2} sin t   (−z cos t)/(a^2−z^2)^{1/2} ; (a^2−z^2)^{1/2} cos t   (−z sin t)/(a^2−z^2)^{1/2} ; 0   1 ]
= [ a^2 − z^2   0 ; 0   a^2/(a^2 − z^2) ].

So V(Dα) = [det((Dα)^{tr} Dα)]^{1/2} = a and v(S^2(a)) = ∫_{{(t,z) : 0<t<2π, |z|<a}} V(Dα) = 4πa^2.
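The computation (Dα)^{tr} Dα = diag(a^2 − z^2, a^2/(a^2 − z^2)) can be spot-checked numerically at random points of the patch; a Python sketch with a = 2:

```python
import math, random

random.seed(2)
a = 2.0
for _ in range(50):
    t = random.uniform(0.01, 2 * math.pi - 0.01)
    z = random.uniform(-a + 0.01, a - 0.01)
    s = math.sqrt(a * a - z * z)
    # columns of D(alpha) for alpha(t, z) = (s cos t, s sin t, z)
    d_t = (-s * math.sin(t), s * math.cos(t), 0.0)
    d_z = (-(z / s) * math.cos(t), -(z / s) * math.sin(t), 1.0)
    g = [[sum(u * v for u, v in zip(d_t, d_t)),
          sum(u * v for u, v in zip(d_t, d_z))],
         [sum(u * v for u, v in zip(d_z, d_t)),
          sum(u * v for u, v in zip(d_z, d_z))]]
    det_g = g[0][0] * g[1][1] - g[0][1] * g[1][0]
    # V(D(alpha)) = sqrt(det g) should equal a at every point
    assert abs(math.sqrt(det_g) - a) < 1e-8
```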

4.
Proof. Let (α_j) be a family of coordinate patches that covers M. Then (h ∘ α_j) is a family of coordinate
patches that covers N. Suppose ϕ_1, · · · , ϕ_l is a partition of unity on M that is dominated by (α_j); then
ϕ_1 ∘ h^{−1}, · · · , ϕ_l ∘ h^{−1} is a partition of unity on N that is dominated by (h ∘ α_j). Then

∫_N f dV = ∑_{i=1}^l ∫_N (ϕ_i ∘ h^{−1}) f dV
= ∑_{i=1}^l ∫_{Int U_i} (ϕ_i ∘ h^{−1} ∘ h ∘ α_i)(f ∘ h ∘ α_i) V(D(h ∘ α_i))
= ∑_{i=1}^l ∫_{Int U_i} (ϕ_i ∘ α_i)(f ∘ h ∘ α_i) V(Dα_i)
= ∑_{i=1}^l ∫_M ϕ_i (f ∘ h) dV
= ∫_M f ∘ h dV,

where the third equality uses V(D(h ∘ α_i)) = V(Dα_i), h being an isometry. In particular, by setting f ≡ 1,
we get v(N) = v(M).

6.
Proof. Let L_0 = {x ∈ R^n : x_i > 0}. Then M ∩ L_0 is a manifold, for if α : U → V is a coordinate patch on
M, then α : U ∩ α^{−1}(L_0) → V ∩ L_0 is a coordinate patch on M ∩ L_0. Similarly, if we let L_1 = {x ∈ R^n : x_i < 0},
M ∩ L_1 is a manifold. Theorem 25.4 implies

c_i(M) = (1/v(M)) ∫_M π_i dV = (1/v(M)) [ ∫_{M∩L_0} π_i dV + ∫_{M∩L_1} π_i dV ].

Suppose (α_j) is a family of coordinate patches on M ∩ L_0 and there is a partition of unity ϕ_1, · · · , ϕ_l on
M ∩ L_0 that is dominated by (α_j); then

∫_{M∩L_0} π_i dV = ∑_{j=1}^l ∫_M (ϕ_j π_i) dV = ∑_{j=1}^l ∫_{Int U_j} (ϕ_j ∘ α_j)(π_i ∘ α_j) V(Dα_j).

Define f : R^n → R^n by f(x) = (x_1, · · · , −x_i, · · · , x_n). It's easy to see (f ∘ α_j) is a family of coordinate
patches on M ∩ L_1 and ϕ_1 ∘ f, · · · , ϕ_l ∘ f is a partition of unity on M ∩ L_1 that is dominated by (f ∘ α_j).
Therefore

∫_{M∩L_1} π_i dV = ∑_{j=1}^l ∫_{Int U_j} (ϕ_j ∘ f ∘ f ∘ α_j)(π_i ∘ f ∘ α_j) V(D(f ∘ α_j)) = ∑_{j=1}^l ∫_{Int U_j} (ϕ_j ∘ α_j)(π_i ∘ f ∘ α_j) V(D(f ∘ α_j)).

In order to show c_i(M) = 0, it suffices to show (π_i ∘ α_j) V(Dα_j) = −(π_i ∘ f ∘ α_j) V(D(f ∘ α_j)). Indeed,

V^2(D(f ∘ α_j))(x) = V^2(Df(α_j(x)) Dα_j(x))
= det(Dα_j(x)^{tr} Df(α_j(x))^{tr} Df(α_j(x)) Dα_j(x))
= det(Dα_j(x)^{tr} Dα_j(x))
= V^2(Dα_j)(x),

and π_i ∘ f = −π_i. Combined, we conclude ∫_{M∩L_1} π_i dV = −∫_{M∩L_0} π_i dV. Hence c_i(M) = 0.

8. (a)
Proof. Let (α_i) be a family of coordinate patches on M and ϕ_1, · · · , ϕ_l a partition of unity on M dominated
by (α_i). Let (β_j) be a family of coordinate patches on N and ψ_1, · · · , ψ_k a partition of unity on N dominated
by (β_j). Then it's easy to see (α_i × β_j)_{i,j} is a family of coordinate patches on M × N and (ϕ_m ψ_n)_{1≤m≤l, 1≤n≤k}
is a partition of unity on M × N dominated by (α_i × β_j)_{i,j}. Then

∫_{M×N} f · g dV = ∑_{1≤m≤l, 1≤n≤k} ∫_{M×N} (ϕ_m f)(ψ_n g) dV
= ∑_{1≤m≤l, 1≤n≤k} ∫_{Int U_m × Int V_n} (ϕ_m ∘ α_m · f ∘ α_m) V(Dα_m)(ψ_n ∘ β_n · g ∘ β_n) V(Dβ_n)
= ∑_{1≤m≤l, 1≤n≤k} ∫_{Int U_m} (ϕ_m ∘ α_m · f ∘ α_m) V(Dα_m) ∫_{Int V_n} (ψ_n ∘ β_n · g ∘ β_n) V(Dβ_n)
= [ ∫_M f dV ][ ∫_N g dV ].

(b)

Proof. Set f ≡ 1 and g ≡ 1 in (a).

(c)

Proof. By (b), v(S^1 × S^1) = v(S^1) · v(S^1) = 4π^2 a^2.

26 Multilinear Algebra
4.

Proof. By Example 1, it is easy to see f and g are not tensors on R4 . h is a tensor: h = ϕ1,1 − 7ϕ2,3 .
5.

Proof. f and h are not tensors. g is a tensor and g = 5ϕ3,2,3,4,4 .


6. (a)

Proof. f = 2ϕ1,2,2 − ϕ2,3,1 , g = ϕ2,1 − 5ϕ3,1 . So f ⊗ g = 2ϕ1,2,2,2,1 − 10ϕ1,2,2,3,1 − ϕ2,3,1,2,1 + 5ϕ2,3,1,3,1 .

(b)
Proof. f ⊗ g(x, y, z, u, v) = 2x1 y2 z2 u2 v1 − 10x1 y2 z2 u3 v1 − x2 y3 z1 u2 v1 + 5x2 y3 z1 u3 v1 .

7.
Proof. Suppose f = ∑_I d_I ϕ_I and g = ∑_J c_J ϕ_J. Then f ⊗ g = (∑_I d_I ϕ_I) ⊗ (∑_J c_J ϕ_J) =
∑_{I,J} d_I c_J ϕ_I ⊗ ϕ_J = ∑_{I,J} d_I c_J ϕ_{I,J}. This shows the four properties stated in Theorem 26.4
characterize the tensor product uniquely.

8.
Proof. For any x ∈ R^m, T^* f(x) = f(T(x)) = f(B · x) = A · (B · x) = (AB) · x. So the matrix of the 1-tensor
T^* f on R^m is AB.

27 Alternating Tensors
1.
Proof. Since h is not multilinear, h is not an alternating tensor. f = ϕ_{1,2} − ϕ_{2,1} + ϕ_{1,1} is a tensor. The only
permutations of {1, 2} are the identity mapping id and σ : σ(1) = 2, σ(2) = 1. So f is alternating if and
only if f^σ(x, y) = −f(x, y). Since f^σ(x, y) = f(y, x) = y_1 x_2 − y_2 x_1 + y_1 x_1 ≠ −f(x, y), we conclude f is not
alternating.
Similarly, g = ϕ_{1,3} − ϕ_{3,2} is a tensor, and g^σ = ϕ_{3,1} − ϕ_{2,3} ≠ −g. So g is not alternating.
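The failure of f to be alternating can be checked on concrete vectors; a minimal Python sketch:

```python
def f(x, y):
    # f = phi_{1,2} - phi_{2,1} + phi_{1,1}  (coordinates 1-indexed in the text)
    return x[0] * y[1] - x[1] * y[0] + x[0] * y[0]

x, y = (1.0, 2.0, 0.0, 0.0), (3.0, 5.0, 0.0, 0.0)
# alternating would require f(y, x) == -f(x, y); the phi_{1,1} term breaks it
assert f(y, x) != -f(x, y)
assert f(y, x) + f(x, y) == 2 * x[0] * y[0]
```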
3.
Proof. Suppose I = (i_1, · · · , i_k). If {i_1, · · · , i_k} ≠ {j_1, · · · , j_k} (as sets), then ϕ_I(a_{j_1}, · · · , a_{j_k}) = 0. If
{i_1, · · · , i_k} = {j_1, · · · , j_k}, there must exist a permutation σ of {1, 2, · · · , k} such that I = (i_1, · · · , i_k) =
(j_{σ(1)}, · · · , j_{σ(k)}). Then ϕ_I(a_{j_1}, · · · , a_{j_k}) = (sgn σ)(ϕ_I)^σ(a_{j_1}, · · · , a_{j_k}) = (sgn σ) ϕ_I(a_{j_{σ(1)}}, · · · , a_{j_{σ(k)}}) = sgn σ.
In summary, we have

ϕ_I(a_{j_1}, · · · , a_{j_k}) = { sgn σ  if there is a permutation σ of {1, 2, · · · , k} such that I = J_σ = (j_{σ(1)}, · · · , j_{σ(k)}) ; 0  otherwise. }

4.
Proof. For any v_1, · · · , v_k ∈ V and any permutation σ of {1, · · · , k},

(T^* f)^σ(v_1, · · · , v_k) = T^* f(v_{σ(1)}, · · · , v_{σ(k)}) = f(T(v_{σ(1)}), · · · , T(v_{σ(k)})) = f^σ(T(v_1), · · · , T(v_k))
= (sgn σ) f(T(v_1), · · · , T(v_k)) = (sgn σ) T^* f(v_1, · · · , v_k).

So (T^* f)^σ = (sgn σ) T^* f, which implies T^* f ∈ A^k(V).


5.
Proof. We follow the hint and prove ϕ_{Iσ} = (ϕ_I)^{σ^{−1}}. Indeed, suppose a_1, · · · , a_n is a basis of the underlying
vector space V; then

(ϕ_I)^{σ^{−1}}(a_{j_1}, · · · , a_{j_k}) = ϕ_I(a_{j_{σ^{−1}(1)}}, · · · , a_{j_{σ^{−1}(k)}}) = { 1 if I = (j_{σ^{−1}(1)}, · · · , j_{σ^{−1}(k)}) ; 0 otherwise }
= { 1 if Iσ = (j_{σ∘σ^{−1}(1)}, · · · , j_{σ∘σ^{−1}(k)}) = J ; 0 otherwise }
= ϕ_{Iσ}(a_{j_1}, · · · , a_{j_k}).

Thus, using sgn σ = sgn σ^{−1} and reindexing the sum over σ^{−1},

∑_σ (sgn σ)(ϕ_I)^σ = ∑_{σ^{−1}} (sgn σ^{−1})(ϕ_I)^{σ^{−1}} = ∑_{σ^{−1}} (sgn σ) ϕ_{Iσ} = ∑_σ (sgn σ) ϕ_{Iσ}.

28 The Wedge Product


1. (a)
Proof. F = 2ϕ2 ⊗ϕ2 ⊗ϕ1 +ϕ1 ⊗ϕ5 ⊗ϕ4 , G = ϕ1 ⊗ϕ3 +ϕ3 ⊗ϕ1 . So AF = 2ϕ2 ∧ϕ2 ∧ϕ1 +ϕ1 ∧ϕ5 ∧ϕ4 = −ϕ1 ∧ϕ4 ∧ϕ5
and AG = ϕ1 ∧ ϕ3 − ϕ1 ∧ ϕ3 = 0, by Step 9 of the proof of Theorem 28.1.
(b)
Proof. (AF ) ∧ h = −ϕ1 ∧ ϕ4 ∧ ϕ5 ∧ (ϕ1 − 2ϕ3 ) = 2ϕ1 ∧ ϕ4 ∧ ϕ5 ∧ ϕ3 = 2ϕ1 ∧ ϕ3 ∧ ϕ4 ∧ ϕ5 .
(c)

Proof. (AF)(x, y, z) = −ϕ_1 ∧ ϕ_4 ∧ ϕ_5 (x, y, z) = −det[ x_1 y_1 z_1 ; x_4 y_4 z_4 ; x_5 y_5 z_5 ] = −x_1 y_4 z_5 + x_1 y_5 z_4 + x_4 y_1 z_5 − x_4 y_5 z_1 −
x_5 y_1 z_4 + x_5 y_4 z_1.
2.
Proof. Suppose G is a symmetric k-tensor, so that G^σ = G for every σ. Then AG(v_1, · · · , v_k) =
∑_σ (sgn σ) G^σ(v_1, · · · , v_k) = ∑_σ (sgn σ) G(v_1, · · · , v_k) = [∑_σ (sgn σ)] G(v_1, · · · , v_k). Let e be an elementary
permutation. Then σ ↦ e ∘ σ is a bijection of the permutation group S_k of {1, 2, · · · , k} onto itself, so S_k
can be divided into two disjoint subsets U_1 and U_2 between which this map establishes a one-to-one
correspondence. By the fact sgn(e ∘ σ) = −sgn σ, we conclude ∑_σ (sgn σ) = 0. This implies AG = 0.

3.

Proof. We work by induction. For k = 2, (1/(l_1! l_2!)) A(f_1 ⊗ f_2) = f_1 ∧ f_2 by the definition of ∧. Assume
the claim is true for k = n. Then for k = n + 1,

(1/(l_1! · · · l_n! l_{n+1}!)) A(f_1 ⊗ · · · ⊗ f_n ⊗ f_{n+1}) = (1/(l_1! · · · l_n!)) · (1/l_{n+1}!) A((f_1 ⊗ · · · ⊗ f_n) ⊗ f_{n+1}) = [(1/(l_1! · · · l_n!)) A(f_1 ⊗ · · · ⊗ f_n)] ∧ f_{n+1}

by Step 6 of the proof of Theorem 28.1. By induction, (1/(l_1! · · · l_n!)) A(f_1 ⊗ · · · ⊗ f_n) = f_1 ∧ · · · ∧ f_n. So
(1/(l_1! · · · l_n! l_{n+1}!)) A(f_1 ⊗ · · · ⊗ f_n ⊗ f_{n+1}) = f_1 ∧ · · · ∧ f_n ∧ f_{n+1}. By the principle of mathematical induction,

(1/(l_1! · · · l_k!)) A(f_1 ⊗ · · · ⊗ f_k) = f_1 ∧ · · · ∧ f_k

for any k.
4.

Proof. ϕ_{i_1} ∧ · · · ∧ ϕ_{i_k}(x_1, · · · , x_k) = A(ϕ_{i_1} ⊗ · · · ⊗ ϕ_{i_k})(x_1, · · · , x_k) = ∑_σ (sgn σ)(ϕ_{i_1} ⊗ · · · ⊗ ϕ_{i_k})^σ(x_1, · · · , x_k) =
∑_σ (sgn σ)(ϕ_{i_1} ⊗ · · · ⊗ ϕ_{i_k})(x_{σ(1)}, · · · , x_{σ(k)}) = ∑_σ (sgn σ) x_{i_1,σ(1)} · · · x_{i_k,σ(k)} = det X_I.
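The identity of Problem 4 — the wedge of elementary 1-forms is a determinant — can be tested directly from the permutation-sum definition; a Python sketch (`phi`, `wedge`, `det3` are ad hoc helpers, indices 0-based):

```python
import itertools

def phi(i, x):
    # elementary 1-form: phi_i(x) = x_i (0-indexed)
    return x[i]

def wedge(indices, vectors):
    # A(phi_{i1} ⊗ ... ⊗ phi_{ik}) evaluated on (x_1, ..., x_k)
    k = len(indices)
    total = 0
    for perm in itertools.permutations(range(k)):
        sign = 1
        for a in range(k):
            for b in range(a + 1, k):
                if perm[a] > perm[b]:
                    sign = -sign
        prod = 1
        for a in range(k):
            prod *= phi(indices[a], vectors[perm[a]])
        total += sign * prod
    return total

def det3(M):
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
          - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
          + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

x1, x2, x3 = (1, 2, 0, 3), (0, 1, 4, 1), (2, 0, 1, 5)
I = (0, 1, 3)                       # i.e. phi_1 ∧ phi_2 ∧ phi_4
XI = [[v[i] for v in (x1, x2, x3)] for i in I]   # rows i of the matrix X_I
assert wedge(I, (x1, x2, x3)) == det3(XI)
```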

5.
Proof. Suppose F is a k-tensor. Then

T^*(F^σ)(v_1, · · · , v_k) = F^σ(T(v_1), · · · , T(v_k))
= F(T(v_{σ(1)}), · · · , T(v_{σ(k)}))
= T^* F(v_{σ(1)}, · · · , v_{σ(k)})
= (T^* F)^σ(v_1, · · · , v_k).

6. (a)

Proof. T∗ψI(v1, · · · , vk) = ψI(T(v1), · · · , T(vk)) = ψI(B · v1, · · · , B · vk). In particular, for J̄ = (j̄1, · · · , j̄k), cJ̄ = ∑_[J] cJ ψJ(ej̄1, · · · , ej̄k) = T∗ψI(ej̄1, · · · , ej̄k) = ψI(B · ej̄1, · · · , B · ej̄k) = ψI(βj̄1, · · · , βj̄k), where βi is the i-th column of B. So cJ̄ = det[βj̄1, · · · , βj̄k]I. Therefore, cJ is the determinant of the matrix consisting of the i1, · · · , ik rows and the j1, · · · , jk columns of B, where I = (i1, · · · , ik) and J = (j1, · · · , jk).
(b)

Proof. T∗f = ∑_[I] dI T∗(ψI) = ∑_[I] dI ∑_[J] det BI,J ψJ = ∑_[J] (∑_[I] dI det BI,J) ψJ, where BI,J is the matrix consisting of the i1, · · · , ik rows and the j1, · · · , jk columns of B (I = (i1, · · · , ik) and J = (j1, · · · , jk)).
29 Tangent Vectors and Differential Forms
1.

Proof. γ∗(t; e1) = (γ(t); Dγ(t) · e1) = (γ(t); (γ1′(t), · · · , γn′(t))), which is the velocity vector of γ corresponding to the parameter value t.
2.

Proof. The velocity vector of the curve γ(t) = α(x + tv) corresponding to parameter value t = 0 is d/dt γ(t)|t=0 = limt→0 [α(x + tv) − α(x)]/t = Dα(x) · v. So α∗(x; v) = (α(x); Dα(x) · v) = (α(x); d/dt γ(t)|t=0).
3.

Proof. Suppose α : Uα → Vα and β : Uβ → Vβ are two coordinate patches about p, with α(x) = β(y) = p. Because Rk is spanned by the vectors e1, · · · , ek, the space Tpα(M) obtained by using α is spanned by the vectors (p; ∂α(x)/∂xj), j = 1, · · · , k, and the space Tpβ(M) obtained by using β is spanned by the vectors (p; ∂β(y)/∂yi), i = 1, · · · , k. Let W = Vα ∩ Vβ, Uα′ = α⁻¹(W), and Uβ′ = β⁻¹(W). Then β⁻¹ ◦ α : Uα′ → Uβ′ is a C^r-diffeomorphism by Theorem 24.1. By the chain rule,

Dα(x) = D(β ◦ β⁻¹ ◦ α)(x) = Dβ(y) · D(β⁻¹ ◦ α)(x).

Since D(β⁻¹ ◦ α)(x) is of rank k, the linear space spanned by (∂α(x)/∂xj), j = 1, · · · , k, agrees with the linear space spanned by (∂β(y)/∂yi), i = 1, · · · , k.
4. (a)

Proof. Suppose α : U → V is a coordinate patch about p, with α(x) = p. Since p ∈ M − ∂M, we can without loss of generality assume U is an open subset of Rk. By the definition of tangent vector, there exists u ∈ Rk such that v = Dα(x) · u. For ε sufficiently small, {x + tu : |t| ≤ ε} ⊂ U and γ(t) := α(x + tu) (|t| ≤ ε) has its image in M. Clearly d/dt γ(t)|t=0 = Dα(x) · u = v.
(b)

Proof. Suppose γ : (−ε, ε) → Rn is a parametrized-curve whose image set lies in M. Denote γ(0) by p and assume α : U → V is a coordinate patch about p. For v := d/dt γ(t)|t=0, we define u = Dα⁻¹(p) · v. Then

α∗(x; u) = (p; Dα(x) · u) = (p; Dα(x) · Dα⁻¹(p) · v) = (p; D(α ◦ α⁻¹)(p) · v) = (p; v).

So the velocity vector of γ corresponding to parameter value t = 0 is a tangent vector.
5.
Proof. Similar to the proof of Problem 4, with (−ε, ε) changed to [0, ε) or (−ε, 0]. We omit the details.

30 The Differential Operator
2.

Proof. dω = −x dx ∧ dy − z dy ∧ dz. So d(dω) = −dx ∧ dx ∧ dy − dz ∧ dy ∧ dz = 0. Meanwhile,

dη = −2yz dz ∧ dy + 2dx ∧ dz = 2yz dy ∧ dz + 2dx ∧ dz

and

ω ∧ η = (−xy²z² − 3x) dx ∧ dy + (2x²y + xyz) dx ∧ dz + (6x − y²z³) dy ∧ dz.

So

d(ω ∧ η) = (−2xy²z − 2x² − xz + 6) dx ∧ dy ∧ dz,
(dω) ∧ η = (−2x² − xz) dx ∧ dy ∧ dz,
and
ω ∧ dη = (2xy²z − 6) dx ∧ dy ∧ dz.

Therefore, (dω) ∧ η − ω ∧ dη = (−2xy²z − 2x² − xz + 6) dx ∧ dy ∧ dz = d(ω ∧ η).

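The antiderivation rule d(ω ∧ η) = dω ∧ η − ω ∧ dη used above can be verified symbolically. The small exterior-calculus helpers and the example 1-forms below are my own choices (the exercise's ω and η are not fully reproduced in this solution):

```python
import itertools
import sympy as sp

x, y, z = sp.symbols('x y z')
COORDS = (x, y, z)

# A k-form on R^3 is represented as {sorted index tuple: coefficient},
# e.g. {(0,): x*y, (1,): z} stands for  xy dx + z dy.

def wedge(a, b):
    out = {}
    for ia, ca in a.items():
        for ib, cb in b.items():
            if set(ia) & set(ib):
                continue                      # a repeated dx_i gives 0
            merged = ia + ib
            inv = sum(1 for p, q in itertools.combinations(range(len(merged)), 2)
                      if merged[p] > merged[q])
            key = tuple(sorted(merged))
            out[key] = out.get(key, 0) + (-1) ** inv * ca * cb
    return out

def d(form):
    out = {}
    for idx, coeff in form.items():
        for i, xi in enumerate(COORDS):
            if i in idx:
                continue
            key = tuple(sorted((i,) + idx))
            sign = (-1) ** key.index(i)       # cost of moving dx_i into place
            out[key] = out.get(key, 0) + sign * sp.diff(coeff, xi)
    return out

def minus(a, b):
    keys = set(a) | set(b)
    return {k: a.get(k, 0) - b.get(k, 0) for k in keys}

def is_zero(form):
    return all(sp.simplify(c) == 0 for c in form.values())

# hypothetical example 1-forms (my own, not the ones from the exercise)
w = {(0,): x * y, (1,): z}          # omega = xy dx + z dy
e = {(2,): y, (0,): x ** 2}         # eta   = y dz + x^2 dx

lhs = d(wedge(w, e))                                 # d(omega ^ eta)
rhs = minus(wedge(d(w), e), wedge(w, d(e)))          # d(omega)^eta - omega^d(eta)
assert is_zero(minus(lhs, rhs))
```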
3.

Proof. In R², ω = y dx − x dy vanishes at x0 = (0, 0), but dω = −2dx ∧ dy does not vanish at x0. In general, suppose ω is a k-form defined in an open set A of Rn with the general form ω = ∑_[I] fI dxI. If ω vanishes at each x in a neighborhood of x0, then fI ≡ 0 in that neighborhood for each I, so every partial derivative Di fI vanishes at x0. Hence dω = ∑_[I] dfI ∧ dxI = ∑_[I] (∑_i Di fI dxi) ∧ dxI vanishes at x0.

4.

Proof. dω = d(x/(x² + y²)) ∧ dx + d(y/(x² + y²)) ∧ dy = [2xy/(x² + y²)²] dx ∧ dy + [−2xy/(x² + y²)²] dx ∧ dy = 0. So ω is closed. Define θ = ½ log(x² + y²); then dθ = ω. So ω is exact on A.

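A short symbolic check of this computation, with the formulas exactly as above:

```python
import sympy as sp

x, y = sp.symbols('x y', positive=True)

# theta = (1/2) log(x^2 + y^2) should be a primitive of omega on A
theta = sp.log(x**2 + y**2) / 2
assert sp.simplify(sp.diff(theta, x) - x / (x**2 + y**2)) == 0
assert sp.simplify(sp.diff(theta, y) - y / (x**2 + y**2)) == 0

# closedness: the dx^dy coefficient of d(omega) vanishes
assert sp.simplify(sp.diff(y / (x**2 + y**2), x) - sp.diff(x / (x**2 + y**2), y)) == 0
```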
5. (a)

Proof. dω = [(−(x² + y²) + 2y²)/(x² + y²)²] dy ∧ dx + [(x² + y² − 2x²)/(x² + y²)²] dx ∧ dy = 0. So ω is closed.

(c)

Proof. We consider the following transformation from (0, ∞) × (0, 2π) to B:

x = r cos t,  y = r sin t.

Then

det ∂(x, y)/∂(r, t) = det [cos t, −r sin t; sin t, r cos t] = r ≠ 0.

By part (b) and the inverse function theorem (Theorem 8.2, the global version), we conclude ϕ is of class C∞.

(d)

Proof. Using the transformation given in part (c), we have dx = cos t dr − r sin t dt and dy = sin t dr + r cos t dt. So ω = [−r sin t (cos t dr − r sin t dt) + r cos t (sin t dr + r cos t dt)]/r² = dt = dϕ.
(e)

Proof. We follow the hint. Suppose g is a closed 0-form in B. Denote by a the point (−1, 0) of R². For any x ∈ B, let γ(t) : [0, 1] → B be the segment connecting a and x, with γ(0) = a and γ(1) = x. Then by the mean-value theorem (Theorem 7.3), there exists t0 ∈ (0, 1) such that g(a) − g(x) = Dg(a + t0(x − a)) · (a − x). Since g is closed in B, Dg = 0 in B. This implies g(x) = g(a) for any x ∈ B.
(f)

Proof. First, we note ϕ is not well-defined in all of A, so part (d) cannot be used to prove ω is exact in A. Assume ω = df in A for some 0-form f. Then d(f − ϕ) = df − dϕ = ω − ω = 0 in B. By part (e), f − ϕ is constant in B. Since limy↓0 ϕ(1, y) = 0 and limy↑0 ϕ(1, y) = 2π, f(1, y) has different limits as y approaches 0 through positive and negative values. This is a contradiction, since f is a C¹ function defined everywhere in A.
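A numerical illustration that ω is not exact on A: its line integral over the unit circle (a closed curve in A) is 2π rather than 0. The midpoint-rule script below is my own sketch:

```python
import math

# midpoint-rule line integral of omega = (-y dx + x dy)/(x^2 + y^2)
# over the unit circle t -> (cos t, sin t), t in [0, 2*pi)
N = 100_000
total = 0.0
for k in range(N):
    t = 2 * math.pi * (k + 0.5) / N
    px, py = math.cos(t), math.sin(t)          # point on the circle
    dx, dy = -math.sin(t), math.cos(t)         # velocity of the parametrization
    total += (-py * dx + px * dy) / (px * px + py * py) * (2 * math.pi / N)

assert abs(total - 2 * math.pi) < 1e-6
```

If ω were df for a single-valued f on A, this integral would vanish by the fundamental theorem of calculus, which is exactly the contradiction derived above.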
6.

Proof. dη = ∑_{i=1}^{n} (−1)^{i−1} Di fi dxi ∧ dx1 ∧ · · · ∧ dx̂i ∧ · · · ∧ dxn = ∑_{i=1}^{n} Di fi dx1 ∧ · · · ∧ dxn. So dη = 0 if and only if ∑_{i=1}^{n} Di fi = 0. Since Di fi(x) = (||x||² − m xi²)/||x||^{m+2}, we get ∑_{i=1}^{n} Di fi(x) = (n − m)/||x||^m. So dη = 0 if and only if m = n.

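The divergence computation ∑ Di fi = (n − m)/||x||^m can be checked symbolically for n = 3, with m kept as a symbol:

```python
import sympy as sp

x1, x2, x3, m = sp.symbols('x1 x2 x3 m', positive=True)
r = sp.sqrt(x1**2 + x2**2 + x3**2)

# eta has coefficients f_i = x_i / ||x||^m; d(eta) = (sum_i D_i f_i) dx1^dx2^dx3
div = sum(sp.diff(xi / r**m, xi) for xi in (x1, x2, x3))
assert sp.simplify(div - (3 - m) / r**m) == 0
```

In particular, substituting m = 3 gives divergence zero, matching the conclusion m = n.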
7.

Proof. By linearity, it suffices to prove the theorem for ω = f dxI, where I = (i1, · · · , ik−1) is a (k − 1)-tuple from {1, · · · , n} in ascending order. Indeed, in this case,

h(x) = d(f dxI)(x)((x; v1), · · · , (x; vk)) = (∑_{i=1}^{n} Di f(x) dxi ∧ dxI)((x; v1), · · · , (x; vk)).

Let X = [v1 · · · vk]. For each j ∈ {1, · · · , k}, let Yj = [v1 · · · v̂j · · · vk]. Then by Theorem 2.15 and Problem 4 of §28,

det X(i, i1, · · · , ik−1) = ∑_{j=1}^{k} (−1)^{j−1} vij det Yj(i1, · · · , ik−1).

Therefore

h(x) = ∑_{i=1}^{n} Di f(x) det X(i, i1, · · · , ik−1)
= ∑_{i=1}^{n} ∑_{j=1}^{k} Di f(x) (−1)^{j−1} vij det Yj(i1, · · · , ik−1)
= ∑_{j=1}^{k} (−1)^{j−1} Df(x) · vj det Yj(i1, · · · , ik−1).

Meanwhile, gj(x) = ω(x)((x; v1), · · · , (x; v̂j), · · · , (x; vk)) = f(x) det Yj(i1, · · · , ik−1). So

Dgj(x) = Df(x) det Yj(i1, · · · , ik−1)

and consequently, h(x) = ∑_{j=1}^{k} (−1)^{j−1} Dgj(x) · vj. In particular, for k = 1, h(x) = Df(x) · v, which is a directional derivative.

31 Application to Vector and Scalar Fields

1.

Proof. (Proof of Theorem 31.1) It is straightforward to check that the αi and βj are isomorphisms. Moreover, d ◦ α0(f) = df = ∑_{i=1}^{n} Di f dxi and α1 ◦ grad(f) = α1((x; ∑_{i=1}^{n} Di f(x) ei)) = ∑_{i=1}^{n} Di f(x) dxi. So d ◦ α0 = α1 ◦ grad.
Also, d ◦ βn−1(G) = d(∑_{i=1}^{n} (−1)^{i−1} gi dx1 ∧ · · · ∧ dx̂i ∧ · · · ∧ dxn) = ∑_{i=1}^{n} (−1)^{i−1} Di gi dxi ∧ dx1 ∧ · · · ∧ dx̂i ∧ · · · ∧ dxn = (∑_{i=1}^{n} Di gi) dx1 ∧ · · · ∧ dxn, and βn ◦ div(G) = βn(∑_{i=1}^{n} Di gi) = (∑_{i=1}^{n} Di gi) dx1 ∧ · · · ∧ dxn. So d ◦ βn−1 = βn ◦ div.

(Proof of Theorem 31.2) We only need to check d ◦ α1 = β2 ◦ curl. Indeed, d ◦ α1(F) = d(∑_{i=1}^{3} fi dxi) = (D2 f1 dx2 + D3 f1 dx3) ∧ dx1 + (D1 f2 dx1 + D3 f2 dx3) ∧ dx2 + (D1 f3 dx1 + D2 f3 dx2) ∧ dx3 = (D2 f3 − D3 f2) dx2 ∧ dx3 + (D3 f1 − D1 f3) dx3 ∧ dx1 + (D1 f2 − D2 f1) dx1 ∧ dx2, and β2 ◦ curl(F) = β2((x; (D2 f3 − D3 f2)e1 + (D3 f1 − D1 f3)e2 + (D1 f2 − D2 f1)e3)) = (D2 f3 − D3 f2) dx2 ∧ dx3 − (D3 f1 − D1 f3) dx1 ∧ dx3 + (D1 f2 − D2 f1) dx1 ∧ dx2. So d ◦ α1 = β2 ◦ curl.
2.
Proof. α1 F = f1 dx1 + f2 dx2 and β1 F = f1 dx2 − f2 dx1 .
3. (a)

Proof. Let f be a scalar field in A and F(x) = (x; (f1(x), f2(x), f3(x))) be a vector field in A. Define ω¹F = f1 dx1 + f2 dx2 + f3 dx3 and ω²F = f1 dx2 ∧ dx3 + f2 dx3 ∧ dx1 + f3 dx1 ∧ dx2. Then it is straightforward to check that dω¹F = ω²curl F and dω²F = (div F) dx1 ∧ dx2 ∧ dx3. So by the general principle d(dω) = 0, we have

0 = d(df) = d(ω¹grad f) = ω²curl grad f

and

0 = d(dω¹F) = d(ω²curl F) = (div curl F) dx1 ∧ dx2 ∧ dx3.

These two equations imply that curl grad f = 0 and div curl F = 0.
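Both identities can be confirmed symbolically; the scalar field f and vector field F below are my own arbitrary examples:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
f = x**2 * sp.sin(y) + z**3            # hypothetical scalar field
F = (x * y * z, sp.exp(x) + z, y**2 * x)  # hypothetical vector field

grad = lambda g: tuple(sp.diff(g, v) for v in (x, y, z))
div = lambda G: sum(sp.diff(G[i], v) for i, v in enumerate((x, y, z)))

def curl(G):
    g1, g2, g3 = G
    return (sp.diff(g3, y) - sp.diff(g2, z),
            sp.diff(g1, z) - sp.diff(g3, x),
            sp.diff(g2, x) - sp.diff(g1, y))

# curl grad f = 0 and div curl F = 0, componentwise
assert all(sp.simplify(c) == 0 for c in curl(grad(f)))
assert sp.simplify(div(curl(F))) == 0
```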
4. (a)

Proof. γ2(αH + βG) = ∑_{i<j} [αhij(x) + βgij(x)] dxi ∧ dxj = α ∑_{i<j} hij(x) dxi ∧ dxj + β ∑_{i<j} gij(x) dxi ∧ dxj = αγ2(H) + βγ2(G). So γ2 is a linear mapping. It is also easy to see γ2 is one-to-one and onto, as the skew-symmetry of H implies hii = 0 and hij + hji = 0.
(b)
Proof. Suppose F is a vector field in A and H ∈ S(A). We define twist : {vector fields in A} → S(A) by
twist(F )ij = Di fj − Dj fi , and spin : S(A) → {vector fields in A} by spin(H) = (x; (D4 h23 − D3 h24 +
D2 h34 , −D4 h13 + D3 h14 − D1 h34 , D4 h12 − D2 h14 + D1 h24 , −D3 h12 + D2 h13 − D1 h23 )).
5. (a)

Proof. Suppose ω = ∑_{i=1}^{n} ai dxi is a 1-form such that ω(x)(x; v) = ⟨f(x), v⟩. Then ∑_{i=1}^{n} ai(x)vi = ∑_{i=1}^{n} fi(x)vi. Choosing v = ei, we conclude ai = fi. So ω = α1 F.
(b)

Proof. Suppose ω is an (n − 1)-form such that ω(x)((x; v1), · · · , (x; vn−1)) = εV(g(x), v1, · · · , vn−1). Assume ω has the representation ∑_{i=1}^{n} ai dx1 ∧ · · · ∧ dx̂i ∧ · · · ∧ dxn. Then

ω(x)((x; v1), · · · , (x; vn−1)) = ∑_{i=1}^{n} ai(x) det[v1, · · · , vn−1]_(1,··· ,î,··· ,n)
= ∑_{i=1}^{n} (−1)^{i−1} [(−1)^{i−1} ai(x)] det[v1, · · · , vn−1]_(1,··· ,î,··· ,n)
= det[a(x), v1, · · · , vn−1],

where a(x) = (a1(x), · · · , (−1)^{i−1} ai(x), · · · , (−1)^{n−1} an(x))^tr. Since

εV(g(x), v1, · · · , vn−1) = det[g(x), v1, · · · , vn−1],

we can conclude det[a(x), v1, · · · , vn−1] = det[g(x), v1, · · · , vn−1], or equivalently,

det[a(x) − g(x), v1, · · · , vn−1] = 0.

Since v1, · · · , vn−1 can be arbitrary, we must have g(x) = a(x), i.e. ω = ∑_{i=1}^{n} (−1)^{i−1} gi dx1 ∧ · · · ∧ dx̂i ∧ · · · ∧ dxn = βn−1 G.
(c)

Proof. Suppose ω = f dx1 ∧· · ·∧dxn is an n-form such that ω(x)((x; v1 ), · · · , (x; vn )) = ε·h(x)·V (v1 , · · · , vn ).
This is equivalent to f (x)det[v1 , · · · , vn ] = h(x)det[v1 , · · · , vn ]. So f = h and ω = βn h.

32 The Action of a Differentiable Map


1.

Proof. Let ω, η and θ be 0-forms. Then


(1) β ∗ (aω + bη) = aω ◦ β + bη ◦ β = aβ ∗ (ω) + bβ ∗ (η).
(2) β ∗ (ω ∧ θ) = β ∗ (ω · θ) = ω ◦ β · θ ◦ β = β ∗ (ω) · β ∗ (θ) = β ∗ (ω) ∧ β ∗ (θ).
(3) (β ◦ α)∗ ω = ω ◦ β ◦ α = α∗ (ω ◦ β) = α∗ (β ∗ ω).
2.

Proof.

dα1 ∧ dα3 ∧ dα5


= (D1 α1 dx1 + D2 α1 dx2 + D3 α1 dx3 ) ∧ (D1 α3 dx1 + D2 α3 dx2 + D3 α3 dx3 )
∧(D1 α5 dx1 + D2 α5 dx2 + D3 α5 dx3 )
= (D1 α1 D2 α3 dx1 ∧ dx2 + D1 α1 D3 α3 dx1 ∧ dx3 + D2 α1 D1 α3 dx2 ∧ dx1 + D2 α1 D3 α3 dx2 ∧ dx3
+D3 α1 D1 α3 dx3 ∧ dx1 + D3 α1 D2 α3 dx3 ∧ dx2 ) ∧ (D1 α5 dx1 + D2 α5 dx2 + D3 α5 dx3 )
= D2 α1 D3 α3 D1 α5 dx2 ∧ dx3 ∧ dx1 + D3 α1 D2 α3 D1 α5 dx3 ∧ dx2 ∧ dx1 + D1 α1 D3 α3 D2 α5 dx1 ∧ dx3 ∧ dx2
+D3 α1 D1 α3 D2 α5 dx3 ∧ dx1 ∧ dx2 + D1 α1 D2 α3 D3 α5 dx1 ∧ dx2 ∧ dx3 + D2 α1 D1 α3 D3 α5 dx2 ∧ dx1 ∧ dx3
= (D2 α1 D3 α3 D1 α5 − D3 α1 D2 α3 D1 α5 − D1 α1 D3 α3 D2 α5 + D3 α1 D1 α3 D2 α5 + D1 α1 D2 α3 D3 α5
−D2 α1 D1 α3 D3 α5 )dx1 ∧ dx2 ∧ dx3
 
D1 α1 D2 α1 D3 α1
= det D1 α3 D2 α3 D3 α3  dx1 ∧ dx2 ∧ dx3
D1 α5 D2 α5 D3 α5
= detDα(1, 3, 5)dx1 ∧ dx2 ∧ dx3 .
So α∗(dy(1,3,5)) = α∗(dy1 ∧ dy3 ∧ dy5) = α∗(dy1) ∧ α∗(dy3) ∧ α∗(dy5) = dα1 ∧ dα3 ∧ dα5 = det(∂α(1,3,5)/∂x) dx1 ∧ dx2 ∧ dx3. This confirms Theorem 32.2.
3.

Proof. dω = −x dx ∧ dy − 3dy ∧ dz, and α∗(ω) = (x ◦ α)(y ◦ α) dα1 + 2(z ◦ α) dα2 − (y ◦ α) dα3 = u³v(u dv + v du) + 2(3u + v)(2u du) − u²(3du + dv) = (u³v² + 9u² + 4uv) du + (u⁴v − u²) dv. Therefore

α∗(dω) = −(x ◦ α) dα1 ∧ dα2 − 3dα2 ∧ dα3
= −uv(u dv + v du) ∧ (2u du) − 3(2u du) ∧ (3du + dv)
= (2u³v − 6u) du ∧ dv,

and

d(α∗ ω) = (2u3 vdv + 4udv) ∧ du + (4u3 vdu − 2udu) ∧ dv


= (−2u3 v − 4u + 4u3 v − 2u)du ∧ dv
= (2u3 v − 6u)du ∧ dv.

So α∗ (dω) = d(α∗ ω).
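The computation can be replayed symbolically. Reading off from the differentials above, α = (uv, u², 3u + v) and ω = xy dx + 2z dy − y dz; this is my reconstruction, consistent with every displayed formula, since the exercise statement itself is not reproduced here.

```python
import sympy as sp

u, v = sp.symbols('u v')
# alpha read off from the differentials above: d(alpha1) = u dv + v du, etc.
a1, a2, a3 = u * v, u**2, 3 * u + v

# alpha*(omega) for omega = xy dx + 2z dy - y dz:
P = a1 * a2 * sp.diff(a1, u) + 2 * a3 * sp.diff(a2, u) - a2 * sp.diff(a3, u)  # du coeff
Q = a1 * a2 * sp.diff(a1, v) + 2 * a3 * sp.diff(a2, v) - a2 * sp.diff(a3, v)  # dv coeff
d_pullback = sp.expand(sp.diff(Q, u) - sp.diff(P, v))       # d(alpha* omega)

# alpha*(d omega) for d(omega) = -x dx^dy - 3 dy^dz:
J = lambda f, g: sp.diff(f, u) * sp.diff(g, v) - sp.diff(f, v) * sp.diff(g, u)
pullback_d = sp.expand(-a1 * J(a1, a2) - 3 * J(a2, a3))     # du^dv coeff

assert d_pullback == pullback_d == sp.expand(2 * u**3 * v - 6 * u)
```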


4.

Proof. Note α∗ yi = yi ◦ α = αi .
5.

Proof. α∗(dyI) is an l-form in A, so we can write it as α∗(dyI) = ∑_[J] hJ dxJ, where J is an ascending l-tuple from the set {1, · · · , k}. Fix J = (j1, · · · , jl); we have

hJ(x) = α∗(dyI)(x)((x; ej1), · · · , (x; ejl))
= (dyI)(x)(α∗(x; ej1), · · · , α∗(x; ejl))
= (dyI)(x)((α(x); Dj1 α(x)), · · · , (α(x); Djl α(x)))
= det[Dj1 α(x), · · · , Djl α(x)]I
= det (∂αI/∂xJ)(x).

Therefore α∗(dyI) = ∑_[J] det(∂αI/∂xJ) dxJ.

6. (a)

Proof. We fix x ∈ A and denote α(x) by y. Then G(y) = α∗(F(x)) = (y; Dα(x) · f(x)). Define g(y) = Dα(x) · f(x) = (Dα · f)(α⁻¹(y)). Then gi(y) = (∑_{j=1}^{n} Dj αi fj)(α⁻¹(y)) and we have

α∗(α1 G) = α∗(∑_{i=1}^{n} gi dyi) = ∑_{i=1}^{n} (gi ◦ α) dαi = ∑_{i=1}^{n} (gi ◦ α) ∑_{j=1}^{n} Dj αi dxj = ∑_{j=1}^{n} (∑_{i=1}^{n} Dj αi (gi ◦ α)) dxj.

Therefore α∗(α1 G) = α1 F if and only if

fj = ∑_{i=1}^{n} Dj αi (gi ◦ α) = ∑_{i=1}^{n} ∑_{k=1}^{n} Dj αi Dk αi fk = [Dj α1  Dj α2 · · · Dj αn] · Dα · f,

that is, Dα(x)^tr · Dα(x) · f(x) = f(x). Since F can be arbitrary, α∗(α1 G) = α1 F if and only if Dα(x)^tr · Dα(x) = In, i.e. Dα(x) is an orthogonal matrix for each x.

(b)

Proof. βn−1 F = ∑_{i=1}^{n} (−1)^{i−1} fi dx1 ∧ · · · ∧ dx̂i ∧ · · · ∧ dxn and

α∗(βn−1 G) = α∗(∑_{i=1}^{n} (−1)^{i−1} gi dy1 ∧ · · · ∧ dŷi ∧ · · · ∧ dyn)
= ∑_{i=1}^{n} (−1)^{i−1} (gi ◦ α) α∗(dy1 ∧ · · · ∧ dŷi ∧ · · · ∧ dyn)
= ∑_{i=1}^{n} (−1)^{i−1} (∑_{j=1}^{n} Dj αi fj) ∑_{k=1}^{n} det[∂(α1, · · · , α̂i, · · · , αn)/∂(x1, · · · , x̂k, · · · , xn)] dx1 ∧ · · · ∧ dx̂k ∧ · · · ∧ dxn.
So α∗(βn−1 G) = βn−1 F if and only if for any k ∈ {1, · · · , n},

fk = ∑_{i,j=1}^{n} (−1)^{k+i} Dj αi fj det[∂(α1, · · · , α̂i, · · · , αn)/∂(x1, · · · , x̂k, · · · , xn)]
= ∑_{j=1}^{n} fj ∑_{i=1}^{n} (−1)^{k+i} Dj αi det[∂(α1, · · · , α̂i, · · · , αn)/∂(x1, · · · , x̂k, · · · , xn)]
= ∑_{j=1}^{n} fj δkj det Dα
= fk det Dα.

Since F can be arbitrary, α∗(βn−1 G) = βn−1 F if and only if det Dα = 1.

(c)
Proof. α∗ (βn k) = α∗ (kdy1 ∧ · · · ∧ dyn ) = k ◦ α · α∗ (dy1 ∧ · · · ∧ dyn ) = h · detDα · dx1 ∧ · · · ∧ dxn and
βn h = hdx1 ∧ · · · ∧ dxn . So α∗ (βn k) = βn h for all h if and only if detDα = 1.
7.

Proof. If α is an orientation-preserving isometry of Rn, Exercise 6 implies α∗(α1 G) = α1 F, α∗(βn−1 G) = βn−1 F, and α∗(βn k) = βn h, where F, G, h and k are as defined in Exercise 6. Fix x ∈ A and let y = α(x). We need to show:

(1) α̃∗(div F)(y) = div(α̃∗(F))(y). Indeed, div(α̃∗(F))(y) = div G(y), and

α̃∗(div F)(y) = div F(x) = βn⁻¹ ◦ βn(div F)(x) = βn⁻¹ ◦ d(βn−1 F)(x) = βn⁻¹ ◦ d(α∗(βn−1 G))(x)
= βn⁻¹ ◦ α∗ ◦ d(βn−1 G)(x) = βn⁻¹ ◦ α∗ ◦ βn(div G)(x).

For any function g ∈ C∞(B),

βn⁻¹ ◦ α∗ ◦ βn(g) = βn⁻¹ ◦ α∗(g dy1 ∧ · · · ∧ dyn) = βn⁻¹((g ◦ α) · det Dα · dx1 ∧ · · · ∧ dxn) = g ◦ α.

So
α̃∗(div F)(y) = βn⁻¹ ◦ α∗ ◦ βn(div G)(x) = div G(α(x)) = div G(y) = div(α̃∗(F))(y).

(2) α̃∗(grad h) = grad ◦ α̃∗(h). Indeed,

α̃∗(grad h)(y) = α∗(grad h ◦ α⁻¹(y)) = α∗(grad h(x)) = (y; Dα(x) · (D1 h(x), · · · , Dn h(x))^tr) = (y; Dα(x) · (Dh(x))^tr),

and

grad ◦ α̃∗(h)(y) = grad(h ◦ α⁻¹)(y)
= (y; [D(h ◦ α⁻¹)(y)]^tr)
= (y; [Dh(α⁻¹(y)) · Dα⁻¹(y)]^tr)
= (y; [Dh(x) · (Dα(x))⁻¹]^tr).

Since Dα is orthogonal, we have

grad ◦ α̃∗(h)(y) = (y; [Dh(x) · (Dα(x))^tr]^tr) = (y; Dα(x) · (Dh(x))^tr) = α̃∗(grad h)(y).
(3) For n = 3, α̃∗(curl F) = curl(α̃∗ F). Indeed, curl(α̃∗ F)(y) = curl G(y), and

α̃∗(curl F)(y) = α∗(curl F(α⁻¹(y)))
= α∗(β2⁻¹ ◦ β2 ◦ curl F(x))
= α∗(β2⁻¹ ◦ d ◦ α1 F(x))
= α∗(β2⁻¹ ◦ d ◦ α∗ ◦ α1 G(x))
= α∗(β2⁻¹ ◦ α∗ ◦ d ◦ α1 G(x))
= α∗(β2⁻¹ ◦ α∗ ◦ β2 ◦ curl G(x)).

Let H be a vector field in B; we show α∗(β2⁻¹ ◦ α∗ ◦ β2(H)(x)) = H(α(x)) = H(y). Indeed,

α∗(β2⁻¹ ◦ α∗ ◦ β2(H)(x))
= α∗(β2⁻¹ ◦ α∗(∑_{i=1}^{n} (−1)^{i−1} hi dy1 ∧ · · · ∧ dŷi ∧ · · · ∧ dyn))
= α∗ ◦ β2⁻¹(∑_{i=1}^{n} ∑_{j=1}^{n} (−1)^{i−1} (hi ◦ α) det[∂(α1, · · · , α̂i, · · · , αn)/∂(x1, · · · , x̂j, · · · , xn)] dx1 ∧ · · · ∧ dx̂j ∧ · · · ∧ dxn)
= α∗ ◦ β2⁻¹(∑_{j=1}^{n} (∑_{i=1}^{n} (−1)^{i−1} (hi ◦ α) det[∂(α1, · · · , α̂i, · · · , αn)/∂(x1, · · · , x̂j, · · · , xn)]) dx1 ∧ · · · ∧ dx̂j ∧ · · · ∧ dxn)
= α∗(∑_{j=1}^{n} (∑_{i=1}^{n} (−1)^{i+j} (hi ◦ α) det[∂(α1, · · · , α̂i, · · · , αn)/∂(x1, · · · , x̂j, · · · , xn)]) ej).

Using the definition of α∗ and the fact that det Dα = 1, we have

α∗(∑_{j=1}^{n} (∑_{i=1}^{n} (−1)^{i+j} (hi ◦ α) det[∂(α1, · · · , α̂i, · · · , αn)/∂(x1, · · · , x̂j, · · · , xn)]) ej) = Dα(x) · c,

where c is the column vector whose j-th entry is ∑_{i=1}^{n} (−1)^{i+j} (hi ◦ α) det[∂(α1, · · · , α̂i, · · · , αn)/∂(x1, · · · , x̂j, · · · , xn)]. The k-th component of this product is

∑_{j=1}^{n} Dj αk ∑_{i=1}^{n} (−1)^{i+j} (hi ◦ α) det[∂(α1, · · · , α̂i, · · · , αn)/∂(x1, · · · , x̂j, · · · , xn)]
= ∑_{i=1}^{n} (hi ◦ α) ∑_{j=1}^{n} (−1)^{i+j} Dj αk det[∂(α1, · · · , α̂i, · · · , αn)/∂(x1, · · · , x̂j, · · · , xn)]
= (hk ◦ α) det Dα
= hk ◦ α.

Thus we have proved α∗(β2⁻¹ ◦ α∗ ◦ β2(H)(x)) = H(y). Replacing H with curl G, we have

α̃∗(curl F)(y) = curl G(y) = curl(α̃∗ F)(y).
33 Integrating Forms over Parametrized-Manifolds
1.

Proof. ∫_Yα (x2 dx2 ∧ dx3 + x1 x3 dx1 ∧ dx3) = ∫_A [v det(0, 1; 2u, 2v) + u(u² + v² + 1) det(1, 0; 2u, 2v)] = ∫_A [−2uv + 2uv(u² + v² + 1)] = 1.
2.

Proof.

x1 dx1 ∧ dx4 ∧ dx3 + 2x2 x3 dx1 ∧ dx2 ∧ dx3
∫Yα
= α∗ (−x1 dx1 ∧ dx3 ∧ dx4 + 2x2 x3 dx1 ∧ dx2 ∧ dx3 )
A
    
∫ 1 0 0 1 0 0
= −sdet 0 0 1  + 2utdet 0 1 0 ds ∧ du ∧ dt
A 0 4(2u − t) 2(t − 2u) 0 0 1

= 4s(2u − t) + 2ut
A
= 6.

3. (a)

Proof.

∫_Yα (1/||x||^m)(x1 dx2 ∧ dx3 − x2 dx1 ∧ dx3 + x3 dx1 ∧ dx2)
= ∫_A [1/||(u, v, (1 − u² − v²)^{1/2})||^m] [u det(∂(x2, x3)/∂(u, v)) − v det(∂(x1, x3)/∂(u, v)) + (1 − u² − v²)^{1/2} det(∂(x1, x2)/∂(u, v))]
= ∫_A [u det(0, 1; −u/√(1 − u² − v²), −v/√(1 − u² − v²)) − v det(1, 0; −u/√(1 − u² − v²), −v/√(1 − u² − v²)) + (1 − u² − v²)^{1/2} det(1, 0; 0, 1)]
= ∫_A [u²/√(1 − u² − v²) + v²/√(1 − u² − v²) + √(1 − u² − v²)]
= ∫_A 1/√(1 − u² − v²),

since ||(u, v, (1 − u² − v²)^{1/2})|| = 1 on A. Applying the change of variables u = r cos θ, v = r sin θ (0 ≤ r ≤ 1, 0 ≤ θ < 2π), we have

∫_A 1/√(1 − u² − v²) = ∫_{[0,1]×[0,2π)} [1/√(1 − r²)] det(cos θ, −r sin θ; sin θ, r cos θ) = ∫_{[0,1]×[0,2π)} r/√(1 − r²) = 2π.
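A numerical check that ∫_A 1/√(1 − u² − v²) over the unit disk equals 2π (polar midpoint rule; the integrand has an integrable singularity at the rim, so the tolerance below is deliberately loose):

```python
import math

# integral of r / sqrt(1 - r^2) dr over [0, 1] is exactly 1,
# so the disk integral in polar coordinates is 2*pi * 1 = 2*pi
Nr = 20_000
radial = 0.0
for i in range(Nr):
    r = (i + 0.5) / Nr
    radial += r / math.sqrt(1 - r * r) * (1.0 / Nr)
total = 2 * math.pi * radial

assert abs(total - 2 * math.pi) < 0.1
```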
(b)

Proof. −2π.
4.

Proof. Suppose η has the representation η = f dx1 ∧ dx2 ∧ · · · ∧ dxk, where dxi is the standard elementary 1-form depending on the standard basis e1, · · · , ek in Rk. Let a1, · · · , ak be another basis for Rk and define A = [a1, · · · , ak]. Then

η(x)((x; a1), · · · , (x; ak)) = f(x) det A.

If the frame (a1, · · · , ak) is orthonormal and right-handed, det A = 1. We consequently have

∫_A η = ∫_A f = ∫_{x∈A} η(x)((x; a1), · · · , (x; ak)).

34 Orientable Manifolds
1.

Proof. Let α : Uα → Vα and β : Uβ → Vβ be two coordinate patches and suppose W := Vα ∩ Vβ is non-empty. ∀p ∈ W, denote by x and y the points in α⁻¹(W) and β⁻¹(W) such that α(x) = p = β(y), respectively. Then

D(α⁻¹ ◦ β)(y) = Dα⁻¹(p) · Dβ(y) = [Dα(x)]⁻¹ · Dβ(y).

So det D(α⁻¹ ◦ β)(y) = [det Dα(x)]⁻¹ det Dβ(y) > 0. Since p is arbitrarily chosen, we conclude α and β overlap positively.

2.

Proof. Let α : Uα → Vα and β : Uβ → Vβ be two coordinate patches and suppose W := Vα ∩ Vβ is non-empty. ∀p ∈ W, denote by x and y the points in α⁻¹(W) and β⁻¹(W) such that α(x) = p = β(y), respectively. Then

D((α ◦ r)⁻¹ ◦ (β ◦ r))(r⁻¹(y)) = D(α ◦ r)⁻¹(p) · D(β ◦ r)(r⁻¹(y))
= D(r⁻¹ ◦ α⁻¹)(p) · D(β ◦ r)(r⁻¹(y))
= Dr⁻¹(x) · Dα⁻¹(p) · Dβ(y) · Dr(r⁻¹(y)).

Since r⁻¹ = r and det Dr = det Dr⁻¹ = −1, we have

det(D((α ◦ r)⁻¹ ◦ (β ◦ r))(r⁻¹(y))) = [det Dα(x)]⁻¹ det Dβ(y).

So if α and β overlap positively, so do α ◦ r and β ◦ r.

3.
Proof. Denote by n the unit normal field corresponding to the orientation of M . Then [n, T ] is right-handed,
i.e. det[n, T ] > 0.
4.
     
−2π sin(2πu) 0 n1 [ ]
Proof. ∂α =  2π cos(2πu) , ∂α = 0. We need to find n = n2 , such that det n, ∂α , ∂α > 0, ||n|| = 1,
∂u ∂v ∂u ∂v
0 1 n3
and n ⊥ span{ ∂α ,
∂u ∂v
∂α
}. Indeed, ⟨n, ∂α
∂v ⟩ = 0 implies n 3 = 0, ⟨n, ∂u ⟩= 0 implies −n1 sin(2πu)+n
∂α
 2 cos(2πu) =
n1 −2π sin(2πu) 0
0. Combined with the condition n21 +n22 +n23 = n21 +n22 = 1 and det n2 2π cos(2πu) 0 = (n1 cos(2πu)+
{ 0 0 1
n1 = cos(2πu)
n2 sin(2πu)) · 2π > 0, we can solve for n1 and n2 : . So the unit normal field corresponding
n2 = sin(2πu)

27
   
cos(2πu) 1
to this orientation of C is given by n =  sin(2πu) . In particular, for u = 0, α(0, v) = (1, 0, v) and n = 0.
0 0
So n points outwards.
By Example 5, the orientation of {(x, y, z) : x2 + y 2 = 1, z = 0} is counter-clockwise and the orientation
of {(x, y, z) : x2 + y 2 = 1, z = 0} is clockwise.
5.

Proof. We can regard M as a 2-manifold in R³ and apply Example 5. The unit normal vector of M as a 2-manifold is perpendicular to the plane where M lies and points towards us. Example 5 then gives the unit tangent vector field corresponding to the induced orientation of ∂M. Denote by n the unit normal field corresponding to ∂M. If α is a coordinate patch of M, [n, ∂α/∂x1] is right-handed. Since [∂α/∂x1, ∂α/∂x2] is right-handed and ∂α/∂x2 points into M, n points outwards from M.

Alternatively, we can apply Lemma 38.7.

6. (a)

Proof. The meaning of "well-defined" is that if x is covered by more than one coordinate patch of the same coordinate system, the definition of λ(x) is unchanged. More precisely, assume x is covered both by αi1 and αi2, as well as by βj1 and βj2; then det D(αi1⁻¹ ◦ βj1)(βj1⁻¹(x)) and det D(αi2⁻¹ ◦ βj2)(βj2⁻¹(x)) have the same sign. Indeed,

det D(αi1⁻¹ ◦ βj1)(βj1⁻¹(x))
= det D(αi1⁻¹ ◦ αi2 ◦ αi2⁻¹ ◦ βj2 ◦ βj2⁻¹ ◦ βj1)(βj1⁻¹(x))
= det D(αi1⁻¹ ◦ αi2)(αi2⁻¹(x)) · det D(αi2⁻¹ ◦ βj2)(βj2⁻¹(x)) · det D(βj2⁻¹ ◦ βj1)(βj1⁻¹(x)).

Since det D(αi1⁻¹ ◦ αi2) > 0 and det D(βj2⁻¹ ◦ βj1) > 0, we can conclude det D(αi1⁻¹ ◦ βj1)(βj1⁻¹(x)) and det D(αi2⁻¹ ◦ βj2)(βj2⁻¹(x)) have the same sign.
(b)

Proof. ∀x, y ∈ M. When x and y are sufficiently close, they can be covered by the same coordinate patches αi and βj. Since det D(αi⁻¹ ◦ βj) does not change sign where αi and βj overlap (recall αi⁻¹ ◦ βj is a diffeomorphism from an open subset of Rk to an open subset of Rk), we conclude λ is constant where αi and βj overlap. In particular, λ is continuous.

(c)
Proof. Since λ is continuous and λ is either 1 or -1, by the connectedness of M , λ must be a constant. More
precisely, as the proof of part (b) has shown, {x ∈ M : λ(x) = 1} and {x ∈ M : λ(x) = −1} are both open
sets. Since M is connected, exactly one of them is empty.

(d)
Proof. This is straightforward from part (a)-(c).

7.
Proof. By Example 4, the unit normal vector corresponding to the induced orientation of ∂M points outwards
from M . This is a special case of Lemma 38.7.
8.

Proof. We consider a general problem similar to that of Example 4: let M be an n-manifold in Rn, oriented naturally; what is the induced orientation of ∂M?

Suppose h : U → V is a coordinate patch on M belonging to the natural orientation of M, about the point p of ∂M. Then the map

h ◦ b(x) = h(x1, · · · , xn−1, 0)

gives the restricted coordinate patch on ∂M about p. The normal field N = (p; T) to ∂M corresponding to the induced orientation satisfies the condition that the frame

[(−1)^n T(p), ∂h(h⁻¹(p))/∂x1, · · · , ∂h(h⁻¹(p))/∂xn−1]

is right-handed. Since Dh is right-handed, (−1)^n T and (−1)^{n−1} ∂h/∂xn lie on the same side of the tangent plane of M at p. Since ∂h/∂xn points into M, T points outwards from M. Thus, the induced orientation of ∂M is characterized by the normal vector field to ∂M pointing outwards from M. This is essentially Lemma 38.7.

To determine whether or not a coordinate patch on ∂M belongs to the induced orientation of ∂M, suppose α is a coordinate patch on ∂M about p. Define A(p) = D(h⁻¹ ◦ α)(α⁻¹(p)). Then α belongs to the induced orientation if and only if sgn(det A(p)) = (−1)^n. Since Dα(α⁻¹(p)) = Dh(h⁻¹(p)) · A(p), we have

[(−1)^n T(p), Dα(α⁻¹(p))] = [(−1)^n T(p), ∂h(h⁻¹(p))/∂x1, · · · , ∂h(h⁻¹(p))/∂xn−1] · (1, 0; 0, A(p)).

Therefore, α belongs to the induced orientation if and only if [T(p), Dα(α⁻¹(p))] is right-handed.

Back to our particular problem, the unit normal vector to S^{n−1} at p is p/||p||. So α belongs to the orientation of S^{n−1} if and only if [p, Dα(α⁻¹(p))] is right-handed. If α(u) = p, we have

[p, Dα(α⁻¹(p))] =
( u1, 1, 0, · · · , 0, 0;
  u2, 0, 1, · · · , 0, 0;
  · · ·
  un−1, 0, 0, · · · , 0, 1;
  √(1 − ||u||²), −u1/√(1 − ||u||²), −u2/√(1 − ||u||²), · · · , −un−2/√(1 − ||u||²), −un−1/√(1 − ||u||²) ).

Plain calculation yields det[p, Dα(α⁻¹(p))] = (−1)^{n+1}/√(1 − ||u||²). So α belongs to the orientation of S^{n−1} if and only if n is odd. Similarly, we can show β belongs to the orientation of S^{n−1} if and only if n is even.

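The determinant formula det[p, Dα(α⁻¹(p))] = (−1)^{n+1}/√(1 − ||u||²) can be spot-checked numerically for small n; the cofactor-expansion determinant below is my own helper:

```python
import math
import random

def det(M):
    # cofactor expansion along the first row
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

random.seed(0)
for n in range(2, 6):
    u = [random.uniform(-0.3, 0.3) for _ in range(n - 1)]
    s = math.sqrt(1 - sum(t * t for t in u))      # last coordinate of p = alpha(u)
    p = u + [s]
    # columns of D alpha(u): standard basis vector on top, -u_i/s in the last row
    cols = [p] + [[1.0 if r == i else 0.0 for r in range(n - 1)] + [-u[i] / s]
                  for i in range(n - 1)]
    M = [[cols[c][r] for c in range(n)] for r in range(n)]
    assert abs(det(M) - (-1) ** (n + 1) / s) < 1e-9
```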
35 Integrating Forms over Oriented Manifolds


Notes. We view Theorem 17.1 (the substitution rule) in the light of integration of a form over an oriented manifold. The theorem states that, under certain conditions, ∫_{g((a,b))} f = ∫_{(a,b)} (f ◦ g)|g′|. Throughout this note, we assume a < b. We also assume that when dx or dy appears in the integration formula, the formula means integration of a differential form over a manifold; when dx or dy is missing, the formula means Riemann integration over a domain.

First, as a general principle, ∫_a^b f(x) dx is regarded as the integration of the 1-form f(x) dx over the naturally oriented manifold (a, b), and is therefore equal to ∫_{(a,b)} f by definition. Similarly, ∫_b^a f(x) dx is regarded as the integration of f(x) dx over the manifold (a, b) whose orientation is reverse to the natural orientation, and is therefore equal to −∫_a^b f(x) dx = −∫_{(a,b)} f.

Second, if g′ > 0, then g(a) < g(b) and ∫_{g(a)}^{g(b)} f(y) dy is the integration of the 1-form f(y) dy over the naturally oriented manifold (g(a), g(b)) with g a coordinate patch. So ∫_{g((a,b))} f = ∫_{g(a)}^{g(b)} f(y) dy = ∫_{(a,b)} g∗(f(y) dy) = ∫_{(a,b)} f(g(x)) g′(x) dx = ∫_{(a,b)} f(g) g′. If g′ < 0, then g(a) > g(b) and ∫_{g(a)}^{g(b)} f(y) dy is the integration of the 1-form f(y) dy over the manifold (g(b), g(a)) whose orientation is reverse to the natural orientation. So ∫_{g((a,b))} f = −∫_{g(a)}^{g(b)} f(y) dy = −∫_{(a,b)} g∗(f(y) dy) = −∫_{(a,b)} f(g(x)) g′(x) dx = ∫_{(a,b)} f(g)(−g′).

Combined, we can conclude ∫_{g((a,b))} f = ∫_{(a,b)} (f ◦ g)|g′|.

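The conclusion ∫_{g((a,b))} f = ∫_{(a,b)} (f ◦ g)|g′| can be illustrated numerically with a decreasing g; the particular f and g below are my own choices:

```python
import math

# substitution-rule check with a decreasing g: g(x) = exp(-x) on (a, b) = (0, 1),
# so g((0, 1)) = (1/e, 1); take f(y) = 1/y, whose integral over (1/e, 1) is 1
a, b = 0.0, 1.0
g = lambda x: math.exp(-x)
gp = lambda x: -math.exp(-x)                 # g' < 0 everywhere
f = lambda y: 1.0 / y

N = 200_000
lo, hi = g(b), g(a)                          # the interval (1/e, 1)
lhs = sum(f(lo + (hi - lo) * (k + 0.5) / N) for k in range(N)) * (hi - lo) / N
rhs = sum(f(g(a + (b - a) * (k + 0.5) / N)) * abs(gp(a + (b - a) * (k + 0.5) / N))
          for k in range(N)) * (b - a) / N

assert abs(lhs - 1.0) < 1e-6
assert abs(rhs - 1.0) < 1e-6
```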
3. (a)

Proof. By Exercise 8 of §34, α and β never belong to the same orientation of S^{n−1}. Since by Exercise 6 of §34 a connected orientable manifold has exactly two orientations, α and β belong to opposite orientations of S^{n−1}.

(b)

Proof. Assume β∗η = −α∗η; then by Theorem 35.2 and part (a),

∫_{S^{n−1}} η = ∫_{S^{n−1}∩{x∈Rn: xn>0}} η + ∫_{S^{n−1}∩{x∈Rn: xn<0}} η = ∫_A α∗η + (−1)∫_A β∗η = 2∫_A α∗η.

Now we show β∗η = −α∗η. Indeed, using our calculation in Exercise 8 of §34, we have

Dα(u) = ( 1, 0, · · · , 0, 0;
          0, 1, · · · , 0, 0;
          · · ·
          0, 0, · · · , 0, 1;
          −u1/√(1 − ||u||²), −u2/√(1 − ||u||²), · · · , −un−2/√(1 − ||u||²), −un−1/√(1 − ||u||²) )

and

Dβ(u) = ( 1, 0, · · · , 0, 0;
          0, 1, · · · , 0, 0;
          · · ·
          0, 0, · · · , 0, 1;
          u1/√(1 − ||u||²), u2/√(1 − ||u||²), · · · , un−2/√(1 − ||u||²), un−1/√(1 − ||u||²) ).

So for any x ∈ A,

α∗η(x) = ∑_{i=1}^{n} (−1)^{i−1} (fi ◦ α)(u) det Dα(1, · · · , î, · · · , n) du1 ∧ · · · ∧ dun−1
= {∑_{i=1}^{n−1} ui (−1)^{n−1−i} · (−ui/√(1 − ||u||²)) + (−1)^{n−1} √(1 − ||u||²)} du1 ∧ · · · ∧ dun−1
= −{∑_{i=1}^{n−1} ui (−1)^{n−1−i} · (ui/√(1 − ||u||²)) + (−1)^{n−1} · (−1) · √(1 − ||u||²)} du1 ∧ · · · ∧ dun−1
= −∑_{i=1}^{n} (−1)^{i−1} (fi ◦ β)(u) det Dβ(1, · · · , î, · · · , n) du1 ∧ · · · ∧ dun−1
= −β∗η(x).

(c)

Proof. By our calculation in part (b), we have

∫_A α∗η = ∫_A {∑_{i=1}^{n−1} (−1)^{i−1} ui (−1)^{n−i} ui/√(1 − ||u||²) + (−1)^{n−1} √(1 − ||u||²)}
= (−1)^{n−1} ∫_A {(∑_{i=1}^{n−1} ui²)/√(1 − ||u||²) + √(1 − ||u||²)}
= ± ∫_A 1/√(1 − ||u||²) ≠ 0.
36 A Geometric Interpretation of Forms and Integrals
1.

Proof. Define bi = [D(α−1 ◦ β)(y)]−1 ai = D(β −1 ◦ α)(x)ai . Then

β∗ (y; bi ) = (p ; Dβ(y)bi )
= (p ; Dβ(y)[D(α−1 ◦ β)(y)]−1 ai )
= (p ; Dβ(y)D(β −1 ◦ α)(x)ai )
= (p ; Dα(x)ai )
= α∗ (x; ai ).

Moreover, [b1 , · · · , bk ] = D(β −1 ◦ α)(x)[a1 , · · · , ak ]. Since detD(β −1 ◦ α)(x) > 0, [b1 , · · · , bk ] is right-handed
if and only if [a1 , · · · , ak ] is right-handed.

37 The Generalized Stokes’ Theorem


2.

Proof. Assume η = dω for some form ω. Since ∂S^{n−1} = ∅, Stokes' theorem implies ∫_{S^{n−1}} η = ∫_{S^{n−1}} dω = ∫_{∂S^{n−1}} ω = 0. Contradiction.
3.

Proof. Apply Stokes’ Theorem to ω = P dx + Qdy.


4. (a)

Proof. Dα(u, v) = (1, 0; −2u/√(1 − u² − v²), −2v/√(1 − u² − v²); 0, 1). By Lemma 38.3, the normal vector n corresponding to the orientation of M satisfies n = c/||c||, where

c = ( det Dα(u, v)(2, 3), −det Dα(u, v)(1, 3), det Dα(u, v)(1, 2) ) = ( −2u/√(1 − u² − v²), −1, −2v/√(1 − u² − v²) ).

Plain calculation shows ||c|| = √((1 + 3u² + 3v²)/(1 − u² − v²)), so

n = ( −2u/√(1 + 3u² + 3v²), −√((1 − u² − v²)/(1 + 3u² + 3v²)), −2v/√(1 + 3u² + 3v²) ).

In particular, at the point α(0, 0) = (0, 2, 0), n = (0, −1, 0), which points inwards into {(x1, x2, x3) : 4(x1)² + (x2)² + 4(x3)² ≤ 4, x2 ≥ 0}. By Example 5 of §34, the tangent vector corresponding to the induced orientation of ∂M is then easy to determine.

(b)

Proof. According to the result of part (a), we can choose the following coordinate patch, which belongs to the induced orientation of ∂M: β(θ) = (cos θ, 0, sin θ) (0 ≤ θ < 2π). By Theorem 35.2, we have
∫_{∂M} (x2 dx1 + 3x1 dx3) = ∫_{[0,2π)} 3 cos θ · cos θ = 3π.

(c)

Proof. dω = −dx1 ∧ dx2 + 3dx1 ∧ dx3. So

∫_M dω = ∫_M (−dx1 ∧ dx2 + 3dx1 ∧ dx3)
= ∫_{{(u,v): u²+v²<1}} [−det Dα(u, v)(1, 2) + 3 det Dα(u, v)(1, 3)]
= ∫_{{(u,v): u²+v²<1}} [2v/√(1 − u² − v²) + 3]
= ∫_{{(r,θ): 0≤r<1, 0≤θ<2π}} [2r sin θ/√(1 − r²) + 3] r
= 3π.

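Both sides of Stokes' theorem for this exercise can be evaluated numerically, using the boundary parametrization from part (b) and the chart integrand from part (c); each should come out to 3π:

```python
import math

# boundary side: omega = x2 dx1 + 3 x1 dx3 pulled back by beta(t) = (cos t, 0, sin t);
# x2 = 0 kills the first term, leaving 3 cos(t) * cos(t) dt
N = 100_000
line = sum(3 * math.cos(2 * math.pi * (k + 0.5) / N) ** 2
           for k in range(N)) * (2 * math.pi / N)

# surface side: d(omega) over M in the (u, v) chart is the integral of
# 2 v / sqrt(1 - u^2 - v^2) + 3 over u^2 + v^2 < 1 (polar midpoint rule;
# the odd 2v-term cancels by symmetry, the constant 3 contributes 3*pi)
Nr, Nt = 2000, 400
surf = 0.0
for i in range(Nr):
    r = (i + 0.5) / Nr
    for j in range(Nt):
        th = 2 * math.pi * (j + 0.5) / Nt
        vv = r * math.sin(th)
        surf += (2 * vv / math.sqrt(1 - r * r) + 3) * r * (1.0 / Nr) * (2 * math.pi / Nt)

assert abs(line - 3 * math.pi) < 1e-8
assert abs(surf - 3 * math.pi) < 1e-6
```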
5. (a)

Proof. By Stokes' theorem, we have

∫_M dω = ∫_{∂M} ω = ∫_{S²(d)} ω + ∫_{−S²(c)} ω = ∫_{S²(d)} ω − ∫_{S²(c)} ω = b/d − b/c.

(b)

Proof. If dω = 0, we conclude from part (a) that b = 0. This implies ∫_{S²(r)} ω = a. To be continued ...

(c)

Proof. If ω = dη, by part (b) we conclude b = 0. Moreover, Stokes' theorem implies a = ∫_{S²(r)} ω = ∫_{S²(r)} dη = 0.
6.

Proof. ∫_M d(ω ∧ η) = ∫_{∂M} ω ∧ η = 0. Since d(ω ∧ η) = dω ∧ η + (−1)^k ω ∧ dη, we conclude ∫_M ω ∧ dη = (−1)^{k+1} ∫_M dω ∧ η. So a = (−1)^{k+1}.

38 Applications to Vector Analysis
1.

Proof. Let M = {x ∈ R³ : c ≤ ||x|| ≤ d}, oriented with the natural orientation. By the divergence theorem,

∫_M (div G) dV = ∫_{∂M} ⟨G, N⟩ dV,

where N is the unit normal vector field to ∂M that points outwards from M. For the coordinate patch for M

x1 = r sin θ cos ϕ, x2 = r sin θ sin ϕ, x3 = r cos θ  (c ≤ r ≤ d, 0 ≤ θ < π, 0 ≤ ϕ < 2π),

we have

det ∂(x1, x2, x3)/∂(r, θ, ϕ) = det( sin θ cos ϕ, r cos θ cos ϕ, −r sin θ sin ϕ; sin θ sin ϕ, r cos θ sin ϕ, r sin θ cos ϕ; cos θ, −r sin θ, 0 ) = r² sin θ.

So ∫_M (div G) dV = ∫ (div G) · r² sin θ = 0. Meanwhile, ∫_{∂M} ⟨G, N⟩ dV = ∫_{S²(d)} ⟨G, Nr⟩ dV − ∫_{S²(c)} ⟨G, Nr⟩ dV. So we conclude ∫_{S²(d)} ⟨G, Nr⟩ dV = ∫_{S²(c)} ⟨G, Nr⟩ dV.

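The Jacobian determinant r² sin θ used above can be confirmed symbolically:

```python
import sympy as sp

r, th, ph = sp.symbols('r theta phi', positive=True)
# spherical coordinate patch (x1, x2, x3) as a function of (r, theta, phi)
X = sp.Matrix([r * sp.sin(th) * sp.cos(ph),
               r * sp.sin(th) * sp.sin(ph),
               r * sp.cos(th)])
Jac = X.jacobian([r, th, ph])
assert sp.simplify(Jac.det() - r**2 * sp.sin(th)) == 0
```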
2. (a)

Proof. Let M3 = B^n(ε). Then for ε small enough, M3 is contained in both M1 − ∂M1 and M2 − ∂M2. Applying the divergence theorem, we have (i = 1, 2)

0 = ∫_{Mi − Int M3} (div G) dV = ∫_{∂Mi} ⟨G, Ni⟩ dV − ∫_{∂M3} ⟨G, N3⟩ dV,

where N3 is the unit outward normal vector field to ∂M3. This shows that regardless of whether i = 1 or i = 2, ∫_{∂Mi} ⟨G, Ni⟩ dV equals the constant ∫_{∂M3} ⟨G, N3⟩ dV.

(b)

Proof. We have shown that if the origin is contained in M − ∂M, the integral ∫_{∂M} ⟨G, N⟩ dV is a constant. If the origin is not contained in M − ∂M, by the compactness of M we conclude the origin is in the exterior of M. Applying the divergence theorem then implies ∫_{∂M} ⟨G, N⟩ dV = 0. So this integral has only two possible values.
3.

Proof. Four possible values. Apply the divergence theorem (as in Exercise 2) and carry out the computation in the following four cases: 1) both p and q are contained in M − ∂M; 2) p is contained in M − ∂M but q is not; 3) q is contained in M − ∂M but p is not; 4) neither p nor q is contained in M − ∂M.
4.

Proof. Follow the hint and apply Lemma 38.5.

39 The Poincaré Lemma
2. (a)

Proof. Let ω ∈ Ωk (B) with dω = 0. Then g ∗ ω ∈ Ωk (A) and d(g ∗ ω) = g ∗ (dω) = 0. Since A is homologically
trivial in dimension k, there exists ω1 ∈ Ωk (A) such that dω1 = g ∗ ω. Then ω2 = (g −1 )∗ (ω1 ) ∈ Ωk (B) and
dω2 = d(g −1 )∗ (ω1 ) = (g −1 )∗ (dω1 ) = (g −1 )∗ g ∗ ω = (g ◦ g −1 )∗ ω = ω. Since ω is arbitrary, we conclude B is
homologically trivial in dimension k.

(b)

Proof. Let A = [ 12 , 1] × [0, π] and B = {(x, y) : 12 ≤ x2 + y 2 ≤ 1, x, y ≥ 0}. Define g : A → B as
g(r, θ) = (r cos θ, r sin θ). By the Poincaré lemma, A is homologically trivial in every dimension. By part (a)
of this exercise problem, B is homologically trivial in every dimension. But B is clearly not star-convex.
3.

Proof. Let p ∈ A and define X = {x ∈ A : x can be joined to p by a broken-line path in A}. Since Rn is locally convex, it is easy to see that X and A − X are both open subsets of A.

(Sufficiency) Assume A is connected. Then X = A. For any closed 0-form f and any x ∈ A, denote by γ a broken-line path that joins x and p. Since df = 0, the Newton-Leibniz formula gives 0 = ∫_γ df = f(x) − f(p). So f is constant on A. Hence A is homologically trivial in dimension 0.

(Necessity) Assume A is not connected. Then A can be decomposed into the disjoint union of at least two non-empty open subsets, say A1 and A2. Define f = 1 on A1 and f = 0 on A2. Then f is a closed 0-form that is not constant, so A is not homologically trivial in dimension 0.

4.

Proof. Let η = ∑_[I] f_I dx_I + ∑_[J] g_J dx_J ∧ dt, where I denotes an ascending (k+1)-tuple and J denotes an ascending k-tuple, both from the set {1, · · · , n}. Then Pη = (−1)^k ∑_[J] (Lg_J) dx_J, where (Lg_J)(x) = ∫_{t=0}^{1} g_J(x, t) dt, so that

(Pη)(x)((x; v1), · · · , (x; vk)) = (−1)^k ∑_[J] (Lg_J) det[v1 · · · vk]_J.

On the other hand, writing α_t(x) = (x, t) and y = α_t(x),

w_i = Dα_t · v_i = [I_{n×n} ; 0_{1×n}] v_i = (v_i, 0).

So

η(y)((y; w1), · · · , (y; wk), (y; e_{n+1}))
= ∑_[I] f_I dx_I((y; (v1, 0)), · · · , (y; (vk, 0)), (y; (0_{n×1}, 1)))
+ ∑_[J] (g_J dx_J ∧ dt)((y; (v1, 0)), · · · , (y; (vk, 0)), (y; (0_{n×1}, 1)))
= 0 + ∑_[J] g_J det[v1 · · · vk]_J
= ∑_[J] g_J det[v1 · · · vk]_J,

since each dx_I, with I drawn from {1, · · · , n}, annihilates the vector (0_{n×1}, 1). Therefore

(−1)^k ∫_{t=0}^{1} η(y)((y; w1), · · · , (y; wk), (y; e_{n+1})) dt
= (−1)^k ∑_[J] ∫_{t=0}^{1} g_J det[v1 · · · vk]_J dt
= (−1)^k ∑_[J] (Lg_J) det[v1 · · · vk]_J
= (Pη)(x)((x; v1), · · · , (x; vk)).
40 The deRham Groups of Punctured Euclidean Space
1. (a)
Proof. This is already proved on page 334 of the book, esp. in the last paragraph.
(b)

Proof. To see T̃ is well-defined, suppose v + W = v′ + W. Then v − v′ ∈ W and T(v) − T(v′) = T(v − v′) ∈ W′ by the linearity of T and the fact that T carries W into W′. Therefore T(v) + W′ = T(v′) + W′, which shows T̃ is well-defined. The linearity of T̃ follows easily from that of T.
2.

Proof. ∀v ∈ V, we can write v uniquely as v = ∑_{i=1}^{n} c_i a_i for some coefficients c1, · · · , cn. Since a1, · · · , ak ∈ W, we conclude v + W = ∑_{i=k+1}^{n} c_i (a_i + W). So the cosets a_{k+1} + W, · · · , a_n + W span V/W. To see a_{k+1} + W, · · · , a_n + W are linearly independent, assume ∑_{i=k+1}^{n} c_i (a_i + W) = 0 for some coefficients c_{k+1}, · · · , c_n. Then ∑_{i=k+1}^{n} c_i a_i ∈ W, so there exist d1, · · · , dk such that ∑_{i=k+1}^{n} c_i a_i = ∑_{j=1}^{k} d_j a_j. By the linear independence of a1, · · · , an, we conclude c_{k+1} = · · · = c_n = 0, i.e. the cosets a_{k+1} + W, · · · , a_n + W are linearly independent.
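A small numeric illustration of this argument (numpy sketch; the random basis and the dimensions n = 5, k = 2 are my own choices): expanding v in the basis a1, · · · , an and discarding the first k coefficients yields well-defined coordinates for the coset v + W with respect to a_{k+1} + W, · · · , a_n + W:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 5, 2
A = rng.normal(size=(n, n))   # columns a1, ..., an: generically a basis of R^n
W = A[:, :k]                  # W = span{a1, ..., ak}

def coset_coords(v):
    """Coordinates of v + W with respect to the cosets a_{k+1}+W, ..., a_n+W."""
    c = np.linalg.solve(A, v)  # v = sum_i c_i a_i
    return c[k:]               # the first k coefficients are absorbed into W

v = rng.normal(size=n)
w = W @ rng.normal(size=k)     # an arbitrary element of W

# v and v + w represent the same coset, so their coordinates agree:
assert np.allclose(coset_coords(v), coset_coords(v + w))
# and dim(V/W) = n - k:
assert coset_coords(v).shape == (n - k,)
```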
4. (a)

Proof. dim H^i(U) = dim H^i(V) = 0 for all i.

(b)

Proof. dim H^i(U) = dim H^i(V) = 0 for all i.

(c)

Proof. dim H^0(U) = dim H^0(V) = 0.
5.

Proof. Step 1. We prove the theorem for n = 1. Without loss of generality, we assume p < q. Let A = R^1 − p − q; write A = A0 ∪ A1 ∪ A2, where A0 = (−∞, p), A1 = (p, q), and A2 = (q, ∞). If ω is a closed k-form in A, with k > 0, then ω|A0, ω|A1 and ω|A2 are closed. Since A0, A1, A2 are all star-convex, there are (k−1)-forms η0, η1 and η2 on A0, A1 and A2, respectively, such that dηi = ω|Ai for i = 0, 1, 2. Define η = ηi on Ai, i = 0, 1, 2. Then η is well-defined and of class C∞, and dη = ω.
Now let f0 be the 0-form in A defined by setting f0(x) = 0 for x ∈ A1 ∪ A2 and f0(x) = 1 for x ∈ A0; let f1 be the 0-form in A defined by setting f1(x) = 0 for x ∈ A0 ∪ A2 and f1(x) = 1 for x ∈ A1. Then f0 and f1 are closed forms, and they are not exact. We show the cosets {f0} and {f1} form a basis for H^0(A).
Given a closed 0-form f in A, the forms f|A0, f|A1, and f|A2 are closed and hence exact. Then there are constants c0, c1, and c2 such that f|Ai = ci, i = 0, 1, 2. It follows that

f(x) = (c0 − c2) f0(x) + (c1 − c2) f1(x) + c2

for x ∈ A. Then {f} = (c0 − c2){f0} + (c1 − c2){f1}, as desired.
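The decomposition f = (c0 − c2) f0 + (c1 − c2) f1 + c2 is easy to check numerically (Python sketch; the sample values p = 0, q = 1 and the constants ci are my own):

```python
p, q = 0.0, 1.0

def f0(x):  # indicator of A0 = (-inf, p)
    return 1.0 if x < p else 0.0

def f1(x):  # indicator of A1 = (p, q)
    return 1.0 if p < x < q else 0.0

c0, c1, c2 = 3.0, -1.0, 5.0

def f(x):   # an arbitrary locally constant function on A = R - {p, q}
    if x < p:
        return c0
    if x < q:
        return c1
    return c2

# one sample point in each of A0, A1, A2
for x in (-2.0, 0.5, 7.0):
    assert f(x) == (c0 - c2) * f0(x) + (c1 - c2) * f1(x) + c2
```

Modulo the constant c2, which is exact, {f} is a combination of {f0} and {f1}, so the two cosets span H^0(A).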
Step 2. Similar to the proof of Theorem 40.4, Step 2, we can show the following: if B is open in R^n, then B × R is open in R^{n+1}, and for all k, dim H^k(B) = dim H^k(B × R).
Step 3. Let n ≥ 1. We assume the theorem true for n and prove it for n + 1. Write S = {p, q} ⊂ R^n. We first prove the following

Lemma 40.1. R^{n+1} − S × H^1 and R^{n+1} − S × L^1 are homologically trivial in all dimensions.

Proof. Let U1 = Rn+1 − {p} × H1 , V1 = Rn+1 − {q} × H1 , A1 = U1 ∩ V1 = Rn+1 − S × H1 , and X1 = U1 ∪ V1 =


Rn+1 . Since U1 and V1 are star-convex, U1 and V1 are homologically trivial in all dimensions. By Theorem
40.3, for k ≥ 0, H k (A1 ) = H k+1 (X1 ) = H k+1 (Rn+1 ) = 0. So Rn+1 − S × H1 is homologically trivial in all
dimensions. Similarly, Rn+1 − S × L1 is homologically trivial in all dimensions.

Now, we define U = R^{n+1} − S × H^1, V = R^{n+1} − S × L^1, and A = U ∩ V = R^{n+1} − S × R^1. Then X := R^{n+1} − p − q = U ∪ V. We have shown U and V are homologically trivial. It follows from Theorem 40.3 that H^0(X) is trivial, and that

dim H^{k+1}(X) = dim H^k(A) for k ≥ 0.

Since A = (R^n − S) × R, Step 2 tells us that H^k(A) has the same dimension as the k-th deRham group of R^n with two points deleted, and the induction hypothesis implies that the latter has dimension 0 if k ≠ n − 1, and dimension 2 if k = n − 1. The theorem follows.
6.

Proof. The theorem of Exercise 5 can be restated in terms of forms as follows: Let A = R^n − p − q with n ≥ 1.
(a) If k ≠ n − 1, then every closed k-form on A is exact on A.
(b) There are two closed (n − 1)-forms η1 and η2 on A such that η1, η2, and η1 − η2 are not exact; and if η is any closed (n − 1)-form on A, then there exist unique scalars c1 and c2 such that η − c1 η1 − c2 η2 is exact.
41 Differentiable Manifolds and Riemannian Manifolds
References

[1] J. Munkres. Analysis on manifolds, Westview Press, 1997.
[2] P. Lax. Linear algebra and its applications, 2nd Edition, Wiley-Interscience, 2007.