Revised Course Notes
KO HONDA
When are two topological spaces equivalent? The following gives one notion:
Definition 1.5. A map f : X → Y is a homeomorphism if there exists an inverse f^{−1} : Y → X
for which f and f^{−1} are both continuous.
HW: Show that the two incarnations of S 1 from Examples 2A and 2B are homeomorphic.
Zen of mathematics: Any world (“category”) in mathematics consists of spaces (“objects”) and
maps between spaces (“morphisms”).
Examples:
(1) (Topological category) Topological spaces and continuous maps.
(2) (Groups) Groups and homomorphisms.
(3) (Linear category) Vector spaces and linear transformations.
1.2. Review of linear algebra.
Definition 1.6. A vector space V over a field k = R or C is a set V equipped with two operations
V × V → V (called addition) and k × V → V (called scalar multiplication) s.t.
(1) V is an abelian group under addition.
(a) (Identity) There is a zero element 0 s.t. 0 + v = v + 0 = v.
(b) (Inverse) Given v ∈ V there exists an element w ∈ V s.t. v + w = w + v = 0.
(c) (Associativity) (v1 + v2 ) + v3 = v1 + (v2 + v3 ).
(d) (Commutativity) v + w = w + v.
(2) For all a, b ∈ k and v, w ∈ V :
(a) 1v = v.
(b) (ab)v = a(bv).
(c) a(v + w) = av + aw.
(d) (a + b)v = av + bv.
Figure out what “smallest” and “largest” mean.
DIFFERENTIAL GEOMETRY COURSE NOTES 3
Note: Keep in mind the Zen of mathematics — we have defined objects (vector spaces), and now
we need to define maps between objects.
Definition 1.7. A linear map φ : V → W between vector spaces over k satisfies φ(v1 + v2 ) =
φ(v1 ) + φ(v2 ) (v1 , v2 ∈ V ) and φ(cv) = c · φ(v) (c ∈ k and v ∈ V ).
Now, when are two vector spaces equivalent in the linear category?
If V and W are finite-dimensional,? then we may take bases? {v1 , . . . , vn } and {w1 , . . . , wm } and
represent a linear map φ : V → W as an m × n matrix A. φ is then invertible if and only if m = n
and det(A) ≠ 0.?
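As a quick sanity check of the invertibility criterion, here is a small numerical sketch in Python; the matrix A and the helper names are made up for illustration:

```python
# A linear map phi : V -> W between 2-dimensional spaces, written in chosen bases
# as a 2x2 matrix A; phi is invertible iff det(A) != 0.

def det2(A):
    """Determinant ad - bc of a 2x2 matrix given as a list of rows."""
    (a, b), (c, d) = A
    return a * d - b * c

def inverse2(A):
    """Inverse of a 2x2 matrix; only defined when det2(A) != 0."""
    dA = det2(A)
    if dA == 0:
        raise ValueError("matrix is not invertible")
    (a, b), (c, d) = A
    return [[d / dA, -b / dA], [-c / dA, a / dA]]

def mul2(A, B):
    """Product of two 2x2 matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[2.0, 1.0], [1.0, 1.0]]   # det(A) = 1, so phi is invertible
I = mul2(A, inverse2(A))       # numerically the identity matrix
```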
Examples of vector spaces: Let φ : V → W be a linear map of vector spaces.
(1) The kernel ker φ = {v ∈ V | φ(v) = 0} is a vector subspace of V .
(2) The image im φ = {φ(v) | v ∈ V } is a vector subspace of W .
(3) Let V ⊂ W be a subspace. Then the quotient W/V = {w + V | w ∈ W } can be given the
structure of a vector space. Here w + V = {w + v | v ∈ V }.
(4) The cokernel coker φ = W/ im φ.
? means you should look up its definition.
2. REVIEW OF DIFFERENTIATION
2.1. Definitions. Let f : Rm → Rn be a map. The discussion carries over to f : U → V for open
sets U ⊂ Rm and V ⊂ Rn .
Definition 2.1. The map f : Rm → Rn is differentiable at a point x ∈ Rm if there exists a linear
map L : Rm → Rn satisfying
(1) lim_{h→0} |f (x + h) − f (x) − L(h)| / |h| = 0,
where h ∈ Rm − {0}. L is called the derivative of f at x and is usually written as df (x).
HW: Show that if f : Rm → Rn is differentiable at x ∈ Rm , then there is a unique L which
satisfies Equation (1).
Fact 2.2. If f is differentiable at x, then df (x) : Rm → Rn is a linear map which satisfies
(2) df (x)(v) = lim_{t→0} (f (x + tv) − f (x)) / t.
We say that the directional derivative of f at x in the direction of v exists if the right-hand side
of Equation (2) exists. What Fact 2.2 says is that if f is differentiable at x, then the directional
derivative of f at x in the direction of v exists and is given by df (x)(v).
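Equation (2) can be checked numerically; the following sketch uses the made-up function f (x, y) = x2 + y 2 , whose derivative at (x, y) is the linear map (v1 , v2 ) ↦ 2x v1 + 2y v2 :

```python
# Numerical check of df(x)(v) = lim_{t->0} (f(x + t v) - f(x))/t
# for the made-up function f(x, y) = x^2 + y^2.

def f(p):
    x, y = p
    return x * x + y * y

def directional_quotient(f, x, v, t):
    """Difference quotient (f(x + t v) - f(x)) / t."""
    xt = [xi + t * vi for xi, vi in zip(x, v)]
    return (f(xt) - f(x)) / t

x, v = [1.0, 2.0], [3.0, -1.0]
exact = 2 * x[0] * v[0] + 2 * x[1] * v[1]        # df(x)(v) = 6 - 4 = 2
approx = directional_quotient(f, x, v, 1e-6)     # close to 2 for small t
```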
2.2. Partial derivatives. Let ej be the usual basis element (0, . . . , 1, . . . , 0), where 1 is in the jth
position. Then df (x)(ej ) is usually called the partial derivative and is written as ∂f /∂xj (x) or ∂j f (x).
More explicitly, if we write f = (f1 , . . . , fn )T (here T means transpose), where fi : Rm → R,
then
∂f /∂xj (x) = ( ∂f1 /∂xj (x), . . . , ∂fn /∂xj (x) )T ,
and df (x) can be written in matrix form as the n × m matrix whose (i, j) entry is ∂fi /∂xj (x):
df (x) =
[ ∂f1 /∂x1 (x) · · · ∂f1 /∂xm (x) ]
[      ...                ...     ]
[ ∂fn /∂x1 (x) · · · ∂fn /∂xm (x) ]
The matrix is usually called the Jacobian matrix.
Facts:
(1) If ∂i (∂j f ) and ∂j (∂i f ) are continuous on an open set containing x, then ∂i (∂j f )(x) = ∂j (∂i f )(x).
(2) df (x) exists if all ∂fi /∂xj (y), i = 1, . . . , n, j = 1, . . . , m, exist on an open set containing x and
each ∂fi /∂xj is continuous at x.
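The Jacobian matrix can be approximated by finite differences; here is a small sketch for a made-up map f : R2 → R2 , compared against the hand-computed partials:

```python
# Central-difference approximation of the Jacobian matrix (dfi/dxj) for a made-up
# map f : R^2 -> R^2, compared against the analytic partial derivatives.
import math

def f(p):
    x, y = p
    return [x * x * y, math.sin(x) + y]

def jacobian(f, p, h=1e-6):
    """n x m matrix of central differences (f(p + h e_j) - f(p - h e_j)) / (2h)."""
    n, m = len(f(p)), len(p)
    J = [[0.0] * m for _ in range(n)]
    for j in range(m):
        pp, pm = list(p), list(p)
        pp[j] += h
        pm[j] -= h
        fp, fm = f(pp), f(pm)
        for i in range(n):
            J[i][j] = (fp[i] - fm[i]) / (2 * h)
    return J

J = jacobian(f, [1.0, 2.0])
# analytic Jacobian at (1, 2): [[2xy, x^2], [cos x, 1]] = [[4, 1], [cos 1, 1]]
```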
3. MANIFOLDS
3.1. Topological manifolds.
Definition 3.1. A topological manifold of dimension n is a pair consisting of a topological space
X and a collection A = {φα : Uα → Rn }α∈I of maps (called an atlas of X) such that:
(1) Uα is an open set of X and ∪α∈I Uα = X,
(2) φα is a homeomorphism onto an open subset φα (Uα ) of Rn .
(3) (Technical condition 1) X is Hausdorff.
(4) (Technical condition 2) X is second countable.
Each φα : Uα → Rn is called a coordinate chart.
Definition 3.2. A topological space X is Hausdorff if for any x ≠ y ∈ X there exist open sets Ux
and Uy containing x, y respectively such that Ux ∩ Uy = ∅.
Definition 3.3. A topological space (X, T ) is second countable if there exists a countable subcollection
T0 of T such that any open set U ∈ T is a union (not necessarily finite) of open sets in
T0 .
HW: Show that S 1 from Example 2A or 2B from Day 1 (already shown to be homeomorphic from
an earlier exercise) is a topological manifold.
Non-example. A topological manifold which satisfies all the axioms except for the Hausdorff
condition: Take R × {0, 1}/ ∼, where (x, 0) ∼ (x, 1) for all x 6= 0, with the quotient topology.
For any open sets Ui , i = 0, 1, containing (0, i), U0 ∩ U1 ≠ ∅.
Non-example. The long line is a topological manifold which satisfies all the axioms except for
the second countable condition. It is ω1 × [0, 1) with the smallest element deleted, where ω1 is the
“first uncountable ordinal” (in particular it is uncountable and has an ordering), and the topology
is the order topology coming from the lexicographic ordering on ω1 × [0, 1). For more details see
Wikipedia or Munkres, “Topology”.
HW: Give an example of a Hausdorff, second countable topological space X which is not a topo-
logical manifold. (You may have trouble proving that it is not a topological manifold, though. You
may also want to find several different types of examples.)
Observe that in the land of topological manifolds, a square and a circle are the same, i.e., they are
homeomorphic! That is not the world we will explore: we seek a category in which a square and a
circle are not the same, and for that we need derivatives!
3.2. Differentiable manifolds.
Definition 3.4. A smooth manifold is a topological manifold (X, A = {φα : Uα → Rn }) satisfying
the following: For every Uα ∩ Uβ ≠ ∅,
φβ ◦ φα^{−1} : φα (Uα ∩ Uβ ) → φβ (Uα ∩ Uβ )
is a smooth map. The maps φβ ◦ φα^{−1} are called transition maps.
Note: In the rest of the course when we refer to a “manifold”, we mean a “smooth manifold”,
unless stated otherwise.
Strictly speaking, this should say “can be given the structure of a smooth manifold”. There may be more than one
choice and we have not yet discussed when two manifolds are the same.
Suppose C∞_{A1}(M ) = C∞_{A2}(M ). We use the existence of bump functions, i.e., smooth functions
h : R → [0, 1] such that h(x) = 1 on [a, b] and h(x) = 0 on R − [c, d], where c < a < b < d. (The
construction of bump functions is an exercise.)
In order to show that the transition maps
ψβ ◦ φα^{−1} : φα (Uα ∩ Vβ ) → ψβ (Uα ∩ Vβ ) ⊂ Rn
are smooth, we postcompose with the projection πj : Rn → R to the jth R factor and show that
πj ◦ ψβ ◦ φα^{−1} is smooth. Given x ∈ ψβ (Uα ∩ Vβ ), let Bε (x) ⊂ B2ε (x) ⊂ ψβ (Uα ∩ Vβ ) be small
open balls around x. Using the bump functions we can construct a function f on Uα ∩ Vβ such that
f ◦ ψβ^{−1} equals πj on Bε (x) and 0 outside B2ε (x); f can be extended to the rest of M by setting
f = 0. f is clearly in C∞_{A2}(M ). Since C∞_{A1}(M ) = C∞_{A2}(M ), f ◦ φα^{−1} = (f ◦ ψβ^{−1}) ◦ (ψβ ◦ φα^{−1}) is
smooth. This is sufficient to show the smoothness of πj ◦ ψβ ◦ φα^{−1} and hence of ψβ ◦ φα^{−1}.
Upshot: Smooth maps between smooth manifolds can be “reduced” to smooth maps from Rn to
Rm .
6.2. Illustrative example. Let f : R2 → R, (x, y) ↦ x2 + y 2 . We would like to analyze the level
sets f −1 (a), where a > 0. To that end, we consider
F : R2 → R2 , (x, y) ↦ (f (x, y), y).
Let us use coordinates (x, y) for the domain R2 and coordinates (u, v) for the range R2 . We
compute:
dF (x, y) =
[ 2x 2y ]
[ 0   1 ].
Let us restrict our attention to the portion x > 0. Since det(dF (x, y)) = 2x > 0, the inverse
function theorem applies and there is a local diffeomorphism between a neighborhood U(x,y) ⊂ R2
of a point (x, y) on the level set f (x, y) = a and a neighborhood VF (x,y) of F (x, y) on the line
u = a.
In particular, f −1 (a)∩U(x,y) is mapped to {u = a}∩VF (x,y) ; in other words, F is a local diffeomor-
phism which “straightens out” f −1 (a). Hence f −1 (a), restricted to x > 0, is a smooth manifold.
Check the transition functions!
Interpreted slightly differently, the pair f, y can locally be used as coordinate functions on R2 ,
provided x > 0.
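In this example the straightening can be made completely explicit and tested numerically; the sample point below is made up for illustration:

```python
# For F(x, y) = (x^2 + y^2, y) with x > 0 we have det dF = 2x > 0, and here the
# local inverse can even be written down: G(u, v) = (sqrt(u - v^2), v).  We check
# that F o G = id and that G carries the line u = a into the level set f = a.
import math

def F(x, y):
    return (x * x + y * y, y)

def G(u, v):
    return (math.sqrt(u - v * v), v)

a = 2.0
u, v = a, 0.7            # a point on the line u = a, with u - v^2 > 0
x, y = G(u, v)           # preimage: should satisfy x^2 + y^2 = a and x > 0
back = F(*G(u, v))       # should recover (u, v)
```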
6.3. Rank. Recall that the dimension of a vector space V is the cardinality of a basis for V . If V
is finite-dimensional, then V ≅ Rm for some m, and dim V = m.
Definition 6.3. The rank of a linear map L : V → W is the dimension of im(L).
Definition 6.4. The rank of a smooth map f : Rm → Rn at x ∈ Rm is the rank of df (x) : Rm →
Rn . The map f has constant rank if the rank of df (x) is constant.
We can similarly define the rank of a smooth map f : M → N at a point x ∈ M by using local
coordinates.
Claim 6.5. The rank at x ∈ M is unchanged under change of coordinates.
Proof. We compare the ranks of d(ψα ◦ f ◦ φα^{−1}) and d(ψβ ◦ f ◦ φβ^{−1}), where φα : Uα → Rm ,
Carving manifolds out of other manifolds: The implicit function theorem, submersion version,
has the following corollary:
Corollary 7.4. If f : M → N is a submersion, then f −1 (y), y ∈ N , can be given the structure of
a manifold of dimension dim M − dim N .
Proof. The implicit function theorem above gives a coordinate chart with coordinate functions of
the form (xi1 , . . . , xin−m ) about each point in f −1 (y). HW: Check the transition functions and
verify the Hausdorff and second countable conditions!!
Example: The easy way to prove that the circle S 1 = {x2 + y 2 = 1} ⊂ R2 can be given the
structure of a manifold is to consider the map
f : R2 − {(0, 0)} → R, f (x, y) = x2 + y 2 .
The Jacobian is df (x, y) = (2x, 2y). Since x and y are never simultaneously zero, the rank of df
is 1 at all points of R2 − {(0, 0)} and in particular on S 1 . Using the implicit function theorem, it
follows that S 1 is a manifold.
7.2. Regular values and Sard’s theorem.
Definition 7.5. Let f : M → N be a smooth map.
(1) A point y ∈ N is a regular value of f if df (x) is surjective for all x ∈ f −1 (y).
(2) A point y ∈ N is a critical value of f if df (x) is not surjective for some x ∈ f −1 (y).
(3) A point x ∈ M is a critical point of f if df (x) is not surjective.
The implicit function theorem implies that f −1 (y) can be given the structure of a manifold if y
is a regular value of f .
Example: Let M = {x3 + y 3 + z 3 = 1} ⊂ R3 . Consider the map
f : R3 → R, (x, y, z) ↦ x3 + y 3 + z 3 .
Then M = f −1 (1). The Jacobian is given by df (x, y, z) = (3x2 , 3y 2 , 3z 2 ) and the rank of
df (x, y, z) is one if and only if (x, y, z) ≠ (0, 0, 0). Since (0, 0, 0) ∉ M , it follows that 1 is a
regular value of f . Hence M can be given the structure of a manifold.
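The regularity argument can be probed numerically: the gradient (3x2 , 3y 2 , 3z 2 ) vanishes only at the origin, which is not on M . The sample grid below is made up for illustration:

```python
# Sample points of M = {x^3 + y^3 + z^3 = 1} by solving for z, then check that the
# gradient of f is nonzero at each sampled point.
import math

def cbrt(t):
    """Real cube root, valid for negative arguments too."""
    return math.copysign(abs(t) ** (1.0 / 3.0), t)

def grad_f(x, y, z):
    return (3 * x * x, 3 * y * y, 3 * z * z)

samples = []
for x in [-1.2, -0.3, 0.0, 0.8]:
    for y in [-0.9, 0.1, 1.1]:
        z = cbrt(1.0 - x ** 3 - y ** 3)   # forces x^3 + y^3 + z^3 = 1
        samples.append((x, y, z))

on_M = all(abs(x ** 3 + y ** 3 + z ** 3 - 1.0) < 1e-9 for (x, y, z) in samples)
gradient_nonzero = all(max(grad_f(x, y, z)) > 0 for (x, y, z) in samples)
```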
HW: Prove that S n ⊂ Rn+1 is a manifold.
Example: Zero sets of homogeneous polynomials in RPn . A polynomial f : Rn+1 → R is
homogeneous of degree d if f (tx) = t^d f (x) for all t ∈ R − {0} and x ∈ Rn+1 . The zero set
Z(f ) of f is given by {[x0 , . . . , xn ] | f (x0 , . . . , xn ) = 0}. By the homogeneous condition, Z(f ) is
well-defined. We can check whether Z(f ) is a manifold by passing to local coordinates.
For example, consider the homogeneous polynomial f (x0 , x1 , x2 ) = x0^3 + x1^3 + x2^3 of degree 3
on RP2 . Consider the open set U = {x0 ≠ 0} ⊂ RP2 . If we let x0 = 1, then on U ≅ R2 we have
f (x1 , x2 ) = 1 + x1^3 + x2^3 . Check that 0 is a regular value of f (x1 , x2 )! The open sets {x1 ≠ 0} and
{x2 ≠ 0} can be treated similarly.
More involved example: Let SL(n, R) = {A ∈ Mn (R) | det(A) = 1}. SL(n, R) is called the
special linear group of n × n real matrices. Consider the determinant map
f : Rn² → R, A ↦ det(A).
We can rewrite f as follows:
f : Rn × · · · × Rn → R, (a1 , . . . , an ) ↦ det(a1 , . . . , an ),
where ai are column vectors and A = (a1 , . . . , an ) = (aij ).
First we need some properties of the determinant:
(1) f (e1 , . . . , en ) = 1.
(2) f (a1 , . . . , ci ai + c′i a′i , . . . , an ) = ci · f (a1 , . . . , ai , . . . , an ) + c′i · f (a1 , . . . , a′i , . . . , an ).
(3) f (. . . , ai , ai+1 , . . . ) = −f (. . . , ai+1 , ai , . . . ).
(1) is a normalization, (2) is called multilinearity, and (3) is called the alternating property. It turns
out that (1), (2), and (3) uniquely determine the determinant function.
We now compute df (A)(B):
df (A)(B) = lim_{t→0} (f (A + tB) − f (A))/t
= lim_{t→0} (det(a1 + tb1 , . . . , an + tbn ) − det(a1 , . . . , an ))/t
= lim_{t→0} [det(a1 , . . . , an ) + t(det(b1 , a2 , . . . , an ) + det(a1 , b2 , . . . , an )
+ · · · + det(a1 , . . . , an−1 , bn )) + t^2 (. . . ) − det(a1 , . . . , an )]/t
= det(b1 , a2 , . . . , an ) + · · · + det(a1 , . . . , an−1 , bn ).
It is easy to show that 1 is a regular value of f (it suffices to show that df (A) is nonzero for any
A ∈ SL(n, R)). For example, take b1 = ca1 where c ∈ R and bi = 0 for all i ≠ 1; then df (A)(B) = c det(A) = c.
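The formula for df (A)(B) can be verified numerically against a difference quotient; the 3 × 3 matrices below are made up for illustration:

```python
# Numerical check of df(A)(B) = det(b1, a2, ..., an) + ... + det(a1, ..., a_{n-1}, bn)
# for 3x3 matrices, against the difference quotient (det(A + tB) - det(A))/t.

def det3(M):
    """3x3 determinant by cofactor expansion along the first row (M is a list of rows)."""
    (a, b, c), (d, e, f), (g, h, i) = M
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def replace_col(M, j, col):
    """Copy of M with column j replaced by col."""
    N = [row[:] for row in M]
    for i in range(3):
        N[i][j] = col[i]
    return N

A = [[1.0, 2.0, 0.0], [0.0, 1.0, 1.0], [2.0, 0.0, 1.0]]
B = [[0.5, -1.0, 2.0], [1.0, 0.0, 0.0], [0.0, 3.0, 1.0]]

formula = sum(det3(replace_col(A, j, [B[i][j] for i in range(3)]))
              for j in range(3))
t = 1e-6
At = [[A[i][j] + t * B[i][j] for j in range(3)] for i in range(3)]
quotient = (det3(At) - det3(A)) / t
```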
Theorem 7.6 (Sard’s theorem). Let f : U → V be a smooth map. Then almost every point y ∈ Rn
is a regular value.
The notion of almost every point will be made precise later. But in the meantime:
Reality Check: In Sard’s theorem what happens when m < n?
8.2. Immersions.
Definition 8.1. Let U ⊂ Rm and V ⊂ Rn be open sets. A smooth map f : U → V is an immersion
if df (x) is injective for all x ∈ U . (Note this means n ≥ m.) A smooth map f : M → N between
manifolds is an immersion if f is an immersion with respect to all local coordinates.
Prototype: f : Rm → Rn , n ≥ m, (x1 , . . . , xm ) ↦ (x1 , . . . , xm , 0, . . . , 0).
Theorem 8.2 (Implicit function theorem, immersion version). Let f : U → V be an immersion,
where U ⊂ Rm and V ⊂ Rn are open sets. Then for any p ∈ U , there exist open sets U ⊃ Up ∋ p,
V ⊃ Vf (p) ∋ f (p) and a diffeomorphism G : Vf (p) → W ⊂ Rn such that
G ◦ f : Up → Rn
is given by
(x1 , . . . , xm ) ↦ (x1 , . . . , xm , 0, . . . , 0).
Proof. The proof is similar to that of the submersion version. Define the map
F : U × Rn−m → Rn ,
Zen: The implicit function theorem tells us that under a constant rank condition we may assume
that locally we can straighten our manifolds and maps and pretend we are doing linear algebra.
Examples of immersions:
(1) Circle mapped to figure 8 in R2 .
(2) The map f : R → C, t ↦ e^{it} , which wraps around the unit circle S 1 ⊂ C infinitely many
times.
(3) The map f : R → R2 /Z2 , t ↦ (at, bt), where b/a is irrational. The image of f is dense in
R2 /Z2 .
8.3. Embeddings and submanifolds. We upgrade immersions f : M → N as follows:
Definition 8.3. An embedding f : M → N is an immersion which is one-to-one and proper. The
image of an embedding is called a submanifold of N .
The “pathological” examples above are immersions but not embeddings. Why? (1) and (2) are not
one-to-one and (3) is not proper.
Proposition 8.4. Let M and N be manifolds of dimension m and n with topologies T and T ′ . If
f : M → N is an embedding, then f −1 (T ′ ) = T .
Proof. It suffices to show that f −1 (T ′ ) ⊃ T , since a continuous map f satisfies f −1 (T ′ ) ⊂ T . Let
x ∈ M and U be a small open set containing x. Then by the implicit function theorem f can be
written locally as U → Rn , x0 ↦ (x0 , 0) (where we are using x0 to avoid confusion with x). We
claim that there is an open set V ⊂ Rn such that V ∩ f (M ) = f (U ): Arguing by contradiction,
suppose there exist y ∈ f (U ) and a sequence {xi }∞_{i=1} ⊂ M such that f (xi ) → y but f (xi ) ∉ f (U ).
The set {y} ∪ {f (xi )}∞_{i=1} is compact, so {f −1 (y)} ∪ {xi }∞_{i=1} is compact by properness, where we
are recalling that f is one-to-one. By compactness, there is a subsequence of {xi } which converges
to f −1 (y). This implies that xi ∈ U and f (xi ) ∈ f (U ) for sufficiently large i, a contradiction.
Proof. We prove the theorem for one variable x. By the Fundamental Theorem of Calculus,
g(1) − g(0) = ∫_0^1 g ′ (t) dt.
Substituting g(t) = f (tx) and integrating by parts (i.e., ∫ u dv = uv − ∫ v du) with u = (d/dt) f (tx)
and v = t − 1 we obtain:
f (x) − f (0) = ∫_0^1 (d/dt) f (tx) dt
= x · f ′ (tx) · (t − 1)|_{t=0}^{t=1} − ∫_0^1 (t − 1) x^2 · f ′′ (tx) dt
= −x f ′ (0)(−1) − x^2 ∫_0^1 (t − 1) f ′′ (tx) dt
= f ′ (0) · x + h(x) · x^2 .
Here we write h(x) = − ∫_0^1 (t − 1) f ′′ (tx) dt. This gives us the desired result
f (x) = f (0) + f ′ (0) · x + h(x) · x^2
for one variable.
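The identity can be sanity-checked numerically; the choice f = exp below is made up for illustration (so f (0) = f ′ (0) = 1 and f ′′ = exp), with the integral approximated by a midpoint Riemann sum:

```python
# Check of f(x) = f(0) + f'(0)x + h(x)x^2 with h(x) = -int_0^1 (t - 1) f''(tx) dt
# for f = exp, approximating the integral numerically.
import math

def h(x, n=20000):
    """Midpoint-rule approximation of -int_0^1 (t - 1) e^{tx} dt."""
    s = 0.0
    for k in range(n):
        t = (k + 0.5) / n
        s += (t - 1.0) * math.exp(t * x)
    return -s / n

x = 0.7
lhs = math.exp(x)                      # f(x)
rhs = 1.0 + x + h(x) * x * x           # f(0) + f'(0)x + h(x)x^2
```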
Corollary 9.4. γ1 ∼ γ2 if and only if xi (γ1 (t)) = xi (γ2 (t)) + O(t2 ) for i = 1, . . . , n.
Corollary 9.5. If M is a submanifold of Rm , then γ1 ∼ γ2 if and only if γ1′ (0) = γ2′ (0).
Remark: At this point it’s not clear whether Tp^{(1)} M has a canonical vector space structure. Here’s
one possible definition: Choose coordinates x = (x1 , . . . , xn ) about p = 0. If we write γ1 , γ2 ∈
Tp^{(1)} M with respect to x, then we can do addition x ◦ γ1 + x ◦ γ2 and scalar multiplication cx ◦ γ1 .
However, the above vector space structure depends on the choice of coordinates. Of course we
can show that the definition does not depend on the choice of coordinates, but the vector space
structure is clearly canonical in the definition of Tp^{(2)} M from next time.
Thus we obtain a smooth manifold T M and a C ∞ -function π : T M → M .
HW: Show that the tangent bundle T S 2 defined in this way is diffeomorphic to our “official defi-
nition” of the tangent bundle.
11.3. Complex manifolds.
More abstract example: S 2 defined by gluing coordinate charts. Let U = R2 and V = R2 , with
coordinates (x1 , y1 ), (x2 , y2 ), respectively. Alternatively, think of R2 = C. Take U ∩ V = C − {0}.
The transition function is:
φU V : U − {0} → V − {0}, z ↦ 1/z,
with respect to complex coordinates z = x + iy. With respect to real coordinates,
(x, y) ↦ ( x/(x2 + y 2 ), −y/(x2 + y 2 ) ).
S 2 has the structure of a complex manifold.
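The agreement between the complex transition map z ↦ 1/z and its real-coordinate form can be checked directly; the sample points are made up for illustration:

```python
# Check that (x, y) -> (x/(x^2 + y^2), -y/(x^2 + y^2)) agrees with z -> 1/z.

def transition_real(x, y):
    r2 = x * x + y * y
    return (x / r2, -y / r2)

def transition_complex(x, y):
    w = 1.0 / complex(x, y)
    return (w.real, w.imag)

samples = [(1.0, 0.0), (0.3, -0.4), (-2.0, 1.5)]
agree = all(abs(transition_real(x, y)[0] - transition_complex(x, y)[0]) < 1e-12 and
            abs(transition_real(x, y)[1] - transition_complex(x, y)[1]) < 1e-12
            for (x, y) in samples)
```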
Definition 11.1. A function φ : C → C is holomorphic (or complex analytic) if
dφ/dz = lim_{h→0} (φ(z + h) − φ(z))/h
exists for all z ∈ C. Here h ∈ C − {0}.
A function φ : Cn → Cm is holomorphic if
∂φ/∂zi = lim_{h→0} (φ(z1 , . . . , zi + h, . . . , zn ) − φ(z1 , . . . , zn ))/h
exists for all z = (z1 , . . . , zn ) and i = 1, . . . , n.
Definition 11.2. A complex manifold is a topological manifold with an atlas {Uα , φα }, where
φα : Uα → Cn and φβ ◦ φα^{−1} : Cn → Cn is a holomorphic map.
HW: Define the derivative map in terms of Tp^{(1)} M and show the equivalence with the definition
just given.
Derivative map. The maps f∗ : Tp M → Tf (p) N can be combined into the derivative map
f∗ : T M = ⊔p∈M Tp M → T N = ⊔r∈N Tr N.
HW: Show that the derivative map is a smooth map between tangent bundles.
12.3. Properties of 1-forms.
Definition 12.2. Let π : T ∗ M → M be the cotangent bundle. A 1-form over U ⊂ M is a smooth map
s : U → T ∗ M such that π ◦ s = id.
Note that a 1-form assigns an element of Tp∗ M to a given p ∈ M in a smooth manner. The space
of 1-forms on U is denoted by Ω1 (U ) and is an R-vector space.
1. We often write Ω0 (M ) = C ∞ (M ). Then there exists a map d : Ω0 (M ) → Ω1 (M ), g ↦ dg.
HW: Verify that dg is a 1-form on M , i.e., check that dg is a smooth map M → T ∗ M . Hint: in
local coordinates dg can be written as x = (x1 , . . . , xn ) ↦ (x, ∂g/∂x1 (x), . . . , ∂g/∂xn (x)).
2. Given φ : M → N , there is no natural map T ∗ M → T ∗ N unless φ is a diffeomorphism.
However, there exists φ∗ : Ω1 (N ) → Ω1 (M ), θ ↦ φ∗ θ. HW: check that the map is well-defined,
i.e., indeed takes a smooth section of T ∗ N to a smooth section of T ∗ M . The 1-form φ∗ θ is called
the pullback of θ under φ.
3. Let ψ : L → M and φ : M → N be smooth maps between smooth manifolds and let θ be a
1-form on N . Then (φ ◦ ψ)∗ θ = ψ ∗ (φ∗ θ). HW: check this. Note that the order in which we pull
back is reversed.
4. There exists a commutative diagram:
Ω0 (N ) −φ∗→ Ω0 (M )
  d ↓            ↓ d
Ω1 (N ) −φ∗→ Ω1 (M ),
i.e., d ◦ φ∗ = φ∗ ◦ d. HW: Check this by unwinding the definitions.
5. d(gh) = gdh + hdg. HW: Check this.
Example: Let θ = f (x, y)dx + g(x, y)dy on R2 . If i : R → R2 , t ↦ (a(t), b(t)), then
i∗ θ = (f (a(t), b(t)) a′ (t) + g(a(t), b(t)) b′ (t)) dt.
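For an exact 1-form θ = dF the pullback formula reduces to the chain rule, i∗ dF = d(F ◦ i), which can be checked numerically; F , a, and b below are made up for illustration:

```python
# Check f(a, b) a' + g(a, b) b' = (d/dt) F(a(t), b(t)) for theta = dF.
import math

def F(x, y):  return x * y + math.sin(x)
def Fx(x, y): return y + math.cos(x)      # f = dF/dx
def Fy(x, y): return x                    # g = dF/dy
def a(t):     return t * t                # so a'(t) = 2t
def b(t):     return math.cos(t)          # so b'(t) = -sin(t)

t0, eps = 0.8, 1e-6
# coefficient of dt in the pullback i* theta:
pullback = Fx(a(t0), b(t0)) * 2 * t0 + Fy(a(t0), b(t0)) * (-math.sin(t0))
# numerical derivative of t -> F(a(t), b(t)):
deriv = (F(a(t0 + eps), b(t0 + eps)) - F(a(t0 - eps), b(t0 - eps))) / (2 * eps)
```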
13. LIE GROUPS
13.1. Lie groups.
Definition 13.1. A Lie group G is a smooth manifold together with smooth maps µ : G × G → G
(multiplication) and i : G → G (inverse) which make G into a group.
Definition 13.2. A Lie subgroup H ⊂ G is a subgroup of G such that the inclusion map H → G
is a 1-1 immersion. A Lie group homomorphism φ : H → G is a homomorphism which is also a
smooth map of the underlying manifolds.
Examples of Lie groups:
(1) Rn with the usual addition; the quotient T n = Rn /Zn with the usual addition.
(2) The general linear group GL(n, R) = {A ∈ Mn (R) | det(A) ≠ 0}. We already showed
that this is a manifold. The product AB is defined by a formula which is polynomial in the
matrix entries of A and B, so µ is smooth. Similarly one can show that i is smooth.
(3) More invariantly, given a finite-dimensional R-vector space V , let GL(V ) be the group of
R-linear automorphisms V → V .
(4) The special linear group SL(n, R) = {A ∈ Mn (R) | det(A) = 1}.
(5) The orthogonal group O(n) = {A ∈ Mn (R) | AAT = I}.
(6) The special orthogonal group SO(n, R) = SL(n, R) ∩ O(n).
What’s this cocycle condition? This cocycle condition (triple intersection property) is clearly
necessary if we want to construct a vector bundle by patching together {Uα × Rn }. It guarantees
that the gluings that we prescribe, i.e., ΦUα Uβ from Uα to Uβ , etc. are compatible.
HW: On the other hand, if we can find a collection {ΦUα Uβ } (for all Uα , Uβ ), which satisfies
Conditions (1) and (2), then we can construct a vector bundle by gluing {Uα × Rn } using this
prescription.
Consider π : T ∗ M → M . In a previous lecture we computed that the transition functions ΦU V :
U ∩ V → GL(n, R) are given by x ↦ ((∂y/∂x (x))^{−1} )T . The inverse and transpose are both necessary
for the cocycle condition to be met.
14.3. Constructing new vector bundles out of T M . Let M be a manifold and {Uα } an atlas
for M . View T M as being constructed out of {Uα × Rn } by gluing using transition functions
ΦUα Uβ : Uα ∩ Uβ → GL(n, R) that satisfy Conditions (1) and (2).
Consider a representation ρ : GL(n, R) → GL(m, R).
Examples of representations:
(1) ρ : GL(n, R) → GL(1, R) = R× , A ↦ det(A).
(2) ρ : GL(n, R) → GL(n, R), A ↦ BAB −1 .
(3) ρ : GL(n, R) → GL(n, R), A ↦ (A−1 )T .
Now we know how to integrate 1-forms. Over the next few weeks we will define objects that we
can integrate on higher-dimensional submanifolds (not just curves), called k-forms. For this we
need to do quite a bit of preparation.
16.2. Tensor products. We first review some notions in linear algebra and then define the tensor
product. Let V , W be vector spaces over R. (The vector spaces do not need to be finite-dimensional
or over R, but you may suppose they are if you want.)
1. (Direct sum) V ⊕ W . As a set, V ⊕ W = V × W . The addition is given by (v1 , w1 ) + (v2 , w2 ) =
(v1 + v2 , w1 + w2 ) and the scalar multiplication is given by c(v, w) = (cv, cw). dim(V ⊕ W ) =
dim(V ) + dim(W ).
2. Hom(V, W ) = {R-linear maps φ : V → W }. In particular, we have V ∗ = Hom(V, R).
dim(Hom(V, W )) = dim(V ) · dim(W ).
We now define the tensor product V ⊗ W of V and W .
Informal definition. Suppose V and W are finite-dimensional and let {v1 , . . . , vm }, {w1 , . . . , wn }
be bases for V and W , respectively. Then the tensor product V ⊗ W is a vector space which has
{vi ⊗ wj | i = 1, . . . , m; j = 1, . . . , n}
as a basis. Elements of V ⊗ W are linear combinations Σ_{ij} aij vi ⊗ wj .
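In coordinates the informal definition says: if v = (a1 , . . . , am ) and w = (b1 , . . . , bn ), then v ⊗ w has the m · n coordinates ai bj in the basis {vi ⊗ wj }. A small sketch checking the dimension count and bilinearity in the first slot (the vectors are made up):

```python
def tensor(v, w):
    """Coordinates of v (x) w in the basis {v_i (x) w_j}, with j varying fastest."""
    return [a * b for a in v for b in w]

v1, v2 = [1.0, 2.0], [0.0, -1.0]
w = [3.0, 1.0, 4.0]

lhs = tensor([x + y for x, y in zip(v1, v2)], w)                 # (v1 + v2) (x) w
rhs = [p + q for p, q in zip(tensor(v1, w), tensor(v2, w))]      # v1 (x) w + v2 (x) w
```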
We therefore consider R(V, W ) ⊂ F (V × W ), the vector space generated by the “bilinear rela-
tions”
(v1 + v2 , w) − (v1 , w) − (v2 , w),
(v, w1 + w2 ) − (v, w1 ) − (v, w2 ),
(cv, w) − c(v, w),
(v, cw) − c(v, w).
Then the quotient space F (V × W )/R(V, W ) is V ⊗ W .
Verification of universal mapping property. With V ⊗ W defined as above, define
i : V × W → V ⊗ W, (v, w) 7→ v ⊗ w.
The bilinearity of i follows from the construction of V ⊗ W . (For example, i(v1 + v2 , w) =
(v1 + v2 ) ⊗ w = v1 ⊗ w + v2 ⊗ w = i(v1 , w) + i(v2 , w).) We can define
φ̃ : V ⊗ W → U, Σi ai vi ⊗ wi ↦ Σi ai φ(vi , wi ).
This map is well-defined because all the elements of R(V, W ) are mapped to 0. The uniqueness of
φ̃ is clear.
(2) ensures that we do not need to write parentheses when we take a tensor product of several
vector spaces.
Let A : V → V and B : W → W be linear maps. Then we have
A ⊕ B : V ⊕ W → V ⊕ W, A ⊗ B : V ⊗ W → V ⊗ W.
We write V ⊗k for the k-fold tensor product of V . Then we have a representation
ρ : GL(V ) → GL(V ⊗k ), A ↦ A ⊗ · · · ⊗ A.
17.2. Tensor and exterior algebra. The tensor algebra T (V ) of V is
T (V ) = R ⊕ V ⊕ V ⊗2 ⊕ V ⊗3 ⊕ . . . ,
where the multiplication is given by
(v1 ⊗ · · · ⊗ vs )(vs+1 ⊗ · · · ⊗ vt ) = v1 ⊗ · · · ⊗ vt .
Here s or t may be zero, in which case 1 ∈ R acts as the multiplicative identity.
The exterior algebra ∧V of V is T (V )/I, where I is the 2-sided ideal generated by elements
of the form v ⊗ v, v ∈ V , i.e., elements of I are finite sums of terms that look like η1 ⊗ v ⊗ v ⊗ η2 ,
where η1 , η2 ∈ T (V ) and v ∈ V . Elements of ∧V are linear combinations of terms of the form
v1 ∧ · · · ∧ vk , where k ∈ {0, 1, 2 . . . } and vi ∈ V . By definition, in ∧V we have
(4) v ∧ v = 0.
Also note that (v1 + v2 ) ∧ (v1 + v2 ) = v1 ∧ v1 + v1 ∧ v2 + v2 ∧ v1 + v2 ∧ v2 . The first and last terms
are zero, so
(5) v1 ∧ v2 = −v2 ∧ v1 .
∧V is clearly an algebra, i.e., there is a multiplication ω ∧ η defined for elements ω, η in ∧V .
We define ∧k V to be the degree k terms of ∧V , i.e., linear combinations of terms of the form
v1 ∧ · · · ∧ vk where vi ∈ V .
Alternating multilinear forms. A multilinear form φ : V × · · · × V → U is alternating if
φ(v1 , . . . , vi , vi+1 , . . . , vk ) = −φ(v1 , . . . , vi+1 , vi , . . . , vk ).
Recall that transpositions generate the full symmetric group Sk on k letters. If σ ∈ Sk maps
(1, . . . , k) ↦ (σ(1), . . . , σ(k)) and sgn(σ) is the number of transpositions mod 2 needed to
write σ as a product of transpositions, then
φ(v1 , . . . , vk ) = (−1)^{sgn(σ)} φ(vσ(1) , . . . , vσ(k) ).
Proof. The proof is given in several steps. It is easy to see that {ei1 ∧ · · · ∧ eik | 1 ≤ i1 , . . . , ik ≤ n}
spans ∧k V . Moreover, using Equations (4) and (5), we can shrink the spanning set to:
{ei1 ∧ · · · ∧ eik | 1 ≤ i1 < · · · < ik ≤ n}.
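The shrunken spanning set has C(n, k) elements, one for each strictly increasing multi-index; a quick count (n and k are made up for illustration):

```python
# Count strictly increasing multi-indices 1 <= i1 < ... < ik <= n,
# which index the basis of the k-th exterior power.
import itertools
import math

n, k = 6, 3
increasing = [I for I in itertools.product(range(1, n + 1), repeat=k)
              if all(I[j] < I[j + 1] for j in range(k - 1))]
count = len(increasing)       # equals C(6, 3) = 20
```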
We will focus on ∧k T ∗ M in what follows. Sections of ∧k T ∗ M are called k-forms and can be
written in local coordinates x1 , . . . , xm as:
ω = Σ_{i1 <···<ik} fi1 ,...,ik dxi1 ∧ · · · ∧ dxik .
Lemma 19.2. d2 = 0.
Proof. For d2 : Ω0 (M ) → Ω2 (M ), we compute:
d ◦ df = d( Σi (∂f /∂xi ) dxi ) = Σi,j (∂ 2 f /∂xi ∂xj ) dxj ∧ dxi = 0,
since the coefficients ∂ 2 f /∂xi ∂xj are symmetric in i and j while dxj ∧ dxi is antisymmetric.
When α = dxI , we verify that dα = d(dxI ) = 0.
Now if α = fI dxI , then
dα = dfI ∧ dxI + fI d(dxI ) = dfI ∧ dxI ,
d2 α = (d2 fI ) ∧ dxI − dfI ∧ d(dxI ) = 0,
which proves the lemma.
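The cancellation in the proof rests on the symmetry of mixed partials, which can be checked numerically; the function f (x, y) = x3 y 2 below is made up for illustration:

```python
# Numerical check that the two mixed partials of f(x, y) = x^3 y^2 agree.

def f(x, y):
    return x ** 3 * y ** 2

def mixed_xy(f, x, y, h=1e-4):
    """d/dy of df/dx, by central differences."""
    dfdx = lambda yy: (f(x + h, yy) - f(x - h, yy)) / (2 * h)
    return (dfdx(y + h) - dfdx(y - h)) / (2 * h)

def mixed_yx(f, x, y, h=1e-4):
    """d/dx of df/dy, by central differences."""
    dfdy = lambda xx: (f(xx, y + h) - f(xx, y - h)) / (2 * h)
    return (dfdy(x + h) - dfdy(x - h)) / (2 * h)

m1 = mixed_xy(f, 1.3, 0.7)    # both approximate 6 x^2 y = 7.098
m2 = mixed_yx(f, 1.3, 0.7)
```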
The de Rham cohomology groups measure the failure of Equation (7) to be exact.
We will often write H k (M ) instead of H k_{dR} (M ).
Examples:
1. M = {pt}. Then Ω0 (pt) = R and Ωk (pt) = 0 for k > 0. Hence H 0 (pt) = R and H k (pt) = 0
for k > 0.
2. M = R. Then Ω0 (M ) = C ∞ (R) and Ω1 (M ) ≅ C ∞ (R) because every 1-form is of the form
f dx. Now, d : f ↦ (df /dx) dx can be viewed as the map
d : C ∞ (R) → C ∞ (R), f ↦ f ′ .
It is easy to see that Ker d = {constant functions} and hence H 0 (R) = R. Next, Im d is all of
C ∞ (R), since given any f we can take its antiderivative ∫_0^x f (t) dt. Therefore, H 1 (R) = 0.
Basic properties.
1. H 0 (M ) ≅ R if M is connected. Proof: df = 0 if and only if f is locally constant.
2. H k (M ) = 0 if dim M < k. Proof: Ωk (M ) = 0.
3. If M = M1 ⊔ M2 , then H k (M ) ≅ H k (M1 ) ⊕ H k (M2 ) for all k ≥ 0. Proof: HW.
20.2. Mayer-Vietoris sequences. This is a method for effectively decomposing a manifold and
computing its cohomology from its components.
Let M = U ∪ V , where U and V are open sets. Then we have natural inclusion maps
(8) U ∩ V −(iU ,iV )→ U ⊔ V −i→ M.
Here iU and iV are the inclusions of U ∩ V into U and into V .
Example: M = S 1 , U = V = R, U ∩ V = R ⊔ R.
Theorem 20.3. We have the following long exact sequence:
0 → H 0 (M ) → H 0 (U ) ⊕ H 0 (V ) −(i∗U −i∗V )→ H 0 (U ∩ V ) →
→ H 1 (M ) → H 1 (U ) ⊕ H 1 (V ) −(i∗U −i∗V )→ H 1 (U ∩ V ) →
→ ....
Remark 20.4. 0 → A → B exact means A → B is injective. A → B → 0 exact means A → B
is surjective. Hence 0 → A → B → 0 exact means A → B is an isomorphism.
The proof of Theorem 20.3 will be given over the next couple of lectures, but for the time being
we will apply it:
Example: Compute H k (S 1 ) via Mayer-Vietoris, using the above decomposition of S 1 as R ∪ R.
20.3. Poincaré lemma. The following lemma is an important starting point when using the Mayer-
Vietoris sequence to compute cohomology groups.
Lemma 20.5 (Poincaré lemma). Let ω ∈ Ωk (Rn ) for k ≥ 1. Then ω is closed if and only if it is
exact.
In other words, H k (Rn ) = 0 for k ≥ 1. We will give the proof later, when we discuss homotopy-
theoretic properties of de Rham cohomology.
21.2. Short exact sequences to long exact sequences. Getting from the short exact sequence to
the long exact sequence is a purely algebraic operation.
Define a cochain complex (C, d): · · · → C k −dk→ C k+1 −dk+1→ C k+2 → · · · to be a sequence of
vector spaces and maps with dk+1 ◦ dk = 0 for all k. (C, d) gives rise to H k (C) = Ker dk / Im dk−1 ,
the kth cohomology of the complex.
A cochain map φ : A → B is a collection of maps φk : Ak → B k such that the squares of the
following diagram commute (i.e., dk ◦ φk = φk+1 ◦ dk ):
· · · −dk−1→ Ak −dk→ Ak+1 −dk+1→ · · ·
              ↓ φk     ↓ φk+1
· · · −dk−1→ B k −dk→ B k+1 −dk+1→ · · ·
A short exact sequence of cochain complexes 0 → A → B → C → 0 consists of cochain maps
φ : A → B and ψ : B → C such that each row of
0 −→ Ak+1 −φk+1→ B k+1 −ψk+1→ C k+1 −→ 0
       ↑ dk         ↑ dk          ↑ dk
0 −→ Ak −−φk−→ B k −−ψk−→ C k −→ 0
       ↑ dk−1       ↑ dk−1        ↑ dk−1
is exact and every square commutes.
22. INTEGRATION
Let M be an n-dimensional manifold and ω ∈ Ωn (M ). The goal of this lecture is to motivate
and define ∫M ω.
Definition 22.1. The function f : R → R is Riemann-integrable (or simply integrable) if for any
ε > 0 there exists a partition P such that U (f, P ) − L(f, P ) < ε. If f is integrable, then we define
∫_R f dx1 . . . dxn := lim_P U (f, P ) = lim_P L(f, P ).
Similarly we define ∫_U f dx1 . . . dxn , if U ⊂ Rn is an open set and f : U → R is a function with
compact support.
Theorem 22.2. If f is a continuous function with support on a compact set, then f is integrable.
Change of variables formula. If g : [a, b] → [c, d] is a smooth reparametrization and x, y are
coordinates on [a, b], [c, d], then
∫_{g(a)}^{g(b)} f (y) dy = ∫_a^b (f ◦ g)(x) · g ′ (x) dx.
More generally, let U and V ⊂ Rn be open sets with coordinates x = (x1 , . . . , xn ) and y =
(y1 , . . . , yn ), and φ : U → V a diffeomorphism. Then:
∫_V f (y) dy1 . . . dyn = ∫_U f (φ(x)) |det(∂φ/∂x)| dx1 . . . dxn .
Remark 22.3. In light of the change of variables formula, ∫M ω makes sense only when M is
orientable, since the change of variables for an n-form does not have the absolute value. At any
rate, n-forms have the wonderful property of having the correct transformation property (modulo
sign) under diffeomorphisms.
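The one-variable change of variables formula can be checked with Riemann sums; the choices f (y) = y 2 and g(x) = x2 on [1, 2] are made up for illustration (so g maps onto [1, 4]):

```python
# Riemann-sum check of int_{g(a)}^{g(b)} f(y) dy = int_a^b f(g(x)) g'(x) dx.

def riemann(fn, lo, hi, n=20000):
    """Midpoint rule for the integral of fn over [lo, hi]."""
    w = (hi - lo) / n
    return sum(fn(lo + (k + 0.5) * w) for k in range(n)) * w

f = lambda y: y * y
g = lambda x: x * x
gp = lambda x: 2 * x                              # g'(x)

lhs = riemann(f, 1.0, 4.0)                        # int_1^4 y^2 dy = 21
rhs = riemann(lambda x: f(g(x)) * gp(x), 1.0, 2.0)
```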
22.2. Orientation. Recall that M is orientable if there exists a subatlas {φα : Uα → Rn } such
that the Jacobians Jαβ = d(φβ ◦ φα^{−1} ) have positive determinant. We will refer to such an atlas as
an oriented atlas.
Proposition 22.4. M is orientable if and only if there exists a nowhere zero n-form ω on M .
Proof. Suppose M is orientable. Let {φ_α : U_α → R^n} be an oriented atlas, i.e., a subatlas whose transition functions have Jacobians J_{αβ} with positive determinant. Take a partition of unity {f_α} subordinate to {U_α} and let x_1^α, . . . , x_n^α be the coordinates on U_α. Construct ω_α = f_α dx_1^α ∧ · · · ∧ dx_n^α, which can be viewed as a smooth n-form on M with support in U_α. Let ω = Σ_α ω_α. This is nowhere zero, since

    (φ_β ∘ φ_α^{-1})^* ω_β = (f_β ∘ (φ_β ∘ φ_α^{-1})) det(J_{αβ}) dx_1^α ∧ · · · ∧ dx_n^α.

Since det(J_{αβ}) is positive, (f_β ∘ (φ_β ∘ φ_α^{-1})) det(J_{αβ}) ≥ 0. At any point p ∈ M, at least one f_α is positive and all the terms that are added up contribute nonnegatively with respect to the coordinates dx_1^α, . . . , dx_n^α, so ω is nowhere zero on M.

On the other hand, suppose there exists a nowhere zero n-form ω on M. We choose a subatlas {φ_α : U_α → R^n} whose coordinate functions x_1^α, . . . , x_n^α satisfy the condition that dx_1^α ∧ · · · ∧ dx_n^α is a positive function times ω. Then clearly J_{αβ} has positive determinant on this subatlas.
If M is orientable, then there is a nowhere zero n-form ω, which in turn implies that ∧n T ∗ M is
isomorphic to M × R as a vector bundle, i.e., is a trivial vector bundle.
On a connected manifold M, any two nowhere zero n-forms ω and ω′ differ by a nowhere zero function, i.e., there exists a positive (or negative) function f such that ω = f ω′. We define an equivalence relation ω ∼ ω′ if ω = f ω′ with f > 0. Then there are exactly two equivalence classes of ∼, and each equivalence class is called an orientation of M.
The standard orientation on Rn is dx1 ∧ · · · ∧ dxn .
Equivalent definition of orientation. Let F r(V ) be the set of ordered bases (or frames) of a
finite-dimensional R-vector space V of dimension n. It can be given a smooth structure so that it
is diffeomorphic to GL(V ), albeit not naturally: Fix an ordered basis (v1 , . . . , vn ). Then any other
basis (w1 , . . . , wn ) can be written as (Av1 , . . . , Avn ), where A ∈ GL(V ). Hence there is a bijection
F r(V ) ' GL(V ) and we induce a smooth structure on F r(V ) from GL(V ) via this identification.
The non-naturality comes from the fact that there is a distinguished point id ∈ GL(V ) but no
distinguished basis in F r(V ).
Since GL(V ) has two connected components, F r(V ) also has two connected components, and
each component is called an orientation for V . An orientation for M is a choice of orientation for
each Tp M which is smooth in p ∈ M .
We can also construct the frame bundle Fr(M) = ⊔_{p∈M} Fr(T_p M) by topologizing as follows: identify a neighborhood of p ∈ M with R^n and identify ⊔_{p∈R^n} Fr(T_p R^n) = Fr(R^n) × R^n. The frame bundle is a fiber bundle over M whose fibers are diffeomorphic to GL(V).
22.3. Definition of the integral. Suppose M is orientable. Choose an oriented atlas {φ_α : U_α → R^n} for M. We then define:

    ∫_M ω = Σ_α ∫_{φ_α(U_α)} (φ_α^{-1})^* (f_α ω),

where {f_α} is a partition of unity subordinate to {U_α}. We will often write ∫_{U_α} f_α ω instead of ∫_{φ_α(U_α)} (φ_α^{-1})^* (f_α ω).

HW: Check that the definition of ∫_M ω depends neither on the choice of oriented atlas {φ_α : U_α → R^n} nor on the choice of partition of unity {f_α} subordinate to {U_α}.
Remark 23.4. The choice of orientation in the above lemma is the boundary orientation of ∂M
induced from the orientation of M .
23.2. Stokes' Theorem.

Theorem 23.5 (Stokes' Theorem). Let ω be an (n−1)-form on an oriented manifold with boundary M of dimension n. Then ∫_M dω = ∫_{∂M} ω.
Zen: The significance of Stokes’ Theorem is that a topological operation ∂ is related to an analytic
operation d.
Proof. Take an open cover {U_α} where U_α is diffeomorphic to (i) (0, 1) × · · · × (0, 1) (i.e., U_α does not intersect ∂M) or (ii) (0, 1] × (0, 1) × · · · × (0, 1) (i.e., U_α ∩ ∂M = {x_1 = 1}). (Note that in the definition of a manifold with boundary, we could have allowed H^n to be {x_1 ≤ c}, where c ∈ R is a constant. Here we are taking c = 1.) Let {f_α} be a partition of unity subordinate to {U_α}. By linearity, it suffices to prove ∫_{U_α} d(f_α ω) = ∫_{∂M ∩ U_α} f_α ω, i.e., we may assume ω is supported in a single U_α.

We will treat the n = 2 case. Let ω be supported in a chart of type (ii). Then on [0, 1] × [0, 1] we can write ω = f_1 dx_1 + f_2 dx_2. Since dx_1 restricts to zero on ∂M = {x_1 = 1},

    ∫_{∂M} ω = ∫_{∂M} f_1 dx_1 + f_2 dx_2 = ∫_0^1 f_2(1, x_2) dx_2.

On the other hand, dω = (∂f_2/∂x_1 − ∂f_1/∂x_2) dx_1 ∧ dx_2, and since f_1, f_2 vanish near the remaining faces,

    ∫_M dω = ∫_0^1 ∫_0^1 (∂f_2/∂x_1 − ∂f_1/∂x_2) dx_1 dx_2 = ∫_0^1 f_2(1, x_2) dx_2

by Fubini's theorem, so the two sides agree. (The type (i) case is similar, with no boundary term.)
Example: (Green's Theorem) Let Ω ⊂ R^2 be a compact domain with smooth boundary, i.e., Ω is a 2-dimensional manifold with boundary ∂Ω = γ. Then

    ∫_γ f dx + g dy = ∫_Ω (∂g/∂x − ∂f/∂y) dx dy.
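Green's Theorem can be checked symbolically on the unit square; the functions f and g below are hypothetical test data.

```python
import sympy as sp

x, y, t = sp.symbols('x y t')

# Hypothetical test data on the unit square Ω = [0,1]^2,
# with boundary traversed counterclockwise.
f = x * y**2
g = x**3 + y

# Right-hand side: double integral of ∂g/∂x − ∂f/∂y over Ω.
rhs = sp.integrate(sp.diff(g, x) - sp.diff(f, y), (x, 0, 1), (y, 0, 1))

def edge(px, py, t0, t1):
    # Pull back f dx + g dy to the parameter t and integrate.
    integrand = (f.subs({x: px, y: py}) * sp.diff(px, t)
                 + g.subs({x: px, y: py}) * sp.diff(py, t))
    return sp.integrate(integrand, (t, t0, t1))

# Left-hand side: line integral over the four edges.
lhs = (edge(t, sp.Integer(0), 0, 1)     # bottom: (t, 0), t: 0 → 1
       + edge(sp.Integer(1), t, 0, 1)   # right:  (1, t), t: 0 → 1
       + edge(t, sp.Integer(1), 1, 0)   # top:    (t, 1), t: 1 → 0
       + edge(sp.Integer(0), t, 1, 0))  # left:   (0, t), t: 1 → 0

assert sp.simplify(lhs - rhs) == 0
```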
Let ω = F_1 dydz + F_2 dzdx + F_3 dxdy. Then dω = (div F) dxdydz. It remains to see why ∫_{∂Ω} ω = ∫_{∂Ω} ⟨n, F⟩ dA.
Evaluating forms. We explain what it means to take ω(v_1, . . . , v_k), where ω is a k-form and the v_i are tangent vectors. Let V be a finite-dimensional vector space. There exists a map:

    (∧^k V^*) × (V × · · · × V) → R,
    (f_1 ∧ · · · ∧ f_k, (v_1, . . . , v_k)) ↦ Σ (−1)^σ f_1(v_{i_1}) . . . f_k(v_{i_k}),

where the sum ranges over all permutations (i_1, . . . , i_k) of (1, . . . , k) and σ is the number of transpositions required to pass from (1, . . . , k) to (i_1, . . . , i_k). Note that this alternating sum is necessary for the map to be well-defined.
Example: Let ω = F_1 dy ∧ dz + F_2 dz ∧ dx + F_3 dx ∧ dy. Then ω(∂/∂x, ∂/∂z) = −F_2.
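The alternating-sum recipe above can be implemented directly; the sketch below (with arbitrary sample values for F_1, F_2, F_3) recovers ω(∂/∂x, ∂/∂z) = −F_2.

```python
import itertools

def perm_sign(perm):
    # Sign of a permutation via counting inversions.
    inv = sum(1 for i in range(len(perm)) for j in range(i + 1, len(perm))
              if perm[i] > perm[j])
    return -1 if inv % 2 else 1

def eval_form(covectors, vectors):
    """Evaluate f1 ∧ ... ∧ fk on (v1, ..., vk) via the alternating sum
    over permutations; covectors and vectors are coordinate tuples."""
    k = len(covectors)
    total = 0
    for perm in itertools.permutations(range(k)):
        term = perm_sign(perm)
        for fcov, i in zip(covectors, perm):
            term *= sum(fc * vc for fc, vc in zip(fcov, vectors[i]))
        total += term
    return total

# Coordinate covectors dx, dy, dz and vectors ∂/∂x, ∂/∂z on R^3.
dx, dy, dz = (1, 0, 0), (0, 1, 0), (0, 0, 1)
ex, ez = (1, 0, 0), (0, 0, 1)

F1, F2, F3 = 2.0, 5.0, 7.0   # arbitrary sample coefficients
val = (F1 * eval_form([dy, dz], [ex, ez])
       + F2 * eval_form([dz, dx], [ex, ez])
       + F3 * eval_form([dx, dy], [ex, ez]))
assert val == -F2
```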
Interior product. We can define the interior product as follows: iv : ∧k V ∗ → ∧k−1 V ∗ , iv ω =
ω(v, ·, ·, . . . , ·). (Insert v into the first slot to get a (k − 1)-form.)
Example: On R3 , let η = dxdydz. Also let n be the unit normal vector to ∂Ω. Then, along ∂Ω we
can define in η = n1 dydz + n2 dzdx + n3 dxdy.
Why is this dA? At any point of p ∈ ∂Ω, take tangent vectors v1 , v2 of ∂Ω so that n, v1 , v2
is an oriented orthonormal basis. Then the area form dA should evaluate to 1 on v1 , v2 . Since
η(n, v1 , v2 ) = 1 (since η is just the determinant), we see that dA = in η.
Explanation of ⟨n, F⟩ dA = F_1 dydz + F_2 dzdx + F_3 dxdy: note that i_F η = F_1 dydz + F_2 dzdx + F_3 dxdy as well. But now, i_F η(v_1, v_2) = η(F, v_1, v_2) = η(⟨n, F⟩n, v_1, v_2) = ⟨n, F⟩ dA(v_1, v_2), since the component of F tangent to ∂Ω lies in the span of v_1, v_2 and so contributes zero to the determinant.
24.2. Evaluating cohomology classes. In this section let M be a compact oriented n-manifold
without boundary.
Lemma 24.2. There exists a well-defined, nonzero map ∫ : H^n(M) → R which sends [ω] ↦ ∫_M ω.
Proof. Given ω ∈ Ω^n(M), we map ω ↦ ∫_M ω. Note that every n-form ω is closed. To show the map descends to a map ∫ : H^n(M) → R, let ω be an exact form, i.e., ω = dη. Then

    ∫_M ω = ∫_M dη = ∫_{∂M} η = 0.

Next we prove the nontriviality of ∫: if ω is an orientation form (i.e., ω is nowhere zero), then ∫_M ω > 0 or < 0, since on each oriented coordinate chart ω is some positive function times dx_1 . . . dx_n.
The lemma implies that dim H n (M ) ≥ 1. In fact, we have the following, which will be proved
in a later lecture.
Theorem 24.3. H n (M ) ' R.
Proof.

    ∫_M φ_1^* ω − ∫_M φ_0^* ω = ∫_{∂(M×[0,1])} Φ^* ω = ∫_{M×[0,1]} d(Φ^* ω) = ∫_{M×[0,1]} Φ^*(dω) = 0,

since ω is closed.
The claim implies that for a sufficiently small open set Vy containing y, φ−1 (Vy ) is a finite
disjoint union of open sets Ux1 , . . . , Uxk , each of which is diffeomorphic to Vy .
Definition 25.4. The degree of a smooth map φ : M → N between compact oriented n-manifolds
is the sum of orientation numbers ±1 for each xi in the preimage of a regular value y. Here the
∂f/∂x_1(p) = · · · = ∂f/∂x_m(p) = 0, but some ∂²f/∂x_i∂x_j(p) ≠ 0. Without loss of generality assume that ∂²f/∂x_1∂x_j(p) ≠ 0. Then consider the map

    h : R^m → R^m,  (x_1, . . . , x_m) ↦ (∂f/∂x_j, x_2, . . . , x_m).

Since the Jacobian is invertible at p, by the Inverse Function Theorem the map h restricts to a diffeomorphism V → V′, where V, V′ are open sets containing p, h(p). Let (x̃_1, . . . , x̃_m) = (∂f/∂x_j, x_2, . . . , x_m) be coordinates on V′. Observe that:
• {critical values of f : V → R} = {critical values of f ∘ h^{-1} : V′ → R}; and
• {critical values of f ∘ h^{-1} : V′ → R} ⊂ {critical values of f ∘ h^{-1} : {x̃_1 = 0} → R}.
We can then apply the inductive assumption to f ∘ h^{-1}, whose domain is (m − 1)-dimensional.
Step 2. Similar to Step 1.
Step 3. Let p ∈ C_k, k ≥ m. In local coordinates we may assume f : [−1/2, 1/2]^m → R and p = 0. Taylor's Theorem with remainder gives us

    f(x + h) = f(x) + R(x; h),

where |R(x; h)| ≤ C|h|^{k+1} for all x ∈ C_k ∩ [−1/2, 1/2]^m and |h| < δ, where δ > 0 is small.

We cover [−1/2, 1/2]^m by cubes of side length δ, with δ > 0 small. For this we need roughly 1/δ^m cubes. Consider one such cube Q which nontrivially intersects C_k. Then its volume is δ^m, whereas its image under f lies in an interval of length on the order of δ^{k+1} by Taylor's Theorem. Adding up, the total measure of the image is at most C (1/δ^m) δ^{k+1} = C δ^{k+1−m}, which can be made arbitrarily small (since k + 1 > m) by choosing δ small.
27. Degree

27.1. H^n(M) of an n-dimensional manifold.

Theorem 27.1. If M is an oriented, compact n-manifold (without boundary), then ∫ : H^n(M) → R is an isomorphism.

Proof. By Lemma 24.2, ∫ : H^n(M) → R is well-defined and nonzero. It remains to show that if ∫ ω = 0, then ω is exact. Let {U_i} be a cover of M which is finite and has the property that every U_i is diffeomorphic to R^n. Using a partition of unity {f_i} subordinate to {U_i}, we can split ω into the sum Σ_i ω_i, where ω_i is supported inside U_i. Note that ∫_{U_i} ω_i may not be zero.
Lemma 27.2. If ω is an n-form with compact support and zero integral on R^n, then ω = dη, where η has compact support.

Proof. We will prove this for n = 2. Write ω = f(x, y) dx dy. Define g(x) = ∫_{−∞}^∞ f(x, y) dy. By Fubini's theorem and the hypothesis that ∫_{R^2} ω = 0, we have ∫_{−∞}^∞ g(x) dx = 0. Define G(x, y) = ε(y)g(x), where ε(y) is a bump function with total integral 1. Then write:

    η = −( ∫_{−∞}^y [f(x, t) − G(x, t)] dt ) dx + ( ∫_{−∞}^x G(t, y) dt ) dy.

Clearly, dη = [f(x, y) − G(x, y)] dx dy + G(x, y) dx dy = ω, and η has compact support.
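The construction of η in the proof can be checked symbolically. The f below is a hypothetical rapidly decaying example rather than a compactly supported one; the algebra of the construction is identical.

```python
import sympy as sp

x, y, t = sp.symbols('x y t', real=True)

# Hypothetical test data: f decays rapidly and has total integral 0 on R^2.
f = x * (1 + y) * sp.exp(-x**2 - y**2)

g = sp.integrate(f.subs(y, t), (t, -sp.oo, sp.oo))     # g(x) = ∫ f(x, y) dy
assert sp.integrate(g, (x, -sp.oo, sp.oo)) == 0        # hypothesis: ∫∫ f = 0

eps = sp.exp(-y**2) / sp.sqrt(sp.pi)                   # bump with total integral 1
G = eps * g

# η = P dx + Q dy as in the proof:
P = -sp.integrate((f - G).subs(y, t), (t, -sp.oo, y))
Q = sp.integrate(G.subs(x, t), (t, -sp.oo, x))

# dη = (∂Q/∂x − ∂P/∂y) dx∧dy should recover ω = f dx∧dy.
assert sp.simplify(sp.diff(Q, x) - sp.diff(P, y) - f) == 0
```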
27.2. Relationship to the degree. Recall the definition of degree: Let φ : M → N be a smooth map between compact, oriented manifolds without boundary of dimension n. By Sard's Theorem, the set of regular values of φ has full measure in N. Let y ∈ N be a regular value and let φ^{-1}(y) = {x_1, . . . , x_k}. Then deg(φ) = Σ_{i=1}^k ±1, where the contribution is +1 if φ is orientation-preserving near x_i and −1 otherwise.
We will explain why the degree is well-defined. The map φ : M → N induces a map φ^* : H^n(N) → H^n(M) and we have a commutative diagram:

    H^n(N) --φ^*--> H^n(M)
       |∫              |∫
       v               v
       R -----c-----> R

where the map R → R is multiplication by some real number c. The following proposition calculates this constant c to be deg φ:
Proof. Once we prove Equation (10) for a suitable ω of our choice, the proposition follows. Take ω to be supported on a small open disk V_y about y, with positive integral. Then ∫_M φ^* ω is the sum of the ∫_{U_{x_i}} φ^* ω, where the U_{x_i} are the connected components of the preimage of V_y. Noting that φ is a diffeomorphism from U_{x_i} to V_y, we have ∫_{U_{x_i}} φ^* ω = ± ∫_{V_y} ω, depending on whether the orientations agree or not. This proves Equation (10).
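The two descriptions of the degree — the constant c on top cohomology and the signed preimage count — can be compared numerically for a circle map. The map below, given by the lift F(θ) = 3θ + 1.5 sin(2θ), is a hypothetical degree-3 example.

```python
import numpy as np

# Hypothetical example: a degree-3 circle map written via its lift
# F(θ) = 3θ + 1.5 sin(2θ), which satisfies F(θ + 2π) = F(θ) + 3·2π.
F = lambda th: 3*th + 1.5*np.sin(2*th)
dF = lambda th: 3 + 3*np.cos(2*th)

theta = np.linspace(0.0, 2*np.pi, 200001)

# (a) Degree as the factor c on top cohomology: (1/2π) ∫ φ*(dθ),
# computed by the midpoint rule.
mid = 0.5 * (theta[:-1] + theta[1:])
deg_integral = np.sum(dF(mid)) * (theta[1] - theta[0]) / (2*np.pi)

# (b) Degree as a signed count of preimages of a regular value y:
# find solutions of F(θ) ≡ y (mod 2π) via sign changes, discarding the
# artificial jumps of the mod-2π reduction, and sum the signs of F'.
y = 1.0
h = (F(theta) - y + np.pi) % (2*np.pi) - np.pi
crossing = (np.diff(np.sign(h)) != 0) & (np.abs(np.diff(h)) < 1.0)
signed_count = int(sum(np.sign(dF(theta[i])) for i in np.where(crossing)[0]))

assert abs(deg_integral - 3) < 1e-4
assert signed_count == 3
```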
28. Lie derivatives

We define the interior product on the linear algebra level as follows: If v ∈ V, then i_v : ∧^k V^* → ∧^{k−1} V^* is given by:

    f_1 ∧ · · · ∧ f_k ↦ Σ_l (−1)^{l+1} f_l(v) f_1 ∧ · · · ∧ f̂_l ∧ · · · ∧ f_k,

where f̂_l means that the factor f_l is omitted.
Suppose Φ : M × [−1, 1] → M is a smooth map such that φ_t(x) := Φ(x, t), t ∈ [−1, 1], is a diffeomorphism for each t. Assume in addition that φ_0 = id. Then

    (d/dt) φ_t^* ω |_{t=0}

is a derivation (verification is easy). If f ∈ Ω^0(M), then

    (d/dt) φ_t^* f |_{t=0} = (d/dt) f(φ_t) |_{t=0} = df(X) = X(f),

where X is the vector field such that X(x), x ∈ M, is the tangent vector at t = 0 of the arc t ↦ φ_t(x), t ∈ [−1, 1], passing through x. We will often write X = (dφ_t/dt)|_{t=0}.
Proposition 28.3 (Cartan formula). If φ_t : M → M is a 1-parameter family of diffeomorphisms such that φ_0 = id, then (d/dt) φ_t^* |_{t=0} : Ω^k(M) → Ω^k(M) is given by d ∘ i_X + i_X ∘ d, where X = (dφ_t/dt)|_{t=0}.
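Cartan's formula can be verified symbolically for 1-forms on R^2, comparing d ∘ i_X + i_X ∘ d with the standard coordinate formula (L_X ω)_i = X^j ∂_j ω_i + ω_j ∂_i X^j; the particular X and ω below are hypothetical test data.

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)

# Hypothetical test data on R^2: X = a ∂x + b ∂y and ω = P dx + Q dy.
a, b = y**2, sp.sin(x)
P, Q = x*y, sp.exp(x) + y

# Coordinate formula for the Lie derivative of a 1-form:
# (L_X ω)_i = X^j ∂_j ω_i + ω_j ∂_i X^j.
LX_P = a*sp.diff(P, x) + b*sp.diff(P, y) + P*sp.diff(a, x) + Q*sp.diff(b, x)
LX_Q = a*sp.diff(Q, x) + b*sp.diff(Q, y) + P*sp.diff(a, y) + Q*sp.diff(b, y)

# Cartan's formula: L_X ω = d(i_X ω) + i_X(dω), where dω = curl dx∧dy
# and i_X(dx∧dy) = a dy − b dx.
iXw = a*P + b*Q                          # i_X ω (a function)
curl = sp.diff(Q, x) - sp.diff(P, y)     # coefficient of dω
cartan_P = sp.diff(iXw, x) - b*curl      # dx-component of d(i_Xω) + i_X dω
cartan_Q = sp.diff(iXw, y) + a*curl      # dy-component

assert sp.simplify(cartan_P - LX_P) == 0
assert sp.simplify(cartan_Q - LX_Q) == 0
```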
Next we say two maps φ_0, φ_1 : M → N are (smoothly) homotopic if there exists a smooth map Φ : M × [0, 1] → N with φ_t(·) = Φ(·, t) for t = 0, 1. The family φ_t is said to be a homotopy from φ_0 to φ_1.
Proposition 29.2 (Homotopy invariance). If φ_t : M → N, t ∈ [0, 1], is a homotopy from φ_0 to φ_1, then φ_t^* : H^k(N) → H^k(M) is independent of t.
Proof. Consider Φ : M × R → N. (The homotopy Φ : M × [0, 1] → N can be extended to Φ : M × R → N; this is easy, for example, when M is compact.) We have inclusions i_t : M → M × R, x ↦ (x, t),
and clearly φt = Φ ◦ it . Now take a diffeomorphism Ψt : M × R → M × R, (x, s) 7→ (x, s + t).
Since it = Ψt ◦ i0 , i∗t = i∗0 ◦ Ψ∗t . By the previous proposition, Ψ∗t is independent of t on the level
of cohomology. Hence so are i∗t and φ∗t .
Corollary 29.4 (Poincaré lemma). H^k_{dR}(R^n) = 0 if k > 0.
Proof. We will show that Rn is homotopy equivalent to R0 = {pt}. Consider maps φ : Rn → R0 ,
(x1 , . . . , xn ) 7→ 0, and ψ : R0 → Rn , 0 7→ (0, . . . , 0). Clearly, φ ◦ ψ : R0 → R0 , 0 7→ 0, is the
identity map. Next, ψ ∘ φ : R^n → R^n, (x_1, . . . , x_n) ↦ 0, is homotopic to the identity map. In fact, consider F : R^n × [0, 1] → R^n, ((x_1, . . . , x_n), t) ↦ (tx_1, . . . , tx_n), which equals ψ ∘ φ at t = 0 and the identity at t = 1.
30.1. Lie brackets. Given two vector fields X and Y on M, viewed as derivations at each p ∈ M, we can define their Lie bracket [X, Y] = XY − YX, i.e., for f ∈ C^∞(M),

    [X, Y](f) = X(Yf) − Y(Xf).
Proposition 30.1. [X, Y ] is also a derivation, hence is a vector field.
Proof. This is a local computation. Take X = Σ_i a_i ∂/∂x_i and Y = Σ_j b_j ∂/∂x_j. Then:

    [X, Y](f) = Σ_i a_i ∂/∂x_i ( Σ_j b_j ∂f/∂x_j ) − Σ_j b_j ∂/∂x_j ( Σ_i a_i ∂f/∂x_i )
              = Σ_{ij} a_i (∂b_j/∂x_i)(∂f/∂x_j) − Σ_{ij} b_j (∂a_i/∂x_j)(∂f/∂x_i),

since the second-order terms cancel. In other words,

    [ Σ_i a_i ∂/∂x_i , Σ_j b_j ∂/∂x_j ] = Σ_{ij} ( a_j ∂b_i/∂x_j − b_j ∂a_i/∂x_j ) ∂/∂x_i.
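The coordinate formula for [X, Y] can be confirmed symbolically; the coefficients a_i, b_i below are hypothetical test data.

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2', real=True)
f = sp.Function('f')(x1, x2)
coords = [x1, x2]

# Hypothetical test data: X = a1 ∂1 + a2 ∂2, Y = b1 ∂1 + b2 ∂2 on R^2.
a = [x2, sp.sin(x1)]
b = [x1*x2, sp.exp(x2)]

def apply_vf(c, g):
    """Apply the vector field with coefficient list c to a function g."""
    return sum(ci * sp.diff(g, xi) for ci, xi in zip(c, coords))

# Left side: X(Y f) − Y(X f); the second-order terms cancel.
lhs = apply_vf(a, apply_vf(b, f)) - apply_vf(b, apply_vf(a, f))

# Right side: the first-order coordinate formula for [X, Y] applied to f.
bracket = [apply_vf(a, b[i]) - apply_vf(b, a[i]) for i in range(2)]
rhs = apply_vf(bracket, f)

assert sp.simplify(sp.expand(lhs - rhs)) == 0
```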
For the Fundamental Theorem of ODEs we need X to be at least C^1. The proof of this theorem will be omitted. We also write φ_t(x) := Φ(x, t) and refer to φ_t as the time-t flow of X. Note that φ_0 = id.
Remarks.
(1) If M is compact without boundary and X is a vector field on M , then there exists a global
flow Φ : M × R → M with φ0 = id by the uniqueness of the flow of X, where defined.
Since M is compact, the finite covering property ensures that we may choose ε to work for
all the open sets U . If we know that there is a flow for a short time ε, we can repeat the
flow and obtain a flow for an arbitrarily long time.
(2) However, if M is not compact, then there are vector fields X which do not admit global
short-time flows Φ : M × (−ε, ε) → M . (See the example below.)
(3) φ_s ∘ φ_t = φ_{s+t} and φ_t^{-1} = φ_{−t}. In particular, on a compact M, {φ_t}_{t∈R} forms a 1-parameter group of diffeomorphisms, i.e., t ↦ φ_t is a homomorphism from the additive group R to Diff(M).
Example: On M = R − {0} consider X = ∂/∂x. The vector field X, considered as a vector field on R, clearly integrates to Φ : R × R → R, (x, t) ↦ x + t. However, when {0} is removed, no matter how small an ε you take, there is no flow Φ : (R − {0}) × (−ε, ε) → R − {0}, since points just to the left of 0 would have to flow through the puncture.
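By contrast, a sketch of a vector field that does admit a global flow: for X = x ∂/∂x on R the flow is φ_t(x) = e^t x, and the flow equation and the group law of Remark (3) can be checked symbolically.

```python
import sympy as sp

x, s, t = sp.symbols('x s t', real=True)

# Hypothetical example: X = x ∂/∂x on R with global flow φ_t(x) = e^t x.
phi = lambda tt, xx: sp.exp(tt) * xx

# φ_t solves the flow ODE d/dt φ_t(x) = X(φ_t(x)) with φ_0 = id:
assert sp.simplify(sp.diff(phi(t, x), t) - phi(t, x)) == 0
assert phi(0, x) == x

# Group law φ_s ∘ φ_t = φ_{s+t}, hence φ_t^{-1} = φ_{−t}:
assert sp.simplify(phi(s, phi(t, x)) - phi(s + t, x)) == 0
assert sp.simplify(phi(-t, phi(t, x)) - x) == 0
```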
Corollary 30.4. If X(p) ≠ 0, then there exist coordinates (x_1, . . . , x_n) near p such that X = ∂/∂x_1.
Proof. If M is n-dimensional, choose an (n − 1)-manifold Σ, defined in a neighborhood of p,
which is transverse to X. Here Σ is transverse to X if Tq Σ and X(q) span Tq M at all q ∈ Σ.
(Why does such a Σ exist?) Now take ψ : Σ × (−ε, ε) → M given by the flow Φ of X restricted
to Σ. Since Σ is transverse to X, ψ is a diffeomorphism near p by the inverse function theorem. If
x_2, . . . , x_n are coordinates on Σ and x_1 is the coordinate on (−ε, ε), then X can be written as ∂/∂x_1.
31.2. Lie derivatives. Let X be a vector field on M. Assume there exists a flow Φ : M × (−ε, ε) → M, φ_t(x) = Φ(x, t), such that φ_0(x) = x. (Since we will be considering derivatives, local flows suffice, but we will assume a global flow for ease of notation.)

The Lie derivative L_X on forms ω is given by:

    L_X ω = (d/dt) φ_t^* ω |_{t=0}.

Lie derivatives can be defined on vector fields Y as well:

    L_X Y := (d/dt) (φ_{−t})_* Y |_{t=0}.
Here, vector fields cannot usually be pulled back, but for a diffeomorphism φ, there is a suitable
substitute, namely (φ−1 )∗ .
More generally, it is easy to see that LX can be defined on any section of ∧k T ∗ M ⊗ ∧l T M .
Properties of L_X:
(1) L_X f = Xf.
(2) L_X ω = (d ∘ i_X + i_X ∘ d) ω.
(3) L_X (ω(X_1, . . . , X_k)) = (L_X ω)(X_1, . . . , X_k) + Σ_i ω(X_1, . . . , L_X X_i, . . . , X_k).
(4) L_X Y = [X, Y].
Proof. (1) and (2) are already proven. (3) is left for homework. (For example, you can do this in local coordinates; unwinding the definitions needs to be done carefully.) We will prove (4), assuming (1), (2), (3). We compute:

    X(Y(f)) = L_X(Y f)
            = L_X(df(Y))
            = (L_X df)(Y) + df(L_X Y)
            = (d ∘ L_X f)(Y) + df(L_X Y)
            = d(X(f))(Y) + (L_X Y)(f)
            = Y(X(f)) + (L_X Y)(f).
Therefore, (LX Y )f = X(Y (f )) − Y (X(f )) = [X, Y ](f ).
31.3. Interpretation of L_X Y = [X, Y]. As before, X, Y may not have global flows, but for simplicity let us assume they do. Let φ_s : M → M, s ∈ R, be the 1-parameter group of diffeomorphisms generated by X and ψ_t : M → M, t ∈ R, be the 1-parameter group of diffeomorphisms generated by Y. Noting that Y(x) = lim_{t→0} (ψ_t(x) − x)/t, we have
Here X̂_i means omit the term X_i. The proof is an exercise.
32.2. Distributions. Recall from Corollary 30.4 that if X is a vector field with X(p) ≠ 0, then there exist local coordinates (x_1, . . . , x_n) near p such that X = ∂/∂x_1. Can we generalize this? If X, Y are two vector fields such that X(p), Y(p) span a 2-dimensional subspace of T_p M, then the span of X(x) and Y(x) is a 2-plane field for every x in a neighborhood of p.
Let M be an n-dimensional manifold.
Definition 32.2.
(1) A k-dimensional distribution D is a smooth choice of a k-dimensional subspace D_p ⊂ T_p M at every point p ∈ M. By a smooth choice we mean that for each p ∈ M there exist a small neighborhood U_p of p and k linearly independent vector fields X_1, . . . , X_k on U_p which span D_q for each q ∈ U_p. Alternatively, D is a rank k subbundle of TM.
(2) An integral submanifold N of M is a submanifold where T_p N ⊂ D_p at every p ∈ N. dim N is not necessarily equal to dim D, but dim N ≤ dim D.
(3) A distribution D is integrable if M is covered by local coordinate charts (x_1^α, . . . , x_n^α) such that D = R⟨∂/∂x_1^α, . . . , ∂/∂x_k^α⟩. Equivalently, D is integrable if there locally exist functions f_1, . . . , f_{n−k} such that {f_1 = const, . . . , f_{n−k} = const} are integral submanifolds of D and the f_i are independent, i.e., df_1 ∧ · · · ∧ df_{n−k} ≠ 0.
Suppose for all X, Y ∈ Γ(D) we have [X, Y] ∈ Γ(D). We will find coordinates x_1, . . . , x_n so that D = R⟨∂/∂x_1, . . . , ∂/∂x_k⟩. Note that all our computations are local, so we restrict to M = R^n. We will first do a slightly easier situation.
Proof. We will deal with the case where dim D = 2 and M = R^3. Suppose [X, Y] = 0. Using the fundamental theorem of ODEs, we can write X = ∂/∂x_1. Then Y = Σ_{i=1}^3 b_i ∂/∂x_i, and [X, Y] = 0 implies that ∂b_i/∂x_1 = 0, i.e., b_i = b_i(x_2, x_3) (there is no dependence on x_1). Now take Y′ = Y − b_1 X = b_2(x_2, x_3) ∂/∂x_2 + b_3(x_2, x_3) ∂/∂x_3. If we project to R^2 with coordinates x_2, x_3, then Y′ can be integrated to ∂/∂x_2′, after a possible change of coordinates. Therefore, D = R⟨∂/∂x_1, ∂/∂x_2′⟩.
Now consider the general case [X, Y] = AX + BY for functions A, B, where as before X = ∂/∂x_1 and, after subtracting a multiple of X, Y = b_2 ∂/∂x_2 + b_3 ∂/∂x_3. Then

    [X, Y] = (∂b_2/∂x_1) ∂/∂x_2 + (∂b_3/∂x_1) ∂/∂x_3 = A ∂/∂x_1 + B b_2 ∂/∂x_2 + B b_3 ∂/∂x_3.

This implies: A = 0, ∂b_2/∂x_1 = B b_2, ∂b_3/∂x_1 = B b_3. Hence,

    b_2 = f(x_2, x_3) e^{∫_{t=0}^{t=x_1} B(t, x_2, x_3) dt},   b_3 = g(x_2, x_3) e^{∫_{t=0}^{t=x_1} B(t, x_2, x_3) dt}.

Therefore, Y = e^{∫B} (f(x_2, x_3) ∂/∂x_2 + g(x_2, x_3) ∂/∂x_3), and by rescaling Y we get Y′ = f(x_2, x_3) ∂/∂x_2 + g(x_2, x_3) ∂/∂x_3. As before, Y′ can now be integrated to give ∂/∂x_2′.
33.2. Restatement in terms of forms. If D has rank k on M of dimension n, then locally there exist 1-forms ω_1, . . . , ω_{n−k} such that D = {ω_1 = · · · = ω_{n−k} = 0}.

Proposition 33.3. D is integrable if and only if dω_i = Σ_{j=1}^{n−k} θ_{ij} ∧ ω_j, where the θ_{ij} are 1-forms.
Proof. We use the identity

(11)    dω(X, Y) = X ω(Y) − Y ω(X) − ω([X, Y]).

Let X, Y ∈ Γ(D). Then the identity simplifies to:

(12)    dω(X, Y) = −ω([X, Y]).

Suppose dω_i = Σ_{j=1}^{n−k} θ_{ij} ∧ ω_j. Then dω_i(X, Y) = 0. Hence ω_i([X, Y]) = 0 for all i by Equation (12), which implies that [X, Y] ∈ Γ(D).
Suppose D is integrable. Complete ω_1, . . . , ω_{n−k} to a (pointwise) basis by adding η_1, . . . , η_k. Then we can write

    dω_i = Σ_{j<l} a^i_{jl} ω_j ∧ ω_l + Σ_{j=1}^k Σ_{l=1}^{n−k} b^i_{jl} η_j ∧ ω_l + Σ_{j<l} c^i_{jl} η_j ∧ η_l.

By Equation (12), dω_i(X, Y) = 0 for X, Y ∈ Γ(D). Taking X_1, . . . , X_k ∈ Γ(D) dual to η_1, . . . , η_k, we find that dω_i(X_r, X_s) = c^i_{rs} (or −c^i_{sr}). This proves that all the c^i_{jl} are zero, so dω_i has the desired form.
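In the corank-one case (n − k = 1, a single 1-form ω), the condition dω = θ ∧ ω is equivalent to ω ∧ dω = 0, which on R^3 reduces to ⟨(P, Q, R), curl(P, Q, R)⟩ = 0. A minimal symbolic sketch, with the standard contact form as a non-integrable example:

```python
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)

def nonintegrability(P, Q, R):
    """Coefficient of dx∧dy∧dz in ω∧dω for ω = P dx + Q dy + R dz,
    i.e. the dot product of (P, Q, R) with its curl; it vanishes
    identically iff the kernel of ω is an integrable 2-plane field."""
    curl = (sp.diff(R, y) - sp.diff(Q, z),
            sp.diff(P, z) - sp.diff(R, x),
            sp.diff(Q, x) - sp.diff(P, y))
    return sp.simplify(P*curl[0] + Q*curl[1] + R*curl[2])

# ω = dz: kernel is the integrable plane field tangent to {z = const}.
assert nonintegrability(sp.Integer(0), sp.Integer(0), sp.Integer(1)) == 0

# ω = dz − y dx (the standard contact form): ω∧dω = dx∧dy∧dz ≠ 0,
# so its kernel is a nowhere-integrable 2-plane field.
assert nonintegrability(-y, sp.Integer(0), sp.Integer(1)) != 0
```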
34. Connections
34.1. Definition. Let E be a rank k vector bundle over M and let s be a section of E. s may
be local (i.e., in Γ(E, U )) or global (i.e., in Γ(E, M )). Also let X be a vector field. We want to
differentiate s at p ∈ M in the direction of X(p) ∈ Tp M .
Definition 34.1. A connection or covariant derivative ∇ assigns to every vector field X ∈ X(M )
a differential operator ∇X : Γ(E) → Γ(E) which satisfies:
(1) ∇X s is R-linear in s, i.e., ∇X (c1 s1 + c2 s2 ) = c1 ∇X s1 + c2 ∇X s2 if c1 , c2 ∈ R.
(2) ∇X s is C ∞ (M )-linear in X, i.e., ∇f X+gY s = f ∇X s + g∇Y s.
(3) (Leibniz rule) ∇X (f s) = (Xf )s + f ∇X s.
Note: The definition of connection is tensorial in X (condition (2)), so (∇X s)(p) depends on s
near p but only on X at p.
34.2. Flat connections. We will now present the first example of a connection.

A vector bundle E of rank k is said to be trivial or parallelizable if there exist sections s_1, . . . , s_k ∈ Γ(E, M) which span E_p at every p ∈ M. Although not every vector bundle is parallelizable, locally every vector bundle is trivial since E|_U ≃ U × R^k. We will now construct connections on trivial bundles.

Write any section s as s = Σ_i f_i s_i, where f_i ∈ C^∞(M). Then define

    ∇_X s = Σ_i (X f_i) s_i = (X f_1) s_1 + · · · + (X f_k) s_k ∈ Γ(E).
Proposition 34.2. Any two covariant constant frames s_1, . . . , s_k and s′_1, . . . , s′_k differ by a constant element of GL(k, R).

Proof. Let s′_1, . . . , s′_k be another covariant constant frame, i.e., ∇_X s′_i = 0 for all X. Writing

    s′_i = Σ_j f_{ij} s_j,

we get 0 = ∇_X s′_i = Σ_j (X f_{ij}) s_j. This proves that X f_{ij} = 0 for all X and hence f_{ij} = const.
Next, if E and F are vector bundles over M, then we can define E ⊕ F as follows:
(1) The fiber (E ⊕ F)_m over m ∈ M is E_m ⊕ F_m.
(2) Take U ⊂ M small enough so that E|_U ≃ U × R^k and F|_U ≃ U × R^l. Then we set (E ⊕ F)|_U ≃ U × (R^k ⊕ R^l).
E ⊗ F is defined similarly.
35.2. Existence. Let M be an n-dimensional manifold and E be a rank k vector bundle over M .
Recall a connection ∇ is a way of differentiating sections of E in the direction of a vector field X.
∇X : Γ(E) → Γ(E),
∇X (f s) = (Xf )s + f ∇X s.
Definition 35.1. A connection ∇ on E is flat if there exists an open cover {Uα } of M such that
E|Uα admits a covariant constant frame s1 , . . . , sk .
Proposition 35.2. Connections exist on any vector bundle E over M .
Note that if E is parallelizable we have already defined connections globally on E. The key point
is to pass from local to global when E is not globally trivial.
Let ∇′ and ∇″ be two connections on E|_U. Let us see whether ∇′ + ∇″ is a connection.

    (∇′_X + ∇″_X)(f s) = ∇′_X(f s) + ∇″_X(f s)
                       = (X f)s + f ∇′_X s + (X f)s + f ∇″_X s
                       = 2(X f)s + f (∇′_X + ∇″_X)s.

This is not quite a connection, since 2(X f) should be X f instead. However, a simple modification presents itself:

Lemma 35.3. Suppose λ_1, λ_2 ∈ C^∞(U) satisfy λ_1 + λ_2 = 1. Then λ_1 ∇′ + λ_2 ∇″ is a connection on E|_U.
Proof. HW.
Proof of Proposition 35.2. Let {U_α} be an open cover such that E|_{U_α} is trivial. Let ∇^α be a flat connection on E|_{U_α} associated to some trivialization. Next let {f_α} be a partition of unity subordinate to {U_α}. Then form Σ_α f_α ∇^α. By the previous lemma (extended to arbitrary finite sums), the Leibniz rule is satisfied.
Remark: Although each of the pieces ∇α is flat before patching, the patching destroys flatness.
There is no guarantee that (even locally) there exist sections s1 , . . . , sk which are covariant con-
stant. In fact, for a generic connection, there is not even a single covariant constant section. One
way of measuring the failure of the existence of covariant constant sections is the curvature.
35.3. The space of connections. Given two connections ∇ and ∇′, we compute their difference:

    (∇_X − ∇′_X)(f s) = f (∇_X − ∇′_X)s.

Therefore, the difference of two connections is tensorial in s.

Locally, take sections s_1, . . . , s_k (not necessarily covariant constant). Then (∇_X − ∇′_X)s_i = Σ_j a_{ij} s_j, where (a_{ij}) is a k × k matrix of functions. In other words, ∇ − ∇′ can be represented by a matrix A = (A_{ij}) of 1-forms A_{ij}; here a_{ij} = A_{ij}(X). Hence, locally it makes sense to write:

    ∇ = d + A.

Here s = Σ_i f_i s_i corresponds to (f_1, . . . , f_k)^T and, more precisely,

    ∇(f_1, . . . , f_k)^T = d(f_1, . . . , f_k)^T + A(f_1, . . . , f_k)^T.

Globally, ∇ − ∇′ is a section of T^*M ⊗ End(E). Here End(E) = Hom(E, E). The space of such sections is often written as Ω^1(End(E)) and a section is called a “1-form with values in End(E)”. This proves:
Proposition 35.4. The space of connections on E is an affine space Ω1 (End(E)).
Remark: We view Ω1 (End(E)) not as a vector space (which has a preferred zero element) but
rather as an affine space, which is the same thing except for the lack of a preferred zero element.
36. Curvature
Let E → M be a rank r vector bundle and ∇ be a connection on E.
Definition 36.1. The curvature R∇ (or simply R) of a connection ∇ is given by:
R(X, Y ) = [∇X , ∇Y ] − ∇[X,Y ] = ∇X ∇Y − ∇Y ∇X − ∇[X,Y ] ,
or
R(X, Y )s = [∇X , ∇Y ]s − ∇[X,Y ] s.
Proposition 36.2.
(1) R(X, Y )s is tensorial, i.e., C ∞ (M )-linear, in each of X, Y , and s.
(2) R(X, Y ) = −R(Y, X).
Proof. (2) is easy. For (1), we will prove that R(X, Y ) is tensorial in s and leave the verification
for X and Y as an exercise.
    R(X, Y)(f s) = (∇_X ∇_Y − ∇_Y ∇_X)(f s) − ∇_{[X,Y]}(f s)
                 = ∇_X((Y f)s + f ∇_Y s) − ∇_Y((X f)s + f ∇_X s) − (([X, Y]f)s + f ∇_{[X,Y]} s)
                 = (XY f)s + (Y f)∇_X s + (X f)∇_Y s + f ∇_X ∇_Y s
                   − (Y X f)s − (X f)∇_Y s − (Y f)∇_X s − f ∇_Y ∇_X s
                   − ([X, Y]f)s − f ∇_{[X,Y]} s
                 = f R(X, Y)s.
Proposition 36.3. The flat connection ∇_X s = Σ_k (X f_k) s_k has R = 0. (Here s_1, . . . , s_r trivializes E|_U and s = Σ_k f_k s_k.)

Proof. Since R(X, Y) is tensorial, it suffices to compute it for X = ∂/∂x_i, Y = ∂/∂x_j (for which [X, Y] = 0):

    R(∂/∂x_i, ∂/∂x_j) s = (∇_{∂/∂x_i} ∇_{∂/∂x_j} − ∇_{∂/∂x_j} ∇_{∂/∂x_i}) Σ_k f_k s_k
                        = Σ_k ( ∇_{∂/∂x_i} (∂f_k/∂x_j) s_k − ∇_{∂/∂x_j} (∂f_k/∂x_i) s_k )
                        = Σ_k ( ∂²f_k/∂x_i ∂x_j − ∂²f_k/∂x_j ∂x_i ) s_k = 0.
Consider the sequence

    Ω^0(E) →^∇ Ω^1(E) →^∇ Ω^2(E) → · · · .

The first map is covariant differentiation (interpreted slightly differently). It turns out that this sequence is usually not a chain complex, i.e., ∇ ∘ ∇ ≠ 0 in general. In fact the obstruction to this being a chain complex is the curvature. Let us locally write:

    ∇ ∘ ∇ s = (d + A)(d + A)s = (d² + A d + dA + A ∧ A)s = (dA + A ∧ A)s.
Proposition 36.4. R = dA + A ∧ A, i.e., R(X, Y )s = (dA + A ∧ A)(X, Y )s.
Proof. It suffices to prove the proposition for X = ∂/∂x_i, Y = ∂/∂x_j, and s = s_k, where s_1, . . . , s_r is a local frame for E|_U. A is an r × r matrix of 1-forms, A = (A^t_{ij} dx_t). (We will use the Einstein summation convention: if the same index appears twice we assume it is summed over.) Then we compute:

(13)    ∇_{∂/∂x_i} ∇_{∂/∂x_j} s_k = ∇_{∂/∂x_i} (s_m A^j_{mk}) = s_m (∂A^j_{mk}/∂x_i) + s_n A^i_{nm} A^j_{mk}.

The computation of the rest is left as an exercise.
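The identity R = dA + A ∧ A can be verified symbolically for a hypothetical rank-2 example over R^2, comparing it against the commutator definition:

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)

# Hypothetical test data: a rank-2 bundle over R^2 with ∇ = d + A, where
# A = A1 dx + A2 dy for 2×2 matrices of functions A1, A2.
A1 = sp.Matrix([[0, x*y], [-x*y, 0]])
A2 = sp.Matrix([[x + y, 0], [0, sp.sin(x)]])
s = sp.Matrix([sp.exp(x*y), x**2 + y])   # a sample section

# Commutator definition (note [∂x, ∂y] = 0, so the ∇_[X,Y] term drops):
nabla_x = lambda v: sp.diff(v, x) + A1*v
nabla_y = lambda v: sp.diff(v, y) + A2*v
R_commutator = nabla_x(nabla_y(s)) - nabla_y(nabla_x(s))

# R = dA + A∧A: the dx∧dy-coefficient is ∂x A2 − ∂y A1 + A1·A2 − A2·A1.
R_formula = sp.diff(A2, x) - sp.diff(A1, y) + A1*A2 - A2*A1

assert all(sp.simplify(e) == 0 for e in (R_commutator - R_formula*s))
```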
Remark: There are lots of connections which are not flat, since it is easy to find A with dA + A ∧ A ≠ 0.
The pair (M, g) of a manifold M together with a Riemannian metric g on M is called a Riemannian
manifold.
Let i : N → (M, g) be an embedding or immersion. Then the induced Riemannian metric i∗ g on
N is defined as follows:
i∗ g(x)(v, w) = g(i(x))(i∗ v, i∗ w),
where v, w ∈ Tx N . The injectivity of i∗ is required for the positive definiteness.
The unique torsion-free, compatible connection is called the Levi-Civita connection for (M, g).
The verification is easy. The claim implies that ∇_X Y is simply (d/dt) Y(γ(t))|_{t=0}, where γ(t) is the arc representing X at a given point.
What we will do today is valid for hypersurfaces ((n − 1)-dimensional submanifolds) Σ inside (N, g) of dimension n, but we will restrict our attention to N = R^3 for simplicity.
Definition 38.1. Let X, Y be vector fields of R3 which are tangent to Σ, and let N be the unit
normal vector field to Σ inside R3 . The shape operator is S(X, Y ) = h∇X Y, N i. In other words,
S(X, Y ) is the projection in the N -direction of ∇X Y .
Proposition 38.2. S(X, Y ) is tensorial in X, Y and is symmetric.
Proof. S(X, Y ) = S(Y, X) follows from the torsion-free condition and the fact that [X, Y ] is
tangent to Σ. Now,
S(f X, Y ) = h∇f X Y, N i = hf ∇X Y, N i = f S(X, Y ).
Tensoriality in Y is immediate from the symmetric condition.
Remark: The shape operator is usually called the second fundamental form in classical differential
geometry and measures how curved a surface is. (In case you are curious what the first fundamental
form is, it’s simply the induced Riemannian metric.)
Also observe that S(X, Y) = ⟨∇_X Y, N⟩ = −⟨∇_X N, Y⟩, by differentiating ⟨Y, N⟩ = 0 (which holds since N is a normal vector and Y is tangent to Σ).
38.1. Induced connection vs. Levi-Civita. If X, Y ∈ X(M ), we can write:
∇X Y = ∇hX Y + S(X, Y )N,
where ∇hX Y denotes the projection of ∇X Y onto T Σ.
Proposition 38.3. ∇h = ∇, i.e., ∇h is the Levi-Civita connection of (Σ, g).
Proof. We have defined ∇hX Y = ∇X Y − S(X, Y )N . It is easy to verify that ∇h satisfies the
properties of a connection on Σ.
∇h is torsion-free:
∇hX Y − ∇hY X = (∇X Y − S(X, Y )N ) − (∇Y X − S(Y, X)N )
= ∇X Y − ∇ Y X
= [X, Y ]
∇^h is compatible with g:

    X⟨Y, Z⟩ = ⟨∇_X Y, Z⟩ + ⟨Y, ∇_X Z⟩ = ⟨∇^h_X Y, Z⟩ + ⟨Y, ∇^h_X Z⟩,

since ⟨N, Z⟩ = 0 for any vector field Z tangent to Σ.
It seems miraculous that somehow the induced connection is a Levi-Civita connection. Classically,
the induced covariant derivative came first, and Levi-Civita came as an abstraction of the covariant
derivative.
Since the frame is orthonormal, ⟨s_i, s_j⟩ is constant, and

    0 = ∂/∂x_k ⟨s_i, s_j⟩ = ⟨∇_{∂/∂x_k} s_i, s_j⟩ + ⟨s_i, ∇_{∂/∂x_k} s_j⟩,

so we have A^k_{ij} = −A^k_{ji}, i.e., the connection matrix is skew-symmetric.
Lemma 40.2. Let {s′_1, . . . , s′_k} be another orthonormal frame for E over U. If g : U → SO(k) is the transformation sending coordinates with respect to the s_i to coordinates with respect to the s′_i (by left multiplication), then the connection matrix transforms as A ↦ g^{-1} dg + g^{-1} A g.

Proof.

    g^{-1}(d + A)g = g^{-1} dg + g^{-1} g d + g^{-1} A g = d + (g^{-1} dg + g^{-1} A g).

You may want to check that if A is skew-symmetric and g is orthogonal, then g^{-1} dg + g^{-1} A g is also skew-symmetric.
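The closing remark can be checked symbolically in the SO(2) case, with hypothetical functions θ(x) and a(x):

```python
import sympy as sp

x = sp.symbols('x', real=True)
th = sp.Function('theta')(x)    # hypothetical angle function θ(x)
a = sp.Function('a')(x)         # hypothetical connection coefficient

# g : U → SO(2), rotation by θ(x); A has skew-symmetric dx-coefficient.
g = sp.Matrix([[sp.cos(th), -sp.sin(th)], [sp.sin(th), sp.cos(th)]])
A = sp.Matrix([[0, a], [-a, 0]])          # dx-coefficient of A

# dx-coefficient of the transformed connection matrix g^{-1}dg + g^{-1}Ag:
A_new = g.inv() * sp.diff(g, x) + g.inv() * A * g

# Skew-symmetry is preserved:
assert all(sp.simplify(e) == 0 for e in (A_new + A_new.T))
```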
40.2. Rank 2 case. Suppose from now on that E has rank 2 over M of arbitrary dimension. Then A_U (the connection matrix over U with respect to some trivialization) is given by

    A_U = (   0      A_21 )
          ( −A_21     0   ).
Example: For the Levi-Civita connection ∇ on a surface (Σ, g) ⊂ (R^3, g), we have, locally,

    R_U = (      0        κ θ_1 ∧ θ_2 )
          ( −κ θ_1 ∧ θ_2       0      ),

where κ is the scalar curvature, {e_1, e_2} is an orthonormal frame, and {θ_1, θ_2} is dual to the frame (called the dual coframe). (The fact that κ is the scalar curvature is the content of the Theorema Egregium!)
40.3. The Gauß-Bonnet Theorem. Let (M, g) be an oriented Riemannian manifold of dimension n. Then there exists a naturally defined volume form ω which has the following property: at x ∈ M, let e_1, . . . , e_n be an oriented orthonormal basis for T_x M. Then ω(x)(e_1, . . . , e_n) = 1. If we change the choice of orthonormal basis by multiplying by A ∈ SO(n), then the value changes by det(A), which is still 1. Therefore, ω is well-defined.

For surfaces (Σ, g), we have an area form dA.

Theorem 40.4 (Gauß-Bonnet). Let Σ be a compact surface in Euclidean space (R^3, g). Then, for one of the orientations of Σ,

    ∫_Σ κ dA = 2π χ(Σ).

Here κ is the scalar curvature, dA is the area form for the metric induced from (R^3, g), and χ(Σ) is the Euler characteristic of Σ.
Note that a compact surface Σ (without boundary) of genus g has χ(Σ) = 2 − 2g.
Proof. Notice that κ dA is simply κ θ_1 ∧ θ_2 from the Example above, and hence the Euler class is e(TΣ) = [κ dA]. In order to evaluate ∫_Σ κ dA, we therefore need to compute ∫_Σ ω for a connection of our choice on TΣ compatible with g, by using Theorem 40.3.
In what follows we will frequently identify SO(2) with the unit circle S^1 = {e^{iθ} | θ ∈ [0, 2π)} in C via

    ( cos θ  −sin θ )
    ( sin θ   cos θ )  ↔  e^{iθ}.

We will do a sample computation in the case of the sphere S^2. Let S^2 be the union of two regions U = {|z| ≤ 1} and V = {|w| ≤ 1} identified via z = 1/w along their boundaries. Here z, w are complex coordinates. (Note that U and V are not open sets, but it doesn't really matter....) If we trivialize TΣ on U and V using the natural trivialization from TC, then the gluing map g : U ∩ V → SO(2) is given by θ ↦ e^{2iθ}. If we set A_V to be identically zero, then

    A_U = g^{-1} dg + g^{-1} A_V g = g^{-1} dg = (   0     2dθ )
                                                 ( −2dθ     0  )

along ∂U (after transforming via g). No matter how we extend A_U to the interior of U, we have the following by Stokes' Theorem:

    ∫_U ω_U = ∫_{∂U} 2dθ = 4π = 2π χ(S^2).
Now let Σ be a compact surface of genus g (without boundary). Then we can remove g annuli
S 1 × [0, 1] from Σ so that Σ becomes a disk Σ0 with 2g − 1 holes. We make A flat on the annuli,
and see what this induces on Σ0 . A computation similar to the one above gives the desired formula.
(Check this!!)