Chapter 1
Compact Groups
Most infinite groups, in practice, come dressed in a natural topology, with respect to which the group operations are continuous. All the familiar groups, and in particular all matrix groups, are locally compact; and this marks the natural boundary of representation theory.
A topological group G is a topological space with a group structure defined on it, such that the group operations
(x, y) ↦ xy, x ↦ x⁻¹
of multiplication and inversion are both continuous.
Examples:
1. The real numbers R form a topological group under addition, with the usual topology defined by the metric
d(x, y) = |x − y|.
2. The non-zero reals R^× form a topological group under multiplication.

The groups of most interest to us are the compact ones. Examples:

1. The orthogonal group
O(n) = {T ∈ Mat(n, R) : T′T = I}.
Here Mat(n, R) denotes the space of all n×n real matrices; and T′ denotes the transpose of T:
T′ᵢⱼ = Tⱼᵢ.
We can identify Mat(n, R) with the Euclidean space E^{n²}, by regarding the n² entries tᵢⱼ as the coordinates of T.
With this understanding, O(n) is a closed subspace of E^{n²}, since it is the set of points satisfying the simultaneous polynomial equations making up the matrix identity T′T = I. It is also bounded, since each column of an orthogonal matrix is a unit vector:
t₁ᵢ² + ⋯ + tₙᵢ² = (T′T)ᵢᵢ = 1.
Thus the orthogonal group O(n) is compact.
2. The special orthogonal group
SO(n) = {T ∈ O(n) : det T = 1}
is a closed subgroup of the compact group O(n), and so is itself compact.
Note that
T ∈ O(n) ⟹ det T = ±1,
since
T′T = I ⟹ det T′ · det T = 1 ⟹ (det T)² = 1,
since det T′ = det T.
3. The unitary group
U(n) = {T ∈ Mat(n, C) : T*T = I}.
Here Mat(n, C) denotes the space of n×n complex matrices; and T* denotes the conjugate transpose of T:
T*ᵢⱼ = T̄ⱼᵢ.
We can identify Mat(n, C) with the Euclidean space E^{2n²}, by regarding the real and imaginary parts of the n² entries tᵢⱼ as the coordinates of T.
With this understanding, U(n) is a closed subspace of E^{2n²}. It is bounded because each entry has absolute value
|tᵢⱼ| ≤ 1.
In fact, for each i,
|t₁ᵢ|² + |t₂ᵢ|² + ⋯ + |tₙᵢ|² = (T*T)ᵢᵢ = 1.
Thus the unitary group U(n) is compact.
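The boundedness argument above is easy to check numerically: the columns of a unitary matrix are unit vectors, so every entry is bounded by 1. A minimal sketch, using numpy's QR factorisation to produce a random unitary matrix (the construction and names are our own, not from the notes):

```python
import numpy as np

rng = np.random.default_rng(0)

# A random unitary matrix: the QR factorisation of a random complex
# matrix yields Q with Q*Q = I.
n = 4
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
T, _ = np.linalg.qr(A)

# T*T = I, so each diagonal entry (T*T)_ii = |t_1i|^2 + ... + |t_ni|^2 = 1 ...
gram = T.conj().T @ T
col_norms = np.diag(gram).real

# ... and hence every entry satisfies |t_ij| <= 1.
max_entry = np.abs(T).max()
```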
424II 14
When n = 1,
U(1) = {x ∈ C : |x| = 1}.
Thus
U(1) = S¹ ≅ T¹ = R/Z.
Note that this group (which we can denote equally well by U(1) or T¹) is abelian (or commutative).
4. The special unitary group
SU(n) = {T ∈ U(n) : det T = 1}
is a closed subgroup of the compact group U(n), and so is itself compact.
Note that
T ∈ U(n) ⟹ |det T| = 1,
since
T*T = I ⟹ det T* · det T = 1 ⟹ |det T|² = 1,
since det T* is the complex conjugate of det T.
The map
U(1) × SU(n) → U(n) : (λ, T) ↦ λT
is a surjective homomorphism. It is not bijective, since
λI ∈ SU(n) ⟺ λⁿ = 1.
Thus the homomorphism has kernel
Cₙ = ⟨ω⟩,
where ω = e^{2πi/n}. It follows that
U(n) = (U(1) × SU(n)) / Cₙ.
We shall find that the groups SU(n) play a more important part in representation theory than the full unitary groups U(n).
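The fact that λI ∈ SU(n) exactly when λⁿ = 1 can be verified numerically; a small sketch for n = 3 (the variable names are ours):

```python
import numpy as np

n = 3
omega = np.exp(2j * np.pi / n)   # a primitive n-th root of unity

# omega*I is unitary, and det(omega*I) = omega^n = 1, so omega*I lies in SU(n):
M = omega * np.eye(n)
det_M = np.linalg.det(M)
unitary_defect = np.abs(M.conj().T @ M - np.eye(n)).max()

# whereas for a generic unimodular lambda, det(lambda*I) = lambda^n != 1:
lam = np.exp(0.7j)
det_bad = np.linalg.det(lam * np.eye(n))
```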
5. The symplectic group
Sp(n) = {T ∈ Mat(n, H) : T*T = I}.
Here Mat(n, H) denotes the space of n×n matrices with quaternion entries; and
T*ᵢⱼ = T̄ⱼᵢ.
(Recall that the conjugate of the quaternion
q = t + xi + yj + zk
is the quaternion
q̄ = t − xi − yj − zk.
Note that conjugation is an anti-automorphism, ie
(q₁q₂)¯ = q̄₂q̄₁.
It follows from this that
(AB)* = B*A*
for any 2 matrices A, B whose product is defined. This in turn justifies our implicit assertion that Sp(n) is a group:
S, T ∈ Sp(n) ⟹ (ST)*(ST) = T*S*ST = T*T = I ⟹ ST ∈ Sp(n).
Note too that while multiplication of quaternions is not in general commutative, q and q̄ do commute:
qq̄ = q̄q = t² + x² + y² + z² = |q|²,
defining the norm, or absolute value, |q| of a quaternion q.)
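These quaternion identities are easy to check numerically with a hand-rolled Hamilton product; the representation of a quaternion as a 4-tuple (t, x, y, z) below is our own convention, not from the notes:

```python
# Quaternions as tuples (t, x, y, z) = t + xi + yj + zk.
def qmul(p, q):
    # Hamilton product
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

def qconj(q):
    t, x, y, z = q
    return (t, -x, -y, -z)

p, q = (1.0, 2.0, -1.0, 0.5), (0.5, -1.0, 3.0, 2.0)

# conjugation is an anti-automorphism: conj(pq) = conj(q) conj(p)
lhs = qconj(qmul(p, q))
rhs = qmul(qconj(q), qconj(p))

# q qbar = qbar q = t^2 + x^2 + y^2 + z^2 = |q|^2 (a real scalar)
norm_sq = qmul(q, qconj(q))
expected = sum(c * c for c in q)
```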
We can identify Mat(n, H) with the Euclidean space E^{4n²}, by regarding the coefficients of 1, i, j, k in the n² entries tᵢⱼ as the coordinates of T.
With this understanding, Sp(n) is a closed subspace of E^{4n²}. It is bounded because each entry has absolute value
|tᵢⱼ| ≤ 1.
In fact, for each i,
|t₁ᵢ|² + |t₂ᵢ|² + ⋯ + |tₙᵢ|² = (T*T)ᵢᵢ = 1.
Thus the symplectic group Sp(n) is compact.
When n = 1,
Sp(1) = {q ∈ H : |q| = 1} = {t + xi + yj + zk : t² + x² + y² + z² = 1}.
Thus
Sp(1) ≅ S³.
We leave it to the reader to show that there is in fact an isomorphism
Sp(1) ≅ SU(2).
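One way to see this isomorphism is through the standard embedding q = t + xi + yj + zk ↦ [[t+xi, y+zi], [−y+zi, t−xi]], which turns quaternion multiplication into matrix multiplication and sends unit quaternions into SU(2). A numerical sketch (the embedding is the usual one, but the function names and test values are ours):

```python
import numpy as np

def qmul(p, q):
    # Hamilton product of quaternions (t, x, y, z) = t + xi + yj + zk
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

def mat(q):
    # the standard embedding H -> Mat(2, C)
    t, x, y, z = q
    return np.array([[t + x*1j, y + z*1j],
                     [-y + z*1j, t - x*1j]])

p = (1.0, 2.0, -1.0, 0.5)
q = (0.5, -1.0, 3.0, 2.0)

# the embedding is multiplicative: mat(pq) = mat(p) mat(q)
hom_defect = np.abs(mat(qmul(p, q)) - mat(p) @ mat(q)).max()

# a unit quaternion maps to a matrix U with U*U = I and det U = 1
u = np.array([1.0, 2.0, -1.0, 0.5])
u = tuple(u / np.linalg.norm(u))
U = mat(u)
unitary_defect = np.abs(U.conj().T @ U - np.eye(2)).max()
det_U = np.linalg.det(U)
```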
Although compactness is by far the most important topological property that a group can possess, a second topological property plays a subsidiary but still important role: connectivity.
Recall that the space X is said to be disconnected if it can be partitioned into 2 non-empty open sets:
X = U ∪ V, U ∩ V = ∅.
We say that X is connected if it is not disconnected.
There is a closely related concept which is more intuitively appealing, but is usually more difficult to work with. We say that X is pathwise-connected if given any 2 points x, y ∈ X we can find a path joining x to y, ie a continuous map
γ : [0, 1] → X
with
γ(0) = x, γ(1) = y.
It is easy to see that
pathwise-connected ⟹ connected.
For if X = U ∪ V is a disconnection of X, and we choose points u ∈ U, v ∈ V, then there cannot be a path γ joining u to v. If there were, then
[0, 1] = γ⁻¹U ∪ γ⁻¹V
would be a disconnection of the interval [0, 1]. But it follows from the basic properties of real numbers that the interval is connected. (Suppose I = U ∪ V is a disconnection. We may suppose that 0 ∈ U. Let
l = inf{x : x ∈ V}.
Then we get a contradiction whether we assume that l ∈ V or l ∉ V.)
Actually, for all the groups we deal with the 2 concepts of connected and pathwise-connected will coincide. The reason for this is that all our groups will turn out to be locally euclidean, ie each point has a neighbourhood homeomorphic to the open ball in some euclidean space Eⁿ. This will become apparent much later when we consider the Lie algebra of a matrix group.
We certainly will not assume this result. We mention it merely to point out
that you will not go far wrong if you think of a connected space as one in which
you can travel from any point to any other, without taking off.
The following result provides a useful tool for showing that a compact group
is connected.
Proposition 1.1 Suppose the compact group G acts transitively on the compact space X. Let x₀ ∈ X; and let
H = S(x₀) = {g ∈ G : gx₀ = x₀}
be the stabiliser subgroup of x₀. Then if H and X are both connected, so is G.

Proof: Consider the surjection
π : G → X : g ↦ gx₀.
If gx₀ = x, then
π⁻¹x = gH.
Lemma 1.1 Each coset gH is connected.
Proof of Lemma: The map
h ↦ gh : H → gH
is a continuous bijection. But H is compact, since it is a closed subgroup of G (as H = π⁻¹x₀).
Now a continuous bijection φ of a compact space K onto a hausdorff space Y is necessarily a homeomorphism. For if U ⊂ K is open, then C = K − U is closed and therefore compact. Hence φ(C) is compact, and therefore closed; and so φ(U) = Y − φ(C) is open in Y. This shows that φ⁻¹ is continuous, ie φ is a homeomorphism.
Thus H ≅ gH; and so
H connected ⟹ gH connected.
Now suppose G = U ∪ V is a disconnection of G. Each coset gH, being connected, lies entirely in U or entirely in V; so U and V are unions of cosets, ie of fibres of π. Hence πU and πV are disjoint and non-empty; and they are open, since π is an open map (π⁻¹(πU) = UH is open, and X carries the quotient topology of G/H by the compactness argument above). Thus πU and πV would disconnect X, contrary to hypothesis. So G is connected.
Corollary 1.1 The special orthogonal group SO(n) is connected for each n.
Proof: The group SO(n) acts on Rⁿ by (T, x) ↦ Tx. This action preserves the norm
‖x‖ = √(x₁² + ⋯ + xₙ²).
For
‖Tx‖² = (Tx)′(Tx) = x′T′Tx = x′x.
It follows that T sends the sphere
S^{n-1} = {x ∈ Rⁿ : ‖x‖ = 1}
into itself. Thus SO(n) acts on S^{n-1}.
This action is transitive: we can find an orthogonal transformation of determinant 1 sending any point of S^{n-1} into any other. (The proof of this is left to the reader.)
Moreover the space S^{n-1} is compact, since it is closed and bounded.
Thus the conditions of our Proposition hold. Let us take
x₀ = (0, …, 0, 1)′.
Then
H(x₀) = S(x₀) ≅ SO(n−1).
For
Tx₀ = x₀ ⟹ T has the block form
T = ( T₁  0 )
    ( 0   1 )
where T₁ ∈ SO(n−1). (Since Tx₀ = x₀, the last column of T consists of 0s and a 1. But then
tₙ₁² + tₙ₂² + ⋯ + 1 = 1 ⟹ tₙ₁ = tₙ₂ = ⋯ = 0,
since each row of an orthogonal matrix has norm 1.)
Our proposition shows therefore that
SO(n−1) connected ⟹ SO(n) connected.
But
SO(1) = {I}
is certainly connected. We conclude by induction that SO(n) is connected for all
n.
Remark: Although we won't make use of this, our Proposition could be slightly extended, to state that if X is connected, then the numbers of components of H and G are equal.
Applying this to the full orthogonal group O(n), we deduce that for each n, O(n) has the same number of components as O(1), namely 2. But of course this follows from the connectedness of SO(n), since we know that O(n) splits into 2 parts: SO(n), and a coset of SO(n) (formed by the orthogonal matrices T with det T = −1) homeomorphic to SO(n).
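The determinant detects the two components of O(n): it is continuous and takes only the values ±1 on O(n), so no path within O(n) can join the two parts. A quick numerical illustration (random orthogonal matrices via QR; the construction is ours):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5

dets = []
for _ in range(20):
    Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
    dets.append(np.linalg.det(Q))

# every orthogonal matrix has determinant +1 or -1 ...
assert all(abs(abs(d) - 1) < 1e-9 for d in dets)

# ... and negating one column flips between the two components
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
Q2 = Q.copy()
Q2[:, 0] *= -1
flip = np.linalg.det(Q) * np.linalg.det(Q2)
```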
Corollary 1.2 The special unitary group SU(n) is connected for each n.
Proof: This follows in exactly the same way. SU(n) acts on Cⁿ by
(T, x) ↦ Tx.
This again preserves the norm
‖x‖ = (|x₁|² + ⋯ + |xₙ|²)^{1/2},
since
‖Tx‖² = (Tx)*(Tx) = x*T*Tx = x*x = ‖x‖².
Thus SU(n) sends the sphere
S^{2n-1} = {x ∈ Cⁿ : ‖x‖ = 1}
into itself. As before, the stabiliser subgroup
S((0, …, 0, 1)′) ≅ SU(n−1);
and so, again as before,
SU(n−1) connected ⟹ SU(n) connected.
Since
SU(1) = {I}
is connected, we conclude by induction that SU(n) is connected for all n.
Remark: The same argument shows that the full unitary group U(n) is connected for all n, since
U(1) = {x ∈ C : |x| = 1} = S¹
is connected.
But this also follows from the connectedness of SU(n) through the homomorphism
(λ, T) ↦ λT : U(1) × SU(n) → U(n),
since the image of a connected set is connected (as is the product of 2 connected sets).
Note that this homomorphism is not quite an isomorphism, since
λI ∈ SU(n) ⟺ λⁿ = 1.
It follows that
U(n) = (U(1) × SU(n)) / Cₙ,
where Cₙ = ⟨ω⟩ is the finite cyclic group generated by ω = e^{2πi/n}.
Corollary 1.3 The symplectic group Sp(n) is connected for each n.
Proof: The result follows in the same way from the action
(T, x) ↦ Tx
of Sp(n) on Hⁿ. This action sends the sphere
S^{4n-1} = {x ∈ Hⁿ : ‖x‖ = 1}
into itself; and so, as before,
Sp(n−1) connected ⟹ Sp(n) connected.
In this case we have
Sp(1) = {q = t + xi + yj + zk ∈ H : |q|² = t² + x² + y² + z² = 1} ≅ S³.
So again, the induction starts; and we conclude that Sp(n) is connected for all n.
Chapter 2
Invariant integration on a compact group
Every compact group carries a unique invariant measure. This remarkable and beautiful result allows us to extend representation theory painlessly from the finite to the compact case.
2.1 Integration on a compact space
There are 2 rival approaches to integration theory.
Firstly, there is what may be called the traditional approach, in which the fundamental notion is the measure μ(S) of a subset S.
Secondly, there is the Bourbaki approach, in which the fundamental notion is the integral ∫f of a function f. This approach is much simpler, where applicable, and is the one that we shall follow.
Suppose X is a compact space. Let C(X, k) (where k = R or C) denote the vector space of continuous functions
f : X → k.
Recall that a continuous function on a compact space is bounded and always attains its bounds. We set
‖f‖ = max_{x∈X} |f(x)|
for each function f ∈ C(X, k).
This norm defines a metric
d(f₁, f₂) = ‖f₁ − f₂‖
on C(X, k), which in turn defines a topology on the space.
The metric is complete, ie every Cauchy sequence converges. This is easy to see. If fᵢ is a Cauchy sequence in C(X, k) then fᵢ(x) is a Cauchy sequence in k for each x ∈ X. Since R and C are complete metric spaces, this sequence converges, to f(x), say; and it is a simple technical exercise to show that the limit function f(x) is continuous, and that fᵢ → f in C(X, k).
Thus C(X, k) is a complete normed vector space, a Banach space in short.
A measure μ on X is defined to be a continuous linear functional
μ : C(X, k) → k (k = R or C).
More fully:
1. μ is linear, ie
μ(λ₁f₁ + λ₂f₂) = λ₁μ(f₁) + λ₂μ(f₂);
2. μ is continuous, ie given ε > 0 there exists δ > 0 such that
‖f‖ < δ ⟹ |μ(f)| < ε.
We often write
∫_X f dμ or ∫_X f(x) dμ(x)
in place of μ(f).
Since a complex measure splits into real and imaginary parts,
μ = μ_R + iμ_I,
where the measures μ_R and μ_I are real, we can safely restrict the discussion to real measures.
Example: Consider the circle (or torus)
S¹ = T = R/Z.
We parametrise S¹ by the angle θ mod 2π. The usual measure dθ is a measure in our sense; in fact
μ(f) = (1/2π) ∫₀^{2π} f(θ) dθ
is the invariant Haar measure on the group S¹, whose existence and uniqueness on every compact group we shall shortly demonstrate.
Another measure, a point measure, is defined by taking the value of f at a given point, say θ₀:
μ₁(f) = f(θ₀).
Measures can evidently be combined linearly, as for example
μ₂ = μ + ½μ₁,
ie
μ₂(f) = (1/2π) ∫₀^{2π} f(θ) dθ + ½ f(θ₀).
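In the Bourbaki approach a measure is just a continuous linear functional, so it can be modelled directly as a function taking functions. A sketch on the circle, approximating the Haar integral by a Riemann sum (the names μ, μ₁, μ₂ follow the example above; the grid size and the point θ₀ = 1.0 are arbitrary choices of ours):

```python
import numpy as np

N = 10_000
theta = 2 * np.pi * np.arange(N) / N

def mu(f):
    # the normalised Haar measure on the circle: (1/2pi) * integral of f
    return f(theta).mean()

def mu1(f, theta0=1.0):
    # a point measure: evaluation at theta0
    return f(theta0)

def mu2(f):
    # a linear combination of measures is again a measure
    return mu(f) + 0.5 * mu1(f)

f = lambda t: np.cos(t) + 2.0
val_mu = mu(f)     # cos integrates to 0 over a full period, so this is 2
val_mu2 = mu2(f)
```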
2.2 Integration on a compact group
Suppose now G is a compact group. If μ is a measure on G, and g ∈ G, then we can define a new measure gμ by
(gμ)(f) = μ(g⁻¹f) = ∫_G f(gx) dμ(x).
(Since we are dealing with functions on a space of functions, g is inverted twice.)
Theorem 2.1 Suppose G is a compact group. Then there exists a unique real measure μ on G such that
1. μ is invariant on G, ie
∫_G (gf) dμ = ∫_G f dμ
for all g ∈ G, f ∈ C(G, R);
2. μ is normalised so that G has volume 1, ie
∫_G 1 dμ = 1.
Moreover,
1. this measure is strictly positive, ie
f(x) ≥ 0 for all x ⟹ ∫ f dμ ≥ 0,
with equality only if f = 0, ie f(g) = 0 for all g;
2. |∫_G f dμ| ≤ ∫_G |f| dμ.
Proof
The intuitive idea. As the proof is long, and rather technical, it may help to sketch the argument first. The basic idea is that averaging smoothes.
By an average F(x) of a function f(x) ∈ C(G) we mean a weighted average of transforms of f, ie a function of the form
F(x) = λ₁f(g₁⁻¹x) + ⋯ + λᵣf(gᵣ⁻¹x),
where
g₁, …, gᵣ ∈ G; 0 ≤ λ₁, …, λᵣ ≤ 1; λ₁ + ⋯ + λᵣ = 1.
These averages have the following properties:
- An average of an average is an average, ie if F is an average of f, then an average of F is also an average of f.
- If there is an invariant measure on G, then averaging leaves the integral unchanged, ie if F is an average of f then
∫ F dg = ∫ f dg.
- Averaging smoothes, in the sense that if F is an average of f then
min f ≤ min F ≤ max F ≤ max f.
In particular, if we define the variation of f by
var f = max f − min f,
then
var F ≤ var f.
Now suppose a positive invariant measure exists. Then
min f ≤ ∫ f dg ≤ max f,
ie the integral of f is sandwiched between its bounds.
If f is not completely smooth, ie not constant, we can always make it smoother, ie reduce its variation, by spreading out its valleys, as follows. Let
m = min f, M = max f;
and let U be the set of points where f is below average, ie
U = {x ∈ G : f(x) < ½(m + M)}.
The transforms of U (as of any non-empty open set) cover G; for if x₀ ∈ U then x ∈ (xx₀⁻¹)U. Since U is open, and G is compact, a finite number of these transforms cover G, say
G ⊂ g₁U ∪ ⋯ ∪ gᵣU.
Now consider the average
F = (1/r)(g₁f + ⋯ + gᵣf),
ie
F(x) = (1/r)(f(g₁⁻¹x) + ⋯ + f(gᵣ⁻¹x)).
For any x, at least one of g₁⁻¹x, …, gᵣ⁻¹x lies in U (since x ∈ gᵢU ⟹ gᵢ⁻¹x ∈ U). Hence
F(x) < (1/r)((r − 1)M + ½(m + M)) = (1 − 1/(2r))M + (1/(2r))m.
Thus
var F < (1 − 1/(2r))(M − m) < var f.
If we could find an average that was constant, say F(x) = c, then (always assuming the existence of an invariant measure) we would have
∫ f dg = ∫ F dg = c ∫ 1 dg = c.
An example: Let G = U(1); and let f be the saw-tooth function
f(e^{iθ}) = |θ| (−π < θ ≤ π).
Let g = e^{iπ}, ie rotation through half a revolution. Then
F(x) = ½(f(x) + f(gx)) = π/2
for all x ∈ U(1). So in this case, we have found a constant function; and we deduce that if an invariant integral exists, then
∫ f dg = ∫ F dg = π/2.
But it is too much in general to hope that we can completely smooth a function by averaging. However, we can expect to make the variation as small as we wish, so that
var F = max F − min F < ε,
say. But then (always assuming there is an invariant measure) ∫ f will be sandwiched between these 2 bounds:
min F ≤ ∫ f dg ≤ max F.
So we can determine ∫ f as a limit in this way.
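The whole smoothing scheme is easy to simulate on the circle: repeatedly averaging random rotates of f shrinks the variation, while the discrete mean (which rotations of the grid preserve exactly) stays sandwiched between min F and max F. A sketch on a finite grid; the grid size, number of rounds, and averaging scheme are arbitrary choices of ours:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 360
theta = 2 * np.pi * np.arange(N) / N
f = np.abs(((theta + np.pi) % (2 * np.pi)) - np.pi)  # saw-tooth on the grid

true_integral = f.mean()   # grid rotations preserve the grid mean exactly
F = f.copy()
for _ in range(200):
    # average F with random rotates of itself; an average of an average
    # is again an average of f
    shifts = rng.integers(0, N, size=4)
    F = np.mean([np.roll(F, s) for s in shifts], axis=0)

var_f = f.max() - f.min()
var_F = F.max() - F.min()
```

After enough rounds the averages are nearly constant, and their common value pins down the integral.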
That's the idea of the proof. Surprisingly, the most troublesome detail to fill in is to show that 2 different averaging limits cannot lead to different values for ∫ f. For this, we have to introduce the second action of G on C(G), by right multiplication,
(g, f) ↦ f(xg).
This leads to a second way of averaging, using right transforms f(xg). The commutation of multiplication on the left and right allows us to play off these 2 kinds of average against one another.
Proof proper: Suppose f ∈ C(G, R). By the argument above, we can find a sequence of averages
F₀ = f, F₁, F₂, …
(each an average of its predecessor) such that
var F₀ > var F₁ > var F₂ > ⋯
(or else we reach a constant function Fᵣ = c).
However, this does not establish that
var Fᵢ → 0
as i → ∞. We need a slightly sharper argument to prove this. In effect we must use the fact that f is uniformly continuous.
Recall that a function f : R → R is said to be uniformly continuous on the interval I ⊂ R if given ε > 0 we can always find δ > 0 such that
|x − y| < δ ⟹ |f(x) − f(y)| < ε.
We can extend this concept to a function f : G → R on a compact group G as follows: f is said to be uniformly continuous on G if given ε > 0 we can find an open set U ∋ e (the neutral element of G) such that
x⁻¹y ∈ U ⟹ |f(x) − f(y)| < ε.
Lemma 2.1 A continuous function on a compact group is necessarily uniformly continuous.
Proof of Lemma: Suppose f ∈ C(G, k); and suppose ε > 0 is given. For each point g ∈ G, let
U(g) = {x ∈ G : |f(x) − f(g)| < ½ε}.
By the triangle inequality,
x, y ∈ U(g) ⟹ |f(x) − f(y)| < ε.
Now each neighbourhood U of g in G is expressible in the form
U = gV,
where V is a neighbourhood of e in G.
Furthermore, for each neighbourhood V of e, we can find a smaller neighbourhood W of e such that
W² ⊂ V.
(This follows from the continuity of the multiplication (x, y) ↦ xy. Here W² denotes the set {w₁w₂ : w₁, w₂ ∈ W}.)
So for each g ∈ G we can find an open neighbourhood W(g) of e such that
gW(g)² ⊂ U(g);
and in particular
x, y ∈ gW(g)² ⟹ |f(x) − f(y)| < ε.
The open sets gW(g) cover G (since g ∈ gW(g)). Therefore, since G is compact, we can find a finite subcover, say
G = g₁W₁ ∪ g₂W₂ ∪ ⋯ ∪ gᵣWᵣ,
where Wᵢ = W(gᵢ).
Let
U = W₁ ∩ ⋯ ∩ Wᵣ.
Suppose x⁻¹y ∈ U, ie
y ∈ xU.
Now x lies in some set gᵢWᵢ. Hence
x, y ∈ gᵢWᵢU ⊂ gᵢWᵢ²;
and so
|f(x) − f(y)| < ε.
Now we observe that this open set U will serve not only for f but also for every average F of f. For if
F = λ₁g₁f + ⋯ + λᵣgᵣf (0 ≤ λ₁, …, λᵣ ≤ 1; λ₁ + ⋯ + λᵣ = 1)
then
|F(x) − F(y)| ≤ λ₁|f(g₁⁻¹x) − f(g₁⁻¹y)| + ⋯ + λᵣ|f(gᵣ⁻¹x) − f(gᵣ⁻¹y)|.
But
(gᵢ⁻¹x)⁻¹(gᵢ⁻¹y) = x⁻¹gᵢgᵢ⁻¹y = x⁻¹y.
Thus
x⁻¹y ∈ U ⟹ |f(gᵢ⁻¹x) − f(gᵢ⁻¹y)| < ε for each i
⟹ |F(x) − F(y)| < (λ₁ + ⋯ + λᵣ)ε = ε.
Returning to our construction of an improving average F, let us take ε = (M − m)/2; then we can find an open set U ∋ e such that
x⁻¹y ∈ U ⟹ |F(x) − F(y)| < ½(M − m)
for every average F of f. In other words, the variation of F on any transform gU is less than half the variation of f on G.
As before, we can find a finite number of transforms of U covering G, say
G ⊂ g₁U ∪ ⋯ ∪ gᵣU.
One of these transforms, gᵢU say, must contain a point x₀ at which F takes its minimal value. But then, within gᵢU,
|F(x) − F(x₀)| < ½(M − m);
and so
F(x) < min F + ½(M − m).
If now we form the new average
F′ = (1/r)(g₁F + ⋯ + gᵣF),
as before, then
max F′ ≤ ((r − 1)/r) max F + (1/r)(min F + (M − m)/2).
Since min F′ ≥ min F, it follows that
var F′ ≤ (1 − 1/r) var F + (1/(2r)) var f.
A little thought shows that this implies that
var F′ < var F
provided
var F > ½ var f.
At first sight, this seems a weaker result than our earlier one, which showed that var F′ < var F in all cases! The difference is that r now is independent of F.
Thus we can find a sequence of averages
F₀ = f, F₁, F₂, …
(each an average of its predecessor) such that var Fᵢ decreases to a limit V, say, satisfying
V ≤ (1 − 1/r)V + (1/(2r)) var f,
ie
V ≤ ½ var f.
In particular, we can find an average F with
var F < (2/3) var f.
Repeating the argument, with F in place of f, we find a second average F′ such that
var F′ < (2/3)² var f;
and further repetition gives a new sequence of averages
F₀ = f, F₁, F₂, …,
with
var Fᵢ → 0,
as required.
This sequence gives us a nest of intervals
(min f, max f) ⊃ (min F₁, max F₁) ⊃ (min F₂, max F₂) ⊃ ⋯
whose lengths tend to 0. Thus the intervals converge on a unique real number I.
We want to set
∫ f dg = I.
But before we can do this, we must ensure that no other sequence of averages can lead to a nest of intervals
(min f, max f) ⊃ (min F′₁, max F′₁) ⊃ (min F′₂, max F′₂) ⊃ ⋯
converging on a different real number I′ ≠ I.
This will follow at once from the following Lemma.
Lemma 2.2 Suppose F, F′ are averages of f. Then
min F ≤ max F′.
In other words, the minimum of any average is at most the maximum of any other average.
Proof of Lemma: The result would certainly hold if we could find a function F″ which was an average both of F and of F′; for then
min F ≤ min F″ ≤ max F″ ≤ max F′.
However, it is not at all clear that such a common average always exists. We need a new idea.
So far we have only been considering the action of G on C(G) on the left. But G also acts on the right, the 2 actions being independent and combining in the action of G × G given by
((g, h)f)(x) = f(g⁻¹xh).
Let us temporarily adopt the notation fh for this right action, ie
(fh)(x) = f(xh).
We can use this action to define right averages
F′ = Σⱼ μⱼ(fhⱼ).
The point of introducing this complication is that we can use the right averages to refine the left averages, and vice versa.
Thus suppose we have a left average
F = Σᵢ λᵢ(gᵢf)
and a right average
F′ = Σⱼ μⱼ(fhⱼ).
Then we can form the joint average
F″ = Σᵢ Σⱼ λᵢμⱼ(gᵢfhⱼ).
We can regard F″ as derived from F by right averaging, or from F′ by left averaging. In either case we conclude that F″ is an average of both F and F′; and so
min F ≤ min F″ ≤ max F″ ≤ max F′.
Thus the minimum of any left average is at most the maximum of any right average. Similarly
min F′ ≤ max F;
the minimum of any right average is at most the maximum of any left average.
In fact, the second result follows from the first; since we can pass from left averages to right averages, and vice versa, through the involution
f ↦ f̃ : C(G) → C(G),
where
f̃(g) = f(g⁻¹).
For it is readily verified that
F = λ₁(g₁f) + ⋯ + λᵣ(gᵣf) ⟹ F̃ = λ₁(f̃g₁⁻¹) + ⋯ + λᵣ(f̃gᵣ⁻¹).
Thus if F is a left average then F̃ is a right average, and vice versa.
Now suppose we have 2 left averages F₁, F₂ such that
max F₁ < min F₂.
Let
min F₂ − max F₁ = ε.
Let F′ be a right average of f with
var F′ = max F′ − min F′ < ε.
Then we have a contradiction; for
min F₂ ≤ max F′ < min F′ + ε ≤ max F₁ + ε = min F₂.
This proves the Lemma, and with it the uniqueness of the limit I. Accordingly we may set
μ(f) = ∫ f dg = I.
Next we must show that μ is additive. Suppose f₁, f₂ ∈ C(G, R). Given ε > 0, we can find a left average
F₁ = Σᵢ λᵢ(gᵢf₁)
of f₁ such that
μ(f₁) − ε < min F₁ ≤ max F₁ < μ(f₁) + ε;
and similarly we can find a right average
F₂ = Σⱼ μⱼ(f₂hⱼ)
of f₂ such that
μ(f₂) − ε < min F₂ ≤ max F₂ < μ(f₂) + ε.
Now let
F = Σᵢ Σⱼ λᵢμⱼ(gᵢ(f₁ + f₂)hⱼ).
Then we have
min F₁ + min F₂ ≤ min F ≤ μ(f₁ + f₂) ≤ max F ≤ max F₁ + max F₂;
from which we deduce that
μ(f₁ + f₂) = μ(f₁) + μ(f₂).
Let's postpone for a moment the proof that μ is continuous.
It is evident that a non-negative function will have non-negative integral, since all its averages will be non-negative:
f ≥ 0 ⟹ ∫ f dμ ≥ 0.
It's perhaps not obvious that the integral is strictly positive. Suppose f ≥ 0, and f(g) > 0. Then we can find ρ > 0 and an open set U containing g such that
f(x) ≥ ρ
for x ∈ U. Now we can find g₁, …, gᵣ such that
G = g₁U ∪ ⋯ ∪ gᵣU.
Let F be the average
F(x) = (1/r)(f(g₁⁻¹x) + ⋯ + f(gᵣ⁻¹x)).
Then
x ∈ gᵢU ⟹ gᵢ⁻¹x ∈ U ⟹ f(gᵢ⁻¹x) ≥ ρ,
and so
F(x) ≥ ρ/r.
Hence
∫ f dg = ∫ F dg ≥ ρ/r > 0.
Since
min f ≤ ∫ f dμ ≤ max f,
it follows at once that
|∫ f dμ| ≤ ‖f‖.
It is now easy to show that μ is continuous. For a linear functional is continuous if it is continuous at 0; and we have just seen that
‖f‖ < ε ⟹ |∫ f dμ| < ε.
Finally, since f and gf (for f ∈ C(G), g ∈ G) have the same transforms, they have the same (left) averages. Hence
∫ gf dg = ∫ f dg,
ie the integral is left-invariant.
Moreover, it follows from our construction that this is the only left-invariant integral on G with ∫ 1 dg = 1; for any such integral must be sandwiched between min F and max F for all averages F of f, and we have seen that these intervals converge on a single real number.
The Haar measure, by definition, is left invariant:
∫ f(g⁻¹x) dμ(x) = ∫ f(x) dμ(x).
It followed from our construction that it is also right invariant:
∫ f(xh) dμ(x) = ∫ f(x) dμ(x).
It is worth noting that this can be deduced directly from the existence and uniqueness of the Haar measure.
Proposition 2.1 The Haar measure μ on a compact group G is right invariant, ie
∫_G f(gh) dg = ∫_G f(g) dg (h ∈ G, f ∈ C(G, R)).
Proof: Suppose h ∈ G. The map
μ_h : f ↦ μ(fh)
defines a left invariant measure on G. By the uniqueness of the Haar measure, and the fact that
μ_h(1) = 1
(since the constant function 1 is right as well as left invariant),
μ_h = μ,
ie μ is right invariant.
Outline of an alternative proof: Those who are fond of abstraction might prefer the following formulation of the first part of our proof, set in the real Banach space C(G) = C(G, R).
Let 𝒜(f) ⊂ C(G) denote the set of averages of f. This set is convex, ie
F, F′ ∈ 𝒜(f) ⟹ λF + (1 − λ)F′ ∈ 𝒜(f) (0 ≤ λ ≤ 1).
Let 𝒞 ⊂ C(G) denote the set of constant functions f(g) = c. Evidently
𝒞 ≅ R.
We want to show that
cl 𝒜(f) ∩ 𝒞 ≠ ∅,
ie the closure of 𝒜(f) contains a constant function. (In other words, we can find a sequence of averages converging on a constant function.)
To prove this, we establish that 𝒜(f) is pre-compact, ie its closure cl 𝒜(f) is compact. For then it will follow that there is a point X ∈ cl 𝒜(f) (ie a function X(g)) which is closest to 𝒞. But if this point is not in 𝒞, we will reach a contradiction; for by the same argument that we used in our proof, we can always improve on a non-constant average, ie find another average closer to 𝒞. (We actually need the stronger version of this using uniform continuity, since the closest point X(g) is not necessarily an average, but only the limit of a sequence of averages. Uniform continuity shows that we can improve all averages by a fixed amount; so if we take an average sufficiently close to X(g) we can find another average closer to 𝒞 than X(g).)
It remains to show that 𝒜(f) is pre-compact. We note in the first place that the set of transforms of f,
Gf = {gf : g ∈ G},
is a compact subset of C(G), since it is the image of the compact set G under the continuous map
g ↦ gf : G → C(G).
Also, 𝒜(f) is the convex closure of this set Gf, ie the smallest convex set containing Gf (eg the intersection of all convex sets containing Gf), formed by the points
λ₁F₁ + ⋯ + λᵣFᵣ (Fᵢ ∈ Gf; 0 ≤ λ₁, …, λᵣ ≤ 1; λ₁ + ⋯ + λᵣ = 1).
Thus 𝒜(f) is the convex closure of the compact set Gf. But the convex closure of a compact set in a complete metric space is always pre-compact. That follows (not immediately, but by a straightforward argument) from the following lemma in the theory of metric spaces: a subset S ⊂ X of a complete metric space is pre-compact if and only if, for every ε > 0, it can be covered by a finite number of balls of radius ε:
S ⊂ B(x₁, ε) ∪ ⋯ ∪ B(xᵣ, ε).
Accordingly, we have shown that cl 𝒜(f) ∩ 𝒞 is non-empty. We must then show that it consists of a single point. This we do as in our proof proper, by introducing right averages. Finally, we define ∫ f dg to be this point of intersection (or rather, the corresponding real number); and we show as before that this defines an invariant integral μ(f) with the required properties.
Examples:
1. As we have already noted, the Haar measure on S¹ is (1/2π) dθ. In other words,
μ(f) = (1/2π) ∫₀^{2π} f(θ) dθ.
2. Consider the compact group SU(2). We know that
SU(2) ≅ S³,
since the general matrix in SU(2) takes the form
U = ( x+iy   z+it )
    ( −z+it  x−iy ),
with x² + y² + z² + t² = 1.
The usual volume on S³, when normalised, gives the Haar measure on SU(2). To see that, observe that multiplication by U ∈ SU(2) defines a distance-preserving linear transformation (an isometry) of R⁴, ie if
U ( x+iy   z+it )  =  ( x′+iy′   z′+it′ )
  ( −z+it  x−iy )     ( −z′+it′  x′−iy′ )
then
x′² + y′² + z′² + t′² = x² + y² + z² + t²
for all (x, y, z, t) ∈ R⁴.
It follows that multiplication by U preserves the volume on S³. In other words, this volume provides an invariant measure on SU(2), which must therefore be, after normalisation, the Haar measure on SU(2).
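The isometry claim can be checked concretely in the 2×2 complex model: for U unitary, ‖UM‖ = ‖M‖ in the Frobenius norm, and the Frobenius norm of the matrix of (x, y, z, t) is √2 times the Euclidean norm of (x, y, z, t). A numerical sketch (matrix model as in the text; helper names and test values are ours):

```python
import numpy as np

def mat(x, y, z, t):
    # the matrix of (x, y, z, t) in the 2x2 complex model used above
    return np.array([[x + y*1j, z + t*1j],
                     [-z + t*1j, x - y*1j]])

def r4_norm(M):
    # Frobenius norm; equals sqrt(2) * Euclidean norm of (x, y, z, t)
    return np.linalg.norm(M)

# U in SU(2), obtained from a unit vector (x, y, z, t) on S^3
v = np.array([1.0, -2.0, 0.5, 3.0])
v /= np.linalg.norm(v)
U = mat(*v)
det_U = np.linalg.det(U)

# an arbitrary point of R^4 in matrix form
M = mat(0.3, -1.2, 2.0, 0.7)

# multiplication by U preserves the R^4 norm: an isometry
before = r4_norm(M)
after = r4_norm(U @ M)
```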
As this example (the simplest non-abelian compact group) demonstrates, concrete computation of the Haar measure is likely to be complicated. Fortunately, the mere existence of the Haar measure is usually sufficient for our purpose.
Chapter 3
From finite to compact groups
Almost all the results established in Part I for finite-dimensional representations of finite groups extend to finite-dimensional representations of compact groups. For the Haar measure on a compact group G allows us to average over G; and our main results were, or can be, established by averaging.
In this chapter we run very rapidly over these results, and their extension to the compact case. This may serve (if nothing else) as a review of the main results of finite-dimensional representation theory.
The chapter is divided into sections corresponding to the chapters of Part I; eg section 3.5 covers the results established in chapter 5 of Part I.
We assume, unless the contrary is explicitly stated, that we are dealing with finite-dimensional representations over k (where k = R or C). This restriction greatly simplifies the story, for three reasons:
1. Each finite-dimensional vector space over k carries a unique hausdorff topology under which addition and scalar multiplication are continuous. If V is n-dimensional then
V ≅ kⁿ;
and this unique topology on V is just that arising from the product topology on kⁿ.
2. If U and V are finite-dimensional vector spaces over k, then every linear map
t : U → V
is continuous. Continuity is automatic in finite dimensions.
3. If V is a finite-dimensional vector space over k, then every subspace U ⊂ V is closed in V.
3.1 Representations of a Compact Group
We have agreed that a representation of a topological group G in a finite-dimensional vector space V over k (where k = R or C) is defined by a continuous linear action
G × V → V.
Recall that a representation of a finite group G in V can be defined in 2 equivalent ways:
1. by a linear action
G × V → V;
2. by a homomorphism
G → GL(V),
where GL(V) denotes the group of invertible linear maps t : V → V.
We again have the same choice. We have chosen (1) as our fundamental definition in the compact case, where we chose (2) in the finite case, simply because it is a little easier to discuss the continuity of a linear action.
However, there is a natural topology on GL(V). For we can identify GL(V) with a subspace of the space of all linear maps t : V → V; if dim V = n then
GL(V) ⊂ Mat(n, k) ≅ k^{n²}.
This n²-dimensional vector space has a unique hausdorff topology, as we have seen; and this induces a topology on GL(V).
We know that there is a one-one correspondence between linear actions of G on V and homomorphisms G → GL(V). It is a straightforward matter to verify that under this correspondence, a linear action is continuous if and only if the corresponding homomorphism is continuous.
3.2 Equivalent Representations
The definition of the equivalence of 2 representations α, β of a group G in the finite-dimensional vector spaces U, V over k holds for all groups, and so extends without question to compact groups.
We note that the map θ : U → V defining such an equivalence is necessarily continuous, since U and V are finite-dimensional. In the infinite-dimensional case (which, we emphasise, we are not considering at the moment) we would have to add the requirement that θ should be continuous.
3.3 Simple Representations
Recall that the representation α of a group G in the finite-dimensional vector space V over k is said to be simple if no proper subspace U ⊂ V is stable under G. This definition extends to all groups G, and in particular to compact groups.
In the infinite-dimensional case we would restrict the requirement to proper closed subspaces of V. This is no restriction in our case, since as we have noted, all subspaces of a finite-dimensional vector space over k are closed.
3.4 The Arithmetic of Representations
Suppose α, β are representations of the group G in the finite-dimensional vector spaces U, V over k. We have defined the representations α + β, αβ and the dual α* in the vector spaces U ⊕ V, U ⊗ V and U*; these constructions carry over to compact groups without change.

3.6 Every Representation of a Finite Group is Semisimple
Recall the proof in the finite case: if U ⊂ V is stable under G, we choose a projection π of V onto U; then the average
π̄ = (1/|G|) Σ_{g∈G} gπg⁻¹
is also a projection onto U; and
W = ker π̄
is a stable complementary subspace:
V = U ⊕ W.
This carries over without difficulty, although a little care is required. First we must explain how we define the average
π̄ = ∫_G gπg⁻¹ dg.
For here we are integrating the operator-valued function
F(g) = gπg⁻¹.
However, there is little difficulty in extending the concept of measure to vector-valued functions F on G, ie maps
F : G → V,
where V is a finite-dimensional vector space over k. This we can do, for example, by choosing a basis for V, and integrating each component of F separately. We must show that the result is independent of the choice of basis; but that is
straightforward. The case of a function with values in hom(U, V), where U, V are finite-dimensional vector spaces over k, may be regarded as a particular case of this, since we can regard hom(U, V) as itself a vector space over k.
There is one other point that arises: in this proof (and elsewhere) we often encounter double sums

Σ_{g∈G} Σ_{h∈G} f(g, h)

over G. The easiest way to extend such an argument to compact groups is to consider the corresponding integral

∫_{G×G} f(g, h) d(g, h)

of the continuous function f(g, h) over the product group G × G.
In such a case, let us set

F(g) = ∫_{h∈G} f(g, h) dh

for each g ∈ G. Then it is readily shown that F(g) is continuous, so that we can compute

I = ∫_{g∈G} F(g) dg.

But then it is not hard to see that I = I(f) defines a second Haar measure on G × G; so we deduce from the uniqueness of this measure that

∫_{G×G} f(g, h) d(g, h) = ∫_{g∈G} ( ∫_{h∈G} f(g, h) dh ) dg.
This result allows us to deal with all the manipulations that arise (such as reversal of the order of integration). For example, in our proof of the result above that the averaged projection π̄ is itself a projection, we argue as follows:

π̄² = ( ∫_{g∈G} g π g⁻¹ dg ) ( ∫_{h∈G} h π h⁻¹ dh )
    = ∫_{(g,h)∈G×G} g π g⁻¹ h π h⁻¹ d(g, h)
    = ∫_{(g,h)∈G×G} g (g⁻¹h) π h⁻¹ d(g, h)

(using the fact that π x π = x π for each x ∈ G, since U = im π is stable under G). Thus

π̄² = ∫_{(g,h)∈G×G} h π h⁻¹ d(g, h)
    = ( ∫_{g∈G} dg ) ( ∫_{h∈G} h π h⁻¹ dh )
    = π̄.
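For a finite group the integral is just a finite average, and the whole argument can be checked numerically. A minimal sketch (a hypothetical example, not from the notes): the cyclic group of order 4 acting on C², with π a non-equivariant projection onto a stable line.

```python
import numpy as np

# Hypothetical example (not from the notes): G = C4, generated by a quarter
# turn, acting on C^2.  U = span{(1, i)} is stable, since g(1, i)^T = -i(1, i)^T.
g = np.array([[0, -1], [1, 0]], dtype=complex)
powers = [np.linalg.matrix_power(g, k) for k in range(4)]

u = np.array([[1], [1j]])       # spans the stable line U
v = np.array([[1, 0]])          # v @ u = 1, so pi below is a projection onto U,
pi = u @ v                      # but not a G-equivariant one

# The average of the conjugates g pi g^{-1}, as in the finite-group argument.
pi_bar = sum(h @ pi @ np.linalg.inv(h) for h in powers) / 4
```

The assertions one would check are exactly those of the text: π̄ is again a projection onto U, and now commutes with the action of G, so that ker π̄ is a stable complement.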
3.7 Uniqueness and the Intertwining Number
The definition of the intertwining number I(α, β) does not presuppose that G is finite, and so extends to the compact case, as do all the results of this chapter.
3.8 The Character of a Representation
The definition of the character of a finite-dimensional representation does not depend in any way on the finiteness of the group, and so extends to the compact case.
There is one result, however, which extends to this case, but whose proof requires a little more thought.
Proposition 3.1 Suppose α is an n-dimensional representation of a compact group G over R or C; and suppose g ∈ G. Let the eigenvalues of α(g) be λ₁, …, λₙ. Then

|λᵢ| = 1  (i = 1, …, n).

Proof We know that there exists an invariant inner product ⟨u, v⟩ on the representation-space V. We can choose a basis for V so that

⟨v, v⟩ = |x₁|² + ⋯ + |xₙ|²,

where v = (x₁, …, xₙ). Suppose α(g)v = λv. Then

⟨v, v⟩ = ⟨α(g)v, α(g)v⟩ = ⟨λv, λv⟩ = |λ|² ⟨v, v⟩
⟹ |λ| = 1.

Hence

λ⁻¹ = λ̄

for each such eigenvalue.
Alternative proof Recall how we proved this in the finite case. By Lagrange's Theorem gᵐ = 1 for some m > 0, for each g ∈ G. Hence

α(g)ᵐ = I;

and so the eigenvalues of α(g) all satisfy

λᵐ = 1.

In particular

|λ| = 1;

and so

λ⁻¹ = λ̄.
We cannot say that an element g in a compact group G is necessarily of finite order. However, we can show that the powers gⁿ of g approach arbitrarily close to the identity e ∈ G. (In other words, some subsequence of g, g², g³, … tends to e.)

For suppose not. Then we can find an open set U ∋ e such that no power of g except g⁰ = e lies in U. Let V be an open neighbourhood of e such that V V⁻¹ ⊂ U. Then the subsets gⁿV are disjoint. For

x ∈ gᵐV ∩ gⁿV ⟹ x = gᵐv₁ = gⁿv₂ ⟹ g^{n−m} = v₁v₂⁻¹ ∈ U,

contrary to hypothesis.

It follows [the details are left to the student] that the subgroup

⟨g⟩ = {…, g⁻¹, e, g, g², …}

is

1. discrete,
2. infinite, and
3. closed in G.

But this implies that G has a non-compact closed subgroup, which is impossible. Thus we can find a subsequence

1 ≤ n₁ < n₂ < …

such that

g^{nᵢ} → e

as i → ∞. It follows that

α(g)^{nᵢ} → I

as i → ∞. Hence if λ is any eigenvalue of α(g) then

λ^{nᵢ} → 1.

This implies in particular that

|λ| = 1.
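This density phenomenon is easy to observe numerically. A small sketch, with the assumption that θ/2π = √2 is irrational, so that g = e^{iθ} has infinite order in U(1):

```python
import cmath, math

# g = e^{i*theta} with theta/(2*pi) = sqrt(2) irrational, so g has infinite
# order; nevertheless some power of g comes very close to 1.
g = cmath.exp(2j * math.pi * math.sqrt(2))

# the power below 1000 that comes closest to the identity
best = min(range(1, 1000), key=lambda n: abs(g ** n - 1))
```

(The good powers are governed by the continued-fraction convergents of √2; for instance n = 408 already brings gⁿ within about 0.006 of 1.)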
It follows that

χ_α(g⁻¹) = χ̄_α(g)

for all g ∈ G.

Proof Suppose the eigenvalues of α(g) are λ₁, …, λₙ. Then the eigenvalues of α(g⁻¹) = α(g)⁻¹ are λ₁⁻¹, …, λₙ⁻¹. Thus

χ_α(g⁻¹) = tr α(g⁻¹)
         = λ₁⁻¹ + ⋯ + λₙ⁻¹
         = λ̄₁ + ⋯ + λ̄ₙ
         = conj(λ₁ + ⋯ + λₙ)
         = conj(tr α(g))
         = χ̄_α(g).
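A quick numerical illustration (not part of the text): for a unitary matrix U the trace of U⁻¹ is the conjugate of the trace of U, since each eigenvalue satisfies λ⁻¹ = λ̄.

```python
import numpy as np

# A random unitary matrix (the Q-factor of a QR decomposition).
rng = np.random.default_rng(0)
a = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
u_mat, _ = np.linalg.qr(a)

chi = np.trace(u_mat)                     # chi(g)
chi_inv = np.trace(np.linalg.inv(u_mat))  # chi(g^{-1})
```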
In the finite case we showed that the representations σᵢ × τⱼ provide all the simple representations of G × H. This argument fails in the compact case, since m and n are infinite (unless G or H is finite).
We must turn therefore to our second proof that a simple representation of G × H over C is necessarily of the form σ × τ. Recall that this alternative proof was based on the natural equivalence

hom(hom(V, U), W) = hom(V, U ⊗ W).

This proof does carry over to the compact case.
Suppose the representation-space of α is the G × H-space V. Consider V as a G-space (ie forget for the moment the action of H on V). Let U ⊂ V be a simple G-subspace of V. Then there exists a non-zero G-map t : V → U (since the G-space V is semisimple). Thus the vector space

X = hom_G(V, U)

formed by all such G-maps is non-zero.

Now H acts naturally on X:

(ht)(v) = t(hv).

Thus X is an H-space. Let W be a simple H-subspace of X. Then there exists a non-zero H-map u : X → W (since the H-space X is semisimple). Thus

hom_H(X, W) = hom_H( hom_G(V, U), W )

is non-zero. But it is readily verified that

hom_H( hom_G(V, U), W ) = hom_{G×H}(V, U ⊗ W).

Thus there exists a non-zero G × H-map T : V → U ⊗ W. Since V and U ⊗ W are both simple G × H-spaces, T must be an isomorphism:

V = U ⊗ W.

In particular

α = σ × τ,

where σ is the representation of G in U, and τ is the representation of H in W.

Thus if G and H are compact groups then every simple representation of G × H over C is of the form σ × τ.

[Can you see where we have used the fact that G and H are compact in our argument above?]
3.12 Real Representations
Everything in this chapter carries over to the compact case, with no especial problems arising.

Chapter 4
Representations of U(1)

The group U(1) goes under many names:

U(1) = SO(2) = S¹ = T¹ = R/Z.

Whatever it is called, U(1) is abelian, connected and, above all, compact. Since U(1) is abelian, every simple representation of U(1) (over C) is 1-dimensional.
Proposition 4.1 The simple representations of U(1) over C are precisely the 1-dimensional representations

E_n : e^{iθ} ↦ e^{inθ}  (n ∈ Z).

Proof Suppose α is a simple, and so 1-dimensional, representation of U(1); thus α is a continuous homomorphism U(1) → C^× = GL(1, C). Choose a large integer N, and set ω = e^{2πi/N}. Then

ω^N = 1 ⟹ α(ω)^N = 1.

It follows that

α(ω) = e^{2πni/N} = ω^n = E_n(ω)

for some n ∈ Z in the range −N/2 < n < N/2. We shall deduce from this that α = E_n.

Let

ω₁ = e^{πi/N}.

Then

ω₁² = ω ⟹ α(ω₁)² = α(ω) = ω^n ⟹ α(ω₁) = ω₁^n,

since this is the unique square root of ω^n in U.

Repeating this argument successively, we deduce that if

ωⱼ = e^{2πi/(2ʲN)}

then

α(ωⱼ) = ωⱼ^n = E_n(ωⱼ)

for j = 2, 3, 4, …. But it follows from this that

α(ωⱼ^k) = (ωⱼ^k)^n = E_n(ωⱼ^k)

for k = 1, 2, 3, …. In other words

α(e^{iθ}) = E_n(e^{iθ})

for all θ of the form

θ = 2πk/(2ʲN).

But these elements e^{iθ} are dense in U(1). Therefore, by continuity,

α(g) = E_n(g)

for all g ∈ U(1), ie α = E_n.
Alternative proof Suppose

α : U(1) → U(1)

is a representation of U(1) distinct from all the E_n. Then

I(E_n, α) = 0

for all n, ie

c_n = (1/2π) ∫₀^{2π} α(e^{iθ}) e^{−inθ} dθ = 0.

In other words, all the Fourier coefficients of α(e^{iθ}) vanish. But this implies (from Fourier theory) that the function itself must vanish, which is impossible since α(1) = 1.
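The vanishing condition above is just the statement that the Fourier coefficients pick out the E_n-component. A sketch computing the coefficients of α = E_m by a Riemann sum (the choice m = 3 and the grid size 1000 are arbitrary):

```python
import numpy as np

# Fourier coefficients of alpha = E_m, computed by a Riemann sum over [0, 2*pi).
m = 3
theta = np.linspace(0, 2 * np.pi, 1000, endpoint=False)

def fourier_coeff(n):
    # c_n = (1/2pi) * integral of alpha(e^{i theta}) * e^{-i n theta} dtheta
    return np.mean(np.exp(1j * m * theta) * np.exp(-1j * n * theta))

coeffs = {n: fourier_coeff(n) for n in range(-5, 6)}
```

As expected, c_n = δ_{mn}: the equally spaced Riemann sum is exact here up to rounding, since each integrand is a pure exponential.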
Remark: As this proof suggests, the representation theory of U(1) is just the
Fourier theory of periodic functions in disguise. (In fact, the whole of group rep-
resentation theory might be described as a kind of generalised Fourier analysis.)
Let ρ denote the representation of U(1) in the space C(U(1)) of continuous functions f : U(1) → C, with the usual action: if g = e^{iφ} then

(gf)(e^{iθ}) = f(e^{i(θ−φ)}).

The Fourier series

f(e^{iθ}) = Σ_{n∈Z} c_n e^{inθ}

expresses the splitting of C(U(1)) into 1-dimensional spaces

C(U(1)) = ⊕ V_n,

where

V_n = ⟨e^{inθ}⟩ = {c e^{inθ} : c ∈ C}.

Notice that with our definition of the group action, the space V_n carries the representation E_{−n}, rather than E_n. For if g = e^{iφ}, and f(e^{iθ}) = e^{inθ}, then

(gf)(e^{iθ}) = e^{in(θ−φ)} = e^{−inφ} f(e^{iθ}) = E_{−n}(g) f(e^{iθ}).

In terms of representations, the splitting of C(U(1)) may be written:

ρ = ⊕_{n∈Z} E_n.
We must confess at this point that we have gone out of bounds in these remarks, since the vector space C(G) is infinite-dimensional (unless G is finite), whereas all our results to date have been restricted to finite-dimensional representations. We shall see in Chapter 7 how we can justify this extension.
Chapter 5
Representations of SU(2)

5.1 Conjugacy in SU(n)

Since characters are class functions, our first step in studying the representations of a compact group G, as of a finite group, is to determine how G divides into conjugacy classes.

We know that if 2 matrices S, T ∈ GL(n, k) are similar, ie conjugate in GL(n, k), then they will have the same eigenvalues λ₁, …, λₙ. So this gives a necessary condition for conjugacy in any matrix group G ⊂ GL(n, k):

S ~ T (in G) ⟹ S, T have the same eigenvalues.

In general this condition is not sufficient: the matrices

( 1 1 )      ( 1 0 )
( 0 1 ),     ( 0 1 )

are not conjugate in GL(2, C), although both have eigenvalues 1, 1. However we shall see that the condition is sufficient in each of the classical compact matrix groups O(n), SO(n), U(n), SU(n), Sp(n).

Two remarks: Firstly, when speaking of conjugacy we must always be clear in what group we are taking conjugates. Two matrices S, T ∈ G ⊂ GL(n, k) may well be conjugate in GL(n, k) without being conjugate in G.

Secondly, the concepts of eigenvalue and eigenvector really belong to a representation of a group rather than the group itself. So for example, when we speak of an eigenvalue of T ∈ U(n) we really should (though we rarely shall) say an eigenvalue of T in the natural representation of U(n) in Cⁿ.

Lemma 5.1 The diagonal matrices in U(n) form a subgroup isomorphic to the torus group Tⁿ = U(1)ⁿ.
Proof We know that the eigenvalues of T ∈ U(n) have absolute value 1, since

Tv = λv ⟹ v*T* = λ̄v*
       ⟹ v*T*Tv = λ̄λ v*v
       ⟹ v*v = |λ|² v*v
       ⟹ |λ| = 1.

Thus the eigenvalues of T can be written in the form

e^{iθ₁}, …, e^{iθₙ}  (θ₁, …, θₙ ∈ R).

In particular the diagonal matrices in U(n) are just the matrices

diag(e^{iθ₁}, …, e^{iθₙ}).

It follows that the homomorphism

U(1)ⁿ → U(n) : (e^{iθ₁}, …, e^{iθₙ}) ↦ diag(e^{iθ₁}, …, e^{iθₙ})

maps U(1)ⁿ homeomorphically onto the diagonal subgroup of U(n), allowing us to identify the two:

U(1)ⁿ ⊂ U(n).
Lemma 5.2 Every unitary matrix T ∈ U(n) is conjugate (in U(n)) to a diagonal matrix:

T ~ D ∈ U(1)ⁿ.

Remark: You are probably familiar with this result: Every unitary matrix can be diagonalised by a unitary transformation. But it is instructive to give a proof in the spirit of representation theory.

Proof Let ⟨T⟩ denote the closed subgroup generated by T, ie the closure in U(n) of the group

{…, T⁻¹, I, T, T², …}

formed by the powers of T.

This group is abelian; and its natural representation in Cⁿ leaves invariant the standard positive-definite hermitian form |x₁|² + ⋯ + |xₙ|², since it consists of unitary matrices.

It follows that this representation splits into a sum of 1-dimensional representations, mutually orthogonal with respect to the standard form. If we choose a vector eᵢ of norm 1 in each of these 1-dimensional spaces we obtain an orthonormal set of eigenvectors of T. If U is the matrix of change of basis, ie

U = (e₁, …, eₙ),

then

U*TU = diag(e^{iθ₁}, …, e^{iθₙ}),

where

Teᵢ = e^{iθᵢ} eᵢ.
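Lemma 5.2 can be illustrated numerically. The sketch below builds a "random" unitary T as the Q-factor of a QR decomposition and diagonalises it with numpy's eigendecomposition; for a normal matrix with well-separated eigenvalues this returns an (approximately) orthonormal eigenbasis, which is the unitary U of the proof.

```python
import numpy as np

# A random 3x3 unitary matrix.
rng = np.random.default_rng(1)
a = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
t_mat, _ = np.linalg.qr(a)

# Eigendecomposition: for this normal matrix the eigenvectors are orthonormal,
# so u_mat plays the role of U in the lemma.
eigvals, u_mat = np.linalg.eig(t_mat)
d = u_mat.conj().T @ t_mat @ u_mat   # should be diag(e^{i*theta_1}, ..., e^{i*theta_n})
```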
Lemma 5.3 The diagonal matrices in SU(n) form a subgroup isomorphic to the torus group T^{n−1} = U(1)^{n−1}.

Proof If

T = diag(e^{iθ₁}, …, e^{iθₙ})

then

det T = e^{i(θ₁ + ⋯ + θₙ)}.

Hence

T ∈ SU(n) ⟺ θ₁ + ⋯ + θₙ ≡ 0 (mod 2π).

Thus the homomorphism

U(1)^{n−1} → SU(n) : (e^{iθ₁}, …, e^{iθ_{n−1}}) ↦ diag(e^{iθ₁}, …, e^{iθ_{n−1}}, e^{−i(θ₁ + ⋯ + θ_{n−1})})

maps U(1)^{n−1} homeomorphically onto the diagonal subgroup of SU(n), allowing us to identify the two:

U(1)^{n−1} ⊂ SU(n).
Every matrix T ∈ SU(n) is likewise conjugate in SU(n) to a diagonal matrix. For by Lemma 5.2,

U*TU = D  (U ∈ U(n)).

We know that |det U| = 1, say

det U = e^{iθ}.

Let

V = e^{−iθ/n} U.

Then V ∈ SU(n); and

V*TV = D.

Similarly, if S, T ∈ SU(n) are conjugate in U(n), say

T = U*SU

for some U ∈ U(n), suppose det U = e^{iθ}, and let V = e^{−iθ/n}U. Then V ∈ SU(n); and

T = V*SV,

so S and T are already conjugate in SU(n).

5.2 Representations of SU(2)

The group SU(2) acts naturally on C², by matrix multiplication:

( z )       ( z )
( w )  ↦  T ( w )    (T ∈ SU(2)).
Explicitly, recall that the matrices T ∈ SU(2) are just those of the form

U = ( a   b )
    ( −b̄  ā )    (|a|² + |b|² = 1).

Taking T in this form, its action is given by

(z, w) ↦ (az + bw, −b̄z + āw).

By extension, this change of variable defines an action of SU(2) on polynomials P(z, w) in z and w:

P(z, w) ↦ P(az + bw, −b̄z + āw).

Definition 5.1 For each half-integer j = 0, 1/2, 1, 3/2, … we denote by Dⱼ the representation of SU(2) in the space

V(j) = ⟨z^{2j}, z^{2j−1}w, …, w^{2j}⟩

of homogeneous polynomials in z, w of degree 2j.

Example: Let j = 3/2. The 4 polynomials

z³, z²w, zw², w³

form a basis for V(3/2). Consider the action of the matrix

T = ( 0 i )
    ( i 0 )  ∈ SU(2).

We have

T(z³) = (iw)³ = −iw³,
T(z²w) = (iw)²(iz) = −izw²,
T(zw²) = (iw)(iz)² = −iz²w,
T(w³) = (iz)³ = −iz³.

Thus under D_{3/2},

( 0 i )      (  0   0   0  −i )
( i 0 )  ↦  (  0   0  −i   0 )
             (  0  −i   0   0 )
             ( −i   0   0   0 ).
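The example above can be reproduced mechanically. The sketch below (pure standard library) stores a homogeneous polynomial as a dict of coefficients and applies the substitution z ↦ az + bw, w ↦ −b̄z + āw, recovering the matrix of D_{3/2} for T = [[0, i], [i, 0]]; the basis ordering follows the text.

```python
from math import comb

# A homogeneous polynomial of degree 2j is stored as {k: coeff}, meaning
# sum_k coeff * z^k * w^(2j - k).

def act(a, b, poly, two_j):
    # Apply the substitution z -> a z + b w, w -> -conj(b) z + conj(a) w.
    out = {k: 0 for k in range(two_j + 1)}
    for k, c in poly.items():
        m = two_j - k
        # expand (a z + b w)^k * (-conj(b) z + conj(a) w)^m by the binomial theorem
        for p in range(k + 1):
            for q in range(m + 1):
                coeff = (comb(k, p) * a ** p * b ** (k - p)
                         * comb(m, q) * (-b.conjugate()) ** q
                         * a.conjugate() ** (m - q))
                out[p + q] += c * coeff
    return out

def matrix_of(a, b, two_j):
    # columns indexed by the basis z^(2j), z^(2j-1) w, ..., w^(2j)
    cols = []
    for k in range(two_j, -1, -1):
        img = act(a, b, {k: 1}, two_j)
        cols.append([img[i] for i in range(two_j, -1, -1)])
    return [list(row) for row in zip(*cols)]  # rows in the same basis order

# The example in the text: T = [[0, i], [i, 0]], i.e. a = 0, b = i, on V(3/2).
mat = matrix_of(0, 1j, 3)
```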
Proposition 5.1 The character χⱼ of Dⱼ is given by the following rule: Suppose T has eigenvalues e^{±iθ}. Then

χⱼ(T) = e^{2ijθ} + e^{2i(j−1)θ} + ⋯ + e^{−2ijθ}.

Proof We know that

T ~ U(θ) = ( e^{iθ}   0      )
           ( 0        e^{−iθ} ).

Hence

χⱼ(T) = χⱼ(U(θ)).

The result follows on considering the action of U(θ) on the basis z^{2j}, …, w^{2j} of V(j). For

U(θ) z^k w^{2j−k} = (e^{iθ}z)^k (e^{−iθ}w)^{2j−k} = e^{2i(k−j)θ} z^k w^{2j−k}.

Thus under Dⱼ,

U(θ) ↦ diag( e^{2ijθ}, e^{2i(j−1)θ}, …, e^{−2ijθ} ),

whence

χⱼ(U(θ)) = e^{2ijθ} + e^{2i(j−1)θ} + ⋯ + e^{−2ijθ}.
Proposition 5.2 The representation Dⱼ of SU(2) is simple.

Proof Suppose U ⊆ V(j) is a non-zero stable subspace. [Since U is stable under the diagonal matrices U(θ), whose eigenvectors in V(j) are the monomials z^k w^{2j−k} with distinct eigenvalues, U contains at least one monomial; and we may suppose that z^{2j} ∈ U.] Taking

T = (1/√2) (  1 1 )
            ( −1 1 )

(almost any T would do) we see that

(z + w)^{2j} = z^{2j} + 2j z^{2j−1}w + ⋯ + w^{2j} ∈ U.

Each of the monomials of degree 2j occurs here with non-zero coefficient. It follows that each of these monomials must be in U:

z^{2j−k} w^k ∈ U for all k.

Hence U = V(j), ie Dⱼ is simple.
Proposition 5.3 The Dⱼ are the only simple representations of SU(2).

Proof Suppose α is a simple representation of SU(2) distinct from the Dⱼ. Then in particular

I(α, Dⱼ) = 0.

In other words, χ_α is orthogonal to each χⱼ.

Consider the restriction of α to the diagonal subgroup U(1). Suppose

α|U(1) = Σⱼ nⱼ Eⱼ,

where of course all but a finite number of the nⱼ vanish (and the rest are positive integers). It follows that

χ_α(U(θ)) = Σⱼ nⱼ e^{ijθ}.

Lemma 5.6 For any representation α of SU(2),

n_{−j} = nⱼ,

ie Eⱼ and E_{−j} occur with the same multiplicity in α|U(1).

Proof This follows at once from the fact that

U(−θ) ~ U(θ)

in SU(2).

Since n_{−j} = nⱼ, we see that

χ_α(U(θ)) = Σⱼ cⱼ χⱼ(U(θ)).

Since each T ∈ SU(2) is conjugate to some U(θ) it follows that

χ_α(T) = Σⱼ cⱼ χⱼ(T)

for all T ∈ SU(2). But this contradicts the proposition that the simple characters are linearly independent (since they are orthogonal).
We know that every finite-dimensional representation of SU(2) is semisimple. In particular, each product Dⱼ Dₖ is expressible as a sum of simple representations, ie as a sum of Dₙ's.
Theorem 5.1 (The Clebsch-Gordan formula) For any pair of half-integers j, k

Dⱼ Dₖ = D_{j+k} + D_{j+k−1} + ⋯ + D_{|j−k|}.

Proof We may suppose that j ≥ k. Suppose T has eigenvalues e^{±iθ}. For any 2 half-integers a, b such that a ≤ b, b − a ∈ N, let

L(a, b) = e^{2iaθ} + e^{2i(a+1)θ} + ⋯ + e^{2ibθ}.

(We may think of L(a, b) as a ladder linking a to b on the axis, with rungs at every step: a, a + 1, a + 2, …, b.) Thus

χⱼ(θ) = L(−j, j);

and so

χ_{DⱼDₖ}(T) = χⱼ(θ) χₖ(θ) = L(−j, j) L(−k, k).

We have to show that

L(−j, j)L(−k, k) = L(−j−k, j+k) + L(−j−k+1, j+k−1) + ⋯ + L(−j+k, j−k).

We argue by induction on k. The result holds trivially for k = 0. By our inductive hypothesis,

L(−j, j)L(−k+1, k−1) = L(−j−k+1, j+k−1) + ⋯ + L(−j+k−1, j−k+1).

Now

L(−k, k) = L(−k+1, k−1) + (e^{−2ikθ} + e^{2ikθ}).

But

L(−j, j) e^{−2ikθ} = L(−j−k, j−k),
L(−j, j) e^{2ikθ} = L(−j+k, j+k).

Thus

L(−j, j)(e^{−2ikθ} + e^{2ikθ}) = L(−j−k, j−k) + L(−j+k, j+k)
                               = L(−j−k, j+k) + L(−j+k, j−k).

Gathering our ladders together,

L(−j, j)L(−k, k) = L(−j−k+1, j+k−1) + ⋯ + L(−j+k−1, j−k+1)
                   + L(−j−k, j+k) + L(−j+k, j−k)
                 = L(−j−k, j+k) + ⋯ + L(−j+k, j−k),

as required.
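The character identity proved above is easy to test numerically; a sketch for the pair j = 3/2, k = 1 (an arbitrary choice) at a few arbitrary angles:

```python
import numpy as np

def chi(j, theta):
    # character of D_j at an element with eigenvalues e^{+-i*theta}:
    # sum of e^{2i*m*theta} for m = j, j-1, ..., -j
    return sum(np.exp(2 * 1j * (j - r) * theta) for r in range(round(2 * j) + 1))

theta = np.linspace(0.1, 3.0, 7)   # a few test angles
j, k = 1.5, 1.0
lhs = chi(j, theta) * chi(k, theta)
rhs = sum(chi(l, theta) for l in [j + k - r for r in range(round(2 * min(j, k)) + 1)])
```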
Proposition 5.4 The representation Dⱼ of SU(2) is real for integral j and quaternionic for half-integral j.

Proof The character

χⱼ(θ) = e^{2ijθ} + e^{2i(j−1)θ} + ⋯ + e^{−2ijθ}

is real, since

χ̄ⱼ(θ) = e^{−2ijθ} + e^{−2i(j−1)θ} + ⋯ + e^{2ijθ} = χⱼ(θ).

Thus Dⱼ (which we know to be simple) is either real or quaternionic.

A quaternionic representation always has even dimension; for it carries an invariant non-singular skew-symmetric form, and such a form can only exist in even dimension, since it can be reduced to the form

x₁y₂ − x₂y₁ + x₃y₄ − x₄y₃ + ⋯.

But

dim Dⱼ = 2j + 1

is odd for integral j. Hence Dⱼ must be real in this case.
Lemma 5.7 The representation D_{1/2} is quaternionic.

Proof of Lemma Suppose D_{1/2} were real, say

D_{1/2} = Cρ,

where

ρ : SU(2) → GL(2, R)

is a 2-dimensional representation of SU(2) over R. We know that this representation carries an invariant positive-definite form. By change of coordinates we can bring this to x₁² + x₂², so that

im ρ ⊂ O(2).

Moreover, since SU(2) is connected, so is its image. Hence

im ρ ⊂ SO(2).

Thus ρ defines a homomorphism

SU(2) → SO(2) = U(1),

ie a 1-dimensional representation of SU(2), which must in fact be D₀ = 1. It follows that ρ = 1 + 1, contradicting the simplicity of D_{1/2}.
Remark: It is worth noting that the representation D_{1/2} is quaternionic in its original sense, in that it arises from a representation in a quaternionic vector space. To see this, recall that

SU(2) = Sp(1) = {q ∈ H : |q| = 1}.

The symplectic group Sp(1) acts naturally on H, by left multiplication:

(g, q) ↦ gq  (g ∈ Sp(1), q ∈ H).

(We take scalar multiplication in quaternionic vector spaces on the right.) It is easy to see that this 1-dimensional representation over H gives rise, on restriction of scalars, to a simple 2-dimensional representation over C, which must be D_{1/2}.
It remains to prove that Dⱼ is quaternionic for half-integral j > 1/2. Suppose in fact Dⱼ were real; and suppose this were the first half-integral j with that property. Then

Dⱼ D₁ = D_{j+1} + Dⱼ + D_{j−1}

would also be real (since the product of 2 real representations is real). But D_{j−1} is quaternionic, by assumption, and so must appear with even multiplicity in any real representation. This is a contradiction; so Dⱼ must be quaternionic for all half-integral j.
Alternative Proof Recall that if α is a simple representation then

∫ χ_α(g²) dg = {  1 if α is real,
                  0 if α is essentially complex,
                 −1 if α is quaternionic.

Let α = Dⱼ. Suppose g ∈ SU(2) has eigenvalues e^{±iθ}. Then g² has eigenvalues e^{±2iθ}, and so

χⱼ(g²) = e^{4ijθ} + e^{4i(j−1)θ} + ⋯ + e^{−4ijθ}
       = χ_{2j}(g) − χ_{2j−1}(g) + ⋯ + (−1)^{2j} χ₀(g).

Thus

∫ χⱼ(g²) dg = ∫ χ_{2j}(g) dg − ∫ χ_{2j−1}(g) dg + ⋯ + (−1)^{2j} ∫ χ₀(g) dg
            = I(1, D_{2j}) − I(1, D_{2j−1}) + ⋯ + (−1)^{2j} I(1, D₀)
            = (−1)^{2j} I(1, 1)
            = { +1 if j is integral,
               −1 if j is half-integral.
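The integral can be evaluated numerically, assuming the Weyl integration formula for class functions on SU(2) (∫ f dg = (2/π)∫₀^π f(θ) sin²θ dθ, which is not derived in this chapter):

```python
import numpy as np

def chi(j, theta):
    # character of D_j at an element with eigenvalues e^{+-i*theta}
    return sum(np.exp(2 * 1j * (j - r) * theta) for r in range(round(2 * j) + 1))

def fs_indicator(j, n=100000):
    # (2/pi) * integral_0^pi chi_j(2*theta) * sin^2(theta) dtheta, midpoint rule;
    # g with eigenvalues e^{+-i*theta} has g^2 with eigenvalues e^{+-2i*theta}.
    theta = (np.arange(n) + 0.5) * np.pi / n
    vals = chi(j, 2 * theta) * np.sin(theta) ** 2
    return 2 * vals.mean()
```

The indicator comes out +1 for integral j and −1 for half-integral j, as the proof asserts.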
Chapter 6
Representations of SO(3)

Definition 6.1 A covering of one topological group G by another C is a continuous homomorphism

Θ : C → G

such that

1. ker Θ is discrete;
2. Θ is surjective, ie im Θ = G.

Proposition 6.1 A discrete subgroup is necessarily closed.

Proof Suppose S ⊂ G is a discrete subgroup. Then by definition we can find an open subset U ⊂ G such that

U ∩ S = {1}.

(For if S is discrete then {1} is open in the induced topology on S, ie it is the intersection of an open set in G with S.)

We can find an open set V ⊂ G containing 1 such that

V⁻¹V ⊂ U,

ie v₁⁻¹v₂ ∈ U for all v₁, v₂ ∈ V. This follows from the continuity of the map

(x, y) ↦ x⁻¹y : G × G → G.

Now suppose g ∈ G ∖ S. We must show that there is an open set O ∋ g not intersecting S. The open set gV ∋ g contains at most 1 element of S. For suppose s, t ∈ gV ∩ S, say

s = gv₁, t = gv₂.

Then

s⁻¹t = v₁⁻¹v₂ ∈ U ∩ S.

Thus s⁻¹t = 1, ie s = t.

If gV ∩ S = ∅ then we can take O = gV. Otherwise, suppose gV ∩ S = {s}. We can find an open set W ⊂ G such that g ∈ W, s ∉ W; and then we can take O = gV ∩ W.
Corollary 6.1 A discrete subgroup of a compact group is necessarily finite.

Remark: We say that Θ : C → G is an n-fold covering if |ker Θ| = n.
Proposition 6.2 Suppose Θ : C → G is a surjective (and continuous) homomorphism of topological groups. Then

1. Each representation α of G in V defines a representation α̃ of C in V by the composition

α̃ : C → G → GL(V)

of Θ with α.

2. If the representations α₁, α₂ of G define the representations α̃₁, α̃₂ of C in this way then

α̃₁ = α̃₂ ⟺ α₁ = α₂.

3. With the same notation,

(α₁ + α₂)~ = α̃₁ + α̃₂, (α₁α₂)~ = α̃₁α̃₂, (α*)~ = (α̃)*.

4. A representation β of C arises in this way from a representation of G if and only if it is trivial on ker Θ, ie

g ∈ ker Θ ⟹ β(g) = I.

5. The representation α̃ is simple if and only if α is simple.

Remark: We can express this succinctly by saying that the representation-ring of G is a sub-ring of the representation-ring of C:

R(G, k) ⊂ R(C, k).
We can identify a representation of G with the corresponding representation of C, and so regard R(G, k) as a subring of R(C, k).

[We now construct a covering Θ : SU(2) → SO(3).] Let H denote the space of 2 × 2 hermitian matrices A* = A. If U ∈ SU(2) and A ∈ H then

(U*AU)* = U*A*U = U*AU ∈ H.

Thus the action

(U, A) ↦ U*AU = U⁻¹AU

of SU(2) on H defines a 4-dimensional real representation of SU(2).
This is not quite what we want; we are looking for a 3-dimensional representation. Let

T = {A ∈ H : tr A = 0}

denote the subspace of H formed by the trace-free hermitian matrices, ie those of the form

A = ( x        y − iz )
    ( y + iz   −x     )    (x, y, z ∈ R).

These constitute a 3-dimensional real vector space; and since

tr(U*AU) = tr(U⁻¹AU) = tr A,

this space is stable under the action of SU(2). Thus we have constructed a 3-dimensional representation of SU(2) over R, defined by a homomorphism

Θ : SU(2) → GL(3, R).
The determinant defines a negative-definite quadratic form on T, since

det ( x        y − iz )
    ( y + iz   −x     )  = −x² − y² − z².

Moreover this quadratic form is left invariant by the action of SU(2), since

det(U*AU) = det(U⁻¹AU) = det A.

In other words, SU(2) acts by orthogonal transformations on T, so that

im Θ ⊂ O(3).

Moreover, since SU(2) is connected, its image must also be connected, and so

im Θ ⊂ SO(3).

We use the same symbol to denote the resulting homomorphism

Θ : SU(2) → SO(3).

We have to show that this homomorphism is a covering.
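The homomorphism can be written down explicitly in coordinates. A numerical sketch, using the Pauli-type basis of the trace-free hermitian matrices corresponding to (x, y, z):

```python
import numpy as np

# Basis of the trace-free hermitian 2x2 matrices, matching the coordinates
# (x, y, z) of the text: A = x*e0 + y*e1 + z*e2.
basis = [np.array([[1, 0], [0, -1]], dtype=complex),
         np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex)]

def rho(u):
    # The 3x3 real matrix of A -> U* A U; coordinates are read off with the
    # inner product <A, B> = tr(AB)/2, for which the basis above is orthonormal.
    cols = []
    for e in basis:
        img = u.conj().T @ e @ u
        cols.append([np.trace(b @ img).real / 2 for b in basis])
    return np.array(cols).T

theta = 0.3   # arbitrary test angle
u_theta = np.diag([np.exp(1j * theta), np.exp(-1j * theta)])
r = rho(u_theta)
```

The checks below confirm the claims of the text: Θ(U) is orthogonal with determinant 1, Θ(−U) = Θ(U) (so ±I lie in the kernel), and U(θ) maps to a rotation through 2θ fixing the x-axis.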
Lemma 6.1 ker Θ = {±I}.

Proof of Lemma Suppose U ∈ ker Θ. In other words,

U⁻¹AU = A

for all A ∈ T. In fact this will hold for all hermitian matrices A ∈ H, since

H = T ⊕ ⟨I⟩.

But now the result holds also for all skew-hermitian matrices, since they are of the form iA, with A hermitian. Finally the result holds for all matrices A ∈ GL(2, C), since every matrix is a sum of hermitian and skew-hermitian parts:

A = ½(A + A*) + ½(A − A*).

Since

U⁻¹AU = A ⟺ AU = UA,

we are looking for matrices U which commute with all 2 × 2 matrices A. It is readily verified that the only such matrices are the scalar multiples of the identity, ie

U = λI.

But now,

U ∈ SU(2) ⟹ det U = 1 ⟹ λ² = 1 ⟹ λ = ±1.
To see that Θ is surjective, consider first the effect of U(θ) = diag(e^{iθ}, e^{−iθ}) on A ∈ T:

U(θ)* A U(θ) = ( e^{−iθ}  0      ) ( x        y − iz ) ( e^{iθ}  0       )
               ( 0        e^{iθ} ) ( y + iz   −x     ) ( 0       e^{−iθ} )

             = ( x                 e^{−2iθ}(y − iz) )
               ( e^{2iθ}(y + iz)   −x               )

             = ( X        Y − iZ )
               ( Y + iZ   −X     ),

where

X = x,
Y = cos 2θ · y − sin 2θ · z,
Z = sin 2θ · y + cos 2θ · z.

Thus U(θ) induces a rotation in the space T through 2θ about the Ox-axis, say

Θ(U(θ)) = R(2θ, Ox).

As another example, let

V = (1/√2) (  1 1 )
            ( −1 1 ).

In this case

V* A V = ( −y       x − iz )
         ( x + iz   y      )  = ( X        Y − iZ )
                                ( Y + iZ   −X     ),

where

X = −y, Y = x, Z = z.

Thus Θ(V) is a rotation through π/2 about Oz.
It is sufficient now to show that the rotations R(θ, Ox) about the x-axis, together with T = R(π/2, Oz), generate the group SO(3). Since Θ is a homomorphism, V U(θ) V⁻¹ maps onto

T R(2θ, Ox) T⁻¹ = R(2θ, T(Ox)) = R(2θ, Oy).

Thus im Θ contains all rotations about Ox and about Oy. It is easy to see that these generate all rotations. For consider the rotation R(φ, l) about the axis l. We can find a rotation S about Ox bringing the axis l into the plane Oxz; and then a rotation T about Oy bringing l into the coordinate axis Ox. Thus

T S R(φ, l) (T S)⁻¹ = R(φ, Ox);

and so

R(φ, l) = S⁻¹ T⁻¹ R(φ, Ox) T S.
2. We shall see in Part 4 that the space T (or more accurately the space iT) is just the Lie algebra of the group SU(2). Every Lie group acts on its own Lie algebra. This is the genesis of the homomorphism Θ.
Proposition 6.4 The simple representations of SO(3) are the representations Dⱼ for integral j:

D₀ = 1, D₁, D₂, ….

Proof We have established that the simple representations of SO(3) are just those Dⱼ which are trivial on ±I. But under −I,

(z, w) ↦ (−z, −w);

and so if P(z, w) is a homogeneous polynomial of degree 2j,

P(−z, −w) = (−1)^{2j} P(z, w).

Thus −I acts trivially on V(j) if and only if 2j is even, ie j is integral.
The following result is almost obvious.

Proposition 6.5 Let ρ be the natural representation of SO(3) in R³. Then

Cρ = D₁.

Proof To start with, ρ is simple. For if it were not, it would have a 1-dimensional sub-representation. In other words, we could find a direction in R³ sent into itself by every rotation, which is absurd.

It follows that Cρ is simple. For otherwise it would split into 2 conjugate parts, which is impossible since its dimension is odd.

The result follows since D₁ is the only simple representation of dimension 3.
Chapter 7
The Peter-Weyl Theorem

7.1 The finite case

Suppose G is a finite group. Recall that

C(G) = C(G, C)

denotes the Banach space of maps f : G → C, with the norm

‖f‖ = sup_{g∈G} |f(g)|.

(For simplicity we restrict ourselves to the case of complex scalars: k = C.)

The group G acts on C(G) on both the left and the right. These actions can be combined to give an action of G × G:

((g, h)f)(x) = f(g⁻¹xh).

Recall that the corresponding representation τ of G × G splits into simple parts

τ = σ₁* × σ₁ + ⋯ + σₛ* × σₛ,

where σ₁, …, σₛ are the simple representations of G (over C).
Suppose V is a G-space. We have a canonical isomorphism
hom(V, V ) = V
V.
Thus GG acts on hom(V, V ), with the rst factor acting on V
424II 71
7.1. THE FINITE CASE 424II 72
where t
(v) = ht(g
1
v).
The expression for above can be re-written as
C(G) hom(V
1
, V
1
) + + hom(V
s
, V
s
),
where V
1
C(G)
s
,
where
C(G)
hom(V
, V
).
Since the representations
1
C(G)
s
,
can equally well be regarded as the splitting of the G-space C(G) into its isotypic
parts.
Whichever way we look at it, we see that each function f(x) on G splits into
components f
is given by
=
1
[G[
gG
(g
1
)g.
It follows that
f
(x) =
1
[G[
gG
(g)f(gx).