Mathematics 281: Leonard Evens


ISP

Mathematics 281

Leonard Evens
Department of Mathematics
Northwestern University


© Leonard Evens 1992, 1993

This work is licensed under the Creative Commons By-Attribution Share-Alike 3.0 Unported
license, giving you permission to share and modify this work provided you attribute appropriately
and license any new work in a compatible way.

Preface

This text has been specially created for the first year mathematics course of the Integrated
Science Program at Northwestern University. Some of what we cover will be used in other
first year courses such as physics. Such topics will generally be introduced before they are
needed in the other courses and in a manner which emphasizes their relation to the subject
matter of those courses. For the most part, the rest of the subject matter is mathematics
which is used in later courses in the program, but on rare occasions we shall discuss some
points which are mainly of interest to mathematicians. Overall, the perspective is that of
a mathematician, which sometimes looks a bit different from that of a physicist or chemist.
Mathematicians will tend to emphasize care in formulation of concepts and the need to
prove mathematical statements by rigorous arguments, while scientists may concentrate on
the physical content of such statements. You should be aware, however, that the underlying
concepts are often the same, and you should make sure you understand how ideas introduced
in your different courses are related.

It is assumed that the student has mastered differential and integral calculus of one vari-
able as taught, for example, in a typical high school advanced placement calculus course.
A reasonable reference for this material is Edwards and Penney’s Calculus and Analytic
Geometry, 3rd edition or any similar calculus text.

How to learn from this text

You should try to work all the problems except those marked as optional. You may have
some trouble with some of the problems, but ultimately after discussions with fellow students
and asking questions of your professor or teaching assistant, you should understand how to
do them. You should write up the solutions and keep them for review. Some sections are
clearly marked as optional, and your professor will indicate some others which are not part
of the course. Such sections may contain proofs or other special topics which may be of
interest to you, so you should look at these sections to decide if you want to study them.
Some of these sections include material which you may want to come back to in connection
with more advanced courses.

Use of computers

You will be learning about computer programming in a separate course. On occasion,


you will be expected to make use of what you learn there to help you understand some
mathematical point. Also, you will have various computer resources available to help you
visualize some of the subject matter of the course, e.g., to graph curves and surfaces. You
should learn to make use of these resources.

A note on indefinite integrals

A brief comment on the treatment of indefinite integrals may be helpful. You probably
spent considerable time in your previous calculus course learning how to calculate indefinite

integrals (also called antiderivatives). That experience is useful in later work in so far as
it gives you some perspective on what is involved in integrating. However, for the most
part, finding indefinite integrals is a relatively mechanical process on which one does not
want to spend a lot of time. When you encounter an integral, if you don’t remember how
to do it right away, you should normally either look it up in a table of integrals or use a
symbolic manipulation program (such as Maple or Mathematica) to find it. Of course, there
are some occasions where that won’t suffice or where you get the wrong answer, so your
previous mastery of the subject will have to be brought to bear, but that will be unusual.
In this text, we have tried to encourage you in this attitude by letting Mathematica provide
indefinite integrals wherever possible. Some students object to this ‘appeal to authority’ for
an answer you can derive yourself. Its justification is that time is short and best reserved
for less routine tasks.
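To make this workflow concrete, here is a minimal sketch using the open-source SymPy library in Python (our choice of tool for illustration; the text itself uses Mathematica), together with the routine sanity check of differentiating the answer back:

    # Letting a symbolic program supply an antiderivative, then checking it.
    import sympy as sp

    x = sp.symbols('x')
    integrand = x * sp.exp(x) * sp.cos(x)   # an integral few remember offhand

    F = sp.integrate(integrand, x)          # the machine's antiderivative
    print(F)

    # The check is routine: F'(x) should reproduce the integrand.
    assert sp.simplify(sp.diff(F, x) - integrand) == 0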

Acknowledgments

Many of the problems were inspired by problems found in other introductory texts. In
particular, Edwards and Penney was used as a source for many of the calculus problems,
and Braun’s Differential Equations and Their Applications, 3rd edition was used as a source
for many of the differential equations and linear algebra problems. The problems weren’t
literally copied from those texts, and problems of this kind are typical of many
such texts, but to the extent that original ideas of the above authors were adapted, they
certainly deserve the credit. Most of the treatment of calculus and linear algebra is original
(to the extent that one can call any treatment of such material original), but parts of
the treatment of differential equations, particularly systems of differential equations, were
inspired by the approach of Braun.

I should like to thank Jason Jarzembowski who compiled most of the problems for Chapters
I through V. Michael R. Stein, Integrated Science Program Director, ensured that the text
would come to fruition by allocating needed resources to that end and by exhorting others to
get it done. Finally, I should like to thank my teaching assistant, John Gately, who helped
with the development of problem sets and timely printing of the text.

Leonard Evens, July, 1992

This document was originally typeset in AMS-TeX. It was re-typeset in LaTeX by Jason
Siefken in 2015.
Contents

I Vector Calculus 1
1 Vectors 3
1.1 Introduction 3
1.2 Kinematics and Vector Functions 11
1.3 The Dot Product 22
1.4 The Vector Product 27
1.5 Geometry of Lines and Planes 32
1.6 Integrating on Curves 43

2 Differential Equations, Preview 57


2.1 Some Elementary Differential Equations 57

3 Differential Calculus of Functions of n Variables 65


3.1 Graphing in Rⁿ 65
3.2 Limits and Continuity 75
3.3 Partial Derivatives 78
3.4 First Order Approximation and the Gradient 83
3.5 The Directional Derivative 91
3.6 Criteria for Differentiability 94
3.7 The Chain Rule 99
3.8 Tangents to Level Sets and Implicit Differentiation 108
3.9 Higher Partial Derivatives 115

4 Multiple Integrals 121


4.1 Introduction 121
4.2 Iterated Integrals 132
4.3 Double Integrals in Polar Coordinates 143
4.4 Triple Integrals 150
4.5 Cylindrical Coordinates 157
4.6 Spherical Coordinates 164


4.7 Two Applications 172


4.8 Improper Integrals 176
4.9 Integrals on Surfaces 183
4.10 The Change of Variables Formula 196
4.11 Properties of the Integral 203

5 Calculus of Vector Fields 209


5.1 Vector Fields 209
5.2 Surface Integrals for Vector Fields 214
5.3 Conservative Vector Fields 222
5.4 Divergence and Curl 233
5.5 The Divergence Theorem 236
5.6 Proof of the Divergence Theorem 241
5.7 Gauss’s Law and the Dirac Delta Function 244
5.8 Green’s Theorem 248
5.9 Stokes’s Theorem 255
5.10 The Proof of Stokes’s Theorem 264
5.11 Conservative Fields, Reconsidered 269
5.12 Vector Potential 274
5.13 Vector Operators in Curvilinear Coordinates 278

II Differential Equations 289


6 First Order Differential Equations 291
6.1 Differential Forms 291
6.2 Using Differential Forms 294
6.3 First Order Linear Equations 302
6.4 Applications to Population Problems 308
6.5 Existence and Uniqueness of Solutions 312
6.6 Graphical and Numerical Methods 320

7 Second Order Linear Differential Equations 327


7.1 Second Order Differential Equations 327
7.2 Linear Second Order Differential Equations 329
7.3 Homogeneous Second Order Linear Equations 332
7.4 Homogeneous Equations with Constant Coefficients 339
7.5 Complex Numbers 343
7.6 Complex Solutions of a Differential Equation 346
7.7 Oscillations 350
7.8 The Method of Reduction of Order 354
7.9 The Inhomogeneous Equation. Variation of Parameters 356

7.10 Finding a Particular Solution by Guessing 358


7.11 Forced Oscillations 362

8 Series 367
8.1 Series Solutions of a Differential Equation 367
8.2 Definitions and Examples 369
8.3 Series of Non-negative Terms 377
8.4 Alternating Series 385
8.5 Absolute and Conditional Convergence 389
8.6 Power Series 394
8.7 Analytic Functions and Taylor Series 403
8.8 More Calculations with Power Series 413
8.9 Multidimensional Taylor Series 420
8.10 Local Behavior of Functions and Extremal Points 426

9 Series Solution of Differential Equations 437


9.1 Power Series Solutions at Ordinary Points 437
9.2 Partial Differential Equations 445
9.3 Regular Singular Points and the Method of Frobenius 450
9.4 The Method of Frobenius. General Theory 460
9.5 Second Solutions in the Bad Cases 466
9.6 More about Bessel Functions 473

III Linear Algebra 477


10 Linear Algebra, Basic Notation 479
10.1 Systems of Differential Equations 479
10.2 Matrix Algebra 484
10.3 Formal Rules 491
10.4 Linear Systems of Algebraic Equations 493
10.5 Singularity, Pivots, and Invertible Matrices 501
10.6 Gauss-Jordan Reduction in the General Case 507
10.7 Vector Spaces 514
10.8 Linear Independence, Bases, and Dimension 520
10.9 Calculations in Rⁿ or Cⁿ 531

11 Determinants and Eigenvalues 537


11.1 Homogeneous Linear Systems of Differential Equations 537
11.2 Finding Linearly Independent Solutions 542
11.3 Definition of the Determinant 547
11.4 Some Important Properties of Determinants 555

11.5 Eigenvalues and Eigenvectors 562


11.6 Complex Roots 572
11.7 Repeated Roots and the Exponential of a Matrix 578
11.8 Generalized Eigenvectors 586

12 More about Linear Systems 599


12.1 The Fundamental Solution Matrix 599
12.2 Inhomogeneous Systems 604
12.3 Normal Modes 609
12.4 Real Symmetric and Complex Hermitian Matrices 621
12.5 The Principal Axis Theorem 627
12.6 Change of Coordinates and the Principal Axis Theorem 633
12.7 Classification of Conics and Quadrics 641
12.8 A Digression on Constrained Maxima and Minima 646

13 Nonlinear Systems 661


13.1 Introduction 661
13.2 Linear Approximation 671

Appendices 691

A Creative Commons Legal Text 693


Part I

Vector Calculus

Chapter 1

Vectors

1.1 Introduction

A vector is a quantity which is characterized by a magnitude and a direction. Many


quantities are best described by vectors rather than numbers. For example, when
driving a car, it may be sufficient to know your speed, which can be described by a
single number, but the motion of an airplane must be described by a vector quantity
called velocity, which takes into account its direction as well as its speed.

Ordinary numerical quantities are called scalars when we want to emphasize that
they are not vectors.

In print, vectors are almost always denoted by bold face symbols, e.g., F, but when
written, one uses a variety of mechanisms for distinguishing vectors from scalars.


One common such notation is →F. The magnitude of a vector F is denoted |F|.
Indicating its direction symbolically is a bit more difficult, and we shall discuss that
later.

Vectors are represented geometrically by directed line segments. The length of the
line segment is the magnitude of the vector and its direction is the direction of the
vector.

Note that parallel line segments of the same length and same direction represent
the same vector. In drawing pictures, there is a tendency to identify a directed line
segment with the vector it represents, but this can lead to confusion about how
vectors are used. In general, there are infinitely many directed line segments which
could be used to represent the same vector. You should always remember that you
are free to put the tail end of a vector at any point which might be convenient for
your purposes. We shall often use the notation →AB for the vector represented by
the directed line segment from A to B.

[Figure: two parallel directed segments of the same length and direction, from A to B and from A′ to B′, illustrating →AB = →A′B′.]

Many of the laws of physics relate vector quantities. For example, Newton's Second
Law
F = ma
relates two vector quantities: force F and acceleration a. The constant of
proportionality m is a scalar quantity called the mass.

Operations with vectors It is a familiar experience that when velocities are
combined, their magnitudes do not add. For example, if an airplane flies North at
600 mph against an east wind of 100 mph, the resultant velocity will be somewhat
west of north and its magnitude certainly won't be 700 mph. To calculate the
correct velocity in this case, we need vector addition, which is defined geometrically
as follows. Suppose a, b are vectors. To add them, choose directed line segments
representing them, with tails at the same point, and consider the vector represented
by the diagonal of the resulting parallelogram.

We call that vector a + b. This is called the parallelogram law of addition. There is
also another way to do the same thing called the triangle law of addition. See the
diagram.

[Figure: the parallelogram law and the triangle law for forming a + b from a and b.]

Note that for the triangle law, we place the tail of one directed line segment on the
head of the other, taking advantage of our freedom to place the tail where needed. If
the vectors a and b have the same or opposite directions, then the diagrams become
degenerate (collinear) figures. (You should draw several cases for yourself to make
sure you understand.)

As we saw in Newton's Law, we sometimes want to multiply a vector by a scalar.
This is defined as follows. If s is a scalar and v is a vector, then sv is the vector
with magnitude |s||v| and its direction is either the same as that of v, if s > 0, or
opposite to v, if s < 0.

[Figure: the scalar multiple sv alongside v, for s > 0 and for s < 0.]

Note that the above definition omits the case that s = 0. In that case, we run into a
problem, at least if we represent vectors by directed line segments. It does not make

sense to talk of the direction of a directed line segment with length zero. For this
reason, we introduce a special quantity we call the zero vector which has magnitude
zero and no well defined direction. It is not a vector as previously defined, but we
allow it as a degenerate case which is needed so operations with vectors will make
sense in all cases. The zero vector is denoted by a bold face zero 0 or in writing by
→0. With that notation, sv = 0 if s = 0. Note also that a degenerate case of the
parallelogram law yields v + 0 = v for any vector v.

You can subtract a vector a from a vector b by adding the opposite vector −a =
(−1)a. Study the accompanying diagram for some idea of how b − a might be
represented geometrically.

[Figure: two constructions giving the same vector b − a: adding −a to b, and the segment from the head of a to the head of b.]

Note that if a and b are represented by directed line segments with the same tail,
then the line segment from the end of a to the end of b represents b − a.

In the study of motion, which is of interest both in physics and in mathematics,


it is common to use the so called position vector (also called radius vector). Thus,
if a particle moves on a path, its position can be specified as the endpoint P of a
directed line segment from a common origin O. The position vector is r = →OP. In
this context, we are often interested in comparing the position vectors r1 and r2 of
the same particle at different times (or of two different particles). As the diagram
indicates, the difference r2 − r1 is associated with the directed line segment from
the first position to the second position. This is called the displacement vector , and
it would be the same for any choice of common origin O.

[Figure: position vectors r1 and r2 from O to points P1 and P2, with the displacement vector r2 − r1 from P1 to P2.]

The operations defined above satisfy the usual laws of algebra. Here are some of
them

(a + b) + c = a + (b + c) Associative Law
a + b = b + a Commutative Law
s(a + b) = sa + sb
(s + t)a = sa + ta
(st)a = s(ta)

There are several others which you can probably invent for yourself. These rules
are not too difficult to check from the definitions we gave, but we shall skip that
here. (You might try doing it yourself by drawing the appropriate diagrams.)

Components The geometric definitions above result in nice pictures and nourish
our intuition, but they are quite difficult to calculate with. To make that easier, it is
necessary to introduce coordinate systems and thereby reduce geometry to algebra
and arithmetic. We start with the case of vectors in the plane since it is easier to
visualize, but of course to deal with the the real world, we shall have to also extend
our notions to space.

Recall that a coordinate system in the plane is specified by choosing an origin O and
then choosing two perpendicular axes meeting at the origin. These axes are chosen
in some order so that we know which axis (usually the x-axis) comes first and which
(usually the y-axis) second. Note that there are many different coordinate systems
which could be used although we often draw pictures as if there were only one.

In physics, one often has to think carefully about the coordinate system because
choosing it appropriately may greatly simplify the resulting analysis. Note that the
axes are usually drawn with right hand orientation where the right angle from the
positive x-axis to the positive y-axis is in the counter-clockwise direction. However,
it would be equally valid to use the left hand orientation in which that angle is in the
clockwise direction. One can easily switch the orientation of a coordinate system
by reversing one of the axes. (The concept of orientation is quite fascinating and
it arises in mathematics, physics, chemistry, and even biology in many interesting
ways. Note that almost all of us base our intuitive concept of orientation on our
inborn notion of “right” versus “left”.)

[Figure: plane coordinate axes in the right hand orientation and in the left hand orientation.]

Given a vector v, the projections—of any directed line segment representing it—
onto the coordinate axes are called the components of the vector. The components of
v are often displayed as an ordered pair ⟨v1, v2⟩ where each component is numbered
according to the axis it is associated with. Another common notation is ⟨vx, vy⟩.
We shall use both.

Notice that the Pythagorean Theorem tells us that



|v| = √(v1² + v2²).

Our notation distinguishes between the coordinates (x, y) of a point P in the plane,
and the components ⟨vx, vy⟩ of a vector. This is to emphasize the difference between
a vector and a point. That distinction is a bit esoteric, particularly in the analytic
context, since a pair of numbers is just that no matter what notation we use. Hence,
you will find that many mathematics and physics books make no such distinction.
(Review in your mind the distinction between a point P and the position vector
→OP. What relation exists between the coordinates of P and the components of
→OP?)

[Figure: v resolved into components v1 and v2 along the unit vectors i and j.]

There is another common way to indicate components. Let i denote a vector of


length 1 pointing in the positive direction along the x-axis, and let j denote the
corresponding unit vector for the y-axis. (These are also commonly written x̂ and
ŷ.) Then the diagram makes clear that

v = v1 i + v2 j.

Vector operations are mirrored by corresponding operations for components. Thus,


if u has components ⟨u1, u2⟩ and v has components ⟨v1, v2⟩, then

u + v = u1 i + u2 j + v1 i + v2 j
= (u1 + v1)i + (u2 + v2)j

from which we conclude that the components of u + v are ⟨u1 + v1, u2 + v2⟩. (The
diagram below exhibits a more geometric argument.)

[Figure: triangle construction of u + v, showing the first components u1 and v1 adding to u1 + v1.]

(Note that I only drew one of the many diagrams needed to handle all possible
cases.) In words, we may restate the rule as follows

the components of the sum of two vectors are the sums of the components
of the vectors.

A similar argument shows that the components of sv are ⟨sv1, sv2⟩, i.e.,

the components of a scalar multiple of a vector are that multiple of the


components of the vector.

The same sort of considerations apply in space. To set up a coordinate system,


we first choose an origin O, and then choose, in some order, three mutually
perpendicular axes through O, each with a specified positive direction. These are
usually called the x-axis, the y-axis, and the z-axis. Since any two intersecting lines
in space determine a plane, we can think of the first two axes generating an x, y-
plane. Since the z-axis must be perpendicular to this plane, the line along which
it lies is completely determined, but we have two possible choices for its positive
direction. (See the diagram.)

[Figure: x- and y-axes with the two possible directions for the z-axis, labeled z (right) and z (left).]

A set of axes in space has the right hand orientation if when you point the fingers
of your right hand from the positive x-axis to the positive y-axis, your upright
thumb points in the direction of the positive z-axis. Otherwise, it has the left hand
orientation. As in the plane case, reversing the direction of one axis reverses the
orientation. Almost all authors today use the right hand orientation for coordinate
axes.

Given a set of coordinate axes, a point P in space is assigned coordinates (x, y, z)


as follows. Let the origin O and the point P be opposite vertices of a rectangular
box.

The coordinates x, y, and z are the (signed) magnitudes of the sides of this box.
Points with x > 0 are in front of the y, z-plane, and points with x < 0 are in back of
that plane. You should think out the possibilities for the signs of y and z. A point
has coordinate x = 0 if and only if it lies in the y, z-plane (in which case the “box”
is degenerate). Similarly, y = 0 characterizes the x, z-plane and z = 0 characterizes
the x, y-plane.
[Figure: the point P and the origin O as opposite vertices of a rectangular box, with the x-, y-, and z-axes.]

Our previous discussion of vectors generalizes in a more or less obvious way to


space. The components ⟨v1, v2, v3⟩ (sometimes ⟨vx, vy, vz⟩) of a vector v are ob-
tained by projecting a directed line segment representing the vector onto each of
the coordinate axes.

As before, we have

|v| = √(v1² + v2² + v3²).
(Look at the diagram for an indication of why this is so. It requires two applications
of the Pythagorean Theorem.) In addition, the same rules for components of sums
and scalar multiples apply as before except, of course, there is one more component
to worry about.

In space, in addition to i and j, we have a third unit vector k pointing along the
positive z-axis, and any vector can be resolved

v = v1 i + v2 j + v3 k

in terms of its components.

Often, to save writing, we shall not distinguish between a vector and its components,
and we will write

v = ⟨v1, v2⟩ in the plane,
v = ⟨v1, v2, v3⟩ in space.

This is an ‘abuse of notation’ since a vector is not the same as its set of components,
which generally depend on the choice of coordinate axes. But, it is a convenient
notation, and you usually won’t get in trouble if you use it with discretion.

Higher dimensions and Rⁿ One can't progress very far in the study of science
and mathematics without encountering a need for higher dimensional “vectors”.
For example, physicists have known since Einstein that the physical universe is best

thought of as a 4-dimensional entity called spacetime in which time plays a role


close to that of the 3 spatial coordinates. Since we don't have any way to deal
intuitively with any higher dimensional geometries, we must proceed by analogy
with two and three dimensions, and the easiest way to proceed is to generalize the
analytic approach by adding additional coordinates. Thus, in general, we consider
n-tuples
(x1 , x2 , . . . , xn )
where n can be any positive integer. The collection of all such n-tuples is denoted
Rⁿ, where the R refers to the fact that the entries (coordinates) are supposed to
be real numbers. From this perspective, it doesn’t make a whole lot of sense to
distinguish points from vectors, so the two terms are often used interchangeably.
Vector operations in Rⁿ may be defined in terms of components.

|(x1, x2, . . . , xn)| = √(x1² + x2² + · · · + xn²),
(x1, x2, . . . , xn) + (y1, y2, . . . , yn) = (x1 + y1, x2 + y2, . . . , xn + yn),
s(x1, x2, . . . , xn) = (sx1, sx2, . . . , sxn).
The case n = 1 yields “geometry” on a line, the cases n = 2 and n = 3 geometry
in the plane and in space, and the case n = 4 yields the geometry of “4-vectors”
which are used in the special theory of relativity. Larger values of n are used in a
variety of contexts, some of which we shall encounter later in this course.
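Because the operations in Rⁿ are defined componentwise, they are mechanical to compute. The following Python sketch (the helper names are ours, chosen for illustration) implements the three formulas above for n-tuples of any length:

    # Componentwise vector operations in R^n, following the formulas above.
    from math import sqrt

    def magnitude(x):
        # |(x1, ..., xn)| = sqrt(x1^2 + ... + xn^2)
        return sqrt(sum(xi * xi for xi in x))

    def add(x, y):
        # (x1, ..., xn) + (y1, ..., yn) = (x1 + y1, ..., xn + yn)
        return tuple(xi + yi for xi, yi in zip(x, y))

    def scale(s, x):
        # s(x1, ..., xn) = (s x1, ..., s xn)
        return tuple(s * xi for xi in x)

    v = (1.0, 2.0, -1.0)
    w = (2.0, -1.0, 0.0)
    print(magnitude(v))    # sqrt(6), about 2.449
    print(add(v, w))       # (3.0, 1.0, -1.0)
    print(scale(5.0, v))   # (5.0, 10.0, -5.0)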

Exercises for 1.1.

1. Find |a|, 5a − 2b, and −3b for each of the following vector pairs:
(a) a = 2i + 3j, b = 4i − 9j
(b) a = ⟨1, 2, −1⟩, b = ⟨2, −1, 0⟩
(c) a = 0, b = ⟨2, 3, 4⟩
(d) a = 3i − 4j + k, b = k
(e) a = ⟨cos t, sin t⟩, b = ⟨− sin t, cos t⟩
2. Find the vector of the form ai + bj + ck that is represented by an arrow from
the point P (7, 2, 9) to the point Q(−2, 1, 4).
3. Find unit vectors with the same direction as the vectors (a) ⟨−4, −2⟩, (b)
3i + 5j, (c) ⟨1, 3, −2⟩.
4. Show that if v is any non-zero vector, then u = v/|v| is a unit vector. Hint:
Use the following property for the magnitude of vectors, |sv| = |s||v|. (Note
that |s| means the absolute value of the scalar s, but |v| and |sv| mean the
magnitudes of the indicated vectors).
5. Show by direct calculation that the rule →AB + →BC + →CA = 0 holds for the
three points A(2, 1, 0), B(−4, 1, 3), and C(0, 12, 0). Can you prove the general
rule for any three points in space?

6. Show that if a vector v in the plane has components ⟨vx, vy⟩ then the scalar
multiple sv has components ⟨svx, svy⟩.
7. Use Newton’s Second Law, F = ma, to find the acceleration of a 10 kg box if
a 120 N force is applied in the horizontal direction. Draw the vector diagram.
Are F and a in the same direction? Is this always true? Why? What if the
force were applied at a 30° angle to the horizontal?
8. If an airplane flies with apparent velocity va relative to air, and the wind
velocity is denoted w, then the plane's true velocity relative to the ground is
vg = va + w. Draw the diagram to assure yourself of this.
(a) A farmer wishes to fly his crop duster at 80 km/h north over his fields. If
the weather vane atop the barn shows easterly winds at 10 km/h, what should
his apparent velocity, va be?
(b) What if the wind were northeasterly? southeasterly?
9. Suppose a right handed coordinate system has been set up in space. What
happens to the orientation of the coordinate system if you make the following
changes? (a) Change the direction of one axis. (b) Change the direction of
two axes. (c) Change the direction of all three axes. (d) Interchange the x
and y axes.
10. In body-centered crystals, a large atom (assumed spherical) is surrounded by
eight smaller atoms. If the structure is placed in a “box” that just contains
the large atom, the smaller atoms each occupy a corner. If the central atom
has radius R, what is the greatest atomic radius the smaller atoms may have?
11. Prove that the diagonals of a parallelogram bisect each other. (Hint: show
that the position vectors from the origin to the midpoints are equal).

1.2 Kinematics and Vector Functions

In physics, you will study the motion of particles, so we need to investigate here
the associated mathematics.

Example 1 You probably learned in high school that the path of a projectile moving
under the force of gravity near the earth is pretty close to a parabola.

If we choose coordinates x (for horizontal displacement) and y (for vertical displace-


ment), the path may be described by the equations
x = vx0 t
y = vy0 t − (1/2)gt²,
where t denotes time, vx0 and vy0 are the components of a vector v0 called the
initial velocity, and g is a constant giving the acceleration of gravity at the surface

of the Earth. This can be simplified if we combine the two coordinate functions of
t in a single vector quantity

r = xi + yj = (vx0 t)i + (vy0 t − (1/2)gt²)j.

[Figure: the parabolic path from O, with initial velocity v0 and position vector r to the point P with coordinates x and y.]

We can think of this as expressing r as a single vector function of time t: r = r(t).


(Note that r is just the position vector connecting the origin to the position of the
projectile at time t.)

In general, we can think of any vector function r = r(t) as describing the path of a
particle as it moves through space. For such a function we will have

r = x(t)i + y(t)j + z(t)k (1)

so giving a vector function is equivalent to giving three scalar functions x(t), y(t),
and z(t). The single vector equation (1) can also be written as three scalar equations

x = x(t)
y = y(t)
z = z(t)

If, as in Example 1, the motion is restricted to a plane, then, with a suitably chosen
coordinate system, we may omit one coordinate, e.g., we may write r = x(t)i+y(t)j,
or x = x(t), y = y(t).

Example 2. Uniform Circular Motion Let

x = R cos ωt
y = R sin ωt.

The (end of the) position vector r = xi + yj traces out a circle of radius R centered
at the origin.

To see this, note that x² + y² = R²(sin² ωt + cos² ωt) = R², so the path is certainly a
subset of that circle. The exact part of the circle traced out depends on which values
of t are prescribed (i.e., on the domain of the function). If t extends over an interval
of size 2π/ω (i.e., ωt extends over an interval of size 2π), the circle will be traced
exactly once, but for other domains, only part of the circle may be traced or the
circle might be traced several times. The constant ω determines the rate at which
the particle moves around the circle. You should try some representative values of
t, say with ω = 1, to convince yourself that the circle is traced counterclockwise if
ω > 0 and clockwise if ω < 0. (What about ω = 0?)

[Figure: the circle of radius R centered at O, with the position vector r to P making angle ωt with the positive x-axis.]

The vector equation for the circle would be r = (R cos ωt)i + (R sin ωt)j or, in
component notation, r = ⟨R cos ωt, R sin ωt⟩.

Example 3 Let

x = R cos ωt
y = R sin ωt
z = bt.

Then the position vector r = xi + yj + zk traces out a path in space. The projection
of this path in the x, y-plane (obtained by setting z = 0) is the circle described
in Example 2. At the same time, the particle is rising (assuming b > 0) in the
z-direction at a constant rate. The resulting path is called a helix. (What if b < 0
or b = 0?)

[Figure: a helix winding upward around the z-axis from O, with P a point on it.]

The vector equation for the helix would be r = (R cos ωt)i + (R sin ωt)j + bt k or, in
component notation, r = ⟨R cos ωt, R sin ωt, bt⟩.

Velocity and Acceleration Suppose the path of a particle is described by a vector


function r = r(t). The derivative of such a function may be defined exactly as in
the case of a scalar function of a real variable. Let t change by a small amount ∆t,
and put ∆r = r(t + ∆t) − r(t). Then define

r′(t) = dr/dt = lim_{∆t→0} ∆r/∆t.

Although this looks formally exactly like the scalar case, the geometry is quite
different. Study the accompanying diagram.

[Figure: position vectors r(t) and r(t + ∆t) from O, with the chord ∆r joining their endpoints.]

∆r is the displacement vector from the position of the particle at time t to its
position at time t + ∆t. It is represented by the directed chord in the diagram. As
∆t → 0, the direction of the chord approaches a limiting direction, which we call the
tangent direction to the curve. The derivative r′(t) is also called the instantaneous
velocity, and it is usually denoted v (or v(t) if we want to emphasize its functional
dependence on t.) It is, of course, a vector, and we usually picture it with its tail at
the point with position vector r(t) and pointing (appropriately) along the tangent
line to the curve.

Calculating the derivative (or velocity) is much easier if we use components. If


r = xi + yj + zk, then we may write ∆r = ∆xi + ∆yj + ∆zk, and

r′(t) = lim_{∆t→0} ∆r/∆t
      = (lim_{∆t→0} ∆x/∆t) i + (lim_{∆t→0} ∆y/∆t) j + (lim_{∆t→0} ∆z/∆t) k
      = (dx/dt) i + (dy/dt) j + (dz/dt) k.

In other words, the components of the derivative of a vector function are just the
derivatives of the component functions.

Of course, in calculus, one need not stop with the first derivative. The second
derivative
d²r/dt² = r″(t) = (d²x/dt²) i + (d²y/dt²) j + (d²z/dt²) k

is called the acceleration, and it is often denoted a. It may also be described as


a = dv/dt.
Example 1 Let x(t) = vx0 t, y(t) = vy0 t − (1/2)gt² as in the previous section. Then

v = r′(t) = vx0 i + (vy0 − gt)j

and, in particular, at t = 0, we have r′(0) = vx0 i + vy0 j. In other words, the velocity
vector at t = 0 is the vector v0 with components ⟨vx0, vy0⟩, as expected. (Where on
the path does the velocity vector point horizontally?) Similarly, the acceleration

a = dv/dt = 0i + (0 − g)j = −gj.

is directed vertically downward and has magnitude g. You are probably familiar
with this from your previous work in physics.

[Figure: the projectile path with initial velocity v0 at O, position vector r, velocity v, and acceleration a.]
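Since the components of the derivative are just the derivatives of the component functions, a symbolic system reproduces the computation in Example 1 directly. A sketch with SymPy (our choice of tool, not the text's):

    # Differentiating the projectile path of Example 1 componentwise.
    import sympy as sp

    t, vx0, vy0, g = sp.symbols('t v_x0 v_y0 g')

    x = vx0 * t                                   # horizontal component
    y = vy0 * t - sp.Rational(1, 2) * g * t**2    # vertical component

    v = (sp.diff(x, t), sp.diff(y, t))            # velocity components
    a = (sp.diff(v[0], t), sp.diff(v[1], t))      # acceleration components

    print(v)   # (v_x0, v_y0 - g*t)
    print(a)   # (0, -g), i.e., a = -g j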

Example 2 Let r = R cos ωti + R sin ωtj, (i.e., x = R cos ωt, y = R sin ωt.) Then

v = dr/dt = (−Rω sin ωt)i + (Rω cos ωt)j

and a little trigonometry will convince you that v is perpendicular to r. For, r
makes angle θ = ωt with the positive x-axis while v makes angle θ + π/2. Hence, v
is tangent to the circle as expected. Also,

|v| = √(R²ω² sin² ωt + R²ω² cos² ωt) = Rω. (2)

[Figure: uniform circular motion: v tangent to the circle at P, a pointing from P toward O.]

Hence, the speed |v| is constant.

The acceleration is given by

a = dv/dt = (−Rω² cos ωt)i + (−Rω² sin ωt)j = −ω²r.
Hence, the acceleration is directed opposite to the position vector and points from
the position of the particle toward the origin. This is usually called centripetal
acceleration. Also, |a| = ω²|r| = ω²R. By equation (2), ω = |v|/R, so

|a| = (|v|²/R²)R = |v|²/R.
Note that in this example, acceleration results entirely from changes in the direction
of the velocity vector since its magnitude is constant. In general, both the direction
and magnitude of the velocity vector will be changing.

Example 3 Let r = ⟨R cos ωt, R sin ωt, bt⟩ represent a helix as above.

The velocity and acceleration are given by

v = ⟨−Rω sin ωt, Rω cos ωt, b⟩

a = ⟨−Rω² cos ωt, −Rω² sin ωt, 0⟩.

Since its third component is zero, a points parallel to the x, y-plane. Also, if you
compare the first two components with what we obtained in the example of uniform
circular motion, you will see that a points from the position of the particle directly
at the z-axis.

Example 4. Uniformly Accelerated Motion Suppose the acceleration vector


a is constant. Then, just as would be the case for scalar functions, we can integrate
the equation
dv/dt = a
to obtain
v = at + c
where c is a vector constant. In fact, putting t = 0 shows that c = v(0), the value
of v(t) at 0, and this is usually denoted v0. Thus, we may write dr/dt = v = ta + v0.
(It is customary to put the scalar t in front of the vector, but it is not absolutely
necessary.) If we integrate this, we obtain
r = (1/2)t²a + tv0 + C,
2
but putting t = 0, as above, yields C = r(0) = r0 , the value of the position vector
at t = 0. Thus, we obtain
r = (1/2)t²a + tv0 + r0.
2
The path of such a particle is a parabola. (Can you see why?)

The special case a = 0 is of interest. This is called uniform linear motion and is
described by
r = tv0 + r0 .
The path is a straight line. (See the diagram.) Moreover, v = dr/dt = v0 , so
the velocity vector is constant. Of course, it points along the line of motion. The
particle moves along this line at constant speed.

[Figure: uniform linear motion along the line r = tv0 + r0, with r0 drawn from O and displacement tv0 along the line.]
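The two integrations in Example 4 can likewise be carried out one component at a time. A SymPy sketch, with the constant vectors chosen by us purely for illustration:

    # Recovering r(t) = (1/2)t^2 a + t v0 + r0 for constant acceleration.
    import sympy as sp

    t = sp.symbols('t')
    a  = (0, -10)   # constant acceleration (illustrative values)
    v0 = (3, 4)     # initial velocity
    r0 = (0, 1)     # initial position

    # Integrate dv/dt = a, then dr/dt = v, fixing constants at t = 0.
    v = tuple(sp.integrate(ai, t) + v0i for ai, v0i in zip(a, v0))
    r = tuple(sp.integrate(vi, t) + r0i for vi, r0i in zip(v, r0))

    print(v)   # (3, 4 - 10*t)
    print(r)   # (3*t, -5*t**2 + 4*t + 1), a parabola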

All our discussions have assumed implicitly that v ≠ 0. If v vanishes for one partic-
ular value of t, there may not be a well defined tangent vector at the corresponding

point on the path. A simple example of this would be a particle which rises verti-
cally in a straight line with decreasing velocity until it reaches its maximum height
where it reverses direction and falls back along the same line. At the point of max-
imum height, the velocity would be zero, and it does not really make sense to talk
about a tangent vector there since we can’t assign it a well defined direction. If
v(t) vanishes for all t in an interval, then the particle stays at one point without
moving. It would make even less sense to talk about a tangent vector in that case.

Remark. In our discussion of derivatives (and integrals) of vector functions, we shall


ignore for the present the issue of how limits are handled for such functions. This
would be a matter of some importance for a completely rigorous treatment because
those concepts are defined by limits. These issues are best postponed to a course
in real analysis where there is time for such matters. It suffices for now to say that
such limits behave exactly as you would expect. Also, one way to avoid worrying
about the matter is to reduce all limits for vector functions of a single variable t to
statements about limits for the scalar component functions.

Polar Coordinates For motion in the plane where there is some sort of circular
symmetry, it is often more convenient to use polar coordinates. This would cer-
tainly be the case for circular motion, but it is also useful, for example, in celestial
mechanics where we think of a planet as a particle moving in the gravitational field
of a star. In that case, the gravitational force may be assumed to point directly
toward the origin, and Newton, confirming what Kepler had demonstrated, showed
that the motion would lie in a plane and follow a conic section.

Recall how polar coordinates are defined in the plane. We choose an origin O, and a
(right handed) cartesian coordinate system with that origin. The polar coordinates
(r, θ) of a point P are the distance r = |r| = |→OP| to the origin and the angle θ
which the position vector r makes with the positive x-axis.

[Figure: polar coordinates: P at distance r from O, with the position vector making angle θ with the positive x-axis.]

By definition r ≥ 0 since it is a distance. Generally, θ is allowed to have any value,


but so that the point P will uniquely determine its polar angle θ, it is common

to restrict θ to some interval of length 2π. Common choices are 0 ≤ θ < 2π or


−π ≤ θ < π, but many other choices are possible. Note that in any case, there is
no way to define θ unambiguously at the origin where r = 0.

The set of points with a fixed value r = a > 0 constitutes a circle with radius a. The
set of points with a fixed value θ = α constitutes a ray, i.e., a half line, emanating
from the origin, making angle α with the positive x-axis.

[Figure: the circle r = a and the ray θ = α.]

It is common in elementary mathematics books to interpret a point with polar


coordinates (r, θ) where r < 0 as lying on the ray opposite to the ray for the given
value of θ. This is convenient in some formulas, but it adds another degree of
ambiguity since we can get to the opposite ray just as well by adding π to θ. Such
practices are best avoided. In this course, you may generally assume r ≥ 0, but
each time you encounter polar coordinates in other contexts, you will have to check
what the author intends.

r and θ are the most commonly used symbols for polar coordinates in mathematics
and physics books, but there are others you may encounter. For example, you
may sometimes see ρ in place of r or φ in place of θ. Later in this course, we
shall introduce coordinates in space which are analogous to polar coordinates in
the plane. For these, there is even more variation of symbols in common use. It
is especially important when you see any of these coordinate systems used that you
concentrate on the geometric and physical meaning of the quantities involved rather
than on the particular letters used to represent them.

You may remember the following formulas relating rectangular and polar coordi-
nates. In any case, they are clear by elementary trigonometry from the diagram.

x = r cos θ
y = r sin θ

and

r = √(x² + y²)
tan θ = y/x, if x ≠ 0.

[Figure: right triangle relating the rectangular coordinates (x, y) to the polar coordinates (r, θ).]

The description of motion in polar coordinates is both simpler and more complicated
than in rectangular coordinates. Instead of expressing vectors in terms of i and j,
it is useful instead to use unit vectors associated with the polar directions. These
depend on the value of θ at the point P under consideration and should be viewed
as placed with their tails at that point. ur is chosen to point directly away from the
origin, so it is parallel to the position vector r = →OP. uθ is chosen perpendicular
to ur and pointing in the counter-clockwise direction (positive θ), so it is tangent
to a circle passing through P and centered at O.

[Figure: the unit vectors ur and uθ at the point P, with r = r ur.]

(These are also commonly denoted r̂ and θ̂). Because of the definition of ur , we
have
r = r ur
(This equation is a little confusing since the position vector r is commonly thought of
with its tail at O while ur was supposed to be thought of with its tail at P . However,
if you remember that the tail of a vector can be placed wherever convenient without
changing the vector, you won’t have a problem.)

It follows that the velocity vector is given by

v = dr/dt = d(r ur)/dt = (dr/dt) ur + r (dur/dt),
where we have differentiated the product using the product rule for derivatives. Since
ur changes its direction as we move along the path, it is necessary to keep the
second term. (How does the case of rectangular coordinates differ?) To calculate
further, we first express ur and uθ in rectangular coordinates. (See the diagram.)

ur = cos θ i + sin θ j,
uθ = − sin θ i + cos θ j.

[Figure: ur and uθ resolved along i and j at angle θ.]

Hence,

dur/dt = − sin θ (dθ/dt) i + cos θ (dθ/dt) j = (dθ/dt) uθ. (3)
A similar calculation shows that
duθ/dt = −(dθ/dt) ur. (4)
Putting (3) in the expression for v yields
v = (dr/dt) ur + r (dθ/dt) uθ.
This same process can be repeated to obtain the acceleration a = dv/dt. Using both
(3) and (4) yields, after considerable calculation,

a = (d²r/dt² − r (dθ/dt)²) ur + (r d²θ/dt² + 2 (dr/dt)(dθ/dt)) uθ. (5)

I leave it as a challenge for your algebraic prowess to try to verify this formula.

Example. Uniform Circular Motion Suppose the particle moves in a circle of


radius R centered at the origin, so that r = R, and dr/dt = 0. Suppose in addition
that dθ/dt = ω is constant. Then d²r/dt² = d²θ/dt² = 0, and putting all this in
the above expressions yields

v = Rω uθ
a = −Rω² ur = −(|v|²/R) ur
as we discovered previously.
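These polar formulas can be checked symbolically by differentiating x = R cos ωt, y = R sin ωt twice. A SymPy sketch of the check for the uniform circular case (verifying formula (5) in general works the same way):

    # Checking v = R*omega*u_theta and a = -R*omega^2*u_r symbolically.
    import sympy as sp

    t, R, w = sp.symbols('t R omega', positive=True)
    theta = w * t

    u_r     = sp.Matrix([sp.cos(theta), sp.sin(theta)])
    u_theta = sp.Matrix([-sp.sin(theta), sp.cos(theta)])

    r = R * u_r            # position vector for circular motion
    v = sp.diff(r, t)      # velocity
    a = sp.diff(r, t, 2)   # acceleration

    assert sp.simplify(v - R * w * u_theta) == sp.zeros(2, 1)
    assert sp.simplify(a + R * w**2 * u_r) == sp.zeros(2, 1)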

Exercises for 1.2.

1. Suppose a projectile follows the path described in Example 1 of this section.
Show that the range (i.e., the x-coordinate of the point of impact) is given by
2vx0 vy0 /g. Hint: Find the time at which impact occurs.
2. Find the velocity and acceleration vectors at the indicated times if the position
vector is given by:
(a) r(t) = 4i + 5j − 3k, t = 2;
(b) r(t) = 3i cos t − 4j sin t, t = 0;
(c) r(t) = −2ie3t + jt, t = 1;
(d) r(t) = (2t − 5)i + (3t + 1)j + 2k, t = −2;
(e) r(t) = ⟨cos t, 0⟩, t arbitrary.

3. Calculate the following. (Integrate each component separately.)


(a) ∫₀^π ((1 + cos t)i − 2j sin t) dt
(b) ∫₀^1 ((3 + 2x)i + x²j − 5k) dx
(c) ∫₀^(2π) (3i sin θ − 4j cos θ) dθ.

4. Given the parametric equations x = −2 sin t, y = 7, and z = 4 cos t, find the


particle’s position, velocity, speed, and acceleration at t = 2π.

5. Using the method in Example 4 of this section, find the general velocity and
position vectors if
(a) a = 2i, r0 = i + 2j, v0 = 0.
(b) a = 0, r0 = 3i + 4j + 5k, v0 = 5i.
(c) a = i sin t − j cos t, r0 = i, v0 = j
6. Show that the path described by the equation r = (1/2)t²a + tv0 + r0 is a parabola.
Hint: This is not too hard to see if you choose your coordinate system properly.
Suppose first that the origin is the position of the particle at t = 0. Then
r0 = 0. Suppose, moreover that the y-axis is chosen in the direction of a.
Then, a = aj. Finally, choose the z-axis so that it is perpendicular both to a
and to the initial velocity v0 . Then v0 = vx0 i + vy0 j. Write out the equations
for x and y with these assumptions.

7. Consider the path in the plane described by the equation r = ⟨a cos ωt, b sin ωt⟩.
(a) Show that the particle traces out the ellipse x²/a² + y²/b² = 1. How much time
must elapse for the particle to trace the ellipse exactly once?
(b) Show that the acceleration is directed towards the origin.

8. Find v(t) and r(t) in terms of ur and uθ if


(a) r = 2(sin t) and θ = 2t
(b) r = a and θ = 2t
(c) r = t and θ = t
9. Show that duθ/dt = −(dθ/dt) ur. (This is equation (4) in this section.)
10. (Optional) Verify formula (5) in this section.

11. A bead is on a spoke of a wheel 20 inches in diameter. The wheel rotates at a


rate of 2 revolutions per second. At the same time, the bead moves uniformly
out from the center at 2 inches per second. Assume the bead starts at the
center at t = 0. (a) Find expressions for r and θ as functions of t. (b) Find
the velocity and acceleration vectors at each point between the center and the
rim.

12. A billiard ball bounces off the side of a billiard table at an angle. What
can you say about the velocity vector at the point of impact? Is there a
well defined tangent direction? Note that since this question involves making
some assumptions about the physics of the collision, it does not have a single
mathematical answer.

1.3 The Dot Product

Let a and b be vectors. We assume they are placed so their tails coincide. Let θ
denote the smaller of the two angles between them, so 0 ≤ θ ≤ π. Their dot product
is defined to be
a · b = |a| |b| cos θ.

[Figure: vectors a and b with tails together and angle θ between them.]

This is also sometimes called the scalar product because the result is a scalar. Note
that a · b = 0 when either a or b is zero or, more interestingly, if their directions
are perpendicular. If the two vectors have parallel directions, a · b is the product of
their magnitudes if they point the same way or the negative of that product if they
point in opposite directions. (Make sure you understand why.)

The dot product is a useful concept when one needs to find the component of one
vector in the direction of another. For example, in a typical inclined plane problem
in elementary mechanics, one needs to resolve the vertical gravitational force into
components, one parallel to the inclined plane, and one perpendicular to it. To see
how the dot product enters into that, note that

the quantity |b| cos θ is just the perpendicular projection of b onto any line parallel
to a. (See the diagram.) Hence, we have

a · b = |a| (projection of b on a). (6)

(Note that this description is symmetric; either vector could be projected onto a
line parallel to the other.) In particular, if a is a unit vector (|a| = 1), a · b is just
the value of the projection. (What does it mean if a · b < 0?)

[Figure: the scalar projection |b| cos θ and the vector projection (b · u)u of b on a.]

The above projection is a scalar quantity, but one is often interested in the vector
obtained by multiplying the scalar projection of b on a with a unit vector in the
direction of a. (See the accompanying diagram.) u = (1/|a|) a is such a unit vector, so
this vector projection of b on a is given by

(u · b)u = ((1/|a|) a · b)((1/|a|) a) = (a · b/|a|²) a. (7)
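A direct transcription of (6) and (7) into Python (the helper names are ours); the sample vectors reproduce the worked example that appears below:

    # Scalar and vector projection of b on a, transcribing (6) and (7).
    from math import sqrt

    def dot(a, b):
        return sum(ai * bi for ai, bi in zip(a, b))

    def scalar_projection(b, a):
        # |b| cos(theta) = (a . b)/|a|
        return dot(a, b) / sqrt(dot(a, a))

    def vector_projection(b, a):
        # ((a . b)/|a|^2) a
        c = dot(a, b) / dot(a, a)
        return tuple(c * ai for ai in a)

    a = (1, 2, -1)   # i + 2j - k
    b = (2, -1, 3)   # 2i - j + 3k
    print(scalar_projection(b, a))   # -3/sqrt(6), about -1.2247
    print(vector_projection(b, a))   # (-0.5, -1.0, 0.5)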

The dot product satisfies certain simple algebraic rules.

a·b=b·a commutative law,


a · (b + c) = a · b + a · c distributive law,
(sa) · b = a · (sb) = s(a · b),

and

a · a = |a|2 .

These can be proved without too much difficulty from the geometric definition.
See, for example, the accompanying diagram which illustrates the proof of the
distributive law.

[Figure: the projections of b, c, and b + c onto the line of a; the projections add.]

Theorem 1.1 Let the components of a be ⟨a1, a2, a3⟩ and let those of b be
⟨b1, b2, b3⟩. Then
a · b = a1 b1 + a2 b2 + a3 b3 .

For plane vectors, the same principle applies without the third components.

Examples

For ⟨1, 2, −1⟩ and ⟨2, −1, 3⟩, the dot product is 2 − 2 − 3 = −3. In particular, that
means the angle between the two vectors is obtuse.

For ⟨1, 1, 2⟩ and ⟨−1, −1, 1⟩, the dot product is −1 − 1 + 2 = 0. That means the
two vectors are perpendicular.

Proof. We have

a = a1 i + a2 j + a3 k
b = b1 i + b2 j + b3 k,

so the aforementioned rules of algebra yield

a · b = a1 b1 i · i + a1 b2 i · j + a1 b3 i · k
+ a2 b1 j · i + a2 b2 j · j + a2 b3 j · k
+ a3 b1 k · i + a3 b2 k · j + a3 b3 k · k.

However, i, j, and k are mutually perpendicular unit vectors, so the off-diagonal


dot products (e.g., i · j, i · k, etc.) are all zero while the diagonal dot products
(e.g., i · i) are all one. Hence, only the three diagonal terms survive and we obtain
a1 b1 + a2 b2 + a3 b3 as claimed.

The theorem gives us a way to calculate the angle between two vectors in case we
know their components. Indeed, the formula may be rewritten
cos θ = (a · b)/(|a||b|).

The right hand side may be computed using a · b = a1 b1 + a2 b2 + a3 b3, |a| =
√(a1² + a2² + a3²), and similarly for |b|, and from this we can determine θ.

Example Suppose a = i + 2j − k, b = 2i − j + 3k. Then

a · b = (1)(2) + (2)(−1) + (−1)(3) = −3


|a| = √(1 + 4 + 1) = √6
|b| = √(4 + 1 + 9) = √14

so

cos θ = −3/√84
θ = 1.90427 radians
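The same computation can be done by machine, with math.acos returning the angle in radians; a quick sketch:

    # Angle between a = i + 2j - k and b = 2i - j + 3k.
    from math import sqrt, acos

    a = (1, 2, -1)
    b = (2, -1, 3)

    dot = sum(ai * bi for ai, bi in zip(a, b))   # -3
    norm = lambda v: sqrt(sum(x * x for x in v))
    theta = acos(dot / (norm(a) * norm(b)))
    print(theta)   # 1.90427... radians, matching the value above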

The use of components simplifies other calculations also. For example, suppose we
want to know the projection of the vector b on a line parallel to a. We know from
(6) that this is

|b| cos θ = (a · b)/|a| = −3/√6.

Similarly, from (7), we see that the vector projection of b on a is given by

(a · b/|a|²) a = (−3/6)(i + 2j − k) = −(1/2)i − j + (1/2)k.

Note that in all these calculations we must rely on the use of components to make
the formulas useful.
The Law of Cosines Let A, B, and C be the vertices of a triangle, and define
vectors

a = →CB
b = →CA
c = →BA = b − a.

[Figure: triangle with vertices A, B, C, sides represented by a, b, c, and angle θ at C.]
Then we have

|c|² = c · c = (b − a) · (b − a)
     = b · b − b · a − a · b + a · a
     = |b|² − 2a · b + |a|²

which may be rewritten

|c|² = |a|² + |b|² − 2|a||b| cos θ.

You should recognize that as the Law of Cosines from Trigonometry.

Generalizations We have already mentioned higher dimensional vectors being


characterized as n-tuples for values of n ≥ 4. The dot product of n-tuples can
be defined by analogy with the formula derived above. Thus for 4-vectors, a =
⟨a1, a2, a3, a4⟩, b = ⟨b1, b2, b3, b4⟩, we would define

a · b = a1 b1 + a2 b2 + a3 b3 + a4 b4 .

In the theory of special relativity, it turns out to be useful to consider instead a


“dot product” of 4-vectors of the form

a1 b1 + a2 b2 + a3 b3 − c²a4 b4

where c is the speed of light. (Some authors prefer the negative of this quantity.)
This is a bit bizarre, because the “dot product” of a vector with itself can be zero
without the vector being zero.

We shall consider some of these strange and wonderful ideas later in this course.

Differentiation of Dot Products The usual differentiation rules, e.g., the prod-
uct rule, the chain rule, etc., apply to vector functions as well as to scalar functions.
Indeed, we have already used some of these rules without making a point of it. The
proofs are virtually the same as those in the scalar case, so we need not go through
them again in this course. However, since there are so many different vector op-
erations, it is worth exploring the consequences of these rules in interesting cases.

They are sometimes not what you would expect. For example, the product rule for
the dot product of two vector functions f (t) and g(t) takes the form

(d/dt)(f(t) · g(t)) = (df(t)/dt) · g(t) + f(t) · (dg(t)/dt).
Let us apply this to the case f (t) = g(t) = r(t), the position vector of a particle
moving in the plane or in space. Assume the particle moves so that its distance to
the origin is a constant R. Symbolically, this can be written r · r = |r|² = R². Then
the product rule gives

0 = d(r · r)/dt = (dr/dt) · r + r · (dr/dt) = 2r · v.
It follows from this that either v = 0 or v ⊥ r. In the plane case, this is yet another
confirmation that for motion in a circle, the velocity vector is perpendicular to the
radius vector.

Similar reasoning shows that if |v| is constant, then the acceleration vector a is
perpendicular to v. (You should write it out to convince yourself you understand
the argument.)

Exercises for 1.3.

1. In each case determine if the given pair of vectors is perpendicular.


(a) a = 3i + 4j, b = −4i + 3j
(b) a = ⟨4, −1, 2⟩, b = ⟨3, 0, −6⟩
(c) a = 3i − 2j, b = −2i − 4k
2. Assume u = ⟨a, b⟩ is a non-zero plane vector. Show that v = ⟨−b, a⟩ is
perpendicular to u. By examining all possible signs for a and b, convince
yourself that the 90 degree angle between u and v is in the counter-clockwise
direction.
3. The methane molecule, CH4 , has four Hydrogen atoms at the vertices of a
regular tetrahedron and a Carbon atom at its center. Choose as the vertices
of this tetrahedron the points (0,0,0), (1,1,0), (1,0,1), and (0,1,1). (a) Find
the angle between two edges of the tetrahedron. (b) Find the bond angle
between two Carbon–Hydrogen bonds.
4. An inclined plane makes an angle of 30 degrees with the horizontal. Use
vectors and the dot product to find the scalar and vector projections of the
gravitational acceleration vector −gj along a unit vector pointing down the
inclined plane.
5. Show that if the velocity vector v is perpendicular to the acceleration vector
a at every point of a path, then the speed |v| is constant.

6. Derive the formula

|a + b|² = |a|² + |b|² + 2|a| |b| cos θ.

How is it related to the Law of Cosines? A picture might help.


7. Use the dot product to determine if the points P(3, 1, 2), Q(−1, 0, 2), and
R(11, 3, 2) are collinear.
8. If a relativity physicist claimed that the universe is orderly since the physical
vectors ⟨2.54, 9.8, 6.626 × 10⁻³⁴, −c²⟩ and ⟨−9.8, −2.54, 1.509 × 10³³, 0⟩ are
perpendicular in a four-space, would his mathematics be correct? Would it
make any difference which ‘dot product’ he was using?

1.4 The Vector Product

Let a and b be two vectors in space placed so their tails coincide, and let θ be the
smaller of the two angles between them (i.e., 0 ≤ θ ≤ π). Previously, we defined
the dot or scalar product a · b. There is a second product called the vector product
or cross product. It is denoted a × b, and as its name suggests, it is a vector. It is
used extensively in mechanics for such notions as torque and angular momentum,
and we shall use it shortly in studying solid analytic geometry.
a × b is defined as follows. Its magnitude is

|a × b| = |a||b| sin θ. (9)

The quantity on the right has a very simple interpretation. |b| sin θ is the height of
the parallelogram spanned by a and b, so |a||b| sin θ is the area of the parallelogram.
Note it is zero if the vectors point in the same (θ = 0) or opposite (θ = π) directions,
but otherwise it is non-zero.

[Figure: a × b perpendicular to the parallelogram spanned by a and b, with angle θ between them.]

The direction of a × b (when it isn’t zero) is a bit harder to describe. First, a × b is


perpendicular to both a and b, and given that we know its magnitude that leaves
precisely two possibilities. We specify that it has the right hand orientation. That
is, if the fingers of your right hand point from a to b through the angle θ, a × b
should point in the direction of your thumb.

The vector product has some surprising algebraic properties. First, as already
noted,
a×b=0
when a and b point in the same or opposite directions. In particular, a × a = 0 for
any vector a. Secondly, the commutative law fails, and we have instead

a × b = −b × a.

(Point from b to a, and your thumb reverses direction.) The vector product does
satisfy other rules of algebra such as

a × (b + c) = a × b + a × c
(sa) × b = a × (sb) = s(a × b),

but they are a bit tricky to verify from the geometric definition. (See below for
another approach.)

The vector products of the basis vectors are easy to calculate from the definition.
We have

i × j = −j × i = k,
j × k = −k × j = i,
k × i = −i × k = j.

[Figure: the right handed basis vectors i, j, k.]

To calculate vector products in general, we expand in terms of components.

a × b = (a1 i + a2 j + a3 k) × (b1 i + b2 j + b3 k)
= a1 b1 i × i + a1 b2 i × j + a1 b3 i × k
+ a2 b1 j × i + a2 b2 j × j + a2 b3 j × k
+ a3 b1 k × i + a3 b2 k × j + a3 b3 k × k
= 0 + a1 b2 k − a1 b3 j
− a2 b1 k + 0 + a2 b3 i
+ a3 b1 j − a3 b2 i + 0

so

a × b = (a2 b3 − a3 b2 )i − (a1 b3 − a3 b1 )j + (a1 b2 − a2 b1 )k.

(Note that this makes extensive use of the rules of algebra for vector products, so
one should really prove those rules first.)

There is a simple way to remember the formula. Recall that a 2 × 2 determinant is


defined by

det [ a  b ]
    [ c  d ]  =  ad − bc.

Now consider the matrix or array

[ a1  a2  a3 ]
[ b1  b2  b3 ]

formed from the components of a and b, and calculate the 2 × 2 determinants


obtained by omitting successively each of the columns. For the first and third, use
a (+) sign and for the second use a (−) sign.

Example 5 Let a = ⟨1, 2, −1⟩ and b = ⟨1, −2, 3⟩. Then

a × b = ⟨6 − 2, −(3 + 1), −2 − 2⟩ = ⟨4, −4, −4⟩.

We can check that this vector is indeed perpendicular to both a and b by calculating
dot products.

a · (a × b) = 4 − 8 + 4 = 0
b · (a × b) = 4 + 8 − 12 = 0.
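The component formula is mechanical enough to transcribe directly. A Python sketch that reproduces Example 5 (NumPy's cross function would give the same result):

    # Cross product from the component formula, checked on Example 5.
    def cross(a, b):
        return (a[1] * b[2] - a[2] * b[1],
                -(a[0] * b[2] - a[2] * b[0]),
                a[0] * b[1] - a[1] * b[0])

    def dot(a, b):
        return sum(ai * bi for ai, bi in zip(a, b))

    a = (1, 2, -1)
    b = (1, -2, 3)
    c = cross(a, b)
    print(c)                      # (4, -4, -4)
    print(dot(a, c), dot(b, c))   # 0 0: perpendicular to both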

(Many texts suggest defining a × b by a 3 × 3 determinant

a × b = det [ i   j   k  ]
            [ a1  a2  a3 ]   (10)
            [ b1  b2  b3 ].

If you are not familiar with 3 × 3 determinants, see the Exercises.)

The triple product Let a, b, and c be three vectors in space and suppose they
are placed so their tails coincide. Then the product

(a × b) · c

has a simple geometric interpretation. The vectors are three of the twelve edges of
a parallelepiped. |a × b| is the area of the base of that parallelepiped. Moreover the
projection of c on a × b is either the altitude of the parallelepiped or the negative
of that altitude depending on the relative orientation of the vectors. Hence, the dot
product is, except for sign, the volume of the parallelepiped. It is positive if c is
on the same side of the plane determined by a and b as a × b, and it is negative if
they are on opposite sides.

[Figure: two parallelepipeds spanned by a, b, and c, illustrating a positive and a negative triple product]



Example 6 Let a = ⟨0, 1, 1⟩, b = ⟨1, 1, 0⟩, and c = ⟨1, 0, 1⟩. Then a × b =
⟨−1, 1, −1⟩, so (a × b) · c = −1 + 0 + (−1) = −2. Hence, the volume is 2. You
should study the diagram to make sure you understand why the answer is negative.

An immediate consequence of the geometric interpretation is the formula


(a × b) · c = a · (b × c). (11)
For, except for sign, both sides may be interpreted as the volume of the same
parallelepiped, and careful inspection of all possible relative orientations shows that
the signs will be the same.

Using 3 × 3 determinants—see the Exercises—you can check the formula

    a · (b × c) = det [ a1  a2  a3 ]
                      [ b1  b2  b3 ]          (12)
                      [ c1  c2  c3 ]
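If you want to experiment, formula (12) is easy to check numerically. The following
Python sketch (det3 is our own helper, implementing the 3 × 3 determinant formula
of Exercise 5 below) recomputes the triple product of Example 6:

    def det3(m):
        """3 x 3 determinant, expanded as in Exercise 5 below."""
        (a1, a2, a3), (b1, b2, b3), (c1, c2, c3) = m
        return (a1*b2*c3 + a2*b3*c1 + a3*b1*c2
                - a3*b2*c1 - a2*b1*c3 - a1*b3*c2)

    # (a x b) . c = a . (b x c) = det with rows a, b, c, by (11) and (12):
    a, b, c = (0, 1, 1), (1, 1, 0), (1, 0, 1)
    print(det3((a, b, c)))   # -2, so the volume is 2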

It is worth noting that we can use (11) to verify the formula


a × (b + c) = a × b + a × c.
For, it would suffice to show that both sides have the same components. Consider
the x-component. In general, the x-component of a vector v is its projection on the
x-axis, that is, it is i · v. However,
i · (a × (b + c)) = (i × a) · (b + c)
= (i × a) · b + (i × a) · c
= i · (a × b) + i · (a × c)
= i · (a × b + a × c).
(Here we used the fact that the distributive law does hold for the dot product.) It
follows that the two sides have the same x-component. Similar arguments work for
the y and z-components, so the two sides are the same.

Product Rule Like the dot product, the cross product also satisfies the product
rule. (Also, the proof is virtually identical to the proof for scalar functions.) Thus,
if f(t) and g(t) are vector valued functions, then

    d/dt (f × g) = (df/dt) × g + f × (dg/dt).              (13)
This formula has many useful consequences. We shall illustrate one such by showing
that if a particle moves in space so that the acceleration a is parallel to the position
vector r, then the motion must be constrained to a plane. To see this, consider the
quantity L = r × v. We have

    dL/dt = (dr/dt) × v + r × (dv/dt)
          = v × v + r × a = r × a.

However, since r and a are parallel, r × a = 0, and it follows that dL/dt = 0, i.e.,
L is constant. In particular, the direction of L = r × v does not change, so since
r ⊥ L, it follows that the position vector is constrained to the plane perpendicular
to L and containing the origin.

The quantity r × v seems to have been pulled from a hat, but you will learn in
physics how the related quantity r × (mv), which is called angular momentum, is
used to make sense of certain aspects of motion.

Exercises for 1.4.

1. Find a × b for the following vector pairs:


(a) a = ⟨4, −2, 0⟩, b = ⟨2, 1, −1⟩
(b) a = ⟨3, 3, 3⟩, b = ⟨4, −3, 2⟩
(c) a = 2i + 3j + 4k, b = i − 3j + 4k

2. Use the vector product to find the areas of the following figures.
(a) The parallelogram with vertices (0, 0, 0), (1, 1, 0), (1, 2, 1) and (0, 1, 1). (Per-
haps you should first check that this is a parallelogram.)
(b) The triangle with vertices (1, 0, 0), (0, 1, 0), and (0, 0, 1).

3. Prove that the vector product is not associative by calculating a × (b × c) and


(a × b) × c for a = i, b = c = j.

4. Show that if a, b, and c are mutually perpendicular, then a × (b × c) = 0.


What is the analogous geometric situation?

5. 3 × 3 determinants are defined by the formula

    det [ a1  a2  a3 ]
        [ b1  b2  b3 ]  =  a1 b2 c3 + a2 b3 c1 + a3 b1 c2 − a3 b2 c1 − a2 b1 c3 − a1 b3 c2.
        [ c1  c2  c3 ]

(Can you see the pattern?) Using this rule, verify formulas (10) and (12) in
this section.

6. Find the volume of the parallelepiped spanned by the vectors ⟨1, 1, 0⟩, ⟨0, 2, −2⟩,
and ⟨1, 0, 3⟩.

7. A triangular kite is situated with its vertices at the points (0, 0, 10), (2, 1, 10),
and (0, 3, 12). A wind with velocity vector 20i + 6j + 4k displaces the kite by
blowing for 1/2 second. Find the volume of the solid bounded by the initial
and final positions of the kite. (Assume all the distance units are in feet.)

8. Prove the formula

(a × b) × c = −(b · c)a + (a · c)b

as follows. Since the quantities on both sides of the equation are defined
geometrically, you can use whatever coordinate system you find convenient.
Choose the coordinate system so that a points along the x-axis and b lies
in the x, y-plane. Under these assumptions, a has components ⟨a1, 0, 0⟩, b
has components ⟨b1, b2, 0⟩, and c has components ⟨c1, c2, c3⟩. Show that both
sides of the equation have components

    ⟨−a1 b2 c2, a1 b2 c1, 0⟩.

9. Verify the formula

    |a × b|² = |a|²|b|² − (a · b)².
Hint: Use the definitions in terms of the sine and cosine of the included angle
θ.
10. Suppose a particle moves in such a way that r × v is a constant vector. Show
that at each point of the path either r or a is zero or they are parallel.
11. One may try to generalize the vector product to higher dimensions. One plau-
sible approach would be to consider all 2 × 2 determinants (with appropriate
signs) which one can extract from the array

    [ a1  a2  . . .  an ]
    [ b1  b2  . . .  bn ]

You learned in high school that there are n(n − 1)/2 ways to choose 2 things
from a set of n things, so that is the number of components. Show that
n(n − 1)/2 = n only in the case n = 3. (The moral is that we may be able to
define a ‘cross product’ for dimensions other than 3, but only in that case will
we get something of the same dimension.)

1.5 Geometry of Lines and Planes

We depart for the moment from the study of concepts of immediate use in dynamics
to discuss some analytic geometry in space. The notions we shall introduce have a
variety of applications, but, more important, they will help you develop your spatial
intuition and teach you something about expressing such intuition analytically.

Lines We saw before that the motion of a particle moving with constant speed

on a line is described by a vector equation of the form

r = tv + r0

where v is the constant velocity vector pointing along the line in the direction of
motion, and r0 is its position at t = 0. (Previously v was denoted v0 , but since the
velocity is constant, we can use either, and we drop the subscript to save writing.)
It is often more convenient to use a slightly different form of this equation

    r = (t − t0)v + r0.

Here, ‘t’ is replaced by ‘t − t0’ which denotes the time ‘elapsed’ since some initial
time t0, and r0 denotes the position at t = t0. If you multiply this out, you will see
it is really the same as the previous equation with r0 being replaced by the constant
vector −t0 v + r0. (Why is ‘elapsed’ in quotes? Think about the case t < t0.)

There are a couple of points that should be stressed at this point. First, the above
equations apply in the plane as well as in space. You should think about how the
ordinary analytic geometry of lines in the plane can be related to this approach.
Secondly, while we have been thinking of t as representing “time”, it is not absolutely
necessary to do so. It is better in some circumstances to think of it as just another
variable which ranges over the domain of a vector function described by the equation

r = r(t) = tv + r0 ,

and the line is the image of this function, i.e., the set of all r(t) so obtained. Such
a variable is called a parameter, and the associated description of the line is called
a parametric representation. This is a more static point of view since we need not
think of the line as traced out by a moving particle. Also, there is no need to use the
symbol ‘t’ for the parameter. Other letters, such as ‘s’ or ‘x’ are perfectly acceptable,
and in some circumstances may be preferable because of geometric connotations in
the problem.

Example 7 Consider the line through the points P0 and P1 with respective coor-
dinates (1, 1, 0) and (0, 2, 2).

Choose v = P0P1, the displacement vector from P0 to P1. This is certainly a vector
pointing along the line, and it does not matter what its magnitude is. (Thinking in
kinematic imagery, we don’t care how fast the line is traced out.) Its components
are ⟨0 − 1, 2 − 1, 2 − 0⟩ = ⟨−1, 1, 2⟩. Since the line passes through P0, we may choose
r0 = OP0, which has components ⟨1, 1, 0⟩. With some abuse of notation, we may
write the parametric equation of the line

    r = t⟨−1, 1, 2⟩ + ⟨1, 1, 0⟩ = ⟨−t + 1, t + 1, 2t⟩

or

r = (1 − t)i + (1 + t)j + (2t)k.


34 CHAPTER 1. VECTORS

This can also be written as 3 component equations

    x = 1 − t,
    y = 1 + t,
    z = 2t.

Note that there are other ways we could have approached this problem. We could
have reversed the roles of P0 and P1, we could have used (1/2)v instead of v, etc. Each
of these would have produced a valid parametric equation for the line, but they
would all have looked different.
Example 8 Consider the line in the plane with parametric equation

    r = t⟨−1, 1⟩ + ⟨1, 3⟩

or

    x = −t + 1,
    y = t + 3.

We may eliminate t algebraically by writing

    t = 1 − x = y − 3

whence we obtain y = −x + 4. This is the usual way of representing a line in the
plane by an equation of the form y = mx + b, where m is the slope, and b is the
y-intercept.

Eliminating the parameter makes sense for lines in space, but unfortunately it does
not lead to a single equation. The vector equation r = tv + r0 can be written as 3
scalar equations

x = ta + x0 ,
y = tb + y0 ,
z = tc + z0

where v = ⟨a, b, c⟩. If none of the components of v are zero, we can solve these
equations for t to obtain t = (x − x0)/a, t = (y − y0)/b, and t = (z − z0)/c. Eliminating t
yields the so-called symmetric equations of the line

    (x − x0)/a = (y − y0)/b = (z − z0)/c.
These have the advantage that the components of v and the coordinates of P0 are
clearly displayed. However, symmetric equations are not directly applicable if, as
in Example 7, one or more of the components of v are zero.

Example 7 revisited We found v = ⟨−1, 1, 2⟩ and r0 = ⟨1, 1, 0⟩ so the symmetric
equations are

    (x − 1)/(−1) = (y − 1)/1 = z/2.

The geometry of lines in space is a bit more complicated than that of lines in the
plane. Lines in the plane either intersect or are parallel. In space, we have to be
a bit more careful about what we mean by ‘parallel lines’, since lines with entirely
different directions can still fail to intersect.

[Figure: parallel lines, intersecting lines, and skew lines]

Example 9 Consider the lines described by

r = t⟨1, 3, −2⟩ + ⟨1, 2, 1⟩

r = t⟨−2, −6, 4⟩ + ⟨3, 1, 0⟩.

They have parallel directions since ⟨−2, −6, 4⟩ = −2⟨1, 3, −2⟩. Hence, in this case
we say the lines are parallel. (How can we be sure the lines are not the same?)

Example 10 Consider the lines

r = t⟨1, 3, −2⟩ + ⟨1, 2, 1⟩

r = t⟨0, 2, 3⟩ + ⟨0, 3, 9⟩.

They are not parallel because neither of the vectors v is a multiple of the other.
They may or may not intersect. (If they don’t, we say the lines are skew.) How
can we find out? One method is to set them equal and see if we can solve for the
point of intersection. There is one tricky point here. If we think of the parameter
t as time, even if the lines do intersect, there is no guarantee that particles moving
on these lines would arrive at the point of intersection at the same instant. Hence,
the way to proceed is to introduce a second parameter, call it s, for one of the lines,
and then try to solve for the point of intersection. Thus, we want

r = t⟨1, 3, −2⟩ + ⟨1, 2, 1⟩ = s⟨0, 2, 3⟩ + ⟨0, 3, 9⟩,

which after collecting terms yields

⟨t + 1, 3t + 2, −2t + 1⟩ = ⟨0, 2s + 3, 3s + 9⟩.



Picking out the components yields three equations

    t + 1 = 0
    3t + 2 = 2s + 3
    −2t + 1 = 3s + 9

in 2 unknowns s and t. This is an overdetermined system, and it may or may not
have a consistent solution. In this case, the first two equations yield t = −1 and
s = −2. Putting these values in the last equation yields (−2)(−1) + 1 = 3(−2) + 9
which checks. Hence, the equations are consistent, and the lines do intersect. To
find the point of intersection, put t = −1 in the equation for the first line (or s = −2
in that for the second) to obtain ⟨0, −1, 3⟩.
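The algebra here is mechanical enough to hand to a computer. A minimal Python
sketch of the computation (written for this particular pair of lines, not as a general
routine; all names are ours):

    # Lines r = t*v1 + p1 and r = s*v2 + p2, as in Example 10.
    v1, p1 = (1, 3, -2), (1, 2, 1)
    v2, p2 = (0, 2, 3), (0, 3, 9)

    # The x-components give t + 1 = 0 (since v2[0] = 0), so t = -1;
    # the y-components give 3t + 2 = 2s + 3, so s = -2.
    t = (p2[0] - p1[0]) / v1[0]
    s = (t*v1[1] + p1[1] - p2[1]) / v2[1]

    # Consistency check on the z-components:
    print(t*v1[2] + p1[2] == s*v2[2] + p2[2])      # True: the lines intersect
    print(tuple(t*a + b for a, b in zip(v1, p1)))  # (0.0, -1.0, 3.0)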

Example 11 Consider the lines

r = t⟨1, 3, −2⟩ + ⟨1, 2, 1⟩

r = s⟨0, 2, 3⟩ + ⟨0, 3, 8⟩.

We argue exactly as above, except in this case, we obtain component equations

t + 1 = 0
3t + 2 = 2s + 3
−2t + 1 = 3s + 8

Again, the first two equations yield t = −1, s = −2, but these values are not
consistent with the third equation. Hence, the lines are skew: they are not parallel,
and they don’t intersect.

Planes A plane in space may be characterized geometrically in several different
ways. Here are some of the most common characterizations.

1. Any three non-collinear points P1, P2, and P3 (points not on a common line)
determine a plane.
2. There is a unique plane passing through a given point P0 and perpendicular
to a given line l. (A plane is perpendicular to a line l if each line in the plane
through the point of intersection with l is perpendicular to l.)
3. Any two distinct lines l1 and l2 which intersect determine a plane.
4. A line l and a point P0 not on l determine a plane.

We want to describe planes analytically. For this, it is best to start with the second
characterization. Let P0 with coordinates (x0 , y0 , z0 ) be the given point. Clearly, we
will get the same plane if we replace the perpendicular line l with any line parallel

to l, so we may as well assume that l passes through P0. Choose a vector N pointing
in the direction of l. If P is any point in the plane, then the displacement vector
P0P is perpendicular to N. However, P0P = r − r0, so the perpendicularity may
be described algebraically by

    N · (r − r0) = 0.              (14)

[Figure: a plane containing the points P0 and P, with normal N; the vector r − r0 lies in the plane]

Suppose N has components ⟨a, b, c⟩. Since r − r0 has components ⟨x − x0, y − y0, z − z0⟩,
this equation may be rewritten

    a(x − x0) + b(y − y0) + c(z − z0) = 0.              (15)

This is called a normal form of an equation of a plane.

Example 12 We shall find an equation for the plane through the points with
coordinates (1, 0, 0), (0, 1, 0), and (0, 0, 1). By symmetry, the angles which a normal
vector makes with the coordinate axes (called its direction angles), should be equal,
so the vector N with components ⟨1, 1, 1⟩ should be perpendicular to this plane.
For P0 , we could use any of the three points; we choose it to be the point with
coordinates (1, 0, 0). With these choices, we get the normal form of the equation of
the plane
1(x − 1) + 1(y − 0) + 1(z − 0) = 0
which can be rewritten more simply

x + y + z − 1 = 0
or x + y + z = 1.

Note that we chose the normal direction by being clever, but there is a quite straight-
forward way to do it. If the three points are denoted P0, P1 and P2, then the directed
line segments P0P1 and P0P2 lie in the plane, so the vector product P0P1 × P0P2 is
perpendicular to the plane. In this case, P0P1 has components ⟨0 − 1, 1 − 0, 0 − 0⟩ =
⟨−1, 1, 0⟩, and P0P2 has components ⟨0 − 1, 0 − 0, 1 − 0⟩ = ⟨−1, 0, 1⟩. Hence, the
cross product has components ⟨1, 1, 1⟩, and we recover the normal form found above.
Had we taken the product in the other order, P0P2 × P0P1 would have components
⟨−1, −1, −1⟩, which yields the normal form

    −1(x − 1) + (−1)(y − 0) + (−1)(z − 0) = 0

    or    −x − y − z = −1.

As the above example illustrates, there is nothing unique about the equation of a
plane. For example, if N with components ⟨a, b, c⟩ is a normal vector, so is sN with
components ⟨sa, sb, sc⟩ for any non-zero scalar s. Replacing N by sN just multiplies
the normal form by the factor s, and clearly this does not change the locus of the
equation. (The locus of an equation is the set of all points whose coordinates satisfy
the equation.)

In general, the normal form may be rewritten

ax + by + cz − ax0 − by0 − cz0 = 0


or ax + by + cz = d

where d = ax0 + by0 + cz0 . Conversely, the locus of any such linear equation,
ax + by + cz = d, in which not all of the coefficients a, b, and c are zero, is a plane.
This is not hard to prove in general, but we illustrate it instead in an example.

Example 13 Consider the locus of the equation

2x − 3y + z = 6.

The choice of a normal vector is clear, N = ⟨2, −3, 1⟩. Hence, to express the above
equation in normal form, it suffices to find one point P0 in the plane. This could
be done in many different ways, but one method that works in this case would be
to set x = y = 0 and to solve for z. In this case, we get z = 6, so the point with
coordinates (0, 0, 6) lies in the plane. Write

2x − 3y + z = 6 for a general point


2(0) − 3(0) + 6 = 6 for the specific point

and then subtract to obtain

2(x − 0) − 3(y − 0) + 1(z − 6) = 0.

That is an equation for the same plane in normal form. It may also be written

    ⟨2, −3, 1⟩ · ⟨x − 0, y − 0, z − 6⟩ = 0

to express the perpendicularity of P0P to N.

You should convince yourself that this method works in complete generality. You
should also ask what you might do for an equation of the form 2x + 3y = 12 where
you can’t set x = y = 0 and solve for z.

Various special cases where one or more of the coefficients in the linear equation
ax + by + cz = d vanish represent special orientations of the plane with respect to the
coordinate axes. For example, x = d has as locus a plane parallel to the y, z-plane
(i.e., perpendicular to the x-axis), and passing through the point (d, 0, 0). ax + by =
d would have as locus a plane perpendicular to the x, y-plane and intersecting it in
the line in that plane with equation ax + by = d.

It is important to note that the equation ax + by = d does not by itself specify a locus.
You must say whether you are considering a locus in the plane or in space. You
should consider other variations on these themes and draw representative diagrams
to make sure you understand their geometric significance.

We saw that the coefficients a, b, and c in the linear equation may be chosen to be
the components of a normal vector N. The significance of the constant d is a bit
trickier. We have
    d = ax0 + by0 + cz0 = N · r0.

It is easiest to understand this if N is a unit vector, i.e., a² + b² + c² = 1, so we
suppose that to be the case. Then N · r0 is the projection of the position vector r0
of the reference point P0 on the normal direction. Since the vector N can be placed
wherever convenient, we move it so its tail is at the origin. Then it is clear that this
projection is just the signed distance of the plane to the origin. The sign depends
on whether the origin is on the side of the plane pointed at by the normal or the
other side. (How?)

This reasoning can be extended further as follows. Let P1 be any point, not nec-
essarily the origin, and let (x1, y1, z1) be its coordinates. Then the projection of
P0P1 = r1 − r0 on the vector N gives the signed perpendicular distance from the
point P1 to the plane. Thus, if N is a unit vector, the distance is given by

    D = |N · (r1 − r0)|.

If N is not a unit vector, we may replace it by n = (1/|N|)N. This yields the formula
for that distance

    D = (1/|N|) |N · r1 − N · r0|
      = (1/√(a² + b² + c²)) |ax1 + by1 + cz1 − d|.              (16)
O
Example 14 The distance of the point (−1, 1, 0) to the plane with equation x +
y + z = 1 is

    (1/√3) |1(−1) + 1(1) + 1(0) − 1| = 1/√3.

Note that the sign inside the absolute values is negative, which reflects the fact that
(−1, 1, 0) is on the side of the plane opposite to that pointed at by ⟨1, 1, 1⟩.
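Formula (16) is also easy to program. A minimal Python sketch (the function name
is our own) checking Example 14:

    from math import sqrt

    def distance_to_plane(p, n, d):
        """Perpendicular distance from point p to the plane n . r = d,
        following formula (16)."""
        a, b, c = n
        x, y, z = p
        return abs(a*x + b*y + c*z - d) / sqrt(a*a + b*b + c*c)

    print(distance_to_plane((-1, 1, 0), (1, 1, 1), 1))   # 0.577... = 1/sqrt(3)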

Intersection of Planes In general, two distinct planes in space are either parallel
or they intersect in a line. In the first case, the normal vectors are parallel.

Example 15 The loci of


    2x − 3y + 2z = 12
    and    −6x + 9y − 6z = 12

are parallel. Indeed the indicated normal vectors are ⟨2, −3, 2⟩ and ⟨−6, 9, −6⟩,
which are multiples of one another, so they are parallel. (How do you know the
planes are not identical?)

In the second case, there is a simple way to find an equation for the line of inter-
section of the two planes.

Example 16 We find the line of intersection of the planes which are loci of the
linear equations
6x + 2y − z = 2
x − 2y + 3z = 5. (17)

[Figure: two planes meeting in a line; the line is perpendicular to both normals and parallel to their cross product]

The line of intersection of the two planes is perpendicular to normals to both planes.
Thus it is perpendicular to ⟨6, 2, −1⟩ and also to ⟨1, −2, 3⟩. Thus, the cross product

    v = ⟨6, 2, −1⟩ × ⟨1, −2, 3⟩ = ⟨4, −19, −14⟩

is parallel to the desired line. To find a parametric representation of the line, it
suffices to find one point P0 in the line. We do this as follows. In the system of
equations (17), choose some particular value of z, say z = 0. This yields the system

    6x + 2y = 2
    x − 2y = 5

which may be solved by the usual methods of high school algebra to obtain x =
1, y = −2. That tells us that the point P0 with coordinates (1, −2, 0) is a point on
the line. Hence,

r = tv + r0 = t⟨4, −19, −14⟩ + ⟨1, −2, 0⟩ = ⟨4t + 1, −19t − 2, −14t⟩

is a vector parametric representation of the line. (What would you have done if x
did not appear in either equation, i.e., appeared with 0 coefficients? In that case,
you would not be able to set z = 0 and solve for x.)
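Here is the same computation as a short Python sketch (cross is the helper from the
sketch in Section 1.4, repeated here so the fragment stands alone):

    def cross(a, b):
        return (a[1]*b[2] - a[2]*b[1],
                a[2]*b[0] - a[0]*b[2],
                a[0]*b[1] - a[1]*b[0])

    n1, n2 = (6, 2, -1), (1, -2, 3)     # normals of the two planes
    print(cross(n1, n2))                # (4, -19, -14), direction of the line

    # Setting z = 0 leaves 6x + 2y = 2 and x - 2y = 5; adding gives 7x = 7,
    # so x = 1 and y = -2, and r = t*(4, -19, -14) + (1, -2, 0) is the line.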

Note that the symmetric equations of a line

    (x − x0)/a = (y − y0)/b = (z − z0)/c

may be interpreted in this light as asserting the intersection of two planes. The first
may be interpreted in this light as asserting the intersection of two planes. The first
equality could be rewritten

bx − ay = bx0 − ay0

which defines a plane perpendicular to the x, y-plane. Similarly, the second equality
could be rewritten
cy − bz = cy0 − bz0
which defines a plane perpendicular to the y, z-plane. The line is the intersection
of these two planes.

In general, the intersection of three planes in space is a point, but there are many
special cases where it is not. For example, the planes could be parallel, or the line
of intersection of two of the planes might be parallel to the third, in which case they
do not intersect at all. Similarly, all three planes could intersect in a common line.
This geometry is reflected in the algebra. If the planes intersect in a single point,
their defining equations provide a system of 3 equations in 3 unknowns which have
a unique solution. In the second case, the equations are inconsistent, and there is
no solution. In the third case there are infinitely many solutions. We shall return
to a study of such issues when we study linear algebra and the solution of systems
of linear equations.

Elaborations The analytic geometry of lines and planes in space is quite interesting
and presents us with many challenging problems. We shall not go into this in
detail in this course, but you might profit from trying some of the exercises. One
interesting problem is to determine the perpendicular distance between skew lines
in space. If the lines are given parametrically by equations

r = tv1 + r1
r = tv2 + r2 ,

then the distance between the lines is given by

    |(r2 − r1) · (v2 × v1)| / |v2 × v1|.              (18)

See if you can derive this formula! (There are some hints in the exercises.)
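To test a derivation of formula (18) numerically, you might try it on the skew lines
of Example 11. A self-contained Python sketch (all names are ours):

    from math import sqrt

    def cross(a, b):
        return (a[1]*b[2] - a[2]*b[1],
                a[2]*b[0] - a[0]*b[2],
                a[0]*b[1] - a[1]*b[0])

    def dot(a, b):
        return sum(x*y for x, y in zip(a, b))

    def skew_distance(v1, r1, v2, r2):
        """Formula (18) for the lines r = t*v1 + r1 and r = t*v2 + r2."""
        w = cross(v2, v1)
        diff = tuple(q - p for p, q in zip(r1, r2))
        return abs(dot(diff, w)) / sqrt(dot(w, w))

    # The skew lines of Example 11:
    print(skew_distance((1, 3, -2), (1, 2, 1), (0, 2, 3), (0, 3, 8)))  # about 0.148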

Exercises for 1.5.

1. Find a vector (parametric) equation for the line that


(a) passes through (0, 0, 0) and is parallel to v = 3i + 4j + 5k,
(b) passes through (1, 2, 3) and (4, −1, 2),
(c) passes through (1, 1) and is perpendicular to v = h3, 1i,
(d) passes through (9, −2, 3) and (1, 2, 3).
2. Determine if the lines with the following vector equations intersect: r =
⟨1, −1, 2⟩ + t⟨2, 1, 1⟩, r = ⟨0, 1, 1⟩ + t⟨1, 0, −1⟩.
3. Find an equation for the plane with the given normal vector and containing
the given point: (a) N = ⟨2, −1, 3⟩, P (1, 2, 0),
(b) N = ⟨1, 0, 3⟩, P (2, 4, 5).
4. Write parametric and symmetric equations for the line which
(a) passes through (0, 1, 2) and is perpendicular to the yz-plane,
(b) passes through (5, 2, −1) and is perpendicular to the plane with equation
3x + 4y − z = 2,
(c) passes through (1, 3, 0) and is parallel to the line with parametric equations
x = t, y = t − 1, z = 2t + 3.
5. Write an equation for the plane that
(a) passes through (1, 4, 3) and is perpendicular to the line with equation
r = ⟨1 + t, 2 + 4t, t⟩.
(b) passes through the origin and is parallel to the plane with equation 3x +
4y − 5z = −1,
(c) passes through (0, 0, 0), (1, −2, 8), (−2, −1, 3).
6. Find a vector (parametric) equation for the line of intersection of the planes
with equations 2x + 3y − z = 1 and x − y − z = 0.
7. Find the angle between the normals to the following planes:
(a) the planes with equations x + 2y − z = 2 and 2x − y + 3z = 1,
(b) the plane with equation 2x + 3y − 5z = 0 and the plane containing the
points (1, 3, −2), (5, 1, 3), and (1, 0, 1).
8. Use formula (16) for the perpendicular distance from the point P0 (x0, y0, z0)
to the plane ax + by + cz = d to find the distance from
(a) the origin to the plane x − 3z = −2,
(b) the point (−1, 2, 1) to the plane 3x + 4y − 5z = −2.

9. Show that the distance between the lines given parametrically by equations
r = tv1 + r1
r = tv2 + r2 ,
is

    |(r2 − r1) · (v2 × v1)| / |v2 × v1|.
Hint: Consider the line segment Q1Q2 from one line to the other which is
perpendicular to both. Note that (1/|v1 × v2|) v1 × v2 is a unit vector parallel to
Q1Q2. Convince yourself that Q1Q2 is the projection of the line segment
connecting the endpoints of r1 and r2 onto this unit vector.
10. A projectile follows the path
r = 1000t i + (1000t − 16t²) j.
At t0 = 1, an anti-gravity ray zaps the projectile, making it impervious to
gravitation, so it proceeds off on the tangent line at the point. In general, the
equation for the tangential motion would be
r = (t − t0 )v0 + r0
where r0 is the position vector at t0 and v0 is the instantaneous velocity vector
there. Where will the projectile be after one additional second?

1.6 Integrating on Curves

Arc Length Suppose a particle follows a path in space (or in the plane) described
by r = r(t). Suppose moreover it starts for t = a at r(a) and ends for t = b at r(b).
We want to calculate the total distance it travels on the curve. The correct formula
for this distance is

    L = ∫_a^b |r′(t)| dt              (19)

where r′(t) = dr/dt is the velocity vector at time t (when the particle has position
vector r(t)).

[Figure: a space curve running from r(a) to r(b), with an infinitesimal displacement dr = r′(t) dt along it]

This makes sense, because |r′(t)| should be the speed of the particle at time t, so
|r′(t)| dt should be a good approximation to the distance traveled in a small time
interval dt. (Integration amounts to adding up the incremental distances.)

Example 17 Let r = R cos t i + R sin t j with 0 ≤ t ≤ 2π. As we saw earlier, this
describes one circuit of a circle of radius R centered at the origin. We have

    dr/dt = −R sin t i + R cos t j,

so |r′(t)| = √(R² cos² t + R² sin² t) = R. Hence,

    L = ∫_0^{2π} R dt = Rt |_0^{2π} = 2πR

just as we would expect.

Formula (19) may be taken as the definition of the arc length of the curve, but a little
more discussion will clarify its ramifications. Choose n+1 points r0 , r1 , r2 , . . . , rn on
the curve (as indicated in the diagram) by choosing a partition of the time interval

a = t0 < t1 < t2 < · · · < tn−1 < tn = b,

and letting the ith position vector ri = r(ti ).

[Figure: points r0, r1, . . . , rn on the curve; the chord from r_{i−1} to r_i has length |∆r_i| and approximates the arc ∆s_i]

For two neighboring points ri−1 and ri on the curve, the displacement vector ∆ri =
ri − ri−1 is represented in the diagram by the chord connecting the points. Hence,
for any plausible definition of length, the length of arc ∆si on the curve from ri−1
to ri is approximated quite well by the length of the chord, at least if the points
are closely spaced. Thus
∆si ≈ |∆ri |.
(‘≈’ means ‘is approximately equal to’.) Hence, if we add everything up, we get

    L = ∑_{i=1}^{n} ∆si ≈ ∑_{i=1}^{n} |∆ri|,

that is, the length of the curve is approximated by the length of the polygonal path
made up of the chords. To relate the latter length to the integral, note that if we
put ∆ti = ti − ti−1, then for closely spaced points

    ∆ri ≈ r′(ti−1) ∆ti.

Putting this in the prior formula for L yields

    L ≈ ∑_{i=1}^{n} |r′(ti−1)| ∆ti,

and the sum on the right is one of the approximating (Riemann) sums which ap-
proach the integral ∫_a^b |r′(t)| dt as n → ∞.

In our discussions, we have denoted the independent variable by ‘t’ and thought
of it as time. Although this is helpful in kinematics, from a mathematical point
of view this is not necessary. Any vector valued function r = r(u) of a variable u
can be thought of as yielding a curve as u ranges through the set of values in the
domain of the function. As mentioned elsewhere, the variable u is usually called a
parameter, and often it will have some geometric or other significance. For example,
in Example 17, it might have made more sense to call the parameter θ (instead of
t) and to think of it as the angle the position vector makes with the positive real
axis.

Example 18 We shall find the length of the parabola with equation y = x² on
the interval −1 ≤ x ≤ 1. One parametric representation of the parabola would be
r = x i + y j = t i + t² j where −1 ≤ t ≤ 1. However, there is no real need to introduce
a new variable; we might just as well use x, and write instead

    r = x i + x² j,   −1 ≤ x ≤ 1.

The formula (19) still applies with t replaced by x. Then,

    dr/dx = i + (2x) j,

so |r′(x)| = √(1 + 4x²). Hence,

    L = ∫_{−1}^{1} √(1 + 4x²) dx = √5 + (1/4) ln((√5 + 2)/(√5 − 2)).

You should ponder what we just did, and convince yourself there is nothing peculiar
about the use of x as parameter.

As the previous example shows, calculation of lengths often results in difficult inte-
grals because of the square root. Recourse to integral tables or appropriate computer
software is advised.
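Alternatively, you can approximate such integrals numerically. The following Python
sketch (a crude midpoint Riemann sum of our own devising, not a polished quadrature
routine) confirms the answer of Example 18:

    from math import sqrt, log

    def arc_length(speed, a, b, n=100000):
        """Midpoint Riemann sum for the integral of |r'(t)| from a to b;
        'speed' is a function returning |r'(t)|."""
        h = (b - a) / n
        return sum(speed(a + (i + 0.5)*h) for i in range(n)) * h

    # Example 18: |r'(x)| = sqrt(1 + 4x^2) on [-1, 1].
    approx = arc_length(lambda x: sqrt(1 + 4*x*x), -1.0, 1.0)
    exact = sqrt(5) + 0.25*log((sqrt(5) + 2)/(sqrt(5) - 2))
    print(approx, exact)   # both about 2.9579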

There is one very tricky point in using formula (19) to define length of arc on a
curve. The same curve might be given by two different parametric representations
r = r1 (u), a ≤ u ≤ b and r = r2 (v), c ≤ v ≤ d. How can we be sure we will get the
same answer for the length if we compute

    ∫_a^b |r1′(u)| du   and   ∫_c^d |r2′(v)| dv?
The superficial answer is that in general we won’t always get the same answer. For
example, a circle might be represented parametrically in two different ways, one
which traverses the circle once, and one which traverses it twice, and the answers for
the length would be different. The curve as a point set does not by itself determine
the answer if part or all of it is covered more than once in a given parametric
representation. This issue, while something to worry about in specific examples, is
really a red herring. If we are careful to choose parametric representations which
trace each part of the curve exactly once, then we expect always to get the same
answer. (Such representations are called one-to-one.) Our reason for believing this
is that we think of length as a physical quantity which can be measured by tapes
(or more sophisticated means), so we expect the mathematical theory to fall in
line with reality. Unfortunately, it is not logically or philosophically legitimate to
identify the world of mathematical discourse with physical reality, so it is incumbent
on mathematicians to prove that all one-to-one parametric representations yield the
same answer. We won’t actually do this in this course, but some of the relevant
issues are discussed in the Exercises.

Line Integrals In elementary physics, the work W done by a constant force F
exerted over a distance ∆ is defined to be the product F∆.

In more complicated situations, physicists need to deal with forces which may vary
or which may not point along the direction of motion. In addition, they have
to worry about motion which is not constrained to a line. We shall investigate the
mathematical concepts needed to generalize the concept of work in those situations.
First suppose that the work is done along a line, but that the force varies with
position on the line. If x denotes a coordinate describing that position, and the
force is given by F = F(x), then the work done going from x = a to x = b is

    W = ∫_a^b F(x) dx.

[Figure: the interval [a, b] partitioned by points a = x0 < x1 < · · · < xn = b, with a typical subinterval of width ∆xi]

The justification for that formula is that if we think of the interval [a, b] being
partitioned into small segments by division points
a = x0 < x1 < x2 < · · · < xn−1 < xn = b,

then the work done going from xi−1 to xi should be given approximately by ∆Wi ≈
F (xi−1 )∆xi where ∆xi = xi − xi−1 . Adding up and taking a limit yields the
integral.

Example 19 Suppose F(x) = −kx, 0 ≤ x ≤ D. (This is the restoring force of a
spring, where the constant k is the spring constant.) Then

    W = ∫_0^D (−kx) dx = −k x²/2 |_0^D = −kD²/2.

More generally, the force may not point in the direction of the displacement. If
that is the case, we simply use the component of the force in the direction of the
displacement. Thus, if the force F is a constant vector, and the displacement is
given by a vector ∆r, then the component in the direction of ∆r is |F| cos θ and
the work is (|F| cos θ)|∆r|, which you should recognize as the dot product

    F · ∆r.

If the force is not constant, this formula will still be approximately valid if the
magnitude of the displacement ∆r is sufficiently small.

We are now ready to put all this together for the most general situation. Suppose
we have a force F which may vary from point to point, and it is exerted on a particle
moving on a path given parametrically by r = r(u), a ≤ u ≤ b. (We assume also
that the representation is one-to-one, although it turns out that this is not strictly
necessary here.) The correct mathematical quantity to use for the work done is the
integral

    ∫_a^b F(r(u)) · r′(u) du.              (20)

[Figure: a force F varying along the path, with a small displacement dr = r′(u) du]

The idea behind this formula is that dr = r′(u) du is the displacement along the
curve produced by a small increment du in the parameter, so F · dr = F · r′(u) du is
a good approximation to the work done in that displacement. As usual, the integral
sign suggests a summing up of the incremental contributions to the total work.

Example 20 Let C be a circle of radius R centered at the origin and traversed in
the counter-clockwise direction. C may be represented parametrically by

    r = (R cos θ) i + (R sin θ) j,   0 ≤ θ ≤ 2π.

Suppose the force is given by F = −y i + x j. To calculate the work by formula
(20), we calculate

    r′(θ) = (−R sin θ) i + (R cos θ) j,

and note that since x = R cos θ, y = R sin θ on the circle, we have

    F = −y i + x j = (−R sin θ) i + (R cos θ) j.

Hence,

    W = ∫_0^{2π} (R² cos² θ + R² sin² θ) dθ = ∫_0^{2π} R² dθ = 2πR².

Note that had we not substituted for x and y in terms of θ in the expression for
F, we would have gotten a meaningless answer with x’s and y’s in it. Generally,
when working such problems, one must be sure that the relevant quantities in the
integrand have all been expressed in terms of the parameter.
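Formula (20) also lends itself to numerical approximation by Riemann sums of the
kind discussed below. A minimal Python sketch (the function line_integral and
the midpoint rule are our own choices) reproducing Example 20:

    from math import sin, cos, pi

    def line_integral(F, r, rprime, a, b, n=100000):
        """Midpoint Riemann sum for formula (20): the integral of
        F(r(u)) . r'(u) du over a <= u <= b, for plane curves."""
        h = (b - a) / n
        total = 0.0
        for i in range(n):
            u = a + (i + 0.5)*h
            Fu, dr = F(r(u)), rprime(u)
            total += (Fu[0]*dr[0] + Fu[1]*dr[1]) * h
        return total

    R = 2.0
    W = line_integral(lambda p: (-p[1], p[0]),           # F = -y i + x j
                      lambda t: (R*cos(t), R*sin(t)),    # r(theta)
                      lambda t: (-R*sin(t), R*cos(t)),   # r'(theta)
                      0.0, 2*pi)
    print(W, 2*pi*R*R)   # both about 25.13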

Some additional discussion will clarify the meaning of formula (20). Suppose a
curve C is represented parametrically by a vector function r = r(u), a ≤ u ≤ b,
and suppose a force F is defined on C but may vary from point to point. Choose a
sequence of points on the curve with position vectors r0 , r1 , . . . , rn by subdividing
the parameter domain

a = u0 < u1 < u2 < · · · < un−1 < un = b

and letting ri = r(ui ). Let ∆ri = ri − ri−1 .

[Figure: the force F evaluated at points r0, r1, . . . , rn along the curve, with a typical chord ∆ri]

Then it is plausible that the work done moving along the curve from ri−1 to ri is
approximated by F(ri−1 ) · ∆ri , at least if |∆ri | is small. Moreover, adding it all up,

we have

    W ≈ ∑_{i=1}^{n} F(ri−1) · ∆ri.

Just as in the case of arc length, the right hand side approaches the integral in
formula (20) as a limit as n → ∞ and as the points get closer together.

We introduce the following notation for the limit of the sum

    ∫_C F · dr = lim ∑_{i=1}^{n} F(ri−1) · ∆ri = ∫_a^b F(r(u)) · r′(u) du,

and call it a line integral. It first appears in physics in discussions of work and
energy where F represents a force, but it may be thought of as a mathematical
entity defined for any vector function of position F.

The curve C and ultimately the line integral have been discussed in terms of a
specific parametric representation r = r(u). As in the case of arc length, it is
possible to show that different parametric representations produce the same answer
except for one additional difficulty. In the formula for arc length, we dealt with
|∆r|, so the direction in which the curve was traced was not relevant. In the case
of line integrals, we use F · ∆r, so reversing the direction changes the sign of ∆r
and hence it changes the sign of the line integral. Line integrals, in fact, must be
defined for oriented curves for which one of the two possible directions has been
specified. If C′ and C are the same curve but traversed in opposite directions, then

    ∫_{C′} F · dr = −∫_C F · dr.

There are several different notations for line integrals. For example, you may see

    ∫_C F · dl   or   ∫_C F · ds.

The ‘ds’ suggests a vector version of the element of arc length ds = |r′(t)| dt. We
can also write formally, F = Fx i + Fy j + Fz k, dr = dx i + dy j + dz k, so F · dr =

Fx dx + Fy dy + Fz dz, and

    ∫_C F · dr = ∫_C Fx dx + Fy dy + Fz dz.

The notation

    ∮ F · dr

is sometimes used to denote a line integral for an unspecified closed path (i.e., one
which starts and ends at the same point). Finally, in some cases the line integral
depends only on the endpoints A and B of the path, so you may see the notation

    ∫_A^B F · dr

or something equivalent. We shall discuss some of these concepts and special nota-
tions later in this course.
Geometric Reasoning

Example 20, revisited In applying formula (20) to Example 20, you may have
noticed that, on the circle, the force

F = −R sin θ i + R cos θ j

and the displacement

    dr = r′(θ) dθ = (−R sin θ i + R cos θ j) dθ

have the same direction. If we had been able to visualize this geometric relationship,
we could have calculated the line integral more directly. Thus,

    F · dr = |F||dr| cos 0 = |F||dr|.

On the other hand, |F| = √((−y)² + x²) = √(R²) = R, and |dr| = ds = R dθ. (The
length of arc on a circle is always the radius of the circle times the subtended angle.)
Hence, we could have written directly

    ∫_C F · dr = ∫_0^{2π} R · R dθ = 2πR².

You will often see this approach used by physicists or engineers. Note that it
emphasizes the geometric or physical significance of the quantities, and it often
makes clear underlying simplicity which might otherwise be hidden by complicated
formulas. The approach used previously seems more straightforward, but that is
because the geometric (or physical) part of the problem had already been solved
for you by giving you the parametric representation. For a real problem, you would
have to do that yourself.

To approach line integral problems geometrically, it is useful to introduce one final
bit of notation. We can think of the displacement dr as connecting two very close
points on the curve, or to a high degree of approximation as being tangent to the
curve. We have already introduced the notation ds for the magnitude of dr, and we
can specify its direction by giving the unit vector T which is tangent to the curve
at the point and which points in the preferred direction along the curve. Then
dr = T ds, and we may write

    ∫_C F · dr = ∫_C F · T ds.

It is important to note that this last formula is only useful if you argue geometrically.

Example 21 Suppose F = 2x j and let C be the path which starts at (0, 0), moves
to (1, 0) and then moves to (1, 1). (See the diagram.) In this case, the path consists
of two segments C1 and C2 joined at a corner where the direction changes discon-
tinuously. Clearly, the proper definition of the line integral in this situation should
be

    ∫_C F · dr = ∫_{C1} F · dr + ∫_{C2} F · dr.

For the first segment C1, F = 2x j is perpendicular to T (hence, to dr), so F · dr = 0.
Thus the first integral vanishes. For the second segment, C2, F = 2x j = 2 j, and
dr = T ds = j dy. F and T point the same way, so F · dr = 2 ds = 2 dy. Hence,

    ∫_{C2} F · dr = ∫_0^1 2 dy = 2y |_0^1 = 2.

To find the total answer, add up the answers for the two segments to get 0 + 2 = 2.

[Figures for Examples 21 and 22: the two-segment path from (0, 0) to (1, 0) to (1, 1), and the diagonal path from (0, 0) to (1, 1), with the force F = 2x j indicated along each]

Example 22 Let F = 2x j as in the previous example, but let C be the straight line
segment from (0, 0) to (1, 1). Then, F = 2x j makes angle π/4 with T (hence, with
dr). Thus,

    F · dr = |F| ds cos(π/4) = 2x ds (1/√2).

Choosing x as the parameter, we may write ds = √2 dx, so

    ∫_C F · dr = ∫_0^1 (2x √2 dx)/√2 = ∫_0^1 2x dx = x² |_0^1 = 1.

Here is another slightly different way to do the calculation. We have F = 2x j, dr =
dx i + dy j. Hence, F · dr = 2x dy. It would seem appropriate to choose y as
parameter, so since x = y on the given line, we have

    ∫_C F · dr = ∫_0^1 2y dy = 1.

Finally, the calculation could be done using formula (20) as follows. The given line
segment can be described by the parametric representation r = t i + t j, 0 ≤ t ≤ 1.
Then r′(t) = i + j, and on the line F = 2x j = 2t j. Hence, by formula (20),

    ∫_C F · dr = ∫_0^1 (2t j) · (i + j) dt = ∫_0^1 2t dt = 1.

Polar Coordinates In polar coordinates, the velocity vector is given by

    dr/dt = (dr/dt) ur + r (dθ/dt) uθ

so we may write symbolically

    dr = (dr/dt) dt = dr ur + r dθ uθ.

Hence, if F = Fr ur + Fθ uθ is resolved into polar components, we may write

    ∫_C F · dr = ∫_C Fr dr + Fθ r dθ.

Example Let F = −(1/r²) ur. Such a force has magnitude inversely proportional to
the square of the distance to the origin and is directed toward the origin. (Does it
remind you of anything?) Let C be any path whatsoever, which starts at a point
with r = a and ends up at a point with r = b. We have

    F · dr = (−(1/r²) ur) · (dr ur + r dθ uθ) = −(1/r²) dr.

(In other words, only the radial component of the displacement counts since the
force is entirely radial in direction.) If we choose r as the parameter, we have

    ∫_C F · dr = ∫_a^b −(1/r²) dr = (1/r) |_a^b = 1/b − 1/a.
In particular, the answer does not depend on the path, only on the values of r at
the start and finish.

[Figure: a path C from the circle r = a to the circle r = b, with the polar unit vectors ur and uθ and a displacement dr along the path]

Note that the argument (and diagram) presumes that the particle moves in such a
way that the radius always increases. Otherwise, it would not make sense to use
r as the parameter, since there should be exactly one point on the curve for each
value of the parameter. Can you see how to make the argument work if the path
is allowed to meander in such a way that r sometimes increases and sometimes
decreases?
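One way to convince yourself is to integrate numerically along two quite different
paths from r = 1 to r = 2 and compare. A Python sketch (all names are ours; the
spiral is an arbitrary second path chosen for illustration):

    from math import cos, sin, pi

    def F(p):
        """Inverse-square attraction toward the origin: F = -(1/r^2) ur."""
        x, y = p
        r3 = (x*x + y*y) ** 1.5
        return (-x/r3, -y/r3)

    def work(r, rprime, n=200000):
        """Midpoint Riemann sum of F(r(t)) . r'(t) dt over 0 <= t <= 1."""
        h = 1.0 / n
        total = 0.0
        for i in range(n):
            t = (i + 0.5)*h
            Ft, dr = F(r(t)), rprime(t)
            total += (Ft[0]*dr[0] + Ft[1]*dr[1]) * h
        return total

    # Path 1: straight out along the x-axis from (1, 0) to (2, 0).
    w1 = work(lambda t: (1 + t, 0.0), lambda t: (1.0, 0.0))

    # Path 2: a spiral from r = 1 to r = 2 that also winds through an angle pi.
    w2 = work(lambda t: ((1 + t)*cos(pi*t), (1 + t)*sin(pi*t)),
              lambda t: (cos(pi*t) - pi*(1 + t)*sin(pi*t),
                         sin(pi*t) + pi*(1 + t)*cos(pi*t)))

    print(w1, w2)   # both about -0.5 = 1/b - 1/a with a = 1, b = 2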

Exercises for 1.6.

1. Find the arc length of each of the following paths for the given parameter
interval.
(a) r = 2 sin t i − 2 cos t j + 2t k, from t = 0 to t = 2π.
(b) r = ⟨e^(−t) cos t, e^(−t) sin t, e^(−t)⟩, from t = 0 to t = 2.
(c) x = t², y = 4t + 3, z = −4 ln t, from t = 1 to t = 2.
(d) x = 4t, y = 3, z = 2t², from t = 0 to t = 3.

2. Let C be the graph of the function defined by y = f(x) on the interval a ≤
x ≤ b. Use the parametric representation r = x i + f(x) j to derive the formula

    L(C) = ∫_a^b √(1 + f′(x)²) dx.

3. (Optional, for those with an interest in mathematical rigor). Suppose a curve
C is represented parametrically by r = r1(u), a ≤ u ≤ b and r = r2(v), c ≤
v ≤ d. Under reasonable assumptions on the parametric representations,
it may be shown that there is a functional relation u = p(v), c ≤ v ≤ d
such that p is a differentiable function with continuous derivative, p′(v) ≥ 0,
p(c) = a, p(d) = b, and r1(p(v)) = r2(v) for c ≤ v ≤ d. Show that

    ∫_a^b |r1′(u)| du = ∫_c^d |r2′(v)| dv.

Hint: Use the chain rule

    d r1(p(v))/dv = (d r1(u)/du)(dp(v)/dv)   where u = p(v),

and apply the change of variables formula for integrals.

4. Evaluate ∫_C F · T ds in each case.
(a) F = x i + y j + z k, r = ⟨t, t, 2⟩, 0 ≤ t ≤ 1.
(b) F = yz i + xz j + xy k, x = sin t, y = 2 cos t, z = t, 0 ≤ t ≤ π.
(c) F = y i + z j + x k, r = t i + t² j + t³ k, 0 ≤ t ≤ 1.

5. Let F = −y i, and let C be the rectilinear path starting at (0, 1), going to (0, 3)
and ending at (3, 3). Find ∫_C F · dr.

6. Let F = r = x i + y j + z k. Compute ∫_C F · dr for each of the following paths:
(a) the linear path which goes directly from the origin to the point (2, 1, 3),
(b) the rectilinear path which goes from the origin to (2, 0, 0), then to (2, 1, 0)
and finally to (2, 1, 3).

7. Let F = y i and let C be the path which starts at (−1, 1) and follows the
parabola y = x² to (1, 1). Find ∫_C F · dr.

8. A workman at the top of a ten story building (50 m high) needs to lower a
10 kg box to a point on the ground 50 m from the base of the building. To
do this, he constructs a variety of slides. Show that the work done by gravity
(with force per unit mass 9.8j) is identical if the slide is
(a) A freefall chute, and the box is then pushed 50 m horizontally.
(b) A straight line of slope 1.
(c) One quarter of a circle, with its center at the base of the building (concave
down).
(d) One quarter of a circle, with its center 50 m above the final point (concave
up).
Interesting diversion: Are the times involved the same? Hint: Consider the
ISP toy with two balls on two slightly different tracks.

9. Suppose F = r uθ. (a) Calculate ∫_C F · dr for C a circle of radius R centered at
the origin and traversed once counter-clockwise. (This was done differently as
an Example in the text. Do you recognize it?) (b) Calculate the line integral
for the same F, but for the path given parametrically by r = ti+tj, 0 ≤ t < ∞.
In principle, the fact that the path is unbounded could cause a problem. Why
doesn’t it in this case?

10. Let F = r = x i + y j + z k, and let C be the circle of radius R lying in the
plane z = h and centered at the point (0, 0, h). Assume C is traversed in the
counter-clockwise direction when viewed from above. Calculate ∫_C F · dr.

11. Let F = j and let C be the quarter of the circle x² + y² = R² in the first
quadrant of the x, y-plane. Assume C is traversed counter-clockwise. Find
∫_C F · dr. (Hint: The angle between F and dr is the polar angle θ.) What if
we use F = i instead? How about x i?
Chapter 2

Differential Equations,
Preview

2.1 Some Elementary Differential Equations

Later in this course, you will study differential equations in a systematic way, but
before that you are likely to encounter some simple differential equations in Physics
and other courses. Hence, we shall give a brief introduction to the subject here.

A differential equation is an equation involving an independent variable, a depen-
dent variable, and derivatives of the latter. The order of a differential equation is
the highest order of any derivative which appears in it. For example,

    dx/dt = ax              (22)

is an example of a first order differential equation, while

    d²x/dt² = −Kx           (23)

is an example of a second order differential equation.
is an example of a second order differential equation.

To solve a differential equation, you must find a function x = x(t), expressing the
dependent variable in terms of the independent variable, which when substituted
in the equation yields a true identity. For example, substituting x = Ce^(at) (where
C is any constant) and dx/dt = Cae^(at) in equation (22) yields

    Cae^(at) = a(Ce^(at))

which is true. Hence, the function given by x(t) = Ce^(at) is a solution. Note the
following very important point. If someone is kind enough to suggest a solution to

57

you, it is not necessary to go through any procedure to “find” that solution. All
you need to do is to check that it works.

If you don’t have any idea of what a solution to a given differential equation might
be, then you need some method to try to find a solution. How one goes about this
is a vast subject, and we shall go into it later, as mentioned above. For the moment,
we describe only the method of separation of variables which works in many simple
examples. We illustrate it by solving equation (22).
    dx/dt = ax
    dx/x = a dt                       Variables are separated formally
    ∫ dx/x = ∫ a dt                   Take antiderivatives
    ln |x| = at + c
    |x| = e^(at+c) = e^(at) e^c       Exponentiate
    x = ±e^c e^(at).

However, ±e^c is just some constant which we may call C. Thus we find that
x = Ce^(at) is a solution.

Note that when integrating there should be an undetermined constant on each side
of the equation. However, these constants can be combined in one by transposition,
and that is what we did by putting the +c only on the right side of the equation in
the fourth line.

Whenever the variables can be so separated, the above procedure produces a general
solution of the equation. There are some technical difficulties with the method. (For
example, a denominator might vanish after the variables have been separated. Also,
the formal separation procedure needs some rigorous justification.) However, these
difficulties can usually be resolved and one can make a convincing argument that
any solution will be of the form obtained by the method.

Note that in the example, the solution produced one arbitrary constant C, i.e., one
constant which is not otherwise determined by the equation. (a is also a constant,
but it was assumed known in the original equation.) You can think of that constant
arising from integration. To complete the solution, it is necessary to determine the
constant. There are many ways to do this, depending on the formulation of the
problem giving rise to the differential equation. One of the easiest is to give the
value x0 of the dependent variable for one specified value t0 of the independent
variable and then solve the resulting equation for the constant C. (This is called
satisfying an initial condition.) For example, suppose we are given that x = 10
when t = 0. Substituting these values yields
    10 = Ce^(a·0) = Ce^0 = C

or C = 10. With that initial condition, the solution is x = 10e^(at).

Equation (22) describes processes of growth or decay. x might represent the size of
a population, the number of radioactive atoms, or something similar. The equation
asserts that the growth rate (decay rate if a < 0) is proportional to the amount
present. In the case of population growth, a is often called the “birth rate”. In the
case of radioactive decay, a < 0, so we put a = −γ, and γ tells us the proportion
of atoms present at a given time which will decay at that time. In that case, γ is
often expressed in terms of the half life T, which is the time for x to decrease
to half its initial value. You might check your proficiency in the use of exponentials
and logarithms by checking the following formula.

    γ = ln 2 / T.

Equation (23), d²x/dt² = −Kx, arises in studying simple harmonic motion. For
example, the motion of a particle of mass m oscillating at the end of a spring with
spring constant k is governed by such an equation with K = k/m, if we ignore
friction. (23) is much harder to solve than (22). One must distinguish the cases
K > 0 and K < 0, but the former has more interesting applications (as in simple
harmonic motion), so we shall assume K > 0. The solution proceeds by separation
of variables as follows. First, put v = dx/dt. Then the equation becomes

    dv/dt = −Kx
    (dv/dx)(dx/dt) = −Kx          using the chain rule
    or  v dv/dx = −Kx
    v dv = −Kx dx                 separating the variables
    v²/2 = −K x²/2 + c
    v² = −Kx² + 2c.

[Figure: a mass m on a spring, displaced a distance x from equilibrium]

However, we might just as well rename the constant 2c and call it C1. Note that
since K > 0, we have C1 = v² + Kx² ≥ 0. If C1 = 0, it follows that v = x = 0 for
all t. That is certainly possible, but it isn’t very interesting, so we assume C1 > 0.
Now, recalling that v = dx/dt, we obtain

    v = dx/dt = ±√(C1 − Kx²)
    dx/√(C1 − Kx²) = ±dt                      separating variables
    ∫ dx/√(C1 − Kx²) = ±t + C2                integrating
    (1/√K) cos⁻¹(√(K/C1) x) = ±t + C2.

Note that we need K > 0 and C1 > 0 for the last integration to be valid. Continuing,
we have

    cos⁻¹(√(K/C1) x) = ±√K t + √K C2,   or
    x = √(C1/K) cos(±√K t + √K C2)
      = √(C1/K) cos(√K t ± √K C2).

However, we may define A = √(C1/K) (so A > 0) and δ = ±√K C2 to obtain the
general solution

    x = A cos(√K t + δ).              (24)
Note that this solution has two arbitrary constants arising from the two integrations
which had to be performed. Generally, the solution of an nth order equation involves
n arbitrary constants.

The above derivation depended strongly on the assumption K > 0. For K < 0, we
would not be able to conclude that C1 > 0, and the integration step which resulted
in cos⁻¹(√(K/C1) x) on the left would not be valid. If you are ambitious, you might try
doing the integration under the assumption K < 0 to see what you get. Later in
this course, we shall derive other methods to solve this equation whatever the sign
of K.

To determine the constants, one may again specify initial conditions. However,
specifying the value x0 at t0 will yield only one equation for the two constants.
Hence, one needs an additional condition to obtain another equation. To get that,
one commonly specifies the derivative v0 at t0. For example, suppose K = 3, and
suppose x = 1, dx/dt = −1 at t = 0. Then (24) yields the two equations

    1 = A cos(δ)
    −1 = −A√3 sin(δ).

Dividing the second equation by the first yields

    tan δ = 1/√3

from which we conclude δ = π/6 or −5π/6. Since both sin δ and cos δ are positive,
this yields δ = π/6. Putting this in the first equation yields

    1 = A cos(π/6) = A √3/2

so A = 2/√3. We conclude that the solution satisfying these initial conditions is

    x = (2/√3) cos(√3 t + π/6).
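As emphasized earlier, a proposed solution can simply be checked. Here is a quick
numerical check in Python (a sketch of our own; the second derivative is approximated
by a centered difference, and the sample times and step size are arbitrary choices):

    from math import cos, sqrt, pi

    K = 3.0
    A = 2/sqrt(3)
    delta = pi/6

    def x(t):
        return A*cos(sqrt(K)*t + delta)

    h = 1e-5
    for t in (0.0, 0.7, 1.3):
        xpp = (x(t + h) - 2*x(t) + x(t - h)) / h**2   # approximates x''(t)
        print(abs(xpp + K*x(t)) < 1e-4)               # True: x'' = -K x

    print(x(0))                      # 1.0, the initial position
    print((x(h) - x(-h)) / (2*h))    # about -1.0, the initial velocity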

The quantity A in equation (24) is called the amplitude of the solution, and δ is
called the phase. One way to visualize the solution is as follows. Consider a point
moving on a circle of radius A centered at the origin, making angle θ = ωt + δ with
the x-axis. The projection of that point on the x-axis oscillates back and forth
according to (24). δ gives the angle the position vector of the point makes with the
x-axis at t = 0. The quantity ω = √K gives the angular velocity of the point on the
circle. ω is also called the angular frequency of the oscillation. The period T = 2π/ω
is the time required for one circuit (one complete oscillation). The frequency
f = 1/T = ω/2π is the number of circuits (oscillations) per unit time. The frequency
is usually measured in Hertz (Hz), i.e., oscillations per second.

One often sees the solution written differently in the form obtained by expanding
(24) as follows.

    x = A cos(ωt + δ)
      = A cos(ωt) cos δ − A sin(ωt) sin δ
      = C cos(ωt) + D sin(ωt)

where C = A cos δ and D = −A sin δ.

You should note that had you suspected that x = A cos(ωt + δ) is a solution, you
could have checked it by substituting into (23), and avoided most of the work in the
derivation above. You might argue that one would have little reason to suspect such
a solution might work. That is true if one is thinking in purely mathematical terms,
but given that one knows the differential equation arises from a physical problem
in which one observes oscillatory behavior, it is not entirely unreasonable to guess
at a solution like (24). Of course, without going through a more refined analysis,
you could not be absolutely sure that (24) encompasses all possible solutions (i.e.,
that it is sufficiently general), but the fact that it involves two arbitrary constants
would strongly suggest that such is the case.

Remember the moral of the above discussion. If somehow or other a solution to
a problem is suggested to you, you don’t have to bother “deriving” that solution
by a “mathematical” procedure. If the solution works, and if in addition, you have
reason to believe there is only one solution which can work, then the one you have
must be it.

Exercises for 2.1.

1. Find general solutions for the following differential equations.

(a) dy/dx = 4x²y².
(b) dy/dx = x/y.
(c) dy/dx = (1 + x²)/(1 + y²). (Don’t try to express y as a function of x.)
(d) dy/dx = (xy)^(3/2).
2. Solve the initial value problem

    dy/dx = y², where y(0) = 2.
3. The arctic fox population in a certain habitat is given by the equation

    dP/dt = −k√P.

(a) If the initial population (at t = 0) is P0, find the general solution.
(b) If there are 100 foxes initially, and 25 remain after 6 weeks, how long until
extinction? Do you see anything wrong with this approach?
4. The bacterial species E. coli doubles in number every twenty minutes. If a
single colony is present initially, how long will a biologist have to wait before
having one million colonies? One billion colonies?
5. Show that the time T required for e^(−γt) to drop to half its value at t = 0 is T = (ln 2)/γ.

6. Archaeologists recently uncovered a relic purported to date from 0 A.D. Several independent laboratories used carbon dating to analyze the sample. If the fabric contains 6.0 × 10¹¹ atoms of ¹⁴C per gram, and modern fabric of the same type contains 7.0 × 10¹¹ atoms of ¹⁴C per gram, is it authentic?

7. Electricity drains from a capacitor according to the equation dV/dt = −(1/20)V.
Solve the equation in terms of V (0) = V0 . Find how long it will take for V to
be reduced to one hundredth its value at t = 0. Is V ever zero?
8. Newton’s Law of Cooling states that the rate at which an object’s temperature
changes is proportional to the temperature difference between the object and
its surroundings. In other words,
dT/dt = −k(T − T_s),

where T is the temperature of the object, and T_s is the temperature of its
surroundings (a constant).
(a) Solve this equation for T.
(b) A steel ingot at 1000 K is exposed to air at 300 K. If it cools to 800 K in
one hour, what will its temperature be after five hours? How long will it take
until equilibrium (T = T_s) is attained?
9. In the solution of the equation d²x/dt² = −Kx, it was noted that the constant
C₁ = 2c > 0 if K > 0. However, the reasoning did not exclude the case
C₁ = 2c = 0. How might we exclude that case? Hint: What would C₁ = 0
say about x and v if K > 0?
10. A mass on a spring undergoes simple harmonic motion.
(a) If the frequency is 6 Hz, x(0) = 5, and x′(0) = 0, find the amplitude A of
the motion.
(b) If the period is 4 seconds, x(0) = 0, and x′(0) = 5, find the amplitude A
of the motion. If the mass is 10 kg, what is the force applied by the spring at
t = 0?
(c) If the amplitude is 10, the mass starts at the origin, and x′(0) = 1, find
the frequency and period of motion.
(d) Find the phases for (a), (b), and (c).
11. A pendulum at the end of a (massless) rod of length L satisfies the differential
equation d²θ/dt² = −(g/L) sin θ. Show that

(dθ/dt)² − (2g/L) cos θ = C

where C is a constant. Solve for dθ/dt and try to solve the resulting first order
differential equation explicitly. If you can’t integrate the resulting expression,
don’t be surprised.
Chapter 3

Differential Calculus of
Functions of n Variables

We want to develop the calculus necessary to discuss functions of many variables.


We shall start with functions f (x, y) of two independent variables and functions
f (x, y, z) of three independent variables. However, in general, we need to consider
functions f (x1 , x2 , . . . , xn ) of any number of independent variables. We shall use
the notation Rn as before to stand for the set of all n-tuples (x1 , x2 , . . . , xn ) with
real entries xi . For n = 2, we shall identify R2 with the plane and, for n = 3, we
shall identify R3 with space. We shall use the old fashioned term locus to denote
the set of all points satisfying some equation or condition.

3.1 Graphing in Rn

We shall encounter equations involving two, three, or more variables. As you know,
an equation of the form
f (x, y) = C
may be viewed as defining a curve in the plane. For example, ax + by = c has as its
locus a line, while x² + y² = R² has as its locus a circle of radius R centered at the
origin. Similarly, an equation involving three variables

f (x, y, z) = C

may be thought of as defining a surface in space. Thus, we saw previously that the
locus in R3 of a linear equation

ax + by + cz = d


(where not all a, b, and c are zero) is a plane. If we use more complicated equations,
we get more complicated surfaces.

Example 23 The equation


x² + y² + z² = R²

may be rewritten |r| = √(x² + y² + z²) = R, so it asserts that the point with position
vector r is at distance R from the origin. Hence, the locus of all such points is a
sphere of radius R centered at the origin.

[Figures: a sphere centered at (0, 0, 0) and a sphere centered at (−1, 2, 0), each shown with x, y, z axes.]

Example 24 Consider the locus of the equation

x² + 2x + y² − 4y + z² = 20.

This is also a sphere, but one not centered at the origin. To see this, complete the
squares for the terms involving x and y.

x² + 2x + 1 + y² − 4y + 4 + z² = 20 + 1 + 4 = 25
(x + 1)² + (y − 2)² + z² = 5².

This asserts that the point with position vector r = ⟨x, y, z⟩ is 5 units from the
point (−1, 2, 0), i.e., it lies on a sphere of radius 5 centered at (−1, 2, 0).

Example 25 Consider the locus of the equation z = x² + y² (which could also be
written x² + y² − z = 0). To see what this looks like, we consider its intersection
with various planes. Its intersection with the y, z-plane is obtained by setting x = 0
to get z = y². This is a parabola in the y, z-plane. Similarly, its intersection with
the x, z-plane is the parabola given by z = x². To fill in the picture, consider
intersections with planes parallel to the x, y-plane. Any such plane has equation
z = h, so the intersection has equation x² + y² = h = (√h)², which you should

recognize as a circle of radius √h, at least if h > 0. Note that the circle is centered
at (0, 0, h) on the z-axis since it lies in the plane z = h. If z = h = 0, the circle
reduces to a single point, and for z = h < 0, there is no locus. The surface is “bowl”
shaped. It is called a circular paraboloid.

Graphing a surface in R3 by sketching its traces on various planes is a useful strat-


egy. In order to be good at it, you need to know the basics of plane analytic
geometry so you can recognize the resulting curves. In particular, you should be fa-
miliar with the elementary facts concerning conic sections, i.e., ellipses, hyperbolas,
and parabolas. Edwards and Penney, 3rd Edition, Chapter 10 is a good reference
for this material.
Example 26 Consider the locus in space of x²/4 + y²/9 = 1. Its intersection with a
plane z = h parallel to the x, y-plane is an ellipse centered on the z-axis and with
semi-minor and semi-major axes 2 and 3. The surface is a cylinder perpendicular
to the x, y-plane with elliptical cross sections. Note that the locus in space is not
just the ellipse in the x, y-plane with the same equation.

[Figures: the elliptical cylinder x²/4 + y²/9 = 1 and the surface z = 1/(x² + y²).]

Example 27 Consider the locus in space of the equation z = 1/(x² + y²). Its
intersection with the plane z = h (for h > 0) is the circle with equation x² + y² =
1/h = (√(1/h))². The surface does not intersect the x, y-plane itself (z = 0) nor any
plane below the x, y-plane. Its intersection with the x, z-plane (y = 0) is the curve
z = 1/x², which is asymptotic to the x-axis and to the positive z-axis. Similarly
for its intersection with the y, z-plane. The surface flattens out and approaches the
x, y-plane as r = √(x² + y²) → ∞. It approaches the positive z-axis as r → 0.

Example 28 Consider the locus in space of the equation yz = 1. Its intersection


with a plane parallel to the y, z-plane (x = d) is a hyperbola asymptotic to the
y and z axes. The surface is perpendicular to the y, z-plane. Such a surface is

also called a cylinder although it doesn’t close upon itself as the elliptical cylinder
considered above.

Example 29 Consider the locus of the equation x2 + z 2 = y 2 − 1. For each plane


parallel to the x, z-plane (y = c), the intersection is a circle x² + z² = c² − 1 =
(√(c² − 1))² centered on the y-axis, at least if c² > 1. For y = c = ±1, the locus is a
point, and for −1 < y = c < 1, the locus is empty. In addition, the intersection of
the surface with the x, y-plane (z = 0) is the hyperbola with equation x2 − y 2 = −1,
and similarly for its intersection with the y, z-plane. The surface comes in two
pieces which open up as “bowls” centered on the positive and negative y-axes. The
surface is called a hyperboloid of 2 sheets.

[Figures: the hyperboloid of two sheets x² + z² = y² − 1 and the cylinder yz = 1.]

Graphs of Functions For a scalar function f of one independent variable, the


graph of the function is the set of all points in R² of the form (x, f (x)) for x in
the domain of the function. (The domain of a function is the set of values of the
independent variable for which the function is defined.) In other words, it is the
locus of the equation y = f (x). It is generally a curve in the plane.

We can define a similar notion for a scalar function f of two independent variables.
The graph is the set of points in R³ of the form (x, y, f (x, y)) for (x, y) a point in the
domain of the function. In other words, it is the locus of the equation z = f (x, y),
and it is generally a surface in space. The graph of a function is often useful in
understanding the function.

We have already encountered several examples of graphs of functions. For example,


the locus of z = x² + y² is the graph of the function f defined by f (x, y) = x² + y².
Similarly, the locus of z = 1/(x² + y²) is the graph of the function f defined by
f (x, y) = 1/(x² + y²) for (x, y) ≠ (0, 0). Note that in the first case there need be no
restriction on the domain of the function, but in the second case (0, 0) was omitted.

In some of the other examples, the locus of the equation cannot be considered the
graph of a function. For example, the equation x² + y² + z² = R² cannot be
solved uniquely for z in terms of (x, y). Indeed, we have z = ±√(R² − x² − y²),
so that two possible functions suggest themselves. z = f₁(x, y) = √(R² − x² − y²)
defines a function with graph the top hemisphere of the sphere, while z = f₂(x, y) =
−√(R² − x² − y²) yields the lower hemisphere. (Note that for either of the functions
the relevant domain is the set of points on or inside the circle x² + y² = R². For
points outside that circle, the expression inside the square root is negative, so,
since we are only talking about functions assuming real values, such points must be
excluded.)

Example 30 Let f (x, y) = xy for all (x, y) in R². The graph is the locus of the
equation z = xy. We can sketch it by considering traces on various planes. Its
intersection with a plane parallel to the x, y-plane (z = constant) is a hyperbola
asymptotic to lines parallel to the x and y axes. For z > 0, the hyperbola is in
the first and third quadrants of the plane, but for z < 0 it is in the second and
fourth quadrants. For z = 0, the equation is xy = 0 with locus consisting of the
x-axis (y = 0) and the y-axis (x = 0). Thus, the graph intersects the x, y-plane
in two straight lines. The surface is generally shaped like an “infinite saddle”. It is
called a hyperbolic paraboloid. It is clear where the term “hyperbolic” comes from.
Can you see any parabolas? (Hint: Try planes perpendicular to the x, y-plane with
equations of the form y = mx.)

[Figure: the saddle surface z = xy.]

Example 31 Let f (x, y) = x/y for y ≠ 0. Thus, the domain of this function consists
of all points (x, y) not on the x-axis (y = 0). The trace in the plane y = c, c ≠ 0,
is the line z = (1/c)x with slope 1/c. Similarly, the trace in the plane z = c, c ≠ 0,
is the line y = (1/c)x. Finally, the trace in the plane x = c is the hyperbola
z = c/x. Even with this information you will have some trouble visualizing the
graph. However, the equation z = x/y can be rewritten yz = x. By permuting the
variables, you should see that the locus of yz = x is similar to the saddle shaped
surface we just described, but oriented differently in space. However, the saddle is
not quite the graph of the function since it contains the z-axis (y = x = 0) but the

graph of the function does not. In general, the graph of a function, since it consists
of points of the form (x, y, f (x, y)), cannot contain points with the same values for
x and y but different values for z. In other words, any line parallel to the z-axis
can intersect such a graph at most once.

Sketching graphs of functions, or more generally loci of equations in x, y, and z,


is not easy. One approach drawn from the study of topography is to interpret
the equation z = f (x, y) as giving the elevation of the surface, viewed as a hilly
terrain, above a reference plane. (Negative elevation f (x, y) is interpreted to mean
that the surface dips below the reference plane.) For each possible elevation c, the
intersection of the plane z = c with the graph yields a curve f (x, y) = c. This curve
is called a level curve, and we draw a 2-dimensional map of the graph by sketching
the level curves and labeling each by the appropriate elevation c. Of course, there
are generally infinitely many level curves since there are infinitely many possible
values of z, but we select some subset to help us understand the topography of the
surface.

[Figure: contour lines on the surface z = f (x, y), and the corresponding level curves projected into the x, y-plane.]

Example 32 The level curves of the surface z = xy have equations xy = c for


various c. They form a family of hyperbolas, each with two branches. For c > 0,
these hyperbolas fill the first and third quadrants, and for c < 0 they fill the second
and fourth quadrants. For c = 0 the x and y axes together constitute the level
“curve”. See the diagram.

You can see that the region around the origin (0, 0) is like a “mountain pass” with
the topography rising in the first and third quadrants and dropping off in the second
and fourth quadrants. In general a point where the graph behaves this way is called

a saddle point. Saddle points indicate the added complexity which can arise when
one goes from functions of one variable to functions of two or more variables. At
such points, the function can be considered as having a maximum or a minimum
depending on where you look.
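As mentioned earlier, the available computer resources can help with such pictures. The following is a minimal sketch of how one might draw the level curves of f (x, y) = xy using Python with the numpy and matplotlib libraries; the grid bounds and the selected values of c are arbitrary choices:

    import numpy as np
    import matplotlib.pyplot as plt

    # Sample f(x, y) = xy on a grid around the origin.
    x = np.linspace(-2, 2, 200)
    y = np.linspace(-2, 2, 200)
    X, Y = np.meshgrid(x, y)
    Z = X * Y

    # Draw and label the level curves xy = c for a few chosen values of c.
    levels = [-2, -1, -0.5, 0.5, 1, 2]
    cs = plt.contour(X, Y, Z, levels=levels)
    plt.clabel(cs, inline=True, fontsize=8)
    plt.gca().set_aspect('equal')
    plt.title('Level curves of f(x, y) = xy')
    plt.show()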

Quadric Surfaces One important class of surfaces consists of those defined by quadratic
equations. These are analogues in three dimensions of conics in two dimensions.
They are called quadric surfaces. We describe here some of the possibilities. You
can verify the pictures by using the methods described above.

Consider first equations of the form


±x²/a² ± y²/b² ± z²/c² = 1

If all the signs are positive, the surface is called an ellipsoid. Planes perpendicular to
one of the coordinate axes intersect it in ellipses (if they intersect at all). However,
at the extremes these ellipses degenerate into the points (±a, 0, 0), (0, ±b, 0), and
(0, 0, ±c).

[Figures: an ellipsoid, a hyperboloid of one sheet, and a hyperboloid of two sheets.]

If exactly one of the signs is negative, the surface is called a hyperboloid of one
sheet. It is centered on one axis (the one associated to the negative coefficient in the
equation) and it opens up in both positive and negative directions along that axis.
Its intersections with planes perpendicular to that axis are ellipses. Its intersections
with planes perpendicular to the other axes are hyperbolas.

If exactly two of the signs are negative, the surface is called a hyperboloid of two
sheets. It is centered on one axis (associated to the positive coefficient). For exam-
ple, suppose the equation is
−x²/a² + y²/b² − z²/c² = 1.

For y < −b or y > b, the graph intersects a plane perpendicular to the y-axis in an
ellipse. For y = ±b, the intersection is the point (0, ±b, 0). (These two points are
called vertices of the surface.) For −b < y < b, there is no intersection with a plane
perpendicular to the y-axis.

The above surfaces are called central quadrics. Note that for the hyperboloids,
with equations in standard form as above, the number of sheets is the same as the
number of minus signs.

Consider next equations of the form

z = ±x²/a² ± y²/b²

(or similar equations obtained by permuting x, y and z.)

If both signs are the same, the surface is called an elliptic paraboloid. If both signs
are positive, it is centered on the positive z-axis and its intersections with planes
perpendicular to the positive z-axis are a family of similar ellipses which increase
in size as z increases. If both signs are negative, the situation is similar, but the
surface lies below the x, y plane.

If the signs are different, the surface is called a hyperbolic paraboloid. Its intersections
with planes perpendicular to the z-axis are hyperbolas asymptotic to the lines in
those planes parallel to the lines x/a = ±y/b. Its intersection with the x, y-plane is
just those two lines. The surface has a saddle point at the origin.

The locus of the equation z = cxy, c ≠ 0, is also a hyperbolic paraboloid, but rotated
so it intersects the x, y-plane in the x and y axes.

[Figures: an elliptic paraboloid and a hyperbolic paraboloid.]

Finally, we should note that many so-called “degenerate quadrics” are loci of quadratic
equations. For example, consider

x²/a² + y²/b² − z²/c² = 0

which may be solved to obtain



z = ±c√(x²/a² + y²/b²).

The locus is a double cone with elliptical cross sections and vertex at the origin.

Generalizations In general, we will want to study functions of any number of


independent variables. For example, we may define the graph of a scalar valued
function f of three independent variables to be the set of all points in R4 of the
form (x, y, z, f (x, y, z)). Such an object should be considered a three dimensional
subset of R4 , and it is certainly not easy to visualize. It is more useful to con-
sider the analogues of level curves for such functions. Namely, for each possible
value c attained by the function, we may consider the locus in R3 of the equation
f (x, y, z) = c. This is generally a surface called a level surface for the function.

Examples For f (x, y, z) = x² + y² + z², the level surfaces are concentric spheres


centered at the origin if c > 0. For c = 0 the level ‘surface’ is not really a surface
at all; it just consists of the point at the origin. (What if c < 0?)

For f (x, y, z) = x² + y² − z², the level surfaces are either hyperboloids of one sheet
if c > 0 or hyperboloids of two sheets if c < 0. (What if c = 0?)

For functions of four or more variables, geometric interpretations are even harder
to come by. If f (x1 , x2 , . . . , xn ) denotes a function of n variables, the locus in Rn
of the equation f (x1 , x2 , . . . , xn ) = c is called a level set, but one doesn’t ordinarily
try to visualize it geometrically.

Instead of talking about many independent variables, it is useful to think instead of


a single independent variable which is a vector, i.e., an element of Rn for some n. In
the case n = 2, 3, we usually write r = ⟨x, y⟩ or r = ⟨x, y, z⟩, so f (x, y) or f (x, y, z)
would be written simply f (r). If n > 3, then one often denotes the variables
x1 , x2 , . . . , xn and denotes the vector (i.e., element of Rn ) by x = (x1 , x2 , . . . , xn ).
Then f (x1 , x2 , . . . , xn ) becomes simply f (x). The case of a function of a single real
variable can be subsumed in this formalism by allowing the case n = 1. That is, we
consider a scalar x to be just a vector of dimension 1, i.e., an element of R¹.

When we talked about kinematics, we considered vector valued functions r(t) of a


single independent variable. Thus we see that it makes sense to consider in general
functions of a vector variable which can also assume vector values. We indicate this
by the notation f : Rn → Rm . That shall mean that the domain of the function
f is a subset of Rn while the set of values is a subset of Rm . Thus, n = m = 1
would yield a scalar function of one variable, n = 2, m = 1 a scalar function of two
variables, and n = 1, m = 3 a vector valued function of one scalar variable. We
shall have occasion to consider several other special cases in detail.

There is one slightly non-standard aspect to the above notation. In ordinary usage

in mathematics, “f : Rn → Rm ” means that Rn is the entire domain of the function


f , whereas we are taking it to mean that the domain is some subset. We do this
mostly to save writing since usually the domain will be almost all of Rn or at least
some significant chunk of it. What we want to make clear by the notation is the
dimensionality of both the independent and dependent variables.

Exercises for 3.1.

You are encouraged to make use of the available computer software (e.g., Maple,
Mathematica, etc.) to help you picture the graphs in the following problems.

1. State the largest possible domain for the function


(a) f (x, y) = e^(x² − y²)
(b) f (x, y) = ln(y² − x² − 2)
(c) f (x, y) = (x² − y²)/(x − y)
(d) f (x, y, z) = 1/(xyz)
(e) f (x, y, z) = 1/√(z² − x² − y²)
2. Describe the graph of the function described by
(a) f (x, y) = 5
(b) f (x, y) = 2x − y
(c) f (x, y) = 1 − x² − y²
(d) f (x, y) = √(4 − x² + y²)
(e) f (x, y) = √(24 − 4x² − 6y²)

3. Sketch selected level curves for the functions given by


(a) f (x, y) = x + y
(b) f (x, y) = x² + 9y²
(c) f (x, y) = x − y²
(d) f (x, y) = x − y³
(e) f (x, y) = x² + y² + 4x + 2y + 9

4. Describe selected level surfaces for the functions given by


(a) f (x, y, z) = x² + y² − z
(b) f (x, y, z) = x² + y² + z² + 2x − 2y + 4z
(c) f (x, y, z) = z² − x² − y²

5. Describe the quadric surfaces which are loci in R3 of the following equations.
(a) x² + y² = 16
(b) z² = 49x² + y²
(c) z = 25 − x² − y²
(d) x = 4y² − z²
(e) 4x² + y² + 9z² = 36
(f) x² + y² − 4z² = 4
(g) 9x² + 4y² − z² = 36
(h) 9x² − 4y² − z² = 36

6. Describe the traces of the following functions in the given planes


(a) z = xy, in horizontal planes z = c
(b) z = x² + 9y² in vertical planes x = c or y = c
(c) z = x² + 9y² in horizontal planes z = c

7. Describe the intersection of the cone x² + y² = z² with the plane z = x + 1.

8. Let f : R2 → R2 .
(a) Try to invent a definition of ‘graph’ for such a function. For what n would
it be a subset of Rn ?
(b) Try to invent a definition of ‘level set’ for such a function. For what n
would it be a subset of Rn ?

3.2 Limits and Continuity

Most users of mathematics don’t worry about things that might go wrong with the
functions they use to represent physical quantities. They tend to assume that func-
tions are differentiable when derivatives are called for (except possibly for a finite
set of isolated points), and they assume all functions which need to be integrated
are continuous so the integrals will exist. For much of the period during which
Calculus was developed (during the 17th and 18th centuries), mathematicians also
did not bother themselves with such matters. Unfortunately, during the 19th cen-
tury, mathematicians discovered that general functions could behave in unexpected
and subtle ways, so they began to devote much more time to careful formulation of
definitions and careful proofs in analysis. This is an aspect of mathematics which is
covered in courses in real analysis, so we won’t devote much time to it in this course.
(You may have noticed that we didn’t worry about the existence of derivatives in
our discussion of velocity and acceleration.) However, for functions of several vari-
ables, lack of rigor can be more troublesome than in the one variable case, so we

briefly devote some attention to such questions. In this section, we shall discuss the
concepts of limit and continuity for functions f : R2 → R. The big step, it turns
out, is going from one independent variable to two. Once you understand that,
going to three or more independent variables introduces few additional difficulties.

Let r0 = hx0 , y0 i be (the position vector of) a point in the domain of the function
f . We want to define the concept to be expressed symbolically

lim_{r→r₀} f (r) = L or lim_{(x,y)→(x₀,y₀)} f (x, y) = L.

We start with two examples which illustrate the concept and some differences from
the single variable case.
Example 33 Let f (x, y) = x² + 2y², and consider the nature of the graph of f
near the point (1, 2). As we saw in the previous section, the graph is an elliptic
paraboloid, the locus of z = x² + 2y². In particular, the surface is quite smooth,
and if (x, y) is a point in the domain close to (1, 2), then f (x, y) will be very close
to the value of the function there, f (1, 2) = 1² + 2(2²) = 9. Thus, it makes sense to
assert that
lim_{(x,y)→(1,2)} (x² + 2y²) = 9.

In Example 33, the limit was determined simply by evaluating the function at the
desired point. You may remember that in the single variable case, you cannot always
do that. For example, putting x = 0 in sin x/x yields the meaningless expression
0/0, but lim_{x→0} sin x/x is known to be 1. Usually, it requires some ingenuity to
find such examples in the single variable case, but the next example shows that
fairly simple formulas can lead to unexpected difficulties for functions of two or
more variables.

Example 34 Let

f (x, y) = (x² − y²)/(x² + y²) for (x, y) ≠ (0, 0).

What does the graph of this function look like in the vicinity of the point (0, 0)?
(Since, (0, 0) is not in the domain of the function, it does not make sense to talk
about f (0, 0), but we can still seek a ‘limit’.) The easiest way to answer this question
is to switch to polar coordinates. Using x = r cos θ, y = r sin θ, we find

f (r) = f (x, y) = (r² cos² θ − r² sin² θ)/(r² cos² θ + r² sin² θ) = cos² θ − sin² θ = cos 2θ.
Thus, f (r) = f (x, y) is independent of the polar coordinate r and depends only
on θ. As r = |r| → 0 with θ fixed, f (r) is constant, and equal to cos 2θ, so, if it
‘approaches’ a limit, that limit would have to be cos 2θ. Unfortunately, cos 2θ varies
between −1 and 1, so it does not make sense to say f (r) has a limit as r → 0. You
can get some idea of what the graph looks like by studying the level curves which

are pictured in the diagram. For each value of θ, the function is constant, so the
level curves consist of rays emanating from the origin, as indicated. On any such
ray, the graph is at some constant height z, with z taking on every value between
−1 and +1.
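You can also see the trouble numerically. The following minimal Python sketch approaches the origin along rays at several fixed angles θ; along each ray the sampled values settle on cos 2θ, which differs from ray to ray, so no single limit can exist (the particular angles and radii are arbitrary choices):

    import numpy as np

    def f(x, y):
        return (x**2 - y**2) / (x**2 + y**2)

    # Approach (0, 0) along rays at several fixed angles theta.
    for theta in [0.0, np.pi/6, np.pi/4, np.pi/2]:
        values = [f(r*np.cos(theta), r*np.sin(theta)) for r in [0.1, 0.01, 0.001]]
        print(theta, values, "cos(2 theta) =", np.cos(2*theta))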

In general, the statement


lim_{r→r₀} f (r) = L

will be taken to mean that f (r) is close to L whenever r is close to r₀. As in the
case of functions of a single scalar variable, this can be made completely precise by
the following ‘ε, δ’ definition.

For each number ε > 0, there is a number δ > 0 such that

0 < |r − r₀| < δ implies |f (r) − L| < ε.

In this statement, |r − r₀| < δ asserts that the distance from r to r₀ is less than
δ. Since δ is thought of as small, the inequality makes precise the meaning of ‘r is
close to r₀’. Similarly, |f (r) − L| < ε captures the meaning of ‘f (r) is close to L’.
Note that we never consider the case r = r₀, so the value of f (r₀) is not relevant in
checking the limit as r → r₀. (It is not even necessary that f (r) be well defined at
r = r₀.)

[Figure: the ε, δ definition — points r within distance δ of r₀ have f (r) within ε of L.]
Limits for functions of several variables behave formally much the same as limits
for functions of one variable. Thus, you may calculate the limit of a sum by taking
the sum of the limits, and similarly for products and quotients (except that for
quotients the limit of the denominator should not be zero). The understanding you
gained of these matters in the single variable case should be an adequate guide to
what to expect for several variables. If you never really understood all this before,
we won’t enlighten you much here. You will have to wait for a course in real analysis
for real understanding.

Continuity In Example 33, the limit was determined simply by evaluating the
function at the point. This is certainly not always possible because the value of
the function may be irrelevant or there may be no meaningful way to attach a
value. Functions for which it is always possible to find the limit this way are called
continuous. (This is the same notion as for functions of a single scalar variable).
More precisely, we say that f is continuous at a point r0 if the point is in its domain
(i.e., f (r0 ) is defined) and
lim_{r→r₀} f (r) = f (r₀).

Points at which this fails are called discontinuities or sometimes singularities. (The
latter term is also sometimes reserved for less serious kinds of mathematical pathol-
ogy.) It sometimes happens, that a function f has a well defined limit L at a point
r0 which does not happen to be in the domain of the function, i.e., f (r0 ) is not
defined. (In the single variable case, sin x/x at x = 0 is a good example.) Then we

can extend the domain of the function to include the point r0 by defining f (r0 ) = L.
Thus the original function had a discontinuity, but it can be eliminated simply by
extending the definition of the function. In this case, the discontinuity is called
removable. As Example 34 shows, there are functions with discontinuities which
cannot be defined away no matter what you try.

A function without discontinuities is called continuous. Continuous functions have


graphs which look reasonably smooth. They don’t have big holes or sudden jumps,
but as we shall see later, they can still look pretty bizarre. Usually, just knowing
that a function is continuous won’t be enough to make it a good candidate to
represent a physical quantity. We shall also want to be able to take derivatives and
do the usual things one does in differential calculus, but as you might expect, this
is somewhat more involved than it is in the single variable case.

Exercises for 3.2.

1. Use the examples of limits and the definition of continuity in this section to
find the following, if they exist.
(a) lim_{(x,y)→(−1,1)} e^(−xy)
(b) lim_{(x,y)→(0,0)} sin√(1 − x² − y²)
(c) lim_{(x,y,z)→(0,0,0)} e^(−1/(x² + y² + z²))
(d) lim_{(x,y,z)→(0,0,1)} ln( xyz / √(1 − x² − y² − z²) )
2. Change from rectangular to polar coordinates to determine the following limits
when they exist.

(a) lim_{(x,y)→(1,0)} (x² + y²)√(x² + y²)
(b) lim_{(x,y)→(0,0)} xy/√(x² + y²)
(c) lim_{(x,y)→(0,0)} (x² − y²)/(x² + y²)

3.3 Partial Derivatives

Given a function f of two or more variables, its partial derivative with respect to
one of the independent variables is what is obtained by differentiating with respect
to that variable while keeping all other variables constant.

Example In thermodynamics, the function defined by

p = f (v, T ) = kT/v

expresses the pressure p in terms of the volume v and the temperature T in the case
of an ‘ideal gas’. Here v and T are considered to be independent variables, and k is
a constant. (k = nR where n is the number of moles of the gas and R is a physical
constant.) The partial derivative with respect to v (keeping T constant) is

−kT/v²

while the partial derivative with respect to T (keeping v constant) is

k/v.

Notation One uses a variety of notations for partial derivatives. For example, for
a function f of two variables,
∂f/∂x (x, y) and fx (x, y)
are used to denote the partial derivative with respect to x (y kept constant).

Example

f (x, y) = 2x + sin(xy)
fx (x, y) = 2 + cos(xy) · y = 2 + y cos(xy)
fy (x, y) = 0 + cos(xy) · x = x cos(xy).
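A computer algebra system will reproduce such calculations. Here is a minimal sketch using Python with the sympy library (the symbol names are our own choices):

    import sympy as sp

    x, y = sp.symbols('x y')
    f = 2*x + sp.sin(x*y)

    print(sp.diff(f, x))   # 2 + y*cos(x*y)
    print(sp.diff(f, y))   # x*cos(x*y)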

In some circumstances, the variable names may change frequently in the discussion,
so the partial derivative is indicated by a numerical subscript giving the position
of the relevant variable. Thus,
f2 (x, y, z, t)
denotes the partial derivative of f (x, y, z, t) with respect to the second variable, in
this case y. In thermodynamics, one may see things like
(∂p/∂v)_T

which is interpreted as follows. It is supposed there is a functional dependence


p = p(v, T ) and the notation represents the partial derivative of this function with
respect to v with T kept constant.

It should be emphasized that just as in calculus of one variable, it is only functions


which can have derivatives (partial or not). It does not make sense to ask for the

rate of change of one variable with respect to another without assuming there is
a specific functional relation between the two. In the many variable case, since
there are other variables which may be interrelated in complex ways, it is especially
important to get this distinction straight.

If f (x, y) describes a function of two variables, its partial derivatives could in fact
be defined directly by

fx (x, y) = lim_{∆x→0} [f (x + ∆x, y) − f (x, y)]/∆x
fy (x, y) = lim_{∆y→0} [f (x, y + ∆y) − f (x, y)]/∆y.
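These limit formulas also suggest how to estimate partial derivatives numerically when no formula is at hand: hold one variable constant and form a difference quotient with a small increment. A minimal sketch in Python (the step size h is an arbitrary choice, and too small a value invites roundoff trouble):

    def fx(f, x, y, h=1e-6):
        # difference quotient in x, with y held constant
        return (f(x + h, y) - f(x, y)) / h

    def fy(f, x, y, h=1e-6):
        # difference quotient in y, with x held constant
        return (f(x, y + h) - f(x, y)) / h

    f = lambda x, y: x**2 - y**2
    print(fx(f, 1.0, 1.0), fy(f, 1.0, 1.0))   # approximately 2 and -2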

Geometric Interpretation for f : R2 → R Let f be a function with domain a


subset of R2 and assuming scalar values. Fix a point (x0 , y0 ) in the domain of f .
We shall give geometric interpretations of fx (x0 , y0 ) and fy (x0 , y0 ) in terms of the
graph of the function f . First consider the function of x given by f (x, y0 ) (i.e., y
is kept constant, and x varies in the vicinity of x = x0 ). The graph of z = f (x, y0 )
may be viewed as the curve in which the plane y = y0 intersects the graph of f . It is
called the sectional curve in the x-direction. The partial derivative fx (x0 , y0 ) is the
slope of this curve for x = x0 . In other words, it is the slope of the tangent line to
the curve at the point (x0 , y0 , f (x0 , y0 )) on the graph. Similarly, fixing x = x0 , and
letting y vary leads to the sectional curve in the y-direction. (It is the intersection of
the plane x = x0 with the graph of the function.) Its slope for y = y0 is the partial
derivative fy (x0 , y0 ). Study the diagram to see how the two sectional curves and
their tangents at the common point (x0 , y0 , f (x0 , y0 )) are related to one another.
Note in particular that they lie in two mutually perpendicular planes.

[Figure: the graph of f near (x₀, y₀), showing the sectional curves cut by the planes x = x₀ and y = y₀ through the point (x₀, y₀, f (x₀, y₀)).]

The two tangent lines to the sectional curves determine a plane through the point
(x0 , y0 , f (x0 , y0 )). It is reasonable to think of it as being tangent to the surface at
that point. Put z0 = f (x0 , y0 ). From the above discussion, it is clear that the first
sectional tangent (in the x-direction) may be characterized by the equations
z − z0 = fx (x0 , y0 )(x − x0 ), y = y0 .
This characterizes it as the intersection of two planes. Similarly, the other sectional
tangent (in the y-direction) may be characterized by the equations
z − z0 = fy (x0 , y0 )(y − y0 ), x = x0 .
However, the plane characterized by the equation
z − z0 = fx (x0 , y0 )(x − x0 ) + fy (x0 , y0 )(y − y0 ) (25)
contains both these lines, the first by intersecting with y = y0 and the second by
intersection with x = x0 . It follows that (25) is the equation of the desired tangent
plane.

Example Let f (x, y) = x² − y². We find the tangent plane at (1, 1, 0) (x₀ = 1, y₀ = 1,
z₀ = 1² − 1² = 0). We have
fx (x, y) = 2x = 2 at x = 1, y = 1,
fy (x, y) = −2y = −2 at x = 1, y = 1.
Hence, the tangent plane has equation
z − 0 = 2(x − 1) + (−2)(y − 1)

or

2x − 2y − z = 0

You should try to sketch the surface and the tangent plane. You may find the
picture somewhat surprising.

Exercises for 3.3.

1. Compute all first order partial derivatives of the following functions


(a) f (x, y) = x⁵ − 4x⁴ + 2x + 10
(b) f (x, y) = x cos y + y cos x
(c) f (x, y) = ln(x² − y²)
(d) f (x, y, z) = x²y³z⁴
(e) f (u, v, w) = ue^w + ve^v + we^u

2. Find an equation for the tangent plane to the graph of the function at the
given point:
(a) f (x, y) = x² − y²; P (5, 4, 9)
(b) f (x, y) = cos(πxy/4); P (4, 2, 1)
(c) f (x, y) = xy; P (3, −2, −6)

3. The ideal gas law may be written pv = nRT . Show that


(∂p/∂v)_T (∂v/∂T)_p (∂T/∂p)_v = −1

Hint: To find ∂p/∂v, for example, you could solve for p in terms of v and T
as above in the text. Alternatively, you could assume that p is a function of
v and T , and differentiate the equation pv = nRT with respect to v treating
v and T as independent variables.

4. Let (x₀, y₀, z₀) be a point on the upper cone defined by the equation z² = x² + y².
(a) Find an equation for the tangent plane at (x0 , y0 , z0 ). Hint: You may
solve for z in terms of x and y and determine the partial derivatives ∂z/∂x
and ∂z/∂y at the point (x0 , y0 , z0 ). Alternatively, you may assume z is a
function of x and y and differentiate the equation z 2 = x2 + y 2 with respect
to x and y treating them as independent variables.
(b) Show that the plane determined in part (a) passes through the origin. Is
this reasonable on geometric grounds?

5. Newton’s Method is an iterative process used to solve systems of equations of


the form f (x) = 0. There is a generalization for a system of two equations in
two unknowns: f (x, y) = 0, g(x, y) = 0. Each equation has as locus a curve in
the x, y-plane. Consider a guess (x0 , y0 ) for the point of intersection of these
two curves. The point (x₀, y₀, f (x₀, y₀)) is on the surface z = f (x, y) and
the point (x₀, y₀, g(x₀, y₀)) is on the second surface z = g(x, y). The tangent
planes to these surfaces at these two points and the xy-plane intersect in
a point (x1 , y1 ) which—we hope—is closer to the desired intersection than
(x0 , y0 ).
(a) Verify that the following equations give the coordinates of (x₁, y₁).

x₁ = x₀ − [f (x₀, y₀)gy (x₀, y₀) − g(x₀, y₀)fy (x₀, y₀)] / [fx (x₀, y₀)gy (x₀, y₀) − gx (x₀, y₀)fy (x₀, y₀)]
y₁ = y₀ − [g(x₀, y₀)fx (x₀, y₀) − f (x₀, y₀)gx (x₀, y₀)] / [fx (x₀, y₀)gy (x₀, y₀) − gx (x₀, y₀)fy (x₀, y₀)].

(b) Are there circumstances in which no such point will exist?


(c) The above process may be iterated to find a solution of a pair of equa-
tions. Choose some arbitrary guess, use the above equations to obtain another
guess, do the same for the new guess to obtain a third guess, etc. Continue
this process until you have sufficient accuracy for your purposes.
Apply this method to the system x3 + y 2 + 2xy 2 = 0, x2 − y 2 + 4 = 0. Try 3
or 4 iterations.
(d) Clearly, you would be better off doing part (c) on a computer. Write a
computer program to run the algorithm. Run the program for many more
iterations.
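As a starting point for part (d), here is a minimal sketch in Python of the iteration from part (a); the starting guess, the number of steps, and the hand-computed partial derivatives are our own choices, and convergence is hoped for rather than guaranteed:

    def newton2(f, g, fx, fy, gx, gy, x, y, steps=10):
        # Each step moves (x, y) using the formulas of part (a).
        for _ in range(steps):
            d = fx(x, y)*gy(x, y) - gx(x, y)*fy(x, y)   # common denominator
            x, y = (x - (f(x, y)*gy(x, y) - g(x, y)*fy(x, y)) / d,
                    y - (g(x, y)*fx(x, y) - f(x, y)*gx(x, y)) / d)
            print(x, y)
        return x, y

    # The system of part (c): f = x^3 + y^2 + 2xy^2 = 0, g = x^2 - y^2 + 4 = 0,
    # with partial derivatives computed by hand.
    f  = lambda x, y: x**3 + y**2 + 2*x*y**2
    g  = lambda x, y: x**2 - y**2 + 4
    fx = lambda x, y: 3*x**2 + 2*y**2
    fy = lambda x, y: 2*y + 4*x*y
    gx = lambda x, y: 2*x
    gy = lambda x, y: -2*y

    newton2(f, g, fx, fy, gx, gy, x=-1.0, y=2.0)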

3.4 First Order Approximation and the Gradient

Most functions cannot be calculated directly, and so one uses approximations which
are accurate enough for one’s needs. For example, the statement

e = 2.71828

is presumably accurate to 5 decimal places, but it is certainly not an exact equality.


(In fact, e cannot be given exactly by any finite decimal. Do you know why?) You
may have learned in your previous calculus course that ex may be represented in
general by an infinite series

e^x = 1 + x + x²/2! + x³/3! + x⁴/4! + ...

In calculations, you use as many terms as necessary to get the accuracy you need.
In general, many interesting functions can be represented by power series, that is
we have
f (x) = a₀ + a₁x + a₂x² + a₃x³ + · · · + a_n x^n + · · ·

(Do you know what a_n is and how it is related to the function f ? Refer to the
chapter on Taylor series in your one variable Calculus book.) The simplest kind
of approximation is the linear approximation where one ignores all terms of degree
higher than one. The linear approximation is intimately tied up with the notion of
derivative. We review what you learned in one variable calculus about derivatives,
approximation, and tangent lines.

Let f : R → R denote a function of a single variable. The relation between the


value of the function y = f (x) at one point and its value y + ∆y = f (x + ∆x) at a
nearby point is given approximately by

f (x + ∆x) ≈ f (x) + f′(x)∆x.

The quantity f′(x)∆x may be interpreted geometrically as the change in y along


the tangent line. (See the diagram.) It is also sometimes expressed in differential
notation
dy = f′(x) dx
where for consistency we put ∆x = dx. Here, dy is only an approximation for the
true change ∆y, but one often acts as though they were equal. (Differentials are a
wonderful tool for doing calculations, but sometimes it requires great ingenuity to
see why the calculations really work.)

[Figure: the graph of f near x; dx = ∆x is the horizontal increment, dy = f′(x) dx the change along the tangent line, ∆y the change along the graph, and e = ∆y − dy the error.]

If we want to be completely accurate, we should write instead

f (x + ∆x) = f (x) + f′(x)∆x + e(x, ∆x) (26)

where e is an “error term” representing the difference between the value of the y-
coordinate on the graph of the function and y-coordinate on the tangent line. Since

the tangent line is very close to the graph, we expect e to be very small, at least if
∆x is small.

One way to think of this is that there is an infinite expansion in “higher order
terms” which involve powers of ∆x
f (x + ∆x) = f (x) + f′(x)∆x + . . .
and the tangential approximation ignores all terms of degree greater than one. e
would be the sum of these additional terms. (This may have been brought up in
your previous Calculus course as part of a discussion of Taylor’s formula.)

To proceed further, we need to be careful about what we mean by e(x, ∆x) “being
very small”. Indeed, all the incremental quantities will be small, so we must ask
“small compared to what?” To answer this question, rewrite (26) by transposing
and dividing by ∆x
[f (x + ∆x) − f (x)]/∆x = f′(x) + e(x, ∆x)/∆x.

Letting ∆x → 0, we see that the left hand side approaches f′(x) as a limit, so it
follows that

lim_{∆x→0} e(x, ∆x)/∆x = 0.
This says that if ∆x is small, the ratio e/∆x will be small, i.e., that e is small even
when compared to ∆x. A simple example will illustrate this.

Example 35 Let f (x) = x³, x = 2. Then

f (x + ∆x) = (2 + ∆x)³ = 8 + 12∆x + 6∆x² + ∆x³.

Here, f′(x) = f′(2) = 3 · 2² = 12. Hence, f (2) + f′(2)∆x = 8 + 12∆x, and
e(2, ∆x) = 6∆x² + ∆x³. Indeed,

e/∆x = 6∆x + ∆x².
Thus, if ∆x = .01 (a fairly small number), e/∆x = .0601, and e = .000601 which is
quite a bit smaller than ∆x = .01.

The theory of linear approximation for functions of two (or more variables) is similar,
but complicated by the fact that more things are allowed to vary. We start with an
example.

Example 36 Let f (r) = f (x, y) = x² + xy, and let r = (x, y) = (1, 2). Then,
proceeding as in Example 35, we have

f (x + ∆x, y + ∆y) = f (1 + ∆x, 2 + ∆y)
  = (1 + ∆x)² + (1 + ∆x)(2 + ∆y)
  = 1 + 2∆x + ∆x² + 2 + 2∆x + ∆y + ∆x∆y
  = 3 + 4∆x + ∆y + ∆x² + ∆x∆y. (27)

These terms can be grouped naturally. f (1, 2) = 3 so the first three terms may be
written
f (1, 2) + 4∆x + ∆y
and these constitute the linear terms or linear approximation to f (x + ∆x, y + ∆y).
Denote the remaining terms

e(r, ∆r) = ∆x² + ∆x∆y.

In the one variable case, we compared e to ∆x, but now we have both ∆x and ∆y
to contend with. The way around this is to use ∆s = |∆r| = √(∆x² + ∆y²). Thus,

e/∆s = (∆x² + ∆x∆y)/∆s = (∆x/∆s)∆x + (∆x/∆s)∆y,
and the quantity on the right approaches zero as ∆s → 0. (The reasoning is that
since |∆x|, |∆y| < ∆s, it follows that the fraction ∆x/∆s has absolute value never
exceeding 1, while both ∆x and ∆y must approach 0 as ∆s → 0.) It follows that for
∆s small, e is small even compared to ∆s. For example, let ∆x = .003, ∆y = .004.
Then, ∆s = .005, while e = (.003)² + (.003)(.004) = 0.000021.

Note that the coefficients of ∆x and ∆y are just the partial derivatives fx (1, 2) and
fy (1, 2). This is not very surprising. We can see why it works by calculating

fx (1, 2) = lim_{∆x→0} [f (1 + ∆x, 2) − f (1, 2)]/∆x

which from (27) with ∆y = 0

  = lim_{∆x→0} (4∆x + ∆x²)/∆x = lim_{∆x→0} (4 + ∆x) = 4.

A similar argument shows that fy (1, 2) is the coefficient of ∆y (which is 1).

In general, suppose we have a function f : R2 → R. Fix a point in the domain


of f with position vector r = ⟨x, y⟩, and consider the change in the function when
we change r by ∆r = ⟨∆x, ∆y⟩. It may be possible to express f near r by a linear
approximation, i.e., to write

f (x + ∆x, y + ∆y) = f (x, y) + a∆x + b∆y + e(r, ∆r) (28)

where

lim_{∆r→0} e(r, ∆r)/|∆r| = 0.
(This last statement says that e is small compared to ∆s = |∆r| when the latter
quantity is small enough.) If this is the case, it is not hard to see, just as in the

example, that a = ∂f/∂x and b = ∂f/∂y. So (28) may be rewritten
f (x + ∆x, y + ∆y) = f (x, y) + fx (x, y)∆x + fy (x, y)∆y + e(r, ∆r). (29)
If this is so, we say that the function is differentiable at the point r in its domain.
Equation (29) may be interpreted as follows. The first term on the right, f (x, y), is
the value of the function at the base point. Added to this is the linear part of the
change in the function. This has two parts: the partial change fx ∆x due only to
the change in x and the partial change fy ∆y due only to the change in y. Each
partial change is appropriately the rate of change for that variable times the change
in the variable. Finally, added to this is the discrepancy e resulting from ignoring
all but the linear terms.

Equation (29) also has a fairly simple geometric interpretation. Recall from the
previous section that the tangent plane (determined by the sectional tangents) at
the point (x₀, y₀, z₀ = f (x₀, y₀)) on the graph of f has equation

z − z₀ = fx (x₀, y₀)(x − x₀) + fy (x₀, y₀)(y − y₀).
Except for a change in notation this is exactly what we have for the middle two
terms on the right of (29). We have just changed the names of the coordinates
of the base point from (x0 , y0 ) to (x, y), and the increments in the variables from
(x − x0 , y − y0 ) to (∆x, ∆y). Hence, f is differentiable at (x, y) exactly when the
tangent plane at (x, y, f (x, y)) is a good approximation to the graph of the function,
at least if we stay close to the point of tangency. The Gradient The gradient of
∂f ∂f
a function f : R2 → R is defined to be the vector with components h , i. It is
∂x ∂y
denoted ∇f (pronounced “del f”) or grad f .

Example Let f (x, y) = x² + xy + 2y². Then ∇f = ⟨2x + y, x + 4y⟩. It may also be
expressed using the unit vectors i and j by ∇f (x, y) = (2x + y)i + (x + 4y)j.

Notice that the gradient is actually a function of position (x, y).

The gradient may be used to simplify the expression for the linear approximation.
Namely, fx ∆x + fy ∆y can be rewritten as ∇f · ∆r. Hence, we can write the
differentiability condition purely in vector notation
f (r + ∆r) = f (r) + ∇f · ∆r + e(r, ∆r)

where e(r, ∆r)/|∆r| → 0 as ∆r → 0. If you look carefully, this looks quite a bit like
the corresponding equation in the single variable case with ∇f playing the role of

the ordinary derivative. For this reason, it makes sense to think of ∇f as a higher
dimensional derivative.

Note that much of the discussion in this section could have been done for functions
f : R³ → R of three variables (or indeed for functions of any number of variables).
Thus, for a function of three variables given by f (r) = f (x, y, z), the gradient is
∇f = (∂f/∂x)i + (∂f/∂y)j + (∂f/∂z)k. Also the differentiability condition looks the same in
vector notation:
f (r + ∆r) = f (r) + ∇f · ∆r + e(r, ∆r)

where e(r, ∆r)/|∆r| → 0 as |∆r| → 0. However, when written out in terms of


components, this becomes

f (x + ∆x, y + ∆y, z + ∆z) = f (x, y, z) + fx ∆x + fy ∆y + fz ∆z + e.

The geometric meaning of all this is much less clear since the number of dimensions
is too high for clear visualization. Hence, we usually develop the theory in the two
variable case, and then proceed by analogy in higher dimensions. Fortunately, the
notation need not change much if we consistently use vectors.

‘O’ and ‘o’ notation In analysis and its applications, one is often interested in the
general behavior of functions rather than their precise forms. Thus, we don’t really
care how to express the error term e(r, ∆r) exactly as a formula, but we do know
something about how fast it approaches zero. Similarly, the most important thing
about the exponential function ex is not its exact values, but the fact that it gets
large very fast as x → ∞. The term “order of magnitude” is often used in these
contexts. There are two common notations used by scientists and mathematicians
in this context. We would say that a quantity e is ‘O(∆s)’ as ∆s → 0 if the ratio
e/∆s stays bounded. That means they have roughly the same order of magnitude.
We say that e is ‘o(∆s)’ if the ratio e/∆s goes to zero. That means that e is an
order of magnitude (or more) smaller than ∆s. ‘o’ is stronger than ‘O’ in that the
former implies the latter, but not necessarily vice versa.

With this terminology, we could express the differentiability condition

f (r + ∆r) = f (r) + ∇f · ∆r + o(|∆r|).

You might also see the following

f (r + ∆r) = f (r) + ∇f · ∆r + O(|∆r|²).

This assumes more information about the error than the previous formula. It as-
serts that the error behaves like the square |∆r|² (or better) whereas the previous
statement is not that explicit. For almost all interesting functions the ‘O(|∆r|²)’ is
valid, but mathematicians, always wanting to use the simplest hypotheses, usually
develop the subject using the less restrictive ‘o’ estimate.
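You can watch these orders of magnitude numerically. The following minimal Python sketch uses the function of Example 36; it halves |∆r| repeatedly and prints e/∆s and e/∆s². You should see the first ratio shrink toward zero while the second stays roughly constant, the signature of an ‘O(|∆r|²)’ error (the base point and increments are arbitrary choices):

    import math

    f  = lambda x, y: x**2 + x*y   # the function of Example 36
    fx = lambda x, y: 2*x + y      # its partial derivatives
    fy = lambda x, y: x

    x0, y0 = 1.0, 2.0
    for k in range(1, 6):
        dx = dy = 2.0**(-k)
        ds = math.hypot(dx, dy)
        # e is the discrepancy between f and its linear approximation
        e = f(x0 + dx, y0 + dy) - (f(x0, y0) + fx(x0, y0)*dx + fy(x0, y0)*dy)
        print(ds, e/ds, e/ds**2)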

Differential Notation Let f : R2 → R denote a function of two variables, and


fix a point r in its domain. As in the single variable case, we write

dz = ∇f · dr = (∂f/∂x) dx + (∂f/∂y) dy (30)

(where we put ∆x = dx, ∆y = dy, and ∆r = dr.) This is the change in z in the
tangent plane corresponding to a change dr = ∆r. It should be distinguished from
the change
∆z = ∇f · ∆r + e.
in the function itself. The expression on the right of (30) is often called the total
differential of the function. As in the one variable case, one can use differentials for
“quick and dirty” estimates. What we are doing in essence is assuming all functions
are linear, and as long as all changes are small, this is not a bad assumption.

Example 36 The pressure of an ideal gas is given in terms of the volume and
temperature by the relation

p = p(v, T ) = kT/v
where k is a constant. Suppose v = 10, T = 300 and both v and T increase by
1%. What is the change in p? To get an approximate answer to this question, we
calculate the total differential
dp = (∂p/∂v) dv + (∂p/∂T ) dT = −(kT/v²) dv + (k/v) dT.

Putting v = 10, dv = .1, T = 300, dT = 3, we get

dp = −k(300/10²)(.1) + k(1/10)(3) = 0,
so to a first approximation, there is no change in p. (The actual values of v and T
were not relevant. Can you see why? Hint: Calculate dp/p in general.)
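To see how good the differential estimate is, you might compare it with the exact change. A minimal sketch in Python (taking k = 1 for concreteness, an arbitrary choice):

    k = 1.0
    p = lambda v, T: k * T / v

    v, T = 10.0, 300.0
    dv, dT = 0.1, 3.0                      # both v and T increased by 1%

    dp = -k*T/v**2 * dv + k/v * dT         # the total differential estimate
    actual = p(v + dv, T + dT) - p(v, T)   # the exact change
    # Both print as 0 (up to roundoff): for p = kT/v, equal relative
    # changes in v and T cancel exactly, not merely to first order.
    print(dp, actual)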

Calculations with differentials work just as well for functions of any number of
variables. They amount to a use of the linear approximation. The only difficulty is
that one can’t easily visualize things in terms of “tangent planes” since the number
of dimensions is too large.

Exercises for 3.4.

1. Write out the linear approximation in each case in terms of ∆x, ∆y, (and if
appropriate, ∆z) at the indicated point P . Use it to estimate the value of the
function at the indicated point Q, and compare the estimate with the ‘true
value’ determined from your calculator.

(a) f (x, y) = 1/(x + y); P (1, 1); Q(1.02, 0.97)
(b) f (x, y) = √(x² + y²); P (3, 4); Q(3.02, 3.98)
(c) f (x, y, z) = xyz; P (1, 1, 1); Q(0.98, 1.02, 1.02)

2. Use differentials to approximate


(a) (√17 + √63)²
(b) (√5 − √3)²

3. The period of a simple pendulum is given by T = 2π√(L/g). Use differentials to
estimate the error in the period if the length is 1.1 meters, you take it to be
1.0 meters, and you approximate g by 10.0 rather than 9.8 m/s².
s
4. Find the gradient vector for the following functions at the given points:
(a) f (x, y) = 4x − 7y + 3, P (3, −2)
(b) f (x, y) = x² − 4y² + y − 2, P (2, 1)
(c) f (x, y, z) = √(x² + y² + z²), P (2, 2, 2)
(d) f (x, y, z) = 3xy + y²z − z³, P (1, −1, 8)

5. Verify the following properties of the gradient. Assume u and v are functions
and a and b are constants.
(a) ∇(au + bv) = a∇u + b∇v
(b) ∇(uv) = u∇v + v∇u

6. Let f (r) = (1/2) ln(x² + y²). Show that ∇f = (1/r)u_r where r = √(x² + y²) is the
radial polar coordinate.

7. Let f (x, y) = x³ + y²x + 3y − 1. Expand f (1 + ∆x, −1 + ∆y) by substituting
1 + ∆x for x and −1 + ∆y for y. Identify the linear terms and compare
with ∇f (1, −1) · ∆r. Identify the higher order terms and show that they are
‘O(|∆r|²)’.

8. Let f : R² → R be defined by f (x, y) = 2xy/(x² + y²) for (x, y) ≠ (0, 0) and
f (0, 0) = 0.
(a) Show that fx (0, 0) = fy (0, 0) = 0. Hint: What are f (x, 0) and f (0, y)?
(b) Show that f is not continuous at (0, 0). Hint: Express f using polar
coordinates and show that the limit as r → 0 is not defined.

3.5 The Directional Derivative

Consider a function f : R² → R. In the following discussion, refer to the diagram
where we have sketched in some of the contour curves of the function. At some
point r in the domain of the function, pick a direction, and draw a ray emanating
point r in the domain of the function, pick a direction, and draw a ray emanating
from the point r in the indicated direction. We want to consider the rate of change
of the function in that direction. This is called the directional derivative in the
desired direction. You can think of it roughly as the rate at which you cross level
curves as you move away from the point in the indicated direction.

Unfortunately, there is no standard notation for the directional derivative. One


common notation is as follows. Specify the direction by an appropriate unit vector
u. The directional derivative in the direction u is then denoted

(df/ds)_u .

You already know about two cases. If the direction is that of the unit vector i (paral-
lel to the x-axis), the directional derivative is the partial derivative fx (r). Similarly,
if the direction is that of j, the directional derivative is the partial derivative fy (r).
It turns out that the directional derivative in any direction can be expressed in
terms of the partial derivatives. To see this, let ∆r = ∆s u be a displacement
through distance ∆s in the direction u. Then, if the function is differentiable at r,
we have

f (r + ∆r) = f (r) + ∇f (r) · ∆r + e = f (r) + ∇f (r) · (∆s u) + e,

so

[f (r + ∆r) − f (r)]/∆s = ∇f (r) · u + e/∆s.
However, by hypothesis, e/∆s → 0, so

lim_{∆s→0} [f (r + ∆s u) − f (r)]/∆s = ∇f (r) · u.

The directional derivative is the limit on the left, so we obtain

(df/ds)_u = ∇f (r) · u.

Note that the directional derivative depends on both the point r at which it is cal-
culated and the direction in which it is calculated. Some of this may be suppressed
in the notation if it is otherwise clear, so you may see for example df /ds = ∇f · u.

Example 37 A climber ascends a mountain where the elevation in feet is given
by z = f (x, y) = 5000 − x² − 2y². x and y refer to the coordinates (measured
in feet from the summit) of a point on a flat map of the mountain. Most of the
discussion which follows refers to calculations involving that map. Suppose the
climber finds herself at the point with map coordinates (20, 10) and wishes to move
in the direction (on the map) of the unit vector ⟨3/5, 4/5⟩ (a bit north of northeast).
How fast will she descend?

To solve this problem, we calculate the directional derivative in the indicated direc-
tion. First,
∇f = ⟨−2x, −4y⟩ = ⟨−40, −40⟩ at (20, 10).

Hence, in the indicated direction

df/ds = ⟨−40, −40⟩ · ⟨3/5, 4/5⟩ = (−120 − 160)/5 = −56.
Thus she descends 56 feet for each foot she moves horizontally (i.e., on the map).
(She had better be using a rope!) Notice that the answer is negative which accords
with the fact that the climber is descending.

It is worth spending some time thinking about what this means on the actual surface
of the mountain, i.e., the graph of the function in R³. The point on the graph would
be that with coordinates (20, 10, 5000 − 20² − 2 · 10²) = (20, 10, 4400). Moving on
the mountain, if the climber moves ∆s units horizontally (i.e., on the map), she will
move √(∆s² + 56²∆s²) = ∆s√3137 in space. You should make sure you visualize
all of this. In particular, try to understand the relation between the unit vector u
in the plane, and the corresponding displacement vector in space. Can you find a
vector in space which is tangent to the graph of the function and which projects
on u? Can you find a unit vector in the same direction? (The answer to the first
question is ⟨3/5, 4/5, −56⟩.)
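
Such computations are easy to check numerically. The following sketch (an addition
to the text, using Python with the numpy library, which is merely one convenient
choice) compares ∇f · u with a difference quotient [f(r + ∆s u) − f(r)]/∆s for a
small ∆s in Example 37.

    # Check Example 37: the directional derivative from the gradient
    # formula should agree with a small-step difference quotient.
    import numpy as np

    def f(x, y):
        return 5000 - x**2 - 2*y**2

    r = np.array([20.0, 10.0])
    u = np.array([3/5, 4/5])              # unit direction on the map
    grad = np.array([-2*r[0], -4*r[1]])   # gradient <-2x, -4y> at r
    print(grad @ u)                       # -56.0, as computed above

    ds = 1e-6
    print((f(*(r + ds*u)) - f(*r)) / ds)  # approximately -56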

Significance of the Gradient Fix a point r in the domain of the function f , and
consider the directional derivative

    (df/ds)_u = ∇f · u

as a function of u. Assume ∇f ≠ 0 at r. (Otherwise, the directional derivative
is always zero.) The directional derivative is 0 if u is perpendicular to ∇f . It
attains its maximum positive value if u points in the direction of ∇f , and it attains
its minimum (most negative) value if u points in the direction opposite to ∇f (i.e.,
−∇f ). Finally, if u points the same way as ∇f , the directional derivative is

    df/ds = ∇f · u = |∇f ||u| = |∇f |.

The upshot of this is that

1. the direction of the gradient is that in which the directional derivative is as
large as possible;

2. the magnitude of the gradient is the directional derivative in that direction.

Example 37, revisited Consider the same climber on the same mountain. In which
direction should she move (on her map) to go down hill as fast as possible? By the
above analysis, this is opposite to the direction of the gradient, i.e., the direction
of −∇f(20, 10) = ⟨40, 40⟩. Directions are often given by unit vectors, so we might
normalize this to u = (1/√2)⟨1, 1⟩. Note that the question of finding a vector on
the surface of the mountain pointing down hill is somewhat different. Can you solve
that problem? (The answer is (1/√2)⟨1, 1, −80⟩. This is not a unit vector, but you can
get a unit vector in the same direction—should you need one—by dividing it by its
length.)

Exercises for 3.5.

1. For each of the following functions, find the directional derivative in the di-
rection of the given vector:
(a) f(x, y) = x² + xy + y², P(1, −1), v = ⟨2, 3⟩
(b) f(x, y) = e^y sin x, P(π/4, 0), v = ⟨1, −1⟩
(c) f(x, y, z) = x² + y² + z², P(2, 3, 6), v = 2i − 3j + k
(d) f(x, y, z) = 20 − x² − y² − z², P(4, 2, 0), v = k
(e) f(x, y, z) = (x − 1)² + y² + (z + 1)², P(1, 0, −1), v = ⟨1, 0, 0⟩
2. Find the maximum directional derivative of the function at the given point
and find its direction:
(a) f(x, y) = x² − 3y², P(1, −2)
(b) f(x, y, z) = x² + 4y³ − z, P(1, −3, 0)
(c) f(x, y, z) = xyz, P(−1, 4, −1)
3. A mountain climber stands on a mountain described by the equation z =
10000(25 − x² − y²) where x and y are measured in miles and z is measured in
feet. The ground beneath the climber's feet is at the point (2.5, 3.0, z) where
z is determined by the above equation. If the climber slips in a northeasterly
direction, at what rate will the fall occur? What is the angle of descent? Is
there a steeper path from this point?
4. An electric charge is uniformly distributed over a thin non-conducting wire of
radius a centered at the origin in the x, y-plane. The electrostatic potential
due to this charge at a point on the z-axis is given by V(0, 0, z) = C/√(a² + z²),
where C is an appropriate constant. Find (dV/ds)_u at the point (0, 0, h) in the
direction u towards the origin. Hint. You weren't given enough information
to determine ∇V. Why was the information you were given sufficient?

5. The temperature at any given point (x, y, z) in the universe is given by the
equation T = 72 + (1/2)xyz, with the origin defined as the corner of Third and
Main in Omaha, Nebraska. You are on a hilly spot, P(−2, −1, −7) in Lincoln,
and the surrounding topography has equation z = 3x − y². If you start heading
northeast towards Omaha, what initial rate of temperature change will you
experience?
6. The temperature inside the cylinder described by x² + y² ≤ a², 0 ≤ z ≤ h
is given by T = (x² + y²)z. What is the direction in which the temperature
changes as fast as possible at the point (a/2, a/2, h/2)?

3.6 Criteria for Differentiability

We need a simple way to tell if a function f : Rⁿ → R is differentiable. Following
the philosophy enunciated previously, we concentrate on the case n = 2, and proceed
by analogy in higher dimensional cases.

Theorem 3.2 Let f : R² → R. Suppose the partial derivatives ∂f/∂x and ∂f/∂y exist
and are continuous functions in the vicinity of the point r. Then f is differentiable
at r.

How would you generalize this result to apply to functions of three or more variables?

Examples If we take f(x, y) = x² + 3y², then fx(x, y) = 2x, and fy(x, y) = 6y.
These are certainly continuous for all points (x, y) in R². Hence, the theorem
assures us that f will be differentiable at every point in R². The graph of f is an
elliptic paraboloid, and it is very smooth. At each point of the graph, we expect
the tangent plane to be a good approximation to the graph.

On the other hand, take f(x, y) = √(x² + y²). Then fx(x, y) = x/√(x² + y²) and
fy(x, y) = y/√(x² + y²). Both of these fail to be continuous at (0, 0) although the
function f is defined there and is even continuous. Hence, the theorem will not
insure that f is differentiable at (0, 0). In fact, the graph of f is a right circular
cone with its vertex at the origin, and it is clear that there is no well defined tangent
plane at the vertex.
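
The discontinuity of the partials at the origin is easy to see numerically. The
following sketch (an addition to the text, in Python) shows that fx(x, y) =
x/√(x² + y²) approaches different values along different paths to (0, 0), so it can
have no limit there.

    # f_x takes the values 1, -1, and 0 along three different approaches
    # to the origin, so it cannot be continuous at (0, 0).
    import math

    def fx(x, y):
        return x / math.sqrt(x**2 + y**2)

    for t in (0.1, 0.01, 0.001):
        print(fx(t, 0.0), fx(-t, 0.0), fx(0.0, t))   # 1.0  -1.0  0.0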

There is a hierarchy of degrees of “smoothness” for functions. The lowest level
is continuity, and the next level is differentiability. A function can be continuous
without being differentiable. We saw an example of this in the function defined by
f(x, y) = √(x² + y²). However, a differentiable function is necessarily continuous.
(See the exercises.)

Continuity of partial derivatives provides a still higher level of smoothness. Such
functions are often called C¹ functions. The theorem tells us that such functions
are differentiable. However, if the partial derivatives are not continuous, we cannot
necessarily conclude that the function is not differentiable. The theorem just does
not apply. In fact there are some not too bizarre examples in which the partials are
not continuous but where the function is differentiable, i.e., there is a well defined
tangent plane which is a good approximation to the graph.

Proof of the Theorem While it is not essential that you understand how the
theorem is proved, you might find it enlightening. The proof makes extensive use
of the Mean Value Theorem, which you probably saw in your previous Calculus
course. (See also Edwards and Penney, 3rd Edition, Section 4.3.) The Mean Value
Theorem may be stated as follows. Suppose f is a function of a single variable which
is defined and continuous for a ≤ x ≤ b and which is differentiable for a < x < b.
Then there is a point x₁ with a < x₁ < b such that

    f(b) − f(a) = f′(x₁)(b − a).

If we substitute x for a and ∆x for b − a, we could also write this

    f(x + ∆x) − f(x) = f′(x₁)∆x.

The quantity on the right looks like the change in the linear approximation except
that the derivative is evaluated at x1 rather than at x. Also the equation is an exact
equality rather than an approximation. This form of the Mean Value Theorem is
better for our purposes because although in the previous analysis we had ∆x > 0,
i.e., a < b, the Mean Value Theorem is also true for ∆x < 0. (Just interchange
the roles of a and b.) In this form, we would say simply that x1 lies between x and
x + ∆x so we don’t have to commit ourselves about the sign of ∆x.

To prove the theorem, consider the difference f (x + ∆x, y + ∆y) − f (x, y) which we
want to relate to fx (x, y)∆x + fy (x, y)∆y. We have

    f(x + ∆x, y + ∆y) − f(x, y) = f(x + ∆x, y + ∆y) − f(x, y + ∆y)
                                  + f(x, y + ∆y) − f(x, y).

Consider the first difference on the right as a function only of the first coordinate
(with the second coordinate fixed at y + ∆y.) By the Mean Value Theorem, there
is an x1 between x and x + ∆x such that

f (x + ∆x, y + ∆y) − f (x, y + ∆y) = fx (x1 , y + ∆y)∆x.

Consider the second difference f (x, y + ∆y) − f (x, y) as a function of the second
variable, and apply the Mean Value Theorem again. There is a y1 between y and
y + ∆y such that
f (x, y + ∆y) − f (x, y) = fy (x, y1 )∆y.
Thus,

f (x + ∆x, y + ∆y) − f (x, y) = fx (x1 , y + ∆y)∆x + fy (x, y1 )∆y,



so

    e = f(x + ∆x, y + ∆y) − f(x, y) − fx(x, y)∆x − fy(x, y)∆y
      = fx(x₁, y + ∆y)∆x + fy(x, y₁)∆y − fx(x, y)∆x − fy(x, y)∆y
      = (fx(x₁, y + ∆y) − fx(x, y))∆x + (fy(x, y₁) − fy(x, y))∆y.

Hence,

    e/∆s = (fx(x₁, y + ∆y) − fx(x, y)) ∆x/∆s + (fy(x, y₁) − fy(x, y)) ∆y/∆s.   (32)
Now let ∆s → 0. Since |∆x| ≤ ∆s, it follows that ∆x/∆s has absolute value at
most 1, and a similar argument applies to ∆y/∆s. In addition, as ∆s → 0, so also
∆x → 0 and ∆y → 0. Since x₁ is between x and x + ∆x, it follows that x₁ → x.
Similarly, since y₁ lies between y and y + ∆y, it follows that y₁ → y. Hence, since
fx and fy are continuous functions,

    lim_{∆s→0} fx(x₁, y + ∆y) = fx(x, y),
    lim_{∆s→0} fy(x, y₁) = fy(x, y).

However, this implies that the expressions in parentheses on the right of (32) both
approach zero. It follows that

    lim_{∆s→0} e/∆s = 0

as required.

Domains of Differentiable Functions So far we haven't been very explicit about
the domain of a function f : Rⁿ → R except to say that it is some subset of Rⁿ.
A moment's thought will convince you that to do differential calculus, there will
have to be some restriction on possible domains. For example, suppose n = 2 so
f (x, y) gives a function of two variables. If the domain were the subset of R2 which
is the locus of the equation y = x, it would not make much sense to try to talk
about the partial derivative ∂f /∂x which is supposed to be the derivative of f with
y kept constant. If y = x, we can’t vary x without also varying y. (The same would
apply if there was any algebraic relation between x and y and the domain were some
curve in R2 .) In order to make sense of partial derivatives and related concepts,
the domain must be ‘fat enough’, i.e., the variables must really be independent.
However, the derivatives at a point r depend only on values of the variables near to
the point, not on the entire domain of the function. Hence, the domain need only
be ‘fat’ in the immediate vicinity of a point at which we want to take derivatives.

To make this precise, we introduce some new concepts. We concentrate on the case
of R2 . The generalization to R3 and beyond is straightforward. A set D in R2 is
said to be an open set if it has the following property. If a point is in D then there
is an entire disk of some radius centered at the point which is contained in D. (A
disk is a circle including the circumference and what is contained inside it.) We
can state this symbolically as follows. If r0 is a point in D, then there is a number
δ > 0 such that all points r satisfying |r − r0 | ≤ δ are also in D.

Examples The entire plane R2 is certainly open.

Also, the set D obtained by leaving out any one point is open.

The rectangular region consisting of all points (x, y) such that a < x < b, c < y < d
is open, but the region defined instead by a ≤ x ≤ b, c ≤ y ≤ d is not open.
The points on the perimeter of the rectangle in the latter case are in the set, but
they don’t have the desired property since any disk centered at such a point will
necessarily contain points not in the rectangle.

(Figures: the plane with one point deleted; an open disk; an open rectangle.)

The set of all points inside a circle but not on its circumference is open, but the set
of points in a circle or on its circumference is not open. The former is often called
an open disk, and the latter is called a closed disk.

There are many other examples.

There are a couple of related concepts. If D is a subset of R2 , a point is said to


be on its boundary if every disk centered at the point has some points in the set D
and some points not in the set D. For example, the perimeter of a rectangle or the
circumference of a disk consists of boundary points. A point in D is said to be an
interior point if it is not on the boundary, i.e., there is some disk centered at the
point consisting only of points of D. The interior of a set is always open. A set is
open if it consists only of interior points. Finally, we say a set is closed if it contains
all its boundary points.

(Figures: an interior point; a boundary point; a closed rectangle.)

Generally, we only want to take derivatives at interior points of the domain of a


function because at such points we can move in all possible independent directions,
at least if we stay sufficiently close to the point. One way to insure this is to assume
that the domain of a function f is always an open set. However, there are times
when we want to include the boundary of the set in the domain. At such points,
we may not be able to take derivatives. However, we can sometimes do something
like that if we are careful. For example, for points on the perimeter of a rectangle
we can take “one sided derivatives” by allowing variation in directions which point
into the rectangle.

Exercises for 3.6.

1. Let f : R² → R. Show that if f is differentiable at r then f is continuous at
r. Hint: The definition of continuity at r can be rewritten

    lim_{∆r→0} f(r + ∆r) = f(r).

Use the differentiability condition

    f(r + ∆r) = f(r) + ∇f(r) · ∆r + e(r, ∆r)

to verify the previous statement. What happens to e as ∆r → 0?
2. For each of the following subsets of R2 , tell if it is open, closed, or neither.
(a) The set of (x, y) satisfying −1 < x < 1, 0 < y < 2.
(b) The set of (x, y) satisfying −1 < x ≤ 1, 0 < y < 2.
(c) The set of (x, y) satisfying x ≤ y ≤ 1, 0 ≤ x ≤ 2. (Region is triangular.)
(d) The set of all points in R2 with the exception of the four points (±1, ±1).
(e) The set of all points with the exception of the line with equation y = 2x+3.
3. Describe the set of all points in R3 with position vector r satisfying 1 < |r| < 2.
Is it open or closed?

3.7 The Chain Rule

The chain rule for functions of a single variable tells us how to find the derivative of
a function of a function, i.e., a composite function. Thus, if y = f(x) and x = g(t),
then

    (d/dt) f(g(t)) = (df/dx)(x) (dg/dt)(t)   with x = g(t).

The generalization to higher dimensions is quite straightforward, at least if we use
vector notation, but its elaboration in terms of components can look pretty involved.
Suppose z = f(r) describes a differentiable function f : Rⁿ → R and r = g(t) a
vector valued differentiable function g : R → Rⁿ. (We shall concentrate on the two
cases n = 2 and n = 3, but it all works just as well for any n.) Then z = f(g(t))
describes the composite function f ◦ g : R → R which is just a scalar function of a
single variable. The multidimensional chain rule asserts

    (d/dt) f(g(t)) = ∇f(r) · dg/dt   with r = g(t).   (33)

Note that the gradient ∇f plays the role that the derivative plays in the single
variable case.

Formula (33) looks quite simple in vector form, but it becomes more elaborate if
we express things in terms of component functions. Let h(t) = f(g(t)) denote
the composite function. Then, for n = 2, we have ∇f = ⟨∂f/∂x, ∂f/∂y⟩ and
dg/dt = ⟨dx/dt, dy/dt⟩. Thus, the chain rule becomes

    dh/dt = (∂f/∂x)(dx/dt) + (∂f/∂y)(dy/dt).   (34)

Similarly, for n = 3, the chain rule becomes

    dh/dt = (∂f/∂x)(dx/dt) + (∂f/∂y)(dy/dt) + (∂f/∂z)(dz/dt).   (35)

Example 38 Let w = f(x, y, z) = e^(xy+z), x = t, y = t², z = t³. (Thus, g(t) =
⟨t, t², t³⟩.) Then

    ∂f/∂x = e^(xy+z) y = e^(2t³) t²
    ∂f/∂y = e^(xy+z) x = e^(2t³) t
    ∂f/∂z = e^(xy+z) = e^(2t³),
∂z

and

    dx/dt = 1
    dy/dt = 2t
    dz/dt = 3t².

Putting these in formula (35) yields

    dh/dt = e^(2t³) t² · 1 + e^(2t³) t · 2t + e^(2t³) · 3t² = e^(2t³)(6t²).
dt
There are a couple of things to notice in the example. First, in principle, the inter-
mediate variables x, y, z should be expressed in terms of the ultimate independent
variable t. Otherwise, the answer might be considered incomplete. Secondly, the
derivative could have been calculated by first making the substitutions, and then
taking the derivative.

    w = h(t) = e^(t·t² + t³) = e^(2t³)
    dh/dt = e^(2t³)(2 · 3t²) = e^(2t³)(6t²).

Here we only needed to use the single variable chain rule (to calculate the derivative
of e^u with u = 2t³), and the calculation was much simpler than that using the mul-
tidimensional chain rule. This is almost always the case. About the only exception
would be that in which we happened to know the partial derivatives ∂f/∂x, ∂f/∂y,
and ∂f/∂z, but we did not know the function f explicitly. In fact, unlike the single
variable chain rule, the multidimensional chain rule is a tool for theoretical analysis
rather than an aid in calculating derivatives. In that role, it amply justifies itself.
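
For what it is worth, both computations in Example 38 can be checked by machine.
The sketch below (an addition to the text, assuming the sympy library) applies the
chain rule term by term and compares the result with direct substitution.

    # Verify Example 38: chain rule vs. substitute-then-differentiate.
    import sympy as sp

    x, y, z, t = sp.symbols('x y z t')
    f = sp.exp(x*y + z)
    g = {x: t, y: t**2, z: t**3}

    chain = sum(sp.diff(f, v).subs(g) * sp.diff(g[v], t) for v in (x, y, z))
    direct = sp.diff(f.subs(g), t)
    print(sp.simplify(chain - direct))   # 0: the two methods agree
    print(sp.simplify(direct))           # 6*t**2*exp(2*t**3)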

Proof of the Chain Rule

Proof. To prove the chain rule, we start with the differentiability condition for f .

f (r + ∆r) = f (r) + ∇f (r) · ∆r + e(r, ∆r)

where e/|∆r| → 0 as |∆r| → 0. Hence,

∆w = f (r + ∆r) − f (r) = ∇f (r) · ∆r + e(r, ∆r), (36)

and dividing by ∆t yields

    ∆w/∆t = ∇f(r) · ∆r/∆t + e/∆t.

Now let ∆t → 0. On the left, the limit is dw/dt = dh/dt. On the right, we have

    ∇f · lim_{∆t→0} ∆r/∆t = ∇f · dr/dt

which is what we want, so the rest of the argument amounts to showing that the
additional term e/∆t goes to 0 as ∆t → 0.

We need to distinguish two cases. For a given ∆t, we may have ∆r = 0. In that
case, ∆w = 0, and it also follows from (36) that e = 0. Otherwise, if ∆r ≠ 0, write

    e/|∆t| = (e/|∆r|) (|∆r|/|∆t|).   (37)

Now, let ∆t → 0, but first restrict attention just to those ∆t for which ∆r = 0.
For those ∆t, we have, as noted above, e = 0, so we have trivially e/∆t → 0. Next,
let ∆t → 0, but restrict attention instead just to those ∆t for which ∆r ≠ 0. Since
|∆r/∆t| → |dr/dt|, it follows that |∆r| → 0 in these cases. By assumption e/|∆r| →
0 generally, so the same is true if we restrict attention to those |∆r| ≠ 0 obtained
from non-zero ∆t going to zero. Thus, equation (37) tells us that e/|∆t| → 0 also
in the second case.

It should be noted that this is exactly the same as the usual proof of the single
variable chain rule except for modifications necessary due to the fact that some of
the arguments are vectors rather than scalars.

Geometric Interpretation of the Chain Rule For variety, we consider the case
of a function f : R³ → R composed with a function g : R → R³. You might think
of the function f(r) giving the temperature w at the point with position vector
r. Then the level surfaces of the function would be called isotherms, surfaces of
constant temperature. As before r = g(t) would be a parametric representation of
a curve in R³ and we could think of it as the path of a particle moving in space.
The derivative dr/dt = g′(t) would be the velocity vector at time t. Then the chain
rule could be written

    dw/dt = ∇f(r) · dr/dt,   r = g(t),

and it would say that the rate of change of temperature w experienced by the
particle as it moves through the point with position vector r would be the gradient
of the temperature function f at r dotted with the velocity vector at that point of
the curve. Of course, you could think of the function w as giving any other quantity
which might be interesting in the problem you are studying.

Note that the formula for the directional derivative

    df/ds = ∇f(r) · u

is a special case of the chain rule. Namely, if the particle moves in such a way that
the speed ds/dt = 1, then the velocity vector u = dr/dt will be a unit vector, and
we may identify s with t, i.e., “distance” with “time”.

One important consequence of the chain rule is that at any point r, the gradient
∇f (r) (provided it is not zero) is perpendicular to the level surface through r. For,

suppose a curve given by r = g(t) is contained in the level surface

    f(r) = c.

Since the derivative of a constant is zero, the chain rule tells us

    ∇f · dr/dt = 0

so ∇f is perpendicular to dr/dt. On the other hand, dr/dt is tangent to the curve,
so it is also tangent to the surface. For any reasonable surface, we can manage to
get every possible tangent vector to the surface by a suitable choice of a curve lying
in the surface. Hence, it follows that the gradient is perpendicular to the surface.

(Figure: the gradient normal to a level surface f(x, y, z) = c.)

(The realizability of all tangents to the level surface is a bit subtle, and we shall study
it more closely in a later section. For now, it suffices to say that for reasonable
surfaces, there is no problem, and the gradient is always normal (perpendicular) to
the level surface.)

Example 39 Let f(r) = 1/|r| = 1/√(x² + y² + z²). The level surfaces of this
function are spheres centered at the origin. On the other hand,

    ∇f = ⟨−x/(x² + y² + z²)^(3/2), −y/(x² + y² + z²)^(3/2), −z/(x² + y² + z²)^(3/2)⟩ = −(1/|r|³) r.

This vector points toward the origin, so it is perpendicular to a sphere centered at
the origin.

(Figure: two level surfaces, with ∇f at two points in space pointing toward the origin O.)

Notice that in this example |∇f| = |r|/|r|³ = 1/|r|², so it satisfies the inverse square
law. Except for a constant, the gravitational force due to a point mass at the origin
is given by this rule. The function f in this case is, up to sign and a constant, the
gravitational potential energy function. It is true for many forces (e.g., gravitational
or electrostatic) that the force is the negative of the gradient of the potential energy
function.

If we are working in R² rather than R³, then f(r) = f(x, y) = c defines a family
of level curves rather than level surfaces. As above, the gradient ∇f is generally
perpendicular to these level curves.

Other Forms of the Chain Rule The most general chain rule tells us how to
find derivatives of composites of functions Rⁿ → Rᵐ for appropriate combinations
of m and n. We consider one special case here. You will see how to generalize
easily to other cases. Let w = f(x, y) describe a scalar valued function of two
variables. (f : R² → R.) Suppose, in addition, x = x(s, t) and y = y(s, t) describe
two functions of two variables s, t. (As we shall see later, this amounts to a single
function R² → R².) Then we may consider the composite function described by

    w = h(s, t) = f(x(s, t), y(s, t)).

The chain rule generates formulas for ∂h/∂s and ∂h/∂t as follows. Partial deriva-
tives are ordinary derivatives computed with the assumption that all the variables
but one are held fixed. Hence, to compute ∂h/∂t, all we need to do is to use (34),
replacing dh/dt by ∂h/∂t, dx/dt by ∂x/∂t, and dy/dt by ∂y/∂t. Similarly, for ∂h/∂s.
We get

    ∂h/∂s = (∂f/∂x)(∂x/∂s) + (∂f/∂y)(∂y/∂s)
    ∂h/∂t = (∂f/∂x)(∂x/∂t) + (∂f/∂y)(∂y/∂t).   (38)

Note in these formulas that the differentiation variable on the left (s or t) must
agree with the ultimate differentiation variable on the right.

Example 40 Let f(x, y) = √(x² + y²), x = r cos θ, y = r sin θ. Here, the intermedi-
ate variables are x, y and the ultimate independent variables are r, θ. We have

    ∂f/∂x = x/√(x² + y²) = (r cos θ)/r = cos θ
    ∂f/∂y = y/√(x² + y²) = (r sin θ)/r = sin θ.

Also,

    ∂x/∂r = cos θ        ∂y/∂r = sin θ
    ∂x/∂θ = −r sin θ     ∂y/∂θ = r cos θ.

Hence,

    ∂h/∂r = (cos θ)(cos θ) + (sin θ)(sin θ) = 1
    ∂h/∂θ = (cos θ)(−r sin θ) + (sin θ)(r cos θ) = 0.

Note that one must substitute for x, y in terms of r, θ in the expressions for ∂f/∂x
and ∂f/∂y or one won't get a complete answer.

The simplicity of the answers is more easily seen if we do the substitution before
differentiating.

    h(r, θ) = √(r² cos² θ + r² sin² θ) = r.
Hence, ∂h/∂r = 1 and ∂h/∂θ = 0. Again, this illustrates the point that the
multidimensional chain rule is not primarily a computation device. However, we
shall see that its utility in theoretical discussions in mathematics and in applications
more than justifies its use.
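
Example 40 can also be checked by machine. A brief sketch (an addition to the
text, again assuming the sympy library):

    # h(r, theta) = f(r cos theta, r sin theta) with f = sqrt(x^2 + y^2)
    # should satisfy dh/dr = 1 and dh/dtheta = 0.
    import sympy as sp

    r, th = sp.symbols('r theta', positive=True)
    x, y = r*sp.cos(th), r*sp.sin(th)
    h = sp.sqrt(x**2 + y**2)
    print(sp.simplify(sp.diff(h, r)), sp.simplify(sp.diff(h, th)))   # 1 0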

A Confusing Point

The most difficult use of the chain rule is in situations like the following. Suppose
w = f(x, y, z), z = z(x, y), and we want the partial derivatives of w = h(x, y) =
f(x, y, z(x, y)) with respect to x and y. The correct formulas are

    ∂h/∂x = ∂f/∂x + (∂f/∂z)(∂z/∂x)
    ∂h/∂y = ∂f/∂y + (∂f/∂z)(∂z/∂y).

One way to see the truth of these formulas is as follows. Suppose we introduce
new variables s and t with x = x(s, t) = s, y = y(s, t) = t, and z = z(s, t). Let
w = h(s, t) = f(x(s, t), y(s, t), z(s, t)) = f(s, t, z(s, t)). Then, according to the chain
rule,

    ∂h/∂s = (∂f/∂x)(∂x/∂s) + (∂f/∂y)(∂y/∂s) + (∂f/∂z)(∂z/∂s)
          = (∂f/∂x)(1) + (∂f/∂y)(0) + (∂f/∂z)(∂z/∂s).

A similar argument works for ∂h/∂t. We may now obtain the previous formulas by
identifying x with s and y with t.

The source of the problem is confusing the variables with functions. Thus, writing
z = z(x, y) lacks precision since the name ‘z’ should not be used both for the
variable and the function expressing it in terms of other variables. However, this is
a common ‘abuse of notation’ in applications, since it allows us to concentrate on
the physical interpretation of the variables which might otherwise be obscured by
a more precise mathematical formalism.

To see how confusing all this can be, let's consider a thermodynamic application.
The entropy s is a function s = s(p, v, T) of pressure p, volume v, and tempera-
ture T. Also, the pressure may be assumed to be a function p = p(v, T). Thus,
ultimately, we may express the entropy s = s(v, T). Note the several abuses of
notation. s(p, v, T) and s(v, T) refer of course to different functions. With this
notation, the above formulas give

    ∂s/∂v = ∂s/∂v + (∂s/∂p)(∂p/∂v)
    ∂s/∂T = ∂s/∂T + (∂s/∂p)(∂p/∂T)

which suggests that we may cancel the common terms on both sides to conclude
that the remaining term is zero. This is not correct, since, as just mentioned, the
two functions ‘s’ being differentiated are different. To clarify this, one should write
the formulas

    (∂s/∂v)_T = (∂s/∂v)_{p,T} + (∂s/∂p)_{v,T} (∂p/∂v)_T
    (∂s/∂T)_v = (∂s/∂T)_{p,v} + (∂s/∂p)_{v,T} (∂p/∂T)_v.

Here, the additional subscripts tell us which variables are kept constant, so by
implication we may see which variables the given quantity is supposed to be a
function of.

Gradient in Polar Coordinates As you have seen, it is sometimes useful to
resolve vectors in the plane in terms of the polar unit vectors u_r and u_θ. We
want to do this for the gradient of a function f given initially by w = f(r) = f(x, y).
Suppose

    ∇f = A_r u_r + A_θ u_θ.   (39)
For a particle moving on a curve in the plane, the chain rule tells us

    dw/dt = ∇f · dr/dt.   (40)

On the other hand,

    dr/dt = (dr/dt) u_r + r (dθ/dt) u_θ.   (41)

Putting (39) and (41) in (40) yields

    dw/dt = (A_r u_r + A_θ u_θ) · ((dr/dt) u_r + r (dθ/dt) u_θ) = A_r (dr/dt) + A_θ r (dθ/dt).   (42)

On the other hand, we can write w = g(r, θ) = f(x, y) = f(r cos θ, r sin θ), so
applying the chain rule directly to g gives

    dw/dt = (∂g/∂r)(dr/dt) + (∂g/∂θ)(dθ/dt).   (43)

We now argue that since the curve could be anything, so could dr/dt and dθ/dt.
Hence, the coefficients of these two quantities in (42) and (43) must be the same.
Hence, A_r = ∂g/∂r and A_θ r = ∂g/∂θ, i.e., A_θ = (1/r)∂g/∂θ. It follows that

    ∇f = (∂g/∂r) u_r + (1/r)(∂g/∂θ) u_θ

where g is the function expressing the desired quantity in polar coordinates.

(Figure: the polar unit vectors u_r and u_θ at a point on a level curve of f.)

Example 39, revisited Let f(x, y) = 1/√(x² + y²) = 1/r = g(r, θ). Notice that this
is what we would get if we restricted Example 39 to the x, y-plane by setting z = 0.
Then ∂g/∂r = −1/r² and ∂g/∂θ = 0. Hence,

    ∇f = −(1/r²) u_r.

This is the same answer we got previously except that we are restricting attention
to the x, y-plane.
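
Here is a quick numerical sanity check of the polar formula (an addition to the
text, in Python with numpy): for f(x, y) = 1/√(x² + y²), the gradient computed
in rectangular coordinates should equal −(1/r²) u_r at any point.

    # Compare the rectangular gradient of 1/|r| with -(1/r^2) u_r.
    import numpy as np

    p = np.array([3.0, 4.0])
    r = np.linalg.norm(p)          # r = 5
    u_r = p / r                    # unit radial vector at p
    grad_rect = -p / r**3          # rectangular gradient of 1/sqrt(x^2+y^2)
    print(grad_rect, -(1/r**2) * u_r)   # the two vectors agree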

A similar analysis can be done in space if one uses the appropriate generalization
of polar coordinates, in this case what are called spherical coordinates. We shall
return to this later in the course.

Exercises for 3.7.

In the following problems, use the appropriate form of the chain rule. The rule you
need may not have been stated explicitly in the text, but you ought to be able to
figure out what it is. The notation is typical of what is used in practice and is not
always completely precise.

1. Find dw/dt by first using the chain rule, then by determining w(t) explicitly
before differentiating. Check that the answers are the same by expressing
them both entirely in terms of t.
(a) w = x² + y², x = t, y = −2t.
(b) w = 1/(x + y), x = cos t, y = sin t.
(c) w = ln(x² + y² + z²), x = 2 − 3t, y = t, z = t.

2. Use the chain rule to find ∂h/∂x, ∂h/∂y, and ∂h/∂z.
(a) h = e^(−2u−3v+w), u = xz, v = yz, w = xy.
(b) h = u² + v² + w², u = x + y + z, v = x², w = y².

3. Suppose w = w(x, y) and y = y(x). Hence, ultimately w = w(x). With
this abuse of notation, derive a correct formula for dw/dx. (Hint: You might
try writing the above more precisely w = f(x, y), y = g(x), and w = h(x) =
f(x, g(x)).)

4. In each case, use the chain rule to find ∂h/∂x and ∂h/∂y in terms of x and y
for the given composite function h(x, y). Then express h explicitly in terms
of x and y, and find those partials again. Check that you get the same thing.
(a) h = u² + v² + x² + y², u = 2x − 3y, v = 2x + 3y.
(b) h = uvwxy, u = x² + y², v = 5x − 6y, w = xy.

5. In each case find a normal vector to the indicated level set at the indicated
point.
(a) x² + 3y² = 7 at (2, −1).
(b) x² + y² − z² = 1 at (2, 1, 2).
(c) x³ − x + y² − z = 0 at (2, −3, 15).

6. Find a normal vector to the level curve defined by F (x, y) = f (x) − y = 0 at a


general point. Show that it is perpendicular to a tangent vector to the graph
y = f (x). Hint: What is the product of the slopes of the tangent line and the
normal line?

7. The temperature on a heated plate is given by the formula T = T (x, y) =


x2 + xy + y 2 . A psychologist induces a bug to follow the circular path given
by r = 3 cos 2t i+3 sin 2t j. Find the rate of change of temperature experienced
by the bug at t = π/2.

8. In a meteorological theory, it is assumed that pressure is a function only of z,


p = p(z). A rocket with remote sensing equipment is launched in a parabolic
path. It is intuitively clear that at the top of the path, it will report that the
rate of change dp/dt = 0. Verify this conclusion mathematically. Could you
draw the same conclusion if p = p(x, z) and the path is in the y, z-plane? the
x, z-plane?

9. In each case express ∇f in polar coordinates in terms of u_r and u_θ.
(a) f(x, y) = x.
(b) f(x, y) = y.
(c) f(x, y) = x² + y² = r².

10. Suppose f(x, y) = tan⁻¹(y/x) for x, y > 0. Calculate ∇f in rectangular coordi-
nates. Also, express f(x, y) = g(r, θ) and use the formula ∇f = (∂g/∂r)u_r +
(1/r)(∂g/∂θ)u_θ. Can you convince yourself the two answers are the same?

3.8 Tangents to Level Sets and Implicit Differentiation

We saw in the previous section that for a function f : R² → R of two variables,
the gradient ∇f(r₀) is perpendicular to the level curve f(r) = c through r₀, and
similarly for functions f : R³ → R except that the locus is a level surface instead.

Example 41 Consider the locus in R² of the equation

    x³ + 3xy² + y = 15

at the point (1, 2). The normal vector is ∇f(1, 2) for f(x, y) = x³ + 3xy² + y. That
is, the normal vector is

    ⟨3x² + 3y², 6xy + 1⟩ = ⟨15, 13⟩.

The tangent line will be characterized by the relation

    ∇f(r₀) · (r − r₀) = 0

which in this case becomes

    ⟨15, 13⟩ · ⟨x − 1, y − 2⟩ = 15(x − 1) + 13(y − 2) = 0.

Simplifying, this gives

    15x + 13y = 41.

Example 42 Consider the hyperboloid of one sheet with equation

    x² + 2y² − z² = 2

at (1, 1, 1). The normal vector is ∇f(1, 1, 1) where f(x, y, z) = x² + 2y² − z². Thus,
it is

    ⟨2x, 4y, −2z⟩ = ⟨2, 4, −2⟩.

The tangent plane will be characterized by the equation ∇f(r₀) · (r − r₀) = 0 which
in this case becomes

    2(x − 1) + 4(y − 1) − 2(z − 1) = 0

or

    2x + 4y − 2z = 4.
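
The recipe of Examples 41 and 42 is mechanical enough to automate. A sketch
(an addition to the text, assuming the sympy library):

    # Tangent plane to the level set f = c at r0: grad f(r0) . (r - r0) = 0.
    import sympy as sp

    x, y, z = sp.symbols('x y z')
    f = x**2 + 2*y**2 - z**2
    r0 = {x: 1, y: 1, z: 1}
    normal = [sp.diff(f, v).subs(r0) for v in (x, y, z)]          # [2, 4, -2]
    plane = sum(n*(v - r0[v]) for n, v in zip(normal, (x, y, z)))
    print(sp.expand(plane))   # 2*x + 4*y - 2*z - 4, i.e. 2x + 4y - 2z = 4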

There is a subtle point involved here. What justification do we have for believing
that an equation of the form f(x, y) = c defines what we can honestly call a curve
in R², and what justification is there for calling the line above a tangent? Similar
questions can be asked in R³ about the locus of f(x, y, z) = c and the corresponding
plane. In fact, there are simple examples where this would not be a reasonable
use of language. For example, the locus in R² of x² + y² = 0 is the point (0, 0), and
it certainly is not a curve in any ordinary sense. Similarly, we saw previously that
the locus in R³ of x² + y² − z² = 0 is a (double) cone, which looks like a surface all
right, but it does not have a well defined tangent plane at the origin. If you look
carefully at both these examples, you will see what went wrong; in each case the
gradient ∇f vanishes at the point under consideration. It turns out that if r₀ is a
point satisfying the equation f(r) = c and ∇f(r₀) ≠ 0, then the level set looks like
what it should look like and the locus of ∇f(r₀) · (r − r₀) = 0 makes sense as a
tangent (line or plane) to that level set.

(Figures: the locus of x² + y² = 0 is a single point; the cone has no well defined tangent plane at its vertex.)

The basis for all of this is a deep theorem called the implicit function theorem. We
won't try to prove this theorem here, or even to state it very precisely, but we
indicate roughly what it has to do with the above discussion. Consider first the
case of a level curve f(x, y) = c in R². Let (x₀, y₀) be a point on that curve, where
∇f(x₀, y₀) ≠ 0. The implicit function theorem says that (subject to reasonable
smoothness assumptions on f ) the curve is identical with the graph of some function
g of one variable, at least if we stay close enough to the point (x₀, y₀). Moreover,
the tangent line to the graph is the same as the line obtained from the equation
∇f(r₀) · (r − r₀) = 0.

(Figure: near (x₀, y₀), the level set f(x, y) = c coincides with the graph of y = g(x).)


Example 43 The locus of f(x, y) = x² + y² = 1 is a circle of radius 1 centered at
the origin. The gradient ∇f = ⟨2x, 2y⟩ does not vanish at any point on the circle. If
we fix a point (x₀, y₀) with y₀ > 0 (top semicircle), then in the vicinity of that point,
the circle can be identified with the graph of the function y = g(x) = √(1 − x²). If
y₀ < 0, we have to use y = g(x) = −√(1 − x²) instead. It is not hard to check that
the tangent line is the same whichever method we use to find it.

There is one problem. Namely, at the points (1, 0) and (−1, 0) the tangent lines
are vertical, so neither can be obtained as a tangent to the graph of a function
given by y = g(x). (The slopes would be infinite.) Instead, we have to reverse the
roles of x and y and use the graph of x = g(y) = √(1 − y²) near the point (1, 0) or
x = g(y) = −√(1 − y²) near the point (−1, 0).

Similar remarks apply to level surfaces f(x, y, z) = c in R³ except that the conclu-
sion is that in the neighborhood of a point at which ∇f does not vanish the level
set can be identified with the graph of a function g of two variables. In particular,
it really looks like a surface, and the equations of the tangent plane work out right.
(If the normal vector ∇f points in a horizontal direction, i.e., fz = 0, you may have
to try x = g(y, z) or y = g(x, z) rather than z = g(x, y).)

Having noted that level sets can be thought of (at least locally) as graphs of func-
tions, we should note that the reverse is also true. For example, suppose f : R² → R
is a function of two variables. Its graph is the locus of z = f(x, y). If we define
F(x, y, z) = z − f(x, y), the graph is the zero level set

    F(x, y, z) = z − f(x, y) = 0.

This remark allows us to treat the theory of tangent planes to graphs as a special
case of the theory of tangent planes to level sets. The normal vector is ∇F =
⟨−fx, −fy, 1⟩ and the equation of the tangent plane is ⟨−fx(r₀), −fy(r₀), 1⟩ · ⟨x −
x₀, y − y₀, z − z₀⟩ = 0.

Example 44 Consider the graph of the function z = xy to be the level set of the
function

    F(x, y, z) = z − xy = 0.

At (0, 0, 0), ∇F = ⟨−y, −x, 1⟩ = ⟨0, 0, 1⟩. Hence, the equation of the tangent plane
is

    0(x − 0) + 0(y − 0) + 1(z − 0) = 0

or

    z = 0.

Thus, the tangent plane to the graph (which is a hyperbolic paraboloid) at (0, 0, 0)
is the x, y-plane. In particular, note that it actually intersects the surface in two
lines, the x and y axes.

Implicit Differentiation The above discussion casts some light on the problem
of “implicit differentiation”. You recall that this is a method for finding dy/dx in

a situation in which y may not be given explicitly in terms of x, but rather it is


assumed there is a functional relation y = y(x) which is consistent with a relation

f (x, y) = c

between y and x.

Example 45 Suppose x² + y² = 1 and find dy/dx. To solve this we differentiate
the relation using the usual rules to obtain

    2x + 2y (dy/dx) = 0,

which can be solved:

    dy/dx = −x/y.
Note that the answer depends on both x and y, so for example it would be different
for y > 0 (the top semi-circle) and y < 0 (the bottom semi-circle). This is consistent
with the fact that in order to pick out a unique functional relationship y = y(x),
we must specify where on the circle we are. In addition, the method does not make
sense if y = 0, i.e., at the points (±1, 0). As we saw above, it would be more
appropriate to express x = x(y) as a function of y in the vicinity of those points.

The process of implicit differentiation can be explained in terms of the tangent line
to a level curve as follows. First, rewrite the equation of the tangent to f(x, y) = c
in differential form

    ∇f · dr = (∂f/∂x) dx + (∂f/∂y) dy = 0.   (44)

(Figure: the tangent line to the level set f(x, y) = c, viewed as the graph of y = g(x) over the domain of g.)

This is just a change of notation where we drop the subscript 0 in describing the
point on the curve and we use dr rather than r − r₀ for the displacement along
the tangent line. (We usually think of dr as being very small so we don't have to
distinguish between the displacement in the tangent direction and the displacement
along the curve, at least if we are willing to tolerate a very small error.) (44) can
be solved for dy by writing

    dy = −(∂f/∂x)/(∂f/∂y) dx

provided the denominator fy ≠ 0. On the other hand, the implicit function theorem
tells us that near the point of tangency we may assume that the graph is the graph of
a function given by y = g(x), and the tangent line to that graph would be expressed
in differential notation

    dy = g′(x) dx = (dy/dx) dx.

Since it is the same tangent line in either case, we conclude

    dy/dx = −(∂f/∂x)/(∂f/∂y).   (45)

(Note that the assumption fy ≠ 0 at the point of tangency plays an additional role
here. That condition insures that the normal vector ∇f won’t point horizontally,
i.e., that the tangent line is not vertical. Hence, it makes sense to consider the
tangent as a tangent line to the graph of a function y = g(x).)

Example 45, revisited For

    f(x, y) = x² + y² = 1,

we have fx(x, y) = 2x, fy(x, y) = 2y, so (45) tells us dy/dx = −x/y, just as before.
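
Implicit differentiation can also be done by machine. As a sketch (an addition to
the text), the sympy library provides a helper idiff which solves f(x, y) = 0 for
dy/dx:

    import sympy as sp

    x, y = sp.symbols('x y')
    print(sp.idiff(x**2 + y**2 - 1, y, x))   # -x/y, as computed above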

Similar reasoning applies to level surfaces in R³. Let

    f(x, y, z) = c

define such a level surface, and write the equation of the tangent plane in differential
notation

    ∇f · dr = (∂f/∂x) dx + (∂f/∂y) dy + (∂f/∂z) dz = 0.   (46)

Solving for dz yields

    dz = −(∂f/∂x)/(∂f/∂z) dx − (∂f/∂y)/(∂f/∂z) dy

provided the denominator fz ≠ 0. On the other hand, at such a point, the implicit
function theorem allows us to identify the surface near the point with the graph of
a function z = g(x, y). Moreover, we may write the equation of the tangent plane
to the graph in differential form as

    dz = (∂g/∂x) dx + (∂g/∂y) dy.

Since it is the same tangent plane in either case, we conclude that

    ∂g/∂x = −(∂f/∂x)/(∂f/∂z)
    ∂g/∂y = −(∂f/∂y)/(∂f/∂z).

Example 46 Suppose xyz + xz² + z³ = 1. Assuming z = g(x, y), find ∂z/∂x
and ∂z/∂y in general and also for x = 1, y = −1, z = 1. To solve this, we put
f(x, y, z) = xyz + xz² + z³ and set its total differential to zero:

    df = fx dx + fy dy + fz dz = (yz + z²)dx + (xz)dy + (xy + 2xz + 3z²)dz = 0.

This can be rewritten

    dz = −(yz + z²)/(xy + 2xz + 3z²) dx − (xz)/(xy + 2xz + 3z²) dy

so

    ∂z/∂x = −(yz + z²)/(xy + 2xz + 3z²)
    ∂z/∂y = −(xz)/(xy + 2xz + 3z²)

provided the denominator does not vanish. At (1, −1, 1) we have ∂z/∂x = 0/4 = 0
and ∂z/∂y = −1/4.

There is another way this problem could be done which parallels the approach
you learned in your previous course. For example, to find ∂z/∂x, just apply the
“operator” ∂/∂x to the equation xyz + xz² + z³ = 1 under the assumption that
z = g(x, y) but x, y are independent. We get

    yz + xy (∂z/∂x) + z² + 2xz (∂z/∂x) + 3z² (∂z/∂x) = 0.

This equation can be solved for ∂z/∂x and we get the same answer as above. (Check
it!)
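
Checking this example symbolically also catches arithmetic slips. A sketch (an
addition to the text, assuming the sympy library):

    # dz/dx = -f_x/f_z and dz/dy = -f_y/f_z for f = xyz + xz^2 + z^3 - 1.
    import sympy as sp

    x, y, z = sp.symbols('x y z')
    f = x*y*z + x*z**2 + z**3 - 1
    zx = -sp.diff(f, x) / sp.diff(f, z)
    zy = -sp.diff(f, y) / sp.diff(f, z)
    pt = {x: 1, y: -1, z: 1}
    print(zx.subs(pt), zy.subs(pt))   # 0 and -1/4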

A Theoretical Point Consider the level surface f(r) = c at a point where the
gradient ∇f does not vanish. In showing that the gradient is perpendicular to the
level surface (in the previous section), we claimed that every possible tangent vector
to the surface at the point of tangency is in fact tangent to some curve lying in the
surface. A moment's thought shows there are some problems with this assertion.
First, how do we know that the level set looks like a surface (as opposed, say, to
a point), and if it does, what do we mean by its tangent plane? These issues are
settled by using the implicit function theorem. For as long as ∇f does not vanish
at the point, that theorem allows us to view the surface in the vicinity of the given
point as the graph of some function. However, the graph of a function certainly
has the right properties for our concept of “surface”. Moreover, there is no problem
with tangent vectors to a graph, and any such vector determines a plane section
perpendicular to the domain of the function which intersects the graph in a curve
with the desired tangent. (See the diagram.)

Exercises for 3.8.

1. Find the equation ∇f(r₀) · (r − r₀) = 0 for each of the following level surfaces.
(a) 2x² + 3y + z³ = 5, P(3, −7, 2).
(b) x⁴ + 3xy²z + xy²z² + z³ = 0, P(0, −2, 0).

2. Show that every tangent plane of the cone z² = x² + y² passes through the
origin. (See also Exercise (4) in Section 3.)

3. Find all point(s) where the tangent plane to the surface z = x² + 4xy + 4y² −
12x + 4y is horizontal.

4. In each of the following, find ∂z/∂x and ∂z/∂y in terms of x, y, and z if you
assume z = f(x, y). Do not try to solve explicitly for z.
(a) x^(5/2) + y^(5/2) + z^(5/2) = 1
(b) 2e^x z + 4e^y z − 6e^x y = 17
(c) xyz = 2x + 2y + 2z
(d) 4x²y²z + 5xyz³ − 2x³y = −2

5. Suppose that f(x, y, z) = c expresses a relation among the three variables
x, y, and z. We may suppose that this defines any one of the variables as a
function of the other two. Show that

    (∂x/∂y)_z (∂y/∂z)_x (∂z/∂x)_y = −1.

Hint: You should be able to express each of the indicated derivatives in terms
of fx, fy, and fz.

6. Each of the equations g(x, y, z) = c and h(x, y, z) = d defines a surface in
R³. The intersection of the two surfaces is usually a curve in R³. At any
given point on this curve, ∇g is perpendicular to the first surface and ∇h is
perpendicular to the second surface, so both are perpendicular to the curve.
(a) Use this information to find a vector tangent to the curve at the point.
(b) Apply this analysis to find a tangent vector to the intersection of the cylinder
x² + y² = 25 with the plane x + y − z = 0 at the point (3, 4, 5).

3.9 Higher Partial Derivatives

Just as in the single variable case, you can continue taking derivatives in the mul-
tidimensional case. However, because there is more than one independent variable,
the situation is a bit more complicated.

Example 47 Let f(x, y) = x²y + y². Then

    ∂f/∂x = 2xy        ∂f/∂y = x² + 2y.

Each of these can be differentiated with respect to either x or y and the results are
denoted

    ∂²f/∂x² = ∂/∂x(∂f/∂x) = 2y        ∂²f/∂x∂y = ∂/∂x(∂f/∂y) = 2x
    ∂²f/∂y∂x = ∂/∂y(∂f/∂x) = 2x       ∂²f/∂y² = ∂/∂y(∂f/∂y) = 2.

All these are called second order partial derivatives. Note that the order in which the
operations are performed is from right to left; the operation closer to the function
is performed first. ∂²f/∂x∂y and ∂²f/∂y∂x are called mixed partials. Other notation for
partial derivatives is fxx, fxy, fyx, and fyy. However, just to make life difficult for
you, the order is different. For example, fxy means first differentiate with respect
to x and then differentiate the result with respect to y.

Fortunately, it doesn't usually make any difference which order you do the opera-
tions. For example, in Example 47, we have

    ∂²f/∂x∂y = ∂/∂x(∂f/∂y) = 2x
    ∂²f/∂y∂x = ∂/∂y(∂f/∂x) = 2x

so the mixed partials are equal. The following theorem gives us conditions under
which we can be sure such is the case. We state it for functions of two variables,
but its analogue holds for functions of any number of variables.

Theorem 3.3 Let z = f (x, y) denote a function of two variables defined on some
open set in R2 . Assume the partial derivatives fx , fy , and fxy are defined and fxy
is continuous on that set. Then fyx exists and fxy (x, y) = fyx (x, y).

A function with continuous second order partial derivatives is usually called C².
This is more stringent than the condition of being C¹ (having continuous first order
partials). It will almost always be true that functions you have to deal with in
applications are C² except possibly for an isolated set of points.

Clearly, we can continue this game ad infinitum. There are 8 possible 3rd order
derivatives for a function of two variables:

    fxxx, fyxx, fxyx, fxxy, fxyy, fyxy, fyyx, fyyy.

(These could also be denoted ∂³f/∂x³, ∂³f/∂x²∂y, etc.) However, for sufficiently
smooth functions, the 2nd, 3rd, and 4th are the same, as are the 5th, 6th, and 7th.

How many second order partial derivatives are there for a function of 3 variables
x, y, and z? How many are the same for C² functions?

Proof of the Theorem We include here a proof of Theorem 3.3 because some of
you might be curious to see how it is done. However, you will be excused if you
choose to skip the proof.

The proof is based on the Mean Value Theorem (as was the proof of Theorem 3.2).
We have

    fyx(x, y) = lim_{∆x→0} [fy(x + ∆x, y) − fy(x, y)]/∆x.

However,

    fy(x, y) = lim_{∆y→0} [f(x, y + ∆y) − f(x, y)]/∆y
    fy(x + ∆x, y) = lim_{∆y→0} [f(x + ∆x, y + ∆y) − f(x + ∆x, y)]/∆y.

Hence,

    fyx = lim_{∆x→0} lim_{∆y→0} [ (f(x + ∆x, y + ∆y) − f(x + ∆x, y))/(∆x∆y) − (f(x, y + ∆y) − f(x, y))/(∆x∆y) ]
        = lim_{∆x→0} lim_{∆y→0} [f(x + ∆x, y + ∆y) − f(x + ∆x, y) − f(x, y + ∆y) + f(x, y)]/(∆x∆y).   (47)

Call the expression in the numerator ∆. Now, put

    g(x) = f(x, y + ∆y) − f(x, y),

so the dependence on y is suppressed. Note that

    g(x + ∆x) − g(x) = f(x + ∆x, y + ∆y) − f(x + ∆x, y) − f(x, y + ∆y) + f(x, y) = ∆.

By the Mean Value Theorem,

    ∆ = g(x + ∆x) − g(x) = g′(x₁)∆x

for some x₁ between x and x + ∆x. Remembering what g is, we get

    ∆ = (fx(x₁, y + ∆y) − fx(x₁, y))∆x.

(The differentiation in g′ was with respect to x.) Now apply the Mean Value
Theorem again to get

    ∆ = (fxy(x₁, y₁)∆y)∆x

for some y₁ between y and y + ∆y. Note that (x₁, y₁) → (x, y) as ∆x, ∆y → 0.
Referring again to (47), we have

    fyx(x, y) = lim_{∆x→0} lim_{∆y→0} ∆/(∆x∆y) = lim_{∆x→0} lim_{∆y→0} fxy(x₁, y₁) = fxy(x, y).

The last equality follows from the hypothesis that fxy is continuous.

An Example It is easy to give an example of a function for which the mixed
partials are not equal. Let

    f(x, y) = xy (x² − y²)/(x² + y²).

This formula does not make sense for (x, y) = (0, 0) but it is not hard to see that
lim_{(x,y)→(0,0)} f(x, y) = 0. Hence, we can extend the definition of the function by
defining f(0, 0) = 0, and the resulting function is continuous.

The mixed partial derivatives of this function at (0, 0) are not equal. To see this,
note first that

    fx(x, y) = y(x⁴ + 4x²y² − y⁴)/(x² + y²)²

as long as (x, y) ≠ (0, 0). (That is a messy but routine differentiation which you
can work out for yourself.) To determine fx(0, 0), note that for y = 0, we get
f(x, 0) = 0. Hence, it follows that fx(x, 0) = 0 for every x including x = 0. We can
now calculate fxy(0, 0) as

    fxy(0, 0) = lim_{∆y→0} [fx(0, 0 + ∆y) − fx(0, 0)]/∆y = lim_{∆y→0} (1/∆y) · ∆y(−(∆y)⁴)/((∆y)²)² = −1.

Calculating fyx(0, 0) (in the other order) proceeds along the same lines. However,
since f(y, x) = −f(x, y), you can see that reversing the roles of x and y will yield
+1 instead, and indeed fyx(0, 0) = +1. Hence, the two mixed partials are not equal.
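
These two values can be confirmed with difference quotients. The following sketch
(an addition to the text, in Python) approximates fx and fy by central differences
and then the mixed partials at the origin:

    # Numerical evidence that f_xy(0,0) = -1 while f_yx(0,0) = +1.
    def f(x, y):
        return 0.0 if (x, y) == (0.0, 0.0) else x*y*(x**2 - y**2)/(x**2 + y**2)

    def fx(x, y, step=1e-6):
        return (f(x + step, y) - f(x - step, y)) / (2*step)

    def fy(x, y, step=1e-6):
        return (f(x, y + step) - f(x, y - step)) / (2*step)

    h = 1e-4
    print((fx(0.0, h) - fx(0.0, 0.0)) / h)   # about -1, i.e. f_xy(0,0)
    print((fy(h, 0.0) - fy(0.0, 0.0)) / h)   # about +1, i.e. f_yx(0,0)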

An Application to Thermodynamics You may have been studying thermody-


namics in your chemistry course. In thermodynamics, one studies the relationships
among variables called pressure, volume, and temperature. These are usually de-
noted p, v, and T . They are assumed to satisfy some equation of state

f (p, v, T ) = 0.

For example, pv − kT = 0 is the law which is supposed to hold for an ideal gas,
but there are other more complicated laws such as the van der Waals equation. In
any case, that means that we can pick two of the three variables as the indepen-
dent variables and the remaining variable depends on them. However, there is no

preferred choice for independent variables and one switches from one to another,
depending on the circumstances. In addition, one introduces other quantities such
as the internal energy (u), the entropy (s), the enthalpy, etc. which are all functions
of the other variables. (In certain circumstances, one may use these other variables
as independent variables.)

It is possible to state two of the basic laws of thermodynamics in terms of entropy
(s) and internal energy (u) as follows.

    T (∂s/∂T)_v = (∂u/∂T)_v          First Law
    T (∂s/∂v)_T = (∂u/∂v)_T + p      Second Law.

From these relations, it is possible to derive many others. For example, from the
first law, we get by differentiating with respect to v,

    T ∂²s/∂v∂T = ∂²u/∂v∂T.

From the second law, we get by differentiating with respect to T

    (∂s/∂v)_T + T ∂²s/∂T∂v = ∂²u/∂T∂v + (∂p/∂T)_v.

Subtracting the first from the second and using the equality of the mixed partials
yields

    (∂s/∂v)_T = (∂p/∂T)_v.

This is one of four relations called Maxwell's relations.
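
As a sanity check (an addition to the text), this relation is easy to verify symbolically
for an ideal gas, for which p = kT/v and, as standard thermodynamics texts show,
s = c_v ln T + k ln v up to an additive constant (these formulas are assumed here,
not derived in this text):

    # Both sides of the Maxwell relation reduce to k/v for an ideal gas.
    import sympy as sp

    T, v, k, cv = sp.symbols('T v k c_v', positive=True)
    s = cv*sp.log(T) + k*sp.log(v)   # ideal-gas entropy, up to a constant
    p = k*T/v                        # equation of state pv = kT
    print(sp.diff(s, v), sp.diff(p, T))   # k/v  k/v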

Exercises for 3.9.

1. Calculate fxy and fyx and show that they are equal.
(a) f(x, y) = x² cos y.
(b) f(x, y) = x ln(x + 2y).
(c) f(x, y) = x³ − 5x²y + 7xy² − y³.

2. Suppose f(x, y) denotes a twice differentiable function with equal mixed par-
tials. Show that if f(x, y) = −f(y, x) then fxy(x, y) = 0 at any point on the
line y = x. Why does this show that the function considered in the Example
in the section cannot have equal mixed partials at the origin?

3. For the function f(x, y) = e^(x+y), show that all the partials ∂^(m+n)f/∂y^m∂x^n
are the same. What are they?

4. The vertical displacement y(x, t) of a point on a vibrating string (as a function
of position x and time t) is governed by the wave equation

    ∂²y/∂t² = c² ∂²y/∂x².

(This is an approximation which is only accurate for small displacements.)
Show that there are solutions of the form:
(a) y = sin(kx + at).
(b) y = sin kx cos at.
What are k and a?
5. Laplace's Equation

    ∇²u = 0

governs many physical phenomena, e.g., the potential energy of a gravitational
field.
Which of the following functions satisfy Laplace's Equation? Use ∇²u =
∂²u/∂x² + ∂²u/∂y² for a function u(x, y) of two variables.
(a) u = ln(x + y)
(b) u = 2x − 3y + 4x² − 4y² + xy
(c) u = x² + y²
6. Suppose w = f(x, y). Then, in polar coordinates, w = f(r cos θ, r sin θ) =
h(r, θ). Show that

    ∂²f/∂x² + ∂²f/∂y² = ∂²h/∂r² + (1/r) ∂h/∂r + (1/r²) ∂²h/∂θ².

7. Given the thermodynamic relation

    du = T ds − p dv,

derive the Maxwell relation

    (∂T/∂v)_s = −(∂p/∂s)_v.
Chapter 4

Multiple Integrals

4.1 Introduction

We now turn to the subject of integration for functions of more than one variable.
As before, the case of functions of two variables is a good starting point.

We start with a discussion of how integration commonly arises in applications, and
to be concrete we shall discuss the concept of center of mass. (Your physics book
should have a discussion of the significance of this concept.) Given a finite system
of particles with masses m₁, m₂, . . . , mₙ with position vectors at r₁, r₂, . . . , rₙ, the
center of mass of the system is defined to be

    r_cm = (1/M)(m₁r₁ + m₂r₂ + · · · + mₙrₙ) = (1/M) Σ_{i=1}^{n} mᵢrᵢ

where

    M = m₁ + m₂ + · · · + mₙ = Σ_{i=1}^{n} mᵢ
is the total mass. The center of mass is important in dynamics because it moves
like a single particle of mass M which is acted upon by the sum of the external
forces on the individual particles.

The principle we want to illustrate is that for each concept definable for finite sets of
points, there is an appropriate generalization for continuous distributions in which
the finite sum is replaced by an integral.

Example 48 Let a mass M be uniformly distributed along a thin rod of length


L. We shall set up and evaluate an integral representing the center of mass of this
system.


(Figure: a thin rod laid along the x-axis from 0 to L.)

We treat the rod as if it were a purely 1-dimensional distribution, thus ignoring its
extension in the other directions. We introduce a coordinate x to represent distance
along the rod, so 0 ≤ x ≤ L. Since the mass is uniformly distributed, the mass
density per unit length will be

    δ = M/L.
The link between the finite and continuous is obtained by imagining that the rod
is partitioned into small segments by picking division points

0 = x0 < x1 < x2 < · · · < xi−1 < xi < · · · < xn = L.

The easiest way to do this would be to have them equally spaced, but to be com-
pletely general we don’t assume that. Thus, the mass contained in the ith segment
is ∆mi = δ∆xi where ∆xi = xi − xi−1 . We now replace each segment ∆xi by
a point mass of the same mass ∆mi placed at the right hand endpoint xi . (As
we shall see, it is not critical where the point mass is positioned as long as it is
somewhere in the segment. It could be instead at the left endpoint xi−1 or at any
point x̃i satisfying xi−1 ≤ x̃i ≤ xi .) The x-coordinate of the center of mass of this
finite discrete system is

    (1/M) Σ_{i=1}^{n} (δ∆xᵢ)xᵢ = (δ/M) Σ_{i=1}^{n} xᵢ∆xᵢ.

To find—actually, to define—the x-coordinate of the center of mass of the original
continuous distribution, we let n → ∞ and all ∆xᵢ → 0 and take the limit. The
result is the integral

    x_cm = (1/M) ∫₀^L δx dx.

This can be evaluated in the usual way

    (1/M) ∫₀^L δx dx = (δ/M) [x²/2]₀^L = (1/M)(M/L)(L²/2) = L/2.

Thus x_cm = L/2, just as you would expect.
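
The limiting process itself can be watched numerically. A sketch (an addition to
the text, in Python, taking M = L = 1 for simplicity):

    # Riemann sums (1/M) * sum of delta * x_i * dx approach L/2 = 0.5
    # as the partition of the rod is refined.
    M = L = 1.0
    delta = M / L
    for n in (10, 100, 1000):
        dx = L / n
        riemann = sum(delta * (i*dx) * dx for i in range(1, n + 1))
        print(riemann / M)   # 0.55, 0.505, 0.5005 -> 1/2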

Note that had we used some other x̃ᵢ in the above construction, we would have
gotten instead the sum

    Σ_{i=1}^{n} δ x̃ᵢ∆xᵢ,

but that wouldn’t have made any difference in the limit. As you may remember
from your previous study of integral calculus, all such sums are called Riemann

sums and, in the limit, it doesn’t matter which x̃i you choose in the ith segment.
All such sums approach the same limit, the definite integral.

In the above example, we could have had a non-uniformly distributed mass. In that
case, the linear density δ(x) would have been a non-constant function of position x.
In that case, a typical Riemann sum would look like

    Σ_{i=1}^{n} δ(x̃ᵢ)x̃ᵢ∆xᵢ

where xᵢ₋₁ ≤ x̃ᵢ ≤ xᵢ, and in the limit it would approach the definite integral

    ∫₀^L δ(x)x dx.

The x-coordinate of the center of mass would be

    (1/M) ∫₀^L δ(x)x dx,

but in this case, using similar reasoning, we would calculate the mass as an integral

    M = ∫₀^L δ(x) dx.

In such problems there are always two parts to the process. The first step is con-
ceptual; it involves setting up an integral by visualizing it as a limit of finite sums.
The second step is to evaluate the integral by using antiderivatives. In the example,
x²/2 doesn't have anything to do with any sums, it is just an antiderivative or in-
definite integral of x. The link with limits of sums is provided by the Fundamental
Theorem of Calculus.

Sometimes we can’t evaluate the integral through the use of antiderivatives. For ex-
ample, δ(x)x might be an expression for which there is no antiderivative expressible
in terms of known functions. Then we have to go back to the idea of the definite
integral as the limit of finite sums and try to approximate it that way. Refinements
of this idea such as the trapezoidal rule or Simpson’s rule are often helpful in that
case.
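For instance, here is a sketch in C of Simpson's rule applied to ∫_0^L δ(x) x dx. The density δ(x) = 1 + x² is a hypothetical choice made only so that there is something concrete to integrate; n must be even.

#include <stdio.h>

/* hypothetical non-constant density, for illustration only */
double delta(double x) { return 1.0 + x * x; }

int main(void)
{
    double L = 1.0;
    int n = 100;                    /* number of subintervals; must be even */
    double h = L / n, sum = 0.0;
    int i;
    for (i = 0; i <= n; i++) {
        double x = i * h;
        /* Simpson weights 1, 4, 2, 4, ..., 2, 4, 1 */
        double w = (i == 0 || i == n) ? 1.0 : ((i % 2 == 1) ? 4.0 : 2.0);
        sum += w * delta(x) * x;
    }
    sum *= h / 3.0;
    printf("integral approx %f\n", sum);
    return 0;
}

Since the integrand x + x³ here is a cubic, Simpson's rule happens to give the exact value 3/4 up to rounding; for a general density it gives an approximation that improves rapidly as n grows.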

Let’s now consider a two-dimensional example.

Example 49 Suppose a mass M is uniformly distributed over a thin sheet of some


shape. By ignoring the thickness of the sheet, we may assume the mass is distributed
over a region in a plane, and after choosing coordinates x, y in that plane, we may
describe the region by the equations of its bounding curves. To be explicit, suppose
the mass is distributed over the region D contained between the line y = 1 on top
and the parabola y = x2 underneath. We want to find the center of mass (xcm , ycm )
of the distribution.

[Figure: the region D between the line y = 1 above and the parabola y = x² below; the curves meet at (−1, 1) and (1, 1).]

It is clear that xcm = 0. (Why?) Hence, we concentrate on finding ycm. We try to


proceed as in Example 48. The first conceptual step will be to visualize it as the
limit of finite sums.

Before beginning, note that the mass density per unit area will be δ = M/A where
A is the area of the region D, but to find the area, we need to evaluate an integral.
A = ∫_{−1}^{1} (1 − x²) dx = (x − x³/3)|_{−1}^{1} = 4/3.

Thus, δ = M/(4/3) = 3M/4.

Imagine next that the region D is dissected into small rectangles by a grid as in the
diagram. (There is a problem on the bottom edge where we won’t have rectangles,
but ignore that for the moment.)

Of course, to actually do this, we will have to number the rectangles some way, so
we need subscripts and we have to keep track of the bookkeeping. However, for the
moment let’s ignore all that. Consider a typical rectangle with sides ∆x, ∆y, and
area ∆A = ∆x∆y. The mass inside that rectangle will be ∆m = δ ∆A.


Now imagine that each small rectangle is replaced by a particle of the same mass
∆m placed at some point (x, y) inside the rectangle. (If the rectangles are all small
enough, it won’t matter much how the points (x, y) are chosen. For example, you
could always choose the upper right corner, or the lower left corner, or the center,
or anything else that takes your fancy.) The y-coordinate of the center of mass of
this system will be
ycm = (1/M) ∑_{all rectangles} y ∆m = (1/M) ∑_{all rectangles} y δ ∆A.

Now let the number of rectangles approach ∞ while the size of each rectangle
approaches zero. The sum approaches a limit which is denoted
∫∫_D y δ dA

and which is called a double integral. (Two integral signs are used to remind us of
the dimensionality of the domain.) The y-coordinate of the center of mass of the
continuous distribution is

ycm = (1/M) ∫∫_D y δ dA.
This completes the first part of the problem: setting up the integral. However,
we have no analogue as yet of the second step: evaluating the integral using an-
tiderivatives and the Fundamental Theorem of Calculus. Of course, we could always
approximate it by an appropriate sum, and the smaller we take the rectangles and
the more of them there are, the better the approximation will be.

The General Concept of a Double Integral If the mass density per unit area
were a function of position δ(x, y), then the mass in one of the small rectangles would
be approximately δ(x, y) ∆A where as above (x, y) is any point in that rectangle.
In the limit, the y-coordinate of the center of mass would be
ycm = (1/M) ∫∫_D y δ(x, y) dA.

More generally, let D be a subset of R2 and let f (x, y) denote a function defined
on D.

As above, dissect the region into small rectangles. For each such rectangle, let its
sides have dimensions ∆x and ∆y, so its area is ∆A = ∆x ∆y, and choose a point
(x, y) in the rectangle. Form the sum

∑_{all rectangles} f(x, y) ∆A

and consider what happens as the size of each rectangle approaches zero and the
number of rectangles approaches ∞. If the resulting sums approach a definite limit,
we call that limit the double integral of the function over the region D and we
denote it

∫∫_D f(x, y) dA.


There are some problems with this definition that should be discussed briefly. First
of all, in principle the set D could be quite arbitrary, but in that case the limit
may not exist and in any case it may be impossible to evaluate. Usually, in useful
situations, the region D is something quite reasonable. For example, it might be

bounded by a finite set of smooth curves as in Example 49. Secondly, the region D
had better be bounded; that is, it should be possible to enclose it in a sufficiently
large rectangle. If that were not the case, it would not be possible to dissect the
region into finitely many small rectangles. (What one does for unbounded regions
will be discussed later.)

Another issue was raised briefly above. On the boundary of the region D, the
dissection may not yield rectangles. It turns out that in all reasonable cases, this
does not matter. For suppose the dissection into small rectangles is obtained by
imposing a grid on an enclosing rectangle R containing the region D. Consider the
rectangles in the grid which overlap the region D but don’t lie entirely within it. If
we allow some or all of these rectangles in the sum, we run the risk of “overestimating” the sum, but if we omit them all, we run the risk of “underestimating” it. However,
in all reasonable cases, it won’t matter which we do, since the total area of these
questionable rectangles will be small compared to the area of the region D, and it
will approach zero in the limit. That is because, in the limit, we would obtain the
“area” of the boundary of D, and since the boundary would ordinarily be a finite
collection of smooth curves, that “area” would be zero. One way to deal with this
question of partially overlapping rectangles is as follows. The contribution from
such a rectangle would be f (x, y)∆A where (x, y) is some point in the rectangle. If
the point is in the region D, we include the term in the sum. On the other hand, we
could have chosen for that rectangle a point (x, y) not in the region D, so f (x, y)
might not even be defined. In that case, just redefine it to be zero, so the term
f (x, y) ∆A would be zero in any case. In essence, this amounts to defining a new
function f ∗ (x, y) which agrees with the function f inside the region D and is zero
outside the region, and considering sums for this function.

[Figure: a boundary rectangle whose chosen point (x, y) falls outside D, so that f*(x, y) = 0 there.]

Finally, we come to the nitty gritty of how we go about adding up the contribu-
tions from the individual small rectangles. Getting this straight is essential either

for developing a precise, rigorous theory of the double integral, or for actually ap-
proximating it numerically, say using a computer program. Here is how it is done.
Assume the region D is contained in a (large) rectangle described by two inequali-
ties a ≤ x ≤ b and c ≤ y ≤ d. We form a grid on this rectangle by choosing division
points
a = x0 < x1 < x2 < · · · < xi−1 < xi < · · · < xm = b
along the x axis, and

c = y0 < y1 < y2 < · · · < yj−1 < yj < · · · < yn = d

along the y-axis. Hence, each rectangle is characterized by a pair of indices (i, j)
where 1 ≤ i ≤ m, 1 ≤ j ≤ n. There are a total of mn rectangles. The division points
can be chosen in any convenient manner. They might be equally spaced, but they
need not be. Put ∆xi = xi − xi−1 , ∆yj = yj − yj−1 and ∆Aij = ∆xi ∆yj .

[Figure: the grid on the enclosing rectangle, with division points a = x0 < ⋯ < xm = b along the x-axis and c = y0 < ⋯ < yn = d along the y-axis; a typical rectangle has sides ∆xi and ∆yj.]

In the (i, j)-rectangle, we choose a point (x̃ij , ỹij ). As mentioned above, there are a
variety of ways we could choose such a point. The contribution from this rectangle
will be f (x̃ij , ỹij )∆Aij except that we will agree to set f (x̃ij , ỹij ) = 0 if the point is
not in the region D. Finally, we add up the contributions. There are clearly many
ways we could do this. For example, we could form

∑_{j=1}^{n} ∑_{i=1}^{m} f(x̃ij, ỹij) ∆Aij

which amounts to summing across “horizontal strips” first and then summing up
the contributions from these strips. Alternately, we could form

∑_{i=1}^{m} ∑_{j=1}^{n} f(x̃ij, ỹij) ∆Aij

which sums first along “vertical strips”. There are even more complicated ways of
adding up, which might be a good idea in special circumstances, but the bookkeep-
ing would be more complicated to describe.

To complete the process, it is necessary to take a limit, but this also requires some
care. If we just let the number mn of rectangles go to ∞, we could encounter
problems. For example, if n → ∞, but m stays fixed, we would expect some
difficulty. In that case, the area ∆Aij of individual rectangles would approach zero,
but their width would stay constant. The way around this is to insist that not only
should the number of rectangles go to ∞, but also the largest possible diagonal of
any rectangle in a dissection should approach zero.

[Figure: a ‘good’ dissection, in which both ∆x and ∆y (hence the diagonal ∆s) approach 0, and a ‘bad’ one, in which ∆y → 0 but ∆x does not.]

Perhaps it will make the process seem a bit more concrete if we give a (Pascal)
computer program to approximate the integral in Example 49
∫∫_D yδ dA

where D is the region described by x2 ≤ y ≤ 1, −1 ≤ x ≤ 1. As above, the variable


δ represents the density. The horizontal interval −1 ≤ x ≤ 1 is divided into m
equal subintervals, each of length Dx = 2/m. Similarly, the y-range is divided
into m equal subintervals, each of length Dy = 1/m. (Note that this means that
n = m.) Thus, the region D is covered by a collection of subrectangles, each of
area DA = Dx Dy, but some of these subrectangles overlap the bottom edge of
the region. The integrand is evaluated in the i, j-rectangle at the upper right hand
corner (x, y). The sums are done first along vertical strips, but in such a way that
subrectangles below the bottom edge of the region don’t contribute to the sum.
(Examine the program to see if the subrectangles overlapping the bottom edge
contribute to the sum.)
program integrate;
var
   m, i, j : integer;
   x, y, Dx, Dy, DA, delta, sum : real;
begin
   write('Give density');
   readln(delta);
   write('Give number of subintervals');
   readln(m);
   Dx := 2/m;                { width of each x-subinterval }
   Dy := 1/m;                { height of each y-subinterval }
   DA := Dx*Dy;              { area of a subrectangle }
   x := -1.0;
   sum := 0.0;
   for i := 1 to m do
   begin
      x := x + Dx;           { x at the right side of the current column }
      y := 0;
      for j := 1 to m do
      begin
         y := y + Dy;        { (x, y) is the upper right corner of the subrectangle }
         if x*x <= y then    { include the term only if (x, y) lies in D }
            sum := sum + delta*y*DA;
      end;
   end;
   writeln('Approximation = ', sum);
end.

Here are some results from an equivalent program (written in the programming
language C) where δ = 1.0.
m Approximation
10 0.898000
50 0.818863
100 0.809920
200 0.804981
1000 0.800981
As we shall see in the next section, the exact answer, determined by calculus, is
0.8. If you examine the program carefully, you will note that the approximating
sums overestimate the answer because the overlapping rectangles on the bottom
parabolic edge are more often included than excluded.

Other Notations There are a variety of other notations you may see for double
integrals. First, you may see

∫_D f(x, y) dA
with just one integral sign to stand for the summation process. In that case, you
have to look for other clues to the dimensionality of the domain of integration. You
may also see

∫∫_D f(x, y) dx dy.
The idea here is that each small element of area is a rectangle with sides dx and
dy, so the area is dA = dx dy. However, we shall see later that there may be

advantages to dissecting the region into things other than rectangles (as when using
polar coordinates, for example). Hence, it is better generally to stick with the ‘dA’
notation. In the next section, we shall introduce the notion of an iterated integral.
This is a related but logically distinct concept with a slightly different notation, and
the two concepts should not be confused.

What Is an Integral? If you ignore the subtlety involved in taking a limit, then
an integral of the form
∫_a^b f(x) dx   or   ∫∫_D f(x, y) dA

should be thought of basically as a sum. In the first case, you take an element of
length dx, weight it by the factor f (x) to get f (x) dx and then add up the results.
Similarly, in the second case, the element of area dA is weighted by the factor
f(x, y) to get f(x, y) dA and the results are added up to get the double integral.
These are the first of many examples we shall see of such sums. Of course, the
concepts have many important applications, but no one of these applications tells
you what the integral ‘really is’. Unfortunately, students are often misled by their
first introduction to integrals, where the sum ∫_a^b f(x) dx is interpreted as the area
between the x-axis and the graph of y = f (x). This is one of many interpretations of
such an integral (i.e., sum) and should not be assigned special significance. Similarly,
all the other integrals we shall discuss are (except for technicalities concerning
limits) sums; they are not areas or volumes or any special interpretation.

Exercises for 4.1.

1. Enter the program in the text into a computer, compile it and run it. Try
different values of m and see what happens. (If you prefer, you may write an
equivalent program in some other programming language.)
2. Modify the program in the text to approximate the double integral ∫∫_D (x + y) dA where D is the region above the line y = x and below the parabola y = √x. Run your program for various values of m.

3. (Optional) The program in the text is not very efficient. Try to improve it
so as to cut down the number of multiplications. Also, see if you can think
of better ways to approximate the integral. For example, instead of using the
value of the integrand at the upper right hand corner of each subrectangle,
you might use the average value of the integrand at the four corners of the
subrectangle. See if that gives greater accuracy for the same value of m.

4.2 Iterated Integrals

Let z = f (x, y) = f (r) denote a function of two variables defined on some region D
in R2 . In the previous section we discussed the definition of the double integral
∫∫_D f(x, y) dA.

We discuss next how you can use antiderivatives to calculate such a double integral.

Suppose first that the region D is bounded vertically by graphs of functions. More
precisely, suppose the region is bounded on the left and right by vertical lines
x = a, x = b and between those lines it is bounded below by the graph of a function
y = hbot (x) and above by the graph of another function y = htop (x).

[Figure: a region bounded vertically by graphs, below by y = hbot(x) and above by y = htop(x), for a ≤ x ≤ b.]

The region in Example 49 in the previous section was just such a region: a = −1, b = 1, hbot(x) = x², htop(x) = 1. Many, but not all, regions have such descriptions.

As in the previous section, imagine the region dissected into many small rectangles by imposing a mesh, and indicate an approximating sum for the double integral ∫∫_D f(x, y) dA schematically by

∑_{all rectangles} f(x, y) ∆x ∆y

where we have used ∆A = ∆x ∆y.




We pointed out in the previous section that there are two rather obvious ways to
add up the terms in the sum (as well as many not very obvious ways to do it.) The
way which suggests itself here is to add up along each vertical strip and then add
up the contributions from the individual strips:
∑_{all rectangles} f(x, y) ∆x ∆y = ∑_{all strips} ∑_{strip at x} f(x, y) ∆x ∆y.

For any given strip, we can assume we are using the same value of x (say, the value
of x at the right side of all the rectangles in that strip), and we can factor out the
common factor ∆x to obtain
∑_{all strips} ( ∑_{strip at x} f(x, y) ∆y ) ∆x.    (48)

Now let the number of rectangles go to ∞ while the size of each rectangle shrinks
to 0. Concentrating on what happens in a vertical strip, we see that
∑_{strip at x} f(x, y) ∆y −→ G(x) = ∫_{y=hbot(x)}^{y=htop(x)} f(x, y) dy.

Note that the integral on the right is a ‘partial integral’. That is, x is temporarily
kept constant and we integrate with respect to y. Note also that the limits depend
on x since, in the sum on the left, we only include rectangles that lie within the
region, i.e., those for which hbot(x) ≤ y ≤ htop(x). Thus the integral is altogether a
function G(x) of x. If we replace the internal sum in expression (48) by the partial
integral G(x) (which it is approximately equal to), we get

∑_{all strips} G(x) ∆x

which in the limit approaches


∫_a^b G(x) dx = ∫_a^b ( ∫_{y=hbot(x)}^{y=htop(x)} f(x, y) dy ) dx.

Thus, we obtain finally the formula

∫∫_D f(x, y) dA = ∫_a^b ( ∫_{y=hbot(x)}^{y=htop(x)} f(x, y) dy ) dx.    (49)
Example 50 (Example 49 revisited) We had
ycm = (1/M) ∫∫_D yδ dA

where δ = 3M/4. We use the above method to calculate the integral, where a = −1, b = 1, hbot(x) = x², and htop(x) = 1.
∫∫_D yδ dA = ∫_{−1}^{1} ( ∫_{y=x²}^{y=1} yδ dy ) dx
           = ∫_{−1}^{1} δ (y²/2)|_{y=x²}^{y=1} dx
           = ∫_{−1}^{1} δ (1/2 − x⁴/2) dx
           = (δ/2) ∫_{−1}^{1} (1 − x⁴) dx
           = (δ/2) (x − x⁵/5)|_{−1}^{1}
           = (δ/2) [(1 − 1/5) − (−1 + 1/5)]
           = 8δ/10.

(Note that if δ = 1, this gives 0.8 as suggested by numerical approximation in Section 1.) We may now determine the y-coordinate of the center of mass

ycm = (1/M)(8δ/10) = (1/M)(3M/4)(8/10) = 3/5.

There are a couple of remarks that should be made about formula (49). As noted
earlier, the double integral on the left is defined as the limit of sums obtained by
dissecting the region. The integral on the right is called an iterated integral and
it represents something different. It is an ‘integral of an integral’ where the inner
integral is a ‘partial integral’ depending on x. However, both integrations are with
respect to a single variable, so both can be evaluated by means of anti-derivatives
and the Fundamental Theorem, as we did in the example.

The above derivation of formula (49) is not a rigorous argument. It is not possible
to separate the process of taking the inner limit (counting vertically) from the

outer limit (counting horizontally). A correct proof is actually quite difficult, and
it is usually done in a course in real analysis. The correctness of the formula for
reasonable functions is called Fubini’s Theorem. You should remember the intuition
behind the argument, because it will help you understand how to turn a double
integral into an iterated integral.
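Although the proof is beyond us here, you can at least test Fubini's assertion numerically. The following C sketch sums f(x, y) = x² + y² over an m × m grid on the square 0 ≤ x ≤ 1, 0 ≤ y ≤ 1, once in each order of summation; both sums approach the exact value 2/3.

#include <stdio.h>

double f(double x, double y) { return x * x + y * y; }

int main(void)
{
    int m = 200, i, j;
    double h = 1.0 / m, s1 = 0.0, s2 = 0.0;
    /* horizontal strips first: inner sum over i, outer over j */
    for (j = 1; j <= m; j++)
        for (i = 1; i <= m; i++)
            s1 += f(i * h, j * h) * h * h;
    /* vertical strips first: inner sum over j, outer over i */
    for (i = 1; i <= m; i++)
        for (j = 1; j <= m; j++)
            s2 += f(i * h, j * h) * h * h;
    printf("one order: %f, other order: %f, exact: %f\n", s1, s2, 2.0 / 3.0);
    return 0;
}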

If we reverse the roles of x and y in the above analysis, we obtain a formula for
regions D which are bounded horizontally by graphs. Such a region is bounded below
by a line y = c, above by a line y = d, and between those lines it is bounded on
the left by the graph of a function x = gleft(y) and on the right by the graph of
another function x = gright (y). For such a region, the double integral is evaluated
by summing first along horizontal strips (y constant, x changing) and then summing
vertically the contributions from the strips (y changing). The analogous formula is
∫∫_D f(x, y) dA = ∫_c^d ( ∫_{x=gleft(y)}^{x=gright(y)} f(x, y) dx ) dy.    (50)

[Figure: a region bounded horizontally by graphs, between x = gleft(y) and x = gright(y), for c ≤ y ≤ d.]

Note that the integration is in the direction in which the variables increase: x from
left to right, and y from bottom to top.

Example 51 Let f(x, y) = x² + y², and let D be the region between the parabola x = y² on the left and the line x = y + 2 on the right. These curves intersect when

y² = y + 2,
or y² − y − 2 = 0,
or (y − 2)(y + 1) = 0,
so y = 2 or y = −1.

Hence, D also lies between y = −1 below and y = 2 above.



[Figure: the region D bounded horizontally by graphs, between x = y² on the left and x = y + 2 on the right, from y = −1 to y = 2.]

Thus

∫∫_D (x² + y²) dA = ∫_{−1}^{2} ( ∫_{x=y²}^{x=y+2} (x² + y²) dx ) dy

  = ∫_{−1}^{2} (x³/3 + xy²)|_{x=y²}^{x=y+2} dy

  = ∫_{−1}^{2} ( (y + 2)³/3 + (y + 2)y² − y⁶/3 − y⁴ ) dy

  = ( (y + 2)⁴/12 + y⁴/4 + 2y³/3 − y⁷/21 − y⁵/5 )|_{−1}^{2}

  = 256/12 + 16/4 + 16/3 − 128/21 − 32/5 − 1/12 − 1/4 + 2/3 − 1/21 − 1/5

  = 639/35 ≈ 18.26.

Note that the region is also bounded vertically by graphs, so in principle the integral
could be evaluated by the previous method using formula (49). There is a serious problem in trying this, however. The top graph is that of y = htop(x) = √x, but
the bottom graph is described by two different formulas depending on what x is. It
is a parabola to the left of the point (1, −1) and a line to the right of that point, so

hbot(x) = −√x for 0 ≤ x ≤ 1,  and  hbot(x) = x − 2 for 1 ≤ x ≤ 4.

[Figure: the same region viewed as bounded vertically by graphs: y = √x on top, and on the bottom y = −√x for 0 ≤ x ≤ 1 and y = x − 2 for 1 ≤ x ≤ 4.]

(The x-values at the relevant points are determined from the corresponding y-values
which were calculated above.) That means to do the calculation effectively using
vertical strips we must in effect decompose the region D into two subregions meeting
along the line x = 1 and treat each one separately. Then

∫∫_D (x² + y²) dA = ∫_0^1 ( ∫_{y=−√x}^{y=√x} (x² + y²) dy ) dx + ∫_1^4 ( ∫_{y=x−2}^{y=√x} (x² + y²) dy ) dx.

You should work out the two iterated integrals on the right just to check that their
sum gives the same answer as above.

As the previous example indicates, even if in principle you could treat a region
as either bounded vertically by graphs or as bounded horizontally by graphs, the
choice can make a big difference in how easy it is to calculate. In some cases, it
may be impossible to do the iterated integrals by antiderivatives in one order but
fairly easy in the other order.

Example 52 Consider the iterated integral

∫_0^1 ∫_x^1 (sin y)/y dy dx.

This is the iterated integral obtained from the double integral of the function
f (x, y) = sin y/y for the triangular region D contained between the vertical lines
x = 0, x = 1, the line y = x below, and the line y = 1 above.

[Figure: the triangular region D between x = 0 and x = 1, bounded below by y = x and above by y = 1.]

The indefinite integral (anti-derivative)

∫ (sin y)/y dy

cannot be expressed in terms of known elementary functions. (Try to integrate it


or look in an integral table if you don’t believe that.) Hence, the iterated integral
cannot be evaluated by anti-derivatives. However, the triangular region may be
described just as well by bounding it horizontally by graphs: it lies between y = 0
and y = 1 and for each y between


x = 0 and x = y. Thus, the double integral can be evaluated from the iterated
integral
∫_0^1 ∫_0^y (sin y)/y dx dy = ∫_0^1 ( x (sin y)/y )|_{x=0}^{x=y} dy
 = ∫_0^1 ((sin y)/y) y dy = ∫_0^1 sin y dy
 = −cos y|_0^1 = 1 − cos 1.

Note that in order to set up the iterated integral in the other order, we had to draw
a diagram and work directly from that. There are no algebraic rules which will
allow you to switch orders without using a diagram.

The simplest kind of region is a rectangle, described, say, by inequalities a ≤ x ≤ b, c ≤ y ≤ d. In this case the iterated integrals look like

∫_a^b ∫_c^d f(x, y) dy dx = ∫_c^d ∫_a^b f(x, y) dx dy

and it should not make much difference which you choose. You should immedi-
ately recognize a rectangular region from the fact that the internal limits are both
constants. In the general case they will depend on one of the variables. Since the
geometry can be somewhat complicated, it is easy to put constant limits where they
are not appropriate. Suppose for example we have a region bounded vertically by
graphs. The appropriate way to think of it is that we temporarily fix one value
of x (between the given x-limits), and for that x add up along a vertical strip (y
varying) of width dx. Hence, the limits for that strip will generally depend on x.
Unfortunately, students often oversimplify and take for limits the minimum and
maximum values of y for the region as a whole. If you do that, you have in effect
replaced the desired region by a minimal bounding rectangle. (See the diagram.)
You can recognize that you have done that when the limits tell you the region is a
rectangle, but you know it is not.

[Figure: left, the minimal bounding rectangle a ≤ x ≤ b, c ≤ y ≤ d; right, the actual region, whose y-values run from ymin to ymax.]

In the previous examples, we dealt with regions which were bounded by graphs,
either vertically or horizontally. There are many examples of regions which are
neither. For such a region, we employ the ‘divide and conquer’ strategy. That
is, we try to decompose the region into subregions that are bounded by graphs.
In so doing we use the following additivity rule. If D can be decomposed into
subsets D1 , D2 , . . . , Dk where at worst any two subsets Di and Dj share a common
boundary which is a smooth curve, then
∫∫_D f dA = ∫∫_{D1} f dA + ∫∫_{D2} f dA + ⋯ + ∫∫_{Dk} f dA.

(This rule certainly makes sense intuitively, but it is a little tricky to prove.)

Example 53 Consider

∫∫_D 1/(x² + y²) dA

for D the region between the circle x² + y² = 1 and the circle x² + y² = 4. D can be decomposed into 4 regions

D = D1 ∪ D2 ∪ D3 ∪ D4

as indicated in the diagram.

[Figure: the annulus decomposed into four regions: D1 on the left, D2 on top, D3 on the right, and D4 on the bottom.]

Each of these regions is bounded by graphs, and the double integrals on them may
be evaluated by iterated integrals. Thus, we have

∫∫_{D1} 1/(x² + y²) dA = ∫_{−2}^{−1} ∫_{−√(4−x²)}^{√(4−x²)} 1/(x² + y²) dy dx

∫∫_{D2} 1/(x² + y²) dA = ∫_{−1}^{1} ∫_{√(1−x²)}^{√(4−x²)} 1/(x² + y²) dy dx

∫∫_{D3} 1/(x² + y²) dA = ∫_{1}^{2} ∫_{−√(4−x²)}^{√(4−x²)} 1/(x² + y²) dy dx

∫∫_{D4} 1/(x² + y²) dA = ∫_{−1}^{1} ∫_{−√(4−x²)}^{−√(1−x²)} 1/(x² + y²) dy dx.

Because of the symmetric nature of the integrand 1/(x2 + y 2 ), the integrals for D1
and D3 are the same as are those for D2 and D4 . Hence, only two of the four
integrals need to be computed. However, these integrals are not easy to do. For

example, for the region D2,

∫_{√(1−x²)}^{√(4−x²)} 1/(x² + y²) dy = (1/x) tan⁻¹(y/x)|_{y=√(1−x²)}^{y=√(4−x²)}
 = (1/x) ( tan⁻¹(√(4−x²)/x) − tan⁻¹(√(1−x²)/x) ).

Hence,

∫∫_{D2} 1/(x² + y²) dA = ∫_{−1}^{1} (1/x) ( tan⁻¹(√(4−x²)/x) − tan⁻¹(√(1−x²)/x) ) dx.

I asked Mathematica to do this for me, but it could not give me an exact answer.
It claimed that a good numerical approximation was 1.16264.

We shall see in the next section how to do this problem using polar coordinates. It
is much easier that way.

Things to Integrate The choice of f(x, y) in

∫∫_D f(x, y) dA

depends strongly on what problem we want to solve. We saw that f (x, y) = yδ was
appropriate for finding the y-coordinate of the center of mass. You will later learn
about moments of inertia where the appropriate f might be f (x, y) = (x2 + y 2 )δ.
In electrostatics, f (x, y) might represent the charge density, and the integral would
be the total charge in the domain D. For the special case, f (x, y) = 1, the integral
∫∫_D 1 dA

is just the area of the region D. (The integral just amounts to adding up the
contributions from small elements of area ‘dA’, so the sum is just the total area.)
If the region D is bounded vertically by graphs y = hbot (x) and y = htop (x), and
extends horizontally from x = a to x = b, we get
A = ∫∫_D 1 dA = ∫_a^b ∫_{hbot(x)}^{htop(x)} dy dx
 = ∫_a^b y|_{hbot(x)}^{htop(x)} dx
 = ∫_a^b (htop(x) − hbot(x)) dx.

You may recognize the last expression as the appropriate formula for the area between two curves that you learned in your single variable integral calculus course.

There is one fairly simple geometric interpretation which always makes sense. f(x, y) is the height at the point (x, y) of the graph of the function. We can think of f(x, y) dA as the volume of a column of height f(x, y) and base area dA. Hence, the double integral ∫∫_D f dA represents the volume under the graph of the function. However, just as in the case of area under a graph in single variable calculus, this volume must be considered a signed quantity. Contributions from the part of the graph under the x, y-plane are considered negative, and the integral is the algebraic sum of the contributions from above and below the x, y-plane.

Exercises for 4.2.
1. Evaluate the following iterated integrals with constant limits: (a) ∫_1^2 ∫_0^3 (5x − 3y) dy dx (b) ∫_0^2 ∫_0^2 (xy − 7x²y² + 2xy³ − y⁵) dy dx (c) ∫_0^π ∫_0^{π/2} (cos x sin y) dy dx

2. Show, for the answers to Problem 1, that the order of integration is unimpor-
tant, i.e.
∫_a^b ∫_c^d f(x, y) dy dx = ∫_c^d ∫_a^b f(x, y) dx dy.
3. Evaluate the following iterated integrals: (a) ∫_0^2 ∫_0^x (2x − 5y) dy dx. (b) ∫_0^1 ∫_{x²}^{x} (2y − x²) dy dx. (c) ∫_0^1 ∫_0^{x²} e^{y/x} dy dx. (d) ∫_0^π ∫_0^y cos(y²) dx dy.

4. For each of the following, sketch the region of integration, then use your drawing to reverse the order of integration, then evaluate the new iterated integral. (a) ∫_0^2 ∫_0^{x²} 4xy dy dx. (b) ∫_0^1 ∫_{3x}^{4−x²} 1 dy dx. Hint: it isn’t easier in the other order in this case. (c) ∫_0^π ∫_y^π (sin x)/x dx dy. (d) ∫_0^1 ∫_{√y}^1 e^{x³} dx dy.

5. Use double integration to find the area between each of the following sets of
curves.
(a) x = y 2 , y = x2 .
(b) y = x, y = 2x2 − 3.
6. Use double integration to find the volume under the surface z = f (x, y) and
between the given curves.
(a) z = x + y, x = 0, x = 2, y = 0, y = 2.
(b) z = 2 − x + 3y, x = −1, x = 1, y = 0, y = 1.
(c) z = x² − 3 cos y, x = 0, x = 2, y = π/2, y = π.
(d) z = e^x + e^y, x = 0, x = 1, y = 0, y = x.
(e) z = 4 − x² − y², y = x, y = x².

7. Use appropriate double integrals to determine the volume of each of the fol-
lowing solids. You may take advantage of symmetry.
(a) A sphere of radius a.
(b) A cylinder with base of radius a, and height h.
(c) An ellipsoid with semimajor axes a, b, and c.

4.3 Double Integrals in Polar Coordinates

Some double integrals become much simpler if one uses polar coordinates. If there is circular symmetry of some sort present in the underlying problem, you should always consider polar coordinates as a possible alternative.

Graphing in Polar Coordinates You should learn to recognize certain curves when expressed in polar coordinates. For example, as mentioned in Chapter I, Section 2, the equation r = a describes a circle of radius a centered at the origin, and the equation θ = α describes a ray starting at the origin and extending to ∞. The ray makes angle α with the positive x-axis. Note that, depending on the value of α, the ray could be in any of the four quadrants.

Here are some more complicated graphs.

Example 54 The equation r = 2a cos θ describes a circle of radius a centered at the point (a, 0). To see this, transform to rectangular coordinates, using first cos θ = x/r, and then r² = x² + y², so

r = 2a(x/r)   or   r² = 2ax   or   x² + y² = 2ax.

Now transpose and complete the square to obtain

x² − 2ax + a² + y² = a²   or   (x − a)² + y² = a².

Note that the equation r = 2a cos θ has no locus for π/2 < θ < 3π/2 since in that range cos θ < 0, and we are not allowing r to be negative. If we were to allow r to be negative, we would retrace the circle. (Can you see why?) This example shows the importance of thinking carefully about what the symbols mean. When we get to integration in polar coordinates you will see that an unthinking use of formulas in a case like this can lead either to double the correct answer or zero because some part of a figure is considered twice, possibly with the same sign or possibly with opposite signs.

Example 55 The equation r = a(1 + cos θ) has as locus a curve called a cardioid.
See the picture. Perhaps you can see the reason for the name. Here, the appropriate
range of θ would be 0 ≤ θ ≤ 2π.

[Figures: the cardioid of Example 55 and the ellipse of Example 56.]

Example 56 The equation r = 2/(3 + cos θ) has as locus an ellipse with one focus at
the origin. You can verify this by changing to rectangular coordinates as above.

r = 2/(3 + x/r)
3r + x = 2   (by cross multiplying)
3r = 2 − x
9r² = (2 − x)²   (square; this might add some points)
9(x² + y²) = 4 − 4x + x²
8x² + 4x + 9y² = 4
8(x² + x/2 + 1/16) + 9y² = 9/2   (completing the square)
8(x + 1/4)² + 9y² = 9/2
(x + 1/4)²/(9/16) + y²/(1/2) = 1.

The locus of the last equation is an ellipse centered at the point (−1/4, 0), with semi-major axis 3/4 and semi-minor axis 1/√2. That the origin is one focus of the
ellipse requires going into the details of the analytic geometry of ellipses which we
won’t explore here, but it is true. Note that in the step where we squared both
sides, we might have added to the locus points where r > 0 but 2 − x < 0. You
should convince yourself that no point on the ellipse has this property.
Integration in Polar Coordinates To evaluate the integral ∫∫_D f(x, y) dA in polar coordinates, we must use a dissection of the region which is appropriate for polar
coordinates. In rectangular coordinates, the dissection into rectangles is obtained
from a network of vertical lines, x = constant, and horizontal lines, y = constant.
In polar coordinates the corresponding network would consist of concentric circles,
r = constant, and rays from the origin, θ = constant.

As indicated in the diagram, a typical element of area produced by such a network


is bounded by two nearby circular arcs separated radially by ∆r and two nearby
rays separated angularly by ∆θ. Such an element of area is called a polar rectangle.
To determine its area we argue as follows. The area of a circular wedge of radius r subtending an angle ∆θ at the center of the circle is (1/2) r² ∆θ. (If you are not familiar with this formula, try to reason it out. The point is that the area of the wedge is to the area of the circle as ∆θ, the subtended angle, is to 2π.) Suppose now that the polar rectangle has inner radius rin and outer radius rout. Then the area of the polar rectangle is

∆A = (1/2)(rout² − rin²)∆θ = (1/2)(rout + rin)(rout − rin)∆θ = ((rout + rin)/2)(∆r)∆θ.

Now put r = (rout + rin)/2 (the average radius) to get

∆A = r ∆r ∆θ = ∆r (r ∆θ).    (51)

This formula is an exact equality, but note that if we use any other value of r falling within the range rin ≤ r ≤ rout, it will only make a slight difference in the answer if the polar rectangle is small enough.

Given a dissection of the region D into polar rectangles, we can form as before the
sum

∑_{all polar rectangles} f(x, y) ∆A

where for each polar rectangle, (x, y) is a point somewhere inside the rectangle. Again, it doesn’t matter much which point is chosen, so we can assume r = √(x² + y²) has the same value as the r in the formula ∆A = r∆r∆θ. If we
now put x = r cos θ, y = r sin θ, the sum takes the form

∑_{all polar rectangles} f(r cos θ, r sin θ) r ∆r ∆θ.    (52)

It is fairly clear (and can even be proved with some effort) that the limit of this sum
as the number of polar rectangles approaches ∞ (and as the size of each shrinks to zero) is the double integral ∫∫_D f dA.

Suppose now that the region is bounded radially by graphs. By this we mean that it
lies between two rays θ = α and θ = β (with α < β), and for each θ it lies between
two polar graphs: r = hin (θ) on the inside and r = hout (θ) on the outside.

[Figure: a region bounded radially by graphs, between r = hin(θ) and r = hout(θ), for α ≤ θ ≤ β.]

For such a region, we can do the sum by adding first the contributions from those
polar rectangles within a given thin wedge—say it is at position θ and has angular
width ∆θ—and then adding up the contributions from the wedges.
∑_{all polar rectangles} f(r cos θ, r sin θ) r ∆r ∆θ = ∑_{all wedges} ∑_{in a wedge} f(r cos θ, r sin θ) r ∆r ∆θ
 = ∑_{all wedges} ( ∑_{in a wedge} f(r cos θ, r sin θ) r ∆r ) ∆θ.    (53)

In the limit, the expression in parentheses

∑_{in a wedge} f(r cos θ, r sin θ) r ∆r −→ ∫_{r=hin(θ)}^{r=hout(θ)} f(r cos θ, r sin θ) r dr = G(θ).

As indicated, this is a partial integral depending on θ. Putting this back in (53), we obtain the approximation

∑_{all wedges} G(θ) ∆θ

which in the limit approaches

∫_α^β G(θ) dθ.

Thus, we get finally

∫∫_D f dA = ∫_α^β ∫_{r=hin(θ)}^{r=hout(θ)} f(r cos θ, r sin θ) r dr dθ.    (54)

Note that the derivation of formula (54) suggests the symbolic rule

dA = r dr dθ = dr (r dθ).

r is to be thought of as a correction factor for changing the (false) ‘area’ dr dθ to the correct area dA. One way to think of this is that the polar rectangle is almost a true rectangle of dimensions dr by r dθ.
Example 57 We find ∫∫_D 1/(x² + y²) dA for D the region between the circles x² + y² = 1 and x² + y² = 4. Here,
f(x, y) = 1/(x² + y²) = 1/r²,
and D lies between the inner graph r = 1 and the outer graph r = 2. It also lies
between two rays, but they happen to be the same ray described in two different
ways: θ = α = 0 and θ = β = 2π. Hence, the double integral is calculated by
∫∫_D f dA = ∫_0^{2π} ∫_{r=1}^{r=2} (1/r²) r dr dθ
 = ∫_0^{2π} ∫_{r=1}^{r=2} (1/r) dr dθ
 = ∫_0^{2π} (ln r)|_1^2 dθ
 = ∫_0^{2π} ln 2 dθ = ln 2 · θ|_0^{2π}
 = 2π ln 2.

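As with the rectangular grid program in Section 4.1, the polar sum (52) can be computed directly. Here is a sketch in C dissecting the annulus of Example 57 into n × n polar rectangles; the sum should approach 2π ln 2 ≈ 4.3552.

#include <stdio.h>
#include <math.h>

int main(void)
{
    const double PI = acos(-1.0);
    int n = 400, i, j;
    double dr = (2.0 - 1.0) / n;          /* radial step between r = 1 and r = 2 */
    double dth = 2.0 * PI / n;            /* angular step */
    double sum = 0.0;
    for (i = 1; i <= n; i++) {
        double r = 1.0 + (i - 0.5) * dr;  /* average radius of the polar rectangle */
        for (j = 1; j <= n; j++)
            sum += (1.0 / (r * r)) * r * dr * dth;  /* f times dA, with dA = r dr dtheta */
    }
    printf("sum = %f, exact = %f\n", sum, 2.0 * PI * log(2.0));
    return 0;
}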

Example 58 We find the volume under the cone z = √(x² + y²) and over the circular disk inside r = 2 cos θ. That is given by the double integral ∫∫_D √(x² + y²) dA. In this case, f(x, y) = r, and we can describe the region as bounded by the rays θ = −π/2 and θ = π/2 and, for each θ, as lying between the inner graph r = 0 (i.e., the origin) and the outer graph r = 2 cos θ. The integral is
∫_{−π/2}^{π/2} ∫_0^{2 cos θ} r · r dr dθ = ∫_{−π/2}^{π/2} (r³/3)|_0^{2 cos θ} dθ
 = ∫_{−π/2}^{π/2} (8/3) cos³ θ dθ
 = (8/3)(4/3) = 32/9.

(The last integration was done by Mathematica.)
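(To check the last step by hand: write cos³ θ = cos θ (1 − sin² θ), so an antiderivative is sin θ − sin³ θ/3, and evaluating from −π/2 to π/2 gives (1 − 1/3) − (−1 + 1/3) = 4/3.)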

Example 59 We find the center of mass of the region inside the cardioid r =
a(1 + cos θ) assuming constant density δ. We may as well assume the density is

δ = 1. (Can you see why?) Then the mass is the same as the area, which is
∫∫_D 1 dA = ∫_0^{2π} ∫_0^{a(1+cos θ)} 1 · r dr dθ
 = ∫_0^{2π} (r²/2)|_0^{a(1+cos θ)} dθ
 = (a²/2) ∫_0^{2π} (1 + 2 cos θ + cos² θ) dθ
 = (a²/2)(2π + 0 + π) = 3πa²/2.

[Figure: the cardioid r = a(1 + cos θ), traced as θ runs from 0 to 2π.]

We used here the formulas

∫_0^{2π} cos θ dθ = 0
∫_0^{2π} cos² θ dθ = π

both of which can be derived without integrating anything very complicated. (Do
you know the appropriate tricks?)
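(In case you don’t: the first integral vanishes because cos θ is integrated over a full period. For the second, the double angle formula cos² θ = (1 + cos 2θ)/2 gives

∫_0^{2π} cos² θ dθ = ∫_0^{2π} (1 + cos 2θ)/2 dθ = (1/2)(2π) + 0 = π.)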

The y-coordinate of the center of mass is 0 (by symmetry), and the x-coordinate is
(1/A) ∫∫_D x dA = (1/A) ∫_0^{2π} ∫_0^{a(1+cos θ)} r cos θ · r dr dθ
 = (1/A) ∫_0^{2π} cos θ (r³/3)|_0^{a(1+cos θ)} dθ
 = (a³/3A) ∫_0^{2π} (cos θ + 3 cos² θ + 3 cos³ θ + cos⁴ θ) dθ
 = (a³/3A)(0 + 3π + 0 + 3π/4) = 15πa³/(12A) = 15πa³/(18πa²) = 5a/6.

This used the additional rule

∫_0^{2π} cos⁴ θ dθ = 3π/4,
for which I know no simple derivation. You can hack it out, or preferably use a
table or Mathematica.

It is always true that the center of mass of a distribution of constant density does
not depend on the density. (δ appears as a factor in two places which cancel.) In
that case, the center of mass is a purely geometric property of the region, and it is
called the centroid. Occasionally, in polar coordinates one deals with a region which
is bounded angularly by graphs. In that case, you would integrate first—the inner
integral—with respect to θ and then with respect to r. The student is encouraged
to work out what the corresponding iterated integral would look like.

Exercises for 4.3.

In the following problems, remember that by our conventions r ≥ 0. Thus you


should ignore any points where the equation would yield r < 0.

1. Sketch and identify the following graphs:


(a) r = 2.

(b) θ = 2 .
(c) r = 1 − 2 sin θ.
(d) r = 3 cos 2θ.
(e) r2 = −2 cos 2θ.
(f) r = 3/(2 + sin θ).
2. Sketch the following regions and find their areas by double integration:
(a) the disc inside the circle r = 3.
(b) the disc inside the circle r = 3 sin θ.
(c) the area outside the circle r = 1 and inside the circle r = 2 cos θ.
(d) the area within the cardioid r = 1 + cos θ and the circle r = 1.
(e) the area within the lemniscate r2 = −3 sin 2θ.
3. Find the volume under the given surface and above the region bounded by
the given curve(s):
(a) z = x2 + y 2 , r = 2.
(b) z = x2 + y 2 , r = −2 cos θ.

(c) z = x2 + y 2 , r = 4 sin θ.

4. Calculate the following double integrals by switching to polar coordinates: (a) ∫_{−1}^{1} ∫_{−√(1−y²)}^{√(1−y²)} (x² + y²) dx dy (b) ∫_0^2 ∫_0^{√(4−x²)} 1/√(x² + y²) dy dx (c) ∫_0^1 ∫_x^{√(2−x²)} e^{x²+y²} dy dx (d) ∫_0^1 ∫_0^x x dy dx. (This is a bit silly to do in polar coordinates, but try it for the practice. Use the fact that x = 1 implies r = sec θ.)

5. Find the volume of the following objects.


(a) The region inside the sphere x2 + y 2 + z 2 = 4a2 and above the plane z = a.
(b) An ‘ice cream cone’ bounded above by the sphere of radius a centered at the origin and below by the right circular cone z = √(x² + y²).
(c) The region below the paraboloid z = 4 − x2 − y 2 and above the x, y-plane.
(d) The solid torus formed by revolving the vertical disk within (x−b)2 +z 2 =
a2 about the z-axis. (Assume a < b.)

4.4 Triple Integrals

The theory of integration in space proceeds much as the theory in the plane. Sup-
pose E is a bounded subset of R3 , and suppose w = f (x, y, z) = f (r) describes a
function defined on E. (‘E is bounded’ means that E is contained in some rectan-
gular box, a ≤ x ≤ b, c ≤ y ≤ d, e ≤ z ≤ f.) Usually, E will be a quite reasonable
looking set; in particular its boundary will consist of a finite set of smooth surfaces.
To define the integral, we proceed by analogy with the 2-dimensional case. First
partition the region E into a collection of small rectangular boxes or cells through
a lattice of closely spaced parallel planes. We employ three such families of planes,
each perpendicular to one of the coordinate axes. A typical cell will be a box with
dimensions ∆x, ∆y, and ∆z and volume

∆V = ∆x∆y∆z.

Choose a point (x, y, z) in such a cell, and form f (x, y, z) ∆V , which will be the
contribution from that cell. Now form the sum

∑_{all cells} f(x, y, z) ∆V

and let the number of cells go to ∞ while the size of each goes to zero. If the region
and function are reasonably smooth, the sums will approach a definite limit which
is called a triple integral and which is denoted
∫∫∫_E f(x, y, z) dV   or   ∫_E f(x, y, z) dV.

Iterated Integrals in R3 There are many ways in which one could add up the
terms in the above sum. We start with one that is appropriate if E is bounded by
graphs in the z-direction. That is, E lies above the graph of a function z = zbot (x, y)
and below the graph of another function z = ztop (x, y). In addition, we assume that
E is bounded on its sides by a ‘cylindrical surface’ consisting of perpendiculars to
a closed curve γ in the x, y-plane. The region D in the x, y-plane bounded by γ is
then the projection of E in the x, y-plane. The ‘cylinder’ is only cylindrical in a very
general sense, and it may even happen that part or all of it consists of curves rather than surfaces. For example, consider the region between the cone z = √(x² + y²)
(below) and the plane z = 1 (above). The ‘cylinder’ in this case is just a circle.

[Figure: a solid E bounded below by z = zbot(x, y) and above by z = ztop(x, y); a column of cells stands over an element of area ∆A in the projected region D.]

Suppose the region is dissected into cells as described above. There will be a corre-
sponding dissection of the projected region D into small rectangles. Consider one
such rectangle with area ∆A positioned at (x, y) in D, and consider the contribution
from the cells forming the column lying over that rectangle. In the limit, this will
approach an integral
( ∫_{z=zbot(x,y)}^{z=ztop(x,y)} f(x, y, z) dz ) ∆A = G(x, y) ∆A.

Here, we used the fact that all the cells in the column share a common base with
area ∆A, so the volume of any such cell is ∆V = ∆z ∆A where ∆z is its height.
G(x, y) is the partial integral obtained by integrating with respect to z, keeping x, y
constant, and then evaluating at limits depending on x, y. Putting this in the sum,
we have schematically
∑_{rectangles in D} ∑_{cells in column} f(x, y, z) ∆V → ∑_{rectangles in D} G(x, y) ∆A → ∫∫_D G(x, y) dA.

Recalling what G(x, y) is, we get the following formula for the triple integral.
∫∫∫_E f(x, y, z) dV = ∫∫_D ( ∫_{z=zbot(x,y)}^{z=ztop(x,y)} f(x, y, z) dz ) dA

This in effect reduces the calculation of triple integrals to that of double integrals.
The double integral can be done in any order that is appropriate. (It may even be
done in polar coordinates!)
Example 60 We shall find the centroid (center of mass for constant density) of the solid region E in the first octant which lies beneath the plane x + y + z = 1. The solid E has four faces. It is an example of a tetrahedron. If we take the density to be 1, the mass will equal the volume. A tetrahedron is a special case of a pyramid, and you should recall from high school that the volume of a pyramid is 1/3 its height times the area of its base. In this case, we get M = V = (1/3) × (1) × (1/2) = 1/6.
By symmetry it is clear that the three coordinates of the centroid are equal, so we
need only find

xcm = (1/V) ∫∫∫_E x dV.

To evaluate the triple integral, note that E is z-bounded by graphs since it lies between z = zbot(x, y) = 0 and z = ztop(x, y) = 1 − x − y. The projection of E in
the x, y-plane is the triangular region D in the first quadrant, bounded by the line
x + y = 1. Hence,
∫∫∫_E x dV = ∫∫_D ∫_{z=0}^{z=1−x−y} x dz dA
 = ∫∫_D x z|_{z=0}^{z=1−x−y} dA
 = ∫∫_D x(1 − x − y) dA = ∫∫_D (x − x² − xy) dA.
The problem has now been reduced to a double integral. It is best to treat this as
a separate problem, redrawing if necessary a diagram of the region D. We can view
D as bounded in the y-direction by the graphs y = 0 and y = 1 − x with 0 ≤ x ≤ 1.
Thus,
∫∫_D (x − x² − xy) dA = ∫_0^1 ∫_{y=0}^{y=1−x} (x − x² − xy) dy dx
 = ∫_0^1 (xy − x²y − xy²/2)|_{y=0}^{y=1−x} dx
 = ∫_0^1 ( x(1 − x) − x²(1 − x) − x(1 − x)²/2 ) dx
 = ∫_0^1 (x³/2 − x² + x/2) dx
 = (x⁴/8 − x³/3 + x²/4)|_0^1 = 1/24.
24

It follows that

xcm = (1/V) ∫∫∫_E x dV = (1/24)/(1/6) = 1/4.

Hence the centroid is at (1/4, 1/4, 1/4).
Note that if we had suppressed the evaluations temporarily, the triple integral above
would appear as the following triply iterated integral
∫∫∫_E x dV = ∫∫_D ∫_{z=0}^{z=1−x−y} x dz dA
 = ∫_0^1 ∫_{y=0}^{y=1−x} ∫_{z=0}^{z=1−x−y} x dz dy dx.

You should try to visualize the dissection of the solid region associated with each
step in the iterated integral.
∫_0^1 ∫_{y=0}^{y=1−x} ∫_{z=0}^{z=1−x−y} x dz dy dx

(the inner z-integral sums a column; the middle y-integral, a slab; the outer x-integral, the solid).

First, we sum in the z-direction to include all cells in a column. Next, we sum in
the y-direction to include all cells in a row of columns to form a slab. Finally, we
sum in the x-direction to put these slabs together to form the entire solid region.

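As a sketch, here is the column/slab/solid bookkeeping turned into a C program, approximating both V = ∫∫∫_E 1 dV and ∫∫∫_E x dV for the tetrahedron by cubical cells of side 1/n; the ratio should approach 1/4.

#include <stdio.h>

int main(void)
{
    int n = 100, i, j, k;
    double h = 1.0 / n, vol = 0.0, xint = 0.0;
    for (i = 1; i <= n; i++) {               /* slabs in the x-direction */
        double x = (i - 0.5) * h;
        for (j = 1; j <= n; j++) {           /* rows of columns in the y-direction */
            double y = (j - 0.5) * h;
            for (k = 1; k <= n; k++) {       /* cells in a column in the z-direction */
                double z = (k - 0.5) * h;
                if (x + y + z <= 1.0) {      /* keep only cells inside the tetrahedron */
                    vol  += h * h * h;
                    xint += x * h * h * h;
                }
            }
        }
    }
    printf("V approx %f, integral of x approx %f, x_cm approx %f\n",
           vol, xint, xint / vol);
    return 0;
}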

The above example was done in the order dz dy dx. There are in fact six possible
orders of integration in R3 . Which is appropriate depends on how the solid region
and its projections in the coordinate planes are bounded by graphs.

Example 61 We find the volume in the first octant of the solid E bounded by
the graphs y 2 + z 2 = 1 and x = z 2 . The former surface is part of a right circular
cylinder perpendicular to the y, z-plane. The latter is a cylinder (in the general
sense) perpendicular to the x, z-plane. E is z-bounded by graphs, but it is not
easy to visualize its projection in the x, y-plane. In this case, it would be better to
project instead in the y, z-plane or the x, z-plane. Let’s project in the y, z-plane, so
E will be viewed as bounded in the x-direction by the graph x = 0 (behind) and
x = z 2 (in front). The projection D of E in the y, z-plane is the quarter disc inside
the circle y 2 + z 2 = 1.
V = ∫∫∫_E 1 dV = ∫∫_D ∫_{x=0}^{x=z²} 1 dx dA
 = ∫∫_D x|_{x=0}^{x=z²} dA
 = ∫∫_D z² dA.

We now calculate the double integral in the y, z-plane by viewing D as bounded in the z-direction by the graphs z = 0 and z = √(1 − y²) with 0 ≤ y ≤ 1.

[Figures: the solid E bounded by the cylinders y² + z² = 1 and x = z²; its projection D in the y, z-plane, the quarter disc between z = 0 and z = √(1 − y²) for 0 ≤ y ≤ 1.]

∫∫_D z² dA = ∫_0^1 ∫_{z=0}^{z=√(1−y²)} z² dz dy
 = ∫_0^1 (z³/3)|_{z=0}^{z=√(1−y²)} dy
 = (1/3) ∫_0^1 (1 − y²)^{3/2} dy = π/16.

The last step was done by Mathematica.
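(If you would rather not rely on Mathematica here: substitute y = sin u, dy = cos u du, so that (1/3) ∫_0^1 (1 − y²)^{3/2} dy = (1/3) ∫_0^{π/2} cos⁴ u du = (1/3)(3π/16) = π/16, since by symmetry ∫_0^{π/2} cos⁴ u du is one quarter of ∫_0^{2π} cos⁴ u du = 3π/4, the rule quoted in Section 4.3.)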



You should try to do the same triple integral by viewing it as bounded in the y-direction by the graphs y = 0 and y = √(1 − z²) and projecting in the x, z-plane.

Sometimes it may be appropriate to do the double integral in polar coordinates.


Example 62 We shall find ∫∫∫_E z dV where E is the solid cone contained between the cone z = (H/R)√(x² + y²) and the plane z = H. This is a cone of height H and radius R. Its projection in the x, y-plane is the region D inside the circle x² + y² = R².
∫∫∫_E z dV = ∫∫_D ∫_{z=(H/R)√(x²+y²)}^{z=H} z dz dA
 = ∫∫_D (z²/2)|_{z=(H/R)√(x²+y²)}^{z=H} dA
 = (1/2) ∫∫_D ( H² − (H/R)²(x² + y²) ) dA.

[Figure: the solid cone E of height H and radius R; its projection D is the disc r ≤ R, 0 ≤ θ ≤ 2π.]

We could of course do the double integral in rectangular coordinates. (The region D in the x, y-plane is bounded in the y-direction by y = −√(R² − x²) and y = √(R² − x²) with −R ≤ x ≤ R.) You should try to do it that way. It makes more sense, however, to use polar coordinates.
(1/2) ∫∫_D ( H² − (H/R)²(x² + y²) ) dA = (1/2) ∫_0^{2π} ∫_{r=0}^{r=R} ( H² − (H/R)²r² ) r dr dθ
 = (1/2) ∫_0^{2π} ( H²r²/2 − (H/R)²r⁴/4 )|_{r=0}^{r=R} dθ
 = (1/2) ∫_0^{2π} ( H²R²/2 − (H²/R²)R⁴/4 ) dθ
 = (1/2) ∫_0^{2π} (H²R²/4) dθ = (1/2)(2π)(H²R²/4) = πH²R²/4.

Note that you can do Example 61 this way if you are willing to introduce polar
coordinates in the y, z-plane.

Exercises for 4.4.

1. Calculate the triple integral ∫∫∫_E f(x, y, z) dV for each of the following:
(a) f(x, y, z) = 2x + 3y − z², over the rectangular box with 0 ≤ x ≤ 4, 0 ≤ y ≤ 2, and 0 ≤ z ≤ 1.
(b) f (x, y, z) = yz cos z, over the cube with opposite vertices at the origin and
(π, π, π).
(c) f (x, y, z) = xyz, over the region inside the cone z 2 = x2 + y 2 and between
the planes z = 0 and z = 9.
(d) f (x, y, z) = 2z − 4y 2 , over the region between the cylinders z = x2 and
z = x3 and between the planes y = −1 and y = 2.

2. Consider the tetrahedron bounded by the plane 2x + 3y + z = 6 and the


coordinate planes. There are six possible orders of integration for the triple
integral representing its volume. Write out iterated integrals for each of these
and evaluate two of them.

3. Sketch the solid bounded by the given surfaces and find its volume by triple
integration. If you see an easier way to find the same volume, use that to
check your answer.
(a) 3x − 2y + 4z = −2, x = 0, y = 0, z = 0.
(b) y = x2 + z 2 , y = 4.
(c) x2 + y 2 + z 2 = 1, x2 + y 2 + z 2 = 4.
(d) z = 2x2 + 3y 2 , z = 5 − 3x2 − 2y 2 . (Note that the two surfaces intersect in
a circle of radius 1.)

4. Use a triple integral to find the volume of the solid in the first octant bounded
by the cylinders x2 + y 2 = 1 and x2 + z 2 = 1. Do this both in the order
‘dz dy dx’ and in the order ‘dx dz dy’. (The latter order involves a complication
you might not expect.)

5. Find the centroid for each of the following objects. (Take δ = 1.)
(a) A solid hemisphere of radius a.
(b) A solid right cone of radius a and height h.
(c) The solid bounded by the paraboloid z = x2 + y 2 and the plane z = h > 0.

4.5 Cylindrical Coordinates

In the previous section, we saw that we could switch to polar coordinates in the x, y-plane when doing a triple integral. This really amounts to introducing a new coordinate system in space called cylindrical coordinates. The cylindrical coordinates of a point in space with rectangular coordinates (x, y, z) are (r, θ, z) where (r, θ) are the polar coordinates of the projection (x, y) of the point in the x, y-plane. Just as with polar coordinates, we insist that r ≥ 0, and usually θ is restricted to some range of size 2π. Moreover, the same formulas hold:

x = r cos θ,  y = r sin θ,

and

r = √(x² + y²),  tan θ = y/x  (if x ≠ 0).
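In code, the conversions each way are one-liners. Here is a sketch in C; note that the standard library's atan2 resolves the quadrant ambiguity that tan θ = y/x alone leaves open, returning θ in the range (−π, π].

#include <math.h>

/* rectangular (x, y) -> cylindrical (r, theta); z is the same in both systems */
void rect_to_cyl(double x, double y, double *r, double *theta)
{
    *r = sqrt(x * x + y * y);
    *theta = atan2(y, x);   /* correct quadrant, unlike atan(y/x) */
}

/* cylindrical (r, theta) -> rectangular (x, y) */
void cyl_to_rect(double r, double theta, double *x, double *y)
{
    *x = r * cos(theta);
    *y = r * sin(theta);
}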

The geometric interpretations of r and θ in space are a bit different. r is the


perpendicular distance of the point to the z axis (as well as being the distance of
its projection to the origin). θ is the angle that the plane determined by the point
and the z-axis makes with the positive x, z-plane.

You should learn to recognize certain important surfaces when described in cylin-
drical coordinates.

Example 63 r = a describes an (infinite) cylinder of radius a centered on the


z-axis. If we let a vary, we obtain an infinite family of concentric cylinders. We can
treat the case a = 0 (the z-axis) as a degenerate cylinder of radius 0.

[Figures: the cylinder r = a; the half plane θ = α.]

Example 64 θ = α describes a half plane making angle α with the positive x, z-


plane. Note that in this half plane, r can assume any non-negative value and z can
assume any value.
158 CHAPTER 4. MULTIPLE INTEGRALS

Example 65 z = mr describes an (infinite) cone centered on the z-axis with vertex


at the origin. For a fixed value of θ, we obtain a ray in this cone which starts at the
origin and extends to ∞. This ray makes angle cot⁻¹ m with the z-axis, and if we
let θ vary, the ray rotates around the z-axis generating the cone. Note also that if
m > 0, the angle with the z-axis is acute and the cone lies above the x, y-plane. If
m < 0, the angle is obtuse, and the cone lies below the x, y-plane. The case m = 0
yields the x, y-plane (z = 0) which may be considered a special ‘cone’.

Note that in rectangular coordinates, z = mr becomes z = m√(x² + y²).

[Figures: the cone z = mr; the sphere r² + z² = a².]

Example 66 r2 + z 2 = a2 describes a sphere of radius a centered at the origin.


The easiest way to see this is to put r² = x² + y², whence the equation becomes x² + y² + z² = a². The top hemisphere of the sphere would be described by z = √(a² − r²) and the bottom hemisphere by z = −√(a² − r²).

Integrals in Cylindrical Coordinates Suppose E is a solid region in R³, and it is bounded in the z-direction by graphs. If we use cylindrical coordinates directly (rather than switching to polar coordinates after the z integration), the triple integral would take the form

∫∫∫_E f(x, y, z) dV = ∫∫_D ∫_{z=zbot(r,θ)}^{z=ztop(r,θ)} f(r cos θ, r sin θ, z) dz dA

where the upper and lower graphs are expressed in cylindrical coordinates and the
double integral over the region D should be done in polar coordinates. Symbolically,
we have for dA = r dr dθ, so we may also write

dV = dA dz = r dr dθ dz = r dz dr dθ

for the element of volume in cylindrical coordinates. Implicit in this is a dissection


of the region into cylindrical cells as indicated in the diagram.

[Figure: a cylindrical cell with dimensions dr, r dθ, and dz, so dV = r dr dθ dz.]

Example 67 We calculate ∫∫∫_E x² dV for E the solid region contained within the cylinder x² + y² = a² and between the planes z = 0 and z = h. Here f(x, y, z) = x² = r² cos² θ, and E is bounded in the z-direction between z = 0 and z = h. The projection D of E in the x, y-plane is the disc inside the circle x² + y² = a² (i.e., r = a). Thus,

∫∫∫_E x² dV = ∫∫_D ∫_{z=0}^{z=h} r² cos² θ dz dA
 = ∫∫_D r² cos² θ z|_{z=0}^{z=h} dA
 = h ∫∫_D r² cos² θ dA
 = h ∫_0^{2π} ∫_{r=0}^{r=a} r² cos² θ · r dr dθ
 = h ∫_0^{2π} ∫_{r=0}^{r=a} r³ cos² θ dr dθ
 = h ∫_0^{2π} (r⁴/4)|_{r=0}^{r=a} cos² θ dθ
 = h (a⁴/4) ∫_0^{2π} cos² θ dθ = h (a⁴/4) π = πa⁴h/4.

Example 68 We shall find ∫∫∫_E (x² + y²) dV for E a solid sphere of radius a centered at the origin. Here f(x, y, z) = x² + y² = r². Moreover, the surface of the sphere has equation r² + z² = a² in cylindrical coordinates, so the solid sphere may be viewed as lying between the graphs z = −√(a² − r²) below and z = √(a² − r²) above. The projection D in the x, y-plane is the disc inside the circle r = a. Hence, the

integral is

∫∫∫_E (x² + y²) dV = ∫∫_D ∫_{z=−√(a²−r²)}^{z=√(a²−r²)} r² dz dA
 = ∫∫_D r² z|_{z=−√(a²−r²)}^{z=√(a²−r²)} dA
 = ∫∫_D r² · 2√(a² − r²) dA
 = ∫_0^{2π} ∫_{r=0}^{r=a} r² · 2√(a² − r²) r dr dθ
 = ∫_0^{2π} ∫_{r=0}^{r=a} 2r³ √(a² − r²) dr dθ = 2π (4a⁵/15) = 8πa⁵/15.

(The last step was done by Mathematica.)

There is no need to first reduce to a double integral. We could have written out the
triply iterated integral directly.
∫_0^{2π} ∫_{r=0}^{r=a} ∫_{z=−√(a²−r²)}^{z=√(a²−r²)} r² · r dz dr dθ

(the inner z-integral sums a column; the middle r-integral, a wedge; the outer θ-integral, the solid).

The order of integration suggests a dissection of the sphere. The first integration
with respect to z (r, θ fixed) adds up the contributions from cells in a column.
The second integration with respect to r (θ fixed) adds up the contributions from
columns forming a wedge. The final integration with respect to θ adds up the
contributions from the wedges to form the solid sphere.


It is sometimes worthwhile doing the summation in some other order. For example,

consider the order

∫_{z=−a}^{z=a} ∫_{r=0}^{r=√(a²−z²)} ∫_0^{2π} r² · r dθ dr dz

(the inner θ-integral sums a ring; the middle r-integral, a slab; the outer z-integral, the solid).

The first integration with respect to θ (r, z fixed) adds up the contribution from
all cells in a ring at height z above the x, y-plane and distance r from the z-axis.
The next integration with respect to r (z fixed) adds up the contributions from all rings at height z which form a circular slab of radius √(a² − z²). Finally, the last
integration adds up the contributions from all the slabs as z ranges from z = −a to
z = a.


You should try the integration in this order to see if it is easier. You should also
try to visualize the dissection for the order dr, dz, dθ.

Example 69 We shall find the volume of the solid region E inside both the sphere
x2 + y 2 + z 2 = 4 and the cylinder r = 2 cos θ. Recall that the second equation
describes a circle in the x, y-plane of radius 1 and centered at (1, 0). However, in
space, it describes the cylinder perpendicular to that circle. The appropriate range

for θ is −π/2 ≤ θ ≤ π/2. The volume is


∫∫∫_E 1 dV = ∫_{−π/2}^{π/2} ∫_{r=0}^{r=2 cos θ} ∫_{z=−√(4−r²)}^{z=√(4−r²)} 1 dz r dr dθ
           = ∫_{−π/2}^{π/2} ∫_{r=0}^{r=2 cos θ} [z]_{z=−√(4−r²)}^{z=√(4−r²)} r dr dθ
           = ∫_{−π/2}^{π/2} ∫_{r=0}^{r=2 cos θ} 2√(4 − r²) r dr dθ
           = ∫_{−π/2}^{π/2} [ −(4 − r²)^{3/2} / (3/2) ]_{r=0}^{r=2 cos θ} dθ
           = (2/3) ∫_{−π/2}^{π/2} (8 − 8|sin θ|³) dθ.

Here we used 1 − cos² θ = sin² θ and the fact that √(sin² θ) = |sin θ|, not sin θ. The
absolute value would cause a problem in integrating over the range −π/2 ≤ θ ≤ π/2,
so we get around it by integrating over 0 ≤ θ ≤ π/2 and doubling the result. We get
2 · (16/3) ∫_0^{π/2} (1 − sin³ θ) dθ = (32/3) (π/2 − 2/3) = 16(3π − 4)/9.
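Because of the absolute value, a purely symbolic check is awkward here, so the
following sketch (our own addition) verifies the volume numerically with scipy;
the exact constant 16(3π − 4)/9 ≈ 9.644 comes from the computation above.

    import numpy as np
    from scipy.integrate import dblquad

    # the z-integration has already been done; integrate 2*sqrt(4 - r^2)*r over
    # 0 <= r <= 2 cos(theta), 0 <= theta <= pi/2, then double by symmetry
    val, _ = dblquad(lambda r, th: 2*np.sqrt(np.maximum(4 - r**2, 0.0))*r,
                     0, np.pi/2,
                     lambda th: 0, lambda th: 2*np.cos(th))
    V = 2*val
    print(V, 16*(3*np.pi - 4)/9)   # both approximately 9.644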

[Diagrams: the off-center cylinder r = 2 cos θ intersecting the sphere z = ±√(4 − r²),
and the projected region in the plane swept out by 0 ≤ r ≤ 2 cos θ, −π/2 ≤ θ ≤ π/2.]

Moments of Inertia In Example 68, we calculated ∫∫∫_E r² dV where r is the
perpendicular distance to the z-axis. This is a special case of the concept of moment
of inertia. In physics, the moment of inertia of a finite set of points about an axis
L is defined to be

I_L = Σ_i m_i r_i²

where m_i is the mass of the ith particle and r_i is its perpendicular distance to the
axis. The generalization for a mass distribution of density δ is

I_L = ∫∫∫_E r² dm = ∫∫∫_E r² δ dV.

Here r is the distance of a point inside the solid region E to the axis L. We often
choose the coordinate system so the z-axis lies along L. The density δ = δ(x, y, z)
can in general be variable. Moments of inertia are very important in the study of
rotational motion of rigid bodies.

Example 68, (revisited) For a mass of constant density δ distributed over a solid
sphere of radius a, the moment of inertia about a diameter (which we can take to
be the z-axis) is

I_z = ∫∫∫_E r² δ dV = δ ∫∫∫_E r² dV = 8πa⁵δ/15.

However, the total mass in the sphere will be

M = V δ = (4πa³/3) δ,

so the moment of inertia may be rewritten

I_z = (2/5) M a².

Exercises for 4.5.

1. Find cylindrical coordinates for the point with the given rectangular coordi-
nates:
(a) P (0, 0, 5). (b) P (−1, 3, 2). (c) P (3, 2, 0). (d) P (−1, 1, −1). (e) P (0, 4, −7).
2. Identify the following graphs from their equations in cylindrical coordinates:
(a) r = 3.
(b) θ = π/2.
(c) r = 2 cos θ.
(d) sin θ + cos θ = 0.
(e) z = 5 + r².
3. Convert the following equations to cylindrical coordinates:
(a) x² + y² + z² = 4.
(b) z = x² + y².
(c) 3x + 2y − z = 0.
(d) y = 4.

4. A sphere of radius a centered at the origin has a cylindrical hole of radius
a/2, centered on the z-axis, drilled in it. Describe the solid region left by
inequalities involving cylindrical coordinates.
5. Find the following volumes by using triple integration in cylindrical coordi-
nates. The region
(a) inside both the sphere x² + y² + z² = 9 and the cylinder x² + y² = 1,
(b) inside a sphere of radius a,
(c) between the paraboloid z = 3 − x² − y² and the plane z = 0,
(d) above the paraboloid z = r² and under the plane z = x,
(e) inside both the paraboloids z = 9 − x² − y² and z = r²,
(f) inside the right circular cone z = (a/2)r and under the plane z = a.
6. Find the centroid of a solid hemisphere of radius a using cylindrical coordi-
nates.
7. Find the volume above the x, y-plane, inside the cylinder r = 2 sin θ and under
the plane z = y.
8. Find the volume bounded by the planes z = y, z = 0, z = x, z = −x, y = 1.
First find the volume in rectangular coordinates. Then find the volume in
cylindrical coordinates. (This is just for practice. Ordinarily it would be silly
to use cylindrical coordinates for such a problem.)

4.6 Spherical Coordinates

Cylindrical coordinates are one way to generalize polar coordinates to space, but
there is another way that is more useful in problems exhibiting spherical symmetry.
We suppose as usual that a rectangular coordinate system has been chosen. The
spherical coordinates (ρ, φ, θ) of a point P in space are defined as follows. ρ is the
distance |OP| of the point to the origin. It is always non-negative, and it should
be distinguished from the cylindrical coordinate r, which is the distance from the
z-axis. The azimuthal angle φ is the angle between OP and the positive z-axis. φ
is always assumed to lie between 0 and π. Finally, the longitudinal angle θ is the
same as the cylindrical coordinate θ. θ is assumed to range over some interval of
size 2π, e.g., 0 ≤ θ < 2π. Note the reason for the range chosen for φ. Fix ρ and θ. If
φ = 0, the point is on the positive z-axis, and as φ increases, the point swings down
toward the negative z-axis, but it stays in the half plane determined by that value
of θ. For φ = π, the point is on the negative z-axis, but if we allow φ to increase
further, the point swings into the opposite half plane with longitudinal angle θ + π.
Such points can be obtained just as well by swinging down from the positive z-axis
in the opposite half plane determined by θ + π.
[Diagram: a point P with spherical coordinates ρ and φ.]

The following relationships hold between spherical, cylindrical, and rectangular
coordinates. Refer to the diagram:

r = ρ sin φ
z = ρ cos φ

so

x = ρ sin φ cos θ
y = ρ sin φ sin θ
z = ρ cos φ

and

ρ = √(r² + z²) = √(x² + y² + z²)
tan φ = r/z if z ≠ 0.

One may think of the spherical coordinates (ρ, φ) as polar coordinates in the half
plane determined by fixing θ. However, because of the restrictions on φ, this is not
quite the same as polar coordinates in the x, y-plane.
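The conversion formulas are easy to code. The following Python functions are a
minimal sketch added to the text (the function names and the use of acos and
atan2 are our own choices); acos(z/ρ) is equivalent to tan φ = r/z but avoids the
special case z = 0.

    import math

    def spherical_from_rect(x, y, z):
        # rho >= 0; phi in [0, pi]; theta in (-pi, pi]
        rho = math.sqrt(x*x + y*y + z*z)
        phi = math.acos(z/rho) if rho > 0 else 0.0  # phi is undefined at the origin
        theta = math.atan2(y, x)                    # theta is undefined on the z-axis
        return rho, phi, theta

    def rect_from_spherical(rho, phi, theta):
        return (rho*math.sin(phi)*math.cos(theta),
                rho*math.sin(phi)*math.sin(theta),
                rho*math.cos(phi))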

You should learn to recognize certain important surfaces when described in spherical
coordinates.

Example 70 ρ = a describes a sphere of radius a centered at the origin.

Example 71 φ = α describes a cone making angle α with the positive z-axis. The
cone can lie above or below the x, y-plane, and φ = π/2 describes the x, y-plane.

Example 72 θ = β describes a half plane starting from the z-axis as before.


[Diagrams: the sphere ρ = a, the cone φ = α, and the half plane θ = β.]

Example 73 ρ = 2a cos φ describes a sphere of radius a centered at (0, 0, a). You
can see this by fixing attention on the half plane determined by fixing θ. In that
half plane, the locus is the semi-circle with the given radius and center. If we then
let θ vary, the effect is to rotate the semi-circle about the z-axis and generate the
sphere.

Geometry on the Surface of a Sphere If we fix ρ = a, we obtain a sphere of
radius a. Then (φ, θ) specify the position of a point on that sphere.

For θ = constant, we obtain the semi-circle which is the intersection of the half
plane for that θ with the sphere. That semi-circle is called a meridian of longitude. This
is exactly the concept of longitude used to measure position on the surface of the
Earth, except that we use radians instead of degrees. Earth longitude is usually
measured in degrees east or west of the Greenwich Meridian. That corresponds in
our case to the positive and negative directions from the 0-meridian.

For φ = constant, we obtain the circle which is the intersection of the cone for
that φ with the sphere. Such circles are called circles of latitude. φ is related
to the notion of latitude on the surface of the Earth, except that the latter is an

angle λ measured in degrees north or south of the equatorial plane. The spherical
coordinate φ is sometimes called co-latitude, and we have φ = π/2 − λ, if both are
measured in radians. The unique point with φ = 0 is called the north pole, that
with φ = π is called the south pole, and at the poles θ is not well defined.

[Diagram: the sphere with north and south poles, a circle of latitude λ (co-latitude φ),
and the meridians of longitude 0 and θ.]

Integrals in Spherical Coordinates We want to evaluate triple integrals
∫∫∫_E f(x, y, z) dV using spherical coordinates. The most common order of
integration for spherical coordinates is—from the inside out—dρ, dφ, dθ. As
before, this is associated with a
certain dissection of the solid region E into spherical cells. To see what these cells
look like, we describe the dissection of the region in the reverse order. First, assume
α ≤ θ ≤ β. In this range, decompose the solid into wedges formed by a family of
half planes emanating from the z-axis. Let ∆θ be the angle subtended at the z-axis
for the wedge at longitudinal angle θ.

[Diagram: a wedge subtending angle ∆θ at the z-axis, and a spike within it subtending angle ∆φ.]

In that wedge, assume φ1 (θ) ≤ φ ≤ φ2 (θ), where the extreme values of φ depend
in general on θ. Decompose the wedge into spikes formed by a family of conical
surfaces for different (constant) values of φ. Let ∆φ be the angle subtended at the
origin by the spike at azimuthal angle φ.

Finally, assume for that spike that ρ1 (φ, θ) ≤ ρ ≤ ρ2 (φ, θ) where the extreme values
of ρ depend generally on φ and θ. Decompose the spike into spherical cells by a
family of concentric spherical surfaces. Let ∆ρ be the radial extension of the cell at
radius ρ.

[Diagram: a spherical cell of radial thickness ∆ρ with edges ρ ∆φ and r ∆θ = ρ sin φ ∆θ.]

Note that the ‘base’ of this spherical cell is a spherical ‘rectangle’ on the sphere of
radius ρ. Two of its sides lie along meridians of longitude, and the length of each
of these sides is ρ∆φ. The other two sides are arcs of circles of latitude. The top circle of
latitude has radius r = ρ sin φ, and if everything is small enough the bottom circle
has only a slightly larger radius. The arc which is the top side of the spherical
rectangle subtends angle ∆θ at the center of the circle of latitude, so its length is
r∆θ = ρ sin φ ∆θ. It is not hard to see from this that the area of the spherical
rectangle is approximately ρ∆φ · ρ sin φ ∆θ = ρ² sin φ ∆φ∆θ. Multiplying by ∆ρ,
we have the following approximate formula for the volume of a spherical cell

∆V = ρ² sin φ ∆ρ ∆φ ∆θ.

(This can be made an exact formula if we use appropriate values of ρ and φ inside
the cell instead of the values at one corner.) The iterated integral is
∫∫∫_E f(x, y, z) dV
  = ∫_α^β ∫_{φ₁(θ)}^{φ₂(θ)} ∫_{ρ₁(φ,θ)}^{ρ₂(φ,θ)} f(ρ sin φ cos θ, ρ sin φ sin θ, ρ cos φ) ρ² sin φ dρ dφ dθ
(inner integral: spike; middle: wedge; outer: solid)

Symbolically, we may write

dV = ρ² sin φ dρ dφ dθ,

where ρ² sin φ is the correction factor.

Example 74 We shall evaluate ∫∫∫_E (x² + y²) dV for E a solid sphere of radius
a centered at the origin. (This was done in the previous section in cylindrical
coordinates.) Here f(x, y, z) = x² + y² = r² = ρ² sin² φ. To generate the entire
sphere, we let 0 ≤ θ ≤ 2π. For each θ, to generate a wedge, we let 0 ≤ φ ≤ π.
Finally, for each φ, θ, to generate a spike, we let 0 ≤ ρ ≤ a. The integral is
∫∫∫_E r² dV = ∫_0^{2π} ∫_{φ=0}^{φ=π} ∫_{ρ=0}^{ρ=a} ρ² sin² φ · ρ² sin φ dρ dφ dθ
            = ∫_0^{2π} ∫_{φ=0}^{φ=π} ∫_{ρ=0}^{ρ=a} ρ⁴ sin³ φ dρ dφ dθ
            = (a⁵/5) ∫_0^{2π} ∫_{φ=0}^{φ=π} sin³ φ dφ dθ
            = (a⁵/5) (4/3) ∫_0^{2π} dθ
            = (a⁵/5) (4/3) 2π = 8πa⁵/15.
Note that it was not at all apparent whether this problem would be easier to solve
in cylindrical or in spherical coordinates. The fact that we were integrating r2
suggested the former but the fact that the region is a sphere suggested the latter.
It turned out that the integral was a trifle easier in spherical coordinates, but there
wasn’t much difference.

Example 75 We shall find the volume bounded by the cone z = √3 r and the sphere
r² + (z − 1)² = 1. Recall from Example 73 that the sphere may be described in
spherical coordinates by ρ = 2 cos φ. The cone makes angle α with the positive z-axis
where tan α = r/z = 1/√3. Hence, the cone is described in spherical coordinates
by φ = α = π/6. To generate the solid, let 0 ≤ θ ≤ 2π, for each θ, let 0 ≤ φ ≤ π/6,

and for each φ, θ, let 0 ≤ ρ ≤ 2 cos φ. Thus, the volume is given by


∫∫∫_E 1 dV = ∫_0^{2π} ∫_0^{π/6} ∫_0^{2 cos φ} ρ² sin φ dρ dφ dθ
           = (1/3) ∫_0^{2π} ∫_0^{π/6} 8 cos³ φ sin φ dφ dθ
           = (8/3) ∫_0^{2π} [ −cos⁴ φ / 4 ]_0^{π/6} dθ
           = (2/3) ∫_0^{2π} (1 − (√3/2)⁴) dθ
           = (2/3) (7/16) 2π = 7π/12.

There are other possible orders of integration in spherical coordinates, and you
should try to visualize some of them. For example, suppose the region E is a solid
sphere centered at the origin. The order dθ, dφ, dρ is associated with the following
dissection. The sphere is first dissected into spherical shells of thickness dρ. Then
each shell is dissected into ‘rings’ at different latitudes subtending angle dφ at the
center of the sphere. Finally, each ring is dissected into spherical cells as before
each subtending angle dθ at the z-axis.

Other Notation Unfortunately there is no universally accepted notation for spher-
ical coordinates. First of all, ρ = |r| is just the magnitude of the position vector
r = OP, and another common notation for |r| is r, which we have reserved for
the cylindrical coordinate. Secondly, many texts reverse the meanings of φ and θ.
the cylindrical coordinate. Secondly, many texts reverse the meanings of φ and θ.
Indeed, almost all physics books and most mathematics books—except for calcu-
lus books—use θ to denote the azimuthal angle and φ for the longitudinal angle.
Because of this inconsistency, you should be sure you check the meanings of the
symbols whenever you encounter these coordinate systems. In any case, you should
concentrate on the geometric and physical meaning of the concepts rather than the
symbols used to represent them.

Exercises for 4.6.

1. Find spherical coordinates for the points with the given rectangular coordi-
nates:
(a) P (0, 0, −2).
(b) P (−1, 0, −1).
(c) P (2, −3, 5).
(d) P (−2, 0, 1).
2. Identify the following graphs given in spherical coordinates:

(a) φ = π/2.
(b) ρ = 2.
(c) ρ = 2 sin φ.
(d) θ = π.
(e) ρ sin φ = 1.
3. Write equations in spherical coordinates for each of the following:
(a) x² + y² + z² = 25.
(b) x + y + z = 1.
(c) z² = x² + y².
(d) z = 4 − x² − y².
4. A sphere of radius a centered at the origin has a cylindrical hole of radius a/2,
centered on the z-axis drilled in it. Describe the solid region that remains by
inequalities in spherical coordinates.
5. Two points on the surface of a sphere of radius R have co-latitude and lon-
gitude (φ1 , θ1 ) and (φ2 , θ2 ) respectively. Show that the great circle distance
between them is Rα where

cos α = sin φ₁ sin φ₂ cos(θ₁ − θ₂) + cos φ₁ cos φ₂.

(The great circle determined by two points on a sphere is the circle of intersec-
tion of the sphere and the plane determined by the two points and the center
of the sphere.) Use this fact to determine the great circle distance from your
home town to London, given that London is at 51.5 degrees north latitude
(co-latitude 38.5 degrees) and 0 degrees longitude. (If you don’t know the
latitude and longitude of your home town, look it up.)
6. Find the mass and centroid of a solid hemisphere of radius a if the density
varies proportionally to the distance from the center, i.e. δ = kρ.
7. Find the volume of the intersection of the cylinder r = 1 with the sphere
ρ = 2. Find the mass and center of mass if δ = ρ2 .
8. Find the mass and center of mass of the ‘ice cream cone’ between the cone
φ = π/6 and the sphere ρ = 2, if δ = 2ρ.
9. Find the moment of inertia of the right circular cylinder given by 0 ≤ r ≤ 1
and 0 ≤ z ≤ √3 about the z-axis assuming constant density δ = 1. Use
spherical coordinates even though it would be more natural to use cylindrical
coordinates. Hint: You will have to divide the region into two parts.
10. Find the volume left in a sphere of radius a centered at the origin if a cylin-
drical hole of radius a/2 centered on the z-axis is drilled out. Do the problem
in both cylindrical and spherical coordinates.

4.7 Two Applications

We illustrate the use of integration in spherical coordinates by giving two historically
important applications.

Olbers’ Paradox Olbers’ Paradox is the 19th century observation that, in an
infinite Newtonian universe in which stars are uniformly distributed and which has
always existed, the sky would not be dark at night.

The argument for the paradox goes as follows. Assume stars are uniformly dis-
tributed through space. (Although stars are discrete objects, the model assumes
that on a large scale, we may assume a uniform continuous mass distribution as
an approximation. Today, we would replace ‘star’ by ‘galaxy’ as the basic unit.)
Choose a coordinate system centered on our solar system. Since light intensity
follows the inverse square law, the stars in a cell dV at distance ρ would produce
intensity proportional to dV/ρ². Choosing our units properly, we would obtain for
all the stars in a large sphere of radius R the light intensity
I = ∫∫∫_E (1/ρ²) dV = ∫_0^{2π} ∫_0^π ∫_0^R (1/ρ²) ρ² sin φ dρ dφ dθ
  = ∫_0^{2π} ∫_0^π ∫_0^R sin φ dρ dφ dθ
  = R ∫_0^{2π} ∫_0^π sin φ dφ dθ
  = R ∫_0^{2π} [ −cos φ ]_0^π dθ
  = R ∫_0^{2π} 2 dθ = 4πR.
This is unbounded as R → ∞, whence the conclusion that the sky would not be
dark at night. Of course, there are lots of objections to this simple model, but the
paradox persists even if one attempts to be more realistic. The resolution of the
paradox had to await modern cosmology with its model of a universe expanding
from an initial ‘big bang’. We won’t go into this here, referring you instead to any
good book on cosmology.

The usual derivation of the paradox does not explicitly mention spherical coordi-
nates. The argument is that the intensity due to all the stars in a thin shell at
distance ρ will be proportional to the product of the area of the shell, 4πρ2 , with
1/ρ2 ; hence it will be proportional to 4π. In other words, the contribution from
each spherical shell is the same and independent of the radius of the shell. If the
contributions from all shells in the universe are added up, the result is infinite. You
should convince yourself that this is just the same argument in other language.
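A short sympy sketch (our own illustration) makes the linear growth explicit:

    import sympy as sp

    rho, phi, theta = sp.symbols('rho phi theta', nonnegative=True)
    R = sp.symbols('R', positive=True)

    # (1/rho^2) * rho^2 sin(phi) = sin(phi)
    I = sp.integrate(sp.sin(phi),
                     (rho, 0, R), (phi, 0, sp.pi), (theta, 0, 2*sp.pi))
    print(I)                       # 4*pi*R
    print(sp.limit(I, R, sp.oo))   # oo: the total intensity diverges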

The Gravitational Attraction of a Solid Sphere Newton discovered his laws
of motion and the inverse square law for gravitational attraction about 1665, when

he was quite young, but he waited until 1686 to start his famous Principia in which
these laws are expounded. Some scholars think the reason is that he was stumped
by the problem of showing that the gravitational attraction of a solid sphere on
a particle outside the sphere is the same as if the entire mass of the sphere were
concentrated at the center. (However, according to the Encyclopedia Britannica,
most authorities reject this explanation, thinking instead that he did not have an
accurate enough value for the radius of the Earth.) We shall show how to solve that
problem using spherical coordinates.

Let a mass M be distributed over a solid sphere of radius a in such a way that the
density δ = δ(ρ) depends only on the distance ρ to the center of the sphere. Let a
test particle of unit mass be located at a point P at distance R from its center, and
suppose R > a, i.e., P is outside the sphere. Choose the coordinate system so that
the origin is at the center of the sphere and so that the z-axis passes through P. We
can resolve the force F exerted on the test particle into components ⟨Fx, Fy, Fz⟩,
but it is clear by symmetry considerations that Fx = Fy = 0, so F = Fz k is directed
toward the origin. Thus we need only find Fz. Let dV be a small element of volume
located at a point inside the sphere with spherical coordinates (ρ, φ, θ). The mass
inside dV will be dm = δ dV, and according to the law of gravitation, the force on
the test particle will have magnitude G dm/s², where s is the distance from P to
dV. This force will be directed toward dV, but its z-component will be given by

dF_z = −(G δ dV / s²) cos η        (56)

where η is the angle between the vector from P to dV and the z-axis. (See the
diagram.)

We calculate the total z-component by integrating this over the solid sphere E:

F_z = −G ∫∫∫_E (δ/s²) cos η dV.        (57)

We shall compute the integral by integrating in spherical coordinates in the order
dθ, dφ, dρ:

F_z = −G ∫_0^a ∫_0^π ∫_0^{2π} (δ/s²) cos η ρ² sin φ dθ dφ dρ.
The first integration with respect to θ is easy since nothing in the integrand depends
on θ. It just yields a factor 2π which may be moved in front of the integral signs:

F_z = −G(2π) ∫_0^a ∫_0^π (δ/s²) cos η ρ² sin φ dφ dρ
    = −2πG ∫_0^a ρ² δ(ρ) ∫_0^π (1/s²) cos η sin φ dφ dρ.

(Since ρ and δ(ρ) do not depend on φ we have moved them out of the way.) The first
integration gives us the contribution from the mass in a ring situated at co-latitude
φ and distance ρ from the origin. The next integration with respect to φ is the

hardest part of the computation. It will give us the contribution from the mass in a
spherical shell of radius ρ. It is easier if we change the variable of integration from
φ to s. By the law of cosines, we have

s² = ρ² + R² − 2ρR cos φ

whence

2s ds = −2ρR (−sin φ dφ)

or

sin φ dφ = s ds / (ρR).
Also, at φ = 0 (the north pole), we have s = R − ρ, and at φ = π (the south pole),
we have s = R + ρ. (Look at the diagram.) Hence,
F_z = −2πG ∫_0^a ρ² δ(ρ) ∫_{R−ρ}^{R+ρ} (1/s²) cos η · (s/(ρR)) ds dρ
    = −2πG ∫_0^a (ρ δ(ρ)/R) ∫_{R−ρ}^{R+ρ} (1/s) cos η ds dρ.

To proceed, we need to express cos η in terms of s. Refer to the diagram. By the
law of cosines, we have

ρ² = s² + R² − 2Rs cos η

so

cos η = (s² + R² − ρ²)/(2Rs)

and

(1/s) cos η = (1/s) · (s² + R² − ρ²)/(2Rs) = (1/(2R)) (1 + (R² − ρ²)/s²).
Hence,

F_z = −2πG (1/(2R²)) ∫_0^a ρ δ(ρ) ∫_{R−ρ}^{R+ρ} (1 + (R² − ρ²)/s²) ds dρ
    = −πG (1/R²) ∫_0^a ρ δ(ρ) [ s − (R² − ρ²)/s ]_{s=R−ρ}^{s=R+ρ} dρ
    = −πG (1/R²) ∫_0^a ρ δ(ρ) ( R + ρ − (R² − ρ²)/(R + ρ) − (R − ρ) + (R² − ρ²)/(R − ρ) ) dρ
    = −πG (1/R²) ∫_0^a ρ δ(ρ) (4ρ) dρ = −(G/R²) ∫_0^a 4πρ² δ(ρ) dρ.
The integral on the right is just the total mass M in the sphere. You can see this
by setting up the integral for the mass in spherical coordinates and carrying out

the integrations with respect to θ and φ as above. However, since a sphere of radius
ρ has surface area 4πρ², it is clear that the mass in a thin shell of radius ρ and
thickness dρ is 4πρ² δ(ρ) dρ. We get finally the desired result

F_z = −GM/R²

as claimed.
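The whole derivation can be checked numerically. The sketch below is our own
construction, using scipy and the same law-of-cosines substitutions as the text,
with the test values G = δ = a = 1 and R = 2 chosen arbitrarily; it compares the
triple integral with −GM/R².

    import numpy as np
    from scipy.integrate import dblquad

    G, a, R, delta = 1.0, 1.0, 2.0, 1.0   # test values with R > a

    def integrand(phi, rho):
        # s and cos(eta) from the two applications of the law of cosines
        s = np.sqrt(rho**2 + R**2 - 2*rho*R*np.cos(phi))
        cos_eta = (s**2 + R**2 - rho**2) / (2*R*s)
        return delta / s**2 * cos_eta * rho**2 * np.sin(phi)

    # theta contributes a factor 2*pi; integrate phi in [0, pi], rho in [0, a]
    val, _ = dblquad(integrand, 0, a, lambda rho: 0, lambda rho: np.pi)
    Fz = -2*np.pi*G*val
    M = 4*np.pi*a**3*delta/3
    print(Fz, -G*M/R**2)   # both approximately -1.0472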

The calculation of the force due to a spherical shell depends strongly on the test
particle being outside the shell, i.e., R > ρ. The expression for cos η is different if
the test particle is inside the shell, i.e., R < ρ. In that case, it turns out that the
force on the test particle is zero. (See the Exercises.)

Exercises for 4.7.

1. How would the argument for Olbers’ Paradox change if we assumed the density
of stars was constant for ρ < ρ0 and dropped off as 1/ρ for ρ > ρ0 ?
2. Find the force exerted on a test particle of unit mass at the origin by a solid
hemisphere of radius a centered at the origin if the density is given by δ = kρ.
Express the answer in terms of the total mass. Note that if the test particle
is at the origin, then s = ρ in equations (56) and (57). Note also that by
symmetry the x and y components of the force are zero.
3. Find the force exerted on a test particle of unit mass at the origin by a solid
cone with vertex at the origin, centered on the z-axis, of height √3 a and
radius a. Use the same density function δ = kρ as in the previous problem.
4. Consider a spherical shell described by c ≤ ρ ≤ a of constant density δ. It is
clear by symmetry that the gravitational force exerted on a unit test mass at
the origin is zero. Show that the force exerted on a test mass at any point
inside the shell is zero. Hint: Change variables from φ to s, as was done in
the text. Convince yourself that if ρ >= R (the test mass is inside the shell
of radius ρ), then the calculation of cos η in terms of s is still correct, but the
limits for s are ρ − R ≤ s ≤ ρ + R. Show that in this case the answer is zero.
(Note that if R < ρ, it is possible for η to be obtuse, so w = R − z would be
negative.)
5. Let a mass M be uniformly distributed over a solid sphere of radius a. Imag-
ine a very narrow tunnel drilled along a diameter and a test particle placed
at distance R from the center. Show that the gravitational attraction is pro-
portional to R. Ignore the effect of removing the mass in the tunnel. Hint:
According to the previous problem, the mass in the shell R ≤ ρ ≤ a will
exert zero net force. What force will be exerted by the mass of the sphere
0 ≤ ρ ≤ R? How does this argument change if the mass density is δ = k/ρ?

4.8 Improper Integrals

One often encounters integrals involving infinities of one sort or another. This
may occur if either the domain of integration is not bounded or if the function
being integrated is not bounded on its domain. The basic method of dissecting the
domain, forming a finite sum, and taking a limit does not work in such cases, but
one can usually do something sensible. The resulting integrals are called improper
integrals.

Example 76 We shall find the area bounded by the graphs of x = 1, y = 0, and
y = 1/x². The region is bounded below by the x-axis and above by a graph which
approaches the x-axis asymptotically. The region is cut off by the line x = 1 on the
left, but it extends without limit to the right.

The area is calculated as follows. Consider a finite portion of the region, bounded
on the right by the line x = U (for ‘upper limit’). Its area is the integral

A(U) = ∫_1^U dx/x² = [ −1/x ]_1^U = 1 − 1/U.

Now let the upper limit U → ∞. The term 1/U → 0, so the area is

A = lim_{U→∞} A(U) = 1.

Note that the result seems a trifle paradoxical. Although the region is unbounded,
it does have a finite area according to this plausible method for finding area.

The above example is a special case of a more general concept. Suppose y = f(x)
defines a function for a ≤ x < ∞. Suppose moreover that the integral ∫_a^U f(x) dx
exists for each a < U. We define the improper integral

∫_a^∞ f(x) dx = lim_{U→∞} ∫_a^U f(x) dx

provided this limit exists. Similar definitions can be made for ∫_{−∞}^b f(x) dx or for
various unbounded regions in R² and R³. (See below for some examples.)
[Diagram: the region under y = 1/x² between x = 1 and x = U, of area A(U).]

The answer is not always finite.

Example 77 To determine ∫_1^∞ dx/√x, we consider

∫_1^U dx/√x = ∫_1^U x^{−1/2} dx = [ x^{1/2}/(1/2) ]_1^U = 2√U − 2.

This does not have a finite limit as U → ∞, so we say the improper integral diverges
or that the answer is +∞.
Example 78 We shall evaluate ∫_0^1 dx/√x. At first glance, this looks like an ordinary
integral. Indeed, we have

∫_0^1 dx/√x = [ x^{1/2}/(1/2) ]_0^1 = 2.

However, if you look carefully, you will notice that there is something not quite right
since the integrand is not bounded near the lower limit 0. (The graph approaches
the y-axis asymptotically.)
[Diagrams: the unbounded region under y = x^{−1/2} between x = 0 and x = 1, and
the truncated region from x = ε to x = 1 as ε → 0.]

The correct way to do this problem is to treat the integral as an improper integral
and to evaluate it as a limit of proper integrals. Let 0 < ε < 1. Then

∫_ε^1 dx/√x = [ x^{1/2}/(1/2) ]_ε^1 = 2 − 2√ε.

If we now let ε → 0, we have

∫_0^1 dx/√x = lim_{ε→0} ∫_ε^1 dx/√x = lim_{ε→0} (2 − 2√ε) = 2.

Evaluating the above integral as a limit is a bit silly, since the first method gives
the same answer. This is a common state of affairs. What saves us from error in
such cases is that the anti-derivative is continuous, so taking a limit or evaluating
it directly yields the same answer. However, as the next example shows, it is possible
to go wrong, so one should always be aware that one is really evaluating an improper
integral. (Check through the previous sections and you will find several improper
integrals in hiding.)
Example 79 We shall try to evaluate ∫_{−1}^1 dx/x². The graph of the function
f(x) = 1/x² is asymptotic to the positive y-axis, so it is unbounded as x → 0.
Suppose we ignore this and just try to do the integral by the usual method:

∫_{−1}^1 dx/x² = [ −1/x ]_{−1}^1 = −2.

However, this is clearly not a correct answer since it is negative and the function
is always positive. Moreover, suppose we divide the integral into two parts: one
from −1 to 0 and the other from 0 to 1. Each of these is an improper integral,
so they should be computed as limits. Looking at the second integral, we have for
0 < ε < 1,

∫_ε^1 dx/x² = [ −1/x ]_ε^1 = 1/ε − 1,

and this does not approach a finite limit as ε → 0. By symmetry, the same argument
works for the other integral, so the sum of the two is not a finite number.

In each of the above examples, the functions were always positive. In cases where
we have to combine ‘positive infinities’ with ‘negative infinities’, the situation is a
bit more complicated because the answer may depend on how you take limits.
Example 80 Consider ∫_{−1}^1 dx/x. If we divide this into two improper integrals, we
could try

∫_{−1}^1 dx/x = ∫_{−1}^0 dx/x + ∫_0^1 dx/x.

However

∫_{−1}^0 dx/x = lim_{ε→0} ∫_{−1}^{−ε} dx/x = lim_{ε→0} ln |x| |_{−1}^{−ε}
             = lim_{ε→0} (ln |−ε| − ln |−1|) = lim_{ε→0} ln ε = −∞,

and

∫_0^1 dx/x = lim_{η→0} ∫_η^1 dx/x = lim_{η→0} ln |x| |_η^1
           = lim_{η→0} (−ln η) = +∞.

There is no sensible way to combine these infinities to get a unique value. However,
we could combine the two integrals as follows:

∫_{−1}^1 dx/x = lim_{ε→0} [ ∫_{−1}^{−ε} dx/x + ∫_ε^1 dx/x ]
             = lim_{ε→0} [ (ln |−ε| − ln |−1|) + (ln 1 − ln ε) ] = 0.

Here we have carefully arranged to approach zero from both directions at exactly
the same rate, so at each stage the integrals cancel. The result 0, in this case, is
called the Cauchy principal value of the improper integral.
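A sympy sketch (ours, not the text’s) mirrors this computation: each one-sided
integral diverges, but the symmetric combination has limit 0.

    import sympy as sp

    x = sp.symbols('x')
    eps = sp.symbols('epsilon', positive=True)

    left = sp.integrate(1/x, (x, -1, -eps))     # log(epsilon)
    right = sp.integrate(1/x, (x, eps, 1))      # -log(epsilon)
    print(sp.limit(left, eps, 0, '+'))          # -oo
    print(sp.limit(right, eps, 0, '+'))         # oo
    print(sp.limit(left + right, eps, 0, '+'))  # 0, the Cauchy principal value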

The Normal Distribution In probability and statistics one encounters the so-
called ‘bell shaped curve’. This is the graph of the function f(x) = C e^{−x²/2σ²}
where C and σ are appropriate constants.


[Figure: the bell shaped curve, with the points ±σ marked on the axis.]

For any interval [a, b] on the real line, the integral ∫_a^b f(x) dx is supposed to be the
probability of a measurement of the quantity x giving a value in that interval. Here,
the mean value of the measured variable is assumed to be 0, and σ, which is called
the standard deviation, tells us how concentrated the measurements will be about
that mean value. Moreover, the constant C should be chosen so that ∫_{−∞}^∞ f(x) dx = 1
since it is certain that a measurement will produce some value. Hence, C should
be the reciprocal of

∫_{−∞}^∞ e^{−x²/2σ²} dx.
This is of course an improper integral. The fact that both limits are infinite adds
a complication, but since the function is always positive, no significant problem
arises. Indeed, by symmetry, we may assume

∫_{−∞}^∞ e^{−x²/2σ²} dx = 2 ∫_0^∞ e^{−x²/2σ²} dx,

and we shall calculate the latter integral. The first step is to eliminate the parameter
σ by making the substitution u = x/σ, du = dx/σ. This gives

∫_0^∞ e^{−x²/2σ²} dx = σ ∫_0^∞ e^{−u²/2} du.
The integral I = ∫_0^∞ e^{−u²/2} du cannot be done by explicit integration, so we make
use of a clever trick. Consider

I² = ( ∫_0^∞ e^{−u²/2} du )² = ( ∫_0^∞ e^{−u²/2} du )( ∫_0^∞ e^{−u²/2} du )
   = ( ∫_0^∞ e^{−x²/2} dx )( ∫_0^∞ e^{−y²/2} dy ).

(Here we used the fact that the ‘dummy variable’ in a definite integral can be called
anything at all, so we called it first ‘x’ and then ‘y’.) This product can also be
written as an iterated integral

∫_0^∞ ∫_0^∞ e^{−x²/2} e^{−y²/2} dy dx = ∫_0^∞ ∫_0^∞ e^{−(x²+y²)/2} dy dx.
This last integral can be viewed as an improper double integral, i.e., as

∫∫_D e^{−(x²+y²)/2} dA

where D is the first quadrant in the x, y-plane.

[Diagrams: the unbounded first quadrant D, and the quarter disc D(R) as R → ∞.]

To calculate this improper integral, we switch to polar coordinates and treat the
region D as a limit of quarter discs D(R) of radius R as R → ∞. Thus,

∫∫_D e^{−(x²+y²)/2} dA = lim_{R→∞} ∫∫_{D(R)} e^{−(x²+y²)/2} dA
                       = lim_{R→∞} ∫_0^{π/2} ∫_0^R e^{−r²/2} r dr dθ
                       = lim_{R→∞} ∫_0^{π/2} [ −e^{−r²/2} ]_0^R dθ
                       = (π/2) lim_{R→∞} (1 − e^{−R²/2})
                       = π/2,

since lim_{R→∞} e^{−R²/2} = 0. It follows that I² = π/2, whence I = √(π/2). Hence,

∫_{−∞}^∞ e^{−x²/2σ²} dx = 2 ∫_0^∞ e^{−x²/2σ²} dx = 2σI = 2σ√(π/2) = √(2π) σ.

Thus, we should take C = 1/(√(2π) σ), so

∫_{−∞}^∞ e^{−x²/2σ²} / (√(2π) σ) dx = 1.

The adjusted integrand is called the normal or Gaussian density function.
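A sympy check of the normalization (our illustration, not part of the text):

    import sympy as sp

    x = sp.symbols('x', real=True)
    sigma = sp.symbols('sigma', positive=True)

    total = sp.integrate(sp.exp(-x**2/(2*sigma**2)), (x, -sp.oo, sp.oo))
    print(total)                                          # sqrt(2)*sqrt(pi)*sigma
    print(sp.simplify(total / (sp.sqrt(2*sp.pi)*sigma)))  # 1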



The calculation of the improper double integral involves some hidden assumptions.
(See the Exercises.)

Similar calculations for unbounded regions may be done in R³.

Example 81 We shall determine the improper integral

∫∫∫_{R³} e^{−(x²+y²+z²)/2} dV.

The method is to calculate the integral for a solid sphere E(R) of radius R, centered
at the origin, and then let R → ∞. Using spherical coordinates, we have

∫∫∫_{E(R)} e^{−(x²+y²+z²)/2} dV = ∫_0^{2π} ∫_0^π ∫_0^R e^{−ρ²/2} ρ² sin φ dρ dφ dθ
                                = 2π · ( ∫_0^π sin φ dφ ) · ∫_0^R e^{−ρ²/2} ρ² dρ
                                = 2π · 2 · ∫_0^R e^{−ρ²/2} ρ² dρ.

The ρ integral can be done by integrating by parts, and the answer is

−R e^{−R²/2} + ∫_0^R e^{−ρ²/2} dρ.

Let R → ∞. The first limit may be calculated by L’Hôpital’s rule:

lim_{R→∞} R e^{−R²/2} = lim_{R→∞} R / e^{R²/2} = lim_{R→∞} 1 / (R e^{R²/2}) = 0.

The second term approaches ∫_0^∞ e^{−ρ²/2} dρ = √(π/2) by the previous calculation.
Hence,

∫∫∫_{R³} e^{−(x²+y²+z²)/2} dV = (2π)(2) √(π/2) = (2π)^{3/2}.

Note that the argument for Olbers’ Paradox in the previous section really involves
an improper integral. So do many gravitational force calculations which involve
integrating functions with denominators which may vanish.

Exercises for 4.8.

1. Evaluate the following improper integrals, if they converge:
(a) ∫_0^1 (1/x^{5/2}) dx.
(b) ∫_1^∞ (1/x^{5/2}) dx.
(c) ∫_{−1}^1 (1/(1 − x²)) dx.
(d) ∫_0^1 (1/(x − 1)) dx.
(e) ∫_{−∞}^0 (1/(x − 1)) dx.
(f) ∫_{−∞}^∞ (x/(x + 1)) dx.

2. Consider the curve y = f (x) = 1/x, x ≥ 1.


(a) Show that the area under the curve is infinite.
(b) Show that the volume formed by rotating the curve around the x-axis is
finite.

3. An infinite rod of uniform density δ lies along the x-axis. Find the net gravi-
tational force on a unit test mass located at (0, a).

4. Calculate

∫_0^∞ ∫_0^∞ dx dy / (1 + x² + y²)²

by switching to polar coordinates.

5. Find the total mass in a solid sphere of radius a supposing the mass density
is given by δ(ρ) = k/ρ. First find this by doing the integral in spherical
coordinates. Next notice that because the integrand is unbounded as ρ → 0,
the integral is in fact an improper integral. With this in mind, recalculate the
integral as follows. Find the mass M(ε) in the solid region E(ε) between an
inner sphere of radius ε and the outer sphere of radius a. Then take the limit
as ε → 0. You should get the same answer.

6. Look back over previous integration problems to see if you can find any which
involve hidden improper integrals.

4.9 Integrals on Surfaces

The double integrals discussed so far have been for regions in R2 . We also want to
be able to integrate over surfaces in R3 . In the former case, we can always dissect
the region into true rectangles (except possibly near the boundary), but that won’t
generally be possible for surfaces which are usually curved. We encountered a similar
situation in our discussion of arc length and line integrals for paths in R2 and R3 ,
so we shall briefly review that here.

Parametric Representations Let r = r(t), a ≤ t ≤ b provide a parametric
representation for a path C in Rⁿ (n = 2 or 3). It is useful to picture this by
drawing a diagram which exhibits the domain a ≤ t ≤ b on the left, Rⁿ with the
image curve on the right, and a curved arrow indicating the action of mapping the
parameter t to the point r(t) on the path.
[Diagram: the parameter interval a ≤ t ≤ b with a subinterval ∆t mapping to an arc
∆s on the curve from r(a) to r(b).]

To integrate on the curve, we dissect the parameter domain into small intervals ∆t,
and that results in a corresponding dissection of the curve into small arcs ∆s where

∆s ≈ |r′(t)| ∆t.

(The quantity on the right is just the length of a small displacement tangent to the
curve, but it is also a good approximation to the length of the chord connecting
the endpoints of the small arc.) Suppose now that f : Rⁿ → R is a scalar valued
function such that the image curve C is contained in its domain, i.e., f (r) is defined
for r on C. We can form the sum
Σ_{t-dissection} f(r) ∆s ≈ Σ_{t-dissection} f(r(t)) |r′(t)| ∆t

which in the limit becomes

∫_C f(r) ds = ∫_a^b f(r(t)) |r′(t)| dt.

This generalizes slightly what we did before when discussing line integrals. In that
case, we have a vector function F defined on C, and the scalar function f to be
integrated is given by f (r) = F(r) · T where T is the unit tangent vector at r.

Example 82 Suppose a mass is distributed on a thin wire shaped in a circle of radius
a in such a way that the density is proportional to the distance r to a fixed point O on
the circle. We shall find the total mass. To this end, introduce a coordinate system
with the origin at O and the x-axis pointing along the diameter through O. (See
the diagram.) Then the mass density will have the form δ(r) = kr = k√(x² + y²),
and we want to find ∫_C δ(r) ds. We know that the circle may be described in polar
coordinates by r = 2a cos θ, so using x = r cos θ, y = r sin θ, we obtain a parametric
representation in terms of θ:

r = ⟨2a cos² θ, 2a cos θ sin θ⟩,  −π/2 ≤ θ ≤ π/2.


Hence,

r′(θ) = ⟨−4a cos θ sin θ, −2a sin θ sin θ + 2a cos θ cos θ⟩
      = ⟨−2a sin 2θ, 2a cos 2θ⟩,

so |r′(θ)| = 2a. In addition, on the curve, δ(r) = kr = k(2a cos θ), so

∫_C δ(r) ds = ∫_{−π/2}^{π/2} (2ak cos θ)(2a) dθ = 4a²k [sin θ]_{−π/2}^{π/2} = 8a²k.

(You might also try to do the problem by choosing a coordinate system with origin
at the center of the circle. Then the expression for δ(r) would be a bit more
complicated.)
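A sympy sketch (our own check) confirms both the speed |r′(θ)| = 2a and the
total mass:

    import sympy as sp

    a, k, theta = sp.symbols('a k theta', positive=True)

    r = sp.Matrix([2*a*sp.cos(theta)**2, 2*a*sp.cos(theta)*sp.sin(theta)])
    v = r.diff(theta)
    print(sp.trigsimp(sp.sqrt(v.dot(v))))   # 2*a

    mass = sp.integrate(k*2*a*sp.cos(theta) * 2*a,
                        (theta, -sp.pi/2, sp.pi/2))
    print(mass)                             # 8*a**2*k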

We want to do something similar for surfaces in space. So far, we have met surfaces
as graphs of functions f : R² → R or as level sets of functions g : R³ → R.
They may also be represented parametrically by vector valued functions R² → R³.
Before discussing the general case, we recall our discussion of the surface of a sphere
which is one of the most important applications.

Example 83 In our discussion of ‘geography’, we noted that on the surface of
the sphere ρ = a, the spherical coordinates φ, θ are intrinsic coordinates specifying
the position of points on that sphere. Moreover, using x = ρ sin φ cos θ, y =
ρ sin φ sin θ, z = ρ cos φ, we may specify the relation between (φ, θ) and the position
vector of the point on the sphere by

r = ⟨a sin φ cos θ, a sin φ sin θ, a cos φ⟩,  0 ≤ φ ≤ π, 0 ≤ θ < 2π.

As above, consider a diagram with a φ, θ-plane on the left, the sphere imbedded in
R3 on the right, and a curved arrow indicating the action of mapping (φ, θ) to the
image point on the sphere.

[Diagram: the rectangle 0 ≤ φ ≤ π, 0 ≤ θ ≤ 2π in the φ, θ-plane mapping onto the
sphere; the lines φ = const and θ = const map to circles of latitude and meridians of
longitude.]

φ, θ on the left should be thought of as rectangular coordinates in a map of the
sphere, while the picture on the right represents ‘reality’. In the map, circles of
sphere, while the picture on the right represents ‘reality’. In the map, circles of
latitude are represented by vertical lines and meridians of longitude by horizontal
lines. The entire sphere is covered by mapping the rectangle 0 ≤ φ ≤ π, 0 ≤ θ ≤
2π. The bottom edge of this rectangle (φ = 0) is mapped to the North Pole on
the sphere, and similarly, the upper edge (φ = π) is mapped to the South Pole.
For points interior to the rectangle, there is a one-to-one correspondence between
parameter points (φ, θ) and points on the sphere.

The general situation is quite similar. We suppose we are given a smooth vector
valued function r = r(u, v) defined on some domain D in the u, v-plane and taking
values in R3 . The subset of R3 consisting of image points r(u, v) for (u, v) in D
will generally be a surface, and we say the function r = r(u, v) is a parametric
representation of this surface. As above, we picture this by a diagram with the
u, v-parameter plane on the left, R3 with the image surface imbedded on the right,
and a curved arrow indicating the action of mapping (u, v) to r(u, v).

[Diagram: the parameter domain D in the u, v-plane mapping to the surface S in R³;
the lines u = const and v = const map to curves on S.]

We assume that at least for the interior of the domain, the function is one-to-one,
i.e., distinct points in the parameter plane map to distinct points on the surface.
However, for the boundary of the domain, the one-to-one condition may fail.

Horizontal lines (v = constant) and vertical lines (u = constant) in the parameter
domain map to curves on the surface, and it is usually worthwhile seeing what
those lines are.

There are of course as many surfaces which can be defined this way as there are
functions R2 → R3 . However, it is not necessary at this point to be familiar with
all of them; knowing how to represent certain simple surfaces parametrically will
suffice. We started with the surface of a sphere. The case of a cylinder is even
easier.

Example 84 Consider a cylinder of radius a centered on the z-axis. In cylindrical
coordinates, this is described simply by r = a. Putting this in x = r cos θ, y = r sin θ,
we obtain the following parametric representation of the cylinder

r = r(θ, z) = ⟨a cos θ, a sin θ, z⟩,  0 ≤ θ < 2π, −∞ < z < ∞.

[Diagram: the strip 0 ≤ θ ≤ 2π in the θ, z-plane mapping onto the cylinder; θ = const
maps to vertical lines, z = const to horizontal circles.]

The parameter domain in this case is the infinite strip between the lines θ = 0 and
θ = 2π in the θ, z-plane. If we wanted only a finite portion of the cylinder, say
between z = 0 and z = h, we would appropriately limit the domain.

The lines θ = constant in the θ, z-plane correspond to vertical lines on the cylinder.
The lines z = constant in the parameter plane correspond to circles on the cylinder,
parallel to the x, y-plane.

Example 85 Let a = ⟨a₁, a₂, a₃⟩ and b = ⟨b₁, b₂, b₃⟩ be fixed vectors in R³, and
consider the function defined by

r = r(u, v) = ua + vb = ⟨a₁u + b₁v, a₂u + b₂v, a₃u + b₃v⟩,  −∞ < u < ∞, −∞ < v < ∞.

Here the domain is the entire u, v-plane and the image surface is just the plane
through the origin containing the vectors a and b.

[Diagram: a small rectangle in the u, v-plane mapping to a small parallelogram in the
plane spanned by a and b.]

The lines u = constant and v = constant in the parameter domain correspond to
lines in the image plane. Note that these lines won’t generally be perpendicular to
one another. In fact the line u = constant (v varying) will meet the line v =
constant (u varying) in the same angle as that between a and b.

How would you modify this function to represent the plane which passes through
the endpoint of the position vector r0 and which is parallel to the above plane?

Example 86 Consider the circle of radius a in the x, z-plane centered at (b, 0, 0).
The surface obtained by rotating this circle about the z-axis is called a torus. Using
cylindrical coordinates, you see from the diagram that

r = b + a cos η
z = a sin η

where η is the indicated angle. Hence, we obtain the parametric representation

r = r(η, θ) = ⟨(b + a cos η) cos θ, (b + a cos η) sin θ, a sin η⟩,  0 ≤ η < 2π, 0 ≤ θ < 2π.

[Diagram: the square 0 ≤ η, θ ≤ 2π mapping onto the torus.]


The line η = constant in the η, θ-domain corresponds to a circle on the torus centered
on the z-axis and parallel to the x, y-plane. The line θ = constant in the η, θ-domain
corresponds to a circle on the torus obtained by cutting it crosswise with the half-
plane from the z-axis determined by that value of θ. [Diagram: a cross section of the
torus with constant θ, showing the radii a and b and the angle η.]

Integrating on Parametrically Defined Surfaces Let S denote a surface in
R³ represented parametrically by r = r(u, v). In what follows, we shall make use of
the curves on the surface obtained by keeping one of the parameters constant and
letting the other vary. For example, if v is constant, r(u, v) provides a parametric
representation of a curve with parameter u. As usual, you can find a tangent vector
to the curve by taking the derivative, but since v is constant, it is the partial
derivative, ∂r/∂u. Similarly, ∂r/∂v is a vector tangent to the curve obtained by
keeping u constant and letting v vary.

Let f (x, y, z) = f (r) be a scalar valued function with domain containing the surface
S. We want to define what we mean by the integral of the function on the surface.
Our method parallels what we did in the case of curves. Imagine that the domain
D of the parameterizing function is dissected into small rectangles with the area
of a typical rectangle being ∆A = ∆u∆v. Corresponding to this dissection is a
dissection of the surface into subsets we shall call curvilinear rectangles.

[Diagram: a small rectangle with corner (u, v) and sides ∆u, ∆v, and the corresponding
tangent parallelogram on the surface.]

Suppose for the moment that we know how to find the surface area ∆S of each
such curvilinear rectangle. Also, for each such curvilinear rectangle, choose a point
r = r(u, v) inside it at which to evaluate f (r). (For reasonable functions, it won’t
matter much where it is chosen.) Form the sum

Σ_{dissection of S} f(r) ∆S        (58)

and consider what happens to such sums as the dissection gets finer and finer and
the number of curvilinear rectangles goes to ∞. If the sums approach a limit, we
denote it by

∫∫_S f(x, y, z) dS  or sometimes  ∫_S f(x, y, z) dS.

[Diagram: dissection of the surface S into curvilinear rectangles.]

The crucial part of this analysis is determining how to identify the element of surface
area ∆S for a typical curvilinear rectangle and seeing how that is related to the
area ∆A = ∆u∆v of the corresponding rectangle in the parameter domain. There
are several ways to do this, and seeing how they are related is a bit involved. The
method we shall use is based on the idea that the tangent plane to the surface
is a good approximation to the surface. Let (u, v) be the lower left corner of a
small rectangle in the parameter domain. Consider the two sides meeting there and
their images which form the corresponding sides of the curvilinear rectangle. The
image of the side from (u, v) to (u + ∆u, v) is mapped into the arc from r(u, v)
to r(u + ∆u, v). This arc is approximated pretty closely by the tangent vector
(∂r/∂u)∆u. Similarly, the arc from r(u, v) to r(u, v + ∆v) is approximated pretty
closely by the tangent vector (∂r/∂v)∆v.

[Diagram: the images of the sides ∆u and ∆v approximated by the tangent vectors
(∂r/∂u)∆u and (∂r/∂v)∆v at r(u, v).]

Thus, the parallelogram spanned by these tangent vectors is a good approximation to
the curvilinear rectangle, and we may take ∆S to be the area of that parallelogram:

∆S = |(∂r/∂u)∆u × (∂r/∂v)∆v| = |∂r/∂u × ∂r/∂v| ∆u ∆v.

We can also write this equation as

∆S = |∂r/∂u × ∂r/∂v| ∆A,

so that |∂r/∂u × ∂r/∂v| is the correction factor needed to convert area ∆A = ∆u∆v
in the parameter plane into area ∆S on the surface.

If we put this value for ∆S in formula (58) and take the limit, we obtain

∫∫_S f(r) dS = ∫∫_D f(r(u, v)) |∂r/∂u × ∂r/∂v| dA

where the integral on the right is a normal double integral in the u, v-parameter
plane. In particular, if we take f(r) = 1, we obtain a formula for the surface area
of S:

S = ∫∫_S 1 dS = ∫∫_D |∂r/∂u × ∂r/∂v| dA.

Example 83, (revisited) We shall find the element of surface area on a sphere. We
have

r = ⟨a sin φ cos θ, a sin φ sin θ, a cos φ⟩,

so

∂r/∂φ = ⟨a cos φ cos θ, a cos φ sin θ, −a sin φ⟩,
∂r/∂θ = ⟨−a sin φ sin θ, a sin φ cos θ, 0⟩.

These vectors are perpendicular to each other. You can see that directly from the
formulas, or you can remember that they are tangent respectively to a meridian
of longitude and a circle of latitude. The magnitude of the cross product of two
perpendicular vectors is just the product of their magnitudes. We have

|∂r/∂φ| = √(a² cos² φ (cos² θ + sin² θ) + a² sin² φ) = a,
|∂r/∂θ| = √(a² sin² φ (sin² θ + cos² θ)) = a sin φ,

so

dS = |∂r/∂φ × ∂r/∂θ| dA = a² sin φ dφ dθ.

That of course is the same value that was obtained earlier when we viewed this
same curvilinear rectangle on the sphere as the base of a spherical cell in computing
the element of volume in spherical coordinates.

Let’s use this to calculate the surface area of a sphere of radius a:

S = ∫∫_S 1 dS = ∫_0^{2π} ∫_0^π a² sin φ dφ dθ
  = a² ( ∫_0^{2π} dθ ) ( ∫_0^π sin φ dφ ) = a² (2π)(2) = 4πa²,

and that is the answer you should be familiar with from high school.
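Here is a sympy sketch (our own addition) that computes the cross product and
the area from scratch:

    import sympy as sp

    a = sp.symbols('a', positive=True)
    phi, theta = sp.symbols('phi theta')

    r = sp.Matrix([a*sp.sin(phi)*sp.cos(theta),
                   a*sp.sin(phi)*sp.sin(theta),
                   a*sp.cos(phi)])
    n = r.diff(phi).cross(r.diff(theta))
    print(sp.trigsimp(n.dot(n)))    # a**4*sin(phi)**2, so |n| = a**2*sin(phi)

    area = sp.integrate(a**2*sp.sin(phi), (phi, 0, sp.pi), (theta, 0, 2*sp.pi))
    print(area)                     # 4*pi*a**2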

Example 84, (revisited) Assume a mass is distributed over the surface of a right
circular cylinder of height h and radius a so that the density δ is proportional to
the distance to the base. We shall find the total mass. We represent the cylinder
parametrically by

r = r(θ, z) = ⟨a cos θ, a sin θ, z⟩,  0 ≤ θ < 2π, 0 ≤ z ≤ h.

Then

∂r/∂θ = ⟨−a sin θ, a cos θ, 0⟩,
∂r/∂z = ⟨0, 0, 1⟩,

and again these are perpendicular. The product of their magnitudes is just a, so
the element of area is

dS = a dθ dz.

Note that it is quite easy to see why this formula should be correct by looking
directly at the curvilinear rectangle on the surface of the cylinder. Its dimensions
are a dθ by dz.

The mass density has been assumed to have the form δ(r) = kz for some constant
of proportionality k. Hence, the mass is given by

∫∫_S kz dS = ∫_0^{2π} ∫_0^h kz a dz dθ
           = ak ( ∫_0^{2π} dθ ) ∫_0^h z dz = 2πak (h²/2) = πah²k.

The Graph of a Function One very important case of a surface in R³ is that
of a graph of a function. This may be treated as a special case of a parametrically
defined surface as follows. Suppose z = f(x, y) denotes a function with domain D
in R². Let x, y be the parameters and set

r = r(x, y) = ⟨x, y, f(x, y)⟩,  for (x, y) in D.
[Diagram: the patch ∆A in the domain D lifting to the patch ∆S on the graph
z = f(x, y).]

In this case the element of surface area is calculated as follows:

∂r/∂x = ⟨1, 0, f_x⟩,
∂r/∂y = ⟨0, 1, f_y⟩.

These vectors are not generally perpendicular, so we need to calculate

∂r/∂x × ∂r/∂y = ⟨1, 0, f_x⟩ × ⟨0, 1, f_y⟩ = ⟨−f_x, −f_y, 1⟩.

(You may remember that we encountered the same vector earlier as a normal vector
to the graph.) Hence, the correction factor is |∂r/∂x × ∂r/∂y| = √(f_x² + f_y² + 1),
and

dS = √(f_x² + f_y² + 1) dA,

where dA = dy dx is the element of area in the domain D of the function f.

Example 87 We shall find the surface area of that portion of the paraboloid z =
x² + y² below the plane z = 4. Here f(x, y) = x² + y² and D is the disc in the
x, y-plane determined by x² + y² ≤ 4. We have f_x = 2x, f_y = 2y, so

√(f_x² + f_y² + 1) = √(4x² + 4y² + 1)

and

S = ∫∫_S 1 dS = ∫∫_D √(4(x² + y²) + 1) dA.

The problem has now been reduced to a double integral in the x, y-plane. This can
be done by any method you find convenient. For example, since the region D is a

disk, it would seem reasonable to use polar coordinates:

∫∫_D √(4(x² + y²) + 1) dA = ∫_0^{2π} ∫_0^2 √(4r² + 1) r dr dθ
                          = (1/8) ( ∫_0^{2π} dθ ) ∫_0^2 √(4r² + 1) 8r dr
                          = (π/4) [ (4r² + 1)^{3/2} / (3/2) ]_0^2
                          = (π/6) (√4913 − 1).

Note that in effect, we have introduced two correction factors here. The first,
√(4x² + 4y² + 1), converted area in the x, y-plane to surface area on the graph. The
second, r, was needed because we chose to use polar coordinates in the x, y-plane.
There is an entirely different approach which introduces only one correction factor.
Namely, use the parametric representation

r = ⟨r cos θ, r sin θ, r²⟩,  0 ≤ r ≤ 2, 0 ≤ θ < 2π.

In this case, the domain of integration is a rectangle in the r, θ-plane, and the
correction factor ends up being

|∂r/∂r × ∂r/∂θ| = · · · = √(4r⁴ + r²) = √(4r² + 1) r.

(You should check the · · · in the above calculation.)
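The suggested check is easy by hand, and a sympy sketch (ours, not the text’s)
does both it and the area at once:

    import sympy as sp

    r, theta = sp.symbols('r theta', nonnegative=True)

    surf = sp.Matrix([r*sp.cos(theta), r*sp.sin(theta), r**2])
    n = surf.diff(r).cross(surf.diff(theta))
    print(sp.trigsimp(n.dot(n)))    # 4*r**4 + r**2, so |n| = sqrt(4*r**2 + 1)*r

    area = sp.integrate(sp.sqrt(4*r**2 + 1)*r, (r, 0, 2), (theta, 0, 2*sp.pi))
    print(sp.simplify(area))        # pi*(17*sqrt(17) - 1)/6, i.e. (sqrt(4913) - 1)*pi/6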

Example 88 We shall find the area of the circle in the plane x = 1 of radius 2
centered on the point (1, 0, 0). This is a bit silly since the answer is clearly π·2² = 4π,
but let’s see how the method gives that answer. The surface in this case may be
viewed as the graph of a function x = g(y, z) = 1 with domain D a disc of radius
2 centered at the origin in the y, z-plane. In this case, the appropriate element of
area would be

dS = √(g_y² + g_z² + 1) dy dz,

but since g_y = g_z = 0, the correction factor is just 1. Hence,

S = ∫∫_D 1 dy dz.

We need not actually do the integral, since we know that the answer will just be
the area of the domain D, which is π·2² = 4π.

Note that you could also represent the surface parametrically by

r = ⟨1, r cos θ, r sin θ⟩,  0 ≤ r ≤ 2, 0 ≤ θ < 2π.

Exercises for 4.9.


1. Calculate ∫_C f(r) ds for each indicated function over the curve given by the
indicated parametrization.
(a) f(x, y) = x − y, r = ⟨t², t³⟩, −1 ≤ t ≤ 1.
(b) f(x, y, z) = x + 2y + z, r = ⟨cos θ, sin θ, 1⟩, 0 ≤ θ ≤ 2π.
(c) f(x, y) = x, r = ⟨x, x²⟩, 0 ≤ x ≤ 1. (C is the graph of y = x².)

2. A mass is distributed over a thin wire in the shape of the semi-circle
x² + y² = a², y ≥ 0. If the density is given by δ = y, find the center of mass.

3. Show that the surface area of a right circular cylinder of radius a and height
h is 2πah.

4. Find the surface area of the indicated surface.
(a) The part of the plane 2x + 3y + z = 2 contained inside the cylinder
x² + y² = 4.
(b) The part of the paraboloid z = 4 − x² − y² above the x, y-plane.
(c) The part of the sphere x² + y² + z² = a² above the plane z = h where
0 ≤ h ≤ a.
(d) The surface parametrized by r = ⟨u², v², uv⟩, 0 ≤ u ≤ 1, 0 ≤ v ≤ 1.

5. Let a sphere of radius a be intersected by two parallel planes. Show that
the area of the portion of the sphere between the planes depends only on the
distance between the planes. Hint: Use part (c) above.
6. Evaluate ∫∫_S z dS for each of the surfaces in Problem (??).

7. Find the centroid of the hemisphere x² + y² + z² = a², 0 ≤ z.

8. Consider the cone described parametrically by r = ⟨r cos θ, r sin θ, mr⟩,
0 ≤ r ≤ a, 0 ≤ θ ≤ 2π. Show that its surface area is πaL where L = a√(1 + m²)
is its ‘slant height’. What is the geometric significance of L?

9. Find the surface area of that part of the cylinder x² + y² = 1
(a) under the plane z = y and above the x, y-plane,
(b) cut off by the cylinder x² + z² = 1.

10. Find the surface area of that part of the sphere x² + y² + z² = 4a² inside the
cylinder (x − a)² + y² = a². (Break the surface into two symmetrical parts
and double the answer. This is a sphere, but you can also treat each part as
the graph of a function.)

11. Find the surface area of the torus obtained by rotating the circle
(x − b)² + z² = a² about the z-axis. Hint: Use the parametric representation
in Example 86.

12. A mass of density δ is uniformly distributed over the surface of the cylinder
x² + y² = a², 0 ≤ z ≤ h. Find the gravitational force on a test mass at the
origin.

4.10 The Change of Variables Formula


Recall how we use substitution to evaluate an ordinary integral ∫_a^b f(x) dx. We try
to express x = x(u) in terms of some other variable, and then

∫_a^b f(x) dx = ∫_c^d f(x(u)) (dx/du) du

where the u-limits are chosen so that a = x(c) and b = x(d). A similar method works
to evaluate a double integral ∫∫_D f(x, y) dA, but of course it is more complicated.
To proceed by analogy, assume we have x = x(u, v) and y = y(u, v). These two
functions may be used to define a vector function R² → R² given by

r = r(u, v) = ⟨x(u, v), y(u, v)⟩

which transforms some domain U in the u, v-plane into the domain D in the x, y-
plane.

[Diagram: the domain U in the u, v-plane mapping by r to the domain D in the
x, y-plane.]

Assume that this function provides a one-to-one correspondence between the interior
of U and the interior of D. That will insure that no part of D is covered more than
once, so we won’t have to worry about a part of the domain contributing more
than once to the integral. (The restriction can be weakened for the boundaries
without creating problems.) We want a formula which relates ∫∫_D f(x, y) dy dx
to an integral of the form ∫∫_U f(x(u, v), y(u, v)) C(u, v) du dv where C(u, v) is an
appropriate ‘correction factor’. One way to determine the correction factor is as
follows. Think of the function as a mapping into R³ with the third coordinate zero:

r = r(u, v) = ⟨x(u, v), y(u, v), 0⟩.



From this point of view, the region D is a surface in R³ (represented parametrically)
which happens to be contained in the x, y-plane. Then the correction factor for area
on this ‘surface’ is C(u, v) = |(∂r/∂u) × (∂r/∂v)|. However,

∂r/∂u = ⟨∂x/∂u, ∂y/∂u, 0⟩,
∂r/∂v = ⟨∂x/∂v, ∂y/∂v, 0⟩,

so

∂r/∂u × ∂r/∂v = ⟨0, 0, (∂x/∂u)(∂y/∂v) − (∂y/∂u)(∂x/∂v)⟩.

Thus,

dy dx = |(∂x/∂u)(∂y/∂v) − (∂y/∂u)(∂x/∂v)| du dv.

The quantity in absolute values is called the Jacobian of the transformation relating
x, y to u, v. It is often denoted

∂(x, y)/∂(u, v).

It may also be characterized as the 2 × 2 determinant

det | ∂x/∂u  ∂y/∂u |
    | ∂x/∂v  ∂y/∂v |.

Example 89, (Polar Coordinates). Consider the transformation R² → R² defined by

r = r(r, θ) = ⟨r cos θ, r sin θ⟩.

This is the transformation used when expressing points in the x, y-plane in terms of polar coordinates (r, θ). From this point of view, r and θ are rectangular coordinates in a ‘fictitious’ r, θ-plane which the transformation carries to points in the ‘real’ x, y-plane. We have
∂x/∂r = cos θ,      ∂y/∂r = sin θ,
∂x/∂θ = −r sin θ,   ∂y/∂θ = r cos θ,
so the Jacobian is

∂(x, y)/∂(r, θ) = r cos²θ − (−r sin²θ) = r.
Since r ≥ 0, we have the change of variables formula
∫∫_D f(x, y) dA = ∫∫_U f(r cos θ, r sin θ) r dr dθ

where U is the domain in the r, θ-plane which describes the region D in polar
coordinates. Notice that this is essentially the same as what we derived earlier.
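
If you have access to a computer algebra system, you can let it do the differentiation and the determinant for you. The following sketch (assuming the Python library sympy is available; it is not part of this text) reproduces the calculation of Example 89:

    import sympy as sp

    r, theta = sp.symbols('r theta', positive=True)
    x = r * sp.cos(theta)   # the transformation of Example 89
    y = r * sp.sin(theta)

    # Jacobian matrix with rows (dx/dr, dy/dr) and (dx/dtheta, dy/dtheta)
    J = sp.Matrix([[sp.diff(x, r), sp.diff(y, r)],
                   [sp.diff(x, theta), sp.diff(y, theta)]])
    print(sp.simplify(J.det()))   # prints r, as derived above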

Example 90 We shall find the area enclosed within the ellipse x²/a² + y²/b² = 1. (Assume a, b > 0.) We make the change of variables x = au, y = bv, i.e.,

r = r(u, v) = ⟨au, bv⟩.

Then

∂x/∂u = a,   ∂y/∂u = 0,
∂x/∂v = 0,   ∂y/∂v = b.
It follows that

∂(x, y)/∂(u, v) = ab,

so

∫∫_D 1 dA = ∫∫_U (ab) du dv,
where U in the u, v-plane corresponds to D. However, substituting for x and y in terms of u and v yields

x²/a² + y²/b² = a²u²/a² + b²v²/b² = u² + v²,

so the circle u² + v² = 1 in the u, v-plane corresponds to the ellipse x²/a² + y²/b² = 1 in the x, y-plane. It is not hard to see that U is the interior of a circle of radius 1 in the u, v-plane, so its area is easy to find.

[Figure: the map carries the unit disk u² + v² ≤ 1 in the u, v-plane to the region D bounded by the ellipse x²/a² + y²/b² = 1 in the x, y-plane.]

Thus,

∫∫_U ab du dv = ab ∫∫_U du dv = ab · π(1)² = πab.

Note that in this example, it wasn't actually necessary to work out the u, v integral directly as an iterated integral. This is in keeping with the point of the method: one chooses to use the transformation formula for the multiple integral in the hope that the calculation in the new coordinates (u, v) will be easier than the calculation in the original coordinates (x, y).
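
As a check on the answer, one can also compute the area directly in the original coordinates, without any change of variables. A short sketch (again assuming sympy, which is not part of the text) integrates the height of the ellipse over −a ≤ x ≤ a:

    import sympy as sp

    x, a, b = sp.symbols('x a b', positive=True)
    # the ellipse has height 2*b*sqrt(1 - x^2/a^2) above each x in [-a, a]
    area = sp.integrate(2 * b * sp.sqrt(1 - x**2 / a**2), (x, -a, a))
    print(sp.simplify(area))   # pi*a*b, agreeing with the change of variables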

Some Subtle Points in the Theory Our treatment of the change of variables
formula can involve elements of circular reasoning if it is not worked out carefully.
Our basic argument is that the change of variables formula can be derived from
the formula for the integral over a surface. There is an implicit assumption here,
namely, that the concept of area (‘dA’) in the domain D, when it is viewed as a
subset of R², is the same as the concept of surface area (‘dS’) when D is viewed as a parametrically defined (albeit flat) surface. Of course, the two notions do agree, but one must prove it, and the proof will depend on the precise definitions of the two concepts.

In our previous discussions we were a trifle vague about how these concepts are
defined. Let’s look a little closer. First, let’s consider the definition we introduced
for the surface integral

∫∫_S f(r) dS = ∫∫_D f(r(u, v)) |∂r/∂u × ∂r/∂v| du dv.

Suppose the same surface has another parametric representation r = r′(u′, v′) with domain D′. If we compute

∫∫_{D′} f(r′(u′, v′)) |∂r/∂u′ × ∂r/∂v′| du′ dv′,

how do we know that we will get the same answer? This question merits some thought. For example, note that we won't necessarily get the same answer in general, since if the parameterizing functions are not both one-to-one, parts of the surface may be covered more than once by one or the other of the two functions. Suppose then that both parameterizing functions are one-to-one (except possibly on their boundaries). Consider the relation between u, v-coordinates and u′, v′-coordinates of the same point on the surface:

(u, v) −→ r(u, v) = r′(u′, v′) ←− (u′, v′).



[Figure: two parameter domains D and D′ mapped onto the same surface S in space.]

Use this to define a transformation (u′, v′) = T(u, v). Then a rather involved calculation using the chain rule gives the formula

∂r/∂u × ∂r/∂v = (∂r/∂u′ × ∂r/∂v′) ∂(u′, v′)/∂(u, v).

So to show that the elements of surface area in the two parameterizations are equal, i.e., that

|∂r/∂u′ × ∂r/∂v′| du′ dv′ = |∂r/∂u × ∂r/∂v| du dv = |∂r/∂u′ × ∂r/∂v′| |∂(u′, v′)/∂(u, v)| du dv,

we need to show that

du′ dv′ = |∂(u′, v′)/∂(u, v)| du dv.
But this is just the change of variables formula for the transformation T relating the element of area in the u′, v′-plane to the element of area in the u, v-plane.

The above analysis shows that to define surface integrals in terms of parametric
representations and to know that the answer depends only on the surface, we must
first prove the change of variables formula for double integrals. That can be done
with some effort, but the technical hurdles are difficult to surmount. If you go back
to the discussion of integrals in polar coordinates, you can see the source of some
of the problems. The double integral is defined originally in terms of rectilinear
partitions by a network of lines parallel to the coordinate axes. To use the other
coordinate system, we need to use polar rectangles (curvilinear rectangles in the
general case), so we have to prove that the limits for curvilinear partitions and
rectilinear partitions are the same. We leave further discussion of such issues for a
course in real analysis.

We noted in passing that both for integrals over surfaces and for change of variables, the relevant functions should be one-to-one except possibly on boundaries. That will ensure that sets with positive area are counted exactly once in any integrals. There is a related property of the ‘correction factor’, namely that it should not

vanish. Consider the case of surface integrals. We have

dS = |∂r/∂u × ∂r/∂v| du dv

and we generally want it to be true that

∂r/∂u × ∂r/∂v ≠ 0.

Otherwise, the cross product will vanish, meaning that the two tangent vectors generating the sides of the parallelogram are collinear. That means that the tangent
plane might degenerate into a line (or even a point), and our whole analysis might
break down. Points at which this happens are called singular points, and it is
often (but not always) the case that the one-to-one property fails near such a point.
Similar remarks apply to the change of variables formula where one wants the
Jacobian not to vanish.

Example 91

r = ⟨a sin φ cos θ, a sin φ sin θ, a cos φ⟩

provides a parametric representation of a sphere of radius a centered at the origin. For (φ, θ) = (0, 0), we have

∂r/∂φ = ⟨a, 0, 0⟩,
∂r/∂θ = ⟨0, 0, 0⟩,

so the cross product and the correction factor are zero. Of course, the parametric representation fails to be one-to-one at the corresponding point (the North Pole of the sphere) because the entire boundary segment φ = 0, 0 ≤ θ ≤ 2π maps into it. This does not affect the validity of the integration formulas, however, because the point is on the boundary of the parameter domain.

Generalizations to Higher Dimensions The change of variables formula may be generalized to three dimensions. Let E be a region in R³ and let f(x, y, z) denote a reasonably smooth function defined on E. Suppose we change variables by smooth functions x = x(u, v, w), y = y(u, v, w), z = z(u, v, w) so that the vector valued function R³ → R³ given by

r = r(u, v, w) = ⟨x(u, v, w), y(u, v, w), z(u, v, w)⟩
carries a subset U in the parameter space onto E. Suppose moreover that this
function is one-to-one except possibly on the boundary of U . Define the Jacobian
of the transformation to be

∂(x, y, z)/∂(u, v, w) = det [ ∂x/∂u   ∂y/∂u   ∂z/∂u ]
                            [ ∂x/∂v   ∂y/∂v   ∂z/∂v ]
                            [ ∂x/∂w   ∂y/∂w   ∂z/∂w ].

Then the change of variables formula says that

∫∫∫_E f(x, y, z) dx dy dz = ∫∫∫_U f(x(u, v, w), y(u, v, w), z(u, v, w)) |∂(x, y, z)/∂(u, v, w)| du dv dw.

Example 92 Consider the transformation

r = r(ρ, φ, θ) = ⟨ρ sin φ cos θ, ρ sin φ sin θ, ρ cos φ⟩

which relates the rectangular coordinates (x, y, z) of a point in space to its spherical
coordinates. We have
∂x/∂ρ = sin φ cos θ,      ∂y/∂ρ = sin φ sin θ,      ∂z/∂ρ = cos φ,
∂x/∂φ = ρ cos φ cos θ,    ∂y/∂φ = ρ cos φ sin θ,    ∂z/∂φ = −ρ sin φ,
∂x/∂θ = −ρ sin φ sin θ,   ∂y/∂θ = ρ sin φ cos θ,    ∂z/∂θ = 0.
We leave it to you to calculate the determinant of this 3×3 array. It is not surprising
that the answer is
∂(x, y, z)/∂(ρ, φ, θ) = ρ² sin φ.
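
Since the calculation of the 3 × 3 determinant is left to you, here is a sketch (assuming sympy, not part of the text) that carries it out and confirms the answer:

    import sympy as sp

    rho, phi, theta = sp.symbols('rho phi theta', positive=True)
    x = rho * sp.sin(phi) * sp.cos(theta)
    y = rho * sp.sin(phi) * sp.sin(theta)
    z = rho * sp.cos(phi)

    # rows are the partial derivatives with respect to rho, phi, theta
    J = sp.Matrix([[sp.diff(c, v) for c in (x, y, z)] for v in (rho, phi, theta)])
    print(sp.simplify(J.det()))   # rho**2 * sin(phi)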

The change of variables formula in fact holds in Rn for any positive integer n, but of
course to make sense of it, one must first define multiple integrals and determinants
in higher dimensions. We shall come back to such matters later in this text.

Exercises for 4.10.

1. In each case find the Jacobian of the indicated transformation.


(a) x = uv, y = u² + v².
(b) x = au + bv, y = cu + dv where a, b, c, d are constants.
(c) x = r cos θ, y = 2r sin θ.
2. The transformation given by x = 2u + 3v, y = u + 2v carries the unit square
0 ≤ u ≤ 1, 0 ≤ v ≤ 1 into a parallelogram. Show that the parallelogram has
area 1.
3. Use the transformation x = ar cos θ, y = br sin θ to evaluate the integral ∫∫_D (x² + y²) dA where D is the region inside the ellipse x²/a² + y²/b² = 1.

4. Use the transformation x = au, y = bv, z = cw to show that the volume inside the ellipsoid x²/a² + y²/b² + z²/c² = 1 is (4/3)πabc.

5. Show that the Jacobian of the transformation

x = ρ sin φ cos θ
y = ρ sin φ sin θ
z = ρ cos φ

is ρ² sin φ.

4.11 Properties of the Integral

Multiple integrals satisfy the rules you learned for single variable integrals. It isn’t
necessary at this point for you to go through the proofs of the rules, and in the
ordinary course of events, most people just use these rules without having to be
told about them. Unfortunately, there are some hypotheses which must hold for
a rule to apply, so it is possible to go wrong. Such matters, including proofs, are usually studied in a course in real analysis. Just for the record, here are some of
the relevant rules, without the proofs. We state them for double integrals, but they
also hold much more generally, e.g., for triple integrals.

(i) Existence. Let D be a closed, bounded subset of R² such that its boundary consists of a finite set of smooth curves. If f is continuous on D, then ∫∫_D f dA exists.

(ii) Linearity. If f and g are both integrable on D, then so is af + bg for any


constants a and b, and
∫∫_D (af + bg) dA = a ∫∫_D f dA + b ∫∫_D g dA.

(iii) Additivity. If D1 and D2 are disjoint sets such that f is integrable on both,
then f is integrable on their union D1 ∪ D2 , and
∫∫_{D₁∪D₂} f dA = ∫∫_{D₁} f dA + ∫∫_{D₂} f dA.

This can be extended slightly in most cases to regions which intersect only on their common boundary, provided that boundary is at worst a finite collection of smooth curves. This rule was referred to earlier when we discussed decomposing a general region into ones bounded by graphs.

(iv) Inequality Property. If f and g are

integrable on D and f (r) ≤ g(r) for every point r in D, then


∫∫_D f dA ≤ ∫∫_D g dA.

(v) Average Value Property. Suppose f is continuous on the closed bounded set D in R². Then there is a point r₀ in D such that

f(r₀) = (1/A(D)) ∫∫_D f(r) dA,

where A(D) is the area of the region D.

Rule (v) is actually a simple consequence of rule (iv), so let's derive it. Since f is continuous, and since the domain D is closed and bounded, f takes on a maximum value M and a minimum value m. Since m ≤ f(r) ≤ M, rule (iv) gives

∫∫_D m dA ≤ ∫∫_D f(r) dA ≤ ∫∫_D M dA.

The constants m and M may be moved out of the integrals (by rule (ii)), so dividing by A(D) = ∫∫_D dA, we obtain

m ≤ (1/A(D)) ∫∫_D f(r) dA ≤ M.

Again, since f is continuous, it assumes every possible value between its minimum value m and its maximum value M. (That is called the intermediate value property of continuous functions.) The quantity in the middle of the inequality is one such value, so there is a point r₀ in D such that

f(r₀) = (1/A(D)) ∫∫_D f(r) dA.

For the proofs of the facts about continuous functions that we just used, we must
refer you to that same course in real analysis. (Maybe your curiosity will be whetted
enough to study the subject some day.)

Non-integrable Functions, Measure Theory, and Some Bizarre Phenomena The significance of rule (i) is not too clear unless one has seen an example of a non-integrable function. ‘Non-integrable’ does not mean that you can't calculate the integral. It means that when you consider the (Riemann) sums which are supposed to approximate the integral, they don't stabilize around any fixed limit as the dissections get finer and finer. Since the integral is defined as that limit, there can be no well defined integral if there is no limit.

Everyone’s favorite non-integrable function (in the plane) is defined as follows. Let
the domain D be the unit square 0 ≤ x ≤ 1, 0 ≤ y ≤ 1. Define f (x, y) = 1 if both
coordinates x and y are rational numbers and f (x, y) = 0 if either x or y is not a

rational number. Note that this is a highly discontinuous function since near any point there are both points of the first type and points of the second type. When one forms a Riemann sum

∑ f(x, y) ∆A,

the point (x, y) in a typical element of area in the dissection could be either of the first type or of the second type. If the choices are made so they are all of the first type, the answer will be ∑ ∆A, which is just the area of D, which is 1. If the choices are made so they are all of the second type, then the sum will be 0. No matter how fine the dissection is, we can always make the choices that way, so 1 and 0 are always possible outcomes for the Riemann sum. That shows there is no stable limit for these sums.
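
One can even watch this happen. The sketch below (assuming sympy, whose exact arithmetic lets us keep track of which tag points are rational; none of this is in the original text) forms Riemann sums for this f over the unit square, tagging each cell first with a rational point and then with an irrational one:

    import sympy as sp

    def riemann_sum(tag, n):
        # dissect the unit square into n*n cells of area 1/n^2 and evaluate
        # f at the tag point chosen in each cell
        dA = sp.Rational(1, n**2)
        total = 0
        for i in range(n):
            for j in range(n):
                x, y = tag(i, n), tag(j, n)
                total += (1 if (x.is_rational and y.is_rational) else 0) * dA
        return total

    rational_tag = lambda i, n: sp.Rational(i, n)                       # corner of cell: rational
    irrational_tag = lambda i, n: sp.Rational(i, n) + sp.sqrt(2)/(2*n)  # interior, irrational

    print(riemann_sum(rational_tag, 10))    # 1: every tag is rational
    print(riemann_sum(irrational_tag, 10))  # 0: every tag has an irrational coordinate

No matter how large n is taken, the two answers stay 1 and 0, so the sums have no limit.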

Late in the 19th century, mathematicians became dissatisfied with the definition of
the integral given by Riemann. The example indicates one of the reasons. Ratio-
nal numbers are much ‘rarer’ than irrational numbers. One can make a plausible
argument that the function f defined above is practically always equal to 0, so its
integral should be zero. In response to such concerns, the French mathematician
Lebesgue developed a more general concept of an integral which subsumes Rie-
mann’s theory but allows more functions (still not all functions) to be integrable.
Lebesgue’s theory was vastly generalized in the 20th century to what is called gen-
eral measure theory. Although the basic ideas of this theory are quite simple, it
requires a high level of technical proficiency with abstract concepts and proofs, so
it is usually postponed until the first year of graduate study in mathematics. For
this reason, although the theory is very general and very powerful, it is not well
understood by non-mathematicians. Despite that, many ideas in modern physics
(and other areas such as probability and statistics) use measure theory, so you will
probably encounter it in some form.

Of course, I can't really tell you in a brief section what this theory is about, but it is worth giving some hints. To explain it, consider the physical problem of determining the total mass of a mass distribution spread over space. As usual, we denote the density function δ(x, y, z). Then for any subset E of R³, the mass inside E is given by

m(E) = ∫∫∫_E δ(x, y, z) dV.
Such a function is called a set function. It attaches to each set a number, called the
measure of the set; in this case the mass inside it. So far, this is just a matter of
notation, but the general theory allows us to consider measures m(E) which cannot
be expressed in terms of any density function. For example, suppose we have a
collection of point masses m1 , m2 , . . . , mn at positions r1 , r2 , . . . , rn . Then m(E)
can be defined to be the sum of the masses which happen to be inside E. There
is no function δ(x, y, z) which could give the density of such a mass distribution.
δ(x, y, z) would have to be zero except at the points where the masses are. At
such points, the density function would have to be infinite, but there would have
to be some way to account for possible differences among the masses. The power
of the general theory is that one can apply it in a very similar way to continuous

distributions (for which there is a density function) or to discrete distributions or


even to combinations of the two.

Given such a set function m(E), one can define the integral of a function f with
respect to that measure. Namely, consider appropriate dissections of the region E
into subsets Eᵢ, and form sums

∑ᵢ f(rᵢ) m(Eᵢ),

where rᵢ is in Eᵢ. If these stabilize around a limiting value, that limit is called the integral and is denoted

∫∫∫_E f dm.

(The details of what dissections to allow and how to take the limit are quite involved.) To see how general this is, note that for the measure associated as above with a finite set of point masses m₁, m₂, . . . , mₙ at points r₁, r₂, . . . , rₙ in E, we have simply

∫∫∫_E f dm = ∑ᵢ f(rᵢ) mᵢ.

This also generalizes the ordinary concept of integral since the set function V(E) which attaches to each set its volume is a perfectly good measure, and the resulting integral is just ∫∫∫_E f dV.

There are two points raised in the above discussion which merit further comment. The concept of a density function is so useful that physicists were reluctant to give it up, even in the case of a point mass. For this reason, the Nobel prize winning physicist Dirac invented a ‘function’ which today is called the ‘Dirac δ function’. Here is how
he reasoned. We will explain it in the one-dimensional case, but it generalizes easily
to two and three dimensions. Consider a unit mass on the real line R placed at the
origin. Associated with this is a measure on subsets I of R defined by m(I) = 1 if I
contains the origin and m(I) = 0 otherwise. This measure can also be described by
its so-called distribution function: F (x) = 0 if x < 0, and F (x) = 1 if x > 0, which
gives the value of m(I) for I = (−∞, x). (It is not hard to see that if you know the
distribution function F (x), you can reconstruct the measure.) Dirac’s idea amounts
to choosing the density function δ(x) to be the derivative F′(x). This derivative is clearly 0 except at x = 0 where it is undefined. (F is not even continuous at 0, so it
certainly isn’t differentiable.) However, imagine that the derivative did make sense
at zero, and that the usual relation between derivatives and integrals holds for this
derivative. Then we would have

∫_a^b δ(x) dx = ∫_a^b F′(x) dx = F(b) − F(a),

and this would be 0 unless a < 0 ≤ b, in which case it would be 1. Thus the integral ∫_a^b δ(x) dx would have exactly the right properties for the set function associated with a point mass at the origin. Physicists liked this so much that they adopted

the useful fiction that there actually is such a function δ(x), and they use it freely
nowadays in formulas and calculations. It all works out correctly, if one is careful,
because a statement involving the δ function usually makes sense as a statement
about set functions if one puts integral signs around it. We will see the δ function
on several occasions in this course.
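
One way to make the fiction concrete on a computer is to replace δ(x) by a tall, narrow bump of total integral 1 and watch what happens as the bump narrows. A numerical sketch (assuming the Python library numpy, which is not part of the text):

    import numpy as np

    def delta_eps(x, eps):
        # a Gaussian of total integral 1 that concentrates at the origin as eps -> 0
        return np.exp(-x**2 / (2 * eps**2)) / (eps * np.sqrt(2 * np.pi))

    x = np.linspace(-1.0, 1.0, 200001)
    dx = x[1] - x[0]
    g = np.cos(x)                       # any smooth test function, with g(0) = 1
    for eps in (0.1, 0.01, 0.001):
        print(eps, np.sum(delta_eps(x, eps) * g) * dx)
    # the printed integrals approach g(0) = 1, mimicking the rule that
    # integrating delta(x) against g(x) picks out the value g(0)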

It turns out that Dirac had a better idea than he might have realized. Subsequently,
mathematicians developed a rigorous concept called a generalized function or dis-
tribution which provides a firm foundation for practically everything the physicists
want to do. (The main exponent of this theory was the French mathematician
Laurent Schwartz.)

The final remark concerns the concept of measure. We said that the set function
m(E) is defined for every subset, but that was wrong. If the measure is to have
reasonable properties, for example, if it gives the expected values for length, area, or volume (depending on whether we are discussing the theory in R, R², or R³), then it is not possible for every subset to have a measure. There must be what are
called non-measurable sets. There is an interesting sidelight to the theory called the
Banach–Tarski Paradox. Banach and Tarski showed that it is possible to take a solid
sphere of radius 1 in R3 and decompose it into a small number of non-overlapping
subsets (about 5), move these sets by rigid motions (combinations of translations
and rotations), and then reassemble them into two non-overlapping solid spheres,
each of radius 1. In this process, no point in any of the three spheres is unaccounted
for. This seems to say that 2 = 1, and that would indeed be a consequence if the
subsets of the dissection were measurable sets and so had well defined volumes.
(Volume certainly has to satisfy the additivity rule for non-overlapping sets, and it
is certainly not changed by translation and rotations.) However, the subsets of the
dissection are not measurable sets, so there is no contradiction, just an apparent
contradiction or paradox. Moreover, the argument just shows that the relevant dissection exists; it doesn't give a physically realizable method to do it. In fact, no one believes that this is physically possible.

Exercises for 4.11.

1. Use

−|f(r)| ≤ f(r) ≤ |f(r)|

and rule (iv) in this section to prove

|∫∫_D f dA| ≤ ∫∫_D |f| dA.
Chapter 5

Calculus of Vector Fields

5.1 Vector Fields

Multidimensional calculus may be defined as the study of derivatives and integrals for functions Rⁿ → Rᵐ for various n and m. We have mostly studied scalar valued functions Rⁿ → R (where n = 2 or n = 3). In physics, such functions are called scalar fields. We now want to study functions F : Rⁿ → Rⁿ (again with n = 2 or n = 3). These are called vector fields. In general, one wants to picture the set on
which the function is defined (the domain) and the set in which it assumes values
as being in different places, but in the case m = n, they are both in the same place,
namely Rⁿ. Hence, there is another way to view such a function, and it turns out to
be quite useful in physics. At each point in the domain of the function F, imagine
the vector F(r) placed with its tail at the point.

Example 93 Consider the function F : R³ → R³ defined by

F(r) = −(GM/|r|²) uρ

where uρ = r/|r| is a unit vector pointing directly away from the origin. This function
gives the gravitational force on a test particle at the point with position vector r
due to a mass M at the origin. It certainly makes sense in this case to view the
vector F(r) with its tail at the point, because that is where the force would act on
a test particle. Because of the minus sign, the force vector in fact points to the
origin. As we get closer to the origin its magnitude increases. It is not defined at
the origin, which should be excluded from the domain of the function F.

This vector field can also be specified in rectangular coordinates by using r = ⟨x, y, z⟩ and |r| = (x² + y² + z²)^{1/2}. After some algebra, we get


 
F(x, y, z) = −GM ⟨ x/(x² + y² + z²)^{3/2}, y/(x² + y² + z²)^{3/2}, z/(x² + y² + z²)^{3/2} ⟩.
One of the problems in dealing with vector fields is to see through complicated
algebra to what is often quite simple geometry. The above expression is a good
example.

You can always think of a vector field as specifying the force at each point in space (or, for plane vector fields, at each point in the plane). Vector fields are used extensively in this way in the theory of gravitation and also in electromagnetic theory.

Example 94 Consider the vector field in R² defined by

v(r) = r uθ

where uθ is the unit vector in the positive θ direction as discussed previously. This can also be expressed in rectangular coordinates:

v(x, y) = ⟨−y, x⟩.

To see this, note that

⟨−y, x⟩ = ⟨−r sin θ, r cos θ⟩ = r⟨− sin θ, cos θ⟩ = r uθ.

(See Chapter 1, Section 2.)

At each point in the plane, the vector v(r) is perpendicular to the position vector
r for that point. It is also tangent to a circle through the point with center at the
origin, and it is directed counter-clockwise. We have |v(r)| = r, so the magnitude
increases as we move away from the origin (where v(0) = 0.) One can get a physical
picture of this vector field as follows. Imagine a disk (such as a phonograph record)
rotating with constant angular velocity ω. Suppose that, by a new photographic
process, it is possible to snap a picture which shows the velocity vector v of each
element of the disk at the instant the picture is taken. These velocity vectors will
look like the vector field described above, except for a factor of proportionality.
They will be tangent to circles centered at the origin, and their magnitudes will be
proportional to the distance to the center (|v| = ωr). Note that in this model, the
picture will be the same whenever the picture is snapped. At a given point in the
plane with position vector r, the velocity vector v(r) will just depend on r, although
the particular element of the disk which happens to be passing through that point
will change (unless the camera, i.e., the coordinate system, rotates with the disk).
Of course, we could envision a somewhat more complicated situation in which the
disk speeds up or slows down, and then v would be a function both of position r
and time t.

The above discussion suggests a second physical model to picture vector fields, that
of fluid flow. The example was of a 2-dimensional field, but the idea works just

as well in R3 . Imagine a fluid flowing in a region of space, for example, Lake


Michigan, or the atmosphere of the Earth. In the example, the relative positions of
the elements of the medium (a phonograph record) do not change, but in general,
that won’t be the case. Imagine a super hologram which will show at each instant
the velocity vector v of each element of the fluid. This will generally be a function
v = v(r, t) of position and of time. If we ignore the dependence on time, we get
a vector function v : R3 → R3 , or vector field. (In fluid dynamics, one often
distinguishes between steady flows where v does not depend on time and time
dependent flows where it does.) In fluid mechanics, one is actually more interested
in the momentum field defined by F(r, t) = δ(r, t)v(r, t), where δ(r, t) is a scalar
function giving the density of the fluid at position r at time t.

Streamlines Let F : Rⁿ → Rⁿ (n = 2 or n = 3) denote a vector field. (Assume for simplicity that F does not also depend explicitly on another variable like time.) Consider paths r = r(t) with the property that at each point of the path, the vector field F(r(t)) at that point is tangent to the path. Since the velocity vector is also tangent to the path, we can state this symbolically:

dr/dt = F(r).     (59)
(Actually, there should be a factor c(r) of proportionality since the vectors are only
parallel. However, we are ordinarily only interested in the geometry of the path,
not how it is traced out in time. By a suitable change of parameterization, we can
arrange for c to be 1.) Such paths are called either lines of force or streamlines
depending on which physical model we have in mind. In the case of fluid flow, the
term “streamline” is self-explanatory. It is supposed to be the path followed by an
element of fluid as it moves with the flow. In the case of force fields, the concept,
line of force, is a bit mysterious. It was first introduced by Faraday in his work
on electricity and magnetism. He thought of the lines of force as having physical
reality, behaving almost like elastic bands pulling the objects together (or pushing
them apart in the case of repulsive forces.) I will leave further discussion of the
meaning to your physics professors.

Example 95 We shall find the streamlines for the vector field in R² defined by

v(x, y) = ⟨−y, x⟩.

From the previous discussion of this field, it is clear that they will be circles centered at the origin, but let's see if we can derive that from equation (59). We get

⟨dx/dt, dy/dt⟩ = ⟨−y, x⟩,

which may be rewritten in terms of components:

dx/dt = −y,
dy/dt = x.

This is a system of differential equations, and we won’t study general methods for
solving systems until later. However, in the 2-dimensional case, there is a simple
trick which usually allows one to find the curves. Namely, divide the second equation by the first to obtain

dy/dx = −x/y.

This may be solved by separation of variables:

y dy = −x dx,
y²/2 = −x²/2 + c,
x² + y² = 2c.

These curves are circles centered at the origin as expected. Note that we don't learn how they are traced out as functions of t.
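
You can also solve the system numerically and watch the streamline close up into a circle. A sketch (assuming numpy, not part of the text) using classical Runge–Kutta steps:

    import numpy as np

    def field(p):
        # the vector field v(x, y) = <-y, x>
        x, y = p
        return np.array([-y, x])

    p = np.array([1.0, 0.0])
    dt = 0.01
    for _ in range(628):                 # t runs from 0 to about 2*pi
        k1 = field(p)
        k2 = field(p + dt/2 * k1)
        k3 = field(p + dt/2 * k2)
        k4 = field(p + dt * k3)
        p = p + dt/6 * (k1 + 2*k2 + 2*k3 + k4)

    print(p)              # back near the starting point (1, 0)
    print(np.hypot(*p))   # sqrt(x^2 + y^2) stays essentially 1: a circle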

Gradient Fields Given a scalar field f : Rⁿ → R, we may always form a vector field by taking the gradient:

F(r) = ∇f(r).

Example 96 Let f(r) = GM/ρ = GM/√(x² + y² + z²) for r ≠ 0. We in essence calculated the gradient of this function in Example 2, Section 7, Chapter III. The answer is

∇f = −(GM/ρ²) uρ,

which is the gravitational field of a point mass at the origin.

Example 97 A metal plate is heated so the temperature at (x, y) is given by T(x, y) = x² + 2y². A heat seeking robot starts at the point (1, 2). We shall find the path it follows. “Heat seeking” means that at any point, it will move in the direction of maximum increase of temperature, i.e., in the direction of ∇T = ⟨2x, 4y⟩. Since we aren't told how fast it responds to the temperature gradient, its equations of motion will be

dx/dt = c(2x),
dy/dt = c(4y),
where c = c(x, y) is an unknown function. We can't actually determine the motion along the path, but using the same trick as above, we can determine its shape:

dy/dx = 2y/x,
so dy/y = 2 dx/x,
and ln |y| = ln |x|² + c.

If we exponentiate, we obtain

|y| = |x|² e^c = C|x|²

where C = e^c is just another constant. Putting x = 1, y = 2 (where the robot starts) yields C = 2. One can make a convincing argument that, considering the starting point and the nature of the possible path, we may take x and y positive, so the solution is

y = 2x²,

which describes a parabola. Note that, as before, the method depends on this being a problem in the plane. If we had a z to deal with, it wouldn't have worked.

It should be noted that while gradient fields are quite important, not every field is
a gradient. As we shall see later, gradient fields must have the property that line
integrals depend only on end points, and as we saw earlier, not every force field has
that property.

Exercises for 5.1.

1. Sketch the following vector fields by drawing some vectors at representative


points.
(a) F(x, y) = x2 i + y 2 j.
(b) F(x, y) = y 2 i + x2 j.
(c) F(x, y) = 3i − 4j.
(d) F(x, y, z) = ⟨x, y, z⟩.
(e) F(x, y) = (xi − yj)(x² + y²)^{−1/2}.

2. For each of the scalar fields, sketch its gradient field by drawing vectors at
representative points
(a) f (x, y) = x + 3y.
(b) f (x, y) = 3x2 + 4y 2 .
(c) f (x, y, z) = x2 + y 2 .

3. Find the streamlines of the plane vector field defined by F(r) = y i + x j.

4. The shape of a mountain is described by the equation z = 4000 − x2 − 4y 2 .


There is a shelter at (10, 14). Find a path from the shelter to the bottom of
the mountain (at z = 0) which descends everywhere as rapidly as possible.
Describe the projection of the path in the x, y-plane, thought of as a map of
the mountain, rather than the actual path on the mountain.

5.2 Surface Integrals for Vector Fields

In this section, we shall use the notation F to denote a general vector field. The mathematics won't care, but you may find it useful on occasion to think of it as either a force field or a momentum field for a fluid flow. Also, any dependence of F on time won't matter in what we do, so we shall assume F = F(r) is a function of position alone.
Let F : R³ → R³ denote a vector field, and suppose S is a surface in R³. Suppose
everything is reasonably smooth, so we don’t have to worry about singularities or
other bizarre behavior. At each point on S, choose a unit vector N perpendicular
to the surface. (Hence, N will usually vary from point to point on S.) Consider
the scalar valued function of position F · N. The integral of this function over the surface,

∫∫_S F · N dS,
is called the surface integral of the vector field F over the surface S. One must
exercise care in choosing the normal vectors since at any given point on the surface,
there will be two unit vectors perpendicular to the surface. Clearly, the direction of
the normal should not change precipitously between nearby points on the surface.
(The choice of the normals, called the orientation of the surface, can be a bit subtle,
and we shall come back to this important point later.) One often uses the notation

dS = N dS

so that the element of surface area becomes a vector quantity. Then the notation for the surface integral becomes

∫∫_S F · dS.

Other notational variations include using a single integral sign, ∫, and using some other symbol, such as dσ or dA, for the vector element of surface area.
Example 98 Let F(r) = −(GM/ρ²) uρ and let S be the surface of a sphere of radius a. Assume all the normals point outward from the origin. Thus in this case, at each point on the surface, N = uρ. Hence, since ρ = a on the sphere S,

F · N = −(GM/a²) uρ · uρ = −GM/a².

Thus,

∫∫_S F · N dS = −(GM/a²) ∫∫_S dS = −(GM/a²)(4πa²) = −4πGM.
Notice that most of the algebra is really superfluous here. The vector field is parallel to the unit normal, so the dot product is just the constant value −GM/a² of the field's normal component on the sphere. Since this is constant, integrating it over the sphere


just results in that constant times the surface area of the sphere. With a bit of
practice, you should be able to do such a simple example in your head.

Interpretation for Fluid Flow Let F denote the momentum field of a steady fluid flow (i.e., F(r) = δ(r)v(r), where δ(r) denotes density and v(r) velocity at the point with position vector r). Then it turns out that the surface integral ∫∫_S F · N dS gives the rate of flow, or the flux, of the fluid through the surface. To see this, consider first the special case where F, δ, and v are all constant, and the surface is a parallelogram spanned by vectors a and b. In one unit of time, each element of fluid is displaced by the vector v, so all the fluid which flows through the parallelogram will lie within the parallelepiped spanned by a, b, and v. The volume contained therein is v · (a × b). (See Chapter I, Section 4.)

Hence, the mass passing through the parallelogram per unit time is

δ v · (a × b) = (δv) · (a × b) = F · (a × b).

Since A = |a × b| is the area of the parallelogram, we may rewrite a × b = AN where N is a unit vector perpendicular to the parallelogram. Hence,

flux through parallelogram = F · N A.

Note the significance of the direction of the normal vector in this context. If F and N point to the same side of the parallelogram, the flux is positive, and if they point to opposite sides, the flux is negative.

The above analysis may be extended to curved surfaces. The quantity F · N dS represents the flux through a small element of surface area dS, and integrating gives the net flux through the entire surface.

It is common to use the term flux quite generally, even where the fluid flow model is not appropriate. For example, in gravitational theory (and also the theory of electromagnetic fields), there is nothing “flowing” in the ordinary sense. In this case, the flux is interpreted as a measure of the “number” of lines of force passing through the surface. Since this is a mathematics course, I will leave such matters for your physics professor to explain.

We continue with further examples.

Example 99 Let F(r) = z k and let S be the top hemisphere of the sphere of radius
a centered at the origin. Use outward normals. As above N = uρ . On the surface
of the sphere, we use φ, θ as intrinsic coordinates as usual. Then ρ = a, z = a cos φ,
and dS = a2 sin φ dφ dθ. Moreover, N = uρ , and since the angle between N and k
is φ, we have k · N = cos φ and
F · N = zk · N = a cos φ cos φ = a cos2 φ.
Hence,
∫∫ ∫ 2π ∫ π/2
F · N dS = a cos2 φ a2 sin φ dφ dθ
S 0 0

where the limits were chosen to cover the hemisphere. Thus,

∫∫_S F · N dS = a³ ∫₀^{2π} ∫₀^{π/2} cos² φ sin φ dφ dθ
             = a³ (2π) [−cos³ φ/3]₀^{π/2}
             = (2πa³/3)(1) = 2πa³/3.
3 3

If the integration had been over the entire sphere, the answer would have been
4πa3 /3. Can you see why without actually doing the calculation? (Try drawing
a picture showing F and N on the bottom hemisphere and comparing with the
picture on the top hemisphere.)
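
Here again a machine check is easy. The sketch below (assuming numpy, not part of the text) approximates the integral of a³ cos²φ sin φ over 0 ≤ φ ≤ π/2, 0 ≤ θ ≤ 2π from Example 99 by a Riemann sum on a (φ, θ) grid, with a = 1:

    import numpy as np

    a, n = 1.0, 400
    dphi, dtheta = (np.pi/2) / n, (2*np.pi) / (2*n)
    phi = (np.arange(n) + 0.5) * dphi            # midpoints of [0, pi/2]
    theta = (np.arange(2*n) + 0.5) * dtheta      # midpoints of [0, 2*pi]
    P, T = np.meshgrid(phi, theta)

    # F . N dS = a^3 cos^2(phi) sin(phi) dphi dtheta on the hemisphere
    flux = np.sum(a**3 * np.cos(P)**2 * np.sin(P)) * dphi * dtheta
    print(flux, 2*np.pi*a**3/3)                  # both are about 2.0944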

It is worth remembering that on the surface of a sphere of radius a,

dS = N dS = uρ a² sin φ dφ dθ.

Similarly, on the lateral surface of a right circular cylinder of radius a centered on the z-axis, the outward unit normal vector N = uᵣ points directly away from the z-axis, and dS = a dθ dz, so

dS = N dS = uᵣ a dθ dz.

Surface Integrals for Parametrically Defined Surfaces In the previous chapter, we calculated the surface area of a parametrically defined surface as follows. Each small curvilinear rectangle on the surface was approximated by a tangent parallelogram, the sides of which were the vectors a = (∂r/∂u) ∆u and b = (∂r/∂v) ∆v. The area of the latter was then used as an approximation for the area of the former. The same reasoning may be used to calculate flux. The flux through a curvilinear rectangle on the surface may be approximated by the flux through the (tangent) parallelogram, i.e., as above, by F · (a × b) where F is the value of the vector field at a corner of the parallelogram. Expanding this out yields

F(r(u, v)) · (∂r/∂u × ∂r/∂v) ∆u ∆v.

Hence,

∫∫_S F · dS = ∫∫_D F(r(u, v)) · (∂r/∂u × ∂r/∂v) du dv     (60)
where D is the domain of the parameterizing function r = r(u, v). Note that the normal vectors in this case are given by
 
N = (∂r/∂u × ∂r/∂v) / |∂r/∂u × ∂r/∂v|,

so their directions are determined by the parametric representation. However, if


those directions don’t square with your expectations, it is easy enough to reverse
them. Just change the parametric representation by reversing the roles of the
parameters, hence, the order in the cross product.

It is seldom the case that one wants to use formula (60), although it underlies the
other calculations. In most cases, the surface is a familiar one such as a sphere or
a cylinder, in which case one can visualize the unit normals N directly, or it is the
graph of a function. (See the exercises for some examples where one must resort to
the general formula.)

Surface Integrals for the Graph of a Function Suppose the surface S is the graph of a function z = f(x, y). In that case, we use the parametric representation r = r(x, y) = ⟨x, y, f(x, y)⟩, and some calculation shows

dS = (∂r/∂x × ∂r/∂y) dx dy = ⟨−fx, −fy, 1⟩ dx dy.

(See the corresponding discussion for the surface area of a graph in Section 9,
Chapter IV.)

Hence, if the vector field is resolved in components, F = ⟨Fx, Fy, Fz⟩, the surface integral takes the form

∫∫_S F · dS = ∫∫_D (−Fx fx − Fy fy + Fz) dx dy

where D is the domain of the function f in the x, y-plane. Note that in this case the unit normal vector

N = ⟨−fx, −fy, 1⟩ / √(fx² + fy² + 1)

points generally upward.

Example 99, (revisited) We use the same hemisphere but view it as the graph of the function f(x, y) = √(a² − x² − y²) with domain D a disk in the x, y-plane of radius a, centered at the origin. Since F(x, y, z) = ⟨0, 0, z⟩, we need not calculate fx and fy. We have

∫∫_S F · dS = ∫∫_D z dy dx = ∫∫_D √(a² − x² − y²) dy dx.

However, looking at this carefully reveals it to be the same integral as that for the volume under the hemisphere, which is (1/2)(4πa³/3) = 2πa³/3. Since the calculation of the double integral is so easy, this approach is simpler than the previous approach using the intrinsic coordinates φ and θ on the sphere. However, in most cases the intrinsic approach is superior.

Sometimes, the surface needs to be decomposed into pieces in order to calculate the flux.

Example 100 Let F(r) = r = ⟨x, y, z⟩. Let S be the surface enclosing a right circular cylinder of radius a and height h, centered on the z-axis, where this time we include both the top and bottom as well as the lateral surface. We shall find the flux out of the cylinder, that is, the normals will be chosen so as to point out of the cylinder.

We decompose the surface into three parts: the bottom S1 , the lateral surface S2 ,
and the top S3 .

The bottom surface is easiest. The outward unit normal is N = −k, and F(r) = r = ⟨x, y, 0⟩ is perpendicular to N for points in the x, y-plane. Hence, F · N = 0 on S₁, so the flux is zero.

The top surface is also not too difficult. The unit normal is N = k, and in the plane z = h, we have F(r) = ⟨x, y, h⟩. Hence, F · N = h, and

∫∫_{S₃} F · N dS = h ∫∫_{S₃} dS = h(πa²) = πa²h

since the top surface is a disk of radius a.

Finally, we consider the lateral surface S₂. We may resolve F into horizontal and vertical components by writing

F(r) = ⟨x, y, z⟩ = ⟨x, y, 0⟩ + ⟨0, 0, z⟩ = r uᵣ + z k,

but since r = a on the cylinder, we have

F(r) = a uᵣ + z k.

On the other hand, the outward unit normal is N = uᵣ, so F · N = a uᵣ · uᵣ + z k · uᵣ = a. Hence,

∫∫_{S₂} F · N dS = a ∫∫_{S₂} dS = a(2πah) = 2πa²h.
Here we used the fact that the lateral surface area of a cylinder is the height times the circumference of the base. Note that, in the above calculation, the vertical component of the force is irrelevant since the normal is horizontal. Also, the horizontal component is normal to the lateral surface of the cylinder, and that makes calculating the dot product quite easy.

The total flux is the sum for the three surfaces:

0 + πa²h + 2πa²h = 3πa²h.

Orientation As we noted, at each point of a surface, there are generally two


possible directions for the unit normal vector. Explaining how one goes about

specifying those directions and seeing how they are related as one moves about the
surface is a bit trickier than you might imagine. The simplest case is that of a closed
surface, i.e., a surface S, bounding a solid region E in space. In that case, there are
two obvious orientations or ways of choosing the normal directions. We may use
the outward orientation, in which all the normals point away from the region, or
we may use the inward orientation, in which they all point into the region. If the
solid region is something like a solid sphere, cylinder, cube, etc., this is quite easy
to visualize. However, the principle works even for regions a bit more complicated than this. Consider, for example, the solid region E between two concentric spheres. The boundary of that region consists of the two spheres. (It comes in two pieces, or in technical jargon, it is not connected.) The outward normals, relative to E, point away from the origin on the outer sphere and toward the origin on the inner sphere. The same sort of thing happens for any surface consisting of the (disconnected) boundary of any solid region having one or more holes. (Imagine a hunk of swiss cheese.)

[Figure: outward normals on the two boundary spheres of the region between concentric spheres.]

For surfaces, which are not boundaries of solid regions, it is sometimes not so clear
how to choose an orientation. If the surface is given parametrically, r = r(u, v),
with the parameters given in some specified order, then this implies an orientation,
since at each point
∂r ∂r
×
∂u ∂v
determines the direction of the normal. Suppose however, the surface consists of
separate pieces which are connected one to another, but such that no one parameter-
ization works for the entire surface. The closed surface bounding a solid cylinder,
including top and bottom, is such an example. Another example, which is not
closed, would be the 5 faces of a cubic box open at the top. There is a fairly
straightforward way to specify an orientation for such a surface. Suppose that
such a surface is decomposed into separate patches, each with a smooth parametric
representation. That gives each patch a specific orientation, but it can always be

reversed by reversing the order of the parameters. We hope to be able to choose the
orientations for the parametric patches so that they are coherently related. If this is
possible, we say the surface has an orientation. The difficulty is that as one crosses
an edge which is the common boundary between two adjacent patches, the direc-
tion of the normal may change quite radically, so it is a bit difficult to make precise
what we mean by saying the normals are coherently related. Fortunately, in most
interesting cases, it is fairly clear how the normals on adjacent patches should be
related. We will return to this point later during our discussion of Stokes’s Theorem
and related matters.
It is quite surprising that there are fairly simple surfaces in R³ without coherent orientations. The so called Moebius band is such a surface. One can make a Moebius band by taking a long strip of paper, twisting it once and connecting the ends. (Without the twist, we would get a long narrow “cylinder”.) See the diagram.
To see that the Moebius band is not orientable, start somewhere and choose a
direction for the normals near the starting point. Then continue around the band,
as indicated, choosing normals in roughly the same direction as you progress. When
you get all the way around the band, you will find that the normal direction you
have “dragged” with you now points opposite to the original direction. (To get back
the original direction, you will have to go around the band once again.) See the
Exercises for a parametric representation of the Moebius band—less one segment—
which exhibits this discontinuous reversal of the normal.

It does not make sense to try to calculate the flux through a non-orientable surface
such as a Moebius band, since you won’t be able to assign positive and negative
signs for the contributions from different parts of the surface in a coherent fashion.

Exercises for 5.2.

1. Calculate the flux ∫∫_S F · N dS through the indicated surface for the indicated vector field.
(a) F = 2xi + 3yj − zk, S the portion of the plane x + y + z = 1 in the first octant. Use the normals pointing away from the origin. (You can solve for z and treat the surface as the graph of a function.)
(b) F = zk, S the lower half of the sphere x² + y² + z² = 1. Use normals pointing towards the origin.
(c) F = ⟨2y, 3x, z⟩, S the portion of the paraboloid z = 9 − x² − y² above the xy-plane. Use normals pointing away from the origin. (The surface is the graph of a function.)

2. Let F = −yi + xj and let S be the portion of the cone z = √(x² + y²) within the cylinder x² + y² = 4. Find the flux through the surface using normals pointing away from the z-axis. You can treat the surface as the graph of a function, but it might be faster to do the problem by visualizing geometrically the relation of F to the normals to the surface.

3. Let S be a sphere of radius a centered at the origin. For each of the indicated vector fields, find the flux out of the surface S.
(a) F = z³k. Hint: k · N = cos φ.
(b) F = xi + yj + zk = r.
(c) F = xi + yj = r uᵣ. Hint: uᵣ · N = sin φ.
(d) F = −yi + xj. Hint: At each point of S, the vector field F is tangent to the circle of latitude through that point.

4. Let S be the closed cylinder of radius a, centered on the z-axis, with its base
in the x, y-plane, and extending to the plane z = h. Both the top and bottom
of the cylinder are considered part of S in addition to the cylindrical lateral
surface. Find the flux out of S for each of the following vector fields. Note
that the computations have to be done for each of the three components of S.
(a) F = zk.
(b) F = xi + yj + zk = r.
(c) F = xi + yj = r ur . Hint: On the lateral surface, N = ur .
(d) F = −yi + xj.

5. Let F(r) = r = hx, y, zi, and let S be the surface bounding the unit cube
in the first octant. (S consists of 6 squares, each of side 1.) Find the flux
through S using outward normals. Hint: The surface integral breaks up into 6
parts, three of which are zero. (Why?) The remaining 3 involve integrating a constant over a square.

6. Let F(r) = zk and let S be the surface bounding the tetrahedron cut off in
the first octant by the plane x + 2y + z = 2. Find the flux out of S. (Note
that S breaks up into four pieces.)

7. Consider the cylindrical surface given parametrically by

r = ⟨a cos θ, a sin θ, s⟩

where 0 ≤ θ < 2π, −b ≤ s ≤ b. It has radius a and height 2b. Note that r(θ, s) approaches r(0, s) as θ → 2π. Hence, the entire surface is represented smoothly by the parametric representation, the “seam” at θ = 2π being illusory.
(a) Find n(θ, s) = ∂r/∂θ × ∂r/∂s.
(b) Show that n(θ, 0) → n(0, 0) as θ → 2π.

8. Consider the Moebius surface given parametrically by

r = ⟨(a + s cos(θ/2)) cos θ, (a + s cos(θ/2)) sin θ, s sin(θ/2)⟩

where 0 ≤ θ < 2π, −b ≤ s ≤ b. (Assume b < a.) Note that r(θ, s) approaches r(0, −s) as θ → 2π. Hence, the “seam” at θ = 2π is real, and the parametric representation fails to be continuous across it.
(a) Find n(θ, 0) = (∂r/∂θ)(θ, 0) × (∂r/∂s)(θ, 0).
(b) Follow n(θ, 0) as 0 ≤ θ < 2π. In particular, note that n(θ, 0) → −n(0, 0) as θ → 2π.

5.3 Conservative Vector Fields

Let F be a vector field defined on some open set in Rⁿ (where as usual n = 2


or n = 3). For technical reasons, we also want to assume that the domain of the function is connected, i.e., it can't be decomposed into separate disjoint open sets.
We say that F is conservative if it is the gradient F = ∇f of some scalar field f .

For example, in R³, if f(r) = 1/|r|, then F = −(1/|r|²) uρ is conservative. This


inverse square law field arises in gravitation and also in electrostatics.

Note that the function f is not generally unique. For, if f is one such function,
then for any constant c, we have

∇(f + c) = ∇f + ∇c = ∇f = F.

Hence, f + c is another such function. The converse is also true. If f₁ and f₂ both work for the same conservative field F, then

F = ∇f₁ = ∇f₂ ⇒ ∇f₁ = ∇f₂ ⇒ ∇(f₁ − f₂) = 0.

However, it is not hard to see that a (smooth) function with zero gradient must be
constant. (Can you prove it? You need to use the fact that the domain is connected!)
It follows that f1 − f2 = c, i.e., any two functions with the same gradient differ by
a constant.

It is sometimes much easier to find the scalar function f and then take its gradient than it is to find the vector field F directly.

Example 101, (gravitational dipole) Suppose two point masses of equal mass m are located at the points (a, 0, 0) and (−a, 0, 0). Let F₁ denote the gravitational force due to the first mass and F₂ the force due to the second mass. To find the combined force F = F₁ + F₂ directly requires some complicated vector algebra. However, F₁ = ∇f₁ where

f₁(x, y, z) = Gm/√((x − a)² + y² + z²).

Here, the expression in the denominator is the distance from (x, y, z) to the first mass. Similarly, the force due to the second mass is F₂ = ∇f₂ where

f₂(x, y, z) = Gm/√((x + a)² + y² + z²).

Hence, the combined force is

F = ∇f₁ + ∇f₂ = ∇(f₁ + f₂).

Calculating f = f₁ + f₂ is much simpler than calculating F₁ + F₂:


f(x, y, z) = Gm [ 1/√((x − a)² + y² + z²) + 1/√((x + a)² + y² + z²) ].

Of course, you still have to find the gradient of this function to find F = ∇f .
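
Taking that gradient is exactly the sort of routine differentiation a computer algebra system handles well. A sketch (assuming sympy, not part of the text):

    import sympy as sp

    x, y, z, G, m, a = sp.symbols('x y z G m a')
    f = (G*m / sp.sqrt((x - a)**2 + y**2 + z**2)
         + G*m / sp.sqrt((x + a)**2 + y**2 + z**2))

    # F = grad f, one component per coordinate
    F = [sp.simplify(sp.diff(f, v)) for v in (x, y, z)]
    print(F[0])   # the x-component of the combined force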

Line Integrals for Conservative Fields Conservative fields are particularly nice to deal with when computing line integrals. Suppose F = ∇f is conservative, and C is an oriented path in its domain going from point A to point B. (A and B might be the same point, in which case the path would be a closed loop.) We may calculate the line integral ∫_C F · dr as follows. Choose a parametric representation r = r(t), a ≤ t ≤ b, for C. Then

∫_C F · dr = ∫_a^b F(r(t)) · r′(t) dt = ∫_a^b ∇f(r(t)) · r′(t) dt.

However, by the chain rule,

∇f(r(t)) · r′(t) = (d/dt) f(r(t)).

Hence,

∫_C F · dr = ∫_a^b (d/dt) f(r(t)) dt = f(r(t))|_a^b,

so

∫_C F · dr = f(r(b)) − f(r(a)) = f(B) − f(A).     (61)
In particular, the value of the line integral is path independent since it depends only
on the endpoints of the path.

Example 102 Consider the inverse square field F = −(1/|r|²) uρ, which is the gradient of the function f defined by f(r) = 1/|r|. We have

∫_C F · dr = 1/|r_B| − 1/|r_A|.

Compare this with the calculation in the example at the end of Chapter I, Section 6, where the relevant integral in the plane case was calculated directly.
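
Path independence can also be tested numerically. The sketch below (assuming numpy; the two sample paths are illustrative choices, not from the text) approximates ∫_C F · dr for the inverse square field along two different paths from A = (1, 0, 0) to B = (0, 2, 0); both answers come out near f(B) − f(A) = 1/2 − 1 = −1/2:

    import numpy as np

    def line_integral(path, n=100000):
        # approximate the line integral by summing F (at segment midpoints) . dr
        t = np.linspace(0.0, 1.0, n + 1)
        r = path(t)                          # 3 x (n+1) array of points on C
        rho = np.linalg.norm(r, axis=0)
        F = -r / rho**3                      # F = -(1/rho^2) u_rho = -r/rho^3
        Fmid = (F[:, 1:] + F[:, :-1]) / 2
        dr = r[:, 1:] - r[:, :-1]
        return np.sum(Fmid * dr)

    segment = lambda t: np.array([1 - t, 2*t, 0*t])   # straight line from A to B
    arc = lambda t: np.array([np.cos(np.pi*t/2), 2*np.sin(np.pi*t/2), 0*t])
    print(line_integral(segment), line_integral(arc))  # both about -0.5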

It is important to note that not every vector field is conservative. The easiest way
to see this is to note that there are plenty of vector fields which do not have the
path independence property for line integrals.

Example 103 Let F(x, y) = ⟨−y, x⟩ = r uθ. Let C₁ be the closed path which starts
and ends at the point (1, 0) (so A = B) and traverses a circle of radius 1 in the
counter-clockwise direction. This line integral has been calculated in exercises, and
the answer is 2π. On the other hand, if we choose C2 to be the same path but
traversed clockwise, the answer will be −2π. Finally, to emphasize the point, let C3
be the trivial path which just stays at the point (1, 0). The line integral for C3 will
clearly be zero. The answer is certainly not path independent, so the field is not
conservative.

The path independence property for a vector field in fact ensures that it is conservative. For suppose F is defined on some connected open set in Rⁿ and that ∫_C F · dr depends only on the endpoints of C for every oriented path C in the domain of F. We can construct a function f for F as follows. Choose a base point P₀ in the domain of F. For any other point P with position vector r in the domain of F, choose any path C in the domain from P₀ to P. Define

f(r) = ∫_C F · dr = ∫_{P₀}^{P} F · dr

where the second form of the notation emphasizes that the result depends only on the endpoints of the path. I claim that ∇f = F. To see this, choose a parameterizing function r(u), a ≤ u ≤ t, for C where r(a) = r₀ and r(t) = r. Then by the fundamental theorem of calculus,

(d/dt) f(r(t)) = (d/dt) ∫_a^t F(r(u)) · r′(u) du = F(r(t)) · r′(t).

On the other hand, by the chain rule,

(d/dt) f(r(t)) = ∇f(r(t)) · r′(t),

so

∇f(r(t)) · r′(t) = F(r(t)) · r′(t).
However, since the path C is entirely arbitrary, its velocity vector r′(t) at its endpoint is also entirely arbitrary. That means that the dot products of ∇f and F with every possible vector (including i, j, and k) are the same, and it follows that ∇f = F, as claimed. This is an important enough fact to state formally.

Theorem 5.4 Let F be a vector field defined on some connected open set in Rⁿ. F is conservative if either of the following conditions holds: (i) F = ∇f for some function f, or (ii) F satisfies the path independence condition for line integrals in its domain.

It should be noted that if F is conservative, then the line integral for F around any
closed loop is zero. For, the result is independent of the path, and we may assume
the path is the trivial path which never leaves the starting point. (See the diagram
below.)

The converse is also true. If ∫_C F · dr = 0 for any closed loop C in the domain of F, then F must be conservative. For, if C₁ and C₂ are two oriented paths which start and end at the same point, then we may consider the closed path C formed by traversing first C₁, and then the opposite C₂′ of the second path. We have

0 = ∫_C F · dr = ∫_{C₁} F · dr − ∫_{C₂} F · dr,

whence ∫_{C₁} F · dr = ∫_{C₂} F · dr. Thus, F has the path independence property for line integrals.

[Figure: two paths C₁ and C₂ from A to B, and the trivial closed loop at A = B.]

Relation to the Work Energy Theorem in Physics For force fields, the notion of ‘conservative field’ is intimately tied up with the law of conservation of energy. To see why, suppose a particle moves under the influence of a conservative force F = ∇f. Then, by Newton's Second Law, we have

m d²r/dt² = ∇f.

Take the dot product of both sides with the velocity vector v = dr/dt. We get

m (dv/dt) · v = ∇f · v.     (62)
There is another way to obtain this equation. Let

E = (1/2) m (v · v) − f.

Then

dE/dt = (1/2) m (2 (dv/dt) · v) − df/dt = m (dv/dt) · v − ∇f · v,

since df/dt = ∇f · v by the chain rule. Thus, equation (62) is equivalent to the assertion that dE/dt = 0, i.e., that E is constant.
This discussion should be familiar to you from physics. The quantity T = (m/2) v · v = (m/2)|v|² is called the kinetic energy, the quantity V = −f is usually called the potential energy, and their sum E = T + V is called the total energy. Under the above assumptions, the total energy is conserved.

Because of the above considerations, a function V = −f such that F = −∇V is often called a potential function for F. (From a purely mathematical point of view, the minus sign is an unnecessary complication, so mathematicians sometimes leave it out when using the term ‘potential’. In this course, we shall follow the common usage in physics and keep the minus sign.)

Finding Potential Functions If we are given a conservative vector field F, we want a method for finding a function f such that F = ∇f. (V = −f will be the associated potential function.) We can of course use the line integral method described above. To this end, it is easiest to choose the path in a straightforward standard way. One common choice is a polygonal path with segments parallel to the coordinate axes. For example, suppose F = ⟨F₁, F₂⟩ is a vector field in the plane. Fix a point (x₀, y₀) in its domain, and for any other point (x, y) in its domain, consider the path consisting of two segments, the first parallel to the

x-axis from (x₀, y₀) to (x, y₀) and the second parallel to the y-axis from (x, y₀) to (x, y). It is easy to check that the line integral for this composite path is

f(x, y) = ∫_{x₀}^{x} F₁(x′, y₀) dx′ + ∫_{y₀}^{y} F₂(x, y′) dy′. (63)

(Note that we have to use dummy variables x′, y′ to avoid confusing the values of x and y at the endpoint with the values on the line segments.)

[Diagram: the polygonal path from (x₀, y₀) horizontally to (x, y₀), then vertically up to P = (x, y).]

This formula may not always apply, because the domain of F can have holes in it which may block one or the other of the line segments for some values of x or y.

Formula (63) can be a bit awkward to apply, but there is an equivalent method
which uses indefinite integrals instead. We illustrate it by an example.

Example 104 Let n = 2 and let F(x, y) = ⟨x² + 2xy, x² + 2y⟩. We shall find a function f as follows. If ∇f = F, then using components

∂f/∂x = F₁(x, y) = x² + 2xy,
∂f/∂y = F₂(x, y) = x² + 2y.

Integrate the first equation with respect to x to obtain

f(x, y) = (1/3)x³ + x²y + C(y).

Note that the indefinite integral as usual involves an arbitrary constant, but since this was a ‘partial integration’ keeping y constant, this constant can in fact depend on y. Now differentiate with respect to y to obtain

∂f/∂y = x² + C′(y).

Comparing this with the previous expression for ∂f/∂y yields

x² + C′(y) = x² + 2y or C′(y) = 2y

from which we conclude C(y) = y² + E, where E is a constant independent of both x and y. Hence,

f(x, y) = (1/3)x³ + x²y + y² + E

is the most general function which will work. If we just want one such function, we can take E = 0, for example. Checking that this works yields

∂f/∂x = x² + 2xy
∂f/∂y = x² + 2y

as required.

It is very important to check the answer, since you could easily make a mistake in integrating. The field might not even be conservative, yet through an error you have convinced yourself that you have found a function f with the right properties. There is a variation of the method which is sometimes a little faster.

As above, look for f such that

∂f/∂x = F₁(x, y) = x² + 2xy, ∂f/∂y = F₂(x, y) = x² + 2y.

Integrate the first equation with respect to x to obtain

f(x, y) = (1/3)x³ + x²y + C(y).

Next integrate the second equation with respect to y to obtain

f(x, y) = x²y + y² + D(x).

If we compare these expressions, we see that we may choose C(y) = y² and D(x) = (1/3)x³. Hence,

f(x, y) = (1/3)x³ + x²y + y²

is the desired function.
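If you have computer resources available, a computer algebra system will carry out this procedure mechanically. Here is a minimal sketch, not part of the original text, in Python using the sympy library, applied to the field of Example 104.

    import sympy as sp

    x, y = sp.symbols('x y')
    F1 = x**2 + 2*x*y          # the field of Example 104
    F2 = x**2 + 2*y

    # Integrate F1 with respect to x; the 'constant' C(y) is still unknown.
    f = sp.integrate(F1, x)                   # x**3/3 + x**2*y
    # Compare df/dy with F2 to find C'(y); for a conservative field
    # this difference is free of x.
    Cprime = sp.simplify(F2 - sp.diff(f, y))  # 2*y
    f = f + sp.integrate(Cprime, y)           # x**3/3 + x**2*y + y**2
    print(f)

    # Check the answer: grad f should reproduce the field.
    assert sp.simplify(sp.diff(f, x) - F1) == 0
    assert sp.simplify(sp.diff(f, y) - F2) == 0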

Example 105 Let n = 3 and F(x, y, z) = ⟨x + 2xy + z, x² + z, x + y⟩. We find a function f as follows. We want ∇f = F or, in terms of components,

∂f/∂x = x + 2xy + z, ∂f/∂y = x² + z, ∂f/∂z = x + y.

Integrating the first equation with respect to x yields

f(x, y, z) = (1/2)x² + x²y + zx + C(y, z)

where the constant of integration may depend on y and z. Differentiating with respect to y and comparing with the second equation yields

∂f/∂y = x² + ∂C/∂y = x² + z.

Hence, ∂C/∂y = z, from which we conclude

C(y, z) = zy + D(z) or f(x, y, z) = (1/2)x² + x²y + zx + zy + D(z).

Differentiating with respect to z and comparing with the third equation yields

∂f/∂z = x + y + dD/dz = x + y

from which we conclude D′(z) = 0 or D(z) = E where E is a constant independent of x, y, and z. Hence, the most general function f is given by

f(x, y, z) = (1/2)x² + x²y + zx + zy + E.

For simplicity, we may take E = 0. You should check that the gradient of

f(x, y, z) = (1/2)x² + x²y + zx + zy

is the original vector field F.
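Checking such an answer is exactly the kind of routine computation a computer algebra system does well. A quick sketch, again in Python with sympy and not part of the original text:

    import sympy as sp

    x, y, z = sp.symbols('x y z')
    f = x**2/2 + x**2*y + z*x + z*y   # the function found in Example 105
    F = (x + 2*x*y + z, x**2 + z, x + y)

    # grad f should agree with F component by component.
    grad_f = [sp.diff(f, v) for v in (x, y, z)]
    print(grad_f)   # matches F (up to the ordering of terms)
    assert all(sp.simplify(g - Fi) == 0 for g, Fi in zip(grad_f, F))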

Example 106 Let n = 2 and F(x, y) = ⟨x + 1, y + x²⟩. A function f would have to satisfy

∂f/∂x = x + 1, ∂f/∂y = y + x².

Integrating the first equation with respect to x yields

f(x, y) = (1/2)x² + x + C(y).

Differentiating with respect to y and comparing with the second equation yields

∂f/∂y = C′(y) = y + x²

which is impossible since C(y) is not supposed to depend on x. What went wrong here is that the field is not conservative, i.e., there is no such function f, and the method for finding one breaks down.

Screening tests It would be nice to have a way to check in advance if a vector field F is conservative before trying to find a function f with gradient F. Fortunately, there are several ways to do this. First consider the case of a plane field F = ⟨F₁, F₂⟩. If ∇f = F for some function f, then

∂f/∂x = F₁ and ∂f/∂y = F₂.

If we differentiate the first equation with respect to y and the second with respect to x, we obtain

∂²f/∂y∂x = ∂F₁/∂y and ∂²f/∂x∂y = ∂F₂/∂x.

If the function f is smooth enough (which we will normally want to be the case), the two mixed partials are equal, so we conclude that if the (smooth) field F is conservative, then it must satisfy the screening test

∂F₁/∂y = ∂F₂/∂x. (64)

Example 106 again Here F₁(x, y) = x + 1 and F₂(x, y) = y + x², so

∂F₁/∂y = 0 ≠ 2x = ∂F₂/∂x.

Thus F does not pass the screening test and is not conservative.

Unfortunately, it is possible for a vector field to pass the screening test but still not
be conservative. In other words, the test is a bit too loose.
Example 107 Let F(x, y) = ⟨−y/(x² + y²), x/(x² + y²)⟩. F may also be represented in polar form as F = (1/r)u_θ. (Check that for yourself!) Notice that the domain of F excludes the origin since the common denominator x² + y² vanishes there. We have

∂F₁/∂y = [(x² + y²)(−1) − (−y)(2y)]/(x² + y²)² = (y² − x²)/(x² + y²)²
∂F₂/∂x = [(x² + y²)(1) − x(2x)]/(x² + y²)² = (y² − x²)/(x² + y²)²

so F passes the screening test. However, F is certainly not conservative. In fact, if C is a circle of any radius centered at the origin, then ∫_C F · dr = ±2π, with the sign depending on whether the circle is traversed counter-clockwise or clockwise. In any case, it is certainly not zero, which would be the case for a conservative vector field. (The calculation of the integral is quite routine and has in fact been done previously in exercises. You should do it again now for practice.)
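For a numerical illustration (a numpy sketch, not from the text), one can confirm the second half of the example directly: the line integral of this field around the unit circle is 2π, not 0, even though the screening test is passed.

    import numpy as np

    def F(x, y):
        r2 = x**2 + y**2
        return -y / r2, x / r2

    # Line integral around the unit circle, counter-clockwise, midpoint rule.
    n = 100000
    t = (np.arange(n) + 0.5) * 2 * np.pi / n
    x, y = np.cos(t), np.sin(t)
    dx = -np.sin(t) * (2 * np.pi / n)
    dy = np.cos(t) * (2 * np.pi / n)
    Fx, Fy = F(x, y)
    print(np.sum(Fx * dx + Fy * dy))   # approximately 6.28318... = 2*pi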

It would be much better if we could be sure that any vector field which passes
the screening test is in fact conservative. We shall see later that this depends on

the nature of the domain of the vector field. In the present case, it is the ‘hole’ created by omitting the origin that is the cause of the difficulty. It is important to note, however, that the issue we raise here is not a minor point of interest only to mathematicians who insist on splitting hairs. The vector field in the example is in fact of great physical significance, and plays an important role in electromagnetic theory.

There is a more complicated version of the screening test for vector fields in space. Let F = ⟨F₁, F₂, F₃⟩ be a vector field defined on some connected open set in R³. If there is a function f such that ∇f = F, then writing this out in components yields

∂f/∂x = F₁, ∂f/∂y = F₂, and ∂f/∂z = F₃.
By computing all the mixed partials (and assuming the order doesn’t matter), we obtain

∂F₁/∂y = ∂²f/∂x∂y = ∂F₂/∂x
∂F₁/∂z = ∂²f/∂x∂z = ∂F₃/∂x
∂F₂/∂z = ∂²f/∂y∂z = ∂F₃/∂y

so we get the more complicated screening test

∂F₂/∂z = ∂F₃/∂y
∂F₁/∂z = ∂F₃/∂x
∂F₁/∂y = ∂F₂/∂x.
The reason why we list the equations in this particular order will become clear in
the next section.

Example 108 Let F(x, y, z) = ⟨y + z, x + z, x + y⟩. Find ∫_C F · dr for the straight line path which goes from (1, −1, 2) to (0, 0, 3). We first check to see if the field might be conservative. We have

∂F₂/∂z = 1 = ∂F₃/∂y
∂F₁/∂z = 1 = ∂F₃/∂x
∂F₁/∂y = 1 = ∂F₂/∂x

so it passes the screening test. We now try to find a function f. The equation ∂f/∂x = y + z yields f(x, y, z) = yx + zx + C(y, z). Differentiate with respect to y to obtain x + ∂C/∂y = x + z or ∂C/∂y = z. This yields C(y, z) = zy + D(z).

Hence, f(x, y, z) = yx + zx + zy + D(z). Differentiating with respect to z yields x + y + D′(z) = x + y or D′(z) = 0. Hence, D(z) = E, and f(x, y, z) = xy + xz + yz + E gives the most general function f. For convenience take E = 0. I leave it to you to check that ∇f = F for f(x, y, z) = xy + xz + yz. To find the integral, we need only evaluate the function f at the endpoints of the path:

∫_{(1,−1,2)}^{(0,0,3)} F · dr = f(0, 0, 3) − f(1, −1, 2) = 0 − (−1) = 1.
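(Note f(1, −1, 2) = (1)(−1) + (1)(2) + (−1)(2) = −1.) A quick numerical cross-check of this value, as a numpy sketch that is not part of the text, integrates F directly along the straight line:

    import numpy as np

    a = np.array([1.0, -1.0, 2.0])   # start of the path
    b = np.array([0.0, 0.0, 3.0])    # end of the path

    n = 20000
    t = (np.arange(n) + 0.5) / n                 # midpoint rule on [0, 1]
    P = a + t[:, None] * (b - a)                 # points on the straight line
    x, y, z = P[:, 0], P[:, 1], P[:, 2]
    F = np.stack([y + z, x + z, x + y], axis=1)  # the field of Example 108
    print(np.sum(F @ (b - a)) / n)               # approximately 1.0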

Exercises for 5.3.

1. Find the force exerted by the gravitational dipole described in Example 101
by calculating ∇f for the function f derived in the example.
2. Four point masses each of mass m are placed at the points (±a, ±a, 0) in the
x, y-plane.
(a) Show that the gravitational potential at a point (0, 0, z) due to these four masses is given by V(0, 0, z) = −4Gm/√(2a² + z²).
(b) Determine the gravitational force on a unit test mass at the point (0, 0, z).
3. Find the gravitational potential at a point on the z-axis due to a mass M
uniformly distributed over a thin wire lying along the circle x2 + y 2 = a2 in
the x, y-plane. Calculate the gravitational force on a unit test mass at a point
on the z-axis.
4. Suppose a scalar function defined on all of Rn (with n = 2 or n = 3) satisfies
∇f = 0 everywhere. Show that f is constant. (Hint: Use formula (61). Note
that if the domain of f splits into two or more disconnected components, then
the argument won’t work because you can’t find a path from a point in one
component to a point in another component which stays in the domain of the
function. Hence, the function might assume different (constant) values on the
different components.)
5. Find a function f such that F = ∇f if one exists for each of the following
plane vector fields.
(a) F(x, y) = ⟨3x² + y², 2xy + y²⟩.
(b) F(x, y) = ⟨3x² + y², 3xy + y²⟩.
(c) F(x, y) = ⟨e^y, x e^y⟩.
6. Find a function f such that F = ∇f if one exists for each of the following
vector fields in space.
(a) F(x, y, z) = ⟨yz, xz, xy⟩.
(b) F(x, y, z) = ⟨x², y², z²⟩.
(c) F(x, y, z) = ⟨z, x, y⟩.

7. Evaluate each of the following path independent line integrals. (In each case, the vector field with the indicated components is conservative.)
(a) ∫_{(1,2)}^{(3,5)} (x² + 4y²) dx + 8xy dy.
(b) ∫_{(1,−1,0)}^{(4,1,1)} x dx + y dy + z dz.

8. Each of the following vector fields is not conservative. Verify that by showing
that it does not satisfy the relevant screening test and also by finding a closed
path for which the line integral is not zero.
(a) F(x, y) = ⟨−y, 2x⟩.
(b) F(x, y, z) = ⟨z, −x, e^z xy⟩.

9. We know that the vector field F(x, y) = ⟨−y/(x² + y²), x/(x² + y²)⟩, (x, y) ≠ (0, 0), is not conservative. Ignore this fact and try to find a function f such that F = ∇f. You should come up with something like f(x, y) = tan⁻¹(y/x). Since the vector field is not conservative, no such function exists. Explain the seeming contradiction. Hint: Consider the domains of F and of f.

10. Suppose F is a conservative vector field. Then we know that the formula

f(r) = ∫_{r₀}^{r} F · dr

defines a function such that F = ∇f. We know that choosing a different base point r₀′ results in another such function f′, and f′ differs from f by a constant. How is the constant related to the base points r₀ and r₀′?

5.4 Divergence and Curl

In this section we deal only with vector fields in space.

The gradient of a scalar function

∇f = ⟨∂f/∂x, ∂f/∂y, ∂f/∂z⟩ = i ∂f/∂x + j ∂f/∂y + k ∂f/∂z

may be viewed as the result of applying the vector operator

∇ = ⟨∂/∂x, ∂/∂y, ∂/∂z⟩ = i ∂/∂x + j ∂/∂y + k ∂/∂z

to the scalar field f. It is natural to ask, then, whether we may apply this vector operator to a vector field F. There are two ways to do this.

We may take the dot ‘product’

∇ · F = ⟨∂/∂x, ∂/∂y, ∂/∂z⟩ · ⟨F₁, F₂, F₃⟩ = ∂F₁/∂x + ∂F₂/∂y + ∂F₃/∂z.
The result is a scalar field called the divergence of F. It is also denoted div F.

Example 109 Let F(r) = x²i + y²j + z²k. Then

∇ · F = 2x + 2y + 2z = 2(x + y + z).

Note that the result is always a scalar field.

We may also form the cross ‘product’

∇ × F = ⟨∂/∂x, ∂/∂y, ∂/∂z⟩ × ⟨F₁, F₂, F₃⟩ = ⟨∂F₃/∂y − ∂F₂/∂z, ∂F₁/∂z − ∂F₃/∂x, ∂F₂/∂x − ∂F₁/∂y⟩.
The result is a vector field called the curl of F. It is also denoted curl F.

Example 110 Let F = yz i + xz j = ⟨yz, xz, 0⟩. Then

∇ × F = ⟨0 − x, −(0 − y), z − z⟩ = −x i + y j.

Note that the result is always a vector field.

If you refer back to the previous section, you will see that the three quantities which
must vanish in the screening test for a conservative vector field in space are just the
components of the curl. Hence, the screening test can be written more simply
∇ × F = 0.
In particular, every conservative vector field in space has zero curl. Still another
way to say the same thing is that
∇ × (∇f ) = 0
for every scalar field f .
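These operators are easy to experiment with symbolically. The sketch below, not part of the original text, uses Python with the sympy.vector module (which provides divergence, curl, and gradient) to reproduce Examples 109 and 110 and to check that ∇ × (∇f) = 0 for a sample scalar field.

    import sympy as sp
    from sympy.vector import CoordSys3D, divergence, curl, gradient

    N = CoordSys3D('N')
    x, y, z = N.x, N.y, N.z

    # Example 109: divergence of <x**2, y**2, z**2>.
    F = x**2 * N.i + y**2 * N.j + z**2 * N.k
    print(divergence(F))      # 2*N.x + 2*N.y + 2*N.z

    # Example 110: curl of <yz, xz, 0>.
    G = y * z * N.i + x * z * N.j
    print(curl(G))            # (-N.x)*N.i + N.y*N.j

    # Every gradient field has zero curl.
    f = sp.sin(x * y) + y * z**3   # any smooth scalar field will do
    print(curl(gradient(f)))  # 0 (the zero vector)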

We shall investigate the significance of the divergence and curl in succeeding sec-
tions.

The formalism of the operator ∇ leads to some fascinating formulas and expressions,
many of which have important applications. You will have an opportunity to explore
some of these in the exercises. Probably the most important elaboration is the
operator

∇² = ∇ · ∇ = ∂²/∂x² + ∂²/∂y² + ∂²/∂z²

which is called the Laplace operator. If f is a scalar field, then ∇²f is called the Laplacian of f. The Laplace operator appears in many of the important partial differential equations of physics. Here are some of them:

∇²f = 0 (Laplace’s Equation)
∇²f = (1/c²) ∂²f/∂t² (Wave Equation)
∇²f = κ ∂f/∂t (Diffusion or Heat Equation)

Exercises for 5.4.

1. Calculate the divergence and curl of each of the following vector fields in R3 .
(a) F(x, y, z) = ⟨x, y, z⟩.
(b) F(x, y, z) = −y i + x j.
(c) F(x, y, z) = (1/ρ³)⟨x, y, z⟩ where ρ = √(x² + y² + z²). Hint: The algebra will be easier if you use ∂ρ/∂x = x/ρ and the corresponding formulas for y and z.
2. Prove that divergence and curl are linear, i.e., verify the formulas

∇ · (aF + bG) = a∇ · F + b∇ · G
∇ × (aF + bG) = a∇ × F + b∇ × G

where F and G are vector fields and a and b are constant scalars.

3. Verify the formula


∇ · (f F) = f ∇ · F + ∇f · F
where f is a scalar field and F is a vector field.
What should the corresponding formula for curl be?

4. Verify the formula


∇ · (∇ × F) = 0
for an arbitrary vector field F. This formula will play an important role later
in this chapter.

5. Verify the formula

∇ × (∇ × F) = ∇(∇ · F) − ∇²F

where ∇²⟨F₁, F₂, F₃⟩ = ⟨∇²F₁, ∇²F₂, ∇²F₃⟩.



6. Show that the scalar field defined by f(r) = 1/ρ satisfies Laplace’s equation ∇²f = 0 except at the origin where it is not defined. Hint: You can use a previous exercise in which you considered −∇f.


7. Let F and G be two vector fields. Guess a formula for ∇ · (F × G) and then
verify that your formula is correct.

5.5 The Divergence Theorem

In the succeeding sections we shall discuss three important theorems about integrals of vector fields: the divergence theorem, Green’s theorem, and Stokes’s theorem. These theorems play especially important roles in the theory of electromagnetic fields, and we shall introduce the theorems in the order they are needed in that theory. We start with the divergence theorem, which is closely connected with the theories of static electric fields and gravitational fields, which share many common mathematical features.

Let F be a vector field defined on some open set in R³. Moreover, suppose F is smooth in the sense that the partial derivatives of the components of F are all continuous. Let E be a solid region in the domain of F which is bounded by a finite collection of graphs of smooth functions. Denote by ∂E the surface bounding E. Orient this boundary ∂E by specifying that the normals point away from the solid region E. Some simple examples of such regions would be rectangular boxes, solid spheres, solid hemispheres, solid cylinders, etc. However, we shall also allow more complicated regions such as a solid torus or the solid region between two spheres. In the latter case the boundary ∂E consists of two disconnected components.
Theorem 5.5 (Divergence Theorem) Let F be a vector field in R³ and E a solid region as above. Then

∫∫_{∂E} F · dS = ∫∫∫_E ∇ · F dV.

Before trying to prove this theorem, we shall show how to use it and also investigate
what it tells us about the physical meaning of the divergence ∇ · F.

Using the Divergence Theorem to Calculate Surface Integrals It is generally true that the triple integral on the right is easier to calculate than the surface integral on the left. That is the case first because volume integrals of scalar functions are easier to find than surface integrals, and secondly because the divergence of F is often simpler than F.
Example 111 Consider ∫∫_S F · dS where S is a sphere of radius R centered at the origin, and F(x, y, z) = x i + y j. If we let E be the solid sphere enclosed by S, then

S = ∂E, and ∇ · F = ∂x/∂x + ∂y/∂y = 2. Hence,

∫∫_S F · dS = ∫∫∫_E 2 dV = 2 · (4/3)πR³ = (8/3)πR³.

You were asked to do this surface integral previously by direct calculation in an exercise. You should go back and review the calculation. You will see that doing it by means of the divergence theorem is quite a bit easier.
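As a further sanity check, here is a numerical sketch (numpy; not part of the text) which computes the flux in Example 111 directly from the spherical parametrization and compares it with (8/3)πR³.

    import numpy as np

    R = 1.3                      # any radius; an illustrative value
    n = 800
    phi = (np.arange(n) + 0.5) * np.pi / n        # polar angle midpoints
    theta = (np.arange(2 * n) + 0.5) * np.pi / n  # 2n cells of width pi/n
    PHI, THETA = np.meshgrid(phi, theta)

    # Points on the sphere; the outward unit normal is r/R.
    x = R * np.sin(PHI) * np.cos(THETA)
    y = R * np.sin(PHI) * np.sin(THETA)
    # For F = <x, y, 0>: F . n = (x**2 + y**2)/R, dS = R**2 sin(phi) dphi dtheta.
    dS = R**2 * np.sin(PHI) * (np.pi / n)**2
    flux = np.sum((x**2 + y**2) / R * dS)
    print(flux, 8 * np.pi * R**3 / 3)   # the two numbers agree closely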

Example 112 Let F(x, y, z) = z²k and let S be the hemispherical surface x² + y² + z² = R², z ≥ 0. Use ‘upward’ normals. In this case the surface does not bound a solid region. However, that is easy enough to remedy. Let S′ be the disk of radius R in the x, y-plane. Then the solid hemisphere 0 ≤ z ≤ √(R² − x² − y²) is bounded on the top by S and on the bottom by S′. We may write ∂E = S ∪ S′, where we use ‘upward’ normals for S and downward normals for S′. Also, ∇ · F = 2z, and

∫∫_{S∪S′} F · dS = ∫∫∫_E 2z dV.

The triple integral on the right is quite straightforward to do. For example, we could switch to spherical coordinates and use z = ρ cos φ to obtain

∫∫∫_E 2z dV = 2 ∫₀^{2π} ∫₀^{π/2} ∫₀^R ρ³ cos φ sin φ dρ dφ dθ = 4π (R⁴/4) [−(cos²φ)/2]₀^{π/2} = (π/2)R⁴.
The surface integral on the left may be split up as a sum

∫∫_{S∪S′} F · dS = ∫∫_S F · dS + ∫∫_{S′} F · dS

where the first integral in the sum is the one we want to find. However, the second integral is quite easy to calculate. Namely, the field F = z²k vanishes in the x, y-plane, so any surface integral over a flat surface in the x, y-plane will be zero for this field. Hence, the divergence theorem in this case comes down to

∫∫_S F · dS + 0 = (π/2)R⁴

or ∫∫_S F · dS = (π/2)R⁴.
2
The above example illustrates a common way to apply the divergence theorem to
find the flux through a surface which is not closed. Namely, one tries to add one or
more additional components so as to bound a solid region. In this way one reduces
the desired surface integral to a volume integral and other surface integrals which
may be easier to calculate.

Interpretation of the divergence The divergence theorem gives us a way to interpret the divergence of a vector field in a more geometric manner. To this end consider a point P with position vector r in the domain of a (smooth) vector field F. Let E be a small element of volume containing the point P. To be definite, picture E as a small cube centered at P, but in principle it could be of any shape whatsoever. The quotient

(1/v(E)) ∫∫_{∂E} F · dS

(where v(E) denotes the volume of the set E and the surface integral is computed using outward orientation) is called the flux per unit volume. I claim that the divergence of F at the point P is given by

∇ · F(r) = lim_{E→P} (1/v(E)) ∫∫_{∂E} F · dS. (65)

The importance of this formula is two-fold. First of all, it relates the divergence
directly to the concept of flux. In effect, it allows us to think of the divergence of F
as being a measure of the sources of the field. For example, for a gravitational or
electrostatic field, we can think of the lines of force emanating from mass or charge,
and the divergence of the field is related to the mass or charge density. In the case
of a momentum field for a fluid flow, the interpretation of divergence based on this
formula is a bit more complicated, but similar considerations apply.

Secondly, the formula gives us a characterization of the divergence which does not
depend on the use of a specific coordinate system. The same formula applies if we
use different axes for rectangular coordinates or even if we use curvilinear coordi-
nates such as cylindrical or spherical coordinates.

To derive formula (65), we argue as follows. By the average value property for triple integrals, we have

∇ · F(r′) = (1/v(E)) ∫∫∫_E ∇ · F dV

for an appropriate point r′ in E. (See Chapter IV, Section 11, where the corresponding property for double integrals was discussed.) By the divergence theorem, the volume integral may be replaced by a surface integral, so

∇ · F(r′) = (1/v(E)) ∫∫_{∂E} F · dS.

Now take the limit as E → P. Since r is the coordinate vector of P, it is clear that r′ → r. Hence, by continuity of ∇ · F,

∇ · F(r) = lim_{E→P} ∇ · F(r′) = lim_{E→P} (1/v(E)) ∫∫_{∂E} F · dS,

as required.
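Formula (65) is easy to test numerically. The sketch below, not part of the text, estimates the divergence of F = ⟨x², y², z²⟩ at (1, 1, 1) as flux out of a small cube divided by its volume, approximating each face integral by the value at the face center times its area; the exact answer is 2(x + y + z) = 6.

    import numpy as np

    def F(p):
        return p**2                     # the field <x**2, y**2, z**2>

    def div_estimate(p, h):
        # Flux of F out of the cube of side 2h centered at p, over its volume.
        flux = 0.0
        for i in range(3):
            e = np.zeros(3); e[i] = h
            # opposite faces: outward normals +e_i and -e_i, each of area (2h)**2
            flux += (F(p + e)[i] - F(p - e)[i]) * (2 * h)**2
        return flux / (2 * h)**3

    p = np.array([1.0, 1.0, 1.0])
    for h in (0.5, 0.1, 0.01):
        print(h, div_estimate(p, h))    # 6.0 (exact here, since F is quadratic)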

As an application of formula (65), we show that the inverse square law vector field F = (1/|r|²)u_ρ has zero divergence. (You should have done this directly in an

exercise by doing the messy calculation in rectangular coordinates.) To see this, consider an element of volume E centered at a point P with position vector r ≠ 0. Assume in particular that E is a curvilinear spherical cell, i.e., a typical element of volume for spherical coordinates. (See Chapter IV, Section 6.) E is bounded on the inside and outside by spherical surfaces of radii ρ₁ and ρ₂ respectively. It is bounded on either side by planes (θ = θ₁, θ = θ₂) and on ‘top’ and ‘bottom’ by conical surfaces (φ = φ₁, φ = φ₂), and each of those four bounding surfaces is parallel to the field F. Hence, the flux through the boundary ∂E of E is zero except possibly for the inner and outer surfaces, each of which is a spherical rectangle. Let these two rectangles have areas A₁ and A₂ respectively. Since F is parallel to the normal N on each of these surfaces, and is otherwise constant, we get for the flux through the outer surface

(1/ρ₂²) A₂

and similarly for the inner surface with 1 replacing 2 and with the sign reversed since the direction of the normal is reversed. Hence, the net flux is

(1/ρ₂²) A₂ − (1/ρ₁²) A₁.
However, both curvilinear rectangles can be described by the same angular limits φ₁ ≤ φ ≤ φ₂, θ₁ ≤ θ ≤ θ₂; only the value of ρ changes. Hence,

A₁ = ∫_{θ₁}^{θ₂} ∫_{φ₁}^{φ₂} ρ₁² sin φ dφ dθ = ρ₁² ∫_{θ₁}^{θ₂} ∫_{φ₁}^{φ₂} sin φ dφ dθ

or

A₁/ρ₁² = ∫_{θ₁}^{θ₂} ∫_{φ₁}^{φ₂} sin φ dφ dθ.

Since the same calculation would work for A₂/ρ₂² and give exactly the same value, we conclude that the net flux through the boundary ∂E is zero. If we take the limit as E → P, we still get zero, so the divergence is zero.

Note that the upshot of this argument is that since the inverse square law has its
‘source’ at the origin, streamlines (lines of force) entering any element of volume
not including the origin all leave and no new ones originate, so the net flux is zero.

We note in passing that if A is the area of any region on a sphere of radius ρ, the quantity A/ρ² is called the solid angle subtended by the region. This quantity is independent of the radius ρ of the sphere in the sense that if we consider a different (concentric) sphere and the projected region on the other sphere, the ratio of area to ρ² does not change.

Exercises for 5.5.

1. Let F(r) = r. (Then ∇ · F = 3.) Verify that the divergence theorem is true for each of the following solid regions E by calculating the outward flux ∫∫_{∂E} F · N dS through the boundary of E and checking that it is three times the volume.
(a) E is a solid sphere of radius a centered at the origin.
(b) E is a cube of side a in the first octant with opposite vertices at (0, 0, 0)
and (a, a, a).

2. Let F = x²i + y²j + z²k. Use the divergence theorem to calculate the flux out of the following solid regions.
(a) The solid cube of side 1 in the first octant with opposite vertices at (0, 0, 0) and (1, 1, 1).
(b) The inside of the cylinder x² + y² = a², 0 ≤ z ≤ h.
3. Find ∫∫_S F · dS where F(x, y, z) = x³i + y³j + z³k and S is a sphere of radius a centered at the origin. Use outward normals.

4. Find ∫∫_S F · dS where F(x, y, z) = ⟨x + e^{y²}, cos(xz), sin(x³ + y³)⟩ and S is the boundary of the solid region bounded below by z = x² + y² and bounded above by the plane z = 4. Use outward normals.

5. Let f and g be scalar fields which are both sufficiently smooth on a solid region E and its boundary. Apply the divergence theorem to f∇g to obtain Green’s first identity

∫∫_{∂E} f ∇g · N dS = ∫∫∫_E (∇f · ∇g + f∇²g) dV.

Note that ∇g · N is just the directional derivative of g in the direction of the normal vector N. This is sometimes called the normal derivative of g on the surface S = ∂E.
Reverse the roles of f and g and subtract to obtain Green’s second identity

∫∫_{∂E} (f ∇g · N − g ∇f · N) dS = ∫∫∫_E (f∇²g − g∇²f) dV.


6. Let F = (1/|r|²)u_ρ, and let S be a disk of radius (√3/2)a in the plane z = a/2 with center on the z-axis. Orient S with upward normals. Use the divergence theorem to calculate ∫∫_S F · n dS. Hint: Form a closed surface with a spherical cap S′ of radius a. Calculate the surface integral for the cap and use the fact that ∇ · F = 0 between the spherical cap and the disk.

7. Suppose that it is known that the flux of the vector field F out of every
sufficiently small cube is zero. Show that ∇ · F = 0.

5.6 Proof of the Divergence Theorem

In this section, we shall give two proofs of the divergence theorem. The first is not
really a proof but what is sometimes called a ‘plausibility argument’. It clarifies
some of the ideas and helps you understand why the theorem might be true. The
second proof is less enlightening but is mathematically correct. Even if you are
not interested in the rigorous details, you should study the first part of this section
because it introduces arguments commonly used in applications.

The first argument goes as follows. Since the right hand side of the formula in the
theorem is a triple integral, consider a dissection of the solid region E into small
cells formed as usual by three mutually perpendicular families of planes.

[Diagram: cross-sectional views of a dissection of E into cells by families of planes, and two adjacent internal cells Eᵢ and Eⱼ.]

Most of these cells will be small rectilinear boxes, but some, those on the boundary, will have one curved side. Let Eᵢ be a typical cell. Using the interpretation of divergence as the limit of flux per unit volume, we have for Eᵢ small enough

(1/ΔVᵢ) ∫∫_{∂Eᵢ} F · dS ≈ ∇ · F(rᵢ) (66)

where ΔVᵢ is the volume of Eᵢ and rᵢ is the coordinate vector of a point in Eᵢ. If we cross multiply and add, we get

Σᵢ ∫∫_{∂Eᵢ} F · dS ≈ Σᵢ ∇ · F(rᵢ) ΔVᵢ.

If we make the dissections finer and finer and take the limit, the sum on the right approaches ∫∫∫_E ∇ · F dV. The sum on the left merits further discussion. Suppose that the numbering is such that cells Eᵢ and Eⱼ are adjacent, so they share a

common face. On that face, the normal relative to Eᵢ will point opposite to the normal relative to Eⱼ. (Each normal points away from one cell and into the other.) As a result, the common face will appear in both terms of the sum

∫∫_{∂Eᵢ} F · dS + ∫∫_{∂Eⱼ} F · dS

but with opposite signs. Consequently, all internal components of the boundaries of the cells appear twice in the sum so as to cancel, and the only components left are those associated with the external boundary ∂E. In other words,

Σᵢ ∫∫_{∂Eᵢ} F · dS = ∫∫_{∂E} F · dS.

Putting this all together, we get

∫∫_{∂E} F · dS = ∫∫∫_E ∇ · F dV

as required.

The above argument is not logically valid for the following reason. The approximation (66) comes from the formula

∇ · F(r) = lim_{E→P} (1/v(E)) ∫∫_{∂E} F · dS (67)

which was derived from the divergence theorem. Clearly we can’t use a consequence
of the divergence theorem to prove the divergence theorem or we will get in a vicious
logical circle. To repair this argument, one would have to derive formula (67) without
using the divergence theorem. This is not too hard to do if one assumes the solid
E always has some specific form such as a rectangular box. However, since some of
the cells in a dissection will have curved faces, that does not suffice. It is necessary
to derive formula (67) for very general regions E. To the best of my knowledge,
there is no simple way to do that without in essence hiding a proof of the divergence
theorem in the argument.

A correct proof of the divergence theorem proceeds as follows. First write

F = F₁i + F₂j + F₃k.

Suppose we can verify the three formulas

∫∫_{∂E} F₁ i · dS = ∫∫∫_E ∂F₁/∂x dV
∫∫_{∂E} F₂ j · dS = ∫∫∫_E ∂F₂/∂y dV
∫∫_{∂E} F₃ k · dS = ∫∫∫_E ∂F₃/∂z dV.

Then adding these up will give the divergence theorem. Clearly, we need only verify one of the three formulas since the arguments will be basically the same in the three cases. We shall verify the third formula. In essence, that means we restrict to the case F = F₃k, ∇ · F = ∂F₃/∂z. Consider a dissection of the solid region E such that each cell Eᵢ is either a rectangular box or at worst bounded on either the top or the bottom by the graph of a smooth function. (It is believable that any region one is likely to encounter can be so decomposed, but in any case we can prove the theorem only for such regions.) As above, cancellation on internal interfaces yields

Σᵢ ∫∫_{∂Eᵢ} F · dS = ∫∫_{∂E} F · dS.

Similarly,

Σᵢ ∫∫∫_{Eᵢ} ∇ · F dV = ∫∫∫_E ∇ · F dV.

Hence, it will suffice to prove

∫∫_{∂Eᵢ} F · dS = ∫∫∫_{Eᵢ} ∇ · F dV

for each of the cells Eᵢ. However, by assumption each of these cells is bounded on top and bottom by graphs of smooth functions z = f(x, y) and z = g(x, y). For the cells which are just boxes, the two functions are constant functions; for a cell on the top boundary, the bottom function is constant; and for a cell on the bottom boundary, the top function is constant. In any case, this reduces the problem to verifying the formula of the divergence theorem for a region which can be described by g(x, y) ≤ z ≤ f(x, y) where (x, y) ranges over some domain D in the x, y-plane. (The domain D will in most cases be just a rectangle.) We now compute the surface integral for such a region under the assumption, as above, that F = F₃k. Since the field is vertical, it is parallel to the sides of the region, so the only contributions to the flux come from the top and bottom surfaces. On the top surface, we have

dS = ⟨−f_x, −f_y, 1⟩ dy dx

so the flux is

∫∫_D F₃(x, y, f(x, y)) dy dx.

Similarly, on the bottom surface, we have

dS = ⟨g_x, g_y, −1⟩ dy dx

where the signs are reversed because we need the downward pointing normals. Hence, the flux is

−∫∫_D F₃(x, y, g(x, y)) dy dx,

and the net flux through the boundary of the region is

∫∫_D F₃(x, y, f(x, y)) dy dx − ∫∫_D F₃(x, y, g(x, y)) dy dx = ∫∫_D (F₃(x, y, f(x, y)) − F₃(x, y, g(x, y))) dy dx.
Next we calculate the volume integral:

∫∫∫_E ∂F₃/∂z dV = ∫∫_D ∫_{g(x,y)}^{f(x,y)} ∂F₃/∂z dz dy dx
= ∫∫_D F₃(x, y, z)|_{g(x,y)}^{f(x,y)} dy dx
= ∫∫_D (F₃(x, y, f(x, y)) − F₃(x, y, g(x, y))) dy dx.
Comparing, we see that the answers are the same, which proves that the surface integral equals the volume integral. That completes the proof of the divergence theorem.

Note that the second proof finally comes down to an application of basic integration
theory and ends up using the fundamental theorem of calculus. The divergence
theorem may be viewed just as a higher dimensional extension of the fundamental
theorem.

Exercises for 5.6.

1. Let F be a vector field. Suppose that it is known that the flux out of every
sufficiently small rectangular box is zero. Using the reasoning in the proof
of the divergence theorem, show that the flux out of any rectangular box of
any size whatsoever is zero. Don’t use the divergence theorem itself, only the
reasoning in the proof.

5.7 Gauss’s Law and the Dirac Delta Function

We return to our discussion of inverse square laws. In particular, let F = (1/|r|²)u_ρ and let S be any surface bounding a solid region E in R³. Since F has a singularity at the origin, assume the surface does not pass through the origin. It is easy to calculate the flux through S if the origin is not in E. For, since ∇ · F = 0 everywhere in the region E, the divergence theorem tells us

∫∫_S F · dS = ∫∫∫_E (0) dV = 0.

[Diagram: a surface S with the origin O outside it, and a second surface S with the origin O inside, surrounded by a small sphere S′.]

The situation is somewhat more complicated if the origin is in the solid region bounded by S. Note that in this case the divergence theorem does not apply because the smoothness hypothesis on F fails at a point in E. However, we can calculate the flux by the following trick. Consider a small sphere S′ centered at the origin but otherwise entirely inside the surface S. Let E′ be the solid obtained by removing the interior of the small sphere S′ from E. Thus, E′ is the solid region between S′ and S. The boundary of E′ consists of the two surfaces S and S′. Using ‘outward’ normals (i.e., away from E′), the normals point outward on S and inward on S′. Since the field is smooth in E′, we may apply the divergence theorem to obtain

∫∫_{S∪S′} F · dS = ∫∫∫_{E′} ∇ · F dV = 0.

On the other hand,

∫∫_{S∪S′} F · dS = ∫∫_S F · dS + ∫∫_{S′} F · dS,

so

∫∫_S F · dS = −∫∫_{S′} F · dS.

This reduces the calculation of the flux through the surface S to the calculation of the flux through a sphere S′ centered at the origin. The latter calculation was basically done in Example 1 of Section 2 of this chapter. You should do it over again for practice. The answer turns out to be 4π if the sphere is given outward orientation. In the present case the orientation is reversed, which changes the sign, but the above formula gives one last change of sign, so we conclude for the original surface S,

∫∫_S F · dS = 4π.

To summarize, for the field F = (1/|r|²)u_ρ,

∫∫_S F · dS = 0 if 0 is outside S, and ∫∫_S F · dS = 4π if 0 is inside S.
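This dichotomy is easy to see numerically. The sketch below, a numpy illustration that is not part of the text, computes the flux of F = r/|r|³ through unit spheres centered at two different points; the value is about 4π when the sphere encloses the origin and about 0 when it does not.

    import numpy as np

    def flux_through_sphere(center, R=1.0, n=400):
        # Midpoint-rule flux of F(r) = r/|r|**3 through |r - center| = R.
        phi = (np.arange(n) + 0.5) * np.pi / n
        theta = (np.arange(2 * n) + 0.5) * np.pi / n
        PHI, THETA = np.meshgrid(phi, theta)
        nx = np.sin(PHI) * np.cos(THETA)    # outward unit normal
        ny = np.sin(PHI) * np.sin(THETA)
        nz = np.cos(PHI)
        px = center[0] + R * nx
        py = center[1] + R * ny
        pz = center[2] + R * nz
        rho3 = (px**2 + py**2 + pz**2) ** 1.5
        F_dot_n = (px * nx + py * ny + pz * nz) / rho3
        dS = R**2 * np.sin(PHI) * (np.pi / n)**2
        return np.sum(F_dot_n * dS)

    print(flux_through_sphere([0.0, 0.0, 0.0]))  # about 4*pi = 12.566
    print(flux_through_sphere([3.0, 0.0, 0.0]))  # about 0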

Gauss’s Law The electric field of a point charge q located at the point r₀ is given by Coulomb’s law

E = (q/|r − r₀|²) u (68)

where u is a unit vector pointing directly away from the point source. (We have dropped some important physical constants for the sake of mathematical simplicity.) Coulomb’s law is analogous to Newton’s law of gravitation for a point mass.

The calculation of flux discussed above applies just as well to such a field. The flux out of any closed surface is either 0, if the source is outside the surface, or 4πq, if the source is inside the surface. (The only change is a shift of the source from the origin to the point r₀ and multiplication by the magnitude of the charge.) More generally, suppose we have many point sources with charges q₁, q₂, . . . , qₙ located at position vectors r₁, r₂, . . . , rₙ. Let Eᵢ be the electric field of the ith charge, and let

E = E₁ + E₂ + · · · + Eₙ

be the resulting field from all the charges. If S is any closed surface, we will have

∫∫_S E · dS = ∫∫_S E₁ · dS + ∫∫_S E₂ · dS + · · · + ∫∫_S Eₙ · dS.

The ith integral on the right will be either 0 or 4πqᵢ depending on whether the ith source is outside or inside S. Hence, we get

∫∫_S E · dS = 4π Σ_{qᵢ inside S} qᵢ = 4πQ

where Q is the sum of the charges inside the surface. This is a special case of
Gauss’s Law in electrostatics which asserts that for any electrostatic field, the flux
out of a closed surface is 4πQ where Q is the total charge contained within.

(The constant 4π in Gauss’s law depends on the form we adopted for Coulomb’s law, which is something of an oversimplification. It is convenient for a purely mathematical discussion, but in reality Coulomb’s law involves some additional constants which depend on the system of units employed. For example, for one commonly used system of units, Coulomb’s law becomes

E = (1/4πε) (q/|r − r₀|²) u

where ε is the so-called permittivity. In that case, the factor 4π disappears, and Gauss’s law becomes

∫∫_S E · dS = Q/ε.

We leave further discussion of such matters to your physics professor.)

Gauss’s law is very important in electrostatics, and is closely related to the diver-
gence theorem, but you should avoid confusing the two.


The Dirac Delta Function There is a way to recover the divergence theorem for solid regions in which the vector field has singularities. It involves use of the Dirac Delta function which was discussed in Chapter IV, Section 11. For example, for the inverse square law, we can write formally

∇ · ((1/|r|²) u_ρ) = 4π δ(r)

where as previously δ(r) is ‘defined’ to be 0 except at r = 0, and it is required to satisfy

∫∫∫_E δ(r) dV = 0 if 0 is not in E, and ∫∫∫_E δ(r) dV = 1 if 0 is in E.
With this interpretation, the divergence theorem

∫∫_{∂E} F · dS = ∫∫∫_E ∇ · F dV

is true, since both sides are either 0 or 4π depending on whether the origin is outside or inside E. Note, however, that if an ordinary function vanishes everywhere except at one point, its triple integral is zero. Thus, it is important to remember that the Dirac Delta ‘function’ is not a function in the usual sense.

Exercises for 5.7.

1. Let F = (1/|r|²)u_ρ. For each of the following surfaces determine the outward flux.
(a) The ellipsoid x²/4 + y²/16 + z² = 1.
(b) The surface of the cube with vertices at (±2, ±2, ±2).
(c) The sphere x² + y² + (z − 3)² = 1.
(d) (Optional) The surface of the unit cube in the first octant with opposite vertices at (0, 0, 0) and (1, 1, 1). Hint: F blows up at (0, 0, 0) so you can’t apply the divergence theorem without modification. This does not create problems for the surface integrals on the three faces in the coordinate planes because these are zero anyway. (Why?) Let a be small, and consider the part of the cube outside the sphere of radius a centered at the origin. Apply the divergence theorem to that solid region and let a approach 0. The answer is π/2.
2. Using the divergence theorem and Gauss’s law, show that ∇ · E = 0 for an
electrostatic field E in empty space (where there is no charge). Hint: By
Gauss’s Law, the flux out of any sufficiently small sphere centered at a point
where there is no charge is zero.

3. Electrostatic fields are always conservative. If E = ∇f for a function f, show that f satisfies Laplace’s equation ∇²f = 0 if the charge density is zero.

4. Suppose that E is a smooth vector field in R³ which is spherically symmetric about the origin and satisfies ∇ · E = ρ. Find E.

5. Suppose it is known that E is a smooth vector field which is cylindrically symmetric about the z-axis and satisfies ∇ · E = (sin r)/r. Find E.

5.8 Green’s Theorem

We consider the analogue of the divergence theorem in R².

Let F = ⟨F₁, F₂⟩ = F₁i + F₂j be a (smooth) vector field defined on some open set in the plane. Let D be a plane region bounded by a curve C = ∂D. (D is analogous to E and C to S = ∂E.) The analogue of the flux is the integral over the curve C

∫_C F · N ds

where N denotes the outward unit normal to C at each point of C and ds stands for the element of arc length. This flux integral can also be expressed in rectangular coordinates as follows. If we write dr = dx i + dy j, then the vector dy i − dx j is perpendicular to dr. Moreover, if we assume that C is traversed in such a way that the region D is always on the left, then this vector does point away from D. (See the diagram.) Since its magnitude is √((dy)² + (−dx)²) = ds, we have

N ds = dy i − dx j

and

∫_C F · N ds = ∫_C −F₂ dx + F₁ dy.

[Diagram: a curve C with tangent element dr and outward normal element N ds.]

Example 113 Let F(x, y) = ⟨x, y⟩ = r, and let D be a disk of radius R. If the boundary C = ∂D is traversed in the counter-clockwise direction, the region will always be on the left. We can see geometrically that

F · N ds = R · R dθ = R² dθ

so

∫_C F · N ds = ∫₀^{2π} R² dθ = 2πR².

We could also calculate this analytically as follows. Choose the parametric representation r = ⟨R cos θ, R sin θ⟩. Thus,

x = R cos θ, dx = −R sin θ dθ
y = R sin θ, dy = R cos θ dθ.

Hence,

∫_C −F₂ dx + F₁ dy = ∫₀^{2π} (−R sin θ)(−R sin θ dθ) + (R cos θ)(R cos θ dθ) = ∫₀^{2π} R² dθ = 2πR².

Using the above definition for flux, the plane version of the divergence theorem looks very much like the version in space:

∫_{∂D} F · N ds = ∫∫_D ∇ · F dA. (69)

On the left, as noted above, the flux is an integral over a curve rather than a surface, and on the right, we have a double integral rather than a triple integral. In addition, since F = ⟨F₁, F₂⟩, the divergence of F is

∇ · F = ∂F₁/∂x + ∂F₂/∂y.

Formula (69) is one form of Green’s Theorem in the plane. It holds if the components of F have continuous partial derivatives.

The proof of Green’s Theorem is very similar to the proof of the divergence theorem
in space. In fact, it is even easier since regions in the plane are easier to deal with
than regions in space. We won’t go over it again.

Green’s Theorem is usually stated in another equivalent form. Let F = ⟨F₁, F₂⟩ be a vector field in R², and consider the perpendicular field G = ⟨G₁, G₂⟩ where G₁ = F₂ and G₂ = −F₁. Applying (69) to G yields

∫_{∂D} G · N ds = ∫∫_D ∇ · G dA.

However,

G · N ds = −G₂ dx + G₁ dy = −(−F₁) dx + F₂ dy = F₁ dx + F₂ dy

and this is nothing other than what we called F · dr when we were discussing line integrals. Hence, the integral on the left becomes the line integral

∫_C F · dr.

Similarly, the divergence of G is

∂G₁/∂x + ∂G₂/∂y = ∂F₂/∂x + ∂(−F₁)/∂y = ∂F₂/∂x − ∂F₁/∂y.

Hence, we get the following alternate form of the theorem.

Theorem 5.6 (Green’s Theorem) Let F be a smooth vector field in the plane. Suppose D is a region contained in the domain of F which is bounded by a finite collection of smooth curves. Then

∫_{∂D} F · dr = ∫∫_D (∂F₂/∂x − ∂F₁/∂y) dA. (70)

[Diagram: a region D whose boundary is traversed in the direction dr, keeping D on the left.]

Applications of Green’s Theorem Green’s Theorem may be used to calculate line integrals by reducing them to easier double integrals. This is analogous to using the divergence theorem to calculate surface integrals in terms of volume integrals.

Example 114 Let F(x, y) = ⟨−y, x⟩ and let C be the rectangle with vertices (1, 2), (3, 2), (3, 3), and (1, 3). Assume C is traversed in the counter-clockwise direction. We can use Green’s theorem to calculate ∫_C F · dr as follows. Let D be the region enclosed by the rectangle, so C = ∂D. Note that C is traversed so that D is always on the left. Then,

∫_C F · dr = ∫∫_D (∂F₂/∂x − ∂F₁/∂y) dA = ∫∫_D (1 − (−1)) dA = 2 A(D) = 2 × 2 = 4.

Example 115 Let

F(x, y) = (−y/(x² + y²)) i + (x/(x² + y²)) j = (1/r) u_θ for (x, y) ≠ (0, 0),

and let C be the ellipse (x − 1)²/9 + y²/4 = 1. Assume C is traversed counter-clockwise. One is tempted to try to use Green’s theorem for the region D contained inside C. Unfortunately, the vector field is not smooth at the origin, so the theorem does not apply to D. However, we can attempt the same trick we used in the case of the inverse square field for a surface enclosing the origin. (See Section 7.)

Namely, choose a circle C′ centered at the origin with radius small enough to fit inside the ellipse C. Let D′ be the region lying between C′ and C. F is smooth in D′, so Green’s theorem does apply. The boundary of D′ comes in two disconnected components: ∂D′ = C ∪ C′. Also, with C traversed counter-clockwise, D′ will be on its left as required, but C′ must be traversed clockwise for D′ to be on its left.


With these assumptions,

∫_{C∪C′} F · dr = ∫∫_{D′} (∂F₂/∂x − ∂F₁/∂y) dA.

The integrand on the right was calculated in Example 107 of Section 5.3. There we showed that

∂F₂/∂x = (y² − x²)/(x² + y²)² = ∂F₁/∂y

so that the difference is zero. Hence, the integral on the right is zero. Expanding the integral on the left, we obtain

∫_C F · dr + ∫_{C′} F · dr = 0.

The second line integral has been done many times for this vector field in this course, and the answer is −2π. The minus sign, of course, arises because the path is traversed clockwise, which is opposite to the usual direction. Transposing, we obtain

∫_C F · dr = 2π.

Note that this same argument would have worked for any path C which is the boundary of a region D containing the origin. In fact, for F = (1/r)u_θ, we have

∫_{∂D} F · dr = 0 if the origin is not in D, and ∫_{∂D} F · dr = 2π if the origin is in D.

The case in which D contains the origin is covered by the argument used in the example, by excising a small disk from D. The case in which D does not contain the origin follows directly from Green’s theorem, since in that case the integrand on the right is continuous and is zero everywhere in D.

One sometimes needs the line integral ∫_C (1/r)u_θ · dr for a closed curve C which goes around the origin more than once. In that case, the curve must intersect itself (or overlap) and it cannot be the boundary of a bounded region D. The integral is ±2πn, where n is the number of times the curve goes around the origin, and the sign depends on whether it is traversed counter-clockwise or clockwise. (Can you prove that?) The case n = 0 corresponds to the curve not going around the origin at all.

Area by Green’s theorem Both Green’s theorem and the divergence theorem are ‘normally’ used to calculate the left hand side by reducing it to the right hand side. However, there are occasions where one reverses this. For example, consider the vector field F(x, y) = ⟨−y, x⟩. The double integral in Green’s theorem is

∫∫_D (1 − (−1)) dA = ∫∫_D 2 dA = 2 A(D).

Hence, Green’s theorem gives us the following formula for the area of D:

A(D) = (1/2) ∫_{∂D} −y dx + x dy. (71)

This seems a bizarre way to calculate an area, but it is sometimes useful.

Example 116 Let D be the area inside the ellipse x²/a² + y²/b² = 1. Parameterize the ellipse ∂D by

x = a cos θ, y = b sin θ, 0 ≤ θ ≤ 2π.

Then

dx = −a sin θ dθ, dy = b cos θ dθ

so

−y dx + x dy = −(b sin θ)(−a sin θ dθ) + (a cos θ)(b cos θ dθ) = ab dθ.

It follows from (71) that

A(D) = (1/2) ab (2π) = πab.

It should be noted that the area can also be calculated using the line integral of the vector field F = ⟨−y, 0⟩ or F = ⟨0, x⟩, which gives the method you learned in your single variable calculus course. (Why?) Formula (71) is more symmetric, and so has a better chance of simplifying the calculation.
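The area formula (71) is easy to test numerically. The following sketch (numpy; an illustration, not part of the text) recovers the area of an ellipse from the boundary integral alone.

    import numpy as np

    a, b = 3.0, 2.0                  # semi-axes of the ellipse
    n = 100000
    t = (np.arange(n) + 0.5) * 2 * np.pi / n
    x, y = a * np.cos(t), b * np.sin(t)
    dx = -a * np.sin(t) * (2 * np.pi / n)
    dy = b * np.cos(t) * (2 * np.pi / n)

    # A(D) = (1/2) * boundary integral of (-y dx + x dy)
    area = 0.5 * np.sum(-y * dx + x * dy)
    print(area, np.pi * a * b)       # both approximately 18.8495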

Exercises for 5.8.



1. Let F = ⟨x, y⟩. (Then ∇ · F = 2.) Verify formula (69) (the first form of Green’s Theorem) for each of the following curves by calculating the outward flux through the given curve and checking that it is twice the area enclosed.
(a) A circle of radius a centered at the origin.
(b) A square of side a in the first quadrant with opposite vertices at (0, 0) and (a, a).

2. Let F = ⟨−y, x⟩. (Then ∂F₂/∂x − ∂F₁/∂y = 2.) Verify formula (70) (the second form of Green’s Theorem) for each of the following curves by calculating the line integral for the given curve and checking that it is twice the area enclosed.
(a) A circle of radius a centered at the origin, traversed counterclockwise.
(b) A square of side a in the first quadrant with opposite vertices at (0, 0) and (a, a), traversed counterclockwise.

3. Use Green’s Theorem (second form) to calculate ∫_C y² dx + x² dy where C is the path which starts at (0, 0), goes to (3, 0), then to (3, 3), and finally back to (0, 0).

4. Use Green’s Theorem (second form) to calculate ∫_C −y dx + x dy where C is the semi-circular path from (a, 0) to (−a, 0) with y ≥ 0. Hint: C is not a closed path, but you can form a closed path by adding the linear path C′ from (−a, 0) to (a, 0). Use Green’s Theorem to relate ∫_C to ∫_{C′} through the use of ∫_{C∪C′}.

5. Let F(x, y) = ⟨y + sin(x²), (1 + y²)^{1/5}⟩. Calculate ∫_C F · dr for C the ellipse x²/a² + y²/b² = 1, traversed counterclockwise. Hint: You should remember what the area of an ellipse is.
6. Let F(x, y) = ⟨e^{x³}, xy⟩. Calculate ∫_C F · dr for C the path bounded below by the graph of y = x² and above by the graph of x = y², traversed in the counterclockwise direction.

7. What is ∫_C (−y/(x² + y²)) dx + (x/(x² + y²)) dy for each of the following curves?
(a) The ellipse x²/9 + y²/16 = 1 traversed counterclockwise.
(b) The triangle with vertices (1, 1), (2, 3), and (0, 6) traversed clockwise.
8. Throughout this problem, take F = (−y/r) i + (x/r) j where r = √(x² + y²).
(a) Show that ∂F₂/∂x − ∂F₁/∂y = 1/r.
(b) Show by direct calculation that ∫_C F · dr = 2π where C is the circle x² + y² = 1 traversed counterclockwise.
(c) Let C be the square with vertices at the four points (±2, ±2) and assume C is traversed counterclockwise. Use Green’s Theorem for a region with a hole to show that

∫_C F · dr = 2π + 8 ∫₀^{π/4} ∫₁^{2 sec θ} dr dθ.

(d) Evaluate the double integral to complete the calculation.



(e) Calculate ∫_C F · dr directly. Hint: Calculate the line integral for one side of the square and multiply by 4.
9. Find the area bounded by each of the following curves by using the formula (1/2)∫_C −y dx + x dy. Compute the area by another method and compare the answers.
(a) y = 0, x = 1, y = x². Hint: You could parameterize the parabola by r = ⟨1 − t, (1 − t)²⟩, 0 ≤ t ≤ 1.
(b) r = ⟨t², t³⟩, −1 ≤ t ≤ 1, and x = 1. Hint: For an alternate method note that the parametric representation describes the two curves y = ±x^{3/2}.
10. Let P₀ = (0, 0), P₁ = (x₁, y₁), P₂ = (x₂, y₂) be the vertices of a triangle in the plane. Derive a formula for its area using Green’s Theorem. Hint: You should get the same answer as you would by taking half the cross product of the vectors from the origin to the other vertices.
11. Derive the first form of Green’s theorem (69) from the divergence theorem using the ‘pillbox argument’ as follows. Let F = ⟨F₁, F₂, 0⟩ and assume it depends only on x and y. Let D be a bounded region in the x, y-plane, and let E be a ‘pillbox’ produced by extending it upward one unit in the z-direction.
(a) Show ∫∫_{∂E} F · N dS = ∫_{∂D} F · N ds as follows. First note that the flux through the top and bottom are zero. (Why?) Then calculate the flux through the sides using an element of area of height 1 and base ds.
(b) Show ∫∫∫_E ∇ · F dV = ∫∫_D ∇ · F dA by considering an element of volume which is a thin ‘column’ of height 1 and base dA.

5.9 Stokes’s Theorem

Stokes’s theorem is a generalization of the second form of Green’s theorem (for line integrals). To motivate it, notice that the integrand

∂F₂/∂x − ∂F₁/∂y

in the double integral looks like one of the components of the curl. To make this more explicit, view the plane vector field F as a vector field in space with its third component zero, i.e., put

F = ⟨F₁, F₂, 0⟩.

Then

∇ × F = ⟨∂F₃/∂y − ∂F₂/∂z, ∂F₁/∂z − ∂F₃/∂x, ∂F₂/∂x − ∂F₁/∂y⟩ = ⟨0, 0, ∂F₂/∂x − ∂F₁/∂y⟩.

(The first two components are zero because F₃ = 0 and F is a plane vector field, so its components F₁ and F₂ are functions only of x and y. However, we don’t actually have to worry about that for the moment.) If we treat D as a surface which happens to lie in the x, y-plane, and if we use the upward pointing normal N = k, we have

∫∫_D (∇ × F) · N dS = ∫∫_D (∂F₂/∂x − ∂F₁/∂y) dA.
Hence, Green’s theorem can be rewritten

∫_{∂D} F · dr = ∫∫_D (∇ × F) · N dS.

[Diagram: a plane region D with upward normal N = k, and a curved surface S in space with unit normal N and boundary curve.]

Stokes’s theorem asserts that this same formula works for any surface in space and the curve bounding it. That is, it works provided we arrange the orientations of the surface and the curve consistently, so we will devote some attention to that point.

Suppose then that S is an oriented surface in space. That means that a unit normal
vector N has been specified at each point of S, and that these normals are related
to each other in some coherent way as we move around on the surface. For example,
if the surface is given parametrically by r = r(u, v), then the formula

dS = (∂r/∂u × ∂r/∂v) du dv

gives a preferred normal direction at each point. More generally, S might consist of a finite collection of such surfaces which are joined together along smooth curves along their edges. An example of such a surface would be three faces of a cube with a common vertex. In such cases, it is not so easy to explain how the unit normals should be related when you cross an edge, but in most cases it is intuitively clear what to do. (We shall discuss how to do this rigorously later.)

[Diagram: a surface S = S₁ + S₂ + S₃ made of three smooth pieces joined along their edges.]

Generally, such a surface S will have a well defined boundary ∂S, which will be a smooth curve in space or a finite collection of such curves. There will be two possible ways to traverse this boundary, and we specify one, i.e., we choose an orientation for the boundary. In particular, that means that at each point, we specify a unit tangent vector T. At each point on the boundary consider the cross-product N × T of the unit normal to the surface and the unit tangent vector to its boundary. This cross-product will be tangent to the surface and perpendicular to its boundary. (See the diagram.) Hence, it either points in towards the surface or out away from it. We shall say that the orientation of the surface and the orientation of its boundary are consistent if N × T always points toward S. This can be said more graphically as follows. If you imagine yourself, suitably reduced in size, walking around the edge of the surface in the preferred direction for ∂S, with your head in the direction of the preferred normal, then the region will always be on your left. As you can see, this is a natural generalization to a curved surface of the relationship we required for a plane region in Green’s theorem.

[Diagram: at a boundary point, the unit normal N, the unit tangent T, and the cross-product N × T pointing in toward the surface.]

Theorem 5.7 (Stokes’s Theorem) Let F be a vector field in space with continuous partial derivatives. Let S be a surface obtained by patching together smooth surfaces along a finite collection of smooth curves and such that ∂S consists of a finite collection of smooth curves. Assume finally that the orientations of S and ∂S are consistent. Then

∫_{∂S} F · dr = ∫∫_S (∇ × F) · N dS.

We will discuss the proof of Stokes’s theorem in the next section, but first we give
some applications.

Applications of Stokes’s Theorem By now you should be beginning to get the idea. Stokes’s theorem is normally used to calculate a line integral by setting it equal to a surface integral.
Example 117 Let F(r) = ⟨−y, x, 0⟩ = r u_θ, and let C be the ellipse which is the intersection of the cylinder x² + y² = 1 with the plane x + y + z = 2. Assume C is traversed in the counter-clockwise direction when viewed from above. We shall use Stokes’s theorem to calculate ∫_C F · dr. To do this, we need to find a surface S with boundary ∂S = C. Since we are in space, there are in fact infinitely many such surfaces. We shall do the problem two different ways, each of which is instructive.

First, the portion of the plane x + y + z = 2 contained within the ellipse is one possible S. To be consistent with the orientation of the curve C, we need to use the ‘upward’ pointing normals for S. (See the diagram.)

The curl of this vector field is easily calculated: ∇ × F = ⟨0, 0, 2⟩. Hence,

∫_C F · dr = ∫∫_S (∇ × F) · dS = ∫∫_S (2k) · dS.

Thus to complete the calculation, we must evaluate the surface integral on the right. The easiest way to do this is to treat the surface as the graph of the function given by z = f(x, y) = 2 − x − y with domain D the disk x² + y² ≤ 1 in the x, y-plane. Then

dS = ⟨−f_x, −f_y, 1⟩ dy dx = ⟨−1, −1, 1⟩ dy dx

and the surface integral is

∫∫_S (2k) · dS = ∫∫_D 2 dA = 2 A(D) = 2π(1)² = 2π.

Thus,

∫_C F · dr = 2π.

Here is another way to do it. Let S be the surface made up by taking the part of
the cylinder x² + y² = 1 between the ellipse and the x, y-plane together with the
disk x² + y² ≤ 1 in the x, y-plane. You can think of S as obtained by taking a tin
can with bottom in the x, y-plane and cutting it off by the plane x + y + z = 2.
The result is an open can with a slanting elliptical top edge C. It is a little difficult
to see, but the proper direction for the normal vectors on the lateral cylindrical
surface is inward toward the z-axis. Then, as you cross the circular edge in the
x, y-plane to the disk forming the bottom component of S, you need to choose the
upward pointing normal. Since ∇ × F = 2k is parallel to the lateral surface, the
flux through that is zero. On the bottom surface, we have (∇ × F) · N = 2k · k = 2.
Hence, the flux through the bottom surface is 2 times the area of the disk, i.e., 2π.
Thus,

∫_C F · dr = ∫∫_S (∇ × F) · N dS = 0 + 2π = 2π.
[Diagram: the open can S, with inward normals on the lateral surface and an upward normal on the bottom.]
There is one interesting variation of the above calculation. Let S be just the lateral
surface of the cylinder between the plane x + y + z = 2 and the x, y-plane. Then
∂S has two components: the top edge C and the bottom edge C′ which is the circle
x² + y² = 1 in the x, y-plane. Since we need to choose the inward pointing normals
on the lateral surface (for consistency with the orientation of C), we need to choose
the clockwise orientation of C′ for consistency with that inward normal. Then

∫_{C∪C′} F · dr = ∫∫_S (∇ × F) · N dS = 0

since as above ∇ × F = 2k is parallel to S. Hence,


∫_C F · dr + ∫_{C′} F · dr = 0

or

∫_C F · dr = −∫_{C′} F · dr.

However, ∫_{C′} F · dr is an integral we have done several times in the past, and it
is equal to −2π. (Actually, we did it with the opposite orientation and got 2π.)
Hence, the final answer is 2π.
The above example illustrates the interesting ways geometric reasoning can help
when applying Stokes’s theorem. Sometimes you should look for a non-obvious
surface which may allow you to simplify the calculation. Also, it is sometimes
useful to use Stokes’s theorem to reduce the desired line integral to another line
integral which is easier to calculate. The second principle is also illustrated by the
next example.

Example 118 Let

F(r) = (1/r) uθ = ⟨ −y/(x² + y²), x/(x² + y²), 0 ⟩.

Note that F blows up on the z-axis, so its domain is R3 with the z-axis deleted.
This field, except for some constant, is the magnetic field intensity produced by a
unit of current flowing in a thin wire along the z-axis. The lines of force are circles
centered on the z-axis. Its curl is ∇ × F = 0. (You should do the calculation, which
is very similar to the one done for the analogous vector field in the plane in Section
7.)
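If you would rather let a computer algebra system do the differentiation, the following sketch (assuming Python with the sympy package; the software choice is mine, not the text's) carries out the curl calculation:

    import sympy as sp

    x, y, z = sp.symbols('x y z', real=True)
    F = sp.Matrix([-y / (x**2 + y**2), x / (x**2 + y**2), 0])

    # Cartesian curl, component by component.
    curl_F = sp.Matrix([
        sp.diff(F[2], y) - sp.diff(F[1], z),
        sp.diff(F[0], z) - sp.diff(F[2], x),
        sp.diff(F[1], x) - sp.diff(F[0], y),
    ])
    print(sp.simplify(curl_F))   # Matrix([[0], [0], [0]]) away from the z-axis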

Let C be any closed curve which goes once around the z-axis in the counter-clockwise
direction when viewed from above. Since ∇ × F = 0, you might think you could
simply span the curve C by any surface whatsoever and the resulting surface integral
would be zero. Unfortunately, any surface spanning such a curve must intersect the
z-axis. Since F is singular on the z-axis, Stokes’s theorem does not apply for such

a surface. Instead, we must proceed as indicated above. Let C′ be a small circle
in the x, y-plane centered at the origin. Let S′ be any surface extending from C′
to C. If the normals on S′ are consistent with the orientation of C, then C′ must
be traversed in the clockwise direction. ∫_{C′} F · dr has been calculated several times
(e.g., once in the previous section); it equals −2π. Thus, by Stokes's theorem

∫_C F · dr + ∫_{C′} F · dr = ∫∫_{S′} (∇ × F) · dS = 0

or

∫_C F · dr = −∫_{C′} F · dr = −(−2π) = 2π.

The line integral is zero if the curve C does not go around the z-axis. (Can you
prove that?)

Physical interpretation of Curl As in the case of the divergence of a vector field,


we can use Stokes’s theorem to give an interpretation of ∇×F which is independent
of any coordinate system.

Fix a point P with position vector r at which we want to calculate ∇ × F. Choose


a normal direction N at P . In the plane passing through P perpendicular to N,
consider a small circle C of radius R centered at P. Traverse C in the counterclockwise
direction when viewed from the 'top' relative to N. The line integral
∫_C F · dr is called the circulation of the vector field along the closed curve C. If F
is the momentum field of a fluid flow, you can think of the circulation as indicating
the average twisting effect of the field along the curve. By Stokes’s theorem, we
have

∫_C F · dr = ∫∫_S (∇ × F) · N dS
where S is the disk spanning the circle C. However, by the average value property
for integrals, we have
∫∫_S (∇ × F) · N dS = (∇ × F)(r0) · N A(S)

where the curl has been evaluated at an appropriate point with position vector r0
in the disk S. Thus,

(∇ × F)(r0) · N = (1/A(S)) ∫_C F · dr.
If we take the limit as R → 0, i.e., as C shrinks to the point P, r0 → r, so

(∇ × F)(r) · N = lim_{C→P} (1/A(S)) ∫_C F · dr.

The quantity on the right is called the limiting circulation per unit area about the
axis N. In the fluid flow model, it can be thought of as the twisting effect on a
small 'paddle wheel' with axis N. As its axis is shifted, the paddle wheel will spin
faster or slower, and it may even reverse direction. This all depends on the relation
between the curl (∇ × F)(r) at the point and the axis N of the paddle wheel.
In the above analysis, we used a family of small circles converging to P , but the
analysis would work as well for any family of curves converging to P . For example,
they could be squares or triangles in the plane perpendicular to N. Indeed, the
curves need not be plane curves perpendicular to the specified normal direction N
as long as their normals all converge to N.
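A small numerical experiment makes the limit concrete. For F = ⟨−y, x, 0⟩ we know ∇ × F = 2k, so the circulation around a circle of radius R perpendicular to k, divided by the area πR², should be near 2 for every small R. A sketch assuming Python with numpy:

    import numpy as np

    for R in (1.0, 0.1, 0.01):
        t = np.linspace(0.0, 2.0 * np.pi, 2001)
        x, y = R * np.cos(t), R * np.sin(t)
        dxdt, dydt = -R * np.sin(t), R * np.cos(t)
        circulation = (-y * dxdt + x * dydt)[:-1].dot(np.diff(t))
        print(R, circulation / (np.pi * R**2))   # values near 2 for every R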

We can use this interpretation of the curl (in the most general form) to show that
∇ × F = 0 for the field F(r) = (1/r) uθ considered in Example 118 above. It is probably
easier to do that by direct calculation from the formula, but the geometric method
is instructive. We base the calculation on cylindrical coordinates. Fix a point P not
on the z-axis. To show ∇ × F is zero at P , it suffices to show that its components
in three mutually perpendicular directions are zero. We shall show that

(∇ × F) · N = 0

for N = ur (pointing radially away from the z-axis), N = uθ (pointing tangent to
circles centered on the z-axis), and N = k (pointing parallel to the z-axis). For ur,
consider a curvilinear rectangle centered at the point P on a cylinder (of radius r)
through the point P . You should examine the diagram carefully to make sure you
understand the direction in which this rectangle must be traversed to be consistent
with normal direction N = ur . In particular, note that the bottom edge is traversed
in the direction of positive θ and the top edge in the direction of negative θ.

The circulation (line integral) of F along this rectangle decomposes into 4 terms,
one for each side of the rectangle. F is perpendicular to each of the vertical edges,
so the line integrals for those are zero. The line integral for the bottom edge is
easy to calculate because the vector field is tangent to the edge and constant on
it. (The answer is (1/r) · r∆θ = ∆θ, where ∆θ is the angle subtended by the edge at
the z-axis, but the actual value is not needed in the argument.) The calculation of
the line integral for the top edge is exactly the same except that the sign changes
because it is traversed in the opposite direction. Hence, the net circulation around
the rectangle is zero. If we divide by the area of the rectangle, and take the limit as
the rectangle shrinks to the point, we still get zero. It follows that (∇ × F) · ur = 0.

The calculations showing that (∇ × F) · uθ = 0 and (∇ × F) · k = 0 are similar, and


are left to the student as exercises. (The hardest one is the one for k.)

Exercises for 5.9.


1. Let F(x, y, z) = −yi + xj + zk. Use Stokes's Theorem to evaluate ∫_C F · dr for
each of the following paths.

(a) A circle of radius a in the plane z = 5 centered on the z axis and traversed
counterclockwise when viewed from above.
(b) A circle of radius a in the plane x = 3 centered on the x-axis and traversed
counterclockwise when viewed from in front.
(c) The intersection of the paraboloid z = x² + y² with the paraboloid z =
8 − x² − y² traversed counterclockwise when viewed from above.

2. For each of the following surfaces S determine the proper coherent direction
of the normal vector if the boundary C = ∂S is traversed in the indicated
direction.
(a) The half open cylinder x² + z² = 1, 0 ≤ y ≤ 2, closed off on the left by the
disk x² + z² ≤ 1 in the x, z-plane. Assume the boundary x² + z² = 1, y = 2
is traversed counterclockwise with respect to the positive y-axis, i.e., it comes
toward you and down in the first octant.
(b) The half open cylinder as in part (a), but open on the left and closed off
on the right. Assume C is also traversed counterclockwise with respect to the
positive y-axis.
(c) The part of the paraboloid z = 25 − x² − 2y² above the plane x + y + z = 1.
Assume the boundary C is traversed counterclockwise with respect to the
positive z-axis.

3. For each of the following surfaces S with the indicated normal vectors, describe
the proper coherent orientation for its boundary C = ∂S.

(a) The part of the cone z = √(x² + 4y²) below the plane 3x + 4y + z = 10.
Use normals pointing roughly away from the z-axis.
(b) The 3 faces of the unit cube in the first octant which lie in the coordinate
planes. Use normals pointing into the first octant. Make sure you indicate
the proper orientation for each of the 6 components of the boundary.
(c) The portion of the hyperboloid of one sheet x² − y² + z² = 1 between the
planes y = 1 and y = −1. Use normals pointing away from the y-axis. Hint:
Note that the boundary consists of two disconnected curves.

4. Let F(x, y, z) = −yi + xj + zk. Use Stokes's Theorem to evaluate ∫_C F · dr
where C is the intersection of the surface of the unit cube in the first octant
with the plane z = x/10 + y/12. Assume the curve is traversed counterclockwise
when viewed from above. Hint: Think of C as the boundary of the surface
when viewed from above. Hint: Think of C as the boundary of the surface
consisting of the vertical faces of the cube below the curve together with the
bottom face.
5. Evaluate ∫_C (sin(x³) + y) dx + (e^(y²) + x) dy + cos z dz for C the intersection of
the ellipsoid x²/4 + y²/9 + z²/25 = 1 with the paraboloid z = x² + y². Find the
answer for both possible orientations of this curve.

6. Let S be a simple closed surface in R3 and let F be a smooth vector field with
domain containing S.
(a) Assume F is smooth on the interior E of S. Use the divergence theorem
and the formula ∇ · (∇ × F) = 0 to prove that ∫∫_S (∇ × F) · dS = 0. Use
outward normals.
(b) Do not assume that F is necessarily smooth on the interior of S. Prove the
above formula by means of Stokes’s Theorem as follows. Decompose S = S1 ∪
S2 into a union of two subsurfaces which meet only on their common boundary
C. Note that C is traversed in opposite directions for the two surfaces if its
orientation is consistent with the use of the outward orientation of normals
on both.
7. Let F = ⟨yz, −xz, xy⟩. Calculate ∇ × F. Let C be the unit circle x² + y² = 1
in the x, y-plane traversed counterclockwise. (a) Calculate ∫∫_S (∇ × F) · N dS
for S the unit disk in the x, y-plane. (b) Make the same calculation for S the
hemisphere x² + y² + z² = 1, z ≤ 0. (c) Make the same calculation if S is
made up of the cylindrical surface x² + y² = 1, 0 ≤ z ≤ 1 capped off on top by
the plane disk z = 1, x² + y² ≤ 1. Note that in each case the boundary of
S is the curve C, so the answers should all be the same (in this case 0). Make
sure you choose the proper direction for the normal vectors for each surface.
8. Let F be a smooth vector field in R3 .

(a) Assume that ∫_C F · dr = 0 for every sufficiently small rectangular path,
whatever its orientation. Show that ∇ × F = 0.

(b) Suppose we only assume that ∫_C F · dr = 0 for every sufficiently small
rectangular path which lies in a plane parallel to one of the coordinate planes.
Can we still conclude that ∇ × F = 0? Explain.
9. Let F = (1/r) uθ where r = √(x² + y²).
(a) Let C1 be a circular arc which is part of a circle of radius r parallel to
the x, y-plane, centered on the z-axis and with the arc subtending an angle

α. Assume C1 is oriented in the counterclockwise direction when viewed from
above. Show that ∫_{C1} F · dr = α.
(b) Consider the following path C in a plane parallel to the x, y-plane. Start
at the point with cylindrical coordinates (r, θ, z) and move in the positive r-
direction to (r + ∆r, θ, z), then along an arc (as above) to (r + ∆r, θ + ∆θ, z),
then to (r, θ + ∆θ, z), and finally back on a circular arc to (r, θ, z). Using part
(a), show that ∫_C F · dr = 0.
C
(c) Show that (∇ × F) · k = 0.
(d) Using a similar argument, show that (∇ × F) · uθ = 0.

5.10 The Proof of Stokes’s Theorem

We shall give two proofs. The first is not really a proof but a plausibility argument.
You should study it because it will help you understand why the theorem is true.
The second is a correct proof, but you can probably skip it unless you are especially
interested in seeing a rigorous treatment.

The first argument is based on the ‘physical interpretation’ of the curl. Let F be
a smooth vector field in space and let S be a surface with boundary ∂S. Imagine
S decomposed into many small curvilinear parallelograms Si . For each Si , choose
a point with position vector ri inside Si , and let Ni be the normal vector at that
point.

[Diagram: the surface S decomposed into curvilinear parallelograms Si, Sj with normals Ni, Nj; line integrals cancel on the common boundary.]

If Si is small enough, we can treat it as if it were an actual parallelogram passing


through ri and normal to Ni . Then, according to the physical interpretation of the
curl, to a high degree of approximation we have

(∇ × F)(ri) · Ni ≈ (1/A(Si)) ∫_{∂Si} F · dr.

Hence,

∫_{∂Si} F · dr ≈ (∇ × F)(ri) · Ni A(Si),

and, adding up, we obtain

∑_i ∫_{∂Si} F · dr ≈ ∑_i (∇ × F)(ri) · Ni A(Si).

If we take the limit as the number of curvilinear parallelograms goes to infinity,
the sum on the right approaches the surface integral ∫∫_S (∇ × F) · N dS. Consider
the sum on the left. Assume for each Si that the orientation of the boundary ∂Si
is consistent with the unit normal Ni . Let Si and Sj be two adjacent curvilinear
parallelograms which meet along a common edge Cij . Look at the diagram. If
the normals Ni and Nj are ‘coherently related’ to one another, then the direction
assigned to the common edge Cij by ∂Si will be opposite to the direction assigned
to it by ∂Sj . Hence, the two line integrals for this common edge will cancel one
another. That means that all internal segments of the boundaries of the curvilinear
parallelograms will cancel, and the only portions left will be the external boundary
∂S. Thus,

∑_i ∫_{∂Si} F · dr = ∫_{∂S} F · dr.

It follows that the line integral in Stokes’s theorem equals the surface integral as
required.

Note that this argument is not valid in the form we presented it. The reason is that
we derived the physical interpretation of the curl from Stokes’s theorem. Since that
interpretation was a crucial step in the argument, the logic contains a vicious circle.
One way around this would be to derive the physical interpretation of the curl by
an independent argument. However, I know of no such argument which does not
contain a hidden proof of Stokes’s theorem.

A closer examination of the argument helps us understand the idea of orientation


for a surface. We suppose the surface can be decomposed into patches, each small
enough so that it can be assigned a coherent set of unit normals. (For example,
we can assume each patch is given by a parametric representation.) For any given
patch, the normal directions will impose a consistent orientation on its boundary.
We say that the normals on the patches are coherently related if common boundary
segments are traced in opposite directions for adjacent patches. (See the diagram.)

As we mentioned earlier in Section 2,

it may not in fact be possible to assign a coherent set of normals to the entire
surface. The Möbius band is an example of a surface for which that is not possible,
i.e., it is non-orientable. The diagram shows an attempt to decompose a Möbius
band into patches with coherent normals and boundaries.

[Diagram: an attempted patch decomposition of the Möbius band; cancellation fails on one segment.]
[Diagram: an inner tube with a hole cut in it, a more complicated surface built from coherently related patches.]

It should also be noted that the method of forming a surface by putting together
coherently related simple patches can lead to some fairly complicated results. See
the diagram for an example. Stokes’s theorem still applies to the more general
surfaces.

The Correct Proof (May Be Skipped) Let S be an oriented surface which can be
decomposed into smooth patches as described above. By further decomposition, if
necessary, we may assume that each patch is the graph of a function. By arguing
as above about mutual cancellation along common edges, the line integral along ∂S
may be expressed as the sum of the line integrals for the boundaries of the individual
patches. Similarly the surface integral may be expressed as the sum of the surface
integrals for the individual patches. Thus it suffices to prove Stokes’s theorem (that
the line integral equals the surface integral) for each individual patch. Thus, we are
reduced to proving the theorem for the graph of a function. Suppose then that S is
the graph of the function expressed by z = f (x, y) with domain D in the x, y-plane.
(Essentially the same argument will work for graphs expressible by x = g(y, z) or
y = h(x, z).) Assume the orientation of S is the one such that the z-component of
N is always positive. (For the reverse orientation, just reverse all the signs in the
arguments below.)

[Diagram: S is the graph of z = f(x, y) over the plane domain D.]


First we calculate ∫_{∂S} F · dr. Choose a parametric representation x = x(t), y =
y(t), a ≤ t ≤ b for ∂D, the boundary of the parameter domain. Then, x = x(t), y =
y(t), z = f (x(t), y(t)) will be a parametric representation of ∂S. Moreover, if ∂D is
traversed so that D is on the left, the resulting orientation of ∂S will be consistent
with the generally upward orientation of S, as above. Then

F · dr = F1 dx + F2 dy + F3 dz.

However, dz = fx dx + fy dy yields

F · dr = F1 dx + F2 dy + F3 (fx dx + fy dy)
= (F1 + F3 fx )dx + (F2 + F3 fy )dy.

Hence,
∫_{∂S} F · dr = ∫_{∂D} (F1 + F3 fx) dx + (F2 + F3 fy) dy = ∫_{∂D} G1 dx + G2 dy

where

G1 (x, y) = F1 (x, y, f (x, y)) + F3 (x, y, f (x, y))fx (x, y)


G2 (x, y) = F2 (x, y, f (x, y)) + F3 (x, y, f (x, y))fy (x, y)

Note that G = ⟨G1, G2⟩ is a plane vector field obtained by putting z = f(x, y) and
so eliminating the explicit dependence on z. Now, we may use Green’s theorem in
the plane to obtain
∫_{∂D} G1 dx + G2 dy = ∫∫_D ( ∂G2/∂x − ∂G1/∂y ) dy dx.

The calculation of the partials on the right is a little tricky. By the chain rule

∂/∂x [F2(x, y, f(x, y))] = (∂F2/∂x)(∂x/∂x) + (∂F2/∂y)(∂y/∂x) + (∂F2/∂z)(∂z/∂x)
= (∂F2/∂x)(1) + (∂F2/∂y)(0) + (∂F2/∂z) fx
= ∂F2/∂x + (∂F2/∂z) fx.
The notation is a little confusing. On the left, we are taking the partial derivative
with respect to x after making the substitution z = f (x, y). The function being
differentiated is thus a function of x and y alone. On the right, the partial derivatives
are taken before making the substitution, so at that stage x, y and z are treated as
independent variables. Similar calculations yield

∂/∂x [F3(x, y, f(x, y))] = ∂F3/∂x + (∂F3/∂z) fx,
∂/∂y [F1(x, y, f(x, y))] = ∂F1/∂y + (∂F1/∂z) fy,
∂/∂y [F3(x, y, f(x, y))] = ∂F3/∂y + (∂F3/∂z) fy.

Thus,
∂G2/∂x = ∂F2/∂x + (∂F2/∂z) fx + ( ∂F3/∂x + (∂F3/∂z) fx ) fy + F3 fyx
∂G1/∂y = ∂F1/∂y + (∂F1/∂z) fy + ( ∂F3/∂y + (∂F3/∂z) fy ) fx + F3 fxy.

(The product rule has been used for the second terms in the expressions for G1 and
G2 .) Hence, subtracting, we obtain for the integrand
∂G2/∂x − ∂G1/∂y = ( ∂F2/∂x − ∂F1/∂y ) + ( ∂F2/∂z − ∂F3/∂y ) fx + ( ∂F3/∂x − ∂F1/∂z ) fy.

Next we evaluate ∫∫_S ∇ × F · dS. We have

dS = ⟨−fx, −fy, 1⟩ dy dx.

Also,

∇ × F = ⟨ ∂F3/∂y − ∂F2/∂z, ∂F1/∂z − ∂F3/∂x, ∂F2/∂x − ∂F1/∂y ⟩

so

(∇ × F) · dS = [ ( ∂F3/∂y − ∂F2/∂z )(−fx) + ( ∂F1/∂z − ∂F3/∂x )(−fy) + ( ∂F2/∂x − ∂F1/∂y ) ] dy dx,

and it is not hard to check that the expression in brackets is the same integrand as
above. It follows that
∫_{∂S} F · dr = ∫_{∂D} G1 dx + G2 dy = ∫∫_D ( ∂G2/∂x − ∂G1/∂y ) dy dx = ∫∫_S ∇ × F · dS.

That completes the proof.
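The chain rule bookkeeping above is exactly the kind of thing a computer algebra system is good at double-checking. The following sketch (assuming Python with sympy, and using an arbitrary sample F and f of my own choosing) verifies that ∂G2/∂x − ∂G1/∂y agrees with the bracketed integrand:

    import sympy as sp

    x, y, z = sp.symbols('x y z')
    # Arbitrary smooth sample data; any choices would do.
    F1, F2, F3 = x*y*z, y + z**2, x*z - y**2
    f = x**2 - y**3

    fx, fy = sp.diff(f, x), sp.diff(f, y)
    G1 = (F1 + F3 * fx).subs(z, f)
    G2 = (F2 + F3 * fy).subs(z, f)
    lhs = sp.diff(G2, x) - sp.diff(G1, y)

    curl = sp.Matrix([sp.diff(F3, y) - sp.diff(F2, z),
                      sp.diff(F1, z) - sp.diff(F3, x),
                      sp.diff(F2, x) - sp.diff(F1, y)])
    rhs = curl.dot(sp.Matrix([-fx, -fy, 1])).subs(z, f)
    print(sp.expand(lhs - rhs))   # 0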

Exercises for 5.10.

1. Consider the surface consisting of the three faces of the unit cube in the first
octant which lie in the coordinate planes. Assume that the normals are chosen
to point into the first octant. Determine the orientation of the boundary of
each of the three faces and verify that cancellation occurs as described in the
proof of Stokes’s Theorem along common edges.

5.11 Conservative Fields, Reconsidered

Let F denote a vector field in Rn where n = 2 or n = 3. We shall look again at the


screening tests to determine whether F might be conservative. For a vector field in
the plane (n = 2), the test is

∂F2/∂x − ∂F1/∂y = 0
(equation (2) in Section 3), and for a vector field in space (n = 3), the test may be
written
∇×F=0
(Section 4). We pointed out in our earlier discussion that a vector field could pass
the screening test but still not be conservative. The most important example of
such a field is F = (1/r) uθ. However, this can only happen if the geometry of the
domain of the vector field is sufficiently complicated. In this section we shall look
into the matter in more detail.

First consider the plane case (n = 2). We said previously that an open set D is
connected if it can’t be decomposed into disjoint open sets, i.e., if it comes in ‘one
piece’. There is a somewhat related but much subtler notion. If a connected region
D in R2 does not have any ‘holes’ in it, it is called simply connected. The diagram
gives some examples of regions in the plane which are simply connected and which
are not simply connected.

[Diagram: examples of plane regions which are simply connected and which are not simply connected.]

A slightly more rigorous characterization of ‘simply connected’ is as follows. Con-


sider a simple closed curve C in the plane. By that we mean that the curve does
not cross itself anywhere. It is intuitively clear that such a curve bounds a region in
the plane which we call the interior of the curve. The region D is simply connected
if for any simple closed curve C which lies entirely in D, the interior of C also lies
in D. The idea is that if there were a 'hole' in D, even one consisting of a single
missing point, then one could find a curve which goes around the hole and then
the interior of that curve would contain at least one point not in D. (The actual
definition of the term ‘simply connected’ is a bit more involved, but for regions in
the plane, the above characterization is equivalent.)

Note that many common regions in the plane are simply connected. For example,
all rectangles, disks, etc. are simply connected.

Theorem 5.8 Let F = ⟨F1, F2⟩ be a plane vector field which is smooth on its
domain of definition D. Suppose D is simply connected. Then F is conservative if
and only if it passes the screening test

∂F2/∂x − ∂F1/∂y = 0.

Proof. If F is conservative, then we know it passes the screening test.

Suppose conversely that ∂F2/∂x − ∂F1/∂y = 0. One way to show that F is conservative is
to show that ∫_C F · dr = 0 for every closed curve C in the domain D. In principle, we
must show this for every closed curve, but in fact it is enough to show it for every
simple closed curve, i.e., every closed curve which does not cross itself. Suppose then
that C is a simple closed curve, and let D0 denote the interior of C. By hypothesis,
D0 is entirely contained within D where the vector field F is assumed to be smooth.

Hence, Green’s theorem applies and we may conclude that C


∫ ∫∫
∂F2 ∂F1
F · dr = − dA
C 0 ∂x ∂y
∫ ∫D
= (0) dA = 0
D0

as required.

Remarks for those curious about the details. There are some tricky points
which were glossed over in the above discussion. First, the assertion that every
simple closed curve bounds a region in the plane is actually a deep theorem called
the Jordan Curve Theorem. That theorem is quite difficult to prove in full generality.
Fortunately, there are tricky ways to get around that for what we want here. Thus,
to show F is conservative, it suffices to choose a base point r0 and to define a
function f with gradient F using the formula

f(r) = ∫_C F · dr

where C is any path from r0 to r. Since this can be any path, it might as well be a
polygonal path whose edges are parallel to one of the coordinate axes. The Jordan
Curve theorem is much easier to verify for such paths. Similarly, the reduction from
curves which do cross themselves to curves which do not is not hard to justify for
such polygonal paths. See the diagram for a hint about how to do it.

[Diagram: a polygonal path from P0; a closed polygonal path breaks up into simple ones.]

Example Let F(x, y) = (1/r) uθ = (1/(x² + y²)) ⟨−y, x⟩. The domain of this function is
the plane R2 with the origin deleted. Hence, it is not simply connected. Thus
the theorem does not apply. Indeed, the field is not conservative but does pass

the screening test. However, we may choose simply connected subdomains and
consider the vector field on such a subdomain. For example, let D be the part
of the plane to the right of the y-axis and not including the y-axis. This region
is simply connected—i.e., there are no holes—and F does pass the screening test,
so the theorem tells us there must be a function f such that F = ∇f everywhere
on D. In fact, you showed previously in an exercise that the function defined by
f(x, y) = tan⁻¹(y/x) is just such a function. (Check again that ∇f = F in case you
don’t remember the exercise.) The natural question then—posed in the exercise—is
why can’t we use this same function for the original domain of F? Let’s see what
happens if we try. Note first that in the right half plane, we have θ = tan⁻¹(y/x)
so there is an obvious way to try to extend the definition of the function f. Let
f (x, y) be the polar coordinate θ of the point (x, y). Unfortunately, θ is not uniquely
defined. If we start working forward from the positive y-axis, θ starts at π/2 and
increases. If we start working backward from the negative y-axis, θ starts at −π/2
and gets progressively more negative. On the negative x-axis, these two attempts
to extend the definition of f (x, y) will run into a problem. If we approach from
above the proper value will be π, but if we approach from below the proper value
will be −π. There is no way around this difficulty. We know that because the field
is not conservative. Hence, there is no continuous function f such that F = ∇f on
the entire domain of F.
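A sympy sketch can confirm the local statement (the software choice is incidental): on the right half plane, the gradient of tan⁻¹(y/x) really is F.

    import sympy as sp

    x, y = sp.symbols('x y', positive=True)   # a piece of the right half plane
    f = sp.atan(y / x)
    grad_f = sp.Matrix([sp.diff(f, x), sp.diff(f, y)])
    F = sp.Matrix([-y / (x**2 + y**2), x / (x**2 + y**2)])
    print(sp.simplify(grad_f - F))   # Matrix([[0], [0]])

No such check can succeed on the whole punctured plane, for the reasons just explained.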

The above example illustrates a very important principle in mathematics and its
applications. We may have a problem (for example a system of differential equa-
tions) which, for every point where the problem makes sense, has a solution which
works in a sufficiently small neighborhood of that point. In that case, we say that
the problem is solvable locally. That does not guarantee, however, that the prob-
lem can be solved globally, i.e., that there is a single continuous solution which
works everywhere. Since every point has a simply connected neighborhood (a small
rectangle or small disk), any vector field which passes the screening test is locally a
gradient, but may not be globally a gradient. The issue of ‘local solutions’ versus
‘global solution’ has only recently entered the awareness of physicists and other
scientists. The reason is that much of physics was concerned with solving differen-
tial equations or systems of differential equations, and the solution methods tend
to give local solutions. However, the importance of global solutions has become
increasingly clear in recent years. (Solutions obtained in small neighborhoods may
conflict after traversing a closed loop.)

The above ideas can be extended to vector fields
in space, but the geometry is more complicated because the concept ‘hole’ can be
more complicated. Let D be a connected open set in R3 . Suppose that any simple
closed curve C in D is the boundary of an orientable surface S which also lies entirely
in D. In this case we shall say that D spans curves. Most common regions, e.g.,
solid spheres or rectangular boxes, have this property. The region consisting of R3
with a single point deleted also has this property. For, if a curve C goes around the
missing point, it can be spanned by a surface which avoids the point. On the other
hand, the region obtained by deleting the entire z-axis from R3 does not have the
property since any curve going around the z-axis cannot be spanned by a surface
which is not pierced by the z-axis.

[Diagram: a curve C encircling the z-axis; no spanning surface S can avoid the z-axis.]

The analogue of the above theorem holds in space: If F is a smooth vector field
on some open set D in R3 and D spans curves, then F is conservative if and only
if it satisfies the screening test ∇ × F = 0. The proof is very similar, but it uses
Stokes’s theorem instead of Green’s theorem. You should work it out for yourself.
This notion is a bit more complicated than you might think at first. A curve in
R3 might not intersect itself but could still have quite complicated geometry. For
example, it might be knotted in such a way that it can not be straightened out
without being cut. Also, we didn’t go into how complicated the ‘surfaces’ spanning
such a curve may be.

The term ‘simply connected’ may also be defined for regions E in space, but the
definition is not the same as that for ‘E spans curves’ given above. However, the
notions are fairly closely related, and a region which is simply connected does span
curves (but not necessarily vice versa).

Exercises for 5.11.

1. Tell if each of the following regions is simply connected. If not, give a reason
for your conclusion.
(a) The set of all points in R2 except for the semi-circle x² + y² = 1, y > 0.
(b) The set of all points in R2 except for (1, 0) and (−1, 0).
(c) The region in R2 between x² + y² = 1 and x² + y² = 4.

2. Tell if each of the following regions in R3 spans curves.


(a) The set of all points in R3 except for the line segment from (0, 0, 0) to
(1, 0, 0).
(b) The interior of the torus in R3 obtained by rotating the circle (x − 2)² + y² = 1
about the z-axis.

(c) The region in R3 between the sphere x² + y² + z² = 1 and the sphere
x² + y² + z² = 4.
(d) The set of all points in R3 except for those on the circle x² + y² = 1, z = 0
in the x, y-plane.

3. You showed previously by a messy direct calculation that ∇ × (∇f ) = 0 for


any scalar field f in space. Here is an alternate argument with avoids the
messy calculation. (However, it depends on Stokes’s theorem, the proof of
which is messy enough in its own right!)
Use the ‘physical interpretation of the curl’ to show that ∇ × F · N = 0 for
any conservative field F = ∇f and any vector N. (Use the path independence
property for conservative fields.) Then conclude that ∇ × F = 0.

5.12 Vector Potential

Let F be a vector field in R3 . We said that F is conservative if F = ∇f for some


scalar field f . Electrostatic fields are conservative, so the previous mathematics
is good to know if you are studying such fields. Magnetic fields, however, are not
conservative, so a different but related kind of mathematics is appropriate for them.
We go into that now.

Let F be a vector field in space. Another vector field A is called a vector potential
for F if
∇ × A = F.
Note that such an A is not unique. Namely, if U is any field satisfying ∇ × U = 0,
then
∇ × (A + U) = ∇ × A + ∇ × U = F + 0 = F.
Hence, a vector potential for F, if there is one, may always be modified by such
a U. Since any conservative U satisfies ∇ × U = 0, there is a very wide choice
of possible vector potentials. This is to be distinguished from the case of scalar
potentials which are unique up to an additive constant.

There is also a screening test to determine if F might have a vector potential A.


Namely, you should have checked the identity

∇ · (∇ × A) = 0,

i.e., divergence following curl is always 0. (If you didn’t do that exercise, do it now!)
That means, if ∇ · F ≠ 0, then there is no point looking for a vector potential. If it
does vanish, then it is worth a try.

Example 119 Let F(x, y, z) = ⟨x, y, −2z⟩. We have

∇ · F = 1 + 1 − 2 = 0
so F passes the screening test. We look for a vector potential A as follows. We try
to find one with A3 = 0. This is plausible for the following reason. Suppose we
found a vector potential with A3 ≠ 0. In that case, we could certainly find a scalar
field f such that A3 = ∂f /∂z. Then,
∇ × (A − ∇f ) = ∇ × A − ∇ × (∇f ) = ∇ × A.
However, the third component of A − ∇f is A3 − ∂f /∂z = 0.

Assume now that ∇ × A = F where A3 = 0. Then

−∂A2/∂z = x,   ∂A1/∂z = y,   ∂A2/∂x − ∂A1/∂y = −2z.
Thus, from the first two equations
A2 = −xz + C(x, y)
A1 = yz + D(x, y).
These can be substituted in the third equation to obtain

−z + ∂C/∂x − z − ∂D/∂y = −2z

or

∂C/∂x − ∂D/∂y = 0.
There are no unique solutions C, D to this equation. That reflects the great freedom
we have in choosing a vector potential. There are a variety of methods one could use
to come up with explicit solutions, but in the present case, it is clear by inspection
that
C(x, y) = D(x, y) = 0
will work. Hence, A1 = yz, A2 = −xz, A3 = 0 is a solution and
A = ⟨yz, −xz, 0⟩
is a vector potential. You should check that it works.
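The suggested check is a few lines in a computer algebra system; here is a sketch with sympy:

    import sympy as sp

    x, y, z = sp.symbols('x y z')
    A = sp.Matrix([y * z, -x * z, 0])
    curl_A = sp.Matrix([sp.diff(A[2], y) - sp.diff(A[1], z),
                        sp.diff(A[0], z) - sp.diff(A[2], x),
                        sp.diff(A[1], x) - sp.diff(A[0], y)])
    print(curl_A)   # Matrix([[x], [y], [-2*z]]), which is F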

Not every field F which passes the screening test ∇ · F = 0 has a vector potential.
Example 120 Let F = (1/ρ²) uρ be our friend the inverse square law. We have
remarked several times that ∇ · F = 0 for an inverse square law, so F passes the
screening test. However, F 6= ∇ × A for any possible A. Namely, we know from
flux calculations made previously that for this inverse square law
∫∫_S F · dS = 4π

for any sphere S centered at the origin. On the other hand, it is not too hard to
see that

∫∫_S ∇ × A · dS = 0

for any vector field A. To see this, divide the sphere into an upper hemisphere S1
and a lower hemisphere S2 which meet at the equator, which is a common boundary
for both. Note that ∂S1 is traversed in the opposite direction from ∂S2, although
they are the same curve if orientation is ignored. By Stokes's theorem, we have

∫_{∂S1} A · dr = ∫∫_{S1} ∇ × A · dS,
∫_{∂S2} A · dr = ∫∫_{S2} ∇ × A · dS.

If we add these two equations, we get zero on the left because the line integrals are
for the same curve traced in opposite directions. On the right, we get

∫∫_{S1} ∇ × A · dS + ∫∫_{S2} ∇ × A · dS = ∫∫_S ∇ × A · dS

so it is zero.
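For contrast, here is a quick numerical version of the flux computation that blocks a vector potential (a sketch assuming Python with numpy). On the unit sphere F = uρ = N, so F · N = 1 and the flux is just the surface area:

    import numpy as np

    n_phi, n_theta = 400, 800
    dphi, dtheta = np.pi / n_phi, 2.0 * np.pi / n_theta
    phi = (np.arange(n_phi) + 0.5) * dphi     # midpoint rule in phi
    theta = np.arange(n_theta) * dtheta
    P, _ = np.meshgrid(phi, theta, indexing='ij')
    # dS = sin(phi) dphi dtheta on the unit sphere, and F . N = 1.
    flux = np.sum(np.sin(P)) * dphi * dtheta
    print(flux, 4.0 * np.pi)   # about 12.566 in both cases -- not zero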

The property analogous to ‘E spans curves’ is E spans surfaces. A connected open


set E in R3 has this property if, for every simple closed surface S in E, the interior
of S is also in E. The set E consisting of R3 with the origin deleted does not span
surfaces since the interior of any sphere centered at the origin is not entirely in E.
On the other hand, the set E consisting of R3 with the entire z-axis deleted does
span surfaces. Do you see why?

Theorem 5.9 Let F be a smooth vector field on an open set E in R3 which spans
surfaces. Then F has a vector potential if and only if ∇ · F = 0.

The proof of this theorem is much too difficult to attempt here. Warning! The
author has not been completely honest about the concept ‘E spans surfaces’ because
the phrase ‘simple closed surface’ was not defined. This phrase evokes the idea
of something straightforward like the surface of a sphere or a cube. However, it
is necessary for the above theorem to hold that we consider much more general
surfaces. For example, a toral surface, i.e., the surface of a doughnut shaped solid,
is an example of a ‘simple closed surface’. In general, any surface which may be
viewed as the boundary of a plausible solid region (and to which the divergence
theorem may be applied) is a possible candidate. (If you want to learn more about
this subject, you should plan someday to study algebraic topology and in particular
a famous result called de Rham's Theorem.) Probably the only cases you ever need
to apply the theorem to are where E is equal to all of R3 or at worst the interior
of a really simple closed surface like a sphere or a cube.

Summary of Relations among Fields in R3 Some of the relations among scalar


and vector fields in R3 described in the previous sections can be summarized in the

following table.

Scalar Fields f, g, . . .
↓ ∇
Vector Fields F, E, A, . . .
↓ ∇×
Vector Fields F, B, . . .
↓ ∇·
Scalar Fields

The way to interpret this table is as follows. The vertical arrows indicate operations.
The first, the gradient operation (∇), takes scalar fields into vector fields. The
second, the curl operation (∇×), takes vector fields into vector fields. The third,
the divergence operation (∇·), takes vector fields into scalar fields.

The result of doing two successive operations is zero. Thus, ∇ × (∇f ) = 0 and
∇ · (∇ × A) = 0. Moreover, at the second and third levels we have screening tests.
If a vector field at the given level passes the test (i.e., the operation at that level
takes it to zero), one may ask if the vector field comes from a ‘potential’ at the
previous level. Under suitable connectedness assumptions, the answer will be yes.

Exercises for 5.12.

1. In each case check if the given field has divergence zero. If so, look for a vector
potential. (In each case, the domain is all of R3 which spans surfaces, so there
must be a vector potential if the divergence is zero.)
(a) ⟨x², y², −2(x + y)z⟩.
(b) ⟨y, x, z⟩.
(c) ⟨z, x, y⟩.
2. Let

F(x, y, z) = ⟨ x/(x² + y²), y/(x² + y²), 0 ⟩ = (1/r) ur.
(a) Show that ∇ · F = 0 except on the z-axis where F is undefined. Hint:
You can save yourself some time by referring to the formula for divergence in
cylindrical coordinates in Section 13.
(b) Find a vector potential for F. (The domain of F does span surfaces. Can
you see that?)
3. You showed in a previous exercise by a messy direct calculation that ∇ ·
(∇ × A) = 0 for any vector field A in space. Derive this fact by the following

conceptual argument which relies on both the divergence theorem and Stokes’s
theorem. (However, recall that the proofs of those theorems are not so easy.)
Use the argument in Example 120 to show that ∫∫_S (∇ × A) · dS = 0 for any
spherical surface in the domain of A. Then use the ‘physical interpretation
of the divergence’ to conclude that ∇ · (∇ × A) = 0.

4. Let F = hF1 , F2 , F3 i be a vector field in space satisfying ∇ · F = 0. Suppose


the domain of F is all of R3 . Fix values y0 , z0 , and define
A1 = ∫_{z0}^{z} F2(x, y, z′) dz′ − ∫_{y0}^{y} F3(x, y′, z0) dy′,
A2 = −∫_{z0}^{z} F1(x, y, z′) dz′,
A3 = 0.

Show that ∇ × A = F. Hint: You will have to use the general rule for
differentiating integrals
(∂/∂t) ∫_a^b f(s, t, . . .) ds = ∫_a^b (∂f/∂t)(s, t, . . .) ds

which asserts that you can interchange differentiation and integration with
respect to different variables. This rule applies for functions with continuous
partials on a bounded interval. Note however that to differentiate with respect
to the upper limit of an integral, you need to use the fundamental theorem of
calculus.
This is a special case of the theorem stated in the section since R3 certainly
spans surfaces.

5.13 Vector Operators in Curvilinear Coordinates

This section is included here because it depends on what we have just done. How-
ever, most of you should probably skip it for now and come back to it when you
need the formulas given here, which is likely to be the case at some point in your
studies. You should glance at some of the formulas, for future reference. Of course,
if you really want a challenging test of your understanding of the material in the
previous sections, you should study this section in detail.

Gradient in Cylindrical and Spherical Coordinates For a scalar field f in


the plane, we found in Chapter III, Section 7 that

∇f = (∂g/∂r) ur + (1/r)(∂g/∂θ) uθ.    (72)

where on the right g is the function of r, θ obtained by substituting x = r cos θ, y =


r sin θ in f , i.e., f (x, y) = g(r, θ).

The corresponding formula in cylindrical coordinates is


∇f = (∂g/∂r) ur + (1/r)(∂g/∂θ) uθ + (∂g/∂z) k    (73)
where f (x, y, z) = g(r, θ, z). That is fairly clear because the change from rectan-
gular coordinates x, y, z to cylindrical coordinates r, θ, z involves only the first two
coordinates.

There is a corresponding formula for spherical coordinates.


∇f = (∂g/∂ρ) uρ + (1/ρ)(∂g/∂φ) uφ + (1/(ρ sin φ))(∂g/∂θ) uθ.    (74)
where f (x, y, z) = g(ρ, φ, θ). uρ , uφ , and uθ are unit vectors in the ρ, φ, and θ
directions respectively. At any point in space, uρ points directly away from the
origin, uφ is tangent to the circle of longitude through the point, and points from
north to south, and uθ is tangent to the circle of latitude through the point and
points from west to east.

To derive this formula, we argue as follows.

df = ∇f · dr.

Assume
∇f = uρ Aρ + uφ Aφ + uθ Aθ .
The trick is to express dr in terms of these same unit vectors. Consider a spherical
cell with one corner at the point with spherical coordinates (ρ, φ, θ) and dimensions
dρ, ρdφ, and ρ sin φdθ.

[Diagram: a spherical cell with displacement dr and edge lengths dρ, ρ dφ, and ρ sin φ dθ.]

Then ignoring small errors due to the curvature of the sides, we have

dr = uρ dρ + uφ ρdφ + uθ ρ sin φdθ.

Hence,

∇f · dr = Aρ dρ + Aφ ρ dφ + Aθ ρ sin φ dθ.

However,

dg = (∂g/∂ρ) dρ + (∂g/∂φ) dφ + (∂g/∂θ) dθ,

so putting df = dg and comparing coefficients of dρ, dφ, and dθ gives Aρ = ∂g/∂ρ,
Aφ = (1/ρ) ∂g/∂φ, and Aθ = (1/(ρ sin φ)) ∂g/∂θ.
All this can be summarized by saying that the gradient operation has the following
form in each coordinate system:

Polar Coordinates in the plane:

∇ = ur ∂/∂r + uθ (1/r) ∂/∂θ.

Cylindrical Coordinates in space:

∇ = ur ∂/∂r + uθ (1/r) ∂/∂θ + k ∂/∂z.

Spherical Coordinates in space:

∇ = uρ ∂/∂ρ + uφ (1/ρ) ∂/∂φ + uθ (1/(ρ sin φ)) ∂/∂θ.
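A sympy sketch can confirm the spherical form on a sample function (the sample f and the choice of software are my additions, not the text's):

    import sympy as sp

    rho, phi, theta = sp.symbols('rho phi theta', positive=True)
    x = rho * sp.sin(phi) * sp.cos(theta)
    y = rho * sp.sin(phi) * sp.sin(theta)
    z = rho * sp.cos(phi)

    g = x * y + z**2   # f(x, y, z) = xy + z^2, rewritten as g(rho, phi, theta)

    u_rho   = sp.Matrix([sp.sin(phi)*sp.cos(theta), sp.sin(phi)*sp.sin(theta), sp.cos(phi)])
    u_phi   = sp.Matrix([sp.cos(phi)*sp.cos(theta), sp.cos(phi)*sp.sin(theta), -sp.sin(phi)])
    u_theta = sp.Matrix([-sp.sin(theta), sp.cos(theta), 0])

    spherical = (sp.diff(g, rho) * u_rho
                 + (sp.diff(g, phi) / rho) * u_phi
                 + (sp.diff(g, theta) / (rho * sp.sin(phi))) * u_theta)
    cartesian = sp.Matrix([y, x, 2 * z])   # <f_x, f_y, f_z> for f = xy + z^2
    print(sp.simplify(spherical - cartesian))   # Matrix([[0], [0], [0]])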

Divergence in Cylindrical Coordinates Let F = ur Fr + uθ Fθ + kFz be the


representation of the vector field in terms of cylindrical coordinates. Then we have
∇ · F = ( ur ∂/∂r + uθ (1/r) ∂/∂θ + k ∂/∂z ) · ( ur Fr + uθ Fθ + k Fz ).
One may calculate this formally using the fact that the three vectors ur , uθ , and
k are mutually perpendicular, but one must be careful to use the product rule and
apply the differentiation operators to these unit vectors. We have
∂ur/∂r = 0,   ∂ur/∂θ = uθ,   ∂ur/∂z = 0
∂uθ/∂r = 0,   ∂uθ/∂θ = −ur,   ∂uθ/∂z = 0
∂k/∂r = 0,   ∂k/∂θ = 0,   ∂k/∂z = 0.

(These rules can be derived by geometric visualization or by writing ur = cos θi +


sin θj and uθ = − sin θi + cos θj and taking the indicated partial derivatives.) The
result of the calculation, which you should check, is
∇ · F = (1/r) ( ∂(rFr)/∂r + ∂Fθ/∂θ + ∂(rFz)/∂z )
= (1/r) ∂(rFr)/∂r + (1/r) ∂Fθ/∂θ + ∂Fz/∂z.
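Here is one way to perform such a check by machine rather than by hand (a sketch with sympy; the sample field is an arbitrary choice of mine): compute the divergence in Cartesian coordinates and compare with the cylindrical formula.

    import sympy as sp

    r, theta, z = sp.symbols('r theta z', positive=True)
    Fr, Ftheta, Fz = r**2 * sp.cos(theta), r * sp.sin(theta), r * z  # sample field

    div_cyl = sp.diff(r * Fr, r) / r + sp.diff(Ftheta, theta) / r + sp.diff(Fz, z)

    # The same field in Cartesian form, using u_r = <cos t, sin t, 0> and
    # u_theta = <-sin t, cos t, 0>:
    x, y = sp.symbols('x y', real=True)
    rr = sp.sqrt(x**2 + y**2)
    ct, st = x / rr, y / rr
    Fx = (rr**2 * ct) * ct + (rr * st) * (-st)
    Fy = (rr**2 * ct) * st + (rr * st) * ct
    div_cart = sp.diff(Fx, x) + sp.diff(Fy, y) + sp.diff(rr * z, z)

    back = div_cart.subs({x: r * sp.cos(theta), y: r * sp.sin(theta)})
    print(sp.simplify(div_cyl - back))   # 0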

The above calculation is done by following a plausible but arbitrary formal scheme.
Hence, there is no particular reason to believe that it gives the correct answer.
There are, after all, other formal schemes that one could employ. To show that the
formula is correct, we need another argument, and we give that in what follows.

The basic method is to calculate the divergence as flux per unit volume by picking
an appropriate family of curvilinear boxes which shrink to a point.

Consider a curvilinear box centered at the point with cylindrical coordinates (r, θ, z).
It has the following 6 faces.

The top and bottom faces are circular wedges centered at (r, θ, z + dz) and (r, θ, z −
dz); their common area is r(2dθ)(2dr), and the normals are ±k.

The far and near side faces are rectangles centered at (r, θ + dθ, z) and (r, θ − dθ, z);
their common area is (2dr)(2dz), and the normals are ±uθ .

The outer and inner faces are cylindrical rectangles centered at (r + dr, θ, z) and
(r − dr, θ, z); their areas are respectively (r + dr)(2dθ)(2dz) and (r − dr)(2dθ)(2dz),
and the normals are ±ur .

To calculate the flux out of the surface S of the box we argue as follows. First,
for any face only the component of F perpendicular to that face is relevant: Fz
for the top and bottom faces, Fθ for the side faces, and Fr for the outer and inner
faces. Secondly, to a first approximation, the flux through a face equals the value
of the relevant component at the center of the face multiplied by its area and a sign
depending on the direction of the normal. (The reason why it suffices to use the
value of the component at the center of the face, rather than attempting to integrate
over the face, is that to a first approximation we may assume the component is a
linear function of the coordinates so, if we did integrate, each positive variation from
the value at the center would be canceled by a corresponding negative variation.)

We now perform this calculation for the faces of the box.

For the top face the flux is

Fz (r, θ, z + dz)(r2dθ)(2dr).

However, to a first degree of approximation

Fz(r, θ, z + dz) = Fz(r, θ, z) + (∂Fz/∂z) dz

so the flux is

( Fz(r, θ, z) + (∂Fz/∂z) dz ) r(2dθ)(2dr).

Similarly, the flux through the bottom face is

−( Fz(r, θ, z) − (∂Fz/∂z) dz ) r(2dθ)(2dr).

(Note the normal in that case is −k.) Hence, the total flux for the two faces is,
after cancellation,

( 2 (∂Fz/∂z) dz ) r(2dθ)(2dr) = (∂Fz/∂z)(2dz) r(2dθ)(2dr) = (∂Fz/∂z) dV.
A comparable computation for the two side faces yields

( 2 (∂Fθ/∂θ) dθ )(2dr)(2dz) = (∂Fθ/∂θ)(2dθ)(2dr)(2dz)
= ( (1/r) ∂Fθ/∂θ )(2r dθ)(2dr)(2dz) = ( (1/r) ∂Fθ/∂θ ) dV.
Note that we had to multiply (and divide) by the extra factor of r to change the θ
increment to a distance.

The flux computation for the outer and inner faces is a bit different because the
area as well as the component Fr is a function of the radial variable r. Thus for the
outer face, the flux would be

Fr (r + dr, θ, z)((r + dr)2dθ)(2dz).

It is useful to rewrite this

((r + dr)Fr (r + dr, θ, z))(2dθ)(2dz)

and consider the quantity in parentheses as a function of r. Then making the linear
approximation, the flux is
( r Fr(r, θ, z) + (∂(rFr)/∂r) dr )(2dθ)(2dz)

and similarly for the inner face it is

−( r Fr(r, θ, z) − (∂(rFr)/∂r) dr )(2dθ)(2dz).

Thus the net flux for the outer and inner faces is

( 2 (∂(rFr)/∂r) dr )(2dθ)(2dz) = ( (1/r) ∂(rFr)/∂r )(2dr)(r 2dθ)(2dz) = ( (1/r) ∂(rFr)/∂r ) dV.

If we add up the three net fluxes, we get

( (1/r) ∂(rFr)/∂r ) dV + ( (1/r) ∂Fθ/∂θ ) dV + ( ∂Fz/∂z ) dV = ( (1/r) ∂(rFr)/∂r + (1/r) ∂Fθ/∂θ + ∂Fz/∂z ) dV.

If we now divide by dV to get the flux per unit volume, we get for the divergence

∇ · F = (1/r) ∂(rFr)/∂r + (1/r) ∂Fθ/∂θ + ∂Fz/∂z
as required.

Remark. The formula above for the divergence may be described in words as follows.
For each coordinate there is a multiplier which changes the coordinate to a distance.
In this case the r and z multipliers are 1 but the θ multiplier is r. To obtain the
divergence, multiply each component by the other two multipliers and then take the
partial with respect to the relevant coordinate. Add up the results and then divide
by the product of the multipliers.

Divergence in Spherical Coordinates Let F = Fρ uρ + Fφ uφ + Fθ uθ be a resolution
of the vector field F in terms of unit vectors appropriate for spherical coordinates.
Then formally,

∇ · F = ( uρ ∂/∂ρ + uφ (1/ρ) ∂/∂φ + uθ (1/(ρ sin φ)) ∂/∂θ ) · ( uρ Fρ + uφ Fφ + uθ Fθ ).
Again this should be computed formally using the product rule and the rules
∂uρ/∂ρ = 0,   ∂uρ/∂φ = uφ,   ∂uρ/∂θ = sin φ uθ
∂uφ/∂ρ = 0,   ∂uφ/∂φ = −uρ,   ∂uφ/∂θ = cos φ uθ
∂uθ/∂ρ = 0,   ∂uθ/∂φ = 0,   ∂uθ/∂θ = − sin φ uρ − cos φ uφ
These formulas can be derived geometrically or by using

uρ = ⟨sin φ cos θ, sin φ sin θ, cos φ⟩
uφ = ⟨cos φ cos θ, cos φ sin θ, − sin φ⟩
uθ = ⟨− sin θ, cos θ, 0⟩.

The resulting formula for the divergence, which you should check, is
∇ · F = (1/(ρ² sin φ)) ( ∂(ρ² sin φ Fρ)/∂ρ + ∂(ρ sin φ Fφ)/∂φ + ∂(ρ Fθ)/∂θ ).
Again, this must be verified by an independent argument. The reasoning for that is
pretty much the same as in the case of cylindrical coordinates, but the curvilinear
box is the one appropriate for spherical coordinates and hence somewhat more
complicated. I leave it as a challenge for you to do the appropriate flux calculations.

The same rule works for interpreting this. Use multipliers 1 for ρ, ρ for φ, and
ρ sin φ for θ for the coordinates, and then multiply each component by the other

two multipliers and then take the partial with respect to the relevant coordinate.
Add up the results and then divide by the product of the multipliers.

Curl in Cylindrical Coordinates We have

∇ × F = ( ur ∂/∂r + uθ (1/r) ∂/∂θ + k ∂/∂z ) × ( ur Fr + uθ Fθ + k Fz ).

Again, this can be calculated formally using the appropriate rules, and the result is

∇ × F = (1/r)( ∂Fz/∂θ − ∂(rFθ)/∂z ) ur + ( ∂Fr/∂z − ∂Fz/∂r ) uθ + (1/r)( ∂(rFθ)/∂r − ∂Fr/∂θ ) k.

This may be thought of as the determinant of the matrix


 
| (1/r) ur    uθ       (1/r) k |
| ∂/∂r        ∂/∂θ     ∂/∂z    |
| Fr          r Fθ     Fz      |
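Before turning to that justification, here is a quick machine spot-check of the formula (a sketch with sympy) on the field F = r uθ of Example 117, whose curl we already know to be 2k:

    import sympy as sp

    r, theta, z = sp.symbols('r theta z', positive=True)
    Fr, Ftheta, Fz = sp.Integer(0), r, sp.Integer(0)   # F = r u_theta

    curl_r = sp.diff(Fz, theta) / r - sp.diff(Ftheta, z)
    curl_theta = sp.diff(Fr, z) - sp.diff(Fz, r)
    curl_k = (sp.diff(r * Ftheta, r) - sp.diff(Fr, theta)) / r
    print(curl_r, curl_theta, curl_k)   # 0 0 2, i.e. curl F = 2k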

Again, the above formulas have been derived purely formally, so one must justify
them by another argument. To do this we calculate the components of the curl as
circulation per unit area by picking an appropriate family of curvilinear rectangles
which shrink to a point. We first calculate ∇ × F · ur .

Consider the curvilinear rectangle which starts at the point with cylindrical coordi-
nates (r, θ − dθ, z − dz), goes to (r, θ + dθ, z − dz), then to (r, θ + dθ, z + dz), then to
(r, θ −dθ, z +dz), and then finally back to (r, θ −dθ, z −dz). This curvilinear rectan-
gle is traced on a cylinder of radius r and is centered at (r, θ, z). Its dimensions are
2dz and r(2dθ). We calculate the circulation for this path which you should note
has the proper orientation with respect to the outward normal ur to the cylinder.

On any side of this curvilinear rectangle we need only consider the component of F
parallel to that side: Fθ for the segments in the θ-direction and Fz for the segments
in the z-direction.

To a first approximation, we may calculate the line integral ∫ F · dr for any side of
the curvilinear rectangle by taking its value at the center of the side and multiplying
by the length of the side. The reason this works is that to a first approximation
there is as much positive variation on one side of the center point as there is negative
variation on the other side, so the two cancel out.

This circulation is

Fθ (r, θ, z − dz)r(2dθ) + Fz (r, θ + dθ, z)(2dz)


−Fθ (r, θ, z + dz)r(2dθ) − Fz (r, θ − dθ, z)(2dz)

We have the first order approximations


Fθ(r, θ, z − dz) = Fθ(r, θ, z) − (∂Fθ/∂z) dz,
Fθ(r, θ, z + dz) = Fθ(r, θ, z) + (∂Fθ/∂z) dz,

and putting these in the first and third terms for the circulation yields the net result

−(∂Fθ/∂z)(2dz) r(2dθ) = −(∂Fθ/∂z) dA.
Similarly, the first order approximations
Fz(r, θ + dθ, z) = Fz(r, θ, z) + (∂Fz/∂θ) dθ,
Fz(r, θ − dθ, z) = Fz(r, θ, z) − (∂Fz/∂θ) dθ,

put in the second and fourth terms of the circulation yield the net result

(∂Fz/∂θ)(2dθ)(2dz) = ( (1/r) ∂Fz/∂θ ) r(2dθ)(2dz) = ( (1/r) ∂Fz/∂θ ) dA.
Combining these terms yields

−(∂Fθ/∂z) dA + ( (1/r) ∂Fz/∂θ ) dA = ( (1/r) ∂Fz/∂θ − ∂Fθ/∂z ) dA.

Now divide by the area dA of the curvilinear rectangle to obtain

∇ × F · ur = (1/r) ∂Fz/∂θ − ∂Fθ/∂z = (1/r)( ∂Fz/∂θ − ∂(rFθ)/∂z ).

The calculation of ∇ × F · uθ is very similar. Use the rectangle centered at (r, θ, z)


which starts at (r − dr, θ, z + dz), goes to (r + dr, θ, z + dz), then to (r + dr, θ, z − dz),
then to (r − dr, θ, z − dz) and finally back to (r − dr, θ, z + dz). The net result is
∇ × F · uθ = ∂Fr/∂z − ∂Fz/∂r.

The calculation of ∇ × F · k is a bit more complicated. Consider the curvilinear


‘rectangle’ centered at (r, θ, z) which starts at (r − dr, θ − dθ, z), goes to (r + dr, θ −
dθ, z), then to (r + dr, θ + dθ, z), then to (r − dr, θ + dθ, z) and finally back to
(r − dr, θ − dθ, z). Note that this is oriented properly with respect to the normal
vector k.

To a first approximation, the circulation ∫ F · dr is

Fr (r, θ − dθ, z)(2dr) + Fθ (r + dr, θ, z)(r + dr)(2dθ)


−Fr (r, θ + dθ, z)(2dr) − Fθ (r − dr, θ, z)(r − dr)(2dθ).

We have the first order approximations


Fr(r, θ − dθ, z) = Fr(r, θ, z) − (∂Fr/∂θ) dθ,
Fr(r, θ + dθ, z) = Fr(r, θ, z) + (∂Fr/∂θ) dθ,

and putting these in the first and third terms for the circulation yields the net result

−(∂Fr/∂θ)(2dθ)(2dr) = −( (1/r) ∂Fr/∂θ ) r(2dθ)(2dr) = −( (1/r) ∂Fr/∂θ ) dA.
For the second and fourth terms, the reasoning is a bit more complicated. Since both
r and Fθ change, we also need to consider the variation of r. Put H(r, θ, z) = rFθ.
Then the second and fourth terms in the circulation become

H(r + dr, θ, z)(2dθ) − H(r − dr, θ, z)(2dθ).

The relevant first order approximations are

H(r + dr, θ, z) = (r + dr) Fθ(r + dr, θ, z) = r Fθ(r, θ, z) + (∂(rFθ)/∂r) dr,
H(r − dr, θ, z) = (r − dr) Fθ(r − dr, θ, z) = r Fθ(r, θ, z) − (∂(rFθ)/∂r) dr.
Put these in the second and fourth terms for the net result
( 2 (∂(rFθ)/∂r) dr )(2dθ) = ( (1/r) ∂(rFθ)/∂r )(2dr) r(2dθ) = ( (1/r) ∂(rFθ)/∂r ) dA.
Combining yields the following first order approximation for the circulation
−( (1/r) ∂Fr/∂θ ) dA + ( (1/r) ∂(rFθ)/∂r ) dA = (1/r)( ∂(rFθ)/∂r − ∂Fr/∂θ ) dA,

and dividing by dA gives


∇ × F · k = (1/r)( ∂(rFθ)/∂r − ∂Fr/∂θ ).

We may summarize this information finally by writing

∇ × F = (1/r)( ∂Fz/∂θ − ∂(rFθ)/∂z ) ur + ( ∂Fr/∂z − ∂Fz/∂r ) uθ + (1/r)( ∂(rFθ)/∂r − ∂Fr/∂θ ) k

as we claimed above.

Curl in Spherical Coordinates Formally, we have

∇ × F = ( uρ ∂/∂ρ + uφ (1/ρ) ∂/∂φ + uθ (1/(ρ sin φ)) ∂/∂θ ) × ( uρ Fρ + uφ Fφ + uθ Fθ ).

The result of working this out is

∇ × F = (1/(ρ² sin φ))( ∂(ρ sin φ Fθ)/∂φ − ∂(ρFφ)/∂θ ) uρ
+ (1/(ρ sin φ))( ∂Fρ/∂θ − ∂(ρ sin φ Fθ)/∂ρ ) uφ + (1/ρ)( ∂(ρFφ)/∂ρ − ∂Fρ/∂φ ) uθ

which may also be expressed as the determinant of the matrix

| (1/(ρ² sin φ)) uρ    (1/(ρ sin φ)) uφ    (1/ρ) uθ     |
| ∂/∂ρ                 ∂/∂φ                ∂/∂θ         |
| Fρ                   ρ Fφ                ρ sin φ Fθ   |

The analysis is pretty much the same as in the case for cylindrical coordinates. The
justification uses curvilinear rectangles appropriate for spherical coordinates.

Exercises for 5.13.

1. Determine the form of the Laplacian operator ∇2 = ∇·∇ in polar coordinates.

2. Do the same for cylindrical coordinates.

3. Do the same for spherical coordinates.


Part II

Differential Equations

Chapter 6

First Order Differential Equations

6.1 Differential Forms

Differential forms are an alternate way to talk about vector fields, and for some ap-
plications they are the preferred way. They are used frequently in thermodynamics
and also in the study of first order differential equations.

We shall concentrate on the case of forms in R2 . There is a corresponding theory


for Rn with n > 2, but it is much more difficult.

Let F = ⟨F1, F2⟩ be a plane vector field, and let C be a path in the plane. Unless F
is conservative, the line integral W = ∫_C F · dr is a path dependent quantity, so we
might write symbolically W = W (C). You may also have encountered path depen-
dent quantities in your chemistry course when studying thermodynamics. Namely, if
a gas satisfies an equation of state of the form f (p, v, T ) = 0, it is possible to choose
two of the three variables to be independent variables and (at least locally) solve
for the third in terms of those. Suppose, for example, that v, T are the independent
variables.

In this context, we can think of the path C as representing a sequence of changes


of state of the gas leading from the state characterized by (v1 , T1 ) to the state
characterized by (v2 , T2 ). Such a sequence of changes of state will yield a change
in the heat q in the system, and this change will generally depend on the path C.
For ‘infinitesimal changes’ dv, dT , the first law of thermodynamics asserts that the
change in q is given by

du − p dv


where u = u(p, v, T ) is a function of the state of the system called its internal energy.
du can be expressed in terms of dp, dv and dT, and since dp can be expressed in
terms of dv and dT , the above expression for the change in q can be put ultimately
in the form
M (v, T )dv + N (v, T )dT.
Then the total change in the heat q along the path C will be the line integral
∫ ∫
du − p dv = M dv + N dT.
C C

As mentioned above, it depends on the path C.

Leaving the thermodynamics to your chemistry professors, let us consider the basic
mathematical situation. Given a pair of functions M (x, y), N (x, y), the expression

M (x, y) dx + N (x, y) dy

is called a differential form. We shall always assume that the component functions
M and N are as smooth as we need them to be on the domain of the form. You
can think of the form as giving the change of some quantity for a displacement
dr = ⟨dx, dy⟩ in R². Associated to such a differential form is the vector field
F = ⟨M, N⟩, and the form is just the expression F · dr appearing in line integrals
for F. Similarly, given a vector field F = ⟨M, N⟩, we may consider the differential
form M dx + N dy. Thus, the two formalisms, vector fields and differential forms,
are really just alternate notations for the same concept. This may seem to add
needless complication, but there are situations, e.g., in thermodynamics, where it is
easier to think about a problem in the language of forms than it is in the equivalent
language of vector fields.

We can translate many of the notions we encountered in studying plane vector fields
to the language of differential forms.

Recall first that a plane vector field F = ⟨M, N⟩ is conservative if and only if it is
the gradient of a scalar function F = ∇f. Then

M dx + N dy = F · dr = ∇f · dr.

You should recognize the expression on the right as the differential of the function

df = ∇f · dr = (∂f/∂x) dx + (∂f/∂y) dy.

Thus, ⟨M, N⟩ is conservative if and only if the associated form M dx + N dy equals
the differential df of a function. Such forms are called exact. For exact forms, the
path independence property looks particularly simple:

∫_C (M dx + N dy) = ∫_C df = f(end of C) − f(start of C).

Recall the screening test for the field F = ⟨M, N⟩

∂N/∂x = ∂M/∂y    (75)

which in some (but not all) circumstances will tell us if the field is conservative. We
say that the corresponding differential form M dx + N dy is closed if its components
satisfy equation (75).

We now translate some of the things we learned about vector fields to the language
of forms.

(a) “Every conservative field passes the screening test (75)” becomes “Every exact
form is closed”.
(b) “The field (1/r)uθ satisfies the screening test but is not conservative” becomes
“(−y/(x² + y²)) dx + (x/(x² + y²)) dy is closed but not exact”.

(c) “If the domain of a field is simply connected, then the field is conservative if
and only if it passes the screening test” becomes “If the domain of a form is simply
connected, then the form is exact if and only if it is closed”.

Note that because of (c), a closed form M dx+N dy defined on some open set in the
plane is always locally exact. That is, for any point in its domain, we can always
choose a small neighborhood, i.e., a disk or a rectangle, and a function f defined
on that neighborhood such that

M dx + N dy = df

on that neighborhood. However, we can’t necessarily find a single function which


will work everywhere. If we can find an f which works everywhere in the domain of
the form, the form would be exact, but we might say ‘globally exact’ to emphasize
the distinction.
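These checks are easy to mechanize. Here is a minimal sketch assuming the SymPy library (the helper names and the sample form are my own choices, not from the text); it applies the screening test and, when the form is exact on all of R², recovers a potential f with df = M dx + N dy.

import sympy as sp

x, y = sp.symbols('x y')

def is_closed(M, N):
    # Screening test (75): M dx + N dy is closed iff N_x = M_y.
    return sp.simplify(sp.diff(N, x) - sp.diff(M, y)) == 0

def potential(M, N):
    # Integrate M in x, then choose the 'constant' C(y) so f_y matches N.
    # Valid when the form is exact.
    f = sp.integrate(M, x)
    C = sp.integrate(sp.simplify(N - sp.diff(f, y)), y)
    return f + C

M, N = 2*x*y, x**2          # the exact form d(x^2 y)
print(is_closed(M, N))      # True
print(potential(M, N))      # x**2*y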

Exercises for 6.1.

1. Tell whether or not each of the differential forms M (x, y)dx+N (x, y)dy below
is exact. If it is exact, find a function f such that df = M dx + N dy. (In each
case the domain of the form is R2 , so every closed form is exact.)
(a) (2x sin y + y³e^x) dx + (x² cos y + 3y²e^x) dy
(b) ((1/2)y² − 2ye^x) dx + (y − e^x) dy
(c) (2xy³) dx + (3x²y²) dy

2. The variables need not be called x and y. In succeeding chapters, we will
generally use t and y. In each of the following cases, check to see if the given
form is exact, and if it is, find a function f of which it is the differential.
(a) (3ty + y²) dt + (t² + ty) dy
(b) x²e^{tx} dt + (1 + xt)e^{tx} dx.

3. A differential form P (x, y, z) dx + Q(x, y, z) dy + R(x, y, z) dz in R3 is called


closed if

∂R/∂y = ∂Q/∂z,   ∂R/∂x = ∂P/∂z,   and   ∂Q/∂x = ∂P/∂y.
It is called exact if it is of the form df for an appropriate function f (x, y, z).
What do each of these statements say about the corresponding vector field
F(x, y, z) = P i + Qj + Rk? Give a condition on a domain E in R3 such that
every closed form on E will be exact.

4. Check that each of the following differential forms is closed and find a function
f such that it equals df .
(a) yz dx + xz dy + xy dz.
(b) (y + x cos(xt)) dt + t dy + t cos(xt) dx.

5. Show that every differential form of the form M(t) dt + N(y) dy is closed.
Show that it is also exact because it is the differential of the function f(t, y) =
A(t) + B(y) where A(t) and B(y) are indefinite integrals

A(t) = ∫ M(t) dt and B(y) = ∫ N(y) dy.

6.2 Using Differential Forms

We want to develop general methods for solving first order differential equations. In
most applications, the independent variable represents time, so instead of calling the
variables x and y, we shall call them t and y with t being the independent variable
and y being the dependent variable. Then a first order differential equation can
usually be put in the form

dy/dt = f(t, y)    (76)
where f is some specified function. A solution of (76) is a function y = y(t) defined
on some real t-interval such that

y′(t) = f(t, y(t))

is valid for each t in the domain of the solution function y(t).



Example 121 Consider dy/dt = −(2t + y)/(t + 2y). We shall ‘solve’ this equation
by some formal calculations using differential forms, and later we shall try to see
why the calculations should be expected to work.

First, cross multiply to obtain

(t + 2y)dy = −(2t + y)dt

or
(2t + y)dt + (t + 2y)dy = 0. (77)
Consider the differential form appearing on the left of this equation. It is closed
since

∂(t + 2y)/∂t = 1 = ∂(2t + y)/∂y.
(Don’t get confused by the fact that what was previously called x is now called
t.) Since the domain of this form is all of R² (the t, y-plane), and that is simply
connected, the form is exact. Thus, it is the differential of a function, df = (2t +
y) dt + (t + 2y) dy. We may find that f by the same method we used to find a function
with a specified conservative vector field as gradient. After all, it is just the same
theory in other notation. Thus, since df = (∂f/∂t) dt + (∂f/∂y) dy, we want

∂f/∂t = 2t + y,   ∂f/∂y = t + 2y.
Integrating each of these, we obtain

f(t, y) = t² + yt + C(y),   f(t, y) = ty + y² + D(t),

and comparing, we see that we may take C(y) = y², D(t) = t². Thus,

f(t, y) = t² + ty + y²

will work. (Check it by taking its differential!) Thus, equation (77) may be rewritten

df = d(t² + ty + y²) = 0

from which we conclude

f(t, y) = t² + ty + y² = c

for some constant c. We obtain this way an infinite collection of level curves of the
function f as a “solution” to the original differential equation. In order to determine
a unique solution, we must specify a point (t0, y0) lying on the level curve or impose
some other equivalent condition. This corresponds to requiring that the solution
of the differential equation satisfy an initial condition y(t0) = y0. (Refer back to
Chapter II where the issue of initial conditions was first discussed.) For example,
if we know that y = 2 when t = 1, we have

1² + (1)(2) + 2² = c or c = 7.

Hence, the corresponding “solution” is t² + ty + y² = 7. We can solve this for y in
terms of t by applying the quadratic formula to the equation y² + ty + (t² − 7) = 0.
This yields

y(t) = (−t ± √(t² − 4(t² − 7)))/2 = (−t ± √(28 − 3t²))/2,

but only the plus sign gives a solution satisfying the condition y = 2 when t = 1.
Hence the solution we end up with is y(t) = (1/2)(−t + √(28 − 3t²)).
Note that not every value of c yields a solution which makes sense. Thus, requiring
y = 0 when t = 0 yields c = 0 or t² + yt + y² = 0, and the locus of this equation
in R² is the point (0, 0). (Why?) This is not a total surprise, since the right hand
side of the original differential equation dy/dt = −(2t + y)/(t + 2y) is undefined at (0, 0).
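As a check, the computation in Example 121 can be reproduced symbolically. A minimal sketch assuming SymPy (the variable names are mine):

import sympy as sp

t, y = sp.symbols('t y')

f = t**2 + t*y + y**2          # candidate potential from Example 121

# Its differential should reproduce (2t + y) dt + (t + 2y) dy.
print(sp.diff(f, t))           # 2*t + y
print(sp.diff(f, y))           # t + 2*y

# The level curve through (1, 2): f = 7; solve for y as a function of t.
c = f.subs({t: 1, y: 2})       # 7
print(sp.solve(sp.Eq(f, c), y))   # the branches -t/2 ± sqrt(28 - 3*t**2)/2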
Analysis of the Solution Process We went from an equation of the form

dy/dt = f(t, y)    (78)

to one of the form

M dt + N dy = 0    (79)

where f(t, y) = −M(t, y)/N(t, y). (We could take M = −f and N = 1, but in many cases,
as in Example 121, f is actually a quotient.) The two equations are not quite the
same. A solution of (78) is a collection of functions y = y(t), one for each initial
condition, satisfying the differential equation, while a solution of (79) is a family of
level curves f(t, y) = c in R². The relation between the two is that the graph of
each y(t) is a subset of some one of the level curves.

In general, I shall say that a smooth curve C in the plane is a solution curve to
equation (79) if at each point on the curve, the equation is true for each displacement
vector ⟨dt, dy⟩ which is tangent to the curve. Note that at any point in the domain of
the differential form, the equation (79) determines the ratio dy/dt, so it determines
the vector ⟨dt, dy⟩ up to multiplication by a scalar. That means that the line along
which the vector points is determined. (There is some fiddling you have to do if
one or the other of the coefficients M, N vanishes. However, if they both vanish at
a point, the argument fails and ⟨dt, dy⟩ is not restricted at all.) A general solution
of (79), then, will be a family of such curves, one passing through each point of the
domain of the differential form—except possibly where both coefficients vanish.

If y = y(t) is a solution of the differential equation (78), then, at each point of its
graph, we also have the relation

dy = y′(t) dt = f(t, y) dt = −(M(t, y)/N(t, y)) dt

so the graph is at least part of a solution curve to

M(t, y) dt + N(t, y) dy = 0.    (79)

Conversely, the reasoning can be reversed, so that any section of a solution curve to
(79) which happens to be the graph of a function will be a solution to the original
differential equation.

Example 121 illustrates this quite well. The curve t² + ty + y² = 7 is in fact an
ellipse in the t, y-plane, but we can pick out a segment of that ellipse by taking
y(t) = (1/2)(−t + √(28 − 3t²)) for 28 − 3t² > 0 (i.e., −√(28/3) < t < √(28/3)), and that
yields a solution of the original differential equation satisfying y = 2 when t = 1.

There are differences between curves satisfying M dt + N dy = 0 and graphs of
functions satisfying dy/dt = f(t, y). First of all, a solution curve for the differential
form could be a vertical line, and that certainly can’t be the graph of a function.
Secondly, at a point where N = 0 but M ≠ 0, we must have dt = 0. At such a point
the proposed tangent line to the curve is vertical, and the curve might not look like
the graph of a function. However, this is quite reasonable, since at such a point
the differential equation dy/dt = −M/N has a singularity on the right. Finally, at
points at which M = N = 0, all bets are off since no unique tangent line is specified.
Such points are called critical points of the differential form.

Integrating Factors The strategy suggested in Example 121 for solving an equation
of the form

M dt + N dy = 0

is to look for a function f so that df = M dt + N dy, and then to take the level
curves of that function. Of course, this will only be feasible if the differential form
is exact, so you may wonder what to try if it is not exact. The answer is to try to
make it exact by multiplying by an appropriate function. Such a function is called
an integrating factor. Thus, we want to look for a function µ(t, y) such that

µ(M dt + N dy) = (µM) dt + (µN) dy

is exact.

Example 122 We shall try to solve y dt − t dy = 0. Note that

∂(−t)/∂t = −1 ≠ 1 = ∂y/∂y

so the form is not closed and hence it cannot be exact. We shall try to find µ(t, y)
so that

(µy) dt − (µt) dy

is exact. Since every exact form is closed, we must have

∂(−µt)/∂t = ∂(µy)/∂y or
−(∂µ/∂t)t − µ = (∂µ/∂y)y + µ or
(∂µ/∂t)t + (∂µ/∂y)y = −2µ.    (80)

At this point, it is not clear how to proceed. Equation (80) alone does not completely
determine µ, and indeed that is no surprise because in general there may be many
possible integrating factors. Of course, we need only one. To find a µ we proceed
in effect by making educated guesses. In particular, we shall look for an integrating
factor which depends only on t, i.e., we assume µ = µ(t). Of course, this may not
work, but we have nothing to lose by trying. With this assumption, equation (80)
becomes

(dµ/dt)t = −2µ or
dµ/µ = −2 dt/t which yields
ln |µ| = −2 ln |t| + c.

Since we only need one integrating factor, we may take c = 0, so we get

ln |µ| = ln |t|^{−2} or
µ = ±1/t².

Again, since we only need one integrating factor, we may as well take µ = 1/t².

Having done all that work, you may think you are done, but of course, all we now
know is that

(1/t²)(y dt − t dy) = (y/t²) dt − (1/t) dy

is probably exact. Hence, it now makes sense to look for a function f(t, y) such that
df = (1/t²)(y dt − t dy). If you remember the quotient rule, you will see immediately
that f(t, y) = −y/t is just such a function. However, let’s be really dumb and
proceed as if we hadn’t noticed that. We use the usual method to find f. Integrate
the equations

∂f/∂t = y/t²,   ∂f/∂y = −1/t

to obtain

f = −y/t + C(y),   f = −y/t + D(t).
Clearly, we can take C = D = 0, so we do obtain f (t, y) = −y/t as expected.
Hence, the general solution curves are just the level curves of this function

f (t, y) = −y/t = c or y = −ct.

Clearly, it doesn’t matter what we call the constant, so we can drop the sign and
write the general solution as
y = ct.
This is the family of all straight lines through the origin with the exception of the
y-axis.

There is one problem with the above analysis. What we actually obtained were
solution curves for the equation

µ(M dt + N dy) = 0

rather than the original equation M dt + N dy = 0. If, for example, µ(t, y) = 0
has some solutions, that locus would be added as an additional extraneous solution
which is not a solution of the original equation. In this example, µ = 1/t² never
vanishes, so we don’t have any extraneous solutions. However, it has a singularity
at t = 0, so its domain is smaller than that of the original form y dt − t dy (which is
defined everywhere in R².) Thus, it is possible that the curve t = 0, i.e., the y-axis, is
a solution curve for the original problem, which got lost when the integrating factor
was introduced. In fact that is the case. Along the line t = 0 we have dt = 0, so
y dt − t dy = 0.

Hence, the general solution is the set of all straight lines through the origin. Note
also that the origin itself is a critical point of y dt − t dy = 0.

In general, there are many possible integrating factors for a given differential form.
Thus, in this example, µ = 1/(yt) is also an integrating factor and results in the
equation

(1/(yt))(y dt − t dy) = dt/t − dy/y = 0
or dt/t = dy/y

which is what we would have obtained by trying separation of variables. That would
be the most appropriate method for this particular problem, but we did it the other
way in order to illustrate the method in a simple case.
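If you like, the trial of a µ depending on t alone can be mechanized: when (∂M/∂y − ∂N/∂t)/N turns out to be a function of t only, µ(t) = e^{∫(∂M/∂y − ∂N/∂t)/N dt} works. The sketch below assumes SymPy and encodes just that special case (the helper name is mine):

import sympy as sp

t, y = sp.symbols('t y', positive=True)

def integrating_factor_in_t(M, N):
    # Defect of the screening test; zero would mean already closed.
    ratio = sp.simplify((sp.diff(M, y) - sp.diff(N, t)) / N)
    if not ratio.has(y):                  # depends on t alone?
        return sp.exp(sp.integrate(ratio, t))
    return None                           # this particular guess fails

# Example 122: y dt - t dy = 0, i.e. M = y, N = -t.
print(integrating_factor_in_t(y, -t))     # t**(-2), as found above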

If you thoroughly understand the above example, you will be able to work many
problems, but it is worthwhile also describing the general case. To see if the form

(µM) dt + (µN) dy

is closed, apply the screening test

∂(µN)/∂t = ∂(µM)/∂y

which yields

(∂µ/∂t)N + µ(∂N/∂t) = (∂µ/∂y)M + µ(∂M/∂y) or

(1/µ)[(∂µ/∂t)N − (∂µ/∂y)M] = ∂M/∂y − ∂N/∂t.    (81)

Unfortunately, it is not generally possible to solve this equation! This is quite dis-
appointing because it also means that there is no general method to solve explicitly

the equation M dt + N dy = 0 or the related equation dy/dt = f (t, y). There are
however many special cases in which the method does work. These are discussed in
great detail in books devoted to differential equations.

Typically one tries integrating factors depending only on one or the other of the
variables or perhaps on their sum. This is basically a matter of trial and error.

Example 123 Consider

(t + e^y) dt + ((1/2)t² + 2te^y) dy = 0.

You should apply the screening test to check that the differential form on the left
is not closed. We look for an integrating factor by considering equation (81) which
in this case becomes

(1/µ)[(∂µ/∂t)((1/2)t² + 2te^y) − (∂µ/∂y)(t + e^y)] = ∂(t + e^y)/∂y − ∂((1/2)t² + 2te^y)/∂t

or

(1/µ)[(∂µ/∂t)((1/2)t² + 2te^y) − (∂µ/∂y)(t + e^y)] = e^y − (t + 2e^y) = −(t + e^y).

(This was derived by substituting in equation (81), but you would have been just as
well off deriving it from scratch by applying the screening test directly to the form
µ(t + e^y) dt + µ((1/2)t² + 2te^y) dy. There is no need to memorize equation (81).) It is
apparent that assuming that ∂µ/∂y = 0 (i.e., µ depends only on t) leads nowhere.
Try instead assuming ∂µ/∂t = 0. In that case, the equation becomes

−(1/µ)(dµ/dy)(t + e^y) = −(t + e^y) or
(1/µ)(dµ/dy) = 1.

(Note that the partial derivative becomes an ordinary derivative since we have
assumed µ is a function only of y.) This can be solved easily to obtain as a possible
integrating factor

µ = e^y.
I leave the details to you.

To continue, we expect that the form

e^y(t + e^y) dt + e^y((1/2)t² + 2te^y) dy

is exact. Hence, we look for f(t, y) such that

∂f/∂t = te^y + e^{2y},   ∂f/∂y = (1/2)t²e^y + 2te^{2y}.

Integrating the first equation with respect to t and the second with respect to y
yields

f(t, y) = (1/2)t²e^y + te^{2y} + C(y),   f(t, y) = (1/2)t²e^y + te^{2y} + D(t).

Comparing, we see that C = D = 0 will work, so we obtain f(t, y) = (1/2)t²e^y + te^{2y}.
Hence, the general solution is

(1/2)t²e^y + te^{2y} = C.
2

Notice that in this case the integrating factor neither vanishes nor has any sin-
gularities, so we don’t have to worry about adding extraneous solutions or losing
solutions.

Note that in the above analysis we did not worry too much about the geometry of
the domain. We know that it is not generally true that a closed form is exact, so
even if we find an integrating factor, we may still not be able to find an f defined
on the entire domain of the original differential form. However, this issue is not
usually worth worrying about because there are so many other things which can
go wrong in applying the method. In any case, we know that closed forms are
locally exact, so that in any sufficiently small neighborhood of a point, we can always
find an f which works in that neighborhood. Since, in general, solution methods for
differential equations only give local solutions, this may be the best we can hope
for. Extending local solutions to a global solution is often a matter of great interest
and great difficulty.

Exercises for 6.2.

1. Find a general solution of each of the following equations. (The differential
forms are closed.)
(a) e^t cos y dt + (2y − e^t sin y) dy = 0.
(b) (x² − y²) dx + (y² − 2xy) dy = 0.

2. (a) By rewriting as a problem about differential forms, find a general solution
of the differential equation

dy/dt = (2t − 6y)/(6t + 3y).

You need not express the answer in the form y = y(t). A relation between t
and y will suffice.
(b) What is the form of the solution in (a) if y(1) = 2?
(c) Express the solution found in part (b) in the form y = y(t). What should
the domain of this function be?

3. Find a general solution of each of the following equations. (You will need to
find integrating factors.)
(a) (1 + 6(t/y)²) dt + 2(t/y) dy = 0.
(b) (y + e^{−x}) dx + dy = 0.
(c) (1 + t + y)dt − (1 + t)dy = 0.
4. Solve dx/dt = t − x given x = 2 when t = 0. Do this by rewriting as a problem
in differential forms and finding an appropriate integrating factor.

6.3 First Order Linear Equations

The simplest first order equations are the linear equations, those of the form

dy/dt + a(t)y = b(t),    (82)

where a(t) and b(t) are known functions. Equation (82) may also be written in the
form dy/dt = f(t, y) by taking f(t, y) = b(t) − a(t)y.
dt
Examples

dy/dt = ky
dy/dt − ky = j
dy/dt + (1/t)y = 1/t²

The first equation is that governing exponential growth or decay, and it was solved
in Chapter II. The general solution is y = Ce^{kt}.

An example of a non-linear first order equation is

dy/dt + y² = t.

Equation (82) can be put in the form

(a(t)y − b(t)) dt + dy = 0

and solved by the methods of the previous section. You should try it as an exercise
to see if you understand those methods. However, we shall use another method
which is easier to generalize to second order linear equations.

First consider the so-called homogeneous case where b(t) = 0, i.e., we want to solve

dy/dt + a(t)y = 0.

This may be done fairly easily by separation of variables.

dy/y = −a(t) dt

ln |y| = −∫ a(t) dt + c

where ∫ a(t) dt denotes any antiderivative

|y| = e^{−∫ a(t)dt + c} = e^{−∫ a(t)dt} e^c.

Put C = ±e^c, depending on the sign of y, so the general solution takes the form

y = Ce^{−∫ a(t)dt},    (83)

where the constant C is determined as usual by specifying an initial condition. Of


course, you could rederive this in each specific case if you remember the method,
but you will probably save yourself considerable time by memorizing formula (83).

Example 124 Consider

dy/dt + (1/t)y = 0 for t > 0.

Note that in this case t = 0 is a singularity of the coefficient a(t). Hence, we would
not expect the solution for t < 0 to have anything to do with the solution for t > 0.

We have

∫ a(t) dt = ∫ dt/t = ln t.

(Of course, you could find the most general indefinite integral or antiderivative by
adding ‘+c’, but we only need one antiderivative.) Hence, according to formula
(83), the general solution is

y = Ce^{−ln t} = C/t.

The graphs for some values of C are sketched below



Note how one specific solution may be picked out by specifying an initial condition
y(t0) = y0.

Consider next the inhomogeneous case where b(t) ≠ 0, i.e.,

dy/dt + a(t)y = b(t).

Denote by

h(t) = e^{−∫ a(t)dt}

the solution of the corresponding homogeneous equation with C = 1. We try to


find a solution of the inhomogeneous equation of the form y = u(t)h(t) where u(t)
is a function to be determined. (This method is called “variation of parameters”.)
We have

d(uh)/dt + auh = b,
(du/dt)h + u(dh/dt) + auh = b,
(du/dt)h + u(dh/dt + ah) = b.

Since h was chosen to be a solution of the homogeneous equation, the quantity in
parentheses vanishes. Hence, the above equation reduces to

du/dt = b/h.
This is easy to solve. The general solution is

u = ∫ (b(t)/h(t)) dt + C

where again the indefinite integral notation means that one specific antiderivative
is selected. Since y = hu, we obtain the following general solution of the inhomo-
geneous equation

y = h(t) ∫ (b(t)/h(t)) dt + C h(t).    (84)

Note the form of this equation. The term Ch(t) = Ce^{−∫ a(t)dt} is a general solution
of the homogeneous equation. The first term is the particular solution of the in-
homogeneous equation obtained by setting C = 0. This is a theme which will be
repeated often in what follows:

A general solution of the inhomogeneous equation is obtained by adding a general


solution of the homogeneous equation to a particular solution of the inhomogeneous
equation.

Example 125 Consider

dy/dt + (1/t)y = 1/t² for t > 0.

We determined h(t) = 1/t in the previous example where we solved the homogeneous
equation. Moreover,

∫ ((1/t²)/(1/t)) dt = ∫ dt/t = ln t

where as suggested we pick one antiderivative. Then from equation (84), the general
solution is

y = (ln t)/t + C/t.

It is important to note that equation (84) provides us, at least in principle, with a
complete solution to the problem, a situation which will not arise often in our study
of differential equations.
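Formula (84) is easy to sanity-check on Example 125 by machine. A minimal sketch assuming SymPy (dsolve and checkodesol are standard SymPy routines; the variable names are mine):

import sympy as sp

t = sp.symbols('t', positive=True)
y = sp.Function('y')

# Example 125: dy/dt + (1/t) y = 1/t^2.
ode = sp.Eq(y(t).diff(t) + y(t)/t, 1/t**2)

sol = sp.dsolve(ode)
print(sol)                       # y(t) = (C1 + log(t))/t
print(sp.checkodesol(ode, sol))  # (True, 0): the solution checks out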

Sometimes Guessing is Okay In general, the solution of a first order differential
equation is uniquely determined if an initial condition is specified. (We have seen
how this works in several examples, and we shall discuss the theory behind it later.)
Hence, if you are able to guess a solution that works, that is a perfectly acceptable
way to proceed. We can adapt that principle to finding the general solution of a
first order linear equation as follows. We may write the general solution (84) in the
form

y = p(t) + Ch(t)

where p(t) = h(t) ∫ (b(t)/h(t)) dt is the particular solution obtained by setting C = 0.
Suppose we can guess some particular solution p1(t) (which might be different from
p(t).) Then, for some value of the constant C1,

p1(t) = p(t) + C1 h(t) or p(t) = p1(t) − C1 h(t).

Hence, the general solution may be written

y = p1(t) − C1 h(t) + Ch(t) = p1(t) + (C − C1)h(t) = p1(t) + C′h(t)

where C′ = C − C1 is also an arbitrary constant. What this says is that the general
solution of the inhomogeneous equation is the sum of any particular solution and
the general solution of the homogeneous equation.

Example 126 Consider the equation

dy/dt − ky = I    (85)

where k and I are constants. The homogeneous equation

dy/dt − ky = 0 or dy/dt = ky

has the general solution y = Ce^{kt}. We try to find a particular solution of the inho-
mogeneous equation by guessing. The simplest thing to try is a constant solution
y = A. Substituting in (85) yields

0 − kA = I or A = −I/k.

Hence,

y = −I/k + Ce^{kt}

is a general solution of the inhomogeneous equation. Note that this is a bit simpler
than using equation (84).
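It is a good habit to substitute a guessed solution back into the equation. A one-line check, assuming SymPy (symbol names mine):

import sympy as sp

t, k, I, C = sp.symbols('t k I C')   # this I is just a symbol, not sqrt(-1)

y = -I/k + C*sp.exp(k*t)             # the guessed general solution of (85)

# Residual of dy/dt - k y - I; simplifying to 0 confirms the guess.
print(sp.simplify(sp.diff(y, t) - k*y - I))   # 0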

Using Definite Integrals in the Formulas The notation ∫ a(t) dt stands for any
antiderivative of the function a(t). Thus

∫ t dt = (1/2)t² and ∫ t dt = (1/2)t² + 1

are both true statements. This ambiguity can lead to confusion. To avoid this, we
may on occasion use definite integrals with variable upper limits. Thus

∫_{t0}^{t} a(s) ds

is definitely an antiderivative for a(t) because the fundamental theorem of calculus
tells us that

(d/dt) ∫_{t0}^{t} a(s) ds = a(t).

With this notation, we may write

h(t) = e^{−∫_{t0}^{t} a(s)ds}.

Note that h(t0) = e⁰ = 1. In addition, we may write the general solution of the
inhomogeneous equation

y = h(t) ∫_{t0}^{t} (b(s)/h(s)) ds + Ch(t).

Note that y(t0) = 0 + Ch(t0) = C, so it may also be written

y = h(t) ∫_{t0}^{t} (b(s)/h(s)) ds + y0 h(t)

where y0 = y(t0).

Note that we have been careful to use a ‘dummy variable’ s in the integrand. Some-
times people are a bit sloppy and just use t both for the variable of integration and
the upper limit, but that is of course wrong.
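The definite-integral form is also the right one to use numerically when the antiderivatives are not elementary. Here is a sketch assuming SciPy's quad routine (the coefficients a(t) = sin t, b(t) = 1 and the function names are illustrative choices of mine):

import numpy as np
from scipy.integrate import quad

a = np.sin                   # a(t)
b = lambda s: 1.0            # b(t)
t0, y0 = 0.0, 1.0

def h(t):
    # h(t) = exp(-integral of a from t0 to t)
    val, _ = quad(a, t0, t)
    return np.exp(-val)

def y(t):
    # y(t) = h(t) * (integral of b/h from t0 to t) + y0 * h(t)
    val, _ = quad(lambda s: b(s) / h(s), t0, t)
    return h(t) * val + y0 * h(t)

print(y(2.0))   # the solution of y' + (sin t) y = 1, y(0) = 1, at t = 2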

Exercises for 6.3.

1. Find a general solution for each of the following linear differential equations.
Either use the general formula, or find the general solution of the corresponding
homogeneous equation and guess a particular solution.
(a) dy/dt − (sin t)y = 0.
(b) dx/dt − (2t/(1 + t²))x = 1.
(c) dy/dx = (1/x)y + x.
2. Solve each of the following initial value problems. Use the general formula or
guess as appropriate.
(a) dp/dt + 2p = 6 given p(0) = 4.
(b) t dy/dt + 2y = 3t given y(1) = 4.
(c) (1 + t²) dx/dt + 2tx = e^t given x(0) = 0.
3. The charge q on a capacitor of capacitance C in series with a resistor of
resistance R and battery with voltage V satisfies the differential equation

R dq/dt + q/C = V.

Assume q(0) = 0. Find lim_{t→∞} q(t).

4. In a certain chemical reaction, the rate of production of substance X follows
the rule

dx/dt = −.001x + .01y

where x is the amount of substance X and y is the amount of a catalyst Y.
If the catalyst also decomposes according to the rule y = 2e^{−.002t}, find x(t)
assuming x(0) = 0.

5. Using the methods of the previous section, find an integrating factor µ = µ(t)
for
(b(t) − a(t)y)dt − dy = 0
and then solve the resulting equation. You should get the general solution of
the inhomogeneous equation derived in this section.

6.4 Applications to Population Problems

With the little you now know, it is possible to solve a large number of problems both
in natural and social sciences. To do so, you need to construct a model of reality
which can be expressed in terms of a differential equation. While the mathematics
that follows may be impeccable, the conclusions will still only be as good as the
model. If the model is based on established physical law, as for example the theory
of radioactive decay, the predictions will be quite accurate. On the other hand,
in some other applications, there is little or no reason to accept the model, so the
results will be of questionable value.

Population Growth with Immigration Let p = p(t) denote the number of
individuals in a population. (The individuals could be people, bacteria, radioactive
atoms, etc.) One common assumption about population growth is that the number
of births per unit time is proportional to the size of the population, i.e., the rate of
change of p(t) due to births is of the form bp(t) for some constant b called the birth
rate. Similarly, it is assumed that the number of deaths per unit time is of the form
dp(t) where d is another constant called the death rate. We may combine these and
write symbolically

dp/dt = bp − dp = (b − d)p = kp

where k = b − d is a constant combining both the birth and death rates. The
solution to this equation is p = p0 e^{kt} where p0 = p(0). Thus according to this
model, population grows exponentially if k > 0 and declines exponentially if k < 0.
Suppose in addition to natural growth due to births and deaths, we also have
immigration taking place at a constant rate I. (We can assume I is the net effect
of immigration—into the population—and emigration—out of the population. It
could even be negative in the case of net outflow.) The differential equation becomes

dp/dt = kp + I
or dp/dt − kp = I.

We solved this problem (with slightly different notation) in the previous section.
The general solution is

p = −I/k + Ce^{kt}.

If p = p0 at t = 0, we have

p0 = −I/k + C or C = p0 + I/k.

Hence, the solution can also be written

p = −I/k + (p0 + I/k)e^{kt}.

Note that in this model if k > 0 and p0 + I/k > 0, then population grows exponen-
tially, even if I < 0, i.e., even if there is net emigration.
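The closed form makes numerical experiments painless. A sketch in plain Python (the parameter values are illustrative choices of mine, not from the text):

import math

def population(t, p0, k, I):
    # p(t) = -I/k + (p0 + I/k) e^{kt}, derived above
    return -I/k + (p0 + I/k) * math.exp(k*t)

# 2% growth with net emigration of 0.5 per year (say, in millions).
k, I, p0 = 0.02, -0.5, 100.0
for year in (0, 50, 100):
    print(year, round(population(year, p0, k, I), 1))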

The idea that natural populations grow exponentially was first popularized by
Thomas Malthus (1766-1834), and it is a basic element of modern discussions of
population growth. Of course, the differential equation model ignores many com-
plicated factors. Populations come in discrete chunks and cannot be governed by a
differential equation in a continuous variable p. In addition, most biological popu-
lations involve two sexes, only one of which produces births, and individuals vary
in fertility due to age and many other factors. However, even when all these factors
are taken into consideration, the Malthusian model seems quite accurate as long as
the birth rate less the death rate is constant.

The Logistic Law The Malthusian model (with k > 0) leads to population growth
which fills all available space within a fairly short time. That clearly doesn’t happen,
so limiting factors somehow come into play and change the birth rate so it no
longer is constant. Various models have been proposed to describe real populations
more accurately. One is the so-called logistic law. Assume I = 0—i.e., there is
no net immigration or emigration—and that the ‘birth rate’ has the form k =
a − bp where a and b are positive constants. Note that there is very little thought
behind this about how populations behave. We are just choosing the simplest non-
constant mathematical form for k. About the only thought involved is the decision
to make the constant term positive and the coefficient of p negative. That is, we
assume that as population grows, the birth rate goes down, although no mechanism
is suggested for why that might occur. With these assumptions, the differential
equation becomes

dp/dt = (a − bp)p.

This is not a linear equation, but it can be solved by separation of variables.

dp/((a − bp)p) = dt or ∫ dp/((a − bp)p) = t + c.

Assume p > 0 and a − bp > 0. In effect, that means we are looking for solution
curves in the t, p-plane between the lines p = 0 and p = a/b. If we find any solutions
which start in this region and leave it, we will be in trouble, so we will have to go
back and redo the analysis. Under these assumptions, the integration on the left
yields (by the method of partial fractions)

−(1/a) ln(a − bp) + (1/a) ln p = (1/a) ln(p/(a − bp)).

(I tried to do this using Mathematica, but it kept getting the sign of one of the
factors wrong. You should review the method of partial fractions and make sure
you understand where the answer came from! If you can’t do it, ask for help from
your instructor.) Hence, we obtain

ln(p/(a − bp)) = at + ac = at + c′

which after exponentiating yields

p/(a − bp) = e^{at}e^{c′} = Ce^{at}.    (86)

If p(0) = p0, we have

p0/(a − bp0) = C.

Equation (86) may be solved with some simple algebra to obtain

p = Cae^{at}/(1 + Cbe^{at}) = [p0 ae^{at}/(a − bp0)] / [1 + (p0 b/(a − bp0))e^{at}]

which simplifies to

p = ap0 e^{at}/(a − bp0 + bp0 e^{at}) = p0 e^{at}/(1 + (b/a)p0(e^{at} − 1)).

To see what happens for large t, let t → ∞. For this divide both numerator and
denominator by e^{at} as in

p = p0/(e^{−at} + (b/a)p0(1 − e^{−at})) → p0/((b/a)p0) = a/b.

Hence, the solution approaches the line p = a/b asymptotically from below, but it
never crosses that line. See the graph below which illustrates the population growth
under the logistic law. Initially, it does appear to grow exponentially, but eventually
population growth slows down and population approaches the equilibrium value
p_e = a/b asymptotically.

[Figure: logistic solution curve rising from p0 toward the horizontal asymptote p = a/b.]
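The closed form is just as easy to tabulate. A sketch in plain Python (the values a = 0.03, b = 0.0001, p0 = 10 are my illustrative choices; the equilibrium is a/b = 300):

import math

def logistic(t, p0, a, b):
    # p(t) = p0 e^{at} / (1 + (b/a) p0 (e^{at} - 1)), derived above
    e = math.exp(a*t)
    return p0*e / (1 + (b/a)*p0*(e - 1))

a, b, p0 = 0.03, 0.0001, 10.0
for t in (0, 100, 300, 1000):
    print(t, round(logistic(t, p0, a, b), 1))   # climbs toward a/b = 300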

If you want to read a bit more about population models, see Section 1.5 of Differ-
ential Equations and Their Applications by M. Braun. However, you should take
this with a grain of salt, since it is all based on assuming a particular model and
then fiddling with parameters in the model to fit observation.

Note that the graphs of the solutions are also asymptotic to the line p = 0 as
t → −∞. It may not be clear how to interpret this in terms of populations, but
there is no problem with the mathematics. Thus, each of the solutions obtained
above is defined for −∞ < t < ∞, and its graph is contained entirely in the strip
0 < p < a/b. Moreover, the lines p = 0 and p = a/b are also solutions, and there
are solutions above and below these lines. (See the Exercises.) Indeed, by suitable
choice of the arbitrary constants, you can find a solution curve passing through
every point in the plane, and moreover these solution curves never cross.

The fact that solution curves don’t cross (except in very special circumstances) is
a consequence of the basic uniqueness theorem and will be discussed in the next
section. However, we can use it here to justify our contention that 0 < p(t) <
a/b if 0 < p0 < a/b. What we have to worry about is the possibility that some
other solution curve (not one arrived at above) could start inside the desired strip
and later leave it. However, to do so, it would have to cross one of the bounding
lines, and we saw that they also are solution curves, so that never happens.

Exercises for 6.4.

1. (a) The US population was approximately 4×10⁶ in 1790 and 92×10⁶ in 1910.

Assuming (incorrectly) Malthusian growth without immigration, estimate the
‘birth rate’ k.
(b) If the population in 1980 was approximately 250×10⁶, would this be
consistent with a Malthusian model without immigration?
2. The ‘birth rate’ of a certain population is .02 per year. Because of overcrowd-
ing, this induces a constant emigration of 10⁶ per year. What does this predict
about population in t years if the initial population is 10×10⁶?
3. (a) Show that p = a/b and p = 0 are also solutions of the logistic equation.
Which values of p0 yield these solutions?
(b) Find the solutions of the logistic equation arising from the assumption p0 >
a/b. Show that their solution curves approach the line p = a/b asymptotically
from above as t → ∞.
(c) Find the solutions of the logistic equation arising from the assumption
p0 < 0. Show their solution curves approach the line p = 0 asymptotically as
t → −∞.
4. Consider the population model described by the equation

dp/dt = .04p − .01p² − .03.

(This is a logistic model with some emigration.) Assume p(0) = 10³ and find
p(t).
5. In a chemical reaction a substance X is formed from two substances A and
B. The reaction rate is governed by the rule

dx/dt = k(a − x)(b − x)

where x = x(t) is the amount of the substance X present at time t. Here, a
and b are constants related to the amounts of substances A and B initially
present and k is a constant of proportionality. Assume x(0) = 0 and find x(t).
What happens as t → ∞?

6.5 Existence and Uniqueness of Solutions

I mentioned earlier that the first order equations

dy/dt = f(t, y)
often cannot be solved explicitly. In such cases, one must resort to graphical or
numerical techniques in order to get approximate solutions. Before attempting

that, however, one should be confident that there actually is a solution. Clearly, if
we can’t find the solution, we have to use other methods to convince ourselves it
exists. In this section, we describe some of what is known in this area.

Consider the basic initial value problem: find y = y(t) such that

dy/dt = f(t, y) where y(t0) = y0.

There are two basic questions one may ask.

1. Existence of a solution. Under what circumstances can we be sure there is a


solution y = y(t) defined for some t-interval containing t0 ? In this connection,
we may also ask what is the largest possible domain for such a solution.

2. Uniqueness of solutions. Under what circumstances is the solution unique?


That is, can there be two or more solutions satisfying the same initial condi-
tion?

It would be foolish to devote a lot of time and energy to finding a solution without
first knowing the answers to these questions.

Example 127 Consider

dy/dt = y² where y(0) = y0,

and we assume y0 > 0. This equation is easy to solve by separation of variables.

dy/y² = dt
−1/y = t + C

so for t = 0

−1/y0 = C
and −1/y = t − 1/y0
so y = y0/(1 − y0 t).

The graph of this solution is a hyperbola asymptotic horizontally to the t-axis and
asymptotic vertically to the line t = 1/y0 .

[Figure: hyperbola branches with vertical asymptote at t = 1/y0.]

Thus the solution has a singularity at t = 1/y0 , and there is no coherent relationship
between the solution to the right and left of the singularity. We conclude that there
is a solution defined on the interval −∞ < t < 1/y0 , but that solution can’t be
extended smoothly to a solution on a larger interval. Note that the differential
equation itself provides no hint that the solution has to have a singularity. The
function f (t, y) = y 2 is smooth in the entire t, y-plane. The lesson to be learned
from this is that at best we can be sure that the initial value problem has a solution
locally in the vicinity of the initial point (t0 , y0 ).

The above solution is unique. The reasoning for that is a little tricky. Basically,
the solution was deduced by a valid argument from the differential equation, so if
there is a solution, it must be the one we found. Unfortunately, the solution process
involved division by y², so there is a minor complication. We have to worry about
the possibility that there might be a solution with y(t) = 0 for some t. To eliminate
that we argue as follows. If y(t1) ≠ 0 for a given t1, by continuity y(t) ≠ 0 for all t
in an interval centered at t1. Hence, the above reasoning is valid and the solution
satisfies −1/y(t) = t + C in that interval, which is to say its graph is part of a
hyperbola. However, none of these hyperbolas intersect the t-axis, so no solution
which is non-zero at one point can vanish at another. Notice, however, that y(t) = 0
for all t does define a solution. It satisfies the initial condition y(0) = 0.

Example 128 Consider

dy/dt = y/t.

I leave it to you to work out the general solution by separation of variables. It is
y = Ct. For every t0 ≠ 0 and any y0, there is a unique such solution satisfying the
initial condition y(t0) = y0. (Take C = y0/t0.) If t0 = 0, there is no such solution
satisfying y(0) = y0 if y0 ≠ 0. For y0 = 0, every one of these solutions satisfies
the initial condition y(0) = 0, so the solution is not unique. This is not entirely
surprising since f(t, y) = y/t is not continuous for t = 0. Note however, that all the
solutions y = Ct are defined and continuous at t = 0.

Example 129 Consider

dy/dt = 3y^{2/3} where y(0) = 0.

We can solve this by separation of variables

dy/(3y^{2/3}) = dt
y^{1/3} = t + C

so putting y = 0 when t = 0 yields

0 = 0 + C or C = 0.

Hence, y^{1/3} = t or y = t³

is a solution. (You should check that it works!) Unfortunately, the above analysis
would exclude solutions which vanish, and it is easy to see that

y(t) = 0 for all t

also defines a solution. Hence, there are (at least) two solutions satisfying the initial
condition y(0) = 0.

[Figure: the two solutions y = t³ and y = 0, both passing through the origin.]

Note that in this case f(t, y) = 3y^{2/3} is continuous for all t, y. However, it is not
terribly smooth since fy(t, y) = 2y^{−1/3} blows up for y = 0.

Propagation of Small Errors in the Solution The above example suggests that
the uniqueness of solutions may depend on how smooth the function f (t, y) is. We
shall present an argument showing how uniqueness may be related to the behavior
of the partial derivative fy (t, y). You might want to skip this discussion your first
time through the subject since the reasoning is quite intricate.
316 CHAPTER 6. FIRST ORDER DIFFERENTIAL EQUATIONS

Suppose y1(t) and y2(t) are two solutions of dy/dt = f(t, y) which happen to be
close for one particular value t = t0. (The extreme case of ‘being close’ would be
‘being equal’.) Let ε(t) = y1(t) − y2(t). We want to see how far apart the solutions
can get as t moves away from t0, so we try to determine an upper bound on the
function ε(t). We have

dy1/dt = f(t, y1(t))
dy2/dt = f(t, y2(t)).

Subtracting yields

d(y1 − y2)/dt = dy1/dt − dy2/dt = f(t, y1(t)) − f(t, y2(t)).

However, if the difference ε = y1 − y2 is small enough we have approximately

f(t, y1) − f(t, y2) ≈ fy(t, y2(t))(y1 − y2) = fy(t, y2(t)) ε.

That means that ε(t) approximately satisfies the differential equation

dε/dt = fy(t, y2(t)) ε.

This equation is linear, and it has the solution

ε = Ce^{∫ fy(t, y2(t)) dt},    (87)

so we can take the right hand side as an estimate of the size of the error as a function
of t. Of course, the above reasoning is a bit weak. First of all, we haven’t defined
what we mean by an ‘approximate’ solution of a differential equation. Moreover,
the approximation is only valid for ε small, but we want to use the result to see if
ε is small. That is certainly circular reasoning. However, it is possible to make all
this precise by judicious use of inequalities, so the method can give you an idea of
how the difference of two solutions propagates.

There are two important conclusions to draw from the estimate in (87). First,
assume the function fy is continuous, so the exponential on the right will be well
behaved. In that case if the difference between two solutions ε is initially 0, the
constant C will be 0, so the difference ε will vanish for all t, at least in the range
where all the approximations are valid. Second, assume the function fy(t, y2(t))
has a singularity at t0, and the exponent in (87) diverges to −∞ as t → t0. In
that case, the exponential would approach zero, meaning that we could have ε =
y1(t) − y2(t) ≠ 0 for t ≠ t0 but have it approach 0 as t → t0. In other words, the
graphs of the two solutions might cross at (t0, y0). In Example 129, that is exactly
what occurs. We have fy = 2y^{−1/3}, so if we take y2 = t³ and y1 = 0, we have

∫ fy(t, y2(t)) dt = ∫ 2(t³)^{−1/3} dt = ∫ 2t^{−1} dt = ln t²

so e^{∫ fy(t, y2(t)) dt} = e^{ln t²} = t²,

and this does in fact approach 0 as t → 0.

The Basic Existence and Uniqueness Theorem

Theorem 6.10 Assume f(t, y) is continuous on an open set D in the t, y-plane.
Let (t0, y0) be a point in D. Then there is a function y(t) defined on some interval
(t1, t2) containing the point t0 such that

dy(t)/dt = f(t, y(t)) for every t in (t1, t2)

and y(t0) = y0.

Moreover, if fy(t, y) exists and is continuous on D, then there is at most one such
solution defined on any interval containing t0.

[Figure: an open region D containing (t0, y0), with a solution through (t0, y0) defined on the interval (t1, t2).]

The proof of this theorem is quite involved, so it is better left for a course in Real
Analysis. However, most good books on differential equations include a proof of
some version of this theorem. See for example, Section 1.10 of Braun which uses
the method of Picard iteration.
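To give the flavor of Picard iteration without the proof: starting from the constant function y0, one repeatedly applies the map y ↦ y0 + ∫_{t0}^{t} f(s, y(s)) ds, whose fixed point is the solution. A minimal symbolic sketch assuming SymPy (the example dy/dt = y, y(0) = 1, whose iterates are the Taylor polynomials of e^t, is my choice):

import sympy as sp

t, s = sp.symbols('t s')

def picard(f, t0, y0, n):
    # y_{k+1}(t) = y0 + integral from t0 to t of f(s, y_k(s)) ds
    y = sp.sympify(y0)
    for _ in range(n):
        y = y0 + sp.integrate(f(s, y.subs(t, s)), (s, t0, t))
        y = sp.expand(y)
    return y

print(picard(lambda s, y: y, 0, 1, 4))   # 1 + t + t**2/2 + t**3/6 + t**4/24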

Example 130 Consider the logistic equation

dp/dt = ap − bp².

Here f(t, p) = ap − bp² is continuous for all (t, p), so a solution exists satisfying
any possible initial condition p(t0) = p0. Similarly fp(t, p) = a − 2bp is continuous
everywhere, so every solution is unique. Thus, graphs of solutions p = p(t) never
cross.

Example 131 Consider

dy/dt = (y − t)^{1/3}.

Here f(t, y) = (y − t)^{1/3} is continuous for all (t, y) so a solution always exists satis-
fying any given initial condition. However, fy(t, y) = (1/3)(y − t)^{−2/3} has singularities
along the line y = t. Hence, we cannot be sure of the uniqueness of solutions
satisfying initial conditions of the form y0 = t0.

Let us return to the second part of the existence question: how large a t-interval
can we choose for the domain of definition of the solution. We saw in Example 127
that this cannot be determined simply by examining the domain of f(t, y). Here is
another example.
Example 132 Consider the initial value problem dy/dt = 1 + y² with y(0) = 0.
The equation can be solved by separation of variables, and the solution is y = tan t.
(Work it out yourself.) The domain of the continuous branch of this function passing
through (0, 0) is the interval −π/2 < t < π/2.

Even in cases where the differential equation can’t be solved explicitly, it is some-
times possible to determine an interval on which there is a solution. We illustrate
how to do this in the above example (without solving the equation), but the rea-
soning is quite subtle, and you may not want to go through it at this time.

Example 132, continued We consider the case 0 ≤ t. (The reasoning for t < 0
is similar.) The problem is to find a value a such that when 0 ≤ t ≤ a, the solution
won’t grow so fast that it will become asymptotic to some line t = t1 with 0 < t1 < a.
Since y′ = 1 + y² ≥ 1, there is certainly no problem with the solution going to −∞
in such an interval, so we only concern ourselves with positive asymptotes. Suppose
that we want to be sure that 0 ≤ y ≤ b. Then we would have y′ = 1 + y² ≤ 1 + b²,
i.e., the graph would lie under the line (starting at the initial point (0, 0)) with
slope 1 + b². That line has equation y = (1 + b²)t so it intersects the horizontal line
y = b in the point (b/(1 + b²), b). Thus, for a given b, we could take a = b/(1 + b²), and
the solution could not go to ∞ for 0 < t < a.

The trick is to maximize a by finding the maximum value of b/(1 + b²) over all
possible b > 0. This may be done by elementary calculus. Setting

(d/db)(b/(1 + b²)) = ((1 + b²) − b(2b))/(1 + b²)² = (1 − b²)/(1 + b²)² = 0

yields b = 1 (since we know b > 0). It is easy to see that this is indeed a maximum
point and the maximum value is 1/(1 + 1²) = 1/2. It follows that we can be sure
there is a solution for 0 ≤ t < 1/2. Similar reasoning (or a symmetry argument)
shows that there is also a solution for −1/2 < t ≤ 0.

Note that this reasoning gave us an interval (−1/2, 1/2) somewhat smaller than
that obtained above for the exact solution.

[Figure: the solution staying below the line y = Mt up to t = b/M, where M = 1 + b².]

The analysis in the above example involves ‘lifting oneself by one’s bootstraps’ in
that one uses ‘b’ (a bound on y) to find ‘a’ (a bound on t), whereas what we really
want is the reverse. It only works well if f (t, y) is independent of t or is bounded by
functions independent of t. In general, there may be no effective way to determine
an interval on which we can be sure there is a solution, although the existence
theorem assures there is such an interval.

Exercises for 6.5.

1. (a) Solve the initial value problem dy/dt = t(y − 1) given y(0) = 1 by separation
of variables.
(b) How can you conclude that this is the only solution?
2. (a) Solve the initial value problem dy/dt = y^{4/5} given y(0) = 0 by separation
of variables.
(b) Note that y = 0 is also a solution to this initial value problem. Why
doesn’t this contradict the uniqueness theorem?
3. (a) Solve the initial value problem dy/dt = √(1 − y²) given y(0) = 1 by
separation of variables.
(b) Note that y = 1 is also a solution to this initial value problem. Why
doesn’t this contradict the uniqueness theorem?

4. (a) Solve dy/dt = 2t(1 + y²), y(0) = 0 by separation of variables.
(b) What is the largest possible domain of definition for the solution obtained
in part (a)?
5. Consider the initial value problem y′ = t² + y² given y(0) = 0. This cannot be
solved explicitly by any of the methods we have discussed, but the existence
and uniqueness theorem tells us there is a unique solution defined on some
t-interval containing t = 0.
Show that (−a, a) is such an interval if a ≤ 1/√2 as follows. Fix a value b > 0.
Since y′ = t² + y² ≤ a² + b² for 0 ≤ t ≤ a, 0 ≤ y ≤ b (a rectangle with opposite
corners at (0, 0) and (a, b)), it follows that a solution curve starting at (0, 0)
will pass out of the right hand side of the rectangle provided a² + b² ≤ b/a.
This may be rewritten ab² − b + a³ ≤ 0. Fix a and determine a condition such
that the resulting quadratic expression in b does assume negative values, i.e.,
such that the equation ab² − b + a³ = 0 has two unequal real roots, at least
one of which is positive.
Do you think this initial value problem has a solution defined for −∞ < t <
∞?

6.6 Graphical and Numerical Methods

What if the initial value problem for a first order differential equation cannot be
solved explicitly? In that case one may try to use graphical or numerical techniques
to find an approximate solution. Even if one can’t get very precise values, it is
sometimes possible to get a qualitative understanding of the solution.

The Direction Field For each point (t, y) in R2 , f (t, y) specifies the slope that
the graph of a solution passing through that point should have. One may draw
small line segments at selected points, and it is sometimes possible to sketch in a
good approximation to the solution.

Example 133 Consider

dy/dt = t√(1 + y³) where y(0) = 1.

We can make an attempt to solve this by separation of variables to obtain

dy/√(1 + y³) = t dt

so ∫ dy/√(1 + y³) = (1/2)t² + c.

Unfortunately, the left hand side cannot be integrated explicitly in terms of known
functions, so there is no way to get an explicit solution y = y(t). In the diagram
below, I sketched some line segments with the proper slope t√(1 + y³) at points in
the first quadrant. Also, starting at the point (0, 1), I attempted to sketch a curve
which is tangent to the line segment at each point it passes through. I certainly
didn’t get it exactly right, but the general characteristics of the solution seem clear.
It increases quite rapidly and it may even have an asymptote around t = 2.

[Figure: direction field in the first quadrant with a solution sketched from (0, 1), rising steeply between t = 1 and t = 2.]

We may verify analytically that the graph has an asymptote as follows. We have

dy/dt = t√(1 + y³) > t√(y³) = ty^{3/2} for t > 0, y > 0.

Hence, in the first quadrant

dy/y^{3/2} > t dt

so

∫_1^y du/u^{3/2} > ∫_0^t t′ dt′ = (1/2)t².

Note that we have to use definite integrals in order to preserve the inequality, and
we also used that y > 1 along the solution curve. The latter is apparent from the
direction field, although a detailed proof might be tricky. Integrating on the left,

we obtain

−2u^{−1/2} |_1^y > (1/2)t²
−2(1/√y − 1) > (1/2)t²
or √y > 1/(1 − (1/4)t²)
or y > 1/(1 − (1/4)t²)²,

and the expression on the right has an asymptote at t = 2.

Euler’s Method

There are a variety of numerical methods for solving differential equations. Such
a method can generate a table of values yi which are approximations to the true
solution y(ti ) for selected values ti of the independent variable. Usually, the values ti
are obtained by dividing up a specified interval [t0 , tf ] into n equally spaced points.

[Figure: grid points t0, …, ti, …, tf on the t-axis, with the approximations yi compared to the true values y(ti).]

The simplest method is called Euler’s method. It is based on the linear approxima-
tion

y(t + h) = y(t) + y′(t)h + o(h)

where, as in Chapter III, Section 4, the notation ‘o(h)’ indicates a quantity small
compared to h (lim_{h→0} o(h)/h = 0). Since y′(t) = f(t, y(t)), we have approximately

y(t + h) ≈ y(t) + f(t, y)h.



Using this, we may describe Euler’s method by the following Pascal-like pseudo-
code, where t0 and y0 denote the initial values of t and y, tf is the ‘final’ value of
t, n is the number of steps from t0 to tf, and h = (tf − t0)/n is the step size.

read t0, y0, tf, n;
h := (tf − t0)/n;
t := t0;
y := y0;
while (t ≤ tf + h/2) do
begin
    yy := y + f(t, y) ∗ h;
    t := t + h;
    y := yy;
    show(t, y);
end;
end.

[Figure: one Euler step, moving along the tangent segment of rise y′(t)h from t to t + h above the graph of y(t).]
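For readers who prefer running code to pseudocode, here is my direct transcription into Python (not from the text); it returns the table of (t, y) pairs instead of calling show, and a counted loop sidesteps the floating-point test discussed below:

def euler(f, t0, y0, tf, n):
    # Approximate dy/dt = f(t, y), y(t0) = y0, by Euler's method.
    h = (tf - t0) / n
    t, y = t0, y0
    table = [(t, y)]
    for _ in range(n):
        y = y + f(t, y) * h    # one step along the tangent line
        t = t + h
        table.append((t, y))
    return table

# The test problem of Figure 1: dy/dt = 1 - y, y(0) = 0.
print(euler(lambda t, y: 1 - y, 0.0, 0.0, 3.0, 5)[-1])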

Note the construction ‘t ≤ tf + h/2’. This complication is needed because of the


way numerical calculations are done in a computer. Namely, the calculation of h
will involve either a round-off error or truncation error in almost all cases. As a
result, n iterations of the step t := t + h could yield a value for t slightly greater
than tf , but it will certainly yield a value short of tf + h/2. Adding the h/2 will
insure that the loop stops exactly where we want it to.

Euler’s method is very simple to program, but it is not too accurate. It is not
hard to see why. At each step, we make an error due to our use of the linear
approximation rather than the true solution at that point. In effect that moves us
onto a neighboring solution curve. After many steps, these errors will compound,
i.e., the errors from all previous steps will add additional errors at the current step.
Hence, the cumulative error after a large number of steps can get very large. See
Figure 1 for the graphs of some solutions of the initial value problem dy/dt =
1 − y, y(0) = 0 by Euler’s method with different values of n. The exact solution in
this case is y = 1 − e^{−t}. In this case, the approximate solutions all lie above the
exact solution. Can you see why?

Figure 1. Euler’s method for n = 5, 10, 25 and exact solution

One can estimate the error due to the approximation in Euler’s method. It turns
out that the error in the calculation of the last value y(tf ) is O(h). That means if
you double the number of steps, you will generally halve the size of the error. See
Braun Section 1.13.1 for a discussion of the error analysis. Thus, in principle, you
ought to be able to get any desired degree of accuracy simply by making the step size
small enough. However, the error estimate O(h) is based on the assumption of exact
arithmetic. Numerical considerations such as round-off error become increasingly
important as the step size is made smaller. (For example, in the extreme case, if
the number of steps n is chosen large enough, the computer may decide that the
step size h is zero, and the while loop won’t terminate!) As a result, there is a point
of diminishing returns where the answer actually starts getting worse as the step
size is decreased. To get better answers one needs a more sophisticated method.
The Improved Euler Method. Euler’s method may be improved by incorporating
a feedback mechanism which tends to correct errors. We start off as before with
the tangent approximation

yy := y + f(t, y) ∗ h;

but then we use this provisional value yy at t + h to calculate the putative slope
f(t + h, yy) at (t + h, yy). This is now averaged with the slope f(t, y) at (t, y), and
the result is used to determine a new value yyy at t + h:

yyy := y + ((f(t, y) + f(t + h, yy))/2) ∗ h;

[Figure: the first guess follows the tangent at (t, y); the second guess uses the averaged slope.]

Here is some Pascal-like pseudo-code implementing this algorithm.

read t0, y0, tf, n;
h := (tf − t0)/n;
t := t0;
y := y0;
while (t ≤ tf + h/2) do
begin
  m := f(t, y);
  yy := y + m ∗ h;
  t := t + h;
  yyy := y + ((m + f(t, yy))/2) ∗ h;
  y := yyy;
  show(t, y);
end;
end.

See Figure 2 for graphs showing the behavior of the improved Euler method for the
initial value problem y 0 = 1 − y, y(0) = 0 for various n. Note that the approxima-
tions lie below the exact solution in this case. Can you see why? Note also that the
improved Euler method does give better accuracy than the Euler method for the
same value of n.


Figure 2. Improved Euler for n = 5, 10, 25 versus exact solution.

One can also estimate the error due to the approximation in the improved Euler
method. It turns out that the error in the calculation of the last value y(tf ) is
O(h2 ). That means if you double the number of steps, you will generally reduce
the size of the error by a factor of 1/4. See Braun Section 1.13.1 for a discussion of
the error analysis.
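To see these rates in action, the following Python sketch (our own illustration, not
from Braun) compares the two methods on y' = 1 − y, y(0) = 0; doubling n should
roughly halve the Euler error and quarter the improved Euler error.

import math

def euler_step(f, t, y, h):
    return y + f(t, y) * h

def improved_euler_step(f, t, y, h):
    m = f(t, y)                              # slope at the left endpoint
    yy = y + m * h                           # provisional Euler value
    return y + (m + f(t + h, yy)) / 2 * h    # averaged slope

def solve(step, f, t0, y0, tf, n):
    h = (tf - t0) / n
    t, y = t0, y0
    for _ in range(n):
        y = step(f, t, y, h)
        t += h
    return y

f = lambda t, y: 1 - y
exact = 1 - math.exp(-3.0)                   # y(3) for the exact solution y = 1 - e^{-t}
for n in (5, 10, 25, 50):
    e1 = abs(solve(euler_step, f, 0, 0, 3, n) - exact)
    e2 = abs(solve(improved_euler_step, f, 0, 0, 3, n) - exact)
    print(n, e1, e2)                         # e1 scales like h, e2 like h^2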

There are many methods for wringing out the last bit of accuracy when attempt-
ing to solve a differential equation numerically. One very popular method is the
Runge–Kutta method, which uses higher order approximations rather than linear
approximations. There are people who have devoted a considerable part of their
professional careers to the study of numerical solutions of differential equations, and
their results are embodied in a variety of software packages.

Exercises for 6.6.

1. Sketch the direction field for the differential equation y' = t² + y² in the
rectangle 0 ≤ t ≤ 1, 0 ≤ y ≤ 3. Try to sketch solution curves for the following
initial conditions. Use as many points as you feel necessary for an accurate
sketch. One shortcut is to plot selected points with slopes m for various m’s.
Thus, all points with slope m = 1 lie along the circle t² + y² = 1. (You may
also use a software package for sketching direction fields and solutions if you
can find one and figure out how to use it.)
(a) y(0) = 0. (b) y(0) = 1.
On the basis of your sketch, would you expect any problems in either case in
a numerical method (such as Euler’s method or the improved Euler’s method)
to find y(1.1)?

2. (optional) Program the improved Euler’s method for y' = t² + y², and try to
determine y(1.1) for (a) y(0) = 0 and for (b) y(0) = 1.
Chapter 7

Second Order Linear Differential Equations

7.1 Second Order Differential Equations

The general second order differential equation may be put in the form

d²y/dt² = f(t, y, y'),

and a solution is a function y = y(t) defined on some t-interval t1 < t < t2 and
satisfying

y''(t) = f(t, y(t), y'(t)) for t1 < t < t2.

Note that the function f on the right is a function of three independent variables,
for which t, the function y(t), and its derivative y'(t) are substituted to check a
solution.

As we saw in specific cases in Chapter II, a general solution of a second order
equation involves two arbitrary constants, so you need two conditions to determine
a solution completely. The fundamental existence theorem asserts that if f(t, y, y')
is continuous on its domain, then there exists a solution satisfying initial conditions
of the form

y(t0) = y0
y'(t0) = y0'.

(Note that both the solution and its derivative are specified at t0.) The fundamental
uniqueness theorem asserts that if f_y(t, y, y') and f_{y'}(t, y, y') exist and are continu-
ous, then on any given interval containing t0 there is at most one solution satisfying
the given initial conditions.


Example 134 As you saw earlier in this course and in your physics course, the
differential equation governing the motion of a weight at the end of a spring has the
form

d²y/dt² = −(k/m) y

where y denotes the displacement of the weight from equilibrium. Similar differential
equations govern other examples of simple harmonic motion. The general solution
can be written

y = C1 cos(√(k/m) t) + C2 sin(√(k/m) t)
or y = A cos(√(k/m) t + δ),

and in either case there are two constants which may be determined by specifying
an initial displacement y(t0) = y0 and an initial velocity y'(t0) = y0'.

In physics, you may also have encountered the equation for damped simple harmonic
motion (a mass oscillating in a viscous medium), which has the form

d²y/dt² = −(k/m) y − (b/m) dy/dt.

You may also have seen a discussion of solutions of this equation. Again, two
arbitrary constants are involved.

We solved the equation for undamped simple harmonic motion in Chapter II by
means of a trick which will always work if the function f(t, y, y') = f(y, y') does not
depend explicitly on the independent variable t. In that case, you can put u = y'
and then by the chain rule

d²y/dt² = du/dt = (du/dy)(dy/dt) = u du/dy,

so the equation can be put in the form

u du/dy = f(y, u).

This equation can then be solved to express u = dy/dt in terms of y, and then the
resulting first order equation may be solved for y. This method would also work for
the equation for damped simple harmonic motion. (Try it!)

There is one other circumstance in which a similar trick will work: if f(t, y, y') =
f(t, y') does not actually involve the dependent variable y. Then we can put u = y'
and we have a first order equation of the form

du/dt = f(t, u).
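For instance (a simple illustration of this second case, added here for concreteness),
consider y'' + y' = t, which does not involve y. Putting u = y' gives the first order
linear equation u' + u = t, whose general solution is u = t − 1 + Ce^{−t}. Integrating
once more gives y = t²/2 − t − Ce^{−t} + D, with the expected two arbitrary constants.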

Except for these special cases, there are no good methods for solving general second
order differential equations. However, for linear equations, which we shall consider

in the rest of this chapter, there are powerful methods. Fortunately, many important
second order differential equations arising in applications are linear.

Exercises for 7.1.

1. The exact equation satisfied by a mass at the end of a pendulum of length L
is d²θ/dt² = −(g/L) sin θ. Put u = dθ/dt and reduce to a first order equation
in u and θ. Do not try to solve this equation.
2. Find a general solution for y'' = (y')² by putting u = y' and first solving for
u.

7.2 Linear Second Order Differential Equations

A second order differential equation is called linear if it can be put in the form

d²y/dt² + p(t) dy/dt + q(t)y = f(t)

where p(t), q(t), and f(t) are known functions. This may be put in the form
d²y/dt² = f(t, y, y') with f(t, y, y') = f(t) − q(t)y − p(t)y'.

Example 135 The equation for forced, damped harmonic motion has the form

d²y/dt² + (b/m) dy/dt + (k/m) y = (F0/m) cos ωt.

The term on the right represents a periodic driving force with period 2π/ω.

Example 136 The equation

d²y/dt² − (2t/(1 − t²)) dy/dt + (α(α + 1)/(1 − t²)) y = 0,

where α is some constant, is called Legendre’s equation. Its solutions are used to
describe the shapes of the electron ‘clouds’ in pictures of atoms you see in chemistry
books.

Example 137 The equations

d²y/dt² + 2(dy/dt)² + ty = 0
d²y/dt² + (g/L) sin y = 0

are non-linear equations. The second of these is the equation for the motion of a
pendulum. This is usually simplified by making the approximation sin y ≈ y.

Existence and uniqueness for linear differential equations is a bit easier.

Theorem 7.11 Let p(t), q(t), and f(t) be continuous functions on the interval t1 <
t < t2 and let t0 be a point in that interval. Then there is a unique solution of

y'' + p(t)y' + q(t)y = f(t)

on that same interval which satisfies specified initial conditions

y(t0) = y0
y'(t0) = y0'.

Note that the new element in the statement is that the solution is defined and
unique on the same interval that the differential equation is defined and continuous
on. In the general case, for a non-linear differential equation, we might have to take
a smaller interval.

We won’t try to prove this theorem in this course.

The basic strategy for solving a linear second order differential equation is the same
as the strategy for solving a linear first order differential equation.

(i) Find a general solution of the homogeneous equation

y'' + p(t)y' + q(t)y = 0. (88)

Such a solution will involve two arbitrary constants.

(ii) Find one particular solution of the inhomogeneous equation

y'' + p(t)y' + q(t)y = f(t). (89)

Then a general solution of the inhomogeneous equation can be obtained by
adding the general solution of the homogeneous equation to the particular
solution.

(iii) Use the initial conditions to determine the constants. Since there are
two initial conditions and two constants, we have enough information to do
that.

Let’s see why this strategy should work. Suppose yp is one particular solution of
the inhomogeneous equation, i.e.,

yp'' + p(t)yp' + q(t)yp = f(t).

Suppose that y is any other solution of the inhomogeneous equation, i.e.,

y'' + p(t)y' + q(t)y = f(t).

If we subtract the first of these equations from the second and regroup terms, we
obtain

(y'' − yp'') + p(t)(y' − yp') + q(t)(y − yp) = f(t) − f(t) = 0.

Thus if we put u = y − yp, we get

u'' + p(t)u' + q(t)u = 0,

so u is a solution of the homogeneous equation, and

y(t) = yp(t) + u(t).
Note how important the linearity was in making this argument work.

Example 138 Consider

y'' + 4y = 1 where y(0) = 0, y'(0) = 1.

The homogeneous equation

y'' + 4y = 0

has the general solution

y = C1 cos(2t) + C2 sin(2t).

(This is the equation for simple harmonic motion discussed in the previous section.)

To find a particular solution of the inhomogeneous equation, we note by inspection
that a constant solution y = A should work. Then,

y'' + 4y = 0 + 4A = 1

so A = 1/4. Hence, yp = 1/4 is a particular solution, and the general solution
of the inhomogeneous equation is

y = 1/4 + C1 cos(2t) + C2 sin(2t),

where 1/4 is the particular solution and C1 cos(2t) + C2 sin(2t) is the general
solution of the homogeneous equation.

Finally, we determine the constants as follows. First note that

y' = −2C1 sin 2t + 2C2 cos 2t.

Then at t = 0,

y = 0 = 1/4 + C1 cos 0 + C2 sin 0 = 1/4 + C1
y' = 1 = −2C1 sin 0 + 2C2 cos 0 = 2C2.

Thus, C1 = −1/4 and C2 = 1/2, and the solution matching the given initial condi-
tions is

y = 1/4 − (1/4) cos 2t + (1/2) sin 2t.
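As a quick sanity check (our own illustration using the SymPy library, not part of
the text), one can let a computer verify this solution:

import sympy as sp

t = sp.symbols('t')
y = sp.Rational(1, 4) - sp.cos(2*t)/4 + sp.sin(2*t)/2

# The expression y'' + 4y should reduce to 1 identically.
print(sp.simplify(sp.diff(y, t, 2) + 4*y))   # prints 1
# Check the initial conditions y(0) = 0 and y'(0) = 1.
print(y.subs(t, 0), sp.diff(y, t).subs(t, 0))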

Exercises for 7.2.

1. Consider the differential equation y'' + y = 1.
(a) What is a general solution of the homogeneous equation y'' + y = 0?
(b) Assume that there is a particular solution of the inhomogeneous equation
of the form y = A. Find A.
(c) Find a general solution of the inhomogeneous equation.
(d) Find a solution to the initial value problem y'' + y = 1, y(0) = 1, y'(0) = −1.

2. Consider the differential equation y'' + 9y = 2 cos(2t).
(a) What is a general solution of the homogeneous equation y'' + 9y = 0?
(b) Assume that there is a particular solution of the inhomogeneous equation
of the form y = A cos(2t). Find A.
(c) Find a general solution of the inhomogeneous equation.
(d) Find a solution to the initial value problem y'' + 9y = 2 cos(2t), y(0) = 1,
y'(0) = 0.

3. For which intervals does the existence theorem guarantee a solution for Leg-
endre’s equation?

7.3 Homogeneous Second Order Linear Equations

To solve

y'' + p(t)y' + q(t)y = 0

we need to come up with a solution with two arbitrary constants in it. Suppose that
somehow or other, we have found two different solutions y1(t) and y2(t) defined on
a common t-interval t1 < t < t2.

Example 139 Two solutions of the equation

y'' + 4y = 0

are y1(t) = cos(2t) and y2(t) = sin(2t).



The important insight on which the whole theory is based is that anything of the
form

y(t) = c1 y1(t) + c2 y2(t), (90)

where c1 and c2 are constants, is again a solution. To see why, we calculate as
follows. We have

y1'' + p(t)y1' + q(t)y1 = 0
y2'' + p(t)y2' + q(t)y2 = 0,

so multiplying the first equation by c1, the second by c2, and adding, we obtain

c1 y1'' + c2 y2'' + p(t)(c1 y1' + c2 y2') + q(t)(c1 y1 + c2 y2) =
(c1 y1 + c2 y2)'' + p(t)(c1 y1 + c2 y2)' + q(t)(c1 y1 + c2 y2) = 0,

which says exactly that y = c1 y1 + c2 y2 is a solution. Note that this argument
depends very strongly on the fact that the differential equation is linear.

The function y = c1 y1 + c2 y2 is called a linear combination of y1 and y2. Thus, we
may restate the above statement as follows: any linear combination of solutions of
a linear homogeneous differential equation is again a solution.

The above analysis provides a method for finding a general solution of the homo-
geneous equation: find a pair of solutions y1(t), y2(t) and use y = c1 y1(t) + c2 y2(t).
However, there is one problem with this; the two solutions might be essentially the
same. For example, suppose y1 = cy2 (i.e., y1(t) = cy2(t) for all t in their common
domain). Then

y = c1 y1 + c2 y2 = c1 c y2 + c2 y2 = (c1 c + c2) y2 = c' y2,

so the solution does not involve two arbitrary constants. Without two such con-
stants, we won’t necessarily be able to match arbitrary initial values for y(t0) and
y'(t0).

With the above discussion in mind we make the following definition. A pair of func-
tions {y1, y2}, defined on a common domain, is called linearly independent if neither
is a constant multiple of the other. Otherwise, the pair is called linearly dependent.
There are a couple of subtle points in this definition. First, linear independence
(or its negation, linear dependence) is a property of the pair of functions, not of the
functions y1 and y2 themselves. Secondly, if y1 = cy2 then we also have y2 = (1/c)y1
except in the case y1 is identically zero. In the exceptional case, c = 0. Thus, a pair
of functions, one of which is identically zero, is always linearly dependent. On the
other hand, if y1 = cy2, and neither y1 nor y2 vanishes identically on the domain,
then c won’t be zero.

Assume now that y1, y2 constitute a linearly independent pair of solutions of

y'' + p(t)y' + q(t)y = 0,

defined on an interval t1 < t < t2 on which the coefficients p(t) and q(t) are
continuous. Suppose t0 is a point in that interval, and we want to match initial
conditions y(t0) = y0, y'(t0) = y0' at t0. We shall show that this is always possible
with a general solution of the form y = c1 y1 + c2 y2.

Example 139, continued As above, consider

y'' + 4y = 0

with solutions y1(t) = cos 2t and y2(t) = sin 2t. Their quotient is tan 2t, which is
not constant, so neither is a constant multiple of the other, and they constitute a
linearly independent pair. Let’s try to match the initial conditions

y(π/2) = 1
y'(π/2) = 2.

Let

y = c1 y1(t) + c2 y2(t) = c1 cos 2t + c2 sin 2t (91)

so

y' = c1 y1'(t) + c2 y2'(t) = −2c1 sin 2t + 2c2 cos 2t. (92)

Putting t = π/2, we need to find c1 and c2 such that

y(π/2) = c1 cos π + c2 sin π = −c1 = 1
y'(π/2) = −2c1 sin π + 2c2 cos π = −2c2 = 2.

The solution is c1 = −1, c2 = −1. Thus the solution of the differential equation
matching the desired initial conditions is

y = − cos 2t − sin 2t.

(You should check that it works!)

Let’s see how this would work in general. Using (91) and (92), matching initial
conditions at t = t0 yields

y(t0) = c1 y1(t0) + c2 y2(t0) = y0
y'(t0) = c1 y1'(t0) + c2 y2'(t0) = y0'.

Solve this pair of equations for c1 and c2 by the usual method you learned in
high school. To find c1, multiply the first equation by y2'(t0), multiply the second
equation by y2(t0), and subtract. This yields

c1 [y1(t0)y2'(t0) − y1'(t0)y2(t0)] = y0 y2'(t0) − y0' y2(t0).

Hence, provided the coefficient of c1 is not zero, we obtain

c1 = (y0 y2'(t0) − y0' y2(t0)) / (y1(t0)y2'(t0) − y1'(t0)y2(t0)).

Similarly, multiplying the second equation by y1(t0) and the first by y1'(t0) and
subtracting yields

c2 = (y0' y1(t0) − y0 y1'(t0)) / (y1(t0)y2'(t0) − y1'(t0)y2(t0)).

Note that the denominators are the same. Also, the above method will work only
if this common denominator does not vanish. (In Example 139, the denominator
was (−1)(−2) = 2.) Define

W(t) = y1(t)y2'(t) − y1'(t)y2(t) = det [ y1(t)  y2(t) ; y1'(t)  y2'(t) ].

This function is called the Wronskian of the pair of functions {y1, y2}. Thus, we
need to show that the Wronskian W(t0) ≠ 0 at the initial point t0. To this end,
note first that the Wronskian cannot vanish identically for all t in the domain of
the differential equation. For,

(d/dt)(y2(t)/y1(t)) = (y2'(t)y1(t) − y2(t)y1'(t)) / y1(t)² = W(t) / y1(t)²,

so if W(t) vanishes for all t, the quotient y2/y1 is constant, and y2 is a constant
multiple of y1. That contradicts the linear independence of the pair {y1, y2}. (Ac-
tually, the argument is a little more complicated because of the possibility that the
denominator y1 might vanish at some points. See the appendix at the end of this
section for the details.)

There is still the possibility that W(t) vanishes at the initial point t = t0, but W(t)
does not vanish identically. We shall show that can’t ever happen for functions
y1, y2 which are solutions of the same homogeneous linear differential equation, i.e.,
the Wronskian is either never zero or always zero in the domain of the differential
equation. To see this, first calculate

W'(t) = (y1(t)y2'(t) − y1'(t)y2(t))'
      = y1'(t)y2'(t) + y1(t)y2''(t) − (y1''(t)y2(t) + y1'(t)y2'(t))
      = y1(t)y2''(t) − y1''(t)y2(t).

On the other hand, using the differential equation, we can express the second deriva-
tives of the solutions:

y1''(t) = −p(t)y1'(t) − q(t)y1(t)
y2''(t) = −p(t)y2'(t) − q(t)y2(t).

Putting these in the above formula yields

W'(t) = y1(t)(−p(t)y2'(t) − q(t)y2(t)) − (−p(t)y1'(t) − q(t)y1(t))y2(t)
      = −p(t)(y1(t)y2'(t) − y1'(t)y2(t)) = −p(t)W(t).

This shows that the Wronskian satisfies the first order differential equation

dW/dt = −p(t)W,

and we know how to solve such equations. The general solution is

W(t) = C e^{−∫p(t)dt}. (93)

The important thing about this formula is that the exponential function never
vanishes. Hence, the only way W(t) can vanish for any t whatsoever is if C = 0, in
which case W(t) vanishes identically.

We summarize the above discussion as follows. A pair {y1, y2} of solutions of the
homogeneous linear equation

y'' + p(t)y' + q(t)y = 0

is linearly independent if and only if the Wronskian never vanishes.

Example 139, again The Wronskian of the pair {y1 = cos 2t, y2 = sin 2t} is

det [ cos 2t  sin 2t ; −2 sin 2t  2 cos 2t ] = 2 cos² 2t + 2 sin² 2t = 2.

According to the theory, it is not really necessary to find W(t) for all t. It would
have sufficed to find it just at t0 = π/2, as in essence we did before, and see that it
is not zero.
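If you want to experiment, the SymPy library can compute Wronskians directly (a
small sketch of our own; sympy.wronskian is an actual SymPy helper):

import sympy as sp

t = sp.symbols('t')
print(sp.wronskian([sp.cos(2*t), sp.sin(2*t)], t).simplify())      # prints 2
print(sp.wronskian([sp.exp(-3*t), t*sp.exp(-3*t)], t).simplify())  # e^{-6t}, never zero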

It sometimes seems a bit silly to calculate the Wronskian to see if a pair of solutions
is independent since one feels it should be obvious whether or not one function is a
multiple of another. However, for complicated functions, it may not be so obvious.
For example, for the functions y1 (t) = sin 2t and y2 (t) = sin t cos t it might not
be clear that the first is twice the second if we did not know the trigonometric
identity sin 2t = 2 sin t cos t. For more complicated functions, there may be all sorts
of hidden relations we just don’t know.

Example 140 Consider the equation

y'' + (2t/(1 − t²)) y' + (6/(1 − t²)) y = 0,    −1 < t < 1.

To find the form of the Wronskian, calculate

∫ p(t) dt = ∫ 2t/(1 − t²) dt = − ln(1 − t²).

Thus,

W(t) = C e^{ln(1 − t²)} = C(1 − t²).

Warning: We have remarked that the Wronskian never vanishes. That conclusion
is valid only on intervals for which the coefficient function p(t) is continuous. If p(t)
has a singularity, it is quite possible to have the antiderivative ∫p(t)dt approach ∞
as t approaches the singularity. In that case the exponential

e^{−∫p(t)dt}

would approach 0 as t approaches the singularity. That is the case in the previous
example at t = 1 and t = −1, which are singularities of p(t) = 2t/(1 − t²). Hence,
the fact that W(t) = C(1 − t²) vanishes at those points does not contradict the
validity of the general theory.

An alternate form of the formula for the Wronskian, using definite integrals with
dummy variables, is sometimes useful:

W(t) = W(t0) e^{−∫_{t0}^{t} p(s)ds}. (94)

Appendix on the Vanishing of the Wronskian Suppose the Wronskian W(t) =
y1(t)y2'(t) − y1'(t)y2(t) vanishes identically, but y1(t1) = 0 for some specific value t1
in the interval where y1 and y2 are defined. Then the argument showing y2(t)/y1(t)
is constant fails because there will be a zero in the denominator when applying the
quotient rule. Let’s see what we can do in that case. First, y1'(t1) ≠ 0, because otherwise

y1(t1) = z(t1) = 0
y1'(t1) = z'(t1) = 0,

where z(t) is the function which is identically zero for all t. By the uniqueness
theorem, that would mean that y1 is identically zero, which is not the case. Hence,
using the fact that y1'(t1) ≠ 0, we can conclude from

W(t1) = y1(t1)y2'(t1) − y1'(t1)y2(t1) = −y1'(t1)y2(t1) = 0

that y2(t1) = 0. Thus, by the same reasoning, y2'(t1) ≠ 0. Let c = y2'(t1)/y1'(t1).
Then

y2(t1) = c y1(t1) = 0
y2'(t1) = c y1'(t1) by the definition of c.

Hence, by the uniqueness theorem, y2(t) = c y1(t) for all t in the common interval
on which the solutions are defined.

Note that this is actually quite subtle. In case one of the two functions never van-
ishes, the quotient rule suffices to show that the identical vanishing of the Wronskian
implies the pair of functions is linearly dependent. However, if both functions vanish
at some points, we must use the fact that they are both solutions of a homogeneous
linear differential equation and apply the basic uniqueness theorem! Solutions of
such equations frequently vanish at isolated points so the subtle part of the argu-
ment is necessary.

Exercises for 7.3.

1. The equation y'' + 6y' + 9y = 0 has solutions y1 = e^{−3t} and y2 = te^{−3t}. Find
a solution y(t) satisfying y(0) = 1, y'(0) = 0.

2. Consider the differential equation t²y'' + 4ty' + 2y = 0.
(a) Show that y1(t) = 1/t and y2(t) = 1/t² are solutions for 0 < t < ∞.
(b) Show that the pair of solutions {y1, y2} is linearly independent.
(c) Compute the Wronskian W(t) for this pair of solutions and note that it
doesn’t vanish for 0 < t < ∞.
(d) What happens to the Wronskian W(t) as t → 0?

3. For each of the following pairs of functions, calculate the Wronskian W(t) on
the interval −∞ < t < ∞. In each case, see if you have enough information
to decide whether the pair can be a linearly independent pair of solutions
of a second order linear homogeneous differential equation on the interval
−∞ < t < ∞.
(a) {sin 3t, cos 3t}.
(b) {e^{2t}, e^{−t}}.
(c) {t² − t, 3t}.
(d) {te^{−t}, e^{−t}}.

4. Suppose {y1(t), y2(t)} is a linearly independent pair of solutions of

y'' + (2t/(1 + t²)) y' + (1/(1 + t²)) y = 0

defined on the interval −1 < t < 1. Suppose y1(0) = 1, y1'(0) = 2, y2(0) = 2,
y2'(0) = 1. Find W(t). Is the pair of solutions linearly independent?

5. Suppose {u1, u2} is a linearly independent pair of solutions of a second order
linear homogeneous differential equation. Let y1(t) = u1(t) + u2(t) and y2(t) =
u1(t) − u2(t).
(a) Show that the Wronskian Wy(t) of the pair {y1, y2} is related to the
Wronskian Wu(t) of the pair {u1, u2} by Wy(t) = −2Wu(t). Conclude that
{y1, y2} is also a linearly independent pair of solutions.
(b) Show that {y1, y2} is a linearly independent pair of solutions without using
the Wronskian. Hint: Assume y1 = cy2 and derive a similar equation for u1
and u2.

7.4 Homogeneous Equations with Constant Coefficients

Consider the differential equation

y'' + py' + qy = 0

where p and q are constants. In this case, we can solve the homogeneous equation
completely. Since the differential equations arising in some important applications
fall in this category, this is quite fortunate. For example, as indicated in Section 1,
the equation for damped harmonic motion has constant coefficients.

The essential idea behind the solution method is to search for solutions of the form

y = e^{rt}

where r is a constant to be determined. This seems a bit arbitrary, but we rely
here on the experience of generations of mathematicians who have worked with
differential equations. Thus, we take advantage of their discoveries, so we don’t
have to rediscover everything ourselves.

We have

y = e^{rt}
y' = re^{rt}
y'' = r²e^{rt}

so

y'' + py' + qy = r²e^{rt} + pre^{rt} + qe^{rt} = (r² + pr + q)e^{rt}.

Since e^{rt} never vanishes, it follows that y = e^{rt} is a solution of the differential
equation if and only if

r² + pr + q = 0. (95)

This converts the problem of solving a differential equation into a problem of solving
an algebraic equation of the same order. The roots of equation (95) are

r1 = −p/2 + (1/2)√(p² − 4q),
r2 = −p/2 − (1/2)√(p² − 4q).

Note that these will be different as long as p² − 4q ≠ 0. Corresponding to these roots
are the two solutions of the differential equation: y1 = e^{r1 t} and y2 = e^{r2 t}. Moreover,
if r1 ≠ r2, then linear independence is no problem since the ratio e^{r1 t}/e^{r2 t} =
e^{(r1 − r2)t} is not constant. Then,

y = c1 e^{r1 t} + c2 e^{r2 t}

is a general solution.

Example 141 Consider

y'' + 3y' − 4y = 0.

The corresponding algebraic equation is

r² + 3r − 4 = 0.

The roots of this equation are easy to determine by factoring: r² + 3r − 4 =
(r + 4)(r − 1). They are r1 = −4, r2 = 1. Hence, the general solution is

y = c1 e^{−4t} + c2 e^{t}.

The exact shape of the graph of such a solution will depend on the constants c1 and
c2.

[Graphs of solutions for (c1, c2) = (0, 1), (1, 0), and (1, −1).]
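For a numerical experiment (our own sketch, not part of the text), the characteristic
roots can be found with the numpy library and the general solution evaluated directly:

import numpy as np

p, q = 3.0, -4.0
r1, r2 = np.roots([1.0, p, q])        # the roots of r^2 + pr + q = 0, here -4 and 1
print(r1, r2)

def y(t, c1=1.0, c2=-1.0):
    """General solution c1*e^{r1 t} + c2*e^{r2 t} for distinct real roots."""
    return c1 * np.exp(r1 * t) + c2 * np.exp(r2 * t)

print(y(np.linspace(0.0, 1.0, 5)))    # sample values of one particular solution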

Suppose the roots of the equation r² + pr + q = 0 are equal, i.e., p² − 4q = 0. Then,
by the quadratic formula, r = r1 = r2 = −p/2. The method only gives one solution
y1 = e^{rt}. Hence, we need to find an additional independent solution. The trick is
to know that in this case y = te^{rt} is another solution. For,

y = te^{rt}
y' = e^{rt} + tre^{rt}
y'' = re^{rt} + re^{rt} + tr²e^{rt} = r²te^{rt} + 2re^{rt},

so

y'' + py' + qy = (r²t + 2r + prt + p + qt)e^{rt}
             = [(r² + pr + q)t + 2r + p]e^{rt} = 0.

(Note that these calculations only work in the case r = −p/2 is a double root of the
quadratic equation r² + pr + q = 0.) The general solution is

y = c1 e^{rt} + c2 te^{rt} = (c1 + c2 t)e^{rt}.

Example 142 Consider

y'' + 4y' + 4y = 0.

The corresponding algebraic equation is

r² + 4r + 4 = (r + 2)² = 0,

so r = −2 is a double root. Hence, y1 = e^{−2t} and y2 = te^{−2t} constitute an
independent pair of solutions, and

y = c1 e^{−2t} + c2 te^{−2t} = (c1 + c2 t)e^{−2t}

is the general solution.

[Graphs of solutions for (c1, c2) = (1, 0) and (0, 1).]

There is one problem with the method in the case of unequal roots r1, r2. Namely,
if p² − 4q < 0, the solutions will be complex numbers.

Example 143 Consider

y'' + y' + y = 0.

The algebraic equation is

r² + r + 1 = 0

and its roots are

r1 = −1/2 + (1/2)√(1 − 4) = −1/2 + (√3/2) i
r2 = −1/2 − (√3/2) i

where i² = −1. This would suggest that the two basic solutions should be

y1 = e^{(−1/2 + √3 i/2)t} and y2 = e^{(−1/2 − √3 i/2)t}.

Unfortunately, you probably have never seen complex exponentials, so we shall make
a detour to review complex numbers and talk about their exponentials.

Exercises for 7.4.

1. Find a general solution for each of the following differential equations.
(a) y'' − 4y = 0.
(b) y'' − 5y' + 6y = 0.
(c) 4y'' − 2y' − 2y = 0.
(d) y'' + 6y' + 9y = 0.
(e) 4y'' − 4y' − y = 0.

2. Solve each of the following initial value problems.
(a) y'' − 2y' − 3y = 0 given y(0) = 0, y'(0) = 2.
(b) y'' − 4y' + 4y = 0 given y(0) = 0, y'(0) = −1.

3. Calculate the Wronskian of the pair {e^{r1 t}, e^{r2 t}}. Show that it is never zero as
long as r1 ≠ r2. Apply a similar analysis to the pair {e^{rt}, te^{rt}}.

4. Suppose r1 ≠ r2 are two real numbers. Investigate circumstances under which
y = C1 e^{r1 t} + C2 e^{r2 t} can be zero. Is it possible that it never vanishes? What
is the maximum number of values of t for which such a function can vanish if
the constants C1 and C2 are not zero?

5. Show that y = C1 e^{rt} + C2 te^{rt} vanishes for at most one value of t if C1 and C2
are not both zero.

6. The differential equation t²y'' + αty' + βy = 0 is called Euler’s Equation.
(a) Show that y = t^r is a solution if and only if r is a root of the quadratic
equation r² + (α − 1)r + β = 0.
(b) Show that if the quadratic equation has two different real roots r1 and r2,
then {t^{r1}, t^{r2}} is a linearly independent pair of solutions defined on the interval
0 < t < ∞. Hence, in this case the general solution of Euler’s equation has
the form y = C1 t^{r1} + C2 t^{r2}.
(c) Find a general solution of t²y'' − 4ty' + 6y = 0.

7.5 Complex Numbers

A complex number is an expression of the form

α = a + bi

where a and b are real numbers. a is called the real part of α, and is often denoted
Re(α). b is called the imaginary part of α, and it is often denoted Im(α). The
set of all complex numbers is usually denoted C. Complex numbers are added,
subtracted, and multiplied by the usual rules of algebra with the additional rule
i² = −1. Here are some examples of such calculations.

a + bi + c + di = a + c + (b + d)i,
(a + bi)(c + di) = ac + adi + bci + bdi² = ac − bd + (ad + bc)i.

Complex numbers may be represented geometrically by points (or vectors) in R²:
the point (a, b) corresponds to the complex number α = a + bi. The horizontal axis
is called the real axis because all numbers of the form a = a + 0i correspond to
points (a, 0) on it. Similarly, the vertical axis is called the imaginary axis because
all numbers of the form ib correspond to points (0, b) on it. This geometric picture
of complex numbers is sometimes called the Argand diagram. The length of the
vector ⟨a, b⟩ is called the modulus or absolute value of the complex number and is
denoted |α|. Thus,

|α| = √(a² + b²).

Similarly, the angle θ that the vector makes with the real axis is called the argument
of α. Of course, the modulus and argument of a complex number are just the polar
coordinates of the corresponding point in R².
If α = a + bi is a complex number, ᾱ = a − bi is called its complex conjugate.
Geometrically, the complex conjugate of α is obtained by reflecting α in the real
axis. Note that the two solutions of the quadratic equation

r² + pr + q = 0

are conjugate complex numbers in the case p² − 4q < 0.

The following rules apply for complex conjugation.

1. The conjugate of α + β is ᾱ + β̄.

2. The conjugate of αβ is ᾱ β̄.

3. |α|² = α ᾱ.

The proofs of these rules are done by calculating both sides and checking that they
give the same result. For example,

|α|² = a² + b²   and
α ᾱ = (a + bi)(a − bi) = a² − abi + abi − b²i² = a² + b².

Complex numbers may also be divided. Thus, if α = a + bi and β = c + di ≠ 0,
then

α/β = α β̄ / (β β̄) = α β̄ / |β|²
    = (a + bi)(c − di) / (c² + d²) = (ac + bd)/(c² + d²) + ((bc − ad)/(c² + d²)) i.

For example,

1/i = (1/i)(−i/−i) = −i/1 = −i.

(That is also clear from i² = −1.)

The next problem is to make an appropriate definition for e^α where α is a complex
number. We will use this to consider possible solutions of a differential equation of
the form u(t) = e^{ρt} where ρ is complex and t is a real variable. Such a function u
has as domain a t-interval on the real line and takes complex values. That is denoted
schematically by u : R → C. Since we may identify C with R² by means of the
Argand diagram, this is not really anything new. We suppose that we have available
all the usual tools we need for such functions, i.e., differentiation, integration, etc.

Here are some properties we expect the complex exponential to have.

(a) e^{α+β} = e^α e^β.

(b) (d/dt) e^{αt} = α e^{αt}.

(c) It should agree with the ordinary exponential for α real.

Let’s apply these rules and see how far they take us in determining a possible
definition. First, let α = a + bi. Then by (a)

e^α = e^{a+bi} = e^a e^{bi}.

Since by (c) we already know what e^a is, we need only define e^{bi}. Suppose, as a
function of the real variable b,

e^{ib} = c(b) + i s(b)


where c(b) and s(b) denote real valued functions. Since e^{i0} = e^0 = 1, we know that
the functions c(b) and s(b) must satisfy

c(0) = 1,   s(0) = 0.

Also, by (b), we must have

(d/db) e^{ib} = i e^{ib}
or c'(b) + i s'(b) = i(c(b) + i s(b)) = −s(b) + i c(b).

Comparing real and imaginary parts yields

c'(b) = −s(b)
s'(b) = c(b).

It is clear how to choose functions with these properties:

c(b) = cos b
s(b) = sin b,

so the proper choice for the definition of e^{ib} is

e^{ib} = cos b + i sin b,

and the proper definition of e^α with α = a + bi is

e^{a+bi} = e^a cos b + i e^a sin b. (96)

It is not hard to check from the definition (96) that properties (a), (b), and (c) are
true.

Also, this exponential has the property that e^α never vanishes for any complex
number α. For,

e^{a+bi} = e^a cos b + i e^a sin b = 0

only if its real and imaginary parts vanish, i.e., e^a cos b = e^a sin b = 0. Since e^a never
vanishes, this can happen only if cos b = sin b = 0. However, the cosine function
and the sine function don’t have any common vanishing points, so there is no such
b.
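Python’s standard cmath module implements exactly this definition, so you can
spot-check (96) numerically (a small sketch of our own):

import cmath, math

a, b = 0.7, 2.3
lhs = cmath.exp(complex(a, b))                    # e^{a+bi}
rhs = complex(math.exp(a) * math.cos(b),          # e^a cos b
              math.exp(a) * math.sin(b))          # e^a sin b
print(abs(lhs - rhs) < 1e-12)                     # True, up to round-off
print(cmath.exp(1j * math.pi))                    # approximately -1, since e^{iπ} = -1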

Finally, here are some useful formulas we shall use repeatedly.

1/e^{ib} = e^{−bi} = cos(−b) + i sin(−b) = cos b − i sin b,

which tells us that the inverse of e^{ib} is the same as its complex conjugate. This
may also be seen from the fact that the product of e^{bi} and its conjugate is |e^{ib}|²,
since

|e^{ib}|² = cos² b + sin² b = 1.

Exercises for 7.5.



1. Calculate each of the following quantities.
(a) (1 + 3i)(2 − 5i).
(b) (1 + 2i)(1 + i) − 3.
(c) (2 + i)/(3 − 2i).
(d) (1 + i)⁴.

2. Show that α = e^{i(2π/3)} satisfies the equation α³ = 1. There are two other
complex numbers satisfying that equation. What are they? Draw the Argand
diagrams for each of these numbers.

3. Note that e^{iπ} = cos π + i sin π = −1. Using this information, find all com-
plex numbers β satisfying β³ = −1. Hint: They are all of the form e^{iθ} for
appropriate values of θ.

4. (a) Show that |e^{ib}| = 1 for any real number b.
(b) Show that α = |α|e^{iθ} for any complex number α. Hint: Consider the polar
coordinates of the point in the complex plane corresponding to α.

5. Using the definition

e^{a+bi} = e^a cos b + i e^a sin b,

prove the formula

(d/dt) e^{(a+bi)t} = (a + bi) e^{(a+bi)t}.

6. Prove the formula

e^{ix+iy} = e^{ix} e^{iy}

for x, y real. This is a special case of the law of exponents for complex expo-
nentials. To prove it you will have to use appropriate trigonometric identities.

7. Show that there is only one possible choice for real valued functions c(b), s(b)
such that c'(b) = −s(b), s'(b) = c(b) and c(0) = 1, s(0) = 0. Hint: If
c1(b), s1(b) is another set of functions satisfying these conditions, then put
u(b) = c(b) − c1(b), v(b) = s(b) − s1(b). Show that u'(b) = −v(b), v'(b) = u(b)
and u(0) = v(0) = 0. Then show that u² + v² is constant by taking its
derivative. What is that constant, and can you conclude that u = v = 0?

7.6 Complex Solutions of a Differential Equation

In our previous discussion of the differential equation

y'' + py' + qy = 0 (97)

we considered solutions y = y(t) which were functions R → R. However, our
analysis of the method of solution suggests that we try to extend this by looking
for solutions which are functions R → C. Any such function may be expressed

y = y(t) = u(t) + iv(t) (98)

where the real and imaginary parts u(t) and v(t) define real valued functions.
Putting (98) in (97) yields

u'' + iv'' + p(u' + iv') + q(u + iv) = 0
or u'' + pu' + qu + i(v'' + pv' + qv) = 0
or u'' + pu' + qu = 0 and v'' + pv' + qv = 0.

Thus, a single complex solution really amounts to a pair of real valued solutions.
From this perspective, the previous theory of purely real valued solutions appears as
the special case where the imaginary part is zero, i.e., y = u(t) + i0 = u(t). Also, all
the rules of algebra and calculus still apply for the more general functions, so we may
proceed just as before except that we have the advantages of using complex algebra.
The only tricky point is to remember that all constants in the extended theory are
potentially complex numbers whereas previously they were real. In particular, if

r² + pr + q = 0

has two conjugate complex roots r1, r2 (which is the case if p² − 4q < 0), then
y1 = e^{r1 t} and y2 = e^{r2 t} form a linearly independent pair of functions, and a general
(complex) solution has the form

y = c1 e^{r1 t} + c2 e^{r2 t},

where c1, c2 are arbitrary complex constants.

Example 144 Consider

y'' + 4y = 0 where y(π/2) = 1, y'(π/2) = 2.

First we solve

r² + 4 = 0

to obtain the two conjugate complex roots r1 = 2i, r2 = −2i. The general solution
is

y = c1 e^{2it} + c2 e^{−2it}.

To match the given initial conditions, note that y' = 2ic1 e^{2it} − 2ic2 e^{−2it}. At t = π/2,
we have

e^{iπ} = e^{−iπ} = −1,

so

y(π/2) = c1 e^{πi} + c2 e^{−πi} = −c1 − c2 = 1
y'(π/2) = 2ic1 e^{πi} − 2ic2 e^{−πi} = −2ic1 + 2ic2 = 2.

Multiply the first equation by 2i and add to obtain

−4ic1 = 2i + 2
or c1 = −(1 + i)/(2i) = −(1 − i)/2.

Here we used

(1 + i)/i = 1/i + 1 = −i + 1.

Similarly, subtraction yields

−4ic2 = 2i − 2
or c2 = −(i − 1)/(2i) = −(1 + i)/2.
2i 2
Hence, the solution matching the given initial conditions is

y = −(1/2)[(1 − i)e^{2it} + (1 + i)e^{−2it}].

If you recall, we solved this same problem with real valued functions earlier, and
this answer does not look at all the same. However, if we expand the exponentials,
we get

(1 − i)e^{2it} = (1 − i)(cos 2t + i sin 2t) = cos 2t + sin 2t + i(sin 2t − cos 2t)
(1 + i)e^{−2it} = (1 + i)(cos 2t − i sin 2t) = cos 2t + sin 2t + i(cos 2t − sin 2t).

Hence, adding these and multiplying by −1/2 yields

y = − cos 2t − sin 2t,

which is indeed the same solution obtained before. Notice that the solution is
entirely real—its imaginary part is zero—although initially it did not look that
way.
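A short numeric check with Python’s cmath (our own sketch) confirms that the
complex expression is the real function −cos 2t − sin 2t:

import cmath, math

for t in (0.0, 0.5, 1.3):
    z = -0.5 * ((1 - 1j) * cmath.exp(2j * t) + (1 + 1j) * cmath.exp(-2j * t))
    print(z, -math.cos(2*t) - math.sin(2*t))   # imaginary part is 0 up to round-off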

The fact that we ended up with a real solution in the above example is not an
accident. That will always be the case when p, q and the initial values y0 and y0'
are real. The reason is that if we write the solution

y = u(t) + iv(t),

then the imaginary part v(t) satisfies the initial conditions v(t0) = 0, v'(t0) = 0, so,
by the uniqueness theorem, it must be identically zero.

It is quite common in applications to use complex exponentials to describe oscilla-
tory phenomena. For example, electrical engineers have for generations preferred
to represent alternating currents, voltages, and impedances by complex quantities.
However, it is sometimes easier to work with real functions. Fortunately, there is a
simple way to convert. If r = a + bi is one of a pair of complex conjugate roots of
the equation

r² + pr + q = 0,

then

e^{rt} = e^{at+ibt} = e^{at} cos bt + i e^{at} sin bt

is a solution, and

y1 = e^{at} cos bt and y2 = e^{at} sin bt

are two real solutions. They form a linearly independent pair since their quotient
is not constant. Hence,

y = c1 e^{at} cos bt + c2 e^{at} sin bt

(where the constants c1 and c2 are real) is a general real solution. Note that in this
analysis, we only used one of the two roots (and the corresponding e^{rt}). However,
the other root is the complex conjugate r̄ = a − bi, so it yields real solutions

e^{at} cos(−bt) = e^{at} cos(bt) and e^{at} sin(−bt) = −e^{at} sin(bt),

which are the same functions except for sign.

Example 144, again The roots are 2i and −2i. So taking a = 0, b = 2 yields the
solutions

e^{0t} cos 2t = cos 2t and e^{0t} sin 2t = sin 2t,

which confirms what we already know.

Exercises for 7.6.

1. Find a general complex solution for each of the following differential equations.
(a) y'' + y' + y = 0.
(b) y'' + 2y' + 2y = 0.
(c) 2y'' − 3y' + 2y = 0.

2. Find a general real solution for each of the differential equations in the previous
problem.

3. Solve the initial value problem

y'' + 9y = 0 given y(0) = 1, y'(0) = −1.

Solve the problem two ways. (a) Find a general complex solution and deter-
mine complex constants to match the initial conditions. (b) Find a general
real solution and determine real constants to match the initial conditions. (c)
Verify that the two different solutions are the same.

4. Solve the initial value problem

y'' − 2y' + 5y = 0 given y(0) = 1, y'(0) = −1.

Solve the problem two ways. (a) Find a general complex solution and deter-
mine complex constants to match the initial conditions. (b) Find a general
real solution and determine real constants to match the initial conditions. (c)
Verify that the two different solutions are the same.

5. Calculate the Wronskian of the pair of functions {e^{at} cos(bt), e^{at} sin(bt)}. Show
that it never vanishes.

7.7 Oscillations

The differential equation governing the motion of a mass at the end of a spring may
be written

m d²y/dt² + b dy/dt + ky = 0, (99)

where m > 0 is the mass, b ≥ 0 represents a coefficient of friction, and k > 0 is the
spring constant. The equation governing the charge Q on a capacitor in a simple
oscillatory circuit is

L d²Q/dt² + R dQ/dt + (1/C) Q = 0, (100)

where L is the inductance, R is the resistance, and C is the capacitance in the
circuit. Both these equations are of the form y'' + py' + qy = 0 with p ≥ 0 and
q > 0, and we now have the tools in hand to completely solve that equation. We
shall do that in the context of (99) for mechanical oscillations, but the theory applies
equally well to electrical oscillations. Hence, we may take p = b/m and q = k/m,
but many of the formulas are a bit neater if we multiply everything through by m
as in (99).

As we saw in the previous sections, the first step is to solve the quadratic equation

mr² + br + k = 0.

There are three cases:

(a) b² − 4km > 0. The roots are real and unequal.

(b) b² − 4km = 0. The roots are real and equal.

(c) b² − 4km < 0. The roots are complex conjugates and unequal.

We treat each in turn.

Case (a). Assume b² > 4km, and put ∆ = √(b² − 4km). Then according to the
quadratic formula, the roots are

r1 = (−b + ∆)/(2m)
r2 = (−b − ∆)/(2m),

so the general solution is

y = c1 e^{(−b+∆)t/(2m)} + c2 e^{(−b−∆)t/(2m)}.

Some typical solutions for different values of the constants are indicated in the
diagram.

[Graphs of solutions for (c1, c2) = (0, 1), (1, 0), and (−1, 1).]

Note that since ∆ = √(b² − 4km) < √(b²) = b, both roots are negative. Hence, both
exponentials approach zero as t → ∞. If the signs of the constants differ, then
the solution will vanish for precisely one value of t, but otherwise it will never
vanish. Thus, the solution dies out without actually oscillating. This is called the
overdamped case.

Case (b). Suppose b² = 4km. Then the quadratic equation has
two equal roots r = r1 = r2 = −b/(2m). The general solution is

y = c1 e^{−bt/(2m)} + c2 t e^{−bt/(2m)} = (c1 + c2 t) e^{−bt/(2m)}.

This solution also vanishes for at most one value of t and approaches zero as t → ∞.
This is called the critically damped case.

[Graphs of e^{−(b/2m)t} and t e^{−(b/2m)t}.]

Case (c). Suppose b² < 4km. The roots are

r = (−b + √(b² − 4km))/(2m) = −b/(2m) + i √(4km − b²)/(2m)

and its complex conjugate r̄. Put

ω1 = √(4km − b²)/(2m) = √( k/m − (b/(2m))² ).

We may obtain two linearly independent real solutions by taking the real and imag-
inary parts of

e^{rt} = e^{(−b/(2m) + iω1)t} = e^{−bt/(2m)} (cos ω1 t + i sin ω1 t).

These are y1 = e^{−bt/(2m)} cos ω1 t and y2 = e^{−bt/(2m)} sin ω1 t, so the general real
solution is

y = c1 e^{−bt/(2m)} cos ω1 t + c2 e^{−bt/(2m)} sin ω1 t
  = e^{−bt/(2m)} (c1 cos ω1 t + c2 sin ω1 t).

The expression in parentheses oscillates with angular frequency ω1 while the expo-
nential approaches zero as t → ∞. This is called the underdamped case.

If b = 0, then the system will oscillate indefinitely with angular frequency ω0 =
√(k/m). ω0/2π is called the resonant frequency of the system, even in the case
b ≠ 0. If k/m is much larger than (b/2m)², then ω0 ≈ ω1 and the system will
oscillate for quite a long time before noticeably dying out. See the Exercises for
more discussion of these points.
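The following small Python sketch (our own; the function name classify is hypo-
thetical) packages the three cases and the frequencies ω0 and ω1:

import math

def classify(m, b, k):
    """Classify m y'' + b y' + k y = 0 and report the relevant angular frequencies."""
    disc = b*b - 4*k*m
    w0 = math.sqrt(k/m)                    # resonant angular frequency
    if disc > 0:
        return "overdamped", w0, None
    if disc == 0:
        return "critically damped", w0, None
    w1 = math.sqrt(k/m - (b/(2*m))**2)     # damped angular frequency
    return "underdamped", w0, w1

print(classify(1.0, 5.0, 4.0))    # overdamped
print(classify(1.0, 4.0, 4.0))    # critically damped
print(classify(1.0, 1.0, 4.0))    # underdamped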

Exercises for 7.7.

1. If m is measured in grams, y in centimeters, and t in seconds, then the spring
constant k should be measured in gm/sec² and the coefficient of friction b in
gm/sec. Assuming we use these units, determine in each of the following cases
if the system oscillates. If so, determine the frequencies ω0/2π and ω1/2π.
(a) m = 1, k = 0.5, b = 0.005.
(b) m = 4, k = 1, b = 4.
2. Show that the three cases for the differential equation governing the charge on
a capacitor break up as follows. (a) Overdamped: R > 2√(L/C). (b) Critically
damped: R = 2√(L/C). (c) Underdamped: R < 2√(L/C).
Show that in the underdamped case, ω0 = 1/√(LC) and ω1 = √( 1/(LC) − (R/(2L))² ).
3. Inductance L is measured in henries, resistance R is measured in ohms, and
capacitance is measured in farads. Find the frequencies ω0/2π and ω1/2π if
L = 0.5 henries, R = 50 ohms, and C = 2 × 10⁻⁴ farads.

What are the full names of the famous physicists after whom these units are
named?
4. Approximately how many oscillatory cycles are required in the previous prob-
lem for the amplitude of the charge to drop to half its initial value?

7.8 The Method of Reduction of Order

In the case of equal roots, the equation with constant coefficients

y'' + py' + qy = 0

has the solution e^{rt} and also a second solution te^{rt} not dependent on the first. This
is a fairly common situation. We have a method to find one solution y1(t) of

y'' + p(t)y' + q(t)y = 0,

but we need to find another solution which is not a constant multiple of y1(t). To
this end, we look for solutions of the form y = v(t)y1(t) with v(t) not constant. We
have

y = v y1
y' = v' y1 + v y1'
y'' = v'' y1 + v' y1' + v' y1' + v y1'' = v'' y1 + 2v' y1' + v y1''.

Thus

y'' + p(t)y' + q(t)y = v'' y1 + 2v' y1' + v y1'' + p v' y1 + p v y1' + q v y1
                    = v'' y1 + (2y1' + p y1)v' + v(y1'' + p y1' + q y1).

Since y1 is a solution, we have y1'' + p y1' + q y1 = 0. Hence, y'' + py' + qy = 0 amounts
to the requirement

v'' + a(t)v' = 0 where a(t) = (2y1'(t) + p(t)y1(t))/y1(t) = 2 y1'(t)/y1(t) + p(t).

This may be treated as a first order equation in v', and the general solution is

v' = C e^{−∫a(t)dt}.

Since we need only one solution, we may take C = 1. Moreover,

∫ a(t)dt = 2 ∫ (y1'(t)/y1(t)) dt + ∫ p(t)dt
         = 2 ln |y1(t)| + ∫ p(t)dt = ln y1(t)² + ∫ p(t)dt.

Hence,

v' = e^{−∫a(t)dt} = e^{−ln y1(t)²} e^{−∫p(t)dt}

or

v' = (1/y1(t)²) e^{−∫p(t)dt}. (101)

Equation (101) may be integrated once more to determine v and ultimately another
solution y2 = v y1.

Example 145 Consider

y'' + py' + qy = 0

where p and q are constant and r = −p/2 is a double root. Take y1 = e^{rt}. Then

a(t) = 2 re^{rt}/e^{rt} + p = 2r + p = 0.

Hence, v' satisfies

v'' = 0,

from which we derive v' = c1, v = c1 t + c2. Again, since we need only one solution,
we may take c1 = 1, c2 = 0 to obtain v = t. This yields finally a second solution
y2 = t y1 = te^{rt}, which is what we decided to try before.

Example 146 Consider Legendre’s equation for α = 1:

y'' − (2t/(1 − t²)) y' + (2/(1 − t²)) y = 0.

You can check quite easily that y1(t) = t defines a solution. We look for a linearly
independent solution of the form y = v(t)t. In this case p(t) = −2t/(1 − t²), so

∫ p(t)dt = ln(1 − t²)

and by (101)

v' = (1/t²) e^{−ln(1−t²)} = 1/(t²(1 − t²)).

The right hand side may be integrated by partial fractions to obtain

v = −1/t + (1/2) ln((1 + t)/(1 − t)),

so we end up ultimately with

y2 = v t = −1 + (t/2) ln((1 + t)/(1 − t)).

You would not be likely to come up with that by trial and error!
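SymPy can confirm this somewhat surprising answer (our own check, not part of
the text):

import sympy as sp

t = sp.symbols('t')
y2 = -1 + (t/2) * sp.log((1 + t)/(1 - t))

lhs = sp.diff(y2, t, 2) - (2*t/(1 - t**2)) * sp.diff(y2, t) + (2/(1 - t**2)) * y2
print(sp.simplify(lhs))   # prints 0, so y2 solves Legendre's equation with α = 1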

Exercises for 7.8.



1. One solution of (1 + t²)y'' − 2ty' + 2y = 0 is y1(t) = t. Find a second solution
by the method of reduction of order.

2. One solution of y'' − 2ty' + 2y = 0 is y1 = t. Use the method of reduction of
order to find another solution. Warning: You won’t be able to determine v
explicitly in this case, but push the integration as far as you can.

3. (a) Find one solution of Euler’s equation t²y'' − 4ty' + 6y = 0.
(b) Find a second solution by the method of reduction of order.
(c) Solve t²y'' − 4ty' + 6y = 0 given y(1) = 0, y'(1) = 1.

7.9 The Inhomogeneous Equation. Variation of Parameters

We now investigate methods for finding a particular solution of the inhomogeneous
equation

y'' + p(t)y' + q(t)y = f(t).

The first method is called variation of parameters. Let {y1, y2} be a linearly in-
dependent pair of solutions of the homogeneous equation. We look for particular
solutions of the inhomogeneous equation of the form

y = u1 y1 + u2 y2 (102)

where u1 and u2 are functions to be determined. (The idea is that c1 y1 + c2 y2
would be a general solution of the homogeneous equation, and maybe we can get
a solution of the inhomogeneous equation by replacing the constants by functions.)
We have

y' = u1' y1 + u1 y1' + u2' y2 + u2 y2'.

In order to simplify the calculation, look for u1, u2 satisfying

u1' y1 + u2' y2 = 0 (103)

so

y' = u1 y1' + u2 y2'. (104)

Then

y'' = u1' y1' + u1 y1'' + u2' y2' + u2 y2''. (105)

Hence, using (102), (104), and (105), we have

y'' + py' + qy = u1' y1' + u1 y1'' + u2' y2' + u2 y2'' + p(u1 y1' + u2 y2') + q(u1 y1 + u2 y2)
             = u1' y1' + u2' y2' + u1(y1'' + p y1' + q y1) + u2(y2'' + p y2' + q y2)
             = u1' y1' + u2' y2'
= u01 y10 + u02 y20

because y1'' + p y1' + q y1 = y2'' + p y2' + q y2 = 0. (Both y1 and y2 are solutions of the
homogeneous equation.) It follows that y'' + p(t)y' + q(t)y = f(t) if and only if

u1' y1' + u2' y2' = f(t).

Putting this together with (103) yields the pair of equations

u1' y1 + u2' y2 = 0
u1' y1' + u2' y2' = f(t). (106)

These can be solved for u1' and u2' by the usual methods. The solutions are

u1' = −y2 f / (y1 y2' − y1' y2),   u2' = y1 f / (y1 y2' − y1' y2).

The denominator in each case is of course just the Wronskian W(t) of the pair
{y1, y2}, so we know it never vanishes. We may now integrate to obtain

u1 = −∫ (y2(t)f(t)/W(t)) dt,   u2 = ∫ (y1(t)f(t)/W(t)) dt.

Finally, we obtain the particular solution

yp(t) = y1(t)u1(t) + y2(t)u2(t)
     = −y1(t) ∫ (y2(t)f(t)/W(t)) dt + y2(t) ∫ (y1(t)f(t)/W(t)) dt. (107)

Example 147 Consider the equation

y'' − (4/t) y' + (6/t²) y = t for t > 0.

The homogeneous equation is a special case of Euler’s equation as discussed in the
exercises. It has the solutions y1 = t² and y2 = t³, and it is clear that these form a
linearly independent pair. Let’s apply the above method to determine a particular
solution.

W(t) = det [ t²  t³ ; 2t  3t² ] = 3t⁴ − 2t⁴ = t⁴.

Hence, taking f(t) = t, the variation of parameters formula gives

yp = −t² ∫ (t³ · t / t⁴) dt + t³ ∫ (t² · t / t⁴) dt
   = −t² · t + t³ ln t = t³(ln t − 1).

Thus the general solution is

y = yp + c1 y1 + c2 y2 = t³(ln t − 1) + c1 t² + c2 t³.
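Again SymPy provides an independent check of this computation (our own sketch):

import sympy as sp

t = sp.symbols('t', positive=True)
yp = t**3 * (sp.log(t) - 1)

lhs = sp.diff(yp, t, 2) - (4/t)*sp.diff(yp, t) + (6/t**2)*yp
print(sp.simplify(lhs))   # prints t, as required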

The variation of parameters formula may also be expressed using definite integrals
with a dummy variable:

yp = −y1(t) ∫_{t0}^{t} (y2(s)f(s)/W(s)) ds + y2(t) ∫_{t0}^{t} (y1(s)f(s)/W(s)) ds
   = ∫_{t0}^{t} ((y1(s)y2(t) − y1(t)y2(s))/W(s)) f(s) ds. (108)

Exercises for 7.9.

1. Find a general solution of each of the following inhomogeneous equations by
variation of parameters.
(a) y'' − 4y = e^{−t}.
(b) y'' + 4y = cos t.
(c) y'' − 4y' + 4y = t.
(d) y'' − 4y' + 4y = e^{−2t}.
(e) y'' − 2y' + 2y = e^{t} cos t.

2. Solve the initial value problem y'' + 5y' + 6y = e^{4t} given y(0) = 0, y'(0) = 1.

3. Find a general solution of t²y'' − 2y = t. Hint: The homogeneous equation is
a special case of Euler’s Equation.

7.10 Finding a Particular Solution by Guessing

The variation of parameters method is quite general, but it often involves a lot of
unnecessary calculation. For example, the equation

y'' + 5y' + 6y = e^{4t}

has the linearly independent pair of solutions y1 = e^{−2t}, y2 = e^{−3t} for the associated
homogeneous equation. Because of all
the exponentials appearing in the variation of parameters formula, there is quite a
lot of cancellation, but it is apparent only after considerable calculation. You should
try it out to convince yourself of that. On the other hand, a bit of experimentation
suggests that something of the form y = Xe^{4t} might work, and if we put this in the
equation, we get

16Xe^{4t} + 5(4Xe^{4t}) + 6Xe^{4t} = e^{4t}
or 42Xe^{4t} = e^{4t}

from which we conclude that X = 1/42. Thus, yp = (1/42)e^{4t} is a particular
solution, and

y = (1/42)e^{4t} + c1 e^{−2t} + c2 e^{−3t}

is the general solution.

Of course, guessing won’t work in complicated cases—see the previous section for
an example where guessing would be difficult—but fortunately it often does work in
the cases important in applications. For this reason, it is given a name: the method
of undetermined coefficients. The idea is that we know by experience that guesses of
a certain form will work for certain equations, so we try something of that form and
all that is necessary is to determine some coefficient(s), as in the example above. In
this section, we shall consider appropriate guesses for the equation

y'' + py' + qy = Ae^{αt}, (109)

where p, q, and A are real constants, and α is a (possibly) complex constant. This
covers applications to oscillatory phenomena. There is a lot more known about
appropriate guesses in other cases, and if you ever need to use it, you should refer
to a good book on differential equations. (See Section 2.5 of Braun for a start.)

The appropriate guess for a particular solution of (109) is y = Xe^{αt} where X is a
(complex) constant to be determined. We have

y = Xe^{αt}
y' = Xαe^{αt}
y'' = Xα²e^{αt}

so we need to solve

y'' + py' + qy = (α² + pα + q)Xe^{αt} = Ae^{αt}

for X. It is obvious how to do that as long as the expression in parentheses does
not vanish. That will be the case when α is not a root of r² + pr + q = 0, i.e., the
exponential on the right is not a solution of the homogeneous equation.

Hence, if α is not a root of the equation r² + pr + q = 0, then X = A/(α² + pα + q),
and a particular solution of the inhomogeneous equation (109) is given by

yp = (A/(α² + pα + q)) e^{αt}. (110)

We still have to deal with the case that α is a root of the equation r² + pr + q =
0. Exactly what to do in this case depends on the nature of the roots of that
equation. Assume first that the roots r1, r2 are unequal, and α = r1. In that case,
the appropriate guess is y = Xte^{αt}. We have

y = Xte^{αt}
y' = Xe^{αt} + Xtαe^{αt}
y'' = 2Xαe^{αt} + Xtα²e^{αt}

so we need to solve

y'' + py' + qy = X(2α + p)e^{αt} + Xt(α² + pα + q)e^{αt} = Ae^{αt}.

Since α is a root, the second term in parentheses vanishes, so we need to solve

X(2α + p) = A.

Since α, by assumption, is not a double root, α ≠ −p/2, so the coefficient does not
vanish, and we have X = A/(2α + p). Hence, the particular solution is

yp = (At/(2α + p)) e^{αt}. (111)

The final case to consider is that in which α = −p/2 is a double root of r² + pr + q =
0. Then the appropriate guess is y = Xt²e^{αt}. We shall omit the details here, but
you should work them out to see that you understand the process. The answer is
X = A/2, and

yp = (At²/2) e^{αt}

is a particular solution.

Do you see a rule for what the denominators should be?
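Here is a small Python sketch (our own; the helper name particular_solution is
hypothetical) implementing these three formulas for a possibly complex α:

def particular_solution(p, q, A, alpha):
    """Return (X, k) so that yp = X * t**k * e^{alpha t} solves
    y'' + p y' + q y = A e^{alpha t}."""
    chi = alpha**2 + p*alpha + q        # characteristic polynomial at alpha
    if chi != 0:
        return A / chi, 0               # alpha is not a root: yp = X e^{alpha t}
    if 2*alpha + p != 0:
        return A / (2*alpha + p), 1     # simple root: yp = X t e^{alpha t}
    return A / 2, 2                     # double root: yp = (A/2) t^2 e^{alpha t}

print(particular_solution(5, 6, 1, 4))    # (0.0238..., 0): X = 1/42 as in the example
print(particular_solution(0, 4, 1, 2j))   # resonance: alpha = 2i is a root, k = 1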

The above analysis has wide ramifications because of the fact that α can be complex.
In particular, the equation

y'' + py' + qy = Ae^{iωt} = A cos ωt + iA sin ωt, (112)

where ω is a positive real constant, may be thought of as a pair of real equations

u'' + pu' + qu = A cos ωt (113)
v'' + pv' + qv = A sin ωt (114)

where u and v are respectively the real and imaginary parts of y. That suggests the
following strategy for solving an equation of the form (113). Solve (112) instead,
and then take the real part. We shall now do that under the various assumptions
on whether or not α = iω is a root of r² + pr + q = 0.

Assume first that iω is not a root of r^2 + pr + q = 0. That will always be the case, for example, if p ≠ 0, since the roots are of the form −p/2 ± √(p^2 − 4q)/2. Then the particular solution of (112) is

    y_p = (A/(−ω^2 + ipω + q)) e^{iωt}.

Let Z = q − ω^2 + ipω. Then we can write the denominator in the form

    Z = |Z|e^{iδ}

where

    |Z| = √((q − ω^2)^2 + p^2 ω^2)

and δ is the argument of Z, so

    tan δ = pω/(q − ω^2).

(If q = ω^2, interpret the last equation as asserting that δ = ±π/2 with the sign depending on the sign of p.) Then the particular solution may be rewritten
    y_p = (A/(|Z|e^{iδ})) e^{iωt} = (A/|Z|) e^{i(ωt−δ)}.

If we take the real part of this, we obtain the following particular solution of the equation u'' + pu' + qu = A cos ωt:

    u_p = (A/|Z|) cos(ωt − δ).    (115)

[Figure: the complex number Z = (q − ω^2) + ipω in the complex plane, with modulus |Z| and argument δ.]
We still have to worry about the case in which iω is a root of r^2 + pr + q = 0. As mentioned above, we must have p = 0 and ω^2 = q. The roots ±iω are unequal, so the particular solution of (112) in this case is

    y_p = (At/(p + 2iω)) e^{iωt}
        = (At/(2ω)) (−i)(cos ωt + i sin ωt)
        = (At/(2ω)) (sin ωt − i cos ωt).

Taking the real part of this yields the following particular solution of u'' + pu' + qu = A cos ωt in the case ω^2 = q:

    u_p = (At/(2ω)) sin ωt.    (116)

Exercises for 7.10.

1. Find a particular solution of y'' + 2y' − 3y = t by trying a solution of the form y = At + B.

2. Find a particular solution of y'' + 2y' − 3y = e^t by trying a solution of the form y = (At + B)e^t. Would a solution of the form y = Ae^t work?

3. Find a particular complex solution of each of the following.
(a) y'' + y' + y = e^{it}.
(b) y'' + 4y' + 5y = 6e^{2it}.
(c) y'' + 3y' + 2y = 4e^{3it}.
(d) y'' − 2y' + 2y = 2e^{(2+i)t}.

4. Find the general solution of y'' + 9y = e^{3it}.

5. Find a particular real solution of each of the following. Hint. Consider the appropriate complex equation.
(a) y'' + 4y = 3 cos 3t.
(b) y'' + 4y' + 5y = 6 cos 2t.
(c) y'' + 3y' + 2y = 4 cos 3t.
(d) y'' − 2y' + 2y = 2e^{2t} cos t.

6. Find a particular solution of y'' + 4y = 3 cos 3t by trying something of the form y = A cos 3t + B sin 3t.

7. Find the general solution of each of the following.
(a) y'' + 9y = 3 cos 3t.
(b) y'' + 4y' + 5y = 6 cos 2t.
(c) y'' + y' = sin 2t.

7.11 Forced Oscillations

Consider the equation for a mass at the end of a spring being driven by a periodic force

    m d^2y/dt^2 + b dy/dt + ky = F_0 cos ωt    (117)

where m, b, k, and F_0 are positive constants.

The electrical analogue of this is the equation

    L dI/dt + RI + Q/C = E_0 cos ωt    (118)

holding for the circuit indicated in the diagram, where Q is the charge on the capacitor C, I = dQ/dt is the current, R is the resistance, and L is the inductance. The term on the right represents a periodic driving voltage.

[Figures: a mass on a spring driven by a motor, and a series circuit with resistance R, inductance L, capacitance C, and driving voltage V = E_0 cos ωt.]
As previously, we shall analyze the mechanical case, but the conclusions are also
valid for the electric circuit. The fact that the same differential equation may govern
different physical phenomena was the basis of the idea of an analogue computer.

Dividing through by m, we may write equation (117) as

    d^2y/dt^2 + (b/m) dy/dt + (k/m) y = (F_0/m) cos ωt.    (119)

Its general solution will have the form

    y = y_p(t) + h(t)

where y_p(t) is one of the particular solutions considered in the previous section and h(t) is a general solution of the homogeneous equation considered in Section 7. If you refer back to Section 7, you will recall that provided b > 0, h(t) → 0 as t → ∞. Hence, if we wait long enough, the particular solution will predominate. For this reason, the solution of the homogeneous equation is called the transient part of the solution, and the particular solution is called the steady state part of the solution. In most cases, all you can observe is the steady state solution. Since we have assumed that b > 0 (i.e., p ≠ 0), we are in the case where iω is not a root of r^2 + pr + q = 0. Hence, the desired steady state solution is, from (115) in the previous section,

    y = (A/|Z|) cos(ωt − δ)

where

    A = F_0/m,
    |Z| = √((q − ω^2)^2 + p^2 ω^2) = √((k/m − ω^2)^2 + (b^2/m^2) ω^2)
        = (1/m)√((k − mω^2)^2 + b^2 ω^2),
    tan δ = pω/(q − ω^2) = ((b/m)ω)/(k/m − ω^2) = bω/(k − mω^2).

Hence, the steady state solution is

    y = (F_0/√((k − mω^2)^2 + (bω)^2)) cos(ωt − δ),

where tan δ = bω/(k − mω^2).
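For concrete values of the constants, the amplitude and the phase lag are easy to compute. Here is a minimal sketch in Python; the function name and the sample numbers are ours, not from the text.

```python
# Steady state response of m y'' + b y' + k y = F0 cos(omega t).
import math

def steady_state(m, b, k, F0, omega):
    """Return the amplitude and the phase lag delta of the steady state."""
    amplitude = F0 / math.sqrt((k - m * omega**2)**2 + (b * omega)**2)
    # atan2 places delta in the correct quadrant; tan delta alone does not
    # distinguish delta from delta - pi.
    delta = math.atan2(b * omega, k - m * omega**2)
    return amplitude, delta

print(steady_state(m=1.0, b=0.2, k=4.0, F0=1.0, omega=1.5))
```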

[Figure: the driving force and the steady state response plotted together; the response lags by δ/ω.]

Note that the response to the driving force lags by a phase angle δ. This lag is a
fundamental aspect of forced oscillators, and it is difficult to understand without a
thorough understanding of the solution of the differential equation.

The amplitude of the steady state solution

    A(ω) = F_0/√((k − mω^2)^2 + (bω)^2)

may be plotted as a function of the angular frequency ω.

[Figure: the graph of A(ω) against ω, with its peak at ω = ω_2.]

Its maximum occurs for the value

    ω_2^2 = k/m − b^2/(2m^2).
(This is derived by the usual method from calculus: set the derivative with respect
to ω equal to zero and calculate. See the Exercises.) If you recall, the quantity

ω_0 = √(k/m) is called the resonant frequency of the system. Using that, the above expression may be rewritten

    ω_2^2 = ω_0^2 − b^2/(2m^2).
In Section 7, we introduced one other quantity,

    ω_1^2 = ω_0^2 − b^2/(4m^2),

where ω_1 is the angular frequency of the damped, unforced oscillator. We have the inequalities

    ω_0 > ω_1 > ω_2.

If b is small, all three of these will be quite close together.

For many purposes, it is more appropriate to consider the square of the velocity v^2 = (dy/dt)^2 instead of the displacement y. For example, the kinetic energy of the system is (1/2)mv^2. (Similarly, in the electrical case the quantity RI^2 gives a measure of the power requirements of the system.) If as above we ignore the transient solution, we may take

    dy/dt = −(F_0 ω/√((k − mω^2)^2 + (bω)^2)) sin(ωt − δ),

and the square of its amplitude is given by

    B(ω) = F_0^2 ω^2/((k − mω^2)^2 + (bω)^2).

If we divide through by ω^2, this may be rewritten

    B(ω) = F_0^2/((k/ω − mω)^2 + b^2).

It is easy to see where B(ω) attains its maximum, even without calculus. The maximum occurs when the denominator is at a minimum, but since the denominator is a sum of squares, its minimum occurs when its first term vanishes, i.e., when

    k/ω − mω = 0,  i.e.,  ω^2 = k/m.

Thus, the maximum of B(ω) occurs for ω = ω_0 = √(k/m). This is one justification for calling ω_0 the resonant frequency.

Exercises for 7.11.



1. A spring is subject to the periodic force F (t) = 2 cos(10πt). For each of


the following values of m, b and k, determine the amplitude and phase of the
response.
(a) m = 1, k = 0.5, b = 0.005.
(b) m = 4, k = 1, b = 4.

2. In each of the cases in the previous problem, assume the spring is displaced
one unit of distance at t = 0 and is released with no initial velocity. Estimate
in each case the number of cycles of the forced oscillation required before
the transient response drops to one percent of the response due to the forced
oscillation.

3. Show that the maximum of

    A(ω) = F_0/√((k − mω^2)^2 + (bω)^2)

occurs for the value ω_2^2 = k/m − b^2/(2m^2).
4. Show that the phase angle δ = π/2 for the resonant frequency ω = ω_0 = √(k/m). Thus, at resonance, the force and the response are one quarter cycle out of phase.
5. Show that the current I = dQ/dt in the solution of equation (118) is of the form

    I = (E_0/|Z|) cos(ωt − δ)

where Z = R + i(ωL − 1/(ωC)) and δ is the argument of Z. Hint: Replace E_0 cos(ωt) by E_0 e^{iωt} in equation (118), differentiate once to obtain a second order equation for I, and then apply the results of Section 10.

In the theory of alternating current circuits, the quantity Z is called the (complex) impedance. By analogy with Ohm's Law, the complex quantity E_0/Z describes the current in the sense that its absolute value is its magnitude and its argument describes its phase.
Chapter 8

Series

8.1 Series Solutions of a Differential Equation

There are several second order linear differential equations which arise in mathematical physics which cannot be solved by the methods introduced in the previous chapter. Here are some examples:

    y'' − (2t/(1 − t^2)) y' + (α(α + 1)/(1 − t^2)) y = 0    Legendre's equation
    t^2 y'' + ty' + (t^2 − ν^2) y = 0    Bessel's equation
    ty'' + (1 − t)y' + λy = 0    Laguerre's equation

A related equation we studied (in the Exercises to Chapter VII, Sections 4 and 8) is

    y'' + (α/t) y' + (β/t^2) y = 0    Euler's equation.
t t
These equations and others like them arise as steps in solving important partial
differential equations such as Laplace’s equation, the wave equation, the heat equa-
tion, or Schroedinger’s equation. Their solutions are given appropriate names: Leg-
endre functions, Bessel Functions, etc. Sometimes these special functions can be
expressed in terms of other known functions, but usually they are entirely new func-
tions. One way to express these solutions is as infinite series, and we investigate
that method here.

Example Consider

    (1 − t^2)y'' − 2ty' + 6y = 0    (120)

which is Legendre's equation for α = 2. The simplest possible functions are the polynomial functions, those which can be expressed as combinations of powers of t:

    y = a_0 + a_1 t + a_2 t^2 + a_3 t^3 + a_4 t^4 + ... + a_d t^d.


The highest power of t occurring in such an expression is called its degree. We shall try to find a solution of (120) which is a polynomial function without committing ourselves about its degree in advance. We have

    y = a_0 + a_1 t + a_2 t^2 + a_3 t^3 + a_4 t^4 + ...
    y' = a_1 + 2a_2 t + 3a_3 t^2 + 4a_4 t^3 + ...
    y'' = 2a_2 + 6a_3 t + 12a_4 t^2 + ...
Note that these equations tell us that y(0) = a_0 and y'(0) = a_1, so the first two coefficients just give us the initial conditions at t = 0.

We have

    1·y'' = 2a_2 + 6a_3 t + 12a_4 t^2 + 20a_5 t^3 + ...
    −t^2 y'' =          − 2a_2 t^2 − 6a_3 t^3 − ...
    −2ty' =   − 2a_1 t − 4a_2 t^2 − 6a_3 t^3 − ...
    6y = 6a_0 + 6a_1 t + 6a_2 t^2 + 6a_3 t^3 + ...

The sum on the left is (1 − t^2)y'' − 2ty' + 6y, which is assumed to be zero. If we add up the coefficients of corresponding powers of t on the right, we obtain

    0 = (2a_2 + 6a_0) + (6a_3 + 4a_1)t + 12a_4 t^2 + (20a_5 − 6a_3)t^3 + ...

Setting the coefficients of each power of t equal to zero, we obtain

    2a_2 + 6a_0 = 0,  6a_3 + 4a_1 = 0,  12a_4 = 0,  20a_5 − 6a_3 = 0,  ...
Thus,

    a_2 = −3a_0
    a_3 = −(2/3)a_1
    a_4 = 0
    a_5 = (3/10)a_3 = −(1/5)a_1
    ...
The continuing pattern is more or less clear. Each succeeding coefficient a_n is expressible as a multiple of the coefficient a_{n−2} two steps lower. Since a_4 = 0, it follows that all coefficients a_n with n even must vanish for n ≥ 4. On the other hand, the best that can be said about the coefficients a_n with n odd is that ultimately each of them may be expressed in terms of a_1. It follows that if we separate out even and odd powers of t, the solution looks something like

    y = a_0 − 3a_0 t^2 + a_1 t − (2/3)a_1 t^3 − (1/5)a_1 t^5 + ...
      = a_0(1 − 3t^2) + a_1(t − (2/3)t^3 − (1/5)t^5 + ...).

Consider first the solution satisfying the initial conditions a_0 = y(0) = 1, a_1 = y'(0) = 0. The corresponding solution is

    y_1 = 1 − 3t^2

which is a polynomial of degree 2, just as expected. (However, notice that we had no way to predict in advance that it would be of degree 2.) The meaning of the 'solution' obtained by putting a_0 = 0 and a_1 = 1 is not so clear. It appears to be a 'polynomial'

    y_2 = t − (2/3)t^3 − (1/5)t^5 + ...

which goes on forever. In fact, it is what is called a power series. This is just one
example of many where one must represent a function in terms of an infinite series.
We shall concern ourselves in the rest of this chapter with the theory of such series
with the ultimate aim of understanding how they may be used to solve differential
equations. For many of you this may be review, but you should pay close attention
since some points you are not familiar with may come up.
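Matching the coefficient of t^n in (120) in general (a computation not carried out above, but easily checked against the values already found) gives the recurrence a_{n+2} = (n + 3)(n − 2) a_n/((n + 2)(n + 1)). The following sketch, ours in Python, uses it to generate as many coefficients as desired:

```python
# Coefficients of a series solution of (1 - t^2)y'' - 2t y' + 6y = 0,
# from the recurrence a_{n+2} = (n+3)(n-2) a_n / ((n+2)(n+1)).
from fractions import Fraction

def coefficients(a0, a1, count):
    """Return a_0, ..., a_{count-1} for initial data y(0) = a0, y'(0) = a1."""
    a = [Fraction(a0), Fraction(a1)]
    for n in range(count - 2):
        a.append(Fraction((n + 3) * (n - 2), (n + 2) * (n + 1)) * a[n])
    return a

print(coefficients(1, 0, 8))  # [1, 0, -3, 0, 0, 0, 0, 0]: y1 = 1 - 3t^2
print(coefficients(0, 1, 8))  # [0, 1, 0, -2/3, 0, -1/5, 0, -4/35]: y2
```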

Exercises for 8.1.

1. Find a polynomial solution of y'' − (2t/(1 − t^2)) y' + (2/(1 − t^2)) y = 0 as follows. First multiply through by 1 − t^2 to avoid denominators. Then apply the method discussed in this section. Which of the two sets of initial conditions, a_0 = 1, a_1 = 0 or a_0 = 0, a_1 = 1, results in a polynomial solution?

2. Find a polynomial solution of (1 − t^2)y'' − 2ty' + 12y = 0.

3. Apply the method of this section to the first order differential equation y' = y. Show that with the initial condition y(0) = a_0 = 1 you obtain the 'polynomial' solution

    y(t) = 1 + t + (1/2)t^2 + (1/3!)t^3 + (1/4!)t^4 + ...

This is actually an infinite series which, we shall see in the ensuing sections, represents the function e^t.

8.2 Definitions and Examples

A series is a sequence of terms u_n connected by '+' signs

    u_1 + u_2 + u_3 + ... + u_n + ...

The idea is that if there are infinitely many terms, the summation process is sup-
posed to go on forever. Of course, in reality that is impossible, but perhaps after we
have added up sufficiently many terms, the contributions of additional terms will
be so negligible that they really won’t matter. We shall make this idea a bit more
precise below.

The terms of the series can be positive, negative or zero. They can depend on variables or they can be constant. The summation can start at any index. In the general theory we assume the first term is u_1, but the summation could start just as well with u_0, u_6, u_{−3}, or any convenient index.

Here are some examples of important series:

    1 + t + t^2 + ... + t^n + ...    geometric series
    1 + 1/2 + 1/3 + ... + 1/n + ...    harmonic series
    t − t^3/3! + t^5/5! − ... + (−1)^n t^{2n+1}/(2n+1)! + ...    Taylor series for sin t

In the first series, the general term is t^n and the numbering starts with n = 0. In the second series, the general term is 1/n and the numbering starts with n = 1.

Series are often represented more compactly using 'Σ'-notation:

    ∑_{n=1}^∞ u_n.

For example,

    ∑_{n=0}^∞ t^n is the geometric series,
    ∑_{n=1}^∞ 1/n is the harmonic series.

The notion of the sum of an infinite series is made precise as follows. Let

    s_1 = u_1
    s_2 = u_1 + u_2
    s_3 = u_1 + u_2 + u_3
    ...
    s_n = u_1 + u_2 + ... + u_n = ∑_{j=1}^n u_j
    ...

s_n is called the nth partial sum. It is obtained by adding the nth term u_n to the previous partial sum, i.e.,

    s_n = s_{n−1} + u_n.

Consider the behavior of the sequence of partial sums s_1, s_2, ..., s_n, ... as n → ∞. If this sequence approaches a finite limit

    lim_{n→∞} s_n = s

then we say the series converges and that s is its sum. Otherwise, we say that the series diverges.

Example Consider the geometric series

    1 + t + t^2 + ... + t^n + ... = ∑_{n=0}^∞ t^n.

The partial sums are

    s_0 = 1
    s_1 = 1 + t
    s_2 = 1 + t + t^2
    ...
    s_n = 1 + t + t^2 + ... + t^n = (1 − t^{n+1})/(1 − t)
    ...
The formula for s_n should be familiar to you from high school algebra; it is the sum of the first n + 1 terms of a geometric progression with starting term 1 and ratio t. It applies as long as t ≠ 1. Let n → ∞. There are several cases to consider.
Case (a). |t| < 1. Then lim_{n→∞} t^n = 0, so lim_{n→∞} s_n = 1/(1 − t). Hence, in this case the series converges and the sum is 1/(1 − t).

Case (b). t = 1. Then s_n = n + 1 → ∞, so the series diverges. We might also say in a case like this that the sum is ∞.

Case (c). t = −1. Then the series is

    1 − 1 + 1 − 1 + 1 − ...

and the partial sums s_n alternate between 1 (n even) and 0 (n odd). As a result they don't approach any definite limit as n → ∞.

Note that in cases (b) and (c) it is not true that 1/(1−t) is the sum of the series, since
that sum is not well defined. However, in case (c) (t = −1), we have 1/(1 − t) = 1/2
which is the average of the two possible partial sums 1 and 0.

Case (d). Assume t > 1. In this case, it might be more appropriate to write

    s_n = (t^{n+1} − 1)/(t − 1).    (121)

As n → ∞, t^{n+1} → ∞, so the series diverges. However, since t^n increases without limit, it would also be appropriate in this case to say the sum is ∞.

Case (e). Assume t < −1. Then in (121), the term tn+1 oscillates wildly between
positive and negative values, and the same is true of sn . Hence, sn approaches no
definite limit as n → ∞, and the series diverges.
To summarize, the geometric series ∑_{n=0}^∞ t^n converges to the sum 1/(1 − t) for −1 < t < 1 and diverges otherwise.
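These cases are easy to watch numerically. Here is a small sketch of the partial sums for several values of t; the code is ours, in Python, not from the text.

```python
# Partial sums s_n = 1 + t + ... + t^n of the geometric series.
def partial_sum(t, n):
    return sum(t**k for k in range(n + 1))

for t in (0.5, -0.5, 1.0, -1.0, 2.0):
    print(t, [round(partial_sum(t, n), 5) for n in (5, 10, 20)])
# For |t| < 1 the sums settle down to 1/(1 - t); for t = 1 they grow,
# for t = -1 they oscillate between 1 and 0, and for t = 2 they blow up.
```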

Properties of Convergence We list some important properties of convergent and divergent series.

1. Importance of the tail. Whether or not a series converges does not depend on any finite number of terms. Thus, one should not be misled by what happens to the first few partial sums. (In this context 'few' might mean the first 10,000,000 terms!) Convergence of a series depends on the limiting behavior of the sequence of partial sums, and no finite number of terms in that sequence will tell you for sure what is happening in the limit.

Of course, the actual sum of a convergent series depends on all the terms.

2. The general term u_n of a convergent series always approaches zero. For, as noted earlier,

    s_n = s_{n−1} + u_n.

If s_n approaches a finite limit s, then it is not hard to see that s_{n−1} approaches this same limit s. Hence,

    lim_{n→∞} u_n = lim_{n→∞} s_n − lim_{n→∞} s_{n−1} = s − s = 0.

This rule provides a quick check for divergence in special cases. For example, the series

    1/2 + 2/3 + ... + n/(n + 1) + ...

cannot possibly converge because

    lim_{n→∞} n/(n + 1) = 1 ≠ 0.

(That limit can be calculated using L'Hôpital's Rule or by rewriting n/(n + 1) = 1/(1 + 1/n).)

3. The harmonic series ∑_{n=1}^∞ 1/n diverges. This is important because it shows us
that the converse of the previous assertion is not true. That is, it is possible that

u_n → 0 as n → ∞ and the series still diverges. Many people find this counterintuitive. Apparently, it is hard to believe that the sum of infinitely many things can be a finite number, and having convinced oneself that it can happen, one seeks an explanation. The explanation that comes to mind is that it can happen because the terms are getting smaller and smaller. However, as the example of the harmonic series shows, that is not enough. There are of course many other examples. Here is a proof that the harmonic series diverges. We use the principle that if s_n approaches a finite limit s, then any subsequence obtained by considering infinitely many selected values of n would also approach this same finite limit. Consider, in particular,

    s_2 = 1 + 1/2 = 3/2
    s_4 = s_2 + (1/3 + 1/4) > 3/2 + 2(1/4) = 4/2
    s_8 = s_4 + (1/5 + 1/6 + 1/7 + 1/8) > 4/2 + 4(1/8) = 5/2
    s_16 = s_8 + (1/9 + ... + 1/16) > 5/2 + 8(1/16) = 6/2    [8 terms in the parentheses]
    ...

Continuing in this way, we can show in general that

    s_{2^k} > (k + 2)/2.

Since the right hand side is unbounded as k → ∞, the sequence s_{2^k} cannot approach a finite limit. It follows that s_n cannot approach a finite limit. The harmonic series
also provides a warning to those who would compute without thinking. If you
naively add up the terms of the harmonic series using a computer, the series will
appear to approach a finite sum. The reason is that after the partial sums attain
a large enough value, each succeeding term will get lost in round-off error and not
contribute to the accumulated sum. The exact ‘sum’ you will get will depend on
the word size in the computer, so it will be different for different computers. We
shall see other examples of this phenomenon below. The moral is to be very careful
about how you add up a large number of small terms in a computer if you want an
accurate answer.

4. Algebraic manipulation of series. One may perform many of the operations with series that one is familiar with for finite sums, but some things don't work.

For example, the sum or difference of two convergent series is convergent. Similarly,
any multiple of a convergent series is convergent.
Example 148 Each of the series ∑_{n=0}^∞ 1/2^n and ∑_{n=0}^∞ 1/3^n is a geometric series with |t| < 1 (t = 1/2 for the first and t = 1/3 for the second). Hence, both series

converge and so does the series

    ∑_{n=0}^∞ (1/2^n + 1/3^n).

In fact, its sum is obtained by adding the sums of the two constituent series:

    1/(1 − 1/2) + 1/(1 − 1/3) = 2 + 3/2 = 7/2.

Example 149

    ∑_{n=0}^∞ t^{n+1} = t ∑_{n=0}^∞ t^n = t · 1/(1 − t) = t/(1 − t)

is valid for −1 < t < 1.

On occasion, you can combine divergent series to get a convergent series.

Example 150 Consider the two series

    ∑_{n=1}^∞ 1/n and ∑_{n=1}^∞ 1/(n + 1).

The first is the harmonic series, which we saw is not convergent. The second series is

    1/2 + 1/3 + 1/4 + ... + 1/(n + 1) + ...

which is the harmonic series with the first term omitted. Hence, it is also divergent. However, the difference of these two divergent series is

    ∑_{n=1}^∞ (1/n − 1/(n + 1)) = ∑_{n=1}^∞ 1/(n(n + 1)),

and this series is convergent. (This follows by one of the tests for convergence we shall investigate in the next section, but it may also be checked directly. See the Exercises.)

However, not all manipulations are valid.

Example 151 We saw that the series

1 − 1 + 1 − 1 + 1 − 1 + ...

is not convergent. However, we may produce a convergent series by regrouping terms

(1 − 1) + (1 − 1) + (1 − 1) + · · · = 0 + 0 + 0 + . . .

or
1 + (−1 + 1) + (−1 + 1) + · · · = 1 + 0 + 0 + 0 + . . .

Generally speaking, regrouping of terms so as to produce a new series won’t produce


correct results, although in certain important cases it does work. We shall return
to this point in Sections 3 and 5.

Exercises for 8.2.

1. In each of the following cases, determine if the indicated sequence s_n converges, and if so find its limit. The sequence could have arisen as the sequence of partial sums of a series, but it could also have arisen some other way. Usually the simplest way to determine the limit is to use L'Hôpital's Rule, but in some cases other methods may be necessary. (If you don't know L'Hôpital's Rule, ask your instructor.)
(a) s_n = (n^2 + 2n + 3)/(3n^3 − 5n^2).
(b) s_n = 1 − 99^n/100^n. (Don't use L'Hôpital's Rule.)
(c) s_n = 1 − (−1)^n.
(d) s_n = sin(nt)/n where t is a variable.
(e) s_n = (ln n)/n.
(f) s_n = (1/n)^{1/n}. (Consider ln s_n and apply L'Hôpital's Rule to that.)

2. In each of the following cases, determine if the indicated series converges.
(a) 1 + 1/5 + (1/5)^2 + ... = ∑_{n=0}^∞ (1/5)^n.
(b) 1 + 2 + 3 + ... = ∑_{n=1}^∞ n.
(c) 1 − 4 + 9 − ... = ∑_{n=1}^∞ (−1)^{n+1} n^2.
(d) 2 − 2/3 + 2/9 − ... = ∑_{n=0}^∞ (−1)^n 2/3^n.
(e) (1 + t) − (1 + t)^2 + (1 + t)^3 − ... = ∑_{n=1}^∞ (−1)^{n+1} (1 + t)^n, where t > 0. Does it matter how small t is? Suppose t = 10^{−100}. What if −2 < t < 0?
(f) 1/2 − 2/3 + 3/4 − ... = ∑_{n=1}^∞ (−1)^{n+1} n/(n + 1).
(g) ∑_{n=1}^∞ (1/3^n − 1/4^n).
(h) ∑_{n=1}^∞ cos(nπ).
(i) ∑_{n=0}^∞ (2^n + 99^n)/100^n.

3. Consider the repeating decimal

    x = .2315231523152315...

Treating this as the sum of the infinite series ∑_{n=0}^∞ at^n where a = .2315 and t = .0001, determine x.
Every infinite decimal which repeats in this way can be evaluated by this
method. The method can be adapted also to infinite decimals which eventually
start repeating.

4. Suppose that when a ball is dropped from a given height, it bounces back to
r times that height where 0 < r < 1. If it is originally dropped from height h,
find the total distance it moves as it bounces back and forth infinitely many
times. Hint: Except for the initial drop, each bounce involves moving through
the same distance twice.

5. Two trains initially 100 miles apart approach, each moving at 20 mph. A bird,
flying at 40 mph, starts from the first train, flies to the second train, returns
to the first train, ad infinitum until the two trains meet. Use a geometric
series to find the total distance the bird flies.
There is an easier way to do this. Do you see it? A story is told of the famous
mathematician John von Neumann who when given the problem immediately
responded with the answer. In response to the remark, “I see you saw the
easy method”, he is supposed to have answered “What easy method?”

6. For

    ∑_{n=1}^∞ 1/(n(n + 1)) = ∑_{n=1}^∞ (1/n − 1/(n + 1)),

show that s_n = 1 − 1/(n + 1). Hint: The partial sum s_n is what is called a 'collapsing sum', i.e., mutual cancellation for adjacent terms eliminates everything except the '1' in the first term '1 − 1/2' and the '−1/(n + 1)' in the last term '1/n − 1/(n + 1)'.

Conclude that the series converges and the sum is 1.

7. What is wrong with the following argument?

    1 − 1 + 1 − 1 + ... = 1 − (1 − 1 + 1 − ...),  or  x = 1 − x,

where x is the sum. It follows that x = 1/2.



8.3 Series of Non-negative Terms

Some of the strange things that happen with series result from trying to balance one
‘infinity’ against another. Thus, if infinitely many terms are positive, and infinitely
many terms are negative, the sum of the former might be ‘+∞’ while the sum of
the latter might be '−∞', and the result of combining the two is anyone's guess. For series ∑_n u_n in which all the terms have the same sign, that is not a problem, so such series are much easier to deal with. In this section we shall consider series in which all terms u_n ≥ 0, but analogous conclusions are valid for series in which all u_n ≤ 0.
Let ∑_{n=1}^∞ u_n be such a series with non-negative terms u_n. In the sequence of partial sums we have

    s_n = s_{n−1} + u_n ≥ s_{n−1},

i.e., the next term in the sequence is never less than the previous term. There are
only two possible ways such a sequence can behave.

(a) The sequence s_n can grow without bound. In that case we say s_n → ∞ as n → ∞.

(b) The sequence s_n can remain bounded, so that no s_n ever exceeds some preassigned upper limit. In that case, the sequence must approach a finite limit s, i.e., s_n → s as n → ∞.

The fact that non-decreasing sequences behave this way is a fundamental property
of the real number system called completeness.


Example Consider the decimal expansion of π. It may be considered the limit of


the sequence

s1 = 3
s2 = 3.1
s3 = 3.14
s4 = 3.141
s5 = 3.1415
s6 = 3.14159
..
.

This is a sequence of numbers which is bounded above (for example, by 4), so (b)
tells us that it approaches a finite limit, and that limit is the real number π.

Note that for series with terms alternately positive and negative, the above principle does not apply.

Example For the series 1 − 1 + 1 − 1 + 1 − . . . , the partial sums are

s1 = 1
s2 = 0
s3 = 1
s4 = 0
..
.

so although the sequence is bounded above (and also below), it never approaches a
finite limit.

For series consisting of terms all of the same sign (or zero), rearranging the terms
does not affect convergence or divergence or the sum when convergent. We won’t
try to prove this in this course because we would first have to define just precisely
what we mean by ‘rearranging’ the terms of a series. You can find a proof in any
good calculus book. (See for example Calculus by Tom M. Apostol).
The Integral Test Suppose we have a series ∑_{n=1}^∞ u_n where each term is positive (u_n > 0) and moreover the terms decrease:

    u_1 > u_2 > ... > u_n > ...

Suppose in addition that lim_{n→∞} u_n = 0. Often it is possible to come up with a function f(x) of a real variable such that u_n = f(n) for each n. This function should be continuous and it should also be decreasing and approach 0 as x → ∞. Then, there is a very simple test to check whether or not the series converges: ∑_n u_n converges if and only if the improper integral ∫_1^∞ f(x) dx is finite.


Example 152 The harmonic series n 1/n satisfies the above conditions and we
may take f (x) = 1/x. Then
∫ ∞ ∫ X
1 dx X
dx = lim = lim ln x|1 = lim ln X = ∞.
1 x X→∞ 1 x X→∞ X→∞

It follows that the series diverges, a fact we already knew.

The reason why the integral test works is fairly clear from some pictures. From the diagram below

[Figure: rectangles of heights u_1, u_2, ..., u_n drawn above the graph of f(x) on the interval from 1 to n + 1]

    s_n = u_1 + u_2 + ... + u_n ≥ ∫_1^{n+1} f(x) dx.
Hence, if ∫_1^∞ f(x) dx = lim_{n→∞} ∫_1^{n+1} f(x) dx = ∞, it follows that the sequence s_n is not bounded and (a) applies, i.e., s_n → ∞ and the series diverges. On the other hand, from the diagram below,

[Figure: rectangles of heights u_2, u_3, ..., u_n drawn below the graph of f(x) on the interval from 1 to n]

    s_n − u_1 = u_2 + u_3 + ... + u_n ≤ ∫_1^n f(x) dx.

Hence, by similar reasoning, if the integral is finite, the sequence s_n is bounded, so by (b) it approaches a limit, and the series converges.
Example 153 The series ∑_{n=1}^∞ 1/n^p with p > 0 is called the 'p-series'. The conditions of the integral test apply, and we may take f(x) = 1/x^p. Then, with the exception of the case p = 1, which we have already dealt with, we have

    ∫_1^∞ dx/x^p = lim_{X→∞} ∫_1^X dx/x^p = lim_{X→∞} x^{1−p}/(1 − p) |_1^X = lim_{X→∞} (1/(1 − p))[X^{1−p} − 1].

If p > 1, then X^{1−p} = 1/X^{p−1} → 0, so the above limit is 1/(p − 1) (finite), and the series converges. On the other hand, if 0 < p < 1, then 1 − p > 0 and X^{1−p} → ∞, so the above limit is not finite and the series diverges. For p = 1, the series is the harmonic series which we already discussed. Hence, we conclude that the 'p-series' diverges for 0 < p ≤ 1 and converges for p > 1.

Error estimates with the integral test. The integral test may be refined to estimate the speed at which a series converges. Suppose ∑_n u_n is convergent and its sum is s. Decompose s as follows:

    s = (u_1 + u_2 + ... + u_n) + (u_{n+1} + u_{n+2} + ...) = s_n + R_n.

Here, R_n = s − s_n = ∑_{j=n+1}^∞ u_j is the error you make if you stop adding terms after the nth term. It is sometimes called the remainder. From the diagram below, you can see that

    ∫_{n+1}^∞ f(x) dx < R_n = ∑_{j=n+1}^∞ u_j < ∫_n^∞ f(x) dx.    (122)

[Figure: the remainder R_n compared with the areas under f(x) to the right of n and of n + 1]

Example 154 Consider the series ∑_{n=1}^∞ 1/n^2. We have

    ∫_{n+1}^∞ dx/x^2 < R_n < ∫_n^∞ dx/x^2.

Evaluating the two integrals, we obtain

    1/(n + 1) < R_n < 1/n.

This gives us a pretty good estimate of the size of the error. For n large, the upper and lower estimates are about the same, so it makes sense to say R_n ≈ 1/n. For example, I used Mathematica to add up 100 terms of this series and got (to 5 decimal places) 1.63498, but it is known that the true sum of the series is π^2/6 which (to 5 decimal places) is 1.64493. As you see, they differ in the second decimal place, and the error is about 1/100 = .01 as predicted.

A common blunder. It is often suggested that when adding up terms of a series on a computer, it suffices to stop when the next term you would add is less than the error you are willing to tolerate. The above example shows that this suggestion is nonsense. For, if you were willing to accept an error of .0001 = 10^{−4}, you would then stop when 1/n^2 < 10^{−4}, or n^2 > 10^4, or n > 100. However, the above analysis shows that the actual error at that stage would be about 1/100 = .01.

This rule makes even less sense for a divergent series like the harmonic series, because it tells you that its sum is finite.
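Here is the blunder carried out in code; the sketch is ours, in Python. The stopping rule halts at n = 100, while the actual error is roughly a hundred times larger than the tolerance.

```python
import math

tol = 1e-4
s, n = 0.0, 0
while 1.0 / (n + 1)**2 >= tol:   # stop when the *next* term is below tol
    n += 1
    s += 1.0 / n**2

print(n)                   # 100
print(math.pi**2 / 6 - s)  # actual error: about 0.0099, far larger than 1e-4
```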

The Comparison Test By a variation of the reasoning used to derive the integral test, one may derive the following criteria to determine if a series ∑_n u_n of non-negative terms converges or diverges.

(a) If you can find a convergent series ∑_{n=1}^∞ a_n such that u_n ≤ a_n for every n, then ∑_{n=1}^∞ u_n also converges.

(b) If you can find a divergent series ∑_{n=1}^∞ b_n of non-negative terms such that b_n ≤ u_n, then ∑_{n=1}^∞ u_n also diverges.

Example 155 The series ∑_{n=1}^∞ (ln n)/n diverges because it can be compared to the harmonic series, i.e.,

    (ln n)/n ≥ 1/n for n ≥ 3,

so ∑ (ln n)/n diverges since ∑ 1/n diverges. (Why doesn't it matter that the comparison fails for n = 1 and n = 2?)

Example 156 Consider the series

    ∑_{n=1}^∞ n/(n^3 − n + 2).

We have

    n/(n^3 − n + 2) = 1/(n^2 − 1 + 2/n) < 1/(n^2 − 1) < 1/(n^2 − n^2/2) = 1/(n^2/2) = 2/n^2.

The first equality is obtained by dividing numerator and denominator by n. Each inequality is obtained by making the denominator smaller (and is valid for n ≥ 2, which is enough, since the first term does not affect convergence). Since ∑_n 2/n^2 = 2 ∑_n 1/n^2 and ∑_n 1/n^2 converges, so does ∑_n 2/n^2. Hence, the comparison test tells us that ∑_n n/(n^3 − n + 2) converges.

The last example illustrates a common strategy when applying the comparison
test. We try to estimate the approximate behavior of un for large n and thereby
come up with an appropriate comparison series. We then try (usually by a tricky
argument) to show that the given series is less than the comparison series (if we
expect convergence) or greater than the comparison series (if we expect divergence.)
Unfortunately, it is not always easy to dream up the relevant comparisons. Hence,
it is often easier to use the following version of the comparison test.
The Limit Comparison Test Suppose ∑_n u_n and ∑_n c_n are series of non-negative terms such that lim_{n→∞} u_n/c_n exists and is not 0 or ∞. Then ∑_n u_n converges if and only if ∑_n c_n converges. If the limiting ratio is 0 and ∑_n c_n converges, then ∑_n u_n converges. If the limiting ratio is ∞ and ∑_n c_n diverges, then ∑_n u_n diverges.

Example 156, again u_n = n/(n^3 − n + 2) looks about like c_n = 1/n^2. Consider the ratio

    u_n/c_n = (n/(n^3 − n + 2)) · n^2 = n^3/(n^3 − n + 2) = 1/(1 − 1/n^2 + 2/n^3) → 1

as n → ∞. Since 1 is finite and non-zero, and since the comparison series ∑_n 1/n^2 converges, the series ∑_n n/(n^3 − n + 2) also converges.

It is important to emphasize that this test does not work for series for which some
terms are positive and some are negative.

The Proof The proof of the limit comparison test is fairly straightforward, but if you are not planning to be a Math major and you are willing to accept such things on faith, you should skip it. Suppose

    lim_{n→∞} u_n/c_n = r with 0 < r < ∞.

Then, for all n sufficiently large,

    r/2 < u_n/c_n < 2r.

(All the values of u_n/c_n have to be very close to r when n is large enough.) Hence, for all n sufficiently large,

    (r/2) c_n < u_n < 2r c_n.

If ∑_n c_n converges, then so does ∑_n 2r c_n, so by the comparison test, so does ∑_n u_n. By similar reasoning, if ∑_n c_n diverges, so does ∑_n u_n. Note that we have used the remark about the importance of the tail—see Section 2—which asserts that when deciding on matters of convergence or divergence, it is enough to look only at all sufficiently large terms.

Exercises for 8.3.

1. Apply the integral test in each of the following cases to see if the indicated series converges or diverges.
(a) ∑_{n=1}^∞ 1/(3n + 2).
(b) ∑_{n=1}^∞ 1/(n^2 + 1).
(c) ∑_{n=1}^∞ 1/√(2n + 1).
(d) ∑_{n=2}^∞ 1/(n ln n).
(e) ∑_{n=1}^∞ (ln n)/n^3.

2. Why doesn't the integral test apply to each of the following series?
(a) ∑_{n=1}^∞ (−1)^{n+1}/n.
(b) ∑_{n=1}^∞ n/(n + 1).

3. Apply one of the two comparison tests to determine if each of the following series converges or diverges.
(a) ∑_{n=1}^∞ 1/(3n + 2).
(b) ∑_{n=1}^∞ 1/(n^2 + 1).
(c) ∑_{n=1}^∞ 1/√(2n + 1).
(d) ∑_{n=2}^∞ (ln n)/n.
(e) ∑_{n=1}^∞ (ln n)/n^3.

4. Determine if each of the following series converges or diverges by applying one of the tests discussed in this section.
(a) ∑_{n=1}^∞ (n + 1)/(n^3 + 3n + 1).
(b) ∑_{n=1}^∞ (n^2 + 2n + 2)/(n^3 + n + 2).
(c) ∑_{n=1}^∞ 1/(2^n − 1).
(d) ∑_{n=1}^∞ n/e^n.
(e) ∑_{n=1}^∞ (1 + sin nt)/n^2 where t is a real number.

5. (a) Estimate the error we make in calculating ∑_{n=1}^∞ 1/n^3 if we stop after 1000 terms.
(b) Estimate the least number of terms we need to use to calculate ∑_{n=1}^∞ 1/n^3 accurately to within 5 × 10^{−16}. (This is close to but not the same as asking that it be accurate to 15 decimal places.)

6. (a) Estimate how many terms of the series ∑_{n=1}^∞ 1/(n(n + 1)) are necessary to calculate the sum accurately to within 5 × 10^{−6}. (This will require a little work because you have to find ∫ dx/(x(x + 1)).)
(b) Write a computer program to do the calculation. See what you get and compare it to the true answer which you know to be 1. (See Section 2.)

7. (a) Estimate how many terms of the series ∑_{n=2}^∞ 1/(n ln n) are necessary to compute the sum accurately to within 5 × 10^{−4}.
(b) Why is this problem silly?

8. (Optional) Suppose ∑_n u_n and ∑_n c_n are series of non-negative terms such that u_n/c_n → 0. Show that if ∑_n c_n converges, then so also does ∑_n u_n. Hint: Show that for all n sufficiently large, u_n < c_n, and use the previous version of the comparison test.

8.4 Alternating Series

A series with alternating positive and negative terms is called an alternating series.
Such a series is usually represented

v1 − v2 + v3 − v4 + . . .

where v1 , v2 , v3 , . . . are all positive. Of course, variations of this scheme are possible.
For example, the first term might have an index other than 1, and also there is no
reason not to start with a negative term. However, to simplify the discussion of the
general theory, we shall assume the series is as presented above. For alternating
series, we start to encounter the problem of ‘balancing infinities’, but they are still
relatively well behaved. Also, many series important in applications are alternating
series.

Examples of Alternating Series

    ∑_{n=0}^∞ t^n if t is negative    (123)

    ∑_{n=1}^∞ (−1)^{n+1}/n = 1 − 1/2 + 1/3 − ...    (124)

    ∑_{n=0}^∞ (−1)^n x^{2n}/(2n)! = 1 − x^2/2! + x^4/4! − x^6/6! + ...    (125)

(123) is a special case of the geometric series, and we shall see later that (125) is a series for cos x.

Theorem 8.12 Let v_1 − v_2 + v_3 − ... be an alternating series with

    v_1 > v_2 > ... > v_n > ...    (all > 0).

If lim_{n→∞} v_n = 0, then the series converges. If s is the sum of the series, then the error R_n = s − s_n after n terms satisfies

    |R_n| < v_{n+1}.

That is, the absolute value of the error after n terms is bounded by the next term of the series.

Example The series

    1 − 1/2 + 1/3 − ... + (−1)^{n+1}/n + ...

converges since 1/n → 0. Also,

    s = 1 − 1/2 + 1/3 − ... + (−1)^{n+1}/n + R_n,

where the terms through (−1)^{n+1}/n add up to s_n and |R_n| < 1/(n + 1). According to Mathematica, for n = 100, s_n = 0.688172 (to 6 decimal places), and the theorem tells us the error is less than 1/100 = .01. Hence, the correct value for the sum is somewhere between .67 and .69.

Suppose on the other hand, we want to use enough terms to get an answer such that the error will be less than .0005 = 5 × 10^{−4}. For that, we need

    1/(n + 1) ≤ 5 × 10^{−4}, or
    n + 1 ≥ 1/(5 × 10^{−4}) = (1/5) × 10^4 = 2 × 10^3.

Thus, n = 2000 would work.
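A one-line check against the known sum ln 2 confirms the estimate; the sketch is ours, in Python.

```python
import math

s = sum((-1)**(n + 1) / n for n in range(1, 2001))   # n = 2000 terms
print(abs(math.log(2) - s))   # below 1/2001, hence below 5e-4
```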

An aside on rounding.

In the above example, we asked for enough terms to be sure that the error is less than 5 × 10^{−4}. This is different from asking that it be accurate to three decimal places, although the two questions are closely related. For example, suppose the true answer to a problem is 1.32436... An error of +.0004 would change this to 1.32476..., which would be rounded up to the incorrect answer 1.325 if we used only three decimal places. An alternative to rounding is truncating, i.e., cutting off all digits past the desired one. That would solve our problem in the case of a positive error, but an error of −.0004 would in this case result in the incorrect answer 1.323 if we truncated.

By using extra decimal places, it is possible to reduce the likelihood of a false rounding error, but it is not possible to eliminate it entirely. For example, suppose the true answer is 2.426499967 and we insist on enough terms so that the error is less than 5 × 10^{−8}. Then, an error of +.00000004 would cause us to round up incorrectly to 2.427.

Since the issue of rounding can be so confusing, we shall instead generally ask that the error be smaller than five in the next decimal position. You should generally read a request for a certain number of decimal places in this way, but you should also remember that rounding may change the answer a bit in such cases.

The Proof. The proof of the theorem is a little tricky, but it is illuminating enough
that you should try to follow it. You will get a better idea of how the error behaves,
and that is important. The diagram below makes clear what happens.

[Figure: the partial sums on a number line, with s_2 < s_4 < s_6 < ... < s_5 < s_3 < s_1; the jumps between successive partial sums have lengths v_2, v_3, v_4, v_5, v_6, ....]

Here is the same thing in words. If n is odd,

    s_{n−1} < s_{n−1} + (v_n − v_{n+1}) = s_{n+1},    (126)

since v_n − v_{n+1} > 0. Hence, the even numbered partial sums s_n form an increasing sequence. Similarly, if n is even,

    s_{n+1} = s_{n−1} − (v_n − v_{n+1}) < s_{n−1},    (127)

so the odd numbered partial sums s_n form a decreasing sequence. Also, if n is odd,

    s_{n+1} = s_n − v_{n+1} < s_n,

so it follows that every odd partial sum is greater than every even partial sum:

    s_2 < s_4 < s_6 < ... < s_7 < s_5 < s_3 < s_1.

The even partial sums form an increasing sequence, bounded above, so by the completeness property (b), they must approach a finite limit s'. By similar reasoning, since the odd partial sums form a decreasing sequence, bounded below, they also approach a finite limit s''. Moreover, it is not too hard to see that for n even

    s_n ≤ s' ≤ s'' ≤ s_{n+1}.

However, s_{n+1} − s_n = v_{n+1} → 0 as n → ∞. It follows that s'' − s' → 0 as n → ∞, but since s'' − s' does not depend on n, it must be equal to zero. In other words, the even partial sums and the odd partial sums must approach the same limit s = s' = s''. It follows that lim_{n→∞} s_n = s, and the series converges.

Note also that we get the following refined error estimate from the above analysis:

    s_n < s < s_n + v_{n+1}  if n is even,
    s_n − v_{n+1} < s < s_n  if n is odd.

Exercises for 8.4.

1. Determine if each of the following series converges.
(a) ∑_{n=1}^∞ (−1)^{n+1}/n^2.
(b) ∑_{n=1}^∞ (−1)^n n/(n + 1).
(c) ∑_{n=0}^∞ (−1)^n n/(n^2 + 1).
(d) ∑_{n=2}^∞ (−1)^n/ln n.
(e) ∑_{n=1}^∞ (−1)^{n+1}/n^{1/n}.

2. How many terms of the series ∑_{n=1}^∞ (−1)^{n+1}/n are needed to be sure that the answer is accurate to within 5 × 10^{−11}? (The series actually adds up to ln 2.)

3. We shall see later that

    1/e = ∑_{n=0}^∞ (−1)^n/n! = 1 − 1 + 1/2! − 1/3! + ...

(a) How many terms of this series are necessary to calculate 1/e to within 5 × 10^{−5}?
(b) Do that calculation. (You may want to program it.)
(c) Compare with 1/e as evaluated on your calculator or by computer.

4. Calculate ∑_{n=1}^∞ (−1)^{n+1}/n^2 to within 5 × 10^{−3}. (The sum of the series is actually π^2/12.)

8.5 Absolute and Conditional Convergence


A series ∑_n u_n is called absolutely convergent if the series ∑_n |u_n| converges. If ∑_n u_n converges, but ∑_n |u_n| diverges, the original series is called conditionally convergent. Absolutely convergent series are quite well behaved, and one can manipulate them almost as though they were finite sums. Conditionally convergent series, on the other hand, are often hard to deal with.

Examples The series

    1 − 1/4 + 1/9 − ... + (−1)^{n+1}(1/n^2) + ... = ∑_{n=1}^∞ (−1)^{n+1}/n^2

is absolutely convergent because ∑_n 1/n^2 converges. The series

    1 − 1/2 + 1/3 − ... + (−1)^{n+1}(1/n) + ... = ∑_{n=1}^∞ (−1)^{n+1}/n

is conditionally convergent. It does converge by the alternating series test, but the series of absolute values ∑_n 1/n diverges.

Of course, for series with non-negative terms, absolute convergence means the same thing as convergence. Also, for series with all terms u_n ≤ 0, absolute convergence means the same thing as convergence. (Just consider the series −∑_n u_n = ∑_n (−u_n), for which all the terms are non-negative.)
∑ ∑
Theorem 8.13 If ∑_n |u_n| converges, then ∑_n u_n converges. That is, an absolutely convergent series is convergent.

This theorem may look 'obvious', but that is the result of the choice of terminology. The relation between the series ∑_n u_n and the corresponding series of absolute values ∑_n |u_n| is rather subtle. For example, even if they both converge, they certainly won't have the same sum.

The proof. Assume ∑_n |u_n| converges. Define two new series as follows. Let

    p_n = u_n if u_n > 0, and p_n = 0 otherwise;
    q_n = |u_n| = −u_n if u_n < 0, and q_n = 0 otherwise.

Then ∑_n p_n is the series obtained when all negative terms of the original series are replaced by 0's, and ∑_n q_n is the negative of the series obtained if all positive terms of the original series are replaced by 0's. Both are series of non-negative terms, and

    u_n = p_n + (−q_n) = p_n − q_n for each n.

(Either p_n or −q_n is zero, and the other is u_n.) Hence, to show that ∑_n u_n converges, it would suffice to show that both ∑_n p_n and ∑_n q_n converge. Consider the former. For each n, we have

    0 ≤ p_n ≤ |u_n|.

Indeed, p_n equals one or the other of the bounds depending on the sign of u_n. Hence, by the comparison test, since ∑_n |u_n| converges, so does ∑_n p_n. A similar argument shows that ∑_n q_n converges. Hence, we conclude that ∑_n u_n converges. That completes the proof.

One consequence of the above argument is that for absolutely convergent series, rearranging the terms of the series does not affect convergence or divergence or the sum when convergent. The reason is that for an absolutely convergent series,

    ∑_n u_n = ∑_n p_n − ∑_n q_n,    (128)

and each of the series on the right is a convergent series of non-negative terms. For such series, rearranging terms is innocuous. Suppose on the other hand that the two series ∑_n p_n and ∑_n q_n are both divergent. Then (128) would assert that ∑_n u_n is the difference of two 'infinities', and any attempt to make sense of that is fraught with peril. However, in that case

    ∑_n |u_n| = ∑_n p_n + ∑_n q_n

is a sum of two divergent series of non-negative terms and certainly doesn't converge, so the series ∑_n u_n is not absolutely convergent.

The Ratio Test There is a fairly simple test which will establish absolute convergence for series to which it applies. Consider the ratio

    |u_{n+1}|/|u_n|

of succeeding terms of the series of absolute values. Suppose this approaches a finite limit r. (Note that we must have r ≥ 0 since it is a limit of non-negative terms.) That means that, for large n, the series ∑_n |u_n| looks very much like a geometric series of the form

    ∑_n a r^n = a ∑_n r^n.

However, the geometric series converges exactly when 0 ≤ r < 1. We should be able to conclude from this that the original series also converges exactly in those circumstances. Unfortunately, because the comparison involves only a limiting ratio, the analysis is not precise enough to tell us what happens when r = 1. The precise statement of the test is the following.

The Ratio Test Suppose

    r = lim_{n→∞} |u_{n+1}|/|u_n|

exists. If r < 1, the series ∑_n u_n is absolutely convergent. If r > 1, the series ∑_n u_n diverges. If r = 1, the test provides no information.

Example 157 Consider the series

    ∑_{n=0}^∞ x^n/n!.

(As you may already know, this is the series for e^x.) To apply the ratio test, we need to consider

    |x^{n+1}/(n + 1)!| / |x^n/n!| = (|x|^{n+1} n!)/((n + 1)! |x|^n) = |x|/(n + 1).

The limit of this as n → ∞ is r = 0 < 1. Hence, by the ratio test, the series ∑_n x^n/n! is absolutely convergent for every possible x.
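Since the factorial eventually overwhelms any power, the partial sums converge quickly. The sketch below, ours in Python, sums the terms iteratively, computing each from the one before, and compares with the built-in exponential:

```python
import math

def exp_partial(x, n):
    """Sum of x^k/k! for k = 0, ..., n."""
    term, s = 1.0, 1.0
    for k in range(1, n + 1):
        term *= x / k       # turns x^{k-1}/(k-1)! into x^k/k!
        s += term
    return s

for x in (1.0, 5.0, -5.0):
    print(x, exp_partial(x, 30), math.exp(x))
```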

Example 158 Consider the series

    ∑_{n=1}^∞ (−1)^n x^n/n.

(As we shall see, this is the series for ln(1 + x), at least when it converges.) The ratio test tells us to consider

    |(−1)^{n+1} x^{n+1}/(n + 1)| / |(−1)^n x^n/n| = (|x|^{n+1} n)/((n + 1)|x|^n) = (n/(n + 1))|x|.

Since lim_{n→∞} n/(n + 1) = 1, the limiting ratio in this case is |x|. Thus, the series ∑_{n=1}^∞ (−1)^n x^n/n converges absolutely if |x| < 1 and diverges if |x| > 1. Unfortunately, the ratio test does not tell us what happens when |x| = 1. However, we can settle those cases by using other criteria. Thus, for x = 1, the series is

    ∑_{n=1}^∞ (−1)^n/n

and that converges by the alternating series test. On the other hand, for x = −1, the series is

    ∑_{n=1}^∞ (−1)^n (−1)^n/n = ∑_n (−1)^{2n}/n = ∑_n 1/n

which is the harmonic series, so it diverges.

The detailed proof that the ratio test works is a bit subtle. We include it here for
completeness, but you will be excused if you skip it.

The proof. Suppose lim_{n→∞} |u_{n+1}|/|u_n| = r. That means that if r_1 is slightly less than r, and r_2 is slightly greater than r, then for all sufficiently large n

    r_1 < |u_{n+1}|/|u_n| < r_2.

Moreover, by making n large enough, we can arrange for the numbers r_1 and r_2 to be as close to r as we might like.

By the general principle that the tail of a series dominates in discussions of convergence, we can ignore the finite number of terms for which the above inequality does not hold. Renumbering, we may assume

    r_1 |u_n| < |u_{n+1}| < r_2 |u_n|

holds for all n starting with n = 1. Then,

    |u_2| < r_2 |u_1|,  |u_3| < r_2 |u_2| < r_2^2 |u_1|,  ...,  |u_n| < |u_1| r_2^{n−1},

and similarly for the lower bound, so putting a = |u_1|, we may assume

    a r_1^{n−1} < |u_n| < a r_2^{n−1}    (129)

for all n. Suppose r < 1. Then, by taking r_2 sufficiently close to r, we may assume that r_2 < 1. So we may use (129) to compare ∑_n |u_n| to ∑_n a r_2^{n−1}, which is a convergent geometric series. Hence, ∑_n |u_n| converges. Suppose on the other hand that r > 1. Then by choosing r_1 sufficiently close to r, we may assume r_1 > 1. Putting this in (129) yields

    a < |u_n|

for all n. Hence, u_n → 0 as n → ∞ is not possible, and ∑_n u_n must diverge.

Note that the above argument breaks down if r = 1. In that case, we cannot assume that r_2 < 1 or r_1 > 1, so neither part of the argument works.

The Root Test There is a test which is related to the ratio test which is a bit simpler to use. Instead of considering the ratio of successive terms, one considers the nth root |u_n|^{1/n}. If this approaches a finite limit r, then roughly speaking, for large n, |u_n| behaves like r^n. Thus, we are led to compare ∑_n |u_n| to the geometric series ∑_n r^n. We get convergence or divergence, as in the ratio test, which depends on the value of the limit r. As in the ratio test, the analysis is not precise enough to tell us what happens if that limit is 1.

The Root Test Suppose lim_{n→∞} |u_n|^{1/n} = r. If 0 ≤ r < 1, then ∑_n u_n converges absolutely. If 1 < r, then ∑_n u_n diverges. If r = 1, it may converge or diverge.

Example 159 Consider the series

    x + x^2/4 + x^3/9 + ... = ∑_{n=1}^∞ x^n/n^2.

We have

    |x^n/n^2|^{1/n} = |x|/n^{2/n},

so in this case

    lim_{n→∞} |u_n|^{1/n} = |x| lim_{n→∞} n^{−2/n}.

The limit on the right is evaluated by a tricky application of L'Hôpital's rule. The idea is that if lim_{n→∞} ln a_n = L, then lim_{n→∞} a_n = e^L. In this case,

    lim_{n→∞} ln(n^{−2/n}) = lim_{n→∞} (−2/n) ln n = lim_{n→∞} (−2 ln n)/n = lim_{n→∞} (−2/n)/1 = 0.

(The next to last step was done using L'Hôpital's rule.) Hence,

    lim_{n→∞} n^{−2/n} = e^0 = 1.

It follows that the limiting ratio for the series is r = |x|. Thus, if |x| < 1 the series ∑_n x^n/n^2 converges absolutely, and if |x| > 1, it diverges. For |x| = 1, we must settle the question by other means. For x = 1, the series is ∑_n 1/n^2, which we know converges. (It is a 'p-series' with p = 2 > 1.) For x = −1, the series is ∑_n (−1)^n/n^2, and we just decided its series of absolute values converges. Hence, ∑_n (−1)^n/n^2 converges absolutely. (You could also see it converges by the alternating series test, but absolute convergence is stronger.)

The proof of the root test is similar to that of the ratio test. Again you will be
excused if you skip it, but here it is.

The proof. Suppose lim_{n→∞} |u_n|^{1/n} = r. If r_1 is slightly less than r and r_2 is slightly larger, then, for n sufficiently large,

    r_1 < |u_n|^{1/n} < r_2.

Again, by the principle that we may ignore finitely many terms of a series when investigating convergence, we may assume the inequalities hold for all n. Raising to the nth power, we get

    r_1^n < |u_n| < r_2^n.

If r < 1, we may assume r_2 < 1, and compare ∑_n |u_n| with a convergent geometric series. On the other hand, if 1 < r, we may assume 1 < r_1, and then |u_n| → ∞, which can't happen for a convergent series.

Exercises for 8.5.

1. In each case, tell if the indicated series is absolutely convergent, conditionally convergent, or divergent. If the ratio or root test does not work, you may want to try some other method.
(a) ∑_{n=1}^∞ t^n where |t| < 1.
(b) ∑_{n=1}^∞ (−1)^n (n + 4)/(5n + 6).
(c) ∑_{n=1}^∞ (−1)^{5n+2} t^n/n^2 where |t| < 1.
(d) ∑_{n=2}^∞ (−1)^n/ln n.
(e) ∑_{n=1}^∞ (−2)^n/n^n.
(f) ∑_{n=0}^∞ (−1)^n n!/n^n. Hint: You should know what lim_{n→∞} (1 + 1/n)^n is from your previous courses.
(g) ∑_{n=1}^∞ (n + 1)/(2n^3 + 4n + 5).
∑∞ tn
2. Determine all values of t such that n=0 √ converges. For which of
n+1
those values does it converge absolutely?

8.6 Power Series

At the beginning of this chapter, we saw that we might want to consider series of the form ∑_{n=0}^∞ a_n t^n as solutions of differential equations. A bit more generally, if we were given initial conditions at t = t_0, we might consider series of the form

    ∑_{n=0}^∞ a_n (t − t_0)^n.

Such a series is called a power series centered at t_0.

A power series will generally converge for some, but perhaps not all, values of the
variable. Usually, you can determine those values for which it converges by an
application of the ratio or root test.

Example 160 Consider the power series

    ∑_{n=0}^∞ (n/2^n)(t − 3)^n.

To apply the ratio test, consider

    (((n + 1)/2^{n+1})|t − 3|^{n+1}) / ((n/2^n)|t − 3|^n) = (|t − 3|/2) · ((n + 1)/n).

However,

    lim_{n→∞} (n + 1)/n = 1

(by L'Hôpital's rule, or by dividing numerator and denominator by n). Hence, the limiting ratio is |t − 3|/2. Hence, the series converges absolutely if

    |t − 3|/2 < 1,  i.e.,  |t − 3| < 2,  i.e.,  −2 < t − 3 < 2,  i.e.,  1 < t < 5.

Similarly, if |t − 3|/2 > 1, the series diverges. That inequality amounts to t < 1 or 5 < t. The case |t − 3|/2 = 1, i.e., t = 1 or t = 5, must be handled separately. I leave the details to you. It turns out that the series diverges both for t = 1 and t = 5. Hence, the exact range of convergence is

    1 < t < 5

and the series converges absolutely on this interval.

[Figure: the interval of convergence (1, 5), centered at 3 with radius 2.]

The above analysis could have been done equally well using the root test.
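The interval of convergence is also easy to probe numerically. In the sketch below (ours, in Python), the partial sums settle down for t inside (1, 5) and grow without bound outside it:

```python
def partial(t, N):
    """Partial sum of sum_{n=0}^{N} (n/2^n)(t - 3)^n."""
    return sum(n / 2**n * (t - 3)**n for n in range(N + 1))

for t in (2.0, 4.5, 0.5, 5.5):
    print(t, [round(partial(t, N), 4) for N in (10, 20, 40)])
# t = 2 and t = 4.5 lie inside (1, 5): the sums converge.
# t = 0.5 and t = 5.5 lie outside: the sums grow without bound.
```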

The behavior exhibited in Example 160, or in similar examples in the previous section, is quite general. For any power series ∑_n a_n (t − t_0)^n there is a number R such that the series converges absolutely in the interval

    t_0 − R < t < t_0 + R,

diverges outside that interval, and may converge or diverge at either endpoint. R
is called the radius of convergence of the series. (The terminology comes from
the corresponding concept in complex analysis, where the condition |z − z_0| < R characterizes all points in the complex plane inside a circular disk of radius R centered at z_0.) R may be zero, in which case the series converges only for t = t_0
(where all terms but the first are 0). R may be infinite, in which case the series
converges for all t. In many cases, it is finite, and it may be important to know its
value.

[Figure: the interval of convergence, from t_0 − R to t_0 + R, centered at t_0.]

Understanding how the ratio or root test is used to determine the radius of conver-
gence gives you a good idea of why a power series converges in a connected interval
rather than at a random collection of points, so you should work enough examples
to be sure you can do such calculations. Unfortunately, in the general case, neither
the ratio nor the root test may apply, so the argument justifying the existence of
an interval of convergence is a bit tricky.

We shall outline the argument here, but you need not study it at this time if the
examples satisfy you that it all works.

The argument why the range of convergence is an interval. If the series converges only for t = t_0, then we take R = 0, and we are done. Suppose instead that there is a t_1 ≠ t_0 at which the series ∑_n a_n (t_1 − t_0)^n converges. Then the general term

    a_n (t_1 − t_0)^n → 0

as n → ∞. One consequence is that the absolute value of the general term must be bounded, i.e., there is a bound M such that

    |a_n||t_1 − t_0|^n < M

for all n. Put R_1 = |t_1 − t_0|. Then, if |t − t_0| < R_1,

    |a_n||t − t_0|^n = |a_n||t_1 − t_0|^n (|t − t_0|/R_1)^n < M r^n,

where r = |t − t_0|/R_1 < 1. Comparing with a geometric series, we see that ∑_n |a_n||t − t_0|^n converges, so ∑_n a_n (t − t_0)^n is absolutely convergent as long as

    |t − t_0| < R_1.

Consider next all possible R_1 > 0 such that the series converges absolutely in the range t_0 − R_1 < t < t_0 + R_1. If there is no upper bound to this set of numbers, then the series converges absolutely for all t. In that case, we set the radius of convergence R = ∞. Suppose instead that there is some upper bound to the set of all such R_1. By a variation of the completeness arguments we have used before (concerning bounded increasing sequences), it is possible to show in this case that there is a single value R such that ∑_n a_n (t − t_0)^n converges absolutely for |t − t_0| < R, but that the series diverges for |t − t_0| > R. This R is the desired radius of convergence.

Differentiation and Integration of Power Series The rationale behind the use of series to solve differential equations is that they are generalizations of polynomial functions. By the usual rules of calculus, polynomial functions are easy to differentiate or integrate. For example, if

    f(t) = a_0 + a_1(t − t_0) + a_2(t − t_0)^2 + ... + a_n(t − t_0)^n + ...

then

    f'(t) = a_1 + 2a_2(t − t_0) + ... + n a_n(t − t_0)^{n−1} + ...

That is, to differentiate a polynomial (or any finite sum), you differentiate each term, and then add up the results. A similar rule works for integration. Unfortunately, these rules do not always work for infinite sums. It is possible for the derivatives of the terms of a series to add up to something other than the derivative of the sum of the series.

Example Consider the series


    ∑_{n=1}^∞ ( sin(nt)/n − sin((n+1)t)/(n+1) ).

The partial sums are

    s_1(t) = sin t − (1/2) sin 2t
    s_2(t) = sin t − (1/2) sin 2t + (1/2) sin 2t − (1/3) sin 3t = sin t − (1/3) sin 3t
    ...
    s_n(t) = · · · = sin t − (1/(n+1)) sin((n+1)t)
    ...

However, |sin((n+1)t)|/(n+1) ≤ 1/(n+1) for any t, so its limit is 0 as n → ∞. Hence,

    lim_{n→∞} s_n(t) = sin t

so the series converges for every t and its sum is sin t. On the other hand, the series
of derivatives is
    ∑_{n=1}^∞ (cos(nt) − cos((n+1)t)).

The partial sums of this series are calculated essentially the same way, and

    s_n′(t) = cos t − cos((n+1)t).



For most values of t, cos((n+1)t) does not approach a definite limit as n → ∞. (For
example, try t = π/2; as n increases you cycle repeatedly through the values −1, 0, 1, 0.)
Hence, the series of derivatives is generally not even a convergent series.

Even more bizarre examples exist in which the series of derivatives converges but
to something other than the derivative of the sum of the original series.

Similar problems arise when you try to integrate a series term by term. Some of
this will be discussed when you study Fourier Series and Boundary Value Problems,
next year. Fortunately, for a power series, within its interval of absolute convergence
the derivative of the sum is the sum of the derivatives and similarly for integrals.
This fact is fundamental for any attempt to use power series to solve a differential
equation. It may also be used to make a variety of other calculations with series.
Theorem 8.14 Suppose ∑_{n=0}^∞ a_n(t − t0)^n converges absolutely to f(t) for |t − t0| <
R. Then

(a) f′(t) = ∑_{n=1}^∞ n a_n(t − t0)^{n−1} for |t − t0| < R.

(b) ∫_{t0}^t f(s) ds = ∑_{n=0}^∞ (a_n/(n+1)) (t − t0)^{n+1} for |t − t0| < R. Moreover, the series
∑_{n=1}^∞ n a_n(t − t0)^{n−1} and ∑_{n=0}^∞ (a_n/(n+1)) (t − t0)^{n+1} have the same radius of convergence
as ∑_{n=0}^∞ a_n(t − t0)^n.

The proof of this theorem is a bit involved. An outline is given in the appendix to
this section, which you may want to skip.

Example 161 We know




    1/(1 − t) = ∑_{n=0}^∞ t^n    for |t| < 1.

(1 is the radius of convergence.) Hence,

    1/(1 − t)^2 = ∑_{n=1}^∞ n t^{n−1} = ∑_{n=0}^∞ (n + 1) t^n.

The last expression was obtained by substituting n + 1 for n. This has the effect
of changing the lower limit ‘n = 1’ to ‘n + 1 = 1’ or ‘n = 0’. Such substitutions
are often useful when manipulating power series, particularly in the solution of
differential equations.

The differentiated series also converges absolutely for |t| < 1.

Example 162 If we replace t by −t in the formula for the sum of a geometric series,
we get
    1/(1 + t) = ∑_{n=0}^∞ (−t)^n = ∑_{n=0}^∞ (−1)^n t^n

and this converges absolutely for |−t| = |t| < 1. Hence, by (b), we get

    ∫_0^t ds/(1 + s) = ∑_{n=0}^∞ ((−1)^n/(n + 1)) t^{n+1} = ∑_{n=1}^∞ ((−1)^{n+1}/n) t^n

and this is valid for |t| < 1. Note the shift obtained by replacing n by n − 1. Doing
the integral on the left yields
    ln(1 + t) = ∑_{n=1}^∞ ((−1)^{n+1}/n) t^n = t − t^2/2 + t^3/3 − . . .    for |t| < 1.

Series of this kind may be used to make numerical calculations. For example,
suppose we want to calculate ln 1.01 to within 5 × 10−6 , i.e., loosely speaking, we
want the answer to be accurate to five decimal places. We can use the series

    ln(1 + .01) = .01 − (.01)^2/2 + (.01)^3/3 − · · · + (−1)^{n+1} (.01)^n/n + . . .
and include enough terms so the error Rn satisfies |Rn| < 5 × 10^{−6}. Since the series
is an alternating series with decreasing terms, we know that |Rn| is bounded by the
absolute value of the next term in the series, i.e., by (.01)^{n+1}/(n + 1). Hence, to get the
desired accuracy, it would suffice to choose n large enough so that

    (.01)^{n+1}/(n + 1) = 10^{−2(n+1)}/(n + 1) < 5 × 10^{−6}.
There is no good way to determine such an n by a deductive process, but trial and
error works reasonably well. Certainly, n = 2 would be good enough. Let’s see if
n = 1 would also work.
    10^{−4}/2 = .5 × 10^{−4} = 5 × 10^{−5},
and that is not small enough. Hence, n = 2 is the best that this particular estimate
of the error will give us. Hence,

    ln(1.01) = .01 − (.01)^2/2 ≈ 0.00995.
2
You should check this with your calculator to see if it is accurate. (Remember
however that the calculator is also using some approximation, and it doesn’t know
the true value any better than you do.)
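
If you have a computer available, the check takes only a few lines. Here is a minimal sketch in Python (our choice of language for these illustrations; any language from your programming course would serve), using only the standard library:

    import math

    # Partial sum of the alternating series for ln(1 + t) with t = .01,
    # stopping at n = 2 as in the text.
    t = 0.01
    approx = t - t**2 / 2
    print(approx)                         # 0.00995
    print(abs(math.log(1 + t) - approx))  # about 3.3e-07, well within 5e-06

The observed error is close to the next term (.01)^3/3 ≈ 3.3 × 10^{−7}, just as the alternating series estimate predicts.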

Estimating the error Rn made after using n terms of a series can be tricky. For
alternating series, we know that the next-term rule applies, but we shall see
examples later where much more sophisticated methods need to be used.

Appendix. Proof of the Theorem



We first note that it is true that the sum of a power series within its radius of
convergence is always a continuous function. The proof is not extremely hard, but
we shall omit it here. You may see some further discussion of this point in your later
mathematics courses. We shall use this fact implicitly several places in what follows.
In particular, knowing the sum of a series is continuous allows us to conclude it is
integrable and also to apply the fundamental theorem of calculus.

Statement (a) in the theorem may be proved once statement (b) has been established.
To see this, argue as follows. Let ∑_{n=0}^∞ a_n(t − t0)^n = f(t) have radius of convergence
R > 0. Consider the differentiated series ∑_{n=1}^∞ n a_n(t − t0)^{n−1}. If the ratio (or root)
test applies, it is not hard to see that this series has the same radius of convergence
R as the original series. (See the Exercises.) If the ratio test does not apply, there
is a tricky argument to establish this fact in any case. Suppose that point is settled.
Define

    g(t) = ∑_{n=1}^∞ n a_n(t − t0)^{n−1}    for |t − t0| < R.
(We hope that g(t) = f′(t), but we don't know that yet.) Assume (b) is true and
apply it to the series for g(t). We get

    ∫_{t0}^t g(s) ds = ∑_{n=1}^∞ (n a_n/n)(t − t0)^n = ∑_{n=1}^∞ a_n(t − t0)^n.

However,

    f(t) = ∑_{n=0}^∞ a_n(t − t0)^n = a0 + ∑_{n=1}^∞ a_n(t − t0)^n
         = a0 + ∫_{t0}^t g(s) ds.

This formula, together with the fundamental theorem of calculus assures us that
f (t) is a differentiable function and

    f′(t) = 0 + (d/dt) ∫_{t0}^t g(s) ds = g(t)
as needed. The proof of (b) is harder. Let



    f(t) = ∑_{n=0}^∞ a_n(t − t0)^n    for |t − t0| < R.

Fix t > t0 . (The argument for t < t0 is similar.) We want to integrate f (s) over
the range t0 ≤ s ≤ t. For any given N , we may decompose the series for f (s) into
two parts

    f(s) = ∑_{n=0}^N a_n(s − t0)^n + ∑_{n=N+1}^∞ a_n(s − t0)^n,

where the second sum is denoted R_N(s).

Integrate both sides to obtain


    ∫_{t0}^t f(s) ds = ∑_{n=0}^N ∫_{t0}^t a_n(s − t0)^n ds + ∫_{t0}^t R_N(s) ds

                     = ∑_{n=0}^N [ (a_n/(n+1)) (s − t0)^{n+1} ]_{t0}^t + ∫_{t0}^t R_N(s) ds

                     = ∑_{n=0}^N (a_n/(n+1)) (t − t0)^{n+1} + ∫_{t0}^t R_N(s) ds.

To complete the proof, we need to see what happens in the last equality as N → ∞.
The first term is the N th partial sum of the desired series, so it suffices to show
that the second term, the error term, approaches zero. However,

    | ∫_{t0}^t R_N(s) ds | ≤ ∫_{t0}^t |R_N(s)| ds.

On the other hand, we have



    |R_N(s)| = | ∑_{n=N+1}^∞ a_n(s − t0)^n | ≤ ∑_{n=N+1}^∞ |a_n||s − t0|^n
             ≤ ∑_{n=N+1}^∞ |a_n||t − t0|^n.

(This follows by the same reasoning as in the comparison test.) Note that since
|t − t0| < R, the series ∑_n a_n(t − t0)^n converges absolutely, so the series on the
right of the above inequality is the tail of a convergent series; call it T_N. Then, we
have
    |R_N(s)| ≤ T_N

for |s − t0 | < |t − t0 |, where TN is independent of s. Hence,


    | ∫_{t0}^t R_N(s) ds | ≤ ∫_{t0}^t |R_N(s)| ds ≤ ∫_{t0}^t T_N ds = T_N (t − t0).

Since T_N is the tail of a convergent series, it goes to zero as N → ∞. That completes
the proof.

Exercises for 8.6.

1. Show that the power series ∑_{n=0}^∞ n! t^n has radius of convergence R = 0.

2. Find the interval of convergence and the radius of convergence for each of the
following power series.
(a) ∑_{n=0}^∞ n t^n.
(b) ∑_{n=0}^∞ (3^n/n^2) (t − 5)^n.
(c) ∑_{n=0}^∞ (n!/(2n)!) (t − 1)^n.
(d) ∑_{n=0}^∞ (n^2/10^n) (t + 1)^n.
(e) ∑_{n=0}^∞ (2^n/(2n)!) t^{2n}.
3. Using the geometric series


    f(t) = 1/(1 + t) = ∑_{n=0}^∞ (−1)^n t^n,

find series expansions for −f′(t) = 1/(1 + t)^2 and f″(t)/2 = 1/(1 + t)^3. For which
values of t does the theorem in the section assure you that these expansions
are valid?
4. Find series expansions for f(t) = t ln(1 + t) and g(t) = ln(1 + t)/t. What are
the intervals of convergence of these series?

5. Assume the expansion

    f(t) = t − t^3/3 + t^5/5 − · · · = ∑_{n=0}^∞ (−1)^n t^{2n+1}/(2n + 1)

is valid for −1 < t < 1. Show that f′(t) = 1/(1 + t^2). Given that f(0) = 0,
conclude f(t) = tan^{−1} t.

6. Assume that the expansion

    f(t) = 1 + t + t^2/2 + t^3/3! + · · · = ∑_{n=0}^∞ t^n/n!

is valid for all t. Show that f′(t) = f(t). Given that f(0) = 1, what can you
conclude about f(t)?

7. Suppose the ratio test applies to the power series ∑_n a_n t^n and it has radius
of convergence R. Show that the series ∑_n n a_n t^{n−1} and ∑_n (a_n/(n+1)) t^{n+1} also
have radius of convergence R.

8. Let s_n(t) = n e^{−nt}. (You may assume that s_n(t) is the nth partial sum of an
appropriate series.) Show that lim_{n→∞} s_n(t) = 0 for t ≠ 0. Show on the other
hand that ∫_0^1 s_n(t) dt = 1 − e^{−n}. Conclude that

    lim_{n→∞} ∫_0^1 s_n(t) dt ≠ ∫_0^1 lim_{n→∞} s_n(t) dt

in this case.

8.7 Analytic Functions and Taylor Series

Our aim, as enunciated at the beginning of this chapter, is to use power series to
solve differential equations. So suppose that


    f(t) = ∑_{n=0}^∞ a_n(t − t0)^n    for |t − t0| < R,

where R is the radius of convergence of the series on the right. (Note that this
assumes that R > 0. There are power series with R = 0. Such a series converges
only at t = t0 and won’t be of much use. See the Exercises for the previous section
for an example.) A function f : R → R is said to be analytic if at each point t0
in its domain, it may be represented by such a power series in an interval about t0 .
Analytic functions are the best possible functions to use in applications. For exam-
ple, we know by the previous section, that such a function is differentiable. Even
better, its derivative is also analytic because it has a power series expansion with
the same radius of convergence. Hence, the function also has a second derivative,
and by extension of this argument, must have derivatives of every order. Moreover,
it is easy to relate the coefficients of the power series to these derivatives. We have


    f(t) = ∑_{n=0}^∞ a_n(t − t0)^n = a0 + a1(t − t0) + a2(t − t0)^2 + . . .

    f′(t) = ∑_{n=1}^∞ n a_n(t − t0)^{n−1} = a1 + 2a2(t − t0) + . . .

    f″(t) = ∑_{n=2}^∞ n(n − 1) a_n(t − t0)^{n−2} = 2a2 + . . .
    ...
    f^{(k)}(t) = ∑_{n=k}^∞ n(n − 1)(n − 2) . . . (n − k + 1) a_n(t − t0)^{n−k} = k(k − 1) . . . 2 · 1 a_k + . . .
    ...

Here f^{(k)} denotes the kth derivative of f. Note that the series for f^{(k)}(t) starts off
with k! a_k. Make sure you understand why! Now put t = t0 in the above formulas.
All terms involving t − t0 to a positive power vanish, and we get
    f(t0) = a0
    f′(t0) = a1
    f″(t0) = 2a2
    ...
    f^{(k)}(t0) = k! a_k
    ...
We can thus solve for the coefficients
    a0 = f(t0)
    a1 = f′(t0)
    a2 = f″(t0)/2
    ...
    a_k = f^{(k)}(t0)/k!
    ...
Hence, within the radius of convergence, the power series expansion of f(t) is

    f(t) = ∑_{n=0}^∞ (f^{(n)}(t0)/n!) (t − t0)^n.

This series is called the Taylor series of the function f .

The above series was derived under the assumption that f is analytic, but we can
form that series for any function whatsoever, provided derivatives of all orders exist
at t0 . If the function is analytic, then it will equal its Taylor series within its radius
of convergence.

Example 163 Let f(t) = e^t and let t0 = 0. Then,

    f′(t) = e^t, f″(t) = e^t, . . . , f^{(n)}(t) = e^t, . . . ,

so

    a_n = e^0/n! = 1/n!.

Hence, the Taylor series for e^t at 0 is

    ∑_{n=0}^∞ (1/n!) t^n.

It is easy to determine the radius of convergence of this series by means of the ratio
test:

    (|t|^{n+1}/(n + 1)!) / (|t|^n/n!) = |t|/(n + 1) → 0 < 1.

Hence, the series converges for all t and the radius of convergence is infinite. We
shall see that e^t equals the series for every t, so f(t) = e^t defines an analytic function
on R. However, this need not always be the case. For example, we might have f(t)
equal to the Taylor series for some but not all values in the interval of convergence
of that series.

Example 164 Let f(t) = cos t and let t0 = 0. Then, f′(t) = − sin t, f″(t) =
− cos t, f‴(t) = sin t, and f^{(4)}(t) = cos t. This pattern then repeats indefinitely
with a period of 4. In particular,

    f^{(n)}(0) = 1 if n is even and a multiple of 4,
    f^{(n)}(0) = −1 if n is even and not a multiple of 4,
    f^{(n)}(0) = 0 otherwise.

Hence, the Taylor series is

    1 − t^2/2 + t^4/4! − t^6/6! + · · · = ∑_{n=0}^∞ (−1)^n t^{2n}/(2n)!.

As above, a simple application of the ratio test shows that this series has infinite
radius of convergence. We shall see below that cos t is in fact analytic and equals
its Taylor series for all t.

Example 165 We showed in the previous section that the expansion



    ln(1 + t) = ∑_{n=1}^∞ (−1)^{n+1} t^n/n

is valid for t in the interval of convergence of the series, (|t| < 1). This tells us that
the function ln(1 + t) is certainly analytic for −1 < t < 1, and the series on the
right is its Taylor series.

If we substitute t − 1 for t in that expansion we get



    ln t = ∑_{n=1}^∞ (−1)^{n+1} (t − 1)^n/n

and this gives us the Taylor series of ln t for t0 = 1.

Taylor’s Formula with Remainder Because analytic functions are so important,


it is useful to have a mechanism for determining when a function is equal to its
Taylor series. To this end, let
    R_N(t) = f(t) − ∑_{n=0}^N (f^{(n)}(t0)/n!) (t − t0)^n.

The remainder, R_N(t), is the difference between the value of the function and the
value of the appropriate partial sum of its Taylor series. To say the series converges
to the function is to say that the limit of the sequence of partial sums is f (t), i.e.,
that RN (t) → 0. To determine if this is the case, it is worthwhile having a method
for estimating RN (t). This is provided by the following formula which is called the
Cauchy form of the remainder.
    R_N(t) = (1/N!) ∫_{t0}^t (t − s)^N f^{(N+1)}(s) ds.    (130)

This formula is in fact valid as long as f^{(N+1)}(s) exists and is continuous in the
interval from t0 to t. We shall derive this formula later in this section, but it is
instructive to look at the first case N = 0. Here
    f(t) = f(t0) + R_0(t)    where R_0(t) = ∫_{t0}^t f′(s) ds

so the formula tells us that the change f (t) − f (t0 ) in the function is the integral of
its rate of change f′(s). The general case may be considered an extension of this for
higher derivatives, but just why the factor (t − s)^N comes up in the integral won't
be clear until you see the proof.

In most cases, formula (130) isn’t much use as written. The point is that we want
to use the partial sum
    ∑_{n=0}^N (f^{(n)}(t0)/n!) (t − t0)^n
as an approximation to the function value f (t). If we could calculate the remainder
term RN (t) exactly, we could also calculate f (t) exactly, and there would be no
need to use an approximation. Hence, in most cases, we want to get an upper bound
on the size of RN (t) rather than an exact value. Suppose it is known that
    |f^{(N+1)}(s)| ≤ M

for s in the interval between t0 and t. Then replacing |f^{(N+1)}(s)| by M in (130)
yields (for t > t0 )
    |R_N(t)| ≤ (1/N!) ∫_{t0}^t (t − s)^N M ds = (M/N!) [ −(t − s)^{N+1}/(N + 1) ]_{t0}^t = (M/(N + 1)!) (t − t0)^{N+1}.
A similar argument works for t < t0 except for some fiddling with signs. The result
which holds in both cases is
    |R_N(t)| ≤ (M/(N + 1)!) |t − t0|^{N+1}.    (131)

The expression on the right is what we get from the next term of the Taylor series
if we replace f^{(N+1)}(t0) by the maximum value of that derivative in the interval from
t0 to t.

Example 166 Consider the Taylor series for f (t) = cos t centered at t0 = 0

    ∑_{n=0}^∞ ((−1)^n/(2n)!) t^{2n}.

Only even numbered terms appear in this series. We want to estimate the error
RN (t) using the inequality (131). We might as well take N = 2K + 1 odd. (Why?)
We know the even derivatives are all ± cos t, so we can take M = 1 as an upper
bound for |f^{(N+1)}(s)| = |f^{(2K+2)}(s)|. Hence, the estimate for the remainder is

    |R_{2K+1}(t)| ≤ |t|^{2K+2}/(2K + 2)!.

(Note that in this case the estimate on the right is just the absolute value of the
next term in the series.) Thus for K = 5, t = 1 radian, we get

    |R_11(1)| ≤ (1/12!) · 1^{12} = 2.08768 × 10^{−9}.
Because the odd numbered terms of the series are zero, the number of non-zero
terms in the approximating sum is 6. According to Mathematica


    ∑_{n=0}^5 (−1)^n/(2n)! = 0.540302304

to 9 decimal places. Use your calculator to see if it agrees that this result is as
accurate as the above error estimate suggests.
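
A short Python sketch (illustrative only; the text's value came from Mathematica) reproduces both the partial sum and the size of the error:

    import math

    # Six non-zero terms of the cosine series at t = 1 (n = 0, ..., 5).
    s = sum((-1)**n / math.factorial(2*n) for n in range(6))
    print(s)                       # 0.5403023037..., i.e., 0.540302304 to 9 places
    print(abs(math.cos(1.0) - s))  # about 2.1e-09, consistent with the bound above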

Example 167 Take f(t) = sin t and t0 = 0. Calculations as above lead to the
Taylor series

    ∑_{n=0}^∞ (−1)^n t^{2n+1}/(2n + 1)!.

All the even numbered terms are zero. We get the estimate for the remainder for
N = 2K + 2 — (why don't we use N = 2K + 1?) —

    |R_{2K+2}(t)| ≤ |t|^{2K+3}/(2K + 3)!.

Example 168 Take f(t) = e^t and t0 = 0. The Taylor series is

    ∑_{n=0}^∞ t^n/n!.

The estimate of the remainder, however, is a bit more involved. The crucial pa-
rameter M is the upper bound of |f^{(N+1)}(s)| = e^s for s in the interval from 0 to
t, but this depends on the sign of t. Suppose first that t < 0. Then, since e^s is an

increasing function of s, its maximum value is attained at the right endpoint, i.e.,
at s = 0. Hence, we should take M = e^0 = 1. Thus,

    |R_N(t)| ≤ (1/(N + 1)!) |t|^{N+1}

is a plausible estimate for the remainder. On the other hand, if t > 0, the same
reasoning shows that we should take M = e^t. This seems a bit circular, since it
is e^t that we are trying to calculate. However, there is nothing preventing us from
using an M larger than e^t if it is easier to calculate. For example, suppose we want
to compute e = e^1 using the series

    ∑_{n=0}^∞ 1^n/n! = ∑_{n=0}^∞ 1/n!.

The maximum value of e^s in this case would be e^1 = e, but that is certainly less
than 3. Hence, we take M = 3 and use the error estimate

    |R_N(1)| ≤ (3/(N + 1)!) |1|^{N+1}.

Thus, for N = 15 (actually 16 terms), we could conclude

    |R_15(1)| ≤ 3/16! = 1.43384 × 10^{−13}.
According to Mathematica, the sum for N = 15 is 2.71828182846. You should try
this on your calculator to see if it agrees.
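
As before, a few lines of Python (a sketch, using only the standard library) confirm the computation:

    import math

    s = sum(1 / math.factorial(n) for n in range(16))  # N = 15, so 16 terms
    print(s)                # 2.718281828459... to machine precision
    print(abs(math.e - s))  # roughly 5e-14, well below the bound 3/16! = 1.43e-13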

We may use the estimate of the error to determine values of t for which the Taylor
series converges to the function. The following lemma is useful in that context.

Lemma 8.15 For any number t,

    lim_{n→∞} t^n/n! = 0.

Proof. We saw earlier (by the ratio test) that the Taylor series for e^t

    ∑_{n=0}^∞ t^n/n!

has infinite radius of convergence. Hence, it converges for all t, and its general term
t^n/n! must approach 0.

For each of the functions we considered above, the estimate of the remainder in-
volved |t|^n/n! or something related to it. Hence, the Lemma tells us R_N(t) → 0 as
N → ∞ in each of these cases, for any t. Thus, we have the following Taylor series
expansions, valid for all t,

    cos t = ∑_{n=0}^∞ (−1)^n t^{2n}/(2n)!

    sin t = ∑_{n=0}^∞ (−1)^n t^{2n+1}/(2n + 1)!

    e^t = ∑_{n=0}^∞ t^n/n!.

In principle, each of these series may be used to calculate the function to any desired
degree of accuracy by using sufficiently many terms. However, in practice this is
only useful for relatively small values of t. For example, we saw that
    |R_{2K+1}(t)| ≤ |t|^{2K+2}/(2K + 2)!
is a plausible estimate of the error when using the Taylor series for cos t. However,
here are some values for t = 10
     n     t^{2n}/(2n)!
     1     50
     2     416.667
     3     1388.89
     4     2480.16
     5     2755.73
     6     2087.68
    ...
    10     41.1032
    ...
    20     1.22562 × 10^{−8}
As you see, the terms can get quite large before the factorial term in the denominator
begins to dominate. Using the series to calculate cos t for large t might lead to some
nasty surprises. (Can you think of a better way to do it?)
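
You can reproduce the table with a short loop, e.g., in Python:

    import math

    # Size of the terms t^{2n}/(2n)! of the cosine series at t = 10.
    for n in (1, 2, 3, 4, 5, 6, 10, 20):
        print(n, 10.0**(2*n) / math.factorial(2*n))

The terms grow to nearly 3000 before the factorial finally takes over around n = 6.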

Derivation of the Cauchy Remainder We assume that all the needed derivatives
exist and are continuous functions. Consider the expression
    R_N(t, s) = f(t) − f(s) − f′(s)(t − s) − (f″(s)/2)(t − s)^2 − · · · − (f^{(N)}(s)/N!)(t − s)^N

as a function of both t and s. Take the derivative of both sides with respect to s.
Note that by the product rule, the derivative of a typical term is
    (∂/∂s)( −(f^{(n)}(s)/n!)(t − s)^n ) = −(f^{(n+1)}(s)/n!)(t − s)^n + (f^{(n)}(s)/(n − 1)!)(t − s)^{n−1}.

If we write these terms in the reverse order, we get

    ∂R_N/∂s = 0 − f′(s) + f′(s) − f″(s)(t − s) + f″(s)(t − s) − · · · − (f^{(N+1)}(s)/N!)(t − s)^N
            = −(f^{(N+1)}(s)/N!)(t − s)^N,
since all terms except the last cancel. Integrating with respect to s, we obtain
    ∫_{t0}^t (∂R_N/∂s) ds = −(1/N!) ∫_{t0}^t f^{(N+1)}(s)(t − s)^N ds.

However, the integral on the left is


    R_N(t, s)|_{s=t0}^{s=t} = R_N(t, t) − R_N(t, t0).

Since RN (t, t) = f (t) − f (t) − 0 − · · · − 0 = 0, it follows that


    −R_N(t, t0) = −(1/N!) ∫_{t0}^t f^{(N+1)}(s)(t − s)^N ds,

which is the desired formula except for the minus signs.

Determining the Radius of Convergence of a Taylor Series We have seen


that generally speaking the Taylor series expansion

    f(t) = ∑_{n=0}^∞ (f^{(n)}(t0)/n!) (t − t0)^n

is valid in some interval t0 − R < t < t0 + R. There is a simple rule for determining
R from f , which applies in many cases.

Consider the example


    f(t) = 1/(1 − t) = 1 + t + t^2 + · · · + t^n + . . .
We know the series on the right converges absolutely for −1 < t < 1 and that the
expansion of the function is valid in this range. (t0 = 0, R = 1.) You will note that
the function in this case has a singularity at t = 1. That suggests the following rule:

Radius of Convergence of a Taylor Series For each point t0 , let R be the


distance to the nearest singularity of the function f . Then the radius of convergence
of the Taylor series for f centered at t0 is R, and the series converges absolutely to
the function in the interval t0 − R < t < t0 + R.

Unfortunately, it is easy to come up with an example for which this rule appears to
fail. Put u = −t^2 in the formula

    1/(1 − u) = 1 + u + u^2 + · · · + u^n + . . .

to get

    1/(1 − (−t^2)) = 1 + (−t^2) + (−t^2)^2 + (−t^2)^3 + · · · + (−t^2)^n + . . .

This is valid for |u| = |−t^2| < 1, i.e., |t| < 1. Cleaning up the minus signs yields

    1/(1 + t^2) = 1 − t^2 + t^4 − t^6 + · · · + (−1)^n t^{2n} + . . .

for −1 < t < 1. The function f(t) = 1/(1 + t^2) does not appear to have a singularity
either at t = 1 or t = −1, which contradicts the rule. However, if you consider the
function of a complex variable z defined by

    f(z) = 1/(1 + z^2),
this function does have singularities at z = ±i where the denominator vanishes.
Also, very neatly, the distance in the complex plane from z0 = 0 to each of these
singularities is R = 1, and that is the proper radius of convergence of the series.
Thus the rule does apply if we extend the function f (t) to a function f (z) of a
complex variable and look for the singularities of that complex function. In many
simple cases, it is obvious how to do this, but in general, the notion of singularity for
functions f (z) of a complex variable and how to go about extending functions f (t)
of a real variable requires extensive theoretical discussion. This will be done in your
complex analysis course. We felt it necessary, however, to enunciate the rule, even
though it could not be fully explained, because it is so important in studying the
behavior of series solutions of differential equations. It is one of many circumstances
in which behavior in the complex plane, which would be invisible just by looking
at real points, can affect the behavior of a mathematical system and of physical
processes modeled by such a system.

A Subtle Point We have been using implicitly the fact that the sum f(t) =
∑_{n=0}^∞ a_n(t − t0)^n of a power series is an analytic function within the interval of
convergence t0 − R < t < t0 + R of that series. If so, then the Taylor series of f(t)
at any other point t1 in (t0 − R, t0 + R) should also converge to f(t). (Look carefully
at the definition of ‘analytic’ at the beginning of this section!) In fact, such is the
case, and the interval of convergence of the Taylor series at t1 extends at least to
the closer of the two endpoints t0 − R, t0 + R of the interval of convergence of the
original power series. One must prove these assertions to justify the conclusion that
the sum of a power series is analytic, but we won't do that in this course. It would


be rather difficult to do without getting into complex variable theory. Hence, you
will have to wait for a course in that subject for a proof.

Exercises for 8.7.

1. Find the Taylor series ∑_{n=0}^∞ (f^{(n)}(t0)/n!) (t − t0)^n for each of the following func-
tions for the indicated value of t0. Do it by finding all the derivatives and
evaluating them at t0. Having done that, see if you can also find the Taylor
series by some simpler method.
(a) f(t) = e^{−t}, t0 = 0.
(b) f(t) = e^t, t0 = 1.
(c) f(t) = 1/t, t0 = 1.
(d) f(t) = cos t, t0 = π/2.
(e) f(t) = ln t, t0 = 1.

2. By differentiating the Taylor series expansion for sin t at t0 = 0, check that


you get the Taylor series expansion for cos t.

3. In complex variable courses, one studies power series where the independent
variable, usually called z, is allowed to assume complex values. It is very
much like the theory outlined in this section. With this in mind, verify
the identity

    e^{it} = cos t + i sin t

by calculating the series on both sides of the equation.

4. How many terms of the Taylor series expansion of cos t at t0 = 0 are needed
to calculate cos(1) to within 5 × 10^{−16}? Hint: The series is an alternating
series.

5. How many terms of the Taylor series expansion of e^t at t0 = 0 are necessary
to calculate e to within 5 × 10^{−16}? How about e^{10} to within 5 × 10^{−16}? Hint:
The series is not an alternating series.

6. Use Taylor series to calculate sin(100) accurately to within 5 × 10^{−4}. Hint:
Don't use the series at t0 = 0.

7. For each of the following functions, determine the radius of convergence of its
Taylor series for t0 = 0.
(a) f(t) = (1 + t)/(t − 2).
(b) f(t) = 2t/(1 + 2t^2).
(c) f(t) = 1/(1 − t^3).
(d) f(t) = 1/((t − 4)(t^2 + 3)).
It is not necessary to actually find the Taylor series.

8.8 More Calculations with Power Series

There are two approaches to finding the Taylor series expansion of an analytic
function. You can start with the function and its derivatives and calculate the
coefficients a_n = f^{(n)}(t0)/n!. This is often quite messy. (Also, it may be quite
difficult to show that the series converges to the function in an appropriate range
by estimating RN (t).) Another approach is to start with the series for a related
function and then derive the desired series by differentiating, integrating or other
manipulations.

Example 169 By substituting u = −t^2 in the series expansion

    1/(1 − u) = 1 + u + u^2 + . . .

we may obtain the expansion

    1/(1 + t^2) = 1 − t^2 + t^4 − t^6 + · · · + (−1)^n t^{2n} + . . . .

Since the first expansion is valid for −1 < u < 1, the second expansion is valid for
0 ≤ t^2 < 1, i.e., −1 < t < 1. In this range, we may integrate the series term by
term to obtain

    ∫_0^t 1/(1 + s^2) ds = t − t^3/3 + t^5/5 − · · · + (−1)^n t^{2n+1}/(2n + 1) + . . .

or after evaluating the left hand side

    tan^{−1} t = t − t^3/3 + t^5/5 − · · · + (−1)^n t^{2n+1}/(2n + 1) + . . .

for −1 < t < 1. You might try to derive this Taylor series by taking derivatives of
tan^{−1} t. You will find it is a lot harder.

The above series also converges for t = 1 by the alternating series test, and its sum
is tan^{−1}(1). (This does not follow simply by the above analysis, since that only
applied in the range −1 < t < 1, but it can be demonstrated by a more refined
analysis.) Thus we get the formula
    π/4 = 1 − 1/3 + 1/5 − 1/7 + · · · + (−1)^n/(2n + 1) + . . .

In principle, this series may be used to compute π to any desired degree of accuracy.
Unfortunately, it converges rather slowly, but similar expansions for other rational
multiples of π converge rapidly. Recently, such series have been used to test the
capabilities of modern computers by computing π to exceptionally large numbers
of decimal places.
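
A small experiment, sketched here in Python, shows just how slow the convergence is:

    import math

    # Partial sums of the series 4(1 - 1/3 + 1/5 - ...) for pi.
    for N in (10, 100, 1000, 10000):
        s = 4 * sum((-1)**n / (2*n + 1) for n in range(N))
        print(N, s, abs(math.pi - s))   # the error shrinks only like 1/N

Ten thousand terms still leave an error in the fourth decimal place.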

The Taylor series does not converge for t = −1, since in that case you get
    −1 − 1/3 − 1/5 − 1/7 − · · · − 1/(2n + 1) − . . .
(Apply the integral test to the negative of the series.) That fact should make
you cautious about trying to extend results to the endpoints of the interval of
convergence.

Example 170 The quantities sin t/t and (1 − cos t)/t occur in a variety of applica-
tions.

The limits lim_{t→0} sin t/t = 1 and lim_{t→0} (1 − cos t)/t = 0 must be determined in
order to calculate the derivatives of the sin and cos functions. We can use the
current theory to get more precise information about how the above ratios behave
near t = 0. We discuss the case of sin t/t. (1 − cos t)/t is similar. Start with

    sin t = t − t^3/3! + t^5/5! − · · · + (−1)^n t^{2n+1}/(2n + 1)! + . . .
which is valid for all t. Multiply by 1/t to obtain

    sin t/t = 1 − t^2/3! + t^4/5! − · · · + (−1)^n t^{2n}/(2n + 1)! + . . .
which is also valid for all t ≠ 0. (The analysis breaks down if t = 0, but the formula
may be considered valid in the sense that the right hand side is 1 and the left hand
side has limit 1.) Note that the series is an alternating series so we may estimate
the error if we stop after N terms by looking at the next term. By taking n = 1,
we conclude that
    sin t/t ≈ 1
for t small with the difference behaving like O(t^2).
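
Since the first omitted term is −t^2/3! = −t^2/6, the ratio (sin t/t − 1)/t^2 should tend to −1/6 as t → 0. A quick numerical check in Python (illustrative only) bears this out:

    import math

    for t in (0.1, 0.01, 0.001):
        print(t, (math.sin(t)/t - 1) / t**2)   # tends to -1/6 = -0.16666...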

Sometimes one can use Taylor series to evaluate an integral quickly and to high ac-
curacy in a case where it is not possible to determine an elementary anti-derivative.
Example 171 We shall calculate ∫_0^{π/2} (sin x/x) dx so that the error is less than 5 × 10^{−4},
which, loosely speaking, says it is accurate to three decimal places. Note that the
function sin x/x does not have an elementary anti-derivative. As in the previous
example,
    sin x/x = 1 − x^2/3! + x^4/5! − · · · + (−1)^n x^{2n}/(2n + 1)! + . . .

and we may consider this expansion valid for all x by using the limit 1 for the value
of sin x/x for x = 0. Integrating term by term yields
    ∫_0^{π/2} (sin x/x) dx = π/2 − (1/(3 · 3!))(π/2)^3 + (1/(5 · 5!))(π/2)^5 − (1/(7 · 7!))(π/2)^7 + . . .
Also, it is possible to show that the right hand side is an alternating series in which
the terms decrease. Indeed, checking the first few values, we have (to 4 decimal
places)
    π/2 = 1.5708
    (1/18)(π/2)^3 = 0.2153
    (1/600)(π/2)^5 = 0.0159
    (1/35280)(π/2)^7 = 0.0007
    ...

The next term is (1/(9 · 9!))(π/2)^9 ≈ 2 × 10^{−5}. Hence, the error in just using the
terms listed is certainly small enough that it won't affect the 3rd decimal place.
Adding up those terms (including signs) yields 1.3707 to 4 decimal places, i.e., 1.371 to three.

The Binomial Theorem An important Taylor series is that for the function
f(t) = (1 + t)^a. That series is called the binomial series. Perhaps you recall from
high school the formulas

    (1 + t)^2 = 1 + 2t + t^2
    (1 + t)^3 = 1 + 3t + 3t^2 + t^3
    (1 + t)^4 = 1 + 4t + 6t^2 + 4t^3 + t^4
    ...

The general case for a a positive integer is the binomial theorem formula
    (1 + t)^a = 1 + at + (a(a − 1)/2) t^2 + · · · + a t^{a−1} + t^a = ∑_{n=0}^a (a choose n) t^n,

where

    (a choose n) = a(a − 1)(a − 2) . . . (a − n + 1)/n! = a!/(n!(a − n)!).
(By convention (a choose n) = 0 if n < 0 or n > a.) If we take a to be a negative integer,
a fraction, or indeed any non-zero real number, then we may still define binomial
coefficients

    (a choose n) = a(a − 1)(a − 2) . . . (a − n + 1)/n!
with the convention that (a choose 0) = 1. Unlike the ordinary binomial coefficients, these
quantities aren't necessarily positive, and moreover they don't vanish for n > a.
Thus, instead of a polynomial, we get a series expansion

    (1 + t)^a = ∑_{n=0}^∞ (a choose n) t^n    (133)

which is valid for −1 < t < 1.

Example 172 For a = −2, the coefficients are

    (−2 choose 0) = 1
    (−2 choose 1) = −2
    (−2 choose 2) = (−2)(−3)/2 = 3
    (−2 choose 3) = (−2)(−3)(−4)/3! = −4
    ...
    (−2 choose n) = (−2)(−3) . . . (−n)(−n − 1)/n! = (−1)^n (n + 1)

Hence,

    (1 + t)^{−2} = ∑_{n=0}^∞ (−1)^n (n + 1) t^n.

(Compare this with the series for (1 + t)^{−2} you obtained in the Exercises for Section
6 by differentiating the series for (1 + t)^{−1}.)

The general binomial theorem (133) was discovered and first proved by Isaac New-
ton. Since then there have been several different proofs. We shall give a rather
tricky proof based on some of the ideas we developed above.
First note that the series ∑_n (a choose n) t^n has radius of convergence 1. For, the ratio

    ( |(a choose n+1)| |t|^{n+1} ) / ( |(a choose n)| |t|^n )
        = ( |a(a − 1) . . . (a − n + 1)(a − n)| / (n + 1)! ) · ( n! / |a(a − 1) . . . (a − n + 1)| ) · |t|
        = ( |a − n| / (n + 1) ) |t|

approaches |t| as n → ∞. Thus, the series converges absolutely for |t| < 1. Let

    f(t) = ∑_{n=0}^∞ (a choose n) t^n    for −1 < t < 1.

Then

    f′(t) = ∑_{n=1}^∞ (a choose n) n t^{n−1}.    (134)

Multiply this by t to obtain

    t f′(t) = ∑_{n=1}^∞ (a choose n) n t^n = ∑_{n=0}^∞ (a choose n) n t^n,

where we put back the n = 0 term which is zero in any case. Similarly, putting
n + 1 for n in (134) yields

    f′(t) = ∑_{n=0}^∞ (a choose n+1) (n + 1) t^n,

so

    f′(t) + t f′(t) = ∑_{n=0}^∞ ( (a choose n+1)(n + 1) + (a choose n) n ) t^n.

However,

    (a choose n+1)(n + 1) + (a choose n) n
        = (a(a − 1) . . . (a − (n + 1) + 1)/(n + 1)!) (n + 1) + (a(a − 1) . . . (a − n + 1)/n!) n
        = (a(a − 1) . . . (a − n + 1)/n!) (a − n) + (a(a − 1) . . . (a − n + 1)/n!) n
        = (a(a − 1) . . . (a − n + 1)/n!) a = (a choose n) a.

Hence,

    (1 + t) f′(t) = a ∑_{n=0}^∞ (a choose n) t^n = a f(t).

In other words, f (t) satisfies the differential equation


    f′(t) − (a/(1 + t)) f(t) = 0.
This is a linear equation, and we know the solution
    f(t) = C e^{∫ a/(1+t) dt} = C e^{a ln(1+t)} = C (1 + t)^a.

To determine C, note that it follows from the definition of f that f(0) = 1. Hence,
1 = C · 1^a = C, and

    f(t) = (1 + t)^a    for −1 < t < 1
as claimed.
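
As a numerical check on this result, the following Python sketch evaluates partial sums of the binomial series for a = 1/2 (the helper binom is ours, written just for this illustration) and compares with the square root:

    import math

    def binom(a, n):
        # Generalized binomial coefficient a(a-1)...(a-n+1)/n!.
        c = 1.0
        for k in range(n):
            c *= (a - k)
        return c / math.factorial(n)

    a, t = 0.5, 0.2
    s = sum(binom(a, n) * t**n for n in range(20))
    print(s, (1 + t)**a)   # both print about 1.095445115...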

Other Manipulations Other manipulations with series are possible. For example,
series may be added, subtracted, multiplied or even divided, but determining the
radius of convergence of the resulting series may be somewhat tricky.

Example 173 We find the Taylor series expansion for e^{2t} cos t for t0 = 0. We have

    e^{2t} = 1 + 2t + 2t^2 + (4/3)t^3 + . . .
    cos t = 1 − (1/2)t^2 + (1/24)t^4 − . . .

so

    e^{2t} cos t = 1 + 2t + 2t^2 + (4/3)t^3 + . . .
                     − (1/2)t^2 − t^3 − . . .

where we have listed only the terms of degree ≤ 3. Combining terms, we obtain

    e^{2t} cos t = 1 + 2t + (3/2)t^2 + (1/3)t^3 + . . .

Example 174 We know that

    1/(1 + u) = 1 − u + u^2 − u^3 + . . .    for |u| < 1.

Since (1 + t)^2 = 1 + 2t + t^2, we may substitute u = 2t + t^2 in the above equation to
get

    1/(1 + t)^2 = 1/(1 + 2t + t^2) = 1 − (2t + t^2) + (2t + t^2)^2 − (2t + t^2)^3 + . . .
                = 1 − 2t − t^2 + 4t^2 + 4t^3 + t^4 − 8t^3 − 12t^4 − 6t^5 − t^6 + . . .
                = 1 − 2t + 3t^2 − 4t^3 + . . .

Note that I stopped including terms where I could be sure that the higher degree
terms would not contribute further to the given power of t. (The term (2t + t^2)^4
would contribute a term involving t^4.) Note also that it is not clear from the above
computations what radius of convergence to specify for the last expansion to be
valid.

You should compare the above expansion with what you would obtain by using the
binomial theorem for (1 + t)^{−2}.

One consequence of the fact that we may manipulate series in this way is that
the sum, difference, or product of analytic functions is again analytic. Similarly,
the quotient of two analytic functions is also analytic, at least if we exclude from
the domain points where the denominator vanishes. Finally, it is even possible to
substitute one series in another, so the composition of two analytic functions is
generally analytic.

Exercises for 8.8.

1. Using the expansion

    tan^{−1}(t) = ∑_{n=0}^∞ (−1)^n t^{2n+1}/(2n + 1)    for −1 < t < 1

calculate tan^{−1}(.01) to within 5 × 10^{−4}. (Note that the series is an alternating
series, so you can use the next term criterion to estimate the error after n
terms.)

2. Find Taylor series expansions by the methods of this section in each of the
following cases.
(a) e^{−t^2} at t0 = 0.
(b) e^t at t0 = 1. Hint: e^t = e^1 e^{t−1}.
(c) (1 + t)^{1/2} at t0 = 0.
(d) (1 − t)^{−3} at t0 = 0.
(e) ln(1 + t)/t at t0 = 0.
(f) tan^{−1}(t)/t at t0 = 0.
(g) t^2 e^{−t} at t0 = 0.
(h) (1 + t)/(1 − t) at t0 = 0.
3. Calculate ∫_0^{0.1} e^{−t^2} dt accurately to within 5 × 10^{−5}.

4. Calculate ∫_0^1 (1 − cos t)/t^2 dt accurately to within 5 × 10^{−4}.
t2
5. A very thin tunnel is drilled in a straight line through the Earth connecting
two points 1/4 mile apart (great circle distance). Assume the Earth is a
perfect sphere with radius exactly 4000 miles. Find the maximum depth of
the tunnel in feet accurately to 10 significant figures. Hint: The distance is
R(1 − cos(θ/2)) where θ is the angle in radians at the center of the sphere
subtended by the great circle arc.

6. The quantity √(1 − v^2/c^2) plays an important role in the special theory of rela-
tivity.

(a) Using the first order term in the binomial expansion, derive the approxi-
mation

    √(1 − v^2/c^2) ≈ 1 − v^2/(2c^2).

(b) If v is 1 percent of c, estimate how accurate the above approximation is.
(c) Does a similar analysis work for

    1/√(1 − v^2/c^2) ≈ 1 + v^2/(2c^2)?

7. (Optional) Try to obtain a power series expansion for sec t by putting

    u = t^2/2 − t^4/4! + t^6/6! − . . .

in

    1/(1 − u) = 1 + u + u^2 + u^3 + . . .

Try to include at least terms up to degree 6.

8.9 Multidimensional Taylor Series

We want to extend the notions introduced in the previous sections to functions of


more than one variable, i.e., f(r) with r in R^n. In this section, we shall concentrate
entirely on the case n = 2 because the notational difficulties get progressively worse
as the number of variables increases. However, with a good understanding of that
case, it is not hard to see how to proceed in higher dimensional cases.

Suppose then that n = 2, and a scalar valued function is given by f (r) = f (x, y).
We want to expand this function in a multidimensional power series. From our
previous discussion of linear approximation, we know it should start

    f(x, y) = f(0, 0) + f_x(0, 0) x + f_y(0, 0) y + . . . ,

and we need to figure out what the higher degree terms should be.

To this end, consider series of the form



    a00 + a10 x + a01 y + a20 x^2 + a11 xy + a02 y^2 + · · · = ∑_{n=0}^∞ ( ∑_{i+j=n} a_ij x^i y^j ).

Notice the way the terms are arranged. A typical term will be some multiple of a
monomial x^i y^j. The coefficient of that monomial is denoted a_ij, where the subscripts
specify the powers of x and y in the monomial. Moreover, we call n = i + j the
total degree of the monomial x^i y^j, and we group all monomials of the same degree
together as ∑_{i+j=n} a_ij x^i y^j.

The above series is centered at (0, 0). We can similarly discuss a multidimensional
power series of the form

    ∑_{n=0}^∞ ( ∑_{i+j=n} a_ij (x − x0)^i (y − y0)^j )

centered at the point (x0, y0). For the moment we will concentrate on series centered
at (0, 0) in order to save writing. Once you understand that, you can get the general
case by substituting x − x0 and y − y0 for x and y and making other appropriate
changes.

A function f(x, y) with domain some open set in R^2 is said to be analytic if at each
point in its domain it may be expanded in a power series centered at that point in
some neighborhood of the point. Suppose then that

    f(x, y) = a00 + a10 x + a01 y + a20 x^2 + a11 xy + a02 y^2 + · · · = ∑_{n=0}^∞ ( ∑_{i+j=n} a_ij x^i y^j )

is valid in some neighborhood of (0, 0). By putting x = 0, y = 0 we see that

    f(0, 0) = a00.

Just as in the case of power series in one variable, term by term differentiation (or
integration) is valid. Hence, we have

    ∂f/∂x = a10 + 2a20 x + a11 y + · · · = ∑_{n=1}^∞ ( ∑_{i+j=n} i a_ij x^{i−1} y^j ).    (135)

Putting x = 0, y = 0 yields

    (∂f/∂x)(0, 0) = a10,

just as we expected. Similarly, (∂f/∂y)(0, 0) = a01.

Take yet another derivative to get

    ∂^2 f/∂x^2 = 2a20 + · · · = ∑_{n=2}^∞ ∑_{i+j=n} i(i − 1) a_ij x^{i−2} y^j.

Hence,

    (∂^2 f/∂x^2)(0, 0) = 2a20.

Similar reasoning applies to partials with respect to y, and continuing in this way,
we discover that

    a_n0 = (1/n!) (∂^n f/∂x^n)(0, 0)
    a_0n = (1/n!) (∂^n f/∂y^n)(0, 0).
However, this still leaves the bulk of the coefficients undetermined. To find these,
we need to find some mixed partials. Thus, from equation (135), we get
    ∂^2 f/∂x∂y = a11 + · · · = ∑_{n=2}^∞ ( ∑_{i+j=n} i j a_ij x^{i−1} y^{j−1} ).

Thus,

    (∂^2 f/∂x∂y)(0, 0) = a11.
∂x∂y
Here the total degree is 1 + 1 = 2. Suppose we consider the case of total degree
r + s = k where r and s are the subdegrees for x and y respectively. Then, it is
possible to show that
    ∂^k f/∂x^r ∂y^s = ∑_{n=k}^∞ ∑_{i+j=n} i(i − 1) . . . (i − r + 1) j(j − 1) . . . (j − s + 1) a_ij x^{i−r} y^{j−s}.

The leading term in the expansion on the right for n = k has only one non-zero
term, the one with i = r and j = s, i.e.,

    r! s! a_rs.

Hence, since k = r + s,

    a_rs = (1/(r! s!)) (∂^{r+s} f/∂x^r ∂y^s)(0, 0).
(You should do all the calculations for a_21 and a_12 to convince yourself that it really
works this way. If you don’t quite see why the calculations work as claimed in the
general case, you should probably just accept them.)

Thus, the power series expansion centered at (0, 0) is

    f(x, y) = ∑_{n=0}^∞ ( ∑_{i+j=n} (1/(i! j!)) (∂^n f/∂x^i ∂y^j)(0, 0) x^i y^j ).

Similarly, the power series expansion centered at (x0, y0) is

    f(x, y) = ∑_{n=0}^∞ ∑_{i+j=n} (1/(i! j!)) (∂^n f/∂x^i ∂y^j)(x0, y0) (x − x0)^i (y − y0)^j.

The right hand side is called the (multidimensional) Taylor series for the function
centered at (x0, y0).

There is another way to express the series. Consider the terms of total degree n

    ∑_{i+j=n} (1/(i! j!)) (∂^n f/∂x^i ∂y^j) (x − x0)^i (y − y0)^j,

where to save writing we omit explicit mention of the fact that the partial derivatives
are to be evaluated at (x0, y0). To make this look a bit more like the case of one
variable, we divide by a common factor of n!. Of course, we must then also multiply
each term by n! to compensate, and the extra n! may be incorporated with the other
factorials to obtain

    n!/(i! j!) = (n choose i).

Hence, the terms of degree n may be expressed

    (1/n!) ∑_{i+j=n} (n choose i) (∂^n f/∂x^i ∂y^j) (x − x0)^i (y − y0)^j.    (136)

Example 175 Let f(x, y) = e^{x+y} and x0 = 0, y0 = 0. Then it is not hard to check
that

    ∂^n f/∂x^i ∂y^j = e^{x+y} = e^0 = 1    at (0, 0).

Hence, the series expansion is

    e^{x+y} = ∑_{n=0}^∞ (1/n!) ∑_{i+j=n} (n choose i) x^i y^j.

An easier way to derive this same series is to start with

    e^u = ∑_{n=0}^∞ (1/n!) u^n

and put u = x + y. Since

    (x + y)^n = ∑_{i+j=n} (n choose i) x^i y^j,

we get the same answer.
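
The double sum is easy to evaluate numerically. This Python sketch (illustrative only) compares the truncated two-variable series with e^{x+y}:

    import math

    def taylor_exp_xy(x, y, N):
        # Sum over total degree n = i + j of x^i y^j / (i! j!).
        total = 0.0
        for n in range(N + 1):
            for i in range(n + 1):
                j = n - i
                total += x**i * y**j / (math.factorial(i) * math.factorial(j))
        return total

    print(taylor_exp_xy(0.3, 0.2, 10))   # about 1.6487212707
    print(math.exp(0.5))                 # 1.6487212707...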

There is yet another way to express formula (136). To save writing, put ∆x =
x − x0, ∆y = y − y0. Consider the operator D = ∆x ∂/∂x + ∆y ∂/∂y. Then, symbolically,

    D^n = (∆x ∂/∂x + ∆y ∂/∂y)^n = ∑_{i+j=n} (n choose i) (∆x ∂/∂x)^i (∆y ∂/∂y)^j
        = ∑_{i+j=n} (n choose i) ∆x^i ∆y^j ∂^n/∂x^i ∂y^j.

Hence,

    ∑_{n=0}^∞ (1/n!) (D^n f)(x0, y0) = ∑_{n=0}^∞ (1/n!) ∑_{i+j=n} (n choose i) ∆x^i ∆y^j (∂^n f/∂x^i ∂y^j)(x0, y0),

which is the expression arrived at earlier in formula (136).

The Error One can analyze the error R_N made if you stop with terms of degree
N for multidimensional Taylor series in a manner similar to the case of ordinary
Taylor series. The exact form of this remainder R_N is not as important as the fact
that as a function of |∆r| = √(∆x^2 + ∆y^2) it is O(|∆r|^{N+1}). Thus, we may write
in general

    f(x + ∆x, y + ∆y) = ∑_{n=0}^N (1/n!) ∑_{i+j=n} (n choose i) (∂^n f/∂x^i ∂y^j) ∆x^i ∆y^j + O(|∆r|^{N+1}).

One important case is N = 1 which gives

    f(x + ∆x, y + ∆y) = f(x, y) + f_x ∆x + f_y ∆y + O(|∆r|^2).

(You should compare this with our previous discussion of the linear approximation
in Chapter III.)

The case N = 2 is also worth writing out explicitly. It may be put in the following
form

    f(x + ∆x, y + ∆y)
        = f(x, y) + f_x ∆x + f_y ∆y + (1/2)(f_xx ∆x^2 + 2 f_xy ∆x ∆y + f_yy ∆y^2) + O(|∆r|^3).
We shall use this relation in the next section.

Derivation of the Remainder Term You may want to skip the following dis-
cussion.

There is a multidimensional analogue of Taylor’s formula with remainder. The


easiest way to get it is to derive it from the one dimensional case as follows. We
concentrate, as above, on the case n = 2. Consider the function f(r) = f(x, y) near
the point r0 = (x0, y0) and put ∆r = ⟨∆x, ∆y⟩ = ⟨x − x0, y − y0⟩. Consider the line
segment from r0 to r = r0 + ∆r parameterized by

    r0 + t∆r,    where 0 ≤ t ≤ 1.

Define a function of one variable

    g(t) = f(r0 + t∆r),    0 ≤ t ≤ 1.

By the 1-dimensional Taylor's formula, we have

    g(t) = g(0) + g′(0) t + (1/2) g″(0) t^2 + · · · + (1/N!) g^{(N)}(0) t^N + R_N(t)

where the remainder R_N(t) is given by a certain integral. Putting t = 1, we obtain

    g(1) = g(0) + g′(0) + (1/2) g″(0) + · · · + (1/N!) g^{(N)}(0) + R_N(1)

where

    R_N(1) = (1/N!) ∫_0^1 (1 − s)^N g^{(N+1)}(s) ds.

We need to express the above quantities in terms of the function f and its partial
derivatives. This is not hard if we make use of the chain rule

    dg/dt = ∇f · (d/dt)(r0 + t∆r) = ∇f · ∆r.

We can write this as a symbolic relation between operators

    (d/dt) g = ∆r · ∇f = (∆x ∂/∂x + ∆y ∂/∂y) f = D f.

Hence,

    (d^n/dt^n) g = D^n f,

so evaluating at t = 0, r = r0 yields

    f(x, y) = ∑_{n=0}^N (1/n!) D^n f(x0, y0) + R_N(1).

To estimate the remainder, consider

    (d^{N+1}/ds^{N+1}) g(s) = ∑_{i+j=N+1} (N+1 choose i) ∆x^i ∆y^j (∂^{N+1} f/∂x^i ∂y^j)(r0 + s∆r).

Assume that within the indicated range

    |∂^{N+1} f/∂x^i ∂y^j| ≤ M.

Then

    |(d^{N+1}/ds^{N+1}) g(s)| ≤ ∑_{i+j=N+1} (N+1 choose i) |∆x|^i |∆y|^j M = M (|∆x| + |∆y|)^{N+1}.

The quantity |∆x| + |∆y| represents the sum of the lengths of the legs of a right
triangle with hypotenuse |∆r|. A little plane geometry will convince you that |∆x| +
|∆y| ≤ √2 |∆r|. Hence,

    |(d^{N+1}/ds^{N+1}) g(s)| ≤ M (√2 |∆r|)^{N+1} = 2^{(N+1)/2} M |∆r|^{N+1}.
N +1

Hence,

    |R_N(1)| ≤ (1/N!) 2^{(N+1)/2} M |∆r|^{N+1} ∫_0^1 (1 − s)^N ds

or

    |R_N(1)| ≤ (2^{(N+1)/2} M/(N + 1)!) |∆r|^{N+1}.    (138)

A similar analysis works for functions f : R^n → R, but the factor in the numerator
will be more complicated for more than two variables.

Exercises for 8.9.

1. Find the Taylor series expansion for each of the following functions up to and
including terms of degree 2.

(a) f(x, y) = 1 + x^2 + y^2 at (x0, y0) = (0, 0).
(b) f(x, y) = (1 + x)/(1 + y) at (x0, y0) = (0, 0).
(c) f(x, y) = x^2 + 3xy + y^3 + 2y^2 − 4y + 6 at (x0, y0) = (1, −2).

2. Find the Taylor series expansion for f(x, y) = sin(x + y) at (0, 0)

(a) by the general formula for the Taylor series of a function of two variables,
(b) by substituting u = x + y in the Taylor series for sin u.

3. Find the Taylor series expansion for f(x, y) = e^{−(x^2+y^2)} at (0, 0) up to and
including terms of degree 3
(a) by the general formula for the Taylor series of a function of two variables,
(b) by substituting u = −(x^2 + y^2) in the Taylor series for e^u,
(c) by multiplying the Taylor series for e^{−x^2} and e^{−y^2}.

8.10 Local Behavior of Functions and Extremal Points

In your one variable calculus course, you learned how to find maximum and mini-
mum points for graphs of functions. We want to generalize the methods you learned

there to functions of several variables. Among such problems, one usually distin-
guishes global problems from local problems. A global maximum (minimum) point
is one at which the function takes on the largest (smallest) possible value for all
points in its domain. A local maximum (minimum) point is one at which the func-
tion is larger (smaller) than its values at all nearby points. Thus, the bottom of
the crater in the volcano Mauna Loa is a local minimum but certainly not a global
minimum. Generally, one may use the methods of differential calculus to determine
local maxima or minima, but usually other considerations must be brought into play
to determine the global maximum or minimum. In this section, we shall concern
ourselves only with local extremal points.

First, let’s review the single variable case. Suppose f : R → R is a function, and
we want to determine where it has local minima. If f is sufficiently smooth, we may
begin to expand it near a point x in a Taylor series
    f(x + ∆x) = f(x) + f′(x)∆x + O(∆x^2)

or

    f(x + ∆x) − f(x) = f′(x)∆x + O(∆x^2).

The terms on the right represented by O(∆x^2) are generally small compared to the
linear term f′(x)∆x, so provided f′(x) ≠ 0 and ∆x is small, we may write

    f(x + ∆x) − f(x) ≈ f′(x)∆x.    (139)
On the other hand, at a local minimum, we must have f (x + ∆x) − f (x) ≥ 0, and
that contradicts (139) since the quantity on the right changes sign when ∆x changes
sign. The only way out of this dilemma is to conclude that f′(x) = 0 at a local
minimum. Similar reasoning applies at a local maximum.

Suppose f′(x) = 0. Then, the approximation (139) is no longer valid, and we must
consider higher order terms. Continuing the Taylor expansion yields

    f(x + ∆x) − f(x) = f′(x)∆x + (1/2) f″(x)∆x^2 + O(∆x^3)
                     = (1/2) f″(x)∆x^2 + O(∆x^3).
Reasoning as above, we conclude that if f″(x) ≠ 0 and ∆x is small, the quadratic
term on the right dominates. That means that f(x + ∆x) ≥ f(x) if f″(x) > 0, in
which case we conclude that x is a local minimum point. Similarly, f(x + ∆x) ≤ f(x)
if f″(x) < 0, in which case we conclude that x is a local maximum point. If
f″(x) = 0, then of course no conclusion is possible. As you know, in that case, x
might be a local minimum point, a local maximum point, or a point at which the
graph has a point of inflection. Taking the Taylor series one step further might
yield additional information, but we will leave that for you to investigate on your
own.

We now want to generalize the above analysis to functions f : R^2 → R. If we
assume that f is sufficiently smooth, then it may be expanded near a point (x, y)

    f(x + ∆x, y + ∆y) = f(x, y) + ∇f · ∆r + O(|∆r|^2)

which may be rewritten

    f(x + ∆x, y + ∆y) − f(x, y) = ∇f · ∆r + O(|∆r|^2).

As above, if |∆r| is small enough, and ∇f ≠ 0, then the linear term on the right
dominates. Since in that case the linear term can be either positive or negative
depending on ∆r, it follows that the quantity on the left cannot be of a single sign.
Hence, if the point (x, y) is either a local maximum or a local minimum point, we
must necessarily have ∇f = 0. In words, at a local extremal point of the function f ,
the gradient ∇f = 0.

A point (x, y) at which ∇f = 0 is called a critical point of the function. For a


smooth function, every local maximum or minimum is a critical point, but the
converse is not generally true. (This parallels the single variable case.) All the
assertion ∇f = 0 tells us is that the tangent plane to the graph of the function is
horizontal at the critical point.

Examples Let f(x, y) = x^2 + 2x + y^2 − 6y. Then

    ∇f = ⟨f_x, f_y⟩ = ⟨2x + 2, 2y − 6⟩.

Setting this to zero yields 2x + 2 = 0, 2y − 6 = 0, or x = −1, y = 3. Hence, (−1, 3)
is a critical point. We can see that this point is actually a local minimum point by
completing the square:

    f(x, y) = x^2 + 2x + y^2 − 6y = x^2 + 2x + 1 + y^2 − 6y + 9 − 10 = (x + 1)^2 + (y − 3)^2 − 10.

The graph of this function is a circular paraboloid (bowl) with vertex (−1, 3, −10),
and it certainly has a minimum at (−1, 3). (It is even a global minimum!)

Consider, on the other hand, the function f(x, y) = x^2 − y^2. Setting ∇f = 0 yields

    ⟨2x, −2y⟩ = ⟨0, 0⟩

or x = y = 0. Hence, the origin is a critical point. However, we know the graph of


the function is a hyperbolic paraboloid with a saddle point at (0, 0). Near a saddle
point, the function increases in some directions and decreases in others, so such a
point is neither a local maximum nor a local minimum.


As in the single variable case, the analysis may be carried further by considering
second derivatives and quadratic terms. Suppose ∇f = 0, and extend the Taylor
expansion to include the quadratic terms. We have

    f(x + ∆x, y + ∆y) − f(x, y) = (1/2)(f_xx ∆x^2 + 2 f_xy ∆x ∆y + f_yy ∆y^2) + O(|∆r|^3).
In most circumstances, the quadratic terms will dominate, so we will be able to
determine the local behavior by examining them. However, because the expression
is so much more complicated than in the single variable case, the theory is more
complicated. To save writing, put A = f_xx, B = f_xy, C = f_yy, u = ∆x, and v = ∆y.
Then we want to look at the graph of

    z = Q(u, v) = Au^2 + 2Buv + Cv^2

in the neighborhood of the origin. Such an expression is called a quadratic form.

Before considering the general case we consider some examples.

(a) z = 2u^2 + v^2. The graph is a bowl shaped surface which opens upward, so the
origin is a local minimum. Similarly, the graph of z = −u^2 − 5v^2 is a bowl shaped
surface opening downward, and the origin is a local maximum.

(b) z = uv. The graph is a saddle shaped surface, and the origin is neither a local
minimum nor a local maximum.

(c) z = u^2. The graph is a ‘trough’ centered on the v-axis and opening upward.
The entire v-axis is a line of local minimum points. Similarly, the graph of z = −v^2
is a trough centered on the u-axis and opening downward.

We now consider the general case. Suppose first that A ≠ 0. Multiply through by
A and then complete the square:

    A Q(u, v) = A^2 u^2 + 2ABuv + B^2 v^2 − B^2 v^2 + ACv^2
              = (Au + Bv)^2 + (AC − B^2) v^2.

The first term is a square, so it is always non-negative, and the sign behavior of
the expression depends on the quantity ∆ = AC − B^2. ∆ is called the discriminant
of the quadratic form.
of the quadratic form.

If ∆ > 0, the expression is a sum of squares so it is always non-negative. Even better,
it can't be zero unless u = v = 0. For, if the sum is zero, it follows Au + Bv = 0
and also v = 0. Since we assumed A ≠ 0, that means that u = 0. Hence, if A ≠ 0
and ∆ > 0, then A Q(u, v) is always positive (except for u = v = 0), and Q(u, v)
has the same sign as A. Hence, the origin is a local minimum if A > 0, and it is a
local maximum if A < 0. In this case the form is called definite.

If ∆ < 0, the expression is a difference of squares, so it is sometimes positive and
sometimes negative. The graph of z = A Q(u, v) = (Au + Bv)^2 − |∆| v^2 is a saddle
shaped surface which intersects the u, v-plane in the locus

    Au + Bv = ±√|∆| v,

which is a pair of lines intersecting at the origin. These lines divide the u, v-plane
into four regions with the surface above it in two of the regions and below it in the
other two. In this case, the form is called indefinite.

[Figure: the lines Au + (B + √∆̄)v = 0 and Au + (B − √∆̄)v = 0, where ∆̄ = |∆|, divide the u, v-plane into four regions in which Q is alternately positive and negative.]

If ∆ = 0,

    z = A Q(u, v) = (Au + Bv)^2

defines a trough shaped surface which intersects the u, v-plane along the line Au +
Bv = 0. The graph of z = Q(u, v) either lies above or below the u, v-plane,
depending on the sign of A. If ∆ = AC − B^2 = 0, we say that the quadratic
form is degenerate.

If A = 0 and B ≠ 0, then ∆ = AC − B^2 = −B^2 < 0, and

    Q(u, v) = 2Buv + Cv^2 = (2Bu + Cv) v

can be positive or negative. This just amounts to a special case of the analysis
above for an indefinite form.

If A = B = 0 and C ≠ 0, then ∆ = 0, and

    Q(u, v) = Cv^2,

and this is a special case of a degenerate form.

If A = B = C = 0, there isn’t really anything to consider since we don’t really have


a quadratic form.

The above analysis tells us how the quadratic terms in

    f(x + ∆x, y + ∆y) − f(x, y) = (1/2)(f_xx ∆x^2 + 2 f_xy ∆x ∆y + f_yy ∆y^2) + O(|∆r|^3)

behave. If |∆r| is small, the behavior on the right is determined primarily by
Q(∆x, ∆y), with the cubic and higher order terms providing a slight distortion. In
particular we have

Sufficiency Conditions for Local Extrema Assume f : R^2 → R has continuous
second partials, and that (x, y) is a critical point. Let

    ∆ = f_xx f_yy − f_xy^2

where everything is evaluated at the critical point.

(a) If ∆ > 0, then the critical point is a local minimum if f_xx > 0, or a local
maximum if f_xx < 0. (The quadratic approximation is an elliptic paraboloid.)

(b) If ∆ < 0, then the critical point is neither a local maximum nor a local minimum.
(The quadratic approximation is a saddle.)

(c) If ∆ = 0, any of the above possibilities might occur.

The proofs of these assertions require a careful analysis in which the different order
terms are compared to one another. We shall not go into these matters here in detail,
but a few remarks might be enlightening. The reason why case (c) is ambiguous is
not hard to see. If ∆ = 0, the graph of the degenerate quadratic form is a trough
which contains a line of minimum (or maximum) points. Along that line, the cubic
order terms will control what happens. We could have either a local maximum or a
local minimum or neither depending on the shape of the trough and the contribution
from the cubic terms. In case ∆ < 0, it is harder to see how the cubic terms perturb
the saddle shaped graph for the quadratic form, but it may still be thought of as a
saddle.

There is one slightly confusing point in the use of the above formulas. It appears
that fxx is playing a special role in (a). However, it is in fact true in case (a) that
fxx and fyy have the same sign. Otherwise, ∆ = fxx fyy − fxy² < 0.

Example 176 We shall classify the critical points of the function defined by
f (x, y) = x sin y. First, to find the critical points, solve

∂f/∂x = sin y = 0
∂f/∂y = x cos y = 0.

From the first equation, we get y = kπ where k is any integer. Since cos(kπ) ≠ 0,
the second equation yields x = 0. Hence, there are infinitely many critical points
(0, kπ) where k ranges over all possible integers.

To apply the criteria above, we need to calculate the discriminant. We have

fxx = 0
fyy = −x sin y
fxy = cos y
∆ = 0 − cos² y = − cos² y.

At (0, kπ), we have ∆ = −(±1)² = −1 < 0. Hence, (b) applies and every critical
point is a saddle point.

Example 177 Let


f(x, y) = x³ + y³ − 3x² + 3y² + 2.

To find the critical points, solve

fx = 3x² − 6x = 0
fy = 3y² + 6y = 0.

The solutions are x = 0, 2 and y = 0, −2, so the critical points are

(0, 0), (0, −2), (2, 0), (2, −2).


8.10. LOCAL BEHAVIOR OF FUNCTIONS AND EXTREMAL POINTS 433

To classify these, calculate

fxx = 6x − 6
fyy = 6y + 6
fxy = 0
∆ = 36(x − 1)(y + 1).

At (0, 0), ∆ = −36 < 0, so (0, 0) is a saddle point.

At (0, −2), ∆ = 36 > 0. Since fxx = −6 < 0, (0, −2) is a local maximum.

At (2, 0), ∆ = 36 > 0. Since fxx = 6 > 0, (2, 0) is a local minimum.

At (2, −2), ∆ = −36 < 0, so (2, −2) is a saddle point.
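(If you have computer resources available, you can let them do the algebra in
examples like this. The following short Python sketch, assuming the sympy
library is available, applies the discriminant test stated above to the function of
Example 177; the function and variable names are our own choices.)

    import sympy as sp

    x, y = sp.symbols('x y')
    f = x**3 + y**3 - 3*x**2 + 3*y**2 + 2

    # Critical points: solve f_x = f_y = 0.
    fx, fy = sp.diff(f, x), sp.diff(f, y)
    points = sp.solve([fx, fy], [x, y], dict=True)

    # Discriminant test: Delta = f_xx f_yy - f_xy^2 at each critical point.
    fxx, fyy, fxy = sp.diff(f, x, 2), sp.diff(f, y, 2), sp.diff(fx, y)
    for p in points:
        D = (fxx * fyy - fxy**2).subs(p)
        if D > 0:
            kind = "local minimum" if fxx.subs(p) > 0 else "local maximum"
        elif D < 0:
            kind = "saddle point"
        else:
            kind = "test gives no information"
        print((p[x], p[y]), kind)

(Run as written, this reproduces the classification just obtained by hand.)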

Example 178 Let


f(x, y) = x³ + y².
We have
fx = 3x², fy = 2y,
so (0, 0) is the only critical point. Moreover

fxx = 6x, fyy = 2, and fxy = 0.

Hence, ∆ = 12x = 0 at (0, 0). Hence, the criteria yield no information in this case.
In fact, the point is not a maximum, a minimum, or a saddle point.

The example f(x, y) = x⁴ + y² is very similar. It also has a degenerate critical point
at (0, 0), but in this case it is a minimum point.

If the quadratic terms vanish altogether, then the cubic terms dominate and various
interesting possibilities arise. One of these is called a ‘monkey saddle’, and you can
imagine its shape from its name. (See the Exercises.)

Higher Dimensional Cases For a smooth function f : Rⁿ → R with n > 2,


many of the above considerations still apply. If ∇f = 0 at a point r, then the point
is called a critical point. Local maxima and minima occur at critical points, but
there are many other possibilities. Examination of the quadratic terms may allow
one to determine precisely what happens, but even in the case n = 3 this can be
much more complicated than the case of functions of two variables.

Exercises for 8.10.

1. Find all the critical points of each of the following functions.

(a) f(x, y) = x² + y² − 6x + 4y − 3.
(b) f(x, y) = x² − 2x + y³.
(c) f(x, y) = e^(x² + 2x − y² + 6y).
(d) f(x, y) = sin x cos y.
(e) f(x, y, z) = x² + 2xy + y² + z² − 4z.

2. For each of the following functions find and classify each of its critical points.
(If the theory described in the section gives no information, don’t try to ana-
lyze the critical point further.)
(a) f(x, y) = x² + y² − 6x + 4y − 3.
(b) f(x, y) = 1 − 2xy − 2x − 2y − 3y².
(c) f(x, y) = x³ + y³ − 3xy.
(d) f(x, y) = 10 − 3x² − 3y² + 2xy − 4x − 4y.
(e) f(x, y) = x² + y² + 2xy + y³.

3. Sketch the graph of f(x, y) = x⁴ + y². Pay particular attention to the neigh-
borhood of its one critical point at the origin. Is that critical point a local
maximum or minimum? Explain.

4. The origin is the only critical point of f(x, y) = x² + y⁵. Is it a local maximum
or minimum? Explain.

5. (a) Show that (0, 0) is the only critical point of f(x, y) = x³ − xy².
(b) Show that the discriminant ∆ = 0.
(c) By writing f(x, y) = x(x − y)(x + y), show that the plane is divided into
6 wedge shaped regions by the lines x = 0, x = y, and x = −y. Examine the
sign of f in each of these regions. The graph of this function is often called a
‘monkey saddle’.
6. Find and classify the critical points of f(x, y) = x⁴ − 2x² + y⁴ − 8y².

7. Consider the problem of minimizing the square of the distance from the point
(0, 0, 1) to the point (x, y, z) on the hyperbolic paraboloid z = x² − y². See
what you can conclude about that problem by the methods of this section.
Chapter 9

Series Solution of Differential Equations

9.1 Power Series Solutions at Ordinary Points

We want to solve equations of the form

y'' + p(t)y' + q(t)y = 0     (140)
where p(t) and q(t) are analytic functions on some interval in R. We saw several
examples of such equations in the previous chapter, and we shall see others. They
arise commonly in solving physical problems.

The functions p(t) and q(t) are supposed to be analytic on their domains, but they
often will have singularities at other points. We assume that these singularities
occur at isolated points of R. Points where the functions are analytic are called
ordinary points and other points are called singular points.

Example 179 The coefficients of Legendre’s equation

y'' − (2t/(1 − t²)) y' + (α(α + 1)/(1 − t²)) y = 0

are analytic on any interval not containing the points t = ±1. The points t = ±1
are singular points.

Let t₀ be an ordinary point of (140). It makes sense to look for a solution y = y(t)
which is analytic in an interval containing t₀. Such a solution will have a power
series centered at t₀

y = ∑_{n=0}^∞ a_n (t − t₀)ⁿ     (141)

with a positive radius of convergence R. Hence, one way to try to solve the equation
is to substitute the series (141) in the equation (140) and see if we can determine
the coefficients an . Generally, these coefficients may be expressed in terms of the
first two, a₀ = y(t₀) and a₁ = y'(t₀). (See Chapter VIII, Section 1.)

We illustrate the method by an example.

Example 179, continued Consider

y'' − (2t/(1 − t²)) y' + (α(α + 1)/(1 − t²)) y = 0

in the neighborhood of the point t₀ = 0. Before trying a series solution, it is better
in this case to multiply through by the factor 1 − t² so as to avoid fractions. (This
is not absolutely necessary, but in most cases it simplifies the calculations.) This
yields
(1 − t²)y'' − 2ty' + α(α + 1)y = 0.     (142)
We try a power series of the form y = ∑_{n=0}^∞ a_n tⁿ and calculate

y = ∑_{n=0}^∞ a_n tⁿ
y' = ∑_{n=1}^∞ n a_n t^{n−1}
y'' = ∑_{n=2}^∞ n(n − 1) a_n t^{n−2}.

We now write down the terms in the differential equation and renumber the indices
so that we may collect coefficients of common powers of t.

y'' = ∑_{n=2}^∞ n(n − 1) a_n t^{n−2} = ∑_{n=0}^∞ (n + 2)(n + 1) a_{n+2} tⁿ
        (replacing n by n + 2)
−t²y'' = ∑_{n=2}^∞ (−1) n(n − 1) a_n tⁿ = ∑_{n=0}^∞ (−1) n(n − 1) a_n tⁿ
        (the terms for n = 0, 1 are 0)
−2ty' = ∑_{n=1}^∞ (−2) n a_n tⁿ = ∑_{n=0}^∞ (−2) n a_n tⁿ
        (the term for n = 0 is 0)
α(α + 1)y = ∑_{n=0}^∞ α(α + 1) a_n tⁿ.

(Make sure you understand each step!) Now add everything up. On the left side,
we get zero, and on the right side we collect terms involving the same power of t:

0 = ∑_{n=0}^∞ [(n + 2)(n + 1)a_{n+2} − n(n − 1)a_n − 2na_n + α(α + 1)a_n] tⁿ.

A power series in t is zero if and only if the coefficient of each power of t is zero, so
we obtain

(n + 2)(n + 1)a_{n+2} − n(n − 1)a_n − 2na_n + α(α + 1)a_n = 0 for n ≥ 0
(n + 2)(n + 1)a_{n+2} = [n(n − 1) + 2n − α(α + 1)]a_n
(n + 2)(n + 1)a_{n+2} = [n² + n − α(α + 1)]a_n
(n + 2)(n + 1)a_{n+2} = (n + α + 1)(n − α)a_n

a_{n+2} = (n + α + 1)(n − α)/[(n + 2)(n + 1)] · a_n for n ≥ 0.
The last equation is an example of what is called a recurrence relation. Once a0
and a1 are known, it is possible iteratively to determine any an with n ≥ 2.
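(In the spirit of the remarks in the preface about using computers, here is a
minimal Python sketch, with a function name of our own choosing, which iterates
the recurrence relation numerically. Any programming language would do as well.)

    def legendre_coeffs(alpha, a0, a1, N):
        # a[n] holds the coefficient a_n; iterate
        # a_{n+2} = (n + alpha + 1)(n - alpha) / ((n + 2)(n + 1)) * a_n.
        a = [a0, a1]
        for n in range(N - 1):
            a.append((n + alpha + 1) * (n - alpha) / ((n + 2) * (n + 1)) * a[n])
        return a

    # For alpha = 1, a0 = 1, a1 = 0 this reproduces a_{2k} = -1/(2k - 1),
    # the rule derived just below.
    print(legendre_coeffs(1, 1.0, 0.0, 8))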

It is better, of course, to find a general formula for an , but this is not always
possible. In the present example, it is possible to find such a formula, but it is
very complicated. (See Differential Equations by G. F. Simmons, Section 27.) We
derive the formula for two particular values of α. First take α = 1. The recurrence
relation is

a_{n+2} = (n + 2)(n − 1)/[(n + 2)(n + 1)] · a_n = (n − 1)/(n + 1) · a_n for n ≥ 0.
Thus, for even n,

n = 0:  a₂ = −a₀
n = 2:  a₄ = (1/3)a₂ = −(1/3)a₀
n = 4:  a₆ = (3/5)a₄ = (3/5)(−(1/3)a₀) = −(1/5)a₀
n = 6:  a₈ = (5/7)a₆ = (5/7)(−(1/5)a₀) = −(1/7)a₀.

The general rule is now clear:

a_n = −a₀/(n − 1) for n = 2, 4, 6, . . . .

This could also be written

a_{2k} = −a₀/(2k − 1) for k = 1, 2, 3, . . . .

For n odd, we get for n = 1,

a₃ = (0/2)a₁ = 0,

so a_n = 0 for every odd n ≥ 3.

We may now write out the general solution:

y = ∑_{n=0}^∞ a_n tⁿ
  = a₀ + ∑_{k=1}^∞ (−a₀/(2k − 1)) t^{2k} + a₁ t
  = a₀ (1 − ∑_{k=1}^∞ t^{2k}/(2k − 1)) + a₁ t.

Define y₁ = 1 − ∑_{k=1}^∞ t^{2k}/(2k − 1) and y₂ = t. Then, we have shown that
any solution may be written

y = a₀ y₁ + a₁ y₂.     (143)

Moreover, it is clear that y₁ and y₂ form a linearly independent pair. (Even if it
weren’t obvious by inspection, we could always verify it by calculating the Wron-
skian at t = 0.)

Hence, (143) gives a general solution of Legendre’s differential equation for α = 1.

(Since y₂ = t is a solution, you could use the method of reduction of order to find
a second independent solution. This yields

y = 1 − (t/2) ln((1 + t)/(1 − t)) = 1 − (t/2)(ln(1 + t) − ln(1 − t)).

See if you can expand this out and get y₁.)
Let’s see what happens for α = −1/2. The recurrence relation may be rewritten

a_{n+2} = (n + 1/2)²/[(n + 2)(n + 1)] · a_n
        = (2n + 1)²/[4(n + 2)(n + 1)] · a_n for n ≥ 0.

(Both numerator and denominator were multiplied by 4.) Thus, for n even, we have

n = 0:  a₂ = 1²/(4 · 2) · a₀
n = 2:  a₄ = 5²/(4 · 4 · 3) · a₂ = (5 · 1)²/(4² · 4!) · a₀
n = 4:  a₆ = 9²/(4 · 6 · 5) · a₄ = (9 · 5 · 1)²/(4³ · 6!) · a₀
n = 6:  a₈ = 13²/(4 · 8 · 7) · a₆ = (13 · 9 · 5 · 1)²/(4⁴ · 8!) · a₀.

Note that 4³ · 6! = 2⁶ · 6! and 4⁴ · 8! = 2⁸ · 8!. We begin to see the following rule
for n even:

a_n = [(2n − 3)(2n − 7) · · · 5 · 1]²/(2ⁿ n!) · a₀ for n = 2, 4, 6, . . . .

In the numerator, we start with 2n − 3, reduce successively by 4 until we get down
to 1, multiply all those numbers together, and square the whole thing.

For n odd, the same analysis yields a similar result:

a_n = [(2n − 3)(2n − 7) · · · 3]²/(2^{n−1} n!) · a₁ for n = 3, 5, 7, . . . .
As above, we may define

y₁ = 1 + ∑_{n even, n > 0} [(2n − 3)(2n − 7) · · · 5 · 1]²/(2ⁿ n!) · tⁿ
y₂ = t + ∑_{n odd, n > 1} [(2n − 3)(2n − 7) · · · 3]²/(2^{n−1} n!) · tⁿ

and

y = ∑_{n=0}^∞ a_n tⁿ = a₀ y₁(t) + a₁ y₂(t).

Neither y₁ nor y₂ is a polynomial. For any power series in t, the constant term is
the value of the sum at t = 0 and the coefficient of t is the value of its derivative at
t = 0. Hence,

y₁(0) = 1,  y₁'(0) = 0
y₂(0) = 0,  y₂'(0) = 1.

(Why?) Hence, the Wronskian W(0) = 1 · 1 − 0 · 0 = 1 ≠ 0, and it follows that
the two solutions form a linearly independent pair. (Note also that it is fairly clear
that neither is a constant multiple of the other, since y₁ starts with 1 and y₂ starts
with t.)
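(As a check on the algebra, the following Python sketch, written for illustration
only, compares the closed-form rule for the even coefficients against the recurrence
relation itself.)

    import math

    # Recurrence for alpha = -1/2:
    # a_{n+2} = (2n + 1)^2 / (4(n + 2)(n + 1)) * a_n, with a_0 = 1, a_1 = 0.
    a = [1.0, 0.0]
    for n in range(18):
        a.append((2 * n + 1) ** 2 / (4 * (n + 2) * (n + 1)) * a[n])

    # Closed form for even n: [(2n-3)(2n-7)...1]^2 / (2^n n!).
    for n in range(2, 12, 2):
        prod, j = 1.0, 2 * n - 3
        while j >= 1:
            prod *= j
            j -= 4
        closed = prod ** 2 / (2 ** n * math.factorial(n))
        print(n, a[n], closed)   # the two columns should agree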

Example 180 Consider the equation y'' − 2ty' + 2y = 0. The coefficients p(t) = −2t
and q(t) = 2 have no singularities, so every point is an ordinary point. To make
the problem a trifle more interesting, we find a series solution centered at t₀ = 1,
i.e., we try a series of the form y = ∑_{n=0}^∞ a_n (t − 1)ⁿ. To simplify the algebra,
we introduce a new variable s = t − 1. Then, since t = s + 1, the equation may be
rewritten

y'' − 2(s + 1)y' + 2y = 0.

Subtle point: y' = dy/dt = dy/ds, and similarly for y''. (Why?) Hence, we are not
begging the question in the above equation by treating s as the independent variable!

We proceed exactly as before, but we shall skip some steps.

y'' = ∑_{n=2}^∞ n(n − 1) a_n s^{n−2} = ∑_{n=0}^∞ (n + 2)(n + 1) a_{n+2} sⁿ
−2sy' = ∑_{n=1}^∞ (−2n a_n) sⁿ = ∑_{n=0}^∞ (−2n a_n) sⁿ
−2y' = ∑_{n=1}^∞ (−2n a_n) s^{n−1} = ∑_{n=0}^∞ (−2(n + 1) a_{n+1}) sⁿ
2y = ∑_{n=0}^∞ (2a_n) sⁿ.

Adding up and comparing corresponding coefficients of sⁿ yields

0 = (n + 2)(n + 1)a_{n+2} − 2n a_n − 2(n + 1)a_{n+1} + 2a_n
(n + 2)(n + 1)a_{n+2} = 2(n + 1)a_{n+1} + (2n − 2)a_n

a_{n+2} = 2[(n + 1)a_{n+1} + (n − 1)a_n]/[(n + 2)(n + 1)],  n ≥ 0.
In this case, we cannot separate the terms into even and odd terms, and the general
term is not so easy to determine. Here are some of the terms:

n = 0:  a₂ = 2(a₁ − a₀)/2 = a₁ − a₀
n = 1:  a₃ = 2(2a₂ + 0 · a₁)/(3 · 2) = (2/3)(a₁ − a₀)
n = 2:  a₄ = 2(3a₃ + 1 · a₂)/(4 · 3) = (1/6)(2(a₁ − a₀) + (a₁ − a₀)) = (1/2)(a₁ − a₀)
n = 3:  a₅ = 2(4a₄ + 2a₃)/(5 · 4) = · · · = (1/3)(a₁ − a₀)
...
I gave up trying to find the general terms. The general solution is

y = ∑_{n=0}^∞ a_n sⁿ
  = a₀ + a₁s + (a₁ − a₀)s² + (2/3)(a₁ − a₀)s³ + (1/2)(a₁ − a₀)s⁴ + (1/3)(a₁ − a₀)s⁵ + · · ·
  = a₀ (1 − s² − (2/3)s³ − (1/2)s⁴ − (1/3)s⁵ − · · ·)        [the y₁ part]
  + a₁ (s + s² + (2/3)s³ + (1/2)s⁴ + (1/3)s⁵ + · · ·).       [the y₂ part]

Hence, if we put back s = t − 1, we may write

y = a0 y1 (t) + a1 y2 (t)

where

y₁(t) = 1 − (t − 1)² − (2/3)(t − 1)³ − (1/2)(t − 1)⁴ − (1/3)(t − 1)⁵ − · · ·
y₂(t) = (t − 1) + (t − 1)² + (2/3)(t − 1)³ + (1/2)(t − 1)⁴ + (1/3)(t − 1)⁵ + · · · .
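(Since no closed form presented itself, this is a natural place to let a computer
generate coefficients; compare Exercise 4 below. A minimal Python sketch, with
names of our own choosing:)

    def coeffs(a0, a1, N):
        # Iterate a_{n+2} = 2[(n+1)a_{n+1} + (n-1)a_n] / ((n+2)(n+1)).
        a = [a0, a1]
        for n in range(N - 1):
            a.append(2 * ((n + 1) * a[n + 1] + (n - 1) * a[n])
                     / ((n + 2) * (n + 1)))
        return a

    # With a0 = 0, a1 = 1 these are the coefficients of y2 above.
    print(coeffs(0.0, 1.0, 6))   # [0.0, 1.0, 1.0, 0.666..., 0.5, 0.333..., ...]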

The radius of convergence of a series solution An extension of the basic


existence and uniqueness theorem—which we won’t try to prove in this course—
tells us that a solution of y 00 + p(t)y 0 + q(t)y = 0 is analytic at least where the
coefficients p(t) and q(t) are analytic. (The solution could be analytic on an even
larger domain.) We may use this fact to study the radius of convergence of a series
solution of a linear differential equation. Namely, in Chapter VIII, Section 8,

we described a rule for determining the radius of convergence of the Taylor series of
a function: calculate the distance to the nearest singularity. Unfortunately, there
was an important caveat to keep in mind. You have to look also at singularities in
the complex plane.

Example 179, revisited Legendre’s equation

y'' − (2t/(1 − t²)) y' + (α(α + 1)/(1 − t²)) y = 0

has singularities when 1 − t² = 0, i.e., at t = ±1. The distance from t₀ = 0 to the
nearest singularity is 1. Hence, the radius of convergence of a power series solution
centered at t₀ = 0 is at least 1, but it may be larger. Consider in particular the
case α = 1. One of the solutions, y₂(t) = t, is a polynomial and so it converges for
all t. Its radius of convergence is infinite. The other solution is

y₁(t) = 1 − ∑_{k=1}^∞ t^{2k}/(2k − 1)

and it is easy to check by the ratio test that its radius of convergence is precisely 1.

Example 181 Consider

y'' + (3t/(1 + t²)) y' + (1/(1 + t²)) y = 0.

The coefficients have singularities where 1 + t² = 0, i.e., at t = ±i. The distance
from t₀ = 0 to ±i in the complex plane is 1. Hence, series solutions of this equation
centered at t₀ = 0 will have radius of convergence at least 1.

(See Braun, Section 2.8, Example 2 for solutions of this equation.)



More complicated coefficients In all the above examples, the coefficients were
quite simple. They were what we call rational functions, i.e., quotients of polynomi-
als. For such functions, it is possible to multiply through by a common denominator
and thereby deal only with polynomial coefficients. Of course, in general that may
not be possible. For example, for the equation

y'' + (sin t)y' + (cos t)y = 0

the simplification employed previously won’t work. However, the method still works.
Namely, we may put the expansions

sin t = ∑_{n=0}^∞ (−1)ⁿ t^{2n+1}/(2n + 1)!
cos t = ∑_{n=0}^∞ (−1)ⁿ t^{2n}/(2n)!

along with the expansions

y = ∑_{n=0}^∞ a_n tⁿ
y' = ∑_{n=1}^∞ n a_n t^{n−1}
y'' = ∑_{n=2}^∞ n(n − 1) a_n t^{n−2}

in the differential equation, multiply everything out, and collect coefficients as be-
fore. Although this is theoretically feasible, you wouldn’t want to try it unless it
were absolutely necessary. Fortunately, that is seldom the case.

Exercises for 9.1.

1. Find a general solution of y'' + ty' + y = 0 in the form a₀y₁(t) + a₁y₂(t) where
y₁(t) and y₂(t) are appropriate power series centered at t₀ = 0.

2. The equation y'' − 2ty' + 2αy = 0 is called Hermite’s equation. (It arises among
other places in the study of the quantum mechanical analogue of a harmonic
oscillator.) Find a general series solution as above in the case α = 1. One of
the two series is a polynomial.

3. The equation (1 − t²)y'' − ty' + α²y = 0 is called Chebyshev’s equation. Its


solutions are used in algorithms for calculating functions in computers and
electronic calculators. Find a general series solution as above for α = 1. One
of the two series is a polynomial.

4. Write a computer program which computes the coefficients a_n from the re-
currence relation

a_{n+2} = 2[(n + 1)a_{n+1} + (n − 1)a_n]/[(n + 2)(n + 1)],  n ≥ 0.

Use the program to determine a₁₀₀ given a₀ = 1, a₁ = 0.

5. (a) Show that

1 − (t/2)(ln(1 + t) − ln(1 − t)) = 1 − ∑_{k=1}^∞ t^{2k}/(2k − 1).

The right hand side is the series for one of the solutions of Legendre’s Equation
with α = 1. (The other solution is t, and the left hand side is the solution
obtained from t by reduction of order.)
(b) Show that the radius of convergence of the above series is 1.
(c) Using the recurrence relation for Legendre’s equation, show that both the
even and odd series have radius of convergence 1 except in the case that α is
a positive integer, in which case one of the two is a polynomial.

6. Without finding the solutions, find a lower bound on the radius of convergence
of a power series solution for each of the following equations with the series
centered at the indicated point.
(a) (2 + t)(3 − t)y'' + 2y' + 3t²y = 0 at t₀ = 0.
(b) (2 + t²)y'' − ty' − 3y = 0 at t₀ = 1.

7. Let y₁(t) be the power series solution to y'' + p(t)y' + q(t)y = 0 obtained
at an ordinary point by setting a₀ = 1, a₁ = 0. Similarly, let y₂(t) be the
solution obtained by setting a₀ = 0, a₁ = 1. How can you conclude that the
pair {y₁, y₂} is linearly independent?

9.2 Partial Differential Equations

You probably have learned by now that certain partial differential equations such as
Laplace’s Equation or the Wave Equation govern the behavior of important physical
systems. The solution of such equations leads directly to the consideration of second
order linear differential equations, and it is this fact that lends such equations much
of their importance. In this section, we show how an interesting physical problem
leads to Bessel’s equation

t²y'' + ty' + (t² − m²)y = 0 where m = 0, 1, 2, . . . .

In the sections that follow we shall discuss methods for solving Bessel’s equation
and related equations by use of infinite series.

The physical problem we shall consider is that of determining the possible vibrations
of a circular drum head. We model such a drum head as a disk of radius a in the
x, y-plane centered at the origin. We suppose that the circumference of the disk
is fixed, but that other points may be displaced upward or downward in the z-
direction. The displacement z is a function of both position (x, y) and time t. If
we assume that the displacement is small, then to a high degree of approximation,
this function satisfies the wave equation

(1/v²) ∂²z/∂t² = ∇²z

where v is a constant and ∇² is the Laplace operator

∇² = ∂²/∂x² + ∂²/∂y².

(You will study this equation shortly in physics if you have not done so already.)

Because the problem exhibits circular symmetry, it is appropriate to switch to polar
coordinates, and then the Laplace operator takes the form

∇² = ∂²/∂r² + (1/r) ∂/∂r + (1/r²) ∂²/∂θ².

(See the exercises for Chapter V, Section 13.) Thus the wave equation may be
rewritten

(1/v²) ∂²z/∂t² = ∂²z/∂r² + (1/r) ∂z/∂r + (1/r²) ∂²z/∂θ².     (144)
Since the circumference of the disk is fixed, we must add the boundary condition
z(a, θ, t) = 0 for all θ and all t.

A complete study of such equations will be undertaken in your course next year in
Fourier series and boundary value problems. For the moment we shall consider only
solutions which can be expressed

z = T (t)R(r)Θ(θ) (145)

where the variables have been separated out in three functions, each of which de-
pends only on one of the variables. (The method employed here is called separation
of variables. It is similar in spirit to the method employed previously for ordinary
differential equations, but of course the context is entirely different, so the two
methods should not be confused with one another.) It should be emphasized that
the general solution of the wave equation cannot be so expressed, but as you shall
see next year, it can be expressed as a sum (usually infinite) of such functions. The
boundary condition for a separated solution in this case is simply R(a) = 0.

If we substitute (145) in equation (144), the partial derivatives become ordinary
derivatives of the relevant functions and we obtain

(1/v²) T''RΘ = TR''Θ + (1/r) TR'Θ + (1/r²) TRΘ''.

Divide through by z = TRΘ to obtain

(1/v²) T''/T = R''/R + (1/r) R'/R + (1/r²) Θ''/Θ.

In this equation, the left hand side (1/v²) T''/T depends only on t, and the right
hand side does not depend on t. Hence, both equal the same constant γ, i.e.,

(1/v²) T''/T = γ
R''/R + (1/r) R'/R + (1/r²) Θ''/Θ = γ.
The second of these equations may be rewritten

Θ''/Θ = r² · (an expression depending only on r),

so by similar reasoning, it must be equal to a constant µ. Thus,

Θ''/Θ = µ
Θ'' − µΘ = 0.

This is a simple second order equation with known solutions. If µ is positive, the
general solution has the form C₁e^{√µ θ} + C₂e^{−√µ θ}. However, the function Θ
must satisfy the periodicity condition

Θ(θ + 2π) = Θ(θ) for every θ

since adding 2π to the polar angle θ does not change the point represented. The
solution listed above does not satisfy this condition, so we conclude that µ ≤ 0.
In that case, the general solution is Θ = C₁ cos(√|µ| θ) + C₂ sin(√|µ| θ). Moreover,
the periodicity condition tells us that √|µ| must be an integer. Thus we may take
µ = −m² where m is an integer. We may even assume that m ≥ 0, since changing
the sign of m makes no essential difference in the form of the general solution

Θ = C₁ cos mθ + C₂ sin mθ,  m = 0, 1, 2, . . . .

If we now put Θ''/Θ = −m² back into the separated equation, we obtain

(1/v²) T''/T = R''/R + (1/r) R'/R + (1/r²)(−m²) = γ

where γ is a constant. It turns out that γ < 0, but the argument is somewhat more
complicated than that given above for µ. One way to approach this would be as
follows. The above equation gives the following equation for T:

T'' − γv²T = 0.

We know by observation (and common sense) that the motion of the drum head is
oscillatory. If γ > 0, this equation has non-periodic exponential solutions (as in the
previous argument for Θ). Similarly, γ = 0 implies that T = c₁t + c₂, which would
make sense only if c₁ = c₂ = 0. That corresponds to the solution in which the drum
does not vibrate at all, and it is not very interesting. Hence, the only remaining
possibility is γ < 0, in which case we get periodic oscillations:

T(t) = C₁ cos(√|γ| vt) + C₂ sin(√|γ| vt).

This argument is a bit unsatisfactory for the following reason. We should be able
to derive as a conclusion the fact that the solution is periodic in time. After
all, the purpose of a physical theory is to predict as much as we can with as few
assumptions as possible. It is in fact possible to show that γ < 0 by another
argument (discussed in the Exercises) which only uses the fact that the drum head
is fixed at its circumference.

Write γ = −λ where λ = |γ|, and consider the equation for R:

R''/R + (1/r) R'/R + (1/r²)(−m²) = −λ
r²R'' + rR' + (λr² − m²)R = 0

where m = 0, 1, 2, . . . , λ > 0, and R satisfies the boundary condition R(a) = 0. It is
usual to make one last transformation to simplify this equation, namely introduce
a new variable s = √λ r. Then if S(s) = R(r), we have

dR/dr = (dS/ds)(ds/dr) = √λ dS/ds
d²R/dr² = (d/dr)(dR/dr) = √λ (d/dr)(dS/ds) = √λ · √λ (d/ds)(dS/ds) = λ d²S/ds².

Thus

λr²S'' + √λ rS' + (λr² − m²)S = 0,
or s²S'' + sS' + (s² − m²)S = 0.

The last equation is just Bessel’s Equation, and we shall see how to solve it and
similar equations in the next sections in this chapter. Note that in terms of the
function S(s), the boundary condition becomes

S(√λ a) = 0.
On the other hand, as mentioned above, √λ = √|γ| is a factor in determining
the frequency of oscillation of the drum. Hence, finding the roots of the equation
S(x) = 0 is a matter of some interest.
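(These roots are the zeros of the solutions of Bessel’s equation—the Bessel
functions J_m of the next sections—and they are tabulated in standard references
and in scientific software. Purely as an illustration, assuming the Python library
SciPy is available, the lowest vibration frequencies of the drum can be listed as
follows; the radius a and wave speed v are placeholder values.)

    from scipy.special import jn_zeros

    a = 1.0   # drum radius (placeholder)
    v = 1.0   # wave speed (placeholder)
    for m in range(3):                 # angular modes m = 0, 1, 2
        for k, z in enumerate(jn_zeros(m, 3), start=1):
            lam = (z / a) ** 2         # allowed value of lambda = |gamma|
            omega = v * z / a          # angular frequency sqrt(lambda) * v
            print(f"m={m}, k={k}: zero={z:.4f}, lambda={lam:.4f}, omega={omega:.4f}")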

If we switch back to calling the independent variable t and the dependent variable
y, Bessel’s Equation takes the form used earlier

t²y'' + ty' + (t² − m²)y = 0

or

y'' + (1/t) y' + ((t² − m²)/t²) y = 0.
Of course, t here need not bear any relation to ‘time’. In fact, in the above analysis,
t came from the polar coordinate r. That means we have the following quandary.
The value t = 0 (the origin in the above discussion) may be a specially interesting
point. However, it is also a singular point for the differential equation. Thus, we
may have to use series solutions centered at a singular point, and that is rather
different from what we did previously for ordinary points.

Exercises for 9.2.

1. Consider solutions of Laplace’s equation ∇²z = ∂²z/∂x² + ∂²z/∂y² = 0 of the
form z = X(x)Y(y).
(a) Derive the equation

X''(x)/X(x) + Y''(y)/Y(y) = 0.

(b) Conclude that X and Y satisfy the second order differential equations

X'' + cX = 0 and Y'' − cY = 0

where c is some constant.


(c) Suppose we want to find a non-zero solution of the above form on
the unit square in the first quadrant under the assumption that z(x, 0) =
z(0, y) = z(1, y) = 0. Show that this leads to the conditions X(0) = X(1) = 0
and Y(0) = 0.
(d) Under these assumptions show that c cannot be negative. Hint: Solve the
differential equation for X under the assumption that c < 0 and conclude that
the solution cannot vanish for x = 0 and x = 1 unless it is identically zero.
(e) Show that c = (kπ)² with k = 0, 1, 2, . . . . Find the general form of X(x)
and Y (y).

2. (Optional) Show that γ < 0, where γ is the quantity discussed in the section.
Use the following argument.

(a) Put z = T(t)U(r, θ) in (1/v²) ∂²z/∂t² = ∇²z and derive the equation

(1/v²) T''/T = ∇²U/U = γ.

(In the notation of the section, U(r, θ) = R(r)Θ(θ), and γ is the same as before.)
Thus, we have

∇²U = γU.     (A)

(b) A necessary detour: Apply the normal form of Green’s Theorem

∫_{∂D} F · N ds = ∫∫_D ∇ · F dA

to the vector field F = U∇U to derive the formula

∫_{∂D} U∇U · N ds = ∫∫_D (|∇U|² + U∇²U) dA.     (B)

(c) Let D be a disk of radius a centered at the origin. Assume that

U (a, θ) = 0 (C)

for all θ, i.e., that z vanishes on the boundary of D. Use (A), (B), and (C)
to derive a contradiction to the assumption that γ ≥ 0. What conclusion can
you draw if γ = 0?

9.3 Regular Singular Points and the Method of Frobenius

The general problem we want to consider now is how to solve an equation of the
form y'' + p(t)y' + q(t)y = 0 by expanding in a series centered at a singular point.
To understand the process, we start by reviewing the simplest case, which is Euler’s
Equation

y'' + (α/t) y' + (β/t²) y = 0

where α and β are constants. To solve it in the vicinity of the singular point t = 0,
we try a solution of the form y = tʳ. Then, y' = rt^{r−1} and y'' = r(r − 1)t^{r−2}, so

the equation becomes

r(r − 1)t^{r−2} + αr t^{r−1}/t + β tʳ/t² = 0
(r(r − 1) + αr + β)t^{r−2} = 0
r(r − 1) + αr + β = 0
r² + (α − 1)r + β = 0.

This is a quadratic equation with two roots r₁, r₂, so we get two solutions y₁ = t^{r₁}
and y₂ = t^{r₂}. If the roots are different, these form a linearly independent pair, and
the general solution is

y = C₁t^{r₁} + C₂t^{r₂}.

If the roots are equal, i.e., r₁ = r₂ = r, then y₁ = tʳ is one solution, and we may
use reduction of order to find another solution. It turns out to be y₂ = tʳ ln t. (See
the Exercises.) The general solution is

y = C₁tʳ + C₂tʳ ln t.
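(A tiny Python sketch, ours rather than anything essential, makes the recipe
concrete: solve the indicial quadratic and report the form of the general solution.
Compare Exercise 1 at the end of this section.)

    import math

    def euler_solution(alpha, beta):
        # Roots of r^2 + (alpha - 1)r + beta = 0.
        b, c = alpha - 1.0, beta
        disc = b * b - 4.0 * c
        if disc > 0:
            r1 = (-b + math.sqrt(disc)) / 2.0
            r2 = (-b - math.sqrt(disc)) / 2.0
            return f"y = C1 t^{r1:g} + C2 t^{r2:g}"
        elif disc == 0:
            r = -b / 2.0
            return f"y = C1 t^{r:g} + C2 t^{r:g} ln t"
        else:
            return "complex roots (a case we ignore in this course)"

    # t^2 y'' + 3t y' - 3y = 0 corresponds to alpha = 3, beta = -3:
    print(euler_solution(3, -3))   # roots 1 and -3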

There are a couple of things to notice about the above process. First, the method
depended critically on the fact that t occurred precisely to the right powers in the
two denominators. Otherwise, we might not have ended up with the common factor
t^{r−2}. Secondly, the solutions of the quadratic equation need not be positive integers;
they could be negative integers, fractions, or worse. In such cases tʳ may exhibit
some singular behavior at t = 0. This will certainly be the case if r < 0, since in
that case tʳ = 1/t^|r| blows up as t → 0. If r > 0 but r is not an integer, then tʳ
is continuous at t = 0, but it may have discontinuous derivatives of some order.
For example, if y = t^{5/3}, then y' = (5/3)t^{2/3} and y'' = (10/9)t^{−1/3}, which is not
continuous at t = 0. Hence, the singularity at t = 0 in the differential equation
tends to show up in some way in the solution. (In general, the roots r could even
be complex numbers, which complicates the matter even more. In this course, we
shall ignore that possibility.)

Consider now the general equation

y'' + p(t)y' + q(t)y = 0,

and suppose p(t) or q(t) is singular at t = 0. We say that t = 0 is a regular singular
point if we can write

p(t) = P(t)/t,  q(t) = Q(t)/t²

where P(t) and Q(t) are analytic in the vicinity of t = 0. This means that the
differential equation has the form

y'' + (P(t)/t) y' + (Q(t)/t²) y = 0
or t²y'' + tP(t)y' + Q(t)y = 0.

(This is what we get if we replace the constants in Euler’s Equation by analytic


functions.)

More generally, we say that t = t₀ is a regular singular point of the differential
equation if it may be rewritten

y'' + (P(t)/(t − t₀)) y' + (Q(t)/(t − t₀)²) y = 0
or (t − t₀)²y'' + (t − t₀)P(t)y' + Q(t)y = 0

where P(t) and Q(t) are analytic functions in the vicinity of t = t₀.

Example 182 Bessel’s Equation

y'' + (1/t) y' + ((t² − m²)/t²) y = 0

has a regular singular point at t = 0.

Example 183 Legendre’s Equation

y'' − (2t/(1 − t²)) y' + (α(α + 1)/(1 − t²)) y = 0

has regular singular points both at t = 1 and t = −1.

For, at t = 1, we may write

p(t) = −2t/(1 − t²) = 2t/((t − 1)(t + 1)) = [2t/(t + 1)] / (t − 1)
q(t) = α(α + 1)/(1 − t²) = −α(α + 1)/((t − 1)(t + 1)) = [−α(α + 1)(t − 1)/(t + 1)] / (t − 1)²

and

P(t) = 2t/(t + 1)
Q(t) = −α(α + 1)(t − 1)/(t + 1)

are analytic functions near t = 1. (They are of course singular at t = −1, but that
is far enough away not to matter.)

A similar argument which reverses the roles of t − 1 and t + 1 shows that t = −1 is


also a regular singular point.

Example 184 The equation

y'' − (2/t²) y' + 5y = 0

has an irregular singular point at t = 0. In this case, the best we can do with p(t)
is

−2/t² = (−2/t)/t

and −2/t is certainly not analytic at t = 0.

To solve an equation with a regular singular point at t = t₀, we allow for the
possibility that the solution is singular at t = t₀, but we hope that the singularity
won’t be worse than a negative or fractional power of t − t₀, as in the case of Euler’s
Equation. That is, we try for a solution of the form

y = (t − t₀)ʳ ∑_{n=0}^∞ a_n (t − t₀)ⁿ.

Since the power series is analytic, this amounts to trying for a solution of the form
y = (t − t₀)ʳ g(t) where g(t) is analytic near t₀. This method is called the method
of Frobenius.

There is one technical problem with the method of Frobenius. If r is not an integer,
then by definition (t − t₀)ʳ = e^{r ln(t − t₀)}. Unfortunately, ln(t − t₀) is undefined for
t − t₀ < 0. Fortunately, since t = t₀ is a singular point of the differential equation,
one is usually interested either in the case t > t₀ or t < t₀, but one does not usually
have to worry about going from one to the other. We shall concentrate in this
course on the case t > t₀. For the case t < t₀, similar methods work except that
you should use |t − t₀|ʳ instead of (t − t₀)ʳ.

The Method of Frobenius for Bessel’s Equation

We want to solve

t²y'' + ty' + (t² − m²)y = 0
near the regular singular point t0 = 0. We look for a solution defined for t > 0.
Clearly, we may assume m is non-negative since its square is what appears in the
equation. In interesting applications m is a non-negative integer, but for the moment
we make no assumptions about m except that m ≥ 0. The method of Frobenius
suggests trying

y = tʳ ∑_{n=0}^∞ a_n tⁿ = ∑_{n=0}^∞ a_n t^{n+r}.

Note that we may assume here that a₀ ≠ 0, since if it were zero that would mean
the series would start with a positive power of t which could be factored out from
each term and absorbed in tʳ by increasing r.

As before, we calculate

y' = ∑_{n=0}^∞ (n + r) a_n t^{n+r−1} and y'' = ∑_{n=0}^∞ (n + r)(n + r − 1) a_n t^{n+r−2}.

(Note however that we can’t play any games with the lower index as we did for
ordinary points.) Thus,


t²y'' = ∑_{n=0}^∞ (n + r)(n + r − 1) a_n t^{n+r}
ty' = ∑_{n=0}^∞ (n + r) a_n t^{n+r}
t²y = ∑_{n=0}^∞ a_n t^{n+r+2} = ∑_{n=2}^∞ a_{n−2} t^{n+r}
−m²y = ∑_{n=0}^∞ (−m² a_n) t^{n+r}.

Note that after the adjustments, one of the sums starts at n = 2. That means that
when we add up the terms for each n, we have to consider those for n = 0 and n = 1
separately since they don’t involve terms from the aforementioned sum. Thus, for
n = 0, we get
0 = r(r − 1)a₀ + ra₀ − m²a₀

while for n = 1, we get

0 = (r + 1)ra₁ + (r + 1)a₁ − m²a₁.

The general rule starts with n = 2:

0 = (n + r)(n + r − 1)a_n + (n + r)a_n + a_{n−2} − m²a_n for n ≥ 2.

These relations may be simplified. For n = 0, we get

(r² − m²)a₀ = 0.

However, since by assumption a₀ ≠ 0, we get

r² − m² = 0.

This quadratic equation in r is called the indicial equation. In this particular case,
it is easy to solve:
r = ±m.
(Note that if m = 0, this is a double root!) For n = 1, (using r² = m²) we get

((r + 1)² − m²)a₁ = (2r + 1)a₁ = 0.     (147)

Except in the case r = −1/2, this implies that a₁ = 0. Finally, for n ≥ 2, the
coefficient of a_n is (n + r)(n + r − 1) + (n + r) − m² = (n + r)² − m², so we get

[(n + r)² − m²]a_n + a_{n−2} = 0.     (148)



We shall now consider separately what happens for each root r = ±m of the indicial
equation. (Some people prefer to see how far they can get without specifying
r, and then they put r equal to each of the roots when they can’t proceed further.)

Solution of Bessel’s Equation for the Positive Root of the Indicial Equation
Take r = m ≥ 0. Then r ≠ −1/2, so a₁ = 0. For n ≥ 2, the coefficient of a_n
is (n + m)² − m² = n² + 2nm = n(n + 2m), and we may solve

a_n = −a_{n−2}/(n(n + 2m)).

This recurrence relation allows us to determine a_n for all n > 0. First of all, since
a₁ = 0, it follows that a₃ = a₅ = · · · = 0, i.e., all odd numbered coefficients are
zero. For n even, we have

n = 2:  a₂ = −a₀/(2(2 + 2m)) = −a₀/(2²(m + 1))
n = 4:  a₄ = −a₂/(4(4 + 2m)) = a₀/(2⁵(m + 2)(m + 1))
n = 6:  a₆ = −a₄/(6(6 + 2m)) = −a₀/(3 · 2⁷(m + 3)(m + 2)(m + 1))
n = 8:  a₈ = −a₆/(8(8 + 2m)) = a₀/(4 · 3 · 2⁹(m + 4)(m + 3)(m + 2)(m + 1))
              = a₀/(4! · 2⁸(m + 4)(m + 3)(m + 2)(m + 1))
...

The general rule may be written

a_{2k} = (−1)ᵏ a₀/[k! 2^{2k} (m + k)(m + k − 1) · · · (m + 2)(m + 1)],  k = 0, 1, 2, . . . .

Note the quantity (m + k)(m + k − 1) · · · (m + 2)(m + 1) is similar to a factorial
in which each term has an extra addend m. If we interpret this product to be 1 if
k = 0, and recall that 0! = 1, then the formula is valid, as indicated, for k = 0.

The corresponding series solution is

y = tᵐ ∑_{n=0}^∞ a_n tⁿ
  = a₀ tᵐ ∑_{k=0}^∞ (−1)ᵏ t^{2k}/[k! 2^{2k} (m + k)(m + k − 1) · · · (m + 2)(m + 1)].

Note that since m ≥ 0, this solution is actually continuous at t = 0. (It is the
product of the continuous function tᵐ with the sum of a power series, i.e., an
analytic function.) If m is a non-negative integer, the solution is even analytic at
t = 0. If m is positive but not an integer, the solution is definitely not analytic,
since a sufficiently high derivative of tᵐ will involve a negative power and so fail to
exist at t = 0. (A function which is analytic at t = 0 has derivatives of every order
at t = 0. They are just the coefficients (except for factorials) of its Taylor series
centered at t = 0.)

The constant a0 is arbitrary and determined by initial conditions. However, we may


pick out one specific solution and any other such solution will be a constant multiple
of it. It is often useful to adjust the constant a0 for the distinguished solution so
that the formulas work out nicely. If m is a non-negative integer the most common
choice is a₀ = 1/(2ᵐ m!). This yields

y = (1/(2ᵐ m!)) tᵐ ∑_{k=0}^∞ (−1)ᵏ t^{2k}/[2^{2k} k!(m + k)(m + k − 1) · · · (m + 2)(m + 1)]
  = ∑_{k=0}^∞ (−1)ᵏ t^{2k+m}/[2^{2k+m} k!(m + k)!]
  = ∑_{k=0}^∞ ((−1)ᵏ/(k!(m + k)!)) (t/2)^{2k+m}.

For m a non-negative integer, this solution is called a Bessel Function of the first
kind, and it is denoted

J_m(t) = ∑_{k=0}^∞ ((−1)ᵏ/(k!(m + k)!)) (t/2)^{2k+m}.

The ratio test shows that the series converges for all t, so its sum is an analytic
function for all t. Two interesting cases are

J₀(t) = ∑_{k=0}^∞ ((−1)ᵏ/(k!)²) (t/2)^{2k}
J₁(t) = ∑_{k=0}^∞ ((−1)ᵏ/((k + 1)!k!)) (t/2)^{2k+1}.
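(To get a feel for these functions, the series can be summed directly. Here is a
short Python sketch of our own, with an optional check against SciPy’s built-in
Bessel function jv if that library is available.)

    import math

    def J_series(m, t, terms=40):
        # Partial sum of sum_k (-1)^k / (k! (m+k)!) (t/2)^(2k+m).
        s = 0.0
        for k in range(terms):
            s += ((-1) ** k / (math.factorial(k) * math.factorial(m + k))
                  * (t / 2.0) ** (2 * k + m))
        return s

    print(J_series(0, 1.0))   # J_0(1) = 0.7651...
    print(J_series(1, 1.0))   # J_1(1) = 0.4400...

    # from scipy.special import jv
    # print(jv(0, 1.0), jv(1, 1.0))   # should agree with the above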

If m > 0 but m is not an integer, everything above works except that the resulting
function is not analytic, but as mentioned above, it is bounded and continuous as
t → 0. The choice of the constant a0 for a distinguished solution is trickier. The
term 2m poses no problems, but we need an analogue of m! for m a positive real
number which is not an integer. It turns out that there is a real valued function
Γ(x) of the real variable x, which is even analytic for non-negative values of x, and
which satisfies the rules
Γ(x + 1) = xΓ(x),  Γ(1) = 1.     (149)
It is called appropriately enough the Gamma function. (See the Exercises for its
definition.) It follows from the rules (149) that if m is a positive integer,
Γ(m) = (m − 1)Γ(m − 1) = (m − 1)(m − 2)Γ(m − 2) = · · ·
     = (m − 1) · · · 2 · 1 · Γ(1) = (m − 1)!.
This is usually rewritten with m replaced by m + 1
Γ(m + 1) = m! for m = 0, 1, 2, . . . .

You will study the Gamma function in more detail in your complex variables course.
Using the Gamma function, we may take a₀ = 1/(2ᵐ Γ(m + 1)). Then 2ᵐ 2^{2k} = 2^{2k+m},
and

Γ(m + 1)(m + 1)(m + 2) · · · (m + k) = Γ(m + k + 1).

So, combining a₀ with a_{2k}, we obtain the solution

J_m(t) = ∑_{k=0}^∞ ((−1)ᵏ/(k! Γ(m + k + 1))) (t/2)^{2k+m}
       = (t/2)ᵐ ∑_{k=0}^∞ ((−1)ᵏ/(k! Γ(m + k + 1))) (t/2)^{2k}

where the fractional power has been put in front in the second equation to emphasize
the non-analytic part of the solution.

One interesting case is m = 1/2:

J_{1/2}(t) = (t^{1/2}/(2^{1/2} Γ(3/2))) ∑_{k=0}^∞ (−1)ᵏ t^{2k}/[2^{2k} k!(1/2 + k)(1/2 + k − 1) · · · (1/2 + 1)]
          = (1/(t^{1/2} 2^{1/2} Γ(3/2))) ∑_{k=0}^∞ (−1)ᵏ t^{2k+1}/[2ᵏ k!(2k + 1)(2k − 1) · · · 3]
          = (1/(t^{1/2} 2^{1/2} Γ(3/2))) ∑_{k=0}^∞ (−1)ᵏ t^{2k+1}/[(2k)(2k − 2) · · · 4 · 2 · (2k + 1)(2k − 1) · · · 3]
          = (1/(t^{1/2} 2^{1/2} Γ(3/2))) ∑_{k=0}^∞ (−1)ᵏ t^{2k+1}/(2k + 1)!
          = (1/(t^{1/2} 2^{1/2} Γ(3/2))) sin t.

However, Γ(3/2) = (1/2)Γ(1/2), and it may be shown that Γ(1/2) = √π. Thus,
after some algebra, we get

J_{1/2}(t) = √(2/π) (sin t)/√t.
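(A quick numerical check of this closed form, using SciPy’s jv if that library is
available; the snippet is our own illustration, not part of the development.)

    import math
    from scipy.special import jv

    for t in (0.5, 1.0, 2.0, 5.0):
        closed = math.sqrt(2.0 / (math.pi * t)) * math.sin(t)
        print(t, closed, jv(0.5, t))   # the two columns should agree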

Solution of Bessel’s Equation for the Negative Root of the Indicial Equation
Suppose m > 0. By considering the positive root r = m of the indicial equation
r² − m² = 0, we found one solution of Bessel’s Equation. We now attempt to find a
second linearly independent solution by considering the negative root r = −m. For
this root, equation (147) for n = 1 becomes

(2r + 1)a₁ = (1 − 2m)a₁ = 0,



which, as earlier, implies that a1 = 0 except in the case m = 1/2, r = −1/2.


However, even in that case we need only find one additional solution, so in any
event we shall concentrate on the even numbered coefficients.

For n ≥ 2, equation (148) becomes

(n² + 2nr)a_n + a_{n−2} = (n² − 2nm)a_n + a_{n−2} = 0

which may be solved to obtain

a_n = −a_{n−2}/(n(n − 2m)) for n ≥ 2,

provided n − 2m ≠ 0. Thus, we may use the recurrence relation to generate co-
efficients (for n even) for a second solution as long as m ≠ n/2, i.e., m is not an
integer. Assume that is the case. Then we get

n = 2:  a₂ = −a₀/(2(2 − 2m)) = −a₀/(2²(1 − m))
n = 4:  a₄ = −a₂/(4(4 − 2m)) = a₀/(2⁴ · 2(1 − m)(2 − m))
n = 6:  a₆ = −a₄/(6(6 − 2m)) = −a₀/(2⁶ · 3 · 2(1 − m)(2 − m)(3 − m))
...

Putting n = 2k, we get the general rule

a_{2k} = (−1)ᵏ a₀/[2^{2k} k!(1 − m)(2 − m) · · · (k − m)],  k ≥ 0.

The corresponding solution is

y = a₀ t^{−m} ∑_{k=0}^∞ ((−1)ᵏ/(k!(1 − m)(2 − m) · · · (k − m))) (t/2)^{2k}.

If m > 0 is not a positive integer, we may set a₀ = 1/(2^{−m} Γ(−m + 1)). That is
similar to what we did above, except that m is replaced by −m. The resulting
solution is

J_{−m}(t) = ∑_{k=0}^∞ ((−1)ᵏ/(k! Γ(−m + k + 1))) (t/2)^{2k−m}.

Note that this solution has a singularity at t = 0 because of the common factor
t^{−m}, so it is certainly not a constant multiple of J_m(t). Hence, we have a linearly
independent pair of solutions, and we conclude that if m is not an integer, the
general solution of Bessel’s equation is

y = C₁ J_m(t) + C₂ J_{−m}(t).



The case m = 1/2 is interesting. The series can be rewritten

J_{−1/2}(t) = (1/(2^{−1/2} Γ(1/2))) ∑_{k=0}^∞ (−1)ᵏ t^{2k−1/2}/[k!(1 − 1/2)(2 − 1/2) · · · (k − 1/2) 2^{2k}]
           = √(2/π) (1/√t) ∑_{k=0}^∞ (−1)ᵏ t^{2k}/[k!(1 − 1/2)(2 − 1/2) · · · (k − 1/2) 2^{2k}]
           = √(2/π) (1/√t) ∑_{k=0}^∞ (−1)ᵏ t^{2k}/[2ᵏ k!(2 · 1 − 1)(2 · 2 − 1) · · · (2k − 1)]
           = √(2/π) (1/√t) ∑_{k=0}^∞ (−1)ᵏ t^{2k}/(2k)!
           = √(2/π) (cos t)/√t.
Note that we still have to deal with the case that m is an integer since the above
method breaks down.

Exercises for 9.3.

1. Find general solutions for each of the following equations. (You may find
it helpful first to review Chapter VII, Section 4, Exercise 6 and Section 8,
Exercise 3.)
(a) t²y'' + 3ty' − 3y = 0.
(b) t²y'' − 5ty' + 9y = 0.
2. Assume r² + (α − 1)r + β = 0 has a double root r = (1 − α)/2. Then, y₁ = tʳ
is one solution of Euler’s Equation. Verify that the method of reduction of
order yields the second solution y₂ = tʳ ln t.
3. In each of the following cases, tell if the given value of t is a regular singular
point for the indicated differential equation.
(a) t(t + 1)²y'' − 2ty' + 7y = 0, t = 0.
(b) (t − 1)²ty'' + 3(t − 1)y' − y = 0, t = 1.
(c) (t + 2)²y'' + (t + 1)y' + (t + 2)y = 0, t = −2.
(d) (sin t)y'' + 2y' − (cos t)y = 0, t = 0.
4. Find one solution of ty'' + y' + ty = 0 at t₀ = 0 by the method of Frobenius.
You should get J₀(t) up to a multiplicative constant.
5. (a) Show that (d/dt) J₀(t) = −J₁(t).
(b) Show that (d/dt)(t J₁(t)) = t J₀(t).

6. Show that the power series in the definition

J_m(t) = ((t/2)ᵐ/Γ(m + 1)) ∑_{k=0}^∞ ((−1)ᵏ/(k!(m + k)(m + k − 1) · · · (m + 1))) (t/2)^{2k}

has infinite radius of convergence.

7. Calculate J0 (t) to 4 decimal places for t = 0, 0.1, 0.2, . . . , 0.9, 1.0.

8. Show by direct substitution in the differential equation t²y'' + ty' + (t² − 1/4)y =
0 that y₁ = (sin t)/√t and y₂ = (cos t)/√t are solutions.

9. (Optional) The Gamma function is defined by

Γ(x) = ∫₀^∞ t^{x−1} e^{−t} dt for x > 0.

(It may also be extended to negative values of x and indeed to complex values
of x. The resulting function has singularities at x = 0, −1, −2, . . . .)

(a) Show Γ(1) = ∫₀^∞ e^{−t} dt = 1.

(b) Show Γ(x + 1) = xΓ(x). Hint: Apply integration by parts to Γ(x + 1) =
∫₀^∞ tˣ e^{−t} dt.

(c) Show Γ(1/2) = ∫₀^∞ t^{−1/2} e^{−t} dt = √π. Hint: Substitute t = s² to obtain
Γ(1/2) = 2∫₀^∞ e^{−s²} ds. Now use the calculation from Chapter IV, Section 8
of ∫₀^∞ e^{−u²/2} du = √(π/2).

9.4 The Method of Frobenius. General Theory

In the previous section, we applied the method of Frobenius to Bessel’s Equation.


For some cases (m not an integer), the method gave a complete solution, but for
other cases (m a non-negative integer), it gave only one solution. In the ‘bad’ cases,
we need another method to find a linearly independent pair of solutions from which
we can form a general solution of the differential equation.

Before attempting to deal with the ‘bad’ cases, we should discuss how the method
of Frobenius works for other differential equations.

Example 185 We shall try to solve

t²y'' + 2ty' − (2 + t)y = 0,  t > 0



by a series of the form y = tʳ ∑_{n=0}^∞ a_n tⁿ = ∑_{n=0}^∞ a_n t^{n+r}. Calculating as
previously, we have

t²y'' = ∑_{n=0}^∞ (n + r)(n + r − 1) a_n t^{n+r}
2ty' = ∑_{n=0}^∞ 2(n + r) a_n t^{n+r}
−2y = ∑_{n=0}^∞ (−2a_n) t^{n+r}
−ty = ∑_{n=0}^∞ (−a_n) t^{n+r+1} = ∑_{n=1}^∞ (−a_{n−1}) t^{n+r}.

Adding up corresponding powers of t yields for n = 0

[r(r − 1) + 2r − 2]a₀ = 0     (150)

and for n ≥ 1

[(n + r)(n + r − 1) + 2(n + r) − 2]a_n − a_{n−1} = 0.     (151)

Since a₀ ≠ 0, (150) yields the indicial equation

f(r) = r(r − 1) + 2r − 2 = r² + r − 2 = (r − 1)(r + 2) = 0.

Note also that for n ≥ 1, (151) has the form

f(n + r)a_n − a_{n−1} = 0

where

f(n + r) = (n + r)(n + r − 1) + 2(n + r) − 2 = (n + r)² + (n + r) − 2 = (n + r − 1)(n + r + 2).

(You should look back at the previous section at this point to see what happened
for Bessel’s equation. The indicial equation was f(r) = r² − m² = 0, while the
coefficient of a_n in each of the equations for n ≥ 1 was f(n + r) = (n + r)² − m².)

The roots of the indicial equation are r = 1 and r = −2. Consider first r = 1. For
n ≥ 1, f(n + 1) = n(n + 3), so (151) becomes

n(n + 3)a_n − a_{n−1} = 0

which may be solved to obtain the recurrence relation

a_n = a_{n−1}/(n(n + 3)) for n ≥ 1.     (153)

Note that there is no problem with this relation since the denominator never van-
ishes. This is not an accident. We shall see below why it happened.

Clearly, this recurrence relation may be used to determine all the coefficients an in
terms of a0 . (We leave it to the student to actually do that.) Thus, we obtain one
solution of the differential equation which is in fact uniquely determined except for
the non-zero multiplicative constant a0 .
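(Concretely, a few lines of Python, with names of our own choosing, will generate
the coefficients from (153); the rapidly growing ratios a_{n−1}/a_n = n(n + 3)
already suggest an infinite radius of convergence. Compare Exercise 1 below.)

    a = [1.0]                      # take a_0 = 1
    for n in range(1, 12):
        a.append(a[-1] / (n * (n + 3)))

    for n in range(1, 12):
        print(n, a[n], a[n - 1] / a[n])   # the ratio is n(n + 3)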

Consider next the root r = −2 of the indicial equation. For n ≥ 1, f(n − 2) =
(n − 3)n, so (151) becomes

(n − 3)n a_n = a_{n−1}.

This can be solved for n = 1 and n = 2 to obtain

a₁ = a₀/((−2) · 1) = −a₀/2
a₂ = a₁/((−1) · 2) = a₀/4.     (154)

However, for n = 3 we encounter a problem. The equation becomes

0 · a₃ = a₂

which is consistent only if a₂ = 0, and by (154) that contradicts the assumption
that a₀ ≠ 0. Hence, for the root r = −2 of the indicial equation, the process breaks
down.

Let’s see what was going on both in our discussion of Bessel’s equation and in the
last example. In each case, we had a quadratic indicial equation

f(r) = 0.

In addition, we had for n ≥ 1 equations of the form

f(n + r)a_n = an expression involving the a_j with j < n.

(The expression on the right might be zero, as for Bessel’s equation with n = 1.)
Suppose r₁ ≥ r₂ are the two roots of the indicial equation. If n ≥ 1, it can never
be the case that f(n + r₁) = 0, since the only other root of the equation f(r) = 0 is
r₂, which is not larger than r₁. Hence, we may always solve the above equation to
obtain recurrence relations

a_n = (expression involving the a_j with j < n)/f(n + r₁),  n ≥ 1.

For the other root r₂, the situation is more complicated. If r₂ + n is never a root
of the quadratic equation f(r) = 0, the above reasoning applies and we obtain
recurrence relations

a_n = (expression involving the a_j with j < n)/f(n + r₂),  n ≥ 1.

Thus, we obtain a second solution, and it is not hard to see that the two solutions
form a linearly independent pair. (See the appendix to this section.)

There is the possibility, however, that for some integer n = k, r₂ + k = r₁ is the
larger root of the indicial equation, i.e., f(r₂ + k) = 0. In that case, the kth recursion
relation becomes

0 = f(k + r₂)a_k = an expression involving the a_j with j < k,

so the process will break down unless we are incredibly lucky and the expression on
the right happens to be zero. In that case, we can seize on our great fortune, and
set ak equal to any convenient value. Usually, we just set ak = 0. In any case, the
process may continue unimpeded for n > k as soon as we successfully get past the
‘barrier’ at k.

The upshot of the above analysis is that the method of Frobenius may break down
in the ‘bad case’ that r1 − r2 is a positive integer. Of course, if r1 = r2 , the method
only gives one solution in any case. Hence, you must be on guard whenever r1 − r2
is a non-negative integer.

If you look back at the previous section, you will see that Bessel’s Equation exhibits
the phenomena we just described. The roots are ±m, so m = 0 is the case of
equal roots, and the method of Frobenius only gives one solution. If m > 0, the
larger of the two roots is r₁ = m, and this gives a solution in any case. The smaller
of the two roots is r₂ = −m, and provided m − (−m) = 2m is not a positive integer,
the method of Frobenius also generates a solution for r₂ = −m. On the other hand,
if 2m is a positive integer, then the recurrence relations for r₂ = −m take the form

1(1 − 2m)a₁ = 0 for n = 1
n(n − 2m)a_n = −a_{n−2} for n ≥ 2.

The first equation (n = 1) implies that a1 = 0 except in the case m = 1/2. Even
in that case, our luck holds out, and we may take a1 = 0 since the right hand side
of the equation is zero. Similarly, if m = k/2 for some odd integer k > 1, then
the recursion relation will imply that an = 0 for every odd n < k, and the kth
recurrence relation will read

k(0)ak = −ak−2 = 0,

so we may set ak = 0. It follows that if m is half an odd integer, then we may assume
all the odd numbered an are zero. For even n, the coefficient n(n − 2m) = n(n − k)
is never zero, so there is no problem determining the an as previously.

The only remaining case is when m is itself a positive integer. In this case, for
n = 2m, the coefficient n(n − 2m) on the left is zero, but the quantity a_{n−2} is not,
so there is no way to recover. Hence, we must find another method to generate a
second solution.

The General Theory We shall explain here why the method of Frobenius behaves
the way it does. This section may be omitted your first time through the material,
but you should come back and look at it after you have worked some more examples.

Consider the differential equation

t²y'' + tP(t)y' + Q(t)y = 0

where P(t) and Q(t) are analytic functions in a neighborhood of t = 0. (That is what
it means to say t = 0 is a regular singular point.) We shall try to find a solution of
the form y = tʳ ∑_{n=0}^∞ a_n tⁿ for t > 0. (As mentioned earlier, if you want a solution
for t < 0, replace tʳ by |t|ʳ.) Since P(t) and Q(t) are analytic at t = 0, they have
power series expansions

P(t) = p₀ + p₁t + p₂t² + · · ·
Q(t) = q₀ + q₁t + q₂t² + · · · .
Putting these in the differential equation, one term at a time, we have

t²y'' = ∑_{n=0}^∞ a_n (n + r)(n + r − 1) t^{n+r}
p₀ty' = ∑_{n=0}^∞ p₀ a_n (n + r) t^{n+r}
p₁t · ty' = p₁t²y' = ∑_{n=0}^∞ p₁ a_n (n + r) t^{n+r+1} = ∑_{n=1}^∞ p₁ a_{n−1} (n + r − 1) t^{n+r}
p₂t² · ty' = p₂t³y' = ∑_{n=0}^∞ p₂ a_n (n + r) t^{n+r+2} = ∑_{n=2}^∞ p₂ a_{n−2} (n + r − 2) t^{n+r}
    ... (sums starting with n = 3, 4, . . .)
q₀y = ∑_{n=0}^∞ q₀ a_n t^{n+r}
q₁ty = ∑_{n=0}^∞ q₁ a_n t^{n+r+1} = ∑_{n=1}^∞ q₁ a_{n−1} t^{n+r}
q₂t²y = ∑_{n=0}^∞ q₂ a_n t^{n+r+2} = ∑_{n=2}^∞ q₂ a_{n−2} t^{n+r}
    ... (sums starting with n = 3, 4, . . .).
Adding up coefficients of corresponding powers of t yields for n = 0

[r(r − 1) + p₀r + q₀]a₀ = 0,

and since by assumption a₀ ≠ 0, we get the indicial equation

f(r) = r(r − 1) + p₀r + q₀ = 0.

(Notice the similarity to the equation obtained for Euler’s equation.) For n ≥ 1, we
obtain equations of the form

[(n + r)(n + r − 1) + p₀(n + r) + q₀]a_n + lesser numbered terms = 0.

The coefficient of a_n will always be

f(n + r) = (n + r)(n + r − 1) + p₀(n + r) + q₀,

and the additional terms will depend in general on the exact nature of the coefficients
p₁, p₂, . . . , q₁, q₂, . . . . (You should work out the cases n = 1 and n = 2 to make sure
you understand the argument!)

The above calculation justifies in general the conclusions drawn earlier by looking
at examples. However, there is still one point that has not been addressed. Even
if the method unambiguously generates the coefficients of the series (in terms of
a0 ), it won’t be of much use unless that series has a positive radius of convergence.
Determining the radius of convergence of the series generated by the method of
Frobenius in any specific case is usually not difficult, but showing that it is not zero
in general is hard. We shall leave that for you to investigate by yourself at a future
date if you are sufficiently interested. Introduction to Differential Equations by
Simmons has a good treatment of the question.

Appendix. Why the solution pair {y₁, y₂} is linearly independent when
r₁ − r₂ is not an integer. We have

y₁ = t^{r₁} g₁(t) and y₂ = t^{r₂} g₂(t)

where g₁(t) and g₂(t) are the sums of the series obtained in each case. Since by
assumption the leading coefficient a₀ ≠ 0, neither of these functions vanishes at
t = 0. It follows that the quotient of the solutions has the form

y₁/y₂ = (t^{r₁} g₁(t))/(t^{r₂} g₂(t)) = t^{r₁−r₂} g(t)

where g(t) = g₁(t)/g₂(t) is a quotient of two analytic functions, neither of which
vanishes at t = 0, so g(t) is also analytic at t = 0 and does not vanish there. If this
quotient were constant, we could write

t^{r₁−r₂} = c/g(t)

and since g(0) ≠ 0, the right hand side is analytic. On the other hand, if r₁ − r₂
is not a positive integer, t^{r₁−r₂} is definitely not analytic. (If it is a positive integer,
then it vanishes at t = 0, but the right hand side does not.)

Exercises for 9.4.

All the problems in this section concern linear second order homogeneous differential
equations with t = 0 a regular singular point.

1. Assuming a₀ = 1, use the recurrence relation

a_n = a_{n−1}/(n(n + 3)) for n ≥ 1

to obtain a general formula for a_n. What is the radius of convergence of
the series t ∑_{n=0}^∞ a_n tⁿ?
2. In each of the following cases, the two roots of the indicial equation do not
differ by a non-negative integer. Find a pair of linearly independent solutions
by the method of Frobenius.
(a) 4t²y'' + 2ty' − (2 + t)y = 0.
(b) 2ty'' + 3y' + y = 0.
3. In each of the following cases, the indicial equation has two equal roots. Find
one solution by the method of Frobenius.
(a) t²y'' − ty' + (1 − t)y = 0.
(b) t²y'' + 3ty' + (1 + 2t)y = 0.
4. In each of the following cases, the two roots of the indicial equation differ
by a positive integer. Find a solution for the larger root by the method of
Frobenius. Determine if you can also find a solution for the smaller root by
setting the appropriate ak = 0.
(a) t²y'' − (2 + t)y = 0.
(b) t²y'' − (4t + t²)y' + 6y = 0.
(c) t²y'' + (t − t²)y' − y = 0.
5. (Optional) Suppose the two roots r₁ ≥ r₂ of the indicial equation differ by
a positive integer k = r₁ − r₂. Let y₁(t) be the solution obtained for r₁ by
the method of Frobenius with a₀ = 1. Suppose that we are in the fortunate
situation that the method of Frobenius also yields a solution y₂(t) for the root
r₂, but that we do not assume a_k = 0. Show that y₂(t) − a_k y₁(t) may be
expanded in a series of the form ∑_{n=0}^∞ b_n t^{n+r₂} with b_k = 0.

9.5 Second Solutions in the Bad Cases

We consider next what to do when the method of Frobenius fails to produce a


second solution. We suppose that we are working at t = 0 which is assumed to be
a regular singular point. It is clear how to modify the formulas and arguments for
an arbitrary regular singular point t0 .

The Case of Equal Roots Suppose first that the indicial equation
f (r) = 0

has a double root r. Let y1 (t) denote the solution obtained by the method of
Frobenius for this root. Then, we may apply the method of reduction of order to
obtain a second solution. It turns out that this second solution always has the form


y₂(t) = y₁(t) ln t + ∑_{n=1}^∞ c_n t^{n+r}.     (155)

Note that the summation starts with n = 1. You should compare (155) with the
second solution for Euler’s equation in the case of a double root. In that case, we
had y₁ = tʳ and y₂ = tʳ ln t = y₁ ln t, so the logarithmic term is not a complete
surprise.

We shall see later how the method of reduction of order leads to such a solution,
but first let’s look at an example.

Example 186 Consider Bessel’s equation for m = 0:

t²y'' + ty' + (t² − 0²)y = t²y'' + ty' + t²y = 0.

r = 0 is a double root, and the first solution is given by

y₁ = J₀(t) = ∑_{k=0}^∞ ((−1)ᵏ/(k!)²) (t/2)^{2k}.
k=0

As suggested above, let’s try a second solution of the form




\[ y = y_1(t) \ln t + \sum_{n=1}^{\infty} c_n t^n. \]

Then

\[ y' = y_1'(t) \ln t + \frac{y_1(t)}{t} + \sum_{n=1}^{\infty} n c_n t^{n-1} \]

\[ y'' = y_1''(t) \ln t + 2\frac{y_1'(t)}{t} - \frac{y_1(t)}{t^2} + \sum_{n=2}^{\infty} n(n-1) c_n t^{n-2}. \]

Thus, renumbering as needed, we get

\[ t^2 y'' = t^2 y_1'' \ln t + 2t y_1' - y_1 + \sum_{n=1}^{\infty} n(n-1) c_n t^n \]

\[ t y' = t y_1' \ln t + y_1 + \sum_{n=1}^{\infty} n c_n t^n \]

\[ t^2 y = t^2 y_1 \ln t + \sum_{n=3}^{\infty} c_{n-2} t^n \]

Add this all up to get zero. The right side includes the terms

\[ t^2 y_1'' \ln t + t y_1' \ln t + t^2 y_1 \ln t = (t^2 y_1'' + t y_1' + t^2 y_1) \ln t = 0 \]

since y_1 is a solution of the differential equation. The terms -y_1 and +y_1 cancel,
so we are left only with 2t y_1'(t) and the summation terms. The coefficient of t^n for
n ≥ 3 is

\[ n(n-1) c_n + n c_n + c_{n-2} = n^2 c_n + c_{n-2}, \]

but we also have two additional terms, those for n = 1 and n = 2. Putting this all
together, we get

\[ 2t\, y_1'(t) + c_1 t + 4 c_2 t^2 + \sum_{n=3}^{\infty} [n^2 c_n + c_{n-2}]\, t^n = 0, \]

or

\[ c_1 t + 4 c_2 t^2 + \sum_{n=3}^{\infty} [n^2 c_n + c_{n-2}]\, t^n = -2t\, y_1'(t). \]

However, we may evaluate t y_1'(t) without too much difficulty. We have

\[ y_1 = \sum_{k=0}^{\infty} (-1)^k \frac{1}{(k!)^2\, 2^{2k}}\, t^{2k} \]

so

\[ -2t\, y_1' = -2t \sum_{k=1}^{\infty} (-1)^k \frac{2k}{(k!)^2\, 2^{2k}}\, t^{2k-1} = \sum_{k=1}^{\infty} (-1)^{k+1} \frac{1}{k!\,(k-1)!\,2^{2k-2}}\, t^{2k}. \]

Thus,

\[ c_1 t + 4 c_2 t^2 + \sum_{n=3}^{\infty} [n^2 c_n + c_{n-2}]\, t^n = \sum_{k=1}^{\infty} (-1)^{k+1} \frac{1}{k!\,(k-1)!\,2^{2k-2}}\, t^{2k}. \]

Now compare corresponding powers of t on the two sides, and remember that only
even powers appear on the right. For n = 1, there are no terms on the right, so

c1 = 0.

For n = 2, k = 1, we have

\[ 4 c_2 = \frac{1}{1!\,0!\,2^0} = 1, \qquad c_2 = \frac{1}{4}. \]

For n = 3, we have

\[ 9 c_3 + c_1 = 0, \qquad c_3 = 0. \]

For n = 4, k = 2, we have

\[ 16 c_4 + c_2 = -\frac{1}{2!\,1!\,2^2} = -\frac{1}{8} \]

\[ c_4 = -\frac{1}{16}\left(\frac{1}{4} + \frac{1}{8}\right) = -\frac{1}{64}\left(1 + \frac{1}{2}\right). \]
This process may be continued indefinitely to generate coefficients. Clearly, all the
odd numbered coefficients will end up being zero. The general rule for the even
numbered coefficients is not at all obvious. It turns out to be

\[ c_{2k} = (-1)^{k+1} \frac{1}{(k!)^2\, 2^{2k}} \left(1 + \frac{1}{2} + \frac{1}{3} + \cdots + \frac{1}{k}\right), \qquad k \ge 1. \]

Hence, the second solution of Bessel’s equation with m = 0 is



\[ y_2 = J_0(t) \ln t + \sum_{k=1}^{\infty} \frac{(-1)^{k+1}}{(k!)^2} \left(1 + \frac{1}{2} + \cdots + \frac{1}{k}\right) \left(\frac{t}{2}\right)^{2k}. \]

It is not hard to see that {y1 , y2 } is a linearly independent pair of solutions.
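Since the general rule for the coefficients c_{2k} is not at all obvious, it is reassuring to check it numerically. The following Python sketch (our own illustration, not part of the text) generates the coefficients from the recursion n^2 c_n + c_{n-2} = r_n, where r_{2k} = (-1)^{k+1}/(k!\,(k-1)!\,2^{2k-2}) and r_n = 0 for odd n, and compares them with the closed form:

    from math import factorial

    def c_recursive(N):
        # right hand side coefficients r_n: non-zero only for even n = 2k
        def r(n):
            if n % 2 != 0:
                return 0.0
            k = n // 2
            return (-1) ** (k + 1) / (factorial(k) * factorial(k - 1) * 2 ** (2 * k - 2))
        c = {1: 0.0, 2: r(2) / 4}            # c_1 = 0 and 4 c_2 = r_2
        for n in range(3, N + 1):
            # n^2 c_n + c_{n-2} = r_n, so c_n = (r_n - c_{n-2}) / n^2
            c[n] = (r(n) - c[n - 2]) / n ** 2
        return c

    def c_closed(k):
        # the closed form stated above for c_{2k}
        H = sum(1.0 / j for j in range(1, k + 1))    # 1 + 1/2 + ... + 1/k
        return (-1) ** (k + 1) * H / (factorial(k) ** 2 * 2 ** (2 * k))

    c = c_recursive(10)
    for k in range(1, 6):
        print(k, c[2 * k], c_closed(k))              # the two columns agree

Both columns agree to machine precision, which, although not a proof, is good evidence for the stated formula.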

It should be noted that one seldom needs to know the exact form of the second
solution. The most important thing about it is that it is not continuous as t → 0
because of the logarithmic term. The first solution y1 = J0 (t) is continuous. This
has the following important consequence. Suppose we know on physical grounds
that the solution is continuous and bounded as t → 0. Then it follows that in

y = C1 y1 (t) + C2 y2 (t)

the coefficient of y2 must vanish. For example, this must be the case for the vibrating
membrane. The displacement must be bounded at the origin (r = 0), so we know
the solution can involve only the Bessel function of the first kind.

The Case of Roots Differing by a Positive Integer Suppose the roots r1 , r2


of the indicial equation satisfy r1 − r2 = k where k is a positive integer. If y1 (t) is
a solution obtained by the method of Frobenius for r = r1 , the larger of the two
roots, then the method of reduction of order yields a second solution of the form


\[ y_2(t) = a\, y_1(t) \ln t + t^{r_2} \sum_{n=0}^{\infty} c_n t^n. \tag{156} \]

Note that the summation starts with n = 0. It is possible that the method of
Frobenius works for the smaller root r_2 (because crucial terms vanish), and in that
case we would have a = 0 in (156); the series in the second term is what the
method of Frobenius generates. Otherwise, a ≠ 0, and, since we may always adjust
a solution by a non-zero multiplicative constant, we may take a = 1. In any case
the solution definitely exhibits a logarithmic singularity. In applying the method,
you should first see if the method of Frobenius can be made to work for the lesser
root. If it doesn't work (because the recursion at some point unavoidably yields a
non-zero numerator and a zero denominator), then try a solution of the form (156)
with a = 1. This will yield a set of recursion rules for the coefficients c_0, c_1, c_2, ...
roughly as in Example 186. To find these rules, you will have to evaluate some
expression involving the series for the first solution y_1(t) and its derivative y_1'(t).
See the Exercises for some examples.

Bessel's equation with m > 0 and 2m an integer illustrates the above principles.
If m is half an odd integer, then the logarithmic term is missing and we may take
y_2 = J_{-m}(t). However, if m is a positive integer, then the logarithmic term is
definitely present and we must take

\[ y_2 = J_m(t) \ln t + t^{-m} \sum_{n=0}^{\infty} c_n t^n, \tag{157} \]

where the coefficients cn are determined by the method outlined above. Such so-
lutions are called Bessel Functions of the second kind. For technical reasons, it is
sometimes better to add to (157) an appropriate multiple of Jm (t) (a solution of
the first kind). There is one particular family of such solutions called Neumann
functions and denoted Ym (t). We won’t study these in this course, but we men-
tion them in case you encounter the notation. The most important thing about
Neumann functions is that they have logarithmic singularities. It is possible to see
in general that {y1 , y2 } is a linearly independent pair of solutions, so the general
solution of the differential equation has the form

y = C1 y1 (t) + C2 y2 (t).
If r_1 > 0, then y_1(t) = t^{r_1} \sum_{n=0}^{\infty} a_n t^n is bounded as t → 0. Usually, the second
solution y_2 does not have this property. If the logarithmic term is present or if
r_2 < 0, y_2 will not be bounded as t → 0. In those cases, if we know the solution is
bounded by physical considerations, we may conclude that C_2 = 0.

Using Reduction of Order for the Second Solution You might want to skip
this section the first time through.

There are a variety of ways to show the second solution has the desired form in
each of the ‘bad’ cases. We shall use the method of reduction of order. There is
an alternate method based on solving the recursion relations in terms of r before
setting r equal to either of the roots of the indicial equation. It is possible thereby
to derive a set of formulas for the coefficients cn . (See Braun, Section 2.8.3 for a
discussion of this method.) However, no method gives a really practical approach

for finding the coefficients, so in most cases it is enough to know the form of the
solution and then try to find the coefficients by substituting in the equation as we
did above.

First, assume r is a double root of the indicial equation f(r) = 0 and y_1 = t^r g(t)
(where g(t) = \sum_{n=0}^{\infty} a_n t^n) is a solution obtained by the method of Frobenius. (Note
that g(t) is analytic at t = 0 and g(0) ≠ 0.) The indicial equation has the form

\[ f(r) = r(r-1) + p_0 r + q_0 = r^2 + (p_0 - 1) r + q_0 = 0 \]

where p(t) = (p_0 + p_1 t + \cdots)/t and q(t) = (q_0 + q_1 t + \cdots)/t^2.
Since r is a double root, we have (p_0 - 1)^2 - 4q_0 = 0 and

\[ r = -\frac{p_0 - 1}{2}. \]
The method of reduction of order tells us that there is a second solution of the form
y_2 = y_1 v where

\[ v' = \frac{1}{y_1^2}\, e^{-\int p(t)\,dt}. \]

However, as above,

\[ p(t) = \frac{1}{t}(p_0 + p_1 t + p_2 t^2 + \cdots) = \frac{p_0}{t} + p_1 + p_2 t + \cdots + p_n t^{n-1} + \cdots. \]
Hence,

\[ \int p(t)\,dt = p_0 \ln t + p_1 t + \frac{p_2}{2} t^2 + \cdots, \]

so

\[ e^{-\int p(t)\,dt} = e^{-p_0 \ln t}\, e^{-p_1 t - \cdots} = t^{-p_0} h_1(t) \]

where h_1(t) = e^{-p_1 t - \cdots} is an analytic function at t = 0 such that h_1(0) ≠ 0. On the
other hand,

\[ \frac{1}{y_1^2} = \frac{1}{t^{2r} g(t)^2} = t^{-2r} h_2(t) \]

where, since g(0) ≠ 0, h_2(t) = 1/g(t)^2 is also analytic at t = 0 and h_2(0) ≠ 0. It
follows that

\[ v' = t^{-2r} h_2(t)\, t^{-p_0} h_1(t) = t^{-(2r + p_0)} h_1(t) h_2(t). \]
However, r = -(p_0 - 1)/2, so 2r + p_0 = 1. Moreover, h(t) = h_1(t) h_2(t) is analytic
and h(0) ≠ 0, so it may be expanded in a power series

\[ h(t) = h_0 + h_1 t + h_2 t^2 + \cdots \]

with h_0 ≠ 0. Thus,

\[ v' = t^{-1}(h_0 + h_1 t + h_2 t^2 + \cdots) = \frac{h_0}{t} + h_1 + h_2 t + \cdots \]

\[ v = h_0 \ln t + h_1 t + \frac{h_2}{2} t^2 + \cdots \]

It follows that

\[ y_2 = y_1 v = h_0\, y_1(t) \ln t + y_1(t)\left(h_1 t + \frac{h_2}{2} t^2 + \cdots\right). \]
The solution generated by this process may always be modified by multiplying by
a constant. Hence, since we know that h_0 ≠ 0, we may multiply by its reciprocal,
so in effect we may assume h_0 = 1. Moreover, since y_1 = t^r g(t), the second term
has the form

\[ t^r g(t)\left(h_1 t + \frac{h_2}{2} t^2 + \cdots\right). \]

Since g(t) and h_1 t + \frac{h_2}{2} t^2 + \cdots are analytic, so is their product, which can be expressed
as a power series \sum_{n=0}^{\infty} c_n t^n. Moreover, since the 'h series' does not have a constant
term, its sum vanishes at t = 0, and the same can be said for the product. That
implies c_0 = 0, i.e., the summation may be assumed to start with n = 1. Thus we
see that the solution obtained by reduction of order has the form

\[ y_2 = y_1(t) \ln t + t^r \sum_{n=1}^{\infty} c_n t^n \]

as claimed.

A very similar analysis of the process of reduction of order works in the case where
r_1 - r_2 is a positive integer.

Exercises for 9.5.

1. Find a general solution of Laguerre's Equation t y'' + (1 - t) y' + λy = 0 as
follows.
(a) Show that the indicial equation is r^2 = 0.
(b) Find a solution for r = 0 by the method of Frobenius by assuming a_0 = 1.
Note that this solution is analytic in general and is actually a polynomial if λ
is a non-negative integer.
(c) Set up the procedure for finding a second solution of the form y = y_1 \ln t +
\sum_{n=1}^{\infty} c_n t^n. In the case λ = 0, find c_1, a recurrence relation for c_n, the general
value of c_n, and the solution.
2. Find a second solution of Bessel's Equation for m = 1 by trying a solution of
the form y = J_1(t) \ln t + \sum_{n=0}^{\infty} c_n t^{n-1}. You may not be able to determine a
general formula for c_n but determine coefficients at least up to c_4. You should
discover that the recursion rule does not completely determine c_2, so you can
set it equal to anything you want. You should set it equal to zero for this
problem. (However, it turns out that zero is not usually the best choice. By an
appropriate non-zero choice, one gets the Neumann function Y_1(t) discussed
in the text.)

9.6 More about Bessel Functions


We saw that J_{1/2}(t) = \sqrt{2/\pi}\, \sin t/\sqrt{t} and J_{-1/2}(t) = \sqrt{2/\pi}\, \cos t/\sqrt{t}. In general, it
turns out that the Bessel functions J_m(t) all behave like sines and cosines with an
'amplitude' 1/\sqrt{t}.

[Figure: graphs of J_0(t) and J_1(t) on the interval 0 ≤ t ≤ 7.]


To see why this is the case, put y = u/\sqrt{t} in the differential equation

\[ t^2 y'' + t y' + (t^2 - m^2) y = 0. \]

We have

\[ y' = \frac{u'}{\sqrt{t}} - \frac{1}{2} \frac{u}{t^{3/2}} \]

\[ y'' = \frac{u''}{\sqrt{t}} - \frac{u'}{t^{3/2}} + \frac{3}{4} \frac{u}{t^{5/2}}. \]

Hence,

\[ t^2 y'' + t y' + (t^2 - m^2) y = t^{3/2} u'' - t^{1/2} u' + \frac{3}{4} \frac{u}{t^{1/2}} + t^{1/2} u' - \frac{1}{2} \frac{u}{t^{1/2}} + t^{3/2} u - m^2 \frac{u}{t^{1/2}} = t^{3/2} u'' + t^{3/2} u + \left(\frac{1}{4} - m^2\right) \frac{u}{t^{1/2}}. \]
Divide the last expression by t^{3/2}. We see thereby that if y is a solution of Bessel's
equation, then u = y\sqrt{t} satisfies the differential equation

\[ u'' + u + \frac{1/4 - m^2}{t^2}\, u = 0. \tag{158} \]

Let t → ∞. Then (1/4 - m^2)/t^2 → 0, so for large t, the equation is close to

\[ u'' + u = 0. \tag{159} \]

The general solution of (159) is

\[ u = C_1 \cos t + C_2 \sin t = A \cos(t + \delta). \]

Hence, it is plausible that for large t, the solution of (158) should look very similar.
That means that for large t, any solution y = u/\sqrt{t} of Bessel's equation should look
like

\[ y \approx A\, \frac{\cos(t + \delta)}{\sqrt{t}}. \]
(It is not clear in general that the limiting behavior of the solution of a differential
equation is the same as a solution of the limit of the differential equation, but in
this particular case, it happens to be true.)
Since J_m(t) ≈ A \cos t/\sqrt{t} for large t, we would expect it to oscillate with approximate
period 2π. That means that successive roots of the equation

Jm (t) = 0

should differ roughly by π for large t. This is indeed the case. As mentioned earlier,
the roots of this equation are important in determining the frequencies of the basic
vibrations of a vibrating drum (and similar physical systems). The first root of the
equation J0 (t) = 0 is approximately 2.4048.
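If you have access to SciPy, you can compute these roots and watch their spacing approach π; a short sketch:

    import numpy as np
    from scipy.special import jn_zeros

    zeros = jn_zeros(0, 6)      # first six positive roots of J_0(t) = 0
    print(zeros[0])             # 2.4048..., as stated above
    print(np.diff(zeros))       # successive gaps, which approach pi = 3.14159...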

Functions Related to Bessel Functions The differential equation

\[ t^2 z'' + 2t z' + (t^2 - p^2) z = 0 \tag{160} \]

often arises in solving problems with spherical symmetry. (Note that the coefficient
of z' is 2t rather than t.) This equation is related to Bessel's equation as follows.
In

\[ t^2 y'' + t y' + (t^2 - m^2) y = 0 \]

put y = \sqrt{t}\, z. Then

\[ y' = \sqrt{t}\, z' + \frac{1}{2} \frac{z}{t^{1/2}} \]

\[ y'' = \sqrt{t}\, z'' + \frac{z'}{t^{1/2}} - \frac{1}{4} \frac{z}{t^{3/2}}. \]

We obtain

\[ t^{5/2} z'' + t^{3/2} z' - \frac{1}{4} t^{1/2} z + t^{3/2} z' + \frac{1}{2} t^{1/2} z + (t^2 - m^2) t^{1/2} z = 0. \]

If we divide through by t^{1/2} and rearrange the terms, we obtain

\[ t^2 z'' + 2t z' + (t^2 - m^2 + 1/4) z = 0. \]



If we put p^2 = m^2 - 1/4, then we obtain equation (160). It follows that solutions
of (160) may be obtained by solving Bessel's equation with m = \sqrt{p^2 + 1/4}. Thus,
Bessel functions for such m (where p is a non-negative integer) are often called
spherical Bessel functions.

The solutions of the equation

\[ t^2 z'' + t z' - (t^2 + m^2) z = 0 \]

are called modified Bessel functions, and they also arise in important physical
applications. (Note that the coefficient of z is -t^2 - m^2 rather than +t^2 - m^2.) There
are two methods for obtaining solutions. First, apply the method of Frobenius. The
process is almost identical to that in the case of the ordinary Bessel functions. The
indicial equation is also r^2 - m^2 = 0, and the only real difference is that the sign
(-1)^k does not occur. In particular, if m is a non-negative integer, a normalized
solution obtained for the root r = m is

\[ I_m(t) = \sum_{k=0}^{\infty} \frac{1}{k!\,(m+k)!} \left(\frac{t}{2}\right)^{2k+m}. \]

For m a non-negative integer, a second solution is obtained for the root r = −m by


reduction of order and has a logarithmic singularity.

An alternate approach to the modified Bessel functions is to replace t by it in
Bessel's equation, where i = \sqrt{-1}. Then, we see that I_m(t) and J_m(it) just differ
by a multiplicative constant. (Check that for yourself!) You may learn more about
this in your complex variables course.

One could go on almost without end discussing properties of Bessel functions and
of the other special functions of mathematical physics. We leave such pleasures for
other courses in mathematics, physics, geophysics, etc.

Exercises for 9.6.

1. Find the second positive root of the equation J0 (t) = 0. If you want you
can just look up the answer in a book, but you will have to figure out which
book to look in. You might find it more illuminating to graph J0 (t) using an
appropriate graphics program such as Maple or Mathematica and try to zoom
in on the root. Such a program will also give you a numerical value for the
root if you ask it nicely.

2. J_m(t) where m^2 = p^2 + 1/4 is a spherical Bessel function. Show that if
p^2 = k(k + 1), then m is half an odd integer. That explains why Bessel
functions of fractional order m are interesting.

3. (a) Apply the method of Frobenius to the differential equation

\[ t^2 y'' + t y' - (t^2 + m^2) y = 0. \]

Find a first solution in the case m is a positive integer. Normalize it by putting
a_0 = 1/(2^m m!).
(b) Compare your answer with J_m(it) where i^2 = -1.
Part III

Linear Algebra

Chapter 10

Linear Algebra, Basic Notation

10.1 Systems of Differential Equations

Usually, in physical problems there are several variables, and the rate of change of
each variable depends not only on it, but also on the other variables. This gives
rise to a system of differential equations.

Example 187 Suppose that in a chemical reaction there are two substances X
and Y that we need to keep track of. Suppose the kinetics of the reaction is such
that X decomposes into Y at a rate proportional to the amount of X present, and
suppose Y decomposes into uninteresting byproducts U at a rate proportional to
the amount of Y present. We may indicate this schematically by

\[ X \xrightarrow{\;k_1\;} Y \xrightarrow{\;k_2\;} U \]

where k1 and k2 are rate constants. We may translate the above description into
mathematics as follows. Let x(t) and y(t) denote the amounts of X and Y respec-
tively present at time t. Then

\[ \frac{dx}{dt} = -k_1 x \]

\[ \frac{dy}{dt} = k_1 x - k_2 y. \]
This is a simple example of a system of differential equations. It is not very hard
to solve. The first equation involves only x, and its solution is x = x_0 e^{-k_1 t} where
x_0 = x(0) is the amount of X present initially. This may be substituted in the


second equation to obtain

\[ \frac{dy}{dt} + k_2 y = k_1 x = k_1 x_0 e^{-k_1 t}, \]

which is a first order linear equation which may be solved by the method in Chapter
VI, Section 3.

Not every system is so easy to solve. Suppose for example that the substance Y
in addition to decomposing into uninteresting substances also recombines with a
substrate to form X at a rate depending on y. We would then have to modify the
above system to one of the form
\[ \frac{dx}{dt} = -k_1 x + k_3 y \]

\[ \frac{dy}{dt} = k_1 x - (k_2 + k_3) y. \]
It is not immediately clear how to go about solving this system!
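It is, however, easy to explore such a system numerically. A minimal sketch in Python (assuming SciPy is available; the rate constants are hypothetical):

    import numpy as np
    from scipy.integrate import solve_ivp

    k1, k2, k3 = 1.0, 0.5, 0.25        # hypothetical rate constants

    def rhs(t, u):
        x, y = u
        return [-k1 * x + k3 * y, k1 * x - (k2 + k3) * y]

    # start with one unit of X and no Y, and integrate to t = 10
    sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 0.0])
    print(sol.y[:, -1])                # amounts of X and Y at t = 10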

Example 188 Consider two identical masses m on a track connected by springs as


indicated below. Suppose all the springs have the same spring constant k.

[Figure: two masses m on a track, joined to the walls and to each other by three springs, each with spring constant k; the displacements of the masses are x_1 and x_2.]

The masses will be at rest in certain equilibrium positions, but if they are displaced
from those positions, the resulting system will oscillate in some very complicated
way. Let x_1 and x_2 denote the displacements of the masses from equilibrium. The
force exerted on the first mass by the spring to its left will be -k x_1 since that
spring will be stretched by x_1. On the other hand, the spring in the middle will be
stretched by x_1 - x_2, so the force it exerts on the first mass is -k(x_1 - x_2). Thus
the total force on the first particle is the sum, so by Newton's Second Law

\[ m \frac{d^2 x_1}{dt^2} = -k x_1 - k(x_1 - x_2). \]

By a similar argument, we get for the second mass

\[ m \frac{d^2 x_2}{dt^2} = -k x_2 - k(x_2 - x_1). \]

(You should check both these relations to be sure you agree the forces are being
exerted in the proper directions.) These equations may be simplified algebraically
to yield

\[ m \frac{d^2 x_1}{dt^2} = -2k x_1 + k x_2 \]

\[ m \frac{d^2 x_2}{dt^2} = k x_1 - 2k x_2. \tag{161} \]

Of course, to determine the motion completely, it is necessary to specify the initial
positions x_1(t_0), x_2(t_0) and the initial velocities x_1'(t_0), x_2'(t_0) of both masses.

The above example is typical of many interesting physical systems. For example, a
molecule consists of several atoms with attractive forces between them which to a
first approximation may be treated mathematically as simple springs.

Higher Order Equations as Systems There is a simple trick which, by intro-


ducing new variables, allows us to reduce a differential equation of any order to a
first order system.

Example 189 Consider the differential equation

\[ y''' + 2y'' + 3y' - 4y = e^t. \tag{162} \]

There is an elaborate theory of such equations which generalizes what we did in
Chapter VII for second order equations. However, there is another approach which
replaces the equation by a first order system. Introduce new variables as follows:

\[ x_1 = y, \qquad x_2 = y' = x_1', \qquad x_3 = y'' = x_2'. \]

From the differential equation,

\[ x_3' = y''' = 4y - 3y' - 2y'' + e^t. \]

Hence, we may replace the single equation (162) by the system

\[ x_1' = x_2 \]
\[ x_2' = x_3 \]
\[ x_3' = 4x_1 - 3x_2 - 2x_3 + e^t. \]

The same analysis may in fact be applied to any system of any order in any number
of variables.
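As a concrete illustration, once equation (162) has been replaced by the first order system above, it can be handed directly to a standard numerical solver. A sketch, assuming SciPy and with hypothetical initial values:

    import numpy as np
    from scipy.integrate import solve_ivp

    def rhs(t, x):
        # x = (x1, x2, x3) = (y, y', y'')
        return [x[1], x[2], 4 * x[0] - 3 * x[1] - 2 * x[2] + np.exp(t)]

    sol = solve_ivp(rhs, (0.0, 1.0), [1.0, 0.0, 0.0])   # y(0)=1, y'(0)=0, y''(0)=0
    print(sol.y[0, -1])                                 # approximate value of y(1)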

Example 188, revisited Introduce additional variables x_3 = x_1' and x_4 = x_2'.
Then from (161), we have

\[ x_3' = x_1'' = -\frac{2k}{m} x_1 + \frac{k}{m} x_2 \]

\[ x_4' = x_2'' = \frac{k}{m} x_1 - \frac{2k}{m} x_2. \]

Putting this all together yields the first order system in 4 variables

\[ x_1' = x_3 \]
\[ x_2' = x_4 \]
\[ x_3' = -\frac{2k}{m} x_1 + \frac{k}{m} x_2 \]
\[ x_4' = \frac{k}{m} x_1 - \frac{2k}{m} x_2. \]

To completely specify the solution, we need (in the new notation) to specify the 4
initial values x_1(0), x_2(0), x_3(0) = x_1'(0) and x_4(0) = x_2'(0).

By similar reasoning, any system may be reduced to a first order system involving
variables x_1, x_2, ..., x_n each of which is a function of the independent variable t.
Using the notation of R^n, such a collection of variables may be combined in a single
vector variable

\[ \mathbf{x} = \langle x_1, x_2, \dots, x_n \rangle \]

which is assumed to be a vector valued function x(t) of t. For each component x_i,
its derivative is supposed to be a function

\[ \frac{dx_i}{dt} = f_i(x_1, x_2, \dots, x_n, t) = f_i(\mathbf{x}, t). \]

We may summarize this in a single vector equation

\[ \frac{d\mathbf{x}}{dt} = \mathbf{f}(\mathbf{x}, t) \]

where the vector valued function f has components the scalar functions f_i for i =
1, 2, ..., n.

The most important special case is that of linear systems. In this case the component
functions have the special form

\[ f_i(x_1, x_2, \dots, x_n, t) = a_{i1}(t) x_1 + a_{i2}(t) x_2 + \cdots + a_{in}(t) x_n + g_i(t) \]

for i = 1, 2, ..., n. That is, each component function depends linearly on the
dependent variables x_1, x_2, ..., x_n with coefficients a_{ij}(t) which as indicated may
depend on t. In this case, the system dx/dt = f(x, t) may be written out in 'longhand'

as

\[ \frac{dx_1}{dt} = a_{11}(t) x_1 + a_{12}(t) x_2 + \cdots + a_{1n}(t) x_n + g_1(t) \]
\[ \frac{dx_2}{dt} = a_{21}(t) x_1 + a_{22}(t) x_2 + \cdots + a_{2n}(t) x_n + g_2(t) \]
\[ \vdots \]
\[ \frac{dx_n}{dt} = a_{n1}(t) x_1 + a_{n2}(t) x_2 + \cdots + a_{nn}(t) x_n + g_n(t). \]

You should compare this with each of the examples discussed in this section. You
should see that they are all linear systems.

The above notation is quite cumbersome, and clearly it will be easier to follow if we
use the notation of vectors in Rn as above. To exploit this fully for linear systems
we need additional notation and concepts. To this end, we shall study some linear
algebra, one purpose of which is to make higher dimensional problems look one-
dimensional. In the next sections we shall introduce notation so that the above
linear system may be rewritten
\[ \frac{d\mathbf{x}}{dt} = A(t)\mathbf{x} + \mathbf{g}(t). \]
With this notation, much of what we discovered about first order equations in
Chapter VI will apply to systems.
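As a preview of the notation, the first system in Example 187 already has this form, with a constant coefficient matrix A and g = 0. A small sketch, assuming NumPy and hypothetical rate constants:

    import numpy as np

    k1, k2 = 1.0, 0.5                  # hypothetical rate constants
    A = np.array([[-k1, 0.0],
                  [k1, -k2]])          # constant coefficient matrix
    g = np.zeros(2)                    # no forcing term in this example

    def rhs(t, x):
        return A @ x + g               # the entire system: dx/dt = A x + g

    print(rhs(0.0, np.array([1.0, 0.0])))   # [-1.  1.]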

Exercises for 10.1.

1. Solve the system

\[ \frac{dx}{dt} = -k_1 x, \qquad \frac{dy}{dt} = k_1 x - k_2 y \]

as suggested in the text. The answer should be expressed in terms of x_0 = x(0)
and y_0 = y(0).
2. In each of the following cases, reduce the given equation(s) to an appropriate
first order system.
(a) y''' + 2y'' - 3y' + 2y = cos t.
(b) y''' - 2(y')^2 + y = 0.
(c) y_1'' = 3y_1 - 2y_2, y_2'' = -2y_1 + 4y_2.
(d) x_1'' + (x_1')^2 + x_1 x_2 = 0, x_2'' + x_1 - x_2 = 0.
3. In the previous problem, determine if the first order system you obtained in
each part is linear.

10.2 Matrix Algebra

A rectangular array

\[ \begin{bmatrix} a_{11} & a_{12} & \dots & a_{1n} \\ a_{21} & a_{22} & \dots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{m1} & a_{m2} & \dots & a_{mn} \end{bmatrix} \]

is called an m × n matrix. It has m rows and n columns. The quantities a_{ij} are
called the entries of the matrix. The first index i tells you which row it is in, and
the second index j tells you which column it is in.

Examples

\[ \begin{bmatrix} -2k & k \\ k & -2k \end{bmatrix} \text{ is a } 2 \times 2 \text{ matrix} \]

\[ \begin{bmatrix} 1 & 2 & 3 & 4 \\ 4 & 3 & 2 & 1 \end{bmatrix} \text{ is a } 2 \times 4 \text{ matrix} \]

\[ \begin{bmatrix} x_1 & x_2 & x_3 & x_4 \end{bmatrix} \text{ is a } 1 \times 4 \text{ matrix} \]

\[ \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix} \text{ is a } 4 \times 1 \text{ matrix} \]

Matrices of various sizes and shapes arise in many situations. For example, the first
matrix listed above is the matrix of coefficients on the right hand side of the system

\[ m \frac{d^2 x_1}{dt^2} = -2k x_1 + k x_2, \qquad m \frac{d^2 x_2}{dt^2} = k x_1 - 2k x_2 \]

in Example 188 of the previous section.

In computer programming, a matrix is called a 2-dimensional array and the entry
in row i and column j is usually denoted a[i, j] instead of a_{ij}. As in programming,
it is useful to think of the entire array as a single entity, so we use a single letter to
denote it:

\[ A = \begin{bmatrix} a_{11} & a_{12} & \dots & a_{1n} \\ a_{21} & a_{22} & \dots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{m1} & a_{m2} & \dots & a_{mn} \end{bmatrix}. \]

There are various different special arrangements which play important roles. A
matrix with the same number of rows as columns is called a square matrix. Matrices
of coefficients for linear systems of differential equations are usually square. A 1 × 1
matrix

\[ \begin{bmatrix} a \end{bmatrix} \]

is not logically distinguishable from a scalar, so we make no distinction between
the two concepts. A matrix with one row

\[ \mathbf{a} = \begin{bmatrix} a_1 & a_2 & \dots & a_n \end{bmatrix} \]

is called a row vector and a matrix with one column

\[ \mathbf{a} = \begin{bmatrix} a_1 \\ a_2 \\ \vdots \\ a_n \end{bmatrix} \]

is called a column vector. Logically, either a 1 × n row vector or an n × 1 column
vector with real entries is just an n-tuple, i.e., an element of R^n. However, as we shall
see, operations with row vectors are sometimes different than with column vectors.
We may identify either the set of all row vectors with real entries or the set of all
column vectors with real entries with the set R^n. For reasons that will become clear
shortly, we shall usually make the latter choice. That is, we shall ordinarily think
of an element of R^n as a column vector

\[ \mathbf{x} = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix}. \]

More generally, it should be noted that the information contained in any m × n


matrix has two parts. There are the mn entries which, in the real case, specify, in
some order, an element of R^{mn}, and there is also the arrangement of the entries in
rows and columns.

Matrices are denoted in different ways by different authors. Most people use ordi-
nary (non-boldface) capital letters, e.g., A, B, X, Q. However, one sometimes wants
to use boldface for row or column vectors, as above, when the relationship to Rn is
being emphasized. One may also use lower case non-boldface letters for row vectors
or column vectors. Since there is no consistent rule about this, you should make
sure you know when a symbol represents a matrix which is not a scalar.

Matrices may be combined in various useful ways. Two matrices of the same size
and shape are added by adding corresponding entries. You are not allowed to add
matrices with different shapes.

Examples

\[ \begin{bmatrix} 1 & -1 \\ 2 & 1 \\ 0 & 1 \end{bmatrix} + \begin{bmatrix} 1 & 1 \\ 0 & 3 \\ -1 & -2 \end{bmatrix} = \begin{bmatrix} 2 & 0 \\ 2 & 4 \\ -1 & -1 \end{bmatrix} \]

\[ \begin{bmatrix} x+y \\ y \\ 0 \end{bmatrix} + \begin{bmatrix} -y \\ -y \\ x \end{bmatrix} = \begin{bmatrix} x \\ 0 \\ x \end{bmatrix}. \]

The m × n matrix with zero entries is called a zero matrix and is usually just denoted
0. Since zero matrices with different shapes are not the same, it is sometimes
necessary to indicate the shape by using subscripts, as in '0_{mn}', but usually the
context makes it clear which zero matrix is needed. The zero matrix of a given
shape has the property that if you add it to any matrix A of the same shape, you
get A back as the result.

A matrix may also be multiplied by a scalar by multiplying each entry of the matrix
by that scalar. More generally, we may multiply several matrices with the same
shape by different scalars and add up the result:

c1 A1 + c2 A2 + · · · + ck Ak

where c1 , c2 , . . . , ck are scalars and A1 , A2 , . . . , Ak are m × n matrices with the same


m and n. This process is called linear combination.

Example

\[ 2\begin{bmatrix} 1 \\ 0 \\ 1 \\ 0 \end{bmatrix} + (-1)\begin{bmatrix} 0 \\ 1 \\ 0 \\ 1 \end{bmatrix} + 3\begin{bmatrix} 1 \\ 1 \\ 1 \\ 1 \end{bmatrix} = \begin{bmatrix} 2 \\ 0 \\ 2 \\ 0 \end{bmatrix} + \begin{bmatrix} 0 \\ -1 \\ 0 \\ -1 \end{bmatrix} + \begin{bmatrix} 3 \\ 3 \\ 3 \\ 3 \end{bmatrix} = \begin{bmatrix} 5 \\ 2 \\ 5 \\ 2 \end{bmatrix}. \]

Sometimes it is convenient to put the scalar on the other side of the matrix, but
the meaning is the same: each entry of the matrix is multiplied by the scalar.

cA = Ac.

We shall also have occasion to consider matrix valued functions A(t) of a scalar
variable t. That means that each entry aij (t) is a function of t. Such functions are
differentiated or integrated entry by entry.

Examples

\[ \frac{d}{dt}\begin{bmatrix} e^{2t} & e^{-t} \\ 2e^{2t} & -e^{-t} \end{bmatrix} = \begin{bmatrix} 2e^{2t} & -e^{-t} \\ 4e^{2t} & e^{-t} \end{bmatrix} \]

\[ \int_0^1 \begin{bmatrix} t \\ t^2 \end{bmatrix} dt = \begin{bmatrix} 1/2 \\ 1/3 \end{bmatrix} \]

There are various ways to multiply matrices. For example, one sometimes multiplies
matrices of the same shape by multiplying corresponding entries. This is useful only
in very special circumstances. Another kind of multiplication generalizes the dot
product of vectors. If

\[ \begin{bmatrix} a_1 & a_2 & \dots & a_n \end{bmatrix} \]

is a row vector of size n, and

\[ \begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_n \end{bmatrix} \]

is a column vector of the same size n, the row by column product is defined to be
the sum of the products of corresponding entries

\[ \begin{bmatrix} a_1 & a_2 & \dots & a_n \end{bmatrix} \begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_n \end{bmatrix} = a_1 b_1 + a_2 b_2 + \cdots + a_n b_n = \sum_{i=1}^{n} a_i b_i. \]

This product is of course a scalar , and except for the distinction between row and
column vectors, it is the same as the notion of dot product for elements of Rn
introduced in Chapter I, Section 3. You should be familiar with its properties.

More generally, let A be an m × n matrix and B an n × p matrix. Then each row
of A has the same size as each column of B. The matrix product AB is defined to
be the m × p matrix with i, j entry the row by column product of the ith row of A
with the jth column of B. Thus, if C = AB, then C has the same number of rows
as A, the same number of columns as B, and

\[ c_{ij} = \sum_{r=1}^{n} a_{ir} b_{rj}. \]

Examples

\[ \underbrace{\begin{bmatrix} 2 & 1 \\ 1 & 0 \end{bmatrix}}_{2 \times 2} \underbrace{\begin{bmatrix} 1 & 0 & 1 \\ -1 & 2 & 1 \end{bmatrix}}_{2 \times 3} = \underbrace{\begin{bmatrix} 2-1 & 0+2 & 2+1 \\ 1-0 & 0+0 & 1+0 \end{bmatrix}}_{2 \times 3} = \begin{bmatrix} 1 & 2 & 3 \\ 1 & 0 & 1 \end{bmatrix} \]

\[ \underbrace{\begin{bmatrix} 1 & -1 \\ 1 & 0 \\ 2 & 1 \end{bmatrix}}_{3 \times 2} \underbrace{\begin{bmatrix} x \\ y \end{bmatrix}}_{2 \times 1} = \underbrace{\begin{bmatrix} x - y \\ x \\ 2x + y \end{bmatrix}}_{3 \times 1} \]
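The defining formula translates directly into a triple loop. Here is a short Python sketch (our own illustration) checking a hand-rolled product against NumPy's built-in one on the first example above:

    import numpy as np

    def matmul(A, B):
        # C = AB with c_ij = sum over r of a_ir * b_rj; A is m x n, B is n x p
        m, n = A.shape
        n2, p = B.shape
        assert n == n2, "each row of A must have the same size as each column of B"
        C = np.zeros((m, p))
        for i in range(m):
            for j in range(p):
                C[i, j] = sum(A[i, r] * B[r, j] for r in range(n))
        return C

    A = np.array([[2.0, 1.0], [1.0, 0.0]])
    B = np.array([[1.0, 0.0, 1.0], [-1.0, 2.0, 1.0]])
    print(matmul(A, B))    # [[1. 2. 3.], [1. 0. 1.]], as in the example
    print(A @ B)           # NumPy's product agrees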

The most immediate use for matrix multiplication is a simplification of the notation
used to describe a linear system. We have

\[ \begin{bmatrix} a_{11} & a_{12} & \dots & a_{1n} \\ a_{21} & a_{22} & \dots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{n1} & a_{n2} & \dots & a_{nn} \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix} = \begin{bmatrix} a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n \\ a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n \\ \vdots \\ a_{n1}x_1 + a_{n2}x_2 + \cdots + a_{nn}x_n \end{bmatrix}. \]

(Note that the matrix on the right is an n × 1 column vector and each entry, although
expressed as a complicated sum, is a scalar.) With this notation, the linear system

\[ \frac{dx_1}{dt} = a_{11}(t) x_1 + a_{12}(t) x_2 + \cdots + a_{1n}(t) x_n + g_1(t) \]
\[ \frac{dx_2}{dt} = a_{21}(t) x_1 + a_{22}(t) x_2 + \cdots + a_{2n}(t) x_n + g_2(t) \]
\[ \vdots \]
\[ \frac{dx_n}{dt} = a_{n1}(t) x_1 + a_{n2}(t) x_2 + \cdots + a_{nn}(t) x_n + g_n(t) \]

may be rewritten

\[ \frac{d\mathbf{x}}{dt} = A(t)\mathbf{x} + \mathbf{g}(t) \]

where

\[ \mathbf{x} = \mathbf{x}(t) = \begin{bmatrix} x_1(t) \\ x_2(t) \\ \vdots \\ x_n(t) \end{bmatrix} \]

\[ A(t) = \begin{bmatrix} a_{11}(t) & a_{12}(t) & \dots & a_{1n}(t) \\ a_{21}(t) & a_{22}(t) & \dots & a_{2n}(t) \\ \vdots & \vdots & & \vdots \\ a_{n1}(t) & a_{n2}(t) & \dots & a_{nn}(t) \end{bmatrix} \]

\[ \mathbf{g}(t) = \begin{bmatrix} g_1(t) \\ g_2(t) \\ \vdots \\ g_n(t) \end{bmatrix} \]

Example 190 The system

\[ x_1' = x_2 \]
\[ x_2' = x_3 \]
\[ x_3' = 4x_1 - 3x_2 - 2x_3 + e^t \]

may be rewritten

\[ \frac{d}{dt}\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 4 & -3 & -2 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} + \begin{bmatrix} 0 \\ 0 \\ e^t \end{bmatrix}. \]

Another important application of matrices occurs in the analysis of large systems of


simultaneous linear algebraic equations. We shall have much more to say about this
later in this chapter. In addition, matrices and linear algebra are used extensively
in practically every branch of science and engineering.

Exercises for 10.2.

1. Let

\[ \mathbf{x} = \begin{bmatrix} 1 \\ 2 \\ -3 \end{bmatrix}, \quad \mathbf{y} = \begin{bmatrix} -2 \\ 1 \\ 3 \end{bmatrix}, \quad \mathbf{z} = \begin{bmatrix} 1 \\ 0 \\ -1 \end{bmatrix}. \]

Calculate x + y and 3x - 5y + z.

2. Let

\[ A = \begin{bmatrix} 2 & 7 & 4 & -3 \\ -3 & 0 & 1 & -2 \\ 1 & 3 & -2 & 3 \\ 0 & 0 & 5 & -5 \end{bmatrix}, \quad \mathbf{x} = \begin{bmatrix} 1 \\ -2 \\ 3 \\ 5 \end{bmatrix}, \quad \mathbf{y} = \begin{bmatrix} -2 \\ 2 \\ 0 \\ 4 \end{bmatrix}. \]

Compute Ax, Ay, Ax + Ay, and A(x + y).
3. Let

\[ A = \begin{bmatrix} 1 & -1 & 3 \\ 0 & -2 & 2 \end{bmatrix}, \quad B = \begin{bmatrix} 1 & 2 \\ 1 & 0 \\ -3 & 2 \end{bmatrix}, \quad C = \begin{bmatrix} -1 & 1 & -3 \\ 0 & 2 & -2 \end{bmatrix}, \quad D = \begin{bmatrix} -1 & -2 & 0 \\ 1 & -2 & 1 \\ 2 & 1 & -4 \end{bmatrix}. \]

Calculate each of the following quantities if it is defined: A + 3B, A + C, C + 2D, AB, BA, CD, DC.
4. Suppose A is a 2 × 2 matrix such that

\[ A \begin{bmatrix} 1 \\ 2 \end{bmatrix} = \begin{bmatrix} 3 \\ 1 \end{bmatrix}, \qquad A \begin{bmatrix} 2 \\ 1 \end{bmatrix} = \begin{bmatrix} 6 \\ 4 \end{bmatrix}. \]

Find A.
5. Let e_i denote the n × 1 column vector, with all entries zero except the ith
which is 1, e.g., for n = 3,

\[ \mathbf{e}_1 = \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}, \quad \mathbf{e}_2 = \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix}, \quad \mathbf{e}_3 = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}. \]

Let A be an arbitrary m × n matrix. Show that Ae_i is the ith column of A.
You may verify this just in the case n = 3 and A is 3 × 3. That is sufficiently
general to understand the general argument.
6. Write each of the following systems in matrix form.
(a) x_1' = 2x_1 - 3x_2, x_2' = -4x_1 + 2x_2.
(b) x_1' = 2x_1 - 3x_2 + 4, x_2' = -4x_1 + 2x_2 - 1.
(c) x_1' = x_2, x_2' = x_3, x_3' = 2x_1 + 3x_2 - x_3 + cos t.
7. Let y(t) be a solution of the linear second order differential equation

\[ t^2 y'' + t p(t) y' + q(t) y = 0. \]

Put

\[ \mathbf{x} = \mathbf{x}(t) = \begin{bmatrix} y(t) \\ t y'(t) \end{bmatrix} \]

and show that x satisfies the matrix differential equation

\[ t \frac{d\mathbf{x}}{dt} = \begin{bmatrix} 0 & 1 \\ -q(t) & 1 - p(t) \end{bmatrix} \mathbf{x}. \]

8. Matrices are used to create more realistic population models than those we
considered in Chapter VI, Section 4. First, divide the population into n age
groups for an appropriate positive integer n. Let x_i, i = 1, 2, ..., n be the
number of women in the ith age group, and consider the vector x with those
components. Construct an n × n matrix A which incorporates information
about birth and death rates so that Ax gives the population vector after one
unit of time has elapsed. Then A^n x gives the population vector after n units
of time.
Assume a human population is divided into 10 age groups between 0 and 99.
Suppose the following table gives the birth and death rates for each age group
Age BR DR
0...9 0 .01
10 . . . 19 .01 .01
20 . . . 29 .04 .01
30 . . . 39 .03 .01
40 . . . 49 .01 .02
50 . . . 59 .001 .03
60 . . . 69 0 .04
70 . . . 79 0 .10
80 . . . 89 0 .30
90 . . . 99 0 1.00
Find A.

10.3 Formal Rules

The usual rules of algebra apply to matrices with a few exceptions. Here are some
of these rules and warnings about when they apply.

The associative law


A(BC) = (AB)C
works as long as the shapes of the matrices match. That means that the length of
each row of A must be the same as the length of each column of B and the length
of each row of B must be the same as the length of each column of C. Otherwise,
none of the products in the formula will be defined. The proof of the associative
law requires some fiddling with indices and is left for the Exercises.

For each positive integer n, the n × n matrix

\[ I = \begin{bmatrix} 1 & 0 & \dots & 0 \\ 0 & 1 & \dots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \dots & 1 \end{bmatrix} \]

is called the identity matrix of degree n. As in the case of the zero matrices, we
get a different identity matrix for each n, and if we need to note the dependence on
n, we shall use the notation I_n. The identity matrix of degree n has the property
IA = A for any matrix A with n rows and the property BI = B for any matrix B
with n columns. The entries of the identity matrix are usually denoted δ_{ij}: δ_{ij} = 1
if i = j (the diagonal entries) and δ_{ij} = 0 if i ≠ j. The indexed expression δ_{ij} is
often called the Kronecker δ.

The commutative law AB = BA is not generally true for matrix multiplication.


First of all, the products won’t be defined unless the shapes match. Even if the
shapes match on both sides, the resulting products may have different sizes. Thus,
if A is m × n and B is n × m, then AB is m × m and BA is n × n. Finally, even if
the shapes match and the products have the same sizes (if both A and B are n × n),
it may still be true that the products are different.

Example 191 Suppose

\[ A = \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} \qquad B = \begin{bmatrix} 0 & 0 \\ 1 & 0 \end{bmatrix}. \]

Then

\[ AB = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix} = 0 \qquad BA = \begin{bmatrix} 0 & 0 \\ 1 & 0 \end{bmatrix} \ne 0 \]

so AB ≠ BA. Lest you think that this is a specially concocted example, let me
assure you that it is the exception rather than the rule for the commutative law to
hold for a randomly chosen pair of square matrices.
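You can verify this example, or experiment with other matrices, in a few lines of NumPy:

    import numpy as np

    A = np.array([[1, 0], [0, 0]])
    B = np.array([[0, 0], [1, 0]])
    print(A @ B)                           # the zero matrix
    print(B @ A)                           # [[0 0], [1 0]], not the zero matrix
    print(np.array_equal(A @ B, B @ A))    # False: AB != BA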

Another rule of algebra which holds for scalars but does not generally hold for
matrices is the cancellation law.

Example 192 Let

\[ A = \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} \qquad B = \begin{bmatrix} 0 & 0 \\ 1 & 0 \end{bmatrix} \qquad C = \begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix}. \]

Then

\[ AB = 0 \quad \text{and} \quad AC = 0, \]

so we cannot necessarily conclude from AB = AC that B = C.

The distributive laws

A(B + C) = AB + AC
(A + B)C = AC + BC

do hold as long as the operations are defined. Note however that since the com-
mutative law does not hold in general, the distributive law must be stated for both
possible orders of multiplication.

Another useful rule is


c(AB) = (cA)B = A(cB)
where c is a scalar and A and B are matrices whose shapes match so the products
are defined.

The rules of calculus apply in general to matrix valued functions except that you
have to be careful about orders whenever products are involved. For example, we
have

\[ \frac{d}{dt}\bigl(A(t)B(t)\bigr) = \frac{dA(t)}{dt} B(t) + A(t) \frac{dB(t)}{dt} \]

for matrix valued functions A(t) and B(t) with matching shapes.

We have just listed some of the rules of algebra and calculus, and we haven’t
discussed any of the proofs. Generally, you can be confident that matrices can
be manipulated like scalars if you are careful about matters like commutativity
discussed above. However, in any given case, if things don’t seem to be working
properly, you should look carefully to see if some operation you are using is valid
for matrices.

Exercises for 10.3.

1. Find two 2 × 2 matrices A and B such that neither has any zero entries but
such that AB = 0.

2. Let A be an m × n matrix, let x and y be n × 1 column vectors, and let a and


b be scalars. Using the rules of algebra discussed in Section 3, prove

A(ax + by) = a(Ax) + b(Ay).

3. Prove the associative law (AB)C = A(BC). Hint: If D = AB, then
d_{ik} = \sum_{j=1}^{n} a_{ij} b_{jk}, and if E = BC, then e_{jr} = \sum_{k=1}^{p} b_{jk} c_{kr}, where A is m × n, B is
n × p, and C is p × q.

10.4 Linear Systems of Algebraic Equations

Before studying the problem of solving a linear system of differential equations,


we tackle the simpler problem of solving a linear system of simultaneous algebraic
equations. This problem is important in its own right, and, as we shall see, we
need to be able to solve linear algebraic systems in order to be able to solve linear
systems of differential equations.

We start with a problem you ought to be able to solve from what you learned in
high school

Example 193 Consider the algebraic system


x1 + 2x2 − x3 = 1
x1 − x2 + x3 = 0
x1 + x2 + 2x3 = 1 (163)
which is a system of 3 equations in 3 unknowns x1 , x2 , x3 . This system may also be
written more compactly as a matrix equation
    
\[ \begin{bmatrix} 1 & 2 & -1 \\ 1 & -1 & 1 \\ 1 & 1 & 2 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix}. \]

The method we shall use to solve (163) is the method of elimination of unknowns.
Subtract the first equation from each of the other equations to eliminate x1 from
those equations.
x1 + 2x2 − x3 = 1
−3x2 + 2x3 = −1
−x2 + 3x3 = 0
Now subtract 3 times the third equation from the second equation.
x1 + 2x2 − x3 = 1
−7x3 = −1
−x2 + 3x3 = 0
which may be reordered to obtain
x1 + 2x2 − x3 = 1
−x2 + 3x3 = 0
7x3 = 1.
We may now solve as follows. According to the last equation x3 = 1/7. Putting
this in the second equation yields
−x2 + 3/7 = 0 or x2 = 3/7.
Putting x3 = 1/7 and x2 = 3/7 in the first equation yields
x1 + 2(3/7) − 1/7 = 1 or x1 = 1 − 5/7 = 2/7.
Hence, we get
x1 = 2/7
x2 = 3/7
x3 = 1/7

To check, we calculate

\[ \begin{bmatrix} 1 & 2 & -1 \\ 1 & -1 & 1 \\ 1 & 1 & 2 \end{bmatrix} \begin{bmatrix} 2/7 \\ 3/7 \\ 1/7 \end{bmatrix} = \begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix}. \]

The above example illustrates the general procedure which may be applied to any
system of m equations in n unknowns

\[ a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n = b_1 \]
\[ a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n = b_2 \]
\[ \vdots \]
\[ a_{m1}x_1 + a_{m2}x_2 + \cdots + a_{mn}x_n = b_m \]

or, using matrix notation,

\[ A\mathbf{x} = \mathbf{b} \]

with

\[ A = \begin{bmatrix} a_{11} & a_{12} & \dots & a_{1n} \\ a_{21} & a_{22} & \dots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{m1} & a_{m2} & \dots & a_{mn} \end{bmatrix}, \quad \mathbf{x} = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix}, \quad \mathbf{b} = \begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_m \end{bmatrix}. \]

As in Example 193, a sequence of elimination steps yields a set of equations each


involving at least one fewer unknowns than the one above it. This process is called
Gaussian reduction after the famous 19th century German mathematician C. F.
Gauss. To complete the solution, we start with the last equation and substitute back
recursively in each of the previous equations. This process is called appropriately
back-substitution. The combined process will generally lead to a complete solution,
but, as we shall see later, there can be some difficulties.
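The two phases are easy to see in code. Here is a bare-bones Python sketch (our own illustration; it performs no row interchanges, so it assumes every pivot it meets is non-zero):

    import numpy as np

    def solve(A, b):
        """Solve Ax = b by Gaussian reduction followed by back-substitution."""
        A = A.astype(float)
        b = b.astype(float)
        n = len(b)
        # Gaussian reduction: eliminate the entries below each pivot
        for i in range(n):
            for j in range(i + 1, n):
                m = A[j, i] / A[i, i]          # assumes A[i, i] != 0
                A[j, i:] -= m * A[i, i:]
                b[j] -= m * b[i]
        # back-substitution, starting with the last equation
        x = np.zeros(n)
        for i in range(n - 1, -1, -1):
            x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
        return x

    A = np.array([[1, 2, -1], [1, -1, 1], [1, 1, 2]])
    b = np.array([1, 0, 1])
    print(solve(A, b))    # [2/7, 3/7, 1/7], the solution of Example 193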

Row Operations and Gauss-Jordan Reduction We now consider the general
process of solving a system of equations of the form

\[ AX = B \]

where A is an n × n matrix, X is an n × p matrix of unknowns, and B is an n × p


matrix of known quantities. Usually, p will be 1, so X and B will be column vectors,
but the procedure is basically the same for any p. For the moment we emphasize
the case in which the coefficient matrix A is square, but we shall return later to the
general case (m and n possibly different).

If you look carefully at Example 193, you will see that we employed three basic
types of operations:

1. adding or subtracting a multiple of one equation from another,


2. multiplying or dividing an equation by a non-zero scalar,
3. interchanging two equations.

These operations correspond when using matrix notation to applying the following
operations to the matrices on both sides of the equation AX = B:

1. adding or subtracting one row of a matrix to another,


2. multiplying or dividing one row of a matrix by a non-zero scalar,
3. interchanging two rows of a matrix.

These operations are called elementary row operations.

An important principle about row operations that we shall use over and over again
is the following: To apply a row operation to a product AX, it suffices to apply the
row operation to A and then to multiply the result by X. It is easy to convince
yourself that this rule is valid by looking at examples. Thus, for the product

\[ \begin{bmatrix} a & b \\ c & d \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} ax + by \\ cx + dy \end{bmatrix} \]

adding the first row to the second yields

\[ \begin{bmatrix} ax + by \\ ax + by + cx + dy \end{bmatrix} = \begin{bmatrix} ax + by \\ (a+c)x + (b+d)y \end{bmatrix}. \]

On the other hand, adding the rows of the coefficient matrix yields

\[ \begin{bmatrix} a & b \\ c & d \end{bmatrix} \to \begin{bmatrix} a & b \\ a+c & b+d \end{bmatrix}, \]

and multiplying the transformed matrix by \begin{bmatrix} x \\ y \end{bmatrix} yields

\[ \begin{bmatrix} ax + by \\ (a+c)x + (b+d)y \end{bmatrix} \]

as required. (See the appendix to this section for a general proof.)

It is now clear how to proceed in general to solve a system of the form

AX = B.

Apply row operations to both sides until we obtain a system which is easy to solve
(or for which it is clear there is no solution.) Because of the principle just enunciated,
we may apply the row operations on the left just to the matrix A and omit reference
to X since that is not changed. For this reason, it is usual to collect A on the left
and B on the right in a so-called augmented matrix

[A | B]

where the ‘|’ (or other appropriate divider) separates the two matrices. We illustrate
this by considering another system of 3 equations in 3 unknowns.

Example 194

\[ x_1 + x_2 - x_3 = 0 \]
\[ 2x_1 + x_3 = 2 \]
\[ x_1 - x_2 + 3x_3 = 1 \]

or

\[ \begin{bmatrix} 1 & 1 & -1 \\ 2 & 0 & 1 \\ 1 & -1 & 3 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 0 \\ 2 \\ 1 \end{bmatrix} \]

We first do the Gaussian part of the reduction but for the augmented matrix rather
than the original set of equations.

\[ \begin{bmatrix} 1 & 1 & -1 & | & 0 \\ 2 & 0 & 1 & | & 2 \\ 1 & -1 & 3 & | & 1 \end{bmatrix} \to \begin{bmatrix} 1 & 1 & -1 & | & 0 \\ 0 & -2 & 3 & | & 2 \\ 1 & -1 & 3 & | & 1 \end{bmatrix} \quad -2[r1] + r2 \]

\[ \to \begin{bmatrix} 1 & 1 & -1 & | & 0 \\ 0 & -2 & 3 & | & 2 \\ 0 & -2 & 4 & | & 1 \end{bmatrix} \quad -[r1] + r3 \]

\[ \to \begin{bmatrix} 1 & 1 & -1 & | & 0 \\ 0 & -2 & 3 & | & 2 \\ 0 & 0 & 1 & | & -1 \end{bmatrix} \quad -[r2] + r3 \]

At this point the corresponding system is

x1 + x2 − x3 = 0
−2x2 + 3x3 = 2
x3 = −1

so we could now apply back-substitution to find the solution. However, it is better
for matrix computation to use an essentially equivalent process. Starting with the
last row, use the leading non-zero entry to eliminate the entries above it. (That
corresponds to substituting the value of the corresponding unknown in the previous
equations.) This process is called Jordan reduction.

\[ \begin{bmatrix} 1 & 1 & -1 & | & 0 \\ 0 & -2 & 3 & | & 2 \\ 0 & 0 & 1 & | & -1 \end{bmatrix} \to \begin{bmatrix} 1 & 1 & -1 & | & 0 \\ 0 & -2 & 0 & | & 5 \\ 0 & 0 & 1 & | & -1 \end{bmatrix} \quad -3[r3] + r2 \]

\[ \to \begin{bmatrix} 1 & 1 & 0 & | & -1 \\ 0 & -2 & 0 & | & 5 \\ 0 & 0 & 1 & | & -1 \end{bmatrix} \quad [r3] + r1 \]

\[ \to \begin{bmatrix} 1 & 1 & 0 & | & -1 \\ 0 & 1 & 0 & | & -5/2 \\ 0 & 0 & 1 & | & -1 \end{bmatrix} \quad -(1/2)[r2] \]

\[ \to \begin{bmatrix} 1 & 0 & 0 & | & 3/2 \\ 0 & 1 & 0 & | & -5/2 \\ 0 & 0 & 1 & | & -1 \end{bmatrix} \quad -[r2] + r1 \]

This corresponds to the system

\[ \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} X = \begin{bmatrix} 3/2 \\ -5/2 \\ -1 \end{bmatrix} \quad \text{or} \quad X = \begin{bmatrix} 3/2 \\ -5/2 \\ -1 \end{bmatrix} \]

which is the desired solution: x_1 = 3/2, x_2 = -5/2, x_3 = -1. (Check it by
plugging back into the original matrix equation.)

The combined method employed in the previous example is called Gauss-Jordan
reduction. The strategy is clear. Use a sequence of row operations to reduce the
coefficient matrix A to the identity matrix I. If this is possible, the same sequence
of row operations will transform the matrix B to a new matrix B', and the corresponding
matrix equation will be

\[ IX = B' \quad \text{or} \quad X = B'. \]

It is natural at this point to conclude that X = B' is the solution of the original
system, but there is a subtlety involved here. The method outlined above shows
the following: if there is a solution, and if it is possible to reduce A to I by a
sequence of row operations, then the solution is X = B'. In essence, this says that
if the solution exists, then it is unique. It does not demonstrate that any solution
exists. Why are we justified in concluding that we do in fact have a solution when
the reduction is possible? To understand that, first note that every possible row
operation is reversible. Thus, to reverse the effect of adding a multiple of one row
to another, just subtract the same multiple of the first row from the (modified)
second row. To reverse the effect of multiplying a row by a non-zero scalar, just
multiply the (modified) row by the reciprocal of that scalar. Finally, to reverse the
effect of interchanging two rows, just interchange them back. Hence, the effect of
any sequence of row operations on a system of equations is to produce an equivalent
system of equations. Anything which is a solution of the initial system is necessarily
a solution of the transformed system and vice-versa. Thus, the system AX = B is
equivalent to the system X = IX = B', which is to say X = B' is a solution of
AX = B.

Elementary Matrices and the Effect of Row Operations on Products


Each of the elementary row operations may be accomplished by multiplying by an
appropriate square matrix on the left. Such matrices of course should have the
proper size for the matrix being multiplied.

To add c times the jth row of a matrix to the ith row (with i ≠ j), multiply that
matrix on the left by the matrix E_{ij}(c) which has diagonal entries 1, the i, j-entry c,
and all other entries 0. This matrix may also be obtained by applying the specified
row operation to the identity matrix. You should try out a few examples to convince
yourself that it works.

Example For n = 3,

\[ E_{13}(-4) = \begin{bmatrix} 1 & 0 & -4 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}. \]

To multiply the ith row of a matrix by c ≠ 0, multiply that matrix on the left by
the matrix E_i(c) which has diagonal entries 1 except for the i, i-entry which is c
and which has all other entries zero. E_i(c) may also be obtained by multiplying the
ith row of the identity matrix by c.

Example For n = 3,

\[ E_2(6) = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 6 & 0 \\ 0 & 0 & 1 \end{bmatrix}. \]

To interchange the ith and jth rows of a matrix, with i ≠ j, multiply that matrix
on the left by the matrix E_{ij} which is obtained from the identity matrix by
interchanging its ith and jth rows. The diagonal entries of E_{ij} are 1 except for its
i, i and j, j-entries which are zero. Its i, j and j, i-entries are both 1, and all other
entries are zero.

Examples For n = 3,

\[ E_{12} = \begin{bmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix} \qquad E_{13} = \begin{bmatrix} 0 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 0 \end{bmatrix}. \]

Matrices of the above type are called elementary matrices.



The fact that row operations may be accomplished by matrix multiplication by ele-
mentary matrices has many important consequences. Thus, let E be an elementary
matrix corresponding to a certain elementary row operation. The associative law
tells us
E(AB) = (EA)B
as long as the shapes match. However, E(AB) is the result of applying the row
operation to the product AB and (EA)B is the result of applying the row operation
to A and then multiplying by B. This establishes the important principle enunciated
earlier in this section and upon which Gauss-Jordan reduction is based.
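A quick numerical check of this principle, using a 2 × 2 elementary matrix that adds -2 times row 1 to row 2 (a sketch, assuming NumPy):

    import numpy as np

    E = np.array([[1.0, 0.0],
                  [-2.0, 1.0]])    # elementary matrix: add -2 times row 1 to row 2

    A = np.array([[1.0, 2.0], [3.0, 4.0]])
    B = np.array([[5.0, 6.0], [7.0, 8.0]])

    # applying the row operation to the product AB ...
    left = E @ (A @ B)
    # ... is the same as applying it to A first and then multiplying by B
    right = (E @ A) @ B
    print(np.allclose(left, right))    # True, by the associative law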

Exercises for 10.4.

1. Solve each of the following systems by Gauss-Jordan elimination if there is a


solution.
(a)

x1 + 2x2 + 3x3 = 4
3x1 + x2 + 2x3 = −1
x1 + x3 = 0

(b)

x1 + 2x2 + 3x3 = 4
2x1 + 3x2 + 2x3 = −1
x1 + x2 − x3 = 10

(c)

\[ \begin{bmatrix} 1 & 1 & -2 & 3 \\ 2 & 1 & 1 & 0 \\ 1 & -1 & 1 & 0 \\ 3 & 1 & 2 & 1 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix} = \begin{bmatrix} 9 \\ -18 \\ -9 \\ 9 \end{bmatrix}. \]

2. Use Gaussian elimination to solve

\[ \begin{bmatrix} 3 & 2 \\ 2 & 1 \end{bmatrix} X = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} \]

where X is an unknown 2 × 2 matrix.


3. Calculate
     
1 0 1 1 0 0 0 1 0 2 0 0 1 2 3
0 1 0 −1 1 0 1 0 0 0 1 0 4 5 6 .
0 0 1 0 0 1 0 0 1 0 0 1 7 8 9

Hint: User the row operations suggested by the first three matrices.

4. What is the effect of multiplying a 2 × 2 matrix A on the right by the elementary
matrix

\[ \begin{bmatrix} 1 & a \\ 0 & 1 \end{bmatrix}? \]

What general rule is this a special case of?

10.5 Singularity, Pivots, and Invertible Matrices

Let A be a square coefficient matrix. Gauss-Jordan reduction will work as indicated
in the previous section if A can be reduced by a sequence of elementary row
operations to the identity matrix I. A square matrix with this property is called
non-singular or invertible. (The reason for the latter terminology will be clear
shortly.) If it cannot be so reduced, it is called singular. Clearly, there are singular
matrices. For example, the matrix equation

\[ \begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} 1 \\ 0 \end{bmatrix} \]

is equivalent to the system of 2 equations in 2 unknowns

x1 + x2 = 1
x1 + x2 = 0

which is inconsistent and has no solution. Thus Gauss-Jordan reduction certainly


can’t work on its coefficient matrix.

To understand how to tell if a square matrix A is non-singular or not, we look more


closely at the Gauss-Jordan reduction process. The basic strategy is the following.
Start with the first row, and use type (1) row operations to eliminate all entries in
the first column below the 1, 1-position. A leading non-zero entry when used in this
way is called a pivot. There is one problem with this course of action: the leading
non-zero entry in the first row may not be in the 1, 1-position. In that case, first
interchange the first row with a succeeding row which does have a non-zero entry
in the first column. (If you think about it, you may still see a problem. We shall
come back to this and related issues later.)

After the first reduction, the coefficient matrix will have been transformed to a
matrix of the form

\[ \begin{bmatrix} p_1 & * & \dots & * \\ 0 & * & \dots & * \\ \vdots & \vdots & & \vdots \\ 0 & * & \dots & * \end{bmatrix} \]

where p_1 is the (first) pivot. We now do something mathematicians (and computer
scientists) love: repeat the same process for the submatrix consisting of the second
and subsequent rows. If we are fortunate, we will be able to transform A ultimately
by a sequence of elementary row operations into a matrix of the form

\[ \begin{bmatrix} p_1 & * & * & \dots & * \\ 0 & p_2 & * & \dots & * \\ 0 & 0 & p_3 & \dots & * \\ \vdots & \vdots & \vdots & & \vdots \\ 0 & 0 & 0 & \dots & p_n \end{bmatrix} \]

with pivots on the diagonal and non-zero entries in those pivot positions. (Such a
matrix is also called an upper triangular matrix because it has zeroes below the
diagonal.) We may now start in the lower right hand corner and apply the Jordan
reduction process. In this way each of the entries above the diagonal pivots may be
eliminated, so we obtain a diagonal matrix

\[ \begin{bmatrix} p_1 & 0 & 0 & \dots & 0 \\ 0 & p_2 & 0 & \dots & 0 \\ 0 & 0 & p_3 & \dots & 0 \\ \vdots & \vdots & \vdots & & \vdots \\ 0 & 0 & 0 & \dots & p_n \end{bmatrix} \]

with non-zero entries on the diagonal. We may now finish off the process by applying
type (2) operations to the rows as needed and finally obtain the identity matrix I
as required.

The above analysis makes clear that the placement of the pivots is what is essential
to non-singularity. What can go wrong? It may happen for a given row that the
leading non-zero entry is not in the diagonal position, and there is no way to remedy
this by interchanging with a subsequent row. In that case, we just do the best we
can. We use a pivot as far to the left as possible (after suitable row interchange
with a subsequent row where necessary). In the extreme case, it may turn out
that the submatrix we are working with consists only of zeroes, and there are no
possible pivots to choose, so we stop. For a square matrix, this extreme case must
occur, since we will run out of pivot positions before we run out of rows. Thus,
the Gaussian reduction will still transform A to an upper triangular matrix A0 , but
some of the diagonal entries will be zero and some of the last rows (perhaps only
the last row) will consist of zeroes. That is the singular case.

Example 195

\[ \begin{bmatrix} 1 & 2 & -1 \\ 1 & 2 & 0 \\ 1 & 2 & -2 \end{bmatrix} \to \begin{bmatrix} 1 & 2 & -1 \\ 0 & 0 & 1 \\ 0 & 0 & -1 \end{bmatrix} \quad \text{clear 1st column} \]

\[ \to \begin{bmatrix} 1 & 2 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{bmatrix} \quad \text{no pivot in 2, 2 position} \]

Note that the last row consists of zeroes.
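A singular matrix like this one can also be recognized numerically: its determinant is zero and its rank is less than 3. A quick check, assuming NumPy:

    import numpy as np

    A = np.array([[1, 2, -1],
                  [1, 2, 0],
                  [1, 2, -2]])
    print(np.linalg.det(A))            # 0.0: A is singular
    print(np.linalg.matrix_rank(A))    # 2: only two pivots can be found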

We showed in the previous section that if the n × n matrix A is non-singular, then
every equation of the form AX = B (where both X and B are n × p matrices) has
a solution and also that the solution X = B' is unique. On the other hand, if A
is singular, an equation of the form AX = B may have a solution, but there will
certainly be matrices B for which AX = B has no solutions. This is best illustrated
by an example.

Example 196 Consider the system

\[ \begin{bmatrix} 1 & 2 & -1 \\ 1 & 2 & 0 \\ 1 & 2 & -2 \end{bmatrix} \mathbf{x} = \mathbf{b} \]

where x and b are 3 × 1 column vectors. Without specifying b, the reduction of
the augmented matrix for this system would follow the scheme

\[ \begin{bmatrix} 1 & 2 & -1 & b_1 \\ 1 & 2 & 0 & b_2 \\ 1 & 2 & -2 & b_3 \end{bmatrix} \to \begin{bmatrix} 1 & 2 & -1 & * \\ 0 & 0 & 1 & * \\ 0 & 0 & -1 & * \end{bmatrix} \to \begin{bmatrix} 1 & 2 & 0 & b_1' \\ 0 & 0 & 1 & b_2' \\ 0 & 0 & 0 & b_3' \end{bmatrix}. \]

Now simply choose b_3' = 1 (or any other non-zero value), so the reduced system is
inconsistent. (Its last equation would be 0 = b_3' ≠ 0.) Since the row operations
may be reversed, we can now work back to a system with the original coefficient
matrix which is also inconsistent. (Check in this case that if you choose b_1' = 0, b_2' =
1, b_3' = 1, then reversing the operations yields b_1 = -1, b_2 = 0, b_3 = -1.)

The general case is completely analogous. Suppose

\[ A \to \cdots \to A' \]

is a sequence of elementary row operations which transforms A to a matrix A' for
which the last row consists of zeroes. Choose any n × p matrix B' for which the
last row does not consist of zeroes. Then the equation

\[ A'X = B' \]

cannot be valid since the last row on the left will necessarily consist of zeroes. Now
reverse the row operations in the sequence which transformed A to A'. Let B be
the effect of this reverse sequence on B'.

\[ A \leftarrow \cdots \leftarrow A' \]
\[ B \leftarrow \cdots \leftarrow B' \]

Then the equation

\[ AX = B \]

cannot be consistent because the equivalent system A'X = B' is not consistent.

We shall see later that when A is a singular n × n matrix, if AX = B has a solution


X for a particular B, then it has infinitely many solutions.

There is one unpleasant possibility we never mentioned. It is conceivable that the


standard sequence of elementary row operations transforms A to the identity matrix,
so we decide it is non-singular, but some other bizarre sequence of elementary row
operations transforms it to a matrix with some rows consisting of zeroes, in which
case we should decide it is singular. Fortunately this can never happen because
singular matrices and non-singular matrices have diametrically opposed properties.
For example, if A is non-singular then AX = B has a solution for every B, while if
A is singular, there are many B for which AX = B has no solution. This fact does
not depend on the method we use to find solutions.

Inverses of Non-singular Matrices Let A be a non-singular n × n matrix. According
to the above analysis, the equation

\[ AX = I \]

(where we take B to be the n × n identity matrix I) has a unique n × n solution
matrix X = B'. This B' is called the inverse of A, and it is usually denoted A^{-1}.
That explains why non-singular matrices are also called invertible.

Example 197 Consider

\[ A = \begin{bmatrix} 1 & 0 & -1 \\ 1 & 1 & 0 \\ 1 & 2 & 0 \end{bmatrix}. \]

To solve AX = I, we reduce the augmented matrix [A | I].

\[ \begin{bmatrix} 1 & 0 & -1 & | & 1 & 0 & 0 \\ 1 & 1 & 0 & | & 0 & 1 & 0 \\ 1 & 2 & 0 & | & 0 & 0 & 1 \end{bmatrix} \to \begin{bmatrix} 1 & 0 & -1 & | & 1 & 0 & 0 \\ 0 & 1 & 1 & | & -1 & 1 & 0 \\ 0 & 2 & 1 & | & -1 & 0 & 1 \end{bmatrix} \]

\[ \to \begin{bmatrix} 1 & 0 & -1 & | & 1 & 0 & 0 \\ 0 & 1 & 1 & | & -1 & 1 & 0 \\ 0 & 0 & -1 & | & 1 & -2 & 1 \end{bmatrix} \]

\[ \to \begin{bmatrix} 1 & 0 & -1 & | & 1 & 0 & 0 \\ 0 & 1 & 1 & | & -1 & 1 & 0 \\ 0 & 0 & 1 & | & -1 & 2 & -1 \end{bmatrix} \]

\[ \to \begin{bmatrix} 1 & 0 & 0 & | & 0 & 2 & -1 \\ 0 & 1 & 0 & | & 0 & -1 & 1 \\ 0 & 0 & 1 & | & -1 & 2 & -1 \end{bmatrix}. \]

(You should make sure you see which row operations were used in each step.) Thus,
the solution is

\[ X = A^{-1} = \begin{bmatrix} 0 & 2 & -1 \\ 0 & -1 & 1 \\ -1 & 2 & -1 \end{bmatrix}. \]

Check the answer by calculating

\[ A^{-1} A = \begin{bmatrix} 0 & 2 & -1 \\ 0 & -1 & 1 \\ -1 & 2 & -1 \end{bmatrix} \begin{bmatrix} 1 & 0 & -1 \\ 1 & 1 & 0 \\ 1 & 2 & 0 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}. \]

There is a subtle point about the above calculations. The matrix inverse X = A^{-1}
was derived as the unique solution of the equation AX = I, but we checked it by
calculating A^{-1}A = I. The definition of A^{-1} told us only that AA^{-1} = I. Since
matrix multiplication is not generally commutative, how could we be sure that the
product in the other order would also be the identity I? The answer is provided by
the following tricky argument. Let Y = A^{-1}A. Then

\[ AY = A(A^{-1}A) = (AA^{-1})A = IA = A \]

so that Y is a solution of the equation AY = A. However, Y = I is also a
solution of that equation, and since A is non-singular that equation has a unique
solution, so we may conclude that A^{-1}A = Y = I. The upshot is
that for a non-singular square matrix A, we have both AA^{-1} = I and A^{-1}A = I.

The existence of matrix inverses for non-singular square matrices suggests the fol-
lowing scheme for solving matrix equations of the form

AX = B.

First, find the matrix inverse A−1 , and then take X = A−1 B. This is indeed the
solution since
AX = A(A−1 B) = (AA−1 )B = IB = B.
However, as easy as this looks, one should not be misled by the formal algebra.
Note that the only method we have for finding the matrix inverse is to apply Gauss-
Jordan reduction to the augmented matrix [A | I]. If B has fewer than n columns,
then applying Gauss-Jordan reduction directly to [A | B] would ordinarily involve
less computation than finding A−1 . Hence, in the most common cases, applying
Gauss-Jordan reduction to the original system of equations is the best strategy.

Numerical Considerations in Computation The examples we have chosen to
illustrate the principles employ small matrices for which one may do exact arith-
metic. The worst that will happen is that some of the fractions may get a bit messy.
In real applications, the matrices are often quite large, and it is not practical to
do exact arithmetic. The introduction of rounding and similar numerical approx-
imations complicates the situation, and computer programs for solving systems of
equations have to deal with problems which arise from this. If one is not careful
in designing such a program, one can easily generate answers which are very far
off, and even deciding when an answer is sufficiently accurate sometimes involves
rather subtle considerations. Typically, one encounters problems for matrices where
the entries differ radically in size. Also, because of rounding, few matrices are ever
exactly singular since one can never be sure that a very small numerical value at a
potential pivot would have been zero if the calculations had been done exactly. On
the other hand, it is not surprising that matrices which are close to being singular
can give computer programs indigestion.

If you are interested in such questions, there are many introductory texts which
discuss numerical linear algebra. Two such are Introduction to Linear Algebra by
Johnson, Riess, and Arnold and Applied Linear Algebra by Noble and Daniel. One
of the computer assignments in your programming course is concerned with some
of the problems of numerical linear algebra.
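
As a tiny illustration of the rounding problem (our own example, in Python, not
taken from the references above), consider eliminating the first column of a matrix
whose second row is exactly three times its first. In exact arithmetic the candidate
pivot that remains is exactly zero; in floating point it typically is not:

    # The matrix [[0.1, 0.3], [0.3, 0.9]] is singular: row 2 = 3 * row 1.
    A = [[0.1, 0.3],
         [0.3, 0.9]]
    m = A[1][0] / A[0][0]            # multiplier for the elimination step
    pivot = A[1][1] - m * A[0][1]    # exactly 0 in exact arithmetic
    print(pivot)                     # a tiny non-zero value in IEEE doubles

A program that compares this residue against exact zero would wrongly declare the
matrix non-singular; practical codes compare potential pivots against a tolerance
instead.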

Exercises for 10.5.

1. In each of the following cases, find the matrix inverse if one exists. Check
your answer by multiplication.
        [ 1 −1 −2 ]
    (a) [ 2  1  1 ]
        [ 2  2  2 ]

        [ 1  4  1 ]
    (b) [ 1  1  2 ]
        [ 1  3  1 ]

        [ 1  2 −1 ]
    (c) [ 2  3  3 ]
        [ 4  7  1 ]

        [  2  2  1  1 ]
    (d) [ −1  1 −1  0 ]
        [  1  0  1  2 ]
        [  2  2  1  2 ]
2. Let A = [ a  b ] , and suppose det A = ad − bc ≠ 0. Show that
           [ c  d ]

    A−1 = 1/(ad − bc) [  d −b ]
                      [ −c  a ] .
Hint: If someone is kind enough to suggest to you what A−1 is, you need not
‘find’ it. Just check that it works by multiplication.
3. Let A and B be invertible n × n matrices. Show that (AB)−1 = B −1 A−1 .
Note the reversal of order! Hint: As above, if you are given a candidate for
an inverse, you needn’t ‘find’ it; you need only check that it works.
4. In the general discussion of Gauss-Jordan reduction, we assumed for simplicity
that there was at least one non-zero entry in the first column of the coefficient
matrix A. That was done so that we could be sure there would be a non-zero
entry in the 1, 1-position (after a suitable row interchange) to use as a pivot.
What if the first column consists entirely of zeroes? Does the basic argument
(for the singular case) still work?

10.6 Gauss-Jordan Reduction in the General Case

Gauss-Jordan reduction works just as well if the coefficient matrix A is singular or
even if it is not a square matrix. Consider the system

Ax = b

where the coefficient matrix A is an m × n matrix. We shall concentrate on the
case that x is an n × 1 column vector of unknowns and b is a given m × 1 column
vector. (This illustrates the principles, and the case AX = B where X and B are
n × p matrices with p > 1 works in a similar manner.) The method is to apply
elementary row operations to the augmented matrix

[A | b] → · · · → [A′ | b′]

making the best of it with the coefficient matrix A. We may not be able to transform
A to the identity matrix, but we can always pick out a set of pivots, one in each
non-zero row, and otherwise mimic what we did in the case of a square non-singular
A. If we are fortunate, the resulting system A′x = b′ will have solutions.

Example 198 Consider

    [  1  1  2 ] [ x1 ]   [ 1 ]
    [ −1 −1  1 ] [ x2 ] = [ 5 ] .
    [  1  1  3 ] [ x3 ]   [ 3 ]

Reduce the augmented matrix as follows

    [  1  1  2 | 1 ]     [ 1  1  2 | 1 ]     [ 1  1  2 | 1 ]
    [ −1 −1  1 | 5 ]  →  [ 0  0  3 | 6 ]  →  [ 0  0  3 | 6 ]
    [  1  1  3 | 3 ]     [ 0  0  1 | 2 ]     [ 0  0  0 | 0 ]

This completes the ‘Gaussian’ part of the reduction with pivots in the 1, 1 and 2, 3
positions, and the last row of the transformed coefficient matrix consists of zeroes.
Let’s now proceed with the ‘Jordan’ part of the reduction. Use the last pivot to
clear the column above it.
    [ 1  1  2 | 1 ]     [ 1  1  2 | 1 ]     [ 1  1  0 | −3 ]
    [ 0  0  3 | 6 ]  →  [ 0  0  1 | 2 ]  →  [ 0  0  1 |  2 ]
    [ 0  0  0 | 0 ]     [ 0  0  0 | 0 ]     [ 0  0  0 |  0 ]

and the resulting augmented matrix corresponds to the system

x1 + x2 = −3
x3 = 2
0=0

Note that the last equation could just as well have read 0 = 6 (or some other
non-zero quantity) in which case the system would be inconsistent and not have a
solution. Fortunately, that is not the case in this example. The second equation
tells us x3 = 2, but the first equation only gives a relation x1 = −3 − x2 between
x1 and x2 . That means that the solution has the form
    x = [ x1 ]   [ −3 − x2 ]   [ −3 ]      [ −1 ]
        [ x2 ] = [    x2   ] = [  0 ] + x2 [  1 ]
        [ x3 ]   [     2   ]   [  2 ]      [  0 ]

where x2 can have any value whatsoever. We say that x2 is a free variable, and the
fact that it is arbitrary means that there are infinitely many solutions. x1 and x3
are called bound variables.

It is instructive to reinterpret this geometrically in R3 . The original system of
equations may be written

x1 + x2 + 2x3 = 1
−x1 − x2 + x3 = 5
x1 + x2 + 3x3 = 3

which are equations for 3 planes in R3 . Solutions

    x = [ x1 ]
        [ x2 ]
        [ x3 ]

correspond to points lying in the common intersection of those planes. Normally,
we would expect three planes to intersect in a single point. That would have been
the case had the coefficient matrix been non-singular. However, in this case the
planes intersect in a line, and the solution obtained above may be interpreted as
the vector equation of that line. If we put x2 = s and rewrite the equation using
vector notation, we obtain

x = ⟨−3, 0, 2⟩ + s⟨−1, 1, 0⟩.
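
The whole procedure (reduce, locate the pivots, and express the bound unknowns
in terms of the free ones) can be captured in a short program. Here is a minimal
Python sketch, using exact arithmetic with fractions; the helper names rref and
general_solution are our own choices, not standard library functions. Run on
Example 198, it reproduces the solution found above:

    from fractions import Fraction

    def rref(M):
        """Gauss-Jordan reduce a copy of M; return the reduced matrix
        and the list of pivot columns."""
        M = [[Fraction(x) for x in row] for row in M]
        rows, cols = len(M), len(M[0])
        pivots, r = [], 0
        for c in range(cols):
            # find a non-zero entry at or below row r in column c
            pr = next((i for i in range(r, rows) if M[i][c] != 0), None)
            if pr is None:
                continue                        # no pivot in this column
            M[r], M[pr] = M[pr], M[r]           # row interchange
            M[r] = [x / M[r][c] for x in M[r]]  # scale the pivot to 1
            for i in range(rows):               # clear the rest of column c
                if i != r and M[i][c] != 0:
                    M[i] = [a - M[i][c] * b for a, b in zip(M[i], M[r])]
            pivots.append(c)
            r += 1
            if r == rows:
                break
        return M, pivots

    def general_solution(A, b):
        """Return (particular solution, list of basic solutions) for
        Ax = b, or None if the system is inconsistent."""
        n = len(A[0])
        R, pivots = rref([row + [bi] for row, bi in zip(A, b)])
        if n in pivots:                # a pivot to the right of the divider
            return None                # means the system is inconsistent
        free = [c for c in range(n) if c not in pivots]
        part = [Fraction(0)] * n
        for r, c in enumerate(pivots):
            part[c] = R[r][n]
        basics = []
        for f in free:      # set one free variable to 1, the others to 0
            v = [Fraction(0)] * n
            v[f] = Fraction(1)
            for r, c in enumerate(pivots):
                v[c] = -R[r][f]
            basics.append(v)
        return part, basics

    A = [[1, 1, 2], [-1, -1, 1], [1, 1, 3]]
    part, basics = general_solution(A, [1, 5, 3])
    # part -> [-3, 0, 2] and basics -> [[-1, 1, 0]], i.e. the line
    # x = (-3, 0, 2) + s(-1, 1, 0) found above.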

Example 198 illustrates many features of the general procedure. Gauss-Jordan
reduction of the coefficient matrix is always possible, but the pivots don’t always
end up on the diagonal. In any case, the Jordan part of the reduction will yield a
1 in each pivot position with zeroes above and below the pivot in that column. In
any given row of the reduced coefficient matrix, the pivot will be on the diagonal or
to its right, and all entries to the left of the pivot will be zero. (Some of the entries
to the right of the pivot may be non-zero.) If the number of pivots is smaller than
the number of rows (which will always be the case for a singular square matrix),
then some rows of the reduced coefficient matrix will consist entirely of zeroes. If
there are non-zero entries in those rows to the right of the divider in the augmented
matrix, the system is inconsistent and has no solutions. Otherwise, the system
does have solutions. Such solutions are obtained by writing out the corresponding
system, and transposing all terms not associated with the pivot position to the right
side of the equation. Each unknown in a pivot position is then expressed in terms
of the non-pivot unknowns (if any). The pivot unknowns are said to be bound. The
non-pivot unknowns may be assigned any value and are said to be free.

Example 199 Consider

    [ 1  2 −1  0 ] [ x1 ]   [ 0 ]
    [ 1  2  1  3 ] [ x2 ] = [ 0 ] .                                   (164)
    [ 2  4  0  3 ] [ x3 ]   [ 0 ]
                   [ x4 ]

Reducing the augmented matrix yields

    [ 1  2 −1  0 | 0 ]     [ 1  2 −1  0 | 0 ]     [ 1  2 −1  0 | 0 ]
    [ 1  2  1  3 | 0 ]  →  [ 0  0  2  3 | 0 ]  →  [ 0  0  2  3 | 0 ]
    [ 2  4  0  3 | 0 ]     [ 0  0  2  3 | 0 ]     [ 0  0  0  0 | 0 ]

                           [ 1  2 −1  0   | 0 ]     [ 1  2  0  3/2 | 0 ]
                        →  [ 0  0  1  3/2 | 0 ]  →  [ 0  0  1  3/2 | 0 ] .
                           [ 0  0  0  0   | 0 ]     [ 0  0  0  0   | 0 ]

(Note that since there are zeroes to the right of the divider, we don’t have to worry
about possible inconsistency in this case.) The system corresponding to the reduced
augmented matrix is

x1 + 2x2 + (3/2)x4 = 0
x3 + (3/2)x4 = 0
0=0

Thus,

x1 = −2x2 − (3/2)x4
x3 = −(3/2)x4

with x1 and x3 bound and x2 and x4 free. A general solution has the form

        [ x1 ]   [ −2x2 − (3/2)x4 ]   [ −2x2 ]   [ −(3/2)x4 ]
    x = [ x2 ] = [       x2       ] = [   x2 ] + [     0    ]
        [ x3 ]   [    −(3/2)x4    ]   [    0 ]   [ −(3/2)x4 ]
        [ x4 ]   [       x4       ]   [    0 ]   [     x4   ]

           [ −2 ]      [ −3/2 ]
    x = x2 [  1 ] + x4 [   0  ]
           [  0 ]      [ −3/2 ]
           [  0 ]      [   1  ]

where x2 and x4 can assume any value.



This solution may also be interpreted geometrically in R4 . Introduce two vectors

v1 = ⟨−2, 1, 0, 0⟩
v2 = ⟨−3/2, 0, −3/2, 1⟩

in R4 . Note that neither of these vectors is a multiple of the other. Hence, we may
think of them as spanning a (2-dimensional) plane in R4 . Putting s1 = x2 and
s2 = x4 , we may express the general solution vector as

x = s1 v1 + s2 v2 ,

so the solution set of the system (164) may be identified with the plane spanned by
{v1 , v2 }.

Make sure you understand the procedure used in the above examples to express
the general solution vector x entirely in terms of the free variables. We shall use it
quite generally.
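
For a homogeneous right hand side the same machinery applies. Running the
general_solution sketch given after Example 198 on the system of Example 199
(this snippet assumes that earlier sketch is in scope) returns a zero particular
solution together with the two basic solutions:

    A = [[1, 2, -1, 0],
         [1, 2, 1, 3],
         [2, 4, 0, 3]]
    part, basics = general_solution(A, [0, 0, 0])
    # part   -> [0, 0, 0, 0]
    # basics -> [[-2, 1, 0, 0], [-3/2, 0, -3/2, 1]], i.e. v1 and v2.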

Any system of equations with real coefficients may be interpreted as defining a
locus in Rn , and studying the structure—in particular, the dimensionality—of such
a locus is something which will concern us later.

Example 200 Consider

    [  1  2 ]          [  1 ]
    [  1  0 ] [ x1 ]   [  5 ]
    [ −1  1 ] [ x2 ] = [ −7 ] .
    [  2  0 ]          [ 10 ]
Reducing the augmented matrix yields

    [  1  2 |  1 ]     [ 1  2 |  1 ]     [ 1  2 | 1 ]
    [  1  0 |  5 ]  →  [ 0 −2 |  4 ]  →  [ 0 −2 | 4 ]
    [ −1  1 | −7 ]     [ 0  3 | −6 ]     [ 0  0 | 0 ]
    [  2  0 | 10 ]     [ 0 −4 |  8 ]     [ 0  0 | 0 ]

                       [ 1  2 |  1 ]     [ 1  0 |  5 ]
                    →  [ 0  1 | −2 ]  →  [ 0  1 | −2 ]
                       [ 0  0 |  0 ]     [ 0  0 |  0 ]
                       [ 0  0 |  0 ]     [ 0  0 |  0 ]

which is equivalent to

x1 = 5
x2 = −2.

Thus the unique solution vector is

    x = [  5 ]
        [ −2 ] .

These examples and the preceding discussion lead us to certain conclusions about
a system of the form
Ax = b
where A is an m × n matrix, x is an n × 1 column vector of unknowns, and b is an
m × 1 column vector that is given.

The number r of pivots of A is called the rank of A, and clearly it plays a crucial
role. It is the same as the number of non-zero rows at the end of the Gauss-Jordan
reduction since there is exactly one pivot in each non-zero row. The rank is certainly
not greater than either the number of rows m or the number of columns n of A.

If m = n, i.e., A is a square matrix, then A is non-singular when its rank is n and
it is singular when its rank is smaller than n.

More generally for an m × n matrix A, if the rank r is smaller than the number
of rows m, then there are m × 1 column vectors b such that the system Ax = b
does not have any solutions. The argument is basically the same as for the case of
a singular square matrix. Transform A by a sequence of elementary row operations
to a matrix A′ with its last row consisting of zeroes, choose b′ so that A′x = b′ is
inconsistent, and reverse the operations to find an inconsistent Ax = b.

If for a given b, the system Ax = b does have solutions, then the unknowns
x1 , x2 , . . . , xn may be partitioned into two sets: r bound unknowns and n − r free
unknowns. The bound unknowns are expressed in terms of the free unknowns. The
number n − r of free unknowns is sometimes called the nullity of the matrix A. If
the nullity n − r > 0, i.e., n > r, then (if there are any solutions at all) there are
infinitely many solutions.
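
In code, the rank and nullity fall straight out of the pivot list; a short sketch,
again assuming the rref helper defined after Example 198 is in scope:

    A = [[1, 2, -1, 0],
         [1, 2, 1, 3],
         [2, 4, 0, 3]]
    _, pivots = rref(A)
    r = len(pivots)           # the rank: here r = 2
    nullity = len(A[0]) - r   # n - r free unknowns: here 4 - 2 = 2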

Systems of the form


Ax = 0
are called homogeneous. Example 199 is a homogeneous system. Gauss-Jordan
reduction of a homogeneous system always succeeds since the vector b′ obtained
from b = 0 is also zero. If m = n, i.e., the matrix is square, and A is non-singular,
the only solution is 0, but if A is singular, i.e., r < n, then there are definitely
non-zero solutions since there are some free unknowns which can be assigned non-
zero values. This rank argument works for any m and n: if r < n, then there
are definitely non-zero solutions for the homogeneous system Ax = 0. One special
case of interest is m < n. Since r ≤ m, we must have r < n in that case. That
leads to the following important principle: a homogeneous system of linear algebraic
equations for which there are more unknowns than equations always has some non-
trivial solutions.

Pseudo-inverses In some applications, one needs to try to find ‘inverses’ of non-
square matrices. Thus, if A is an m × n matrix, one might need to find an n × m
matrix A′ such that

AA′ = I,  the m × m identity.

Such an A′ would be called a right pseudo-inverse. Similarly, an n × m matrix A″
such that

A″A = I,  the n × n identity

is called a left pseudo-inverse.

If m > n, i.e., A has more rows than columns, then no right pseudo-inverse is
possible. For, suppose we could find an n × m matrix A′ such that AA′ = I (the
m × m identity matrix). Then for any m × 1 column vector b, x = A′b is a solution
of Ax = b since

Ax = A(A′b) = (AA′)b = Ib = b.
On the other hand, we know that since m > n ≥ r, there is at least one b such that
Ax = b does not have a solution.

On the other hand, if m < n and the rank of A is m (which is as large as it can get
in any case), then it is always possible to find a right pseudo-inverse. To see this,
let

    X = [ x11  x12  . . .  x1m ]
        [ x21  x22  . . .  x2m ]
        [  ⋮    ⋮           ⋮  ]
        [ xn1  xn2  . . .  xnm ]
and consider the matrix equation

AX = I.

It may be viewed as m separate equations of the form

         [ 1 ]         [ 0 ]                  [ 0 ]
    Ax = [ 0 ] ,  Ax = [ 1 ] ,  . . . ,  Ax = [ 0 ] ,
         [ ⋮ ]         [ ⋮ ]                  [ ⋮ ]
         [ 0 ]         [ 0 ]                  [ 1 ]

one for each column of I. Since r = m, each of these equations has a solution. (In
fact it will generally have infinitely many solutions.)
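
Here is a hedged sketch of this column-by-column construction, reusing the
general_solution helper defined after Example 198; the 2 × 3 matrix is our own
illustration, not one from the text:

    A = [[1, 0, 1],
         [0, 1, 1]]                   # m = 2, n = 3, rank 2
    cols = []
    for j in range(2):                # one system per column of I
        e = [1 if i == j else 0 for i in range(2)]
        part, basics = general_solution(A, e)
        cols.append(part)             # any solution of Ax = e would do
    # assemble the n x m right pseudo-inverse with these as columns
    Aprime = [[cols[j][i] for j in range(2)] for i in range(3)]
    # here Aprime = [[1, 0], [0, 1], [0, 0]] and A*Aprime = I; adding
    # multiples of the basic solutions to the columns produces the
    # infinitely many other right pseudo-inverses.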

Exercises for 10.6.

1. In each of the following cases, apply the Gauss-Jordan reduction process to
find a general solution, if one exists. As in the text, the answer should express
the general solution x as a ‘particular solution’ (possibly zero) plus a linear
combination of ‘basic solutions’ with the free unknowns (if any) as coefficients.
    (a) [  1 −6 −4 ] [ x1 ]   [ −3 ]
        [  3 −8 −7 ] [ x2 ] = [ −5 ] .
        [ −2  2  3 ] [ x3 ]   [  2 ]

    (b) [ 1  2 ]          [ 1 ]
        [ 3  1 ] [ x1 ]   [ 2 ]
        [ 4  3 ] [ x2 ] = [ 3 ] .
        [ 2 −1 ]          [ 1 ]
    (c) [ 1 −2  2  1 ] [ x1 ]   [  6 ]
        [ 1 −2  1  2 ] [ x2 ] = [  4 ]
        [ 3 −6  4  5 ] [ x3 ]   [ 14 ]
        [ 1 −2  3  0 ] [ x4 ]   [  8 ] .

2. Find a general solution vector of the system Ax = 0 where

    (a) A = [  1  0  1  2 ]      (b) A = [ 1  3  4  0  2 ]
            [  2 −1  1  0 ]              [ 2  7  6  1  1 ]
            [ −1  4 −1 −2 ]              [ 4 13 14  1  3 ]

3. What is the rank of the coefficient matrix for each of the matrices in the
previous problem?

4. What is the rank of each of the following matrices?

    [ 1  1  1 ]      [ 1  2  3  4 ]      [ 1 ]
    [ 1  2  3 ] ,    [ 0  0  0  0 ] ,    [ 2 ]
    [ 4  5  6 ]      [ 0  0  0  0 ]      [ 3 ]
                                         [ 4 ]

5. Let A be an m × n matrix with m < n, and let r be its rank. Which of the
following is always true, sometimes true, never true?
(a) r ≤ m < n. (b) m < r < n. (c) r = m. (d) r = n. (e) r < m. (f) r = 0.

6. How do you think the rank of a product AB compares to the rank of A?
Is the former rank always ≤, ≥, or = the latter rank? Try some examples,
make a conjecture, and see if you can prove it. Hint: Look at the number of
rows of zeroes after you reduce A completely to A′. Could further reduction
transform A′B to a matrix with more rows of zeroes?

7. Find a right pseudo-inverse A′ for

    A = [ 1  1  2 ]
        [ 2  1  1 ] .

Note that there are infinitely many answers to this problem. You need only
find one, but if you are ambitious, you can find all of them. Is there a left
pseudo-inverse for A? If there is, find one; if not, explain why not.

10.7 Vector Spaces

Many of the notions we encountered when studying linear differential equations have
interesting analogues in the theory of solutions of systems of algebraic equations.
Both these theories have remarkable similarities to the theory of vectors in two and
three dimensions. We give some examples.

The general solution of a first order linear equation

y′ + a(t)y = b(t)

has the form

y = yp + ch
where yp is one solution, h is a solution of the corresponding homogeneous equation,
and c is a scalar which may assume any value. Similarly, we saw in Example 198 in
the previous section that the general solution of the system of equations had the
form

x = x0 + sv

where x0 = ⟨−3, 0, 2⟩ is one solution (for s = 0), v = ⟨−1, 1, 0⟩, and s is a scalar
which may assume any value. Each of these is reminiscent of the equation of a line
in R3
r = r0 + sv
where r0 is the position vector of a point on the line, v is a vector parallel to the
line, and s is a scalar which may assume any value.

Furthermore, we saw that the general solution of the second order linear homoge-
neous differential equation
y″ + p(t)y′ + q(t)y = 0
has the form
y = c1 y1 + c2 y2
where {y1 , y2 } is a linearly independent pair of solutions and c1 and c2 are two
scalars which may assume any values. Similarly, we saw in Example 199 in the
previous section that the general solution of the system had the form

x = s1 v1 + s2 v2

where v1 = ⟨−2, 1, 0, 0⟩, v2 = ⟨−3/2, 0, −3/2, 1⟩, and s1 and s2 are scalars which
may assume any values. Note that the pair {v1 , v2 } is linearly independent in ex-
actly the same sense that a pair of solutions of a differential equation is linearly
independent, i.e., neither vector is a scalar multiple of the other. Both these situa-
tions are formally similar to what happens for a plane in R3 which passes through
the origin. If {v1 , v2 } is a pair of vectors in the plane, and neither is a multiple of the
other, then a general vector in that plane can be expressed as a linear combination
v = s1 v1 + s2 v2 .

As one studies these different theories, more and more similarities arise. For exam-
ple, we just remarked that the general solution of the differential equation

y″ + p(t)y′ + q(t)y = f (t)

has the form

y = yp + a general solution of the corresponding homogeneous equation

where yp is a particular solution of the inhomogeneous equation. Exactly the same
rule applies to the general solution of the inhomogeneous algebraic system

Ax = b.

Namely, suppose xp is one particular solution of this inhomogeneous system, and x
is any other solution. Then,

Ax = b
Axp = b

and subtraction yields


Ax − Axp = A(x − xp ) = 0,
i.e., z = x − xp is a solution of the homogeneous system Az = 0. Thus

x = xp + z = xp + a general solution of the homogeneous system.

The important point to note is that not only is the conclusion the same, but the
argument used to derive it is also essentially the same.

Whenever mathematicians notice that the same phenomena are observed in differ-
ent contexts, and that the same arguments are used to study those phenomena, they
look for a common way to describe what is happening. One of the major accom-
plishments of the late nineteenth and early twentieth centuries was the realization
among mathematicians that there is a single concept, that of an abstract vector
space, which may be used to study many diverse mathematical phenomena, includ-
ing those mentioned above. They discovered that the common aspects of all such
theories were based on the fact that they share certain operations, and that these
operations, although defined differently in each individual case, all obey common
rules. Any argument which uses only those rules will be valid in all cases.

There are two basic operations which must be present for a collection of objects to
constitute a vector space. (Additional operations may also be present, but for the
moment we ignore them.) First, there should be an operation of addition which
obeys the usual rules you are familiar with for addition of vectors in space. Thus,
addition should be an associative operation, it should obey the commutative law,
and there should be an element called zero (0) which added to any other element
results in the same element. Finally, every element must have a negative which
when added to the original element yields zero.

For example, in Rn , addition x + y is done by adding corresponding entries of the
two n-tuples. The zero element is the n-tuple 0 for which all entries are zero. For
any x in Rn , the negative of x is just the n-tuple −x.

Alternatively, let S denote the set of all solutions of the second order homogeneous
linear differential equation

y″ + p(t)y′ + q(t)y = 0.

In this case, the addition operation y1 +y2 is just the ordinary addition of functions,
and the zero element is the function which is identically zero. The negative of a
function is defined as expected by the rule (−y)(t) = −y(t). Since the equation is
homogeneous, the sum of two solutions is again a solution, the zero function is also
a solution, and the negative of a solution is a solution. If such were not the case,
the set S would not be closed under the operations, so it would not be a vector
space.

To have a vector space, we also need a second operation: we must be able to multiply
objects by scalars. This operation must also obey the usual rules of vector algebra.
Namely, it must satisfy the associative law, distributive laws, and multiplying an
object by the scalar 1 should not change the object.

For example, in Rn , we multiply a vector x by a scalar c in the usual way.

For S, the vector space of solutions of a 2nd order homogeneous linear differential
equation, a function (solution) is multiplied by a scalar by the rule (cy)(t) = cy(t).
Again, since the differential equation is homogeneous, any scalar multiple of a so-
lution is a solution, so the set S is closed under the operation.

In courses in abstract algebra, one studies in detail the list of rules (or axioms)
which govern the operations in a vector space. In particular, one derives all the
usual rules of vector algebra from the properties listed above. In this course, we
shall assume all that has been done, so you may safely manipulate objects in any
vector space just the way you would manipulate ordinary vectors in space. Note the
different levels of abstraction you have seen in this course for the concept ‘vector’.
First a vector was just a quantity in the plane or in space with a magnitude and a
direction. Vectors were added or multiplied by scalars using simple geometric rules.
Later, we introduced the concept of a ‘vector’ in Rn which was an n-tuple of real
numbers. Such ‘vectors’ were added or multiplied by scalars by doing the same to
their components. Finally, we are now considering ‘vectors’ which are functions.
Such ‘vectors’ are added or multiplied by scalars by doing the same to the function
values. As noted above, what is important is not the nature of the individual object
we call a ‘vector’, but the properties of the operations we define on the set of all
such ‘vectors’.

There are many other important examples of vector spaces. Often they are function
spaces, that is, their elements are functions defined on some common domain. Here
are a few examples of such vector spaces.

Let I be a real interval. The set F(I) of all real valued functions defined on I is a
vector space if we use the operations of adding functions and multiplying them by
scalars. The set C(I) of all continuous real valued functions defined on I is also
a vector space because the sum of two continuous functions is continuous and any
scalar multiple of a continuous function is continuous. Similarly, the set C 2 (I) of
all twice differentiable real valued functions defined on I with continuous second
derivatives is a vector space. As mentioned before, the set of all such differentiable
functions which are solutions of a specified second order homogeneous linear differ-
ential equation is a vector space. Finally, the set of all real valued analytic functions
defined on I is also a vector space.

Not every set of ‘vector’ like objects is a vector space. We have already seen several
such examples. For example, the vector equation of a line in R3 has the form

r = r0 + sv

where r is the position vector connecting the origin to a general point on the line, r0
is the position vector of one particular point on the line, and v is a vector parallel to
the line. If the line does not pass through the origin, the sum of two position vectors
with end points on the line won’t be a vector with endpoint on the line. Hence,
the set is not closed under addition. It is also not closed under multiplication by
scalars.

[Figure: two sketches, a line not through the origin and a line through the origin,
each drawn with position vectors from the origin O.]

Similarly, while the set of solutions of the homogeneous first order equation

y′ + a(t)y = 0

is a vector space, the set of solutions of the (inhomogeneous) first order equation

y′ + a(t)y = b(t)

is not a vector space if b(t) is not identically 0. This rule holds in general. The set
of solutions of a homogeneous linear equation (of any kind) is a vector space, but
the set of solutions of an inhomogeneous linear equation is not a vector space.

The Domain of Acceptable Scalars In the previous discussion, we assumed
implicitly that all scalars were real. However, there are circumstances where it
would be more appropriate to allow scalars which are either real or complex. For
example, we know that it is easier to study solutions of a second order linear equation
with constant coefficients if we allow complex valued solutions and use complex
scalar constants when writing out the general solution. Thus, one must specify
in any given case whether one is talking about a real vector space, where the set
of possible scalars is restricted to R, or a complex vector space, where the set of
possible scalars is the larger domain C.

One important complex vector space is the set Cn of all n × 1 column vectors
with complex entries. Another is the set of all complex valued solutions of a given
homogeneous linear differential equation.

One confusing point is that every complex vector space may also be considered to
be a real vector space simply by agreeing to allow only real scalars in any expression
in which a ‘vector’ is multiplied by a scalar. For example, the set C of all complex
numbers may itself be viewed as a real vector space in this way. If we do so, then,
since any complex number a + bi is determined by the pair (a, b) of its real and
imaginary parts, as a real vector space C is essentially the same as R2 .

Subspaces You may have noticed that in many of the examples listed previously, some vector
spaces were subsets of others. If V is a vector space, and W is a non-empty subset,
then we call W a subspace of V if whenever two elements of V are in W so is
their sum and whenever an element of V is in W so is any scalar multiple of that
element. This means that W becomes a vector space in its own right under the
operations it inherits from V.

For V = R3 , most subspaces are what you would expect. Any line passing through
the origin yields a subspace, but a line which does not pass through the origin does
not. Similarly, any plane passing through the origin yields a subspace but a plane
which does not pass through the origin does not. A less obvious subspace is the
zero subspace which consists of the single vector 0. Also, in general mathematical
usage, any set is considered to be a subset of itself, so R3 is also a subspace of R3 .

For any vector space V, the zero subspace and the whole vector space are subspaces
of V.

For a less obvious example, let C 2 (I) be the vector space of all functions, defined
on the real interval I, with continuous second derivatives. Then the set of solutions
of the homogeneous differential equation

y″ + p(t)y′ + q(t)y = 0

is a subspace. In essence, we know this from earlier work, but let’s derive it again
by a general argument. Consider the operator

    L = d²/dt² + p(t) d/dt + q(t)

acting on twice differentiable functions y(t), i.e., let

L(y) = y″ + p(t)y′ + q(t)y.
With this notation, the differential equation may be written L(y) = 0.

L is what we call a linear operator because it obeys the following rules:

L(y1 + y2 ) = L(y1 ) + L(y2 )  for functions y1 , y2                        (165)
L(cy) = cL(y)  for a function y and a scalar c.                             (166)

The fact that the set of solutions of L(y) = 0 is a subspace is a direct consequence
of these rules. Namely, if L(y1 ) = 0 and L(y2 ) = 0, then rule (165) immediately
gives L(y1 + y2 ) = L(y1 ) + L(y2 ) = 0. Similarly, if L(y) = 0, rule (166) immediately
gives L(cy) = cL(y) = 0 for any scalar c.

This same argument would work for any linear operator, and we shall see that one
of the most common ways to obtain a subspace is as the solution set or null space
of a linear operator. For example, the solution set of a homogeneous system of
algebraic equations
Ax = 0
is also a subspace because it is the null space of an appropriate linear operator. (See
the Exercises.)
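
A quick numerical check of the two rules for the operator L(x) = Ax, with an
arbitrary matrix of our own choosing (a Python sketch, not part of the text):

    A = [[1, 2, 0],
         [0, 1, -1]]

    def L(x):
        # the matrix-vector product L(x) = Ax
        return [sum(a * xi for a, xi in zip(row, x)) for row in A]

    x, y, c = [1, 2, 3], [4, 0, -2], 5
    s = [a + b for a, b in zip(x, y)]
    assert L(s) == [a + b for a, b in zip(L(x), L(y))]       # rule (165)
    assert L([c * xi for xi in x]) == [c * v for v in L(x)]  # rule (166)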

Exercises for 10.7.

1. Determine if each of the following subsets of R3 is a vector subspace of R3 .
If it is not a subspace, explain what fails.
In (a), (b), and (c), x denotes the column vector with entries x1 , x2 , x3 .

    (a) The set of all x such that 2x1 − x2 + 4x3 = 0.

    (b) The set of all x such that 2x1 − x2 + 4x3 = 3.

    (c) The set of all x such that x1 ² + x2 ² − x3 ² = 1.

    (d) The set of all x of the form
            [ 1 + 2t ]
        x = [  −3t   ]
            [   2t   ]
        where t is allowed to assume any real value.

    (e) The set of all x of the form
            [ s + 2t  ]
        x = [ 2s − 3t ]
            [ s + 2t  ]
        where s and t are allowed to assume any real values.

2. Which of the following sets of functions is a vector space under the operations
discussed in the section? If not a vector space, what fails?
(a) The set of all polynomial functions of the form f (t) = a0 + a1 t + a2 t² + a3 t³
(of degree ≤ 3).
(b) The set of all polynomial functions with constant term 1.
(c) The set of all continuous, real valued functions f with domain −1 ≤ t ≤ 1
such that f (0) = 0.
(d) The set of all continuous, real valued functions f with domain −1 ≤ t ≤ 1
such that f (0) = 1.
(e) The set of all solutions of the differential equation y″ + 4y′ + 5y = cos t.

3. Show that the set of solutions of the first order equation y′ + a(t)y = b(t) is
a vector space if and only if b(t) is identically 0.

4. This problem is mostly just a matter of translating terminology from one
context to another. Let A be an m × n matrix with real entries. Define an
operator L : Rn → Rm which transforms n-tuples to m-tuples by the rule:
L(x) = Ax.
(a) Show that L is a linear operator as defined in Section 10.7.
(b) Show that the set of solutions of the homogeneous system Ax = 0 is a
subspace of Rn by repeating the argument given in the text that the null
space of the operator L is a subspace.

10.8 Linear Independence, Bases, and Dimension

Let V be a vector space. In general, V will have infinitely many elements, but it is
often possible to specify V in terms of an appropriate finite subset. For example,
we know that the vector space of solutions of a homogeneous second order linear
differential equation consists of all linear combinations

c1 y1 + c2 y2

where {y1 , y2 } is a linearly independent pair of solutions, and c1 , c2 are arbitrary
scalars. We say in this case that the pair {y1 , y2 } is a basis for the space of solutions.
We want to generalize this to ‘bases’ with more than two elements.

As before, let V be any vector space and let {v1 , v2 , . . . , vk } be a non-empty finite
subset of elements of V. Such a set is called linearly independent if no element of
the set can be expressed as a linear combination of the other elements in the set.
For a set {v1 , v2 } with two vectors, this subsumes the previous definition: neither
vector should be a scalar multiple of the other. For a set {v1 , v2 , v3 } with three
elements it means that no relation of any of the following forms is possible:


v1 = a2 v2 + a3 v3
v2 = b1 v1 + b3 v3
v3 = c1 v1 + c2 v2 .
The opposite of ‘linearly independent’ is ‘linearly dependent’.

To get a better hold on the concept, consider the (infinite) set of all linear combi-
nations

    c1 v1 + c2 v2 + · · · + ck vk = ∑ ci vi  (sum over i = 1, . . . , k)        (167)
where each coefficient ci is allowed to range arbitrarily over the domain of scalars.
It is not very hard to see that this infinite set is a subspace of V. It is called
the subspace spanned by {v1 , v2 , . . . , vk }. You can think of the elements of this
subspace as forming a general solution of some (homogeneous) problem. We would
normally want to be sure that there aren’t any redundant elements in the spanning
set {v1 , v2 , . . . , vk }. If one vi could be expressed linearly in terms of the others, that
expression for vi could be substituted in (167), and the result could be simplified
by combining terms. We could thereby omit vi and express the general element in
(167) as a linear combination of the other elements in the spanning set.

Example 201 Consider the set consisting of the following four vectors in R4 .

    v1 = [ 1 ] ,  v2 = [  1 ] ,  v3 = [ 0 ] ,  v4 = [  0 ] .
         [ 0 ]        [ −1 ]         [ 1 ]         [  1 ]
         [ 0 ]        [  0 ]         [ 0 ]         [  1 ]
         [ 0 ]        [  0 ]         [ 0 ]         [ −1 ]
This set is not linearly independent since
v2 = v1 − v3 . (168)
Thus, any element of the subspace spanned by {v1 , v2 , v3 , v4 } can be rewritten
c1 v1 + c2 v2 + c3 v3 + c4 v4 = c1 v1 + c2 (v1 − v3 ) + c3 v3 + c4 v4
= (c1 + c2 )v1 + (c3 − c2 )v3 + c4 v4
= c′1 v1 + c′3 v3 + c4 v4 .

On the other hand, if we delete the element v2 , the set consisting of the vectors

    v1 = [ 1 ] ,  v3 = [ 0 ] ,  v4 = [  0 ]
         [ 0 ]        [ 1 ]         [  1 ]
         [ 0 ]        [ 0 ]         [  1 ]
         [ 0 ]        [ 0 ]         [ −1 ]
is linearly independent. To see this, just look carefully at the pattern of zeroes. For
example, v1 has first component 1, and the other two have first component 0, so v1
could not be a linear combination of v3 and v4 . Similar arguments eliminate the
other two possible relations. (What are those arguments?)

In the above example, we could just as well have written


v1 = v2 + v3
and eliminated v1 from the spanning set without loss. In general, there are many
possible ways to delete redundant vectors from a spanning set.

A linearly independent subset {v1 , v2 , . . . , vn } of a vector space V which also spans
V is called a basis for V. Many of the algorithms we have for solving homogeneous
problems yield bases for the solution space. For example, as noted above, any
problems yield bases for the solution space. For example, as noted above, any
linearly independent pair of solutions of a second order homogeneous linear equation
is a basis for its solution space. Much of what we do later will be designed to
generalize that to higher order differential equations and to systems of differential
equations.

Let A be an m × n matrix, and let W be the solution space of the homogeneous
system

Ax = 0.
(To be definite, assume the matrix has real entries and that W is the solution
subspace of Rn . However, the corresponding theory for a complex matrix with
solution subspace in Cn is basically the same.) The Gauss-Jordan reduction method
always generates a basis for W. We illustrate this with an example. (You should
also go back and look at Example 199 in Section 10.6.)

Example 202 Consider

                        [ x1 ]
    [ 1  1  0  3 −1 ]   [ x2 ]
    [ 1  1  1  2  1 ]   [ x3 ] = 0.
    [ 2  2  1  5  0 ]   [ x4 ]
                        [ x5 ]
To solve it, apply Gauss-Jordan reduction

    [ 1  1  0  3 −1 | 0 ]     [ 1  1  0  3 −1 | 0 ]     [ 1  1  0  3 −1 | 0 ]
    [ 1  1  1  2  1 | 0 ]  →  [ 0  0  1 −1  2 | 0 ]  →  [ 0  0  1 −1  2 | 0 ] .
    [ 2  2  1  5  0 | 0 ]     [ 0  0  1 −1  2 | 0 ]     [ 0  0  0  0  0 | 0 ]
The last matrix is fully reduced with pivots in the 1, 1 and 2, 3 positions. The
corresponding system is

    x1 + x2 + 3x4 − x5 = 0
    x3 − x4 + 2x5 = 0

with x1 , x3 bound and x2 , x4 , and x5 free. Expressing the bound variables in terms
of the free variables yields

    x1 = −x2 − 3x4 + x5
    x3 = x4 − 2x5 .

The general solution vector, when expressed in terms of the free variables, is

        [ x1 ]   [ −x2 − 3x4 + x5 ]   [ −x2 ]   [ −3x4 ]   [   x5 ]
        [ x2 ]   [       x2       ]   [  x2 ]   [    0  ]   [    0 ]
    x = [ x3 ] = [    x4 − 2x5    ] = [   0 ] + [   x4  ] + [ −2x5 ]
        [ x4 ]   [       x4       ]   [   0 ]   [   x4  ]   [    0 ]
        [ x5 ]   [       x5       ]   [   0 ]   [    0  ]   [   x5 ]

           [ −1 ]      [ −3 ]      [  1 ]
           [  1 ]      [  0 ]      [  0 ]
      = x2 [  0 ] + x4 [  1 ] + x5 [ −2 ] .
           [  0 ]      [  1 ]      [  0 ]
           [  0 ]      [  0 ]      [  1 ]

If we put

         [ −1 ]        [ −3 ]        [  1 ]
         [  1 ]        [  0 ]        [  0 ]
    v1 = [  0 ] , v2 = [  1 ] , v3 = [ −2 ] ,
         [  0 ]        [  1 ]        [  0 ]
         [  0 ]        [  0 ]        [  1 ]
and c1 = x2 , c2 = x4 , and c3 = x5 , then the general solution takes the form

x = c1 v1 + c2 v2 + c3 v3

where the scalars c1 , c2 , c3 (being new names for the free variables) can assume
any values. Also, the set {v1 , v2 , v3 } is linearly independent. This is clear for the
following reason. Each vector is associated with one of the free variables and has a
1 in that position where the other vectors necessarily have zeroes. Hence, none of
the vectors can be linear combinations of the others. It follows that {v1 , v2 , v3 } is
a basis for the solution space.

The above example illustrates all the important aspects of the solution process for
a homogeneous system
Ax = 0.
We state the important facts about the solution without going through the general
proofs since they are just the same as what we did in the example but with a lot
more confusing notation. The general solution has the form

x = c1 v1 + c2 v2 + · · · + ck vk

where v1 , v2 , . . . , vk are basic solutions obtained by successively setting each free
variable equal to 1 and the other free variables equal to zero. c1 , c2 , . . . , ck are just
new names for the free variables. The set {v1 , v2 , . . . , vk } is linearly independent
because of the pattern of 1’s and 0’s at the positions of the free variables, and since
it spans the solution space, it is a basis for the solution space.
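
Running the general_solution sketch from Section 10.6 on the system of Example
202 (assuming that code is in scope) reproduces exactly this list of basic solutions:

    A = [[1, 1, 0, 3, -1],
         [1, 1, 1, 2, 1],
         [2, 2, 1, 5, 0]]
    part, basics = general_solution(A, [0, 0, 0])
    # part   -> [0, 0, 0, 0, 0]
    # basics -> [[-1, 1, 0, 0, 0], [-3, 0, 1, 1, 0], [1, 0, -2, 0, 1]]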

There are some special cases which are a bit confusing. First, suppose there is
only one basic solution v1 . Then, the set {v1 } with one element is indeed a basis.
In fact, in any vector space, the set {v} consisting of a single non-zero vector is
linearly independent. Namely, there are no other vectors in the set which it could
be a linear combination of. In this case, the subspace spanned by {v} just consists
of all multiples cv where c can be any scalar. A much more confusing case is that in
which the spanning set is the empty set, i.e., the set with no elements. (That would
arise, for example, if the zero solution were the unique solution of the homogeneous
system, so there would be no free variables and no basic solutions.) This is dealt with
as follows. First, the empty set is taken to be linearly independent by convention.
Second, again by convention, we take every linear combination of no vectors to be
zero. It follows that the empty set spans the zero subspace {0}, and is a basis for
it. (Can you see why the above conventions imply that the set {0} is not linearly
independent?)

Let V be a vector space. If V has a basis {v1 , v2 , . . . , vn } with n elements, then
we say that V is n-dimensional. That is, the dimension of a vector space is the
number of elements in a basis.

For example, since the solution space of a 2nd order homogeneous linear differential
equation has a basis with two elements, that solution space is 2-dimensional.

Not too surprisingly, the dimension of Rn is n. To see this we note that the set
consisting of the vectors

    e1 = [ 1 ] ,  e2 = [ 0 ] ,  . . . ,  en = [ 0 ] ,
         [ 0 ]        [ 1 ]                  [ 0 ]
         [ ⋮ ]        [ ⋮ ]                  [ ⋮ ]
         [ 0 ]        [ 0 ]                  [ 1 ]
is a basis. For, the set is certainly linearly independent because the pattern of 0’s
and 1’s precludes any dependence relation. It also spans Rn because any vector x
in Rn can be written

        [ x1 ]      [ 1 ]      [ 0 ]              [ 0 ]
        [ x2 ]      [ 0 ]      [ 1 ]              [ 0 ]
    x = [  ⋮ ] = x1 [ ⋮ ] + x2 [ ⋮ ] + · · · + xn [ ⋮ ]
        [ xn ]      [ 0 ]      [ 0 ]              [ 1 ]

      = x1 e1 + x2 e2 + · · · + xn en .
{e1 , e2 , . . . , en } is called the standard basis for Rn . Note that in R3 the vectors
e1 , e2 , and e3 are what we previously called i, j, and k.

In this chapter we have defined the concept dimension only for vector spaces, but
the notion is considerably more general. For example, a plane in R3 should be
considered two dimensional even if it doesn’t pass through the origin. Also, a surface
in R3 , e.g., a sphere or hyperboloid, should also be considered two dimensional.
(People are often confused about curved objects because they seem to extend in
extra dimensions. The point is that if you look at a small part of a surface, it
normally looks like a piece of a plane, so it has the same dimension. Also, as
we have seen, a surface can normally be represented parametrically with only two
parameters.) Mathematicians have developed a very general theory of dimension
which applies to almost any type of set. In cosmology, one envisions the entire
universe as a certain type of four dimensional object. Certain bizarre sets can even
have a fractional dimension, and that concept is useful in what is called ‘chaos’
theory.

Coordinates Let V be a vector space and suppose {v1 , v2 , . . . , vn } is
a linearly independent subset of V. Suppose v is in the subspace spanned by
{v1 , v2 , . . . , vn }, i.e.,
v = c1 v1 + c2 v2 + · · · + cn vn
for appropriate coefficients c1 , c2 , . . . , cn . The coefficients in such a linear combina-
tion are unique. For, suppose we had

v = c1 v1 + c2 v2 + · · · + cn vn = c′1 v1 + c′2 v2 + · · · + c′n vn .

Subtract one expression from the other to obtain

(c1 − c′1 )v1 + (c2 − c′2 )v2 + · · · + (cn − c′n )vn = 0.

We would like to conclude that all these coefficients are zero, i.e., that

c1 = c′1
c2 = c′2
⋮
cn = c′n .

If that were not the case, one of the coefficients would be non-zero, and we could
divide by it and transpose, thus expressing one of the vectors vi as a linear com-
bination of the others. But since the set is linearly independent, we know that is
impossible. Hence, all the coefficients are zero as required.

Suppose {v1 , v2 , . . . , vn } is a basis for V. The above argument shows that any
vector v in V may be expressed uniquely

v = c1 v1 + c2 v2 + · · · + cn vn ,

and the coefficients c1 , c2 , . . . , cn are called the coordinates of the vector v with
respect to the basis {v1 , v2 , . . . , vn }. A convenient way to exhibit the relationship
between a vector and its coordinates is as follows. Put the coefficients ci on the
other side of the basis vectors, and write

                                                           [ c1 ]
    v = v1 c1 + v2 c2 + · · · + vn cn = [ v1 v2 . . . vn ] [ c2 ] .
                                                           [  ⋮ ]
                                                           [ cn ]

The column vector on the right is a bona-fide element of Rn (or of Cn in the case
of complex scalars), but the ‘row vector’ on the left is not really a 1 × n matrix
since its entries are vectors, not scalars.

Example 203 Consider the vector space S of all real solutions of the differential
equation
y″ + k²y = 0.
The solutions y1 = cos kt and y2 = sin kt constitute a linearly independent pair of
solutions, so that gives a basis for S. On the other hand

y = cos(kt + δ)

is also a solution, so it should be expressible as a linear combination of the basis
elements. Indeed, by trigonometry, we have

    y = cos(kt + δ) = cos(kt) cos δ − sin(kt) sin δ
                    = y1 cos δ + y2 (− sin δ)

                    = [ y1  y2 ] [  cos δ ] .
                                [ − sin δ ]

Thus

    [  cos δ ]
    [ − sin δ ]

is a vector in R2 giving the coordinates of cos(kt + δ) with respect to the basis
{y1 , y2 }.
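
In Rn , finding the coordinates of a vector v with respect to a basis is itself just a
system of linear equations: solve [ v1 v2 . . . vn ] c = v for the column c. A small
sketch, with a basis of our own choosing and the general_solution helper from
Section 10.6 assumed in scope:

    v1, v2 = [1, 1], [1, -1]                 # a basis for R2
    v = [3, 1]
    B = [[v1[i], v2[i]] for i in range(2)]   # basis vectors as columns
    coords, basics = general_solution(B, v)
    # coords -> [2, 1] and basics is empty, so v = 2 v1 + 1 v2, uniquely.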

Given a basis for a vector space V, one may think of the elements of the basis as
unit vectors pointing along coordinate axes in V. The coordinates with respect to
the basis then are the coordinates relative to these axes. If one starts (as one does
normally in Rn ) with some specific set of axes, then the axes associated with a
new basis need not be mutually perpendicular, and also the unit of length may be
altered, and we may even have different units of length on each axis.

Invariance of Dimension There is a subtle point involved in the definition of
dimension. The dimension of V is the number of elements in a basis for V, but it
is at least conceivable that two different bases have different numbers of elements.
is at least conceivable that two different bases have different numbers of elements.
If that were the case, V would have two different dimensions, and that does not
square with our idea of how such words should be used.

In fact it can never happen that two different bases have different numbers of
elements. To see this, we shall prove something slightly different. Suppose V has a
basis with m elements. We shall show that

any linearly independent subset of V has at most m elements.

This would suffice for what we want because if we had two bases one with n and the
other with m elements, either could play the role of the basis and the other the role
of the linearly independent set. (Any basis is also linearly independent!) Hence,
on the one hand we would have n ≤ m and on the other hand m ≤ n, whence it
follows that m = n.

Here is the proof of the above assertion about linearly independent subsets (but
you might want to skip it your first time through the subject).

Let {u1 , u2 , . . . , un } be a linearly independent subset. Each ui can be expressed
uniquely in terms of the basis (each sum below runs over j = 1, . . . , m):

                                       [ p11 ]
    u1 = ∑ vj pj1 = [ v1 v2 . . . vm ] [ p21 ]
                                       [  ⋮  ]
                                       [ pm1 ]

                                       [ p12 ]
    u2 = ∑ vj pj2 = [ v1 v2 . . . vm ] [ p22 ]
                                       [  ⋮  ]
                                       [ pm2 ]
    ⋮
                                       [ p1n ]
    un = ∑ vj pjn = [ v1 v2 . . . vm ] [ p2n ]
                                       [  ⋮  ]
                                       [ pmn ]
Each of these equations represents one column of the complete matrix equation

                                            [ p11  p12  . . .  p1n ]
    [ u1 u2 . . . un ] = [ v1 v2 . . . vm ] [ p21  p22  . . .  p2n ] .
                                            [  ⋮    ⋮           ⋮  ]
                                            [ pm1  pm2  . . .  pmn ]
Note that the matrix on the right is an m × n matrix. Consider the homogeneous
system

    [ p11  p12  . . .  p1n ] [ x1 ]
    [ p21  p22  . . .  p2n ] [ x2 ] = 0.
    [  ⋮    ⋮           ⋮  ] [  ⋮ ]
    [ pm1  pm2  . . .  pmn ] [ xn ]

Assume, contrary to what we hope, that n > m. Then, we know by the theory of
homogeneous linear systems, that there is a non-trivial solution to this system, i.e.,
one with at least one xi not zero. Then

                       [ x1 ]
    [ u1 u2 . . . un ] [ x2 ] =
                       [  ⋮ ]
                       [ xn ]

                       [ p11  p12  . . .  p1n ] [ x1 ]
    [ v1 v2 . . . vm ] [ p21  p22  . . .  p2n ] [ x2 ] = 0.
                       [  ⋮    ⋮           ⋮  ] [  ⋮ ]
                       [ pm1  pm2  . . .  pmn ] [ xn ]

Thus, 0 has a non-trivial representation

0 = u1 x1 + u2 x2 + · · · + un xn

which we know can never happen for a linearly independent set. Thus, the only
way out of this contradiction is to believe that n ≤ m as claimed.

One consequence of this argument is the following fact. The dimension of a subspace
cannot be larger than the dimension of the whole vector space. The reasoning is that
a basis for a subspace is necessarily a linearly independent set and so it cannot have
more elements than the dimension of the whole vector space.

It is important to note that two different bases of the same vector space might have
no elements whatsoever in common. All we can be sure of is that they have the
same size.

Infinite Dimensional Vector Spaces Not every vector space has a finite basis.
We shall not prove it rigorously here, but it is fairly clear that the vector space of
all continuous functions C(I) cannot have a finite basis {f1 , f2 , . . . , fn }. For, if it
did, then that would mean any continuous function f on I could be written as a
finite linear combination

f (t) = c1 f1 (t) + c2 f2 (t) + · · · + cn fn (t),

and it is not plausible that any finite set of continuous functions could capture the
full range of possibilities of continuous functions in this way.

If a vector space has a finite basis, we say that it is finite dimensional; otherwise
we say it is infinite dimensional.

Most interesting function spaces are infinite dimensional. Fortunately, the subspaces
of these spaces which are solutions of homogeneous linear differential equations are
finite dimensional, and these are what we shall spend the next chapter studying.

We won’t talk much about infinite dimensional vector spaces in this course, but you
will see them again in your course on Fourier series and partial differential equations,
and you will also encounter such spaces when you study quantum mechanics.

Exercises for 10.8.

1. In each of the following cases, determine if the indicated set is linearly
independent or not.

        [ 1 ]   [ 0 ]   [ 1 ]
    (a) [ 2 ] , [ 1 ] , [ 1 ] .
        [ 3 ]   [ 1 ]   [ 2 ]

        [ 1 ]   [ 0 ]   [ 0 ]
    (b) [ 0 ] , [ 1 ] , [ 0 ] .
        [ 0 ]   [ 0 ]   [ 1 ]
        [ 0 ]   [ 0 ]   [ 0 ]
2. Determine if each of the following sets of functions is linearly independent.
(a) {1, t, t2 , t3 }.
(b) {e−t , e2t , et }.
(c) {cos t sin t, sin 2t, cos 2t}.

3. Let V be the vector space of all polynomials of degree at most 2. (You may
assume it is known that V is a vector space.)
(a) Show that {1, t, t2 } is a basis for V so that the dimension of V is 3.
(b) The first three Legendre polynomials (solutions of Legendre’s equation
(1 − t²)y″ − 2ty′ + α(α + 1)y = 0 for α = 0, 1, 2) are P0 (t) = 1, P1 (t) = t, and
P2 (t) = (1/2)(3t² − 1). Show that {P0 , P1 , P2 } is a linearly independent set of
functions. It follows that it is a basis. Why?

4. Find a basis for the solution space of the differential equation y″ − k²y = 0.

5. Find a basis for the subspace of R4 consisting of solutions of the homogeneous
system

    [ 1 −1  1 −1 ]
    [ 1  2 −1  1 ] x = 0.
    [ 0  3 −2  2 ]

6. Find the dimension of the solution space of Ax = 0 in each of the following
cases. (See the Exercises for Section 10.6.)

    (a) A = [  1  0  1  2 ]      (b) A = [ 1  3  4  0  2 ]
            [  2 −1  1  0 ]              [ 2  7  6  1  1 ]
            [ −1  4 −1 −2 ]              [ 4 13 14  1  3 ]

7. Show that a set of vectors {v1 , v2 , . . . , vn } is linearly independent if and only
if the equation

c1 v1 + c2 v2 + · · · + cn vn = 0

has only the solution c1 = c2 = · · · = cn = 0.

8. Let {v1 , v2 , . . . , vn } be a subset of a vector space V. Show that the set is
linearly independent if and only if the equation

0 = c1 v1 + c2 v2 + · · · + cn vn

has only the trivial solution, i.e., all the coefficients c1 = c2 = · · · = cn = 0.


This characterization is very convenient to use when proving a set is linearly
independent. It is often taken as the definition of linear independence in books
on linear algebra.

9. (Optional.) It is assumed implicitly at various points in our development that
interesting vector spaces do in fact have bases. In most cases, we have an
explicit method for constructing a basis, but this is not always possible.
(a) Let V be a finite dimensional vector space with basis {v1 , v2 , . . . , vn },
and let W be a subspace. Show that W has a finite basis. Hint. Construct
a sequence of elements in W as follows. Start by choosing w1 ≠ 0 in W.
Assume w1 , w2 , . . . , wk have been chosen in W so that {w1 , w2 , . . . , wk } is
a linearly independent set. Show that either this set is a basis for W or it
is possible to choose wk+1 in W such that {w1 , w2 , . . . , wk , wk+1 } is linearly
independent. (This is a bit harder than you might think.) This process can’t
go on forever. Look at the discussion of invariance of dimension to see why
not.
Note that the argument won’t work if W is the zero subspace. What is the
basis in that case?
(b) Use part (a) to conclude that any subspace of Rn (or Cn in the complex
case) has a finite basis.

10. The set of all infinite sequences

x = (x1 , x2 , . . . , xn , . . . )

forms a vector space if two sequences are added by adding corresponding
entries and a sequence is multiplied by a scalar by multiplying each entry by
that scalar. If the scalars are assumed to be real, it would be appropriate to
denote this vector space R∞ . Let ei be the vector with xi = 1 and all other
entries zero.
(a) Show that the set {e1 , e2 , . . . , en } of the first n of these is a linearly
independent set for each n. Thus there is no upper bound on the size of a
linearly independent subset of R∞ .
(b) Does the set of all possible ei span R∞ ? Explain.

11. (Optional) We know from our previous work that {eikt , e−ikt } is a basis for
the set of complex valued solutions of the differential equation y″ + k²y = 0.
However, {cos kt, sin kt} is also a basis for that complex vector space. What
are the coordinates of eikt and e−ikt with respect to the second basis? What
are the coordinates of cos kt and sin kt with respect to the first basis? In each
case express the answers as column vectors in C2 .

10.9 Calculations in Rn or Cn

Let {v1 , v2 , . . . , vk } be a collection of vectors in Rn (or Cn in the case of complex
scalars). It is often useful to have a way to pick out a linearly independent subset
which spans the same subspace, i.e., which is a basis for that subspace. The basic
idea (no pun intended) is to throw away redundant vectors until that is no longer
possible, but there is a systematic way to do this all at once. Since the vectors
vi are elements of Rn , each may be realized as an n × 1 column vector. Put these
vectors together to form an n × k matrix
    A = [ v1 v2 . . . vk ] .
(This is the same notation we used in the previous section, but now since the vi are
column vectors, rather than elements of some abstract vector space, we really do
get a matrix.) To find a basis, apply Gaussian reduction to the matrix A, and pick
out the columns of A which in the transformed reduced matrix end up with pivots.

Example 204 Let

    v1 = [ 1 ] ,  v2 = [ 2 ] ,  v3 = [ −1 ] ,  v4 = [ 0 ] .
         [ 0 ]        [ 2 ]        [  1 ]         [ 1 ]
         [ 1 ]        [ 4 ]        [  0 ]         [ 1 ]
         [ 1 ]        [ 0 ]        [ −2 ]         [ 0 ]
Form the matrix A with these columns and apply Gaussian reduction

    [ 1  2 −1  0 ]     [ 1  2 −1  0 ]     [ 1  2 −1  0 ]     [ 1  2 −1  0 ]
    [ 0  2  1  1 ]  →  [ 0  2  1  1 ]  →  [ 0  2  1  1 ]  →  [ 0  2  1  1 ] .
    [ 1  4  0  1 ]     [ 0  2  1  1 ]     [ 0  0  0  0 ]     [ 0  0  0  1 ]
    [ 1  0 −2  0 ]     [ 0 −2 −1  0 ]     [ 0  0  0  1 ]     [ 0  0  0  0 ]

This completes the Gaussian reduction, and the pivots are in the first, second, and
fourth columns. Hence, the vectors
    v1 = [ 1 ] ,  v2 = [ 2 ] ,  v4 = [ 0 ]
         [ 0 ]        [ 2 ]        [ 1 ]
         [ 1 ]        [ 4 ]        [ 1 ]
         [ 1 ]        [ 0 ]        [ 0 ]

form a basis for the subspace spanned by {v1 , v2 , v3 , v4 }.
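
A sketch of this pivot-column algorithm in Python, reusing the rref helper from
Section 10.6 (the arrangement below is our own):

    vs = [[1, 0, 1, 1], [2, 2, 4, 0], [-1, 1, 0, -2], [0, 1, 1, 0]]
    A = [[v[i] for v in vs] for i in range(4)]   # the vi as columns
    _, pivots = rref(A)
    basis = [vs[j] for j in pivots]
    # pivots -> [0, 1, 3]: columns 1, 2, and 4 acquire pivots, so the
    # algorithm keeps {v1, v2, v4}, as in Example 204.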

Let’s look more closely at this example to see why the subset is linearly independent
and also spans the same subspace as the original set. The proof that the algorithm
works in the general case is more complicated to write down but just elaborates the
ideas exhibited in the example. Consider the homogeneous system Ax = 0. This
may also be written

                           [ x1 ]
    Ax = [ v1 v2 v3 v4 ]   [ x2 ]  = v1 x1 + v2 x2 + v3 x3 + v4 x4 = 0.
                           [ x3 ]
                           [ x4 ]

In the general solution, x1 , x2 , and x4 will be bound variables (from the pivot
positions) and x3 will be free. That means we can set x3 equal to anything, say
x3 = −1 and the other variables will be determined. For this choice, the relation
becomes
v1 x1 + v2 x2 − v3 + v4 x4 = 0
which may be rewritten
v3 = x1 v1 + x2 v2 + x4 v4 .
Thus, v3 is redundant and may be eliminated from the set without changing the
subspace spanned by the set. On the other hand, the set {v1 , v2 , v4 } is linearly
independent, since if we were to apply Gaussian reduction to the matrix
    A′ = [ v1 v2 v4 ]

the reduced matrix would have a pivot in every column, i.e., it would have rank 3.
Thus, the system
                     [ x1 ]
    [ v1 v2 v4 ]     [ x2 ] = v1 x1 + v2 x2 + v4 x4 = 0
                     [ x4 ]

has only the trivial solution. That means that no one of the three vectors can be
expressed as a linear combination of the other two. For example, if v2 = c1 v1 +c4 v4 ,
we have
v1 c1 + v2 (−1) + v4 c4 = 0.
It follows that the set is linearly independent.

Column Space and Row Space Let A be an m × n matrix. Then the columns v1 , v2 , . . . , vn of A are vectors in
Rm (or Cm in the complex case), and {v1 , v2 , . . . , vn } spans a subspace of Rm
called the column space of A. The column space plays a role in the theory of
inhomogeneous systems Ax = b in the following way. A vector b is in the column
space if and only if it is expressible as a linear combination
                                                           [ x1 ]
    b = v1 x1 + v2 x2 + · · · + vn xn = [ v1 v2 . . . vn ] [ x2 ] = Ax.
                                                           [  ⋮ ]
                                                           [ xn ]

Thus, the column space of A consists of all vectors b in Rm for which the system
Ax = b has a solution.

Note that the method outlined in the beginning of this section gives a basis for the
column space, and the number of elements in this basis is the rank of A. (The rank
is the number of pivots!) Hence, the rank of an m × n matrix A is the dimension
of its column space.

There is a similar concept for rows; the row space of an m × n matrix A is the
subspace of Rn spanned by the rows of A. It is not hard to see that the dimension
of the row space of A is also the rank of A. For, since each row operation is
reversible, applying a row operation does not change the subspace spanned by the
rows. Hence, the row space of the matrix A0 obtained by Gauss-Jordan reduction
from A is the same as the row space of A. However, the set of non-zero rows of the
reduced matrix is a basis for this subspace. To see this, note first that it certainly
spans (since leaving out zero rows doesn’t cost us anything). Moreover, it is also
a linearly independent set because each non-zero row has a 1 in a pivot position
where all the other rows are zero.

The fact that both the column space and the row space have the same dimension
is sometimes expressed by saying “the column rank equals the row rank”.

The column space also has an abstract interpretation. Consider the linear operator
L : Rn → Rm defined by L(x) = Ax. The image of this operator is defined to be
the set of all vectors b in Rm of the form b = L(x) for some x in Rn . Thus, the
image of L is just the set of b = Ax, which by the above reasoning is the column
space of A, and its dimension is the rank r. On the other hand, you should recall
that the dimension of the null space of L is the number of basic solutions, i.e., n − r.
Since these add up to n, we have

dim Image of L + dim Null Space of L = dim Domain of L.
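This bookkeeping is easy to check by machine. The following is a minimal sketch, again assuming Python with the sympy library: columnspace() and nullspace() return bases for the image and the null space, and their sizes must add up to the number of columns.

\begin{verbatim}
# A sketch assuming Python with the sympy library.
from sympy import Matrix

A = Matrix([[1, 2, -1, 0],
            [0, 2,  1, 1],
            [1, 4,  0, 1],
            [1, 0, -2, 0]])

image_basis = A.columnspace()    # basis for the image of L
null_basis  = A.nullspace()      # basic solutions of Ax = 0
# r + (n - r) = n
assert len(image_basis) + len(null_basis) == A.cols
\end{verbatim}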

A Note on the Definition of Rank

The rank of A is defined as the number of pivots in the reduced matrix obtained
from A by an appropriate sequence of elementary row operations. Since we can
specify a standard procedure for performing such row operations, that means the
rank is a well defined number. On the other hand, it is natural to wonder what
might happen if A were reduced by an alternative, perhaps less systematic,
sequence of row operations. The above analysis shows that
we would still get the same answer for the rank. Namely, the rank is the dimension
of the column space of A, and that number depends only on the column space itself,
not on any particular basis for it. (Or you could use the same argument using the
row space.)

The rank is also the number of non-zero rows in the reduced matrix, so it follows
that this number does not depend on the particular sequence of row operations used
to reduce A to Gauss-Jordan reduced form. In fact, the entire matrix obtained at
the end (as long as it is in Gauss-Jordan reduced form) depends only on the original
matrix A and not on the particular sequence of row operations used to obtain it.
The proof of this fact is not so easy, and we omit it here.

Exercises for 10.9.

1. Find a subset of the following set of vectors which is a basis for the subspace
   it spans.
\[
\left\{
\begin{bmatrix} 1 \\ 2 \\ 3 \\ 0 \end{bmatrix},
\begin{bmatrix} 3 \\ 0 \\ -3 \\ 2 \end{bmatrix},
\begin{bmatrix} 3 \\ 3 \\ 3 \\ 1 \end{bmatrix},
\begin{bmatrix} 1 \\ -1 \\ -3 \\ 1 \end{bmatrix}
\right\}
\]
 
2. Let
\[
A = \begin{bmatrix} 1 & 0 & 2 & 1 & 1 \\ -1 & 1 & 3 & 0 & 1 \\ 1 & 1 & 7 & 2 & 3 \end{bmatrix}.
\]
(a) Find a basis for the column space of A.
(b) Find a basis for the row space of A.

3. Let
\[
v_1 = \begin{bmatrix} 1 \\ -2 \\ -1 \end{bmatrix}, \quad
v_2 = \begin{bmatrix} 1 \\ 2 \\ 1 \end{bmatrix}.
\]
Find a basis for R3 by finding a third vector v3 such that {v1 , v2 , v3 } is
linearly independent. Hint. You may find an easier way to do it, but the
following method should work. Use the method suggested in Section 9 to pick
out a linearly independent subset from {v1 , v2 , e1 , e2 , e3 }.

4. Let {v1 , v2 , . . . , vk } be a linearly independent subset of Rn . Apply the


method in section 9 to the set {v1 , v2 , . . . , vk , e1 , e2 , . . . , en }. It will necessar-
ily yield a basis for Rn . Why? Show that this basis will include {v1 , v2 , . . . , vk }
as a subset. That is, show that none of the $v_i$ will be eliminated by the process.
Note. If V is any finite dimensional vector space with basis {u1 , u2 , . . . , un }
and {v1 , v2 , . . . , vk } is a linearly independent subset of V, then we may

form another basis for V by adding some of the vectors u1 , u2 , . . . , un to


{v1 , v2 , . . . , vk }. However, since we aren’t necessarily dealing with column
vectors in this case, the proof is a bit more involved.

5. Show that
\[
u_1 = \begin{bmatrix} 1 \\ -1 \end{bmatrix} \quad \text{and} \quad u_2 = \begin{bmatrix} 1 \\ 1 \end{bmatrix}
\]
form a linearly independent pair in $R^2$. It follows that they form a basis for
$R^2$. Why? Find the coordinates of $e_1$ and $e_2$ with respect to this new basis.
Hint. You need to solve
\[
\begin{bmatrix} u_1 & u_2 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = e_1
\quad \text{and} \quad
\begin{bmatrix} u_1 & u_2 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = e_2.
\]

You can solve these simultaneously by solving
\[
\begin{bmatrix} u_1 & u_2 \end{bmatrix} X = I
\]

for an appropriate 2 × 2 matrix X. What does this have to do with inverses?


Chapter 11

Determinants and Eigenvalues

11.1 Homogeneous Linear Systems of Differential Equations

Let A = A(t) denote an n × n matrix with entries aij (t) which are functions defined
and continuous on some real interval a < t < b. In general, these functions may be
complex valued functions. Consider the n × n system of differential equations
\[
\frac{dx}{dt} = A(t)x
\]
where x = x(t) is a vector valued function also defined on a < t < b and taking
values in Cn . (If A(t) happens to have real entries, then we may consider solutions
x = x(t) with values in Rn , but even in that case it is often advantageous to
consider complex valued solutions.) The most interesting case is that in which A is
a constant matrix, and we shall devote almost all our attention to that case.

Example 205 Consider
\[
\frac{dx}{dt} = \begin{bmatrix} 1 & 2 \\ 2 & 1 \end{bmatrix} x
\quad \text{where} \quad
x = \begin{bmatrix} x_1(t) \\ x_2(t) \end{bmatrix}
\]
on the interval −∞ < t < ∞. Note that the entries in the coefficient matrix are
constants.

The set of solutions of such a homogeneous system forms a (complex) vector space.
To see this, consider the operator
\[
L = \frac{d}{dt} - A
\]

which is defined on the vector space of all differentiable vector valued functions. It
is not hard to see that L is a linear operator, and its null space is the desired set of
solutions since
\[
L(x) = 0 \quad \text{means} \quad \frac{dx}{dt} - Ax = 0.
\]
dt

We shall see shortly that this vector space is n-dimensional. Hence, solving the
system amounts to finding n solutions x1 (t), x2 (t), . . . , xn (t) which form a basis for
the solution space. That means that any solution can be written uniquely

x = c1 x1 (t) + c2 x2 (t) + · · · + cn xn (t).

Moreover, if the solution has a specified initial value x(t0 ), then the ci are deter-
mined by solving

\[
x(t_0) = c_1 x_1(t_0) + c_2 x_2(t_0) + \cdots + c_n x_n(t_0).
\]

(If you look closely, you will see that this is in fact a linear system of algebraic
equations with unknowns c1 , c2 , . . . , cn .)

Example 205, continued We shall try to solve
\[
\frac{dx}{dt} = \begin{bmatrix} 1 & 2 \\ 2 & 1 \end{bmatrix} x
\quad \text{given} \quad
x(0) = \begin{bmatrix} 1 \\ -2 \end{bmatrix}.
\]

The first problem is to find a linearly independent pair of solutions. Assume for the
moment that you have a method for generating such solutions, and it tells you to try
\[
x_1 = \begin{bmatrix} e^{3t} \\ e^{3t} \end{bmatrix} = e^{3t}\begin{bmatrix} 1 \\ 1 \end{bmatrix},
\qquad
x_2 = \begin{bmatrix} -e^{-t} \\ e^{-t} \end{bmatrix} = e^{-t}\begin{bmatrix} -1 \\ 1 \end{bmatrix}.
\]
(We shall develop such methods later in this chapter.) It is not hard to see that
these are solutions:
\[
\frac{dx_1}{dt} = 3e^{3t}\begin{bmatrix} 1 \\ 1 \end{bmatrix}
\quad \text{and} \quad
\begin{bmatrix} 1 & 2 \\ 2 & 1 \end{bmatrix} x_1
= e^{3t}\begin{bmatrix} 1 & 2 \\ 2 & 1 \end{bmatrix}\begin{bmatrix} 1 \\ 1 \end{bmatrix}
= e^{3t}\begin{bmatrix} 3 \\ 3 \end{bmatrix},
\]
\[
\frac{dx_2}{dt} = -e^{-t}\begin{bmatrix} -1 \\ 1 \end{bmatrix}
\quad \text{and} \quad
\begin{bmatrix} 1 & 2 \\ 2 & 1 \end{bmatrix} x_2
= e^{-t}\begin{bmatrix} 1 & 2 \\ 2 & 1 \end{bmatrix}\begin{bmatrix} -1 \\ 1 \end{bmatrix}
= e^{-t}\begin{bmatrix} 1 \\ -1 \end{bmatrix}.
\]

Also, x1 (t) and x2 (t) form a linearly independent pair. For, otherwise one would
be a scalar multiple of the other, say x1 (t) = cx2 (t) for all t. Then the same thing
would be true of their first components, e.g., e3t = ce−t for all t, and we know that
to be false.

Thus, $\{x_1, x_2\}$ is a basis for the solution space, so any solution may be expressed
uniquely
\[
x = c_1 x_1(t) + c_2 x_2(t) = x_1(t)c_1 + x_2(t)c_2
= \begin{bmatrix} x_1(t) & x_2(t) \end{bmatrix}\begin{bmatrix} c_1 \\ c_2 \end{bmatrix}.
\]
Thus putting t = 0 yields
\[
x(0) = \begin{bmatrix} x_1(0) & x_2(0) \end{bmatrix}\begin{bmatrix} c_1 \\ c_2 \end{bmatrix}
= \begin{bmatrix} 1 \\ -2 \end{bmatrix}.
\]

However,
\[
x_1(0) = e^{0}\begin{bmatrix} 1 \\ 1 \end{bmatrix} = \begin{bmatrix} 1 \\ 1 \end{bmatrix},
\qquad
x_2(0) = e^{0}\begin{bmatrix} -1 \\ 1 \end{bmatrix} = \begin{bmatrix} -1 \\ 1 \end{bmatrix},
\]

so the above system becomes
\[
\begin{bmatrix} 1 & -1 \\ 1 & 1 \end{bmatrix}\begin{bmatrix} c_1 \\ c_2 \end{bmatrix}
= \begin{bmatrix} 1 \\ -2 \end{bmatrix}.
\]

This system is easy to solve by Gauss-Jordan reduction
\[
\left[\begin{array}{cc|c} 1 & -1 & 1 \\ 1 & 1 & -2 \end{array}\right]
\to
\left[\begin{array}{cc|c} 1 & -1 & 1 \\ 0 & 2 & -3 \end{array}\right]
\to
\left[\begin{array}{cc|c} 1 & 0 & -1/2 \\ 0 & 1 & -3/2 \end{array}\right].
\]

Hence, the solution is $c_1 = -1/2$, $c_2 = -3/2$ and the desired solution of the system
of differential equations is
\[
x = -\frac{1}{2}e^{3t}\begin{bmatrix} 1 \\ 1 \end{bmatrix}
- \frac{3}{2}e^{-t}\begin{bmatrix} -1 \\ 1 \end{bmatrix}
= \begin{bmatrix} -\frac{1}{2}e^{3t} + \frac{3}{2}e^{-t} \\ -\frac{1}{2}e^{3t} - \frac{3}{2}e^{-t} \end{bmatrix}.
\]

The general situation is quite similar. Suppose we have some method for generating
n vector valued functions $x_1(t), x_2(t), \dots, x_n(t)$ which together constitute a linearly
independent set of solutions of the n × n system
\[
\frac{dx}{dt} = Ax.
\]
Then, the general solution takes the form
\[
x = x_1(t)c_1 + x_2(t)c_2 + \cdots + x_n(t)c_n
= \begin{bmatrix} x_1(t) & x_2(t) & \cdots & x_n(t) \end{bmatrix}
\begin{bmatrix} c_1 \\ c_2 \\ \vdots \\ c_n \end{bmatrix}.
\]

To match a given initial condition $x(t_0)$ at $t = t_0$, we have to solve the n × n
algebraic system
\[
\begin{bmatrix} x_1(t_0) & x_2(t_0) & \cdots & x_n(t_0) \end{bmatrix}
\begin{bmatrix} c_1 \\ c_2 \\ \vdots \\ c_n \end{bmatrix}
= x(t_0). \tag{169}
\]

Note that the coefficient matrix is just a specific n × n matrix with scalar entries
and the quantity on the right is a specific n × 1 column vector. (Everything in

sight is evaluated at t0 , so there are no variable quantities in this equation.) We


now have to rely on the results of the previous chapter. Since in principle the given
initial value vector x(t0 ) could be anything whatsoever, we hope that this algebraic
system can be solved for any possible n × 1 column vector on the right. However,
we know this is possible only in the case that the coefficient matrix is non-singular,
i.e., if it has rank n. How can we be sure of this? It turns out to be a consequence
of the basic uniqueness theorem for systems of differential equations.

Existence and Uniqueness for Systems We state the basic theorem for complex
valued functions. There is a corresponding theorem in the real case.

Theorem 11.16 Let A(t) be an n × n complex matrix with entries continuous


functions defined on a real interval a < t < b, and suppose t0 is a point in that
interval. Let x0 be a given n × 1 column vector in Cn . Then there exists a unique
solution x = x(t) of the system

\[
\frac{dx}{dt} = A(t)x
\]
defined on the interval a < t < b and satisfying x(t0 ) = x0 .

We shall not try to prove this theorem in this course.

The uniqueness part of the theorem has the following important consequence.

Corollary 11.17 With the notation as above, suppose $x_1(t), x_2(t), \dots, x_n(t)$ are n
solutions of the system $\frac{dx}{dt} = A(t)x$ on the interval a < t < b. Let $t_0$ be any point in
that interval. Then the set $\{x_1(t), x_2(t), \dots, x_n(t)\}$ is a linearly independent set of
functions if and only if the set $\{x_1(t_0), x_2(t_0), \dots, x_n(t_0)\}$ is a linearly independent
set of column vectors in $C^n$.

Example 205, again The set of functions is
\[
\left\{ e^{3t}\begin{bmatrix} 1 \\ 1 \end{bmatrix},\ e^{-t}\begin{bmatrix} -1 \\ 1 \end{bmatrix} \right\}
\]
and the set of column vectors (obtained by setting $t = t_0 = 0$) is
\[
\left\{ \begin{bmatrix} 1 \\ 1 \end{bmatrix},\ \begin{bmatrix} -1 \\ 1 \end{bmatrix} \right\}.
\]

Proof. We shall prove that one set is linearly dependent if and only if the other is.
First suppose that the set of functions is dependent. That means that one of them
may be expressed as a linear combination of the others. For the sake of argument
suppose the notation is arranged so that

x1 (t) = c2 x2 (t) + · · · + cn xn (t) (170)



holds for all t in the interval. Then, it holds for t = t0 and we have

x1 (t0 ) = c2 x2 (t0 ) + · · · + cn xn (t0 ). (171)

This in turn tells us that the set of column vectors at t0 is dependent.

Suppose on the other hand that the set of column vectors at t0 is dependent. Then
we may assume that there is a relation of the form (171) with appropriate scalars
c2 , . . . , cn . But that means that the solutions x1 (t) and c2 x2 (t) + · · · + cn xn (t) agree
at t = t0 . According to the uniqueness part of Theorem 11.16, this means that they
agree for all t, which means that (170) is true as a relation among functions. This
in turn implies that the set of functions is dependent.

The corollary gives us what we need to show that the n × n matrix
\[
\begin{bmatrix} x_1(t_0) & x_2(t_0) & \cdots & x_n(t_0) \end{bmatrix}
\]

is non-singular. Namely, if {x1 (t), x2 (t), . . . , xn (t)} is a linearly independent set of


solutions, then the columns of the above matrix form a linearly independent set in
Cn . It follows that they form a basis for the column space of the matrix which must
then have rank n, and so it is non-singular. As noted above, that means that we can
always solve the algebraic system of equations (169) specifying initial conditions at
t0 .

Exercises for 11.1.

1. Verify that
\[
x_1 = e^{0t}\begin{bmatrix} 1 \\ 1 \end{bmatrix} = \begin{bmatrix} 1 \\ 1 \end{bmatrix}
\quad \text{and} \quad
x_2 = e^{-2t}\begin{bmatrix} -1 \\ 1 \end{bmatrix}
\]
are solutions of the 2 × 2 system
\[
\frac{dx}{dt} = \begin{bmatrix} -1 & 1 \\ 1 & -1 \end{bmatrix} x.
\]
Find the solution satisfying $x(0) = \begin{bmatrix} 1 \\ 3 \end{bmatrix}$.
2. Show that
\[
x_1 = e^{t}\begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix}, \quad
x_2 = e^{2t}\begin{bmatrix} 1 \\ 2 \\ 4 \end{bmatrix}, \quad
x_3 = e^{-2t}\begin{bmatrix} 1 \\ -2 \\ 4 \end{bmatrix}
\]
are solutions of
\[
\frac{dx}{dt} = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ -4 & 4 & 1 \end{bmatrix} x. \tag{172}
\]
Show that they form a linearly independent set of solutions by finding the
rank of $\begin{bmatrix} x_1(0) & x_2(0) & x_3(0) \end{bmatrix}$. Finally, find the solution of (172) satisfying
\[
x(0) = \begin{bmatrix} 1 \\ 0 \\ 2 \end{bmatrix}.
\]

3. The vector functions
\[
x_1 = e^{2t}\begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix}, \quad
x_2 = e^{3t}\begin{bmatrix} 3 \\ 2 \\ 1 \end{bmatrix}, \quad
x_3 = e^{-t}\begin{bmatrix} 2 \\ 1 \\ 1 \end{bmatrix}
\]

are solutions of a system of the form dx/dt = Ax where A is an appropriate


3 × 3 constant matrix. Do these functions constitute a basis for the solution
space of that system?

11.2 Finding Linearly Independent Solutions

Existence of a Basis The basic existence and uniqueness theorem stated in the
previous section ensures that a system of the form
\[
\frac{dx}{dt} = A(t)x
\]
always has n solutions $u_1(t), u_2(t), \dots, u_n(t)$ which form a basis for the vector space
of all solutions.

To see this, fix t0 in the interval a < t < b, and define the ith solution ui (t) to be
the unique solution satisfying the initial condition

ui (t0 ) = ei

where as before ei is the ith vector in the standard basis for Cn , i.e., it is the ith
column of the n × n identity matrix. Since {e1 , e2 , . . . , en } is an independent set,
it follows from Corollary 11.17 that $\{u_1(t), u_2(t), \dots, u_n(t)\}$ is an independent
set of solutions. It also spans the
subspace of solutions. To see this, let x(t) denote any solution. At t = t0 we have
 
\[
x(t_0) = \begin{bmatrix} x_1(t_0) \\ x_2(t_0) \\ \vdots \\ x_n(t_0) \end{bmatrix}
= x_1(t_0)e_1 + x_2(t_0)e_2 + \cdots + x_n(t_0)e_n
= c_1 u_1(t_0) + c_2 u_2(t_0) + \cdots + c_n u_n(t_0),
\]

where c1 = x1 (t0 ), c2 = x2 (t0 ), . . . , cn = xn (t0 ). Thus, by the uniqueness theorem,


we have for all t in the interval

x(t) = c1 u1 (t) + c2 u2 (t) + · · · + cn un (t).

Case of Constant Coefficients It is reassuring to know that in principle we can
always find a basis for the solution space, but that doesn’t help us find it. In this
section we outline a method for generating a basis for the solutions of
\[
\frac{dx}{dt} = Ax
\]
in case A is constant. This method will work reasonably well for 2 × 2 systems, but we
shall have to develop the theory of n × n determinants to get it to work for general
n × n systems.

Example 206 Consider the 2 × 2 system
\[
\frac{dx}{dt} = \begin{bmatrix} 0 & 1 \\ -2 & 3 \end{bmatrix} x. \tag{173}
\]
I chose this because it is just the system version of the second order equation
\[
y'' - 3y' + 2y = 0 \tag{174}
\]
which we already know how to solve. Namely, put $x_1 = y$ and $x_2 = y'$, so
\begin{align*}
x_1' &= y' = x_2 \\
x_2' &= y'' = -2y + 3y' = -2x_1 + 3x_2,
\end{align*}
which when put in matrix form is (173). To solve the second order equation (174),
proceed in the usual manner. The roots of
\[
r^2 - 3r + 2 = (r - 1)(r - 2) = 0
\]
are $r_1 = 1$, $r_2 = 2$. Hence, $y_1 = e^t$ and $y_2 = e^{2t}$ constitute a linearly independent


pair of solutions. The corresponding vector solutions are
\[
x_1 = \begin{bmatrix} y_1 \\ y_1' \end{bmatrix}
= \begin{bmatrix} e^t \\ e^t \end{bmatrix}
= e^t\begin{bmatrix} 1 \\ 1 \end{bmatrix},
\qquad
x_2 = \begin{bmatrix} y_2 \\ y_2' \end{bmatrix}
= \begin{bmatrix} e^{2t} \\ 2e^{2t} \end{bmatrix}
= e^{2t}\begin{bmatrix} 1 \\ 2 \end{bmatrix}.
\]

The above example suggests that we look for solutions of the form
\[
x = e^{\lambda t} v \tag{175}
\]
where λ is a scalar and v is a vector, both to be determined by the solution process.
Note also that we want $v \neq 0$ since otherwise the solution of the differential equation
would be identically zero and hence not very interesting.

Substitute (175) in $\frac{dx}{dt} = Ax$ to obtain
\[
\frac{dx}{dt} = \lambda e^{\lambda t} v = A e^{\lambda t} v.
\]

The factor eλt is a scalar and non-zero for all t, so we cancel it from the above
equation. The resulting equation may be rewritten

\[
Av = \lambda v \quad \text{where } v \neq 0. \tag{176}
\]

We introduce special terminology for the situation described by this equation. If


(176) has a non-zero solution for a given scalar λ, then λ is called an eigenvalue of
the matrix A, and any non-zero vector v which works for that eigenvalue is called an
eigenvector of A corresponding to λ. These related concepts are absolutely essential
for understanding systems of differential equations, and they arise in fundamental
ways in a wide variety of applications of linear algebra.

Let’s analyze the problem of finding eigenvalues and eigenvectors for A a 2 × 2


matrix. Then, (176) may be rewritten
\[
\begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix}
\begin{bmatrix} v_1 \\ v_2 \end{bmatrix}
= \begin{bmatrix} \lambda v_1 \\ \lambda v_2 \end{bmatrix}
\]
or
\begin{align*}
a_{11} v_1 + a_{12} v_2 &= \lambda v_1 \\
a_{21} v_1 + a_{22} v_2 &= \lambda v_2,
\end{align*}

which, after transposing, becomes
\begin{align*}
(a_{11} - \lambda) v_1 + a_{12} v_2 &= 0 \\
a_{21} v_1 + (a_{22} - \lambda) v_2 &= 0.
\end{align*}
This is a homogeneous system which may be put in matrix form
\[
\begin{bmatrix} a_{11} - \lambda & a_{12} \\ a_{21} & a_{22} - \lambda \end{bmatrix}
\begin{bmatrix} v_1 \\ v_2 \end{bmatrix} = 0. \tag{177}
\]

This system will have a non-zero solution v, as required, if and only if the coefficient
matrix has rank less than n = 2. Unless the matrix consists of zeroes, this means
it must have rank one. That, in turn, amounts to saying that one of the rows is
a multiple of the other, i.e., that the ratios of corresponding components are the
same, or, in symbols,
\[
\frac{a_{11} - \lambda}{a_{21}} = \frac{a_{12}}{a_{22} - \lambda}.
\]
Cross multiplying and transposing yields the quadratic equation

(a11 − λ)(a22 − λ) − a12 a21 = 0. (178)

Our strategy then is to solve this equation for λ to find the possible eigenvalues
and then for each eigenvalue λ to find the non-zero solutions of (177) which are
the eigenvectors corresponding to that eigenvalue. In this way, each eigenvalue
and eigenvector pair will generate a solution x = eλt v of the original system of
differential equations.

Note that (178) may be rewritten using 2 × 2 determinants as
\[
\det \begin{bmatrix} a_{11} - \lambda & a_{12} \\ a_{21} & a_{22} - \lambda \end{bmatrix} = 0. \tag{179}
\]

This equation is called the characteristic equation of the matrix.

Example 207 Consider (as in Section 1) the system
\[
\frac{dx}{dt} = \begin{bmatrix} 1 & 2 \\ 2 & 1 \end{bmatrix} x. \tag{180}
\]
Try $x = e^{\lambda t} v$ as above. As we saw, this comes down to solving the eigenvalue–
eigenvector problem for the coefficient matrix
\[
A = \begin{bmatrix} 1 & 2 \\ 2 & 1 \end{bmatrix}.
\]
To do so, we first solve the characteristic equation
\[
\det \begin{bmatrix} 1 - \lambda & 2 \\ 2 & 1 - \lambda \end{bmatrix} = (1 - \lambda)^2 - 4 = 0
\]
or $1 - 2\lambda + \lambda^2 - 4 = \lambda^2 - 2\lambda - 3 = (\lambda - 3)(\lambda + 1) = 0$.

The roots of this equation are λ = 3 and λ = −1. First consider λ = 3. Putting
this in (177) yields
\[
\begin{bmatrix} 1-3 & 2 \\ 2 & 1-3 \end{bmatrix} v
= \begin{bmatrix} -2 & 2 \\ 2 & -2 \end{bmatrix} v = 0. \tag{181}
\]
Gauss-Jordan reduction yields the solution $v_1 = v_2$ with $v_2$ free. A general solution
vector has the form
\[
v = \begin{bmatrix} v_1 \\ v_2 \end{bmatrix}
= \begin{bmatrix} v_2 \\ v_2 \end{bmatrix}
= v_2 \begin{bmatrix} 1 \\ 1 \end{bmatrix}.
\]
Put $v_2 = 1$ to obtain
\[
v_1 = \begin{bmatrix} 1 \\ 1 \end{bmatrix}
\]
which will form a basis for the solution space of (181). Thus, we obtain as one
solution of (180)
\[
x_1 = e^{\lambda t} v_1 = e^{3t} \begin{bmatrix} 1 \\ 1 \end{bmatrix}.
\]
Note that any other eigenvector for λ = 3 is a non-zero multiple of the basis vector
$v_1$, so choosing another eigenvector in this case will result in a solution of the
differential equation which is just a constant multiple of $x_1$.

To find a second solution, consider the root λ = −1. Put this in (177) to obtain
\[
\begin{bmatrix} 1-(-1) & 2 \\ 2 & 1-(-1) \end{bmatrix} v
= \begin{bmatrix} 2 & 2 \\ 2 & 2 \end{bmatrix} v = 0. \tag{182}
\]

Gauss-Jordan reduction yields the general solution $v_1 = -v_2$ with $v_2$ free. The
general solution vector is
\[
v = \begin{bmatrix} v_1 \\ v_2 \end{bmatrix}
= v_2 \begin{bmatrix} -1 \\ 1 \end{bmatrix}.
\]
Putting $v_2 = 1$ yields the basic eigenvector
\[
v_2 = \begin{bmatrix} -1 \\ 1 \end{bmatrix}
\]
and the corresponding solution of the differential equation
\[
x_2 = e^{-t} \begin{bmatrix} -1 \\ 1 \end{bmatrix}.
\]

Note that these are the solutions we used to form a basis in Example 205 in the
previous section.
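For numerical matrices, this entire eigenvalue–eigenvector computation is available as a library routine. The following sketch, assuming Python with the numpy library, checks Example 207; note that the routine returns normalized eigenvectors, so expect scalar multiples of the basic eigenvectors found above, in some order.

\begin{verbatim}
# A sketch assuming Python with the numpy library.
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 1.0]])
eigenvalues, eigenvectors = np.linalg.eig(A)
print(eigenvalues)    # 3 and -1, in some order
print(eigenvectors)   # columns proportional to (1, 1) and (-1, 1)
\end{verbatim}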

The above procedure appears similar to what we did to solve a second order equation
$y'' + py' + qy = 0$ with constant coefficients. This is no accident! The quadratic
equation
\[
r^2 + pr + q = 0
\]
is just the characteristic equation (with r replacing λ) of the 2×2 matrix you obtain
when you reformulate the problem as a first order system. You should check this
explicitly in Example 206. (The general case is the topic of an exercise.)

The method for n × n systems is very similar to what we did above. However, the
analogue of (179), i.e., the characteristic equation, takes the form
\[
\det \begin{bmatrix}
a_{11} - \lambda & a_{12} & \cdots & a_{1n} \\
a_{21} & a_{22} - \lambda & \cdots & a_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
a_{n1} & a_{n2} & \cdots & a_{nn} - \lambda
\end{bmatrix} = 0
\]

which requires the use of n × n determinants. So far in this course we have only
discussed 2 × 2 determinants and briefly 3 × 3 determinants. Hence, to develop the
general theory, we need to define and study the properties of n × n determinants.

Exercises for 11.2.

1. In each of the following examples, try to find a linearly independent pair of


solutions of the 2 × 2 system dx/dt = Ax by the method outlined in this
section. It may not be possible to do so in all cases.
(a) $A = \begin{bmatrix} -2 & 1 \\ 1 & -2 \end{bmatrix}$.

(b) $A = \begin{bmatrix} 0 & -2 \\ -1 & 1 \end{bmatrix}$.

(c) $A = \begin{bmatrix} -1 & 0 \\ 1 & -1 \end{bmatrix}$.

(d) $A = \begin{bmatrix} -1 & 1 \\ 1 & -1 \end{bmatrix}$.
2. Show that for the second order equation $y'' + py' + qy = 0$, the characteristic
equation of the corresponding matrix $A = \begin{bmatrix} 0 & 1 \\ -q & -p \end{bmatrix}$ is $\lambda^2 + p\lambda + q = 0$.
−q −p
This helps us to subsume the theory of second order equations under that of
systems.

11.3 Definition of the Determinant

Let A be an n × n matrix.

By definition,
\[
\text{for } n = 1 \qquad \det [a] = a
\]
\[
\text{for } n = 2 \qquad \det \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix}
= a_{11} a_{22} - a_{12} a_{21}.
\]

For n > 2, the definition is much more complicated. It is a sum of many terms
formed as follows. Choose any entry from the first row of A; there are n possible
ways to do that. Next, choose any entry from the second row which is not in the
same column as the first entry chosen; there are n − 1 possible ways to do that.
Continue in this way until you have chosen one entry from each row in such a way
that no column is repeated; there are n! ways to do that. Now multiply all these
entries together to form a typical term. If that were all, it would be complicated
enough, but there is one further twist. The products are divided into two classes of
equal size according to a rather complicated rule and then the sum is formed with
the terms in one class multiplied by +1 and those in the other class multiplied by
−1.

Here is the definition for n = 3 arranged to exhibit the signs.
\begin{align*}
\det \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix}
= {} & a_{11} a_{22} a_{33} + a_{12} a_{23} a_{31} + a_{13} a_{21} a_{32} \\
& - a_{11} a_{23} a_{32} - a_{12} a_{21} a_{33} - a_{13} a_{22} a_{31}.
\end{align*}

The definition for n = 4 involves 4! = 24 terms, and I won’t bother to write it out.

A better way to develop the theory is recursively. That is, we assume that determi-
nants have been defined for all (n − 1) × (n − 1) matrices, and then use this to define
determinants for n × n matrices. Since we have a definition for 1 × 1 matrices, this
allows us in principle to find the determinant of any n × n matrix by recursively
invoking the definition. This is less explicit, but it is easier to work with.

Here is the recursive definition. Let A be an n × n matrix, and let $D_j(A)$ be the
determinant of the (n − 1) × (n − 1) matrix obtained by deleting the jth row and
the first column of A. Then, define
\[
\det A = a_{11} D_1(A) - a_{21} D_2(A) + \cdots + (-1)^{j+1} a_{j1} D_j(A) + \cdots + (-1)^{n+1} a_{n1} D_n(A).
\]
In words: take each entry in the first column of A, multiply it by the determinant
of the (n − 1) × (n − 1) matrix obtained by deleting the first column and that row,
and then add up these entries alternating signs as you do.

Examples
\[
\det \begin{bmatrix} 2 & -1 & 3 \\ 1 & 2 & 0 \\ 0 & 3 & 6 \end{bmatrix}
= 2 \det \begin{bmatrix} 2 & 0 \\ 3 & 6 \end{bmatrix}
- 1 \det \begin{bmatrix} -1 & 3 \\ 3 & 6 \end{bmatrix}
+ 0 \det \begin{bmatrix} -1 & 3 \\ 2 & 0 \end{bmatrix}
= 2(12 - 0) - 1(-6 - 9) + 0(\dots) = 24 + 15 = 39.
\]
Note that we didn’t bother evaluating the 2 × 2 determinant with coefficient 0. You
should check that the earlier definition gives the same result.

 
\[
\det \begin{bmatrix} 1 & 2 & -1 & 3 \\ 0 & 1 & 2 & 0 \\ 2 & 0 & 3 & 6 \\ 1 & 1 & 2 & 1 \end{bmatrix}
= 1 \det \begin{bmatrix} 1 & 2 & 0 \\ 0 & 3 & 6 \\ 1 & 2 & 1 \end{bmatrix}
- 0 \det \begin{bmatrix} 2 & -1 & 3 \\ 0 & 3 & 6 \\ 1 & 2 & 1 \end{bmatrix}
+ 2 \det \begin{bmatrix} 2 & -1 & 3 \\ 1 & 2 & 0 \\ 1 & 2 & 1 \end{bmatrix}
- 1 \det \begin{bmatrix} 2 & -1 & 3 \\ 1 & 2 & 0 \\ 0 & 3 & 6 \end{bmatrix}.
\]
Each of these 3 × 3 determinants may be evaluated recursively. (In fact we just did
the last one in the previous example.) You should work them out for yourself. The
answers yield
\[
\det \begin{bmatrix} 1 & 2 & -1 & 3 \\ 0 & 1 & 2 & 0 \\ 2 & 0 & 3 & 6 \\ 1 & 1 & 2 & 1 \end{bmatrix}
= 1(3) - 0(\dots) + 2(5) - 1(39) = -26.
\]

Although this definition allows one to compute the determinant of any n × n matrix
in principle, the number of operations grows very quickly with n. In such calcu-
lations one usually keeps track only of the multiplications since they are usually
the most time consuming operations. Here are some values of N (n), the number
of multiplications needed for a recursive calculation of the determinant of an n × n
determinant. We also tabulate n! for comparison.

n      N(n)     n!
2      2        2
3      6        6
4      28       24
5      145      120
6      876      720

Thus, we clearly need a more efficient method to calculate determinants. As is often


the case in linear algebra, elementary row operations provide us with such a method.
This is based on the following rules relating such operations to determinants.

Rule (i): If A′ is obtained from A by adding a multiple of one row of A to another,
then det A′ = det A.

Example 208
\[
\det \begin{bmatrix} 1 & 2 & 3 \\ 2 & 1 & 3 \\ 1 & 2 & 1 \end{bmatrix}
= 1(1 - 6) - 2(2 - 6) + 1(6 - 3) = 6
\]
\[
\det \begin{bmatrix} 1 & 2 & 3 \\ 0 & -3 & -3 \\ 1 & 2 & 1 \end{bmatrix}
= 1(-3 + 6) - 0(2 - 6) + 1(-6 + 9) = 6.
\]

Rule (ii): If A′ is obtained from A by multiplying one row by a scalar c, then
det A′ = c det A.

Example 209
\[
\det \begin{bmatrix} 1 & 2 & 0 \\ 2 & 4 & 2 \\ 0 & 1 & 1 \end{bmatrix}
= 1(4 - 2) - 2(2 - 0) + 0(\dots) = -2
\]
\[
2 \det \begin{bmatrix} 1 & 2 & 0 \\ 1 & 2 & 1 \\ 0 & 1 & 1 \end{bmatrix}
= 2\bigl(1(2 - 1) - 1(2 - 0) + 0(\dots)\bigr) = 2(-1) = -2.
\]

One may also state this rule as follows: any common factor of a row of A may be
‘pulled out’ from its determinant.

Rule (iii): If A′ is obtained from A by interchanging two rows, then det A′ = − det A.

Example 210
\[
\det \begin{bmatrix} 1 & 2 \\ 2 & 1 \end{bmatrix}
= -\det \begin{bmatrix} 2 & 1 \\ 1 & 2 \end{bmatrix} = -3.
\]

The verification of these rules is a bit involved, so we relegate it to an appendix.


The rules allow us to compute the determinant of any n × n matrix with specific

numerical entries.

Example 211 We shall calculate the determinant of a 4 × 4 matrix. You should
make sure you keep track of which elementary row operations have been performed
at each stage.
\begin{align*}
\det \begin{bmatrix} 1 & 2 & -1 & 1 \\ 0 & 2 & 1 & 2 \\ 3 & 0 & 1 & 1 \\ -1 & 6 & 0 & 2 \end{bmatrix}
&= \det \begin{bmatrix} 1 & 2 & -1 & 1 \\ 0 & 2 & 1 & 2 \\ 0 & -6 & 4 & -2 \\ 0 & 8 & -1 & 3 \end{bmatrix}
= \det \begin{bmatrix} 1 & 2 & -1 & 1 \\ 0 & 2 & 1 & 2 \\ 0 & 0 & 7 & 4 \\ 0 & 0 & -5 & -5 \end{bmatrix} \\
&= -5 \det \begin{bmatrix} 1 & 2 & -1 & 1 \\ 0 & 2 & 1 & 2 \\ 0 & 0 & 7 & 4 \\ 0 & 0 & 1 & 1 \end{bmatrix}
= +5 \det \begin{bmatrix} 1 & 2 & -1 & 1 \\ 0 & 2 & 1 & 2 \\ 0 & 0 & 1 & 1 \\ 0 & 0 & 7 & 4 \end{bmatrix} \\
&= +5 \det \begin{bmatrix} 1 & 2 & -1 & 1 \\ 0 & 2 & 1 & 2 \\ 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & -3 \end{bmatrix}.
\end{align*}

We may now use the recursive definition to calculate the last determinant. In each
case there is only one non-zero entry in the first column.
\[
\det \begin{bmatrix} 1 & 2 & -1 & 1 \\ 0 & 2 & 1 & 2 \\ 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & -3 \end{bmatrix}
= 1 \det \begin{bmatrix} 2 & 1 & 2 \\ 0 & 1 & 1 \\ 0 & 0 & -3 \end{bmatrix}
= 1 \cdot 2 \det \begin{bmatrix} 1 & 1 \\ 0 & -3 \end{bmatrix}
= 1 \cdot 2 \cdot 1 \det [-3]
= 1 \cdot 2 \cdot 1 \cdot (-3) = -6.
\]

Hence, the determinant of the original matrix is 5(−6) = −30.
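The row-operation method is also easy to mechanize, and this is essentially how determinants are computed in practice. Here is a sketch, assuming Python with the numpy library; it eliminates below each pivot (rule (i)), swaps rows when needed (rule (iii), flipping the sign), and multiplies the diagonal at the end.

\begin{verbatim}
# A sketch assuming Python with the numpy library.
import numpy as np

def det_by_elimination(A):
    U = np.array(A, dtype=float)
    n = U.shape[0]
    sign = 1.0
    for k in range(n):
        # Swap in the largest available pivot; rule (iii) flips the sign.
        p = k + int(np.argmax(np.abs(U[k:, k])))
        if U[p, k] == 0.0:
            return 0.0           # no pivot in this column: singular
        if p != k:
            U[[k, p]] = U[[p, k]]
            sign = -sign
        # Rule (i): subtracting multiples of a row changes nothing.
        for i in range(k + 1, n):
            U[i, k:] -= (U[i, k] / U[k, k]) * U[k, k:]
    return sign * np.prod(np.diag(U))

A = [[1, 2, -1, 1], [0, 2, 1, 2], [3, 0, 1, 1], [-1, 6, 0, 2]]
print(det_by_elimination(A))     # approximately -30
\end{verbatim}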

The last calculation is a special case of a general fact which is established in much
the same way by repeating the recursive definition.

 
\[
\det \begin{bmatrix}
a_{11} & a_{12} & a_{13} & \cdots & a_{1n} \\
0 & a_{22} & a_{23} & \cdots & a_{2n} \\
0 & 0 & a_{33} & \cdots & a_{3n} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
0 & 0 & 0 & \cdots & a_{nn}
\end{bmatrix}
= a_{11} a_{22} a_{33} \cdots a_{nn}.
\]
In words, the determinant of an upper triangular matrix is the product of its diagonal
entries.

It is important to be able to tell when the determinant of an n × n matrix
A is zero. Certainly, this will be the case if the first column consists of zeroes, and
indeed it turns out that the determinant vanishes if any row or any column consists
only of zeroes. More generally, if either the set of rows or the set of columns is a
linearly dependent set, then the determinant is zero. (That will be the case if the
rank r < n since the rank is the dimension of both the row space and the column
space.) This follows from the following important theorem.

Theorem 11.18 Let A be an n × n matrix. Then A is singular if and only if
det A = 0. Equivalently, A is invertible, i.e., has rank n, if and only if $\det A \neq 0$.

Proof. If A is invertible, then Gaussian reduction leads to an upper triangular


matrix with non-zero entries on its diagonal, and the determinant of such a matrix
is the product of its diagonal entries, which is also non-zero. No elementary row
operation can make the determinant zero. For, type (i) operations don’t change
the determinant, type (ii) operations multiply by non-zero scalars, and type (iii)
operations change its sign. Hence, $\det A \neq 0$.

If A is singular, then Gaussian reduction also leads to an upper triangular matrix,


but one in which at least the last row consists of zeroes. Hence, at least one
diagonal entry is zero, and so is the determinant.

Example 212
\[
\det \begin{bmatrix} 1 & 1 & 2 \\ 2 & 1 & 3 \\ 1 & 0 & 1 \end{bmatrix}
= 1(1 - 0) - 2(1 - 0) + 1(3 - 2) = 0
\]
so the matrix must be singular. To confirm this, we reduce
\[
\begin{bmatrix} 1 & 1 & 2 \\ 2 & 1 & 3 \\ 1 & 0 & 1 \end{bmatrix}
\to
\begin{bmatrix} 1 & 1 & 2 \\ 0 & -1 & -1 \\ 0 & -1 & -1 \end{bmatrix}
\to
\begin{bmatrix} 1 & 1 & 2 \\ 0 & -1 & -1 \\ 0 & 0 & 0 \end{bmatrix}
\]
which shows that the matrix is singular.

In the previous section, we encountered 2 × 2 matrices with symbolic non-numeric


entries. For such a matrix, Gaussian reduction doesn’t work very well because we
don’t know whether the non-numeric expressions are zero or not.

Example 213 Suppose we want to know whether or not the matrix
\[
\begin{bmatrix}
-\lambda & 1 & 1 & 1 \\
1 & -\lambda & 0 & 0 \\
1 & 0 & -\lambda & 0 \\
1 & 0 & 0 & -\lambda
\end{bmatrix}
\]
is singular. We could try to calculate its rank, but since we don’t know what λ is,
it is not clear how to proceed. Clearly, the row reduction works differently if λ = 0
than if $\lambda \neq 0$. However, we can calculate the determinant by the recursive method.
\begin{align*}
\det \begin{bmatrix}
-\lambda & 1 & 1 & 1 \\
1 & -\lambda & 0 & 0 \\
1 & 0 & -\lambda & 0 \\
1 & 0 & 0 & -\lambda
\end{bmatrix}
= {} & (-\lambda) \det \begin{bmatrix} -\lambda & 0 & 0 \\ 0 & -\lambda & 0 \\ 0 & 0 & -\lambda \end{bmatrix}
- 1 \det \begin{bmatrix} 1 & 1 & 1 \\ 0 & -\lambda & 0 \\ 0 & 0 & -\lambda \end{bmatrix} \\
& + 1 \det \begin{bmatrix} 1 & 1 & 1 \\ -\lambda & 0 & 0 \\ 0 & 0 & -\lambda \end{bmatrix}
- 1 \det \begin{bmatrix} 1 & 1 & 1 \\ -\lambda & 0 & 0 \\ 0 & -\lambda & 0 \end{bmatrix} \\
= {} & (-\lambda)(-\lambda^3) - (\lambda^2) + (-\lambda^2) - (\lambda^2) \\
= {} & \lambda^4 - 3\lambda^2 = \lambda^2(\lambda - \sqrt{3})(\lambda + \sqrt{3}).
\end{align*}
Hence, this matrix is singular just in the cases $\lambda = 0$, $\lambda = \sqrt{3}$, and $\lambda = -\sqrt{3}$.
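Symbolic calculations of this kind can also be handed to a computer algebra system. A sketch assuming Python with the sympy library, which carries the symbol λ through the determinant:

\begin{verbatim}
# A sketch assuming Python with the sympy library.
from sympy import symbols, Matrix, factor

lam = symbols('lambda')
M = Matrix([[-lam,    1,    1,    1],
            [   1, -lam,    0,    0],
            [   1,    0, -lam,    0],
            [   1,    0,    0, -lam]])
print(factor(M.det()))    # lambda**2*(lambda**2 - 3)
\end{verbatim}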

Appendix. Some Proofs We now establish the basic rules relating determinants
to elementary row operations. If you are of a skeptical turn of mind, you should
study this section, since the relation between the recursive definition and rules (i),
(ii), and (iii) is not at all obvious. However, if you have a trusting nature, you
might want to skip this section since the proofs are quite technical and not terribly
enlightening.

The idea behind the proofs is to assume that the rules—actually, modified forms
of the rules—have been established for (n − 1) × (n − 1) determinants, and then to
prove them for n × n determinants. To start it all off, the rules must be checked
explicitly for 2 × 2 determinants. I leave that step for you in the Exercises.

We start with the hardest case, rule (iii). First we consider the special case that A′
is obtained from A by switching two adjacent rows, the ith row and the (i + 1)st
row. Consider the recursive definition

\begin{align*}
\det A' = {} & a'_{11} D_1(A') - \cdots + (-1)^{i+1} a'_{i1} D_i(A') \\
& + (-1)^{i+2} a'_{i+1,1} D_{i+1}(A') + \cdots + (-1)^{n+1} a'_{n1} D_n(A').
\end{align*}
Look at the subdeterminants occurring in this sum. For $j \neq i, i+1$, we have
\[
D_j(A') = -D_j(A)
\]

since deleting the first column and jth row of A and then switching two rows—
neither of which was deleted—changes the sign by rule (iii) for (n − 1) × (n − 1)
determinants. The situation for j = i or j = i + 1 is different; in fact, we have
\[
D_i(A') = D_{i+1}(A) \quad \text{and} \quad D_{i+1}(A') = D_i(A).
\]
The first equation follows because switching rows i and i + 1 and then deleting row
i is the same as deleting row i + 1 without touching row i. A similar argument
establishes the second equation. Using this together with $a'_{i1} = a_{i+1,1}$, $a'_{i+1,1} = a_{i1}$
yields
\begin{align*}
(-1)^{i+1} a'_{i1} D_i(A') &= (-1)^{i+1} a_{i+1,1} D_{i+1}(A) = -(-1)^{i+2} a_{i+1,1} D_{i+1}(A) \\
(-1)^{i+2} a'_{i+1,1} D_{i+1}(A') &= (-1)^{i+2} a_{i1} D_i(A) = -(-1)^{i+1} a_{i1} D_i(A).
\end{align*}

In other words, all terms in the recursive definition of det A′ are negatives of the
corresponding terms of det A except those in positions i and i + 1 which get reversed
with signs changed. Hence, the effect of switching adjacent rows is to change the
sign of the sum.

Suppose instead that non-adjacent rows in positions i and j are switched, and
suppose for the sake of argument that i < j. One way to do this is as follows. First
move row i past each of the rows between row i and row j. This involves some
number of switches of adjacent rows—call that number k. (k = j − i − 1, but that
doesn’t matter in the proof.) Next, move row j past row i and then past the
k rows just mentioned, all in their new positions. That requires k + 1 switches of
adjacent rows. All told, to switch rows i and j in this way requires 2k + 1 switches
of adjacent rows. The net effect is to multiply the determinant by (−1)2k+1 = −1
as required.

There is one important consequence of rule (iii) which we shall use later in the
proof of rule (i).

Rule (iiie): If an n × n matrix A has two equal rows, then det A = 0.

This is not too hard to see. Interchanging two rows changes the sign of det A, but
if the rows are equal, it doesn’t change anything. However, the only number with
the property that it isn’t changed by changing its sign is the number 0. Hence,
det A = 0.

We next verify rule (ii). Suppose A′ is obtained from A by multiplying
the ith row by c. Consider the recursive definition

\[
\det A' = a'_{11} D_1(A') + \cdots + (-1)^{i+1} a'_{i1} D_i(A') + \cdots + (-1)^{n+1} a'_{n1} D_n(A'). \tag{183}
\]

For any $j \neq i$, $D_j(A') = cD_j(A)$ since one of the rows appearing in that determinant
is multiplied by c. Also, $a'_{j1} = a_{j1}$ for $j \neq i$. On the other hand, $D_i(A') = D_i(A)$
since the ith row is deleted in calculating these quantities, and, except for the ith
row, A′ and A agree. In addition, $a'_{i1} = ca_{i1}$ so we pick up the extra factor of
c in any case. It follows that every term on the right of (183) has a factor c, so
det A′ = c det A.

Finally, we attack the proof of rule (i). It turns out to be necessary
to verify the following stronger rule.

Rule (ia): Suppose A, A′, and A″ are three n × n matrices which agree except in
the ith row. Suppose moreover that the ith row of A is the sum of the ith row of A′
and the ith row of A″. Then det A = det A′ + det A″.

Let’s first see why rule (ia)
implies rule (i). We can add c times the jth row of A to its ith row as follows. Let
B′ = A, let B″ be the matrix obtained from A by replacing its ith row by c times
its jth row, and let B be the matrix obtained from A by adding c times its jth row
to its ith row. Then according to rule (ia), we have
\[
\det B = \det B' + \det B'' = \det A + \det B''.
\]
On the other hand, by rule (ii), det B″ = c det A″ where A″ has both ith and jth
rows equal to the jth row of A. Hence, by rule (iiie), det A″ = 0, and det B = det A.

Finally, we establish rule (ia). Assume it is known to be true for (n − 1) × (n − 1)
determinants. We have

\[
\det A = a_{11} D_1(A) - \cdots + (-1)^{i+1} a_{i1} D_i(A) + \cdots + (-1)^{n+1} a_{n1} D_n(A). \tag{184}
\]

For $j \neq i$, the sum rule (ia) may be applied to the determinants $D_j(A)$ because
the appropriate submatrix has one row which breaks up as a sum as needed. Hence,
\[
D_j(A) = D_j(A') + D_j(A'').
\]
Also, for $j \neq i$, we have $a_{j1} = a'_{j1} = a''_{j1}$ since all the matrices agree in any row
except the ith row. Hence, for $j \neq i$,
\[
a_{j1} D_j(A) = a_{j1} D_j(A') + a_{j1} D_j(A'') = a'_{j1} D_j(A') + a''_{j1} D_j(A'').
\]
On the other hand, $D_i(A) = D_i(A') = D_i(A'')$ because in each case the ith row was
deleted. But $a_{i1} = a'_{i1} + a''_{i1}$, so
\[
a_{i1} D_i(A) = a'_{i1} D_i(A) + a''_{i1} D_i(A) = a'_{i1} D_i(A') + a''_{i1} D_i(A'').
\]

It follows that every term in (184) breaks up into a sum as required, and det A =
det A0 + det A00 .

Exercises for 11.3.

1. Find the determinants of each of the following matrices. Use whatever method
seems most convenient, but seriously consider the use of elementary row op-
erations.
 
(a) $\begin{bmatrix} 1 & 1 & 2 \\ 1 & 3 & 5 \\ 6 & 4 & 1 \end{bmatrix}$.

(b) $\begin{bmatrix} 1 & 2 & 3 & 4 \\ 2 & 1 & 4 & 3 \\ 1 & 4 & 2 & 3 \\ 4 & 3 & 2 & 1 \end{bmatrix}$.

(c) $\begin{bmatrix} 0 & 0 & 0 & 0 & 3 \\ 1 & 0 & 0 & 0 & 2 \\ 0 & 1 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 & 4 \\ 0 & 0 & 0 & 1 & 2 \end{bmatrix}$.

(d) $\begin{bmatrix} 0 & x & y \\ -x & 0 & z \\ -y & -z & 0 \end{bmatrix}$.
2. Verify the following rules for 2 × 2 determinants.
(i) If A′ is obtained from A by adding a multiple of the first row to the second,
then det A′ = det A.
(ii) If A′ is obtained from A by multiplying its first row by c, then det A′ =
c det A.
(iii) If A′ is obtained from A by interchanging its two rows, then det A′ =
− det A.
Rules (i) and (ii) for the first row, together with rule (iii), allow us to derive
rules (i) and (ii) for the second row. Explain.

3. Derive the following generalization of rule (i) for 2 × 2 determinants.
\[
\det \begin{bmatrix} a' + a'' & b' + b'' \\ c & d \end{bmatrix}
= \det \begin{bmatrix} a' & b' \\ c & d \end{bmatrix}
+ \det \begin{bmatrix} a'' & b'' \\ c & d \end{bmatrix}.
\]
What is the corresponding rule for the second row? Why do you get it for
free if you use the results of the previous problem?

11.4 Some Important Properties of Determinants

Theorem 11.19 (The Product Rule) Let A and B be n × n matrices. Then

det(AB) = det A det B.

Proof. First assume that A is non-singular. Then there is a sequence of row opera-
tions which reduces A to the identity

A → A1 → A2 → . . . → Ak = I.

Associated with each of these operations will be a multiplier ci which will depend
on the particular operation, and

det A = c1 det A1 = c1 c2 det A2 = · · · = c1 c2 . . . ck det Ak = c1 c2 . . . ck



since Ak = I and det I = 1. Now apply exactly these row operations to the product
AB
AB → A1 B → A2 B → . . . → Ak B = IB = B.
The same multipliers contribute factors at each stage, and

\[
\det AB = c_1 \det A_1 B = c_1 c_2 \det A_2 B = \cdots
= \underbrace{c_1 c_2 \cdots c_k}_{\det A} \det B = \det A \det B.
\]

Assume instead that A is singular. Then, AB is also singular. (This follows from
the fact that the rank of AB is at most the rank of A, as mentioned in the Exercises
for Chapter X, Section 6. However, here is a direct proof for the record. Choose a
sequence of elementary row operations for A, the end result of which is a matrix A0
with at least one row of zeroes. Applying the same operations to AB yields A0 B
which also has to have at least one row of zeroes.) It follows that both det AB and
det A det B are zero, so they are equal.
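The product rule is easy to spot-check numerically. A sketch assuming Python with the numpy library, using the two matrices from Exercise 1 at the end of this section:

\begin{verbatim}
# A sketch assuming Python with the numpy library.
import numpy as np

A = np.array([[1., -2., 6.], [2., 0., 3.], [-3., 1., 1.]])
B = np.array([[2., 3., 1.], [1., 2., 2.], [1., 1., 0.]])

lhs = np.linalg.det(A @ B)
rhs = np.linalg.det(A) * np.linalg.det(B)
assert np.isclose(lhs, rhs)      # det(AB) = det A det B
\end{verbatim}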

Transposes Let A be an m × n matrix. The transpose of A is the n × m matrix
for which the columns are the rows of A. (Also, its rows are the columns of A.) It
is usually denoted $A^t$, but other notations are possible.

Examples
\[
A = \begin{bmatrix} 2 & 0 & 1 \\ 2 & 1 & 2 \end{bmatrix}
\qquad
A^t = \begin{bmatrix} 2 & 2 \\ 0 & 1 \\ 1 & 2 \end{bmatrix}
\]
\[
A = \begin{bmatrix} 1 & 2 & 3 \\ 0 & 2 & 3 \\ 0 & 0 & 3 \end{bmatrix}
\qquad
A^t = \begin{bmatrix} 1 & 0 & 0 \\ 2 & 2 & 0 \\ 3 & 3 & 3 \end{bmatrix}
\]
\[
a = \begin{bmatrix} a_1 \\ a_2 \\ a_3 \end{bmatrix}
\qquad
a^t = \begin{bmatrix} a_1 & a_2 & a_3 \end{bmatrix}.
\]

The following rule follows almost immediately from the definition. Assume A is an
m × n matrix and B is an n × p matrix. Then
\[
(AB)^t = B^t A^t.
\]

Note that the order on the right is reversed. Unless the matrices are square, the
shapes won’t even match if the order is not reversed.

Theorem 11.20 Let A be an n × n matrix. Then
\[
\det A^t = \det A.
\]

Example 214
\[
\det \begin{bmatrix} 1 & 0 & 1 \\ 2 & 1 & 2 \\ 0 & 0 & 1 \end{bmatrix}
= 1(1 - 0) - 2(0 - 0) + 0(\dots) = 1
\]
\[
\det \begin{bmatrix} 1 & 2 & 0 \\ 0 & 1 & 0 \\ 1 & 2 & 1 \end{bmatrix}
= 1(1 - 0) - 0(\dots) + 1(0 - 0) = 1.
\]

The importance of this theorem is that it allows us to go freely from statements


about determinants involving rows of the matrix to corresponding statements in-
volving columns and vice-versa.

Proof. If A is singular, then $A^t$ is also singular and vice-versa. For, the rank may
be characterized as either the dimension of the row space or the dimension of the
column space, and an n × n matrix is singular if its rank is less than n. Hence, in
the singular case, $\det A = 0 = \det A^t$.

Suppose then that A is non-singular. Then there is a sequence of elementary row


operations
A → A1 → A2 → · · · → Ak = I.
Recall from Chapter X, Section 4 that each elementary row operation may be
accomplished by multiplying by an


appropriate elementary matrix. Let Ci denote the elementary matrix needed to
perform the ith row operation. Then,

A → A1 = C1 A → A2 = C2 C1 A → · · · → Ak = Ck Ck−1 . . . C2 C1 A = I.

In other words,

\[
A = (C_k \cdots C_2 C_1)^{-1} = C_1^{-1} C_2^{-1} \cdots C_k^{-1}.
\]

To simplify the notation, let $D_i = C_i^{-1}$. The inverse D of an elementary matrix


C is also an elementary matrix; its effect is the row operation which reverses the
effect of C. Hence, we have shown that any non-singular square matrix A may be
expressed as a product of elementary matrices

\[
A = D_1 D_2 \cdots D_k.
\]

Hence, by the product rule

\[
\det A = (\det D_1)(\det D_2) \cdots (\det D_k).
\]

On the other hand, we have by the rule for the transpose of a product
\[
A^t = D_k^t \cdots D_2^t D_1^t,
\]

so by the product rule
\[
\det A^t = \det(D_k^t) \cdots \det(D_2^t) \det(D_1^t).
\]

Suppose we know the rule $\det D^t = \det D$ for any elementary matrix D. Then,
\begin{align*}
\det A^t &= \det(D_k^t) \cdots \det(D_2^t) \det(D_1^t) \\
&= \det(D_k) \cdots \det(D_2) \det(D_1) \\
&= (\det D_1)(\det D_2) \cdots (\det D_k) = \det A.
\end{align*}

(We used the fact that the products on the right are products of scalars and so can
be rearranged any way we like.)

It remains to establish the rule for elementary matrices. If $D = E_{ij}(c)$ is obtained
from the identity matrix by adding c times its jth row to its ith row, then $D^t =
E_{ji}(c)$ is a matrix of exactly the same type. In each case, $\det D = \det D^t = 1$.
If $D = E_i(c)$ is obtained by multiplying the ith row of the identity matrix by c,
then $D^t$ is exactly the same matrix $E_i(c)$. Finally, if $D = E_{ij}$ is obtained from the
identity matrix by interchanging its ith and jth rows, then $D^t$ is $E_{ji}$ which in fact
is just $E_{ij}$ again. Hence, in each case $\det D^t = \det D$ does hold.

Because of this rule, we may use column operations as well as row operations to
calculate determinants. For, performing a column operation is the same as transpos-
ing the matrix, performing the corresponding row operation, and then transposing
back. The two transpositions don’t affect the determinant.

Example
\[
\det \begin{bmatrix} 1 & 2 & 3 & 0 \\ 2 & 1 & 3 & 1 \\ 3 & 3 & 6 & 2 \\ 4 & 2 & 6 & 4 \end{bmatrix}
= \det \begin{bmatrix} 1 & 2 & 2 & 0 \\ 2 & 1 & 1 & 1 \\ 3 & 3 & 3 & 2 \\ 4 & 2 & 2 & 4 \end{bmatrix}
\quad \text{operation } (-1)c_1 + c_3
\]
\[
= 0.
\]
The last step follows because the 2nd and 3rd columns are equal, which implies that
the rank (dimension of the column space) is less than 4. (You could also subtract
the third column from the second and get a column of zeroes, etc.)

Expansion in Minors or Cofactors There is a generalization of the formula
used for the recursive definition. Namely, for any n × n matrix A, let $D_{ij}(A)$ be the
determinant of the (n − 1) × (n − 1) matrix obtained by deleting the ith row and
jth column of A. Then,
\begin{align}
\det A &= \sum_{i=1}^{n} (-1)^{i+j} a_{ij} D_{ij}(A) \tag{185} \\
&= (-1)^{1+j} a_{1j} D_{1j}(A) + \cdots + (-1)^{i+j} a_{ij} D_{ij}(A) + \cdots + (-1)^{n+j} a_{nj} D_{nj}(A). \nonumber
\end{align}

The special case j = 1 is the recursive definition given in the previous section. The
more general rule is easy to derive from the special case j = 1 by means of column
interchanges. Namely, form a new matrix A′ by moving the jth column to the first
position by successively interchanging it with columns $j-1, j-2, \dots, 2, 1$. There
are j − 1 interchanges, so the determinant is changed by the factor $(-1)^{j-1}$. Now
apply the rule for the first column. The first column of A′ is the jth column of A,
and deleting it has the same effect as deleting the jth column of A. Hence, $a'_{i1} = a_{ij}$
and $D_i(A') = D_{ij}(A)$. Thus,
\[
\det A = (-1)^{j-1} \det A' = (-1)^{j-1} \sum_{i=1}^{n} (-1)^{1+i} a'_{i1} D_i(A')
= \sum_{i=1}^{n} (-1)^{i+j} a_{ij} D_{ij}(A).
\]

Similarly, there is a corresponding rule for any row of a matrix
\begin{align}
\det A &= \sum_{j=1}^{n} (-1)^{i+j} a_{ij} D_{ij}(A) \tag{186} \\
&= (-1)^{i+1} a_{i1} D_{i1}(A) + \cdots + (-1)^{i+j} a_{ij} D_{ij}(A) + \cdots + (-1)^{i+n} a_{in} D_{in}(A). \nonumber
\end{align}
This formula is obtained from (185) by transposing, applying the corresponding
column rule, and then transposing back.

Example Expand the following determinant using its second row.
\[
\det \begin{bmatrix} 1 & 2 & 3 \\ 0 & 6 & 0 \\ 3 & 2 & 1 \end{bmatrix}
= (-1)^{2+1} \, 0(\dots)
+ (-1)^{2+2} \, 6 \det \begin{bmatrix} 1 & 3 \\ 3 & 1 \end{bmatrix}
+ (-1)^{2+3} \, 0(\dots)
= 6(1 - 9) = -48.
\]

There is some terminology which you may see used in connection with these formulas.
The determinant $D_{ij}(A)$ of the (n−1)×(n−1) matrix obtained by deleting the
ith row and jth column is called the i, j-minor of A. The quantity $(-1)^{i+j} D_{ij}(A)$
is called the i, j-cofactor. Formula (185) is called expansion in minors (or cofactors)
of the jth column and formula (186) is called expansion in minors (or cofactors)
of the ith row. It is not necessary to remember the terminology as long as you
remember the formulas and understand how they are used.

Cramer’s Rule One may use determinants to derive a formula for the solutions of a non-singular system
of n equations in n unknowns
\[
\begin{bmatrix}
a_{11} & a_{12} & \cdots & a_{1n} \\
a_{21} & a_{22} & \cdots & a_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
a_{n1} & a_{n2} & \cdots & a_{nn}
\end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix}
= \begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_n \end{bmatrix}.
\]

The formula is called Cramer’s rule, and here it is. For the jth unknown $x_j$, take
the determinant of the matrix formed by replacing the jth column of the coefficient
matrix A by b, and divide it by det A. In symbols,
\[
x_j = \frac{\det \begin{bmatrix}
a_{11} & \cdots & b_1 & \cdots & a_{1n} \\
a_{21} & \cdots & b_2 & \cdots & a_{2n} \\
\vdots & & \vdots & & \vdots \\
a_{n1} & \cdots & b_n & \cdots & a_{nn}
\end{bmatrix}}
{\det \begin{bmatrix}
a_{11} & \cdots & a_{1j} & \cdots & a_{1n} \\
a_{21} & \cdots & a_{2j} & \cdots & a_{2n} \\
\vdots & & \vdots & & \vdots \\
a_{n1} & \cdots & a_{nj} & \cdots & a_{nn}
\end{bmatrix}}.
\]

Example Consider
\[
\begin{bmatrix} 1 & 0 & 2 \\ 1 & 1 & 2 \\ 2 & 0 & 6 \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}
= \begin{bmatrix} 1 \\ 5 \\ 3 \end{bmatrix}.
\]
We have
\[
\det \begin{bmatrix} 1 & 0 & 2 \\ 1 & 1 & 2 \\ 2 & 0 & 6 \end{bmatrix} = 2.
\]
(Do you see a quick way to compute that?) Hence,
\[
x_1 = \frac{\det \begin{bmatrix} 1 & 0 & 2 \\ 5 & 1 & 2 \\ 3 & 0 & 6 \end{bmatrix}}{2} = \frac{0}{2} = 0
\]
\[
x_2 = \frac{\det \begin{bmatrix} 1 & 1 & 2 \\ 1 & 5 & 2 \\ 2 & 3 & 6 \end{bmatrix}}{2} = \frac{8}{2} = 4
\]
\[
x_3 = \frac{\det \begin{bmatrix} 1 & 0 & 1 \\ 1 & 1 & 5 \\ 2 & 0 & 3 \end{bmatrix}}{2} = \frac{1}{2}.
\]
You should try to do this by Gauss-Jordan reduction.
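Cramer’s rule is also straightforward to program, which makes it easy to compare with Gauss-Jordan reduction on examples like the one above. A sketch assuming Python with the numpy library:

\begin{verbatim}
# A sketch assuming Python with the numpy library.
import numpy as np

def cramer(A, b):
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    d = np.linalg.det(A)
    x = np.empty(len(b))
    for j in range(len(b)):
        Aj = A.copy()
        Aj[:, j] = b                 # replace the jth column by b
        x[j] = np.linalg.det(Aj) / d
    return x

print(cramer([[1, 0, 2], [1, 1, 2], [2, 0, 6]], [1, 5, 3]))
# approximately [0. 4. 0.5], agreeing with the example
\end{verbatim}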

Cramer’s rule is not too useful for solving specific numerical systems of equations.
The only practical method for calculating the needed determinants for n large is to
use row (and possibly column) operations. It is usually easier to use row operations
to solve the system without resorting to determinants. However, if the system has
non-numeric symbolic coefficients, Cramer’s rule is sometimes useful. Also, it is
often valuable as a theoretical tool.

Cramer’s rule is related to expansion in minors. You can find further discussion of
it and proofs in Section 5.4 and 5.5 of Introduction to Linear Algebra by Johnson,
Riess, and Arnold. (See also Section 4.5 of Applied Linear Algebra by Noble and
Daniel.)

Exercises for 11.4.

1. Check the validity of the product rule for the product
\[
\begin{bmatrix} 1 & -2 & 6 \\ 2 & 0 & 3 \\ -3 & 1 & 1 \end{bmatrix}
\begin{bmatrix} 2 & 3 & 1 \\ 1 & 2 & 2 \\ 1 & 1 & 0 \end{bmatrix}.
\]

2. Find
\[
\det \begin{bmatrix} 3 & 0 & 0 & 0 \\ 2 & 2 & 0 & 0 \\ 1 & 6 & 4 & 0 \\ 1 & 5 & 4 & 3 \end{bmatrix}.
\]

Of course, the answer is the product of the diagonal entries. Using the prop-
erties discussed in the section, see how many different ways you can come to
this conclusion.
What can you conclude in general about the determinant of a lower triangular
square matrix?

3. (a) Prove that if A is an invertible n × n matrix, then $\det(A^{-1}) = \dfrac{1}{\det A}$.
(b) Using part (a), show that if A is any n × n matrix and P is an invertible
n × n matrix, then det(P AP −1 ) = det A.

4. Why does Cramer’s rule fail if the coefficient matrix A is singular?

5. Use Cramer’s rule to solve the system
\[
\begin{bmatrix}
0 & 1 & 0 & 0 \\
1 & 0 & 1 & 0 \\
0 & 1 & 0 & 1 \\
0 & 0 & 1 & 0
\end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix}
= \begin{bmatrix} 1 \\ 2 \\ 3 \\ 4 \end{bmatrix}.
\]

Also, solve it by Gauss-Jordan reduction and compare the amount of work


you had to do in each case.

11.5 Eigenvalues and Eigenvectors

As in Section 2, we want to solve an n × n system
\[
\frac{dx}{dt} = Ax \tag{187}
\]
by finding a basis $\{x_1(t), x_2(t), \dots, x_n(t)\}$ of the solution space. In case A is con-
stant, it was suggested that we look for solutions of the form
\[
x = e^{\lambda t} v
\]
where λ and $v \neq 0$ are to be determined by the process. Such solutions form a
linearly independent set as long as the corresponding v’s form a linearly independent
set. For, suppose
\[
x_1 = e^{\lambda_1 t} v_1, \quad x_2 = e^{\lambda_2 t} v_2, \quad \dots, \quad x_k = e^{\lambda_k t} v_k
\]
are k such solutions. We know that the set of solutions $\{x_1(t), \dots, x_k(t)\}$ is linearly
independent if and only if the set of vectors obtained by evaluating the functions
at t = 0 is linearly independent. However, this set of vectors is just $\{v_1, \dots, v_k\}$.

We discovered in Section 2 that trying a solution of the form $x = e^{\lambda t} v$ leads to the
eigenvalue–eigenvector problem
\[
Av = \lambda v. \tag{188}
\]

We redo some of the algebra in Section 2 as follows. Rewrite equation (188) as
\begin{align*}
Av &= \lambda v \\
Av - \lambda v &= 0 \\
Av - \lambda I v &= 0 \\
(A - \lambda I)v &= 0.
\end{align*}
The last equation is the homogeneous n × n system with n × n coefficient matrix
\[
A - \lambda I = \begin{bmatrix}
a_{11} - \lambda & a_{12} & \cdots & a_{1n} \\
a_{21} & a_{22} - \lambda & \cdots & a_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
a_{n1} & a_{n2} & \cdots & a_{nn} - \lambda
\end{bmatrix}.
\]
It has a non-zero solution vector v if and only if the coefficient matrix has rank less
than n, i.e., if and only if it is singular. By Theorem 11.18, this will be true if and
only if λ satisfies the characteristic equation
\[
\det(A - \lambda I) = \det \begin{bmatrix}
a_{11} - \lambda & a_{12} & \cdots & a_{1n} \\
a_{21} & a_{22} - \lambda & \cdots & a_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
a_{n1} & a_{n2} & \cdots & a_{nn} - \lambda
\end{bmatrix} = 0. \tag{189}
\]

As in Section 2, the strategy for finding eigenvalues and eigenvectors is as follows.


First find the roots of the characteristic equation. These are the eigenvalues. Then
for each root λ, find a general solution for the system

(A − λI)v = 0. (190)

This gives us all the eigenvectors for that eigenvalue.

Example 216 Consider the matrix
\[
A = \begin{bmatrix} 1 & 4 & 3 \\ 4 & 1 & 0 \\ 3 & 0 & 1 \end{bmatrix}.
\]
The characteristic equation is
\begin{align*}
\det(A - \lambda I) &= \det \begin{bmatrix} 1-\lambda & 4 & 3 \\ 4 & 1-\lambda & 0 \\ 3 & 0 & 1-\lambda \end{bmatrix} \\
&= (1-\lambda)\bigl((1-\lambda)^2 - 0\bigr) - 4\bigl(4(1-\lambda) - 0\bigr) + 3\bigl(0 - 3(1-\lambda)\bigr) \\
&= (1-\lambda)^3 - 25(1-\lambda) = (1-\lambda)\bigl((1-\lambda)^2 - 25\bigr) \\
&= (1-\lambda)(\lambda^2 - 2\lambda - 24) = (1-\lambda)(\lambda - 6)(\lambda + 4) = 0.
\end{align*}

Hence, the eigenvalues are λ = 1, λ = 6, and λ = −4. We proceed to find the


eigenvectors for each of these eigenvalues, starting with the largest.

First, take λ = 6, and put it in (190) to obtain the system
\[
\begin{bmatrix} 1-6 & 4 & 3 \\ 4 & 1-6 & 0 \\ 3 & 0 & 1-6 \end{bmatrix}
\begin{bmatrix} v_1 \\ v_2 \\ v_3 \end{bmatrix} = 0
\quad \text{or} \quad
\begin{bmatrix} -5 & 4 & 3 \\ 4 & -5 & 0 \\ 3 & 0 & -5 \end{bmatrix}
\begin{bmatrix} v_1 \\ v_2 \\ v_3 \end{bmatrix} = 0.
\]
To solve, use Gauss-Jordan reduction
\begin{align*}
\begin{bmatrix} -5 & 4 & 3 \\ 4 & -5 & 0 \\ 3 & 0 & -5 \end{bmatrix}
&\to
\begin{bmatrix} -1 & -1 & 3 \\ 4 & -5 & 0 \\ 3 & 0 & -5 \end{bmatrix}
\to
\begin{bmatrix} -1 & -1 & 3 \\ 0 & -9 & 12 \\ 0 & -3 & 4 \end{bmatrix} \\
&\to
\begin{bmatrix} -1 & -1 & 3 \\ 0 & 0 & 0 \\ 0 & -3 & 4 \end{bmatrix}
\to
\begin{bmatrix} -1 & -1 & 3 \\ 0 & 3 & -4 \\ 0 & 0 & 0 \end{bmatrix} \\
&\to
\begin{bmatrix} 1 & 1 & -3 \\ 0 & 1 & -4/3 \\ 0 & 0 & 0 \end{bmatrix}
\to
\begin{bmatrix} 1 & 0 & -5/3 \\ 0 & 1 & -4/3 \\ 0 & 0 & 0 \end{bmatrix}.
\end{align*}

Note that the matrix is singular, and the rank is smaller than 3. This must be the
case because the condition det(A − λI) = 0 guarantees it. If the coefficient matrix

were non-singular, you would know that there was a mistake: either the roots of
the characteristic equation are wrong or the row reduction was not done correctly.

The general solution is
\begin{align*}
v_1 &= (5/3)v_3 \\
v_2 &= (4/3)v_3
\end{align*}
with $v_3$ free. The general solution vector is
\[
v = \begin{bmatrix} (5/3)v_3 \\ (4/3)v_3 \\ v_3 \end{bmatrix}
= v_3 \begin{bmatrix} 5/3 \\ 4/3 \\ 1 \end{bmatrix}.
\]
Hence, the solution space is 1-dimensional. A basis may be obtained by setting
$v_3 = 1$ as usual, but it is a bit neater to put $v_3 = 3$ so as to avoid fractions. Thus,
\[
v_1 = \begin{bmatrix} 5 \\ 4 \\ 3 \end{bmatrix}
\]
constitutes a basis for the solution space. Note that we have now found all eigenvec-
tors for the eigenvalue λ = 6. They are all the non-zero vectors in the 1-dimensional
solution subspace, i.e., all non-zero multiples of $v_1$.

Next take λ = 1 and put it in (190) to obtain the system
\[
\begin{bmatrix} 0 & 4 & 3 \\ 4 & 0 & 0 \\ 3 & 0 & 0 \end{bmatrix}
\begin{bmatrix} v_1 \\ v_2 \\ v_3 \end{bmatrix} = 0.
\]
Use Gauss-Jordan reduction
\[
\begin{bmatrix} 0 & 4 & 3 \\ 4 & 0 & 0 \\ 3 & 0 & 0 \end{bmatrix}
\to
\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 3/4 \\ 0 & 0 & 0 \end{bmatrix}.
\]
The general solution is
\begin{align*}
v_1 &= 0 \\
v_2 &= -(3/4)v_3
\end{align*}
with $v_3$ free. Thus the general solution vector is
\[
v = \begin{bmatrix} 0 \\ -(3/4)v_3 \\ v_3 \end{bmatrix}
= v_3 \begin{bmatrix} 0 \\ -3/4 \\ 1 \end{bmatrix}.
\]
Put $v_3 = 4$ to obtain a single basis vector
\[
v_2 = \begin{bmatrix} 0 \\ -3 \\ 4 \end{bmatrix}.
\]

The set of eigenvectors for the eigenvalue λ = 1 is the set of non-zero multiples of
v2 .

Finally, take λ = −4, and put this in (190) to obtain the system
\[
\begin{bmatrix} 5 & 4 & 3 \\ 4 & 5 & 0 \\ 3 & 0 & 5 \end{bmatrix}
\begin{bmatrix} v_1 \\ v_2 \\ v_3 \end{bmatrix} = 0.
\]
Solve this by Gauss-Jordan reduction.
\begin{align*}
\begin{bmatrix} 5 & 4 & 3 \\ 4 & 5 & 0 \\ 3 & 0 & 5 \end{bmatrix}
&\to
\begin{bmatrix} 1 & -1 & 3 \\ 4 & 5 & 0 \\ 3 & 0 & 5 \end{bmatrix}
\to
\begin{bmatrix} 1 & -1 & 3 \\ 0 & 9 & -12 \\ 0 & 3 & -4 \end{bmatrix} \\
&\to
\begin{bmatrix} 1 & -1 & 3 \\ 0 & 3 & -4 \\ 0 & 0 & 0 \end{bmatrix}
\to
\begin{bmatrix} 1 & 0 & 5/3 \\ 0 & 1 & -4/3 \\ 0 & 0 & 0 \end{bmatrix}.
\end{align*}
The general solution is
\begin{align*}
v_1 &= -(5/3)v_3 \\
v_2 &= (4/3)v_3
\end{align*}
with $v_3$ free. The general solution vector is
\[
v = \begin{bmatrix} -(5/3)v_3 \\ (4/3)v_3 \\ v_3 \end{bmatrix}
= v_3 \begin{bmatrix} -5/3 \\ 4/3 \\ 1 \end{bmatrix}.
\]
Setting $v_3 = 3$ yields the basis vector
\[
v_3 = \begin{bmatrix} -5 \\ 4 \\ 3 \end{bmatrix}.
\]
The set of eigenvectors for the eigenvalue λ = −4 consists of all non-zero multiples
of $v_3$.

The set $\{v_1, v_2, v_3\}$ obtained in the previous example is linearly independent. To
see this apply Gaussian reduction to the matrix with these vectors as columns:
\[
\begin{bmatrix} 5 & 0 & -5 \\ 4 & -3 & 4 \\ 3 & 4 & 3 \end{bmatrix}
\to
\begin{bmatrix} 1 & 0 & -1 \\ 0 & -3 & 8 \\ 0 & 4 & 6 \end{bmatrix}
\to
\begin{bmatrix} 1 & 0 & -1 \\ 0 & 1 & -8/3 \\ 0 & 0 & 50/3 \end{bmatrix}.
\]
The reduced matrix has rank 3, so the columns of the original matrix form an
independent set.

It is no accident that a set so obtained is linearly independent. The following


theorem tells us that this will always be the case.

Theorem 11.21 Let A be an n × n matrix. Let λ1 , λ2 , . . . , λk be different eigen-


values of A, and let v1 , v2 , . . . , vk be corresponding eigenvectors. Then

{v1 , v2 , . . . , vk }

is a linearly independent set.

Proof. Assume $\{v_1, v_2, \dots, v_k\}$ is not a linearly independent set, and try to derive
a contradiction. In this case, one of the vectors in the set can be expressed as a
linear combination of the others. If we number the elements appropriately, we may
assume that
\[
v_1 = c_2 v_2 + \cdots + c_r v_r, \tag{191}
\]
where $r \le k$. (Before renumbering, leave out any vector $v_i$ on the right if it appears
with coefficient $c_i = 0$.) Note that we may also assume that no vector which appears
on the right is a linear combination of the others because otherwise we could express
it so and after combining terms delete it from the sum. Thus we may assume the
vectors which appear on the right form a linearly independent set. Multiply (191)
on the left by A. We get
\begin{align}
Av_1 &= c_2 Av_2 + \cdots + c_r Av_r \nonumber \\
\lambda_1 v_1 &= c_2 \lambda_2 v_2 + \cdots + c_r \lambda_r v_r \tag{192}
\end{align}
where in (192) we used the fact that each $v_i$ is an eigenvector with eigenvalue $\lambda_i$.
Now multiply (191) by $\lambda_1$ and subtract from (192). We get
\[
0 = c_2(\lambda_2 - \lambda_1)v_2 + \cdots + c_r(\lambda_r - \lambda_1)v_r. \tag{193}
\]
Not all the coefficients on the right in this equation are zero. For at least one of
the $c_i \neq 0$ (since $v_1 \neq 0$), and none of the quantities $\lambda_2 - \lambda_1, \dots, \lambda_r - \lambda_1$ is zero. It
follows that (193) may be used to express one of the vectors $v_2, \dots, v_r$ as a linear
combination of the others. However, this contradicts the assertion that the set of
vectors appearing on the right is linearly independent. Hence, our initial assumption
that the set $\{v_1, v_2, \dots, v_k\}$ is dependent must be false, and the theorem is proved.

You should try this argument out on a set {v1 , v2 , v3 } of three eigenvectors to see
if you understand it.

Historical Aside The concepts discussed here and in Section 2 were invented by
the 19th century English mathematicians Cayley and Sylvester, but they used the
terms ‘characteristic vector’ and ‘characteristic value’. These were translated into
German as ‘Eigenvector’ and ‘Eigenwerte’, and then partially translated back into
English—largely by physicists—as ‘eigenvector’ and ‘eigenvalue’. Some English and
American mathematicians tried to retain the original English terms, but they were
overwhelmed by extensive use of the physicists’ language in applications. Nowadays
everyone uses the German terms. The one exception is that we still call
\[
\det(A - \lambda I) = 0
\]
the characteristic equation and not some strange German-English name.

Application to Homogeneous Linear Systems of Differential Equations What lessons can
we learn for solving systems of differential equations from the previous discussion
of eigenvalues? First, Theorem 11.21 assures us that if the n × n matrix A has
distinct eigenvalues $\lambda_1, \lambda_2, \dots, \lambda_k$ corresponding to eigenvectors $v_1, v_2, \dots, v_k$, then
the functions
\[
x_1 = e^{\lambda_1 t} v_1, \quad x_2 = e^{\lambda_2 t} v_2, \quad \dots, \quad x_k = e^{\lambda_k t} v_k
\]
form a linearly independent set of solutions of $\frac{dx}{dt} = Ax$. If k = n then this set will
be a basis for the space of solutions of $\frac{dx}{dt} = Ax$. (Why?)
Example 216a Consider the system
\[
\frac{dx}{dt} = \begin{bmatrix} 1 & 4 & 3 \\ 4 & 1 & 0 \\ 3 & 0 & 1 \end{bmatrix} x.
\]
We found that $\lambda_1 = 6$, $\lambda_2 = 1$, $\lambda_3 = -4$ are eigenvalues of the coefficient matrix
corresponding to eigenvectors
\[
v_1 = \begin{bmatrix} 5 \\ 4 \\ 3 \end{bmatrix}, \quad
v_2 = \begin{bmatrix} 0 \\ -3 \\ 4 \end{bmatrix}, \quad
v_3 = \begin{bmatrix} -5 \\ 4 \\ 3 \end{bmatrix}.
\]
Since k = 3 in this case, we conclude that the general solution of the system of
differential equations is
\[
x = c_1 e^{6t} \begin{bmatrix} 5 \\ 4 \\ 3 \end{bmatrix}
+ c_2 e^{t} \begin{bmatrix} 0 \\ -3 \\ 4 \end{bmatrix}
+ c_3 e^{-4t} \begin{bmatrix} -5 \\ 4 \\ 3 \end{bmatrix}.
\]

The above example illustrates that we should ordinarily look for a linearly inde-
pendent set of eigenvectors, as large as possible, for the n × n coefficient matrix
A. If we can find such a set with n elements, then we may write out a complete
solution as in the example. The condition that there is a linearly independent set
of n eigenvectors for A, i.e., that there is a basis for Rn (Cn in the complex case)
consisting of eigenvectors for A, will certainly be verified if there are n distinct
eigenvalues (Theorem 11.21). We shall see later that there are other circumstances
in which it holds. On the other hand, it is easy to find examples where it fails.
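
If you have access to a computer, it is a good idea to check such calculations
numerically. The following sketch does this for Example 216a using Python with
the NumPy library (the choice of Python and NumPy is ours, not part of the text;
any comparable tool will do).

import numpy as np

A = np.array([[1.0, 4.0, 3.0],
              [4.0, 1.0, 0.0],
              [3.0, 0.0, 1.0]])

# numpy.linalg.eig returns the eigenvalues and a matrix whose columns are
# corresponding (normalized) eigenvectors.
eigenvalues, eigenvectors = np.linalg.eig(A)
print(np.sort(eigenvalues))          # approximately [-4.  1.  6.]

# Each column v satisfies A v = lambda v up to round-off error.
for lam, v in zip(eigenvalues, eigenvectors.T):
    assert np.allclose(A @ v, lam * v)

The eigenvectors returned by the routine are normalized to unit length, so they
will be scalar multiples of the basic eigenvectors found by hand.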

Example 217 Consider the 2 × 2 system
\[
\frac{dx}{dt} = \begin{bmatrix} 2 & 3 \\ 0 & 2 \end{bmatrix} x.
\]
The eigenvalues are found by solving the characteristic equation of the coefficient
matrix
\[
\det \begin{bmatrix} 2-\lambda & 3 \\ 0 & 2-\lambda \end{bmatrix} = (2-\lambda)^2 = 0.
\]

Hence there is only one (double) root λ = 2. To find the corresponding eigenvectors,
solve
\[
\begin{bmatrix} 0 & 3 \\ 0 & 0 \end{bmatrix} v = 0.
\]
This one is easy (but you can make it hard for yourself if you get confused about
Gauss-Jordan reduction). The general solution is

v2 = 0, v1 free.

Hence, the general solution vector is
\[
v = \begin{bmatrix} v_1 \\ 0 \end{bmatrix} = v_1 \begin{bmatrix} 1 \\ 0 \end{bmatrix}.
\]
Hence, a basic eigenvector for λ = 2 is
\[
v_1 = \begin{bmatrix} 1 \\ 0 \end{bmatrix} = e_1.
\]
{v1 } is certainly not a basis for R2 .

In the sections that follow, we shall be concerned with these two related questions.
Given an n × n matrix A, when can we be sure that there is a basis for Rn (Cn
in the complex case) consisting of eigenvectors for A? If there is no such basis, is
there another way to solve the system dx/dt = Ax?

Solving Polynomial Equations

To find the eigenvalues of an n × n matrix, you
have to solve a polynomial equation. You all know how to solve quadratic equations,
but you may be stumped by cubic or higher equations, particularly if there are no
obvious ways to factor. You should review what you learned in high school about
this subject, but here are a few guidelines to help you.

First, it is not generally possible to find a simple solution in closed form for an
algebraic equation. For most equations you might encounter in practice, you would
have to use some method to approximate a solution. (Many such methods exist. One
you may have learned in your calculus course is Newton’s Method.) Unfortunately,
an approximate solution of the characteristic equation isn’t much good for finding
the corresponding eigenvectors. After all, the system

(A − λI)v = 0

must have rank smaller than n for there to be non-zero solutions. If you replace
the exact value of λ by an approximation, the chances are that the new system will
have rank n. Hence, the textbook method we have described for finding eigenvectors
won’t work. There are in fact many alternative methods for finding eigenvalues and
eigenvectors approximately when exact solutions are not available. Whole books
are devoted to such methods. (See Johnson, Riess, and Arnold or Noble and Daniel
for some discussion of these matters.)

Fortunately, textbook exercises and examination questions almost always involve


characteristic equations for which exact solutions exist, but it is not always obvious
what they are. Here is one fact (a consequence of an important result called Gauss’s
Lemma) which helps us find such exact solutions when they exist. Consider an
equation of the form

λn + a1 λn−1 + · · · + an−1 λ + an = 0

where all the coefficients are integers. (The characteristic equation of a matrix
always has leading coefficient 1 or −1. In the latter case, just imagine you have
multiplied through by −1 to apply the method.) Gauss’s Lemma tells us that if
this equation has any roots which are rational numbers, i.e., quotients of integers,
then any such root is actually an integer, and, moreover, it must divide the constant
term an . Hence, the first step in solving such an equation should be checking all
possible factors (positive and negative) of the constant term. Once you know a
root r1 , you can divide through by λ − r1 to reduce to a lower degree equation. If
you know the method of synthetic division, you will find checking the possible roots
and the polynomial long division much simpler.
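
The whole procedure is easy to automate. Here is a small illustrative sketch in
Python (the function name and structure are ours, not from the text); it evaluates
the polynomial by Horner's rule, which is the idea behind synthetic division.

def integer_roots(coeffs):
    """coeffs = [1, a1, ..., an] for x^n + a1 x^(n-1) + ... + an."""
    def value(x):
        total = 0
        for c in coeffs:            # Horner's rule
            total = total * x + c
        return total

    an = coeffs[-1]
    if an == 0:
        return [0]                  # 0 is a root; factor out x and repeat
    divisors = [d for d in range(1, abs(an) + 1) if an % d == 0]
    return [r for d in divisors for r in (d, -d) if value(r) == 0]

print(integer_roots([1, 0, -3, 2]))  # the equation of Example 218 below: [1, -2]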

Example 218 Solve
\[
\lambda^3 - 3\lambda + 2 = 0.
\]
If there are any rational roots, they must be factors of the constant term 2. Hence,
we must try 1, −1, 2, −2. Substituting λ = 1 in the equation yields 0, so it is a root.
Dividing λ3 − 3λ + 2 by λ − 1 yields

\[
\lambda^3 - 3\lambda + 2 = (\lambda - 1)(\lambda^2 + \lambda - 2),
\]
and this may be factored further to obtain
\[
\lambda^3 - 3\lambda + 2 = (\lambda - 1)(\lambda - 1)(\lambda + 2) = (\lambda - 1)^2 (\lambda + 2).
\]

Hence, the roots are λ = 1, which is a double root, and λ = −2.

Eigenvalues and Eigenvectors for Function Spaces

The concepts of eigenvalue and eigenvector
make sense for a linear operator L defined on any vector space V , i.e., λ is an
eigenvalue for L with eigenvector v if

L(v) = λv with v ≠ 0.

If the vector space V is not finite dimensional, then the use of the characteristic
equation and the other methods introduced in this section do not apply, but the
concepts are still very useful, and other methods may be employed to calculate
them.

In particular, eigenvalues and eigenvectors arise naturally in the function spaces


which occur in solving differential equations, both ordinary and partial. Thus, if
you refer back to the analysis of the vibrating drum problem in Chapter IX, Section

2, you will recall that the process of separation of variables led to equations of the
form
\[
\Theta'' = \mu \Theta,
\qquad
R'' + \frac{1}{r} R' - \frac{m^2}{r^2} R = \gamma R \quad \text{where } \mu = -m^2.
\]
Here I took some liberties with the form of the equations in order to emphasize
the relation with eigenvalues and eigenvectors. In each case, the equation has the
form L(ψ) = λψ where ψ denotes a function, and L is an appropriate differential
operator:
\[
L = \frac{d^2}{d\theta^2},
\qquad
L = \frac{d^2}{dr^2} + \frac{1}{r}\frac{d}{dr} - \frac{m^2}{r^2}.
\]
There is one subtle but crucial point here. The allowable functions ψ in the domains
of these operators are not arbitrary but have other conditions imposed on them by
their interpretation in the underlying physical problem. Thus, we impose the peri-
odicity condition Θ(θ + 2π) = Θ(θ) because of the geometric meaning of the variable
θ, i.e., the domain of the operator L = d²/dθ² is restricted to such periodic functions.

The eigenvalue–eigenvector condition L(Θ) = µΘ amounts to a differential equation
which is easy to solve—see Chapter IX, Section 2—but the periodicity condition
limits the choice of the eigenvalue µ to numbers of the form µ = −m2 where m is
a non-negative integer. The corresponding eigenvectors (also called appropriately
eigenfunctions) are the corresponding solutions of the differential equation given by
Θ(θ) = c1 cos mθ + c2 sin mθ.
Similarly, the allowable functions R(r) must satisfy the boundary condition R(a) =
0. Solving the eigenvalue–eigenvector problem L(R) = γR in this case amounts to
solving Bessel’s equation and finding the eigenvalues γ comes down to finding roots
of Bessel functions.

This approach is commonly used in the study of partial differential equations, and
you will go into it thoroughly in your course on Fourier series and boundary value
problems. It is also part of the formalism used to describe quantum mechanics.
In that theory, linear operators correspond to observable quantities like position,
momentum, energy, etc., and the eigenvalues of these operators are the possible
results of measurements of these quantities.

Exercises for 11.5.

1. Find the eigenvalues and eigenvectors for each of the following matrices. Use
the method described above for solving the characteristic equation if it has
degree greater than two.
(a) \(\begin{bmatrix} 5 & -3 \\ 2 & 0 \end{bmatrix}\).

(b) \(\begin{bmatrix} 3 & -2 & -2 \\ 0 & 0 & 1 \\ 1 & 0 & -1 \end{bmatrix}\).

(c) \(\begin{bmatrix} 2 & -1 & -1 \\ 0 & 0 & -2 \\ 0 & 1 & 3 \end{bmatrix}\).

(d) \(\begin{bmatrix} 4 & -1 & -1 \\ 0 & 2 & -1 \\ 1 & 0 & 3 \end{bmatrix}\).
2. Taking A to be each of the matrices in the previous exercise, use the eigenvalue-
eigenvector method to find a basis for the vector space of solutions of the
system dx/dt = Ax if it works. (It works if, in the previous exercise, you found
a basis for Rn consisting of eigenvectors for the matrix.)

3. Solve the initial value problem
\[
\frac{dx}{dt} = \begin{bmatrix} 1 & -1 & 1 \\ 1 & -2 & -1 \\ 0 & 1 & 2 \end{bmatrix} x
\quad \text{where} \quad x(0) = \begin{bmatrix} 2 \\ 0 \\ 4 \end{bmatrix}.
\]

Hint: One of the eigenvalues is zero.


4. Show that zero is an eigenvalue of the square matrix A if and only if det A = 0,
i.e., if and only if A is singular.
5. Let A be a square matrix, and suppose λ is an eigenvalue for A with eigenvec-
tor v. Show that λ2 is an eigenvalue for A2 with eigenvector v. What about
λn and An for n a positive integer?
6. Suppose A is non-singular. Show that λ is an eigenvalue of A if and only if
λ−1 is an eigenvalue of A−1 . Hint. Use the same eigenvector.
7. (a) Show that det(A−λI) is a quadratic polynomial in λ if A is a 2×2 matrix.
(b) Show that det(A − λI) is a cubic polynomial in λ if A is a 3 × 3 matrix.
(c) What would you guess is the coefficient of λn in det(A − λI) for A an n × n
matrix?
8. (Optional) Let A be an n × n matrix with entries not involving λ. Prove in
general that det(A − λI) is a polynomial in λ of degree n. Hint. Assume B(λ)
is an n × n matrix such that each column has at most one term involving λ
and that term is of the form a + bλ. Show by using the recursive definition
of the determinant that det B(λ) is a polynomial in λ of degree at most n.
Now use this fact and the recursive definition of the determinant to show that
det(A − λI) is a polynomial of degree exactly n.

9. Solve the 2 × 2 system in Example 217,
\[
\frac{dx_1}{dt} = 2x_1 + 3x_2, \qquad \frac{dx_2}{dt} = 2x_2,
\]
by solving the second equation and substituting back in the first equation.
10. Consider the infinite dimensional vector space of all infinitely differentiable
real valued functions u(x) defined for 0 ≤ x ≤ a and satisfying u(0) = u(a) =
0. Let L = d²/dx². Find the eigenvalues and eigenvectors of the operator L.
Hint: A non-zero solution of the differential equation u'' + µu = 0 such that
u(0) = u(a) = 0 is an eigenvector (eigenfunction) with eigenvalue λ = −µ.
You may assume for the purposes of the problem that λ < 0, i.e., µ > 0.

11.6 Complex Roots

Let A be an n × n matrix. The characteristic equation

det(A − λI) = 0

is a polynomial equation of degree n in λ. The Fundamental Theorem of Algebra


tells us that such an equation has n complex roots, at least if we count repeated
roots with proper multiplicity. Some or all of these roots may be real, but even if A
is a real matrix, some of the roots may be non-real complex numbers. For example,
the characteristic equation of
\[
A = \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix}
\quad \text{is} \quad
\det \begin{bmatrix} -\lambda & -1 \\ 1 & -\lambda \end{bmatrix} = \lambda^2 + 1 = 0,
\]

which has roots λ = ±i. We want to see in general how the nature of these roots
affects the calculation of the eigenvectors of A.

First, suppose that A has some non-real complex entries, that is, not all its entries
are real.

Example 219 Consider
\[
A = \begin{bmatrix} 0 & i \\ -i & 0 \end{bmatrix}.
\]
The characteristic equation is
\[
\det \begin{bmatrix} -\lambda & i \\ -i & -\lambda \end{bmatrix} = \lambda^2 - (-i^2) = \lambda^2 - 1 = 0.
\]

Thus the eigenvalues are λ = 1 and λ = −1. For λ = 1, we find the eigenvectors by
solving
\[
\begin{bmatrix} -1 & i \\ -i & -1 \end{bmatrix} v = 0.
\]
Gauss-Jordan reduction yields
\[
\begin{bmatrix} -1 & i \\ -i & -1 \end{bmatrix}
\to
\begin{bmatrix} 1 & -i \\ 0 & 0 \end{bmatrix}.
\]
(Multiply the first row by −i and add it to the second row; then change the signs
in the first row.) Thus the general solution is

v1 = iv2 , v2 free.

A general solution vector is
\[
v = \begin{bmatrix} i v_2 \\ v_2 \end{bmatrix} = v_2 \begin{bmatrix} i \\ 1 \end{bmatrix}.
\]
Thus a basic eigenvector for λ = 1 is
\[
v_1 = \begin{bmatrix} i \\ 1 \end{bmatrix}.
\]

A similar calculation shows that a basic eigenvector for the eigenvalue λ = −1 is
\[
v_2 = \begin{bmatrix} -i \\ 1 \end{bmatrix}.
\]

The above example shows that when some of the entries are non-real complex
numbers, we should expect complex eigenvectors. That is, the proper domain to
consider is the complex vector space Cn .

Suppose instead that A has only real entries. It may still be the case that some of
the roots of the characteristic equation are not real. We have two choices. We can
consider only real roots as possible eigenvalues. For such roots λ, we may consider
only real solutions of the system

(A − λI)v = 0.

That is, we choose as our domain of attention the real vector space Rn . In effect,
we act as if we don’t know about complex numbers. Clearly, we will be missing
something this way. We will have a better picture of what is happening if we also
consider the non-real complex roots of the characteristic equation. Doing that will
ordinarily lead to complex eigenvectors, i.e., to the complex vector space Cn .

Example 220 Consider
\[
A = \begin{bmatrix} 2 & 1 \\ -2 & 0 \end{bmatrix}.
\]

The characteristic equation is
\[
\det \begin{bmatrix} 2-\lambda & 1 \\ -2 & -\lambda \end{bmatrix} = \lambda^2 - 2\lambda + 2 = 0.
\]
The roots of this equation are
\[
\frac{2 \pm \sqrt{4 - 8}}{2} = 1 \pm i.
\]

Neither of these roots is real, so considering this as a purely real problem in R2 will
yield no eigenvectors.

Consider it instead as a complex problem. The eigenvalue λ = 1 + i yields the
system
\[
\begin{bmatrix} 1-i & 1 \\ -2 & -1-i \end{bmatrix} v = 0.
\]
Gauss-Jordan reduction (done carefully to account for the complex entries) yields
\[
\begin{bmatrix} 1-i & 1 \\ -2 & -1-i \end{bmatrix}
\to
\begin{bmatrix} 1 & (1+i)/2 \\ 1-i & 1 \end{bmatrix}
\quad \text{(switch rows, divide by $-2$)}
\]
\[
\to
\begin{bmatrix} 1 & (1+i)/2 \\ 0 & 0 \end{bmatrix}.
\]
(The calculation of the last (2, 2)-entry is 1 − (1 − i)(1 + i)/2 = 1 − 1 = 0.) The
general solution is
\[
v_1 = -\frac{1+i}{2} v_2, \quad v_2 \text{ free}.
\]
The general solution vector is
\[
v = v_2 \begin{bmatrix} -(1+i)/2 \\ 1 \end{bmatrix}.
\]
Putting v2 = 2 to avoid fractions yields a basic eigenvector
\[
v_1 = \begin{bmatrix} -1-i \\ 2 \end{bmatrix}
\]

for the eigenvalue λ = 1 + i.

A similar calculation may be used to determine the eigenvectors for the eigenvalue
1 − i. However, there is a shortcut based on the fact that the second eigenvalue
1 − i is the complex conjugate λ̄ of the first eigenvalue 1 + i. To see how this works
requires a short digression. Suppose v is an eigenvector with eigenvalue λ. This
means that
Av = λv.

Now take the complex conjugate of everything in sight on both sides of this equation.
This yields
\[
\bar{A}\bar{v} = \bar{\lambda}\bar{v}.
\]
(Here, putting a ‘bar’ over a matrix means that you should take the complex con-
jugate of every entry in the matrix.) Since A is real, we have Ā = A. Thus, we
have
\[
A\bar{v} = \bar{\lambda}\bar{v}.
\]
In words, for a real n × n matrix, the complex conjugate of an eigenvector is also an
eigenvector, and the eigenvalue corresponding to the latter is the complex conjugate
of the eigenvalue corresponding to the former.
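
This principle is easy to confirm numerically. Here is a sketch for the matrix of
Example 220, again assuming Python with NumPy (our choice of tool, not the
text's).

import numpy as np

A = np.array([[2.0, 1.0],
              [-2.0, 0.0]])

eigenvalues, eigenvectors = np.linalg.eig(A)
print(eigenvalues)                   # approximately [1.+1.j  1.-1.j]

# Conjugating an eigenvector of a real matrix gives an eigenvector for the
# conjugate eigenvalue.
lam = eigenvalues[0]
v = eigenvectors[:, 0]
assert np.allclose(A @ np.conj(v), np.conj(lam) * np.conj(v))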

Applying this principle in Example 220 yields
\[
v_2 = \bar{v}_1 = \begin{bmatrix} -1+i \\ 2 \end{bmatrix}
\]
as a basic eigenvector for eigenvalue λ = 1 − i.

Application to Homogeneous Linear Systems of Differential Equations

Given a system of the form dx/dt = Ax where A is a real n × n matrix, we know from
our previous work with second order differential equations that it may be useful to
consider complex valued solutions x = x(t).

Example 220, expanded Consider the system
\[
\frac{dx}{dt} = \begin{bmatrix} 2 & 1 \\ -2 & 0 \end{bmatrix} x.
\]

Since the roots of the characteristic equation of the coefficient matrix are complex,
if we look for solutions x(t) taking values in R2 , we won’t get anything by the
eigenvalue-eigenvector method. Hence, it makes sense to look for solutions with
values in C2 . Then, according to our previous calculations, the general solution
will be

\[
x = c_1 e^{\lambda t} v_1 + c_2 e^{\bar{\lambda} t} \bar{v}_1
  = c_1 e^{(1+i)t} \begin{bmatrix} -1-i \\ 2 \end{bmatrix}
  + c_2 e^{(1-i)t} \begin{bmatrix} -1+i \\ 2 \end{bmatrix}.
\]

As usual, the constants c1 and c2 are arbitrary complex scalars.

It is often the case in applications that the complex solution is meaningful in its
own right, but there are also occasions where one wants a real solution. For this,
we adopt the same strategy we used when studying second order linear differential
equations: take the real and imaginary parts of the complex solution. This will be
valid if A is real since if x(t) = u(t) + iv(t) is a solution with u(t) and v(t) real,

then we have
\[
\frac{dx}{dt} = Ax,
\qquad
\frac{du}{dt} + i\frac{dv}{dt} = Au + iAv,
\]
so comparing real and imaginary parts on both sides, we obtain
\[
\frac{du}{dt} = Au \quad \text{and} \quad \frac{dv}{dt} = Av.
\]
Note that we needed to know A is real in order to know that Au and Av on the
right are real.

Let’s apply this in Example 220. One of the basic complex solutions is
\begin{align*}
x(t) = e^{(1+i)t} \begin{bmatrix} -1-i \\ 2 \end{bmatrix}
&= e^t (\cos t + i \sin t) \begin{bmatrix} -1-i \\ 2 \end{bmatrix} \\
&= e^t \begin{bmatrix} (\cos t + i \sin t)(-1-i) \\ 2\cos t + 2i\sin t \end{bmatrix} \\
&= e^t \begin{bmatrix} -\cos t + \sin t - i(\sin t + \cos t) \\ 2\cos t + 2i\sin t \end{bmatrix} \\
&= e^t \begin{bmatrix} -\cos t + \sin t \\ 2\cos t \end{bmatrix}
 + i e^t \begin{bmatrix} -(\sin t + \cos t) \\ 2\sin t \end{bmatrix}.
\end{align*}
Thus, the real and imaginary parts are
\[
u(t) = e^t \begin{bmatrix} -\cos t + \sin t \\ 2\cos t \end{bmatrix},
\qquad
v(t) = e^t \begin{bmatrix} -(\sin t + \cos t) \\ 2\sin t \end{bmatrix}.
\]
These form a linearly independent set since putting t = t0 = 0 yields
\[
u(0) = \begin{bmatrix} -1 \\ 2 \end{bmatrix},
\qquad
v(0) = \begin{bmatrix} -1 \\ 0 \end{bmatrix},
\]
and these form a linearly independent pair in R2 . Hence, the general real solution
of the system is
\[
x = c_1 e^t \begin{bmatrix} -\cos t + \sin t \\ 2\cos t \end{bmatrix}
  + c_2 e^t \begin{bmatrix} -(\sin t + \cos t) \\ 2\sin t \end{bmatrix}
\]
where c1 and c2 are arbitrary real scalars.

Note that if we had used the eigenvalue λ = 1 − i and the corresponding basic
complex solution x(t) = u(t) − iv(t) instead, we would have obtained the same
thing except for the sign of one of the basic real solutions.
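
It is worth checking numerically that the real and imaginary parts really do solve
the system separately. The following sketch approximates the derivative by a
centered difference (again Python with NumPy; an illustration, not part of the
text).

import numpy as np

A = np.array([[2.0, 1.0], [-2.0, 0.0]])

def x_complex(t):
    # the basic complex solution e^{(1+i)t} v1 with v1 = (-1-i, 2)
    return np.exp((1 + 1j) * t) * np.array([-1.0 - 1.0j, 2.0])

h = 1e-6
for part in (np.real, np.imag):
    for t in (0.0, 0.7, 1.3):
        derivative = (part(x_complex(t + h)) - part(x_complex(t - h))) / (2 * h)
        assert np.allclose(derivative, A @ part(x_complex(t)), atol=1e-4)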

The analysis in the example illustrates what happens in general. If A is a real
n × n matrix, then the roots of its characteristic equation are either real or come in
conjugate complex pairs λ, λ̄. For real roots µ, we can always find basic eigenvectors
in Rn . For non-real complex roots λ, we need to look for basic eigenvectors in Cn ,
but we may obtain the basic eigenvectors for λ̄ by taking the conjugates of the
basic eigenvectors for λ. Hence, for each pair of conjugate complex roots, we need
only consider one root in the pair in order to generate an independent pair of real
solutions.

We already know that if the eigenvalues are distinct then the corresponding set of
eigenvectors will be linearly independent. Suppose that, for each pair of conjugate
complex roots, we choose one root of the pair and take the real and imaginary parts
of the corresponding basic eigenvectors. If we throw in the basic real eigenvectors
associated with the real roots, then the set obtained in this way is always a linearly
independent subset of Rn . The proof of this fact is not specially difficult, but we
shall skip it. (See the Exercises for special cases.)

Exercises for 11.6.

1. In each case, find the eigenvalues and eigenvectors of the given matrix. In case
the characteristic equation is cubic, use the method described in the previous
section to find a real (integer) root. The roots of the remaining quadratic
equation will be complex.
(a) \(\begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix}\).

(b) \(\begin{bmatrix} 1 & 1 \\ -2 & 3 \end{bmatrix}\).

(c) \(\begin{bmatrix} 0 & 0 & 4 \\ 1 & 0 & -1 \\ 0 & 1 & 4 \end{bmatrix}\).

(d) \(\begin{bmatrix} -1 & 1 & 7 \\ -1 & 2 & 3 \\ 0 & -1 & 2 \end{bmatrix}\).

(e) \(\begin{bmatrix} 2 & 2i \\ -3i & 1 \end{bmatrix}\).

2. For A equal to each of the matrices in the previous problem, find the general
complex solution of the system dx/dt = Ax.

3. For A equal to each of the matrices in parts (a) through (d) of the previous
problem, find a general real solution of the system dx/dt = Ax.

4. Find the solution to the initial value problem
\[
\frac{dx}{dt} = \begin{bmatrix} 0 & 0 & 8 \\ 1 & 0 & -4 \\ 0 & 1 & 2 \end{bmatrix} x
\quad \text{where} \quad x(0) = \begin{bmatrix} 1 \\ 1 \\ 2 \end{bmatrix}.
\]

Since the system is real, the final answer should be expressed in terms of real
functions.

5. Suppose w is a vector in Cn , and write w = u + iv where u and v are vectors
in Rn . (Then w̄ = u − iv.)

(a) Show that if {w, w̄} is a linearly independent pair in Cn , then {u, v} is a
linearly independent pair in Rn .

(b) Conversely, show that if {u, v} is a linearly independent pair in Rn , then
{w, w̄} is a linearly independent pair in Cn .

Note that for a pair of vectors in Rn , you need only consider real scalars as
multipliers in a dependence relation, but for a pair of vectors in Cn , you would
normally need to consider complex scalars as multipliers in a dependence
relation.

(c) Can you invent generalizations of (a) and (b) for sets with more than
two elements?

11.7 Repeated Roots and the Exponential of a Matrix

We have noted that the eigenvalue-eigenvector method for solving the n × n system
dx/dt = Ax succeeds if we can find a set of eigenvectors for A which forms a basis for
Rn or where necessary for Cn . Also, according to Theorem 11.21,
still have to figure out what to do if the characteristic equation has repeated roots.

First, we might be lucky, and there might be a basis of eigenvectors of A.

Example 221 Consider the system
\[
\frac{dx}{dt} = \begin{bmatrix} 1 & 1 & -1 \\ -1 & 3 & -1 \\ -1 & 1 & 1 \end{bmatrix} x.
\]

First solve the characteristic equation
\begin{align*}
\det \begin{bmatrix} 1-\lambda & 1 & -1 \\ -1 & 3-\lambda & -1 \\ -1 & 1 & 1-\lambda \end{bmatrix}
&= (1-\lambda)\bigl((3-\lambda)(1-\lambda) + 1\bigr) + (1-\lambda+1) - (-1+3-\lambda) \\
&= (1-\lambda)(3 - 4\lambda + \lambda^2 + 1) + 2 - \lambda - 2 + \lambda \\
&= (1-\lambda)(\lambda^2 - 4\lambda + 4) \\
&= (1-\lambda)(\lambda - 2)^2 = 0.
\end{align*}

Note that 2 is a repeated root. We find the eigenvectors for each of these eigenvalues.

For λ = 2 we need to solve (A − 2I)v = 0.
\[
\begin{bmatrix} -1 & 1 & -1 \\ -1 & 1 & -1 \\ -1 & 1 & -1 \end{bmatrix}
\to
\begin{bmatrix} 1 & -1 & 1 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}.
\]
The general solution of the system is v1 = v2 − v3 with v2 , v3 free. The general
solution vector for that system is
\[
v = \begin{bmatrix} v_2 - v_3 \\ v_2 \\ v_3 \end{bmatrix}
  = v_2 \begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix}
  + v_3 \begin{bmatrix} -1 \\ 0 \\ 1 \end{bmatrix}.
\]
The solution space is two dimensional. Thus, for the eigenvalue λ = 2 we obtain
two basic eigenvectors
\[
v_1 = \begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix}, \quad
v_2 = \begin{bmatrix} -1 \\ 0 \\ 1 \end{bmatrix},
\]
and any eigenvector for λ = 2 is a non-trivial linear combination of these.

For λ = 1, we need to solve (A − I)v = 0.
\[
\begin{bmatrix} 0 & 1 & -1 \\ -1 & 2 & -1 \\ -1 & 1 & 0 \end{bmatrix}
\to
\begin{bmatrix} 1 & -1 & 0 \\ 0 & 1 & -1 \\ 0 & 0 & 0 \end{bmatrix}
\to
\begin{bmatrix} 1 & 0 & -1 \\ 0 & 1 & -1 \\ 0 & 0 & 0 \end{bmatrix}.
\]
The general solution of the system is v1 = v3 , v2 = v3 with v3 free. The general
solution vector is
\[
v = \begin{bmatrix} v_3 \\ v_3 \\ v_3 \end{bmatrix} = v_3 \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix}.
\]
The solution space is one dimensional, and a basic eigenvector for λ = 1 is
\[
v_3 = \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix}.
\]

It is not hard to check that the set of these basic eigenvectors
\[
\left\{
v_1 = \begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix},\;
v_2 = \begin{bmatrix} -1 \\ 0 \\ 1 \end{bmatrix},\;
v_3 = \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix}
\right\}
\]
is linearly independent, so it is a basis for R3 .

We may now write out the general solution of the system of differential equations
\[
x = c_1 e^{2t} \begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix}
  + c_2 e^{2t} \begin{bmatrix} -1 \\ 0 \\ 1 \end{bmatrix}
  + c_3 e^{t} \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix}.
\]

Of course, we may not always be so lucky when we have repeated roots of the
characteristic equation. (See for example Example 217 in Section 11.5.) Hence, we need
some other method. It turns out that there is a generalization of the eigenvalue-
eigenvector method which always works, but it requires a digression.

The Exponential of a Matrix

Let A be a constant n × n matrix. We could try to solve
dx/dt = Ax by the following nonsensical calculations

\begin{align*}
\frac{dx}{dt} &= Ax \\
\frac{dx}{x} &= A\, dt \\
\ln x &= At + c \\
x &= e^{At} e^c = e^{At} C.
\end{align*}

Practically every line in the above calculation contains some undefined quantity.
For example, what in the world is dx/x? (x is an n × 1 column vector, so it isn’t an
invertible matrix.) Strangely enough something like this actually works, but one
must first make some proper definitions. We start with the definition of ‘e^{At}’.

Let B be any n × n matrix. We define
\[
e^B = I + B + \frac{1}{2} B^2 + \frac{1}{3!} B^3 + \cdots + \frac{1}{j!} B^j + \cdots
\]

A little explanation is necessary. Each term on the right is an n × n matrix. If


there were only a finite number of such terms, there would be no problem, and the
sum would also be an n × n matrix. In general, however, there are infinitely many
terms, and we have to worry about whether it makes sense to add them up.

Example 222 Let
\[
B = t \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix}.
\]
Then
\[
B^2 = t^2 \begin{bmatrix} -1 & 0 \\ 0 & -1 \end{bmatrix}, \quad
B^3 = t^3 \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix}, \quad
B^4 = t^4 \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}, \quad
B^5 = t^5 \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix}, \quad \dots
\]
Hence,
\begin{align*}
e^B &= \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}
 + t \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix}
 + \frac{t^2}{2} \begin{bmatrix} -1 & 0 \\ 0 & -1 \end{bmatrix}
 + \frac{t^3}{3!} \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix} + \cdots \\
&= \begin{bmatrix}
 1 - \frac{t^2}{2} + \frac{t^4}{4!} - \cdots & t - \frac{t^3}{3!} + \frac{t^5}{5!} - \cdots \\
 -t + \frac{t^3}{3!} - \frac{t^5}{5!} + \cdots & 1 - \frac{t^2}{2} + \frac{t^4}{4!} - \cdots
 \end{bmatrix} \\
&= \begin{bmatrix} \cos t & \sin t \\ -\sin t & \cos t \end{bmatrix}.
\end{align*}

As in the example, a series of n × n matrices yields a separate series for each of


the n2 possible entries. We shall say that such a series of matrices converges if
the series it yields for each entry converges. With this rule, it is possible to show
that the series defining eB converges for any n × n matrix B, but the proof is a
bit involved. Fortunately, as we shall see presently, we can usually avoid worrying
about convergence by a trick. In what follows we shall generally ignore such matters
and act as if the series were finite sums.
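
For a matrix like the one in Example 222, you can watch the partial sums converge.
The sketch below (Python with NumPy and SciPy, our choice of tools) sums the
first twenty-one terms of the series and compares the result both with the rotation
matrix found above and with the library routine scipy.linalg.expm.

import numpy as np
from scipy.linalg import expm

t = 0.9
B = t * np.array([[0.0, 1.0], [-1.0, 0.0]])

term = np.eye(2)                     # current term B^j / j!
series = np.eye(2)                   # partial sum I + B + B^2/2! + ...
for j in range(1, 21):
    term = term @ B / j
    series = series + term

rotation = np.array([[np.cos(t), np.sin(t)], [-np.sin(t), np.cos(t)]])
assert np.allclose(series, rotation)
assert np.allclose(expm(B), rotation)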

The exponential function for matrices obeys the usual rules you expect an expo-
nential function to have, but sometimes you have to be careful.

1. If 0 denotes the n × n zero matrix, then e^0 = I.

2. The law of exponents holds if the matrices commute, i.e., if B and C are n × n
matrices such that BC = CB, then e^{B+C} = e^B e^C.

3. If A is an n × n constant matrix, then \(\frac{d}{dt} e^{At} = A e^{At} = e^{At} A\). (It is worth
writing this in both orders because products of matrices don’t automatically
commute.)

Here are the proofs of these facts. (1) e^0 = I + 0 + ½·0^2 + · · · = I. (2) See the
Exercises. (3) Here we act as if the sum were finite (although the argument would
work in general if we knew enough about convergence of series of matrices.)
\begin{align*}
\frac{d}{dt} e^{At}
&= \frac{d}{dt}\left( I + tA + \frac{1}{2} t^2 A^2 + \frac{1}{3!} t^3 A^3 + \cdots + \frac{1}{j!} t^j A^j + \cdots \right) \\
&= 0 + A + \frac{1}{2}(2t)A^2 + \frac{1}{3!}(3t^2)A^3 + \cdots + \frac{1}{j!}(j t^{j-1})A^j + \cdots \\
&= A + tA^2 + \frac{1}{2} t^2 A^3 + \cdots + \frac{1}{(j-1)!} t^{j-1} A^j + \cdots \\
&= A\left( I + tA + \frac{1}{2} t^2 A^2 + \cdots + \frac{1}{(j-1)!} t^{j-1} A^{j-1} + \cdots \right) \\
&= A e^{At}.
\end{align*}

Note that in the next to last step A could just as well have been factored out on
the right, so it doesn’t matter which side you put it on. Rule (3) gives us a formal
way to solve the system dx/dt = Ax when A is a constant n × n matrix. Namely, if
v is any constant n × 1 column vector, then x = e^{At} v is a solution. For,
\[
\frac{dx}{dt} = \frac{d}{dt} e^{At} v = A e^{At} v = Ax.
\]
Suppose {v1 , v2 , . . . , vn } is any basis for Rn (or in the complex case for Cn ). That
gives us n solutions

x1 = eAt v1 , x2 = eAt v2 , . . . , xn = eAt vn .

Moreover, these solutions form a linearly independent set of solutions (hence a basis
for the vector space of all solutions) since when we evaluate at t = 0, we get

x1 (0) = e0 v1 = v1 , . . . , xn (0) = e0 vn = vn .

By assumption, these form a linearly independent set of vectors in Rn (or Cn in


the complex case).

The simplest choices for basis vectors are the standard basis vectors

v1 = e1 , v2 = e2 , . . . , vn = en ,

(which you should recall are just the columns of the identity matrix). In this case,
we have xi (t) = eAt ei , which is the ith column of the n × n matrix eAt . Thus, the
columns of eAt always form a basis for the vector space of all solutions.

Example 222, revisited For the system
\[
\frac{dx}{dt} = \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix} x
\]
we have
\[
e^{At} = \begin{bmatrix} \cos t & \sin t \\ -\sin t & \cos t \end{bmatrix},
\]
so
\[
\begin{bmatrix} \cos t \\ -\sin t \end{bmatrix}
\quad \text{and} \quad
\begin{bmatrix} \sin t \\ \cos t \end{bmatrix}
\]
form a basis for the solution space of the system.

There is one serious problem with the above analysis. Adding up the series for e^{At}
is usually not very easy. Hence, the facts mentioned in the previous paragraphs are
usually not very helpful if you want to write out an explicit solution. To get around
this problem, we rely on the observation that a proper choice of v can make the
series
\begin{align*}
e^{At} v &= \left( I + tA + \frac{1}{2} t^2 A^2 + \frac{1}{3!} t^3 A^3 + \cdots \right) v \\
&= v + t(Av) + \frac{1}{2} t^2 (A^2 v) + \frac{1}{3!} t^3 (A^3 v) + \cdots
\end{align*}
easier to calculate. The object then is to pick v1 , v2 , . . . , vn with this strategy in
mind.

For example, suppose v is an eigenvector for A with eigenvalue λ. Then
\begin{align*}
Av &= \lambda v \\
A^2 v &= A(Av) = A(\lambda v) = \lambda(Av) = \lambda^2 v \\
A^3 v &= \cdots = \lambda^3 v
\end{align*}
and so on. In fact, if v is an eigenvector with eigenvalue λ, it follows that A^j v = λ^j v
for any j = 0, 1, 2, . . . . Thus,
\begin{align*}
e^{At} v &= v + t(Av) + \frac{1}{2} t^2 (A^2 v) + \frac{1}{3!} t^3 (A^3 v) + \cdots \\
&= v + t(\lambda v) + \frac{1}{2} t^2 (\lambda^2 v) + \frac{1}{3!} t^3 (\lambda^3 v) + \cdots \\
&= \left( 1 + t\lambda + \frac{1}{2} t^2 \lambda^2 + \frac{1}{3!} t^3 \lambda^3 + \cdots \right) v \\
&= e^{\lambda t} v.
\end{align*}
Thus, if v is an eigenvector with eigenvalue λ, the series essentially reduces to the
scalar series for e^{λt}, and
\[
x = e^{At} v = e^{\lambda t} v
\]
is exactly the solution obtained by the eigenvalue-eigenvector method.

Even where we don’t have enough eigenvectors, we may exploit this strategy as
follows. Let λ be an eigenvalue, and write
\[
A = \lambda I + (A - \lambda I).
\]
Then, since A and A − λI commute, the law of exponents tells us that
\[
e^{At} = e^{\lambda I t} e^{(A - \lambda I)t}.
\]
However, as above,
\[
e^{\lambda I t} = I + \lambda t I + \frac{1}{2} (\lambda t)^2 I^2 + \frac{1}{3!} (\lambda t)^3 I^3 + \cdots = e^{\lambda t} I.
\]
Hence,
\[
e^{At} = e^{\lambda t} I e^{(A - \lambda I)t} = e^{\lambda t} e^{(A - \lambda I)t},
\]
which means that calculating e^{At} v can be reduced to calculating the scalar multi-
plier e^{λt} and the quantity e^{(A−λI)t} v. However,
\begin{align*}
e^{(A - \lambda I)t} v
&= \left( I + t(A - \lambda I) + \frac{1}{2} t^2 (A - \lambda I)^2 + \cdots + \frac{1}{j!} t^j (A - \lambda I)^j + \cdots \right) v \\
&= v + t(A - \lambda I)v + \frac{1}{2} t^2 (A - \lambda I)^2 v + \cdots + \frac{1}{j!} t^j (A - \lambda I)^j v + \cdots,
\tag{194}
\end{align*}
so it makes sense to try to choose v so that (A − λI)^j v vanishes for all j beyond a
certain point. Then the series (194) will reduce to a finite sum. In the next section,
we shall explore a systematic method to do this.
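
Example 217 already illustrates the point: there (A − 2I)^2 = 0, so the series for
e^{(A−2I)t} stops after two terms. A quick numerical sketch (Python with NumPy
and SciPy assumed, as before):

import numpy as np
from scipy.linalg import expm

A = np.array([[2.0, 3.0], [0.0, 2.0]])
lam, t = 2.0, 0.5
N = A - lam * np.eye(2)              # nilpotent part: N @ N is the zero matrix

finite_sum = np.exp(lam * t) * (np.eye(2) + t * N)
assert np.allclose(finite_sum, expm(A * t))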

Exercises for 11.7.

1. (a) Find a basis for R3 consisting of eigenvectors for
\[
A = \begin{bmatrix} 1 & 2 & -4 \\ 2 & -2 & -2 \\ -4 & -2 & 1 \end{bmatrix}.
\]
(b) Find a general solution of the system x′ = Ax for this A.

(c) Find a solution of the system in (b) satisfying x(0) = e2 .

2. (a) Find a basis for R3 consisting of eigenvectors for
\[
A = \begin{bmatrix} 2 & 1 & 1 \\ 1 & 2 & 1 \\ 1 & 1 & 2 \end{bmatrix}.
\]
(b) Find a general solution of the system x′ = Ax for this A.


3. (a) Let \(A = \begin{bmatrix} \lambda & 0 \\ 0 & \mu \end{bmatrix}\). Show that
\[
e^{At} = \begin{bmatrix} e^{\lambda t} & 0 \\ 0 & e^{\mu t} \end{bmatrix}.
\]
(b) Let
\[
A = \begin{bmatrix} \lambda_1 & 0 & \dots & 0 \\ 0 & \lambda_2 & \dots & 0 \\ \vdots & \vdots & & \vdots \\ 0 & 0 & \dots & \lambda_n \end{bmatrix}.
\]
Such a matrix is called a diagonal matrix. What can you say about e^{At}?

4. (a) Let \(A = \begin{bmatrix} \lambda & 0 \\ 1 & \lambda \end{bmatrix}\). Calculate e^{At}. Hint: use A = λI + (A − λI).

(b) Let \(A = \begin{bmatrix} \lambda & 0 & 0 \\ 1 & \lambda & 0 \\ 0 & 1 & \lambda \end{bmatrix}\). Calculate e^{At}.

(c) Let A be an n × n matrix of the form
\[
A = \begin{bmatrix} \lambda & 0 & \dots & 0 & 0 \\ 1 & \lambda & \dots & 0 & 0 \\ 0 & 1 & \dots & 0 & 0 \\ \vdots & \vdots & & \vdots & \vdots \\ 0 & 0 & \dots & 1 & \lambda \end{bmatrix}.
\]
What is the smallest integer k satisfying (A − λI)^k = 0? What can you say about
e^{At} = e^{λt} e^{(A−λI)t}?
5. Let A be an n × n matrix, and let P be a non-singular n × n matrix. Show
that
\[
P e^{At} P^{-1} = e^{P A P^{-1} t}.
\]
6. Let B and C be two n × n matrices such that BC = CB. Prove that
\[
e^{B+C} = e^B e^C.
\]
Hint: You may assume that the binomial theorem applies to commuting ma-
trices, i.e.,
\[
(B + C)^n = \sum_{i+j=n} \frac{n!}{i!\,j!} B^i C^j.
\]

7. Let
\[
B = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}, \qquad
C = \begin{bmatrix} 0 & 0 \\ -1 & 0 \end{bmatrix}.
\]
(a) Show that BC ≠ CB.

(b) Show that
\[
e^B = \begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix}, \qquad
e^C = \begin{bmatrix} 1 & 0 \\ -1 & 1 \end{bmatrix}.
\]
(c) Show that e^B e^C ≠ e^{B+C}. Hint: B + C = J, where e^{tJ} was calculated in
the text.

11.8 Generalized Eigenvectors

Let A be an n × n matrix, and let λ be an eigenvalue for A. Suppose moreover that
λ has multiplicity m as a root of the characteristic equation of A. Call any solution
of the system
\[
(A - \lambda I) v = 0
\]
a level 1 generalized eigenvector. These include all the usual eigenvectors plus the
zero vector. Similarly, consider the system

(A − λI)2 v = 0,

and call any solution of that system a level 2 generalized eigenvector. Continuing
in this way, call any solution of the system

(A − λI)j v = 0

a level j generalized eigenvector. We will also sometimes just use the term general-
ized eigenvector without explicitly stating the level.

If v is a level j generalized eigenvector, then

(A − λI)j+1 v = (A − λI)(A − λI)j v = 0

so it is also a level j + 1 generalized eigenvector, and similarly for all higher levels.
Thus, one may envision first finding the level 1 generalized eigenvectors (i.e., the
ordinary eigenvectors), then finding the level 2 vectors, which may constitute a
larger set, then finding the level 3 vectors, which may constitute a still larger
set, etc. That we need not continue this process indefinitely is guaranteed by the
following theorem.

Theorem 11.22 Let A be an n × n matrix, and suppose λ is an eigenvalue for A


with multiplicity m.

(a) The solution space of (A − λI)j v = 0 for j > m is identical with the solution
space of (A − λI)m v = 0.

(b) The solution space of (A − λI)m v = 0 has dimension m.

Part (a) tells us that we need go no further than level m in order to obtain all
generalized eigenvectors of any level whatsoever. Part (b) tells us that in some sense

there are ‘sufficiently many’ generalized eigenvectors. This will be important when
we need to find a basis consisting of such vectors.

We shall not attempt to prove this theorem here. The proof is quite deep and closely
related to the theory of the so-called Jordan Canonical Form. You will probably
encounter this theory if you take a more advanced course in linear algebra.

Example 223 Let
\[
A = \begin{bmatrix} 1 & 2 & 0 \\ 2 & 1 & 0 \\ 0 & 1 & 3 \end{bmatrix}.
\]
The eigenvalues of A are obtained by solving
\begin{align*}
\det \begin{bmatrix} 1-\lambda & 2 & 0 \\ 2 & 1-\lambda & 0 \\ 0 & 1 & 3-\lambda \end{bmatrix}
&= (1-\lambda)\bigl((1-\lambda)(3-\lambda) - 0\bigr) - 2\bigl(2(3-\lambda) - 0\bigr) + 0 \\
&= (3-\lambda)\bigl((1-\lambda)^2 - 4\bigr) \\
&= (3-\lambda)(\lambda^2 - 2\lambda - 3) = -(\lambda - 3)^2 (\lambda + 1).
\end{align*}

Hence, λ = 3 is a root of multiplicity 2 and λ = −1 is a root of multiplicity 1.

Let’s find the generalized eigenvectors for each eigenvalue.

For λ = 3, we need only go to level 2 and solve (A − 3I)^2 v = 0. We have
\[
(A - 3I)^2 = \begin{bmatrix} -2 & 2 & 0 \\ 2 & -2 & 0 \\ 0 & 1 & 0 \end{bmatrix}^2
= \begin{bmatrix} 8 & -8 & 0 \\ -8 & 8 & 0 \\ 2 & -2 & 0 \end{bmatrix},
\]
and Gauss-Jordan reduction yields
\[
\begin{bmatrix} 8 & -8 & 0 \\ -8 & 8 & 0 \\ 2 & -2 & 0 \end{bmatrix}
\to
\begin{bmatrix} 1 & -1 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}.
\]
The general solution is v1 = v2 with v2 , v3 free. The general solution vector is
\[
v = \begin{bmatrix} v_2 \\ v_2 \\ v_3 \end{bmatrix}
  = v_2 \begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix}
  + v_3 \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}.
\]
Hence, a basis for the subspace of generalized eigenvectors for the eigenvalue λ = 3
is
\[
\left\{
v_1 = \begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix},\;
v_2 = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}
\right\}.
\]

Theorem 11.22 tells us that we don’t need to go further than level 2 for the eigenvalue
λ = 3, but it is reassuring to check that explicitly in this case. Look for level 3
vectors by solving (A − 3I)^3 v = 0 (j = m + 1 = 3). We have
\[
(A - 3I)^3 = \begin{bmatrix} -32 & 32 & 0 \\ 32 & -32 & 0 \\ -8 & 8 & 0 \end{bmatrix},
\]
and it is clear that solving this system gives us exactly the same solutions as solving
(A − 3I)^2 v = 0.

For λ = −1, the multiplicity is 1, and we need to solve (A − (−1)I)^1 v = 0. Hence,
finding the generalized eigenvectors for λ = −1 just amounts to finding the usual
eigenvectors.
\[
\begin{bmatrix} 2 & 2 & 0 \\ 2 & 2 & 0 \\ 0 & 1 & 4 \end{bmatrix}
\to
\begin{bmatrix} 1 & 0 & -4 \\ 0 & 1 & 4 \\ 0 & 0 & 0 \end{bmatrix}.
\]
The general solution is v1 = 4v3 , v2 = −4v3 , with v3 free. The general solution
vector is
\[
v = \begin{bmatrix} 4v_3 \\ -4v_3 \\ v_3 \end{bmatrix} = v_3 \begin{bmatrix} 4 \\ -4 \\ 1 \end{bmatrix}.
\]
Thus,
\[
v_3 = \begin{bmatrix} 4 \\ -4 \\ 1 \end{bmatrix}
\]
forms a basis for the subspace of (generalized) eigenvectors for λ = −1.

Put these basic generalized eigenvectors together in a set
\[
\left\{
v_1 = \begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix},\;
v_2 = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix},\;
v_3 = \begin{bmatrix} 4 \\ -4 \\ 1 \end{bmatrix}
\right\}.
\]

It is not hard to check by the usual means that we get a basis for R3 .
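
The levels of generalized eigenvectors can also be computed numerically as null
spaces of powers of A − λI. Here is a sketch for Example 223 (Python with NumPy
and SciPy assumed; scipy.linalg.null_space returns an orthonormal basis of the
null space).

import numpy as np
from scipy.linalg import null_space

A = np.array([[1.0, 2.0, 0.0],
              [2.0, 1.0, 0.0],
              [0.0, 1.0, 3.0]])
I = np.eye(3)

# lambda = 3 has multiplicity 2, so solve (A - 3I)^2 v = 0.
level2 = null_space(np.linalg.matrix_power(A - 3 * I, 2))
print(level2.shape[1])               # 2, as Theorem 11.22(b) predicts

# lambda = -1 has multiplicity 1, so ordinary eigenvectors suffice.
level1 = null_space(A + I)
print(level1.shape[1])               # 1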

What we observed in this example always happens. If we find a basis for the
subspace of generalized eigenvectors for each eigenvalue and put them together in
a set, the result is always a linearly independent set. (See the first appendix to this
section if you are interested in a proof.) If we are working in Cn and using complex
scalars, then the Fundamental Theorem of Algebra tells us that the multiplicities
of the roots of the characteristic equation add up to n, the degree of the equation.
Thus, the linearly independent set of basic generalized eigenvectors has the right
number of elements for a basis, so it is a basis. If we are working in Rn using real
scalars, we will also get a basis in this way provided all the (potentially complex)
roots of the characteristic equation are real. However, if there are any non-real
complex roots, we will miss the corresponding generalized eigenvectors by sticking
strictly to Rn .

Diagonalizable Matrices In the simplest case, that in which all generalized eigen-
vectors are level one, the matrix A is said to be diagonalizable. In this case, there is
a basis for Cn consisting of eigenvectors for A. (If the eigenvalues and eigenvectors
are all real, we could replace Cn by Rn in this statement.) This is certainly the
easiest case to deal with, so it is not surprising that we give it a special name.
However, why we use the term ‘diagonalizable’ requires an explanation.

Suppose {v1 , v2 , . . . , vn } is a basis for Cn consisting of eigenvectors for A. Then
we have
\[
A v_i = \lambda_i v_i = v_i \lambda_i, \qquad i = 1, 2, \dots, n,
\]
where λi is the eigenvalue associated with vi . (These eigenvalues need not be
distinct.) We may write this as a single matrix equation
\begin{align*}
A \begin{bmatrix} v_1 & v_2 & \dots & v_n \end{bmatrix}
&= \begin{bmatrix} v_1 \lambda_1 & v_2 \lambda_2 & \dots & v_n \lambda_n \end{bmatrix} \\
&= \begin{bmatrix} v_1 & v_2 & \dots & v_n \end{bmatrix}
\begin{bmatrix}
\lambda_1 & 0 & \cdots & 0 \\
0 & \lambda_2 & \cdots & 0 \\
\vdots & \vdots & & \vdots \\
0 & 0 & \cdots & \lambda_n
\end{bmatrix}.
\end{align*}

If we put
\[
P = \begin{bmatrix} v_1 & v_2 & \dots & v_n \end{bmatrix},
\]
then this becomes
\[
AP = PD,
\]
where D is a diagonal matrix with the eigenvalues of A appearing on the diagonal.
P is an invertible matrix since its columns form a basis for Cn , so the last equation
can be written in turn
\[
P^{-1} A P = D \quad \text{where $D$ is a diagonal matrix.}
\]

Example In Example 207 of Section 11.2, we considered the matrix
\[
A = \begin{bmatrix} 1 & 2 \\ 2 & 1 \end{bmatrix}.
\]
The eigenvalues are λ = 3 and λ = −1. Corresponding eigenvectors are
\[
v_1 = \begin{bmatrix} 1 \\ 1 \end{bmatrix}, \qquad
v_2 = \begin{bmatrix} -1 \\ 1 \end{bmatrix},
\]
and these form a basis for R2 so A is diagonalizable. Take
\[
P = \begin{bmatrix} 1 & -1 \\ 1 & 1 \end{bmatrix}.
\]

The theory predicts that P^{-1} A P should be diagonal. Indeed,
\begin{align*}
\begin{bmatrix} 1 & -1 \\ 1 & 1 \end{bmatrix}^{-1}
\begin{bmatrix} 1 & 2 \\ 2 & 1 \end{bmatrix}
\begin{bmatrix} 1 & -1 \\ 1 & 1 \end{bmatrix}
&= \frac{1}{2} \begin{bmatrix} 1 & 1 \\ -1 & 1 \end{bmatrix}
\begin{bmatrix} 1 & 2 \\ 2 & 1 \end{bmatrix}
\begin{bmatrix} 1 & -1 \\ 1 & 1 \end{bmatrix} \\
&= \frac{1}{2} \begin{bmatrix} 3 & 3 \\ 1 & -1 \end{bmatrix}
\begin{bmatrix} 1 & -1 \\ 1 & 1 \end{bmatrix} \\
&= \frac{1}{2} \begin{bmatrix} 6 & 0 \\ 0 & -2 \end{bmatrix}
= \begin{bmatrix} 3 & 0 \\ 0 & -1 \end{bmatrix}.
\end{align*}

Note that the eigenvalues appear on the diagonal as predicted.
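
This check is a one-liner on a computer. A sketch (Python with NumPy assumed):

import numpy as np

A = np.array([[1.0, 2.0], [2.0, 1.0]])
P = np.array([[1.0, -1.0], [1.0, 1.0]])   # columns are the eigenvectors

D = np.linalg.inv(P) @ A @ P
assert np.allclose(D, np.diag([3.0, -1.0]))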

Application to Homogeneous Linear Systems of Differential Equations

Let A be an n × n matrix and let v be a generalized eigenvector for the eigenvalue
λ. In this case, e^{At} v is specially easy to calculate. Let m be the multiplicity of λ.
Then we know that (A − λI)^j v = 0 for j ≥ m (and perhaps also for some lesser
powers). Then, as in the previous section,
\[
e^{At} = e^{\lambda t} e^{(A - \lambda I)t},
\]
but
\[
e^{(A - \lambda I)t} v = v + t(A - \lambda I)v + \frac{1}{2} t^2 (A - \lambda I)^2 v + \cdots + \frac{1}{(m-1)!} t^{m-1} (A - \lambda I)^{m-1} v
\]
since all other terms in the series vanish. Hence,
\[
e^{At} v = e^{\lambda t} \left( v + t(A - \lambda I)v + \frac{1}{2} t^2 (A - \lambda I)^2 v + \cdots + \frac{1}{(m-1)!} t^{m-1} (A - \lambda I)^{m-1} v \right).
\tag{195}
\]
This gives us a method for solving a homogeneous system dx/dt = Ax. First find
a basis {v1 , v2 , . . . , vn } consisting of generalized eigenvectors. (This may require
working in Cn rather than Rn if some of the eigenvalues are non-real complex
numbers.) Then, the solutions xi = e^{At} vi may be calculated by formula (195), and
together form a basis for the solution space.
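
Formula (195) is simple enough to implement directly. The sketch below (our own
illustrative function, written in Python with NumPy; the name is ours) computes
e^{At} v for a generalized eigenvector v whose eigenvalue has multiplicity m.

import numpy as np

def solution_from_generalized(A, lam, v, m, t):
    """Return e^{At} v = e^{lam t} sum_{j<m} (t^j / j!) (A - lam I)^j v."""
    N = A - lam * np.eye(len(v))
    total = np.zeros(len(v))
    term = v.astype(float)           # current vector (A - lam I)^j v
    factorial = 1.0
    for j in range(m):
        total = total + (t ** j / factorial) * term
        term = N @ term
        factorial *= (j + 1)
    return np.exp(lam * t) * total

For instance, applied to Example 223a below with v1 = (1, 1, 0), lam = 3 and
m = 2, it reproduces the solution x1(t) = e^{3t}(1, 1, t).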

Example 223a Consider the system
\[
\frac{dx}{dt} = \begin{bmatrix} 1 & 2 & 0 \\ 2 & 1 & 0 \\ 0 & 1 & 3 \end{bmatrix} x.
\]
Then, as we determined above,
\[
\left\{
v_1 = \begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix},\;
v_2 = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix},\;
v_3 = \begin{bmatrix} 4 \\ -4 \\ 1 \end{bmatrix}
\right\}
\]

is a basis of R3 consisting of generalized eigenvectors of the coefficient matrix. The
first two correspond to the eigenvalue λ = 3 with multiplicity 2, and v3 corresponds
to the eigenvalue λ = −1 with multiplicity 1. For λ = 3, m = 2, so we only need
terms up to the first power of (A − 3I) in computing e^{(A−3I)t} v for v = v1 or v = v2 .
\[
(A - 3I)v_1 = \begin{bmatrix} -2 & 2 & 0 \\ 2 & -2 & 0 \\ 0 & 1 & 0 \end{bmatrix}
\begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix}
= \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}.
\]
Thus
\[
x_1 = e^{At} v_1 = e^{3t} \left( \begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix} + t \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix} \right)
= e^{3t} \begin{bmatrix} 1 \\ 1 \\ t \end{bmatrix}.
\]
Similarly,
\[
(A - 3I)v_2 = \begin{bmatrix} -2 & 2 & 0 \\ 2 & -2 & 0 \\ 0 & 1 & 0 \end{bmatrix}
\begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}
= \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix},
\]
so v2 turns out to be an eigenvector. Thus,
\[
x_2 = e^{At} v_2 = e^{3t} v_2 = e^{3t} \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}.
\]
For the eigenvalue λ = −1, v3 is also an eigenvector, so the method also just gives
the expected solution
\[
x_3 = e^{At} v_3 = e^{-t} v_3 = e^{-t} \begin{bmatrix} 4 \\ -4 \\ 1 \end{bmatrix}.
\]
It follows that the general solution of the system is
\[
x = c_1 e^{3t} \begin{bmatrix} 1 \\ 1 \\ t \end{bmatrix}
  + c_2 e^{3t} \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}
  + c_3 e^{-t} \begin{bmatrix} 4 \\ -4 \\ 1 \end{bmatrix}.
\]

The fact that v2 and v3 are eigenvectors simplifies the calculation of the solutions
x2 and x3 . This sort of simplification often happens, so you should be on the
lookout for it. v3 is an eigenvector because the associated eigenvalue has multiplicity
one, but it is a bit mysterious why v2 should be an eigenvector. However, this
may be clarified somewhat if you note that v2 turns up (in the process of finding
x1 = eAt v1 ) as v2 = (A − 3I)v1 . Thus, it is an eigenvector because (A − 3I)v2 =
(A − 3I)(A − 3I)v1 = (A − 3I)^2 v1 = 0.

Example 224 Consider the linear system
\[
\frac{dx}{dt} = \begin{bmatrix} -3 & -1 \\ 1 & -1 \end{bmatrix} x.
\]
The characteristic equation is
\[
(3 + \lambda)(1 + \lambda) + 1 = \lambda^2 + 4\lambda + 4 = (\lambda + 2)^2 = 0.
\]

Since λ = −2 is the only eigenvalue and its multiplicity is 2, it follows that every
element of R2 is a generalized eigenvector for that eigenvalue. Hence, {e1 , e2 } is a
basis consisting of generalized eigenvectors. The first vector e1 leads to the basic
solution

\begin{align*}
x_1 = e^{At} e_1 = e^{-2t} e^{t(A+2I)} e_1
&= e^{-2t} (I + t(A + 2I)) e_1 \\
&= e^{-2t} (e_1 + t(A + 2I)e_1)
 = e^{-2t} \left( \begin{bmatrix} 1 \\ 0 \end{bmatrix}
 + t \begin{bmatrix} -1 & -1 \\ 1 & 1 \end{bmatrix} \begin{bmatrix} 1 \\ 0 \end{bmatrix} \right) \\
&= e^{-2t} \left( \begin{bmatrix} 1 \\ 0 \end{bmatrix} + t \begin{bmatrix} -1 \\ 1 \end{bmatrix} \right)
 = e^{-2t} \begin{bmatrix} 1-t \\ t \end{bmatrix}.
\end{align*}

A similar calculation gives a second independent solution
\[
e^{At} e_2 = e^{-2t} \left( \begin{bmatrix} 0 \\ 1 \end{bmatrix} + t \begin{bmatrix} -1 \\ 1 \end{bmatrix} \right)
= e^{-2t} \begin{bmatrix} -t \\ 1+t \end{bmatrix}.
\]
However, we may simplify things somewhat as follows. Let
\[
v_2 = (A + 2I)e_1 = \begin{bmatrix} -1 \\ 1 \end{bmatrix}.
\]
Then, it is not hard to see that
\[
\left\{ e_1 = \begin{bmatrix} 1 \\ 0 \end{bmatrix},\; v_2 = \begin{bmatrix} -1 \\ 1 \end{bmatrix} \right\}
\]

is an independent pair of vectors. Hence, it must also necessarily be a basis for R2 .


Using the basis {e1 , v2 } rather than {e1 , e2 } seems superficially to make things
harder, but we gain something by using it since

(A + 2I)v2 = (A + 2I)(A + 2I)e1 = (A + 2I)2 e1 = 0.

That is, v2 is an eigenvector for λ = −2. Hence, we may use the simpler second
solution
\[
x_2 = e^{-2t} v_2 = e^{-2t} \begin{bmatrix} -1 \\ 1 \end{bmatrix}.
\]

Thus,
\[
x_1 = e^{-2t} (e_1 + t v_2) = \begin{bmatrix} e_1 & v_2 \end{bmatrix} \begin{bmatrix} e^{-2t} \\ t e^{-2t} \end{bmatrix},
\qquad
x_2 = e^{-2t} v_2 = \begin{bmatrix} e_1 & v_2 \end{bmatrix} \begin{bmatrix} 0 \\ e^{-2t} \end{bmatrix}
\]
form a basis for the solution space of the linear system of differential equations.

Example 225 Consider the system
\[
\frac{dx}{dt} = \begin{bmatrix} 2 & 0 & 0 \\ 1 & 2 & 0 \\ 0 & 1 & 2 \end{bmatrix} x.
\]
It is apparent that the characteristic equation is −(λ − 2)^3 = 0, so λ = 2 is the only
root and has multiplicity 3. Also,
\[
(A - 2I)^2 = \begin{bmatrix} 0 & 0 & 0 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix}^2
= \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 1 & 0 & 0 \end{bmatrix},
\]
and necessarily (A − 2I)^3 = 0. Hence, every vector is a generalized eigenvector for
A and
\[
\{ e_1, e_2, e_3 \}
\]
is a perfectly good basis consisting of generalized eigenvectors.

The solutions are determined as before.
\begin{align*}
x_1 &= e^{2t} \left( e_1 + t \begin{bmatrix} 0 & 0 & 0 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix} e_1
 + \frac{1}{2} t^2 \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 1 & 0 & 0 \end{bmatrix} e_1 \right) \\
&= e^{2t} \left( \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}
 + t \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix}
 + \frac{1}{2} t^2 \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix} \right)
= e^{2t} \begin{bmatrix} 1 \\ t \\ t^2/2 \end{bmatrix}.
\end{align*}
Similarly,
\begin{align*}
x_2 &= e^{2t} \left( e_2 + t \begin{bmatrix} 0 & 0 & 0 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix} e_2
 + \frac{1}{2} t^2 \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 1 & 0 & 0 \end{bmatrix} e_2 \right) \\
&= e^{2t} \left( \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix}
 + t \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}
 + \frac{1}{2} t^2 \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix} \right)
= e^{2t} \begin{bmatrix} 0 \\ 1 \\ t \end{bmatrix}
\end{align*}
and
\begin{align*}
x_3 &= e^{2t} \left( e_3 + t \begin{bmatrix} 0 & 0 & 0 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix} e_3
 + \frac{1}{2} t^2 \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 1 & 0 & 0 \end{bmatrix} e_3 \right) \\
&= e^{2t} \left( \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}
 + t \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}
 + \frac{1}{2} t^2 \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix} \right)
= e^{2t} \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}.
\end{align*}

Note that x2 only needs terms of degree 1 in t and x3 doesn’t even need those since
e3 is actually an eigenvector. This is not surprising since
\[
e_2 = (A - 2I)e_1 \quad \text{so} \quad (A - 2I)^2 e_2 = (A - 2I)^3 e_1 = 0
\]
and
\[
e_3 = (A - 2I)e_2 \quad \text{so} \quad (A - 2I)e_3 = (A - 2I)^2 e_2 = 0.
\]
The general solution is
\[
x = c_1 e^{2t} \begin{bmatrix} 1 \\ t \\ t^2/2 \end{bmatrix}
  + c_2 e^{2t} \begin{bmatrix} 0 \\ 1 \\ t \end{bmatrix}
  + c_3 e^{2t} \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}.
\]

It should be noted that if the matrix A is diagonalizable, then for each eigenvalue λi ,
we need only deal with genuine eigenvectors vi , and the series expansion e^{(A−λi I)t} vi
has only one term and reduces to Ivi = vi . Thus, if A is diagonalizable, the
generalized eigenvector method just reduces to the ordinary eigenvector method
discussed earlier.

Appendix 1. Proof of Linear Independence of the Set of Basic Generalized Eigenvectors

You may want to skip this proof.

Let λ1 , λ2 , . . . , λk be distinct eigenvalues of the n × n matrix A. Suppose that,


for each eigenvalue λi , we have chosen a basis for the subspace of solutions of the
system (A − λi I)mi v = 0, where mi is the multiplicity of λi . Put these all together
in a set S. We shall prove that S is a linearly independent set.

If not, there is a dependence relation. Rearrange this dependence relation so that on
the left we have a non-trivial linear combination of the basic generalized eigenvectors
belonging to one of the eigenvalues, say it is λ1 , and on the other side we have a linear
combination of basic generalized eigenvectors for the other eigenvalues. Suppose this
has the form
\[
v_1 = \sum_{i=2}^{k} u_i \tag{196}
\]

where v1 ≠ 0 is a generalized eigenvector for λ1 and each ui is a generalized eigen-
vector for λi for i = 2, . . . , k.

Since v1 is a generalized eigenvector for λ1 , we have (A − λ1 I)^r v1 = 0 for some
r > 0. Assume r is chosen to be the least positive power for which that is true.
Then v1′ = (A − λ1 I)^{r−1} v1 ≠ 0 and (A − λ1 I)v1′ = (A − λ1 I)^r v1 = 0, i.e., v1′ is an
eigenvector with eigenvalue λ1 . (Note that r = 1 is possible, so this requires that
v1 ≠ 0, which is true by assumption.) Multiply both sides of equation (196) by
(A − λ1 I)^{r−1} . On the left we get v1′ . On the right, each term ui′ = (A − λ1 I)^{r−1} ui
is still a generalized eigenvector for λi . For,
\[
(A - \lambda_i I)^{m_i} (A - \lambda_1 I)^{r-1} u_i = (A - \lambda_1 I)^{r-1} (A - \lambda_i I)^{m_i} u_i = 0.
\]
(This used the rather obvious fact that polynomial expressions in the matrix A
commute with one another.) The upshot of this argument is that we may assume
that v1 in (196) is an actual eigenvector for λ1 . (Just replace v1 by v1′ and each ui
by the corresponding ui′ .)

Now multiply equation (196) by the product
\[
(A - \lambda_2 I)^{m_2} (A - \lambda_3 I)^{m_3} \cdots (A - \lambda_k I)^{m_k}.
\]
Note as above that the factors in the product commute with one another, so the
order in which the terms are written is not important. Each ui on the right is a
generalized eigenvector for λi , so (A − λi I)^{m_i} ui = 0. That means that the effect on
the right of multiplying by the product is 0. Consider the effect on the left. v1 is
an eigenvector for λ1 , so
\begin{align*}
(A - \lambda_i I)v_1 &= Av_1 - \lambda_i v_1 = \lambda_1 v_1 - \lambda_i v_1 = (\lambda_1 - \lambda_i)v_1 \\
(A - \lambda_i I)^2 v_1 &= (A - \lambda_i I)(A - \lambda_i I)v_1 = (A - \lambda_i I)(\lambda_1 - \lambda_i)v_1 \\
&= (\lambda_1 - \lambda_i)(A - \lambda_i I)v_1 = (\lambda_1 - \lambda_i)^2 v_1 \\
&\ \,\vdots \\
(A - \lambda_i I)^{m_i} v_1 &= (\lambda_1 - \lambda_i)^{m_i} v_1.
\end{align*}
Thus the effect of the product on the left is
\[
(\lambda_1 - \lambda_2)^{m_2} \cdots (\lambda_1 - \lambda_k)^{m_k} v_1,
\]
which is non-zero since the scalar multiplier is non-zero. This contradicts the fact
that the effect on the right is zero, so we have a contradiction from the assumption
that there is a dependence relation among the basic generalized eigenvectors.

Appendix 2. Cyclic Vectors and the Jordan Form

You may want to come back and read this if you take a more advanced course in
linear algebra.

It is a bit difficult to illustrate everything that can happen when one computes
solutions eAt v = eλt e(A−λI)t v as v ranges over a basis of generalized eigenvec-
tors, particularly since the degree is usually fairly small. However, there is one

phenomenon we encountered in some of the examples. It might happen that the
system (A − λI)^2 v = 0 has some basic solutions of the form

v1 , v2 = (A − λI)v1 ,

so v2 is an eigenvector. Similarly, it might happen that (A − λI)^3 v = 0 has some
basic solutions of the form

v1 , v2 = (A − λI)v1 , v3 = (A − λI)v2 = (A − λI)^2 v1 .

More generally, we might be able to find a generalized eigenvector v which satisfies
(A − λI)^r v = 0, where the set formed from

v, (A − λI)v, (A − λI)^2 v, . . . , (A − λI)^{r−1} v (197)

is linearly independent. In this case, v is called a cyclic vector for A of order r.
Note that a cyclic vector of order 1 is an eigenvector.

The theory of the Jordan Canonical Form asserts that it is always possible to find
a basis of generalized eigenvectors formed from cyclic vectors as in (197). The
advantage of using a basis derived from cyclic vectors is that the number of terms
needed in the expansion of e(A−λI)t is kept to a minimum.

It sometimes happens in solving (A − λI)m v = 0 (where m is the multiplicity of


the eigenvalue) that the solution method gives a ‘cyclic’ basis, but as we saw in
examples it is not always the case. The best case is that in which one of the basic
generalized eigenvectors for λ is cyclic of order m, i.e., it does not satisfy a lower
order system (A − λI)j v = 0 with j < m. However, it is quite possible that all
cyclic vectors for a given eigenvalue have order smaller than m. In that case it is
necessary to use two or more cyclic vectors to generate a basis. For example, for an
eigenvalue λ of multiplicity m = 3, we could have a basis

{v1 , v2 = (A − λI)v1 , v3 }

where v1 is a cyclic vector of order 2 and v3 is a cyclic vector of order 1, i.e., it is


an eigenvector. An even more extreme case would be a basis of eigenvectors, i.e.,
each basis vector would be a cyclic vector of order 1. In general, it may be quite
difficult to find a basis derived from cyclic vectors.

Exercises for 11.8.

1. In each case, find a basis for Rn consisting of generalized eigenvectors for the
given matrix.

(a) \(\begin{bmatrix} 2 & 1 \\ -1 & 4 \end{bmatrix}\)
(b) \(\begin{bmatrix} 2 & -1 & 1 \\ 3 & 1 & -2 \\ 3 & -1 & 0 \end{bmatrix}\)
(c) \(\begin{bmatrix} 1 & -1 & 1 \\ 2 & -2 & 1 \\ 1 & -1 & 0 \end{bmatrix}\)
(d) \(\begin{bmatrix} 4 & -1 & 1 \\ 1 & 3 & 0 \\ 0 & 1 & 2 \end{bmatrix}\)
(e) \(\begin{bmatrix} -2 & -1 & 0 \\ 1 & 0 & 0 \\ -2 & -2 & -1 \end{bmatrix}\)

2. For A equal to each of the matrices in the previous problem, find a general
solution of the system dx/dt = Ax.

3. (a) Find a basis for R3 consisting of eigenvectors for
\[
A = \begin{bmatrix} 1 & 2 & -4 \\ 2 & -2 & -2 \\ -4 & -2 & 1 \end{bmatrix}.
\]
(b) Let P be the matrix with columns the basis vectors in part (a). Calculate
P^{-1} A P and check that it is diagonal with the diagonal entries the eigenvalues
you found.
4. Which of the following matrices are diagonalizable? Try to discover the answer
without doing any significant amount of computation.

(a) \(\begin{bmatrix} 1 & 1 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 3 \end{bmatrix}\)
(b) \(\begin{bmatrix} 1 & 1 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 3 \end{bmatrix}\)
(c) \(\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 1 \\ 0 & 0 & 3 \end{bmatrix}\)

5. Solve the initial value problem
\[
\frac{dx}{dt} = \begin{bmatrix} 3 & -1 & 1 & -1 \\ 1 & 1 & 2 & 0 \\ 0 & 0 & 2 & 0 \\ 0 & 0 & 0 & 2 \end{bmatrix} x
\quad \text{where} \quad
x(0) = \begin{bmatrix} 1 \\ 0 \\ 3 \\ 2 \end{bmatrix}.
\]
Hint. The only eigenvalue is λ = 2.


6. Theorem 11.22 asserts that the dimension of the solution spaces for (A −
λI)^j v = 0, where λ is an eigenvalue for A of multiplicity m, increases to
the maximum value m for j = m and then stays constant thereafter. With-
out using the Theorem, show that the dimensions of these subspaces must
increase to some value m′ and stabilize thereafter. (In fact, m′ ≤ m, but you
don’t have to show that for this problem.)
Chapter 12

More about Linear Systems

12.1 The Fundamental Solution Matrix

Let A be an n × n matrix and consider the problem of solving

\[
\frac{dX}{dt} = AX \tag{198}
\]
where X = X(t) is an n × n matrix valued function of the real variable t. If
X = [x1 (t) x2 (t) . . . xn (t)], then (198) amounts to the simultaneous consideration
of n equations, one for each column of X,
\[
\frac{dx_1}{dt} = Ax_1, \quad \frac{dx_2}{dt} = Ax_2, \quad \dots, \quad \frac{dx_n}{dt} = Ax_n.
\]
Much of our previous discussion of systems still applies. In particular, if the entries
of A are continuous functions on an interval a < t < b, then there is a unique
solution of (198) defined on that interval which assumes a specified initial value
X(t0 ) at some point t0 in the interval.

This formalism gives us a way to discuss a basis {x1 (t), x2 (t), . . . , xn (t)} for the
vector space of solutions of the homogeneous linear system

\[
\frac{dx}{dt} = Ax \tag{199}
\]
in one compact notational package. Clearly, if we want the columns of X to form
such a basis, we need to assume that they constitute a linearly independent set.
An X = X(t) with these properties is called a fundamental solution matrix for the
system (199). Finding a fundamental solution matrix is equivalent to finding a basis
for the solution space of (199).


A fundamental solution matrix may also be used to express the general solution
\[
x = x_1(t)c_1 + x_2(t)c_2 + \cdots + x_n(t)c_n
  = \begin{bmatrix} x_1(t) & x_2(t) & \dots & x_n(t) \end{bmatrix}
    \begin{bmatrix} c_1 \\ c_2 \\ \vdots \\ c_n \end{bmatrix}
\]
or
\[
x = X(t)c \quad \text{where} \quad c = \begin{bmatrix} c_1 \\ c_2 \\ \vdots \\ c_n \end{bmatrix}.
\]

Example 226 As in Example 223a of Section 11.8, consider the system
\[
\frac{dx}{dt} = \begin{bmatrix} 1 & 2 & 0 \\ 2 & 1 & 0 \\ 0 & 1 & 3 \end{bmatrix} x.
\]
Form the fundamental solution matrix by putting together the basic solutions in a
3 × 3 matrix
\[
X(t) = \begin{bmatrix}
e^{3t}\!\begin{bmatrix} 1 \\ 1 \\ t \end{bmatrix} &
e^{3t}\!\begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix} &
e^{-t}\!\begin{bmatrix} 4 \\ -4 \\ 1 \end{bmatrix}
\end{bmatrix}
= \begin{bmatrix} e^{3t} & 0 & 4e^{-t} \\ e^{3t} & 0 & -4e^{-t} \\ te^{3t} & e^{3t} & e^{-t} \end{bmatrix}.
\]

Suppose we want a solution x(t) satisfying
\[
x(0) = \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}.
\]
This amounts to solving X(0)c = x(0), or
\[
\begin{bmatrix} 1 & 0 & 4 \\ 1 & 0 & -4 \\ 0 & 1 & 1 \end{bmatrix} c = \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix},
\]
for c. We leave the details of the solution to you. The solution is c1 = 1/2, c2 =
−1/8, and c3 = 1/8. The desired solution is
\[
x = \frac{1}{2} e^{3t} \begin{bmatrix} 1 \\ 1 \\ t \end{bmatrix}
  - \frac{1}{8} e^{3t} \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}
  + \frac{1}{8} e^{-t} \begin{bmatrix} 4 \\ -4 \\ 1 \end{bmatrix}.
\]

If A is a constant n × n matrix, then
\[
X = e^{At}
\]
is always a fundamental solution matrix. Indeed, one way to characterize the expo-
nential e^{At} is as the unique solution of dX/dt = AX satisfying X(0) = I. However,
the exponential matrix is defined as the sum of a series of matrices which is not
usually easy to compute. Instead, it is usually easier to find a fundamental solution
matrix X(t) by some other method, and then use X(t) to find e^{At}.

Theorem 12.23 Let A be an n × n matrix. If X(t) is a fundamental solution
matrix for the system dx/dt = Ax, then for any initial value t0 ,
\[
X(t) = e^{A(t - t_0)} X(t_0).
\]
In particular, for t0 = 0,
\[
X(t) = e^{At} X(0) \quad \text{or} \quad e^{At} = X(t) X(0)^{-1}. \tag{200}
\]

Proof. By assumption, Y = X(t) satisfies the matrix equation dY/dt = AY . However,
Y = e^{A(t−t0)} X(t0) also satisfies that equation since
\[
\frac{dY}{dt} = \frac{d}{dt} e^{A(t - t_0)} X(t_0) = A e^{A(t - t_0)} X(t_0) = AY.
\]
Moreover, at t = t0 , these two functions agree since
\[
e^{A(t_0 - t_0)} X(t_0) = I X(t_0) = X(t_0).
\]
Hence, by the uniqueness theorem, X(t) = e^{A(t−t0)} X(t0) for all t.

Example 226, continued Let
\[
X(t) = \begin{bmatrix} e^{3t} & 0 & 4e^{-t} \\ e^{3t} & 0 & -4e^{-t} \\ te^{3t} & e^{3t} & e^{-t} \end{bmatrix}
\]
be the fundamental solution matrix obtained above. We may calculate by the usual
method
\[
X(0)^{-1} = \begin{bmatrix} 1 & 0 & 4 \\ 1 & 0 & -4 \\ 0 & 1 & 1 \end{bmatrix}^{-1}
= \begin{bmatrix} 1/2 & 1/2 & 0 \\ -1/8 & 1/8 & 1 \\ 1/8 & -1/8 & 0 \end{bmatrix},
\]
so
\begin{align*}
e^{At} &= \begin{bmatrix} e^{3t} & 0 & 4e^{-t} \\ e^{3t} & 0 & -4e^{-t} \\ te^{3t} & e^{3t} & e^{-t} \end{bmatrix}
\begin{bmatrix} 1/2 & 1/2 & 0 \\ -1/8 & 1/8 & 1 \\ 1/8 & -1/8 & 0 \end{bmatrix} \\
&= \begin{bmatrix}
\frac{1}{2}e^{3t} + \frac{1}{2}e^{-t} & \frac{1}{2}e^{3t} - \frac{1}{2}e^{-t} & 0 \\
\frac{1}{2}e^{3t} - \frac{1}{2}e^{-t} & \frac{1}{2}e^{3t} + \frac{1}{2}e^{-t} & 0 \\
\frac{1}{2}te^{3t} - \frac{1}{8}e^{3t} + \frac{1}{8}e^{-t} & \frac{1}{2}te^{3t} + \frac{1}{8}e^{3t} - \frac{1}{8}e^{-t} & e^{3t}
\end{bmatrix}.
\end{align*}

Example 227 Consider the system
\[
\frac{dx}{dt} = \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix} x.
\]
The characteristic equation of the coefficient matrix is λ^2 + 1 = 0, which has roots
λ = ±i. That means that we start by finding complex valued solutions. The eigenvalues
are distinct in this case, so the eigenvalue-eigenvector method will suffice; we don’t
need to use generalized eigenvectors.

For λ = i, we need to solve (A − iI)v = 0. We have
\[
\begin{bmatrix} -i & 1 \\ -1 & -i \end{bmatrix}
\to
\begin{bmatrix} 1 & i \\ 0 & 0 \end{bmatrix},
\]
so the general solution is v1 = −iv2 with v2 free. The general solution vector is
\[
v = v_2 \begin{bmatrix} -i \\ 1 \end{bmatrix},
\]
so
\[
v_1 = \begin{bmatrix} -i \\ 1 \end{bmatrix}
\]
is a basic eigenvector. The corresponding solution of the differential equation is
\[
x_1 = e^{it} v_1 = \begin{bmatrix} -ie^{it} \\ e^{it} \end{bmatrix}
= \begin{bmatrix} \sin t - i\cos t \\ \cos t + i\sin t \end{bmatrix}.
\]
To find independent real solutions, take the real and imaginary parts of this complex
solution. They are
\[
u = \begin{bmatrix} \sin t \\ \cos t \end{bmatrix},
\qquad
v = \begin{bmatrix} -\cos t \\ \sin t \end{bmatrix}.
\]
Hence, a fundamental solution matrix for this system is
\[
X(t) = \begin{bmatrix} \sin t & -\cos t \\ \cos t & \sin t \end{bmatrix}.
\]
Thus,
\begin{align*}
e^{\begin{bmatrix} 0 & t \\ -t & 0 \end{bmatrix}}
&= \begin{bmatrix} \sin t & -\cos t \\ \cos t & \sin t \end{bmatrix}
\begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix}^{-1} \\
&= \begin{bmatrix} \sin t & -\cos t \\ \cos t & \sin t \end{bmatrix}
\begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix} \\
&= \begin{bmatrix} \cos t & \sin t \\ -\sin t & \cos t \end{bmatrix}.
\end{align*}

We computed this earlier in Section 11.7, Example 222, by adding up the series.
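
Equation (200) is also a practical recipe on a computer, since a fundamental
solution matrix is often easier to produce than the exponential series. A sketch
for Example 227 (Python with NumPy and SciPy assumed):

import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [-1.0, 0.0]])

def X(t):                            # the fundamental solution matrix above
    return np.array([[np.sin(t), -np.cos(t)],
                     [np.cos(t),  np.sin(t)]])

t = 1.2
assert np.allclose(X(t) @ np.linalg.inv(X(0.0)), expm(A * t))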

Wronskians

Let {x1 (t), x2 (t), . . . , xn (t)} be a set of solutions of the system dx/dt = A(t)x where
A(t) is an n × n matrix which is not necessarily constant. Let
\[
X(t) = \begin{bmatrix} x_1(t) & x_2(t) & \dots & x_n(t) \end{bmatrix}.
\]

The quantity W (t) = det X(t) is called the Wronskian, and it generalizes the Wron-
skian for second order linear equations in one variable. It is not hard to see that
W (t) never vanishes if X(t) is a fundamental solution matrix. For, if it vanished
for a given t = t0 , the columns of X(t0 ) would form a dependent set of vectors, and
this in turn would imply that the columns of X(t) form a dependent set of functions
of t.

It is possible to show that the Wronskian satisfies the first order differential equation
\[
\frac{dW}{dt} = a(t)W,
\]
where
\[
a(t) = \sum_{i=1}^{n} a_{ii}(t)
\]
is the sum of the diagonal entries of A(t). (The sum of the diagonal entries of an
n × n matrix A is called the trace of the matrix.)

We shall not use the Wronskian, but you may encounter it if you do further work
with linear systems of differential equations.

Exercises for 12.1.

1. For each of the given matrices, solve the system x′ = Ax and find a funda-
mental solution matrix.

(a) \(\begin{bmatrix} 0 & -1 & -1 \\ 1 & 2 & 1 \\ -3 & 1 & -2 \end{bmatrix}\).

(b) \(\begin{bmatrix} -3 & 4 \\ -1 & 1 \end{bmatrix}\).

(c) \(\begin{bmatrix} 0 & 1 & 1 \\ 2 & 0 & -1 \\ 0 & -1 & 0 \end{bmatrix}\).

2. Using (200), calculate e^{At} for each of the matrices in the previous problem.

3. Calculate the Wronskian for each of the systems in the previous problem.
4. Suppose X(t) is a fundamental solution matrix for the n × n system x′ = Ax.
Show that if C is any invertible constant n × n matrix, then X(t)C is also a
fundamental solution matrix.

12.2 Inhomogeneous Systems

Having thoroughly explored methods for solving homogeneous systems, we now
consider inhomogeneous systems
\[
\frac{dx}{dt} = A(t)x + f(t)
\]
where A(t) is a given n × n matrix, f (t) is a given vector function, and x = x(t) as
before is a vector solution to be found.

The analysis is similar to that we went through for second order inhomogeneous
linear equations y′′ + p(t)y′ + q(t)y = f (t). We proceed by first finding the general
solution of the homogeneous equation
\[
\frac{dx}{dt} = Ax,
\]
to which is added a particular solution of the inhomogeneous equation. (Indeed,
second order inhomogeneous linear equations may be reformulated as 2 × 2 inho-
mogeneous systems in the usual way.)

To find a particular solution of the inhomogeneous equation, we appeal to methods that worked for second order equations.

The simplest method, if it works, is guessing.
Example 228 Consider the system
\[
\frac{dx}{dt} = \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix}x + \begin{bmatrix} 1 \\ 0 \end{bmatrix}. \tag{201}
\]
In Example 2 in the previous section, we found a general solution of the corresponding homogeneous system. Using the fundamental solution matrix e^{At}, it may be expressed
\[
x_h = \begin{bmatrix} \cos t & \sin t \\ -\sin t & \cos t \end{bmatrix}\begin{bmatrix} c_1 \\ c_2 \end{bmatrix}.
\]
To find a particular solution of (201), try a constant solution
\[
x_p = \begin{bmatrix} a_1 \\ a_2 \end{bmatrix}
\]
where a1 and a2 are to be determined. Putting this in the differential equation, we have
\[
0 = \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix}\begin{bmatrix} a_1 \\ a_2 \end{bmatrix} + \begin{bmatrix} 1 \\ 0 \end{bmatrix}
\]
or
\[
\begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix}\begin{bmatrix} a_1 \\ a_2 \end{bmatrix} = \begin{bmatrix} -1 \\ 0 \end{bmatrix}.
\]
This is a 2 × 2 algebraic system which is not very difficult to solve. The solution is a1 = 0, a2 = −1. Hence, the general solution of the inhomogeneous equation is
\[
x = \begin{bmatrix} 0 \\ -1 \end{bmatrix} + \begin{bmatrix} \cos t & \sin t \\ -\sin t & \cos t \end{bmatrix}\begin{bmatrix} c_1 \\ c_2 \end{bmatrix}
= \begin{bmatrix} 0 \\ -1 \end{bmatrix} + c_1\begin{bmatrix} \cos t \\ -\sin t \end{bmatrix} + c_2\begin{bmatrix} \sin t \\ \cos t \end{bmatrix}.
\]
There is another method for finding a particular solution which is based on the method of 'variation of parameters'. Unfortunately, it usually leads to extremely complex calculations, even when the answer is relatively simple. However, it is useful in many theoretical discussions. To apply the system version of variation of parameters we look for a particular solution of the form
\[
x = X(t)u(t)
\]
where X(t) is a fundamental solution matrix of the corresponding homogeneous system and u(t) is a vector valued function to be determined. Substituting in the inhomogeneous equation yields
\[
\frac{d}{dt}(Xu) = AXu + f
\]
\[
\frac{dX}{dt}u + X\frac{du}{dt} = AXu + f.
\]
However, dX/dt = AX, so the first term on each side may be canceled, and we get
\[
X\frac{du}{dt} = f, \qquad \frac{du}{dt} = X^{-1}f.
\]
We may now determine u by integrating both sides with respect to t. This could be done using indefinite integrals, but it is usually done with a dummy variable as follows. Let t0 be an initial value of t.
\[
\frac{du}{ds} = X(s)^{-1}f(s)
\]
\[
u(t)\Big|_{t_0}^{t} = \int_{t_0}^{t} X(s)^{-1}f(s)\,ds
\]
\[
u(t) - u(t_0) = \int_{t_0}^{t} X(s)^{-1}f(s)\,ds
\]
\[
u(t) = u(t_0) + \int_{t_0}^{t} X(s)^{-1}f(s)\,ds.
\]
If we multiply this by X(t) to obtain x, we get the particular solution
\[
x = X(t)u(t) = X(t)u(t_0) + X(t)\int_{t_0}^{t} X(s)^{-1}f(s)\,ds.
\]
Since we only need one particular solution, we certainly should be free to choose u(t0) any way we want. If we set it equal to 0, then the second term gives us the desired particular solution
\[
x_p = X(t)\int_{t_0}^{t} X(s)^{-1}f(s)\,ds. \tag{202}
\]
On the other hand, we may also write
\[
u(t_0) = c = \begin{bmatrix} c_1 \\ c_2 \end{bmatrix},
\]
where c is a vector of arbitrary constants. Then the above equation becomes
\[
x = X(t)c + X(t)\int_{t_0}^{t} X(s)^{-1}f(s)\,ds \tag{203}
\]
which is the general solution of the inhomogeneous equation.
Example 228, again Use the same fundamental solution matrix
\[
X(t) = \begin{bmatrix} \cos t & \sin t \\ -\sin t & \cos t \end{bmatrix}
\]
as before, and take t0 = 0. Then
\[
X(s)^{-1} = \begin{bmatrix} \cos s & \sin s \\ -\sin s & \cos s \end{bmatrix}^{-1}
= \frac{1}{\cos^2 s + \sin^2 s}\begin{bmatrix} \cos s & -\sin s \\ \sin s & \cos s \end{bmatrix}
= \begin{bmatrix} \cos s & -\sin s \\ \sin s & \cos s \end{bmatrix}.
\]
Thus,
\[
x_p = \begin{bmatrix} \cos t & \sin t \\ -\sin t & \cos t \end{bmatrix}
\int_0^t \begin{bmatrix} \cos s & -\sin s \\ \sin s & \cos s \end{bmatrix}\begin{bmatrix} 1 \\ 0 \end{bmatrix}\,ds.
\]
The integral is
\[
\int_0^t \begin{bmatrix} \cos s \\ \sin s \end{bmatrix}\,ds
= \begin{bmatrix} \sin s \\ -\cos s \end{bmatrix}\Bigg|_0^t
= \begin{bmatrix} \sin t \\ -\cos t \end{bmatrix} - \begin{bmatrix} 0 \\ -1 \end{bmatrix}.
\]
Multiplying this by X(t) yields
\[
x_p = \begin{bmatrix} \cos t & \sin t \\ -\sin t & \cos t \end{bmatrix}
\left(\begin{bmatrix} \sin t \\ -\cos t \end{bmatrix} - \begin{bmatrix} 0 \\ -1 \end{bmatrix}\right)
= \begin{bmatrix} \cos t\sin t - \sin t\cos t \\ -\sin^2 t - \cos^2 t \end{bmatrix}
+ \begin{bmatrix} \sin t \\ \cos t \end{bmatrix}
= \begin{bmatrix} 0 \\ -1 \end{bmatrix} + \begin{bmatrix} \sin t \\ \cos t \end{bmatrix}.
\]
The second term is a solution of the homogeneous equation. (In fact, it is the second column of the fundamental solution matrix.) Hence, we can drop that term, and we are left with
\[
x_p = \begin{bmatrix} 0 \\ -1 \end{bmatrix},
\]
which is the same particular solution we obtained previously by guessing.
If A is a constant n × n matrix, then X(t) = e^{At} is always a fundamental solution matrix for dx/dt = Ax. Also,
\[
X(s)^{-1} = (e^{As})^{-1} = e^{-As}.
\]
Hence, the general solution of the inhomogeneous equation is
\[
x = e^{At}c + e^{At}\int_{t_0}^{t} e^{-As}f(s)\,ds
= e^{At}c + \int_{t_0}^{t} e^{At}e^{-As}f(s)\,ds
= e^{At}c + \int_{t_0}^{t} e^{A(t-s)}f(s)\,ds.
\]
Moreover, if x(t) assumes the initial value x(t0) at t = t0, we have
\[
x(t_0) = e^{At_0}c + \int_{t_0}^{t_0} (\dots)\,ds = e^{At_0}c,
\]
so c = e^{-At_0}x(t0). Thus,
\[
x = e^{At}e^{-At_0}x(t_0) + \int_{t_0}^{t} e^{A(t-s)}f(s)\,ds
= e^{A(t-t_0)}x(t_0) + \int_{t_0}^{t} e^{A(t-s)}f(s)\,ds
\]
is a solution of the inhomogeneous equation satisfying the desired initial condition at t = t0. This formula sums everything up in a neat package, but it is not specially easy to use for a variety of reasons. First, e^{At} is not usually easy to calculate, and in addition, the integration may not be specially easy to do.

(See if you can simplify the calculations in the previous example by exploiting the fact that we were using e^{At} as our fundamental solution matrix.)
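Even when the formula is awkward by hand, it is straightforward to evaluate numerically. Here is a short sketch (Python with NumPy/SciPy; the initial value is an arbitrary choice of ours, purely for illustration) that applies the formula to Example 228 and compares it against a direct numerical integration of the system.

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp, quad_vec

A = np.array([[0.0, 1.0], [-1.0, 0.0]])
f = np.array([1.0, 0.0])           # constant forcing term from Example 228
x0 = np.array([2.0, -1.0])         # an arbitrary initial value at t0 = 0
t = 1.5

# x(t) = e^{At} x(0) + integral_0^t e^{A(t-s)} f ds
formula = expm(A*t) @ x0 + quad_vec(lambda s: expm(A*(t-s)) @ f, 0.0, t)[0]

# Direct numerical solution of x' = Ax + f for comparison.
sol = solve_ivp(lambda s, x: A @ x + f, (0.0, t), x0, rtol=1e-10, atol=1e-12)

print(formula, sol.y[:, -1])       # the two vectors should agree closely
```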

Exercises for 12.2.

1. Variation of parameters requires calculation of X(s)^{-1} where X(t) is a fundamental solution matrix. Suppose the coefficient matrix A is constant. Derive the formula
\[
X(s)^{-1} = X(0)^{-1}X(-s)X(0)^{-1}.
\]
Hint: Use X(s) = e^{As}X(0).

2. (a) Find a particular solution of the system
\[
x' = \begin{bmatrix} 2 & 3 \\ 2 & 1 \end{bmatrix}x + \begin{bmatrix} 2 \\ 5 \end{bmatrix}
\]
by guessing.
(b) Solve the corresponding homogeneous system and find a fundamental solution matrix.
(c) Find a particular solution by using (202). (Take t0 = 0.)

3. Find a general solution of the system
\[
x' = \begin{bmatrix} 1 & 0 & 1 \\ 0 & 1 & 0 \\ 0 & 1 & 2 \end{bmatrix}x + e^{3t}\begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix}.
\]

4. (a) Find a general solution of the differential equation y‴ − 2y″ − 5y′ + 6y = 0. Hint: solve the system you get by putting x1 = y, x2 = y′, and x3 = y″.
(b) Solve y‴ − 2y″ − 5y′ + 6y = e^t given y(0) = y′(0) = 0, y″(0) = 1. Hint: use variation of parameters for the appropriate inhomogeneous system.
12.3 Normal Modes

Example 229 Recall the second order system
\[
m\frac{d^2x_1}{dt^2} = -2kx_1 + kx_2
\]
\[
m\frac{d^2x_2}{dt^2} = kx_1 - 2kx_2
\]
which was discussed in Chapter X, Section 1, Example 2. This system arose from the configuration of particles and springs indicated below, where m is the common mass of the two particles and k is the common spring constant of the three springs.

[Figure: two particles of mass m in a line, joined to each other and to the walls by three springs of constant k; the displacements are x1 and x2.]

The system may also be rewritten in matrix form
\[
m\frac{d^2x}{dt^2} = \begin{bmatrix} -2k & k \\ k & -2k \end{bmatrix}x.
\]
Systems of this kind abound in nature. For example, a molecule may be modeled as a system of particles connected by springs provided one assumes all the displacements from equilibrium are small. One is often very interested in determining the ways in which such a molecule may oscillate and in particular what the oscillatory frequencies are. These will tell us something about the spectral response of the molecule to infrared radiation. This classical model of a molecule is only an approximation, of course, and one must use quantum mechanics to get a more accurate picture of what happens. However, the classical model often illustrates important features of the problem, and it is usually more tractable mathematically.
Example 230 A CO2 molecule may be represented as two Oxygen atoms connected by springs to a Carbon atom. In reality, the interatomic forces are quite complicated, but to a first approximation, they may be thought of as linear restoring forces produced by imaginary springs. Of course, the atoms in a real CO2 molecule may be oriented relative to each other in space in quite complicated ways, but for the moment we consider only configurations in which all three atoms lie in a line.

[Figure: the three atoms O, C, O in a line, numbered 1, 2, 3.]
If m is the mass of each Oxygen atom and m′ is the mass of the Carbon atom, then we have m′/m ≈ 12/16 = 3/4. As in the diagram, let x1 and x3 denote the linear displacements of the two Oxygen atoms from some equilibrium position and let x2 be the linear displacement of the Carbon atom. Then Newton's Second Law and an analysis of the forces yields the equations
\[
m\frac{d^2x_1}{dt^2} = -k(x_1 - x_2) = -kx_1 + kx_2
\]
\[
m'\frac{d^2x_2}{dt^2} = -k(x_2 - x_1) - k(x_2 - x_3) = kx_1 - 2kx_2 + kx_3
\]
\[
m\frac{d^2x_3}{dt^2} = -k(x_3 - x_2) = kx_2 - kx_3
\]
which may be put in the following matrix form
\[
\begin{bmatrix} m & 0 & 0 \\ 0 & m' & 0 \\ 0 & 0 & m \end{bmatrix}
\frac{d^2}{dt^2}\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}
= \begin{bmatrix} -k & k & 0 \\ k & -2k & k \\ 0 & k & -k \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}.
\]
This may be written more compactly as
\[
Mx'' = Kx \tag{204}
\]
where
\[
M = \begin{bmatrix} m & 0 & 0 \\ 0 & m' & 0 \\ 0 & 0 & m \end{bmatrix}
\]
is a diagonal matrix of masses, and
\[
K = \begin{bmatrix} -k & k & 0 \\ k & -2k & k \\ 0 & k & -k \end{bmatrix}
\]
is a matrix of spring constants. Note that K is a symmetric matrix, i.e., K is equal to its transpose Kᵗ.
It will be the object of this and ensuing sections to solve second order systems of the form (204). Of course, we already have a method for doing that: convert to a first order system of twice the size and solve that by the methods we developed in the previous chapter. It is more enlightening, however, to start from the beginning and apply the same principles directly to the second order system. As in the eigenvalue-eigenvector method for first order systems, we proceed by looking for complex vector valued solutions of the form
\[
x = e^{i\omega t}v \tag{205}
\]
where ω and v ≠ 0 are to be determined. (The rationale for replacing λ by iω is that because of the nature of the physical problem it makes sense to look for oscillatory real solutions, and we know from our previous study of simple harmonic motion that the complex expression of such solutions will involve exponentials of the form e^{iωt}.) Then
\[
\frac{dx}{dt} = i\omega e^{i\omega t}v \qquad\text{and}\qquad \frac{d^2x}{dt^2} = (i\omega)^2 e^{i\omega t}v = -\omega^2 e^{i\omega t}v.
\]
Hence, putting (205) in (204) yields
\[
M(-\omega^2 e^{i\omega t}v) = Ke^{i\omega t}v.
\]
Factoring out the (non-zero) term e^{iωt} yields in turn −ω²Mv = Kv, which may be rewritten
\[
Kv = \mu Mv \quad\text{where } \mu = -\omega^2 \text{ and } v \ne 0. \tag{206}
\]
This equation should look familiar. The quantity µ = −ω² looks like an eigenvalue for K, and the vector v looks like an eigenvector, except of course for the presence of the diagonal matrix M. As previously, (206) may be rewritten as a system
\[
(K - \mu M)v = 0 \tag{207}
\]
and since we need to find non-zero solutions, we need to require
\[
\det(K - \mu M) = 0. \tag{208}
\]
This equation is similar to the characteristic equation for K except that the identity matrix I has been replaced by the diagonal matrix M. It is called the secular equation of the system.

The strategy then is first to find the possible values of µ by solving (208), and for each such µ to find the possible v ≠ 0 by solving the system (207). The corresponding oscillatory (complex) solutions will be e^{iωt}v where ω = √|µ|.
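In practice one can also hand the pair (K, M) directly to a numerical routine. The sketch below (Python with SciPy; the choice of library and the normalization k = m = 1 are ours, for illustration only) solves the generalized eigenvalue problem Kv = µMv for Example 229. scipy.linalg.eigh accepts exactly this kind of symmetric problem with a positive definite M.

```python
import numpy as np
from scipy.linalg import eigh

k, m = 1.0, 1.0
K = k * np.array([[-2.0, 1.0], [1.0, -2.0]])
M = m * np.eye(2)

# eigh(K, M) solves K v = mu M v for symmetric K and positive definite M.
mu, V = eigh(K, M)
omega = np.sqrt(-mu)        # mu = -omega^2, so omega = sqrt(-mu)

print(mu)                   # expect [-3., -1.] (ascending order)
print(omega)                # expect [sqrt(3), 1.]
print(V)                    # columns proportional to (1, -1) and (1, 1)
```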

Example 229, continued We have
\[
M = mI = \begin{bmatrix} m & 0 \\ 0 & m \end{bmatrix}, \qquad
K = k\begin{bmatrix} -2 & 1 \\ 1 & -2 \end{bmatrix} = \begin{bmatrix} -2k & k \\ k & -2k \end{bmatrix}
\]
so the secular equation is
\[
\det\begin{bmatrix} -2k - m\mu & k \\ k & -2k - m\mu \end{bmatrix}
= (-2k - m\mu)^2 - k^2
= m^2\mu^2 + 4km\mu + 4k^2 - k^2
= m^2\mu^2 + 4km\mu + 3k^2
= (m\mu + k)(m\mu + 3k) = 0.
\]
Hence, the roots are µ = −k/m (ω = √(k/m)) and µ = −3k/m (ω = √(3k/m)).
For µ = −k/m (ω = √(k/m)), we need to solve (K + (k/m)M)v = 0. But
\[
K + \frac{k}{m}M = \begin{bmatrix} -2k + k & k \\ k & -2k + k \end{bmatrix}
= k\begin{bmatrix} -1 & 1 \\ 1 & -1 \end{bmatrix}
\to \begin{bmatrix} 1 & -1 \\ 0 & 0 \end{bmatrix}.
\]
Hence, the solution is v1 = v2 with v2 free. A basic solution vector for the subspace of solutions is
\[
v_1 = \begin{bmatrix} 1 \\ 1 \end{bmatrix}.
\]
The corresponding complex solution is
\[
e^{i\sqrt{k/m}\,t}\begin{bmatrix} 1 \\ 1 \end{bmatrix}.
\]
If we take the real and imaginary parts, we get two real solutions
\[
\cos\sqrt{k/m}\,t\begin{bmatrix} 1 \\ 1 \end{bmatrix}
\qquad\text{and}\qquad
\sin\sqrt{k/m}\,t\begin{bmatrix} 1 \\ 1 \end{bmatrix}.
\]
If we write the components out explicitly, we get
\[
x_1 = \cos\sqrt{k/m}\,t, \qquad x_2 = \cos\sqrt{k/m}\,t
\]
for the first real solution, and
\[
x_1 = \sin\sqrt{k/m}\,t, \qquad x_2 = \sin\sqrt{k/m}\,t
\]
for the second real solution. In either case, we have x1(t) = x2(t) for all t, and the two particles move together in tandem with the same angular frequency √(k/m). Note the behavior of the particles is a consequence of the fact that the components of the basic vector v1 are equal. Indeed, the same would be true for any linear combination
\[
c_1\cos\sqrt{k/m}\,t\begin{bmatrix} 1 \\ 1 \end{bmatrix}
+ c_2\sin\sqrt{k/m}\,t\begin{bmatrix} 1 \\ 1 \end{bmatrix}
= \left(c_1\cos\sqrt{k/m}\,t + c_2\sin\sqrt{k/m}\,t\right)\begin{bmatrix} 1 \\ 1 \end{bmatrix}
\]
of the two real solutions obtained above. This two dimensional real subspace of solutions is called the normal mode of angular frequency √(k/m).
Similarly, for µ = −3k/m (ω = √(3k/m)), we have
\[
K + \frac{3k}{m}M = \begin{bmatrix} -2k + 3k & k \\ k & -2k + 3k \end{bmatrix}
= k\begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix}
\to \begin{bmatrix} 1 & 1 \\ 0 & 0 \end{bmatrix}.
\]
The solution is v1 = −v2 with v2 free, and a basic solution vector for the system is
\[
v_2 = \begin{bmatrix} -1 \\ 1 \end{bmatrix}.
\]
The corresponding solution of the differential equation is
\[
e^{i\sqrt{3k/m}\,t}\begin{bmatrix} -1 \\ 1 \end{bmatrix}.
\]
The corresponding normal mode is encompassed by the set of all real solutions of the form
\[
\left(c_3\cos\sqrt{3k/m}\,t + c_4\sin\sqrt{3k/m}\,t\right)\begin{bmatrix} -1 \\ 1 \end{bmatrix}
\]
which oscillate with angular frequency √(3k/m).
The general real solution of the differential equation has the form
\[
x = c_1\cos\sqrt{k/m}\,t\begin{bmatrix} 1 \\ 1 \end{bmatrix}
+ c_2\sin\sqrt{k/m}\,t\begin{bmatrix} 1 \\ 1 \end{bmatrix}
+ c_3\cos\sqrt{3k/m}\,t\begin{bmatrix} -1 \\ 1 \end{bmatrix}
+ c_4\sin\sqrt{3k/m}\,t\begin{bmatrix} -1 \\ 1 \end{bmatrix}.
\]
This example illustrates some features which need further discussion. First, we assumed implicitly in writing out the general solution that the 4 functions
\[
\cos\sqrt{k/m}\,t\begin{bmatrix} 1 \\ 1 \end{bmatrix},\quad
\sin\sqrt{k/m}\,t\begin{bmatrix} 1 \\ 1 \end{bmatrix},\quad
\cos\sqrt{3k/m}\,t\begin{bmatrix} -1 \\ 1 \end{bmatrix},\quad
\sin\sqrt{3k/m}\,t\begin{bmatrix} -1 \\ 1 \end{bmatrix}
\]
constitute a basis for the vector space of solutions. For this to be true we need to know first of all that the space of solutions is 4 dimensional, and secondly that the above functions form a linearly independent set. The first conclusion is clear if we recall that the second order system in 2 variables which we are considering is equivalent to a first order system of twice the size. Hence, the dimension of the solution space is 4. (In general, for a normal mode problem, the solution space should have dimension 2n where n is the number of variables.) It is not so obvious that the functions form a linearly independent set. We shall address this question in detail at the end of this section. Note, however, that the rule which worked for first order systems does not work here. If we evaluate the above vector functions at t = 0, we don't get a linearly independent set; in fact, two of the vectors so obtained are zero.
Another point is that we could have determined the two vectors v1 and v2 by
inspection. The first corresponds to motion in which the particles move in tandem
and the spring between them experiences no net change in length. The second
corresponds to motion in which the particles move back and forth equal amounts in
opposite directions but with the same frequency. In fact, it is often true that careful
consideration of the physical arrangement of the particles, with particular attention
to any symmetries that may be present, may suggest possible normal modes with
little or no calculation.
[Figure: the two normal modes of Example 229: in one the particles move in the same direction, in the other in opposite directions.]
Example 230, continued In this case, the secular equation is
\[
\det(K - \mu M) = \det\begin{bmatrix} -k - m\mu & k & 0 \\ k & -2k - m'\mu & k \\ 0 & k & -k - m\mu \end{bmatrix}
\]
\[
= (-m\mu - k)\big((-m'\mu - 2k)(-m\mu - k) - k^2\big) - k\big(k(-m\mu - k)\big)
\]
\[
= (-m\mu - k)\big((-m'\mu - 2k)(-m\mu - k) - 2k^2\big)
\]
\[
= (-m\mu - k)\big(mm'\mu^2 + k(2m + m')\mu + 2k^2 - 2k^2\big)
\]
\[
= -(m\mu + k)\big(mm'\mu + k(2m + m')\big)\mu = 0.
\]
This has 3 roots. (Don't worry about multiplicities at this point.) They are
\[
\mu = -\frac{k}{m}, \qquad \mu = -\frac{k(2m + m')}{mm'}, \qquad \mu = 0
\]
with corresponding angular frequencies
\[
\omega = \sqrt{\frac{k}{m}}, \qquad \omega = \sqrt{\frac{k(2m + m')}{mm'}}, \qquad \omega = 0.
\]
Let's find a pair of real solutions for each of these. That should provide an independent set of 6 basic real solutions, which is what we should expect since the solution space is 6 dimensional.

Start with µ = −k/m (ω = √(k/m)). If we make the approximation m′/m = 3/4, the coefficient matrix of the system (K + (k/m)M)v = 0 becomes
\[
\begin{bmatrix} -k + k & k & 0 \\ k & -2k + (k/m)m' & k \\ 0 & k & -k + k \end{bmatrix}
= \begin{bmatrix} 0 & k & 0 \\ k & -2k + (3/4)k & k \\ 0 & k & 0 \end{bmatrix}
= \begin{bmatrix} 0 & k & 0 \\ k & -(5/4)k & k \\ 0 & k & 0 \end{bmatrix}
\to \begin{bmatrix} 1 & 0 & 1 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{bmatrix}.
\]
The general solution is v1 = −v3, v2 = 0 with v3 free. A basic solution is
\[
v_1 = \begin{bmatrix} -1 \\ 0 \\ 1 \end{bmatrix}.
\]
The corresponding complex solution is
\[
e^{i\sqrt{k/m}\,t}\begin{bmatrix} -1 \\ 0 \\ 1 \end{bmatrix}.
\]
The corresponding normal mode is described in real terms by
\[
\left(c_1\cos\sqrt{k/m}\,t + c_2\sin\sqrt{k/m}\,t\right)\begin{bmatrix} -1 \\ 0 \\ 1 \end{bmatrix}.
\]
The physical interpretation is clear. The two Oxygen atoms move in equal and opposite directions while the Carbon atom stays fixed.
[Figure: the O, C, O line with the two Oxygen atoms displaced in opposite directions and the Carbon atom fixed.]
For µ = −k(2/m′ + 1/m) (ω = √(k(2/m′ + 1/m))), the coefficient matrix of the relevant system is
\[
\begin{bmatrix}
-k + mk(2/m' + 1/m) & k & 0 \\
k & -2k + m'k(2/m' + 1/m) & k \\
0 & k & -k + mk(2/m' + 1/m)
\end{bmatrix}
\]
\[
= \begin{bmatrix}
-k + k(8/3 + 1) & k & 0 \\
k & -2k + k(2 + 3/4) & k \\
0 & k & -k + k(8/3 + 1)
\end{bmatrix}
= k\begin{bmatrix} 8/3 & 1 & 0 \\ 1 & 3/4 & 1 \\ 0 & 1 & 8/3 \end{bmatrix}
\]
\[
\to \begin{bmatrix} 1 & 3/8 & 0 \\ 0 & 3/8 & 1 \\ 0 & 0 & 0 \end{bmatrix}
\to \begin{bmatrix} 1 & 0 & -1 \\ 0 & 1 & 8/3 \\ 0 & 0 & 0 \end{bmatrix}.
\]
The corresponding solution is v1 = v3, v2 = −(8/3)v3 with v3 free. The corresponding basic vector is
\[
v_2 = \begin{bmatrix} 1 \\ -8/3 \\ 1 \end{bmatrix},
\]
and the corresponding normal mode is described in real terms by
\[
\left(c_3\cos\sqrt{k(2/m' + 1/m)}\,t + c_4\sin\sqrt{k(2/m' + 1/m)}\,t\right)\begin{bmatrix} 1 \\ -8/3 \\ 1 \end{bmatrix}.
\]
The physical interpretation is that the two Oxygen atoms move together in tandem while the Carbon atom moves in the opposite direction in such a way that the center of mass always stays fixed.
[Figure: the O, C, O line with both Oxygen atoms displaced one way and the Carbon atom displaced the other way.]
Finally, consider µ = 0. This does not correspond to an oscillatory solution at all since in this case ω = 0 and e^{iωt} = 1. Let's solve the system (K − µM)v = Kv = 0 in any case, although it is not exactly clear what the physical interpretation should be.
\[
\begin{bmatrix} -k & k & 0 \\ k & -2k & k \\ 0 & k & -k \end{bmatrix}
\to \begin{bmatrix} -1 & 1 & 0 \\ 1 & -2 & 1 \\ 0 & 1 & -1 \end{bmatrix}
\to \begin{bmatrix} 1 & -1 & 0 \\ 0 & -1 & 1 \\ 0 & 1 & -1 \end{bmatrix}
\to \begin{bmatrix} 1 & 0 & -1 \\ 0 & 1 & -1 \\ 0 & 0 & 0 \end{bmatrix}.
\]
The corresponding solution is v1 = v3, v2 = v3 with v3 free. The corresponding basic vector is
\[
v_3 = \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix},
\]
but all we get this way for a real solution is
\[
c_5\begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix}.
\]
What does this mean and where is the second real solution in this case? Since all three displacements are equal, it appears that the particles are displaced an equal distance in the same direction and there is no oscillation. A little thought suggests that what this corresponds to is a uniform motion of the center of mass and no relative motion of the individual particles about the center of mass. This tells us that we should add the additional solution tv3, and the corresponding 'normal mode' is
\[
(c_5 + c_6t)\begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix},
\]
which is a mathematical description of such uniform motion. We now have a set of 6 independent real solutions.
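As with the two-particle system, the whole CO2 calculation can be checked numerically. The sketch below (Python with SciPy, again illustrative only, using the normalizations k = m = 1 and m′ = 3/4) reproduces all three roots, including the zero root, and also shows that the computed vectors are orthogonal with respect to M, a fact closely related to the change of variables discussed at the end of this section.

```python
import numpy as np
from scipy.linalg import eigh

k, m, mp = 1.0, 1.0, 0.75            # mp stands in for m', with m'/m = 3/4
K = k * np.array([[-1.0,  1.0,  0.0],
                  [ 1.0, -2.0,  1.0],
                  [ 0.0,  1.0, -1.0]])
M = np.diag([m, mp, m])

mu, V = eigh(K, M)                   # solves K v = mu M v
print(mu)                            # expect about [-11/3, -1, 0]
print(np.sqrt(-np.minimum(mu, 0.0))) # omegas: sqrt(11/3), 1, 0
                                     # (minimum guards against tiny roundoff
                                     #  making the zero root slightly positive)

# The columns of V are orthonormal in the M inner product: V^t M V = I.
print(np.allclose(V.T @ M @ V, np.eye(3)))   # expect True
```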
This example also illustrates the principle that understanding the physical nature of the problem and the underlying symmetries can often lead to appropriate guesses for the vectors v. Note also that once you have picked out such a vector, you can check if you are right and also determine the corresponding root µ = −ω² of the secular equation by using the relation
\[
Kv = \mu Mv.
\]
Some General Remarks The whole method depends on the fact that the roots µ of the secular equation
\[
\det(K - \mu M) = 0
\]
can be represented µ = −ω² where ω ≥ 0. This is the same as assuming all the roots µ are negative or at worst zero. However, if you pick a random symmetric n × n matrix K and a random diagonal matrix M (even if you assume the diagonal entries of M are positive), there is no way to be sure that some of the roots µ may not be positive. Hence, for the problem to be a bona fide normal mode problem, we must impose this as an additional assumption. This is not entirely arbitrary, however, because it can be shown from energy considerations that if some of the roots are positive, then the physical configuration will tend to fly apart instead of oscillating about a stable equilibrium.
As in Example 2, solutions associated with the root zero correspond to non-oscillatory


uniform motions. However, if we allow more than one spatial dimension, the situ-
ation is much more complicated. Consider, for example, plane motions of the CO2
molecule. For each of the three particles, there are two spatial coordinates, and so
there are altogether 6 displacement variables. Thus the vector space of all solutions
is 12 dimensional, and it has 6 possible basic ‘normal modes’. Some of these will
involve two dimensional oscillations—see the diagram—but others will be uniform
motions corresponding to a zero root of the secular equation. Some non-oscillatory
solutions will consist of motion of the center of mass in some direction at a con-
stant velocity with no relative motion of the particles about the center of mass.
The problem is that other solutions will correspond to uniform rotations about the
center of mass, but that won’t be apparent from the mathematical representation.
The point is that in our analysis of the problem, we assumed that the components
of the vector x(t) are small, since we were concentrating on oscillations. Consider
for example the motion in which the two Oxygen atoms rotate at a constant rate
about the Carbon atom which stays fixed at the center of mass of the system. For
small displacements, this is not distinguishable from a solution in which each Oxy-
gen atom starts moving perpendicular to the line between the two Oxygen atoms
(passing through the Carbon atom) but in opposite directions. This is what the
mathematical solution x = (a + bt)v will appear to describe, but it is only valid
‘infinitesimally’.
[Figure: two plane motions of the CO2 molecule: a uniform rotation about the center of mass, and a genuinely two dimensional oscillation.]
Relation to Eigenvectors and Eigenvalues The normal mode problem may be restated as follows. Solve
\[
\det(K - \mu M) = 0
\]
and, for each root µ, find all solutions of the system
\[
Kv = \mu Mv. \tag{209}
\]
(µ = −ω², but that doesn't matter here.) We noted that this looks very much like an eigenvalue-eigenvector problem, and by an appropriate change of variables, we can reduce it exactly to such a problem. Let
\[
M = \begin{bmatrix} m_1 & 0 & \dots & 0 \\ 0 & m_2 & \dots & 0 \\ \vdots & \vdots & \dots & \vdots \\ 0 & 0 & \dots & m_n \end{bmatrix}.
\]
Let
\[
v_j = \frac{1}{\sqrt{m_j}}u_j \qquad\text{for } j = 1, 2, \dots, n.
\]
Thus, uj = √(mj) vj is weighted by the square root of the mass of the corresponding particle. This may be written in compact matrix form as
\[
v = (\sqrt{M})^{-1}u
\]
where √M is the diagonal matrix with diagonal entries √(mj). Putting this in (209) yields
\[
K(\sqrt{M})^{-1}u = \mu M(\sqrt{M})^{-1}u = \mu\sqrt{M}\,u
\]
or
\[
(\sqrt{M})^{-1}K(\sqrt{M})^{-1}u = \mu u.
\]
This says that u is an eigenvector for the matrix A = (√M)^{-1}K(√M)^{-1} with µ as the eigenvalue. It is not hard to see that A is also a real symmetric matrix, so we see that the normal mode problem is really a special case of the problem of finding eigenvalues and eigenvectors of real symmetric matrices.
Example 231 Consider a normal mode problem similar to that in Example 1 except that the second particle has mass 4m rather than m. Then the matrix K is the same, but
\[
M = \begin{bmatrix} m & 0 \\ 0 & 4m \end{bmatrix}.
\]
Hence,
\[
\sqrt{M} = \begin{bmatrix} \sqrt{m} & 0 \\ 0 & 2\sqrt{m} \end{bmatrix}
\]
and
\[
A = (\sqrt{M})^{-1}K(\sqrt{M})^{-1}
= \begin{bmatrix} 1/\sqrt{m} & 0 \\ 0 & 1/(2\sqrt{m}) \end{bmatrix}
\begin{bmatrix} -2k & k \\ k & -2k \end{bmatrix}
\begin{bmatrix} 1/\sqrt{m} & 0 \\ 0 & 1/(2\sqrt{m}) \end{bmatrix}
= \begin{bmatrix} -2k/m & k/(2m) \\ k/(2m) & -k/(2m) \end{bmatrix}.
\]
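For a quick confirmation that this change of variables really does symmetrize the problem, the following sketch (Python with NumPy/SciPy; illustrative only, with the arbitrary normalization k = m = 1) builds A = (√M)^{-1}K(√M)^{-1} for Example 231 and checks that its ordinary eigenvalues agree with the roots µ of the secular equation.

```python
import numpy as np
from scipy.linalg import eigh

k, m = 1.0, 1.0
K = k * np.array([[-2.0, 1.0], [1.0, -2.0]])
M = np.diag([m, 4*m])

sqrtM_inv = np.diag(1.0 / np.sqrt(np.diag(M)))
A = sqrtM_inv @ K @ sqrtM_inv          # (sqrt M)^{-1} K (sqrt M)^{-1}

print(np.allclose(A, A.T))             # A is symmetric: expect True
print(np.linalg.eigvalsh(A))           # ordinary eigenvalues of A
print(eigh(K, M, eigvals_only=True))   # roots mu of det(K - mu M) = 0; same numbers
```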

Linear Independence of the Solutions Suppose that in solving the n × n normal mode problem Mx″ = Kx we obtained a basis {v1, v2, . . . , vn} for Rⁿ such that
\[
Kv_j = -\omega_j^2\,Mv_j, \qquad j = 1, 2, \dots, n,
\]
where each µj = −ωj² is a root of the secular equation and each ωj ≥ 0. We don't assume that the ωj are distinct, and, as in Example 230, some of them may be zero. We want to show that the set of 2n real solutions of the normal mode problem
\[
\cos\omega_j t\,v_j,\ \sin\omega_j t\,v_j \quad\text{if } \omega_j \ne 0,
\qquad\text{or}\qquad
v_j,\ tv_j \quad\text{if } \omega_j = 0,
\]
for j between 1 and n, is linearly independent.
Suppose not. Then there is a dependence relation of some sort. By transposing, we may assume this has the form
\[
\sum_{j=1}^{n} (a_j\cos\omega_j t + b_j\sin\omega_j t)v_j = 0, \tag{210}
\]
and at least one of the coefficients a1, b1, a2, b2, . . . , an, bn is 1. (If ωj = 0, the appropriate term in the sum should be (aj + bj t)vj, but, as you will see, that will not affect the nature of the argument.)
Put t = 0 in (210). We obtain
\[
\sum_{j=1}^{n} a_jv_j = 0.
\]
However, the set {v1, v2, . . . , vn} is linearly independent so the only such relation is the trivial one, i.e., we must have aj = 0 for j = 1, 2, . . . , n. Rewrite (210) as
\[
\sum_{j=1}^{n} b_j\sin\omega_j t\,v_j = 0. \tag{211}
\]
(Again, if ωj = 0, the corresponding term will be bj t vj instead.) Differentiate (211) to obtain
\[
\sum_{j=1}^{n} b_j\omega_j\cos\omega_j t\,v_j = 0,
\]
and then set t = 0, which gives
\[
\sum_{j=1}^{n} b_j\omega_jv_j = 0.
\]
(Note that if ωj = 0, the differentiated term would be bj vj without the factor of ωj.) Again by linear independence, all the coefficients are zero. It follows that bj = 0 for j = 1, 2, . . . , n. (Either ωj ≠ 0, or the factor ωj is not there.) This contradicts the assumption that we had a dependence relation in the first place.
Exercises for 12.3.

1. Set up and solve the normal mode problem for each of the systems depicted below. Choose as variables the horizontal displacements from equilibrium of each of the particles. Assume these are measured so the positive direction is to the right. In each case identify the normal modes and their frequencies.

(a) [Figure: a particle of mass 3m attached to the wall by a spring of constant 2k and to a particle of mass m by a spring of constant k; displacements x1 and x2.]

(b) [Figure: particles of masses 2m, 2m, and m in a line, attached to the wall and to each other by three springs, each of constant k; displacements x1, x2, and x3.]
2. Consider the normal mode problem for the plane oscillations of a molecule
consisting of three atoms of equal mass m connected by ‘springs’ with equal
spring constants k. Without deriving the equations, see if you can figure out
what some of the normal modes should look like. Which ‘normal modes’ will
correspond to ω = 0? Can you guess what the dimension of the space of all
real solutions corresponding to ω = 0 will be?

3. Read the proof of linear independence at the end of the section and write it
out explicitly for the case of the CO2 molecule as described in Example 2.

4. In our discussion of normal modes, we suggested a method for finding n basic complex solutions e^{iωt}v and then constructed an independent set of 2n basic real solutions by taking real and imaginary parts. However, there should be n additional basic complex solutions since the dimension of the vector space of all complex solutions should also be 2n. What are those additional basic complex solutions?

5. (a) Calculate A = (√M)^{-1}K(√M)^{-1} for K and M as in Example 2 (the CO2 molecule).
(b) Use m′/m = 3/4 to simplify A, and find its eigenvalues. Check that you get the same roots as we did when we worked directly with the secular equation.
(c) Are the eigenvectors of A the same as the solutions of (K − µM)v = 0?
6. Let K be a symmetric n × n matrix and let Q be a diagonal n × n matrix. Show that A = QKQ is symmetric. Hint: Calculate (QKQ)ᵗ.
12.4 Real Symmetric and Complex Hermitian Matrices

We saw in the previous section that finding the normal modes of a system of particles is mathematically a special case of finding the eigenvalues and eigenvectors of a real symmetric matrix. Many other physical problems reduce mathematically to the same problem. In this section we investigate that problem in greater detail.
Let A be a real symmetric matrix. The first thing we want to show is that the roots of its characteristic equation
\[
\det(A - \lambda I) = 0
\]
are all real. This is important in modern physics because we have to boil the predictions of a theory down to some numbers which can be checked against experiment, and it is easier if these are real. Many physical theories generate such numbers as the eigenvalues of a matrix.

It is not true that the characteristic equation of a real matrix must have real roots. (Try
\[
A = \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix},
\]
for example, as in Chapter XI, Section 6.) Hence, the fact that the matrix is symmetric must play an important role. In order to see how this comes into play, we need a short digression.
The dot product in Rⁿ was defined by the formula
\[
(u, v) = u^tv = \begin{bmatrix} u_1 & u_2 & \dots & u_n \end{bmatrix}
\begin{bmatrix} v_1 \\ v_2 \\ \vdots \\ v_n \end{bmatrix}
= \sum_{j=1}^{n} u_jv_j. \tag{212}
\]
(Note that we introduced a new notation (u, v) for the dot product u · v. Another notation you will often see is ⟨u, v⟩.) In order to discuss complex eigenvalues, even if only to show there aren't any, we have to allow the possibility of complex eigenvectors, i.e., we need to work in Cⁿ, so we want to generalize the notion of dot product to that domain. One's first thought is to just use the same formula (212), but there is a problem with that. In Rⁿ, the length of a vector is given by
\[
|v|^2 = (v, v) = \sum_{j=1}^{n} v_j^2
\]
and this has all the properties you expect from a good 'length function'. Unfortunately, in the complex case, the quantity $\sum_{j=1}^{n} v_j^2$ can vanish without v being zero. For example, with n = 2, for
\[
v = \begin{bmatrix} i \\ 1 \end{bmatrix}, \quad\text{we have}\quad v_1^2 + v_2^2 = i^2 + 1 = 0.
\]
Clearly, it would be much better to use the formula
\[
|v|^2 = \sum_{j=1}^{n} |v_j|^2 \tag{213}
\]
where $|v_j|^2 = \bar{v}_jv_j$ is the square of the absolute value of the complex number vj. Unfortunately, this is not consistent with the definition (212) of the dot product, but it is easy to remedy that. Define
\[
(u, v) = \bar{u}^tv = \sum_{j=1}^{n} \bar{u}_jv_j \tag{214}
\]
for two vectors u and v in Cⁿ. If the vectors are real this gives the same dot product as before, but it also gives
\[
|v|^2 = (v, v)
\]
if the left hand side is defined by (213).
We may now use this extended dot product to derive the promised result.

Theorem 12.24 Let A be a real symmetric matrix. The roots of det(A − λI) = 0 are all real.

Proof. Let λ be a possibly complex eigenvalue for A, i.e., assume there is a non-zero v in Cⁿ such that Av = λv. Consider the expression
\[
(Av, v) = \overline{(Av)}^tv.
\]
We have
\[
\overline{(Av)}^t = (\bar{A}\bar{v})^t = \bar{v}^t\bar{A}^t, \tag{215}
\]
but since A is real and symmetric, we have
\[
\bar{A}^t = A^t = A. \tag{216}
\]
Hence,
\[
\overline{(Av)}^tv = \bar{v}^tAv.
\]
Now put Av = λv in the last equation to get
\[
\overline{(\lambda v)}^tv = \bar{v}^t(\lambda v)
\]
\[
\bar{\lambda}\bar{v}^tv = \lambda\bar{v}^tv.
\]
However, $\bar{v}^tv = (v, v) = |v|^2 \ne 0$ since v ≠ 0. Hence,
\[
\bar{\lambda} = \lambda.
\]
That tells us λ is real.

Note that paradoxically we have to consider the possibility that λ is complex in order to show it is real. It is possible to prove this result without mentioning Cⁿ, but the argument is much more difficult. We will mention it again when we discuss the subject of Lagrange multipliers.
One crucial step in the above argument was in (216) where we concluded that
\[
\bar{A}^t = A
\]
from the fact that A is real and symmetric. However, the proof would work just as well if A were complex and satisfied $\bar{A}^t = A$. Such matrices are called Hermitian (after the 19th century French mathematician Hermite). Thus, we may extend the previous result to

Theorem 12.25 Let A be a complex Hermitian n × n matrix. Then the eigenvalues of A are real.
Example 232 The matrix
\[
A = \begin{bmatrix} 0 & i \\ -i & 0 \end{bmatrix}
\]
which is used in quantum mechanics is Hermitian since
\[
\bar{A} = \begin{bmatrix} 0 & -i \\ i & 0 \end{bmatrix}, \qquad
\bar{A}^t = \begin{bmatrix} 0 & i \\ -i & 0 \end{bmatrix} = A.
\]
Its eigenvalues are the roots of
\[
\det\begin{bmatrix} -\lambda & i \\ -i & -\lambda \end{bmatrix} = \lambda^2 - 1 = 0,
\]
which are λ = ±1, so they are certainly real.
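Numerical libraries exploit exactly this fact: routines intended for Hermitian matrices return real eigenvalues by construction. A quick check (Python with NumPy; purely illustrative) on the matrix of Example 232:

```python
import numpy as np

A = np.array([[0.0,  1.0j],
              [-1.0j, 0.0]])

print(np.allclose(A, A.conj().T))   # Hermitian: A equals its conjugate transpose

# eigvalsh is for Hermitian matrices and returns a real array of eigenvalues.
w = np.linalg.eigvalsh(A)
print(w)                            # expect [-1., 1.]
```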

Note that a real n × n matrix A is Hermitian if and only if it is symmetric. (The conjugation has no effect, so the condition becomes $\bar{A}^t = A^t = A$.)
More about the dot product in Cⁿ The new dot product $(u, v) = \bar{u}^tv$ has all the usual properties we expect of a dot product except that because of the complex conjugation of the first factor, it obeys the rule
\[
(cu, v) = \bar{c}(u, v). \tag{217}
\]
However, for the second factor it obeys the usual rule (u, cv) = c(u, v). It is also not commutative, but obeys the following rule when the factors are switched:
\[
(u, v) = \overline{(v, u)}.
\]
(These formulas follow easily from the definition $(u, v) = \bar{u}^tv$. See the Exercises.)
In R³ we chose the basis vectors e1, e2, e3 (formerly called i, j, k) by picking unit vectors along the coordinate axes. It is usual in R³ to pick mutually perpendicular coordinate axes, so the basis vectors are mutually perpendicular unit vectors. It makes sense to do the same thing in Rⁿ (or in the complex case Cⁿ). That is, we should attach special significance to bases {u1, u2, . . . , un} satisfying
\[
(u_i, u_j) = 0 \quad\text{for } i \ne j,
\qquad
|u_i|^2 = (u_i, u_i) = 1 \quad\text{otherwise}.
\]

Such a basis is called an orthonormal basis. The 'ortho' part of 'orthonormal' refers to the fact that the vectors are mutually orthogonal, i.e., perpendicular, and the 'normal' part refers to the fact that they are unit vectors.
Orthogonality plays an important role for eigenvectors of Hermitian matrices.

Theorem 12.26 Let A be a real symmetric n × n matrix or, in the complex case, a Hermitian matrix. Then eigenvectors of A associated with different eigenvalues are perpendicular.

Proof. Assume
\[
Au = \lambda u \quad\text{and}\quad Av = \mu v \quad\text{where } \lambda \ne \mu.
\]
We have
\[
\overline{(Au)}^tv = \bar{u}^t\bar{A}^tv = \bar{u}^t(Av)
\]
since $\bar{A}^t = A$. Hence,
\[
\overline{(\lambda u)}^tv = \bar{u}^t(\mu v)
\]
\[
\bar{\lambda}\bar{u}^tv = \mu\bar{u}^tv.
\]
But the eigenvalues of A are real, so $\bar{\lambda} = \lambda$, and
\[
\lambda\bar{u}^tv = \mu\bar{u}^tv
\]
\[
(\lambda - \mu)\bar{u}^tv = 0
\]
\[
(\lambda - \mu)(u, v) = 0.
\]
Since λ ≠ µ, this implies that (u, v) = 0 as required.
Because of the above theorems, orthogonality plays an important role in finding the
eigenvectors of a real symmetric (or a complex Hermitian) n × n matrix A. Suppose
first of all that A has n distinct eigenvalues. Then we know in any case that we
can choose a basis for Rn (or Cn in the complex case) consisting of eigenvectors for
A. However, by Theorem 12.26, we know in addition in the symmetric (Hermitian)
case that these basis vectors are mutually perpendicular. To get an orthonormal
basis, it suffices to normalize each basic eigenvector by dividing it by its length.

Example 232, continued We saw that the eigenvalues of
\[
A = \begin{bmatrix} 0 & i \\ -i & 0 \end{bmatrix}
\]
are λ = ±1.

For λ = 1, we find the eigenvectors by reducing
\[
A - I = \begin{bmatrix} -1 & i \\ -i & -1 \end{bmatrix} \to \begin{bmatrix} 1 & -i \\ 0 & 0 \end{bmatrix}.
\]
The general solution is v1 = iv2 with v2 free, and a basic eigenvector is
\[
v_1 = \begin{bmatrix} i \\ 1 \end{bmatrix}.
\]
To get a unit vector, divide this by its length $|v_1| = \sqrt{|i|^2 + 1^2} = \sqrt{2}$; this gives
\[
u_1 = \frac{1}{\sqrt{2}}\begin{bmatrix} i \\ 1 \end{bmatrix}.
\]
Similarly, for λ = −1, reduce
\[
A + I = \begin{bmatrix} 1 & i \\ -i & 1 \end{bmatrix} \to \begin{bmatrix} 1 & i \\ 0 & 0 \end{bmatrix}
\]
which yields as above the unit basic eigenvector
\[
u_2 = \frac{1}{\sqrt{2}}\begin{bmatrix} -i \\ 1 \end{bmatrix}.
\]
Note that
\[
(u_1, u_2) = \frac{1}{2}\big((-i)(-i) + (1)(1)\big) = \frac{1}{2}(-1 + 1) = 0
\]
as expected.
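In NumPy the conjugation convention is built into np.vdot, which conjugates its first argument just as our (u, v) does. A brief illustrative check (Python; not part of the text's own materials) of the example's orthonormal basis:

```python
import numpy as np

u1 = np.array([1j, 1.0]) / np.sqrt(2)
u2 = np.array([-1j, 1.0]) / np.sqrt(2)

# np.vdot conjugates its first argument, matching (u, v) = conj(u)^t v.
print(np.isclose(np.vdot(u1, u2), 0.0))   # orthogonal: expect True
print(np.isclose(np.vdot(u1, u1), 1.0))   # unit length: expect True
```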

Exercises for 12.4.

1. (a) Find a basis for the subspace of R⁴ of all vectors perpendicular to v = ⟨1, 2, −1, 4⟩. It need not be an orthonormal basis.
(b) Find a basis for the subspace of C³ of all vectors perpendicular to v = ⟨i, −i, 1⟩. It need not be an orthonormal basis.

2. (Optional) Derive the following properties of the dot product in Cⁿ. Use the rules of matrix algebra derived earlier and the additional rules $\overline{B + C} = \bar{B} + \bar{C}$, $\overline{BC} = \bar{B}\,\bar{C}$, and $(B + C)^t = B^t + C^t$. Note also that $(\bar{u}^tv)^t = \bar{u}^tv$ since either is a scalar.
(a) (u + v, w) = (u, w) + (v, w), (u, v + w) = (u, v) + (u, w).
(b) $(cu, v) = \bar{c}(u, v)$, (u, cv) = c(u, v).
(c) $(u, v) = \overline{(v, u)}$.

3. Let A be Hermitian. Prove the self-adjoint property
\[
(Au, v) = (u, Av).
\]
Note. This formula was used implicitly at several points in the text. See if you can find where.
4. Let {u1, u2, . . . , uk} be a set of mutually perpendicular non-zero vectors. Show that the set is linearly independent. Hint: Assume there is a dependence relation which after renumbering takes the form
\[
u_1 = c_2u_2 + \dots + c_ku_k
\]
and take the dot product of both sides with u1.
5. (a) Show that the matrix
\[
A = \begin{bmatrix} 0 & 3i & 0 \\ -3i & 0 & 4i \\ 0 & -4i & 0 \end{bmatrix}
\]
is Hermitian.
(b) Find the eigenvalues and eigenvectors for A.
(c) Find an orthonormal basis of eigenvectors for A.

6. (a) Find a basis of eigenvectors for
\[
A = \begin{bmatrix} 1 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 1 \end{bmatrix}.
\]
(b) Find an orthonormal basis of eigenvectors for A.

12.5 The Principal Axis Theorem

One of the most important results in linear algebra asserts that if A is a real symmetric (a complex Hermitian) n × n matrix then there is an orthonormal basis for Rⁿ (Cⁿ) consisting of eigenvectors for A. This is usually called the Principal Axis Theorem. The reason for the name is that the cases n = 2, 3 may be used to find the orientation or 'principal axes' of an arbitrary conic in the plane or quadric surface in space. There is an important generalization of the result to infinite dimensional spaces which is called the Spectral Theorem, so the Principal Axis Theorem is also called the 'finite dimensional Spectral Theorem'.

In this section we shall work an example and also explore some concepts related to the use of the theorem. The proof will be deferred for the moment.

Let A be a complex Hermitian n × n matrix. (If it is real it will automatically be


symmetric, so we don’t need to discuss the real case separately.) As we saw in the
previous section, if the eigenvalues of A are distinct, then we already know that
there is a basis consisting of eigenvectors and they are automatically perpendicular
to one another. Hence, the Principal Axis Theorem really only tells us something
new in case of repeated eigenvalues.

Example 233 Consider
\[
A = \begin{bmatrix} -1 & 1 & 1 \\ 1 & -1 & 1 \\ 1 & 1 & -1 \end{bmatrix}.
\]
This example is real, so we shall work in R³.

The characteristic equation is
\[
\det\begin{bmatrix} -1-\lambda & 1 & 1 \\ 1 & -1-\lambda & 1 \\ 1 & 1 & -1-\lambda \end{bmatrix}
= -(1+\lambda)\big((1+\lambda)^2 - 1\big) - 1(-1-\lambda-1) + 1(1+1+\lambda)
\]
\[
= -(1+\lambda)(\lambda^2+2\lambda) + 2(\lambda+2)
= -(\lambda^3 + 3\lambda^2 - 4) = 0.
\]
Using the method suggested at the end of Chapter XI, Section 5, we may find the roots of this equation by trying the factors of the constant term. The roots are λ = 1, which has multiplicity 1, and λ = −2, which has multiplicity 2.
For λ = 1, we need to reduce
\[
A - I = \begin{bmatrix} -2 & 1 & 1 \\ 1 & -2 & 1 \\ 1 & 1 & -2 \end{bmatrix}
\to \begin{bmatrix} 1 & 1 & -2 \\ 0 & -3 & 3 \\ 0 & 3 & -3 \end{bmatrix}
\to \begin{bmatrix} 1 & 0 & -1 \\ 0 & 1 & -1 \\ 0 & 0 & 0 \end{bmatrix}.
\]
The general solution is v1 = v3, v2 = v3 with v3 free. A basic eigenvector is
\[
v_1 = \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix}
\]
but we should normalize this by dividing it by |v1| = √3. This gives
\[
u_1 = \frac{1}{\sqrt{3}}\begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix}.
\]
For λ = −2, the situation is more complicated. Reduce
\[
A + 2I = \begin{bmatrix} 1 & 1 & 1 \\ 1 & 1 & 1 \\ 1 & 1 & 1 \end{bmatrix}
\to \begin{bmatrix} 1 & 1 & 1 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}
\]
which yields the general solution v1 = −v2 − v3 with v2, v3 free. This gives basic eigenvectors
\[
v_2 = \begin{bmatrix} -1 \\ 1 \\ 0 \end{bmatrix}, \qquad
v_3 = \begin{bmatrix} -1 \\ 0 \\ 1 \end{bmatrix}.
\]
Unfortunately, v2 and v3 are not perpendicular, but this is easy to remedy. All we have to do is pick another basis for the subspace spanned by {v2, v3}. The
eigenvectors with eigenvalue −2 are exactly the non-zero vectors in this subspace, so any basis will do as well.

It is easy to construct the new basis. Indeed we need only replace one of the two vectors. Keep v2, and let v3′ = v3 − cv2 where c is chosen so that
\[
(v_2, v_3') = (v_2, v_3) - c(v_2, v_2) = 0,
\]
i.e., take c = (v2, v3)/(v2, v2). (See the diagram to get some idea of the geometry behind this calculation.) We have
\[
\frac{(v_2, v_3)}{(v_2, v_2)} = \frac{1}{2},
\]
so
\[
v_3' = v_3 - \frac{1}{2}v_2
= \begin{bmatrix} -1 \\ 0 \\ 1 \end{bmatrix} - \frac{1}{2}\begin{bmatrix} -1 \\ 1 \\ 0 \end{bmatrix}
= \begin{bmatrix} -\tfrac{1}{2} \\ -\tfrac{1}{2} \\ 1 \end{bmatrix}.
\]

[Figure: v3 resolved into its projection cv2 along v2 and the perpendicular component v3′.]

We should also normalize this basis by choosing
\[
u_2 = \frac{1}{|v_2|}v_2 = \frac{1}{\sqrt{2}}\begin{bmatrix} -1 \\ 1 \\ 0 \end{bmatrix},
\qquad
u_3 = \frac{1}{|v_3'|}v_3' = \sqrt{\frac{2}{3}}\begin{bmatrix} -\tfrac{1}{2} \\ -\tfrac{1}{2} \\ 1 \end{bmatrix}.
\]
Putting this all together, we see that
\[
u_1 = \frac{1}{\sqrt{3}}\begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix}, \qquad
u_2 = \frac{1}{\sqrt{2}}\begin{bmatrix} -1 \\ 1 \\ 0 \end{bmatrix}, \qquad
u_3 = \sqrt{\frac{2}{3}}\begin{bmatrix} -\tfrac{1}{2} \\ -\tfrac{1}{2} \\ 1 \end{bmatrix}
\]
form an orthonormal basis for R³ consisting of eigenvectors for A. Notice that u1 is automatically perpendicular to u2 and u3 as the theory predicts.
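Library eigensolvers produce such a basis directly, even when an eigenvalue is repeated. The sketch below (Python with NumPy; an illustrative check, not the text's own notation) does this for the matrix of Example 233.

```python
import numpy as np

A = np.array([[-1.0, 1.0, 1.0],
              [ 1.0, -1.0, 1.0],
              [ 1.0, 1.0, -1.0]])

w, V = np.linalg.eigh(A)     # eigenvalues ascending; columns are eigenvectors
print(w)                     # expect [-2., -2., 1.]

# Even with the repeated eigenvalue -2, the columns form an orthonormal basis:
print(np.allclose(V.T @ V, np.eye(3)))       # expect True
print(np.allclose(A @ V, V @ np.diag(w)))    # A V = V D: expect True
```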

The Gram–Schmidt Process In Example 233, we used a special case of a more general algorithm in order to construct an orthonormal basis of eigenvectors. The algorithm, called the Gram–Schmidt Process, works as follows. Suppose
\[
\{v_1, v_2, \dots, v_k\}
\]
is a linearly independent set spanning a certain subspace W. We construct an orthonormal basis for W as follows. Let
\[
v_1' = v_1
\]
\[
v_2' = v_2 - \frac{(v_1', v_2)}{(v_1', v_1')}v_1'
\]
\[
v_3' = v_3 - \frac{(v_1', v_3)}{(v_1', v_1')}v_1' - \frac{(v_2', v_3)}{(v_2', v_2')}v_2'
\]
\[
\vdots
\]
\[
v_k' = v_k - \sum_{j=1}^{k-1}\frac{(v_j', v_k)}{(v_j', v_j')}v_j'.
\]
It is not hard to see that each new vj′ is perpendicular to those constructed before it. For example,
\[
(v_1', v_3') = (v_1', v_3) - \frac{(v_1', v_3)}{(v_1', v_1')}(v_1', v_1') - \frac{(v_2', v_3)}{(v_2', v_2')}(v_1', v_2').
\]
However, we may suppose that we already know that (v1′, v2′) = 0 (from the previous stage of the construction), so the above becomes
\[
(v_1', v_3') = (v_1', v_3) - (v_1', v_3) = 0.
\]
The same argument works at each stage.
It is also not hard to see that at each stage, replacing vj by vj′ in
\[
\{v_1', v_2', \dots, v_{j-1}', v_j\}
\]
does not change the subspace spanned by the set. Hence, for j = k, we conclude that {v1′, v2′, . . . , vk′} is a basis for W consisting of mutually perpendicular vectors. Finally, to complete the process simply divide each vj′ by its length:
\[
u_j = \frac{1}{|v_j'|}v_j'.
\]
Then {u1, . . . , uk} is an orthonormal basis for W.
Example 234 Consider the subspace of R⁴ spanned by
\[
v_1 = \begin{bmatrix} -1 \\ 1 \\ 0 \\ 1 \end{bmatrix}, \qquad
v_2 = \begin{bmatrix} -1 \\ 1 \\ 1 \\ 0 \end{bmatrix}, \qquad
v_3 = \begin{bmatrix} 1 \\ 0 \\ 0 \\ 1 \end{bmatrix}.
\]
Then
\[
v_1' = \begin{bmatrix} -1 \\ 1 \\ 0 \\ 1 \end{bmatrix}
\]
\[
v_2' = \begin{bmatrix} -1 \\ 1 \\ 1 \\ 0 \end{bmatrix} - \frac{2}{3}\begin{bmatrix} -1 \\ 1 \\ 0 \\ 1 \end{bmatrix}
= \begin{bmatrix} -\tfrac{1}{3} \\ \tfrac{1}{3} \\ 1 \\ -\tfrac{2}{3} \end{bmatrix}
\]
\[
v_3' = \begin{bmatrix} 1 \\ 0 \\ 0 \\ 1 \end{bmatrix} - \frac{0}{3}\begin{bmatrix} -1 \\ 1 \\ 0 \\ 1 \end{bmatrix} - \frac{-1}{5/3}\begin{bmatrix} -\tfrac{1}{3} \\ \tfrac{1}{3} \\ 1 \\ -\tfrac{2}{3} \end{bmatrix}
= \begin{bmatrix} \tfrac{4}{5} \\ \tfrac{1}{5} \\ \tfrac{3}{5} \\ \tfrac{3}{5} \end{bmatrix}.
\]
Normalizing, we get
\[
u_1 = \frac{1}{\sqrt{3}}\begin{bmatrix} -1 \\ 1 \\ 0 \\ 1 \end{bmatrix},
\qquad
u_2 = \frac{3}{\sqrt{15}}\begin{bmatrix} -\tfrac{1}{3} \\ \tfrac{1}{3} \\ 1 \\ -\tfrac{2}{3} \end{bmatrix}
= \frac{1}{\sqrt{15}}\begin{bmatrix} -1 \\ 1 \\ 3 \\ -2 \end{bmatrix},
\qquad
u_3 = \frac{5}{\sqrt{35}}\begin{bmatrix} \tfrac{4}{5} \\ \tfrac{1}{5} \\ \tfrac{3}{5} \\ \tfrac{3}{5} \end{bmatrix}
= \frac{1}{\sqrt{35}}\begin{bmatrix} 4 \\ 1 \\ 3 \\ 3 \end{bmatrix}.
\]
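The process is easy to automate. Below is a short implementation sketch (Python with NumPy; the function name gram_schmidt is ours, not the text's) that reproduces the vectors of Example 234. Because each vector is normalized as it is produced, the projection coefficients simplify, and the components are subtracted one at a time, a variant often called modified Gram–Schmidt, which behaves somewhat better in floating point arithmetic.

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize a linearly independent list of vectors.

    Implements the process described above: subtract from each vector its
    components along the previously constructed vectors, then normalize.
    """
    basis = []
    for v in vectors:
        w = v.astype(float).copy()
        for u in basis:
            w -= np.dot(u, w) * u        # remove the component along u
        basis.append(w / np.linalg.norm(w))
    return basis

v1 = np.array([-1.0, 1.0, 0.0, 1.0])
v2 = np.array([-1.0, 1.0, 1.0, 0.0])
v3 = np.array([1.0, 0.0, 0.0, 1.0])

u1, u2, u3 = gram_schmidt([v1, v2, v3])
print(u2 * np.sqrt(15))   # expect about [-1, 1, 3, -2]
print(u3 * np.sqrt(35))   # expect about [4, 1, 3, 3]
```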

Exercises for 12.5.

1. Apply the Gram–Schmidt Process to each of the following sets of vectors.
\[
\text{(a)}\ \left\{\begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix}, \begin{bmatrix} 2 \\ 1 \\ 0 \end{bmatrix}\right\}
\qquad
\text{(b)}\ \left\{\begin{bmatrix} 1 \\ 0 \\ 2 \\ 0 \end{bmatrix}, \begin{bmatrix} 1 \\ 1 \\ 0 \\ 1 \end{bmatrix}, \begin{bmatrix} 0 \\ 2 \\ 1 \\ -1 \end{bmatrix}\right\}.
\]

2. Let {v1, v2, v3} be a linearly independent set. Suppose {v1′, v2′, v3′} is the set obtained (before normalizing) by the Gram–Schmidt Process. Show that none of the vj′ is zero.
The generalization of this to an arbitrary linearly independent set is one reason the Gram–Schmidt Process works. The vectors produced by that process are mutually perpendicular provided they are non-zero, and so they form a linearly independent set. Since they are in the subspace W spanned by the original set of vectors and there are just enough of them, they must form a basis for W.

3. Find an orthonormal basis of eigenvectors for
\[
A = \begin{bmatrix} 1 & 1 & 1 \\ 1 & 1 & 1 \\ 1 & 1 & 1 \end{bmatrix}.
\]
4. Find an orthonormal basis of eigenvectors for
\[
A = \begin{bmatrix} -1 & k & k \\ k & -1 & k \\ k & k & -1 \end{bmatrix}.
\]
Hint: 2k − 1 is an eigenvalue.
Use the results to solve the differential equation
\[
\frac{d^2x}{dt^2} = Ax \quad\text{where}\quad x = \begin{bmatrix} \theta_1 \\ \theta_2 \\ \theta_3 \end{bmatrix}.
\]
This system describes small oscillations of the triple pendulum system pictured below if the units are chosen so that m = 1 and L = g. (What is k?) Find the normal modes.

[Figure: three pendulums of length L and mass m hanging in a row, with angular displacements θ1, θ2, θ3.]

Note that there are at least two obvious normal modes, and if you choose the appropriate eigenvectors for those modes, you can determine the corresponding eigenvalues, one of which happens to be 2k − 1.
12.6 Change of Coordinates and the Principal Axis Theorem

One way to understand the Principal Axis Theorem and other such theorems about a special choice of basis is to think of how a given problem would be expressed relative to that basis. For example, if we look at the linear operator L defined by L(x) = Ax, then if {v1, v2, . . . , vn} is a basis of eigenvectors for A, we have by definition L(vi) = λivi. Thus, for any vector x, its coordinates x1′, x2′, . . . , xn′ with respect to this basis are the coefficients in
\[
x = v_1x_1' + v_2x_2' + \dots + v_nx_n'
= \begin{bmatrix} v_1 & v_2 & \dots & v_n \end{bmatrix}
\begin{bmatrix} x_1' \\ x_2' \\ \vdots \\ x_n' \end{bmatrix},
\]
so we have
\[
L(x) = L(v_1)x_1' + L(v_2)x_2' + \dots + L(v_n)x_n'
= v_1\lambda_1x_1' + v_2\lambda_2x_2' + \dots + v_n\lambda_nx_n'
= \begin{bmatrix} v_1 & v_2 & \dots & v_n \end{bmatrix}
\begin{bmatrix} \lambda_1x_1' \\ \lambda_2x_2' \\ \vdots \\ \lambda_nx_n' \end{bmatrix}.
\]
Thus, the effect of L on the coordinates of a vector with respect to such a basis is quite simple: each coordinate is just multiplied by the corresponding eigenvalue. (See Chapter X, Section 8 to review the concept of coordinates with respect to a basis.)
To study this in greater detail, we need to talk a bit more about changes of coordinates. Although the theory is quite general, we shall concentrate on Rⁿ and Cⁿ. In either of these vector spaces, we start implicitly with the standard basis {e1, e2, . . . , en}. The entries in a vector x may be thought of as the coordinates x1, x2, . . . , xn of the vector with respect to that basis. Suppose {v1, v2, . . . , vn} is another basis. As above, the coordinates of x with respect to the new basis are obtained by solving
\[
x = \begin{bmatrix} v_1 & v_2 & \dots & v_n \end{bmatrix}x' \tag{218}
\]
for
\[
x' = \begin{bmatrix} x_1' \\ x_2' \\ \vdots \\ x_n' \end{bmatrix}.
\]
Let
\[
P = \begin{bmatrix} v_1 & v_2 & \dots & v_n \end{bmatrix}.
\]
Then the relation (218) becomes
\[
x = Px' \tag{219}
\]
which may be thought of as a rule relating the 'old' coordinates of a vector to its 'new' coordinates. P is called the change of coordinates matrix, and its jth column is vj, which may also be thought of as the set of 'old' coordinates of the jth 'new' basis vector.

(219) is backwards in that the 'old' coordinates are expressed in terms of the 'new' coordinates. However, it is easy to turn this around. Since the columns of P are linearly independent, P is invertible and we may write instead
\[
x' = P^{-1}x.
\]
These rules have been stated for the case in which we change from the standard basis to some other basis, but they work quite generally for any change of basis. (They even work in cases where there is no obvious 'standard basis'.) Just use the rule enunciated above: the jth column of P is the set of 'old' coordinates of the jth 'new' basis vector.
Example 235 Suppose in R² we pick a new set of coordinate axes by rotating each of the old axes through angle θ in the counterclockwise direction. Call the old coordinates (x1, x2) and the new coordinates (x1′, x2′). According to the above discussion, the columns of the change of basis matrix P come from the old coordinates of the new basis vectors, i.e., of unit vectors along the new axes. From the diagram, these are
\[
\begin{bmatrix} \cos\theta \\ \sin\theta \end{bmatrix}, \qquad
\begin{bmatrix} -\sin\theta \\ \cos\theta \end{bmatrix}.
\]

[Figure: the x1′, x2′ axes obtained by rotating the x1, x2 axes counterclockwise through the angle θ.]
Hence,
\[
\begin{bmatrix} x_1 \\ x_2 \end{bmatrix}
= \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}
\begin{bmatrix} x_1' \\ x_2' \end{bmatrix}.
\]
The change of basis matrix is easy to invert in this case. (Use the special rule which applies to 2 × 2 matrices.)
\[
\begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}^{-1}
= \frac{1}{\cos^2\theta + \sin^2\theta}
\begin{bmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{bmatrix}
= \begin{bmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{bmatrix}
\]
(You could also have obtained this by using the matrix for rotation through angle −θ.) Hence, we may express the 'new' coordinates in terms of the 'old' coordinates through the relation
\[
\begin{bmatrix} x_1' \\ x_2' \end{bmatrix}
= \begin{bmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \end{bmatrix}.
\]
The significance of the Principal Axis Theorem is clarified somewhat by thinking in terms of changes of coordinates. Suppose A is diagonalizable and {v1, v2, . . . , vn} is a basis of eigenvectors for A. Suppose P = [v1 v2 . . . vn] is the corresponding change of basis matrix. We showed in Chapter XI, Section 8 that
\[
P^{-1}AP = D \tag{221}
\]
where D is a diagonal matrix with the eigenvalues of A on the diagonal. To see how this might be used, consider a second order system of the form
\[
\frac{d^2x}{dt^2} = Ax.
\]
Assume we make the change of coordinates
\[
x = Px'.
\]
Then
\[
\frac{d^2Px'}{dt^2} = APx'
\]
\[
P\frac{d^2x'}{dt^2} = APx'
\]
\[
\frac{d^2x'}{dt^2} = P^{-1}APx' = Dx'.
\]
However, since D is diagonal, this last equation may be written as n scalar equations
\[
\frac{d^2x_j'}{dt^2} = \lambda_jx_j', \qquad j = 1, 2, \dots, n.
\]
In the original coordinates, the motions of the particles are 'coupled' since the motion of each particle may affect the motion of the other particles. In the new coordinate system, these motions are 'decoupled'. If we do this for a normal modes problem, the new coordinates are called normal coordinates. Each xj′ may be thought of as the displacement of one of n fictitious particles, each of which oscillates independently of the others in one of n mutually perpendicular directions. The physical significance in terms of the original particles of each normal coordinate is a bit murky, but they presumably represent underlying structure of some importance.

Example 236 Recall the normal modes problem in Section 3, Example 1.

[Figure: the two equal masses m joined to each other and to the walls by three springs of constant k, with displacements x1 and x2.]
Since the masses are equal, the problem can be reformulated as
\[
\frac{d^2x}{dt^2} = \frac{k}{m}\begin{bmatrix} -2 & 1 \\ 1 & -2 \end{bmatrix}x.
\]
This doesn't change anything in the solution process, and a basis of eigenvectors for the coefficient matrix is as before
\[
\left\{ v_1 = \begin{bmatrix} 1 \\ 1 \end{bmatrix},\ v_2 = \begin{bmatrix} -1 \\ 1 \end{bmatrix} \right\}.
\]
If we divide the vectors by their lengths, we obtain the orthonormal basis
\[
\left\{ \frac{1}{\sqrt{2}}\begin{bmatrix} 1 \\ 1 \end{bmatrix},\ \frac{1}{\sqrt{2}}\begin{bmatrix} -1 \\ 1 \end{bmatrix} \right\}.
\]
This in turn leads to the change of basis matrix
\[
P = \begin{bmatrix} \tfrac{1}{\sqrt{2}} & -\tfrac{1}{\sqrt{2}} \\ \tfrac{1}{\sqrt{2}} & \tfrac{1}{\sqrt{2}} \end{bmatrix}.
\]
[Figure: the x1′, x2′ axes obtained from the x1, x2 axes by a rotation through π/4.]
If you look carefully, you will see this represents a rotation of the original x1, x2-axes through an angle π/4. However, this has nothing to do with the original geometry of the problem. x1 and x2 stand for displacements of two different particles along the same one dimensional axis. The x1, x2 plane is a fictitious configuration space in which a single point represents the pair of particles. It is not absolutely clear what a rotation of axes means for this plane, but the new normal coordinates x1′, x2′ obtained thereby give us a formalism in which the normal modes appear as decoupled oscillations.
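The decoupling is easy to see numerically. The following sketch (Python with NumPy; illustrative only, with the arbitrary normalization k = m = 1) forms PᵗAP for the matrix and basis above and confirms that it is diagonal, so the x1′, x2′ equations separate.

```python
import numpy as np

A = np.array([[-2.0, 1.0], [1.0, -2.0]])        # coefficient matrix, k/m = 1
P = np.array([[1.0, -1.0], [1.0, 1.0]]) / np.sqrt(2)

D = P.T @ A @ P            # for orthogonal P, P^t = P^{-1}
print(np.round(D, 12))     # expect diag(-1, -3): each normal coordinate obeys
                           # d^2 x_j'/dt^2 = lambda_j x_j' with lambda = -1, -3
```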

Orthogonal and Unitary Matrices You may have noticed that the matrix P obtained in Example 236 has the property P⁻¹ = Pᵗ. This is no accident. It is a consequence of the fact that its columns are mutually perpendicular unit vectors.

Theorem 12.27 Let P be an n × n real matrix. Then the columns of P form an orthonormal basis for Rⁿ if and only if P⁻¹ = Pᵗ. Similarly, if P is an n × n complex matrix, its columns form an orthonormal basis for Cⁿ if and only if $P^{-1} = \bar{P}^t$.

A matrix with this property is called orthogonal in the real case and unitary in the complex case. The complex case subsumes the real case since a real matrix is unitary if and only if it is orthogonal.
Proof. We consider the real case. (The argument in the complex case is similar except that dot products need a complex conjugation on the first factor.) Let
\[
P = \begin{bmatrix} v_1 & v_2 & \dots & v_n \end{bmatrix}.
\]
Then
\[
P^t = \begin{bmatrix} v_1^t \\ v_2^t \\ \vdots \\ v_n^t \end{bmatrix}.
\]
Hence, the j, k-entry of the product PᵗP is
\[
v_j^tv_k = (v_j, v_k).
\]
Thus, PᵗP = I if and only if
\[
(v_j, v_k) = \delta_{jk}
\]
where δjk, the 'Kronecker δ', gives the entries of the identity matrix. However, this just says that the vectors are mutually perpendicular (for j ≠ k) and have length 1 (for j = k).
Since the Principal Axis Theorem asserts that there is an orthonormal basis consisting of eigenvectors for the Hermitian matrix A, that means that the corresponding change of basis matrix is always unitary (orthogonal in the real case). Putting this in (221), we get the following equivalent form of the Principal Axis Theorem.

Theorem 12.28 If A is a complex Hermitian n × n matrix, there is a unitary matrix P such that
\[
\bar{P}^tAP = P^{-1}AP = D
\]
is diagonal. If A is real symmetric, P may be chosen to be real orthogonal.
The Proof of the Principal Axis Theorem

Proof. We know that we can always find a basis for Cn of generalized eigenvectors
for a complex n × n matrix A. The point of the Principal Axis Theorem is that if A
is Hermitian, ordinary eigenvectors suffice. The issue of orthogonality may be dealt
with separately since, for a Hermitian matrix, eigenvectors for different eigenvalues
are perpendicular and the Gram–Schmidt Process is available for repeated eigen-
values. Unfortunately there does not seem to be a simple direct way to eliminate
the possibility of generalized eigenvectors which are not eigenvectors. The proof we
shall give proceeds by induction, and it shares with many inductive proofs the fea-
ture that, while you can see that it is correct, you may not find it too enlightening.
You might want to skip the proof the first time you study this material.

We give the proof in the real case. The only difference in the complex case is that you need to put complex conjugates over the appropriate terms in the formulas.

Let A be an n × n symmetric matrix. We shall show that there is a real orthogonal n × n matrix P such that
\[
AP = PD \quad\text{or equivalently}\quad P^tAP = D
\]
where D is a diagonal matrix with the eigenvalues of A (possibly repeated) on its diagonal.

If n = 1 there really isn't anything to prove. (Take P = [1].) Suppose the theorem has been proved for (n − 1) × (n − 1) matrices. Let u1 be a unit eigenvector for A with eigenvalue λ1. Consider the subspace W consisting of all vectors perpendicular to u1. It is not hard to see that W is an n − 1 dimensional subspace. Choose (by the Gram–Schmidt Process) an orthonormal basis {w2, w3, . . . , wn} for W. Then {u1, w2, . . . , wn} is an orthonormal basis for Rⁿ, and
\[
Au_1 = u_1\lambda_1
= \underbrace{\begin{bmatrix} u_1 & w_2 & \dots & w_n \end{bmatrix}}_{P_1}
\begin{bmatrix} \lambda_1 \\ 0 \\ \vdots \\ 0 \end{bmatrix}.
\]
This gives the first column of AP1, and we want to say something about its remaining columns
\[
Aw_2, Aw_3, \dots, Aw_n.
\]
To this end, note that if w is any vector in W, then Aw is also a vector in W. For, starting with the self adjoint property (Section 4, Problem 3), we have
\[
(u_1, Aw) = (Au_1, w) = (\lambda_1u_1, w) = \lambda_1(u_1, w) = 0,
\]
which is to say, Aw is perpendicular to u1 if w is perpendicular to u1. It follows that each Awj is a linear combination just of w2, w3, . . . , wn, i.e.,
\[
Aw_j = \begin{bmatrix} u_1 & w_2 & \dots & w_n \end{bmatrix}
\begin{bmatrix} 0 \\ * \\ \vdots \\ * \end{bmatrix}
\]
where '∗' denotes some unspecified entry. Putting this all together, we see that
\[
AP_1 = P_1\underbrace{\begin{bmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & & & \\ \vdots & & A' & \\ 0 & & & \end{bmatrix}}_{A_1}
\]
where A′ is an (n − 1) × (n − 1) matrix. P1 is orthogonal (since its columns form an orthonormal basis) so
\[
P_1^tAP_1 = A_1,
\]
and it is not hard to derive from this the fact that A1 is symmetric. Because of the structure of A1, this implies that A′ is symmetric. Hence, by induction we may assume there is an (n − 1) × (n − 1) orthogonal matrix P′ such that A′P′ = P′D′ with D′ diagonal. It follows that
\[
A_1\underbrace{\begin{bmatrix} 1 & 0 & \cdots & 0 \\ 0 & & & \\ \vdots & & P' & \\ 0 & & & \end{bmatrix}}_{P_2}
= \begin{bmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & & & \\ \vdots & & A' & \\ 0 & & & \end{bmatrix}
\begin{bmatrix} 1 & 0 & \cdots & 0 \\ 0 & & & \\ \vdots & & P' & \\ 0 & & & \end{bmatrix}
= \begin{bmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & & & \\ \vdots & & A'P' & \\ 0 & & & \end{bmatrix}
= \begin{bmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & & & \\ \vdots & & P'D' & \\ 0 & & & \end{bmatrix}
= \underbrace{\begin{bmatrix} 1 & 0 & \cdots & 0 \\ 0 & & & \\ \vdots & & P' & \\ 0 & & & \end{bmatrix}}_{P_2}
\underbrace{\begin{bmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & & & \\ \vdots & & D' & \\ 0 & & & \end{bmatrix}}_{D}
= P_2D.
\]
Note that P2 is orthogonal and D is diagonal. Thus,
\[
A\underbrace{P_1P_2}_{P} = P_1A_1P_2 = P_1P_2D,
\]
or AP = PD. However, a product of orthogonal matrices is orthogonal (see the Exercises), so P is orthogonal as required.
Exercises for 12.6.

1. An inclined plane makes an angle of 30 degrees with the horizontal. Change to a coordinate system with x1′ axis parallel to the inclined plane and x2′ axis perpendicular to it. Use the change of variables formula derived in the section to find the components of the gravitational acceleration vector −gj in the new coordinate system. Compare this with what you would get by direct geometric reasoning.

2. Show that the product of two orthogonal matrices is orthogonal. Show that the product of two unitary matrices is unitary. How about the inverse of an orthogonal or unitary matrix?

3. Let
\[
A = \begin{bmatrix} 1 & -i \\ i & 1 \end{bmatrix}.
\]
Find an orthonormal basis for C² consisting of eigenvectors for A. Use this to find a unitary matrix P such that P⁻¹AP is diagonal. (The diagonal entries should be the eigenvalues.)
4. Let
\[
A = \begin{bmatrix} 1 & 2 \\ 2 & 1 \end{bmatrix}.
\]
Find a 2 × 2 orthogonal matrix P such that PᵗAP is diagonal. What are the diagonal entries?

5. Let
\[
A = \begin{bmatrix} 1 & 4 & 3 \\ 4 & 1 & 0 \\ 3 & 0 & 1 \end{bmatrix}.
\]
Find a 3 × 3 orthogonal matrix P such that PᵗAP is diagonal. What are the diagonal entries?
12.7 Classification of Conics and Quadrics

As mentioned earlier, the Principal Axis Theorem derives its name from its relation to classifying conics, quadric surfaces, and their higher dimensional analogues.

A level curve in R² defined by an equation of the form
\[
f(x) = a_{11}x_1^2 + 2a_{12}x_1x_2 + a_{22}x_2^2 = C
\]
is called a central conic. (The reason for the 2 will be clear shortly.) As we shall see, a central conic is either an ellipse or a hyperbola (for which the principal axes need not be the coordinate axes) or a degenerate 'conic' consisting of a pair of lines.

The most general conic is the locus of an arbitrary quadratic equation which may have linear as well as quadratic terms. Such curves may be studied by applying the methods discussed in this section to the quadratic terms and then completing squares to eliminate linear terms. Parabolas are included in the theory in this way.
To study a central conic, it is convenient to express the function f as follows:
\[
f(x) = (x_1a_{11} + x_2a_{21})x_1 + (x_1a_{12} + x_2a_{22})x_2
= x_1(a_{11}x_1 + a_{12}x_2) + x_2(a_{21}x_1 + a_{22}x_2),
\]
where we have introduced a21 = a12. The above expression may also be written in matrix form
\[
f(x) = \sum_{j,k=1}^{2} x_ja_{jk}x_k = x^tAx
\]
where A is the symmetric matrix of coefficients.
This may be generalized to n > 2 in a rather obvious manner. Let A be a real symmetric n × n matrix, and define
\[
f(x) = \sum_{j,k=1}^{n} x_ja_{jk}x_k = x^tAx.
\]
642 CHAPTER 12. MORE ABOUT LINEAR SYSTEMS

For n = 3 this may be written explicitly

\[
\begin{aligned}
f(\mathbf{x}) &= (x_1 a_{11} + x_2 a_{21} + x_3 a_{31})x_1 \\
&\quad + (x_1 a_{12} + x_2 a_{22} + x_3 a_{32})x_2 \\
&\quad + (x_1 a_{13} + x_2 a_{23} + x_3 a_{33})x_3 \\
&= a_{11}x_1^2 + a_{22}x_2^2 + a_{33}x_3^2 \\
&\quad + 2a_{12}x_1x_2 + 2a_{13}x_1x_3 + 2a_{23}x_2x_3.
\end{aligned}
\]

The level set defined by


f (x) = C
is called a central hyperquadric. It should be visualized as an n − 1 dimensional
curved object in Rn . For n = 3 it will be an ellipsoid or a hyperboloid (of one or
two sheets) or perhaps a degenerate ‘quadric’ like a cone. (As in the case of conics,
we must also allow linear terms to encompass paraboloids.)

If the above contentions are true, we expect the locus of the equation f (x) = C
to have certain axes of symmetry which we shall call its principal axes. It turns
out that these axes are determined by an orthonormal basis of eigenvectors for
the coefficient matrix $A$. To see this, suppose $\{u_1, u_2, \ldots, u_n\}$ is such a basis and
$P = \begin{bmatrix} u_1 & u_2 & \cdots & u_n \end{bmatrix}$ is the corresponding orthogonal matrix. By the Principal
Axis Theorem, $P^t AP = D$ is diagonal with the eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_n$ of $A$
appearing on the diagonal. Make the change of coordinates $\mathbf{x} = P\mathbf{x}'$ where $\mathbf{x}$
represents the 'old' coordinates and $\mathbf{x}'$ represents the 'new' coordinates. Then
\[
f(\mathbf{x}) = \mathbf{x}^t A \mathbf{x} = (P\mathbf{x}')^t A (P\mathbf{x}') = (\mathbf{x}')^t P^t A P \mathbf{x}' = (\mathbf{x}')^t D \mathbf{x}'.
\]

Since $D$ is diagonal, the quadratic expression on the right has no cross terms, i.e.,
\[
(\mathbf{x}')^t D \mathbf{x}' = \begin{bmatrix} x_1' & x_2' & \cdots & x_n' \end{bmatrix}
\begin{bmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & \cdots & 0 \\ \vdots & \vdots & & \vdots \\ 0 & 0 & \cdots & \lambda_n \end{bmatrix}
\begin{bmatrix} x_1' \\ x_2' \\ \vdots \\ x_n' \end{bmatrix}
= \lambda_1 (x_1')^2 + \lambda_2 (x_2')^2 + \cdots + \lambda_n (x_n')^2.
\]

In the new coordinates, the equation takes the form

\[
\lambda_1 (x_1')^2 + \lambda_2 (x_2')^2 + \cdots + \lambda_n (x_n')^2 = C
\]

and its graph is usually quite easy to describe.

Example 237 We shall determine the level curve $f(x, y) = x^2 + 4xy + y^2 = 1$.


First rewrite the equation
\[
\begin{bmatrix} x & y \end{bmatrix} \begin{bmatrix} 1 & 2 \\ 2 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = 1.
\]

Next, find the eigenvalues of the coefficient matrix by solving
\[
\det \begin{bmatrix} 1 - \lambda & 2 \\ 2 & 1 - \lambda \end{bmatrix} = (1 - \lambda)^2 - 4 = \lambda^2 - 2\lambda - 3 = 0.
\]
This equation is easy to factor, and the roots are $\lambda = 3$, $\lambda = -1$.

For $\lambda = 3$, to find the eigenvectors, we need to solve
\[
\begin{bmatrix} -2 & 2 \\ 2 & -2 \end{bmatrix} \begin{bmatrix} v_1 \\ v_2 \end{bmatrix} = 0.
\]
Reduction of the coefficient matrix yields
\[
\begin{bmatrix} -2 & 2 \\ 2 & -2 \end{bmatrix} \to \begin{bmatrix} 1 & -1 \\ 0 & 0 \end{bmatrix}
\]
with the general solution $v_1 = v_2$, $v_2$ free. A basic normalized eigenvector is
\[
u_1 = \frac{1}{\sqrt{2}} \begin{bmatrix} 1 \\ 1 \end{bmatrix}.
\]

For $\lambda = -1$, a similar calculation (which you should make) yields the basic normalized eigenvector
\[
u_2 = \frac{1}{\sqrt{2}} \begin{bmatrix} -1 \\ 1 \end{bmatrix}.
\]
(Note that $u_1 \perp u_2$ as expected.)

From this we can form the corresponding orthogonal matrix $P$ and make the change
of coordinates
\[
\begin{bmatrix} x \\ y \end{bmatrix} = P \begin{bmatrix} x' \\ y' \end{bmatrix},
\]
and, according to the above analysis, the equation of the level curve in the new
coordinate system is
\[
3(x')^2 - (y')^2 = 1.
\]
It is clear that this is a hyperbola with principal axes pointing along the new axes.
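
This classification is easy to confirm on a computer. Here is a short sketch in Python, assuming the NumPy library: one positive and one negative eigenvalue means the level curve is a hyperbola.

\begin{verbatim}
import numpy as np

# Coefficient matrix of f(x, y) = x^2 + 4xy + y^2.
A = np.array([[1.0, 2.0],
              [2.0, 1.0]])

eigenvalues, P = np.linalg.eigh(A)
print(eigenvalues)   # [-1.  3.]: mixed signs, so the conic is a hyperbola
print(P)             # the columns point along the principal axes
\end{verbatim}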

[Figure: the hyperbola $3(x')^2 - (y')^2 = 1$, with the rotated $x'$, $y'$ axes shown against the original $x$, $y$ axes.]

Example 238 Consider the quadric surface defined by
\[
x_1^2 + x_2^2 + x_3^2 - 2x_1x_3 = 1.
\]
We take
\[
f(\mathbf{x}) = x_1^2 + x_2^2 + x_3^2 - 2x_1x_3 = \begin{bmatrix} x_1 & x_2 & x_3 \end{bmatrix}
\begin{bmatrix} 1 & 0 & -1 \\ 0 & 1 & 0 \\ -1 & 0 & 1 \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}.
\]
The characteristic equation of the coefficient matrix is
\[
\det \begin{bmatrix} 1-\lambda & 0 & -1 \\ 0 & 1-\lambda & 0 \\ -1 & 0 & 1-\lambda \end{bmatrix}
= (1-\lambda)^3 - (1-\lambda) = -(\lambda - 2)(\lambda - 1)\lambda = 0.
\]
Thus, the eigenvalues are $\lambda = 2, 1, 0$.

For $\lambda = 2$, reduce
\[
\begin{bmatrix} -1 & 0 & -1 \\ 0 & -1 & 0 \\ -1 & 0 & -1 \end{bmatrix} \to \begin{bmatrix} 1 & 0 & 1 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{bmatrix}
\]
to obtain $v_1 = -v_3$, $v_2 = 0$ with $v_3$ free. Thus,
\[
v_1 = \begin{bmatrix} -1 \\ 0 \\ 1 \end{bmatrix}
\]
is a basic eigenvector for $\lambda = 2$, and
\[
u_1 = \frac{1}{\sqrt{2}} \begin{bmatrix} -1 \\ 0 \\ 1 \end{bmatrix}
\]
is a basic unit eigenvector.

Similarly, for $\lambda = 1$ reduce
\[
\begin{bmatrix} 0 & 0 & -1 \\ 0 & 0 & 0 \\ -1 & 0 & 0 \end{bmatrix} \to \begin{bmatrix} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{bmatrix}
\]
which yields $v_1 = v_3 = 0$ with $v_2$ free. Thus a basic unit eigenvector for $\lambda = 1$ is
\[
u_2 = \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix}.
\]

Finally, for $\lambda = 0$, reduce
\[
\begin{bmatrix} 1 & 0 & -1 \\ 0 & 1 & 0 \\ -1 & 0 & 1 \end{bmatrix} \to \begin{bmatrix} 1 & 0 & -1 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{bmatrix}.
\]
This yields $v_1 = v_3$, $v_2 = 0$ with $v_3$ free. Thus, a basic unit eigenvector for $\lambda = 0$ is
\[
u_3 = \frac{1}{\sqrt{2}} \begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix}.
\]

The corresponding orthogonal change of basis matrix is
\[
P = \begin{bmatrix} u_1 & u_2 & u_3 \end{bmatrix} =
\begin{bmatrix} -\frac{1}{\sqrt{2}} & 0 & \frac{1}{\sqrt{2}} \\ 0 & 1 & 0 \\ \frac{1}{\sqrt{2}} & 0 & \frac{1}{\sqrt{2}} \end{bmatrix}.
\]

Moreover, putting $\mathbf{x} = P\mathbf{x}'$, we can express the equation of the quadric surface in
the new coordinate system:
\[
2(x_1')^2 + 1(x_2')^2 + 0(x_3')^2 = 2(x_1')^2 + (x_2')^2 = 1. \tag{222}
\]

Thus it is easy to see what the level surface is: an elliptical cylinder perpendicular
to the $x_1', x_2'$ plane. The three 'principal axes' in this case are the two axes of the
ellipse in the $x_1', x_2'$ plane and the $x_3'$ axis, which is the central axis of the cylinder.

Representing the graph in the new coordinates makes it easy to understand its
geometry. Suppose, for example, that we want to find the points on the graph
which are closest to the origin. These are the points at which the $x_1'$-axis intersects
the surface, namely the points with new coordinates $x_1' = \pm\frac{1}{\sqrt{2}}$, $x_2' = x_3' = 0$. If
you want the coordinates of these points in the original coordinate system, use the
change of coordinates formula
\[
\mathbf{x} = P\mathbf{x}'.
\]

Thus, the old coordinates of the minimum point with new coordinates $(1/\sqrt{2}, 0, 0)$
are given by
\[
\begin{bmatrix} -\frac{1}{\sqrt{2}} & 0 & \frac{1}{\sqrt{2}} \\ 0 & 1 & 0 \\ \frac{1}{\sqrt{2}} & 0 & \frac{1}{\sqrt{2}} \end{bmatrix}
\begin{bmatrix} \frac{1}{\sqrt{2}} \\ 0 \\ 0 \end{bmatrix}
= \begin{bmatrix} -\frac{1}{2} \\ 0 \\ \frac{1}{2} \end{bmatrix}.
\]
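
The whole computation can be checked numerically. Here is a sketch assuming NumPy; be aware that a numerical eigensolver may order the eigenvalues differently than we did and may reverse the signs of the eigenvectors.

\begin{verbatim}
import numpy as np

A = np.array([[ 1.0, 0.0, -1.0],
              [ 0.0, 1.0,  0.0],
              [-1.0, 0.0,  1.0]])

eigenvalues, P = np.linalg.eigh(A)
print(np.round(eigenvalues, 10))   # [0. 1. 2.] (ascending order)

# The lambda = 2 axis is the last column here, so the minimum point with
# new coordinates (0, 0, 1/sqrt(2)) maps back to (-1/2, 0, 1/2), up to sign.
x_new = np.array([0.0, 0.0, 1.0 / np.sqrt(2.0)])
print(P @ x_new)
\end{verbatim}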

Exercises for 12.7.

1. Find the principal axes and classify the central conic $x^2 + xy + y^2 = 1$.

2. Identify the conic defined by $x^2 + 4xy + y^2 = 4$. Find its principal axes, and
find the points closest and furthest (if any) from the origin.

3. Identify the conic defined by $2x^2 + 72xy + 23y^2 = 50$. Find its principal axes,
and find the points closest and furthest (if any) from the origin.

4. Find the principal axes and classify the central quadric defined by
\[
x^2 - y^2 + z^2 - 4xy - 4yz = 1.
\]

5. (Optional) Classify the surface defined by
\[
x^2 + 2y^2 + z^2 + 2xy + 2yz - z = 0.
\]
Hint: This is not a central quadric. To classify it, first apply the methods of
the section to the quadratic expression $x^2 + 2y^2 + z^2 + 2xy + 2yz$ to find a new
coordinate system in which this expression has the form $\lambda_1 (x')^2 + \lambda_2 (y')^2 + \lambda_3 (z')^2$.
Use the change of coordinates formula to express $z$ in terms of $x'$, $y'$, and
$z'$ and then complete squares to eliminate all linear terms. At this point, it
should be clear what the surface is.

12.8 A Digression on Constrained Maxima and Minima

There is another approach to finding the principal axes of a conic, quadric, or


hyperquadric. Consider, for example, an ellipse in $\mathbf{R}^2$ centered at the origin. One
of the principal axes intersects the conic in the two points at greatest distance from
the origin, and the other intersects it in the two points at least distance from the
origin. Similarly, two of the three principal axes of a central ellipsoid in R3 may be
obtained in this way. Thus, if we didn’t know about eigenvalues and eigenvectors,
we might try to find the principal axes by maximizing (or minimizing) the function
giving the distance to the origin subject to the quadratic equation defining the conic
or quadric. In such a problem, we need to minimize a function assuming there are
one or more relations or constraints among the variables. In this section we shall
consider problems of this kind in general.

We start by considering the case of a single constraint. Suppose we want to max-


imize (minimize) the real valued function $f(\mathbf{x}) = f(x_1, x_2, \ldots, x_n)$ subject to the
constraint $g(\mathbf{x}) = g(x_1, x_2, \ldots, x_n) = c$. For $n = 2$, this has a simple geometric
interpretation. The locus of the equation $g(x_1, x_2) = c$ is a level curve of the
function $g$, and we want to maximize (minimize) the function $f$ on that curve. Similarly,
for $n = 3$, the level set $g(x_1, x_2, x_3) = c$ is a surface in $\mathbf{R}^3$, and we want
to maximize (minimize) $f$ on that surface. In $\mathbf{R}^n$, we call the level set defined by
$g(x_1, x_2, \ldots, x_n) = c$ a hypersurface, and the problem is to maximize (minimize)
the function f on that hypersurface.

[Figures: for $n = 2$, the level curve $g(\mathbf{x}) = c$ in the plane; for $n = 3$, the level surface $g(\mathbf{x}) = c$ in space.]

Examples Maximize $f(x, y) = x + 3y$ on the hyperbola $g(x, y) = x^2 - y^2 = 1$.

Maximize $f(x, y) = x^2 + y^2$ on the ellipse $g(x, y) = x^2 + 4y^2 = 3$. (This is easy if
you draw the picture.)

Minimize $f(x, y, z) = 2x^2 + 3xy + y^2 + xz - 4z^2$ on the sphere $g(x, y, z) = x^2 + y^2 + z^2 = 1$.

Minimize $f(x, y, z, t) = x^2 + y^2 + z^2 - t^2$ on the hypersphere $g(x, y, z, t) = x^2 + y^2 + z^2 + t^2 = 1$.

We shall concentrate on the case of $n = 3$ variables, but the reasoning for any $n$
is similar. We want to maximize (or minimize) $f(\mathbf{x})$ on a level set $g(\mathbf{x}) = c$ in
$\mathbf{R}^3$, where as usual we abbreviate $\mathbf{x} = (x_1, x_2, x_3)$. Assume that both $f$ and $g$ are
smooth functions defined on open sets in $\mathbf{R}^3$, and that the level set $g(\mathbf{x}) = c$ has a
well defined tangent plane at a potential maximum point. The latter assumption
means that the normal vector $\nabla g$ does not vanish at the point. It follows from
this assumption that every vector $\mathbf{v}$ perpendicular to $\nabla g$ at the point is a tangent
vector for some curve in the level set passing through the point. (Refer back to
the discussion of tangent planes and the implicit function theorem in Chapter III,
Section 8.)

Suppose such a curve is given by the parametric representation x = x(t).



[Figure: a curve $\mathbf{x} = \mathbf{x}(t)$ lying in the level set $g(\mathbf{x}) = c$, with its tangent vector $\mathbf{v}$ at a point.]

By the chain rule we have
\[
\frac{df}{dt} = \nabla f \cdot \frac{d\mathbf{x}}{dt} = \nabla f \cdot \mathbf{v}
\]
where $\mathbf{v} = d\mathbf{x}/dt$. If the function attains a maximum on the level set at the given
point, it also attains a maximum along this curve, so we conclude that
\[
\frac{df}{dt} = \nabla f \cdot \mathbf{v} = 0.
\]
As above, we can arrange that the vector v is any possible vector in the tangent
plane at the point. Since there is a unique direction perpendicular to the tangent
plane, that of ∇g, we conclude that ∇f is parallel to ∇g, i.e.,

\[
\nabla f(\mathbf{x}) = \lambda \nabla g(\mathbf{x}) \tag{223}
\]

for some scalar λ.

[Figure: at a maximum point $\nabla f$ is parallel to $\nabla g$; at other points of the level set it need not be.]

(223) is a necessary condition which must hold at any maximum point where $f$ and
$g$ are smooth and $\nabla g \neq 0$. (It doesn't by itself guarantee that there is a maximum at
the point. There could be a minimum or even no extreme value at all at the point.)
Taking components, we obtain 3 scalar equations for the 4 variables $x_1, x_2, x_3, \lambda$.
We would not expect, even in the best of circumstances, to get a unique solution
from this, but the defining equation for the level surface

g(x) = c

provides a 4th equation. We still won’t generally get a unique solution, but we
will usually get at most a finite number of possible solutions. Each of these can be
examined further to see if f attains a maximum (or minimum) at that point in the
level set. Notice that the variable λ plays an auxiliary role since we really only want
the coordinates of the point x. (In some applications, λ has some significance beyond
that.) This method is due to the eighteenth-century mathematician Joseph-Louis Lagrange, and
$\lambda$ is called a Lagrange multiplier.

Example 239 Suppose we want to maximize the function $f(x, y, z) = x + y - z$ on
the sphere $x^2 + y^2 + z^2 = 1$. We take $g(x, y, z) = x^2 + y^2 + z^2$. Then $\nabla f = \langle 1, 1, -1 \rangle$
and $\nabla g = \langle 2x, 2y, 2z \rangle$, so the relation $\nabla f = \lambda \nabla g$ yields
\[
\begin{aligned}
1 &= \lambda(2x) \\
1 &= \lambda(2y) \\
-1 &= \lambda(2z)
\end{aligned}
\]

to which we add the equation

\[
x^2 + y^2 + z^2 = 1.
\]

From the first three equations, we obtain
\[
x = \frac{1}{2\lambda} \qquad y = \frac{1}{2\lambda} \qquad z = -\frac{1}{2\lambda}
\]
\[
\frac{1}{4\lambda^2} + \frac{1}{4\lambda^2} + \frac{1}{4\lambda^2} = 1
\]
\[
\lambda^2 = \frac{3}{4} \qquad\text{so}\qquad \lambda = \pm\frac{\sqrt{3}}{2}.
\]

Thus we have two possible solutions. For $\lambda = \sqrt{3}/2$, we obtain the point
$(1/\sqrt{3}, 1/\sqrt{3}, -1/\sqrt{3})$, at which $f = x + y - z = \sqrt{3}$.

For $\lambda = -\sqrt{3}/2$, we obtain the point
$(-1/\sqrt{3}, -1/\sqrt{3}, 1/\sqrt{3})$, at which $f = -\sqrt{3}$.

[Figure: the sphere $x^2 + y^2 + z^2 = 1$ with the maximum and minimum points of $f$ marked.]

Since the level set $x^2 + y^2 + z^2 = 1$ is a closed bounded set, and since the function
f is continuous, both maximum and minimum values must be attained somewhere
on the level set. The only two candidates we have come up with are the two points
given above, so it is clear the first is a maximum point and the second is a minimum
point.
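
A computer algebra system can solve the Lagrange multiplier equations directly. Here is a sketch assuming the SymPy library; the symbol name lam is just an illustrative stand-in for $\lambda$.

\begin{verbatim}
import sympy as sp

x, y, z, lam = sp.symbols('x y z lam', real=True)
f = x + y - z
g = x**2 + y**2 + z**2

# grad f = lam * grad g, together with the constraint g = 1.
eqs = [sp.diff(f, v) - lam * sp.diff(g, v) for v in (x, y, z)]
eqs.append(g - 1)

for s in sp.solve(eqs, [x, y, z, lam], dict=True):
    print(s, '   f =', f.subs(s))   # f = sqrt(3) and f = -sqrt(3)
\end{verbatim}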

The method of Lagrange multipliers often leads to a set of equations which is difficult
to solve. Sometimes a great deal of ingenuity is required, so you should treat each
problem as unique and expect to have to be creative about solving it.

Example 240 Suppose we want to minimize the function $f(x, y) = x^2 + 4xy + y^2$
on the circle $x^2 + y^2 = 1$. For this problem $n = 2$, and the level set is a curve. Take
$g(x, y) = x^2 + y^2$. Then $\nabla f = \langle 2x + 4y, 4x + 2y \rangle$, $\nabla g = \langle 2x, 2y \rangle$, and $\nabla f = \lambda \nabla g$
yields the equations

2x + 4y = λ(2x)
4x + 2y = λ(2y)

to which we add

\[
x^2 + y^2 = 1.
\]

After canceling a common factor of 2, the first two equations may be written in
matrix form
\[
\begin{bmatrix} 1 & 2 \\ 2 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \lambda \begin{bmatrix} x \\ y \end{bmatrix}
\]

which says that
\[
\begin{bmatrix} x \\ y \end{bmatrix}
\]
is an eigenvector for the eigenvalue $\lambda$, and the equation $x^2 + y^2 = 1$ says it is a unit
eigenvector. You should know how to solve such problems, and we leave it to you
to make the required calculations. (See also Example 237 in the previous section
where we made these calculations in another context.) The eigenvalues are $\lambda = 3$
and $\lambda = -1$. For $\lambda = 3$, a basic unit eigenvector is
\[
u_1 = \frac{1}{\sqrt{2}} \begin{bmatrix} 1 \\ 1 \end{bmatrix},
\]
and every other eigenvector is of the form $cu_1$. The latter will be a unit vector if
and only if $|c| = 1$, i.e., $c = \pm 1$. We conclude that $\lambda = 3$ yields two solutions of the
Lagrange multiplier problem: $(1/\sqrt{2}, 1/\sqrt{2})$ and $(-1/\sqrt{2}, -1/\sqrt{2})$. At each of these
points $f(x, y) = x^2 + 4xy + y^2 = 3$.

For $\lambda = -1$, we obtain the basic unit eigenvector
\[
u_2 = \frac{1}{\sqrt{2}} \begin{bmatrix} -1 \\ 1 \end{bmatrix},
\]
and a similar analysis (which you should do) yields the two points: $(1/\sqrt{2}, -1/\sqrt{2})$
and $(-1/\sqrt{2}, 1/\sqrt{2})$. At each of these points $f(x, y) = x^2 + 4xy + y^2 = -1$.

[Figure: the circle $x^2 + y^2 = 1$ with the two maximum points and the two minimum points of $f$ marked.]

Hence, the function attains its maximum value at the first two points and its mini-
mum value at the second two.
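
As a numerical check (a sketch assuming NumPy), one can sample $f$ around the circle and compare the extreme sampled values with the eigenvalues:

\begin{verbatim}
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 1.0]])

# Sample f = x^t A x on the unit circle x = (cos t, sin t).
t = np.linspace(0.0, 2.0 * np.pi, 100001)
pts = np.vstack([np.cos(t), np.sin(t)])
f = np.einsum('ij,ij->j', pts, A @ pts)

print(f.max(), f.min())        # approximately 3 and -1
print(np.linalg.eigvalsh(A))   # [-1.  3.]
\end{verbatim}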

Example 241 Suppose we want to minimize the function $g(x, y) = x^2 + y^2$ (which
is the square of the distance to the origin) on the conic $f(x, y) = x^2 + 4xy + y^2 = 1$.
Note that this is basically the same as the previous example except that the roles
of the two functions are reversed. The Lagrange multiplier condition $\nabla g = \lambda \nabla f$ is
the same as the condition $\nabla f = (1/\lambda) \nabla g$ provided $\lambda \neq 0$. ($\lambda \neq 0$ in this case since
otherwise $\nabla g = 0$, which yields $x = y = 0$. However, $(0, 0)$ is not a point on the
conic.) We just solved that problem and found eigenvalues $1/\lambda = 3$ or $1/\lambda = -1$.
In this case, we don't need unit eigenvectors, so to avoid square roots we choose
basic eigenvectors
\[
v_1 = \begin{bmatrix} 1 \\ 1 \end{bmatrix} \qquad\text{and}\qquad v_2 = \begin{bmatrix} -1 \\ 1 \end{bmatrix}
\]

corresponding respectively to $\lambda = 3$ and $\lambda = -1$. The endpoint of $v_1$ does not lie
on the conic, but any other eigenvector for $\lambda = 3$ is of the form $cv_1$, so all we need
to do is adjust $c$ so that the point satisfies the equation $f(x, y) = x^2 + 4xy + y^2 = 1$.
Substituting $(x, y) = (c, c)$ yields $6c^2 = 1$, or $c = \pm 1/\sqrt{6}$. Thus, we obtain the
two points $(1/\sqrt{6}, 1/\sqrt{6})$ and $(-1/\sqrt{6}, -1/\sqrt{6})$. For $\lambda = -1$, substituting $(x, y) =
(-c, c)$ in the equation yields $-2c^2 = 1$, which has no solutions.

Thus, the only candidates for a minimum (or maximum) are the first pair of points:
$(1/\sqrt{6}, 1/\sqrt{6})$ and $(-1/\sqrt{6}, -1/\sqrt{6})$. A simple calculation shows these are both
$1/\sqrt{3}$ units from the origin, but without further analysis, we can't tell if this is
the maximum, the minimum, or neither. However, it is not hard to classify this
conic—see the previous section—and discover that it is a hyperbola. Hence, the
two points are minimum points.

The Rayleigh–Ritz Method Example 240 above is typical of a certain class of
Lagrange multiplier problems. Let $A$ be a real symmetric $n \times n$ matrix, and consider
the problem of maximizing (minimizing) the quadratic function $f(\mathbf{x}) = \mathbf{x}^t A \mathbf{x}$ subject
to the constraint $g(\mathbf{x}) = |\mathbf{x}|^2 = 1$. This is called the Rayleigh–Ritz problem. For
$n = 2$ or $n = 3$, the level set $|\mathbf{x}|^2 = 1$ is a circle or sphere, and for $n > 3$, it is called
a hypersphere.

Alternatively, we could reverse the roles of the functions $f$ and $g$, i.e., we could try
to maximize (minimize) the square of the distance to the origin g(x) = |x|2 on the
level set f (x) = 1. Because the Lagrange multiplier condition in either case asserts
that the two gradients ∇f and ∇g are parallel, these two problems are very closely
related. The latter problem—finding the points on a conic, quadric, or hyperquadric
furthest from (closest to) the origin—is easier to visualize, but the former problem—
maximizing or minimizing the quadratic function f on the hypersphere |x| = 1—is
easier to compute with.

Let's go about applying the Lagrange Multiplier method to the Rayleigh–Ritz problem.
The components of $\nabla g$ are easy:
\[
\frac{\partial g}{\partial x_i} = 2x_i, \qquad i = 1, 2, \ldots, n.
\]

The calculation of $\nabla f$ is harder. First write
\[
f(\mathbf{x}) = \sum_{j=1}^{n} x_j \Bigl( \sum_{k=1}^{n} a_{jk} x_k \Bigr)
\]
and then carefully apply the product rule together with $a_{jk} = a_{kj}$. The result is
\[
\frac{\partial f}{\partial x_i} = 2 \sum_{j=1}^{n} a_{ij} x_j, \qquad i = 1, 2, \ldots, n.
\]

(Work this out explicitly in the cases $n = 2$ and $n = 3$ if you don't believe it.) Thus,
the Lagrange multiplier condition $\nabla f = \lambda \nabla g$ yields the equations
\[
2 \sum_{j=1}^{n} a_{ij} x_j = \lambda(2x_i), \qquad i = 1, 2, \ldots, n,
\]
which may be rewritten in matrix form (after canceling the 2's)
\[
A\mathbf{x} = \lambda \mathbf{x}. \tag{225}
\]

To this we must add the equation of the level set

\[
g(\mathbf{x}) = |\mathbf{x}|^2 = 1.
\]

Thus, any potential solution $\mathbf{x}$ is a unit eigenvector for the matrix $A$ with eigenvalue
$\lambda$. Note also that for such a unit eigenvector, we have
\[
f(\mathbf{x}) = \mathbf{x}^t A \mathbf{x} = \mathbf{x}^t (\lambda \mathbf{x}) = \lambda \mathbf{x}^t \mathbf{x} = \lambda |\mathbf{x}|^2 = \lambda.
\]
Thus the eigenvalue is the extreme value of the quadratic function at the point on
the (hyper)sphere given by the unit eigenvector.

The upshot of this discussion is that for a real symmetric matrix $A$, the Rayleigh–Ritz
problem is equivalent to the problem of finding an orthonormal basis of eigenvectors
for A.
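
The following sketch (assuming NumPy) illustrates the equivalence: the values of $f$ on the unit hypersphere are trapped between the smallest and largest eigenvalues of $A$, and the extremes are attained at unit eigenvectors.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
B = rng.standard_normal((5, 5))
A = (B + B.T) / 2                         # a random real symmetric matrix

lams = np.linalg.eigvalsh(A)              # real eigenvalues, ascending

# Evaluate f(x) = x^t A x at many random unit vectors.
X = rng.standard_normal((5, 20000))
X /= np.linalg.norm(X, axis=0)
f = np.einsum('ij,ij->j', X, A @ X)

print(lams[0] <= f.min(), f.max() <= lams[-1])   # True True
\end{verbatim}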

The Rayleigh–Ritz method may be used to show that a real symmetric matrix has
real eigenvalues without invoking the use of complex vectors as we did previously
in Section 4. (See Theorem 12.1.) Here is an outline of the argument. The
hypersphere $g(\mathbf{x}) = |\mathbf{x}|^2 = 1$ is a closed bounded set in $\mathbf{R}^n$ for any $n$. It follows
from a basic theorem in analysis that any continuous function, in particular the
quadratic function f (x), must attain both maximum and minimum values on the
hypersphere. Hence, the Lagrange multiplier problem always has solutions, which
by the above algebra amounts to the assertion that the real symmetric matrix A
must have at least one eigenvalue. This suggests a general procedure for showing
that all the eigenvalues are real. First find the largest eigenvalue by maximizing
the quadratic function f (x) on the set |x|2 = 1. Let x = u1 be the corresponding
654 CHAPTER 12. MORE ABOUT LINEAR SYSTEMS

eigenvector. Change coordinates by choosing an orthonormal basis starting with


u1 . Then the additional basis elements will span the subspace perpendicular to u1
and we may obtain a lower dimensional quadratic function by restricting f to that
subspace. We can now repeat the process to find the next smaller real eigenvalue.
Continuing in this way, we will obtain an orthonormal basis of eigenvectors for A
and each of the corresponding eigenvalues will be real.

The Rayleigh–Ritz Method generalizes nicely for complex Hermitian matrices and
also for infinite dimensional analogues. In quantum mechanics, for example, one
considers complex valued functions $\psi(x, y, z)$ defined on $\mathbf{R}^3$ satisfying the condition
\[
\iiint_{\mathbf{R}^3} |\psi(x, y, z)|^2 \, dV < \infty.
\]

Such functions are called wave functions, and the set of all such functions form a
complex vector space. Certain operators A on this vector space represent observable
quantities, and the eigenvalues of these operators represent the possible results of
measurements of these observables. Since the vector space is infinite dimensional,
one can’t represent these operators by finite matrices, so the usual method of de-
termining eigenvalues and eigenvectors breaks down. However, one can generalize
many of the ideas we have developed here. For example, one may define the inner
product of two wave functions by the formula
\[
\langle \psi | \phi \rangle = \iiint_{\mathbf{R}^3} \overline{\psi(x, y, z)}\, \phi(x, y, z) \, dV.
\]

Then, one may determine the eigenvalues of a Hermitian operator $A$ by studying
the optimization problem for the quantity
\[
\langle \psi | A\psi \rangle
\]
subject to the condition $\langle \psi | \psi \rangle = 1$.

Lagrange Multipliers with More Than One Constraint In $\mathbf{R}^n$, suppose we
want to maximize (minimize) a function $f(\mathbf{x})$ subject to $m$ constraints
\[
g_1(\mathbf{x}) = c_1, \quad g_2(\mathbf{x}) = c_2, \quad \ldots, \quad g_m(\mathbf{x}) = c_m.
\]
We can make the scalar functions $g_i(\mathbf{x})$ the components of a vector function $\mathbf{g} :
\mathbf{R}^n \to \mathbf{R}^m$. Then the $m$ constraining equations may be summarized by a single
vector constraint
\[
\mathbf{g}(\mathbf{x}) = \mathbf{c} = \langle c_1, c_2, \ldots, c_m \rangle.
\]
In this way, we may view the constraint as defining a level set (for $\mathbf{g}$) in $\mathbf{R}^n$, and
the problem is to maximize $f$ on this level set. The level set may also be viewed as
the intersection of the $m$ hypersurfaces in $\mathbf{R}^n$ which are level sets of the component
scalar functions $g_i(\mathbf{x}) = c_i$.

Example 242 Consider the problem of finding the highest point on the curve of
intersection of the plane $x + 2y + z = 1$ with the sphere $x^2 + y^2 + z^2 = 21$.

[Figure: the plane $x + 2y + z = 1$ intersecting the sphere $x^2 + y^2 + z^2 = 21$; the level set is the circle of intersection.]

Here we take $f(x, y, z) = z$ and
\[
\mathbf{g}(x, y, z) = \begin{bmatrix} g_1(x, y, z) \\ g_2(x, y, z) \end{bmatrix}
= \begin{bmatrix} x + 2y + z \\ x^2 + y^2 + z^2 \end{bmatrix},
\qquad
\mathbf{c} = \begin{bmatrix} 1 \\ 21 \end{bmatrix}.
\]

If we assume that $f$ and $\mathbf{g}$ are smooth, then just as before we obtain
\[
\frac{df}{dt} = \nabla f \cdot \mathbf{v} = 0
\]
for every vector v tangent to a curve in the level set through the maximum point.
Every such v, since it is tangent to the level set, will be perpendicular to each of
the normal vectors ∇gi at the point, i.e.,

∇g1 · v = 0
∇g2 · v = 0
..
.
∇gm · v = 0.

If we make the gradient vectors into the rows of a matrix, this system may be
rewritten
\[
\begin{bmatrix} \nabla g_1 \\ \nabla g_2 \\ \vdots \\ \nabla g_m \end{bmatrix} \mathbf{v} = 0. \tag{226}
\]

In the case of one constraint, we assumed that $\nabla g \neq 0$ so there would be a well
defined tangent plane at the maximum point. Now, we need a more stringent
condition: the gradients at the potential maximum point
\[
\nabla g_1, \nabla g_2, \ldots, \nabla g_m
\]

should form a linearly independent set. This means that the m × n system (226)
has rank m. Hence, the solution space of all vectors v satisfying (226) is n − m-
dimensional. This solution space is called the tangent space to the level set at
the point. In these circumstances, it is possible to show (from higher dimensional
analogues of the implicit function theorem) that every vector v in this tangent space
is in fact tangent to a curve lying in the level set. Using this, we may conclude that,
at a maximum point,
∇f · v = 0
for every vector $\mathbf{v}$ in the tangent space. Consider then the $(m+1) \times n$ system
\[
\begin{bmatrix} \nabla g_1 \\ \nabla g_2 \\ \vdots \\ \nabla g_m \\ \nabla f \end{bmatrix} \mathbf{v} = 0.
\]

This cannot have rank m + 1 since it has exactly the same solution space as the
system (226). Hence, it has rank m, and the only way that could happen is if the
last row is dependent on the m previous rows, i.e.,

∇f = λ1 ∇g1 + λ2 ∇g2 + · · · + λm ∇gm . (227)

The scalars λ1 , λ2 , . . . , λm are called Lagrange multipliers.


Example 242, continued We have $\nabla f = \begin{bmatrix} 0 & 0 & 1 \end{bmatrix}$ and
\[
\begin{bmatrix} \nabla g_1 \\ \nabla g_2 \end{bmatrix} = \begin{bmatrix} 1 & 2 & 1 \\ 2x & 2y & 2z \end{bmatrix}.
\]

Hence, the Lagrange multiplier condition $\nabla f = \lambda_1 \nabla g_1 + \lambda_2 \nabla g_2$ amounts to
\[
\begin{bmatrix} 0 & 0 & 1 \end{bmatrix} = \lambda_1 \begin{bmatrix} 1 & 2 & 1 \end{bmatrix} + \lambda_2 \begin{bmatrix} 2x & 2y & 2z \end{bmatrix}
\]
which yields
\[
\begin{aligned}
\lambda_1 + 2\lambda_2 x &= 0 \\
2\lambda_1 + 2\lambda_2 y &= 0 \\
\lambda_1 + 2\lambda_2 z &= 1.
\end{aligned}
\]

To this we must add the constraints
\[
\begin{aligned}
x + 2y + z &= 1 \\
x^2 + y^2 + z^2 &= 21.
\end{aligned} \tag{228}
\]

In total, this gives 5 equations for the 5 unknowns $x, y, z, \lambda_1, \lambda_2$. We can solve these
equations by being sufficiently ingenious, but there is a short cut. The multiplier
condition just amounts to the assertion that $\{\nabla g_1, \nabla g_2, \nabla f\}$ is a dependent set.
(But, it is assumed that $\{\nabla g_1, \nabla g_2\}$ is independent.) However, this set is dependent
if and only if
\[
\det \begin{bmatrix} \nabla f \\ \nabla g_1 \\ \nabla g_2 \end{bmatrix}
= \det \begin{bmatrix} 0 & 0 & 1 \\ 1 & 2 & 1 \\ 2x & 2y & 2z \end{bmatrix} = 0,
\]
i.e. $2y - 4x = 0$, i.e. $y = 2x$.

Putting this in (228) yields
\[
\begin{gathered}
5x + z = 1 \qquad 5x^2 + z^2 = 21 \\
5x^2 + (1 - 5x)^2 = 30x^2 - 10x + 1 = 21 \\
30x^2 - 10x - 20 = 0 \\
x = 1, \; -\tfrac{2}{3}.
\end{gathered}
\]
3

Using z = 1 − 5x, y = 2x yields the following two points as possible maximum


points:
(1, 2, −4) and (−2/3, −4/3, 13/3).

It is clear that the maximum value of f (x, y, z) = z is attained at the second point.
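
The five equations can also be handed to a computer algebra system. Here is a sketch assuming SymPy; the names l1, l2 for the multipliers are illustrative.

\begin{verbatim}
import sympy as sp

x, y, z, l1, l2 = sp.symbols('x y z l1 l2', real=True)
f = z
g1 = x + 2*y + z
g2 = x**2 + y**2 + z**2

# grad f = l1 * grad g1 + l2 * grad g2, plus the two constraints.
eqs = [sp.diff(f, v) - l1 * sp.diff(g1, v) - l2 * sp.diff(g2, v)
       for v in (x, y, z)]
eqs += [g1 - 1, g2 - 21]

for s in sp.solve(eqs, [x, y, z, l1, l2], dict=True):
    print(s[x], s[y], s[z])
# The two candidates (1, 2, -4) and (-2/3, -4/3, 13/3) as found above.
\end{verbatim}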

There is one minor issue that was ignored in the above calculations. The reasoning
is only valid at points at which the ‘tangent space’ to the level set is well defined
as defined above. In this case the level set is a curve (in fact, it is a circle), and
the tangent space is 1 dimensional, i.e., it is a line. The two gradients ∇g1 , ∇g2
generally span the plane perpendicular to the tangent line, but it could happen at
some point that one of the gradients is a multiple of the other. In that case the two
level surfaces g1 (x) = c1 and g2 (x) = c2 are tangent to one another at the given
point, so we would expect some problems. For example, consider the intersection
of the hyperbolic paraboloid $z - x^2 + y^2 = 0$ with its tangent plane at the origin,
$z = 0$. This 'curve' consists of two straight lines which intersect at the origin. At any
point other than the origin, there is a well defined tangent line, i.e., whichever of
the two lines is appropriate, but at the origin there is a problem.

In general, there is no way to know that a maximum or minimum does not occur at
a point where the tangent space is not well defined. Hence, all such points must be
considered possible candidates for maximum or minimum points. In Example 242,
however, it is fairly clear geometrically that there are no such points. This can also
be confirmed analytically by seeing that $\{\nabla g_1, \nabla g_2\}$ is independent at every point
of the set. For, since $\nabla g_1 \neq 0$, the only way the pair could be dependent is by a

relation of the form $\nabla g_2 = c \nabla g_1$. This yields
\[
\begin{gathered}
\begin{bmatrix} 2x & 2y & 2z \end{bmatrix} = c \begin{bmatrix} 1 & 2 & 1 \end{bmatrix} \\
2x = c, \quad 2y = 2c, \quad 2z = c \\
x = z, \quad y = 2x,
\end{gathered}
\]
and it is easy to see these equations are not consistent with $x + 2y + z = 1$,
$x^2 + y^2 + z^2 = 21$.

Exercises for 12.8.

1. Find the maximum value of $f(x, y) = 2x + y$ subject to the constraint $x^2 + y^2 = 4$.

2. Find the minimum value of $f(x, y, z) = x^2 + y^2 + z^2$ given the constraint
$x + y + z = 10$.

3. Find the maximum and minimum values of the function $f(x, y) = x^2 + y^2$
given the constraint $x^2 + xy + y^2 = 1$.

4. Find the maximum and/or minimum value of $f(x, y, z) = x^2 - y^2 + z^2 - 4xy - 4yz$
subject to $x^2 + y^2 + z^2 = 1$.

5. The derivation of the Lagrange multiplier condition $\nabla f = \lambda \nabla g$ assumes that
$\nabla g \neq 0$, so there is a well defined tangent 'plane' at the potential maximum
or minimum point. However, a maximum or minimum could occur at a point
where $\nabla g = 0$, so all such points should also be checked. (Similarly, either $f$
or $g$ might fail to be smooth at a maximum or minimum point.) With these
remarks in mind, find where $f(x, y, z) = x^2 + y^2 + z^2$ attains its minimum
value subject to the constraint $g(x, y, z) = x^2 + y^2 - z^2 = 0$.

6. Consider, as in Example 240, the problem of maximizing $f(x, y) = x^2 + 4xy + y^2$
given the constraint $x^2 + y^2 = 1$. This is equivalent to maximizing $F(x, y) = xy$
on the circle $x^2 + y^2 = 1$. (Why?) Draw a diagram showing the circle and
selected level curves $F(x, y) = c$ of the function $F$. Can you see why $F(x, y)$
attains its maximum at $(1/\sqrt{2}, 1/\sqrt{2})$ and $(-1/\sqrt{2}, -1/\sqrt{2})$ without using
any calculus? Hint: consider how the level curves of $F$ intersect the circle
and decide from that where $F$ is increasing, and where it is decreasing on the
circle.

7. Find the maximum and minimum values (if any) of the function $f(x, y, z) =
x^2 + y^2 + z^2$ on the line of intersection of the two planes $x + y + 2z = 2$ and
$2x - y + z = 4$.

8. Find the highest point, i.e., maximize $f(x, y, z) = z$, on the ellipse which is
the intersection of the cylinder $x^2 + 2y^2 = 2$ with the plane $x + y - z = 10$.

9. For the problem of maximizing (minimizing) $f(x_1, \ldots, x_n)$ subject to constraints
$g_i(x_1, \ldots, x_n) = c_i$, $i = 1, 2, \ldots, m$, how many equations in how
many unknowns does the method of Lagrange multipliers yield?
Chapter 13

Nonlinear Systems

13.1 Introduction

So far we have concentrated almost entirely on linear differential equations. This


is appropriate for several reasons. First, most of classical physics is described by
linear equations, so knowing how to solve them is fundamental in applying the laws
of physics. Second, even in situations where nonlinear equations are necessary to
describe phenomena, it is often possible to begin to understand the solutions by
making linear approximations. Finally, linear equations are usually much easier
to study than nonlinear equations. Be that as it may, nonlinear equations and
nonlinear systems have become increasingly important in applications, and at the
same time a lot of progress has been made in understanding their solutions. In this
chapter we shall give a very brief introduction to the subject.

We start with a couple of typical examples.

Example 243 In your physics class you studied the behavior of an undamped
pendulum. To make our analysis easier, we shall assume that the pendulum consists
of a point mass $m$ connected to a fixed pivot by a massless rigid rod of length $L$.

[Figure: a pendulum of mass $m$ on a rigid rod of length $L$, displaced through the angle $\theta$; gravity $g$ acts downward.]

Then, using polar coordinates to describe the motion, we have for the tangential
acceleration
\[
a_\theta = r \frac{d^2\theta}{dt^2} + 2 \frac{dr}{dt}\frac{d\theta}{dt}.
\]
(Refer to Chapter I, Section 2 for acceleration in polar coordinates.) Since $r = L$,
$dr/dt = 0$, we obtain
\[
a_\theta = L\theta''.
\]

On the other hand, the component of acceleration in the tangential direction is


$-g \sin\theta$, so we obtain the second order differential equation
\[
\theta'' = -\frac{g}{L} \sin\theta. \tag{229}
\]
This is a second order nonlinear equation. It is usually solved as follows. Assume $\theta$
is small. Then
\[
\sin\theta = \theta + O(\theta^3),
\]
so (229) may be approximated by the second order linear equation
\[
\theta'' = -\frac{g}{L}\theta.
\]
It is easy to solve this equation:
\[
\theta = A \cos\Bigl(\sqrt{\frac{g}{L}}\, t + \delta\Bigr).
\]
However, this approximation is certainly not valid if θ is large. For example, if you
give the mass a big enough shove, it will revolve about the pivot through a complete
circuit and in the absence of friction will continue to do that indefinitely. We clearly
need another approach if we want to understand what happens in general.

Unfortunately equation (229) can’t be solved explicitly in terms of known functions.


However, it is possible to get a very good qualitative understanding of its solutions.
To explore this, we consider the equivalent first order system. Let $x_1 = \theta$ and
$x_2 = \theta'$. Then $x_2' = \theta'' = -(g/L)\sin\theta = -(g/L)\sin x_1$. Hence, the desired system
is
\[
\begin{aligned}
x_1' &= x_2 \\
x_2' &= -\frac{g}{L}\sin x_1
\end{aligned}
\]
or, in vector form,
\[
\mathbf{x}' = \mathbf{f}(\mathbf{x}) = \begin{bmatrix} x_2 \\ -\frac{g}{L}\sin x_1 \end{bmatrix}. \tag{230}
\]

As noted earlier, we can't find $\mathbf{x} = \mathbf{x}(t)$ explicitly as a function of $t$, but it is
possible to learn quite a lot about the solutions by looking at the geometry of
the solution curves. In this case, we can describe the geometry by eliminating $t$
from the differential equations. We have
\[
\frac{dx_2}{dx_1} = \frac{dx_2/dt}{dx_1/dt} = -\frac{g}{L}\,\frac{\sin x_1}{x_2}.
\]
This equation may be solved by separation of variables. I leave the details to you,
but the general solution is

\[
Lx_2^2 - 2g\cos x_1 = C. \tag{231}
\]



This equation may also be derived quite easily from the law of conservation of
energy. Namely, multiplying by $mL/2$ and expressing everything in terms of $\theta$
yields
\[
\frac{1}{2}m(L\theta')^2 - mgL\cos\theta = C.
\]
The first term on the left represents the kinetic energy and the second the potential energy.

(231) gives a family of curves called the orbits of the system. (The term ‘orbit’
arises from celestial mechanics which has provided much of the motivation for the
study of nonlinear systems.) You can sketch these orbits by hand, but a computer
program will make it quite a bit easier. Such a diagram is called a phase portrait
of the system. (The term ‘phase’ is used because the x1 , x2 -plane is called the
phase plane for historical reasons.) Note that (231) exhibits the orbits as the level
curves of a function, but it does not tell you the directions that solutions 'flow' along
those orbits. It is easy to determine these directions by examining the vector $\dfrac{d\mathbf{x}}{dt}$
at typical points. For example, if $0 < x_1 < \pi$ and $x_2 > 0$, then from (230), we see
that $\dfrac{dx_1}{dt} = x_2 > 0$ and $\dfrac{dx_2}{dt} = -\dfrac{g}{L}\sin x_1 < 0$. It follows that the solutions move
along the orbits downward and to the right in that region.
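
A computer makes the sketching easy, since the orbits are just the level curves of the left side of (231). Here is a minimal sketch assuming NumPy and Matplotlib, with the illustrative values $g = 9.8$ and $L = 1$.

\begin{verbatim}
import numpy as np
import matplotlib.pyplot as plt

g, L = 9.8, 1.0   # illustrative values

x1 = np.linspace(-2.0 * np.pi, 2.0 * np.pi, 400)
x2 = np.linspace(-8.0, 8.0, 400)
X1, X2 = np.meshgrid(x1, x2)

# Orbits are the level curves L*x2^2 - 2*g*cos(x1) = C.
C = L * X2**2 - 2.0 * g * np.cos(X1)
plt.contour(X1, X2, C, levels=30)
plt.xlabel('x1 = theta')
plt.ylabel("x2 = theta'")
plt.title('Phase portrait of the undamped pendulum')
plt.show()
\end{verbatim}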

[Figure: phase portrait of the undamped pendulum in the $x_1, x_2$ plane.]

Examination of the phase portrait exhibits some interesting phenomena. First look
at the vaguely elliptical orbits which circle the origin. These represent periodic
solutions in which the pendulum swings back and forth. (To see that, it suffices to
follow what happens to θ = x1 as the solution moves around an orbit in the phase
plane.)

[Figure: a closed orbit in the phase plane, and the corresponding periodic graph of $x_1 = \theta$ as a function of $t$.]

Next look at the origin. This represents the constant solution $x_1 = 0$, $x_2 = 0$, in
which the pendulum does not move at all ($\mathbf{x}(t) = 0$ for all $t$). This may be obtained
from (231) by taking $C = -2g$. There are two other constant solutions represented
in the phase portrait: for $C = 2g$ we obtain $x_1 = \pi$, $x_2 = 0$ or $x_1 = -\pi$, $x_2 = 0$.
These represent the same physical situation; the pendulum is precariously balanced
on top of the rod ($\theta = \pi$ or $\theta = -\pi$). This physical situation is an example
of an unstable equilibrium. Given a slight change in its position $\theta$ or velocity $\theta'$,
the pendulum will move off the balance point, and the corresponding solution will
eventually move quite far from the equilibrium point. On the other hand, the
constant solution $x_1 = x_2 = 0$ represents a stable equilibrium.

[Figure: the pendulum hanging downward (stable) and balanced upright (unstable).]

Consider the two orbits which appear to connect the two unstable equilibrium
points. Neither of the equilibrium points is actually part of either orbit. For, once

the pendulum is in equilibrium it will stay there forever, and if it is not in equi-
librium, it won’t ever get there. You should think this out carefully for yourself
and try to understand what actually happens to the pendulum for each of these
solutions.

The remaining orbits represent motions in which the pendulum swings around the
pivot in circuits which repeat indefinitely. (The orbits don’t appear to repeat in
the phase plane, but you should remember that values of x1 = θ differing by 2π
represent the same physical configuration of the pendulum.)

It should be noted in passing that different solutions of the system of differential


equations can produce the same orbit. Indeed, we can start a solution off at t = t0
at any point x0 on an orbit, and the resulting solution will trace out that orbit.
Solutions obtained this way are the same except for a shift in the time scale.

Example 244 In the study of ecological systems, one is often interested in the
dynamics of populations. Earlier in this course we considered the growth or decline
of a single population. Consider now two populations x and y where the size of each
depends on the other. For example, x might represent the number of caribou present
in a given geographical region and y might represent the number of wolves which
prey on the caribou. This is a so-called prey–predator problem. The mathematical
model for such an interaction is often expressed as a system of differential equations
of the form

\[
\begin{aligned}
\frac{dx}{dt} &= px - qxy \\
\frac{dy}{dt} &= -ry + sxy
\end{aligned} \tag{232}
\]

where p, q, r, s are positive constants. The justification for such a model is as follows.
In absence of predators, the prey will follow a Malthusian law dx/dt = px where p
is the birthrate of the prey. However, in the presence of predators, there will be an
additional term limiting the rate of growth of x which depends on the likelihood of
an encounter between prey and predator. This likelihood is assumed to be propor-
tional to the product xy of the two population sizes. Similarly, without prey, it is
assumed that the population of predators will decline according to the Malthusian
law dy/dt = −ry, but then the term sxy is added to account for the rate of popula-
tion growth for the predators which can be supported from the existing prey. Note
that this model is derived from rather simple minded considerations. Even so, the
predictions of such a model may correspond quite well with observations. However,
one should bear in mind that there may be other models which work just as well.

As in the previous example, the system (232) can't be solved explicitly as a function
of $t$, but we can get a pretty good description of its phase portrait. The relation
\[
\frac{dy}{dx} = \frac{dy/dt}{dx/dt} = \frac{-ry + sxy}{px - qxy}
\]

yields
\[
\begin{gathered}
(ry - sxy)\,dx + (px - qxy)\,dy = 0 \\
(r - sx)\frac{dx}{x} + (p - qy)\frac{dy}{y} = 0 \\
r\ln x - sx + p\ln y - qy = C \\
\ln x^r y^p - (sx + qy) = C.
\end{gathered}
\]

These curves may be graphed (after choosing plausible values of the constants
p, q, r, s).

[Figure: phase portrait in the first quadrant. The closed orbits encircle the equilibrium $(r/s, p/q)$; the positive $y$ axis ($x = 0$, $y = Ce^{-rt}$) and the positive $x$ axis ($y = 0$, $x = Ce^{pt}$) are also orbits.]

We only included orbits in the first quadrant since those are the only ones that have
significance for population problems.

Note that there are two constant solutions. First, x = y = 0 is certainly an


equilibrium point. The other constant solution may be determined from (232) by
setting
\[
\begin{aligned}
\frac{dx}{dt} &= px - qxy = 0 \\
\frac{dy}{dt} &= -ry + sxy = 0.
\end{aligned}
\]
For, anything obtained this way will certainly be constant and also a solution of
(232). In this case, we get
\[
\begin{aligned}
x(p - qy) &= 0 \\
y(r - sx) &= 0.
\end{aligned}
\]

From the first equation x = 0 or y = p/q. From the second y = 0 or x = r/s.


However, x = 0 is not consistent with x = r/s and similarly y = 0 is not consistent
with y = p/q, so we obtain a second constant solution x = r/s, y = p/q. It
represents an equilibrium in which both populations stay fixed at non-zero values.

The positive $x$ axis is an orbit, and it corresponds to the situation where there
are no predators ($x = Ce^{pt}$, $y = 0$). Note that this orbit does not contain the
origin, but $\lim_{t\to-\infty} x(t) = 0$. Similarly, the positive $y$ axis represents the situation
with no prey ($x = 0$, $y = Ce^{-rt}$). This shows that the origin represents an unstable
equilibrium. On the other hand, the point $(r/s, p/q)$ represents a stable equilibrium.
(Can you see why?)

The remaining orbits correspond to solutions in which each population varies pe-
riodically. However, the exact relation between the times of maximum population
for prey and predator may be quite subtle.

[Figure: an orbit in the phase plane, and the two populations $x(t)$ and $y(t)$ as functions of time, oscillating about the equilibrium values $r/s$ and $p/q$.]
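
You can produce such graphs yourself by integrating (232) numerically. Here is a sketch assuming NumPy, SciPy, and Matplotlib, with the illustrative choice $p = q = r = s = 1$ and an arbitrary starting population.

\begin{verbatim}
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import solve_ivp

p = q = r = s = 1.0   # illustrative constants

def field(t, u):
    x, y = u
    return [p * x - q * x * y, -r * y + s * x * y]

sol = solve_ivp(field, (0.0, 15.0), [2.0, 0.5], dense_output=True, rtol=1e-8)
t = np.linspace(0.0, 15.0, 2000)
x, y = sol.sol(t)

plt.plot(t, x, label='prey x(t)')
plt.plot(t, y, label='predators y(t)')
plt.legend(); plt.xlabel('t'); plt.show()
\end{verbatim}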

The general nonlinear first order system has the form
\[
\frac{d\mathbf{x}}{dt} = \mathbf{f}(\mathbf{x}, t)
\]
where $\mathbf{f}(\mathbf{x}, t)$ is a vector valued function taking values in $\mathbf{R}^n$ and $\mathbf{x} = \mathbf{x}(t)$ is an
($n$-dimensional) vector valued solution. It is often the case, as in both examples,
that $\mathbf{f}$ doesn't depend explicitly on $t$. Such systems are called time independent
or autonomous. The critical points of a system are those points $\mathbf{a}$ satisfying
$\mathbf{f}(\mathbf{a}, t) = 0$ for all $t$, or, in the autonomous case, just $\mathbf{f}(\mathbf{a}) = 0$. Each critical point
corresponds to a constant solution, $\mathbf{x}(t) = \mathbf{a}$ for all $t$. The behavior of solutions

near a critical point in the phase portrait of the system tells us something about the
stability of the equilibrium represented by the critical point. In the examples we
noted at least two different types of behavior near a critical point.

[Figure: orbits near a saddle point; an unstable equilibrium.]

This situation occurred in the pendulum example at the unstable equilibria. Some
solutions approach the critical point and then depart, some solutions approach the
critical point asymptotically as $t \to \infty$, and some do the reverse as $t \to -\infty$. Such a
critical point is called a saddle point.

[Figure: closed orbits near a center; a stable equilibrium.]

The above situation occurred in both examples. Nearby solutions repeat periodically.
Such a critical point is called a center.

There are many other possibilities.

Example 245 Consider a damped rigid pendulum with damping dependent on


velocity. Newton’s second law results in a differential equation of the form
\[
\theta'' = -\frac{g}{L}\sin\theta - a\theta'
\]

where $a > 0$. It is fairly clear how the damping will affect the behavior of such
a pendulum. There will still be unstable equilibria in the phase plane which correspond
to the pendulum precariously balanced on end ($x_1 = \theta$ equal to an odd
multiple of $\pi$ and $x_2 = \theta' = 0$). There will be stable equilibria in the phase plane
which correspond to the pendulum hanging downward at rest ($x_1 = \theta$ equal to an
even multiple of $\pi$ and $x_2 = \theta' = 0$). Near the stable equilibrium, the pendulum
will oscillate with decreasing amplitude (and velocity), so the corresponding orbits
in the phase plane will spiral inward toward the equilibrium point and approach it
asymptotically. Here is what the phase portrait looks like in general.

[Figure: phase portrait of the damped pendulum.]

The above physical reasoning is convincing, but it is also useful to have a more
mathematical approach. To this end, we convert the second order equation for $\theta$
to the system
\[
\begin{aligned}
x_1' &= x_2 \\
x_2' &= -\frac{g}{L}\sin x_1 - ax_2.
\end{aligned} \tag{233}
\]
The critical points are $(n\pi, 0)$ where $n = 0, \pm 1, \pm 2, \ldots$. The method we used for
sketching the phase portrait of the undamped pendulum doesn't work in this case,
but we can use what we learned previously to guide us to an understanding of the
behavior near the critical point $(0, 0)$ as follows. Let
\[
U = \frac{1}{2}x_2^2 - \frac{g}{L}\cos x_1.
\]
This quantity was constant in the undamped case by the law of conservation of
energy. Since some energy is lost because of the damping term, U is no longer

constant. Indeed, we have
\[
\begin{aligned}
\frac{dU}{dt} &= x_2 x_2' + \frac{g}{L}\sin x_1 \, x_1' \\
&= x_2\Bigl(-\frac{g}{L}\sin x_1 - ax_2\Bigr) + \frac{g}{L}\sin x_1 \, x_2 \\
&= -a x_2^2.
\end{aligned}
\]
Thus, $dU/dt < 0$ except where the path crosses the $x_1$-axis (i.e., $x_2 = 0$). However,
at points on the $x_1$-axis, the velocity $d\mathbf{x}/dt$ (given by (233)) is directed off the axis.
Using this information, it is possible to see that $U$ decreases steadily along any orbit
of the damped system as the orbit descends inward crossing the level curves
\[
Lx_2^2 - 2g\cos x_1 = 2LU = C
\]
of the undamped system. In the limit, the orbit approaches the critical point at the
origin. This confirms the physical interpretation described above near the critical
point $x_1 = \theta = 0$, $x_2 = \theta' = 0$. Here is what the phase portrait looks like in general.

x
2
Orbit of damped system Level curves U = c
orbits of undamped sytem

x
1

As noted above, the orbits around the critical point at the origin spiral in toward
the critical point and approach it asymptotically. In general, such a critical point
is called a focus. In this case, the critical point represents a stable equilibrium,
but, in general, the reverse situation where the orbits spiral out from the origin
is possible. Then the critical point represents an unstable equilibrium, but is still
called a focus.

Generally, we may summarize the phenomena illustrated above by calling a critical
point stable if any solution which comes sufficiently close to the critical point for one
time $t$ stays close to it for all subsequent time. Otherwise, we shall call the critical
point unstable. If all nearby solutions approach the critical point asymptotically as
$t \to \infty$, it is called asymptotically stable.

As we shall see, the behavior of the phase portrait near a critical point can often
be reduced to a problem in linear algebra.

Exercises for 13.1.

1. In each case find the critical points of the indicated system.

(a) $x_1' = x_1 - x_1x_2$, $x_2' = -3x_2 + 2x_1x_2$.
(b) $x_1' = -x_1 + 2x_1x_2$, $x_2' = x_2 - x_2^2 + x_1x_2$.
(c) $x_1' = x_1 - x_1x_2$, $x_2' = -x_2 + x_1x_2$, $x_3' = x_3 + x_1x_2$.

2. Consider the general $2 \times 2$ linear system $\mathbf{x}' = A\mathbf{x}$. Show that $x_1 = 0$, $x_2 = 0$
gives the only critical point if $\det A \neq 0$. What happens if $\det A = 0$? How
does this generalize to an $n \times n$ linear system?

3. For each of the following systems, first find the critical points. Then eliminate
$t$, solve the ensuing first order differential equation in $x_1$ and $x_2$ and sketch
some orbits. Use the system to determine the direction of 'flow' at typical
points along the orbits.

(a) $x_1' = -x_2$, $x_2' = 2x_1$.
(b) $x_1' = 2x_1 + x_2$, $x_2' = x_1 - 2x_2$.
(c) $x_1' = x_1 - x_1x_2$, $x_2' = -x_2 + x_1x_2$.

13.2 Linear Approximation

Suppose we want to study the behavior of a system
\[
\frac{d\mathbf{x}}{dt} = \mathbf{f}(\mathbf{x})
\]
near a critical point $\mathbf{x} = \mathbf{a}$. (We assume the system is time independent for simplic-
ity.) One way to do this is to approximate f (x) by a linear function in the vicinity of
the point a. In order to do this, we need a short digression about multidimensional
calculus.

The Derivative of a Function f : Rn → Rm Let f : Rn → Rm denote a smooth


function. Since f is an m-dimensional vector valued function, it may be specified

by $m$ component functions
\[
\mathbf{f} = \begin{bmatrix} f_1 \\ f_2 \\ \vdots \\ f_m \end{bmatrix}
\]
where each component is a scalar function $f_i(x_1, x_2, \ldots, x_n)$ of $n$ variables. Fix a
point $\mathbf{a} = (a_1, a_2, \ldots, a_n)$ in the domain of $\mathbf{f}$. For each component, we have the
linear approximation
\[
f_i(\mathbf{x}) = f_i(\mathbf{a}) + \nabla f_i(\mathbf{a}) \cdot (\mathbf{x} - \mathbf{a}) + o(|\mathbf{x} - \mathbf{a}|).
\]
(See Chapter III, Section 4.) We may put these together in a single vector equation
\[
\mathbf{f}(\mathbf{x}) = \begin{bmatrix} f_1(\mathbf{x}) \\ f_2(\mathbf{x}) \\ \vdots \\ f_m(\mathbf{x}) \end{bmatrix}
= \begin{bmatrix} f_1(\mathbf{a}) \\ f_2(\mathbf{a}) \\ \vdots \\ f_m(\mathbf{a}) \end{bmatrix}
+ \begin{bmatrix} \nabla f_1(\mathbf{a}) \\ \nabla f_2(\mathbf{a}) \\ \vdots \\ \nabla f_m(\mathbf{a}) \end{bmatrix} (\mathbf{x} - \mathbf{a}) + o(|\mathbf{x} - \mathbf{a}|).
\]
Let
\[
D\mathbf{f} = \begin{bmatrix} \nabla f_1 \\ \nabla f_2 \\ \vdots \\ \nabla f_m \end{bmatrix}
\]
be the $m \times n$ matrix with rows the gradients of the component functions $f_i$. The
$i, j$ entry of $D\mathbf{f}$ is $\dfrac{\partial f_i}{\partial x_j}$. Then the above equation may be written more compactly
\[
\mathbf{f}(\mathbf{x}) = \mathbf{f}(\mathbf{a}) + D\mathbf{f}(\mathbf{a})(\mathbf{x} - \mathbf{a}) + o(|\mathbf{x} - \mathbf{a}|). \tag{234}
\]
The matrix $D\mathbf{f}$ is called the derivative of the function $\mathbf{f}$. It plays the same role for
vector valued functions that the gradient plays for scalar valued functions.

Example 246 Let $m = n = 2$ and suppose
\[
\mathbf{f}(x_1, x_2) = \begin{bmatrix} x_2 \\ -\sin x_1 \end{bmatrix}.
\]
Let's consider the behavior of this function near $\mathbf{a} = (0, 0)$.

First calculate the derivative:
\[
\nabla f_1 = \begin{bmatrix} 0 & 1 \end{bmatrix}, \qquad
\nabla f_2 = \begin{bmatrix} -\cos x_1 & 0 \end{bmatrix},
\]
so
\[
D\mathbf{f}(0, 0) = \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix}.
\]

Hence, formula (234) yields
\[
\mathbf{f}(x_1, x_2) = \mathbf{f}(0, 0) + D\mathbf{f}(0, 0)\mathbf{x} + o(|\mathbf{x}|)
\approx \begin{bmatrix} 0 \\ 0 \end{bmatrix} + \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \end{bmatrix}
= \begin{bmatrix} x_2 \\ -x_1 \end{bmatrix}.
\]

Example 247 Let $m = 2$, $n = 3$ and take
\[
\mathbf{f}(x_1, x_2, x_3) = \begin{bmatrix} x_1 + 2x_2 + x_3 \\ x_1^2 + x_2^2 + x_3^2 \end{bmatrix}.
\]
Then
\[
D\mathbf{f}(x_1, x_2, x_3) = \begin{bmatrix} 1 & 2 & 1 \\ 2x_1 & 2x_2 & 2x_3 \end{bmatrix}.
\]
Suppose we want to study the behavior of $\mathbf{f}$ near the point $(1, 2, -4)$. We have
\[
\mathbf{f}(1, 2, -4) = \begin{bmatrix} 1 \\ 21 \end{bmatrix}
\qquad\text{and}\qquad
D\mathbf{f}(1, 2, -4) = \begin{bmatrix} 1 & 2 & 1 \\ 2 & 4 & -8 \end{bmatrix},
\]
so, according to (234),
\[
\mathbf{f}(x_1, x_2, x_3) \approx \begin{bmatrix} 1 \\ 21 \end{bmatrix}
+ \begin{bmatrix} 1 & 2 & 1 \\ 2 & 4 & -8 \end{bmatrix}
\begin{bmatrix} x_1 - 1 \\ x_2 - 2 \\ x_3 + 4 \end{bmatrix}.
\]
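
A computer algebra system will happily compute derivative matrices like these. Here is the same calculation sketched in SymPy (an assumed library):

\begin{verbatim}
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
f = sp.Matrix([x1 + 2*x2 + x3,
               x1**2 + x2**2 + x3**2])

Df = f.jacobian([x1, x2, x3])
print(Df)                                 # Matrix([[1, 2, 1], [2*x1, 2*x2, 2*x3]])
print(Df.subs({x1: 1, x2: 2, x3: -4}))    # Matrix([[1, 2, 1], [2, 4, -8]])
\end{verbatim}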

The linear approximation is an invaluable tool for the study of functions Rn → Rm .


We have already seen its use in a variety of circumstances for scalar valued functions.
It also arose implicitly when we were studying change of variables for multiple
integrals. For example, a change of variables in R2 may be described by a function
g : R2 → R2 and the Jacobian determinant used in the correction factor for double
integrals is just  
∂g1 ∂g1
 1 ∂x2 
det Dg = det  ∂x ∂g2 ∂g2  .
∂x1 ∂x2
A similar remark applies in $\mathbf{R}^3$, and indeed the change of variable rule may be
generalized to integrals in $\mathbf{R}^n$ by using the correction factor $|\det D\mathbf{g}|$ where $\mathbf{g} :
\mathbf{R}^n \to \mathbf{R}^n$ is the function giving the change of variables.

Analysis of Non-Linear Systems near Critical Points We want to apply the


above ideas to the analysis of an autonomous nonlinear system
\[
\frac{d\mathbf{x}}{dt} = \mathbf{f}(\mathbf{x}) \tag{235}
\]

near a critical point $\mathbf{a}$. Here, $m = n$ and $\mathbf{f} : \mathbf{R}^n \to \mathbf{R}^n$. By definition, $\mathbf{f}(\mathbf{a}) = 0$ at
a critical point, so the linear approximation gives
\[
\mathbf{f}(\mathbf{x}) = \mathbf{f}(\mathbf{a}) + D\mathbf{f}(\mathbf{a})(\mathbf{x} - \mathbf{a}) + o(|\mathbf{x} - \mathbf{a}|) = D\mathbf{f}(\mathbf{a})(\mathbf{x} - \mathbf{a}) + o(|\mathbf{x} - \mathbf{a}|).
\]
If we put this in (235) and drop the error term, we obtain
\[
\frac{d\mathbf{x}}{dt} = D\mathbf{f}(\mathbf{a})(\mathbf{x} - \mathbf{a}).
\]
This may be simplified further by the change of variables $\mathbf{y} = \mathbf{x} - \mathbf{a}$, which in essence
moves the critical point to the origin. Since $\dfrac{d\mathbf{y}}{dt} = \dfrac{d\mathbf{x}}{dt}$, this yields
\[
\frac{d\mathbf{y}}{dt} = A\mathbf{y} \tag{236}
\]
where A = Df (a) is the derivative matrix at the critical point. In this way, we
have replaced the nonlinear system (235) by the linear system (236), at least near
the critical point. If dropping the error term o(|x − a|) doesn’t affect things too
severely, we expect the behavior of solutions of the linear system to give a pretty
good idea of what happens to solutions of the nonlinear system near the critical
point. At least in principle, we know how to solve linear systems.

Example 248 Consider the pendulum system
\[
\begin{aligned}
\frac{dx_1}{dt} &= x_2 \\
\frac{dx_2}{dt} &= -\sin x_1
\end{aligned}
\]
where the units have been chosen so $g = L$. Consider the critical point $(\pi, 0)$
corresponding to the unstable equilibrium discussed earlier. We have
\[
D\mathbf{f} = \begin{bmatrix} 0 & 1 \\ -\cos x_1 & 0 \end{bmatrix}
= \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} \quad\text{at } (\pi, 0).
\]

Hence, putting $y_1 = x_1 - \pi$, $y_2 = x_2$, near the critical point the system is approximated
by the linear system
\[
\frac{d\mathbf{y}}{dt} = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}\mathbf{y}.
\]
(In the '$y$' coordinates, the critical point is at the origin.) This system is quite easy
to solve. I leave the details to you. The eigenvalues are $\lambda = 1$, $\lambda = -1$. A basis of
eigenvectors is given by
\[
\mathbf{v}_1 = \begin{bmatrix} 1 \\ 1 \end{bmatrix} \quad\text{corresponding to } \lambda = 1,
\qquad
\mathbf{v}_2 = \begin{bmatrix} -1 \\ 1 \end{bmatrix} \quad\text{corresponding to } \lambda = -1.
\]

The general solution of the linear system is
\[
\mathbf{y} = c_1 e^t \mathbf{v}_1 + c_2 e^{-t} \mathbf{v}_2 \tag{237}
\]
or
\[
\begin{aligned}
y_1 &= c_1 e^t - c_2 e^{-t} \\
y_2 &= c_1 e^t + c_2 e^{-t}.
\end{aligned}
\]

The phase portrait for the solution near $(0, 0)$ can be worked out by trying various
$c_1$ and $c_2$ and sketching the resulting orbits. However, it is easier to see what it
looks like if we put $P = \begin{bmatrix} \mathbf{v}_1 & \mathbf{v}_2 \end{bmatrix}$ and make the change of coordinates $\mathbf{y} = P\mathbf{z}$. As
in Chapter XII, Section 6, this will have the effect of diagonalizing the coefficient
matrix and yielding the system
\[
\begin{aligned}
\frac{dz_1}{dt} &= z_1 \\
\frac{dz_2}{dt} &= -z_2
\end{aligned}
\]
with solutions
\[
z_1 = c_1 e^t, \qquad z_2 = c_2 e^{-t}. \tag{238}
\]
From this, it is clear that the orbits are the family of hyperbolas given by

z1 z2 = c1 c2 = c.

This includes the case c = 0 which gives the degenerate ‘hyperbola’ consisting of
the lines z1 = 0 and z2 = 0. (The degenerate case actually consists of five orbits:
the critical point (0, 0) and the positive and negative half lines on each axis.)

[Figure: hyperbolic orbits of the saddle point, drawn in the $y_1, y_2$ plane (i.e., near $x_1 = \pi$ in the original coordinates), with the $z_1$ and $z_2$ axes along $\mathbf{v}_1$ and $\mathbf{v}_2$.]

The z1 and z2 axes in this diagram are directed respectively along the vectors v1
and v2 . The direction in which each orbit is traversed may be determined from the
explicit solutions (238).

You should compare the above picture with the phase portrait derived in Section 1
for the pendulum problem. In this example, the local analysis merely confirms what
we already knew from the phase portrait that was derived in Section 1. However, in
general, sketching the phase portrait of the nonlinear system may be very difficult
or even impossible. Hence, deriving a picture near each critical point by linear
analysis is a useful first step in understanding the whole picture.

The previous example illustrates generally what happens for a 2 dimensional system
when the eigenvalues are real and of opposite signs. In this case the critical point
is called a saddle point, and it is clearly unstable.
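
Once the eigenvalues of $D\mathbf{f}(\mathbf{a})$ are known, the classification can be automated. The following rough sketch (assuming NumPy) handles the $2 \times 2$ case; keep in mind that, as Example 252 below shows, a 'center' verdict from the linearization alone is only provisional for a nonlinear system.

\begin{verbatim}
import numpy as np

def classify(A, tol=1e-12):
    """Rough classification of the critical point of y' = A y (2 x 2 case)."""
    lam = np.linalg.eigvals(A)
    if abs(np.linalg.det(A)) < tol:
        return 'degenerate (singular matrix)'
    if np.all(np.abs(lam.imag) > tol):                  # complex eigenvalues
        return 'center' if np.all(np.abs(lam.real) < tol) else 'focus'
    if lam.real[0] * lam.real[1] < 0:                   # real, opposite signs
        return 'saddle point'
    return 'node'                                       # real, same sign

print(classify(np.array([[0.0, 1.0], [1.0, 0.0]])))    # saddle point
print(classify(np.array([[0.0, -1.0], [1.0, 0.0]])))   # center
\end{verbatim}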

Example 249 Consider the prey-predator problem described by the system
\[
\begin{aligned}
\frac{dx_1}{dt} &= x_1 - x_1x_2 \\
\frac{dx_2}{dt} &= -x_2 + x_1x_2.
\end{aligned}
\]
This assumes that the populations are being measured in bizarre units, and with
respect to these units the constants $p, q, r, s$ are all 1. This is rather unrealistic, but
it does simplify the algebra quite a lot. As before, there are two critical points in the
first quadrant: $(0, 0)$ and $(r/s, p/q) = (1, 1)$. Let's study the linear approximation
near $(1, 1)$. We have
\[
D\mathbf{f} = \begin{bmatrix} 1 - x_2 & -x_1 \\ x_2 & -1 + x_1 \end{bmatrix}
= \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix} \quad\text{at } (1, 1).
\]
Hence, the approximating linear system is
\[
\frac{d\mathbf{y}}{dt} = \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix}\mathbf{y}
\]
where $y_1 = x_1 - 1$, $y_2 = x_2 - 1$. I leave it to you to work out the solutions of this
system. The eigenvalues are $\lambda = \pm i$, so we need to use complex solutions. An
eigenvector for $\lambda = i$ is
\[
\mathbf{u} = \begin{bmatrix} i \\ 1 \end{bmatrix}
\]
and a corresponding complex solution is $e^{it}\mathbf{u}$.

As usual, we can find a linearly independent pair of real solutions by taking real
and imaginary parts. To help you work similar problems in homework, we shall do
this in slightly greater generality than necessary in the particular case. Write
\[
\mathbf{u} = \mathbf{v} + i\mathbf{w} \qquad\text{where}\qquad \mathbf{v} = \begin{bmatrix} 0 \\ 1 \end{bmatrix}
\quad\text{and}\quad \mathbf{w} = \begin{bmatrix} 1 \\ 0 \end{bmatrix}.
\]

Then
\[
e^{it}\mathbf{u} = (\cos t + i\sin t)(\mathbf{v} + i\mathbf{w})
= \mathbf{v}\cos t - \mathbf{w}\sin t + i(\mathbf{v}\sin t + \mathbf{w}\cos t).
\]
Taking real and imaginary parts yields the two solutions
\[
\mathbf{v}\cos t - \mathbf{w}\sin t = \begin{bmatrix} \mathbf{v} & \mathbf{w} \end{bmatrix}\begin{bmatrix} \cos t \\ -\sin t \end{bmatrix},
\qquad
\mathbf{v}\sin t + \mathbf{w}\cos t = \begin{bmatrix} \mathbf{v} & \mathbf{w} \end{bmatrix}\begin{bmatrix} \sin t \\ \cos t \end{bmatrix}.
\]
This suggests that we put $P = \begin{bmatrix} \mathbf{v} & \mathbf{w} \end{bmatrix}$ (so that its columns are the real and
imaginary parts of the eigenvector $\mathbf{u}$), and make the change of variables $\mathbf{y} = P\mathbf{z}$.
Then, in the '$z$' coordinate system, we obtain two linearly independent solutions
\[
\begin{bmatrix} \cos t \\ -\sin t \end{bmatrix} \qquad\text{and}\qquad \begin{bmatrix} \sin t \\ \cos t \end{bmatrix}.
\]
Any solution is then a linear combination
\[
\mathbf{z} = c_1\begin{bmatrix} \cos t \\ -\sin t \end{bmatrix} + c_2\begin{bmatrix} \sin t \\ \cos t \end{bmatrix}
= \begin{bmatrix} c_1\cos t + c_2\sin t \\ -c_1\sin t + c_2\cos t \end{bmatrix}.
\]
Now put $c_1 = A\cos\delta$, $c_2 = A\sin\delta$. The above solution takes the form
\[
\mathbf{z} = \begin{bmatrix} A\cos(t - \delta) \\ -A\sin(t - \delta) \end{bmatrix}.
\]
This gives a family of circles which are traversed clockwise with respect to the $z_1$
and $z_2$ axes. However, the $z_1$ and $z_2$ axes have orientation opposite to that of the
original axes. (Look at $\mathbf{v} = \mathbf{e}_2$ and $\mathbf{w} = \mathbf{e}_1$, which are 'unit' vectors along the new
axes.) Thus, with respect to the $y_1, y_2$ axes, the motion is counter-clockwise.

[Figure: circular orbits about the critical point $(1, 1)$ in the $x_1, x_2$ plane, with the $z_1$ and $z_2$ axes indicated; the motion is counter-clockwise in the original coordinates.]

Note that the phase portrait we derived in Section 1 for the prey-predator system
gives a similar picture near the critical point $(1, 1)$.

A similar analysis applies to any 2-dimensional real system if the eigenvalues are
complex. If the eigenvalues are purely imaginary, i.e., of the form $\pm i\omega$, then the
results are similar to what we got in Example 249. In the '$z$' coordinate system,
the orbits look like circles and they are traversed clockwise. However, the change
from the '$y$' coordinates to the '$z$' coordinates may introduce changes of scale,
different for the two axes. Hence, the orbits are really ellipses when viewed in the
original coordinate system. Also, as in Example 249, the change of coordinates may
introduce a reversal of orientation. A critical point of this type is called a center.

If the eigenvalues are not purely imaginary, then they are of the form $a \pm bi$ with
$a \neq 0$, and all solutions have the additional factor $e^{at}$. This has the effect of turning
the 'ellipses' into spirals. If $a < 0$, $e^{at} \to 0$ as $t \to \infty$, so all solutions spiral in
towards the origin. If $a > 0$, they all spiral out. In either case, the critical point is
called a focus. If $a < 0$ the critical point is stable, and if $a > 0$ it is not.

[Figure: a stable focus ($a < 0$), orbits spiraling in; an unstable focus ($a > 0$), orbits spiraling out.]

The above examples by no means exhaust all the possibilities, even in the 2-dimensional
case. In the remaining cases, the eigenvalues $\lambda_1, \lambda_2$ are real and of the same sign.
The exact nature of the phase portrait will depend on how the linear algebra
works out, but in all these cases the critical point is called a node. Here are some
pictures of nodes:

[Figures: a stable node ($\lambda_1 < \lambda_2 < 0$) and an unstable node ($\lambda_1 > \lambda_2 > 0$); stable and unstable nodes with $\lambda_1 = \lambda_2$ and two linearly independent eigenvectors; stable and unstable nodes with $\lambda_1 = \lambda_2$ where generalized eigenvectors are needed.]

Note that in each case the critical point is stable if the eigenvalues are negative.

Example 250 Consider the linear system
\[
\frac{d\mathbf{x}}{dt} = \begin{bmatrix} -3 & -1 \\ 1 & -1 \end{bmatrix}\mathbf{x}.
\]

The only critical point is the origin, and of course the linear approximation there
is just the original system.

It turns out that there is no basis of eigenvectors in this case, so we must use the
method of generalized eigenvectors. In fact, this example was worked out in Chapter
XI, Section 8, Example 2. We found there that
\[
\mathbf{x}_1 = e^{-2t}(\mathbf{e}_1 + t\mathbf{v}_2) = \begin{bmatrix} \mathbf{e}_1 & \mathbf{v}_2 \end{bmatrix}\begin{bmatrix} e^{-2t} \\ te^{-2t} \end{bmatrix},
\qquad
\mathbf{x}_2 = e^{-2t}\mathbf{v}_2 = \begin{bmatrix} \mathbf{e}_1 & \mathbf{v}_2 \end{bmatrix}\begin{bmatrix} 0 \\ e^{-2t} \end{bmatrix}
\]
form a basis for the solution space, where
\[
\mathbf{e}_1 = \begin{bmatrix} 1 \\ 0 \end{bmatrix} \qquad\text{and}\qquad
\mathbf{v}_2 = (A + 2I)\mathbf{e}_1 = \begin{bmatrix} -1 \\ 1 \end{bmatrix}.
\]

The general solution is
\[
\mathbf{x} = c_1\mathbf{x}_1 + c_2\mathbf{x}_2 = \begin{bmatrix} \mathbf{e}_1 & \mathbf{v}_2 \end{bmatrix}
\begin{bmatrix} c_1 e^{-2t} \\ c_1 te^{-2t} + c_2 e^{-2t} \end{bmatrix}.
\]
This suggests making the change of coordinates $\mathbf{x} = P\mathbf{z}$ where
\[
P = \begin{bmatrix} \mathbf{e}_1 & \mathbf{v}_2 \end{bmatrix} = \begin{bmatrix} 1 & -1 \\ 0 & 1 \end{bmatrix}.
\]

In the new coordinates, the general solution is given by
\[
\mathbf{z} = \begin{bmatrix} c_1 e^{-2t} \\ c_1 te^{-2t} + c_2 e^{-2t} \end{bmatrix},
\]
which in components is
\[
\begin{aligned}
z_1 &= c_1 e^{-2t} \\
z_2 &= c_1 te^{-2t} + c_2 e^{-2t}.
\end{aligned}
\]
As $t \to \infty$, both $z_1$ and $z_2$ approach zero because of the factor $e^{-2t}$. Also,
\[
\frac{z_2}{z_1} = t + \frac{c_2}{c_1},
\]
so for large $t$, both $z_1$ and $z_2$ have the same sign. On the other hand, for $t$ sufficiently
negative, $z_1$ and $z_2$ have opposite signs, i.e., $\mathbf{z}$ starts off either in the fourth quadrant

or the second quadrant. (Note that this argument required c1 6= 0. What does the
orbit look like if c1 = 0?)

We leave it as a challenge for you to sketch the orbits of this system. You should get
one of the diagrams sketched above. You should first do it in the z1, z2 coordinate
system. However, to interpret this in the original x1, x2 coordinates, you should
notice that the ‘z’ axes are not perpendicular. Namely, the z1-axis is the same as
the x1-axis, but the z2-axis points along the vector v2, so it makes an angle of 3π/4
with the positive x1-axis.
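As a check on the formulas in Example 250, one can compare the closed-form solution x1(t) = e^{−2t}(e1 + tv2) with a numerically computed matrix exponential. The following sketch is a verification of ours, not part of the original text, and assumes SciPy is available.

    import numpy as np
    from scipy.linalg import expm

    A = np.array([[-3.0, -1.0], [1.0, -1.0]])
    e1 = np.array([1.0, 0.0])
    v2 = (A + 2.0 * np.eye(2)) @ e1      # the generalized-eigenvector companion

    for t in (0.5, 1.0, 2.0):
        closed_form = np.exp(-2.0 * t) * (e1 + t * v2)  # x1(t) from the text
        numerical = expm(A * t) @ e1                    # e^{At} applied to x(0) = e1
        assert np.allclose(closed_form, numerical)
    print("closed-form solution agrees with e^{At} e1")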

If the matrix Df(a) is singular, the theory utilized above breaks down. The follow-
ing example indicates how that might occur.

Example 251 Consider the system


$$\frac{d\mathbf{x}}{dt} = \begin{bmatrix} -1 & 1 \\ 1 & -1 \end{bmatrix} \mathbf{x}.$$
The origin is a critical point, and since the system is linear, the linear approximation
there is the same as the system itself.

The eigenvalues turn out to be λ = −2 and λ = 0. For λ = −2, a basic eigenvector is
$$\mathbf{v}_1 = \begin{bmatrix} -1 \\ 1 \end{bmatrix}.$$
For λ = 0, a basic eigenvector is
$$\mathbf{v}_2 = \begin{bmatrix} 1 \\ 1 \end{bmatrix}.$$
Put these together to form a change of basis matrix
$$P = \begin{bmatrix} -1 & 1 \\ 1 & 1 \end{bmatrix}.$$
Then, by our theory,
$$P^{-1}\begin{bmatrix} -1 & 1 \\ 1 & -1 \end{bmatrix} P = \begin{bmatrix} -2 & 0 \\ 0 & 0 \end{bmatrix}$$
and the change of variables x = P z yields the ‘uncoupled’ system
$$\frac{dz_1}{dt} = -2z_1, \qquad \frac{dz_2}{dt} = 0.$$
The solution is
$$z_1 = C_1 e^{-2t}, \qquad z_2 = C_2.$$
This is a family of half lines perpendicular to the z2 axis. Every point on the z2 -axis
is a critical point and is approached asymptotically from either side as t → ∞.

[Figure: the z1- and z2-axes drawn in the x1, x2-plane; the orbits are half-lines parallel to the z1-axis, each approaching a critical point on the z2-axis.]
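The diagonalization used in Example 251 is easy to confirm numerically. A brief sketch of ours, assuming NumPy:

    import numpy as np

    A = np.array([[-1.0, 1.0], [1.0, -1.0]])
    P = np.array([[-1.0, 1.0], [1.0, 1.0]])   # columns are v1 and v2

    D = np.linalg.inv(P) @ A @ P
    assert np.allclose(D, np.diag([-2.0, 0.0]))
    assert np.allclose(A @ P[:, 1], 0.0)      # every multiple of v2 is a critical point
    print(D)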

In the above discussion, we have been assuming that the behavior of a non-linear
system near a critical point may be determined from the behavior of the linear
approximation. Unfortunately, that is not always the case. First of all, if the
matrix Df (a) is singular, the behavior near the critical point depends strongly on
the higher order terms which were ignored in forming the linear approximation.
Even if Df (a) is non-singular, in some cases, the higher order terms can exert
enough influence to change the nature of the critical point.

Example 252 Consider the nonlinear system

$$\frac{dx_1}{dt} = x_2 - x_1(x_1^2 + x_2^2), \qquad \frac{dx_2}{dt} = -x_1 - x_2(x_1^2 + x_2^2). \tag{239}$$

(0, 0) is clearly a critical point. (Are there any more?) Also,


$$Df = \begin{bmatrix} -3x_1^2 - x_2^2 & 1 - 2x_1x_2 \\ -1 - 2x_1x_2 & -x_1^2 - 3x_2^2 \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix} \text{ at } (0, 0).$$

Hence, the approximating linear system is


$$\frac{d\mathbf{x}}{dt} = \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix} \mathbf{x}.$$

The eigenvalues are λ = ±i, so the critical point is a center. The phase portrait of
the linear system consists of a family of closed loops centered at the origin.

On the other hand, we can solve the nonlinear system exactly in this case if we
switch to polar coordinates in the x1, x2-plane. Put x1 = r cos θ, x2 = r sin θ in
(239). We get

$$\cos\theta\,\frac{dr}{dt} - r\sin\theta\,\frac{d\theta}{dt} = r\sin\theta - r^3\cos\theta$$
$$\sin\theta\,\frac{dr}{dt} + r\cos\theta\,\frac{d\theta}{dt} = -r\cos\theta - r^3\sin\theta.$$
dt dt

Multiply the first equation by cos θ, the second by sin θ and add to obtain

$$\frac{dr}{dt} = -r^3$$
$$-\frac{dr}{r^3} = dt$$
$$\frac{1}{2r^2} = t + c$$
$$r = \frac{1}{\sqrt{2t + c_1}}$$

Similarly, multiplying the first equation by sin θ, the second by cos θ and subtracting
the first from the second yields
$$r\,\frac{d\theta}{dt} = -r.$$

However, r = 0 is the critical point which we already know yields a constant solution,
so we may assume r ≠ 0. Hence, we get


$$\frac{d\theta}{dt} = -1, \qquad \theta = -t + c_2.$$

This clearly represents a family of solutions which spiral in towards the origin,
approaching it asymptotically as t → ∞. Thus, the additional nonlinear terms
turned a center into a focus. In this case, it is a stable focus with the orbits
spiraling in toward the critical point. In other cases, the non-linear terms might
perturb things in the other direction so that the orbits would spiral out from the
origin.

[Figure: near such a critical point the orbits may remain closed loops, or spiral in toward the critical point, or spiral out from it.]
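One way to see the nonlinear effect concretely is to integrate system (239) and compare the computed distance from the origin with the exact formula r = 1/√(2t + c1) derived above. The sketch below is ours and assumes SciPy; with r(0) = 1 the constant is c1 = 1.

    import numpy as np
    from scipy.integrate import solve_ivp

    def f(t, x):
        r2 = x[0] ** 2 + x[1] ** 2
        return [x[1] - x[0] * r2, -x[0] - x[1] * r2]   # system (239)

    sol = solve_ivp(f, (0.0, 10.0), [1.0, 0.0], rtol=1e-9, atol=1e-12,
                    dense_output=True)

    for t in (1.0, 5.0, 10.0):
        r_numeric = np.linalg.norm(sol.sol(t))
        r_exact = 1.0 / np.sqrt(2.0 * t + 1.0)
        print(f"t = {t:4.1f}  r(numeric) = {r_numeric:.6f}  r(exact) = {r_exact:.6f}")

The two columns agree to several decimal places, and both tend to zero: the orbit spirals into the origin even though the linear approximation predicts closed loops.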

In general, for a two dimensional system, a center for the linear approximation can
stay a center or can become a focus in the non-linear system. Fortunately, for a two
dimensional system, if f is sufficiently smooth (i.e., C^2), and Df is non-singular at
the critical point, this is the only case in which the nonlinear terms can significantly
affect the structure of the phase portrait.

In every other non-singular case, i.e., a node, a saddle point, or a focus, a theorem of
Poincaré assures us that the linear approximation gives a true picture of the phase
portrait near the critical point.

Note that in the previous examples, the signs of the eigenvalues of the linear systems
(or the signs of the real parts of complex eigenvalues) played an important role. It is
clear why that would be the case. A basic solution will have a factor of the form e^{at},
and if a < 0, the basic solution will necessarily converge to zero as t → ∞. Hence,
if these signs are negative for both basic solutions, every solution will converge to
zero as t → ∞, so the corresponding critical point will be asymptotically stable. On
the other hand, if one eigenvalue is negative and one is positive (a saddle point),
the situation is more complicated. (What if both signs are positive?)

Needless to say, the situation is even more complicated for higher dimensional sys-
tems. It is still true that if all the eigenvalues of the linear approximation are
negative, or, if complex, have negative real parts, then all solutions near the critical
point converge to it as t → ∞, i.e., the corresponding equilibrium is asymptotically
stable. However, the basic linear algebra is considerably more complicated, so it is
not so easy to classify exactly what happens even in the linear case.

Stable Orbits and Attractors

The behavior of a nonlinear system near its critical points helps
us to understand the system, but it is certainly not the only thing of interest. For
example, the solution of the prey-predator problem yields many periodic solutions.

The orbits traced out by such solutions are stable in the following sense: any so-
lution which at some time t comes close to the orbit remains close to it for all
subsequent t. That means that if we stray a little from such an orbit, we won’t ever
get very far from it. From this perspective, a critical point is just a stable orbit
which happens to be a point. We should be interested in finding all stable orbits,
but of course it is much harder to find the non-constant ones.

Much of the theory of nonlinear systems was motivated by questions in celestial
mechanics. The general mathematical problem in that subject is the so-called n-
body problem where we attempt to describe the motion of an arbitrary number of
point masses subject to the gravitational forces between them. A complete solution
of that problem still eludes us despite intense study over several centuries. Even
fairly simple questions remain unanswered. For example, one would assume that
the Solar System as a whole will continue to behave more or less as it does now
as long as it is not disturbed by a significant perturbation such as a star passing
nearby. However, no one has been able to prove, for example, that the entire system
is ‘stable’ in the sense that it remains bounded for all time. Thus, it is conceivable
that at some point in time a planet might cease to follow its normal orbit and
leave the solar system altogether. (At least, it is conceivable to mathematicians,
who generally believe that something may happen until they prove that it can’t
happen!)

Modern developments in the study of nonlinear systems are often concerned with
‘stability’ questions such as those mentioned above. One such result is a famous
theorem of Poincaré and Bendixson. It asserts the following: if an orbit of a 2-
dimensional nonlinear system enters a bounded region in the phase plane and re-
mains there forever, and if that bounded region contains no critical points, then
either the orbit is periodic itself, or it approaches a periodic orbit asymptotically.
(If critical points were not excluded from the bounded region, the constant solu-
tions represented by those points would violate the conclusion of the theorem. Also,
the presence of critical points could ‘disrupt’ the behavior of other paths in rather
subtle ways.) The following example illustrates this phenomenon.

Example 253 Consider the system


$$\frac{dx}{dt} = -y + x(1 - x^2 - y^2), \qquad \frac{dy}{dt} = x + y(1 - x^2 - y^2).$$
This system can be solved explicitly if we switch to polar coordinates. Putting
x = r cos θ, y = r sin θ in the above system yields
$$\cos\theta\,\frac{dr}{dt} - r\sin\theta\,\frac{d\theta}{dt} = -r\sin\theta + r\cos\theta(1 - r^2)$$
$$\sin\theta\,\frac{dr}{dt} + r\cos\theta\,\frac{d\theta}{dt} = r\cos\theta + r\sin\theta(1 - r^2).$$
Multiply the first equation by sin θ and subtract it from cos θ times the second
equation to obtain
$$r\,\frac{d\theta}{dt} = r.$$
Thus, either r = 0 or
$$\frac{d\theta}{dt} = 1, \qquad \theta = t + D,$$
where D is an arbitrary constant. Similarly, multiplying the first equation by cos θ
and adding it to sin θ times the second equation yields
$$\frac{dr}{dt} = r(1 - r^2).$$
We see from this that r = 0 and r = 1 are solutions in which dr/dt = 0. (r = −1
is not a solution because by definition r ≥ 0.) r = 0 yields a critical point at the
origin. r = 1 (together with θ = t + D) yields a periodic solution for which the
orbit is a circle of radius 1 centered at the origin. If we exclude these solutions, we
may separate variables to obtain

$$\int \frac{dr}{r(1 - r^2)} = t + c_1.$$
The left-hand side may be computed by the method of partial fractions, which yields
$$\ln r - \frac{1}{2}\ln|1 - r| - \frac{1}{2}\ln(1 + r) = t + c_1.$$
I did it instead using Mathematica, which neglected to include the absolute values
in the second term, but fortunately I remembered them. (The absolute values are
not necessary for the other terms since r, r + 1 > 0.) This may be further simplified
as follows.
$$\ln\frac{r}{\sqrt{|1 - r^2|}} = t + c_1$$
$$\frac{r}{\sqrt{|1 - r^2|}} = c_2 e^t$$
$$\frac{r^2}{|1 - r^2|} = c_3 e^{2t}.$$
Note that the constant necessarily satisfies c3 > 0. We now consider two cases. If
0 < r < 1, we have
$$\frac{r^2}{1 - r^2} = c_3 e^{2t}$$
$$r^2 = c_3 e^{2t}(1 - r^2)$$
$$r^2(1 + c_3 e^{2t}) = c_3 e^{2t}$$
$$r^2 = \frac{c_3 e^{2t}}{1 + c_3 e^{2t}}.$$

Divide both numerator and denominator by c3 e^{2t} and take the square root to obtain
$$r = \frac{1}{\sqrt{Ce^{-2t} + 1}}. \tag{240}$$

If you follow what happened to the constant at each stage, you will see that the
constant we end up with satisfies C > 0.

If r > 1, we may continue instead as follows.

$$\frac{r^2}{r^2 - 1} = c_3 e^{2t}$$
$$r^2 = c_3 e^{2t}(r^2 - 1)$$
$$r^2(c_3 e^{2t} - 1) = c_3 e^{2t}$$
$$r^2 = \frac{c_3 e^{2t}}{c_3 e^{2t} - 1}.$$

Divide numerator and denominator by c3 e^{2t} to obtain
$$r = \frac{1}{\sqrt{1 - Ce^{-2t}}}. \tag{241}$$

As above, C > 0.

We may summarize all the above cases except the critical point r = 0 by writing

$$r = \frac{1}{\sqrt{1 + Ce^{-2t}}}, \qquad \theta = t + D \tag{242}$$

where C > 0 for r < 1, C = 0 for r = 1, and C < 0 for r > 1. Note that for
C > 0, the solution spirals outward from the origin and approaches the periodic
orbit r = 1 asymptotically as t → ∞. Similarly, for C < 0, the solution spirals
inward and approaches the periodic orbit r = 1 asymptotically as t → ∞. All these
paths behave as the Poincaré–Bendixson Theorem predicts.
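You can also watch the attraction happen numerically. The following sketch (ours; it assumes SciPy) starts one orbit inside the circle r = 1 and one outside; in both runs the printed radii approach 1.

    import numpy as np
    from scipy.integrate import solve_ivp

    def f(t, v):
        x, y = v
        s = 1.0 - x * x - y * y
        return [-y + x * s, x + y * s]

    for x0 in ([0.2, 0.0], [2.0, 0.0]):        # start inside, then outside, r = 1
        sol = solve_ivp(f, (0.0, 12.0), x0, rtol=1e-9, dense_output=True)
        radii = [np.linalg.norm(sol.sol(t)) for t in (0.0, 3.0, 12.0)]
        print("r(0) = %.2f -> r(3) = %.4f -> r(12) = %.6f" % tuple(radii))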

You can choose for the bounded region any annular (ring shaped) region containing
the circle r = 1. You can’t allow the critical point at the origin in the bounded
region because then the constant solution x(t) = 0 would violate the conclusion of
the theorem.

A periodic orbit that is asymptotically stable (i.e., all solutions which get sufficiently
near it approach it asymptotically) is called an attractor. In the above example the
orbit r = 1 is an attractor. On the other hand, the critical point r = 0 exhibits the
opposite kind of behavior, so it might aptly be called a repeller.
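Incidentally, the partial-fraction antiderivative used earlier can be double-checked with a computer algebra system. This sketch uses SymPy rather than the Mathematica mentioned above, and the real test is differentiating the answer, which sidesteps the absolute-value issue the remark about Mathematica warns about.

    import sympy as sp

    r = sp.symbols('r', positive=True)
    integrand = 1 / (r * (1 - r**2))
    F = sp.integrate(sp.apart(integrand, r), r)
    print(F)   # e.g. log(r) - log(r - 1)/2 - log(r + 1)/2 (the form may vary)

    # Differentiating should recover the integrand exactly.
    assert sp.simplify(sp.diff(F, r) - integrand) == 0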

Exercises for 13.2.

1. Find Df for the functions f : R^n → R^m given below. The function is given
in a different form in each case, but you should assume it has been rewritten
in standard form
$$\mathbf{f} = \begin{bmatrix} f_1 \\ f_2 \\ \vdots \\ f_m \end{bmatrix}.$$

(a) f1(x, y) = x^2 + y^2, f2(x, y) = 2xy.
(b) $$f(x_1, x_2) = \begin{bmatrix} x_1x_2 + x_2^2 \\ x_2^3 \\ x_1 + 3x_2 \end{bmatrix}$$
(c) f(r, θ) = ⟨r cos θ, r sin θ⟩.
(d) $$f(x, y, z) = \frac{1}{\rho^2}\mathbf{u}_\rho.$$

2. Let F : R^3 → R^3 denote a vector field on R^3. Show that the divergence of F
is the sum of the diagonal entries of the 3 × 3 derivative matrix DF. Relate
the curl of F to the entries of the matrix DF − (DF)^t.
Note that the sum of the diagonal entries of a square matrix A is usually
called the trace of A.
3. For each of the following 2 × 2 linear systems, first solve the system, and then
sketch the phase portrait in the vicinity of the origin (which is the critical
point). You may have to change coordinates to get a good picture.
(a) x1′ = −x2, x2′ = 6x1 − 5x2.
(b) x1′ = 3x1 − 2x2, x2′ = 5x1 − 3x2.
(c) x1′ = 2x1 + x2, x2′ = −7x1 − 3x2.
4. Sketch the phase portrait of the system in Example 252.
5. Consider the system x1′ = 2x1 − x1^2 − x1x2, x2′ = 3x2 − x2^2 − 2x1x2.
(a) Find the critical points.
(b) Find Df at each critical point.
(c) Solve the linear system dx/dt = Df(a)x at each critical point.
(d) Sketch the phase portraits resulting from part (c), and try to piece these
together into a coherent phase portrait for the original non-linear system.
6. Repeat the above steps for the system x1′ = x1^2 + x2^2 − 2, x2′ = x1x2 − 1.
7. Consider the system dx1/dt = x2, dx2/dt = −(g/L) sin x1 − ax2 for the
damped pendulum described in Section 1. Find the linear approximation
at (0, 0). Show that if a is a sufficiently small positive quantity, then (0, 0) is
a stable focus. What conclusion can you draw about the phase portrait of the
nonlinear system near (0, 0)? Compare with the analysis in Section 1.
8. Two populations compete for the same resources and the competition is de-
structive to both. The following system provides a model governing their
interaction.
$$\frac{dx}{dt} = 10x - x^2 - 6xy, \qquad \frac{dy}{dt} = 5y - y^2 - xy.$$
Note that if y = 0, then x obeys a logistic law and similarly for y if x = 0.
(a) Find the critical points.
(b) Determine the linear system approximating the nonlinear system near
each critical point.
(c) Solve each linear system at least as far as determining the eigenvalues of
the coefficient matrix.

(d) Sketch the phase portrait of each of these linear systems. You can look in
the text to determine the general form of the phase portrait from the signs of
the eigenvalues, but you may still have to figure out other details. You could
do this by finding the general solution of each linear system in an appropriate
new coordinate system, but other methods may suffice.
(e) Check that each critical point is one of the types for which the phase
portrait of the linear system is a good approximation of the phase portrait
of the nonlinear system. Try to sketch the phase portrait of the nonlinear
system in the first quadrant. Does your diagram tell you anything about the
growth or decline of the two populations?
9. With reference to Example 253, find those values of t for which the solution
r = 1/√(1 − Ce^{−2t}) is defined. Assume C > 0. What significance does this
have?
Appendices

Appendix A

Creative Commons Legal Text

Creative Commons Legal Code

Attribution-ShareAlike 3.0 Unported

CREATIVE COMMONS CORPORATION IS NOT A LAW FIRM AND DOES NOT PROVIDE
LEGAL SERVICES. DISTRIBUTION OF THIS LICENSE DOES NOT CREATE AN
ATTORNEY-CLIENT RELATIONSHIP. CREATIVE COMMONS PROVIDES THIS
INFORMATION ON AN "AS-IS" BASIS. CREATIVE COMMONS MAKES NO WARRANTIES
REGARDING THE INFORMATION PROVIDED, AND DISCLAIMS LIABILITY FOR
DAMAGES RESULTING FROM ITS USE.

License

THE WORK (AS DEFINED BELOW) IS PROVIDED UNDER THE TERMS OF THIS CREATIVE
COMMONS PUBLIC LICENSE ("CCPL" OR "LICENSE"). THE WORK IS PROTECTED BY
COPYRIGHT AND/OR OTHER APPLICABLE LAW. ANY USE OF THE WORK OTHER THAN AS
AUTHORIZED UNDER THIS LICENSE OR COPYRIGHT LAW IS PROHIBITED.

BY EXERCISING ANY RIGHTS TO THE WORK PROVIDED HERE, YOU ACCEPT AND AGREE
TO BE BOUND BY THE TERMS OF THIS LICENSE. TO THE EXTENT THIS LICENSE MAY
BE CONSIDERED TO BE A CONTRACT, THE LICENSOR GRANTS YOU THE RIGHTS
CONTAINED HERE IN CONSIDERATION OF YOUR ACCEPTANCE OF SUCH TERMS AND
CONDITIONS.

1. Definitions

a. "Adaptation" means a work based upon the Work, or upon the Work and


other pre-existing works, such as a translation, adaptation,


derivative work, arrangement of music or other alterations of a
literary or artistic work, or phonogram or performance and includes
cinematographic adaptations or any other form in which the Work may be
recast, transformed, or adapted including in any form recognizably
derived from the original, except that a work that constitutes a
Collection will not be considered an Adaptation for the purpose of
this License. For the avoidance of doubt, where the Work is a musical
work, performance or phonogram, the synchronization of the Work in
timed-relation with a moving image ("synching") will be considered an
Adaptation for the purpose of this License.
b. "Collection" means a collection of literary or artistic works, such as
encyclopedias and anthologies, or performances, phonograms or
broadcasts, or other works or subject matter other than works listed
in Section 1(f) below, which, by reason of the selection and
arrangement of their contents, constitute intellectual creations, in
which the Work is included in its entirety in unmodified form along
with one or more other contributions, each constituting separate and
independent works in themselves, which together are assembled into a
collective whole. A work that constitutes a Collection will not be
considered an Adaptation (as defined below) for the purposes of this
License.
c. "Creative Commons Compatible License" means a license that is listed
at https://creativecommons.org/compatiblelicenses that has been
approved by Creative Commons as being essentially equivalent to this
License, including, at a minimum, because that license: (i) contains
terms that have the same purpose, meaning and effect as the License
Elements of this License; and, (ii) explicitly permits the relicensing
of adaptations of works made available under that license under this
License or a Creative Commons jurisdiction license with the same
License Elements as this License.
d. "Distribute" means to make available to the public the original and
copies of the Work or Adaptation, as appropriate, through sale or
other transfer of ownership.
e. "License Elements" means the following high-level license attributes
as selected by Licensor and indicated in the title of this License:
Attribution, ShareAlike.
f. "Licensor" means the individual, individuals, entity or entities that
offer(s) the Work under the terms of this License.
g. "Original Author" means, in the case of a literary or artistic work,
the individual, individuals, entity or entities who created the Work
or if no individual or entity can be identified, the publisher; and in
addition (i) in the case of a performance the actors, singers,

musicians, dancers, and other persons who act, sing, deliver, declaim,
play in, interpret or otherwise perform literary or artistic works or
expressions of folklore; (ii) in the case of a phonogram the producer
being the person or legal entity who first fixes the sounds of a
performance or other sounds; and, (iii) in the case of broadcasts, the
organization that transmits the broadcast.
h. "Work" means the literary and/or artistic work offered under the terms
of this License including without limitation any production in the
literary, scientific and artistic domain, whatever may be the mode or
form of its expression including digital form, such as a book,
pamphlet and other writing; a lecture, address, sermon or other work
of the same nature; a dramatic or dramatico-musical work; a
choreographic work or entertainment in dumb show; a musical
composition with or without words; a cinematographic work to which are
assimilated works expressed by a process analogous to cinematography;
a work of drawing, painting, architecture, sculpture, engraving or
lithography; a photographic work to which are assimilated works
expressed by a process analogous to photography; a work of applied
art; an illustration, map, plan, sketch or three-dimensional work
relative to geography, topography, architecture or science; a
performance; a broadcast; a phonogram; a compilation of data to the
extent it is protected as a copyrightable work; or a work performed by
a variety or circus performer to the extent it is not otherwise
considered a literary or artistic work.
i. "You" means an individual or entity exercising rights under this
License who has not previously violated the terms of this License with
respect to the Work, or who has received express permission from the
Licensor to exercise rights under this License despite a previous
violation.
j. "Publicly Perform" means to perform public recitations of the Work and
to communicate to the public those public recitations, by any means or
process, including by wire or wireless means or public digital
performances; to make available to the public Works in such a way that
members of the public may access these Works from a place and at a
place individually chosen by them; to perform the Work to the public
by any means or process and the communication to the public of the
performances of the Work, including by public digital performance; to
broadcast and rebroadcast the Work by any means including signs,
sounds or images.
k. "Reproduce" means to make copies of the Work by any means including
without limitation by sound or visual recordings and the right of
fixation and reproducing fixations of the Work, including storage of a
protected performance or phonogram in digital form or other electronic

medium.

2. Fair Dealing Rights. Nothing in this License is intended to reduce,


limit, or restrict any uses free from copyright or rights arising from
limitations or exceptions that are provided for in connection with the
copyright protection under copyright law or other applicable laws.

3. License Grant. Subject to the terms and conditions of this License,


Licensor hereby grants You a worldwide, royalty-free, non-exclusive,
perpetual (for the duration of the applicable copyright) license to
exercise the rights in the Work as stated below:

a. to Reproduce the Work, to incorporate the Work into one or more


Collections, and to Reproduce the Work as incorporated in the
Collections;
b. to create and Reproduce Adaptations provided that any such Adaptation,
including any translation in any medium, takes reasonable steps to
clearly label, demarcate or otherwise identify that changes were made
to the original Work. For example, a translation could be marked "The
original work was translated from English to Spanish," or a
modification could indicate "The original work has been modified.";
c. to Distribute and Publicly Perform the Work including as incorporated
in Collections; and,
d. to Distribute and Publicly Perform Adaptations.
e. For the avoidance of doubt:

i. Non-waivable Compulsory License Schemes. In those jurisdictions in


which the right to collect royalties through any statutory or
compulsory licensing scheme cannot be waived, the Licensor
reserves the exclusive right to collect such royalties for any
exercise by You of the rights granted under this License;
ii. Waivable Compulsory License Schemes. In those jurisdictions in
which the right to collect royalties through any statutory or
compulsory licensing scheme can be waived, the Licensor waives the
exclusive right to collect such royalties for any exercise by You
of the rights granted under this License; and,
iii. Voluntary License Schemes. The Licensor waives the right to
collect royalties, whether individually or, in the event that the
Licensor is a member of a collecting society that administers
voluntary licensing schemes, via that society, from any exercise
by You of the rights granted under this License.

The above rights may be exercised in all media and formats whether now

known or hereafter devised. The above rights include the right to make
such modifications as are technically necessary to exercise the rights in
other media and formats. Subject to Section 8(f), all rights not expressly
granted by Licensor are hereby reserved.

4. Restrictions. The license granted in Section 3 above is expressly made


subject to and limited by the following restrictions:

a. You may Distribute or Publicly Perform the Work only under the terms
of this License. You must include a copy of, or the Uniform Resource
Identifier (URI) for, this License with every copy of the Work You
Distribute or Publicly Perform. You may not offer or impose any terms
on the Work that restrict the terms of this License or the ability of
the recipient of the Work to exercise the rights granted to that
recipient under the terms of the License. You may not sublicense the
Work. You must keep intact all notices that refer to this License and
to the disclaimer of warranties with every copy of the Work You
Distribute or Publicly Perform. When You Distribute or Publicly
Perform the Work, You may not impose any effective technological
measures on the Work that restrict the ability of a recipient of the
Work from You to exercise the rights granted to that recipient under
the terms of the License. This Section 4(a) applies to the Work as
incorporated in a Collection, but this does not require the Collection
apart from the Work itself to be made subject to the terms of this
License. If You create a Collection, upon notice from any Licensor You
must, to the extent practicable, remove from the Collection any credit
as required by Section 4(c), as requested. If You create an
Adaptation, upon notice from any Licensor You must, to the extent
practicable, remove from the Adaptation any credit as required by
Section 4(c), as requested.
b. You may Distribute or Publicly Perform an Adaptation only under the
terms of: (i) this License; (ii) a later version of this License with
the same License Elements as this License; (iii) a Creative Commons
jurisdiction license (either this or a later license version) that
contains the same License Elements as this License (e.g.,
Attribution-ShareAlike 3.0 US)); (iv) a Creative Commons Compatible
License. If you license the Adaptation under one of the licenses
mentioned in (iv), you must comply with the terms of that license. If
you license the Adaptation under the terms of any of the licenses
mentioned in (i), (ii) or (iii) (the "Applicable License"), you must
comply with the terms of the Applicable License generally and the
following provisions: (I) You must include a copy of, or the URI for,
the Applicable License with every copy of each Adaptation You

Distribute or Publicly Perform; (II) You may not offer or impose any
terms on the Adaptation that restrict the terms of the Applicable
License or the ability of the recipient of the Adaptation to exercise
the rights granted to that recipient under the terms of the Applicable
License; (III) You must keep intact all notices that refer to the
Applicable License and to the disclaimer of warranties with every copy
of the Work as included in the Adaptation You Distribute or Publicly
Perform; (IV) when You Distribute or Publicly Perform the Adaptation,
You may not impose any effective technological measures on the
Adaptation that restrict the ability of a recipient of the Adaptation
from You to exercise the rights granted to that recipient under the
terms of the Applicable License. This Section 4(b) applies to the
Adaptation as incorporated in a Collection, but this does not require
the Collection apart from the Adaptation itself to be made subject to
the terms of the Applicable License.
c. If You Distribute, or Publicly Perform the Work or any Adaptations or
Collections, You must, unless a request has been made pursuant to
Section 4(a), keep intact all copyright notices for the Work and
provide, reasonable to the medium or means You are utilizing: (i) the
name of the Original Author (or pseudonym, if applicable) if supplied,
and/or if the Original Author and/or Licensor designate another party
or parties (e.g., a sponsor institute, publishing entity, journal) for
attribution ("Attribution Parties") in Licensor’s copyright notice,
terms of service or by other reasonable means, the name of such party
or parties; (ii) the title of the Work if supplied; (iii) to the
extent reasonably practicable, the URI, if any, that Licensor
specifies to be associated with the Work, unless such URI does not
refer to the copyright notice or licensing information for the Work;
and (iv) , consistent with Ssection 3(b), in the case of an
Adaptation, a credit identifying the use of the Work in the Adaptation
(e.g., "French translation of the Work by Original Author," or
"Screenplay based on original Work by Original Author"). The credit
required by this Section 4(c) may be implemented in any reasonable
manner; provided, however, that in the case of a Adaptation or
Collection, at a minimum such credit will appear, if a credit for all
contributing authors of the Adaptation or Collection appears, then as
part of these credits and in a manner at least as prominent as the
credits for the other contributing authors. For the avoidance of
doubt, You may only use the credit required by this Section for the
purpose of attribution in the manner set out above and, by exercising
Your rights under this License, You may not implicitly or explicitly
assert or imply any connection with, sponsorship or endorsement by the
Original Author, Licensor and/or Attribution Parties, as appropriate,

of You or Your use of the Work, without the separate, express prior
written permission of the Original Author, Licensor and/or Attribution
Parties.
d. Except as otherwise agreed in writing by the Licensor or as may be
otherwise permitted by applicable law, if You Reproduce, Distribute or
Publicly Perform the Work either by itself or as part of any
Adaptations or Collections, You must not distort, mutilate, modify or
take other derogatory action in relation to the Work which would be
prejudicial to the Original Author’s honor or reputation. Licensor
agrees that in those jurisdictions (e.g. Japan), in which any exercise
of the right granted in Section 3(b) of this License (the right to
make Adaptations) would be deemed to be a distortion, mutilation,
modification or other derogatory action prejudicial to the Original
Author’s honor and reputation, the Licensor will waive or not assert,
as appropriate, this Section, to the fullest extent permitted by the
applicable national law, to enable You to reasonably exercise Your
right under Section 3(b) of this License (right to make Adaptations)
but not otherwise.

5. Representations, Warranties and Disclaimer

UNLESS OTHERWISE MUTUALLY AGREED TO BY THE PARTIES IN WRITING, LICENSOR


OFFERS THE WORK AS-IS AND MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY
KIND CONCERNING THE WORK, EXPRESS, IMPLIED, STATUTORY OR OTHERWISE,
INCLUDING, WITHOUT LIMITATION, WARRANTIES OF TITLE, MERCHANTIBILITY,
FITNESS FOR A PARTICULAR PURPOSE, NONINFRINGEMENT, OR THE ABSENCE OF
LATENT OR OTHER DEFECTS, ACCURACY, OR THE PRESENCE OF ABSENCE OF ERRORS,
WHETHER OR NOT DISCOVERABLE. SOME JURISDICTIONS DO NOT ALLOW THE EXCLUSION
OF IMPLIED WARRANTIES, SO SUCH EXCLUSION MAY NOT APPLY TO YOU.

6. Limitation on Liability. EXCEPT TO THE EXTENT REQUIRED BY APPLICABLE


LAW, IN NO EVENT WILL LICENSOR BE LIABLE TO YOU ON ANY LEGAL THEORY FOR
ANY SPECIAL, INCIDENTAL, CONSEQUENTIAL, PUNITIVE OR EXEMPLARY DAMAGES
ARISING OUT OF THIS LICENSE OR THE USE OF THE WORK, EVEN IF LICENSOR HAS
BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.

7. Termination

a. This License and the rights granted hereunder will terminate


automatically upon any breach by You of the terms of this License.
Individuals or entities who have received Adaptations or Collections
from You under this License, however, will not have their licenses
terminated provided such individuals or entities remain in full

compliance with those licenses. Sections 1, 2, 5, 6, 7, and 8 will


survive any termination of this License.
b. Subject to the above terms and conditions, the license granted here is
perpetual (for the duration of the applicable copyright in the Work).
Notwithstanding the above, Licensor reserves the right to release the
Work under different license terms or to stop distributing the Work at
any time; provided, however that any such election will not serve to
withdraw this License (or any other license that has been, or is
required to be, granted under the terms of this License), and this
License will continue in full force and effect unless terminated as
stated above.

8. Miscellaneous

a. Each time You Distribute or Publicly Perform the Work or a Collection,


the Licensor offers to the recipient a license to the Work on the same
terms and conditions as the license granted to You under this License.
b. Each time You Distribute or Publicly Perform an Adaptation, Licensor
offers to the recipient a license to the original Work on the same
terms and conditions as the license granted to You under this License.
c. If any provision of this License is invalid or unenforceable under
applicable law, it shall not affect the validity or enforceability of
the remainder of the terms of this License, and without further action
by the parties to this agreement, such provision shall be reformed to
the minimum extent necessary to make such provision valid and
enforceable.
d. No term or provision of this License shall be deemed waived and no
breach consented to unless such waiver or consent shall be in writing
and signed by the party to be charged with such waiver or consent.
e. This License constitutes the entire agreement between the parties with
respect to the Work licensed here. There are no understandings,
agreements or representations with respect to the Work not specified
here. Licensor shall not be bound by any additional provisions that
may appear in any communication from You. This License may not be
modified without the mutual written agreement of the Licensor and You.
f. The rights granted under, and the subject matter referenced, in this
License were drafted utilizing the terminology of the Berne Convention
for the Protection of Literary and Artistic Works (as amended on
September 28, 1979), the Rome Convention of 1961, the WIPO Copyright
Treaty of 1996, the WIPO Performances and Phonograms Treaty of 1996
and the Universal Copyright Convention (as revised on July 24, 1971).
These rights and subject matter take effect in the relevant
jurisdiction in which the License terms are sought to be enforced

according to the corresponding provisions of the implementation of


those treaty provisions in the applicable national law. If the
standard suite of rights granted under applicable copyright law
includes additional rights not granted under this License, such
additional rights are deemed to be included in the License; this
License is not intended to restrict the license of any rights under
applicable law.

Creative Commons Notice

Creative Commons is not a party to this License, and makes no warranty


whatsoever in connection with the Work. Creative Commons will not be
liable to You or any party on any legal theory for any damages
whatsoever, including without limitation any general, special,
incidental or consequential damages arising in connection to this
license. Notwithstanding the foregoing two (2) sentences, if Creative
Commons has expressly identified itself as the Licensor hereunder, it
shall have all rights and obligations of Licensor.

Except for the limited purpose of indicating to the public that the
Work is licensed under the CCPL, Creative Commons does not authorize
the use by either party of the trademark "Creative Commons" or any
related trademark or logo of Creative Commons without the prior
written consent of Creative Commons. Any permitted use will be in
compliance with Creative Commons’ then-current trademark usage
guidelines, as may be published on its website or otherwise made
available upon request from time to time. For the avoidance of doubt,
this trademark restriction does not form part of the License.

Creative Commons may be contacted at https://creativecommons.org/.


Index

Df, 672
Jm(t), 456
O-notation, 88
Ym(t), 470
Γ(x), 456
R^n, 10
uθ, 19
ur, 19
C^n, 518
C^1 functions, 95
C^2-function, 115
∇ operator in cylindrical coordinates, 280
∇ operator in polar coordinates, 280
∇ operator in spherical coordinates, 280
∇f, 87
∇f operator, 233
n-tuples, 10
o-notation, 88

absolute convergence of a series, 389
absolute value of a complex number, 343
acceleration vector, 14
alternating series, 385
amplitude of a harmonic oscillator, 61
analytic function, 403
angular frequency, 61
angular momentum, 31
approximation, linear, 84
arc length, 43
Argand diagram, 343
argument of a complex number, 343
associative law for matrices, 491
asymptotic stability for a critical point, 671
attractor, 688
autonomous system, 667
azimuthal angle, 164

back substitution, 495
Banach–Tarski paradox, 207
basis of a vector space, 522
basis, orthonormal, 625
basis, standard, 524
Bessel function, 456
Bessel function, modified, 475
Bessel function, roots of, 473
Bessel function, spherical, 475
Bessel functions of 2nd kind, 470
Bessel’s equation, 367, 452, 453, 457, 463, 467
binomial coefficients, 416
binomial theorem, 415
boundary, 97
boundary of a surface, orientation, 257
boundary, irrelevance for integrals, 127
bounded horizontally by graphs, 135
bounded radially by graphs, 146
bounded vertically by graphs, 132

cardioid, 143
center in phase portrait, 668
center of mass, 121
central conic or quadric, 641
central quadric, 72
chain rule, 99, 101
change of variables in integrals, 197
characteristic equation, 545, 563
circle of latitude, 166
circulation, 261
closed form, 293
closed set, 97
cofactor, 558
collapsing sum, 376
column space of a matrix, 533
commutative law, fails for matrix product, 492
comparison test for convergence of a series, 381
complex conjugate, 343
complex eigenvalues, 572
complex number, 343
complex number, absolute value, 343
complex number, argument, 343
complex number, imaginary part, 343
complex number, modulus, 343
complex number, real part, 343
complex scalar, 518
complex solutions of a second order differential equation, 347
components, 6
conditional convergence of a series, 389
conic, central, 641
conjugate of a complex number, 343
connected set, 222
conservation of energy, 663
conservative field, 222
conservative fields, 234
conservative vector field, 293
constraint, 646
continuous function of several variables, 77
coordinates with respect to a basis, 525
coordinates, change of, 633
coordinates, cylindrical, 157
coordinates, normal, 636
coordinates, spherical, 164
Cramer’s rule, 560
critical point of a function, 428, 434
critical point of a system, 667
critical point, asymptotic stability of, 671
critical point, center, 668, 678
critical point, center to focus, 683
critical point, focus, 670, 678
critical point, node, 678
critical point, saddle, 668
critical point, stability of, 670
critical points of a differential form, 297
critically damped harmonic oscillator, 352
cross product, 27
curl, 234
curl, interpretation of, 261
cylinder, surface integral on, 192
cylindrical cell, 159
cylindrical coordinates, 157
cylindrical coordinates, triple integral in, 158

damped harmonic oscillator, 328
damped pendulum, exact equation, 668
definite quadratic form, 430
degenerate quadratic form, 431
density, 123
derivative of a vector function, 672
derivative, partial, 78
determinant of upper triangular matrix, 551
determinant, 2 × 2, 28
determinant, 3 × 3 and vector product, 29
determinant, 3 × 3 and volume, 30
determinant, definition of, 548
diagonalizable, 589
differentiable function of several variables, 87, 94
differential, 84
differential equation, order of, 57
differential form, 291
differential, total, 89
differentiation of a power series, 398
dimension of a vector space, 524
dimension, invariance of, 526
dipole, 223
Dirac δ function, 206
Dirac delta function, 247
directed line segment, 3
direction field, 320
directional derivative, 91, 93
discontinuity, 77
discriminant of a quadratic form, 430
disk, 96
distribution, 207
divergence, 234
divergence of improper integral, 177
divergence theorem, 236
divergence theorem, plane version, 249
divergence, interpretation of, 238
domain of a function, 68
dot product, 22, 623, 624
dot product in n dimensions, 25
dot product, differentiation of, 26
double integral, 126, 132
double integral in polar coordinates, 143
drum head, vibrations of, 446
dummy variable, 307

eigenfunction, 570
eigenvalue, 544, 562
eigenvalues, complex, 572
eigenvector, 544, 562
eigenvector, generalized, 586
elementary matrices, 499
elementary row operations, 496
ellipse in polar coordinates, 144
ellipsoid, 71
elliptic paraboloid, 72
energy, conservation of, 663
equilibrium, stable, 664
error estimate for sum of a series by integral test, 380
error estimate for sum of an alternating series, 385
Euler’s equation, 367, 450
Euler’s method, 322
Euler’s method (improved), 324
Euler’s method (improved), error, 325
Euler’s method, error, 324
exact form, 292
existence theorem for second order linear equations, 330
existence theorem for systems, 540
existence theorem, first order differential equations, 317
exponential of a matrix, 580, 601

focus in phase portrait, 670
form, differential, 291
frequency, 61
frequency, angular, 61
Frobenius, method of, 453
Fubini’s theorem, 135
function space, 516
fundamental solution matrix, 599
fundamental theorem of algebra, 572
fundamental theorem of calculus, 125

Gamma function, 456
Gauss’s law, 246
Gauss–Jordan reduction, 498
Gaussian reduction, 495
generalized eigenvector, 586
generalized eigenvectors, 680
geometric series, 370
global properties, 272
gradient, 87
gradient in polar coordinates, 105
gradient normal to level surface, 102
gradient, significance of, 92
Gram–Schmidt Process, 629
graph of a function, 68
graph of function, surface integral on, 193
gravitational attraction of a sphere, 173
Green’s first identity, 240
Green’s second identity, 240
Green’s theorem, area, 253
Green’s theorem, first form, 249
Green’s theorem, second form, 250

harmonic oscillator, 350
harmonic oscillator, damped, 328
harmonic oscillator, forced, 359
harmonic oscillator, resonant frequency, 365, 366
harmonic oscillator, simple, 59
harmonic oscillator, undamped, 328
harmonic series, 370
helix, 13, 16
Hermitian matrix, 624
higher order equations, 481
homogeneous linear first order differential equation, 303
homogeneous linear second order differential equation, 330, 331
homogeneous linear second order equation, constant coefficients, 339
homogeneous system, 511
homogeneous system of differential equations, 537
hyperbolic paraboloid, 72
hyperboloid of one sheet, 71
hyperboloid of two sheets, 71
hyperquadric, 642
hypersurface, 646

identity matrix, 492
imaginary part of a complex number, 343
implicit differentiation, 111
implicit function theorem, 109
improper integral, 176
inconsistent system, 501
indefinite quadratic form, 430
independence, linear, 520
indicial equation, 454
infinite series, 84
inhomogeneous linear first order differential equation, 304
inhomogeneous linear second order differential equation, 330, 356
inhomogeneous system, 604
inhomogeneous system, particular solution, 515
initial conditions for a differential equation, 58
inner product in C^n, 623, 624
integral on surface, 189
integral test for convergence, 378
integral test, error estimate, 380
integral, change of variables in, 197
integral, double, 126
integral, improper, 176
integral, iterated, 134
integral, line, 184
integral, triple, 150
integrating factor for a differential form, 297
integration of a power series, 398
inverse of a matrix, 504
inverse square law, 102, 172
invertible matrix, 504
iterated integral, 134, 153

Jacobian, 197, 202, 673
Jordan canonical form, 596
Jordan reduction, 498

Lagrange multiplier, 649
Laplacian in curvilinear coordinates, 287
latitude, 166
Lebesgue integral, 205
left hand orientation, 6
Legendre’s equation, 367, 452
Legendre’s equations, 437
level curve, 70
level surface, 73
limit comparison test for convergence of a series, 382
line as intersection of planes, 40
line integral, 47, 184
line, parametric representation of, 33
line, symmetric equations of, 34
line, vector equation of, 33
linear approximation, 84, 672
linear combination, 520
linear combination of solutions of a linear differential equation, 333
linear dependence of solutions of a linear differential equation, 333
linear first order differential equation, 302
linear independence, 520
linear independence in R^n, 531
linear independence of solutions of a linear differential equation, 333, 335
linear operator, 519, 538
linear second order differential equation, 329
linear system of algebraic equations, 493
local properties, 272
locus, 65
logarithmic singularity, 469, 470
logistic law, 309
longitude, 166
longitudinal angle, 164

Malthus, 309
matrix, 484
matrix inverse, 504
matrix product, 487
matrix series, 581
matrix, exponential of, 580
matrix, invertible, 504
matrix, non-singular, 501
matrix, rank of, 511
matrix, singular, 501
maxima and minima with constraints, 646
measure theory, 205
meridian of longitude, 166
method of Frobenius, 453, 460, 464, 466
method of Frobenius, equal roots, 467
method of Frobenius, roots differing by an integer, 469
minor, 558
model, 308
modified Bessel function, 475
modulus of a complex number, 343
molecular oscillation, 609
molecular oscillations, 481
moment of inertia, 162
multidimensional Taylor series, 422

Neumann functions, 470
node, 678
non-linear system, 661
non-singular matrix, 501
normal coordinates, 636
normal distribution, 179
normal mode, 612, 636
normal mode, relation to eigenvectors, 618
null space of a linear operator, 519
numerical methods, 322

Olbers’ paradox, 172
open set, 96
orbit, 663
orbit, stability of, 685
order of a differential equation, 57
order of magnitude, 88
ordinary point, 437
orientation of axes, 6
orientation of axes in space, 8
orthogonal matrix, 637
orthonormal basis, 625
oscillator, 59
overdamped harmonic oscillator, 351

paraboloid, elliptic, 72
paraboloid, hyperbolic, 72
parallelogram law, 4
parameter, 33
parametric representation of a surface, 186
partial derivative, 78
partial derivative, higher, 115
partial derivative, mixed, 115
particular solution by guessing, linear second order equation, 358
particular solution of a system, 604
particular solution, linear first order equation, 305
particular solution, linear second order differential equation, 356
particular solution, second order linear differential equation, 330
pendulum, damped, exact equation, 668
pendulum, exact equation, 661
period of an oscillator, 61
phase plane, 663
phase portrait, 663
pivot, 502
plane, equation of, 37
plane, normal to, 37
Poincaré–Bendixson Theorem, 685
Poincaré, 684
polar coordinates, 17
polar coordinates, double integral in, 143
polar coordinates, gradient in, 105
polar coordinates, graphs in, 143
polar rectangle, 145
polynomial equations, 568
population, 665
position vector, 5
potential function, 226
potential, scalar, 226
potential, vector, 274
power series, 84, 369, 394
prey–predator problem, 665
principal axes, 627
Principal Axis Theorem, 627
Principal Axis Theorem, proof, 638
product of matrices, 487
pseudo-inverse of a matrix, 512

quadratic form, 430
quadric surface, 71
quadric, central, 641

radius of convergence of a power series, 395, 398, 411
radius of convergence of a solution, 443
rank of a matrix, 511
ratio test for convergence of a series, 391
Rayleigh–Ritz problem, 652
real part of a complex number, 343
recurrence relation, 439
reduction of order, 467, 470
reduction of order for second order linear equations, 354
regular singular point, 451
remainder, multidimensional Taylor series, 424
remainder, Taylor series, 406
resonant frequency, harmonic oscillator, 365, 366
right hand rule for vector product, 27
root test for convergence of a series, 392
rounding, 386
row operations, elementary, 496
row operations, reversibility of, 499
row space of a matrix, 533
Runge–Kutta method, 326

saddle, 72
saddle point in phase portrait, 668
scalar product, 22
scalar, complex, 518
Schwartz, 207
screening test, three dimensions, 231, 234
screening test, two dimensions, 230, 270, 293
secular equation, 611
separation of variables for ordinary differential equation, 58
separation of variables for partial differential equation, 446
series of matrices, 581
series solution of a differential equation, 367
simply connected plane regions, 269
singular matrix, 501
singular point, 437
solid angle, 239
spans curves, 272
speed, 44
sphere, surface integral on, 191
spherical Bessel function, 475
spherical cell, 168
spherical coordinates, 164
spherical coordinates, triple integral in, 169
stability of an orbit, 685
stability of critical point, 670
standard basis, 524
Stokes’s theorem, 257
subspace of a vector space, 518
surface integral, 189
surface integral for parametric surface, 191
surface, parametric representation of, 186
symmetric matrices, eigenvalues of, 622
system of differential equations, 479
system, autonomous, 667
system, critical point of, 667
system, homogeneous, 511
system, inconsistent, 501
system, inhomogeneous, 604
system, non-linear, 661

tail of a series, importance of, 372
tangent plane, 89
tangent plane and differentiability, 87
tangent plane to a graph, 81
tangent plane to a level surface, 102
tangent plane to level surface, 108
Taylor series, 404
Taylor series, multidimensional, 422
Taylor series, remainder, 406
thermodynamics, 79, 105, 117, 292
torus, 188
total differential, 89
transpose of a matrix, 556
triple integral, 150
triple integral in cylindrical coordinates, 158
triple integral in spherical coordinates, 169
truncation, 386

underdamped harmonic oscillator, 352
undetermined coefficients, linear second order equation, 358
uniform circular motion, 13, 20
uniqueness theorem for second order linear equations, 330
uniqueness theorem, first order differential equations, 317
unitary matrix, 637
upper triangular matrix, 502
upper triangular matrix, determinant of, 551

variation of parameters for systems, 605
variation of parameters, linear second order differential equation, 356
vector, 3
vector addition, 4
vector function, 12
vector operators, 278
vector potential, 274
vector product, 27
vector space, 514, 516
vector, representation by directed line segment, 3
vectors, subtraction of, 5
velocity vector, 13

work, 46
Wronskian for second order linear differential equations, 335, 337
Wronskian of a system, 603

zero vector, 5
