Undergraduate Texts in Mathematics
Miklós Laczkovich
Vera T. Sós
Real Analysis
Foundations and Functions of One Variable
Undergraduate Texts in Mathematics
Series Editors
Sheldon Axler
San Francisco State University, San Francisco, CA, USA
Kenneth Ribet
University of California, Berkeley, CA, USA
Advisory Board:
Colin Adams, Williams College
David A. Cox, Amherst College
Pamela Gorkin, Bucknell University
Roger E. Howe, Yale University
Michael Orrison, Harvey Mudd College
Lisette G. de Pillis, Harvey Mudd College
Jill Pipher, Brown University
Fadil Santosa, University of Minnesota
Undergraduate Texts in Mathematics are generally aimed at third- and fourth-year undergraduate mathematics students at North American universities. These texts strive to provide students and teachers with new perspectives and novel approaches. The books include motivation that guides the reader to an appreciation of interrelations among different aspects of the subject. They feature examples that illustrate key concepts as well as exercises that strengthen understanding.
More information about this series at http://www.springer.com/series/666
Miklós Laczkovich • Vera T. Sós
Real Analysis
Foundations and Functions of One Variable
First English Edition
Miklós Laczkovich
Department of Analysis
Eötvös Loránd University
Budapest, Hungary
Vera T. Sós
Alfréd Rényi Institute of Mathematics
Hungarian Academy of Sciences
Budapest, Hungary
ISSN 0172-6056
ISSN 2197-5604 (electronic)
Undergraduate Texts in Mathematics
ISBN 978-1-4939-2765-4
ISBN 978-1-4939-2766-1 (eBook)
DOI 10.1007/978-1-4939-2766-1
Library of Congress Control Number: 2015938228
Springer New York Heidelberg Dordrecht London
1st Hungarian edition: T. Sós, Vera, Analízis I/1 © Nemzeti Tankönyvkiadó, Budapest, 1972
2nd Hungarian edition: T. Sós, Vera, Analízis A/2 © Nemzeti Tankönyvkiadó, Budapest, 1976
3rd Hungarian edition: Laczkovich, Miklós & T. Sós, Vera: Analízis I © Nemzeti Tankönyvkiadó, Budapest, 2005
4th Hungarian edition: Laczkovich, Miklós & T. Sós, Vera: Analízis I © Typotex, Budapest, 2012
Translation from the Hungarian language 3rd edition: Valós analízis I by Miklós Laczkovich & T. Sós, Vera, © Nemzeti Tankönyvkiadó, Budapest, 2005. All rights reserved © Springer 2015.
© Springer New York 2015
This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of
the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation,
broadcasting, reproduction on microfilms or in any other physical way, and transmission or information
storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology
now known or hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication
does not imply, even in the absence of a specific statement, that such names are exempt from the relevant
protective laws and regulations and therefore free for general use.
The publisher, the authors and the editors are safe to assume that the advice and information in this book
are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or
the editors give a warranty, express or implied, with respect to the material contained herein or for any
errors or omissions that may have been made.
Printed on acid-free paper
Springer Science+Business Media LLC New York is part of Springer Science+Business Media (www.springer.com)
Preface
Analysis forms an essential basis of mathematics as a whole, as well as of the natural
sciences, and more and more of the social sciences too. The theory of analysis (differentiation and integration) was created—after Galileo’s insight—for the purposes
of describing the universe in the language of mathematics. Working out the precise
theory took almost 300 years, with a large portion of this time devoted to definitions
that encapsulate the essence of limits and continuity. Mastering these concepts can
be a difficult process; this is one of the reasons why analysis is only barely present
in most high-school curricula.
At the same time, in postsecondary education where mathematics is part of the program—including various branches of science and mathematics—analysis appears as a basic requirement. Our book is intended to be an introductory analysis textbook; we believe it would be useful in any area where analysis is part of the curriculum—in addition to the above, also in teacher education, engineering, and even some high schools. In writing this book, we used the experience we gained from our many decades of lectures at the Eötvös Loránd University, Budapest, Hungary.
We have placed strong emphasis on discussing the foundations of analysis: before
we begin the actual topic of analysis, we summarize all that the theory builds upon
(basics of logic, sets, real numbers), even though some of these concepts might be
familiar from previous studies. We believe that a strong basis is needed not only for
those who wish to master higher levels of analysis, but for everyone who wants to
apply it, and especially for those who wish to teach analysis at any level.
The central concepts of analysis are limits, continuity, the derivative, and the
integral. Our primary goal was to develop the precise concepts gradually, building
on intuition and using many examples. We introduce and discuss applications of our
topics as much as possible, while ensuring that they advance the understanding and mastery of this difficult material. This, among other reasons, is why we avoided a more abstract or general (topological or multivariable) discussion.
We would like to emphasize that the classical—mostly more than 100-year-old—
results discussed here still inspire active research in different areas. Due to the nature
of this book, we cannot delve into this; we only mention a small handful of unsolved
problems.
Mastering the material can only be achieved by solving many exercises of varying difficulty. We have posed more than 500 exercises in our book, but few of these are routine questions, which can be found in many workbooks and exercise collections. Rather, we found it important to include questions that call for a deeper understanding of the results and methods. Of these, several more difficult questions, requiring novel ideas, are marked by (∗). A large number of exercises come with hints for solutions, while many others are provided with complete solutions. Exercises with hints and solutions are denoted by (H) and (S), respectively.
The book contains much more material than is necessary for most curricula. We trust that the organization of the book—namely the structure of its subsections—makes it possible to select self-contained curricula at several levels.
The book drew from Vera T. Sós's university textbook Analízis, which has been
in print for over 30 years, as well as analysis lecture notes by Miklós Laczkovich.
This book, which is the translation of the third edition of the Hungarian original,
naturally expands on these sources and the previous editions in both content and
presentation.
Budapest, Hungary
May 16, 2014
Miklós Laczkovich
Vera T. Sós
Contents

1 A Brief Historical Introduction

2 Basic Concepts
2.1 A Few Words About Mathematics in General
2.2 Basic Concepts in Logic
2.3 Proof Techniques
2.4 Sets, Functions, Sequences

3 Real Numbers
3.1 Decimal Expansions: The Real Line
3.2 Bounded Sets
3.3 Exponentiation
3.4 First Appendix: Consequences of the Field Axioms
3.5 Second Appendix: Consequences of the Order Axioms

4 Infinite Sequences I
4.1 Convergent and Divergent Sequences
4.2 Sequences That Tend to Infinity
4.3 Uniqueness of Limit
4.4 Limits of Some Specific Sequences

5 Infinite Sequences II
5.1 Basic Properties of Limits
5.2 Limits and Inequalities
5.3 Limits and Operations
5.4 Applications

6 Infinite Sequences III
6.1 Monotone Sequences
6.2 The Bolzano–Weierstrass Theorem and Cauchy's Criterion

7 Rudiments of Infinite Series

8 Countable Sets

9 Real-Valued Functions of One Real Variable
9.1 Functions and Graphs
9.2 Global Properties of Real Functions
9.3 Appendix: Basics of Coordinate Geometry

10 Continuity and Limits of Functions
10.1 Limits of Functions
10.2 The Transference Principle
10.3 Limits and Operations
10.4 Continuous Functions in Closed and Bounded Intervals
10.5 Uniform Continuity
10.6 Monotonicity and Continuity
10.7 Convexity and Continuity
10.8 Arc Lengths of Graphs of Functions
10.9 Appendix: Proof of Theorem 10.81

11 Various Important Classes of Functions (Elementary Functions)
11.1 Polynomials and Rational Functions
11.2 Exponential and Power Functions
11.3 Logarithmic Functions
11.4 Trigonometric Functions
11.5 The Inverse Trigonometric Functions
11.6 Hyperbolic Functions and Their Inverses
11.7 First Appendix: Proof of the Addition Formulas
11.8 Second Appendix: A Few Words on Complex Numbers

12 Differentiation
12.1 The Definition of Differentiability
12.2 Differentiation Rules and Derivatives of the Elementary Functions
12.3 Higher-Order Derivatives
12.4 Linking the Derivative and Local Properties
12.5 Intermediate Value Theorems
12.6 Investigation of Differentiable Functions

13 Applications of Differentiation
13.1 L'Hôpital's Rule
13.2 Polynomial Approximation
13.3 The Indefinite Integral
13.4 Differential Equations
13.5 The Catenary
13.6 Properties of Derivative Functions
13.7 First Appendix: Proof of Theorem 13.20
13.8 Second Appendix: On the Definition of Trigonometric Functions Again

14 The Definite Integral
14.1 Problems Leading to the Definition of the Definite Integral
14.2 The Definition of the Definite Integral
14.3 Necessary and Sufficient Conditions for Integrability
14.4 Integrability of Continuous Functions and Monotone Functions
14.5 Integrability and Operations
14.6 Further Theorems Regarding the Integrability of Functions and the Value of the Integral
14.7 Inequalities for Values of Integrals

15 Integration
15.1 The Link Between Integration and Differentiation
15.2 Integration by Parts
15.3 Integration by Substitution
15.4 Integrals of Elementary Functions
15.4.1 Rational Functions
15.4.2 Integrals Containing Roots
15.4.3 Rational Functions of e^x
15.4.4 Trigonometric Functions
15.5 Nonelementary Integrals of Elementary Functions
15.6 Appendix: Integration by Substitution for Definite Integrals (Proof of Theorem 15.22)

16 Applications of Integration
16.1 The General Concept of Area and Volume
16.2 Computing Area
16.3 Computing Volume
16.4 Computing Arc Length
16.5 Polar Coordinates
16.6 The Surface Area of a Surface of Revolution
16.7 Appendix: Proof of Theorem 16.20

17 Functions of Bounded Variation

18 The Stieltjes Integral

19 The Improper Integral
19.1 The Definition and Computation of Improper Integrals
19.2 The Convergence of Improper Integrals
19.3 Appendix: Proof of Theorem 19.13

Erratum
Hints, Solutions
Notation
References
Index
Chapter 1
A Brief Historical Introduction
The first problems belonging properly to mathematical analysis arose during the fifth century BCE, when Greek mathematicians became interested in the properties of various curved shapes and surfaces. The problem of squaring a circle (that is, constructing a square of the same area as a given circle with only a compass and straightedge) was well known by the second half of the century, and Hippias had already discovered a curve called the quadratrix during an attempt at a solution. Hippocrates was also active during the second half of the fifth century BCE, and he determined the areas of several regions bounded by curves ("Hippocratic lunes").
The discovery of the fundamental tool of mathematical analysis, approximating the unknown value with arbitrary precision, is due to Eudoxus (408–355 BCE).
Eudoxus was one of the most original thinkers in the history of mathematics. His
discoveries were immediately appreciated by Greek mathematicians; Euclid (around
300 BCE) dedicated an entire book (the fifth) of his Elements [3] to Eudoxus’s theory
of proportions of magnitudes. Eudoxus invented the method of exhaustion as well,
and used it to prove that the volume of a pyramid is one-third that of a prism with the
same base and height. This beautiful proof can be found in book XII of the Elements
as the fifth theorem.
The method of exhaustion is based on the fact that if we take away at least one-half of a quantity, then take away at least one-half of the remainder, and continue this
process, then for every given value v, sooner or later we will arrive at a value smaller
than v. One variant of this principle is nowadays called the axiom of Archimedes,
even though Archimedes admits, in his book On the Sphere and Cylinder, that mathematicians before him had already stated this property (and in the form above, it
appeared as the first theorem in book X of Euclid’s Elements). Book XII of the Elements gives many applications of the method of exhaustion. It is worth noting the
first application, which states that the ratio of the areas of two circles is the same as
the ratio of the areas of the squares whose sides are the circles’ diameters. The proof
uses the fact that the ratio between the areas of two similar polygons is the same as
the ratio of the squares of the corresponding sides (and this fact is proved by Euclid
in his previous books in detail). Consider a circle C. A square inscribed in the circle
contains more than half of the area of the circle, since it is equal to half of the area of
the square circumscribed about the circle. A regular octagon inscribed in the circle
contains more than half of the remaining area of the circle (as seen in Figure 1.1).
This is because the octagon is larger than the square by four isosceles triangles, and
each isosceles triangle is larger than half of the corresponding slice of the circle,
since the slice can be inscribed in a rectangle whose area is twice that of the triangle. A similar argument
tells us that a regular 16-gon covers at least one-half of the circle not covered by the
octagon, and so on. The method of exhaustion (the axiom of Archimedes) then tells
us that we can inscribe a regular polygon in the circle C such that the areas differ by
less than any previously fixed number.
Fig. 1.1

To finish the proof, it is easiest to introduce modern notation. Consider two circles, C1 and C2, and let ai and di denote the area and the diameter of the circle Ci (i = 1, 2). We want to show that a1/a2 = d1²/d2². Suppose that this is not true. Then a1/a2 is either larger or smaller than d1²/d2². It suffices to consider the case a1/a2 > d1²/d2², since in the second case a2/a1 > d2²/d1², so we can interchange the roles of the two circles to return to the first case. Now if a1/a2 > d1²/d2², then the value

δ = a1/a2 − d1²/d2²
is positive. Inscribe a regular polygon P1 in C1 whose area differs from the area of C1 by less than δ · a2. If a regular polygon P2 similar to P1 is inscribed in C2, then the ratio of the areas of P1 and P2 is equal to the ratio of the squares of the corresponding sides, which in turn is equal to d1²/d2² (which was shown precisely by Euclid). If the area of Pi is pi (i = 1, 2), then

a1/a2 − δ = d1²/d2² = p1/p2 > (a1 − δ · a2)/a2 = a1/a2 − δ,

which is a contradiction.
Today, we would express the above theorem by saying that the area of a circle is
a constant times the square of the diameter of a circle. This constant was determined
by Archimedes. In his work Measurement of a Circle, he proves that the area of a
circle is equal to the area of the right triangle whose side lengths are the radius of
the circle and the circumference of the circle. With modern notation (and using the
theorem above), this is none other than the formula πr², where π is one-half of the
circumference of the unit circle.
Archimedes (287–212 BCE) ranks as one of the greatest mathematicians of all time, but is without question the greatest mathematician of antiquity. Although the greater part of his works is lost, a substantial corpus has survived. In his works, he computed the areas of various regions bounded by curves (such as slices of the parabola), determined the surface area and volume of the sphere and the arc length of certain spirals, and studied paraboloids and hyperboloids obtained by rotations. Archimedes also used the method of exhaustion, but he extended the idea by approximating figures not only from the inside, but from the outside as well. Let us see how Archimedes used this method to find the area beneath the parabola. We will use modern notation again.

Fig. 1.2
The area of the region beneath the parabola and above [0, 1], as seen in Figure 1.2, will be denoted by A. The (shaded) region over the ith interval (for any n and i ≤ n) can be approximated by rectangles from below and from above, which—with the help of Exercise 2.5(b)—gives

A > (1/n) · [(1/n)² + · · · + ((n − 1)/n)²] = ((n − 1) · n · (2n − 1))/(6n³) > 1/3 − 1/n

and

A < (1/n) · [(1/n)² + · · · + (n/n)²] = (n · (n + 1) · (2n + 1))/(6n³) < 1/3 + 1/n.

It follows that

|A − 1/3| < 1/n.    (1.1)
This approximation does not give a precise value for A for any specific n. However, having the approximation (1.1) for every n already shows that the area A can only be 1/3.
Indeed, if A ≠ 1/3 were the case, that is, if |A − 1/3| = α > 0, then for n ≥ 1/α the inequality (1.1) would not be satisfied. Thus the only possibility is that |A − 1/3| = 0, so A = 1/3.
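Readers who like to experiment can watch inequality (1.1) at work numerically. The following few lines of Python, added here as an illustration only, compute the two bounding sums for several values of n:

    # Lower and upper rectangle sums for the area under y = x^2 on [0, 1].
    # By (1.1), both differ from 1/3 by less than 1/n.
    def parabola_bounds(n):
        lower = sum((i / n) ** 2 for i in range(1, n)) / n       # inscribed rectangles
        upper = sum((i / n) ** 2 for i in range(1, n + 1)) / n   # circumscribed rectangles
        return lower, upper

    for n in (10, 100, 1000):
        lower, upper = parabola_bounds(n)
        print(n, lower, upper, abs(lower - 1/3) < 1/n, abs(upper - 1/3) < 1/n)

For n = 10, for instance, the two sums are 0.285 and 0.385, and both are indeed within 1/10 of 1/3.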
It was a long time before the work of Archimedes was taken up again and built
upon. There are several possible reasons for this: the lack of a proper system of notation, the limitations of the geometric approach, and the fact that the mathematicians
of the time had an aversion to questions concerned with infinity and with movement.
Whether it is for these reasons or others, analysis as a widely applicable general
method or as a branch of science on its own appeared only when European mathematicians of the seventeenth century decided to describe movement and change
using the language of mathematics. This description was motivated by problems
that occur in everyday life and in physics. Here are some examples:
• Compute the velocity and acceleration of a free-falling object.
• Determine the trajectory of a thrown object. Determine what height the object
reaches and where it lands.
• Describe other physical processes, such as the temperature of a cooling object. If
we know the temperature at two points in time, can we determine the temperature
at every time?
• Construct tangent lines to various curves. How, for example, do we draw the
tangent line to a parabola at a given point?
• What is the shape of a suspended rope?
• Solve maximum/minimum problems such as the following: What is the cylinder
with the largest volume that can be inscribed in a ball? What path between two
points can be traversed in the shortest time if velocity varies based on location?
(This last question is inspired by the refraction of light.)
• Find approximate solutions of equations.
• Approximate values of powers (e.g., 2^√3) and trigonometric functions (e.g., sin 1°).
It turned out that these questions are strongly linked with determining area, volume, and arc length, which are also problems arising from the real world. Finally, to
solve these problems, mathematicians of the seventeenth century devised a theory,
called the differential calculus, or in today’s terms, differentiation, which had three
components.
The first component was the coordinate system, which is attributed to René
Descartes (1596–1650), even though such a system had been used by Apollonius
(262–190 BCE) when he described conic sections. However, Descartes was the first
to point out that the coordinate system can help transform geometric problems into
algebraic problems.
Consider the parabola, for example. By definition this is the set of points that lie
equally distant from a given point and a given line. This geometric definition can
be transformed into a simple algebraic condition with the help of the coordinate system. Let the given point be P = (0, p), and the given line the horizontal line y = −p, where p is a fixed positive number. The distance of the point (x, y) from the point P is √(x² + (y − p)²), while the distance of the point from the line is |y + p|. Thus the point (x, y) is on the parabola if and only if √(x² + (y − p)²) = |y + p|. Squaring both sides gives

x² + y² − 2py + p² = y² + 2py + p²,

which we can rearrange to give us that x² = 4py, or y = x²/(4p). Thus we get the equation of the parabola, an algebraic condition that describes the points of the parabola precisely: a point (x, y) lies on the parabola if and only if y = x²/(4p).

Fig. 1.3
The second component of the differential calculus is the concept of a variable
quantity. Mathematicians of the seventeenth century considered quantities appearing
in physical problems to be variables depending on time, whose values change from
one moment to another. They extended this idea to geometric problems as well.
Thus every curve was thought of as the path of a continuously moving point. This
concept does not interpret the equation y = x²/(4p) as one in which y depends on x,
but as a relation between x and y both depending on time as the point (x, y) traverses
the parabola.
The third and most important component of the differential calculus was the
notion of a differential of a variable quantity. The essence of this concept is the
intuitive notion that every change is the result of the sum of “infinitesimally small”
changes. Thus time itself is made up of infinitesimally small time intervals. The
differential of the variable quantity x is the infinitesimally small value by which x
changes during an infinitesimally small time interval. The differential of x is denoted
by dx. Thus the value of x after an infinitesimally small time interval changes to
x + dx.
How did calculus work? We illustrate the thinking of the wielders of calculus
with the help of some simple examples.
The key to solving maximum/minimum problems was the fact that if the variable
y reaches its highest value at a point, then dy = 0 there (since when a thrown object
reaches the highest point along its path, it flies horizontally “for an instant”; thus if
the y-coordinate of the object has a maximum, then dy = 0 there).
Let us use calculus to determine the largest value of t − t². Let x = t − t². Then at the maximum of x, we should have dx = 0. Now dx is none other than the change of x as t changes to t + dt. From this, we infer that

dx = [(t + dt) − (t + dt)²] − (t − t²) = dt − 2t · dt − (dt)² = dt − 2t · dt.
In the last step above, the (dt)² term was "ignored." That is, it was simply left out of the computation, based on the argument that the value (dt)² is "infinitely smaller" than all the rest of the values appearing in the computation. Thus the condition dx = 0 gives dt − 2t · dt = 0, and thus after dividing by dt, we obtain 1 − 2t = 0 and t = 1/2. Therefore, t − t² takes on its largest value at t = 1/2. The users of calculus
were aware of the fact that the argument above lacks the requisite mathematical
precision. At the same time, they were convinced that the argument leads to the
correct result.
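In this particular case the conclusion is easy to confirm without any calculus: completing the square gives

t − t² = 1/4 − (t − 1/2)² ≤ 1/4,

with equality exactly when t = 1/2, so the largest value is indeed attained at t = 1/2.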
Now let us see a problem involving the construction of tangent lines to a curve.
At the tangent point, the direction of the tangent line should be the same as that of the curve. The direction of the curve at a point (x, y) can be computed by connecting the point by a line to a point "infinitesimally close"; this will tell us the slope of the tangent. After an infinitesimally small change in time, the x-coordinate changes to x + dx, while the y-coordinate changes to y + dy. The point (x + dx, y + dy) is thus a point of the curve that is "infinitesimally close" to (x, y). The slope of the line connecting the points (x, y) and (x + dx, y + dy) is

((y + dy) − y)/((x + dx) − x) = dy/dx.

Fig. 1.4
This is the quotient of two differentials, thus a differential quotient. We get that the
slope of the tangent line at the point (x, y) is exactly the differential quotient dy/dx.
Computing this quantity is very simple.
Consider, for example, the parabola with equation y = x². Since the point (x + dx, y + dy) also lies on the parabola, the equation tells us that

dy = (y + dy) − y = (x + dx)² − x² = 2x dx + (dx)² = 2x dx,

where we "ignore" the term (dx)² once again. It follows that dy/dx = 2x, so at the point (x, y), the slope of the tangent of the parabola given by the equation y = x² is 2x. Now consider the point (a, a²) on the parabola. The slope of the tangent is 2a here, so the equation of the tangent is

y = 2a · (x − a) + a².
This intersects the x-axis at a/2. Thus—according to mathematicians of the seventeenth century—we can construct the line tangent to the parabola at the point (a, a²) by connecting the point (a/2, 0) to the point (a, a²).
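The claim about the intersection point is a one-line computation: setting y = 0 in the equation of the tangent gives 0 = 2a · (x − a) + a², that is, 2a · x = a², so x = a/2 (provided a ≠ 0).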
Finally, let us return to the problem of computing area that we already addressed. Take the parabola given by the equation y = x² again, and compute the area of the region R that is bordered by the segment [0, x] on the x-axis, the arc of the parabola between the origin and the point (x, x²), and the segment connecting the points (x, 0) and (x, x²). Let A denote the area in question. Then A itself is a variable quantity. After an infinitesimally small change in time, the value of x changes to x + dx, so the region R grows by an infinitesimally narrow "rectangle" with width dx and height y. Thus the change of the area A is dA = y · dx = x² · dx.
Let us look for a variable z whose differential is exactly x² · dx. We saw before that d(x²) = 2x · dx. A similar computation (written out below) gives d(x³) = 3x² · dx. Thus the choice z = x³/3 works, so dz = x² · dx. This means that the differentials of the unknowns A and z are the same: dA = dz. This, in turn, means that d(A − z) = dA − dz = 0, that is, that A − z does not change, so it is constant. If x = 0, then A and z are both equal to zero, so the constant A − z is zero, so A = z = x³/3. When x = 1,
we obtain Archimedes’ result. Again, the users of calculus were convinced that they
had arrived at the correct value.
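The "similar computation" that gives d(x³) = 3x² · dx is worth writing out once:

d(x³) = (x + dx)³ − x³ = 3x² · dx + 3x · (dx)² + (dx)³ = 3x² · dx,

where the terms containing (dx)² and (dx)³ are "ignored" exactly as before.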
We can see that calculus is a very efficient method, and many different types of
problems can be tackled with its help. Calculus, as a stand-alone method, was developed by a long line of great mathematicians (Barrow, Cavalieri, Fermat, Kepler, and many
others), and was completed—by seventeenth-century standards—by Isaac Newton
(1643–1727) and G. W. Leibniz (1646–1716). Mathematicians immediately seized
on this method and produced numerous results. By the end of the century, it was
time for a large-scale monograph that would summarize what had been obtained
thus far. This was L’Hospital’s book Infinitesimal Calculus (1696), which remained
the most important textbook on calculus for nearly all of the next century.
From the beginning, calculus was met with heavy criticism, which was completely justified. For the logic of the method was unclear, since it worked with imprecise definitions, and the arguments themselves were sometimes obscure. The great
mathematicians of antiquity were no doubt turning over in their graves. The “proofs”
outlined above seem to be convincing, but they leave many questions unanswered.
What does an infinitely small quantity really mean? Is such a quantity zero, or is it
not? If it is zero, we cannot divide by it in the differential quotient dy/dx. But if it
is not zero, then we cannot ignore it in our computations. Such a contradiction in a
mathematical concept cannot be overlooked. Nor was the method of computing the
maximum very convincing. Even if we accept that the differential at the maximum
is zero (although the argument for this already raises questions and eyebrows), we
would need the converse of the statement: if the differential is zero, then there is a
maximum. However, this is not always the case. We know that d(x³) = 3x² · dx = 0 if x = 0, while x³ clearly does not have a maximum at 0.
An important part of the criticism of calculus was aimed at the contradictions having to do with infinite series. Adding infinitely many numbers together (or more
generally, just the concept of infinity) was found to be problematic much earlier, as
the Greek philosopher Zeno¹ had already shown. He illustrated the problem with
his famous paradox of Achilles and the tortoise. The paradox states that no matter
how much faster Achilles runs than the tortoise, if the tortoise is given a head start,
Achilles will never pass it. For Achilles needs some time to catch up to where the
tortoise is located when Achilles starts to run. However, once he gets there, the tortoise has already moved away to a further point. Achilles requires some time to get
to that further point, but in the meantime, the tortoise travels even farther, and so on.
Thus Achilles will never catch up to the tortoise.
Of course, we all know that Achilles will catch up to the tortoise, and we can
easily compute when that happens. Suppose that Achilles runs ten yards every second, while the tortoise travels at one yard per second (to make our computations
simpler, we let Achilles race against an exceptionally fast tortoise). If the tortoise
gets a one-yard advantage, then after x seconds, Achilles will be 10x yards from the
starting point, while the tortoise will be 1 + x yards away from that point. Solving
the equation 10x = 1 + x, we get that Achilles catches up to the tortoise after x = 1/9
seconds.
Zeno knew all of this too; he just wanted to show that an intuitive understanding
of summing infinitely many components to produce movement is impossible and
leads to contradictions. Zeno’s argument expressed in numbers can be summarized
as follows: Achilles needs to travel 1 yard to catch up to the starting point of the
tortoise, which he does in 1/10 of a second. During that time, the tortoise moves
1/10 of a yard. Achilles has to catch up to this point as well, which takes 1/100 of
a second. During that time, the tortoise moves 1/100 yards, to which Achilles takes
1/1000 seconds to catch up, and so on. In the end, Achilles needs to travel infinitely
many distances, and this requires (1/10) + (1/100) + (1/1000) + · · · seconds in
total. Thus we get that

1/10 + 1/100 + 1/1000 + · · · = 1/9.    (1.2)
With this, we have reduced Zeno's paradox to the question whether we can put
infinitely many segments (distances) next to each other so that we get a bounded
segment (finite distance), or in other words, can the sum of infinitely many numbers
be finite?
¹ Zeno of Elea (c. 490–430 BCE), Greek philosopher.

If the terms of an infinite series form a geometric sequence, then its sum can be determined using simple arithmetic—at least formally. Consider the series 1 + x + x² + · · ·, where x is an arbitrary real number. If 1 + x + x² + · · · = A, then

A = 1 + x · (1 + x + x² + · · · ) = 1 + x · A,

which, when x ≠ 1, gives us the equality

1 + x + x² + · · · = 1/(1 − x).    (1.3)
If in (1.3) we substitute x = 1/10 and subtract 1 from both sides, then we get (1.2).
In the special case x = 1/2, we get the identity 1 + 1/2 + 1/4 + · · · = 2, which is
immediate from Figure 1.5 as well.
Fig. 1.5
However, the identity (1.3) can produce strange results as well. If we substitute
x = −1 in (1.3), then we get that
1 − 1 + 1 − 1 + · · · = 1/2.    (1.4)
This result is strange from at least two viewpoints. On one hand, we get a fraction
as a result of adding integers. On the other hand, if we put parentheses around pairs
of numbers, the result is
(1 − 1) + (1 − 1) + · · · = 0 + 0 + · · · = 0.
In fact, if we begin the parentheses elsewhere, we get that
1 − (1 − 1) − (1 − 1) − . . . = 1 − 0 − 0 − . . . = 1.
Thus three different numbers qualify as the sum of the series 1 − 1 + 1 − 1 + · · · :
1/2, 0, and 1.
We run into a different problem if we seek the sum of the series 1 + 1 + 1 + · · · .
If its value is y, then
1 + y = 1 + (1 + 1 + 1 + · · · ) = 1 + 1 + 1 + · · · = y.
Such a number y cannot exist, however. We could say that the sum must be ∞, but
can we exclude −∞? We could argue that we are adding positive terms so cannot
get a negative number, but are we so sure? If we substitute x = 2 in (1.3), then we
get that
1 + 2 + 4 + · · · = −1,    (1.5)
and so, it would seem, the sum of positive numbers can actually be negative.
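A reader who wants to see what lies behind these strange results can simply compute a few partial sums. The short Python sketch below, added purely as an illustration, prints the sums 1 + x + · · · + x^k for the values of x discussed above: for |x| < 1 the sums settle down near 1/(1 − x), while for x = −1 they oscillate and for x = 2 they grow without bound, so formula (1.3) cannot be taken at face value there.

    # Partial sums 1 + x + x^2 + ... + x^k of the geometric series in (1.3).
    def partial_sums(x, k):
        total, term, sums = 0.0, 1.0, []
        for _ in range(k + 1):
            total += term
            term *= x
            sums.append(total)
        return sums

    for x in (1/10, 1/2, -1, 2):
        # For |x| < 1 the sums approach 1/(1 - x); otherwise they do not settle down.
        print("x =", x, partial_sums(x, 8), "  1/(1 - x) =", 1 / (1 - x))

Note that for x = −1 and x = 2 the right-hand side of (1.3) still produces the puzzling values 1/2 and −1 of (1.4) and (1.5), even though the partial sums never approach them.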
These strange, impossible, and even contradictory results were the subject of
many arguments up until the beginning of the nineteenth century. To resolve the
contradictions, some fantastic ideas were born: to justify the equality 1 + 2 + 4 +
· · · = −1, there were some who believed that the numbers “start over” and that after
infinity, the negative numbers follow once again.
The arguments surrounding calculus lasted until the end of the nineteenth century, and they often shifted to philosophical arguments. Berkeley maintained that
the statements of calculus are no more scientific than religious beliefs, while Hegel
argued that the problems with calculus could be solved only through philosophical
arguments.
These problems were eventually solved by mathematicians by replacing the
intuitive but hazy definitions leading to contradictions with precisely defined mathematical concepts. The variable quantities were substituted by functions, and the
differential quotients by derivatives. The sums of infinite series were defined by
Augustin Cauchy (1789–1857) as the limit of partial sums.² As a result of this
clarifying process—in which Cauchy, Karl Weierstrass (1815–1897), and Richard
Dedekind (1831–1916) played the most important roles—by the end of the nineteenth century, the theory of differentiation and integration (or analysis, for short)
reached the logical clarity that mathematics requires.
The creation of the precise theory of analysis was one of the greatest intellectual
accomplishments of modern Western culture. We should not be surprised when this
theory—especially its basics, and first of all its central concept, the limit—is found
to be difficult. We want to facilitate the mastery of this concept as much as possible,
which is why we begin with limits of sequences. But before everything else, we must
familiarize ourselves with the foundations on which this branch of mathematics,
analysis, is based.
² See the details of this in Chapter 7.
Chapter 2
Basic Concepts
2.1 A Few Words About Mathematics in General
In former times, mathematics was defined as the science concerned with numbers
and figures. (This is reflected in the title of the classic book by Hans Rademacher
and Otto Toeplitz, Von Zahlen und Figuren, literally On Numbers and Figures [6].)
Nowadays, however, such a definition will not do, for modern algebra deals with
abstract structures instead of numbers, and some branches of geometry study objects that barely resemble any figure in the plane or in space. Other branches of
mathematics, including analysis, discrete mathematics, and probability theory, also
study objects that we would not call numbers or figures. All we can say about the
objects studied in mathematics is that generally, they are abstractions from the real
world (but not always). In the end, mathematics should be defined not by the objects we study, but how we study them. We can say that mathematics is the science
that establishes absolute and irrefutable facts about abstract objects. In mathematics,
these truths are called theorems, and the logical arguments showing that they are
irrefutable are called proofs. The method (or language) of proofs is mathematical
logic.
2.2 Basic Concepts in Logic
Mathematical logic works with statements. Statements are declarative sentences that
are either true or false. (That is, we cannot call the wish “If I were a swift cloud
to fly with thee” a statement, nor the command “to thine own self be true.”) We
can link statements together using logical operators to yield new statements. The
logical operators are conjunction (and), disjunction (or), negation (not), implication
(if, then), and equivalence (if and only if).
Logical Operators
Conjunction joins two statements together to assert that both are true. That is, the
conjunction of statements A and B is the statement “A and B,” denoted by A ∧ B. The
statement A ∧ B is true if both A and B are true. It is false otherwise.
Disjunction asserts that at least one of two statements is true. That is, the disjunction of statements A and B is the statement “A or B,” denoted by A ∨ B. The
statement A ∨ B is true if at least one of A and B is true, while if both A and B are
false, then so is A ∨ B.
It is important to note that in everyday life, the word “or” can be used with several
different meanings:
(i) At least one of two statements is true (inclusive or).
(ii) Exactly one of two statements is true (complementary or).
(iii) At most one of two statements is true (exclusive or).
For example, if Alicia says, “I go to the theater or the opera every week,” then
she is probably using the inclusive or, since there might be weeks when she goes
to both. On the other hand, if a young man talking to his girlfriend says, “Today
let’s go to the movies or go shopping,” then he probably means that they will do
exactly one of those two activities, but not both, so he is using the complementary
or. Finally, if at a family dinner, a father is lecturing his son by saying, “At the table,
one either eats or reads,” then he means that his son should do at most one of those
things at a time, so he is using the exclusive or.
Let us note that in mathematics, the logical operator or is, unless stated otherwise,
meant as an inclusive or.
Negation is exactly what it sounds like. The negation of statement A is the statement “A is false,” or in other words, “A is not true.” We can express this by saying
“not A,” and we denote it by A. The statement A is true exactly when A is false.
We need to distinguish between the concepts of a statement that negates A and a
statement that contradicts A. We say that a statement B contradicts A if both A and B
cannot be true simultaneously. The negation of a statement A is also a contradiction
of A, but the converse is not necessarily true: a contradiction is not necessarily a
negation. If we let A denote the statement “the real number x is positive,” and we
let B denote “the real number x is negative,” then B contradicts A, but B is not the
negation of A, since the correct negation is “the real number x is negative or zero.”
Similarly, the statement “these letters are black” does not have the contradiction
“these letters are white” as its negation, for while it contradicts our original statement, the correct negation would be “these letters are not black” (since there are
many other colors besides white that they could be). The negation of statement A
comprises all cases in which A is not true.
It is easy to check that if A and B are arbitrary statements, then the identities
¬(A ∧ B) = ¬A ∨ ¬B and ¬(A ∨ B) = ¬A ∧ ¬B hold.
Implications help us express when one statement follows from another. We can
express this by the statement “if A is true, then so is B,” or “if A, then B” for short.
We denote this by A ⇒ B. We also say “A implies B.” The statement A ⇒ B then
means that in every case in which A is true, B is true. We can easily see that A ⇒ B
shares the same value as A ∨ B in the sense that these statements are true exactly in
the same cases. (Think through that the statement “if the victim was not at home
Friday night, then his radio was playing” means exactly the same thing as “the
victim was at home Friday night or his radio was playing (or both).”)
The statement A ⇒ B (or ¬A ∨ B) is true in every case in which B is true, and also
in every case in which A is false. The only case in which A ⇒ B is false is that A is
true and B is false. We can convince ourselves that A ⇒ B is true whenever A is false
with the following example. If we promise a friend “if the weather is nice tomorrow,
we will go on a hike,” then there is only one case in which we break our promise: if
tomorrow it is nice out but we don’t go on a hike. In all the other cases—including
whatever happens if the weather is not nice tomorrow—we have kept our promise.
In mathematics, just as we saw with the use of the term “or,” the “if, then” construction does not always correspond to its use in everyday language. Generally,
when we say “if A then B,” we mean that there is a cause-and-effect relationship
between A and B. And therefore, a statement of the form “if A then B” in which
A and B are unrelated sounds nonsensical or comical (“if it’s Tuesday, this must
be Belgium”). In mathematical logic, we do not concern ourselves with the (philosophical) cause; we care only for the truth values of our statements. The implication
“if it is Tuesday, then this is Belgium” is true exactly when the statement “it is not
Tuesday or this is Belgium” is true, which depends only on whether it is Tuesday
and whether we are in Belgium. If it is Tuesday and this is not Belgium, then the
implication is false, and in every other case it is true.
We can express the A ⇒ B implication in words as “B follows from A,” “A is
sufficient for B,” and “B is necessary for A.”
Equivalence expresses the condition of two statements always being true at the
same time. The statement “A is equivalent to B,” denoted by A ⇐⇒ B, is true if
both A and B are true or both A and B are false. If one of A and B is true but the other
false, then A ⇐⇒ B is false. If A ⇐⇒ B, we can say “A if and only if B,” or “A is
necessary and sufficient for B.”
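The truth conditions of the operators described above can be collected in a single table for reference (T stands for true, F for false):

    A  B | A ∧ B   A ∨ B   A ⇒ B   A ⇐⇒ B
    T  T |   T       T       T        T
    T  F |   F       T       F        F
    F  T |   F       T       T        F
    F  F |   F       F       T        T

(Negation needs only one more column: ¬A is T exactly when A is F.)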
Quantifiers
We call statements that contain variables predicates. Their truth or falsity depends
on the values we substitute for the variables. For example, the statement “Mr. N
is bald” is a predicate with variable N, and whether it is true depends on whom
we substitute for N. Similarly, “x is a perfect square” is a predicate that is true if
x = 4, but false if x = 5. Let A(x) be a predicate, where x is the variable. Then there
are two ways to form a “normal” statement (fixed to be either true or false) from
it. The first of these is “A(x) is true for all x,” which we denote by (∀x)A(x). The
symbol ∀ is called the universal quantifier. It represents an upside-down A as in
“All.” The second is “there exists an x such that A(x) is true,” which we denote by
(∃x)A(x). Here ∃ is called the existential quantifier. It represents the E in “Exists”
written backward.
For the above examples, (∃N)(Mr. N is bald) and (∃x)(x is a square) are true,
since there are bald men and some numbers are squares. The statements (∀N)(Mr.
N is bald) and (∀x)(x is a square) are false, however, since not all men are bald, and
not all numbers are squares.
It is easy to check that for every predicate A(x), we have the identities

¬[(∃x)A(x)] = (∀x)(¬A(x)) and ¬[(∀x)A(x)] = (∃x)(¬A(x)).
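For instance, the negation of the (false) statement (∀x)(x is a perfect square) is the (true) statement (∃x)(x is not a perfect square), and not the statement (∀x)(x is not a perfect square).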
We will often see statements of the form (∀x)(A(x) ⇒ B(x)). This statement
means that for every x, if A(x) is true, then B(x) is true. More concisely, whenever
A(x) is true, B(x) is true. If A(x) is never true, then (∀x)(A(x) ⇒ B(x)) is true.
Exercises
2.1. Boys and girls are dancing together at a party. Let D(G, B) denote the statement
that girl G danced with boy B at some point during the party. Determine which of
the following statements imply another of the statements:
(a) (∃G)(∀B)D(G, B);
(b) (∀B)(∃G)D(G, B);
(c) (∃B)(∀G)D(G, B);
(d) (∀G)(∃B)D(G, B);
(e) (∀G)(∀B)D(G, B);
(f) (∃G)(∃B)D(G, B).
2.2. Regarding this same dance party, determine which of the following statements
imply another:
(a) (∃G)(∀B)D(G, B);
(b) (∀B)(∃G)D(G, B);
(c) (∀G)(∃B)D(G, B);
(d) (∀G)(∀B)D(G, B).
2.3. How many subsets H does the set {1, 2, . . . , n} have such that the statement (∀x)(x ∈ H ⇒ x + 1 ∉ H) is true? (S)
2.4. How many subsets H does the set {1, 2, . . . , n} have such that the statement
(∀x)([(x ∈ H) ∧ (x + 1 ∈ H)] ⇒ x + 2 ∈ H) is true?
2.3 Proof Techniques
Proof by Contradiction
Arguing by contradiction proceeds by supposing that the negation of the statement
we want to prove is true and then deducing a contradiction; that is, we show that
some (not necessarily the original) statement and its negation both must hold. Since
this is clearly impossible, our original assumption must have been false, so the statement we wanted to prove is true.
Let us see a simple example. It is clear that we cannot place nine nonattacking
rooks on a chessboard, that is, nine rooks such that no rook is attacking another
rook. Surely, no matter how we place nine rooks, there will be two of them in the
same row, and those two rooks are therefore attacking each other. This argument can
be viewed as a simple form of proof by contradiction: we suppose that our statement
is false (that we can place nine nonattacking rooks on the chessboard), and from this assumption we arrive at a contradiction.
A less straightforward example follows. Cut off two opposing corner squares of
a chessboard. Show that one cannot cover the rest of the board using 31 dominoes,
each of which covers two adjacent squares. For suppose that such a covering exists.
Since each domino covers one black and one white square, we have 31 black and
31 white squares covered. However, two opposing corner squares are of the same
color, so the numbers of black and white squares remaining are different: there are
30 of one and 32 of the other. This is a contradiction, so such a covering cannot exist.
(In this argument, the statement that we showed to be true along with its negation
is the following: the number of covered white squares is equal to the number of
covered black squares.)
Perhaps the most classical example of proof by contradiction is the following.
Theorem 2.1. √2 is irrational (that is, it cannot be written as a quotient of two integers).
We give two proofs.
Proof I. Suppose that √2 = p/q, where p and q are positive integers, and suppose q is the smallest such denominator. Then 2q² = p², so p² is even. This means that p must also be even, so let p = 2p1. Then 2q² = (2p1)² = 4p1², so q² = 2p1², whence q is also even. If we let q = 2q1, then √2 = p/q = p1/q1. Since q1 < q, this contradicts the minimality of q. □
We shall use the symbol □ to indicate the end of a proof. In the above proof, the □ actually stands for "This contradiction can only arise from a faulty assumption. Thus our assumption that √2 is rational is false. We conclude that √2 is irrational."

Proof II. Suppose again that √2 = p/q, where p and q are positive integers, and q is the smallest such denominator. Then

(2q − p)/(p − q) = (2 − (p/q))/((p/q) − 1) = (2 − √2)/(√2 − 1) = √2.

Since 2q − p and p − q are integers and 0 < p − q < q (because 1 < p/q < 2), we have contradicted the minimality of q again. □
The above examples use a proof by contradiction to show the impossibility or
nonexistence of something. Many times, however, we need to show the existence of
something using a proof by contradiction. We give two examples.
Theorem 2.2. If a country has only finitely many cities, and every city has at least
two roads leading out of it, then we can make a round trip between some cities.
Proof. Suppose that we cannot make any round trips. Let C1 be a city, and suppose
we start travelling from C1 along a road, arriving at some city C2. Then we can keep going to another city C3, as there are at least two roads going from C2. Also, the city C3 is different from C1, since otherwise we would have made a round trip already,
which we assumed is impossible. We can keep travelling from C3 to a new city C4 ,
as there are at least two roads leading out from C3 . This C4 is different from all the
previous cities. Indeed, if for example C4 = C2, then C2, C3, C4 = C2 would be a round
trip. From C4 we can go on to a new city C5 , and then to C6 and so on continuing
indefinitely. However, this is a contradiction, as according to our assumption there
are only finitely many cities in the country. □
Theorem 2.3. There are infinitely many prime numbers.
Proof. Suppose there are only finitely many primes; call them p1 , . . . , pn . Consider
the number N = p1 · · · pn + 1. It is well known—and not hard to prove—that N (like
every integer greater than 1) has a prime divisor. Let p be any of the prime divisors
of N. Then p is different from all of p1 , . . . , pn , since otherwise, both N and N − 1
would be divisible by p, which is impossible. But by our assumption p1 , . . . , pn are
all the prime numbers, and this is a contradiction. □
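Note that the number N itself need not be prime; the argument only uses the fact that N has some prime divisor, and that divisor cannot be any of p1, . . . , pn. For example, if the list were 2, 3, 5, 7, 11, 13, then N = 2 · 3 · 5 · 7 · 11 · 13 + 1 = 30031 = 59 · 509 is not prime, but both of its prime divisors are new primes.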
Proof by Induction
Another important proof technique is known as mathematical induction. Induction
arguments prove infinitely many statements at once. In its simplest form, we prove
the statements A1 , A2 , A3 , . . . in two steps: First we show that A1 is true, and then we
show that An+1 follows from An for every n. (This second step, namely An ⇒ An+1 ,
is called the inductive step, and An is called the induction hypothesis.)
These two steps truly prove all of the statements An . Indeed, the first statement,
A1 , is proved directly. Since we have shown that An implies An+1 for each n, it
follows that in particular, A1 implies A2 . That is, since A1 is true, A2 must be true as
well. But by the inductive step, the truth of A3 follows from the truth of A2 , and thus
A3 is true. Then from A3 ⇒ A4 , we get A4 , and so on.
Let us see a simple example.
Theorem 2.4. For every positive integer n, 2^n > n.
Proof. The statement An that we are trying to prove for all positive integers n is
2^n > n. The statement A1, namely 2^1 > 1, is obviously true. Given the induction
hypothesis 2^n > n, it follows that 2^(n+1) = 2 · 2^n = 2^n + 2^n ≥ 2^n + 1 > n + 1. This
proves the theorem. □
It often happens that the index of the first statement is not 1, in which case we
need to alter our argument slightly. We could, for example, have stated the previous
theorem for every nonnegative integer n, since 2^0 > 0 is true, and the inductive step
works for n ≥ 0.
Here is another example: if we want to show that 2^n > n^2 for n ≥ 5, then in the
first step we have to check this assertion for n = 5 (32 > 25), and then show, as
before, in the inductive step that if the statement holds for n, then it holds for n + 1.
Now as an application of the method, we prove an important inequality (from
which Theorem 2.4 immediately follows).
Theorem 2.5 (Bernoulli's¹ Inequality). If a ≥ −1, then for every positive integer
n, we have

(1 + a)^n ≥ 1 + na.

We have equality if and only if n = 1 or a = 0.
Proof. If n = 1, then the statement is clearly true. We want to show that if the statement holds for n, then it is true for n + 1. If a ≥ −1, then 1 + a ≥ 0, so

(1 + a)^(n+1) = (1 + a)^n (1 + a) ≥ (1 + na)(1 + a) = 1 + (n + 1)a + na^2 ≥ 1 + (n + 1)a.

The above argument also shows that (1 + a)^(n+1) = 1 + (n + 1)a can happen only if
na^2 = 0, that is, if a = 0. □
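Since the inequality involves nothing beyond powers and products, it is easy to spot-check numerically. The following Python sketch (an illustration only, not part of the text) verifies (1 + a)^n ≥ 1 + na on a few sample values of a ≥ −1 and n.

```python
# Spot-check of Bernoulli's inequality (1 + a)^n >= 1 + n*a for a >= -1.
for a in (-1.0, -0.5, 0.0, 0.1, 2.0):
    for n in range(1, 20):
        assert (1 + a) ** n >= 1 + n * a, (a, n)
print("Bernoulli's inequality holds on all sampled values")
```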
There are times when this simple form of induction does not lead to a proof. To
illustrate this, we examine the Fibonacci² numbers, which are defined as follows.
Let u0 = 0 and u1 = 1. We define u2, u3, . . . to be the sum of the two previous
(already defined) numbers. That is, u2 = 0 + 1 = 1, u3 = 1 + 1 = 2, u4 = 1 + 2 = 3,
u5 = 2 + 3 = 5, and so on. Clearly, the Fibonacci numbers are increasing, that is,
u0 ≤ u1 ≤ u2 ≤ · · · . Using induction, we now show that un < 2^n for each n. First of
all, u0 = 0 < 1 = 2^0. If un < 2^n is true, then

un+1 = un + un−1 ≤ un + un < 2^n + 2^n = 2^(n+1),

which proves the statement.
¹ Jacob Bernoulli (1654–1705), Swiss mathematician.
² Fibonacci (Leonardo of Pisa) (c. 1170–1240), Italian mathematician.
Actually, more than un < 2^n is true, namely un < 1.7^n holds for all n. However,
this cannot be proved using induction in the same form as above, since the most we
can deduce from un < 1.7^n is un+1 = un + un−1 ≤ un + un < 1.7^n + 1.7^n = 2 · 1.7^n,
which is larger than 1.7^(n+1). To prove that un+1 < 1.7^(n+1), we need to use not only
that un < 1.7^n, but also that un−1 < 1.7^(n−1). The statement then follows, since

un+1 = un + un−1 < 1.7^n + 1.7^(n−1) = 1.7 · 1.7^(n−1) + 1.7^(n−1) = 2.7 · 1.7^(n−1)
< 2.89 · 1.7^(n−1) = 1.7^2 · 1.7^(n−1) = 1.7^(n+1).

The proof then proceeds as follows. First we check that u0 < 1.7^0 and u1 < 1.7^1
hold (both are clear). Suppose n ≥ 1, and let us hypothesize that un−1 < 1.7^(n−1)
and un < 1.7^n. Then, as the calculations above show, un+1 < 1.7^(n+1) also holds. This
proves the inequality for every n. Indeed, let us denote the statement un < 1.7^n by An.
We have shown explicitly that A0 and A1 are true. Since we showed (An−1 ∧ An) ⇒
An+1 for every n ≥ 1, (A0 ∧ A1) ⇒ A2 is true. Therefore, A2 is true. Then (A1 ∧ A2) ⇒ A3
shows that A3 is true, and so on.
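The bound un < 1.7^n is also easy to test numerically. The following Python sketch (an illustration only, not part of the text) generates the Fibonacci numbers from their definition and checks the inequality for an initial stretch of indices.

```python
# Fibonacci numbers u_0 = 0, u_1 = 1, u_{n+1} = u_n + u_{n-1}, together with a
# numerical check of the bound u_n < 1.7^n discussed above.
def fibonacci(n):
    u = [0, 1]
    while len(u) <= n:
        u.append(u[-1] + u[-2])
    return u[:n + 1]

for n, un in enumerate(fibonacci(40)):
    assert un < 1.7 ** n, n        # checked here for n = 0, ..., 40
print(fibonacci(10))               # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
```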
We can use this method of induction to prove relationships between the arithmetic, geometric, and harmonic means. We define the arithmetic mean of the numbers a1, . . . , an to be

A = (a1 + · · · + an)/n.

For nonnegative a1, . . . , an, the geometric mean is

G = ⁿ√(a1 · · · an),

and for positive a1, . . . , an, the harmonic mean is

H = n · (1/a1 + · · · + 1/an)^(−1).
The term “mean” indicates that each of these numbers falls between the largest
and the smallest of the ai . If the largest of the ai is M and the smallest is m, then
A ≥ n · m/n = m, and similarly, A ≤ M. If the numbers are not all equal, then m <
A < M also holds. This is so because one of the numbers is then larger than m, so
a1 + · · · + an > n · m and A > m. We can similarly obtain A < M.
The same method can be used to see that for m > 0, we have m ≤ G ≤ M and
m ≤ H ≤ M; moreover, if the numbers are not all equal, then the inequalities are
strict.
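The three means are easy to compute side by side. As an illustration only (not part of the text), the following Python sketch evaluates A, G, and H for a sample list of positive numbers and checks that each of them lies between the smallest and largest term; the chain H ≤ G ≤ A that the check also exercises is exactly the content of Theorems 2.6 and 2.7 below.

```python
# Arithmetic, geometric, and harmonic means of positive numbers a_1, ..., a_n,
# computed directly from the definitions above.
def arithmetic_mean(a):
    return sum(a) / len(a)

def geometric_mean(a):
    product = 1.0
    for x in a:
        product *= x
    return product ** (1.0 / len(a))

def harmonic_mean(a):
    return len(a) / sum(1.0 / x for x in a)

a = [1.0, 2.0, 4.0, 8.0]
A, G, H = arithmetic_mean(a), geometric_mean(a), harmonic_mean(a)
print(A, G, H)                     # 3.75, about 2.828, about 2.133
assert min(a) <= H <= G <= A <= max(a)
```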
Another important property of the three means above is that if we append the
arithmetic mean A to a1 , . . . , an , then the resulting extended set of numbers has the
same arithmetic mean, namely A. That is,

(a1 + · · · + an + A)/(n + 1) = (n · A + A)/(n + 1) = A.
The same is true (and can be proved similarly) for the geometric and harmonic
means.
Theorem 2.6. If a1 , . . . , an are arbitrary nonnegative numbers, then
ⁿ√(a1 · · · an) ≤ (a1 + · · · + an)/n.
Equality holds if and only if a1 = · · · = an .
Proof. Let a1 , . . . , an be arbitrary nonnegative numbers, and let A and G denote their
respective arithmetic and geometric means. Suppose that k of the numbers a1 , . . . , an
are different from A. We prove the statement using induction on k. (The number of
terms, n, is arbitrary.) If k = 0, then the statement clearly holds, since then, all terms
are equal to A, so the value of each mean is also A. Since k = 1 can never happen (if
all terms but one of them are equal to A, then the arithmetic mean cannot be equal
to A), the statement holds for k = 1 as well.
Now let k > 1, and suppose that our statement is true for all sets of numbers such
that the number of terms different from A is either k − 1 or k − 2. We prove that
G < A. We may suppose that a1 ≤ · · · ≤ an , since the order of the numbers plays no
role in the statement. Since there are terms not equal to A, we see that a1 < A < an .
This also implies that k ≥ 2.
If 0 occurs among the terms, then G = 0 and A > 0, so the statement is true.
So we can suppose that the terms are positive. Replace an with A, and a1 with
a1 + an − A. This new set of numbers will have the same arithmetic mean A, since
the replacements did not affect the value of their sum. Also, the new geometric mean
after the replacement must have increased, since A(a1 + an − A) > a1 an , which is
equivalent to (an − A)(A − a1 ) > 0, which is true.
In this new set of numbers, the number of terms different from A decreased by
one or two. Then, by our induction hypothesis, G′ ≤ A, where G′ denotes the geometric mean of the new set. Since G < G′, we have G < A, which concludes the
proof. □
Theorem 2.7. If a1 , . . . , an are arbitrary positive numbers, then
n · (1/a1 + · · · + 1/an)^(−1) ≤ ⁿ√(a1 · · · an).
Equality holds if and only if a1 = · · · = an .
Proof. Apply the previous theorem to the numbers 1/a1 , . . . , 1/an , and then take the
reciprocal of both sides of the resulting inequality. □
The more general form of induction (sometimes called strong induction, while the
previous form is sometimes known as weak induction) is to show that A1 holds, then
in the induction step to show that if A1 , . . . , An are all true, then An+1 is also true.
This proves that every An is true just as our previous arguments did.
Exercises
2.5. Prove that the following identities hold for every positive integer n:
(a) (x^n − y^n)/(x − y) = x^(n−1) + x^(n−2) · y + · · · + x · y^(n−2) + y^(n−1);
(b) 1^2 + · · · + n^2 = n · (n + 1) · (2n + 1)/6;
(c) 1^3 + · · · + n^3 = (n · (n + 1)/2)^2;
(d) 1 − 1/2 + 1/3 − · · · − 1/(2n) = 1/(n + 1) + · · · + 1/(2n);
(e) 1/(1 · 2) + · · · + 1/((n − 1) · n) = (n − 1)/n.
2.6. Express the following sums in simpler terms:
(a) 1/(1 · 2 · 3) + · · · + 1/(n · (n + 1) · (n + 2));
(b) 1 · 2 + · · · + n · (n + 1);
(c) 1 · 2 · 3 + · · · + n · (n + 1) · (n + 2).
2.7. Prove that the following inequalities hold for every positive integer n:
(a) √n ≤ 1 + 1/√2 + · · · + 1/√n < 2√n;
(b) 1 + 1/(2·√2) + · · · + 1/(n·√n) ≤ 3 − 2/√n.
2.8. Denote the nth Fibonacci number by un. Prove that un > 1.6^n/3 for every n ≥ 1.
2.9. Prove the following equalities:
(a) un^2 − un−1 · un+1 = (−1)^(n+1);
(b) u1^2 + · · · + un^2 = un · un+1.
2.10. Express the following sums in simpler terms:
(a) u0 + u2 + · · · + u2n;
(b) u1 + u3 + · · · + u2n+1;
(c) u0 + u3 + · · · + u3n;
(d) u1 u2 + · · · + u2n−1 u2n.
2.11. Find the flaw in the following argument: We want to prove that no matter
how we choose n lines in the plane such that no two are parallel, with n > 1, they
will always intersect at a single point. The statement is clearly true for n = 2. Let
n ≥ 2, and suppose the statement is true for n lines. Let l1 , . . . , ln+1 be lines such
that no two are parallel. By the induction hypothesis, the lines l1 , . . . , ln intersect at
the single point P, while the lines l2 , . . . , ln+1 intersect at the point Q. Since each of
the lines l2 , . . . , ln passes through both P and Q, necessarily P = Q. Thus we see that
each line goes through the point P. (S)
2.12. If we have n lines such that no two are parallel and no three intersect at one
point, into how many regions do they divide the plane? (H)
2.13. Prove that finitely many lines (or circles) partition the plane into regions that
can be colored with two colors such that the borders of two regions with the same
color do not share a common segment or circular arc.
2.14. Deduce Bernoulli’s inequality from the inequality of arithmetic and geometric
means. (S)
2.15. Prove that if a1 , . . . , an ≥ 0, then
(a1 + · · · + an)/n ≤ √((a1^2 + · · · + an^2)/n).

(Use the arguments of the proof of Theorem 2.7.)
2.16. Find the largest value of x^2 · (1 − x) for x ∈ [0, 1]. (H)
2.17. Find the cylinder of maximal volume contained in a given right circular cone.
2.18. Find the cylinder of maximal volume contained in a given sphere.
2.4 Sets, Functions, Sequences
Sets
Every branch of mathematics involves inspecting some set of objects or elements
defined in various ways. In geometry, those elements are points, lines, and planes;
in analysis, they are numbers, sequences, and functions; and so on. Therefore, we
need to clarify some basic notions regarding sets.
What is a set? Intuitively, a set is a collection, family, system, assemblage, or
aggregation of specific things. It is important to note that these terms are redundant:
they are synonyms rather than definitions. In fact, we could not even accept them as
definitions, since we would first have to define a collection, family, system, assemblage, or aggregation, putting us back right where we started. Sometimes, a set is
defined as a collection of things sharing some property. Disregarding the fact that
we again used the term “collection,” we can raise another objection: what do we
mean by a shared property? That is a very subjective notion, which we cannot allow
in a formal definition. (Take, for example, the set that consists of all natural numbers and all circles in the plane. Whether there is a common property here could be
a matter of debate.) So we cannot accept this definition either.
It seems that we are unable to define sets using simpler language. In fact, we
have to accept that there are concepts that we cannot express in simpler terms (since
otherwise, the process of defining things would never end), so we need to have
some fundamental concepts that we do not define. A set is one of those fundamental
concepts. All that we suppose is that every item in our universe of discourse is either
an element of a given set or not an element of that set. (Here we are using the word
"or" in the exclusive sense: exactly one of the two possibilities holds.)
If x is an element of the set H (we also say that x belongs to H, or is in H), we
denote this relationship by x ∈ H. If x is not an element of the set (we also say that
x does not belong to H, or is not in H), we denote this fact by x ∉ H.
We can represent sets themselves in two different ways. The ostensibly simpler way is to list the set’s elements inside a pair of curly braces: A = {1, 2, 3, 4, 6, 8,
12, 24}. If a set H contains only one element, x, then we denote it by H = {x}. We
can even list an infinite set of elements if the relationship among its elements is
unambiguous, for example N = {0, 1, 2, 3, . . . }. The other common way of denoting
sets is that after a letter or symbol denoting a generic element we put a colon, and
then provide a rule that defines such elements:
N = {n : n is a nonnegative integer} = {0, 1, 2, . . . },
N+ = {n : n is a positive integer} = {1, 2, . . . },
B = {n : n is an odd natural number} = {2n − 1 : n = 1, 2, . . . } = {1, 3, 5, . . . },
C = {1/n : n = 1, 2, . . . } = {1, 1/2, 1/3, . . . },
D = {n : n | 24 and n > 0} = {1, 2, 3, 4, 6, 8, 12, 24}.
What do we mean by the equalities here? By agreement, we consider two sets A and
B to be equal if they have the same elements, that is, for every object x, x ∈ A if and
only if x ∈ B. Using notation:
A = B ⇐⇒ (∀x)(x ∈ A ⇐⇒ x ∈ B).
In other words, A = B if every element of A is an element of B and vice versa. Let us
note that if we list an element multiple times, it does not affect the set. For example,
{1, 1, 2, 2, 3, 3, 3} = {1, 2, 3}, and {n^2 : n is an integer} = {n^2 : n ≥ 0 is an integer} =
{0, 1, 4, 9, 16, . . . }.
As we have seen, a set can consist of a single element, for example {5}. Such a
set is called a singleton; this set, of course, is not equal to the number 5 (which is
not a set), nor to {{5}} which is indeed a set (and a singleton), but its only element
is not 5 but {5}. We see that an element of a set can be a set itself.
Is it possible for a set to have no elements? Consider the following examples:
G = {p : p is prime, and 888 ≤ p ≤ 906},
H = {n^2 : n ∈ N and the sum of the decimal digits of n^2 is 300}.
At first glance, these sets seem just as legitimate as all the previously listed ones.
But if we inspect them more carefully, we find out that they are empty, that is, they
have no elements. Should we exclude them from being sets? Were we to do so, then
before specifying any set, we would have to make sure that it had an element. Other
than making life more difficult, there would be another drawback: we are not always
able to check this. It is a longstanding open problem whether or not there is an odd
perfect number (a number is perfect if it is equal to the sum of its positive proper
divisors). If we exclude sets with no elements from being sets, we would not be able
to determine whether {n : n is an odd perfect number} is a well-defined set.
Thus it seems practical to agree that the above definitions form sets, and so (certainly in the case of G or H above) we accept sets that do not have any elements.
How many such sets are there? By our previous notion of equality, there is just one,
since if neither A nor B has any elements, then it is vacuously true that every element
of A is an element of B, and vice versa, since there are no such elements. We call
this single set the empty set and denote it by ∅.
If every element of a set B is an element of a set A, then we say that B is a subset
of A, or that B is contained in A. Notation: B ⊂ A or A ⊃ B; we can say equivalently
that B is a subset of A and that A contains B.³
It is clear that A = B if and only if A ⊂ B and B ⊂ A. If B ⊂ A but B ≠ A, then
we say that B is a proper subset of A. We denote this by B ⊊ A.
Just as we define operations between numbers (such as addition and multiplication), we can define operations between sets. For two sets A and B, their union is
the set of elements that belong to at least one of A and B. We denote the union of A
and B by A ∪ B, so
A ∪ B = {x : x ∈ A ∨ x ∈ B}.
We can define the union of any number of sets, even infinitely many: A1 ∪ A2 ∪ · · ·
∪ An denotes the set of elements that belong to at least one of A1, . . . , An. This same
set can also be denoted more concisely by ⋃_{i=1}^{n} Ai. Similarly, A1 ∪ A2 ∪ · · · or ⋃_{i=1}^{∞} Ai
denotes the set of all elements that belong to at least one of A1, A2, . . . .
For two sets A and B, their intersection is the set of elements that belong to both
A and B. We denote the intersection of A and B by A ∩ B, so
A ∩ B = {x : x ∈ A ∧ x ∈ B}.
The intersection of any finite number of sets or infinitely many sets is defined similarly as for union.
We say that two sets A and B are disjoint if A ∩ B = ∅.
For two sets A and B, their set-theoretic difference is the set of elements that are
in A but not in B. We denote the difference of sets A and B by A \ B, so
A \ B = {x : x ∈ A ∧ x ∉ B}.
Let H be a fixed set, and let X ⊂ H. We call H \ X the complement (with respect to
H) of X, and we denote it by X̄. It is easy to see that the complement of the complement
of A is A itself, and the so-called De Morgan identities are also straightforward: the
complement of A ∩ B is Ā ∪ B̄, and the complement of A ∪ B is Ā ∩ B̄.

³ Sometimes, containment is denoted by ⊆.
Here are some further identities:

A \ A = ∅,                        A \ ∅ = A,
A ∪ A = A,                        A ∩ A = A,
A ∪ B = B ∪ A,                    A ∩ B = B ∩ A,
A ∪ (B ∪ C) = (A ∪ B) ∪ C = A ∪ B ∪ C,
A ∩ (B ∩ C) = (A ∩ B) ∩ C = A ∩ B ∩ C,
A ∪ (B ∩ C) = (A ∪ B) ∩ (A ∪ C),
A ∩ (B ∪ C) = (A ∩ B) ∪ (A ∩ C),
A ⊂ A,                            A ⊂ B, B ⊂ C ⇒ A ⊂ C,
∅ ⊂ A.                                                          (2.1)
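As an illustration (not part of the text), the following Python sketch models finite sets, takes complements with respect to a fixed finite set H, and checks a De Morgan identity, one distributive law from (2.1), and the symmetric-difference identity of Exercise 2.20 below.

```python
# Union, intersection, difference, and complement on finite sets, together with
# checks of a De Morgan identity and a distributive law from (2.1).
H = set(range(10))                       # the fixed "universe" for complements
A = {1, 2, 3, 4, 6, 8}
B = {2, 4, 5, 9}
C = {1, 4, 5}

def complement(X):
    return H - X                         # complement with respect to H

assert complement(A & B) == complement(A) | complement(B)   # De Morgan
assert complement(A | B) == complement(A) & complement(B)   # De Morgan
assert A | (B & C) == (A | B) & (A | C)                     # distributivity
assert (A | B) - (A & B) == (A - B) | (B - A)               # Exercise 2.20
print(A | B, A & B, A - B, sep="\n")
```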
Functions
Consider a mathematical expression that contains the variable x, such as

x^2 + 1   or   (x + 3)/(x − 2).
To evaluate such an expression for a particular numeric value of x, we have to compute the value of the expression when the given number is substituted for x (this
is possible, of course, only if the expression makes sense; in the second example,
x = 2 is impermissible). These formulas thereby associate certain numbers with
other numbers. There are many ways of creating such correspondences between
numbers. For example, we could consider the number of positive divisors of an integer n, the sum of the digits of n, etc. We can even associate numbers with other
things, for example to each person their weight or how many strands of hair they
have. Even more generally, we could associate to each person their name. These examples define functions, or mappings. These two terms are synonymous, and they
mean the following.
Consider two sets A and B. Suppose that for each a ∈ A, we somehow associate
with a an element b ∈ B. Then this association is called a function or mapping.
If this function is denoted by f , then we say that f is a function from A to B, and we
write f : A → B. If the function f associates a ∈ A to b ∈ B, we say that f maps a to
b (or a is mapped to b under f ), or that the value of f at a is b, and we denote this
by b = f (a). We call A the domain of f .
When we write down a formula, for example n^2 + 1, and we want to emphasize
that we are not talking about the number n^2 + 1 but the mapping that maps n to
n^2 + 1, then we can denote this by

n → n^2 + 1 (n ∈ N).
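As a small illustration (not part of the text), here is the mapping n → n^2 + 1 written as a Python function and evaluated on an initial segment of N; in programming terms, a function f : A → B is exactly such an association of one element of B with each element of A.

```python
# The mapping n -> n^2 + 1 with domain N, evaluated on a finite initial segment.
def f(n):
    return n * n + 1

print({n: f(n) for n in range(6)})   # {0: 1, 1: 2, 2: 5, 3: 10, 4: 17, 5: 26}
```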
Sequences
Writing arbitrary elements sequentially, that is, in a particular order, gives us a
sequence. If the number of elements in the list is n, then we say that we have a sequence of length n. To define a sequence, we need to specify what the first, second,
and generally the kth element is, for each k = 1, . . . , n. A sequence of length n is usually denoted by (a1 , a2 , . . . , an ), where, of course, instead of a we can use any other
letter or symbol. We often call the elements a1 , . . . , an the terms of the sequence;
the number representing each term’s position is called its index. We consider two
sequences of length n to be equal if their kth terms agree for each k = 1, . . . , n, so the
order of the terms matters. That is, (a1, a2, a3) = (Chicago, √2, Shakespeare) if and
only if a1 = Chicago, a2 = √2, and a3 = Shakespeare. The terms of the sequence
need not be distinct, so (2, 3, 3, 3) is a sequence of length 4.
We sometimes also call sequences of length n ordered n-tuples. For n = 2, instead of ordered two-tuples we say ordered pairs, and for n = 3, we usually say
ordered triples.
If we want to make the ambiguous “sequentially” above more precise, we may
say that a sequence of length n is a mapping with domain {1, 2, . . . , n} that maps k to
the kth term of the sequence. So the sequence defined by the map a : {1, 2, . . . , n} →
B is (a(1), a(2), . . . , a(n)), or as we previously denoted it, (a1 , a2 , . . . , an ).
We will often work with infinite sequences. We obtain these by writing infinitely
many elements sequentially. More precisely, we define an infinite sequence to be a
function with domain N+ = {1, 2, . . . }. That is, the function a : N+ → B defines a
sequence (a(1), a(2), . . . ), which we can also denote by (a1, a2, . . . ) or by (ai)_{i=1}^∞.
We also consider functions with domain N as infinite sequences. These can be
denoted by (a(0), a(1), . . . ), or (a0, a1, . . . ), or even (ai)_{i=0}^∞. More generally, every
function from a set of the form {k, k + 1, . . . } is called an infinite sequence; the
notation is straightforward.
Exercises
2.19. Prove the identities in (2.1).
2.20. Prove that (A ∪ B) \ (A ∩ B) = (A \ B) ∪ (B \ A).
2.21. Let A△B denote the set (A \ B) ∪ (B \ A). (We call A△B the symmetric difference of A and B.) Show that for arbitrary sets A, B, C,
(a) A△∅ = A,
(b) A△A = ∅,
(c) (A△B)△C = A△(B△C).
2.22. Prove that x ∈ A1 △A2 △ · · · △An holds if and only if x is an element of an odd
number of A1, . . . , An.⁴
2.23. Determine which of the following statements are true and which are false:⁵
(a) (A ∪ B) \ A = B,
(b) (A ∪ B) \ C = A ∪ (B \ C),
(c) (A \ B) ∩ C = (A ∩ C) \ B = (A ∩ C) \ (B ∩ C),
(d) A \ B = A \ (A ∩ B).
2.24. Let U(A1 , . . . , An ) and V (A1 , . . . , An ) be expressions that are formed from the
operations ∪, ∩, and \ on the sets A1 , . . . , An . (For example, U = A1 ∩ (A2 ∪ A3 )
and V = (A1 ∩ A2 ) ∪ (A1 ∩ A3 ).) Prove that U(A1 , . . . , An ) = V (A1 , . . . , An ) holds for
every A1 , . . . , An if and only if it holds for all sets A1 , . . . , An such that the nonempty
ones are all equal. (∗ H)
⁴ We can write A1△A2△· · ·△An, since we showed in part (c) of Exercise 2.21 that different placements of parentheses define the same set (so we may simply omit them).
⁵ That is, either prove that the identity holds for every A, B, C, or give some A, B, C for which it does not hold.
Chapter 3
Real Numbers
What are the real numbers? The usual answer is that they comprise the rational and
irrational numbers. That is correct, but what are the irrational numbers? They are
the numbers whose decimal expansions are infinite and nonrepeating. But
for this, we need to know precisely what an infinite decimal expansion is. We obtain
an infinite decimal expansion by
1. executing a division algorithm indefinitely, e.g.,
1/7 = 0.142857142857 . . . ,
or
2. locating a point on the number line, e.g.,
1 < √2 < 2
1.4 < √2 < 1.5
1.41 < √2 < 1.42

etc. Based on this, we can say the decimal expansion of √2 is 1.41 . . . .
We note that the decimal expansion in 1. above also determines the location of
the respective (rational) point on the number line. That is, a decimal expansion like
1.41421356 . . . always locates a point on the number line.
Now the question is the following: is the decimal expansion the number itself, or
just a representation of it? The latter hypothesis is supported by the fact that we can
write the number in different numeral systems to obtain different representations.
For example, the number 1/2 has the decimal form 0.5, while in binary, 1/2 = 0.1;
in ternary, 1/2 = 0.111 . . . . But if decimal expansions are just a representation of the
number, then what is the number itself? Perhaps it is the point that is represented by
the decimal expansion?
We can imagine the real numbers in various ways. But they will always be objects
that we can use to measure distance, area, etc., and on which we can have operations
(addition, multiplication, etc.). In the end, there are two ways in which we can clarify
what real numbers are:
I. Constructive approach. We state that one of the above (or another) concepts
defines the real numbers. For example, we can declare that the set of real numbers is the set of all infinite decimal expansions. The advantage of such a construction is that it answers the question as to what the real numbers are, albeit
arbitrarily. But a disadvantage also arises: those who had a different image of
real numbers will have a hard time following.
In a constructive approach, operations are usually not easy to define. (For example, it is not really clear what the product 2 · 0.898899888999 . . . should be.)
Properties of an operation, such as the distributive law a(b + c) = ab + ac, can
also be inconvenient to check.
II. Axiomatic approach. In the axiomatic construction, we do not state what the
real numbers are, but what properties they satisfy. Everyone can imagine what
they want, but we fix some basic properties, and we refer only to these. Whoever
accepts that the real numbers are like this also must accept any conclusions that
we logically draw from the basic properties. Of course, these properties should
be something that we should generally expect from the real numbers.
The axiomatic approach does not make direct constructions useless, since the
question arises whether there exists such a construct satisfying the basic properties that we have stated. Several direct constructions are known. The construction by infinite decimal expansions can be seen in the book [1], while another
construction will be touched upon in Remark 6.14. A third construction can be
seen in Walter Rudin’s textbook [7].
In the following sections, we follow an axiomatic approach to the real numbers. In particular, the notion of real numbers will be a fundamental concept.
The basic properties that we accept without proof are called the axioms of the
real numbers. We state the axioms in four groups. The first group consists of
axioms regarding operations (addition and multiplication), while the second is
made up of properties about ordering (less than, greater than). The third and
fourth groups are just one axiom each. These will express that there are “arbitrarily large” natural numbers, as well as the fact that the set of real numbers is
“complete”.
I. Field Axioms
The first group of axioms requires some further fundamental concepts. We denote
the set of real numbers by R. We assume that there are two operations defined on the
real numbers, called addition and multiplication. By this, we mean that for any two
(not necessarily distinct) real numbers a, b ∈ R, there correspond a number denoted
by a + b (the sum of a and b) as well as a number denoted by a · b (the product of
3 Real Numbers
29
a and b). We also suppose that two distinct numbers 0 and 1 are specified. The first
group of axioms deals with these concepts.
1. Commutativity of addition: a + b = b + a for each a, b ∈ R.
2. Associativity of addition: (a + b) + c = a + (b + c) for each a, b, c ∈ R.
3. a + 0 = a for each a ∈ R.
4. For each a ∈ R, there exists a b ∈ R such that a + b = 0.
5. Commutativity of multiplication: a · b = b · a for each a, b ∈ R.
6. Associativity of multiplication: (a · b) · c = a · (b · c) for each a, b, c ∈ R.
7. a · 1 = a for each a ∈ R.
8. For each a ∈ R, a ≠ 0, there exists a b ∈ R such that a · b = 1.
9. Distributivity: a · (b + c) = a · b + a · c for each a, b, c ∈ R.
If two operations that satisfy the nine axioms above are defined on a set, then we
say that the set with the two operations is a field. (This includes the specification of
the distinct elements 0 and 1.) The first group of axioms thus tells us that the real
numbers form a field. This is why we call axioms 1–9 the field axioms.
It is easy to show that for each a ∈ R, there is exactly one b ∈ R such that a + b = 0
(see Theorem 3.28 in the first appendix). We denote this unique b by −a. Similarly,
for each a ≠ 0 there is exactly one b ∈ R such that a · b = 1 (see Theorem 3.30 in
the first appendix). This element b will be denoted by 1/a. If c ≠ 0, then we
denote a · (1/c) by a/c.
We can show that all the usual properties of arithmetic we are used to and use
without hesitation in solving algebraic expressions follow from the field axioms. We
list these properties in the first appendix of the chapter, proving the most important
ones.
II. Order axioms
The second group of axioms requires another fundamental concept. We suppose
that there is a so-called order relation, denoted by < (less than), defined on the real
numbers. By this, we mean that for any two real numbers a and b, a < b is either true
or false. (We could also formulate this as follows: we have a map from the ordered
pairs of real numbers to the logical statements “true” and “false.” If (a, b) maps to
“true,” then we denote this by a < b.) The order axioms fix some properties of this
order relation.
10. Trichotomy: For any two real numbers a and b, exactly one of a < b, a = b,
b < a is true.
11. Transitivity: For each a, b, c ∈ R, if a < b and b < c, then a < c.
12. For each a, b, c ∈ R, if a < b, then a + c < b + c.
13. For each a, b, c ∈ R, if a < b and 0 < c, then a · c < b · c.
To talk about consequences of the order axioms, we need to introduce some
notation. We say that a ≤ b (a is less than or equal to b) if a < b or a = b. The
notation a > b is equivalent to b < a; a ≥ b is equivalent to b ≤ a.
We call the number a positive if a > 0; negative if a < 0; nonnegative if a ≥ 0;
nonpositive if a ≤ 0.
We call numbers that we get from adding 1 repeatedly natural numbers:
1, 1 + 1 = 2, 1 + 1 + 1 = 2 + 1 = 3, . . . .
The set of natural numbers is denoted by N+ .
The integers are the natural numbers, these times −1, and 0. The set of integers
is denoted by Z.
We denote the set of nonnegative integers by N. That is, N = N+ ∪ {0}.
A real number is rational if we can write it as p/q, where p and q are integers
and q ≠ 0. We will denote the set of rational numbers by Q.
A real number is called irrational if it is not rational.
As with the case of field axioms, any properties and rules we have previously
used to deal with inequalities follow from the order axioms above. We will discuss
these further in the second appendix of this chapter. We recommend that the reader
look over the Bernoulli inequality as well as the relationship between the arithmetic and geometric means, as well as the harmonic and arithmetic means (Theorems 2.5, 2.6, 2.7), and check to make sure that the properties used in the proofs all
follow from the order axioms.
We highlight one consequence of the order axioms, namely that the natural
numbers are positive and distinct; more precisely, they satisfy the inequalities
0 < 1 < 2 < · · · (see Theorem 3.38 in the second appendix). Another important
fact is that there are no neighboring real numbers. That is, for arbitrary real numbers a < b, there exists a c such that a < c < b, for example, c = (a + b)/2 (see
Theorem 3.40).
Definition 3.1. The absolute value of a real number a, denoted by |a|, is defined as
follows:

|a| = a if a ≥ 0,   and   |a| = −a if a < 0.
From the definition of absolute value and the previously mentioned consequences
of the order axioms, it is easy to check the statements below for arbitrary real numbers a and b:
|a| ≥ 0, and |a| = 0 if and only if a = 0;
|a| = | − a|;
|a · b| = |a| · |b|;
If b ≠ 0, then |1/b| = 1/|b| and |a/b| = |a|/|b|;
Triangle Inequality: |a + b| ≤ |a| + |b|; ||a| − |b|| ≤ |a − b|.
III. The Axiom of Archimedes
14. For an arbitrary real number b, there exists a natural number n larger than b.
If we replace b by b/a (where a and b are positive numbers) in the above axiom,
we get the following consequence:
If a and b are arbitrary positive numbers, then there exists a natural number n
such that n · a > b.
And if instead of b we write 1/ε , where ε > 0, then we get:
If ε is an arbitrary positive number, then there exists a natural number n such
that 1/n < ε .
An important consequence of the axiom of Archimedes is that the rational numbers are “everywhere dense” within the real numbers.
Theorem 3.2. There exists a rational number between any two real numbers.
Proof. Let a < b be real numbers, and suppose first that 0 ≤ a < b. By the axiom
of Archimedes, there exists a positive integer n such that 1/n < b − a. By the same
axiom, there exists a positive integer m such that a < m/n. Let k be the smallest
positive integer for which a < k/n. Then
(k − 1)/n ≤ a < k/n,

and thus

k/n − a ≤ k/n − (k − 1)/n = 1/n < b − a.
So what we have obtained is that a < k/n < b, so we have found a rational number
between a and b.
Now suppose that a < b ≤ 0. Then 0 ≤ −b < −a, so by the above argument, there
is a rational number r such that −b < r < −a. Then a < −r < b, and the statement
is again true.
Finally, if a < 0 < b, then there is nothing to prove, since 0 is a rational number. □
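The proof is constructive, and for concrete rational bounds it can be carried out with exact arithmetic. The following Python sketch (an illustration only; the function name is ours) follows the same two steps: pick n with 1/n < b − a, as the axiom of Archimedes allows, and then take the smallest positive integer k with a < k/n.

```python
# Finding a rational number strictly between 0 <= a < b, following the proof of
# Theorem 3.2: choose n with 1/n < b - a, then the smallest k with a < k/n.
from fractions import Fraction
from math import floor

def rational_between(a, b):
    assert 0 <= a < b
    n = 1
    while Fraction(1, n) >= b - a:     # such an n exists by the axiom of Archimedes
        n += 1
    k = floor(a * n) + 1               # smallest positive integer with a < k/n
    return Fraction(k, n)

a, b = Fraction(17, 12), Fraction(10, 7)
r = rational_between(a, b)
print(r, a < r < b)                    # 121/85 True
```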
Remark 3.3. The axiom of Archimedes is indeed necessary, since it does not follow
from the previous field and order axioms. We can show this by giving an example
of an ordered field (a set with two operations and an order relation where the field
and order axioms hold) where the axiom of Archimedes is not satisfied. We outline
the construction.
We consider the set of polynomials in a single variable with integer coefficients, that is, expressions of the form an x^n + an−1 x^(n−1) + · · · + a1 x + a0, where n ≥ 0,
a0, . . . , an are integers, and an ≠ 0, and also the expression 0. We denote the
set of polynomials with integer coefficients by Z[x]. We say that the polynomial
an x^n + an−1 x^(n−1) + · · · + a1 x + a0 is nonzero if at least one of its coefficients ai is
nonzero. In this case, the leading coefficient of the polynomial is ak , where k is
the largest index with ak ≠ 0. Expressions of the form p/q, where p, q ∈ Z[x] and
q ≠ 0, are called rational expressions with integer coefficients (often simply rational
expressions). We denote the set of rational expressions by Z(x). We consider the
rational expressions p/q and r/s to be equal if ps = qr; that is, if ps and qr expand
to the same polynomial.
We define addition and multiplication of rational expressions as

p/q + r/s = (ps + qr)/(qs)   and   (p/q) · (r/s) = (pr)/(qs).
One can show that with these operations, the set Z(x) becomes a field (where 0/1
and 1/1 play the role of the zero and the identity element respectively). We say that
the rational expression p/q is positive if the signs of the leading coefficients of p
and q agree. We say that the rational expression f is less than the rational expression
g if g − f is positive. We denote this by f < g. It is easy to check that Z(x) is an
ordered field; that is, the order axioms are also satisfied. In this structure, the natural
numbers will be the constant rational expressions n/1.
Now we show that in the above structure, the axiom of Archimedes is not met.
We must show that there exists a rational expression that is greater than every natural
number. We claim that the expression x/1 has this property. Indeed, (n/1) < (x/1)
for each n, since the difference (x/1) − (n/1) = (x − n)/1 is positive.
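The comparison just described is purely mechanical, so the failure of the axiom of Archimedes in Z(x) can be checked by a small computation. The following Python sketch is a toy model only (all helper names are ours, not the book's): it stores a polynomial as its list of integer coefficients, a rational expression as a pair of such lists, and verifies n/1 < x/1 for several natural numbers n.

```python
# A toy model of the ordered field Z(x) of Remark 3.3.  A polynomial is a list
# of integer coefficients (constant term first); a rational expression is a
# pair (p, q) with q nonzero.
def poly_mul(p, q):
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def poly_sub(p, q):
    n = max(len(p), len(q))
    return [(p[i] if i < len(p) else 0) - (q[i] if i < len(q) else 0)
            for i in range(n)]

def leading(p):
    return next((c for c in reversed(p) if c != 0), 0)

def is_positive(f):
    p, q = f
    return leading(p) * leading(q) > 0   # leading coefficients have equal signs

def less_than(f, g):                     # f < g  iff  g - f is positive
    (p, q), (r, s) = f, g
    return is_positive((poly_sub(poly_mul(r, q), poly_mul(p, s)), poly_mul(q, s)))

x = ([0, 1], [1])                        # the rational expression x/1
for n in (1, 10, 10**6):
    assert less_than(([n], [1]), x)      # n/1 < x/1 for every sampled natural number
print("x/1 is larger than every sampled natural number")
```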
IV. Cantor’s Axiom
The properties listed up until now (the 14 axioms and their consequences) still do
not characterize the real numbers, since it is clear that the rational numbers satisfy
those same properties. On the other hand, there are properties that we expect from
the real numbers but are not true for rational numbers. For example, we expect a
solution in the real numbers to the equation x2 = 2, but we know that a rational
solution does not exist (Theorem 2.1).
The last axiom, the so-called Cantor's axiom,¹ plays a central role in analysis. It expresses
the fact that the set of real numbers is in some sense “complete.”
To state the axiom, we need some definitions and notation. Let a < b. We call the
set of real numbers x for which a ≤ x ≤ b is true a closed interval, and we denote it
by [a, b]. The set of x for which a < x < b holds is denoted by (a, b), and we call it
an open interval.
Let In = [an , bn ] be a closed interval for every natural number n. We say that
I1 , I2 , . . . form a sequence of nested closed intervals if I1 ⊃ I2 ⊃ . . . ⊃ In ⊃ . . . , that
is, if
an ≤ an+1 < bn+1 ≤ bn
holds for each n. We can now formulate Cantor’s axiom.
¹ Georg Cantor (1845–1918), German mathematician.
15. Every sequence of nested closed intervals I1 ⊃ I2 ⊃ . . . has a common point,
that is, there exists a real number x such that x ∈ In for each n.
Remark 3.4. It is important that the intervals In be closed: a sequence of nested open
intervals does not always contain a common element. If, for example, Jn = (0, 1/n)
(n = 1, 2, . . .), then J1 ⊃ J2 ⊃ . . ., but the intervals Jn do not contain a common
element. For if x ∈ J1, then x > 0. Then by the axiom of Archimedes, there exists an
n for which 1/n < x, and for this n, we see that x ∉ Jn.
Let us see when the nested closed intervals I1 , I2 , . . . have exactly one common
point.
Theorem 3.5. The sequence of nested closed intervals I1 , I2 , . . . has exactly one
common point if and only if there is no positive number that is smaller than every
bn − an , that is, if for every δ > 0, there exists an n such that bn − an < δ .
Proof. If x and y are both shared elements and x < y, then an ≤ x < y ≤ bn , and so
y − x ≤ bn − an for each n. In other words, if the closed intervals I1 , I2 , . . . contain
more than one common element, then there exists a positive number smaller than
each bn − an .
Conversely, suppose that bn − an ≥ δ > 0 for all n. Let x be a common point
of the sequence of intervals. If bn ≥ x + (δ /2) for every n, then x + (δ /2) is also
a common point. Similarly, if an ≤ x − (δ /2) for all n, then x − (δ /2) is also a
common point. One of these two cases must hold, since if bn < x + (δ/2) for some
n and am > x − (δ/2) for some m, then for k ≥ max(n, m), we have that x − (δ/2) <
ak < bk < x + (δ/2) and bk − ak < δ, which is impossible. □
Cantor’s axiom concludes the axioms of the real numbers. With this axiomatic
construction, by the real numbers we mean a structure that satisfies axioms 1–15.
We can also express this by saying that
the real numbers form an Archimedean ordered field in which Cantor’s axiom
is satisfied.
As we mentioned before, such a field exists. A sketch of the construction of a
field satisfying the conditions will be given in Remark 6.14.
Before beginning our detailed exposition of the theory of analysis, we give an
important example of the application of Cantor's axiom. If a ≥ 0 and k is a positive
integer, then ᵏ√a denotes the nonnegative number whose kth power is a. But it is not
at all obvious that such a number exists. As we saw, the first 14 axioms do not even
guarantee the existence of √2. We show that from the complete axiom system, the
existence of ᵏ√a follows.
Theorem 3.6. If a ≥ 0 and k is a positive integer, then there exists exactly one nonnegative real number b for which b^k = a.
Proof. We can suppose that a > 0. We give the proof only for the special case k = 2;
the general case can be proved similarly. (Later, we give another proof of the theorem; see Corollary 10.59.)
We will find the b we want as the common point of a sequence of nested closed
intervals. Let u1 and v1 be nonnegative numbers for which u1^2 ≤ a ≤ v1^2. (Such are,
for example, u1 = 0 and v1 = a + 1, since (a + 1)^2 > 2 · a > a.)
Suppose that n ≥ 1 is an integer, and we have already defined the numbers un and
vn such that

un^2 ≤ a ≤ vn^2     (3.1)

holds. We distinguish two cases. If

((un + vn)/2)^2 < a,

then let

un+1 = (un + vn)/2 and vn+1 = vn.

But if

((un + vn)/2)^2 ≥ a,

then let

un+1 = un and vn+1 = (un + vn)/2.

It is clear that in both cases, [un+1, vn+1] ⊂ [un, vn] and

un+1^2 ≤ a ≤ vn+1^2.

With this, we have defined un and vn for each n. It follows from the definition that
the intervals [un, vn] are nested closed intervals, so by Cantor's axiom, there exists a
common point.
If b is a common point, then un ≤ b ≤ vn, so

un^2 ≤ b^2 ≤ vn^2     (3.2)

holds for each n. Thus by (3.1), a and b^2 are both common points of the interval
system [un^2, vn^2]. We want to see that b^2 = a. By Theorem 3.5, it suffices to show that
for every δ > 0, there exists n such that vn^2 − un^2 < δ.
We obtained the interval [un+1, vn+1] by "halving" the interval [un, vn] and taking
one of the halves. From this, we see clearly that vn+1 − un+1 = (vn − un)/2. Of
course, we can also conclude this from the definition, since

vn − (un + vn)/2 = (un + vn)/2 − un = (vn − un)/2.

Then by induction, we see that vn − un = (v1 − u1)/2^(n−1) for each n. From this, we
see that

vn^2 − un^2 = (vn − un) · (vn + un) ≤ ((v1 − u1)/2^(n−1)) · (v1 + v1) ≤ (2 · v1^2)/2^(n−1) = (4 · v1^2)/2^n ≤ (4 · v1^2)/n     (3.3)

for each n. Let δ be an arbitrary positive number. By the axiom of Archimedes, there
exists n for which 4 · v1^2/n is smaller than δ. This shows that for suitable n, we have
vn^2 − un^2 < δ, so by Theorem 3.5, b^2 = a.
The uniqueness of b is clear. For if 0 < b1 < b2, then b1^2 < b2^2, so only one of b1^2
and b2^2 can be equal to a. □
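The proof is again constructive: halving [un, vn] repeatedly yields arbitrarily good approximations of the square root. The following Python sketch (an illustration only, using floating-point numbers in place of exact reals) reproduces the halving step for the special case k = 2.

```python
# The interval-halving construction from the proof of Theorem 3.6, for k = 2:
# starting from [u1, v1] = [0, a + 1], each step keeps the half whose endpoints
# still satisfy u**2 <= a <= v**2.
def sqrt_by_nested_intervals(a, steps):
    u, v = 0.0, a + 1.0
    for _ in range(steps):
        m = (u + v) / 2
        if m * m < a:
            u = m                     # (u_n + v_n)/2 squared is below a
        else:
            v = m                     # (u_n + v_n)/2 squared is at least a
    return u, v                       # v - u = (v1 - u1) / 2**steps

print(sqrt_by_nested_intervals(2, 50))   # both endpoints are close to 1.4142135...
```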
Let us note that Theorem 2.1 became truly complete only now. In Chapter 1,
when stating the theorem, we did not care whether the number denoted by √2 actually exists. In the proof of Theorem 2.1, we proved only that if √2 exists, then it
cannot be rational. It is true, however, that in this proof we used only the field and
order axioms (check).
Exercises
3.1. Consider the set {0, 1, . . . , m − 1} with addition and multiplication modulo m.
(By this, we mean that i + j ≡ k if the remainders of i + j and k on dividing by m
are the same, and similarly i · j ≡ k if the remainders of i · j and k on dividing by m
agree.) Show that this structure satisfies the field axioms if and only if m is prime.
3.2. Give an addition and a multiplication rule on the set {0, 1, a, b} that satisfy the
field axioms.
3.3. Let F be a subset of the real numbers such that 1 ∈ F and F ≠ {0, 1}. Suppose
that if a, b ∈ F and a ≠ 0, then (1/a) − b ∈ F. Prove that F is a field.
3.4. Prove that a finite field cannot be ordered in a way that it satisfies the order
axioms. (H)
3.5. Using the field and order axioms, deduce the properties of the absolute value
listed.
3.6. Check that the real numbers with the operation (a, b) → a + b + 1 satisfy the
first four axioms. What is the zero element? Define a multiplication with which we
get a field.
3.7. Check that the positive real numbers with the operation (a, b) → a · b satisfy the
first four axioms. What is the zero element? Define a multiplication with which we
get a field.
3.8. Check that the positive rational numbers with the operation (a, b) → a · b satisfy
the first four axioms. What is the zero element? Prove that there is no multiplication
here that makes it a field.
3.9. Check that the set of rational expressions with the operations and ordering given
in Remark 3.3 satisfy the field and order axioms.
3.10. Is it true that the set of rational expressions with the given operations satisfies
Cantor’s axiom? (H)
3.11. In Cantor’s axiom, we required the sequence of nested intervals to be made up
of closed, bounded, and nonempty intervals. Check that the statement of Cantor’s
axiom is no longer true if any of these conditions are omitted.
3.1 Decimal Expansions: The Real Line
As we mentioned before, the decimal expansion of a real number “locates a point
on the number line.” We will expand on this concept now. First of all, we will give
the exact definition of the decimal expansion of a real number. We will need the
conventional notation for finite decimal expansions: if n is a nonnegative integer
and each a1, . . . , ak is one of 0, 1, . . . , 9, then n.a1 . . . ak denotes the sum

n + a1/10 + a2/10^2 + · · · + ak/10^k.
Let x be an arbitrary nonnegative real number. We say that the decimal expansion
of x is n.a1 a2 . . . if
n ≤ x ≤ n + 1,
n.a1 ≤ x ≤ n.a1 + 1/10,
n.a1a2 ≤ x ≤ n.a1a2 + 1/10^2,     (3.4)

and so on hold.
In other words, the decimal expansion of x ≥ 0 is n.a1 a2 . . . if
n.a1 . . . ak ≤ x ≤ n.a1 . . . ak + 1/10^k     (3.5)

holds for each positive integer k.
Several questions arise from the above definitions. Does every real number have
a decimal expansion? Is the expansion unique? Is every decimal expansion the decimal expansion of a real number? (That is, for a given decimal expansion, does
there exist a real number whose decimal expansion agrees with it?) The following
theorems give answers to these questions.
Theorem 3.7. Every nonnegative real number has a decimal expansion.
Proof. Let x ≥ 0 be a given real number. By the axiom of Archimedes, there exists
a positive integer larger than x. If k is the smallest positive integer larger than x, and
n = k − 1, then n ≤ x < n + 1. Since n + (a/10) (a = 0, 1, . . . , 10) is not larger than
x for a = 0 but larger for a = 10, there is an a1 ∈ {0, 1, . . . , 9} for which n.a1 ≤
x < (n.a1) + 1/10. Since (n.a1) + (a/10^2) (a = 0, 1, . . . , 10) is not larger than x for
a = 0 but larger for a = 10, there is an a2 ∈ {0, 1, . . . , 9} for which n.a1a2 ≤ x <
(n.a1a2) + 1/10^2. Repeating this process, we get the digits a1, a2, . . . , which satisfy
(3.5) for each k. □
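The proof describes a concrete digit-by-digit procedure, and with exact rational arithmetic it can be carried out verbatim. The following Python sketch (an illustration only; the function name is ours) computes the first few digits of 1/7 and returns digits satisfying n.a1 . . . ak ≤ x < n.a1 . . . ak + 1/10^k.

```python
# The digit-by-digit process from the proof of Theorem 3.7, with exact rationals.
from fractions import Fraction

def decimal_digits(x, k):
    """Return n and digits a_1, ..., a_k with n.a1...ak <= x < n.a1...ak + 10**(-k)."""
    n = 0
    while n + 1 <= x:                         # smallest n with n <= x < n + 1
        n += 1
    digits, approx = [], Fraction(n)
    for j in range(1, k + 1):
        step = Fraction(1, 10**j)
        a = 0
        while approx + (a + 1) * step <= x:   # largest digit keeping the sum <= x
            a += 1
        digits.append(a)
        approx += a * step
    return n, digits

print(decimal_digits(Fraction(1, 7), 12))     # (0, [1, 4, 2, 8, 5, 7, 1, 4, 2, 8, 5, 7])
```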
Let us note that in the above theorem, we saw that for each x ≥ 0, there is a
decimal expansion n.a1 a2 . . . for which the stronger inequality
n.a1 . . . ak ≤ x < n.a1 . . . ak + 1/10^k     (3.6)
holds for each positive integer k. Decimal expansions with this property are unique,
since there is only one nonnegative integer for which n ≤ x < n + 1, only one digit
a1 for which n.a1 ≤ x < (n.a1) + 1/10, and so on.
Now if x has another decimal expansion m.b1 b2 . . . , then this cannot satisfy (3.6),
so either x = m + 1 or x = m.b1 . . . bk + (1/10^k) for some k. It is easy to check that
in the case of x = m + 1, we have n = m + 1, ai = 0 and bi = 9 for each i; in the case
of x = m.b1 . . . bk + (1/10^k) we get ai = 0 and bi = 9 for each i > k. We have then
the following theorem.
Theorem 3.8. Positive numbers with a finite decimal expansion have two infinite
decimal expansions: one has all 0 digits from a certain point on, while the other
repeats the digit 9 after some point. Every other nonnegative real number has a
unique decimal expansion.
The next theorem expresses the fact that the decimal expansions of real numbers
contain all formal decimal expansions.
Theorem 3.9. For arbitrary n ∈ N and (a1 , a2 , . . . ) consisting of digits {0, 1, . . . , 9},
there exists exactly one nonnegative real number whose decimal expansion is
n.a1 a2 . . . .
Proof. The condition (3.5) expresses the fact that x is an element of
Ik = [n.a1 . . . ak, n.a1 . . . ak + 1/10^k]
for each k. Since these are nested closed intervals, Cantor’s axiom implies that there
exists a real number x such that x ∈ Ik for each k. The fact that this x is unique
follows from Theorem 3.5, since by the axiom of Archimedes, for each δ > 0, there
exists a k such that 1/k < δ, and then 1/10^k < 1/k < δ. □
Remarks 3.10. 1. Being able to write real numbers in decimal form has several interesting consequences. The most important corollary concerns the question of how
accurately the axioms of the real numbers describe the real numbers. The question
is whether we “forgot” something from the axioms; is it not possible that further
properties are needed? To understand the answer, let us recollect the idea behind the
axiomatic approach. Recall that we pay no attention to what the real numbers actually are, just the properties they satisfy. Instead of the set R we are used to, we could
take another set R′ , assuming that there are two operations and a relation defined on
it that satisfy the 15 axioms.
The fact that we were able to deduce our results about decimal expansions using
only the 15 axioms means that these are true in both R and R′. So we can pair each
nonnegative element x of R with the x′ ∈ R′ that has the same decimal expansion.
We extend this association to R by setting (−x)′ = −x′ . By the above theorems
regarding decimal expansions, we have a one-to-one correspondence² between R
and R′ .
It can also be shown (although we do not go into detail here) that this correspondence commutes with the operations, that is, (x + y)′ = x′ + y′ and (x · y)′ = x′ · y′ for
each x, y ∈ R, and moreover, x < y holds if and only if x′ < y′ . The existence of such
a correspondence (or isomorphism, for short) shows us that R and R′ are “indistinguishable”: if a statement holds in one, then it holds in the other as well. This fact is
expressed in mathematical logic by saying that any two models of the axioms of real
numbers are isomorphic. So if our goal is to describe the properties of the real numbers as precisely as possible, then we have reached this goal; including other axioms
cannot restrict the class of models satisfying the axioms of real numbers any further.
2. Another important consequence of being able to express real numbers as infinite
decimals is that we get a one-to-one correspondence between the set of real numbers
and points on a line.
Let e be a line, and let us pick two different points P and Q that lie on e. Call the
direction from P toward Q positive, and the opposite direction negative. In our correspondence
we assign the number 0 to P, and 1 to Q. From the point P, we can then measure
multiples of the distance PQ in both positive and negative directions, and assign the
integers to the corresponding points. If we subdivide each segment determined by
these integer points into k smaller equal parts (for each k), we get the points that we
can map to rational numbers. Let x be a nonnegative real number, and let its decimal
expansion be n.a1 a2 . . .. Let Ak and Bk be the points that we mapped to n.a1 . . . ak ,
and (n.a1 . . . ak ) + (1/10k ) respectively. It follows from the properties of the line that
there is exactly one common point of the segments Ak Bk . This will be the point A
that we map to x. Finally, measuring the segment PA in the negative direction from
P yields us the point that will be mapped to −x.
It can be shown that this has given us a one-to-one correspondence between the
points of e and the real numbers. The real number corresponding to a certain point
on the line is called its coordinate, and the line itself is called a number line, or the
real line. In the future, when we talk about a number x, we will sometimes refer to
it as the point with the coordinate x, or simply the point x.
² We say that a function f is a one-to-one correspondence (or a bijective map, or a bijection) between sets A and B if different points of A get mapped to different points of B (that is, if a1 ≠ a2, then f(a1) ≠ f(a2)), and for every b ∈ B there exists an a ∈ A such that f(a) = b.
The benefit of this correspondence is that we can better see or understand certain
statements and properties of the real numbers when they are viewed as the number
line. Many times, we get ideas for proofs by looking at the real line. It is, however,
important to note that properties that we can “see” on the number line cannot be
taken for granted, or as proved; in fact, statements that seem to be true on the real
line might turn out to be false. In our proofs, we can refer only to the fundamental
principles of real numbers (that is the axioms) and the theorems already established.
Looking at the real numbers as the real line suggests many concepts that prove to
be important regardless of our visual interpretation. Such is, for example, the notion
of an everywhere dense set.
Definition 3.11. We say that a set of real numbers H is everywhere dense in R if
every open interval contains elements of H; that is, if for every a < b, there exists
an x ∈ H for which a < x < b.
So for example, by Theorem 3.2, the set of rational numbers is everywhere dense
in R. We now show that the same is true for the set of irrational numbers.
Theorem 3.12. The set of irrational numbers is everywhere dense in R.
Proof. Let a < b be arbitrary. Since the set of rational numbers is everywhere dense
and a − √2 < b − √2, there is a rational number r such that a − √2 < r < b − √2.
Then a < r + √2 < b, and the open interval (a, b) contains the irrational number
r + √2. The irrationality of r + √2 follows from the fact that if it were rational, then
√2 = (r + √2) − r would also be rational, whereas it is not. □
Motivated by the visual representation of the real line, we shall sometimes use
the word segment instead of interval. Now we will expand the concept of intervals
(or segments).
Let H be a closed or open interval. Then clearly, H contains every segment whose
endpoints are contained in H (this property is called convexity). It is reasonable to
call every set that satisfies this property an interval. Other than closed and open
intervals, the following are intervals as well.
Let a < b. The set of all x for which a ≤ x < b is denoted by [a, b) and is called
a left-closed, right-open interval. The set of all x that satisfy a < x ≤ b is denoted
by (a, b] and is called a left-open, right-closed interval. That is,
[a, b) = {x : a ≤ x < b} and (a, b] = {x : a < x ≤ b}.
Intervals of the form [a, b], (a, b), [a, b), and (a, b] are called bounded (or finite)
intervals. We also introduce the notation
(−∞, a] = {x : x ≤ a},     [a, ∞) = {x : x ≥ a},
(−∞, a) = {x : x < a},     (a, ∞) = {x : x > a},     (3.7)

as well as (−∞, ∞) = R.
Intervals of the form (−∞, a], (−∞, a), [a, ∞), (a, ∞) as well as (−∞, ∞) = R
itself are called unbounded (or infinite) intervals. Out of these, (−∞, a] and [a, ∞)
are closed half-lines (or closed rays), while (−∞, a) and (a, ∞) are open half-lines
(or open rays).
We consider the empty set and a set consisting of a single point (a singleton)
as intervals too; these are the degenerate intervals. Singletons are considered to be
closed intervals, which is expressed by the notation [a, a] = {a}.
Remark 3.13. The symbol ∞ appearing in the unbounded intervals does not represent a specific object. The notation should be thought of as an abbreviation. For
example, [a, ∞) is merely a shorter (and more expressive) way of writing the set
{x : x ≥ a}. The symbol ∞ will pop up many more times. In every case, the same is
true, so we give meaning (in each case clearly defined) only to the whole expression.
Exercises
3.12. Prove that
(a) If x and y are rational, then x + y is rational.
(b) If x is rational and y is irrational, then x + y is irrational.
Is it true that if x and y are irrational then x + y is irrational?
3.13. Prove that the decimal expansion of a positive real number is periodic if and
only if the number is rational.
3.14. Prove that the set of numbers having finite decimal expansions and their negatives is everywhere dense.
3.15. Partition the number line into the union of infinitely many pairwise disjoint
everywhere dense sets.
3.16. Let H ⊂ R be a nonempty set that contains the difference of every pair of its
(not necessarily distinct) elements. Prove that either there is a real number a such
that H = {n · a : n ∈ Z}, or H is everywhere dense. (∗ H)
3.17. Prove that if α is irrational, then the set {n · α + k : n, k ∈ Z} is everywhere
dense.
3.2 Bounded Sets
If a set of real numbers A is finite and nonempty, then there has to be a largest element among them (we can easily check this by induction on the number of elements
in A). If, however, a set has infinitely many elements, then there is not necessarily
a largest element. Clearly, none of the sets R, [a, ∞), and N have a largest element.
These sets all share the property that they have elements larger than every given real
number (in the case of N, this is the axiom of Archimedes). We see that such a set
can never have a largest element.
Sets without the above property are more interesting, that is, sets whose elements
are all less than some real number. Such is, for example, the open interval (a, b),
whose elements are all smaller than b. However the set (a, b) does not have a largest
element: if x ∈ (a, b), then x < b, so by Theorem 3.40, there exists a real number y
such that x < y < b, so automatically y ∈ (a, b). Another example is the set
B = {1/2, 2/3, . . . , (n − 1)/n, . . . }.     (3.8)
Every element of this is less than 1, but there is no largest element, since for every
element (n − 1)/n, the next one, n/(n + 1) is larger.
To understand these phenomena better, we introduce some definitions and notation. If a set A has a largest (or in other words, maximal) element, we denote it by
max A. If a set A has a smallest (or in other words, minimal) element, we denote it
by min A. If the set A is finite, A = {a1 , . . . an }, then instead of max A, we can write
max1≤i≤n ai , and similarly, we can write min1≤i≤n ai instead of min A.
Definition 3.14. We say that a set of real numbers A is bounded from above if there
exists a number b such that for every element x of A, x ≤ b holds (that is, no element
of A is larger than b). Every number b with this property is called an upper bound
of A. So the set A is bounded from above exactly when it has an upper bound.
The set A is bounded from below if there exists a number c such that for every
element x of A, the inequality x ≥ c holds (that is, no element of A is smaller than c).
Every number c with such a property is called a lower bound of A. The set A is
bounded from below if and only if it has a lower bound.
We say A is bounded when it is bounded both from above and from below.
Let us look at the previous notions using these new definitions. If max A exists,
then it is clearly also an upper bound of A, so A is bounded from above. The following implications hold:
(A is finite and nonempty) ⇒ (max A exists) ⇒ (A is bounded from above).
The reverse implications are not usually true, since, for example, the closed interval
[a, b] has a largest element but is not finite, while (a, b) is bounded from above but
does not have a largest element.
Further examples: The set N+ is bounded from below (in fact, it has a smallest
element) but is not bounded from above.
Z is not bounded from above or from below.
Every finite interval is a bounded set.
Half-lines are not bounded.
The set C = {1, 1/2, . . . , 1/n, . . .} is bounded, since the greatest element is 1, so it is
also an upper bound. On the other hand, every element of C is nonnegative, so 0 is
a lower bound. The set does not have a smallest element.
Let us note that 0 is the greatest lower bound of C. Clearly, if ε > 0, then by the
axiom of Archimedes, there is an n such that 1/n < ε . Since 1/n ∈ C, this means
that ε is not a lower bound.
We can see similar behavior in the case of the open interval (a, b). As we saw,
the set does not have a greatest element. But we can again find a least upper bound:
it is easy to see that the number b is the least upper bound of (a, b). Or take the
set B defined in (3.8). This does not have a largest element, but it is easy to see that
among the upper bounds, there is a smallest one, namely 1. The following important
theorem expresses the fact that this is true for every (nonempty) set bounded from
above. Before stating the theorem, let us note that if b is an upper bound of a set H,
then every number greater than b is also an upper bound. Thus a set bounded from
above always has infinitely many upper bounds.
Theorem 3.15. Every nonempty set that is bounded from above has a least upper
bound.
Proof. Let A be a nonempty set that is bounded from above. We will obtain the
least upper bound of A as a common point of a sequence of nested closed intervals
(similarly to the proof of Theorem 3.6).
Let v1 be an upper bound of A. Let, moreover, a0 be an arbitrary element of A
(this exists, for we assumed A ≠ ∅), and pick an arbitrary number u1 < a0 . Then u1
is not an upper bound of A, and u1 < v1 (since u1 < a0 ≤ v1 ).
Let us suppose that n ≥ 1 is an integer and we have defined the numbers un < vn
such that un is not an upper bound, while vn is an upper bound of A. We distinguish
two cases. If (un + vn)/2 is not an upper bound of A, then let

un+1 = (un + vn)/2   and   vn+1 = vn.

However, if (un + vn)/2 is an upper bound of A, then let

un+1 = un   and   vn+1 = (un + vn)/2.
It is clear that in both cases, [un+1 , vn+1 ] ⊂ [un , vn ], and un+1 is not an upper bound,
while vn+1 is an upper bound of A.
With the above, we have defined un and vn for every n. It follows from the definition that the sequence of intervals [un , vn ] forms a sequence of nested closed intervals, so by Cantor’s axiom, they have a common point. We can also see that there
is only one common point. By Theorem 3.5, it suffices to see that for every number
δ > 0, there exists an n for which vn − un < δ . But it is clear that (just as in the proof
of Theorem 3.6) vn − un = (v1 − u1)/2^(n−1) for each n. So if n is large enough that
(v1 − u1)/2^(n−1) < δ (which will happen for some n by the axiom of Archimedes),
then vn − un < δ will also hold.
So we see that the sequence of intervals [un , vn ] has one common point. Let this
be denoted by b. We want to show that b is an upper bound of A. Let a ∈ A be
arbitrary, and suppose that b < a. Then un ≤ b < a ≤ vn for each n. (Here the third
inequality follows from the fact that vn is an upper bound.) This means that a is a
common point of the sequence of intervals [un , vn ], which is impossible. This shows
that a ≤ b for each a ∈ A, making b an upper bound.
Finally, we show that b is the least upper bound. Let c be another upper bound,
and suppose that c < b. Then un < c < b ≤ vn for each n. (Here the first inequality
follows from the fact that un ≥ c would mean that un is an upper bound, but that
cannot be.) This means that c is a common point of the interval sequence [un , vn ],
which is impossible. Therefore, we have b ≤ c for each upper bound c, making b
the least upper bound.
□
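The halving construction used in the proof can also be carried out numerically. The sketch below is not from the book; it assumes the set A = {x > 0 : x^2 < 2} is described by a hypothetical predicate is_upper_bound that decides whether a given number is an upper bound of A. Under that assumption, the intervals [un, vn] shrink onto sup A = √2.

```python
# A minimal numerical sketch (not from the book) of the halving construction in
# the proof of Theorem 3.15, for the set A = {x > 0 : x^2 < 2}.  For this A,
# a number m is an upper bound exactly when m > 0 and m^2 >= 2.

def is_upper_bound(m):
    return m > 0 and m * m >= 2

def approximate_sup(is_ub, u, v, steps=50):
    """u is not an upper bound, v is one; halve the interval as in the proof."""
    assert not is_ub(u) and is_ub(v)
    for _ in range(steps):
        m = (u + v) / 2
        if is_ub(m):
            v = m   # keep v an upper bound:     v_{n+1} = (u_n + v_n)/2
        else:
            u = m   # keep u a non-upper-bound:  u_{n+1} = (u_n + v_n)/2
    return v        # u and v now both approximate sup A

print(approximate_sup(is_upper_bound, u=1.0, v=2.0))   # ~ 1.41421356 = sqrt(2)
```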
A straightforward modification of the above proof yields Theorem 3.15 for lower
bounds.
Theorem 3.16. Every nonempty set that is bounded from below has a greatest lower
bound.
We can reduce Theorem 3.16 to Theorem 3.15. For let A be a nonempty set
bounded from below. It is easy to check that the set B = {−x : x ∈ A} is nonempty
and bounded from above. By Theorem 3.15, B has a least upper bound. If b is the
least upper bound of B, then it is easy to see that −b will be the greatest lower bound
of A.
The above argument uses the field axioms (since even the existence of the numbers −x requires the first four axioms). It is worth noting that Theorem 3.16 has
a proof that uses only the order axioms in addition to Theorem 3.15 (see Exercise 3.25).
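For a finite set, where the greatest lower bound and least upper bound are simply the minimum and maximum, the reduction just described can be checked directly; the toy example below (not from the book) does exactly that.

```python
# A toy check (not from the book) of the reduction above: for a finite set A,
# the greatest lower bound of A equals the negative of the least upper bound
# of B = {-x : x in A}.
A = {3.5, -2.0, 0.25, 7.0}
B = {-x for x in A}
assert min(A) == -max(B)
```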
Definition 3.17. We call the least upper bound of a nonempty set A that is bounded
from above the supremum of A, and denote it by sup A. The greatest lower bound
of a nonempty set A that is bounded from below is called the infimum of A, and is
denoted by inf A.
For completeness, we extend the notion of infimum and supremum to sets that
are not bounded.
Definition 3.18. If the set A is not bounded from above, then we say that the least
upper bound or supremum of A is infinity, and we denote it by sup A = ∞. If the set
A is not bounded from below, then we say that the greatest lower bound or infimum
of A is negative infinity, and we denote it by inf A = −∞.
Remark 3.19. Clearly, sup A ∈ A (respectively inf A ∈ A) holds if and only if A
has a largest (respectively smallest) element, and then max A = sup A (respectively
min A = inf A). The infimum and supremum of a nonempty set A agree if and only
if A is a singleton.
For any two sets A, B ⊂ R, the sumset, denoted by A + B, is the set {a + b : a ∈
A, b ∈ B}. The following relationships hold between the supremum and infimum of
sets and their sumsets.
Theorem 3.20. If A, B are nonempty sets, then sup(A + B) = sup A + sup B and
inf(A + B) = inf A + inf B.
If either one of sup A and sup B is infinity, then what we mean by the statement
is that sup(A + B) is infinity. Similarly, if either one of inf A and inf B is negative
infinity, then the statement is to be understood to mean that inf(A + B) is negative
infinity.
Proof. We prove only the statement regarding the supremum. If sup A = ∞, then A is
not bounded from above. It is clear that A + B is then also not bounded from above,
so sup(A + B) = ∞.
Now suppose that both sup A and sup B are finite. If a ∈ A and b ∈ B, then we
know that a + b ≤ sup A + sup B, so sup A + sup B is an upper bound of A + B.
On the other hand, if c is an upper bound of A + B, then for arbitrary a ∈ A
and b ∈ B, we have a + b ≤ c, that is, a ≤ c − b, so c − b is an upper bound
for A. Then sup A ≤ c − b, that is, b ≤ c − sup A for each b ∈ B, meaning that
c − sup A is an upper bound of B. From this, we get that sup B ≤ c − sup A, that
is, sup A + sup B ≤ c, showing that sup A + sup B is the least upper bound of the set
A + B.
□
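For finite sets, where suprema and infima are just maxima and minima, Theorem 3.20 can be verified mechanically; the following toy check (not from the book) illustrates the statement.

```python
# A toy check (not from the book) of Theorem 3.20 for finite sets, where the
# supremum and infimum are simply the maximum and minimum.
A = {0.5, 2/3, 0.75}
B = {-1.0, 0.25}
sumset = {a + b for a in A for b in B}
assert max(sumset) == max(A) + max(B)
assert min(sumset) == min(A) + min(B)
```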
Exercises
3.18. Let H be a set of real numbers. Which properties of H do the following statements express?
(a) (∀x ∈ R)(∃y ∈ H)(x < y);
(b) (∀x ∈ H)(∃y ∈ R)(x < y);
(c) (∀x ∈ H)(∃y ∈ H)(x < y).
3.19. Prove that max(a, b) = (|a − b| + a + b)/2 and min(a, b) = (−|a − b| + a + b)/2.
3.20. Let A ∩ B = ∅. What can we say about the relationships between sup A, sup B;
sup(A ∪ B), sup(A ∩ B); and sup(A \ B)?
3.21. Let A = (0, 1), B = [−√2, √2], and

C = {1/2^n + 1/2^m : n ∈ N+, m ∈ N+}.
Find—if they exist—the supremum, infimum, maximum, and minimum of the
above sets.
3.22. Let A · B = {a · b : a ∈ A, b ∈ B} for arbitrary sets A, B ⊂ R. What kind of
relationships can we find between sup A, sup B, inf A, inf B and inf(A · B), sup(A · B)?
What if we suppose A, B ⊂ (0, ∞)?
3.23. Let A be an arbitrary set of numbers, and let
B = {−b : b ∈ A},
C = {1/c : c ∈ A, c ≠ 0}.
What kind of relationships can we find between sup A, inf A, sup B, inf B, sup C, inf C?
3.24. Prove that if a > 0, k ∈ N+, H− = {x > 0 : x^k < a}, and H+ = {x > 0 : x^k > a}, then sup H− = inf H+ = √[k]{a}, that is, (sup H−)^k = (inf H+)^k = a. (This can also show us the existence of √[k]{a} for a > 0.)
3.25. Let X be an ordered set. (This means that we have a relation < given on X that
satisfies the first two of the order axioms, trichotomy and transitivity.) Suppose that
whenever a nonempty subset of X has an upper bound, it has a least upper bound.
Show that if a nonempty subset of X has a lower bound, then it has a greatest lower
bound. (H)
3.26. We say a set H ⊂ R is convex if [x, y] ⊂ H whenever x, y ∈ H, x < y. Prove that
a set is convex if and only if it is an interval. (Use the theorem about the existence of
a least upper bound and greatest lower bound.) Show that without assuming Cantor’s
axiom, this would not be true.
3.27. Assume the field axioms, the order axioms, and the statement that if a nonempty set has an upper bound, then it has a least upper bound. Deduce from this the
axiom of Archimedes and Cantor’s axiom. (∗ S)
3.3 Exponentiation
We denote the n-fold product a · . . . · a by a^n, and call it the nth power of a (we call n the exponent and a the base). It is clear that for all real numbers a, b and positive integers x, y, the equations

(ab)^x = a^x · b^x,     a^(x+y) = a^x · a^y,     (a^x)^y = a^(xy)        (3.9)

hold. Our goal is to extend the notion of exponentiation in a way that satisfies the above identities. If a ≠ 0, the equality a^(x+y) = a^x · a^y can hold if and only if we declare a^0 to be 1, and a^(−n) to be 1/a^n for every positive integer n. Accepting this definition, we see that the three identities of (3.9) hold for each a, b ≠ 0 and x, y ∈ Z.
In the following, we concern ourselves only with powers of numbers different from
zero; we define only the positive integer powers of zero (for now).
To extend exponentiation to rational exponents, we use Theorem 3.6, guaranteeing the existence of roots. If a < 0 and k is odd, then we denote the number −√[k]{|a|} by √[k]{a}. Then (√[k]{a})^k = a clearly holds.
Let r be rational, and suppose that r = p/q, where p, q are relatively prime integers and q > 0. If a ≠ 0, then by (a^x)^y = a^(xy), if a^r = b, then b^q = a^p holds. If q is odd, then this uniquely determines b: the only possible value is b = √[q]{a^p}. If q is even, then p is odd, and a^p = b^q must be positive. This is possible only if a > 0, and then b = ±√[q]{a^p}. Since it is natural that every power of a positive number should be positive, the logical conclusion is that b = √[q]{a^p}. We accept the following definition.
Definition 3.21. Let p and q be relatively prime integers, and suppose q > 0. If a > 0, then the value of a^(p/q) is defined to be √[q]{a^p}. We define a^(p/q) similarly if a < 0 and q is odd.
In the following, we deal only with powers of positive numbers.
Theorem 3.22. If a > 0, n, m are integers, and m > 0, then a^(n/m) = √[m]{a^n}. (Note that we did not require n and m to be relatively prime.)

Proof. Let n/m = p/q, where p, q are relatively prime integers and q > 0. We have to show that √[m]{a^n} = √[q]{a^p}. Since both sides are positive, it suffices to show that (√[m]{a^n})^(mq) = (√[q]{a^p})^(mq), that is, a^(nq) = a^(mp). However, this is clear, since n/m = p/q, so nq = mp. □
Theorem 3.23. The identities in (3.9) hold for each positive a, b and rational x, y.
Proof. We prove only the first identity, since the rest can be proved similarly. Let x = p/q, where p, q are relatively prime integers and q > 0. Then (ab)^x = √[q]{(ab)^p} and a^x · b^x = √[q]{a^p} · √[q]{b^p}. Since these are all positive numbers, it suffices to show that

(√[q]{(ab)^p})^q = (√[q]{a^p} · √[q]{b^p})^q.

The left side equates to (ab)^p. To compute the right side, we apply the first identity in (3.9) to the integer powers q and then p. We get that

(√[q]{a^p} · √[q]{b^p})^q = (√[q]{a^p})^q · (√[q]{b^p})^q = a^p · b^p = (ab)^p. □
Theorem 3.24. a^r > 0 for each positive a and rational r. If r1 < r2, then a > 1 implies a^(r1) < a^(r2), while 0 < a < 1 leads to a^(r1) > a^(r2).

Proof. The inequality a^r > 0 is clear from the definition. If a > 1 and p, q are positive integers, then a^(p/q) = √[q]{a^p} > 1, and so a^r > 1 for each positive rational r. So if r1 < r2, then a^(r2) = a^(r1) · a^(r2−r1) > a^(r1). The statement for 0 < a < 1 can be proved the same way. □
To extend exponentiation to irrational powers, we will keep in mind the monotone property shown in the previous theorem. Let a > 1. If we require that whenever x ≤ y, we also have a^x ≤ a^y, then a^x needs to satisfy a^r ≤ a^x ≤ a^s whenever r and s are rational numbers such that r ≤ x ≤ s. We show that this restriction uniquely defines a^x.
Theorem 3.25. If a > 1, then for an arbitrary real number x, we have

sup{a^r : r ∈ Q, r < x} = inf{a^s : s ∈ Q, s > x}.        (3.10)

If 0 < a < 1, then for an arbitrary real number x, we have

inf{a^r : r ∈ Q, r < x} = sup{a^s : s ∈ Q, s > x}.        (3.11)
Proof. Let a > 1. The set A = {a^r : r ∈ Q, r < x} is nonempty and bounded from above, since a^s is an upper bound for each rational s greater than x. Thus α = sup A is finite, and α ≤ a^s whenever s > x for rational s. Then α is a lower bound of the set B = {a^s : s ∈ Q, s > x}, so β = inf B is finite and α ≤ β. We show that α = β.
Suppose that β > α, and let β/α = 1 + h, where h > 0. For each positive integer n, there exists an integer k for which k/n ≤ x < (k + 1)/n. Then we know that (k − 1)/n < x < (k + 1)/n, and so a^((k−1)/n) ≤ α < β ≤ a^((k+1)/n). We get

β/α ≤ a^((k+1)/n)/a^((k−1)/n) = a^(2/n),

and by applying Bernoulli's inequality,

a^2 ≥ (β/α)^n = (1 + h)^n ≥ 1 + nh.

However, this is impossible if n > a^2/h. Thus α = β, which proves (3.10). The second statement can be proved similarly. □
Definition 3.26. Let a > 1. For an arbitrary real number x, the number a^x denotes sup{a^r : r ∈ Q, r < x} = inf{a^s : s ∈ Q, s > x}. If 0 < a < 1, then the value of a^x is inf{a^r : r ∈ Q, r < x} = sup{a^s : s ∈ Q, s > x}. We define the power 1^x to be 1 for each x.
Let us note that if x is rational, then the above definition agrees with the previously defined value by Theorem 3.25.
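The definition can be illustrated numerically. In the sketch below (not from the book), Python's built-in ** serves only as a numerical stand-in for the rational powers a^(p/q) defined above, and the chosen base and exponent are arbitrary; the exponents r and s are decimal truncations of x from below and from above.

```python
# A numerical illustration (not from the book) of Theorem 3.25 / Definition 3.26:
# for a > 1, rational exponents just below and just above x squeeze a common value.
a, x = 2.0, 3.141592653589793   # a > 1 and an "irrational-looking" exponent

for k in (1, 2, 4, 8, 12):
    r = int(x * 10**k) / 10**k          # rational r = p/10^k with r <= x
    s = (int(x * 10**k) + 1) / 10**k    # rational s = (p+1)/10^k with s > x
    print(k, a**r, a**s)                # the two columns approach the same number

# Both columns tend to 2**x = 8.82497782...; the supremum of the left column and
# the infimum of the right column coincide, and that common value is a^x.
```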
Theorem 3.27. a^x > 0 for each positive a and real number x. If x1 < x2, then a > 1 implies a^(x1) < a^(x2), while 0 < a < 1 leads to a^(x1) > a^(x2).

Proof. Let a > 1. For an arbitrary x, we can pick a rational r < x such that a^x ≥ a^r > 0. If x1 < x2, then let r1 and r2 be rational numbers for which x1 < r1 < r2 < x2. Then a^(x1) ≤ a^(r1) < a^(r2) ≤ a^(x2). We can argue similarly in the case 0 < a < 1. □
Later, we will see (refer to Theorem 11.4) that all three identities in (3.9) hold
for every a, b > 0 and x, y ∈ R.
Exercises
3.28. Prove that if n ∈ N, then √n is either an integer or irrational.

3.29. Prove that if n, k ∈ N+, then √[k]{n} is either an integer or irrational.
3.30. Let a > 0 be rational. Prove that if a^a is rational, then a is an integer.
3.31. Let a and b be rational numbers, 0 < a < b. Prove that a^b = b^a holds if and only if there exists an n ∈ N+ such that

a = (1 + 1/n)^n   and   b = (1 + 1/n)^(n+1).   (H)
3.32. Prove that if 0 < a ≤ b are real numbers and r > 0 is rational, then a^r ≤ b^r. (S)

3.33. Prove that if x > −1 and b ≥ 1, then (1 + x)^b ≥ 1 + bx. But if x > −1 and 0 ≤ b ≤ 1, then (1 + x)^b ≤ 1 + bx. (H S)
3.4 First Appendix: Consequences of the Field Axioms
Theorem 3.28. If a + b = 0 and a + c = 0, then b = c.
Proof. Using the first three axioms, we get that
c = c + 0 = c + (a + b) = (c + a) + b = (a + c) + b = 0 + b = b + 0 = b.
□
If we compare the result of the previous theorem with axiom 4, then we get that
for every a ∈ R, there is exactly one b such that a + b = 0. We denote this unique b
by −a.
Theorem 3.29. For every a and b, there is exactly one x such that a = b + x.
Proof. If x = (−b) + a, then
b + x = b + ((−b) + a) = (b + (−b)) + a = 0 + a = a + 0 = a.
On the other hand, if a = b + x, then
x = x + 0 = 0 + x = ((−b) + b) + x = (−b) + (b + x) = (−b) + a.
□
From now on, we will denote the element (−b) + a = a + (−b) by a − b.
Theorem 3.30. If a · b = 1 and a · c = 1, then b = c.
This can be proven in the same way as Theorem 3.28, using axioms 5–8 here.
Comparing Theorem 3.30 with axiom 8, we obtain that for every a ≠ 0, there exists exactly one b such that a · b = 1. We denote this unique element b by 1/a.
Theorem 3.31. For any a and b ≠ 0, there is exactly one x such that a = b · x.
Proof. Mimicking the proof of Theorem 3.29 will show us that x = a · (1/b) is the
unique real number that satisfies the condition of the theorem.
□
If a, b ∈ R and b ≠ 0, then we denote the number a · (1/b) by a/b.
Theorem 3.32. Every real number a satisfies a · 0 = 0.
Proof. Let a · 0 = b. By axioms 3 and 9,
b = a · 0 = a · (0 + 0) = (a · 0) + (a · 0) = b + b.
Since b + 0 = b also holds, Theorem 3.29 implies b = 0.
□
It is easy to check that each of the following identities follows from the field axioms:

−a = (−1) · a,     (a − b) − c = a − (b + c),     (−a) · b = −(a · b),

1/(a/b) = b/a   (a, b ≠ 0),     (a/b) · (c/d) = (a · c)/(b · d)   (b, d ≠ 0).

With the help of induction, it is easy to justify that putting parentheses anywhere in a sum or product of several terms does not change the value of the sum or product. For example, (a + b) + (c + d) = (a + (b + c)) + d and (a · b) · (c · d) = a · ((b · c) · d). Therefore, we can omit parentheses in sums or products; by the sum a1 + · · · + an and the product a1 · . . . · an, we mean the number that we would get by putting parentheses at arbitrary places in the sum (or product), thereby adding or multiplying two numbers at a time.
3.5 Second Appendix: Consequences of the Order Axioms
Theorem 3.33. If a < b and c < d, then a + c < b + d. If a ≤ b and c ≤ d, then
a + c ≤ b + d.
Proof. Let a < b and c < d. Applying axiom 12 and commutativity twice, we get
that a + c < b + c = c + b < d + b = b + d. The second statement is a clear consequence of this.
□
We can similarly show that if 0 < a < b and 0 < c < d, then a·c < b·d; moreover,
if 0 ≤ a ≤ b and 0 ≤ c ≤ d, then a · c ≤ b · d. (We need to use Theorem 3.32 for the
proof of the second statement.)
Theorem 3.34. If a < b, then −a > −b.
Proof. By axiom 12, we have −b = a + (−a − b) < b + (−a − b) = −a.
□
Theorem 3.35. If a < b and c < 0, then a·c > b·c. If a ≤ b and c ≤ 0, then a·c ≥ b·c.
Proof. Let a < b and c < 0. By the previous theorem, −c > 0, so by axiom 13, we
have −a · c = a · (−c) < b · (−c) = −b · c. Thus using the previous theorem again,
we obtain a · c > b · c. The second statement is a simple consequence of this one,
using Theorem 3.32.
□
Theorem 3.36. 1 > 0.
Proof. By axiom 10, it suffices to show that neither the statement 1 = 0 nor 1 < 0
holds. We initially assumed the numbers 0 and 1 to be distinct, so all we need to
exclude is the case 1 < 0. Suppose that 1 < 0. Then by Theorem 3.35, we have 1·1 >
0 · 1, that is, 1 > 0. However, this contradicts our assumption. Thus we conclude that
our assumption was false, and so 1 > 0.
□
Theorem 3.37. If a > 0, then 1/a > 0. If 0 < a < b, then 1/a > 1/b. If a ≠ 0, then a^2 > 0.
Proof. Let a > 0. If 1/a ≤ 0, then by Theorems 3.35 and 3.32, we must have
1 = (1/a) · a ≤ 0 · a = 0, which is impossible. Thus by axiom 10, only 1/a > 0 is
possible.
Now suppose that 0 < a < b. If 1/a ≤ 1/b, then by axiom 13, b = (1/a) · a · b ≤
(1/b) · a · b = a, which is impossible.
The third statement is clear by axiom 13 and Theorem 3.35.
□
Theorem 3.38. The natural numbers are positive and distinct.
Proof. By Theorem 3.36, we have that 0 < 1. Knowing this, axiom 12 then implies
1 = 0 + 1 < 1 + 1 = 2. Applying axiom 12 again yields 2 = 1 + 1 < 2 + 1 = 3, and
so on. Finally, we get that 0 < 1 < 2 < · · · , and using transitivity, both statements of
the theorem are clear.
□
Theorem 3.39. If n is a natural number, then for every real number a, we have

a + · · · + a = n · a   (with n terms on the left-hand side).

Proof. a + · · · + a = 1 · a + · · · + 1 · a = (1 + · · · + 1) · a = n · a, where each of the sums consists of n terms. □
Theorem 3.40. There are no neighboring real numbers. That is, for arbitrary real
numbers a < b, there exists a real number c such that a < c < b. One such number,
for example, is c = (a + b)/2.
Proof. By Theorem 3.39 and axiom 12, 2 · a = a + a < a + b. If we multiply this
inequality by 1/2 (which is positive by Theorems 3.37 and 3.38), then we get that
a < c. A similar argument shows that c < b.
□
Chapter 4
Infinite Sequences I
In this chapter, we will be dealing with sequences of real numbers. For brevity, by a
sequence we shall mean an infinite sequence whose terms are all real numbers.
We can present a sequence in various different ways. Here are a few examples
(each one is defined for n ∈ N+ ):
Examples 4.1.
1. an = 1/n:   (an) = (1, 1/2, . . . , 1/n, . . .);
2. an = (−1)^(n+1) · 1/n:   (an) = (1, −1/2, 1/3, −1/4, . . .);
3. an = (−1)^n:   (an) = (−1, 1, . . . , −1, 1, . . .);
4. an = (n + 1)^2:   (an) = (4, 9, 16, . . .);
5. an = √(n + 1) − √n:   (an) = (√2 − 1, √3 − √2, . . .);
6. an = (n + 1)/n:   (an) = (2, 3/2, 4/3, 5/4, . . .);
7. an = (−1)^n · n^2:   (an) = (−1, 4, −9, 16, . . .);
8. an = n + 1/n:   (an) = (2, 5/2, 10/3, 17/4, . . .);
9. an = √(n + 10):   (an) = (√11, √12, . . .);
10. an = (1 + 1/n)^n;
11. an = (1 + 1/n)^(n^2);
12. an = (1 + 1/n^2)^n;
13. a1 = −1, a2 = 2, an = (an−1 + an−2)/2 (n ≥ 3):   (an) = (−1, 2, 1/2, 5/4, 7/8, 17/16, . . .);
14. a1 = 1, a2 = 3, an+1 = √[n]{a1 · . . . · an} (n ≥ 2):   (an) = (1, 3, √3, √3, √3, . . .);
15. a1 = 0, an+1 = √(2 + an) (n ≥ 1):   (an) = (0, √2, √(2 + √2), . . .);
16. an = n if n is even, an = 1 if n is odd:   (an) = (1, 2, 1, 4, 1, 6, 1, 8, . . .);
17. an = the nth prime number:   (an) = (2, 3, 5, 7, 11, . . .);
18. an = the nth digit of the infinite decimal expansion of √2:   (an) = (4, 1, 4, 2, 1, 3, 5, 6, . . .).
In the first 12 sequences, the value of an is given by an “explicit formula.” The
terms of (13)–(15) are given recursively. This means that we give the first few, say
k, terms of the sequence, and if n > k, then the nth term is given by the terms with
index less than n. In (16)–(18), the terms are not given with a specific “formula.”
There is no real difference, however, concerning the validity of the definitions; they
are all valid sequences. As we will later see, whether an (or generally a function)
can be expressed with some kind of formula depends only on how frequently such
an expression occurs and how important it is. For if it defines an important map that
we use frequently, then it is worthwhile to define some notation and nomenclature
that converts the long definition (of an or the function) into a “formula.” If it is
not so important or frequently used, we usually leave it as a lengthy description.
For example, in the above sequence (9), an is the positive number whose square is
n + 10. However, the map expressed here (the square root) occurs so frequently and
with such importance that we create a new symbol for it, and the definition of an
becomes a simple formula.
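As an illustration of recursive definitions, the following small sketch (not from the book) generates the first few terms of the recursively defined sequences (13) and (15) of Examples 4.1.

```python
# A small sketch (not from the book): the first few terms of sequences (13) and
# (15) of Examples 4.1, produced directly from their recursive definitions.
from math import sqrt

def sequence_13(n_terms):
    """a1 = -1, a2 = 2, a_n = (a_{n-1} + a_{n-2}) / 2 for n >= 3."""
    a = [-1.0, 2.0]
    while len(a) < n_terms:
        a.append((a[-1] + a[-2]) / 2)
    return a

def sequence_15(n_terms):
    """a1 = 0, a_{n+1} = sqrt(2 + a_n) for n >= 1."""
    a = [0.0]
    while len(a) < n_terms:
        a.append(sqrt(2 + a[-1]))
    return a

print(sequence_13(6))   # -1, 2, 0.5, 1.25, 0.875, 1.0625, ...
print(sequence_15(6))   # 0, 1.414..., 1.847..., 1.961..., approaching 2
```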
Exercises
4.1. Give a closed formula for the nth term of sequence (13) in Example 4.1. (S)
4.2. Let p(x) = x^k − c1 x^(k−1) − c2 x^(k−2) − . . . − ck, let α1, . . . , αm be roots of the polynomial p (not necessarily all of the roots), and let β1, . . . , βm be arbitrary real numbers. Show that the sequence

an = β1 · α1^n + · · · + βm · αm^n   (n = 1, 2, . . .)

satisfies the recurrence relation (recursion) an = c1 an−1 + c2 an−2 + · · · + ck an−k for all n > k. (H)
4.3. Give a closed formula for the nth term of the following sequences, given by
recursion.
(a) u0 = 0, u1 = 1, un = un−1 + un−2 (n ≥ 2) (HS);
(b) a0 = 0, a1 = 1, an = an−1 + 2 · an−2 (n ≥ 2);
(c) a0 = 0, a1 = 1, an = 2 · an−1 + an−2 (n ≥ 2).
4.4. Let p(x) = x^k − c1 x^(k−1) − c2 x^(k−2) − . . . − ck, and let α be a double root of the polynomial p (this means that (x − α)^2 can be factored from p). Show that the sequence an = n · α^n (n = 1, 2, . . .) satisfies the following recurrence for all n > k: an = c1 an−1 + c2 an−2 + · · · + ck an−k.
4.5. Give a closed formula for the nth term of the sequence
a0 = 0, a1 = 0, a2 = 1, an = an−1 + an−2 − an−3 (n ≥ 3)
given by a recurrence relation.
4.1 Convergent and Divergent Sequences
When we make measurements, we are often faced with a value that we can only
express through approximation—although arbitrarily precisely. For example, we
define the circumference (or area) of a circle in terms of the perimeter (or area)
of the inscribed or circumscribed regular n-gons. According to this definition, the
circumference (or area) of the circle is the number that the perimeter (or area) of
the inscribed regular n-gon “approximates arbitrarily well as n increases past every
bound.”
Some of the above sequences also have this property that the terms “tend” to
some “limit value.” The terms of the sequence (1) “tend” to 0 in the sense that if n
is large, the value of 1/n is “very small,” that is it is very close to 0. More precisely,
no matter how small an interval we take around 0, if n is large enough, then 1/n
is inside this interval (there are only finitely many n such that 1/n is outside the
interval).
The terms of the sequence (2) also “tend” to 0, while the terms of (6) “tend” to 1
by the above notion.
For the sequences (3), (4), (7), (8), (9), (16), (17), and (18), no number can be
found that the terms of any of these sequences “tend” to, by the above notion.
The powers defining the an in the sequences (10), (11), and (12) all behave differently as n increases: the base approaches 1, but the exponent gets very big. Without
detailed examination, we cannot “see” whether they tend to some value, and if they
do, to what. We will see later that these three sequences each behave differently
from the point of view of limits.
To define limits precisely, we use the notion outlined in the examination of (1).
We give two definitions, which we promptly show to be equivalent.
Definition 4.2. The sequence (an ) tends to b (or b is the limit of the sequence) if
for every ε > 0, there are only finitely many terms falling outside the interval (b −
ε , b + ε ). In other words, the limit of (an ) is b if for every ε > 0, the terms of the
sequence satisfy, with finitely many exceptions, the inequality b − ε < an < b + ε .
Definition 4.3. The sequence (an ) tends to b (or b is the limit of the sequence), if
for every ε > 0 there exists a number n0 (depending on ε ) such that
|an − b| < ε for all indices n > n0.        (4.1)
Let us show that the two definitions are equivalent. Suppose first that the sequence (an ) tends to b by Definition 4.2. Consider an arbitrary ε > 0. Then only
finitely many terms of the sequence fall outside (b − ε , b + ε ). If there are no terms
of the sequence outside the interval, then (4.1) holds for any choice of n0 . If terms of
the sequence fall outside (b − ε , b + ε ), then out of those finitely many terms, there
is one with maximal index. Denote this index by n0 . Then for each n > n0 , the an
are in the interval (b − ε , b + ε ), that is, |an − b| < ε if n > n0 . Thus we see that (an )
satisfies Definition 4.3.
Secondly, suppose that (an ) tends to b by Definition 4.3. Let ε > 0 be given. Then
there exists an n0 such that if n > n0 , then an is in the interval I = (b − ε , b + ε ). Thus
only among the terms ai (i ≤ n0 ) can there be terms that do not fall in the interval I.
The number of these is at most n0 , thus finite. It follows that the sequence satisfies
Definition 4.2.
If the sequence (an ) tends to b, then we denote this by
limn→∞ an = b     or     an → b as n → ∞ (or just an → b).
If there is a real number b such that limn→∞ an = b, then we say that the sequence
(an ) is convergent. If there is no such number, then the sequence (an ) is divergent.
Examples 4.4. 1. By Definition 4.3, it is easy to see that the sequence (1) in 4.1 truly
tends to 0, that is, limn→∞ 1/n = 0. For if ε is an arbitrary positive number, then n > 1/ε
implies 1/n < ε , and thus |(1/n) − 0| = 1/n < ε . That is, we can pick the n0 in the
definition to be 1/ε . (The definition did not require that n0 be an integer. But it is
clear that if some n0 satisfies the conditions of the definition, then every number
larger than n0 does as well, and among these—by the axiom of Archimedes—is an
integer.)
2. In the same way, we can see that the sequence (2) in 4.1 tends to 0. Similarly,
limn→∞ (n + 1)/n = 1,

that is, (6) is also convergent, and it has limit 1. Clearly, (n + 1)/n ∈ (1 − ε, 1 + ε) if
1/n < ε , that is, the only an outside the interval (1 − ε , 1 + ε ) are those for which
1/n ≥ ε , that is, n ≤ 1/ε .
3. Now we show that the sequence (5) tends to 0, that is,
limn→∞ (√(n + 1) − √n) = 0.        (4.2)

Since

√(n + 1) − √n = 1/(√(n + 1) + √n) < 1/(2√n),

if n > 1/(4ε^2), then 1/(2√n) < ε and an ∈ (−ε, ε).
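The thresholds found in these examples can be tested numerically. The sketch below (not from the book) spot-checks the inequality of Definition 4.3 for sequences (1) and (5) of Examples 4.1, using the thresholds n0 = 1/ε and n0 = 1/(4ε^2) derived above; the helper function and its parameters are illustrative choices, not part of the text.

```python
# A small sketch (not from the book): spot-checking the thresholds derived above
# for the sequences 1/n and sqrt(n+1) - sqrt(n).
from math import sqrt, ceil

def check(a, n0, limit, eps, how_many=1000):
    """Verify |a(n) - limit| < eps for n0 < n <= n0 + how_many (a finite spot check)."""
    return all(abs(a(n) - limit) < eps for n in range(int(n0) + 1, int(n0) + how_many + 1))

eps = 1e-3
# sequence (1): a_n = 1/n, threshold n0 = 1/eps
assert check(lambda n: 1 / n, ceil(1 / eps), 0.0, eps)
# sequence (5): a_n = sqrt(n+1) - sqrt(n), threshold n0 = 1/(4*eps^2)
assert check(lambda n: sqrt(n + 1) - sqrt(n), ceil(1 / (4 * eps**2)), 0.0, eps)
```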
Remarks 4.5. 1. It is clear that if a threshold n0 is good, that is, it satisfies the conditions of (4.1), then every number larger than n0 is also a good threshold. Generally, when finding a threshold n0, we do not strive to find the smallest one.
2. It is important to note the following regarding Definition 4.2. If the infinite sequence (an ) has only finitely many terms outside the interval (a − ε , a + ε ), then
naturally there are infinitely many terms inside the interval (a − ε , a + ε ). It is clear,
however, that for the sequence (an ), if for every ε > 0 there are infinitely many terms
inside the interval (a − ε , a + ε ) it does not necessarily mean that there are only
finitely many terms outside the interval; that is, it does not follow that limn→∞ an = a.
For example, in Example 4.1, for the sequence (3) there are infinitely many terms
inside (1 − ε , 1 + ε ) for every ε > 0 (and the same holds for (−1 − ε , −1 + ε )), but if
ε < 2, then it is not true that there are only finitely many terms outside (1 − ε , 1 + ε ).
That is, 1 is not a limit of the sequence, and it is easy to see that the sequence is divergent.
3. Denote the set of numbers occurring in the sequence (an ) by {an }. Let us examine
the relationship between (an ) and {an }. On one hand, we know that for the set, a
number is either an element of it or it is not, and there is no meaning in saying that it
appears multiple times, while in a sequence, a number can occur many times. In fact,
for the infinite sequence (an ) in Example (3), the corresponding set {an } = {−1, 1}
is finite. (This distinction is further emphasized by talking about elements of sets,
and terms of sequences.) Consider the following two properties:
I. For every ε > 0, there are finitely many terms of (an ) outside the interval (a −
ε , a + ε ).
II. For every ε > 0, there are finitely many elements of {an } outside the interval
(a − ε , a + ε ).
Property I means that limn→∞ an = a. The same cannot be said of Property II, since
we can take our example above, where {an } had only finitely many elements outside
every interval (a − ε , a + ε ), but the sequence was still divergent. Therefore, it is
clear that I implies II, but II does not imply I. In other words, I is a stronger property
than II.
Exercises
4.6. Find the limits of the following sequences, and find an n0 (not necessarily the
smallest) for a given ε > 0 as in Definition 4.3.
(a) 1/√n;                        (b) (2n + 1)/(n + 1);
(c) (5n − 1)/(7n + 2);           (d) 1/(n − √n);
(e) (1 + · · · + n)/n^2;         (f) (√1 + √2 + · · · + √n)/n^(4/3);
(g) n · (√(1 + (1/n)) − 1);      (h) √(n^2 + 1) + √(n^2 − 1) − 2n;
(i) √[3]{n + 2} − √[3]{n − 2};   (j) 1/(1 · 2) + 1/(2 · 3) + · · · + 1/((n − 1) · n).
4.7. Consider the definition of an → b: (∀ε > 0)(∃n0 )(∀n ≥ n0 )(|an − b| < ε ). Permuting and changing the quantifiers yields the following statements:
(a) (∀ε > 0)(∃n0 )(∃n ≥ n0 )(|an − b| < ε );
(b) (∀ε > 0)(∀n0 )(∀n ≥ n0 )(|an − b| < ε );
(c) (∀ε > 0)(∀n0 )(∃n ≥ n0 )(|an − b| < ε );
(d) (∃ε > 0)(∀n0 )(∀n ≥ n0 )(|an − b| < ε );
(e) (∃ε > 0)(∀n0 )(∃n ≥ n0 )(|an − b| < ε );
(f) (∃ε > 0)(∃n0 )(∀n ≥ n0 )(|an − b| < ε );
(g) (∃ε > 0)(∃n0 )(∃n ≥ n0 )(|an − b| < ε );
(h) (∃n0 )(∀ε > 0)(∀n ≥ n0 )(|an − b| < ε );
(i) (∃n0 )(∀ε > 0)(∃n ≥ n0 )(|an − b| < ε );
(j) (∀n0 )(∃ε > 0)(∀n ≥ n0 )(|an − b| < ε );
(k) (∀n0 )(∃ε > 0)(∃n ≥ n0 )(|an − b| < ε ).
What properties of (an) do these statements express? For each property, give a sequence (if one exists) with the given property.
4.8. Prove that a convergent sequence always has a smallest or a largest term.
4.9. Give examples such that an − bn → 0, but an /bn does not tend to 1, as well as
an /bn → 1 but an − bn does not tend to 0.
4.10. Prove that if (an ) is convergent, then (|an |) is convergent. Is this statement true
the other way around?
4.11. If an^2 → a^2, does it follow that an → a? If an^3 → a^3, does it follow that an → a?
4.12. Prove that if an → a > 0, then √an → √a.
4.13. For the sequence (an ), consider the corresponding sequence of arithmetic
means, sn = (a1 + · · · + an)/n. Prove that if limn→∞ an = a, then limn→∞ sn = a. Give a sequence for which (sn) is convergent but (an) is divergent. (S)
4.2 Sequences That Tend to Infinity
It is easy to see that the sequences (3), (4), (7), (8), (9), (16), and (17) in Example 4.1
are divergent. Moreover, terms of the sequences (4), (8), (9), and (17)—aside from
diverging—share the trend that for “large” n, the an terms are “large”; more precisely, for an arbitrarily large number P, there are only finitely many terms of the
sequence that are smaller than P. We say that sequences like this “diverge to ∞.”
This is expressed precisely by the definition below. As in the case of convergent
sequences, we give two definitions and then show that they are equivalent.
Definition 4.6. We say that the limit of the sequence (an ) is ∞ (or that (an ) tends to
infinity) if for arbitrary P, there are only finitely many terms of the sequence outside
the interval (P, ∞) (that is, to the left of P on the number line).
Definition 4.7. We say that the limit of the sequence (an ) is ∞ (or that (an ) tends to
infinity) if for arbitrary P, there exists a number n0 (depending on P) for which the
statement
an > P   if   n > n0        (4.3)
holds.
We can show the equivalence of the above definitions in the following way. If
there are no terms outside the interval (P, ∞), then (4.3) holds for every n0 . If there
are terms outside the interval (P, ∞), but only finitely many, then among these indices, call the largest n0 , which will make (4.3) hold.
Conversely, if (4.3) holds, then there are at most n0 terms of the sequence, finitely
many, outside the interval (P, ∞).
If the sequence (an ) tends to infinity, then we denote this by limn→∞ an = ∞, or
by an → ∞ as n → ∞, or just an → ∞ for short. In this case, we say that the sequence
(an ) diverges to infinity.
We define the concept of tending to −∞ in the same manner.
Definition 4.8. We say that the limit of the sequence (an ) is −∞ (or that (an ) tends
to negative infinity) if for arbitrary P, there are only finitely many terms of the
sequence outside the interval (−∞, P) (that is, to the right of P on the number line).
The following is equivalent.
Definition 4.9. We say that the limit of the sequence (an ) is −∞ (or that (an ) tends
to negative infinity) if for arbitrary P, there exists a number n0 (dependent on P) for
which if n > n0 , then an < P holds.
If the sequence (an ) tends to negative infinity, then we denote this by limn→∞ an =
−∞, or by an → −∞ if n → ∞, or just an → −∞ for short. In this case, we say that
the sequence (an ) diverges to negative infinity.
There are several sequences in Example 4.1 that tend to infinity. It is clear that (4)
and (8) are such sequences, for in both cases, if n > P, then an > P. The sequence
(9) also tends to infinity, for if n > P2 , then an > P.
Now we show that the sequence (11) tends to infinity as well. Let P be given. Then Bernoulli's inequality (Theorem 2.5) implies

(1 + 1/n)^(n^2) > 1 + n^2 · (1/n) = 1 + n,

so for arbitrary n > P, we have

(1 + 1/n)^(n^2) > 1 + n > P.
Exercises
4.14. For a fixed P, find an n0 (not necessarily the smallest one) satisfying Definition 4.7 for the following sequences:
(a) n − √n;                       (b) (1 + · · · + n)/n;
(c) (√1 + √2 + · · · + √n)/n;     (d) (n^2 − 10n)/(10n + 100);
(e) 2^n/n.
4.15. Consider the definition of an → ∞: (∀P)(∃n0 )(∀n ≥ n0 )(an > P). Permuting
or changing the quantifiers yields the following statements:
(a) (∀P)(∃n0 )(∃n ≥ n0 )(an > P);
(b) (∀P)(∀n0 )(∀n ≥ n0 )(an > P);
(c) (∀P)(∀n0 )(∃n ≥ n0 )(an > P);
(d) (∃P)(∀n0 )(∀n ≥ n0 )(an > P);
(e) (∃P)(∀n0 )(∃n ≥ n0 )(an > P);
(f) (∃P)(∃n0 )(∀n ≥ n0 )(an > P);
(g) (∃P)(∃n0 )(∃n ≥ n0 )(an > P);
(h) (∃n0 )(∀P)(∀n ≥ n0 )(an > P);
(i) (∃n0 )(∀P)(∃n ≥ n0 )(an > P);
(j) (∀n0 )(∃P)(∀n ≥ n0 )(an > P);
(k) (∀n0 )(∃P)(∃n ≥ n0 )(an > P).
What properties of (an) do these statements express? For each property, give a sequence (if one exists) with the given property.
4.16. Prove that a sequence tending to infinity always has a smallest term.
4.17. Find the limit of (n2 + 1)/(n + 1) − an for each a.
4.18. Find the limit of √(n^2 − n + 1) − an for each a.
4.19. Find the limit of √((n + a)(n + b)) − n for each a, b.
4.20. Prove that if an+1 − an → c, where c > 0, then an → ∞.
4.3 Uniqueness of Limit
If the sequence (an ) is convergent, or tends to plus or minus infinity, then we say
that (an ) has a limit. Instead of saying that a sequence is convergent, we can say
that the sequence has a finite limit. If (an ) doesn’t have a limit, we can say that it
oscillates at infinity. The following table illustrates the classification above.
convergent      an → b ∈ R        has limit
divergent       an → ∞            has limit
                an → −∞           has limit
                has no limit      oscillates at infinity
To justify the above classification, we need to show that the properties in the middle column are mutually exclusive. We will show more than this: in Theorem 4.14,
we will show that a sequence can have at most one limit. For this, we need the
following two theorems.
Theorem 4.10. If the sequence (an) is convergent, then it is a bounded sequence (by this, we mean that the set {an} is bounded).
Proof. Let limn→∞ an = b. Pick, according to the notation in Definition 4.2, ε to be
1. We get that there are only finitely many terms of the sequence outside the interval
(b − 1, b + 1). If there are no terms greater than b + 1, then it is an upper bound. If
there are terms greater than b + 1, there are only finitely many of them. Out of these,
the largest one is the largest term of the whole sequence, and as such, is an upper
bound too. A bound from below can be found in the same fashion.
□
Remark 4.11. The converse of the statement above is not true: the sequence ((−1)^n) is bounded, but not convergent.
Theorem 4.12. If the sequence (an ) tends to infinity, then it is bounded from below
and not bounded from above. If the sequence (an ) tends to negative infinity, then it
is bounded from above and not bounded from below.
Proof. Let limn→∞ an = ∞. Comparing Definition 4.6 with the definition of being
bounded from above (3.14) clearly shows that (an ) cannot be bounded from above.
Pick, according to the notation in Definition 4.6, P to be 0. The sequence has
only finitely many terms outside the interval (0, ∞). If there are no terms outside the
interval, then 0 is a lower bound. If there are terms outside the interval, then there
are only finitely many. Out of these, the smallest one is the smallest term of the
whole sequence, and thus a lower bound too. This shows that (an ) is bounded from
below. The case an → −∞ can be dealt with in a similar way.
□
Remark 4.13. The converses of the statements of the theorem are not true. It is clear
that in Example 4.1, the sequence (16) is bounded from below (the number 1 is a
lower bound), not bounded from above, but the sequence doesn’t tend to infinity.
Theorem 4.14. Every sequence has at most one limit.
Proof. By Theorems 4.10 and 4.12, it suffices to show that every convergent sequence has at most one limit. Suppose that an → b and an → c both hold, where
b and c are distinct real numbers. Then for each ε > 0, only finitely many terms
of the sequence lie outside the interval (b − ε , b + ε ), so there are infinitely many
terms inside (b − ε , b + ε ). Let ε be so small that the intervals (b − ε , b + ε ) and
(c − ε, c + ε) are disjoint, that is, do not have any common points. (Such is, for example, ε = |c − b|/2.) Then there are infinitely many terms of the sequence outside the interval (c − ε, c + ε), which is impossible, since then c cannot be a limit of an. □
Exercises
4.21. Let
S be the set of all sequences;
C be the set of convergent sequences;
D be the set of divergent sequences;
D∞ be the set of sequences diverging to ∞;
D−∞ be the set of sequences diverging to −∞;
O be the set of sequences oscillating at infinity;
K be the set of bounded sequences.
Prove the following statements:
(a) S = C ∪ D.
(b) D = D∞ ∪ D−∞ ∪ O.
(c) C ⊂ K.
(d) K ∩ D∞ = ∅.
4.22. Give an example for each possible behavior of a sequence (an ) (convergent, tending to infinity, tending to negative infinity, oscillating at infinity), while
an+1 − an → 0 also holds. (H)
4.23. Give an example for each possible behavior of a sequence (an ) (convergent, tending to infinity, tending to negative infinity, oscillating at infinity), while
an+1 /an → 1 also holds.
4.24. Give an example of a sequence (an ) that
(a) is convergent,
(b) tends to infinity,
(c) tends to negative infinity,
while an < (an−1 + an+1 )/2 holds for each n > 1.
4.25. Prove that if an → ∞ and (bn ) is bounded, then (an + bn ) → ∞.
4.26. Is it true that if (an ) oscillates at infinity and is unbounded, and (bn ) is
bounded, then (an + bn ) oscillates at infinity and is unbounded?
4.27. Let (an) be sequence (18) from Example 4.1, that is, let an be the nth term in the decimal expansion of √2. Prove that the sequence (an) oscillates at infinity. (H)
4.4 Limits of Some Specific Sequences
Theorem 4.15.
(i) For every fixed integer p,

limn→∞ n^p = ∞ if p > 0,    limn→∞ n^p = 1 if p = 0,    limn→∞ n^p = 0 if p < 0.        (4.4)

(ii) If p > 0, then limn→∞ √[p]{n} = ∞.

Proof. (i) Let first p > 0. For arbitrary P > 0, if n > P, then n^p ≥ n > P. Then n^p → ∞. If p = 0, then n^p = 1 for every n, so n^p → 1. Finally, if p is a negative integer and ε > 0, then n > 1/ε implies 0 < n^p ≤ 1/n < ε, which proves that n^p → 0.
(ii) If n > P^p, then √[p]{n} ≥ P, so √[p]{n} → ∞. □
Theorem 4.16. For every fixed real number a,

limn→∞ a^n = ∞ if a > 1,    limn→∞ a^n = 1 if a = 1,    limn→∞ a^n = 0 if |a| < 1.        (4.5)

If a ≤ −1, then (a^n) oscillates at infinity.

Proof. If a > 1, then by Bernoulli's inequality, we have that

a^n = (1 + (a − 1))^n ≥ 1 + n · (a − 1)

holds for every n. So for an arbitrary real number P, if n > (P − 1)/(a − 1), then a^n > P, so a^n → ∞.
If a = 1, then a^n = 1 for each n, and so a^n → 1.
If |a| < 1, then 1/|a| > 1. If an ε > 0 is given, then by the already proved statement, there is an n0 such that for n > n0,

(1/|a|)^n = 1/|a|^n > 1/ε

holds, that is, |a^n| = |a|^n < ε. This means that a^n → 0 in this case.
Finally, if a ≤ −1, then for even n we have a^n ≥ 1, while for odd n we have a^n ≤ −1. It is clear that a sequence with these properties can have neither a finite nor an infinite limit. □
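The three cases of Theorem 4.16, together with the oscillating case a ≤ −1, can be seen in a few computed values; the sketch below (not from the book) uses arbitrarily chosen bases.

```python
# A quick numerical look (not from the book) at the behavior of a**n for a few
# representative bases a.
for a in (1.5, 1.0, 0.5, -0.5, -1.0, -1.5):
    print(a, [round(a ** n, 4) for n in range(1, 9)])
# a = 1.5 grows without bound, a = 1.0 is constant, |a| < 1 tends to 0, and
# a <= -1 keeps jumping between values >= 1 and <= -1 (oscillates at infinity).
```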
Theorem 4.17.
(i) For every fixed positive real number a, we have limn→∞ √[n]{a} = 1.
(ii) limn→∞ √[n]{n} = 1.
Proof. Let a > 0 be fixed. If 0 < ε ≤ 1, then by Theorem 4.16, (1 + ε)^n → ∞ and (1 − ε)^n → 0. Thus there exist n1 and n2 such that when n > n1, we have (1 + ε)^n > a, and when n > n2, we have (1 − ε)^n < a. So if n > max(n1, n2), then (1 − ε)^n < a < (1 + ε)^n, that is, 1 − ε < √[n]{a} < 1 + ε. This shows that if 0 < ε ≤ 1, then there is an n0 such that when n > n0 holds, we also have |√[n]{a} − 1| < ε. It follows from this that for arbitrary positive ε, we can find an n0, since if ε ≥ 1, then the n0 corresponding to 1 also works. We have shown (i).
(ii) Let 0 < ε < 1 be fixed. If n > 4/ε^2 is even, then by Bernoulli's inequality, we have (1 + ε)^(n/2) > nε/2, so

(1 + ε)^n > (nε/2)^2 > n.

If n > 16/ε^2 is odd, then (1 + ε)^((n−1)/2) > (n − 1)ε/2 > nε/4, which gives

(1 + ε)^n > (1 + ε)^(n−1) > ((n − 1)ε/2)^2 > (nε/4)^2 > n.

Therefore, if n > 16/ε^2, then we have √[n]{n} < 1 + ε. □
Chapter 5
Infinite Sequences II
Finding the limit of a sequence is generally a difficult task. Sometimes, just determining whether a sequence has a limit is tough. Consider the sequence (18) in Example 4.1, that is, let an be the nth digit in the decimal expansion of √2. We know that (an) does not have a limit. But does the sequence cn = √[n]{an} have a limit?
First of all, let us note that an ≥ 1, and thus cn ≥ 1, for infinitely many n. Now if there are infinitely many zeros among the terms an, then cn = 0 also holds for infinitely many n, so the sequence (cn) is divergent. However, if there are only finitely many zeros among the terms an, that is, an ≠ 0 for all n > n0, then 1 ≤ an ≤ 9, and so 1 ≤ cn ≤ √[n]{9} also holds if n > n0. By Theorem 4.17, √[n]{9} → 1. Thus for a given ε > 0, there is an n1 such that √[n]{9} < 1 + ε for all n > n1. So if n > max(n0, n1), then 1 ≤ cn < 1 + ε, and thus cn → 1.
This reasoning shows that the sequence (cn) has a limit if and only if there are only finitely many zeros among the terms an. However, the question whether the decimal expansion of √2 has infinitely many zeros is a famous open problem in number theory. Thus with our current knowledge, we are unable to determine whether
the sequence (cn ) has a limit.
Fortunately, the above example is atypical; we can generally determine the limits
of sequences that we encounter in practice. In most cases, we use the method of
decomposing the given sequence into simpler sequences whose limits we know. Of
course, to determine the limit, we need to know how to find the limit of a sequence
that is constructed from simpler sequences. In the following, this will be explored.
5.1 Basic Properties of Limits
Definition 5.1. For a sequence (a1 , a2 , . . . , an , . . . ), we say that a subsequence is a
sequence of the form
(an1 , an2 , . . . , ank , . . . ),
where n1 < n2 < · · · < nk < . . . are positive integers.
A subsequence, then, is formed by deleting some (possibly infinitely many) terms
from the original sequence, keeping infinitely many.
Theorem 5.2. If the sequence (an ) has a limit, then every subsequence (ank ) does
too, and limk→∞ ank = limn→∞ an .
Proof. Suppose first that (an ) is convergent, and let limn→∞ an = b be a finite limit.
This means that for every positive ε , there are only finitely many terms of the sequence outside the interval (b − ε , b + ε ). Then clearly, the same holds for terms of
the subsequence, which means exactly that limk→∞ ank = b.
The statement can be proved similarly if (an ) tends to infinity or negative infinity.
□
We should note that the existence of a limit for a subsequence does not imply the
existence of a limit for the original sequence, as can be seen in sequences (3) and
(16) in Example 4.1. However, if we already know that (an ) has a limit, then (by
Theorem 5.2) every subsequence will have this same limit.
Definition 5.3. We say that the sequences (an ) and (bn ) have identical convergence
behavior if it is the case that (an ) has a limit if and only if (bn ) has a limit, in which
case the limits agree.
To determine limits of sequences, it is useful to inspect what changes we can
make to a sequence that results in a new sequence whose convergence behavior is
identical to that of the old one.
We will list a few such changes below:
I. We can “rearrange” a sequence, that is, we can change the order of its terms.
A rearranged sequence contains the same numbers, and moreover, each one
is listed the same number of times as in the previous sequence. (The formal
definition is as follows: the sequence (an1 , an2 , . . . ) is a rearrangement of the
sequence (a1 , a2 , . . .) if the map f (k) = nk (k ∈ N+ ) is a permutation of N+ ,
which means that f is a one-to-one correspondence from N+ to itself.)
II. We can repeat certain terms (possibly infinitely many) of a sequence finitely
many times.
III. We can add finitely many terms to a sequence.
IV. We can take away finitely many terms from a sequence.
The above changes can, naturally, change the indices of the terms.
Examples 5.4. Consider the following sequences:
(1) an = n:   (an) = (1, 2, . . . , n, . . .);
(2) an = n + 2:   (an) = (3, 4, 5, . . .);
(3) an = n − 2:   (an) = (−1, 0, 1, . . .);
(4) an = k if k(k − 1)/2 < n ≤ k(k + 1)/2:   (an) = (1, 2, 2, 3, 3, 3, 4, 4, 4, 4, . . . , k, . . .);
(5) an = 2n − 1:   (an) = (1, 3, 5, 7, . . .);
(6) an = 0 if n = 2k + 1, an = k if n = 2k:   (an) = (0, 1, 0, 2, 0, 3, . . .);
(7) an = n + 1 if n = 2k + 1, an = n − 1 if n = 2k:   (an) = (2, 1, 4, 3, 6, 5, . . .).
In the above sequences, starting from the sequence (1), we can get (2) by a type IV change, (3) by a type III change, (4) by a type II change, and (7) by a type I change.
Moreover, (6) cannot be a result of type I–IV changes applied to (1), since there
is only one new term in (6), namely 0, but it appears infinitely many times. (Although we gained infinitely many new terms in (4), too, we can get there using II.)
Theorem 5.5. The sequences (an ) and (bn ) have identical convergence behavior if
one can be reached from the other by applying a finite combination of type I–IV
changes.
Proof. The property that there are finitely or infinitely many terms outside an interval remains unchanged by each one of the type I–IV changes. From this, by Definition 4.2 and Definition 4.6, the statement of the theorem is clear. □
Exercises
5.1. Prove that if every subsequence of (an ) has a subsequence that tends to b, then
an → b.
5.2. Prove that if the sequence (an ) does not have a subsequence tending to infinity,
then (an ) is bounded from above.
5.3. Prove that if (a2n ), (a2n+1 ), and (a3n ) are convergent, then (an ) is as well.
5.4. Give an example for an (an ) that is divergent, but for each k > 1, the subsequence (akn ) is convergent. (H)
5.2 Limits and Inequalities
First of all, we introduce some terminology. Let (An ) be a sequence of statements.
We say that An holds for all n sufficiently large if there exists an n0 such that An is
true for all n > n0. So, for example, we can say that 2^n > n^2 for all n sufficiently
large, since this inequality holds for all n > 4.
Theorem 5.6. If limn→∞ an = ∞ and bn ≥ an for all n sufficiently large, then
limn→∞ bn = ∞. If limn→∞ an = −∞ and bn ≤ an for all n sufficiently large, then
limn→∞ bn = −∞.
Proof. Suppose that bn ≥ an if n > n0 . If an > P for all n > n1 , then bn ≥ an > P
holds for all n > max(n0 , n1 ). From this, it is clear that when limn→∞ an = ∞ holds,
limn→∞ bn = ∞ also holds. The second statement follows in the same way.
□
The following theorem is often known as the squeeze theorem (or sandwich
theorem).
Theorem 5.7. If an ≤ bn ≤ cn for all n sufficiently large and
limn→∞ an = limn→∞ cn = a,
then limn→∞ bn = a.
Proof. By the previous theorem, it suffices to restrict ourselves to the case that a is
finite. Suppose that an ≤ bn ≤ cn for all n > n0 . It follows from our assumption that
for every ε > 0, there exist n1 and n2 such that

a − ε < an < a + ε  if n > n1,   and   a − ε < cn < a + ε  if n > n2.
Then for n > max(n0 , n1 , n2 ),
a − ε < an ≤ bn ≤ cn < a + ε ,
that is, bn → a.
□
We often use the above theorem for the special case an ≡ 0: if 0 ≤ bn ≤ cn and
limn→∞ cn = 0, then limn→∞ bn = 0.
The following theorems state that strict inequality between limits is inherited
by terms of sufficiently large index, while not-strict inequality between terms is
inherited by limits.
Theorem 5.8. Let (an ) and (bn ) be convergent sequences, and let limn→∞ an = a,
limn→∞ bn = b. If a < b, then an < bn holds for all n sufficiently large.
Proof. Let ε = (b − a)/2. We know that for suitable n1 and n2 , an < a + ε if n > n1 ,
and bn > b− ε if n > n2 . Let n0 = max(n1 , n2 ). If n > n0 , then both inequalities hold,
that is, an < a + ε = b − ε < bn .
□
Remark 5.9. Note that from the weaker assumption a ≤ b, we generally do not get
that an ≤ bn holds even for a single index. If, for example, an = 1/n and bn = −1/n,
then limn→∞ an = 0 ≤ 0 = limn→∞ bn , but an > bn for all n.
Theorem 5.10. Let (an ) and (bn ) be convergent sequences, and let limn→∞an =a,
limn→∞ bn = b. If an ≤ bn holds for all sufficiently large n, then a ≤ b.
Proof. Suppose that a > b. By Theorem 5.8, it follows that an > bn for all n sufficiently large, which contradicts our assumption.
□
Remark 5.11. Note that even the assumption an < bn does not imply a < b. If, for
example, an = −1/n and bn = 1/n, then an < bn for all n, but limn→∞ an = 0 =
limn→∞ bn .
Exercises
5.5. Prove that if an → a > 1, then (an)^n → ∞.

5.6. Prove that if an → a, where |a| < 1, then (an)^n → 0.

5.7. Prove that if an → a > 0, then √[n]{an} → 1.

5.8. limn→∞ √[n]{2^n − n} = ?

5.9. Prove that if a1, . . . , ak ≥ 0, then

limn→∞ √[n]{a1^n + · · · + ak^n} = max1≤i≤k ai.   (S)
5.3 Limits and Operations
We say that the sum of the sequences (an ) and (bn ) is the sequence (an + bn ). The
following theorem states that in most cases, the order of taking sums and taking
limits can be switched, that is, the sum of the limits is equal to the limit of the sum
of terms.
Theorem 5.12.
(i) If the sequences (an ) and (bn ) are convergent and an → a, bn → b, then the
sequence (an + bn ) is convergent and an + bn → a + b.
(ii) If the sequence (an ) is convergent, an → a and bn → ∞, then an + bn → ∞.
(iii) If the sequence (an ) is convergent, an →a and bn → − ∞, then an +bn → − ∞.
(iv) If an → ∞ and bn → ∞, then an + bn → ∞.
(v) If an → −∞ and bn → −∞, then an + bn → −∞.
Proof. (i) Intuitively, if an is close to a and bn is close to b, then an + bn is close to
a + b. Basically, we only need to make this idea precise using limits.
If an → a and bn → b, then for all ε > 0, there exist n1 and n2 for which |an − a| <
ε /2 holds if n > n1 , and |bn − b| < ε /2 holds if n > n2 . It follows from this, using
the triangle inequality, that
|(an + bn ) − (a + b)| ≤ |an − a| + |bn − b| < ε ,
if n > max(n1 , n2 ). Since ε was arbitrary, this proves that an + bn → a + b.
(ii) If (an ) is convergent, then by Theorem 4.10, it is bounded. This means that for
suitable K > 0, |an | ≤ K for all n. Let P be arbitrary. Since bn → ∞, there exists an
n0 such that when n > n0 , bn > P + K. Then an + bn > (−K) + (P + K) = P for all
n > n0 . Since P was arbitrary, this proves that an + bn → ∞. Statement (iii) can be
proved in the same way.
(iv) Suppose that an → ∞ and bn → ∞. Let P be arbitrary. Then there are n1 and n2
such that when n > n1 , an > P/2, and when n > n2 , bn > P/2. If n > max(n1 , n2 ),
then an + bn > (P/2) + (P/2) = P. Since P was chosen arbitrarily, this proves that
an + bn → ∞. Statement (v) can be proved in the same way.
□
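The ε/2 device in part (i) can be traced numerically. The sketch below (not from the book) uses the concrete sequences an = 1 + 1/n and bn = 2 + (−1)^n/n^2, whose thresholds are easy to compute, and checks that max(n1, n2) works for the sum.

```python
# A numerical sketch (not from the book) of the eps/2 argument in the proof of
# Theorem 5.12 (i), with a_n = 1 + 1/n -> 1 and b_n = 2 + (-1)^n / n^2 -> 2.
from math import ceil, sqrt

eps = 1e-4
n1 = ceil(2 / eps)          # |a_n - 1| = 1/n   < eps/2 whenever n > n1
n2 = ceil(sqrt(2 / eps))    # |b_n - 2| = 1/n^2 < eps/2 whenever n > n2
n0 = max(n1, n2)

for n in range(n0 + 1, n0 + 1001):
    a_n = 1 + 1 / n
    b_n = 2 + (-1) ** n / n ** 2
    assert abs((a_n + b_n) - 3) < eps   # the combined threshold works for the sum
```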
If (an ) is convergent and an → a, then by applying statement (i) in Theorem 5.12
to the constant sequence bn = −a, we get that an − a → 0. Conversely, if an − a → 0,
then an = (an − a) + a → a. This shows the following.
Corollary 5.13. A sequence (an ) tends to a finite limit a if and only if an − a → 0.
The statements of Theorem 5.12 can be summarized in the table below.
                 lim bn = b     lim bn = ∞     lim bn = −∞
lim an = a         a + b            ∞              −∞
lim an = ∞           ∞              ∞               ?
lim an = −∞         −∞              ?              −∞
The question marks appearing in the table mean that the given values of lim an
and lim bn do not determine the value of lim(an + bn ). Specifically, if lim an = ∞ and
lim bn = −∞ (or the other way around), then using only this information, we cannot
say what value lim(an + bn ) takes. Let us see a few examples.
an = n + c,        bn = −n,    an + bn = c → c ∈ R
an = 2n,           bn = −n,    an + bn = n → ∞
an = n,            bn = −2n,   an + bn = −n → −∞
an = n + (−1)^n,   bn = −n,    an + bn = (−1)^n oscillates at infinity.
We see that (an + bn ) can be convergent, can diverge to positive or negative infinity, and can oscillate at infinity. We express this by saying that the limit lim(an + bn )
is a critical limit if lim an = ∞ and lim bn = −∞. More concisely, we can say that
limits of the type ∞ − ∞ are critical.
Now we look at the limit of a product. We call the product of two sequences
(an ) and (bn ) the sequence (an · bn ).
Theorem 5.14.
(i) If the sequences (an ) and (bn ) are convergent and an → a, bn → b, then the
sequence (an · bn ) is convergent, and an · bn → a · b.
(ii) If the sequence (an ) is convergent, an → a where a > 0, and bn → ±∞, then
an · bn → ±∞.
(iii) If the sequence (an ) is convergent, an → a where a < 0, and bn → ±∞, then
an · bn → ∓∞.
(iv) If an → ±∞ and bn → ±∞,then an · bn → ∞.
(v) If an → ±∞ and bn → ∓∞, then an · bn → −∞.
Lemma 5.15. If an → 0 and (bn ) is bounded, then an · bn → 0.
Proof. Since (bn ) is bounded, there is a K > 0 such that |bn | ≤ K for all n. Let
ε > 0 be given. From the assumption that an → 0, it follows that |an | < ε /K for all
n sufficiently large. Thus |an · bn | < (ε /K) · K = ε for all n sufficiently large. Since
ε was chosen arbitrarily, this proves that an · bn → 0.
□
Proof (Theorem 5.14). (i) Since by our assumption, (an ) and (bn ) are convergent,
by Theorem 4.10, both are bounded. If an → a and bn → b, then by Corollary 5.13,
an − a → 0 and bn − b → 0. Moreover,
an · bn − a · b = (an − a) · bn + a · (bn − b).   (5.1)
By Lemma 5.15, both terms on the right-hand side tend to 0, so by Theorem 5.12,
the limit of the right-hand side is 0. Then an · bn − a · b → 0, so by Corollary 5.13,
an · bn → a · b.
(ii) Suppose that an → a > 0 and bn → ∞. Let P be an arbitrary positive number.
Since a/2 < a, there exists an n1 such that an > a/2 for all n > n1 . By the assumption that bn → ∞, there exists an n2 such that bn > 2P/a for all n > n2 . Then
for n > max(n1 , n2 ), an · bn > (a/2) · (2P/a) = P. Since P was chosen arbitrarily,
this proves that an · bn → ∞. It can be shown in the same way that if an → a > 0 and
bn → −∞, then an · bn → −∞. Statement (iii) can also be proved in the same manner.
(iv) If an → ∞ and bn → ∞, then for all P > 0, there exist n1 and n2 such that for
all n > n1 , an > P, and for all n > n2 , bn > 1. Then for n > max(n1 , n2 ), we have
an · bn > P · 1 = P. It can be shown in the same way that if an → −∞ and bn → −∞,
then an · bn → ∞. Statement (v) can also be proved in the same manner.
□
The statements of Theorem 5.14 are summarized in the table below.
lim an \ lim bn :    b > 0    0    b < 0    ∞     −∞
a > 0                a·b      0    a·b      ∞     −∞
0                    0        0    0        ?     ?
a < 0                a·b      0    a·b      −∞    ∞
∞                    ∞        ?    −∞       ∞     −∞
−∞                   −∞       ?    ∞        −∞    ∞
The question marks indicate the critical limits again. As the examples below
show, lim(an · bn ) is critical if an → 0 and bn → ∞. (In short, the limit of type 0 · ∞
is critical.)
70
5 Infinite Sequences II
an = c/n,          bn = n,    an · bn = c → c ∈ R
an = 1/n,          bn = n²,   an · bn = n → ∞
an = −1/n,         bn = n²,   an · bn = −n → −∞
an = (−1)^n/n,     bn = n,    an · bn = (−1)^n oscillates at infinity.
Similar examples show that the limit of type 0 · (−∞) is also critical.
We now turn to defining quotient limits. Suppose that bn ≠ 0 for all n. We sometimes call the sequence (an/bn) the quotient sequence.
Theorem 5.16. Suppose that the sequences (an) and (bn) have limits, and that bn ≠ 0 for all n. Then the limit of the sequence (an/bn) is given by the table below.
lim an \ lim bn :    b > 0    0    b < 0    ∞    −∞
a > 0                a/b      ?    a/b      0    0
0                    0        ?    0        0    0
a < 0                a/b      ?    a/b      0    0
∞                    ∞        ?    −∞       ?    ?
−∞                   −∞       ?    ∞        ?    ?
Lemma 5.17. If (bn) is convergent and bn → b ≠ 0, then 1/bn → 1/b.
Proof. Let ε > 0 be given; we need to show that |1/bn − 1/b| < ε if n is sufficiently large. Since
1/bn − 1/b = (b − bn)/(b · bn),
we have to show that if n is large, then b − bn is very small, while b · bn is not too small. By the assumption that bn → b, there exists an n1 such that for n > n1, |bn − b| < εb²/2. Since |b|/2 > 0, we can find an n2 such that |bn − b| < |b|/2 for n > n2. Then for n > n2, |bn| > |b|/2, since if b > 0, then bn > b − (b/2) = b/2, while if b < 0, then bn < b + (|b|/2) = b/2 = −|b|/2. Then for n > max(n1, n2),
|1/bn − 1/b| = |b − bn| / |b · bn| < (εb²/2) / (|b| · |b|/2) = ε.
Since ε was arbitrary, this proves that 1/bn → 1/b. □
Lemma 5.18. If |bn | → ∞, then 1/bn → 0.
Proof. Let ε > 0 be given. Since |bn| → ∞, there is an n0 such that for n > n0, |bn| > 1/ε. Then when n > n0, |1/bn| = 1/|bn| < ε, so 1/bn → 0. □
Corollary 5.19. If bn → ∞ or bn → −∞, then 1/bn → 0.
Proof. It is easy to check that if bn → ∞ or bn → −∞, then |bn| → ∞. □
Proof (Theorem 5.16). Suppose first that an → a ∈ R and bn → b ∈ R, b ≠ 0. By Theorem 5.14 and Lemma 5.17,
an/bn = an · (1/bn) → a · (1/b) = a/b.
If (an) is convergent and bn → ∞ or bn → −∞, then by Theorem 5.14 and Corollary 5.19,
an/bn = an · (1/bn) → a · 0 = 0.
Now suppose that an → ∞ and bn → b ∈ R, b > 0. Then by Theorem 5.14 and Lemma 5.17,
an/bn = an · (1/bn) → ∞,
since 1/bn → 1/b > 0. It can be seen similarly that in the case of an → ∞ and bn → b < 0, we have an/bn → −∞; in the case of an → −∞ and bn → b > 0, we have an/bn → −∞; while in the case of an → −∞ and bn → b < 0, we have an/bn → ∞. With this, we have justified every (non-question-mark) entry in the table. □
The question marks in the table of Theorem 5.16 once more denote the critical
limits. Here, however, we need to distinguish two levels of criticality. As the examples below show, the limit of type 0/0 is critical in the same sense that, for example,
the limit of type 0 · ∞ is.
an = c/n,          bn = 1/n,    an/bn = c → c ∈ R
an = 1/n,          bn = 1/n²,   an/bn = n → ∞
an = −1/n,         bn = 1/n²,   an/bn = −n → −∞
an = (−1)^n/n,     bn = 1/n,    an/bn = (−1)^n oscillates at infinity.
We can see that if an → 0 and bn → 0, then (an /bn ) can be convergent, can tend
to infinity or negative infinity, and can oscillate at infinity as well.
The situation is different with the other question marks in the table of Theorem 5.16. Consider the case that an → a > 0 and bn → 0. The examples an = 1, bn = 1/n; an = 1, bn = −1/n; and an = 1, bn = (−1)^n/n show that an/bn can tend to infinity or negative infinity, and can oscillate at infinity as well. However, we do
not find an example in which an /bn is convergent. This follows immediately from
the following theorem.
Theorem 5.20.
(i) Suppose that bn → 0 and bn ≠ 0 for all n. Then 1/|bn| → ∞.
(ii) Suppose that an → a ≠ 0, bn → 0, and bn ≠ 0 for all n. Then |an/bn| → ∞.
Proof. It is enough to prove (ii). Let P > 0 be given. There exists an n0 such that
for n > n0 , |an | > |a|/2 and |bn | < |a|/(2P). Then for n > n0 , |an /bn | > P, that is,
|an /bn | → ∞.
□
Finally, let us consider the case that an → ∞ and bn → ∞. The examples an = bn = n and an = n², bn = n show that an/bn can be convergent and can tend to infinity as well. Now let
an = n² if n is even, and an = n if n is odd,
and let bn = n. It is clear that an → ∞, and an/bn agrees with sequence (16) from
Example 4.1, which oscillates at infinity. However, we cannot find an example in
which an /bn tends to negative infinity. Since both an → ∞ and bn → ∞, an and bn
are both positive for all sufficiently large n. Thus an /bn is positive for all sufficiently
large n, so it cannot tend to negative infinity. Similar observations can be made for
the three remaining cases, when (an ) and (bn ) tend to infinity or negative infinity.
Exercises
5.10. Prove that if (an + bn ) is convergent and (bn ) is divergent, then (an ) is
divergent.
5.11. Is it true that if (an · bn ) is convergent and (bn ) is divergent, then (an ) is also
divergent?
5.12. Is it true that if (an /bn ) is convergent and (bn ) is divergent, then (an ) is also
divergent?
5.13. Prove that if limn→∞ (an − 1)/(an + 1) = 0, then limn→∞ an = 1.
5.14. Let limn→∞ an = a, limn→∞ bn = b. Prove that max(an , bn ) → max(a, b).
5.15. Prove that if an < 0 and an → 0, then 1/an → −∞.
5.4 Applications
First of all, we need generalizations of 5.12 (i) and 5.14 (i) for more summands and
terms, respectively.
Theorem 5.21. Let a1n , . . . , akn be convergent sequences,1 and let limn→∞ ain = bi
for all i = 1, . . . , k. Then the sequences a1n + · · · + akn and a1n · . . . · akn are also
convergent, and their limits are b1 + · · · + bk and b1 · . . . · bk respectively.
The statement can be proved easily by induction on k, using that the k = 2 case
was already proved in Theorems 5.12 and 5.14.
1 Here ain denotes the nth term of the ith sequence.
It is important to note that Theorem 5.21 holds only for a fixed number of sequences, that is, the assumptions of the theorem do not allow the number of sequences (k) to depend on n. Consider
1 = 1/n + · · · + 1/n,
if the number of summands on the right-hand side is exactly n. Despite 1/n tending to 0, the sum, which is the constant 1 sequence, still tends to 1. Similarly,
$$2 = \sqrt[n]{2}\cdot\ldots\cdot\sqrt[n]{2},$$
if the number of factors on the right-hand side is exactly n. Even though $\sqrt[n]{2}$ tends to 1, the product, which is the constant 2 sequence, still tends to 2.
As a first application, we will determine the limits of sequences that can be obtained from the index n and constants, using the four elementary operations.
Theorem 5.22. Let
$$c_n = \frac{a_0 + a_1 n + \cdots + a_k n^{k}}{b_0 + b_1 n + \cdots + b_\ell n^{\ell}} \qquad (n = 1, 2, \ldots),$$
where ak ≠ 0 and bℓ ≠ 0. Then
$$\lim_{n\to\infty} c_n = \begin{cases} 0, & \text{if } \ell > k,\\ \infty, & \text{if } \ell < k \text{ and } a_k/b_\ell > 0,\\ -\infty, & \text{if } \ell < k \text{ and } a_k/b_\ell < 0,\\ a_k/b_\ell, & \text{if } \ell = k.\end{cases}$$
Proof. If we take out n^k from the numerator and n^ℓ from the denominator of the fraction representing cn, then we get that
$$c_n = \frac{n^{k}}{n^{\ell}}\cdot\frac{a_k + \frac{a_{k-1}}{n} + \cdots + \frac{a_0}{n^{k}}}{b_\ell + \frac{b_{\ell-1}}{n} + \cdots + \frac{b_0}{n^{\ell}}}. \qquad (5.2)$$
By Theorem 4.15, in the second factor, except for the first summands, everything in both the numerator and the denominator tends to 0. Then by applying Theorems 5.21 and 5.16, we get that the second factor on the right-hand side of (5.2) tends to ak/bℓ. From this, based on the behavior of n^{k−ℓ} as n → ∞ (Theorem 4.15), the statement immediately follows. □
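For a numerical sanity check of Theorem 5.22, one can simply tabulate such a quotient for growing n. The following Python sketch is an added illustration, not part of the original text; the polynomials 3n² + 5n + 1 and 2n² − 7 are arbitrary choices with k = ℓ = 2, so the predicted limit is ak/bℓ = 3/2.

def c(n):
    # c_n = (3n^2 + 5n + 1) / (2n^2 - 7); by Theorem 5.22 the limit is 3/2.
    return (3 * n**2 + 5 * n + 1) / (2 * n**2 - 7)

for n in (10, 100, 1000, 10**6):
    print(n, c(n))                  # the values approach 1.5
print("predicted limit:", 3 / 2)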
An important sufficient condition for convergence to 0 is given by the next
theorem.
Theorem 5.23. Suppose that there exists a number q < 1 such that an ≠ 0 and |an+1/an| ≤ q for all n sufficiently large. Then an → 0.
Proof. If an ≠ 0 and |an+1/an| ≤ q for every n ≥ n0, then
|a_{n0+1}| ≤ q · |a_{n0}|,  |a_{n0+2}| ≤ q · |a_{n0+1}| ≤ q² · |a_{n0}|,  |a_{n0+3}| ≤ q · |a_{n0+2}| ≤ q³ · |a_{n0}|,   (5.3)
and so on; thus |an| ≤ q^{n−n0} · |a_{n0}| holds for all n > n0. Since q^n → 0 by Theorem 4.16, an → 0. □
Corollary 5.24. Suppose that an ≠ 0 for all sufficiently large n, and an+1/an → c, where |c| < 1. Then an → 0.
Proof. Fix a number q, for which |c| < q < 1. Since |an+1 /an | → |c|, |an+1 /an | < q
for all sufficiently large n, so we can apply Theorem 5.23.
□
Remark 5.25. The assumptions required for Theorem 5.23 and Corollary 5.24 are
generally not necessary conditions to deduce an → 0. The sequence an = 1/n tends
to zero, but an+1 /an = n/(n + 1) → 1, so neither the assumptions of Theorem 5.23
nor the assumptions of Corollary 5.24 are satisfied.
Corollary 5.24 can often be applied to sequences that are given as a product. The
following theorem introduces two important special cases. The notation n! in the
second statement denotes the product 1 · 2 · . . . · n. We call this product n factorial.
Theorem 5.26.
(i) For an arbitrary real number a > 1 and positive integer k, we have nk /an → 0.
(ii) For an arbitrary real number a, an /n! → 0.
Proof. (i) Let an = n^k/a^n. Then
$$\frac{a_{n+1}}{a_n} = \frac{(n+1)^{k}\cdot a^{n}}{n^{k}\cdot a^{n+1}} = \frac{\left(1+\frac{1}{n}\right)^{k}}{a}.$$
Here the numerator tends to 1 by Theorem 5.21, and so a_{n+1}/a_n → 1/a. Since a > 1 by our assumption, 0 < 1/a < 1, so we can apply Corollary 5.24.
(ii) If bn = a^n/n!, then
$$\frac{b_{n+1}}{b_n} = \frac{a^{n+1}\cdot n!}{a^{n}\cdot (n+1)!} = \frac{a}{n+1}.$$
Then b_{n+1}/b_n → 0, so bn → 0 by Corollary 5.24. □
By Theorem 4.16, a^n → ∞ if a > 1. We also know that n^k → ∞ if k is a positive integer. By statement (i) of the above theorem, we can determine that a^n/n^k → ∞ (see Theorem 5.20), which means that the sequence (a^n) is “much bigger” than the sequence (n^k). To make this phenomenon clear, we introduce new terminology and notation below.
Definition 5.27. Let (an ) and (bn ) be sequences tending to infinity. We say that the
sequence (an ) tends to infinity faster than the sequence (bn ) if an /bn → ∞. We can
also express this by saying that (an ) has a larger order of magnitude than (bn ), and
we denote this by (bn ) ≺ (an ).
By Theorems 4.15, 4.16, and 5.26, we can conclude that the following order-of-magnitude relations hold:
(n) ≺ (n²) ≺ (n³) ≺ . . . ≺ (2^n) ≺ (3^n) ≺ (4^n) ≺ . . . ≺ (n!) ≺ (n^n).
Definition 5.28. If an → ∞, bn → ∞, and an /bn → 1, then we say that the sequences
(an ) and (bn ) are asymptotically equivalent, and we denote this by an ∼ bn .
Thus, for example, (n2 + n) ∼ n2 , since (n2 + n)/n2 = 1 + (1/n) → 1.
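These relations are easy to observe numerically. The sketch below is an added illustration (not part of the original text); the pairs of sequences are chosen from the chain above, and the last loop checks the asymptotic equality (n² + n) ∼ n².

import math

# Ratios b_n / a_n for pairs (b_n) ≺ (a_n); each ratio tends to 0.
for n in (10, 20, 40, 80):
    print(n,
          n**3 / 2**n,                   # (n^3) ≺ (2^n)
          2**n / math.factorial(n),      # (2^n) ≺ (n!)
          math.factorial(n) / n**n)      # (n!) ≺ (n^n)

# (n^2 + n) ~ n^2: the quotient tends to 1.
for n in (10, 100, 1000):
    print(n, (n**2 + n) / n**2)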
Exercises
5.16. Prove that if an > 0 and an+1 /an > q > 1, then an → ∞.
5.17. Prove that if an > 0 and an+1 /an → c, where c > 1, then an → ∞.
5.18. Show that if an > 0 and an+1/an → q, then $\sqrt[n]{a_n} \to q$.
5.19. Give an example of a positive sequence (an) for which $\sqrt[n]{a_n} \to 1$, but an+1/an does not tend to 1.
5.20. Prove that $\lim_{n\to\infty} 2^{\sqrt{n}}/n^{k} = \infty$ for all k.
5.21. Let (a1n ), (a2n ), . . . be an arbitrary sequence of sequences tending to infinity.
(Here akn denotes the nth term of the kth sequence.) Prove that there exists a sequence
bn → ∞, whose order of magnitude is larger than the order of magnitude of every
(akn ). (H)
5.22. Suppose that
(a1n ) ≺ (a2n ) ≺ . . . ≺ (b2n ) ≺ (b1n ).
Prove that there exists a sequence (cn ) for which (akn ) ≺ (cn ) ≺ (bkn ) for all k.
5.23. Let p(n) = a0 + a1 n + · · · + ak nk , where ak > 0. Prove that p(n + 1) ∼ p(n).
Chapter 6
Infinite Sequences III
6.1 Monotone Sequences
In Theorem 4.10, we proved that for a sequence to converge, a necessary condition
is the boundedness of the sequence, and in our example of the sequence (−1)n , we
saw that boundedness is not a sufficient condition for convergence.
In the following, we prove, however, that for a major class of sequences, the
so-called monotone sequences, boundedness already implies convergence. Since
boundedness is generally easier to check than convergence directly, the theorem
is often a very useful tool in determining whether a sequence converges. Moreover,
as we will see, the theorem is important from a conceptual viewpoint and is closely
linked to Cantor’s axiom.
Definition 6.1. We say the sequence (an ) is monotone increasing if
a1 ≤ a2 ≤ · · · ≤ an ≤ an+1 ≤ . . . .
If the statement above with ≤ replaced by ≥ holds, we say that the sequence is
monotone decreasing, while in the case of < and > we say strictly monotone increasing and strictly monotone decreasing respectively.
We call the sequence (an ) monotone if one of the above cases applies.
Theorem 6.2. If the sequence (an ) is monotone and bounded, then it is convergent.
If (an) is monotone increasing, then limn→∞ an = sup{an}, and if it is monotone decreasing, then limn→∞ an = inf{an}.
Proof. Let, for example, (an ) be monotone increasing, bounded, and set
α = sup{an }.
Since α is the least upper bound of the set {an }, it follows that for all ε > 0, α − ε
is not an upper bound, that is, there exists an n0 for which
an0 > α − ε .
The sequence is monotone increasing, so
an ≥ an0 , if n > n0 ,
and thus
0 ≤ α − an ≤ α − an0 < ε , if n > n0 .
We see that for all ε > 0, there exists an n0 for which
|α − an | < ε , if n > n0
holds, which means exactly that limn→∞ an = α .
□
We can extend Theorem 6.2 with the following result.
Theorem 6.3. If the sequence (an ) is monotone increasing and unbounded, then
limn→∞ an = ∞; if it is monotone decreasing and unbounded, then limn→∞ an = −∞.
Proof. Let (an ) be monotone increasing and unbounded. If (an ) is monotone increasing, then it is bounded from below, since a1 is a lower bound. Then the assumption that (an ) is unbounded means that (an ) is unbounded from above. Then for
all P, there exists an n0 (depending on P) for which an0 > P. But by monotonicity,
an ≥ an0 > P if n > n0 , that is, limn→∞ an = ∞. The statement for monotone decreasing sequences can be proved similarly.
□
In the previous theorem, it suffices to assume that the sequence is monotone for
n > n1 , since finitely many terms do not change the convergence behavior of the
sequence.
We often denote that (an ) is monotone increasing (or decreasing) and tends
to a by
an ր a;
(an ց a).
With the help of Theorem 6.2, we can justify the convergence of a few frequently
occurring sequences.
Theorem 6.4. The sequence an = (1 + 1/n)^n is strictly monotone increasing and bounded, thus convergent.
Proof. By the inequality of arithmetic and geometric means,
$$\sqrt[n+1]{1\cdot\left(1+\frac{1}{n}\right)^{n}} < \frac{1 + n\cdot\left(1+\frac{1}{n}\right)}{n+1} = \frac{n+2}{n+1} = 1+\frac{1}{n+1}.$$
Raising both sides to the (n + 1)th power yields
$$\left(1+\frac{1}{n}\right)^{n} < \left(1+\frac{1}{n+1}\right)^{n+1},$$
showing us that the sequence is strictly monotone increasing.
For boundedness from above, we will show that for all positive integers n and m,
$$\left(1+\frac{1}{n}\right)^{n} < \left(1+\frac{1}{m}\right)^{m+1}. \qquad (6.1)$$
Using the inequality of the arithmetic and geometric means again, we see that
$$\sqrt[n+m]{\left(1+\frac{1}{n}\right)^{n}\cdot\left(1-\frac{1}{m}\right)^{m}} < \frac{n\cdot\left(1+\frac{1}{n}\right) + m\cdot\left(1-\frac{1}{m}\right)}{n+m} = \frac{n+m}{n+m} = 1,$$
that is,
$$\left(1+\frac{1}{n}\right)^{n}\cdot\left(1-\frac{1}{m}\right)^{m} < 1. \qquad (6.2)$$
Since for m > 1, the reciprocal of (1 − (1/m))^m is
$$\left(\frac{m}{m-1}\right)^{m} = \left(1+\frac{1}{m-1}\right)^{m},$$
dividing both sides of (6.2) by (1 − (1/m))^m yields (6.1). This proves that each (1 + (1/m))^{m+1} is an upper bound of the sequence. □
We denote1 the limit of the sequence (1 + 1/n)^n by e, that is,
$$e = \lim_{n\to\infty}\left(1 + \frac{1}{n}\right)^{n}. \qquad (6.3)$$
As we will later see, this constant plays an important role in analysis and other
branches of mathematics.
1 This notation was introduced by Leonhard Euler (1707–1783), Swiss mathematician.
It can easily be shown that the limit of a strictly increasing sequence is larger
than any term in the sequence. If we compare this to (6.1) and Theorem 5.10, then
we get that
$$\left(1+\frac{1}{n}\right)^{n} < e \le \left(1+\frac{1}{n}\right)^{n+1} \qquad (6.4)$$
for all n. This then implies e > 1.1^{10} > 2.5 and e ≤ 1.2^{6} < 3. Further, by (6.4),
$$0 < e - \left(1+\frac{1}{n}\right)^{n} \le \left(1+\frac{1}{n}\right)^{n}\left(\left(1+\frac{1}{n}\right) - 1\right) = \left(1+\frac{1}{n}\right)^{n}\cdot\frac{1}{n} < \frac{3}{n},$$
and this (theoretically) provides an opportunity to estimate e with arbitrary fixed precision.2 One can show (although perhaps not with the help of the above approximation), that e = 2.718281828459045 . . .. It can be proven that e is an irrational number.3
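The estimate above can be tried out directly: (1 + 1/n)^n approaches e from below, and the error stays under 3/n. The short Python sketch below is an added illustration, not part of the original text.

import math

# (1 + 1/n)^n approximates e from below; by the estimate above, the error is < 3/n.
for n in (10, 100, 1000, 10**5):
    approx = (1 + 1 / n)**n
    print(n, approx, math.e - approx, 3 / n)   # actual error versus the bound 3/n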
Using the approximation (6.4), we can get more accurate information about the order of magnitude of factorials.
Theorem 6.5.
(i) n! > (n/e)n for all n, and moreover,
(ii) n! < n · (n/e)n for all n ≥ 7.
Proof. We will prove both statements by induction. Since e > 1, 1! = 1 > 1/e. Suppose that n ≥ 1 and n! > (n/e)n . To prove the inequality (n + 1)! > ((n + 1)/e)n+1 ,
it suffices to show that
$$(n+1)\cdot\left(\frac{n}{e}\right)^{n} > \left(\frac{n+1}{e}\right)^{n+1},$$
which is equivalent to (1 + 1/n)n < e. This proves (i).
In proving (ii), first of all, we check that 7! = 5040 < 7 · (7/e)7 . It is easy to check
that 720 < (2.56)7 , from which we get 5040 < 7 · (2.56)7 < 7 · (7/e)7 . Suppose that
n ≥ 7 and n! < n · (n/e)n . To prove (n + 1)! < (n + 1) · ((n + 1)/e)n+1 , it suffices to
show that
n n
n + 1 n+1
(n + 1) · n ·
≤ (n + 1) ·
,
e
e
which is equivalent to (1 + 1/n)n+1 ≥ e. This proves (ii).
□
The exact order of magnitude of n! is given by Stirling's formula,4 which states that n! ∼ (n/e)^n · √(2πn). This can be proved as an application of integrals (see Theorem 15.15).
2 In practice, (6.4) is not a very useful approximation. If we wanted to approximate e to 10 decimal points, we would have to compute a 10^10th power. We will later give a much faster approximation method.
3 In fact, one can also show that e is what is called a transcendental number, meaning that it is not a root of any nonzero polynomial with integer coefficients. We will prove irrationality in Exercises 12.87 and 15.23, while one can prove transcendence as an application of integration.
4 James Stirling (1692–1770), Scottish mathematician.
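Both the bounds of Theorem 6.5 and Stirling's formula are easy to compare with n! numerically. The sketch below is an added illustration, not part of the original text; it prints the ratios n!/(n/e)^n, n!/(n · (n/e)^n), and n!/((n/e)^n · √(2πn)) for a few n ≥ 7.

import math

# Theorem 6.5: (n/e)^n < n! < n * (n/e)^n for n >= 7.
# Stirling's formula: n! ~ (n/e)^n * sqrt(2*pi*n), so the last ratio tends to 1.
for n in (7, 10, 20, 50):
    base = (n / math.e)**n
    print(n,
          math.factorial(n) / base,                                  # greater than 1
          math.factorial(n) / (n * base),                            # less than 1
          math.factorial(n) / (base * math.sqrt(2 * math.pi * n)))   # tends to 1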
With the help of Theorem 6.2, we can conclude the convergence of several
sequences given by recurrence relations.
Example 6.6. Consider sequence (15) in Example 4.1, that is, the sequence (an) given by the recurrence a1 = 0, a_{n+1} = √(2 + an). We show that the sequence is monotone increasing. We prove an < a_{n+1} using induction. Since a1 = 0 < √2 = a2, the statement is true for n = 1. If it holds for n, then
a_{n+1} = √(2 + an) < √(2 + a_{n+1}) = a_{n+2},
so it holds for n + 1 as well. This shows that the sequence is (strictly) monotone increasing.
Now we show that the sequence is bounded from above, namely that 2 is an upper bound. The statement an ≤ 2 follows from induction as well, since a1 = 0 < 2, and if an ≤ 2, then a_{n+1} = √(2 + an) ≤ √(2 + 2) = 2. This shows that the sequence is monotone and bounded, thus convergent. We can find the limit with the help of the recurrence. Let limn→∞ an = a. Since a_{n+1}² = 2 + an for all n,
a² = limn→∞ a_{n+1}² = limn→∞ (2 + an) = 2 + a,
which gives a = 2 or a = −1. The second option is impossible, since the terms of the sequence are nonnegative. So a = 2, that is, limn→∞ an = 2.
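The convergence in Example 6.6 can also be watched numerically: iterating the recurrence pushes an up toward 2 very quickly. The sketch below is an added illustration, not part of the original text.

import math

# a_1 = 0, a_{n+1} = sqrt(2 + a_n); the values increase and approach 2.
a = 0.0
for n in range(1, 11):
    print(n, a)
    a = math.sqrt(2 + a)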
Exercises
6.1. Prove that if A ⊂ R and sup A = α ∉ A, then there exists a sequence (an) for which {an} ⊂ A, (an) is strictly monotone increasing, and an → α. Is the statement true if α ∈ A?
6.2. A sequence, in terms of monotonicity, boundedness, and convergence, can
behave (theoretically) in eight different ways (each property is either present or
not). In reality, how many of the eight cases can actually occur?
6.3. Suppose that the terms of the sequence (an ) satisfy the inequality an ≤ (an−1 +
an+1 )/2 for all n > 1. Prove that (an ) cannot oscillate at infinity. (H)
6.4. Prove that the following sequences, defined recursively, are convergent, and find their limits.
(a) a1 = 0, a_{n+1} = √(a + an) (n = 1, 2, . . .), where a > 0 is fixed;
(b) a1 = 0, a_{n+1} = 1/(2 − an) (n = 1, 2, . . .);
(c) a1 = 0, a_{n+1} = 1/(4 − an) (n = 1, 2, . . .);
(d) a1 = 0, a_{n+1} = 1/(1 + √(an)) (n = 1, 2, . . .) (H);
(e) a1 = √2, a_{n+1} = √(2an) (n = 1, 2, . . .).
6.5. Let a > 0 be given, and define the sequence (an) by the recurrence a1 = a, a_{n+1} = (an + a/an)/2. Prove that an → √a. (H)
6.6. Prove that the sequence (1 + 1/n)^{n+1} is monotone decreasing.
6.7. Prove that n + 1 < e^{1 + 1/2 + · · · + 1/n} < 3n for all n = 1, 2, . . .. (H)
6.8. Let a and b be positive numbers. Suppose that the sequences (an) and (bn) satisfy a1 = a, b1 = b, and a_{n+1} = (an + bn)/2, b_{n+1} = √(an·bn) for all n ≥ 1. Prove that limn→∞ an = limn→∞ bn. (This value is the so-called arithmetic–geometric mean of a and b.)
6.9. Prove that if (an ) is convergent and (an+1 − an ) is monotone, then it follows
that n · (an+1 − an ) → 0. Give an example for a convergent sequence (an ) for which
n · (an+1 − an ) does not tend to 0. (∗ H)
6.10. Suppose that (bn) is strictly monotone increasing and tends to infinity. Prove that if (an − a_{n−1})/(bn − b_{n−1}) → c, then an/bn → c.
6.2 The Bolzano–Weierstrass Theorem and Cauchy’s Criterion
We saw that the monotone sequences behave simply from the point of view of convergence. We also know (see Theorem 5.5) that rearrangement of the terms of a
sequence does not affect whether it converges, and if it does, its limit is the same.
It would be useful to see which sequences can be rearranged to give a monotone
sequence.
Every finite set of numbers can be arranged in ascending or descending order
to give a finite monotone sequence. It is clear, however, that not every infinite sequence can be rearranged to give a monotone sequence. For example, the sequence ((−1)^n/n) cannot be rearranged into a monotone sequence. The condition for rearranging a sequence into one that is strictly monotone is given by the following theorem.
Theorem 6.7. A sequence can be rearranged to give a strictly monotone increasing
sequence if and only if its terms are distinct and the sequence either tends to infinity
or converges to a value that is larger than all of its terms.
Proof. The terms of a strictly monotone sequence are distinct, so this is a necessary
condition for being able to rearrange a sequence into one that is strictly monotone.
We know that every monotone increasing sequence either tends to infinity or is convergent (Theorems 6.2 and 6.3). It is also clear that if a strictly increasing monotone
sequence is convergent, then its terms are smaller than the limit. This proves the
“only if” part of the statement.
Now suppose that we have a sequence (an ) whose terms are pairwise distinct and
an → ∞; we show that (an ) can be rearranged into a strictly increasing monotone
sequence. Consider the intervals
I0 = (−∞, 0], I1 = (0, 1], . . . , Ik = (k − 1, k], . . . .
Each of these intervals contains only finitely many terms of the sequence. If we list
the terms in I0 in monotone increasing order, followed by the ones in I1 , and so
on, then we get a rearrangement of the sequence into one that is strictly monotone
increasing.
Finally let us suppose that we have a sequence (an ) whose terms are distinct,
an → a finite, and an < a for all n. We show that (an ) can be rearranged into a
strictly increasing monotone sequence. Consider the intervals
J1 = (−∞, a − 1], J2 = (a − 1, a − 1/2], . . . , Jk = (a − 1/(k−1), a − 1/k], . . . .
Each of these contains only finitely many terms of the sequence, and every term of
the sequence is in one of the Jk intervals. If we then list the terms in J1 in monotone increasing order, followed by those in J2, and so on, then we get a rearrangement of
the sequence into one that is strictly monotone increasing.
□
Using a similar argument, one can show that if a sequence is convergent and its
terms are smaller than its limit (but not necessarily distinct), then it can be rearranged into a (not necessarily strictly) increasing monotone sequence.
The following combinatorial theorem, while interesting in its own right, has important consequences.
Theorem 6.8. Every sequence has a monotone subsequence.
Proof. For a sequence (an ), we will call the terms ak peaks, if for all m > k, am ≤ ak .
We distinguish two cases.
I. The sequence (an ) has infinitely many peaks. In this case, the peaks form a monotone decreasing subsequence.
II. The sequence (an ) has finitely many peaks. Then there is an n0 such that whenever n ≥ n0 , an is not a peak. Since an0 is not a peak, then according to the definition
of a peak, there exists an n1 > n0 such that an1 > an0 . Since an1 is also not a peak,
there exists n2 > n1 such that an2 > an1 , and so on. We then get an infinite sequence
of indices n0 < n1 < · · · < nk < . . . for which
an0 < an1 < · · · < ank < . . . .
That is, in this case, the sequence has a (strictly) increasing monotone subsequence.
□
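The case distinction in the proof is in effect a procedure: collect the peaks, and if they run out, climb upward past the last one. The sketch below is an added illustration, not part of the original text; on a finite list it extracts the peak terms, which always form a monotone decreasing subsequence, mirroring case I of the proof.

def peak_terms(a):
    # Indices k with a[m] <= a[k] for every m > k ("peaks" as in the proof of Theorem 6.8).
    # Scanning from the right and keeping the running maximum finds exactly these indices.
    peaks = []
    best = float("-inf")
    for k in range(len(a) - 1, -1, -1):
        if a[k] >= best:
            peaks.append(k)
            best = a[k]
    peaks.reverse()
    return [a[k] for k in peaks]

print(peak_terms([3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5]))   # [9, 6, 5, 5], monotone decreasing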
A theorem of fundamental importance follows.
Theorem 6.9 (Bolzano–Weierstrass5 Theorem). Every bounded sequence has a
convergent subsequence.
Proof. If a sequence is bounded, then every one of its subsequences is also bounded.
Then by our previous theorem, every bounded sequence has a bounded monotone
subsequence. From Theorem 6.2, it follows that this is convergent.
□
We can extend Theorem 6.8 with the following.
Theorem 6.10. If a sequence is not bounded from above, then it has a monotone subsequence that tends to ∞; if it is not bounded from below, then it has a
monotone subsequence that tends to −∞.
Proof. If {an } is not bounded from above, then there is an n1 such that an1 > 0.
Also, by not being bounded from above, we can find an index n2 for which
an2 > max(a1 , . . . , an1 , 1).
Then necessarily n2 > n1 . Following this procedure, we can find indices n1 , n2 , . . .
such that
ank+1 > max(a1 , . . . , ank , k)
for all k. Then n1 < n2 < · · · , an1 < an2 < · · · , and ank > k − 1. Our constructed
subsequence (ank ) is monotone increasing, diverging to ∞. The second statement of
the theorem can be proven in the same manner.
□
Since every monotone sequence has a limit, by Theorem 6.8, every sequence has
a subsequence that tends to a limit. In the following theorems, we will show that
the subsequences with limits determine the convergence behavior of the original
sequence.
Theorem 6.11. Suppose that whenever a subsequence of (an ) has a limit, then it
tends to b (where b can be finite or infinite). Then (an ) also tends to b.
Proof. Let first b = ∞, and let K be given. We will show that the sequence can
have only finitely many terms smaller than K. Suppose this is not the case, and
let an1 , an2 , . . . all be smaller than K. As we saw before, the sequence (ank ) has a
subsequence that tends to a limit. This subsequence, however, cannot tend to infinity
(since each one of its terms is smaller than K), which is impossible. Thus we see
that an → ∞. We can argue similarly if b = −∞.
Now suppose that b is finite, and let ε > 0 be given. We will see that the sequence
can have only finitely many terms outside the interval (b − ε , b + ε ). Suppose that
this is not true, and let an1 , an2 , . . . be terms that do not lie inside (b − ε , b + ε ).
As we saw before, the sequence (ank ) has a subsequence that tends to a limit. This
limit, however, cannot be b (since none of its terms is inside (b − ε , b + ε )), which
is impossible. Thus we see that an → b.
□
5 Bernhard Bolzano (1781–1848), Italian–German mathematician, and Karl Weierstrass (1815–1897), German mathematician.
By Theorem 5.2, the statement of the previous theorem can be reversed: if
an → b, then all subsequences of (an ) that converge to a limit tend to b, since all
subsequences of (an ) tend to b. The following important consequence follows from
Theorems 5.2 and 6.11.
Theorem 6.12. A sequence oscillates at infinity if and only if it has two subsequences that tend to distinct (finite or infinite) limits.
The following theorem—called Cauchy’s6 criterion—gives a necessary and sufficient condition for a sequence to be convergent. The theorem is of fundamental
importance, since it gives an opportunity to check convergence without knowing
the limit; it places conditions only on differences between terms, as opposed to differences between terms and the limit.
Theorem 6.13 (Cauchy’s Criterion). The sequence (an ) is convergent if and only
if for every ε > 0, there exists an N such that for all n, m ≥ N, |an − am | < ε .
Proof. If (an ) is convergent and limn→∞ an = b, then for every ε > 0, there exists an
N for which |an − b| < ε /2 and |am − b| < ε /2 hold whenever n ≥ N and m ≥ N.
Then by the triangle inequality, |an − am | < ε if m, n ≥ N. This shows that the assumption in our theorem is a necessary condition for convergence.
Now we prove that convergence follows from the assumptions. First we show
that if the assumptions hold, the sequence is bounded. Certainly, considering the
assumption for ε = 1 yields an N such that |an − am | < 1 whenever n, m ≥ N. Here
if we set m to be N, we get that |an − aN | < 1 for all n ≥ N, which clearly says that
the sequence is bounded.
By the Bolzano–Weierstrass theorem, the fact that (an ) has a convergent subsequence follows. Let limn→∞ ank = b. We prove that (an ) is convergent and tends to
b. We give two different proofs of this.
I. Let ε > 0 be given. There exists a k0 such that |ank − b| < ε /2 whenever
k > k0 . On the other hand, by the assumption, there exists an N such that
|an − am | < ε /2 whenever n, m ≥ N. Let us fix an index k > k0 for which nk ≥ N.
Then for arbitrary n ≥ N,
|an − b| ≤ |an − ank | + |ank − b| < (ε /2) + (ε /2) = ε ,
which shows that an → b.
II. By Theorem 6.11, it suffices to show that every subsequence of (an ) that has
a limit tends to b. Let (ami ) be such a subsequence having a limit, and let
limi→∞ ami = c. Since (an ) is bounded, c is finite.
6 Augustin Cauchy (1789–1857), French mathematician.
Let ε > 0 be given. By our assumption, there exists an N such that |an − am | < ε
whenever n, m ≥ N. Now, ank → b and ami → c, so there are indices nk > N and
mi > N for which |ank − b| < ε and |ami − c| < ε . Then
|b − c| ≤ |ank − b| + |ami − c| + |ank − ami | < 3ε .
Since ε was arbitrary, b = c.
□
The statement of the Cauchy criterion can also be stated (less precisely but more
intuitively) by saying that a sequence is convergent if and only if its large-index
terms are close to each other. It is important to note that for convergence, it is necessary for terms with large indices to be close to their neighbors, but this in itself is
not sufficient. More precisely, the condition that an+1 − an → 0 is necessary, but not
sufficient, for (an) to be convergent. Clearly, if an → a, then a_{n+1} − a_n → a − a = 0. Moreover, by (4.2), √(n+1) − √n → 0, but the sequence (√n) is not convergent; it tends to infinity. There even exists a sequence (an) such that a_{n+1} − a_n → 0, but the sequence oscillates at infinity (see Exercise 4.22).
Remark 6.14. When we introduced the real numbers, we mentioned alternative constructions, namely the construction of structures that satisfy the axioms of the real
numbers. With the help of the Cauchy criterion, we briefly sketch such a structure,
taking for granted the existence and properties of the rational numbers. For brevity,
we call a sequence (an ) Cauchy if it satisfies the conditions of the Cauchy criterion,
that is, for every ε > 0, there exists an N such that |an − am | < ε for all n, m ≥ N.
Let us call the Cauchy sequences consisting of rational numbers C-numbers. We
consider the C-numbers (an ) and (bn ) to be equal if an − bn → 0, that is, if for every
ε > 0, there exists an N such that |an − bn | < ε for all n ≥ N.
We define the operations of addition and multiplication among C-numbers by the
formulas (an ) + (bn ) = (an + bn ) and (an ) · (bn ) = (an · bn ) respectively. (Of course,
we need to check whether these operations are well defined.) In the construction,
the roles of 0 and 1 are fulfilled by (un ) and (vn ), which are the constant 0 and 1
sequences, respectively.
One can show that the structure defined as above satisfies the field axioms.
The less-than relation is understood in the following way: (an) < (bn) if (an) ≠ (bn) (that is, if an − bn does not tend to 0) and there exists an N such that an < bn for all n ≥ N. It can be
shown that with this ordering, we get a structure that satisfies the axiom system of
the real numbers. For details of this construction, see [4, 8].
Exercises
6.11. Prove that if the sequence (an ) does not have a convergent subsequence, then
|an | → ∞.
6.12. Is it possible that (an ) does not have convergent subsequences, but (|an |) is
convergent?
6.13. Prove that if (an ) is bounded and every one of its convergent subsequences
tends to b, then an → b.
6.14. Prove that if the sequence (an ) does not have two convergent subsequences
that tend to distinct limits, then (an ) either has a limit or can be broken into the
union of two subsequences, one of which is convergent, while the absolute value of
the other tends to infinity.
6.15. Assume the field and ordering axioms, and the Bolzano–Weierstrass theorem.
Deduce from these the axiom of Archimedes and Cantor’s axiom. (∗ S)
6.16. Prove that if a sequence has infinitely many terms smaller than a, and infinitely
many terms larger than a, then it cannot be reordered to form a monotone sequence.
6.17. What is the exact condition for a sequence (an ) to be rearrangeable into a
monotone increasing sequence? (H)
6.18. Prove that every convergent sequence breaks into at most three (finite or infinite) subsequences, each of which can be rearranged to give a monotone sequence.
6.19. Prove that if |an+1 − an | ≤ 2−n for all n, then (an ) is convergent.
6.20. Suppose that a_{n+1} − a_n → 0. Does it follow from this that a_{2n} − a_n → 0?
6.21. Give examples of a sequence (an) such that an → ∞, and
(a) a_{2n} − a_n → 0 (S);
(b) a_{n²} − a_n → 0;
(c) a_{2^n} − a_n → 0;
(d) for a fixed sequence s_n > n made up of positive integers, a_{s_n} − a_n → 0 (S).
6.22. Give an example of a sequence for which ank − an → 0 for all k > 1, but
a2n − an does not tend to 0. (∗ H)
6.23. Is it true that the following statements are equivalent?
(a) (∀ε > 0)(∃N)(n, m ≥ N ⇒ |an − am | < ε ), and
(b) whenever sn ,tn are positive integers for which sn → ∞ and tn → ∞, then asn −
atn → 0. (H)
Chapter 7
Rudiments of Infinite Series
If we add infinitely many numbers (more precisely, if we take the sum of an infinite
sequence of numbers), then we get an infinite series.
Mathematicians in India were dealing with infinite series as early as the fifteenth
century, while European mathematics caught up with them only in the seventeenth.
But then, however, the study of infinite series underwent rapid development, because
mathematicians realized that certain quantities and functions can be computed more
easily if they are expressed as infinite series.
There were ideas outside of mathematics that led to infinite series as well. One
of the earliest such ideas was Zeno’s paradox about Achilles and the tortoise (as
seen in our brief historical introduction). So-called “elementary” mathematics and
recreational mathematics also give rise to questions that lead to infinite series:
1. What is the area of the shaded region in
Figure 7.1, formed by infinitely many triangles?
If the area of the big triangle is 1, then the area
we seek is clearly (1/4) + (1/4²) + (1/4³) + · · · .
On the other hand, the big triangle is the union of three copies of the shaded region, so the area of the region is 1/3. We get that
1/4 + 1/4² + 1/4³ + · · · = 1/3.
[Fig. 7.1]
2. A simple puzzle asks that if the weight of a brick is 1 pound plus the weight of half a brick,
then how heavy is one brick? Since the weight of
half of a brick is 1/2 pound plus the weight of a quarter of a brick, while the weight
of a quarter of a brick is 1/4 pound plus the weight of an eighth of a brick, and so
on, the weight of one brick is 1 + (1/2) + (1/4) + (1/8) + · · · . On the other hand,
if we subtract half of a brick from both sides of the equation 1 brick = 1 pound + 1/2 brick, we get that half a brick weighs 1 pound. Thus the brick weighs 2 pounds,
and so
1 + 1/2 + 1/4 + 1/8 + · · · = 2.
In the brief historical introduction, we saw several examples of how the sum of
an infinite series can result in some strange or contradictory statements (for example, (1.4) and (1.5)). We can easily see that these strange results come from faulty
reasoning, namely from the assumption that every infinite series has a well-defined
“predestined” sum. This is false, because only the axioms can be assumed as “predestined” (once we accept them), and every other concept needs to be created by
us. We have to give up on every infinite series having a sum: we have to decide
ourselves which series should have a sum, and then decide what that sum should
be. The concept that we create should satisfy certain expectations and should mirror
any intuition we might already have for the concept.
To define infinite series, let us start with finite sums. Even sums of multiple terms
are not “predestined,” since the axioms talk only about sums of two numbers. We
defined the sum of n numbers by adding parentheses to the sum (in the first appendix
of Chapter 3), which simply means that we get the sum of n terms after n − 1 summations. It is only natural to define the infinite sum a1 + a2 + · · · with the help of
the running sums a1 , a1 + a2 , a1 + a2 + a3 , . . ., which are called the partial sums.1
It is easy to check (and we will soon do so in Example 7.3) that the partial sums of
the series 1+1/2+1/4+1/8+· · · tend to 2, and the partial sums of 3/10+3/100+
3/1000 + · · · tend to 1/3. Both results seem to be correct; the second is simply
the decimal expansion of the number 1/3. On the other hand, the partial sums of
the problematic series 1 + 2 + 4 + · · · do not tend to −1, while the partial sums
of the series 1 − 1 + 1 − 1 + · · · does not tend to anything, but oscillate at infinity
instead. By all of these considerations, the definition below follows naturally.
To simplify our formulas, we introduce the following notation for sums of multiple terms: a1 + · · · + an = ∑ni=1 ai .
The infinite series a1 + a2 + · · · will also get new notation, which is ∑_{n=1}^∞ an.
Definition 7.1. The partial sums of the infinite series ∑_{n=1}^∞ an are the numbers sn = ∑_{i=1}^n ai (n = 1, 2, . . .). If the sequence of partial sums (sn) is convergent with limit A, then we say that the infinite series ∑_{n=1}^∞ an is convergent, and its sum is A. We denote this by ∑_{n=1}^∞ an = A.
If the sequence of partial sums (sn) is divergent, then we say that the series ∑_{n=1}^∞ an is divergent.
If limn→∞ sn = ∞ (or −∞), then we say that the sum of the series ∑_{n=1}^∞ an is ∞ (or −∞). We denote this by ∑_{n=1}^∞ an = ∞ (or −∞).
1 This viewpoint does not treat an infinite sum as an expression whose value is already defined, but instead, the value is “gradually created” with our method. From a philosophical viewpoint, the infinitude of the series is viewed not as “actual infinity” but “potential infinity.”
Remark 7.2. Strictly speaking, the expression ∑_{n=1}^∞ an alone has no meaning. The phrase “consider the infinite series ∑_{n=1}^∞ an” simply means “consider the sequence (an),” with the difference that we usually are more concerned about the partial sums2 a1 + · · · + an. For our purposes, the statement ∑_{n=1}^∞ an = A is simply a shorthand way of writing that limn→∞ (a1 + · · · + an) = A.
Examples 7.3. 1. The nth partial sum of the series 1 + 1/2 + 1/4 + 1/8 + · · · is sn = ∑_{i=0}^{n−1} 2^{−i} = 2 − 2^{−n+1}. Since limn→∞ sn = 2, the series is convergent, and its sum is 2.
2. The nth partial sum of the series 3/10 + 3/100 + 3/1000 + · · · is
sn = ∑_{i=1}^{n} 3 · 10^{−i} = (3/10) · (1 − 10^{−n})/(1 − (1/10)).
Since limn→∞ sn = 3/9 = 1/3, the series is convergent, and its sum is 1/3.
3. The nth partial sum of the series 1 + 1 + 1 + · · · is sn = n. Since limn→∞ sn = ∞, the series is divergent (and its sum is ∞).
4. The (2k)th partial sum of the series 1 − 1 + 1 − . . . is zero, while the (2k + 1)th
partial sum is 1 for all k ∈ N. Since the sequence (sn ) is oscillating at infinity, the
series is divergent (and has no sum).
5. The kth partial sum of the series
1 − 1/2 + 1/3 − 1/4 + · · ·   (7.1)
is
sk = 1 − 1/2 + 1/3 − 1/4 + · · · + (−1)^{k−1} · (1/k).
If n < m, then we can see that
|sm − sn| = |1/(n+1) − 1/(n+2) + · · · + (−1)^{n−m+1} · (1/m)| < 1/n.
It follows that the sequence (sn ) satisfies Cauchy’s criterion (Theorem 6.13), so it
is convergent. This shows that the series (7.1) is convergent. We will later see that
the sum of the series is equal to the natural logarithm of 2 (see Exercise 12.92 and
Remark 13.16).
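The partial sums of (7.1) settle down exactly as the estimate |sm − sn| < 1/n suggests. The sketch below is an added numerical illustration, not part of the original text; the comparison value log 2 is only quoted from the forward reference above.

import math

# Partial sums of 1 - 1/2 + 1/3 - 1/4 + ...; they converge (to log 2, see the text).
s = 0.0
for k in range(1, 10001):
    s += (-1)**(k - 1) / k
    if k in (10, 100, 1000, 10000):
        print(k, s, abs(s - math.log(2)))   # the error stays below 1/k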
The second example above is a special case of the following theorem, which
states that infinite decimal expansions can be thought of as convergent series.
Theorem 7.4. Let the infinite decimal expansion of x be m.a1a2 . . .. Then the infinite series m + a1/10 + a2/10² + · · · is convergent, and its sum is x.
2 There are some who actually mean the series (sn) when they write ∑_{n=1}^∞ an. We do not follow this practice, since then the expression ∑_{n=1}^∞ an = A would state equality between a sequence and a number, which is not a good idea.
Proof. By the definition of the decimal expansion,
m.a1 . . . an ≤ x ≤ m.a1 . . . an + 1/10^n
for all n, so limn→∞ m.a1 . . . an = x. We see that m.a1 . . . an is the (n + 1)th partial sum of the series
m + a1/10 + a2/10² + · · · ,
which makes the statement of the theorem clear. □
The sums appearing in Examples 7.3.1. and 7.3.2. are special cases of the following theorem.
Theorem 7.5. The series 1 + x + x2 + · · · is convergent if and only if |x| < 1, and
then its sum is 1/(1 − x).
Proof. We already saw that in the case x = 1, the series is divergent, so we can assume that x ≠ 1. Then the nth partial sum of the series is sn = ∑_{i=0}^{n−1} x^i = (1 − x^n)/(1 − x). If |x| < 1, then x^n → 0 and sn → 1/(1 − x). Thus the series is convergent with sum 1/(1 − x).
If x > 1, then sn → ∞, so the series is divergent (and its sum is ∞). If, however,
x ≤ −1, then the sequence (sn ) oscillates at infinity, so the series is also divergent
(with no sum).
□
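A quick numerical check of Theorem 7.5 is shown below; the sketch is an added illustration, not part of the original text, and the values x = 0.3 and x = −0.8 are arbitrary choices with |x| < 1.

# Partial sums of 1 + x + x^2 + ... compared with the claimed sum 1/(1 - x), for |x| < 1.
for x in (0.3, -0.8):
    s, term = 0.0, 1.0
    for n in range(200):     # after 200 terms the partial sum has visibly settled
        s += term
        term *= x
    print(x, s, 1 / (1 - x))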
The next theorem outlines an important property of convergent series.
Theorem 7.6. If the series ∑_{n=1}^∞ an is convergent, then limn→∞ an = 0.
Proof. Let the sum of the series be A. Since
an = (a1 + · · · + an ) − (a1 + · · · + an−1 ) = sn − sn−1 ,
we have an → A − A = 0.
□
Remark 7.7. The theorem above states that for ∑_{n=1}^∞ an to be convergent, it is necessary for an → 0 to hold. It is important to note that this condition is in no way sufficient, since there are many divergent series whose terms tend to zero. A simple example: the terms of the series ∑_{i=0}^∞ (√(i+1) − √i) tend to zero by Example 4.4.3. On the other hand, the nth partial sum is ∑_{i=0}^{n−1} (√(i+1) − √i) = √n → ∞ as n → ∞, so the series is divergent.
Another famous example of a divergent series whose terms tend to zero is the series ∑_{n=1}^∞ 1/n, which is called the harmonic series.3
3 The name comes from the fact that the wavelengths of the overtones of a vibrating string of length
h are h/n (n = 2, 3, . . .). The wavelengths h/2, h/3, . . . , h/8 correspond to the octave, octave plus
a fifth, second octave, second octave plus a major third, second octave plus a fifth, second octave
plus a seventh, and the third octave, respectively. Thus the series ∑_{n=1}^∞ (h/n) contains the tone and its overtones, which are often called harmonics.
Theorem 7.8. The series ∑_{n=1}^∞ 1/n is divergent.
We give two proofs of the statement.
Proof. 1. If the nth partial sum of the series is sn, then
$$s_{2n} - s_n = \left(1 + \frac12 + \cdots + \frac{1}{2n}\right) - \left(1 + \frac12 + \cdots + \frac{1}{n}\right) = \frac{1}{n+1} + \cdots + \frac{1}{2n} \ge n\cdot\frac{1}{2n} = \frac12$$
for all n. Suppose that the series is convergent and its sum is A. Then if n → ∞, then
s2n − sn → A − A = 0, which is impossible.
2. If n > 2^k, then
$$s_n \ge 1 + \frac12 + \cdots + \frac{1}{2^{k}} = 1 + \frac12 + \left(\frac13+\frac14\right) + \left(\frac15+\cdots+\frac18\right) + \cdots + \left(\frac{1}{2^{k-1}+1}+\cdots+\frac{1}{2^{k}}\right) \ge$$
$$\ge 1 + \frac12 + 2\cdot\frac14 + 4\cdot\frac18 + \cdots + 2^{k-1}\cdot\frac{1}{2^{k}} = 1 + k\cdot\frac12.$$
Thus limn→∞ sn = ∞, so the series is divergent, and its sum is ∞. □
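The second proof says that sn exceeds 1 + k/2 as soon as n ≥ 2^k, so the partial sums grow beyond every bound, but only very slowly. The sketch below is an added illustration, not part of the original text.

# Partial sums s_n of the harmonic series at n = 2^k, compared with the bound 1 + k/2.
s, n = 0.0, 0
for k in range(21):
    while n < 2**k:
        n += 1
        s += 1 / n
    print(k, n, s, 1 + k / 2)    # s_{2^k} >= 1 + k/2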
Remark 7.9. Since the harmonic series contains the reciprocal of every positive integer, one could expect the behavior of the series to have number-theoretic significance. This is true. Using the divergence of the harmonic series, we can give a new
proof of the fact that there exist infinitely many prime numbers. Suppose that this
were not true, and that there were only finitely many prime numbers. Let these be
p1 , . . . , pk . For all i and N, the relations
$$1 + \frac{1}{p_i} + \cdots + \frac{1}{p_i^{N}} = \frac{1 - p_i^{-(N+1)}}{1 - \frac{1}{p_i}} < \frac{1}{1 - \frac{1}{p_i}}$$
hold. Multiplying these together, we get that
$$\prod_{i=1}^{k}\left(1 + \frac{1}{p_i} + \cdots + \frac{1}{p_i^{N}}\right) < \prod_{i=1}^{k}\frac{1}{1 - \frac{1}{p_i}} \qquad (7.2)$$
for all N. (Here we use the notation ∏ki=1 ai = a1 · · · ak .) If we expand the multiplication on the left-hand side, then we get the reciprocal of every number whose
prime factorization does not contain any prime with power greater than or equal to N
(since we assumed that there are no prime numbers other than p1 , . . . , pk ). It is clear
that every number up to N is there, so ∑Nn=1 1/n = sN is smaller than the right-hand
side of (7.2). This, however, is impossible, since sN → ∞ as N → ∞.
With a refinement of the proof above, one can show that the series consisting
of the reciprocals of all primes is divergent. We also know more precisely that the
sum of reciprocals of primes smaller than x is greater than log log x − 1 for all x ≥ 2
(see Chapter 5 of [2] or Corollary 18.16 and Theorem 18.17 of this book).
The second proof of Theorem 7.8 seems to tell us more than the first, because
it says not only that the series is divergent, but that its sum is infinite too. By the
following simple theorem, we see that the sum of a divergent series consisting of
nonnegative terms is always infinite.
Theorem 7.10.
(i) A series consisting of nonnegative terms is convergent if and only if the sequence
of its partial sums is bounded (from above).
(ii) If a series consisting of nonnegative terms is divergent, then its sum is infinite.
Proof. By the assumption that the terms of the series are nonnegative, we clearly
get that the sequence of partial sums of the series is monotone increasing. If this
sequence is bounded from above, then it is convergent by Theorem 6.2. Then the
series in question is convergent. If, however, the sequence of partial sums is not
bounded from above, then by Theorem 6.3, it tends to infinity, so the series will be
divergent, and its sum will be infinity.
□
We emphasize that by the above theorem, a series consisting of nonnegative
terms always has a sum: either a finite number (if the series converges) or infinity (if the series diverges).
Examples 7.11. 1. The series ∑_{i=1}^∞ 1/i² is convergent, because its nth partial sum is
$$\sum_{i=1}^{n}\frac{1}{i^{2}} \le 1 + \sum_{i=2}^{n}\frac{1}{(i-1)\cdot i} = 1 + \sum_{i=2}^{n}\left(\frac{1}{i-1} - \frac{1}{i}\right) = 2 - \frac{1}{n} < 2.$$
During the seventeenth century and the beginning of the eighteenth, many mathematicians tried to determine the value of the series ∑_{i=1}^∞ 1/i². Finally, Johann Bernoulli4 and Euler independently found that ∑_{i=1}^∞ 1/i² = π²/6. This fact now has dozens of proofs (see http://mathworld.wolfram.com/RiemannZetaFunctionZeta2.html). For a relatively elementary proof, see Exercise 11.36.
2. By part (b) of Exercise 2.7, the partial sums of the series ∑_{i=1}^∞ 1/i^{3/2} are less than 3. Thus the series is convergent, and its sum is at most 3.
3. We generally call series of the form ∑_{i=1}^∞ 1/i^c hyperharmonic series. It is easy to see that if b > 0, then
$$1 + \frac{1}{2^{b+1}} + \cdots + \frac{1}{n^{b+1}} \le 1 + \frac{1}{b} - \frac{1}{b\cdot n^{b}} \qquad (7.3)$$
4 Johann Bernoulli (1667–1748), Swiss mathematician, brother of Jacob Bernoulli.
for all n (see Exercise 7.5). It then follows that the hyperharmonic series ∑_{i=1}^∞ 1/i^c is convergent for all c > 1 (see also Exercise 7.6). We denote the sum of the series by ζ(c). By inequality (7.3), it follows that every partial sum of the series is less than c/(c − 1), and so 1 < ζ(c) ≤ c/(c − 1) for all c > 1.
The theorems of Johann Bernoulli and Euler can be summarized using this notation as ζ(2) = π²/6.
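Numerically, the partial sums of ∑ 1/n² creep up toward ζ(2) = π²/6 ≈ 1.6449 while staying below the bound c/(c − 1) = 2. The sketch below is an added illustration, not part of the original text.

import math

# Partial sums of 1/1^2 + 1/2^2 + ...; they stay below 2 and tend to pi^2/6.
s = 0.0
for n in range(1, 100001):
    s += 1 / n**2
    if n in (10, 1000, 100000):
        print(n, s, math.pi**2 / 6 - s)   # the remainder is roughly 1/n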
Remark 7.12. The series ∑_{n=1}^∞ 1/n² is an example of the rare occurrence whereby we can specifically determine the sum of a series (such as the series appearing in Exercises 7.2 and 7.3). These can be thought of as exceptions, for we generally cannot express the sum of an arbitrary series in closed form. The sums ∑_{n=1}^∞ 1/n^c (that is, the values of ζ(c)) have closed formulas only for some special values of c.
We already saw that ζ (2) = π 2 /6. Bernoulli and Euler proved that if k is an even
positive integer, then ζ (k) is equal to a rational multiple of π k . However, to this day,
we do not know whether this is also true if k is an odd integer. In fact, nobody has
found closed expressions for the values of ζ (3), ζ (5), and so on for the past 300
years, and it is possible that no such closed form exists.
By the transcendence of the number π , we know that the values ζ (2k) are irrational for every positive integer k. In the 1970s, it was proven that the value ζ (3)
is also irrational. Whether the numbers ζ (5), ζ (7), and so on are rational or not,
however, is still an open question.
In the general case, the following theorem gives us a precise condition for the
convergence of a series.
Theorem 7.13 (Cauchy’s Criterion). The infinite series ∑∞
n=1 an is convergent if
and only if for all ε > 0, there exists an index N such that for all N ≤ n < m,
|an+1 + an+2 + · · · + am | < ε .
Proof. Since an+1 + an+2 + · · · + am = sm − sn , the statement is clear by Cauchy’s
⊔
⊓
criterion for sequences (Theorem 6.13).
Exercises
7.1. For a fixed ε > 0, give threshold indices above which the partial sums of the following series differ from their actual sum by less than ε.
(a) ∑_{n=0}^∞ 1/2^n;   (b) ∑_{n=0}^∞ (−2/3)^n;   (c) 1 − 1/2 + 1/3 − 1/4 + · · · ;   (d) 1/(1 · 2) + 1/(2 · 3) + 1/(3 · 4) + · · · .
7.2. (a) ∑_{n=1}^∞ 1/(n² + 2n) = ?   (b) ∑_{n=1}^∞ 1/(n² + 4n + 3) = ?   (c) ∑_{n=2}^∞ 1/(n³ − n) = ? (H S)
7.3. Give a general method for determining the sums ∑_{n=a}^∞ p(n)/q(n), in which p and q are polynomials, deg p ≤ deg q − 2, and q(x) = (x − a1) · · · (x − ak), where a1, . . . , ak are distinct integers smaller than a. (H)
7.4. Are the following series convergent?
(a) ∑_{n=1}^∞ 1/$\sqrt[n]{2}$;   (b) ∑_{n=2}^∞ 1/√(log n);   (c) ∑_{n=2}^∞ √n · log((n + 1)/n);   (d) ∑_{n=1}^∞ (3^√n − 2^n)/(3^√n + 2^n).
7.5. Prove inequality (7.3) for all b > 0. (H S)
7.6. Show that
$$\left(1 + \frac{1}{2^{c}} + \cdots + \frac{1}{n^{c}}\right)\left(1 - \frac{2}{2^{c}}\right) < 1$$
for all n = 1, 2, . . . and c > 0. Deduce from this that the series ∑_{n=1}^∞ 1/n^c is convergent for all c > 1.
7.7. Prove that limn→∞ ζ (1 + (1/n)) = ∞.
7.8. Let a1, a2, . . . be a listing of the positive integers that do not contain the digit 7 (in decimal representation). Prove that ∑_{n=1}^∞ 1/an is convergent. (H)
7.9. Let ∑_{n=1}^∞ an be convergent. limn→∞ (a_{n+1} + a_{n+2} + · · · + a_{n²}) = ?
7.10. Let the sequence (an) satisfy limn→∞ |an + a_{n+1} + · · · + a_{n+2n}| = 0. Does it then follow that the series ∑_{n=1}^∞ an is convergent? (H)
7.11. Let the sequence (an) satisfy limn→∞ |an + a_{n+1} + · · · + a_{n+i_n}| = 0 for every sequence of positive integers (i_n). Does it then follow that the series ∑_{n=1}^∞ an is convergent? (H)
7.12. Prove that if |a_{n+1} − a_n| < 1/n² for all n, then the sequence (an) is convergent.
7.13. Suppose that an ≤ bn ≤ cn for all n. Prove that if the series ∑_{n=1}^∞ an and ∑_{n=1}^∞ cn are convergent, then the series ∑_{n=1}^∞ bn is convergent as well.
7.14. Prove that if the series ∑_{n=1}^∞ an is convergent, then
$$\lim_{n\to\infty}\frac{a_1 + 2a_2 + \cdots + n a_n}{n} = 0. \qquad \text{(S)}$$
Chapter 8
Countable Sets
While we were talking about sequences, we noted that care needs to be taken in
distinguishing the sequence (an ) from the set of its terms {an }. We will say that the
sequence (an ) lists the elements of H if H = {an }. (The elements of the set H and
thus the terms of (an ) can be arbitrary; we do not restrict ourselves to sequences
of real numbers.) If there is a sequence that lists the elements of H, we say that H
can be listed by that sequence. Clearly, every finite set can be listed, since if H =
{a1 , . . . , ak }, then the sequence (a1 , . . . , ak , ak , ak , . . .) satisfies H = {an }. Suppose
now that H is infinite. We show that if there is a sequence (an ) that lists the elements
of H, then there is another sequence listing H whose terms are all distinct. Indeed,
for each element x of H, we can choose a term an for which an = x. The terms
chosen in this way (in the order of their indices) form (ank ), a subsequence of (an ).
If ck = ank for all k, then the terms of the sequence (ck ) are distinct, and H = {ck }.
Like every sequence, (ck ) is also a function defined on the set N+ . Since its terms
are distinct and H = {ck }, this means that the map k → ck is a bijection between the
sets N+ and H (see the footnote on p. 38).
We have shown that the elements of a set H can be listed if and only if H is finite
or there is a bijection between H and N+ .
Definition 8.1. We say the set H is countably infinite, if there exists a bijective map
between N+ and H. The set H is countable if it is finite or countably infinite.
With our earlier notation, we can summarize our argument above by saying that
a set can be listed if and only if it is countable.
It is clear that N is countable: consider the sequence (0, 1, 2, . . .). It is also easy
to see that Z is countable, since the sequence (0, 1, −1, 2, −2, . . .) contains every
integer. More surprising is the following theorem.
Theorem 8.2. The set of all rational numbers is countable.
Proof. We have to convince ourselves that there is a sequence that lists the rational
numbers. One example of such a sequence is
0/1, −1/1, 0/2, 1/1, −2/1, −1/2, 0/3, 1/2, 2/1, −3/1, −2/2, −1/3, 0/4, 1/3, 2/2, 3/1,
−4/1, −3/2, −2/3, −1/4, 0/5, 1/4, 2/3, 3/2, 4/1, −5/1, −4/2, . . .    (8.1)
Fig. 8.1
Here fractions of the form p/q (where p, q are integers and q > 0) are listed by
the size of |p| + q. For all n, we list all fractions for which |p| + q = n in some order,
and then we attach the finite sequences we get for n = 1, 2, . . . to get an infinite
sequence (Figure 8.1). It is clear that we have listed every rational number in this
way.
⊓⊔
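The listing described in this proof is completely mechanical, so it can also be carried out by a short program. The following Python sketch is not part of the original text (the function name rationals is ours); it simply reproduces the order of (8.1) by running through the groups |p| + q = n one after another.

    from itertools import islice

    def rationals():
        # Yield the fractions p/q (q > 0) grouped by n = |p| + q, in the order of (8.1):
        # within the group |p| + q = n, the numerator p runs from -(n - 1) to n - 1.
        n = 1
        while True:
            for p in range(-(n - 1), n):
                yield (p, n - abs(p))
            n += 1

    print(["%d/%d" % pq for pq in islice(rationals(), 14)])
    # ['0/1', '-1/1', '0/2', '1/1', '-2/1', '-1/2', '0/3', '1/2', '2/1',
    #  '-3/1', '-2/2', '-1/3', '0/4', '1/3']  -- the first fourteen terms of (8.1)

As in the proof, every rational number appears (with repetitions such as −2/2 = −1), which is all that a listing requires.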
Using similar techniques, we can show that sets much “larger” than the set of
rational numbers are also countable. We say that the complex number α is algebraic
if it is the root of a not identically zero polynomial with integer coefficients. It is
clear that every rational
√ number is algebraic, since p/q is the root of the polynomial
qx − p. The number 2 is also algebraic, since it is a root of x2 − 2.
Theorem 8.3. The set of algebraic numbers is countable.
Proof. We define the weight of the polynomial ak xk + · · · + a0 to be the number
k + |ak | + |ak−1 | + · · · + |a0 |. It is clear that for every n, there are only finitely many
integer-coefficient polynomials whose weight is n. It then follows that there is a
sequence f1 , f2 , . . ., that contains every nonconstant polynomial. We can get this by
first listing all the integer-coefficient nonconstant polynomials whose weight is 2,
followed by all those whose weight is 3, and so on. We have then listed every integer-coefficient nonconstant polynomial, since the weight of such a polynomial cannot be
0 or 1. Each polynomial fi can have only finitely many roots (see Lemma 11.1). List
the roots of f1 in some order, followed by the roots of f2 , and so on. The sequence
we get lists every algebraic number, which proves our theorem.
⊓⊔
According to the following theorem, set-theoretic operations (union, intersection,
complement) do not lead us out of the class of countable sets.
Theorem 8.4.
(i) Every subset of a countable set is countable.
(ii) The union of two countable sets is countable.
Proof. (i) Let A be countable, and B ⊂ A. Suppose that the sequence (an ) lists the
elements of A. For each element x of B, choose a term an for which an = x. The
terms chosen in this way (in the order of their indices) form a subsequence of (an )
that lists the elements of B.
(ii) If the sequences (an ) and (bn ) list the elements of the sets A and B, then the
sequence (a1 , b1 , a2 , b2 , . . .) lists the elements of A ∪ B.
⊓⊔
An immediate consequence of statement (ii) above (by induction) is that the
union of finitely many countable sets is also countable. By the following theorem,
more is true.
Theorem 8.5. The union of a countable number of countable sets is countable.
Proof. Let A1 , A2 , . . . be countable sets, and for each k, let (a_n^k ) be a sequence that lists the elements of Ak . Then the sequence
a_1^1 , a_2^1 , a_1^2 , a_3^1 , a_2^2 , a_1^3 , a_4^1 , a_3^2 , a_2^3 , a_1^4 , . . .
lists the elements of ⋃_{k=1}^∞ Ak . We get the above sequence by writing all the finite sequences a_n^1 , a_{n−1}^2 , . . . , a_2^{n−1} , a_1^n one after another for each n. ⊓⊔
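The interleaving used in this proof can be written out explicitly as well. The sketch below is not from the book: a(k, n) stands for the n-th term a_n^k of the sequence listing Ak , and the helper name union_listing is ours.

    def union_listing(a, blocks):
        # For n = 1, 2, ..., blocks, emit the finite block a(1, n), a(2, n - 1), ..., a(n, 1),
        # that is, a_n^1, a_{n-1}^2, ..., a_1^n in the notation of the proof of Theorem 8.5.
        out = []
        for n in range(1, blocks + 1):
            for k in range(1, n + 1):
                out.append(a(k, n + 1 - k))
        return out

    # Example: let A_k = {k/1, k/2, k/3, ...}; their union is the set of positive rationals.
    print(union_listing(lambda k, n: "%d/%d" % (k, n), 4))
    # ['1/1', '1/2', '2/1', '1/3', '2/2', '3/1', '1/4', '2/3', '3/2', '4/1']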
Based on the previous theorems, the question whether there are uncountable sets
at all arises naturally. The following theorem gives an answer to this.
Theorem 8.6. The set of real numbers is uncountable.
Proof. Suppose that R is countable, and let (xn ) be a sequence of real numbers that
contains every real number. We will work toward a contradiction by constructing a
real number x that is not in the sequence. We outline two constructions.
I. The first construction is based on the following simple observation: if I is a closed
interval and c is a given number, then I has a closed subinterval that does not contain
c. This is clear: if we choose two disjoint closed subintervals, at least one will not
contain c.
Let I1 be a closed interval that does not contain x1 . Let I2 be a closed subinterval of I1 such that x2 ∉ I2 . Following this procedure, let In be a closed subinterval of In−1 such that xn ∉ In . According to Cantor's axiom, the intervals In have a shared point. If x ∈ ⋂_{n=1}^∞ In , then x ≠ xn for all n, since x ∈ In , but xn ∉ In . Thus x cannot be a term in the sequence, which is what we were trying to show.
II. A second construction for a similar x is the following: Consider the decimal
expansion of x1 , x2 , . . .:
x1 = ±n1 .a11 a12 . . .
x2 = ±n2 .a21 a22 . . .
..
.
Let x = 0.b1 b2 . . ., where bi = 5 if aii ≠ 5, and bi = 4 if aii = 5. Clearly, x is different from each xn . ⊓⊔
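The second construction is easy to carry out on finitely many decimal expansions. The fragment below is only an illustration, not part of the text; the name diagonal_digits and the sample rows are ours. It applies the rule b_i = 5 or 4 to the diagonal digits of a given list.

    def diagonal_digits(rows, m):
        # rows[i] holds the decimal digits of x_{i+1} after the decimal point.
        # Return b_1 ... b_m with b_i = 5 if the i-th digit of x_i is not 5 and b_i = 4
        # otherwise, so 0.b_1 b_2 ... differs from every listed x_i in its i-th digit.
        return "".join("5" if rows[i][i] != "5" else "4" for i in range(m))

    rows = ["1234567890", "2545678901", "3456789012", "4567890123", "5678901234"]
    print("0." + diagonal_digits(rows, 5))   # 0.54455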
Theorems 8.4 and 8.6 imply that the set of irrational numbers is uncountable. If
it were countable, then—since Q is also countable—R = Q ∪ (R \ Q) would also be
countable, whereas it is not. We call a number transcendental if it is not algebraic.
Repeating the above argument—using the fact that the set of algebraic numbers is
countable—we get that the set of transcendental numbers is uncountable.
Definition 8.7. If there exists a bijection between two sets A and B, then we say that
the two sets are equivalent, or that A and B have the same cardinality, and we denote
this by A ∼ B.
By the above definition, a set A is countably infinite if and only if A ∼ N+ . It can
be seen immediately that if A ∼ B and B ∼ C, then A ∼ C; if f is a bijection from A
to B and g is a bijection from B to C, then the map x → g( f (x)) (x ∈ A) is a bijection
from A to C.
Definition 8.8. We say that a set H has the cardinality of the continuum if it is
equivalent to R.
We show that both the set of irrational numbers and the set of transcendental
numbers have the cardinality of the continuum. For this, we need the following
simple lemma.
Lemma 8.9. If A is infinite and B is countable, then A ∪ B ∼ A.
Proof. First of all, we show that A contains a countably infinite subset. Since A is infinite, it is nonempty, and we can choose an x1 ∈ A. If we have already chosen x1 , . . . , xn ∈ A, then A ≠ {x1 , . . . , xn } (since otherwise A would be finite), so we can choose an element xn+1 ∈ A \ {x1 , . . . , xn }. Thus by induction, we have chosen distinct xn for each n. Then X = {xn : n = 1, 2, . . .} is a countably infinite subset of A.
To prove the lemma, we can suppose that A ∩ B = ∅, since we can substitute B with B \ A (which is also countable). By Theorem 8.4, X ∪ B is countable. Since it is also infinite, X ∪ B ∼ N+ , and so X ∪ B ∼ X, since N+ ∼ X. Let f be a bijection from X to X ∪ B. Then
g(x) = x, if x ∈ A \ X;  g(x) = f (x), if x ∈ X
is a bijection from A to A ∪ B. ⊓⊔
Theorem 8.10. Both the set of irrational numbers and the set of transcendental
numbers have the cardinality of the continuum.
Proof. By the previous lemma, R \ Q ∼ (R \ Q) ∪ Q = R, so R \ Q has the cardinality of the continuum. By a straightforward modification of the argument, we find that the set of transcendental numbers also has the cardinality of the continuum. ⊓⊔
Theorem 8.11. Every nondegenerate interval has the cardinality of the continuum.
Proof. The interval (−1, 1) has the cardinality of the continuum, since the map
f (x) = x/(1 + |x|) is a bijection from R to (−1, 1). (The inverse of f is f −1 (x) =
x/(1 − |x|) (x ∈ (−1, 1)).) Since every open interval is equivalent to (0, 1) (the function (b − a)x + a maps (0, 1) to (a, b)), every open interval has the cardinality of the
continuum.
Moreover, Lemma 8.9 gives [a, b] ∼ (a, b), (a, b] ∼ (a, b) and [a, b) ∼ (a, b),
so we get that every bounded nondegenerate interval has the cardinality of the
continuum.
The proof for rays (or half-lines) having the cardinality of the continuum is left
as an exercise for the reader.
⊓⊔
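As a quick numerical sanity check (ours, not the book's, and of course no substitute for the proof), one can verify that the map f (x) = x/(1 + |x|) and the formula given for f −1 really invert each other:

    def f(x):
        return x / (1 + abs(x))        # maps R onto (-1, 1)

    def f_inv(y):
        return y / (1 - abs(y))        # defined for y in (-1, 1)

    for x in [-1000.0, -2.5, 0.0, 0.3, 17.0]:
        assert abs(f_inv(f(x)) - x) < 1e-9 * max(1.0, abs(x))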
Exercises
8.1. Let an denote the nth term of sequence (8.1). What is the smallest n for which
an = −17/39?
8.2. Prove that the set of finite sequences with integer terms is countable. (H)
8.3. Show that the set of finite-length English texts is countable.
8.4. Prove that every set of disjoint intervals is countable. (H)
8.5. Prove that a set is infinite if and only if it is equivalent to a proper subset of
itself.
8.6. Prove that every ray (half-line) has the cardinality of the continuum.
8.7. Prove that if both A and B have the cardinality of the continuum, then so does
A ∪ B.
8.8. Prove that every circle has the cardinality of the continuum.
8.9. Give a function that maps (0, 1] bijectively to the set of infinite sequences made
up of positive integers. (H)
8.10. Prove that the set of all subsets of N has the cardinality of the continuum.
8.11. Prove that the plane (that is, the set {(x, y) : x, y ∈ R}) has the cardinality of
the continuum. (H)
Chapter 9
Real-Valued Functions of One Real Variable
9.1 Functions and Graphs
Consider a function f : A → B. As we stated earlier, by this we mean that for every
element a of the set A, there exists a corresponding b ∈ B, which is denoted by
b = f (a).
We call the set A the domain of f , and we denote it by A = D( f ). The set of
b ∈ B that correspond to some a ∈ A is called the range of f , and is denoted by
R( f ). That is, R( f ) = { f (a) : a ∈ D( f )}. The set R( f ) is inside B, but generally it
is not required to be equal to B.
If C ⊂ A, then f (C) denotes all the b ∈ B that correspond to some c ∈ C; f (C) =
{ f (a) : a ∈ C}. By this notation, R( f ) = f (D( f )).
We consider two functions f and g to be equal if D( f ) = D(g), and f (x) = g(x)
holds for all x ∈ D( f ) = D(g).
Definition 9.1. For a function f : A → B, if for every a1 , a2 ∈ A with a1 ≠ a2 we have f (a1 ) ≠ f (a2 ), then we say that f is one-to-one or injective.
If f : A → B and R( f ) = B, then we say that f is onto or surjective.
If f : A → B is both one-to-one and onto, then we say that f is a one-to-one
correspondence between A and B. (In other words, f is a bijection, a bijective map.)
Definition 9.2. If f : A → B is a one-to-one correspondence between A and B, then
we write f −1 to denote the map that takes every b ∈ B to the corresponding a ∈ A
for which b = f (a). Then f −1 : B → A, D( f −1 ) = B, and R( f −1 ) = A. We call the
map f −1 the inverse function (or the inverse map or simply the inverse) of f .
Theorem 9.3. Let f : A → B be a bijection between A and B. Then the map g =
f −1 : B → A has an inverse, and g−1 = f .
Proof. The statement is clear from the definition of an inverse function.
⊓⊔
We can define various operations between functions that map pairs of functions to functions. One such operation is composition.
Definition 9.4. We can define the composition of the functions f : A → B and
g : B → C as a new function, denoted by g ◦ f , that satisfies D(g ◦ f ) = A and
(g ◦ f )(x) = g( f (x)) for all x ∈ A. If f and g are arbitrary functions, then we define g ◦ f for those values x for which g( f (x)) is defined. That is, we would have
D(g ◦ f ) = {x ∈ D( f ) : f (x) ∈ D(g)}, and (g ◦ f )(x) = g( f (x)) for all such x.
Remark 9.5 (Composition Is Not Commutative). Consider the following example.
Let Z denote the set of integers, and let f (x) = x + 1 for all x ∈ Z. Moreover, let
g(x) = 1/x for all x ∈ Z \ {0}. Then D(g ◦ f ) = Z \ {−1}, and (g ◦ f )(x) = 1/(x + 1)
for all x ∈ Z\{−1}. On the other hand, D( f ◦g) = Z\{0}, and ( f ◦g)(x) = (1/x)+1
for all x ∈ Z \ {0}. We can see that f ◦ g ≠ g ◦ f , since they are defined on different domains. But in fact, it is easy to see that ( f ◦ g)(x) = (g ◦ f )(x) for each x where both sides are defined.
From this point on, we will deal with functions whose domain and range are both subsets of the real numbers. We call such functions real-valued functions of a real variable (or simply real functions for short).
We can also define addition, subtraction, multiplication, and division among real
functions.
Let f and g be real-valued functions. Their sum f + g and their difference f − g
are defined by the formulas ( f + g)(x) = f (x) + g(x) and ( f − g)(x) = f (x) − g(x)
respectively for all x ∈ D( f ) ∩ D(g). Then D( f + g) = D( f − g) = D( f ) ∩ D(g).
Similarly, their product f · g is defined on the set D( f ) ∩ D(g), having the
value f (x) · g(x) at x ∈ D( f ) ∩ D(g). Finally, their quotient, f /g, is defined by ( f /g)(x) = f (x)/g(x) at every x for which x ∈ D( f ) ∩ D(g) and g(x) ≠ 0. That is, D( f /g) = {x ∈ D( f ) ∩ D(g) : g(x) ≠ 0}.
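The domain rules above are purely set-theoretic, and they can be mirrored directly in code. The following Python sketch is ours, not the book's: a real function is modeled as a pair consisting of a domain test and a rule, and composition and quotient are formed exactly as in Definition 9.4 and in the definition of f /g above.

    class RealFunction:
        def __init__(self, defined, rule):
            self.defined = defined      # defined(x): is x in D(f)?
            self.rule = rule            # rule(x): the value f(x), used only on D(f)

        def __call__(self, x):
            if not self.defined(x):
                raise ValueError("x is not in the domain")
            return self.rule(x)

    def compose(g, f):
        # D(g o f) = {x in D(f) : f(x) in D(g)}, as in Definition 9.4
        return RealFunction(lambda x: f.defined(x) and g.defined(f.rule(x)),
                            lambda x: g.rule(f.rule(x)))

    def quotient(f, g):
        # D(f/g) = {x in D(f) and D(g) : g(x) != 0}
        return RealFunction(lambda x: f.defined(x) and g.defined(x) and g.rule(x) != 0,
                            lambda x: f.rule(x) / g.rule(x))

    f = RealFunction(lambda x: True, lambda x: x + 1)         # f(x) = x + 1 on R
    g = RealFunction(lambda x: x != 0, lambda x: 1.0 / x)     # g(x) = 1/x on R \ {0}
    gf = compose(g, f)
    print(gf.defined(-1), gf.defined(2), gf(2))               # False True 0.3333333333333333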
Like sequences, real functions can be represented in a variety of ways. Consider
the following examples.
Examples
9.6. 1. f (x) = x² + 3 (x ∈ R);
2. f (x) = 1, if x is rational; 0, if x is irrational (x ∈ R);
3. f (x) = limn→∞ (1 + x + · · · + x^n ) (x ∈ (−1, 1));
4. f (0.a1 a2 . . .) = 0.a2 a4 a6 . . ., where we exclude the forms 0.a1 . . . an 999 . . ..
We gave the function in (1) with a “formula”; in the other examples, the map
was defined in other ways. Just as for sequences, the way the function is defined is
irrelevant: using a formula to define a function is no better or worse than the other
ways (except perhaps shorter).
We can illustrate real functions in the plane using Cartesian coordinates.1 Let
f : A → B be a real function, that is, such that A ⊂ R and B ⊂ R. Consider points of
1 We summarize the basic definitions of coordinate geometry in the appendix of this chapter.
the form (x, 0) on the x-axis, where x ∈ A. At each of these points raise a perpendicular line to the x-axis, and measure f (x) on this perpendicular (“upward” from the
x-axis if f (x) ≥ 0, and “downward” if f (x) ≤ 0). Then we get the points (x, f (x)),
where x ∈ A. We call the set of these points the graph of f , which we denote by
graph f . To summarize:
graph f = {(x, f (x)) : x ∈ A}.    (9.1)
Examples 9.7. 1. f (x) = ax + b;
2. f (x) = x²;
3. f (x) = (x − a)(x − b);
4. f (x) = x³;
5. f (x) = 1/x;
6. f (x) = |x|;
7. f (x) = [x], where [x] denotes the greatest integer that is less than or equal to x (the floor of x);
8. f (x) = {x}, where {x} = x − [x];
9. f (x) = 1, if x is rational; 0, if x is irrational;
10. f (x) = x, if x is rational; 0, if x is irrational.
Let us see the graphs of functions (1)–(10) (see Figure 9.1)!
The function (9), which we now see for the second time, will continue to appear in the sequel to illustrate various phenomena. This function is named—after its
discoverer—the Dirichlet2 function.
We note that illustrating functions by a graph is similar to our use of the number line, and what we said earlier applies: the advantage of these illustrations in the plane is that some statements can be expressed graphically in a more understandable form, and often we can get an idea for a proof from the graphical interpretation.
However, we must again emphasize that something that we can “see” from inspecting a graph cannot be taken as a proof, and some things that we “see” turn out to
be false. As before, we can rely only on the axioms of the real numbers and the
theorems that we have proved from those.
Remark 9.8. Our more careful readers might notice that when we introduced functions, we were not as strict as we were when we defined sets. Back then, we noted
that writing sets as collections, classes, or systems does not solve the problem of
the definition, so we said that the definition of a set is a fundamental concept. We
reduced defining functions to correspondences between sets, but we did not define
what we meant by a correspondence. It is a good idea to choose the same solution as
in the case of sets, that is, we consider the notion of a function to be a fundamental
concept as well, where “correspondence” and “map” are just synonyms.
2 Lejeune Dirichlet (1805–1859), German mathematician.
Fig. 9.1
We should note, however, that the concept of a function can be reduced to the
concept of sets. We can do this by generalizing what we defined as graphs. Graphs,
as defined in (9.1), can easily be generalized to maps between arbitrary sets. Let A
and B be arbitrary sets. The set of ordered pairs (a, b) where a ∈ A and b ∈ B is
called the Cartesian product of the sets A and B, denoted by A × B. That is,
A × B = {(a, b) : a ∈ A, b ∈ B}.
If f : A → B is a map, then the graph of f can be defined by (9.1). The set graph f is
then a subset of the Cartesian product A × B. Then every function from A to B has a
corresponding set graph f ⊂ A × B. It is clear that different functions have different
graphs.
Clearly, the set graph f ⊂ A × B has the property that for every element a ∈ A,
there is exactly one element b ∈ B for which (a, b) ∈ graph f , namely b = f (a). Conversely, suppose that H ⊂ A × B, and for every a ∈ A, there is exactly one element
b ∈ B for which (a, b) ∈ H. Denote this b by f (a). This defines a map f : A → B,
and of course H = graph f . This observation is what makes it possible to reduce the
concept of graphs to the concept of sets. In axiomatic set theory, functions are defined to be subsets of the Cartesian product A × B that have the above property. We
do not follow this path (since the discussion of axiomatic set theory is not our goal);
instead, we continue to consider the notion of a function a fundamental concept.
Remark 9.9. We end this section with a note on mathematical notation. Throughout
the centuries, a convention has formed regarding notation of mathematical concepts
and objects. According to this, functions are most often denoted by f ; this comes
from the first letter of the Latin word functio (which is the root of our English word
function). If several functions appear in an argument, they are usually denoted by
f , g, h. For similar reasons, natural numbers are generally denoted by n, which is the
first letter of the word naturalis. Of course, to denote natural numbers, we often also
use the letters i, j, k, l, m preceding n. Constants and sequences are mostly denoted
by letters from the beginning of the alphabet, while variables (that is, the values to
which we apply functions) are generally denoted by the letters x, y, z at the end of
the alphabet.
Of course there is no theoretical requirement that a particular letter or symbol
be used for a certain object, and it can happen that we have to, or choose to, call
a function something other than f , g, h. However, following the conventions above
makes reading mathematical texts much easier, since we can discern the nature of
the objects appearing very quickly.
Exercises
9.1. Decide which of the following statements hold for all functions f : A → B and
all subsets H, K ⊂ A:
(a) f (H ∪ K) = f (H) ∪ f (K);
(b) f (H ∩ K) = f (H) ∩ f (K);
(c) f (H \ K) = f (H) \ f (K).
9.2. For an arbitrary function f : A → B and set Y ⊂ B, let f −1 (Y ) denote the set of
x ∈ A for which f (x) ∈ Y . (We do not suppose that f has an inverse. To justify our
notation, we note that if the inverse f −1 exists, then the two meanings of f −1 (Y )
denote the same set.)
108
9 Real-Valued Functions of One Real Variable
Prove that for arbitrary sets Y, Z ⊂ B,
(a) f −1 (Y ∪ Z) = f −1 (Y ) ∪ f −1 (Z);
(b) f −1 (Y ∩ Z) = f −1 (Y ) ∩ f −1 (Z);
(c) f −1 (Y \ Z) = f −1 (Y ) \ f −1 (Z).
9.3. Find the inverse functions for the following maps.
(a) (x + 1)/(x − 1), x ∈ R \ {1};
(b) 1/(2x + 3), x ∈ R \ {−3/2};
(c) x/(1 + |x|), x ∈ R.
9.4. Does there exist a function f : R → R for which ( f ◦ f )(x) = −x for all
x ∈ R? (H)
9.5. For every real number c, give a function fc : R → R such that fa+b = fa ◦ fb
holds for all a, b ∈ R. Can this be done even if f1 is an arbitrary fixed function? (∗ H)
9.6. (a) Give two real functions f1 , f2 for which there is no g such that both f1 and
f2 are of the form g ◦ . . . ◦ g. (H)
(b) Let f1 , . . . , fk be arbitrary real functions defined on R. Show that three functions
g1 , g2 , g3 can be given such that each f1 , . . . , fk can be written in the form gi1 ◦
gi2 ◦ . . . ◦ gis (where i1 , . . . , is ∈ {1, 2, 3}). (∗ S)
(c) Can this be solved with two functions gi instead of three? (∗ H S)
(d) Can this be solved for an infinite sequence of fi ? (∗ H S)
9.2 Global Properties of Real Functions
The graphs of functions found in Examples 9.7 admit symmetries and other properties that are present in numerous other graphs as well. The graphs of functions
(2), (6), and (9) are symmetric with respect to the y-axis, the graphs of (4) and (5)
are symmetric with respect to the origin, and the graph of the function (8) is made
up of periodically repeating segments. The graphs of the functions (2), (4), and (6)
also admit other properties. In these graphs, the part above [0, ∞) “moves upward,”
which corresponds to larger f (x) for increasing x. The graphs of (2) and (3), as well
as the part over (0, ∞) in the graphs of (4) and (5), are “concave up,” meaning that
the line segment between any two points on the graph is always above the graph.
We will define these properties precisely below, and give tools and methods to
determine whether a function satisfies some of the given properties.
Definition 9.10. The function f is said to be even if for all x ∈ D( f ), we have
−x ∈ D( f ) and f (x) = f (−x).
The function f is odd if for all x ∈ D( f ), we have −x ∈ D( f ) and f (x) = − f (−x).
9.2 Global Properties of Real Functions
109
Remark 9.11. It is clear that for an arbitrary even integer n, the function f (x) = xn is
even, while if n is odd, then xn is odd (which is where the nomenclature comes from
in the first place). It is also clear that the graph of an even function is symmetric
with respect to the y-axis. This follows from the fact that the point (x, y) reflected
across the y-axis gives (−x, y). Graphs of odd functions, however, are symmetric
with respect to the origin, since the point (x, y) reflected through the origin gives
(−x, −y).
Definition 9.12. The function f is periodic if there is a d ≠ 0 such that for all x ∈ D( f ), we have x + d ∈ D( f ), x − d ∈ D( f ), and f (x + d) = f (x). We call the number d a period of the function f .
Remark 9.13. It is easy to see that if d is a period of f , then k · d is also a period for
each integer k. Thus a periodic function always has infinitely many periods.
Not every periodic function has a smallest positive period. For example the periods of the Dirichlet function are exactly the rational numbers, and there is no smallest positive rational number.
Definition 9.14. The function f is bounded from above (or below) on the set A ⊂ R
if A ⊂ D( f ) and there exists a number K such that f (x) ≤ K ( f (x) ≥ K) for all x ∈ A.
The function f is bounded on the set A ⊂ R if it is bounded from above and from
below on A.
It is easy to see that f is bounded on A if and only if A ⊂ D( f ) and there is a
number K such that | f (x)| ≤ K for all x ∈ A. The boundedness of f on the set A
graphically means that for suitable K, the graph of f lies inside the region bordered
by the lines y = −K and y = K over A. The function f (x) = 1/x is bounded on
(δ , 1) for all δ > 0, but is not bounded on (0, 1). In the interval (0, 1), the function
is bounded from below but not from above.
The function
f (x) = x, if x is rational; 0, if x is irrational
is not bounded on (−∞, +∞). However, f is bounded on the set R \ Q.
Definition 9.15. The function f is monotone increasing (monotone decreasing) on
the set A ⊂ R if A ⊂ D( f ) and for all x1 ∈ A and x2 ∈ A such that x1 < x2 ,
f (x1 ) ≤ f (x2 )    ( f (x1 ) ≥ f (x2 )).    (9.2)
If in (9.2) we replace ≤ or ≥ by < and > respectively, then we say that f is strictly
monotone increasing (decreasing). We call monotone increasing or decreasing functions monotone functions for short.
Let us note that if f is constant on the set A, then f is both monotone increasing
and monotone decreasing on A.
The Dirichlet function is neither monotone increasing nor monotone decreasing
on any interval. However, on the set of rational numbers, the Dirichlet function is
both monotone increasing and decreasing (since it is constant there).
110
9 Real-Valued Functions of One Real Variable
Convex and Concave Functions. Consider the points (a, f (a)) and (b, f (b)) on the
graph graph f . We call the line segment connecting these points a chord of graph f .
Let the linear function defining the line of the chord be ha,b , that is, let
ha,b (x) = (( f (b) − f (a))/(b − a)) · (x − a) + f (a).
Definition 9.16. The function f is convex on the interval I if for every a, b ∈ I and
a < x < b,
f (x) ≤ ha,b (x).
(9.3)
If in (9.3) we write < in place of ≤, then we say that f is strictly convex on I; if we
write ≥ or >, then we say that f is concave, or respectively strictly concave on I.
The property of f being convex on I can be phrased in a more visual way by saying that for all a, b ∈ I, the part of the graph corresponding to the interval (a, b) is "below" the chord connecting (a, f (a)) and (b, f (b)). In the figure, I = (α , β ) (Figure 9.2).
Fig. 9.2
It is clear that f is convex on I if and only if − f is concave on I. We note that sometimes "concave down" is said instead of concave, and "concave up" instead of convex.
If the function f is linear on the interval I, that is, f (x) = cx + d for some constants c and d, then in (9.3), equality holds for all x. Thus a linear function is both concave and convex.
For applications, it is useful to give the inequality defining convexity in an alternative way.
Let a < b and 0 < t < 1. Then the number x = ta + (1 − t)b is an element of (a, b), and moreover, x is the point that splits the interval [a, b] in a (1 − t) : t ratio. Indeed,
a = ta + (1 − t)a < ta + (1 − t)b = x < tb + (1 − t)b = b,
Fig. 9.3
so x ∈ (a, b). On the other hand, a simple computation shows that (x − a)/(b − x) = (1 − t)/t.
By reversing the computation, we can see that every element of (a, b) occurs in the form ta + (1 − t)b, where 0 < t < 1. If x ∈ (a, b), then the choice t = (b − x)/(b − a) works (Figure 9.3).
Now if a < x < b and x = ta + (1 − t)b, then
ha,b (x) = (( f (b) − f (a))/(b − a)) · (ta + (1 − t)b − a) + f (a) = t f (a) + (1 − t) f (b).
If we substitute this into (9.3), then we get the following equivalent condition for convexity.
Lemma 9.17. The function f is convex on the interval I if and only if for all numbers
a, b ∈ I and 0 < t < 1,
f (ta + (1 − t)b) ≤ t f (a) + (1 − t) f (b).
(9.4)
Remark 9.18. By the above argument, we also see that the strict convexity of the function f is equivalent to a strict inequality in (9.4) whenever a ≠ b.
Theorem 9.19 (Jensen’s3 Inequality). The function f is convex on the interval I if
and only if whenever a1 , . . . , an ∈ I, t1 , . . . ,tn > 0, and t1 + · · · + tn = 1, then
f (t1 a1 + · · · + tn an ) ≤ t1 f (a1 ) + · · · + tn f (an ).
(9.5)
If f is strictly convex, then strict inequality holds, assuming that the ai are not all
equal.
Proof. By Lemma 9.17, the function f is convex on an interval I if and only if (9.5)
holds for n = 2. Thus we need to show only that if the inequality holds for n = 2,
then it also holds for n > 2. We prove this by induction.
Let k > 2, and suppose that (9.5) holds for all 2 ≤ n < k and a1 , . . . , an ∈ I, as well
as for t1 , . . . ,tn > 0, t1 +· · ·+tn = 1. Let a1 , . . . ak ∈ I and t1 , . . .tk > 0, t1 +· · ·+tk = 1.
We show that
f (t1 a1 + · · · + tk ak ) ≤ t1 f (a1 ) + · · · + tk f (ak ).    (9.6)
Let t = t1 + · · · + tk−1 , and let
α = (t1 /t)a1 + · · · + (tk−1 /t)ak−1 ,   β = (t1 /t) f (a1 ) + · · · + (tk−1 /t) f (ak−1 ).
By the induction hypothesis, f (α ) ≤ β . If we now use tk = 1 − t, then we get
f (t1 a1 + · · · + tk ak ) = f (t · α + (1 − t)ak ) ≤
≤ t · f (α ) + (1 − t) · f (ak ) ≤
≤ t · β + (1 − t) · f (ak ) =
= t1 f (a1 ) + · · · + tk f (ak ),
(9.7)
which proves the statement. The statement for strict convexity follows in the same
way.
⊓⊔
3 Johan Ludwig William Valdemar Jensen (1859–1925), Danish mathematician.
The following theorem links convexity and monotonicity.
Theorem 9.20. The function f is convex on an interval I if and only if for all a ∈ I,
the function x → ( f (x) − f (a))/(x − a) (x ∈ I \ {a}) is monotone increasing on the
set I \ {a}.
Fig. 9.4
Proof. Suppose that f is convex on I, and let a, x, y ∈ I, where a < x < y (Figure 9.4). By (9.3),
f (x) ≤ (( f (y) − f (a))/(y − a)) · (x − a) + f (a).
A simple rearrangement of this yields the inequality
( f (x) − f (a))/(x − a) ≤ ( f (y) − f (a))/(y − a).    (9.8)
If x < a < y, then by (9.3),
f (a) ≤ (( f (y) − f (x))/(y − x)) · (a − x) + f (x),
which we can again rearrange to give us (9.8).
If x < y < a, (9.8) follows in the same way. Thus we have shown that the function ( f (x) − f (a))/(x − a) is monotone increasing.
Now let us suppose that the function ( f (x) − f (a))/(x − a) is monotone increasing for all a ∈ I. Let a, b ∈ I, where a < x < b. Then
( f (x) − f (a))/(x − a) ≤ ( f (b) − f (a))/(b − a),
which can be rearranged into
f (x) ≤ (( f (b) − f (a))/(b − a)) · (x − a) + f (a).
Thus f satisfies the conditions of convexity. ⊓⊔
Let us see some applications.
Example 9.21. First of all, we show that the function x2 is convex on R. By Theorem
9.20, we have to show that the function (x2 − a2 )/(x − a) = x + a is monotone increasing for all a, which is clear. Thus we can apply Jensen’s inequality to f (x) = x2 .
Choosing t1 = . . . = tn = 1/n, we get that
((a1 + · · · + an )/n)² ≤ (a1² + · · · + an²)/n,
or, taking square roots,
(a1 + · · · + an )/n ≤ √((a1² + · · · + an²)/n)    (9.9)
for all a1 , . . . , an ∈ R. This is known as the root-mean-square–arithmetic mean
inequality (which we have encountered in Exercise 2.15).
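A numerical spot check of Jensen's inequality for f (x) = x² and of the special case (9.9) takes only a few lines; the sketch is ours, not the book's, and it is of course no substitute for the proof above.

    import random

    def jensen_check(trials=1000, n=5):
        f = lambda x: x * x                        # a convex function on R
        for _ in range(trials):
            a = [random.uniform(-10, 10) for _ in range(n)]
            w = [random.random() for _ in range(n)]
            s = sum(w)
            t = [wi / s for wi in w]               # positive weights with t_1 + ... + t_n = 1
            lhs = f(sum(ti * ai for ti, ai in zip(t, a)))
            rhs = sum(ti * f(ai) for ti, ai in zip(t, a))
            assert lhs <= rhs + 1e-9               # inequality (9.5)
            assert sum(a) / n <= (sum(ai * ai for ai in a) / n) ** 0.5 + 1e-9   # (9.9)

    jensen_check()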
Example 9.22. Now we show that the function 1/x is convex on the half-line (0, ∞).
We want to show that for every a > 0, the function
((1/x) − (1/a))/(x − a) = −1/(a · x)
is monotone increasing on the set (0, ∞) \ {a}, which is again clear.
If we now apply Jensen’s inequality to f (x) = 1/x with t1 = . . . = tn = 1/n, we get
the inequality between the arithmetic and harmonic means (Theorems 2.6 and 2.7).
Exercises
9.7. Let f and g be defined on (−∞, +∞) and suppose
(a) f is even, g is odd;
(b) f is even, g is even;
(c) f is odd, g is odd.
What can we say in each of the three cases about f + g, f − g, f · g, and f ◦ g, in
terms of being odd or even?
9.8. Repeat the previous question, replacing even and odd with monotone increasing
and monotone decreasing.
9.9. Prove that every function f : R → R can be written as the sum of an odd and an
even function.
9.10. Suppose that the function f : R → R never takes on the value of −1. Prove
that if
f (x + 1) = ( f (x) − 1)/( f (x) + 1)
for all x, then f is periodic.
9.11. Suppose that f : R → R is periodic with the rational numbers as its periods. Is
it true that there exists a function g : R → R such that g ◦ f is the Dirichlet function?
9.12. Prove that the function
f (x) = 0, if x is irrational; q, if x = p/q, where p ∈ Z, q ∈ N+ , (p, q) = 1
is not bounded on any interval I.
9.13. Let f be defined in the following way. Let x ∈ (0, 1], and let its infinite decimal
expansion be
x = 0.a1 a2 a3 . . . a2n−1 a2n . . . ,
where, for the sake of unambiguity, we exclude decimal expansions of the form
0.a1 . . . an 0 . . . 0 . . .. We define a function whose value depends on whether the
number
0.a1 a3 . . . a2n+1 . . .
is rational. Let
f (x) = 0, if 0.a1 a3 a5 . . . a2n+1 . . . is irrational;
f (x) = 0.a2n a2n+2 . . . a2n+2k . . . , if 0.a1 a3 . . . a2n+1 . . . is rational and its first period starts at a2n−1 .
Prove that f takes on every value in (0, 1) on each subinterval of (0, 1). (It then
follows that it takes on each of those values infinitely often on every subinterval of
(0, 1).)
9.14. Prove that x^k is strictly convex on [0, ∞) for all integers k > 1.
9.15. Prove that if a1 , . . . , an ≥ 0 and k > 1 is an integer, then
(a1 + · · · + an )/n ≤ ((a1^k + · · · + an^k )/n)^{1/k} .
9.16. Prove that
(a) √x is strictly concave on [0, ∞);
(b) x^{1/k} is strictly concave on [0, ∞) for all integers k > 1.
9.17. Prove that if g is convex on [a, b], the range of g over [a, b] is [c, d], and f is convex and monotone increasing on [c, d], then f ◦ g is convex on [a, b].
9.18. Prove that if f is strictly convex on the interval I, then every line intersects the
graph of f in at most two points. (S)
9.3 Appendix: Basics of Coordinate Geometry
Let us consider two perpendicular lines in the plane, the first of which we call the
x-axis, the second the y-axis. We call the intersection of the two axes the origin. We
imagine each axis to be a number line, that is, for every point on each axis, there
is a corresponding real number that gives the distance of the point from the origin;
positive in one direction and negative in the other.
If we have a point P in the plane, we get its projection onto the x-axis by taking a
line that contains P and is parallel to the y-axis, and taking the point on this line that
crosses the x-axis. The value of this point as seen on the number line is called the first
coordinate of P. We get the projection onto the y-axis similarly, as well as the second
coordinate of P. If the first and second coordinates of P are a and b respectively, then
we denote this by P = (a, b). In this sense, we assign an ordered pair of real numbers
to every point in the plane. It follows from the geometric properties of the plane that
this mapping is bijective. Thus from now on, we identify points in the plane with
the ordered pair of their coordinates, and the plane itself with the set R × R = R2 .
Instead of saying “the point in the plane whose coordinates are a and b respectively,”
we say “the point (a, b).”
We can also call points in the plane vectors. The length of the vector u = (a, b) is the number |u| = √(a² + b²). Among vectors, the operations of addition, subtraction,
and multiplication by a real number (scalar) are defined: the sum of the vectors
(a1 , a2 ) and (b1 , b2 ) is the vector (a1 + b1 , a2 + b2 ), their difference is (a1 − b1 ,
a2 − b2 ), and the multiple of the vector (a1 , a2 ) by a real number t is (ta1 ,ta2 ).
Addition by a given vector c ∈ R2 appears as a translation of the coordinate plane:
translating a vector u by the vector c takes us to the vector u + c. If A ⊂ R2 is a set
of vectors, then the set {u + c : u ∈ A} is the set A translated by the vector c.
If c = (c1 , c2 ) is a fixed vector that is not equal to 0, then the vectors t · c =
(tc1 ,tc2 ) (where t is an arbitrary real number) cover the points on the line connecting the origin and c. If we translate this line by a vector a, then we get the set
{a + tc : t ∈ R}; this is a line going through a.
Let a and b be distinct points. By our previous observations, we know that the
set E = {a + t(b − a) : t ∈ R} is a line that crosses the point a as well as b, since
a + 1 · (b − a) = b. That is, this is exactly the line that passes through a and b. Let a = (a1 , a2 ) and b = (b1 , b2 ), where a1 ≠ b1 . A point (x, y) is an element of E if and only if
x = a1 + t(b1 − a1 ) and y = a2 + t(b2 − a2 )
(9.10)
for a suitable real number t. If we isolate t in the first expression and substitute it
into the second, we get that
y = a2 + ((b2 − a2 )/(b1 − a1 )) · (x − a1 ).    (9.11)
Conversely, if (9.11) holds, then (9.10) also holds with t = (x − a1 )/(b1 − a1 ). This
means that (x, y) ∈ E if and only if (9.11) holds. In short we express this by saying
that (9.11) is the equation of the line E.
If a2 = b2 , then (9.11) takes the form y = a2 . This coincides with the simple
observation that the point (x, y) is on the horizontal line crossing a = (a1 , a2 ) if and
only if y = a2 . If a1 = b1 , then the equation of the line crossing points a and b is
clearly x = a1 .
For a, b ∈ R2 given, every point of the segment [a, b] can be obtained by measuring a vector of length at most |b − a| from the point a in the direction of b − a. In
other words, [a, b] = {a + t(b − a) : t ∈ [0, 1]}.
That is, (x, y) ∈ [a, b] if and only if there exists a number t ∈ [0, 1] for which
(9.10) holds. In the case that a1 < b1 , the exact condition is that a1 ≤ x ≤ b1 and that (9.11) holds.
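The computation leading from (9.10) to (9.11) is easy to turn into code. The short sketch below is ours (the name line_through does not occur in the text); it returns the function x ↦ y given by (9.11) for two points with different first coordinates.

    def line_through(a, b):
        (a1, a2), (b1, b2) = a, b
        if a1 == b1:
            raise ValueError("a1 = b1: the line is vertical, with equation x = a1")
        # equation (9.11): y = a2 + ((b2 - a2)/(b1 - a1)) * (x - a1)
        return lambda x: a2 + (b2 - a2) / (b1 - a1) * (x - a1)

    y = line_through((1.0, 2.0), (3.0, 6.0))
    print(y(1.0), y(3.0), y(2.0))    # 2.0 6.0 4.0 -- the line passes through both points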
Chapter 10
Continuity and Limits of Functions
If we want to compute the value of a specific function at some point a, it may happen
that we can compute only the values of the function near a. Consider, for example,
the distance a free-falling object covers. This is given by the equation s(t) = g ·t 2 /2,
where t is the time elapsed, and g is the gravitational constant. Knowing this equation, we can easily compute the value of s(t). If, however, we want to calculate s(t)
at a particular time t = a by measuring the time, then we will not be able to calculate
the precise distance corresponding to this given time; we will obtain only a better
or worse approximation—depending on the precision of our instruments. However,
if we are careful, we will hope that if we use the value of t that we get from the
measurement to recover s(t), the result will be close to the original s(a). In essence,
such difficulties always arise when we are trying to find some data with the help of
another measured quantity. At those times, we assume that if our measured quantity
differs from the real quantity by a very small amount, then the value computed from
it will also be very close to its actual value.
In such cases, we have a function f , and we assume that f (t) will be close to
f (a) as long as t deviates little from a. We call this property continuity. The precise
definition of this idea is the following.
Definition 10.1. Let f be defined on an open interval containing a. The function f
is continuous at a if for all ε > 0, there exists a δ > 0 (dependent on ε ) such that
| f (x) − f (a)| < ε , if |x − a| < δ .
(10.1)
Continuity of the function f at a graphically means that the graph G = graph f
has the following property: given an arbitrary (narrow) strip
{(x, y) : f (a) − ε < y < f (a) + ε },
there exists an interval (a − δ , a + δ ) such that the part of G over this interval lies inside the given strip (Figure 10.1).
Fig. 10.1
It is clear that if (10.1) holds with some δ > 0, then it holds for all δ ′ ∈ (0, δ ) as well. In other words, if for some ε > 0 a δ > 0 is "good," then every positive δ ′ < δ is also "good." On the other hand, if a δ > 0 is "good" for some ε , then it is good for all ε ′ > ε as well. If our goal is to deduce whether a function is continuous at a point, then generally there is no need to find the best (i.e., largest) δ for each ε > 0. It suffices to find just one such δ . (The situation is analogous to determining the convergence of sequences: we do not have to find the smallest threshold for a given ε ; it is enough to find one.)
Examples 10.2. 1. The constant function f (x) ≡ c is continuous at all a. For every
ε > 0, every positive δ is good.
2. The function f (x) = x is continuous at all a. For all ε > 0, the choice of δ = ε is
good.
3. The function f (x) = x2 is continuous at all a. If 0 < δ ≤ 1 and |x − a| < δ , then
|x2 − a2 | = |x − a| · |x + a| = |x − a| · |x − a + 2a| < |x − a| · (2|a| + 1).
Thus, if
δ = min(1, ε /(2|a| + 1)),
then |x² − a²| < δ · (2|a| + 1) ≤ ε whenever |x − a| < δ .
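The choice of δ made above can also be tested numerically. The fragment below is only an illustration (the names are ours): for a given a and ε it computes δ = min(1, ε /(2|a| + 1)) and samples points of (a − δ , a + δ ) to confirm that |x² − a²| < ε there.

    def delta_for_square(a, eps):
        return min(1.0, eps / (2 * abs(a) + 1))

    a, eps = 3.0, 0.01
    d = delta_for_square(a, eps)
    for k in range(1, 1000):
        x = (a - d) + 2 * d * k / 1000.0     # points strictly between a - d and a + d
        assert abs(x * x - a * a) < eps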
4. The function f (x) = 1/x is continuous at all a ≠ 0. To see this, for a given ε > 0 we will choose the largest δ , and in fact, we will determine all x where | f (x) − f (a)| < ε . Let, for simplicity's sake, 0 < a < 1 and 0 < ε < 1. Then
1/x = 1/a + ε , if x = a/(1 + ε a),
1/x = 1/a − ε , if x = a/(1 − ε a).
Then, because 1/x is strictly monotone on (0, ∞),
| f (x) − f (a)| < ε , if a/(1 + ε a) < x < a/(1 − ε a),
| f (x) − f (a)| ≥ ε , if x ∉ (a/(1 + ε a), a/(1 − ε a)).
It then follows that for a given a and ε ,
δ = min(a/(1 − ε a) − a, a − a/(1 + ε a)) = a − a/(1 + ε a) = ε a²/(1 + ε a)    (10.2)
is the largest δ for which
| f (x) − f (a)| < ε , if x ∈ (a − δ , a + δ )
holds (Figure 10.2).
Later a more general theorem (10.44) will show that the continuity of f (x) = x2
and f (x) = 1/x follows directly from the continuity of f (x) = x without extra work.
Fig. 10.2
Fig. 10.3
5. The function
f (x) = sgn x = 1, if x > 0; 0, if x = 0; −1, if x < 0
is continuous at all a ≠ 0, but at 0, it is not continuous. Since f ≡ 1 on the half-line (0, ∞), it follows that for all a > 0 and ε > 0, the choice δ = a is good. Similarly, if a < 0, then δ = |a| is a good δ for all ε . However, | f (x) − f (0)| = 1 if x ≠ 0, so for a = 0, if 0 < ε < 1, then we cannot choose a good delta (Figure 10.3).
6. The function
f (x) = x, if x is rational; −x, if x is irrational
is continuous at 0, but not continuous at any x ≠ 0. It is continuous at 0, since for all x, | f (x) − f (0)| = |x|, so
| f (x) − f (0)| < ε , if |x − 0| < ε ,
that is, for all ε > 0, the choice δ = ε is good.
Now we show that the function is discontinuous at every a ≠ 0. Let a be a rational number. Then at all irrational x (sharing the same sign), | f (x) − f (a)| > |a|. This, in turn, means that for 0 < ε < |a|, we cannot choose a good δ for that ε . The proof is similar if a is irrational.
(We know that for all δ > 0, there exist a rational number and an irrational number in the interval (a − δ , a + δ ). See Theorems 3.2 and 3.12.) This example
is worth noting, for it shows that a function can be continuous at one point but not
continuous anywhere else. This situation is perhaps less intuitive than a function
that is continuous everywhere except at one point.
7. Let f (x) = {x} be the fractional-part function (see Figure 9.1, graph (8)). We
see that f is continuous at a if a is not an integer, and it is discontinuous at a if
a is an integer. Indeed, f (a) = a − [a] and f (x) = x − [a] if [a] ≤ x < [a + 1], so
| f (x) − f (a)| = |x − a| if [a] ≤ x < [a + 1]. Clearly, if a is not an integer, then
δ = min(ε , a − [a], [a] + 1 − a)
is a good δ . If, however, a is an integer, then
| f (x) − f (a)| = |x − (a − 1)| > 1/2, if a − 1/2 < x < a,
which implies that, for example, if 0 < ε < 1/2, then there does not exist a good δ
for ε .
We can see that if a is an integer, then the continuity of the function f (x) = {x}
is prevented only by behavior on the left-hand side of a. If this is the case, we say
the function is continuous from the right. We make this more precise below.
Definition 10.3. Let f be defined on an interval [a, b). We say that the function f is
continuous from the right at a if for all ε > 0, there exists a δ > 0 such that
| f (x) − f (a)| < ε ,
if
0 ≤ x−a < δ.
(10.3)
The function f is continuous from the left at a if it is defined on an interval (c, a],
and if for all ε > 0, there exists a δ > 0 such that
| f (x) − f (a)| < ε ,
if
0 ≤ a−x < δ.
(10.4)
Exercises
10.1. Show that the function f is continuous at a if and only if it is continuous both
from the right and from the left there.
10.2. Show that the function [x] is continuous at a if a is not an integer, and continuous from the right at a if a is an integer.
10.3. For a given ε > 0, find a good δ (according to Definition 10.1) in the following
functions.
(a) f (x) = (x + 1)/(x − 1), a = 3;
(b) f (x) = x³ , a = 2;
(c) f (x) = √x, a = 2.
10.4. Continuity of the function f : R → R at a can be written using the following
expression:
(∀ε > 0)(∃δ > 0)(∀x)(|x − a| < δ ⇒ | f (x) − f (a)| < ε ).
Consider the following expressions:
(∀ε > 0)(∀δ > 0)(∀x)(|x − a| < δ ⇒ | f (x) − f (a)| < ε );
(∃ε > 0)(∀δ > 0)(∀x)(|x − a| < δ ⇒ | f (x) − f (a)| < ε );
(∃ε > 0)(∃δ > 0)(∀x)(|x − a| < δ ⇒ | f (x) − f (a)| < ε );
(∀δ > 0)(∃ε > 0)(∀x)(|x − a| < δ ⇒ | f (x) − f (a)| < ε );
(∃δ > 0)(∀ε > 0)(∀x)(|x − a| < δ ⇒ | f (x) − f (a)| < ε ).
What properties of f do these express?
10.5. Prove that if f is continuous at a point, then | f | is also continuous there. Conversely, does the continuity of f follow from the continuity of | f |?
10.6. Prove that if f and g are continuous at a point a, then max( f , g) and min( f , g)
are also continuous at a.
10.7. Prove that if the function f : R → R is monotone increasing and assumes every
rational number as a value, then it is continuous everywhere.
10.8. Prove that if f : R → R is continuous, periodic, and not constant, then it has a smallest positive period. (H)
10.1 Limits of Functions
Before explaining limits of functions, we discuss three problems that shed light on
the need for defining limits and actually suggest a good definition for them. The first
two problems raise questions that are of fundamental importance; one can almost
say that the theory of analysis was developed to answer these questions. The third
problem is a concrete exercise, but it also demonstrates the concept of a limit well.
1. The first problem is defining speed. For constant speed, the value of velocity is
v = s/t, where s is the length of the path traveled at a time t. Now consider movement
with a variable speed, and let s(t) denote the length of the path traveled at a time t.
The problem is defining and computing instantaneous velocity at a given time t0 .
Let ω (t) denote the average speed on the time interval [t0 , t], that is, let
ω (t) = (s(t) − s(t0 ))/(t − t0 ).
This is the velocity that in the case of constant speed a point would have moving a distance s(t) − s(t0 ) in time t − t0 . If, for example, s(t) = t³ and t0 = 2, then
ω (t) = (t³ − 8)/(t − 2) = t² + 2t + 4.
In this case, if t is close to 2, then the average velocity in the interval [2, t] is close to 12. It is clear that we should decide that 12 should be the instantaneous velocity at t0 = 2.
Generally, if we know that there is a value v to which ω (t) is "very" close for all t close enough to t0 , then we will call this v the instantaneous velocity at t0 .
2. The second problem concerns defining and finding a tangent line to the graph of a function. Consider the graph of f and a fixed point P = (a, f (a)) on it. Let ha (x) denote the chord crossing P and (x, f (x)) (Figure 10.4). We wish to define the tangent line to the curve, that is, the line, yet to be precisely defined, to which these lines tend. Since these lines cross P, their slopes uniquely determine them. The slope of the chord ha (x) is given by
Fig. 10.4
ma (x) = ( f (x) − f (a))/(x − a)    (x ≠ a).
For example, in the case f (x) = 1/x,
ma (x) = ((1/x) − (1/a))/(x − a) = −1/(xa).
It can be seen that if x tends to a, then ma (x) tends to −1/a², in a sense yet to be defined. Then it is only natural to say that the tangent at the point P is the line whose slope is −1/a² that intersects the point (a, 1/a). Thus the equation of the tangent is
y = −(1/a²)(x − a) + 1/a.
Generally, if the values
ma (x) = ( f (x) − f (a))/(x − a)
tend to some value m while x tends to a (again, in a yet to be defined way), then the line crossing P with slope m will be called the tangent of the graph of f at the point P.
3. The third problem is finding the focal length of a spherical mirror. Consider a spherically curved concave mirror with radius r. Light rays traveling parallel to the principal axis at distance x from it will reflect off the mirror and intersect the principal axis at a point Px . The problem is to find the limit of Px as x tends to 0 (Figure 10.5). Assuming knowledge of the law of reflection, we have
Fig. 10.5
O Px /(r/2) = r/√(r² − x²),   that is,   O Px = r²/(2√(r² − x²)).
We see that if x is close enough to 0, then O Px gets arbitrarily close to the value r/2. Thus the focal length of the spherically curved mirror is r/2.
In all three cases, the essence of the problem is the following: how do we define
what it means to say that “as x tends to a, the values f (x) tend to a number b” or
that “the limit of the function f at a is b”? The above three problems indicate that
we should define the limit of a function in a way that does not depend on the value
of f at x = a or the fact that the function f might not even be defined at x = a.
Continuity of f makes precise the statement that at places "close" to a, the values of f are "close" to f (a). Then by conveniently modifying the definition of continuity, we get the following definition for the limit of a function.
Definition 10.4. Let f be defined on an open interval containing a, excluding perhaps a itself. The limit of f at a exists and has the value b if for all ε > 0, there exists a δ > 0 such that
| f (x) − b| < ε , if 0 < |x − a| < δ .    (10.5)
Fig. 10.6
(See Figure 10.6.) Noting the definition of continuity, we see that the following
is clearly equivalent:
Definition 10.5. Let f be defined on an open interval containing a, excluding perhaps a itself. The limit of f at a exists and has the value b if the function
f ∗ (x) = f (x), if x ≠ a; b, if x = a
is continuous at a.
We denote that f has the limit b at a by
limx→a f (x) = b, or f (x) → b, if x → a;
f (x) ↛ b denotes that f does not tend to b.
If f is continuous at a, then the conditions of Definition 10.4 are satisfied with
b = f (a). Thus, the connection between continuity and limits can also be expressed
in the following way:
Let f be defined on an open interval containing a. The function f is continuous
at a if and only if limx→a f (x) exists and has the value f (a).
The statement limx→a f (x) = b holds the following interpretation in the graph of
f : given an arbitrarily “narrow” strip {(x, y) : b − ε < y < b + ε }, there exists a δ > 0
such that the part of the graph over the set (a − δ , a + δ ) \ {a} lies inside the strip.
The following theorem states that limits are unique.
Theorem 10.6. If limx→a f (x) = b and limx→a f (x) = b′ , then b = b′ .
Proof. Suppose that b′ ≠ b. Let 0 < ε < |b′ − b|/2. Then the inequalities | f (x) − b| < ε and | f (x) − b′ | < ε can never both hold at once (since if they did, then
|b − b′ | ≤ |b − f (x)| + | f (x) − b′ | < ε + ε = 2ε
would follow, which is impossible). On the other hand, by the definition of the limit, both inequalities do hold if x is close enough to a and x ≠ a. This contradiction proves the theorem. ⊓⊔
Examples 10.7. 1. The function f (x) = sgn² x is not continuous at 0, but its limit there exists and has the value 1. Clearly, the function
f ∗ (x) = sgn² x, if x ≠ 0; 1, if x = 0
has value 1 at all x, so f ∗ is continuous at 0.
2. We show that
limx→2 (x − 2)/(x² − 3x + 2) = 1.
Since x² − 3x + 2 = (x − 1)(x − 2), whenever x ≠ 2 we have
(x − 2)/(x² − 3x + 2) = 1/(x − 1).
From this, we have that
|(x − 2)/(x² − 3x + 2) − 1| = |1/(x − 1) − 1| = |(2 − x)/(x − 1)|.
Since whenever |x − 2| < 1/2, |x − 1| > 1/2, it follows that |(2 − x)/(x − 1)| < ε as long as 0 < |2 − x| < min(ε /2, 1/2).
3. Let
f (x) = 0, if x is irrational; 1/q, if x = p/q, where p, q are integers, q > 0, and (p, q) = 1.
This function has the following strange properties from the point of view of continuity and limits:
a) The function f has a limit at every value a, which is 0 (even though f is not
identically 0).
b) The function f is continuous at every irrational point.
c) The function f is not continuous at any rational point.
To prove statement a), we need to show that if a is an arbitrary value, then for all
ε > 0, there exists a δ > 0 such that
| f (x) − 0| < ε ,
if
0 < |x − a| < δ .
(10.6)
For simplicity, let us restrict ourselves to the interval (−1, 1). Let ε > 0 be given,
and choose an integer n > 1/ε . By the definition of f , | f (x)| < 1/n for all irrational
numbers x and for all rational numbers x = p/q for which (p, q) = 1 and q > n. That
is, | f (x) − 0| ≥ 1/n only on the following points of (−1, 1):
0, ±1, ±1/2, ±1/3, ±2/3, . . . , ±1/n, ±2/n, . . . , ±(n − 1)/n.    (10.7)
Now if a is an arbitrary point of the interval (−1, 1), then out of the finitely many
numbers in (10.7), there is one that is different from a, and among those, the closest
to a. Let this one be p1 /q1 , and let δ = |p1 /q1 − a|. Then inside the interval (a −
δ , a + δ ), there is no number in (10.7) other than a, so | f (x)| < 1/n < ε if 0 <
|x − a| < δ = |p1 /q1 − a|, that is, for ε , δ = |p1 /q1 − a| is a good choice. Since
ε > 0 was arbitrary, we have shown that limx→a f (x) = 0.
b) Since for an irrational number a we have f (a) = 0, limx→a f (x) = f (a) follows,
so the function is continuous at every irrational point.
c) Since for a rational number a we have f (a) ≠ 0, limx→a f (x) ≠ f (a) follows, so the function is not continuous at the rational points.
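The argument in a) is constructive, and for rational points it can be traced with a few lines of code. The sketch below is ours, not the book's: riemann evaluates f at a rational point given as a Fraction, and exceptional_points lists the finitely many points (as in (10.7), together with 0 and ±1) at which f (x) ≥ ε .

    from fractions import Fraction

    def riemann(x):
        x = Fraction(x)                      # automatically written in lowest terms
        return Fraction(1, x.denominator)    # f(p/q) = 1/q; irrational x would give 0

    def exceptional_points(eps):
        # the points with f(x) >= eps are the fractions p/q with q <= 1/eps
        n = int(1 / eps)
        pts = {Fraction(0), Fraction(1), Fraction(-1)}
        for q in range(2, n + 1):
            for p in range(-q + 1, q):
                pts.add(Fraction(p, q))
        return sorted(r for r in pts if riemann(r) >= eps)

    print(exceptional_points(0.34))
    # [Fraction(-1, 1), Fraction(-1, 2), Fraction(0, 1), Fraction(1, 2), Fraction(1, 1)]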
Remark 10.8. The function defined in Example 3 is named—after its discoverer—
the Riemann function.1 This function is then continuous at all irrational points, and
discontinuous at every rational point. It can be proven, however, that there does not
exist a function which is continuous at each rational point but discontinuous at all
irrational points (see Exercise 10.17).
Similar to the idea of one-sided continuity, there is also the notion of one-sided
limits.
1 Georg Friedrich Bernhard Riemann (1826–1866), German mathematician.
Definition 10.9. Let f be defined on an open interval (a, c). The right-hand limit
of f at a is b if for every ε > 0, there exists a δ > 0 such that | f (x) − b| < ε if
0 < x−a < δ.
Notation: limx→a+0 f (x) = b, limx→a+ f (x) = b or f (x) → b, if x → a + 0, or even
shorter, f (a + 0) = b.
Left-hand limits can be defined and denoted in a similar way.
Remark 10.10. In the above notation, a + 0 and a − 0 are not numbers, simply symbols that allow a more abbreviated notation for expressing the property given in the
definition.
The following theorem is clear from the definitions.
Theorem 10.11. limx→a f (x) = b if and only if both f (a + 0) and f (a − 0) exist, and
f (a + 0) = f (a − 0) = b.
Similarly to when we talked about limits of sequences, we will need to define the
concept of a function tending to positive and negative infinity.
Definition 10.12. Let f be defined on an open interval containing a, except possibly
a itself. The limit of the function f at a is ∞ if for all numbers P, there exists a δ > 0
such that f (x) > P whenever 0 < |x − a| < δ .
Notation: limx→a f (x) = ∞, or f (x) → ∞ as
x → a.
The statement limx→a f (x) = ∞ expresses the
following property of the graph of f : for arbitrary
P, there exists a δ > 0 such that the graph over the
set (a − δ , a + δ ) \ {a} lies above the horizontal
line y = P (Figure 10.7).
We define the limit of f as −∞ at a point a
similarly. We will also need the one-sided equivalents for functions tending to ∞.
Fig. 10.7
Definition 10.13. Let f be defined on an open interval (a, c). The right-hand limit
of f at a is ∞ if for every number P, there exists a δ > 0 such that f (x) > P if
0 < x−a < δ.
Notation: limx→a+0 f (x) = ∞; f (x) → ∞, if x → a + 0; or f (a + 0) = ∞.
We define limx→a+0 f (x) = −∞ and limx→a−0 f (x) = ±∞ similarly.
But we are not yet done with the different variations of definitions of limits.
Definition 10.14. Let f be defined on the half-line (a, ∞). We say that the limit
of f at ∞ is b if for all ε > 0, there exists a K such that | f (x) − b| < ε if x > K
(Figure 10.8).
Notation: limx→∞ f (x) = b, or f (x)→b, if x → ∞.
We can similarly define the limit of f at −∞
being b.
Finally, we have one more type, in which
both the “place” and the “value” are infinite.
Definition 10.15. Let f be defined on the half-line (a, ∞). We say that the limit of f at ∞ is ∞ if for all P, there exists a K such that f (x) > P if x > K.
Fig. 10.8
We get three more variations of the last definition if we define the limit at ∞ as −∞, and the limit at −∞ as ∞ or −∞.
To summarize, we have defined the following variants of limits:
limx→a f (x), limx→a±0 f (x), and limx→±∞ f (x), where in each case the limit may be a finite number b, or ∞, or −∞.
These are 15 variations of a concept that we can feel is based on a unifying idea.
To make this unifying concept clear, we introduce the notion of a neighborhood.
Definition 10.16. We define a neighborhood of a real number a as an interval of
the form (a − δ , a + δ ), where δ is an arbitrary positive number. Sets of the form
[a, a + δ ) and (a − δ , a] are called the right-hand, and respectively the left-hand
neighborhoods of the real number a.
A punctured neighborhood of the real number a is a set of the form (a − δ , a +
δ ) \ {a}, where δ is an arbitrary positive number. Sets of the form (a, a + δ ) and
(a − δ , a) are called the right-hand and respectively the left-hand punctured neighborhoods of the real number a.
Finally, a neighborhood of ∞ is a half-line of the form (K, ∞), where K is an arbitrary real number, while a neighborhood of −∞ is a half-line of the form (−∞, K),
where K is an arbitrary real number.
In the above definition, the prefix “punctured” indicates that we leave out the
point itself from its neighborhood, or that we “puncture” the set at that point. Now
with the help of the concept of neighborhoods, we can give a unified definition for
the 15 different types of limits, which also demonstrates the meaning of a limit
better.
Definition 10.17. Let α denote the real number a or one of the symbols a + 0, a − 0,
∞, or −∞. In each case, by the punctured neighborhood of α we mean the punctured neighborhood of a, the right-hand punctured neighborhood of a, the left-hand
punctured neighborhood of a, the neighborhood of ∞, or the neighborhood of −∞
respectively. Let β denote the real number b, the symbol ∞, or −∞.
Let f be defined on a punctured neighborhood of α . We say that limx→α f (x) = β
if for each neighborhood V of β , there exists a punctured neighborhood U̇ of α such
that f (x) ∈ V if x ∈ U̇.
128
10 Continuity and Limits of Functions
We let the reader verify that each listed case of the definition of the limit can be
obtained (as a special case) from this definition.
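As a rough numerical illustration of Definition 10.17 (a sketch only; the helper names, the sampling scheme, and the tolerance values below are ad hoc choices, and only finite β is handled), one can test a candidate limit by sampling a punctured neighborhood of α and checking that the sampled values fall in a prescribed neighborhood of β:

    import math

    def punctured_sample(alpha, size, n=1000):
        # Sample points of a punctured neighborhood of a real alpha (size = delta),
        # or of a neighborhood (K, infinity) of +infinity (size = K); no puncture needed there.
        if alpha == "inf":
            return [size + k for k in range(1, n + 1)]
        step = size / n
        return [alpha + k * step for k in range(1, n)] + [alpha - k * step for k in range(1, n)]

    def looks_like_limit(f, alpha, beta, eps=1e-3, delta=1e-4):
        # Crude check of "f(x) lies in (beta - eps, beta + eps) for all sampled x".
        return all(abs(f(x) - beta) < eps for x in punctured_sample(alpha, delta))

    # f(x) = x*sin(1/x) is not defined at 0, yet its limit at 0 is 0.
    print(looks_like_limit(lambda x: x * math.sin(1 / x), 0.0, 0.0))   # True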
Examples 10.18. 1. limx→0+0 1/x = ∞, since 1/x > P, if 0 < x < 1/P.
limx→0−0 1/x = −∞, since 1/x < P, if 1/P < x < 0.
2. limx→0 1/x² = ∞, since 1/x² > P, if 0 < |x| < 1/√P.
3. limx→∞ (1 − 2x)/(1 + x) = −2, as for arbitrary ε > 0,
|(1 − 2x)/(1 + x) − (−2)| = 3/|1 + x| < ε , if x > 3/ε − 1.
4. limx→−∞ 10x/(2x² + 3) = 0, since for arbitrary ε > 0,
|10x/(2x² + 3)| < 5/|x| < ε , if |x| > 5/ε ,
and so the same holds if x < −5/ε .
5. limx→−∞ x² = ∞, since x² > P if x < −√|P|.
6. We show that limx→∞ ax = ∞ for all a > 1. Let P be given. If n = [P/(a − 1)] + 1
and x > n, then using the monotonicity of ax (Theorem 3.27) and Bernoulli’s inequality, we get that
a^x > a^n = (1 + (a − 1))^n ≥ 1 + n · (a − 1) > (P/(a − 1)) · (a − 1) = P.
7. Let f (x) = x · [1/x]. Then limx→0 f (x) = 1 (Figure 10.9).
Fig. 10.9
Since
f (x) = 0, if x > 1;     f (x) = nx, if 1/(n + 1) < x ≤ 1/n;
f (x) = −(n + 1)x, if −1/n < x ≤ −1/(n + 1);     f (x) = −x, if x ≤ −1,
we can see that if 1/(n + 1) < x ≤ 1/n holds, then n/(n + 1) < f (x) ≤ 1, that is,
| f (x) − 1| < 1/(n + 1). Similarly, if −1/n < x ≤ −1/(n + 1), then 1 ≤ f (x) =
−(n + 1)x < (n + 1)/n, that is, | f (x) − 1| < 1/n. It then follows that
| f (x) − 1| < 1/n, if 0 < |x| < 1/n.
That is, for every ε > 0, the choice δ = 1/n is good if n > 1/ε .
8. Let f (x) = [x]/x (Figure 10.10), that is
f (x) = 0, if 0 < x < 1;     f (x) = n/x, if n ≤ x < n + 1;     f (x) = −(n + 1)/x, if −(n + 1) ≤ x < −n.
Fig. 10.10
Clearly limx→0+0 f (x) = 0; moreover, limx→0−0 f (x) = ∞, since f (x) = −1/x, if
−1 < x < 0.
Finally, we show that
limx→∞ f (x) = limx→−∞ f (x) = 1.
Clearly, n/(n + 1) = 1 − 1/(n + 1) < f (x) = n/x ≤ 1 if n ≤ x < n + 1, that is,
| f (x) − 1| < 1/(n + 1), if x ≥ n, so
limx→∞ f (x) = 1.
Similarly,
1 = −(n + 1)/(−(n + 1)) ≤ f (x) = −(n + 1)/x < −(n + 1)/(−n) = 1 + 1/n
if −(n + 1) ≤ x < −n; therefore, | f (x) − 1| < 1/n if x < −n. Thus we see that
limx→−∞ f (x) = 1.
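A quick numerical spot check of Examples 7 and 8 (an informal illustration only; the sample points are arbitrary and [x] is taken as the floor function):

    import math

    f7 = lambda x: x * math.floor(1 / x)     # Example 7: x*[1/x]
    f8 = lambda x: math.floor(x) / x         # Example 8: [x]/x

    print([f7(x) for x in (0.07, 0.003, -0.003)])      # values approach 1 as x -> 0
    print([f8(x) for x in (10.5, 1000.5, -1000.5)])    # values approach 1 as x -> +/-infinity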
Exercises
10.9. For each of the following functions, the limit β at α exists. Find β , and for each neighborhood V of β , give a punctured neighborhood U̇ of α such that x ∈ U̇ implies f (x) ∈ V .
(a) f (x) = [x], α = 2 + 0;        (b) f (x) = {x}, α = 2 + 0;
(c) f (x) = x/(2x − 1), α = ∞;     (d) f (x) = x/(x² − 1), α = ∞;
(e) f (x) = (x² + 5x + 6)/(x² + 6x + 5), α = ∞;     (f) f (x) = x/(x² − 1), α = 1 − 0;
(g) f (x) = √(x + 1) − √x, α = ∞;  (h) (√x + ∛x)/(x − √x), α = ∞;
(i) x/(2x − 1), α = 1/2 + 0;       (j) 2^(−[1/x]), α = ∞;
(k) ∛(x³ + 1) − x, α = ∞.
10.10. Can we define the function (√x − 1)/(x − 1) at x = 1 such that it will be continuous there?
10.11. Let n be a positive integer. Can we define the function (ⁿ√(1 + x) − 1)/x at x = 0 such that it will be continuous there?
10.12. Prove that the value of the Riemann function at x is
1 − limn→∞ max({x}, {2x}, . . . , {nx}).   (H)
10.13. Suppose that the function f : R → R has a finite limit b(x) at every point
x ∈ R. Show that the function b is continuous everywhere.
10.14. Prove that if f : R → R is periodic and limx→∞ f (x) = 0, then f is identically 0.
10.15. Prove that there is no function f : R → R whose limit at every point x is
infinite. (H)
10.16. Prove that if at each point x the limit of the function f : R → R equals zero,
then there exists a point x at which f (x) = 0. (H)
10.17. Prove that if the function f : R → R is continuous at every rational point,
then there is an irrational point at which it is also continuous. (∗ S)
10.2 The Transference Principle
The concept of a limit of a function is closely related to the concept of a limit of a
sequence. This is expressed by the following theorem. (In the theorem, the meanings
of α and β are the same as in Definition 10.17.)
Theorem 10.19. Let f be defined on a punctured neighborhood U̇ of α . We have
limx→α f (x) = β if and only if whenever a sequence (xn ) satisfies
{xn } ⊂ U̇ and xn → α ,
(10.8)
then limn→∞ f (xn ) = β .
Proof. Suppose that α = a and β = b are finite real numbers. We first prove that if
limx→a f (x) = b, then f (xn ) → b whenever {xn } ⊂ U̇ and xn → a.
Let ε > 0 be given. We know that there exists a δ > 0 such that | f (x) − b| < ε if
0 < |x − a| < δ . If xn → a, then there exists an n0 for δ > 0 such that |xn − a| < δ
for all n > n0 . Since by (10.8), xn is in a punctured neighborhood of a, xn = a for
all n. Thus if n > n0 , then 0 < |xn − a| < δ , and so | f (xn ) − b| < ε . This shows that
f (xn ) → b.
Now we show that if for all sequences satisfying (10.8) we have f (xn ) → b, then
limx→a f (x) = b. We prove this by contradiction.
Suppose that limx→a f (x) = b does not hold. This means that there exists an ε > 0
for which there does not exist a good δ > 0, that is, in all (a − δ , a + δ ) ∩ U̇, there
exists an x for which | f (x) − b| ≥ ε . This is true for all δ = 1/n specifically, so for
all n ∈ N+ , there is an xn ∈ U̇ for which 0 < |xn − a| < 1/n and | f (xn ) − b| ≥ ε .
The sequence (xn ) we get in this way has the properties that xn → a and xn ∈ U̇, but
f (xn ) does not tend to b. This contradicts our assumptions.
Let us now consider the case that α = a + 0 and also β = ∞. Suppose that
limx→a+0 f (x) = ∞, and let (xn ) be a sequence approaching a from the right.2 We
want to show that f (xn ) → ∞. Let K be fixed. Then there exists a δ > 0 such that
f (x) > K for all a < x < a + δ . Since xn → a and xn > a, there exists an n0 such that
a < xn < a + δ holds for all n > n0 . Then f (xn ) > K if n > n0 , which shows that
f (xn ) → ∞.
Now suppose that f (xn ) → ∞ for all sequences for which xn → a and xn > a.
We want to show that limx→a+0 f (x) = ∞. We prove this by contradiction. If the
statement is false, then there exists a K for which there is no good δ , that is, for all
δ > 0, there exists an x ∈ (a, a + δ ) such that f (x) ≤ K. This is also true for each
δ = 1/n, so for each n ∈ N+ , there is an a < xn < a + 1/n such that f (xn ) ≤ K.
The sequence (xn ) we then get tends to a from the right, and f (xn ) does not tend to ∞, which
contradicts our initial assumption.
The statements for the rest of the cases can be shown similarly.
⊔
⊓
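As an informal illustration of how the theorem is used (the function and the two test sequences below are ad hoc choices): for f (x) = sin(1/x), two sequences tending to 0 give image sequences with different limits, so limx→0 f (x) cannot exist.

    import math

    f = lambda x: math.sin(1 / x)
    a = [1 / (2 * math.pi * n) for n in range(1, 6)]                # f(a_n) = sin(2*pi*n) ~ 0
    b = [1 / (2 * math.pi * n + math.pi / 2) for n in range(1, 6)]  # f(b_n) = sin(2*pi*n + pi/2) = 1

    print([round(f(x), 6) for x in a])   # ~ 0.0 for every n
    print([round(f(x), 6) for x in b])   # 1.0 for every n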
Remark 10.20. We see that a necessary and sufficient condition for the existence of
a limit is that for each sequence xn → α , {xn } ⊂ U̇ (i) ( f (xn )) has a limit, and (ii)
the value of limn→∞ f (xn ) is independent of the choice of the sequence (xn ).
2
This naturally means that xn > a for all n, and xn → a.
Here condition (ii) can actually be dropped, since it follows from (i). We can
show this using contradiction: suppose that (i) holds, but (ii) is false. This would
mean that there exist a sequence
xn′ → α , {xn′ } ⊂ U̇
and a sequence
xn′′ → α , {xn′′ } ⊂ U̇
such that
limn→∞ f (xn′ ) ≠ limn→∞ f (xn′′ ).
But then the sequence (x1′ , x1′′ , x2′ , x2′′ , . . . , xn′ , xn′′ , . . .), which also tends to α , provides us with a sequence
( f (x1′ ), f (x1′′ ), f (x2′ ), f (x2′′ ), . . . , f (xn′ ), f (xn′′ ), . . .),
which would oscillate at infinity, since it would have two subsequences that tend to
two different limits. This, however, is impossible by (i).
We call Theorem 10.19 the transference principle, since it “transfers” the concept (and value) of limits of functions to limits of sequences.
The significance of the theorem lies in its ability to convert results pertaining to
limits of sequences into results about limits of functions. We will also want to use a
theorem connecting continuity to limits, but stating this will be much simpler than
Theorem 10.19, and we can actually reduce our new theorem to it.
Theorem 10.21. The function f is continuous at a point a if and only if f is defined
in a neighborhood of a and for each sequence (xn ), xn → a implies f (xn ) → f (a).
Proof. Suppose that f is continuous at a, and let (xn ) be a sequence that tends to a.
For a fixed ε , there exists a δ such that | f (x) − f (a)| < ε for all x ∈ (a − δ , a + δ ).
Since xn → a, xn ∈ (a − δ , a + δ ) for all sufficiently large n. Thus | f (xn ) − f (a)| < ε
for all sufficiently large n, which shows that f (xn ) → f (a).
Now suppose that f (xn ) → f (a) if xn → a. By Theorem 10.19, it then follows
that limx→a f (x) = f (a), that is, f is continuous at a.
⊔
⊓
For applications in the future, it is worth stating the following theorem.
Theorem 10.22. The finite limit limx→a−0 f (x) exists if and only if for each sequence xn ր a, ( f (xn )) is convergent. That is, in the case of a left-hand limit, it
suffices to consider monotone increasing sequences (xn ). A similar statement holds
for right-hand limits.
Proof. To prove the theorem, it is enough to show that if for every sequence xn ր a,
( f (xn )) is convergent, then it follows that for all sequences xn → a such that xn < a,
the sequence ( f (xn )) is convergent.
But this is a simple corollary of the fact that every sequence xn < a, xn → a, can
be rearranged into a monotone increasing sequence (xkn ) (See Theorem 6.7 and the
remark following it), and if the rearranged sequence ( f (xkn )) is convergent, then the original sequence ( f (xn )) is also convergent (see Theorem 5.5). ⊓⊔
Another—albeit not as deep—connection between limits of functions and limits
of sequences is as follows. An infinite sequence is actually just a function defined
on the positive integers. Then the limit of the sequence a1 = f (1), a2 = f (2), . . . ,
an = f (n), . . . is the limit of f at ∞, at least restricted to the set of positive integers.
To make this precise, we define the notion of a limit restricted to a set.
Definition 10.23. Let α denote a real number, the symbol ∞, or −∞. We say that
α is a limit point or an accumulation point of the set A if every neighborhood of α
contains infinitely many points of A.
Definition 10.24. Let α be a limit point of A, and let β denote a real number b, the symbol ∞, or −∞. The limit of f at α restricted
to A is β if for each neighborhood V of β there exists a punctured neighborhood U̇
of α such that
f (x) ∈ V, if x ∈ U̇ ∩ A.
(10.9)
Notation: limx→α , x∈A f (x) = β .
Example 10.25. Let f be the Dirichlet function (as in (9) of Examples 9.7). Then for
an arbitrary real number c, we have limx→c,x∈Q f (x) = 1 and limx→c,x∈R\Q f (x) = 0,
since f (x) = 1 if x is rational, and f (x) = 0 if x is irrational. It is clear that every real
number c is a limit point of the set of rational numbers as well as the set of irrational
numbers, so the above limits have meaning.
Remarks 10.26. 1. By this definition, the limit limx→a+0 f (x) is simply the limit of
f at a restricted to the set (a, ∞).
2. The limit of the sequence an = f (n) (in the case n → ∞) is simply the limit of f
as x → ∞ restricted to the set N+ .
3. If α is not a limit point of the set A, then α has a punctured neighborhood U̇ such
that U̇ ∩ A = ∅. In this case, the conditions of the definition automatically hold (since
the statement x ∈ U̇ ∩ A is then vacuous). Thus (10.9) is true for all neighborhoods
V of β . This means that we get a meaningful definition only if α is a limit point
of A.
With regard to the above, the following is the most natural definition.
Definition 10.27. Let a ∈ A ⊂ D( f ). The function f is continuous at the point a restricted to the set A if for every ε > 0, there exists a δ > 0 such that | f (x) − f (a)| < ε
if x ∈ (a − δ , a + δ ) ∩ A. If A = D( f ), then instead of saying that f is continuous at
the point a restricted to D( f ), we say that f is continuous at a for short.
Remarks 10.28. 1. By this definition, f being continuous from the right at a can also
be defined as f being continuous at a restricted to the interval [a, ∞).
2. In contrast to the definition of defining limits, we have to assume that f is defined
at a when we define continuity. However, we do not need to assume that a is a limit
point of the set A. If a ∈ A, but a is not a limit point of A, then we say that a is an
isolated point of A. It is easy to see that a is an isolated point of A if and only if
there exists a δ > 0 such that (a − δ , a + δ ) ∩ A = {a}. It then follows that if a is
an isolated point of A, then every function f : A → R is continuous at a restricted
to A. Clearly, for every ε > 0, the δ above has the property that | f (x) − f (a)| < ε
if x ∈ (a − δ , a + δ ) ∩ A, since the last condition is satisfied only by x = a, and
| f (a) − f (a)| = 0 < ε .
The following statements are often used; they are the equivalents to Theorems
5.7, 5.8, and 5.10.
Theorem 10.29 (Squeeze Theorem). If the inequality f (x) ≤ g(x) ≤ h(x) is satisfied in a punctured neighborhood of α and limx→α f (x) = limx→α h(x) = β , then
limx→α g(x) = β .
Proof. The statement follows from Theorems 5.7 and 10.19.
⊔
⊓
Theorem 10.30. If
limx→α , x∈A f (x) = b < c = limx→α , x∈A g(x),
then there exists a punctured neighborhood U̇ of α such that f (x) < g(x) for all
x ∈ U̇ ∩ A.
Proof. By the definition of the limit, for ε = (c − b)/2, there exists a punctured
neighborhood U˙1 of α such that | f (x) − b| < (c − b)/2 whenever x ∈ A ∩ U˙1 . Similarly, there is a U˙2 such that |g(x) − c| < (c − b)/2 if x ∈ A ∩ U˙2 . Let U̇ = U˙1 ∩ U˙2 .
Then U̇ is also a punctured neighborhood of α , and if x ∈ A ∩ U̇, then f (x) <
b + (c − b)/2 = c − (c − b)/2 < g(x).
⊔
⊓
Theorem 10.31. If the limits limx→α f (x) = b and limx→α g(x) = c exist, and if
f (x) ≤ g(x) holds on a punctured neighborhood of α , then b ≤ c.
Proof. Let U̇ be a punctured neighborhood of α in which f (x) ≤ g(x). Suppose that
b > c. Then by the previous theorem, there exists a punctured neighborhood V˙ of α
such that f (x) > g(x) if x ∈ V˙ . This, however, is impossible, since the set U̇ ∩ V˙ is
nonempty, and for each of its elements x, f (x) ≤ g(x).
⊔
⊓
Corollary 10.32. If f is continuous in a and f (a) > 0, then there exists a δ > 0 such
that f (x) > 0 for all x ∈ (a − δ , a + δ ). If f ≥ 0 holds in a punctured neighborhood
of a and f is continuous at a, then f (a) ≥ 0.
Remark 10.33. The converse of Theorem 10.30 is not true: if f (x) < g(x) holds
on a punctured neighborhood of α , then we cannot conclude that limx→α f (x) <
limx→α g(x). If, for example, f (x) = 0 and g(x) = |x|, then f (x) < g(x) for all x = 0,
but limx→0 f (x) = limx→0 g(x) = 0.
The converse of Theorem 10.31 is also not true: if limx→α f (x) ≤ limx→α g(x),
then we cannot conclude that f (x) ≤ g(x) holds in a punctured neighborhood of α .
If, for example, f (x) = |x| and g(x) = 0, then limx→0 f (x) ≤ limx→0 g(x) = 0, but
f (x) > g(x) for all x ≠ 0.
The following theorem for function limits is the analogue to Cauchy’s criterion.
Theorem 10.34. Let f be defined on a punctured neighborhood of α . The limit
limx→α f (x) exists and is finite if and only if for all ε > 0, there exists a punctured
neighborhood U̇ of α such that
| f (x1 ) − f (x2 )| < ε
(10.10)
if x1 , x2 ∈ U̇.
Proof. Suppose that limx→α f (x) = b ∈ R, and let ε > 0 be fixed. Then there exists
a punctured neighborhood U̇ of α such that | f (x) − b| < ε /2 if x ∈ U̇. It is clear that
(10.10) holds for all x1 , x2 ∈ U̇.
Now suppose that the condition formulated in the theorem holds. If xn → α and
xn ≠ α for all n, then the sequence ( f (xn )) satisfies the Cauchy criterion. Indeed,
for a given ε , choose a punctured neighborhood U̇ such that (10.10) holds for all
x1 , x2 ∈ U̇. Since xn → α and xn ≠ α for all n, there exists an N such that xn ∈ U̇
for all n ≥ N. If n, m ≥ N, then by (10.10), we have | f (xn ) − f (xm )| < ε . Then by
Theorem 6.13, the sequence ( f (xn )) is convergent.
Fix a sequence xn → α that satisfies xn ≠ α for all n, and let limn→∞ f (xn ) = b. If yn → α is another sequence satisfying yn ≠ α for all n, then the combined sequence (x1 , y1 , x2 , y2 , . . .) also satisfies this assumption, and so the sequence of function values s = ( f (x1 ), f (y1 ), f (x2 ), f (y2 ), . . .) is also convergent. Since ( f (xn )) is a
subsequence of this, the limit of s can only be b. On the other hand, ( f (yn )) is also
a subsequence of s, so f (yn ) → b. This holds for all sequences yn → α for which yn ≠ α for all n, so by the transference principle, limx→α f (x) = b. ⊓⊔
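A rough numerical reading of this Cauchy criterion (an illustration only; the sampling grid and the two test functions below are arbitrary choices): for f (x) = x sin(1/x) the oscillation of f over punctured neighborhoods of 0 shrinks with the neighborhood, while for g(x) = sin(1/x) it does not, so f has a finite limit at 0 and g does not.

    import math

    def oscillation(fn, delta, samples=2000):
        # max |fn(x1) - fn(x2)| over a sampled punctured neighborhood (-delta, delta) \ {0}
        xs = [delta * k / samples for k in range(1, samples)]
        xs += [-x for x in xs]
        vals = [fn(x) for x in xs]
        return max(vals) - min(vals)

    f = lambda x: x * math.sin(1 / x)
    g = lambda x: math.sin(1 / x)
    for d in (1e-1, 1e-2, 1e-3):
        print(d, round(oscillation(f, d), 5), round(oscillation(g, d), 5))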
Exercises
10.18. Show that for all functions f : R → R, there exists a sequence xn → ∞ such
that ( f (xn )) has a limit.
10.19. Let f : R → R be arbitrary. Prove that the limit limx→∞ f (x) exists if and only
if whenever the sequences (xn ) and (yn ) tend to ∞, and the limits limn→∞ f (xn ),
limn→∞ f (yn ) exist, then they are equal.
10.20. Construct a function f : R → R such that f (a · n) → 0 (n → ∞) for all a > 0,
but the limit limx→∞ f (x) does not exist. (H)
10.21. Prove that if f : R → R is continuous and f (a · n) → 0 (n → ∞) for all a > 0,
then limx→∞ f (x) = 0. (∗ S)
10.22. Let f : R → R be a function such that the sequence ( f (xn )) has a limit for all
sequences xn → ∞ for which xn+1 /xn → ∞. Show that the limit limx→∞ f (x) exists.
10.23. Prove that if the points 1/n for all n ∈ N+ are limit points of H, then 0 is also
a limit point of H.
10.24. Show that every point x ∈ R is a limit point of each of the sets Q and R \ Q.
10.25. Prove that (a) every bounded infinite set has a finite limit point; and (b) every
infinite set has a limit point.
10.26. Prove that if the set H has only one limit point, then H is countable, and it
can be listed as a sequence (xn ) such that limn→∞ xn exists and is equal to the limit
point of H.
10.27. What are the sets that have exactly two limit points?
10.28. Let f (x) = x if x is rational, and f (x) = −x if x is irrational. What can we say
about the limits limx→c, x∈Q f (x), and limx→c, x∈R\Q f (x)?
10.3 Limits and Operations
In examples up until now, we deduced continuity and values of limits directly from
the definitions. The following theorems—which follow from the transference principle and theorems analogous to theorems on limits of sequences—let us deduce
the continuity or limits of more complex functions using what we already know
about continuity or limits of simple functions.
Theorem 10.35. Let α denote a number a or one of the symbols a − 0, a + 0, ∞,
−∞. If the finite limits limx→α f (x) = b and limx→α g(x) = c exist, then
(i) limx→α ( f (x) + g(x)) exists and is equal to b + c;
(ii) limx→α ( f (x) · g(x)) exists and is equal to b · c;
(iii) If c ≠ 0, then limx→α ( f (x)/g(x)) exists and is equal to b/c.
Proof. We will go into detail only with (i). Let f and g be defined on a punctured neighborhood U̇ of α . By the transference principle, we know that for
every sequence xn → α , xn ∈ U̇, limn→∞ f (xn ) = b and limn→∞ g(xn ) = c. Then by
Theorem 5.12,
limn→∞ ( f (xn ) + g(xn )) = b + c,
which, by the transference principle again, gives us (i). Statements (ii) and (iii) can
be shown similarly.
⊔
⊓
Remarks 10.36. 1. In the first half of the proof, we used that the condition for sequences is necessary, while in the second half, we used that it is sufficient for the
limit to exist.
2. In statement (iii), we did not suppose that g(x) ≠ 0. To ensure that the limit limx→α ( f (x)/g(x)) has meaning, we see that if c ≠ 0, then there exists a punctured neighborhood U̇ of α in which g(x) ≠ 0. Theorem 10.30 states that if c < 0, then
in a suitable punctured neighborhood, g(x) < 0, and if c > 0, then in a suitable U̇,
g(x) > 0.
Examples 10.37. 1. A simple application of Theorem 10.35 gives
limx→1 (x^n − 1)/(x^m − 1) = n/m
for all n, m ∈ N+ , since if x ≠ 1, then
(x^n − 1)/(x^m − 1) = (x^{n−1} + x^{n−2} + · · · + 1)/(x^{m−1} + x^{m−2} + · · · + 1).
Here the numerator has n summands, while the denominator has m, each of which
tends to 1 if x → 1.
2. Consider the following problem. Find the values of a and b such that
limx→∞ (√(x² − x + 1) − (ax + b)) = 0   (10.11)
holds. It is clear that only positive values of a can come into play, and so we have to
find limits of type “∞ − ∞.” The following argument will lead us to an answer:
√(x² − x + 1) − (ax + b) =
= [(√(x² − x + 1) − (ax + b)) · (√(x² − x + 1) + (ax + b))] / [√(x² − x + 1) + (ax + b)] =
= [x² − x + 1 − (ax + b)²] / [√(x² − x + 1) + (ax + b)] =
= [(1 − a²)x² − (2ab + 1)x + (1 − b²)] / [√(x² − x + 1) + (ax + b)] =
= [(1 − a²)x − (2ab + 1) + (1 − b²)/x] / [√(1 − 1/x + 1/x²) + a + b/x].
Since
√(1 − 1/x + 1/x²) + a + b/x → a + 1    and    2ab + 1 + (b² − 1)/x → 2ab + 1,
if x → ∞, the quotient can tend to 0 only if 1 − a2 = 0, that is, considering a > 0,
only if a = 1. In this case,
limx→∞ (√(x² − x + 1) − (x + b)) = −(2b + 1)/2.
This is 0 if b = −1/2. Thus (10.11) holds if and only if a = 1 and b = −1/2.
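A numerical spot check of this result (illustration only; the sample points below are arbitrary): with a = 1 and b = −1/2 the expression in (10.11) indeed becomes small for large x.

    import math

    h = lambda x: math.sqrt(x * x - x + 1) - (x - 0.5)
    for x in (1e2, 1e4, 1e6):
        print(x, h(x))    # roughly 3/(8x), tending to 0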
Definition 10.38. If
lim ( f (x) − (ax + b)) = 0,
x→∞
then we say that the asymptote of f (x) at ∞ is the linear function ax + b. (Or from a
geometric viewpoint: the asymptote of the curve y = f (x) at ∞ is the line y = ax + b.)
The asymptote of f (x) at −∞ is defined similarly.
Remark 10.39. In Theorems 5.12, 5.14 and 5.16, we saw that we can interchange
the order of taking limits of sequences and performing basic operations on them.
We also proved this in numerous cases in which the limit of one (or both) of the
sequences considered is infinity. These cases apply to the corresponding statements
for functions (with identical proofs), just as with the cases with finite limits. So for
example:
If limx→α f (x) = b is finite and limx→α g(x) = ∞, then limx→α ( f (x) + g(x)) = ∞.
Or if limx→α f (x) = a ≠ 0, limx→α g(x) = 0, and g ≠ 0 in a punctured neighborhood of α , then limx→α | f (x)/g(x)| = ∞. If we also know that f /g has unchanging
sign, then it follows that limx→α f (x)/g(x) = ∞ or limx→α f (x)/g(x) = −∞ depending on the sign.
Example 10.40. Let x1 and x2 be the roots of the equation ax2 + bx + c = 0. Find the
limits of x1 and x2 if b and c are fixed, b ≠ 0, and a → 0.
Let
x1 = (−b + √(b² − 4ac))/(2a)    and    x2 = (−b − √(b² − 4ac))/(2a).
We can suppose b > 0 without sacrificing generality. We see that then lima→0 x1 is
a limit of type 0/0. A simple rearrangement yields
x1 = (√(b² − 4ac) − b)/(2a) = −4ac/(2a(√(b² − 4ac) + b)) = −4c/(2(√(b² − 4ac) + b)) → −c/b.
For the other root, lima→0 (−b − √(b² − 4ac)) = −2b < 0 implies
lima→0+0 x2 = −∞    and    lima→0−0 x2 = ∞.
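A numerical illustration of Example 10.40 (a sketch only; the particular values b = 2, c = 1 and the chosen values of a are arbitrary): x1 approaches −c/b = −1/2, while x2 blows up, with sign depending on the side from which a tends to 0.

    import math

    def roots(a, b, c):
        d = math.sqrt(b * b - 4 * a * c)
        return (-b + d) / (2 * a), (-b - d) / (2 * a)

    b, c = 2.0, 1.0
    for a in (0.1, 0.01, -0.01):
        x1, x2 = roots(a, b, c)
        print(a, round(x1, 4), round(x2, 2))   # x1 -> -0.5; x2 -> -inf (a -> 0+0), +inf (a -> 0-0)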
We note that critical limits can arise in the case of functions as well—in fact,
exactly in the same situations as for sequences. Critical limits occur when the limits
of f and g on their own do not determine the limits of f + g, f · g, or f /g. So for
example, if limx→0 f (x) = limx→0 g(x) = 0, then the limit limx→0 f (x)/g(x) can be
finite or infinite, or it might not exist. Clearly, limx→0 x/x = 1, limx→0 x/x3 = ∞,
limx→0 −x/x3 = −∞, and if f is the Riemann function, then the limit limx→0 f (x)/x
does not exist (show this). The examples that illustrate critical limits for sequences
can usually be transformed into examples involving functions without much difficulty.
The following theorem addresses limits of compositions of functions.
Theorem 10.41. Suppose that limx→α g(x) = γ and limt→γ f (t) = β . If g(x) ≠ γ
on a punctured neighborhood of α , or γ is finite and f continuous at γ , then
limx→α f (g(x)) = β .
Proof. For brevity, we denote the punctured neighborhoods of α by U̇(α ).
We have to show that for every neighborhood V of β , we can find a U̇(α ) such
that if x ∈ U̇(α ), then f (g(x)) ∈ V . Since limt→γ f (t) = β , there exists a Ẇ (γ )
such that f (t) ∈ V whenever t ∈ Ẇ (γ ). Let W (γ ) = Ẇ (γ ) if γ = ∞ or −∞, and
let W (γ ) = Ẇ (γ ) ∪ {γ } if γ is finite. Then W (γ ) is a neighborhood of γ , so by
limx→α g(x) = γ , there exists a U˙1 (α ) such that if x ∈ U˙1 (α ), then g(x) ∈ W (γ ).
If we know that g(x) ≠ γ in a punctured neighborhood U̇2 (α ) of α , then U̇(α ) = U̇1 (α ) ∩ U̇2 (α ) is a punctured neighborhood of α such that whenever x ∈ U̇(α ), g(x) ∈ W (γ ) and g(x) ≠ γ , that is, g(x) ∈ Ẇ (γ ), and thus f (g(x)) ∈ V .
Now let f be continuous at γ . Then f (t) ∈ V for all t ∈ W (γ ), since f (γ ) = β ∈ V . Thus if x ∈ U̇1 (α ), then g(x) ∈ W (γ ) and f (g(x)) ∈ V . ⊓⊔
Remark 10.42. The crucial condition in the theorem is that either g(x) ≠ γ in U̇(α ),
or that f is continuous at γ . If neither of these holds, then the statement of the
theorem is not always true, which is illustrated by the following example. Let g be
the Riemann function (that is, function 3 in Example 10.7), and let
f (t) = 1, if t = 0;     f (t) = 0, if t ≠ 0.
It is easy to see that then ( f ◦ g)(x) = f (g(x)) is exactly the Dirichlet function.
In Example 10.7, we saw that limx→0 g(x) = 0. On the other hand, it is clear that
limt→0 f (t) = 1, while the limit limx→0 f (g(x)) does not exist.
If γ = ∞, then the condition g(x) ≠ γ (x ∈ U̇(α )) automatically holds. This means
that if limx→α g(x) = ∞ and limt→∞ f (t) = β , then limx→α f (g(x)) = β holds without further assumptions. A special case of the converse of this is also true.
Theorem 10.43. Let f be defined in a neighborhood of ∞. Then
limx→0+0 f (1/x) = limx→∞ f (x)   (10.12)
in the sense that if one limit exists, then so does the other, and they are equal.
Proof. Since limx→0+0 1/x = ∞, as we saw before, the statement is true whenever
the right-hand side exists. If the left-hand side exists, then we apply Theorem 10.41
with the choices α = ∞, g(x) = 1/x, and γ = 0. Since g(x) is nowhere zero,
limx→∞ h(1/x) = limx→0+0 h(x)
whenever the right-hand side exists. If we apply this to the function h(x) = f (1/x),
then we get (10.12). ⊓⊔
As an application of Theorems 10.35 and 10.41, we immediately get the following theorem.
Theorem 10.44.
(i) If f and g are continuous at a, then f + g and f · g are also continuous at a, and
if g(a) ≠ 0, then f /g is continuous at a as well.
(ii) If g(x) is continuous at a and f (t) is continuous at g(a), then f ◦ g is also continuous at a.
We prove the following regarding continuity of the inverse of a function. (We
would like to draw the reader’s attention to the fact that in the theorem, we do not
suppose that the function itself is continuous.)
Theorem 10.45. Let f be strictly monotone increasing (decreasing) on the interval
I. Then
(i) The inverse of f , f −1 , is strictly monotone increasing (decreasing) on the set
f (I), and moreover,
(ii) f −1 is continuous at every point of f (I) restricted to the set f (I).
Proof. We can suppose that the interval I is nondegenerate, since otherwise, the
statement is trivial: on a set of one element, every function is strictly increasing,
decreasing, and continuous (restricted to the set). We can also assume that f is
strictly increasing, since the decreasing case can be proved the same way.
Since if u1 , u2 ∈ I, u1 < u2 , then f (u1 ) < f (u2 ), f is one-to-one, so it has an
inverse. Let x1 , x2 ∈ f (I), and suppose that x1 < x2 , but f −1 (x1 ) ≥ f −1 (x2 ). Since f
is monotone increasing, we would have
x1 = f ( f −1 (x1 )) ≥ f ( f −1 (x2 )) = x2 ,
which is impossible. Thus we see that f −1 (x1 ) < f −1 (x2 ) if x1 < x2 , that is, f −1 is
monotone increasing.
Fig. 10.11
Let d ∈ f (I) be arbitrary. Then d = f (a) for a suitable a ∈ I. We show that f −1
is continuous at d restricted to f (I). Let ε > 0 be fixed. Since f −1 (d) = a, we have
to show that there exists a δ > 0 such that
f −1 (x) ∈ (a − ε , a + ε ) if x ∈ (d − δ , d + δ ) ∩ f (I).
(10.13)
Consider first the case that a is an interior point of I (that is, not an endpoint).
Then we can choose points b, c ∈ I such that a − ε < b < a < c < a + ε (see
Figure 10.11). Since f is strictly monotone increasing, f (b) < f (a) < f (c), that
is, f (b) < d < f (c). Choose a positive δ such that f (b) < d − δ < d + δ < f (c). If
x ∈ (d − δ , d + δ ) ∩ f (I), then by the strict monotonicity of f −1 ,
b = f −1 ( f (b)) < f −1 (x) < f −1 ( f (c)) = c.
Thus f −1 (x) ∈ (b, c) ⊂ (a − ε , a + ε ), which proves (10.13).
If a is the left endpoint of I, then choose a point c ∈ I such that a < c < a + ε .
Then d = f (a) < f (c), so for a suitable δ > 0, d + δ < f (c). If x ∈ (d − δ , d + δ ) ∩
f (I), then
a ≤ f −1 (x) < f −1 ( f (c)) = c.
Thus f −1 (x) ∈ [a, c) ⊂ (a − ε , a + ε ), so (10.13) is again true. The argument for a
being the right endpoint of I is the same.
⊓
⊔
Just as with sequences, we define order of magnitude and asymptotic equivalence
for functions as well.
Definition 10.46. Suppose that limx→α f (x) = limx→α g(x) = ∞. If
lim
x→α
f (x)
= ∞,
g(x)
then we say that f tends to infinity faster than g (or g tends to infinity more slowly
than f ). We can also say that the order of magnitude of f is greater than the order
of magnitude of g.
Similarly, if limx→α f (x) = limx→α g(x) = 0 and limx→α f (x)/g(x) = 0, then we
say that f tends to zero faster than g (or g tends to zero more slowly than f ).
The statement limx→α f (x)/g(x) = 0 is sometimes denoted by f (x) = o(g(x))
(read: f (x) is little-o of g(x)).
If we know only that f (x)/g(x) is bounded in U̇(α ), then we denote this by
f (x) = O(g(x)) (read: f (x) is big-O of g(x)).
Example 10.47. We show that if x → ∞, then the function ax tends to infinity faster
than xk for every a > 1 and k > 0. To see this, we have to show that limx→∞ ax /xk =
∞. Let P be a fixed real number. Since the sequence (an /nk ) tends to infinity, there
exists an n0 such that an /nk > 2k · P if n ≥ n0 . Now if x ≥ 1, then ax ≥ a[x] and
xk ≤ (2 · [x])k = 2k · [x]k . Thus if x ≥ n0 , then [x] ≥ n0 , and so
a^x/x^k ≥ a^[x]/(2^k · [x]^k) > P,
which concludes the argument.
Consider the following functions:
. . . , ⁿ√x, . . . , ³√x, √x, x, x², x³, . . . , xⁿ, . . . , 2^x, 3^x, . . . , n^x, . . . , x^x.   (10.14)
It is easy to see that if x → ∞, then in the above arrangement, each function tends
to infinity faster than the functions to the left of it. (For the function 2x , this follows
from the previous example.)
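A small numerical illustration of this ordering (informal only; the exponents and sample points are arbitrary): the ratios 2^x/x^10 and x²/√x eventually grow without bound.

    for x in (10, 50, 100, 200):
        print(x, 2 ** x / x ** 10, x ** 2 / x ** 0.5)
    # the first ratio is still tiny at x = 10 but explodes for larger x; the second grows like x^1.5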
Definition 10.48. Suppose that
limx→α f (x) = limx→α g(x) = 0    or    limx→α f (x) = limx→α g(x) = ±∞.
If
limx→α f (x)/g(x) = 1,
then we say that f and g are asymptotically equal. We denote this by f ∼ g if x → α .
Example 10.49. We show that if x → 0, then √(1 + x) − 1 ∼ x/2. Indeed,
(√(1 + x) − 1)/(x/2) = [(√(1 + x) − 1) · (√(1 + x) + 1)] / [(x/2) · (√(1 + x) + 1)] = 2/(√(1 + x) + 1) → 1
if x → 0.
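Numerically (illustration only; the sample points below are arbitrary), the quotient is indeed close to 1 for small |x|:

    import math

    for x in (0.1, 0.01, -0.01, 0.001):
        print(x, (math.sqrt(1 + x) - 1) / (x / 2))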
Exercises
10.29. limx→7 (√(x + 2) − ∛(x + 20))/(⁴√(x + 9) − 2) = ?
10.30. limx→1 (³⁵⁹√x − 1)/(⁵√x − 1) = ?
10.31. limx→∞ x · (√(x² + 2x) − 2√(x² + x) + x) = ?
10.32. limx→∞ x^{3/2} · (√(x + 2) + √x − 2√(x + 1)) = ?
10.33. limx→1 [(1 − x)(1 − √x)(1 − ∛x) · . . . · (1 − ⁿ√x)] / (1 − x)^n = ?
10.34. Prove that
limx→−d/c+0 (ax + b)/(cx + d) = ∞ if bc − ad > 0, and = −∞ if bc − ad < 0;
limx→−d/c−0 (ax + b)/(cx + d) = −∞ if bc − ad > 0, and = ∞ if bc − ad < 0;
and
limx→±∞ (ax + b)/(cx + d) = a/c    (c ≠ 0).
10.35. Let p(x) be a polynomial of degree at most n; that is, let
p(x) = an xn + an−1 xn−1 + · · · + a1 x + a0 .
Prove that if
limx→0 p(x)/x^n = 0,
then p(x) = 0 for all x.
10.36. Construct functions f and g such that limx→0 f (x) = ∞, limx→0 g(x) = −∞,
and moreover,
(a) limx→0 ( f (x) + g(x)) exists and is finite;
(b) limx→0 ( f (x) + g(x)) = ∞;
(c) limx→0 ( f (x) + g(x)) = −∞;
(d) limx→0 ( f (x) + g(x)) does not exist.
10.37. (a) If at x = a, f is continuous while g is not, then can f + g be continuous
here?
(b) If at x = a, neither f nor g is continuous, then can f + g be continuous here?
10.38. Suppose that ϕ : R → R is strictly monotone, and let R(ϕ ) = R. Prove that if
f : R → R and if f ◦ ϕ is continuous, then f is continuous.
10.39. What can we say about the continuity of f ◦ g if in R,
(i) both f and g are continuous,
(ii) f is continuous, g is not continuous,
(iii) neither f nor g is continuous.
10.40. Give an example of functions f , g : R → R such that
limx→0 f (x) = limx→0 g(x) = 0,
but
limx→0 f (g(x)) = 1.
(S)
10.41. Prove that if f is the Riemann function, then the limit limx→0 f (x)/x does not
exist.
10.42. Is the following statement true? If f1 , f2 , . . . is an infinite sequence of continuous functions and F(x) = infk { fk (x)}, then F(x) is also continuous.
10.43. Is it true that if f is strictly monotone on the set A ⊂ R, then its inverse is
continuous on the set f (A)?
10.44. Let a and b be positive numbers. Prove that (a) if x → ∞, then the order of
magnitude of xa is greater than the order of magnitude of xb if and only if a > b;
and (b) if x → 0 + 0, then the order of magnitude of x−a is greater than the order of
magnitude of x−b if and only if a > b.
10.45. Prove that in the ordering (10.14), each function tends to infinity faster than
the functions to the left of it.
10.46. Let a > 1 and k > 0. Prove that if x → ∞, then the order of magnitude of a^√x is larger than the order of magnitude of x^k.
10.47. Suppose that all of the functions f1 , f2 , . . . tend to infinity as x → ∞. Prove
that there is a function f whose order of magnitude is greater than the order of
magnitude of each fn .
10.4 Continuous Functions in Closed and Bounded Intervals
The following theorems show that if f is continuous on a closed and bounded
interval, then it automatically follows that f possesses numerous other important
properties.
Definition 10.50. Let a < b. The function f is continuous in the interval [a, b] if it
is continuous at all x ∈ (a, b), is continuous from the right at a, and continuous from
the left at b.
More generally:
Definition 10.51. Let A ⊂ D( f ). The function f is continuous on the set A if at each
x ∈ A, it is continuous when restricted to the set A.
From now on, we denote the set of all continuous functions on the closed and
bounded interval [a, b] by C[a, b].
Theorem 10.52. If f ∈ C[a, b], then f is bounded on [a, b].
Proof. We prove the statement by contradiction. Suppose that f is not bounded on
[a, b]. Then there is no number K such that | f (x)| ≤ K for all x ∈ [a, b]. In particular, for
every n, there exists an xn ∈ [a, b] such that | f (xn )| > n.
Consider the sequence (xn ). This is bounded, since each of its terms falls
inside [a, b], so it has a convergent subsequence (xnk ). Let limk→∞ xnk = α . Since
{xn } ⊂ [a, b], α is also in [a, b]. However, f is continuous at α , so by the transference principle, the sequence ( f (xnk )) is convergent (and tends to f (α )). It follows
that the sequence ( f (xnk )) is bounded. This, however, contradicts that | f (xnk )| > nk for all k. ⊓⊔
Remark 10.53. In the theorem, it is necessary to suppose that the function f is continuous on a closed and bounded interval. Dropping either assumption leads to the
statement of the theorem not being true. So for example, the function f (x) = 1/x is
continuous on the bounded interval (0, 1], but f is not bounded there. The function
f (x) = x2 is continuous on [0, ∞), but is also not bounded there.
Definition 10.54. Let f be defined on the set A. If the image of A, f (A), has a greatest element, then we call it the (global) maximum of the function f over A, and
denote it by max f (A) or maxx∈A f (x). If a ∈ A and f (a) = max f (A), then we say that a is a global maximum point of f over A.
If f (A) has a smallest element, then we call this the (global) minimum of the function f over A, and denote it by min f (A) or minx∈A f (x). If b ∈ A and f (b) = min f (A), then we say that b is a global minimum point of f over A.
We collectively call the global maximum and minimum points global extrema.
Naturally, on a set A, a function can have numerous maximum (or minimum)
points.
A set of numbers can have a maximum (or minimum) only if it is bounded from
above (below). However, as we have already seen, not every bounded set has a maximum or minimum element. If the image of a set A under a function f is bounded,
that alone does not guarantee that there will be a largest or smallest value of f .
For example, the fractional part f (x) = {x} is bounded on [0, 2]. The least upper
bound of the image of this set is 1, but the function is never equal to 1. Thus this
function does not have a maximum point in [0, 2].
The following theorem shows that this cannot happen with a continuous function
on a closed and bounded interval.
Theorem 10.55 (Weierstrass’s Theorem). If f ∈ C[a, b], then there exist α ∈ [a, b]
and β ∈ [a, b], such that f (α ) ≤ f (x) ≤ f (β ) if x ∈ [a, b]. In other words, a continuous function always has an absolute maximum and minimum point over a closed
and bounded interval.
We give two proofs of the theorem.
Proof I. By Theorem 10.52, f ([a, b]) is bounded. Let the least upper bound of
f ([a, b]) be M. If M ∈ f ([a, b]), then this means that M = maxx∈[a,b] f (x). Thus we have only to show that M ∉ f ([a, b]) is impossible. We prove this by contradiction. If M ∉ f ([a, b]), then the values of the function F(x) = M − f (x) are positive for all
x ∈ [a, b]. Thus the function 1/F is also continuous on [a, b] (see Theorem 10.44),
so it is bounded there (by Theorem 10.52 again). Then there exists a K > 0 such that
1/(M − f (x)) ≤ K
for all x ∈ [a, b]. Taking reciprocals of both sides and rearranging (and using that
M − f (x) > 0 everywhere) yields the inequality
f (x) ≤ M − 1/K
if x ∈ [a, b]. However, this contradicts M being the least upper bound of f ([a, b]).
The existence of min f [a, b] can be proven similarly. (Or we can reduce it to the
statement regarding the maximum if we apply that to − f instead of f .)
Proof II. Once again, let M = sup f ([a, b]); we will show that M ∈ f ([a, b]). If n is a
positive integer, then M − (1/n) is not an upper bound of f ([a, b]), since M was the
least upper bound. Thus there exists a point xn ∈ [a, b] such that f (xn ) > M − (1/n).
The sequence (xn ) is bounded (since each of its terms falls in [a, b]), so it has a
convergent subsequence (xnk ). Let limk→∞ xnk = α . Since {xn } ⊂ [a, b], we have
α ∈ [a, b]. Now f is continuous at α , so by the transference principle, f (xnk ) →
f (α ). Since
M − 1/nk < f (xnk ) ≤ M
for all k, by the squeeze theorem, M ≤ f (α ) ≤ M, that is, f (α ) = M. This shows
that M ∈ f ([a, b]).
The existence of min f [a, b] can be proven similarly.
Remark 10.56. Looking at the conditions of the theorem, it is again essential that
we are talking about continuous functions in closed, bounded intervals. As we saw
in Remark 10.53, if f is continuous on an open interval (a, b), then it might happen
that f ((a, b)) is not bounded from above, and then max f ((a, b)) does not exist. But
this can occur even if f is bounded. For example, the function f (x) = x is continuous
and bounded on the open interval (0, 1), but it does not have a greatest value there.
It is equally important for the interval to be bounded. This is illustrated by the
function f (x) = −1/(1+x2 ), which is bounded in [0, ∞), but does not have a greatest
value.
Another important property of continuous functions over closed and bounded
intervals is given by the following theorem.
Theorem 10.57 (Bolzano–Darboux3 Theorem). If f ∈ C[a, b], then f takes on
every value between f (a) and f (b) on the interval [a, b].
We give two proofs of this theorem again, since both proofs embody ideas that
are frequently used in analysis.
Proof I. Without loss of generality, we may assume that f (a) < c < f (b). We will
prove the existence of a point α ∈ [a, b] such that the function takes on values not
larger than c and not smaller than c in every neighborhood of the point. Then by the
continuity of f at α , it follows that f (α ) = c.
We will define α as the intersection of a sequence of nested closed intervals
I0 ⊃ I1 ⊃ . . .. Let I0 = [a, b].
If f ((a + b)/2) ≤ c, then let I1 = [a1 , b1 ] = [(a + b)/2, b];
but if f ((a + b)/2) > c, then let I1 = [a1 , b1 ] = [a, (a + b)/2].
We continue this process. If In = [an , bn ] is already defined
3
Jean Gaston Darboux (1842–1917), French mathematician.
Fig. 10.12
and f ((an + bn )/2) ≤ c, then let In+1 = [an+1 , bn+1 ] = [(an + bn )/2, bn ];
but if f ((an + bn )/2) > c, then let In+1 = [an+1 , bn+1 ] = [an , (an + bn )/2].
The interval sequence I0 ⊃ I1 ⊃ . . . is defined such that
f (an ) ≤ c < f (bn )   (10.15)
holds for all n (Figure 10.12). Since |In | = (b − a)/2n → 0, the interval sequence
(In ) has exactly one shared point. Let this be α . Clearly,
α = limn→∞ an = limn→∞ bn ,
and since f is continuous at α ,
limn→∞ f (an ) = limn→∞ f (bn ) = f (α ).   (10.16)
But by (10.15),
limn→∞ f (an ) ≤ c ≤ limn→∞ f (bn ),
that is, (10.16) can hold only if f (α ) = c.
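The nested-interval construction in Proof I is exactly the bisection method. A minimal sketch (assuming floating-point arithmetic; the function, interval, value c, and fixed iteration count below are arbitrary illustrative choices):

    def bisection(f, a, b, c, steps=60):
        # assumes f continuous on [a, b] with f(a) < c < f(b); returns alpha with f(alpha) ~ c
        for _ in range(steps):
            m = (a + b) / 2
            if f(m) <= c:
                a = m        # keep [m, b], as in the proof: f(a_n) <= c < f(b_n) is preserved
            else:
                b = m        # keep [a, m]
        return (a + b) / 2

    # Example: f(x) = x^3 on [0, 2] takes the value c = 5 at the cube root of 5.
    print(bisection(lambda x: x ** 3, 0.0, 2.0, 5.0))   # ~ 1.709975946...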
Proof II. Let us suppose once again that f (a) < c < f (b), and let
A = {x ∈ [a, b] : f (x) < c}.
The set A is then bounded and nonempty, since a ∈ A. Thus α = sup A exists, and
since A ⊂ [a, b], we have α ∈ [a, b]. Since f is continuous at a and f (a) < c, f (x) < c
holds over a suitable interval [a, a + δ ), and so a < α . Moreover, since f is continuous at b and f (b) > c, f (x) > c holds on a suitable interval (b − δ , b], and so α < b.
We will show that f (α ) = c.
If f (α ) were larger than c, then there would exist an interval (α − δ , α + δ ) in
which f (x) > c would hold. But then α could not be the upper limit of the set A,
that is, its least upper bound, since the smaller α − δ would also be an upper bound
of A.
If, however, f (α ) were smaller than c, then there would exist an interval (α −
δ , α + δ ) in which f (x) < c held. But then α cannot be the upper limit of the set A
once again, since there would be values x in A that were larger than α . Thus neither
f (α ) > c nor f (α ) < c can hold, so f (α ) = c.
Corollary 10.58. If f ∈ C[a, b], then the image of f (the set f ([a, b])) is a closed and
bounded interval; in fact,
f ([a, b]) = [ minx∈[a,b] f (x), maxx∈[a,b] f (x) ].
Proof. It follows from Weierstrass’s theorem that M = max f ([a, b]) and m = min
f ([a, b]) exist. It is clear that f ([a, b]) ⊂ [m, M]. By Theorem 10.57, we also know
that the function takes on every value of [m, M] in [a, b], so f ([a, b]) = [m, M].
⊔
⊓
It is easy to see from the above theorems that if I is any kind of interval and f is
continuous on I, then f (I) is also an interval (see Exercise 10.61).
Using the Bolzano–Darboux theorem, we can give a simple proof of the existence
of the kth roots of nonnegative numbers (Theorem 3.6).
Corollary 10.59. If a ≥ 0 and k is a positive integer, then there exists a nonnegative
real number b such that bk = a.
Proof. The function f (x) = x^k is continuous on the interval [0, a + 1] (why?). Since we
have f (0) = 0 ≤ a and f (a + 1) = (a + 1)k ≥ a + 1 > a, by the Bolzano–Darboux
theorem, there exists a b ∈ [0, a + 1] such that bk = f (b) = a.
⊔
⊓
Exercises
10.48. Give an example of a function f : [a, b] → R that is continuous on all of [a, b]
except for one point, and that is (a) not bounded, (b) bounded, but does not have a
greatest value.
10.49. Show that if f : R → R is continuous and limx→∞ f (x) = limx→−∞ f (x), then
f has either a largest or smallest value (not necessarily both).
10.50. Which functions f : [a, b] → R have smallest and largest values
on every nonempty set A ⊂ [a, b]?
10.51. Suppose that the function f : [a, b] → R satisfies the following properties:
(a) If a ≤ c < d ≤ b, then f takes on every value between f (c) and f (d) on
[c, d]; moreover, (b) whenever xn ∈ [a, b] for all n and xn → c, then on the set
{xn : n = 1, 2, . . .} ∪ {c}, the function f has largest and smallest values. Prove that f
is continuous.
10.52. Let f : [a, b] → (0, ∞) be continuous. Prove that for suitable δ > 0, f (x) > δ
for all x ∈ [a, b]. Give a counterexample if we write (a, b) instead of [a, b].
10.53. Let f , g : [a, b] → R be continuous and suppose that f (x) < g(x) for all
x ∈ [a, b]. Prove that for suitable δ > 0, f (x)+δ < g(x) for all x ∈ [a, b]. Give a
counterexample if we write (a, b) instead of [a, b].
10.54. Prove that if f : [a, b] → R is continuous and one-to-one, then it is strictly
monotone.
10.55. Show that if f : [a, b] → R is monotone increasing and the image contains
[ f (a), f (b)] ∩ Q, then f is continuous.
10.56. Prove that if f : [a, b]→R is continuous, then for every x1 , . . . , xn ∈ [a, b], there
exists a c ∈ [a, b] such that ( f (x1 ) + · · · + f (xn ))/n = f (c).
10.57. Prove that every cubic polynomial has a real root. Is it true that every fourth-degree polynomial has a real root? (H)
10.58. Prove that if f : [a, b] → [a, b] is continuous, then there exists c ∈ [a, b] for
which f (c) = c. Give a counterexample if we take any other type of interval than
[a, b]. (H)
10.59. Prove that if f : [0, 1] → R is continuous and f (0) = f (1), then there exists
an x ∈ [0, 1/2] such that f (x) = f (x + (1/2)). In fact, for every n ∈ N+ , there exists
a 0 ≤ x ≤ 1 − (1/n) such that f (x) = f (x + (1/n)).
10.60. Does there exist a continuous function f : R → R for which f ( f (x)) = −x
for all x? (H)
10.61. Prove that if I is an interval (closed or not, bounded or not, degenerate or not)
and f : I → R is continuous, then f (I) is also an interval. (S)
10.5 Uniform Continuity
Let the function f be continuous on the interval I. This means that for all a ∈ I and
arbitrary ε > 0, there exists a δ > 0 such
that
| f (x) − f (a)| < ε , if x ∈ (a − δ , a + δ ) ∩ I.
(10.17)
In many cases, we can determine the largest possible δ for a given a such that (10.17) holds. Let us denote this by δ (a).
Fig. 10.13
If ε > 0 is fixed, then different values of a usually give different values of δ (a). It is easy to see, for
example, that for the function f (x) = x2 , the larger |a| is, the smaller the corresponding δ (a) is (Figure 10.13). Thus in the interval [0, 1], the δ (a) corresponding
to a = 1 is the smallest, so for each a ∈ [0, 1], we can choose δ to be δ (1). In other
words, this means that for all a ∈ [0, 1]
| f (x) − f (a)| < ε , if |x − a| < δ (1).
This argument, of course, does not usually work. Since there is not always a smallest number out of infinitely many, we cannot always—at least using the above
method—find a δ that is good for all a ∈ I if we have a continuous function
f : I → R. In fact, such a δ does not always exist. In the case of f (x) = 1/x, we found
the value of δ (a) in Example 10.2.4 (see equation (10.2)). We can see that in this
case, δ (a) → 0 if a → 0, that is, there does not exist a δ that would be good at every
point of the interval (0, 1). As we will soon show, this phenomenon cannot occur in
functions that are continuous on a closed and bounded interval: in this case, there
must exist a shared, good δ for the whole interval. We call this property uniform
continuity. The following is the precise definition:
Definition 10.60. The function f is uniformly continuous on the interval I if for
every ε > 0, there exists a (shared, independent of the position) δ > 0 such that
if x0 , x1 ∈ I and |x1 − x0 | < δ , then | f (x1 ) − f (x0 )| < ε .   (10.18)
We can define uniform continuity on an arbitrary set A ⊂ R similarly: in the
above definition, write A in place of I everywhere.
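As an informal numerical illustration of the difference between pointwise and uniform continuity (the formulas for δ (a) below are elementary computations for these two particular functions, not taken from the text, and ε = 0.1 is an arbitrary choice): for f (x) = x² on [0, 1] the usable δ (a) stays bounded away from 0, while for f (x) = 1/x on (0, 1) it shrinks to 0 as a → 0.

    eps = 0.1
    # f(x) = x^2 on [0, 1]: |x^2 - a^2| < eps whenever |x - a| < sqrt(a^2 + eps) - a
    for a in (0.2, 0.5, 1.0):
        print("x^2 , a =", a, " delta ~", round((a * a + eps) ** 0.5 - a, 4))
    # f(x) = 1/x on (0, 1): |1/x - 1/a| < eps whenever |x - a| < eps*a^2/(1 + eps*a)
    for a in (0.5, 0.1, 0.01):
        print("1/x , a =", a, " delta ~", round(eps * a * a / (1 + eps * a), 6))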
Theorem 10.61 (Heine’s Theorem4 ). If f ∈ C[a, b], then f is uniformly continuous
in [a, b].
Proof. We prove the statement by contradiction. Suppose that f is not uniformly
continuous in [a, b]. This means that there exists an ε0 > 0 for which there does not
exist a δ > 0 such that (10.18) holds. Then (10.18) does not hold with the choice
4
Heinrich Eduard Heine (1821–1881), German mathematician.
δ = 1/n either, that is, for every n, there exist αn ∈ [a, b] and βn ∈ [a, b] for which
|αn − βn | < 1/n,   (10.19)
but at the same time,
| f (αn ) − f (βn )| ≥ ε0 .   (10.20)
Since αn ∈ [a, b], there exists a convergent subsequence (αnk ) whose limit, α , is also
in [a, b]. Now by (10.19),
βnk = βnk − αnk + αnk → 0 + α = α .
Since f is continuous on [a, b], it is continuous at α (restricted to [a, b]). Thus by the transference principle, f (αnk ) → f (α ) and f (βnk ) → f (α ), so
limk→∞ ( f (αnk ) − f (βnk )) = 0.
This, however, contradicts (10.20).
⊔
⊓
Remark 10.62. In Theorem 10.61, both the boundedness and the closedness of the
interval [a, b] are necessary. For example, the function f (x) = 1/x is continuous on
(0, 1), but it is not uniformly continuous there. This shows that the closedness assumption cannot be dropped. The function f (x) = x2 is continuous on (−∞, ∞), but
it is not uniformly continuous there. This shows that the boundedness assumption
cannot be dropped either.
Later, we will see that uniform continuity is a very useful property, and we often
need to determine whether a function is uniformly continuous on a set A. If A is a
closed and bounded interval, then our job is easy: by Heine’s theorem, the function
is uniformly continuous on A if and only if it is continuous at every point of A. If,
however, A is an interval that is neither bounded nor closed (perhaps A is not even
an interval), then Heine’s theorem does not help. This is why it is important for us
to know that there is a simple property that is easy to check that implies uniform
continuity.
Definition 10.63. The function f is said to have the Lipschitz5 property (is Lipschitz, for short) on the set A if there exists a constant K ≥ 0 such that
| f (x1 ) − f (x0 )| ≤ K · |x1 − x0 |
(10.21)
for all x0 , x1 ∈ A.
Theorem 10.64. If f is Lipschitz on the set A, then f is uniformly continuous on A.
5
Rudolph Otto Sigismund Lipschitz (1832–1903), German mathematician.
Proof. If (10.21) holds for all x0 , x1 ∈ A, then x0 , x1 ∈ A and |x1 − x0 | < ε /K imply
| f (x1 ) − f (x0 )| ≤ K · |x1 − x0 | < K · (ε /K) = ε .
⊔
⊓
Remark 10.65. The converse is generally not true: uniform continuity does not generally imply the Lipschitz property. (That is, the Lipschitz property is stronger than uniform continuity.) So for example, the function √x is not Lipschitz on the interval [0, 1]. Indeed, for every constant K > 0, if x0 = 0 and 0 < x1 < min(1, 1/K²), then x1 > K² · x1², and so
|√x1 − √x0 | = √x1 > K · x1 = K · |x1 − x0 |.
On the other hand, √x is uniformly continuous on [0, 1], since it is continuous there.
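Numerically (illustration only; the sample points below are arbitrary), the difference quotients of √x against x0 = 0 are unbounded, which is exactly why no Lipschitz constant K can work on [0, 1]:

    import math

    for x1 in (1e-2, 1e-4, 1e-8):
        print(x1, math.sqrt(x1) / x1)     # equals 1/sqrt(x1), which exceeds any fixed K eventually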
Exercises
10.62. The functions given below are uniformly continuous on the given intervals
by Heine’s theorem. For all ε > 0, give a δ that satisfies the definition of uniform
continuity.
(a) x² on [0, 1];     (b) x³ on [−2, 2];     (c) √x on [0, 1].
10.63. Prove that (a) f (x) = x3 is not uniformly continuous on R; and (b) f (x) =
1/x2 is not uniformly continuous on (0, 1), but is uniformly continuous on [1, +∞).
10.64. Prove that if f is continuous on R and
lim f (x) = lim f (x) = 0,
x→∞
x→−∞
then f is uniformly continuous on R.
10.65. Prove that if f is uniformly continuous on a bounded set A, then f is bounded
on A. Does this statement still hold if we do not assume that A is bounded?
10.66. Prove that if f : R → R and g : R → R are uniformly continuous on R, then
f + g is also uniformly continuous on R.
10.67. Is it true that if f : R → R and g : R → R are uniformly continuous on R,
then f · g is also uniformly continuous on R?
10.68. Prove that if f is continuous on [a, b], then for every ε > 0, we can find a
piecewise linear function ℓ(x) in [a, b] such that | f (x) − ℓ(x)| < ε for all x ∈ [a, b]
(that is, the graph of f can be approximated to within less than ε by a piecewise
linear function).
(The function ℓ(x) is a piecewise linear function on [a, b] if the interval [a, b] can be
subdivided with points a0 = a < a1 < · · · < an−1 < an = b into subintervals [ak−1 , ak ]
on which ℓ(x) is linear, that is, ℓ(x) = ck x + dk if x ∈ [ak−1 , ak ] and k = 1, . . . , n.)
10.69. Prove that the function xk is Lipschitz on every bounded set (where k is an
arbitrary positive integer).
10.70. Prove that the function √x is Lipschitz on the interval [a, b] for all 0 < a < b.
10.71. Suppose that f and g are Lipschitz on A. Prove that then
(i) f + g and c · f are Lipschitz on the set A for all c ∈ R; and
(ii) if the set A is bounded, then f · g is also Lipschitz on A. (H)
10.72. Give an example for Lipschitz functions f , g : R → R for which f · g is not
Lipschitz.
10.73. Suppose that f is Lipschitz on the closed and bounded interval [a, b]. Prove
that if f is nowhere zero, then 1/ f is also Lipschitz on [a, b].
10.74. Suppose that the function f : R → R satisfies | f (x1 ) − f (x2 )| ≤ |x1 − x2 |2 for
all x1 , x2 ∈ R. Prove that then f is constant.
10.6 Monotonicity and Continuity
Let f be defined on a punctured neighborhood of a. The function f is continuous at
a if and only if all of the following conditions hold:
(i) limx→a f (x) exists,
(ii) a ∈ D( f ),
(iii) limx→a f (x) = f (a).
If any one of these three conditions does not hold, then the function is not continuous
at a; we then say that f has a point of discontinuity at a. We classify points of
discontinuity as follows.
Definition 10.66. Let f be defined on a punctured neighborhood of a, and suppose
that f is not continuous at a. If limx→a f (x) exists and is finite, but a ∉ D( f ) or f (a) ≠ limx→a f (x), then we say that a is a removable discontinuity of f .6
If limx→a f (x) does not exist, but the finite limits
limx→a+0 f (x) = f (a + 0)    and    limx→a−0 f (x) = f (a − 0)
both exist (and then are necessarily different), then we say that f has a jump discontinuity at a. We call removable discontinuities and jump discontinuities discontinuities of the first type collectively.
In all other cases, we say that f has a discontinuity of the second type at a.
6 Since in that case, by setting f (a) = limx→a f (x), f can be made continuous at a.
Examples 10.67. 1. The functions {x} and [x] have jump discontinuities at every
positive integer value. Similarly, the function sgn x has a jump discontinuity at 0.
2. The Riemann function (function 3. in Example 10.7) has a removable discontinuity at every rational point.
3. The Dirichlet function (function (9) in Example 9.7) has discontinuities of the
second type at every point.
Below, we will show that the points of discontinuity of a monotone function are
of the first type, and these can only be jump discontinuities. This is equivalent to
saying that a monotone function possesses both one-sided limits at every point.
Theorem 10.68. Let f be monotone increasing on the finite or infinite open interval
(a, b). Then
(i) for every x0 ∈ (a, b), the finite limits f (x0 − 0) and f (x0 + 0) exist, and
f (x0 − 0) ≤ f (x0 ) ≤ f (x0 + 0).
(ii) If f is bounded from above on (a, b), then the finite limit f (b − 0) exists. If f is
bounded from below on (a, b), then the finite limit f (a + 0) exists.
(iii) If f is not bounded from above on (a, b), then f (b − 0) = ∞; if f is not bounded
from below on (a, b), then f (a + 0) = −∞.
A similar statement can be formulated for monotone decreasing functions, as
well as for intervals that are unbounded. We give two proofs of the theorem.
Proof I. (i) Since f (x) ≤ f (x0 ) for all x ∈ (a, x0 ), the set f ((a, x0 )) is bounded from
above, and f (x0 ) is an upper bound. Let α = sup f ((a, x0 )); then α ≤ f (x0 ).
Let ε > 0 be fixed. Since α is the least upper bound of the set f ((a, x0 )), α − ε
cannot be an upper bound. Thus there exists xε ∈ (a, x0 ) for which α − ε < f (xε ).
Now by the monotonicity of f and the definition of α ,
α − ε < f (xε ) ≤ f (x) ≤ α
if a < xε < x < x0 , which clearly shows that limx→x0 −0 f (x) = α . Thus we saw that
f (x0 − 0) exists and is finite, as well as that f (x0 − 0) ≤ f (x0 ). The argument is
similar for f (x0 + 0) ≥ f (x0 ).
Statements (ii) and (iii) can be proven similarly; in the first statement of (ii),
sup f ((a, b)) takes on the role of f (x0 ).
Proof II. We will go into detail only in proving (i). By Theorem 10.22, it suffices to
show that for every sequence xn ր x0 , ( f (xn )) is convergent, and its limit is at most
f (x0 ). By the monotonicity of f , if xn ր x0 , then ( f (xn )) is monotone increasing;
it has a (finite or infinite) limit. Since also f (xn ) ≤ f (x0 ) for all n, limn→∞ f (xn ) ≤
f (x0 ).
Corollary 10.69. If f is monotone on (a, b), then at every point x0 ∈ (a, b), it either
is continuous or has a jump discontinuity: a monotone function on (a, b) can have
only jump discontinuities.
We now show that discontinuities of a monotone function are limited not only by type, but
by quantity.
Theorem 10.70. If f is monotone on the open
interval I, then it can have at most countably
many discontinuities on I.
Proof. Without loss of generality, we may assume that f is monotone increasing on I. If f
is not continuous at a point c ∈ I, then f (c − 0) < f (c + 0). Let r(c) be a rational number for
which f (c − 0) < r(c) < f (c + 0). If c1 < c2 , then by the monotonicity of f , f (c1 + 0) ≤
f (c2 − 0). Thus if f has both c1 and c2 as points of discontinuity, then r(c1 ) < r(c2 )
(Figure 10.14).
This means that we have created a one-to-one correspondence between the points
of discontinuity and a subset of the rational numbers. Since the set of rational numbers is countable, f can have only a countable number of discontinuities.
⊓⊔
Remark 10.71. Given an arbitrary countable set of numbers A, we can construct a
function f that is monotone on (−∞, ∞) and whose set of points of discontinuity
is exactly A (see Exercise 10.76). So for example, we can construct a monotone
increasing function on (−∞, ∞) that is continuous at every irrational point and discontinuous at every rational point.
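One standard construction behind this remark (and behind Exercise 10.76) puts a jump of size 2^−n at the nth element of an enumeration of A. The Python sketch below evaluates a partial sum of such a function for A = Q ∩ [0, 1]; it is only a numerical illustration under these assumed choices, not a proof.

from fractions import Fraction

def rationals_in_unit_interval(N):
    # first N terms of one fixed enumeration of the rationals in [0, 1]
    out = []
    q = 1
    while len(out) < N:
        for p in range(q + 1):
            x = Fraction(p, q)
            if x not in out:
                out.append(x)
                if len(out) == N:
                    break
        q += 1
    return out

def f(x, enum):
    # jump of size 2**-(n+1) at the (n+1)st element of the enumeration
    return sum(2.0 ** -(n + 1) for n, q in enumerate(enum) if q <= x)

A = rationals_in_unit_interval(200)
print(f(0.5, A) - f(0.5 - 1e-9, A) > 0)    # True: a jump at the rational 1/2
print(f(0.3141, A) - f(0.3141 - 1e-9, A))  # 0.0: no jump detected near this point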
In Theorem 10.45, we saw that if f is strictly monotone in the interval I, then its
inverse is continuous on the interval f (I). If the function f is also continuous, then
we can expand on this in the following way.
Theorem 10.72. Let f be strictly increasing and continuous on the interval I. Then
(i) f (I) is also an interval; namely,
if I = [a, b], then f (I) = [ f (a), f (b)];
if I = [a, b), where b is finite or infinite, then f (I) = [ f (a), sup f (I));
if I = (a, b], where a is finite or infinite, then f (I) = (inf f (I), f (b)];
if I = (a, b), where each of a and b is either finite or infinite, then f (I) =
(inf f (I), sup f (I)).
(ii) The inverse of f , f −1 , is strictly monotone increasing and continuous on the
interval f (I).
A similar statement can be made for strictly monotone decreasing and continuous
functions.
Proof. We need only prove (i). If I = [a, b], then f (I) = [ f (a), f (b)] is clear from
the Bolzano–Darboux theorem (Figure 10.15).
Next suppose that I = [a, b). It is clear that then, f (I) ⊂ [ f (a), sup f (I)]. If
f (a) ≤ c < sup f (I), then let us choose a point u ∈ I for which c < f (u). By the
Bolzano–Darboux theorem, f takes on every value between f (a) and f (u) on the
interval [a, u], so c ∈ f ([a, u]) ⊂ f (I).
This shows that [ f (a), sup f (I)) ⊂ f (I). To prove that f (I) = [ f (a), sup f (I)), we
just need to show that sup f (I) ∉ f (I). Indeed, if c ∈ f (I) and c = f (u), where u ∈ I,
then choosing v ∈ I with u < v gives c = f (u) < f (v) ≤ sup f (I), so c ≠ sup f (I).
The rest of the statements can be proved similarly.
⊓⊔
Remark 10.73. By the previous theorem, the inverse of a function f defined on an
interval [a, b] exists and is also defined on a closed and bounded interval if the function f is strictly monotone and continuous. This condition is far from necessary, as
the following example illustrates (Figure 10.16).
Let f be defined for x ∈ [0, 1] as
f (x) = x if x is rational, and f (x) = 1 − x if x is irrational.
It is easy to see that in [0, 1], f
a) is not monotone on any subinterval,
b) is nowhere continuous except for the point x = 1/2; yet
c) the inverse of f exists.
Moreover, f ([0, 1]) = [0, 1], so f is a one-to-one correspondence from [0, 1] to itself that is nowhere monotone and is continuous nowhere except at one point.
We see, however, that if f is continuous on an interval, then strict monotonicity
of f is necessary and sufficient for the inverse function to exist (see Exercise 10.54).
Exercises
10.75. Give a function f : [0, 1] → [0, 1] that is monotone and has infinitely many
jump discontinuities.
10.76. Prove that for every countable set A ⊂ R, there exists a monotone increasing
function f : R → R that jumps at every point of A but is continuous at every point
of R \ A. (H)
10.77. Let f be defined on a neighborhood of a, and let
m(h) = inf{ f (x) : x ∈ [a − h, a + h]}, M(h) = sup{ f (x) : x ∈ [a − h, a + h]}
for all h > 0. Prove that the limits limh→0+0 M(h) = M and limh→0+0 m(h) = m exist,
and moreover, that f is continuous at a if and only if m = M.
10.78. Can f have an inverse function in [−1, 1] if f ([−1, 1]) = [−1, 1] and f has
exactly two points of discontinuity in [−1, 1]?
10.79. Construct a function f : R → R that is continuous at every point different
from zero and has a discontinuity of the second type at zero.
10.80. Let f : R → R be a function such that f (x − 0) ≤ f (x) ≤ f (x + 0) for all x. Is
it true that f is monotone increasing? (H)
10.81. Prove that the set of discontinuities of the first type of every function f : R →
R is countable. (H)
10.82. Prove that if there is a discontinuity of the first type at every rational point of
the function f : R → R, then there is an irrational point where it is continuous. (H)
10.7 Convexity and Continuity
Our first goal is to prove that a convex function on an open interval is necessarily
continuous. As we will see, this follows from the fact that if f is convex, then every
point c has a neighborhood in which f can be surrounded by two continuous (linear)
functions that share the value f (c) at c. To see this, we first prove an auxiliary lemma.
We recall that (for a given f ) the linear function that agrees with f at a and b is ha,b ,
that is,
ha,b (x) = (( f (b) − f (a))/(b − a)) · (x − a) + f (a).
Lemma 10.74. Let f be convex on the interval I. If a, b ∈ I, a < b, and x ∈ I \ [a, b],
then
f (x) ≥ ha,b (x).
(10.22)
If f is strictly convex on I, then strict inequality holds in (10.22). (That is, outside the
interval [a, b], the points of the graph of f lie above the line connecting the points
(a, f (a)) and (b, f (b)); see Figure 9.2.)
Proof. Suppose that a < b < x. By the definition of convexity, f (b) ≤ ha,x (b), that is,
f (b) ≤ (( f (x) − f (a))/(x − a)) · (b − a) + f (a),
which yields (10.22) after a simple rearrangement. If instead x < a < b, then f (a) ≤ hx,b (a), that is,
f (a) ≤ (( f (b) − f (x))/(b − x)) · (a − x) + f (x),
which yields (10.22) after a simple rearrangement.
If f is strictly convex, then we can repeat the above argument using strict inequalities. ⊓⊔
Now we can easily prove the continuity of convex functions.
Theorem 10.75. If f is convex on the open interval I, then f is continuous on I.
Proof. Let c ∈ I be fixed, and choose points a, b ∈ I such that a < c < b. If x ∈ (c, b),
then by the above lemma and the convexity of f ,
ha,c (x) ≤ f (x) ≤ hc,b (x).
Since limx→c ha,c (x) = limx→c hc,b (x) = f (c), by the squeeze theorem we have
limx→c+0 f (x) = f (c). We can similarly get that limx→c−0 f (x) = f (c).
⊓⊔
If f is convex on the interval I, then for arbitrary a, b ∈ I,
f ((a + b)/2) ≤ ( f (a) + f (b))/2.   (10.23)
Indeed, if a = b, then (10.23) is clear, while if a < b, (10.23) follows from the inequality f (x) ≤ ha,b (x) applied to x = (a + b)/2. The functions that satisfy (10.23) for all a, b ∈ I are called weakly convex functions.7 The function f is weakly concave if f ((a + b)/2) ≥ ( f (a) + f (b))/2 for all a, b ∈ I.
The condition for weak convexity—true to its name—is a weaker condition than
convexity, that is, there exist functions that are weakly convex but not convex. One
can show that there exists a function f : R → R that is additive in the sense that
f (x + y) = f (x) + f (y) holds for all x, y ∈ R, but is not continuous. (The proof of
this fact, however, is beyond the scope of this book.) Now it is easy to see that
such a function is weakly convex, and it actually satisfies the stronger condition
f ((a + b)/2) = ( f (a) + f (b))/2 as well for all a, b. On the other hand, f is not
convex, since it is not continuous.
7 Weakly convex functions are often called Jensen-convex functions as well.
In the following theorem, we prove that if f is continuous, then the weak convexity of f is equivalent to the convexity of f . This means that for continuous functions, it is enough to check the condition for weak convexity in order to establish convexity, which is usually easier to do.
Theorem 10.76. Suppose that f is continuous and is weakly convex on the interval I. Then f is convex on I.
Proof. We have to show that if a, x0 , b ∈ I and a < x0 < b, then f (x0 ) ≤ ha,b (x0 ).
Suppose that this is not true, that is, f (x0 ) > ha,b (x0 ). This means that the function
g(x) = f (x) − ha,b (x) is positive at x0 . Since g(a) = 0, there exists a last point before
x0 where g vanishes. To see this, let A = {x ∈ [a, x0 ] : g(x) = 0}, and let α = sup A.
Then a ≤ α ≤ x0 . We show that g(α ) = 0. We can choose a sequence xn ∈ A that
tends to α , and so by the continuity of g, we have g(xn ) → g(α ). Since g(xn ) = 0 for
all n, we must have g(α ) = 0. It follows that α < x0 , and the function g is positive
on the interval (α , x0 ]: if there were a point α < x1 ≤ x0 such that g(x1 ) ≤ 0, then
by the Bolzano–Darboux theorem, g would have a root in [x1 , x0 ], which contradicts
the fact that α is the supremum of the set A.
By the exact same argument, there is a first point β after x0 where g vanishes,
and so the function g is positive in the interval [x0 , β ). Then g(α ) = g(β ) = 0, and
g(x) > 0 for all x ∈ (α , β ). Now g was obtained from f by subtracting a linear function ℓ. It
follows that g is also weakly convex; since ℓ is linear, ℓ((a + b)/2) = (ℓ(a) + ℓ(b))/2
for all a, b, so if f satisfies inequality (10.23), then subtracting ℓ does not change
this. However, g(α ) = g(β ) = 0 and g((α + β )/2) > 0, so (10.23) is not satisfied
with the choices a = α and b = β . This is a contradiction, which shows that f is
convex.
⊓⊔
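Theorem 10.76 reduces the convexity of a continuous function to the midpoint condition (10.23). The following minimal Python sketch checks (10.23) on a finite grid; the grid, the tolerance, and the helper midpoint_convex_on_grid are ad hoc illustrative choices, and such a finite test of course only supports, rather than proves, convexity.

import math

def midpoint_convex_on_grid(f, lo, hi, steps=200, tol=1e-12):
    # tests inequality (10.23) for all pairs of grid points
    xs = [lo + (hi - lo) * k / steps for k in range(steps + 1)]
    return all(f((a + b) / 2) <= (f(a) + f(b)) / 2 + tol
               for a in xs for b in xs)

print(midpoint_convex_on_grid(math.exp, -2.0, 2.0))     # True: exp is convex
print(midpoint_convex_on_grid(math.sin, 0.0, math.pi))  # False: sin is concave there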
Remark 10.77. If the function f : I → R satisfies the condition
f ((a + b)/2) < ( f (a) + f (b))/2   (10.24)
for all a, b ∈ I, a ≠ b, then we call f strictly weakly convex. We can similarly define
strictly weakly concave functions. By the previous theorem, it follows that if f is
continuous and strictly weakly convex on the interval I, then f is strictly convex on
I. Indeed, it is easy to see that if f is convex but not strictly convex on the interval
I, then there is a subinterval J on which f is linear (see Exercise 10.83). Then,
however, (10.24) does not hold for the points of J, since if a, b ∈ J and a ≠ b, then the two sides of (10.24) are equal.
We can similarly see that every continuous and strictly weakly concave function
is strictly concave.
We mention that the conditions of Theorem 10.76 can be greatly weakened: instead of assuming the continuity of f , it suffices to assume that I has a subinterval
on which f is bounded from above (see Exercises 10.99–10.102).
Exercises
10.83. Prove that if f is convex but not strictly convex on the interval I, then I has a
subinterval on which f is linear.
10.84. Let us call a function f : I → R barely convex if whenever a, b, c ∈ I and
a < b < c, then f (b) ≤ max( f (a), f (c)). Prove that if f : I → R is convex on the
interval I, then f is barely convex.
10.85. Let f be barely convex on the interval (a, b), and suppose that a < c < d < b
and f (c) > f (d). Show that f is monotone decreasing on (a, c]. Similarly, show that
if a < c < d < b and f (c) < f (d), then f is monotone increasing on [d, b).
10.86. Prove that the function f : I → R is barely convex on the interval (a, b) if and
only if one of the following cases applies:
(a) f is monotone decreasing on (a, b).
(b) f is monotone increasing on (a, b).
(c) There exists a point c ∈ (a, b) such that f is monotone decreasing on (a, c),
monotone increasing on (c, b), and f (c) ≤ max( f (c − 0), f (c + 0)).
10.87. Prove that if f : I → R is convex on the interval (a, b), then one of the following cases applies:
(a) f is monotone decreasing on (a, b).
(b) f is monotone increasing on (a, b).
(c) There exists a point c ∈ (a, b) such that f is monotone decreasing on (a, c] and
monotone increasing on [c, b).
10.88. Let f be convex on (−∞, ∞), and suppose that limx→−∞ f (x) = ∞. Is it possible that limx→∞ f (x) = −∞? (S)
10.89. Let f be convex on (−∞, ∞), and suppose that limx→−∞ f (x) = 0. Is it possible that limx→∞ f (x) = −∞? (H)
10.90. Let f be convex on (0, 1). Is it possible that limx→1−0 f (x) = −∞? (H)
10.91. Let f be weakly convex on the interval I. Prove that
f ((x1 + · · · + xn )/n) ≤ ( f (x1 ) + · · · + f (xn ))/n
for all x1 , . . . , xn ∈ I. (S)
10.92. Let f : R → R be an additive function (that is, suppose that for all x, y,
f (x + y) = f (x) + f (y)). Prove that f (rx) = r · f (x) for every real number x and rational number r.
10.93. Prove that if f : R → R is additive, then the function g(x) = f (x) − f (1) · x is
also additive and periodic, namely that every rational number is a period.
10.94. Let f : R → R be an additive function. Prove that if f is bounded from above
on an interval, then f (x) = f (1) · x for all x. (H)
10.95. Let f : R → R be an additive function. Prove that f 2 is weakly convex. (If f
is not a linear function, then f 2 is a weakly convex function that is bounded from
below, but is not convex.)
10.96. Let f be continuous on the interval I, and suppose that for all a, b ∈ I, a < b,
there exists a point a < x < b such that f (x) ≤ ha,b (x). Prove that f is convex. (H)
10.97. Let f be bounded on the interval I, and suppose that for all a, b ∈ I, a < b,
there exists a point a < x < b such that f (x) ≤ ha,b (x). Does it then follow that f is
convex?
10.98. Let f be convex on the open interval I. Prove that f is Lipschitz on every
closed and bounded subinterval of I.
The following four questions will take us through the proof that if f is weakly
convex on an open interval I, and I has a subinterval in which f is bounded from
above, then f is convex.
10.99. Let f be weakly convex on the open interval I, and let x0 ∈ I. Prove that if f
is bounded from above on (x0 − δ , x0 + δ ), then f is bounded on (x0 − δ , x0 + δ ). (S)
10.100. Let f be weakly convex on the open interval I. Let n ≥ 1 be an integer, and
let x and h be numbers such that x ∈ I and x + 2^n h ∈ I. Prove that
f (x + h) − f (x) ≤ (1/2^n) · [ f (x + 2^n h) − f (x)]. (S)
10.101. Let f be weakly convex on the open interval I, and let x0 ∈ I. Prove that if
f is bounded from above on (x0 − δ , x0 + δ ), then f is continuous at x0 . (S)
10.102. Let f be weakly convex on the interval I, and suppose that I contains a
nondegenerate subinterval on which f is bounded from above. Prove that f is continuous (and so by Theorem 10.76, convex) on I. (H)
10.8 Arc Lengths of Graphs of Functions
One of the key objectives of analysis is the measurement of lengths, areas, and
volumes. Our next goal is to deal with a special case: the notion of the arc length of
the graph of a function.8
We denote the line segment connecting the points p, q ∈ R2 by [p, q], that is,
[p, q] = {p + t(q − p) : t ∈ [0, 1]}. The length of the line segment [p, q] (by definition) is the distance between its endpoints, which is |q − p|. We call a set that is
8 We will have need of this in defining trigonometric functions. We return to dealing with arc lengths of more general curves in Chapter 16.
a union of connected line segments a broken line (or a polygonal path). Thus a
broken line is of the form [p0 , p1 ] ∪ [p1 , p2 ] ∪ . . . ∪ [pn−1 , pn ], where p0 , . . . , pn are
points of the plane. The length of the broken line is the sum of the lengths of the
segments that it comprises, that is, |p1 − p0 | + |p2 − p1 | + · · · + |pn − pn−1 |.
Since “the shortest distance between two points is a straight line”, the length of a
curve (no matter how we define it) should not be smaller than the distance between
its endpoints. If we inscribe a broken line [p0 , p1 ] ∪ [p1 , p2 ] ∪ . . . ∪ [pn−1 , pn ] “on
top of” a curve, then the part of the arc connecting pi−1 and pi has length at least
|pi − pi−1 |, and so the length of the whole curve needs to be at least |p1 − p0 | +
|p2 − p1 | + · · · + |pn − pn−1 |. On the other hand—again just using intuition—we can
expect a “very fine” broken line inscribed on the curve to “approximate” it well
enough so that the two lengths will be close. What we can take away from this is
that the arc length should be equal to the supremum of the lengths of the broken
lines on the curve. This finding is what we will accept as the definition. We remind
ourselves that we denote the graph of the function f : [a, b] → R by graph f .
Definition 10.78. Let f : [a, b] → R be an arbitrary function and let a = x0 < x1 <
· · · < xn = b be a partition F of the interval [a, b]. The inscribed polygonal path over
the partition F of f is the broken line over the points
(x0 , f (x0 )), . . . , (xn , f (xn )).
The arc length of graph f is the least upper bound of the set of lengths of all inscribed
polygonal paths of f . (The arc length can be infinite.) We denote the arc length of
the graph of f by s( f ; [a, b]). Thus
s( f ; [a, b]) = sup { ∑_{i=1}^{n} |pi − pi−1 | : a = x0 < x1 < · · · < xn = b, pi = (xi , f (xi )) (i = 0, . . . , n) }.
We say that graph f is rectifiable if s( f ; [a, b]) is finite.
Let us note that if a = b, then s( f ; [a, b]) = 0 for all functions f .
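The length of a single inscribed polygonal path is easy to compute, and by Definition 10.78 it is a lower bound for s( f ; [a, b]). The following minimal Python sketch does this for a uniform partition; the function name and the choice of a uniform partition are illustrative assumptions only.

import math

def inscribed_length(f, a, b, n):
    # length of the inscribed polygonal path over the uniform partition with n parts
    xs = [a + (b - a) * i / n for i in range(n + 1)]
    pts = [(x, f(x)) for x in xs]
    return sum(math.hypot(x1 - x0, y1 - y0)
               for (x0, y0), (x1, y1) in zip(pts, pts[1:]))

for n in (1, 2, 8, 64, 1024):
    print(n, inscribed_length(lambda x: x * x, 0.0, 1.0, n))  # increases toward s(f; [0, 1])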
Theorem 10.79.
(i) For an arbitrary function f : [a, b] → R,
√((b − a)^2 + ( f (b) − f (a))^2) ≤ s( f ; [a, b]),   (10.25)
and so if a < b, then s( f ; [a, b]) > 0.
(ii) If f : [a, b] → R is monotone, then graph f is rectifiable, and
s( f ; [a, b]) ≤ (b − a) + | f (b) − f (a)|.   (10.26)
Proof. It is clear that s( f ; [a, b]) is not smaller than the length of any inscribed polygonal path. Now the segment connecting (a, f (a)) and (b, f (b)) is such a path, corresponding to the partition a = x0 < x1 = b. Since its length is
√((b − a)^2 + ( f (a) − f (b))^2), (10.25) holds.
Now suppose that f is monotone increasing, let F : a = x0 < x1 < · · · < xn = b
be a partition of the interval [a, b], and denote the point (xi , f (xi )) by pi for all
i = 0, . . . , n. Then, using the monotonicity of f ,
|pi − pi−1 | = √((xi − xi−1 )^2 + ( f (xi ) − f (xi−1 ))^2) ≤ (xi − xi−1 ) + ( f (xi ) − f (xi−1 ))
for all i, so
∑_{i=1}^{n} |pi − pi−1 | ≤ [∑_{i=1}^{n} (xi − xi−1 )] + [∑_{i=1}^{n} ( f (xi ) − f (xi−1 ))] = (xn − x0 ) + ( f (xn ) − f (x0 )) = (b − a) + ( f (b) − f (a)).
Since the partition was arbitrary, we have established (10.26). If f is monotone
decreasing, then we can argue similarly, or we can reduce the statement to one about
monotone increasing functions by considering the function − f . ⊓⊔
Remark 10.80. Since not every monotone function is continuous, by (ii) of the previous theorem, there exist functions that are not everywhere continuous but whose
graphs are rectifiable. Thus rectifiability is a more general concept than what the
words “arc length” intuitively suggest.
The statement of the following theorem can also be expressed by saying that arc
lengths are additive.
Theorem 10.81. Let a < b < c and f : [a, c] → R. If graph f is rectifiable, then
s( f ; [a, c]) = s( f ; [a, b]) + s( f ; [b, c]).
(10.27)
We give a proof for this theorem in the appendix of the chapter.
We will need the following simple geometric fact soon.
Lemma 10.82. If A, B are convex polygons, and A ⊂ B, then the perimeter of A is
not larger than the perimeter of B.
Proof. If we cut off the part of the polygon B lying on one side of a line containing an edge of the polygon
A, then we get a polygon B1 with perimeter not larger than that of B but still containing A.
Repeating the process, we get a sequence B, B1 , . . . , Bn = A, in which the perimeter
of each polygon is at most as large as that of the one before it. ⊓⊔
Arc length of a circle. Let C denote the unit circle centered at the origin. Let the
part of C falling into the upper half-plane {(x, y) : y ≥ 0} be denoted by C+.
It is clear that C+ agrees with the graph of the function c(x) = √(1 − x^2) defined on
the interval [−1, 1]. Since c is monotone on both of the intervals [−1, 0] and [0, 1], by
the above theorems, the graph of c is rectifiable.
We denote the arc length of graph c (that is, of the half-circle C+) by π.
By the previous two theorems, we obtain the approximation 2√2 ≤ π ≤ 4, where the value 2√2 is the length of the inscribed broken line corresponding to the partition −1 = x0 < 0 = x1 < 1 = x2 . Inscribing different broken lines into C+ gives us different lower bounds for π, and with the help of these, we can approximate π with arbitrary precision (at least in theory).
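For instance, the following minimal Python sketch computes the lengths of inscribed broken lines over uniform partitions of [−1, 1]; the case n = 2 reproduces the bound 2√2, and finer partitions give better lower bounds for π. The uniform partition is just a convenient choice.

import math

def lower_bound_for_pi(n):
    xs = [-1.0 + 2.0 * i / n for i in range(n + 1)]
    ys = [math.sqrt(max(0.0, 1.0 - x * x)) for x in xs]   # clamp guards rounding below 0
    return sum(math.hypot(xs[i + 1] - xs[i], ys[i + 1] - ys[i]) for i in range(n))

for n in (2, 4, 16, 256, 4096):
    print(n, lower_bound_for_pi(n))   # 2.828..., increasing toward 3.14159...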
If we inscribe C into an arbitrary convex polygon P, then by Lemma 10.82, every
polygon inscribed in C has perimeter at most the perimeter of
P. Thus the supremum of the perimeters of the inscribed polygons, 2π , cannot be
larger than the perimeter of P.
The lower and upper approximations that we get this way can help
us show that π = 3.14159265 . . .. The
number π , like e, is irrational. One can
also show that π , again like e, is transcendental, but the proof of that is beyond the scope of this book.
Remark 10.83. To define trigonometric
functions, we will need the (seemingly trivial) fact that starting from
the point (0, 1), we can measure arcs
of any length on C. Consider the
case x ∈ [0, π ]. We have to show that
there is a number u ∈ [−1, 1] such
that s(c; [u, 1]) = x. With the notation
S(u) = s(c; [u, 1]), this means that the
function S(u) takes on every value between 0 and π on the interval [−1, 1]
(Figure 10.17).
Theorem 10.84. The function S is strictly monotone decreasing and continuous on
[−1, 1].
Proof. If −1 ≤ u < v ≤ 1, then by Theorem 10.81,
S(u) = s(c; [u, 1]) = s(c; [u, v]) + s(c; [v, 1]) = S(v) + s(c; [u, v]).
Since s(c; [u, v]) > 0, we know that S is strictly monotone decreasing on [−1, 1].
Since the function c is monotone both on [−1, 0] and on [0, 1], by (10.26),
s(c; [u, v]) ≤ (v − u) + |c(v) − c(u)|
if −1 ≤ u < v ≤ 0 or 0 ≤ u < v ≤ 1.
Thus
|S(u) − S(v)| ≤ |v − u| + |c(v) − c(u)|   (10.28)
whenever u, v ∈ [−1, 0] or u, v ∈ [0, 1]. Since the function c(u) = √(1 − u^2) is continuous on [−1, 1], we have that
lim_{v→u} (|v − u| + |c(v) − c(u)|) = 0
for all u ∈ [−1, 1], so by (10.28) we immediately have that S is continuous on [−1, 1]. ⊓⊔
By the previous theorem and by the Bolzano–Darboux theorem, the function
S(u) takes on every value between S(−1) and S(1), moreover exactly once. Since
S(−1) = π (since this was the definition of π ) and S(1) = 0, we have seen that if
0 ≤ x ≤ π , then we can measure out an arc of length x onto the circle C. What about
other lengths? Since the arc length of the semicircle is π , if we can measure one of
length x, then we can measure one of length x + π (or x − π ) as well, in which case
we just jump to the antipodal point.
Exercises
10.103. Let f : [a, b] → R be a function for which s( f ; [a, b]) = b − a. Prove that f
is constant.
10.104. Prove that the function f : [a, b] → R is linear (that is, it is of the form mx + b
with suitable constants m and b) if and only if
s( f ; [a, b]) = √((b − a)^2 + ( f (b) − f (a))^2).
10.105. Prove that if the graph of f : [a, b] → R is rectifiable, then f is bounded on
[a, b].
10.106. Prove that if the graph of f : [a, b] → R is rectifiable, then at every point
x ∈ [a, b), the right-hand limit of f exists, and at every point x ∈ (a, b], the left-hand
limit exists.
10.107. Prove that neither the graph of the Dirichlet function nor the graph of the
Riemann function over the interval [0, 1] is rectifiable.
10.108. Let the function f : [0, 1] → R be defined as follows: f (x) = x if x = 1/2n
(n = 1, 2, . . .), and f (x) = 0 otherwise. Prove that the graph of f is rectifiable. What
is its arc length?
10.109. Prove that if f : [a, b] → R is Lipschitz, then its graph is rectifiable.
10.9 Appendix: Proof of Theorem 10.81
Proof. Let us denote by S1 , S2 , and S the sets of the lengths of the inscribed polygonal paths over the intervals [a, b], [b, c], and [a, c], respectively. Then s( f ; [a, b]) =
sup S1 , s( f ; [b, c]) = sup S2 , and s( f ; [a, c]) = sup S by the definition of arc length.
Since one partition of [a, b] and one of [b, c] together yield a partition of the
interval [a, c], the sum of any number in S1 with any number in S2 is in S. This
means that S ⊃ S1 + S2 . By Theorem 3.20, sup(S1 + S2 ) = sup S1 + sup S2 , which
implies
s( f ; [a, c]) = sup S ≥ sup(S1 + S2 ) = sup S1 + sup S2 = s( f ; [a, b]) + s( f ; [b, c]).
Now we show that
s( f ; [a, c]) ≤ s( f ; [a, b]) + s( f ; [b, c]).
(10.29)
Let F : a = x0 < x1 < · · · < xn = c be a partition of the interval [a, c], and denote
the point (xi , f (xi )) by pi . Then the length of the inscribed polygonal path over F is
hF = ∑_{i=1}^{n} |pi − pi−1 |. If the point b is equal to one of the points xi , say to xk , then
F1 : a = x0 < x1 < · · · < xk = b and F2 : b = xk < xk+1 < · · · < xn = c are partitions
of the intervals [a, b] and [b, c] respectively, so
hF1 = ∑_{i=1}^{k} |pi − pi−1 | ≤ s( f ; [a, b]) and hF2 = ∑_{i=k+1}^{n} |pi − pi−1 | ≤ s( f ; [b, c]).
Since hF = hF1 + hF2 , it follows that hF ≤ s( f ; [a, b]) + s( f ; [b, c]). If the point b is not equal
to any of the points xi and xk−1 < b < xk , then let F1 : a = x0 < x1 < · · · < xk−1 < b
and F2 : b < xk < xk+1 < · · · < xn = c. Let q denote the point (b, f (b)). The lengths
of the inscribed polygonal paths corresponding to the partitions F1 and F2 are
hF1 = ∑_{i=1}^{k−1} |pi − pi−1 | + |q − pk−1 | ≤ s( f ; [a, b]) and hF2 = |pk − q| + ∑_{i=k+1}^{n} |pi − pi−1 | ≤ s( f ; [b, c]).
Now by the triangle inequality,
|pk − pk−1 | ≤ |q − pk−1 | + |pk − q|,
so it follows that hF ≤ hF1 + hF2 ≤ s( f ; [a, b]) + s( f ; [b, c]). Thus hF ≤ s( f ; [a, b]) +
s( f ; [b, c]) for all partitions F, which makes (10.29) clear.
⊓⊔
Chapter 11
Various Important Classes of Functions
(Elementary Functions)
In this chapter, we will familiarize ourselves with the most commonly occurring
functions in mathematics and in applications of mathematics to the sciences. These
are the polynomials, rational functions, exponential, power, and logarithm functions,
trigonometric functions, hyperbolic functions, and their inverses. We call the functions that we can get from the above functions using basic operations and composition elementary functions.
11.1 Polynomials and Rational Functions
We call the function p : R → R a polynomial function (a polynomial, for short) if
there exist real numbers a0 , a1 , . . . , an such that
p(x) = an xn + an−1 xn−1 + · · · + a1 x + a0
(11.1)
for all x. Suppose that in the above representation, an ≠ 0. If x1 is a root of p (that is, if
p(x1 ) = 0), then
p(x) = p(x) − p(x1 ) = an (x^n − x1^n) + · · · + a1 (x − x1 ).
Here, using the equality
x^k − x1^k = (x − x1 )(x^{k−1} + x1 x^{k−2} + · · · + x1^{k−2} x + x1^{k−1}),
and then taking out the common factor x − x1 , we get that p(x) = (x − x1 ) · q(x),
where q(x) = bn−1 x^{n−1} + · · · + b1 x + b0 and bn−1 = an ≠ 0.
If x2 is a root of q, then by repeating this process with q, we obtain that p(x) =
(x − x1 )(x − x2 ) · r(x), where r(x) = cn−2 x^{n−2} + · · · + c1 x + c0 and cn−2 = an ≠ 0.
It is clear that this process ends in at most n steps, and in the last step, we get the
following.
Lemma 11.1. Suppose that in (11.1), an ≠ 0. If p has a root, then there exist not
necessarily distinct real numbers x1 , . . . , xk and a polynomial p1 such that k ≤ n, the
polynomial p1 has no roots, and
p(x) = (x − x1 ) · . . . · (x − xk ) · p1 (x)
(11.2)
for all x. It then follows that p can have at most n roots.
The above lemma has several important consequences.
1. If a polynomial is not identically zero, then it has only finitely many roots.
Clearly, in the expression (11.1), not all coefficients are zero. If am is the
nonzero coefficient with largest index, then we can omit the terms with larger
indices. Then by the lemma, p can have at most m roots.
2. If two polynomials agree in infinitely many points, then they are equal everywhere. (Apply the previous point to the difference of the two polynomials.)
3. The identically zero function can be expressed as (11.1) only if a0 =. . .=an = 0
(since the identically zero function has infinitely many roots).
4. If in an expression (11.1), an ≠ 0 and the polynomial p defined by (11.1) has an
expression of the form
p(x) = bk x^k + bk−1 x^{k−1} + · · · + b1 x + b0 ,
where bk ≠ 0, then necessarily k = n and bi = ai for all i = 0, . . . , n. We see this
by noting that the difference is the identically zero function, so this statement
follows from the previous one.
The final corollary means that a not identically zero polynomial has a unique
expression of the form (11.1) in which an is nonzero.
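In practice, the factor x − x1 can be divided out by synthetic division. The following minimal Python sketch (an illustration, not part of the text's argument) does this for a polynomial given by its coefficient list; it uses the polynomial x^5 − x^4 − x + 1 and its root 1, which appear a little later in the text, as the example.

def deflate(coeffs, x1):
    """Given [a_n, ..., a_1, a_0], return (q, r) with p(x) = (x - x1) * q(x) + r."""
    q = [coeffs[0]]
    for a in coeffs[1:]:
        q.append(a + x1 * q[-1])
    r = q.pop()          # r = p(x1); it is 0 exactly when x1 is a root
    return q, r

p = [1, -1, 0, 0, -1, 1]         # x^5 - x^4 - x + 1
print(deflate(p, 1))             # ([1, 0, 0, 0, -1], 0), i.e. p(x) = (x - 1)(x^4 - 1)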
In this presentation of a polynomial, we call the coefficient an the leading coefficient of p, and the number n the degree of the polynomial. We denote the degree
of p by gr p.1 The zero-degree polynomials are thus the constant functions different
from zero. The identically zero function does not have a degree.
If a polynomial p is not identically zero, then its presentation of the form (11.2)
is unique. Clearly, if p(x) = (x − y1 ) · . . . · (x − ym ) · p2 (x) is another presentation,
then x1 is also a root of this, whence one of y1 , . . . , ym must be equal to x1 (since p2
has no roots). We can suppose that y1 = x1 . Then
(x − x2 ) · . . . · (x − xk ) · p1 (x) = (x − y2 ) · . . . · (x − ym ) · p2 (x)
for all x = x1 . Then the two sides agree at infinitely many points, so they are equal
everywhere. Since x2 is a root of the right-hand side, it must be equal to one of
y2 , . . . , ym . We can assume that y2 = x2 . Repeating this argument, we run out of
x − xi terms on the left-hand side, and at the kth step, we get that
p1 (x) = (x − yk+1 ) · . . . · (x − ym ) · p2 (x).
1 The notation is based on the Latin gradus = degree.
Since p1 has no roots, necessarily m = k and p1 = p2 .
If in the presentation (11.2) the factor x − α appears ℓ times, then we say that α is
a root of multiplicity ℓ. So, for example, the polynomial p(x) = x^5 − x^4 − x + 1 has
1 as a root of multiplicity two, and −1 is a root of multiplicity one (often called a
simple root), since p(x) = (x − 1)^2 (x + 1)(x^2 + 1), and x^2 + 1 has no roots.2
As for the analytic properties of polynomials, first of all, we should note that
a polynomial is continuous everywhere. This follows from Theorem 10.44, taking
into account the fact that constant functions and the function x are continuous everywhere. We now show that if in the presentation (11.1), n > 0 and an ≠ 0, then
lim_{x→∞} p(x) = ∞ if an > 0, and lim_{x→∞} p(x) = −∞ if an < 0.   (11.3)
This is clear from the rearrangement
p(x) = x^n · (an + an−1 /x + · · · + a0 /x^n),
using that lim_{x→∞} x^n = ∞ and
lim_{x→∞} (an + an−1 /x + · · · + a0 /x^n) = an .
Rational functions are functions of the form p/q, where p and q are polynomials, and q is not identically zero. The rational function p/q is defined where
the denominator is nonzero, so everywhere except for a finite number of points.
By 10.44, it again follows that a rational function is continuous at every point where
it is defined.
The following theorem is analogous to the limit relation (11.3).
Theorem 11.2. Let
p(x) = an x^n + an−1 x^{n−1} + · · · + a1 x + a0 and q(x) = bk x^k + bk−1 x^{k−1} + · · · + b1 x + b0 ,
where an ≠ 0 and bk ≠ 0. Then
lim_{x→∞} p(x)/q(x) = ∞ if an /bk > 0 and n > k;  −∞ if an /bk < 0 and n > k;  an /bk if n = k;  and 0 if n < k.
2 Since we defined polynomials on R, we have been talking about only real roots the entire time.
Among complex numbers, every nonconstant polynomial has a root; see the second appendix of
the chapter.
Exercises
11.1. Show that if p and q are polynomials, then so are p + q, p · q, and p ◦ q.
11.2. Let p and q be polynomials. Prove that
(a) if none of p, q, p + q are identically zero, then gr(p + q) ≤ max(gr p, gr q);
(b) if neither p nor q is identically zero, then gr(p · q) = (gr p) + (gr q);
(c) if none of p, q, and p ◦ q are identically zero then gr(p ◦ q) = (gr p) · (gr q).
Does it suffice to assume that only p and q are not identically zero?
11.3. Let p(x) = an xn + an−1 xn−1 + · · · + a1 x + a0 , where an > 0. Prove that p is
monotone increasing on the half-line (K, ∞) if K is sufficiently large.
11.4. Prove that if the polynomial p is not constant, then p takes on each of its values
at most k times, where k = gr p.
11.5. Prove that if the rational function p/q is not constant, then it takes on each of
its values at most k times, where k = max(gr p, gr q).
11.6. Prove that every polynomial is Lipschitz on every bounded interval.
11.7. Prove that every rational function is Lipschitz on every closed and bounded
interval on which it is defined.
11.2 Exponential and Power Functions
Before we define the two important classes of the exponential and power functions,
we fulfill our old promise (which we made after Theorem 3.27) and show that the
identities regarding taking powers still apply when we take arbitrary real powers.
The simple proof is made possible by our newly gained knowledge of limits of
sequences and their properties. We will use the following lemma in the proof of all
three identities.
Lemma 11.3. If a > 0 and xn → x, then axn → ax .
Proof. Suppose first that a > 1. In Theorem 3.25, we saw that
sup{ar : r ∈ Q, r < x} = inf{as : s ∈ Q, s > x},
and by definition, the shared value is ax . Let ε > 0 be given. Then there exist rational
numbers r < x and s > x such that
ax − ε < ar and as < ax + ε .
Since xn → x, for suitable n0 we have r < xn < s if n > n0 . Now according to Theorem 3.27, for every u < v, au < av . Thus for every n > n0 ,
ax − ε < ar < axn < as < ax + ε .
Since ε was arbitrary, we have shown that axn → ax .
The statement can be proved similarly if 0 < a < 1, while the case a = 1 is trivial.
⊓⊔
Theorem 11.4. For arbitrary a, b > 0 and real exponents x, y,
(ab)^x = a^x · b^x ,  a^{x+y} = a^x · a^y ,  and  (a^x)^y = a^{xy}.   (11.4)
Proof. We have already seen these identities for rational exponents in Theorem 3.23.
We begin by showing that the first two equalities in (11.4) hold for all positive a, b
and real numbers x, y. Choose two sequences (rn ) and (sn ) of rational numbers that
tend to x and y, respectively. (For example, if rn ∈ (x − (1/n), x + (1/n)) ∩ Q and
sn ∈ (y − (1/n), y + (1/n)) ∩ Q, then these sequences work.) Then by Lemma 11.3,
(ab)^x = lim_{n→∞} (ab)^{rn} = lim_{n→∞} a^{rn} · b^{rn} = a^x · b^x
and
a^{x+y} = lim_{n→∞} a^{rn+sn} = lim_{n→∞} a^{rn} · a^{sn} = a^x · a^y .
We only outline the proof of the third identity for the case a > 1 and x, y > 0. (The
remaining cases can be proven similarly, or can be reduced to this case by considering reciprocals.)
Let now rn → x and sn → y be sequences consisting of rational numbers that
satisfy 0 < rn < x and 0 < sn < y for all n. Then
a^{rn·sn} = (a^{rn})^{sn} < (a^x)^{sn} < (a^x)^y .   (11.5)
Here, other than using Theorem 3.27, in the middle inequality we used the fact
that if 0 < u < v and s > 0 is rational, then u^s < v^s. This follows from v^s/u^s =
(v/u)^s > (v/u)^0 = 1, since v/u > 1, and then we can apply Theorem 3.27 again.
Now from (11.5), we get that
a^{xy} = lim_{n→∞} a^{rn·sn} ≤ (a^x)^y .
The inequality a^{xy} ≥ (a^x)^y can be proven similarly if we take sequences rn → x and
sn → y consisting of rational numbers such that rn > x and sn > y for all n. ⊓⊔
We note that by the second identity of (11.4),
ax · a−x = ax+(−x) = a0 = 1,
and so a−x = 1/ax holds for all a > 0 and real numbers x.
Now we can continue and define exponential and power functions. If in the power
ab we consider the base to be fixed and let the exponent vary, then we get the
exponential functions; if we consider the exponent to be fixed and let the base be a
variable, then we get the power functions. The precise definition is the following.
Definition 11.5. Given arbitrary a > 0, the function x → ax (x ∈ R) is called the
exponential function with base a.
Given arbitrary b ∈ R, the function x → xb (x > 0) is called the power function
with exponent b.
Since 1x = 1 for all x, the function
that is identically 1 is one of the exponential functions. Similarly, by x0 =
1 (x > 0) and x1 = x, the functions 1
and x are power functions over the interval (0, ∞).
The most important properties of
exponential functions are summarized
by the following theorem.
Theorem 11.6.
(i) If a > 1, then the exponential function ax is positive everywhere, strictly monotone increasing, and continuous on R. Moreover,
lim_{x→∞} a^x = ∞ and lim_{x→−∞} a^x = 0.   (11.6)
(ii) If 0 < a < 1, then the exponential function ax is positive everywhere, strictly
monotone decreasing, and continuous on R. Moreover,
lim_{x→∞} a^x = 0 and lim_{x→−∞} a^x = ∞.
(iii) Given arbitrary a > 0, the function ax is convex on R (Figure 11.1).
Proof. We have already seen in Theorem 3.27 that if a > 1, then the function ax
is positive and strictly monotone increasing. Thus by Theorem 10.68, the limits
limx→∞ ax and limx→−∞ ax exist. And since an → ∞ and a−n → 0 if n → ∞, (11.6)
holds. The analogous statements when 0 < a < 1 can be proven similarly.
The continuity of exponential functions is clear by Lemma 11.3, using Theorem 10.19. Thus we have proved statements (i) and (ii).
Let a > 0 and x, y ∈ R. If we apply the inequality between the arithmetic and
geometric means to the numbers ax and ay , then we get that
a^{(x+y)/2} = √(a^x · a^y) ≤ (a^x + a^y)/2.
This means that the function ax is weakly convex. Since it is continuous, it is convex
by Theorem 10.76.
⊓⊔
The corresponding properties of power functions are given by the following
theorem (Figure 11.2).
Theorem 11.7.
(i) If b > 0, then the power function xb is positive, strictly monotone increasing,
and continuous on the interval (0, ∞), and moreover,
lim_{x→0+0} x^b = 0 and lim_{x→∞} x^b = ∞.
(ii) If b < 0, then the power function xb is positive, strictly monotone decreasing,
and continuous on the interval (0, ∞), and moreover,
lim_{x→0+0} x^b = ∞ and lim_{x→∞} x^b = 0.
(iii) If b ≥ 1 or b ≤ 0, then the function xb is convex on (0, ∞). If 0 ≤ b ≤ 1, then xb
is concave on (0, ∞).
To prove this theorem, we will require a generalization of Bernoulli’s inequality
(Theorem 2.5).
Theorem 11.8. Let x > −1.
(i) If b ≥ 1 or b ≤ 0, then (1 + x)b ≥ 1 + bx.
(ii) If 0 ≤ b ≤ 1, then (1 + x)b ≤ 1 + bx.
Proof. We already proved the statement for nonnegative exponents in Exercise 3.33.
The following simple proof is based on the convexity of the exponential function.
Let us change our notation: write a instead of x, and x instead of b. We have to
show that if a > −1, then x ∈ [0, 1] implies (1 + a)^x ≤ ax + 1, while x ∉ (0, 1) implies
(1 + a)^x ≥ ax + 1. Both statements follow from the fact that the function (1 + a)^x is
convex. We see this by noting that the chord connecting the points 0 and 1 is exactly
y = ax + 1; in other words, h0,1 (x) = ax + 1. Thus if x ∈ [0, 1], then the inequality
(1 + a)^x ≤ h0,1 (x) follows from the definition of convexity, while for x ∉ (0, 1), we
have (1 + a)^x ≥ h0,1 (x) by Lemma 10.74. ⊓⊔
Proof (Theorem 11.7). (i) Let b > 0. If t > 1, then t^b > t^0 = 1 by Theorem 3.27.
Now if 0 < x < y, then
y^b = (y/x)^b · x^b > 1 · x^b = x^b ,
which shows that xb is strictly monotone increasing. Since for arbitrary K > 0, we
have xb > K if x > K 1/b , it follows that limx→∞ xb = ∞. Similarly, for arbitrary ε > 0,
we have x^b < ε if x < ε^{1/b}, and so limx→0+0 x^b = 0. (In both arguments, we used
that (a^{1/b})^b = a^1 = a for all a > 0.)
Let x0 > 0 be given; we will show that the function x^b is continuous at x0 . If 0 <
ε < x0^b, then by the monotonicity of the power function with exponent 1/b, we have
(x0^b − ε)^{1/b} < x0 < (x0^b + ε)^{1/b}.
Now if
(x0^b − ε)^{1/b} < x < (x0^b + ε)^{1/b},
then
x0^b − ε < x^b < x0^b + ε .
This proves the continuity of x^b.
Statement (ii) can be proved the same way.
(iii) Since the function x^b is continuous, it suffices to show that for b ≥ 1 or b ≤ 0,
it is weakly convex, and for 0 ≤ b ≤ 1, it is weakly concave.
Consider first the case b ≥ 1 or b ≤ 0. We have to show that
((x + y)/2)^b ≤ (x^b + y^b)/2   (11.7)
for all x, y > 0. Let us introduce the notation (x+y)/2 = t, x/t = u, y/t = v. Then u + v = 2. By statement (i) of Theorem 11.8, u^b ≥ 1 + b · (u − 1) and v^b ≥ 1 + b · (v − 1).
Thus
(u^b + v^b)/2 ≥ 1 + b · (u + v − 2)/2 = 1.
If we multiply this inequality through by t^b, then we get (11.7).
Now suppose that 0 ≤ b ≤ 1; we have to show that ((x + y)/2)^b ≥ (x^b + y^b)/2.
We get this result by repeating the argument we used above, except that we apply
statement (ii) of Theorem 11.8 instead of (i). ⊓⊔
As another application of Theorem 11.8, we inspect the function (1 + 1/x)^x.
Theorem 11.9. The function f (x) = (1 + 1/x)^x is monotone increasing on the intervals (−∞, −1) and (0, ∞), and
lim_{x→∞} (1 + 1/x)^x = lim_{x→−∞} (1 + 1/x)^x = e.   (11.8)
Proof. If 0 < x < y, then y/x > 1, so by Theorem 11.8,
(1 + 1/y)^{y/x} ≥ 1 + (y/x) · (1/y) = 1 + 1/x,
and so by the monotonicity of the power function,
(1 + 1/y)^y ≥ (1 + 1/x)^x.   (11.9)
If, however, x < y < −1, then 0 < y/x < 1, so again by Theorem 11.8,
(1 + 1/y)^{y/x} ≤ 1 + (y/x) · (1/y) = 1 + 1/x,
and then by the monotone decreasing property of the power function with exponent
x, we get (11.9) again. Thus we have shown that f is monotone increasing on the
given intervals.
By Theorem 10.68, it then follows that f has limits at infinity in both directions.
Since (1 + 1/n)^n → e (since this was the definition of the number e), the limit at
infinity can only be e. On the other hand,
(1 − 1/n)^{−n} = ((n − 1)/n)^{−n} = (n/(n − 1))^{n} = (1 + 1/(n − 1))^{n−1} · (1 + 1/(n − 1)),
so
lim_{n→−∞} (1 + 1/n)^n = lim_{n→∞} (1 − 1/n)^{−n} = e.
This gives us the first equality of (11.8). ⊓⊔
We can generalize Theorem 11.9 in the following way.
Theorem 11.10. For an arbitrary real number b,
lim_{x→∞} (1 + b/x)^x = lim_{x→−∞} (1 + b/x)^x = e^b.   (11.10)
Proof. The statement is clear for b = 0. If b > 0, then using Theorem 10.41 for
limits of compositions yields
lim_{x→∞} (1 + b/x)^{x/b} = lim_{x→∞} (1 + 1/x)^x = e,
and so by the continuity of power functions with exponent b,
lim_{x→∞} (1 + b/x)^x = lim_{x→∞} [(1 + b/x)^{x/b}]^b = e^b.
We can find the limit at −∞ similarly, as well as in the case b < 0. ⊓⊔
Corollary 11.11. For an arbitrary real number b, lim_{h→0} (1 + bh)^{1/h} = e^b.
Proof. Applying Theorem 10.41 twice, we get
lim_{h→0±0} (1 + bh)^{1/h} = lim_{x→±∞} (1 + b/x)^x = e^b. ⊓⊔
By Theorem 11.10,
lim_{n→∞} (1 + b/n)^n = e^b   (11.11)
for all real numbers b. This fact has several important applications.
Examples 11.12. 1. Suppose that a bank pays p percent yearly interest on a bond. A
1-dollar bond would then grow to 1 + q dollars after a year, where q = p/100. If,
however, after half a year, the bank pays interest of p/2 percent, and from this point
on, the new interest applies to this increased value, then at the end of the year, our
bond will be worth 1 + (q/2) + [1 + (q/2)] · (q/2) = [1 + (q/2)]2 dollars. If now we
divide the year into n equal parts, and the interest is added to our bond after every
1/n year, which is then included in calculating interest from that point on, then
by the end of the year, the bond will be worth [1 + (q/n)]n dollars. This sequence
is monotone increasing (why?), and as we have seen, its limit is eq = e p/100 . This
means that no matter how often we add the interest to our bond during the year, its
value cannot exceed e p/100 , but it can get arbitrarily close to it.
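A minimal Python check of this computation, with the arbitrary choice p = 5 (so q = 0.05): the values (1 + q/n)^n increase with n and approach e^q, in accordance with (11.11).

import math

q = 0.05
for n in (1, 2, 4, 12, 365, 10**6):
    print(n, (1 + q / n) ** n)        # monotone increasing in n
print("e**q =", math.exp(q))          # the limiting value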
2. In this application, we inspect how much a certain material (say a window pane)
absorbs a certain radiation (say a fixed wavelength of light). The amount of absorption is a function of the thickness of the material. This function is not linear, but
experience shows that for thin slices of the material, a linear approximation works
well. This means that there exists a positive constant α (called the absorption coefficient) such that a slice of the material with thickness h absorbs about α · h of
entering radiation if h is sufficiently small.
Consider a slab of the material with thickness h, where h is now arbitrary. Subdivide this slab into n equal slices. If n is sufficiently large, then each slice with
thickness h/n will absorb α · (h/n) of the radiation that has made it that far. Thus
after the ith slice, (1 − (α h/n))i of the radiation remains, so after the light leaves
the whole slab, 1 − (1 − (α h/n))n of it is absorbed. If we let n go to infinity, we get
that a slab of thickness h will absorb 1 − e−α h of the radiation.
The following theorem gives an interesting characterization of exponential functions.
Theorem 11.13. The function f : R → R is an exponential function if and only if it
is continuous, not identically zero, and satisfies the identity
f (x1 + x2 ) = f (x1 ) · f (x2 )
for all x1 , x2 ∈ R.
(11.12)
Proof. We already know that the conditions are satisfied for an exponential function.
Suppose now that f satisfies these conditions, and let a = f (1). We will show that
a > 0 and f (x) = ax for all x.
Since f (x) = f ((x/2) + (x/2)) = f (x/2)^2 for all x, f is nonnegative everywhere.
If f vanishes at a point x0 , then by the identity f (x) = f ((x − x0 ) + x0 ) = f (x − x0 ) ·
f (x0 ), it would follow that f is identically zero, which we have excluded.
positive everywhere, so specifically a = f (1) is positive. Since f (0) = f (0 + 0) =
f (0)2 and f (0) > 0, we have f (0) = 1. Thus we get that 1 = f (0) = f (x + (−x)) =
f (x) · f (−x), so f (−x) = 1/ f (x) for all x.
From assumption (11.12), using induction we know that
f (x1 + · · · + xn ) = f (x1 ) · . . . · f (xn )
holds for all n and x1 , . . . , xn . Then by the choice x1 = · · · = xn = x, we get f (nx) =
f (x)^n. Thus if p and q are positive integers, then
f (p/q)^q = f ((p/q) · q) = f (p · 1) = f (1)^p = a^p,
that is, f (p/q) = a^{p/q}. Since f (−p/q) = 1/ f (p/q) = 1/a^{p/q} = a^{−p/q}, we have
shown that f (r) = a^r for all rational numbers r.
If x is an arbitrary real number, then let (rn ) be a sequence of rational numbers
that tends to x. Since f is continuous at x,
f (x) = lim_{n→∞} f (rn ) = lim_{n→∞} a^{rn} = a^x. ⊓⊔
Remark 11.14. Theorem 11.13 characterizes exponential functions with the help of
a functional equation. We ran into a similar functional equation in Chapter 8, when
we mentioned functions that satisfy
f (x1 + x2 ) = f (x1 ) + f (x2 )
(11.13)
while talking about weakly convex functions; this functional equation is called
Cauchy’s functional equation. We also mentioned that there exist solutions of
(11.13) that are not continuous. These solutions cannot be bounded from above
on any interval, by Exercise 10.94. If f is such a function, then the function e f (x)
satisfies the functional equation (11.12), and it is not bounded in any interval. This
remark shows that in Theorem 11.13, the continuity condition cannot be dropped
(although it can be weakened).
We will run into two relatives of the functional equations above, whose continuous solutions are exactly the power functions and logarithmic functions (see
Exercises 11.15 and 11.26). Equally noteworthy is d'Alembert's3 functional
equation:
f (x1 + x2 ) + f (x1 − x2 ) = 2 f (x1 ) f (x2 ).
See Exercises 11.35 and 11.44 regarding the continuous solutions of this functional
equation.
Generalized mean. If a > 0 and b ≠ 0, then we will also write the power a^{1/b} as the bth root of a (which is consistent with the fact that a^{1/k} is the kth root of a for positive integers k by definition). Let a1 , . . . , an be positive numbers, and let b ≠ 0. The quantity
G(b; a1 , . . . , an ) = ((a1^b + · · · + an^b)/n)^{1/b}
is called the generalized mean with exponent b of the numbers ai .
G(−1; a1 , . . . , an ), G(1; a1 , . . . , an ), and G(2; a1 , . . . , an ) are exactly the harmonic,
arithmetic, and quadratic means of the numbers ai . We also consider the geometric
mean as a generalized mean, since we define the generalized mean with exponent 0 by
G(0; a1 , . . . , an ) = (a1 · . . . · an )^{1/n}.
(To see the motivation behind this definition, see Exercise 12.35.)
Let b ≥ 1. Apply Jensen's inequality (Theorem 9.19) to x^b. We get that
((a1 + · · · + an )/n)^b ≤ (a1^b + · · · + an^b)/n.
Raising both sides to the power 1/b, we get the inequality
(a1 + · · · + an )/n ≤ G(b; a1 , . . . , an ) = ((a1^b + · · · + an^b)/n)^{1/b}.
This is called the generalized mean inequality (which implies the inequality of
arithmetic and quadratic means as a special case). However, this inequality is also
just a special case of the following theorem.
Theorem 11.15. Let a1 , . . . , an be fixed positive numbers. Then the function
b → G(b; a1 , . . . , an ) (b ∈ R)
is monotone increasing on R.
Proof. Suppose first that 0 < b < c. Then c/b > 1. Apply Jensen's inequality to the
function x^{c/b} and the numbers ai^b (i = 1, . . . , n). We get that
3 Jean le Rond d'Alembert (1717–1783), French mathematician.
((a1^b + · · · + an^b)/n)^{c/b} ≤ (a1^c + · · · + an^c)/n.
If we raise both sides to the power 1/c, then we get that
G(b; a1 , . . . , an ) ≤ G(c; a1 , . . . , an ).   (11.14)
Now let b = 0 and c > 0. Apply the inequality of arithmetic and geometric means
to the numbers ai^c (i = 1, . . . , n). We get that
G(0; a1 , . . . , an )^c = (a1^c · . . . · an^c)^{1/n} ≤ (a1^c + · · · + an^c)/n,
and if here we raise both sides to the power 1/c, then we get (11.14) again.
It is easy to check that
G(−b; a1 , . . . , an ) = 1 / G(b; 1/a1 , . . . , 1/an )
for all b. So using the inequalities we just proved, we get that for b < c ≤ 0, we have
G(b; a1 , . . . , an ) = 1 / G(−b; 1/a1 , . . . , 1/an ) ≤ 1 / G(−c; 1/a1 , . . . , 1/an ) = G(c; a1 , . . . , an ),
which proves the theorem. ⊓⊔
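The following minimal Python sketch illustrates Theorem 11.15 numerically; the sample numbers ai are an arbitrary choice, and the finite list of exponents only exhibits, rather than proves, the monotonicity.

def G(b, a):
    n = len(a)
    if b == 0:
        prod = 1.0
        for x in a:
            prod *= x
        return prod ** (1.0 / n)      # geometric mean
    return (sum(x ** b for x in a) / n) ** (1.0 / b)

a = [1.0, 2.0, 4.0, 8.0]
for b in (-2, -1, 0, 1, 2, 3):
    print(b, G(b, a))   # harmonic <= geometric <= arithmetic <= quadratic <= ...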
As we saw in Theorem 11.7, limx→0+0 x^b = 0 if b > 0; then it is in our best interest
for all positive powers of zero to be zero. Thus we define 0^b = 0 for all b > 0 (but
we still do not define the nonpositive powers of zero). With this convention, for
b > 0 we can extend the power function x^b to be defined at zero as well, where its
value is zero. The extended function x^b is then continuous from the right at zero
if b > 0.
Exercises
11.8. Prove that for numbers 0 < a < b, a^b = b^a holds if and only if there exists
a positive number x such that
a = (1 + 1/x)^x and b = (1 + 1/x)^{x+1}.
11.9. Prove that if a > 0 and a ≠ 1, then the function a^x is strictly convex.
11.10. Prove that if b > 1 or b < 0, then the function xb is strictly convex.
11.11. Prove that if 0 < b < 1, then the function xb is strictly concave.
11.12. Let x > −1 and b ∈ R. Prove that (1 + x)b = 1 + bx holds if and only if at
least one of x = 0, b = 0, b = 1 holds.
11.13. limx→−1−0 (1 + 1/x)^x = ?
11.14. Let G(b; a1 , . . . , an ) be the generalized mean with exponent b of the positive
numbers a1 , . . . , an . Prove that
lim_{b→∞} G(b; a1 , . . . , an ) = max(a1 , . . . , an ) and lim_{b→−∞} G(b; a1 , . . . , an ) = min(a1 , . . . , an ).
11.15. Prove that the function f : (0, ∞) → R is a power function if and only if it is
continuous, not identically zero, and satisfies the identity
f (x1 · x2 ) = f (x1 ) · f (x2 )
for all positive x1 , x2 .
11.3 Logarithmic Functions
If a > 0 and a ≠ 1, then the function a^x is strictly monotone and continuous on
R by Theorem 11.6. Thus if a > 0 and a ≠ 1, then a^x has an inverse function,
which we call the logarithm to base a, and we denote it by loga x. Since the image of a^x is (0, ∞), the function loga x is defined on the open interval (0, ∞), but
its image is R. Having the definition of the inverse function in mind, we come to
the conclusion that if a > 0, a ≠ 1, and x > 0, then loga x is the only real number
for which a^{loga x} = x holds. Specifically, loga 1 = 0 and loga a = 1 (Figure 11.3).
Theorem 11.16.
(i) If a > 1, then the function loga x is strictly monotone
increasing, continuous, and strictly concave on (0, ∞).
Moreover,
lim_{x→0+0} loga x = −∞ and lim_{x→∞} loga x = ∞.   (11.15)
(ii) If 0 < a < 1, then the function loga x is strictly monotone
decreasing, continuous, and strictly convex on (0, ∞).
Moreover,
lim_{x→0+0} loga x = ∞ and lim_{x→∞} loga x = −∞.   (11.16)
(iii) For all a > 0, a ≠ 1, and x, y > 0, the identities
loga (xy) = loga x + loga y,  loga (x/y) = loga x − loga y,  loga (1/y) = − loga y,   (11.17)
as well as
loga (x^y) = y · loga x and loga (y√x) = (1/y) · loga x   (11.18)
hold.
Proof. As an application of Theorems 11.6 and 10.72, we get that the function loga x
is continuous everywhere; if a > 1, it is strictly monotone increasing; and if 0<a<1,
it is strictly monotone decreasing. Since its image is R, the limit relations (11.15)
and (11.16) follow from this. Thus we have proved (i) and (ii) except for the statements about convexity and concavity.
Let a > 0, a ≠ 1. By the second identity in (11.4), we obtain that for arbitrary
x, y > 0 we have
a^{loga x + loga y} = a^{loga x} · a^{loga y} = x · y = a^{loga (xy)}.
Since the function a^x is strictly monotone, we get the first identity of (11.17) from this.
Thus
loga x = loga ((x/y) · y) = loga (x/y) + loga y,
which is the second identity in (11.17). Applying this to x = 1 gives us the third one.
If we now apply the third identity of (11.4), then we get that
a^{loga (x^y)} = x^y = (a^{loga x})^y = a^{y·loga x},
which implies the first identity of (11.18). Applying this to 1/y instead of y will
yield the second identity of (11.18), where we note that y√x = x^{1/y}.
If 0 < x < y, then by the inequality of arithmetic and geometric means, √(xy) <
(x + y)/2. Applying (11.17) and (11.18), we get that if a > 1, then
(loga x + loga y)/2 < loga ((x + y)/2),
where we note also that the function loga is strictly monotone increasing for a > 1.
This means that if a > 1, then loga is strictly weakly concave. Similarly, we get that
if 0 < a < 1, then the function loga is strictly weakly convex. Since we are talking
about continuous functions, when a > 1, the function loga x is strictly concave, while
if 0 < a < 1, then it is strictly convex. This concludes the proof of the theorem. ⊓⊔
Remark 11.17. Let a and b be positive numbers different from 1. Then by applying (11.18), we get
loga x = loga (b^{logb x}) = logb x · loga b,
which gives us
logb x = loga x / loga b   (11.19)
for all x > 0. This means that two logarithmic functions differ by only a constant
multiple. Thus it is useful to choose one logarithmic function and write the rest of
them as multiples of this one. But which one should we choose among the infinitely
many logarithmic functions? This is determined by necessity; clearly, it is most
useful to choose the one that we use the most. Often in engineering, the base-10
logarithm is used, while in computer science, it is base 2. We will use the logarithmic
function at base e, since this will make the equations that arise in differentiation
the simplest. From now on, we will write log x instead of loge x. (Sometimes, the
logarithm at base e is denoted by ln x, which stands for logaritmus naturalis, the
natural logarithm.)
If we apply (11.19) to a = e, we get that
logb x = log x / log b   (11.20)
whenever b > 0, b ≠ 1, and x > 0. Just as every logarithmic function can be expressed using the natural logarithm, we can also express every exponential function
using e^x. In fact, if a > 0, then by the definition of log a and by the third identity
of (11.4),
a^x = (e^{log a})^x = e^{x·log a} = (e^x)^{log a},
that is, a^x is a power of the function e^x.
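A minimal Python illustration of (11.20) and of the identity a^x = e^{x·log a} (the numbers x, b, a below are arbitrary; math.log is the natural logarithm):

import math

x, b, a = 7.0, 2.0, 3.0
print(math.log(x) / math.log(b), math.log(x, b))   # two ways to compute log_2 7
print(a ** x, math.exp(x * math.log(a)))           # a**x and e**(x log a) agree up to rounding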
Considering that e > 1, the function log x is concave. This fact makes proving an
important inequality possible.
Theorem 11.18 (Hölder's Inequality4). Let p and q be positive numbers such that
1/p + 1/q = 1. Then for arbitrary real numbers a1 , . . . , an and b1 , . . . , bn ,
|a1 b1 + · · · + an bn | ≤ (|a1 |^p + · · · + |an |^p)^{1/p} · (|b1 |^q + · · · + |bn |^q)^{1/q}.   (11.21)
Proof. First we show that
ab ≤ a^p/p + b^q/q   (11.22)
for all a, b ≥ 0. This is clear if a = 0 or b = 0, so we can assume that a > 0 and b > 0. Since log x is concave, by Lemma 9.17,

log (t·a^p + (1 − t)·b^q) ≥ t·log a^p + (1 − t)·log b^q   (11.23)

for all 0 < t < 1. Now we apply this inequality with t = 1/p. Then 1 − t = 1/q, and so the right-hand side of (11.23) becomes log a + log b = log(ab). Since log x is monotone increasing, we get that (1/p)·a^p + (1/q)·b^q ≥ ab, which is exactly (11.22).

⁴ Otto Ludwig Hölder (1859–1937), German mathematician.
Now to continue the proof of the theorem, let A = (|a_1|^p + · · · + |a_n|^p)^(1/p) and B = (|b_1|^q + · · · + |b_n|^q)^(1/q). If A = 0, then a_1 = . . . = a_n = 0, and so (11.21) holds, since both sides are zero. The same is the case if B = 0, so we can assume that A > 0 and B > 0.
Let α_i = |a_i|/A and β_i = |b_i|/B for all i = 1, . . . , n. Then

α_1^p + · · · + α_n^p = β_1^q + · · · + β_n^q = 1.   (11.24)

Now by (11.22),

α_i β_i ≤ (1/p)·α_i^p + (1/q)·β_i^q

for all i. Summing these inequalities, we get that

α_1 β_1 + · · · + α_n β_n ≤ (1/p)·(α_1^p + · · · + α_n^p) + (1/q)·(β_1^q + · · · + β_n^q) = (1/p)·1 + (1/q)·1 = 1,

using (11.24) and the assumption on the numbers p and q. If now we write |a_i|/A and |b_i|/B in place of α_i and β_i and multiply both sides by AB, then we get that

|a_1 b_1| + · · · + |a_n b_n| ≤ AB,

from which we immediately get (11.21) by the triangle inequality. ⊓⊔
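Inequality (11.21) is easy to test numerically. The following Python sketch (an added illustration, not part of the original text) compares the two sides of Hölder's inequality for random data and a conjugate pair p, q:

```python
import random

def holder_sides(a, b, p):
    """Return (left, right) of Hölder's inequality (11.21) for exponent p."""
    q = p / (p - 1.0)                      # conjugate exponent: 1/p + 1/q = 1
    left = abs(sum(x * y for x, y in zip(a, b)))
    right = sum(abs(x) ** p for x in a) ** (1.0 / p) * \
            sum(abs(y) ** q for y in b) ** (1.0 / q)
    return left, right

random.seed(1)
for _ in range(5):
    a = [random.uniform(-1, 1) for _ in range(10)]
    b = [random.uniform(-1, 1) for _ in range(10)]
    left, right = holder_sides(a, b, p=3.0)
    assert left <= right + 1e-12
    print(f"{left:.4f} <= {right:.4f}")
```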
Hölder’s inequality for the special case p = q = 2 gives the following, also very
notable, inequality.
Theorem 11.19 (Cauchy–Schwarz⁵–Bunyakovsky⁶ Inequality). For arbitrary real numbers a_1, . . . , a_n and b_1, . . . , b_n,

|a_1 b_1 + · · · + a_n b_n| ≤ √(a_1² + · · · + a_n²) · √(b_1² + · · · + b_n²).
We also give a direct proof.
Proof. For arbitrary i, j = 1, . . . , n, the number

A_{i,j} = a_i² b_j² + a_j² b_i² − 2 a_i a_j b_i b_j

is nonnegative, since A_{i,j} = (a_i b_j − a_j b_i)². If we add all the numbers A_{i,j} for all 1 ≤ i < j ≤ n, then we get the difference

(a_1² + · · · + a_n²) · (b_1² + · · · + b_n²) − (a_1 b_1 + · · · + a_n b_n)²,

which is thus also nonnegative. ⊓⊔

⁵ Hermann Amandus Schwarz (1843–1921), German mathematician.
⁶ Viktor Yakovlevich Bunyakovsky (1804–1889), Russian mathematician.
Exercises
11.16. Prove that 1 + 1/2 + · · · + 1/n > log n for all n. (H)
11.17. Prove that the sequence 1 + 1/2 + · · · + 1/n − log n is monotone decreasing
and convergent.7
11.18. Let f be a strictly monotone increasing and convex (concave) function on the
open interval I. Prove that the inverse of f is concave (convex).
11.19. Let f be strictly monotone decreasing and convex (concave) on the open
interval I. Prove that the inverse of f is convex (concave). Check these statements
for the cases that f is an exponential, power, or logarithmic function.
11.20. Prove that lim_{x→0+0} x^ε · log x = 0 for all ε > 0.
11.21. Prove that lim_{x→∞} x^{−ε} · log x = 0 for all ε > 0.
11.22. lim_{x→0+0} x^x = ?   lim_{x→∞} √[x]{x} = ?   lim_{x→0+0} (1 + 1/x)^x = ?
11.23. Let lim_{n→∞} a_n = a, lim_{n→∞} b_n = b. When does lim_{n→∞} a_n^{b_n} = a^b hold?
11.24. Prove that equality holds in (11.22) only if a^p = b^q.
11.25. Suppose that the numbers a_1, . . . , a_n, b_1, . . . , b_n are nonnegative. Prove that equality holds in (11.21) if and only if a_1 = . . . = a_n = 0 or if there exists a number t such that b_i^q = t · a_i^p for all i = 1, . . . , n.
11.26. Prove that the function f : (0, ∞) → R is a logarithmic function if and only if
it is continuous, not identically zero, and satisfies the identity
f (x1 · x2 ) = f (x1 ) + f (x2 )
for all positive x1 , x2 .
11.4 Trigonometric Functions
Defining trigonometric functions. Since trigonometric functions are functions
that deal with angles, or more precisely, map angles to real numbers, we first need
to clarify how angles are measured.
Let O be a point in the plane, let h and k be rays starting at point O, and let (h, k)∢
denote the section of the plane (one of them, that is) determined by h and k. Let D
be a disk centered at O, and C a circle centered at O. It is clear by inspection that the
angle determined by h and k is proportional to the area of the sector D ∩ (h, k)∢, as
well as the length of the arc C ∩ (h, k)∢. It would be a good choice, then, to measure
7 The limit of this sequence is called Euler’s constant. Whether this number is rational has been
an open question for a long time.
the angle by the area of the sector D ∩ (h, k)∢ or the length of the arc C ∩ (h, k)∢ (or
any other quantity directly proportional to these). We will choose the length of the
arc. We agree that an angle at the point O is measured by the length of the arc of a
unit circle centered at O that subtends the angle. We call this number the angular
measure, and the units for it are radians. Thus the angular measure of a straight
angle is the length of the unit semicircle, that is, π radians; the angular measure of
a right angle is half of this, that is, π /2 radians. From now on, we omit the word
“radians,” so measurements of angles (unless otherwise specified) will be given in
radians.
In trigonometry, we define the cosine (or sine) of an angle x smaller than π/2 by considering a right triangle with acute angle x and taking the quotient of the length of the side adjacent (or opposite) to the angle x and the hypotenuse. Let 0 < u < 1 and v = c(u) = √(1 − u²). Then the points O = (0, 0), P = (u, 0), and Q = (u, v) define a right triangle whose angle at O is defined by the rays OP and OQ.

Fig. 11.4

The arc of the circle C that falls into this section agrees with the graph of c over the interval [u, 1], whose length is s(c; [u, 1]) (see Definition 10.78). If s(c; [u, 1]) = x, then the angular measure of the angle POQ∢ is x, and so cos x = OP/OQ = u/1 = u and sin x = PQ/OQ = v/1 = v. We can also formulate this by saying that if, starting from the point (1, 0), we measure out an arc of length x onto the circle C in the positive direction, then the point we end up at has coordinates (cos x, sin x). Here the "positive direction" means that we move counterclockwise, that is, we start measuring the arc in the upper half-plane.⁸
We will accept the statement above as the definition of the functions cos x and
sin x.
Definition 11.20. Starting at the point (1, 0) on the unit circle centered at the origin,
measure an arc of length x in the positive or negative direction according to whether
x > 0 or x < 0. (If |x| ≥ 2π , then we will circle around more than once as necessary.)
The first coordinate of the point we end up at is denoted by cos x, while the second
coordinate is sin x. Thus we have defined the functions cos and sin on the set of all
real numbers.
Remark 11.21. In Remark 10.83, we saw that for 0 ≤ x ≤ π, there exists a u ∈ [−1, 1] such that S(u) = s(c; [u, 1]) = x. Thus the endpoint of an arc of length x measured on C is the point (u, c(u)) = (u, √(1 − u²)). It then follows that cos x = u and sin x = √(1 − u²) = √(1 − cos² x). The relation cos x = u means that on the interval [0, π], the function cos x is exactly the inverse of the function S.
⁸ That is, in the part of the plane given by {(x, y) : y > 0}.
We often need to use the expressions sin x/ cos x and cos x/ sin x, for which we
use the notation tg x and ctg x respectively.
Properties of Trigonometric Functions. As we noted, measuring an arc of
length π from any point (cos x, sin x) of the circle C gets us into the antipodal point,
whose coordinates are (− cos x, − sin x). Thus
cos(x + π ) = − cos x and sin(x + π ) = − sin x
(11.25)
for all x. Since (cos 0, sin 0) = (1, 0), we have cos 0 = 1 and sin 0 = 0. Thus
by (11.25),
cos(kπ ) = (−1)k and sin(kπ ) = 0
(11.26)
for all integers k. By the definition, we immediately get that
cos(x + 2π ) = cos x and sin(x + 2π ) = sin x
for all x, that is, cos x and sin x are both periodic functions with period 2π . Since
(cos x, sin x) is a point on the circle C, we have
sin² x + cos² x = 1
(11.27)
for all x. The circle C is symmetric with respect to the horizontal axis. So if we measure an arc of length |x| in the positive and in the negative direction starting at the point (1, 0), then we arrive at two points that are symmetric to each other with respect to the horizontal axis. This means that (cos(−x), sin(−x)) = (cos x, − sin x), that is,
cos(−x) = cos x and sin(−x) = − sin x
(11.28)
for all x. In other words, the function cos x is even, while the function sin x is odd.
Connecting the identities (11.28) and (11.25), we obtain that
cos(π − x) = − cos x and sin(π − x) = sin x
(11.29)
for all x. Substituting x = π/2 into this, we get cos(π/2) = 0, so by (11.25), we see that

cos(π/2 + kπ) = 0   (k ∈ Z).   (11.30)

Since sin(π/2) = √(1 − cos²(π/2)) = 1, then again by (11.25), we have

sin(π/2 + kπ) = (−1)^k   (k ∈ Z).   (11.31)
The circle C is also symmetric with respect to the 45° line passing through the origin. If we measure x in the positive direction from the point (1, 0) and then reflect over this line, we get the point that we would get if we measured x in the negative direction starting at (0, 1). Since (0, 1) = (cos(π/2), sin(π/2)), the endpoint of the arc we mirrored is the same as the endpoint of the arc of length x − (π/2) measured from (1, 0) in the negative direction, which is the same as (π/2) − x in the positive direction. Thus we see that the point (sin x, cos x)—the reflection of (cos x, sin x) about the 45° line passing through the origin—agrees with the point (cos((π/2) − x), sin((π/2) − x)), so

cos(π/2 − x) = sin x   and   sin(π/2 − x) = cos x   (11.32)
for all x. The following identities are the addition formulas for sin and cos.
sin(x + y) = sin x cos y + cos x sin y,
sin(x − y) = sin x cos y − cos x sin y,
cos(x + y) = cos x cos y − sin x sin y,
cos(x − y) = cos x cos y + sin x sin y.
(11.33)
The proofs of the addition formulas are outlined in the first appendix of the chapter. The proof there is based on rotations about the origin. Later, using differentiation, we will give a proof that does not use geometric concepts and does not rely on
geometric inspections (see the second appendix of Chapter 13).
The following identities are simple consequences of the addition formulas:

sin 2x = 2 sin x cos x,   cos 2x = cos² x − sin² x = 1 − 2 sin² x = 2 cos² x − 1,   (11.34)

cos² x = (1 + cos 2x)/2,   sin² x = (1 − cos 2x)/2,   (11.35)

cos x cos y = (1/2)(cos(x + y) + cos(x − y)),
sin x sin y = (1/2)(cos(x − y) − cos(x + y)),   (11.36)
sin x cos y = (1/2)(sin(x + y) + sin(x − y)),

sin x + sin y = 2 sin((x + y)/2) cos((x − y)/2),
sin x − sin y = 2 sin((x − y)/2) cos((x + y)/2),   (11.37)
cos x + cos y = 2 cos((x + y)/2) cos((x − y)/2),
cos x − cos y = −2 sin((x + y)/2) sin((x − y)/2).

Now we turn to the analytic properties of the functions sin and cos (Figure 11.5).
Theorem 11.22.
(i) The function cos x is strictly monotone decreasing on the intervals [2kπ ,
(2k + 1)π ] and strictly monotone increasing on the intervals [(2k − 1)π , 2kπ ],
(k ∈ Z). The only roots of the function cos x are the points (π /2) + kπ .
(ii) The function sin x is strictly monotone increasing on the intervals [2kπ − (π /2),
2kπ + (π /2)] and strictly monotone decreasing on the intervals [2kπ + (π /2),
2kπ + (3π /2)], (k ∈ Z). The only roots of the function sin x are the points kπ .
Fig. 11.5
Proof. (i) By Remark 11.21, on the interval [0, π] the function cos is none other than the inverse of the function S. Since S is strictly monotone decreasing on [−1, 1], the inverse function cos x is also strictly monotone decreasing on [0, π]. Thus by (11.25), it is clear that for every k ∈ Z, cos x is strictly monotone decreasing on [2kπ, (2k + 1)π] and is strictly monotone increasing on [(2k − 1)π, 2kπ]. The statement about the roots of cos x is clear from this.
Statement (ii) follows from (i) via the identity (11.32). ⊓⊔
The following inequalities are especially important for applications.
Theorem 11.23. For all x, the inequalities

|sin x| ≤ |x|   (11.38)

and

0 ≤ 1 − cos x ≤ x²   (11.39)

hold.
Proof. It suffices to prove the inequality (11.38) for nonnegative x, since both sides are even. If x > π/2, then |sin x| ≤ 1 < π/2 < x, and the statement holds. Thus we can suppose that 0 ≤ x ≤ π/2. Let u = cos x and v = sin x. Then by the definition of cos x and sin x, in the graph of the function c(t) = √(1 − t²), the arc length over the interval [u, 1] is exactly x, since (cos x, sin x) = (u, v) are the coordinates of the point that we get by measuring an arc of length x onto the circle C; see Figure 11.4. Then by Theorem 10.79,

0 ≤ sin x = v ≤ √((1 − u)² + (v − 0)²) ≤ s(c; [u, 1]) = x,

which proves (11.38).
For inequality (11.39), it again suffices to consider only nonnegative x, since cos is an even function. If x > π/2, then

1 − cos x ≤ 2 < (3/2)² < (π/2)² < x²,
and so (11.39) is true. If, however, 0 ≤ x ≤ π/2, then cos x ≥ 0, and so

1 − cos x = (1 − cos² x)/(1 + cos x) = sin² x/(1 + cos x) ≤ sin² x ≤ x²,

which gives (11.39). ⊓⊔
Theorem 11.24. For all x, y ∈ R, the inequalities
| cos x − cos y| ≤ |x − y|
(11.40)
| sin x − sin y| ≤ |x − y|
(11.41)
and
hold.
Proof. By Theorem 11.23 and the identity (11.37), we get that

|cos x − cos y| = 2 · |sin((x + y)/2)| · |sin((x − y)/2)| ≤ 2 · |(x − y)/2| · 1 = |x − y|

and

|sin x − sin y| = 2 · |sin((x − y)/2)| · |cos((x + y)/2)| ≤ 2 · |(x − y)/2| · 1 = |x − y|. ⊓⊔
Theorem 11.25.
(i) The functions sin and cos are continuous everywhere, and in fact, they are
Lipschitz.
(ii) The function sin x is strictly concave on the intervals [2kπ , (2k + 1)π ] and
strictly convex on the intervals [(2k − 1)π , 2kπ ], (k ∈ Z).
(iii) The function cos x is strictly concave on the intervals [2kπ − (π/2), 2kπ + (π/2)] and strictly convex on the intervals [2kπ + (π/2), 2kπ + (3π/2)], (k ∈ Z).
Proof. Statement (i) is clear by the previous theorem.
If 0 ≤ x < y ≤ π, then applying the first identity in (11.37) yields

(sin x + sin y)/2 = sin((x + y)/2) cos((x − y)/2) < sin((x + y)/2).

This shows that sin x is strictly weakly concave on the interval [0, π]. Since it is continuous, it is strictly concave there. Then statement (ii) follows immediately by the identity sin(x + π) = − sin x.
Finally, statement (iii) follows from (ii) using the identity (11.32). ⊓⊔
Theorem 11.26. If |x| < π/2 and x ≠ 0, then

cos x ≤ (sin x)/x ≤ 1.   (11.42)
Proof. Since the function (sin x)/x is even, it suffices to consider the case x > 0.
The inequality (sin x)/x ≤ 1 is clear by (11.38).
So all we need to show is that if 0 < x < π/2, then

x ≤ tg x.   (11.43)

In Figure 11.6, we have A = (cos x, 0), B = (cos x, sin x). The tangent to the circle K at the point B intersects the horizontal axis at the point C. The reflection of the point B over the horizontal axis is D. Since OAB and OBC are similar triangles, BC = sin x/cos x = tg x.

Fig. 11.6

Let us inscribe an arbitrary polygonal path in the arc DB. Extending this with the segments OD and OB gives us a convex polygon T, which is part of the quadrilateral ODCB. By Lemma 10.82 it then follows that the perimeter of T is at most the perimeter of ODCB, which is 2 + 2 tg x. Since the supremum of the lengths of the polygonal paths inscribed in DB is 2x, we get that 2 + 2x ≤ 2 + 2 tg x, which proves (11.43). ⊓⊔
Theorem 11.27. The limit relations

lim_{x→0} (1 − cos x)/x = 0   (11.44)

and

lim_{x→0} (sin x)/x = 1   (11.45)

hold.
Proof. The two statements follow from inequalities (11.39) and (11.42) by applying the squeeze theorem. ⊓⊔
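The limit relations (11.44) and (11.45) can also be observed numerically. The short Python sketch below is an added illustration (not part of the original text) that prints the two quotients for small x:

```python
import math

for k in range(1, 6):
    x = 10.0 ** (-k)
    print(f"x = {x:.0e}:  (1 - cos x)/x = {(1 - math.cos(x)) / x:.3e},  "
          f"sin x / x = {math.sin(x) / x:.12f}")
# (1 - cos x)/x tends to 0 and sin x / x tends to 1 as x -> 0
```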
Now we summarize the properties of the functions tg x and ctg x. The function tg x = sin x/cos x is defined where the denominator is nonzero, that is, at the points x ≠ (π/2) + kπ, where k is an arbitrary integer. By the addition formulas of sin and cos, it is easy to deduce the following identities:

tg(x + y) = (tg x + tg y)/(1 − tg x · tg y),
tg(x − y) = (tg x − tg y)/(1 + tg x · tg y),   (11.46)
tg 2x = 2 tg x/(1 − tg² x).

Fig. 11.7
The function tg x is continuous on its domain, since it is the quotient of two continuous functions. The function tg x is odd, and it is periodic with period π, since

tg(x + π) = sin(x + π)/cos(x + π) = (−sin x)/(−cos x) = sin x/cos x = tg x

for all x ≠ (π/2) + kπ. On the interval [0, π/2), the function sin x is strictly monotone increasing and nonnegative, while cos x is strictly monotone decreasing and positive, so tg x is strictly monotone increasing there. Since tg 0 = 0 and tg x is odd, tg x is strictly increasing on the whole interval (−π/2, π/2).
The limit relations

lim_{x→−(π/2)+0} tg x = −∞   and   lim_{x→(π/2)−0} tg x = ∞   (11.47)

hold, following from the facts

lim_{x→±π/2} sin x = sin(±π/2) = ±1,   lim_{x→±π/2} cos x = cos(±π/2) = 0,

and that cos x is positive on (−π/2, π/2) (Figure 11.7).
The function ctg x = cos x/sin x is defined where the denominator is not zero, that is, at the points x ≠ kπ, where k is an arbitrary integer. The function ctg x is continuous on its domain, is odd, and is periodic with period π. By (11.32) we get that

ctg x = tg(π/2 − x)   (11.48)

for all x ≠ kπ. It then follows that the function ctg x is strictly monotone decreasing on the interval (0, π), and that

lim_{x→0+0} ctg x = ∞   and   lim_{x→π−0} ctg x = −∞.   (11.49)
Exercises
11.27. Prove the following equalities:
(a) cos(π/6) = √3/2,   (b) cos(π/4) = √2/2,   (c) cos(π/3) = 1/2,
(d) cos(2π/3) = −1/2,   (e) cos(3π/4) = −√2/2,   (f) cos(5π/6) = −√3/2,
(g) sin(π/6) = 1/2,   (h) sin(π/4) = √2/2,   (i) sin(π/3) = √3/2,
(j) sin(2π/3) = √3/2,   (k) sin(3π/4) = √2/2,   (l) sin(5π/6) = 1/2.
11.28. Prove that

sin 3x = 4 · sin x · sin(x + π/3) · sin(x + 2π/3)

for all x.
11.29. Prove that

sin 4x = 8 · sin x · sin(x + π/4) · sin(x + 2π/4) · sin(x + 3π/4)

for all x. How can the statement be generalized?
11.30. Let (a_n) denote sequence (15) in Example 4.1, that is, let a_1 = 0 and a_{n+1} = √(2 + a_n) (n ≥ 1). Prove that a_n = 2 · cos(π/2^n).
11.31. Prove that if n is a positive integer, then cos nx can be written as an nth-degree polynomial in cos x, that is, there exists a polynomial T_n of degree n such that cos nx = T_n(cos x) for all x. (H)
11.32. Prove that if n is a positive integer, then sin nx/sin x can be written as a polynomial of degree n − 1 in cos x, that is, there exists a polynomial U_n of degree n − 1 such that sin nx = sin x · U_n(cos x) for all x.⁹
11.33. Can sin nx be written as a polynomial of sin x for all n ∈ N+ ?
11.34. Prove that the equation x · sin x = 100 has infinitely many solutions.
11.35. Let f : R → R be continuous, not identically zero, and suppose that | f (x)| ≤ 1
and f (x + y) + f (x − y) = 2 f (x) f (y) for all x, y. Prove that for a suitable constant c,
f (x) = cos cx for all x. (∗ H)
⁹ The polynomials T_n and U_n defined in this way are called the Chebyshev polynomials.
11.36. (a) Let A_n = ∑_{k=1}^{n} sin^{−2}(kπ/(2n)). Show that A_1 = 1 and A_{2n} = 4 · A_n − 1 for all n = 1, 2, . . ..
(b) Prove that A_{2^n} = (2/3) · 4^n + (1/3) (n = 0, 1, . . .).
(c) Prove that sin^{−2} x − 1 < x^{−2} < sin^{−2} x for all 0 < x < π/2, and deduce from this that

A_n − n < (2n/π)² · ∑_{k=1}^{n} 1/k² < A_n

for all n.
(d) Prove that ∑_{k=1}^{∞} 1/k² = π²/6. (H S)
11.5 The Inverse Trigonometric Functions
Since the function cos x is strictly monotone on the interval [0, π ], it has an inverse
there, which we call the arccosine and denote by arccos x. We are already familiar with this function (Figures 11.8 and 11.9). In Remark 11.21, we noted that the
function cos x on the interval [0, π ] agrees with the inverse of S(u) = s(c; [u, 1]).
Thus the function arccos is none other than the function S. The function arccos x is
thus defined on the interval [−1, 1], and there it is strictly monotone decreasing and
continuous.
Fig. 11.8
Fig. 11.9
The function sin x is strictly monotone increasing on the interval [−π/2, π/2], so it has an inverse there, which we call the arcsine function, and we denote it by arc sin x. This function is defined on the interval [−1, 1], and is strictly monotone increasing and continuous there. From the identities (11.32), we see that for all x ∈ [−1, 1],

arccos x = π/2 − arc sin x.   (11.50)

The function tg x is strictly monotone increasing on the interval (−π/2, π/2), so it has an inverse there, which we call the arctangent, and we denote it by arc tg x. By the limit relation (11.47) and the continuity of tg x, it follows that the function tg x takes on every real value over (−π/2, π/2). Thus the function arc tg x is defined on all of the real number line, is continuous, and is strictly monotone increasing (Figure 11.10). It is also clear that

lim_{x→−∞} arc tg x = −π/2   and   lim_{x→∞} arc tg x = π/2.   (11.51)
Fig. 11.10
The function ctg x is strictly monotone decreasing on the interval (0, π ), so it
has an inverse there, which we denote by arcctg x. Since ctg x takes on every real
value over (0, π ), the function arcctg x is defined on the whole real number line, is
continuous, and is strictly monotone decreasing. By (11.48), it follows that for all x,

arcctg x = π/2 − arc tg x.   (11.52)
Exercises
11.37. Draw the graphs of the following functions:
(a) arc sin(sin x),   (b) arccos(cos x),   (c) arc tg(tg x),
(d) arcctg(ctg x),   (e) arc tg x − arcctg(1/x).
11.38. Prove the following identities:
(a) arc sin x = arccos √(1 − x²) (x ∈ [0, 1]).
(b) arc tg x = arc sin(x/√(1 + x²)) (x ∈ R).
(c) arc sin x = arc tg(x/√(1 − x²)) (x ∈ (−1, 1)).
(d) arc tg x + arc tg(1/x) = π/2 (x > 0).
11.39. Solve the following equation: sin(2 arc tg x) = 1/x.
11.6 Hyperbolic Functions and Their Inverses
The so-called hyperbolic functions defined below share many properties with their trigonometric analogues. We call the functions

sh x = (e^x − e^{−x})/2   and   ch x = (e^x + e^{−x})/2

hyperbolic sine and hyperbolic cosine, respectively. By their definition, it is clear that sh x and ch x are defined everywhere and are continuous, and moreover, that ch x is even, while sh x is odd.
Since the function e^x is strictly monotone increasing and the function e^{−x} is strictly monotone decreasing, the function sh x is strictly monotone increasing on R (Figure 11.11). By the limit relation (11.6), it is clear that

lim_{x→−∞} sh x = −∞   and   lim_{x→∞} sh x = ∞.   (11.53)

Fig. 11.11
Since e^x and e^{−x} are strictly convex, it is easy to see that the function ch x is strictly convex on R.
The following properties, which follow easily from the definitions, show some of the similarities between the hyperbolic and trigonometric functions. First of all,

ch² x − sh² x = 1

for all x, that is, the point (ch u, sh u) lies on the hyperbola with equation x² − y² = 1 (whence the name "hyperbolic"; how this is analogous to cos² x + sin² x = 1 is clear).
Considering that ch x is positive everywhere,

ch x = √(1 + sh² x)   (11.54)

for all x. It immediately follows that the smallest value of ch x is 1, and moreover, that ch x is strictly monotone increasing on [0, ∞) and strictly monotone decreasing on (−∞, 0] (Figure 11.12).
For all x, y, the addition formulas
sh(x + y) = sh x ch y + ch x sh y,
sh(x − y) = sh x ch y − ch x sh y,
ch(x + y) = ch x ch y + sh x sh y,
ch(x − y) = ch x ch y − sh x sh y
(11.55)
Fig. 11.12
hold, which are simple consequences of the identity e^{x+y} = e^x · e^y. The following identities then follow from the addition formulas:

sh 2x = 2 sh x ch x,
ch 2x = ch² x + sh² x = 1 + 2 sh² x = 2 ch² x − 1,
ch² x = (1 + ch 2x)/2,
sh² x = (−1 + ch 2x)/2.   (11.56)
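A short numerical spot-check of ch²x − sh²x = 1 and of the first addition formula in (11.55), written in Python and added here purely as an illustration (not part of the original text):

```python
import math

def sh(x): return (math.exp(x) - math.exp(-x)) / 2.0
def ch(x): return (math.exp(x) + math.exp(-x)) / 2.0

for x, y in [(0.3, 1.7), (-2.0, 0.5), (4.0, -1.0)]:
    assert abs(ch(x) ** 2 - sh(x) ** 2 - 1.0) < 1e-9
    assert abs(sh(x + y) - (sh(x) * ch(y) + ch(x) * sh(y))) < 1e-9
print("identities hold (up to rounding) at the sampled points")
```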
Remark 11.28. The similarities to trigonometric functions we see above are surprising, and what makes the analogy even more puzzling is how differently the two
families of functions were defined. The answer lies in the fact that—contrary to
how it seems—exponential functions very much have a connection with trigonometric functions. This connection, however, can be seen only through the complex
numbers, and since dealing with complex numbers is not our goal here, we only outline this connection in the second appendix of the chapter (as well as Remarks 13.18
and 11.29).
Much like the functions tg and ctg, we introduce th x = sh x/ ch x and cth x =
ch x/ sh x (Figures 11.13 and 11.14). We leave it to the reader to check that the function th x (hyperbolic tangent) is defined on the real numbers, continuous everywhere, odd, and strictly monotone increasing, and moreover, that
lim_{x→−∞} th x = −1   and   lim_{x→∞} th x = 1.   (11.57)
The function cth x (hyperbolic cotangent) is defined on the set R \ {0}, and it is
continuous there; it is strictly monotone decreasing on the intervals (−∞, 0) and
(0, ∞), and moreover,
lim_{x→±∞} cth x = ±1   and   lim_{x→0±0} cth x = ±∞.   (11.58)
Fig. 11.13
Fig. 11.14
The inverse of the function sh x is called the area hyperbolic sine function, denoted by arsh x (Figure 11.15). Since sh x is strictly monotone increasing, continuous, and by (11.53) takes on every value, arsh x is defined on all real numbers, is continuous everywhere, and is strictly monotone increasing. We can actually express arsh with the help of power and logarithmic functions. Notice that arsh x = y is equivalent to sh y = x. If we write the definition of sh y into this and multiply both sides by 2e^y, we get the equation e^{2y} − 1 = 2xe^y. This is a quadratic equation in e^y, from which we get e^y = x ± √(x² + 1). Since e^y > 0, only the positive sign is possible. Finally, taking the logarithm of both sides, we get that

arsh x = log(x + √(x² + 1))   (11.59)

for all x.
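Formula (11.59) can be checked numerically against the definition sh x = (e^x − e^{−x})/2; the sketch below (an added Python illustration, not part of the original text) verifies that x ↦ log(x + √(x² + 1)) really does invert sh:

```python
import math

def sh(x):
    return (math.exp(x) - math.exp(-x)) / 2.0

def arsh(x):                      # formula (11.59)
    return math.log(x + math.sqrt(x * x + 1.0))

for x in [-3.0, -0.5, 0.0, 1.0, 4.0]:
    assert abs(arsh(sh(x)) - x) < 1e-12
    print(x, arsh(sh(x)))
```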
The function ch x is strictly monotone increasing and continuous on the interval
[0, ∞), and by (11.54), its image there is [1, ∞). The inverse of the function ch x
restricted to the interval [0, ∞) is called the area hyperbolic cosine, and we denote
it by arch x (Figure 11.16). By the above, arch x is defined on the interval [1, ∞), and
it is continuous and strictly monotone increasing. It is easy to see that
arch x = log(x + √(x² − 1))   (11.60)
for all x ≥ 1.
The inverse of the function th x is called the area hyperbolic tangent, and we
denote it by arth x (Figure 11.17). By the properties of th x, we see that arth x is
defined on the interval (−1, 1), is continuous, and is strictly monotone increasing.
Fig. 11.15
Fig. 11.16
It is easy to see that

arth x = (1/2) · log((1 + x)/(1 − x))   (11.61)

for all x ∈ (−1, 1). We leave the definition of arcth and the verification of its most important properties to the reader.
Fig. 11.17
Remark 11.29. The names of trigonometric and hyperbolic functions come from
Latin words. The notation sin x comes from the word sinus meaning bend, fold.
The notation tg x comes from the word tangens, meaning tangent. The name is
justified by the fact that if 0 < x < π /2, then tg x is the length of the segment tangent
to the circle at (1, 0), starting there and ending where the line going through the
origin and the point (cos x, sin x) crosses it (see Figure 11.4).
The inverse trigonometric functions all have the arc prefix, which implies that
arccos x corresponds to the length of some arc. For inverse hyperbolic functions,
instead of arc, the word used is area. This is justified by the following observation. Let u ≥ 1 and v = √(u² − 1). The line segments connecting the origin to (u, v) and
(u, −v), and the hyperbola x² − y² = 1 between the points (u, v) and (u, −v) define a section A_u of the plane. One can show that the area of A_u is equal to arch u (see Exercise 16.9).
Exercises
11.40. Check the addition formulas for sh x and ch x.
11.41. Find and prove formulas analogous to (11.46) about th x.
11.42. Prove that log(√(x² + 1) − x) = −arsh x for all x.
11.43. Prove that log(x − √(x² − 1)) = −arch x for all x ≥ 1.
11.44. Let f : R → R be continuous, and suppose that f (x + y) + f (x − y) =
2 f (x) f (y) for all x, y. Prove that one of the following holds.
(a) f is the function that is identically zero.
(b) There exists a constant c such that f (x) = cos cx for all x.
(c) There exists a constant c such that f (x) = ch cx for all x.
11.45. We say that the function f is algebraic if there exist polynomials p_0(x), p_1(x), . . . , p_n(x) such that p_n is not identically zero, and

p_0(x) + p_1(x) f(x) + · · · + p_n(x) f^n(x) = 0

for all x ∈ D(f). We call a function f transcendental if it is not algebraic.
(a) Prove that every polynomial and every rational function are algebraic.
(b) Prove that √(1 + x), ∛((x² − 1)/(x + 2)), and |x| are algebraic functions.
(c) Prove that e^x, log x, sin x, cos x are transcendental functions.
(d) Can a (nonconstant) periodic function be algebraic?
11.7 First Appendix: Proof of the Addition Formulas
Let O_α denote the positive rotation around the origin by the angle α. Then for arbitrary α ∈ R and x ∈ R², O_α(x) is the point that we get by rotating x around the origin by α in the positive direction. We will need the following properties of the rotations O_α.
(i) For arbitrary α, β ∈ R and x ∈ R², O_{α+β}(x) = O_α(O_β(x)).
(ii) The map Oα is linear, that is,
Oα (px + qy) = p · Oα (x) + q · Oα (y)
for all x, y ∈ R2 and p, q ∈ R.
We will use these properties without proof. (Both properties seem clear; we can
convince ourselves of (ii) if we think about what the geometric meaning is of the
sum of two vectors and of multiplying a vector by a real number.) We now show
that
Oα ((1, 0)) = (cos α , sin α ).
(11.62)
Let Oα ((1, 0)) = P, and let h denote the ray starting at the origin and crossing
P. Then the angle formed by h and the positive half of the x-axis is α , that is, h
intersects the circle C at the point (cos α , sin α ). Since rotations preserve distances
(another geometric fact that we accept), the distance of P from the origin is 1. Thus
P is on the circle C, that is, it agrees with the intersection of h and C, which is
(cos α , sin α ).
By the above, (0, 1) = O_{π/2}((1, 0)), so by property (i),

O_α((0, 1)) = O_{α+(π/2)}((1, 0)) = (cos(α + π/2), sin(α + π/2)) = (−sin α, cos α),   (11.63)

where we used (11.28) and (11.32).
Let the coordinates of x be (x_1, x_2). Then by (11.62), (11.63), and (ii), we get that

O_α(x) = x_1 · O_α((1, 0)) + x_2 · O_α((0, 1)) = x_1 · (cos α, sin α) + x_2 · (−sin α, cos α) = (x_1 · cos α − x_2 · sin α, x_1 · sin α + x_2 · cos α).

By all these,

(cos(α + β), sin(α + β)) = O_{α+β}((1, 0)) = O_α(O_β((1, 0))) = O_α((cos β, sin β)) = (cos β · cos α − sin β · sin α, cos β · sin α + sin β · cos α),
so after comparing coordinates, we get the first and third identities of (11.33). If we
apply these to −y instead of y, then also using that cos x is even and sin x is odd, we
get the other two identities.
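As an added check (not part of the original text), the composition rule O_{α+β} = O_α ∘ O_β can be verified numerically by multiplying the corresponding 2×2 matrices, whose entries are exactly the coefficients computed above; a minimal Python sketch:

```python
import math

def rotation(alpha):
    """Matrix of the rotation O_alpha in the basis (1,0), (0,1)."""
    return [[math.cos(alpha), -math.sin(alpha)],
            [math.sin(alpha),  math.cos(alpha)]]

def compose(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

a, b = 0.7, 1.9
left = rotation(a + b)
right = compose(rotation(a), rotation(b))
for i in range(2):
    for j in range(2):
        assert abs(left[i][j] - right[i][j]) < 1e-12
print("cos(a+b) =", left[0][0], "= cos a cos b - sin a sin b =", right[0][0])
```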
11.8 Second Appendix: A Few Words on Complex Numbers
The introduction of complex numbers is motivated by the need to include a number whose square is −1 into our investigations. We denote this number by i.10 We
call the formal expressions a + bi, where a and b are real numbers, complex numbers. We consider the real numbers complex numbers as well by identifying the real
number a with the complex number a + 0 · i. Addition and multiplication are defined
on the complex numbers in such a way as to preserve the usual rules, and so that i² = −1 also holds. Thus the sum and product of the complex numbers z_1 = a_1 + b_1 i and z_2 = a_2 + b_2 i are defined by the formulas

z_1 + z_2 = (a_1 + a_2) + (b_1 + b_2) i

and

z_1 · z_2 = (a_1 a_2 − b_1 b_2) + (a_1 b_2 + a_2 b_1) i.
One can show that the complex numbers form a field with these operations if we
take the zero element to be 0 = 0 + 0 · i and the identity element to be 1 = 1 + 0 · i.
Thus the complex numbers form a field that contains the real numbers.
A remarkable fact is that among the complex numbers, every nonconstant polynomial (with complex coefficients) has a root (so for example, the polynomial x2 + 1
has the roots i and −i). One can prove that a degree-n polynomial has n roots counting multiplicity. This statement is called the fundamental theorem of algebra.
To define complex powers, we will use the help of (11.11). Since (1 + z/n)n
is defined for all complex numbers z, it is reasonable to define ez as the limit of
the sequence (1 + z/n)n as n → ∞. (We say that the sequence of complex numbers
zn = an + bn i tends to z = a + bi if an → a and bn → b.) One can show that this is
well defined, that is, the limit exists for all complex numbers z, and that the powers
defined in this way satisfy ez+w = ez · ew for all complex z and w. Moreover—and
this is important to us—if z = ix, where x is real, then the limit of (1 + z/n)n is
cos x + i · sin x, so
eix = cos x + i · sin x
(11.64)
for all real numbers x.11 If we apply (11.64) to −x instead of x, then we get that
e−ix = cos x − i · sin x.
We can express both cos x and sin x from this expression, and we get the identities

cos x = (e^{ix} + e^{−ix})/2,   sin x = (e^{ix} − e^{−ix})/(2i).   (11.65)

¹⁰ As the first letter of the word "imaginary."
¹¹ It is worth checking that e^{i(x+y)} = e^{ix} · e^{iy} holds for all x, y ∈ R. By (11.64), this is equivalent to the addition formulas for the functions cos and sin.
These are called Euler's formulas. These two identities help make the link between trigonometric and hyperbolic functions clear. If we extend the definitions of ch and sh to the complex numbers, we get that

cos x = ch(ix)   and   sin x = sh(ix)/i   (11.66)

for all real x.
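Using Python's built-in complex numbers, one can watch the sequence (1 + ix/n)^n approach cos x + i·sin x, in line with (11.64); this is an added numerical sketch, not part of the original text:

```python
import cmath, math

x = 1.0
for n in [10, 100, 10_000, 1_000_000]:
    z = (1 + 1j * x / n) ** n
    print(n, z)
print("cos x + i sin x =", complex(math.cos(x), math.sin(x)))
print("cmath.exp(i x)  =", cmath.exp(1j * x))
```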
Chapter 12
Differentiation
12.1 The Definition of Differentiability
Consider a point that is moving on a line, and let s(t) denote the location of the point on the line at time t. Back when we talked about real-life problems that could lead to the definition of limits (see Chapter 9, p. 121), we saw that the definition of instantaneous velocity required taking the limit of the fraction (s(t) − s(t_0))/(t − t_0) at t_0. Having precisely defined what a limit is, we can now define the instantaneous velocity of the point at time t_0 to be the limit

lim_{t→t_0} (s(t) − s(t_0))/(t − t_0)
(assuming, of course, that this limit exists and is finite).
We also saw that if we want to define the tangent line to the graph of a function f at (a, f(a)), then the slope of that line is exactly the limit of (f(x) − f(a))/(x − a) at a. We agree to define the tangent line as the line that contains the point (a, f(a)) and has slope

lim_{x→a} (f(x) − f(a))/(x − a),

again assuming that this limit exists and is finite.
Besides the two examples above, many problems in mathematics, physics, and other fields can be put into the same form. This is the case whenever we have to find the rate of some change (not necessarily a change of position in space). If, for example, the temperature of an object at time t is H(t), then we can ask how fast the temperature is changing at time t_0. The average change over the interval [t_0, t] is (H(t) − H(t_0))/(t − t_0). Clearly, the instantaneous change of temperature will be defined as the limit

lim_{t→t_0} (H(t) − H(t_0))/(t − t_0)

(assuming that it exists and is finite).
We use the following names for the quotients appearing above. If f is defined at the points a and b, then the quotient (f(b) − f(a))/(b − a) is called the difference quotient of f between a and b. It is clear that the difference quotient (f(b) − f(a))/(b − a) agrees with the slope of the line passing through the points (a, f(a)) and (b, f(b)).
In many cases, using the notation b − a = h, the difference quotient between a and x = a + h is written as

(f(a + h) − f(a))/h.
Definition 12.1. Let f be defined on a neighborhood of the point a. We say that the function f is differentiable at the point a if the finite limit

lim_{x→a} (f(x) − f(a))/(x − a)   (12.1)

exists. The limit (12.1) is called the derivative of f at a.
The derivative of f at a is most often denoted by f′(a). Sometimes other notations,

ḟ(a),   y′(a),   df/dx |_{x=a},   df(x)/dx |_{x=a},   dy/dx |_{x=a},

are used (the last two in accordance with the notation y = f(x)).
Equipped with the above definition, we can say that if the function s(t) describes the location of a moving point, then the instantaneous velocity at time t_0 is s′(t_0). Similarly, if the temperature of an object at time t is given by H(t), then the instantaneous temperature change at time t_0 is H′(t_0). The definition of a tangent should also be updated with our new definition. The tangent of f at a is the line that passes through the point (a, f(a)) and has slope

lim_{x→a} (f(x) − f(a))/(x − a) = f′(a).

Fig. 12.1

Since the equation of this line is y = f′(a) · (x − a) + f(a), we accept the following definition.
Definition 12.2. Let f be differentiable at the point a. The tangent line of f at (a, f(a)) is the line with equation y = f′(a) · (x − a) + f(a).
The visual meaning of the derivative f ′ (a) is then the slope of the tangent of
graph f at (a, f (a)) (Figure 12.1).
Examples 12.3. 1. The constant function f(x) = c is differentiable at all values a, and its derivative is zero. This is because

(f(x) − f(a))/(x − a) = (c − c)/(x − a) = 0

for all x ≠ a.
2. The function f(x) = x is differentiable at all values a, and f′(a) = 1. This is because

(f(x) − f(a))/(x − a) = (x − a)/(x − a) = 1

for all x ≠ a.
3. The function f(x) = x² is differentiable at all values a, and f′(a) = 2a. This is because

(f(x) − f(a))/(x − a) = (x² − a²)/(x − a) = x + a,

and so

f′(a) = lim_{x→a} (f(x) − f(a))/(x − a) = 2a.

Thus by the definition of tangent lines, the tangent of the parabola y = x² at the point (a, a²) is the line with equation y = 2a(x − a) + a² = 2ax − a². Since this passes through the point (a/2, 0), we can construct the tangent by drawing a line connecting the point (a/2, 0) to the point (a, a²) (Figure 12.2).¹
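The computation in Example 3 can also be watched numerically: the difference quotients of f(x) = x² at a approach 2a. The following Python sketch is an added illustration, not part of the original text:

```python
def diff_quotient(f, a, h):
    return (f(a + h) - f(a)) / h

f = lambda x: x * x
a = 3.0
for h in [1.0, 0.1, 0.01, 0.001, 1e-6]:
    print(h, diff_quotient(f, a, h))   # approaches f'(3) = 6
```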
Differentiability is a stronger condition than
continuity. This is shown by the theorem below
and the remarks following it.
Theorem 12.4. If f is differentiable at a, then f
is continuous at a.
Fig. 12.2
Proof. If f is differentiable at a, then

lim_{x→a} (f(x) − f(a)) = lim_{x→a} ((f(x) − f(a))/(x − a)) · (x − a) = f′(a) · 0 = 0.

This means exactly that f is continuous at a. ⊓⊔
Remarks 12.5. 1. Continuity is a necessary but not sufficient condition for differentiability. There exist functions that are continuous at a point a but are not differentiable there. An easy example is the function f (x) = |x| at a = 0. Clearly,
¹ So the calculation was correct; see page 5.
f(x) = x if x ≥ 0 and f(x) = −x if x < 0, so

(f(x) − f(0))/(x − 0) = 1 if x > 0   and   (f(x) − f(0))/(x − 0) = −1 if x < 0.

Then

lim_{x→0+0} (f(x) − f(0))/(x − 0) = 1   and   lim_{x→0−0} (f(x) − f(0))/(x − 0) = −1,
and so f is not differentiable at 0.
2. There even exist functions that are continuous everywhere but not differentiable anywhere. Using the theory of series of functions, one can show that for suitable a, b > 0, the function

f(x) = ∑_{n=1}^{∞} a^n · sin(b^n x)

has this property (for example with the choice a = 1/2, b = 10). Similarly, one can show that the function

g(x) = ∑_{n=1}^{∞} ⟨2^n x⟩/2^n

is everywhere continuous but nowhere differentiable, where ⟨x⟩ denotes the distance from x to the nearest integer.
3. There even exists a function that is differentiable at a point a but is not continuous at any other point. For example, the function

f(x) = x² if x is rational,   f(x) = −x² if x is irrational

is differentiable at 0. This is because

|(f(x) − f(0))/(x − 0)| = x²/|x| = |x| → 0   if x → 0,

so

f′(0) = lim_{x→0} (f(x) − f(0))/(x − 0) = 0.

At the same time, it can easily be seen—see function 6 in Example 10.2—that the function f(x) is not continuous at any x ≠ 0.
As we saw, the function |x| is not differentiable at 0, and accordingly, the graph
does not have a tangent at 0 (here the graph has a cusp). If, however, we look only
on the right-hand side of the point, then the difference quotient has a limit from that
side. Accordingly, in the graph of |x|, the “right-hand chords” of (0, 0) have a limit
(which is none other than the line y = x). The situation is similar on the left-hand
side of 0.
As the above example illustrates, it is reasonable to introduce the one-sided variants of derivatives (and hence one-sided differentiability).
Definition 12.6. If the finite limit

lim_{x→a+0} (f(x) − f(a))/(x − a)
exists, then we call this limit the right-hand derivative of f at a. We can similarly
define the left-hand derivative as well.
We denote the right-hand derivative at a by f+′ (a), and the left-hand derivative at
a by f−′ (a).
Remark 12.7. It is clear that f is differentiable at a if and only if both the right- and
left-hand derivatives of f exist at a, and f+′ (a) = f−′ (a). (Then this shared value is
f ′ (a).)
As we have seen, differentiability does not follow from continuity. We will now
show that convexity—which is a stronger property than continuity—implies one-sided differentiability.
Theorem 12.8. If f is convex on the interval (a, b), then f is differentiable from the
left and from the right at all points c ∈ (a, b).
Proof. By Theorem 9.20, the function x ↦ (f(x) − f(c))/(x − c) is monotone increasing on the set (a, b) \ {c}. Fix d ∈ (a, c). Then

(f(d) − f(c))/(d − c) ≤ (f(x) − f(c))/(x − c)

for all x ∈ (c, b), so the function (f(x) − f(c))/(x − c) is monotone increasing and bounded from below on the interval (c, b).
By statement (ii) of Theorem 10.68, it then follows that the limit

lim_{x→c+0} (f(x) − f(c))/(x − c)

exists and is finite, which means that f is differentiable from the right at c. It can be shown similarly that f has a left-hand derivative at c. ⊓⊔
Linear Approximation. In working with a function that arises in a problem,
we can frequently get a simpler and more intuitive result if instead of the function,
we use another simpler one that “approximates the original one well.” One of the
simplest classes of functions comprises the linear functions (y = mx + b). We show
that the differentiability of a function f at a point a means exactly that the function
can be “well approximated” by a linear function. As we will soon see, the best linear
approximation for f at a point a is the function y = f ′ (a)(x − a) + f (a).
If f is continuous at a, then for all c,

f(x) − [c · (x − a) + f(a)] → 0   if x → a.
Thus every linear function ℓ(x) such that ℓ(a) = f (a) “approximates f well”
in the sense that f (x) − ℓ(x) → 0 as x → a. The differentiability of f at a, by the
theorem below, means exactly that the function
t(x) = f ′ (a) · (x − a) + f (a)
(12.2)
approximates f significantly better: not only does the difference f − t tend to zero as
x → a, but it tends to zero faster than (x − a).
Theorem 12.9. The function f is differentiable at a if and only if at a, it can be
“well approximated” locally by a linear polynomial in the following sense: there
exists a number α (independent of x) such that
f (x) = α · (x − a) + f (a) + ε (x) · (x − a),
where ε (x) → 0 as x → a. The number α is the derivative of f at a.
Proof. Suppose first that f is differentiable at a, and let

ε(x) = (f(x) − f(a))/(x − a) − f′(a).

Since

f′(a) = lim_{x→a} (f(x) − f(a))/(x − a),

we have ε(x) → 0 as x → a. Thus

f(x) = f(a) + f′(a)(x − a) + ε(x)(x − a),

where ε(x) → 0 as x → a.
Now suppose that

f(x) = α · (x − a) + f(a) + ε(x)(x − a),

where ε(x) → 0 as x → a. Then

(f(x) − f(a))/(x − a) = α + ε(x) → α   if x → a.

Thus f is differentiable at a, and f′(a) = α. ⊓⊔
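The content of Theorem 12.9, namely that f(x) − t(x) tends to zero faster than x − a, can be illustrated numerically. The Python sketch below is an added example (not part of the original text); it uses f(x) = sin x at a = 1 and prints ε(x) = (f(x) − t(x))/(x − a):

```python
import math

a = 1.0
f, fprime = math.sin, math.cos          # f'(a) = cos a

def t(x):                               # the linear approximation (12.2)
    return fprime(a) * (x - a) + f(a)

for h in [0.1, 0.01, 0.001, 1e-5]:
    x = a + h
    eps = (f(x) - t(x)) / (x - a)       # this is eps(x) in Theorem 12.9
    print(f"h = {h:g}:  f(x) - t(x) = {f(x) - t(x):.3e},  eps(x) = {eps:.3e}")
# eps(x) -> 0, i.e., the error is o(x - a)
```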
The following theorem expresses that t(x) = f ′ (a) · (x − a) + f (a) is the “best”
linear function approximating f .
Theorem 12.10. If the function f is differentiable at a, then for all c ≠ f′(a),

lim_{x→a} (f(x) − [f′(a)(x − a) + f(a)]) / (f(x) − [c(x − a) + f(a)]) = 0.
Proof. As x → a,

(f(x) − [f′(a)(x − a) + f(a)]) / (f(x) − [c(x − a) + f(a)]) = ((f(x) − f(a))/(x − a) − f′(a)) / ((f(x) − f(a))/(x − a) − c) → (f′(a) − f′(a))/(f′(a) − c) = 0. ⊓⊔
Theorem 12.9 gives a necessary and sufficient condition for f to be differentiable
at a. With the help of this, we can give another (equivalent to (12.1)) definition of
differentiability.
Definition 12.11. The function f is differentiable at a if there exists a number α
(independent of x) such that
f (x) = α · (x − a) + f (a) + ε (x)(x − a),
where ε (x) → 0 if x → a.
The significance of this equivalent definition lies in the fact that if we want to extend the concept of differentiability to other functions—not necessarily real-valued
or of a real variable—then we cannot always find a definition analogous to Definition 12.1, but the generalization of Definition 12.11 is generally feasible.
Derivative Function. We will see that the derivative is the most useful tool in
investigating the properties of a function. This is true locally and globally. The existence of the derivative f′(a) and its value describe local properties of f: from the value of f′(a), we can deduce the behavior of f at a.²
If, however, f is differentiable at every point of an interval, then from the values
of f ′ (x) we get global properties of f . In applications, we mostly come upon functions that are differentiable in some interval. The definition of this is the following.
Definition 12.12. Let a < b. We say that f is differentiable on the interval (a, b) if
it is differentiable at every point of (a, b). We say that f is differentiable on [a, b] if
it is differentiable on (a, b) and differentiable from the right at a and from the left
at b.
Generally, we can think of differentiation as an operation that maps functions to
functions.
Definition 12.13. The derivative function of the function f is the function that is
defined at every x where f is differentiable and has the value f ′ (x) there. It is denoted
by f ′ .
The basic task of differentiation is to find relations between functions and their
derivatives, and to apply them. For the applications, however, first we need to decide
where a function is differentiable, and we have to find its derivative. We begin with
this latter problem, and only thereafter do we inspect what derivatives tell us about
functions and how to use that information.
² One such link was already outlined when we saw that differentiability implies continuity.
Exercises
12.1. Where is the function ({x} − 1/2)² differentiable?
12.2. Let f (x) = x2 if x ≤ 1, and f (x) = ax + b if x > 1. For what values of a and b
will f be differentiable everywhere?
12.3. Let f(x) = |x|^α · sin |x|^β if x ≠ 0, and let f(0) = 0. For what values of α and β will f be continuous at 0? When will it be differentiable at 0?
12.4. Prove that the graph of the function x2 has the tangent line y = mx + b if the
line and the graph of the function intersect at exactly one point.
12.5. Where are the tangent lines to the function 2x3 − 3x2 + 8 horizontal?
12.6. When is the x-axis tangent to the graph of x3 + px + q?
12.7. Let f (2−n ) = 3−n for all positive integers n, and let f (x) = 0 otherwise. Where
is f differentiable?
12.8. Are there any points where the Riemann function is differentiable?
12.9. Are there any points where the square of the Riemann function is differentiable? (H)
12.10. At what angle does the graph of x2 intersect the line y = 2x? (That is, what is
the angle between the tangent of the function and the line?)
12.11. Prove that the function f(x) = √x is differentiable for all a > 0, and that f′(a) = 1/(2√a).
12.12. Prove that the function 1/x is differentiable at all points a > 0, and find its
derivative. Show that every tangent of the function 1/x forms a triangle with the two
axes whose area does not depend on which point the tangent is taken at.
12.13. Prove that if f is differentiable at 0, then the function f (|x|) is differentiable
at 0 if and only if f ′ (0) = 0.
12.14. Prove that if f is differentiable at a, then

lim_{h→0} (f(a + h) − f(a − h))/(2h) = f′(a).
Show that the converse of this statement does not hold.
12.15. Let f be differentiable at the point a. Prove that if x_n < a < y_n for all n and if y_n − x_n → 0, then

lim_{n→∞} (f(y_n) − f(x_n))/(y_n − x_n) = f′(a). (H S)
12.16. Suppose that f is differentiable everywhere on (−∞, ∞). Prove that if f is
even (odd), then f ′ is odd (even).
12.17. Let
B = { f : f is bounded on [a, b]},
C = { f : f is continuous on [a, b]},
M = { f : f is monotone on [a, b]},
X = { f : f is convex on [a, b]},
D = { f : f is differentiable on [a, b]},
I = { f : f has an inverse on [a, b]}.
With respect to containment, how are the sets B, C, M, X, D, and I related?
12.2 Differentiation Rules and Derivatives of the Elementary Functions
The differentiability of some basic functions can be deduced from the properties we
have seen. These include the polynomials, trigonometric functions, and logarithmic
functions. On the other hand, it is easy to see that all of the elementary functions
can be expressed in terms of polynomials, trigonometric functions, and logarithmic
functions using the four basic arithmetic operations, taking inverses, and composition. Thus to determine the differentiability of the remaining elementary functions,
we need theorems that help us deduce their differentiability and to calculate their
derivatives based on the differentiability and derivatives of the component functions
in terms of which they can be expressed. These are called the differentiation rules.
Below, we will determine the derivatives of the power functions with integer exponent, trigonometric functions, and logarithmic functions; we will introduce the
differentiation rules, and using all this information, we will determine the derivatives of the rest of the elementary functions.
Theorem 12.14. For an arbitrary positive integer n, the function x^n is differentiable everywhere on (−∞, ∞), and (x^n)′ = n · x^{n−1} for all x.
Proof. For all a,

lim_{x→a} (x^n − a^n)/(x − a) = lim_{x→a} (x^{n−1} + x^{n−2} · a + · · · + x · a^{n−2} + a^{n−1}) = n·a^{n−1}

by the continuity of x^k. ⊓⊔
Theorem 12.15.
(i) The functions sin x and cos x are differentiable everywhere on (−∞, ∞). Moreover,
(sin x)′ = cos x and (cos x)′ = − sin x
for all x.
(ii) The function tg x is differentiable at all points x ≠ (π/2) + kπ (k ∈ Z), and at those points,

(tg x)′ = 1/cos² x.

(iii) The function ctg x is differentiable at all points x ≠ kπ (k ∈ Z), and at those points,

(ctg x)′ = −1/sin² x.
Proof. (i) For arbitrary a ∈ R and x ≠ a,

(sin x − sin a)/(x − a) = (2 sin((x − a)/2) · cos((x + a)/2))/(x − a) = (sin((x − a)/2)/((x − a)/2)) · cos((x + a)/2)

by the second identity of (11.37). Now lim_{x→a} sin((x − a)/2)/((x − a)/2) = 1 and lim_{x→a} cos((x + a)/2) = cos a by (11.45), by the continuity of the cos function, and by the theorem about limits of compositions of functions. Therefore,

lim_{x→a} (sin x − sin a)/(x − a) = cos a.

Similarly, for arbitrary a ∈ R and x ≠ a,

(cos x − cos a)/(x − a) = −(2 sin((x − a)/2) · sin((x + a)/2))/(x − a) = −(sin((x − a)/2)/((x − a)/2)) · sin((x + a)/2)

by the fourth identity of (11.37). Then using (11.45) and the continuity of the sin function, we get that

lim_{x→a} (cos x − cos a)/(x − a) = −sin a.
(ii) For arbitrary a ≠ (π/2) + kπ and x ≠ a,

(tg x − tg a)/(x − a) = (sin x/cos x − sin a/cos a) · 1/(x − a) = ((sin x cos a − sin a cos x)/(cos x cos a)) · 1/(x − a) = (sin(x − a)/(x − a)) · 1/(cos x cos a).

Then using (11.45) and the continuity of cos, we get that

lim_{x→a} (tg x − tg a)/(x − a) = 1/cos² a.
(iii) For arbitrary a ≠ kπ and x ≠ a,

(ctg x − ctg a)/(x − a) = (cos x/sin x − cos a/sin a) · 1/(x − a) = ((cos x sin a − cos a sin x)/(sin x sin a)) · 1/(x − a) = −(sin(x − a)/(x − a)) · 1/(sin x sin a),

which gives us

lim_{x→a} (ctg x − ctg a)/(x − a) = −1/sin² a. ⊓⊔
Theorem 12.16. If a > 0 and a ≠ 1, then the function log_a x is differentiable at every point x > 0, and

(log_a x)′ = (1/log a) · (1/x).   (12.3)

Proof. By Corollary 11.11,

lim_{h→0} (1 + h/x)^{1/h} = e^{1/x}

for all x > 0. Thus

lim_{h→0} (log_a(x + h) − log_a x)/h = lim_{h→0} log_a (1 + h/x)^{1/h} = log_a e^{1/x} = (1/x) · log_a e = (1/log a) · (1/x). ⊓⊔
It is clear that if a > 0 and a ≠ 1, then the function log_a |x| is differentiable on the set R \ {0}, and (log_a |x|)′ = 1/(x · log a) for all x ≠ 0.
We now turn our attention to introducing the differentiation rules. As we will
see, a large portion of these are consequences of the definition of the derivative and
theorems about limits.
Theorem 12.17. If the functions f and g are differentiable at a, then c f (c ∈ R),
f + g, and f · g are also differentiable at a, and
(i) (c f )′ (a) = c f ′ (a),
(ii) ( f + g)′ (a) = f ′ (a) + g′ (a),
(iii) ( f g)′ (a) = f ′ (a)g(a) + f (a)g′ (a).
If g(a) ≠ 0, then 1/g and f/g are differentiable at a. Moreover,
(iv) (1/g)′(a) = −g′(a)/g²(a),
(v) (f/g)′(a) = (f′(a)g(a) − f(a)g′(a))/g²(a).
Proof. The shared idea of the proofs of these is to express the difference quotients
of each function with the help of the difference quotients ( f (x) − f (a))/(x − a) and
(g(x) − g(a))/(x − a):
(i) The difference quotient of the function F = c f is

(F(x) − F(a))/(x − a) = (c f(x) − c f(a))/(x − a) = c · (f(x) − f(a))/(x − a) → c · f′(a).

(ii) The difference quotient of F = f + g is

(F(x) − F(a))/(x − a) = ((f(x) + g(x)) − (f(a) + g(a)))/(x − a) = (f(x) − f(a))/(x − a) + (g(x) − g(a))/(x − a).

Thus

lim_{x→a} (F(x) − F(a))/(x − a) = f′(a) + g′(a).

(iii) The difference quotient of F = f · g is

(F(x) − F(a))/(x − a) = (f(x) · g(x) − f(a) · g(a))/(x − a) = ((f(x) − f(a))/(x − a)) · g(x) + f(a) · ((g(x) − g(a))/(x − a)).

Since g is differentiable at a, it is continuous there (by Theorem 12.4), and so

lim_{x→a} (F(x) − F(a))/(x − a) = lim_{x→a} ((f(x) − f(a))/(x − a)) · lim_{x→a} g(x) + f(a) · lim_{x→a} ((g(x) − g(a))/(x − a)) = f′(a)g(a) + f(a)g′(a).

If g(a) ≠ 0, then by the continuity of g, it follows that g(x) ≠ 0 on a neighborhood of a; that is, the functions 1/g(x) and f(x)/g(x) are defined here.
(iv) The difference quotient of F = 1/g is

(F(x) − F(a))/(x − a) = (1/g(x) − 1/g(a))/(x − a) = ((g(a) − g(x))/(g(a)g(x))) · 1/(x − a) = −(1/(g(x)g(a))) · (g(x) − g(a))/(x − a) → −(1/g²(a)) · g′(a).

(v) The difference quotient of F = f/g is

(F(x) − F(a))/(x − a) = (f(x)/g(x) − f(a)/g(a))/(x − a) = ((f(x)g(a) − f(a)g(x))/(g(a)g(x))) · 1/(x − a) = (1/(g(x)g(a))) · (((f(x) − f(a))/(x − a)) · g(x) − ((g(x) − g(a))/(x − a)) · f(x)),

which implies

lim_{x→a} (F(x) − F(a))/(x − a) = (f′(a)g(a) − f(a)g′(a))/g²(a). ⊓⊔
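Rules (iii) and (v) of Theorem 12.17 can be spot-checked numerically with symmetric difference quotients; the following Python sketch (an added illustration, not part of the original text) does this for f(x) = sin x and g(x) = x² + 1 at a = 0.8:

```python
import math

def num_deriv(F, a, h=1e-6):
    return (F(a + h) - F(a - h)) / (2 * h)

f, g = math.sin, lambda x: x * x + 1.0
fp, gp = math.cos, lambda x: 2 * x          # known derivatives
a = 0.8

prod = lambda x: f(x) * g(x)
quot = lambda x: f(x) / g(x)

print(num_deriv(prod, a), fp(a) * g(a) + f(a) * gp(a))                 # rule (iii)
print(num_deriv(quot, a), (fp(a) * g(a) - f(a) * gp(a)) / g(a) ** 2)   # rule (v)
```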
Remarks 12.18. 1. The statements of the theorem hold for right- and left-hand
derivatives as well.
2. Let I be an interval, and suppose that the functions f : I → R and g : I → R are
differentiable on I as stated in Definition 12.12. By the above theorem, it follows
that c f , f + g, and f · g are also differentiable on I, and the equalities
(c f )′ = c f ′ , ( f + g)′ = f ′ + g′ , ( f g)′ = f ′ g + f g′
hold. If we also suppose that g ≠ 0 on I, then 1/g and f/g are also differentiable on I, and

(1/g)′ = −g′/g²,   and moreover   (f/g)′ = (f′g − f g′)/g².
We emphasize that here we are talking about equality of functions, not simply numbers.
3. From the above theorem, we can use induction to easily prove the following
statements.
If the functions f1 , . . . , fn are differentiable at a, then
(i) f1 + · · · + fn is differentiable at a, and
( f1 + · · · + fn )′ (a) = f1′ (a) + · · · + fn′ (a);
moreover,
(ii) f1 · . . . · fn is differentiable at a, and
(f_1 · . . . · f_n)′(a) = (f_1′ · f_2 · · · f_n + f_1 · f_2′ · f_3 · · · f_n + · · · + f_1 · · · f_{n−1} · f_n′)(a).
4. By (ii) above, it follows that if f_1(a) · . . . · f_n(a) ≠ 0, then

((f_1 · . . . · f_n)′/(f_1 · . . . · f_n))(a) = (f_1′/f_1 + · · · + f_n′/f_n)(a).

Thus if f_1, . . . , f_n are defined on the interval I, are nowhere zero, and are differentiable on the interval I, then

(f_1 · . . . · f_n)′/(f_1 · . . . · f_n) = f_1′/f_1 + · · · + f_n′/f_n.   (12.4)

The next theorem is known as the chain rule.
Theorem 12.19. If the function g is differentiable at a and the function f is differentiable at g(a), then the function h = f ◦ g is differentiable at a, and
h′ (a) = f ′ (g(a)) · g′ (a).
With the notation y = g(x) and z = f(y), the statement of the theorem can easily be remembered in the form

dz/dx = (dz/dy) · (dy/dx).
This formula is where the name “chain rule” comes from.
Proof. By the assumptions, it follows that the function h is defined on a neighborhood of the point a. Indeed, f is defined on a neighborhood V of g(a). Since g is
differentiable at a, it is continuous at a, so there exists a neighborhood U of a such
that g(x) ∈ V for all x ∈ U. Thus h is defined on U. After these preparations, we give
two proofs of the theorem.
I. Following the proof of the previous theorem, let us express the difference quotient
of h using the difference quotients of f and g. Suppose first that g(x) ≠ g(a) on a punctured neighborhood of a. Then

(h(x) − h(a))/(x − a) = (f(g(x)) − f(g(a)))/(x − a) = ((f(g(x)) − f(g(a)))/(g(x) − g(a))) · ((g(x) − g(a))/(x − a)).   (12.5)
Since g is continuous at a, if x → a, then g(x) → g(a). Thus by the theorem on the
limits of compositions of functions,
lim_{x→a} (h(x) − h(a))/(x − a) = lim_{t→g(a)} (f(t) − f(g(a)))/(t − g(a)) · lim_{x→a} (g(x) − g(a))/(x − a) = f′(g(a)) · g′(a).
The proof of the special case above used twice the fact that g(x) ≠ g(a) on a punctured neighborhood of a: first, when we divided by g(x) − g(a), and second, when we applied Theorem 10.41 for the limit of the composition. Recall that the assumption of Theorem 10.41 requires that the inner function not take on its limit value in a punctured neighborhood of the point in question, unless the outer function is continuous. This,
however, does not hold for us, since the outer function is the difference quotient
( f (t) − f (g(a)))/(t − g(a)), which is not even defined at g(a).
It is exactly this circumstance that suggests the proof in the general case. Define
the function F(t) as follows: let
F(t) = (f(t) − f(g(a)))/(t − g(a))

if t ∈ V and t ≠ g(a), and let F(t) = f′(g(a)) if t = g(a). Then F is continuous at g(a), so by Theorem 10.41, lim_{x→a} F(g(x)) = f′(g(a)). To finish the proof, it suffices to show that

(h(x) − h(a))/(x − a) = F(g(x)) · (g(x) − g(a))/(x − a)   (12.6)
for all x ∈ U. We distinguish two cases. If g(x) ≠ g(a), then (12.6) is clear from (12.5). If, however, g(x) = g(a), then h(x) = f(g(x)) = f(g(a)) = h(a), so both sides of (12.6) are zero. Thus the proof is complete.
II. This proof is based on the definition of differentiability given in 12.11. According
to this, the differentiability of f at g(a) means that
f(t) − f(g(a)) = f′(g(a))(t − g(a)) + ε1(t)(t − g(a))   (12.7)
for all t ∈ V , where ε1 (t) → 0 if t → g(a).
Let ε1 (g(a)) = 0. Similarly, by the differentiability of the function g, it follows
that
g(x) − g(a) = g′(a)(x − a) + ε2(x)(x − a)   (12.8)
for all x ∈ U, where ε2 (x) → 0 as x → a. If we substitute t = g(x) in (12.7), and then
apply (12.8), we get that
h(x) − h(a) = f (g(x)) − f (g(a)) =
= f ′ (g(a))(g(x) − g(a)) + ε1 (g(x))(g(x) − g(a)) =
= f ′ (g(a))g′ (a)(x − a) + ε (x)(x − a),
where
ε (x) = f ′ (g(a))ε2 (x) + ε1 (g(x))(g′ (a) + ε2 (x)).
Since g(x) → g(a) as x → a, we have ε1(g(x)) → 0 as x → a (indeed, since ε1(g(a)) = 0 and ε1(t) → 0 as t → g(a), the function ε1 is continuous at g(a)). Then from ε2(x) → 0, we get that ε(x) → 0 if x → a. Now
this, by Definition 12.11, means exactly that the function h is differentiable at a, and
h′(a) = f′(g(a))g′(a). □
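As an informal check of the chain rule, one can compare h′ with f′(g(x)) · g′(x) symbolically; in the following SymPy sketch the choices g(x) = x³ + x and f = sin are illustrative assumptions only.

import sympy as sp

x = sp.symbols('x')
g = x**3 + x          # inner function (illustrative choice)
h = sp.sin(g)         # h = f o g with f = sin

lhs = sp.diff(h, x)                 # h'(x)
rhs = sp.cos(g) * sp.diff(g, x)     # f'(g(x)) * g'(x), since f = sin gives f' = cos
assert sp.simplify(lhs - rhs) == 0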
The following theorem gives the differentiation rule for inverse functions.
Theorem 12.20. Let f be strictly monotone and continuous on the interval (a, b), and let ϕ denote the inverse of f. If f is differentiable at the point c ∈ (a, b) and f′(c) ≠ 0, then ϕ is differentiable at f(c), and

ϕ′(f(c)) = 1/f′(c).
Proof. The function ϕ is defined on the interval
J = f ((a, b)). By the definition of the inverse
function, ϕ ( f (c)) = c and f (ϕ (y)) = y for all
y ∈ J. Let F(x) denote the difference quotient
(f(x) − f(c))/(x − c). If y ≠ f(c), then
(ϕ(y) − ϕ(f(c)))/(y − f(c)) = (ϕ(y) − c)/(f(ϕ(y)) − f(c)) = 1/F(ϕ(y)).   (12.9)
Since ϕ is strictly monotone, if y ≠ f(c), then ϕ(y) ≠ c. Thus we can apply Theorem 10.41 on the limits of compositions of functions. We get that

ϕ′(f(c)) = lim_{y→f(c)} (ϕ(y) − ϕ(f(c)))/(y − f(c)) = lim_{y→f(c)} 1/F(ϕ(y)) = lim_{x→c} 1/F(x) = 1/f′(c). □
Remarks 12.21. 1. The statement of the theorem can be illustrated with the following geometric argument. The graphs of the functions f and ϕ are the mirror images
of each other in the line y = x. The mirror image of the tangent to graph f at the
point (c, f (c)) gives us the tangent to graph ϕ at ( f (c), c). The slopes of these are
the reciprocals of each other, that is, ϕ ′ ( f (c)) = 1/ f ′ (c) (Figure 12.3).
2. Let f be strictly monotone and continuous on the interval (a, b), and suppose that f is differentiable everywhere on (a, b). If f ′ is nonvanishing everywhere, then by the above theorem, ϕ is everywhere differentiable on the interval
J = f((a, b)), and ϕ′(f(x)) = 1/f′(x) for all x ∈ (a, b). If y ∈ J, then ϕ(y) ∈ (a, b) and
f (ϕ (y)) = y. Thus ϕ ′ (y) = 1/ f ′ (ϕ (y)). Since this holds for all y ∈ J, we have that
ϕ′ = 1/(f′ ∘ ϕ).   (12.10)
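The rule ϕ′ = 1/(f′ ∘ ϕ) can also be tested numerically. The sketch below assumes the sample function f(x) = x³ + x (strictly increasing, so its inverse ϕ exists) and compares a central difference quotient of ϕ at f(1) with 1/f′(1); all concrete choices are illustrative.

import sympy as sp

x = sp.symbols('x')
f = x**3 + x                       # strictly increasing: f'(x) = 3x^2 + 1 > 0
c = 1
fc = float(f.subs(x, c))           # f(1) = 2

def phi(val):                      # numerical inverse of f (for illustration only)
    return float(sp.nsolve(f - val, x, 1.0))

h = 1e-6
numeric = (phi(fc + h) - phi(fc - h)) / (2*h)        # difference quotient of phi at f(c)
exact = 1 / float(sp.diff(f, x).subs(x, c))          # 1/f'(c) = 1/4
print(numeric, exact)              # both approximately 0.25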
3. If f ′ (c) = 0 (that is, if the tangent line to graph f is parallel to the x-axis at
the point (c, f (c))), then by (12.9), it is easy to see that the difference quotient
(ϕ (y) − ϕ ( f (c)))/(y − f (c)) does not have a finite limit at f (c). Indeed, in this
case, the limit of the difference quotient F(x) = ( f (x) − f (c))/(x − c) at c is zero,
so limy→ f (c) F(ϕ (y)) = 0. If, however, f is strictly monotone increasing, then the
difference quotient F(x) is positive everywhere, and so limy→ f (c) 1/F(ϕ (y)) = ∞
(see Remark 10.39). Then
lim_{y→f(c)} (ϕ(y) − ϕ(f(c)))/(y − f(c)) = ∞,
and we can similarly get that if f is strictly monotone decreasing, then the value
of the above limit is −∞. (These observations agree with the fact that if f ′ (c) = 0,
then the tangent line to graph ϕ at ( f (c), c) is parallel to the y-axis.) This remark
motivates the following extension of the definition of the derivative.
Definition 12.22. Let f be defined on a neighborhood of a point a. We say that the
derivative of f at a is infinite if
lim_{x→a} (f(x) − f(a))/(x − a) = ∞,
and we denote this by f ′ (a) = ∞. We define f ′ (a) = −∞ similarly.
Remark 12.23. Definitions 12.1 and 12.22 can be stated jointly as follows: if the
limit (12.1) exists and has the value β (which can be finite or infinite), then we
say that the derivative of f at a exists, and we use the notation f ′ (a) = β .
We emphasize that the “differentiable” property is reserved for the cases in which
the derivative is finite. Thus a function f is differentiable at a if and only if its
derivative there exists and is finite.
We extend the definition of the one-sided derivatives (Definition 12.6) for the
cases in which the one-sided limits of the difference quotient are infinite. We use
the notation f+′ (a) = ∞, f−′ (a) = ∞, f+′ (a) = −∞, and f−′ (a) = −∞; their meaning
is straightforward.
Using the concepts above, we can extend Theorem 12.20 as follows.
Theorem 12.24. Let f be strictly monotone and continuous on the interval (a, b).
Let ϕ denote the inverse function of f . If f ′ (c) = 0, then ϕ has a derivative at f (c),
and in fact, ϕ ′ ( f (c)) = ∞ if f is strictly monotone increasing, and ϕ ′ ( f (c)) = −∞
if f is strictly monotone decreasing.
Now let us return to the elementary functions. With the help of the differentiation
rules, we can now determine the derivatives of all of them.
Theorem 12.25. If a > 0, then the function a^x is differentiable everywhere, and

(a^x)′ = log a · a^x   (12.11)

for all x.
Proof. The statement is clear for a = 1, so we can suppose that a ≠ 1. Since (log_a x)′ = 1/(x · log a), by the differentiation rule for inverse functions, we get that (a^x)′ = a^x · log a. □

We note that the differentiability of the function a^x can also be deduced easily from Theorem 12.8.
Applying Theorems 12.16 and 12.25 for a = e, we get the following.
Theorem 12.26.
(i) For all x,

(e^x)′ = e^x.   (12.12)

(ii) For all x > 0,

(log x)′ = 1/x.   (12.13)
According to (12.11), the function e^x is the only exponential function that is the derivative of itself. This fact motivates us to consider e to be one of the most important constants³ of analysis (and of mathematics, more generally). By equality (12.13), out of all the logarithmic functions, the derivative of the logarithm with
base e is the simplest. This is why we chose the logarithm with base e from among
all the other logarithm functions (see Remark 11.17).
Using the derivatives of the exponential and logarithmic functions, we can easily
find the derivatives of the power functions.
Theorem 12.27. For arbitrary b ∈ R, the function x^b is differentiable at all points x > 0, and

(x^b)′ = b · x^(b−1).   (12.14)
Proof. Since x^b = e^(b log x) for all x > 0, we can apply the differentiation rule for compositions of functions, Theorem 12.19. □
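Formulas (12.11)–(12.14) can be verified symbolically; the following SymPy sketch is only an informal check, with a and b treated as positive symbolic parameters (an assumption made for the simplification step).

import sympy as sp

x, a, b = sp.symbols('x a b', positive=True)

assert sp.simplify(sp.diff(a**x, x) - sp.log(a)*a**x) == 0     # (12.11)
assert sp.simplify(sp.diff(sp.exp(x), x) - sp.exp(x)) == 0     # (12.12)
assert sp.simplify(sp.diff(sp.log(x), x) - 1/x) == 0           # (12.13)
assert sp.simplify(sp.diff(x**b, x) - b*x**(b - 1)) == 0       # (12.14)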
The derivatives of the inverse trigonometric functions can easily be found by the
differentiation rule for inverse functions.
Theorem 12.28.
(i) The function arc sin x is differentiable on the interval (−1, 1), and

(arc sin x)′ = 1/√(1 − x²)   (12.15)

for all x ∈ (−1, 1).
(ii) The function arccos x is differentiable on the interval (−1, 1), and

(arccos x)′ = −1/√(1 − x²)   (12.16)

for all x ∈ (−1, 1).
(iii) The function arc tg x is differentiable everywhere, and

(arc tg x)′ = 1/(1 + x²)   (12.17)

for all x.
(iv) The function arcctg x is differentiable everywhere, and

(arcctg x)′ = −1/(1 + x²)   (12.18)

for all x.
Proof. The function sin x is strictly increasing and differentiable on (−π/2, π/2), and its derivative is cos x. Since cos x ≠ 0 if x ∈ (−π/2, π/2), by Theorem 12.20, if x ∈ (−1, 1), then

(arc sin x)′ = 1/cos(arc sin x) = 1/√(1 − sin²(arc sin x)) = 1/√(1 − x²),
which proves (i). Statement (ii) follows quite simply from (i) and (11.50).
³ The other central constant is π. The relation between these two constants is given by e^(iπ) = −1, which is a special case of the identity (11.64).
The function tg x is strictly increasing on the interval (−π/2, π/2), and its derivative is 1/cos² x ≠ 0 there. Thus

(arc tg x)′ = cos²(arc tg x).
However,

cos² x = 1/(1 + tg² x),

so
cos²(arc tg x) = 1/(1 + x²),

which establishes (iii). Statement (iv) is clear from (iii) and (11.52). □
By the definition of the hyperbolic functions and (12.12), the assertions of the
following theorem, which strengthen their link to the trigonometric functions, are
clear once again.
Theorem 12.29.
(i) The functions sh x and ch x are differentiable everywhere on (−∞, ∞), and

(sh x)′ = ch x   and   (ch x)′ = sh x

for all x.
(ii) The function th x is differentiable everywhere on (−∞, ∞). Moreover,

(th x)′ = 1/ch² x

for all x.
(iii) The function cth x is differentiable at all points x ≠ 0, and there,

(cth x)′ = −1/sh² x.
Finally, consider the inverse hyperbolic functions.
Theorem 12.30.
(i) The function arsh x is differentiable everywhere, and

(arsh x)′ = 1/√(x² + 1)   (12.19)

for all x.
(ii) The function arch x is differentiable on the interval (1, ∞), and

(arch x)′ = 1/√(x² − 1)   (12.20)

for all x > 1.
(iii) The function arth x is differentiable everywhere on (−1, 1), and

(arth x)′ = 1/(1 − x²)   (12.21)

for all x ∈ (−1, 1).
Proof. (i) Since the function sh x is strictly increasing on R and its derivative there is ch x ≠ 0, the function arsh x is differentiable everywhere. The derivative can be computed either from Theorem 12.20 or by the identity (11.59).
(ii) Since the function ch x is strictly increasing on (0, ∞) and its derivative there is sh x ≠ 0, the function arch x is differentiable on the interval (1, ∞). We leave the proofs of (12.20) and statement (iii) to the reader. □
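The derivatives (12.15)–(12.21) can likewise be checked with a computer algebra system; in the sketch below each formula is tested at a sample point of its domain (the sample points are illustrative choices, not taken from the text).

import sympy as sp

x = sp.symbols('x')
# (function, expected derivative, sample point inside the domain)
checks = [
    (sp.asin(x),   1/sp.sqrt(1 - x**2), sp.Rational(1, 2)),   # (12.15)
    (sp.acos(x),  -1/sp.sqrt(1 - x**2), sp.Rational(1, 2)),   # (12.16)
    (sp.atan(x),   1/(1 + x**2),        2),                   # (12.17)
    (sp.asinh(x),  1/sp.sqrt(x**2 + 1), 2),                   # (12.19)
    (sp.acosh(x),  1/sp.sqrt(x**2 - 1), 2),                   # (12.20)
    (sp.atanh(x),  1/(1 - x**2),        sp.Rational(1, 2)),   # (12.21)
]
for func, expected, x0 in checks:
    deriv_at_x0 = sp.diff(func, x).subs(x, x0)
    assert sp.simplify(deriv_at_x0 - expected.subs(x, x0)) == 0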
Remark 12.31. It is worth noting that the derivatives of the functions log x, arc tg x,
and arth x are rational functions, and the derivatives of arc sin x, arccos x, arsh x, and
arch x are algebraic functions. (The definition of an algebraic function can be found
in Exercise 11.45.)
Exercises
12.18. Suppose f + g is differentiable at a, and g is not differentiable at a. Can f be
differentiable at a?
12.19. Let f(x) = x² · sin(1/x) for x ≠ 0, and f(0) = 0. Prove that f is differentiable everywhere. (S)
12.20. Prove that if 0 < c < 1, then the right-hand derivative of x^c at 0 is infinity.
12.21. Prove that if n is a positive odd integer, then the derivative of ⁿ√x at 0 is infinity.
12.22. Where is the tangent line of the function ∛(sin x) vertical?
12.23. Prove that the graphs of the functions √(4a(a − x)) and √(4b(b + x)) cross each other at right angles, that is, the tangent lines at the intersection point are perpendicular. (S)
12.24. Prove that the curves x² − y² = a and xy = b cross each other at right angles. That is, the graphs of the functions ±√(x² − a) and b/x cross each other perpendicularly.
12.25. At what angle do the graphs of the functions 2^x and (π − e)^x cross each other? (S)
12.26. Give a closed form for x + 2x² + · · · + nx^n. (Hint: differentiate the function 1 + x + · · · + x^n.) Use this to compute the sums

1/2 + 2/4 + 3/8 + · · · + n/2^n   and   1/3 + 2/9 + 3/27 + · · · + n/3^n.
12.27. Let f (x) = x · (x + 1) · . . . · (x + 100), and let g = f ◦ f ◦ f . Compute the value
of g′ (0).
12.28. Prove that the function x^x is differentiable for all x > 0, and compute its
derivative.
12.29. The function x^x is strictly monotone on [1, ∞). What is the value of the derivative of its inverse at the point 27?
12.30. The function x⁵ + x² is strictly monotone on [0, ∞). What is the value of the
derivative of its inverse at the point 2?
12.31. Prove that the function x + sin x is strictly monotone increasing. What is the
value of the derivative of its inverse at the point 1 + (π /2)?
12.32. Let f(x) = log_x 3 (x > 0, x ≠ 1). Compute the derivatives of f and f⁻¹.
12.33. Let us apply differentiation to find limits. The method consists in changing
the function being considered into a difference quotient and finding its limit through
differentiation. For example, instead of
lim_{x→0} (x + e^x)^(1/x),

we can take its logarithm to get the quotient

log(x + e^x)/x,
which is the difference quotient of the numerator at 0. The limit of the quotient is
thus the derivative of the numerator at 0. If this limit is A, then the original limit is
e^A. Finish this computation.
12.34. Apply the method above, or one of its variants, to find the following limits:
(a) lim_{x→0} (cos x)^(1/sin x),
(b) lim_{x→0} ((e^x + 1)/2)^(1/sh x),
(c) lim_{x→0} (log cos 3x)/(sh x²),
(d) lim_{x→1} (2 − x)^(1/cos(π/(2x))),
(e) lim_{x→∞} (x^(1/x) − 1) · x/log x.
12.35. Prove, using the method above, that if a1, . . . , an > 0, then

lim_{x→0} ((a1^x + · · · + an^x)/n)^(1/x) = (a1 · . . . · an)^(1/n).
12.36. Let Tn denote the nth Chebyshev polynomial (see Exercise 11.32). Prove that if Tn(a) = 0, then |Tn′(a)| = n/√(1 − a²).
12.37. Let f be convex on the open interval I.
(a) Prove that the function f+′ (x) is monotone increasing on I.
(b) Prove that if the function f+′ (x) is continuous at a point x0 , then f is differentiable at x0 .
(c) Prove that the set {x ∈ I : f is not differentiable at x} is countable.
12.3 Higher-Order Derivatives
Definition 12.32. Let the function f be differentiable in a neighborhood of a point
a. If the derivative function f ′ has a derivative at a, then we call the derivative of f ′
at a the second derivative of f . We denote this by f ′′ (a). Thus
f″(a) = lim_{x→a} (f′(x) − f′(a))/(x − a).
If f ′′ (a) exists and is finite, then we say that f is twice differentiable at a. The
second derivative function of f , denoted by f ′′ , is the function defined for the points
x at which f is twice differentiable, and its value there is f ′′ (x).
We can define the kth-order derivatives by induction:
Definition 12.33. Let the function f be k − 1 times differentiable in a neighborhood
of the point a. Let the (k − 1)th derivative function of f be denoted by f (k−1) . The
derivative of f (k−1) at a, if it exists, is called the kth (order) derivative of f . The kth
derivative function is denoted by f (k) ; this is defined where f is k times differentiable.
The kth derivative at a can be denoted by the symbols

(d^k f(x)/dx^k)|_{x=a},   (d^k f/dx^k)|_{x=a},   y^(k)(a),   (d^k y/dx^k)|_{x=a}
as well. To keep our notation consistent, we will sometimes use the notation
f^(0) = f,   f^(1) = f′,   f^(2) = f″
too.
If f (k) exists for all k ∈ N+ at a, then we say that f is infinitely differentiable at a.
It is easy to see that if p is an nth-degree polynomial, then its kth derivative is an
(n − k)th-degree polynomial for all k ≤ n. Thus the nth derivative of the polynomial
p is constant, and the kth derivative is identically zero for k > n. It follows that every
polynomial is infinitely differentiable. With the help of higher-order derivatives, we
can easily determine the multiplicity of a root of a polynomial.
Theorem 12.34. The number a is a root of the polynomial p with multiplicity k if
and only if
p(a) = p′(a) = . . . = p^(k−1)(a) = 0 and p^(k)(a) ≠ 0.   (12.22)
Proof. Clearly, it is enough to show that if a is a root of multiplicity k, then (12.22)
holds (since, for different k’s the statements (12.22) exclude each other). We prove
this by induction. If k = 1, then p(x) = (x − a) · q(x), where q(a) ≠ 0. Then p′(x) = q(x) + (x − a) · q′(x), which gives p′(a) = q(a) ≠ 0, so (12.22) holds for k = 1.
Let k > 1, and suppose that the statement holds for k − 1. Since p(x) = (x − a)^k · q(x), where q(a) ≠ 0, we have

p′(x) = k · (x − a)^(k−1) · q(x) + (x − a)^k · q′(x) = (x − a)^(k−1) · r(x),
where r(a) ≠ 0. Then the number a is a root of the polynomial p′ with multiplicity k − 1, so by the induction hypothesis,

p′(a) = p″(a) = . . . = p^(k−1)(a) = 0 and p^(k)(a) ≠ 0.
Since p(a) = 0 is also true, we have proved (12.22). □
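Theorem 12.34 is easy to illustrate with a computer algebra system: for a polynomial with a root of multiplicity k at a, the derivatives up to order k − 1 vanish at a and the kth does not. The polynomial in the sketch below is a made-up example.

import sympy as sp

x = sp.symbols('x')
p = (x - 2)**3 * (x + 1)        # a = 2 is a root of multiplicity 3 (illustrative example)

values = [sp.diff(p, x, k).subs(x, 2) for k in range(4)]
print(values)                    # [0, 0, 0, 18]: p(2) = p'(2) = p''(2) = 0, while p'''(2) != 0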
Some of the differentiation rules apply for higher-order derivatives as well. Out
of these, we will give those for addition and multiplication. To define the rule for
multiplication, we need to introduce binomial coefficients.
Definition 12.35. If 0 ≤ k ≤ n are integers, then the number n!/(k!(n − k)!) is denoted by (n choose k), where 0! is defined to be 1.
By the definition, it is clear that (n choose 0) = (n choose n) = 1 for all n. It is also easy to check that

(n choose k) = (n−1 choose k−1) + (n−1 choose k)   (12.23)
for all n ≥ 2 and k = 1, . . . , n − 1.
Theorem 12.36 (Binomial Theorem). The following identity holds:

(a + b)^n = Σ_{k=0}^{n} (n choose k) · a^(n−k) · b^k.   (12.24)
The name of the theorem comes from the fact that a binomial (that is, a polynomial with two terms) appears on the left-hand side of (12.24).
Proof. We prove this by induction. The statement is clear for n = 1. If it holds for
n, then
(a + b)^(n+1) = (a + b)^n · (a + b) =
= (a^n + (n choose 1) · a^(n−1) · b + · · · + (n choose n−1) · a · b^(n−1) + b^n) · (a + b).
If we multiply this out, then in the resulting sum, the terms a^(n+1) and b^(n+1) appear, and moreover, for all 1 ≤ k ≤ n, the terms (n choose k−1) · a^(n−k+1) · b^k and (n choose k) · a^(n−k+1) · b^k also appear. The sum of these two terms, according to (12.23), is exactly (n+1 choose k) · a^(n−k+1) · b^k. Thus we get the identity (12.24) (with n + 1 replacing n). □
Theorem 12.37. If f and g are n times differentiable at a, then f + g and f · g are
also n times differentiable there, and
(f + g)^(n)(a) = f^(n)(a) + g^(n)(a),   (12.25)

as well as

(f · g)^(n)(a) = Σ_{k=0}^{n} (n choose k) · f^(n−k)(a) · g^(k)(a).   (12.26)
The identity (12.26) is called the Leibniz rule.
Proof. (12.25) is straightforward by induction. We also use induction to prove
(12.26). If n = 1, then the statement is clear by the differentiation rule for products.
Suppose that the statement holds for n. If f and g are n + 1 times differentiable at
a, then they are n times differentiable in a neighborhood U of a, so by the induction
hypothesis,
(f · g)^(n) = f^(n) · g + (n choose 1) · f^(n−1) · g′ + · · · + (n choose n−1) · f′ · g^(n−1) + f · g^(n)   (12.27)
in U. Using the differentiation rules for sums and products, we get that (f · g)^(n+1)(a) is the sum of the terms f^(n+1)(a) · g(a) and f(a) · g^(n+1)(a), as well as terms of the form (n choose k−1) · f^(n−k+1)(a) · g^(k)(a) and (n choose k) · f^(n−k+1)(a) · g^(k)(a) for all k = 1, . . . , n. These sum to (n+1 choose k) · f^(n−k+1)(a) · g^(k)(a), which shows that (12.26) holds for n + 1. □
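The Leibniz rule (12.26) can be checked symbolically for a fixed order; the functions and the order n in the following sketch are illustrative choices only.

import sympy as sp

x = sp.symbols('x')
f = sp.exp(2*x)       # sample functions (illustrative choice)
g = sp.sin(x)
n = 4

lhs = sp.diff(f*g, x, n)
rhs = sum(sp.binomial(n, k) * sp.diff(f, x, n - k) * sp.diff(g, x, k) for k in range(n + 1))
assert sp.simplify(lhs - rhs) == 0    # the two sides of (12.26) agree for this choice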
The higher-order derivatives of some elementary functions are easy to compute.
Examples 12.38. 1. It is easy to see that the exponential function a^x is infinitely differentiable, and that

(a^x)^(n) = (log a)^n · a^x   (12.28)

for all n.
2. It is also easy to check that the power function x^b is infinitely differentiable on the interval (0, ∞), and that

(x^b)^(n) = b(b − 1) · . . . · (b − n + 1) · x^(b−n)   (12.29)

for all n and x > 0.
3. The functions sin x and cos x are also infinitely differentiable, and their higher-order derivatives are

(sin x)^(2n) = (−1)^n · sin x,   (sin x)^(2n+1) = (−1)^n · cos x,
(cos x)^(2n) = (−1)^n · cos x,   (cos x)^(2n+1) = (−1)^(n+1) · sin x   (12.30)
for all n and x.
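Formulas (12.28)–(12.30) can also be confirmed for a fixed order with a computer algebra system; the order n = 3 in the following sketch is an arbitrary illustrative choice.

import sympy as sp

x, a, b = sp.symbols('x a b', positive=True)
n = 3                                   # fixed order used for the illustration

# (12.28): (a^x)^(n) = (log a)^n * a^x
assert sp.simplify(sp.diff(a**x, x, n) - sp.log(a)**n * a**x) == 0
# (12.29): (x^b)^(n) = b(b-1)(b-2) * x^(b-3) for n = 3
assert sp.simplify(sp.diff(x**b, x, n) - b*(b - 1)*(b - 2)*x**(b - n)) == 0
# (12.30): (sin x)^(2n) = (-1)^n * sin x, here with 2n = 6
assert sp.simplify(sp.diff(sp.sin(x), x, 2*n) - (-1)**n * sp.sin(x)) == 0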
Remark 12.39. The equalities (sin x)′′ = − sin x and (cos x)′′ = − cos x can be expressed by saying that the functions sin x and cos x satisfy the relation
y′′ + y = 0;
this means that if we write sin x or cos x in place of y, then we get an equality. Such a
relation that links the derivatives of a function to the function itself (possibly using
other known functions) is called a differential equation. A differential equation
is said to have order n if the highest-order derivative that appears in the differential equation is the nth. So we can say that the functions sin x and cos x satisfy the
second-order differential equation y′′ + y = 0. More specifically, we will say that
this differential equation is an algebraic differential equation, since only the basic
operations are applied to the function and its derivatives.
It is clear that the exponential function a^x satisfies the first-order differential equation y′ − log a · y = 0. It is less obvious, however, that all exponential functions satisfy one and the same differential equation, not depending on a. Indeed, if y = a^x, then y′/y = log a, which is constant. Thus (y′/y)′ = 0, that is, a^x satisfies the second-order algebraic differential equation

y″ · y − (y′)² = 0.
We can similarly show that every power function satisfies the same second-order
algebraic differential equation. The function x will appear in this differential equation, but can be removed by increasing the order (see Exercise 12.43).
The function log_a x satisfies the equation y′ − (log a · x)^(−1) = 0. From this, we get
that x · y′ is a constant, that is,
x · y′′ + y′ = 0.
It is easy to see that the logarithmic functions satisfy a single third-order algebraic
differential equation, in which x does not appear. The inverse trigonometric and
hyperbolic functions satisfy similar equations.
One can show that if two functions both satisfy an algebraic differential equation,
then their sum, product, quotient, and composition also satisfy an algebraic differential equation (a different one, generally more complicated). It follows that every
elementary function satisfies an algebraic differential equation (which, of course,
depends on the function).
In the next chapter, we will discuss differential equations in more detail.
Exercises
12.38. Prove that if f is twice differentiable at a, then f is continuous in a neighborhood of a.
12.39. How many times is the function |x|³ differentiable at 0?
12.40. Give a function that is k times differentiable at 0 but is not k + 1 times differentiable there.
12.41. Prove that for the Chebyshev polynomial Tn(x) (see Exercise 11.31),

(1 − x²)Tn″(x) − x·Tn′(x) + n²·Tn(x) = 0
for all x.
12.42. Prove that the Legendre polynomial

Pn(x) = (1/(2^n · n!)) · ((x² − 1)^n)^(n)

satisfies

(1 − x²)Pn″(x) − 2x·Pn′(x) + n(n + 1)Pn(x) = 0
for all x.
12.43. (a) Prove that every power function satisfies a second-order algebraic differential equation.
(b) Prove that every power function satisfies a single third-order algebraic differential equation that does not contain x.
12.44. Prove that the logarithmic functions satisfy a single third-order algebraic differential equation that does not contain x. (S)
12.45. Prove that each of the functions arc sin x, arccos x, arc tg x, arsh x, arch x, and
arth x (individually) satisfies a third-order algebraic differential equation that does
not contain x.
12.46. Prove that the function ex + log x satisfies an algebraic differential equation.
12.47. Prove that the function ex · sin x satisfies an algebraic differential equation.
12.4 Linking the Derivative and Local Properties
Definition 12.40. Let the function f be defined
on a neighborhood of the point a. We say that f
is locally increasing at a if there exists a δ > 0
such that f (x) ≤ f (a) for all a − δ < x < a, and
f (x) ≥ f (a) for all a < x < a + δ .
Let f be defined on a right-hand neighborhood of a. We say that f is locally increasing
on the right at a if there exists a δ > 0 such that
f (x) ≥ f (a) for all a < x < a + δ (Figure 12.4).
We similarly define the concepts of strictly
locally increasing, locally decreasing, and
strictly locally decreasing at a, as well as (strictly) locally increasing and decreasing from the left.
Remark 12.41. We have to take care to distinguish the concepts of locally increasing and monotone increasing (or decreasing) functions. The precise link between the two
is the following.
On the one hand, it is clear that if f is monotone increasing on (a, b), then f is
locally increasing at every point of (a, b).
On the other hand, it can be shown that if f is locally increasing at every point in
(a, b), then it is monotone increasing in (a, b) (but since we will not need this fact,
we leave the proof of it as an exercise; see Exercise 12.54).
It is possible, however, for a function f to be locally increasing at a point a
but not be monotone increasing on any neighborhood U(a). Consider the following
examples.
1. The function

f(x) = x · sin²(1/x) if x ≠ 0, and f(x) = 0 if x = 0
is locally increasing at 0, but it does not have a neighborhood of 0 in which f is
monotone (Figure 12.5).
2. The function

f(x) = 1/x if x ≠ 0, and f(x) = 0 if x = 0
is strictly locally increasing at 0, but it is not monotone increasing in any interval.
In fact, in the right-hand punctured neighborhoods of 0, f is strictly monotone
decreasing.
3. Similarly, the function

f(x) = tg x if x ∈ (0, π) \ {π/2}, and f(x) = 0 if x = π/2

is strictly locally decreasing at π/2, but is strictly monotone increasing on the intervals (0, π/2) and (π/2, π).
4. The function

f(x) = x if x is irrational, and f(x) = 2x if x is rational

is strictly locally increasing at 0, but there is no interval on which it is monotone (Figure 12.6).
Definition 12.42. We say that the function f has a local maximum (or minimum) at
a if a has a neighborhood U in which f is defined and for all x ∈ U, f (x) ≤ f (a) (or
f (x) ≥ f (a)). We often refer to the point a itself as the local maximum (or minimum)
of the function.
If for all x ∈ U \ {a}, f (x) < f (a) (or f (x) > f (a)), then we say that a is a strict
local maximum (or minimum).
Local maxima and local minima are collectively called local extrema.
Remark 12.43. We defined absolute (global) extrema in Definition 10.54. The following connections exist between absolute and local extrema.
An absolute extremum is not necessarily a local extremum, since a condition for
a point to be a local extremum is that the function be defined in a neighborhood
of the point. So for example, the function x on the interval [0, 1] has an absolute
minimum at 0, but this is not a local minimum. However, if the function f : A → R
has an absolute extremum at a ∈ A, and A contains a neighborhood of a, then a is a
local extremum.
A local extremum is not necessarily an absolute extremum, since the fact that f
does not have a value larger than f (a) in a neighborhood of a does not prevent it
from having a larger value outside the neighborhood.
Consider the following three properties:
I. The function f is locally increasing at a.
II. The function f is locally decreasing at a.
III. a is a local extremum of the function f .
The function f satisfies one of the properties I, II, and III if and only if there
exists a δ > 0 such that the graph of f over the interval (a − δ , a) lies entirely in one
section of the plane separated by the horizontal line y = f (a) and the graph of f over
(a, a + δ ) lies in one of the sections separated by y = f (a). The four possibilities
correspond to the function at a being locally increasing, locally decreasing, a local
maximum, and a local minimum (Figure 12.7).
It is clear that properties I and II can hold simultaneously only if f is constant in
a neighborhood of a.
In the strict variants of the properties above, however, only one can hold at one
time.
Of course, it is possible that none of I, II, and III holds. This is the case, for
example, with the function
f(x) = x · sin(1/x) if x ≠ 0, and f(x) = 0 if x = 0
at the point x = 0.
Let us now see the connection between the sign of the derivative and the properties above.
Theorem 12.44. Suppose that f is differentiable at a.
(i) If f′(a) > 0, then f is strictly locally increasing at a.
(ii) If f′(a) < 0, then f is strictly locally decreasing at a.
(iii) If f is locally increasing at a, then f′(a) ≥ 0.
(iv) If f is locally decreasing at a, then f′(a) ≤ 0.
(v) If f has a local extremum at a, then f′(a) = 0.
Proof. (i) If

f′(a) = lim_{x→a} (f(x) − f(a))/(x − a) > 0,

then by Theorem 10.30, there exists a δ > 0 such that

(f(x) − f(a))/(x − a) > 0
for all 0 < |x − a| < δ . Thus f (x) > f (a) if a < x < a + δ , and f (x) < f (a) if
a − δ < x < a. But this means precisely that the function f is strictly locally increasing at a (Figure 12.8). Statement (ii) can be proved similarly.
(iii) If f is locally increasing at a, then there exists a δ > 0 such that
(f(x) − f(a))/(x − a) ≥ 0

if 0 < |x − a| < δ. But then

f′(a) = lim_{x→a} (f(x) − f(a))/(x − a) ≥ 0.
Statement (iv) can be proved similarly.
(v) If f′(a) ≠ 0, then by (i) and (ii), f is either strictly locally increasing or strictly locally decreasing at a, so a cannot be a local extremum. Thus if f has a local extremum at a, then necessarily f′(a) = 0. □
Remarks 12.45. 1. The one-sided variants of the statements (i)–(iv) above also hold
(and can be proved the same way). That is, if f+′ (a) > 0, then f is strictly locally
increasing on the right at a; if f is locally increasing on the right at a, then f+′ (a) ≥ 0,
assuming that the right-hand derivative exists.
2. None of the converses of the statements (i)–(v) is true. If we know only that f
is strictly locally increasing at a, we cannot deduce that f′(a) > 0. For example, the
function f (x) = x3 is strictly locally increasing at 0 (and in fact strictly monotone
increasing on the whole real line), but f ′ (0) = 0.
If we know only that f ′ (a) ≥ 0, we cannot deduce that the function f is locally
increasing at a. For example, for the function f (x) = −x3 , f ′ (0) = 0 ≥ 0, but f is
not locally increasing at 0 (and in fact, it is strictly locally decreasing there).
Similarly, if we know only that f′(a) = 0, we cannot deduce that the function f has a
local extremum at a. For example, for the function f (x) = x3 , f ′ (0) = 0, but f does
not have a local extremum at 0 (since x3 is strictly monotone increasing on the entire
real line). We can also express this by saying that if f is differentiable at a, then the
assumption f ′ (a) = 0 is a necessary but not sufficient condition for f to have a local
extremum at a.
3. If in statement (iii), we assume f to be strictly locally increasing instead, then we
still cannot say more than f ′ (a) ≥ 0 generally (since the converse of statement (i)
does not hold).
4. From f ′ (a) > 0, we can deduce only that
f is locally increasing at a, and not that it
is monotone increasing. Consider the following example. Let f be a function such that
x − x² ≤ f(x) ≤ x + x² for all x (Figure 12.9). Then f(0) = 0, and so if x > 0, then

1 − x ≤ (f(x) − f(0))/(x − 0) = f(x)/x ≤ 1 + x,
while if x < 0, the reverse inequalities hold.
Thus by the squeeze theorem,
f′(0) = lim_{x→0} (f(x) − f(0))/(x − 0) = 1 > 0.
On the other hand, it is clear that we can choose the function f such that it is not
monotone increasing in any neighborhood of 0. For this, if we choose δ > 0, we
need −δ < x < y < δ such that f(x) > f(y). If, for example, f(x) = x − x² for all rational x and f(x) = x + x² for all irrational x, then this certainly holds. We can even construct f to be differentiable everywhere: draw a "smooth" (that is, differentiable) wave between the graphs of the functions x − x² and x + x² (one such function can
be seen in Exercise 12.53).
Even though the condition f ′ (a) = 0 is not sufficient for f to have a local extremum at a, in certain important cases, statement (v) of Theorem 12.44 is still
applicable for finding the extrema.
Example 12.46. Find the (absolute) maximum of the function f (x) = x · (1 − x) in
the interval [0, 1]. Since the function is continuous, by Weierstrass’s theorem (Theorem 10.55) f has an absolute maximum in [0, 1]. Suppose that f takes on its largest
value at the point a ∈ [0, 1]. Then either a = 0, a = 1, or a ∈ (0, 1). In the last case
f has a local maximum at a, and since f is everywhere differentiable, by statement
(v) of Theorem 12.44 we have f ′ (a) = 0. Now f ′ (x) = 1 − 2x, so the condition
f ′ (a) = 0 is satisfied only by a = 1/2. We get that the function attains its maximum
at one of the points 0, 1, 1/2. However, f (0) = f (1) = 0 and f (1/2) = 1/4 > 0,
so 0 and 1 cannot be maxima of the function. Thus only a = 1/2 is possible. Thus
we have shown that the function f (x) = x · (1 − x) over the interval [0, 1] attains its
maximum at the point 1/2; that is, a = 1/2 is its absolute maximum.
Remark 12.47. This argument can be applied in the cases in which we are dealing
with a function f that is continuous on a closed and bounded interval [a, b] and is
differentiable on the interior (a, b). Then f has a largest value by Weierstrass's theorem.
If this is attained at a point c, then either c = a, c = b, or c ∈ (a, b). In this last case,
we are talking about a local extremum as well, so f ′ (c) = 0. Thus if we find all
points c ∈ (a, b) where f ′ vanishes, then the absolute maximum points of f must be
among these, a, and b. We can then locate the absolute maxima by computing f at
all of these values (not forgetting about a and b), and determining those at which the
value of f is the largest. (We should note that in some cases, we have to compute the
value of f at infinitely many points. It can happen that f ′ has infinitely many roots
in (a, b); see Exercise 12.52.)
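The procedure described above can be written out as a short computation. The following sketch applies it to the function of Example 12.46 (the function and the interval are those of that example; the implementation itself is only an illustrative sketch).

import sympy as sp

x = sp.symbols('x')
f = x * (1 - x)                 # function from Example 12.46
a, b = 0, 1                     # the interval [a, b]

critical = [r for r in sp.solve(sp.diff(f, x), x) if a < r < b]   # roots of f' inside (a, b)
candidates = [a, b] + critical                                     # endpoints plus critical points
best = max(candidates, key=lambda c: f.subs(x, c))
print(best, f.subs(x, best))    # 1/2 and 1/4, as found in Example 12.46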
Example 12.48. As another application of the argument above, we deduce Snell's⁴ law. By what is called Fermat's⁵ principle, light traveling between two points takes
the path that can be traversed in the least amount of time. In Figure 12.10, the x-axis
separates two fluids in which the speed of light is respectively v1 and v2 . Looking
from the point P1 , we will see point P2 in the direction that a light ray arrives at P1 if
it starts at P2 . The light ray—by Fermat’s principle—“chooses” the path that takes
the shortest time to traverse. To determine the path of the light ray, we thus need to
solve the following problem.
⁴ Willebrord Snellius (1580–1626), Dutch mathematician.
⁵ Pierre de Fermat (1601–1665), French mathematician.
Let a line e be given in the plane, and in
the two half-planes determined by this line, let
there be given the points P1 and P2 . If a moving
point travels with velocity v1 in the half-plane
in which P1 is located, and with velocity v2 if it
is in the half-plane of P2 , what path must it take
to get from P1 to P2 in the shortest amount of
time?
Let the line e be the x-axis, let the coordinates of P1 be (a1, b1), and let the coordinates
of P2 be (a2 , b2 ). We may assume that a1 < a2
(Figure 12.10). Clearly, the point needs to travel in a straight line in both half-planes,
so the problem is simply to find where the point crosses the x-axis, that is, where the
path bends (is refracted).
If the path intersects the x-axis at the point x, the time necessary for the point to
traverse the entire path is
f(x) = (1/v1) · √((x − a1)² + b1²) + (1/v2) · √((x − a2)² + b2²),
and so
f′(x) = (1/v1) · (x − a1)/√((x − a1)² + b1²) + (1/v2) · (x − a2)/√((x − a2)² + b2²).   (12.31)
Our task is to find the absolute minimum of f . Since if x < a1 , then f (x) > f (a1 ),
and if x > a2 , then f (x) > f (a2 ), it suffices to find the minimum of f in the interval [a1 , a2 ]. Since f is continuous, Weierstrass’s theorem applies, and f attains its
minimum on [a1 , a2 ]. Since f is also differentiable, the minima can be only at the
endpoints of the interval and at the points where the derivative is zero.
Now by (12.31),

f′(a1) = (a1 − a2)/(v2 · √((a1 − a2)² + b2²)) < 0,
and so by Theorem 12.44, f is strictly locally decreasing at a1 . Thus in a suitable
right-hand neighborhood of a1, every value of f is smaller than f(a1), so a1 cannot
be a minimum. Similarly,
f′(a2) = (a2 − a1)/(v1 · √((a2 − a1)² + b1²)) > 0,
and so by Theorem 12.44, f is strictly locally increasing at a2 . Thus in a suitable
left-hand neighborhood of a2, every value of f is smaller than f(a2), so a2 cannot
be a minimum either. Thus the minimum of the function f is at a point x ∈ (a1 , a2 )
where f ′ (x) = 0. By (12.31), this is equivalent to saying that
(x − a1)/√((x − a1)² + b1²) : (a2 − x)/√((x − a2)² + b2²) = v1/v2.

We can see in the figure that

(x − a1)/√((x − a1)² + b1²) = sin α   and   (a2 − x)/√((x − a2)² + b2²) = sin β,
where α and β are called the angle of incidence and angle of refraction respectively.
Thus the path taking the least time will intersect the line separating the two fluids at
the point where
sin α / sin β = v1/v2.
This is Snell’s law.
Exercises
12.48. We want to create a box with no lid out of a rectangle with sides a and b by
cutting out a square of size x at each corner of the rectangle. How should we choose
x to maximize the volume of the box? (S)
12.49. Which cylinder inscribed into a given sphere has the largest volume?
12.50. Which right circular cone inscribed into a given sphere has the largest volume?
12.51. Which right circular cone inscribed into a given sphere has the largest surface
area? (The surface area of the cone includes the base circle.)
12.52. Let

f(x) = x² · sin(1/x) if x ≠ 0, and f(x) = 0 if x = 0.
Prove that the derivative of f has infinitely many roots in (0, 1).
12.53. Let

f(x) = x + 2x² · sin(1/x) if x ≠ 0, and f(x) = 0 if x = 0.
Show that f ′ (0) > 0, but f is not monotone increasing in any neighborhood of 0. (S)
12.54. Prove that if f is locally increasing at all points in (a, b), then it is monotone
increasing in (a, b). (H)
12.55. Determine the absolute extrema of the functions below in the given intervals.
(a) x² − x⁴, [−2, 2];
(b) x − arc tg x, [−1, 1];
(c) x + e^(−x), [−1, 1];
(d) x + x^(−2), [1/10, 10];
(e) arc tg(1/x), [1/10, 10];
(f) cos x², [0, π];
(g) sin(sin x), [−π/2, π/2];
(h) x · e^(−x), [−2, 2];
(i) x^n · e^(−x), [−2n, 2n];
(j) x − log x, [1/2, 2];
(k) 1/(1 + sin² x), (0, π);
(l) 1 − e^(−x²), [−2, 2];
(m) x · sin(log x), [1, 100];
(n) x^x, (0, ∞);
(o) x^(1/x), (0, ∞);
(p) (log x)/x, (0, ∞);
(q) x · log x, (0, ∞);
(r) x^x · (1 − x)^(1−x), (0, 1).
12.5 Intermediate Value Theorems
The following three theorems—each a generalization of the one that it follows—are
some of the most frequently used theorems in differentiation. When we are looking
for a link between properties of a function and its derivative, most often we use one
of these intermediate value theorems.
Theorem 12.49 (Rolle's Theorem⁶). Suppose that the function f is continuous on
[a, b] and differentiable on (a, b). If f (a) = f (b), then there exists a c ∈ (a, b) such
that f ′ (c) = 0.
Proof. If f (x) = f (a) for all x ∈ (a, b), then f is
constant in (a, b), so f ′ (x) = 0 for all x ∈ (a, b).
Then we can choose c to be any number in
(a, b).
We can thus suppose that there exists an
x0 ∈ (a, b) for which f (x0 ) = f (a). Consider
first the case f (x0 ) > f (a). By Weierstrass’s
theorem, f has an absolute maximum in [a, b].
Since f (x0 ) > f (a) = f (b), neither a nor b can
be its absolute maximum. Thus if c is an absolute maximum, then c ∈ (a, b), and so c is also a local maximum. By statement
(v) of Theorem 12.44, it then follows that f ′ (c) = 0.
If f (x0 ) < f (a), then we argue similarly, considering the absolute minimum of f
instead (Figure 12.11). □
An important generalization of Rolle’s theorem is the following theorem.
⁶ Michel Rolle (1652–1719), French mathematician.
Theorem 12.50 (Mean Value Theorem). If the function f is continuous on [a, b]
and differentiable on (a, b), then there exists a c ∈ (a, b) such that
f′(c) = (f(b) − f(a))/(b − a).
Proof. The equation for the chord between the points (a, f(a)) and (b, f(b)) is given by

y = h_{a,b}(x) = ((f(b) − f(a))/(b − a)) · (x − a) + f(a).

The function

F(x) = f(x) − h_{a,b}(x)

satisfies the conditions of Rolle's theorem. Indeed, since f and h_{a,b} are both continuous on [a, b] and differentiable on (a, b), their difference also has these properties.
Since F(b) = F(a) = 0, we can apply Rolle’s theorem to F. We get that there exists
a c ∈ (a, b) such that F ′ (c) = 0. But this means that
0 = F′(c) = f′(c) − h′_{a,b}(c) = f′(c) − (f(b) − f(a))/(b − a),

so

f′(c) = (f(b) − f(a))/(b − a),

which concludes the proof of the theorem (Figure 12.12). □
The geometric meaning of the mean value
theorem is the following: if the function f is
continuous on [a, b] and differentiable on (a, b),
then the graph of f has a point in (a, b) where
the tangent line is parallel to the chord h_{a,b}.
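For a concrete function the point c promised by the mean value theorem can be computed explicitly; the function f(x) = log x on [1, 2] in the sketch below is an illustrative choice.

import sympy as sp

x = sp.symbols('x')
f = sp.log(x)                   # sample function, continuous on [1, 2], differentiable on (1, 2)
a, b = 1, 2

slope = (f.subs(x, b) - f.subs(x, a)) / (b - a)       # slope of the chord, log 2
c = sp.solve(sp.Eq(sp.diff(f, x), slope), x)          # solve f'(c) = slope for c
print(c)                        # [1/log(2)], which indeed lies in (1, 2)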
The following theorem is a generalization of
the previous one.
Theorem 12.51 (Cauchy’s Mean Value Theorem). If the functions f and g are
continuous on [a, b], differentiable on (a, b), and g′(x) ≠ 0 for all x ∈ (a, b), then there exists a c ∈ (a, b) such that

f′(c)/g′(c) = (f(b) − f(a))/(g(b) − g(a)).
Proof. By Rolle's theorem, we know that g(a) ≠ g(b). Indeed, if g(a) = g(b) held,
then the derivative of g would be zero at at least one point of the interval (a, b),
which we did not allow. Let
F(x) = f(x) − f(a) − ((f(b) − f(a))/(g(b) − g(a))) · (g(x) − g(a)).
The function F is continuous on [a, b], differentiable on (a, b), and F(a) = F(b) = 0.
Thus by Rolle’s theorem, there exists a c ∈ (a, b) such that F ′ (c) = 0. Then
0 = F′(c) = f′(c) − ((f(b) − f(a))/(g(b) − g(a))) · g′(c).
Since by the assumptions, g′(c) ≠ 0, we get that
f′(c)/g′(c) = (f(b) − f(a))/(g(b) − g(a)),

which concludes the proof. □
It is clear that the mean value theorem is a special case of Cauchy’s mean value
theorem if we apply the latter with g(x) = x.
A simple but important corollary of the mean value theorem is the following.
Theorem 12.52. If f is continuous on [a, b], differentiable on (a, b), and f ′ (x)=0
for all x ∈ (a, b), then the function f is constant on [a, b].
Proof. By the mean value theorem, for every x in (a, b], there exists a c ∈ (a, b) such
that
f′(c) = (f(x) − f(a))/(x − a).

So by f′(c) = 0, we have f(x) = f(a). □
The following corollary is sometimes called the fundamental theorem of integration; we will later see why.
Corollary 12.53. If f and g are continuous on [a, b], differentiable on (a, b), and
moreover, f ′ (x) = g′ (x) for all x ∈ (a, b), then with a suitable constant c, we have
f (x) = g(x) + c for all x ∈ [a, b].
Proof. Apply Theorem 12.52 to the function f − g. □
Exercises
12.56. Give an example of a differentiable function f : R → R that has a point c
such that f ′ (c) is not equal to the difference quotient ( f (b) − f (a))/(b − a) for any
a < b. Why does this not contradict the mean value theorem?
12.57. Prove that if f is twice differentiable on [a, b], and for a c ∈ (a, b) we have f″(c) ≠ 0, then there exist a ≤ x1 < c < x2 ≤ b such that

f′(c) = (f(x1) − f(x2))/(x1 − x2).   (H)
12.58. Prove that
(α − β ) · cos α ≤ sin α − sin β ≤ (α − β ) · cos β
for all 0 < β < α < π /2.
12.59. Prove that | arc tg x − arc tg y| ≤ |x − y| for all x, y.
12.60. Let f be differentiable on the interval I, and suppose that the function f ′ is
bounded on I. Prove that f is Lipschitz on I.
12.61. Prove that if f′(x) = x² for all x, then there exists a constant c such that f(x) = (x³/3) + c.
12.62. Prove that if f ′ (x) = f (x) for all x, then there exists a constant c such that
f (x) = c · ex for all x.
12.63. Let f : R → (0, ∞) be differentiable and strictly monotone increasing. Suppose that the tangent line of the graph of f at every point (x, f (x)) intersects the
x-axis at the point x − a, where a > 0 is a constant. Prove that f is an exponential
function.
12.64. Let f : (0, ∞) → (0, ∞) be differentiable and strictly monotone increasing.
Suppose that the tangent line to the graph of f at every point (x, f (x)) intersects the
x-axis at the point c · x, where c > 0 is a constant. Prove that f is a power function.
12.65. Prove that if f and g are differentiable everywhere, f (0) = 0, g(0) = 1,
f ′ = g, and g′ = − f , then f (x) = sin x and g(x) = cos x for all x. (H)
12.66. Prove that the function x⁵ − 5x + 2 has three real roots.
12.67. Prove that the function x⁷ + 8x² + 5x − 23 has at most three real roots.
12.68. At most how many real roots can the function x¹⁶ + ax + b have?
12.69. For what values of k does the function x³ − 6x² + 9x + k have exactly one real
root?
12.70. Prove that if p is an nth-degree polynomial, then the function e^x − p(x) has
at most n + 1 real roots.
12.71. Let f be n times differentiable on (a, b). Prove that if f has n distinct roots in
(a, b), then f (n−k) has at least k roots in (a, b) for all k = 1, . . . , n − 1. (H)
12.72. Prove that if p is an nth-degree polynomial and every root of p is real, then
every root of p′ is also real.
12.73. Prove that every root of the Legendre polynomial
Pn(x) = (1/(2^n · n!)) · ((x² − 1)^n)^(n)

is real.
12.74. Let f and g be n times differentiable functions on [a, b], and suppose that
they have n common roots in [a, b]. Prove that if the functions f (n) and g(n) have
no common roots in [a, b], then for all x ∈ [a, b] such that g(x) ≠ 0, there exists a c ∈ (a, b) such that

f(x)/g(x) = f^(n)(c)/g^(n)(c).
12.75. Let f be continuous on (a, b) and differentiable on (a, b) \ {c}, where a <
c < b. Prove that if limx→c f ′ (x) = A, where A is finite, then f is differentiable at c
and f ′ (c) = A.
12.76. Prove that if f is twice differentiable at a, then
lim_{h→0} (f(a + 2h) − 2f(a + h) + f(a))/h² = f″(a).   (H)
12.77. Let f be differentiable on (0, ∞). Prove that if there exists a sequence xn → ∞
such that f (xn ) → 0, then there also exists a sequence yn → ∞ such that f ′ (yn ) → 0.
12.78. Let f be differentiable on (0, ∞). Prove that if limx→∞ f ′ (x) = 0, then
limx→∞ f (x)/x = 0.
12.6 Investigation of Differentiable Functions
We begin with monotonicity criteria.
Theorem 12.54. Let f be continuous on [a, b] and differentiable on (a, b).
(i) f is monotone increasing (decreasing) on [a, b] if and only if f ′ (x) ≥ 0 ( f ′ (x) ≤
0) for all x ∈ (a, b).
(ii) f is strictly monotone increasing (decreasing) on [a, b] if and only if f ′ (x) ≥ 0
( f ′ (x) ≤ 0) for all x ∈ (a, b), and [a, b] does not have a nondegenerate subinterval on which f ′ is identically zero.
Proof. (i) Suppose that f ′ (x) ≥ 0 for all x ∈ (a, b). By the mean value theorem, for
arbitrary a ≤ x1 < x2 ≤ b there exists a c ∈ (x1 , x2 ) such that
(f(x1) − f(x2))/(x1 − x2) = f′(c).
Since f ′ (c) ≥ 0, we have f (x1 ) ≤ f (x2 ), which means exactly that f is monotone
increasing on [a, b].
Conversely, if f is monotone increasing on [a, b], then it is locally increasing at
every x in (a, b). Thus by statement (iii) of Theorem 12.44, we see that f ′ (x) ≥ 0.
The proof is similar for the monotone decreasing case.
(ii) It is easy to see that a function f is strictly monotone on [a, b] if and only if it is monotone on [a, b] and [a, b] does not have a nondegenerate subinterval on which f is constant. Then the statement can be proved easily by Theorem 12.52. □
As an application of the theorem above, we introduce a simple but useful method
for proving inequalities.
Corollary 12.55. Let f and g be continuous on [a, b] and differentiable on (a, b). If
f (a) = g(a) and a < x ≤ b implies f ′ (x) ≥ g′ (x), then f (x) ≥ g(x) for all x ∈ [a, b].
Also, if f(a) = g(a) and a < x ≤ b implies f′(x) > g′(x), then f(x) > g(x) for all
x ∈ (a, b].
Proof. Let h = f − g. If a < x ≤ b implies f ′ (x) ≥ g′ (x), then h′ (x) ≥ 0 for all
x ∈ (a, b), and so by statement (i) of Theorem 12.54, h is monotone increasing on
[a, b]. If f (a) = g(a), then h(a) = 0, and so h(x) ≥ h(a) = 0, and thus f (x) ≥ g(x)
for all x ∈ [a, b]. The second statement follows similarly, using statement (ii) of
Theorem 12.54. □
Example 12.56. To illustrate the method, let us show that
log(1 + x) > 2x/(x + 2)   (12.32)
for all x > 0. Since for x = 0 we have equality, by Corollary 12.55 it suffices to show
that the derivatives of the given functions satisfy the inequality for all x > 0. It is
easy to check that (2x/(x + 2))′ = 4/(x + 2)². Thus we have only to show that for x > 0, 1/(1 + x) > 4/(x + 2)², which can be checked by multiplying through.
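The derivative comparison used in this example can be confirmed symbolically; the following sketch is only an informal check of the inequality 1/(1 + x) > 4/(x + 2)² for x > 0.

import sympy as sp

x = sp.symbols('x', positive=True)
lhs = sp.log(1 + x)
rhs = 2*x / (x + 2)

# the two sides agree at x = 0, and lhs' - rhs' is positive for x > 0
gap = sp.simplify(sp.diff(lhs, x) - sp.diff(rhs, x))
print(gap)                 # x**2/((x + 1)*(x + 2)**2), up to the form SymPy chooses
print(gap.is_positive)     # True, since every factor is positive under the assumption x > 0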
More applications of Corollary 12.55 can be found among the exercises.
Remark 12.57. With the help of Theorem 12.54, we can find the local and absolute
extrema of an arbitrary differentiable function even if it is not defined on a closed
and bounded interval. This is because by the sign of the derivative, we can determine
on which intervals the function is increasing and on which it is decreasing, and this
generally gives us enough information to find the extrema.
Consider the function f(x) = x · e^(−x), for example. Since f′(x) = e^(−x) − x · e^(−x),
we have f ′ (x) > 0 if x < 1, and f ′ (x) < 0 if x > 1. Thus f is strictly monotone
increasing on (−∞, 1], and strictly monotone decreasing on [1, ∞). It follows that f
has an absolute maximum at 1 (which is also a local maximum), and that f does not
have any local or absolute minima.
In Theorem 12.44 we saw that if f is differentiable at a, then for f to have a local
extremum at a, it is necessary (but generally not sufficient) for f ′ (a) = 0 to hold.
The following theorems give sufficient conditions for the existence of local extrema.
Theorem 12.58. Let f be differentiable in a neighborhood of the point a.
(i) If f′(a) = 0 and f′ is locally increasing (decreasing) at a,⁷ then a is a local
minimum (maximum) of f .
(ii) If f ′ (a) = 0 and f ′ is strictly locally increasing (decreasing) at a, then the point
a is a strict local minimum (maximum) of f .
Proof. (i) Consider the case that f ′ is locally increasing at a. Then there exists a
δ > 0 such that f ′ (x) ≤ 0 if a − δ < x < a, and f ′ (x) ≥ 0 if a < x < a + δ . By
Theorem 12.54, it then follows that f is monotone decreasing on [a − δ , a], and
monotone increasing on [a, a + δ ]. Thus if a − δ < x < a, then f (x) ≥ f (a). Moreover, if a < x < a + δ , then again f (x) ≥ f (a). This means exactly that f has a local
minimum at a. The statement for the local maximum can be proved similarly.
(ii) If f ′ is strictly locally increasing at a, then there exists a δ > 0 such that f ′ (x) < 0
if a − δ < x < a, and f ′ (x) > 0 if a < x < a + δ .
It then follows that f is strictly monotone decreasing on [a − δ , a], and strictly
monotone increasing on [a, a + δ ]. Thus if a − δ < x < a, then f (x) > f (a). Moreover, if a < x < a + δ , then again f (x) > f (a). This means exactly that f has a strict
local minimum at a. One can argue similarly for the case of a strict local maximum.
⊔
⊓
Remark 12.59. The sign change of f ′ at a is not necessary for f to have a local
extremum at the point a. Let f be a function such that x² ≤ f(x) ≤ 2x² for all x.
Then the point 0 is a strict local (and absolute) minimum of f . On the other hand,
it is possible that f is not differentiable (or even continuous) at any point other than
0; this is the case, for example, if f(x) = x² for all rational x, and f(x) = 2x² for all
irrational x.
We can also construct such an f to be differentiable; for this we need to place
a differentiable function between the graphs of the functions x² and 2x², each of whose one-sided neighborhoods of 0 contains both a monotone decreasing and a monotone increasing section. We give such a function in
Exercise 12.95.
Theorem 12.60. Let f be twice differentiable at a. If f ′ (a) = 0 and f ′′ (a) > 0, then
f has a strict local minimum at a. If f ′ (a) = 0 and f ′′ (a) < 0, then f has a strict
local maximum at a.
Proof. Suppose that f″(a) > 0. By Theorem 12.44, it follows that f′ is strictly locally increasing at a. Now apply the previous theorem (Figure 12.13). The proof for the case f″(a) < 0 is similar. □
Remark 12.61. If f ′ (a) = 0 and f ′′ (a) = 0, then we cannot deduce whether f has
a local extremum at a. The different possibilities are illustrated by the functions
⁷ That is, if f′ changes sign at the point a, meaning that it is nonpositive on a left-sided neighborhood of a and nonnegative on a right-sided neighborhood, or vice versa.
f(x) = x³, f(x) = x⁴, and f(x) = −x⁴ at a = 0. In this case, we can obtain sufficient conditions for f to have a local extremum at a from the values of higher-order derivatives.
Theorem 12.62.
(i) Let the function f be 2k times differentiable at the point a, where k ≥ 1. If
f ′ (a) = . . . = f (2k−1) (a) = 0 and f (2k) (a) > 0,
(12.33)
then f has a strict local minimum at a. If
f ′ (a) = . . . = f (2k−1) (a) = 0 and f (2k) (a) < 0,
then f has a strict local maximum at a.
(ii) Let the function f be 2k + 1 times differentiable at a, where k ≥ 1. If
f′(a) = . . . = f^(2k)(a) = 0 and f^(2k+1)(a) ≠ 0,
(12.34)
then f is strictly monotone in a neighborhood of a, and so f does not have a
local extremum there.
Proof. (i) We prove only the first statement, using induction. We already saw the
k = 1 case in Theorem 12.60. Let k > 1, and suppose that the statement holds for
k − 1. If (12.33) holds for f , then for the function g = f ′′ , we have
g′ (a) = . . . = g(2k−3) (a) = 0 and g(2k−2) (a) > 0.
Thus by the induction hypothesis, f ′′ has a strict local minimum at a. Since by
k > 1 we have f ′′ (a) = 0, there must exist a δ > 0 such that f ′′ (x) > 0 at all points
x ∈ (a − δ , a + δ ) \ {a}. Then by Theorem 12.54, it follows that f ′ is strictly monotone increasing on (a − δ , a + δ ). Thus f ′ is strictly locally increasing at a, so we
can apply Theorem 12.58.
(ii) Suppose (12.34) holds. Then by the already proved statement (i), a is a strict
local extremum of f ′ . Since f ′ (a) = 0, either f ′ (x) > 0 for all x ∈ (a − δ , a + δ )\
{a}, or f ′ (x) < 0 for all x ∈ (a − δ , a + δ ) \ {a}. By Theorem 12.54, it then follows
that f is strictly monotone on (a − δ , a + δ ), so it does not have a local extremum at
the point a (Figure 12.14). □
We now turn to the conditions for convexity.
Theorem 12.63. Let f be differentiable on the interval I.
(i) The function f is convex (concave) on I if and only if f ′ is monotone increasing
(decreasing) on I.
(ii) The function f is strictly convex (concave) on I if and only if f ′ is strictly monotone increasing (decreasing) on I.
Proof. (i) Suppose that f ′ is monotone increasing on I. Let a, b ∈ I, a < b, and let
a < x < b be arbitrary. By the mean value theorem, there exist points u ∈ (a, x) and
v ∈ (x, b) such that
f′(u) = (f(x) − f(a))/(x − a)   and   f′(v) = (f(b) − f(x))/(b − x).
Since u < v and f ′ is monotone increasing, f ′ (u) ≤ f ′ (v), so
(f(x) − f(a))/(x − a) ≤ (f(b) − f(x))/(b − x).
Then by a simple rearrangement, we get that
f(x) ≤ ((f(b) − f(a))/(b − a)) · (x − a) + f(a),
which shows that f is convex.
Now suppose that f is convex on I, and let a, b ∈ I, a < b, be arbitrary. By
Theorem 9.20, the function F(x) = ( f (x) − f (a))/(x − a) is monotone increasing
on the set I \ {a}. Thus F(x) ≤ (f(b) − f(a))/(b − a) for all x < b, x ≠ a. Since f′(a) = lim_{x→a} F(x), we have that (Figure 12.15)

f′(a) ≤ (f(b) − f(a))/(b − a).   (12.35)
Similarly, the function G(x) = ( f (x) − f (b))/(x − b) is monotone increasing on the
set I \ {b}, so G(x) ≥ (f(b) − f(a))/(b − a) for all x > a, x ≠ b. Since f′(b) = lim_{x→b} G(x), we have that

f′(b) ≥ (f(b) − f(a))/(b − a).   (12.36)
If we combine (12.35) and (12.36), we get that f ′ (a) ≤ f ′ (b). Since this is true for
all a, b ∈ I with a < b, f′ is monotone increasing on I (Figure 12.16).
The statement for concavity can be proved in the same way. Statement (ii) follows by a straightforward modification of the argument above. □
Rearranging equation (12.35), we get that if a < b, then f (b) ≥ f ′ (a) · (b − a) +
f (a). Equation (12.36) states that if a < b, then f (a) ≥ f ′ (b) · (a − b) + f (b). This
means that for arbitrary a, x ∈ I,
f (x) ≥ f ′ (a) · (x − a) + f (a),
(12.37)
that is, a tangent line drawn at any point on the graph of f lies below the graph itself.
Thus we have proved the “only if” part of the following theorem.
Theorem 12.64. Let f be differentiable on the interval I. The function f is convex
on I if and only if for every a ∈ I, the graph of the function f lies above the tangent
of the graph at the point a, that is, if and only if (12.37) holds for all a, x ∈ I.
Proof. We now have to prove only the “if” part of the statement. Suppose that (12.37)
holds for all a, x ∈ I. If a, b ∈ I and a < b, then it follows that both (12.35) and (12.36)
are true, so f ′ (a) ≤ f ′ (b). Thus f ′ is monotone increasing on I, so by Theorem
12.63, f is convex.
⊓⊔
Theorem 12.65. Let f be twice differentiable on I. The function f is convex (concave) on I if and only if f ′′ (x) ≥ 0 ( f ′′ (x) ≤ 0) for all x ∈ I.
Proof. The statement of the theorem is a simple corollary of Theorems 12.63
and 12.54.
⊓⊔
Definition 12.66. We say that a point a is an inflection point of the function f if f
is continuous at a, f has a (finite or infinite) derivative at a, and there exists a δ > 0
such that f is convex on (a − δ , a] and concave on [a, a + δ ), or vice versa.
So for example, 0 is an inflection point of the functions x3 and ∛x.
Theorem 12.67. If f is twice differentiable at a, and f has an inflection point at a,
then f ′′ (a) = 0.
Proof. If f is convex on (a − δ , a], then f ′ is monotone increasing there; if it is
concave on [a, a + δ ), then f ′ is monotone decreasing there. Thus f ′ has a local
maximum at a, and so f ′′ (a) = 0.
The proof is similar in the case that f is concave on (a − δ , a] and convex on
[a, a + δ ).
⊓⊔
Remark 12.68. Let f be differentiable on a neighborhood of the point a. By Theorem 12.63, a is an inflection point of f if and only if a is a local extremum of f ′
such that f ′ is increasing in a left-hand neighborhood of a and is decreasing in a
right-hand neighborhood of a, or the other way around. From this observation and
by Theorem 12.67, we get the following theorem.
Theorem 12.69. Let f be twice differentiable on a neighborhood of the point a.
Then a is an inflection point of f if and only if f ′′ changes sign at the point a, that
is, if f ′′ (a) = 0 and f ′′ is locally increasing or decreasing at a.
Corollary 12.70. Let f be three times differentiable at a. If f ′′ (a) = 0 and f ′′′ (a) ≠ 0, then f has an inflection point at a.
Remark 12.71. In the case f ′′ (a) = f ′′′ (a) = 0, it is possible that f has an inflection
point at a, but it is also possible that it does not. The different cases are illustrated by
the functions f (x) = x4 and f (x) = x5 at the point a = 0. As in the case of extrema,
we can refer to the values of higher-order derivatives to help determine when a point
of inflection exists.
Theorem 12.72.
(i) Let the function f be 2k + 1 times differentiable at a, where k ≥ 1. If
f ′′ (a) = . . . = f (2k) (a) = 0 and f (2k+1) (a) ≠ 0, (12.38)
then f has an inflection point at a.
(ii) If
f ′′ (a) = . . . = f (2k−1) (a) = 0 and f (2k) (a) ≠ 0, (12.39)
then f is strictly convex or concave in a neighborhood of a, and so a is not a
point of inflection.
Proof. (i) We already saw the case for k = 1 in the previous theorem, so we may
assume that k > 1. If (12.38) holds, then by statement (ii) of Theorem 12.62, f ′′ is
strictly monotone in a neighborhood of a. Thus f ′′ is locally increasing or decreasing
at the point a, and we can apply Theorem 12.69.
(ii) Assume (12.39). Then necessarily k > 1. By statement (i) of Theorem 12.62,
f ′′ has a strict local extremum at a. Since f ′′ (a) = 0, this means that for a suitable
neighborhood U of a, the sign of f ′′ does not change in U \ {a}. Thus f ′ is strictly
monotone on U, so f is either strictly convex or strictly concave on U, and so a
cannot be a point of inflection.
⊓⊔
Remark 12.73. If the function f is infinitely differentiable at a and f (n) (a) ≠ 0 for at least one n ≥ 2, then with the help of Theorems 12.62 and 12.72, we can determine whether f has a local extremum or a point of inflection at a. This is because if f ′ (a) ≠ 0, then a cannot be a local extremum. Now suppose that f ′ (a) = 0, and let n be the smallest positive integer for which f (n) (a) ≠ 0. In this case, the function f has a local extremum at a if and only if n is even.
If f ′′ (a) ≠ 0, then a is not an inflection point of f . If, however, f ′′ (a) = 0, and n is the smallest integer greater than 2 for which f (n) (a) ≠ 0, then a is a point of inflection of f if and only if n is odd.
It can occur, however, that f is infinitely differentiable at a, and that f (n) (a) = 0
for all n, while f is not zero in any neighborhood of a. In the following chapter
we will see that among such functions we will find some that have local extrema
at a, but we will also find those that do not; we will find those that have points of
inflection at a, and those that do not (see Remark 13.17).
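As a quick illustration of how this rule can be applied mechanically, here is a small Python sketch (ours, not from the text; it assumes the sympy library is available, and the helper name classify_point is our own). It looks for the smallest n ≥ 1 with f (n)(a) ≠ 0 and classifies the point accordingly:

import sympy as sp

x = sp.symbols('x')

def classify_point(f, a=0, max_order=12):
    # find the smallest n >= 1 with f^(n)(a) != 0 (Remark 12.73)
    for n in range(1, max_order + 1):
        if sp.diff(f, x, n).subs(x, a) != 0:
            break
    else:
        return "all tested derivatives vanish"
    if n == 1:
        return "no local extremum (f'(a) != 0)"
    return "local extremum" if n % 2 == 0 else "no local extremum"

for f in (x**4, x**5, x**3 - 3*x + 2):
    print(f, "->", classify_point(f))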
A complete investigation of the function f is accomplished by finding the following pieces of information about the function:
1. the (one-sided) limits at the accumulation points of the domain;
2. the intervals on which f is monotone increasing or decreasing;
3. the local and absolute extrema of f ;
4. the points where f is continuous or differentiable;
5. the intervals on which f is convex or concave;
6. the inflection points of f .
Example 12.74. Carry out a complete investigation of the function x2n e−x² for all n ∈ N.
Fig. 12.17
1. Consider first the case n = 0. If f (x) = e−x² , then f ′ (x) = −2x · e−x² , so the sign
of f ′ (x) agrees with the sign of −x. Then f is strictly increasing on the interval
(−∞, 0], and strictly decreasing on the interval [0, ∞), so f must have a strict local
and global maximum at x = 0.
Fig. 12.18
On the other hand, f ′′ (x) = e−x² (4x2 − 2), so f ′′ (x) > 0 if |x| > 1/√2, and f ′′ (x) < 0 if |x| < 1/√2. So f is strictly concave on [−1/√2, 1/√2], and strictly convex on the intervals (−∞, −1/√2] and [1/√2, ∞), so the points ±1/√2 are points of inflection. All these properties are summarized in Figure 12.18.
If we also consider that limx→±∞ f (x) = 0, then we can sketch the graph of the
function as seen in Figure 12.17. Inspired by the shape of its graph, the function
e−x² —which appears many times in probability—is called a bell curve.
2. Now let f (x) = x2n e−x² , where n ∈ N+ . Then
f ′ (x) = (2n · x2n−1 − 2x2n+1 ) e−x² = 2 · x2n−1 e−x² (n − x2 ).
Thus f ′ (x) is positive on the intervals (−∞, −√n) and (0, √n), and negative on the intervals (−√n, 0) and (√n, ∞). On the other hand,
(1/2) · f ′′ (x) = ((n · x2n−1 − x2n+1 ) · e−x² )′ =
= (n(2n − 1)x2n−2 − (2n + 1)x2n − 2nx2n + 2x2n+2 ) · e−x² =
= x2n−2 · (2x4 − (4n + 1)x2 + n(2n − 1)) · e−x² .
It is easy to see that the roots of f ′′ are the numbers
±√((4n + 1 ± √(16n + 1))/4).
If we denote these by xi (1 ≤ i ≤ 4), then x1 < x2 < 0 < x3 < x4 . It is also easy
to check that f ′′ is positive if |x| < x3 or |x| > x4 , and negative if x3 < |x| < x4
(Figure 12.19). The behavior of the function can be summarized by Figure 12.18.
If we also consider that f (0) = 0 and limx→±∞ f (x) = 0, then we can sketch the
graph as in Figure 12.20.
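For a concrete numerical cross-check of the case n = 1, the following short Python sketch (ours, not part of the text) evaluates the critical and inflection points from the formulas above, together with the sign of f ′′ at a few sample points:

import math

n = 1
crit = math.sqrt(n)   # f'(x) = 2 x^(2n-1) e^(-x^2) (n - x^2) vanishes at 0 and at +-sqrt(n)
x3 = math.sqrt((4*n + 1 - math.sqrt(16*n + 1)) / 4)
x4 = math.sqrt((4*n + 1 + math.sqrt(16*n + 1)) / 4)
print("critical points:", 0.0, crit, -crit)
print("inflection points:", x3, -x3, x4, -x4)

def half_f2(x):
    # (1/2) f''(x) for n = 1: x^(2n-2) (2x^4 - 5x^2 + 1) e^(-x^2), where x^(2n-2) = 1
    return (2*x**4 - 5*x**2 + 1) * math.exp(-x**2)

for x in (0.2, 1.0, 2.0):   # sign pattern: + below x3, - between x3 and x4, + beyond x4
    print(x, "+" if half_f2(x) > 0 else "-")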
Fig. 12.19
Fig. 12.20
Exercises
12.79. Prove that if x ∈ [0, 1], then 2x ≤ 1 + x ≤ ex and 2x/π ≤ sin x ≤ x.
12.80. Prove that if f is a rational function, then there exists an a such that f is
monotone and either concave or convex on (−∞, a). A similar statement is true on
a suitable half-line (b, ∞).
12.81. For which a > 0, does the equation ax = x have roots? (H)
12.82. For which a > 0 is the sequence defined by the recurrence a1 = a, an+1 = aan
convergent? (H)
12.83. Prove that (1 + 1/x)x+(1/2) > e for all x > 0. (H)
12.84. Prove that for all x ≥ 0 and n ≥ 1,
ex ≥ 1 + x + x2 /2! + · · · + xn /n!.
12.85. Prove that for all 0 ≤ x ≤ K and n ≥ 1,
ex ≤ 1 + x + x2 /2! + · · · + xn /n! + (xn+1 /(n + 1)!) · eK .
12.86. Prove that for all x ≥ 0,
ex = 1 + x + x2 /2! + · · · . (12.40)
As a special case,
e = 1 + 1/1! + 1/2! + · · · .
12.87. Prove that e is irrational. (S)
12.88. Prove that for all x ≥ 0 and n ≥ 1,
1 − x + x2 /2! − . . . − x2n−1 /(2n − 1)! ≤ e−x ≤ 1 − x + x2 /2! − . . . + x2n /(2n)!.
12.89. Prove that (12.40) holds for all x.
12.90. Prove that for all x ≥ 0 and n ≥ 1,
1 − x2 /2! + x4 /4! − . . . − x4n−2 /(4n − 2)! ≤ cos x ≤ 1 − x2 /2! + x4 /4! − . . . + x4n /(4n)!
and
x − x3 /3! + x5 /5! − . . . − x4n−1 /(4n − 1)! ≤ sin x ≤ x − x3 /3! + x5 /5! − . . . + x4n+1 /(4n + 1)!.
12.91. Prove that for all x ≥ 0 and n ≥ 1,
x − x2 /2 + · · · − x2n /(2n) ≤ log(1 + x) ≤ x − x2 /2 + · · · + x2n+1 /(2n + 1).
12.92. Prove that for all x ∈ [0, 1],
log(1 + x) = x − x2 /2 + x3 /3 − . . . .
As a special case,
log 2 = 1 − 1/2 + 1/3 − . . . .
12.93. Prove that
x − x3 /3 + x5 /5 − . . . − x4n−1 /(4n − 1) ≤ arc tg x ≤ x − x3 /3 + x5 /5 − . . . + x4n+1 /(4n + 1)
for every x ≥ 0 and n ≥ 1.
12.94. Prove that
arc tg x = x − x3 /3 + x5 /5 − . . . (12.41)
for every |x| ≤ 1.
As a special case,
π /4 = 1 − 1/3 + 1/5 − . . . . (12.42)
12.95. Let
f (x) = x4 · (2 + sin(1/x)) if x ≠ 0, and f (x) = 0 if x = 0.
Show that f has a strict local minimum at 0, but that f ′ does not change sign at 0. (S)
12.96. Let
f (x) = esin(1/x)−(1/x) if x > 0, and f (x) = 0 if x = 0.
Prove that
(a) f is continuous on [0, ∞),
(b) f is differentiable on [0, ∞),
(c) f is strictly monotone increasing on [0, ∞),
(d) f ′ (1/(2π k)) = 0 if k ∈ N+ , that is, f ′ is 0 at infinitely many places in the interval [0, 1].
12.97. Give a function that is monotone decreasing and differentiable on (0, ∞),
satisfies limx→∞ f (x) = 0, but for which f ′ (x) does not tend to 0 as x → ∞.
12.98. Let f be differentiable on a punctured neighborhood of a, and let
limx→a ( f (x) − f (a))/(x − a) = ∞.
Does it then follow that limx→a f ′ (x) = ∞?
12.99. Let f be convex and differentiable on the open interval I. Prove that f has a
minimum at the point a ∈ I if and only if f ′ (a) = 0.
12.100. Let f be convex and differentiable on (0, 1). Prove that if limx→0+0 f (x) =
∞, then limx→0+0 f ′ (x) = −∞. Show that the statement no longer holds if we drop
the convexity assumption.
12.101. Carry out a complete investigation of each of the functions below.
x3 − 3x,    x2 − x4 ,    x − arc tg x,    x + e−x ,    x + x−2 ,    arc tg(1/x),
cos x2 ,    sin(sin x),    sin(1/x),    x · e−x ,    xn · e−x ,    x − log x,
1/(1 + sin2 x),    x · sin(log x),    (1 + 1/x)x ,    xx ,    (1 + 1/x)x+1 ,    x1/x ,
1 − e−x² ,    (log x)/x,    x · log x,    arc tg x − x/(x + 1),    √((log x)/x),
xx · (1 − x)1−x ,    arc tg x − (1/2) log(1 + x2 ),    x4 /(1 + x)3 ,    ex /(1 + x),
e−x² · ((1 + x)/(1 − x))2 ,    sin(x/2) − cos(x/2),    ex / sh x,    x2n+1 e−x² (n ∈ N).
Chapter 13
Applications of Differentiation
13.1 L’Hôpital’s Rule
The following theorem gives a useful method for determining critical limits.
Theorem 13.1 (L’Hôpital’s Rule). Let f and g be differentiable on a punctured neighborhood of α , and suppose that g ≠ 0 and g′ ≠ 0 there. Moreover, assume that either
limx→α f (x) = limx→α g(x) = 0, (13.1)
or
limx→α |g(x)| = ∞. (13.2)
If
limx→α f ′ (x)/g′ (x) = β ,
then it follows that
limx→α f (x)/g(x) = β . (13.3)
Here α can denote a number a or one of the symbols a + 0, a − 0, ∞, −∞, while β
can denote a number b, ∞, or −∞.
Proof. We first give the proof for the special case in which α = a is finite, and limx→a f (x) = limx→a g(x) = f (a) = g(a) = 0. In this case, by Cauchy’s mean value theorem, in a neighborhood of a, for every x ≠ a there exists a c between a and x such that
f (x)/g(x) = ( f (x) − f (a))/(g(x) − g(a)) = f ′ (c)/g′ (c).
Thus if (xk ) is a sequence that tends to a, then there exists a sequence ck → a such
that
f (xk )/g(xk ) = f ′ (ck )/g′ (ck )
for all k. Thus
limk→∞ f (xk )/g(xk ) = limk→∞ f ′ (ck )/g′ (ck ) = β , (13.4)
so by the transference principle, (13.3) holds.
In the general case, when f (α ) or g(α ) is not defined or nonzero, or α is not finite, the above proof is modified by expressing the quotient f (x)/g(x) by the differential quotient ( f (x) − f (y))/(g(x) − g(y)). First we assume that α is one of the symbols a + 0, a − 0, ∞, −∞. For arbitrary y ≠ x in a suitable one-sided neighborhood of α , we have g(y) ≠ g(x) by Rolle’s theorem. Thus
f (x)/g(x) = (( f (x) − f (y))/(g(x) − g(y))) · ((g(x) − g(y))/g(x)) + f (y)/g(x) = (( f (x) − f (y))/(g(x) − g(y))) · (1 − g(y)/g(x)) + f (y)/g(x).
It then follows that with suitable c ∈ (x, y),
f (x)/g(x) = ( f ′ (c)/g′ (c)) · (1 − g(y)/g(x)) + f (y)/g(x). (13.5)
Thus if we show that for every sequence xk → α , xk ≠ α , there exists another sequence yk → α , yk ≠ α , such that
f (yk )/g(xk ) → 0 and g(yk )/g(xk ) → 0, (13.6)
then (13.4) follows by (13.5), and so does (13.3) by the transference principle.
Assume first (13.1). Let xk → α , xk ≠ α , be given. For each k, there exists an nk such that
| f (xnk )/g(xk )| < 1/k and |g(xnk )/g(xk )| < 1/k,
since for a fixed k, f (xn )/g(xk ) → 0 and g(xn )/g(xk ) → 0 as n → ∞. If yk = xnk for all k, then (13.6) clearly holds.
Now assume (13.2). Then for all sequences xk → α such that xk ≠ α , we have lim |g(xk )| = ∞, so for all i, there exists an Ni such that
| f (xi )/g(xm )| < 1/i and |g(xi )/g(xm )| < 1/i
if m ≥ Ni . We can assume that N1 < N2 < . . . . Choose the sequence (yk ) as follows: let yk = xi if Ni < k ≤ Ni+1 . It is clear that (13.6) holds again.
Finally, if α = a is finite, then by applying the already proved α = a + 0 and
α = a − 0 cases, we get that
limx→a−0 f (x)/g(x) = limx→a+0 f (x)/g(x) = β ,
which implies that
limx→a f (x)/g(x) = β . ⊓⊔
Examples 13.2. 1. In Example 10.47, we saw that if a > 1, then as x → ∞, the function ax tends to infinity faster than any positive power of x. We can also see this by
an n-fold application of L’Hôpital’s rule:
limx→∞ ax /xn = limx→∞ (log a · ax )/(n · xn−1 ) = · · · = limx→∞ ((log a)n · ax )/(n(n − 1) · . . . · 1) = ∞.
2. Similarly, an n-fold application of L’Hôpital’s rule gives
limx→∞ (logn x)/x = limx→∞ (n logn−1 x)/x = · · · = limx→∞ n!/x = 0.
It then simply follows that
limx→∞ (logα x)/xβ = 0
for all positive α and β . Thus every positive power of x tends to ∞ as x → ∞ faster
than any positive power of log x. (Although this also follows from 1. quite clearly.)
3. limx→0+0 x log x = limx→0+0 (log x)/(1/x) = limx→0+0 (1/x)/(−1/x2 ) = 0.
We can similarly see that limx→0+0 xα log x = 0 for all α > 0.
4. limx→0 (sin x − x)/x3 = limx→0 (cos x − 1)/(3x2 ) = limx→0 (− sin x)/(6x) = limx→0 (− cos x)/6 = −1/6.
(Although this also follows easily from Exercise 12.90.)
5. limx→0+0 (1/ sin x − 1/x) = limx→0+0 (x − sin x)/(x sin x) = limx→0+0 (1 − cos x)/(sin x + x cos x) =
= limx→0+0 sin x/(cos x + cos x − x sin x) = 0.
Remark 13.3. L’Hôpital’s rule is not always applicable. It can occur that the limit
limx→a f (x)/g(x) exists, but limx→a f ′ (x)/g′ (x) does not, even though
lim f (x) = lim g(x) = 0
x→a
x→a
and g and g′ are nonzero everywhere other than at a.
Let, for example, a = 0, f (x) = x2 · sin(1/x), and g(x) = x. Then
limx→0 f (x)/g(x) = limx→0 (x2 · sin(1/x))/x = limx→0 x · sin(1/x) = 0.
On the other hand,
f ′ (x)/g′ (x) = (2x · sin(1/x) − cos(1/x))/1 = 2x · sin(1/x) − cos(1/x),
so limx→0 f ′ (x)/g′ (x) does not exist.
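A few numerical values make the contrast visible; the short Python sketch below (ours, not part of the text) prints f (x)/g(x), which tends to 0, next to f ′ (x)/g′ (x), which keeps oscillating:

import math

def f(x): return x * x * math.sin(1 / x)
def g(x): return x
def fp(x): return 2 * x * math.sin(1 / x) - math.cos(1 / x)   # f'(x) for x != 0

for x in (0.1, 0.01, 0.001):
    # f/g shrinks with x, while f'/g' stays of size about |cos(1/x)|
    print(x, f(x) / g(x), fp(x) / 1.0)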
Exercises
13.1. Determine the following limits.
limx→0 (sin 3x)/(tg 5x),    limx→0 (log(cos ax))/(log(cos bx)),    limx→0 (ex − e−x − 2x)/(x − sin x),
limx→0 (x−2 − (sin x)−2 ),    limx→0 (ctg x − 1/x),    limx→0 (tg x − x)/(x − sin x),
limx→0 ((sin x)/x)x⁻² ,    limx→0 (x ctg x − 1)/x2 ,    limx→0 ((1 + ex )/2)ctg x ,
limx→1 (1 − x) · tg(π x/2),    limx→1 x1/(1−x) ,    limx→1 (2 − x)tg(π x/2) .
13.2 Polynomial Approximation
While dealing with the definition of differentiability, we saw that a function is differentiable at a point if and only if it is well approximated by a linear polynomial
there (see Theorem 12.9). We also saw that if f is differentiable at the point a, then
out of all the linear functions, the polynomial f ′ (a) · (x − a) + f (a) is the one that
approximates f the best around that point (see Theorem 12.10). Below, we will generalize these results by finding the polynomial of degree at most n that approximates
our function f the best locally.1
To do this, we first need to make precise what we mean by “best local approximation” (as we did in the case of linear functions). A function can be approximated
“better” or “worse” by other functions in many different senses. Our goal might be
for the difference
1 In the following, we also include the identically zero polynomial in the polynomials of degree at most n.
maxx∈[a,b] | f (x) − pn (x)|
to be the smallest on the interval [a, b]. But it could be that our goal is to minimize
some sort of average of the difference f (x) − pn (x). If we want to approximate a
function locally, we want the differences very close to a to be “very small.”
As we already saw in the case of the linear functions, if we consider a polynomial
p(x) that is continuous at a and p(a) = f (a), then it is clear that as long as f is
continuous at a,
limx→a ( f (x) − p(x)) = f (a) − p(a) = 0.
In this case, for x close to a, p(x) will be “close” to f (x). If we want a truly good
approximation, then we must require more of it. In Theorem 12.9, we saw that if f is
differentiable at a, then the linear function e(x) = f ′ (a)(x − a) + f (a) approximates
our function so well that not only limx→a ( f (x) − e(x)) = 0, but even
limx→a ( f (x) − e(x))/(x − a) = 0
holds, that is, f (x) − e(x) tends to 0 faster than (x − a) does. It is easy to see that
for linear functions, we cannot expect more than this (that is, we cannot expect f (x) − e(x) to tend to zero faster than (x − a)α for any α > 1). If, however, f is n
times differentiable at a, then we will soon see that using an nth-degree polynomial,
we can approximate up to order (x − a)n in the sense that for a suitable polynomial
tn (x) of degree at most n, we have
limx→a ( f (x) − tn (x))/(x − a)n = 0. (13.7)
More than this cannot generally be said.
In Theorem 12.10, we saw that the best linear (or polynomial of degree at
most 1) approximation in the sense above is unique. This linear polynomial p1 (x)
satisfies p1 (a) = f (a) and p′1 (a) = f ′ (a), and it is easy to see that these two conditions already determine p1 (x). In the following theorem, we show that these
statements can be generalized in a straightforward way to approximations using
polynomials of degree at most n.
Theorem 13.4. Let the function f be n times differentiable at the point a, and let
tn (x) = f (a) + f ′ (a)(x − a) + · · · + ( f (n) (a)/n!)(x − a)n . (13.8)
(i) The polynomial tn is the only polynomial of degree at most n whose ith derivative at a is equal to f (i) (a) for all i ≤ n. That is,
tn (a) = f (a), tn′ (a) = f ′ (a), . . . , tn(n) (a) = f (n) (a), (13.9)
and if a polynomial p of degree at most n satisfies
p(a) = f (a), p′ (a) = f ′ (a), . . . , p(n) (a) = f (n) (a), (13.10)
then necessarily p = tn .
(ii) The polynomial tn satisfies (13.7). If a polynomial p of degree at most n satisfies
limx→a ( f (x) − p(x))/(x − a)n = 0, (13.11)
then necessarily p = tn . Thus out of the polynomials of degree at most n, tn is
the one that approximates f locally at a the best.
Proof. (i) The equalities (13.9) follow simply from the definition of tn . Now suppose (13.10) holds, where p is a polynomial of degree at most n. Let q = p − tn .
Then
q(a) = q′ (a) = · · · = q(n) (a) = 0. (13.12)
We show that q is identically zero. Suppose that q ≠ 0. By Theorem 12.34, the
number a is a root of q of multiplicity at least n + 1. This, however, is impossible,
since q is of degree at most n. Thus q is identically zero, and p = tn .
(ii) Let g = f − tn and h(x) = (x − a)n . Then
g(a) = g′ (a) = . . . = g(n) (a) = 0,
and h(i) (x) = n · (n − 1) · . . . · (n − i + 1) · (x − a)n−i for all i < n, which gives
h(a) = h′ (a) = · · · = h(n−1) (a) = 0 and h(n−1) (x) = n! · (x − a).
Applying L’Hôpital’s rule n − 1 times, we get that
limx→a g(x)/h(x) = limx→a g′ (x)/h′ (x) = . . . = limx→a g(n−1) (x)/(n! · (x − a)).
Since
limx→a g(n−1) (x)/(x − a) = limx→a (g(n−1) (x) − g(n−1) (a))/(x − a) = g(n) (a) = 0,
we have (13.7).
Now assume (13.11), where p is a polynomial of degree at most n. If q = p − tn ,
then by (13.7) and (13.11), we have
limx→a q(x)/(x − a)n = 0. (13.13)
Suppose that q is not identically zero. If a is a root of q with multiplicity k, then
q(x) = (x − a)k · r(x), where r(x) is a polynomial such that r(a) ≠ 0. Since q has
degree at most n, we know that k ≤ n, and so the limit of
q(x)/(x − a)n = r(x) · 1/(x − a)n−k
at the point a cannot be zero. This contradicts (13.13), so we see that q = 0 and p = tn . ⊓⊔
Remark 13.5. Statement (ii) cannot be strengthened, that is, for every α > 0, there
exists an n times differentiable function such that
limx→a ( f (x) − pn (x))/(x − a)n+α ≠ 0
for every nth-degree polynomial pn (x). It is easy to check that for example, the
function f (x) = |x|n+α has this property at a = 0.
Definition 13.6. The polynomial tn defined in (13.8) is called the nth Taylor2 polynomial of f at the point a.
Thus the 0th Taylor polynomial is the constant function f (a), while the first
Taylor polynomial is the linear function f (a) + f ′ (a) · (x − a).
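Formula (13.8) is easy to evaluate numerically once the derivatives at a are known; the Python sketch below (ours, with the hypothetical helper taylor_value) does this for ex at a = 0 and shows the error at x = 1 shrinking with n:

import math

def taylor_value(derivs, a, x):
    # derivs[k] = f^(k)(a); evaluates t_n(x) = sum_k f^(k)(a)/k! (x - a)^k
    return sum(d / math.factorial(k) * (x - a)**k for k, d in enumerate(derivs))

# f(x) = e^x at a = 0: every derivative equals 1
for n in (2, 5, 10):
    tn = taylor_value([1.0] * (n + 1), 0.0, 1.0)
    print(n, tn, abs(math.e - tn))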
The difference f (x) − tn (x) can be obtained in several different ways. We give
two of these in the following theorem.
Theorem 13.7 (Taylor’s Formula). Let the function f be (n + 1) times differentiable on the interval [a, x]. Then there exists a number c ∈ (a, x) such that
f (x) = ∑_{k=0}^{n} ( f (k) (a)/k!) (x − a)k + ( f (n+1) (c)/(n + 1)!) (x − a)n+1 , (13.14)
and there exists a number d ∈ (a, x) such that
f (x) = ∑_{k=0}^{n} ( f (k) (a)/k!) (x − a)k + ( f (n+1) (d)/n!) (x − d)n (x − a). (13.15)
If f is (n + 1) times differentiable on the interval [x, a], then there exists a c ∈ (x, a)
such that (13.14) holds, and there exists a d ∈ (x, a) such that (13.15) holds.
Equality (13.14) is called Taylor’s formula with Lagrange remainder, while
(13.15) is Taylor’s formula with Cauchy remainder. In the case a = 0, (13.14) is
often called Maclaurin’s3 formula.
Proof. We prove only the case a < x; the case x < a can be handled in the same way.
For arbitrary t ∈ [a, x], let
R(t) = f (t) + ( f ′ (t)/1!) (x − t) + · · · + ( f (n) (t)/n!) (x − t)n − f (x). (13.16)
(Here we consider x to be fixed, so R is a function of t.) Then R(x) = 0. Our goal is
to find a suitable formula for R(a). Since in (13.16), every term is differentiable on
[a, x], so is R, and
2 Brook Taylor (1685–1731), English mathematician.
3 Colin Maclaurin (1698–1746), Scottish mathematician.
R′ (t) = f ′ (t) + (( f ′′ (t)/1!) (x − t) − f ′ (t)) + (( f ′′′ (t)/2!) (x − t)2 − ( f ′′ (t)/1!) (x − t)) + · · · +
+ (( f (n+1) (t)/n!) (x − t)n − ( f (n) (t)/(n − 1)!) (x − t)n−1 ).
We can see that here, everything cancels out except for one term, so
R′ (t) = ( f (n+1) (t)/n!) (x − t)n . (13.17)
Let h(t) = (x − t)n+1 . By Cauchy’s mean value theorem, there exists a number c ∈
(a, x) such that
R(a)/(x − a)n+1 = (R(x) − R(a))/(h(x) − h(a)) = R′ (c)/h′ (c) = −R′ (c)/((n + 1) · (x − c)n ) =
= −(( f (n+1) (c)/n!) · (x − c)n )/((n + 1) · (x − c)n ) = − f (n+1) (c)/(n + 1)!.
Then R(a) = −( f (n+1) (c)/(n + 1)!) · (x − a)n+1 , so we have obtained (13.14).
If we apply the mean value theorem to R, then we get that for suitable d ∈ (a, x),
(R(a) − R(x))/(a − x) = R′ (d) = ( f (n+1) (d)/n!) (x − d)n .
Then R(a) = −( f (n+1) (d)/n!) · (x − d)n · (x − a), so we get (13.15). ⊓⊔
In Theorem 13.4, we showed that the nth Taylor polynomial of an n times differentiable function f approximates f well locally in the sense that if x → a then
f (x) − tn (x) tends to 0 very fast. In other questions, however, we want a function f
to be globally approximated by a polynomial. In such cases, we want polynomials
p1 (x), . . . , pn (x), . . . such that for arbitrary x ∈ [a, b], | f (x) − pn (x)| → 0 as n → ∞.
As we will see, the Taylor polynomials play an important role in this aspect as well.
An important class of functions has the property that the Taylor polynomials corresponding to a fixed point a tend to f (x) at every point x as n → ∞.
(That is, we do not think about tn (x) as x → a for a fixed n, but as n → ∞ with a
fixed x.)
It is easy to see that if the function f is infinitely differentiable at the point a,
then the Taylor polynomials of f corresponding to the point a form partial sums
of an infinite series. As we will soon see, in many cases these infinite series are
convergent, and their value is exactly f (x). It is useful to introduce the following
naming conventions.
Definition 13.8. Let f be a function that is infinitely differentiable at the point a.
The infinite series
∑_{k=0}^{∞} ( f (k) (a)/k!) (x − a)k
is called the Taylor series of f corresponding to the point a.
If a Taylor series is convergent at a point x and its sum is f (x), then we say that
the Taylor series represents f at x.
Theorem 13.9. If f is infinitely differentiable on the interval I and there exists a
number K such that | f (n) (x)| ≤ K for all x ∈ I and n ∈ N, then
f (x) = ∑_{k=0}^{∞} ( f (k) (a)/k!) (x − a)k (13.18)
for all a, x ∈ I; that is, the Taylor series of f corresponding to every point a ∈ I
represents f in the interval I.
Proof. Equation (13.14) of Theorem 13.7 together with the assumption | f (n+1) | ≤ K
implies
| f (x) − ∑_{k=0}^{n} ( f (k) (a)/k!) (x − a)k | ≤ (K/(n + 1)!) · |x − a|n+1 .
Now by Theorem 5.26, |x − a|n /n! → 0, so the partial sums of the infinite series on the right-hand side of (13.18) tend to f (x) as n → ∞. By the definition of convergence of infinite series, this means exactly that (13.18) holds. ⊓⊔
Remark 13.10. The theorem above states that we can obtain the function f in the
interval I as the sum of an infinite series consisting of power functions. Thus we
have an infinite series whose terms are functions of x. These infinite series are called
function series. They are the subject of an important chapter of analysis, but we will
not discuss them here.
Remark 13.11. The statement of Theorem 13.9 is actually quite surprising: if for a
function f , | f (n) | ≤ K holds for all n in an interval I, then the derivatives of f at a
alone determine the values of the function f at every other point of I. From this,
it also follows that for these functions, the values taken on in an arbitrarily small
neighborhood of a already determine all other values of the function on I.
Let us see some applications.
Example 13.12. Let f (x) = sin x. By the identities (12.30), it follows that | f (n) (x)| ≤
1 for all x ∈ R and n ∈ N. Also by (12.30), the (4n + 1)th Taylor polynomial of sin x
corresponding to the point 0 is
x − x3 /3! + x5 /5! − . . . + x4n+1 /(4n + 1)!.
Then by Theorem 13.9,
sin x = x − x3 /3! + x5 /5! − . . . + x4n+1 /(4n + 1)! − . . . (13.19)
for all x. We can similarly show that
cos x = 1 − x2 /2! + x4 /4! − . . . + x4n /(4n)! − . . . (13.20)
for all x. (These statements can also easily be deduced from Exercise 12.90.)
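As a quick numerical check of (13.19), one can sum the first few terms of the series in Python (a sketch of ours; sin_partial is our own helper name):

import math

def sin_partial(x, terms):
    # sum of the first `terms` nonzero terms of the series (13.19)
    return sum((-1)**k * x**(2*k + 1) / math.factorial(2*k + 1) for k in range(terms))

for x in (1.0, 3.0, 10.0):
    print(x, sin_partial(x, 20), math.sin(x))   # the two values agree closely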
Remark 13.13. The equalities (13.19) and (13.20) are among the most surprising identities of mathematics. We can express the periodic and bounded functions sin x and cos x as the sum of functions that are neither periodic nor bounded
(except for the term 1 in the expression of cos x).
Example 13.14. Now let f (x) = ex . Then | f (n) (x)| ≤ eb for all x ∈ (−∞, b] and n ∈ N.
It is easy to see that the nth Taylor polynomial of the function ex around 0 is
1 + x/1! + x2 /2! + · · · + xn /n!,
so
ex = 1 + x + x2 /2! + · · · + xn /n! + · · · (13.21)
for all x, just as we saw already in Exercises 12.86 and 12.89. If we apply (13.21) to −x, then we get that
e−x = 1 − x + x2 /2! − . . . + (−1)n xn /n! + · · · . (13.22)
If we take the sum of (13.21) and (13.22), and their difference, and we then divide
both sides by 2, then we get the expressions
sh x = x + x3 /3! + x5 /5! + · · · + x2n+1 /(2n + 1)! + · · · (13.23)
and
ch x = 1 + x2 /2! + x4 /4! + · · · + x2n /(2n)! + · · · . (13.24)
Example 13.15. By Exercise 12.92,
log(1 + x) = x − x2 /2 + · · · + (−1)n−1 xn /n + · · · (13.25)
for all x ∈ [0, 1]. This can easily be seen from (13.14) as well. Let f (x) = log(1 + x).
It can be computed that
f (n) (x) = (−1)n−1 · (n − 1)!/(1 + x)n
for all x > −1 and n ∈ N+ . Thus the nth Taylor polynomial of log(1 + x) at 0 is
tn (x) = x − x2 /2 + · · · + (−1)n−1 xn /n.
If x ∈ (0, 1], then by (13.14), for all n there exists a number c ∈ (0, x) such that
log(1 + x) − tn (x) = (−1)n · xn+1 /((n + 1)(1 + c)n+1 ). (13.26)
Since
|(−1)n · xn+1 /((n + 1)(1 + c)n+1 )| ≤ 1/(n + 1) → 0 (13.27)
if n → ∞, it follows that (13.25) holds.
This argument cannot be used if x < 0, since then c < 0, and the bound in (13.27)
is not valid. For −1 < x < 0, however, we can apply (13.15). According to this, for
all n there exists a number d ∈ (x, 0) such that
log(1 + x) − tn (x) = (−1)n · (x − d)n · x/(1 + d)n+1 .
Here 1 + d > 1 + x > 0 and |(x − d)/(1 + d)| < |x|, so
|log(1 + x) − tn (x)| ≤ (1/(1 + x)) · |x|n+1 → 0 as n → ∞,
proving our statement. To summarize, equation (13.25) holds for all x ∈ (−1, 1].
Remark 13.16. If we substitute x = 1 into the equations (13.21) and (13.25), we get
the series representations
e = 1 + 1/1! + 1/2! + · · ·
and
log 2 = 1 − 1/2 + 1/3 − . . . .
(We have already seen these in Exercises 12.86 and 12.92.)
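Both series are easy to sum numerically; the short Python sketch below (ours) also shows how much more slowly the alternating series for log 2 converges than the series for e:

import math

e_partial = sum(1 / math.factorial(k) for k in range(11))         # 1 + 1/1! + ... + 1/10!
log2_partial = sum((-1)**(k + 1) / k for k in range(1, 10001))    # 1 - 1/2 + 1/3 - ...
print(e_partial, math.e)          # agree to about 8 decimal places after 11 terms
print(log2_partial, math.log(2))  # still off by about 5e-5 after 10000 terms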
The examples above can be summarized by saying that the functions sin x, cos x,
and ex are represented by their Taylor series everywhere; the function log(1 + x) is
represented by its Taylor series centered at 0 for all x ∈ (−1, 1]. By Theorem 13.9,
it easily follows that the Taylor series of the functions sin x and cos x centered at any
point represent the function. The same holds for the function ex (see Exercise 13.4).
Remark 13.17. We cannot omit the condition on the boundedness of the derivatives
in Theorem 13.9. Consider the following example.
Let f (x) = e−1/x² if x ≠ 0, and let f (0) = 0. We show that f is infinitely differentiable everywhere. Using differentiation rules and induction, it is easy to see that f is infinitely differentiable at every point x ≠ 0 and that for every n, there exists a polynomial pn such that
f (n) (x) = (pn (x)/x3n ) · e−1/x²
for all x ≠ 0. Since limy→∞ y2n /ey = 0, it follows from the theorem on the limits of compositions (Theorem 10.41) that
limx→0 (1/x4n ) · e−1/x² = 0,
and so
limx→0 f (n) (x) = limx→0 pn (x) · xn · (1/x4n ) · e−1/x² = 0.
Now we show that f is also infinitely differentiable at the point 0 and that f (n) (0) = 0
for all n. The statement f (n) (0) = 0 holds for n = 0. If we assume it holds for n, then
(by L’Hôpital’s rule, for example)
limx→0 ( f (n) (x) − f (n) (0))/(x − 0) = limx→0 f (n) (x)/x = limx→0 f (n+1) (x)/1 = 0,
so f (n+1) (0) = 0.
This shows that f is infinitely differentiable everywhere, and that f (n) (0) = 0
for all n. Thus every term of the Taylor series of f centered at 0 will be zero for
all x. Since f (x) > 0 if x ≠ 0, the statement of Theorem 13.9 does not hold for the
function f and the point a = 0.
Moreover, we have constructed a function that is infinitely differentiable everywhere, but its Taylor series corresponding to the point 0 does not represent it at any
point x ≠ 0.
We can observe that the function f has an absolute minimum at the origin. It is
also easy to see that the function
g defined by g(x) = −e−1/x² for x < 0, g(x) = e−1/x² for x > 0, and g(0) = 0
is infinitely differentiable, strictly monotone increasing, and all of its derivatives are
zero at the origin.
Remark 13.18. In Remark 11.28 and in the second appendix of Chapter 11, we
showed that the trigonometric and hyperbolic functions are closely linked, which
can be expressed with the help of complex numbers by Euler’s formulas (11.64),
(11.65), and (11.66).
This connection can be seen from the corresponding Taylor series as well. If we
apply the equality (13.21) to ix instead of x, then using (13.19) and (13.20), we
get (11.64). If we substitute ix into (13.23) and (13.24), then we get (11.66).
Global Approximation with Polynomials. Approximation with Taylor polynomials can be used only for functions that are many times differentiable. By the theorem
below, for a function f to be arbitrarily closely approximated by a polynomial on
a closed and bounded interval, it suffices for f to be continuous there. We remind
ourselves that the set of continuous functions on the closed and bounded interval
[a, b] is denoted by C[a, b] (see Theorems 10.52–10.58).
Theorem 13.19 (Weierstrass Approximation Theorem). If f ∈ C[a, b], then for
every ε > 0, there exists a polynomial p such that
| f (x) − p(x)| < ε
(13.28)
for all x ∈ [a, b].
We prove the theorem by specifying a polynomial that satisfies (13.28). For the
case [a, b] = [0, 1], let
Bn (x; f ) = ∑_{k=0}^{n} f (k/n) · \binom{n}{k} xk (1 − x)n−k . (13.29)
These polynomials are weighted averages of the values f (k/n) (k = 0, . . . , n), where the weights \binom{n}{k} xk (1 − x)n−k depend on x. The polynomial Bn (x; f ) is called the
nth Bernstein4 polynomial of the function f . Our goal is to prove the following
theorem.
Theorem 13.20. If f ∈ C[0, 1], then for all ε > 0, there exists an n0 = n0 (ε ; f ) such
that n > n0 implies
| f (x) − Bn (x; f )| < ε
for all x ∈ [0, 1].
The general case can be reduced to this. This is because if f : [a, b] → R, then the
function x → g(x) = f (a + (b − a) · x) is defined on [0, 1]. Thus let
Bn (x; f ) = Bn ((x − a)/(b − a); g)
for all x ∈ [a, b]. In other words, in the general case, we get the Bernstein polynomial by transforming the function using an inner linear composition into a function
defined on [0, 1]. Then, using the inverse transformation, we transform the Bernstein polynomial of the new function into one defined on [a, b]. It is clear that if
|Bn (x; g) − g(x)| < ε for all x ∈ [0, 1], then |Bn (x; f ) − f (x)| < ε for all x ∈ [a, b]. We
prove Theorem 13.20 in the first appendix of the chapter.
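The reduction to [0, 1] can be carried out directly in code; the following Python sketch (ours, with the hypothetical helpers bernstein and max_error) computes Bn (x; f ) for f (x) = |x| on [−1, 1] via g(t) = f (−1 + 2t) and shows the maximal error decreasing, if slowly, with n:

import math

def bernstein(g, n, x):
    # B_n(x; g) on [0, 1], formula (13.29)
    return sum(g(k / n) * math.comb(n, k) * x**k * (1 - x)**(n - k) for k in range(n + 1))

def max_error(f, a, b, n, samples=400):
    g = lambda t: f(a + (b - a) * t)
    return max(abs(f(a + (b - a) * t) - bernstein(g, n, t))
               for t in (i / samples for i in range(samples + 1)))

f = abs   # f(x) = |x| on [-1, 1]
for n in (4, 16, 64):
    print(n, max_error(f, -1.0, 1.0, n))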
The Bernstein polynomial—which is a weighted average of the values f (k/n)—
more or less tries to recreate the function f from the values it takes on at the points
k/n (k = 0, . . . , n). The need to recreate a function from its values at finitely many
4 Sergei Natanovich Bernstein (1880–1968), Russian mathematician.
points appears many times. If, for example, we are performing measurements, then
we might deduce other values from the ones we measured—at least approximately.
One of the simplest methods for this is to find a polynomial that takes the same
values at these points as the given function and has lowest possible degree.
Theorem 13.21. Let f be defined on the interval [a, b] , and let
a ≤ x0 < x1 < · · · < xn ≤ b
be fixed points. There exists exactly one polynomial p of degree at most n such that
p(xi ) = f (xi ) (i = 0, . . . , n).
Proof. For every k = 0, . . . , n let lk (x) be the product of the linear functions (x − xi )/(xk − xi ) for all 0 ≤ i ≤ n, i ≠ k. Then l0 , . . . , ln are nth-degree polynomials such that
lk (x j ) = 1 if k = j, and lk (x j ) = 0 if k ≠ j.
It is then clear that if
Ln (x; f ) = ∑_{k=0}^{n} f (xk ) · lk (x), (13.30)
then Ln (x; f ) is a polynomial of degree at most n that agrees with f at every point xi .
Now suppose that a polynomial p also satisfies these conditions. Then the polynomial q = p − Ln (x; f ) has degree at most n, and has at least n + 1 roots, since it vanishes at all the points x0 , . . . , xn . Thus by Lemma 11.1, q is identically zero, so p = Ln (x; f ). ⊓⊔
The polynomial given in (13.30) is called the Lagrange interpolation polynomial of f corresponding to the points x0 , . . . , xn .
Remark 13.22. It can be proved that taking the points {±k/n : 0 ≤ k ≤ n} yields Lagrange interpolation polynomials for f (x) = sin x and g(x) = sin 2x that approximate
them well, but do not approximate h(x) = |x| well at all. Namely, one can show that
| f (x) − Ln (x; f )| < K · 1/n! ;    |g(x) − Ln (x; g)| < K · 2n /n! ;
while |h(x) − Ln (x; h)| can be arbitrarily large for any x ∈ [−1, 1], x ≠ 0, 1, −1 (see
Exercise 13.10).
Generally, the Lagrange interpolation polynomials corresponding to the points
{±k/n : 0 ≤ k ≤ n} do not converge for every continuous function, even though the
polynomial agrees with the function on an ever denser set of points. If, however, the
function f is infinitely differentiable and maxx∈[−1,1] | f (n) (x)| = Kn , then
| f (x) − Ln (x, f )| < 2n · Kn /n!.
Thus the tighter we can bound Kn , the tighter we can bound | f (x) − Ln (x, f )|.
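The contrast described above can be reproduced numerically; in the Python sketch below (ours, with the helper lagrange_value built from formula (13.30)), interpolation at the points ±k/n is excellent for sin x and poor for |x|:

import math

def lagrange_value(xs, ys, x):
    # L_n(x; f) from formula (13.30)
    total = 0.0
    for k, xk in enumerate(xs):
        lk = 1.0
        for i, xi in enumerate(xs):
            if i != k:
                lk *= (x - xi) / (xk - xi)
        total += ys[k] * lk
    return total

n = 10
nodes = sorted({k / n for k in range(-n, n + 1)})   # the points +-k/n, 0 <= k <= n
for f, name in ((math.sin, "sin x"), (abs, "|x|")):
    ys = [f(t) for t in nodes]
    err = max(abs(f(x) - lagrange_value(nodes, ys, x))
              for x in (i / 500 - 1 for i in range(1001)))
    print(name, err)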
Exercises
13.2. Prove that if f is a polynomial, then its Taylor series corresponding to any
point will represent f everywhere. (H)
13.3. Prove that the Taylor series of the function 1/x corresponding to any point
a > 0 represents the function at all x ∈ (0, 2a).
13.4. Prove that the Taylor series of the function ex corresponding to any point will
represent the function everywhere.
13.5. Check that the nth Bernstein polynomial of the function f : [−1, 1] → R is
(1/2n ) ∑_{k=0}^{n} \binom{n}{k} (1 + x)k (1 − x)n−k f (2k/n − 1). (S)
13.6. Prove that if the function f : [−1, 1] → R is even, then all of its Bernstein
polynomials are also even (H).
13.7. Find the nth Bernstein polynomial of the function f (x) = |x| defined on [−1, 1]
for n = 1, 2, 3, 4, 5! (S)
13.8. Find the Bernstein polynomials of the function f (x) = e2x in [a, b].
13.9. Determine the Bernstein polynomials of f (x) = cos x in [−π /2, π /2]. (H)
13.10. Determine the Lagrange interpolation polynomials of
(a) the functions f (x) = sin x and g(x) = sin 2x corresponding to the points {±(k/n) · (π /2) : 0 ≤ k ≤ n} for n = 2, 3, and 4, and of
(b) the function h(x) = |x| corresponding to the points {±k/n : 0 ≤ k ≤ n} for n = 2, 3, 4 and n = 10.
Check that we get much better approximations in (a) than in (b).
13.11. Show that ∑_{k=0}^{n} (k/n) \binom{n}{k} xk (1 − x)n−k = x for all x. (S)
13.12. Show that
∑_{k=0}^{n} (k2 /n2 ) \binom{n}{k} xk (1 − x)n−k = x2 + (x − x2 )/n
for all x. (S)
13.3 The Indefinite Integral
It is a common problem to reconstruct a function from its derivative. This is the case
when we want to determine the location of a particle based on its speed, or the speed
(and thus location) of a particle based on its acceleration. Computing volume and
area provide further examples.
Examples 13.23. 1. Let f be a nonnegative monotone increasing and continuous
function on the interval [a, b]. We would like to find the area under the graph of f .
Let T (x) denote the area over the interval [a, x]. If a ≤ x < y ≤ b, then
f (x)(y − x) ≤ T (y) − T (x) ≤ f (y)(y − x),
since T (y) − T (x) is the area of a domain that contains a rectangle of width y − x
and height f (x), and is contained in a rectangle of width y − x and height f (y) (see
Figure 13.1). Thus
f (x) ≤ (T (y) − T (x))/(y − x) ≤ f (y),
and so
limy→x (T (y) − T (x))/(y − x) = f (x),
that is, T ′ (x) = f (x) if x ∈ [a, b].
Consider, for example, the function f (x) = x2 over the interval [0, 1]. By the argument above, T ′ (x) = x2 , so T (x) = (1/3)x3 + c by the fundamental theorem of integration (Corollary 12.53). However, T (0) = 0, so c = 0, and T (x) = (1/3)x3 for all x ∈ [0, 1]. Specifically, T (1) = 1/3, giving us the theorem of Archimedes regarding the area under a parabola.
Fig. 13.1
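The area function T can also be approximated directly by summing thin rectangles; the Python sketch below (ours) checks T (1) = 1/3 for f (x) = x2 in this way:

def area_under_square(b, steps=100000):
    # lower and upper rectangle sums for f(x) = x^2 on [0, b]
    h = b / steps
    lower = sum((i * h)**2 for i in range(steps)) * h
    upper = sum(((i + 1) * h)**2 for i in range(steps)) * h
    return lower, upper

lo, up = area_under_square(1.0)
print(lo, up)   # both close to 1/3, in line with T(1) = 1/3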
2. With this method, the volume of a sphere can also be determined. Consider a
sphere of radius R centered at the origin, and for 0 ≤ u ≤ R, let V (u) denote the
volume of the part of the sphere that lies between the horizontal planes z = 0 and
z = u (Figure 13.2). If 0 ≤ u < v ≤ R, then
(R2 − v2 ) · π · (v − u) ≤ V (v) − V (u) ≤ (R2 − u2 ) · π · (v − u). (13.31)
Fig. 13.2
Indeed, V (v) − V (u) is the volume of the slice of the sphere determined by the horizontal planes z = u and z = v. This slice contains a cylinder of height v − u whose radius is √(R2 − v2 ), and it is contained in a cylinder of the same height with radius √(R2 − u2 ). Assuming that the formula for the volume of a cylinder is known, we get inequality (13.31). Now by inequality (13.31), it is clear that V ′ (x) = π (R2 − x2 ) for all x ∈ [0, R], and thus
V (x) = π R2 x − (1/3)π x3 + c
by the fundamental theorem of integration again. Since V (0) = 0, we have c = 0.
The volume of the hemisphere is thus
V (R) = π R3 − (1/3)π R3 = (2/3)π R3 .
The solutions of the examples above required finding a function with a given derivative.
Definition 13.24. If F is differentiable on the interval I and F ′ (x) = f (x) for all
x ∈ I, then we say that F is a primitive function of f on I.
Theorem 13.25. If F is a primitive function of f on the interval I, then every primitive function of f on I is of the form F + c, where c is a constant.
Proof. This is a restatement of Corollary 12.53. ⊓⊔
Definition 13.26. The collection of the primitive functions of f is denoted by ∫ f dx, and is called the indefinite integral or antiderivative of f . Thus ∫ f dx is a set of functions. Moreover, F ∈ ∫ f dx if and only if F ′ = f .
Remarks 13.27. 1. Theorem 13.25 can be stated as F ∈ ∫ f dx implies that
∫ f dx = {F + c; c ∈ R}.
This statement is often denoted more concisely (and less precisely) by
∫ f dx = F + c.
2. A remark on the notation: F ′ = f can also be written as
dF/dx = f ,
which after “multiplying through” gives us that dF = f dx. Thus the equation F = ∫ f dx denotes that we are talking about the inverse of differentiation. In any case, dx can be omitted from behind the integral symbol; ∫ f is an equally usable notation.
3. Does every function have a primitive function? Let, for example,
f (x) = 0 if x ≤ 0, and f (x) = 1 if x > 0.
Suppose that F is a primitive function of f . Then for x < 0, we have F ′ (x) = 0, so
F(x) = c if x ≤ 0. On the other hand, if x > 0, then F ′ (x) = 1, so F(x) = x + a if
x ≥ 0. It then follows that F−′ (0) = 0 and F+′ (0) = 1, so F is not differentiable at 0,
which is impossible. This shows that f does not have a primitive function.
4. One of the most important sufficient conditions for the existence of a primitive
function is continuity. That is, if f is continuous on the interval I, then f has a
primitive function on I. We do not yet have all the tools to prove this important
theorem, so we will return to its proof once we have introduced integration (see
Theorem 15.5).
We saw in Example 13.23 that if f is nonnegative, monotone, and continuous,
then the area function T (x) is a primitive function of f . The proof of the general
case also builds on this thought. However, this argument uses the definition and
properties of area, and until we make this clear, we cannot come up with a precise
proof in this way.
5. Continuity is a sufficient condition for a primitive function to exist. Continuity,
however, is not a necessary condition. In Example 13.43, we will give a function
that is not continuous at 0 but has a primitive function. Later, in the section titled
Properties of Derivative Functions, we will inspect the necessary conditions for a
function to have a primitive function more closely (see Remark 13.45).
When we are looking for the primitive functions of our elementary functions, we
use the same method that we used to determine limits and derivatives as well. First
of all, we need a list of the primitive functions of the simplest elementary functions
(elementary integrals). Besides that, we must familiarize ourselves with the rules
that tell us how to find the primitives of functions defined with the help of functions
with already known primitives. These theorems are called integration rules.
The integrals listed here follow from the differentiation formulas of the corresponding functions.
The Elementary Integrals
∫ xα dx = xα+1 /(α + 1) + c (α ≠ −1)
∫ (1/x) dx = log |x| + c
∫ ex dx = ex + c
∫ ax dx = (1/log a) · ax + c (a ≠ 1)
∫ cos x dx = sin x + c
∫ sin x dx = − cos x + c
∫ (1/cos2 x) dx = tg x + c
∫ (1/sin2 x) dx = − ctg x + c
∫ sh x dx = ch x + c
∫ ch x dx = sh x + c
∫ (1/ch2 x) dx = th x + c
∫ (1/sh2 x) dx = − cth x + c
∫ (1/(1 + x2 )) dx = arc tg x + c
∫ (1/√(1 − x2 )) dx = arc sin x + c
∫ (1/√(x2 + 1)) dx = arsh x + c = log(x + √(x2 + 1)) + c
∫ (1/√(x2 − 1)) dx = arch x + c = log(x + √(x2 − 1)) + c
∫ (1/(1 − x2 )) dx = (1/2) · log |(1 + x)/(1 − x)| + c
These equalities are to be understood to mean that the indefinite integrals are the sets of the functions on the right-hand side, on every interval where the functions on the right are defined and differentiable.
For the time being, we will need only the simplest integration rules. Later, in
dealing with integration in greater depth, we will get to know further methods and
rules.
Theorem 13.28. If both f and g have primitive functions on the interval I, then
f + g and c · f do as well, namely
∫ ( f + g) dx = ∫ f dx + ∫ g dx and ∫ c f dx = c ∫ f dx
for all c ∈ R.
The theorem is to be understood as H ∈ ∫ ( f + g) dx if and only if H = F + G, where F ∈ ∫ f dx and G ∈ ∫ g dx. Similarly, H ∈ ∫ c f dx if and only if H = cF, where F ∈ ∫ f dx.
Proof. The theorem is clear by statements (i) and (ii) of Theorem 12.17. ⊓⊔
Example 13.29.
∫ (x3 + 2x − 3) dx = x4 /4 + x2 − 3x + c.
∫ ((x + 1)2 /x3 ) dx = ∫ ((x2 + 2x + 1)/x3 ) dx = ∫ (1/x + 2/x2 + 1/x3 ) dx = log |x| − 2/x − 1/(2x2 ) + c.
Theorem 13.30. If F ∈ ∫ f dx, then
∫ f (ax + b) dx = (1/a) F(ax + b) + c
for all a, b ∈ R, where a ≠ 0.
Proof. The statement is clear by the differentiation rules for compositions of functions. ⊓⊔
Example 13.31.
∫ √(2x − 3) dx = ∫ (2x − 3)1/2 dx = (1/2) · (2/3) · (2x − 3)3/2 + c = (1/3)(2x − 3)3/2 + c.
∫ e−x dx = e−x /(−1) + c = −e−x + c.
∫ sin((x + 1)/3) dx = −3 cos((x + 1)/3) + c.
∫ sin2 x dx = ∫ ((1 − cos 2x)/2) dx = ∫ (1/2 − (1/2) cos 2x) dx = x/2 − (1/2) · (sin 2x)/2 + c = x/2 − (sin 2x)/4 + c.
Remark 13.32. With this method we can determine the integrals of every trigonometric function, making use of the identities (11.35)–(11.36). So, for example,
∫ sin3 x dx = ∫ sin2 x · sin x dx = ∫ ((1 − cos 2x)/2) · sin x dx =
= ∫ ((1/2) sin x − (1/2) cos 2x sin x) dx = ∫ ((1/2) sin x − (1/4)(sin 3x + sin(−x))) dx =
= ∫ ((3/4) sin x − (1/4) sin 3x) dx = −(3/4) cos x + (1/12) cos 3x + c.
Theorem 13.33. If f is differentiable and positive everywhere on I, then on I,
∫ [ f (x)]α f ′ (x) dx = [ f (x)]α+1 /(α + 1) + c, if α ≠ −1.
If f ≠ 0, then
∫ ( f ′ (x)/ f (x)) dx = log | f (x)| + c.
Proof. The statement is clear by the differentiation rules for composition of functions. ⊓⊔
Example 13.34.
∫ cos x sin3 x dx = sin4 x/4 + c.
∫ tg x dx = − ∫ ((− sin x)/cos x) dx = − log | cos x| + c.
∫ x √(1 + x2 ) dx = (1/2) ∫ √(1 + x2 ) · 2x dx = (1/2) ∫ (x2 + 1)1/2 (x2 + 1)′ dx = (1/2) · (2/3) · (x2 + 1)3/2 + c = (1/3)(x2 + 1)3/2 + c.
∫ (x/(1 + x2 )) dx = (1/2) ∫ ((x2 + 1)′ /(x2 + 1)) dx = (1/2) log(x2 + 1) + c.
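Any of these antiderivatives can be checked by differentiating numerically; in the Python sketch below (ours), a central difference of F(x) = (1/3)(x2 + 1)3/2 is compared with the integrand x√(1 + x2 ):

import math

def F(x):
    return (x*x + 1)**1.5 / 3          # candidate primitive function

def f(x):
    return x * math.sqrt(1 + x*x)      # integrand

h = 1e-6
for x in (0.0, 0.7, 2.0):
    central = (F(x + h) - F(x - h)) / (2 * h)
    print(x, central, f(x))            # the two columns agree up to rounding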
Exercises
13.13. Determine the indefinite integrals of the following functions:
(a) cos2 x,  sin 2x,  1/√(x + 2),  x2 + ex − 2,  3x ,  45x+6 ,  1/∛x,
(b) (2x + 3x )2 ,  (1 − x)3 /(x3 √x),  x3 /(x + 1),  ((1 − x)/x)2 ,  (x + 1)/√x,  1/√(1 − 2x),
(c) x2 (5 − x)2 ,  (e3x + 1)/(ex + 1),  tg2 x,  ctg2 x,  1/(x + 5),  (x2 + 3)/(1 − x2 ),  1/(5x − 2)5/2 ,  ⁵√(1 − 2x + x2 )/(1 − x),  x/(1 − x2 ),
(d) 1/(2 + 3x2 ),  e−x + e−2x ,  |x2 − 5x + 6|,  x ex² ,  x/(1 + x2 )3/2 ,  (1 − x2 )9 ,
(e) x/√(x + 1),  (√x + ∛x)2 ,  (sin x + cos x)2 ,  sin x/cos3 x,  x2 √(1 + x3 ),
(f) sin x/√(cos 2x),  x/(x4 + 1),  x/√(x2 + 1),  (5x + 6)/(2x2 + 3),  ex /(ex + 2),
(g) x2 (4x3 + 3)7 ,  x3 (4x2 − 1)10 ,  1/(x · log x),  (log x)/x,  x/(1 + x2 )2 ,
(h) (ex − e−x )/(ex + e−x ),  (1 + √x)2 /(1 + x2 ),  (sin √x)/√x,  x √(x + 1).
13.4 Differential Equations
One of the most important applications of differentiation is in expressing a process
in mathematical form. Through differentiation, we can obtain a general overview or
prediction for how the process will continue. One of the driving reasons for differentiation is to make problems of this sort solvable.
An easy example of this application is growth and decay problems. If a quantity
is changing at a rate that is proportional to the quantity itself, then we are dealing
with a growth or decay problem, depending on whether the quantity is increasing or
decreasing. For example, in a period when a country's population is not affected by wars, epidemics, or new medical advances, and the quality of life is more or less constant, the population changes as outlined above: the number of births is proportional to the population. The same sort of thing is true for the
population of rabbits on an island, or a bacteria colony. The amount of material that
decays in radioactive decay also follows this law. This is because every molecule
of a decaying material decays with probability p after a (short) time h. Thus after
time t, the amount that has decayed is about p · (t/h) times the total amount of material. Thus the instantaneous rate of change of material is p/h times the amount of
material present at that instant.
How can we express such growth and decay processes mathematically? If the
size of the quantity at time t is f (t), then we can write it as
f ′ (t) = k · f (t),
(13.32)
and we speak of a growth or decay process depending on whether the constant k is
positive or negative. Now that we have precisely defined the conditions, it is easy to
find the functions satisfying them.
Theorem 13.35. Let f be differentiable on the interval I, and let f ′ = k· f on the
interval I, where k is constant. Then there exists a constant c such that
f (x) = c · ekx
(x ∈ I).
Proof. Let g(x) = f (x)e−kx . Then
g′ (x) = f ′ (x)e−kx − k f (x)e−kx = 0,
so g = c, which gives f (x) = c · ekx . ⊓⊔
Growth and decay processes can thus always be written down as a function of
the form c · ekx .
Example. An interesting (and important) application is radiocarbon dating. Living
organisms have a fixed ratio of the radioactive isotope C14 to the nonradioactive
isotope C12 . When the organism perishes, the C14 isotope is no longer replenished,
and it decays into C12 with a half-life of 5730 years (which means that the amount
of C14 decreases to half its original amount in 5730 years). With the help of this,
we can inspect the remains of an organism to estimate about how many years ago
it perished. Suppose, for example, that a certain (living) kind of tree possesses C14
with a ratio of α to the total amount of carbon it has. If we find a piece of this
kind of tree that has a ratio of 0.9α of C14 to total carbon, then we can argue in the
following way. When the tree fell, in 1 gram of carbon in the tree, α grams were C14 .
The decay of C14 is given by a function of the form c · ekt , where c · ek0 = α . Thus
we have c = α . We also know that α · e5730k = α /2, so k = −0.000121. Thus if the
tree fell t years ago, then α · ekt = 0.9α , which gives t = (1/k) · log 0.9 ≈ 870 years.
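The arithmetic of this example can be reproduced in a few lines of Python (a sketch of ours):

import math

half_life = 5730.0                     # years
k = math.log(0.5) / half_life          # from alpha * e^(5730 k) = alpha / 2
t = math.log(0.9) / k                  # from alpha * e^(k t) = 0.9 alpha
print(k)   # approximately -0.000121
print(t)   # approximately 870 years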
The equation (13.32) is a differential equation in the sense that we defined in
Remark 12.39; that is, it defines a relation between the function f and its derivatives.
We can write a differential equation symbolically using the equation
Φ (x, y, y′ , . . . , y(n) ) = 0. (13.33)
The differential equation (13.32) with this notation can be written as y′ − k · y = 0.
The relation (13.33) can generally contain known functions, just as y′ − k · y = 0
contains the constant k. The function y = f is a solution to the differential equation (13.33) if f is n times differentiable on an interval I, and
Φ (x, f (x), f ′ (x), . . . , f (n) (x)) = 0
for all x ∈ I. Theorem 13.35 can now also be stated by saying that the solutions to
the differential equation y′ = ky are the functions cekx .
An important generalization of the differential equation (13.32) is the equation
y′ = f y + g, where f and g are given functions. These are called first-order linear
differential equations.
Theorem 13.36. Let f and g be functions defined on the interval I. Suppose that f
has a primitive function on I, and let F be one such fixed primitive function. Then
every solution of the differential equation y′ = f y + g is of the form
y = eF ∫ g e−F dx.
That is, there exists a solution if and only if g e−F has a primitive function, and every solution is of the form eF · G, where G ∈ ∫ g e−F dx.
Proof. If y is a solution, then
(y e−F )′ = ( f y + g) e−F + y e−F (− f ) = g e−F ,
so y = eF ∫ g e−F dx. On the other hand,
(eF ∫ g e−F dx)′ = f eF ∫ g e−F dx + eF g e−F = f eF ∫ g e−F dx + g. ⊓⊔
Example 13.37. (The story of the little ant and the evil gnome.) A little ant starts
walking on a 10-cm-long elastic rope, starting at the right-hand side at 1 cm/sec, toward the fixed left-hand endpoint. At the same time, the evil gnome grabs the righthand endpoint of the elastic rope and starts running away to the right at 100 cm/sec,
stretching the rope. Can the little ant ever complete its journey?
If we let y(t) denote the distance of the ant from the left endpoint, then we can
easily find the speed of the ant to be
y′ (t) = 100 · y(t)/(10 + 100t) − 1 = (10/(10t + 1)) · y(t) − 1.
This is a first-order linear differential equation, to which we can apply the solution given by Theorem 13.36. Since F = log(10t + 1) is a primitive function of the
function 10/(10t + 1), we have
y = elog(10t+1) ∫ (−1) e− log(10t+1) dt = (10t + 1) ∫ (−1/(10t + 1)) dt = (10t + 1) (c − (1/10) log(10t + 1)).
Since y(0) = 10, we have c = 10, and so
y(t) = (t + 1/10) (100 − log(10t + 1)).
Thus the little ant will make it to the left endpoint in merely t seconds, where
log(10t +1) = 100, that is, 10t +1 = e100 ≈ 2.7·1043 . This gives us that t ≈ 2.7·1042
sec ≈ 8.6 · 1034 years. (While walking, the little ant ends up 1026 light years away
from the fixed endpoint.)
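The numbers quoted here are easy to reproduce; the Python sketch below (ours) computes the time at which 100 − log(10t + 1) vanishes and converts it to years:

import math

# y(t) = (t + 1/10)(100 - log(10 t + 1)) reaches the left endpoint when
# 100 - log(10 t + 1) = 0, i.e., 10 t + 1 = e^100
t_seconds = (math.exp(100) - 1) / 10
t_years = t_seconds / (365.25 * 24 * 3600)
print(t_seconds)   # about 2.7e42 seconds
print(t_years)     # about 8.5e34 years, in line with the estimate in the text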
Theorem 13.38. Let I and J be intervals, and let f : I → R, g : J → R \ {0} be functions. Suppose that f and 1/g both have primitive functions on I and J respectively, and let these be F ∈ ∫ f and G ∈ ∫ (1/g). Then a function y : I1 → R is a solution to
the differential equation
y′ = f (x)g(y)
on the interval I1 ⊂ I if and only if G(y(x)) = F(x) + c for all x ∈ I1 with some
constant c.
Proof. Let y : I1 → J be differentiable. Then
y′ (x) = f (x) g(y(x)) ⇐⇒ y′ (x)/g(y(x)) − f (x) = 0 ⇐⇒
⇐⇒ (G(y(x)) − F(x))′ = 0 ⇐⇒ G(y(x)) − F(x) = c ⇐⇒ G(y(x)) = F(x) + c. ⊓⊔
Remarks 13.39. 1. Equations of the form y′ = f (x)g(y) are called separable differential equations. The statement of the theorem above can be expressed by saying that if g ≠ 0, then we get every solution of the equation if we express y from
G(y) = F(x) + c. This also means that the following formal process leads us to a
correct solution:
dy/dx = f (x) g(y),    dy/g(y) = f (x) dx,    ∫ dy/g(y) = ∫ f (x) dx,    G(y) = F(x) + c.
2. The solutions of the equation y′ = f (x)g(y) are generally not defined on the whole
interval I. Consider, for example, the differential equation y′ = y2 , where f ≡ 1,
I = R, and g(x) = x2 , J = (0, ∞). Then by the above theorem, the solutions satisfy
−1/y = x + c, y = −1/(x + c). The solutions are defined on only one open half-line
each.
Example 13.40. Consider the graphs of the functions y = c · x2 . These are parabolas
that cover the plane in one layer except the origin, meaning that every point in the
plane, other than the origin, has exactly one parabola of this form containing it.
Which are the curves that intersect every y = c · x2 at right angles? If such a curve is
the graph of a function y = f (x), then at every point a = 0, we have
f ′ (a) = −1/(2c · a) = −1/(2( f (a)/a2 ) · a) = −a/(2 f (a)).
This is so because perpendicular intersections mean (by definition) that at every
point (a, f (a)) on the curve, the tangent to the curve is perpendicular to the tangent
to the parabola that contains this point, and by writing out the slopes, we get the
above condition.
Thus f is a solution of the separable differential equation y′ = −x/(2y). Using the method above, we get that
2y dy = −x dx,    ∫ 2y dy = − ∫ x dx,    y2 = −x2 /2 + c,
so
f (x) = ±√(c − x2 /2),
where c is an arbitrary positive constant. (In the end, the curves we get are the ellipses of the form x2 /2 + y2 = c (Figure 13.3).)
Fig. 13.3
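A quick numerical sanity check (ours, in Python): at a common point, the slope of the parabola y = c · x2 and the slope of the ellipse x2 /2 + y2 = C through the same point multiply to −1:

a = 1.3                          # x-coordinate of a point on the parabola y = c x^2
c = 0.8
ya = c * a * a
slope_parabola = 2 * c * a
C = a * a / 2 + ya * ya          # identifies the ellipse x^2/2 + y^2 = C through (a, ya)
slope_ellipse = -a / (2 * ya)    # implicit differentiation of x^2/2 + y^2 = C: x + 2 y y' = 0
print(slope_parabola * slope_ellipse)   # -1.0 up to rounding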
Second-Order Linear Homogeneous Differential Equations. Let g and h be functions defined on the interval I. Below, we will learn all there is to know about solutions of differential equations of the form
y′′ + gy′ + hy = 0. (13.34)
We suppose that g has a primitive function on I; let G be one such primitive function.
1. It is easy to check that if y1 and y2 are solutions, then c1y1 + c2y2 is also a solution for all c1, c2 ∈ R. (That is, the solutions form a vector space.)
2. If y1 and y2 are solutions, then (y1′y2 − y1y2′)e^G is constant on I. (Proof: one has to check that its derivative is 0.)
3. If y1 and y2 are solutions, then y1′y2 − y1y2′ is either identically zero on I or nowhere zero on I. (Proof: by the previous statement.)
4. If y1 and y2 are solutions and there exists an interval J ⊂ I on which y1 ≠ 0 and y2/y1 is not constant, then y1′y2 − y1y2′ ≠ 0 on I. (Proof: if it were 0, then the derivative of y2/y1 would vanish on J, and so y2/y1 would be constant on J.)
5. If y1 and y2 are solutions for which y1′y2 − y1y2′ ≠ 0 on I, then every solution is of the form c1y1 + c2y2. (That is, the vector space of the solutions has dimension 2.) (Proof: We know that (y2y′ − y2′y)e^G = c1, (y1y′ − y1′y)e^G = c2, and (y2y1′ − y2′y1)e^G = c3, where c1, c2, and c3 are constants and c3 ≠ 0. Subtract y2 times the second equality from y1 times the first equality. We get that y(y2y1′ − y2′y1)e^G = c1y1 − c2y2. Taking into account the third equality, we then have that y = (c1/c3)y1 − (c2/c3)y2.)
If condition 5 holds, then we say that y1 and y2 form a basis for the solution
space.
Second-Order Linear Homogeneous Differential Equations with Constant Coefficients. A special case of the differential equations discussed above is the one in which g and h are constant:
y′′ + ay′ + by = 0.
Let the roots of the quadratic polynomial x² + ax + b be λ1 and λ2. It is easy to check that the following solutions y1 and y2 satisfy condition 5 above, that is, they form a basis for the solution space.
(i) If λ1 and λ2 are distinct and real, then y1 = e^{λ1 x}, y2 = e^{λ2 x}.
(ii) If λ1 = λ2 = λ, then y1 = e^{λx}, y2 = xe^{λx}.
(iii) If λ1 and λ2 are complex, that is, λ1 = α + iβ, λ2 = α − iβ (α, β ∈ R), then y1 = e^{αx} cos βx, y2 = e^{αx} sin βx.
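The case distinction (i)–(iii) translates directly into a short routine. The following Python sketch (the function name and output format are ours, not the text's) computes λ1, λ2 and returns the corresponding basis:

# Sketch: given y'' + a y' + b y = 0, return a basis (as formula strings) for
# the solution space according to cases (i)-(iii).
import cmath

def solution_basis(a, b):
    disc = a * a - 4 * b
    lam1 = (-a + cmath.sqrt(disc)) / 2
    lam2 = (-a - cmath.sqrt(disc)) / 2
    if disc > 0:                                  # (i) distinct real roots
        return (f"e^({lam1.real}x)", f"e^({lam2.real}x)")
    if disc == 0:                                 # (ii) a double root
        lam = lam1.real
        return (f"e^({lam}x)", f"x*e^({lam}x)")
    alpha, beta = lam1.real, abs(lam1.imag)       # (iii) complex roots α ± iβ
    return (f"e^({alpha}x)*cos({beta}x)", f"e^({alpha}x)*sin({beta}x)")

print(solution_basis(0, 1))    # y'' + y = 0: α = 0, β = 1, i.e. cos x and sin x
print(solution_basis(-5, 6))   # y'' - 5y' + 6y = 0: roots 3 and 2
print(solution_basis(2, 1))    # y'' + 2y' + y = 0: double root -1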
The Equations of Harmonic Oscillation. Consider a point P that is moving
on a straight line, with a force acting on it toward the origin with magnitude
proportional to its distance from the origin. Then the equation describing the
movement of P is my′′ = cy, where m is the mass of the point P, and c < 0.
Let −c/m = a². Then y′′ + a²y = 0, and so by the above, y(t) = c1 cos at + c2 sin at.
Let C = √(c1² + c2²). Then there exists a b such that sin b = c1/C and cos b = c2/C. Thus y(t) = C(sin b cos at + cos b sin at), that is,
y(t) = C · sin(at + b).
Now suppose that another force acts on the point P, stemming from resistance,
which is proportional to the speed of P in the opposite direction. Then the equation
describing the motion of the point is my′′ = cy + ky′, where c, k < 0. Let −c/m = a0 and −k/m = a1. Then y′′ + a1y′ + a0y = 0, where a0, a1 > 0. Let the roots of the quadratic polynomial x² + a1x + a0 be λ1 and λ2. Then, with the notation a1² − 4a0 = d, we have that
λ1,2 = (−a1 ± √d)/2.
Clearly, d < a1². If d > 0, then λ1, λ2 < 0, and every solution is of the form
y(t) = c1e^{λ1 t} + c2e^{λ2 t}.
If d = 0, then λ1 = λ2 = λ < 0, and every solution is of the form
y(t) = c1e^{λt} + c2te^{λt}.
Finally, if d < 0, then λ1,2 = α ± iβ, where α < 0. Then every solution is of the form
y(t) = e^{αt}(c1 cos βt + c2 sin βt) = C · e^{αt} sin(βt + δ).
We observe that in every case, y(t) → 0 as t → ∞, that is, the vibration is “damped.”
Forced oscillation of the point P occurs when aside from a force proportional to
its distance from the origin in the opposite direction, a force of magnitude M sin ω t
also occurs (this force is dependent only on time; for the sake of simplicity, we ignore friction). The equation of this forced oscillation is my′′ = cy + M sin ω t, that is,
y′′ + a²y = (M/m) · sin ωt.    (13.35)
To write down every solution, it is enough to find one solution y0 .
Generally, if g, h, and u are fixed functions, then the equation
y′′ + gy′ + hy = u
(13.36)
is called a second-order inhomogeneous linear differential equation. If we know one solution y0 of equation (13.36) and, in addition, a basis y1, y2 for the solution space of the corresponding homogeneous equation (13.34), then every solution of (13.36) is of the form y0 + c1y1 + c2y2.
Returning to forced oscillation, let us find a solution y0 of the form c1 cos ωt + c2 sin ωt. A simple calculation gives that for ω ≠ a, the choices c1 = 0 and c2 = M/(m(a² − ω²)) work. Then every solution of (13.35) is of the form
y(t) = (M/(m(a² − ω²))) · sin ωt + C sin(at + b).
If, however, ω = a, then let us try to find a solution of the form t(c1 cos ωt + c2 sin ωt). Then c1 = −M/(2am) and c2 = 0, and thus all of the solutions are given by
y(t) = −(M/(2am)) · t cos at + C sin(at + b).
We see that the movement is not bounded: at the times tk = 2kπ/a, the displacement satisfies
|y(tk)| ≥ (Mπ/(ma²)) · k − C → ∞  (k → ∞).
This means that if the frequency of the forcing (ω/2π) agrees with the natural frequency of the system (a/2π), then resonance occurs.
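Both particular solutions used above, including the coefficient −M/(2am) in the resonant case, can be verified symbolically. A short Python sketch (assuming SymPy is available; the symbol names are our own) substitutes them into equation (13.35):

# Sketch: check the particular solutions of y'' + a^2 y = (M/m) sin(ωt)
# in the cases ω != a and ω = a; both expressions below simplify to 0.
import sympy as sp

t, a, w, M, m = sp.symbols('t a omega M m', positive=True)

y1 = M / (m * (a**2 - w**2)) * sp.sin(w * t)     # candidate for ω != a
y2 = -M / (2 * a * m) * t * sp.cos(a * t)        # candidate for ω = a

print(sp.simplify(sp.diff(y1, t, 2) + a**2 * y1 - M / m * sp.sin(w * t)))
print(sp.simplify(sp.diff(y2, t, 2) + a**2 * y2 - M / m * sp.sin(a * t)))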
Differential Equations That Can Be Reduced to Simpler Equations.
1. The equation y′′ = f(y, y′) is called an autonomous (not containing x) second-order differential equation. It can be reduced to a first-order differential equation
if we first look for the function p for which y′ (x) = p(y(x)). (Such a p exists if
y is strictly monotone.) Then y′′ = p′ (y)y′ = p′ (y) · p, so p′ · p = f (y, p). If we
can determine p from this, then we can obtain y from the separable equation
y′ = p(y).
A simple example: y′′y² = y′; p′py² = p; p′ = y⁻²; p = −(1/y) + c; y′ = −(1/y) + c, and y can already be determined from this.
2. The equation y′ = f(x + y) can be reduced to a separable equation if we introduce the function z = x + y. This satisfies z′ − 1 = f(z), which is separable.
Example: y′ = (x + y)²; z′ − 1 = z²; z′ = z² + 1; ∫ dz/(z² + 1) = ∫ dx; arc tg z = x + c; z = tg(x + c); y = tg(x + c) − x. (A quick symbolic check of this example is sketched after item 3 below.)
3. The equation y′ = f (y/x) can be reduced to a separable equation if we introduce
the function z = y/x. Then y = zx, y′ = z + xz′ = f (z), and z′ = ( f (z) − z)/x,
which is separable.
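A quick symbolic check of the example in item 2 (a Python sketch, assuming SymPy is available):

# Sketch: the substitution z = x + y turns y' = (x+y)^2 into z' = z^2 + 1;
# check that y = tg(x + c) - x indeed solves the original equation.
import sympy as sp

x, c = sp.symbols('x c')
y = sp.tan(x + c) - x
print(sp.simplify(sp.diff(y, x) - (x + y)**2))   # prints 0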
Exercises
13.14. Solve the following equations:
(a) y′ + 2xy = 0, y′ − 2y ctg x = 0, y′ − xy = x³, y′ + y = e^{−x}, y′ − (y/x) = x² + 3x − 2, y′ cos x + y sin x = 1;
(b) y′ = y², xy′ = y log y, y′ = e^{y−x}, xy′ + y² = −1, y′ − √(y² − 3y + 4) = 0, y′ = y² + 1, y′ − e^{x−y} − e^x = 0, y′ = e^x/(y(e^x + 1)), y′y√(1 − x²) = −x√(1 − y²);
(c) y′ = (y − x)², y′ = y − 2x, xy′ = y − x · cos²(y/x), x sin(y/x) − y cos(y/x) + xy′ cos(y/x) = 0;
(d) y′′ + y = 0, y′′ − 5y′ + 6y = 0, y′′ − y′ − 6y = 0, 4y′′ + 4y′ + 37y = 0;
(e) y′′ = (1 − x²)^{−1/2}, y³y′′ = 1, 2xy′′ + y′ = 0, y′′ = y².
13.15. Show that the equation y′ = ∛(y²) with the initial condition y(0) = 0 has both the solutions y(x) ≡ 0 and y(x) = (x/3)³.
13.16. We put water boiling at 100 degrees Celsius out to cool in 20-degree weather.
The temperature of the water is 30 degrees at 10 a.m., and 25 degrees at 11 a.m.
When did we place the water outside? (The speed of cooling is proportional to the
difference between the temperature of the body and that of its environment.)
13.17. Draw the integral curves of the following equations (that is, graphs of their
solutions): y′ = y/x, y′ = x/y, y′ = −y/x, y′ = −x/y, y′ = kx^a y^b.
13.18. We call the solutions of the equation y′ = (a−by)y logistic curves. In the
special case a = b = 1, we get the special logistic equation y′ = (1 − y)y. Solve this
equation. Sketch some integral curves of the equation.
13.19. Find the functions f for which the following statement holds: for an arbitrary
point P on graph f , its distance from the origin is equal to the distance of the point
Q from the origin, where Q is the intersection of the y-axis with the tangent line of
the graph of the function at P.
13.20. Let f : [0, ∞) → (0, ∞) be differentiable, and suppose that for every a > 0, the area of the trapezoid bounded by the lines with equations x = 0, x = a, y = 0, and the tangent line to the graph of f at the point (a, f(a)) is constant. What can f be?
13.21. A motorboat weighing 300 kg and moving at 16 m/s has its engine turned off. How far does it travel (and in how much time) if the water resistance (which is a force acting in the direction opposite to the direction of travel) is v² newtons at a speed of v? What about if the resistance is v newtons?
13.22. Determine how much time a rocket launched at 100 m/sec initial velocity needs to reach its maximum height if the air resistance causes an acceleration of −v²/10. (Assume the acceleration due to gravity to be 10 m/sec².)
13.23. A body is slowly submerged into a liquid. The resistance is proportional to
its speed. Determine the graph of distance against time for the submerging body.
(We assume that the initial speed of the body is zero.)
13.24. In a bowl of M liters we have a liquid that is a solution of a% concentration. We add a solution of b% concentration at a rate of m liters/sec, and the immediately mixed solution flows out at the same rate. What is the concentration of the solution in the bowl as a function of time?
13.25. What are the curves that intersect the following families of curves at right
angles? (a) y = x² + c; (b) y = e^x + c; (c) y = c · e^x; (d) y = cos x + c; (e) y = c · cos x.
13.5 The Catenary
Let us suspend an infinitely thin rope that has weight and is homogeneous in the
sense that the weight of every arc of the rope of length s is c · s, where c is a constant.
Our goal is the proof—borrowing some basic ideas from physics—of the fact that
the shape of this suspended homogeneous rope is similar to an arc of the graph of
the function ch x. To prove this, we will need Theorems 10.79 and 10.81 about arc
lengths of graphs of functions, as well as one more similar theorem.
Theorem 13.41. Let f be differentiable on [a, b], and suppose that f′ is continuous. Then the graph of f is rectifiable. Let s(x) denote the arc length of graph f over [a, x], that is, let s(x) = s(f; [a, x]). Then s is differentiable, and s′(x) = √(1 + f′(x)²) for all x ∈ [a, b].
Lemma 13.42. Let f be differentiable on [a, b], and suppose that
A ≤ √(1 + f′(x)²) ≤ B
for all x ∈ (a, b). Then the graph of f is rectifiable, and the bounds
A · (b − a) ≤ s(f; [a, b]) ≤ B · (b − a)    (13.37)
hold for the arc length s(f; [a, b]).
Proof. Let a = x0 < x1 < · · · < xn = b be a partition F of the interval [a, b], and let pi = (xi, f(xi)) for every i = 0, . . . , n. The distance between the points pi−1 and pi is
|pi − pi−1| = √((xi − xi−1)² + (f(xi) − f(xi−1))²) = √(1 + ((f(xi) − f(xi−1))/(xi − xi−1))²) · (xi − xi−1).    (13.38)
By the mean value theorem, there exists a point ci ∈ (xi−1, xi) such that
f′(ci) = (f(xi) − f(xi−1))/(xi − xi−1).
Since by assumption, A ≤ √(1 + f′(ci)²) ≤ B, using (13.38), we get the bound
A · (xi − xi−1) ≤ |pi − pi−1| ≤ B · (xi − xi−1).    (13.39)
The length of the broken line hF corresponding to this partition F is the sum of the numbers |pi − pi−1|. Thus if we add inequalities (13.39) for all i = 1, . . . , n, then we get that
A · (b − a) ≤ hF ≤ B · (b − a),    (13.40)
from which (13.37) is clear. □
Proof (Theorem 13.41). By Weierstrass's theorem, f′ is bounded on [a, b], so by the previous lemma, we find that graph f is rectifiable. Let c ∈ [a, b) and ε > 0 be given. Since the functions √(1 + x²) and f′ are continuous, the composition √(1 + f′(x)²) must also be continuous. Let D denote the number √(1 + f′(c)²). By the definition of continuity, we can find a positive δ such that
D − ε < √(1 + f′(x)²) < D + ε    (13.41)
for all x ∈ (c, c + δ). Thus by Lemma 13.42,
(D − ε) · (x − c) ≤ s(f; [c, x]) = s(x) − s(c) ≤ (D + ε) · (x − c),
that is,
D − ε ≤ (s(x) − s(c))/(x − c) ≤ D + ε,
if x ∈ (c, c + δ). Thus we see that s+′(c) = D = √(1 + f′(c)²). We can show similarly that s−′(c) = √(1 + f′(c)²) for all c ∈ (a, b]. □
Consider now a suspended string, and let f
be the function whose graph gives us the shape
of the string. Graphically, it is clear that f is
convex and differentiable.
Fig. 13.4
Due to the tension inside the rope, every point has two equal but opposing forces acting on it whose directions agree with the tangent line. For arbitrary u < v, three outer forces act on the arc over [u, v]: the two forces pulling the endpoints outward, as well as gravity acting on the arc with magnitude c · (s(v) − s(u)) (Figure 13.4). Since the rope is at rest, the sum of these three forces is zero. The gravitational force acts vertically downward, so the horizontal components of the tension forces must cancel each other out. This means that the horizontal components of the tension force agree at any two points. Thus there exists a constant a > 0 such that for all u, the horizontal component of tension at u is a, and so the vertical component is a · f′(u), since the direction of tension is parallel to the tangent line. It then follows that the vertical components of the three forces acting on [u, v] are −a · f′(u), a · f′(v), and −c · (s(v) − s(u)). Thus we get that
a · f′(v) − a · f′(u) − c · (s(v) − s(u)) = 0
for all u < v. If we divide by v − u and let v tend to u, then we get that a · f′′(u) − c · s′(u) = 0. Now by the previous theorem, f′′(u) = b · √(1 + (f′(u))²), where b = c/a.
Since (arsh x)′ = 1/√(1 + x²), we have that (arsh(f′))′ = f′′/√(1 + (f′)²) = b. Thus arsh f′(x) = bx + d with some constant d. From this, f′(x) = sh(bx + d), so
f(x) = (1/b) · ch(bx + d) + e,
where b, d, e are constants. This means that the graph of f is similar to an arc of the graph of the function ch x, and this is what we wanted to prove.
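The equation f′′ = b√(1 + (f′)²) can also be checked numerically against the closed form. The following Python sketch (the value of b, the step size, and the initial values are arbitrary choices of our own) integrates it by Euler's method and compares the result with ch:

# Sketch: integrate f'' = b*sqrt(1 + f'^2) with f(0) = 1/b, f'(0) = 0 by a
# simple Euler scheme and compare with the catenary cosh(b*x)/b at x = 1.
import math

b, h, n = 2.0, 1e-5, 100000        # b = c/a; step size; number of steps
x, f, fp = 0.0, 1.0 / b, 0.0       # start at the lowest point of the rope
for _ in range(n):
    f += h * fp
    fp += h * b * math.sqrt(1.0 + fp * fp)
    x += h

print(f, math.cosh(b * x) / b)     # the two values agree to a few decimals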
13.6 Properties of Derivative Functions
While talking about indefinite integrals, we mentioned that continuity is a sufficient
but not necessary condition for the existence of a primitive function. Now we introduce a function g that is not continuous at the point 0 but has a primitive function.
Example 13.43. Consider first the function
f(x) = { x² sin(1/x), if x ≠ 0;  0, if x = 0. }
Applying the differentiation rules, we get that for x ≠ 0,
f′(x) = 2x sin(1/x) − cos(1/x).
With the help of differentiation rules the differentiability of f at 0 cannot be decided,
and the function 2x sin(1/x) − cos(1/x) isn’t even defined at 0. This, however, does
not mean that the function f is not differentiable at 0. To decide whether it is, we
look at the limit of the difference quotient:
lim
x→0
f (x) − f (0)
x2 sin(1/x) − 0
1
= lim
= lim x sin → 0,
x→0
x→0
x−0
x−0
x
that is, f′(0) = 0. Thus the function f is differentiable everywhere, and its derivative is
f′(x) = g(x) = { 2x sin(1/x) − cos(1/x), if x ≠ 0;  0, if x = 0. }    (13.42)
Since limx→0 2x sin(1/x) = 0 and the limit of cos(1/x) does not exist at 0, the limit
of f ′ = g does not exist at 0 either.
Thus the function f (x) is differentiable everywhere, but its derivative function is
not continuous at the point 0. We can also state this by saying that the function g is
not continuous, but it has a primitive function.
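Numerically, the missing limit is easy to observe. The following Python sketch (the sample points are our own choice) evaluates g along two sequences tending to 0:

# Sketch: g(x) = 2x sin(1/x) - cos(1/x) along two sequences tending to 0;
# the values approach -1 and +1, so g has no limit at 0.
import math

def g(x):
    return 2 * x * math.sin(1 / x) - math.cos(1 / x)

for k in (10, 100, 1000):
    x1 = 1 / (2 * k * math.pi)          # cos(1/x1) = 1  -> g(x1) near -1
    x2 = 1 / ((2 * k + 1) * math.pi)    # cos(1/x2) = -1 -> g(x2) near +1
    print(g(x1), g(x2))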
The following theorem states that derivative functions—even though they are
not always continuous—possess the properties outlined in the Bolzano–Darboux
theorem.
Theorem 13.44 (Darboux’s Theorem). If f is differentiable on [a, b], then f ′ takes
on every value between f+′ (a) and f−′ (b) there.
Proof. Suppose first that f+′ (a) < 0 < f−′ (b). We show that there exists a c ∈ (a, b)
such that f ′ (c) = 0. By the one-sided variant of Theorem 12.44, f is strictly locally
decreasing from the right at a, and strictly locally increasing from the left at b (see
Remark 12.45). It then follows that f takes on values in the open interval (a, b) that
are smaller than f (a) and f (b).
Now by Weierstrass’s theorem, f takes on its absolute minimum at some point
c ∈ [a, b]. Since neither f (a) nor f (b) is the smallest value of f in [a, b], we must
have c ∈ (a, b). This means that c is a local extremum, so f ′ (c) = 0 (Figure 13.5).
Fig. 13.5
Now let f+′(a) < d < f−′(b). Then for the function g(x) = f(x) − d · x, we have g+′(a) = f+′(a) − d < 0 < f−′(b) − d = g−′(b). Since the function g satisfies the conditions of the previous special case, there exists a c ∈ (a, b) such that g′(c) = f′(c) − d = 0, that is, f′(c) = d.
The proof for the case f+′(a) > f−′(b) is similar. □
Remarks 13.45. 1. We say that the function f has the Darboux property on the
interval I if whenever a, b ∈ I and a < b, then f takes on every value between f (a)
and f (b) on [a, b]. By the Bolzano–Darboux theorem, a continuous function on
a closed and bounded interval always satisfies the Darboux property. Darboux’s
theorem states that this is also true for derivative functions.
2. Darboux’s theorem can be stated thus: a function has a primitive function only
if it has the Darboux property. This property, however, is not sufficient for a primitive function to exist. One can show that every derivative function has a point of
continuity (but the proof of this is beyond the scope of this book). This means that
for the existence of a primitive function, the function needs to be continuous at a
point. It is easy to check that if a function f : [0, 1] → R takes on every value between 0 and 1 on every subinterval of [0, 1] (such as the function we constructed in
Exercise 9.13), then f has the Darboux property, but it does not have any points of
continuity. Such a function cannot have a primitive function by what we said above.
The functions appearing in Exercise 13.33 also have the Darboux property but do
not have primitive functions.
3. Continuity in an interval I is a sufficient but not necessary condition for the existence of a primitive function on I; the Darboux property on I is a necessary but not
sufficient condition. Currently, we know of no simple condition based on the properties of the function itself that is both necessary and sufficient for a given function
to have a primitive function. Many signs indicate that no such condition exists (see
the following paper: C. Freiling, On the problem of characterizing derivatives, Real
Analysis Exchange 23 (2) (1997–98), 805–812).
The question arises whether derivative functions have the same properties as continuous functions outlined in Theorems 10.52 and 10.55. In the following examples, we show that the answer is negative. For boundedness, note that the function g constructed in Example 13.43 is bounded on the interval [−1, 1]; it is easy to see that for x ∈ [−1, 1] we have |g(x)| ≤ 2|x| + 1 ≤ 3. First we construct a derivative function that is not bounded on [−1, 1].
Example 13.46. Let
f(x) = { x² sin(1/x²), if x ≠ 0;  0, if x = 0. }
Then for x ≠ 0,
f′(x) = 2x sin(1/x²) − (2/x) · cos(1/x²).
Since
lim_{x→0} (f(x) − f(0))/(x − 0) = lim_{x→0} (x² sin(1/x²) − 0)/(x − 0) = lim_{x→0} x sin(1/x²) = 0,
f is also differentiable at 0. Now f′ is not bounded on [−1, 1], since
f′(1/√(2kπ)) = −2√(2kπ) → −∞  as k → ∞.
Example 13.47. We show that there exists a function f that is differentiable everywhere, whose derivative is bounded on the interval I = [0, 1/10], but f′ does not have a largest value on I. Let
f(x) = { (1 − 3x)x² sin(1/x), if x ≠ 0;  0, if x = 0. }
Then for x ≠ 0,
f′(x) = (2x − 9x²) sin(1/x) − (1 − 3x) cos(1/x).
Since
lim_{x→0} (f(x) − f(0))/(x − 0) = lim_{x→0} (1 − 3x)x sin(1/x) = 0,
f is also differentiable at 0. We show that on the interval I, f′ does not have a maximum. Indeed, for x ∈ I, f′(x) ≤ (2x − 9x²) + (1 − 3x) < 1 − x, and
lim_{k→∞} f′(1/((2k + 1)π)) = lim_{k→∞} (1 − 3/((2k + 1)π)) = 1.
It then follows that sup_{x∈I} f′(x) = 1, but f′ is less than 1 everywhere on the interval I. Thus f′ does not have a largest value in I, even though it is clearly bounded.
Exercises
13.26. Does the function
sgn x = { 1, if x > 0;  0, if x = 0;  −1, if x < 0 }
have a primitive function?
13.27. Does the function [x] (floor function) have a primitive function?
13.28. Does the function
g(x) = { 2x + 1, if x ≤ 1;  3x, if x > 1 }
have a primitive function?
13.29. Prove that if f is differentiable on the interval I, then f ′ cannot have a removable discontinuity on I.
13.30. Prove that if f is differentiable on the interval I, then f ′ cannot have a discontinuity of the first type on I.
13.31. Let f be differentiable on an interval I. Prove that f ′ (I) is also an interval.
13.32. Prove that the functions h1 (x) = sin(1/x), h1 (0) = 0 and h2 (x) = cos(1/x),
h2 (0) = 0 have primitive functions. (In the proof, you can use the fact that every
continuous function has a primitive function.) (S)
13.33. Show that the squares of the functions h1 and h2 appearing in the previous
question do not have primitive functions. (S)
13.7 First Appendix: Proof of Theorem 13.20
Proof. Let f ∈ C[0, 1] and ε > 0 be fixed. We begin with the equality
f(x) − Bn(x; f) = ∑_{k=0}^{n} ( f(x) − f(k/n) ) \binom{n}{k} x^k (1 − x)^{n−k},    (13.43)
which is clear from the definition of Bn(x; f) and from the identity
∑_{k=0}^{n} \binom{n}{k} x^k (1 − x)^{n−k} = (x + (1 − x))^n = 1,
which follows from an application of the binomial theorem. By Theorem 10.52, f is bounded, and
so there exists an M such that | f (x)| ≤ M for all x ∈ [0, 1]. On the other hand, by
Heine’s theorem (Theorem 10.61), there exists a δ > 0 such that if x, y ∈ [0, 1] and
|x − y| < δ , then | f (x) − f (y)| < ε /2.
Let n ∈ N+ and x ∈ [0, 1] be fixed. Let I and J denote the sets of the indices 0 ≤ k ≤ n for which |x − (k/n)| < δ and |x − (k/n)| ≥ δ, respectively. The basic idea of the proof is that if k ∈ I, then by the definition of δ we have |f(x) − f(k/n)| < ε/2, and so on the right-hand side of (13.43), the sum of the terms with indices k ∈ I is small. We show that the sum of the terms corresponding to indices k ∈ J is also small for the function f ≡ 1, and that the given function f can increase this sum to at most 2M times that. Let us see the details. If |x − (k/n)| < δ, then |f(x) − f(k/n)| < ε/2, and so
∑_{k∈I} | f(x) − f(k/n) | · \binom{n}{k} x^k (1 − x)^{n−k} < ∑_{k∈I} (ε/2) · \binom{n}{k} x^k (1 − x)^{n−k} ≤ (ε/2) · ∑_{k=0}^{n} \binom{n}{k} x^k (1 − x)^{n−k} = ε/2.    (13.44)
To approximate the sum ∑_{k∈J} \binom{n}{k} x^k (1 − x)^{n−k} we will need the following identities:
∑_{k=0}^{n} (k/n) \binom{n}{k} x^k (1 − x)^{n−k} = x    (13.45)
and
∑_{k=0}^{n} (k²/n²) \binom{n}{k} x^k (1 − x)^{n−k} = x² + (x − x²)/n    (13.46)
for all x ∈ R and n = 1, 2, . . .. Both identities follow easily from the binomial theorem (see Exercises 13.11 and 13.12). For the bound, we actually want the identity
∑_{k=0}^{n} (k/n − x)² \binom{n}{k} x^k (1 − x)^{n−k} = (x − x²)/n,    (13.47)
which we get by taking the difference of (13.46) and 2x times (13.45), and then adding it to the equality ∑_{k=0}^{n} x² · \binom{n}{k} x^k (1 − x)^{n−k} = x². If k ∈ J, then |x − (k/n)| ≥ δ, and so
∑_{k∈J} \binom{n}{k} x^k (1 − x)^{n−k} ≤ (1/δ²) · ∑_{k∈J} (k/n − x)² \binom{n}{k} x^k (1 − x)^{n−k} ≤ (1/δ²) · ∑_{k=0}^{n} (k/n − x)² \binom{n}{k} x^k (1 − x)^{n−k} = (x − x²)/(nδ²) ≤ 1/(4nδ²).    (13.48)
Since |f(x)| ≤ M for all x,
∑_{k∈J} | f(x) − f(k/n) | · \binom{n}{k} x^k (1 − x)^{n−k} < M/(2nδ²).    (13.49)
Thus by (13.43), using inequalities (13.44) and (13.49), we get that
|f(x) − Bn(x; f)| < ε/2 + M/(2nδ²).
Since x ∈ [0, 1] was arbitrary, for n > M/(εδ²) we have |f(x) − Bn(x; f)| < (ε/2) + (ε/2) = ε for all x ∈ [0, 1], which concludes the proof of the theorem. □
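The uniform convergence just proved is easy to observe numerically. The following Python sketch (the test function f(x) = |x − 1/2| and the grid are our own choices) computes Bn(x; f) directly from the definition and prints the maximal error on the grid:

# Sketch: Bernstein polynomials B_n(x; f) = sum_k f(k/n) C(n,k) x^k (1-x)^(n-k)
# for f(x) = |x - 1/2|; the maximal grid error tends to 0 as n grows.
from math import comb

def bernstein(f, n, x):
    return sum(f(k / n) * comb(n, k) * x**k * (1 - x)**(n - k)
               for k in range(n + 1))

f = lambda x: abs(x - 0.5)
for n in (10, 100, 1000):
    err = max(abs(f(x) - bernstein(f, n, x))
              for x in [i / 200 for i in range(201)])
    print(n, err)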
13.8 Second Appendix: On the Definition of Trigonometric Functions Again
The definitions of the trigonometric functions (Definition 11.20) are mostly based on geometry. These definitions directly use the concept of an angle and indirectly that of arc length, and to establish their important properties, we needed to introduce rotations and their properties. At the same time, the trigonometric functions are among the most basic functions in analysis, so the need might arise to separate their definitions from the geometric ideas. Since we defined arc length precisely and proved its important properties, for our purpose only rotations and their role cause trouble. The addition formulas followed from properties of rotations, and the differentiability of the functions sin x and cos x used the addition formulas. Thus our theory of the trigonometric functions is not complete without a prior background in geometry. We outline a construction below that avoids this shortcoming.
We will use the notation of Remark 11.21. Let c(u) = √(1 − u²), and let S(u) = s(c; [u, 1]) for all u ∈ [−1, 1]. (Then S(u) is the length of the arc of the unit circle K that connects the points (u, c(u)) and (1, 0).) We know that the function S is strictly monotone decreasing on [−1, 1], and by Remark 11.21, the function cos x on the interval [0, π] is none other than the inverse of S. We also saw that if we measure an arc of length x + π onto the unit circle, then we arrive at a point antipodal to (cos x, sin x), so its coordinates are (− cos x, − sin x). Thus it is clear that the following definition is equivalent to Definition 11.20.
Definition 13.48. (i) Let c(u) = √(1 − u²) and S(u) = s(c; [u, 1]) for all u ∈ [−1, 1]. The inverse of the function S, which is defined on the interval [0, π], is denoted by cos x. We extend the function cos x to the whole real line in such a way that cos(x + π) = − cos x holds for all real x.
(ii) We define the function sin x on the interval [0, π] by the equation sin x = √(1 − cos² x). We extend the function sin x to the whole real line in such a way that sin(x + π) = − sin x holds for all real x.
The definition above uses the fact that if for a function f : [0, a] → R we have f(a) = −f(0), then f can be extended to R uniquely so that f(x + a) = −f(x) holds for all x. It is easy to check that
f(x + ka) = (−1)^k · f(x)    (x ∈ [0, a], k ∈ Z)
gives such an extension, and that this is the only possible extension.
Now, with the help of the above definition, we can again deduce identities (11.25)–(11.32), (11.38), and (11.39) as we did in Chapter 10. Although the argument we used there relied on the concept of reflection, it is easy to exclude reflections from the proofs.
The proof of inequality (11.42) is based on geometric facts at several points, including Lemma 10.82 (in which we used properties of convex n-gons) and the concept of similar triangles. We can prove Theorem 11.26 without these tools, using the following replacement.
Theorem 13.49. If x ≠ 0, then
1 − |x| ≤ (sin x)/x ≤ 1.    (13.50)
Proof. Since the function (sin x)/x is even, it suffices to consider the case x > 0. The inequality (sin x)/x ≤ 1 is clear by (11.38).
The inequality 1 − x ≤ (sin x)/x is evident for x ≥ π/2, since if π/2 ≤ x ≤ π, then (sin x)/x ≥ 0 ≥ 1 − x, and if x ≥ π, then (sin x)/x ≥ (−1)/π ≥ −1 > 1 − x.
Finally, suppose that 0 < x ≤ π/2. Let cos x = u and sin x = v. Then—again by the definition of cos x and sin x—the arc length of the graph of the function c(t) = √(1 − t²) over [u, 1] is x. Since the function c is monotone decreasing on the interval [u, 1], by Theorem 10.79,
x ≤ (1 − u) + c(u) − c(1) = (1 − u) + (v − 0) = (1 − cos x) + sin x.
Moreover, by (11.39), we have 1 − cos x ≤ x², so x ≤ x² + sin x, that is, the first inequality of (13.50) holds in this case too. □
The function c(x) is concave on [−1, 1] and differentiable on (−1, 1), where its derivative is −x/√(1 − x²). Since
S(u) = s(c; [u, 1]) = π − s(c; [−1, u])
for all u ∈ [−1, 1], by Theorem 13.41 it follows that S is differentiable on (−1, 1), and its derivative there is
−√(1 + (−x/√(1 − x²))²) = −1/√(1 − x²).
By the differentiation rule for inverse functions, it follows that the function cos x is differentiable on (0, π), and its derivative there is
1/(−1/√(1 − cos² x)) = −√(1 − cos² x) = − sin x.
By identities (11.25), this holds for all points x ≠ kπ. Since cos x and sin x are both continuous, the equality (cos x)′ = − sin x holds at these points as well (see Exercise 12.75). Using this, it follows from identities (11.32) that (sin x)′ = cos x for all x.
Finally, we prove the addition formulas. Let a ∈ R be arbitrary. The function
A(x) = [sin(a + x) − sin a cos x − cos a sin x]² + [cos(a + x) − cos a cos x + sin a sin x]²
is everywhere differentiable, and its derivative is
2 · [sin(a + x) − sin a cos x − cos a sin x] · [cos(a + x) + sin a sin x − cos a cos x] +
2 · [cos(a + x) − cos a cos x + sin a sin x] · [− sin(a + x) + cos a sin x + sin a cos x] = 0.
Thus the function A is constant. Since A(0) = 0, we have A(x) = 0 for all x. This is
possible only if for all x and a,
sin(a + x) = sin a cos x + cos a sin x
and
cos(a + x) = cos a cos x − sin a sin x,
which is exactly what we wanted to show.
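Definition 13.48 can even be carried out numerically: approximate S(u) by inscribed polygons and invert it by bisection. The following Python sketch (the step counts and tolerance are arbitrary choices of our own) recovers cos x this way and compares it with the library value:

# Sketch: S(u) = arc length of c(t) = sqrt(1 - t^2) over [u, 1], approximated
# by an inscribed polygon; cos x is obtained by solving S(u) = x for u.
import math

def S(u, n=10000):
    ts = [u + (1 - u) * i / n for i in range(n + 1)]
    pts = [(t, math.sqrt(max(0.0, 1 - t * t))) for t in ts]
    return sum(math.dist(p, q) for p, q in zip(pts, pts[1:]))

def cos_via_arclength(x, tol=1e-6):
    lo, hi = -1.0, 1.0            # S is strictly decreasing on [-1, 1]
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if S(mid) > x:            # arc still too long: move u to the right
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

for x in (0.5, 1.0, 2.0, 3.0):
    print(x, cos_via_arclength(x), math.cos(x))   # values agree closely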
Chapter 14
The Definite Integral
In the previous chapter of our book, we became acquainted with the concept of the
indefinite integral: the collection of primitive functions of f was called the indefinite
integral of f . Now we introduce a very different kind of concept that we also call
integrals—definite integrals, to be precise. This concept, in contrast to that of the
indefinite integral, assigns numbers to functions (and not a family of functions). In
the next chapter, we will see that as the name integral that they share indicates, there
is a strong connection between the two concepts of integrals.
14.1 Problems Leading to the Definition of the Definite Integral
The concept of the definite integral—much like that of differential quotients
—arose as a generalization of ideas in mathematics, physics, and other branches
of science. We give three examples of this.
Calculating the Area Under the Graph of a Function. Let f be a nonnegative
bounded function on the interval [a, b]. We would like to find the area A of the
region S f = {(x, y) : x ∈ [a, b], 0 ≤ y ≤ f (x)} under the graph of f . As we saw in
Example 13.23, the area can be easily computed assuming that f is nonnegative,
monotone increasing, continuous, and that we know a primitive function of f . If,
however, those conditions do not hold, then we need to resort to a different method.
Let us return to the argument Archimedes used (page 3) when he computed the
area underneath the graph of x2 over the interval [0, 1] by partitioning the interval
with base points xi = i/n and then bounding the area from above and below with
approximate areas (see Figure 14.1). Similar processes are often successful. The
computation is sometimes easier if we do not use a uniform partitioning.
Example 14.1. Let 0 < a < b and f(x) = 1/x. In this case, a uniform partitioning will not help us as much as using the base points xi = a · q^i (i = 0, . . . , n), where q = (b/a)^{1/n}.
Fig. 14.1
The function is monotone decreasing, so the area over the interval [xi−1, xi] is at least
(1/xi) · (xi − xi−1) = 1 − xi−1/xi = 1 − q⁻¹    (14.1)
and at most
(1/xi−1) · (xi − xi−1) = xi/xi−1 − 1 = q − 1.    (14.2)
Thus we have the bounds
n(1 − (a/b)^{1/n}) = n(1 − q⁻¹) ≤ A ≤ n(q − 1) = n((b/a)^{1/n} − 1)
for the area A. Since
lim_{n→∞} n((b/a)^{1/n} − 1) = lim_{n→∞} ((b/a)^{1/n} − 1)/(1/n) = ((b/a)^x)′ |_{x=0} = log(b/a)
and similarly
lim_{n→∞} n(1 − (a/b)^{1/n}) = −lim_{n→∞} ((a/b)^{1/n} − 1)/(1/n) = −((a/b)^x)′ |_{x=0} = −log(a/b) = log(b/a),
we have A = log(b/a).
We could have recovered the same result using the method in Example 13.23. Even though the function 1/x is monotone decreasing and not increasing, from the point of view of this method, that is of no significance. Moreover, since log x is a primitive function of 1/x, the area that we seek is A = log b − log a = log(b/a).
Let us note, however, that the method we used above—bounding the area from above and below using a partition of the interval—did not use the fact that 1/x has a primitive function, so we can expect it to work in many more cases than the method seen in Example 13.23.
Fig. 14.2
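The estimates (14.1) and (14.2) are easy to evaluate numerically. The following Python sketch (with a = 1, b = 5 as an arbitrary choice of ours) prints the lower bound n(1 − q⁻¹), the upper bound n(q − 1), and log(b/a):

# Sketch: bounds for the area under 1/x obtained from the geometric partition
# x_i = a*q^i; both bounds converge to log(b/a) as n grows.
import math

a, b = 1.0, 5.0
for n in (10, 100, 1000, 10000):
    q = (b / a) ** (1 / n)
    print(n, n * (1 - 1 / q), n * (q - 1), math.log(b / a))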
We argue as follows for the general case. Let a = x0 < x1 < · · · < xn = b be an arbitrary partition of the interval [a, b]. Let
mi = inf{f(x) : xi−1 ≤ x ≤ xi}  and  Mi = sup{f(x) : xi−1 ≤ x ≤ xi}.
If Si denotes the region under the graph over the interval [xi−1, xi], and Ai the area of Si, then mi(xi − xi−1) ≤ Ai ≤ Mi(xi − xi−1), since mi(xi − xi−1) is the area of the tallest rectangle that can be inscribed into Si, while Mi(xi − xi−1) is the area of the shortest rectangle that covers Si. Thus
∑_{i=1}^{n} mi(xi − xi−1) ≤ A ≤ ∑_{i=1}^{n} Mi(xi − xi−1).
(See Figure 14.2.) Here the left-hand side is the sum of the areas of the inscribed rectangles, while the right-hand side is the sum of the areas of the circumscribed rectangles. For an arbitrary partition of the interval [a, b], we get a lower bound and an upper bound for the area A that we are seeking. If we are lucky (as in the example above), then only one number will satisfy these inequalities, and that will be the value of the area.
The Definition and Computation of Work. Suppose that a point moves along a
line with constant speed. If a force acts on the point in the same direction as its
motion with constant absolute value P, then the work done by that force is P · s,
where s is the length of the path traversed.
The question is, how can we generalize the definition and computation of work
for the more general case in which the magnitude of the force varies? For the sake
of simplicity, we will consider only the case in which the point is moving along a
straight line and the force is in the direction of motion. We suppose that work has
the following properties:
1. Work is additive: if the point moves from a = x0 to x1 , then from x1 to x2 , and
so on until it moves from xn−1 to xn = b, then the work done while moving from
a to b is equal to the sum of the work done on each segment [xi−1 , xi ].
2. Work is a monotone function of force: if the point moving in [a, b] has a force
P(x) acting on it, and another time has P∗ (x) acting on it over the same interval,
and moreover, P(x) ≤ P∗ (x) for all x ∈ [a, b], then L ≤ L∗ , where L and L∗ denote
the work done over [a, b] by P and P∗ respectively.
With the help of these reasonable assumptions, we can give lower and upper
bounds for the work done on a moving point by a force of magnitude P(x) at x. Let
a = x0 < x1 < · · · < xn = b be an arbitrary partition of the interval [a, b]. Let
mi = inf{P(x) : xi−1 ≤ x ≤ xi}  and  Mi = sup{P(x) : xi−1 ≤ x ≤ xi}.
If Li denotes the work done on [xi−1, xi] by P, then by the monotonicity of work, we have mi(xi − xi−1) ≤ Li ≤ Mi(xi − xi−1). Thus the work done on the whole interval [a, b], denoted by L, must satisfy the inequality
∑_{i=1}^{n} mi(xi − xi−1) ≤ L ≤ ∑_{i=1}^{n} Mi(xi − xi−1)
by the additivity of work. For an arbitrary partition of the interval [a, b] we get lower
and upper bounds for the work L we are looking for. If we are lucky, only one
number satisfies these inequalities, and that will be the value of the work done.
If, for example, 0 < a < b and at x a force of magnitude 1/x acts on the point,
then by the computations done in Example 14.1, we know that the work done by the
force is equal to log(b/a).
Since the method for finding the amount of work uses the same computation as
finding the area underneath the graph of a function, we can conclude that the magnitude of the work done by a force P agrees with the area underneath the graph of P.
We might, in fact, already know the area beneath the graph, and so the magnitude of
work follows. Consider a spring that when stretched to length x exerts a force c · x.
If we stretch the spring from length a to b, then by what was said above, the work
needed for this is the same as the area underneath the graph of c · x over the interval
[a, b]. This region is a trapezoid with bases ca and cb, and height b − a. Thus the
amount of work done is ((ca + cb)/2) · (b − a) = c(b² − a²)/2.
Determining Force Due to Pressure. Consider a rectangular container filled with
liquid. How much force is the liquid exerting on the sides of the container? To
answer this question, let us use the fact from physics that pressure at every point
(that is, the force acting on a unit surface containing that point) of the liquid is
independent of the direction. If the pressure were equal throughout the fluid, then
it would immediately follow that the force exerted on a side with area A would
be equal to A times the constant value of the pressure. Pressure, however, is not
constant, but increases with depth. We can overcome this difficulty just as we did
in computing work. We suppose that the force due to pressure has the following
properties:
1. Pressure depends only on depth.
2. The force due to pressure is additive: if we partition the surface, then the force
acting on the whole surface is equal to the sum of the forces acting on the pieces.
3. Force due to pressure is a monotone function of pressure: if we increase pressure
on a surface at every point, then the force due to pressure will also increase.
Using these three properties, we can get upper and lower bounds for the force
with which the liquid is pushing at the sides of the container. Let the height of the
container be b, and let p(x) denote the pressure inside the liquid at depth x. Consider
a side whose horizontal length is c.
Let 0 = x0 < x1 < · · · < xn = b be a partition of the interval [0, b], and let
mi = inf{p(x) : xi−1 ≤ x ≤ xi}  and  Mi = sup{p(x) : xi−1 ≤ x ≤ xi}.
If Fi denotes the magnitude of the force the liquid exerts on the side of the
container between depths xi−1 and xi , then by the monotonicity of such force,
mi · c · (xi − xi−1 ) ≤ Fi ≤ Mi · c · (xi − xi−1 ). Thus the total value of the force due to
pressure must satisfy the inequalities
c · ∑_{i=1}^{n} mi(xi − xi−1) ≤ F ≤ c · ∑_{i=1}^{n} Mi(xi − xi−1)
due to its additivity. For an arbitrary partition of the interval [0, b], we get lower and upper bounds for the force F due to pressure. If we are lucky, only one number satisfies these inequalities, and that will be the value of the force.
It is clear that the magnitude of the force due to pressure is the same as c times
the area underneath the graph of p. In the simplest case, assuming that the liquid
is homogeneous, pressure is proportional to depth, that is, p(x) = ρ x with some
constant ρ > 0. The magnitude of this force is thus the area underneath the graph of
ρ x times c. This region is a right triangle whose sides have lengths b and ρ b. Thus
the force we seek has magnitude c · (b · ρb/2) = cρb²/2.
14.2 The Definition of the Definite Integral
The above examples motivate the following definition of the definite integral (now not only for nonnegative functions). We introduce some notation. We call a partition of the interval [a, b] a sequence F = (x0, . . . , xn) such that a = x0 < · · · < xn = b. We call the points xi the base points of the partition.
Let f : [a, b] → R be a bounded function, and let
mi = inf{f(x) : xi−1 ≤ x ≤ xi}  and  Mi = sup{f(x) : xi−1 ≤ x ≤ xi}
for all i = 1, . . . , n (Figure 14.3).
Fig. 14.3
The sums
sF(f) = ∑_{i=1}^{n} mi(xi − xi−1)  and  SF(f) = ∑_{i=1}^{n} Mi(xi − xi−1)
are called the lower and upper sums of the function f with partition F. If the function f is fixed and it is clear which function the sums correspond to, sometimes the shorter notation sF and SF is used instead of sF(f) and SF(f).
the shorter notation sF and SF is used instead of sF ( f ) and SF ( f ).
As the examples have hinted, the important cases (or functions f ) for us are those
in which only one number lies between every lower and every upper sum. We will
call a function integrable if this condition holds. However, before we turn to the
formal definition, let us inspect whether there always exists a number (one or more)
that falls between every lower and upper sum. We show that for every bounded
function f , there exists such a number.
Definition 14.2. We say that a partition F ′ is a refinement of the partition F if every
base point of F is a base point of F ′ .
Lemma 14.3. Let f be a bounded function on the interval [a, b], and let F ′ be a
refinement of the partition F. Then sF ≤ sF ′ and SF ≥ SF ′ . That is, in a refinement
of a partition, the lower sum cannot decrease, and the upper sum cannot increase.
Proof. Consider first the simplest case, when F′ can be obtained by adding one new base point to F (Figure 14.4). Let the base points of F be a = x0 < · · · < xn = b, and let xk−1 < x′ < xk. If
mk′ = inf{f(x) : xk−1 ≤ x ≤ x′}  and  mk′′ = inf{f(x) : x′ ≤ x ≤ xk},
then clearly mk′ ≥ mk and mk′′ ≥ mk, since the infimum of a set cannot be larger than the infimum of a subset of that set.
Fig. 14.4
Since the intervals [xi−1, xi] not containing x′ add the same amount to sF and sF′, we have
sF′ − sF = mk′(x′ − xk−1) + mk′′(xk − x′) − mk(xk − xk−1) ≥ mk(x′ − xk−1) + mk(xk − x′) − mk(xk − xk−1) = 0.    (14.3)
Thus by adding an extra base point, the lower sum cannot decrease.
This implies that our statement holds for adding several new base points, since if
we add them one at a time, the lower sum increases or stays the same at every step,
so the last sum sF ′ is at least as big as sF .
The statement regarding the upper sum can be proved in the same way.
□
Lemma 14.4. If F1 and F2 are two arbitrary partitions of the interval [a, b], then sF1 ≤ SF2.
That is, for a given function, the lower sum for a partition is less than or equal to
the upper sum of any (other) partition.
Proof. Let F be the union of the partitions F1 and F2 , that is, let the base points of
F be all those points that are base points of F1 or F2 . Then F is a refinement of both
F1 and F2 . Thus—considering that sF ≤ SF (since mi ≤ Mi for all i)—Lemma 14.3
gives
sF1 ≤ sF ≤ SF ≤ SF2 .
□
Let F denote the set of all partitions of the interval [a, b]. By the lemma above,
for every partition F2 ∈ F , the upper sum SF2 is an upper bound for the set {sF :
F ∈ F }. Thus the least upper bound of this set, that is, the value supF∈F sF , is less
than or equal to SF2 for every F2 ∈ F . In other words, supF∈F sF is a lower bound
of the set {SF : F ∈ F }, and thus
supF∈F sF ≤ infF∈F SF.    (14.4)
Moreover, it is clear that for a real number I, the inequality sF ≤ I ≤ SF holds for
every partition F if and only if
supF∈F sF ≤ I ≤ infF∈F SF.    (14.5)
Thus we have shown that for every bounded function f , there exists a number that
falls between every lower and every upper sum. It is also clear that there exists only
one such number if and only if supF∈F sF = infF∈F SF . We accept this condition as
a definition.
Definition 14.5. Let f : [a, b] → R be a bounded function. We call f Riemann integrable on the interval [a, b] (or just integrable, for short) if supF∈F sF = infF∈F SF. We then call the number supF∈F sF = infF∈F SF the definite integral of f over the interval [a, b], and denote it by ∫_a^b f(x) dx.
It will be useful to have notation for the values supF∈F sF and infF∈F SF.
Definition 14.6. Let f : [a, b] → R be a bounded function. We call the value supF∈F sF the lower integral of f, and denote it by ∫̲_a^b f(x) dx. We call the value infF∈F SF the upper integral of f, and denote it by ∫̄_a^b f(x) dx.
With these new definitions, (14.4) and (14.5) can be summarized as follows.
Theorem 14.7.
(i) For an arbitrary bounded function f : [a, b] → R, we have ∫̲_a^b f(x) dx ≤ ∫̄_a^b f(x) dx.
(ii) A real number I satisfies the inequalities sF ≤ I ≤ SF for every partition F if and only if ∫̲_a^b f(x) dx ≤ I ≤ ∫̄_a^b f(x) dx.
(iii) f is integrable if and only if ∫̲_a^b f(x) dx = ∫̄_a^b f(x) dx, and then ∫_a^b f(x) dx = ∫̲_a^b f(x) dx = ∫̄_a^b f(x) dx.
Examples 14.8. 1. If f ≡ c is constant on the interval [a, b], then f is integrable on [a, b], and
∫_a^b f dx = c(b − a).    (14.6)
Clearly, for every partition, we have mi = Mi = c for all i, and so
sF = SF = c · ∑_{i=1}^{n} (xi − xi−1) = c(b − a),
which makes (14.6) clear.
2. We show that the function f(x) = x² is integrable on [0, 1], and its integral is 1/3. Let Fn denote the uniform partition of [0, 1] into n equal intervals. Then
sFn = (1/n) · (0² + (1/n)² + · · · + ((n − 1)/n)²) = ((n − 1) · n · (2n − 1))/(6n³) = (1/3) · (1 − 1/n)(1 − 1/(2n)),
and similarly,
SFn = (1/n) · ((1/n)² + (2/n)² + · · · + (n/n)²) = (1/3) · (1 + 1/n)(1 + 1/(2n)).
(See Figure 14.1.) Since limn→∞ sFn = 1/3, we have supF∈F sF ≥ 1/3. On the other hand, limn→∞ SFn = 1/3, so infF∈F SF ≤ 1/3. Considering that supF∈F sF ≤ infF∈F SF, necessarily supF∈F sF = infF∈F SF = 1/3, which means that x² is integrable on [0, 1], and ∫_0^1 x² dx = 1/3.
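A short Python sketch of the computation just carried out (the code is illustrative, not part of the text):

# Sketch: lower and upper sums of f(x) = x^2 on [0, 1] for the uniform
# partition into n parts; both tend to 1/3 as n grows.
def darboux_sums(n):
    # f is increasing, so on [x_{i-1}, x_i] the inf is f(x_{i-1}), the sup f(x_i)
    s = sum(((i - 1) / n) ** 2 for i in range(1, n + 1)) / n
    S = sum((i / n) ** 2 for i in range(1, n + 1)) / n
    return s, S

for n in (10, 100, 1000):
    print(n, darboux_sums(n))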
We will soon see that most functions occurring in applications (in particular, every continuous or monotone function) are integrable. It is important to remind ourselves, however, that there are very simple bounded functions that are not integrable.
Example 14.9. Let f be the Dirichlet function:
f(x) = { 1, if x is rational;  0, if x is irrational. }
Let a < b be arbitrary. Since both the set of rational numbers and the set of irrational numbers are everywhere dense in R (Theorems 3.2 and 3.12), for every partition F : a = x0 < x1 < · · · < xn = b of the interval [a, b] and all i = 1, . . . , n, we have mi = 0 and Mi = 1. It follows from this that
sF = ∑_{i=1}^{n} 0 · (xi − xi−1) = 0  and  SF = ∑_{i=1}^{n} 1 · (xi − xi−1) = b − a
for every partition. Thus ∫̲_a^b f(x) dx = 0, while ∫̄_a^b f(x) dx = b − a, so the Dirichlet function is not integrable in any interval.
Remarks 14.10. 1. It is important to note that the mi and Mi appearing in the lower
and upper sums are the infimum and supremum of the set { f (x) : xi−1 ≤ x ≤ xi },
and not its minimum and maximum. We know that a function does not always have
a largest or smallest value (even if it is bounded); one of the simplest examples is
given by the fractional part function (Example 9.7.8), which does not have a largest
value on the interval [0, 1]. Every bounded and nonempty set does, however, have
an infimum and supremum, and this makes it possible to define the lower and upper
sums, and through these, the upper and lower integrals for every bounded function.
Clearly, if we know that f has a smallest and greatest value on the interval [xi−1 , xi ]
(if, for example, f is continuous there), then mi and Mi agree with the minimum and
maximum of the set { f (x) : xi−1 ≤ x ≤ xi }.
2. We also emphasize the point that we have defined the concepts of upper and lower
sums, upper and lower integrals, and the integral itself only for bounded functions.
For an arbitrary function f : [a, b] → R and partition F, the value mi = inf{ f (x) :
xi−1 ≤ x ≤ xi } is finite or −∞ depending on whether the function f is bounded
from below on the interval [xi−1 , xi ]. If f is not bounded from below on [a, b], then
mi = −∞ for at least one i, and so the lower sum sF could only be defined to be
sF = −∞. Thus—if we considered unbounded functions in the first place—integrals
of functions not bounded from below could only be −∞. A similar statement holds
for functions not bounded from above. Thus it is reasonable to study only integrals
of bounded functions (for now).
We mention that there are more general forms of integration (for example the improper integral and the Lebesgue integral) that allow us to integrate some unbounded
functions as well (see Chapter 19.)
3. It is clear that the integrability of a function f over the interval [a, b] depends
on the relationship of the sets {sF : F ∈ F } and {SF : F ∈ F } to each other. By
Theorem 14.7, these sets can be related in two possible ways:
(a) sup{sF : F ∈ F } = inf{SF : F ∈ F }.
It is in this case that we call f integrable over [a, b].
(b) sup{sF : F ∈ F } < inf{SF : F ∈ F }.
This is the case in which f is not integrable over [a, b]. (By Theorem 14.7, the
case sup{sF : F ∈ F } > inf{SF : F ∈ F } is impossible.)
4. If the function f is nonnegative, then—as we have already seen—the upper and
lower sums correspond to the sum of the areas of the outer and inner rectangles
corresponding to the set B f = {(x, y) : a ≤ x ≤ b, 0 ≤ y ≤ f (x)} (see Figure 14.2).
We also saw that if only one number falls between every upper and lower sum, then
the area of the set Bf must be this number. In other words, if f ≥ 0 is integrable on [a, b], then the area of the set Bf is ∫_a^b f(x) dx.
Fig. 14.5
If in [a, b] we have f (x) ≤ 0 and B f = {(x, y) : a ≤ x ≤ b, f (x) ≤ y ≤ 0}, then
mi (xi − xi−1 ) is −1 times the area of the rectangle of base [xi−1 , xi ] and height |mi |.
Thus in this case, |sF | = −sF gives the total area of a collection of rectangles that
contain B f ; and similarly, |SF | = −SF will be the total area of rectangles contained
in Bf. Thus we can say that if in [a, b] we have f(x) ≤ 0, then ∫_a^b f(x) dx is −1 times the area of the region Bf (Figure 14.5).
5. The question might arise that if f ≥ 0 is not integrable over [a, b], then how
can we compute the area of the set B f = {(x, y) : a ≤ x ≤ b, 0 ≤ y ≤ f (x)}? The
question stems from something deeper: can we compute the area of any set? In fact,
does every set have an area? In talking about sets previously (concretely, the set
B f = {(x, y) : a ≤ x ≤ b, 0 ≤ y ≤ f (x)}), we simply assumed that these sets have a
well-defined computable area. This might be true for polygons, but does it still hold
for every set? And if it is not true, then how can we say that the area of the set B f is
∫_a^b f(x) dx?
No matter what we mean by the area of a set, we can agree that
(a) the area of a rectangle with sides parallel to the axes is the product of the lengths
of its sides, that is, the area of [a, b] × [c, d] is (b − a) · (d − c);
(b) if we break a set up into the union of finitely many nonoverlapping
rectangles, then the area of that set is the sum of the areas of the rectangles;
and
(c) if A ⊂ B, then the area of A cannot be larger than the area of B.
When we concluded that the area of the set Bf is ∫_a^b f(x) dx, we used only these properties of area. Thus, if we want to be precise, then we can say only that if the set Bf has area, then it must be equal to ∫_a^b f(x) dx. We will deal with the concept of area and its computation in more detail in Chapter 16. We will show that among planar sets, we can easily and naturally identify which sets we can give an area to, and that these areas are well defined (assuming properties (a), (b), and (c)). As for the sets Bf above, we will show that Bf has area if and only if f is integrable on [a, b], and—as we already know—the area of Bf is ∫_a^b f(x) dx.
Exercises
14.1. Determine the upper and lower integrals of the following functions defined on the interval [0, 1]. Decide whether they are integrable there, and if they are, find their integrals:
(a) f(x) = x;
(b) f(x) = 0 (0 ≤ x < 1/2), f(x) = 1 (1/2 ≤ x ≤ 1);
(c) f(0) = 1, f(x) = 0 (0 < x ≤ 1);
(d) f(1/n) = 1/n for all n ∈ N+ and f(x) = 0 otherwise;
(e) f(1/n) = 1 for all n ∈ N+ and f(x) = 0 otherwise;
(f) f(x) = x if x ∈ [0, 1] ∩ Q and f(x) = 0 if x ∈ [0, 1] \ Q.
14.2. Let f be integrable on [−a, a]. Prove that
(a) if f is an even function, then ∫_{−a}^{a} f(x) dx = 2 ∫_0^a f dx, and
(b) if f is an odd function, then ∫_{−a}^{a} f dx = 0.
14.3. Prove that if f and g are integrable on [a, b] and they agree on [a, b] ∩ Q, then ∫_a^b f dx = ∫_a^b g dx. (H)
14.4. Let f be integrable on [a, b]. Is it true that if g : [a, b] → R is bounded and
f (x) = g(x) for all x ∈ [a, b] ∩ Q, then g is also integrable on [a, b]?
14.5. Let f : [a, b] → R be a function such that for every ε > 0, there exist integrable functions g, h : [a, b] → R such that g ≤ f ≤ h and ∫_a^b h(x) dx − ∫_a^b g(x) dx < ε are satisfied. Prove that f is integrable.
14.6. Let f : [a, b] → R be bounded. Prove that the set of upper sums of f is an
interval. (∗ H S)
14.3 Necessary and Sufficient Conditions for Integrability
Unlike differentiability, integrability can be characterized in many different ways, and choosing which equivalent definition to use always depends on the problem we are facing. We will now focus on necessary and sufficient conditions that more or less follow from the definition. We could have accepted any one of these necessary and sufficient conditions as the definition of the definite integral.
To shorten the formulas, we will sometimes write ∫ f dx instead of ∫ f(x) dx.
Theorem 14.11. A bounded function f : [a, b] → R is integrable with integral I if
and only if for arbitrary ε > 0, there exists a partition F such that
I − ε < sF ≤ SF < I + ε .
(14.7)
Proof. Suppose that f is integrable on [a, b], and its integral is I. Let ε > 0 be given. Since I = ∫_a^b f dx = supF∈F sF, there exists a partition F1 such that sF1 > I − ε. Similarly, by I = ∫_a^b f dx = infF∈F SF, we can find a partition F2 such that SF2 < I + ε.
Let F be the union of partitions F1 and F2. By Lemma 14.3,
I − ε < sF1 ≤ sF ≤ SF ≤ SF2 < I + ε,
so (14.7) holds.
Now suppose that for all ε > 0, there exists a partition F for which (14.7) holds. Since
sF ≤ ∫̲_a^b f dx ≤ ∫̄_a^b f dx ≤ SF    (14.8)
for all partitions F, if we choose F such that it satisfies (14.7), then we get that
| ∫̲_a^b f dx − I | < ε  and  | ∫̄_a^b f dx − I | < ε.
Since this holds for all ε > 0,
∫̲_a^b f dx = ∫̄_a^b f dx = I,
so f is integrable on [a, b] with integral I. □
Theorem 14.12. A bounded function f : [a, b] → R is integrable if and only if for
every ε > 0, there exists a partition F such that SF − sF < ε .
Proof. The “only if” part of the statement is clear from the previous theorem. If the function is not integrable, then ∫̲_a^b f dx < ∫̄_a^b f dx, and so by (14.8),
SF − sF ≥ ∫̄_a^b f dx − ∫̲_a^b f dx > 0
for every partition F. This proves the “if” part of the statement. □
By the previous theorem, the integrability of a function depends on whether the value
SF − sF = ∑_{i=1}^{n} (Mi − mi)(xi − xi−1)
can be made arbitrarily small. It is worth giving a name to the difference Mi − mi appearing here.
Definition 14.13. Given a bounded function f : [a, b] → R, we call the value
ω(f; [a, b]) = sup R(f) − inf R(f)
the oscillation of the function f, where R(f) denotes the image of f, that is, R(f) =
It is clear that ω ( f ; [a, b]) is the length of the shortest interval which covers the
image R( f ). It is also easy to see that
ω ( f ; [a, b]) = sup{| f (x) − f (y)| : x, y ∈ [a, b]}
(14.9)
(see Exercise 14.7).
Definition 14.14. For a bounded function f : [a, b] → R, we call the sum
ΩF(f) = ∑_{i=1}^{n} ω(f; [xi−1, xi])(xi − xi−1) = ∑_{i=1}^{n} (Mi − mi)(xi − xi−1) = SF − sF
corresponding to the partition F : a = x0 < x1 < · · · < xn = b the oscillatory sum corresponding to F. (See Figure 14.6.)
Fig. 14.6
Using this notation, we can rephrase Theorem 14.12:
Theorem 14.15. A bounded function f : [a, b] → R is integrable if and only if for
arbitrary ε > 0, there exists a partition F such that ΩF < ε .
Theorems 14.12 and 14.15 remind us of Cauchy’s convergence criterion; with
their help, we can determine the integrability of a function without having to determine the value of the integral.
Example 14.16. With the help of Theorem 14.15, we can easily see that if f is integrable over [a, b], then so is |f|. For arbitrary x, y ∈ [a, b],
| |f(x)| − |f(y)| | ≤ |f(x) − f(y)|,
so by (14.9), ω(|f|; [c, d]) ≤ ω(f; [c, d]) for every interval [c, d] ⊂ [a, b]. It follows that ΩF(|f|) ≤ ΩF(f) holds for every partition F. Thus if ΩF(f) < ε, then ΩF(|f|) < ε, and so by Theorem 14.15, |f| is also integrable.
Instead of the values mi and Mi occurring in the lower and upper sums, in many
cases we would like to use the value f (ci ) for some “inner” points xi−1 ≤ ci ≤ xi .
The sum we get in this way is called an approximating sum or Riemann sum.
Definition 14.17. The approximating, or Riemann, sums of a function f : [a, b] → R
corresponding to the partition F : a = x0 < x1 < · · · < xn = b are the sums
σF(f; (ci)) = ∑_{i=1}^{n} f(ci)(xi − xi−1)
with any choice of the points ci ∈ [xi−1 , xi ] (i = 1, . . . , n).
The value of the approximating sum σF ( f ; (ci )) thus depends not only on the
partition, but also on the points ci . However, if it will not cause confusion, we omit
the reference to f and ci in the notation.
The integrability of a function and the value of its integral can be expressed
with the help of approximating sums as well. We need to prove only the following
statement for this:
Theorem 14.18. For an arbitrary bounded function f : [a, b] → R and partition F,
inf_{(c1,...,cn)} σF = sF  and  sup_{(c1,...,cn)} σF = SF;    (14.10)
that is, the infimum and the supremum of the set of approximating sums with every choice of the ci are sF and SF, respectively.
Proof. In Theorem 3.20, we saw that if A and B are nonempty sets of numbers, then inf(A + B) = inf A + inf B and sup(A + B) = sup A + sup B. Then by induction, if A1, . . . , An are nonempty sets of numbers, then
inf(A1 + · · · + An) = inf A1 + · · · + inf An  and  sup(A1 + · · · + An) = sup A1 + · · · + sup An,    (14.11)
where A1 + · · · + An = {a1 + · · · + an : ai ∈ Ai (i = 1, . . . , n)}.
Let Ai = {f(c)(xi − xi−1) : c ∈ [xi−1, xi]} (i = 1, . . . , n). It is clear that inf Ai = mi(xi − xi−1) and sup Ai = Mi(xi − xi−1) for all i. Then (14.10) is clear by (14.11). □
Now, considering Theorems 14.11 and 14.18, we easily get the following result.
Theorem 14.19. A bounded function f : [a, b] → R is integrable with integral I if
and only if for arbitrary ε > 0, there exists a partition F such that every Riemann
sum σF satisfies |σF − I| < ε .
Exercises
14.7. Prove that if f is bounded on [a, b], then
ω ( f ; [a, b]) = sup{| f (x) − f (y)| : x, y ∈ [a, b]}.
14.8. Prove that if f is integrable over [a, b], then for all ε > 0, there exists a subinterval [c, d] ⊂ [a, b] such that ω ( f ; [c, d]) < ε .
14.9. Prove that if a function f is integrable over [a, b], then it has a point of continuity. (H)
14.10. Let f : [a, b] → R be bounded, and F ∈ F fixed. Is it true that the set of values
of approximating sums σF forms an interval? (S)
14.11. Prove that if f is integrable over [a, b], then so is the function e f .
14.12. Is it true that if f is positive and integrable over [a, b], then ∫_a^b f dx > 0? (S)
To determine the integrability of a function and the value of the integral, we
do not need to know every lower and upper sum. It generally suffices to know the
values of the lower and upper sums for a suitable sequence of partitions. Indeed,
if f : [a, b] → R is integrable with integral I, then by Theorem 14.11, for every
n = 1, 2, . . ., there exists a partition Fn such that I − (1/n) < sFn ≤ SFn < I + (1/n).
Thus the partitions Fn satisfy
lim_{n→∞} sFn = lim_{n→∞} SFn = I.   (14.12)
Conversely, if (14.12) holds for the partitions Fn , then for every ε > 0, there exists
a partition F that satisfies condition (14.7), namely F = Fn , if n is sufficiently large.
Thus by Theorem 14.11, f is integrable and has integral I.
The condition for integrability in terms of the oscillatory sum (Theorem 14.15)
is clearly equivalent to the existence of partitions Fn such that ΩFn → 0. With this,
we have the following:
Theorem 14.20.
(i) A bounded function f : [a, b] → R is integrable with integral I if and only if there
exists a sequence of partitions F1 , F2 , . . . for which (14.12) holds.
(ii) A bounded function f : [a, b] → R is integrable if and only if there exists a sequence of partitions F1 , F2 , . . . such that limn→∞ ΩFn = 0.
Examples 14.21. 1. We considered the function 1/x over the interval [a, b] in Example 14.1, where 0 < a < b. Let Fn denote the partition with base points xi = a · q^i (i = 0, . . . , n), where q = (b/a)^{1/n}. Since the function is monotone decreasing, mi = 1/xi and mi(xi − xi−1) = 1 − q^{−1}, while Mi = 1/xi−1 and Mi(xi − xi−1) = q − 1 for all i (see (14.1) and (14.2)). Then sFn = n(1 − q^{−1}), SFn = n(q − 1), and as we computed
in Example 14.1, limn→∞ sFn = limn→∞ SFn = log(b/a). Thus according to Theorem 14.20, the function 1/x is integrable on the interval [a, b], and its integral there
is log(b/a).
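The closed forms sFn = n(1 − q^{−1}) and SFn = n(q − 1) are easy to evaluate; the following sketch (ours) checks numerically that both tend to log(b/a):

```python
# Lower and upper sums of 1/x on [a, b] for the geometric partition x_i = a*q^i,
# with q = (b/a)^(1/n), using the closed forms derived above.
import math

def geometric_sums(a, b, n):
    q = (b / a) ** (1 / n)
    return n * (1 - 1 / q), n * (q - 1)       # (s_{F_n}, S_{F_n})

if __name__ == "__main__":
    for n in (10, 100, 1000):
        print(n, geometric_sums(1.0, 2.0, n), math.log(2.0))
```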
2. Now we show that for 0 < a < b and α > 0,
∫_a^b x^α dx = (b^{α+1} − a^{α+1})/(α + 1).   (14.13)
Consider the partition used in the previous example. The function is monotone increasing, so
sn = ∑_{i=0}^{n−1} xi^α · (xi+1 − xi) = ∑_{i=0}^{n−1} (aq^i)^α (aq^{i+1} − aq^i) = a^{α+1}(q − 1) ∑_{i=0}^{n−1} (q^{α+1})^i =
  = a^{α+1}(q − 1) · (q^{n(α+1)} − 1)/(q^{α+1} − 1) = (b^{α+1} − a^{α+1}) · (q − 1)/(q^{α+1} − 1).
Since lim_{n→∞} q = lim_{n→∞} (b/a)^{1/n} = 1, we have lim_{n→∞} (q − 1)/(q^{α+1} − 1) = 1/(x^{α+1})′_{x=1} = 1/(α + 1), and so
lim_{n→∞} sn = (b^{α+1} − a^{α+1})/(α + 1).
We can similarly obtain lim_{n→∞} Sn = (b^{α+1} − a^{α+1})/(α + 1), which proves (14.13).
3. We show that ∫_a^b e^x dx = e^b − e^a. Consider a uniform partition of the interval [a, b] into n equal parts with the base points xi = a + i · (b − a)/n (i = 0, . . . , n). Let sn and Sn denote lower and upper sums corresponding to this partition. Since e^x is monotone increasing in [a, b], mi = e^{xi−1} and Mi = e^{xi} for all i. Then
sn = ∑_{i=0}^{n−1} e^{a+i(b−a)/n} · (b − a)/n   and   Sn = ∑_{i=1}^{n} e^{a+i(b−a)/n} · (b − a)/n.
Summing the geometric series, we get
sn = e^a · (e^{((b−a)/n)·n} − 1)/(e^{(b−a)/n} − 1) · (b − a)/n = e^a (e^{b−a} − 1) · ((b − a)/n)/(e^{(b−a)/n} − 1).
Then by lim_{x→0} x/(e^x − 1) = 1, we have lim_{n→∞} sn = e^b − e^a. We can similarly get that lim_{n→∞} Sn = e^b − e^a, so by Theorem 14.20, ∫_a^b e^x dx = e^b − e^a.
4. We show that for 0 < b < π/2,
∫_0^b cos x dx = sin b.   (14.14)
Consider a uniform partition of the interval [0, b] into n equal parts. Let sn and Sn denote lower and upper sums corresponding to this partition. Since cos x is monotone decreasing in the interval [0, b],
sn = ∑_{i=1}^{n} (b/n) · cos(ib/n)   and   Sn = ∑_{i=0}^{n−1} (b/n) · cos(ib/n).   (14.15)
Now we use the identity
cos α + cos 2α + · · · + cos nα = (sin(nα/2)/sin(α/2)) · cos((n + 1)α/2),   (14.16)
which can be proved easily by induction. Applying this with α = b/n gives us that
sn = (b/(n sin(b/(2n)))) · sin(b/2) · cos((n + 1)b/(2n)).
Since
lim_{n→∞} (b/n)/sin(b/(2n)) = 2   and   lim_{n→∞} cos((n + 1)b/(2n)) = cos(b/2),
we have
lim_{n→∞} sn = 2 sin(b/2) cos(b/2) = sin b.
Now by (14.15), we know that Sn − sn = (b/n) · (cos 0 − cos b) → 0 as n → ∞, so lim_{n→∞} Sn = lim_{n→∞} sn = sin b. Thus Theorem 14.20 gives us (14.14).
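The uniform-partition computations in Examples 3 and 4 can also be checked directly; here is a short sketch (ours) that evaluates the lower and upper sums of a monotone function and compares them with the closed-form integrals:

```python
# Lower/upper sums of a monotone function on the uniform partition of [a, b] into n parts.
import math

def monotone_sums(f, a, b, n, increasing=True):
    xs = [a + i * (b - a) / n for i in range(n + 1)]
    lo = sum(f(xs[i - 1] if increasing else xs[i]) * (b - a) / n for i in range(1, n + 1))
    hi = sum(f(xs[i] if increasing else xs[i - 1]) * (b - a) / n for i in range(1, n + 1))
    return lo, hi

if __name__ == "__main__":
    print(monotone_sums(math.exp, 0, 1, 1000), math.e - 1)          # e^x, increasing
    print(monotone_sums(math.cos, 0, 1, 1000, False), math.sin(1))  # cos x, decreasing
```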
If f is integrable, then according to Theorem 14.11, for every ε there exists a
partition F such that sF and SF approximate the integral of f with an error of less
than ε . We now show that every partition with short enough subintervals has this
property.
Definition 14.22. Let F : a = x0 < x1 < · · · < xn = b be a partition of the interval
[a, b]. We call the value
δ(F) = max_{1≤i≤n} (xi − xi−1)
the mesh of the partition F. Sometimes, we say that a partition is finer than η if
δ (F) < η .
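In code, the mesh of a partition is a one-liner; the sketch below (ours) computes δ(F) for a partition given as a list of base points:

```python
# delta(F) = largest subinterval length of the partition F (Definition 14.22).
def mesh(partition):
    return max(b - a for a, b in zip(partition, partition[1:]))

# e.g. mesh([0, 0.1, 0.4, 1.0]) == 0.6
```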
Theorem 14.23.
(i) A bounded function f : [a, b] → R is integrable with integral I if and only if for
every ε > 0, there exists a δ > 0 such that every partition F with mesh smaller
than δ satisfies
I − ε < sF ≤ SF < I + ε .
(14.17)
(ii) A bounded function f : [a, b] → R is integrable if and only if for every ε > 0,
there exists a δ > 0 such that every partition F with mesh smaller than δ
satisfies ΩF < ε .
(iii) A bounded function f : [a, b] → R is integrable with integral I if and only if
for every ε > 0, there exists a δ > 0 such that every partition F with mesh
smaller than δ and every Riemann sum σF belonging to the partition F satisfies
|σF − I| < ε .
Proof. If the conditions for the partitions in (i) or (iii) are satisfied, then by Theorems 14.11 and 14.19, f is integrable with integral I. If the conditions hold in (ii),
then by Theorem 14.15, f is integrable. '
Now suppose that f is integrable. Let ab f dx = I, and let ε > 0 be fixed. To
prove the theorem, it suffices to show that (14.17) holds for every partition with
sufficiently small mesh. This is because if (14.17) is true, then ΩF = SF − sF <
2ε , and |σF − I| < ε for every Riemann sum corresponding to F, so (ii) and (iii)
automatically follow from (i).
We first show that sF > I − ε for every partition with sufficiently small mesh. To
prove this, let us consider how much a lower sum sF corresponding to a partition
with a mesh smaller than δ can increase when a new base point is added. If the new
partition is F ′ , then with the notation of Lemma 14.3 and (14.3), we have
sF ′ − sF = m′k (x′ − xk−1 ) + m′′k (xk − x′ ) − mk (xk − xk−1 ) ≤ 3K δ ,
where K is an upper bound of | f | on [a, b]. That is, adding a new base point to a
partition with mesh smaller than δ can increase the lower sum by at most 3K δ .
By definition (of the lower integral), [a, b] has a partition F0 : a = t0 < . . . < tk = b
such that sF0 > I − ε /2. If F : a = x0 < x1 < . . . < xn = b is an arbitrary partition and
F1 denotes the union of F0 and F, then sF1 ≥ sF0 by Lemma 14.3, and so sF1 > I −
ε /2. We get the partition F1 by adding the base points t1 , . . . ,tk−1 one after another
to F. Let the mesh of F be δ . Then every added base point increases the value of sF
by at most 3K δ , so
sF1 ≤ sF + (k − 1) · 3K δ .
Thus if δ < ε /(6K · k), then sF > sF1 − (ε /2) > I − ε .
This shows that sF > I − ε for every partition with mesh smaller than ε /(6K · k).
Let us note that k depends only on ε . We can similarly show that SF < I + ε holds
for every partition with mesh smaller than ε /(6K · k), so (14.17) itself holds for
every partition with sufficiently small mesh.
⊔
⊓
Now we show that if f is integrable, then (14.12) (and so limn→∞ ΩFn = 0) holds
in every case in which the mesh of the partitions tends to zero.
Theorem 14.24.
(i) A bounded function f : [a, b] → R is integrable with integral I if and only if
lim_{n→∞} sFn = lim_{n→∞} SFn = I
holds for every sequence of partitions F1 , F2 , . . . satisfying limn→∞ δ (Fn ) = 0.
(ii) A bounded function f : [a, b] → R is integrable if and only if ΩFn → 0 holds for
every sequence of partitions F1 , F2 , . . . satisfying limn→∞ δ (Fn ) = 0.
Proof. (i) If the condition holds, then f is integrable with integral I by statement (i)
of Theorem 14.20. Now suppose that f is integrable, and let ε > 0 be arbitrary. By
Theorem 14.23, there exists a δ0 such that every partition with mesh smaller than δ0
satisfies (14.17). If F1 , F2 , . . . is a sequence of partitions such that limn→∞ δ (Fn ) = 0,
then for every sufficiently large n, we have δ (Fn ) < δ0 , and so |sFn − I| < ε and
|SFn − I| < ε .
Statement (ii) can be proved in the same way using statement (ii) of Theorem
14.20.
⊔
⊓
Example 14.25. With the help of Theorem 14.24, we give a new proof of the equality
log 2 = 1 − 1/2 + 1/3 − · · · .   (14.18)
In Example 14.21, we saw that for 0 < a < b, we have ∫_a^b (1/x) dx = log(b/a). For the special case a = 1 and b = 2, we get ∫_1^2 (1/x) dx = log 2. Let Fn denote a uniform partition of the interval [1, 2] into n equal pieces. Since the sequence of partitions F1, F2, . . . satisfies lim_{n→∞} δ(Fn) = 0, Theorem 14.24 implies lim_{n→∞} sFn = log 2.
Now it is easy to see that the ith term of the sum defining sFn is
mi(xi − xi−1) = (1/(1 + i/n)) · (1/n) = 1/(i + n),
so sFn = ∑_{i=1}^{n} 1/(i + n). Observe that
1 − 1/2 + 1/3 − · · · + 1/(2n − 1) − 1/(2n) = (1 + 1/2 + · · · + 1/(2n)) − 2 · (1/2 + 1/4 + · · · + 1/(2n)) =
  = (1 + 1/2 + · · · + 1/(2n)) − (1 + 1/2 + · · · + 1/n) =
  = 1/(n + 1) + 1/(n + 2) + · · · + 1/(2n) = sFn.
This proves (14.18). This is now the third proof of this equality (see Exercise 12.92
and Remark 13.16).
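A quick numerical check of this argument (a sketch, ours): sFn = ∑_{i=1}^{n} 1/(i + n) coincides with the 2n-th partial sum of the alternating series, and both approach log 2.

```python
import math

def s_Fn(n):                                   # lower sum of 1/x on [1, 2], uniform partition
    return sum(1 / (i + n) for i in range(1, n + 1))

def alternating_partial(m):                    # 1 - 1/2 + 1/3 - ... for m terms
    return sum((-1) ** (k + 1) / k for k in range(1, m + 1))

if __name__ == "__main__":
    for n in (10, 100, 1000):
        print(n, s_Fn(n), alternating_partial(2 * n), math.log(2))
```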
Exercises
14.13. Prove that if f is differentiable and f′ is bounded on [0, 1], then there exists a number K such that
| (1/n) ∑_{i=1}^{n} f(i/n) − ∫_0^1 f dx | < K/n
for all n.
14.14. Let Fn be the uniform partition of the interval [a, b] into n equal pieces. Is it
true that for every bounded function f : [a, b] → R, the sequence sFn ( f ) is monotone
increasing and the sequence SFn ( f ) is monotone decreasing?
14.15. Let 0 < a < b, and let F : a = x0 < x1 < · · · < xn = b be an arbitrary partition
of the interval [a, b]. Compute the approximating sum of F with ci = √(xi−1 xi) for the function 1/x², and determine the value of the integral ∫_a^b x^{−2} dx based on that.
14.16. Prove that if 0 < b < π/2, then ∫_0^b sin x dx = 1 − cos b. (H)
14.17. Prove that if f is integrable on [0, 1], then
lim_{n→∞} (1/n) ∑_{k=1}^{n} f(k/n) = ∫_0^1 f dx.
14.18. Compute the value of
lim_{n→∞} (√1 + √2 + · · · + √n)/(n√n).
14.19. Prove that
∑_{k=1}^{n} k² ∼ n³/3.
(Here an ∼ bn denotes that lim_{n→∞} an/bn = 1; see Definition 5.28.)
14.20. Prove that for every positive α,
∑_{k=1}^{n} k^α ∼ n^{α+1}/(α + 1).
14.21. Compute the value of
lim_{n→∞} (1/n²) · ∑_{i=1}^{n} √(n² − i²).
14.22. To what value does the fraction
(cos(1/n) + cos(2/n) + · · · + cos(n/n))/n
tend as n → ∞?
14.23. Prove that if f is integrable over [0, 1], then
( f(0) − f(1/n) + f(2/n) − · · · + (−1)^n f(n/n) )/n → 0
as n → ∞.
14.24. Prove that if f is bounded on [a, b], then for every ε > 0, there exists a δ > 0
such that for every partition F with mesh smaller than δ ,
∫̲_a^b f dx − ε < sF   and   SF < ∫̄_a^b f dx + ε.
14.25. Prove that if f is integrable on [0, 1] and f(x) ≥ c > 0 there, then
lim_{n→∞} ( f(1/n) · f(2/n) · · · f(n/n) )^{1/n} = e^{∫_0^1 log f(x) dx}.
14.26. Prove that if f is integrable on [0, 1] and f(x) ≥ c > 0 there, then
( ∫_0^1 (1/ f(x)) dx )^{−1} ≤ e^{∫_0^1 log f(x) dx} ≤ ∫_0^1 f(x) dx.
14.27. Suppose that f is bounded on [0, 1] and that
lim_{n→∞} (1/n) ∑_{k=1}^{n} f(k/n) = A.
Does it follow from this that f is integrable on [0, 1] and that ∫_0^1 f(x) dx = A?
14.28. For each of the functions below and for arbitrary ε > 0, give a number δ > 0
such that if a partition of the given interval has mesh smaller than δ , then the upper
and lower sums corresponding to the partition are closer to the integral than ε . (It is
not necessary to find the largest such δ .)
(a) ex , [0, 10];
(b) cos x, [0, 2π ];
(c) sgn x, [−1, 1];
(d) f(1/n) = 1/n for all n ∈ N+ and f(x) = 0 otherwise, [0, 1];
(e) f(x) = sin(1/x) if x ≠ 0, f(0) = 0, [0, 1].
14.4 Integrability of Continuous Functions and Monotone
Functions
One can guess from Theorem 14.15 that the integrability of a function is closely
related to its continuity. One can prove that a bounded function is integrable if and
only if in a certain well-defined way, it is continuous “almost everywhere.” So for
example, every bounded function that is continuous everywhere except at countably
many points is integrable. The proof of this theorem, however, is far from easy, so
we will consider only some very important and often used special cases.
Theorem 14.26. If f is continuous on [a, b], then f is integrable on [a, b].
Proof. We will apply Theorem 14.15; but first we refer to Heine’s theorem (Theorem
10.61). According to this, if f is continuous on [a, b], then it is uniformly continuous
there, that is, for every ε > 0, there exists a δ > 0 such that for arbitrary x, y ∈ [a, b],
if |x − y| < δ , then | f (x) − f (y)| < ε .
Let ε > 0 be fixed, and choose a δ corresponding to ε by the definition of uniform continuity. Let F be a partition of the interval [a, b] in which the distance
between any two neighboring base points is less than δ (for example, the uniform partition into n equal pieces where n > (b − a)/δ ). By Weierstrass’s theorem
(Theorem 10.55), the function f has both a largest and a smallest value in each interval [xi−1 , xi ], so there are points ui , vi ∈ [xi−1 , xi ] such that f (ui ) = mi and f (vi ) = Mi .
Then |ui − vi | ≤ xi − xi−1 < δ by our choice of the partition F, and so
Mi − mi = f (vi ) − f (ui ) < ε
by our choice of δ . Then
ΩF = ∑_{i=1}^{n} (Mi − mi)(xi − xi−1) < ∑_{i=1}^{n} ε(xi − xi−1) = ε(b − a).
Since ε > 0 was arbitrary, f is integrable by Theorem 14.15.
⊔
⊓
Corollary 14.27. The elementary functions (as seen in Chapter 11) are integrable
in every interval [a, b] on which they are defined.
Theorem 14.28. If f is monotone on [a, b], then f is integrable on [a, b].
Proof. We use Theorem 14.15 again. Let f be monotone increasing on [a, b]. Then
for every partition F, we have mi = f (xi−1 ) and Mi = f (xi ), and so
ΩF = ∑_{i=1}^{n} ( f(xi) − f(xi−1))(xi − xi−1).
Let ε > 0 be fixed. If the partition is such that xi − xi−1 ≤ ε for all i = 1, . . . , n, then
ΩF ≤ ∑_{i=1}^{n} ( f(xi) − f(xi−1)) · ε = ( f(b) − f(a)) · ε.   (14.19)
Since ε was arbitrary, f is integrable by Theorem 14.15.
⊔
⊓
Remarks 14.29. 1. For a monotone increasing function with a uniform partition, the oscillatory sum
ΩF = ∑_{i=1}^{n} (Mi − mi) · (b − a)/n
can be illustrated easily: this is exactly the area of the rectangle with base (b − a)/n and height f(b) − f(a) obtained by sliding the “little rectangles” over the last interval [xn−1, xn]. That is,
ΩF = ((b − a)/n) · ( f(b) − f(a)),
and this is truly arbitrarily small if n is sufficiently large (Figure 14.7).
2. By Theorem 14.23, if a function is integrable, then for every ε > 0, there exists
a δ such that ΩF = SF − sF < ε holds for every partition F with mesh smaller than
δ . Note that for continuous and monotone functions, the proofs of Theorems 14.26
and 14.28 explicitly provide us with such a δ .
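For a monotone increasing function, such a δ can be written down explicitly from the bound (14.19): any partition with mesh at most ε/( f(b) − f(a)) has ΩF ≤ ε. The sketch below (ours; the test function is arbitrary) returns a uniform partition of this kind:

```python
# Explicit uniform partition with Omega_F <= eps for a monotone increasing f on [a, b].
import math

def uniform_partition_for(f, a, b, eps):
    jump = f(b) - f(a)
    n = max(1, math.ceil((b - a) * jump / eps))   # ensures mesh (b-a)/n <= eps/jump
    return [a + i * (b - a) / n for i in range(n + 1)]

if __name__ == "__main__":
    F = uniform_partition_for(math.atan, 0.0, 10.0, 0.01)
    print(len(F) - 1)                              # number of subintervals used
```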
14.5 Integrability and Operations
The following theorems tell us that the family of integrable functions is closed under the most frequently used operations. When multiplying a function by a constant
or summing functions, we even get the value of the new integrals. Let us denote the
set of integrable functions on the interval [a, b] by R[a, b].
Theorem 14.30. If f ∈ R[a, b] and c ∈ R, then c f ∈ R[a, b] and
∫_a^b c f dx = c · ∫_a^b f dx.
Proof. The statement follows immediately from the fact that if c ≥ 0, then for an
arbitrary partition F, we have sF (c f ) = c·sF ( f ) and SF (c f ) = c·SF ( f ), and if c < 0,
then sF (c f ) = c · SF ( f ) and SF (c f ) = c · sF ( f ).
⊔
⊓
Theorem 14.31. If f , g ∈ R[a, b], then f + g ∈ R[a, b], and
∫_a^b ( f + g) dx = ∫_a^b f dx + ∫_a^b g dx.
Proof. Let I = ∫_a^b f dx and J = ∫_a^b g dx. For arbitrary ε > 0, there exist partitions F
and G such that I − ε < sF ( f ) ≤ SF ( f ) < I + ε and J − ε < sG (g) ≤ SG (g) < J + ε .
Let H be the union of the partitions F and G. Then
I − ε < sH( f ) ≤ SH( f ) < I + ε   and   J − ε < sH(g) ≤ SH(g) < J + ε,
so
|σH( f ; (ci)) − I| < ε   and   |σH(g; (ci)) − J| < ε
for an arbitrary choice of inner points ci . Since σH (( f + g); (ci )) = σH ( f ; (ci )) +
σH (g; (ci )), it follows that |σH (( f + g); (ci )) − (I + J)| < 2ε . By Theorem 14.19, it
follows that f + g is integrable on [a, b], and its integral is I + J.
⊔
⊓
Theorem 14.32. If f ∈ R[a, b], then f 2 ∈ R[a, b]. Moreover, if | f (x)| ≥ δ > 0 for all
x ∈ [a, b], then 1/ f ∈ R[a, b].
Proof. By Theorem 14.15, it suffices to show that the oscillatory sums of the functions f 2 and 1/ f (if | f | ≥ δ > 0) can be made arbitrarily small.
Since f is integrable, it is also bounded. Let | f (x)| ≤ K for x ∈ [a, b]. For arbitrary
x, y ∈ [a, b],
| f 2 (x) − f 2 (y)| = | f (x) − f (y)| · | f (x) + f (y)| ≤ 2K · | f (x) − f (y)|,
and if | f | ≥ δ > 0, we have
|1/ f(x) − 1/ f(y)| = | f(x) − f(y)| / | f(x) f(y)| ≤ (1/δ²) · | f(x) − f(y)|.
Then by (14.9), we have ω ( f 2 ; [u, v]) ≤ 2K · ω ( f ; [u, v]), and if | f | ≥ δ > 0, then
ω (1/ f ; [u, v]) ≤ (1/δ 2 ) · ω ( f ; [u, v]) for every interval [u, v] ⊂ [a, b].
It then follows immediately that for an arbitrary partition F, ΩF( f²) ≤ 2K · ΩF( f ), and if | f | ≥ δ > 0, then ΩF(1/ f ) ≤ (1/δ²) · ΩF( f ). Since ΩF( f ) can
be made arbitrarily small, the same holds for the oscillatory sums ΩF( f²), and if | f | ≥ δ > 0, then it holds for ΩF(1/ f ). □
Theorem 14.33. If f , g ∈ R[a, b], then f g ∈ R[a, b]. Moreover, if |g(x)| ≥ δ > 0 for
all x ∈ [a, b], then f /g ∈ R[a, b].
Proof. Since
f g = (1/4) · (( f + g)² − ( f − g)²),
by Theorems 14.30, 14.31, and 14.32, we have that f g ∈ R[a, b]. The second statement of the theorem is clear by f /g = f · (1/g). □
After the previous theorems, it might be surprising that the composition of two
integrable functions is not necessarily integrable.
Example 14.34. Let f(0) = 0 and f(x) = 1 if x ≠ 0. It is easy to see that f is integrable on every interval (see Exercise 14.1(c)). Let g denote the Riemann function
(see Remark 10.8). We will soon see that the function g is also integrable on every
interval (see Example 14.45). On the other hand, f ◦ g is exactly the Dirichlet function, which we already saw not to be integrable in any interval (see Example 14.9).
We see that for integrability of compositions of functions, we require something
more than just the integrability of the two functions.
Theorem 14.35. Let g be integrable on [a, b], and let f be continuous on a closed
interval [α , β ] that contains the image of g (that is, the set g([a, b])). Then f ◦ g is
integrable on [a, b].
Proof. We have to show that the function f ◦g has arbitrarily small oscillatory sums.
By Theorem 10.52, f is bounded, so there exists a K ≥ 0 such that | f (t)| ≤ K for
all t ∈ [α , β ]. By Heine’s theorem (Theorem 10.61), f is uniformly continuous on
[α , β ], that is, for arbitrary ε > 0, there exists a δ > 0 such that | f (t1 ) − f (t2 )| < ε
if t1 ,t2 ∈ [α , β ] and |t1 − t2 | < δ .
Consider now an arbitrary partition F : a = x0 < x1 < · · · < xn = b of the interval
[a, b]. We will find a bound on the oscillatory sum ΩF ( f ◦ g) by separately finding a
bound on the terms whose indices satisfy
ω (g; [xi−1 , xi ]) < δ ,
(14.20)
and for the others. Let I denote the set of indices 1 ≤ i ≤ n for which (14.20)
holds, and let J be the set of indices 1 ≤ i ≤ n that do not satisfy (14.20). If i ∈ I,
then for arbitrary u, v ∈ [xi−1 , xi ], we have |g(u) − g(v)| < δ , so by the choice of
δ , | f (g(u)) − f (g(v))| < ε . Let the oscillation ω ( f ◦ g; [xi−1 , xi ]) be denoted by
ωi ( f ◦ g). Then
ωi ( f ◦ g) = sup{| f (g(u)) − f (g(v))| : u, v ∈ [xi−1 , xi ]} ≤ ε
(14.21)
for all i ∈ I. On the other hand, if i ∈ J, then ω (g; [xi−1 , xi ]) ≥ δ , so
ΩF(g) = ∑_{i=1}^{n} ω(g; [xi−1, xi]) · (xi − xi−1) ≥ ∑_{i∈J} ω(g; [xi−1, xi]) · (xi − xi−1) ≥ ∑_{i∈J} δ · (xi − xi−1),
and thus
∑_{i∈J} (xi − xi−1) ≤ (1/δ) · ΩF(g).   (14.22)
Now we use the inequalities (14.21) and (14.22) in order to estimate the sum
ΩF ( f ◦ g):
ΩF( f ◦ g) = ∑_{i=1}^{n} ωi( f ◦ g) · (xi − xi−1) = ∑_{i∈I} + ∑_{i∈J} ≤
  ≤ ∑_{i∈I} ε · (xi − xi−1) + ∑_{i∈J} 2K · (xi − xi−1) ≤ ε · (b − a) + (2K/δ) · ΩF(g).   (14.23)
Since g is integrable on [a, b], we can choose a partition F for which ΩF (g) <
εδ /(2K). Then by (14.23), ΩF ( f ◦g) < ε (b− a+ 1). Since ε was chosen arbitrarily,
it follows from Theorem 14.15 that f ◦ g is integrable. This completes the proof. □
Remark 14.36. If in the composition f ◦ g we assume g to be continuous and f to
be integrable, then we cannot generally expect f ◦ g to be integrable. The examples
showing this, however, are much more complicated than the one we saw in Example
14.34.
Exercises
14.29. Give an example of a function f for which | f | is integrable on [a, b], but f is
not integrable on [a, b].
14.30. Let f and g be bounded on the interval [a, b]. Is it true that
∫̲_a^b ( f + g) dx = ∫̲_a^b f dx + ∫̲_a^b g dx?   (H)
14.31. Is it true that if f : [0, 1] → [0, 1] is integrable over [0, 1], then f ◦ f is also
integrable on [0, 1]?
14.6 Further Theorems Regarding the Integrability of Functions
and the Value of the Integral
Theorem 14.37. If a function is integrable on the interval [a, b], then it is also integrable on every subinterval [c, d] ⊂ [a, b].
Proof. Let f be integrable on [a, b]. Then for every ε > 0, there exists a partition F
of the interval [a, b] for which ΩF < ε . Let F ′ be a refinement of the partition F that
we get by including the points c and d. By Lemma 14.3, we know that sF ≤ sF ′ ≤
SF ′ ≤ SF so ΩF ′ ≤ ΩF < ε . On the other hand, if we consider only the base points
of F ′ that belong to [c, d], then we get a partition F of [c, d] for which ΩF ≤ ΩF ′ .
This is clear, because the sum defining ΩF ′ consists of nonnegative terms, and the
terms present in ΩF all appear in ΩF ′ as well.
This shows that for all ε > 0, there exists a partition F of [c, d] for which ΩF < ε .
Thus f is integrable over [c, d].
⊓
⊔
Theorem 14.38. Let a < b < c, and let f be defined on [a, c]. If f is integrable on
both intervals [a, b] and [b, c], then it is also integrable on [a, c], and
∫_a^c f(x) dx = ∫_a^b f(x) dx + ∫_b^c f(x) dx.
Proof. Let I1 = ∫_a^b f(x) dx and I2 = ∫_b^c f(x) dx. Let ε > 0 be given, and consider partitions F1 and F2 of [a, b] and [b, c], respectively, that satisfy I1 − ε < sF1 ≤ SF1 < I1 + ε and I2 − ε < sF2 ≤ SF2 < I2 + ε.
Taking the union of the base points of the partitions F1 and F2 yields a partition
F of the interval [a, c] for which
sF = sF1 + sF2   and   SF = SF1 + SF2.
Thus
I1 + I2 − 2ε < sF ≤ SF < I1 + I2 + 2ε .
Since ε was arbitrary, f is integrable on [a, c] by Theorem 14.11, and its integral is
I1 + I2 there.
⊔
⊓
Remark 14.39. Let f be integrable on [a, b]. As we saw, in this case, f is integrable
over every subinterval [c, d] ⊂ [a, b] as well. The function
[c, d] → ∫_c^d f(x) dx,   (14.24)
which is defined on the set of closed subintervals of [a, b], has the property that the
sum of its values at two adjacent intervals is equal to its value at the union of those
two intervals. This is usually expressed by saying that the formula (14.24) defines
an additive interval function.
Thus far, we have given meaning to the expression ∫_a^b f(x) dx only when a < b
holds. It is worthwhile, however, to extend the definition to the case a ≥ b.
Definition 14.40. If a > b and f is integrable on [b, a], then let
∫_a^b f(x) dx = − ∫_b^a f(x) dx.
If f is defined at the point a, then let
∫_a^a f(x) dx = 0.
Theorem 14.41. For arbitrary a, b, c,
∫_a^c f(x) dx = ∫_a^b f(x) dx + ∫_b^c f(x) dx,   (14.25)
if the integrals in question exist.
Proof. Inspecting the possible cases, we see that the theorem immediately follows
by Definition 14.40 and Theorem 14.38. If, for example, a < c < b, then
∫_a^c f(x) dx + ∫_c^b f(x) dx = ∫_a^b f(x) dx,
which we can rearrange to give us (14.25). The other cases are similar.
⊔
⊓
We now turn our attention to theorems that ensure integrability of functions having properties weaker than continuity.
Theorem 14.42. Let a < b, and let f be
bounded in [a, b].
(i) If f is integrable on [a + δ , b] for all δ > 0,
then f is integrable on the interval [a, b] as
well.
(ii) If f is integrable on [a, b − δ ] for every δ >
0, then f is integrable on the interval [a, b]
as well.
Proof. We prove only (i). Let | f | ≤ K in [a, b].
Consider a partition of the interval [a + δ , b],
and let the oscillatory sum for this partition be
Ωδ (Figure 14.8). The same partition extended by the interval [a, a + δ ] will be a partition of [a, b]. The oscillation Ω corresponding to this will satisfy Ω ≤ Ωδ + 2K δ ,
since the contribution of the new interval [a, a + δ ] is (M − m)δ ≤ 2K δ . Thus if we
choose δ to be sufficiently small (for example, δ < ε /(4K)), and consider a partition of [a + δ , b] for which Ωδ is small enough (for example, Ωδ < ε /2), then Ω
will also be small (smaller than ε ).
⊔
⊓
Theorem 14.43. If f is bounded in [a, b] and is continuous there except at finitely
many points, then f is integrable on [a, b].
Proof. Let a = c0 ≤ c1 < · · · < ck = b, and suppose that f is continuous everywhere except at the points c0, . . . , ck in [a, b]. Let us take the points di = (ci−1 + ci)/2 (i = 1, . . . , k) as well; this gives us intervals [ci−1, di] and [di, ci] that satisfy the conditions
of Theorem 14.42. Thus f is integrable on these intervals, and then by applying
Theorem 14.38 repeatedly, we see that f is integrable on the whole interval [a, b].
⊔
⊓
Remark 14.44. Theorem 14.43 is clearly a consequence of the theorem we mentioned earlier that if a function is bounded and is continuous everywhere except at
countably many points, then it is integrable. Moreover, the integrability of monotone functions also follows from that theorem, since a monotone function can have
only countably many points of discontinuity, as seen in Theorem 10.70.
Example 14.45. We define the Riemann function as follows (see Example 10.7 and
Remark 10.8). Let
f(x) = 0 if x is irrational, and f(x) = 1/q if x = p/q, where p and q are integers, q > 0, and (p, q) = 1.
In Example 10.7, we saw that the function f is continuous at every irrational point
but discontinuous at every rational point. Thus the set of discontinuities of f is Q,
which is countable by Theorem 8.2. Since f is bounded, f is integrable on every
interval [a, b] by the general theorem we mentioned. We will prove this directly too.
Moreover, we will show that the integral of f is zero over every interval.
Since every interval contains an irrational number, for an arbitrary partition F :
a = x0 < x1 < · · · < xn = b and i = 1, . . . , n, we have mi = 0. Thus sF = 0 for all
F ∈ F. All we need to show is that for arbitrary ε > 0, there exists a partition for which SF < ε holds. By the definition of the integral, this will imply ∫_a^b f(x) dx = 0.
Let η > 0 be fixed. Notice that the function takes on values greater than η at
only finitely many points x ∈ [a, b]. This is because if f (x) > η , then x = p/q, where
0 < q < 1/η . However, for each 0 < q < 1/η , there are only finitely many integers
p such that p/q ∈ [a, b]. Let c1 , . . . , cN be all the points in [a, b] where the value of f
is greater than η . Let F : a = x0 < x1 < · · · < xn = b be a partition with mesh smaller
than η /N. We show that SF < (2 + b − a) · η .
The number of indices i for which [xi−1 , xi ] contains one of the points c j is at most
2N, since every point c j belongs to at most two intervals. In the sum SF , the terms
from these intervals are Mi (xi − xi−1 ) ≤ 1 · (η /N), so the sum of these terms is at
most 2N · (η /N) = 2η . The rest of the terms satisfy Mi (xi − xi−1 ) ≤ η · (xi − xi−1 ),
so their sum is at most
n
∑ η · (xi − xi−1 ) = η · (b − a).
i=1
Adding these two bounds, we obtain SF ≤ (2 + b − a) · η . Thus if we choose an η
such that η < ε /(2 + b − a), then the construction above gives an upper sum that is
smaller than ε .
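The estimate above can be reproduced exactly on a computer; the following sketch (ours, using exact rational arithmetic) evaluates the upper sum of the Riemann function on the uniform partition of [0, 1] into n parts, where Mi is the largest 1/q with some fraction p/q in [xi−1, xi]:

```python
# Upper sums of the Riemann function on [0, 1]; they tend to 0 as the mesh shrinks.
from fractions import Fraction
from math import ceil, floor

def upper_sum_riemann_function(n):
    total = Fraction(0)
    for i in range(n):
        lo, hi = Fraction(i, n), Fraction(i + 1, n)
        for q in range(1, n + 1):                 # q = n always works: the endpoints are multiples of 1/n
            if floor(hi * q) >= ceil(lo * q):     # some p/q lies in [lo, hi]
                total += Fraction(1, q) * (hi - lo)
                break
    return total

if __name__ == "__main__":
    for n in (10, 100, 400):
        print(n, float(upper_sum_riemann_function(n)))
```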
Theorem 14.46. Let the functions f and g be defined on the interval [a, b]. If f is
integrable on [a, b], and g = f everywhere except at finitely many points, then g is
integrable on [a, b], and
∫_a^b g(x) dx = ∫_a^b f(x) dx.
Proof. Since f is bounded on [a, b], so is g. Let | f (x)| ≤ K and |g(x)| ≤ K for all
x ∈ [a, b]. Suppose first that f and g differ only at the point a. Let I = ∫_a^b f(x) dx. By
Theorem 14.11, for every ε > 0, there exists a partition F such that I − ε < sF ( f ) ≤
SF ( f ) ≤ I + ε . Since adding a new base point does not decrease sF ( f ) and does not
increase SF ( f ), we can suppose that a+ (ε /K) is a base point, and so x1 − x0 ≤ ε /K.
If f and g are equal in (a, b], then the sums sF ( f ) and sF (g) differ from each
other in only the first term. The absolute value of these terms can be at most
K · (x1 − x0) ≤ ε, and so |sF( f ) − sF(g)| ≤ 2ε and sF(g) > I − 3ε. We similarly get
that SF (g) < I + 3ε . Since ε was chosen arbitrarily, the function g is integrable with
integral I by Theorem 14.11.
We can argue similarly if f and g differ only at the point b. Then to get the
general statement, we can apply Theorem 14.38 repeatedly, just as in the proof of
Theorem 14.43.
⊔
⊓
Remark 14.47. The above theorem gives us an opportunity to extend the concept of
integrability and the values of integrals to functions that are undefined at finitely
many points of the interval [a, b].
Let f be a function defined on [a, b] except for at most finitely many points. If
there exists a function g integrable on [a, b] for which g(x) = f (x) with the exception
of finitely many points, then we say that f is integrable, and
∫_a^b f(x) dx = ∫_a^b g(x) dx.
If such a g does not exist, then f is not integrable. By Theorem 14.46, it is clear that
integrability and the value of the integral do not depend on our choice of g.
Summarizing our observations above, we conclude that the integrability and the
value of the integral of a function f over the interval [a, b] do not change if
(i) we change the values of f at finitely many points,
(ii) we extend the definition of f at finitely many points,
(iii) we make f undefined at finitely many points.
Example 14.48. The function f : [a, b] → R is called a step function if there exists
a partition c0 = a < c1 < · · · < cn = b such that f is constant on every open interval
(ci−1 , ci ) (while the value of f can be arbitrary at the base points ci ). We show that
every step function is integrable, and if f (x) = di for all x ∈ (ci−1 , ci ) (i = 1, . . . , n),
then
∫_a^b f(x) dx = ∑_{i=1}^{n} di (ci − ci−1).
This is quite clear by Example 14.8 and the remark above, which say that ∫_{ci−1}^{ci} f(x) dx = di (ci − ci−1) for all i, and the rest follows by Theorem 14.38.
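A step function's integral can be evaluated directly from this formula; a minimal sketch (ours):

```python
# Integral of a step function taking value d_i on (c_{i-1}, c_i); the values at the
# finitely many base points do not matter (Theorem 14.46).
def step_integral(base_points, values):
    assert len(values) == len(base_points) - 1
    return sum(d * (b - a) for d, a, b in zip(values, base_points, base_points[1:]))

# e.g. a function equal to 2 on (0, 1) and -1 on (1, 3): step_integral([0, 1, 3], [2, -1]) == -1
```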
Exercises
14.32. Let f and g be integrable on [a, b], and suppose that sF ( f ) = sF (g) for all
partitions F. Prove that ∫_c^d f dx = ∫_c^d g dx for all [c, d] ⊆ [a, b].
14.33. Let f be a continuous nonnegative function defined on [a, b]. Prove that
∫_a^b f dx = 0 if and only if f ≡ 0. (S)
14.34. Is the function sin(1/x) integrable on the interval [−1, 1]?
14.35. Let {x} denote the fractional part function. Is {1/x} integrable over the interval [0, 1]?
14.36. Let f(x) = n²(x − 1/n) if 1/(n + 1) < x ≤ 1/n (n ∈ N+). Is f integrable on the
interval [0, 1]?
14.37. Let f(x) = (−1)^n if 1/(n + 1) < x ≤ 1/n (n ∈ N+). Prove that f is integrable
in [0, 1]. What is the value of its integral?
14.38. Let f : [a, b] → R be a bounded function such that limx→c f (x) = 0 for all
c ∈ (a, b). Prove that f is integrable on [a, b]. What is the value of its integral? (H)
14.39. Let f (x) = x if x is irrational, and f (p/q) = (p + 1)/q if p, q ∈ Z, q > 0, and
(p, q) = 1. What are the lower and upper integrals of f in [0, 1]?
14.40. Let a < b < c, and let f be bounded on the interval [a, c]. Prove that
∫̲_a^c f(x) dx = ∫̲_a^b f(x) dx + ∫̲_b^c f(x) dx.
14.7 Inequalities for Values of Integrals
We begin this section with some simple but often used inequalities.
Theorem 14.49. Let a < b.
(i) If f is integrable on [a, b] and f(x) ≥ 0 for all x ∈ [a, b], then ∫_a^b f(x) dx ≥ 0.
(ii) If f and g are integrable on [a, b] and f (x) ≤ g(x) for all x ∈ [a, b], then
∫_a^b f(x) dx ≤ ∫_a^b g(x) dx.
(iii) If f is integrable on [a, b] and m ≤ f (x) ≤ M for all x ∈ [a, b], then
m(b − a) ≤ ∫_a^b f(x) dx ≤ M(b − a).   (14.26)
(iv) If f is integrable on [a, b] and | f (x)| ≤ K for all x ∈ [a, b], then
| ∫_a^b f(x) dx | ≤ K(b − a).
(v) If h is integrable on [a, b], then
| ∫_a^b h(x) dx | ≤ ∫_a^b |h(x)| dx.
Proof. (i) The statement is clear, because if f is nonnegative, then each of its lower
sums is also nonnegative.
(ii) If f ≤ g, then g − f ≥ 0, so the statement follows from (i).
(iii) Both inequalities of (14.26) are clear from (ii).
(iv) Apply (iii) with m = −K and M = K.
(v) Apply (ii) with the choices f = h and g = |h| first, then with the choices f = −|h|
and g = f .
⊔
⊓
Remark 14.50. The definite integral is a mapping that assigns numbers to specific
functions. We have determined this map to be a linear (Theorems 14.30 and 14.31),
additive interval function (Theorem 14.38), which is monotone (Theorem 14.49)
and assigns the value b − a to the constant function 1 on the interval [a, b]. It is
an important fact that these properties characterize the integral. For the precise statement and its proof, see Exercises 14.50 and 14.51.
The following theorem is a simple corollary of inequality (14.26).
Theorem 14.51 (First Mean Value Theorem for Integration). If f is continuous
on [a, b], then there exists a ξ ∈ [a, b] such that
∫_a^b f(x) dx = f(ξ) · (b − a).   (14.27)
Proof. If m = min f([a, b]) and M = max f([a, b]), then
m ≤ (1/(b − a)) · ∫_a^b f(x) dx ≤ M.   (14.28)
Since by the Bolzano–Darboux theorem (Theorem 10.57), the function f
takes on every value between m and M
in [a, b], there must be a ξ ∈ [a, b] for
which (14.27) holds (Figure 14.9).
□
Remark 14.52. The graphical meaning of the theorem above is that if f is nonnegative and continuous on [a, b], then there exists a value ξ such that the area of the
rectangle with height f (ξ ) and base [a, b] is equal to the area underneath the graph
of f .
We consider the value in (14.28) to be a generalization of the arithmetic mean.
If, for example, a = 0, b = n ∈ N+ and f (x) = ai for all x ∈ (i − 1, i), (i = 1, . . . , n),
then the value is exactly the arithmetic mean of a1 , . . . , an .
Theorem 14.53 (Abel’s1 Inequality). Let f be monotone decreasing and nonnegative, and let g be integrable on [a, b]. If
m ≤ ∫_a^c g(x) dx ≤ M
for all c ∈ [a, b], then
f(a) · m ≤ ∫_a^b f(x)g(x) dx ≤ f(a) · M.   (14.29)
To prove the theorem, we require an analogous inequality for sums.
Theorem 14.54 (Abel’s Inequality). If a1 ≥ a2 ≥ · · · ≥ an ≥ 0 and m ≤ b1 + · · · +
bk ≤ M for all k = 1, . . . , n, then
a1 · m ≤ a1 b1 + · · · + an bn ≤ a1 · M.
(14.30)
Proof. Let sk = b1 + · · · + bk (k = 1, . . . , n). Then
a1 b1 + · · · + an bn = a1 s1 + a2 (s2 − s1 ) + · · · + an (sn − sn−1 ) =
= (a1 − a2 )s1 + (a2 − a3 )s2 + · · · + (an−1 − an )sn−1 + an sn .
(14.31)
(This rearrangement is called an Abel rearrangement.) If we replace each sk by
M here, then we are increasing the sum, since sk ≤ M for all k, and the coefficients
ai − ai+1 and an are nonnegative. The number we get in this way is
(a1 − a2 )M + (a2 − a3 )M + · · · + (an−1 − an )M + an M = a1 · M,
which proves the second inequality of (14.30). The first inequality can be proved in
a similar way.
□
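The Abel rearrangement (14.31) is easy to carry out mechanically; the sketch below (ours) forms the partial sums sk and checks the two bounds of (14.30) on a small example:

```python
# Abel's inequality for sums: a_1*min(s_k) <= sum a_i b_i <= a_1*max(s_k)
# for a_1 >= ... >= a_n >= 0, where s_k = b_1 + ... + b_k.
from itertools import accumulate

def abel_bounds(a, b):
    s = list(accumulate(b))                       # partial sums s_k
    middle = sum(x * y for x, y in zip(a, b))
    return a[0] * min(s), middle, a[0] * max(s)

if __name__ == "__main__":
    lo, mid, hi = abel_bounds([5, 3, 2, 1], [1, -2, 4, -1])
    print(lo <= mid <= hi, lo, mid, hi)           # True -5 6 15
```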
Proof (Theorem 14.53). Let ε > 0 be given, and let us choose a partition F : a = x0 <
· · · < xn = b such that ΩF (g) < ε and ΩF ( f · g) < ε . Then for arbitrary 1 ≤ k ≤ n,
m − ε < ∑_{i=1}^{k} g(xi−1)(xi − xi−1) < M + ε.   (14.32)
¹ Niels Henrik Abel (1802–1829), Norwegian mathematician.
Indeed, if Fk denotes the partition a = x0 < · · · < xk of the interval [a, xk ], then the
oscillatory sum of g corresponding to Fk is at most ΩF(g), which is smaller than ε. Thus sFk(g) and SFk(g) are both closer to ∫_a^{xk} g dx than ε.
integral falls between m and M, so
m − ε ≤ ∫_a^{xk} g dx − ε < sFk(g) ≤ ∑_{i=1}^{k} g(xi−1)(xi − xi−1) ≤ SFk(g) < ∫_a^{xk} g dx + ε ≤ M + ε,
which proves (14.32).
Let S = ∑_{i=1}^{n} f(xi−1)g(xi−1)(xi − xi−1). Then |S − ∫_a^b f g dx| < ε, since
ΩF ( f · g) < ε . Since f (x0 ) ≥ f (x1 ) ≥ . . . ≥ f (xn−1 ) ≥ 0, by Abel’s inequality
(Theorem 14.54) applied to the sum S, we have f (x0 )·(m− ε ) ≤ S ≤ f (x0 )·(M + ε ).
Then by a = x0 , we see that f (a) · (m − ε ) ≤ S ≤ f (a) · (M + ε ), so
f(a) · (m − ε) − ε < ∫_a^b f g dx < f(a) · (M + ε) + ε.
This is true for every ε > 0, so we have proved (14.29).
⊔
⊓
Inequalities about sums can often be generalized to integrals. One of these is
Hölder’s inequality, which corresponds to Theorem 11.18.
Theorem 14.55 (Hölder’s Inequality). Let p and q be positive numbers such that
1/p + 1/q = 1. If f and g are integrable on [a, b], then
| ∫_a^b f(x)g(x) dx | ≤ ( ∫_a^b | f(x)|^p dx )^{1/p} · ( ∫_a^b |g(x)|^q dx )^{1/q}.   (14.33)
Proof. By Theorems 14.33 and 14.35, the functions | f g|, | f | p , and |g|q are all integrable on [a, b]. Let F : a = x0 < · · · < xn = b be a partition such that ΩF (| f g|) < ε ,
ΩF (| f | p ) < ε and ΩF (|g|q ) < ε .
Then we can say that the Riemann sum ∑_{i=1}^{n} f(xi)g(xi) · (xi − xi−1) is less than ε away from the integral A = ∫_a^b f(x)g(x) dx, the Riemann sum ∑_{i=1}^{n} | f(xi)|^p · (xi − xi−1) is less than ε away from the integral B = ∫_a^b | f(x)|^p dx, and the Riemann sum ∑_{i=1}^{n} |g(xi)|^q · (xi − xi−1) is less than ε away from the integral C = ∫_a^b |g(x)|^q dx.
Now by Hölder's inequality for these sums (Theorem 11.18), we have
|A| − ε < | ∑_{i=1}^{n} f(xi)g(xi) · (xi − xi−1) | = | ∑_{i=1}^{n} ( f(xi)(xi − xi−1)^{1/p}) · (g(xi)(xi − xi−1)^{1/q}) | ≤
  ≤ ( ∑_{i=1}^{n} | f(xi)|^p (xi − xi−1) )^{1/p} · ( ∑_{i=1}^{n} |g(xi)|^q (xi − xi−1) )^{1/q} ≤ (B + ε)^{1/p} · (C + ε)^{1/q}.
Since ε was arbitrary, (14.33) holds.
⊔
⊓
For the case p = q = 2, we get the following famous inequality, which is the
analogue of Theorem 11.19 for integrals.
Theorem 14.56 (Schwarz Inequality). If f and g are integrable in [a, b], then
( ∫_a^b f(x)g(x) dx )² ≤ ∫_a^b f²(x) dx · ∫_a^b g²(x) dx.
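A quick numerical sanity check of the Schwarz inequality (a sketch; the midpoint-rule approximation and the two test functions are our own choices):

```python
import math

def integral(h, a, b, n=10000):                   # midpoint-rule approximation
    dx = (b - a) / n
    return sum(h(a + (i + 0.5) * dx) for i in range(n)) * dx

if __name__ == "__main__":
    f, g = math.sin, math.exp
    lhs = integral(lambda x: f(x) * g(x), 0, 1) ** 2
    rhs = integral(lambda x: f(x) ** 2, 0, 1) * integral(lambda x: g(x) ** 2, 0, 1)
    print(lhs <= rhs, lhs, rhs)
```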
Remark 14.57. The Schwarz inequality forms
the basis of an important analogy between functions integrable on [a, b] and vectors.
If x = (x1 , x2 ) and y = (y1 , y2 ) are vectors in
R2 , then the number x1 y1 + x2 y2 is called the
dot or scalar product of x and y, and is denoted
by ⟨x, y⟩. The Cauchy–Schwarz–Bunyakovsky inequality (Theorem 11.19) states that
|⟨x, y⟩| ≤ |x| · |y|   for all x, y ∈ R².
In fact, ⟨x, y⟩ = |x| · |y| · cos α, where α denotes
the angle between the rays pointing toward x
and y. We can prove this in the following way.
Since
⟨λ x, y⟩ = ⟨x, λ y⟩ = λ · ⟨x, y⟩
for all λ ∈ R, we can suppose that x and y are unit vectors. We have to show that in this case, ⟨x, y⟩ is equal to the cosine of the subtended angle. By the equality
⟨x, y⟩ = (1/4) · (|x + y|² − |x − y|²),
we know that congruences (that is, isometries, mappings preserving distances) do not change the value of the dot product. A suitable isometry maps the vector x into the vector (1, 0), and the vector y into the vector (a, b), where √(a² + b²) = 1. The
dot product of these vectors is a. On the other hand, (a, b) is a point on the unit
circle, and so by the definition of the cosine function, a = cos α , where α is the
angle enclosed by the rays containing (a, b) and (1, 0). That is, ⟨x, y⟩ = a = cos α,
which is what we wanted to show (Figure 14.10). The above also shows that two
vectors are perpendicular to each other if and only if their scalar product
is zero.
Theorem 14.56 provides the analogy to consider the number ∫_a^b f(x)g(x) dx the scalar product of the functions f and g, and the number √( ∫_a^b f²(x) dx ) the absolute value of the function f. Hence we can consider two functions f and g perpendicular (or orthogonal) if their scalar product ∫_a^b f(x)g(x) dx is zero.
This analogy works quite well, and leads to the theory of Hilbert2 spaces.
Exercises
14.41. Prove that if a < b and f is continuous on [a, b], then
lim_{h→0+0} ∫_a^{b−h} [ f(x + h) − f(x)] dx = 0.
14.42. Let f : [0, ∞) → R be a continuous function, and suppose that
lim_{x→∞} f(x) = c. Prove that lim_{t→∞} ∫_0^1 f(tx) dx = c. (H)
14.43. Compute the value of lim_{n→∞} ∫_0^1 (1 − x)^n dx.
14.44. Prove that if f is nonnegative and continuous in [a, b], then
lim_{n→∞} ( ∫_a^b f^n(x) dx )^{1/n} = max f([a, b]). (H)
14.45. Prove that if f is convex on [a, b], then
f((a + b)/2) · (b − a) ≤ ∫_a^b f(x) dx ≤ (( f(a) + f(b))/2) · (b − a).
14.46. Prove that if f is differentiable on [a, b] and f (a) = f (b) = 0, then there exists
a c ∈ [a, b] such that
f′(c) ≥ (2/(b − a)²) · ∫_a^b f(x) dx.
(We can interpret this exercise as follows: if a point moves along a straight line
through the time interval [a, b] with zero initial and final velocity, then for it to travel
a distance d, it must have reached an acceleration of 2d/(b − a)2 along its route.)
² David Hilbert (1862–1943), German mathematician.
14.47. Prove that if f is nonnegative, continuous, and concave on [0, 1], and if
furthermore, f (0) = 1, then
∫_0^1 x · f(x) dx ≤ (2/3) · ( ∫_0^1 f(x) dx )².
When does equality hold?
14.48. When does equality hold in (14.33)?
14.49. Let f : R → R be a continuous function, and suppose that
A = lim_{x→−∞} f(x)   and   B = lim_{x→∞} f(x).
Determine the limit
lim_{a→∞} ∫_{−a}^{a} [ f(x + 1) − f(x)] dx.
14.50. Suppose that there is a number Φ ( f ; [a, b]) assigned to every interval [a, b]
and integrable function f with the following properties:
(i) If f is integrable on [a, b], then Φ (c f ; [a, b]) = c · Φ ( f ; [a, b]) for all c ∈ R.
(ii) If f and g are integrable on [a, b], then Φ ( f + g; [a, b]) = Φ ( f ; [a, b]) + Φ (g;
[a, b]).
(iii) If a < b < c and f is integrable on [a, c], then Φ ( f ; [a, c]) = Φ ( f ; [a, b]) +
Φ ( f ; [b, c]).
(iv) If f and g are integrable on [a, b] and f (x) ≤ g(x) for all x ∈ [a, b], then
Φ ( f ; [a, b]) ≤ Φ (g; [a, b]).
(v) If e(x) = 1 for all x ∈ [a, b], then Φ (e; [a, b]) = b − a.
Prove that Φ( f ; [a, b]) = ∫_a^b f(x) dx for every function f integrable over [a, b]. (H)
14.51. Assume Φ to be as in the previous question, except replace (v) with the following condition:
(vi) If f is integrable on [a, b], then Φ ( fc ; [a − c, b − c]) = Φ ( f ; [a, b]) for all c ∈ R,
where fc (x) = f (x + c) (x ∈ [a − c, b − c]).
Prove that there exists a constant α ≥ 0 such that Φ( f ; [a, b]) = α · ∫_a^b f(x) dx for every function f integrable over [a, b]. (H)
Chapter 15
Integration
In this chapter, we will familiarize ourselves with the most important methods for
computing integrals, which will also make the link between definite and indefinite
integrals clear.
15.1 The Link Between Integration and Differentiation
Examples 14.8 and 14.21 both provide equalities of the form ∫_a^b f(x) dx = F(b) − F(a) with the following cast:
f(x) ≡ c,       F(x) = c · x;
f(x) = 1/x,     F(x) = log x;
f(x) = x^α,     F(x) = x^{α+1}/(α + 1)   (α > 0);
f(x) = e^x,     F(x) = e^x;
f(x) = cos x,   F(x) = sin x.
As we can see, in each example, F ′ = f , that is, F is a primitive function of f .
These examples illustrate an important link between integration and differentiation,
and are special cases of a famous general theorem.
Theorem 15.1 (Fundamental Theorem of Calculus). Let f be integrable on [a, b].
If the function F is continuous on [a, b], differentiable on (a, b), and F ′ (x) = f (x)
for all x ∈ (a, b) (that is, F is a primitive function of f on (a, b)), then
∫_a^b f(x) dx = F(b) − F(a).
Proof. Let a = x0 < x1 < · · · < xn = b be an arbitrary partition of [a, b]. By the mean
value theorem (Theorem 12.50), for all i, there exists a point ci ∈ (xi−1 , xi ) such that
F(xi ) − F(xi−1 ) = F ′ (ci )(xi − xi−1 ) = f (ci )(xi − xi−1 )
holds. If we sum these equalities for all i = 1, . . . , n, then every term cancels out on
the left-hand side except for the terms F(xn ) = F(b) and F(x0 ) = F(a), and so we
get that
F(b) − F(a) = ∑_{i=1}^{n} f(ci)(xi − xi−1).
This means that for every partition, there exist inner points such that the Riemann
sum with those points is equal to F(b) − F(a). Thus the number F(b) − F(a) lies
between the lower and upper sums for every partition. Since f is integrable, there is only one such number: the integral of f. Thus F(b) − F(a) = ∫_a^b f(x) dx.
□
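Numerically, the theorem says that fine Riemann sums of f and the increment of any primitive F agree; a short sketch (ours; the test pair f = cos, F = sin is an arbitrary choice):

```python
# Compare a fine midpoint Riemann sum of f with F(b) - F(a) for a primitive F of f.
import math

def riemann_midpoint(f, a, b, n=100000):
    dx = (b - a) / n
    return sum(f(a + (i + 0.5) * dx) for i in range(n)) * dx

if __name__ == "__main__":
    a, b = 0.0, 2.0
    print(riemann_midpoint(math.cos, a, b), math.sin(b) - math.sin(a))
```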
Remark 15.2. While making clear the definition of differentiability back in Chapter 12, we concluded that if the function s(t) defines the position of a moving point,
then its instantaneous velocity is v(t) = s′ (t). Since s(b) − s(a) is the distance the
point travels during the time interval [a, b], the physical interpretation of the fundamental theorem of calculus says that the distance traveled is equal to the integral of
the velocity.
As we saw in Chapter 13, deciding whether a function has a primitive function is
generally a hard task (see Remarks 13.27 and 13.45). However, if the function f is
integrable, then deciding this question—with the help of the fundamental theorem of
calculus—is quite easy. Suppose, for example, that f is integrable on [a, b], and that
F is a primitive function of f . We can assume that F(a) = 0, since if this does not
hold, we can just consider the function F(x) − F(a) instead of F(x). Let x ∈ [a, b],
and apply the fundamental theorem of calculus to the interval [a, x]. We get that ∫_a^x f(t) dt = F(x) − F(a) = F(x), that is, F(x) = ∫_a^x f(t) dt for all x ∈ [a, b]. This means that if f has a primitive function, then the function x → ∫_a^x f(t) dt must also
be a primitive function. We will introduce a name for this function.
Definition 15.3. Let f be integrable on [a, b]. The function
I(x) = ∫_a^x f(t) dt   (x ∈ [a, b])
is called the integral function of f .
With the use of this new concept, we can summarize the results of our previous
argument as follows.
Theorem 15.4. An integrable function has a primitive function if and only if its
integral function is its primitive function.
The most important properties of the integral function are expressed by the following theorem.
Theorem 15.5. Let f be integrable on [a, b], and let I(x) be its integral function.
(i) The function I is continuous and even has the Lipschitz property on the interval
[a, b].
(ii) If f is continuous at the point x0 ∈ [a, b], then I is differentiable there, and
I ′ (x0 ) = f (x0 ).
(iii) If f is continuous on [a, b], then I is differentiable on [a, b], and I ′ = f . It follows
that if f is continuous on [a, b], then it has a primitive function there.
Proof. (i) Let | f (x)| ≤ K for all x ∈ [a, b]. If a ≤ x < y ≤ b, then by Theorem 14.38,
we have
I(y) − I(x) = ∫_a^y f(t) dt − ∫_a^x f(t) dt = ∫_x^y f(t) dt,
so |I(y) − I(x)| ≤ K · |y − x| by statement (iv) of Theorem 14.49.
(ii) Again by Theorem 14.38, we have
I(x) − I(x0) = ∫_a^x f(t) dt − ∫_a^{x0} f(t) dt = ∫_{x0}^x f(t) dt,
so the difference quotient of the function I corresponding to the points x and x0 is
(I(x) − I(x0))/(x − x0) = (1/(x − x0)) · ∫_{x0}^x f(t) dt.
Since f is continuous at x0 , for arbitrary ε > 0 there exists a δ > 0 such that
f (x0 ) − ε < f (t) < f (x0 ) + ε if |t − x0 | < δ .
First let x0 < x < x0 + δ . For all such x, it follows from statement (iii) of Theorem 14.49 that
( f(x0) − ε)(x − x0) ≤ ∫_{x0}^x f(t) dt ≤ ( f(x0) + ε)(x − x0)
holds, that is,
f(x0) − ε ≤ (I(x) − I(x0))/(x − x0) ≤ f(x0) + ε.
The same can be said when x < x0 by rearranging (I(x0) − I(x))/(x0 − x), so we have
I′(x0) = lim_{x→x0} (I(x) − I(x0))/(x − x0) = f(x0).
Statement (iii) is clear from (ii).
⊔
⊓
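Statement (ii) can be observed numerically as well; the sketch below (ours, with an arbitrary continuous test function and step sizes) compares a difference quotient of the integral function I with the value of f:

```python
# I(x) = integral of f from a to x (midpoint approximation); at a point of continuity
# the difference quotient of I is close to f(x0), as Theorem 15.5(ii) asserts.
import math

def I(f, a, x, n=20000):
    dx = (x - a) / n
    return sum(f(a + (i + 0.5) * dx) for i in range(n)) * dx

if __name__ == "__main__":
    f, a, x0, h = math.cos, 0.0, 1.0, 1e-4
    print((I(f, a, x0 + h) - I(f, a, x0)) / h, f(x0))   # nearly equal
```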
Remarks 15.6. 1. We can see from the proof that if f is continuous from the right or
the left at x0 , then I+′ (x0 ) = f (x0 ) or I−′ (x0 ) = f (x0 ) respectively.
2. The proof of statement (ii) above uses an argument we have already seen before.
In Example 10.7, when we determined the area under the graph of a nonnegative
monotone increasing and continuous function f : [a, b] → R, we showed that if T (x)
denotes the area over the interval [a, x], then T ′ (x) = f (x). Statement (ii) of Theorem 15.5 is actually a rephrasing of this, in which we replace area—which we still
have not clearly defined—with the integral, and the function T (x) with the integral
function.
3. The fundamental theorem of calculus implies that if a function F is continuously
differentiable,1 then differentiating F and integrating the derivative gives us F back
(more precisely, its increment on the interval [a, b]). By statement (iii) of 15.5, if we
integrate a continuous function f from a to x, and then we differentiate the integral
function we get, then we obtain f . These two statements express that integration
and differentiation are inverse operations in some sense.
In talking about the theory of integration, several different properties of functions
came into play: boundedness, integrability, continuity, and the property of having a
primitive function. We will use the following notation for functions that have the
corresponding properties.
K[a, b] = { f : [a, b] → R and f is bounded in [a, b]},
R[a, b] = { f : [a, b] → R and f is Riemann integrable on [a, b]},
C[a, b] = { f : [a, b] → R and f is continuous on [a, b]},
P[a, b] = { f : [a, b] → R and f has a primitive function on [a, b]}.
We introduce separate notation for the set of integrable functions whose integral
function is differentiable:
D[a, b] = { f ∈ R[a, b] and the integral function of f is differentiable in [a, b]}.
By Theorem 15.5 (and the proper definitions), the containment relations
C[a, b] ⊂ D[a, b] ⊂ R[a, b] ⊂ K[a, b]   and   C[a, b] ⊂ P[a, b]   (15.1)
¹ By this we mean that the function is differentiable and its derivative is continuous.
hold for these classes of functions. Moreover,
R[a, b] ∩ P[a, b] ⊂ D[a, b].
(15.2)
This is a straightforward corollary of Theorem 15.4 (Figure 15.1).
We now show that aside from what is listed in (15.1), no other containment
relations exist between these classes of functions.
Examples 15.7. 1. f ∈ K[0, 1] ⇏ f ∈ R[0, 1]: The Dirichlet function is an example.
2. f ∈ R[0, 1] ⇏ f ∈ D[0, 1]: Let f(x) = 0 if 0 ≤ x < 1/2, and f(x) = 1 if 1/2 ≤ x ≤ 1.
3. f ∈ D[0, 1] ⇏ f ∈ C[0, 1]: By (15.2), it suffices to give a function that is integrable, has a primitive function, but is not continuous. Let
f(x) = 2x sin(1/x) − cos(1/x) if x ≠ 0,   and   f(0) = 0.
Since f is bounded and continuous everywhere except at a point, it is integrable. The function f has a primitive function, too, namely the function
F(x) = x² sin(1/x) if x ≠ 0,   and   F(0) = 0.
On the other hand, f is not continuous at 0.
4. f ∈ D[0, 1] ⇏ f ∈ P[0, 1]: Let f(0) = 1 and f(x) = 0 (0 < x ≤ 1).
5. f ∈ P[0, 1] ⇏ f ∈ K[0, 1]: See Example 13.46.
We mention that there exists a bounded function that has a primitive function but is not integrable (that is, f ∈ K[0, 1] ∩ P[0, 1] ⇏ f ∈ R[0, 1]). Constructing such
a function is significantly more difficult than constructing the previous ones, so we
will skip that for now.
Combining the continuity of integral functions with Abel’s inequality yields an
important result.
Theorem 15.8 (Second Mean Value Theorem for Integration).
(i) Let f be monotone decreasing and nonnegative, and let g be integrable in [a, b].
Then there exists a ξ ∈ [a, b] such that
∫_a^b f(x)g(x) dx = f(a) · ∫_a^ξ g(x) dx.   (15.3)
(ii) Let f be monotone and let g be integrable in [a, b]. Then there exists a ξ ∈ [a, b]
such that
∫_a^b f(x)g(x) dx = f(a) · ∫_a^ξ g(x) dx + f(b) · ∫_ξ^b g(x) dx.   (15.4)
Proof. (i) The integral function G(x) = ∫_a^x g(t) dt is continuous in [a, b] by Theorem 15.5, so its range in [a, b] has a smallest and a greatest element. Let m = min G[a, b], M = max G[a, b], and I = ∫_a^b f g dx. Then f(a) · m ≤ I ≤ f(a) · M by
Theorem 14.53. Since f (a) · G takes on every value between f (a) · m and f (a) · M
by the Bolzano–Darboux theorem, there exists a ξ ∈ [a, b] such that f (a) · G(ξ ) = I,
which is exactly (15.3).
(ii) We can suppose that f is monotone decreasing, since otherwise, we can switch
to the function − f . Then f − f (b) is monotone decreasing and nonnegative in [a, b],
so by (i), there exists a ξ ∈ [a, b] such that
∫_a^b ( f(x) − f(b)) g(x) dx = ( f(a) − f(b)) · ∫_a^ξ g(x) dx,
from which we get
∫_a^b f(x)g(x) dx = ( f(a) − f(b)) · ∫_a^ξ g(x) dx + f(b) · ∫_a^b g(x) dx =
  = f(a) · ∫_a^ξ g(x) dx + f(b) · ∫_ξ^b g(x) dx.
⊔
⊓
Exercises
15.1. Give every primitive function, integral function, indefinite integral (see Definition 13.24), and definite integral of the functions below over the interval [−2, 3]:
(a) |x|;
(b) sgn(x);
(c) f(x) = 1 + x² if x ≥ 0, and f(x) = 1 − x² if x < 0.
15.2. Let f (x) = |x| − 2 (x ∈ [−2, 1]). Does there exist a function whose integral
function is f ? Decide the same for the function g(x) = [x] (x ∈ [−2, 1]).
15.3. Does there exist a function on [0, 1] whose integral function is √x? (H)
15.4. Let f : [a, b] → R be bounded, and let F be a primitive function of f . Prove
that
∫̲_a^b f(x) dx ≤ F(b) − F(a) ≤ ∫̄_a^b f(x) dx.
15.5. Let
G(x) = ∫_0^{x⁴} e^{t³} · sin t dt   (x ∈ R).
Determine the derivative of G. (H)
15.6. Prove that there are only two continuous functions defined on [a, b] that satisfy
∫_a^x f(t) dt = ∫_a^x f²(t) dt
for all x ∈ [a, b].
15.7. Prove that if f is continuous in [0, 1] and f (x) < 1 for all x ∈ [0, 1], then the
equation
2x − ∫_0^x f(t) dt = 1
has exactly one root in [0, 1].
15.8. For which values of x is the value of
∫_0^x (sin t / √t) dt
maximized?
15.9. Let f be integrable on [a, b], and let the integral function of f be I. Is it possible
for I to be differentiable everywhere and I′(x) ≠ f(x) for all x ∈ [a, b]? (H)
15.10. Prove that
lim_{n→∞} n · ( 1/(1² + n²) + 1/(2² + n²) + · · · + 1/(n² + n²) ) = π/4.
15.11. Determine the limits of the following sequences:
(a) an = ∑_{k=1}^{2n} n/(k² + n²),
(b) an = ∑_{k=n}^{2n} n/(k(n + k)),
(c) an = ∑_{k=1}^{n} k/(k² + n²),
(d) an = ∑_{k=2n}^{3n} (k/n²) · e^{k/n},
(e) an = ( (1 + 1/n)(1 + 2/n) · · · (1 + n/n) )^{1/n}.
15.12. Let
G(x) = ∫_x^{2x} (1/t) dt   (x > 0).
Determine G′ (x) without using the fundamental theorem of calculus. How can we
interpret the result?
15.2 Integration by Parts
The fundamental theorem of calculus is significant not only from a theoretical
standpoint (in which it outlines a link between differentiation and integration), but in
terms of applications as well, since it tells us the value of the definite integral whenever we know a primitive function of the function we are integrating. Thus we can
use the methods for computing indefinite integrals to compute definite integrals. In
order to have the formulas in a more concise form, we introduce the following notation: if the function F is defined on the interval [a, b], then we denote the difference
F(b) − F(a) by [F]_a^b.
We now extend our toolkit from Chapter 13 (Theorems 13.28, 13.30, and 13.33)
with two new methods that greatly increase the number of integrals we can compute.
Theorem 15.9 (Integration by Parts). Suppose the functions f and g are differentiable on the interval I, and $fg'$ has a primitive function there. Then $f'g$ has a primitive function on I as well, and
\[ \int f'g\,dx = fg - \int fg'\,dx. \tag{15.5} \]

Proof. Let $F\in\int fg'\,dx$. Since $(fg)' = f'g + fg'$, we have
\[ (fg - F)' = f'g + fg' - fg' = f'g, \]
which is exactly (15.5). $\square$
Examples 15.10. A few examples of integration by parts follow.

1. $\int x\cos x\,dx = \int x(\sin x)'\,dx = x\sin x - \int x'\sin x\,dx = x\sin x - \int 1\cdot\sin x\,dx = x\sin x + \cos x + C$.

2. $\int x e^x\,dx = \int x(e^x)'\,dx = xe^x - \int x'e^x\,dx = xe^x - \int 1\cdot e^x\,dx = (x-1)e^x + C$.

3. $\int x\log x\,dx = \int\bigl(\frac{x^2}{2}\bigr)'\log x\,dx = \frac{x^2}{2}\log x - \int\frac{x^2}{2}(\log x)'\,dx = \frac{x^2}{2}\log x - \int\frac{x^2}{2}\cdot\frac1x\,dx = \frac{x^2}{2}\log x - \frac{x^2}{4} + C$ $(x>0)$.

4. $\int e^x\cos x\,dx = \int e^x(\sin x)'\,dx = e^x\sin x - \int(e^x)'\sin x\,dx = e^x\sin x - \int e^x\,(-\cos x)'\,dx = e^x\sin x + e^x\cos x - \int e^x\cos x\,dx$, so $\int e^x\cos x\,dx = \frac12\cdot(e^x\sin x + e^x\cos x) + C$.

5. $\int\log x\,dx = \int x'\log x\,dx = x\log x - \int x(\log x)'\,dx = x\log x - \int x\cdot\frac1x\,dx = x\log x - x + C$ $(x>0)$.

6. $\int\operatorname{arc\,tg}x\,dx = \int x'\operatorname{arc\,tg}x\,dx = x\operatorname{arc\,tg}x - \int x(\operatorname{arc\,tg}x)'\,dx = x\operatorname{arc\,tg}x - \int\frac{x}{1+x^2}\,dx = x\operatorname{arc\,tg}x - \frac12\log(1+x^2) + C$.
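The antiderivatives above can also be cross-checked mechanically. The following small sympy sketch, added here as an illustration and not part of the text, differentiates each claimed primitive and compares it with the original integrand.

```python
# Checks the results of Examples 15.10 by differentiating the primitives back.
import sympy as sp

x = sp.symbols('x', positive=True)
claims = [
    (x*sp.cos(x),         x*sp.sin(x) + sp.cos(x)),                        # Example 1
    (x*sp.exp(x),         (x - 1)*sp.exp(x)),                              # Example 2
    (x*sp.log(x),         x**2/2*sp.log(x) - x**2/4),                      # Example 3
    (sp.exp(x)*sp.cos(x), (sp.exp(x)*sp.sin(x) + sp.exp(x)*sp.cos(x))/2),  # Example 4
    (sp.log(x),           x*sp.log(x) - x),                                # Example 5
    (sp.atan(x),          x*sp.atan(x) - sp.log(1 + x**2)/2),              # Example 6
]
for integrand, primitive in claims:
    assert sp.simplify(sp.diff(primitive, x) - integrand) == 0
print("all antiderivatives from Examples 15.10 check out")
```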
Applying integration by parts repeatedly allows us to compute various integrals
such as the following:
\[ \int x^k\cos x\,dx,\quad \int x^k\sin x\,dx,\quad \int x^ke^x\,dx,\quad \int x^k\log^nx\,dx,\quad \int x^ke^x\cos x\,dx,\quad \int x^ke^x\sin x\,dx. \]
The following theorem gives us integration by parts for definite integrals.
Theorem 15.11. Suppose f and g are differentiable functions, while f ′ and g′ are
integrable over [a, b]. Then
\[ \int_a^b f'g\,dx = [fg]_a^b - \int_a^b fg'\,dx. \tag{15.6} \]
Proof. Since f and g are differentiable, they are continuous, so by Theorem 14.26, they are also integrable on [a, b]. Thus $f'g$ and $fg'$ are both integrable on [a, b] by Theorem 14.33. Since $(fg)' = f'g + fg'$, we have
\[ \int_a^b (f'g + fg')\,dx = [fg]_a^b \]
by the fundamental theorem of calculus. Applying Theorem 14.31 and rearranging what we get yields (15.6). $\square$
As an interesting application of the previous theorem, we get the following
formulas.
Theorem 15.12.
\[ \int_0^{\pi}\sin^{2n}x\,dx = \frac{1\cdot3\cdots(2n-1)}{2\cdot4\cdots2n}\cdot\pi \qquad (n\in\mathbb N^+) \tag{15.7} \]
and
\[ \int_0^{\pi}\sin^{2n+1}x\,dx = \frac{2\cdot4\cdots2n}{1\cdot3\cdots(2n+1)}\cdot2 \qquad (n\in\mathbb N). \tag{15.8} \]

Proof. Let $I_k = \int_0^{\pi}\sin^kx\,dx$ for all $k\in\mathbb N$. Then $I_0 = \pi$ and $I_1 = \cos0 - \cos\pi = 2$. If $k\ge1$, then
\[ I_{k+1} = \int_0^{\pi}\sin^2x\cdot\sin^{k-1}x\,dx = \int_0^{\pi}(1-\cos^2x)\cdot\sin^{k-1}x\,dx = \int_0^{\pi}\bigl(\sin^{k-1}x - \cos^2x\cdot\sin^{k-1}x\bigr)\,dx = I_{k-1} - \int_0^{\pi}\bigl(\cos x\cdot\sin^{k-1}x\bigr)\cdot\cos x\,dx. \tag{15.9} \]
Now, using integration by parts, we get that
\[ \int_0^{\pi}\bigl(\cos x\cdot\sin^{k-1}x\bigr)\cdot\cos x\,dx = \int_0^{\pi}\cos x\cdot\Bigl(\frac1k\sin^kx\Bigr)'\,dx = \Bigl[\cos x\cdot\frac1k\sin^kx\Bigr]_0^{\pi} - \int_0^{\pi}\frac1k\sin^kx\cdot(-\sin x)\,dx = 0 + \frac1k\cdot I_{k+1}. \]
Combining this with (15.9), we obtain $I_{k+1} = I_{k-1} - \frac1k\cdot I_{k+1}$, so $I_{k+1} = \frac{k}{k+1}\cdot I_{k-1}$. Thus
\[ I_{2n} = \frac{2n-1}{2n}\cdot I_{2n-2} = \cdots = \frac{2n-1}{2n}\cdot\frac{2n-3}{2n-2}\cdots\frac12\cdot I_0, \]
which is exactly (15.7). Similarly,
\[ I_{2n+1} = \frac{2n}{2n+1}\cdot I_{2n-1} = \cdots = \frac{2n}{2n+1}\cdot\frac{2n-2}{2n-1}\cdots\frac23\cdot I_1, \]
which is (15.8). $\square$
The equations above make it possible to prove a fundamentally important identity that expresses the number π as the limit of a simple product.
Theorem 15.13 (Wallis' Formula²).
\[ \pi = \lim_{n\to\infty}\left(\frac{2\cdot4\cdots2n}{1\cdot3\cdots(2n-1)}\right)^2\cdot\frac1n. \]
Proof. Since $\sin^{2n-1}x \ge \sin^{2n}x \ge \sin^{2n+1}x$ for all $x\in[0,\pi]$, we have $I_{2n-1}\ge I_{2n}\ge I_{2n+1}$. Thus
\[ \frac{2\cdot4\cdots(2n-2)}{1\cdot3\cdots(2n-1)}\cdot2 \ \ge\ \frac{1\cdot3\cdots(2n-1)}{2\cdot4\cdots2n}\cdot\pi \ \ge\ \frac{2\cdot4\cdots2n}{1\cdot3\cdots(2n+1)}\cdot2, \]
which gives
\[ \left(\frac{2\cdot4\cdots2n}{1\cdot3\cdots(2n-1)}\right)^2\cdot\frac1n \ \ge\ \pi\ \ge\ \left(\frac{2\cdot4\cdots2n}{1\cdot3\cdots(2n-1)}\right)^2\cdot\frac{2}{2n+1}. \]
Let $W_n$ denote the product $\bigl[(2\cdot4\cdots2n)/(1\cdot3\cdots(2n-1))\bigr]^2\cdot\frac1n$. Then $W_n\ge\pi\ge W_n\cdot\frac{2n}{2n+1}$, that is, $\pi\le W_n\le\pi\cdot\frac{2n+1}{2n}$. Thus, by the squeeze theorem, $\lim_{n\to\infty}W_n = \pi$. $\square$
² John Wallis (1616–1703), English mathematician.
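The convergence in Wallis' formula is quite slow. The following short numerical sketch, added here as an illustration and not part of the book, computes a few of the products $W_n$ and their distance from π.

```python
# Numerical illustration of Wallis' formula; the chosen values of n are arbitrary.
import math

def wallis(n):
    """W_n = [ (2*4*...*2n) / (1*3*...*(2n-1)) ]^2 / n."""
    ratio = 1.0
    for k in range(1, n + 1):
        ratio *= (2 * k) / (2 * k - 1)
    return ratio * ratio / n

for n in (10, 100, 1000, 10000):
    print(n, wallis(n), wallis(n) - math.pi)
```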
Remark 15.14. Since
\[ \frac{2\cdot4\cdots2n}{1\cdot3\cdots(2n-1)} = \frac{(2\cdot4\cdots2n)^2}{1\cdot2\cdots(2n)} = \frac{[2^n\cdot n!]^2}{(2n)!} = \frac{4^n}{\binom{2n}{n}}, \]
Wallis's formula gives
\[ \lim_{n\to\infty}\frac{4^n}{\binom{2n}{n}\sqrt n} = \sqrt\pi. \tag{15.10} \]
With our asymptotic notation, we can express this by saying that
\[ \binom{2n}{n} \sim \frac{4^n}{\sqrt{n\pi}}. \]
By the binomial theorem, the sum of the binomial coefficients
\[ \binom{2n}{0},\ \binom{2n}{1},\ \ldots,\ \binom{2n}{2n} \tag{15.11} \]
is $4^n$, so their mean is $4^n/(2n+1)$. Now (15.10) says that the middle term in (15.11) (which is the largest term as well) is about $c\cdot\sqrt n$ times the mean, where $c = 2/\sqrt\pi$.
As an application of Wallis's formula, we now prove an important theorem that itself has many applications in many fields of mathematics, especially in probability theory.

Theorem 15.15 (Stirling's Formula). $n! \sim \left(\dfrac ne\right)^n\cdot\sqrt{2\pi n}$.
Proof. First we show that the sequence $a_n = (n/e)^n\sqrt{2\pi n}/n!$ is strictly monotone increasing and bounded. A simple computation yields
\[ \frac{a_{n+1}}{a_n} = \frac{\bigl(1+\frac1n\bigr)^{n+(1/2)}}{e}, \]
so
\[ \log a_{n+1} - \log a_n = \Bigl(n+\frac12\Bigr)\cdot\log\Bigl(1+\frac1n\Bigr) - 1 \tag{15.12} \]
for all n. We know that for x > 0, we have $\log(1+x) > 2x/(x+2)$ (as seen in Example 12.56). Apply this for $x = 1/n$, then multiply through by $n+(1/2)$ to get that
\[ \Bigl(n+\frac12\Bigr)\cdot\log\Bigl(1+\frac1n\Bigr) > 1. \]
This proves that $(a_n)$ is strictly monotone increasing by (15.12).

Next, we will use the inequality
\[ \log(1+x) \le x - \frac{x^2}{2} + \frac{x^3}{3} \qquad (x>0) \tag{15.13} \]
to find an upper bound for the right-hand side of (15.12). (We refer to Exercise 12.91 or (13.26) to justify (15.13).) If we substitute $x = 1/n$ into (15.13) and multiply through by $n+(1/2)$, then with the help of (15.12), we get
\[ \log a_{n+1} - \log a_n \le \frac{1}{12n^2} + \frac{1}{6n^3} \le \frac{1}{12n^2} + \frac{1}{6n^2} = \frac{1}{4n^2}. \tag{15.14} \]
Thus
\[ \log a_n - \log a_1 = \sum_{i=1}^{n-1}(\log a_{i+1}-\log a_i) \le \sum_{i=1}^{n-1}\frac{1}{4i^2} < \frac12 \]
for all n, so it is clear that the sequence $(a_n)$ is bounded. Since we have shown that $(a_n)$ is monotone increasing and bounded, it must be convergent. Let $\lim_{n\to\infty}a_n = a$. Since every term of the sequence is positive, $a > 0$. It is clear that $a_n^2/a_{2n}\to a^2/a = a$. On the other hand, a simple computation gives
\[ \frac{a_n^2}{a_{2n}} = \frac{\binom{2n}{n}\cdot\sqrt{\pi n}}{4^n} \]
for all n. Thus by Wallis's formula (or more precisely by (15.10)), $a_n^2/a_{2n}\to1$, so $a = 1$. $\square$
Remark 15.16. One can show that
\[ \Bigl(\frac ne\Bigr)^n\sqrt{2\pi n} < n! < \Bigl(\frac ne\Bigr)^n\sqrt{2\pi n}\cdot e^{1/(12n)} \]
for every positive integer n. A somewhat weaker statement is presented in Exercise 15.24.
Exercises
15.13. Compute the following integrals:
(a) $\int_0^1 \sqrt x\,e^{\sqrt x}\,dx$;  (b) $\int_2^3 \dfrac{\sqrt{\log x}}{x}\,dx$;
(c) $\int_0^{\pi^2} \sin\sqrt x\,dx$;  (d) $\int_0^1 \operatorname{arc\,tg}x\,dx$;
(e) $\int_0^1 \operatorname{arc\,tg}\sqrt x\,dx$;  (f) $\int_0^1 \log(1+x^2)\,dx$;
(g) $\int_0^1 \sqrt{x^3+x^2}\,dx$;  (h) $\int e^{ax}\cos(bx)\,dx$.
15.14. Apply integration by parts to get the equation
\[ \int\frac{1}{x\log x}\,dx = \int(\log x)'\cdot\frac{1}{\log x}\,dx = \log x\cdot\frac{1}{\log x} - \int\log x\cdot\frac{-1}{\log^2x}\cdot\frac1x\,dx = 1 + \int\frac{1}{x\log x}\,dx. \]
Thus 0 = 1. Where did we make a mistake?
15.15. Prove that if f is strictly monotone and differentiable in the interval I, $\varphi = f^{-1}$, and $\int f(x)\,dx = F(x)+c$, then
\[ \int\varphi(y)\,dy = y\varphi(y) - F(\varphi(y)) + c. \]
15.16. Check the correctness of the following computation:
\[ 2n\cdot\int\frac{x^2}{(x^2+1)^{n+1}}\,dx = \int x\cdot\Bigl(-\frac{1}{(x^2+1)^n}\Bigr)'\,dx = -\frac{x}{(x^2+1)^n} + \int\frac{dx}{(x^2+1)^n} + c. \]
15.17. Prove that if f and g are n times continuously differentiable in an interval I, then
\[ \int fg^{(n)}\,dx = fg^{(n-1)} - f'g^{(n-2)} + \cdots + (-1)^{n-1}f^{(n-1)}g + (-1)^n\int f^{(n)}g\,dx. \]
(H)

15.18. Prove that if p is a polynomial of degree n, then
\[ \int e^{-x}p(x)\,dx = -e^{-x}\cdot\bigl(p(x)+p'(x)+\cdots+p^{(n)}(x)\bigr) + c. \]
15.19. Prove that if f is twice differentiable and f″ is integrable in [a, b], then
\[ \int_a^b xf''(x)\,dx = \bigl(bf'(b)-f(b)\bigr) - \bigl(af'(a)-f(a)\bigr). \]
15.20. Prove that
\[ \int_0^1 x^m(1-x)^n\,dx = \frac{m!\,n!}{(m+n+1)!} \qquad (m,n\in\mathbb N). \]

15.21. Compute the value of $\int_0^1(1-x^2)^n\,dx$ for every $n\in\mathbb N$.

15.22. Prove that if
\[ f_1(x) = \int_0^x f(t)\,dt,\quad f_2(x) = \int_0^x f_1(t)\,dt,\ \ldots,\ f_k(x) = \int_0^x f_{k-1}(t)\,dt, \]
then
\[ f_k(x) = \frac{1}{(k-1)!}\cdot\int_0^x f(t)(x-t)^{k-1}\,dt. \]
15.23.
(a) Prove that for every $n\in\mathbb N$, there exist integers $a_n$ and $b_n$ such that $\int_0^1 x^ne^x\,dx = a_n\cdot e + b_n$ holds.
(b) Prove that
\[ \lim_{n\to\infty}\int_0^1 x^ne^x\,dx = 0. \]
(c) Prove that e is irrational.
15.24. Prove the following stronger version of Stirling's formula:
\[ \Bigl(\frac ne\Bigr)^n\sqrt{2\pi n} < n! \le \Bigl(\frac ne\Bigr)^n\sqrt{2\pi n}\cdot e^{1/(4(n-1))} \tag{15.15} \]
for every integer n > 1. (S)
15.3 Integration by Substitution
We obtained the formulas in Theorems 13.30 and 13.33 by differentiating $f(ax+b)$, $f(x)^{\alpha+1}$, and $\log f(x)$, and using the differentiation rules for compositions of functions. These formulas are special cases of the following theorem, which is called integration by substitution.
Theorem 15.17. Suppose the function g is differentiable on the interval I, f is defined on the interval J = g(I), and f has a primitive function on J.³ Then the function $(f\circ g)\cdot g'$ also has a primitive function on I, and
\[ \int f\bigl(g(t)\bigr)\cdot g'(t)\,dt = F\bigl(g(t)\bigr) + c, \tag{15.16} \]
where $\int f\,dx = F(x)+c$.

Proof. The theorem is rather clear from the differentiation rule for compositions of functions. $\square$
We can use equation (15.16) in both directions. We use it "left to right" when we need to compute an integral of the form $\int f\bigl(g(t)\bigr)\cdot g'(t)\,dt$. Then the following formal procedure automatically changes the integral we want to compute to the right-hand side of (15.16):
\[ \int f\bigl(g(t)\bigr)\cdot g'(t)\,dt = \Bigl[\;g(t)=x;\quad g'(t)=\frac{dx}{dt};\quad g'(t)\,dt=dx\;\Bigr] = \Bigl(\int f(x)\,dx\Bigr)\Big|_{x=g(t)}. \]
Examples 15.18. 1. The integral $\int t\cdot e^{t^2}\,dt$ is changed into the form $\int f\bigl(g(t)\bigr)\cdot g'(t)\,dt$ if we divide it and multiply it by 2 at the same time:
\[ \int t\cdot e^{t^2}\,dt = \frac12\cdot\int e^{t^2}\cdot(2t)\,dt = \frac12\cdot\int e^{t^2}\cdot(t^2)'\,dt = F(t^2) + c, \]

³ Here we use the fact that the image of an interval under a continuous function is also an interval; see Corollary 10.58 and the remark following it.
where $F(x) = \int e^x\,dx = e^x + c$. Thus the integral is equal to $\frac12 e^{t^2} + c$. With the help of the formalism above, we can get the same result faster: $x = t^2$, $dx/dt = 2t$, $2t\,dt = dx$,
\[ \int t\cdot e^{t^2}\,dt = \int\frac12 e^{t^2}\cdot2t\,dt = \int\frac12 e^x\,dx = \frac12 e^x + c = \frac12 e^{t^2} + c. \]
2. $\int\operatorname{tg}t\,dt = \int\frac{\sin t}{\cos t}\,dt = -\int\frac{1}{\cos t}\cdot(\cos t)'\,dt = -F(\cos t) + c$, where $F(x) = \int\frac1x\,dx = \log|x| + c$. Thus the integral is equal to $-\log|\cos t| + c$. The same result can be obtained with the formal procedure we introduced above: $\cos t = x$, $dx/dt = -\sin t$, $-\sin t\,dt = dx$,
\[ \int\frac{\sin t}{\cos t}\,dt = \int-\frac{dx}{x} = -\log|x| + c = -\log|\cos t| + c. \]
Let us see some examples in which we apply (15.16) "right to left," that is, when we want to determine an integral of the form $\int f\,dx$, and we are looking for a g with which we can compute the left-hand side of (15.16), that is, the integral $\int f\bigl(g(t)\bigr)\cdot g'(t)\,dt$. To achieve this goal, we usually look for a function g for which $f\circ g$ is simpler than f (and then we hope that the $g'(t)$ factor does not make our integral too complicated). If $\int f\bigl(g(t)\bigr)\cdot g'(t)\,dt = G(t)+c$, then by (15.16), the primitive function F of the function f we seek satisfies $G(t) = F\bigl(g(t)\bigr)+c$. Therefore, we have $\int f\,dx = F(x)+c = G(g^{-1}(x))+c$, assuming that g has an inverse.
Examples 15.19. 1. We can attempt to solve the integral $\int\frac{dx}{1+\sqrt x}$ with the help of the function $g(t) = t^2$, since then for t > 0, we have $f\bigl(g(t)\bigr)\cdot g'(t) = 2t/(1+t)$, whose integral can be easily computed. If this is $G(t)+c$, then the integral we seek is $G(\sqrt x)+c$, since the inverse of g is the function $\sqrt x$. With the formalism above, the computation looks like this: $x = t^2$, $dx/dt = 2t$, $dx = 2t\,dt$,
\[ \int\frac{dx}{1+\sqrt x} = \int\frac{1}{1+t}\cdot2t\,dt = 2\cdot\int\Bigl(1-\frac{1}{1+t}\Bigr)\,dt = 2t - 2\log(1+t) + c = 2\sqrt x - 2\log(1+\sqrt x) + c. \]
Here we get the last equality by substituting $t = \sqrt x$, that is, $g(t) = t^2$.
2. We can use the substitution $e^x = t$, i.e., $x = \log t$, for the integral $\int\frac{e^{2x}}{e^x+1}\,dx$. We get that $dx/dt = 1/t$, $dx = dt/t$,
\[ \int\frac{e^{2x}}{e^x+1}\,dx = \int\frac{t^2}{t+1}\cdot\frac1t\,dt = \int\frac{t}{t+1}\,dt = \int\Bigl(1-\frac{1}{t+1}\Bigr)\,dt = t - \log(t+1) + c = e^x - \log(e^x+1) + c. \]
3. Let us compute the integral $\int\sqrt{1-x^2}\,dx$. Let $x = \sin t$, where $t\in[-\pi/2,\pi/2]$. Then $dx/dt = \cos t$, $dx = \cos t\,dt$,
\[ \int\sqrt{1-x^2}\,dx = \int\sqrt{1-\sin^2t}\cdot\cos t\,dt = \int\cos^2t\,dt = \int\frac{1+\cos2t}{2}\,dt = \frac t2 + \frac{\sin2t}{4} + c = \frac{\operatorname{arc\,sin}x}{2} + \frac{\sin(2\operatorname{arc\,sin}x)}{4} + c. \]
Here the second term can be simplified if we notice that $\sin2t = 2\sin t\cdot\cos t$, and so $\sin(2\operatorname{arc\,sin}x) = 2x\cdot\cos(\operatorname{arc\,sin}x) = 2x\sqrt{1-x^2}$. In the end, we get that
\[ \int\sqrt{1-x^2}\,dx = \frac12\cdot\operatorname{arc\,sin}x + \frac12\cdot x\sqrt{1-x^2} + c. \tag{15.17} \]
Examples 15.20. 1. Let r > 0. By (15.17) and applying Theorem 13.30, we get
\[ \int\sqrt{r^2-x^2}\,dx = \frac{r^2}{2}\cdot\operatorname{arc\,sin}\frac xr + \frac{rx}{2}\cdot\sqrt{1-\Bigl(\frac xr\Bigr)^2} + c. \]
Thus by the fundamental theorem of calculus,
\[ \int_{-r}^r\sqrt{r^2-x^2}\,dx = \left[\frac{r^2}{2}\cdot\operatorname{arc\,sin}\frac xr + \frac{rx}{2}\cdot\sqrt{1-\Bigl(\frac xr\Bigr)^2}\right]_{-r}^r = r^2\cdot\operatorname{arc\,sin}1 = \frac{r^2\pi}{2}, \]
that is, the area of the semicircle with radius r is $r^2\pi/2$. (Recall that we defined π to be the circumference of the unit semicircle on p. 163.) We have recovered the well-known formula for the area of a circle with radius r (namely $r^2\pi$), which Archimedes stated in the form that the area of a circle agrees with the area of the right triangle whose legs (the sides adjacent to the right angle) are equal to the radius and the circumference of the circle.
2. With the help of the integral (15.17), we can determine the area of an ellipse as well. The equation of an ellipse with axes a and b is
\[ \frac{x^2}{a^2} + \frac{y^2}{b^2} = 1, \]
so the graph of the function $f(x) = b\cdot\sqrt{1-\bigl(\frac xa\bigr)^2}$ $(x\in[-a,a])$ bounds half of the area of the ellipse. Now by (15.17) and Theorem 13.30,
\[ \int b\cdot\sqrt{1-\Bigl(\frac xa\Bigr)^2}\,dx = \frac{ba}{2}\cdot\operatorname{arc\,sin}\frac xa + \frac{bx}{2}\cdot\sqrt{1-\Bigl(\frac xa\Bigr)^2} + c, \]
so by the fundamental theorem of calculus,
\[ \int_{-a}^a b\cdot\sqrt{1-\Bigl(\frac xa\Bigr)^2}\,dx = ba\cdot\operatorname{arc\,sin}1 = \frac{ab\pi}{2}, \]
that is, the area of the ellipse is abπ.
The following theorem gives us a version of integration by substitution for
definite integrals.
Theorem 15.21. Suppose that g is differentiable and g′ is integrable on the interval [a, b]. If f is continuous on the image of g, that is, on the interval⁴ g([a, b]), then
\[ \int_a^b f\bigl(g(t)\bigr)\cdot g'(t)\,dt = \int_{g(a)}^{g(b)} f(x)\,dx. \tag{15.18} \]

Proof. Since g is differentiable, it is continuous. Thus $f\circ g$ is also continuous, so it is integrable on [a, b], which implies that $(f\circ g)\cdot g'$ is also integrable on [a, b]. On the other hand, statement (iii) of Theorem 15.5 ensures that f has a primitive function. If $F' = f$, then by the fundamental theorem of calculus, the right-hand side of (15.18) is $F\bigl(g(b)\bigr) - F\bigl(g(a)\bigr)$. Now by the differentiation rule for compositions of functions, $(F\circ g)' = (f\circ g)\cdot g'$, so by applying the fundamental theorem of calculus again, we get that the left-hand side of (15.18) is also $F\bigl(g(b)\bigr) - F\bigl(g(a)\bigr)$, meaning that (15.18) is true. $\square$
We note that in the theorem above, we can relax the condition of continuity of f
and assume only that f is integrable on the image of g. In other words, the following
theorem also holds.
Theorem 15.22. Suppose that g is differentiable and g′ is integrable on the interval
[a, b]. If f is integrable on the image of g, that is, on the interval g([a, b]), then
( f ◦ g) · g′ is integrable on [a, b], and (15.18) holds.
This more general theorem is harder to prove, since the integrability of ( f ◦ g) · g′
does not follow as easily as in the case of Theorem 15.21, and the fundamental
theorem of calculus cannot be applied either. The proof can be found in the appendix
of this chapter.
Exercises
15.25. Prove that
\[ \int_0^{\pi}\frac{\sin2kx}{\sin x}\,dx = 0 \]
holds for every integer k. (H)

15.26. Prove that if f is integrable on [0, 1], then
\[ \int_0^{\pi} f(\sin x)\cos x\,dx = 0. \]

⁴ Corollary 10.58 ensures that g([a, b]) is an interval.
15.4 Integrals of Elementary Functions
In Chapter 11, we became acquainted with the elementary functions. These are the
polynomials, rational, exponential, power, and logarithmic functions, trigonometric
and hyperbolic functions, their inverses, and every function that can be expressed
from these using a finite sequence of basic operations and compositions.5 We will
familiarize ourselves with methods that allow us to determine the indefinite integrals
of numerous elementary functions.
15.4.1 Rational Functions
Definition 15.23. We define elementary rational functions to be
(i) quotients of the form $A/(x-a)^n$, where $n\in\mathbb N^+$ and $A,a\in\mathbb R$; as well as
(ii) quotients of the form $(Ax+B)/(x^2+ax+b)^n$, where $n\in\mathbb N^+$ and $A,B,a,b\in\mathbb R$ are constants such that $a^2-4b<0$ holds.
(This last condition is equivalent to saying that $x^2+ax+b$ does not have any real roots.)

We will first determine the integrals of the elementary rational functions. The first type does not give us any trouble, since $\int\frac{A}{x-a}\,dx = A\cdot\log|x-a| + c$, and for n > 1 we have $\int\frac{A}{(x-a)^n}\,dx = \frac{A}{1-n}\cdot\frac{1}{(x-a)^{n-1}} + c$.
To find the integrals of the second type of elementary rational functions, we will first show that computing the integral of $(Ax+B)/(x^2+ax+b)^n$ can be reduced to computing the integral $\int dx/(x^2+1)^n$. This can be seen as follows:
\[ \int\frac{Ax+B}{(x^2+ax+b)^n}\,dx = \int\frac{(2x+a)\cdot(A/2) + B - (aA/2)}{(x^2+ax+b)^n}\,dx = \frac A2\cdot\int\frac{(x^2+ax+b)'}{(x^2+ax+b)^n}\,dx + \Bigl(B-\frac{aA}{2}\Bigr)\cdot\int\frac{dx}{(x^2+ax+b)^n}. \]
Here
\[ \int\frac{(x^2+ax+b)'}{x^2+ax+b}\,dx = \log(x^2+ax+b) + c, \]
and if n > 1, then
\[ \int\frac{(x^2+ax+b)'}{(x^2+ax+b)^n}\,dx = \frac{1}{1-n}\cdot\frac{1}{(x^2+ax+b)^{n-1}} + c. \]

⁵ What we call elementary functions is partially based on history and tradition, partially based on usefulness, and partially based on a deeper reason that comes to light through complex analysis. In some investigations, it proves to be reasonable to list algebraic functions among the elementary functions. (Algebraic functions were defined in Exercise 11.45.)
Moreover,
\[ \int\frac{dx}{(x^2+ax+b)^n} = \int\frac{dx}{\bigl((x+(a/2))^2 + b-(a^2/4)\bigr)^n} = \int\frac{dx}{\bigl((x+(a/2))^2 + d^2\bigr)^n}, \]
where $d = \sqrt{b-(a^2/4)}$. (By the conditions, $b-(a^2/4)>0$.) Now if
\[ \int\frac{dx}{(x^2+1)^n} = F_n(x) + c, \]
then by Theorem 13.30,
\[ \int\frac{dx}{\bigl((x+(a/2))^2+d^2\bigr)^n} = \frac{1}{d^{2n}}\cdot\int\frac{dx}{\bigl(\bigl(\frac xd+\frac{a}{2d}\bigr)^2+1\bigr)^n} = \frac{1}{d^{2n-1}}\cdot F_n\Bigl(\frac xd+\frac{a}{2d}\Bigr) + c. \]
As for the functions $F_n$, we know that $F_1(x) = \operatorname{arc\,tg}x + c$. On the other hand, for every $n\ge1$, the equality
\[ F_{n+1} = \frac{1}{2n}\cdot\frac{x}{(x^2+1)^n} + \frac{2n-1}{2n}\cdot F_n + c \tag{15.19} \]
holds. This is easy to check by differentiating both sides (see also Exercise 15.16). Applying the recurrence formula (15.19) repeatedly gives us the functions $F_n$. So for example,
\[ \int\frac{dx}{(x^2+1)^2} = \frac12\cdot\frac{x}{x^2+1} + \frac12\cdot\operatorname{arc\,tg}x + c, \qquad \int\frac{dx}{(x^2+1)^3} = \frac14\cdot\frac{x}{(x^2+1)^2} + \frac38\cdot\frac{x}{x^2+1} + \frac38\cdot\operatorname{arc\,tg}x + c. \tag{15.20} \]
By the following theorem, every rational function can be expressed as the sum of a polynomial and finitely many elementary rational functions, and since we know how to integrate these, we can determine the integral of any rational function (at least theoretically).

We will need the concept of divisibility for polynomials. We say that the polynomial p is divisible by the polynomial q, and we denote this by q | p, if there exists a polynomial r such that $p = q\cdot r$. It is known that if the polynomials $q_1$ and $q_2$ do not have a nonconstant common divisor, then there exist polynomials $p_1$ and $p_2$ such that $p_1q_1 + p_2q_2 \equiv 1$. (We can find such $p_1$ and $p_2$ by repeatedly applying the Euclidean algorithm to $q_1$ and $q_2$.)

Moreover, we will use the fundamental theorem of algebra (see page 201) and its corollary that every polynomial with real coefficients can be written as the product of polynomials with real coefficients of degree one and two.
Theorem 15.24 (Partial Fraction Decomposition). Every rational function R can
be written as the sum of a polynomial and finitely many elementary rational functions such that the denominators of these elementary rational functions all divide
the denominator of R.
Proof. Let R = p/q, where p and q are polynomials and $q\not\equiv0$. Let the degree of q be n; we will prove the theorem by induction on n. If n = 0, that is, q is constant, then R is a polynomial, and the statement holds (without any elementary rational functions in the decomposition).

Let n > 0, and assume the statement is true for every rational function whose denominator has degree smaller than n. Factor q into a product of polynomials of degree one and two. We can assume that the degree-two polynomials here do not have any real roots, since otherwise, we could factor them further. We distinguish three cases.

1. Two different terms appear in the factorization of the polynomial q. Then q can be written as $q_1q_2$, where neither $q_1$ nor $q_2$ is constant and the polynomials $q_1$, $q_2$ do not have any nonconstant common divisors. Then there exist polynomials $p_1$ and $p_2$ such that $p_1q_1 + p_2q_2\equiv1$; thus
\[ \frac pq = \frac{pp_1q_1 + pp_2q_2}{q_1q_2} = \frac{pp_1}{q_2} + \frac{pp_2}{q_1}. \]
Here we can apply the inductive hypothesis to both $pp_1/q_2$ and $pp_2/q_1$, which immediately gives us the statement of the theorem.

2. $q = c(x-a)^n$. In this case, let us divide p by $(x-a)$ with remainder: $p = p_1(x-a) + A$, from which
\[ \frac pq = \frac{p_1}{c(x-a)^{n-1}} + \frac{A/c}{(x-a)^n} \]
follows. Here $\frac{A/c}{(x-a)^n}$ is an elementary rational function, and we can apply the induction hypothesis to the term $p_1/\bigl(c(x-a)^{n-1}\bigr)$.

3. $q = c\cdot(x^2+ax+b)^k$, where $a^2-4b<0$ and $n = 2k$. Then dividing p by $(x^2+ax+b)$ with remainder gives us $p = p_1\cdot(x^2+ax+b) + (Ax+B)$, from which
\[ \frac pq = \frac{p_1}{c(x^2+ax+b)^{k-1}} + \frac{\frac Acx+\frac Bc}{(x^2+ax+b)^k} \]
follows. Here we can apply the inductive hypothesis to the first term, while the second term is an elementary rational function, proving the theorem. $\square$
Remarks 15.25. 1. One can show that the decomposition of rational functions in Theorem 15.24 is unique (see Exercise 15.32). This is the partial fraction decomposition of a rational function.

2. If the degree of p is smaller than that of q, then only elementary rational functions appear in the partial fraction decomposition of p/q (and no polynomial). This is because then $\lim_{x\to\infty}p/q = 0$. Since every elementary rational function tends to 0 at ∞, this must hold for the polynomial appearing in the decomposition as well, which implies that it must be identically zero.
How can we find a partial fraction decomposition? We introduce three methods.

1. Follow the proof of the theorem. If, for example,
\[ \frac pq = \frac{x+2}{x(x^2+1)^2}, \]
then $1\cdot(x^2+1)^2 - (x^3+2x)\cdot x = 1$, and
\[ \frac pq = \frac{x+2}{x} - \frac{x^4+2x^3+2x^2+4x}{(x^2+1)^2} = 1 + \frac2x - \frac{(x^2+1)(x^2+2x+1)+2x-1}{(x^2+1)^2} = \frac2x - \frac{2x}{x^2+1} - \frac{2x-1}{(x^2+1)^2}. \]

2. The method of indeterminate coefficients. From the theorem and by Remark 15.25, we know that
\[ \frac pq = \frac Ax + \frac{Bx+C}{x^2+1} + \frac{Dx+E}{(x^2+1)^2}. \tag{15.21} \]
Bringing this to a common denominator yields
\[ x+2 = A(x^4+2x^2+1) + (Bx+C)(x^2+1)x + (Dx+E)x. \]
This gives us a system of equations for the unknown coefficients: $A+B=0$, $C=0$, $2A+B+D=0$, $C+E=1$, and $A=2$. Then we can compute that $A=2$, $B=-2$, $C=0$, $D=-2$, and $E=1$.

3. If a term of the form $A/(x-a)^n$ appears in the decomposition, then (assuming that n is the largest such power for a) we can immediately determine A if we multiply everything by $(x-a)^n$ and substitute $x = a$ into the equation. So for example, for (15.21) we have
\[ A = \left.\frac{x+2}{(x^2+1)^2}\right|_{x=0} = 2. \]
If we subtract the known terms from both sides, we reduce the question to finding the partial fraction decomposition of a simpler rational function:
\[ \frac{x+2}{x(x^2+1)^2} - \frac2x = \frac{x+2-2x^4-4x^2-2}{x(x^2+1)^2} = \frac{-2x^3-4x+1}{(x^2+1)^2}. \]
Here $-2x^3-4x+1 = (-2x)(x^2+1) + (-2x+1)$, so we get the same decomposition.
15.4.2 Integrals Containing Roots
From now on, R(u, v) will denote a two-variable rational function. This means that R(u, v) is constructed from the variables u and v and from constants by the four basic operations. One can easily show that this holds exactly when
\[ R(u,v) = \frac{\sum_{i=0}^n\sum_{j=0}^n a_{ij}u^iv^j}{\sum_{i=0}^n\sum_{j=0}^n b_{ij}u^iv^j}, \tag{15.22} \]
where $n\ge0$ is an integer and the $a_{ij}$ and $b_{ij}$ are constants.
1. We show that the integral
\[ \int R\left(x,\sqrt[n]{\frac{ax+b}{cx+d}}\right)dx \]
can be reduced to the integral of a (one-variable) rational function with the substitution $\sqrt[n]{(ax+b)/(cx+d)} = t$. Clearly, with this substitution, $(ax+b)/(cx+d) = t^n$, $ax+b = cxt^n + dt^n$, and $x = (dt^n-b)/(a-ct^n)$, so dx/dt is a rational function.

Example 15.26. Compute the integral $\int x^{-2}\cdot\sqrt[3]{(x+1)/x}\,dx$. With the substitution $\sqrt[3]{(x+1)/x} = t$, we have $x+1 = t^3x$, $x = 1/(t^3-1)$, and $dx/dt = -3t^2/(t^3-1)^2$, so
\[ \int\frac1{x^2}\sqrt[3]{\frac{x+1}{x}}\,dx = \int(t^3-1)^2\cdot t\cdot\frac{-3t^2}{(t^3-1)^2}\,dt = \int-3t^3\,dt = -\frac34t^4 + c = -\frac34\cdot\left(\frac{x+1}{x}\right)^{4/3} + c. \]
2. The integral $\int R\bigl(x,\sqrt{ax^2+bx+c}\bigr)\,dx$ (where $a\ne0$) can also be reduced to an integral of a rational function with a suitable substitution.

a. If $ax^2+bx+c$ has a root, then $ax^2+bx+c = a(x-\alpha)(x-\beta)$, so
\[ \sqrt{ax^2+bx+c} = \sqrt{a(x-\alpha)(x-\beta)} = |x-\alpha|\cdot\sqrt{\frac{a(x-\beta)}{x-\alpha}}, \]
and this leads us back to an integral of the previous type.

b. If $ax^2+bx+c$ does not have real roots, then it must be positive everywhere, since otherwise the integrand would not be defined anywhere. Thus $a>0$ and $c>0$. In this case, we can use the substitution $\sqrt{ax^2+bx+c} - \sqrt a\cdot x = t$. We get that
\[ ax^2+bx+c = t^2 + 2t\sqrt a\cdot x + ax^2, \]
so $x = (c-t^2)/(2\sqrt a\cdot t - b)$, and dx/dt is also a rational function.
We can apply the substitution $\sqrt{ax^2+bx+c} - \sqrt c = tx$ as well. This gives us $ax^2+bx+c = x^2t^2 + 2\sqrt c\,tx + c$, $ax+b = xt^2 + 2\sqrt c\,t$, $x = (2\sqrt c\,t-b)/(a-t^2)$, so dx/dt is a rational function.

Example 15.27. Compute the integral $\int\sqrt{x^2+1}\,dx$. Substituting $\sqrt{x^2+1} - x = t$, we obtain $x = (1-t^2)/(2t)$, $\sqrt{x^2+1} = x+t = (1+t^2)/(2t)$, $dx/dt = -(1+t^2)/(2t^2)$, so
\[ \int\sqrt{x^2+1}\,dx = \int\frac{1+t^2}{2t}\cdot\frac{-(1+t^2)}{2t^2}\,dt = -\frac14\int\frac{1+2t^2+t^4}{t^3}\,dt = \frac18\cdot\frac1{t^2} - \frac12\log|t| - \frac{t^2}{8} + c = \]
\[ = \frac18\cdot\frac{1}{(\sqrt{x^2+1}-x)^2} - \frac12\log\bigl(\sqrt{x^2+1}-x\bigr) - \frac{(\sqrt{x^2+1}-x)^2}{8} + c = \frac12\cdot x\sqrt{x^2+1} - \frac12\log\bigl(\sqrt{x^2+1}-x\bigr) + c. \tag{15.23} \]
15.4.3 Rational Functions of $e^x$

Let R(x) be a one-variable rational function. To compute the integral $\int R(e^x)\,dx$, let us use the substitution $e^x = t$, $x = \log t$, $dx/dt = 1/t$. The integral becomes the integral of a rational function. See, e.g., Example 15.19.
15.4.4 Trigonometric Functions

a. Integration of an expression of the form $R(\sin x,\cos x)$ can always be done with the substitution $\operatorname{tg}(x/2) = t$. Indeed, $\sin^2x+\cos^2x = 1$ gives $\operatorname{tg}^2x+1 = 1/\cos^2x$, and thus
\[ \cos x = \frac{1}{\pm\sqrt{1+\operatorname{tg}^2x}}, \qquad \sin x = \frac{\operatorname{tg}x}{\pm\sqrt{1+\operatorname{tg}^2x}}, \tag{15.24} \]
so
\[ \sin x = 2\sin\frac x2\cos\frac x2 = \frac{2\operatorname{tg}(x/2)}{1+\operatorname{tg}^2(x/2)} \qquad\text{and}\qquad \cos x = \cos^2\frac x2 - \sin^2\frac x2 = \frac{1-\operatorname{tg}^2(x/2)}{1+\operatorname{tg}^2(x/2)}. \]
Thus with the substitution $\operatorname{tg}(x/2) = t$, we have
\[ \sin x = \frac{2t}{1+t^2},\qquad \cos x = \frac{1-t^2}{1+t^2},\qquad x = 2\operatorname{arc\,tg}t,\qquad \frac{dx}{dt} = \frac{2}{1+t^2}. \]
Example 15.28.
\[ \int\frac{dx}{\sin x} = \int\frac{1+t^2}{2t}\cdot\frac{2}{1+t^2}\,dt = \int\frac{dt}{t} = \log|t| + c = \log\left|\operatorname{tg}\frac x2\right| + c. \]
Then
\[ \int\frac{dx}{\cos x} = \int\frac{dx}{\sin\bigl(\frac{\pi}{2}-x\bigr)} = -\log\left|\operatorname{tg}\Bigl(\frac{\pi}{4}-\frac x2\Bigr)\right| + c. \]
b. In some cases, the substitution $\operatorname{tg}x = t$ will also lead us to our goal. By (15.24),
\[ \sin x = \frac{t}{\pm\sqrt{1+t^2}},\qquad \cos x = \frac{1}{\pm\sqrt{1+t^2}},\qquad\text{and}\qquad \frac{dx}{dt} = \frac{1}{1+t^2}, \]
so this substitution also leads to a rational function if the exponents of sin x and cos x are of the same parity in every term of the denominator and every term of the numerator.
Example 15.29.
\[ \int\frac{dx}{1+\cos^2x} = \int\frac{1}{1+1/(1+t^2)}\cdot\frac{dt}{1+t^2} = \int\frac{dt}{2+t^2} = \frac12\cdot\int\frac{dt}{1+\bigl(t/\sqrt2\bigr)^2} = \frac{1}{\sqrt2}\cdot\operatorname{arc\,tg}\frac{t}{\sqrt2} + c = \frac{1}{\sqrt2}\cdot\operatorname{arc\,tg}\Bigl(\frac{\operatorname{tg}x}{\sqrt2}\Bigr) + c. \]
c. Applying the substitution $\sin x = t$ on the interval $[-\pi/2,\pi/2]$ gives us
\[ \cos x = \sqrt{1-t^2},\qquad x = \operatorname{arc\,sin}t,\qquad\text{and}\qquad \frac{dx}{dt} = \frac{1}{\sqrt{1-t^2}}, \]
so we get a rational function if the power of cos x is even in the numerator and odd in the denominator, or vice versa.
Example 15.30.
\[ \int\frac{dx}{\cos x} = \int\frac{1}{\sqrt{1-t^2}}\cdot\frac{dt}{\sqrt{1-t^2}} = \int\frac{dt}{1-t^2} = \frac12\cdot\log\left|\frac{1+t}{1-t}\right| + c = \frac12\cdot\log\frac{1+\sin x}{1-\sin x} + c. \]
(Check that this agrees with the result from Example 15.28. Also check that the right-hand side is a primitive function of $1/\cos x$ in every interval where $\cos x\ne0$, not just in $(-\pi/2,\pi/2)$.)
d. The substitution cos x = t also leads us to the integral of a rational function if the
power of sin x is even in the numerator and odd in the denominator, or vice versa.
Let us note that the integrals of the form $\int R\bigl(x,\sqrt{ax^2+bx+c}\bigr)\,dx$ can also be computed using a method different from the one seen on page 354. With a linear substitution, we can reduce the integral to one of the integrals
\[ \int R\bigl(x,\sqrt{1-x^2}\bigr)\,dx,\qquad \int R\bigl(x,\sqrt{x^2-1}\bigr)\,dx,\qquad\text{or}\qquad \int R\bigl(x,\sqrt{x^2+1}\bigr)\,dx. \]
In the first case, the substitution $x = \sin t$ gives us an integral that we encountered in the last section. In the second case, the substitution $x = \operatorname{ch}t$ gives us the integral $\int R(\operatorname{ch}t,\operatorname{sh}t)\,\operatorname{sh}t\,dt$, which we can tackle as in the third section if we recall the definitions of ch t and sh t. The third integral can be computed with the substitution $x = \operatorname{sh}t$.
Exercises
15.27. Compute the following definite integrals.
(a) $\int_0^1 \dfrac{x}{x^4+1}\,dx$;  (b) $\int_1^2 \dfrac{e^x+2}{e^{2x}+e^x}\,dx$;
(c) $\int_1^2 \dfrac{dx}{4^x-2^x}$;  (d) $\int_{\pi/4}^{\pi/2} \dfrac{dx}{\sin x\,(2+\cos x)}$;
(e) $\int_0^1 \sqrt{2^x-1}\,dx$;  (f) $\int_0^{\pi/4} (\operatorname{tg}x)^2\,dx$;
(g) $\int_1^2 \operatorname{arc\,sin}(1/x)\,dx$;  (h) $\int_2^3 x\cdot\log(x^2-x)\,dx$;
(i) $\int_0^{\pi/4} \dfrac{dx}{\cos^{10}x}$;  (j) $\int_2^4 \dfrac{x^2}{\sqrt{x^2-1}}\,dx$;
(k) $\int_0^{\pi/4} \dfrac{dx}{\sin^4x+\cos^4x}$.
15.28. Compute the following indefinite integrals.
(a) $\int \dfrac{2x+3}{x^2-5x+6}\,dx$;  (b) $\int \dfrac{x^3-2x^2+5x+1}{x^2+1}\,dx$;
(c) $\int \dfrac{x^{100}}{x-1}\,dx$;  (d) $\int \dfrac{dx}{x^3+8}$;
(e) $\int \dfrac{dx}{\sqrt{x+1}+\sqrt{x-1}}$;  (f) $\int \dfrac{1+\sqrt[3]x}{1-\sqrt[3]x}\,dx$;
(g) $\int \dfrac{dx}{\log\log x}$;  (h) $\int \log\bigl(x+\sqrt{1+x^2}\bigr)\,dx$;
(i) $\int \dfrac{e^x}{\sqrt{1+e^x}}\,dx$;  (j) $\int \dfrac{dx}{1+\sin x}$;
(k) $\int \dfrac{dx}{1+\cos x}$;  (l) $\int \dfrac{\sin x\cos x}{1+\sin^2x}\,dx$;
(m) $\int \dfrac{dx}{1+\operatorname{tg}x}$;  (n) $\int \sin x\cdot\log(\operatorname{tg}x)\,dx$.

15.29. Compute the integral $\int\sqrt{x^2+1}\,dx$ with the substitution $x = \operatorname{sh}t$, and compare the result with (15.23).

15.30. Compute the integral $\int\sqrt{x^2-1}\,dx$ with the substitution $x = \operatorname{ch}t$. (S)
15.31. The radius of a regular cylindrical container filled with water is r. The container is lying horizontally, that is, the curved part is on the ground. What force does
the water exert on the vertical flat circular sides of the container due to pressure if
the pressure at depth x is ρ x?
15.32. Show that the partial fraction decomposition of rational functions is
unique. (H)
15.33. Suppose that p and q are polynomials, $a\in\mathbb R$, and $q(a)\ne0$. We know that terms of the form $A_k/(x-a)^k$ appear in the partial fraction decomposition of the rational function $p(x)/\bigl(q(x)\cdot(x-a)^n\bigr)$ for k = 1, . . . , n.
(a) Prove that $A_n = p(a)/q(a)$.
(b) Express the rest of the $A_k$ in terms of p and q as well.

15.34. Prove (15.19) with the help of Exercise 15.16.
15.5 Nonelementary Integrals of Elementary Functions
Not all integrals of elementary functions can be computed with the help of the methods above. It might sound surprising at first, but there are some elementary functions
whose primitive functions cannot be expressed by elementary functions.6 This is a
significant difference between differentiation and integration, since—as we saw in
Chapter 12—the derivative of an elementary function is always elementary. But if an
operation (in this case differentiation) does not lead us out of a class of objects, why
should we expect its inverse operation (integration) to do the same? For example,
addition keeps us within the class of positive numbers, but subtraction does not; the
set of integers is closed under multiplication but not under division; squaring numbers keeps us in the realm of rational numbers but taking roots does not. It appears
that inverse operations are more complicated.
This phenomenon can be observed with differentiation and integration as well if
we consider subclasses of the elementary functions. For example, the derivatives of
⁶ Some examples are $e^x/x$, $1/\log x$, and $e^{-x^2}$; see Example 15.32.
rational functions are rational functions. However, the integral of a rational function
is not necessarily a rational function:
\[ \int\frac{dx}{x} = \log|x| + c, \]
and $\log|x|$ is not a rational function (which can easily be shown from the fact that $\lim_{x\to\infty}x^{\beta}\cdot\log x = \infty$ and $\lim_{x\to\infty}x^{-\beta}\cdot\log x = 0$ for all $\beta>0$; see Example 13.2 regarding this last statement).
The same is the case with rational functions formed from trigonometric functions: the derivative of a function of the form R(cos x, sin x) has the same form, but
the integral can be different. This is clear from the fact that every function of the
form R(cos x, sin x) is periodic with period 2π , while, for example,
\[ \int(1+\cos x)\,dx = x + \sin x + c \]
is not periodic. A less trivial example:
\[ \int\frac{dx}{\sin x} = \log\left|\operatorname{tg}\frac x2\right| + c, \]
and here the right-hand side cannot be expressed as a rational function of cos x and
sin x.
Thinking about it more carefully, we see that it is not that surprising that the
integrals of some elementary functions are not elementary functions. It is a different matter that a rigorous proof of this is surprisingly hard. Joseph Liouville7 was
the first to show—with the help of complex-analytic methods—that such elementary functions exist. In fact, Liouville proved that if the integral of an elementary
function is elementary, then the formula for that integral cannot be much more complicated than the original function. Unfortunately, even stating this precisely takes considerable effort. Some special cases are easier to state, as is the
following theorem of Liouville (but the proof still exceeds the scope of this book).
Theorem 15.31. Let f and g be rational functions, and suppose that f is not constant. If $\int e^fg\,dx$ can be expressed as an elementary function, then $\int e^fg\,dx = e^fh + c$, where h is a rational function and c is a constant.
With the help of this theorem, we can find several functions that do not have an
elementary integral.
'
x
Examples 15.32. 1. Let us show that ex dx cannot be expressed in terms of
elementary
functions.
'
If (ex /x) dx could be expressed in terms of elementary functions, then by Liouville’s theorem, there would exist a rational function S such that (S · ex )′ = ex /x.
Let S = p/q, where the polynomials p and q do not have any nonconstant common
divisors. Then
7
Joseph Liouville (1809–1882), French mathematician.
\[ \left(\frac pq\cdot e^x\right)' = \frac{p'q-pq'}{q^2}\cdot e^x + \frac pq\cdot e^x = \frac{e^x}{x}, \]
so $x(p'q - pq' + pq) = q^2$. We will show that this is impossible. First of all, q must be divisible by x. Let $q = x^kq_1$, where $q_1$ is not divisible by x. Then p cannot be divisible by x either, since $x\mid q$, and we assumed that p and q do not have any nonconstant common divisors. Thus the polynomial
\[ P = p'q - pq' + pq = p'x^kq_1 - p\bigl(kx^{k-1}q_1 + x^kq_1'\bigr) + px^kq_1 \]
is not divisible by $x^k$, since every term on the right-hand side except for one is divisible by $x^k$. On the other hand, $P = q^2/x = x^{2k-1}q_1^2$ is divisible by $x^k$, since $2k-1\ge k$. This is a contradiction, which shows that $\int(e^x/x)\,dx$ cannot be elementary.

2. We can immediately deduce that the integral $\int(1/\log x)\,dx$ cannot be expressed in terms of elementary functions either. With the substitution $e^x = t$, we get $x = \log t$, $dx = dt/t$, so
\[ \int\frac{e^x}{x}\,dx = \int\frac{dt}{\log t}. \]
Thus if $\int(1/\log t)\,dt$ were elementary, then so would be $\int(e^x/x)\,dx$, which is impossible.
The integral $\int(1/\log t)\,dt$ appears often in various fields of mathematics, so it has its own notation:
\[ \operatorname{Li}x = \int_2^x\frac{dt}{\log t} \qquad (x\ge2). \]
The function Li x (which is called the logarithmic integral function, or sometimes the integral logarithm) plays an important role in number theory. Let π(x) denote the number of primes less than or equal to x. According to the prime number theorem,
\[ \pi(x) \sim \frac{x}{\log x} \quad\text{if}\quad x\to\infty, \]
that is, the function $x/\log x$ approximates π(x) asymptotically well. One can show that the function Li x is an even better approximation, in that
\[ |\pi(x) - \operatorname{Li}x| = o\!\left(\frac{x}{\log^kx}\right) \]
for all k, while
\[ \left|\pi(x) - \frac{x}{\log x}\right| > \frac{cx}{\log^2x} \]
for a suitable constant c > 0. The logarithmic integral function is just one of many important functions that are defined as integrals of elementary functions but are not elementary themselves.
3. Another such function, of great importance in probability theory, is $\Phi(x) = \int_0^x e^{-t^2}\,dt$, the function that describes the so-called normal distribution.
By Liouville’s theorem above, we can easily deduce that this function is not an
elementary functions (see Exercise 15.38).
4. Another example is the elliptic integrals. In Example 15.20, we saw that the area of an ellipse with axes a and b is abπ. Determining the circumference of an ellipse is a harder problem. To simplify computation, assume that a = 1 and b < a. The graph of the function $f(x) = b\cdot\sqrt{1-x^2}$ over the interval [0, 1] gives us the portion of the ellipse with axes 1 and b lying in the quadrant $\{(x,y) : x,y\ge0\}$. The arc length of this graph is thus a quarter of the circumference of the ellipse. The function f is monotone decreasing on the interval [0, 1], so by Theorem 10.79, the graph is rectifiable there. Let s(x) denote the arc length of the graph over the interval [0, x]. Since the derivative of f,
\[ f'(x) = b\cdot\frac{-2x}{2\sqrt{1-x^2}} = -\frac{bx}{\sqrt{1-x^2}}, \]
is continuous on [0, 1), s is differentiable on this interval by Theorem 13.41, and $s'(x) = \sqrt{1+(f'(x))^2}$ for all $0\le x<1$. Let us introduce the notation $k = \sqrt{1-b^2}$. Then 0 < k < 1, and
\[ 1 + (f'(x))^2 = 1 + \frac{b^2x^2}{1-x^2} = \frac{1-(1-b^2)x^2}{1-x^2} = \frac{1-k^2x^2}{1-x^2}. \]
Thus s is a primitive function of $\sqrt{(1-k^2t^2)/(1-t^2)}$ in the interval (0, 1), that is,
\[ s \in \int\frac{\sqrt{1-k^2t^2}}{\sqrt{1-t^2}}\,dt = \int\frac{1-k^2t^2}{\sqrt{(1-t^2)(1-k^2t^2)}}\,dt. \tag{15.25} \]
Liouville showed that the integral appearing in (15.25) cannot be expressed with elementary functions, so the circumference of the ellipse cannot generally be expressed in a "closed" form (not containing an integral sign).

Let us substitute $t = 1-(1/u)$ in the integral above. We get that $dt = du/u^2$ and
\[ \int\frac{\sqrt{1-k^2t^2}}{\sqrt{1-t^2}}\,dt = \int\frac{\sqrt{(1-k^2)+(2k^2/u)-(k^2/u^2)}}{\sqrt{(2/u)-(1/u^2)}}\cdot\frac{1}{u^2}\,du = \int\frac{b^2u^2+2k^2u-k^2}{u^2\sqrt{(2u-1)(b^2u^2+2k^2u-k^2)}}\,du. \tag{15.26} \]
Of course, we still cannot express this integral in terms of elementary functions any more than we could (15.25), but it appears simpler, since here the polynomial inside the square root has degree three (and not four). These integrals motivate the following naming convention.
5. An elliptic integral is an integral of the form $\int R\bigl(x,\sqrt f\bigr)\,dx$, where f is a polynomial of degree three or four and $R\bigl(x,\sqrt f\bigr)$ is a rational function with arguments x and $\sqrt f$. If the degree of f is greater than four, then we call the integral a hyperelliptic integral.

The (hyper)elliptic integrals usually cannot be expressed in terms of elementary functions (but they can in some special cases, for example,
\[ \int\frac{f'}{\sqrt f}\,dx = 2\sqrt f + c \]
for every positive f).
Since (hyper)elliptic integrals appear often in various applications, it is best to reduce them to simpler integrals. Write R(u, v) in the form seen in (15.22), and then replace u by x and v by $\sqrt{f(x)}$. After taking powers and combining all that we can, we get that
\[ R\bigl(x,\sqrt f\bigr) = \frac{A+B\sqrt f}{C+D\sqrt f}, \]
where A, B, C, and D are polynomials. If we multiply both the numerator and denominator by $C-D\sqrt f$ here, then we get an expression of the form $(E+F\sqrt f)/G$. Since we already know the integrals of rational functions, it suffices to find the integral of
\[ \frac{F\sqrt f}{G} = \frac{Ff}{G\sqrt f}. \]
Now apply Theorem 15.24 and decompose the rational function Ff/G into the sum of a polynomial and finitely many elementary rational functions. If we divide this decomposition by $\sqrt f$, then we deduce that it suffices to find the integrals
\[ I_k = \int\frac{x^k}{\sqrt{f(x)}}\,dx \qquad\text{and}\qquad J_r = \int\frac{r(x)}{\sqrt{f(x)}}\,dx, \]
where $k\in\mathbb N$ and r is an arbitrary elementary rational function. One can show that if the degree of f is n, then every $I_k$ can be expressed as a linear combination of elementary functions and the integrals $I_0, I_1,\ldots,I_{n-2}$. A similar recurrence holds for the integrals $J_r$ (see Exercise 15.39).
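Although (15.25) has no elementary primitive, its value is easy to approximate numerically. The following sketch is an added illustration with the hypothetical choice a = 1, b = 0.6 (so $k^2 = 1-b^2 = 0.64$); substituting $t = \sin\theta$ in (15.25) turns the quarter-circumference into the proper integral of $\sqrt{1-k^2\sin^2\theta}$ over $[0,\pi/2]$, which a plain midpoint sum handles well.

```python
# Approximate the circumference of the ellipse with axes 1 and 0.6.
import math

b = 0.6
k2 = 1 - b * b        # k^2 from the text
N = 100_000
dtheta = (math.pi / 2) / N
quarter = sum(math.sqrt(1 - k2 * math.sin((i + 0.5) * dtheta)**2)
              for i in range(N)) * dtheta
print("circumference of the ellipse with axes 1 and 0.6:", 4 * quarter)
# For comparison, the crude bounds 2*pi*b and 2*pi*a are about 3.77 and 6.28.
```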
Exercises
15.35. Prove that $\operatorname{Li}x \sim \dfrac{x}{\log x}$ if $x\to\infty$.

15.36. Prove that for all $n\in\mathbb N^+$,
\[ \operatorname{Li}x = \frac{x}{\log x} + \frac{x}{\log^2x} + \cdots + (n-1)!\,\frac{x}{\log^nx} + n!\int_2^x\frac{dt}{\log^{n+1}t} + c_n \qquad (x\ge2) \]
with a suitable constant $c_n$.
15.37. Prove that for all $n\in\mathbb N^+$,
\[ \operatorname{Li}x - \sum_{k=1}^n(k-1)!\,\frac{x}{\log^kx} = o\!\left(\frac{x}{\log^nx}\right) \quad\text{if}\quad x\to\infty. \]
(H)

15.38. Prove that the function $\Phi(x) = \int_0^x e^{-t^2}\,dt$ is not elementary.

15.39. Let $I_k = \int\bigl(x^k/\sqrt{f(x)}\bigr)\,dx$ $(k\in\mathbb N)$, where f is a polynomial of degree n. Prove that for all k > n − 2, $I_k$ can be expressed as a linear combination of an elementary function and the integrals $I_0, I_1,\ldots,I_{n-2}$. (S)
15.6 Appendix: Integration by Substitution for Definite Integrals
(Proof of Theorem 15.22)
Proof (Theorem 15.22). I. First, we assume that g is monotone increasing. If g is
constant, then on the one hand, g′ = 0, so the left-hand side of (15.18) is zero, and
on the other hand, g(a) = g(b), so the right-hand side of (15.18) is also zero. Thus
we can assume that g is not constant, that is, g(a) < g(b).
By our assumptions, f is integrable on the image of g, that is, on the interval
[g(a), g(b)] (which must be the image of g by the Bolzano–Darboux theorem). Let
ε > 0 be fixed, and let F be a partition of the interval [g(a), g(b)] such that ΩF ( f ) <
ε . Since g′ is integrable on [a, b], we can choose a partition Φ : a = t0 < t1 < · · · <
tn = b such that ΩΦ (g′ ) < ε . By adding new base points (which does not increase
the value of ΩΦ (g′ )), we can ensure that the points g(t0 ), . . . , g(tn ) include every
base point of F. Let h = ( f ◦ g) · g′ . We will show that if ci ∈ [ti−1 ,ti ] (i = 1, . . . , n)
are arbitrary inner points, then the approximating sum
\[ \sigma_{\Phi}\bigl(h;(c_i)\bigr) = \sum_{i=1}^n f\bigl(g(c_i)\bigr)\cdot g'(c_i)\cdot(t_i-t_{i-1}) \]
is close to the value of $I = \int_{g(a)}^{g(b)} f\,dx$.
Let F1 denote the partition with the base points g(t0 ), . . . , g(tn ). Then F1 is a
refinement of F. Let us introduce the notation g(ti ) = ui , where (i = 0, . . . , n). Then
the points g(a) = u0 ≤ u1 ≤ · · · ≤ un = g(b) list the base points of F1 , possibly
more than once (if g is not strictly monotone). By the mean value theorem, for every
i = 1, . . . , n, there exists a point di ∈ (ti−1 ,ti ) such that
ui − ui−1 = g′ (di ) · (ti − ti−1 ).
Then
\[ \sum_{i=1}^n f\bigl(g(c_i)\bigr)(u_i-u_{i-1}) = \sum_{i=1}^n f\bigl(g(c_i)\bigr)\cdot g'(d_i)\cdot(t_i-t_{i-1}). \tag{15.27} \]
If we drop the terms where $u_{i-1} = u_i$ (and which are thus zero) from the left-hand side of (15.27), then we get an approximating sum for the function f corresponding to the partition $F_1$, since $g(c_i)\in[g(t_{i-1}),g(t_i)] = [u_{i-1},u_i]$ for all i. Since $\Omega_{F_1}(f)\le\Omega_F(f)<\varepsilon$, every such approximating sum must differ from I by less than ε. Thus by (15.27), we get that
\[ \left|\sum_{i=1}^n f\bigl(g(c_i)\bigr)\cdot g'(d_i)\cdot(t_i-t_{i-1}) - I\right| < \varepsilon. \tag{15.28} \]
Let $\omega_i(g')$ denote the oscillation of the function g′ over the interval $[t_{i-1},t_i]$. Then $|g'(c_i)-g'(d_i)|\le\omega_i(g')$ for all i, so
\[ |\sigma_{\Phi}(h;(c_i)) - I| \le \sum_{i=1}^n\bigl|f\bigl(g(c_i)\bigr)\bigr|\,\bigl|g'(c_i)-g'(d_i)\bigr|\,(t_i-t_{i-1}) + \left|\sum_{i=1}^n f\bigl(g(c_i)\bigr)\cdot g'(d_i)\cdot(t_i-t_{i-1}) - I\right| < K\cdot\sum_{i=1}^n\omega_i(g')(t_i-t_{i-1}) + \varepsilon = K\cdot\Omega_{\Phi}(g') + \varepsilon < (K+1)\varepsilon, \]
where K denotes an upper bound of |f| on the interval [g(a), g(b)]. Since this holds for an arbitrary choice of inner points $c_i$, by Theorem 14.19, $(f\circ g)\cdot g'$ is integrable on [a, b], and its integral is I there. This proves (15.18) for the case that g is monotone increasing.
II. If g is monotone decreasing, then the proof goes the same way, using the fact that $g(a)\ge g(b)$ implies
\[ \int_{g(a)}^{g(b)} f\,dx = -\int_{g(b)}^{g(a)} f\,dx. \]
III. Now consider the general case. Let ε > 0 be given, and let Φ : a = t0 < t1 <
· · · < tn = b be a partition such that ΩΦ (g′ ) < ε . Let g(ti ) = ui (i = 0, . . . , n). If
$I = \int_{g(a)}^{g(b)} f\,dx$ and $I_i = \int_{u_{i-1}}^{u_i} f\,dx$ for all i = 1, . . . , n, then $I_1+\cdots+I_n = I$ by Theorem 14.41.
Let J1 denote the set of indices i such that the function g is monotone on the
interval [ti−1 ,ti ]. If i ∈ J1 , then by the previous cases that we have already proved,
( f ◦ g) · g′ is integrable on [ti−1 ,ti ], and
\[ \int_{t_{i-1}}^{t_i} f\bigl(g(t)\bigr)\cdot g'(t)\,dt = \int_{u_{i-1}}^{u_i} f\,dx = I_i. \]
Thus the interval $[t_{i-1},t_i]$ has a partition $\Phi_i$ such that
\[ I_i - (\varepsilon/n) < s_{\Phi_i} \le S_{\Phi_i} < I_i + (\varepsilon/n), \tag{15.29} \]
where $s_{\Phi_i}$ and $S_{\Phi_i}$ denote the lower and upper sums of the function $(f\circ g)\cdot g'$ restricted to the interval $[t_{i-1},t_i]$ over the partition $\Phi_i$. Let $\Phi'$ be the union of the partitions Φ and $\Phi_i$ $(i\in J_1)$. We will show that the lower and upper sums $s_{\Phi'}$ and $S_{\Phi'}$ of $(f\circ g)\cdot g'$ over the partition $\Phi'$ are close to I. Consider first the upper sum. Clearly,
\[ S_{\Phi'} = \sum_{i\in J_1}S_{\Phi_i} + \sum_{i\in J_2}M_i\cdot(t_i-t_{i-1}), \tag{15.30} \]
where $J_2 = \{1,\ldots,n\}\setminus J_1$ and $M_i = \sup\{f\bigl(g(x)\bigr)g'(x) : x\in[t_{i-1},t_i]\}$. If $i\in J_2$, then g is not monotone on the interval $[t_{i-1},t_i]$, and so there exists a point $d_i\in[t_{i-1},t_i]$ such that $g'(d_i) = 0$. If this were not the case, then by Darboux's theorem (Theorem 13.44), g′ would have a constant sign on the interval $[t_{i-1},t_i]$, and so by Theorem 12.54, g would be monotone there, which contradicts what we just assumed.

Let $\omega_i(g')$ denote the oscillation of the function g′ over the interval $[t_{i-1},t_i]$. Then for arbitrary inner points $c_i\in[t_{i-1},t_i]$, we have
\[ |g'(c_i)| = |g'(c_i)-g'(d_i)| \le \omega_i(g'), \tag{15.31} \]
so $|M_i|\le K\cdot\omega_i(g')$, where K is an upper bound of |f| on the image of g, that is, on the interval g([a, b]). Then using (15.29) and (15.30), we get that
\[ |S_{\Phi'} - I| \le \sum_{i\in J_1}|S_{\Phi_i}-I_i| + \sum_{i\in J_2}|M_i|\cdot(t_i-t_{i-1}) + \sum_{i\in J_2}|I_i| < n\cdot(\varepsilon/n) + \sum_{i\in J_2}K\cdot\omega_i(g')(t_i-t_{i-1}) + \sum_{i\in J_2}|I_i| \le \varepsilon + K\cdot\Omega_{\Phi}(g') + \sum_{i\in J_2}|I_i| < (K+1)\varepsilon + \sum_{i\in J_2}|I_i|. \tag{15.32} \]
Now by the mean value theorem, for all i = 1, . . . , n, there exists a point $c_i\in(t_{i-1},t_i)$ such that
\[ u_i - u_{i-1} = g'(c_i)\cdot(t_i-t_{i-1}). \]
Thus if $i\in J_2$, then by (15.31),
\[ |u_i - u_{i-1}| \le \omega_i(g')\cdot(t_i-t_{i-1}), \]
so by statement (iv) of Theorem 14.49,
\[ |I_i| \le K\cdot|u_i-u_{i-1}| \le K\cdot\omega_i(g')\cdot(t_i-t_{i-1}). \]
Then
\[ \sum_{i\in J_2}|I_i| \le \sum_{i\in J_2}K\cdot\omega_i(g')\cdot(t_i-t_{i-1}) \le K\cdot\Omega_{\Phi}(g') < K\varepsilon. \]
Comparing this with (15.32), we find that $|S_{\Phi'} - I| < (2K+1)\varepsilon$. The same argument gives that $|s_{\Phi'} - I| < (2K+1)\varepsilon$. Since this inequality holds for all ε, $(f\circ g)\cdot g'$ is integrable, and its integral is I. $\square$
Chapter 16
Applications of Integration
One of the main goals of mathematical analysis, besides applications in physics, is
to compute the measure of sets (arc length, area, surface area, and volume). We have
already spent time computing arc lengths, but only for graphs of functions. We saw
examples of computing the area of certain shapes (mostly regions under graphs),
and at the same time, we got a taste of computing volumes when we determined the
volume of a sphere (see item 2 in Example 13.23). We also noted, however, that in
computing area, some theoretical problems need to be addressed (as mentioned in
point 5 of Remark 14.10). In this chapter, we turn to a systematic discussion of these
questions.
When computing area, we obviously deal with sets in the plane, while when
computing volume, we deal with sets in the space. However, when we deal with
arc length we need to concern ourselves with both sets in the plane and in space,
since some curves lie in the plane while others do not. Therefore, it is best to tackle
questions concerning the plane and space simultaneously whenever possible.
In mathematical analysis, points of the plane are associated with ordered pairs
of real numbers, and the plane itself is associated with the set R × R = R2 (see
the appendix of Chapter 9). We will proceed analogously for representing three-dimensional space as well. We consider three lines in space intersecting at a point
that are mutually perpendicular, which we call the x-, y-, and z-axes. We call the
plane spanned by the x- and y-axes the xy-plane, and we have similar definitions for
the xz- and yz-planes. We assign an ordered triple (a, b, c) to every point P in space,
in which a, b, and c denote the distance (with positive or negative sign) of the point
from the yz-, xz-, and xy-planes respectively. We call the numbers a, b, and c the coordinates of P. The geometric properties of space imply that the map P → (a, b, c)
that we obtain in this way is a bijection. This justifies our representation of three-dimensional space by ordered triples of real numbers.
Thus if we want to deal with questions both in the plane and in space, we need
to deal with sets that consist of ordered d-tuples of real numbers, where d = 2 or
d = 3. We will see that the specific value of d does not usually play a role in the
definitions and proofs coming up. Therefore, for every positive integer d, we can
define d-dimensional Euclidean space, by which we simply mean the set of all
sequences of real numbers of length d, with the appropriately defined addition, multiplication by a constant, absolute value, and distance. If d = 1, then the Euclidean
space is exactly the real line; if d = 2, then it is the plane; and if d = 3, then it is
3-dimensional space. For d > 3, a d-dimensional space does not have an observable
meaning, but it is very important for both theory and applications.
Definition 16.1. $\mathbb R^d$ denotes the set of ordered d-tuples of real numbers, that is, the set
\[ \mathbb R^d = \{(x_1,\ldots,x_d) : x_1,\ldots,x_d\in\mathbb R\}. \]
The points of the set $\mathbb R^d$ are sometimes called d-dimensional vectors. The sum of the vectors $x = (x_1,\ldots,x_d)$ and $y = (y_1,\ldots,y_d)$ is the vector
\[ x+y = (x_1+y_1,\ldots,x_d+y_d), \]
and the product of the vector x and a real number c is the vector
\[ c\cdot x = (cx_1,\ldots,cx_d). \]
The absolute value of the vector x is the nonnegative real number
\[ |x| = \sqrt{x_1^2+\cdots+x_d^2}. \]
It is clear that for all $x\in\mathbb R^d$ and $c\in\mathbb R$, $|cx| = |c|\cdot|x|$. It is also easy to see that if $x = (x_1,\ldots,x_d)$, then
\[ |x| \le |x_1| + \cdots + |x_d|. \tag{16.1} \]
The triangle inequality also holds:
\[ |x+y| \le |x| + |y| \qquad (x,y\in\mathbb R^d). \tag{16.2} \]
To prove this, it suffices to show that $|x+y|^2\le(|x|+|y|)^2$, since both sides are nonnegative. By the definition of the absolute value, this is exactly
\[ (x_1+y_1)^2+\cdots+(x_d+y_d)^2 \le (x_1^2+\cdots+x_d^2) + 2\sqrt{x_1^2+\cdots+x_d^2}\cdot\sqrt{y_1^2+\cdots+y_d^2} + y_1^2+\cdots+y_d^2, \]
that is,
\[ x_1y_1+\cdots+x_dy_d \le \sqrt{x_1^2+\cdots+x_d^2}\cdot\sqrt{y_1^2+\cdots+y_d^2}, \]
which is the Cauchy–Schwarz–Bunyakovsky inequality (Theorem 11.19).
The distance between the vectors x and y is the number $|x-y|$. By (16.2), it is clear that
\[ |x| - |y| \le |x-y| \qquad\text{and}\qquad |x-y| \le |x-z| + |z-y| \]
for all $x,y,z\in\mathbb R^d$. We can consider these to be variants of the triangle inequality.
If we apply (16.1) to the difference of the vectors $x = (x_1,\ldots,x_d)$ and $y = (y_1,\ldots,y_d)$, then we get that
\[ \bigl||x|-|y|\bigr| \le |x-y| \le |x_1-y_1| + \cdots + |x_d-y_d|. \tag{16.3} \]
The scalar product of the vectors $x = (x_1,\ldots,x_d)$ and $y = (y_1,\ldots,y_d)$ is the real number $\sum_{i=1}^d x_iy_i$, which we denote by $\langle x,y\rangle$. With the help of the arguments in Remark 14.57, it is easy to see that $\langle x,y\rangle = |x|\cdot|y|\cdot\cos\alpha$, where α denotes the angle enclosed by the two vectors.
16.1 The General Concept of Area and Volume
We deal with the concepts of area and volume at once; we will use the word measure
instead. We will actually define measure in every space Rd , and area and volume will
be the special cases d = 2 and d = 3.
Most of the concepts we define in the plane and in space can be generalized—
purely through analogy—for the space Rd , independent of the value of d. That
includes, first of all, the concepts of axis-parallel rectangles or rectangular boxes.
Since these are sets of the form [a1 , b1 ] × [a2 , b2 ] and [a1 , b1 ] × [a2 , b2 ] × [a3 , b3 ] in
the plane and in space respectively, by an axis-parallel rectangle in Rd , or just a
rectangle for short, we will mean the set
[a1 , b1 ] × · · · × [ad , bd ],
where ai < bi for all i = 1, . . . , d. (Here we use the Cartesian product with a
finite number of terms. This means that A1 × · · · × Ad denotes the set of sequences
(x1 , . . . , xd ) that satisfy x1 ∈ A1 , . . . , xd ∈ Ad .) For the case d = 1, the definition of a
rectangle agrees with the definition of a nondegenerate closed and bounded interval.
We get (open) balls in Rd in the same way through analogy. The open ball B(a, r)
with center a ∈ Rd and radius r > 0 is the set of points that are less than distance r
away from a, that is,
B(a, r) = {x ∈ Rd : |x − a| < r}.
For the case d = 1, B(a, r) is exactly the open interval (a − r, a + r), while when
d = 2, it is an open disk of radius r centered at a (where “open” means that the
boundary of the disk does not belong to the set).
We call the set A ⊂ Rd bounded if there exists a rectangle [a1 , b1 ]× . . . ×[ad , bd ]
that contains it. It is easy to see that a set is bounded if and only if it is contained in
a ball (see Exercise 16.1).
We say that x is an interior point of the set H ⊂ Rd if H contains a ball centered
at x; that is, if there exists an r > 0 such that B(x, r) ⊂ H. Since every ball contains
a rectangle and every rectangle contains a ball, a set A has an interior point if and
only if A contains a rectangle.
We call the sets A and B nonoverlapping if they do not share any interior points.
If we want to convert the intuitive meaning of measure into a precise notion,
then we should first list our expectations for the concept. Measure has numerous
properties that we consider natural. We choose three of these (see Remark 14.10.5):
(a) The measure of the rectangle R = [a1 , b1 ] × · · · × [ad , bd ] equals the product of
its side lengths, that is, (b1 − a1 ) · · · (bd − ad ).
(b) If we decompose a set into the union of finitely many nonoverlapping sets, then
the measure of the set is the sum of the measures of the parts.
(c) If A ⊂ B, then the measure of A is not greater than the measure of B.
We will see that these requirements naturally determine to which sets we can assign
a measure, and what that measure should be.
Definition 16.2. If $R = [a_1,b_1]\times\cdots\times[a_d,b_d]$, then we let m(R) denote the product $(b_1-a_1)\cdots(b_d-a_d)$.

Let A be an arbitrary bounded set in $\mathbb R^d$. Cover A in every possible way by finitely many rectangles $R_1,\ldots,R_K$, and form the sum $\sum_{i=1}^K m(R_i)$ for each cover. The outer measure of the set A is defined as the infimum of the set of all the sums we obtain in this way (Figure 16.1). We denote the outer measure of the set A by $\overline m(A)$.

If A does not have an interior point, then we define the inner measure to be zero. If A does have an interior point, then choose, in every possible way, finitely many rectangles $R_1,\ldots,R_K$ contained in A that are mutually nonoverlapping, and form the sum $\sum_{i=1}^K m(R_i)$ each time. The inner measure of A is defined as the supremum of the set of all such sums. The inner measure of the set A will be denoted by $\underline m(A)$.

It is intuitively clear that for every bounded set A, the values $\underline m(A)$ and $\overline m(A)$ are finite. Moreover, $0\le\underline m(A)\le\overline m(A)$. Now by restrictions (a) and (c) above, it is clear that the measure of the set A should fall between $\underline m(A)$ and $\overline m(A)$. If $\underline m(A)<\overline m(A)$, then without further inspection, it is not clear which number (between $\underline m(A)$ and $\overline m(A)$) we should consider the measure of A to be. We will do what we did when we considered integrals, and restrict ourselves to sets for which $\underline m(A) = \overline m(A)$, and this shared value will be called the measure of A.

Definition 16.3. We call the bounded set $A\subset\mathbb R^d$ Jordan¹ measurable if $\underline m(A) = \overline m(A)$. The Jordan measure of the set A (the measure of A, for short) is the shared value $\underline m(A) = \overline m(A)$, which we denote by m(A).

If $d\ge3$, then instead of Jordan measure, we can say volume; if d = 2, area; and if d = 1, length as well.

¹ Camille Jordan (1838–1922), French mathematician.
If we want to emphasize that we are talking about the inner, outer, or Jordan measure of a d-dimensional set, then instead of $\underline m(A)$, $\overline m(A)$, or m(A), we may write $\underline m_d(A)$, $\overline m_d(A)$, or $m_d(A)$.
Exercises
16.1. Prove that for every set A ⊂ Rd , the following statements are equivalent.
(a) The set A is bounded.
(b) There exists an r > 0 such that A ⊂ B(0, r).
(c) For all i = 1, . . . , d, the ith coordinates of the points of A form a bounded set
in R.
16.2. Prove that the set
\[ A = \{(x,y) : 0\le x\le 1,\ 0\le y\le 1,\ y = 1/n\ (n\in\mathbb N^+)\} \]
is Jordan measurable.
16.3. Prove that the set
A = {(x, y) : 0 ≤ x ≤ 1, 0 ≤ y ≤ 1, x, y ∈ Q}
is not Jordan measurable.
16.4. Prove that if $A\subset\mathbb R^p$ and $B\subset\mathbb R^q$ are measurable sets, then $A\times B\subset\mathbb R^{p+q}$ is also measurable, and that $m_{p+q}(A\times B) = m_p(A)\cdot m_q(B)$. (S)
16.2 Computing Area
With the help of Definition 16.3,
we can now prove that the area under the graph of a function agrees
with the integral of the function
(see Remark 14.10.5). We will
actually determine the areas of
slightly more general regions, and
then the area under the graph of a
function, as well as the reflection
of that region in the x-axis, will be
special cases.
Definition 16.4. We call the set A ⊂ R2 a normal domain if
A = {(x, y) : x ∈ [a, b], f (x) ≤ y ≤ g(x)},
(16.4)
where f and g are integrable on [a, b] and f (x) ≤ g(x) for all x ∈ [a, b] (Figure 16.2).
Theorem 16.5. If f and g are integrable on [a, b] and f (x) ≤ g(x) for all x ∈ [a, b],
then the normal domain in (16.4) is measurable, and its area is
m_2(A) = ∫_a^b (g − f) dx.
Proof. For a given ε > 0, choose partitions F1 and F2 such that ΩF1 ( f ) < ε and
ΩF2 (g) < ε . If F = (x0 , . . . , xn ) is the union of the partitions F1 and F2 , then ΩF ( f ) <
ε and ΩF (g) < ε . Let mi ( f ), mi (g), Mi ( f ), and Mi (g) be the infimum and supremum
of the functions f and g respectively on the interval [xi−1 , xi ]. Then the rectangles
[xi−1 , xi ] × [mi ( f ), Mi (g)] (i = 1, . . . , n) cover the set A, so
m̄_2(A) ≤ ∑_{i=1}^n (M_i(g) − m_i(f)) · (x_i − x_{i−1}) = S_F(g) − s_F(f) < (∫_a^b g dx + ε) − (∫_a^b f dx − ε) = ∫_a^b (g − f) dx + 2ε.   (16.5)
Let I denote the set of indices i that satisfy Mi ( f ) ≤ mi (g). Then the rectangles
[xi−1 , xi ] × [Mi ( f ), mi (g)] (i ∈ I) are contained in A and are nonoverlapping, so
m̲_2(A) ≥ ∑_{i∈I} (m_i(g) − M_i(f)) · (x_i − x_{i−1}) ≥ ∑_{i=1}^n (m_i(g) − M_i(f)) · (x_i − x_{i−1}) = s_F(g) − S_F(f) > (∫_a^b g dx − ε) − (∫_a^b f dx + ε) = ∫_a^b (g − f) dx − 2ε.   (16.6)
Since ε was arbitrary, by (16.5) and (16.6) it follows that A is measurable and has area ∫_a^b (g − f) dx. □
Example 16.6. With the help of the theorem above, we can conclude that the domain bounded by the ellipse with equation x²/a² + y²/b² = 1 is a measurable set whose area is abπ. Clearly, the set A in question is a normal domain given by the continuous functions f(x) = −b · √(1 − (x/a)²) and g(x) = b · √(1 − (x/a)²) over the interval [−a, a]. Then by Theorem 16.5, A is measurable and m_2(A) = ∫_{−a}^{a} (g − f) dx = 2 · ∫_{−a}^{a} g dx, so by the computation done in Remark 15.20.2, we obtain m_2(A) = abπ.
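The value abπ can also be checked numerically; the sketch below approximates the integral of g − f with a midpoint sum (the semiaxes a, b and the number of subintervals are arbitrary choices).

import math

def ellipse_area_by_integral(a: float, b: float, n: int = 100_000) -> float:
    """Midpoint-rule approximation of m_2(A) = ∫_{-a}^{a} 2b*sqrt(1 - (x/a)^2) dx."""
    h = 2.0 * a / n
    total = 0.0
    for i in range(n):
        x = -a + (i + 0.5) * h
        total += 2.0 * b * math.sqrt(1.0 - (x / a) ** 2) * h
    return total

if __name__ == "__main__":
    a, b = 2.0, 1.0
    print(ellipse_area_by_integral(a, b), a * b * math.pi)   # both about 6.2832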
We now turn to a generalization of Theorem 16.5 that can (theoretically) be used
to compute the measure of any measurable plane set.
Definition 16.7. The sections of the set A ⊂ R² are the sets A_x = {y ∈ R : (x, y) ∈ A} and A^y = {x ∈ R : (x, y) ∈ A} for every x, y ∈ R (Figure 16.3).
Theorem 16.8. Let A ⊂ R² be a measurable set such that A ⊂ [a, b] × [c, d]. Then the functions x → m̲_1(A_x) and x → m̄_1(A_x) are integrable in [a, b], and

m_2(A) = ∫_a^b m̲_1(A_x) dx = ∫_a^b m̄_1(A_x) dx.   (16.7)

Similarly, the functions y → m̲_1(A^y) and y → m̄_1(A^y) are integrable in [c, d], and

m_2(A) = ∫_c^d m̲_1(A^y) dy = ∫_c^d m̄_1(A^y) dy.

Proof. It suffices to prove (16.7). Since A ⊂ [a, b] × [c, d], we have A_x ⊂ [c, d] for all x ∈ [a, b]. It follows that if x ∈ [a, b], then m̲_1(A_x) ≤ m̄_1(A_x) ≤ d − c, so the functions m̲_1(A_x) and m̄_1(A_x) are bounded in [a, b].
Let ε > 0 be given, and choose rectangles T_i = [a_i, b_i] × [c_i, d_i] (i = 1, . . . , n) such that A ⊂ ⋃_{i=1}^n T_i and ∑_{i=1}^n m_2(T_i) < m_2(A) + ε. We can assume that [a_i, b_i] ⊂ [a, b] for all i = 1, . . . , n. Let

f_i(x) = 0 if x ∉ [a_i, b_i],  and  f_i(x) = d_i − c_i if x ∈ [a_i, b_i]   (i = 1, . . . , n)

(see Figure 16.4). Then f_i is integrable in [a, b], and ∫_a^b f_i dx = m_2(T_i). For arbitrary x ∈ [a, b], the section A_x is covered by the intervals [c_i, d_i] that correspond to indices i for which x ∈ [a_i, b_i]. Thus by the definition of the outer measure,

m̄_1(A_x) ≤ ∑_{i : x ∈ [a_i, b_i]} (d_i − c_i) = ∑_{i=1}^n f_i(x).

It follows that (writing ∫̄ and ∫̲ for the upper and lower integrals)

∫̄_a^b m̄_1(A_x) dx ≤ ∫_a^b ∑_{i=1}^n f_i dx = ∑_{i=1}^n ∫_a^b f_i dx = ∑_{i=1}^n m_2(T_i) < m_2(A) + ε.   (16.8)

Now let R_i = [p_i, q_i] × [r_i, s_i] (i = 1, . . . , m) be nonoverlapping rectangles such that A ⊃ ⋃_{i=1}^m R_i and ∑_{i=1}^m m_2(R_i) > m_2(A) − ε. Then [p_i, q_i] ⊂ [a, b] for all i = 1, . . . , m. Let

g_i(x) = 0 if x ∉ [p_i, q_i],  and  g_i(x) = s_i − r_i if x ∈ [p_i, q_i]   (i = 1, . . . , m).

Then g_i is integrable in [a, b], and ∫_a^b g_i dx = m_2(R_i). If x ∈ [a, b], then the section A_x contains all the intervals [r_i, s_i] whose indices i satisfy x ∈ [p_i, q_i]. We can also easily see that if x is distinct from all points p_i, q_i, then these intervals are nonoverlapping. Then by the definition of the inner measure,

m̲_1(A_x) ≥ ∑_{i : x ∈ [p_i, q_i]} (s_i − r_i) = ∑_{i=1}^m g_i(x).

It follows that

∫̲_a^b m̲_1(A_x) dx ≥ ∫_a^b ∑_{i=1}^m g_i dx = ∑_{i=1}^m ∫_a^b g_i dx = ∑_{i=1}^m m_2(R_i) > m_2(A) − ε.   (16.9)

Now m̲_1(A_x) ≤ m̄_1(A_x) for all x, so by (16.8) and (16.9), we get that

m_2(A) − ε < ∫̲_a^b m̲_1(A_x) dx ≤ ∫̲_a^b m̄_1(A_x) dx ≤ ∫̄_a^b m̄_1(A_x) dx < m_2(A) + ε.

Since this holds for all ε, we have ∫̲_a^b m̄_1(A_x) dx = ∫̄_a^b m̄_1(A_x) dx = m_2(A), which means that the function x → m̄_1(A_x) is integrable on [a, b] with integral m_2(A). We get that ∫_a^b m̲_1(A_x) dx = m_2(A) the same way. □
Remark 16.9. Observe that we did not assume the measurability of the sections A_x and A^y in Theorem 16.8 (that is, that m̲_1(A_x) = m̄_1(A_x) and m̲_1(A^y) = m̄_1(A^y)).
Corollary 16.10. Let f be a nonnegative and bounded function on the interval [a, b]. The set B_f = {(x, y) : x ∈ [a, b], 0 ≤ y ≤ f(x)} is measurable if and only if f is integrable, and then

m_2(B_f) = ∫_a^b f dx.

Proof. With the help of Theorem 16.5, we need to prove only that if B_f is measurable, then f is integrable. This, however, is clear by Theorem 16.8, since (B_f)_x = [0, f(x)], and so m̲_1((B_f)_x) = m̄_1((B_f)_x) = f(x) for all x ∈ [a, b]. □
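Corollary 16.10 lends itself to a simple numerical test: counting the cells of a fine grid whose centers lie in B_f should roughly reproduce ∫_a^b f dx. A sketch for the (arbitrarily chosen) function f(x) = x² on [0, 1]:

def grid_count_area(f, a, b, n=400):
    """Approximate m_2(B_f) for B_f = {(x, y): a <= x <= b, 0 <= y <= f(x)}
    by counting the cells of an n-by-n grid on [a, b] x [0, M] whose centers
    lie in B_f; M is a sampled upper bound for f (assumed positive)."""
    xs = [a + (b - a) * k / n for k in range(n + 1)]
    M = max(f(x) for x in xs)
    hx, hy = (b - a) / n, M / n
    count = 0
    for i in range(n):
        fx = f(a + (i + 0.5) * hx)
        for j in range(n):
            if (j + 0.5) * hy <= fx:
                count += 1
    return count * hx * hy

if __name__ == "__main__":
    print(grid_count_area(lambda x: x * x, 0.0, 1.0))   # close to 1/3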
Exercises
16.5. Determine the area of the set {(x, y) : 2 − x ≤ y ≤ 2x − x2 }.
16.6. Determine the area of the set {(x, y) : 2^x ≤ y ≤ x + 1}.
16.7. For a given a > 0, determine the area of the set {(x, y) : y2 ≤ x2 (a2 − x2 )}.
16.8. Let 0 < δ < π/2, r > 0, and x_0 = r cos δ. Prove, by computing the area under the graph of the function

f(x) = (tg δ) · x if 0 ≤ x ≤ x_0,  and  f(x) = √(r² − x²) if x_0 ≤ x ≤ r,

that a circular sector with central angle δ and radius r is measurable and has area r²δ/2. (S)
16.9. Let u > 1 and v = √(u² − 1). The two segments connecting the origin to the points (u, v) and (u, −v) and the hyperbola x² − y² = 1 between the points (u, v) and (u, −v) define a region A_u. Determine the area of A_u. (S)
To determine the center of mass of a region, we will borrow the fact from physics that if we break up a region into smaller parts, then the "weighted average" of the centers of mass of the parts, where the weights are equal to the area of each part, gives us the center of mass of the whole region. More precisely, this means that if we break the region A into regions A_1, . . . , A_n, and p_i is the center of mass of A_i, then the center of mass of A is

(1/m(A)) · ∑_{i=1}^n m(A_i) · p_i.
Let f : [a, b] → R be a nonnegative continuous function, and consider the region
under the graph of f , B f = {(x, y) : x ∈ [a, b], 0 ≤ y ≤ f (x)}. Let F : a = x0 < x1 <
· · · < xn = b be a fine partition, and break up B f into the regions Ai = {(x, y) : x ∈
[xi−1 , xi ], 0 ≤ y ≤ f (x)} (i = 1, . . . , n). If xi − xi−1 is small, then Ai can be well
approximated by the rectangle Ti = [xi−1 , xi ]×[0, f (ci )], where ci = (xi−1 +xi )/2. Its
center of mass is the point pi = (ci , f (ci )/2), and its area is f (ci ) · (xi − xi−1 ). Thus
the center of mass of the region ⋃_{i=1}^n T_i approximating B_f is the weighted average of the points p_i with weights f(c_i) · (x_i − x_{i−1}), that is, the point

(1/σ_F) · ∑_{i=1}^n f(c_i) · (x_i − x_{i−1}) · (c_i, f(c_i)/2),   (16.10)

where σ_F = ∑_{i=1}^n f(c_i) · (x_i − x_{i−1}). The first coordinate of the point (16.10) is

x_F = σ_F^{−1} · ∑_{i=1}^n f(c_i) · c_i · (x_i − x_{i−1}),

and its second coordinate is y_F = (1/2) · σ_F^{−1} · ∑_{i=1}^n f²(c_i) · (x_i − x_{i−1}). Let I = ∫_a^b f dx. If the partition F is fine enough (has small enough mesh), then σ_F is close to I, x_F is close to the value of x_s = I^{−1} · ∫_a^b f(x) · x dx, and y_F is close to the value of y_s = (1/2) · I^{−1} · ∫_a^b f²(x) dx.
This motivates the following definition. Let f : [a, b] → R be nonnegative and integrable, and suppose that I = ∫_a^b f dx > 0. Then the center of mass of the region B_f = {(x, y) : x ∈ [a, b], 0 ≤ y ≤ f(x)} is the point (x_s, y_s), where

x_s = (1/I) · ∫_a^b f(x) · x dx   and   y_s = (1/(2I)) · ∫_a^b f(x)² dx.
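The two formulas can be evaluated numerically with midpoint sums; the sketch below does this for the (arbitrarily chosen) function f(x) = e^x on [0, 1], for which a short computation gives x_s = 1/(e − 1) and y_s = (e + 1)/4.

import math

def center_of_mass_under_graph(f, a, b, n=100_000):
    """Midpoint-rule evaluation of
       I = ∫ f dx,  x_s = (1/I) ∫ x f(x) dx,  y_s = (1/(2I)) ∫ f(x)^2 dx."""
    h = (b - a) / n
    I = sx = sy = 0.0
    for i in range(n):
        x = a + (i + 0.5) * h
        fx = f(x)
        I += fx * h
        sx += x * fx * h
        sy += fx * fx * h
    return sx / I, sy / (2 * I)

if __name__ == "__main__":
    xs, ys = center_of_mass_under_graph(math.exp, 0.0, 1.0)
    # exact values: 1/(e-1) ≈ 0.5820 and (e+1)/4 ≈ 0.9296
    print(xs, ys)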
Exercise
16.10. Determine the centers of mass of the following regions.
(a) {(x, y) : x ∈ [0, m], 0 ≤ y ≤ c · x} (c, m > 0);
(b) {(x, y) : x ∈ [−r, r], 0 ≤ y ≤ √(r² − x²)} (r > 0);
(c) {(x, y) : x ∈ [0, a], 0 ≤ y ≤ x^n} (a, n > 0).
16.3 Computing Volume
Theorem 16.8 can be generalized to higher dimensions without trouble, and the
proof of the generalization is the same. Consider, for example, the three-dimensional
version. If A ⊂ R3 , then let Ax denote the set {(y, z) : (x, y, z) ∈ A}.
Theorem 16.11. Let A ⊂ R³ be a measurable set such that A ⊂ [a, b] × [c, d] × [e, f]. Then the functions x → m̲_2(A_x) and x → m̄_2(A_x) are integrable in [a, b], and

m_3(A) = ∫_a^b m̲_2(A_x) dx = ∫_a^b m̄_2(A_x) dx.   (16.11)
In the theorem above—just as in Theorem 16.8—the variable x can be replaced
by the variable y or z.
With the help of equation (16.11), we can easily compute the volume of measurable sets whose sections are simple geometric shapes, for example rectangles or disks. One such family of sets consists of the solids of revolution, which we obtain by rotating the region under the graph of a function around the x-axis. More precisely, if the function f is nonnegative on the interval [a, b], then the set

B_f = {(x, y, z) : a ≤ x ≤ b, y² + z² ≤ f²(x)}

is called the solid of revolution determined by the function f.
Theorem 16.12. If f is nonnegative and integrable on the interval [a, b], then the
solid of revolution determined by f is measurable, and its volume is
m_3(B_f) = π · ∫_a^b f²(x) dx.   (16.12)
Proof. By (16.11), it is clear that (16.12) gives us the volume (assuming that B f
is measurable). The measurability of B f is, however, not guaranteed by Theorem 16.11, so we give a direct proof of this fact that uses the idea of squeezing
the solid of revolution between solids whose volumes we know. We can obtain
such solids if we rotate the inner and outer rectangles corresponding to the curve
{(x, y) : a ≤ x ≤ b, 0 ≤ y ≤ f (x)}, which give us so-called inner and outer cylinders
(Figure 16.5).
Consider a partition F : a = x_0 < · · · < x_n = b, and let

m_i = inf{f(x) : x ∈ [x_{i−1}, x_i]}  and  M_i = sup{f(x) : x ∈ [x_{i−1}, x_i]}   (i = 1, . . . , n).

The cylinders

C̲_i = {(x, y, z) : x_{i−1} ≤ x ≤ x_i, y² + z² ≤ m_i²}  and  C̄_i = {(x, y, z) : x_{i−1} ≤ x ≤ x_i, y² + z² ≤ M_i²}

clearly satisfy

⋃_{i=1}^n C̲_i ⊂ B_f ⊂ ⋃_{i=1}^n C̄_i.   (16.13)

Now we use the fact that the cylinder {(x, y, z) : c ≤ x ≤ d, y² + z² ≤ r²} is measurable and has volume r²π · (d − c). This is a simple corollary of the fact that if A ⊂ R^p and B ⊂ R^q are measurable sets, then A × B is also measurable, and m_{p+q}(A × B) = m_p(A) · m_q(B) (see Exercise 16.4). Then by (16.13),

m̄_3(B_f) ≤ m_3(⋃_{i=1}^n C̄_i) = π · ∑_{i=1}^n M_i² (x_i − x_{i−1}) = π · S_F(f²),

and

m̲_3(B_f) ≥ m_3(⋃_{i=1}^n C̲_i) = π · ∑_{i=1}^n m_i² (x_i − x_{i−1}) = π · s_F(f²).

Since

inf_F S_F(f²) = sup_F s_F(f²) = ∫_a^b f² dx,

we have that

π · ∫_a^b f² dx ≤ m̲_3(B_f) ≤ m̄_3(B_f) ≤ π · ∫_a^b f² dx.

Thus B_f is measurable, and (16.12) holds. □
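Formula (16.12) is easy to test numerically; the sketch below approximates the integral with a midpoint sum and recovers the volume (4/3)r³π of the ball of radius r (the radius and the number of subintervals are arbitrary choices).

import math

def volume_of_revolution(f, a, b, n=100_000):
    """Midpoint-rule approximation of (16.12): m_3(B_f) = pi * ∫_a^b f(x)^2 dx."""
    h = (b - a) / n
    return math.pi * sum(f(a + (i + 0.5) * h) ** 2 * h for i in range(n))

if __name__ == "__main__":
    r = 2.0
    ball = volume_of_revolution(lambda x: math.sqrt(r * r - x * x), -r, r)
    print(ball, 4.0 / 3.0 * math.pi * r ** 3)   # both about 33.51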
Exercises
16.11. Compute the volume of the solids of revolution corresponding to the following functions:
(a) arc sin x (x ∈ [0, 1]);
(b) f(x) = e^{−x} · √(sin x) (x ∈ [0, π]);
(c) f(x) = ch x (x ∈ [−a, a]).
16.12. Prove that the ellipsoid that is a result of rotating the ellipse x²/a² + y²/b² = 1 around the x-axis has volume (4/3)ab²π.
16.13. Consider two right circular cylinders of radius R whose axes intersect and are
perpendicular. Compute the volume of their intersection. (H)
16.14. Consider a right circular cylinder of radius R. Compute the volume of the part
of the cylinder bounded by the side, the base circle, and a plane passing through a
diameter of the base circle and forming an angle of π/4 with it.
16.15. Compute the volumes of the following solids (taking for granted that they are
measurable):
(a) {(x, y, z) : 0 ≤ x ≤ y ≤ 1, 0 ≤ z ≤ 2x + 3y + 4};
(b) {(x, y, z) : x² ≤ y ≤ 1, 0 ≤ z ≤ x² + y²};
(c) {(x, y, z) : x⁴ ≤ y ≤ 1, 0 ≤ z ≤ 2};
(d) {(x, y, z) : (x²/a²) + (y²/b²) + (z²/c²) ≤ 1} (ellipsoid);
(e) {(x, y, z) : |x| + |y| + |z| ≤ 1};
(f) {(x, y, z) : x, y, z ≥ 0, (x + y)² + z² ≤ 1}.
16.16. Check, with the help of Theorem 16.11, that both of the integrals

∫_0^1 ( ∫_{y²}^1 y · e^{−x²} dx ) dy   and   ∫_0^1 ( ∫_0^{√x} y · e^{−x²} dy ) dx

give us the volume of the set

{(x, y, z) : y ≥ 0, y² ≤ x ≤ 1, 0 ≤ z ≤ y · e^{−x²}}

(taking for granted that it is measurable); therefore, their values agree. Compute this common value.
16.17. Using the idea behind the previous question, compute the following integrals:
(a) ∫_0^1 ( ∫_x^1 (sin y)/y dy ) dx;
(b) ∫_0^1 ( ∫_{√y}^1 √(1 + x³) dx ) dy;
(c) ∫_0^1 ( ∫_{y^{2/3}}^1 y cos x² dx ) dy.
16.18. Let f be nonnegative and integrable on [a, b]. Prove that the volume of the
solid of revolution
{(x, y, z) : a ≤ x ≤ b, y2 + z2 ≤ f 2 (x)}
is equal to the area of the set
A = {(x, y) : a ≤ x ≤ b, 0 ≤ y ≤ f (x)}
times the circumference of the circle described by rotating the center of mass of A.
(This is sometimes called Guldin’s² second theorem.)
² Paul Guldin (1577–1643), Swiss mathematician.
16.4 Computing Arc Length
We defined the arc length and rectifiability of graphs of functions in Definition 10.78. Let the function f : [a, b] → R be continuously differentiable. Then by Theorem 13.41, the graph of f is rectifiable. Moreover, if s(x) denotes the arc length of the graph of the function over the interval [a, x], then s is differentiable, and s′(x) = √(1 + (f′(x))²) for all x ∈ [a, b]. Since the arc length of the graph of f is s(b) = s(b) − s(a), the fundamental theorem of calculus gives us the following theorem.

Theorem 16.13. If the function f : [a, b] → R is continuously differentiable, then the arc length of the graph is ∫_a^b √(1 + (f′(x))²) dx.

Example 16.14 (The Arc Length of a Parabola). Let s denote the arc length of the function x² over the interval [0, a]. By the previous theorem, s = ∫_0^a √(1 + 4x²) dx. Since by (15.23), ∫ √(x² + 1) dx = (1/2) · x · √(x² + 1) − (1/2) · log(√(x² + 1) − x) + c, with the help of a linear substitution and the fundamental theorem of calculus, we obtain

s = [ (1/2) · x · √(4x² + 1) − (1/4) · log(√(4x² + 1) − 2x) ]_0^a = (a/2) · √(4a² + 1) − (1/4) · log(√(4a² + 1) − 2a).
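The closed form above can be compared with the length of a fine inscribed polygonal path of the graph (the notion introduced in Definition 10.78); the following sketch does this for the arbitrary choice a = 1.

import math

def polyline_length(f, a, b, n=100_000):
    """Length of the inscribed polygonal path of the graph of f over [a, b]
    with n equal subintervals; for fine partitions it approaches the arc length."""
    h = (b - a) / n
    total, y_prev = 0.0, f(a)
    for i in range(1, n + 1):
        y = f(a + i * h)
        total += math.hypot(h, y - y_prev)
        y_prev = y
    return total

if __name__ == "__main__":
    a = 1.0
    s_exact = 0.5 * a * math.sqrt(4 * a * a + 1) \
        - 0.25 * math.log(math.sqrt(4 * a * a + 1) - 2 * a)
    print(polyline_length(lambda x: x * x, 0.0, a), s_exact)   # both about 1.4789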
If we want to compute the arc length of curves more general than graphs of
functions, we first need to clarify the notion of a curve. We can think of a curve
as the path of a moving particle, and we can determine that path by defining the
particle’s position vector at every time t. Thus the movement of a particle is defined
by a function that assigns a vector in whatever space the particle is moving to each
point of the time interval [a, b]. If the particle is moving in d-dimensional space,
then this means that we assign a d-dimensional vector to each t ∈ [a, b].
We will accept this idea as the definition of a curve, that is, a curve is a map of
the form g : [a, b] → Rd . If d = 2, then we are talking about planar curves, and if
d = 3, then we are talking about space curves.
Consider a curve g : [a, b] → Rd , and let the coordinates of the vector g(t) be
denoted by g1 (t), . . . , gd (t) for all t ∈ [a, b]. This defines a function gi : [a, b] → R
for each i = 1, . . . , d, which is called the ith coordinate function of the curve g.
Thus the curve is the map
t → (g1 (t), . . . , gd (t))
(t ∈ [a, b]).
We say that a curve g is continuous, differentiable, continuously differentiable,
Lipschitz, etc. if each coordinate function of g has the corresponding property.
We emphasize that when we talk about curves, we are talking about the mapping
itself, and not the path the curve traces (that is, its image). More simply: a curve is a
map, not a set in Rd . If the set H agrees with the image of the curve g : [a, b] → Rd ,
that is, H = g([a, b]), then we say that g is a parameterization of H. A set can have
several parameterizations. Let us see some examples.
The segment determined by the points u, v ∈ Rd is the set
[u, v] = {u + t · (v − u) : t ∈ [0, 1]}.
The segment [u, v] is not a curve. On the other hand, the map g : [0, 1] → Rd , which
is defined by g(t) = u +t · (v − u) for all t ∈ [0, 1], is a curve that traces out the points
of the segment [u, v], that is, g is a parameterization of the segment [u, v]. Another
curve is the map h : [0, 1] → Rd , which is defined as h(t) = u +t 2 · (v − u) (t ∈ [0, 1]).
The curve h is also a parameterization of the segment [u, v]. Nevertheless, the curves
g and h are different, since g(1/2) = (u + v)/2, while h(1/2) = (3u + v)/4.
Consider now the map g : [0, 2π ] → R2 , for which
g(t) = (cost, sint)
(t ∈ [0, 2π ]).
The planar curve g defines the path of a particle that traces out the unit circle C
centered at the origin, that is, g is a parameterization of the circle C. The same holds
for the curve g1 : [0, 2π ] → R2 , where
g1 (t) = (cos 2t, sin 2t)
(t ∈ [0, 2π ]).
The curve g1 also traces out C, but “twice over.” Clearly, g ≠ g1 . Since the length of
g is 2π , while the length of g1 is 4π , this example shows that arc length should be
assigned to the curve (that is, the map), and not to the image of the curve.
Note that the graph of any function f : [a, b] → R can be parameterized with the
planar curve t → (t, f (t)) ∈ R2 (t ∈ [a, b]).
We define the arc length of curves similarly to how we defined the arc length
of graphs of functions. A broken or polygonal line is a set that is the union
of connected segments. If a0 , . . . , an are arbitrary points of the space Rd , then
the polygonal line connecting the points ai (in this order) consists of the segments [a0 , a1 ], [a1 , a2 ], . . . , [an−1 , an ]. The length of a polygonal line is the sum of
the lengths of the segments that constitute it, that is, |a1 − a0 | + |a2 − a1 | + · · · +
|an − an−1 | (Figure 16.6).
Definition 16.15. An inscribed polygonal path of the curve g : [a, b] → Rd is a
polygonal line connecting the points g(t0 ), g(t1 ), . . . , g(tn ), where a = t0 < t1 < . . .
< tn = b is a partition of the interval [a, b]. The arc length of the curve g is the least
upper bound of the set of lengths of inscribed polygonal paths of g (which can be
infinite). We denote the arc length of the curve g by s(g). Thus
s(g) = sup { ∑_{i=1}^n |g(t_i) − g(t_{i−1})| : a = t_0 < t_1 < · · · < t_n = b, n = 1, 2, . . . }.
We say that a curve g is rectifiable if s(g) < ∞.
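The following sketch approximates s(g) by the length of the inscribed polygonal path belonging to a fine uniform partition; applied to the two parameterizations g and g1 of the circle discussed above, it returns values close to 2π and 4π, respectively. (The number of subintervals is an arbitrary choice.)

import math

def curve_length(g, a, b, n=100_000):
    """Length of the inscribed polygonal path of the planar curve g : [a, b] -> R^2
    over the uniform partition with n subintervals."""
    h = (b - a) / n
    total = 0.0
    px, py = g(a)
    for i in range(1, n + 1):
        x, y = g(a + i * h)
        total += math.hypot(x - px, y - py)
        px, py = x, y
    return total

if __name__ == "__main__":
    g  = lambda t: (math.cos(t), math.sin(t))          # traces the circle once
    g1 = lambda t: (math.cos(2 * t), math.sin(2 * t))  # traces it twice
    print(curve_length(g, 0.0, 2 * math.pi))    # about 2*pi
    print(curve_length(g1, 0.0, 2 * math.pi))   # about 4*pi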
Not every curve is rectifiable. It is clear that if the
image of the curve g : [a, b] →
Rd is unbounded, then there
exist arbitrarily long inscribed
polygonal paths of g, and so
s(g) = ∞. The following example shows that it is not enough
for a curve to be continuous or
even differentiable in order for
it to be rectifiable.
Example 16.16. Consider the planar curve g(t) = (t, f(t)) (t ∈ [0, 1]), where

f(t) = t · sin(1/t) if t ≠ 0,  and  f(t) = 0 if t = 0.

(The curve g parameterizes the graph of f on the interval [0, 1]; see Figure 16.7.) We show that the curve g is not rectifiable. Let us compute the length of the inscribed polygonal path of g corresponding to the partition F_n, where F_n consists of the points 0, 1, and

x_i = 2/((2i − 1)π)   (i = 1, . . . , n).

(We happen to have listed the inner points x_i in decreasing order.) Since

f(x_i) = (−1)^{i+1} · 2/((2i − 1)π)   (i = 1, . . . , n),
if 1 ≤ i ≤ n − 1, the length of the segment [g(x_i), g(x_{i+1})] is

|g(x_{i+1}) − g(x_i)| ≥ |f(x_{i+1}) − f(x_i)| = (2/π) · ( 1/(2i − 1) + 1/(2i + 1) ) > (2/π) · 1/(i + 1).

Thus the length of the inscribed polygonal path is at least

∑_{i=1}^n |g(x_i) − g(x_{i−1})| > (2/π) · ∑_{i=1}^n 1/i.
Since ∑ni=1 (1/i) → ∞ if n → ∞ (see Theorem 7.8), the set of lengths of inscribed
polygons is indeed unbounded. Thus the curve g is not rectifiable, even though the
functions t and f (t) defining g are continuous.
Now consider the planar curve h(t) = (t 2 , f (t 2 )) (t ∈ [0, 1]), where f is the function above. The curve h also parameterizes the graph of f and has the same inscribed
polygonal paths as g. Thus h is not rectifiable either, even though the functions t 2
and f (t 2 ) are both differentiable on [0, 1] (see Example 13.46).
Remark 16.17. As we have seen, the arc length of the curve g : [a, b] → Rd depends
on the map g, since the inscribed polygonal paths already depend on g. This means
that we cannot generally speak of the arc length of a set H—even when H is the
image of a curve, or in other words, even if it is parameterizable. This is because H
can have multiple parameterizations, and the arc lengths of these different parameterizations could be different. So for example, the curves f (t) = (t 2 , 0) (t ∈ [0, 1])
and g(t) = (t 2 , 0) (t ∈ [−1, 1]) parameterize the same set (namely the interval [0, 1]
of the x-axis), but their arc lengths are different.
In some important cases, however, we do give sets H a unique arc length. We
call sets that have a bijective and continuous parameterization simple curves. Some
examples of simple curves are the segments, arcs of a circle, and the graph of any
continuous function.
Theorem 16.18. If H ⊂ Rd is a simple curve, then every bijective and continuous
parameterization of H defines the same arc length.
Proof. We outline a sketch of the proof. Let β : [a, b] → H and γ : [c, d] → H be
bijective parameterizations. Then the function h = γ −1 ◦ β maps the interval [a, b]
onto the interval [c, d] bijectively, and one can show that h is also continuous. (This
step—which belongs to multivariable calculus—is not detailed here.) It follows that
in this case, h is strictly monotone (see Exercise 10.54). Thus β = γ ◦ h, where h is
a strictly monotone bijection of [a, b] onto [c, d].
This property ensures that the curves β : [a, b] → H and γ : [c, d] → H have
the same inscribed polygonal paths. If F : a = t0 < t1 < . . . < tn = b is a partition
of the interval [a, b], then either c = h(a) < h(t1 ) < · · · < h(tn ) = d or c = h(tn ) <
h(tn−1 ) < · · · < h(t0 ) = d, depending on whether h is increasing or decreasing. One
of the two will give a partition of the interval [c, d] that will give the same inscribed
polygonal path as given by F under the map β = γ ◦ h. That is, every inscribed
polygonal path of β : [a, b] → H is an inscribed polygonal path of γ : [c, d] → H. In
the same way, every inscribed polygonal path of γ : [c, d] → H is also an inscribed
polygonal path of β : [a, b] → H. It then follows that the arc lengths of the curves
β : [a, b] → H and γ : [c, d] → H are the suprema of the same set, so the arc lengths
agree. □
According to what we said above, we can talk about arc lengths of simple curves:
by this, we mean the arc length of a parameterization that is bijective and continuous.
The following theorem gives us simple sufficient conditions for a curve to be
rectifiable.
Theorem 16.19. Consider a curve g : [a, b] → Rd .
(i) If the curve is Lipschitz, then it is rectifiable.
(ii) If the curve g is differentiable, and the derivatives of the coordinate functions
of g are bounded on [a, b], then g is rectifiable.
(iii) If the curve is continuously differentiable, then it is rectifiable.
Proof. (i) Let the curve g be Lipschitz, and suppose that |gi (x) − gi (y)| ≤ K · |x − y|
for all x ∈ [a, b] and i = 1, . . . , d. Then using (16.3), we get that |g(x) − g(y)| ≤
Kd · |x − y| for all x, y ∈ [a, b]. Then it immediately follows that every inscribed
polygonal path of g has length at most Kd · (b − a), so g is rectifiable.
(ii) By the mean value theorem, if the function gi : [a, b] → R is differentiable on the
interval [a, b] and its derivative is bounded, then gi is Lipschitz. Thus the rectifiability of g follows from (i).
Statement (iii) is clear from (ii), since continuous functions on the interval [a, b] are necessarily bounded (Theorem 10.52). □
Theorem 16.20. Suppose that the curve g : [a, b] → Rd is differentiable, and the
derivatives of the coordinate functions of g are integrable on [a, b]. Then g is rectifiable, and
s(g) = ∫_a^b √( g′_1(t)² + · · · + g′_d(t)² ) dt.   (16.14)
We give a proof of this theorem in the appendix of the chapter.
Remark 16.21. Let f : [a, b] → R, and apply the above theorem to the curve given
by g(t) = (t, f (t)) (t ∈ [a, b]). We get that in Theorem 16.13, instead of having to
assume the continuous differentiability of the function f , it is enough to assume that
f is differentiable and that f ′ is integrable on [a, b].
Remark 16.22. Suppose that the curve g : [a, b] → Rd is differentiable. Let the coordinate functions of g be g1 , . . . , gd . If t0 and t are distinct points of the interval [a, b],
then
(g(t) − g(t_0))/(t − t_0) = ( (g_1(t) − g_1(t_0))/(t − t_0), . . . , (g_d(t) − g_d(t_0))/(t − t_0) ).
Here if we let t approach t0 , the jth coordinate of the right-hand side tends to g′j (t0 ).
Thus it is reasonable to call the vector (g′1 (t0 ), . . . , g′d (t0 )) the derivative of the curve
g at the point t0 . We denote it by g′ (t0 ). With this notation, (16.14) takes on the form
s(g) = ∫_a^b |g′(t)| dt.   (16.15)
The physical meaning of the derivative g′ is the velocity vector of a particle moving along the curve g. Clearly, the displacement of the particle between the times
t0 and t is g(t) − g(t0 ). The vector (g(t) − g(t0 ))/(t − t0 ) describes the average displacement of the particle during the time interval [t0 ,t]. As t → t0 , this average tends
to the velocity vector of the particle. Since (g(t) − g(t0 ))/(t − t0 ) tends to g′ (t0 ) in
each coordinate, g′ (t0 ) is exactly the velocity vector.
On the other hand, the value |(g(t) − g(t0 ))/(t − t0 )| denotes the average magnitude of the displacement of the moving particle during the time interval [t0 ,t].
The limit of this as t → t0 is the instantaneous velocity of the particle. Thus the
absolute value of the velocity vector, |g′ (t0 )|, is the instantaneous velocity. Thus the
physical interpretation of (16.15) is that during movement (along a curve) of a particle, the distance traversed by the point is equal to the integral of its instantaneous
velocity. We already saw this for motions along a straight path: this was the physical statement of the fundamental theorem of calculus (Remark 15.2). Thus we can
consider (16.14), that is, (16.15), to be an analogue of the fundamental theorem of
calculus for curves.
Example 16.23. Consider a circle of radius a that is rolling along the x-axis. The
path traced out by a point P on the rolling circle is called a cycloid. Suppose that
the point P was at the origin at the start of the movement. The circle rolls along the
x-axis (without slipping), meaning that at each moment, the length of the circular
arc between the point A of the circle touching the axis and P is equal to the length
of the segment OA.
Let t denote the angle between the rays CA and CP, where C denotes the center of the circle. Then the length of the circular arc AP is at, that is, OA = at. In the triangle CPR seen in the figure, PR = a sin t and CR = a cos t, so the coordinates of the point P are (at − a sin t, a − a cos t). After a full revolution of the circle, the
point P is touching the axis again. Thus the parameterization of the cycloid is
g(t) = (at − a sint, a − a cost)
(t ∈ [0, 2π ]).
Since g′(t) = (a − a cos t, a sin t) and

|g′(t)| = √((a − a cos t)² + (a sin t)²) = a · √(2 − 2 cos t) = a · √(4 sin²(t/2)) = 2a sin(t/2),

by (16.15), the arc length of a cycloid is

∫_0^{2π} 2a sin(t/2) dt = [ −4a cos(t/2) ]_0^{2π} = 8a.
Thus the arc length of a cycloid is eight times the radius of the rolling circle
(Figure 16.8).
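The value 8a can also be obtained by evaluating the integral in (16.15) numerically, without the trigonometric simplification; a sketch with the arbitrary choice a = 1 and a midpoint sum:

import math

def cycloid_length(a: float, n: int = 100_000) -> float:
    """Midpoint-rule evaluation of (16.15) for the cycloid,
       g'(t) = (a - a*cos t, a*sin t), t in [0, 2*pi]."""
    h = 2 * math.pi / n
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * h
        total += math.hypot(a - a * math.cos(t), a * math.sin(t)) * h
    return total

if __name__ == "__main__":
    print(cycloid_length(1.0))   # about 8.0 = 8a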
Exercises
16.19. Construct (a) a segment; (b) the boundary of a square as the image of both a
differentiable and a nondifferentiable curve.
16.20. Compute the arc lengths of the graphs of the following functions:
(a) f (x) = x3/2 (0 ≤ x ≤ 4);
(b) f (x) = log(1 − x2 ) (0 ≤ x ≤ a < 1);
(c) f (x) = log cos x (0 ≤ x ≤ a);
(d) f(x) = log((e^x + 1)/(e^x − 1)) (a ≤ x ≤ b).
16.21. Let the arc length of the graph f : [a, b] → R be denoted by L, and the arc
length of g(t) = (t, f (t)) (t ∈ [a, b]) by S. Prove that L ≤ S ≤ L + (b − a). Show that
the graph of f is rectifiable if and only if the curve g(t) (t ∈ [a, b]) is rectifiable.
16.22. In the following exercises, by the planar curve with parameterization x = x(t),
y = y(t) (t ∈ [a, b]) we mean the curve g(t) = (x(t), y(t)) (t ∈ [a, b]). Compute the
arc lengths of the following planar curves:
(a) x = a · cos³t, y = a · sin³t (0 ≤ t ≤ 2π) (astroid);
(b) x = a · cos⁴t, y = a · sin⁴t (0 ≤ t ≤ π/2);
(c) x = e^t (cos t + sin t), y = e^t (cos t − sin t) (0 ≤ t ≤ a);
(d) x = t − th t, y = 1/ch t (t ∈ [0, 1]);
(e) x = ctg t, y = 1/(2 sin²t) (π/4 ≤ t ≤ π/2).
16.23. Let n > 0, and consider the curve g(t) = (cos(t^n), sin(t^n)) (t ∈ [0, (2π)^{1/n}]) (which is a parameterization of the unit circle). Check that the arc length of this curve is 2π (independent of the value of n).
16.24. For a given b, d > 0, compute the arc length of the catenary, that is, the graph of the function f(x) = b⁻¹ · ch(bx) (0 ≤ x ≤ d).
16.25. Let a and b be fixed positive numbers. For which c will the arc length of the
ellipse with semiaxes a and b be equal to the arc length of the function c · sin x over
the interval [0, π ]?
16.26. How large can the arc length of the graph of a (a) monotone; (b) monotone
and continuous; (c) strictly monotone; (d) strictly monotone and continuous function
f : [0, 1] → [0, 1] be?
16.27. Let f be the Riemann function in [0, 1]. For which c > 0 will the graph of f^c be rectifiable? (∗ H S)
16.28. Prove that if f : [a, b] → R is differentiable and f ′ is bounded on [a, b], then
(a) the graph of f is rectifiable, and
(b) the arc length of the graph of f lies between

∫̲_a^b √(1 + f′(x)²) dx   and   ∫̄_a^b √(1 + f′(x)²) dx

(the lower and the upper integral, respectively).
16.29. Prove that if the curve g : [a, b] → R2 is continuous and rectifiable, then for
every ε > 0, the image g([a, b]) can be covered by finitely many disks whose total
area is less than ε . (∗ H S)
The center of mass of a curve. Imagine a curve g : [a, b] → Rd made up of some
homogeneous material. Then the weight of every arc of g is ρ times the length of
that arc, where ρ is some constant (density). Consider a partition F : a = t0 < t1 <
· · · < tn = b, and let ci ∈ [ti−1 ,ti ] be arbitrary inner points. If the curve is continuously
differentiable and the partition is fine enough, then the length of the arc g([ti−1 ,ti ])
is close to the length of the segment [g(ti−1 ), g(ti )], so the weight of the arc is close
to ρ · |g(ti ) − g(ti−1 )|. Thus if for every i, we concentrate a weight ρ · |g(ti ) − g(ti−1 )|
at the point g(ci ), then the weight distribution of the points of weights we get in this
way will be close to the weight distribution of the curve itself. We can expect the
center of mass of the collection of these points to be close to the center of mass of
the curve.
The center of mass of the system of points above is the point (1/L_F) · ∑_{i=1}^n |g(t_i) − g(t_{i−1})| · g(c_i), where L_F = ∑_{i=1}^n |g(t_i) − g(t_{i−1})|. If the partition is fine enough, then
LF is close to the arc length L.
Let the coordinate functions of g be g_1, . . . , g_d. Then the length |g(t_i) − g(t_{i−1})| is well approximated by the value √(g′_1(c_i)² + · · · + g′_d(c_i)²) · (t_i − t_{i−1}), so the jth coordinate of the point ∑_{i=1}^n |g(t_i) − g(t_{i−1})| · g(c_i) will be close to the sum

∑_{i=1}^n √(g′_1(c_i)² + · · · + g′_d(c_i)²) · g_j(c_i) · (t_i − t_{i−1}).

If the partition is fine enough, then this sum is close to the integral

s_j = ∫_a^b √(g′_1(t)² + · · · + g′_d(t)²) · g_j(t) dt.   (16.16)
This motivates the following definition.
Definition 16.24. If the curve g : [a, b] → R^d is differentiable and the derivatives of
the coordinate functions of g are integrable on [a, b], then the center of mass of g
is the point (s1 /L, . . . , sd /L), where L is the arc length of the curve, and the s j are
defined by (16.16) for all j = 1, . . . , d.
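Definition 16.24 can be evaluated numerically with midpoint sums. The sketch below does this for an (arbitrarily chosen) upper semicircle of radius r, whose center of mass is known to be (0, 2r/π).

import math

def curve_center_of_mass(g, gprime, a, b, n=100_000):
    """Midpoint-rule evaluation for a planar curve:
       L = ∫ |g'(t)| dt,  s_j = ∫ |g'(t)| * g_j(t) dt,  center = (s_1/L, s_2/L)."""
    h = (b - a) / n
    L = s1 = s2 = 0.0
    for i in range(n):
        t = a + (i + 0.5) * h
        x, y = g(t)
        dx, dy = gprime(t)
        speed = math.hypot(dx, dy)
        L += speed * h
        s1 += speed * x * h
        s2 += speed * y * h
    return s1 / L, s2 / L

if __name__ == "__main__":
    r = 1.0
    g  = lambda t: (r * math.cos(t), r * math.sin(t))
    gp = lambda t: (-r * math.sin(t), r * math.cos(t))
    print(curve_center_of_mass(g, gp, 0.0, math.pi))   # about (0, 2r/pi) = (0, 0.6366)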
Exercise
16.30. Compute the center of mass of the following curves:
(a) g(t) = (t,t 2 ) (t ∈ [0, 1]);
(b) g(t) = (t, sin t) (t ∈ [0, π]);
(c) g(t) = (a · (1 + cost) cost, a · (1 + cost) sint) (0 ≤ t ≤ 2π ), where a > 0 is constant (cardioid).
16.5 Polar Coordinates
The polar coordinates of a point P distinct from the origin are given by the ordered pair (r, ϕ), where r denotes the distance of P from the origin, and ϕ denotes the angle between the ray OP and the positive direction of the x-axis (Figure 16.9). From the figure, it is clear that if the polar coordinates of P are (r, ϕ), then the usual (Cartesian) coordinates are (r cos ϕ, r sin ϕ). The polar coordinates of the origin are given by (0, ϕ), where ϕ can be arbitrary.
If [α , β ] ⊂ [0, 2π ), then every function r : [α , β ] → [0, ∞) describes a curve, the
collection of points (r(ϕ ), ϕ ) for ϕ ∈ [α , β ]. Since the Cartesian coordinates of the
point (r(ϕ ), ϕ ) are (r(ϕ ) cos ϕ , r(ϕ ) sin ϕ ), using our old notation we are actually
talking about the curve
g(t) = (r(t) cost, r(t) sint)
(t ∈ [α , β ]).
(16.17)
Definition 16.25. The function r : [α , β ] → [0, ∞) is called the polar coordinate
form of the curve (16.17).
In this definition, we do not assume [α , β ] to be part of the interval [0, 2π ]. This
is justified in that for arbitrary t ∈ R, if r > 0, then the polar coordinate form of the
point P = (r cost, r sint) is (r,t − 2kπ ), where k is an integer such that 0 ≤ t − 2kπ <
−→
2π . In other words, t is equal to one of the angles between OP and the positive half
of the x-axis, so in this more general sense, we can say that the points (r(ϕ ), ϕ )
given in polar coordinate form give us the curve (16.17).
Examples 16.26. 1. If [α , β ] ⊂ [0, 2π ), then the function r ≡ a (ϕ ∈ [α , β ]), where
a > 0 is constant, is the polar coordinate form of a subarc of the circle of radius a
centered at the origin.
2. The function
r(ϕ ) = a · ϕ
(ϕ ∈ [0, β ])
(16.18)
describes what is called the Archimedean
spiral. The Archimedean spiral is the path of
a particle that moves uniformly along a ray
starting from the origin while the ray rotates
uniformly about the origin (Figure 16.10).
Theorem 16.27. Suppose that the function
r : [α , β ] → [0, ∞) is differentiable, and its
derivative is integrable on [α , β ]. Then the
curve given by r in its polar coordinate form
is rectifiable, and its arc length is
∫_α^β √((r′)² + r²) dϕ.   (16.19)
Proof. The curve g given by (16.17) is differentiable, and the derivatives of the
coordinate functions, r′ cost − r sint and r′ sint + r cost, are integrable on [α , β ].
Thus by Theorem 16.20, the curve is rectifiable. Since
|g′(t)| = √((r′ cos t − r sin t)² + (r′ sin t + r cos t)²) = √((r′)² + r²),

by (16.15), we get (16.19). □

Example 16.28. The arc length of the Archimedean spiral given by (16.18) is

∫_0^β √(a²t² + a²) dt = a · ( (1/2) · β · √(β² + 1) − (1/2) · log(√(β² + 1) − β) ).
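The closed form can be checked against a direct numerical evaluation of (16.19); in the sketch below the parameters a = 1, β = 2π and the number of subintervals are arbitrary choices.

import math

def spiral_arc_length(a: float, beta: float, n: int = 100_000) -> float:
    """Midpoint-rule evaluation of (16.19) for r(φ) = a·φ on [0, β]:
       ∫_0^β sqrt(r'(φ)^2 + r(φ)^2) dφ = ∫_0^β a·sqrt(1 + φ^2) dφ."""
    h = beta / n
    return sum(a * math.sqrt(1.0 + ((i + 0.5) * h) ** 2) * h for i in range(n))

if __name__ == "__main__":
    a, beta = 1.0, 2 * math.pi
    closed_form = a * (0.5 * beta * math.sqrt(beta ** 2 + 1)
                       - 0.5 * math.log(math.sqrt(beta ** 2 + 1) - beta))
    print(spiral_arc_length(a, beta), closed_form)   # both about 21.26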
Consider a curve given in its polar coordinate form
r : [α , β ] → [0, ∞). The union of the segments connecting every point of the curve to the origin is
called a sectorlike region. By the definition of polar
coordinates, the region in question is exactly the set
A = {(r cos ϕ , r sin ϕ ) : 0 ≤ r ≤ r(ϕ ), α ≤ ϕ ≤ β } .
(16.20)
Theorem 16.29. Let 0 ≤ α < β ≤ 2π. If the function r is nonnegative and integrable on [α, β], then the sectorlike region given in (16.20) is measurable, and its area is

(1/2) · ∫_α^β r²(ϕ) dϕ.

Proof. To prove this theorem, we use the fact that the circular sector with radius r and central angle δ is measurable and has area r²δ/2 (see Exercise 16.8). Moreover, we use that if A ⊂ ⋃_{i=1}^n A_i, then m̄(A) ≤ ∑_{i=1}^n m(A_i), and if A ⊃ ⋃_{i=1}^n B_i, where the sets B_i are nonoverlapping, then m̲(A) ≥ ∑_{i=1}^n m(B_i). These follow easily from the definitions of the inner and outer measure.
Consider a partition F : α = t0 < t1 < · · · < tn = β of the interval [α , β ].
If mi = inf{r(t) : t ∈ [ti−1 ,ti ]} and Mi = sup{r(t) : t ∈ [ti−1 ,ti ]}, then the set of points
with polar coordinates (r, ϕ ) (ϕ ∈ [ti−1 ,ti ], 0 ≤ r ≤ mi ) is a circular sector Bi that is
contained in A (Figure 16.11). Since the circular sectors B1 , . . . , Bn are nonoverlapping, the inner area of A is at least as large as the sum of the areas of these sectors,
which is (1/2) · ∑ni=1 m2i (ti − ti−1 ) = (1/2) · sF (r2 ).
Similarly, the set of points with polar coordinates (r, ϕ ) (ϕ ∈ [ti−1 ,ti ], 0 ≤ r ≤ Mi )
is a circular sector A_i, and the sectors A_1, . . . , A_n together cover A. Thus the outer area of A must be less than or equal to the sum of the areas of these sectors, which is (1/2) · ∑_{i=1}^n M_i² (t_i − t_{i−1}) = (1/2) · S_F(r²). Thus

(1/2) · s_F(r²) ≤ m̲(A) ≤ m̄(A) ≤ (1/2) · S_F(r²)

for every partition F. Since r² is integrable,

sup_F s_F(r²) = inf_F S_F(r²) = ∫_α^β r²(ϕ) dϕ,

and

(1/2) · ∫_α^β r²(ϕ) dϕ ≤ m̲(A) ≤ m̄(A) ≤ (1/2) · ∫_α^β r²(ϕ) dϕ.
This shows that A is measurable, and its area is equal to half of the integral. □
Exercises
16.31. Compute the arc lengths of the following curves given in polar coordinate
form:
(a) r = a · (1 + cos ϕ ) (0 ≤ ϕ ≤ 2π ), where a > 0 is constant (cardioid);
(b) r = a/ϕ (π /2 ≤ ϕ ≤ 2π ), where a > 0 is constant;
(c) r = a · ec·ϕ (0 ≤ ϕ ≤ α ), where a > 0, c ∈ R, and α > 0 are constants;
(d) r = p/(1 + cos ϕ) (0 ≤ ϕ ≤ π/2), where p > 0 is constant; what is this curve? (H)
(e) r = p/(1 − cos ϕ) (π/2 ≤ ϕ ≤ π), where p > 0 is constant; what is this curve? (H)
16.32. Let a > 0 be constant. The set of
planar points whose distance from (−a, 0)
times its distance from (a, 0) is equal to
a2 is called a lemniscate (Figure 16.12).
Show that r2 = 2a2 · cos 2ϕ (−π /4 ≤ ϕ ≤ π /4
or 3π /4 ≤ ϕ ≤ 5π /4) is a parameterization of
the lemniscate in polar coordinate form. Compute the area of the region bounded by the lemniscate.
16.33. Compute the area of the set of points satisfying r2 + ϕ 2 ≤ 1.
16.34. Compute the area of the region bounded by the curve r = sin ϕ + eϕ (0 ≤
ϕ ≤ π ) given in polar coordinate form, and the segment [−eπ , 1] of the x-axis.
16.35. The curve r = a · ϕ (0 ≤ ϕ ≤ π /4) given in polar coordinate form is the graph
of a function f .
(a) Compute the area of the region under the graph of f .
(b) Revolve the region under this graph about the x-axis. Compute the volume of
the solid of revolution we obtain in this way. (H)
16.36. The cycloid with parameter a over [0, 2aπ ] is the graph of a function g.
(a) Compute the area of the region under the graph of g.
(b) Revolve the region under this graph about the x-axis. Compute the volume of
the solid of revolution we obtain in this way. (H)
16.37. Express the curve satisfying the conditions
(a) x4 + y4 = a2 (x2 + y2 ); and
(b) x4 + y4 = a · x2 y
in polar coordinate form, and compute the area of the enclosed region.
16.6 The Surface Area of a Surface of Revolution
Determining the surface area of surfaces is a much harder task than finding the
area of planar regions or the volume of solids; the definition of surface area itself
already causes difficulties. To define surface area, the method used to define area—
bounding the value from above and below—does not work. We could try to copy the
method of defining arc length and use the known surface area of inscribed polygonal
surfaces, but this already fails in the simplest cases: one can show that the inscribed
polygonal surfaces of a right circular cylinder can have arbitrarily large surface area.
To precisely define surface area, we need the help of differential geometry, or at least
multivariable differentiation and integration, which we do not yet have access to.
Determining the surface area of a surface of revolution is a simpler task. Let
f : [a, b] → R be a nonnegative function, and let A f denote the set we get by rotating
graph f about the x-axis. It is an intuitive assumption that the surface area of A f is
well approximated by the surface area of the rotation of an inscribed polygonal path
about the x-axis. Before we inspect this assumption in more detail, let us compute
the surface area of the rotated inscribed polygonal paths.
Let F : a = x0 < x1 < · · · < xn = b be an arbitrary partition. Rotating the inscribed
polygonal paths corresponding to F about x gives us a set PF , which consists of n
parts: the ith part, which we will denote by PiF , is the rotated segment over the
interval [xi−1 , xi ] (Figure 16.13). We can see in the figure that the set PiF is the side
of a right conical frustum with height xi − xi−1 , and radii of bases f (xi ) and f (xi−1 ).
It is intuitively clear that if we unroll the
side of a right conical frustum, then the area
of the region we get is equal to the surface
area of the side of the frustum. Since the unrolled side is the difference of two circular
sectors, the area of this can be computed easily, using that the area of the sector is half the
radius times the arc length.
In the end, we get that if the height of
the frustum is m, and the bases have radii
r and R, then the lateral surface area is π(R + r) · √((R − r)² + m²) (see Exercise 16.38). Thus the surface area of the side P_i^F is

π · (f(x_i) + f(x_{i−1})) · h_i,

and the surface area of the set P_F is

Φ_F = π · ∑_{i=1}^n (f(x_i) + f(x_{i−1})) · h_i,   (16.21)
where h_i = √((f(x_i) − f(x_{i−1}))² + (x_i − x_{i−1})²). Here we should note that the sum ∑_{i=1}^n h_i is equal to the length of the inscribed polygonal path (corresponding to F). Therefore, ∑_{i=1}^n h_i ≤ L, where L denotes the arc length of graph f.
Now let us return to figuring out
in what sense the value ΦF approximates the surface area of the set A f .
Since the length of the graph of f is
equal to the supremum of the arc lengths
of the inscribed polygonal paths, our first
thought might be that the surface area of
A f needs to be equal to the supremum of
the values ΦF . However, this is already
not the case with simple functions. Consider, for example, the function |x| on the
interval [−1, 1]. In this case, the set A f
is the union of the sides of two cones, and its surface area is 2 · (2π · √2/2) = 2√2 · π (Figure 16.14). But if the partition F consists only of the points −1 and 1, then P_F is a cylinder whose surface area is 2π · 2 = 4π, which is larger than the surface area of A_f.
By the example above, we can rule out being able to define the surface area of A f
as the supremum of the set of values ΦF . However, the example hints at the correct
definition, since an arbitrary partition F of [−1, 1] makes PF equal either to A f (if
0 is a base point of F) or to the union of the sides of three frustums. If the mesh of
the partition is small, then the surface area of the middle frustum will be small, and
the two other frustums will be close to the two cones making up A f . This means
that the surface area of PF will be arbitrarily close to the surface area of A f if the
partition becomes fine enough. This observation motivates the following definition.
Definition 16.30. Let f be nonnegative on [a, b]. We say that the surface area of

A_f = {(x, y, z) : a ≤ x ≤ b, y² + z² = f²(x)}

exists and equals Φ if for every ε > 0, there exists a δ > 0 such that for every partition F of [a, b] with mesh smaller than δ, we have |Φ_F − Φ| < ε, where Φ_F is the surface area of the set we get by rotating the inscribed polygonal path corresponding to F about the x-axis, defined by (16.21).

Theorem 16.31. Let f be a nonnegative and continuous function on the interval [a, b] whose graph is rectifiable. Suppose that f is differentiable on (a, b), and f · √(1 + (f′)²) is integrable on [a, b]. Then the surface area of A_f exists, and its value is

2π · ∫_a^b f(x) · √(1 + f′(x)²) dx.
Proof. We did not assume the function f to be differentiable at the points a and b, so the function g = f · √(1 + (f′)²) might not be defined at these points. To prevent any ambiguity, let us define g to be zero at the points a and b; by Theorem 14.46 and Remark 14.47, the integrability of g and the value of the integral I = ∫_a^b g dx are unchanged when we do this.
Let ε > 0 be given. Since f is uniformly continuous on [a, b], there exists a number η > 0 such that if x, y ∈ [a, b] and |y − x| < η , then | f (y) − f (x)| < ε . By Theorem 14.23, we can choose a number δ > 0 such that for every partition F of [a, b]
with mesh smaller than δ and every approximating sum σF corresponding to F, we
have |σF − I| < ε .
Let F : a = x0 < x1 < · · · < xn = b be a partition with mesh smaller than
min(η , δ ). We show that the value ΦF defined by (16.21) is close to 2π I.
By the mean value theorem, for each i, there exists a point ci ∈ (xi−1 , xi ) such
that f (xi ) − f (xi−1 ) = f ′ (ci ) · (xi − xi−1 ). Then
h_i := √((f(x_i) − f(x_{i−1}))² + (x_i − x_{i−1})²) = √((f′(c_i))² + 1) · (x_i − x_{i−1})

for all i. Thus the approximating sum of g with inner points c_i is

σ_F(g; (c_i)) = ∑_{i=1}^n f(c_i) · √((f′(c_i))² + 1) · (x_i − x_{i−1}) = ∑_{i=1}^n f(c_i) · h_i.
Since the partition F has smaller mesh than δ , we have |σF (g; (ci )) − I| < ε . The
partition F has smaller mesh than η , too, so by the choice of η , we have | f (xi ) −
f (ci )| < ε and | f (xi−1 ) − f (ci )| < ε , so | f (xi ) + f (xi−1 ) − 2 f (ci )| < 2ε for all i.
Thus

| (1/(2π)) · Φ_F − σ_F(g; (c_i)) | = | ∑_{i=1}^n ((f(x_i) + f(x_{i−1}) − 2f(c_i))/2) · h_i | ≤ ε · ∑_{i=1}^n h_i ≤ ε · L,
where L denotes the arc length of graph f . This gives
| (1/(2π)) · Φ_F − I | ≤ | (1/(2π)) · Φ_F − σ_F(g; (c_i)) | + |σ_F(g; (c_i)) − I| < (L + 1) · ε.   (16.22)
Since ε > 0 was arbitrary and (16.22) holds for every partition with small enough mesh, we have shown that the surface area of A_f is 2πI. □
Example 16.32. We compute the surface area of a spherical segment, that is, part
of a sphere centered at the origin with radius r that falls between the planes x = a
and x = b, where −r ≤ a < b ≤ r.
The sphere centered at the origin with radius r is given by the rotated graph of the function f(x) = √(r² − x²) (x ∈ [−r, r]) about the x-axis.
Since f is monotone on the intervals [−r, 0] and [0, r], its graph is rectifiable
on [−r, r] and thus on [a, b], too. On the other hand, f is continuous on [−r, r] and
differentiable on (−r, r), where we have

f(x) · √(1 + (f′(x))²) = √(r² − x²) · √(1 + x²/(r² − x²)) = r.

Thus we can apply Theorem 16.31. We get that the surface area we are looking for is

2π · ∫_a^b r dx = 2πr · (b − a),

so the area of a spherical segment agrees with the distance between the planes that define it times the circumference of a great circle of the sphere. As a special case, the surface area of a sphere of radius r is 4r²π.
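The formula 2πr · (b − a) for the spherical segment can be tested by evaluating the integral of Theorem 16.31 with a midpoint sum; the radius, the planes x = a and x = b, and the number of subintervals below are arbitrary choices.

import math

def surface_of_revolution(f, fprime, a, b, n=100_000):
    """Midpoint-rule evaluation of 2π ∫_a^b f(x) * sqrt(1 + f'(x)^2) dx."""
    h = (b - a) / n
    total = 0.0
    for i in range(n):
        x = a + (i + 0.5) * h
        total += f(x) * math.sqrt(1.0 + fprime(x) ** 2) * h
    return 2 * math.pi * total

if __name__ == "__main__":
    r = 2.0
    f = lambda x: math.sqrt(r * r - x * x)
    fp = lambda x: -x / math.sqrt(r * r - x * x)
    # segment between the planes x = -1 and x = 1: area 2*pi*r*(b - a)
    print(surface_of_revolution(f, fp, -1.0, 1.0), 2 * math.pi * r * 2.0)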
Remark 16.33. Suppose that f is a nonnegative continuous function on the interval [a, b] whose graph is rectifiable. Denote the arc length of graph f over the interval [a, x] by s(x). Then s is strictly monotone increasing and continuous on [a, b], and so it has an inverse function s⁻¹.
By a variant of the proof of Theorem 16.31, one can show that with these conditions, A_f has a surface area, and it is equal to 2π · ∫_0^L (f ∘ s⁻¹) dx, where L denotes the arc length of graph f.
Exercises
16.38. Prove that if the height of a frustum is m, and the radii of the lower and upper circles are r and R, then flattening the side of the frustum gives us a region whose area is π(R + r) · √((R − r)² + m²).
16.39. Compute the surface area of the surfaces that we get by revolving the graphs
of the following functions about the x-axis:
(a) e^x over [a, b];
(b) √x over [a, b], where 0 < a < b;
(c) sin x over [0, π];
(d) ch x over [−a, a].
16.40. Call a region falling between two parallel lines a strip. By the width of the
strip, we mean the distance between the two lines. Prove that if we cover a circle
with finitely many strips, then the sum of the widths of the strips we used is at least
as large as the diameter of the circle. (H S)
16.41. Let f be nonnegative and continuously differentiable on [a, b]. Prove that the
surface area of the surface of revolution A f = {(x, y, z) : a ≤ x ≤ b, y2 + z2 = f 2 (x)}
equals the length of graph f times the circumference of the circle traced by the
center of mass of graph f during its revolution (this is sometimes called Guldin’s
first theorem).
16.7 Appendix: Proof of Theorem 16.20
Proof. Since every integrable function is already bounded, statement (ii) of Theorem 16.19 implies that g is rectifiable. Let f = √(g′_1² + · · · + g′_d²). We will show that for every ε > 0, there exists a partition F such that the inscribed polygonal path of g corresponding to F has length ℓ_F, which differs from s(g) by less than ε, and also that every Riemann sum σ_F(f) of f differs from ℓ_F by less than ε. It will follow from this that |σ_F(f) − s(g)| < 2ε for every Riemann sum corresponding to the partition F, and so by Theorem 14.19, f is integrable with integral s(g).
Let F : a = t_0 < · · · < t_n = b be a partition of the interval [a, b], and let ℓ_F denote the length of the corresponding inscribed polygonal path, that is, let ℓ_F = ∑_{i=1}^n |g(t_i) − g(t_{i−1})|. For all i = 1, . . . , n,

|g(t_i) − g(t_{i−1})| = √((g_1(t_i) − g_1(t_{i−1}))² + · · · + (g_d(t_i) − g_d(t_{i−1}))²).

By the mean value theorem, there exist points c_{i,1}, . . . , c_{i,d} ∈ (t_{i−1}, t_i) such that

g_j(t_i) − g_j(t_{i−1}) = g′_j(c_{i,j}) · (t_i − t_{i−1})   (j = 1, . . . , d).

Then

ℓ_F = ∑_{i=1}^n |g(t_i) − g(t_{i−1})| = ∑_{i=1}^n √(g′_1(c_{i,1})² + · · · + g′_d(c_{i,d})²) · (t_i − t_{i−1}).   (16.23)

Now let e_i ∈ [t_{i−1}, t_i] (i = 1, . . . , n) be arbitrary inner points, and consider the corresponding Riemann sum of f:

σ_F(f; (e_i)) = ∑_{i=1}^n f(e_i) · (t_i − t_{i−1}) = ∑_{i=1}^n √(g′_1(e_i)² + · · · + g′_d(e_i)²) · (t_i − t_{i−1}).   (16.24)
By the similarities of the right-hand sides of the equalities (16.23) and (16.24), we
can expect that for a suitable partition F, ℓF and σF ( f ; (ei )) will be close to each
other. By inequality (16.3),
| √(g′_1(c_{i,1})² + · · · + g′_d(c_{i,d})²) − √(g′_1(e_i)² + · · · + g′_d(e_i)²) | ≤ ∑_{j=1}^d |g′_j(c_{i,j}) − g′_j(e_i)| ≤ ∑_{j=1}^d ω(g′_j; [t_{i−1}, t_i]).

Thus subtracting (16.23) and (16.24), we get that

|ℓ_F − σ_F(f; (e_i))| ≤ ∑_{j=1}^d ∑_{i=1}^n ω(g′_j; [t_{i−1}, t_i]) · (t_i − t_{i−1}) ≤ ∑_{j=1}^d Ω_F(g′_j)   (16.25)
for every partition F and every ei . Let ε > 0 be fixed. Since s(g) is the supremum of
the numbers ℓF , there exists a partition F0 such that s(g) − ε < ℓF0 ≤ s(g). It is easy
to check that if we add new base points to a partition, then the value of ℓF cannot
decrease. Clearly, if we add another base point, then we replace a term |g(tk−1 ) −
g(tk )| in the sum by |g(tk−1 ) − g(tk′ )| + |g(tk′ ) − g(tk )|. The triangle inequality ensures
that the value of ℓF does not decrease with this. Thus if F is a refinement of F0 , then
s(g) − ε < ℓF0 ≤ ℓF ≤ s(g).
(16.26)
Since the functions g′j are integrable, there exist partitions Fj such that ΩFj (g′j ) <
ε ( j = 1, . . . , d). Let F be the union of partitions F0 , F1 , . . . , Fd . Then
ΩF (g′j ) ≤ ΩFj (g′j ) < ε
for all j = 1, . . . , d. If we now combine (16.25) and (16.26), we get that
|σ_F(f; (e_i)) − s(g)| ≤ |σ_F(f; (e_i)) − ℓ_F| + |ℓ_F − s(g)| < (d + 1)ε.   (16.27)
In the end, for every ε > 0, we have constructed a partition F that satisfies (16.27) with an arbitrary choice of inner points e_i. Then by Theorem 14.19, f is integrable, and its integral is s(g). □
Chapter 17
Functions of Bounded Variation
We know that if f is integrable, then the lower and upper sums of every partition F
approximate its integral from below and above, and so the difference between either
sum and the integral is at most SF − sF = ΩF , the oscillatory sum corresponding
to F.
Thus the oscillatory sum is an upper bound for the difference between the approximating sums and the integral.
We also know that if f is integrable, then the oscillatory sum can become smaller than any fixed positive number for a sufficiently fine partition (see Theorem 14.23). If the function f is monotone, we can say more: Ω_F(f) ≤ |f(b) − f(a)| · δ(F) for all partitions F, where δ(F) denotes the mesh of the partition F (see Theorem 14.28 and inequality (14.19) in the proof of the theorem). A similar inequality holds for Lipschitz functions: if |f(x) − f(y)| ≤ K · |x − y| for all x, y ∈ [a, b], then Ω_F(f) ≤ K · (b − a) · δ(F) for every partition F (see Exercise 17.1). We can state this condition more concisely by saying that Ω_F(f) = O(δ(F)) holds for f if there exists a number C such that for an arbitrary partition F, Ω_F(f) ≤ C · δ(F). (Here we used the big-oh notation seen on p. 141.) By the above, this condition holds for both monotone and Lipschitz functions.
Is it true that the condition Ω_F(f) = O(δ(F)) holds for every integrable function? The answer is no: one can show that the function

f(x) = x · sin(1/x) if 0 < x ≤ 1,  and  f(0) = 0
does not satisfy the condition (see Exercise 17.3). It is also true that for an arbitrary
sequence ωn that tends to zero, there exists a continuous function f : [0, 1] → R
such that ΩFn ( f ) ≥ ωn for all n, where Fn denotes a partition of [0, 1] into n equal
subintervals (see Exercise 17.4). That is, monotone functions “are better behaved”
than continuous functions in this aspect.
We characterize below the class of functions for which Ω_F(f) = O(δ(F)) holds.
By what we stated above, every monotone and every Lipschitz function is included
in this class, but not every continuous function is. The elements of this class are the
so-called functions of bounded variation, and they play an important role in analysis.
Definition 17.1. Let the function f be defined on the interval [a, b]. If we have a partition of the interval [a, b] given by F : a = x_0 < x_1 < · · · < x_n = b, let V_F(f) denote the sum ∑_{i=1}^n |f(x_i) − f(x_{i−1})|. The total variation of f over [a, b] is the supremum of the set of sums V_F(f), where F ranges over all partitions of the interval [a, b]. We denote the total variation of f on [a, b] by V(f; [a, b]) (which can be infinite). We say that the function f : [a, b] → R is of bounded variation if V(f; [a, b]) < ∞.
Remarks 17.2. 1. Suppose that the graph of the function f consists of finitely
many monotone segments. Let f be monotone on each of the intervals [ci−1 , ci ]
(i = 1, . . . , k), where F0 : a = c0 < c1 < · · · < ck = b is a suitable partition of [a, b].
It is easy to check that for an arbitrary partition F, we have
V_F(f) ≤ ∑_{i=1}^k |f(c_i) − f(c_{i−1})| = V_{F_0}(f).
Thus the total variation of f is equal to VF0 ( f ), and so the supremum defining the total variation is actually a maximum. This statement can be turned around: if there is
a largest value among VF ( f ), then the graph of f consists of finitely many monotone
segments (see Exercise 17.5).
2. Suppose again that f is monotone on each of the intervals [ci−1 , ci ] (i = 1, . . . , k),
where F0 : a = c0 < c1 < · · · < ck = b. Consider the graph of f to be the crest of a
mountain along which a tourist is walking. Suppose that for this tourist, the effort
required to change altitude is proportional to the change in altitude, independent of
whether the tourist is ascending or descending (and thus the tourist floats effortlessly
when the mountain crest is horizontal). Then the value VF0 ( f ) measures the required
effort for the tourist to traverse the crest of the mountain.
Generalizing this interpretation, we can say that the total variation of an arbitrary
function is the effort required to “climb” the graph, and so a function is of bounded
variation if the graph can be climbed with a finite amount of effort.
Theorem 17.3.
(i) If f is monotone on [a, b], then f is of bounded variation there, and V ( f ; [a, b]) =
| f (b) − f (a)|.
(ii) Let f be Lipschitz on [a, b], and suppose that | f (x) − f (y)| ≤ K · |x − y| for
all x, y ∈ [a, b]. Then f is of bounded variation on [a, b], and V ( f ; [a, b]) ≤
K · (b − a).
Proof. (i) If f is monotone, then for an arbitrary partition a = x0 < x1 < · · ·
< xn = b,
∑_{i=1}^n |f(x_i) − f(x_{i−1})| = | ∑_{i=1}^n (f(x_i) − f(x_{i−1})) | = |f(b) − f(a)|.
(ii) If | f (x) − f (y)| ≤ K · |x − y| for all x, y ∈ [a, b], then for an arbitrary partition
a = x0 < x1 < · · · < xn = b,
∑_{i=1}^n |f(x_i) − f(x_{i−1})| ≤ ∑_{i=1}^n K · (x_i − x_{i−1}) = K · (b − a). □
As the example mentioned in the introduction above demonstrates, not every
continuous function is of bounded variation.
Example 17.4. Let f(x) = x · sin(1/x) if 0 < x ≤ 1, and f(0) = 0. We show that f is not of bounded variation on [0, 1]. Let F_n be the partition that consists of the base points 0, 1, and x_i = 2/((2i − 1)π) (i = 1, . . . , n). Then V_{F_n}(f) ≥ ∑_{i=2}^{n} |f(x_i) − f(x_{i−1})|. In Example 16.16, we saw that this sum can be arbitrarily large if we choose n to be sufficiently large, so f is not of bounded variation.
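The growth of these sums is easy to observe numerically. The short sketch below (Python; an illustration added here, not part of the argument) evaluates V_{F_n}(f) for the partition of Example 17.4 and shows that the values grow without bound, roughly on the order of log n.

```python
import math

def f(x):
    return x * math.sin(1.0 / x) if x > 0 else 0.0

def variation_over_partition(f, points):
    # V_F(f) = sum of |f(x_i) - f(x_{i-1})| over consecutive base points
    return sum(abs(f(b) - f(a)) for a, b in zip(points, points[1:]))

for n in (10, 100, 1000, 10000):
    # base points 0 < 2/((2n-1)*pi) < ... < 2/pi < 1, as in Example 17.4
    xs = [0.0] + [2.0 / ((2 * i - 1) * math.pi) for i in range(n, 0, -1)] + [1.0]
    print(n, variation_over_partition(f, xs))  # grows without bound, roughly like log n
```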
Theorem 17.5.
(i) For every f : [a, b] → R and c ∈ R, we have

V(c · f; [a, b]) = |c| · V(f; [a, b]).  (17.1)

(ii) For arbitrary functions f, g : [a, b] → R,

V(f + g; [a, b]) ≤ V(f; [a, b]) + V(g; [a, b]).  (17.2)

(iii) If both f and g are of bounded variation on [a, b], then c · f + d · g is also of bounded variation there for every c, d ∈ R.
Proof. (i) For an arbitrary partition a = x_0 < x_1 < · · · < x_n = b,

∑_{i=1}^{n} |c · f(x_i) − c · f(x_{i−1})| = |c| · ∑_{i=1}^{n} |f(x_i) − f(x_{i−1})|.

Taking the supremum of both sides over all partitions, we obtain (17.1).
(ii) Let h = f + g. Then for an arbitrary partition a = x_0 < x_1 < · · · < x_n = b,

∑_{i=1}^{n} |h(x_i) − h(x_{i−1})| ≤ ∑_{i=1}^{n} |f(x_i) − f(x_{i−1})| + ∑_{i=1}^{n} |g(x_i) − g(x_{i−1})| ≤ V(f; [a, b]) + V(g; [a, b]).

Since this is true for every partition, (17.2) holds. The third statement of the theorem follows from (i) and (ii). □
Theorem 17.6. If f is of bounded variation on [a, b], then it is also of bounded variation on every subinterval.

Proof. If [c, d] ⊂ [a, b], then extending an arbitrary partition F : c = x_0 < x_1 < · · · < x_n = d of [c, d] to a partition F′ of [a, b], we obtain that ∑_{i=1}^{n} |f(x_i) − f(x_{i−1})| ≤ V_{F′}(f) ≤ V(f; [a, b]). Since this holds for every partition of [c, d], we get V(f; [c, d]) ≤ V(f; [a, b]) < ∞. □
Theorem 17.7. Let a < b < c. If f is of bounded variation on [a, b] and on [b, c], then it is of bounded variation on [a, c] as well, and

V(f; [a, c]) = V(f; [a, b]) + V(f; [b, c]).  (17.3)

Proof. Let F : a = x_0 < x_1 < · · · < x_n = c be an arbitrary partition of the interval [a, c], and let x_{k−1} ≤ b ≤ x_k. It is clear that

V_F(f) ≤ ∑_{i=1}^{k−1} |f(x_i) − f(x_{i−1})| + |f(b) − f(x_{k−1})| + |f(x_k) − f(b)| + ∑_{i=k+1}^{n} |f(x_i) − f(x_{i−1})| ≤ V(f; [a, b]) + V(f; [b, c]),

and so V(f; [a, c]) ≤ V(f; [a, b]) + V(f; [b, c]).
Now let ε > 0 be given, and let a = x_0 < x_1 < · · · < x_n = b and b = y_0 < y_1 < · · · < y_k = c be partitions of [a, b] and [b, c], respectively, such that ∑_{i=1}^{n} |f(x_i) − f(x_{i−1})| > V(f; [a, b]) − ε and ∑_{i=1}^{k} |f(y_i) − f(y_{i−1})| > V(f; [b, c]) − ε. Then

V(f; [a, c]) ≥ ∑_{i=1}^{n} |f(x_i) − f(x_{i−1})| + ∑_{i=1}^{k} |f(y_i) − f(y_{i−1})| > V(f; [a, b]) + V(f; [b, c]) − 2ε.

Since ε was arbitrary, V(f; [a, c]) ≥ V(f; [a, b]) + V(f; [b, c]) follows, and so (17.3) holds. □
The following theorem gives a simple characterization of functions of bounded
variation.
Theorem 17.8. The function f : [a, b] → R is of bounded variation if and only if it
can be expressed as the difference of two monotone increasing functions.
Proof. Every monotone function is of bounded variation (Theorem 17.3), and the
difference of two functions of bounded variation is also of bounded variation
(Theorem 17.5), so the “if” part of the theorem is clearly true.
Now suppose that f is of bounded variation. Let g(x) = V ( f ; [a, x]) for all
x ∈ (a, b], and let g(a) = 0. If a ≤ x < y ≤ b, then by Theorem 17.7,
g(y) = g(x) +V ( f ; [x, y]) ≥ g(x) + | f (y) − f (x)| ≥ g(x) − f (y) + f (x).
Thus on the one hand, g(y) ≥ g(x), while on the other hand, g(y) + f (y) ≥ g(x) +
f (x), so both g and g + f are monotone increasing in [a, b]. Since f = (g + f ) − g,
this ends the proof of the theorem.
□
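The decomposition in the proof is easy to mirror numerically. The sketch below (Python; an illustration only, with an arbitrarily chosen smooth test function) builds the discrete analogue of g(x) = V(f; [a, x]) on a fine partition and checks that both g and g + f are increasing, and that f = (g + f) − g.

```python
import math

def f(x):
    # a test function of bounded variation on [0, 1] (our choice, for illustration)
    return math.sin(6 * x) * math.exp(-x)

N = 1000
ts = [k / N for k in range(N + 1)]
fs = [f(t) for t in ts]

# discrete analogue of g(x) = V(f; [a, x]): cumulative sums of |increments|
g = [0.0]
for prev, cur in zip(fs, fs[1:]):
    g.append(g[-1] + abs(cur - prev))

h = [gk + fk for gk, fk in zip(g, fs)]   # the second increasing function, g + f

print(all(b >= a for a, b in zip(g, g[1:])))   # True: g is increasing
print(all(b >= a for a, b in zip(h, h[1:])))   # True: g + f is increasing
print(max(abs((hk - gk) - fk) for hk, gk, fk in zip(h, g, fs)))  # 0.0: f = (g + f) - g
```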
Corollary 17.9. If f is of bounded variation on [a, b], then it is integrable there.
In Example 17.4, we saw that not every continuous function is of bounded variation. Since continuous functions are integrable, the corollary above cannot be turned around.
The following theorem can often be applied to compute total variation. We leave
its proof to the reader (in Exercise 17.12).
Theorem 17.10. If f is differentiable and f′ is integrable on [a, b], then f is of bounded variation and V(f; [a, b]) = ∫_a^b |f′| dx.
The following theorem clarifies the condition for rectifiability.
Theorem 17.11.
(i) The graph of a function f : [a, b] → R is rectifiable if and only if f is of bounded
variation.
(ii) A curve g : [a, b] → Rd is rectifiable if and only if its coordinate functions are of
bounded variation.
Proof. (i) Let the arc length of the graph of f be denoted by s(f; [a, b]) (see Definition 10.78). Then for an arbitrary partition F : a = x_0 < x_1 < · · · < x_n = b, we have

V_F(f) = ∑_{i=1}^{n} |f(x_i) − f(x_{i−1})| ≤ ∑_{i=1}^{n} √((x_i − x_{i−1})² + (f(x_i) − f(x_{i−1}))²) ≤ s(f; [a, b])

and

∑_{i=1}^{n} √((x_i − x_{i−1})² + (f(x_i) − f(x_{i−1}))²) ≤ ∑_{i=1}^{n} (x_i − x_{i−1}) + ∑_{i=1}^{n} |f(x_i) − f(x_{i−1})| = (b − a) + V_F(f) ≤ (b − a) + V(f; [a, b]).

Since these hold for every partition, we have V(f; [a, b]) ≤ s(f; [a, b]) and s(f; [a, b]) ≤ (b − a) + V(f; [a, b]). It is then clear that s(f; [a, b]) is finite if and only if V(f; [a, b]) is finite, that is, the graph of f is rectifiable if and only if f is of bounded variation.
(ii) Let the coordinate functions of g be g_1, . . . , g_d, and let F : a = t_0 < t_1 < · · · < t_n = b be a partition of the interval [a, b]. If p_i = g(t_i) (i = 0, . . . , n), then for every i = 1, . . . , n and j = 1, . . . , d,

|g_j(t_i) − g_j(t_{i−1})| ≤ |p_i − p_{i−1}| = √((g_1(t_i) − g_1(t_{i−1}))² + · · · + (g_d(t_i) − g_d(t_{i−1}))²) ≤ |g_1(t_i) − g_1(t_{i−1})| + · · · + |g_d(t_i) − g_d(t_{i−1})|.

If we sum these inequalities for i = 1, . . . , n, then we get that

V_F(g_j) ≤ ℓ_F ≤ V_F(g_1) + · · · + V_F(g_d) ≤ V(g_1; [a, b]) + · · · + V(g_d; [a, b]),

where ℓ_F denotes the length of the inscribed polygon corresponding to the partition F. Since this holds for every partition, we have

V(g_j; [a, b]) ≤ s(g) ≤ V(g_1; [a, b]) + · · · + V(g_d; [a, b])

for all j = 1, . . . , d, where s(g) is the arc length of the curve. It is then clear that s(g) is finite if and only if V(g_j; [a, b]) is finite for all j = 1, . . . , d, that is, g is rectifiable if and only if its coordinate functions are of bounded variation. □
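The two inequalities in part (i) can also be checked numerically on a sample function. In the sketch below (Python; illustrative only, with f(x) = √x as a convenient monotone example) V_F(f), the inscribed polygon length ℓ_F, and (b − a) + V_F(f) are compared on a fine uniform partition of [0, 1].

```python
import math

def f(x):
    return math.sqrt(x)

N = 1000
xs = [k / N for k in range(N + 1)]
ys = [f(x) for x in xs]

V_F = sum(abs(y2 - y1) for y1, y2 in zip(ys, ys[1:]))            # V_F(f)
ell_F = sum(math.hypot(x2 - x1, y2 - y1)                         # inscribed polygon length
            for x1, x2, y1, y2 in zip(xs, xs[1:], ys, ys[1:]))

print(V_F <= ell_F <= (1 - 0) + V_F)   # True: V_F(f) <= l_F <= (b - a) + V_F(f)
print(V_F, ell_F, 1 + V_F)
```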
Now we prove the statement from the introduction.
Theorem 17.12. A function f : [a, b] → R satisfies Ω_F(f) = O(δ(F)) for every partition F if and only if f is of bounded variation.
Proof. We first prove that if f is of bounded variation, then Ω_F(f) = O(δ(F))
holds. At the beginning of the chapter, we saw that if g is monotone in [a, b], then
ΩF (g) ≤ |g(b) − g(a)| · δ (F) for every partition F. We also know that if f is of
bounded variation in [a, b], then it can be expressed as f = g − h, where g and h are
monotone increasing functions (Theorem 17.8). Thus for an arbitrary partition F,
ΩF ( f ) ≤ ΩF (g) + ΩF (h) ≤ |g(b) − g(a)| · δ (F) + |h(b) − h(a)| · δ (F) = C · δ (F),
where C = |g(b) − g(a)| + |h(b) − h(a)|. Thus the condition Ω_F(f) = O(δ(F))
indeed holds.
Now we show that if f is not of bounded variation, then the property Ω_F(f) = O(δ(F)) does not hold, that is, for every real number A, there exists a partition F
such that ΩF ( f ) > A · δ (F). The proof relies on the observation that for an arbitrary
bounded function f : [a, b] → R and partition F : a = x_0 < x_1 < · · · < x_n = b,

Ω_F(f) = ∑_{i=1}^{n} ω(f; [x_{i−1}, x_i]) · (x_i − x_{i−1}) ≥ (∑_{i=1}^{n} |f(x_i) − f(x_{i−1})|) · min_{1≤i≤n} (x_i − x_{i−1}) = V_F(f) · ρ(F),

where ρ(F) = min_{1≤i≤n} (x_i − x_{i−1}).
If f is not of bounded variation, then for every real number A, there exists a
partition F0 such that VF0 ( f ) > A. However, we know only that this partition F0
satisfies ΩF0 ( f ) ≥ VF0 ( f ) · ρ (F0 ) > A · ρ (F0 ), while we want ΩF ( f ) > A · δ (F) to
hold for some F.
So consider a refinement F of F0 such that ρ (F) ≥ δ (F)/2. We can get such a
refinement by further subdividing the intervals in F0 into pieces whose lengths are
between ρ (F0 )/2 and ρ (F0 ). In this case, δ (F) ≤ ρ (F0 ) and ρ (F) ≥ ρ (F0 )/2, so
ρ (F) ≥ δ (F)/2 holds. Since F is a refinement of F0 , we easily see that VF ( f ) ≥
VF0 ( f ) > A, so ΩF ( f ) ≥ VF ( f ) · ρ (F) > A · δ (F)/2. Since A was arbitrary, this
concludes the proof.
□
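The refinement step at the end of the proof is completely constructive. The following sketch (Python; illustrative only, with an arbitrary sample partition F_0) subdivides each interval of F_0 into pieces of length between ρ(F_0)/2 and ρ(F_0) and verifies that the resulting partition F satisfies ρ(F) ≥ δ(F)/2.

```python
import math

def refine(points):
    # Given a partition F0 (sorted base points), subdivide each subinterval into
    # pieces whose lengths lie between rho0/2 and rho0, where rho0 is the length
    # of the shortest subinterval of F0.
    rho0 = min(b - a for a, b in zip(points, points[1:]))
    refined = [points[0]]
    for a, b in zip(points, points[1:]):
        m = math.ceil((b - a) / rho0)     # pieces of length (b - a)/m, which lies in [rho0/2, rho0]
        refined.extend(a + (b - a) * k / m for k in range(1, m))
        refined.append(b)
    return refined

F0 = [0.0, 0.03, 0.5, 0.52, 1.0]          # a sample coarse partition
F = refine(F0)
delta = max(b - a for a, b in zip(F, F[1:]))   # mesh delta(F)
rho = min(b - a for a, b in zip(F, F[1:]))     # rho(F)
print(rho >= delta / 2)                        # True, as required in the proof
```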
With the theorems above in hand, we might ask whether there exist functions for which we can say more than Ω_F(f) = O(δ(F)). Could it be possible for Ω_F(f) ≤ C · δ(F)² to hold for every partition with some constant C? The answer to this question is no. If f is constant, then of course Ω_F(f) = 0 for every partition. If, however, f is not constant, then there exists a c > 0 such that Ω_{F_n}(f) ≥ c · δ(F_n) for all n, where F_n denotes the uniform partition of [a, b] into n equal subintervals (see Exercise 17.2).
Exercises
17.1. Prove that if | f (x) − f (y)| ≤ K · |x − y| for all x, y ∈ [a, b], then ΩF ( f ) ≤ K ·
(b − a) · δ (F) for every partition F of [a, b]. (S)
17.2. Prove that if f is bounded in [a, b] and Fn denotes the uniform partition of [a, b]
into n equal subintervals, then
Ω_{F_n}(f) ≥ ω(f; [a, b]) · (b − a)/n,

where ω(f; [a, b]) is the oscillation of the function f on the interval [a, b]. (H)
17.3. Let f (x) = x · sin(1/x) if 0 < x ≤ 1, and f (0) = 0. Prove that there exists a
constant c > 0 such that ΩFn ( f ) ≥ c · (log n)/n = c · (log n) · δ (Fn ) for all n, where
Fn denotes the uniform partition of [0, 1] into n equal subintervals. (∗ H)
17.4. Show that if an arbitrary sequence ωn tends to zero, then there exists a continuous function f : [0, 1] → R such that ΩFn ( f ) ≥ ωn for all n, where Fn denotes the
uniform partition of [0, 1] into n equal subintervals. (∗)
17.5. Let f : [a, b] → R and suppose that there is a largest value among VF ( f ) (where
F runs over the partitions of [a, b]). Show that in this case, the graph of f is made
up of finitely many monotone segments. (H)
17.6. Show that if f is of bounded variation in [a, b], then so is f 2 .
17.7. Show that if f and g are of bounded variation in [a, b], then so is f · g. Moreover, if inf |g| > 0, then so is f /g.
17.8. Let f (x) = xα · sin(1/x) if 0 < x ≤ 1 and f (0) = 0. For what α will f be of
bounded variation in [0, 1]?
17.9. Give an example for a function f that is differentiable in [0, 1] but is not of
bounded variation there.
17.10. For what c > 0 will the cth power of the Riemann function be of bounded
variation in [0, 1]? (H)
17.11. Prove that if f is differentiable on [a, b] and f ′ is bounded there, then f is of
bounded variation on [a, b].
17.12. Prove Theorem 17.10. (S)
17.13. Let α > 0 be given. We say that f is Hölder α in the interval [a, b] if there
exists a number C such that | f (x) − f (y)| ≤ C · |x − y|α for all x, y ∈ [a, b]. Show
that if α > 0, then the function xα is Hölder β in the interval [0, 1], where β =
min(α , 1). (S)
17.14. Show that if f is Hölder α in the interval [a, b], where α > 1, then f is
constant. (H)
17.15. Let f(x) = x^α · sin x^{−β} if 0 < x ≤ 1 and f(0) = 0, where α and β are positive constants. Show that f is Hölder γ in the interval [0, 1], where γ = min(α/(β + 1), 1). (∗ H S)
17.16. For what α can we say that if f is Hölder α , then f is of bounded variation
in [a, b]? (H)
17.17. Prove that a function of bounded variation in [a, b] has at most countably
many points of discontinuity.
17.18. Let f : [a, b] → R be continuous. Prove that for every ε > 0, there exists
a δ > 0 such that every partition F with mesh smaller than δ satisfies VF ( f ) >
V ( f ; [a, b]) − ε .
17.19. Prove that a function defined on [a, b] is not of bounded variation in [a, b] if and only if there exists a strictly monotone sequence (x_n) in [a, b] such that ∑_{i=1}^{∞} |f(x_{i+1}) − f(x_i)| = ∞. (∗ H)
Chapter 18
The Stieltjes Integral
In this chapter we discuss a generalization of the Riemann integral that is often used
in both theoretical and applied mathematics. Stieltjes1 originally introduced this
concept to deal with infinite continued fractions,2 but it was soon apparent that the
concept is useful in other areas of mathematics—and thus in mathematical physics,
probability, and number theory, independently of its role in continued fractions.
We illustrate the usefulness of the concept with two simple examples.
Example 18.1. Consider a planar curve parameterized by γ(t) = (x(t), y(t)) (t ∈ [a, b]), where the coordinate function x is strictly monotone increasing and continuous, and the coordinate function y is nonnegative on [a, b]. The problem is to find the area of the region under the curve. If a = t_0 < t_1 < · · · < t_n = b is a partition of the interval [a, b] and c_i ∈ [t_{i−1}, t_i] for all i, then the area can be approximated by the sum

∑_{i=1}^{n} y(c_i) · (x(t_i) − x(t_{i−1})).
We can expect the area to be the limit—in a suitable sense—of these sums.
Example 18.2. Consider a metal rod of negligible thickness but not negligible mass
M > 0. Suppose that the rod lies on the interval [a, b], and let the mass of the rod
over the subinterval [a, x] be m(x) for all x ∈ [a, b]. Our task is to find the center of
mass of the rod.
We know that if we place weights m_1, . . . , m_n at the points x_1, . . . , x_n, then the center of mass of this system of points {x_1, . . . , x_n} is

(m_1 x_1 + · · · + m_n x_n)/(m_1 + · · · + m_n).
1 Thomas Joannes Stieltjes (1856–1894), Dutch mathematician.
2 For more on continued fractions, see [5].
Consider a partition a = t_0 < t_1 < · · · < t_n = b and choose points c_i ∈ [t_{i−1}, t_i] for all i. If we suppose that the mass distribution of the rod is continuous (meaning that the mass of the rod at every single point is zero), then the mass of the rod over the interval [t_{i−1}, t_i] is m(t_i) − m(t_{i−1}). Concentrating this weight at the point c_i, the center of mass of the system of points {c_1, . . . , c_n} is

(c_1 · (m(t_1) − m(t_0)) + · · · + c_n · (m(t_n) − m(t_{n−1})))/M.
This approximates the center of mass of the rod itself, and once again, we expect
that the limit of these numbers in a suitable sense will be the center of mass.
We can see that in both examples, a sum appears that depends on two functions.
In these sums, we multiply the value of the first function (which, in Example 18.2,
was the function x) at the inner points by the increments of the second function.
We use the following notation and naming conventions. Let f, g : [a, b] → R be given functions, let F : a = x_0 < x_1 < · · · < x_n = b be a partition of the interval [a, b], and let c_i ∈ [x_{i−1}, x_i] (i = 1, . . . , n) be arbitrary inner points. Then the sum

∑_{i=1}^{n} f(c_i) · (g(x_i) − g(x_{i−1}))

is denoted by σ_F(f, g; (c_i)), and is called the approximating sum of f with respect to g.
Definition 18.3. Let f, g : [a, b] → R be given functions. We say that the Stieltjes integral ∫_a^b f dg of f with respect to g exists and has value I if for every ε > 0, there exists a δ > 0 such that if F : a = x_0 < x_1 < · · · < x_n = b is a partition of [a, b] with mesh smaller than δ and c_i ∈ [x_{i−1}, x_i] (i = 1, . . . , n) are arbitrary inner points, then

|σ_F(f, g; (c_i)) − I| < ε.  (18.1)
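Approximating sums are easy to compute. The sketch below (Python; an illustration only, with f(x) = x and g(x) = x² chosen here for convenience) evaluates σ_F(f, g; (c_i)) on finer and finer uniform partitions of [0, 1]; the values approach 2/3, the value of the corresponding Stieltjes integral.

```python
def stieltjes_sum(f, g, points, inner):
    # sigma_F(f, g; (c_i)) = sum of f(c_i) * (g(x_i) - g(x_{i-1}))
    return sum(f(c) * (g(b) - g(a)) for a, b, c in zip(points, points[1:], inner))

for n in (10, 100, 1000):
    xs = [k / n for k in range(n + 1)]
    cs = [(a + b) / 2 for a, b in zip(xs, xs[1:])]     # midpoints as inner points
    print(n, stieltjes_sum(lambda x: x, lambda x: x * x, xs, cs))  # tends to 2/3
```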
Remarks 18.4. 1. Let g(x) = x for all x ∈ [a, b]. It is clear that the Stieltjes integral ∫_a^b f dg exists exactly when the Riemann integral ∫_a^b f dx does, and in this case, their values agree.
2. If the function g is constant, then the Stieltjes integral ∫_a^b f dg always exists, and its value is zero. If the function f is constant and has value c, then the Stieltjes integral ∫_a^b f dg always exists and has value c · (g(b) − g(a)).
3. The existence of the Stieltjes integral ∫_a^b f dg is not guaranteed by f and g being Riemann integrable in [a, b]. One can show that if f and g share a point of discontinuity, then the Stieltjes integral ∫_a^b f dg does not exist. (See Exercise 18.4.) Thus if f and g are bounded functions in [a, b] and are continuous everywhere except at a common point, then they are both Riemann integrable in [a, b], while the Stieltjes integral ∫_a^b f dg does not exist.
4. For the Stieltjes integral ∫_a^b f dg to exist, it is not even sufficient for f and g to be continuous in [a, b]. See Exercise 18.5.
Now we show that if g is strictly monotone and continuous, then the Stieltjes integral ∫_a^b f dg can be reduced to a Riemann integral.

Theorem 18.5. If g : [a, b] → R is strictly monotone increasing and continuous, then the Stieltjes integral ∫_a^b f dg exists if and only if the Riemann integral ∫_{g(a)}^{g(b)} f ∘ g^{−1} dx does, and then

∫_a^b f dg = ∫_{g(a)}^{g(b)} f ∘ g^{−1} dx.

Proof. If F : a = x_0 < x_1 < · · · < x_n = b is a partition and c_i ∈ [x_{i−1}, x_i] (i = 1, . . . , n) are arbitrary inner points, then the points t_i = g(x_i) (i = 0, . . . , n) give us a partition F̄ of the interval [g(a), g(b)], and we have g(c_i) ∈ [g(x_{i−1}), g(x_i)] for all i = 1, . . . , n. Then

∑_{i=1}^{n} f(c_i) · (g(x_i) − g(x_{i−1})) = ∑_{i=1}^{n} (f ∘ g^{−1})(g(c_i)) · (t_i − t_{i−1})  (18.2)

is an approximating sum of the Riemann integral ∫_{g(a)}^{g(b)} f ∘ g^{−1} dx. Conversely, if we have F̄ : g(a) = t_0 < t_1 < · · · < t_n = g(b) as a partition of the interval [g(a), g(b)] and d_i ∈ [t_{i−1}, t_i] for all i = 1, . . . , n, then the points x_i = g^{−1}(t_i) (i = 0, . . . , n) create a partition F of the interval [a, b]. If c_i = g^{−1}(d_i), then c_i ∈ [x_{i−1}, x_i] for all i = 1, . . . , n, and (18.2) holds.
By the uniform continuity of g, we know that for every δ > 0, if the partition F has small enough mesh, then F̄ has mesh smaller than δ. Thus comparing statement (iii) of Theorem 14.23 with equality (18.2), we get that if the Riemann integral ∫_{g(a)}^{g(b)} f ∘ g^{−1} dx exists, then the Stieltjes integral ∫_a^b f dg also exists, and they are equal.
Since the function g^{−1} is also uniformly continuous, it follows that for every δ > 0, if the partition F̄ has small enough mesh, then F has mesh smaller than δ. Thus by the definition of the Stieltjes integral, by statement (iii) of Theorem 14.23, and by equality (18.2), it follows that if the Stieltjes integral ∫_a^b f dg exists, then the Riemann integral ∫_{g(a)}^{g(b)} f ∘ g^{−1} dx exists as well, and they have the same value. □
The following statements can be deduced easily from the definition of the Stieltjes integral. We leave their proofs to the reader.
Theorem 18.6.
(i) If the Stieltjes integrals ∫_a^b f_1 dg and ∫_a^b f_2 dg exist, then for all c_1, c_2 ∈ R, the Stieltjes integral ∫_a^b (c_1 f_1 + c_2 f_2) dg exists as well, taking on the value c_1 · ∫_a^b f_1 dg + c_2 · ∫_a^b f_2 dg.
(ii) If the Stieltjes integrals ∫_a^b f dg_1 and ∫_a^b f dg_2 exist, then for all c_1, c_2 ∈ R, the Stieltjes integral ∫_a^b f d(c_1 g_1 + c_2 g_2) exists as well, and has value c_1 · ∫_a^b f dg_1 + c_2 · ∫_a^b f dg_2.
The following theorem gives us a necessary and sufficient condition for the existence of Stieltjes integrals. For the proof of the theorem, see Exercise 18.6.
Theorem 18.7 (Cauchy's Criterion). The Stieltjes integral ∫_a^b f dg exists if and only if for every ε > 0, there exists a δ > 0 such that if F_1 and F_2 are partitions of [a, b] with mesh smaller than δ, then

|σ_{F_1}(f, g; (c_i)) − σ_{F_2}(f, g; (d_j))| < ε

with an arbitrary choice of the inner points c_i and d_j.
With the help of Cauchy’s criterion, it is easy to prove the following theorem.
We leave the proof to the reader once again.
Theorem 18.8. If the Stieltjes integral ∫_a^b f dg exists, then for all a < c < b, the integrals ∫_a^c f dg and ∫_c^b f dg also exist, and ∫_a^b f dg = ∫_a^c f dg + ∫_c^b f dg.

We should note that the existence of the Stieltjes integrals ∫_a^c f dg and ∫_c^b f dg alone does not imply the existence of ∫_a^b f dg (see Exercise 18.2). If, however, at least one of f and g is continuous at c, and the other function is bounded, then the existence of ∫_a^b f dg already follows (see Exercise 18.3).
The following important theorem shows that the roles of f and g in the Stieltjes integral are symmetric in some sense. We remind our readers that [F]_a^b denotes the difference F(b) − F(a).

Theorem 18.9 (Integration by Parts). If the Stieltjes integral ∫_a^b f dg exists, then ∫_a^b g df also exists, and ∫_a^b f dg + ∫_a^b g df = [f · g]_a^b.

Proof. The proof relies on Abel's rearrangement (see equation (14.31)).
Let F : a = x_0 < x_1 < · · · < x_n = b be a partition, and let c_i ∈ [x_{i−1}, x_i] (i = 1, . . . , n) be inner points. If we apply Abel's rearrangement to the approximating sum σ_F(g, f; (c_i)), we get that

σ_F(g, f; (c_i)) = ∑_{i=1}^{n} g(c_i) · (f(x_i) − f(x_{i−1})) = f(b)g(b) − f(a)g(a) − ∑_{i=0}^{n} f(x_i) · (g(c_{i+1}) − g(c_i)),  (18.3)

where c_0 = a and c_{n+1} = b. Since a = c_0 ≤ c_1 ≤ · · · ≤ c_{n+1} = b and x_i ∈ [c_i, c_{i+1}] for all i = 0, . . . , n, we have

∑_{i=0}^{n} f(x_i) · (g(c_{i+1}) − g(c_i)) = σ_{F′}(f, g; (d_i)),  (18.4)

where F′ denotes the partition defined by the points c_i (0 ≤ i ≤ n + 1) with the corresponding inner points. (We list each of the points c_i only once. Note that on the left-hand side of (18.4), we can leave out the terms in which c_i = c_{i+1}.) Then by (18.3),

σ_F(g, f; (c_i)) = [f · g]_a^b − σ_{F′}(f, g; (d_i)).

Let ε > 0 be given, and suppose that δ > 0 satisfies the condition of Definition 18.3. It is easy to see that if the mesh of the partition F is smaller than δ/2, then the mesh of F′ is smaller than δ, and so |σ_{F′}(f, g; (d_i)) − I| < ε, where I = ∫_a^b f dg. This shows that if the mesh of F is smaller than δ/2, then |σ_F(g, f; (c_i)) − ([f g]_a^b − I)| < ε for an arbitrary choice of the inner points. It follows that the integral ∫_a^b g df exists, and that its value is [f g]_a^b − I. □
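The identity of Theorem 18.9 can be observed numerically. In the sketch below (Python; illustrative only, with f(x) = x and g = exp as sample choices on [0, 1]) the two approximating sums σ_F(f, g) and σ_F(g, f) are computed for a fine partition, and their sum is compared with [f · g]_0^1.

```python
import math

def stieltjes_sum(f, g, points, inner):
    return sum(f(c) * (g(b) - g(a)) for a, b, c in zip(points, points[1:], inner))

f = lambda x: x
g = math.exp
n = 100000
xs = [k / n for k in range(n + 1)]
cs = xs[1:]                                   # right endpoints as inner points

lhs = stieltjes_sum(f, g, xs, cs) + stieltjes_sum(g, f, xs, cs)
rhs = f(1) * g(1) - f(0) * g(0)               # [f * g] evaluated from 0 to 1
print(lhs, rhs)                               # the two numbers nearly agree
```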
Since Cauchy’s criterion (Theorem 18.7) is hard to apply in deciding whether a
specific Stieltjes integral exists, we need other conditions guaranteeing the existence
of the Stieltjes integral that can be easier to check. The simplest such condition is
the following.
Theorem 18.10. If one of the functions f and g defined on the interval [a, b] is continuous, while the other is of bounded variation, then the Stieltjes integrals ∫_a^b f dg and ∫_a^b g df exist.
Proof. By Theorem 18.9, it suffices to prove the existence of the integral ∫_a^b f dg, and we can also assume that f is continuous and g is of bounded variation. By Theorems 17.8 and 18.6, it suffices to consider the case that g is monotone increasing.
For an arbitrary partition F : a = x_0 < x_1 < · · · < x_n = b, let

s_F = ∑_{i=1}^{n} m_i · (g(x_i) − g(x_{i−1}))  and  S_F = ∑_{i=1}^{n} M_i · (g(x_i) − g(x_{i−1})),

where m_i = min{f(x) : x ∈ [x_{i−1}, x_i]} and M_i = max{f(x) : x ∈ [x_{i−1}, x_i]} for all i = 1, . . . , n. Since g(x_i) − g(x_{i−1}) ≥ 0 for all i,

s_F ≤ σ_F(f, g; (c_i)) ≤ S_F  (18.5)

with any choice of inner points c_i.
It is easy to see that s_{F_1} ≤ S_{F_2} for any partitions F_1 and F_2 (by repeating the proofs of Lemmas 14.3 and 14.4, using that g is monotone increasing). Thus the set of "lower sums" s_F is nonempty and bounded from above. If I denotes the supremum of this set, then s_F ≤ I ≤ S_F for every partition F.
Now we show that ∫_a^b f dg exists, and its value is I. Let ε > 0 be given. By Heine's theorem, f is uniformly continuous on [a, b], so there exists a δ > 0 such that |f(x) − f(y)| < ε whenever x, y ∈ [a, b] and |x − y| < δ. Let F : a = x_0 < x_1 < · · · < x_n = b be an arbitrary partition with mesh smaller than δ. By Weierstrass's theorem, for each i, there are points c_i, d_i ∈ [x_{i−1}, x_i] such that f(c_i) = m_i and f(d_i) = M_i. Then |d_i − c_i| ≤ x_i − x_{i−1} < δ, so by our choice of δ, we have M_i − m_i = f(d_i) − f(c_i) < ε. Thus

S_F − s_F = ∑_{i=1}^{n} (M_i − m_i) · (g(x_i) − g(x_{i−1})) ≤ ε · ∑_{i=1}^{n} (g(x_i) − g(x_{i−1})) = (g(b) − g(a)) · ε.

Now using (18.5), we get that

I − (g(b) − g(a)) · ε < s_F ≤ σ_F(f, g; (c_i)) ≤ S_F < I + (g(b) − g(a)) · ε

for arbitrary inner points c_i. This shows that ∫_a^b f dg exists and that its value is I. □
Remark 18.11. One can show that the class of continuous functions and the class of functions of bounded variation are "dual classes" in the sense that a function f is continuous if and only if ∫_a^b f dg exists for all functions g that are of bounded variation, and a function g is of bounded variation if and only if ∫_a^b f dg exists for every continuous function f. (See Exercises 18.8 and 18.9.)
The following theorem can often be applied to computing Stieltjes integrals.
Theorem 18.12. If f is Riemann integrable, g is differentiable, and g′ is Riemann integrable on [a, b], then the Stieltjes integral of f with respect to g exists, and

∫_a^b f dg = ∫_a^b f · g′ dx.  (18.6)
Proof. Since f and g′ are integrable on [a, b], the Riemann integral on the right-hand side of (18.6) exists. Let its value be I. We want to show that for an arbitrary ε > 0, there exists a δ > 0 such that for every partition a = x_0 < x_1 < · · · < x_n = b with mesh smaller than δ, (18.1) holds with an arbitrary choice of inner points c_i ∈ [x_{i−1}, x_i] (i = 1, . . . , n).
Let ε > 0 be given. Since f is Riemann integrable on [a, b], there must exist a δ_1 > 0 such that Ω_F(f) < ε whenever the partition F has mesh smaller than δ_1.
By Theorem 14.23, there exists a δ_2 > 0 such that whenever a = x_0 < x_1 < · · · < x_n = b is a partition with mesh smaller than δ_2,

|I − ∑_{i=1}^{n} f(d_i) g′(d_i)(x_i − x_{i−1})| < ε  (18.7)

holds for arbitrary inner points d_i ∈ [x_{i−1}, x_i] (i = 1, . . . , n).
Let δ = min(δ_1, δ_2), and consider an arbitrary partition F : a = x_0 < x_1 < · · · < x_n = b with mesh smaller than δ. Let c_i ∈ [x_{i−1}, x_i] (i = 1, . . . , n) be arbitrary inner points.
Since g is differentiable on [a, b], by the mean value theorem,

∑_{i=1}^{n} f(c_i) · (g(x_i) − g(x_{i−1})) = ∑_{i=1}^{n} f(c_i) g′(d_i)(x_i − x_{i−1})  (18.8)

for suitable numbers d_i ∈ [x_{i−1}, x_i]. Let K denote an upper bound of |g′| on [a, b]. Then

|I − ∑_{i=1}^{n} f(c_i) · (g(x_i) − g(x_{i−1}))| = |I − ∑_{i=1}^{n} f(c_i) g′(d_i)(x_i − x_{i−1})| ≤
≤ |I − ∑_{i=1}^{n} f(d_i) g′(d_i)(x_i − x_{i−1})| + ∑_{i=1}^{n} |f(c_i) − f(d_i)| · |g′(d_i)| · (x_i − x_{i−1}) <
< ε + K · ∑_{i=1}^{n} ω(f; [x_{i−1}, x_i]) · (x_i − x_{i−1}) = ε + K · Ω_F(f) < (1 + K) · ε,

which concludes the proof of the theorem. □
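Formula (18.6) can also be checked numerically. The sketch below (Python; illustrative only, with f = cos and g = sin as sample choices, so that f · g′ = cos²) compares a Stieltjes approximating sum with the corresponding Riemann sum of f · g′ on [0, π]; both are close to π/2.

```python
import math

def stieltjes_sum(f, g, points, inner):
    return sum(f(c) * (g(b) - g(a)) for a, b, c in zip(points, points[1:], inner))

def riemann_sum(h, points, inner):
    return sum(h(c) * (b - a) for a, b, c in zip(points, points[1:], inner))

f = math.cos
g = math.sin            # g' = cos, so both sums approximate the integral of cos^2 over [0, pi]
n = 20000
xs = [math.pi * k / n for k in range(n + 1)]
cs = [(a + b) / 2 for a, b in zip(xs, xs[1:])]

print(stieltjes_sum(f, g, xs, cs))                              # ~ pi/2
print(riemann_sum(lambda x: f(x) * math.cos(x), xs, cs))        # ~ pi/2
```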
Remarks 18.13. 1. The conditions for the existence of the integral in Theorem 18.12 can be significantly weakened. So, for example, the integral ∫_a^b f dg is guaranteed to exist if f is Riemann integrable and g is Lipschitz (see Exercise 18.11).
2. In the statement above, the Lipschitz property of g can be weakened further. We say that the function f : [a, b] → R is absolutely continuous if for each ε > 0, there exists a δ > 0 such that whenever [a_1, b_1], . . . , [a_n, b_n] are nonoverlapping subintervals of [a, b] such that ∑_{i=1}^{n} (b_i − a_i) < δ, then ∑_{i=1}^{n} |f(b_i) − f(a_i)| < ε. One can show that if f is Riemann integrable and g is absolutely continuous in [a, b], then the Stieltjes integral ∫_a^b f dg exists.
3. The class of Riemann integrable functions and the class of absolutely continuous functions are also dual: a function f is Riemann integrable if and only if ∫_a^b f dg exists for every absolutely continuous function g, and a function g is absolutely continuous if and only if ∫_a^b f dg exists for every Riemann integrable function f. The proof of this theorem, however, uses concepts from measure theory that we do not deal with in this book.
A number-theoretic application. In the introduction of the chapter we mentioned that Stieltjes integrals pop up in many areas of mathematics, such as in number theory. Dealing with an important problem, namely the distribution of the prime numbers, we often need to approximate sums formed from the values of specific functions at the prime numbers. For example, L(x) = ∑_{p≤x} (log p)/p and R(x) = ∑_{p≤x} 1/p are such sums. In these sums, we add the numbers (log p)/p or 1/p for all prime numbers p less than or equal to x. Transforming these sums (often using Abel's rearrangement) can be done efficiently with the help of the Stieltjes integral, as shown by the following theorem.
Let a sequence a_1 < a_2 < · · · that tends to infinity and a function ϕ defined at the numbers a_n be given. Let A(x) = ∑_{a_n ≤ x} ϕ(a_n). (If (a_n) is the sequence of prime numbers and ϕ(x) = (log x)/x, then A(x) = L(x), and if ϕ(x) = 1/x, then A(x) = R(x).)
Theorem 18.14. Suppose that f is differentiable and f′ is integrable on [a, b], where a < a_1 ≤ b. Then

∑_{a_n ≤ b} f(a_n) · ϕ(a_n) = f(b) · A(b) − ∫_a^b A(x) · f′(x) dx.  (18.9)
Proof. We show that ∑_{a_n ≤ b} f(a_n) · ϕ(a_n) = ∫_a^b f dA. The function A(x) is constant on each interval [a_{n−1}, a_n), it has a jump discontinuity at the point a_n, and there A(a_n) − lim_{x→a_n−0} A(x) = ϕ(a_n). Thus if we take a partition of the interval [a, b] with mesh small enough, then any approximating sum of the Stieltjes integral ∫_a^b f dA will consist of mostly zero terms, except for the terms that correspond to a subinterval containing one of the points a_n ≤ b, and the nth such term will be close to f(a_n) · ϕ(a_n) by the continuity of f.
Now integration by parts (Theorem 18.9) gives

∑_{a_n ≤ b} f(a_n) · ϕ(a_n) = ∫_a^b f dA = f(b) · A(b) − ∫_a^b A df = f(b) · A(b) − ∫_a^b A · f′ dx,

also using Theorem 18.12. □
If (a_n) is the sequence of prime numbers and ϕ ≡ 1, then A(x) is the number of primes up to x, that is, π(x). This gives us the following corollary.
Corollary 18.15. Suppose that f is differentiable and f′ is integrable on the interval [1, x] (x ≥ 2). Then

∑_{p≤x} f(p) = f(x) · π(x) − ∫_1^x π(t) · f′(t) dt.  (18.10)
If, for example, f(x) = 1/x, then we get that ∑_{p≤x} 1/p ≥ ∫_2^x (π(t)/t²) dt. Now there exists a constant c > 0 such that π(x) ≥ c · x/log x. (See [5], Corollary 8.6.) Since ∫_e^x dt/(t · log t) = log log x, we get that ∑_{p≤x} 1/p ≥ c · log log x. This proves the following theorem.

Corollary 18.16. ∑_p 1/p = ∞.
We can get a much better approximation for the partial sums if we use the fact that the difference between the function L(x) = ∑_{p≤x} (log p)/p and log x is bounded. (A proof of this fact can be found in [5, Theorem 8.8(b)].) Let η(x) = L(x) − log x. Let (a_n) be the sequence of prime numbers, and apply (18.9) with the choices ϕ(x) = (log x)/x and f(x) = 1/log x. Then A(x) = L(x), and so

∑_{p≤x} 1/p = L(x)/log x + ∫_2^x L(t)/(t · log² t) dt =
= 1 + η(x)/log x + ∫_2^x dt/(t · log t) + ∫_2^x η(t)/(t · log² t) dt =
= log log x − log log 2 + 1 + η(x)/log x + ∫_2^x η(t)/(t · log² t) dt.
Here if x → ∞, then η(x)/log x tends to zero, and we can also show that the integral ∫_2^x η(t)/(t · log² t) dt has a finite limit as x → ∞; this follows easily from the theory of improper integrals (see the next chapter). Comparing all of this, we get the following.

Theorem 18.17. The limit lim_{x→∞} (∑_{p≤x} 1/p − log log x) exists and is finite.
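This behavior can be observed numerically. The sketch below (Python; an illustration only, using a simple sieve, and the constant mentioned in the comment is indicative rather than part of the text) computes ∑_{p≤x} 1/p − log log x for a few values of x and shows that the differences settle down.

```python
import math

def primes_up_to(n):
    # simple sieve of Eratosthenes
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = [False] * len(sieve[p * p::p])
    return [i for i, is_p in enumerate(sieve) if is_p]

for x in (10**3, 10**4, 10**5, 10**6):
    s = sum(1.0 / p for p in primes_up_to(x))
    print(x, s - math.log(math.log(x)))   # the differences approach a constant (about 0.26)
```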
Exercises
18.1. Let

α(x) = 0 if 0 ≤ x < 1 and α(1) = 1;  β(x) = 1 if x = 0 and β(x) = 0 if 0 < x ≤ 1;  γ(x) = 0 if x ≠ 1/2 and γ(1/2) = 1.

Show that the following Stieltjes integrals exist, and compute their values.

(a) ∫_0^1 sin x dα;   (b) ∫_0^1 α d sin x;   (c) ∫_0^1 e^x dβ;   (d) ∫_0^1 β d e^x;
(e) ∫_0^1 x² dγ;   (f) ∫_0^1 γ d x²;   (g) ∫_0^2 e^x d[x];   (h) ∫_0^2 [x] d e^x.

18.2. Let f(x) = 0 if −1 ≤ x ≤ 0 and f(x) = 1 if 0 < x ≤ 1, and let g(x) = 0 if −1 ≤ x < 0 and g(x) = 1 if 0 ≤ x ≤ 1. Prove that the Stieltjes integrals ∫_{−1}^0 f dg and ∫_0^1 f dg exist, but ∫_{−1}^1 f dg does not.
18.3. Prove that if the Stieltjes integrals ∫_a^c f dg and ∫_c^b f dg exist, at least one of f and g is continuous at c, and the other function is bounded, then the Stieltjes integral ∫_a^b f dg also exists.

18.4. Prove that if the functions f and g share a point of discontinuity, then the Stieltjes integral ∫_a^b f dg does not exist. (H S)

18.5. Let f(x) = √x · sin(1/x) if x ≠ 0, and f(0) = 0. Prove that the Stieltjes integral ∫_0^1 f df does not exist. (H S)
18.6. Prove Theorem 18.7. (H)
18.7. Prove that if ∫_a^b f df exists, then its value is (f(b)² − f(a)²)/2.
18.8. Prove that if ∫_a^b f dg exists for every function g that is of bounded variation, then f is continuous. (H)

18.9. Prove that if ∫_a^b f dg exists for every continuous function f, then g is of bounded variation. (H)
18.10. Let F be an arbitrary set of functions defined on the interval [a, b]. Let G be the set of those functions g : [a, b] → R for which the integral ∫_a^b f dg exists for all f ∈ F. Furthermore, let H denote the set of functions h : [a, b] → R for which the integral ∫_a^b h dg exists for all g ∈ G. Show that H and G are dual classes, that is, a function h satisfies h ∈ H if and only if ∫_a^b h dg exists for all g ∈ G, and a function g satisfies g ∈ G if and only if ∫_a^b h dg exists for all h ∈ H.

18.11. Prove that if f is Riemann integrable and g is Lipschitz on [a, b], then the Stieltjes integral ∫_a^b f dg exists. (H)
18.12. Show that
(a) if f is Lipschitz, then it is absolutely continuous;
(b) if f is absolutely continuous, then it is continuous and of bounded variation.
Chapter 19
The Improper Integral
19.1 The Definition and Computation of Improper Integrals
Until now, we have dealt only with integrals of functions that are defined in some
closed and bounded interval (except, perhaps, for finitely many points of the interval) and are bounded on that interval. These restrictions are sometimes too strict;
there are problems whose solutions require us to integrate functions on unbounded
intervals, or that themselves might not be bounded.
Suppose that we want to compute the integral ∫_0^{π/2} √(1 + sin x) dx with the substitution sin x = t. The formulas x = arc sin t, dx/dt = 1/√(1 − t²) give

∫_0^{π/2} √(1 + sin x) dx = ∫_0^1 √(1 + t)/√(1 − t²) dt = ∫_0^1 dt/√(1 − t).  (19.1)

The integral on the right-hand side, however, is undefined (for now), since the function 1/√(1 − t) is unbounded, and so it is not integrable in [0, 1]. (This problem is caused by the application of the substitution formula, Theorem 15.21, with the choices f(x) = √(1 + sin x), g(t) = arc sin t, [a, b] = [0, 1]. The theorem cannot be applied, since its conditions are not satisfied, the function arc sin t not being differentiable at 1.)
Wanting to compute the integral ∫_0^π dx/(1 + sin x) with the substitution tg(x/2) = t, we run into trouble with the limits of integration. Since lim_{x→π−0} tg(x/2) = ∞, the result of the substitution is

∫_0^π dx/(1 + sin x) = ∫_0^∞ 1/(1 + 2t/(1 + t²)) · 2/(1 + t²) dt = ∫_0^∞ 2 dt/(1 + t)²,  (19.2)

and the integral on the right-hand side is undefined (for now). (Here the problem is caused by the function tg(x/2) not being defined, and so not being differentiable, at π.)
We know that if the function f : [a, b] → R is differentiable and f′ is integrable on [a, b], then the graph of f is rectifiable, and its arc length is ∫_a^b √(1 + (f′(x))²) dx (see Theorem 16.13 and Remark 16.21). We can expect this equation to give us the arc length of the graph of f whenever the graph is rectifiable and f is differentiable with the exception of finitely many points. This, however, is not always so. Consider, for example, the function f(x) = √x on [0, 1]. Since f is monotone, its graph is rectifiable. On the other hand, f is differentiable at every point x > 0, and

√(1 + (f′(x))²) = √(1 + 1/(4x)).

We expect the arc length of √x over [0, 1] to be

∫_0^1 √(1 + 1/(4x)) dx.  (19.3)

However, this integral is not defined (for now), because the function we are integrating is not bounded.
We can avoid all of these problems by extending the concept of the integral.
Definition 19.1. Let f be defined on the interval [a, ∞), and suppose that f is integrable on [a, b] for all b > a. If the limit

lim_{b→∞} ∫_a^b f(x) dx = I

exists and is finite, then we say that the improper integral of f in [a, ∞) is convergent and has value I. We denote this by ∫_a^∞ f(x) dx = I.
If the limit lim_{b→∞} ∫_a^b f(x) dx does not exist or exists but is not finite, then we say that the improper integral is divergent. If

lim_{b→∞} ∫_a^b f(x) dx = ∞  (or −∞),

then we say that the improper integral ∫_a^∞ f(x) dx exists, and its value is ∞ (or −∞). We define the improper integral ∫_{−∞}^a f(x) dx similarly.
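The definition suggests a direct numerical experiment: approximate ∫_a^b f dx for larger and larger b and watch the values settle. The sketch below (Python; illustrative only, with a crude midpoint rule and the integrand 1/(1 + x)², the integral of the next example) does exactly this.

```python
def riemann(f, a, b, n=100000):
    # simple midpoint approximation of the Riemann integral on [a, b]
    h = (b - a) / n
    return sum(f(a + (k + 0.5) * h) for k in range(n)) * h

f = lambda x: 1.0 / (1.0 + x) ** 2
for b in (10, 100, 1000, 10000):
    print(b, riemann(f, 0.0, b))      # the values approach 1 as b grows
```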
Example 19.2. By the definition above, the improper integral ∫_0^∞ dx/(1 + x)² is convergent, and its value is 1. Indeed, the function 1/(1 + x)² is integrable on the interval [0, b] for all b > 0, and

lim_{b→∞} ∫_0^b dx/(1 + x)² = lim_{b→∞} [−1/(1 + x)]_0^b = lim_{b→∞} (1 − 1/(1 + b)) = 1.
Once we know this, it is easy to see that equality (19.2) truly holds, and the value of the integral on the left is 2. Indeed, for all 0 < b < π, the function tg(x/2) is differentiable, and its derivative is integrable on [0, b], so the substitution tg(x/2) = t can be used here. Thus

∫_0^b dx/(1 + sin x) = ∫_0^{tg(b/2)} 1/(1 + 2t/(1 + t²)) · 2/(1 + t²) dt = ∫_0^{tg(b/2)} 2 dt/(1 + t)².
Now using the theorems regarding the continuity of the integral function (Theorem 15.5) and regarding the limit of compositions of functions, we get that

∫_0^π dx/(1 + sin x) = lim_{b→π−0} ∫_0^b dx/(1 + sin x) = lim_{b→π−0} ∫_0^{tg(b/2)} 2 dt/(1 + t)² = ∫_0^∞ 2 dt/(1 + t)² = 2.
We define the integrals appearing in the right-hand side of (19.1) and those
in (19.3) in a similar way.
Definition 19.3. Let [a, b] be a bounded interval, let f be defined on [a, b), and suppose that f is integrable on the interval [a, c] for all a ≤ c < b. If the limit

lim_{c→b−0} ∫_a^c f(x) dx = I

exists and is finite, then we say that the improper integral of the function f is convergent and its value is I. We denote this by ∫_a^b f(x) dx = I.
If the limit lim_{c→b−0} ∫_a^c f(x) dx does not exist, or exists but is not finite, then we say that the improper integral is divergent. If

lim_{c→b−0} ∫_a^c f(x) dx = ∞  (or −∞),

then we say that the improper integral ∫_a^b f(x) dx exists, and that its value is ∞ (or −∞).
We define the improper integral ∫_a^b f(x) dx similarly in the case that f is integrable on the intervals [c, b] for all a < c ≤ b.
Remarks 19.4. 1. We say that an improper integral exists if it is convergent or if its value is ∞ or −∞. (In these last two cases, the improper integral exists but is divergent.)
2. If f is integrable on the bounded interval [a, b], then the expression ∫_a^b f(x) dx now denotes three numbers at once: the Riemann integral of the function f over the interval [a, b], the limit lim_{c→b−0} ∫_a^c f(x) dx, and the limit lim_{c→a+0} ∫_c^b f(x) dx. Luckily, these three numbers agree; by Theorem 15.5, the integral function I(x) = ∫_a^x f(t) dt is continuous on [a, b], so

lim_{c→b−0} ∫_a^c f(x) dx = lim_{c→b−0} I(c) = I(b) = ∫_a^b f(x) dx,

and

lim_{c→a+0} ∫_c^b f(x) dx = lim_{c→a+0} [I(b) − I(c)] = I(b) − I(a) = ∫_a^b f(x) dx.
In other words, if the Riemann integral of the function f exists on the interval [a, b],
then (both of) its improper integrals exist there, and their values agree with those of
the Riemann integral. Thus the concept of the improper integral is an extension of
the Riemann integral.
Even though this is true, when we say that f is integrable on the closed interval
[a, b], we will still mean that f is Riemann integrable on [a, b].
Example 19.5. The integral on the right-hand side of equality (19.1) is convergent, and its value is 2. Clearly, we have

lim_{c→1−0} ∫_0^c dt/√(1 − t) = lim_{c→1−0} [−2√(1 − t)]_0^c = lim_{c→1−0} (2 − 2√(1 − c)) = 2.

Using the argument of Example 19.2, we can easily check that equality (19.1) is true.
Now for some further examples.
Fig. 19.1
1. The integral ∫_1^∞ dx/x^c is convergent if and only if c > 1. More precisely,

∫_1^∞ dx/x^c = 1/(c − 1) if c > 1, and ∫_1^∞ dx/x^c = ∞ if c ≤ 1.  (19.4)

For c ≠ 1, we have

∫_1^∞ dx/x^c = lim_{ω→∞} ∫_1^ω dx/x^c = lim_{ω→∞} [x^{1−c}/(1 − c)]_1^ω = lim_{ω→∞} (ω^{1−c}/(1 − c) + 1/(c − 1)).
Now

lim_{x→∞} x^{1−c} = 0 if c > 1, and lim_{x→∞} x^{1−c} = ∞ if c < 1,

so if c ≠ 1, then (19.4) is true. On the other hand,

∫_1^∞ dx/x = lim_{ω→∞} ∫_1^ω dx/x = lim_{ω→∞} [log x]_1^ω = lim_{ω→∞} log ω = ∞,

so (19.4) also holds when c = 1.
2. The integral ∫_0^1 dx/x^c is convergent if and only if c < 1. More precisely,

∫_0^1 dx/x^c = 1/(1 − c) if c < 1, and ∫_0^1 dx/x^c = ∞ if c ≥ 1.  (19.5)

For c ≠ 1, we have

∫_0^1 dx/x^c = lim_{δ→0+0} ∫_δ^1 dx/x^c = lim_{δ→0+0} [x^{1−c}/(1 − c)]_δ^1 = lim_{δ→0+0} (1/(1 − c) − δ^{1−c}/(1 − c))

for all 0 < δ < 1. Now

lim_{x→0+0} x^{1−c} = 0 if c < 1, and lim_{x→0+0} x^{1−c} = ∞ if c > 1,

so if c ≠ 1, then (19.5) is true. On the other hand,

∫_0^1 dx/x = lim_{δ→0+0} ∫_δ^1 dx/x = lim_{δ→0+0} [log x]_δ^1 = lim_{δ→0+0} (−log δ) = ∞,

so (19.5) also holds when c = 1 (Figure 19.1).
Incidentally, the substitution x = 1/t gives us that

∫_1^∞ dx/x^c = ∫_0^1 t^c · (1/t²) dt = ∫_0^1 dt/t^{2−c}.

This shows that statements (19.4) and (19.5) follow from each other. (We will justify this use of the substitution soon, in Theorem 19.12.)
3. The integral ∫_2^∞ dx/(x · log^c x) is convergent if and only if c > 1. This is because if c ≠ 1, then

∫_2^∞ dx/(x · log^c x) = lim_{ω→∞} ∫_2^ω dx/(x · log^c x) = lim_{ω→∞} [log^{1−c} x/(1 − c)]_2^ω = lim_{ω→∞} (log^{1−c} ω/(1 − c) + log^{1−c} 2/(c − 1)).

Now

lim_{x→∞} log^{1−c} x = 0 if c > 1, and lim_{x→∞} log^{1−c} x = ∞ if c < 1,

so if c > 1, then the integral is convergent, while if c < 1, then it is divergent. In the case c = 1,

∫_2^∞ dx/(x · log x) = lim_{ω→∞} ∫_2^ω dx/(x · log x) = lim_{ω→∞} [log log x]_2^ω = lim_{ω→∞} (log log ω − log log 2) = ∞,

so the integral is also divergent when c = 1.
4. ∫_a^∞ e^{−x} dx = e^{−a} for all a. Clearly,

∫_a^∞ e^{−x} dx = lim_{ω→∞} ∫_a^ω e^{−x} dx = lim_{ω→∞} (e^{−a} − e^{−ω}) = e^{−a}.
5. ∫_0^∞ dx/(1 + x²) = π/2, since lim_{ω→∞} ∫_0^ω dx/(1 + x²) = lim_{ω→∞} [arc tg x]_0^ω = π/2. We can similarly see that ∫_{−∞}^0 dx/(1 + x²) = π/2.
6. ∫_0^1 log x dx = −1, since

∫_0^1 log x dx = lim_{δ→0+0} ∫_δ^1 log x dx = lim_{δ→0+0} [x log x − x]_δ^1 = lim_{δ→0+0} (−δ log δ + δ − 1) = −1.
7. The integral ∫_0^1 dx/(1 − x²) is divergent, since

lim_{c→1−0} ∫_0^c dx/(1 − x²) = lim_{c→1−0} [(1/2) · log((1 + x)/(1 − x))]_0^c = lim_{c→1−0} (1/2) · log((1 + c)/(1 − c)) = ∞.

8. ∫_0^1 dx/√(1 − x²) = π/2, since

lim_{c→1−0} ∫_0^c dx/√(1 − x²) = lim_{c→1−0} arc sin c = π/2.
We sometimes need to compute integrals that are "improper" at both endpoints, and possibly at numerous interior points as well. The integrals ∫_{−∞}^∞ e^{−x²} dx, ∫_0^∞ √x · e^{−x} dx, and ∫_{−1}^1 dx/√|x| are examples of this. We define these in the following way.
Definition 19.6. Let the function f be defined on the finite or infinite interval (α, β), except at at most finitely many points. Suppose that there exists a partition α = c_0 < c_1 < · · · < c_{n−1} < c_n = β with the property that for all i = 1, . . . , n, the improper integral ∫_{c_{i−1}}^{c_i} f(x) dx is convergent by Definition 19.1 or by Definition 19.3. Then we say that the improper integral ∫_α^β f(x) dx is convergent, and its value is

∫_α^β f(x) dx = ∫_{c_0}^{c_1} f(x) dx + · · · + ∫_{c_{n−1}}^{c_n} f(x) dx.
To justify this definition, we need to check that if F′ : α = d_0 < d_1 < · · · < d_{k−1} < d_k = β is another partition such that the integrals ∫_{d_{j−1}}^{d_j} f(x) dx are convergent by Definition 19.1 or Definition 19.3, then

∑_{i=1}^{n} ∫_{c_{i−1}}^{c_i} f(x) dx = ∑_{j=1}^{k} ∫_{d_{j−1}}^{d_j} f(x) dx.  (19.6)

Suppose first that we obtain the partition F′ by adding one new base point to the partition F : α = c_0 < c_1 < · · · < c_{n−1} < c_n = β. If this base point is d ∈ (c_{i−1}, c_i), then to prove (19.6), we must show that

∫_{c_{i−1}}^{c_i} f(x) dx = ∫_{c_{i−1}}^{d} f(x) dx + ∫_{d}^{c_i} f(x) dx.

This is easy to check by distinguishing the cases appearing in Definitions 19.1 and 19.3, and using Theorem 14.38.
Once we know this, we can use induction to find that (19.6) also holds if we obtain F′ by adding any (finite) number of new base points. Finally, for the general case, consider the partition F″, the union of the partitions F and F′. By what we just said, both sides of (19.6) are equal to the sum of integrals corresponding to the partition F″, so they are equal to each other.
Examples 19.7. 1. The integral ∫_{−∞}^∞ dx/(1 + x²) is convergent, and its value is π. Clearly, ∫_0^∞ dx/(1 + x²) = π/2 and ∫_{−∞}^0 dx/(1 + x²) = π/2 by Example 19.5.5.
2. The integral ∫_{−1}^1 dx/√|x| is convergent, and its value is 4. Clearly, ∫_0^1 dx/√x = 2 by Example 19.5.2. Similarly, we can show (or use the substitution x = −t to reduce it to the previous integral) that ∫_{−1}^0 dx/√(−x) = 2.
The methods used in Example 19.5 to compute improper integrals can be used in most cases in which the function we want to integrate has a primitive function: in such cases, the convergence of the improper integral depends on the existence of the limit of the primitive function. We introduce the following notation. Let F be defined on the finite or infinite interval (α, β). If the limits lim_{x→α+0} F(x) and lim_{x→β−0} F(x) exist and are finite, then let

[F]_α^β = lim_{x→β−0} F(x) − lim_{x→α+0} F(x).

Here, when α = −∞, the notation lim_{x→α+0} means the limit at negative infinity, while if β = ∞, then lim_{x→β−0} is the limit at positive infinity.
Theorem 19.8. Let f : (α, β) → R be integrable on every closed and bounded subinterval of (α, β). Suppose that f has a primitive function in (α, β), and let F : (α, β) → R be a primitive function of f. The improper integral ∫_α^β f(x) dx is convergent if and only if the finite limits lim_{x→α+0} F(x) and lim_{x→β−0} F(x) exist, and then

∫_α^β f(x) dx = [F]_α^β.

Proof. Fix a point x_0 ∈ (α, β). Since f is integrable on every closed and bounded subinterval, it is easy to see that the improper integral ∫_α^β f(x) dx is convergent if and only if the integrals ∫_α^{x_0} f(x) dx and ∫_{x_0}^β f(x) dx are convergent by Definition 19.1 or Definition 19.3, and then

∫_α^β f(x) dx = ∫_α^{x_0} f(x) dx + ∫_{x_0}^β f(x) dx.

By the fundamental theorem of calculus,

∫_{x_0}^b f(x) dx = F(b) − F(x_0)

for all x_0 < b < β. This implies that the limits lim_{b→β−0} [F(b) − F(x_0)] and lim_{b→β−0} ∫_{x_0}^b f(x) dx either both exist or both do not exist, and if they do, then they agree. The same holds for the limits

lim_{a→α+0} [F(x_0) − F(a)]  and  lim_{a→α+0} ∫_a^{x_0} f(x) dx

as well. We get that the integral ∫_α^β f(x) dx is convergent if and only if the limits lim_{b→β−0} [F(b) − F(x_0)] and lim_{a→α+0} [F(x_0) − F(a)] exist and are finite, and then the value of the integral is the sum of these two limits. This is exactly what we wanted to show. □
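The theorem turns the computation of an improper integral into a limit of a primitive function. The sketch below (Python; illustrative only) does this for ∫_0^1 log x dx with the primitive F(x) = x log x − x, recovering the value −1 obtained earlier.

```python
import math

# F(x) = x*log(x) - x is a primitive of log on (0, 1); by Theorem 19.8 the improper
# integral equals the limit of F(1) - F(eps) as eps -> 0+.
F = lambda x: x * math.log(x) - x

for eps in (1e-2, 1e-4, 1e-8, 1e-12):
    print(eps, F(1.0) - F(eps))     # tends to -1, the value of the improper integral
```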
We now turn to the rules of integration regarding improper integrals. We will see that the formulas for Riemann integration, including the rules for integration by parts and substitution, remain valid for improper integrals with almost no changes.

Theorem 19.9. If the integrals ∫_α^β f dx and ∫_α^β g dx are convergent, then the integrals ∫_α^β c · f dx (c ∈ R) and ∫_α^β (f + g) dx are also convergent, and

∫_α^β c · f dx = c · ∫_α^β f dx  (c ∈ R)  and  ∫_α^β (f + g) dx = ∫_α^β f dx + ∫_α^β g dx.

Proof. The statement is clear by the definition of the improper integral, and by Theorems 14.30, 14.31, and 10.35. □
Theorem 19.10. Let the functions f and g be differentiable, and let f′ and g′ be integrable on every closed and bounded subinterval of (α, β), and suppose that the limits lim_{x→α+0} f g and lim_{x→β−0} f g exist and are finite. Then

∫_α^β f′ g dx = [f g]_α^β − ∫_α^β f g′ dx,

in the sense that if one of the two improper integrals exists, then the other one does, too, and the two sides are equal.

Proof. Fix a point x_0 ∈ (α, β). By Theorem 15.11,

∫_{x_0}^b f′ g dx = [f g]_{x_0}^b − ∫_{x_0}^b f g′ dx

for all x_0 < b < β. Thus if we let b tend to β − 0, then the left side has a limit if and only if the right side does, and if they exist, then they are equal. Similarly, if a → α + 0, then the left side of the equality

∫_a^{x_0} f′ g dx = [f g]_a^{x_0} − ∫_a^{x_0} f g′ dx

has a limit if and only if the right side does, and if they exist, then they are equal. We can now get the statement of the theorem just as in the proof of Theorem 19.8. □
We defined Riemann integrals ∫_a^b f dx for the case b ≤ a (see Definition 14.40). We should make this generalization for improper integrals as well.

Definition 19.11. If the improper integral ∫_α^β f(x) dx exists, then let

∫_β^α f(x) dx = −∫_α^β f(x) dx.

The value of the integral ∫_α^α f dx is zero for all α ∈ R and α = ±∞, and for every function f (whether it is defined at the point α or not).
Theorem 19.12. Let (α, β) be a finite or infinite open interval. Let g be strictly monotone and differentiable, and suppose that g′ is integrable on every closed and bounded subinterval of (α, β). Let1 lim_{x→α+0} g(x) = γ and lim_{x→β−0} g(x) = δ. If f is continuous on the open interval (γ, δ), then

∫_α^β f(g(t)) · g′(t) dt = ∫_γ^δ f(x) dx  (19.7)

in the sense that if one of the two improper integrals is convergent, then so is the other, and they are equal.

1 These finite or infinite limits must exist by Theorem 10.68.
Proof. We suppose that g is strictly monotone increasing (we can argue similarly if g is monotone decreasing). Then γ < δ. Fix a number x_0 ∈ (γ, δ), and let F(x) = ∫_{x_0}^x f(t) dt for all x ∈ (γ, δ). By the definition of the improper integral, it follows that the right-hand side of (19.7) is convergent if and only if the finite limits lim_{x→γ+0} F(x) and lim_{x→δ−0} F(x) exist, and then the value of the integral on the right-hand side of (19.7) is [F]_γ^δ.
By Theorem 15.5, F is a primitive function of f on the interval (γ, δ), and so by the differentiation rule for compositions of functions, F ∘ g is a primitive function of (f ∘ g) · g′ on (α, β). By Theorem 19.8, the left-hand side of (19.7) is convergent if and only if the finite limits lim_{t→α+0} F(g(t)) and lim_{t→β−0} F(g(t)) exist, and then the value of the integral on the left-hand side of (19.7) is [F ∘ g]_α^β.
Now if lim_{x→γ+0} F(x) = A, then by Theorem 10.41 on the limit of compositions of functions, lim_{t→α+0} F(g(t)) = A. Conversely, if lim_{t→α+0} F(g(t)) = A, then lim_{x→γ+0} F(x) = A; clearly, the function g^{−1} maps the interval (γ, δ) onto the interval (α, β) in a strictly monotone increasing way, and so lim_{x→γ+0} g^{−1}(x) = α. Thus if lim_{t→α+0} F(g(t)) = A, then lim_{x→γ+0} F(x) = A, since F = (F ∘ g) ∘ g^{−1}.
The same reasoning gives that the limit lim_{x→δ−0} F(x) exists if and only if lim_{t→β−0} F(g(t)) exists, and then they are equal.
Comparing all of the results above, we get the statement of the theorem we set out to prove. □
The theorem above is true under much weaker conditions as well: it is not necessary to require the monotonicity of g (at least if γ ≠ δ), and instead of the continuity of f, it suffices to assume that f is integrable on the closed and bounded subintervals of the image of g. The precise theorem is as follows.

Theorem 19.13. Let g be differentiable, suppose g′ is integrable on every closed and bounded subinterval of (α, β), and suppose that the finite or infinite limits lim_{x→α+0} g(x) = γ and lim_{x→β−0} g(x) = δ exist. Let f be integrable on every closed and bounded subinterval of the image of g, that is, of g((α, β)).
(i) If γ ≠ δ, then (19.7) holds in the sense that if one of the two improper integrals exists, then so does the other, and then they are equal.
(ii) If γ = δ and the left-hand side of (19.7) is convergent, then its value is zero.

We give the proof of the theorem in the appendix of the chapter. We note that when γ = δ, the left-hand side of (19.7) is not necessarily convergent; see Exercise 19.12.
Remark 19.14. We have seen examples in which we created improper integrals from "ordinary" Riemann integrals. The reverse case can occur as well: we can create a Riemann integral starting from an improper integral. For example, the substitution x = sin t gives us

∫_0^1 x^n/√(1 − x²) dx = ∫_0^{π/2} sin^n t dt,

and the substitution x = 1/t gives us

∫_1^∞ dx/(1 + x²) = ∫_1^0 1/(1 + 1/t²) · (−1/t²) dt = ∫_0^1 dt/(1 + t²) = π/4.
Exercises
19.1. Compute the values of the following integrals:

(a) ∫_1^∞ dx/(x² + x);   (b) ∫_{−∞}^∞ dx/(x² + x + 1);
(c) ∫_{−∞}^∞ dx/(x⁴ + 1);   (d) ∫_1^∞ (x + 1)/(x³ + x) dx;
(e) ∫_0^∞ x/(1 + x²)³ dx;   (f) ∫_0^∞ √x/(1 + x²)² dx;
(g) ∫_a^b dx/√((x − a)(b − x));   (h) ∫_{−∞}^∞ e^x/(e^{2x} + 1) dx;
(i) ∫_1^∞ dx/(2^x − 1);   (j) ∫_1^∞ (log x)/x² dx;
(k) ∫_0^1 x · log x dx.
19.2. Prove that the integral ∫_3^∞ dx/(x · log x · log log^c x) is convergent if and only if c > 1. How can we generalize the statement?

19.3. Compute the integral ∫_{π/4}^{π/2} dx/sin⁴ x with the substitution t = tg x.

19.4. Compute the integral ∫_{−∞}^∞ dx/(1 + x²)^n for every positive integer n with the substitution x = tg t.

19.5. Let f be a nonnegative and continuous function in [0, ∞). What simpler expression is lim_{h→∞} h · ∫_0^1 f(hx) dx equal to?

19.6. Prove that if the integral ∫_{−∞}^∞ f(x) dx is convergent, then we have lim_{n→∞} ∫_{−1}^1 f(nx) dx = 0.
19.7. Prove that if f is positive and monotone decreasing in [0, ∞) and the integral ∫_0^∞ f(x) dx is convergent, then lim_{x→∞} x · f(x) = 0. (H S)
19.8. Prove that

lim_{n→∞} (1/√n) · (1/√1 + 1/√2 + · · · + 1/√n) = 2.
19.9. Prove that if f is monotone in (0, 1] and the improper integral ∫_0^1 f(x) dx is convergent, then

lim_{n→∞} (1/n) · (f(1/n) + f(2/n) + · · · + f(n/n)) = ∫_0^1 f(x) dx.

Is the statement true if we do not assume f to be monotone? (S)
19.10. Compute the value of lim_{n→∞} ∑_{i=1}^{n−1} 1/√(i · (n − i)).
19.11. Let f be continuous in [a, b], and let f be differentiable in [a, b] except at finitely many points. Prove that if the improper integral ∫_a^b √(1 + (f′(x))²) dx is convergent and its value is I, then the graph of f is rectifiable, and its arc length is I. (S)
19.12. Let α = −∞, β = ∞, let g(x) = 1/(1 + x²) for all x ∈ R, and let f(x) = 1/x² for all x > 0. Check that with these assumptions, f is integrable on every closed and bounded subinterval of the image of g, but the left-hand side of (19.7) does not exist (while the right-hand side clearly exists, and its value is zero).
19.2 The Convergence of Improper Integrals
In the applications of improper integrals, the most important question is whether a given integral is convergent; computing the value of the integral if it is convergent is often only a secondary (or hopeless) task. Suppose that f is integrable on every closed and bounded subinterval of the interval [a, β). The convergence of the improper integral ∫_a^β f dx depends only on the values of f close to β: for every a < b < β, the integral ∫_a^β f dx is convergent if and only if the integral ∫_b^β f dx is convergent. Clearly,

∫_a^ω f dx = ∫_a^b f dx + ∫_b^ω f dx

for all b < ω < β, so if ω → β − 0, then the left-hand side has a finite limit if and only if the same holds for ∫_b^ω f dx.
From Examples 19.2 and 19.5, we might want to conclude that an integral ∫_b^β f dx can be convergent only if f tends to zero fast enough as x → β − 0. However, this is not the case. Consider a convergent integral ∫_b^β f dx (for example, any of the convergent integrals appearing in the examples above), and change the values of f arbitrarily at the points of a sequence x_n → β − 0. This does not affect the convergence of the integral ∫_a^β f dx, since for all a < b < β, the value of the function f is changed at only finitely many points inside the interval [a, b], so the Riemann integral ∫_a^b f dx does not change either. On the other hand, if at the point x_n we give f the value n for every n, then this new function does not tend to 0 at β. In fact, we can find continuous functions f such that ∫_b^β f dx is convergent but f does not tend to 0 at β (see Exercises 19.28 and 19.29).
The converse of the (false) statement above, however, is true: if f tends to zero fast enough as x → β − 0, then the integral ∫_b^β f dx is convergent. To help with the proof of this, we first give a necessary and sufficient condition for the convergence of integrals, which we call Cauchy's criterion for improper integrals.
Theorem 19.15 (Cauchy's Criterion). Let f be integrable on every closed and bounded subinterval of the interval [a, β). The improper integral ∫_a^β f dx is convergent if and only if for every ε > 0, there exists a number a ≤ b < β such that |∫_{b_1}^{b_2} f(x) dx| < ε for all b < b_1 < b_2 < β.

Proof. The statement follows simply from Cauchy's criterion for limits of functions (Theorem 10.34) and the equality

∫_a^{b_2} f(x) dx − ∫_a^{b_1} f(x) dx = ∫_{b_1}^{b_2} f(x) dx. □
Suppose that f is integrable on every closed and bounded subinterval of the interval [a, β). If f is nonnegative on [a, β), then the improper integral $\int_a^\beta f\,dx$ always exists. This is because the function $\omega \mapsto \int_a^\omega f\,dx$ (ω ∈ [a, β)) is monotone increasing, and so the finite or infinite limit $\lim_{\omega\to\beta-0} \int_a^\omega f\,dx$ must necessarily exist by Theorem 10.68.
Thus if f is integrable on every closed and bounded subinterval of the interval [a, β), then the improper integral $\int_a^\beta |f|\,dx$ exists for sure. The question is whether its value is finite.
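To make the distinction concrete, here is a small illustration (an addition to the text; it only uses the standard examples of this chapter). For a nonnegative integrand the improper integral always exists, but it may be infinite, in which case it is not convergent:
$$\int_1^\infty \frac{dx}{x} = \lim_{\omega\to\infty} \log\omega = \infty \qquad\text{(exists, but divergent)},$$
while
$$\int_1^\infty \frac{dx}{x^2} = \lim_{\omega\to\infty}\left(1 - \frac{1}{\omega}\right) = 1 \qquad\text{(convergent)}.$$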
Definition 19.16. We call the improper integral $\int_a^\beta f\,dx$ absolutely convergent if f is integrable on the closed and bounded subintervals of [a, β) and the improper integral $\int_a^\beta |f|\,dx$ is convergent.
Theorem 19.17. If the improper integral $\int_a^\beta f\,dx$ is absolutely convergent, then it is convergent.
Proof. The statement clearly follows from Cauchy’s criterion, since
$$\left| \int_{b_1}^{b_2} f(x)\,dx \right| \le \int_{b_1}^{b_2} |f(x)|\,dx$$
for all $a \le b_1 < b_2 < \beta$. $\square$
The converse of the statement is generally not true; we will soon see in Examples 19.20.3 and 19.20.4 that the improper integral $\int_1^\infty (\sin x/x)\,dx$ is convergent but not absolutely convergent.
The following theorem—which is one of the most often used criteria for
convergence—is called the majorization principle.
Theorem 19.18 (Majorization Principle). Let f and g be integrable in every
closed and bounded subinterval of [a, β ), and suppose that there exists a b0 ∈ [a, β )
such that | f (x)| ≤ g(x) for all x ∈ (b0, β). If $\int_a^\beta g(x)\,dx$ is convergent, then so is $\int_a^\beta f(x)\,dx$.
Proof. By the “only if” part of Theorem 19.15, if $\int_a^\beta g(x)\,dx$ is convergent, then for an arbitrary ε > 0, there exists a b < β such that
$$\int_{b_1}^{b_2} g(x)\,dx < \varepsilon$$
holds for all $b < b_1, b_2 < \beta$. Since | f (x)| ≤ g(x) in the interval (b0, β),
$$\left|\int_{b_1}^{b_2} f(x)\,dx\right| \le \int_{b_1}^{b_2} |f(x)|\,dx \le \int_{b_1}^{b_2} g(x)\,dx < \varepsilon$$
for all $\max(b, b_0) < b_1, b_2 < \beta$. Thus by the “if” part of Theorem 19.15, the improper integral $\int_a^\beta f(x)\,dx$ is convergent. $\square$
Remark 19.19. The majorization principle can be used to show divergence as well:
if g(x) ≥ | f (x)| for all x ∈ (b0, β) and $\int_a^\beta f(x)\,dx$ is divergent, then it follows that $\int_a^\beta g(x)\,dx$ is also divergent. Clearly, if this latter integral were convergent, then $\int_a^\beta f(x)\,dx$ would be convergent as well.
Such applications of the theorem are often called minorization.
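As a quick illustration of minorization (this worked example is an addition, not part of the original text): since $(x+1)/x^2 = 1/x + 1/x^2 \ge 1/x$ for all x ≥ 1 and $\int_1^\infty dx/x$ is divergent, it follows that
$$\int_1^\infty \frac{x+1}{x^2}\,dx = \infty,$$
even though the integrand tends to zero at infinity.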
Examples 19.20. 1. The integrals $\int_1^\infty (\sin x/x^2)\,dx$ and $\int_1^\infty (\cos x/x^2)\,dx$ are convergent, since $\int_1^\infty x^{-2}\,dx$ is convergent, and
$$\left|\frac{\sin x}{x^2}\right| \le \frac{1}{x^2} \quad\text{and}\quad \left|\frac{\cos x}{x^2}\right| \le \frac{1}{x^2}$$
for all x ≥ 1.
2. The integral $\int_0^1 (\sin x/x^2)\,dx$ is divergent. We know that $\lim_{x\to 0} \sin x/x = 1$, so $\sin x/x > 1/2$ if $0 < x < \delta_0$, and so $\sin x/x^2 > 1/(2x)$ if $0 < x < \delta_0$. Thus by the divergence of $\int_0^{\delta_0} x^{-1}\,dx$, it follows that $\int_0^1 (\sin x/x^2)\,dx$ is divergent.
3. The integral $\int_1^\infty (\sin x/x)\,dx$ is convergent.
We can easily show this with the help of integration by parts. Since
$$\int_1^\omega \frac{\sin x}{x}\,dx = \left[-\frac{1}{x}\cdot\cos x\right]_1^\omega - \int_1^\omega \frac{\cos x}{x^2}\,dx$$
for all ω > 1, and since $(\cos\omega)/\omega \to 0$ as $\omega\to\infty$ while $\int_1^\infty (\cos x/x^2)\,dx$ is convergent by Example 1, letting $\omega\to\infty$ gives
$$\int_1^\infty \frac{\sin x}{x}\,dx = \cos 1 - \int_1^\infty \frac{\cos x}{x^2}\,dx.$$
4. The integral $\int_1^\infty |\sin x/x|\,dx$ is divergent. Since $|\sin x| \ge \sin^2 x$ for all x, to show that the integral is divergent it suffices to show that $\int_1^\infty (\sin^2 x/x)\,dx$ is divergent. We know that
$$\int_1^\omega \frac{\sin^2 x}{x}\,dx = \frac{1}{2}\cdot\int_1^\omega \frac{1-\cos 2x}{x}\,dx = \frac{1}{2}\cdot\int_1^\omega \frac{dx}{x} - \frac{1}{2}\cdot\int_1^\omega \frac{\cos 2x}{x}\,dx.$$
Similarly to Example 3, we can show that if ω → ∞, then the second term on the right-hand side is convergent, while $\int_1^\omega dx/x \to \infty$ as ω → ∞. Thus $\int_1^\infty (\sin^2 x/x)\,dx = \infty$.
5. The integral $\int_0^\infty e^{-x^2}\,dx$ is convergent, and in fact, $\int_0^\infty x^c e^{-x^2}\,dx$ is also convergent for all c > 0. Clearly, $e^{-x^2} < 1/|x|^{c+2}$ if $|x| > x_0$, so $|x^c e^{-x^2}| < 1/x^2$ if $|x| > x_0$.
Example 19.21. The integral $\int_0^\infty x^c \cdot e^{-x}\,dx$ is convergent if c > −1.
Let us inspect the integrals $\int_0^1$ and $\int_1^\infty$ separately. The integral $\int_1^\infty x^c e^{-x}\,dx$ is convergent for all c: we can prove this in the same way as in Example 19.20.5. If c ≥ 0, then $\int_0^1 x^c \cdot e^{-x}\,dx$ is an ordinary integral. If −1 < c < 0, then we can apply the majorization principle: since if x ∈ (0, 1], then $|x^c \cdot e^{-x}| \le x^c$ and the integral $\int_0^1 x^c\,dx$ is convergent, we know that the integral $\int_0^1 x^c \cdot e^{-x}\,dx$ is also convergent.
The integral appearing in the example above (as a function of c) appears in applications so often that it gets its own notation. Leonhard Euler, who first worked with this integral, found it more useful to define this function on the interval (0, ∞) instead of (−1, ∞), so instead of introducing notation for the integral in Example 19.21, he introduced it for the integral $\int_0^\infty x^{c-1}\cdot e^{-x}\,dx$, and this is still used today.
We use Γ(c) to denote the value of the integral $\int_0^\infty x^{c-1}\cdot e^{-x}\,dx$ for all c > 0. So for example, $\Gamma(1) = \int_0^\infty e^{-x}\,dx = 1$.
One can show that Γ (c + 1) = c · Γ (c) for all c > 0. Once we know this, it easily
follows that Γ (n) = (n − 1)! for every positive integer n (see Exercises 19.38 and
19.39). It is known that Γ is not an elementary function. To see more properties of
the function Γ , see Exercises 19.41–19.46.
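As a small numerical illustration (added here for concreteness), iterating the functional equation $\Gamma(c+1) = c\cdot\Gamma(c)$ down to $\Gamma(1) = 1$ already produces the factorial values promised by the formula $\Gamma(n) = (n-1)!$; for example,
$$\Gamma(4) = 3\cdot\Gamma(3) = 3\cdot 2\cdot\Gamma(2) = 3\cdot 2\cdot 1\cdot\Gamma(1) = 6 = 3!.$$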
Example 19.20.3 can be greatly generalized. The following theorem gives us a
convergence criterion according to which we could replace 1/x in the example by
any monotone function that tends to zero at infinity, and we could replace sin x by
any function whose integral function is bounded.
Theorem 19.22. Suppose that
(i) the function f is monotone on the interval [a, ∞) and limx→∞ f (x) = 0, and
moreover,
(ii) the function g is integrable on [a, b] for all b > a, and the integral function $G(x) = \int_a^x g(t)\,dt$ is bounded on [a, ∞).
Then the improper integral $\int_a^\infty f(x)g(x)\,dx$ is convergent.
Proof. Suppose that $\left|\int_a^x g(t)\,dt\right| \le K$ for all x ≥ a. Then $\left|\int_x^y g(t)\,dt\right| \le 2K$ for all a ≤ x < y, since $\int_x^y g(t)\,dt = \int_a^y g(t)\,dt - \int_a^x g(t)\,dt$. Let ε > 0 be given. Since $\lim_{x\to\infty} f(x) = 0$, there exists an $x_0$ such that |f(x)| < ε for all x ≥ x0. If x0 ≤ x < y, then by the second mean value theorem of integration (Theorem 15.8), we get that for a suitable ξ ∈ [x, y],
$$\left|\int_x^y f(t)g(t)\,dt\right| \le |f(x)|\cdot\left|\int_x^\xi g(t)\,dt\right| + |f(y)|\cdot\left|\int_\xi^y g(t)\,dt\right| \le \varepsilon\cdot 2K + \varepsilon\cdot 2K = 4K\varepsilon.$$
Since ε was arbitrary, we have shown that the improper integral $\int_a^\infty f(x)g(x)\,dx$ satisfies Cauchy’s criterion, so it is convergent. $\square$
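As a sanity check (this worked instance is an addition to the text), Theorem 19.22 recovers Example 19.20.3 without any integration by parts: take f(x) = 1/x and g(x) = sin x on [1, ∞). Then f is monotone decreasing with limit 0, and
$$G(x) = \int_1^x \sin t\,dt = \cos 1 - \cos x, \qquad |G(x)| \le 2 \quad (x \ge 1),$$
so the theorem gives the convergence of $\int_1^\infty (\sin x/x)\,dx$.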
Exercises
19.13. Are the following integrals convergent?
(a) $\int_0^\infty \frac{[x]!}{2^x + 3^x}\,dx$;  (b) $\int_{10}^\infty \frac{dx}{x\cdot\log\log x}$;
(c) $\int_0^1 \log|\log x|\,dx$;  (d) $\int_0^1 |\log x|^{|\log x|}\,dx$;
(e) $\int_0^1 x^{\log x}\,dx$;  (f) $\int_0^\infty (\sqrt{x+1} - \sqrt{x})\,dx$;
(g) $\int_0^1 \frac{dx}{\sin(\sin(\sin x))}$;  (h) $\int_2^\infty \frac{x+1}{x^2\sqrt{x^2-1}}\,dx$;
(i) $\int_1^\infty \frac{x^x}{[x]!}\,dx$;  (j) $\int_5^\infty \frac{2^{\sqrt{x}} + 3^x}{(x-\log x)\sqrt{x}}\,dx$;
(k) $\int_0^{\pi/2} \sqrt{\operatorname{tg} x}\,dx$;  (l) $\int_0^1 \frac{dx}{\big((\pi/2) - \arcsin x\big)^{3/2}}$.
19.14. Determine the values of A and B that make the integral
$$\int_1^\infty \left( \sqrt{1 + x^2} - Ax - \frac{B}{x} \right) dx$$
convergent.
19.15. Prove that if R = p/q is a rational function, a < b, q(a) = 0, and p(a) ≠ 0, then the integral $\int_a^b R(x)\,dx$ is divergent.
19.16. Let p and q be polynomials, and suppose that q does not have a root in [a, ∞). When is the integral $\int_a^\infty p(x)/q(x)\,dx$ convergent?
19.17. Let p and q be polynomials. When is the integral $\int_{-\infty}^\infty p(x)/q(x)\,dx$ convergent?
19.18. For which c are the following integrals convergent?
(a) $\int_0^{\pi/2} \frac{dx}{(\sin x)^c}$;  (b) $\int_0^1 \frac{1 - \cos x}{x^c}\,dx$;
(c) $\int_0^\infty \frac{dx}{1 + x^c}$;  (d) $\int_0^1 \frac{dx}{x^c - 1}$.
19.19. For which a, b is the integral $\int_0^{\pi/2} \sin^a x \cdot \cos^b x\,dx$ convergent?
19.20. (a) Prove that the improper integral $\int_0^\pi \log\sin x\,dx$ is convergent.
(b) Compute the value of the integral with the following method: use the substitution x = 2t, apply the identity $\sin 2t = 2\sin t\cos t$, then prove and use the fact that $\int_0^{\pi/2} \log\sin t\,dt = \int_0^{\pi/2} \log\cos t\,dt$. (S)
19.21. What does the geometric mean of the numbers $\sin(\pi/n),\ \sin(2\pi/n),\ \ldots,\ \sin((n-1)\pi/n)$
tend to as n → ∞? Check that the limit is smaller than the limit of the arithmetic
means of the same numbers. (S)
19.22. Compute the values of the following integrals.
(a) $\int_0^{\pi/2} x\cdot\operatorname{ctg} x\,dx$;  (b) $\int_0^{\pi/2} \frac{x^2}{\sin^2 x}\,dx$.
19.23. Prove that $\int_0^\infty \sin x^2\,dx$ is convergent. (H)
19.24. For what values of c ∈ R are the following integrals convergent? (H)
(a) $\int_1^\infty \frac{\sin x}{x^c}\,dx$;  (b) $\int_1^\infty \frac{\cos x}{x^c}\,dx$.
19.25. Prove that
(a) $\dfrac{x}{\pi}\cdot\displaystyle\int_{-\infty}^\infty \frac{dt}{t^2 + x^2} = 1$ if x > 0.
(b) $\lim_{x\to 0} \dfrac{|x|}{\pi}\cdot\displaystyle\int_{-\infty}^\infty \frac{f(t)}{t^2 + x^2}\,dt = f(0)$ if f : R → R is continuous and bounded.
19.26. Let f and g be positive and continuous functions on the half-line [a, ∞) such that $\lim_{x\to\infty}(f/g) = 1$. Prove that the integrals $\int_a^\infty f\,dx$ and $\int_a^\infty g\,dx$ are either both convergent or both divergent.
19.27. Let f, g, and h be continuous functions in [a, ∞), and suppose that f(x) ≤ g(x) ≤ h(x) for all x ≥ a. Prove that if the integrals $\int_a^\infty f\,dx$ and $\int_a^\infty h\,dx$ are convergent, then the integral $\int_a^\infty g\,dx$ is also convergent. (H)
19.28. Construct a continuous function f : [0, ∞) → R such that the integral $\int_0^\infty f\,dx$ is absolutely convergent, but f does not tend to zero as x → ∞. (S)
19.29. Prove that if f is uniformly continuous on the interval [0, ∞) and the integral $\int_0^\infty f\,dx$ is convergent, then f tends to zero as x → ∞. (S)
19.30. Construct a continuous function f : [0, ∞) → R such that the integral $\int_0^\infty f\,dx$ is absolutely convergent, but the integral $\int_0^\infty f^2\,dx$ is divergent. (H)
19.31. Prove that if f : [a, ∞) → R is monotone decreasing and the integral $\int_a^\infty f\,dx$ is convergent, then the integral $\int_a^\infty f^2\,dx$ is also convergent.
19.32. Let f : [a, ∞) → R be a nonnegative and continuous function such that $\lim_{x\to\infty} x f(x) = 1/2$. Prove that the integral $\int_a^\infty f\,dx$ is divergent.
19.33. (a) Prove that if f : [a, ∞) → R is a nonnegative function such that the integral $\int_a^\infty f\,dx$ is convergent, then there exists a function g : [a, ∞) → R such that $\lim_{x\to\infty} g(x) = \infty$ and the integral $\int_a^\infty g\cdot f\,dx$ is also convergent.
(b) Prove that if f : [a, ∞) → R is a nonnegative function such that the integral $\int_a^\infty f\,dx$ exists but is divergent, then there exists a positive function g : [a, ∞) → R such that $\lim_{x\to\infty} g(x) = 0$, and the integral $\int_a^\infty g\cdot f\,dx$ is also divergent. (H S)
19.34. Let f : [a, ∞) → R be a nonnegative function such that the integral $\int_a^\infty f\,dx$ exists and is divergent. Is it true that in this case, the integral $\int_a^\infty f/(1+f)\,dx$ is also divergent?
19.35. Let f : [3, ∞) → R be a nonnegative function such that $\int_3^\infty f\,dx$ is convergent. Prove that then the integral $\int_3^\infty f(x)^{1-1/\log x}\,dx$ is also convergent. (∗ H S)
19.36. Prove that if f : [a, ∞) → R is a decreasing nonnegative function, then the integrals $\int_a^\infty f(x)\,dx$ and $\int_a^\infty f(x)\cdot|\sin x|\,dx$ are either both convergent or both divergent.
19.37. Let f be a positive, monotone decreasing, convex, and differentiable function on [a, ∞). Define the (unbounded) area of the region $B_f$ lying under the graph of f as the integral $\int_a^\infty f(x)\,dx$, the volume of the set we obtain by rotating the region $B_f$ about the x-axis by the integral $\pi\cdot\int_a^\infty f^2(x)\,dx$, and the surface area of the set we obtain by rotating the graph of f about the x-axis by the integral $2\pi\cdot\int_a^\infty f(x)\cdot\sqrt{1 + (f'(x))^2}\,dx$. These three values can each be finite or infinite, which (theoretically) leads to eight results. Which of these outcomes can actually occur among functions with the given properties? (H S)
19.38. Prove that Γ (c + 1) = c · Γ (c) for all c > 0. (H S)
19.39. Prove that Γ (n) = (n − 1)! for every positive integer n. (H)
19.40. Compute the integral $\int_0^1 \log^n x\,dx$ for every positive integer n.
19.41. Express the value of the integral $\int_0^\infty e^{-x^3}\,dx$ with the help of the function Γ. How can this result be generalized? (S)
19.42. Prove that
$$\int_0^1 (1-x)^n x^{c-1}\,dx = \frac{n!}{c(c+1)\cdots(c+n)} \tag{19.8}$$
for every c > 0 and every nonnegative integer n. (H)
19.43. Use the substitution x = t/n in (19.8); then let n tend to infinity in the result.
Prove that
$$\Gamma(c) = \lim_{n\to\infty} \frac{n^c\, n!}{c(c+1)\cdots(c+n)} \tag{19.9}$$
for all c > 0. (∗ H S)
19.44. Apply (19.9) with c = 1/2. Using Wallis’s formula, prove from this that $\Gamma(1/2) = \sqrt{\pi}$.
19.45. Show that $\int_0^\infty e^{-x^2}\,dx = \sqrt{\pi}/2$.
19.46. Prove that $\Gamma(2x) = \dfrac{4^x}{2\sqrt{\pi}}\,\Gamma(x)\,\Gamma(x + 1/2)$ for all x > 0. (H)
19.3 Appendix: Proof of Theorem 19.13
Fix a point x0 ∈ (α , β ). We first show that
$$\int_{x_0}^\beta f(g(t))\cdot g'(t)\,dt = \int_{g(x_0)}^\delta f(x)\,dx, \tag{19.10}$$
in the sense that if one of the two improper integrals is convergent, then so is the
other, and they are equal. We distinguish two cases.
I. First we suppose that the value δ is finite, and that g takes on the value δ in the
interval [x0 , β ). Let (cn ) be a sequence such that cn → β and cn < β for all n. Then
g(cn ) → δ , and so it easily follows that the set H consisting of the numbers g(x0 ),
δ , and g(cn ) (n = 1, 2, . . . ) has a greatest and a smallest element. (This is because
if there is an element greater than max(g(x0 ), δ ) in H, namely some g(cn ), then
there can be only finitely many elements greater than this, so there will be a greatest
among these. This proves that H has a greatest element. We can argue similarly to
show that H has a smallest element, too.) If m = min H and M = max H, then by
the continuity of g, we know that [m, M] is a closed and bounded subinterval of the
image of the function g. By the condition, f is integrable on [m, M]. Thus f is also
integrable on the interval $[g(x_0), \delta]$ by Theorem 14.37. Let $\int_{g(x_0)}^\delta f(x)\,dx = I$. Since the integral function of f is continuous on [m, M] by Theorem 15.5,
$$\lim_{n\to\infty} \int_{g(x_0)}^{g(c_n)} f(x)\,dx = I. \tag{19.11}$$
Now applying the substitution formula (that is, Theorem 15.22) with the interval
[x0 , cn ], we obtain
$$\int_{x_0}^{c_n} f(g(t))\cdot g'(t)\,dt = \int_{g(x_0)}^{g(c_n)} f(x)\,dx \tag{19.12}$$
for all n, which we can combine with (19.11) to find that
$$\lim_{n\to\infty} \int_{x_0}^{c_n} f(g(t))\cdot g'(t)\,dt = I. \tag{19.13}$$
Since this holds for every sequence $c_n \to \beta$, $c_n < \beta$,
$$\lim_{c\to\beta} \int_{x_0}^{c} f(g(t))\cdot g'(t)\,dt = I, \tag{19.14}$$
which—by the definition of the improper integral—means that the integral on the
left-hand side of (19.10) is convergent, and its value is I.
II. Now suppose that g does not take on the value δ in the interval [x0 , β ) (including
the case δ = ∞ or δ = −∞). Since g is continuous, the Bolzano–Darboux theorem
implies that either g(x) < δ for all x ∈ [x0 , β ) (this includes the case δ = ∞), or
g(x) > δ for all x ∈ [x0 , β ) (this includes the case δ = −∞). We can assume that
the first case holds (since the second one can be treated in the same way). Then
[g(x0 ), δ ) ⊂ g([x0 , β )).
Suppose that the right-hand side of (19.10) is convergent, and that its value is I.
If cn → β and cn < β for all n, then g(cn ) → δ and g(cn ) < δ for all n. Thus (19.11)
and (19.12) give us (19.13) once again. Since this holds for every sequence cn → β ,
cn < β , we also know that (19.14) holds, that is, the integral on the left-hand side
of (19.10) is convergent, and its value is I.
Now finally, let us suppose that the left-hand side of (19.10) is convergent and
that its value is I. Let (dn ) be a sequence such that dn → δ and g(x0 ) ≤ dn < δ for all
n. Since [g(x0 ), δ ) ⊂ g([x0 , β )), there exist points cn ∈ [x0 , β ) such that dn = g(cn )
(n = 1, 2, . . .). We now show that cn → β . Since cn < β for all n, it suffices to show
that for all b < β , we have cn > b if n is sufficiently large.
We can assume that x0 ≤ b < β . Since g is continuous, it must have a maximum
value in [x0 , b]. If this is A, then A < δ , since we assumed every value of g to be
smaller than δ . By the condition dn → δ , we know that dn > A if n > n0 . Since
dn = g(cn ), if n > n0 , then cn > b, since cn ≤ b would imply that g(cn ) ≤ A.
This shows that $c_n \to \beta$. This, in turn, implies that the left-hand side of (19.12) tends to I as n → ∞. Since the right-hand side of (19.12) is $\int_{g(x_0)}^{d_n} f(x)\,dx$, we have shown that
$$\lim_{n\to\infty} \int_{g(x_0)}^{d_n} f(x)\,dx = I.$$
This holds for every sequence dn → δ , dn < δ , so the integral on the right-hand
side of (19.10) is convergent, and its value is I. This proves (19.10). The exact same
argument shows that
$$\int_\alpha^{x_0} f(g(t))\cdot g'(t)\,dt = \int_\gamma^{g(x_0)} f(x)\,dx \tag{19.15}$$
in the sense that if one of the two improper integrals exists, then so does the other,
and they are equal.
Now we turn to the proof of the theorem. Suppose that γ ≠ δ. We can assume that
γ < δ , since the case γ > δ is similar. Then the interval (γ , δ ) is part of the image
of g, so there exists a point x0 ∈ (α , β ) such that g(x0 ) ∈ (γ , δ ). By the definition of
the improper integral, it follows that the left-hand side of (19.7) is convergent if and
only if the left-hand sides of (19.10) and (19.15) are convergent, and then it is equal
to their sum. The same holds for the right-hand sides. Thus (19.10) and (19.15)
immediately prove statement (i) of the theorem.
Now suppose that γ = δ and that the left-hand side of (19.7) is convergent. Then
the left-hand sides of (19.10) and (19.15) are also convergent. Thus the corresponding right-hand sides are also convergent, and (19.10) and (19.15) both hold. Since
the sum of the right-hand sides of (19.10) and (19.15) is $\int_\gamma^\delta f\,dx = 0$, the left-hand
side of (19.7) is also zero.
Erratum:
Real Analysis
Foundations and Functions of One Variable
Fifth Edition
Miklós Laczkovich and Vera T. Sós
The front matter was revised in the print and online versions of this book. On the title
page “Fifth Edition” is incorrect—instead it should be the “First English Edition”.
On the copyright page it should read as follows:
1st Hungarian edition: T. Sós, Vera, Analı́zis I/1 © Nemzeti Tankönyvkiadó,
Budapest, 1972
2nd Hungarian edition: T. Sós, Vera, Analı́zis A/2 © Nemzeti Tankönyvkiadó,
Budapest, 1976
3rd Hungarian edition: Laczkovich, Miklós & T. Sós, Vera: Analı́zis I © Nemzeti
Tankönyvkiadó, Budapest, 2005
4th Hungarian edition: Laczkovich, Miklós & T. Sós, Vera: Analı́zis I © Typotex,
Budapest, 2012
Translation from the Hungarian language 3rd edition: Valós analı́zis I by Miklós
Laczkovich & T. Sós, Vera, © Nemzeti Tankönyvkiadó, Budapest, 2005. All rights
reserved © Springer 2015.
© Springer New York 2015
The online version of the original book can be found at
http://dx.doi.org/10.1007/978-1-4939-2766-1
Hints, Solutions
Hints
Chapter 2
2.12 Draw the lines one by one. Every time we add a new line, we increase the
number of regions by as many regions as the new line intersects. Show that this
number is one greater than the previous number of lines.
2.16 Apply the inequality of arithmetic and geometric means with the numbers x,
x, and 2 − 2x.
2.24 Let $X = A_1 \cup \ldots \cup A_n$. Prove (using de Morgan’s laws and (1.2)) that every expression $U(A_1, \ldots, A_n)$ can be reduced to the following form: $U_1 \cap U_2 \cap \ldots \cap U_N$, where for every i, $U_i = A_1^{\varepsilon_1} \cup \ldots \cup A_n^{\varepsilon_n}$. Here $\varepsilon_j = \pm 1$, $A_j^{1} = A_j$, and $A_j^{-1} = X \setminus A_j$. Check that if the condition of the exercise holds for U and V of this form, then U = V.
Chapter 3
3.4 A finite set contains a largest element. If we add a positive number to this
element, we get a contradiction.
3.10 Does the sequence of intervals [n/x, 1/n] have a shared point?
3.16 Show that (a) if the number a is the smallest positive element of the set H, then
H = {na : n ∈ Z}; (b) if H ≠ {0} and H does not have a smallest positive element, then H ∩ (0, δ) ≠ ∅ for all δ > 0, and so H is everywhere dense.
3.25 Suppose that H ≠ ∅ and H has a lower bound. Show that the least upper bound of the set of lower bounds of H is also the greatest lower bound of H.
3.31 Since b/a > 1, there exists a rational number n > 0 such that b/a = 1 + (1/n). Justify that $a = (1 + (1/n))^n$ and $b = (1 + (1/n))^{n+1}$. Let n = p/q, where p and q are relatively prime positive integers. Show that $\big((p+q)/p\big)^{p/q}$ can be rational only if q = 1.
3.33 Suppose first that b is rational. If 0 ≤ b ≤ 1, then apply the inequality of
arithmetic and geometric means. Reduce the case b > 1 to the case 0 < b < 1. For
irrational b, apply the definition of taking powers.
Chapter 4
4.2 Show that if the sequences (an ) and (bn ) satisfy the recurrence, then for every
λ , µ ∈ R, the sequence (λ an + µ bn ) does as well. Thus it suffices to show that if α
is a root of the polynomial p, then the sequence (α n ) satisfies the recurrence.
4.3 (a) Let α and β be the roots of the polynomial x2 − x − 1. According to the
previous exercise, for every λ , µ ∈ R, the sequence (λ α n + µβ n ) also satisfies the
recurrence. Choose λ and µ such that λ α 0 + µβ 0 = 0 and λ α 1 + µβ 1 = 1.
4.22 In order to construct a sequence oscillating at infinity, create a sequence that
moves between 0 and 1 back and forth with each new step size getting closer to
zero.
4.27 If the decimal expansion of $\sqrt{2}$ consisted of only a repeating digit from some point on, then $\sqrt{2}$ would be rational.
Chapter 5
5.4 Construct an infinite set A ⊂ N such that A ∩ {kn : n ∈ N} is finite for all k ∈ N,
k > 1. Let an = 1 if n ∈ A and an = 0 otherwise.
5.21 Show that the sequence bn = n · max{aki : 1 ≤ i, k ≤ n} satisfies the conditions.
Chapter 6
6.3 Write the condition in the form an − an−1 ≤ an+1 − an . Show that the sequence
is monotone from some point on.
6.4 (d) Separate the sequence into two monotone subsequences.
6.5 Show that $a_n \ge \sqrt{a}$ and $a_n \ge a_{n+1}$ for all n ≥ 2.
6.7 Multiply the inequalities 1 + 1/k < e1/k < 1 + 1/(k − 1) for all 2 ≤ k ≤ n.
6.9 Suppose that (an+1 − an ) is monotone decreasing. Prove that a2n − an ≥
n · (a2n+1 − a2n ) for all n.
6.17 The condition is that the finite or infinite limit limn→∞ an = α exists, an ≤ α
holds for all n, and if an = α for infinitely many n, then an < α can hold for only
finitely many n.
6.22 A possible construction: let $a_n = \sqrt{k}$ if $2^{2^{k-1}} \le n < 2^{2^k}$.
6.23 The statement is true. Use the same idea as the proof of the transference
principle.
Chapter 7
7.2 (a) Give a closed form for the partial sums using the identity
$$\frac{1}{n^2 + 2n} = \frac{1}{2n} - \frac{1}{2(n+2)}.$$
A similar method can be used for the series (b), (c), and (d).
7.3 Break the rational function p/q into quotients of the form $c_i/(x - a_i)$. Show that here, $\sum_{i=1}^{k} c_i = 0$, and apply the idea used in part (c) of Exercise 7.2.
7.5 Use induction. To prove the inductive step, use the statement of Exercise 3.33.
7.8 Give the upper bound $N/10^{k-1}$ to the sum $\sum_{10^{k-1}\le a_n < 10^k} 1/a_n$, where N denotes how many numbers there are with k digits that do not contain the digit 7.
7.10 It does not follow.
7.11 Yes, it follows. Prove that the given infinite series satisfies Cauchy’s criterion.
Chapter 8
8.2 For all N, there are only finitely many sequences (a1 , . . . , ak ) such that |a1 | +
· · · + |ak | = N.
8.4 Use the fact that every interval contains a rational point.
8.9 Every x ∈ (0, 1] has a unique form x = 2−a1 + 2−a2 + · · · , where a1 < a2 < · · ·
are natural numbers. Apply the bijections
x ↔ (a1 , a2 , . . .) ↔ (a1 , a2 − a1 , a3 − a2 , . . .).
8.11 It suffices to prove that the set of pairs (A, B) has the cardinality of the continuum, where A, B ⊂ N. Find a map that maps these pairs to subsets of N bijectively.
Chapter 9
9.4 Such functions exist. Construct first such a function on the four-element set {a, b, −a, −b} for all 0 < a < b.
9.5 Let, for example, fc be the identically 1 function for all c. A less trivial example:
let fc (x) = x + c for all c, x ∈ R.
The answer to the second question is no. If, for example, g = f1/2 , then g ◦ g = f1 ,
and not every function f1 has such a g. Show that if f (1) = −1, f (−1) = 1, and
f (x) = 0 for all x ≠ ±1, then there does not exist a function g : R → R such that
g ◦ g = f . (The question of which functions can be expressed in the form g ◦ g has
been studied extensively. See, for example, the following paper: R. Isaacs, Iterates
of fractional order, Canad. J. Math. 2 (1950), 409–416.)
9.6 (a) Let f1 be constant and f2 one-to-one. (c) The answer is positive. (d) The
answer is positive.
Chapter 10
10.8 Show that the infimum of the set of positive periods is positive and also a
period.
10.12 First show, using the statement of Exercise 3.17, that if x is irrational, then
the set of numbers {nx} is everywhere dense in [0, 1].
10.15 Suppose that limy→x f (y) = ∞ for all x. Construct a sequence of nested intervals [an , bn ] such that f (x) > n for all x ∈ [an , bn ].
10.16 Suppose that limy→x f (y) = 0 for all x. Construct a sequence of nested intervals [an , bn ] such that | f (x)| < 1/n for all x ∈ [an , bn ].
10.20 Construct a set A ⊂ R that is not bounded from above, but for all a > 0, we
have n · a ∉ A if n is sufficiently large. (We can also achieve that for every a > 0, at
most one n ∈ N+ exists such that n · a ∈ A.) Let f (x) = 1 if x ∈ A, and let f (x) = 0
otherwise.
10.57 Show that if the leading coefficient of the polynomial p of degree three is
positive, then limx→∞ p(x) = ∞ and limx→−∞ p(x) = −∞. Therefore, p takes on both
positive and negative values.
10.58 Apply the Bolzano–Darboux theorem to the function f (x) − x.
10.60 Show that if f is continuous and f(f(x)) = −x for all x, then (i) f is one-to-one, (ii) f is strictly monotone, and so (iii) f(f(x)) is strictly monotone increasing.
10.71 (ii) First show that f and g are bounded in A; then apply the equality
f (y)g(y) − f (x)g(x) = f (y) · (g(y) − g(x)) + g(x) · ( f (y) − f (x)) .
10.76 Let A = {a1 , a2 , . . .}. For an arbitrary real number x, let f (x) be the sum of
the numbers $2^{-n}$ for which an < x. Show that f satisfies the conditions.
10.80 Every continuous function satisfies the condition.
10.81 First show that if p < q are rational numbers and n is a positive integer, then
the set {a ∈ [−n, n] : limx→a f (x) < p < q < f (a)} is countable.
10.82 Apply the ideas used in the solution of Exercise 10.17.
10.89 Not possible.
10.90 Not possible.
10.94 The function g(x) = f (x) − f (1) · x is additive, periodic with every rational
number a period, and bounded from above on an interval. Show that g(x) = 0 for
all x.
10.96 Apply the ideas used in the proof of Theorem 10.76.
10.102 By Exercise 10.101, it suffices to show that every point c of I has a neighborhood in which f is bounded. Let f be bounded from above on [a, b] ⊂ I. We can
assume that b < c. Let
α = sup{x ∈ I : x ≥ a, f is bounded in [a, x]}.
Show (using the weak convexity of f ) that α > c.
Chapter 11
11.16 Use Exercise 6.7.
11.31 Use induction, using the identity
cos(n + 1)x + cos(n − 1)x = 2 cos nx · cos x.
11.35 Show that f (0) = 1. Prove, by induction, that if for some a, c ∈ R we have
f (a) = cos(c · a), then f (na) = cos(c · na) for all n.
11.36 (a) Prove and use that $\sin^{-2}(k\pi/2n) + \sin^{-2}\big((n-k)\pi/2n\big) = 4\sin^{-2}(k\pi/n)$ for all 0 < k < n. (b) Use induction. (c) Apply Theorem 11.26.
Chapter 12
12.9 There is no such point. Use the fact from number theory that for every irrational
x there exist infinitely many rational p/q such that |x − (p/q)| < 1/q2 .
12.15 Show that $(f(y_n) - f(x_n))/(y_n - x_n)$ falls between the numbers
$$\min\left\{\frac{f(x_n)-f(a)}{x_n - a},\ \frac{f(y_n)-f(a)}{y_n - a}\right\} \quad\text{and}\quad \max\left\{\frac{f(x_n)-f(a)}{x_n - a},\ \frac{f(y_n)-f(a)}{y_n - a}\right\}.$$
12.54 Suppose that c < d and f (c) > f (d). Let α = sup{x ∈ [c, d] : f (x) ≥ f (c)}.
Show that α < d and α = d both lead to a contradiction.
12.57 After subtracting a linear function, we can assume that f ′ (c) = 0. We can
also suppose that f ′′ (c) > 0. Thus f ′ is strictly locally increasing at c. Deduce that
this means that f has a strict local minimum at c, and that for suitable x1 < c < x2 ,
we have f (x1 ) = f (x2 ).
12.65 Differentiate the function ( f (x) − sin x)2 + (g(x) − cos x)2 .
12.71 The statement holds for k = n. Prove, with the help of Rolle’s theorem, that
if 1 ≤ k ≤ n and the statement holds for k, then it holds for k − 1 as well.
12.76 Let $g(x) = \big(f(x+h) - f(x)\big)/h$. Then
$$\big(f(a+2h) - 2f(a+h) + f(a)\big)/h^2 = \big(g(a+h) - g(a)\big)/h.$$
By the mean value theorem, there exists a c ∈ (a, a + h) such that $\big(g(a+h) - g(a)\big)/h = g'(c) = \big(f'(c+h) - f'(c)\big)/h$. Apply Theorem 12.9 to f′.
12.81 We can suppose that a > 1. Prove that $a^x = x$ has a root if and only if the solution $x_0$ of $(a^x)' = a^x\cdot\log a = 1$ satisfies $a^{x_0} \le x_0$. Show that this last inequality is equivalent to the inequalities $1 \le x_0\cdot\log a$ and $e \le a^{x_0} = 1/\log a$.
12.82 (a) Let a < 1. Show that the sequence $(a_{2n+1})$ is monotone increasing, the sequence $(a_{2n})$ is monotone decreasing, and both converge to the solution of the equation $a^x = x$. The case a = 1 is trivial. (b) Let a > 1. If the sequence is convergent, then its limit is the solution of the equation $a^x = x$. By the previous exercise, this has a solution if and only if $a \le e^{1/e}$. Show that in this case, the sequence converges (monotonically increasing) to the (smaller) solution of the equation $a^x = x$.
12.83 Apply inequality (12.32).
Chapter 13
13.2 Check that the statement follows for the polynomial xn by the binomial
theorem.
13.6 Show that the value of the equation given in Exercise 13.5 does not change if we write −x in place of x. Use the fact that $\binom{n}{k} = \binom{n}{n-k}$.
13.9 Use Euler’s formula (11.65) for cos x.
Chapter 14
14.3 Show that sF ( f ) ≤ SF (g) for every partition F.
14.6 First show that the upper sums for partitions containing one base point
form an interval. We can suppose that f ≥ 0. Let M = sup{ f (t) : t ∈ [a, b]}. Let g(x)
denote the upper sum corresponding to the partition a < x < b, and let g(a) = g(b) =
M(b − a). We have to see that if a < c < b and g(c) < y < M (b − a), then g takes
on the value y. One of M1 = sup{ f (t) : t ∈ [a, c]} and M2 = sup{ f (t) : t ∈ [c, b]}
is equal to M. By symmetry, we can assume that M1 = M. If c ≤ x ≤ b, then the
first term appearing in the upper sum g(x), that is, sup{ f (t) : t ∈ [a, x]} · (x − a) = M
(x − a), is continuous. The second term, that is, sup{ f (t) : t ∈ [x, b]} · (b − x), is
monotone decreasing.
Thus in the interval [c, b], g is the sum of a continuous and a monotone decreasing
function. Moreover, g(b) = M(b−a) = max g. Show that then g([c, b]) is an interval.
14.9 Using Exercise 14.8, construct nested intervals whose shared point is a point
of continuity.
14.16 Use the equality
$$\sin\alpha + \sin 2\alpha + \cdots + \sin n\alpha = \frac{\sin(n\alpha/2)}{\sin(\alpha/2)}\cdot\sin\big((n+1)\alpha/2\big). \tag{1}$$
14.30 The statement is false. Find a counterexample in which f is the Dirichlet
function.
14.38 Show that for every ε > 0, there are only finitely many points x such that
| f (x)| > ε . Then apply the idea seen in Example 14.45.
14.42 We can assume that c = 0. Fix an ε > 0; then estimate the integrals $\int_0^\varepsilon f(tx)\,dx$ and $\int_\varepsilon^1 f(tx)\,dx$ separately.
14.44 Let max f = M, and let ε > 0 be fixed. Show that there exists an interval [c, d] on which f(x) > M − ε; then prove that $\sqrt[n]{\int_a^b f^n(x)\,dx} > M - 2\varepsilon$ if n is sufficiently large.
14.50 Using properties (iii), (iv), and (v), show that if e(x) = 1 for all x ∈ (a, b) (the value of e can be arbitrary at the points a and b), then $\Phi(e; [a, b]) = b - a$. After this, with the help of properties (i) and (iii), show that $\Phi(f; [a, b]) = \int_a^b f(x)\,dx$ holds for every step function. Finally, use (iv) to complete the solution. (We do not need property (ii).)
14.51 Let α = Φ (e; [0, 1]), where e(x) = 1 for all x ∈ [0, 1]. Use (iii) and (vi) to
show that if b − a is rational and e(x) = 1 for all x ∈ (a, b) (the value of e can be
arbitrary at the points a and b), then Φ (e; [a, b]) = α · (b − a). Show the same for
arbitrary a < b; then finish the solution in the same way as the previous exercise.
Chapter 15
15.3 Is it true that the function $\sqrt{x}$ is Lipschitz on [0, 1]?
15.4 Apply the idea behind the proof of Theorem 15.1.
15.9 Not possible. See Exercise 14.9.
15.17 Compute the derivative of the right-hand side.
15.25 Use the substitution y = π − x.
n
15.32 Let $q = c\cdot r_1^{n_1}\cdots r_k^{n_k}$, where $r_1, \ldots, r_k$ are distinct polynomials of degree one or two. First of all, show that in the partial fraction decomposition of p/q, the numerator A of the elementary rational function $A/r_k^{n_k}$ is uniquely determined. Then show that the degree of the denominator of $(p/q) - (A/r_k^{n_k})$ is smaller than the degree of q, and apply induction.
15.37 By Exercise 15.36, it is enough to show that $\int_2^x dt/\log^{n+1} t = o\big(x/\log^n x\big)$. Use L’Hôpital’s rule for this.
Chapter 16
16.13 Let the two cylinders be {(x, y, z) : y2 + z2 ≤ R2 } and {(x, y, z) : x2 + z2 ≤ R2 }.
Show that if their intersection is A, then the sections Az = {(x, y) : (x, y, z) ∈ A} are
squares, and then apply Theorem 16.11.
16.27 Let N be the product of the first n primes. Choose a partition whose base
points are the points i/N (i = 0, . . . , N), as well as an irrational point between each
of (i − 1)/N and i/N. Show that if c ≤ 2, then the length of the corresponding
inscribed polygonal path tends to infinity as n → ∞. (We can use the fact that the
sum of the reciprocals of the first n prime numbers tends to infinity as n → ∞; see
Corollary 18.16.) Thus if c ≤ 2, then the graph is not rectifiable.
If c > 2, then the graph is rectifiable. To prove this, show that it suffices to consider the partitions F for which there exists an N such that the rational base points
of F are exactly the points i/N (i = 0, . . . , N). When finding bounds for the inscribed
polygonal paths, we can use the fact that if b > 0, then the sums ∑Nk=1 1/kb+1 remain
smaller than a value independent of N.
Hints, Solutions
447
16.29 Let F : a = t0 < t1 < · · · < tn = b be a fine partition, and let ri be the smallest
nonnegative number such that a disk Di centered at g(ti−1 ) with radius ri covers the
set g([ti−1 ,ti ]). Show that the sum of the areas of the disks Di is small.
16.31 (d) and (e) Show that there exist a point P and a line e such that the points of
the curve are equidistant from P and e. Thus the curve is a parabola.
16.35 The graph of the function f agrees with the image of the curve if we know that f(a·φ·cos φ) = a·φ·sin φ for all 0 ≤ φ ≤ π/4. Check that the function g(x) = a·x·cos x is strictly monotone increasing on [0, π/4]. Thus $f(x) = a\cdot g^{-1}(x)\cdot\sin g^{-1}(x)$ for all $x \in [0,\ a\cdot\pi\cdot\sqrt{2}/8]$. To compute the integrals $\int f\,dx$ and $\int f^2\,dx$, use the substitution x = g(t).
16.36 The graph of the function f agrees with the image of the curve if we know
that f (ax − a sin x) = a − a cos x for all x ∈ [0, 2π ]. Check that the function g(x) =
ax − a sin x is strictly monotone increasing on [0, 2aπ ]; then use the ideas from the
previous question.
16.40 Apply the formula for the surface area of a segment of the sphere.
Chapter 17
17.2 Let a = x0 < x1 < · · · < xn = b be a uniform partition of [a, b] into n equal
parts. Show that if j ≤ k, c ∈ [x j−1 , x j ] and d ∈ [xk−1 , xk ], then
$$\sum_{i=j}^{k} \omega\big(f; [x_{i-1}, x_i]\big)\cdot(x_i - x_{i-1}) \ge |f(d) - f(c)|\cdot(b-a)/n. \tag{2}$$
17.3 Show that if $k \le (\sqrt{n})/2$, then the numbers $2/\big((2i+1)\pi\big)$ (i = 1, ..., k) fall into different subintervals of the partition $F_n$. Deduce from this, using (2), that
$$\Omega_{F_n}(f) \ge \frac{1}{2}\cdot\sum_{i=1}^{[(\sqrt{n})/2]} \frac{2}{(2i+1)\pi}\cdot\frac{1}{n} \ge c\cdot\frac{\log n}{n}.$$
17.5 Suppose that the value VF0 ( f ) is the largest, where F0 : a = c0 < c1 < · · · <
ck = b. Show that f is monotone on each interval [ci−1 , ci ] (i = 1, . . . , k).
17.10 See Exercise 16.27.
17.14 Show that f ′ ≡ 0.
17.15 If α ≥ β + 1, then check that f′ is bounded, and so f is Lipschitz. In the case α < β + 1, show that if 0 ≤ x < y ≤ 1, then
$$|f(x) - f(y)| \le |x^\alpha - y^\alpha| + y^\alpha\cdot\min\big\{2,\ x^{-\beta} - y^{-\beta}\big\}.$$
Then prove that $y^\alpha - x^\alpha \le C\cdot(y-x)^\gamma$ and $y^\alpha\cdot\min\big\{2,\ x^{-\beta} - y^{-\beta}\big\} \le C\cdot(y-x)^\gamma$ with a suitable constant C. In the proof of the second inequality, distinguish two cases based on whether y − x is small or large compared to y.
17.16 If a function is Hölder α with some constant α ≥ 1, then f is Hölder 1, so
Lipschitz, and so it has bounded variation. If α < 1, then there exists a function
that is Hölder α that does not have bounded variation. Look for the example in the
form $x^n \cdot \sin x^{-n}$, where n is big.
17.19 To prove the “only if” statement, show that if f does not have bounded
variation on [a, b], then either there exists a point a ≤ c < b such that f does not
have bounded variation in any right-hand neighborhood of c, or there exists a point
a < c ≤ b such that f does not have bounded variation on any left-hand neighborhood of c. If, for example, f does not have bounded variation in any right-hand
neighborhood of c, then look for a strictly decreasing sequence tending to c with the
desired property.
Chapter 18
18.4 Let c be a shared point of discontinuity. Then there exists an ε > 0 such that
no good δ exists for the continuity of f or g at c. Show that for every δ > 0, there
exists a partition with mesh smaller than δ such that the approximating sums formed
with different inner points differ by at least ε 2 . By symmetry, we can suppose that
c < b and that f is discontinuous from the right at c. Distinguish two cases based on
whether g is discontinuous or continuous from the right at c.
18.5 Show that for all δ > 0, there exists a partition 0 = x0 < x1 < · · · < xn = 1 with
mesh smaller than δ , and there exist inner points ci and di such that
$$\sum_{i=1}^{n} f(c_i)\big(f(x_i) - f(x_{i-1})\big) > 1 \quad\text{and}\quad \sum_{i=1}^{n} f(d_i)\big(f(x_i) - f(x_{i-1})\big) < -1.$$
18.6 We give hints for two different proofs. (i) Choose a sequence of partitions
Fn satisfying limn→∞ δ (Fn ) = 0, and for each n, fix the inner points. Show that the
sequence of approximating sums σFn ( f , g) is convergent. Let limn→∞ σFn ( f , g) = I.
Show that $\int_a^b f\,dg$ exists and its value is I. (ii) Let An denote the set of approximating
sums corresponding to partitions with mesh smaller than 1/n with arbitrary inner
points, and let Jn be the smallest closed interval containing the set An . Show that
J1 ⊃ J2 ⊃ . . ., and as n → ∞, the length of Jn tends to zero. Thus the
intervals Jn
have exactly one shared point. Let this shared point be I. Show that $\int_a^b f\,dg$ exists,
and its value is I.
18.8 Use the statement of Exercise 18.4.
18.9 Use the statement of Exercise 17.19.
18.11 Suppose first that g is Lipschitz and monotone increasing, and apply the idea
used in the proof of Theorem 18.10. Prove that every Lipschitz function can be
expressed as the difference of two monotone increasing Lipschitz functions.
Chapter 19
19.7 By Cauchy’s criterion, $\lim_{x\to\infty} \int_x^{2x} f\,dt = 0$.
19.23 First, with the help of Theorem 19.22 or by the method seen in Example 19.20.3, show that the integral $\int_1^\infty \sin x/\sqrt{x}\,dx$ is convergent; then apply the substitution $x^2 = t$.
19.24 The integrals are convergent for all c > 0. We can prove this with the help
of Theorem 19.22 or by the method seen in Example 19.20.3. If c ≤ 0, then the
integrals are divergent. Show that in this case, Cauchy’s criterion is not satisfied.
19.27 Use Cauchy’s criterion.
19.30 Apply the construction in the solution of Exercise 19.28, with the change of
choosing the values fn (an ) to be large.
19.33 In both exercises, choose the function g to be piecewise constant; that is,
constant on the intervals (an−1 , an ), where (an ) is a suitable sequence that tends to
infinity.
19.35 Let $g(x) = f(x)^{1-1/\log x}$. Show that if at a point x, we have $f(x) < 1/x^2$, then $g(x) \le c/x^2$; if $f(x) \ge 1/x^2$, then $g(x) \le c\cdot f(x)$. Then use the majorization principle.
19.37 Only three cases are possible.
19.38 Use integration by parts.
19.39 Use induction with the help of the previous question.
19.42 Prove the statement by induction on n. (Let the nth statement be that (19.8)
holds for every c > 0.) Prove the induction step with the help of integration by parts.
19.43 Use the fact that $(1 - t/n)^n \le e^{-t}$ for every 0 < t ≤ n, and show from this that $\Gamma(c) > n^c\cdot n!/\big(c(c+1)\cdots(c+n)\big)$.
Show that $e^t\cdot(1 - t/n)^n$ is monotone decreasing on [0, n], and deduce that
$$e^{-t} \le (1+\varepsilon)\cdot\left(1 - \frac{t}{n}\right)^{n}$$
for all t ∈ [0, n] if n is sufficiently large. Show from this that for sufficiently large n, $\Gamma(c) < \varepsilon + (1+\varepsilon)\cdot n^c\cdot n!/\big(c(c+1)\cdots(c+n)\big)$.
19.46 Apply (19.9) and Wallis’s formula.
Solutions
Chapter 2
2.3 Let Hn denote the system of sets H ⊂ {1, . . . , n} satisfying the condition, and
let an be the number of elements in Hn . It is easy to check that a1 = 2 and a2 = 3
(the empty set works, too). If n > 2, then we can show that for every H ⊂ {1, . . . , n},
$$H \in \mathcal{H}_n \iff \big[(n \notin H) \wedge (H \cap \{1, \ldots, n-1\} \in \mathcal{H}_{n-1})\big] \vee \big[(n \in H) \wedge (H \cap \{1, \ldots, n-2\} \in \mathcal{H}_{n-2}) \wedge (n-1 \notin H)\big].$$
It is then clear that for n > 2, we have an = an−1 + an−2 . Thus the sequence of
numbers an is 2, 3, 5, 8, . . .. This is called the Fibonacci sequence, which has an
explicit formula (see Exercise 4.3).
2.11 The inductive step does not work when n = 3; we cannot state that P = Q here.
2.14 Let a ≥ −1. The inequality $(1 + a)^n \ge 1 + na$ is clearly true if 1 + na < 0, so we can assume that 1 + na ≥ 0. Apply the inequality of the arithmetic and geometric means consisting of the n numbers 1 + na, 1, . . . , 1. We get that
$$\sqrt[n]{1 + na} \le \frac{(1 + na) + n - 1}{n} = 1 + a.$$
If we raise both sides to the nth power, then we get the inequality we want to prove.
Chapter 3
3.27 If the set N were bounded from above, then it would have a least upper bound.
Let this be a. Then n ≤ a for all n ∈ N. Since if n ∈ N, then n + 1 ∈ N, we must have
n + 1 ≤ a, that is, n ≤ a − 1 for all n ∈ N. This, however, is impossible, since then
a − 1 would also be an upper bound. This shows that N is not bounded from above,
so the axiom of Archimedes is satisfied.
Let [an , bn ] be nested closed intervals. The set A = {an : n ∈ N} is bounded from
above, because each bn is an upper bound. If sup A = c, then an ≤ c for all n. Since
bn is also an upper bound of A, c ≤ bn for all n. Thus c ∈ [an , bn ] for every n, which
is Cantor’s axiom.
3.32 If b/a = c, then c ≥ 1. By Theorems 3.23 and 3.24, we have that $b^r/a^r = c^r \ge c^0 = 1$.
3.33 First let b be rational and 0 ≤ b ≤ 1. Then b = p/q, where q > 0 and 0 ≤ p ≤ q
are integers. By the inequality of arithmetic and geometric means,
$$(1+x)^b = \sqrt[q]{(1+x)^p} = \sqrt[q]{(1+x)^p \cdot 1^{q-p}} \le \frac{p(1+x) + q - p}{q} = 1 + bx. \tag{3}$$
Now let b > 1 be rational. If 1 + bx ≤ 0, then $(1+x)^b \ge 1 + bx$ holds. Thus we can suppose that bx > −1. Since 0 < 1/b < 1, we can apply (3) to bx instead of x, and 1/b instead of b, to get that
$$(1 + bx)^{1/b} \le 1 + (1/b)\cdot bx = 1 + x.$$
Then applying Exercise 3.32, we get that $1 + bx \le (1+x)^b$. Thus we have proved the statement for a rational exponent b.
In the proof of the general case, we can assume that x ≠ 0 and b ≠ 0, 1, since the statement is clear when x = 0 or b = 0, 1. Let x > 0 and 0 < b < 1. If b < r < 1 is rational, then by Theorem 3.27, $(1+x)^b \le (1+x)^r$, and by (3), $(1+x)^r \le 1 + rx$. Thus $(1+x)^b \le 1 + rx$ for all rational b < r < 1. It already follows from this that $(1+x)^b \le 1 + bx$. Indeed, if $(1+x)^b > 1 + bx$, then we can choose a rational number b < r < 1 such that $(1+x)^b > 1 + rx$, which is impossible.
Now let −1 < x < 0 and 0 < b < 1. If 0 < r < b is rational, then by Theorem 3.27, $(1+x)^b < (1+x)^r$, and by (3), $(1+x)^r \le 1 + rx$. Thus $(1+x)^b \le 1 + rx$ for all rational numbers 0 < r < b. It then follows that $(1+x)^b \le 1 + bx$, since if $(1+x)^b > 1 + bx$, then we can choose a rational number 0 < r < b such that $(1+x)^b > 1 + rx$, which is impossible.
We can argue similarly when b > 1.
Chapter 4
4.1 $a_n = 1 + (-1)^n \cdot 2^{2-n}$.
4.3 (a) The roots of the polynomial $x^2 - x - 1$ are $\alpha = (1+\sqrt{5})/2$ and $\beta = (1-\sqrt{5})/2$. In the sense of the previous question, for every λ, µ ∈ R, the sequence $(\lambda\alpha^n + \mu\beta^n)$ also satisfies the recurrence. Choose λ and µ such that $\lambda + \mu = \lambda\alpha^0 + \mu\beta^0 = 0 = u_0$ and $\lambda\alpha^1 + \mu\beta^1 = 1 = u_1$ hold. It is easy to see that the choice $\lambda = 1/\sqrt{5}$, $\mu = -1/\sqrt{5}$ works. Thus the sequence
$$v_n = \frac{1}{\sqrt{5}}\left( \left(\frac{1+\sqrt{5}}{2}\right)^{n} - \left(\frac{1-\sqrt{5}}{2}\right)^{n} \right) \tag{4}$$
satisfies the recurrence, and $v_0 = u_0$, $v_1 = u_1$. Then by induction, it follows that $v_n = u_n$ for all n, that is, $u_n$ is equal to the right-hand side of (4) for all n.
4.13 For a given ε > 0, there exists an N such that if n ≥ N, then |an − a| < ε . Let
|a1 − a| + · · · + |aN − a| = K. If n ≥ N, then
$$|s_n - a| = \frac{\big|(a_1 - a) + \cdots + (a_n - a)\big|}{n} \le \frac{|a_1 - a| + \cdots + |a_n - a|}{n} \le \frac{K + n\varepsilon}{n} < 2\varepsilon,$$
given that n > K/ε. Thus $s_n \to a$. It is clear that the sequence $a_n = (-1)^n$ satisfies $s_n \to 0$.
Chapter 5
5.9 Let $\max_{1\le i\le k} a_i = a$. It is clear that
$$a = \sqrt[n]{a^n} \le \sqrt[n]{a_1^n + \cdots + a_k^n} \le \sqrt[n]{k\cdot a^n} = \sqrt[n]{k}\cdot a \to a,$$
and so the statement follows by the squeeze theorem.
Chapter 6
6.15 If the set N were bounded from above, then the sequence an = n would also
be bounded, so by the Bolzano–Weierstrass theorem, it would have a convergent
subsequence. This, however, is impossible, since the distance between any two terms
of this sequence is at least 1. This shows that N is not bounded from above, so the
axiom of Archimedes holds.
Let [an , bn ] be nested closed intervals. The sequence (an ) is bounded, since a1 is
a lower bound and every bi is an upper bound for it. By the Bolzano–Weierstrass
theorem, we can choose a convergent subsequence ank . If ank → c, then c ≤ bi for
all i, since ank ≤ bi for all i and k. On the other hand, if nk ≥ i, then ank ≥ ai , so
c ≥ ai . Thus c ∈ [ai , bi ] for all i, so Cantor’s axiom holds.
6.21 Consider a sequence $a_k \to \infty$ such that $a_{k+1} - a_k \to 0$. Repeating the terms of this sequence enough times gives us a suitable sequence. For example, starting from the sequence $a_k = \sqrt{k}$: (a) Let $a_n = \sqrt{k}$ if $2^{k-1} \le n < 2^k$. (d) Let $(t_k)$ be a strictly monotone increasing sequence of positive integers such that
$$t_{k+1} > t_k + \max_{n < t_k} s_n$$
for all k, and let $a_n = \sqrt{k}$ if $t_{k-1} \le n < t_k$. Then $(a_n)$ is monotone increasing and tends to infinity. If $t_{k-1} \le n < t_k$, then $n + s_n < t_{k+1}$, so $a_{s_n} - a_n \le \sqrt{k+1} - \sqrt{k}$, which implies $a_{s_n} - a_n \to 0$.
Chapter 7
7.2
(a) Since $1/(n^2 + 2n) = (1/2)\cdot\big(1/n - 1/(n+2)\big)$, we have that
$$\sum_{n=1}^{N} \frac{1}{n^2 + 2n} = \frac{1}{2}\cdot\sum_{n=1}^{N}\left(\frac{1}{n} - \frac{1}{n+2}\right) = \frac{1}{2}\cdot\left(1 + \frac{1}{2} - \frac{1}{N+1} - \frac{1}{N+2}\right).$$
Thus the partial sums of the series tend to 3/4, so the series is convergent with
sum 3/4.
(b) If we leave out the first term in the series in (a), then we get the series in (b).
Thus the partial sums of this new series tend to (3/4) − (1/3) = 5/12, so it is
convergent with sum 5/12.
(c) Since 1/(n3 − n) = (1/2) · (1/(n − 1) − 2/n + 1/(n + 1)), we have that
$$\sum_{n=2}^{N} \frac{1}{n^3 - n} = \frac{1}{2}\cdot\sum_{n=2}^{N}\left(\frac{1}{n-1} - \frac{2}{n} + \frac{1}{n+1}\right) = \frac{1}{2}\cdot\left(1 - \frac{1}{2} - \frac{1}{N} + \frac{1}{N+1}\right).$$
Thus the partial sums of the series tend to 1/4, so the series is convergent with
sum 1/4.
7.5 We prove this by induction. The statement holds for n = 1. To prove the inductive step, we need to show that if n ≥ 1, then
$$\frac{1}{(n+1)^{b+1}} \le \left(1 + \frac{1}{b} - \frac{1}{b\,(n+1)^b}\right) - \left(1 + \frac{1}{b} - \frac{1}{b\, n^b}\right).$$
After multiplying this through by $b\,(n+1)^b$ and rearranging, this takes the form
$$1 + \frac{b}{n+1} \le \left(1 + \frac{1}{n}\right)^{b}. \tag{5}$$
If b ≥ 1, then (5) is clear from the Bernoulli inequality for real powers (the first statement of Exercise 3.33).
If 0 < b < 1, then take the reciprocal of both sides of (5), then raise them to the power 1/b. After some rearrangement, (5) becomes the inequality
$$\left(1 - \frac{b}{n+1+b}\right)^{1/b} \ge 1 - \frac{1}{n+1}. \tag{6}$$
Since 1/b > 1, (6) again follows from the first statement of Exercise 3.33.
7.14 Let limn→∞ sn = A, where sn is the nth partial sum of the series. Since
$$a_1 + 2a_2 + \cdots + na_n = (a_1 + \cdots + a_n) + (a_2 + \cdots + a_n) + \cdots + (a_n) = s_n + (s_n - s_1) + \cdots + (s_n - s_{n-1}) = (n+1)s_n - (s_1 + \cdots + s_n),$$
we have that
$$\frac{a_1 + 2a_2 + \cdots + na_n}{n} = \frac{n+1}{n}\cdot s_n - \frac{s_1 + \cdots + s_n}{n}. \tag{7}$$
Since $s_n \to A$, we have $(s_1 + \cdots + s_n)/n \to A$ (see Exercise 4.13), and so the right-hand side of (7) tends to zero.
Chapter 9
9.6 (b), (c), (d): see the following paper: W. Sierpiński, Sur les suites infinies de
fonctions définies dans les ensembles quelconques, Fund. Math. 24 (1935), 209–212.
(See also: W. Sierpiński: Oeuvres Choisies (Warsaw 1976) volume III, 255–258.)
9.18 Suppose that the graph of f intersects the line y = ax + b at more than two
points. Then there exist numbers x1 < x2 < x3 such that f (xi ) = axi + b (i = 1, 2, 3).
By the strict convexity of f , f (x2 ) < hx1 ,x3 (x2 ). It is easy to see that both sides are
equal to ax2 + b there, which is a contradiction.
Chapter 10
10.17 Let {rn } be an enumeration of the rational numbers. Let r ∈ Q and ε = 1/2.
Since f is continuous in r, there exists a closed and bounded interval I1 such
that sup{ f (x) : x ∈ I1 } < inf{ f (x) : x ∈ I1 } + 1. We can assume that r1 ∉ I1, since otherwise, we can take a suitable subinterval of I1. Suppose that n > 1 and we have
already chosen the interval In−1 . Choose an arbitrary rational number r from the interior of In−1 . Since f is continuous in r, there exists a closed and bounded interval
In ⊂ In−1 such that sup{ f (x) : x ∈ In } < inf{ f (x) : x ∈ In } + 1/n. We can assume
that rn ∉ In, since otherwise, we can choose a suitable subinterval of In. We can also
assume that In is in the interior of In−1 (that is, they do not share an endpoint).
Thus we have defined the nested closed intervals In for each n. Let $x_0 \in \bigcap_{n=1}^{\infty} I_n$. Then x0 is irrational, since x0 ≠ rn for all n. Let un = inf{ f (x) : x ∈ In } and vn =
sup{ f (x) : x ∈ In }. Clearly, un ≤ f (x0 ) ≤ vn and vn − un < 1/n for all n. Let ε > 0
be fixed. If n > 1/ε , then f (x0 ) − ε < un ≤ vn < f (x0 ) + ε , from which it is clear
that | f (x) − f (x0 )| < ε for all x ∈ In . Since x0 ∈ In+1 and In+1 is in the interior of In ,
there exists a δ > 0 such that (x0 − δ, x0 + δ) ⊂ In. Thus | f (x) − f (x0)| < ε whenever
|x − x0 | < δ , which proves that f is continuous at x0 .
10.21 See the following paper: H.T. Croft, A question of limits, Eureka 20
(1957), 11–13. For the history and a generalization of the problem, see L. Fehér,
M. Laczkovich, and G. Tardos, Croftian sequences, Acta Math. Hung. 56 (1990),
353–359.
10.40 Let f (0) = 1 and f (x) = 0 for all x ≠ 0. Moreover, let g(x) = 0 for all x. Then limx→0 f (x) = limx→0 g(x) = 0. On the other hand, f (g(x)) = 1 for all x.
10.61 If I is degenerate, then so is f (I). If I is not degenerate, then let α = inf f (I)
and β = sup f (I). We show that (α , β ) ⊂ f (I). If α < c < β , then for suitable
a, b ∈ I, we have α < f (a) < c < f (b) < β . Since f is continuous on [a, b], by
Theorem 10.57 f attains the value c over [a, b], that is, c ∈ f (I). This shows that
(α , β ) ⊂ f (I).
If α = −∞ and β = ∞, then by the above, f (I) = R, so the statement holds.
If α ∈ R and β = ∞, then (α , ∞) ⊂ f (I) ⊂ [α , ∞), so f (I) is one of the intervals
(α , ∞), [α , ∞), so the statement holds again. We can argue similarly if α = −∞
and β ∈ R. Finally, if α , β ∈ R, then by (α , β ) ⊂ f (I) ⊂ [α , β ], we know that f (I)
can only be one of the following sets: (α , β ), [α , β ), (α , β ], [α , β ]. Thus f (I) is an
interval.
10.88 Such is the function −x, for example.
10.91 It is easy to see by induction on k that
$$f\!\left(\frac{x_1 + \cdots + x_{2^k}}{2^k}\right) \le \frac{f(x_1) + \cdots + f(x_{2^k})}{2^k} \tag{8}$$
for all (not necessarily distinct) numbers $x_1, \ldots, x_{2^k} \in I$. If $x_1, \ldots, x_n \in I$ are fixed numbers, then let $s = (x_1 + \cdots + x_n)/n$ and $x_i = s$ for all $n < i \le 2^n$. By (8), we have
$$f(s) \le \frac{f(x_1) + \cdots + f(x_n) + (2^n - n)\cdot f(s)}{2^n},$$
and so
$$f(s) \le \frac{f(x_1) + \cdots + f(x_n)}{n}.$$
10.99 Suppose that f (x) ≤ K for all |x − x0 | < δ . If |h| < δ , then
$$f(x_0) \le \frac{f(x_0 - h) + f(x_0 + h)}{2} \le \frac{K + f(x_0 + h)}{2},$$
so f (x0 + h) ≥ 2 f (x0 ) − K. This means that 2 f (x0 ) − K is a lower bound of the
function f over (x0 − δ , x0 + δ ). Thus f is also bounded from below, and so it is
bounded in (x0 − δ , x0 + δ ).
10.100 If we apply inequality (10.24) with the choices a = x and $b = x + 2^k h$, then we get that
$$f\big(x + 2^{k-1} h\big) \le \frac{1}{2}\Big[ f(x) + f\big(x + 2^k h\big) \Big].$$
Dividing both sides by $2^{k-1}$ and then rearranging yields the inequality
$$\frac{1}{2^{k-1}}\, f\big(x + 2^{k-1} h\big) - \frac{1}{2^{k}}\, f\big(x + 2^{k} h\big) \le \frac{1}{2^k}\, f(x).$$
If we take the sum of these inequalities for k = 1, . . . , n, then the inner terms cancel out on the left-hand side, and we get that
$$f(x + h) - \frac{1}{2^n}\, f(x + 2^n h) \le \left(\frac{1}{2} + \frac{1}{4} + \cdots + \frac{1}{2^n}\right) f(x) = \left(1 - \frac{1}{2^n}\right) f(x).$$
A further rearrangement gives us the inequality
$$f(x + h) - f(x) \le \frac{1}{2^n}\cdot\big[ f(x + 2^n h) - f(x) \big]. \tag{9}$$
10.101 By Exercise 10.99, f is bounded on the interval $J = (x_0 - \delta, x_0 + \delta)$. Let |f(x)| ≤ M for all x ∈ J. Let ε > 0 be fixed, and choose a positive integer n such that $2^n > 2M/\varepsilon$ holds. If $|t| < \delta/2^n$, then $x_0 + 2^n t \in J$, so $|f(x_0 + 2^n t)| \le M$. Thus by (9), we have
$$f(x_0 + t) - f(x_0) \le \frac{1}{2^n}\cdot\big[f(x_0 + 2^n t) - f(x_0)\big] \le \frac{1}{2^n}\cdot 2M < \varepsilon.$$
If, however, we apply (9) with the choices $x = x_0 + t$ and $h = -t$, then we get that
$$f(x_0) - f(x_0 + t) \le \frac{1}{2^n}\cdot\big[f(x_0 - (2^n - 1)t) - f(x_0 + t)\big] \le \frac{1}{2^n}\cdot 2M < \varepsilon.$$
Finally, we conclude that $|f(x_0 + t) - f(x_0)| < \varepsilon$ for all $|t| < \delta/2^n$, which proves that f is continuous at $x_0$.
Chapter 11
11.36 (a) First of all, we show that
$$\sin^{-2}(k\pi/2m) + \sin^{-2}\big((m-k)\pi/2m\big) = 4\sin^{-2}(k\pi/m) \tag{10}$$
for all 0 < k < m. This is because $\sin^{-2}\big((m-k)\pi/2m\big) = \cos^{-2}(k\pi/2m)$, and so the left-hand side of (10) is
$$\sin^{-2}(k\pi/2m) + \cos^{-2}(k\pi/2m) = \frac{\cos^2(k\pi/2m) + \sin^2(k\pi/2m)}{\sin^2(k\pi/2m)\cdot\cos^2(k\pi/2m)} = \frac{1}{\sin^2(k\pi/2m)\cdot\cos^2(k\pi/2m)} = \frac{4}{\sin^2\!\big(2\cdot(k\pi/2m)\big)} = 4\cdot\sin^{-2}(k\pi/m).$$
Applying the equality (10) with m = 2n, we get that
$$\sin^{-2}(k\pi/4n) + \sin^{-2}\big((2n-k)\pi/4n\big) = 4\sin^{-2}(k\pi/2n) \tag{11}$$
for all 0 < k < 2n. If we now sum the equalities (11) for k = 1, . . . , n, then on the left-hand side, we get every term of the sum defining $A_{2n}$, except that $\sin^{-2}(n\pi/4n) = 2$ appears twice, and $\sin^{-2}(2n\pi/4n) = 1$ is missing. Thus the sum of the left-hand sides is $A_{2n} + 1$. Since the sum of the right-hand sides is $4A_n$, we get that $A_{2n} = 4A_n - 1$.
(b) The statement is clear for n = 0, and follows for n > 0 by induction.
(c) The inequality is clear by Theorem 11.26. Applying this for x = kπ /2n and then
summing the inequality we get for k = 1, . . . , n gives us the desired bound.
(d) By the above, the partial sum $S_{2^n} = \sum_{k=1}^{2^n} (1/k^2)$ satisfies
$$\frac{\pi^2}{(2\cdot 2^n)^2}\cdot\big(A_{2^n} - 2^n\big) \le S_{2^n} \le \frac{\pi^2}{(2\cdot 2^n)^2}\cdot A_{2^n},$$
and so
$$\frac{\pi^2}{6} - \frac{\pi^2\cdot 2^n}{4^{n+1}} \le S_{2^n} \le \frac{\pi^2}{6} + \frac{\pi^2}{3\cdot 4^{n+1}}.$$
Then by the squeeze theorem, $S_{2^n} \to \pi^2/6$. Since the series is convergent by Example 7.11.1, we have $S_n \to \pi^2/6$, that is, $\sum_{k=1}^{\infty} 1/k^2 = \pi^2/6$.
Chapter 12
12.15 Let
$$m_n = \min\left\{\frac{f(x_n) - f(a)}{x_n - a},\ \frac{f(y_n) - f(a)}{y_n - a}\right\} \quad\text{and}\quad M_n = \max\left\{\frac{f(x_n) - f(a)}{x_n - a},\ \frac{f(y_n) - f(a)}{y_n - a}\right\}$$
for all n. It is clear that $m_n \to f'(a)$ and $M_n \to f'(a)$ if n → ∞.
Let $p_n = (a - x_n)/(y_n - x_n)$ and $q_n = (y_n - a)/(y_n - x_n)$. Then $p_n, q_n > 0$ and $p_n + q_n = 1$. Since
$$\frac{f(y_n) - f(x_n)}{y_n - x_n} = p_n\cdot\frac{f(a) - f(x_n)}{a - x_n} + q_n\cdot\frac{f(y_n) - f(a)}{y_n - a},$$
we have mn ≤ ( f (yn ) − f (xn ))/(yn − xn ) ≤ Mn . Then the statement follows by the
squeeze theorem.
12.19 See Example 13.43.
12.23 Let (x0 , y0 ) be a common point of the two graphs. Then 4a(a − x0 ) =
4b(b + x0 ), so a(a − x0 ) = b(b + x0 ) and x0 = a − b. The slopes of the two graphs
at the point (x0 , y0 ) are
$$m_1 = -2a/\sqrt{4a(a - x_0)} \quad\text{and}\quad m_2 = 2b/\sqrt{4b(b + x_0)}.$$
Thus
$$m_1\cdot m_2 = \frac{-4ab}{\sqrt{4a(a-x_0)}\cdot\sqrt{4b(b+x_0)}} = \frac{-4ab}{\big(\sqrt{4a(a-x_0)}\big)^2} = \frac{-b}{a - x_0} = -1.$$
It is well known (and easy to show) that if the product of the slopes of two lines is
−1, then the two lines are perpendicular.
12.25 Since π − e < 1, if x < 0, then $2^x < 1 < (\pi - e)^x$, and if x > 0, then $2^x > 1 > (\pi - e)^x$. Thus the only point of intersection of the two graphs is (0, 1). At this point, the slopes of the tangent lines are log 2 and log(π − e). This means that the angle between the tangent line of $2^x$ at (0, 1) and the x-axis is arc tg(log 2), while the angle between the tangent line of $(\pi - e)^x$ at (0, 1) and the x-axis is arc tg(log(π − e)). Thus the angle between the two tangent lines is arc tg(log 2) − arc tg(log(π − e)).
12.44 If y = loga x, then y′ = 1/(x · log a), so xy′ is constant, (xy′ )′ = 0, y′ + xy′′ = 0.
Thus −1 = (−x)′ = (y′ /y′′ )′ = (y′′2 − y′ y′′′ )/y′′2 , so y′ y′′′ − 2(y′′ )2 = 0.
12.48 Let a ≤ b. The volume of the box is
K(x) = (a − 2x)(b − 2x)x = 4x3 − 2(a + b)x2 + abx.
We need to find the maximum of this function on the interval [0, a/2]. Since K(0) =
K (a/2) = 0, the absolute maximum is in the interior of the interval, so it is a local
extremum. The solutions of the equation K ′ (x) = 12x2 − 4(a + b)x + ab = 0 are
$$x = \frac{a + b \pm \sqrt{a^2 + b^2 - ab}}{6}.$$
Since
$$\frac{a + b + \sqrt{a^2 + b^2 - ab}}{6} \ge \frac{a + b + \sqrt{a^2 + b^2 - b^2}}{6} = \frac{2a + b}{6} \ge \frac{a}{2}$$
and we are looking for extrema inside (0, a/2), K(x) has a local and therefore absolute maximum at the point
$$x = \frac{a + b - \sqrt{a^2 + b^2 - ab}}{6}.$$
For example, in the case a = b, the box has maximum volume if x = a/6.
12.53 $f'(x) = 1 + 4x\cdot\sin(1/x) - 2\cos(1/x)$ if x ≠ 0, and
$$f'(0) = \lim_{x\to 0}\frac{f(x) - f(0)}{x - 0} = \lim_{x\to 0}\big(1 + 2x\sin(1/x)\big) = 1.$$
Thus f′(0) > 0. At the same time, f′ takes on negative values in any right- or left-hand neighborhood of 0. This is because
$$f'\!\left(\frac{\pm 1}{2k\pi}\right) = 1 - 2 < 0$$
for every positive integer k. It follows that 0 does not have a neighborhood in which
f is monotone increasing, since f is strictly locally decreasing at each of the points
1/(2kπ ).
12.87 Suppose that e = p/q, where p and q are positive integers. Then q > 1, since
e is not an integer. Let an = 1 + 1/1! + · · · + 1/n!. The sequence (an ) is strictly
monotone increasing, and by Exercise 12.86, e = limn→∞ an . Thus e > an for all n.
If n > q, then
$$q!\cdot(a_n - a_q) = q!\cdot\left(\frac{1}{(q+1)!}+\cdots+\frac{1}{n!}\right) = \frac{1}{q+1}+\frac{1}{(q+1)(q+2)}+\cdots+\frac{1}{(q+1)\cdots n} \le$$
$$\le \frac{1}{q+1}+\frac{1}{(q+1)^2}+\cdots+\frac{1}{(q+1)^{n-q}} = \frac{1}{q+1}\cdot\frac{1-\frac{1}{(q+1)^{n-q}}}{1-\frac{1}{q+1}} < \frac{1}{q+1}\cdot\frac{1}{1-\frac{1}{q+1}} = \frac{1}{q}.$$
This holds for all n > q, so 0 < q! · (e − aq ) ≤ 1/q < 1. On the other hand, since
e = p/q,
$$q!\cdot(e - a_q) = q!\cdot\left(\frac{p}{q} - 1 - \frac{1}{1!} - \frac{1}{2!} - \cdots - \frac{1}{q!}\right)$$
is an integer, which is impossible.
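The key estimate can be checked numerically for small q (an illustration only; the argument above is exact):

import math

# a_q = 1 + 1/1! + ... + 1/q!; the proof shows 0 < q!(e - a_q) <= 1/q < 1.
for q in range(2, 10):
    a_q = sum(1.0 / math.factorial(k) for k in range(q + 1))
    val = math.factorial(q) * (math.e - a_q)
    assert 0 < val <= 1.0 / q
    print(q, round(val, 6), "<=", round(1.0 / q, 6))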
12.95 The function f has a strict local minimum at 0, since f(0) = 0 and f(x) > 0 if x ≠ 0. Now
$$f'(x) = x^2\left(4x\left(2+\sin\frac{1}{x}\right) - \cos\frac{1}{x}\right)$$
if x ≠ 0. We can see that f′ takes on both negative and positive values in every right-hand neighborhood of 0. For example, if k ≥ 2 is an integer, then
$$f'\!\left(\frac{1}{2k\pi}\right) = \frac{1}{(2k\pi)^2}\cdot\left(\frac{8}{2k\pi} - 1\right) < 0$$
and
$$f'\!\left(\frac{1}{(2k+1)\pi}\right) = \frac{1}{(2k+1)^2\pi^2}\cdot\left(\frac{8}{(2k+1)\pi} + 1\right) > 0.$$
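Assuming, consistently with the derivative displayed above, that the function in question is f(x) = x⁴(2 + sin(1/x)) with f(0) = 0 (this identification is not stated in the hint itself), the sign pattern of f′ can be checked numerically:

import math

# f'(x) = x^2 * (4x*(2 + sin(1/x)) - cos(1/x)) for x != 0 (assumed form, see above)
def fprime(x):
    return x * x * (4 * x * (2 + math.sin(1 / x)) - math.cos(1 / x))

for k in range(2, 7):
    x_neg = 1 / (2 * k * math.pi)           # cos(1/x) = 1 here, so f'(x) < 0
    x_pos = 1 / ((2 * k + 1) * math.pi)     # cos(1/x) = -1 here, so f'(x) > 0
    print(k, fprime(x_neg) < 0, fprime(x_pos) > 0)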
Chapter 13
13.5 The interval [0, 1] is mapped to [−1, 1] by the function 2x − 1. Thus we first
need to determine the nth Bernstein polynomial of the function f (2x − 1), which is
$$\sum_{k=0}^{n} f\!\left(\frac{2k}{n}-1\right)\cdot\binom{n}{k}\, x^k(1-x)^{n-k}.$$
We have to transform this function back onto [−1, 1], that is, we need to replace x
by (1 + x)/2, which gives us the desired formula.
13.7 $B_1 = 1$, $B_2 = B_3 = (1+x^2)/2$, $B_4 = B_5 = (3+6x^2-x^4)/8$.
13.11
$$\sum_{k=0}^{n}\frac{k}{n}\binom{n}{k} x^k(1-x)^{n-k} = x\cdot\sum_{k=1}^{n}\binom{n-1}{k-1} x^{k-1}(1-x)^{n-k} = x\cdot\bigl(x+(1-x)\bigr)^{n-1} = x.$$
13.12
$$\sum_{k=0}^{n}\frac{k^2}{n^2}\binom{n}{k}x^k(1-x)^{n-k} = \sum_{k=1}^{n}\frac{k}{n}\binom{n-1}{k-1}x^k(1-x)^{n-k} =$$
$$= \frac{n-1}{n}\sum_{k=1}^{n}\frac{k-1}{n-1}\binom{n-1}{k-1}x^k(1-x)^{n-k} + \frac{1}{n}\sum_{k=1}^{n}\binom{n-1}{k-1}x^k(1-x)^{n-k} =$$
$$= \frac{n-1}{n}\sum_{k=2}^{n}\binom{n-2}{k-2}x^k(1-x)^{n-k} + \frac{x}{n} = \frac{n-1}{n}\,x^2 + \frac{x}{n} = x^2 + \frac{x-x^2}{n}.$$
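Both identities are easy to verify numerically for concrete n and x (a sanity check only):

from math import comb

n, x = 12, 0.3
s1 = sum((k / n) * comb(n, k) * x ** k * (1 - x) ** (n - k) for k in range(n + 1))
s2 = sum((k / n) ** 2 * comb(n, k) * x ** k * (1 - x) ** (n - k) for k in range(n + 1))
print(s1, x)                          # 13.11: both ~ 0.3
print(s2, x ** 2 + x * (1 - x) / n)   # 13.12: both ~ 0.1075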
13.32 Let $f(x) = x^2\sin(1/x)$ if x ≠ 0 and f(0) = 0. We know that f is differentiable everywhere, and $f'(x) = 2x\sin(1/x) - \cos(1/x)$ if x ≠ 0 and f′(0) = 0 (see Example 13.43). The function $f' + h_2$ is continuous everywhere, so by Theorem 15.5, it has a primitive function. If $g' = f' + h_2$, then $(g-f)' = h_2$, so g − f is a primitive function of $h_2$.
If we start with the function $f_1(x) = x^2\cos(1/x)$, $f_1(0) = 0$, then a similar argument gives a primitive function of $h_1$.
13.33 The function $h_1^2 + h_2^2$ vanishes at zero and equals 1 everywhere else. Thus $h_1^2 + h_2^2$ does not have the Darboux property, so it does not have a primitive function. On the other hand, $h_2^2 - h_1^2 = h_2(x/2)$, so $h_2^2 - h_1^2$ has a primitive function. Now the statement follows.
Chapter 14
14.6 I. The solution is based on the following statement: Let g = g1 + g2 , where
g1 : [c, b] → R is continuous and g2 : [c, b] → R is monotone decreasing. If g(b) =
max g, then the image of g is an interval.
Let c ≤ d < b and g(d) < y < g(b). We will show that if s = sup{x ∈ [d, b] :
g(t) ≤ y for all t ∈ [d, x]}, then g(s) = y. To see this, suppose that g(s) < y. Then
s < b, and $\lim_{x\to s+0} g_1(x) = g_1(s)$. Now $\lim_{x\to s+0} g_2(x) \le g_2(s)$ implies that g(x) < y
in a right-hand neighborhood of the point s, which is impossible. If g(s) > y, then
s > d. Thus $\lim_{x\to s-0} g_1(x) = g_1(s)$ and $\lim_{x\to s-0} g_2(x) \ge g_2(s)$ imply that g(x) > y
in a left-hand neighborhood of the point s, which is also impossible. This proves the
statement.
II. Suppose first that there is only one base point. We can assume that f ≥ 0.
Let M = sup{ f (t) : t ∈ [a, b]}. Let g(x) denote the upper sum corresponding to the
partition a < x < b, and let g(a) = g(b) = M(b−a). We have to show that if a < c < b
and g(c) < y < M(b − a), then g takes on the value y. One of the two values M1 =
sup{ f (t) : t ∈ [a, c]}, M2 = sup{ f (t) : t ∈ [c, b]} must be equal to M. By symmetry,
we may assume that M1 = M. If c ≤ x ≤ b, then the first term appearing in the upper
sum of g(x), which is sup{ f (t) : t ∈ [a, x]} · (x − a) = M(x − a), is continuous. The
second term, which is sup{ f (t) : t ∈ [x, b]} · (b − x), is monotone decreasing.
Thus over the interval [c, b], the function g is the sum of a continuous and a
monotone decreasing function. Moreover, g(b) = M(b − a) = max g. By the above,
it follows that g takes on the value of y, and so the set of upper sums corresponding
to partitions with one base point forms an interval.
III. Now let F : a = x0 < x1 < x2 < · · · < xn = b be an arbitrary partition, and let SF < y < M(b − a). We have to show that there is an upper sum that is equal to y. We can assume that n is the smallest number such that there exists a partition into n parts whose upper sum is less than y. Then for the partition F′ : a = x0 < x2 < · · · < xn = b, obtained from F by omitting x1, we have SF′ ≥ y. If SF′ = y, then we are done. If SF′ > y, then we can
apply step II for the interval [a, x2 ] to find a point a < x < x2 such that the partition
F ′′ : a = x0 < x < x2 < · · · < xn = b satisfies SF ′′ = y.
14.10 The statement is not true: if f is the Dirichlet function and F : a < b, then the
possible values of σF are b − a and 0.
14.12 The statement is true. By Exercise 14.9, there exists a point x0 ∈ [a, b]
where f is continuous. Since f (x0 ) > 0, there exist points a ≤ c < d ≤ b such
that f (x) > f (x0 )/2 for all x ∈ [c, d]. It is clear that if the partition F0 contains the
points c and d, then $s_{F_0} \ge (d-c)\cdot f(x_0)/2$. By Lemma 14.4, it then follows that $S_F \ge (d-c)\cdot f(x_0)/2$ for every partition F, so $\int_a^b f\,dx \ge (d-c)\cdot f(x_0)/2 > 0$.
14.33 To prove the nontrivial direction, suppose that f is nonnegative, continuous,
and not identically zero. If f (x0 ) > 0, then by a similar argument as in the solution
of Exercise 14.12, we get that $\int_a^b f\,dx > 0$.
Chapter 15
15.24 We will use the notation of the proof of Stirling's formula (Theorem 15.15). Since the sequence (a_n) is strictly monotone increasing and tends to 1, we have a_n < 1 for all n. This proves the inequality $n! > (n/e)^n\cdot\sqrt{2\pi n}$.
By inequality (15.14),
$$\log a_{N+1} - \log a_n = \sum_{k=n}^{N}(\log a_{k+1} - \log a_k) < \sum_{k=n}^{N}\frac{1}{4k^2} < \frac{1}{4}\sum_{k=n}^{N}\frac{1}{(k-1)k} = \frac{1}{4(n-1)} - \frac{1}{4N}$$
for all n < N. Letting N go to infinity, we get that $-\log a_n \le \frac{1}{4(n-1)}$, that is, $a_n \ge e^{-1/(4(n-1))}$, which is exactly the second inequality we wanted to prove.
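Interpreting a_n, as in the proof of Theorem 15.15, as the ratio (n/e)^n·√(2πn)/n! (an assumption here, since that proof is not reproduced), the two inequalities read (n/e)^n√(2πn) < n! ≤ (n/e)^n√(2πn)·e^{1/(4(n−1))}, which is easy to check numerically:

import math

for n in range(2, 15):
    stirling = (n / math.e) ** n * math.sqrt(2 * math.pi * n)
    upper = stirling * math.exp(1 / (4 * (n - 1)))
    assert stirling < math.factorial(n) <= upper, n
print("Stirling-type bounds verified for n = 2, ..., 14")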
15.30 $\int\sqrt{x^2-1}\,dx = \frac{1}{2}x\sqrt{x^2-1} - \frac{1}{2}\,\mathrm{arch}\,x + c$.
15.39 Start with the equality
$$\left(x^k\cdot\sqrt{f(x)}\right)' = kx^{k-1}\sqrt{f(x)} + \frac{x^k f'(x)}{2\sqrt{f(x)}} = \frac{k\cdot x^{k-1}\cdot f(x) + \frac{1}{2}x^k f'(x)}{\sqrt{f(x)}}.$$
The numerator of the fraction on the right-hand side is a polynomial of degree exactly k + n − 1. It follows that $I_{k+n-1}$ can be expressed as a linear combination of an elementary function and the integrals $I_0, I_1, \ldots, I_{k+n-2}$. The statement then follows.
Chapter 16
16.4 Let ε > 0 be given. Since A is measurable, we can find rectangles R1 , . . . , Rn ⊂
R p such that A ⊂ ∪ni=1 Ri and ∑ni=1 m p (Ri ) < m p (A) + ε . Similarly, there are
rectangles S1 , . . . , Sk ⊂ Rq such that B ⊂ ∪kj=1 S j and ∑kj=1 mq (S j ) < mq (B) + ε .
Then Ti j = Ri × S j is a rectangle in R p+q and m p+q (Ti j ) = m p (Ri ) · mq (S j ) for every
i = 1, . . . , n, j = 1, . . . , k. Clearly,
$$A\times B \subset \bigcup_{i=1}^{n}\bigcup_{j=1}^{k} T_{ij},$$
and thus
$$\overline{m}_{p+q}(A\times B) \le \sum_{i=1}^{n}\sum_{j=1}^{k} m_{p+q}(T_{ij}) = \sum_{i=1}^{n}\sum_{j=1}^{k} m_p(R_i)\cdot m_q(S_j) = \left(\sum_{i=1}^{n} m_p(R_i)\right)\cdot\left(\sum_{j=1}^{k} m_q(S_j)\right) < (m_p(A)+\varepsilon)\cdot(m_q(B)+\varepsilon).$$
This is true for every ε > 0, and therefore we obtain $\overline{m}_{p+q}(A\times B) \le m_p(A)\cdot m_q(B)$. A similar argument gives $\underline{m}_{p+q}(A\times B) \ge m_p(A)\cdot m_q(B)$. Then we have $\underline{m}_{p+q}(A\times B) = \overline{m}_{p+q}(A\times B) = m_p(A)\cdot m_q(B)$; that is, A × B is measurable, and its measure equals $m_p(A)\cdot m_q(B)$.
16.8 The function f is nonnegative and continuous on the interval [0, r], so by
Corollary 16.10, the sector B f = {(x, y) : 0 ≤ x ≤ r, 0 ≤ y ≤ f (x)} is measurable
and has area
$$\int_0^{r\cos\delta}\frac{\sin\delta}{\cos\delta}\cdot x\,dx + \int_{r\cos\delta}^{r}\sqrt{r^2-x^2}\,dx = \frac{1}{2}r^2\cos\delta\cdot\sin\delta + r^2\int_{\delta}^{0}\sin t\cdot(-\sin t)\,dt =$$
$$= \frac{1}{2}r^2\cos\delta\cdot\sin\delta + \frac{1}{2}r^2\delta - \frac{r^2\sin 2\delta}{4} = \frac{1}{2}r^2\delta.$$
16.9 The part of the region $A_u$ that falls in the upper half-plane is the difference between the triangle $T_u$ defined by the points (0, 0), (u, 0), and (u, v), and the region $B_u = \{(x, y) : 1 \le x \le u,\ 0 \le y \le \sqrt{x^2-1}\}$. Thus
$$\frac{1}{2}\cdot t(A_u) = \frac{1}{2}\,u\sqrt{u^2-1} - \int_1^u\sqrt{x^2-1}\,dx.$$
The value of the integral, by Exercise 15.30, is $\frac{1}{2}u\sqrt{u^2-1} - \frac{1}{2}\,\mathrm{arch}\,u$, so $t(A_u)/2 = (\mathrm{arch}\,u)/2$ and $t(A_u) = \mathrm{arch}\,u$.
16.27 We show that if c > 2, then the graph of f c is rectifiable. Let F be an arbitrary
partition, and let N denote the least common denominator of the rational base points
of F. Since adding new base points does not decrease the length of the inscribed
polygonal path, we can assume that the points i/N (i = 0, . . . , N) are all base points.
Then all the rest of the base points of F are irrational. It is clear that in this case, the
length of the inscribed polygonal path corresponding to F is at most
$$\sum_{i=1}^{N}\Bigl(f^c\bigl((i-1)/N\bigr) + \frac{1}{N} + f^c(i/N)\Bigr) \le 1 + 2\cdot\sum_{i=0}^{N} f^c(i/N).$$
If 1 < k ≤ N, then among the numbers of the form i/N (i ≤ N), there are at most k with denominator k (after simplifying). Here the value of $f^c$ is $1/k^c$, so
$$\sum_{i=0}^{N} f^c(i/N) \le 2 + \sum_{k=2}^{N} k\cdot\frac{1}{k^c} = 2 + \sum_{k=2}^{N} k^{1-c}.$$
We can now use the fact that if b > 0, then $\sum_{k=1}^{N} 1/k^{b+1} < (b+1)/b$ for all N (see Exercise 7.5). We get that $\sum_{k=2}^{N} k^{1-c} < 1/(c-2)$, so the length of every inscribed polygonal path is at most 5 + 2/(c − 2).
Now we show that if c ≤ 2, then the graph of f c is not rectifiable. Let N > 1 be
an integer, and consider a partition
FN : 0 = x0 < y0 < x1 < y1 < x2 < · · · < xN−1 < yN−1 < xN = 1
such that xi = i/N (i = 0, . . . , N) and yi is irrational for all i = 0, . . . , N − 1. If p
is a prime divisor of N, then the numbers 1/p, . . . , (p − 1)/p appear among the
base points, and the length of the segment of the inscribed polygonal path over the interval [x_i, y_i] is at least $1/p^2$ for each such base point x_i. It is then clear that the length of the whole inscribed polygonal path is at least $\sum_{p\mid N}(p-1)/p^2 \ge (1/2)\cdot\sum_{p\mid N} 1/p$.
Now we use the fact that the sum of the reciprocals of the first n primes tends to
infinity as n → ∞ (see Corollary 18.16). Thus if N is equal to the product of the first n primes, then the partition F_N above gives an inscribed polygonal path of arbitrarily large length.
16.29 Let the coordinates of g be g1 and g2 . By Heine’s theorem (Theorem 10.61),
there exists a δ > 0 such that if u, v ∈ [a, b], |u − v| < δ , then |gi (u) − gi (v)| <
ε /2 (i = 1, 2), and so |g(u) − g(v)| < ε .
Let the arc length of the curve be L, and let F : a = t0 < t1 < · · · < tn = b be a
partition with mesh smaller than δ. If $r_i = \sup\{|g(t)-g(t_{i-1})| : t \in [t_{i-1}, t_i]\}$, then by the choice of δ, we have $r_i \le \varepsilon$ for all i. Then the disks $D_i = \{x \in \mathbb{R}^2 : |x - g(t_{i-1})| \le r_i\}$
(i = 1, . . . , n) cover the set g([a, b]).
Choose points ui ∈ [ti−1 ,ti ] such that |g(ui ) − g(ti−1 )| ≥ ri /2 (i = 1, . . . , n). Since
the partition with base points ti and ui has an inscribed polygonal path of length at
most L, we have ∑ni=1 ri ≤ 2 · ∑ni=1 |g(ui ) − g(ti−1 )| ≤ 2L. Thus the area of the union
of the discs Di is at most
$$\sum_{i=1}^{n}\pi r_i^2 \le \pi\cdot\sum_{i=1}^{n}\varepsilon\cdot r_i \le 2L\pi\cdot\varepsilon.$$
16.40 Cover the disk D by a sphere G of the same radius. For every planar strip
S, consider the (nonplanar) strip S′ in space whose projection onto the plane is S.
If the strips Si cover the disk, then the corresponding strips Si′ cover the sphere in
space. Each strip Si′ cuts out a strip of the sphere with surface area at most 2πr · di from G, where di is the width of the strips Si and Si′. Since these strips of the sphere cover G, we have ∑ 2πr · di ≥ 4r²π, so ∑ di ≥ 2r.
Chapter 17
17.1 Let F : a = x0 < x1 < · · · < xn = b. By the Lipschitz condition, we know that
ωi = ω ( f ; [xi−1 , xi ]) ≤ K · (xi − xi−1 ) ≤ K · δ (F) for all i. Thus
$$\Omega_F(f) = \sum_{i=1}^{n}\omega_i\cdot(x_i - x_{i-1}) \le K\cdot\delta(F)\cdot\sum_{i=1}^{n}(x_i - x_{i-1}) = K\cdot(b-a)\cdot\delta(F).$$
17.12 Since every integrable function is already bounded, there exists a K such
that | f ′ (x)| ≤ K for all x ∈ [a, b]. By the mean value theorem, it follows that f is
Lipschitz, so by statement (ii) of Theorem 17.3, f is of bounded variation.
Let F : a = x0 < · · · < xn = b be an arbitrary partition of the interval [a, b]. By the
mean value theorem, there exist points $c_i \in (x_{i-1}, x_i)$ such that $|f(x_i) - f(x_{i-1})| = |f'(c_i)|(x_i - x_{i-1})$ for all i = 1, . . . , n, and so $V_F(f)$ is equal to the Riemann sum $\sigma_F\bigl(|f'|; (c_i)\bigr)$.
Let $\int_a^b |f'|\,dx = I$. For each ε > 0, there exists a partition F such that every Riemann sum of the function |f′| corresponding to the partition F differs from I by at most ε. Thus $V_F(f)$ also differs from I by at most ε, so $V(f; [a,b]) \ge V_F(f) \ge I - \varepsilon$. Since this holds for every ε > 0, it follows that $V(f; [a,b]) \ge I$.
On the other hand, an arbitrary partition F has a refinement F ′ such that every
Riemann sum of the function | f ′ | corresponding to F ′ differs from I by less than ε .
Then VF ′ ( f ) also differs from I by less than ε , so VF ′ ( f ) < I + ε . Since VF ( f ) ≤
VF ′ ( f ), we have VF ( f ) < I + ε . This holds for every partition F and every ε > 0, so
V ( f ; [a, b]) ≤ I.
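A numerical illustration of the identity V(f; [a, b]) = ∫|f′| dx for a concrete smooth function (f = sin on [0, 2π], where both sides equal 4):

import math

N = 20000
pts = [2 * math.pi * i / N for i in range(N + 1)]
variation = sum(abs(math.sin(pts[i + 1]) - math.sin(pts[i])) for i in range(N))
integral = sum(abs(math.cos((pts[i] + pts[i + 1]) / 2)) * (2 * math.pi / N) for i in range(N))
print(variation, integral)     # both ~ 4.0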
17.13 Let 0 ≤ x < y ≤ 1. If α ≥ 1, then by the mean value theorem, for a suitable z ∈ (x, y), we have
$$y^{\alpha} - x^{\alpha} = \alpha\cdot z^{\alpha-1}\cdot(y-x) \le \alpha\cdot(y-x) = \alpha\cdot(y-x)^{\beta}.$$
If α < 1, then $y^{\alpha} - x^{\alpha} \le (y-x)^{\alpha} = (y-x)^{\beta}$.
17.15 If α ≥ β + 1, then it is easy to check that f′ is bounded in (0, 1]. Then f is Lipschitz, that is, Hölder 1 on the interval [0, 1]. We can then suppose that α < β + 1 and γ = α/(β + 1) < 1. Let 0 ≤ x < y ≤ 1 be fixed. Then
$$|f(x) - f(y)| = \bigl|x^{\alpha}\sin x^{-\beta} - y^{\alpha}\sin y^{-\beta}\bigr| \le \bigl|x^{\alpha}\sin x^{-\beta} - y^{\alpha}\sin x^{-\beta}\bigr| + y^{\alpha}\bigl|\sin x^{-\beta} - \sin y^{-\beta}\bigr| \le |x^{\alpha} - y^{\alpha}| + y^{\alpha}\cdot\min\bigl(2,\ x^{-\beta} - y^{-\beta}\bigr),$$
so it suffices to show that $y^{\alpha} - x^{\alpha} \le C\cdot(y-x)^{\gamma}$ and
$$y^{\alpha}\cdot\min\bigl(2,\ x^{-\beta} - y^{-\beta}\bigr) \le C\cdot(y-x)^{\gamma} \qquad (12)$$
with a suitable constant C. By the previous exercise, there is a constant C depending only on α such that $y^{\alpha} - x^{\alpha} \le C\cdot(y-x)^{\min(1,\alpha)} \le C\cdot(y-x)^{\gamma}$, since γ < min(1, α).
We distinguish two cases in the proof of the inequality (12). If $y - x \ge y^{\beta+1}/2$, then
$$y^{\alpha}\cdot 2 = 2\cdot\bigl(y^{\beta+1}\bigr)^{\alpha/(\beta+1)} \le 2\cdot 2^{\alpha/(\beta+1)}\cdot(y-x)^{\alpha/(\beta+1)} < 4\cdot(y-x)^{\gamma}.$$
Now suppose that $y - x < y^{\beta+1}/2$. Then on the one hand, x > y/2, and on the other hand, $y > \bigl(2(y-x)\bigr)^{1/(\beta+1)}$. By the mean value theorem, for a suitable u ∈ (x, y), we have
$$y^{\alpha}\cdot(x^{-\beta} - y^{-\beta}) = y^{\alpha}\cdot(-\beta)\cdot u^{-\beta-1}\cdot(x-y) = y^{\alpha}\cdot\beta\cdot u^{-\beta-1}\cdot(y-x) <$$
$$< y^{\alpha}\cdot\beta\cdot(y/2)^{-\beta-1}\cdot(y-x) \le C\cdot y^{\alpha-\beta-1}\cdot(y-x) < C\cdot\bigl(2(y-x)\bigr)^{(\alpha-\beta-1)/(\beta+1)}\cdot(y-x) < C\cdot(y-x)^{\gamma},$$
where $C = \beta\cdot 2^{\beta+1}$.
Chapter 18
18.4 Let c be a shared point of discontinuity. We can assume that c < b, and f is
discontinuous from the right at c (since the proof is similar when c > a and f is
discontinuous from the left at c). We distinguish two cases. First, we suppose that
g is discontinuous from the right at c. Then there exists an ε > 0 such that every
right-hand neighborhood of c contains points x and y such that | f (x) − f (c)| ≥ ε
and |g(y) − g(c)| ≥ ε . It follows that for every δ > 0, we can find a partition a =
x0 < x1 < · · · < xn = b with mesh smaller than δ such that c is one of the base points,
say c = xk−1 , and |g(xk ) − g(c)| ≥ ε .
Let $c_i = x_{i-1}$ for all i = 1, . . . , n, let $d_i = x_{i-1}$ for all i ≠ k, and let $d_k \in (x_{k-1}, x_k]$ be a point such that $|f(d_k) - f(c_k)| = |f(d_k) - f(c)| \ge \varepsilon$. Then the sums $S_1 = \sum_{i=1}^{n} f(c_i)\cdot\bigl(g(x_i) - g(x_{i-1})\bigr)$ and $S_2 = \sum_{i=1}^{n} f(d_i)\cdot\bigl(g(x_i) - g(x_{i-1})\bigr)$ differ only in the kth term, and
$$|S_1 - S_2| = \bigl|f(c_k) - f(d_k)\bigr|\cdot\bigl|g(x_k) - g(x_{k-1})\bigr| \ge \varepsilon^2.$$
This means that we cannot find a δ for ε² such that the condition in Definition 18.3 is satisfied, that is, the Stieltjes integral $\int_a^b f\,dg$ does not exist.
Now suppose that g is continuous from the right at c. Then c ∈ (a, b), and g
is discontinuous from the left at c. It follows that for every δ > 0, we can find a
partition a = x0 < x1 < · · · < xn = b with mesh smaller than δ such that c is not
a base point, say $x_{k-1} < c < x_k$, and $|g(x_k) - g(x_{k-1})| \ge \varepsilon$. Let $c_i = d_i = x_{i-1}$ for all i ≠ k. Also, let $c_k = c$ and let $d_k \in (c, x_k]$ be a point such that $|f(d_k) - f(c_k)| = |f(d_k) - f(c)| \ge \varepsilon$. Then (just as in the previous case) the sums $S_1$ and $S_2$ differ by at least ε² from each other, and so the Stieltjes integral $\int_a^b f\,dg$ does not exist.
18.5 Let $x_i = 2/\bigl((2i+1)\pi\bigr)$ for all i = 0, 1, . . .. Then $f(x_i) = (-1)^i\cdot\sqrt{2/\bigl((2i+1)\pi\bigr)}$ (i ∈ N).
(i ∈ N). Let δ > 0 be given. Fix an integer N > 1/δ , and let xN = y0 < y1 < · · · <
yn = 1 be a partition of [xN , 1] with mesh smaller than δ . Then
FM : 0 < xM < xM−1 < · · · < xN < y1 < · · · < yn = 1
is a partition of [0, 1] with mesh smaller than δ for all M > N. We show that if M is sufficiently large, then there exists an approximating sum that is greater than 1, and there also exists one that is less than −1.
In each of the intervals [0, x_M] and [y_{j−1}, y_j] (j = 1, . . . , n), let the inner point be the left endpoint of the interval. Let $c_i = x_i$ (N ≤ i ≤ M − 1). Then $f(c_i) = f(x_i) = (-1)^i\cdot\sqrt{2/\bigl((2i+1)\pi\bigr)}$ for all N ≤ i ≤ M − 1, so
$$\sum_{i=N}^{M-1} f(c_i)\bigl(f(x_i) - f(x_{i+1})\bigr) = \sum_{i=N}^{M-1}\sqrt{\frac{2}{(2i+1)\pi}}\cdot\left(\sqrt{\frac{2}{(2i+1)\pi}} + \sqrt{\frac{2}{(2i+3)\pi}}\right) >$$
$$> \sum_{i=N}^{M-1}\frac{2}{(2i+2)\pi} = \frac{1}{\pi}\cdot\sum_{i=N+1}^{M}\frac{1}{i}.$$
We get the approximating sum corresponding to the partition FM by taking the above
sum and adding the terms corresponding to [y j−1 , y j ] ( j = 1, . . . , n). Note that the
sum of these new terms does not depend on M. Since $\lim_{M\to\infty}\sum_{i=N+1}^{M}(1/i) = \infty$, choosing M sufficiently large gives us an approximating sum that is arbitrarily large. Similarly, we can show that with the choice $c_i = x_{i+1}$ (N ≤ i ≤ M − 1), we can get arbitrarily small approximating sums for sufficiently large M.
Chapter 19
19.7 By Cauchy's criterion, $\lim_{x\to\infty}\int_x^{2x} f\,dt = 0$. Since $f(t) \ge f(2x)$ if $t \in [x, 2x]$, we have that
$$0 \le x\cdot f(2x) \le \int_x^{2x} f\,dt,$$
and so by the squeeze theorem, $x\cdot f(2x) \to 0$ as $x \to \infty$.
19.9
(i) We can suppose that f is monotone decreasing and nonnegative. Let ε > 0 be fixed, and choose $0 < \delta < 1$ such that $\bigl|\int_x^1 f\,dt - I\bigr| < \varepsilon$ holds for all $0 < x \le \delta$, where $I = \int_0^1 f\,dx$. Then $\int_0^x f\,dt < \varepsilon$ also holds for all $0 < x \le \delta$.
Fix an integer n > 1/δ. If $k/n \le \delta < (k+1)/n$, then the partition F : k/n < · · · < n/n = 1 gives us intervals of length 1/n, so by (14.19), the lower sum
$$s_F = \frac{1}{n}\cdot\sum_{i=k+1}^{n} f\!\left(\frac{i}{n}\right)$$
corresponding to F differs from the integral $\int_{k/n}^1 f\,dx$ by less than $\bigl(f(k/n) - f(1)\bigr)/n$, so it differs from the integral I by less than $\varepsilon + \bigl(f(k/n) - f(1)\bigr)/n$.
Now by $k/n \le \delta$, it follows that
$$\frac{1}{n}\cdot\sum_{i=1}^{k} f\!\left(\frac{i}{n}\right) \le \int_0^{k/n} f\,dx < \varepsilon,$$
and thus $f(1/n)/n < \varepsilon$. Therefore,
$$\left|\frac{1}{n}\cdot\sum_{i=1}^{n} f\!\left(\frac{i}{n}\right) - I\right| \le \varepsilon + |s_F - I| \le \varepsilon + \varepsilon + \frac{1}{n}\cdot\bigl(f(k/n) - f(1)\bigr) \le 2\varepsilon + \frac{1}{n}\cdot f(1/n) < 3\varepsilon.$$
This holds for all n > 1/δ , which concludes the solution of the first part of the
exercise.
(ii) If the function is not monotone, the statement is not true. The function $f(1/n) = n^2$ (n = 1, 2, . . . ), f(x) = 0 (x ≠ 1/n) is a simple counterexample.
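A numerical illustration of part (i) with f(x) = 1/√x, a monotone decreasing function whose improper integral over [0, 1] equals 2:

import math

f = lambda x: 1 / math.sqrt(x)
for n in (10, 100, 1000, 10000, 100000):
    print(n, sum(f(i / n) for i in range(1, n + 1)) / n)   # tends to 2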
19.11 We can assume that f is differentiable on [a, b) and that f ′ is integrable on
[a, x] for all a < x < b.
We show that the length of every inscribed polygonal path is ≤ I. Let F : a = x0 < x1 < . . . < xn = b be a partition, and let ε > 0 be given. By the continuity of f and the convergence of the improper integral, there exists 0 < δ < ε such that $|f(x) - f(b)| < \varepsilon$ and $\bigl|\int_a^x\sqrt{1+(f'(t))^2}\,dt - I\bigr| < \varepsilon$ for all b − δ < x < b. Since adding new base points does not decrease the length of the inscribed polygonal path, we can assume
that xn−1 > b − δ. By Remark 16.21, the graph of f over the interval [a, xn−1] is rectifiable, and its arc length is $\int_a^{x_{n-1}}\sqrt{1+(f'(x))^2}\,dx < I + \varepsilon$. Thus the inscribed polygonal path corresponding to the partition F1 : a = x0 < x1 < · · · < xn−1 has length < I + ε, and so the inscribed polygonal path corresponding to the partition F has length less than
$$I + \varepsilon + \sqrt{(b - x_{n-1})^2 + \bigl(f(b) - f(x_{n-1})\bigr)^2} \le I + \varepsilon + |b - x_{n-1}| + |f(b) - f(x_{n-1})| \le I + 3\varepsilon.$$
This is true for every ε, which shows that the graph of f is rectifiable, and its arc length is at most I. On the other hand, for every a < x < b, there exists a partition of [a, x] such that the corresponding inscribed polygonal path gets arbitrarily close to the value of the integral $\int_a^x\sqrt{1+(f'(t))^2}\,dt$. Thus it is clear that the arc length of the graph of f is not less than I.
19.20
(a) Since sin x is concave on [0, π/2], we have that sin x ≥ 2x/π for all x ∈ [0, π/2]. Thus |log sin x| = −log sin x ≤ −log(2x/π) = |log x| + log(π/2). By Example 19.5, $\int_0^1 |\log x|\,dx$ is convergent (and its value is 1), so applying the majorization principle, we get that $\int_0^{\pi/2}\log\sin x\,dx$ is convergent. The substitution x = π − t gives that $\int_{\pi/2}^{\pi}\log\sin x\,dx$ is also convergent.
(b) Let $\int_0^{\pi/2}\log\sin x\,dx = \int_{\pi/2}^{\pi}\log\sin x\,dx = I$. Applying the substitution x = (π/2) − t gives us that $\int_0^{\pi/2}\log\cos x\,dx = I$. Now apply the substitution x = 2t:
$$2I = \int_0^{\pi}\log\sin x\,dx = \int_0^{\pi/2}\log\sin(2t)\cdot 2\,dt = 2\cdot\int_0^{\pi/2}(\log 2 + \log\sin t + \log\cos t)\,dt = \log 2\cdot\pi + 4I,$$
so $I = -\log 2\cdot\pi/2$ and $\int_0^{\pi}\log\sin x\,dx = -\log 2\cdot\pi$.
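The value −π log 2 ≈ −2.1776 can be confirmed by a midpoint sum, which avoids the logarithmic singularities at the endpoints:

import math

N = 100000
h = math.pi / N
approx = h * sum(math.log(math.sin((i + 0.5) * h)) for i in range(N))
print(approx, -math.pi * math.log(2))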
19.21 The function log sin x is monotone on the intervals [0, π /2] and [π /2, π ]. Thus
by the statement of Exercise 19.9,
$$\lim_{n\to\infty}\frac{\pi}{n}\cdot\sum_{i=1}^{n-1}\log\sin\frac{i\pi}{n} = \int_0^{\pi}\log\sin x\,dx = -\log 2\cdot\pi,$$
so
$$\lim_{n\to\infty}\frac{1}{n-1}\cdot\sum_{i=1}^{n-1}\log\sin\frac{i\pi}{n} = -\log 2.$$
If we raise e to the power of the expressions on each side, we get that as n → ∞, the geometric mean of the numbers sin(π/n), sin(2π/n), . . . , sin((n − 1)π/n) tends to $e^{-\log 2} = 1/2$.
The arithmetic mean clearly tends to $\frac{1}{\pi}\cdot\int_0^{\pi}\sin x\,dx = 2/\pi$. The inequality 1/2 < 2/π is obviously true.
19.28 For every n ∈ N+, let $f_n : [n-1, n] \to \mathbb{R}$ be a nonnegative continuous function such that $f_n(n-1) = f_n(n) = 0$, max $f_n \ge 1$, and $\int_{n-1}^{n} f_n\,dx \le 1/2^n$. (We may take the function for which $f_n(x) = 0$ if $|x - a_n| \ge \varepsilon_n$, $f_n(a_n) = 1$, and $f_n$ is linear in the intervals $[a_n - \varepsilon_n, a_n]$ and $[a_n, a_n + \varepsilon_n]$, where $a_n = (2n-1)/2$ and $\varepsilon_n = 2^{-n}$.)
Let $f(x) = f_n(x)$ if $x \in [n-1, n]$ and n ∈ N+. Clearly, f is continuous. Since f is also nonnegative, the function $x \mapsto \int_0^x f\,dt$ is monotone increasing, and so the limit $\lim_{x\to\infty}\int_0^x f\,dt$ exists. On the other hand, $\int_0^n f\,dt \le 2^{-1} + \cdots + 2^{-n} < 1$ for all n, so the limit is finite, and the improper integral $\int_0^{\infty} f\,dt$ is convergent.
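A minimal sketch of the construction, following the parenthetical choice of f_n above (triangular bumps of height 1 and half-width ε_n = 2^{-n} centered at a_n = (2n − 1)/2):

def f(x):
    n = int(x) + 1                          # x lies in [n-1, n)
    a_n, eps_n = (2 * n - 1) / 2, 2.0 ** (-n)
    return max(0.0, 1 - abs(x - a_n) / eps_n)

# Each bump has integral eps_n = 2^{-n}, so the integrals over [0, n] stay below 1,
# while f(a_n) = 1 for every n, so f does not tend to 0.
print(f(0.5), f(1.5), f(9.5))                   # each bump peak equals 1
print(sum(2.0 ** (-n) for n in range(1, 50)))   # total area < 1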
19.29 Let ε > 0 be given. By the uniform continuity of f, there exists a δ > 0 such that |x − y| < δ implies |f(x) − f(y)| < ε. By Cauchy's criterion for the convergence of the integral, we can pick a number K > 0 such that if K < x < y, then $\bigl|\int_x^y f\,dt\bigr| < \varepsilon\cdot\delta$. We show that |f(x)| < 2ε for all x > K. Suppose that x > K and f(x) ≥ 2ε. Then by the choice of δ, we have f(t) ≥ ε for all t ∈ (x, x + δ), and so $\int_x^{x+\delta} f\,dt \ge \varepsilon\cdot\delta$, which contradicts the choice of K. Thus f(x) < 2ε for all x > K, and we can similarly show that f(x) > −2ε for all x > K.
19.33
(a) Let $a_0 = a$. If n > 0 and we have already chosen the number $a_{n-1} > a$, then let $a_n > \max(n, a_{n-1})$ be such that $\int_{a_n}^{\infty} f\,dx < 1/(n\cdot 2^n)$. Thus we have chosen numbers $a_n$ for all n = 0, 1, . . . . Let $g(x) = n - 1$ for all $x \in (a_{n-1}, a_n)$ and n = 1, 2, . . . . Then $\lim_{x\to\infty} g(x) = \infty$. Now
$$\int_a^{a_n} g\cdot f\,dx = \sum_{i=0}^{n-1}\int_{a_i}^{a_{i+1}} g\cdot f\,dx = \sum_{i=1}^{n-1} i\cdot\int_{a_i}^{a_{i+1}} f\,dx < \sum_{i=1}^{n-1} i\cdot\frac{1}{i\cdot 2^i} < 1,$$
and so the function $x \mapsto \int_a^x g\cdot f\,dt$ is bounded. Since it is also monotone, its limit is finite, and so the improper integral is convergent.
(b) Let $a_0 = a$. If n > 0 and we have already chosen the number $a_{n-1} > a$, then let $a_n > \max(n, a_{n-1})$ be such that $\int_{a_{n-1}}^{a_n} f\,dx > n$. Thus we have chosen numbers $a_n$ for all n = 0, 1, . . . . Let $g(x) = 1/n$ for all $x \in (a_{n-1}, a_n)$ and n = 1, 2, . . . . Then $\lim_{x\to\infty} g(x) = 0$. Now
$$\int_a^{a_n} g\cdot f\,dx = \sum_{i=1}^{n}\int_{a_{i-1}}^{a_i} g\cdot f\,dx = \sum_{i=1}^{n}\frac{1}{i}\cdot\int_{a_{i-1}}^{a_i} f\,dx > \sum_{i=1}^{n}\frac{1}{i}\cdot i = n,$$
so $\int_a^{\infty} g\cdot f\,dx$ is divergent.
19.35 We will apply the majorization principle. Let x ≥ 3 be fixed. If we have
0 ≤ f (x) < 1/x2 , then
$$\log\Bigl(f(x)^{1-1/\log x}\Bigr) = \Bigl(1 - \frac{1}{\log x}\Bigr)\cdot\log f(x) \le \Bigl(1 - \frac{1}{\log x}\Bigr)\cdot(-2\log x) = 2 - 2\log x,$$
so
$$f(x)^{1-1/\log x} \le e^{2-2\log x} = e^2/x^2.$$
If, however, $f(x) \ge 1/x^2$, then
$$f(x)^{1-1/\log x} = f(x)\cdot f(x)^{-1/\log x} \le f(x)\cdot x^{2/\log x} = e^2\cdot f(x).$$
We get that $f(x)^{1-1/\log x} \le \max\bigl(e^2/x^2,\ e^2\cdot f(x)\bigr) \le e^2\cdot\bigl((1/x^2) + f(x)\bigr)$ for all x ≥ 3. Since the integral $\int_3^{\infty}\bigl((1/x^2) + f(x)\bigr)\,dx$ is convergent, $\int_3^{\infty} f(x)^{1-1/\log x}\,dx$ must also be convergent.
19.37 Since f is decreasing and convex, f′ is increasing and nonpositive, so |f′| is bounded. It is then clear that the integrals $\int_a^{\infty} f(x)\cdot\sqrt{1+f'(x)^2}\,dx$ and $\int_a^{\infty} f(x)\,dx$ are either both convergent or both divergent. By Exercise 19.31, if $\int_a^{\infty} f\,dx$ is convergent, then $\int_a^{\infty} f^2\,dx$ is also convergent. Thus only three configurations are possible: all three integrals are convergent; $\int_a^{\infty} f(x)\cdot\sqrt{1+f'(x)^2}\,dx$ and $\int_a^{\infty} f(x)\,dx$ are divergent while $\int_a^{\infty} f^2\,dx$ is convergent; or all three integrals are divergent. Examples for the three cases are given by the functions $1/x^2$, $1/x$, and $1/\sqrt{x}$ over the interval [1, ∞).
19.38 Applying integration by parts gives
$$\Gamma(c+1) = \int_0^{\infty} x^c\cdot e^{-x}\,dx = \Bigl[x^c\cdot(-e^{-x})\Bigr]_0^{\infty} + \int_0^{\infty} c\cdot x^{c-1}\cdot e^{-x}\,dx = 0 + c\cdot\Gamma(c).$$
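A quick numerical confirmation of the functional equation (using Python's built-in gamma function):

import math

for c in (0.3, 1.0, 2.5, 4.2):
    print(c, math.gamma(c + 1), c * math.gamma(c))   # the two values coincide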
19.41 Using the substitution $x^3 = t$ gives us that
$$\int_0^{\infty} e^{-x^3}\,dx = \int_0^{\infty} e^{-t}\cdot\frac{1}{3}\,t^{-2/3}\,dt = \frac{1}{3}\cdot\Gamma(1/3).$$
We similarly get that $\int_0^{\infty} e^{-x^s}\,dx = \Gamma(1/s)/s$ for all s > 0.
19.43 Use the substitution x = t/n in (19.8). We get that
$$\int_0^n\Bigl(1 - \frac{t}{n}\Bigr)^n\, t^{c-1}\,dt = \frac{n^c\cdot n!}{c(c+1)\cdots(c+n)} \qquad (13)$$
for all n = 1, 2, . . . . Since $(1 - t/n)^n \le e^{-t}$ for all 0 < t ≤ n, by (13) we get that
$$\Gamma(c) > \frac{n^c\cdot n!}{c(c+1)\cdots(c+n)}.$$
On the other hand, for a given n, the function $e^t\cdot(1 - t/n)^n$ is monotone decreasing on the interval [0, n], since its derivative there is
$$e^t\cdot\Bigl(1 - \frac{t}{n}\Bigr)^n - e^t\cdot\Bigl(1 - \frac{t}{n}\Bigr)^{n-1} \le 0.$$
Let ε > 0 be fixed. Choose a number K > 0 such that $\int_K^{\infty} e^{-t}\cdot t^{c-1}\,dt < \varepsilon$ holds, and $n_0$ such that
$$e^K\cdot\Bigl(1 - \frac{K}{n}\Bigr)^n > \frac{1}{1+\varepsilon}$$
holds for all $n > n_0$. If $n \ge \max(n_0, K)$ and 0 < t < K, then
$$e^t\cdot\Bigl(1 - \frac{t}{n}\Bigr)^n \ge e^K\cdot\Bigl(1 - \frac{K}{n}\Bigr)^n \ge \frac{1}{1+\varepsilon},$$
so
$$e^{-t} \le (1+\varepsilon)\cdot\Bigl(1 - \frac{t}{n}\Bigr)^n,$$
and thus
$$\Gamma(c) < \int_K^{\infty} e^{-t}\cdot t^{c-1}\,dt + (1+\varepsilon)\int_0^K\Bigl(1 - \frac{t}{n}\Bigr)^n\, t^{c-1}\,dt < \varepsilon + (1+\varepsilon)\cdot\frac{n^c\cdot n!}{c(c+1)\cdots(c+n)}. \qquad (14)$$
Since ε > 0 was arbitrary and (14) holds for every sufficiently large n, (19.9) also
holds.
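Assuming that (19.9) is Gauss's limit formula Γ(c) = lim_n n^c·n!/(c(c+1)···(c+n)), which is what the two bounds above combine to give, the convergence can be observed numerically; the product is accumulated factor by factor to avoid forming n! explicitly:

import math

def gauss_approx(c, n):
    # n^c * n! / (c(c+1)...(c+n)), computed as n^c/c * prod_{k=1}^{n} k/(c+k)
    val = n ** c / c
    for k in range(1, n + 1):
        val *= k / (c + k)
    return val

c = 0.4
for n in (10, 100, 1000, 10000):
    print(n, gauss_approx(c, n), math.gamma(c))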
Notation
(∗)
vi
(H)
(S)
dx
vi
vi
5
A∧B
A∨B
x → f (x)
25
R
N+
Z
28
30
30
12
12
N
Q
30
30
A
A⇒B
12
13
|a|
√
k
a
30
33
13
13
∃
x∈H
14
15
22
[a, b)
(a, b]
[a, b]
39
39
39
(a, b)
(−∞, a]
39
40
{x : . . .}
0/
22
23
[a, ∞)
(−∞, a)
40
40
B⊂A
A⊃B
23
23
(a, ∞)
(−∞, ∞)
40
40
BA
A∪B
A∩B
23
23
23
max A
min A
sup A
41
41
43
A\B
23
X
f: A→B
23
25
inf A
A+B
43
44
ax
47
A ⇐⇒ B
∀
limn→∞ an
an → b
54
54
log x
cos x
182
185
an → ∞
n!
(bn ) ≺ (an )
57
74
75
sin x
tg x
ctg x
185
186
186
an ∼ bn
an ր a
75
78
Tn
Un
192
192
an ց a
e
78
79
arccos x
arc sin x
193
193
∑
∑∞
n=1
ζ (x)
D( f )
R( f )
90
90
arc tg x
arcctg x
194
194
95
103
103
sh x
ch x
θx
195
195
196
f −1
g◦ f
graph f
103
104
105
cth x
arsh x
196
197
[x]
{x}
105
105
arch x
arth x
197
197
arcth x
f ′ (a)
f˙(a)
198
204
d f (x)
dx
204
204
206
A×B
R×R =
sgn x
106
R2
limx→a f (x) = b
f (x) → b
115
119
204
124
124
dy/dx
x!
f (a + 0)
f (a − 0)
f (x) = o(g(x))
126
126
141
f+′ (a)
f−′ (a)
207
207
f′
209
f (x) = O(g(x))
f ∼g
141
142
f (k)
224
C[a, b]
s( f ; [a, b])
144
162
π
gr p
√
b
a
G(b; a1 , . . . , an )
loga x
164
168
178
178
180
dk f
k
dxn
k
224
225
Pn
tn (x)
228
259
Bn (x; f )
Ln (x; f )
267
268
'
271
f dx
sF , SF
299
'b
301
a
'b
301
ω ( f ; [a, b])
307
a
'b
a
301
ΩF ( f )
σF ( f ; (ci ))
π (x)
307
308
Rd
Γ (x)
368
431
360
Index
A
Abel rearrangement, 327
Abel’s inequality, 327
Abel, N.H., 327
absolute continuity, 413
absolute value, 30
absolutely convergent improper integral, 429
addition formulas, 187
additive function, 158
algebraic differential equation, 227
algebraic function, 199
algebraic number, 98
angular measure, 185
Apollonius, 4
approximating sum, 308, 408
arc length, 162, 382
arc length (circle), 163
Archimedean spiral, 389
area, 370
area beneath the parabola, 3
area under the parabola, 270
arithmetic mean, 18
arithmetic–geometric mean, 82
associativity, 29
astroid, 386
asymptote, 138
asymptotically equal, 142
autonomous differential equation, 282
axiom of Archimedes, 31
axis-parallel rectangle, 369
B
base points, 299
bell curve, 249
Bernoulli, J., 94
Bernstein polynomial, 267
Bernstein, S. N., 267
big-O, 141
bijection, 103
Bolzano, B., 83
Bolzano–Darboux theorem, 146
Bolzano–Weierstrass theorem, 83
bounded function, 109
bounded sequence, 59
bounded set, 41, 369
broken line, 162, 381
Bunyakovsky, V. J., 183
C
Cantor’s axiom, 32
cardinality, 100
cardinality of the continuum, 100
cardioid, 388, 391
Cartesian product, 106
catenary, 284
Cauchy remainder, 261
Cauchy sequence, 86
Cauchy’s criterion (improper integrals), 429
Cauchy’s criterion (series), 95
Cauchy’s functional equation, 177
Cauchy’s mean value theorem, 238
Cauchy, A., 10, 85
Cauchy–Schwarz–Bunyakovsky inequality,
183
center of mass (region under a graph), 376
center of mass of a curve, 388
chain rule, 215
Chebyshev polynomial, 192
Chebyshev, P. L., 192
closed interval, 32
commutativity, 29
complement, 23
complementary or, 12
complex number, 201
composition, 104
concave, 110
conjunction (and), 11
continuity, 117
continuity from the left, 120
continuity from the right, 120
continuity in an interval, 144
continuity restricted to a set, 133
continuously differentiable function, 336
convergent sequence, 54
convergent series, 90
convex, 110
coordinate function, 380
coordinate system, 4, 115
cosine function, 185
cotangent function, 186
countable set, 97
critical limit, 68
curve, 380
cycloid, 385
D
d’Alembert, J. L. R., 178
Darboux property, 287
Darboux’s theorem, 287
Darboux, J.G., 146
De Morgan identities, 24
decimal expansion, 36
definite integral, 301
degenerate interval, 40
degree (polynomial), 168
derivative, 204
derivative (left-hand), 207
derivative (right-hand), 207
derivative of a curve, 384
Descartes, R., 4
difference of sets, 23
difference quotient, 204
differentiability (at a point), 204
differentiability (over an interval), 209
differentiable curve, 380
differential, 5
differential calculus, 4
differential equation, 227, 276
Dirichlet, L., 105
disjoint sets, 23
disjunction (or), 11
distance, 368
distributivity, 29
divergent sequence, 54
divergent series, 90
divisibility (polynomials), 351
domain, 25, 103
dual class, 412
E
elementary function, 167
elementary integrals, 273
elementary rational function, 350
elliptic integral, 361
empty set, 23
equivalence (if and only if), 11
equivalent sets, 100
Euclidean space, 367
Eudoxus, 1
Euler’s formula, 201
even function, 108
everywhere dense set, 39
evil gnome, 278
exclusive or, 12
existential quantifier, 14
exponential function, 172
F
factorial, 74
Fermat’s principle, 234
Fermat, P. de, 234
field, 29
field axioms, 28
first-order linear differential equation, 277
floor function, 105
fractional part, 105
function, 24
function of bounded variation, 400
functional equation, 177
fundamental theorem of algebra, 201
G
generalized mean, 178
geometric mean, 18
global approximation, 262
global extrema, 145
graph, 105
greatest lower bound, 42
growth and decay, 276
Guldin, P., 379
H
Hölder α function, 406
Hölder’s inequality, 328
Hölder, O. L., 182
half-line, 40
harmonic mean, 18
harmonic oscillation, 280
harmonic series, 92
Heine’s theorem, 150
Heine, H.E., 150
Hilbert, D., 330
Hippias, 1
Hippocrates, 1
hyperbolic function, 195
hyperelliptic integral, 362
hyperharmonic series, 94
I
implication (if, then), 11
improper integral, 418, 419
inclusive or, 12
indefinite integral, 271
index (sequence), 25
induction, 16
inequality of arithmetic and geometric means,
19
infimum, 43
infinite sequence, 25
inflection point, 246
injective map, 103
inner measure, 370
inscribed polygonal path, 162, 382
instantaneous velocity, 203
integer, 30
integrable function, 301
integral function, 334
integration by parts, 340
integration by substitution, 346
interior point, 369
intermediate value theorem, 237
intersection, 23
inverse function, 103
inverse hyperbolic function, 198
inverse trigonometric functions, 193
irrational number, 30
isolated point, 134
J
Jensen, J. L. W. V., 111
jump discontinuity, 153
L
L’Hôpital’s rule, 255
Lagrange interpolation polynomial, 268
Lagrange remainder, 261
leading coefficient, 168
least upper bound, 42
left-hand limit, 126
Legendre polynomial, 228
Leibniz rule, 226
Leibniz, G.W., 7
lemniscate, 391
length, 370
limit (function), 123
limit (sequence), 53
limit point, 134
linear function, 165
Lipschitz property, 151
Lipschitz, R.O.S., 151
little ant, 278
little-o, 141
local approximation, 262
local extrema, 231
local maximum, 230
local minimum, 230
locally decreasing, 229
locally increasing, 229
logarithm, 180
logarithmic integral, 360
lower integral, 301
lower sum, 300
M
Maclaurin’s formula, 261
majorization principle (improper integrals),
429
mapping, 24
maximum, 145
mean value theorem, 238
measure, 369
mesh (partition), 311
method of exhaustion, 1
minimum, 145
monotone function, 109
monotone sequence, 77
multiplicity, 169
N
natural number, 30
necessary and sufficient condition, 13
necessary condition, 13
negation (not), 11
neighborhood, 127
Newton, Isaac, 7
nonoverlapping sets, 369
normal domain, 372
O
odd function, 108
one-to-one correspondence, 38, 103
one-to-one map, 103
onto map, 103
open ball, 369
open interval, 32
order of magnitude, 141
ordered n-tuple, 25
ordered field, 31
ordered pairs, 25
orthogonal functions, 330
oscillation, 307
oscillatory sum, 307
outer measure, 370
P
parabola, 5
parameterization, 380
partial fraction decomposition, 352
partial sum, 90
partition, 299
period, 109
periodic function, 109
planar curve, 380
point of discontinuity, 153
polygonal line, 381
polygonal path, 162
polynomial, 167
power function, 172
predicates, 13
prime number theorem, 360
primitive function, 271
proof by contradiction, 15
proofs, 11
proper subset, 23
punctured neighborhood, 127
Q
quantifiers, 13
R
radian, 185
range, 103
rational function, 169
rational number, 30
real line, 36
rearrangement (sequence), 64
rectangle, 369
rectifiable curve, 382
rectifiable graph, 162
recursion, 52
refinement (partition), 300
removable discontinuity, 153
resonance, 282
Riemann function, 125
Riemann integral, 301
Riemann sum, 308
Riemann, G.F.B., 125
right-hand limit, 125
Rolle’s theorem, 237
Rolle, M., 237
root-mean-square, 113
S
sandwich theorem, 66
scalar product, 329, 330, 369
Schwarz, H. A., 183
second mean value theorem for integration,
337
second-order homogeneous differential
equation with constant coefficients, 280
second-order inhomogeneous linear
differential equation, 281
second-order linear homogeneous differential
equation, 280
section (set), 373
sectorlike region, 390
segment, 39, 381
separable differential equation, 279
sequence, 25
sequence of nested closed intervals, 32
simple curve, 383
sine function, 185
Snell’s law, 234
solid of revolution, 377
space curve, 380
squeeze theorem, 66, 134
step function, 324
Stieltjes integral, 408
Stieltjes, T.S., 407
strict local maximum, 231
strict local minimum, 231
strict weak concavity, 159
strict weak convexity, 159
strictly concave, 110
strictly convex, 110
strictly monotone function, 109
strictly monotone sequence, 77
subsequence, 63
subset, 23
sufficient condition, 13
sumset, 44
supremum, 43
surface area, 393
surjective map, 103
symmetric difference, 26
T
tangent function, 186
tangent line, 204
Taylor polynomial, 261
Taylor series, 263
Taylor’s formula, 261
Taylor, B., 261
theorems, 11
threshold, 54
total variation, 400
transcendental function, 199
transcendental number, 100
transference principle, 132
triangle inequality, 30, 368
trigonometric function, 184
U
uniform continuity, 150
union, 23
universal quantifier, 13
upper integral, 301
upper sum, 300
V
variable quantity, 5
vector, 115, 368
volume, 370
W
Wallis’ formula, 342
weak concavity, 158
weak convexity, 158
Weierstrass approximation theorem, 267
Weierstrass’s theorem, 145
Weierstrass, K., 10, 83