Lecture 10
Monotonicity
Conditions for a function to be constant.
THEOREM. Suppose that the function f(x) is defined on the interval I, has a finite derivative f'(x) inside it, and is continuous at the end-points (if they belong to I). In order that f(x) be constant, it is sufficient that f'(x) = 0 inside I.
Proof. Fix a point x₀ ∈ I and take an arbitrary point x ∈ I, x ≠ x₀. On the interval [x, x₀] or [x₀, x] all conditions of Lagrange's theorem are satisfied. Consequently we may write

f(x) − f(x₀) = f'(c)(x − x₀),

where c lies between x and x₀: x₀ < c < x or x < c < x₀. But, according to the assumption, f'(c) = 0, and therefore for all x from I we have f(x) = f(x₀) = const, which proves our proposition.
Observe that this condition is also evidently necessary for a function to be constant, since for any constant function f(x) ≡ C we have C' = 0.
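As an illustration outside the lecture text, the theorem can be checked with sympy on f(x) = arctan x + arctan(1/x) for x > 0: the derivative simplifies to zero, so the function is constant, and substituting x = 1 identifies the constant as π/2.

    import sympy as sp

    x = sp.symbols('x', positive=True)     # work on the interval (0, +oo)
    f = sp.atan(x) + sp.atan(1/x)          # a non-obvious candidate for a constant

    print(sp.simplify(sp.diff(f, x)))      # 0    -> f'(x) = 0 inside (0, +oo)
    print(f.subs(x, 1))                    # pi/2 -> hence f(x) = pi/2 on (0, +oo)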
The following simple corollary has an important application in the integral calculus.
COROLLARY. Suppose that two functions f(x) and g(x) are defined on the same interval I, have finite derivatives f'(x) and g'(x) inside it, and are continuous at the end-points (if they belong to I).
If, moreover,

f'(x) = g'(x) inside I,

then the functions f(x) and g(x) differ only by a constant C over the whole interval I:

f(x) = g(x) + C.
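For instance (an illustration, not from the lecture), f(x) = sin²x and g(x) = −cos²x have equal derivatives, and indeed differ by the constant 1; a short sympy check:

    import sympy as sp

    x = sp.symbols('x')
    f = sp.sin(x)**2
    g = -sp.cos(x)**2

    print(sp.simplify(sp.diff(f, x) - sp.diff(g, x)))   # 0 -> f'(x) = g'(x) for all x
    print(sp.simplify(f - g))                           # 1 -> f(x) = g(x) + 1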
Definition. A function f(x) is called increasing on an interval (a, b) if for any two points x₁, x₂ ∈ (a, b)

x₁ < x₂ ⇒ f(x₁) < f(x₂).

In other words, a function is increasing if the values of the function increase as the argument increases.
(Figure: two graphs of increasing functions; for x₁ < x₂ the values satisfy f(x₁) < f(x₂).)
A function f(x) is called decreasing on an interval (a, b) if for any two points x₁, x₂ ∈ (a, b)

x₁ < x₂ ⇒ f(x₁) > f(x₂).
In other words, a function is decreasing if its values decrease as the argument increases.
(Figure: two graphs of decreasing functions; for x₁ < x₂ the values satisfy f(x₁) > f(x₂).)
To see how the sign of the derivative governs monotonicity, take two arbitrary points x₁, x₂ ∈ (a, b) and assume that x₁ < x₂. According to Lagrange's theorem we can write

f(x₂) − f(x₁) = f'(ξ)(x₂ − x₁),   x₁ < ξ < x₂.

It is clear that if f'(x) > 0 for x ∈ (a, b), then f'(ξ) > 0, hence f'(ξ)·(x₂ − x₁) > 0, hence f(x₂) − f(x₁) > 0; that is, f(x₁) < f(x₂) and f(x) increases on (a, b). Similarly, if f'(x) < 0 on (a, b), then f(x) decreases on (a, b).
Thus, f(x) increases on [−2, 0] ∪ [2, +∞) and decreases on (−∞, −2] ∪ [0, 2].
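The function of this example is not shown in the excerpt above; as a hypothetical stand-in, f(x) = x⁴/4 − 2x² has derivative f'(x) = x³ − 4x = x(x − 2)(x + 2), which produces exactly these intervals. A sympy sketch:

    import sympy as sp

    x = sp.symbols('x', real=True)
    f = x**4/4 - 2*x**2                        # hypothetical example, not from the lecture
    df = sp.diff(f, x)                         # x**3 - 4*x = x*(x - 2)*(x + 2)

    print(sp.solveset(df > 0, x, sp.S.Reals))  # (-2, 0) U (2, oo)   -> f increases there
    print(sp.solveset(df < 0, x, sp.S.Reals))  # (-oo, -2) U (0, 2)  -> f decreases there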
(Figure: graph of a function with several local maximum points.)
Theorem 2. Suppose that f'(x₀) = 0 and that there exists a finite second derivative at this point, f''(x₀) ≠ 0. Then x₀ is an extremum point. More exactly, if f''(x₀) < 0 then x₀ is a maximum point, and if f''(x₀) > 0 then x₀ is a minimum point.
Proof. Let f''(x₀) < 0. Since f''(x₀) = lim_{x→x₀} [f'(x) − f'(x₀)]/(x − x₀) = lim_{x→x₀} f'(x)/(x − x₀) < 0, the ratio f'(x)/(x − x₀) is negative for all x sufficiently close to x₀. Therefore, for all considered values of x, if x < x₀ then necessarily f'(x) > 0, and if x > x₀ then necessarily f'(x) < 0. By Theorem 1, x₀ is the maximum point.
Similarly we establish that if f''(x₀) > 0 then x₀ is the minimum point.
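As an illustration (the function is not from the lecture), a sympy sketch applying this second-derivative test to f(x) = x³ − 3x, whose critical points are x = −1 and x = 1:

    import sympy as sp

    x = sp.symbols('x', real=True)
    f = x**3 - 3*x
    df, d2f = sp.diff(f, x), sp.diff(f, x, 2)

    for x0 in sp.solveset(df, x, sp.S.Reals):          # critical points: -1 and 1
        kind = 'maximum' if d2f.subs(x, x0) < 0 else 'minimum'
        print(x0, kind)                                # -1 -> maximum, 1 -> minimum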
Theorem 3. Let I be an interval, let x₀ be an interior point of I, and let n ≥ 2. Suppose that the derivatives f', f'', …, f^(n) exist and are continuous in some neighborhood N_δ(x₀) of x₀, and that

f'(x₀) = f''(x₀) = … = f^(n−1)(x₀) = 0, but f^(n)(x₀) ≠ 0.
A) If n is even and f^(n)(x₀) < 0, then f has a local maximum at x₀;
B) If n is even and f^(n)(x₀) > 0, then f has a local minimum at x₀;
C) If n is odd, then f has neither a maximum nor a minimum at x₀.
Proof. Applying Taylor's theorem at x₀, for x ∈ I we have

f(x) = f(x₀) + (f^(n)(c) / n!)·(x − x₀)^n,
where c is some point between x₀ and x. Since f^(n) is continuous, if f^(n)(x₀) ≠ 0 then there exists an interval U(x₀) containing x₀ such that f^(n)(x) has the same sign as f^(n)(x₀) for x ∈ U(x₀); moreover, c ∈ U(x₀), and consequently f^(n)(c) and f^(n)(x₀) have the same sign.
If n is even and f^(n)(x₀) < 0, then f^(n)(c) < 0 and (x − x₀)^n ≥ 0, so that f(x) − f(x₀) ≤ 0, or f(x) ≤ f(x₀). Therefore f has a local maximum at x₀. If f^(n)(x₀) > 0, then f^(n)(c) > 0 and (x − x₀)^n ≥ 0, so that f(x) ≥ f(x₀) and f has a local minimum at x₀. Finally, if n is odd, then (x − x₀)^n changes sign at x₀, so f(x) − f(x₀) changes sign as well, and f has neither a maximum nor a minimum at x₀.
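As an illustrative check (the examples are not from the lecture), the sketch below finds the first non-vanishing derivative at x₀ = 0: for f(x) = x⁴ it is the 4th (even, positive value, so a local minimum), for f(x) = x³ the 3rd (odd, so no extremum).

    import sympy as sp

    x = sp.symbols('x', real=True)

    for f in (x**4, x**3):                         # illustrative functions
        n = 1
        while sp.diff(f, x, n).subs(x, 0) == 0:    # first non-vanishing derivative at 0
            n += 1
        print(f, n, sp.diff(f, x, n).subs(x, 0))   # x**4: n = 4, value 24 > 0 -> minimum
                                                   # x**3: n = 3 (odd) -> no extremum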
For a convex function f on the interval I, points x₁, …, xₙ ∈ I and weights λ₁, …, λₙ ∈ [0, 1] with λ₁ + ⋯ + λₙ = 1, Jensen's inequality holds:

f( ∑_{k=1}^{n} λₖ xₖ ) ≤ ∑_{k=1}^{n} λₖ f(xₖ).
Proof. We use the definition and induction, assuming the inequality for any xᵢ ∈ I and λᵢ ∈ [0, 1] with ∑_{i=1}^{m} λᵢ = 1 and m < n. We then have, for λₙ < 1,

f( ∑_{k=1}^{n} λₖ xₖ ) = f( (1 − λₙ) ∑_{j=1}^{n−1} (λⱼ/(1 − λₙ)) xⱼ + λₙ xₙ )
≤ (1 − λₙ) f( ∑_{j=1}^{n−1} (λⱼ/(1 − λₙ)) xⱼ ) + λₙ f(xₙ)
≤ ∑_{k=1}^{n} λₖ f(xₖ).
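A quick numerical sanity check (not part of the lecture) of the inequality for the convex function f(t) = t² with randomly chosen points and weights:

    import numpy as np

    rng = np.random.default_rng(0)
    f = lambda t: t**2                       # a convex function

    xs = rng.uniform(-5, 5, size=6)          # arbitrary points x_k
    lam = rng.uniform(0, 1, size=6)
    lam /= lam.sum()                         # weights lambda_k with sum 1

    lhs = f(np.dot(lam, xs))                 # f(sum lambda_k x_k)
    rhs = np.dot(lam, f(xs))                 # sum lambda_k f(x_k)
    print(lhs <= rhs)                        # True, as Jensen's inequality predicts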
Suppose, on the contrary, that f''(x₀) ≠ 0. Then, by continuity, either f''(x) < 0 or f''(x) > 0 for all x ∈ (x₀ − δ, x₀ + δ).
In this case, the function is either strictly convex upward (when f''(x) < 0) or strictly convex downward (when f''(x) > 0). But then the point x₀ is not an inflection point. Hence, the assumption is wrong and the second derivative at an inflection point must be equal to zero:

f''(x₀) = 0.
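As an illustration (the function is not from the lecture), a sympy sketch that solves f''(x) = 0 for f(x) = x³ − 3x² and checks, crudely, that f'' actually changes sign at the candidate point:

    import sympy as sp

    x = sp.symbols('x', real=True)
    f = x**3 - 3*x**2
    d2f = sp.diff(f, x, 2)                            # 6*x - 6

    for x0 in sp.solveset(d2f, x, sp.S.Reals):        # candidate: x = 1
        changes = d2f.subs(x, x0 - 1) * d2f.subs(x, x0 + 1) < 0   # crude sign-change test
        print(x0, 'inflection point' if changes else 'no sign change')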
In other words, at least one of the one-sided limits at the point x=a must be equal
to infinity.
A vertical asymptote occurs in rational functions at the points where the denominator is zero and the numerator is not equal to zero (i.e. at points of discontinuity of the second kind). For example, the graph of the function y = 1/x has the vertical asymptote x = 0. In this case, both one-sided limits (from the left and from the right) tend to infinity:

lim_{x→0−0} 1/x = −∞,   lim_{x→0+0} 1/x = +∞.
A function which is continuous on the whole set of real numbers has no vertical
asymptotes.
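A minimal sympy check of the two one-sided limits for the example y = 1/x considered above:

    import sympy as sp

    x = sp.symbols('x')
    f = 1/x

    print(sp.limit(f, x, 0, dir='-'))   # -oo (limit from the left)
    print(sp.limit(f, x, 0, dir='+'))   # oo  (limit from the right)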
Oblique Asymptote
The straight line y = kx + b is called an oblique (slant) asymptote of the graph of the function y = f(x) as x → +∞ or as x → −∞ if

lim_{x→+∞} [f(x) − (kx + b)] = 0   or, respectively,   lim_{x→−∞} [f(x) − (kx + b)] = 0.
The oblique asymptotes of the graph of the function y = f(x) may be different as x → +∞ and as x → −∞.
Therefore, when finding oblique (or horizontal) asymptotes, it is a good practice
to compute them separately.
The coefficients k and b of an oblique asymptote y = kx + b are defined by the following theorem:
A straight line y = kx + b is an asymptote of a function y = f(x) as x → +∞ if and only if the following two limits are finite:

lim_{x→+∞} f(x)/x = k   and   lim_{x→+∞} [f(x) − kx] = b.
Proof. Necessity.
Let the straight line y = kx + b be an asymptote of the graph of y = f(x) as x → +∞. Then the following condition is true:

lim_{x→+∞} [f(x) − (kx + b)] = 0

or, equivalently,

f(x) = kx + b + α(x) as x → +∞, where lim_{x→+∞} α(x) = 0.

Consequently,

lim_{x→+∞} f(x)/x = lim_{x→+∞} [k + b/x + α(x)/x] = k   and   lim_{x→+∞} [f(x) − kx] = lim_{x→+∞} [b + α(x)] = b.
Sufficiency.
Suppose that there exist finite limits

lim_{x→+∞} f(x)/x = k   and   lim_{x→+∞} [f(x) − kx] = b.

The second limit means that lim_{x→+∞} [f(x) − (kx + b)] = 0, which meets the definition of an oblique asymptote. Thus, the straight line y = kx + b is an asymptote of the function y = f(x).
Note:
Similarly we can prove the theorem for the case of x → −∞.
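As an illustration (the function is an assumption, not from the lecture), the two limits of the theorem computed with sympy for f(x) = (x² + 2x + 3)/(x + 1):

    import sympy as sp

    x = sp.symbols('x')
    f = (x**2 + 2*x + 3)/(x + 1)        # illustrative function, not from the lecture

    k = sp.limit(f/x, x, sp.oo)         # 1
    b = sp.limit(f - k*x, x, sp.oo)     # 1
    print(k, b)                         # oblique asymptote y = x + 1 as x -> +oo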
Horizontal Asymptote
In particular, if k = 0, we obtain a horizontal asymptote, which is described by the equation y = b. The theorem on necessary and sufficient conditions for the existence of a horizontal asymptote is stated as follows:
A straight line y = b is an asymptote of a function y = f(x) as x → +∞ if and only if the following limit is finite:

lim_{x→+∞} f(x) = b.
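A short sympy check for an illustrative function (not from the lecture), f(x) = (2x + 1)/(x + 3), whose horizontal asymptote is y = 2:

    import sympy as sp

    x = sp.symbols('x')
    f = (2*x + 1)/(x + 3)               # illustrative function, not from the lecture

    print(sp.limit(f, x, sp.oo))        # 2 -> horizontal asymptote y = 2 as x -> +oo
    print(sp.limit(f, x, -sp.oo))       # 2 -> the same asymptote as x -> -oo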