
Numerical Analysis I

Introduction
Even though in practical applications one ultimately requires results in numerical form, analytical methods sometimes fail to provide solutions for mathematical problems, and the analytical solutions that are available may not be amenable to direct numerical interpretation.

o A set of tabulated data is given and inferences (or opinions) have to be drawn
from it.

x = year:                        1976  1981  1986  1991  1996  2001
y = population (in thousands):     23    34    39    42    48    53

One may want to estimate the population in the years 1979, 1988, 1999 and 2003.

o One may want to solve a non-linear equation, or systems of linear and non-linear equations, e.g.

x cos x = x²,   e^x − x ln x = 0,   x³ − 3x² + 3x − 2 = 0

a linear system such as
x + 2y − z = 2
2x + y = 1
−x + y + 4z = 3

or a non-linear system such as
x² + y² = 7
x² − y = 11
o It may be necessary to find approximate values of numbers like √2 and √11, or of definite integrals that cannot be evaluated in closed form.

There are many more such situations where analytical methods are unable to produce the desired result. The aim of numerical analysis is to provide convenient methods for obtaining useful solutions to mathematical problems in numerical form.

Chapter 1:
Basic concepts in error estimation
Introduction
When solving problems one usually starts with some initial data and then computes to arrive at the final solution. But this computed result may contain error, due to errors in the data, errors in the method, or both. In this chapter we describe some basic ideas concerning errors.

Sources of errors
Inherent error: Errors which are already present in the statement of the problem, the original data being approximate (using approximate tabulated data, or mathematical formulas involving constants like π, g, q, ...).
Rounding errors: These errors arise from the process of rounding off the numbers
during computation.
o When a calculator or computer is used to perform real-number calculations, it uses numbers with only a finite number of digits. As a result, the calculations are performed with only approximate representations of the actual numbers.
o Sometimes it is desirable to limit numbers with a large number of digits to a manageable number of digits. This process is called rounding off.

Truncation error: These errors are caused by using an approximate formula in computation, such as the one that arises when a function f(x) is evaluated from an infinite series after truncating it at a certain step.

sin x = x − x³/3! + x⁵/5! − x⁷/7! + x⁹/9! − x¹¹/11! + ... + (−1)^(n+1) x^(2n−1)/(2n−1)! + ...

sin x ≈ x − x³/3! + x⁵/5! − x⁷/7! + x⁹/9! − x¹¹/11!

sin 2 ≈ 2 − 2³/3! + 2⁵/5! − 2⁷/7! + 2⁹/9! − 2¹¹/11!
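As a quick illustration (a Python sketch, not part of the original notes), the truncation error of the six-term approximation of sin 2 can be computed directly:

```python
import math

# Maclaurin series for sin x truncated after a given number of nonzero
# terms; terms=6 keeps the series up to the x^11 term, as above.
def sin_truncated(x, terms=6):
    return sum((-1)**n * x**(2*n + 1) / math.factorial(2*n + 1)
               for n in range(terms))

approx = sin_truncated(2.0)
exact = math.sin(2.0)
print(approx, exact, abs(exact - approx))
```

The truncation error is roughly the size of the first omitted term, 2¹³/13! ≈ 1.3 × 10⁻⁶.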

Significant digits and rule of rounding off

Definition: A number that represents a given number to a certain degree of accuracy is called an approximate number.
0.6667, 0.33333, 3.1416, 1.414 are approximations for the numbers 2/3, 1/3, π, √2 respectively.
Definition: A significant digit (or figure) of an approximate number is any non-zero digit, any zero lying between significant digits, and any zero used as a place value to indicate a retained place.
2347, 3.045 and 0.3452 contain four significant digits;
0.00123, 0.0000205 contain three significant digits;
340000 contains only two significant digits, while 230000.00 contains eight (the trailing zeros after the decimal point indicate retained places).

Rule for rounding a number to n significant digits (throughout this course):

i) Discard all digits to the right of the nth digit.
ii) If the (n+1)th digit is
a) less than 5, leave the nth digit unchanged;
b) greater than 5, increase the nth digit by unity;
c) exactly 5, increase the nth digit by unity if it is odd, otherwise leave it unchanged.
Ex. Round the following numbers to three significant digits
12.375, 23456, 65.0234, 6.4356, 3.567
Ex. Round the following numbers to four decimal places
6.8365133, 0.876352, 12.003892, 0.9999832
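Rule (ii)(c) above is the "round half to even" convention, which Python's decimal module calls ROUND_HALF_EVEN. A Python sketch (not part of the original notes) of rounding to n significant digits:

```python
from decimal import Decimal, ROUND_HALF_EVEN

# Round x to n significant digits using the rule above; the "exactly 5
# goes to the even digit" case (ii)(c) is ROUND_HALF_EVEN.
def round_sig(x, n):
    d = Decimal(str(x))
    shift = n - d.adjusted() - 1      # adjusted() = exponent of leading digit
    q = Decimal(1).scaleb(-shift)     # quantum, e.g. 0.1 to keep one decimal
    return float(d.quantize(q, rounding=ROUND_HALF_EVEN))

print([round_sig(v, 3) for v in (12.375, 23456, 65.0234, 6.4356, 3.567)])
```

For the first example set this gives 12.4, 23500, 65.0, 6.44, 3.57.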

Absolute and Relative errors

Definition: If p* is an approximation to a number p, then the absolute error of p* is |p − p*|, and the relative error of p* is |p − p*| / |p|, provided p ≠ 0.

Example: Find the absolute and relative errors if
a) p = 0.3000 × 10^1 and p* = 0.3100 × 10^1
b) p = 0.3000 × 10^-3 and p* = 0.3100 × 10^-3
c) p = 0.3000 × 10^4 and p* = 0.3100 × 10^4
d) p = 0.005 and p* = 0.015

Solution:
a) Ea = 0.1            Er = 0.0333...
b) Ea = 0.1 × 10^-4    Er = 0.0333...
c) Ea = 100            Er = 0.0333...
d) Ea = 0.01           Er = 2.0

Exercise: Find the absolute and relative errors of the approximate numbers found in the rounding examples above.

Exercise: Suppose p* must approximate p with relative error at most 10^-3. Find the largest interval in which p* must lie for each value of p.

a. 150   b. 900   c. 90   d. 1500

Ex: Find the largest interval in which p* must lie to approximate p with relative error at most 10^-4 for each value of p.
a. π   b. e   c. √2   d. ∛7

CHAPTER 2
Solution of Non-Linear Equations

Introduction:

Consider the equation
f(x) = 0 .......................... (*)
If f(x) is not linear, then (*) is called a non-linear equation.

Example:
a. e^x + 2^-x + 2cos x − 6 = 0
b. ln(x − 1) + cos(x − 1) = 0
c. 2x cos 2x − (x − 2)² = 0
d. x³ − 2x² − x + 4 = 0
e. (x − 2)² = −ln x
f. x^(x−1) = 1

Definition: - The value α which satisfies (*) is called a root of the equation, or a zero of f(x).

To solve an equation like (*), where there is no formula for the exact solution, we use an approximation method, in particular an iteration method.

An iteration method is a method in which we start from an initial guess x0 (which may be poor) and compute step by step to obtain better and better approximations x1, x2, x3, ... of an unknown solution of the equation f(x) = 0.
Here, we shall discuss some iteration methods.

Theorem: - (Intermediate Value Theorem)

Let f(x) be continuous on [a, b] and let p be any number between f(a) and f(b). Then there exists a number ξ in (a, b) such that f(ξ) = p.

(figure)

Corollary: - If f(x) is continuous on [a, b] and f(a)f(b) < 0, then f(ξ) = 0 for at least one number ξ such that a < ξ < b.

Exercise: -
1. Locate all the roots of the equation x³ − 10x² − x + 2 = 0.
2. Show that each of the given equations has at least one solution in the given interval:
a. x³ + 3x − 3 = 0 on [−1, 1]        b. x + 1 = x² on [1, 2]
c. tan x = cos x on [−π/2, π/2]      d. sin 2x = cos x on [0, π/2]

The Bisection Method
This method is based on the repeated application of the intermediate value property.

Consider
f(x) = 0
where f(x) can be an algebraic or transcendental (non-linear) equation.
Let f(x) be continuous on [a, b] with f(a)f(b) < 0.

(figure)

The 1st approximation to the root is x1 = (a + b)/2.
The 2nd approximation to the root is x2, where
x2 = (a + x1)/2 if f(a)f(x1) < 0, or x2 = (x1 + b)/2 if f(x1)f(b) < 0.
Other approximations to the root (x3, x4, x5, ...) are obtained analogously.

Theorem: - Suppose that f is continuous on [a, b] and f(a)f(b) < 0. The Bisection Method generates a sequence {p_n}, n ≥ 1, approximating a zero p of f with

|p_n − p| ≤ (b − a)/2^n,  for n ≥ 1.

Remark: - In order that the absolute error |p_n − p| ≤ (b − a)/2^n < ε (a given tolerance), the number of iterations required to achieve this accuracy is given by

n ≥ ln((b − a)/ε) / ln 2
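The method and the iteration-count formula can be sketched in a few lines of Python (not part of the original notes); the function, interval and tolerance here are the ones of the worked example that follows (f(x) = x³ − 2x − 5 on [2, 3], tolerance 0.001):

```python
import math

# A minimal bisection sketch: halve the bracketing interval until the
# error bound (b - a)/2^n drops below the tolerance.
def bisect(f, a, b, tol):
    if f(a) * f(b) >= 0:
        raise ValueError("f must change sign on [a, b]")
    while (b - a) / 2 > tol:
        x = (a + b) / 2
        if f(a) * f(x) < 0:
            b = x                 # root lies in [a, x]
        else:
            a = x                 # root lies in [x, b]
    return (a + b) / 2

f = lambda x: x**3 - 2*x - 5
n = math.ceil(math.log((3 - 2) / 0.001) / math.log(2))   # predicted count
print(n, bisect(f, 2, 3, 0.001))
```

The predicted iteration count is 10, and the returned midpoint agrees with x[10] = 2.09472656 of the table below.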
Example: - Find the number of iterations needed to approximate the positive root of x³ − 2x − 5 = 0 with an absolute error < 0.001, and also find that approximate root.
Solution:

(Fig. MATLAB graph of f(x) = x³ − 2x − 5 on [−3, 3])

a = 2:  f(2) = −1 < 0
b = 3:  f(3) = 16 > 0

Tolerance ε = 0.001

n ≥ ln((b − a)/ε)/ln 2 = ln(1000)/ln 2 = 9.965784

The number of iterations needed to achieve the tolerance 0.001 is an integer greater than or equal to 9.965784, i.e. at least n = 10.

x[1]= 2.5 f(x[1])= 5.625000000


x[2]= 2.25000000 f(x[2])= 1.890625000
x[3]= 2.12500000 f(x[3])= 0.345703125
x[4]= 2.06250000 f(x[4])= -0.351318359
x[5]= 2.09375000 f(x[5])= -0.008941650
x[6]= 2.10937500 f(x[6])= 0.166835785
x[7]= 2.10156250 f(x[7])= 0.078562260
x[8]= 2.09765625 f(x[8])= 0.034714282
x[9]= 2.09570312 f(x[9])= 0.012862332
x[10]= 2.09472656 f(x[10])= 0.001954348

See that │x[10] - x[9] │ = │2.09472656- 2.09570312│= 0.0009765625 < 0.001


Example: - Use the Bisection Method to approximate the root of x³ − x² − 1 = 0 correct to three decimal places. (Proceed until the first three decimal places of two consecutive iterates agree.)

Solution: -

(Fig. MATLAB graph of x³ − x² − 1 on [−3, 3])

f(x) = x3 – x2 -1
f(1) = -1
f(2) = 3
x[1]= 1.5 f(x[1])= 0.125000000
x[2]= 1.25000000 f(x[2])= -0.609375000
x[3]= 1.37500000 f(x[3])= -0.291015625
x[4]= 1.43750000 f(x[4])= -0.095947266
x[5]= 1.46875000 f(x[5])= 0.011199951
x[6]= 1.45312500 f(x[6])= -0.043193817
x[7]= 1.46093750 f(x[7])= -0.016203403
x[8]= 1.46484375 f(x[8])= -0.002553523
x[9]= 1.46679688 f(x[9])= 0.004310243
x[10]= 1.46582031 f(x[10])= 0.000875120
x[11]= 1.46533203 f(x[11])= -0.000840011
x[12]= 1.46557617 f(x[12])= 0.000017352
x[13]= 1.46545410 f(x[13])= -0.000411380

x[14]= 1.46551514 f(x[14])= -0.000197027

Therefore, the root correct to three decimal places is 1.465

Definition: - A sequence of iterations {x_n}, n ≥ 0, is said to converge to the root p of an equation f(x) = 0 if lim(n→∞) |x_n − p| = 0, i.e., lim(n→∞) x_n = p.

Note: - In the case of convergence, the absolute error in x_n is |p − x_n| ≈ |x_{n+1} − x_n|. To find the solution of an equation accurate to within some tolerance ε, we continue to iterate until |x_{n+1} − x_n| < ε, and say the root accurate to within ε is x = x_n.

Ex: - Use the Bisection Method to find the solution accurate to within a) 10^-2, b) 10^-4 for the following problems.
1) x − 2^-x = 0 for 0 ≤ x ≤ 1
2) 3x − e^x = 0 for 1 ≤ x ≤ 2
3) e^x − x² + 3x − 2 = 0 for 0 ≤ x ≤ 1
4) 2x cos(2x) − (x + 1)² = 0 for −3 ≤ x ≤ −2 and −1 ≤ x ≤ 0
5) x cos x − 2x² + 3x − 1 = 0 for 0.2 ≤ x ≤ 0.3 and 1.2 ≤ x ≤ 1.3

Example: - Use the Bisection Method to approximate a) √3, b) ∛25 accurate to within
i) 10^-2   ii) 10^-5

Solution: - i)
√3 is one root of the equation f(x) = 0, where f(x) = x² − 3, and it is located between 1 and 2.

[x1]= 1.5
[x2]= 1.75
[x3]= 1.625
[x4]= 1.6875
[x5]= 1.71875
[x6]= 1.734375
Since |x[7] − x[6]| = |1.7265625 − 1.734375| = 0.007812500 < 0.01, after 6 iterations the root is: 1.734375

ii)
[x1]= 1.5
[x2]= 1.75
[x3]= 1.625
[x4]= 1.6875
[x5]= 1.71875
[x6]= 1.734375
[x7]= 1.7265625
[x8]= 1.73046875
[x9]= 1.73242188
[x10]= 1.73144531
[x11]= 1.73193359
[x12]= 1.73217773
[x13]= 1.73205566
[x14]= 1.73199463
[x15]= 1.73202515
[x16]= 1.73204041

Since |x[17] − x[16]| = |1.73204803 − 1.73204041| = 0.000007629 < 0.00001, after 16 iterations the root is: 1.73204041

The speed of convergence towards a root in any method is represented by its rate of convergence.

Definition: - Suppose {p_n}, n ≥ 0, is a sequence that converges to p, with p_n ≠ p for all n. If positive constants λ and α exist with

lim(n→∞) |p_{n+1} − p| / |p_n − p|^α = λ,

then α is called the order of convergence of the method.

Remarks:-
1. The Bisection Method always converges.
2. In the case of the Bisection Method, since the error decreases at each step by a factor of ½, convergence in this method is of order one (linear).

Example: - Consider x − 2^-x = 0 for 0 ≤ x ≤ 1, and relate it with Remark 2 above.

************************************************************************
itr(n)    a_n           b_n           x_n           |x_n − x_{n−1}|
************************************************************************
1 0.50000000 1.00000000 0.75000000 0.25000000
2 0.50000000 0.75000000 0.62500000 0.12500000
3 0.62500000 0.75000000 0.68750000 0.06250000
4 0.62500000 0.68750000 0.65625000 0.03125000
5 0.62500000 0.65625000 0.64062500 0.01562500
6 0.64062500 0.65625000 0.64843750 0.00781250
7 0.64062500 0.64843750 0.64453125 0.00390625
8 0.64062500 0.64453125 0.64257812 0.00195312
9 0.64062500 0.64257812 0.64160156 0.00097656
10 0.64062500 0.64160156 0.64111328 0.00048828
11 0.64111328 0.64160156 0.64135742 0.00024414
12 0.64111328 0.64135742 0.64123535 0.00012207

13 0.64111328 0.64123535 0.64117432 0.00006104
14 0.64117432 0.64123535 0.64120483 0.00003052
15 0.64117432 0.64120483 0.64118958 0.00001526
16 0.64117432 0.64118958 0.64118195 0.00000763

Method of Fixed Point Iteration

Definition: - A number p is a fixed point for a given function g if g(p) = p.

(figure)

Ex: - Check that the function g(x) = x² − 2, for −2 ≤ x ≤ 3, has fixed points at x = −1 and x = 2. See also that these are the points where the graphs of y = x and y = x² − 2 intersect. Show this by drawing the graphs.

The following theorem gives sufficient conditions for the existence and uniqueness of a fixed point.

Theorem: -

a) If g ∈ C[a, b] (the set of continuous functions on [a, b]) and g(x) ∈ [a, b] for all x ∈ [a, b], then g has a fixed point in [a, b].

b) If, in addition, g′(x) exists on (a, b) and a positive constant k < 1 exists with
|g′(x)| ≤ k, for all x ∈ (a, b),
then the fixed point in [a, b] is unique.

Example: - Let g(x) = 1/√(x + 1) on [0, 1]. g is continuous on [0, 1], and since
g′(x) = −1/(2(x + 1)^(3/2))
is negative for all x in [0, 1], g(x) is decreasing on [0, 1], so
1/√2 = g(1) ≤ g(x) ≤ g(0) = 1,
that is, g(x) ∈ [0, 1] for all x ∈ [0, 1]. Moreover,
|g′(x)| = 1/(2(x + 1)^(3/2)) ≤ 1/2 < 1 for x ∈ (0, 1).

So g satisfies all the hypotheses of the above theorem and has a unique fixed point in [0, 1].
To find this fixed point p: if x = g(x) = 1/√(x + 1), then x³ + x² − 1 = 0, and we are expected to solve this equation. That is,
x = 1/√(x + 1) is equivalent to x³ + x² − 1 = 0.
We will soon see an iterative method to solve x³ + x² − 1 = 0 using its equivalent fixed point formula x = g(x).
x2 1
Example: - Let g ( x)  on [1,1] .The extreme value theorem implies that the
3
1
absolute minimum of g occurs at x = 0 and g (0)   , similarly the absolute maximum
3
of g occurs at x  1 and the value of g ( 1)  0 , Moreover g is continues and
2x 2
g ' ( x)   x  ( 1,1)
3 3
So g has a unique fixed point in [-1, 1]. The unique fixed point x in the interval [-1, 1]
can be determined algebraically. If
x2 1
x  g ( x)  , then x 2  3 x  1  0 which by quadratic formula implies that
2
1
x (3  13 ).
2
1
Note that g also has a unique fixed point p  (3  3 ) for the interval [3, 4].However,
2
g(4) = 5 and g’(4) = 8/3>1, so g does not satisfy the hypothesis the above theorem on
[3,4]. Hence, the hypotheses of the theorem are sufficient to guarantee a unique fixed
point but are not necessary, i.e. g has a fixed point on [a, b], does not always imply that
g ( x )  [ a, b] x  [ a, b].

(figure: graphs of y = x and y = (x² − 1)/3)

Example: - Let g ( x)  3  x , since g ' ( x)  3  x ln 3  0 on [0,1] , the function g is


decreasing on [0, 1]. So
1
g (1)   g ( x)  1  g (0), for 0  x  1.
3
Thus for x  [0,1], we have g ( x)  [0,1] , and g has a fixed point in [0, 1]. Since
g ' (0.01)  -1.0866, g ' ( x ) is not less than one on (0, 1), and the theorem can not be
used to determine the uniqueness. However g is always decreasing, and it is clear from
the following figure that the fixed point must be unique

1
y= x

(1, 1/3)
x
1

To approximate the fixed point of a function g, we choose an initial approximation p0 and generate the sequence {p_n}, n ≥ 0, by letting
p_n = g(p_{n−1}) for each n ≥ 1.
If the sequence converges to p and g is continuous, then
p = lim p_n = lim g(p_{n−1}) = g(lim p_{n−1}) = g(p),
and a solution to x = g(x) is obtained. This technique is called fixed point iteration.
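A minimal Python sketch of this technique (not part of the original notes), applied to g(x) = 1/√(x + 1), the fixed point form of x³ + x² − 1 = 0 studied earlier:

```python
# Fixed point iteration: iterate p_n = g(p_{n-1}) until successive
# iterates agree to within tol.
def fixed_point(g, p0, tol, max_iter=100):
    for _ in range(max_iter):
        p1 = g(p0)
        if abs(p1 - p0) < tol:
            return p1
        p0 = p1
    raise RuntimeError("did not converge")

# g(x) = (x + 1)^(-1/2); its fixed point is the root of x^3 + x^2 - 1 = 0,
# near 0.7549.
g = lambda x: (x + 1) ** -0.5
print(fixed_point(g, 0.5, 1e-5))
```

Since |g′(x)| ≤ 1/2 on [0, 1], the theorem above guarantees this iteration converges for any starting point in [0, 1].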

Example: - The equation x³ + 4x² − 10 = 0 has a unique root in [1, 2]. There are many ways to change the equation x³ + 4x² − 10 = 0 (the root finding problem) to the fixed point form x = g(x) (a fixed point problem) using simple algebraic manipulation.

a) x = g1(x) = x − x³ − 4x² + 10

b) x = g2(x) = (10/x − 4x)^(1/2)

c) x = g3(x) = ½(10 − x³)^(1/2)

d) x = g4(x) = (10/(4 + x))^(1/2)

This implies we can have different fixed point problems for the same root finding problem. For p0 = 1.5, the results of the fixed point iteration p_n = g(p_{n−1}) for all four choices of g are given in the table below. (The actual root is 1.365229964.)

n (a) (b) (c) (d)


0 1.5 1.5 1.5 1.5
1 -0.875 0.8165 1.286953807 1.348399758
2 6.735 2.9969 1.402540803 1.367376328
3 -469.7 (-8.65)^(1/2) 1.345458388 1.364956975
4 1.03×10^8 dom = 0 < x < (5/2)^(1/2) = 1.5811 1.375170231 1.365264773
5 1.36009419 1.365225554
6 1.367846966 1.36523056
7 1.363886952 1.365229964
8 1.365916729
9 1.364878178
10 1.365410089
15 1.365223646
20 1.365230203
25 1.365229964
29 1.365229964
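The convergent columns (c) and (d) of the table are easy to reproduce in Python (a sketch, not part of the original notes); g1 diverges and g2 leaves the real line from p0 = 1.5, so they are omitted:

```python
# Iterate the two convergent fixed point forms of x^3 + 4x^2 - 10 = 0
# from p0 = 1.5 for seven steps each.
g3 = lambda x: 0.5 * (10 - x**3) ** 0.5
g4 = lambda x: (10 / (4 + x)) ** 0.5

results = []
for g in (g3, g4):
    p = 1.5
    for _ in range(7):
        p = g(p)
    results.append(p)
print(results)   # g4 is far closer to the root 1.365229964 than g3
```

After seven iterations g4 already agrees with the root to about seven decimal places, while g3 is still correct to only two.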

Now the question lies in choosing an effective fixed point problem, in the sense of convergence (i.e. fast to converge), for approximating the solution of the root finding problem. The following theorem gives us some clue concerning this.

Theorem: - (Fixed Point Theorem)

Let g ∈ C[a, b] be such that g(x) ∈ [a, b] for all x in [a, b]. Suppose, in addition, that g′ exists on (a, b) and that a constant 0 < k < 1 exists with
|g′(x)| ≤ k, for all x ∈ (a, b).
Then, for any number p0 in [a, b], the sequence defined by
p_n = g(p_{n−1}), n ≥ 1,
converges to the unique fixed point p in [a, b].

Corollary: - If g satisfies the hypotheses of the Fixed Point Theorem, then the bounds for the error involved in using p_n to approximate p are given by

|p_n − p| ≤ k^n max{p0 − a, b − p0}

and

|p_n − p| ≤ (k^n / (1 − k)) |p1 − p0|, for all n ≥ 1.

Both inequalities in the corollary relate the rate at which {p_n}, n ≥ 0, converges to the bound k on the first derivative. The rate of convergence depends on the factor k^n: the smaller the value of k, the faster the convergence, and the convergence may be very slow if k is close to 1.
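The second bound can be checked numerically (a Python sketch, not part of the original notes) for g(x) = 1/√(x + 1) on [0, 1], where |g′(x)| ≤ k = 1/2:

```python
# Verify |p_n - p| <= k^n/(1-k) |p1 - p0| for the first ten iterates of
# g(x) = 1/sqrt(x + 1) with p0 = 0.5 and k = 0.5.
g = lambda x: (x + 1) ** -0.5
p0, k = 0.5, 0.5
p1 = g(p0)

# a very accurate value of the fixed point, by iterating far longer
p = p0
for _ in range(100):
    p = g(p)

pn = p0
for n in range(1, 11):
    pn = g(pn)
    bound = k**n / (1 - k) * abs(p1 - p0)
    assert abs(pn - p) <= bound
print("bound holds for n = 1..10")
```

In practice the actual error shrinks faster than the bound, because |g′| near the fixed point is about 0.27, well below the worst-case k = 0.5.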

Exercise: - Verify the four fixed point problems in the above example in relation with the
above theorem.

Theorem: - Let gC [a, b] be such that g(x)[a, b], for all x[a, b]. Suppose in addition,
that g’ is continuous on (a, b) and that positive constant k < 1 exists with
g’(x) k, for all x(a, b).
If g ' ( p )  0 , then for any number p0 in [a, b], the sequence
p n  g ( p n 1 ) , for n  1
Converges only linearly to the unique fixed point p in [a, b]

Proof: - We know from the fixed point theorem that the sequence converges to p. Since
g’ exists on (a, b), we can apply the Mean Value Theorem to g to show that for any n,
pn+1 - p = g(pn) - g(p)=g’(en)(pn - p) for pn < en < p
pn p  enp
since, g’ is continuous on (a, b) we have g’(en)g’(p)
p n 1  p
lim  g ' ( p)
n pn  p
Hence, if g’ (p) ≠0, fixed point iteration converges linearly.

Example: - Use an algebraic manipulation to show that each of the following functions has a fixed point at p precisely when f(p) = 0, where f(x) = x⁴ + 2x² − x − 3.

a. g1(x) = (3 + x − 2x²)^(1/4)
b. g2(x) = ((x + 3 − x⁴)/2)^(1/2)
c. g3(x) = ((x + 3)/(x² + 2))^(1/2)
d. g4(x) = (3x⁴ + 2x² + 3)/(4x³ + 4x − 1)

Solution: - (a)
p is a root of f(x) = 0. If, by algebraic manipulation, we succeed in showing that x = g1(x) is equivalent to f(x) = 0, we can say p is a fixed point of g1(x), since in particular for x = p, f(p) = 0 is equivalent to p = g1(p).

x⁴ + 2x² − x − 3 = 0  ⇔  x⁴ = 3 + x − 2x²  ⇔  x = (3 + x − 2x²)^(1/4).

Take g1(x) = (3 + x − 2x²)^(1/4); then g1(p) = p.

Example: - Use the Fixed Point Iteration Method to determine a solution accurate to within 10^-2 and 10^-5 for
a. x³ + x² − 1 = 0
b. x³ + 4x² − 10 = 0

Solution: - a)
x³ + x² − 1 = 0 has a root in [0, 1].
x = 1/√(x + 1) is one among the several ways of changing the root finding problem x³ + x² − 1 = 0 into an equivalent fixed point problem, and we previously showed that g(x) = 1/√(x + 1) has a unique fixed point in [0, 1]. Therefore, by the Fixed Point Theorem, the sequence of iterations
x_n = g(x_{n−1}), n ≥ 1,
converges to the fixed point of g(x) for any initial approximation taken in [0, 1].
For x0 = 0.5:
x1 = g(x0), x2 = g(x1), ...
x[1]= 0.816496611
x[2]= 0.741963804
x[3]= 0.757670581
Since |x[4] − x[3]| = |0.754277706 − 0.757670581| = 0.003392875 < 0.01, after 3 iterations the root is: 0.757670581

b) x³ + 4x² − 10 = 0 has a root in [1, 2].

x³ + 4x² − 10 = 0 is equivalent to x = (10/(4 + x))^(1/2).

Now take g(x) = (10/(4 + x))^(1/2) and check whether g satisfies all the conditions of the Fixed Point Theorem.

g′(x) = −√10 / (2(4 + x)^(3/2)) < 0 for all x in [1, 2], so g is decreasing on [1, 2], and

1.291 = g(2) ≤ g(x) ≤ g(1) = 1.4142. Therefore for all x ∈ [1, 2], g(x) ∈ [1, 2].

|g′(x)| = √10 / (2(4 + x)^(3/2)) ≤ √10 / (2(4 + 1)^(3/2)) ≈ 0.1414 ≤ 0.15 for all x ∈ [1, 2].

Therefore g has a unique fixed point in [1, 2], and for x0 = 1 and accuracy 0.01,

x_{n+1} = g(x_n) = (10/(4 + x_n))^(1/2)

x[1]= 1.41421354
x[2]= 1.35904026
x[1]= 1.41421354
x[2]= 1.35904026

Since |x[3] − x[2]| = |1.36601818 − 1.35904026| = 0.006977916 < 0.01, after 2 iterations the root is: 1.359040260

To get the accuracy 0.00001 with x0 = 1

x[1]= 1.41421354
x[2]= 1.35904026
x[3]= 1.36601818
x[4]= 1.36512971
x[5]= 1.36524272
x[6]= 1.36522841

Since |x[7] − x[6]| = |1.3652302 − 1.36522841| = 0.000001788 < 0.00001, after 6 iterations the root is: 1.365228415

Exercise: -
1. Show that g(x) = π + 0.5 sin(x/2) has a unique fixed point on [0, 2π]. Use fixed point iteration to find an approximation to the fixed point that is accurate to within 10^-2. Use the error bound formula for the method of fixed point iteration to estimate the number of iterations required to achieve 10^-2 accuracy, and compare this theoretical estimate to the number actually needed.

2. Repeat problem (1), but now with g(x) = 2^-x, on [1/3, 1], and accuracy 10^-4.
3. For each of the following equations, determine an interval [a, b] on which fixed point iteration will converge. (i) Estimate the number of iterations necessary to obtain approximations accurate to within 10^-5, and (ii) perform the calculations.
a. x = (2 − e^x + x²)/3
b. x = 5/x² + 2
c. x = (e^x/3)^(1/2)
d. x = 5^-x
e. x = 6^-x
f. x = 0.5(sin x + cos x)

Newton-Raphson Method

Suppose that f ∈ C²[a, b] (the set of all functions that have continuous first and second derivatives on [a, b]). Let p0 ∈ [a, b] be an approximation to a root of f(x) = 0, and for "small" h let p1 = p0 + h be the exact root, so that f(p1) = 0. Expanding f(p0 + h) by Taylor's series about p0, we obtain

f(p0 + h) = f(p0) + h f′(p0) + (h²/2) f″(p0) + ... = 0

Since Newton's Method is derived by assuming that h is small, the terms involving h², h³, h⁴, ... are much smaller, so
0 ≈ f(p0) + h f′(p0).
Solving for h gives
h = −f(p0)/f′(p0).
A better approximation than p0 is therefore given by p1, where
p1 = p0 − f(p0)/f′(p0).
Repeating the above step, successive approximations p2, p3, p4, ..., p_{n+1} are given by

p_{n+1} = p_n − f(p_n)/f′(p_n),  n = 0, 1, 2, ...

which is the Newton-Raphson formula (NRF).

Theorem: - Let f C2 [a, b]. If p [a, b] is such that f (p) = 0 and f ' ( p )  0 , then there

exists a >0 such that Newton’s Method generates a sequence { p n }n 1 converging to p
for any initial approximation p 0 [p-,p+].

The theorem states that, under reasonable assumptions, Newton’s method converges
provided a sufficiently accurate initial approximation is chosen (i.e. an initial
approximation that is close to the root).

Newton’s Method is often used to refine an answer obtained by another technique such as
Bisection Method, since this method requires a good first approximation but generally
gives rapid second order convergence.

To find the rate of convergence of this method, expand f about x_n using Taylor's expansion and evaluate at the root p:

f(p) = f(x_n) + (p − x_n) f′(x_n) + ((p − x_n)²/2) f″(ξ_n) = 0, for some ξ_n between p and x_n.

Dividing by f′(x_n) and using the NRF, x_{n+1} = x_n − f(x_n)/f′(x_n), gives

x_{n+1} − p = (f″(ξ_n) / (2 f′(x_n))) (p − x_n)².

Since f′ and f″ are continuous and ξ_n → p,

lim(n→∞) |x_{n+1} − p| / |x_n − p|² = |f″(p)| / (2 |f′(p)|),

so the Newton-Raphson process has second-order (quadratic) convergence.

Geometrically, the approximations in the NRM are obtained using successive tangents
(figure).
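The iteration is only a few lines of Python (a sketch, not part of the original notes); the function, derivative and starting point here are those of the example that follows (f(x) = x³ − 2x − 5, x0 = 2, tolerance 10^-4):

```python
# Newton-Raphson iteration: x_{n+1} = x_n - f(x_n)/f'(x_n), stopping
# when successive iterates agree to within tol.
def newton(f, df, x0, tol, max_iter=50):
    for _ in range(max_iter):
        x1 = x0 - f(x0) / df(x0)
        if abs(x1 - x0) < tol:
            return x1
        x0 = x1
    raise RuntimeError("did not converge")

f = lambda x: x**3 - 2*x - 5
df = lambda x: 3*x**2 - 2
print(newton(f, df, 2.0, 1e-4))
```

Note the quadratic convergence: the corrections shrink from 0.1 to about 0.005 to about 0.00002 in successive steps.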

Example: - Use the NRM to find a root of the equation x³ − 2x − 5 = 0 accurate to within 10^-4.

Solution: - f(x) = x³ − 2x − 5, and the root lies in [2, 3].

For x0 = 2, tolerance = 0.0001:
x[1] = x0 − f(x0)/f′(x0) = 2.1
x[2] = x1 − f(x1)/f′(x1) = 2.094568

Since |x[3] − x[2]| = |2.0945516 − 2.094568| = 0.000016451 < 0.0001, after 2 iterations the root is: 2.094568014

Comparing the result with the Bisection Method


a = 2 , b = 3 Tolerance = .0001
[x1]= 2.5
[x2]= 2.25
[x3]= 2.125
[x4]= 2.0625
[x5]= 2.09375
[x6]= 2.109375
[x7]= 2.1015625
[x8]= 2.09765625
[x9]= 2.09570312
[x10]= 2.09472656
[x11]= 2.09423828
[x12]= 2.09448242
[x13]= 2.09460449

Since |x[14] − x[13]| = |2.09454346 − 2.09460449| = 0.000061035 < 0.0001, after 13 iterations the root is: 2.094604492; i.e. 13 iterations are necessary to achieve the accuracy 0.0001.

Example: - Use the NRM to find a root of the equation x sin x + cos x = 0 accurate to within 10^-4 for π/2 ≤ x ≤ π.

Solution: - f(x) = x sin x + cos x

Taking x0 = π ≈ 3.14:
x[1]= 2.8231213
x[2]= 2.7985971
x[3]= 2.7983861

Since |x[4] − x[3]| = |2.7983861 − 2.7983861| < 0.0001, the root after 3 iterations is: 2.7983861

Comparing the result with the Bisection Method

[x1]= 2.3561945
[x2]= 2.74889374
[x3]= 2.94524336
[x4]= 2.84706855
[x5]= 2.79798126
[x6]= 2.82252502
[x7]= 2.81025314
[x8]= 2.8041172
[x9]= 2.80104923
[x10]= 2.79951525
[x11]= 2.79874825
[x12]= 2.79836464
[x13]= 2.79855633
After 14 iterations the root is: 2.79846048; i.e. 14 iterations are needed to arrive at the given accuracy.

Exercises:-
1. Use Newton's Method to find solutions accurate to within 10^-5 for the following problems.
a. e^x + 2^-x + 2cos x − 6 = 0 for 1 ≤ x ≤ 2
b. ln(x − 1) + cos(x − 1) = 0 for 1.3 ≤ x ≤ 2
c. 2x cos 2x − (x − 2)² = 0 for 2 ≤ x ≤ 3 and 3 ≤ x ≤ 4
d. (x − 2)² − ln x = 0 for 1 ≤ x ≤ 2 and e ≤ x ≤ 4
e. e^x − 3x² = 0 for 0 ≤ x ≤ 1 and 3 ≤ x ≤ 5
f. sin x − e^-x = 0 for 0 ≤ x ≤ 1, 3 ≤ x ≤ 4 and 6 ≤ x ≤ 7
2. Use Newton's Method to approximate, to within 10^-4, the value of x that produces the point on the graph of y = x² that is closest to (1, 0). [Hint: Minimize [d(x)]², where d(x) represents the distance from (x, x²) to (1, 0).]
3. Repeat problem 2, but now with y = 1/x, tolerance 10^-4, and the point (2, 1).
4. The sum of two numbers is 20. If each number is added to its square root, the product of the two sums is 155.55. Determine the two numbers to within 10^-4.

Secant Method
The Secant Method is mainly aimed at reducing the effort needed to evaluate the derivative of f at each approximation in the NRM.

p_n = p_{n−1} − f(p_{n−1})/f′(p_{n−1}), for n ≥ 1   (NRM)

By definition,
f′(p_{n−1}) = lim(x→p_{n−1}) (f(x) − f(p_{n−1})) / (x − p_{n−1}).
Letting x = p_{n−2}, we have
f′(p_{n−1}) ≈ (f(p_{n−2}) − f(p_{n−1})) / (p_{n−2} − p_{n−1}) = (f(p_{n−1}) − f(p_{n−2})) / (p_{n−1} − p_{n−2}).
Using this approximation for f′(p_{n−1}) in the NRF gives

p_n = p_{n−1} − f(p_{n−1})(p_{n−1} − p_{n−2}) / (f(p_{n−1}) − f(p_{n−2})),

which is called the SECANT METHOD.
Geometrically, the method starts with two initial approximations p0 and p1; the approximation p2 is the x-intercept of the line joining (p0, f(p0)) and (p1, f(p1)), the approximation p3 is the x-intercept of the line joining (p1, f(p1)) and (p2, f(p2)), and so on.
(figure)
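The formula above translates directly into Python (a sketch, not part of the original notes); the function and starting values here are those of the example that follows (f(x) = x − cos x with p0 = 0, p1 = 1.57):

```python
import math

# Secant iteration: like Newton's method, but with f' replaced by the
# slope of the secant through the last two iterates.
def secant(f, p0, p1, tol, max_iter=50):
    for _ in range(max_iter):
        p2 = p1 - f(p1) * (p1 - p0) / (f(p1) - f(p0))
        if abs(p2 - p1) < tol:
            return p2
        p0, p1 = p1, p2
    raise RuntimeError("did not converge")

f = lambda x: x - math.cos(x)
print(secant(f, 0.0, 1.57, 1e-4))
```

Only one new function evaluation is needed per step, compared with two (f and f′) for Newton's method.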

Example: - Use the Secant Method to find a root of the equation x = cos x accurate to within 10^-4.
Solution: - f(x) = x − cos x = 0; the root lies between 0 and π/2.

Taking x0 = 0 and x1 = 1.57:

x2 = x1 − f(x1)(x1 − x0)/(f(x1) − f(x0)) = 0.61108428
x3 = x2 − f(x2)(x2 − x1)/(f(x2) − f(x1)) = 0.72328609
x4 = x3 − f(x3)(x3 − x2)/(f(x3) − f(x2)) = 0.73956633
x5 = x4 − f(x4)(x4 − x3)/(f(x4) − f(x3)) = 0.73908347
x6 = x5 − f(x5)(x5 − x4)/(f(x5) − f(x4)) = 0.73908514

|x6 − x5| < 0.0001, and hence the root to the given accuracy is: 0.73908347

Compare the result obtained by this method with the NRM, the FPIM and the BM

Exercise: - Do all the problems given in the exercise of NRM using Secant Method.
