Numerical Analysis I
Introduction
Even though in practical applications one finally requires results in numerical form,
analytical methods sometimes fail to provide solutions for mathematical problems, and
available analytical solutions may not be amenable to direct numerical interpretation.
o A set of tabulated data is given and inferences have to be drawn from it. For
example, one may want to estimate the population in the years 1979, 1988, 1999 and
2003.
o One may want to solve a nonlinear equation, algebraic equations, or systems of
linear and non-linear equations, e.g.
x cos x = x^2,    e^x − x ln x = 0,    x^3 − 3x^2 + 3x − 2 = 0
a linear system such as
x + 2y + z = 2
2x − y = 1
x + y + 4z = 3
or a non-linear system such as
x^2 + y^2 = 7
x + 2y = 11
o It may be necessary to find the approximate values of numbers like
∛2,  ∛11,  ∫_0^4 e^(x^2) dx,  ∫_0^2 cos(x^2) dx
There are many more such situations where analytical methods are unable to produce the
desired result. The aim of numerical analysis is to provide convenient methods for
obtaining useful solutions to mathematical problems in a numerical form.
Chapter 1: Basic concepts in error estimation
Introduction
When solving problems one usually starts with some initial data and then computes to
arrive at the final solution. This computed result may contain an error due to errors
in the data, errors in the method, or both. In this chapter we shall describe some
basic ideas concerning errors.
Sources of errors
Inherent errors: Errors which are already present in the statement of the problem, the
original data being approximate (using approximate tabulated data, or mathematical
formulas involving constants like π, g, q, ........ )
Rounding errors: These errors arise from the process of rounding off the numbers
during computation.
o When a calculator or computer is used to perform real-number calculations, it
uses numbers with only a finite number of digits. As a result, the calculations are
performed with only approximate representations of the actual numbers.
o Sometimes it is desirable to limit numbers with a large number of digits to a
manageable number of digits. This process is called rounding off.
To round a number to n digits, discard all digits to the right of the nth digit. If the
first discarded digit is
a) less than 5, leave the nth digit unchanged
b) greater than 5, increase the nth digit by unity
c) exactly 5, increase the nth digit by unity if it is odd, otherwise leave it
unchanged
Ex. Round the following numbers to three significant digits
12.375, 23456, 65.0234, 6.4356, 3.567
Ex. Round the following numbers to four decimal places
6.8365133, 0.876352, 12.003892, 0.9999832
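The rounding rules above can be sketched in code. The following Python helper (the function name `round_sig` and the use of the `decimal` module are my own choices, not part of the text) rounds to n significant digits with the round-half-to-even tie rule of case (c):

```python
from decimal import Decimal, ROUND_HALF_EVEN

def round_sig(x, n):
    """Round x to n significant digits; ties go to the even digit,
    matching rule (c) above (illustrative helper, not from the text)."""
    d = Decimal(str(x))
    if d == 0:
        return 0.0
    # position of the last digit to keep, relative to the leading digit
    shift = n - d.adjusted() - 1
    q = Decimal(1).scaleb(-shift)          # quantum, e.g. 0.1 for shift = 1
    return float(d.quantize(q, rounding=ROUND_HALF_EVEN))

print(round_sig(12.375, 3))   # 12.4  (discarded part 75 > 50, round up)
print(round_sig(65.0234, 3))  # 65.0
print(round_sig(0.125, 2))    # 0.12  (exact tie, 2 is even so it stays)
```

The tie case only triggers when the discarded part is exactly half a unit of the last kept digit; otherwise ordinary rounding applies.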
Exercise: Find the absolute and relative errors of the approximate numbers found in the
rounding of examples above.
Exercise: Suppose p* must approximate p with relative error at most 10^-3. Find the
largest interval in which p* must lie for each value of p.
Ex: Find the largest interval in which p* must lie to approximate p with relative error at
most 10^-4 for each value of p.
a. π    b. e    c. √2    d. ∛7
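Since the relative error of p* is │p − p*│/│p│, the largest admissible interval is simply p ± │p│·tol. A minimal sketch (the function name is my own choice):

```python
import math

def largest_interval(p, tol):
    """Endpoints of the largest interval in which p* may lie so that
    the relative error |p - p*| / |p| is at most tol."""
    r = abs(p) * tol
    return (p - r, p + r)

# part (a) of the exercise: p = pi, relative error at most 1e-4
lo, hi = largest_interval(math.pi, 1e-4)
print(lo, hi)
```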
CHAPTER 2
Solution of Nonlinear Equations
Introduction:
Consider the equation
f(x) = 0 .......... ................... (*)
If f(x) is not linear, then (*) is called a nonlinear equation.
Example :
a. e^x + 2^-x + 2 cos x − 6 = 0        b. ln(x − 1) + cos(x − 1) = 0
c. 2x cos 2x − (x − 2)^2 = 0           d. x^3 − 2x^2 = x − 4
e. (x − 2)^2 = ln x                    f. x − √(x + 1) = 1
Definition: - A value p which satisfies (*) is called a root of the equation or a zero
of f(x).
To solve an equation like (*), where there is no formula for the exact solution, we use an
approximation method, in particular an Iteration Method.
An Iteration Method is a method in which we start from an initial guess x0 (which may
be poor) and compute, step by step, better and better approximations
x1, x2, x3, .......... of an unknown solution of the equation f(x) = 0.
Here, we shall discuss some of these iteration methods.
(Figure: graph of f on [a, b] with f(a) and f(b) of opposite signs, so the graph crosses
the x-axis between a and b.)
Exercise: -
1. Locate all the roots of the equation x^3 − 10x^2 + x + 2 = 0.
2. Show that each of the given equations has at least one solution in the given interval.
a. x^3 + 3x − 3 = 0 on [−1, 1]        b. x + 1 = x^2 on [1, 2]
c. tan x = cos x on [−π/2, π/2]       d. sin 2x = cos x on [0, π/2]
The Bisection Method
This method is based on the repeated application of the intermediate value property.
Consider
f(x) = 0
where f(x) may be an algebraic or transcendental (nonlinear) equation.
Let f(x) be continuous on [a, b] with f(a)·f(b) < 0.
(Figure: the bisection method; the curve joins (a, f(a)) to (b, f(b)) and the successive
midpoints x1, x2, x3, ... bracket the root.)
The 1st approximation to the root: x1 = (a + b)/2.
The 2nd approximation to the root:
x2 = (a + x1)/2 if f(a)·f(x1) < 0, or x2 = (x1 + b)/2 if f(x1)·f(b) < 0.
Other approximations to the root (x3, x4, x5, ...) can be obtained analogously.
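The repeated halving described above can be sketched as a short program; a minimal Python version (function name and stopping rule are my own choices) is:

```python
def bisection(f, a, b, tol=1e-3, max_iter=100):
    """Repeatedly halve [a, b], keeping the half on which f changes sign.
    Assumes f is continuous on [a, b] with f(a)*f(b) < 0."""
    if f(a) * f(b) >= 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        x = (a + b) / 2.0              # midpoint = next approximation
        if f(x) == 0 or (b - a) / 2.0 < tol:
            return x                   # |x - root| <= (b - a)/2 < tol
        if f(a) * f(x) < 0:
            b = x                      # root lies in [a, x]
        else:
            a = x                      # root lies in [x, b]
    return (a + b) / 2.0

# worked example from these notes: x^3 - x^2 - 1 = 0 on [1, 2]
root = bisection(lambda x: x**3 - x**2 - 1, 1.0, 2.0, tol=1e-3)
print(root)  # 1.4658203125
```

Note that the returned midpoint is guaranteed to be within tol of the root, since the root stays inside the current bracket.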
Example: Use the Bisection Method to find the root of x^3 − x^2 − 1 = 0 in [1, 2]
accurate to within the tolerance ε = 0.001.
The number of iterations n needed to achieve the tolerance ε satisfies
n ≥ ln((b − a)/ε) / ln 2 = ln(1/0.001) / ln 2 = 9.965784
The number of iterations needed to achieve the tolerance 0.001 is an integer
greater than or equal to 9.965784, i.e. at least n = 10.
Solution: -
(Figure: graph of f(x) = x^3 − x^2 − 1; the curve crosses the x-axis between x = 1 and
x = 2.)
f(x) = x^3 − x^2 − 1
f(1) = −1
f(2) = 3
x[1]= 1.5 f(x[1])= 0.125000000
x[2]= 1.25000000 f(x[2])= -0.609375000
x[3]= 1.37500000 f(x[3])= -0.291015625
x[4]= 1.43750000 f(x[4])= -0.095947266
x[5]= 1.46875000 f(x[5])= 0.011199951
x[6]= 1.45312500 f(x[6])= -0.043193817
x[7]= 1.46093750 f(x[7])= -0.016203403
x[8]= 1.46484375 f(x[8])= -0.002553523
x[9]= 1.46679688 f(x[9])= 0.004310243
x[10]= 1.46582031 f(x[10])= 0.000875120
x[11]= 1.46533203 f(x[11])= -0.000840011
x[12]= 1.46557617 f(x[12])= 0.000017352
x[13]= 1.46545410 f(x[13])= -0.000411380
x[14]= 1.46551514 f(x[14])= -0.000197027
Note: - In the case of convergence, the absolute error in xn is │p − xn│ ≈ │xn+1 − xn│.
To find the solution of an equation accurate to within some tolerance ε we continue to
iterate until │xn+1 − xn│ < ε, and say the root accurate to within ε is x = xn.
Ex: - Use the Bisection Method to find the solution accurate to within a) 10^-2 b) 10^-4
for the following problems.
1) x − 2^-x = 0 for 0 ≤ x ≤ 1
2) 3x − e^x = 0 for 1 ≤ x ≤ 2
3) e^x − x^2 + 3x − 2 = 0 for 0 ≤ x ≤ 1
4) 2x cos(2x) − (x + 1)^2 = 0 for −3 ≤ x ≤ −2 and −1 ≤ x ≤ 0
5) x cos x − 2x^2 + 3x − 1 = 0 for 0.2 ≤ x ≤ 0.3 and 1.2 ≤ x ≤ 1.3
Solution: - i)
ii)
[x1]= 1.5
[x2]= 1.75
[x3]= 1.625
[x4]= 1.6875
[x5]= 1.71875
[x6]= 1.734375
[x7]= 1.7265625
[x8]= 1.73046875
[x9]= 1.73242188
[x10]= 1.73144531
[x11]= 1.73193359
[x12]= 1.73217773
[x13]= 1.73205566
[x14]= 1.73199463
[x15]= 1.73202515
[x16]= 1.73204041
The speed of convergence towards a root in any method is represented by its rate of
convergence.
Remarks:-
1. The Bisection Method always converges.
2. In the case of the Bisection Method, since the error decreases at each step by a factor
of 1/2, convergence in this method has order one (or linear).
(Bisection table, final iterations: n, a_n, b_n, p_n = (a_n + b_n)/2, error bound (b_n − a_n)/2)
13   0.64111328   0.64123535   0.64117432   0.00006104
14   0.64117432   0.64123535   0.64120483   0.00003052
15   0.64117432   0.64120483   0.64118958   0.00001526
16   0.64117432   0.64118958   0.64118195   0.00000763
(Figure: graphs of y = x and y = g(x) on [a, b]; a fixed point p = g(p) occurs where the
two graphs intersect.)
Ex: - Check that the function g(x) = x^2 − 2, for −2 ≤ x ≤ 3, has fixed points at x = −1
and x = 2. See also that these points are the points where the graphs of y = x and
y = x^2 − 2 intersect. Show this by drawing the graphs.
The following theorem gives sufficient conditions for the existence and uniqueness of a
fixed point.
Theorem (Fixed Point Theorem): - Let g be continuous on [a, b] with g(x) ∈ [a, b] for
all x in [a, b]. Then g has at least one fixed point in [a, b]. If, in addition, g'(x)
exists on (a, b) and a positive constant k < 1 exists with │g'(x)│ ≤ k for all x in
(a, b), then the fixed point in [a, b] is unique.
Example: - Let g(x) = 1/√(x + 1) on [0, 1]. g is continuous on [0, 1], and since
g'(x) = −1/(2√((x + 1)^3)) < 0 for all x in [0, 1],
g is decreasing on [0, 1], so
1/√2 = g(1) ≤ g(x) ≤ g(0) = 1, that is, g(x) ∈ [0, 1] for all x ∈ [0, 1]. Moreover,
│g'(x)│ = 1/(2√((x + 1)^3)) ≤ 1/2 < 1 for x ∈ (0, 1).
So g satisfies all the hypotheses of the above theorem and has a unique fixed point in
[0, 1].
To find this fixed point p: if x = g(x) = 1/√(x + 1), then x^3 + x^2 − 1 = 0, and we are
expected to solve this equation; that is,
x = 1/√(x + 1) is equivalent to x^3 + x^2 − 1 = 0.
We will soon see an iterative method to solve x^3 + x^2 − 1 = 0 using its equivalent fixed
point formula x = g(x).
Example: - Let g(x) = (x^2 − 1)/3 on [−1, 1]. The Extreme Value Theorem implies that the
absolute minimum of g occurs at x = 0, with g(0) = −1/3; similarly, the absolute maximum
of g occurs at x = ±1, with g(±1) = 0. Moreover, g is continuous and
│g'(x)│ = │2x/3│ ≤ 2/3 for all x ∈ (−1, 1).
So g has a unique fixed point in [−1, 1]. The unique fixed point p in the interval [−1, 1]
can be determined algebraically: if
p = g(p) = (p^2 − 1)/3, then p^2 − 3p − 1 = 0, which by the quadratic formula implies that
p = (3 − √13)/2.
Note that g also has a unique fixed point p = (3 + √13)/2 for the interval [3, 4]. However,
g(4) = 5 and g'(4) = 8/3 > 1, so g does not satisfy the hypotheses of the above theorem on
[3, 4]. Hence the hypotheses of the theorem are sufficient to guarantee a unique fixed
point but are not necessary; i.e. that g has a fixed point on [a, b] does not always imply
that g(x) ∈ [a, b] for all x ∈ [a, b].
(Figures: graphs of y = x and y = (x^2 − 1)/3, showing the fixed points as the
intersections of the two curves on [−1, 1] and on [3, 4].)
To approximate the fixed point of a function g, we choose an initial approximation p0 and
generate the sequence {p_n}, n ≥ 0, by letting
p_n = g(p_(n−1)) for each n ≥ 1.
If the sequence converges to p and g is continuous, then
p = lim p_n = lim g(p_(n−1)) = g(lim p_(n−1)) = g(p)   (limits as n → ∞)
and a solution to x = g(x) is obtained. This technique is called fixed point iteration.
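The fixed point iteration just described can be sketched as follows; the function name `fixed_point` and the stopping rule │p_n − p_(n−1)│ < tol are my own choices. As a check it is applied to g(x) = 1/√(x + 1) from the earlier example, whose fixed point solves x^3 + x^2 − 1 = 0:

```python
def fixed_point(g, p0, tol=1e-5, max_iter=200):
    """Iterate p_n = g(p_{n-1}) until successive iterates agree to within tol."""
    p_prev = p0
    for _ in range(max_iter):
        p = g(p_prev)
        if abs(p - p_prev) < tol:
            return p
        p_prev = p
    raise RuntimeError("no convergence within max_iter iterations")

# g(x) = 1/sqrt(x + 1); its fixed point solves x^3 + x^2 - 1 = 0
root = fixed_point(lambda x: 1.0 / (1.0 + x) ** 0.5, 0.5, tol=1e-5)
print(root)  # ~ 0.7549
```

Convergence here is guaranteed by the theorem above, since │g'(x)│ ≤ 1/2 on [0, 1].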
Example: - The equation x^3 + 4x^2 − 10 = 0 has a unique root in [1, 2]. There are many
ways to change the equation x^3 + 4x^2 − 10 = 0 (the root-finding problem) to the fixed
point form x = g(x) (a fixed point problem) using simple algebraic manipulation:
a) x = g1(x) = x − x^3 − 4x^2 + 10
b) x = g2(x) = (10/x − 4x)^(1/2)
c) x = g3(x) = (1/2)(10 − x^3)^(1/2)
d) x = g4(x) = (10/(4 + x))^(1/2)
This implies we can have different fixed point problems for the same root-finding
problem. For p0 = 1.5, the results of the fixed point iteration p_n = g(p_(n−1)) for all
four choices of g are given in the table below. (The actual root is 1.365229964...)
Now the question is how to choose an effective fixed point problem, in the sense of fast
convergence, for approximating the solution to the root-finding problem. The following
corollary gives us some clue concerning this.
Corollary: - If g satisfies the hypotheses of the fixed point theorem, then bounds for
the error involved in using pn to approximate p are given by
│p_n − p│ ≤ k^n max{p0 − a, b − p0}
and
│p_n − p│ ≤ (k^n / (1 − k)) │p1 − p0│, for all n ≥ 1.
Both inequalities in the corollary relate the rate at which {p_n} converges to the bound
k on the first derivative. The rate of convergence depends on the factor k^n: the smaller
the value of k, the faster the convergence, which may be very slow if k is close to 1.
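The second bound also tells us in advance how many iterations suffice for a given tolerance: solve k^n/(1 − k) · │p1 − p0│ ≤ tol for n. A small sketch (the function name is my own; the sample values k = 0.15, p0 = 1, p1 ≈ 1.41421 are for illustration):

```python
import math

def fpi_iterations_needed(k, p0, p1, tol):
    """Smallest n with k^n / (1 - k) * |p1 - p0| <= tol, from the second
    error bound in the corollary (k < 1 bounds |g'| on the interval)."""
    return math.ceil(math.log(tol * (1 - k) / abs(p1 - p0)) / math.log(k))

# e.g. k = 0.15, p0 = 1, p1 = g(p0) ~ 1.41421356, tolerance 0.01
print(fpi_iterations_needed(0.15, 1.0, 1.41421356, 0.01))  # 3
```

Because the bound is conservative, the actual number of iterations needed is often smaller than this estimate.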
Exercise: - Verify the four fixed point problems in the above example in relation with the
above theorem.
Theorem: - Let g ∈ C[a, b] be such that g(x) ∈ [a, b] for all x ∈ [a, b]. Suppose, in
addition, that g' is continuous on (a, b) and that a positive constant k < 1 exists with
│g'(x)│ ≤ k for all x ∈ (a, b).
If g'(p) ≠ 0, then for any number p0 in [a, b], the sequence
p_n = g(p_(n−1)), for n ≥ 1,
converges only linearly to the unique fixed point p in [a, b].
Proof: - We know from the fixed point theorem that the sequence converges to p. Since
g' exists on (a, b), we can apply the Mean Value Theorem to g to show that, for any n,
p_(n+1) − p = g(p_n) − g(p) = g'(ξ_n)(p_n − p), with ξ_n between p_n and p.
Since p_n → p, also ξ_n → p, and since g' is continuous on (a, b) we have
g'(ξ_n) → g'(p). Thus
lim (n → ∞) │p_(n+1) − p│ / │p_n − p│ = │g'(p)│.
Hence, if g'(p) ≠ 0, fixed point iteration converges linearly.
Example: - Use an algebraic manipulation to show that each of the following functions
has a fixed point at p precisely when f(p) = 0, where f(x) = x^4 + 2x^2 − x − 3.
a. g1(x) = (3 + x − 2x^2)^(1/4)        b. g2(x) = ((x + 3 − x^4)/2)^(1/2)
c. g3(x) = ((x + 3)/(x^2 + 2))^(1/2)   d. g4(x) = (3x^4 + 2x^2 + 3)/(4x^3 + 4x − 1)
Solution: - (a)
p is a root of f(x) = 0. If, by algebraic manipulation, we succeed in showing that
x = g1(x) is equivalent to f(x) = 0, we could say p is a fixed point of g1(x), since in
particular for x = p, f(p) = 0 is equivalent to p = g1(p).
x^4 + 2x^2 − x − 3 = 0  ⟺  x^4 = 3 + x − 2x^2  ⟺  x = (3 + x − 2x^2)^(1/4).
Take g1(x) = (3 + x − 2x^2)^(1/4); then g1(p) = p.
Example: - Use the fixed point iteration method to determine a solution accurate to
within 10^-2 and 10^-5 for
a. x^3 + x^2 − 1 = 0
b. x^3 + 4x^2 − 10 = 0
Solution: - a)
x^3 + x^2 − 1 = 0 has a root in [0, 1].
x = 1/√(x + 1) is one among the several ways of changing the root-finding problem
x^3 + x^2 − 1 = 0 into an equivalent fixed point problem, and we previously showed that
g(x) = 1/√(x + 1) has a unique fixed point in [0, 1]. Therefore, by the fixed point
theorem, the sequence of iterations
x_n = g(x_(n−1)), n ≥ 1,
converges to the fixed point of g(x) for any initial approximation taken in [0, 1].
For x0 = 0.5:
x1 = g(x0)
x2 = g(x1), ...............
x[1]= 0.816496611
x[2]= 0.741963804
x[3]= 0.757670581
Since │x[4] − x[3]│ = │0.754277706 − 0.757670581│ = 0.003392875 < 0.01, the root
accurate to within 10^-2 is x ≈ 0.757670581.
b) x^3 + 4x^2 − 10 = 0 has a root in [1, 2]. Now take g(x) = (10/(4 + x))^(1/2) and
check whether g satisfies all the conditions in the fixed point theorem.
g'(x) = −5/(√10 (4 + x)^(3/2)) < 0 for all x in [1, 2], so g is decreasing on [1, 2], and
│g'(x)│ = 5/(√10 (4 + x)^(3/2)) ≤ 5/(√10 (4 + 1)^(3/2)) = 5/(√10 · 5^(3/2)) ≈ 0.1414 ≤ 0.15
for all x ∈ [1, 2].
Therefore g has a unique fixed point in [1, 2], and for x0 = 1 and accuracy 10^-2 the
iteration is
x_(n+1) = g(x_n) = (10/(4 + x_n))^(1/2)
x[1]= 1.41421354
x[2]= 1.35904026
x[3]= 1.36601818
x[4]= 1.36512971
x[5]= 1.36524272
x[6]= 1.36522841
Exercise: -
1. Show that g(x) = π + 0.5 sin(x/2) has a unique fixed point on [0, 2π]. Use fixed
point iteration to find an approximation to the fixed point that is accurate to within
10^-2. Use the error bound formula for the method of fixed point iteration to estimate
the number of iterations required to achieve 10^-2 accuracy, and compare this
theoretical estimate to the number actually needed.
2. Repeat problem (1), but now with g(x) = 2^-x, on [1/3, 1] and accuracy 10^-4.
3. For each of the following equations, determine an interval [a, b] on which fixed
point iteration will converge. (i) Estimate the number of iterations necessary to
obtain approximations accurate to within 10^-5, and (ii) perform the calculations.
a. x = (2 − e^x + x^2)/3    b. x = 5/x^2 + 2    c. x = (e^x/3)^(1/2)
d. x = 5^-x                 e. x = 6^-x          f. x = 0.5(sin x + cos x)
Newton-Raphson Method
Suppose that f ∈ C^2[a, b] (the set of all functions that have continuous first and second
derivatives on [a, b]). Let p0 ∈ [a, b] be an approximation to a root of f(x) = 0 and,
for "small" h, let p1 = p0 + h be the exact root, so that f(p1) = 0. Expanding f(p0 + h)
by Taylor's series about p0, we obtain
f(p0 + h) = f(p0) + h f'(p0) + (h^2/2) f''(p0) + ............... = 0
Since Newton's Method is derived by assuming that h is small, the terms involving h^2,
h^3, h^4, .... are much smaller, so
0 ≈ f(p0) + h f'(p0)
Solving for h gives
h = −f(p0)/f'(p0)
A better approximation than p0 is therefore given by p1, where
p1 = p0 − f(p0)/f'(p0)
Repeating the above step, successive approximations are given by p2, p3, p4, ..., p_(n+1),
where
p_(n+1) = p_n − f(p_n)/f'(p_n),    n = 0, 1, 2, ......
which is the Newton-Raphson formula (NRF).
Theorem: - Let f ∈ C^2[a, b]. If p ∈ [a, b] is such that f(p) = 0 and f'(p) ≠ 0, then
there exists a δ > 0 such that Newton's Method generates a sequence {p_n}, n ≥ 1,
converging to p for any initial approximation p0 ∈ [p − δ, p + δ].
The theorem states that, under reasonable assumptions, Newton's Method converges
provided a sufficiently accurate initial approximation is chosen (i.e. an initial
approximation that is close to the root).
Newton's Method is often used to refine an answer obtained by another technique, such as
the Bisection Method, since this method requires a good first approximation but generally
gives rapid second-order convergence.
To find the rate of convergence of this method, expand f about x_n using Taylor's
theorem: with p the root and ξ_n between p and x_n,
0 = f(p) = f(x_n) + (p − x_n) f'(x_n) + ((p − x_n)^2 / 2) f''(ξ_n)
Dividing by f'(x_n) and using the NRF x_(n+1) = x_n − f(x_n)/f'(x_n) gives
x_(n+1) − p = (f''(ξ_n) / (2 f'(x_n))) (x_n − p)^2
Since f' and f'' are continuous and ξ_n lies between x_n and p, letting n → ∞ we get
lim (n → ∞) │x_(n+1) − p│ / (x_n − p)^2 = │f''(p)│ / (2│f'(p)│)
so Newton's Method converges with order two (quadratically).
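The NRF can be sketched as a short program; a minimal Python version (function name and stopping rule are my own choices), applied to the first example below, x^3 − 2x − 5 = 0:

```python
def newton(f, df, p0, tol=1e-4, max_iter=50):
    """Newton-Raphson: p_{n+1} = p_n - f(p_n)/f'(p_n), stopping when
    successive iterates agree to within tol."""
    p = p0
    for _ in range(max_iter):
        fp = df(p)
        if fp == 0:
            raise ZeroDivisionError("f'(p) = 0; choose a different p0")
        p_next = p - f(p) / fp
        if abs(p_next - p) < tol:
            return p_next
        p = p_next
    raise RuntimeError("no convergence within max_iter iterations")

# x^3 - 2x - 5 = 0, starting from p0 = 2
root = newton(lambda x: x**3 - 2*x - 5, lambda x: 3*x**2 - 2, 2.0, tol=1e-4)
print(root)  # ~ 2.0946
```

Because convergence is quadratic near the root, only a handful of iterations are needed here, in contrast to the ten or more that bisection required for comparable accuracy.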
Example: - Use the NRM to find a root of the equation x^3 − 2x − 5 = 0 accurate to
within 10^-4.
Example: - Use the NRM to find a root of the equation x sin x + cos x = 0 accurate to
within 10^-4 for π/2 ≤ x ≤ π.
[x1]= 2.3561945
[x2]= 2.74889374
[x3]= 2.94524336
[x4]= 2.84706855
[x5]= 2.79798126
[x6]= 2.82252502
[x7]= 2.81025314
[x8]= 2.8041172
[x9]= 2.80104923
[x10]= 2.79951525
[x11]= 2.79874825
[x12]= 2.79836464
[x13]= 2.79855633
After 14 iterations the root is 2.79846048; i.e. 14 iterations are needed to arrive at the
given accuracy.
Exercises:-
1. Use Newton's Method to find the solutions accurate to within 10^-5 for the
following problems
a. e^x + 2^-x + 2 cos x − 6 = 0 for 1 ≤ x ≤ 2
b. ln(x − 1) + cos(x − 1) = 0 for 1.3 ≤ x ≤ 2
c. 2x cos 2x − (x − 2)^2 = 0 for 2 ≤ x ≤ 3 and 3 ≤ x ≤ 4
d. (x − 2)^2 − ln x = 0 for 1 ≤ x ≤ 2 and e ≤ x ≤ 4
e. e^x − 3x^2 = 0 for 0 ≤ x ≤ 1 and 3 ≤ x ≤ 5
f. sin x − e^-x = 0 for 0 ≤ x ≤ 1, 3 ≤ x ≤ 4 and 6 ≤ x ≤ 7
2. Use Newton's Method to approximate, to within 10^-4, the value of x that produces
the point on the graph of y = x^2 that is closest to (1, 0). [Hint: minimize [d(x)]^2,
where d(x) represents the distance from (x, x^2) to (1, 0).]
3. Repeat problem 2, but now with y = 1/x, tolerance 10^-4 and the point (2, 1).
4. The sum of two numbers is 20. If each number is added to its square root, the
product of the two sums is 155.55. Determine the two numbers to within 10^-4.
Secant Method
The Secant Method is mainly aimed at reducing the effort needed to evaluate the
derivative of f at each approximation in the NRM.
p_n = p_(n−1) − f(p_(n−1))/f'(p_(n−1)) for n ≥ 1    (NRM)
By definition,
f'(p_(n−1)) = lim (x → p_(n−1)) (f(x) − f(p_(n−1))) / (x − p_(n−1))
Letting x = p_(n−2), we have
f'(p_(n−1)) ≈ (f(p_(n−2)) − f(p_(n−1))) / (p_(n−2) − p_(n−1)) = (f(p_(n−1)) − f(p_(n−2))) / (p_(n−1) − p_(n−2))
Using this approximation for f'(p_(n−1)) in the NRF gives
p_n = p_(n−1) − f(p_(n−1))(p_(n−1) − p_(n−2)) / (f(p_(n−1)) − f(p_(n−2)))
which is called the SECANT METHOD.
Geometrically, the method starts with two initial approximations p0 and p1; the
approximation p2 is the x-intercept of the line joining (p0, f(p0)) and (p1, f(p1)), the
approximation p3 is the x-intercept of the line joining (p1, f(p1)) and (p2, f(p2)), and
so on.
(figure)
Example: - Use the Secant Method to find a root of the equation x = cos x accurate to
within 10^-4.
Solution: - f(x) = x − cos x = 0; the root lies between 0 and π/2.
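A minimal sketch of the method in Python (function name and stopping rule are my own choices), applied to the example above:

```python
import math

def secant(f, p0, p1, tol=1e-4, max_iter=50):
    """Secant method: replace f'(p_{n-1}) in Newton's formula by the
    difference quotient through the last two iterates."""
    f0, f1 = f(p0), f(p1)
    for _ in range(max_iter):
        if f1 == f0:
            raise ZeroDivisionError("zero denominator in secant step")
        p2 = p1 - f1 * (p1 - p0) / (f1 - f0)   # x-intercept of the secant line
        if abs(p2 - p1) < tol:
            return p2
        p0, f0 = p1, f1
        p1, f1 = p2, f(p2)
    raise RuntimeError("no convergence within max_iter iterations")

# x = cos x, i.e. f(x) = x - cos x, root between 0 and pi/2
root = secant(lambda x: x - math.cos(x), 0.0, math.pi / 2, tol=1e-4)
print(root)  # ~ 0.7391
```

Only one new evaluation of f is needed per step, which is the saving over the NRM that the text describes.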
Compare the result obtained by this method with those of the NRM, the FPIM and the BM.
Exercise: - Do all the problems given in the exercise of the NRM using the Secant Method.