Numerical Analysis 1
Oliver Mhlanga
Definition
A mathematical model is a mathematical description of a
physical situation.
The main goal of numerical analysis is to develop efficient
algorithms for computing precise numerical values of mathematical
quantities, including functions, integrals, solutions of algebraic
equations, solutions of differential equations (both ordinary and
partial), solutions of minimization problems, and so on.
Part of this process is the consideration of the errors that arise in
these calculations, from the errors in the arithmetic operations or
from other sources.
Computer arithmetic and number storage
Item (3) is the true domain of numerical analysis: most systems of equations are too complicated to solve explicitly, and even in cases when an analytic solution formula is known, directly obtaining the precise numerical values may be difficult.
Definition
Let x be a real number and let x* be an approximation. The absolute error in the approximation x* ≈ x is defined as Δx = |x* − x|. The relative error is defined as the ratio of the absolute error to the size of x, i.e., |x* − x|/|x| = Δx/|x|, provided x ≠ 0; otherwise the relative error is not defined.
If x* < x, then we say that the number x* is an approximate value of the number x by defect, and if x* > x, then it is an approximate value of x by excess.
Errors in Numerical Computations
f(x) = 0    (1)
f(x) = x^6 − x − 1 = 0    (2)
n a b c b-c f(c)
1 1.0000 2.0000 1.5000 0.5000 8.8906
2 1.0000 1.5000 1.2500 0.2500 1.5647
3 1.0000 1.2500 1.1250 0.1250 -0.0977
4 1.1250 1.2500 1.1875 0.0625 0.6167
5 1.1250 1.1875 1.1562 0.0312 0.2333
6 1.1250 1.1562 1.1406 0.0156 0.0616
7 1.1250 1.1406 1.1328 0.0078 -0.0196
8 1.1328 1.1406 1.1367 0.0039 0.0206
9 1.1328 1.1367 1.1348 0.0020 0.0004
10 1.1328 1.1348 1.1338 0.00098 -0.0096
n ≥ ln((b − a)/ε) / ln 2,    (3)
where ε is the required accuracy of the root.
Bisection Method.
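The bisection iterations tabulated above can be reproduced with a short routine; a minimal sketch, applied to equation (2) on [1, 2] (function and variable names are illustrative):

```python
def bisect(f, a, b, tol=1e-4, max_iter=100):
    """Bisection method: repeatedly halve [a, b], keeping a sign change."""
    if f(a) * f(b) > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        c = 0.5 * (a + b)
        if b - c <= tol or f(c) == 0:
            return c
        # keep the half-interval on which f changes sign
        if f(a) * f(c) < 0:
            b = c
        else:
            a = c
    return 0.5 * (a + b)

f = lambda x: x**6 - x - 1
root = bisect(f, 1.0, 2.0, tol=1e-4)
```

The root of (2) in [1, 2] is approximately 1.1347, in agreement with the last rows of the table.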
x_{n+1} = x_n − f(x_n)/f'(x_n),  n = 0, 1, 2, ....    (4)

Applied to equation (2):

x_{n+1} = x_n − (x_n^6 − x_n − 1)/(6x_n^5 − 1)    (5)
Newton-Raphson Method.
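A minimal sketch of the Newton-Raphson iteration (4) applied to equation (2); the starting guess and tolerance here are illustrative:

```python
def newton(f, fprime, x0, tol=1e-10, max_iter=50):
    """Newton-Raphson: x_{n+1} = x_n - f(x_n)/f'(x_n)."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:
            return x
    return x

f = lambda x: x**6 - x - 1
fp = lambda x: 6 * x**5 - 1
root = newton(f, fp, 1.5)
```

Starting from x0 = 1.5, the iterates converge rapidly to the same root ≈ 1.134724 that bisection located, but in far fewer iterations.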
Convergence analysis
Let α be the root, so that f(α) = 0.
Define the error e_k = x_k − α. Then
e_{k+1} = x_{k+1} − α = x_k − f(x_k)/f'(x_k) − α = [f'(x_k) e_k − f(x_k)] / f'(x_k).
Using Taylor's theorem, 0 = f(α) = f(x_k) − f'(x_k) e_k + (1/2) f''(ξ_k) e_k², so f'(x_k) e_k − f(x_k) = (1/2) f''(ξ_k) e_k², and hence
|e_{k+1}| ≈ C |e_k|²,  with C = |f''(α)| / (2|f'(α)|).
This is called quadratic convergence.
ICS226/ISE225: Numerical Analysis
P_n(x_i) = y_i,  i = 0, 1, ..., n    (6)
In matrix form, conditions (6) read Xa = y, where X is the matrix with entries X_ij = x_i^j, a = (a_0, a_1, ..., a_n)^T and y = (y_0, y_1, ..., y_n)^T.    (7)
The coefficient matrix X is called the Vandermonde matrix. It can be shown that det(X) = ∏_{0 ≤ j < k ≤ n} (x_k − x_j). Since the x_i are distinct, the Vandermonde system is non-singular, and solving this system gives the coefficients a_i which solve (6).
Other reasons:
• To approximate the function by polynomial interpolation;
• Other computations, such as derivatives and definite integrals of polynomials, are easy to determine and are also polynomials;
Interpolation and Approximation
Example 3.1. Interpolate the given data set with a polynomial of degree 2.

x_i   0   1   2/3
y_i   1   0   0.5
Solution: Let P2(x) = a_0 + a_1 x + a_2 x^2.
We want to find the coefficients a_0, a_1, a_2. By the interpolating property we have:
x = 0, y = 1 :  P2(0) = a_0 = 1
x = 1, y = 0 :  P2(1) = a_0 + a_1 + a_2 = 0
x = 2/3, y = 0.5 :  P2(2/3) = a_0 + (2/3) a_1 + (4/9) a_2 = 0.5
Solving the system gives a_0 = 1, a_1 = −1/4, a_2 = −3/4. Thus,
P2(x) = 1 − (1/4)x − (3/4)x^2
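The small Vandermonde system above can be formed and solved directly; a sketch with a tiny Gaussian-elimination helper (the helper and its names are illustrative, not part of the lecture):

```python
def solve(A, b):
    """Solve Ax = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]  # augmented matrix
    for k in range(n):
        # pivot: bring the largest entry in column k to row k
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            m = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= m * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):  # back substitution
        s = sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (M[i][n] - s) / M[i][i]
    return x

xs, ys = [0.0, 1.0, 2 / 3], [1.0, 0.0, 0.5]
V = [[x**j for j in range(3)] for x in xs]  # Vandermonde matrix
a = solve(V, ys)  # coefficients a0, a1, a2
```

The computed coefficients reproduce a_0 = 1, a_1 = −1/4, a_2 = −3/4.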
Problem: A Vandermonde system is often ill-conditioned, so the computed coefficients a_i may be inaccurate. Furthermore, the computational cost, including forming the matrix, factorisation, and triangular substitutions, is excessive.
Lagrange Interpolation Polynomials
Given (n + 1) distinct data points (x_i, y_i), i = 0, 1, ..., n, there exists a unique P_n with deg(P_n) ≤ n that solves equation (6), given by Lagrange's formula

P_n(x) = Σ_{i=0}^{n} y_i L_i(x) = y_0 L_0 + y_1 L_1 + ... + y_n L_n    (8)

where

L_i(x) = ∏_{j=0, j≠i}^{n} (x − x_j)/(x_i − x_j)
       = [(x − x_0) ... (x − x_{i−1})(x − x_{i+1}) ... (x − x_n)] / [(x_i − x_0) ... (x_i − x_{i−1})(x_i − x_{i+1}) ... (x_i − x_n)]    (9)
The L_i(x) are cardinal functions, satisfying the properties
L_i(x_j) = δ_ij = { 1, i = j;  0, i ≠ j },  i, j = 0, 1, ..., n,
where δ_ij is the Kronecker delta function. In this discrete sense the cardinal functions are locally supported.
Example 3.2. Write the Lagrange polynomial for the data below (same as Example 3.1).

x_i   0   1   2/3
y_i   1   0   0.5
Solution: x_0 = 0, x_1 = 1, x_2 = 2/3, y_0 = 1, y_1 = 0, y_2 = 0.5.
Compute the cardinal functions:
L_0(x) = (x − x_1)(x − x_2) / [(x_0 − x_1)(x_0 − x_2)]
⇒ L_0(x) = (x − 1)(x − 2/3) / [(0 − 1)(0 − 2/3)] = (3/2)(x − 2/3)(x − 1)
So
P_2(x) = Σ_{i=0}^{2} y_i L_i(x) = (3/2)(x − 2/3)(x − 1) − (9/4)x(x − 1) + 0
       = 1 − (1/4)x − (3/4)x^2, the same as in Example 3.1.
H/W: Given the following 4 data points

x_i   0   1   3   5
y_i   1   2   6   7

find a polynomial in Lagrange form to interpolate these data.
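The cardinal-function construction of (8)-(9) can be sketched directly in code, checked here against Example 3.2 (names are illustrative):

```python
def lagrange(xs, ys):
    """Return a function evaluating the Lagrange interpolating polynomial."""
    def P(x):
        total = 0.0
        for i, (xi, yi) in enumerate(zip(xs, ys)):
            # cardinal function L_i(x) = prod_{j != i} (x - x_j)/(x_i - x_j)
            Li = 1.0
            for j, xj in enumerate(xs):
                if j != i:
                    Li *= (x - xj) / (xi - xj)
            total += yi * Li
        return total
    return P

P2 = lagrange([0.0, 1.0, 2 / 3], [1.0, 0.0, 0.5])
# should agree everywhere with 1 - x/4 - 3x^2/4 from Example 3.1
```

Evaluating P2 at any point matches the coefficient form obtained from the Vandermonde system, illustrating that both constructions give the same unique polynomial.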
Numerical Differentiation
If f (x) can be computed with only n digits of precision, is it also
possible to calculate f 0 (x) numerically with n digits of precision?
This challenge can be traced to the subtraction between quantities
that are nearly equal. In this section, several alternatives are
offered for the numerical computation of f 0 (x) and f 00 (x).
Recall that
f'(x) = lim_{h→0} [f(x + h) − f(x)] / h    (10)
Taylor's theorem:
f(x + h) = Σ_{k=0}^{∞} (1/k!) f^(k)(x) h^k = Σ_{k=0}^{n} (1/k!) f^(k)(x) h^k + E_{n+1},    (11)
where
E_{n+1} = Σ_{k=n+1}^{∞} (1/k!) f^(k)(x) h^k = (1/(n+1)!) f^(n+1)(ξ) h^{n+1} = O(h^{n+1})
for some ξ between x and x + h.
Taking n = 1 in (11) gives
f'(x_0) = [f(x_0 + h) − f(x_0)] / h − (h/2) f''(ξ).
For small values of h,
f'(x_0) ≈ [f(x_0 + h) − f(x_0)] / h
with an error bounded by M|h|/2, where M is a bound on |f''(x)| for x ∈ (x_0, x_0 + h).
f(x + h) = f(x) + h f'(x) + (h²/2!) f''(x) + (h³/3!) f'''(x) + (h⁴/4!) f^(iv)(x) + ...
f(x − h) = f(x) − h f'(x) + (h²/2!) f''(x) − (h³/3!) f'''(x) + (h⁴/4!) f^(iv)(x) + ...
⇒ f(x + h) − f(x − h) = 2h f'(x) + 2 (h³/3!) f'''(x) + ...
⇒ f'(x) = [f(x + h) − f(x − h)] / (2h) + O(h²)
The FD, BD and CD formulas each involve 2 function calls, 1 subtraction, and 1 division: the same computation time. The FD and BD formulas are comparable in accuracy. The CD formula is the most accurate and the most recommended method; however, sometimes CD cannot be applied.
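The difference in accuracy can be checked numerically; a sketch comparing forward and central differences on f(x) = sin x (the test function and step sizes are illustrative):

```python
import math

def fd(f, x, h):
    """Forward difference, O(h)."""
    return (f(x + h) - f(x)) / h

def cd(f, x, h):
    """Central difference, O(h^2)."""
    return (f(x + h) - f(x - h)) / (2 * h)

x, exact = 1.0, math.cos(1.0)  # d/dx sin(x) = cos(x)
for h in (0.1, 0.01, 0.001):
    err_fd = abs(fd(math.sin, x, h) - exact)
    err_cd = abs(cd(math.sin, x, h) - exact)
    # dividing h by 10 cuts err_fd by ~10x but err_cd by ~100x
```

Printing the errors at each h makes the first- versus second-order behaviour visible directly.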
Adding, instead of subtracting, the two expansions:
f(x + h) = f(x) + h f'(x) + (h²/2!) f''(x) + (h³/3!) f'''(x) + (h⁴/4!) f^(iv)(x) + ...
f(x − h) = f(x) − h f'(x) + (h²/2!) f''(x) − (h³/3!) f'''(x) + (h⁴/4!) f^(iv)(x) + ...
⇒ f(x + h) + f(x − h) = 2 f(x) + 2 (h²/2!) f''(x) + 2 (h⁴/4!) f^(iv)(x) + ...
⇒ f''(x) = [f(x + h) − 2 f(x) + f(x − h)] / h² + O(h²),
with Error = −f^(iv)(ξ) h² / 12.
f''(x) ≈ [f(x + h) − 2 f(x) + f(x − h)] / h²
f'''(x) ≈ [f(x + 2h) − 2 f(x + h) + 2 f(x − h) − f(x − 2h)] / (2h³)
f^(iv)(x) ≈ [f(x + 2h) − 4 f(x + h) + 6 f(x) − 4 f(x − h) + f(x − 2h)] / h⁴
Other formulas for f''(x), f'''(x), ... are also possible.
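These stencils can be verified on a smooth test function; a sketch using f(x) = e^x at x = 0, where every derivative equals 1 (the test point and step size are illustrative):

```python
import math

def d2(f, x, h):
    """Central second-derivative stencil, O(h^2)."""
    return (f(x + h) - 2 * f(x) + f(x - h)) / h**2

def d3(f, x, h):
    """Central third-derivative stencil, O(h^2)."""
    return (f(x + 2 * h) - 2 * f(x + h) + 2 * f(x - h) - f(x - 2 * h)) / (2 * h**3)

h = 0.05
approx2 = d2(math.exp, 0.0, h)  # ~1, error ~ f''''(ξ) h²/12
approx3 = d3(math.exp, 0.0, h)  # ~1
```

Halving h should reduce both errors by roughly a factor of four, consistent with the O(h²) error terms above.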
Solution (4.2.1):
Our aim here is to approximate x'(t). The choice of h is dictated by the available data. Using data with t = 0.5 s at its centre we obtain
x'(0.5) ≈ [x(1.0) − x(0.0)] / (2 × 0.5) = 6.80 m/s    (12)
Data centred at t = 1.25 s gives us the approximation
x'(1.25) ≈ [x(1.5) − x(1.0)] / (2 × 0.25) = 6.20 m/s    (13)
Note the value of h used in each case.
Numerical Integration
The composite trapezoid rule with n equal sub-intervals of width h = (b − a)/n and nodes x_i = a + ih is

∫_a^b f(x) dx ≈ h [ (1/2) f(x_0) + (1/2) f(x_n) + Σ_{i=1}^{n−1} f(x_i) ]    (15)
Example 5.2. If the trapezoid rule is to be used to compute
∫_0^2 e^x dx
with an error of at most 0.5 × 10⁻⁴, how many points should be used?
Solution:
f(x) = e^x, f''(x) = e^x, a = 0, b = 2, max_{x∈(a,b)} |f''(x)| = e².
By the error bound, it is sufficient to require that
|E_T(f; h)| ≤ (1/6) h² e² ≤ 0.5 × 10⁻⁴
⇒ h² ≤ 3 × 10⁻⁴ × e⁻² ≈ 4.06 × 10⁻⁵
⇒ h = 2/n ≤ √(4.06 × 10⁻⁵) ≈ 0.0064
⇒ n ≥ 2/0.0064 ≈ 313.8
So n = 314 sub-intervals, i.e. at least 314 + 1 = 315 points, will certainly produce the desired accuracy.
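The count can be confirmed by running the composite trapezoid rule (15) with n = 314; a sketch, using the exact value e² − 1 for the check:

```python
import math

def trapezoid(f, a, b, n):
    """Composite trapezoid rule with n equal sub-intervals."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return h * s

approx = trapezoid(math.exp, 0.0, 2.0, 314)
exact = math.exp(2) - 1
# |approx - exact| should fall below the 0.5e-4 target
```

The actual error with n = 314 is comfortably inside the bound, as expected, since the error estimate used the worst-case value of |f''| on the interval.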
Simpson's rule.
A numerical integration rule over two equal sub-intervals with partition points a, a + h, and a + 2h = b is the widely used basic Simpson's rule:
∫_a^{a+2h} f(x) dx ≈ (h/3) [f(a) + 4 f(a + h) + f(a + 2h)]    (16)
Simpson's rule computes exactly the integral of an interpolating quadratic polynomial over an interval of length 2h using three points, namely the two endpoints and the middle point.
To derive the composite Simpson's rule for approximating ∫_a^b f(x) dx, we divide the interval into an even number n = 2m of sub-intervals of width
h = (b − a)/(2m),  x_0 = a,  x_{2m} = b,  x_{i+1} − x_i = h.
The rule is obtained by integrating, over each interval [x_i, x_{i+2}], the Lagrange quadratic polynomial p_i:
Thus, we obtain
∫_a^b f(x) dx ≈ (h/3) [ f(a) + f(b) + 4 Σ_{i=1}^{n/2} f(a + (2i − 1)h) + 2 Σ_{i=1}^{(n−2)/2} f(a + 2ih) ] = S(f; h),
which is Simpson's rule.
Example 5.3. Evaluate I(f) = ∫_0^1 e^{−x²} dx by Simpson's rule with 2m = 10.
Answer: h = (1 − 0)/10 = 0.1
i    x_i   x_i²   f(x_i) = e^{−x_i²}
0    0.0   0.00   1.000000
1    0.1   0.01   0.990050
2    0.2   0.04   0.960789
3    0.3   0.09   0.913931
4    0.4   0.16   0.852144
5    0.5   0.25   0.778801
6    0.6   0.36   0.697676
7    0.7   0.49   0.612626
8    0.8   0.64   0.527292
9    0.9   0.81   0.444858
10   1.0   1.00   0.367879
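Applying composite Simpson's rule to these values; a sketch (the true value of this integral is ≈ 0.7468241, and Simpson with 2m = 10 matches it to about six decimals):

```python
import math

def simpson(f, a, b, m):
    """Composite Simpson's rule with 2m equal sub-intervals."""
    n = 2 * m
    h = (b - a) / n
    odd = sum(f(a + (2 * i - 1) * h) for i in range(1, m + 1))
    even = sum(f(a + 2 * i * h) for i in range(1, m))
    return (h / 3) * (f(a) + f(b) + 4 * odd + 2 * even)

g = lambda x: math.exp(-x * x)
S = simpson(g, 0.0, 1.0, 5)  # 2m = 10, h = 0.1
```

The odd-index sum collects the weight-4 nodes 0.1, 0.3, ..., 0.9 and the even-index sum the weight-2 interior nodes, exactly as in the formula for S(f; h).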
Answer: For the same accuracy requirement as in Example 5.2, Simpson's rule needs
|E_S(f; h)| ≤ (2/180) h⁴ e² ≤ 0.5 × 10⁻⁴
⇒ h⁴ ≤ (180/2) × 0.5 × 10⁻⁴ × e⁻² ≈ 6.09 × 10⁻⁴
⇒ h ≤ 0.157
⇒ m = (b − a)/(2h) ≥ 6.37, take m = 7.
We need at least 2m + 1 = 15 points (far fewer than the 315 points of the trapezoid rule: more powerful!).
AX = b    (19)
where
A = [ a_11  a_12  ...  a_1n
      a_21  a_22  ...  a_2n
      ...
      a_n1  a_n2  ...  a_nn ],
X = (x_1, x_2, ..., x_n)^T,  b = (b_1, b_2, ..., b_n)^T.
Systems of Linear Equations
• The system of linear equations (17) is consistent if it has a solution. If a system of linear equations has no solution, then it is inconsistent (or incompatible).
• A consistent system of linear equations may have one solution or several solutions; it is said to be determinate if there is exactly one solution and indeterminate if there is more than one solution.
Generally, the following three types of the elementary
transformations to a system of linear equations are used.
1. Interchange: The order of two equations can be changed.
2. Scaling: Multiplication of both sides of an equation of the
system by any non-zero number.
3. Replacement: Addition to (subtraction from) both sides of
one equation of the corresponding sides of another equation
multiplied by any number.
x_n = b_n / u_nn,
x_{n−1} = (b_{n−1} − u_{n−1,n} x_n) / u_{n−1,n−1},
...
x_1 = (b_1 − u_12 x_2 − u_13 x_3 − ... − u_1n x_n) / u_11    (20)
R2 + R3 → R3 yields
[ −0.04   0.04   0.12 | 3
   0     −1      2    | 43
   0      0      1    | 25 ]
Backward substitution of Gauss elimination is
x_3 = 25/1 = 25
x_2 = [43 − (2)(25)] / (−1) = 7
x_1 = [3 − (0.12)(25) − (0.04)(7)] / (−0.04) = 7
2x + 16y + z = 39,
x + y + 25z = 83,
10x + y + z = 19.
Interchanging equations to bring the largest coefficient of x into the pivot position gives
10x + y + z = 19,
2x + 16y + z = 39,
x + y + 25z = 83.
Augmented matrix
[ 10   1    1  | 19
   2  16    1  | 39
   1   1   25  | 83 ]
Forward elimination: R2 → R2 − (2/10)R1 and R3 → R3 − (1/10)R1:
[ 10   1        1       | 19
   0   158/10   8/10    | 352/10
   0   9/10     249/10  | 811/10 ]
R2 → 10 R2, R3 → 10 R3:
[ 10   1     1    | 19
   0   158   8    | 352
   0   9     249  | 811 ]
R3 → R3 − (9/158) R2:
[ 10   1     1           | 19
   0   158   8           | 352
   0   0     39270/158   | 124970/158 ]
Backward substitution:
(39270/158) z = 124970/158 ⇒ z = 3.1823
158y + 8z = 352 ⇒ y = [352 − 8(3.1823)] / 158 = 2.0667
10x + y + z = 19 ⇒ x = (19 − 2.0667 − 3.1823) / 10 = 1.3751
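The elimination above can be replayed step by step; a sketch that applies the same row operations and back substitution (floats stand in for the exact fractions):

```python
# Augmented matrix [A | b] for the reordered system
M = [[10.0, 1.0, 1.0, 19.0],
     [2.0, 16.0, 1.0, 39.0],
     [1.0, 1.0, 25.0, 83.0]]

# Forward elimination: eliminate column 0, then column 1
for i in (1, 2):
    m = M[i][0] / M[0][0]
    M[i] = [M[i][j] - m * M[0][j] for j in range(4)]
m = M[2][1] / M[1][1]
M[2] = [M[2][j] - m * M[1][j] for j in range(4)]

# Backward substitution
z = M[2][3] / M[2][2]
y = (M[1][3] - M[1][2] * z) / M[1][1]
x = (M[0][3] - M[0][1] * y - M[0][2] * z) / M[0][0]
# (x, y, z) ~ (1.3751, 2.0667, 3.1823), matching the hand computation
```

Note that the multipliers 2/10, 1/10, and 9/158 in the code are exactly the row-operation factors used above.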
Systems of Linear Equations
Cond(A) = ||A|| ||A⁻¹||,
where ||A|| is any matrix norm, gives a measure of the condition of the matrix A. A large value of Cond(A) indicates that the matrix, or the associated system of equations, is ill-conditioned.
A system is called ill-conditioned or ill-posed if a small change in the coefficients of the system produces a large change in the solution (unstable); if, however, the change in the solution is small for small changes in the coefficients, then the system is called well-conditioned or well-posed.
Iteration Methods.
• If the system of equations has a large number of variables, then the direct methods are not very suitable. In this case, approximate numerical methods are used to determine the variables of the system.
• The efficiency of an approximate method depends on the choice of the initial vector and the rate of convergence of the process.
The following two approximate methods are widely used to solve a system of linear equations:
1. Jacobi's iteration method, and
2. Gauss-Seidel's iteration method.
Jacobi's Iteration Method.
Consider the linear system (17), and assume that the diagonal quantities a_ii are non-zero pivot elements. System (17) can be written as
x_1 = (1/a_11)(b_1 − a_12 x_2 − a_13 x_3 − ... − a_1n x_n)
x_2 = (1/a_22)(b_2 − a_21 x_1 − a_23 x_3 − ... − a_2n x_n)
...
x_n = (1/a_nn)(b_n − a_n1 x_1 − a_n2 x_2 − ... − a_{n,n−1} x_{n−1}).
Starting from x = y = z = 0, Jacobi's iterations for the system of Example 6.5 below are:
k    x        y        z
0 0 0 0
1 2.00000 4.80000 2.03704
2 1.00878 3.72839 1.91111
3 1.24225 4.14167 1.94931
4 1.15183 4.04319 1.93733
5 1.17327 4.08096 1.94083
6 1.16500 4.07191 1.93974
7 1.16697 4.07537 1.94006
8 1.16614 4.07454 1.93996
9 1.16640 4.07488 1.93999
10 1.16632 4.07477 1.93998
11 1.16635 4.07481 1.93998
The solution correct up to four decimal places is
x = 1.1664, y = 4.0748, z = 1.9400.
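A sketch of Jacobi's iteration for this system, with the same zero starting vector as the table (the tolerance and helper names are illustrative):

```python
def jacobi(A, b, x0, tol=1e-6, max_iter=100):
    """Jacobi iteration: every component uses only the previous iterate."""
    n = len(b)
    x = x0[:]
    for k in range(max_iter):
        x_new = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
                 for i in range(n)]
        if max(abs(x_new[i] - x[i]) for i in range(n)) < tol:
            return x_new, k + 1
        x = x_new
    return x, max_iter

A = [[27.0, 6.0, -1.0], [6.0, 15.0, 2.0], [1.0, 1.0, 54.0]]
b = [54.0, 72.0, 110.0]
sol, iters = jacobi(A, b, [0.0, 0.0, 0.0])
# sol ~ (1.1664, 4.0748, 1.9400)
```

The iteration converges here because the coefficient matrix is strictly diagonally dominant.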
The method is repeated until |x_i^(k+1) − x_i^(k)| < ε for all i = 1, 2, ..., n, where ε > 0 is a pre-assigned number called the error tolerance.
Example 6.5. Solve the following system of equations by Gauss-Seidel's iteration method, correct up to four decimal places.
27x + 6y − z = 54,  6x + 15y + 2z = 72,  x + y + 54z = 110.
Solution. The iteration scheme is
x^(k+1) = (1/27)(54 − 6y^(k) + z^(k))
y^(k+1) = (1/15)(72 − 6x^(k+1) − 2z^(k))
z^(k+1) = (1/54)(110 − x^(k+1) − y^(k+1))
Let y = 0, z = 0 be the initial solution. The successive iterations are shown below.
k    x        y        z
0    -        0        0
1    2.00000  4.00000  1.92593
2    1.18244  4.07023  1.93977
3    1.16735  4.07442  1.93997
4    1.16642  4.07477  1.93998
5    1.16635  4.07480  1.93998
6    1.16634  4.07480  1.93998
The solution correct up to four decimal places is
x = 1.1663, y = 4.0748, z = 1.9400
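Gauss-Seidel differs from Jacobi only in using freshly updated components immediately within the same sweep; a sketch on the same system (the iteration count can be compared with the Jacobi table above):

```python
def gauss_seidel(A, b, x0, tol=1e-6, max_iter=100):
    """Gauss-Seidel: updated components are used within the same sweep."""
    n = len(b)
    x = x0[:]
    for k in range(max_iter):
        diff = 0.0
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            new = (b[i] - s) / A[i][i]
            diff = max(diff, abs(new - x[i]))
            x[i] = new  # use the new value immediately
        if diff < tol:
            return x, k + 1
    return x, max_iter

A = [[27.0, 6.0, -1.0], [6.0, 15.0, 2.0], [1.0, 1.0, 54.0]]
b = [54.0, 72.0, 110.0]
sol, iters = gauss_seidel(A, b, [0.0, 0.0, 0.0])
# sol ~ (1.1663, 4.0748, 1.9400), in fewer sweeps than Jacobi here
```

For this system Gauss-Seidel reaches the tolerance in noticeably fewer sweeps than Jacobi, consistent with the remark below.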
Remark: Usually, Gauss-Seidel's method converges more rapidly than the Gauss-Jacobi method, but this is not always true. There are some examples in which the Gauss-Jacobi method converges faster than the Gauss-Seidel method.