
NUMERICAL METHODS

CHAPTER ONE

BASIC CONCEPTS IN ERROR ESTIMATION

Introduction

The solution of most practical problems in science and engineering involves two
successive stages:

1. The mathematical description of the problem, i.e. mathematical modeling


2. The solution of the formulated mathematical problem

A mathematical model may require the use of nonlinear equations, systems of equations, curve fitting and interpolation, differentiation, integration, differential equations, etc.

In many practical applications it is difficult or impossible to obtain an exact (or analytic) solution to a given mathematical problem. In such cases, numerical methods are used instead to find approximate solutions.

1.1. Sources of Errors


(i) Before Computation
1. Modeling Errors: Real-world processes cannot always be described exactly by mathematics, so simplifications are introduced to describe them. Hence, these models are idealized and contain error.
2. Data Errors: These errors arise from the fact that initial parameters are often not exact, since they are obtained from measurements or previous computations.
(ii) During Computation
1. Round off Errors: Round-off errors originate from the fact that computers can
only represent numbers using a fixed and limited number of significant figures.

Thus, numbers such as π or √2 cannot be represented exactly in computer memory. The discrepancy introduced by this limitation is called round-off error.
2. Truncation Errors: Truncation errors in numerical methods arise when
approximations are used to estimate exact mathematical procedures. Often a
Taylor series is used to approximate a solution which is then truncated.

E.g.: eˣ = 1 + x + x²/2! + x³/3! + ...

Now if eˣ ≈ 1 + x + x²/2!, the truncation error is given by

eˣ − (1 + x + x²/2!) = x³/3! + x⁴/4! + ...

Numerical Errors = Round-off Errors + Truncation Errors

1.2. Absolute and Relative Errors

Definition: If x̄ is an approximation to the actual (or exact) value x, then

1. The absolute error E_x in x̄ is defined by E_x = |x − x̄|, and

2. The relative error ε_x in x̄ is defined by ε_x = |x − x̄| / |x| = E_x / |x|, if x ≠ 0.

Note: 1. The absolute error doesn’t show how good a measurement or calculation is.
2. Relative errors can be stated directly or as percentages.
3. The actual value x lies in the interval [x̄ − E_x, x̄ + E_x],

i.e. x satisfies the inequality x̄ − E_x ≤ x ≤ x̄ + E_x.

4. This inequality is often written as x = x̄ ± E_x.


Example 1: Suppose the lengths of a bridge and a rivet are measured to be 9,999 and 9
cm respectively. If the actual values are 10,000 and 10 cm respectively, compute

(a) The absolute error and


(b) The relative error for each case

Example 2: Determine, in percent, the relative error of the approximate number x̄ = 35.148 if x = 35.148 ± 0.00074.
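Both examples can be checked with a short sketch in Python (the numbers come from the statements above):

```python
# Example 1: bridge (actual 10,000 cm, measured 9,999 cm) and rivet (actual 10 cm, measured 9 cm)
for name, exact, approx in [("bridge", 10_000, 9_999), ("rivet", 10, 9)]:
    abs_err = abs(exact - approx)      # E_x = |x - x_bar|
    rel_err = abs_err / abs(exact)     # eps_x = E_x / |x|
    print(f"{name}: E = {abs_err} cm, relative = {rel_err:.2%}")

# Example 2: x_bar = 35.148 with E_x = 0.00074
print(f"relative error = {0.00074 / 35.148:.4%}")
```

Note that both absolute errors in Example 1 equal 1 cm, yet the relative errors (0.01% vs 10%) show the bridge measurement is far better — this illustrates Note 1 above.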

Example 3: The derivative of f(x) at a particular value of x can approximately be calculated by

f′(x) ≈ (f(x + h) − f(x)) / h.

For f(x) = 7e^(0.5x) and h = 0.3, find:

(a) The exact value of f′(2)

(b) The approximate value of f′(2)

(c) The absolute and relative errors of this approximation
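A sketch of Example 3 in Python, using the forward-difference formula above:

```python
import math

f = lambda x: 7 * math.exp(0.5 * x)

h = 0.3
exact = 3.5 * math.exp(0.5 * 2)    # f'(x) = 3.5 e^(0.5x), so f'(2) = 3.5e ~ 9.5140
approx = (f(2 + h) - f(2)) / h     # forward difference ~ 10.2646
abs_err = abs(exact - approx)      # ~ 0.7506
rel_err = abs_err / abs(exact)     # ~ 0.0789, i.e. about 7.9 %
print(exact, approx, abs_err, rel_err)
```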

Example 4: Find the largest interval in which x̄ must lie in order to approximate √2

with a relative error of at most 10⁻².
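Example 4 follows directly from the definition: the condition |x̄ − √2| / √2 ≤ 10⁻² gives x̄ ∈ [√2(1 − 10⁻²), √2(1 + 10⁻²)]. A sketch in Python:

```python
import math

r = 1e-2                                     # allowed relative error
root2 = math.sqrt(2)
lo, hi = root2 * (1 - r), root2 * (1 + r)    # |x_bar - sqrt(2)| <= r * sqrt(2)
print(f"[{lo:.5f}, {hi:.5f}]")               # roughly [1.40007, 1.42836]
```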

Significant Digits and Relative Errors

The significant digits of a number are all its digits except zeros which appear to the left
of the first non-zero digit.

E.g.: 0.001604 and 30.500 have 4 and 5 significant digits respectively.

Definition: A number x̄ is said to approximate x correct to k significant digits (figures) if k is the largest nonnegative integer for which |x − x̄| / |x| ≤ 5 × 10⁻ᵏ.
Example 1: Let x = 3.29 and x̄ = 3.2. To how many significant digits does x̄ approximate x?

Example 2: Given a relative error of ε_x = 0.5. How many significant digits are there in

the approximation?

Example 3: Given x = 3.2. What are the worst possible approximations x̄ of x which are

correct to 2 significant digits?

Note: When a number is rounded, all the digits in the rounded number are considered
to be correct.
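The definition translates directly into a small checker (a sketch in Python; the function name is ours):

```python
def sig_digits(x, x_bar, cap=15):
    """Largest k >= 0 with |x - x_bar| / |x| <= 5 * 10**(-k)."""
    rel = abs(x - x_bar) / abs(x)
    k = 0
    while k < cap and rel <= 5 * 10 ** (-(k + 1)):
        k += 1
    return k

print(sig_digits(3.29, 3.2))   # Example 1: rel ~ 0.0274 <= 5e-2, so 2 significant digits
```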

1.3. Approximation of Errors

When we are solving a problem numerically, we will only have access to approximate
values and not actual values. Thus, we quantify error for such cases as follows:

Approximate absolute error = |present approximation − previous approximation|

Approximate relative error = |present approximation − previous approximation| / |present approximation|
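For instance, these quantities can be tracked across the iterations of a scheme. A sketch in Python, using a Newton iteration for √2 as the illustrative scheme (our choice, not from the text):

```python
x_prev = 1.0
for i in range(4):
    x = 0.5 * (x_prev + 2.0 / x_prev)        # next approximation of sqrt(2)
    approx_rel = abs(x - x_prev) / abs(x)    # approximate relative error
    print(f"iter {i + 1}: x = {x:.8f}, approx rel err = {approx_rel:.2e}")
    x_prev = x
```

The approximate relative error shrinks rapidly even though the true value √2 is never used, which is exactly why these approximate error measures are practical.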

1.4. Round-off Errors

Floating point representation of numbers: A normalized decimal floating-point number has the form

x = ±0.d₁d₂d₃... × 10ⁿ

where d₁, d₂, d₃, ... are decimal (base 10) digits, d₁ ≠ 0 and n is an integer.

In simple notation: x = ±r × 10ⁿ where 1/10 ≤ r < 1. Here r is called the normalized mantissa and n is an exponent.


Note: Most real numbers cannot be exactly represented on the computer. There are two
approaches to represent a general real number on the computer: Chopping and
rounding.

Example: Given two numbers x = 0.88909 × 10⁴ and y = 0.8887 × 10⁴. Compute x − y using decimal floating-point approximation with 4 significant digits in the mantissa in each of the following cases:

(a) Chopping (b) Rounding
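A sketch of this example in Python using the standard decimal module (exact base-10 arithmetic keeps binary round-off from clouding the comparison):

```python
from decimal import Decimal, ROUND_DOWN, ROUND_HALF_UP

x, y = Decimal("0.88909"), Decimal("0.8887")   # mantissas of x = 0.88909e4, y = 0.8887e4
q = Decimal("0.0001")                          # keep 4 digits in the mantissa

# (a) Chopping: drop all digits beyond the 4th
diff_chop = (x.quantize(q, rounding=ROUND_DOWN) - y.quantize(q, rounding=ROUND_DOWN)) * 10**4
# (b) Rounding: round to the 4th digit
diff_round = (x.quantize(q, rounding=ROUND_HALF_UP) - y.quantize(q, rounding=ROUND_HALF_UP)) * 10**4

print(diff_chop, diff_round)   # chopping gives 3, rounding gives 4 (the exact answer is 3.9)
```

The comparison shows why rounding is generally preferred over chopping: here rounding gives the smaller error against the exact difference 3.9.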


1.5. Truncation Error and Taylor Series
Taylor Series:

f ( x0 ) f 
f ( x)  f ( x0 )  f  ( x0 )( x  x0 )  ( x  x0 )  ( x0 )( x  x0 )  ...
2! 3!

Maclaurin Series: Taylor Series when x₀ = 0

Taylor’s Theorem:

f(x) = Pₙ(x) + Rₙ(x)

where

Pₙ(x) = f(x₀) + f′(x₀)(x − x₀) + f″(x₀)/2! (x − x₀)² + f‴(x₀)/3! (x − x₀)³ + ... + f⁽ⁿ⁾(x₀)/n! (x − x₀)ⁿ, and

Rₙ(x) = f⁽ⁿ⁺¹⁾(z)/(n + 1)! (x − x₀)ⁿ⁺¹ for some z between x₀ and x.

Here,

• Pₙ(x) is the Taylor polynomial approximation of order n about x = x₀.

• Rₙ(x) is the remainder term; it gives the truncation error in approximating f(x) by Pₙ(x).
Example: Let f(x) = eˣ. Then

(a) Find the third order Taylor polynomial approximation of f about x=0 and use it
to approximate the value of e.
(b) Use the remainder term to find an upper bound on the truncation error.
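A sketch of parts (a) and (b) in Python, using e < 3 to bound e^z on (0, 1):

```python
import math

p3 = lambda x: 1 + x + x**2 / 2 + x**3 / 6   # third-order Taylor polynomial of e^x about 0
approx = p3(1.0)                             # e ~ P3(1) = 8/3 ~ 2.6667
true_err = math.e - approx                   # actual truncation error ~ 0.0516
bound = 3 / math.factorial(4)                # |R3(1)| = e^z / 4! <= 3 / 4! = 0.125 for z in (0, 1)
print(approx, true_err, bound)
assert true_err <= bound                     # the actual error respects the remainder bound
```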
1.6. Error Propagation
(i) Functions of a single variable

Let f(x) be a function and x̄ an approximation to x with absolute error Δx. If x̄ is close to x and f(x) is continuous and differentiable, then

f(x) = f(x̄) + f′(x̄)(x − x̄) + f″(x̄)/2! (x − x̄)² + f‴(x̄)/3! (x − x̄)³ + ...

Dropping the second and higher order terms, we get

f(x) ≈ f(x̄) + f′(x̄)(x − x̄)

⇒ f(x) − f(x̄) ≈ f′(x̄)(x − x̄)

⇒ |f(x) − f(x̄)| ≈ |f′(x̄)| |x − x̄|

⇒ Δf(x̄) ≈ |f′(x̄)| Δx

(ii) Functions of Several variables

Let F = f(x₁, x₂, ..., xₙ) be a function of n variables. If Δxᵢ is the absolute error in xᵢ for each 1 ≤ i ≤ n, then

ΔF ≈ |∂f/∂x₁| Δx₁ + |∂f/∂x₂| Δx₂ + |∂f/∂x₃| Δx₃ + ... + |∂f/∂xₙ| Δxₙ

i.e. the right-hand side expression gives the maximum absolute error in F.

Note that all the partial derivatives are evaluated at the point (x̄₁, x̄₂, x̄₃, ..., x̄ₙ).

Example 1: Given x = 2.5 ± 0.01. Estimate the resulting error in f(x) = x³.
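Example 1 works out as follows (a sketch of Δf ≈ |f′(x̄)| Δx in Python):

```python
x_bar, dx = 2.5, 0.01
f_val = x_bar ** 3           # f(x_bar) = 15.625
df = 3 * x_bar ** 2 * dx     # delta_f ~ |3 x_bar^2| * dx = 18.75 * 0.01 = 0.1875
print(f"f = {f_val} ± {df}")
```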

Example 2: Find the maximum error in F where F = 5x²y / z³.

Given x = 1 ± 0.001, y = 2 ± 0.002 and z = 3 ± 0.003.

Example 3: The strain in an axial member of a square cross-section is given by ε = F / (h²E)

where F = axial force in the member, N

h = length or width of the cross-section, m

E = Young’s modulus, Pa

Given F = 72 ± 0.9 N, h = 4 ± 0.1 mm, E = 70 ± 1.5 GPa.

Find the maximum possible error in the measured strain.
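A sketch of Example 3 in Python; note the unit conversions (mm → m, GPa → Pa) before applying the error-propagation formula:

```python
F, dF = 72.0, 0.9          # axial force, N
h, dh = 4e-3, 0.1e-3       # cross-section size, m
E, dE = 70e9, 1.5e9        # Young's modulus, Pa

strain = F / (h**2 * E)    # ~ 6.43e-5

# partial derivatives of eps = F / (h^2 E)
d_dF = 1 / (h**2 * E)
d_dh = -2 * F / (h**3 * E)
d_dE = -F / (h**2 * E**2)

d_strain = abs(d_dF) * dF + abs(d_dh) * dh + abs(d_dE) * dE
print(f"strain = {strain:.3e} ± {d_strain:.2e}")   # maximum error ~ 5.40e-6
```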

Estimated error bounds associated with common arithmetic operations

Operation        Error bound

1. Sum:          Δ(x + y) = Δx + Δy

2. Difference:   Δ(x − y) = Δx + Δy

3. Product:      Δ(x·y) = |x| Δy + |y| Δx

4. Quotient:     Δ(x/y) = (|x| Δy + |y| Δx) / y²

Proof:

1. Let f(x, y) = x + y. Then ∂f/∂x = ∂f/∂y = 1

⇒ Δ(x + y) = Δf ≈ |∂f/∂x| Δx + |∂f/∂y| Δy = Δx + Δy

i.e. the absolute error in the sum of two numbers is the sum of the absolute errors in the two numbers.

f f
3. Let f ( x, y )  x . y . Then  y and x
x y

f f 
 ( x . y )   f  x  y  y  x  x  y
x y

If we consider the relative error in this case, we get


( x. y ) ( x. y ) y  x  x  y  x  y  x  y
      
x. y x. y x. y x y x y

i.e. the relative error in the product of two numbers is the sum of the relative errors in
two numbers.

Exercise: Prove (2) and (4).
