
University of Guyana

Faculty of Engineering & Technology

Engineering Mathematics V (EMT 3200)


Group 6
Numerical Approximation & Errors
❖ Date: 03/20/2020
❖ Lecturer: Ms. Elena Trim

Group Members

1. Akeem Shepherd – 1025297

2. Jevaughn Loncke – 1023445

3. Nicolai Mahangi – 1025072 (Group Leader)

4. Ravendra Narine – 1024953

5. Satanand Persaud – 1014809

6. Reuel Sukdeo – 1024505

7. Stephen Thompson – 1024891

8. Gavin Todd – 1012776

Contents

Introduction

Definition & Importance

Historical Background

Types of Numerical Approximation Methods

Errors: Absolute & Relative Errors

Types of Numerical Errors

Error Propagation

Conclusion

Further Problems

References

1. Introduction

This report explains the errors that arise when performing numerical approximation calculations and how to compute those errors. It also gives a brief introduction to numerical approximations and some common methods.

2. Definition & Importance

A numerical approximation is anything similar, but not exactly equal, to something else (Butt 2010). Numerical approximation methods solve mathematical problems while taking into account the extent of possible errors: the final results of such computations of unknown quantities are not exact but involve errors, which may arise from a combination of different effects. These methods are systematic procedures suitable for solving problems numerically on computers or calculators. Their solutions take the form of tables of numbers, graphical representations (figures), or both. Most numerical methods are iterative in nature and, for a well-chosen problem and a good initial value, will often converge to the desired answer.

Numerical techniques can yield estimates that are close to the exact analytical solution, but there is always room for error, because numerical analysis involves approximations. In many applied engineering problems, an analytical solution cannot be obtained, so we cannot compute exactly the errors associated with our numerical methods. In these scenarios we must be content with approximations or estimates of the errors.

Numerical analysis provides an invaluable extension to the knowledge base of the problem-solving engineer. Many problems have no solution formula (think of a complicated integral, a polynomial of high degree, or the interpolation of values obtained by measurements). In other situations, a complicated solution formula may exist but be practically useless. It is for these kinds of problems that a numerical method may generate a good answer. It is therefore very important that the applied mathematician, engineer, physicist, or scientist becomes familiar with the essentials of numerical analysis and its ideas, such as estimation of errors, order of convergence, and numerical methods expressed as algorithms, and is also informed about the important numerical methods.

3. Historical Background

Methods of numerical approximation have been around since the time of the Egyptian Rhind papyrus (c. 1650 BC), which describes a root-finding method for solving a simple equation. The ancient Greek mathematicians made great improvements to numerical methods. In particular, Eudoxus of Cnidus (c. 400-350 BC) created, and Archimedes (c. 285-212/211 BC) perfected, the method of exhaustion for calculating lengths, areas, and volumes of geometric figures. The method of exhaustion, when used to derive approximations, played a vital role in the development of calculus by Isaac Newton (1642-1727) and Gottfried Leibniz (1646-1716).
The invention of calculus produced accurate mathematical representations of physical reality, first in the physical sciences and later in other fields such as engineering, medicine, and business. These mathematical models often became too complex to be solved explicitly, and the need to find approximate solutions paved the way for numerical analysis. Another great development came in 1614, when the Scottish mathematician John Napier and others created logarithms. Logarithms replaced complex multiplication and division with simpler addition and subtraction operations, after the original values were converted to their corresponding logarithms.
Newton founded a number of numerical methods capable of solving a range of problems. Among his outstanding feats were finding roots (solutions) of general functions and discovering the polynomial that best fits a set of data ("polynomial interpolation"). After Newton, many great mathematicians of the 18th and 19th centuries contributed to the development of numerical analysis, foremost among them the Swiss Leonhard Euler (1707-1783), the French Joseph-Louis Lagrange (1736-1813), and the German Carl Friedrich Gauss (1777-1855).
In later years, many problems arose that required solution by approximate means, usually involving ordinary differential equations. After Newton developed his basic laws of physics, many mathematicians and physicists applied these laws to obtain mathematical models for solid and fluid mechanics. Civil and mechanical engineers still use these models as a basis today, and numerical analysis is one of their fundamental tools. Heat, electricity, and magnetism were modelled in the 19th century, and in the 20th century relativistic mechanics, quantum mechanics, and other theoretical constructs were developed to extend and build on previous ideas. One of the most popular numerical approximation techniques for working with such models involves approximating a complex, continuous surface, structure, or process by a limited number of simple elements. This technique, called the finite element method (FEM), was established by the American engineer Harold Martin; he and his colleagues helped the Boeing Company study stress forces on innovative jet wing designs in the 1950s. FEM is broadly used in the mechanical fields of stress analysis, heat transfer, fluid flow, and torsion analysis.

4. Types of Numerical Approximation Methods
There are many methods; a brief overview of some common types is given below.
1. Taylor Series Method
A Taylor series is a representation of a function as an infinite sum of terms that are calculated from the values of the function's derivatives at a single point. The Taylor series is of great importance in the study of numerical methods: essentially, it provides a means to predict a function value at one point in terms of the function value and its derivatives at another point. In particular, Taylor's theorem states that any smooth function can be approximated by a polynomial.
Taylor Series Expansion:

$$f(x + \Delta x) = f(x) + \Delta x\,\frac{df(x)}{dx} + \frac{(\Delta x)^2}{2!}\frac{d^2 f(x)}{dx^2} + \frac{(\Delta x)^3}{3!}\frac{d^3 f(x)}{dx^3} + \cdots$$

If we approximate a derivative using a forward difference scheme, the Taylor expansion gives:

$$\frac{f(x+\Delta x) - f(x)}{\Delta x} = \frac{df(x)}{dx} + \frac{\Delta x}{2!}\frac{d^2 f(x)}{dx^2} + \frac{(\Delta x)^2}{3!}\frac{d^3 f(x)}{dx^3} + \cdots$$

where the truncation error in this case is

$$\frac{\Delta x}{2!}\frac{d^2 f(x)}{dx^2} + \frac{(\Delta x)^2}{3!}\frac{d^3 f(x)}{dx^3} + \cdots$$

Because this error involves a term in $\Delta x$ and higher powers, the error is said to be of order $\Delta x$, usually written $O(\Delta x)$. As $\Delta x$ approaches 0, the true derivative is obtained.

If the backward difference scheme, given by

$$\frac{f(x) - f(x - \Delta x)}{\Delta x},$$

is applied to the Taylor expansion, it can be seen that there will be an error of similar order, $O(\Delta x)$.

However, consider the central difference scheme,

$$\frac{f(x + \Delta x) - f(x - \Delta x)}{2\Delta x}.$$

To analyse it, we use the Taylor expansion of $f(x - \Delta x)$:

$$f(x - \Delta x) = f(x) - \Delta x\,\frac{df(x)}{dx} + \frac{(\Delta x)^2}{2!}\frac{d^2 f(x)}{dx^2} - \frac{(\Delta x)^3}{3!}\frac{d^3 f(x)}{dx^3} + \cdots$$

Subtracting this expansion from that of $f(x + \Delta x)$ and dividing by $2\Delta x$, we obtain

$$\frac{f(x + \Delta x) - f(x - \Delta x)}{2\Delta x} = \frac{df(x)}{dx} + \frac{(\Delta x)^2}{3!}\frac{d^3 f(x)}{dx^3} + \cdots$$

so, using the central difference scheme, the error is $O(\Delta x^2)$. Since $\Delta x$ is small, $(\Delta x)^2 < \Delta x$. The truncation error of the central scheme is therefore much smaller than that of both the forward and backward schemes, which means the centred scheme is more accurate.
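To make these orders of error concrete, the following Python sketch (an illustration added here, not part of the original report) compares the three schemes on f(x) = sin(x) at x = 1, whose true derivative is cos(1):

```python
import math

def forward_diff(f, x, dx):
    # Forward difference: error of order O(dx)
    return (f(x + dx) - f(x)) / dx

def backward_diff(f, x, dx):
    # Backward difference: error of order O(dx)
    return (f(x) - f(x - dx)) / dx

def central_diff(f, x, dx):
    # Central difference: error of order O(dx^2)
    return (f(x + dx) - f(x - dx)) / (2 * dx)

f, x, dx = math.sin, 1.0, 0.01
true = math.cos(x)
for name, approx in [("forward", forward_diff(f, x, dx)),
                     ("backward", backward_diff(f, x, dx)),
                     ("central", central_diff(f, x, dx))]:
    print(f"{name:8s}: {approx:.8f}  error = {abs(approx - true):.2e}")
```

Halving Δx roughly halves the forward and backward errors but quarters the central error, consistent with O(Δx) and O(Δx²).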
2. Runge-Kutta Method
The Runge-Kutta methods are a family of implicit and explicit iterative methods, including the well-known Euler method, used in temporal discretization for the approximate solution of ordinary differential equations. Euler's method outlines one possible approach to solving differential equations numerically; its problem is that a small step size is needed to obtain a reasonably accurate result, so it is not very efficient. The Runge-Kutta method produces a better outcome in fewer steps.
The fourth-order Runge-Kutta (RK4) formula is:

$$y(x + h) = y(x) + \frac{1}{6}\,(F_1 + 2F_2 + 2F_3 + F_4)$$

where

$$F_1 = h f(x, y)$$
$$F_2 = h f\!\left(x + \tfrac{h}{2},\; y + \tfrac{F_1}{2}\right)$$
$$F_3 = h f\!\left(x + \tfrac{h}{2},\; y + \tfrac{F_2}{2}\right)$$
$$F_4 = h f(x + h,\; y + F_3)$$
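A minimal Python sketch of the RK4 step (illustrative; the test problem y′ = y with exact solution eˣ is our own choice):

```python
import math

def rk4_step(f, x, y, h):
    # One step of the classical fourth-order Runge-Kutta method
    F1 = h * f(x, y)
    F2 = h * f(x + h / 2, y + F1 / 2)
    F3 = h * f(x + h / 2, y + F2 / 2)
    F4 = h * f(x + h, y + F3)
    return y + (F1 + 2 * F2 + 2 * F3 + F4) / 6

f = lambda x, y: y          # test ODE: y' = y, exact solution y = e^x
x, y, h = 0.0, 1.0, 0.1
while x < 1.0 - 1e-12:
    y = rk4_step(f, x, y, h)
    x += h
print(y, math.e, abs(y - math.e))   # global error of order h^4
```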

3. Modified Euler’s Method


The Modified Euler method addresses the shortcomings of Euler's method noted above by evaluating the slope at the midpoint of the step, using a half Euler step to estimate the solution value there; this gives a better estimate of the average slope over the interval and decreases the error that Euler's method would incur. Using the equation

$$y_{k+1} \approx y_k + h f\!\left(x_k + \frac{h}{2},\; y_k + \frac{h f_k}{2}\right)$$

to estimate $y(x_{k+1})$ by $y_{k+1}$ is known as the Modified Euler method. This method is in the spirit of Euler's method, except that $f(x, y)$ is evaluated at $\left(x_k + \frac{h}{2},\, y_k + \frac{h f_k}{2}\right)$ instead of at $(x_k, y_k)$. Note that $x_k + \frac{h}{2}$ is halfway between $x_k$ and $x_k + h$.
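A companion Python sketch (illustrative) of the Modified Euler step on the same test problem y′ = y, y(0) = 1:

```python
import math

def modified_euler_step(f, x, y, h):
    # Evaluate the slope at the midpoint of the interval
    fk = f(x, y)
    return y + h * f(x + h / 2, y + h * fk / 2)

f = lambda x, y: y
x, y, h = 0.0, 1.0, 0.1
while x < 1.0 - 1e-12:
    y = modified_euler_step(f, x, y, h)
    x += h
print(y, abs(y - math.e))  # second-order accurate: error of order h^2
```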

Linear Multistep and Related Methods

1. Euler's Method

Euler's method is a first-order method: the local error (error per step) is proportional to the square of the step size, and the global error (error at a given time) is proportional to the step size.

Euler's method of approximation defines $y_{k+1}$ in terms of $y_k$ by

$$y_{k+1} = y_k + h f(x_k, y_k), \qquad k = 0, 1, 2, \ldots, n - 1$$
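The first-order behaviour can be checked numerically; in the following sketch (a hypothetical example, not from the report), halving the step size roughly halves the global error for y′ = y, y(0) = 1:

```python
import math

def euler_solve(f, x0, y0, h, x_end):
    # Integrate y' = f(x, y) from x0 to x_end with fixed step h
    x, y = x0, y0
    while x < x_end - 1e-12:
        y = y + h * f(x, y)
        x += h
    return y

f = lambda x, y: y
for h in (0.1, 0.05, 0.025):
    err = abs(euler_solve(f, 0.0, 1.0, h, 1.0) - math.e)
    print(f"h = {h:<6} global error = {err:.5f}")  # error shrinks ~ h
```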

2. Trapezoidal Method

The trapezoidal rule is an implicit second-order method, which can be considered both a Runge-Kutta method and a linear multistep method.

3. Finite Element Method (FEM)

Although not a multistep method, the finite element method deserves mention here: it is the most widely used method for solving problems arising from engineering and mathematical models. Typical problem areas of interest include the traditional fields of structural analysis, heat transfer, fluid flow, mass transport, and electromagnetic potential.

5. Errors
An error is defined as the difference between the measured value and the actual or true value.
Accuracy & Precision
The errors related to both computations and measurements can be defined based on their accuracy and
precision. Accuracy is how closely a computed or measured value matches with the true value. On the contrary,
precision means how closely individual calculated or measured values agree with each other. These notions can
be demonstrated graphically using the analogy from target practice. The bullet holes on each target in Fig. 2
below can be thought of as the forecasts of a numerical method, while the bull’s-eye represents the truth.

[Fig. 2 - Target practice analogy: (a) accurate and precise; (b) precise but inaccurate; (c) accurate but imprecise; (d) neither accurate nor precise.]

Inaccuracy refers to a systematic deviation from the truth. Even though the shots in Fig. 2(b) are more tightly grouped than those in Fig. 2(d), the two cases are equally biased, or inaccurate, because they are both centred in the upper right quadrant of the target. Imprecision, or uncertainty, on the other hand, refers to the magnitude of the scatter. Therefore, although Fig. 2(a) and Fig. 2(c) are equally accurate (i.e. centred on the bull's-eye), the first is more precise because its shots are tightly grouped.

Numerical methods should be sufficiently accurate, or unbiased, to meet the requirements of a specific engineering problem. They should also be precise enough for satisfactory engineering design.

Absolute & Relative Errors
Absolute Error refers to the amount of error in a measurement. It is the difference between the measured
or approximated value and the true value.
$$\text{True Value} = \text{Measured Value (or Approximation)} + \text{Absolute Error},\ E_t \tag{1}$$

$$\text{Absolute Error},\ E_t = \text{True Value} - \text{Measured Value (or Approximation)} \tag{2}$$

where $E_t$ is the exact value of the error; the subscript 't' means "true" error.

A drawback of this definition of absolute error is that it does not consider the order of magnitude of the value under investigation. For instance, an error of a centimetre is much more substantial if we are measuring a rivet rather than a bridge. One way to account for the magnitudes of the quantities being assessed is to normalize the error to the true value:

$$\text{True fractional relative error},\ \varepsilon_t = \frac{\text{absolute error},\ E_t}{\text{true value}} \tag{3}$$

The above formula defines relative error. The relative error signifies the ratio of the absolute error of the
measured or computed quantity to the accepted measurement (true value). In this way we can find the
magnitude of the absolute error in terms of the actual size of the measurement or approximation.
Example 1: A piece of iron rod was measured and found to be 120 cm, but the actual length of the rod is 123 cm. Find the relative error.
Solution:
➢ True value = 123 cm
➢ Measured value = 120 cm
Finding the absolute error:

$$E_t = \text{True Value} - \text{Measured Value} = 123 - 120 = 3\ \text{cm}$$

Relative error:

$$\varepsilon_t = \frac{E_t}{\text{true value}} = \frac{3}{123} = 0.024$$

Relative error can also be expressed as a percentage:

$$\varepsilon_t = \frac{\text{absolute error},\ E_t}{\text{true value}} \times 100\% \tag{4}$$
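These definitions translate directly into code. The short Python helper below (an illustrative sketch, not part of the original report) reproduces Example 1:

```python
def error_metrics(true_value, measured_value):
    # Absolute error (Eq. 2), relative error (Eq. 3) and percent error (Eq. 4)
    e_t = true_value - measured_value
    rel = e_t / true_value
    return e_t, rel, rel * 100

e_t, rel, pct = error_metrics(123, 120)
print(e_t, round(rel, 3), pct)  # 3, 0.024, ~2.4 %
```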

For Equations (2) and (3) above, E and $\varepsilon$ are subscripted with a t to indicate that the error is normalized to the true value. Usually the true value is given in the problem, as in the example above. For numerical methods, however, the true value is known only when we work with functions that can be solved analytically, as is the case when we analyze the theoretical behaviour of a specific technique on simple systems. In real-life applications, we will certainly not know the true answer in advance. For
these situations, an alternative is to normalize the error using the best available estimate of the true value, that is, the approximation itself:

$$\varepsilon_a = \frac{\text{approximate error}}{\text{approximation}} \times 100\% \tag{5}$$

where the subscript 'a' signifies that the error is normalized to an approximate value. Note also that, in real-world applications, Equation (2) cannot be used to calculate the error term for Equation (5), since the true value is unavailable. One of the challenges of numerical methods is to estimate the error in the absence of the true value. For this reason, certain numerical methods use an iterative technique to compute answers: a current approximation is made on the basis of a previous approximation, and the process is performed repeatedly, or iteratively, to calculate successively better approximations. In such scenarios, the error is often estimated as the difference between the previous and current approximations, and the percent relative error is determined by

$$\varepsilon_a = \frac{\text{current approximation} - \text{previous approximation}}{\text{current approximation}} \times 100\% \tag{6}$$

When performing computations, we are often not concerned with the sign of the error but with whether the absolute value of the percent relative error is lower than a prespecified percent tolerance $\varepsilon_s$. Therefore, it is useful to work with the absolute values of the equations above. In such cases, the computation is repeated until

$$|\varepsilon_a| < \varepsilon_s \tag{7}$$

If this relationship holds, the result is assumed to be within the prespecified acceptable level $\varepsilon_s$. It is common practice to relate the tolerance to the number of significant figures in the approximation: if the following criterion is met, it is certain that the answer is correct to at least n significant figures.

$$\varepsilon_s = (0.5 \times 10^{2-n})\% \tag{8}$$
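As an illustration of Equations (6) to (8), the following Python sketch (hypothetical; the choice of e^0.5 as a test case is ours) adds Maclaurin-series terms until |ε_a| falls below ε_s for n = 3 significant figures:

```python
import math

x, n_sig = 0.5, 3
eps_s = 0.5 * 10 ** (2 - n_sig)          # Eq. (8): tolerance, in percent
term, total, k = 1.0, 1.0, 0
while True:
    k += 1
    term *= x / k                         # next Maclaurin term x^k / k!
    previous, total = total, total + term
    eps_a = abs((total - previous) / total) * 100   # Eq. (6), in percent
    if eps_a < eps_s:                     # Eq. (7): stopping criterion
        break
print(total, math.exp(x), eps_a)          # approximation, true value, final error
```

The loop stops after six terms, at which point the approximation 1.648697... agrees with e^0.5 = 1.648721... to at least three significant figures.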

Example 2: You are given a cube of pure copper. You measure the sides of the cube to find its volume and weigh it to find its mass. When you calculate the density using your measurements, you get 8.78 g/cm³. The accepted density of copper is 8.96 g/cm³. What is your percent error?

➢ Experimental value = 8.78 g/cm³
➢ Accepted value = 8.96 g/cm³

Difference (error) = experimental value − accepted value = 8.78 − 8.96 = −0.18 g/cm³

Absolute error: $E_t = |-0.18\ \text{g/cm}^3| = 0.18\ \text{g/cm}^3$

Percent error: $\varepsilon_t = \dfrac{E_t}{\text{true value}} \times 100\% = \dfrac{0.18}{8.96} \times 100\% \approx 2\%$

6. Types of Numerical Errors

Since numerical results are approximations, and since the computer program that performs the numerical method might itself contain errors, a numerical solution needs to be scrutinized closely. There are three major sources of error in computation: roundoff errors, truncation errors, and human errors.

a) Roundoff Errors

A roundoff error is an error caused by chopping or rounding a decimal number. Chopping means discarding all digits after a certain decimal place. The resulting error is called roundoff error, irrespective of whether we chop or round. Roundoff errors arise from the fact that computers hold only a fixed number of significant figures during a calculation. Numbers such as $\pi$, $e$ or $\sqrt{8}$ cannot be expressed with a fixed number of significant figures, so they cannot be represented exactly by the computer. Additionally, since computers use a base-2 representation, they cannot exactly represent certain exact base-10 numbers. The discrepancy introduced by the omission of significant figures is referred to as roundoff error.

The general rule for rounding a number to k decimal places is as follows:

Find $5 \times 10^{-(k+1)}$, where k is the number of decimal places desired.

Add $5 \times 10^{-(k+1)}$ to the given decimal number and then chop after the kth decimal digit.

Example 3: Round the number 1.23454621 to 3 decimal places.

$k = 3$

$5 \times 10^{-(k+1)} = 5 \times 10^{-(3+1)} = 0.0005$

Now $1.23454621 + 0.0005 = 1.23504621$

Chopping 1.23504621 after the 3rd decimal place gives 1.235.

Note that chopping alone is not recommended, as the error it produces can be greater than that of rounding.
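A small Python sketch (an illustration, not from the original report) of the add-then-chop rounding rule beside plain chopping:

```python
def chop(x, k):
    # Discard all digits after the k-th decimal place
    factor = 10 ** k
    return int(x * factor) / factor

def round_k(x, k):
    # Rounding rule: add 5 * 10^-(k+1), then chop after the k-th digit
    return chop(x + 5 * 10 ** -(k + 1), k)

x = 1.23454621
print(chop(x, 3))     # 1.234  (chopping)
print(round_k(x, 3))  # 1.235  (rounding, as in Example 3)
```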

Floating point representation of numbers

Fractional numbers are usually represented in computers using a format called floating-point form. In this form, the quantity is expressed as a fractional part, called the mantissa or significand, and an integer part, called the exponent or characteristic, i.e.

$$m \times b^e$$

where m = the mantissa, b = the base of the number system being used, and e = the exponent.

Fig. 1 shows one way a floating-point number can be stored in a word: the first bit is reserved for the sign, the next 8 bits for the signed exponent, and the last 23 bits for the mantissa.

For example, the number 156.78 could be represented as 0.15678 × 10³ in a floating-point base-10 system.

The mantissa is usually normalized if it has leading zero digits. For example, suppose the fraction 1/34 = 0.029411765... is to be stored in a floating-point base-10 system that permits only four decimal places to be stored. Unnormalized, 1/34 would be stored as 0.0294 × 10⁰. However, the inclusion of the useless zero to the right of the decimal point forces us to drop a significant digit in the fifth decimal place. The number can instead be normalized to remove the leading zero by multiplying the mantissa by 10 and decreasing the exponent by 1, giving 0.2941 × 10⁻¹. We therefore retain an additional significant figure when the number is stored. The consequence of normalization is that the absolute value of m is limited, i.e.

$$\frac{1}{b} \leq m < 1$$

where b = the base. For instance, for a base-10 system m would lie in the range 0.1 to 1, and for a base-2 system in the range 0.5 to 1.

Floating-point representation allows both fractions and very large numbers to be represented on the computer. However, it has some drawbacks: floating-point numbers use more storage and take longer to process than integer numbers. More importantly, their use introduces a source of error, because the mantissa holds only a limited number of significant figures; thus, a roundoff error is introduced.
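The classic demonstration of base-2 roundoff in Python (illustrative): 0.1 has no finite binary representation, so the stored value is slightly off and the error surfaces in arithmetic:

```python
# 0.1 cannot be represented exactly in binary floating point,
# so summing it ten times does not give exactly 1.0
total = sum(0.1 for _ in range(10))
print(total == 1.0)          # False
print(f"{total:.17f}")       # 0.99999999999999989
print(f"{0.1:.20f}")         # the stored value of 0.1 is slightly too large
```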

b) Truncation Errors
Truncation errors are those that result from using an approximation in place of an exact mathematical procedure. A truncation error may also be defined as the difference between the true (analytical) derivative of a function and the derivative obtained by numerical approximation. Truncation errors are caused by the method being used; the term "truncation" originates from the fact that many numerical methods amount to using a truncated Taylor series. Because the Taylor series can express functions in a convenient form, the properties of truncation errors can be understood clearly through it. Consider the Taylor series expansion
$$e^x = 1 + x + \frac{x^2}{2!} + \cdots + \frac{x^n}{n!} + \cdots$$

If this formula is used to calculate $e^{0.3}$, we get

$$e^{0.3} = 1 + 0.3 + \frac{0.3^2}{2!} + \cdots + \frac{0.3^n}{n!} + \cdots$$
Where do we stop the calculation? How many terms do we include? Theoretically the calculation continues to infinity; there will always be more terms to add. If we stop after a finite number of terms, we will not get the exact answer. For example, if we take the first four terms as the approximation, we get

$$r = e^{0.3} \approx 1 + 0.3 + \frac{0.3^2}{2!} + \frac{0.3^3}{3!} = r'$$

For this calculation, the truncation error is

$$\text{truncation error} = r - r'$$

The truncation error depends on the particular numerical method or algorithm used to solve the problem, and it is independent of roundoff error.
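A brief Python sketch (illustrative) showing how the truncation error r − r′ of the partial sums of e^0.3 shrinks as more terms are kept:

```python
import math

x, r = 0.3, math.exp(0.3)    # r: "true" value of e^0.3
term, partial = 1.0, 0.0
for n in range(7):
    partial += term           # keep n+1 terms of the series
    term *= x / (n + 1)       # next term x^(n+1) / (n+1)!
    print(n + 1, partial, r - partial)   # truncation error r - r'
```

With four terms the partial sum is 1.3495, leaving a truncation error of about 3.6 × 10⁻⁴.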
Example 4: Use Taylor series expansions with n = 0 to 6 to approximate $f(x) = \cos(x)$ at $x_{i+1} = \frac{\pi}{3}$ on the basis of the value of $f(x)$ and its derivatives at $x_i = \frac{\pi}{4}$.

This means that

$$h = \frac{\pi}{3} - \frac{\pi}{4} = \frac{\pi}{12}$$

The true value is

$$f\left(\frac{\pi}{3}\right) = \cos\left(\frac{\pi}{3}\right) = 0.5$$

The zero-order approximation states that

$$f(x_{i+1}) \cong f(x_i)$$

Therefore,

$$f\left(\frac{\pi}{3}\right) \cong \cos\left(\frac{\pi}{4}\right) = 0.707106781$$
which represents a percentage relative error of

$$\varepsilon_t = \frac{0.5 - 0.707106781}{0.5} \times 100\% = -41.4213562\%, \quad\text{i.e. } \varepsilon_t \approx -41.4\%$$


For the first-order approximation, we add the first-derivative term, where $f'(x) = -\sin(x)$:

$$f\left(\frac{\pi}{3}\right) \cong \cos\left(\frac{\pi}{4}\right) - \sin\left(\frac{\pi}{4}\right)\left(\frac{\pi}{12}\right) = 0.521986659$$

$$\varepsilon_t = \frac{0.5 - 0.521986659}{0.5} \times 100\% = -4.3973318\%, \quad\text{i.e. } \varepsilon_t \approx -4.40\%$$


For the second-order approximation, we add the second-derivative term, where $f''(x) = -\cos(x)$:

$$f\left(\frac{\pi}{3}\right) \cong \cos\left(\frac{\pi}{4}\right) - \sin\left(\frac{\pi}{4}\right)\left(\frac{\pi}{12}\right) - \frac{\cos\left(\frac{\pi}{4}\right)}{2}\left(\frac{\pi}{12}\right)^2 = 0.497754491$$

$$\varepsilon_t = \frac{0.5 - 0.497754491}{0.5} \times 100\% = 0.4491018\%, \quad\text{i.e. } \varepsilon_t \approx 0.449\%$$


It can be seen that the inclusion of additional terms results in an improved estimate and reduces the truncation error. When the third-order term is included, the error drops to $\varepsilon_t = 0.0262\%$, meaning that 99.9738% of the true value is attained. Beyond this point, further error reduction yields diminishing returns, since an infinite number of terms would be required to attain 100% of the true value.
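The numbers in Example 4 can be reproduced with a short Python sketch (illustrative) that evaluates the Taylor polynomial of cos(x) about x_i = π/4 for increasing order:

```python
import math

xi = math.pi / 4              # expansion point x_i
h = math.pi / 12              # step to x_{i+1} = pi/3
true = math.cos(math.pi / 3)  # exact value, 0.5

# successive derivatives of cos at xi: cos, -sin, -cos, sin, ...
derivs = [math.cos(xi), -math.sin(xi), -math.cos(xi), math.sin(xi)]

approx = 0.0
for n, d in enumerate(derivs):
    approx += d * h ** n / math.factorial(n)   # add the n-th order term
    eps_t = (true - approx) / true * 100
    print(f"order {n}: approx = {approx:.9f}, eps_t = {eps_t:.4f}%")
```

The printed errors match the report: −41.42%, −4.40%, 0.449%, and 0.0262% for orders 0 through 3.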

Another example of a truncation error is the approximate calculation of derivatives. The value of the derivative
of a function at a point can be approximated by the expression:

$$\left.\frac{df}{dx}\right|_{x = x_1} \approx \frac{f(x_2) - f(x_1)}{x_2 - x_1}$$

where 𝑥2 is a point near 𝑥1 . The difference between the value of the true derivative and the value that is
calculated with this equation is called a truncation error.

c) Human Errors
These errors arise when the equations of a mathematical model are formed, from sources such as the idealistic assumptions made to simplify the model, inaccurate measurement of data, miscopying of figures, or the inaccurate representation of mathematical constants. Characteristic human errors are arithmetic errors and/or programming errors. These errors can be very troublesome to detect unless they give clearly incorrect solutions. A typical arithmetic error occurs when parentheses or the rules of order of operations (BODMAS) are misunderstood or ignored.

These errors are also called gross errors, and they occur because of human mistakes. Although this type of error is not connected with most numerical methods, it can often have a great impact on the success of modelling a system. Blunders are unavoidable to a certain extent, but there are ways to remove or avoid them as much as possible.
These include:
• Readings should be taken very carefully.
• Two or more readings should be taken of the measured quantity.

Example 5: If the constant $\pi$ occurs in an equation, should we replace it with 3.1416 or with 3.141593? Either value can be used, but each introduces a different degree of error.

7. Error Propagation
Error propagation refers to how errors present at the beginning of a computation and introduced in later steps (roundoff, for instance) propagate through the computation and affect accuracy, sometimes very rapidly. Bounds for absolute error add under addition and subtraction, whilst bounds for relative error add under multiplication and division.

Numerical methods that feed their outputs back into their inputs are particularly prone to this. Suppose we simulate the Solar System using a numerical algorithm that predicts the position of each planet at the next time step. The algorithm observes the positions of the other planets, calculates their gravitational force on the planet it is simulating, and then uses that force to determine where that planet moves next. That new position becomes the starting point for the next step of the simulation, and so on. The problem is that the algorithm's answer is used as its input on the next round of simulation. If that answer is wrong (if the planet is an inch too far to the right, say), the algorithm will faithfully work out where the planet goes from that wrong position. This is an instance where the error propagates.
This phenomenon is known variously as dynamic error, propagating error, multiplicative error, and so on. The amount of error that occurs in each individual step of such a process is sometimes called local error; the accrued result of those local errors over the whole run is sometimes called global error. Global, propagating error is different from additive error, such as the distortion introduced by a badly focused lens, which enters only once and is not fed back into the input of the method.
Error propagation, sometimes referred to as propagation of uncertainty, is defined as the effect of the variables' errors (random errors) on the uncertainty of a function. A value and its error/uncertainty are expressed as an interval $x \pm u$.

The uncertainty 𝑢 can be expressed and defined in a number of ways including:

• Defined by the absolute error $\Delta x$

• Defined by the relative error $\frac{\Delta x}{x}$

• Quantified in terms of the standard deviation $\sigma$

Theorems:
a) In addition and subtraction, a bound for the error of the result is given by the sum of the error bounds for the terms.
b) In multiplication and division, a bound for the relative error of the result is given (approximately) by the sum of the bounds for the relative errors of the given numbers.
Proof of Theorem (a) for subtraction:
Let $x = x' + \epsilon_x$ and $y = y' + \epsilon_y$, with $|\epsilon_x| \leq \beta_x$ and $|\epsilon_y| \leq \beta_y$.

Then for the error $\epsilon$ of the difference we get

$$|\epsilon| = |x - y - (x' - y')| = |x - x' - (y - y')| = |\epsilon_x - \epsilon_y| \leq |\epsilon_x| + |\epsilon_y| \leq \beta_x + \beta_y$$

The proof for the sum is similar.

Proof of theorem (b) for multiplication:


To find the relative error $\epsilon_r$ of $x'y'$ from the relative errors $\epsilon_{rx}$ and $\epsilon_{ry}$ of $x'$ and $y'$ and the bounds $\beta_x$, $\beta_y$:

$$|\epsilon_r| = \left|\frac{xy - x'y'}{xy}\right| = \left|\frac{xy - (x - \epsilon_x)(y - \epsilon_y)}{xy}\right| = \left|\frac{\epsilon_x y + \epsilon_y x - \epsilon_x \epsilon_y}{xy}\right| \approx \left|\frac{\epsilon_x y + \epsilon_y x}{xy}\right| \leq \left|\frac{\epsilon_x}{x}\right| + \left|\frac{\epsilon_y}{y}\right| = |\epsilon_{rx}| + |\epsilon_{ry}|$$

Functions of a single variable


Suppose that we have a function $f(x)$ that depends on a single independent variable $x$, and assume that $x'$ is an approximation of $x$. We would like to evaluate the effect of the difference between $x$ and $x'$ on the value of the function, i.e. we would like to estimate

$$\Delta f(x') = |f(x) - f(x')|$$

The problem with evaluating $\Delta f(x')$ is that $f(x)$ is unknown, because $x$ is unknown. We can overcome this difficulty if $x'$ is close to $x$ and $f(x')$ is continuous and differentiable. If these conditions hold, a Taylor series can be used to compute $f(x)$ near $f(x')$:

$$f(x) = f(x') + f'(x')(x - x') + \frac{f''(x')}{2}(x - x')^2 + \cdots$$

Dropping the second- and higher-order terms and rearranging gives

$$f(x) - f(x') \cong f'(x')(x - x')$$

or

$$\Delta f(x') = |f'(x')|\,\Delta x'$$

where $\Delta f(x') = |f(x) - f(x')|$ signifies an estimate of the error of the function and $\Delta x' = |x - x'|$ represents an estimate of the error of $x$. This equation provides the capability to approximate the error in $f(x)$ given the derivative of the function and an estimate of the error in the independent variable.

Example 6: Given a value of $x' = 2.5$ with an error of $\Delta x' = 0.01$, estimate the resulting error in the function $f(x) = x^3$.

Using the equation $\Delta f(x') = |f'(x')|\,\Delta x'$:

$$\Delta f(x') \cong 3(2.5)^2(0.01) = 0.1875$$

Because $f(2.5) = (2.5)^3 = 15.625$, we can predict that

$$f(2.5) = 15.625 \pm 0.1875$$

i.e. the true value lies between 15.4375 and 15.8125.
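A small Python sketch (illustrative) of this first-order error propagation, checking the prediction of Example 6 by brute force at the interval endpoints:

```python
def propagate(fprime, x, dx):
    # First-order estimate: delta_f = |f'(x)| * delta_x
    return abs(fprime(x)) * dx

f = lambda x: x ** 3
fprime = lambda x: 3 * x ** 2

x, dx = 2.5, 0.01
df = propagate(fprime, x, dx)
print(f(x), "+/-", df)          # 15.625 +/- 0.1875
print(f(x - dx), f(x + dx))     # brute force: 15.4382 .. 15.8133, close to the estimate
```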


More on rules for Propagation of Errors

Two measured quantities, $x$ and $y$, have uncertainties $\Delta x$ and $\Delta y$ respectively, so the measured values lie in the intervals $x \pm \Delta x$ and $y \pm \Delta y$. From the measured quantities a new quantity, $z$, is calculated, and its uncertainty, $\Delta z$, needs to be determined. The rules below cover:

• Addition and Subtraction

• Multiplication and Division

• Product of Powers

• Other Functions

Addition and Subtraction: $z = x + y$ and $z = x - y$

Assuming the worst case, in which the uncertainties combine to push $z$ as far from its true value as possible, we calculate $\Delta z = |\Delta x| + |\Delta y|$; for independent random errors, the statistical estimate $\Delta z = \sqrt{(\Delta x)^2 + (\Delta y)^2 + \cdots}$ is used instead. The same results hold in both cases (addition and subtraction).

Multiplication and Division: $z = xy$ and $z = x/y$

We can derive the relation for multiplication by taking the largest values for $x$ and $y$, i.e. $z + \Delta z = (x + \Delta x)(y + \Delta y) = xy + x\Delta y + y\Delta x + \Delta x\Delta y$. Usually $\Delta x \ll x$ and $\Delta y \ll y$, so the final term can be neglected because the product of two small quantities is smaller still. Therefore $z = xy$ and $\Delta z = x\Delta y + y\Delta x$, from which we can calculate the relative error as

$$\frac{\Delta z}{z} = \frac{\Delta x}{x} + \frac{\Delta y}{y} \quad\text{or}\quad \frac{\Delta z}{z} = \sqrt{\left(\frac{\Delta x}{x}\right)^2 + \left(\frac{\Delta y}{y}\right)^2 + \cdots}$$

Product of Powers: $z = x^m y^n$

The results in this case are

$$\frac{\Delta z}{z} = |m|\,\frac{\Delta x}{x} + |n|\,\frac{\Delta y}{y} \quad\text{or}\quad \frac{\Delta z}{z} = \sqrt{\left(\frac{m\,\Delta x}{x}\right)^2 + \left(\frac{n\,\Delta y}{y}\right)^2 + \cdots}$$

Other Functions

These functions include any mixture of addition, subtraction, multiplication and division, as well as functions such as $z = \sin(x)$, to which the single-variable rule derived earlier, $\Delta z = |dz/dx|\,\Delta x$, applies.
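A short Python sketch (illustrative; the numerical values are arbitrary) applying the worst-case and statistical rules above to a product z = xy:

```python
import math

def add_sub_err(dx, dy):
    # Worst-case uncertainty for z = x + y or z = x - y
    return abs(dx) + abs(dy)

def mul_div_rel_err(x, dx, y, dy):
    # Worst-case relative uncertainty for z = x*y or z = x/y
    return abs(dx / x) + abs(dy / y)

x, dx = 4.0, 0.1
y, dy = 3.0, 0.05

z = x * y
dz = mul_div_rel_err(x, dx, y, dy) * z   # convert relative back to absolute
print(f"z = {z} +/- {dz:.3f}")           # worst case: 12.0 +/- 0.5

# statistical (independent-error) version of the same rule
dz_stat = math.sqrt((dx / x) ** 2 + (dy / y) ** 2) * z
print(f"z = {z} +/- {dz_stat:.3f}")      # smaller, about 0.36
```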

8. Conclusion
The main types of errors in numerical approximation are roundoff, truncation, and human errors. There is a variety of numerical methods, and the method chosen determines what type of error one will encounter. Errors can also propagate through a computation, thereby affecting the accuracy of the results obtained.

9. Further Problems
Application Questions

Civil Engineering

1. Carl hired a contractor to lay out a plan and construct a three-story house with three rooms on each floor. One of the rooms had specific dimensions. When the contractor had his workers complete the job, he took measurements of the said room and found it to be 12.5 (0.1) ft × 10.3 (0.1) ft × 7.8 (0.1) ft, where the uncertainty is the digit given in parentheses. Calculate the volume and approximate error of the room.
Answer: 1004.25 ft³ & 𝟑.𝟐𝟑 × 𝟏𝟎⁻⁴
2. At a distance of 120 ft from the foot of a tower, the elevation of its top is 60°. If the possible errors in measuring the distance and elevation are 1 inch and 1 minute, find the approximate error in the calculated height.
Answer: ∆𝒉 = 𝟎.𝟐𝟖𝟒 ft

Mechanical Engineering

3. An object was weighed on a dial balance and its mass was found to be 26.10 ± 0.01 g. What is the relative error as a percentage?
Answer: ± 0.0383%

Electrical Engineering

4. The voltage in a high-voltage transmission line is stated to be 2.4 MV, while the actual voltage may range from 2.1 MV to 2.7 MV. What are the maximum absolute and relative errors of the voltage?
Answer: 0.3 MV & 0.14
5. A capacitor is labelled as 100 mF whereas it is actually 108.2532 mF. What are the absolute and relative
errors of the label?
Answer: 8.2 mF & 0.076

10. REFERENCES
Websites
• https://www.britannica.com/science/numerical-analysis/Historical-background, accessed 17 March 2020.

• https://www.statisticshowto.datasciencecentral.com/relative-error/, accessed 17 March 2020.

• https://ece.uwaterloo.ca/~dwharder/NumericalAnalysis/01Error/Error/, accessed 18 March 2020.

• https://www.sanfoundry.com/engineering-mathematics-questions-answers-errors-approximations/, accessed 17 March 2020.

• https://www.watelectrical.com/different-types-of-errors-in-measurement-and-measurement-error-calculation/

• https://www.britannica.com/science/numerical-analysis/Approximation-theory, accessed 18 March 2020.

• http://hplgit.github.io/INF5620/doc/pub/H14/trunc/html/._main_trunc001.html, accessed 18 March 2020.

• http://home.iitk.ac.in/~pranab/ESO208/rajesh/03-04/error3.pdf, accessed 17 March 2020.

• https://www.slideshare.net/cyndyArgote/numerical-approximation, accessed 17 March 2020.

Books

✓ Numerical Methods for Engineers, Seventh Edition, by Steven C. Chapra (Berger Chair in Computing and Engineering, Tufts University) and Raymond P. Canale (Professor Emeritus of Civil Engineering, University of Michigan).

✓ Advanced Engineering Mathematics, Tenth Edition, by Erwin Kreyszig. John Wiley & Sons, Inc.

✓ Advanced Engineering Mathematics, Seventh Edition, by Peter V. O'Neil (The University of Alabama at Birmingham).

✓ Introduction to Numerical Analysis Using MATLAB, 2010, by Rizwan Butt. Jones and Bartlett Publishers, LLC.

