5 Numerical Differentiation
5.1 Introduction
Numerical differentiation deals with the following problem: We are given the function y = f(x) and wish to obtain one of its derivatives at the point x = x_k. The term "given" means that we either have an algorithm for computing the function, or we possess a set of discrete data points (x_i, y_i), i = 0, 1, ..., n. In either case, we have access to a finite number of (x, y) data pairs from which to compute the derivative. If you suspect by now that numerical differentiation is related to interpolation, you are correct: one means of finding the derivative is to approximate the function locally by a polynomial and then to differentiate it. An equally effective tool is the Taylor series expansion of f(x) about the point x_k, which has the advantage of providing us with information about the error involved in the approximation.
Numerical differentiation is not a particularly accurate process. It suffers from a conflict between roundoff error (caused by limited machine precision) and the error inherent in interpolation. For this reason, a derivative of a function can never be computed with the same precision as the function itself.
5.2 Finite Difference Approximations

The derivation of the finite difference approximations for the derivatives of f(x) is based on forward and backward Taylor series expansions of f(x) about x, such as

f(x + h) = f(x) + hf'(x) + \frac{h^2}{2!}f''(x) + \frac{h^3}{3!}f'''(x) + \frac{h^4}{4!}f^{(4)}(x) + \cdots   (a)

f(x − h) = f(x) − hf'(x) + \frac{h^2}{2!}f''(x) − \frac{h^3}{3!}f'''(x) + \frac{h^4}{4!}f^{(4)}(x) − \cdots   (b)

f(x + 2h) = f(x) + 2hf'(x) + \frac{(2h)^2}{2!}f''(x) + \frac{(2h)^3}{3!}f'''(x) + \frac{(2h)^4}{4!}f^{(4)}(x) + \cdots   (c)

f(x − 2h) = f(x) − 2hf'(x) + \frac{(2h)^2}{2!}f''(x) − \frac{(2h)^3}{3!}f'''(x) + \frac{(2h)^4}{4!}f^{(4)}(x) − \cdots   (d)
The sums and differences of these series are

f(x + h) + f(x − h) = 2f(x) + h^2 f''(x) + \frac{h^4}{12}f^{(4)}(x) + \cdots   (e)

f(x + h) − f(x − h) = 2hf'(x) + \frac{h^3}{3}f'''(x) + \cdots   (f)

f(x + 2h) + f(x − 2h) = 2f(x) + 4h^2 f''(x) + \frac{4h^4}{3}f^{(4)}(x) + \cdots   (g)

f(x + 2h) − f(x − 2h) = 4hf'(x) + \frac{8h^3}{3}f'''(x) + \cdots   (h)
Note that the sums contain only even derivatives, whereas the differences retain just
the odd derivatives. Equations (a)–(h) can be viewed as simultaneous equations that
can be solved for various derivatives of f (x). The number of equations involved and
the number of terms kept in each equation depend on the order of the derivative and
the desired degree of accuracy.
For example, solving Eq. (f) for f'(x) gives

f'(x) = \frac{f(x + h) − f(x − h)}{2h} − \frac{h^2}{6}f'''(x) − \cdots

or

f'(x) = \frac{f(x + h) − f(x − h)}{2h} + O(h^2)   (5.1)
which is called the first central difference approximation for f'(x). The term O(h^2) reminds us that the truncation error behaves as h^2.
Similarly, Eq. (e) yields the first central difference approximation for f''(x):

f''(x) = \frac{f(x + h) − 2f(x) + f(x − h)}{h^2} − \frac{h^2}{12}f^{(4)}(x) − \cdots

or

f''(x) = \frac{f(x + h) − 2f(x) + f(x − h)}{h^2} + O(h^2)   (5.2)
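In Python, Eqs. (5.1) and (5.2) can be sketched as follows (an illustrative fragment, not part of the original text; sin x serves as an arbitrary test function whose exact derivatives are known):

#!/usr/bin/python
## central_diff - illustrative sketch, not from the original text
import numpy as np

def d1_central(f, x, h):
    # Eq. (5.1): first central difference for f'(x), truncation error O(h^2)
    return (f(x + h) - f(x - h))/(2.0*h)

def d2_central(f, x, h):
    # Eq. (5.2): central difference for f''(x), truncation error O(h^2)
    return (f(x + h) - 2.0*f(x) + f(x - h))/h**2

x, h = 0.8, 0.1   # arbitrary test point and step
print(d1_central(np.sin, x, h), np.cos(x))    # approximate vs. exact f'(x)
print(d2_central(np.sin, x, h), -np.sin(x))   # approximate vs. exact f''(x)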
We can derive the approximations for higher derivatives in the same manner. For example, Eqs. (a) and (c) yield

f''(x) = \frac{f(x + 2h) − 2f(x + h) + f(x)}{h^2} + O(h)   (5.7)
The third and fourth derivatives can be derived in a similar fashion. The results
are shown in Tables 5.2a and 5.2b.
Noncentral difference formulas of O(h^2) require that more terms be retained in the Taylor series. As an illustration, we derive the expression for f'(x), starting with the expansions

f(x + h) = f(x) + hf'(x) + \frac{h^2}{2}f''(x) + \frac{h^3}{6}f'''(x) + \frac{h^4}{24}f^{(4)}(x) + \cdots

f(x + 2h) = f(x) + 2hf'(x) + 2h^2 f''(x) + \frac{4h^3}{3}f'''(x) + \frac{2h^4}{3}f^{(4)}(x) + \cdots

We eliminate f''(x) by multiplying the first equation by 4 and subtracting it from the second equation. The result is

f(x + 2h) − 4f(x + h) = −3f(x) − 2hf'(x) + \frac{2h^3}{3}f'''(x) + \cdots
Therefore,

f'(x) = \frac{−f(x + 2h) + 4f(x + h) − 3f(x)}{2h} + \frac{h^2}{3}f'''(x) + \cdots

or

f'(x) = \frac{−f(x + 2h) + 4f(x + h) − 3f(x)}{2h} + O(h^2)   (5.8)
Equation (5.8) is called the second forward finite difference approximation for f'(x).
Derivation of finite difference approximations for higher derivatives involves additional Taylor series. Thus the forward difference approximation for f''(x) uses the series for f(x + h), f(x + 2h), and f(x + 3h); the approximation for f'''(x) involves Taylor expansions for f(x + h), f(x + 2h), f(x + 3h), f(x + 4h); and so on. As you can see, the computations for high-order derivatives can become rather tedious. The results for both the forward and backward finite differences are summarized in Tables 5.3a and 5.3b.
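To see the gain in accuracy, the following sketch (our own, again with sin x as an arbitrary test function) compares the first forward difference of O(h) with Eq. (5.8):

#!/usr/bin/python
## forward_diff - illustrative sketch, not from the original text
import numpy as np

def d1_forward1(f, x, h):
    # first forward difference for f'(x), truncation error O(h)
    return (f(x + h) - f(x))/h

def d1_forward2(f, x, h):
    # Eq. (5.8): second forward difference for f'(x), error O(h^2)
    return (-f(x + 2.0*h) + 4.0*f(x + h) - 3.0*f(x))/(2.0*h)

x = 0.8   # arbitrary test point
for h in [0.1, 0.05, 0.025]:
    # halving h roughly halves the O(h) error but quarters the O(h^2) error
    print(h, abs(d1_forward1(np.sin, x, h) - np.cos(x)),
             abs(d1_forward2(np.sin, x, h) - np.cos(x)))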
To illustrate the errors, let us compute the second derivative of f(x) = e^{−x} at x = 1 from the central difference formula, Eq. (5.2). We carry out the calculations with six- and eight-digit precision, using different values of h. The results, shown in Table 5.4, should be compared with f''(1) = e^{−1} = 0.367 879 44.
In the six-digit computations, the optimal value of h is 0.08, yielding a result accurate to three significant figures. Hence three significant figures are lost due to a combination of truncation and roundoff errors. Above optimal h, the dominant error is caused by truncation; below it, the roundoff error becomes pronounced. The best result obtained with the eight-digit computation is accurate to four significant figures. Because the extra precision decreases the roundoff error, the optimal h is smaller (about 0.02) than in the six-figure calculations.
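The tradeoff can be reproduced with a short experiment. The sketch below is illustrative only: rounding to six significant figures merely mimics, rather than duplicates, true limited-precision arithmetic, and the values of h are our choice.

#!/usr/bin/python
## roundoff_demo - illustrative sketch of the tradeoff behind Table 5.4
import numpy as np

def sig6(z):
    # round z to six significant figures to mimic six-digit arithmetic
    return float('%.5e' % z)

x, exact = 1.0, np.exp(-1.0)
for h in [0.64, 0.32, 0.16, 0.08, 0.04, 0.02, 0.01]:
    f2 = (sig6(np.exp(-(x + h))) - 2.0*sig6(np.exp(-x))
          + sig6(np.exp(-(x - h))))/h**2           # Eq. (5.2)
    print('h = %5.2f   error = %10.3e' % (h, f2 - exact))

As h decreases, the printed error first shrinks (truncation dominates) and then grows again (roundoff dominates).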
5.3 Richardson Extrapolation

Richardson extrapolation is a simple method for boosting the accuracy of certain numerical procedures, including finite difference approximations (we also use it later in other applications).
Suppose that we have an approximate means of computing some quantity G. Moreover, assume that the result depends on a parameter h. Denoting the approximation by g(h), we have G = g(h) + E(h), where E(h) represents the error. Richardson extrapolation can remove the error, provided that it has the form E(h) = ch^p, c and p being constants. We start by computing g(h) with some value of h, say h = h_1. In that case we have

G = g(h_1) + ch_1^p   (i)

Repeating the calculation with h = h_2, we get

G = g(h_2) + ch_2^p   (j)

Eliminating c from Eqs. (i) and (j) and solving for G, we obtain

G = \frac{(h_1/h_2)^p \, g(h_2) − g(h_1)}{(h_1/h_2)^p − 1}

It is common practice to use h_2 = h_1/2, in which case

G = \frac{2^p \, g(h_1/2) − g(h_1)}{2^p − 1}   (5.9)
Applying Eq. (5.9) with p = 2 to the central difference results of Table 5.4, for example, yields an approximation of (e^{−x})'' with the error O(h^4). Note that it is as accurate as the best result obtained with eight-digit computations in Table 5.4.
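A minimal sketch of the procedure, assuming p = 2 for the central difference approximation of Eq. (5.2) and taking h_1 = 0.64 as an arbitrary starting step:

#!/usr/bin/python
## richardson_demo - illustrative sketch, assuming p = 2
import numpy as np

def g(h):
    # central difference approximation of (e^-x)'' at x = 1, Eq. (5.2)
    x = 1.0
    return (np.exp(-(x + h)) - 2.0*np.exp(-x) + np.exp(-(x - h)))/h**2

h1, p = 0.64, 2                            # truncation error of g behaves as h^2
G = (2**p*g(h1/2.0) - g(h1))/(2**p - 1)    # Eq. (5.9) with h2 = h1/2
print(G, np.exp(-1.0))                     # extrapolated vs. exact value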
EXAMPLE 5.1
Given the evenly spaced data points
compute f'(x) and f''(x) at x = 0 and 0.2, using finite difference approximations of O(h^2).

Solution. From the forward difference formulas in Table 5.3a we get
EXAMPLE 5.2
Use the data in Example 5.1 to compute f''(0) as accurately as you can.
EXAMPLE 5.3
[Figure: four-bar linkage ABCD with links a = AB, b = BC, c = CD, d = AD and angles α (at A) and β (at B)]
The linkage shown has the dimensions a = 100 mm, b = 120 mm, c = 150 mm, and d = 180 mm. It can be shown by geometry that the angles α and β are related by a transcendental equation; solving it numerically yields the following table:
α (deg) 0 5 10 15 20 25 30
β (rad) 1.6595 1.5434 1.4186 1.2925 1.1712 1.0585 0.9561
If link AB rotates with the constant angular velocity of 25 rad/s, use finite difference approximations of O(h^2) to tabulate the angular velocity dβ/dt of link BC against α.
Solution. The angular velocity of BC follows from the chain rule, dβ/dt = (dβ/dα)(dα/dt), where dα/dt = 25 rad/s and dβ/dα is computed from finite differences of O(h^2) with h = 5° = π/36 rad (central differences at the interior points, noncentral differences at the endpoints). The results are:

α (deg)     0       5       10      15      20      25      30
β̇ (rad/s)  −32.01  −34.51  −35.94  −35.44  −33.52  −30.81  −27.86
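A sketch of the computation (our own, not the book's program) that reproduces the tabulated values:

#!/usr/bin/python
## example5_3 - a sketch of the computation, not the book's program
import numpy as np

beta = np.array([1.6595, 1.5434, 1.4186, 1.2925, 1.1712, 1.0585, 0.9561])
h = np.radians(5.0)                  # spacing of alpha, converted to rad
dBdA = np.zeros(len(beta))
# interior points: first central differences, Eq. (5.1)
dBdA[1:-1] = (beta[2:] - beta[:-2])/(2.0*h)
# endpoints: second forward and backward differences of O(h^2), Eq. (5.8)
dBdA[0] = (-beta[2] + 4.0*beta[1] - 3.0*beta[0])/(2.0*h)
dBdA[-1] = (beta[-3] - 4.0*beta[-2] + 3.0*beta[-1])/(2.0*h)
print(dBdA*25.0)   # chain rule: d(beta)/dt = (d(beta)/d(alpha))*(d(alpha)/dt)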
5.4 Derivatives by Interpolation

If f(x) is given as a set of discrete data points, interpolation can be a very effective means of computing its derivatives. The idea is to approximate the derivative of f(x) by the derivative of the interpolant. This method is particularly useful if the data points are located at uneven intervals of x, when the finite difference approximations listed in Section 5.2 are not applicable.¹
Polynomial Interpolant

The idea here is simple: Fit the polynomial of degree n

P_n(x) = a_0 + a_1 x + a_2 x^2 + \cdots + a_n x^n

through n + 1 data points and then evaluate its derivatives at the given x. As pointed out in Section 3.2, it is generally advisable to limit the degree of the polynomial to less than six in order to avoid spurious oscillations of the interpolant. Because these oscillations are magnified with each differentiation, their effect can be devastating.
¹ It is possible to derive finite difference approximations for unevenly spaced data, but they would not be as accurate as the formulas derived in Section 5.2.
In view of this limitation, the interpolation is usually a local one, involving no more
than a few nearest-neighbor data points.
For evenly spaced data points, polynomial interpolation and finite difference
approximations produce identical results. In fact, the finite difference formulas are
equivalent to polynomial interpolation.
Several methods of polynomial interpolation were introduced in Section 3.2. Unfortunately, none are suited for the computation of derivatives of the interpolant. The method that we need is one that determines the coefficients a_0, a_1, ..., a_n of the polynomial. There is only one such method discussed in Chapter 3: the least-squares fit. Although this method is designed mainly for smoothing of data, it will carry out interpolation if we use m = n in Eq. (3.22); recall that m is the degree of the interpolating polynomial and n + 1 represents the number of data points to be fitted. If the data contain noise, then the least-squares fit should be used in the smoothing mode; that is, with m < n. After the coefficients of the polynomial have been found, the polynomial and its first two derivatives can be evaluated efficiently by the function evalPoly listed in Section 4.7.
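As an illustrative sketch, numpy's polyfit, polyder, and polyval can stand in for the combination of the least-squares fit and evalPoly described above:

#!/usr/bin/python
## poly_deriv - illustrative sketch; numpy stands in for the book's
## least-squares fit and evalPoly
import numpy as np

def polyDerivs(xData, yData, m, x):
    # fit a polynomial of degree m to the data (interpolation when
    # m = len(xData) - 1, smoothing when m is smaller) and return
    # p(x), p'(x), and p''(x)
    c = np.polyfit(xData, yData, m)       # coefficients, highest power first
    return (np.polyval(c, x),
            np.polyval(np.polyder(c, 1), x),
            np.polyval(np.polyder(c, 2), x))

With m = n the call interpolates the data; with m < n it smooths them first, as recommended for noisy data.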
Cubic Spline Interpolant

Because of its stiffness, the cubic spline is a good global interpolant; moreover, it is easy to differentiate. The first step is to determine the second derivatives k_i of the spline at the knots, which can be done with the function curvatures in Section 3.3. The first and second derivatives of the interpolant over the segment between knots i and i + 1 are then

f'_{i,i+1}(x) = \frac{k_i}{6}\left[\frac{3(x − x_{i+1})^2}{x_i − x_{i+1}} − (x_i − x_{i+1})\right] − \frac{k_{i+1}}{6}\left[\frac{3(x − x_i)^2}{x_i − x_{i+1}} − (x_i − x_{i+1})\right] + \frac{y_i − y_{i+1}}{x_i − x_{i+1}}   (5.10)

f''_{i,i+1}(x) = k_i\frac{x − x_{i+1}}{x_i − x_{i+1}} − k_{i+1}\frac{x − x_i}{x_i − x_{i+1}}   (5.11)
EXAMPLE 5.4
Given the data

x      1.5     1.9     2.1     2.4     2.6     3.1
f(x)   1.0628  1.3961  1.5432  1.7349  1.8423  2.0397

compute f'(2) and f''(2) using (1) polynomial interpolation over three nearest-neighbor points, and (2) a natural cubic spline interpolant spanning all the data points.
Solution of Part (1). The three data points nearest x = 2 are those at x = 1.9, 2.1, and 2.4. Fitting the parabola P_2(x) = a_0 + a_1 x + a_2 x^2 to these points by the method of least squares leads to the normal equations

\begin{bmatrix} 3 & 6.4 & 13.78 \\ 6.4 & 13.78 & 29.944 \\ 13.78 & 29.944 & 65.6578 \end{bmatrix} \begin{bmatrix} a_0 \\ a_1 \\ a_2 \end{bmatrix} = \begin{bmatrix} 4.6742 \\ 10.0571 \\ 21.8385 \end{bmatrix}

which yield a = \begin{bmatrix} −0.7714 & 1.5075 & −0.1930 \end{bmatrix}^T.

The derivatives of the interpolant are P_2'(x) = a_1 + 2a_2 x and P_2''(x) = 2a_2. Therefore,

f'(2) ≈ P_2'(2) = 1.5075 + 2(−0.1930)(2) = 0.7355

f''(2) ≈ P_2''(2) = 2(−0.1930) = −0.3860
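A small numpy sketch (not part of the original solution) confirms these numbers:

#!/usr/bin/python
## example5_4a - a sketch that solves the normal equations with numpy
import numpy as np

A = np.array([[ 3.0 ,  6.4  , 13.78  ],
              [ 6.4 , 13.78 , 29.944 ],
              [13.78, 29.944, 65.6578]])
b = np.array([4.6742, 10.0571, 21.8385])
a = np.linalg.solve(A, b)                 # a = [a0, a1, a2]
print(a)                                  # approx. [-0.7714, 1.5075, -0.1930]
print(a[1] + 4.0*a[2], 2.0*a[2])          # f'(2) and f''(2)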
Solution of Part (2). We must first determine the second derivatives ki of the spline
at its knots, after which the derivatives of f (x) can be computed from Eqs. (5.10) and
(5.11). The first part can be carried out by the following small program:
#!/usr/bin/python
## example5_4
# Compute the second derivatives (curvatures) k_i of the natural
# cubic spline at its knots, using the book's cubicSpline module.
from cubicSpline import curvatures
from LUdecomp3 import *
import numpy as np

xData = np.array([1.5, 1.9, 2.1, 2.4, 2.6, 3.1])
yData = np.array([1.0628, 1.3961, 1.5432, 1.7349, 1.8423, 2.0397])
print(curvatures(xData, yData))
input("Press return to exit")
Because x = 2 lies between knots 1 and 2, we must use Eqs. (5.10) and (5.11) with
i = 1. This yields
f'(2) ≈ f'_{1,2}(2) = \frac{k_1}{6}\left[\frac{3(x − x_2)^2}{x_1 − x_2} − (x_1 − x_2)\right] − \frac{k_2}{6}\left[\frac{3(x − x_1)^2}{x_1 − x_2} − (x_1 − x_2)\right] + \frac{y_1 − y_2}{x_1 − x_2}

= \frac{(−0.4258)}{6}\left[\frac{3(2 − 2.1)^2}{(−0.2)} − (−0.2)\right] − \frac{(−0.3774)}{6}\left[\frac{3(2 − 1.9)^2}{(−0.2)} − (−0.2)\right] + \frac{1.3961 − 1.5432}{(−0.2)}

= 0.7351

f''(2) ≈ f''_{1,2}(2) = k_1\frac{x − x_2}{x_1 − x_2} − k_2\frac{x − x_1}{x_1 − x_2}

= (−0.4258)\frac{2 − 2.1}{(−0.2)} − (−0.3774)\frac{2 − 1.9}{(−0.2)} = −0.4016
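The same evaluation can be coded directly from Eqs. (5.10) and (5.11); the following sketch (ours, not the book's) reproduces the two results:

#!/usr/bin/python
## example5_4b - a sketch of Eqs. (5.10) and (5.11)
def splineDerivs(x, xi, xj, yi, yj, ki, kj):
    # first and second derivatives of the spline segment between
    # knots i and j = i + 1, from Eqs. (5.10) and (5.11)
    d = xi - xj
    f1 = ki/6.0*(3.0*(x - xj)**2/d - d) \
       - kj/6.0*(3.0*(x - xi)**2/d - d) + (yi - yj)/d
    f2 = ki*(x - xj)/d - kj*(x - xi)/d
    return f1, f2

# segment between knots 1 and 2 of Example 5.4, evaluated at x = 2
print(splineDerivs(2.0, 1.9, 2.1, 1.3961, 1.5432, -0.4258, -0.3774))
# prints approximately (0.7351, -0.4016)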
Note that the solutions for f'(2) in parts (1) and (2) differ only in the fourth significant figure, but the values of f''(2) are much farther apart. This is not unexpected, considering the general rule: The higher the order of the derivative, the lower the precision with which it can be computed. It is impossible to tell which of the two results is better without knowing the expression for f(x). In this particular problem, the data points fall on the curve f(x) = x^2 e^{−x/2}, so that the "true" values of the derivatives are f'(2) = 0.7358 and f''(2) = −0.3679.
EXAMPLE 5.5
Determine f'(0) and f'(1) from the following noisy data:
Solution. We used the program listed in Example 3.10 to find the best polynomial fit (in the least-squares sense) to the data. The program was run three times, with polynomials of different degrees.
Based on the standard deviation, the cubic seemed to be the best candidate for the interpolant. Before accepting the result, we compare the plots of the data points and the interpolant (see the following figure). The fit appears to be satisfactory.
[Figure: the data points and the cubic least-squares fit, f(x) plotted against x for 0 ≤ x ≤ 1.4]
The interpolant is

f(x) ≈ a_0 + a_1 x + a_2 x^2 + a_3 x^3

so that

f'(x) ≈ a_1 + 2a_2 x + 3a_3 x^2

Therefore,

f'(0) ≈ a_1 = 1.093
In general, derivatives obtained from noisy data are at best rough approximations. In this problem, the data represent f(x) = (x + 2)/cosh x with added random noise. Thus f'(x) = [1 − (x + 2) tanh x]/cosh x, so that the "correct" derivatives are f'(0) = 1.000 and f'(1) = −0.833.
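Since the original data table is not reproduced above, the following sketch regenerates comparable noisy data from the underlying function (the sample points and the noise level are our guesses) and repeats the fit-and-differentiate procedure with numpy:

#!/usr/bin/python
## example5_5_sketch - synthetic data; the original table is not
## reproduced above and the noise level here is a guess
import numpy as np

np.random.seed(0)
x = np.linspace(0.0, 1.4, 11)
y = (x + 2.0)/np.cosh(x) + 0.01*np.random.randn(len(x))   # hypothetical noise

c = np.polyfit(x, y, 3)            # cubic least-squares fit
dc = np.polyder(c)                 # coefficients of the derivative
print(np.polyval(dc, 0.0))         # estimate of f'(0); exact value 1.000
print(np.polyval(dc, 1.0))         # estimate of f'(1); exact value -0.833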
PROBLEM SET 5.1

2. Given the first backward finite difference approximations for f'(x) and f''(x), derive the first backward finite difference approximation for f'''(x) using the operation f'''(x) = [f''(x)]'.
3. Derive the central difference approximation for f''(x) accurate to O(h^4) by applying Richardson extrapolation to the central difference approximation of O(h^2).
4. Derive the second forward finite difference approximation for f'''(x) from the Taylor series.
5. Derive the first central difference approximation for f^{(4)}(x) from the Taylor series.
6. Use finite difference approximations of O(h^2) to compute f'(2.36) and f''(2.36) from the following data:
10. Using five significant figures in the computations, determine d(sin x)/dx at x =
0.8 from (a) the first forward difference approximation and (b) the first central
difference approximation. In each case, use h that gives the most accurate result
(this requires experimentation).
11. Use polynomial interpolation to compute f' and f'' at x = 0, using the data below. Given that f(x) = x^3 − 0.3x^2 − 8.56x + 8.448, gauge the accuracy of the result.
12.
[Figure: piston-crank mechanism with crank AB of length R at angle θ, connecting rod BC of length 2.5R, and piston C sliding along the x axis from A]
The position x of the piston C can be shown to vary with the angle θ as

x = R\left(\cos θ + \sqrt{2.5^2 − \sin^2 θ}\right)
13. [Figure: radar stations A and B a distance a apart on the ground; plane C climbs with velocity v at angle γ; elevation angles α at A and β at B]
The radar stations A and B, separated by the distance a = 500 m, track the plane
C by recording the angles α and β at one-second intervals. If three successive
readings are
t (s) 9 10 11
α 54.80◦ 54.06◦ 53.34◦
β 65.59◦ 64.59◦ 63.62◦
calculate the speed v of the plane and the climb angle γ at t = 10 s. The coordinates of the plane can be shown to be

x = a\frac{\tan β}{\tan β − \tan α} \qquad y = a\frac{\tan α \tan β}{\tan β − \tan α}
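One way to carry out the computation (a sketch, not the book's solution) is to evaluate the coordinates from the formulas above and apply first central differences with h = 1 s:

#!/usr/bin/python
## radar_tracking - a solution sketch, not the book's program
import numpy as np

a = 500.0                                     # station separation in m
alpha = np.radians([54.80, 54.06, 53.34])     # readings at t = 9, 10, 11 s
beta  = np.radians([65.59, 64.59, 63.62])
denom = np.tan(beta) - np.tan(alpha)
x = a*np.tan(beta)/denom                      # plane coordinates
y = a*np.tan(alpha)*np.tan(beta)/denom
# first central differences, Eq. (5.1), with h = 1 s, at t = 10 s
vx, vy = (x[2] - x[0])/2.0, (y[2] - y[0])/2.0
print(np.sqrt(vx**2 + vy**2))                 # speed v in m/s
print(np.degrees(np.arctan2(vy, vx)))         # climb angle gamma in degrees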
14. [Figure: linkage ABCD with dimensions in mm (20, 60, 70, 190, 190) and angles α, θ, and β]
Geometric analysis of the linkage shown resulted in the following table relating
the angles θ and β:
Assuming that member AB of the linkage rotates with the constant angular velocity dθ/dt = 1 rad/s, compute dβ/dt in rad/s at the tabulated values of θ. Use cubic spline interpolation.
15. The relationship between stress σ and strain ε of some biological materials in uniaxial tension is

\frac{dσ}{dε} = a + bσ

where a and b are constants (dσ/dε is called the tangent modulus). The following table gives the results of a tension test on such a material:
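The test data are not reproduced above; as a sketch of the intended procedure (with hypothetical numbers standing in for the table), one can difference σ(ε) and then fit dσ/dε = a + bσ by linear least squares:

#!/usr/bin/python
## tangent_modulus - a sketch with hypothetical data (the test table
## is not reproduced above)
import numpy as np

strain = np.array([0.00, 0.05, 0.10, 0.15, 0.20])          # hypothetical
stress = np.array([0.000, 0.263, 0.554, 0.875, 1.230])     # hypothetical, MPa

h = strain[1] - strain[0]
dSde = np.gradient(stress, h)      # d(sigma)/d(epsilon): central differences
                                   # inside, one-sided at the ends
b, a = np.polyfit(stress, dSde, 1) # linear fit of d(sigma)/d(epsilon) vs sigma
print(a, b)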