Chapter 6 - Numerical Differentiation & Integration


Lecture 24

Last lecture:
Least square approximation using common orthogonal polynomials

This lecture:

Numerical differentiation
Numerical integration
CHAPTER VI: NUMERICAL DIFFERENTIATION & INTEGRATION

6.1 Numerical differentiation

◊ Taylor series expansion for evenly spaced data near x = x_i:

f(x) = f(x_i) + f'(x_i)(x - x_i) + (1/2!) f''(x_i)(x - x_i)^2 + ... + (1/n!) f^(n)(x_i)(x - x_i)^n + ...
◊ Consider (x_{i-1}, x_i, x_{i+1}) with spacing h = x_i - x_{i-1}:

f(x_{i+1}) = f(x_i + h) = f(x_i) + h f'(x_i) + (h^2/2!) f''(x_i) + (h^3/3!) f'''(x_i) + ...   (1)

f(x_{i-1}) = f(x_i - h) = f(x_i) - h f'(x_i) + (h^2/2!) f''(x_i) - (h^3/3!) f'''(x_i) + ...   (2)
❑ First order derivative f_i'

* Forward difference:  f'(x_i) = [f(x_{i+1}) - f(x_i)]/h - (h/2) f''(ξ_1),  O(h) error   (a)

* Backward difference: f'(x_i) = [f(x_i) - f(x_{i-1})]/h + (h/2) f''(ξ_2),  O(h) error   (b)

* Central difference, from (1) - (2):
  f'(x_i) = [f(x_{i+1}) - f(x_{i-1})]/(2h) - (h^2/6) f'''(ξ_3),  O(h^2) error   (c)
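Formulas (a)-(c) can be sketched in a few lines of Python (the helper names are mine, not from the lecture). Halving h should roughly halve the one-sided errors and quarter the central error:

```python
import math

def forward_diff(f, x, h):
    # O(h): [f(x+h) - f(x)] / h
    return (f(x + h) - f(x)) / h

def backward_diff(f, x, h):
    # O(h): [f(x) - f(x-h)] / h
    return (f(x) - f(x - h)) / h

def central_diff(f, x, h):
    # O(h^2): [f(x+h) - f(x-h)] / (2h)
    return (f(x + h) - f(x - h)) / (2 * h)

# Convergence check for f = sin at x = 1 (exact derivative cos(1)).
df = math.cos(1.0)
e1 = abs(forward_diff(math.sin, 1.0, 0.1) - df)
e2 = abs(forward_diff(math.sin, 1.0, 0.05) - df)
c1 = abs(central_diff(math.sin, 1.0, 0.1) - df)
c2 = abs(central_diff(math.sin, 1.0, 0.05) - df)
```

The ratios e1/e2 ≈ 2 and c1/c2 ≈ 4 confirm the O(h) and O(h^2) error estimates.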
Numerical differentiation

❑ Second order derivative f_i''

Add (1) & (2):  f_{i+1} + f_{i-1} = 2 f_i + f_i'' h^2 + (2h^4/4!) f_i^(4) + ...

⇒ f_i'' = (f_{i+1} - 2 f_i + f_{i-1})/h^2 - (h^2/12) f_i^(4) + ...

❑ Second order one-sided difference

* f_i' = (f_i - f_{i-1})/h does not need f_{i+1}, but it only has O(h) accuracy.

* If we need f_i' accurate to O(h^2) at the end of the interval, say x_i, we can use f_i, f_{i-1}, f_{i-2} to obtain an accurate f_i'.

* Consider the Taylor series expansions for f_{i-2} & f_{i-1}:

f_{i-2} = f_i - 2h f_i' + ((2h)^2/2!) f_i'' - ((2h)^3/3!) f_i''' + ...   (3)

f_{i-1} = f_i - h f_i' + (h^2/2!) f_i'' - (h^3/3!) f_i''' + ...   (2)

* Form Eqn. (2) × 4 - Eqn. (3) to eliminate the O(f_i'' h^2) terms:

⇒ f_i' = (3 f_i - 4 f_{i-1} + f_{i-2})/(2h) + (h^2/3) f_i''' + ...

* Similarly, at the beginning of the interval:

f_i' = (-3 f_i + 4 f_{i+1} - f_{i+2})/(2h) + (h^2/3) f_i''' + ...
2!
Example

Example: Consider f(x) = x^3. Find various derivatives using finite differences at x_i = 1, h = 0.2.

Solution:
* x:    x_{i-2} = 0.6, x_{i-1} = 0.8, x_i = 1, x_{i+1} = 1.2
* f(x): f_{i-2} = 0.216, f_{i-1} = 0.512, f_i = 1, f_{i+1} = 1.728
* Exact derivatives: f_i' = 3 x_i^2 = 3; f_i'' = 6; f_i''' = 6.

* Forward difference: f_i' = (f_{i+1} - f_i)/h = (1.728 - 1)/0.2 = 3.64
  Error = T.V. - A.V. = 3 - 3.64 = -0.64 (actual);
  T.E. ~ -(1/2) f''(x_i) h = -3 × 0.2 = -0.6 (estimate)

* Backward difference: f_i' = (f_i - f_{i-1})/h = (1 - 0.512)/0.2 = 2.44
  Error = T.V. - A.V. = 3 - 2.44 = 0.56 (actual);
  T.E. ~ (1/2) f''(x_i) h = 3 × 0.2 = 0.6 (estimate)

* Central difference: f_i' = (f_{i+1} - f_{i-1})/(2h) = (1.728 - 0.512)/0.4 = 3.04
  Error = T.V. - A.V. = -0.04 (actual);
  T.E. ~ -(1/6) f'''(x_i) h^2 = -0.04 (estimate)

* Second order one-sided difference:
  f_i' = (3 f_i - 4 f_{i-1} + f_{i-2})/(2h) = (3 - 4×0.512 + 0.216)/(2×0.2) = 2.92
  Error = T.V. - A.V. = 3.0 - 2.92 = 0.08 (actual);
  T.E. ~ (1/3) f'''(x_i) h^2 = 0.08 (estimate)

* Second order derivative: exact f'' = 6x, f_i'' = 6
  f_i'' = (f_{i+1} - 2 f_i + f_{i-1})/h^2 = (1.728 - 2 + 0.512)/0.2^2 = 6
  Error = 6 - 6 = 0. Why?
  T.E. ~ -(1/12) f^(4)(x_i) h^2 = 0 (estimate), since f^(4) ≡ 0 for a cubic.
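The worked example above can be reproduced directly (a minimal sketch; variable names are mine):

```python
# f(x) = x^3 at x_i = 1, h = 0.2, as in the example above.
h = 0.2
f = lambda x: x**3
fim2, fim1, fi, fip1 = f(0.6), f(0.8), f(1.0), f(1.2)

forward   = (fip1 - fi) / h                        # 3.64
backward  = (fi - fim1) / h                        # 2.44
central   = (fip1 - fim1) / (2 * h)                # 3.04
one_sided = (3 * fi - 4 * fim1 + fim2) / (2 * h)   # 2.92
second    = (fip1 - 2 * fi + fim1) / h**2          # exactly 6 for a cubic
```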
Lecture 25
Last lecture:
Numerical differentiation

This lecture:

Newton-Cotes integration formula


6.2 Newton-Cotes Integration Formulae

6.2.1 General formulation


 Objective: to numerically evaluate the integral accurately:

I = ∫_a^b f(x) dx

 Idea: approximate f(x) by a polynomial of degree n, P_n(x), in [a, b] & carry out the integration of P_n(x) analytically:

I_n = ∫_a^b P_n(x) dx

using an equi-spaced interval h:
h = x_1 - x_0 = ... = x_n - x_{n-1};  x_0 = a;  x_j = a + jh;  x_n = b = a + nh

* Interpolating polynomial of degree n:

P_n(x) = f_0 + f[x_0, x_1](x - x_0) + f[x_0, x_1, x_2](x - x_0)(x - x_1) + ... + f[x_0, x_1, ..., x_n](x - x_0)(x - x_1)...(x - x_{n-1})
Error in numerical integration

* Error from interpolation (recall):

f(x) - P_n(x) = (x - x_0)(x - x_1)...(x - x_n) [1/(n+1)!] f^(n+1)(ξ)   (*)
             = (x - x_0)(x - x_1)...(x - x_{n-1})(x - x_n) f[x_0, x_1, ..., x_n, x]   (**)

* Integration error = ∫_a^b [f(x) - P_n(x)] dx
  = ∫_a^b (x - x_0)(x - x_1)...(x - x_n) [1/(n+1)!] f^(n+1)(ξ) dx

* Let s = (x - x_0)/h
  ⇒ x - x_j = [x_0 + hs - (x_0 + jh)] = (s - j)h, and dx = h ds;
  x ∈ [a, b] ⇒ s ∈ [0, n]

⇒ Integration error = h^{n+2} ∫_0^n [s(s-1)(s-2)...(s-n)/(n+1)!] f^(n+1)(ξ) ds
6.2.2 Trapezoidal rule (n=1)

◊ Consider a 1st order (n=1) polynomial for y = f(x) in [x_0, x_1]:

n=1: P_1(x) = f_0 + s Δf_0

I_1 = ∫_{x_0}^{x_1} P_1(x) dx = ∫_0^1 [f_0 + s(f_1 - f_0)] h ds

⇒ I_1 = h(f_0 + f_1)/2 -- the trapezoidal rule.

* The shaded area between f(x) and P_1(x) is the error in the trapezoidal rule; the error is proportional to the curvature.

* Integration error = ∫_{x_0}^{x_1} (interpolation error) dx
  = h^3 ∫_0^1 [s(s-1)/2] f''(ξ) ds
  = (1/2) f''(ξ) h^3 [s^3/3 - s^2/2]_0^1
  = -(1/12) f''(ξ) h^3
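A single-panel sketch, checking the derived error against the -(1/12) f'' h^3 estimate (function names are mine, with f = exp as an assumed test case):

```python
import math

def trap_panel(f, x0, x1):
    # Single-panel trapezoidal rule: h (f0 + f1) / 2
    h = x1 - x0
    return h * (f(x0) + f(x1)) / 2.0

h = 0.1
approx = trap_panel(math.exp, 0.0, h)
exact = math.exp(h) - 1.0          # ∫_0^h e^x dx
err = exact - approx
# Leading-order estimate, f'' evaluated mid-panel:
est = -(1.0 / 12.0) * math.exp(h / 2) * h**3
```

For this small panel err/est should be 1 to within a fraction of a percent.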
6.2.3 Simpson’s rule (n=2)

◊ Consider a 2nd order polynomial for y = f(x) through x_0, x_1, x_2:

n=2: P_2(x) = f_0 + s Δf_0 + (1/2) s(s-1) Δ^2 f_0

I_2 = ∫_{x_0}^{x_2} P_2(x) dx = ∫_0^2 [f_0 + s Δf_0 + (1/2) s(s-1) Δ^2 f_0] h ds

⇒ I_2 = h [s f_0 + (s^2/2) Δf_0 + (Δ^2 f_0 / 2)(s^3/3 - s^2/2)]_0^2
      = h [2 f_0 + 2 Δf_0 + (1/3) Δ^2 f_0]

⇒ I_2 = h(f_0 + 4 f_1 + f_2)/3 -- Simpson's 1/3 rule.

* Integration error:

Note: h^4 f'''(ξ) ∫_0^2 [s(s-1)(s-2)/3!] ds = 0

⇒ the error is smaller than O(h^4), but ≠ 0.

The Integral Mean Value Theorem CANNOT be applied directly here since the interpolation-error factor s(s-1)(s-2) is no longer non-negative in (0, 2).
Error in Simpson's 1/3 rule (n=2)

◊ Reexamine the error:

E_2 = ∫_{x_0}^{x_2} (x - x_0)(x - x_1)(x - x_2) f[x_0, x_1, x_2, x] dx

• Define
w(x) = ∫_{x_0}^x (t - x_0)(t - x_1)(t - x_2) dt
⇒ w'(x) = (x - x_0)(x - x_1)(x - x_2), with w(x_0) = w(x_2) = 0 and w(x) ≥ 0 on [x_0, x_2].

• Thus, integrating E_2 by parts:

E_2 = {w(x) f[x_0, x_1, x_2, x]}_{x_0}^{x_2} - ∫_{x_0}^{x_2} w(x) (d/dx) f[x_0, x_1, x_2, x] dx
    = 0 - ∫_{x_0}^{x_2} w(x) f[x_0, x_1, x_2, x, x] dx

• Since w(x) ≥ 0, we can now apply the Integral Mean Value Theorem:

E_2 = -(1/4!) f^(4)(ξ) ∫_{x_0}^{x_2} w(x) dx
    = -(1/4!) f^(4)(ξ) ∫_{x_0}^{x_2} ∫_{x_0}^x (t - x_0)(t - x_1)(t - x_2) dt dx
    = -(1/90) h^5 f^(4)(ξ)

⇒ Simpson's rule is much more accurate than the trapezoidal rule, whose error is -(1/12) f''(ξ) h^3.
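The accuracy gap can be seen on one Simpson panel (a sketch with f = exp as an assumed test case; helper names are mine):

```python
import math

def simpson_panel(f, x0, h):
    # Simpson's 1/3 rule on [x0, x0 + 2h]: h (f0 + 4 f1 + f2) / 3
    return h * (f(x0) + 4 * f(x0 + h) + f(x0 + 2 * h)) / 3.0

def trap(f, x0, x1):
    return (x1 - x0) * (f(x0) + f(x1)) / 2.0

# Same nodes for both rules: [0, 0.1, 0.2].
h = 0.1
exact = math.exp(0.2) - 1.0
e_simp = abs(exact - simpson_panel(math.exp, 0.0, h))
e_trap = abs(exact - (trap(math.exp, 0.0, h) + trap(math.exp, h, 2 * h)))
```

With identical function evaluations, the O(h^5) Simpson error is orders of magnitude below the O(h^3)-per-panel trapezoidal error.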
6.2.4 Simpson's 3/8 rule (n=3)

◊ Consider a 3rd order polynomial for y = f(x) through x_0, x_1, x_2, x_3:

n=3: P_3(x) = f_0 + s Δf_0 + (1/2) s(s-1) Δ^2 f_0 + (1/3!) s(s-1)(s-2) Δ^3 f_0

I_3 = ∫_{x_0}^{x_3} P_3(x) dx = ∫_0^3 [f_0 + s Δf_0 + (1/2) s(s-1) Δ^2 f_0 + (1/3!) s(s-1)(s-2) Δ^3 f_0] h ds

⇒ I_3 = (3/8) h (f_0 + 3 f_1 + 3 f_2 + f_3) -- Simpson's 3/8 rule.

* Integration error:

E_3 = ∫_{x_0}^{x_3} (x - x_0)(x - x_1)(x - x_2)(x - x_3) f[x_0, x_1, x_2, x_3, x] dx

After a very lengthy derivation,

E_3 = ... = (1/4!) h^5 f^(4)(ξ) ∫_0^3 s(s-1)(s-2)(s-3) ds = -(3/80) h^5 f^(4)(ξ)

• Compared with E_2 = -(1/90) h^5 f^(4)(ξ), E_3 has the same order and a larger coefficient.

◊ The 3/8 rule (n=3) is often used in conjunction with the composite Simpson's 1/3 rule (n=2) when the total number of intervals is odd.

◊ The integration formulas based on P_n(x) ⇒ Newton-Cotes formulas.

◊ There is NO practical need to use higher order polynomials, due to the increasing roundoff error associated with large n. There are other ways to improve.
Lecture 26
Last lecture:
Newton-Cotes integration formula
--numerical integration formula
--integration error
This lecture:

Composite integration
6.2.5 Mid-point rule (or rectangle rule)

◊ What happens if n = 0?

* Consider a 0th order polynomial for y = f(x) in [x_0, x_1]:
* f(x) is approximated by the constant value y = P_0(x) = f_{1/2} = f(x_{1/2})
  at x = x_{1/2} (the mid-point between x_0 & x_1).
* Obviously, the integration gives

⇒ I_0 = h f_{1/2} -- the rectangle rule.

* Integration error: E_0 = ∫_{x_0}^{x_1} [f(x) - f_{1/2}] dx

Taylor series expansion for f(x) near x = x_{1/2}:

f(x) ~ f_{1/2} + f'_{1/2}(x - x_{1/2}) + (1/2) f''_{1/2}(x - x_{1/2})^2 + O((x - x_{1/2})^3)

E_0 ~ ∫_{x_0}^{x_1} [f'_{1/2}(x - x_{1/2}) + (1/2) f''_{1/2}(x - x_{1/2})^2] dx
    = 0 + (1/24) h^3 f''_{1/2} = leading order error for the mid-point rule (or rectangle rule)

• The error is half of the trapezoidal rule's (n=1), but of opposite sign.

QUESTION: Can you cancel the O(h^3) errors by combining the n=1 and n=0 formulas?
6.3 Composite rules

◊ Question: For x ∈ [a, b] with large |b - a|, how can we improve the accuracy?

Answer: Divide [a, b] into n sub-intervals (a = x_0, x_1, x_2, ..., x_n = b) & use a low order Newton-Cotes formula in each sub-interval
⇒ composite rules.

6.3.1 Composite rectangle rule

* Extend the rectangle rule to the entire interval (a, b), consisting of n sub-intervals of step size h = (b - a)/n:

⇒ Composite rectangle rule:

I_0 = ∫_a^b f(x) dx ~ h [f(x_{1/2}) + f(x_{3/2}) + f(x_{5/2}) + ... + f(x_{n-1/2})]

* Error from the entire interval (a, b), i.e. n such sub-intervals:

Error ~ n (h^3/24) f''(ξ) = (b - a)(h^2/24) f''(ξ) ⇒ 2nd order accuracy

Or more precisely,

T.E. = (h^2/24) h Σ_{i=1}^n f''(ξ_i) ~ (h^2/24) ∫_{x_0}^{x_n} f''(x) dx = (h^2/24) [f'(b) - f'(a)]

* Question: what happens for periodic f(x) with f'(b) = f'(a)?
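The composite rectangle (mid-point) rule is a one-liner; this sketch (helper name mine) checks the 2nd order convergence on the example that follows:

```python
import math

def composite_midpoint(f, a, b, n):
    # h * sum of f at the sub-interval midpoints x_{i+1/2}
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

f = lambda x: 1.0 / (1.0 + x * x)
exact = math.pi / 4
e2 = abs(exact - composite_midpoint(f, 0.0, 1.0, 2))
e4 = abs(exact - composite_midpoint(f, 0.0, 1.0, 4))
e8 = abs(exact - composite_midpoint(f, 0.0, 1.0, 8))
```

Each halving of h cuts the error by a factor of about 4, as the O(h^2) estimate predicts.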
Example:

Evaluate I = ∫_0^1 dx/(1 + x^2) using the rectangle rule with 2, 4, & 8 intervals.
Exact value: I_exact = arctan(1) = π/4 = 0.7853981634...

Solution:

i) 2 sub-intervals: h = 0.5; x ∈ [0, 0.5] & [0.5, 1]:
I_0 = 0.5×[f(0.25) + f(0.75)] = 0.5×(0.941176 + 0.64) = 0.7905882
Error(h=0.5) = -0.00519

ii) 4 sub-intervals: h = 0.25:
I_0 = 0.25×(0.984615385 + 0.876712329 + 0.719101124 + 0.566371681) = 0.78670013
Error(h=0.25) = -0.001302

iii) 8 sub-intervals: h = 0.125:

x       x_{1/2}   f(x_{1/2})
0       0.0625    0.996108949
0.125   0.1875    0.966037736
0.25    0.3125    0.911032028
0.375   0.4375    0.839344262
0.5     0.5625    0.759643917
0.625   0.6875    0.679045093
0.75    0.8125    0.602352941
0.875   0.9375    0.532224532

I_0 = 0.78572368, Error = -0.0003255

• Each time h is reduced by 1/2, the error is reduced by (1/2)^2, as expected for a 2nd order accurate method.
6.3.2 Composite trapezoidal rule

* Again, divide [a, b] into n sub-intervals with spacing h:
a = x_0 < x_1 < x_2 < ... < x_{n-1} < x_n = b,  h = (b - a)/n.

* On the ith interval:

∫_{x_i}^{x_{i+1}} f(x) dx ≈ (h/2)(f_i + f_{i+1})

* Sum over i = 0 to n-1:

I_1 = Σ_{i=0}^{n-1} ∫_{x_i}^{x_{i+1}} f(x) dx ≈ (h/2) Σ_{i=0}^{n-1} (f_i + f_{i+1})
    = (h/2)(f_0 + 2f_1 + 2f_2 + ... + 2f_{n-1} + f_n)
    = (h/2)(f_0 + f_n) + h Σ_{i=1}^{n-1} f_i

* Integration error:
A sub-interval contributes -(1/12) h^3 f''(ξ_i) for x_i ≤ ξ_i ≤ x_{i+1}.

* Summing n such sub-intervals ⇒

E_1 = -(1/12) h^2 h Σ_{i=1}^n f''(ξ_i) = -(1/12) h^2 (b - a) f''(η)  for x_0 ≤ η ≤ x_n

Or more precisely,

E_1 ~ -(1/12) h^2 ∫_{x_0}^{x_n} f''(x) dx = -(h^2/12)[f'(b) - f'(a)]

* If f'(x) is singular near a or b, the error will be much larger.
* If f'(b) = f'(a) [f'(x) periodic in (a, b)] ⇒ much smaller error.
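A minimal composite trapezoidal sketch (helper name mine), checked on the example that follows:

```python
import math

def composite_trapezoid(f, a, b, n):
    # (h/2)(f0 + fn) + h (f1 + ... + f_{n-1})
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return h * s

f = lambda x: 1.0 / (1.0 + x * x)
exact = math.pi / 4
e2 = abs(exact - composite_trapezoid(f, 0.0, 1.0, 2))
e4 = abs(exact - composite_trapezoid(f, 0.0, 1.0, 4))
```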
Lecture 27
Last lecture:
Composite rectangle rule & trapezoidal rule
This lecture:

Romberg integration
(--Richardson extrapolation)
Composite Simpson’s rule
Example:

Evaluate I = ∫_0^1 dx/(1 + x^2) using the trapezoidal rule with 2, 4, & 8 intervals.
Exact value: I_exact = arctan(1) = π/4 = 0.7853981634...

Solution:

i) 2 sub-intervals: h = 0.5; x ∈ [0, 0.5] & [0.5, 1]:
I_1 = 0.5×[f(0) + f(1)]/2 + 0.5×f(0.5) = 0.5×(1 + 0.5)/2 + 0.5×0.8 = 0.775
Error(h=0.5) = 0.010398

ii) 4 sub-intervals: h = 0.25:
I_1 = 0.25×(1 + 0.5)/2 + 0.25×(0.94117647 + 0.8 + 0.64) = 0.782794
Error(h=0.25) = 0.002604

iii) 8 sub-intervals: h = 0.125:

x      f(x)
0      1
0.125  0.984615385
0.25   0.941176471
0.375  0.876712329
0.5    0.8
0.625  0.719101124
0.75   0.64
0.875  0.566371681
1.0    0.5

I_1 = 0.784747124, Error = 0.000651

• Each time h is reduced by 1/2, the error is reduced by (1/2)^2, as expected for a 2nd order accurate method.
• Compare the errors with those from the rectangle rule: about twice as large, and of opposite sign.
Example of periodic f'(x)

Consider f(x) = x^4/4 - x^3 + x^2 - x.

Evaluate I_A = ∫_0^1 f(x) dx and I_B = ∫_{0.5}^{1.5} f(x) dx using the trapezoidal rule.

How does periodicity in f'(x) affect the accuracy?
How do the errors in I_A & I_B depend on h when using the trapezoidal rule?

Solution:
* Note: f'(x) = x^3 - 3x^2 + 2x - 1 ⇒ f'(0) = f'(1) = -1,
  but f'(0.5) = -0.625 ≠ f'(1.5) = -1.375.
⇒ Expect I_A to have a much smaller ERROR than I_B, since
  E_1 = -(1/12) h^2 (b - a) f''(η) ~ -(h^2/12)[f'(b) - f'(a)].

* Exact integral values:
I_A,exact = -11/30 = -0.36666667;  I_B,exact = -757/960 = -0.788541667

h      I_B              E(h)=|I(h)-exact|  E(h)/E(h/2)
0.1    -0.7891675       6.2583E-04         4.004
0.05   -0.78869796875   1.5630E-04         4.001
0.025  -0.78858073242   3.9066E-05         (as expected)

h      I_A              E(h)=|I(h)-exact|  E(h)/E(h/2)
0.1    -0.3666675       8.3333E-07         16.000
0.05   -0.36666671875   5.2083E-08         16.009
0.025  -0.36666666992   3.2533E-09         (much more accurate!)
6.3.3 Romberg Integration

* Start with n sub-intervals of spacing h = (b - a)/n and the composite trapezoidal rule:

I_h = Σ_{i=1}^n ∫_{x_{i-1}}^{x_i} f(x) dx
Error(h) ~ -(1/12) h^2 [f'(b) - f'(a)]

* Now use h' = h/2 and apply the composite trapezoidal rule again:

I_{h/2} = Σ_{i=1}^{2n} ∫_{x_{i-1}}^{x_i} f(x) dx
Error(h/2) ~ -(1/12)(h/2)^2 [f'(b) - f'(a)] ~ (1/4) Error(h)

Since
I_h = I_exact - Error(h) ~ I_exact + (1/12) h^2 [f'(b) - f'(a)]
I_{h/2} = I_exact - Error(h/2) ~ I_exact + (1/12)(h/2)^2 [f'(b) - f'(a)]

we can combine I_h and I_{h/2}:

⇒ 4 I_{h/2} - I_h ⇒ the error of O(h^2) cancels out! ⇒ improved integral:

Ī_{h/2} = (4 I_{h/2} - I_h)/3  -- Romberg integration

* The above is essentially the Richardson extrapolation procedure.
Example

Perform Romberg integration for I = ∫_0^2 dx/(1 + x^2) using the trapezoidal rule with 8, 16, 32, & 64 intervals.

Soln. Note: I_exact = arctan(2) = 1.10714871779409...

n    I_n                E(n)          E(n/2)/E(n)
8    1.10631663171835   8.32086E-04
16   1.10694046254292   2.08255E-04   3.996
32   1.10709663934290   5.20785E-05   3.999
64   1.10713569726592   1.30205E-05   4.000

1st extrapolation Ī^(1):
Ī^(1)_16 = (2^2×1.10694046254292 - 1.10631663171835)/(2^2 - 1) = 1.10714840615111, Error = 3.116E-07
Ī^(1)_32 = 1.10714869827623, Error = 1.952E-08
Ī^(1)_64 = 1.10714871657359, Error = 1.220E-09

Now apply another extrapolation to the Ī^(1):
Ī^(2)_32 = (2^4×1.10714869827623 - 1.10714840615111)/(2^4 - 1) = 1.10714871775124 ⇒ Error = 4.286E-11
Ī^(2)_64 = (2^4×1.10714871657359 - 1.10714869827623)/(2^4 - 1) = 1.10714871779342 ⇒ Error = 6.719E-13
WHY/HOW does the idea work? -- Romberg integration

• In general, the integration error for a smooth function f(x) has the following asymptotic form:

E_n = I - I_n = d_2/n^2 + d_4/n^4 + d_6/n^6 + d_8/n^8 + ...

There are no odd power terms of n in E_n; terms such as n^{-2k-1} cancel out during integration.

E_{2n} = I - I_{2n} = d_2/(2^2 n^2) + d_4/(2^4 n^4) + d_6/(2^6 n^6) + d_8/(2^8 n^8) + ...

Thus

2^2 E_{2n} - E_n = 2^2 I - 2^2 I_{2n} - (I - I_n) = -(3/4) d_4/n^4 + ...

⇒ I ~ Ī^(1)_{2n} = (2^2 I_{2n} - I_n)/(2^2 - 1) - d_4/(4 n^4) + ...

Another extrapolation removes the O(n^{-4}) term and gives

Ī^(2)_{2n} = (2^4 Ī^(1)_{2n} - Ī^(1)_n)/(2^4 - 1), with remaining error of O(n^{-6}).
Romberg integration

Repeated application of this Richardson extrapolation ⇒ Romberg integration Ī^(k)_{2n}.
This procedure improves the accuracy at little extra cost.

n    I_n                1st extrap. Ī^(1)_{2n}   Ī^(2)_{2n}          Ī^(3)_{2n}
8    1.10631663171835
16   1.10694046254292   1.10714840615111
32   1.10709663934290   1.10714869827623         1.10714871775124
64   1.10713569726592   1.10714871657359         1.10714871779342    1.10714871779409

Note: I_exact = 1.10714871779409...

General procedure:
• obtain I_n using the trapezoidal rule; step size h = (2 - 0)/n = 2/n
• obtain the 1st extrapolation as Ī^(1)_{2n} = (2^2 I_{2n} - I_n)/(2^2 - 1)
• obtain the 2nd extrapolation as Ī^(2)_{2n} = (2^4 Ī^(1)_{2n} - Ī^(1)_n)/(2^4 - 1)
• obtain the 3rd extrapolation as Ī^(3)_{2n} = (2^6 Ī^(2)_{2n} - Ī^(2)_n)/(2^6 - 1)
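The general procedure above can be reproduced in a short sketch (helper names mine), matching the tabulated values for ∫_0^2 dx/(1+x^2):

```python
import math

def trap(f, a, b, n):
    # Composite trapezoidal rule with n sub-intervals.
    h = (b - a) / n
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n)))

f = lambda x: 1.0 / (1.0 + x * x)
exact = math.atan(2.0)

I = {n: trap(f, 0.0, 2.0, n) for n in (8, 16, 32, 64)}
# 1st extrapolation: (2^2 I_2n - I_n)/(2^2 - 1) cancels the O(h^2) term.
I1 = {n: (4 * I[n] - I[n // 2]) / 3 for n in (16, 32, 64)}
# 2nd: (2^4 I1_2n - I1_n)/(2^4 - 1) cancels O(h^4).
I2 = {n: (16 * I1[n] - I1[n // 2]) / 15 for n in (32, 64)}
# 3rd: (2^6 I2_2n - I2_n)/(2^6 - 1) cancels O(h^6).
I3 = (64 * I2[64] - I2[32]) / 63
```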
6.3.4 Composite Simpson's Rule

* Divide [a, b] into n (= 2K) sub-intervals with spacing h:
a = x_0 < x_1 < x_2 < ... < x_{n-1} < x_n = b,  h = Δx = (b - a)/n.

* From the ith to the (i+2)th node (i = 2k = even, k = 0, 1, ..., n/2 - 1):

∫_{x_i}^{x_{i+2}} f(x) dx ≈ (h/3)(f_i + 4 f_{i+1} + f_{i+2})

* Sum over k = 0 to n/2 - 1:

I_2 ≈ (h/3)(f_0 + 4 f_1 + 2 f_2 + 4 f_3 + ... + 4 f_{n-1} + f_n)

* What about the error in I_2?

Every two sub-intervals contribute one "-(1/90) h^5 f^(4)(ξ_i)"

⇒ total error is -(n/2)(1/90) h^5 f^(4)(ξ) = -((b - a)/180) h^4 f^(4)(ξ)

• Or more precisely,

T.E. = -(1/90) h^5 Σ_{i=1}^{n/2} f^(4)(ξ_i) ~ -(h^4/180)[f^(3)(b) - f^(3)(a)]

• If f^(3)(b) = f^(3)(a), T.E. will thus be of O(h^6).
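A composite Simpson sketch (helper name mine), checked against the n = 8 value in the example that follows:

```python
import math

def composite_simpson(f, a, b, n):
    # n must be even: (h/3)(f0 + 4 f1 + 2 f2 + ... + 4 f_{n-1} + fn)
    assert n % 2 == 0
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + i * h) for i in range(1, n, 2))   # odd nodes
    s += 2 * sum(f(a + i * h) for i in range(2, n, 2))   # interior even nodes
    return h * s / 3.0

f = lambda x: 1.0 / (1.0 + x * x)
val = composite_simpson(f, 0.0, 2.0, 8)
err = abs(math.atan(2.0) - val)
```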


Example:

Compute I = ∫_0^2 dx/(1 + x^2) using Simpson's rule & n = 8, 16, 32, & 64.

Solution: First, n = 8, h = 0.25:

I = (h/3)[1×1 + 4×0.941176470588 + 2×0.8 + 4×0.64 + 2×0.5 + 4×0.390243902439
        + 2×0.307692307692 + 4×0.246153846154 + 1×0.2] = 1.107140124342

I_exact - I_2(h=0.25) = 1.107148717794 - 1.107140124342 = 8.59345E-6

• Composite trapezoidal rule with n = 8:
T.E.(n=8) = 1.107148717794 - 1.106316631718 = 8.321E-4, about 97 times larger.

▪ Error in composite Simpson's 1/3 rule:

n    h        I_2(h)             Error E(h)   E(h)/E(h/2)   R(n)
8    0.25     1.10714012434242   8.59345E-6
16   0.125    1.10714840615111   3.11643E-7   27.57
32   0.0625   1.10714869827623   1.95178E-8   15.967        28.350
64   0.03125  1.10714871657360   1.22040E-9   15.993        15.965
⇒ p ~ 4

• Asymptotic error of integration:
I - I(n/4) ~ c (n/4)^{-p}   (1)
I - I(n/2) ~ c (n/2)^{-p}   (2)
I - I(n)   ~ c n^{-p}       (3)
Eq.(1) - Eq.(2): I_{n/2} - I_{n/4} ~ c n^{-p}(4^p - 2^p) = c n^{-p} 2^p (2^p - 1)
Eq.(2) - Eq.(3): I_n - I_{n/2} ~ c n^{-p}(2^p - 1)

• Convergence ratio R:
R(n) = (I_{n/2} - I_{n/4})/(I_n - I_{n/2}) ~ 2^p;  p = order of convergence = order of accuracy

• Aitken extrapolation:
I_extrap = (R I_n - I_{n/2})/(R - 1) = Richardson extrapolation
Lecture 28
Last lecture:
Romberg integration
(--Richardson extrapolation)
Composite Simpson’s rule
This lecture:

Richardson extrapolation
Gauss quadrature
Example (contd):

Compute I = ∫_0^2 dx/(1 + x^2) using Simpson's rule & n = 8, 16, 32, & 64.

Solution:
Asymptotic error of integration: I - I(n) ~ c n^{-p}   (3)
p = 4 for the present problem;
but p < 4 for I = ∫_0^2 √x dx using Simpson's rule (the derivatives of √x are singular at x = 0).

▪ Error in composite Simpson's 1/3 rule:

n    h        I_2(h)             Error E(h)   E(h)/E(h/2)   R(n)
8    0.25     1.10714012434242   8.59345E-6
16   0.125    1.10714840615111   3.11643E-7   27.57
32   0.0625   1.10714869827623   1.95178E-8   15.967        28.350
64   0.03125  1.10714871657360   1.22040E-9   15.993        15.965
⇒ p ~ 4

Convergence ratio R(n) = (I_{n/2} - I_{n/4})/(I_n - I_{n/2}) ~ 2^p;
p = order of convergence = order of accuracy

• Aitken extrapolation: I_extrap = (R I_n - I_{n/2})/(R - 1) = Richardson extrapolation
Example

Examine the T.E. in I = ∫_0^1 dx/(1 + x^2) using Simpson's rule with n = 8 & 16.
I_exact = 0.785398163397448

Solution: For h = 0.125, 0.0625, and 0.03125:

h        I(h)               Error E(h)   E(h)/E(h/2)   R    I_extrap = Ī^(1)_h   Error
0.125    0.785398125614677  3.778E-8                   16   0.785398165285641    -1.8882E-09
0.0625   0.785398162806206  5.912E-10    63.904        64   0.785398163396548    9.0017E-13
0.03125  0.785398163388209  9.2393E-12

Note: as h decreases by a factor of 2, the error is supposed to decrease by a factor of 2^4 = 16. Here, the error actually decreases by a factor of almost 64. Why?

Reason: f^(3)(0) = f^(3)(1) = 0, so the T.E. at the O(h^4) level is 0.
The T.E. must come from the effects of the next higher order, which is O(h^6). Thus E(h)/E(h/2) = 2^6 = 64.

Still using the incorrect R = 16:

h        1st I_extrap = Ī^(1)_h   2nd I_extrap = Ī^(2)_h
0.125
0.0625   0.785398165285641
0.03125  0.785398163427009        0.785398163397507
         Error = -2.9561E-11      Error = 5.873E-14

Note: -1.8882E-09/5.912E-10 ~ 3.2 and -2.9561E-11/9.2393E-12 = 3.20: with the wrong R, extrapolation gains only a factor of about 3 over the unextrapolated error.
Richardson Extrapolation

Use Simpson's rule for I = ∫_0^1 e^x dx and assess the errors.
I_exact = e - 1 = 1.718281828459045

I(h=0.25)   = 1.71831884192175 ➔ T.E. = -3.701346E-5
I(h=0.125)  = 1.71828415469990 ➔ T.E. = -2.326241E-6 → Ratio = 15.911
I(h=0.0625) = 1.71828197405189 ➔ T.E. = -1.455928E-7 → Ratio = 15.978

We expect the error ratio to be R ~ 16 for an O(h^4) error.
⇒ T.E. = O(h^4) is confirmed.

Now we can apply Richardson extrapolation to improve the accuracy further, since each I(h) has an error of O(h^4):

* I_extrap = [16×I(h=0.125) - I(h=0.25)]/(16 - 1) = 1.718281842221844
  ⇒ Error = I_exact - I_extrap = -1.376E-8
* I_extrap = [16×I(h=0.0625) - I(h=0.125)]/(16 - 1) = 1.71828182867536
  ⇒ Error = I_exact - I_extrap = -2.163E-10
  Error ratio: 63.6

* A further extrapolation at the O(h^6) level:
  I_extrap = [64×1.71828182867536 - 1.718281842221844]/(64 - 1) = 1.71828182846039 ⇒ Error = -1.338E-12
6.4 Gaussian Quadrature

• Newton-Cotes formulae used fixed coordinates, x_0, x_1, ..., x_n, to integrate.
• How about letting the x_i's be free to choose too, so that we have more flexibility to eliminate higher order T.E. & get better accuracy?

6.4.1 Two-term Gaussian quadrature

* Now consider integration of f(t) over [-1, 1]:

I = ∫_{-1}^1 f(t) dt = a f(t_1) + b f(t_2)   (1)

• With 4 free parameters (a, b, t_1, t_2) to choose, expect Eq. (1) to be exact when f(t) is a polynomial of degree 3 or less.
• Any 3rd order polynomial is a linear combination of the monomials 1, t, t^2, t^3.
• That is, if f(t) = 1, t, t^2, or t^3, Eq. (1) should be exact:

f = t^0 ⇒ ∫_{-1}^1 1 dt = 2 = a + b
f = t^1 ⇒ ∫_{-1}^1 t dt = 0 = a t_1 + b t_2
f = t^2 ⇒ ∫_{-1}^1 t^2 dt = 2/3 = a t_1^2 + b t_2^2
f = t^3 ⇒ ∫_{-1}^1 t^3 dt = 0 = a t_1^3 + b t_2^3

⇒ a = 1, b = 1, t_1 = -1/√3, t_2 = 1/√3

I = ∫_{-1}^1 f(t) dt = f(-1/√3) + f(1/√3)
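The exactness-for-cubics property is easy to check numerically (a sketch; helper name mine):

```python
import math

T = 1.0 / math.sqrt(3.0)

def gauss2(f):
    # Two-point Gauss quadrature on [-1, 1]: f(-1/sqrt(3)) + f(1/sqrt(3))
    return f(-T) + f(T)

# The four monomial conditions from the derivation above:
c0 = gauss2(lambda t: 1.0)       # should be 2
c1 = gauss2(lambda t: t)         # should be 0
c2 = gauss2(lambda t: t * t)     # should be 2/3
c3 = gauss2(lambda t: t**3)      # should be 0
```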
Standard 2-point Gauss Quadrature on [a, b]

• When [a, b] ≠ [-1, 1], need a change of variables:

x = [(b - a)/2] t + (a + b)/2;  dx = [(b - a)/2] dt

e.g. for I = ∫_0^4 x e^{2x} dx, a = 0, b = 4
⇒ x = 2t + 2; dx = 2 dt

I = ∫_0^4 x e^{2x} dx = ∫_{-1}^1 2(2t + 2) e^{4t+4} dt = ∫_{-1}^1 g(t) dt

• Apply the two-point formula:

I = ∫_{-1}^1 g(t) dt = g(-1/√3) + g(1/√3) = (4 - 4/√3) e^{4 - 4/√3} + (4 + 4/√3) e^{4 + 4/√3}
  = 9.167657324 + 3468.376279 = 3477.543936  (ε = 33.34%; trapezoidal error: 357.2%)
Lecture 29
Last lecture:
Gauss quadrature
This lecture:

Composite Gauss quadrature


Example: Recall t_1 = -1/√3 = -t_2.

Evaluate I = ∫_0^1 dx/(1 + x^2) using 2-point Gauss quadrature.

Solution: a = 0, b = 1; x = (b + a)/2 + (b - a)t/2 = 0.5 + 0.5t; dx = dt/2

x_1 = 0.5 + 0.5 t_1 = 0.2113248654,
x_2 = 0.5 + 0.5 t_2 = 0.7886751346

f(x_1) = 0.9572508991, f(x_2) = 0.6165195927

⇒ I = (0.9572508991 + 0.6165195927)/2 = 0.786885246  (the factor 1/2 from dx = dt/2)

T.E. = I_exact - I = 0.7853981634 - 0.786885246 = -1.487E-3

* If a 4-interval rectangle rule is used, |T.E.| = 1.302E-3 -- comparable, but it needs 4 function evaluations instead of 2.

• Advantage of Gauss quadrature: fewer f(x) evaluations.
• Requirement: the exact form of f(x) is known, as knowing discrete f(x_i) is not enough.
Composite Gaussian quadrature

Example: Evaluate ∫_0^1 e^x dx using composite 2-point Gauss quadrature.

Solution: As an illustration, we first show the details for h = 0.25.

* In [a=0, b=0.25], using x = k_0 + k_1 t (as change of variables) we find x = 0.125 + 0.125t.
Using t_1 = -1/√3, t_2 = 1/√3 ⇒ two nodes:
x_1 = 0.0528312163512968 & x_2 = 0.197168783648703
⇒ f(x_1) = 1.05425168965496 & f(x_2) = 1.21794959333476

* We then move on to the next interval [0.25, 0.5]:
change of variables: x = 0.375 + 0.125t
⇒ two nodes: x_3 = 0.302831216351297 & x_4 = 0.447168783648703
⇒ f(x_3) = 1.35368596510297 & f(x_4) = 1.56387823408633

* Similarly for [0.5, 0.75] and [0.75, 1]:

x_i                 f(x_i)
0.552831216351297   1.73816718540568
0.697168783648703   2.00805940117161
0.802831216351297   2.23185084451349
0.947168783648703   2.57839930932308

I ~ 0.125 Σ_{i=1}^8 f(x_i) = 1.71828027782411  (the weight is h/2 = 0.125 per node)
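The composite 2-point rule can be sketched as follows (helper name mine); it reproduces the h = 0.25 value above:

```python
import math

T = 1.0 / math.sqrt(3.0)

def composite_gauss2(f, a, b, n):
    # Two Gauss nodes per sub-interval of width h; each node carries weight h/2.
    h = (b - a) / n
    total = 0.0
    for i in range(n):
        c = a + (i + 0.5) * h                      # sub-interval midpoint
        total += f(c - 0.5 * h * T) + f(c + 0.5 * h * T)
    return 0.5 * h * total

val4 = composite_gauss2(math.exp, 0.0, 1.0, 4)     # h = 0.25
eA = abs(math.e - 1.0 - composite_gauss2(math.exp, 0.0, 1.0, 2))
eB = abs(math.e - 1.0 - val4)
```

The ratio eA/eB ≈ 16 matches the O(h^4) error of the 2-term rule.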
Composite Gaussian quadrature

h I(h) E(h)=|Iexact-I(h)| E(h)/E(h/2)


0.5 1.7182571650526 2.46634E-05
0.25 1.7182802778241 1.55063E-06 15.9054
0.125 1.7182817314002 9.70589E-08 15.9762
0.0625 1.7182818223906 6.06844E-09 15.9941
0.03125 1.7182818280797 3.79313E-10 15.9985
0.015625 1.7182818284353 2.37081E-11 15.9993

* The ratio 16 confirms that 2-term GQ has O(h4) error.


6.4.2 Gaussian quadrature with more than two terms

* Consider

I = ∫_{-1}^1 f(t) dt = Σ_{i=1}^n w_i f(t_i)   (2)

where w_i is the weighting factor.

◊ Since the (t_i, w_i) are free to choose, the integration can be made exact for polynomials of degree (2n - 1) or less.
◊ Substitute f(t) = t^k, k = 0, 1, 2, ..., 2n-1, and evaluate the integral:

∫_{-1}^1 t^k dt = { 0, k odd; 2/(k+1), k even }  ⇒  Σ_{i=1}^n w_i t_i^k = { 0, k odd; 2/(k+1), k even }

⇒ a system of 2n nonlinear equations.

It is difficult to solve for large n.

◊ There is an alternative derivation for (t_i, w_i) based on the roots of Legendre polynomials.
Nodes & weights for N-term Gauss-Legendre quadrature

◊ n=3:
t_i                  w_i
±0.774596669241483   0.555555555555556
0                    0.888888888888889

◊ n=4:
t_i                  w_i
±0.861136311594052   0.347854845137454
±0.339981043584856   0.652145154862546

◊ n=5:
t_i                  w_i
±0.906179845938664   0.236926885056189
±0.538469310105683   0.478628670499366
0                    0.568888888888889

◊ n=6:
t_i                  w_i
±0.932469514203152   0.171324492379170
±0.661209386466264   0.360761573048137
±0.238619186083196   0.467913934572691

◊ n=12:
t_i                  w_i
±0.981560634246732   0.047175336386475
±0.904117256370452   0.106939325995363
±0.769902674194317   0.160078328543358
±0.587317954286614   0.203167426723067
±0.367831498998180   0.233492536538353
±0.125233408511468   0.249147045813402
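In practice these tables need not be typed in: NumPy generates Gauss-Legendre nodes and weights directly (a sketch; the integrand is an assumed test case):

```python
import numpy as np

# Nodes/weights from numpy should match the tabulated values above.
t3, w3 = np.polynomial.legendre.leggauss(3)
t5, w5 = np.polynomial.legendre.leggauss(5)

# Example use: integrate cos over [-1, 1]; exact value is 2 sin(1).
approx = np.sum(w5 * np.cos(t5))
```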
6.4.3 Gaussian quadrature with different weighting functions w(x)

i) Gauss-Hermite quadrature (GHQ)

Task: evaluate ∫_{-∞}^∞ f(x) e^{-x^2} dx   (1) efficiently & accurately.

Ideas:
◊ Two-term quadrature: I = λ_1 f(x_1) + λ_2 f(x_2)   (2)

Four free parameters to choose; and they are:
λ_1 = λ_2 = 0.886226925452758 (= √π/2)
x_2 = -x_1 = 0.7071067811865476 (= 1/2^{1/2})

How good is this formula? Expect (2) to exactly match (1) for
f(x) = p_n(x) = polynomials of degree n ≤ 3.

For f(x) = 1: I_exact = π^{1/2} = 1.77245385090552;
formula (2) gives: I = λ_1 + λ_2 = 1.77245385090552.
Nodes & weights for 2-term Gauss-Hermite quadrature

For f(x) = x:   I_exact = 0;  I = (λ_2 - λ_1) x_2 = 0
For f(x) = x^2: I_exact = π^{1/2}/2 = 0.886226925452758;
                formula (2) gives: I = 2 λ_1 x_2^2 = λ_1 = 0.886226925452758
For f(x) = x^3: I_exact = 0;  I = (λ_2 - λ_1) x_2^3 = 0
Nodes & weights for N-term Gauss-Hermite quadrature

◊ N-term quadrature:

∫_{-∞}^∞ f(x) e^{-x^2} dx = Σ_{k=1}^N λ_k f(x_k)   (3)

Expect: I = I_exact for f(x) = p_n(x) = polynomial of degree n ≤ (2N - 1).

Since

∫_{-∞}^∞ x^{2m} e^{-x^2} dx = [(2m-1)!!/2^m] √π,  ∫_{-∞}^∞ x^{2m-1} e^{-x^2} dx = 0

⇒ 2N conditions to determine (λ_k, x_k) for k = 1, 2, ..., N.

◊ In practice, we use the roots of the Hermite polynomial to find the nodes and weights (x_k, λ_k).

* n=3:
x_k                  λ_k
±1.2247448713915891  0.295408975150919
0                    1.18163590060368

* n=4:
x_k                  λ_k
±1.65068012388579    0.08131283544724518
±0.524647623275290   0.8049140900055128

* n=8:
x_k                  λ_k
±2.930637420257240   1.99604072211367E-04
±1.981656756695840   1.70779830074134E-02
±1.157193712446780   2.07802325814891E-01
±0.381186990207322   6.61147012558241E-01

* n=128:
* n=128:

xk lk -6.34251881700178E+00 7.22010731692829E-19
-1.52918197668827E+01 1.79906598010928E-102 -6.12979658942216E+00 1.01893323042329E-17
-1.47338424735893E+01 2.60817240240911E-95 -5.91833117508581E+00 1.29454815933937E-16
-1.42739813047878E+01 1.40468977131508E-89 -5.70805150876808E+00 1.48422383751385E-15
-1.38652069844762E+01 1.26124948333853E-84 -5.49889066897391E+00 1.53904973035354E-14
-1.34895564126231E+01 3.40123008693663E-80 -5.29078548147718E+00 1.44634732119041E-13
-1.31377747880276E+01 3.75121586880472E-76 -5.08367616748934E+00 1.23421448660055E-12
-1.28043120820671E+01 2.04158579724398E-72 -4.87750603026481E+00 9.58031650873586E-12
-1.24855125853494E+01 6.21424416183031E-69 -4.67222117493263E+00 6.77578048777455E-11
-1.21788086198312E+01 1.15615516409637E-65 -4.46777025714858E+00 4.37318665984840E-10
-1.18823101188783E+01 1.40446725774048E-62 -4.26410425682551E+00 2.57939722942639E-09
-1.15945750547414E+01 1.17197850121298E-59 -4.06117627374927E+00 1.39219071529351E-08
-1.13144716442899E+01 6.99307292405195E-57 -3.85894134234428E+00 6.88458112215435E-08
-1.10410909760196E+01 3.08207738333929E-54 -3.65735626323530E+00 3.12287298617890E-07
-1.07736891151614E+01 1.03048625205569E-51 -3.45637944957173E+00 1.30074700323819E-06
-1.05116473299148E+01 2.67274375173606E-49 -3.25597078635065E+00 4.97992453259098E-06
-1.02544439284709E+01 5.48021702897879E-47 -3.05609150120268E+00 1.75404858480939E-05
-1.00016337989301E+01 9.02804013878664E-45 -2.85670404529740E+00 5.68874376004024E-05
-9.75283321343917E+00 1.21177953413059E-42 -2.65777198318948E+00 1.70014088262809E-04
-9.50770832327906E+00 1.34149748176436E-40 -2.45925989056573E+00 4.68551537808411E-04
-9.26596630029618E+00 1.23808555797636E-38 -2.26113325897306E+00 1.19156381445716E-03
-9.02734841339479E+00 9.61670806796750E-37 -2.06335840670856E+00 2.79783940160579E-03
-8.79162454488869E+00 6.33991352636649E-35 -1.86590239514059E+00 6.06886240692588E-03
-8.55858879506451E+00 3.57437889587942E-33 -1.66873294980372E+00 1.21669188644693E-02
-8.32805592079015E+00 1.73510302028206E-31 -1.47181838567448E+00 2.25543101678244E-02
-8.09985842150790E+00 7.29654500676840E-30 -1.27512753608915E+00 3.86739548106369E-02
-7.87384413353543E+00 2.67292362005807E-28 -1.07862968481090E+00 6.13607210044900E-02
-7.64987422768100E+00 8.57283048376936E-27 -8.82294500792981E-01 9.01086783764489E-02
-7.42782152995230E+00 2.41840345964766E-25 -6.86091975217334E-01 1.22503273164135E-01
-7.20756910338733E+00 6.02598403200645E-24 -4.89992360415458E-01 1.54210435298354E-01
-6.98900904264477E+00 1.33136785903359E-22 -2.93966110300295E-01 1.79773083907799E-01
-6.77204144325592E+00 2.61745758393481E-21 -9.79838219558190E-02 1.94097611864087E-01
-6.55657351526448E+00 4.59400767732972E-20 … …
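NumPy can generate these Gauss-Hermite nodes and weights as well; this sketch recovers the 2-term values derived above:

```python
import numpy as np

# Two-term Gauss-Hermite: nodes ±1/sqrt(2), weights sqrt(pi)/2.
x2, w2 = np.polynomial.hermite.hermgauss(2)

# Integrate f(x) e^{-x^2} over the real line for f(x) = x^2: exact value sqrt(pi)/2.
val = np.sum(w2 * x2**2)
```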
Gauss-Chebyshev quadrature

ii) Gauss-Chebyshev quadrature:

Evaluate I = ∫_{-1}^1 f(x)/√(1 - x^2) dx accurately and efficiently.

Idea: use x = cos(θ) ⇒ I = ∫_0^π f(cos θ) dθ = ∫_0^π F(θ) dθ

* Now we can use the trapezoidal rule to integrate with respect to θ.

Why is the trapezoidal rule good for this problem?
Because F'(θ) = dF/dθ = -sin(θ) df/dx ⇒ F'(0) = F'(π) = 0

⇒ the trapezoidal rule for ∫_0^π F(θ) dθ has O((Δθ)^4) error.
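A sketch of this substitution-plus-trapezoid approach (helper name mine; f(x) = x^2 is an assumed test case with exact value π/2):

```python
import math

def chebyshev_integral(f, n):
    # I = ∫_{-1}^{1} f(x)/sqrt(1 - x^2) dx = ∫_0^π f(cos θ) dθ,
    # evaluated with the composite trapezoidal rule in θ.
    h = math.pi / n
    s = 0.5 * (f(math.cos(0.0)) + f(math.cos(math.pi)))
    s += sum(f(math.cos(i * h)) for i in range(1, n))
    return h * s

# F(θ) = cos^2 θ has F'(0) = F'(π) = 0, so even a coarse trapezoid is
# essentially exact here.
val = chebyshev_integral(lambda x: x * x, 8)
```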
iii) Gauss-Laguerre quadrature:

Task: evaluate ∫_0^∞ f(x) e^{-x} dx   (1)

* Formula: I = Σ_{k=1}^n λ_k f(x_k)

* n=4:
x_k                λ_k
0.322547689619392  0.603154104341633
1.74576110115834   0.357418692437799
4.53662029692112   3.88879085150053E-2
9.39507091230113   5.39294705561327E-4

* n=8:
x_k                   λ_k
1.70279632305101E-01  3.69188589341637E-01
9.03701776799380E-01  4.18786780814343E-01
2.25108662986613E+00  1.75794986637171E-01
4.26670017028765E+00  3.33434922612156E-02
7.04590540239346E+00  2.79453623522567E-03
1.07585160101810E+01  9.07650877335821E-05
1.57406786412780E+01  8.48574671627253E-07
2.28631317368892E+01  1.04800117487151E-09

* n=16: (see the example next)
Example:

Example: evaluate I = ∫_0^∞ x^n e^{-x} dx (the Gamma function Γ(n+1)) for n = 10.   (1)

Solution: We use the 16-term quadrature for n = 10, with f(x) = x^n.
The result should be exact since 2×16 - 1 = 31 > 10.
(The result will NOT be exact if n is NOT an integer.)
Exact value = 10! = 3628800.

x_k                    λ_k                    λ_k (x_k)^10
8.76494104789278E-02   2.06151714957801E-01   5.51666924283209E-12
4.62696328915080E-01   3.31057854950884E-01   1.48889624452700E-04
1.14105777483122E+00   2.65795777644214E-01   9.94544963329366E-01
2.12928364509838E+00   1.36296934296377E-01   2.61108656764782E+02
3.43708663389320E+00   4.73289286941252E-02   1.08900898643366E+04
5.07801861454976E+00   1.12999000803394E-02   1.28830142932817E+05
7.07033853504823E+00   1.84907094352631E-03   5.77238994746393E+05
9.43831433639194E+00   2.04271915308278E-04   1.14591665648866E+06
1.22142233688661E+01   1.48445868739813E-05   1.09705069026429E+06
1.54415273687816E+01   6.82831933087120E-07   5.2627940886981E+05
1.91801568567531E+01   1.88102484107967E-08   1.26740692680414E+05
2.35159056939919E+01   2.86235024297388E-10   1.48026636974295E+04
2.85787297428821E+01   2.12707903322410E-12   7.73059358345026E+02
3.45833987022866E+01   6.29796700251786E-15   1.54126421991585E+01
4.19404526476883E+01   5.05047370003551E-18   8.50478319942106E-02
5.17011603395433E+01   4.16146237037285E-22   5.67869931468020E-05

Sum = 3628799.99999993

Absolute error = -6.79865E-08; relative error = -1.87353E-14.
Thus the T.E. is practically 0, and the error is dominated by roundoff error.
Example: evaluate I = ∫₀^∞ dx/(1+x⁴)   (Iexact = π/(2√2) = 1.11072073453959…)   (2)

Solution: First we rewrite the integral as I = ∫₀^∞ [eˣ/(1+x⁴)] e^−x dx

That is, f(x) = eˣ/(1+x⁴).

Next, we use the 256-term Gauss-Laguerre quadrature
(see tabulated nodes and weights in a separate file):

I = Σ_{k=1}^{256} λ_k f(x_k) = 1.11072073290589

Error = 1.6337E-09
Lecture 30
Last lecture:
Composite Gauss quadrature
This lecture:

Improper Integral
Singular integrand
Unbounded interval
6.5 Improper Integral

6.5.1 Singular integrand

i) Strong but integrable singularity

Consider I = ∫₀⁴ dx/(√x (eˣ + 1))   (Iexact = 1.06394227622575…)

Note: f(x) = 1/(√x (eˣ + 1)) is singular near x=0:
[Figure: plot of f(x) on (0, 4]; f rises steeply as x → 0]

• Since f(x) rises rapidly ⇒ large |f⁽ᵏ⁾(x)|.

• The truncation error (T.E.) in I is proportional to hᵐ f⁽ᵏ⁾(x).

• A larger |f⁽ᵏ⁾(x)| therefore requires a smaller h to reduce the T.E.

⇒ we need many points to resolve the rapid rise of f(x) near a
singular point in order to accurately evaluate the integral.
▪ This procedure can be costly.

▪ If we use the trapezoidal rule or Simpson's rule, f(x=0) is needed in the formulation. However f(0)→∞.
▪ Note: the singularity, ~O(x^−1/2), near x=0 is integrable, and the integral I exists.
I = ∫₀⁴ dx/(√x (eˣ + 1))   (contd)   (Iexact = 1.06394227622575…)

▪ If one uses Gaussian quadrature, f(x=0) is NOT needed and the formulae can be directly applied.
▪ However, using composite 4-term Gaussian quadrature, we obtain
  I(n=1000) = 1.057818,  I(n=2000) = 1.05961335 (8000 function evaluations)
  where n is the number of sub-intervals in [0, 4].
▪ Obviously, many more terms would still be needed to obtain reasonable convergence, so this is not a good approach.

* Now let ξ = √x  ⇒  x = ξ²  ⇒  dx = 2ξ dξ, so that

  dx/√x = 2ξ dξ/ξ = 2 dξ   (singularity gone!)

Thus,  I = ∫₀² 2 dξ/(exp(ξ²) + 1)

The new integrand F(ξ) = 2/(exp(ξ²) + 1) is well behaved near ξ=0.
[Figure: plot of F(ξ) on [0, 2]; F(0) = 1, decaying smoothly toward 0]

▪ Using 4-term Gaussian quadrature and only 2 intervals in ξ,
  I = 1.063944 (with 8 function evaluations)
I = ∫₀⁴ dx/(√x (eˣ + 1)) = ∫₀² 2 dξ/(exp(ξ²) + 1)   (contd)   (Iexact = 1.06394227622575…)

* Using 4-term Gaussian quadrature and only 2 intervals in ξ,
  I = 1.063944 (with 8 function evaluations)

* Using Simpson's rule:
  4 intervals (h=0.5) in [0, 2]  ⇒ I = 1.0628536363
  8 intervals (h=0.25)           ⇒ I = 1.0639147408

  R(n) = (I_{n/2} − I_{n/4})/(I_n − I_{n/2})

     n     I(n)                R(n)      Richardson extrapolation
     2     1.06250126362804
     4     1.06285363625177             1.06287712776002
     8     1.06391474078716    0.3321   1.06398548108952
    16     1.06394062259110   40.9981   1.06394234804470
    32     1.06394217383836   16.6845   1.06394227725484
    64     1.06394226984197   16.1582   1.06394227624221
   128     1.06394227582700   16.0406   1.06394227622600   (13 significant digits)
   256     1.06394227620083   16.0100   1.06394227622575
   512     1.06394227622419   16.0030   1.06394227622575
  1024     1.06394227622565   15.9982   1.06394227622575

Order of convergence k = 4 (R → 2⁴ = 16). The change of variables significantly improved the accuracy while drastically reducing the computational effort.
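The table above can be reproduced with a few lines of Python (a sketch; composite Simpson's rule on the transformed integral, with a helper `composite_simpson` of our own naming):

```python
import math

def composite_simpson(f, a, b, n):
    """Composite Simpson's 1/3 rule with n (even) sub-intervals."""
    if n % 2:
        raise ValueError("n must be even")
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + i * h) for i in range(1, n, 2))
    s += 2 * sum(f(a + i * h) for i in range(2, n, 2))
    return s * h / 3

# After xi = sqrt(x): I = integral over [0, 2] of 2/(exp(xi^2) + 1) d(xi)
F = lambda xi: 2.0 / (math.exp(xi * xi) + 1.0)
I16 = composite_simpson(F, 0.0, 2.0, 16)
print(I16)  # close to Iexact = 1.06394227622575 (error ~1.7E-6, as in the table)
```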
* Suppose f(x) = g(x)/xᵖ ~ O(x^−p) (p>0), where g(x) is smooth near x=0.
  Let x = ξ^q be the desired transformation.

  We note  f(x) dx ~ [g(ξ^q)/ξ^{qp}] · qξ^{q−1} dξ ~ O(ξ^{q−qp−1}) dξ

  Set q − qp − 1 = 0 to remove the singularity

  ⇒ q = 1/(1−p)  ⇒  x^−p dx = dξ/(1−p).

Thus the change of variables x = ξ^{1/(1−p)} effectively eliminates the singularity at x=0.

• This only works for p<1.

• For p≥1, the integral diverges.

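A quick check of the recipe with p = 1/2 (so q = 1/(1−p) = 2): for the test integrand ∫₀¹ (1+x)/√x dx = 8/3 (our own choice here), the substitution x = ξ² yields 2∫₀¹ (1+ξ²) dξ, a polynomial integrand, so Simpson's rule becomes exact:

```python
def composite_simpson(f, a, b, n):
    """Composite Simpson's 1/3 rule with n (even) sub-intervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + i * h) for i in range(1, n, 2))
    s += 2 * sum(f(a + i * h) for i in range(2, n, 2))
    return s * h / 3

# p = 1/2  =>  q = 1/(1-p) = 2, i.e. x = xi^2 and x^(-1/2) dx = 2 d(xi).
# Original integrand g(x)/x^p with g(x) = 1 + x; transformed: 2*(1 + xi^2).
F = lambda xi: 2.0 * (1.0 + xi * xi)
I = composite_simpson(F, 0.0, 1.0, 2)   # quadratic => Simpson is exact
print(I)  # 8/3 = 2.666...
```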

ii) Weak singularity

Use the standard Simpson's 1/3 rule to evaluate

I = ∫₀² √x dx   (Iexact = (4/3)√2 = 1.88561808316413…)

and assess the error E(h) = |I − Iexact| versus step size h (= 2/n).

Results:
      h            n      I(n)               E(h)        R(n)
      1            2   1.80473785412436   0.0808802
      0.5          4   1.85693669544760   0.0286814
      0.25         8   1.87547142164965   0.0101467   2.816273
      0.125       16   1.88203028547284   0.0035878   2.825905
      0.0625      32   1.88434957907840   0.0012685   2.827958
      0.03125     64   1.88516959760996   0.0004485   2.828343
      0.015625   128   1.88545951947362   0.0001586   2.828412
      0.0078125  256   1.88556202242733   5.606E-05   2.828424
      3.90625E-03  512   1.88559826270015   1.982E-05   2.828427
      1.95313E-03 1024   1.88561107557186   7.008E-06   2.828427
      9.76563E-04 2048   1.88561560560612   2.478E-06   2.828427
      4.88281E-04 4096   1.88561720721509   8.759E-07   2.828427
      2.44141E-04 8192   1.88561777346937   3.097E-07   2.828427
      1.22070E-04 16384  1.88561797367049   1.095E-07   2.828427
      6.10352E-05 32768  1.88561804445228   3.871E-08   2.828427

[Figure: log-log plot of E(h) vs h with slope 1.5; note R → 2^1.5 = 2.828427]

• E(h) = 0.0811846 h^1.5 ~ O(h^1.5) instead of E(h) ~ O(h⁴) for smooth functions.
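The limiting ratio R(n) → 2^1.5 = 2.828427 in the table can be verified directly (a sketch using the same composite Simpson helper):

```python
def composite_simpson(f, a, b, n):
    """Composite Simpson's 1/3 rule with n (even) sub-intervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + i * h) for i in range(1, n, 2))
    s += 2 * sum(f(a + i * h) for i in range(2, n, 2))
    return s * h / 3

I_exact = 1.88561808316413  # (4/3)*sqrt(2)
f = lambda x: x ** 0.5
errors = {n: abs(composite_simpson(f, 0.0, 2.0, n) - I_exact) for n in (256, 512)}
ratio = errors[256] / errors[512]
print(ratio)  # approaches 2^1.5 = 2.8284: O(h^1.5), not O(h^4)
```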


6.5.2 Unbounded interval

Example: evaluate I = ∫₀^∞ dx/(1+x⁴),  f(x) = 1/(1+x⁴)   (Iexact = π/(2√2) = 1.11072073453959…)

Solution: We have worked on this using Gauss-Laguerre quadrature. Now use a change of variable:

* Let x = eᵗ − 1  ⇒  dx = eᵗ dt  ⇒  I = ∫₀^∞ eᵗ dt/[(eᵗ − 1)⁴ + 1],   F(t) = eᵗ/[(eᵗ−1)⁴+1]

* Use Simpson's rule for ∫₀⁵ F(t) dt (use t = 5 to replace t→∞) with h = Δt = 0.01 ⇒

  I ≈ ∫₀⁵ F(t) dt = 1.11072063042711,   Error = Iexact − 1.11072063042711 = 1.04112E-7

* Advantage of the transformation: it brings "∞", or x = e⁵ − 1 = 147.413, to t = 5. Note: f(147.413) ~ 2.118E-9 is "sufficiently" small.
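A sketch of this computation in Python (h = 0.01 means n = 500 Simpson sub-intervals on [0, 5]; `composite_simpson` is our helper name):

```python
import math

def composite_simpson(f, a, b, n):
    """Composite Simpson's 1/3 rule with n (even) sub-intervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + i * h) for i in range(1, n, 2))
    s += 2 * sum(f(a + i * h) for i in range(2, n, 2))
    return s * h / 3

# x = e^t - 1 maps [0, inf) to t in [0, inf); truncate at t = 5 (x = 147.413).
F = lambda t: math.exp(t) / ((math.exp(t) - 1.0) ** 4 + 1.0)
I = composite_simpson(F, 0.0, 5.0, 500)
print(I)  # within ~1.04E-7 of Iexact = pi/(2*sqrt(2))
```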
* Anatomy of the Error (1.04112E-7):

i) It comes mainly from the truncated tail

   ∫₅^∞ F(t) dt = ∫_{147.413}^∞ dx/(1+x⁴)

   Since 1/(1+x⁴) = x⁻⁴/(1+x⁻⁴) = x⁻⁴(1 − x⁻⁴ + x⁻⁸ − …), we have

   ∫_{147.413}^∞ dx/(1+x⁴) ~ ∫_{147.413}^∞ (x⁻⁴ − x⁻⁸ + …) dx = [1/(3x³) − 1/(7x⁷)] at x = 147.413

   = 1.0405668E-7
I = ∫₀^∞ dx/(1+x⁴)   (contd)

Thus the result for I can be improved to:

1.11072063042711 + 1.040566847E-7 = 1.11072073448379

ii) The rest of the error: exact − above = 1.11072073453959 − 1.11072073448379 = 5.580003E-11

It comes from the truncation error of Simpson's rule:

−(h⁴/180)[F‴(5) − F‴(0)] ~ −(10⁻⁸/180)(0 − 1) ~ 5.5556E-11.
6.5.3 Unbounded interval with singularity at x=0

Evaluate I = ∫₀^∞ dx/(√x (1+x²)) efficiently and accurately.

( Iexact = π/√2 = 2.22144146907918 )

* Diagnosis: The challenges for f(x) = 1/(√x (1+x²)) are:

i) it has a strong singularity at x=0 with f(0)→∞;

ii) it decays algebraically (i.e. slowly), f(x) ~ 1/x^2.5 as x→∞,
    so that if the integration stops at x=1000, the error from beyond that
    point can easily be of the order O(1000^−1.5) for I, which is large;

iii) finding a change of variable to suit both 0 & ∞ is not easy.

[Figure: log-log plot of f(x) showing the x^−1/2 singularity near 0 and the algebraic decay ~x^−2.5 at large x]

Solution:
* Strategy: divide and conquer.
* Split the interval into (0,1] and [1, ∞):  I = ∫₀^∞ dx/(√x(1+x²)) = ∫₀¹ (…) dx + ∫₁^∞ (…) dx ≡ I1 + I2
Divide & conquer strategy for I = ∫₀^∞ dx/(√x(1+x²)) = ∫₀¹ (…) dx + ∫₁^∞ (…) dx

* For I1 = ∫₀¹ dx/(√x(1+x²)), use the change of variable x = ξ² ⇒ use a "microscope" to enlarge the x=0 region.

  ⇒ I1 = 2 ∫₀¹ dξ/(1+ξ⁴)

  For this smooth integrand, we can then use 16-term Gauss-Legendre quadrature to obtain
  I1 = 1.73394597467982;  12-term G-L: I1 = 1.73394597467931
  Total # of function evaluations = 16.

* For I2 = ∫₁^∞ dx/(√x(1+x²)), let x = eᵗ ⇒ use a "telescope" to bring "∞" a lot closer

  ⇒ I2 = ∫₀^∞ e^{t/2} dt/(e^{2t}+1) = ∫₀^∞ e^{−t} · e^{t/2}/(eᵗ + e^{−t}) dt

  Let f(t) = e^{t/2}/(eᵗ + e^{−t}). It decays exponentially.

  We then use 100-term Gauss-Laguerre quadrature.

  We note that λ_k f(x_k) ~ 1.20E-17 at k=32.  ⇒ Σ_{k=1}^{32} λ_k f(x_k) = 0.48749549439936

⇒ I = 1.73394597467982 + 0.48749549439936 = 2.22144146907918  ⇒ Error: E ~ 0

Total # of f(x) evaluations: 48
6.6 Adaptive Integration

• Consider for example: I = ∫_{0.2}^{1} dx/x²

The integrand f(x) = 1/x² rises rapidly near x = 0.2.
[Figure: plot of f(x) = 1/x² on [0.2, 1]]

How do we know if we have obtained the desired accuracy?

Typically, compute I1 with h1 = (b−a)/2,
then I2 with h2 = (b−a)/2², …,
I_{n−1},
I_n with h_n = (b−a)/2ⁿ.
Compare I_n with I_{n−1}; if |I_n − I_{n−1}| < ε the result is accepted.

* Using Simpson's rule with n intervals, h_k = (b−a)/2ᵏ:

  I = S_k[a, b] − ((b−a)/180) h_k⁴ f⁽⁴⁾(ξ),  a ≤ ξ ≤ b

  S_k[a, b] = [f_a + 4f₁ + 2f₂ + 4f₃ + … + 4f_{n−1} + f_n] · h_k/3

    k     n     h_k     S_k        |S_k − S_{k−1}|
    1     2     0.4     4.948148
    2     4     0.2     4.187037   0.76111
    3     8     0.1     4.024281   0.16276
    4    16     0.05    4.002164   0.02212
    5    32     0.025   4.000154   0.00201

* If we set the tolerance to be ε=0.02, then S = S₅ = 4.000154
  Extrapolation: I = (2⁴S₅ − S₄)/(2⁴ − 1) = 4.00002
Adaptive scheme

Comments:
• Disadvantage: h is the same everywhere in this approach.
• We only need a small h near the region where f(x) changes rapidly.
• The above is NOT an adaptive procedure.
⇒ An adaptive integration scheme seeks to automatically choose a small h only in regions where f(x) changes rapidly.

◊ Adaptive scheme
* Set ε=0.02; evaluate S1[0.2, 1] = [f(0.2) + 4f(0.6) + f(1)] · 0.4/3 = 4.948145148,  h1 = (1−0.2)/2 = 0.4

* We are not sure if S1[0.2, 1] is accurate enough.

* Interval halving: divide [0.2, 1.0] into two: [0.2, 0.6] and [0.6, 1.0].

  Evaluate S2[0.2, 0.6] & S2[0.6, 1] using h2 = 0.2:
  S2[0.2, 0.6] = [f(0.2) + 4f(0.4) + f(0.6)] · 0.2/3 = 3.5185182
  S2[0.6, 1.0] = [f(0.6) + 4f(0.8) + f(1.0)] · 0.2/3 = 0.6685185
  S2[0.2, 1.0] = S2[0.2, 0.6] + S2[0.6, 1.0] = 4.1870367

* Test: |S2[0.2, 1.0] − S1[0.2, 1.0]| = 0.761 > ε = 0.02  ⇒ halve each half again.
Level 3: S3[0.2, 0.4] = 2.52314825,  S3[0.4, 0.6] = 0.83425926  → S3[0.2, 0.6] = 3.35740741
         Test: |S3[0.2,0.6] − S2[0.2,0.6]| = 0.01611 > ε/2  ⇒ halve again.
         S3[0.6, 0.8] = 0.41678477,  S3[0.8, 1.0] = 0.25002572  → S3[0.6, 1] = 0.66681049
         Test: |S3[0.6, 1] − S2[0.6, 1]| = 0.001708 < ε/2  ⇒ accept.
         Extrap.: RS[0.6, 1] = (16S3 − S2)/15 = 0.6666962

Level 4: S4[0.2, 0.3] = 1.66851852,  S4[0.3, 0.4] = 0.83356954  → S4[0.2, 0.4] = 2.50208806
         Test: |S4[0.2,0.4] − S3[0.2,0.4]| = 0.021 > ε/4 = 0.005  ⇒ halve again.
         S4[0.4, 0.5] = 0.50005144,  S4[0.5, 0.6] = 0.33334864  → S4[0.4, 0.6] = 0.8334008
         Test: |S4[0.4, 0.6] − S3[0.4, 0.6]| = 0.000859 < ε/4 = 0.005  ⇒ accept.
         Extrap: RS[0.4, 0.6] = (16S4 − S3)/15 = 0.8333428

Level 5: S5[0.2, 0.25] = 1.000102881,  S5[0.25, 0.3] = 0.666697275  → S5[0.2, 0.3] = 1.666800157
         Test: |S5[0.2,0.3] − S4[0.2,0.3]| = 1.78E-3 < ε/8 = 0.0025  ⇒ accept.
         Extrap: RS[0.2, 0.3] = (16S5 − S4)/15 = 1.6666856
         S5[0.3, 0.35] = 0.47620166,  S5[0.35, 0.4] = 0.35714758  → S5[0.3, 0.4] = 0.83334924
         Test: |S5[0.3,0.4] − S4[0.3,0.4]| = 2.2E-4 < ε/8 = 0.0025  ⇒ accept.
         Extrap: RS[0.3, 0.4] = (16S5 − S4)/15 = 0.8333346
Accepted panels:
  RS[0.2, 0.3] = 1.6666856
  RS[0.3, 0.4] = 0.8333346
  RS[0.4, 0.6] = 0.8333428
  RS[0.6, 1]   = 0.6666962

⇒ S[0.2, 1] = 1.6666856 + 0.8333346 + 0.8333428 + 0.6666962 = 4.00005957

Actual Error = 5.957E-5   (Iexact = 1/0.2 − 1/1 = 4)

• Number of function evaluations to meet the tolerance (ε=0.02) = 17
  Non-adaptive: 33 (Error = 1.54E-4)
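The interval-halving logic above can be sketched as a short recursive routine (a minimal implementation of our own: it applies the Richardson correction (16S₂ − S₁)/15 on every accepted panel and splits the tolerance in half at each subdivision, as in the worked example):

```python
def simpson(f, a, b):
    """Basic 3-point Simpson estimate on [a, b]."""
    return (b - a) / 6.0 * (f(a) + 4.0 * f(0.5 * (a + b)) + f(b))

def adaptive_simpson(f, a, b, eps):
    """Recursive interval halving: accept a panel when |S2 - S1| < eps."""
    c = 0.5 * (a + b)
    s1 = simpson(f, a, b)
    s2 = simpson(f, a, c) + simpson(f, c, b)
    if abs(s2 - s1) < eps:
        return s2 + (s2 - s1) / 15.0       # Richardson: (16*S2 - S1)/15
    return (adaptive_simpson(f, a, c, eps / 2.0) +
            adaptive_simpson(f, c, b, eps / 2.0))

I = adaptive_simpson(lambda x: 1.0 / x ** 2, 0.2, 1.0, 0.02)
print(I)  # near the exact value 1/0.2 - 1/1 = 4
```

This version recomputes the parent estimate at each level, which is redundant but keeps the sketch short; a production routine would pass the saved panel values down the recursion.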
Supplemental Reading
Chapter 6 Numerical Differentiation & Integration

Example 1  Find I = ∫₀^{π/2} dx/(1 + tan¹⁰x) without using numerical methods.

Solution: Let f(x) = 1/(1 + tan¹⁰x). Then

  f(π/2 − x) = 1/(1 + tan¹⁰(π/2 − x)) = 1/(1 + cot¹⁰x) = tan¹⁰x/(1 + tan¹⁰x) = 1 − f(x)

Hence  ∫₀^{π/2} f(π/2 − x) dx = ∫₀^{π/2} [1 − f(x)] dx = π/2 − ∫₀^{π/2} f(x) dx

Since, by the substitution t = π/2 − x,

  ∫₀^{π/2} f(π/2 − x) dx = −∫_{π/2}^{0} f(t) dt = ∫₀^{π/2} f(t) dt = ∫₀^{π/2} f(x) dx,

we get  ∫₀^{π/2} f(x) dx = π/2 − ∫₀^{π/2} f(x) dx

⇒ ∫₀^{π/2} f(x) dx = π/4

• Lesson:
Study the analytical behavior of the integrand before using the computer for brute-force calculation.
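The symmetry result can still be confirmed numerically, of course (a sketch; the integrand decays smoothly to 0 as x→π/2, so a plain composite Simpson rule suffices):

```python
import math

def composite_simpson(f, a, b, n):
    """Composite Simpson's 1/3 rule with n (even) sub-intervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + i * h) for i in range(1, n, 2))
    s += 2 * sum(f(a + i * h) for i in range(2, n, 2))
    return s * h / 3

f = lambda x: 1.0 / (1.0 + math.tan(x) ** 10)
I = composite_simpson(f, 0.0, math.pi / 2.0, 1000)
print(I, math.pi / 4.0)  # the two values agree closely
```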
Example 2  Numerical integration for the complementary error function erfc(x)

Develop an accurate and efficient numerical integration procedure to evaluate
the complementary error function for large x.

Solution:

* The error function is:  erf(x) = (2/√π) ∫₀ˣ exp(−t²) dt;  erf(∞) = 1

* For small x, we can use the Taylor series expansion of erf(x) to obtain erfc(x)
  by using
  erfc(x) = 1 − erf(x)

* However, we have learned that for large |x|, the roundoff error in the Taylor
  series expansion will destroy the accuracy of erf(x) and erfc(x).

* For large |x|, it is advantageous to express it as

  erf(x) = 1 − (2/√π) ∫ₓ^∞ e^{−t²} dt = 1 − erfc(x)

  so that  erfc(x) = (2/√π) ∫ₓ^∞ e^{−t²} dt   (1)

* Introduce the change of variable:
  t = v + x   (2)

  so that  erfc(x) = (2/√π) e^{−x²} ∫₀^∞ e^{−v² − 2xv} dv
                   = (2/√π) e^{−x²} ∫₀^∞ e^{−v} e^{−v² − (2x−1)v} dv   (3)

* Recall Gauss-Laguerre quadrature (GLQ):

  I = ∫₀^∞ f(x) exp(−x) dx = Σ_{k=1}^{N} λ_k f(x_k)   (4)

* There are two advantages of dealing with (3) or using (2):

  i) The integration limit is [0, ∞), so that we can use the accurate GLQ procedure
     by recognizing f(v) = exp[−v² − (2x−1)v]. Since f(v) decays rapidly, we
     do not need many terms in the GLQ in order to obtain accurate results.
     For x=1, for example, f(v=5.436) = 6.405E-16. If we use a 256-term
     GLQ in I = Σ_{k=1}^{256} λ_k f(v_k), we can stop at the 24th node, since
     with λ₂₄ = 1.998E-3 we get λ₂₄ f(v₂₄) = 1.28E-18, while λ₁ f(v₁) =
     1.43E-2. Hence at most only the first 24 terms in the GLQ are needed;
     any term beyond k=24 makes contributions that are even smaller
     than machine error.

  ii) For large values of |x|, exp(−x²) has been factored out of the integration,
      so that the magnitude of the integrand is no longer very small.

For x=3:
     Nodes v_k              Weights λ_k            f(v_k)                 λ_k · f(v_k)
5.63664024461788E-03 1.43841583354197E-2 9.72179352085835E-01 1.39839817308284E-02
2.96993538627220E-02 3.26880042670518E-2 8.61242796294662E-01 2.81523082002475E-02
7.29909700414853E-02 4.91865460452045E-2 6.90539203710237E-01 3.39652383393124E-02
1.35522727629362E-01 6.30083332613029E-2 4.98585570745617E-01 3.14150458008168E-02
2.17298150208849E-01 7.35473272042786E-2 3.21837268945633E-01 2.36702709256759E-02
3.18320587462526E-01 8.04813955215006E-2 1.83979501271005E-01 1.48069270096402E-02
4.38593924750508E-01 8.37782059806297E-2 9.20582035598338E-02 7.71247114004250E-03
5.78122716012576E-01 8.36703647157153E-2 3.97621646872032E-02 3.32691482126463E-03
7.36912219811437E-01 8.06048869119271E-2 1.45873477452929E-02 1.17581151535429E-03
9.14968412092162E-01 7.51760246912409E-2 4.46266456347441E-03 3.35485381412478E-04
1.11229799195779E+00 6.80518234921075E-2 1.11522758509835E-03 7.58932707746425E-05
1.32890838503956E+00 5.99042513766679E-2 2.22509798077039E-04 1.33292828777786E-05
1.56480774598414E+00 5.13507526567855E-2 3.45652397697468E-05 1.77495107793875E-06
1.82000496063290E+00 4.29121916845640E-2 4.06765919676904E-06 1.74552171159233E-07
2.09450964813355E+00 3.49890164176762E-2 3.52040747962061E-07 1.23175595101356E-08
2.38833216309314E+00 2.78546588330936E-2 2.17035208864264E-08 6.04544169768328E-10
2.70148359782584E+00 2.16631054438426E-2 9.21119241503868E-10 1.99543032550506E-11
3.03397578472306E+00 1.64664049048934E-2 2.59490014739922E-11 4.27286765148432E-13
3.38582129876156E+00 1.22376124792404E-2 4.66792126804301E-13 5.71242115619148E-15
3.75703346015866E+00 8.89512796696100E-3 5.14645780598107E-15 4.57784007607669E-17
4.14762633718054E+00 6.32530484500813E-3 3.33012773502293E-17 2.10640730968365E-19
4.55761474910781E+00 4.40132094141652E-3 1.20827149292846E-19 5.31799062474263E-22
4.98701426936161E+00 2.99737443582080E-3 2.34313406260787E-22 7.02325013896175E-25
5.43584122879277E+00 1.99814663199109E-3 2.30954680331271E-25 4.61481316646508E-28
Sum=1.58635639863988E-01

which results in  erfc(3) = (2/√π) e^{−9} · Sum = 2.20904969985855E-05

From Matlab, the result is
erfc(3) = 2.20904969985854E-005

Thus the present procedure is very accurate while it requires at most only 24
function evaluations.

The results of other values of x using only 24 terms are as follows:


x Present erfc(x) Matlab erfc(x)
0 1.00000000000000E+00 1
0.1 8.87537083981715E-01 8.87537083981715E-01
0.2 7.77297410789522E-01 7.77297410789521E-01
0.3 6.71373240540873E-01 6.71373240540872E-01
0.4 5.71607644953332E-01 5.71607644953331E-01
0.5 4.79500122186953E-01 4.79500122186953E-01
0.6 3.96143909152074E-01 3.96143909152074E-01
0.7 3.22198806162582E-01 3.22198806162580E-01
0.8 2.57899035292340E-01 2.57899035292340E-01
1 1.57299207050285E-01 1.57299207050285E-01
1.5 3.38948535246893E-02 3.38948535246893E-02
2 4.67773498104727E-03 4.67773498104727E-03
2.5 4.06952017444959E-04 4.06952017444959E-04
3 2.20904969985855E-05 2.20904969985854E-05
3.5 7.43098372341413E-07 7.43098372341412E-07
4 1.54172579002800E-08 1.54172579002800E-08
4.5 1.96616044154289E-10 1.96616044154289E-10
5 1.53745979442803E-12 1.53745979442803E-12
5.5 7.35784791797440E-15 7.35784791797441E-15
6 2.15197367124989E-17 2.15197367124989E-17
6.5 3.84214832712065E-20 3.84214832712066E-20
7 4.18382560777941E-23 4.18382560777940E-23
7.5 2.77664938603057E-26 2.77664938603058E-26
8 1.12242971729829E-29 1.12242971729829E-29
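Formula (3) is easy to test against a library erfc. In this sketch, plain composite Simpson on [0, 10] stands in for the Gauss-Laguerre nodes (beyond v = 10 the integrand exp(−v² − 2xv) is below 1E-43 for x ≥ 0, so the truncation is harmless):

```python
import math

def composite_simpson(f, a, b, n):
    """Composite Simpson's 1/3 rule with n (even) sub-intervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + i * h) for i in range(1, n, 2))
    s += 2 * sum(f(a + i * h) for i in range(2, n, 2))
    return s * h / 3

def my_erfc(x):
    # erfc(x) = (2/sqrt(pi)) * exp(-x^2) * integral_0^inf exp(-v^2 - 2*x*v) dv
    integral = composite_simpson(lambda v: math.exp(-v * v - 2.0 * x * v),
                                 0.0, 10.0, 20000)
    return 2.0 / math.sqrt(math.pi) * math.exp(-x * x) * integral

print(my_erfc(3.0), math.erfc(3.0))  # both approximately 2.20905E-05
```

Note the key design point from the text: factoring out exp(−x²) keeps the numerically integrated part O(1), so the tiny value of erfc at large x comes from an exactly computed prefactor rather than from cancellation.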
Example 3: Effect of singularity on the loss of accuracy

For f(x) = x^α (α>0), consider the integration

  I = ∫₀^b x^α dx   (b ~ O(1))

using Simpson's rule. Investigate how we lose the O(h⁴) global accuracy.

Solution: Let's start by considering the first two sub-intervals, x = [0, 2h]:
[Figure: data points of f(x) = x^α (α=0.5) on [0, 2h] vs the interpolating parabola used by Simpson's rule]

Simpson:  S[0, 2h] = (h/3)[0 + 4h^α + (2h)^α] = (1/3)(4 + 2^α) h^{1+α}

Exact:  I(0, 2h) = ∫₀^{2h} x^α dx = (2h)^{1+α}/(1+α)

Error:  E(0, 2h) = I(0, 2h) − S[0, 2h] = h^{1+α} [2^{1+α}/(1+α) − (1/3)(4 + 2^α)]

Note: a) For α = 1, 2, & 3, coef_1 = 0 ⇒ E(0, 2h) = 0
         (Simpson's rule is exact for polynomials of degree 1, 2, & 3.)
      b) When α is not an integer and α < 3, coef_1 ≠ 0 and α+1 < 4,
         ⇒ E(0, 2h) ~ h^{α+1} ≫ h⁵ (the order of the local T.E. in a smooth region)
      c) When α = 0.5, we end up with O(h^{1.5}) error, as clearly shown
         in the case of I = ∫₀² √x dx for α = 0.5.

Plot of coef_1(α) ≡ 2^{1+α}/(1+α) − (1/3)(4 + 2^α) vs α:
[Figure: coef_1(α) decreases from 1/3 at α=0 and crosses zero at α = 1, 2, 3]
d) The next plot verifies the same behavior for α = 0.1:
[Figure: log-log plot of the error vs n, showing slope 1.1 for α = 0.1 and slope 1.5 for α = 0.5]

Of course there are errors from other intervals as well. Let

  E(2h, 4h) = I(2h, 4h) − S[2h, 4h] = coef_2 · h^{1+α},
  E(4h, 6h) = I(4h, 6h) − S[4h, 6h] = coef_3 · h^{1+α}, …

The contributions from (2h, 4h), (4h, 6h), … should be much smaller
than that from (0, 2h), as x moves away from the singularity at 0. This
can also be seen from the rapid drop of the coef_n's as n increases in
the next plot.

Plot of coef_n(α) vs n for α = 0.1 and α = 0.5:
[Figure: log-log plot of coef_n(0.1) and coef_n(0.5) vs n, dropping rapidly as n increases]

For α = 0.5, coef_1 = 0.08088, coef_2 = 0.000243,
while the entire T.E. from all intervals (b=2) is 0.081184 h^{1+α},
which shows clearly that the error from (0, 2h) constitutes
0.08088/0.081184 = 99.625% of the error.

The present simple analysis clearly shows how we get O(h^{1+α}) error
instead of O(h⁴) when we have an x^α singularity.

Another lesson: if we can handle the integration on (0, 2h) analytically
(with little NUMERICAL error) by using some expansion method
and let Simpson's rule handle the rest of the interval (2h, b), we
can essentially eliminate the dominant integration error coming from
the singularity.

Now, can you perform a similar analysis when the trapezoidal rule is used?
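The error coefficient derived above is easy to tabulate (a minimal sketch):

```python
def coef_1(alpha):
    """Leading Simpson error coefficient on [0, 2h] for f(x) = x**alpha:
    E(0, 2h) = coef_1(alpha) * h**(1 + alpha)."""
    return 2.0 ** (1.0 + alpha) / (1.0 + alpha) - (4.0 + 2.0 ** alpha) / 3.0

for a in (0.1, 0.5, 1.0, 2.0, 3.0):
    print(a, coef_1(a))
# coef_1 vanishes at alpha = 1, 2, 3 (Simpson is exact for cubics),
# and coef_1(0.5) = 0.08088..., matching the analysis above.
```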
Example 4: Divide & conquer strategy

Evaluate I = ∫₀^∞ dx/(√x (x² + 1)) efficiently and accurately.

[Figure: log-log plot of f(x) = 1/(√x(x²+1)) showing the x^−1/2 singularity near x=0 and the algebraic decay at large x]

* Diagnosis: The challenges for f(x) = 1/(√x (x² + 1)) are:
  i) it has a strong singularity at x=0 with f(0)→∞;
  ii) it decays algebraically (i.e. slowly), f(x) ~ 1/x^2.5 as x→∞, so that if
      the integration stops at x=1000, the error from beyond that point can
      easily be of the order O(1000^−1.5) for I, which is still large;
  iii) finding a change of variable that can suit both 0 and infinity is not easy.

(Note: Iexact = π/√2 = 2.22144146907918 is known in this case.
The technique we discuss is useful for handling more complicated
integrands for which exact values are unknown.)
Solution:
* Strategy: divide and conquer.
* Split the interval into (0,1] and [1, ∞):

  I = ∫₀^∞ dx/(√x(x²+1)) = ∫₀¹ dx/(√x(x²+1)) + ∫₁^∞ dx/(√x(x²+1)) ≡ I1 + I2

* For I1 = ∫₀¹ dx/(√x(x²+1)),
  we use the change of variable x = ξ²,
  so that a "microscope" is used to enlarge the x=0 region:

  ⇒ I1 = 2 ∫₀¹ dξ/(ξ⁴ + 1)

  For this smooth integrand, we can then use a 12-term Gauss-Legendre
  quadrature to obtain
  I1 = 1.73394597467934;
  Total # of function evaluations = 12.
* For I2 = ∫₁^∞ dx/(√x(x²+1)),
  let x = eᵗ,
  so that a "telescope" is used to bring "∞" a lot closer:

  I2 = ∫₀^∞ e^{t/2} dt/(e^{2t} + 1) = ∫₀^∞ e^{−t} · e^{t/2}/(eᵗ + e^{−t}) dt

  Let g(t) = e^{t/2}/(eᵗ + e^{−t}). It decays exponentially.

  Because t goes from 0 to ∞, it is ideal to use Gauss-Laguerre quadrature.
  Here we use a 128-node Gauss-Laguerre quadrature:
Nodes ti Weight wi g(ti) wi*g(ti)
1.12513882636759E-02 2.85518444532397E-02 5.0278894873E-01 1.4355551857E-02
5.92847412690264E-02 6.33502117845051E-02 5.1413925724E-01 3.2570830833E-02
1.45707966594312E-01 9.13083813661343E-02 5.3212798139E-01 4.8587744660E-02
2.70553178758665E-01 1.09913900410911E-01 5.5209667018E-01 6.0683098423E-02
4.33841407553836E-01 1.18274171034173E-01 5.6692759904E-01 6.7052891813E-02
6.35597665781622E-01 1.17045739000406E-01 5.6833375186E-01 6.6521043985E-02
8.75852384546521E-01 1.08089987545568E-01 5.4996637465E-01 5.9445858586E-02
1.15464170197439E+00 9.39428886389285E-02 5.1067395561E-01 4.7974186542E-02
1.47200756316673E+00 7.72536687978980E-02 4.5506351659E-01 3.5155326192E-02
1.82799777831235E+00 6.03270562656615E-02 3.9082063872E-01 2.3577058662E-02
2.22266607156190E+00 4.48473482471952E-02 3.2530309095E-01 1.4588981006E-02
2.65607212988348E+00 3.17969479368768E-02 2.6369680889E-01 8.3847537033E-03
3.12828165502791E+00 2.15301494537944E-02 2.0886716227E-01 4.4969412198E-03
3.63936641985240E+00 1.39369517338463E-02 1.6196532152E-01 2.2573028685E-03
4.18940432959404E+00 8.63158538020225E-03 1.2307863464E-01 1.0623637434E-03
4.77847948843487E+00 5.11777701366923E-03 9.1692888987E-02 4.6926375958E-04
5.40668227160050E+00 2.90634743648595E-03 6.6979996472E-02 1.9466714104E-04
6.07410940319653E+00 1.58143294331667E-03 4.7975731034E-02 7.5870401537E-05
6.78086403997562E+00 8.24738985098812E-04 3.3694073795E-02 2.7788816225E-05
7.52705586122937E+00 4.12326088539694E-04 2.3201734959E-02 9.5666806228E-06
8.31280116500777E+00 1.97649426442591E-04 1.5663836320E-02 3.0959482645E-06
9.13822297088039E+00 9.08515788782451E-05 1.0367166886E-02 9.4187348006E-07
1.00034511294682E+01 4.00484927835805E-05 6.7263302472E-03 2.6937938836E-07
1.09086224389908E+01 1.69307623980817E-05 4.2778222463E-03 7.2426792034E-08
1.18538807690918E+01 6.86452529111068E-06 2.6666283773E-03 1.8305137938E-08
1.28393771922232E+01 2.66921659814210E-06 1.6291634836E-03 4.3485902116E-09
1.38652701228920E+01 9.95364010286385E-07 9.7542716843E-04 9.7090509811E-10
1.49317254650919E+01 3.55943575300306E-07 5.7229112713E-04 2.0370334990E-10
1.60389167682670E+01 1.22053255194881E-07 3.2899816519E-04 4.0155297015E-11
1.71870253921812E+01 4.01279192093563E-08 1.8530402635E-04 7.4358649985E-12
1.83762406810896E+01 1.26481141474759E-08 1.0224687350E-04 1.2932301272E-12
1.96067601476416E+01 3.82148972942657E-09 5.5264485340E-05 2.1119266313E-13
2.08787896669713E+01 1.10664105922734E-09 2.9256908227E-05 3.2376895910E-14
2.21925436814678E+01 3.07100923709742E-10 1.5168770125E-05 4.6583433169E-15
2.35482454167489E+01 8.16549938415449E-11 7.7012900177E-06 6.2884878896E-16
But we only need to compute up to k=36, as w₃₆ g(t₃₆) = 6.288E-16 is
sufficiently small.
Adding up those 36 terms of wᵢ·g(tᵢ) in the last column, we obtain:
I2 = 0.48749549439936 (with 36 f(x) evaluations for I2)

* Final result: I = 1.73394597467934 + 0.48749549439936 = 2.22144146907870

  Error: E = 4.8E-13
  Total # of f(x) evaluations: 48.
Now, you can use the same strategy to evaluate

  I = ∫₀^∞ (1+x) dx/(x^{0.6} [x + exp(−√x) + 1])

with a similar level of accuracy and a similar amount of computation.

  I1 = ∫₀¹ (1+x) dx/(x^{0.6} [x + exp(−√x) + 1])

  Use x = ξ^{1/(1−0.6)} = ξ^{2.5}  ⇒

  I1 = 2.5 ∫₀¹ (1 + ξ^{2.5}) dξ/(ξ^{2.5} + exp(−ξ^{1.25}) + 1) = …

  I2 = ∫₁^∞ (1+x) dx/(x^{0.6} [x + exp(−√x) + 1])

  Use x = eᵗ  ⇒

  I2 = ∫₀^∞ e^{0.4t} (1 + eᵗ) dt/(eᵗ + exp(−e^{t/2}) + 1)

We can now use Gauss-Laguerre quadrature for the above…
Example 5: Removal of singularity

Evaluate I = ∫_ε^{20} e^{−x} dx/(x²(x²+1)) efficiently and accurately for ε = 10⁻⁴.

Solution:
◊ Diagnosis:
* An analytical result does not exist.
* If ε=0, the integral does not exist; I(ε→0) → ∞.
* It appears that we need a very large number of grid points near x=ε.
* Near x=ε, the integrand f(x) = e^{−x}/(x²(x²+1)) is dominated by 1/x².
* A change of variables such as x = t² does not work, since
  dx/x² = 2t dt/t⁴ = 2 dt/t³, which is still singular.

◊ Remedy
* Consider the non-singular part of the integrand, g(x) = e^{−x}/(x²+1).
  Its Taylor series expansion near x=0 is
  g(x) ~ (1 − x + x²/2! + …)(1 − x² + …) = 1 − x − x²/2 + …
  Thus
  g(x)/x² ~ 1/x² − 1/x − 1/2 + …  near x=0.
  In the above, the first two terms are singular but can be handled
  analytically, while "−1/2" is not singular any more.
* Thus we rewrite f(x) = g(x)/x² as

  f(x) = (1 − x)/x² − [1 − x − e^{−x}/(1+x²)]/x²

  Hence, I(ε) = ∫_ε^{20} (1−x)/x² dx − ∫_ε^{20} [1 − x − e^{−x}/(1+x²)]/x² dx
              = 1/ε − 1/20 − ln(20/ε) − J(ε)
  where
  J(ε) = ∫_ε^{20} [1 − x − e^{−x}/(1+x²)]/x² dx
Note: the integrand of J(ε) is well behaved at x=ε, and the numerical integration
is very easy to carry out. Here, Romberg integration based on the trapezoidal
rule (with a total of n sub-intervals) is used for ε = 0.0001 to obtain:

   n     J(ε=0.0001)       J_n^(1)        J_n^(2)           J_n^(3)
   50   -1.990208809379
  100   -1.998751251663   -2.001598732
  200   -2.000851202826   -2.001551187   -2.001548016821
  400   -2.001374165688   -2.001548487   -2.001548306648   -2.001548311248

Thus, I(ε=0.0001) = 1/ε − 1/20 − ln(20/ε) − (−2.001548311248) = 9989.74547566572

Taking J_n^(3) = −2.001548311248 as the exact value of J, we see that the 50-
interval integration using the trapezoidal rule has an error of about
|−2.001548311248 + 1.990208809379| = 0.01134,
which results in a relative error of 1.135E-6 in I(ε).
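The whole procedure can be sketched in Python (composite Simpson on J(ε) is assumed here in place of the Romberg/trapezoidal table above; the panel count is our choice):

```python
import math

def composite_simpson(f, a, b, n):
    """Composite Simpson's 1/3 rule with n (even) sub-intervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + i * h) for i in range(1, n, 2))
    s += 2 * sum(f(a + i * h) for i in range(2, n, 2))
    return s * h / 3

eps = 1e-4
# J's integrand (1 - x - exp(-x)/(1+x^2))/x^2 tends to 1/2 as x -> 0: no singularity.
g = lambda x: (1.0 - x - math.exp(-x) / (1.0 + x * x)) / (x * x)
J = composite_simpson(g, eps, 20.0, 4000)

# The singular part was integrated analytically: 1/eps - 1/20 - ln(20/eps).
I = 1.0 / eps - 1.0 / 20.0 - math.log(20.0 / eps) - J
print(I)  # about 9989.745
```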

On the other hand, the direct application of trapezoidal rule using 5000
