Hasan I. Mamao - CVE 154 Formulas


MODULE 4: Curve Fitting and Interpolation

CURVE FITTING
Interpolation → construct a curve through the data points by making the implicit assumption that the data points are accurate and distinct.
Curve fitting → applied to data that contain scatter (noise), usually due to measurement errors. The aim is to find a smooth curve that approximates the data in some sense; thus, the curve does not necessarily hit the data points.

A straight line is described generically by f(x) = ax + b. The goal is to identify the coefficients “a” and “b” such that f(x) “fits” the data well.

Curve fitting covers techniques for fitting curves to discrete data in order to obtain intermediate estimates. There are two general approaches to curve fitting:
◦ Data exhibit a significant degree of scatter. The strategy is to derive a single curve that represents the general trend of the data.
◦ Data are very precise. The strategy is to pass a curve or a series of curves through each of the points.
In engineering, two types of applications are encountered:
◦ Trend analysis. Predicting values of the dependent variable; may include extrapolation beyond the data points or interpolation between data points.
◦ Hypothesis testing. Comparing an existing mathematical model with measured data.

Three attempts to fit a “best” curve through five data points:
(A) least-squares regression,
(B) linear interpolation, and
(C) curvilinear interpolation.

Other examples of data sets that we can fit a function to:

x                | f(x)
time             | height of dropped object
temperature      | oxygen in soil
soil depth       | pore pressure
paid labor hours | profit

Simple Statistics
If several measurements of a particular quantity are made in the course of an engineering study, additional insight can be gained by summarizing the data in one or more well-chosen statistics that convey as much information as possible about specific characteristics of the data set. These descriptive statistics are most often selected to represent
◦ the location of the center of the distribution of the data, and
◦ the degree of spread of the data.

TYPES OF STATISTICS
◦ Descriptive
➢ describes or summarizes the data of a target population.
➢ describes data that are already known.
➢ organizes, analyzes, and presents data in a meaningful manner.
➢ Final results are shown in the form of tables and graphs.
➢ Tools: measures of central tendency and dispersion.
◦ Inferential
➢ uses data to make inferences or generalizations about a population.
➢ draws conclusions about a population beyond the available data.
➢ compares, tests, and predicts future outcomes.
➢ Final results are probability scores.
➢ Tools: hypothesis tests.
Arithmetic mean → the sum of the individual data points (y_i) divided by the number of data points (n).
ȳ = (Σ y_i)/n ; i = 1, …, n

Standard deviation → the most common measure of spread for a sample.
S_y = √(S_t/(n − 1)) ; where S_t = Σ(y_i − ȳ)²

Variance → representation of spread by the square of the standard deviation.
S_y² = Σ(y_i − ȳ)²/(n − 1) ; n − 1 → degrees of freedom

Coefficient of variation → quantifies the spread of the data relative to its center.
c.v. = (S_y/ȳ) × 100%

A histogram is used to depict the distribution of data. As the number of data points increases, the histogram often approaches the smooth, bell-shaped curve called the normal distribution.
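As a quick check of these definitions, a minimal Python sketch (the sample values and variable names are hypothetical, chosen only for illustration):

```python
import math

y = [6.395, 6.435, 6.485, 6.495, 6.505, 6.625]   # hypothetical sample data

n = len(y)
ybar = sum(y) / n                                 # arithmetic mean
St = sum((yi - ybar) ** 2 for yi in y)            # sum of squared deviations
Sy = math.sqrt(St / (n - 1))                      # sample standard deviation
variance = St / (n - 1)                           # variance (Sy squared)
cv = Sy / ybar * 100                              # coefficient of variation (%)

print(ybar, Sy, variance, cv)
```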
(A) Least Squares Regression
The least squares method is a statistical procedure for finding the best fit to a set of data points by minimizing the sum of the squared offsets (residuals) of the points from the fitted curve. Least squares regression is used to predict the behavior of dependent variables.

Linear Regression
Fitting a straight line to a set of paired observations: (x_1, y_1), (x_2, y_2), ⋯, (x_n, y_n).
y = a_0 + a_1 x + e → model
a_1 → slope
a_0 → intercept
e → error, or residual, between the model and the observations

Minimizing the sum of the squares of the residuals between the measured y and the y calculated with the linear model:
S_r = Σ e_i² = Σ (y_i − a_0 − a_1 x_i)² ; i = 1, …, n

where a_1 = [n Σ x_i y_i − Σ x_i Σ y_i] / [n Σ x_i² − (Σ x_i)²]
a_0 = ȳ − a_1 x̄ ; x̄ = (Σ x_i)/n ; ȳ = (Σ y_i)/n

r² = (S_t − S_r)/S_t = 1 − S_r/S_t ; r² → coefficient of determination
r → correlation coefficient

Table: Linear Regression

i | x_i  | y_i  | x_i y_i  | x_i²  | y_i²  | (y_i − ȳ)² | (y_i − a_0 − a_1 x_i)²
1 | ⋮    | ⋮    | ⋮        | ⋮     | ⋮     | ⋮          | ⋮
⋮ | ⋮    | ⋮    | ⋮        | ⋮     | ⋮     | ⋮          | ⋮
n | ⋮    | ⋮    | ⋮        | ⋮     | ⋮     | ⋮          | ⋮
Σ | Σx_i | Σy_i | Σx_i y_i | Σx_i² | Σy_i² | S_t        | S_r
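These formulas translate directly into a short function; a minimal sketch in Python, with a made-up data set and an illustrative function name:

```python
def linear_regression(x, y):
    """Least-squares straight line y = a0 + a1*x, plus r^2."""
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxy = sum(xi * yi for xi, yi in zip(x, y))
    sxx = sum(xi ** 2 for xi in x)

    a1 = (n * sxy - sx * sy) / (n * sxx - sx ** 2)      # slope
    a0 = sy / n - a1 * sx / n                            # intercept

    ybar = sy / n
    St = sum((yi - ybar) ** 2 for yi in y)               # spread about the mean
    Sr = sum((yi - a0 - a1 * xi) ** 2 for xi, yi in zip(x, y))  # spread about the line
    r2 = (St - Sr) / St                                  # coefficient of determination
    return a0, a1, r2

# hypothetical data set
x = [1, 2, 3, 4, 5, 6, 7]
y = [0.5, 2.5, 2.0, 4.0, 3.5, 6.0, 5.5]
print(linear_regression(x, y))
```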
Polynomial Regression

y = a_0 + a_1 x + a_2 x²
S_r = Σ (y_i − a_0 − a_1 x_i − a_2 x_i²)²

Normal equations:
❖ n a_0 + (Σ x_i) a_1 + (Σ x_i²) a_2 = Σ y_i
❖ (Σ x_i) a_0 + (Σ x_i²) a_1 + (Σ x_i³) a_2 = Σ x_i y_i
❖ (Σ x_i²) a_0 + (Σ x_i³) a_1 + (Σ x_i⁴) a_2 = Σ x_i² y_i
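A sketch of quadratic regression that assembles and solves these three normal equations with NumPy (the data and the function name are hypothetical):

```python
import numpy as np

def quadratic_regression(x, y):
    """Fit y = a0 + a1*x + a2*x^2 by solving the normal equations."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    A = np.array([[n,            x.sum(),      (x**2).sum()],
                  [x.sum(),      (x**2).sum(), (x**3).sum()],
                  [(x**2).sum(), (x**3).sum(), (x**4).sum()]])
    b = np.array([y.sum(), (x*y).sum(), (x**2*y).sum()])
    a0, a1, a2 = np.linalg.solve(A, b)   # solve the 3x3 linear system
    return a0, a1, a2

# hypothetical data
x = [0, 1, 2, 3, 4, 5]
y = [2.1, 7.7, 13.6, 27.2, 40.9, 61.1]
print(quadratic_regression(x, y))
```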
INTERPOLATION
Interpolation is a method of fitting data points to represent the value of a function. It has numerous applications in engineering and science: it is used to construct new data points within the range of a discrete set of known data points, or to determine a formula for the function that passes through a given set of points (x, y).

Examples of interpolating polynomials:
(a) first-order (linear), connecting two points;
(b) second-order (quadratic or parabolic), connecting three points; and
(c) third-order (cubic), connecting four points.
(B) Newton Interpolating Polynomial

Linear Interpolation
y = a_0 + a_1 x + a_2 x² + ⋯ + a_n x^n

Newton linear-interpolation formula:
[f_1(x) − f(x_1)] / (x − x_1) = [f(x_2) − f(x_1)] / (x_2 − x_1)

x_1 | f(x_1)
x   | f_1(x) → unknown
x_2 | f(x_2)

Quadratic Interpolation
f_2(x) = b_1 + b_2 (x − x_1) + b_3 (x − x_1)(x − x_2) ;
b_1 = f(x_1)
b_2 = [f(x_2) − f(x_1)] / (x_2 − x_1)
b_3 = { [f(x_3) − f(x_2)]/(x_3 − x_2) − [f(x_2) − f(x_1)]/(x_2 − x_1) } / (x_3 − x_1)

General Form of Newton’s Interpolating Polynomials
f_{n−1}(x) = b_1 + b_2 (x − x_1) + b_3 (x − x_1)(x − x_2) + ⋯ + b_n (x − x_1)(x − x_2)(x − x_3) ⋯ (x − x_{n−1})

Newton’s Divided Difference Polynomial Interpolation
b_1 = f(x_1) → zeroth-order DD
b_2 = f[x_2, x_1] → first-order DD
b_3 = f[x_3, x_2, x_1] → second-order DD
⋮
b_n = f[x_n, x_{n−1}, ⋯, x_2, x_1]

The 1st finite divided difference is f[x_i, x_j] = [f(x_i) − f(x_j)] / (x_i − x_j).
The 2nd finite divided difference is f[x_i, x_j, x_k] = (f[x_i, x_j] − f[x_j, x_k]) / (x_i − x_k).
The nth finite divided difference is
f[x_n, x_{n−1}, ⋯, x_2, x_1] = (f[x_n, x_{n−1}, ⋯, x_2] − f[x_{n−1}, x_{n−2}, ⋯, x_1]) / (x_n − x_1).

Cubic Interpolation
f_3(x) = b_1 + b_2 (x − x_1) + b_3 (x − x_1)(x − x_2) + b_4 (x − x_1)(x − x_2)(x − x_3)
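A compact Python sketch of Newton’s divided-difference interpolation built from the formulas above (the b coefficients are the divided differences f(x_1), f[x_2, x_1], …); function names are illustrative and the sample points interpolate ln(2) from three known values:

```python
def newton_coefficients(x, y):
    """Return b1..bn, i.e. the divided differences f[x1], f[x2,x1], ..."""
    b = list(y)
    n = len(x)
    for j in range(1, n):
        for i in range(n - 1, j - 1, -1):
            b[i] = (b[i] - b[i - 1]) / (x[i] - x[i - j])
    return b

def newton_eval(x, b, xq):
    """Evaluate b1 + b2(xq-x1) + b3(xq-x1)(xq-x2) + ... at xq."""
    result, product = b[0], 1.0
    for i in range(1, len(b)):
        product *= (xq - x[i - 1])
        result += b[i] * product
    return result

# known points of ln(x); estimate ln(2)
x = [1.0, 4.0, 6.0]
y = [0.0, 1.386294, 1.791759]
b = newton_coefficients(x, y)
print(newton_eval(x, b, 2.0))   # roughly 0.5658 (true value 0.6931)
```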
Lagrange Interpolating Polynomials

y = f_{n−1}(x) = a_1 + a_2 x + a_3 x² + ⋯ + a_n x^(n−1)

1st Order P:
f_1(x) = [(x − x_2)/(x_1 − x_2)] f(x_1) + [(x − x_1)/(x_2 − x_1)] f(x_2)

2nd Order P:
f_2(x) = [(x − x_2)(x − x_3)] / [(x_1 − x_2)(x_1 − x_3)] · f(x_1)
       + [(x − x_1)(x − x_3)] / [(x_2 − x_1)(x_2 − x_3)] · f(x_2)
       + [(x − x_1)(x − x_2)] / [(x_3 − x_1)(x_3 − x_2)] · f(x_3)
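A general Lagrange interpolation sketch in Python; for two or three points it reproduces the 1st- and 2nd-order formulas above (the data set is hypothetical):

```python
def lagrange_interpolate(x, y, xq):
    """Evaluate the Lagrange interpolating polynomial through (x, y) at xq."""
    total = 0.0
    n = len(x)
    for i in range(n):
        term = y[i]
        for j in range(n):
            if j != i:
                term *= (xq - x[j]) / (x[i] - x[j])   # basis polynomial L_i(xq)
        total += term
    return total

# hypothetical data: three samples of sin(x), x in radians
x = [0.0, 0.6, 0.9]
y = [0.0, 0.564642, 0.783327]
print(lagrange_interpolate(x, y, 0.45))
```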
Splines
In spline interpolation, the interpolating function is piecewise: there is a separate polynomial function for every two consecutive given points. The type of spline interpolation depends on the degree of the polynomial connecting each two consecutive points; the most common are
(A) linear splines,
(B) quadratic splines, and
(C) cubic splines.

(A) Linear Splines
s_i(x) = a_i + b_i (x − x_i)
where a_i → intercept
b_i → slope of the straight line connecting the points
a_i = f_i = f(x_i) ; b_i = (f_{i+1} − f_i) / (x_{i+1} − x_i)
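A minimal linear-spline evaluator following s_i(x) = a_i + b_i(x − x_i); the knot values below are hypothetical:

```python
def linear_spline(x, y, xq):
    """Evaluate the piecewise-linear spline through (x, y) at xq."""
    for i in range(len(x) - 1):
        if x[i] <= xq <= x[i + 1]:
            bi = (y[i + 1] - y[i]) / (x[i + 1] - x[i])   # slope of segment i
            return y[i] + bi * (xq - x[i])                # ai + bi*(xq - xi)
    raise ValueError("xq is outside the data range")

# hypothetical knots
x = [3.0, 4.5, 7.0, 9.0]
y = [2.5, 1.0, 2.5, 0.5]
print(linear_spline(x, y, 5.0))
```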
(B) Quadratic Splines
f_i(x) = a_i x² + b_i x + c_i
where “a_i”, “b_i”, and “c_i” are unknown coefficients of each quadratic spline. 3n linear equations are needed to solve for these unknowns.

STEP 1: Each spline passes through the two data points at the ends of its interval.
EXAMPLE: If there are i = 4 data points, then n = i − 1 = 3 splines, and STEP 1 gives 2n = 6 equations:
Eq. 1: f(x_1) = a_1 x_1² + b_1 x_1 + c_1
Eq. 2: f(x_2) = a_1 x_2² + b_1 x_2 + c_1
Eq. 3: f(x_2) = a_2 x_2² + b_2 x_2 + c_2
Eq. 4: f(x_3) = a_2 x_3² + b_2 x_3 + c_2
Eq. 5: f(x_3) = a_3 x_3² + b_3 x_3 + c_3
Eq. 6: f(x_4) = a_3 x_4² + b_3 x_4 + c_3

STEP 2: The 1st derivatives of the two splines sharing an intermediate point are equal at that point.
At x_2: d/dx (a_1 x² + b_1 x + c_1) = d/dx (a_2 x² + b_2 x + c_2)
2a_1 x + b_1 = 2a_2 x + b_2
Eq. 7: 2a_1 x + b_1 − 2a_2 x − b_2 = 0
At x_3: 2a_2 x + b_2 = 2a_3 x + b_3
Eq. 8: 2a_2 x + b_2 − 2a_3 x − b_3 = 0

STEP 3: The 2nd derivative of the first spline is assumed to be zero.
f_1(x) = a_1 x² + b_1 x + c_1
d²/dx² (a_1 x² + b_1 x + c_1) = d/dx (2a_1 x + b_1) = 2a_1
2a_1 = 0 → Eq. 9: a_1 = 0
(C) Cubic Splines
f_i(x) = a_i x³ + b_i x² + c_i x + d_i
where “a_i”, “b_i”, “c_i”, and “d_i” are unknown coefficients of each cubic spline. 4n linear equations are needed to solve for these unknowns.

STEP 1: Each spline passes through the two data points at the ends of its interval.
EXAMPLE: If there are i = 4 data points, then n = i − 1 = 3 splines, and STEP 1 gives 2n = 6 equations:
Eq. 1: f(x_1) = a_1 x_1³ + b_1 x_1² + c_1 x_1 + d_1
Eq. 2: f(x_2) = a_1 x_2³ + b_1 x_2² + c_1 x_2 + d_1
Eq. 3: f(x_2) = a_2 x_2³ + b_2 x_2² + c_2 x_2 + d_2
Eq. 4: f(x_3) = a_2 x_3³ + b_2 x_3² + c_2 x_3 + d_2
Eq. 5: f(x_3) = a_3 x_3³ + b_3 x_3² + c_3 x_3 + d_3
Eq. 6: f(x_4) = a_3 x_4³ + b_3 x_4² + c_3 x_4 + d_3

STEP 2: The 1st derivatives of the two splines sharing an intermediate point are equal at that point.
At x_2: d/dx (a_1 x³ + b_1 x² + c_1 x + d_1) = d/dx (a_2 x³ + b_2 x² + c_2 x + d_2)
3a_1 x² + 2b_1 x + c_1 = 3a_2 x² + 2b_2 x + c_2
Eq. 7: 3a_1 x² + 2b_1 x + c_1 − 3a_2 x² − 2b_2 x − c_2 = 0
At x_3: 3a_2 x² + 2b_2 x + c_2 = 3a_3 x² + 2b_3 x + c_3
Eq. 8: 3a_2 x² + 2b_2 x + c_2 − 3a_3 x² − 2b_3 x − c_3 = 0

STEP 3: The 2nd derivatives of the two splines sharing an intermediate point are equal at that point.
At x_2: 6a_1 x + 2b_1 = 6a_2 x + 2b_2
Eq. 9: 6a_1 x + 2b_1 − 6a_2 x − 2b_2 = 0
At x_3: 6a_2 x + 2b_2 = 6a_3 x + 2b_3
Eq. 10: 6a_2 x + 2b_2 − 6a_3 x − 2b_3 = 0

STEP 4: The second derivatives of the first and last spline functions are assumed to be zero at the end points.
f_1″(x) = d/dx (3a_1 x² + 2b_1 x + c_1) = 6a_1 x + 2b_1 → Eq. 11: 6a_1 x + 2b_1 = 0 (at x = x_1)
f_3″(x) = d/dx (3a_3 x² + 2b_3 x + c_3) = 6a_3 x + 2b_3 → Eq. 12: 6a_3 x + 2b_3 = 0 (at x = x_4)
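For practical work, the same end conditions (zero second derivative at both ends, as in STEP 4) are available ready-made in SciPy; a usage sketch with hypothetical knots rather than a hand-assembled 4n × 4n system:

```python
from scipy.interpolate import CubicSpline

# hypothetical knots
x = [3.0, 4.5, 7.0, 9.0]
y = [2.5, 1.0, 2.5, 0.5]

# bc_type='natural' imposes zero second derivative at both end points
cs = CubicSpline(x, y, bc_type='natural')
print(cs(5.0))       # interpolated value at x = 5.0
print(cs(5.0, 1))    # first derivative of the spline at x = 5.0
```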
MODULE 5: Numerical Differentiation

Differentiation: derivative of a function ► instantaneous rate of change.
Integration: integral of a function ► area under its graph.

DIFFERENTIATION
(1) Analytical Method
➢ Product rule, quotient rule, power rule, and chain rule.
➢ Used to find the exact solution.
(2) Numerical Differentiation
➢ Method used: finite-difference formulas.
➢ Used to find an approximate value of the derivative (the slope, or rate of change, of the function f(x)) at any particular point.

FINITE DIFFERENCE FORMULAS
Forward, backward, and central difference formulas are available for the first derivative and for higher derivatives.

Basic Numerical Differentiation Formulas
➢ Two-point backward difference
f'(x_i) = [f(x_i) − f(x_{i−1})] / h → O(h)
➢ Two-point central difference
f'(x_i) = [f(x_{i+1}) − f(x_{i−1})] / (2h) → O(h²)

Higher-Accuracy Differentiation Formulas

Forward finite-difference formulas:
First Derivative
f'(x_i) = [f(x_{i+1}) − f(x_i)] / h → O(h)
f'(x_i) = [−f(x_{i+2}) + 4f(x_{i+1}) − 3f(x_i)] / (2h) → O(h²)
Second Derivative
f''(x_i) = [f(x_{i+2}) − 2f(x_{i+1}) + f(x_i)] / h² → O(h)
f''(x_i) = [−f(x_{i+3}) + 4f(x_{i+2}) − 5f(x_{i+1}) + 2f(x_i)] / h² → O(h²)
Third Derivative
f'''(x_i) = [f(x_{i+3}) − 3f(x_{i+2}) + 3f(x_{i+1}) − f(x_i)] / h³ → O(h)
f'''(x_i) = [−3f(x_{i+4}) + 14f(x_{i+3}) − 24f(x_{i+2}) + 18f(x_{i+1}) − 5f(x_i)] / (2h³) → O(h²)
Fourth Derivative
f''''(x_i) = [f(x_{i+4}) − 4f(x_{i+3}) + 6f(x_{i+2}) − 4f(x_{i+1}) + f(x_i)] / h⁴ → O(h)
f''''(x_i) = [−2f(x_{i+5}) + 11f(x_{i+4}) − 24f(x_{i+3}) + 26f(x_{i+2}) − 14f(x_{i+1}) + 3f(x_i)] / h⁴ → O(h²)
Backward finite-difference formulas:
First Derivative
f'(x_i) = [f(x_i) − f(x_{i−1})] / h → O(h)
f'(x_i) = [3f(x_i) − 4f(x_{i−1}) + f(x_{i−2})] / (2h) → O(h²)
Second Derivative
f''(x_i) = [f(x_i) − 2f(x_{i−1}) + f(x_{i−2})] / h² → O(h)
f''(x_i) = [2f(x_i) − 5f(x_{i−1}) + 4f(x_{i−2}) − f(x_{i−3})] / h² → O(h²)
Third Derivative
f'''(x_i) = [f(x_i) − 3f(x_{i−1}) + 3f(x_{i−2}) − f(x_{i−3})] / h³ → O(h)
f'''(x_i) = [5f(x_i) − 18f(x_{i−1}) + 24f(x_{i−2}) − 14f(x_{i−3}) + 3f(x_{i−4})] / (2h³) → O(h²)
Fourth Derivative
f''''(x_i) = [f(x_i) − 4f(x_{i−1}) + 6f(x_{i−2}) − 4f(x_{i−3}) + f(x_{i−4})] / h⁴ → O(h)
f''''(x_i) = [3f(x_i) − 14f(x_{i−1}) + 26f(x_{i−2}) − 24f(x_{i−3}) + 11f(x_{i−4}) − 2f(x_{i−5})] / h⁴ → O(h²)
Centered finite-difference formulas:
First Derivative
f'(x_i) = [f(x_{i+1}) − f(x_{i−1})] / (2h) → O(h²)
f'(x_i) = [−f(x_{i+2}) + 8f(x_{i+1}) − 8f(x_{i−1}) + f(x_{i−2})] / (12h) → O(h⁴)
Second Derivative
f''(x_i) = [f(x_{i+1}) − 2f(x_i) + f(x_{i−1})] / h² → O(h²)
f''(x_i) = [−f(x_{i+2}) + 16f(x_{i+1}) − 30f(x_i) + 16f(x_{i−1}) − f(x_{i−2})] / (12h²) → O(h⁴)
Third Derivative
f'''(x_i) = [f(x_{i+2}) − 2f(x_{i+1}) + 2f(x_{i−1}) − f(x_{i−2})] / (2h³) → O(h²)
f'''(x_i) = [−f(x_{i+3}) + 8f(x_{i+2}) − 13f(x_{i+1}) + 13f(x_{i−1}) − 8f(x_{i−2}) + f(x_{i−3})] / (8h³) → O(h⁴)
Fourth Derivative
f''''(x_i) = [f(x_{i+2}) − 4f(x_{i+1}) + 6f(x_i) − 4f(x_{i−1}) + f(x_{i−2})] / h⁴ → O(h²)
f''''(x_i) = [−f(x_{i+3}) + 12f(x_{i+2}) − 39f(x_{i+1}) + 56f(x_i) − 39f(x_{i−1}) + 12f(x_{i−2}) − f(x_{i−3})] / (6h⁴) → O(h⁴)
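A quick Python check of the centered first-derivative formulas, comparing the O(h²) and O(h⁴) estimates against the exact derivative (the test function and point are chosen arbitrarily for illustration):

```python
import math

f = math.sin            # hypothetical test function
xi, h = 1.0, 0.1
exact = math.cos(xi)    # true derivative of sin at xi

d_h2 = (f(xi + h) - f(xi - h)) / (2 * h)                                      # O(h^2) centered
d_h4 = (-f(xi + 2*h) + 8*f(xi + h) - 8*f(xi - h) + f(xi - 2*h)) / (12 * h)    # O(h^4) centered

print(exact, d_h2, d_h4)
```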

Applying the formulas:
STEP 1: Compute the exact derivative by direct differentiation, then substitute the given x.
STEP 2: For the given step size h and point x_i, obtain the necessary data:

x    | x_{i−2} = x_i − 2h | x_{i−1} = x_i − h | x_i | x_{i+1} = x_i + h | x_{i+2} = x_i + 2h
f(x) | ⋯                  | ⋯                 | ⋯   | ⋯                 | ⋯

STEP 3: Apply the finite-difference formulas.

Richardson Extrapolation
D(h_i) = [f(x_i + h_i) − f(x_i − h_i)] / (2h_i) → centered-difference estimate with step size h_i (use the data from the table above)
With h_2 = h_1/2:
D ≅ (4/3) D(h_2) − (1/3) D(h_1)
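A sketch of Richardson extrapolation for the first derivative, combining two centered-difference estimates exactly as above (the test function and point are hypothetical):

```python
import math

def centered(f, x, h):
    """O(h^2) centered-difference estimate D(h)."""
    return (f(x + h) - f(x - h)) / (2 * h)

def richardson(f, x, h1):
    """Combine D(h1) and D(h1/2) into a higher-order estimate."""
    d1 = centered(f, x, h1)
    d2 = centered(f, x, h1 / 2)
    return 4/3 * d2 - 1/3 * d1

f = math.exp                                    # hypothetical test function
print(richardson(f, 0.5, 0.2), math.exp(0.5))   # estimate vs exact derivative
```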
MODULE 6: Numerical Integration

Numerical Integration
➢ Methods used: Trapezoidal rule, Simpson’s rules, Romberg integration, and Gauss quadrature.
➢ Used to find the approximate area under the curve of a function f(x) over the interval [a, b].

Trapezoidal Rule

SINGLE SEGMENT TRAPEZOIDAL RULE
I = (b − a) · [f(a) + f(b)] / 2 ≈ ∫_a^b f(x) dx → Approximate Value
E_t = “True Value” − “Approximate Value”
|ε_t| = |E_t / True Value| × 100 → absolute relative true error (%)

MULTIPLE SEGMENT TRAPEZOIDAL RULE
h = (b − a)/n
∫_a^b f(x) dx ≈ (h/2) [ f(a) + 2 Σ_{i=1}^{n−1} f(a + ih) + f(b) ]
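A direct Python translation of the multiple-segment trapezoidal rule (the integrand, interval, and function name are hypothetical):

```python
def trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n segments."""
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += 2 * f(a + i * h)     # interior points carry weight 2
    return h / 2 * total

# hypothetical integrand: integral of x^2 from 0 to 3 (exact value 9)
print(trapezoid(lambda x: x**2, 0.0, 3.0, 100))
```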

Simpson’s 1/3 Rule

SINGLE SEGMENT SIMPSON’S 1/3 RULE
h = (b − a)/2
∫_a^b f_2(x) dx ≈ (h/3) [ f(a) + 4 f((a + b)/2) + f(b) ]

MULTIPLE SEGMENT SIMPSON’S 1/3 RULE
h = (b − a)/n (n must be even)
∫_a^b f(x) dx ≈ (h/3) [ f(x_0) + 4 Σ_{i = 1,3,5,…}^{n−1} f(x_i) + 2 Σ_{i = 2,4,6,…}^{n−2} f(x_i) + f(x_n) ]
where x_0 = a, x_1 = x_0 + h, x_2 = x_1 + h, ⋯, x_n = b.
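A sketch of the multiple-segment Simpson’s 1/3 rule, weighting odd-indexed points by 4 and even-indexed interior points by 2 (integrand hypothetical):

```python
def simpson_13(f, a, b, n):
    """Composite Simpson's 1/3 rule; n must be even."""
    if n % 2 != 0:
        raise ValueError("n must be even")
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        weight = 4 if i % 2 == 1 else 2    # 4 on odd points, 2 on even interior points
        total += weight * f(a + i * h)
    return h / 3 * total

# hypothetical integrand: integral of x^3 from 0 to 2 (exact value 4)
print(simpson_13(lambda x: x**3, 0.0, 2.0, 10))
```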

Simpson’s 3/8 Rule

SINGLE SEGMENT SIMPSON’S 3/8 RULE
h = (b − a)/3
x_0 = a
x_1 = x_0 + h
x_2 = x_0 + 2h
x_3 = x_0 + 3h = b
I = (3/8) h [ f(x_0) + 3 f(x_1) + 3 f(x_2) + f(x_3) ]

MULTIPLE SEGMENT SIMPSON’S 3/8 RULE
h = (b − a)/n (n must be a multiple of 3)
x_0 = a, x_1 = x_0 + h, x_2 = x_0 + 2h, ⋯, x_n = x_0 + nh = b
I = (3/8) h [ f(x_0) + 3 Σ_{i = 1,4,7,…}^{n−2} f(x_i) + 3 Σ_{i = 2,5,8,…}^{n−1} f(x_i) + 2 Σ_{i = 3,6,9,…}^{n−3} f(x_i) + f(x_n) ]
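A sketch of the multiple-segment Simpson’s 3/8 rule, with weight 2 on every third interior point and weight 3 elsewhere (integrand hypothetical):

```python
import math

def simpson_38(f, a, b, n):
    """Composite Simpson's 3/8 rule; n must be a multiple of 3."""
    if n % 3 != 0:
        raise ValueError("n must be a multiple of 3")
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        weight = 2 if i % 3 == 0 else 3    # 2 on every third point, 3 elsewhere
        total += weight * f(a + i * h)
    return 3 * h / 8 * total

# hypothetical integrand: integral of e^x from 0 to 1 (exact value e - 1)
print(simpson_38(math.exp, 0.0, 1.0, 9))
```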

ROMBERG INTEGRATION
Romberg integration is an extrapolation formula built on the Trapezoidal rule. It provides a better approximation of the integral by reducing the true error.

Richardson’s Extrapolation for the Trapezoidal Rule
TV ≈ I_{2n} + (I_{2n} − I_n)/3 → Approximate Value (I_n is the n-segment trapezoidal result)
E_t = “True Value” − “Approximate Value”
|ε_t| = |E_t / TV| × 100 → absolute relative true error (%)

Romberg’s Rule
I_{1,1}, I_{1,2}, I_{1,3}, ⋯, I_{1,n} are the trapezoidal-rule results as the number of segments is successively doubled.
1st-order extrapolation values:
I_{2,1} = I_{1,2} + (I_{1,2} − I_{1,1})/3
I_{2,2} = I_{1,3} + (I_{1,3} − I_{1,2})/3
In general, for entry (k, j):
I_{k,j} = I_{k−1,j+1} + (I_{k−1,j+1} − I_{k−1,j}) / (4^(k−1) − 1)
2nd-order extrapolation values: (k + 1, j)
3rd-order extrapolation values: (k + 2, j)
nth-order extrapolation values: (k + n − 1, j)
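A sketch of a Romberg table built from trapezoidal estimates with segment doubling, using the general I_{k,j} formula above (integrand and function names are hypothetical; here the Python row index k starts at 0, so the divisor 4^k − 1 matches the formula’s 4^(k−1) − 1):

```python
def trapezoid(f, a, b, n):
    h = (b - a) / n
    return h / 2 * (f(a) + 2 * sum(f(a + i * h) for i in range(1, n)) + f(b))

def romberg(f, a, b, levels=4):
    """Return the Romberg table; the last row holds the highest-order estimate."""
    I = [[trapezoid(f, a, b, 2 ** j) for j in range(levels)]]   # 1, 2, 4, 8 segments
    for k in range(1, levels):
        row = []
        for j in range(levels - k):
            row.append(I[k-1][j+1] + (I[k-1][j+1] - I[k-1][j]) / (4 ** k - 1))
        I.append(row)
    return I

# hypothetical integrand: integral of 1/(1+x^2) from 0 to 1 (exact value pi/4)
table = romberg(lambda x: 1 / (1 + x * x), 0.0, 1.0)
print(table[-1][0])
```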
Gauss Quadrature

One-Point Gaussian Quadrature Rule
∫_a^b f(x) dx ≈ c_1 f(x_1) = (b − a) f((a + b)/2)

Two-Point Gaussian Quadrature Rule
∫_a^b f(x) dx = [(b − a)/2] ∫_{−1}^{1} f( [(b − a)/2] x + (a + b)/2 ) dx
 ≈ [(b − a)/2] c_1 f( [(b − a)/2] x_1 + (a + b)/2 ) + [(b − a)/2] c_2 f( [(b − a)/2] x_2 + (a + b)/2 )
where c_1 = c_2 = 1
x_2 = 1/√3 = −x_1
 = [(b − a)/2] f( [(b − a)/2](−1/√3) + (a + b)/2 ) + [(b − a)/2] f( [(b − a)/2](1/√3) + (a + b)/2 )

E_t = “True Value” − “Approximate Value”
|ε_t| = |E_t / True Value| × 100 → absolute relative true error (%)

Points | Weighting Factors | Function Arguments
2 | c_1 = c_2 = 1 | x_2 = 1/√3 = −x_1
3 | c_1 = c_3 = 5/9 ; c_2 = 8/9 | x_3 = √(3/5) = −x_1 ; x_2 = 0
4 | c_1 = c_4 = (18 − √30)/36 ; c_2 = c_3 = (18 + √30)/36 | x_4 = √(3/7 + (2/7)√(6/5)) = −x_1 ; x_3 = √(3/7 − (2/7)√(6/5)) = −x_2
5 | c_1 = c_5 = (322 − 13√70)/900 ; c_2 = c_4 = (322 + 13√70)/900 ; c_3 = 128/225 | x_5 = (1/3)√(5 + 2√(10/7)) = −x_1 ; x_4 = (1/3)√(5 − 2√(10/7)) = −x_2 ; x_3 = 0
6 | c_1 = c_6 = 0.171324492 ; c_2 = c_5 = 0.360761573 ; c_3 = c_4 = 0.467913935 | x_6 = 0.932469514 = −x_1 ; x_5 = 0.661209386 = −x_2 ; x_4 = 0.238619186 = −x_3
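A sketch of the two-point Gauss quadrature rule on a general interval [a, b], using the weights and arguments from the first row of the table (the integrand and function name are hypothetical):

```python
import math

def gauss_two_point(f, a, b):
    """Two-point Gauss quadrature mapped from [-1, 1] to [a, b]."""
    c = [1.0, 1.0]                                # weighting factors
    x = [-1 / math.sqrt(3), 1 / math.sqrt(3)]     # function arguments on [-1, 1]
    half = (b - a) / 2
    mid = (a + b) / 2
    return half * sum(ci * f(half * xi + mid) for ci, xi in zip(c, x))

# hypothetical integrand: integral of x*e^x from 0 to 1 (exact value 1)
print(gauss_two_point(lambda x: x * math.exp(x), 0.0, 1.0))
```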
