ECH 3128 Topic 6 Curve Fitting 1
Curve Fitting
b) Linear Interpolation
c) Polynomial Interpolation
Linear Regression
One criterion would be to minimize the sum of the residual errors:

$$\sum_{i=1}^{n} e_i = \sum_{i=1}^{n} \left( y_i - a_0 - a_1 x_i \right)$$

Another would be to minimize the sum of the absolute values of the residuals:

$$\sum_{i=1}^{n} \left| e_i \right| = \sum_{i=1}^{n} \left| y_i - a_0 - a_1 x_i \right|$$

Both criteria are inadequate, since many different lines can satisfy them equally well.
Linear Regression
• Best strategy is to minimize the sum of the squares of
the residuals between the measured y and the y
calculated with the linear model:
$$S_r = \sum_{i=1}^{n} e_i^2 = \sum_{i=1}^{n} \left( y_{i,\text{measured}} - y_{i,\text{model}} \right)^2 = \sum_{i=1}^{n} \left( y_i - a_0 - a_1 x_i \right)^2$$

Minimizing $S_r$ with respect to $a_0$ and $a_1$ gives

$$a_1 = \frac{n \sum x_i y_i - \sum x_i \sum y_i}{n \sum x_i^2 - \left( \sum x_i \right)^2}$$

$$a_0 = \bar{y} - a_1 \bar{x}$$

where $\bar{x}$ and $\bar{y}$ are the mean values of $x$ and $y$.
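The slope and intercept formulas above can be sketched directly in Python; the data values below are illustrative, not from the lecture:

```python
def linear_fit(x, y):
    """Return (a0, a1) minimizing Sr = sum((y - a0 - a1*x)^2)."""
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxy = sum(xi * yi for xi, yi in zip(x, y))
    sxx = sum(xi * xi for xi in x)
    # a1 from the normal-equation formula, a0 = ybar - a1*xbar
    a1 = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a0 = sy / n - a1 * sx / n
    return a0, a1

# Data lying exactly on y = 2 + 3x recovers the coefficients:
x = [0.0, 1.0, 2.0, 3.0]
y = [2.0, 5.0, 8.0, 11.0]
a0, a1 = linear_fit(x, y)
print(a0, a1)  # → 2.0 3.0
```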
Linear Regression
“Goodness” of fit:
• Total sum of the squares around the mean for the dependent variable, $S_t = \sum (y_i - \bar{y})^2$
• Sum of the squares of the residuals around the regression line, $S_r$
• $S_t - S_r$ quantifies the improvement, or error reduction, due to describing the data in terms of a straight line rather than as an average value.
$$r^2 = \frac{S_t - S_r}{S_t}$$

$r^2$ – coefficient of determination
$\sqrt{r^2} = r$ – correlation coefficient
Linear Regression
• For a perfect fit, $S_r = 0$ and $r = r^2 = 1$, signifying that the line explains 100 percent of the variability of the data.
• For $r = r^2 = 0$, $S_r = S_t$ and the fit represents no improvement.
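A minimal sketch of the $r^2$ computation (the data below are hypothetical, chosen to lie exactly on a line so the perfect-fit case can be checked):

```python
def r_squared(x, y, a0, a1):
    """Coefficient of determination r^2 = (St - Sr) / St."""
    ybar = sum(y) / len(y)
    # St: total sum of squares around the mean of y
    st = sum((yi - ybar) ** 2 for yi in y)
    # Sr: sum of squares of residuals around the regression line
    sr = sum((yi - a0 - a1 * xi) ** 2 for xi, yi in zip(x, y))
    return (st - sr) / st

x = [0.0, 1.0, 2.0, 3.0]
y = [2.0, 5.0, 8.0, 11.0]          # exactly on y = 2 + 3x
print(r_squared(x, y, 2.0, 3.0))   # → 1.0 (perfect fit: Sr = 0)
```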
Linearize a Nonlinear Relationship
For a power-law relationship such as $Sh = a\,Re^m$, taking the logarithm of both sides gives a linear form:

$$\ln(Sh) = m \ln(Re) + \ln(a)$$

$$y = m x + c$$
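The linearization can be sketched as follows; the exponent $m = 0.6$ and prefactor $a = 0.5$ are illustrative values used only to generate synthetic data, not correlation constants from the lecture:

```python
import math

# Hypothetical data generated from Sh = a * Re^m with a = 0.5, m = 0.6.
Re = [10.0, 100.0, 1000.0, 10000.0]
Sh = [0.5 * r ** 0.6 for r in Re]

# Linearize: ln(Sh) = m*ln(Re) + ln(a), i.e. y = m*x + c.
x = [math.log(r) for r in Re]
y = [math.log(s) for s in Sh]

# Ordinary least squares on the transformed data.
n = len(x)
sx, sy = sum(x), sum(y)
sxy = sum(u * v for u, v in zip(x, y))
sxx = sum(u * u for u in x)
m = (n * sxy - sx * sy) / (n * sxx - sx * sx)
c = sy / n - m * sx / n
a = math.exp(c)        # back-transform the intercept: a = e^c
print(m, a)            # recovers m ≈ 0.6, a ≈ 0.5
```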
Polynomial Regression
$$S_r = \sum_{i=1}^{n} \left( y_i - \left( a_0 + a_1 x_i + a_2 x_i^2 \right) \right)^2$$
Polynomial Regression
The coefficients follow from the normal equations:

$$\begin{bmatrix} n & \sum x_i & \sum x_i^2 \\ \sum x_i & \sum x_i^2 & \sum x_i^3 \\ \sum x_i^2 & \sum x_i^3 & \sum x_i^4 \end{bmatrix} \begin{bmatrix} a_0 \\ a_1 \\ a_2 \end{bmatrix} = \begin{bmatrix} \sum y_i \\ \sum x_i y_i \\ \sum x_i^2 y_i \end{bmatrix}$$

Solve with linear algebraic equation methods.
Best fit: $r^2$
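The normal equations for a quadratic fit can be assembled and solved as below; the data are hypothetical, generated from an exact quadratic so the recovered coefficients are easy to check:

```python
import numpy as np

def poly_fit2(x, y):
    """Quadratic least squares via the 3x3 normal equations."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    A = np.array([[n,             x.sum(),       (x**2).sum()],
                  [x.sum(),       (x**2).sum(),  (x**3).sum()],
                  [(x**2).sum(),  (x**3).sum(),  (x**4).sum()]])
    b = np.array([y.sum(), (x*y).sum(), (x**2*y).sum()])
    return np.linalg.solve(A, b)   # [a0, a1, a2]

x = [0, 1, 2, 3, 4]
y = [1 + 2*xi + 3*xi**2 for xi in x]   # exact quadratic
print(poly_fit2(x, y))                 # recovers a0=1, a1=2, a2=3
```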
Multilinear Regression
$$S_r = \sum_{i=1}^{n} \left( y_i - \left( a_0 + a_1 x_{1,i} + a_2 x_{2,i} \right) \right)^2$$
Multilinear Regression
The normal equations:

$$\begin{bmatrix} n & \sum x_{1,i} & \sum x_{2,i} \\ \sum x_{1,i} & \sum x_{1,i}^2 & \sum x_{1,i} x_{2,i} \\ \sum x_{2,i} & \sum x_{1,i} x_{2,i} & \sum x_{2,i}^2 \end{bmatrix} \begin{bmatrix} a_0 \\ a_1 \\ a_2 \end{bmatrix} = \begin{bmatrix} \sum y_i \\ \sum x_{1,i} y_i \\ \sum x_{2,i} y_i \end{bmatrix}$$

Solve with linear algebraic equation methods.
Best fit: $r^2$
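A sketch of the multilinear normal equations, assuming two independent variables; the data are synthetic, lying exactly on the plane $y = 5 + 4x_1 - 3x_2$:

```python
import numpy as np

def multilinear_fit(x1, x2, y):
    """Fit y = a0 + a1*x1 + a2*x2 via the normal equations."""
    x1, x2, y = (np.asarray(v, float) for v in (x1, x2, y))
    n = len(y)
    A = np.array([[n,          x1.sum(),       x2.sum()],
                  [x1.sum(),   (x1**2).sum(),  (x1*x2).sum()],
                  [x2.sum(),   (x1*x2).sum(),  (x2**2).sum()]])
    b = np.array([y.sum(), (x1*y).sum(), (x2*y).sum()])
    return np.linalg.solve(A, b)   # [a0, a1, a2]

x1 = [0, 2, 2.5, 1, 4, 7]
x2 = [0, 1, 2, 3, 6, 2]
y  = [5 + 4*u - 3*v for u, v in zip(x1, x2)]  # exact plane
print(multilinear_fit(x1, x2, y))             # recovers a0=5, a1=4, a2=-3
```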
General Linear Square Method
Suppose

$$y = a_0 z_0 + a_1 z_1 + a_2 z_2 + \cdots + a_m z_m + e$$

Linear: $z_0 = 1,\; z_1 = x_1,\; z_2 = x_2,\; \ldots,\; z_m = x_m$
Polynomial: $z_0 = 1,\; z_1 = x,\; z_2 = x^2,\; \ldots,\; z_m = x^m$
General Linear Square Method
$$[Y] = [Z][A] + [E]$$

$$[Z] = \begin{bmatrix} z_{01} & z_{11} & \cdots & z_{m1} \\ z_{02} & z_{12} & \cdots & z_{m2} \\ \vdots & \vdots & \ddots & \vdots \\ z_{0n} & z_{1n} & \cdots & z_{mn} \end{bmatrix}$$

n – number of data points
m – number of independent variables / order
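With $[Z]$ assembled, the coefficients follow from the normal equations $[Z]^T[Z][A] = [Z]^T[Y]$. A minimal sketch, using a quadratic basis as one instance of the general method (illustrative data):

```python
import numpy as np

def general_linear_fit(Z, y):
    """Solve the normal equations Z^T Z a = Z^T y for the coefficients."""
    Z, y = np.asarray(Z, float), np.asarray(y, float)
    return np.linalg.solve(Z.T @ Z, Z.T @ y)

# Quadratic basis: z0 = 1, z1 = x, z2 = x^2 (one row of Z per data point).
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
Z = np.column_stack([np.ones_like(x), x, x**2])
y = 1 + 2*x + 3*x**2
print(general_linear_fit(Z, y))   # recovers [1, 2, 3]
```

The same routine handles the multilinear case by stacking columns $1, x_1, x_2, \ldots$ instead.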
$$y = f(x) + e$$

$$S_r = \sum_{i=1}^{n} \left( y_i - f(x_i) \right)^2$$
Example:

$$f(x) = a_0 \left( 1 - e^{a_1 x} \right)$$
Taylor Series Expansion
$$f(x_i)_{j+1} = f(x_i)_j + \frac{\partial f(x_i)_j}{\partial a_0} \Delta a_0 + \frac{\partial f(x_i)_j}{\partial a_1} \Delta a_1$$

i – data point number
j – iteration number
Nonlinear Regression
$$\Delta a_0 = a_{0,j+1} - a_{0,j}, \qquad \Delta a_1 = a_{1,j+1} - a_{1,j}$$

Insert the Taylor series into the equation $y = f(x) + e$:

$$y_i = f(x_i)_j + \frac{\partial f(x_i)_j}{\partial a_0} \Delta a_0 + \frac{\partial f(x_i)_j}{\partial a_1} \Delta a_1 + e$$

Rearranging,

$$\frac{\partial f(x_i)_j}{\partial a_0} \Delta a_0 + \frac{\partial f(x_i)_j}{\partial a_1} \Delta a_1 + e = y_i - f(x_i)_j$$
Change to matrix form:

$$\begin{bmatrix} \dfrac{\partial f(x_1)_1}{\partial a_0} & \dfrac{\partial f(x_1)_1}{\partial a_1} \\ \vdots & \vdots \\ \dfrac{\partial f(x_n)_1}{\partial a_0} & \dfrac{\partial f(x_n)_1}{\partial a_1} \end{bmatrix} \begin{bmatrix} \Delta a_0 \\ \Delta a_1 \end{bmatrix} + \begin{bmatrix} e_1 \\ \vdots \\ e_n \end{bmatrix} = \begin{bmatrix} y_1 - f(x_1)_1 \\ \vdots \\ y_n - f(x_n)_1 \end{bmatrix}$$
Nonlinear Regression
$$[Df][\Delta A] + [E] = [Y]$$

If the residuals are driven toward zero, $[E] \approx 0$:

$$[Df][\Delta A] = [Y]$$

Multiplying both sides by $[Df]^T$ gives the normal equations:

$$[Df]^T [Df] [\Delta A] = [Df]^T [Y]$$

which are solved for $[\Delta A]$ to update the parameter estimates at each iteration.
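The iteration above (Gauss-Newton) can be sketched for the example model $f(x) = a_0(1 - e^{a_1 x})$; the parameter values, starting guesses, and data points below are illustrative assumptions, not from the lecture:

```python
import numpy as np

def f(x, a0, a1):
    # Model from the example slide: f(x) = a0 * (1 - e^(a1*x))
    return a0 * (1 - np.exp(a1 * x))

def gauss_newton(x, y, a0, a1, iters=20):
    """Repeatedly solve [Df]^T[Df][dA] = [Df]^T[Y] and update a0, a1."""
    for _ in range(iters):
        # Columns of [Df]: partial derivatives of f w.r.t. a0 and a1.
        d_a0 = 1 - np.exp(a1 * x)
        d_a1 = -a0 * x * np.exp(a1 * x)
        Df = np.column_stack([d_a0, d_a1])
        Y = y - f(x, a0, a1)                    # residual vector [Y]
        dA = np.linalg.solve(Df.T @ Df, Df.T @ Y)
        a0, a1 = a0 + dA[0], a1 + dA[1]
    return a0, a1

# Synthetic data generated from a0 = 1.2, a1 = -0.5 (assumed values).
x = np.array([0.25, 0.75, 1.25, 1.75, 2.25])
y = f(x, 1.2, -0.5)
print(gauss_newton(x, y, 1.0, -1.0))   # converges to about (1.2, -0.5)
```

In practice the loop would stop when $[\Delta A]$ falls below a tolerance rather than after a fixed number of iterations.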