Mathematical Methods in Engineering and Science
Textbook: Dasgupta B., Applied Mathematical Methods (Pearson Education, 2006, 2007). http://home.iitk.ac.in/~dasgupta/MathBook
Contents I
Preliminary Background
Matrices and Linear Transformations
Operational Fundamentals of Linear Algebra
Systems of Linear Equations
Gauss Elimination Family of Methods
Special Systems and Special Methods
Numerical Aspects in Linear Systems
Contents II
Eigenvalues and Eigenvectors
Diagonalization and Similarity Transformations
Jacobi and Givens Rotation Methods
Householder Transformation and Tridiagonal Matrices
QR Decomposition Method
Eigenvalue Problem of General Matrices
Singular Value Decomposition
Vector Spaces: Fundamental Concepts*
Contents III
Topics in Multivariate Calculus
Vector Analysis: Curves and Surfaces
Scalar and Vector Fields
Polynomial Equations
Solution of Nonlinear Equations and Systems
Optimization: Introduction
Multivariate Optimization
Methods of Nonlinear Optimization*
Contents IV
Constrained Optimization
Linear and Quadratic Programming Problems*
Interpolation and Approximation
Basic Methods of Numerical Integration
Advanced Topics in Numerical Integration*
Numerical Solution of Ordinary Differential Equations
ODE Solutions: Advanced Issues
Existence and Uniqueness Theory
Contents V
First Order Ordinary Differential Equations
Second Order Linear Homogeneous ODEs
Second Order Linear Non-Homogeneous ODEs
Higher Order Linear ODEs
Laplace Transforms
ODE Systems
Stability of Dynamic Systems
Series Solutions and Special Functions
Contents VI
Sturm-Liouville Theory
Fourier Series and Integrals
Fourier Transforms
Minimax Approximation*
Partial Differential Equations
Analytic Functions
Integrals in the Complex Plane
Singularities of Complex Functions
Contents VII
Variational Calculus*
Epilogue
Selected References
Outline
Preliminary Background
Theme of the Course
Course Contents
Sources for More Detailed Study
Logistic Strategy
Expected Background
Theme of the Course
a fast-paced recapitulation of UG mathematics
extension with supplementary advanced ideas for a mature and forward orientation
exposure and highlighting of interconnections
Target audience:
would-be researchers in analytical/computational areas
students who are till now somewhat afraid of mathematics
Course Contents
Sources for More Detailed Study
If you have the time, need and interest, then you may consult:
individual books on individual topics;
another umbrella volume, like Kreyszig, McQuarrie, O'Neil or Wylie and Barrett;
a good book on numerical analysis or scientific computing, like Acton, Heath, Hildebrand, Krishnamurthy and Sen, Press et al., Stoer and Bulirsch;
friends, in joint-study groups.
Logistic Strategy
Study in the given sequence, to the extent possible.
Do not just read mathematics: use lots of pen and paper. Read mathematics books and do mathematics. Exercises are a must.
Use as many methods as you can think of, certainly including the one which is recommended. Consult the Appendix after you work out the solution. Follow the comments, interpretations and suggested extensions.
Think. Get excited. Discuss. Bore everybody in your known circles.
Not enough time to attempt all? Want a selection?
Master a programming environment. Use a mathematical/numerical library or software. Take a MATLAB tutorial session?
Logistic Strategy
Tutorial Plan
[Table: tutorial plan — for each chapter (2 to 48), a selection of exercises and the designated tutorial problem(s).]
Expected Background
moderate background of undergraduate mathematics
firm understanding of school mathematics and undergraduate calculus
Points to note
Put in effort, keep pace. Stress concepts as well as problem-solving. Follow methods diligently. Ensure background skills.
Outline
Matrices and Linear Transformations
Matrices
Geometry and Algebra
Linear Transformations
Matrix Terminology
Matrices
Question: What is a matrix? Answers:
a mapping $f : M \times N \to F$, where $M = \{1, 2, 3, \ldots, m\}$, $N = \{1, 2, 3, \ldots, n\}$ and $F$ is the set of real numbers or complex numbers?
Question: What does a matrix do?
Explore: With an $m \times n$ matrix $A$,
$$y_1 = a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n$$
$$y_2 = a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n$$
$$\vdots$$
$$y_m = a_{m1}x_1 + a_{m2}x_2 + \cdots + a_{mn}x_n$$
or
Ax = y
Matrices
Consider these definitions:
$y = f(x)$
$y = f(\mathbf{x}) = f(x_1, x_2, \ldots, x_n)$
$y_k = f_k(\mathbf{x}) = f_k(x_1, x_2, \ldots, x_n)$, $k = 1, 2, \ldots, m$
$\mathbf{y} = \mathbf{f}(\mathbf{x})$
$\mathbf{y} = A\mathbf{x}$
Further answer: A matrix is the definition of a linear vector function of a vector variable. Anything deeper?
Caution: Matrices do not define vector functions whose components are of the form
$$y_k = a_{k0} + a_{k1}x_1 + a_{k2}x_2 + \cdots + a_{kn}x_n.$$
Let vector $x = [x_1\ x_2\ x_3]^T$ denote a point $(x_1, x_2, x_3)$ in 3-dimensional space in the frame of reference $OX_1X_2X_3$.
Example: With $m = 2$ and $n = 3$,
$$y_1 = a_{11}x_1 + a_{12}x_2 + a_{13}x_3, \qquad y_2 = a_{21}x_1 + a_{22}x_2 + a_{23}x_3.$$
Plot $y_1$ and $y_2$ in the $OY_1Y_2$ plane.
[Figure: the mapping $A : R^3 \to R^2$, showing a point $x$ in the domain ($OX_1X_2X_3$) and its image $y$ in the codomain ($OY_1Y_2$).]
Operating on point $x$ in $R^3$, matrix $A$ transforms it to $y$ in $R^2$. Point $y$ is the image of point $x$ under the mapping defined by matrix $A$. Note domain $R^3$ and co-domain $R^2$ with reference to the figure, and verify that $A : R^3 \to R^2$ fulfils the requirements of a mapping, by definition. A matrix gives a definition of a linear transformation from one vector space to another.
Linear Transformations
Operate $A$ on a large number of points $x_i \in R^3$. Obtain corresponding images $y_i \in R^2$. The linear transformation represented by $A$ implies the totality of these correspondences.
We decide to use a different frame of reference $OX_1'X_2'X_3'$ for $R^3$. [And, possibly $OY_1'Y_2'$ for $R^2$ at the same time.]
Coordinates change, i.e. $x_i$ changes to $x_i'$ (and possibly $y_i$ to $y_i'$). Now, we need a different matrix, say $A'$, to get back the correspondence as $y' = A'x'$.
A matrix: just one description.
Question: How to get the new matrix $A'$?
Matrix Terminology
Matrix product
Transpose
Conjugate transpose
Symmetric and skew-symmetric matrices
Hermitian and skew-Hermitian matrices
Determinant of a square matrix
Inverse of a square matrix
Adjoint of a square matrix
Points to note
A matrix defines a linear transformation from one vector space to another. The matrix representation of a linear transformation depends on the selected bases (or frames of reference) of the source and target spaces.
Outline
Operational Fundamentals of Linear Algebra
Range and Null Space: Rank and Nullity
Basis
Change of Basis
Elementary Transformations
Range and Null Space: Rank and Nullity
Consider $A \in R^{m \times n}$ as a mapping
$$A : R^n \to R^m, \quad Ax = y, \quad x \in R^n, \ y \in R^m.$$
Observations
1. Every $x \in R^n$ has an image $y \in R^m$, but every $y \in R^m$ need not have a pre-image in $R^n$. Range (or range space), a subset/subspace of the co-domain: contains the images of all $x \in R^n$.
2. The image of $x \in R^n$ in $R^m$ is unique, but the pre-image of $y \in R^m$ need not be: it may be non-existent, unique or infinitely many. Null space, a subset/subspace of the domain: contains the pre-images of only $0 \in R^m$.
[Figure: domain $R^n$ and codomain $R^m$, with Null($A$) shown as a subspace of the domain mapping to $0$, and Range($A$) as a subspace of the codomain.]
Question: What is the dimension of a vector space?
Linear dependence and independence: vectors $x_1, x_2, \ldots, x_r$ in a vector space are called linearly independent if
$$k_1 x_1 + k_2 x_2 + \cdots + k_r x_r = 0 \;\Rightarrow\; k_1 = k_2 = \cdots = k_r = 0.$$
$$\mathrm{Null}(A) = \{x : x \in R^n, \ Ax = 0\}$$
Basis
Take a set of vectors $v_1, v_2, \ldots, v_r$ in a vector space.
Question: Given a vector $v$ in the vector space, can we describe it as
$$v = k_1 v_1 + k_2 v_2 + \cdots + k_r v_r = Vk,$$
where $V = [v_1\ v_2\ \cdots\ v_r]$ and $k = [k_1\ k_2\ \cdots\ k_r]^T$?
Answer: Not necessarily.
Span, denoted as $\langle v_1, v_2, \ldots, v_r \rangle$: the subspace described/generated by a set of vectors.
Basis: A basis of a vector space is composed of an ordered minimal set of vectors spanning the entire space.
The basis for an n-dimensional space will have exactly n members, all linearly independent.
Basis
Orthogonal basis: $\{v_1, v_2, \ldots, v_n\}$ with $v_j^T v_k = 0$ for $j \neq k$.
Orthonormal basis:
$$v_j^T v_k = \delta_{jk} = \begin{cases} 0 & \text{if } j \neq k \\ 1 & \text{if } j = k \end{cases}$$
Members of an orthonormal basis form an orthogonal matrix.
Properties of an orthogonal matrix: $V^{-1} = V^T$ (i.e. $VV^T = I$), and $\det V = +1$ or $-1$.
Natural basis: $e_1 = [1\ 0\ 0\ \cdots\ 0]^T$, $e_2 = [0\ 1\ 0\ \cdots\ 0]^T$, $\ldots$, $e_n = [0\ 0\ \cdots\ 0\ 1]^T$.
Change of Basis
Suppose $x$ represents a vector (point) in $R^n$ in some basis.
Question: If we change over to a new basis $\{c_1, c_2, \ldots, c_n\}$, how does the representation of a vector change?
$$x = \bar{x}_1 c_1 + \bar{x}_2 c_2 + \cdots + \bar{x}_n c_n = [c_1\ c_2\ \cdots\ c_n]\,[\bar{x}_1\ \bar{x}_2\ \cdots\ \bar{x}_n]^T.$$
With $C = [c_1\ c_2\ \cdots\ c_n]$: new to old coordinates, $C\bar{x} = x$; old to new coordinates, $\bar{x} = C^{-1}x$.
Note: Matrix $C$ is invertible. How?
Special case with $C$ orthogonal: orthogonal coordinate transformation.
Change of Basis
Question: And, how does a basis change affect the representation of a linear transformation?
Consider the mapping $A : R^n \to R^m$, $Ax = y$.
Change the basis of the domain through $P \in R^{n \times n}$ and that of the co-domain through $Q \in R^{m \times m}$.
New and old vector representations are related as $P\bar{x} = x$ and $Q\bar{y} = y$; then $Ax = y$ gives $Q\bar{y} = AP\bar{x}$, i.e. $\bar{y} = Q^{-1}AP\,\bar{x}$, so the new matrix representation is $\bar{A} = Q^{-1}AP$.
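A minimal numerical sketch of these relations (the matrices, bases and point below are illustrative examples, not from the slides); it checks that the new representation $\bar{A} = Q^{-1}AP$ reproduces the same mapping in the new coordinates:

```python
import numpy as np

# Old-basis data: a map A : R^3 -> R^2 and a point x (illustrative values).
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 3.0]])
x = np.array([1.0, -1.0, 2.0])

# New bases: columns of P (for the domain R^3) and Q (for the codomain R^2).
P = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])
Q = np.array([[2.0, 0.0],
              [1.0, 1.0]])

# Coordinates change as P x_bar = x and Q y_bar = y.
x_bar = np.linalg.solve(P, x)
A_bar = np.linalg.solve(Q, A @ P)      # new matrix representation Q^{-1} A P
y_bar = A_bar @ x_bar

# The new representation must describe the same image vector y = A x.
assert np.allclose(Q @ y_bar, A @ x)
print(A_bar)
```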
Elementary Transformations
Observation: Certain reorganizations of equations in a system have no effect on the solution(s).
Elementary Row Transformations:
1. interchange of two rows,
2. scaling of a row, and
3. addition of a scalar multiple of a row to another.
Elementary Column Transformations: similar operations with columns, equivalent to a corresponding shuffling of the variables (unknowns).
Elementary Transformations
Equivalence of matrices: an elementary transformation defines an equivalence relation between two matrices.
Reduction to normal form:
$$A_N = \begin{bmatrix} I_r & 0 \\ 0 & 0 \end{bmatrix}$$
Rank invariance: Elementary transformations do not alter the rank of a matrix. Elementary transformation as matrix multiplication: an elementary row transformation on a matrix is equivalent to a pre-multiplication with an elementary matrix, obtained through the same row transformation on the identity matrix (of appropriate size). Similarly, an elementary column transformation is equivalent to post-multiplication with the corresponding elementary matrix.
Points to note
Concepts of range and null space of a linear transformation. Effects of change of basis on representations of vectors and linear transformations. Elementary transformations as tools to modify (simplify) systems of (simultaneous) linear equations.
Outline
Systems of Linear Equations
Nature of Solutions
Basic Idea of Solution Methodology
Homogeneous Systems
Pivoting
Partitioning and Block Operations
Nature of Solutions
$Ax = b$
$Ax = b$ has a solution $\iff$ $b \in \mathrm{Range}(A)$.
The solution of $Ax = b$ is unique $\iff$ $Ax = 0$ has only the trivial (zero) solution.
Infinite solutions: for $\mathrm{Rank}(A) = \mathrm{Rank}([A\,|\,b]) = k < n$, the solution is
$$x = \bar{x} + x_N, \quad \text{with } A\bar{x} = b \ \text{ and } \ x_N \in \mathrm{Null}(A).$$
Basic Idea of Solution Methodology
To determine the unique solution, or to decouple the equations, use elementary transformations.
For solving $Ax = b$, apply suitable elementary row transformations on both sides, leading to
$$R_q R_{q-1} \cdots R_2 R_1 A x = R_q R_{q-1} \cdots R_2 R_1 b, \quad \text{or} \quad [RA]\,x = Rb;$$
such that matrix $[RA]$ is greatly simplified. In the best case, with complete reduction, $RA = I_n$, and the components of $x$ can be read off from $Rb$.
For inverting matrix $A$, treat $AA^{-1} = I_n$ similarly.
Homogeneous Systems
To solve $Ax = 0$, or to describe $\mathrm{Null}(A)$, apply a series of elementary row transformations on $A$ to reduce it to $\bar{A}$, the row-reduced echelon form or RREF.
Features of RREF:
1. The first non-zero entry in any row is a 1, the leading 1.
2. In the same column as a leading 1, all other entries are zero.
3. Non-zero entries in a lower row appear later (further to the right).
Variables corresponding to columns having leading 1's are expressed in terms of the remaining variables.
Solution of $Ax = 0$: $x = [z_1\ z_2\ \cdots\ z_{n-k}]\,[u_1\ u_2\ \cdots\ u_{n-k}]^T$.
Basis of $\mathrm{Null}(A)$: $\{z_1, z_2, \ldots, z_{n-k}\}$.
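A short sketch of this construction (the particular matrix is only an example) using SymPy, whose rref() and nullspace() routines follow exactly the RREF-based procedure described above:

```python
from sympy import Matrix

# An example 3x4 matrix of rank 2.
A = Matrix([[1, 2, 0, 1],
            [2, 4, 1, 3],
            [3, 6, 1, 4]])

R, pivot_cols = A.rref()        # row-reduced echelon form and pivot columns
print(R)                        # leading 1's in the pivot columns
print(pivot_cols)               # e.g. (0, 2)

# Basis vectors z_1, ..., z_{n-k} of Null(A), one per free variable.
for z in A.nullspace():
    print(z.T)
    assert A * z == Matrix([0, 0, 0])
```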
Pivoting
Attempt: to get 1 at the diagonal (or leading) position, with 0 elsewhere.
Key step: division by the diagonal (or leading) entry. Consider, schematically,
$$A = \begin{bmatrix} I_k & * & * \\ 0 & \epsilon & \text{BIG} \\ 0 & \text{big} & * \end{bmatrix},$$
with a very small entry $\epsilon$ in the pivot position. Cannot divide by zero. Should not divide by $\epsilon$.
Partial pivoting: row interchange to get "big" in place of $\epsilon$.
Complete pivoting: row and column interchanges to get "BIG" in place of $\epsilon$.
Complete pivoting does not give a huge advantage over partial pivoting, but requires maintaining the variable permutation for later unscrambling.
Partitioning and Block Operations
A system can be partitioned into blocks, e.g.
$$\begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} y_1 \\ y_2 \end{bmatrix}.$$
For a valid partitioning, block sizes should be consistent. Elementary transformations can be applied over blocks. Block operations can be computationally economical at times. Conceptually, different blocks of contributions/equations can be assembled for mathematical modelling of complicated coupled systems.
Points to note
Solution(s) of $Ax = b$ may be non-existent, unique or infinitely many. The complete solution can be described by composing a particular solution with the null space of $A$. A null space basis can be obtained conveniently from the row-reduced echelon form of $A$. For a strategy of solution, pivoting is an important step.
Outline
Gauss Elimination Family of Methods
Gauss-Jordan Elimination
Gaussian Elimination with Back-Substitution
LU Decomposition
Gauss-Jordan Elimination
Task: solve several systems $Ax = b_1$, $Ax = b_2$, $Ax = b_3$, find $A^{-1}$, and evaluate $A^{-1}B$, all together.
Assemble the augmented matrix $C = [A \mid b_1\ b_2\ b_3\ I\ B]$ and reduce it by elementary row transformations to $\bar{C} = [I \mid A^{-1}b_1\ A^{-1}b_2\ A^{-1}b_3\ A^{-1}\ A^{-1}B]$.
Premature termination: matrix $A$ singular; decision?
If you use complete pivoting, unscramble the permutation.
Identity matrix in both $C$ and $\bar{C}$? Store $A^{-1}$ in place.
For evaluating $A^{-1}b$, do not develop $A^{-1}$.
Gauss-Jordan elimination an overkill? Want something cheaper?
Gauss-Jordan Elimination
Gauss-Jordan Algorithm
$\Delta = 1$
For $k = 1, 2, 3, \ldots, n$:
1. Pivot: identify $l$ such that $|c_{lk}| = \max |c_{jk}|$ for $k \le j \le n$. If $c_{lk} = 0$, then $\Delta = 0$ and exit. Else, interchange row $k$ and row $l$.
2. $\Delta \leftarrow \Delta\, c_{kk}$; divide row $k$ by $c_{kk}$.
3. Subtract $c_{jk}$ times row $k$ from row $j$, $\forall j \neq k$.
In case of a non-singular $A$: default termination.
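A minimal sketch of the algorithm above (partial pivoting and the $\Delta$ bookkeeping); the matrix and right-hand sides are illustrative:

```python
import numpy as np

def gauss_jordan(A, B):
    """Reduce [A | B] so that A becomes I; returns (delta, A^{-1} B)."""
    C = np.hstack([A.astype(float), B.astype(float)])
    n = A.shape[0]
    delta = 1.0
    for k in range(n):
        # Pivot: bring the largest |c_jk| (j >= k) to the diagonal position.
        l = k + np.argmax(np.abs(C[k:, k]))
        if C[l, k] == 0.0:
            return 0.0, None          # singular matrix: premature termination
        if l != k:
            C[[k, l]] = C[[l, k]]
        delta *= C[k, k]
        C[k] /= C[k, k]
        # Eliminate column k from every other row.
        for j in range(n):
            if j != k:
                C[j] -= C[j, k] * C[k]
    return delta, C[:, n:]

A = np.array([[2.0, 1.0], [1.0, 3.0]])
B = np.array([[1.0], [2.0]])
delta, X = gauss_jordan(A, B)
print(X, np.allclose(A @ X, B))
```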
Gaussian Elimination with Back-Substitution
Gaussian elimination:
$$Ax = b \;\Rightarrow\; \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ & a_{22} & \cdots & a_{2n} \\ & & \ddots & \vdots \\ & & & a_{nn} \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix} = \begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_n \end{bmatrix}.$$
Back-substitutions:
$$x_n = b_n / a_{nn}, \qquad x_i = \frac{1}{a_{ii}} \left( b_i - \sum_{j=i+1}^{n} a_{ij} x_j \right) \quad \text{for } i = n-1, n-2, \ldots, 2, 1.$$
Remarks
Computational cost: half compared to G-J elimination.
Like G-J elimination, prior knowledge of the RHS is needed.
Gaussian Elimination with Back-Substitution
Anatomy of Gaussian elimination: the process of Gaussian elimination (with no pivoting) leads to
$$U = R_q R_{q-1} \cdots R_2 R_1 A = RA.$$
The steps are given by
$$(j\text{-th row}) \leftarrow (j\text{-th row}) - \frac{a_{jk}}{a_{kk}}\,(k\text{-th row}), \quad k = 1, 2, \ldots, (n-1); \ \ j = k+1, k+2, \ldots, n;$$
for example,
$$R_1 = \begin{bmatrix} 1 & 0 & \cdots & 0 \\ -\frac{a_{21}}{a_{11}} & 1 & \cdots & 0 \\ -\frac{a_{31}}{a_{11}} & 0 & \ddots & \vdots \\ \vdots & \vdots & & \\ -\frac{a_{n1}}{a_{11}} & 0 & \cdots & 1 \end{bmatrix}, \quad \text{etc.}$$
With $L = R^{-1}$,
$$A = LU.$$
LU Decomposition
A square matrix with non-zero leading minors is LU-decomposable. No reference to a right-hand-side (RHS) vector!
To solve $Ax = b$, denote $y = Ux$ and split as
$$Ax = b \;\Rightarrow\; LUx = b \;\Rightarrow\; Ly = b \ \text{ and } \ Ux = y.$$
Forward substitutions:
$$y_i = \frac{1}{l_{ii}} \left( b_i - \sum_{j=1}^{i-1} l_{ij} y_j \right) \quad \text{for } i = 1, 2, 3, \ldots, n;$$
Back-substitutions:
$$x_i = \frac{1}{u_{ii}} \left( y_i - \sum_{j=i+1}^{n} u_{ij} x_j \right) \quad \text{for } i = n, n-1, n-2, \ldots, 1.$$
LU Decomposition
Write $A = LU$ with lower triangular $L$ and upper triangular $U$; equating elements, $a_{ij} = \sum_{k=1}^{\min(i,j)} l_{ik} u_{kj}$, which can be solved for the entries of $L$ and $U$ after fixing $n$ arbitrary choices (e.g. $l_{ii} = 1$ in Doolittle's algorithm).
LU Decomposition
Doolittle's algorithm
1. $u_{ij} = a_{ij} - \sum_{k=1}^{i-1} l_{ik} u_{kj}$ for $1 \le i \le j$;
2. $l_{ij} = \dfrac{1}{u_{jj}} \left( a_{ij} - \sum_{k=1}^{j-1} l_{ik} u_{kj} \right)$ for $i > j$.
Evaluation proceeds in column order of the composite matrix
$$\begin{bmatrix} u_{11} & u_{12} & u_{13} & \cdots \\ l_{21} & u_{22} & u_{23} & \cdots \\ l_{31} & l_{32} & u_{33} & \cdots \\ \vdots & \vdots & \vdots & \ddots \\ l_{n1} & l_{n2} & l_{n3} & \cdots \end{bmatrix}$$
(storing the non-trivial entries of $L$ and $U$ in place of $A$).
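A minimal sketch of Doolittle's factorization with the forward/back substitutions of the previous slide (no pivoting; the test matrix is the row-permuted example used later in this chapter):

```python
import numpy as np

def doolittle_lu(A):
    """LU factors with unit diagonal in L (no pivoting; assumes it is safe)."""
    n = A.shape[0]
    L, U = np.eye(n), np.zeros((n, n))
    for j in range(n):                      # proceed column by column
        for i in range(j + 1):              # u_ij for i <= j
            U[i, j] = A[i, j] - L[i, :i] @ U[:i, j]
        for i in range(j + 1, n):           # l_ij for i > j
            L[i, j] = (A[i, j] - L[i, :j] @ U[:j, j]) / U[j, j]
    return L, U

def lu_solve(L, U, b):
    n = len(b)
    y = np.zeros(n)
    for i in range(n):                      # forward substitution (l_ii = 1)
        y[i] = b[i] - L[i, :i] @ y[:i]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):          # back substitution
        x[i] = (y[i] - U[i, i+1:] @ x[i+1:]) / U[i, i]
    return x

A = np.array([[3.0, 1.0, 2.0], [0.0, 1.0, 2.0], [2.0, 1.0, 3.0]])
b = np.array([1.0, 2.0, 3.0])
L, U = doolittle_lu(A)
x = lu_solve(L, U, b)
print(np.allclose(L @ U, A), np.allclose(A @ x, b))
```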
LU Decomposition
Question: What about matrices which are not LU-decomposable? Question: What about pivoting?
Consider the non-singular matrix
$$\begin{bmatrix} 0 & 1 & 2 \\ 3 & 1 & 2 \\ 2 & 1 & 3 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ l_{21} & 1 & 0 \\ l_{31} & l_{32} & 1 \end{bmatrix} \begin{bmatrix} u_{11}{=}\,0 & u_{12} & u_{13} \\ 0 & u_{22} & u_{23} \\ 0 & 0 & u_{33} \end{bmatrix} \;\Rightarrow\; l_{21} = ?$$
LU-decompose a permutation of its rows instead:
$$\begin{bmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} 0 & 1 & 2 \\ 3 & 1 & 2 \\ 2 & 1 & 3 \end{bmatrix} = \begin{bmatrix} 3 & 1 & 2 \\ 0 & 1 & 2 \\ 2 & 1 & 3 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ \frac{2}{3} & \frac{1}{3} & 1 \end{bmatrix} \begin{bmatrix} 3 & 1 & 2 \\ 0 & 1 & 2 \\ 0 & 0 & 1 \end{bmatrix}.$$
Points to note
For invertible coefficient matrices, use
Gauss-Jordan elimination for a large number of RHS vectors available all together, and also for matrix inversion;
Gaussian elimination with back-substitution for a small number of RHS vectors available together;
the LU decomposition method to develop and maintain factors to be used as and when RHS vectors become available.
Pivoting is almost necessary (without further special structure). Necessary Exercises: 1,4,5
Outline
Special Systems and Special Methods
Quadratic Forms, Symmetry and Positive Definiteness
Cholesky Decomposition
Sparse Systems*
Quadratic Forms, Symmetry and Positive Definiteness
Quadratic form
Quadratic form
$$q(x) = x^T A x = \sum_{i=1}^{n} \sum_{j=1}^{n} a_{ij} x_i x_j,$$
defined with respect to a symmetric matrix.
Quadratic form $q(x)$, equivalently matrix $A$, is called positive definite (p.d.) when $x^T A x > 0$ for all $x \neq 0$, and positive semi-definite when $x^T A x \ge 0$ for all $x \neq 0$.
Sylvester's criteria (for positive definiteness):
$$a_{11} > 0, \quad \det\begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix} > 0, \quad \ldots, \quad \det A > 0.$$
Cholesky Decomposition
If $A \in R^{n \times n}$ is symmetric and positive definite, then there exists a non-singular lower triangular matrix $L \in R^{n \times n}$ such that $A = LL^T$.
Algorithm: for $i = 1, 2, 3, \ldots, n$,
$$L_{ii} = \sqrt{a_{ii} - \sum_{k=1}^{i-1} L_{ik}^2}, \qquad L_{ji} = \frac{1}{L_{ii}} \left( a_{ji} - \sum_{k=1}^{i-1} L_{jk} L_{ik} \right) \quad \text{for } i < j \le n.$$
Test of positive definiteness. Stable algorithm: no pivoting necessary! Economy of space and time.
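A direct sketch of this algorithm (the test matrix is an illustrative symmetric positive definite example), cross-checked against NumPy's built-in factorization:

```python
import numpy as np

def cholesky(A):
    """Return lower triangular L with A = L L^T; fails if A is not s.p.d."""
    n = A.shape[0]
    L = np.zeros_like(A, dtype=float)
    for i in range(n):
        s = A[i, i] - L[i, :i] @ L[i, :i]
        if s <= 0.0:
            raise ValueError("matrix is not positive definite")
        L[i, i] = np.sqrt(s)
        for j in range(i + 1, n):
            L[j, i] = (A[j, i] - L[j, :i] @ L[i, :i]) / L[i, i]
    return L

A = np.array([[4.0, 2.0, 2.0],
              [2.0, 10.0, 7.0],
              [2.0, 7.0, 21.0]])
L = cholesky(A)
print(np.allclose(L @ L.T, A))
print(np.allclose(L, np.linalg.cholesky(A)))   # cross-check with NumPy
```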
Sparse Systems*
What is a sparse matrix?
Bandedness and bandwidth
Efficient storage and processing
Updates
Points to note
Concepts and criteria of positive definiteness and positive semi-definiteness
Cholesky decomposition method for symmetric positive definite systems
Nature of sparsity and its exploitation
Outline
Numerical Aspects in Linear Systems
Norms and Condition Numbers
Ill-conditioning and Sensitivity
Rectangular Systems
Singularity-Robust Solutions
Iterative Methods
Norms and Condition Numbers
Euclidean norm: $\|x\| = \sqrt{x^T x}$.
The $p$-norm: $\|x\|_p = \left( \sum_{i=1}^{n} |x_i|^p \right)^{1/p}$.
Weighted norm: $\|x\|_w = \sqrt{x^T W x}$, with $W$ symmetric and positive definite.
Norm of a matrix: magnitude or scale of the transformation.
The matrix norm (induced by a vector norm) is given by the largest magnification it can produce on a vector:
$$\|A\| = \max_{x \neq 0} \frac{\|Ax\|}{\|x\|} = \max_{\|x\|=1} \|Ax\|.$$
Direct consequence: $\|Ax\| \le \|A\|\,\|x\|$.
Ill-conditioning and Sensitivity
Example: consider the system
$$0.9999\,x_1 - 1.0001\,x_2 = 1, \qquad x_1 - x_2 = 1 + \epsilon.$$
Solution: $x_1 = \dfrac{10001\epsilon + 1}{2}$, $\quad x_2 = \dfrac{9999\epsilon - 1}{2}$.
A tiny change $\epsilon$ in the data produces a huge change in the solution: the system is ill-conditioned. See illustration.
[Figure: geometric illustration of conditioning; panels (a)-(d) show pairs of straight lines (1) and (2) in the $X_1$-$X_2$ plane, from well-conditioned intersection to nearly parallel lines, with panel (d) depicting singularity.]
Rectangular Systems
Over-determined case ($m > n$): minimize the least-square error
$$\frac{1}{2}\|Ax - b\|^2 = \frac{1}{2}(Ax - b)^T(Ax - b) = \frac{1}{2}x^T A^T A x - x^T A^T b + \frac{1}{2}b^T b.$$
Setting the gradient to zero gives the normal equations $A^T A x = A^T b$.
Rectangular Systems
Consider $Ax = b$ with $A \in R^{m \times n}$ and $\mathrm{Rank}(A) = m < n$.
Look for $\lambda \in R^m$ that satisfies $A^T\lambda = x$ and $AA^T\lambda = b$.
Solution: $x = A^T\lambda = A^T(AA^T)^{-1}b$.
Consider the problem
$$\text{minimize } U(x) = \tfrac{1}{2}x^T x \quad \text{subject to } Ax = b.$$
The solution $x = A^T(AA^T)^{-1}b$ gives the foot of the perpendicular on the solution "plane", and the pseudoinverse $A^\# = A^T(AA^T)^{-1}$.
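A minimal numerical sketch of this minimum-norm solution (the matrix and right-hand side are illustrative), compared against NumPy's pseudoinverse and least-squares routines:

```python
import numpy as np

# Under-determined consistent system: A is 2x4 with full row rank (example data).
A = np.array([[1.0, 2.0, 0.0, 1.0],
              [0.0, 1.0, 1.0, 3.0]])
b = np.array([3.0, 2.0])

# Minimum-norm solution x = A^T (A A^T)^{-1} b via the formula above.
lam = np.linalg.solve(A @ A.T, b)
x_min = A.T @ lam

print(np.allclose(A @ x_min, b))                       # it solves the system
print(np.allclose(x_min, np.linalg.pinv(A) @ b))       # agrees with pinv
print(np.allclose(x_min, np.linalg.lstsq(A, b, rcond=None)[0]))
```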
Singularity-Robust Solutions
recipe for any linear system (m > n, m = n or m < n), with any condition!
$Ax = b$ may have a conflict: form $A^T A x = A^T b$.
$A^T A$ may be ill-conditioned: rig the system as $(A^T A + \nu^2 I_n)\,x = A^T b$, with a small parameter $\nu$.
Coefficient matrix: symmetric and positive definite!
The idea: immunize the system, paying a small price (Tikhonov regularization).
Issues:
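A sketch of this recipe (the parameter name alpha and the nearly singular test matrix are illustrative assumptions, not from the slides):

```python
import numpy as np

def tikhonov_solve(A, b, alpha=1e-6):
    """Singularity-robust solution of A x = b via (A^T A + alpha^2 I) x = A^T b."""
    n = A.shape[1]
    M = A.T @ A + alpha**2 * np.eye(n)      # symmetric and positive definite
    return np.linalg.solve(M, A.T @ b)

# Works for square, over- or under-determined A, even if nearly singular.
A = np.array([[1.0, 1.0],
              [1.0, 1.0 + 1e-10]])
b = np.array([2.0, 2.0])
print(tikhonov_solve(A, b, alpha=1e-4))     # a well-behaved compromise solution
```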
Iterative Methods
Jacobi's iteration method:
$$x_i^{(k+1)} = \frac{1}{a_{ii}} \left( b_i - \sum_{j=1,\,j \neq i}^{n} a_{ij} x_j^{(k)} \right) \quad \text{for } i = 1, 2, 3, \ldots, n.$$
Gauss-Seidel method (using the latest available values):
$$x_i^{(k+1)} = \frac{1}{a_{ii}} \left( b_i - \sum_{j=1}^{i-1} a_{ij} x_j^{(k+1)} - \sum_{j=i+1}^{n} a_{ij} x_j^{(k)} \right) \quad \text{for } i = 1, 2, 3, \ldots, n.$$
The category of relaxation methods: they rely on diagonal dominance and the availability of good initial approximations.
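A minimal sketch of Jacobi's iteration (the diagonally dominant test system is illustrative):

```python
import numpy as np

def jacobi(A, b, x0=None, tol=1e-10, max_iter=500):
    """Jacobi iteration; converges e.g. for diagonally dominant A."""
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.astype(float)
    D = np.diag(A)
    R = A - np.diagflat(D)                  # off-diagonal part
    for _ in range(max_iter):
        x_new = (b - R @ x) / D             # all components updated from x^(k)
        if np.linalg.norm(x_new - x, np.inf) < tol:
            return x_new
        x = x_new
    return x

A = np.array([[10.0, 1.0, 2.0],
              [1.0, 8.0, 1.0],
              [2.0, 1.0, 12.0]])
b = np.array([13.0, 10.0, 15.0])
x = jacobi(A, b)
print(np.allclose(A @ x, b, atol=1e-8))
```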
Points to note
Solutions are unreliable when the coefficient matrix is ill-conditioned. Finding the pseudoinverse of a full-rank matrix is easy. Tikhonov regularization provides singularity-robust solutions. Iterative methods may have an edge in certain situations!
Outline
Eigenvalues and Eigenvectors
Eigenvalue Problem
Generalized Eigenvalue Problem
Some Basic Theoretical Results
Power Method
Eigenvalue Problem
In the mapping $A : R^n \to R^n$, special vectors of matrix $A \in R^{n \times n}$ are mapped to scalar multiples of themselves, i.e. undergo pure scaling:
$$Av = \lambda v.$$
Eigenvector ($v$) and eigenvalue ($\lambda$): eigenpair $(\lambda, v)$.
The algebraic eigenvalue problem: $(\lambda I - A)v = 0$.
For a non-trivial (non-zero) solution $v$, $\det(\lambda I - A) = 0$.
Multiplicity of an eigenvalue: algebraic and geometric Multiplicity mismatch: diagonalizable and defective matrices
Generalized Eigenvalue Problem
[Figure: spring-mass vibration system with stiffnesses k and masses m.]
Natural frequencies and corresponding modes?
Assuming a vibration mode $x = \phi\sin(\omega t + \alpha)$,
$$(-\omega^2 M + K)\,\phi\sin(\omega t + \alpha) = 0 \;\Rightarrow\; K\phi = \omega^2 M\phi.$$
Reduce it as $M^{-1}K\phi = \omega^2\phi$? Why is it not a good idea?
$K$ symmetric, $M$ symmetric and positive definite!!
With $M = LL^T$, $\psi = L^T\phi$ and $\tilde{K} = L^{-1}KL^{-T}$,
$$\tilde{K}\psi = \omega^2\psi,$$
a standard symmetric eigenvalue problem.
Triangular and block triangular matrices Eigenvalues of a triangular matrix are its diagonal entries.
Eigenvalues of a block triangular matrix are the collection of the eigenvalues of its diagonal blocks. Take
$$H = \begin{bmatrix} A & B \\ 0 & C \end{bmatrix}, \quad A \in R^{r \times r}, \ C \in R^{s \times s}.$$
If $Av = \lambda v$, then
$$H\begin{bmatrix} v \\ 0 \end{bmatrix} = \begin{bmatrix} A & B \\ 0 & C \end{bmatrix}\begin{bmatrix} v \\ 0 \end{bmatrix} = \begin{bmatrix} Av \\ 0 \end{bmatrix} = \begin{bmatrix} \lambda v \\ 0 \end{bmatrix} = \lambda\begin{bmatrix} v \\ 0 \end{bmatrix}.$$
Shift theorem: eigenvectors of $A + \mu I$ are the same as those of $A$; eigenvalues are shifted by $\mu$.
Deflation: for a symmetric matrix $A$, with mutually orthogonal eigenvectors, having $(\lambda_j, v_j)$ as an eigenpair,
$$B = A - \lambda_j \frac{v_j v_j^T}{v_j^T v_j}$$
has the same eigenstructure as A , except that the eigenvalue corresponding to vj is zero.
Eigenspace: if $v_1, v_2, \ldots, v_k$ are eigenvectors of $A$ corresponding to the same eigenvalue $\lambda$, then the eigenspace is $\langle v_1, v_2, \ldots, v_k \rangle$.
Similarity transformation: $B = S^{-1}AS$ is the same transformation expressed in a new basis.
$$\det(\lambda I - B) = \det S^{-1}\,\det(\lambda I - A)\,\det S = \det(\lambda I - A).$$
Same characteristic polynomial! Eigenvalues are the property of a linear transformation, not of the basis.
An eigenvector $v$ of $A$ transforms to $S^{-1}v$, as the corresponding eigenvector of $B$.
Power Method
Consider matrix $A$ with eigenvalues ordered as $|\lambda_1| > |\lambda_2| \ge \cdots \ge |\lambda_n|$ and corresponding eigenvectors $v_1, v_2, \ldots, v_n$. Expand a starting vector as
$$x = \alpha_1 v_1 + \alpha_2 v_2 + \alpha_3 v_3 + \cdots + \alpha_n v_n.$$
Then
$$A^k x = \alpha_1\lambda_1^k\left[ v_1 + \sum_{i=2}^{n} \frac{\alpha_i}{\alpha_1}\left(\frac{\lambda_i}{\lambda_1}\right)^k v_i \right],$$
so repeated multiplication by $A$ aligns the iterate with the dominant eigenvector, and component-wise ratios of successive iterates approach $\lambda_1$.
At convergence, the $n$ ratios will be the same.
Question: How to find the least-magnitude eigenvalue?
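A minimal sketch of the power method (the test matrix is illustrative; a Rayleigh-quotient estimate is used here in place of the component-ratio test on the slide):

```python
import numpy as np

def power_method(A, tol=1e-12, max_iter=1000):
    """Estimate the largest-magnitude eigenvalue and its eigenvector."""
    x = np.random.default_rng(0).standard_normal(A.shape[0])
    lam = 0.0
    for _ in range(max_iter):
        y = A @ x
        lam_new = y @ x / (x @ x)           # Rayleigh-quotient estimate
        y = y / np.linalg.norm(y)
        if abs(lam_new - lam) < tol:
            return lam_new, y
        x, lam = y, lam_new
    return lam, x

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
lam, v = power_method(A)
print(lam)                                  # close to the dominant eigenvalue
print(np.linalg.norm(A @ v - lam * v))      # small residual
```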
Points to note
Meaning and context of the algebraic eigenvalue problem
Fundamental deductions and vital relationships
Power method as an inexpensive procedure to determine extremal-magnitude eigenvalues
Outline
Diagonalization and Similarity Transformations
Diagonalizability
Canonical Forms
Symmetric Matrices
Similarity Transformations
Diagonalizability
$$AS = A[v_1\ v_2\ \cdots\ v_n] = [\lambda_1 v_1\ \lambda_2 v_2\ \cdots\ \lambda_n v_n] = [v_1\ v_2\ \cdots\ v_n]\begin{bmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_n \end{bmatrix} = S\Lambda,$$
and hence
$$S^{-1}AS = \Lambda \quad \text{and} \quad A = S\Lambda S^{-1}.$$
Diagonalization: The process of changing the basis of a linear transformation so that its new matrix representation is diagonal, i.e. so that it is decoupled among its coordinates.
Diagonalizability
Diagonalizability:
A matrix having a complete set of n linearly independent eigenvectors is diagonalizable. Existence of a complete set of eigenvectors: A diagonalizable matrix possesses a complete set of n linearly independent eigenvectors.
All distinct eigenvalues implies diagonalizability. But, diagonalizability does not imply distinct eigenvalues! However, a lack of diagonalizability certainly implies a multiplicity mismatch.
Canonical Forms
Jordan canonical form (JCF)
Diagonal (canonical) form
Triangular (canonical) form
Canonical Forms
Jordan canonical form:
$$J = \begin{bmatrix} J_1 & & \\ & J_2 & \\ & & \ddots \end{bmatrix}, \quad \text{with Jordan blocks } J_r = \begin{bmatrix} \lambda & 1 & & \\ & \lambda & \ddots & \\ & & \ddots & 1 \\ & & & \lambda \end{bmatrix}.$$
Canonical Forms
Equating blocks as $AS_r = S_r J_r$ gives
$$[Av\ \ Aw_2\ \ Aw_3\ \cdots] = [v\ \ w_2\ \ w_3\ \cdots]\begin{bmatrix} \lambda & 1 & & \\ & \lambda & 1 & \\ & & \lambda & \ddots \\ & & & \ddots \end{bmatrix},$$
i.e.
$$Av = \lambda v, \quad (A - \lambda I)w_2 = v \quad \text{and} \quad (A - \lambda I)w_3 = w_2, \ \ldots;$$
hence
$$(A - \lambda I)^2 w_2 = 0 \quad \text{and} \quad (A - \lambda I)^3 w_3 = 0, \ \ldots$$
($w_2, w_3, \ldots$ are generalized eigenvectors).
Canonical Forms
Diagonal form
Special case of the Jordan form, with each Jordan block of $1 \times 1$ size
Matrix is diagonalizable
Similarity transformation matrix $S$ is composed of $n$ linearly independent eigenvectors as columns
None of the eigenvectors admits any generalized eigenvector
Equal geometric and algebraic multiplicities for every eigenvalue
Canonical Forms
Triangular form
Triangularization: change of basis of a linear transformation so as to get its matrix in the triangular form.
For real eigenvalues, always possible to accomplish with an orthogonal similarity transformation
Always possible to accomplish with a unitary similarity transformation, with complex arithmetic
Determination of eigenvalues
Canonical Forms
Forms that can be obtained with a pre-determined number of arithmetic operations (without iteration):
Tridiagonal form: non-zero entries only in the (leading) diagonal, sub-diagonal and super-diagonal; useful for symmetric matrices.
Hessenberg form: a slight generalization of a triangular matrix, with non-zero entries also allowed in one sub-diagonal:
$$H_u = \begin{bmatrix} * & * & * & \cdots & * \\ * & * & * & \cdots & * \\ & * & * & \cdots & * \\ & & \ddots & \ddots & \vdots \\ & & & * & * \end{bmatrix}.$$
Note: Tridiagonal and Hessenberg forms do not fall in the category of canonical forms.
Symmetric Matrices
A real symmetric matrix has all real eigenvalues and is diagonalizable through an orthogonal similarity transformation.
Eigenvalues must be real.
A complete set of eigenvectors exists.
Eigenvectors corresponding to distinct eigenvalues are necessarily orthogonal.
Corresponding to repeated eigenvalues, orthogonal eigenvectors are available.
In all cases of a symmetric matrix, we can form an orthogonal matrix $V$, such that $V^T A V = \Lambda$ is a real diagonal matrix. Further, $A = V\Lambda V^T$.
Similar results for complex Hermitian matrices.
Symmetric Matrices
Proposition: Eigenvalues of a real symmetric matrix must be real.
Take $A \in R^{n \times n}$ such that $A = A^T$, with eigenvalue $\lambda = h + ik$. Since $\lambda I - A$ is singular, so is
$$B = (\lambda I - A)(\bar{\lambda}I - A) = (hI - A + ikI)(hI - A - ikI) = (hI - A)^2 + k^2 I.$$
For some $x \neq 0$, $Bx = 0$, and
$$x^T B x = 0 \;\Rightarrow\; x^T(hI - A)^T(hI - A)x + k^2 x^T x = 0.$$
Thus, $\|(hI - A)x\|^2 + \|kx\|^2 = 0 \;\Rightarrow\; k = 0$ and $\lambda = h$.
Symmetric Matrices
Proposition: A symmetric matrix possesses a complete set of eigenvectors.
Consider a repeated real eigenvalue $\lambda$ of $A$ and examine its Jordan block(s). Suppose $Av = \lambda v$. The first generalized eigenvector $w$ satisfies $(A - \lambda I)w = v$, giving
$$v^T(A - \lambda I)w = v^T v \;\Rightarrow\; v^T A^T w - \lambda v^T w = v^T v \;\Rightarrow\; (Av)^T w - \lambda v^T w = \|v\|^2 \;\Rightarrow\; \|v\|^2 = 0,$$
which is absurd. An eigenvector will not admit a generalized eigenvector. All Jordan blocks will be of $1 \times 1$ size.
Symmetric Matrices
Proposition: Eigenvectors of a symmetric matrix corresponding to distinct eigenvalues are necessarily orthogonal.
Take two eigenpairs $(\lambda_1, v_1)$ and $(\lambda_2, v_2)$, with $\lambda_1 \neq \lambda_2$.
$$v_1^T A v_2 = v_1^T(\lambda_2 v_2) = \lambda_2 v_1^T v_2, \qquad v_1^T A v_2 = v_1^T A^T v_2 = (Av_1)^T v_2 = (\lambda_1 v_1)^T v_2 = \lambda_1 v_1^T v_2.$$
From the two expressions, $(\lambda_1 - \lambda_2)\,v_1^T v_2 = 0 \;\Rightarrow\; v_1^T v_2 = 0$.
Proposition: Corresponding to a repeated eigenvalue of a symmetric matrix, an appropriate number of orthogonal eigenvectors can be selected.
If $\lambda_1 = \lambda_2$, then the entire subspace $\langle v_1, v_2 \rangle$ is an eigenspace. Select any two mutually orthogonal eigenvectors for the basis.
Symmetric Matrices
$$A = V\Lambda V^T = [v_1\ v_2\ \cdots\ v_n]\begin{bmatrix} \lambda_1 & & \\ & \ddots & \\ & & \lambda_n \end{bmatrix}\begin{bmatrix} v_1^T \\ v_2^T \\ \vdots \\ v_n^T \end{bmatrix} = \lambda_1 v_1 v_1^T + \lambda_2 v_2 v_2^T + \cdots + \lambda_n v_n v_n^T = \sum_{i=1}^{n} \lambda_i v_i v_i^T.$$
Reconstruction from a sum of rank-one components
Efficient storage with only the large eigenvalues and corresponding eigenvectors
Deflation technique
Stable and effective methods: easier to solve the eigenvalue problem
Similarity Transformations
[Diagram: reduction routes via similarity transformations — a general matrix towards Hessenberg and triangular forms; a symmetric matrix towards tridiagonal form.]
How to find suitable similarity transformations?
1. rotation
2. reflection
3. matrix decomposition or factorization
4. elementary transformation
Points to note
Generally possible reduction: Jordan canonical form
Condition of diagonalizability and the diagonal form
Possible with orthogonal similarity transformations: triangular form
Useful non-canonical forms: tridiagonal and Hessenberg
Caution: Each step in this context is to be effected through similarity transformations. Necessary Exercises: 1,2,4
Outline
Jacobi and Givens Rotation Methods (for symmetric matrices)
Plane Rotations
Jacobi Rotation Method
Givens Rotation Method
Plane Rotations
[Figure: rotation of coordinate axes in a plane; a point P(x, y) referred to the original axes OXY and to the rotated axes OX'Y'.]
Plane Rotations
Orthogonal change of basis:
$$r = \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}\begin{bmatrix} x' \\ y' \end{bmatrix} = P\,r'.$$
In three-dimensional (ambient) space,
$$P_{xy} = \begin{bmatrix} \cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{bmatrix}, \qquad P_{xz} = \begin{bmatrix} \cos\theta & 0 & \sin\theta \\ 0 & 1 & 0 \\ -\sin\theta & 0 & \cos\theta \end{bmatrix}, \quad \text{etc.}$$
Plane Rotations
Rotation in the $(x_p, x_q)$ plane is effected by the orthogonal matrix $P_{pq}$, which equals the identity except for the four entries in rows and columns $p$ and $q$, containing $\cos\theta$ and $\pm\sin\theta$. Matrix $A$ is transformed as
$$A' = P_{pq}^T\, A\, P_{pq},$$
only the $p$-th and $q$-th rows and columns being affected.
Jacobi Rotation Method
In a Jacobi rotation, the angle $\theta$ is chosen so that the transformed entry $a'_{pq} = 0$; this requires
$$\cot 2\theta = \frac{a_{qq} - a_{pp}}{2\,a_{pq}}.$$
Solve this equation for $\theta$.
Jacobi rotation transformations $P_{12}, P_{13}, \ldots, P_{1n}; P_{23}, \ldots, P_{2n}; \ldots; P_{n-1,n}$ complete a full sweep.
Note: The resulting matrix is far from diagonal!
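A minimal sketch of cyclic Jacobi sweeps (the test matrix is illustrative; the angle is obtained from the equivalent condition $\tan 2\theta = 2a_{pq}/(a_{pp}-a_{qq})$):

```python
import numpy as np

def jacobi_eigen(A, sweeps=10, tol=1e-12):
    """Cyclic Jacobi sweeps for a symmetric matrix; returns (eigenvalues, V)."""
    A = A.astype(float).copy()
    n = A.shape[0]
    V = np.eye(n)                           # accumulates the rotations
    for _ in range(sweeps):
        off = np.sqrt(np.sum(A**2) - np.sum(np.diag(A)**2))
        if off < tol:
            break
        for p in range(n - 1):
            for q in range(p + 1, n):
                if abs(A[p, q]) < 1e-15:
                    continue
                # Choose the angle so that the transformed a_pq becomes zero.
                theta = 0.5 * np.arctan2(2.0 * A[p, q], A[p, p] - A[q, q])
                c, s = np.cos(theta), np.sin(theta)
                P = np.eye(n)
                P[p, p] = P[q, q] = c
                P[p, q], P[q, p] = -s, s
                A = P.T @ A @ P
                V = V @ P
    return np.diag(A), V

A = np.array([[4.0, 1.0, 2.0],
              [1.0, 3.0, 0.0],
              [2.0, 0.0, 5.0]])
w, V = jacobi_eigen(A)
print(np.sort(w))
print(np.sort(np.linalg.eigvalsh(A)))       # cross-check
```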
Convergence of the Jacobi method: consider the sum of squares of the off-diagonal entries,
$$S = \sum_{r \neq s} a_{rs}^2.$$
Under a Jacobi rotation in the $(p, q)$ plane, the sums $a_{rp}^2 + a_{rq}^2$ (for $p \neq r \neq q$) are preserved, so $S$ and $S'$ differ only through the $(p, q)$ entry:
$$\Delta S = S' - S = -2\,a_{pq}^2 \le 0,$$
and $S \to 0$ as the sweeps proceed.
Givens Rotation Method
Sweep $P_{23}, P_{24}, \ldots, P_{2n}; P_{34}, \ldots, P_{3n}; \ldots; P_{n-1,n}$ to annihilate $a_{13}, a_{14}, \ldots, a_{1n}; a_{24}, \ldots, a_{2n}; \ldots; a_{n-2,n}$, giving a symmetric tridiagonal matrix.
How do eigenvectors transform through Jacobi/Givens rotation steps? The product matrix $P^{(1)}P^{(2)}\cdots$ gives the basis:
$$\bar{A} = \cdots {P^{(2)}}^T {P^{(1)}}^T A\, P^{(1)} P^{(2)} \cdots$$
To record it, initialize $V$ by the identity and keep multiplying new rotation matrices on the right side.
What happens to intermediate zeros? What do we get after a complete sweep? How many sweeps are to be applied? What is the intended final form of the matrix? How is the size of the matrix relevant in the choice of the method?
The Householder method accomplishes tridiagonalization more efficiently than the Givens rotation method. But, with a half-processed matrix, there come situations in which the Givens rotation method turns out to be more efficient!
Points to note
Plane rotations provide an orthogonal change of basis that can be used for the diagonalization of matrices. For small matrices (say $4 \le n \le 8$), Jacobi rotation sweeps are competitive enough for diagonalization up to a reasonable tolerance. For large matrices, one sweep of Givens rotations can be applied to get a symmetric tridiagonal matrix, for efficient further processing.
Outline
Householder Transformation and Tridiagonal Matrices
Householder Reflection Transformation
Householder Method
Eigenvalues of Symmetric Tridiagonal Matrices
Householder Reflection Transformation
[Figure: vectors u and v of equal length, with w along u − v normal to the plane of reflection that maps u to v.]
Consider $u, v \in R^k$ with $\|u\| = \|v\|$, and let
$$w = \frac{u - v}{\|u - v\|}.$$
The Householder reflection matrix $H_k = I_k - 2ww^T$ is symmetric and orthogonal.
For any vector $x$ orthogonal to $w$, $H_k x = (I_k - 2ww^T)x = x$, and
$$H_k w = (I_k - 2ww^T)w = -w.$$
Hence $H_k u = v$.
Householder Method
Consider an $n \times n$ symmetric matrix $A$. Let $u = [a_{21}\ a_{31}\ \cdots\ a_{n1}]^T \in R^{n-1}$ and $v = \|u\|\,e_1 \in R^{n-1}$. Construct
$$P_1 = \begin{bmatrix} 1 & 0 \\ 0 & H_{n-1} \end{bmatrix}$$
and operate as
$$A^{(1)} = P_1 A P_1 = \begin{bmatrix} 1 & 0 \\ 0 & H_{n-1} \end{bmatrix}\begin{bmatrix} a_{11} & u^T \\ u & A_1 \end{bmatrix}\begin{bmatrix} 1 & 0 \\ 0 & H_{n-1} \end{bmatrix} = \begin{bmatrix} d_1 & e_2 & 0 \\ e_2 & d_2 & u_2^T \\ 0 & u_2 & A_2 \end{bmatrix}.$$
Householder Method
Next, with $v_2 = \|u_2\|\,e_1$, we form
$$P_2 = \begin{bmatrix} I_2 & 0 \\ 0 & H_{n-2} \end{bmatrix}$$
and operate as $A^{(2)} = P_2 A^{(1)} P_2$. After $j$ steps,
$$A^{(j)} = \begin{bmatrix} d_1 & e_2 & & & \\ e_2 & d_2 & \ddots & & \\ & \ddots & \ddots & e_{j+1} & \\ & & e_{j+1} & d_{j+1} & u_{j+1}^T \\ & & & u_{j+1} & A_{j+1} \end{bmatrix}.$$
In $n-2$ steps, with $P = P_1 P_2 P_3 \cdots P_{n-2}$, $A^{(n-2)} = P^T A P$ is symmetric tridiagonal.
Eigenvalues of Symmetric Tridiagonal Matrices
$$T = \begin{bmatrix} d_1 & e_2 & & & \\ e_2 & d_2 & e_3 & & \\ & \ddots & \ddots & \ddots & \\ & & e_{n-1} & d_{n-1} & e_n \\ & & & e_n & d_n \end{bmatrix}$$
Characteristic polynomial:
$$p(\lambda) = \det(\lambda I - T) = \begin{vmatrix} \lambda - d_1 & -e_2 & & & \\ -e_2 & \lambda - d_2 & \ddots & & \\ & \ddots & \ddots & -e_n \\ & & -e_n & \lambda - d_n \end{vmatrix}.$$
With $p_0(\lambda) = 1$ and $p_1(\lambda) = \lambda - d_1$, the leading minors satisfy the recursion
$$p_k(\lambda) = (\lambda - d_k)\,p_{k-1}(\lambda) - e_k^2\,p_{k-2}(\lambda),$$
and $\{p_0(\lambda), p_1(\lambda), \ldots, p_n(\lambda)\}$ form a Sturmian sequence if $e_j \neq 0\ \forall j$.
Question: What if $e_j = 0$ for some $j$?! Answer: That is good news. Split the matrix.
Sturmian sequence property of $P(\lambda)$ with $e_j \neq 0$:
Interlacing property: roots of $p_{k+1}(\lambda)$ interlace the roots of $p_k(\lambda)$. That is, if the roots of $p_{k+1}(\lambda)$ are $\lambda_1 > \lambda_2 > \cdots > \lambda_{k+1}$ and those of $p_k(\lambda)$ are $\mu_1 > \mu_2 > \cdots > \mu_k$, then
$$\lambda_1 > \mu_1 > \lambda_2 > \mu_2 > \cdots > \lambda_k > \mu_k > \lambda_{k+1}.$$
This property leads to a convenient procedure.
Proof: $p_1(\lambda)$ has a single root, $d_1$. Now
$$p_2(d_1) = -e_2^2 < 0,$$
and since $p_2(\pm\infty) = +\infty > 0$, the roots $t_1$ and $t_2$ of $p_2(\lambda)$ are separated as $\infty > t_1 > d_1 > t_2 > -\infty$. The statement is true for $k = 1$.
Next, we assume that the statement is true for $k = i$.
Roots of $p_i(\lambda)$: $\mu_1 > \mu_2 > \cdots > \mu_i$.
Roots of $p_{i+1}(\lambda)$: $\lambda_1 > \lambda_2 > \cdots > \lambda_i > \lambda_{i+1}$.
Roots of $p_{i+2}(\lambda)$: $\nu_1 > \nu_2 > \cdots > \nu_i > \nu_{i+1} > \nu_{i+2}$.
Induction hypothesis: $\lambda_1 > \mu_1 > \lambda_2 > \mu_2 > \cdots > \lambda_i > \mu_i > \lambda_{i+1}$.
[Figure: (a) roots of $p_{i+1}(\lambda)$ and $p_{i+2}(\lambda)$ on the real line; (b) alternating signs (+ve/−ve) of the polynomials over the sub-intervals between consecutive roots.]
Since $\lambda_1 > \mu_1$, $p_i(\lambda_1)$ is of the same sign as $p_i(\infty)$, i.e. positive. Therefore, $p_{i+2}(\lambda_1) = -e_{i+2}^2\,p_i(\lambda_1)$ is negative. But $p_{i+2}(\infty)$ is clearly positive. Hence, $\nu_1 \in (\lambda_1, \infty)$. Similarly, $\nu_{i+2} \in (-\infty, \lambda_{i+1})$.
Question: Where are the rest of the $i$ roots of $p_{i+2}(\lambda)$?
$$p_{i+2}(\lambda_{j+1}) = -e_{i+2}^2\,p_i(\lambda_{j+1}), \qquad p_{i+2}(\lambda_j) = (\lambda_j - d_{i+2})\,p_{i+1}(\lambda_j) - e_{i+2}^2\,p_i(\lambda_j) = -e_{i+2}^2\,p_i(\lambda_j).$$
Refer to the figure.
Over $[\lambda_{i+1}, \lambda_1]$, $p_{i+2}(\lambda)$ changes sign over each sub-interval $[\lambda_{j+1}, \lambda_j]$, along with $p_i(\lambda)$, maintaining opposite signs at each $\lambda_j$. Conclusion: $p_{i+2}(\lambda)$ has exactly one root in $(\lambda_{j+1}, \lambda_j)$.
Examine the sequence $P(w) = \{p_0(w), p_1(w), p_2(w), \ldots, p_n(w)\}$. If $p_k(w)$ and $p_{k+1}(w)$ have opposite signs, then $p_{k+1}(\lambda)$ has one root more than $p_k(\lambda)$ in the interval $(w, \infty)$. Number of roots of $p_n(\lambda)$ above $w$ = number of sign changes in the sequence $P(w)$.
Consequence: the number of roots of $p_n(\lambda)$ in $(a, b)$ = difference between the numbers of sign changes in $P(a)$ and $P(b)$.
Bisection method: examine the sequence at $\frac{a+b}{2}$.
Separate roots, bracket each of them and then squeeze the interval! Any way to start with an interval to include all eigenvalues?
$$|\lambda_i| \le \text{bnd} = \max_{1 \le j \le n} \{|e_j| + |d_j| + |e_{j+1}|\}$$
(with the non-existent $e_1$ and $e_{n+1}$ taken as zero).
Algorithm
Identify the interval $[a, b]$ of interest. For a degenerate case (some $e_j = 0$), split the given matrix. For each of the non-degenerate matrices:
by repeated use of bisection and study of the sequence $P(\lambda)$, bracket individual eigenvalues within small sub-intervals, and
by further use of the bisection method (or a substitute) within each such sub-interval, determine the individual eigenvalues to the desired accuracy.
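A minimal sketch of the Sturm-sequence count and bisection for a symmetric tridiagonal matrix (the example data are illustrative):

```python
import numpy as np

def sign_changes(d, e, w):
    """Number of eigenvalues greater than w (d: diagonal; e: sub-diagonal, non-zero)."""
    p_prev, p = 1.0, w - d[0]
    count = 1 if (p_prev > 0) != (p > 0) else 0
    for k in range(1, len(d)):
        p_next = (w - d[k]) * p - e[k-1]**2 * p_prev
        if (p > 0) != (p_next > 0):
            count += 1
        p_prev, p = p, p_next
    return count

def kth_eigenvalue(d, e, k, tol=1e-12):
    """k-th largest eigenvalue (k = 1 is the largest) by bisection."""
    bnd = max(abs(d[i]) + (abs(e[i-1]) if i > 0 else 0.0)
              + (abs(e[i]) if i < len(e) else 0.0) for i in range(len(d)))
    a, b = -bnd, bnd
    while b - a > tol:
        m = 0.5 * (a + b)
        if sign_changes(d, e, m) >= k:
            a = m                           # at least k eigenvalues lie above m
        else:
            b = m
    return 0.5 * (a + b)

d = np.array([2.0, 3.0, 4.0, 1.0])
e = np.array([1.0, 0.5, 0.7])               # sub-diagonal entries e_2 .. e_n
T = np.diag(d) + np.diag(e, 1) + np.diag(e, -1)
print(kth_eigenvalue(d, e, 1))
print(np.max(np.linalg.eigvalsh(T)))         # cross-check
```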
Points to note
A Householder matrix is symmetric and orthogonal. It effects a reflection transformation. A sequence of Householder transformations can be used to convert a symmetric matrix into symmetric tridiagonal form. Eigenvalues of the leading square sub-matrices of a symmetric tridiagonal matrix exhibit a useful interlacing structure. This property can be used to separate and bracket eigenvalues. The method of bisection is useful in the separation as well as the subsequent determination of the eigenvalues.
Outline
QR Decomposition Method
QR Decomposition
QR Iterations
Conceptual Basis of QR Method*
QR Algorithm with Shift*
QR Decomposition
Decomposition (or factorization) $A = QR$ into two factors, orthogonal $Q$ and upper-triangular $R$:
(a) It always exists.
(b) Performing this decomposition is pretty straightforward.
(c) It has a number of properties useful in the solution of the eigenvalue problem.
$$[a_1\ \cdots\ a_n] = [q_1\ \cdots\ q_n]\begin{bmatrix} r_{11} & \cdots & r_{1n} \\ & \ddots & \vdots \\ & & r_{nn} \end{bmatrix}$$
A simple method based on Gram-Schmidt orthogonalization: considering the columnwise equality $a_j = \sum_{i=1}^{j} r_{ij}q_i$, for $j = 1, 2, 3, \ldots, n$,
$$r_{ij} = q_i^T a_j \ (i < j), \qquad \tilde{a}_j = a_j - \sum_{i=1}^{j-1} r_{ij}q_i, \qquad r_{jj} = \|\tilde{a}_j\|, \qquad q_j = \tilde{a}_j / r_{jj}.$$
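A minimal sketch of this Gram-Schmidt construction (the rectangular test matrix is illustrative and assumed to have full column rank):

```python
import numpy as np

def gram_schmidt_qr(A):
    """Classical Gram-Schmidt QR of a full-column-rank matrix."""
    m, n = A.shape
    Q = np.zeros((m, n))
    R = np.zeros((n, n))
    for j in range(n):
        a_tilde = A[:, j].copy()
        for i in range(j):
            R[i, j] = Q[:, i] @ A[:, j]
            a_tilde -= R[i, j] * Q[:, i]
        R[j, j] = np.linalg.norm(a_tilde)
        Q[:, j] = a_tilde / R[j, j]
    return Q, R

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0],
              [1.0, 0.0, 1.0]])
Q, R = gram_schmidt_qr(A)
print(np.allclose(Q @ R, A))
print(np.allclose(Q.T @ Q, np.eye(3)))
```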
QR Decomposition
Practical method: apply Householder reflections column by column, each mapping the relevant sub-column $u_0$ to $v_0 = \|u_0\|e_1$ (using $w = \frac{u_0 - v_0}{\|u_0 - v_0\|}$).
With $Q^T$ as the product of these reflections, we have $Q^T A = R \;\Rightarrow\; A = QR$.
QR Decomposition
Alternative method useful for tridiagonal and Hessenberg matrices: One-sided plane rotations
rotations $P_{12}$, $P_{23}$, etc., to annihilate $a_{21}$, $a_{32}$, etc., in that sequence.
Application in the solution of a linear system: the $Q$ and $R$ factors of a matrix $A$ come in handy in the solution of $Ax = b$:
$$QRx = b \;\Rightarrow\; Rx = Q^T b,$$
which needs only a sequence of back-substitutions.
QR Iterations
Multiplying the $Q$ and $R$ factors in reverse,
$$A' = RQ = Q^T A Q,$$
an orthogonal similarity transformation.
1. If $A$ is symmetric, then so is $A'$.
2. If $A$ is in upper Hessenberg form, then so is $A'$.
3. If $A$ is symmetric tridiagonal, then so is $A'$.
Complexity of a QR iteration: $O(n)$ for a symmetric tridiagonal matrix, $O(n^2)$ operations for an upper Hessenberg matrix and $O(n^3)$ for the general case.
Algorithm: set $A_1 = A$ and, for $k = 1, 2, 3, \ldots$,
decompose $A_k = Q_k R_k$, then reassemble $A_{k+1} = R_k Q_k$.
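A minimal sketch of these iterations using NumPy's QR routine (unshifted, on an illustrative symmetric matrix, so the iterates approach diagonal form):

```python
import numpy as np

def qr_iterations(A, iters=200):
    """Repeated QR iterations A_{k+1} = R_k Q_k (no shift, for illustration)."""
    Ak = A.astype(float).copy()
    for _ in range(iters):
        Q, R = np.linalg.qr(Ak)
        Ak = R @ Q                          # orthogonal similarity transformation
    return Ak

A = np.array([[4.0, 1.0, 2.0],
              [1.0, 3.0, 0.0],
              [2.0, 0.0, 5.0]])
Ak = qr_iterations(A)
print(np.round(Ak, 6))                      # nearly diagonal (A is symmetric)
print(np.sort(np.diag(Ak)))
print(np.sort(np.linalg.eigvalsh(A)))       # eigenvalues for comparison
```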
QR Iterations
The iterations converge to the quasi-upper-triangular form
$$\begin{bmatrix} \lambda_1 & * & \cdots & & * \\ & \lambda_2 & & & \\ & & \ddots & & \vdots \\ & & & B_k & \\ & & & & \ddots \end{bmatrix}$$
with $|\lambda_1| > |\lambda_2| > \cdots$.
Diagonal blocks $B_k$ correspond to eigenspaces of equal/close (magnitude) eigenvalues. $2 \times 2$ diagonal blocks often correspond to pairs of complex eigenvalues (for non-symmetric matrices).
For symmetric matrices, the quasi-upper-triangular form reduces to a quasi-diagonal form.
Conceptual Basis of QR Method*
The QR decomposition algorithm operates on the basis of the relative magnitudes of eigenvalues and segregates subspaces.
With $k \to \infty$, $A_k\,\mathrm{Range}\{e_1\} = \mathrm{Range}\{q_1\} \to \mathrm{Range}\{v_1\}$, and
$$(a_1)_k \equiv Q_k^T A q_1 \to \lambda_1 Q_k^T q_1 = \lambda_1 e_1.$$
Further, $A_k\,\mathrm{Range}\{e_1, e_2\} = \mathrm{Range}\{q_1, q_2\} \to \mathrm{Range}\{v_1, v_2\}$, and $(a_2)_k \equiv Q_k^T A q_2$ tends to a vector with non-zero entries only in its first two components (the second being $\lambda_2$, the first proportional to $\lambda_1 - \lambda_2$). And so on.
QR Algorithm with Shift*
With a shift $\mu_k$ applied at each iteration,
$$\bar{A}_k = A_k - \mu_k I; \qquad \bar{A}_k = Q_k R_k, \quad \bar{A}_{k+1} = R_k Q_k; \qquad A_{k+1} = \bar{A}_{k+1} + \mu_k I.$$
The resulting transformation is
$$A_{k+1} = R_k Q_k + \mu_k I = Q_k^T \bar{A}_k Q_k + \mu_k I = Q_k^T (A_k - \mu_k I) Q_k + \mu_k I = Q_k^T A_k Q_k.$$
Points to note
QR decomposition can be effected on any square matrix. Practical methods of QR decomposition use Householder transformations or Givens rotations. A QR iteration effects a similarity transformation on a matrix, preserving symmetry, Hessenberg structure and also a symmetric tridiagonal form. A sequence of QR iterations converges to an almost upper-triangular form. Operations on symmetric tridiagonal and Hessenberg forms are computationally efficient. QR iterations tend to order subspaces according to the relative magnitudes of eigenvalues. Eigenvalue shifting is useful as an expediting strategy.
Outline
Eigenvalue Problem of General Matrices
Introductory Remarks
Reduction to Hessenberg Form*
QR Algorithm on Hessenberg Matrices*
Inverse Iteration
Recommendation
Introductory Remarks
A general (non-symmetric) matrix may not be diagonalizable. We attempt to triangularize it. With real arithmetic, $2 \times 2$ diagonal blocks are inevitable, signifying complex pairs of eigenvalues. Higher computational complexity, slow convergence and lack of numerical stability.
A non-symmetric matrix is usually unbalanced and is prone to higher round-off errors. Balancing as a pre-processing step: multiplication of a row and division of the corresponding column by the same number, ensuring similarity. Note: a balanced matrix may get unbalanced again through similarity transformations that are not orthogonal!
Reduction to Hessenberg Form*
2. a sequence of $n-2$ steps of Householder transformations, and
3. a cycle of coordinated Gaussian elimination.
Method based on Gaussian elimination or elementary transformations: the pre-multiplying matrix corresponding to the elementary row transformation and the post-multiplying matrix corresponding to the matching column transformation must be inverses of each other.
Two kinds of steps:
Pivoting
Elimination
Elimination: effected through elementary matrices of the form $G_r = \begin{bmatrix} I_r & 0 \\ 0 & \cdot \end{bmatrix}$.
QR Algorithm on Hessenberg Matrices*
Whenever a sub-diagonal zero appears, the matrix is split into two smaller upper Hessenberg blocks, and they are processed separately, thereby reducing the cost drastically.
Particular cases:
$a_{n-1,n-2} \approx 0$: separately find the eigenvalues $\lambda_{n-1}$ and $\lambda_n$ from $\begin{bmatrix} a_{n-1,n-1} & a_{n-1,n} \\ a_{n,n-1} & a_{n,n} \end{bmatrix}$, and continue with the leading $(n-2) \times (n-2)$ sub-matrix.
Inverse Iteration
Assumption: Matrix $A$ has a complete set of eigenvectors.
$(\lambda_i)_0$: a good estimate of an eigenvalue $\lambda_i$ of $A$. Purpose: to find $\lambda_i$ precisely and also to find $v_i$.
Step: select a random vector $y_0$ (with $\|y_0\| = 1$) and solve
$$[A - (\lambda_i)_0 I]\,y = y_0.$$
Result: $y$ is a good estimate of $v_i$, and
$$(\lambda_i)_1 = (\lambda_i)_0 + \frac{1}{y_0^T y}$$
is an improvement in the estimate of the eigenvalue.
How to establish the result and work out an algorithm?
Inverse Iteration
With $y_0 = \sum_{j=1}^{n} \alpha_j v_j$ and $y = \sum_{j=1}^{n} \beta_j v_j$,
$[A - (\lambda_i)_0 I]\,y = y_0$ gives
$$\sum_{j=1}^{n} \beta_j [A - (\lambda_i)_0 I] v_j = \sum_{j=1}^{n} \alpha_j v_j \;\Rightarrow\; \beta_j[\lambda_j - (\lambda_i)_0] = \alpha_j \;\Rightarrow\; \beta_j = \frac{\alpha_j}{\lambda_j - (\lambda_i)_0}.$$
$\beta_i$ is typically large, and the eigenvector $v_i$ dominates $y$.
$Av_i = \lambda_i v_i$ gives $[A - (\lambda_i)_0 I]v_i = [\lambda_i - (\lambda_i)_0]v_i$. Hence,
$$[\lambda_i - (\lambda_i)_0]\,y \approx [A - (\lambda_i)_0 I]\,y = y_0.$$
Taking the inner product with $y_0$ gives
$$[\lambda_i - (\lambda_i)_0]\,y_0^T y \approx 1 \;\Rightarrow\; \lambda_i \approx (\lambda_i)_0 + \frac{1}{y_0^T y}.$$
Inverse Iteration
Algorithm:
Solve $[A - (\lambda_i)_k I]\,y = y_k$.
Normalize $y_{k+1} = \dfrac{y}{\|y\|}$.
Improve $(\lambda_i)_{k+1} = (\lambda_i)_k + \dfrac{1}{y_k^T y}$.
If $\|y_{k+1} - y_k\| < \epsilon$, terminate.
Important issues:
Update the eigenvalue once in a while, not at every iteration.
Use some acceptable small number as an artificial pivot.
The method may not converge for a defective matrix or for one having complex eigenvalues. Repeated eigenvalues may inhibit the process.
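A minimal sketch of inverse iteration (the test matrix and starting estimate are illustrative; for simplicity this version updates the eigenvalue at every iteration, whereas the slide recommends doing so only occasionally):

```python
import numpy as np

def inverse_iteration(A, mu, iters=10):
    """Refine an eigenvalue estimate mu and find the eigenvector near it."""
    n = A.shape[0]
    y = np.random.default_rng(1).standard_normal(n)
    y /= np.linalg.norm(y)
    lam = mu
    for _ in range(iters):
        z = np.linalg.solve(A - lam * np.eye(n), y)   # near-singular is expected
        lam = lam + 1.0 / (y @ z)                     # eigenvalue improvement
        y = z / np.linalg.norm(z)
    return lam, y

A = np.array([[4.0, 1.0, 2.0],
              [1.0, 3.0, 0.0],
              [2.0, 0.0, 5.0]])
lam, v = inverse_iteration(A, mu=6.0)
print(lam)
print(np.linalg.norm(A @ v - lam * v))      # small residual
```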
Recommendation
[Table/diagram: recommended solution strategies organized by matrix type (symmetric, non-symmetric, general) and size (intermediate, large).]
Points to note
The eigenvalue problem of a non-symmetric matrix is difficult! Balancing and reduction to Hessenberg form are desirable pre-processing steps. The QR decomposition algorithm is typically used for reduction to an upper-triangular form. Use inverse iteration to polish eigenvalues and find eigenvectors. In algebraic eigenvalue problems, different methods or combinations are suitable for different cases, regarding matrix size, symmetry and the requirements.
Outline
Singular Value Decomposition
SVD Theorem and Construction
Properties of SVD
Pseudoinverse and Solution of Linear Systems
Optimality of Pseudoinverse Solution
SVD Algorithm
SVD Theorem and Construction
Eigenvalue problem: $A = U\Lambda V^{-1}$, where $U = V$.
Do not ask for similarity. Focus on the form of the decomposition.
Guaranteed decomposition, with orthogonal $U$, $V$ and non-negative diagonal entries in $\Sigma$, by allowing $U \neq V$: $A = U\Sigma V^T$, such that $U^T A V = \Sigma$.
SVD Theorem: For any real matrix $A \in R^{m \times n}$, there exist orthogonal matrices $U \in R^{m \times m}$ and $V \in R^{n \times n}$ such that $U^T A V = \Sigma \in R^{m \times n}$ is a diagonal matrix, with diagonal entries $\sigma_1, \sigma_2, \ldots \ge 0$, obtained by appending the square diagonal matrix $\mathrm{diag}(\sigma_1, \sigma_2, \ldots, \sigma_p)$ with $(m - p)$ zero rows or $(n - p)$ zero columns, where $p = \min(m, n)$.
Singular values: $\sigma_1, \sigma_2, \ldots, \sigma_p$.
Similar result for complex matrices
$$A^T A = (V\Sigma^T U^T)(U\Sigma V^T) = V\Sigma^T\Sigma V^T = V\Lambda V^T,$$
where $\Lambda = \Sigma^T\Sigma$ is an $n \times n$ diagonal matrix,
$$\Lambda = \begin{bmatrix} \sigma_1^2 & & & \\ & \ddots & & \\ & & \sigma_p^2 & \\ & & & 0 \end{bmatrix}.$$
Determine $V$ and $\Sigma$. Work out $U$ from $AV = U\Sigma$, and we have $A = U\Sigma V^T$.
This provides a proof as well!
1. Column $Av_k = \sigma_k u_k$, with $\sigma_k \neq 0$: determine column $u_k$. The columns developed are bound to be mutually orthonormal! Verify:
$$u_i^T u_j = \left(\frac{1}{\sigma_i}Av_i\right)^T\left(\frac{1}{\sigma_j}Av_j\right) = \delta_{ij}.$$
2. Column $Av_k = \sigma_k u_k$, with $\sigma_k = 0$: $u_k$ is left indeterminate (free).
3. In the case of $m < n$, identically zero columns $Av_k = 0$ for $k > m$: no corresponding columns of $U$ to determine.
4. In the case of $m > n$, there will be $(m - n)$ columns of $U$ left indeterminate.
Extend the columns of $U$ to an orthonormal basis. All three factors in the decomposition are constructed, as desired.
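A small numerical check of this construction (the rank-deficient test matrix is illustrative), using NumPy's SVD and the eigenvalue relation $A^T A = V\Lambda V^T$ stated above:

```python
import numpy as np

# SVD of a rectangular matrix (rank 1, m = 2 < n = 3; example data).
A = np.array([[1.0, 2.0, 0.0],
              [2.0, 4.0, 0.0]])

U, s, Vt = np.linalg.svd(A)
Sigma = np.zeros(A.shape)
np.fill_diagonal(Sigma, s)
print(np.allclose(U @ Sigma @ Vt, A))        # A = U Sigma V^T

# Columns of V are eigenvectors of A^T A; singular values are square roots
# of its (non-negative) eigenvalues.
lam, V = np.linalg.eigh(A.T @ A)
print(np.allclose(np.sort(s**2), np.sort(lam)[-len(s):]))
```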
Properties of SVD
The decomposition is non-unique up to
(a) the same permutations of columns of U, columns of V and diagonal elements of Σ;
(b) the same orthonormal linear combinations among columns of U and columns of V, corresponding to equal singular values; and
(c) arbitrary orthonormal linear combinations among columns of U or columns of V, corresponding to zero or non-existent singular values.
Ordering of the singular values: σ_1 ≥ σ_2 ≥ ... ≥ σ_r > 0, and σ_{r+1} = σ_{r+2} = ... = σ_p = 0.
Rank(A) = Rank(Σ) = r: the rank of a matrix is the same as the number of its non-zero singular values.
Singular Value Decomposition SVD Theorem and Construction Properties of SVD Pseudoinverse and Solution of Linear Systems Optimality of Pseudoinverse Solution SVD Algorithm
138,
Properties of SVD
Ax = U Σ V^T x = U Σ y = [u_1 ... u_r u_{r+1} ... u_m] [σ_1 y_1 ... σ_r y_r 0 ... 0]^T = σ_1 y_1 u_1 + σ_2 y_2 u_2 + ... + σ_r y_r u_r
has non-zero components along only the first r columns of U.
U gives an orthonormal basis for the co-domain such that Range(A) = < u_1, u_2, ..., u_r >.
With V^T x = y, i.e. v_k^T x = y_k, we have x = y_1 v_1 + y_2 v_2 + ... + y_r v_r + y_{r+1} v_{r+1} + ... + y_n v_n.
V gives an orthonormal basis for the domain such that Null(A) = < v_{r+1}, v_{r+2}, ..., v_n >.
Singular Value Decomposition SVD Theorem and Construction Properties of SVD Pseudoinverse and Solution of Linear Systems Optimality of Pseudoinverse Solution SVD Algorithm
139,
Properties of SVD
Norm: writing x = V c (so that ||x|| = ||c||),
||A|| = max_{x ≠ 0} ||Ax|| / ||x|| = max_c sqrt( Σ_k σ_k² c_k² / Σ_k c_k² ) = σ_1.
For a non-singular (square) matrix,
A^{-1} = (U Σ V^T)^{-1} = V diag(1/σ_1, 1/σ_2, ..., 1/σ_n) U^T, so ||A^{-1}|| = 1/σ_n.
Singular Value Decomposition SVD Theorem and Construction Properties of SVD Pseudoinverse and Solution of Linear Systems Optimality of Pseudoinverse Solution SVD Algorithm
140,
Properties of SVD
Revision of the definitions of norm and condition number: the norm of a matrix is the same as its largest singular value, while its condition number is given by the ratio of the largest singular value to the least.
Arranging singular values in decreasing order, with Rank(A) = r, partition U = [U_r  U_0], V = [V_r  V_0] and Σ = [Σ_r  0; 0  0], where U_0 and V_0 collect the remaining columns. Then
A = U Σ V^T = [U_r  U_0] [Σ_r  0; 0  0] [V_r^T; V_0^T],
or A = U_r Σ_r V_r^T = Σ_{k=1}^r σ_k u_k v_k^T.
141,
Pseudoinverse and Solution of Linear Systems
Generalized inverse: G is called a generalized inverse or g-inverse of A if, for b ∈ Range(A), Gb is a solution of Ax = b.
The Moore-Penrose inverse or the pseudoinverse:
A^# = (U Σ V^T)^# = (V^T)^# Σ^# U^# = V Σ^# U^T.
With Σ = [Σ_r  0; 0  0], Σ^# = [Σ_r^{-1}  0; 0  0]; i.e. Σ^# is diagonal with entries
σ_k^# = 1/σ_k if σ_k ≠ 0, and σ_k^# = 0 if σ_k = 0.
142,
Theorem and Construction Pseudoinverse and Solution of Linear SVD Systems Properties of SVD Pseudoinverse and Solution of Linear Systems
If Ax = b is an under-determined consistent system, then A^# b selects the solution x with the minimum norm. If the system is inconsistent, then A^# b minimizes the least square error ||Ax − b||.
If the minimizer of ||Ax − b|| is not unique, then it picks the minimizer with the minimum norm ||x|| among such minimizers.
Contrast with Tikhonov regularization: the pseudoinverse solution is preferred for precision and diagnosis; Tikhonov's solution for continuity of the solution over variable A and for computational efficiency.
143,
Theorem and Construction Optimality of Pseudoinverse Solution SVD Properties of SVD Pseudoinverse and Solution of Linear Systems
Pseudoinverse solution of Ax = b:
x* = A^# b = V Σ^# U^T b = Σ_{k=1}^r σ_k^{-1} v_k u_k^T b = Σ_{k=1}^r (u_k^T b / σ_k) v_k.
Equivalently, from (Σ^T Σ) V^T x = Σ^T U^T b, the components of y = V^T x are
y_k = u_k^T b / σ_k for k = 1, 2, 3, ..., r.
144,
Theorem and Construction Optimality of Pseudoinverse Solution SVD Properties of SVD Pseudoinverse and Solution of Linear Systems Optimality of Pseudoinverse Solution SVD Algorithm
With V_0 = [v_{r+1}  v_{r+2}  ...  v_n] collecting the remaining columns of V, the general minimizer is
x = Σ_{k=1}^r (u_k^T b / σ_k) v_k + V_0 y = x* + V_0 y.
How to minimize ||x|| = ||x* + V_0 y|| subject to E(x) being minimum?
145,
Theorem and Construction Optimality of Pseudoinverse Solution SVD Properties of SVD Pseudoinverse and Solution of Linear Systems
Anatomy of the optimization through SVD
Using basis V for the domain and U for the co-domain, the variables are transformed as V^T x = y and U^T b = c. Then
Ax = b ⇒ U Σ V^T x = b ⇒ Σ V^T x = U^T b ⇒ Σ y = c.
A completely decoupled system! Usable components: y_k = c_k / σ_k for k = 1, 2, 3, ..., r. For k > r, c_k represents either completely redundant information (c_k = 0) or a purely unresolvable conflict (c_k ≠ 0).
SVD extracts this pure redundancy/inconsistency. Setting y_k = 0 for k > r rejects it wholesale! At the same time, ||y|| is minimized, and hence ||x|| too.
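A minimal sketch of this recipe (the system A, b is an assumed example; numpy's built-in SVD performs the factorization): transform b into c = U^T b, keep y_k = c_k / σ_k only for the usable components k ≤ r, set the rest to zero, and map back with V.

```python
import numpy as np

def pinv_solve(A, b, tol=1e-10):
    U, sigma, Vt = np.linalg.svd(A)         # A = U diag(sigma) V^T
    c = U.T @ b                             # coordinates of b in the basis U
    r = int(np.sum(sigma > tol * sigma[0])) # numerical rank
    y = np.zeros(Vt.shape[0])
    y[:r] = c[:r] / sigma[:r]               # usable components only
    return Vt.T @ y                         # x* = V y

# Rank-deficient, inconsistent system: pinv_solve returns the minimum-norm
# least-squares solution, matching numpy's pinv.
A = np.array([[1.0, 2.0], [2.0, 4.0], [3.0, 6.0]])
b = np.array([1.0, 0.0, 1.0])
x = pinv_solve(A, b)
print(np.allclose(x, np.linalg.pinv(A) @ b))   # True
```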
Singular Value Decomposition SVD Theorem and Construction Properties of SVD Pseudoinverse and Solution of Linear Systems Optimality of Pseudoinverse Solution SVD Algorithm
146,
Points to note
SVD provides a complete orthogonal decomposition of the domain and co-domain of a linear transformation, separating out functionally distinct subspaces. It offers a complete diagnosis of the pathologies of systems of linear equations. The pseudoinverse solution of a linear system satisfies meaningful optimality requirements in several contexts. With the existence of SVD guaranteed, many important results can be established in a straightforward manner.
Vector Spaces: Fundamental Concepts* Group Field Vector Space Linear Transformation Isomorphism Inner Product Space Function Space
147,
Outline
Vector Spaces: Fundamental Concepts* Group Field Vector Space Linear Transformation Isomorphism Inner Product Space Function Space
148,
Group
A group (G, +) is a set with a binary operation satisfying
Closure: a + b ∈ G ∀ a, b ∈ G;
Associativity: (a + b) + c = a + (b + c);
Existence of an identity element and of inverses.
A commutative (abelian) group additionally satisfies a + b = b + a; example: (F, +).
Subgroup: a subset that is itself a group under the same operation.
Vector Spaces: Fundamental Concepts* Group Field Vector Space Linear Transformation Isomorphism Inner Product Space Function Space
149,
Field
Group property for addition: (F, +) is a commutative group. (Denote the identity element of this group as 0.)
Group property for multiplication: (F − {0}, ·) is a commutative group. (Denote the identity element of this group as 1.)
Distributivity: a · (b + c) = a · b + a · c, ∀ a, b, c ∈ F.
Concept of field: abstraction of a number system.
Examples: (Q, +, ·), (R, +, ·), (C, +, ·) etc.
Subfield
Vector Spaces: Fundamental Concepts* Group Field Vector Space Linear Transformation Isomorphism Inner Product Space Function Space
150,
Vector Space
A vector space is defined by
a field F of scalars, a commutative group V of vectors, and a binary operation between F and V, which may be called scalar multiplication, such that for all α, β ∈ F and a, b ∈ V the following conditions hold.
Closure: αa ∈ V. Identity: 1a = a. Associativity: (αβ)a = α(βa). Scalar distributivity: α(a + b) = αa + αb. Vector distributivity: (α + β)a = αa + βa.
Examples: R^n, C^n, m × n real matrices etc. A field abstracts a number system; a vector space abstracts a space.
Vector Spaces: Fundamental Concepts* Group Field Vector Space Linear Transformation Isomorphism Inner Product Space Function Space
151,
Vector Space
Suppose V is a vector space. Take a vector ξ_1 ≠ 0 in it.
Then the vectors linearly dependent on ξ_1 are α_1 ξ_1 ∈ V, for α_1 ∈ F.
Question: Are the elements of V exhausted? If not, then take ξ_2 ∈ V, linearly independent from ξ_1. Then α_1 ξ_1 + α_2 ξ_2 ∈ V ∀ α_1, α_2 ∈ F.
Question: Are the elements of V exhausted now?
Question: Will this process ever end? Suppose it does: a finite dimensional vector space.
Vector Spaces: Fundamental Concepts* Group Field Vector Space Linear Transformation Isomorphism Inner Product Space Function Space
152,
Vector Space
Finite dimensional vector space
Suppose the above process ends after n choices of linearly independent vectors. Then any ξ ∈ V can be written as ξ = α_1 ξ_1 + α_2 ξ_2 + ... + α_n ξ_n.
n: dimension of the vector space; the ordered set ξ_1, ξ_2, ..., ξ_n: a basis; α_1, α_2, ..., α_n ∈ F: coordinates of ξ in that basis.
Subspace
Vector Spaces: Fundamental Concepts* Group Field Vector Space Linear Transformation Isomorphism Inner Product Space Function Space
153,
Linear Transformation
A mapping T : V → W satisfying
T(αa + βb) = α T(a) + β T(b) ∀ α, β ∈ F and a, b ∈ V,
where V and W are vector spaces over the field F.
Question: How to describe the linear transformation T?
For V, choose a basis ξ_1, ξ_2, ..., ξ_n; for W, a basis ψ_1, ψ_2, ..., ψ_m.
Each ξ_j ∈ V gets mapped to T(ξ_j) ∈ W, expressible as T(ξ_j) = Σ_{i=1}^m a_{ij} ψ_i.
Vector Spaces: Fundamental Concepts* Group Field Vector Space Linear Transformation Isomorphism Inner Product Space Function Space
154,
Linear Transformation
basis vectors of V get mapped to vectors in W whose coordinates are listed in columns of A , and a vector of V , having its coordinates in x , gets mapped to a vector in W whose coordinates are obtained from Ax .
Vector Spaces: Fundamental Concepts* Group Field Vector Space Linear Transformation Isomorphism Inner Product Space Function Space
155,
Linear Transformation
Understanding:
T : V → W is the linear transformation, and the matrix A simply stores the coefficients needed to describe it. By changing bases of V and W, the same vector and the same linear transformation are expressed by different x and A, respectively. The matrix representation emerges as the natural description of a linear transformation between two vector spaces.
Vector is an actual object in the set V and the column x R n is merely a list of its coordinates.
Exercise: Set of all T : V W form a vector space of their own!! Analyze and describe that vector space.
Vector Spaces: Fundamental Concepts* Group Field Vector Space Linear Transformation Isomorphism Inner Product Space Function Space
156,
Isomorphism
Consider T : V → W that establishes a one-to-one correspondence. The linear transformation T defines a one-one onto mapping, which is invertible; dim V = dim W. The inverse linear transformation T^{-1} : W → V exists, and T defines (is) an isomorphism. The vector spaces V and W are isomorphic to each other. Isomorphism is an equivalence relation: V and W are equivalent!
If we need to perform some operations on vectors in one vector space, we may as well 1. transform the vectors to another vector space through an isomorphism, 2. conduct the required operations there, and 3. map the results back to the original space through the inverse.
Vector Spaces: Fundamental Concepts* Group Field Vector Space Linear Transformation Isomorphism Inner Product Space Function Space
157,
Isomorphism
Consider vector spaces V and W over the same field F and of the same dimension n. Question: Can we define an isomorphism between them? Answer: Of course. As many as we want! The underlying field and the dimension together completely specify a vector space, up to an isomorphism. All n-dimensional vector spaces over the field F are isomorphic to one another; in particular, they are all isomorphic to F^n. The representations (columns) can be considered as the objects (vectors) themselves.
Vector Spaces: Fundamental Concepts* Group Field Vector Space Linear Transformation Isomorphism Inner Product Space Function Space
158,
Inner product (a, b) in a real or complex vector space: a scalar function p : V × V → F satisfying
Closure: (a, b) ∈ F ∀ a, b ∈ V.
Associativity (homogeneity): (αa, b) = α(a, b).
Distributivity: (a + b, c) = (a, c) + (b, c).
Conjugate commutativity: (b, a) = (a, b)* (complex conjugate).
Positive definiteness: (a, a) ≥ 0; and (a, a) = 0 iff a = 0.
Note: The property of conjugate commutativity forces (a, a) to be real.
Examples: a^T b, a^T W b in R^n; a* b in C^n etc.
Inner product space: a vector space possessing an inner product.
Vector Spaces: Fundamental Concepts* Group Field Vector Space Linear Transformation Isomorphism Inner Product Space Function Space
159,
Inner products bring in ideas of angle and length into the geometry of vector spaces.
Orthogonality: (a, b) = 0.
Norm: ||·|| : V → R, such that ||a|| = sqrt((a, a)).
Associativity (homogeneity): ||αa|| = |α| ||a||.
Positive definiteness: ||a|| > 0 for a ≠ 0, and ||0|| = 0.
Triangle inequality: ||a + b|| ≤ ||a|| + ||b||.
Cauchy-Schwarz inequality: |(a, b)| ≤ ||a|| ||b||.
Vector Spaces: Fundamental Concepts* Group Field Vector Space Linear Transformation Isomorphism Inner Product Space Function Space
160,
Function Space
Suppose we decide to represent a continuous function f : [a, b] → R by the listing
v_f = [f(x_1)  f(x_2)  f(x_3)  ...  f(x_N)]^T
with a = x_1 < x_2 < x_3 < ... < x_N = b.
Note: The true representation will require N to be infinite!
Here, v_f is a real column vector. Do such vectors form a vector space? Correspondingly, does the set F of continuous functions over [a, b] form a vector space? An infinite dimensional vector space.
Vector Spaces: Fundamental Concepts* Group Field Vector Space Linear Transformation Isomorphism Inner Product Space Function Space
161,
Function Space
Vector space of continuous functions: first, (F, +) is a commutative group.
Every function in this space is an (infinite dimensional) vector. The listing of values is just an obvious basis.
Vector Spaces: Fundamental Concepts* Group Field Vector Space Linear Transformation Isomorphism Inner 1 Product Space 2 Function Space
162,
Function Space
Linear dependence of (non-zero) functions f_1 and f_2: f_2(x) = k f_1(x) for all x in the domain.
Functions f_1, f_2, f_3, ..., f_n ∈ F are linearly dependent if there exist k_1, k_2, k_3, ..., k_n, not all zero, such that
k_1 f_1(x) + k_2 f_2(x) + k_3 f_3(x) + ... + k_n f_n(x) = 0 ∀ x ∈ [a, b].
Example: functions 1, x , x 2 , x 3 , are a set of linearly independent functions. Incidentally, this set is a commonly used basis.
Vector Spaces: Fundamental Concepts* Group Field Vector Space Linear Transformation Isomorphism Inner Product Space Function Space
163,
Function Space
Inner product: For functions f(x) and g(x) in F, the usual inner product between the corresponding vectors is
(v_f, v_g) = v_f^T v_g = f(x_1)g(x_1) + f(x_2)g(x_2) + f(x_3)g(x_3) + ...
Weighted inner product: (v_f, v_g) = v_f^T W v_g = Σ_i w_i f(x_i) g(x_i).
In the limit, for the functions themselves,
(f, g) = ∫_a^b w(x) f(x) g(x) dx.
Orthogonality: (f, g) = ∫_a^b w(x) f(x) g(x) dx = 0.
Norm: ||f|| = sqrt((f, f)) = [ ∫_a^b w(x) [f(x)]² dx ]^{1/2}.
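A brief sketch of this function-space inner product, with the integral replaced by a simple trapezoidal quadrature; the weight w(x) = 1, the interval [−1, 1] and the sample functions are illustrative assumptions.

```python
import numpy as np

def inner(f, g, a=-1.0, b=1.0, w=lambda x: 1.0, N=2001):
    """(f, g) = int_a^b w(x) f(x) g(x) dx, by the trapezoidal rule."""
    x = np.linspace(a, b, N)
    y = w(x) * f(x) * g(x)
    dx = x[1] - x[0]
    return dx * (y.sum() - 0.5 * (y[0] + y[-1]))

f1 = lambda x: np.ones_like(x)      # the constant function 1
f2 = lambda x: x
f3 = lambda x: x**2

print(round(inner(f1, f2), 6))      # ~0: 1 and x are orthogonal on [-1, 1]
print(round(inner(f1, f3), 6))      # ~0.666667: 1 and x^2 are not orthogonal
print(np.sqrt(inner(f2, f2)))       # norm of x: sqrt(2/3) ~ 0.8165
```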
Vector Spaces: Fundamental Concepts* Group Field Vector Space Linear Transformation Isomorphism Inner Product Space Function Space
164,
Points to note
Matrix algebra provides a natural description for vector spaces and linear transformations. Through isomorphisms, R^n can represent all n-dimensional real vector spaces. Through the definition of an inner product, a vector space incorporates key geometric features of physical space. Continuous functions over an interval constitute an infinite dimensional vector space, complete with the usual notions.
Topics in Multivariate Calculus Derivatives in Multi-Dimensional Spaces Taylors Series Chain Rule and Change of Variables Numerical Dierentiation An Introduction to Tensors*
165,
Outline
Topics in Multivariate Calculus Derivatives in Multi-Dimensional Spaces Taylors Series Chain Rule and Change of Variables Numerical Dierentiation An Introduction to Tensors*
166,
Derivatives in Multi-Dimensional Spaces Derivatives in Multi-Dimensional Spaces Taylors Series Chain Rule and Change of Variables
Gradient:
∇f(x) ≡ ∂f/∂x = [ ∂f/∂x_1  ∂f/∂x_2  ...  ∂f/∂x_n ]^T
Among all unit vectors, taken as directions, the rate of change of a function in a direction is the same as the component of its gradient along that direction, and the rate of change along the direction of the gradient is the greatest and is equal to the magnitude of the gradient.
167,
Derivatives in Multi-Dimensional Spaces Derivatives in Multi-Dimensional Spaces Taylors Series Chain Rule and Change of Variables Numerical Dierentiation An Introduction to Tensors*
Hessian:
H(x) = ∂²f/∂x², the n × n matrix with (i, j) entry ∂²f/∂x_i ∂x_j:
[ ∂²f/∂x_1²       ∂²f/∂x_1∂x_2   ...   ∂²f/∂x_1∂x_n
  ∂²f/∂x_2∂x_1    ∂²f/∂x_2²      ...   ∂²f/∂x_2∂x_n
  ...
  ∂²f/∂x_n∂x_1    ∂²f/∂x_n∂x_2   ...   ∂²f/∂x_n²    ]
Topics in Multivariate Calculus Derivatives in Multi-Dimensional Spaces Taylors Series Chain Rule and Change of Variables Numerical Dierentiation An Introduction to Tensors*
168,
Taylors Series
Taylor's formula in the remainder form:
f(x + δx) = f(x) + f'(x) δx + (1/2!) f''(x) δx² + ... + (1/(n−1)!) f^{(n−1)}(x) δx^{n−1} + (1/n!) f^{(n)}(x_c) δx^n,
where x_c = x + t δx with 0 ≤ t ≤ 1.
Mean value theorem: existence of x_c.
Taylor's series:
f(x + δx) = f(x) + f'(x) δx + (1/2!) f''(x) δx² + ...
For a multivariate function,
f(x + δx) = f(x) + [δx^T ∇] f(x) + (1/2!) [δx^T ∇]² f(x) + ... + (1/(n−1)!) [δx^T ∇]^{n−1} f(x) + (1/n!) [δx^T ∇]^n f(x + t δx),
and, truncated to second order,
f(x + δx) ≈ f(x) + [∇f(x)]^T δx + (1/2) δx^T [∂²f/∂x² (x)] δx.
Topics in Multivariate Calculus Derivatives in Multi-Dimensional Spaces Taylors Series Chain Rule and Change of Variables Numerical Dierentiation An Introduction to Tensors*
169,
Chain rule: for f(v, x(v)),
d f(v, x(v)) / d v_i = ∂f/∂v_i + [∇_x f(v, x)]^T ∂x/∂v_i,
and, collecting all components, the total gradient is
∇_v f(v, x(v)) = ∇_v f(v, x) + [∂x/∂v]^T ∇_x f(v, x),
where the first term on the right is the partial gradient with x held fixed.
Topics in Multivariate Calculus Derivatives in Multi-Dimensional Spaces Taylors Series Chain Rule and Change of Variables Numerical Dierentiation An Introduction to Tensors*
170,
Partition x ∈ R^{m+n} into z ∈ R^n and w ∈ R^m. The system of equations h(x) = 0 means h(z, w) = 0.
Question: Can we work out the function w = w(z)? Solution of m equations in m unknowns?
Question: If we have one valid pair (z_0, w_0), then is it possible to develop w = w(z) in the local neighbourhood?
Answer: Yes, if the Jacobian ∂h/∂w is non-singular. (Implicit function theorem)
Differentiating h(z, w(z)) = 0,
∂h/∂z + (∂h/∂w)(∂w/∂z) = 0  ⟹  ∂w/∂z = −(∂h/∂w)^{-1} (∂h/∂z).
Up to first order, w_1 = w_0 + (∂w/∂z)(z_1 − z_0).
Topics in Multivariate Calculus Derivatives in Multi-Dimensional Spaces Taylors Series Chain Rule and Change of Variables Numerical Dierentiation An Introduction to Tensors*
171,
Change of variables in a multiple integral:
∫∫∫ f(x, y, z) dx dy dz = ∫∫∫ f(x(u,v,w), y(u,v,w), z(u,v,w)) |J(u,v,w)| du dv dw,
where J(u,v,w) = ∂(x,y,z)/∂(u,v,w) is the Jacobian of the transformation.
Given a differential P_1(x) dx_1 + P_2(x) dx_2 + ... + P_n(x) dx_n, we ask: does there exist a function f(x) of which this is the differential, or equivalently, the gradient of which is P(x)?
Perfect or exact differential: can be integrated to find f.
Topics in Multivariate Calculus Derivatives in Multi-Dimensional Spaces Taylors Series Chain Rule and Change of Variables Numerical Dierentiation An Introduction to Tensors*
172,
Differentiation under the integral sign: how to differentiate
Φ(x) = ∫_{u(x)}^{v(x)} f(x, t) dt ?
With F(x, t) such that ∂F/∂t = f(x, t),
Φ(x) = ∫_u^v (∂F/∂t) dt = F(x, v) − F(x, u) ≡ Φ(x, u, v),
and dΦ/dx = ∂Φ/∂x + (∂Φ/∂u)(du/dx) + (∂Φ/∂v)(dv/dx).
Using ∂Φ/∂v = f(x, v) and ∂Φ/∂u = −f(x, u), we get Leibnitz's rule:
Φ'(x) = ∫_{u(x)}^{v(x)} ∂f/∂x (x, t) dt + f(x, v) dv/dx − f(x, u) du/dx.
Topics in Multivariate Calculus Derivatives in Multi-Dimensional Spaces Taylors Series Chain Rule and Change of Variables Numerical Dierentiation An Introduction to Tensors*
173,
Numerical Dierentiation
Forward difference formula:
f'(x) = [f(x + δx) − f(x)] / δx + O(δx)
Central difference formulae:
f'(x) = [f(x + δx) − f(x − δx)] / (2 δx) + O(δx²)
f''(x) = [f(x + δx) − 2 f(x) + f(x − δx)] / δx² + O(δx²)
For the gradient ∇f(x) and Hessian,
∂f/∂x_i (x) = [f(x + δ e_i) − f(x − δ e_i)] / (2δ),
∂²f/∂x_i² (x) = [f(x + δ e_i) − 2 f(x) + f(x − δ e_i)] / δ², and
∂²f/∂x_i ∂x_j (x) = [f(x + δ e_i + δ e_j) − f(x + δ e_i − δ e_j) − f(x − δ e_i + δ e_j) + f(x − δ e_i − δ e_j)] / (4δ²).
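A short sketch implementing these central-difference formulas for the gradient and Hessian; the test function and step sizes are arbitrary illustrations.

```python
import numpy as np

def num_grad(f, x, d=1e-5):
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x); e[i] = d
        g[i] = (f(x + e) - f(x - e)) / (2 * d)       # central difference
    return g

def num_hess(f, x, d=1e-4):
    n = len(x)
    H = np.zeros((n, n))
    for i in range(n):
        ei = np.zeros(n); ei[i] = d
        H[i, i] = (f(x + ei) - 2 * f(x) + f(x - ei)) / d**2
        for j in range(i + 1, n):
            ej = np.zeros(n); ej[j] = d
            H[i, j] = H[j, i] = (f(x + ei + ej) - f(x + ei - ej)
                                 - f(x - ei + ej) + f(x - ei - ej)) / (4 * d**2)
    return H

f = lambda x: x[0]**2 * x[1] + np.sin(x[1])
x0 = np.array([1.0, 2.0])
print(num_grad(f, x0))   # close to [2*x1*x2, x1^2 + cos(x2)] = [4, 1 + cos 2]
print(num_hess(f, x0))   # close to [[2*x2, 2*x1], [2*x1, -sin(x2)]]
```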
Topics in Multivariate Calculus Derivatives in Multi-Dimensional Spaces Taylors Series Chain Rule and Change of Variables Numerical Dierentiation An Introduction to Tensors*
174,
An Introduction to Tensors*
Indicial notation and summation convention Kronecker delta and Levi-Civita symbol Rotation of reference axes Tensors of order zero, or scalars Contravariant and covariant tensors of order one, or vectors Cartesian tensors Cartesian tensors of order two Higher order tensors Elementary tensor operations Symmetric tensors Tensor elds
Topics in Multivariate Calculus Derivatives in Multi-Dimensional Spaces Taylors Series Chain Rule and Change of Variables Numerical Dierentiation An Introduction to Tensors*
175,
Points to note
Gradient, Hessian, Jacobian and the Taylors series Partial and total gradients Implicit functions Leibnitz rule Numerical derivatives
Vector Analysis: Curves and Surfaces Recapitulation of Basic Notions Curves in Space Surfaces*
176,
Outline
Vector Analysis: Curves and Surfaces Recapitulation of Basic Notions Curves in Space Surfaces*
Vector Analysis: Curves and Surfaces Recapitulation of Basic Notions Curves in Space Surfaces*
177,
Cross products as matrix products: for 3-d vectors, a × x = ã x, where
ã = [  0   −a_z   a_y
      a_z    0   −a_x
     −a_y   a_x    0  ];
for 2-d vectors, a × x reduces to the scalar ã^T x with ã = [−a_y  a_x]^T.
Triple products then also become matrix products, e.g. a × (x × b) = [(a^T b) I − b a^T] x.
Vector Analysis: Curves and Surfaces Recapitulation of Basic Notions Curves in Space Surfaces*
178,
Curves in Space
Length of a curve r(t), a ≤ t ≤ b:
∫ ||dr|| = ∫_a^b ||r'(t)|| dt.
Arc length function:
s(t) = ∫_a^t ||r'(τ)|| dτ,
with ds = ||dr|| = sqrt(dx² + dy² + dz²) and ds/dt = ||r'(t)||.
Vector Analysis: Curves and Surfaces Recapitulation of Basic Notions Curves in Space Surfaces*
179,
Curves in Space
A curve r(t) is regular if r'(t) ≠ 0 ∀ t.
Observations: for a regular curve, ds/dt = ||r'(t)|| ≠ 0.
Then s(t) has an inverse function, and the inverse t(s) reparametrizes the curve as r(t(s)).
||r'(s)|| = 1, and the unit tangent is u(s) = r'(s).
Vector Analysis: Curves and Surfaces Recapitulation of Basic Notions Curves in Space Surfaces*
180,
Curves in Space
Curvature: the rate at which the direction changes with arc length,
κ(s) = ||u'(s)|| = ||r''(s)||.
Unit principal normal: p = u'(s) / ||u'(s)||.
Radius of curvature: ρ = AC = 1/κ.
With a general parametrization,
r'(t) = (ds/dt) u(t), and r''(t) = (d²s/dt²) u(t) + κ (ds/dt)² p(t).
[Figure: a space curve r(t) with tangent u at a point, and the centre of curvature A at distance ρ = 1/κ.]
Vector Analysis: Curves and Surfaces Recapitulation of Basic Notions Curves in Space Surfaces*
181,
Curves in Space
Binormal: b = u × p.
Torsion: the rate at which the binormal turns, b'(s) = −τ(s) p(s).
We now have u' and b'. What is p'?
Vector Analysis: Curves and Surfaces Recapitulation of Basic Notions Curves in Space Surfaces*
182,
Curves in Space
We have u' and b'. What is p'? From p = b × u,
p' = b' × u + b × u' = −τ (p × u) + κ (b × p) = −κ u + τ b.
Serret-Frenet formulae: u' = κ p,  p' = −κ u + τ b,  b' = −τ p.
The intrinsic representation of a curve is complete with κ(s) and τ(s): the arc-length parametrization of a curve is completely determined by its curvature κ(s) and torsion τ(s) functions, except for a rigid body motion.
Vector Analysis: Curves and Surfaces Recapitulation of Basic Notions Curves in Space Surfaces*
183,
Surfaces*
Parametric surface equation:
r(u, v) = x(u, v) i + y(u, v) j + z(u, v) k ≡ [x(u, v)  y(u, v)  z(u, v)]^T.
Tangent vectors r_u and r_v define a tangent plane T. N = r_u × r_v is normal to the surface, and the unit normal is
n = N / ||N|| = (r_u × r_v) / ||r_u × r_v||.
Question: How does n vary over the surface? Information on local geometry: curvature tensor
Normal and principal curvatures Local shape: convex, concave, saddle, cylindrical, planar
Vector Analysis: Curves and Surfaces Recapitulation of Basic Notions Curves in Space Surfaces*
184,
Points to note
Parametric equation is the general and most convenient representation of curves and surfaces. Arc length is the natural parameter and the Serret-Frenet frame oers the natural frame of reference. Curvature and torsion are the only inherent properties of a curve. The local shape of a surface patch can be understood through an analysis of its curvature tensor.
Scalar and Vector Fields Dierential Operations on Field Functions Integral Operations on Field Functions Integral Theorems Closure
185,
Outline
Scalar and Vector Fields Dierential Operations on Field Functions Integral Operations on Field Functions Integral Theorems Closure
186,
Dierential Operations on Field Functions Dierential Operations on Field Functions Integral Operations on Field Functions Integral Theorems
Scalar point function or scalar field φ(x, y, z): R³ → R.
Vector point function or vector field V(x, y, z): R³ → R³.
The del or nabla (∇) operator:
∇ ≡ i ∂/∂x + j ∂/∂y + k ∂/∂z.
∇ is a vector, it signifies a differentiation, and it operates from the left side.
Laplacian operator:
∇² ≡ ∂²/∂x² + ∂²/∂y² + ∂²/∂z².
∇ · ∇φ = ??
187,
Dierential Operations on Field Functions Dierential Operations on Field Functions Integral Operations on Field Functions Integral Theorems Closure
Gradient:
grad φ ≡ ∇φ = (∂φ/∂x) i + (∂φ/∂y) j + (∂φ/∂z) k.
∇φ is orthogonal to the level surfaces. In flow fields, ∇φ gives the velocity vector.
Divergence: For V(x, y, z) ≡ V_x(x, y, z) i + V_y(x, y, z) j + V_z(x, y, z) k,
div V ≡ ∇ · V = ∂V_x/∂x + ∂V_y/∂y + ∂V_z/∂z.
Divergence of V: the flow rate of mass per unit volume out of the control volume. A similar relation holds between field and flux in electromagnetics.
188,
Dierential Operations on Field Functions Dierential Operations on Field Functions Integral Operations on Field Functions Integral Theorems Closure
Curl:
curl V ≡ ∇ × V = det [ i  j  k ;  ∂/∂x  ∂/∂y  ∂/∂z ;  V_x  V_y  V_z ]
= (∂V_z/∂y − ∂V_y/∂z) i + (∂V_x/∂z − ∂V_z/∂x) j + (∂V_y/∂x − ∂V_x/∂y) k.
If V = ω × r represents the velocity field, then the angular velocity is ω = (1/2) curl V.
Curl represents rotationality. Connections between electric and magnetic fields!
189,
Dierential Operations on Field Functions Dierential Operations on Field Functions Integral Operations on Field Functions Integral Theorems Closure
190,
Dierential Operations on Field Functions Dierential Operations on Field Functions Integral Operations on Field Functions Integral Theorems
Closure
Composite (second order) operators and important identities:
curl grad φ ≡ ∇ × (∇φ) = 0
div curl V ≡ ∇ · (∇ × V) = 0
div grad φ ≡ ∇ · ∇φ = ∇²φ
curl curl V ≡ ∇ × (∇ × V) = ∇(∇ · V) − ∇²V = grad div V − ∇²V
191,
Dierential Operations on Field Functions Integral Operations on Field Functions Integral Operations on Field Functions Integral Theorems
Closure
Line integral:
∫_C V · dr = ∫_C (V_x dx + V_y dy + V_z dz),
I = ∫_C V · dr = ∫_a^b V · (dr/dt) dt.
For simple (non-intersecting) paths contained in a simply connected region, the following statements are equivalent:
V_x dx + V_y dy + V_z dz is an exact differential.
V = ∇φ for some φ(r).
∫_C V · dr is independent of path.
Circulation ∮ V · dr = 0 around any closed path.
curl V = 0.
Field V is conservative.
192,
Dierential Operations on Field Functions Integral Operations on Field Functions Integral Operations on Field Functions Integral Theorems Closure
Surface integral:
∫∫_S V · dS = ∫∫_S V · n dS.
For a parametrized surface r(u, w), dS = ||r_u × r_w|| du dw and
∫∫_S V · n dS = ∫∫ V · (r_u × r_w) du dw.
Volume integrals:
∫∫∫_T φ dv  and  F = ∫∫∫_T V dv.
Scalar and Vector Fields Dierential Operations on Field Functions Integral Operations on Field Functions Integral Theorems Closure
193,
Integral Theorems
Greens theorem in the plane
R: closed bounded region in the xy-plane; C: its boundary, a piecewise smooth closed curve; F_1(x, y) and F_2(x, y): first order continuous functions. Then
∮_C (F_1 dx + F_2 dy) = ∫∫_R (∂F_2/∂x − ∂F_1/∂y) dx dy.
[Figure: the region R bounded between y_1(x) and y_2(x) (equivalently x_1(y) and x_2(y)), for a ≤ x ≤ b and c ≤ y ≤ d.]
Scalar and Vector Fields Dierential Operations on Field Functions Integral Operations on Field Functions Integral Theorems Closure
194,
Integral Theorems
Proof:
∫∫_R (∂F_1/∂y) dx dy = ∫_a^b ∫_{y_1(x)}^{y_2(x)} (∂F_1/∂y) dy dx
= ∫_a^b [F_1{x, y_2(x)} − F_1{x, y_1(x)}] dx
= −∮_C F_1(x, y) dx.
Similarly, using x_1(y) and x_2(y),
∫∫_R (∂F_2/∂x) dx dy = ∮_C F_2(x, y) dy.
Difference:
∮_C (F_1 dx + F_2 dy) = ∫∫_R (∂F_2/∂x − ∂F_1/∂y) dx dy.
In the alternative form,
∮_C F · dr = ∫∫_R (curl F) · k dx dy.
Scalar and Vector Fields Dierential Operations on Field Functions Integral Operations on Field Functions Integral Theorems Closure
195,
Integral Theorems
Gausss divergence theorem
T: a closed bounded region; S: its boundary, a piecewise smooth closed orientable surface; F(x, y, z): a first order continuous vector function. Then
∫∫∫_T div F dv = ∫∫_S F · n dS,
i.e. ∫∫∫_T (∂F_x/∂x + ∂F_y/∂y + ∂F_z/∂z) dx dy dz = ∫∫_S (F_x n_x + F_y n_y + F_z n_z) dS.
To show: ∫∫∫_T (∂F_z/∂z) dx dy dz = ∫∫_S F_z n_z dS.
First consider a region, the boundary of which is intersected at most twice by any line parallel to a coordinate axis.
Scalar and Vector Fields Dierential Operations on Field Functions Integral Operations on Field Functions Integral Theorems Closure
196,
Integral Theorems
∫∫∫_T (∂F_z/∂z) dz dx dy = ∫∫_R [F_z{x, y, z_2(x, y)} − F_z{x, y, z_1(x, y)}] dx dy,
where R is the projection of T on the xy-plane.
Projection of an area element of the upper segment: n_z dS = dx dy.
Projection of an area element of the lower segment: n_z dS = −dx dy.
Thus, ∫∫∫_T (∂F_z/∂z) dx dy dz = ∫∫_S F_z n_z dS.
The sum of three such components leads to the result. Extension to arbitrary regions follows by a suitable subdivision of the domain!
Scalar and Vector Fields Dierential Operations on Field Functions Integral Operations on Field Functions Integral Theorems Closure
197,
Integral Theorems
Greens identities (theorem)
Region T and boundary S: as required in the premises of Gauss's theorem; φ(x, y, z) and ψ(x, y, z): second order continuous scalar functions. Then
∫∫_S φ ∇ψ · n dS = ∫∫∫_T (φ ∇²ψ + ∇φ · ∇ψ) dv,
∫∫_S (φ ∇ψ − ψ ∇φ) · n dS = ∫∫∫_T (φ ∇²ψ − ψ ∇²φ) dv.
These are direct consequences of Gauss's theorem: to establish them, apply Gauss's divergence theorem on φ∇ψ, and then on ψ∇φ as well.
Scalar and Vector Fields Dierential Operations on Field Functions Integral Operations on Field Functions Integral Theorems Closure
198,
Integral Theorems
Stokes's theorem. S: a piecewise smooth surface; C: its boundary, a piecewise smooth simple closed curve; F(x, y, z): a first order continuous vector function. Then
∮_C F · dr = ∫∫_S (curl F) · n dS.
For the F_x component,
(∂F_x/∂z j − ∂F_x/∂y k) · n dS = (∂F_x/∂z n_y − ∂F_x/∂y n_z) dS.
First, consider a surface S intersected at most once by any line parallel to a coordinate axis.
Scalar and Vector Fields Dierential Operations on Field Functions Integral Operations on Field Functions Integral Theorems Closure
199,
Integral Theorems
Represent S as z = z(x, y) ≡ f(x, y). Then the normal is along [−∂f/∂x  −∂f/∂y  1]^T, so n_y = −(∂z/∂y) n_z.
Hence
(∂F_x/∂z n_y − ∂F_x/∂y n_z) dS = −(∂F_x/∂y + ∂F_x/∂z ∂z/∂y) n_z dS,
and integrating over S (i.e. over its projection on the xy-plane) and applying Green's theorem in the plane leads to ∮_C F_x dx.
Scalar and Vector Fields Dierential Operations on Field Functions Integral Operations on Field Functions Integral Theorems Closure
200,
Points to note
Gradient, divergence and curl; composite and second order operators; line, surface and volume integrals.
Polynomial Equations Basic Principles Analytical Solution General Polynomial Equations Two Simultaneous Equations Elimination Methods* Advanced Techniques*
201,
Outline
Polynomial Equations Basic Principles Analytical Solution General Polynomial Equations Two Simultaneous Equations Elimination Methods* Advanced Techniques*
202,
Basic Principles
Fundamental theorem of algebra
p(x) = a_0 x^n + a_1 x^{n−1} + a_2 x^{n−2} + ... + a_{n−1} x + a_n has exactly n roots x_1, x_2, ..., x_n, with p(x) = a_0 (x − x_1)(x − x_2)(x − x_3) ... (x − x_n). In general, the roots are complex.
Multiplicity: A root x* of p(x) with multiplicity k satisfies p(x*) = p'(x*) = p''(x*) = ... = p^{(k−1)}(x*) = 0.
Descartes rule of signs Bracketing and separation Synthetic division and deation p (x ) = f (x )q (x ) + r (x )
Polynomial Equations Basic Principles Analytical Solution General Polynomial Equations Two Simultaneous Equations Elimination Methods* Advanced Techniques*
203,
Analytical Solution
Quadratic equation ax² + bx + c = 0:
x = [−b ± sqrt(b² − 4ac)] / (2a).
Method of completing the square:
x² + (b/a) x + c/a = 0  ⟹  (x + b/(2a))² = b²/(4a²) − c/a = (b² − 4ac) / (4a²).
Cubic equations (Cardano): x³ + ax² + bx + c = 0. Completing the cube? Substituting y = x + k,
y³ + (a − 3k) y² + (b − 2ak + 3k²) y + (c − bk + ak² − k³) = 0.
Choose the shift k = a/3 to eliminate the quadratic term.
Polynomial Equations Basic Principles Analytical Solution General Polynomial Equations Two Simultaneous Equations Elimination Methods* Advanced Techniques*
204,
Analytical Solution
y³ + p y + q = 0.
Substitute y = u + v with uv = −p/3, so that u³ + v³ = −q; then
(u³ − v³)² = (u³ + v³)² − 4u³v³ = q² + 4p³/27,
from which u³ and v³, and hence the roots, follow.
Polynomial Equations Basic Principles Analytical Solution General Polynomial Equations Two Simultaneous Equations Elimination Methods* Advanced Techniques*
205,
Analytical Solution
Quartic equations (Ferrari): x⁴ + ax³ + bx² + cx + d = 0.
For a perfect square,
(x² + (a/2)x + y/2)² = (x² + (a/2)x)² + y(x² + (a/2)x) + y²/4.
Using (x² + (a/2)x)² = (a²/4 − b)x² − cx − d from the original equation,
(x² + (a/2)x + y/2)² = (a²/4 − b + y)x² + (ay/2 − c)x + (y²/4 − d).
The right-hand side is a perfect square in x when
(ay/2 − c)² − 4(a²/4 − b + y)(y²/4 − d) = 0,
the cubic resolvent for y.
Polynomial Equations Basic Principles Analytical Solution General Polynomial Equations Two Simultaneous Equations Elimination Methods* Advanced Techniques*
206,
Analytical Solution
Procedure
Frame the cubic resolvent. Solve this cubic equation and pick up one solution as y. Insert this y to form
(x² + (a/2)x + y/2)² = (ex + f)².
Solve each of the two quadratic equations to obtain a total of four solutions of the original quartic equation.
Polynomial Equations Basic Principles Analytical Solution General Polynomial Equations Two Simultaneous Equations Elimination Methods* Advanced Techniques*
207,
A general quintic, or higher degree, equation is not solvable by radicals. General polynomial equations: iterative algorithms
Solution through the companion matrix: the roots of a polynomial are the same as the eigenvalues of its companion matrix. For the normalized polynomial x^n + a_1 x^{n−1} + ... + a_{n−1} x + a_n, the companion matrix is
[ 0  0  ...  0  −a_n
  1  0  ...  0  −a_{n−1}
  .  .  ...  .   .
  0  0  ...  0  −a_2
  0  0  ...  1  −a_1 ].
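A compact sketch of this companion-matrix route (the example polynomial is arbitrary): build the companion matrix of the normalized polynomial and take its eigenvalues.

```python
import numpy as np

def poly_roots(coeffs):
    """coeffs = [a0, a1, ..., an] for p(x) = a0 x^n + ... + an, with a0 != 0."""
    a = np.asarray(coeffs, dtype=complex)
    c = a[1:] / a[0]                    # normalized coefficients c1, ..., cn
    n = len(c)
    C = np.zeros((n, n), dtype=complex)
    C[1:, :-1] = np.eye(n - 1)          # ones on the subdiagonal
    C[:, -1] = -c[::-1]                 # last column: -cn, ..., -c1 (top to bottom)
    return np.linalg.eigvals(C)

# p(x) = x^3 - 6x^2 + 11x - 6 = (x - 1)(x - 2)(x - 3)
print(np.sort(poly_roots([1, -6, 11, -6]).real))   # [1. 2. 3.]
```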
Polynomial Equations Basic Principles Analytical Solution General Polynomial Equations Two Simultaneous Equations Elimination Methods* Advanced Techniques*
208,
Bairstow's method separates out factors of small degree. Attempt to separate real linear factors?
Real quadratic factors: synthetic division with a guess factor x² + q_1 x + q_2 leaves a remainder r_1 x + r_2; r = [r_1 r_2]^T is a vector function of q = [q_1 q_2]^T. Iterate over (q_1, q_2) to make (r_1, r_2) zero, by a Newton-Raphson (Jacobian based) iteration: see exercise.
Polynomial Equations Basic Principles Analytical Solution General Polynomial Equations Two Simultaneous Equations Elimination Methods* Advanced Techniques*
209,
Polynomial Equations Basic Principles Analytical Solution General Polynomial Equations Two Simultaneous Equations Elimination Methods* Advanced Techniques*
210,
Elimination Methods*
The method operates similarly even if the degrees of the original equations in y are higher. What about the degree of the eliminant equation? For two equations in x and y of degrees n_1 and n_2, the x-eliminant is an equation of degree n_1 n_2 in y. Maximum number of solutions: Bezout number = n_1 n_2. Note: Deficient systems may have fewer solutions. (Classical methods of elimination.)
Polynomial Equations Basic Principles Analytical Solution General Polynomial Equations Two Simultaneous Equations Elimination Methods* Advanced Techniques*
211,
Advanced Techniques*
Exploitation of special structures through clever heuristics (mechanisms kinematics literature); Gröbner basis representation (algebraic geometry); continuation or homotopy method by Morgan. For solving the system f(x) = 0, identify another structurally similar system g(x) = 0 with known solutions and construct the parametrized system h(x) = t f(x) + (1 − t) g(x) = 0 for t ∈ [0, 1].
Polynomial Equations Basic Principles Analytical Solution General Polynomial Equations Two Simultaneous Equations Elimination Methods* Advanced Techniques*
212,
Points to note
Roots of cubic and quartic polynomials by the methods of Cardano and Ferrari For higher degree polynomials,
Bairstows method: a clever implementation of Newton-Raphson method for polynomials Eigenvalue problem of a companion matrix
Solution of Nonlinear Equations and Systems Methods for Nonlinear Equations Systems of Nonlinear Equations Closure
213,
Outline
Solution of Nonlinear Equations and Systems Methods for Nonlinear Equations Systems of Nonlinear Equations Closure
214,
Algebraic and transcendental equations in the form f(x) = 0.
Practical problem: to find one real root (zero) of f(x). Examples of f(x): x³ − 2x + 5, or expressions involving ln x, sin x, etc.
If f(x) is continuous, then
Bracketing: f(x_0) f(x_1) < 0 ⟹ there must be a root of f(x) between x_0 and x_1.
Bisection: check the sign of f((x_0 + x_1)/2), and replace either x_0 or x_1 with (x_0 + x_1)/2, keeping the bracket.
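A minimal sketch of bracketing and bisection, assuming the classic example f(x) = x³ − 2x + 5 and an initial bracket chosen by inspection.

```python
def bisect(f, x0, x1, tol=1e-10):
    assert f(x0) * f(x1) < 0, "x0 and x1 must bracket a root"
    while x1 - x0 > tol:
        xm = 0.5 * (x0 + x1)
        if f(x0) * f(xm) <= 0:
            x1 = xm            # sign change in [x0, xm]
        else:
            x0 = xm            # sign change in [xm, x1]
    return 0.5 * (x0 + x1)

f = lambda x: x**3 - 2*x + 5
print(bisect(f, -3.0, 0.0))    # ~ -2.0946, the real root of x^3 - 2x + 5
```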
Solution of Nonlinear Equations and Systems Methods for Nonlinear Equations Systems of Nonlinear Equations Closure
[Figure: fixed point iteration, the curve y = g(x) against the line y = x.]
215,
Solution of Nonlinear Equations and Systems Methods for Nonlinear Equations Systems of Nonlinear Equations Closure
216,
Newton-Raphson iteration: x_{k+1} = x_k − f(x_k)/f'(x_k), the x-intercept of the tangent at x_k.
Merit: quadratic speed of convergence, |x_{k+1} − x*| = c |x_k − x*|².
Demerit: if the starting point is not appropriate, haphazard wandering, oscillations or outright divergence!
Solution of Nonlinear Equations and Systems Methods for Nonlinear Equations Systems of Nonlinear Equations Closure
217,
Secant method and method of false position: in the Newton-Raphson formula, replace the derivative by the difference quotient f'(x_k) ≈ [f(x_k) − f(x_{k−1})] / (x_k − x_{k−1}), giving
x_{k+1} = x_k − f(x_k) (x_k − x_{k−1}) / [f(x_k) − f(x_{k−1})].
Draw the chord or secant to f(x) through (x_{k−1}, f(x_{k−1})) and (x_k, f(x_k)), and take its x-intercept.
Special case: Maintain a bracket over the root at every iteration. The method of false position or regula falsi Convergence is guaranteed!
Solution of Nonlinear Equations and Systems Methods for Nonlinear Equations Systems of Nonlinear Equations Closure
218,
Quadratic interpolation (Muller) method: evaluate f(x) at three points, model y = a + bx + cx², set y = 0 and solve for x.
Inverse quadratic interpolation: evaluate f(x) at three points, model x = a + by + cy², and set y = 0 to get x = a.
Brent's method maintains the bracket, uses inverse quadratic interpolation, and accepts the outcome if it falls within bounds, else takes a bisection step.
Solution of Nonlinear Equations and Systems Methods for Nonlinear Equations Systems of Nonlinear Equations Closure
219,
f_1(x_1, x_2, ..., x_n) = 0,
f_2(x_1, x_2, ..., x_n) = 0,
...,
f_n(x_1, x_2, ..., x_n) = 0;  in vector form, f(x) = 0.
Number of variables and number of equations? No bracketing!
Fixed point iteration schemes x = g(x)?
Newton's method for systems of equations: f(x + Δx) = f(x) + J(x) Δx + ... ≈ f(x) + J(x) Δx.
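A compact sketch of this Newton iteration for a system (illustrative equations, forward-difference Jacobian estimate): set the linearized model to zero, i.e. solve J(x_k) Δx = −f(x_k), and update.

```python
import numpy as np

def newton_system(f, x0, tol=1e-12, max_iter=50, d=1e-7):
    x = np.array(x0, dtype=float)
    for _ in range(max_iter):
        fx = f(x)
        if np.linalg.norm(fx) < tol:
            break
        n = len(x)
        J = np.zeros((n, n))
        for j in range(n):                  # forward-difference Jacobian
            e = np.zeros(n); e[j] = d
            J[:, j] = (f(x + e) - fx) / d
        x = x + np.linalg.solve(J, -fx)     # Newton update
    return x

# Example: intersection of x^2 + y^2 = 4 with y = x^2 - 1.
f = lambda v: np.array([v[0]**2 + v[1]**2 - 4.0, v[1] - v[0]**2 + 1.0])
print(newton_system(f, [1.0, 1.0]))         # one intersection point, ~ (1.52, 1.30)
```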
Solution of Nonlinear Equations and Systems Methods for Nonlinear Equations Systems of Nonlinear Equations Closure
220,
Closure
Modified Newton's method:
x_{k+1} = x_k − α_k [J(x_k)]^{-1} f(x_k).
Broyden's secant method: the Jacobian is not evaluated at every iteration, but gets developed through updates.
Optimization-based formulation: seek the global minimum of the function ||f(x)||², e.g. by the Levenberg-Marquardt method.
Solution of Nonlinear Equations and Systems Methods for Nonlinear Equations Systems of Nonlinear Equations Closure
221,
Points to note
Iteration schemes for solving f(x) = 0. Newton (or Newton-Raphson) iteration for a system of equations: x_{k+1} = x_k − [J(x_k)]^{-1} f(x_k). Optimization formulation of a multi-dimensional root finding problem.
Optimization: Introduction
222,
Outline
Optimization: Introduction The Methodology of Optimization Single-Variable Optimization Conceptual Background of Multivariate Optimization
Optimization: Introduction
223,
Parameters and variables The statement of the optimization problem Minimize f (x ) subject to g (x ) 0 , h (x ) = 0 .
Optimization methods Sensitivity analysis Optimization problems: unconstrained and constrained Optimization problems: linear and nonlinear Single-variable and multi-variable problems
Optimization: Introduction
224,
Single-Variable Optimization
[Figure: a single-variable function over an interval starting at a, with candidate points x_1, ..., x_6 marked.]
Optimality criteria.
First order necessary condition: if x* is a local minimum or maximum point and if f'(x*) exists, then f'(x*) = 0.
Second order necessary condition: if x* is a local minimum point and f''(x*) exists, then f''(x*) ≥ 0.
Second order sufficient condition: if f'(x*) = 0 and f''(x*) > 0, then x* is a local minimum point.
Optimization: Introduction
225,
Single-Variable Optimization
Higher order analysis: from Taylor's series,
Δf = f(x* + δx) − f(x*) = f'(x*) δx + (1/2!) f''(x*) δx² + (1/3!) f'''(x*) δx³ + (1/4!) f''''(x*) δx⁴ + ...
For an extremum to occur at a point x*, the lowest order derivative with non-zero value should be of even order.
If f'(x*) = 0, then x* is a stationary point, a candidate for an extremum. Evaluate higher order derivatives till one of them is found to be non-zero.
If its order is odd, then x* is an inflection point. If its order is even, then x* is a local minimum or maximum, as the derivative value is positive or negative, respectively.
Optimization: Introduction
226,
Single-Variable Optimization
Iterative methods of line search. Methods based on gradient root finding:
Newton's method: x_{k+1} = x_k − f'(x_k) / f''(x_k).
Secant method: x_{k+1} = x_k − f'(x_k) (x_k − x_{k−1}) / [f'(x_k) − f'(x_{k−1})].
Method of cubic estimation: the point of vanishing gradient of the cubic fit with f(x_{k−1}), f(x_k), f'(x_{k−1}) and f'(x_k).
Method of quadratic estimation: the point of vanishing gradient of the quadratic fit through three points.
Optimization: Introduction
227,
Single-Variable Optimization
Bracketing: x_1 < x_2 < x_3 with f(x_1) ≥ f(x_2) ≤ f(x_3).
Direct optimization algorithms: exhaustive search method or its variants.
Fibonacci search uses a pre-defined number N of function evaluations and the Fibonacci sequence F_0 = 1, F_1 = 1, F_2 = 2, ..., F_j = F_{j−2} + F_{j−1}, to tighten a bracket with an economized number of function evaluations.
Golden section search uses a constant ratio (sqrt(5) − 1)/2 = 0.618, the golden section ratio, of interval reduction, determined as the limiting case of N → ∞; the actual number of steps is decided by the accuracy desired.
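A short sketch of golden section search as outlined above; the bracket, tolerance and test function are illustrative.

```python
import math

def golden_section(f, a, b, tol=1e-8):
    r = (math.sqrt(5) - 1) / 2            # golden section ratio ~ 0.618
    x1, x2 = b - r * (b - a), a + r * (b - a)
    f1, f2 = f(x1), f(x2)
    while b - a > tol:
        if f1 < f2:                        # minimum lies in [a, x2]
            b, x2, f2 = x2, x1, f1
            x1 = b - r * (b - a)
            f1 = f(x1)
        else:                              # minimum lies in [x1, b]
            a, x1, f1 = x1, x2, f2
            x2 = a + r * (b - a)
            f2 = f(x2)
    return 0.5 * (a + b)

print(golden_section(lambda x: (x - 2.0)**2 + 1.0, 0.0, 5.0))   # ~ 2.0
```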
Optimization: Introduction
228,
The Methodology of Optimization Conceptual Background of Multivariate Optimization Single-Variable Optimization Conceptual Background of Multivariate Optimization
Unconstrained minimization problem: x* is called a local minimum of f(x) if there exists δ > 0 such that f(x) ≥ f(x*) for all x satisfying ||x − x*|| < δ.
Optimality criteria: from Taylor's series,
f(x) − f(x*) = [g(x*)]^T δx + (1/2) δx^T [H(x*)] δx + ...
For x* to be a local minimum, necessary condition: g(x*) = 0 and H(x*) is positive semi-definite; sufficient condition: g(x*) = 0 and H(x*) is positive definite.
An indefinite Hessian matrix characterizes a saddle point.
Optimization: Introduction
229,
The Methodology of Optimization Conceptual Background of Multivariate Optimization Single-Variable Optimization Conceptual Background of Multivariate Optimization
[Figure: a convex function, with the chord joining (x_1, f(x_1)) and (x_2, f(x_2)) lying above the graph.]
Optimization: Introduction
230,
The Methodology of Optimization Conceptual Background of Multivariate Optimization Single-Variable Optimization Conceptual Background of Multivariate Optimization
First order characterization of convexity: from
f(λx_1 + (1 − λ)x_2) ≤ λ f(x_1) + (1 − λ) f(x_2),
we get λ[f(x_1) − f(x_2)] ≥ f(x_2 + λ(x_1 − x_2)) − f(x_2).
As λ → 0, f(x_1) ≥ f(x_2) + [∇f(x_2)]^T (x_1 − x_2): the tangent approximation is an underestimate at intermediate points!
Second order characterization: the Hessian is positive semi-definite.
Convex programming problem: a convex function over a convex set. A local minimum is also a global minimum, and all minima are connected in a convex set.
Note: Convexity is a stronger condition than unimodality!
Optimization: Introduction
231,
The Methodology of Optimization Conceptual Background of Multivariate Optimization Single-Variable Optimization Conceptual Background of Multivariate Optimization
For the quadratic function q(x) = (1/2) x^T A x − b^T x:
If A is positive definite, then the unique solution of Ax = b is the only minimum point.
If A is positive semi-definite and b ∈ Range(A), then the entire subspace of solutions of Ax = b consists of global minima.
If A is positive semi-definite but b ∉ Range(A), then the function is unbounded!
Note: A quadratic problem (with positive definite Hessian) acts as a benchmark for optimization algorithms.
Optimization: Introduction
232,
The Methodology of Optimization Conceptual Background of Multivariate Optimization Single-Variable Optimization Conceptual Background of Multivariate Optimization
Optimization Algorithms: from the current point, move to another point, hopefully better. Which way to go? How far to go? Which decision comes first?
Strategies and versions of algorithms:
Trust region: develop a local quadratic model
f(x_k + δx) ≈ f(x_k) + [g(x_k)]^T δx + (1/2) δx^T F_k δx,
and minimize it in a small trust region around x_k. (Define the trust region with dummy boundaries.)
Line search: identify a descent direction d_k and minimize the function along it through the univariate function φ(α) = f(x_k + α d_k).
Optimization: Introduction
233,
The Methodology of Optimization Conceptual Background of Multivariate Optimization Single-Variable Optimization Conceptual Background of Multivariate Optimization
Convergence of algorithms: notions of guarantee and speed.
Global convergence: the ability of an algorithm to approach and converge to an optimal solution for an arbitrary problem, starting from an arbitrary point. Practically, a sequence (or even a subsequence) of monotonically decreasing errors is enough.
Local convergence: the rate/speed of approach, measured by the order p and ratio β, where
lim_{k→∞} ||x_{k+1} − x*|| / ||x_k − x*||^p = β.
Linear, quadratic and superlinear rates of convergence correspond to p = 1, p = 2 and intermediate values. Comparison among algorithms with linear rates of convergence is by the convergence ratio β.
Optimization: Introduction
234,
Points to note
Theory and methods of single-variable optimization Optimality criteria in multivariate optimization Convexity in optimization The quadratic function Trust region Line search Global and local convergence
Multivariate Optimization Direct Methods Steepest Descent (Cauchy) Method Newtons Method Hybrid (Levenberg-Marquardt) Method Least Square Problems
235,
Outline
Multivariate Optimization Direct Methods Steepest Descent (Cauchy) Method Newtons Method Hybrid (Levenberg-Marquardt) Method Least Square Problems
236,
Direct Methods
Cyclic coordinate search Rosenbrocks method Hooke-Jeeves pattern search Boxs complex method Nelder and Meads simplex search Powells conjugate directions method
Useful for functions, for which derivative either does not exist at all points in the domain or is computationally costly to evaluate. Note: When derivatives are easily available, gradient-based algorithms appear as mainstream methods.
Multivariate Optimization Direct Methods Steepest Descent (Cauchy) Method Newtons Method Hybrid (Levenberg-Marquardt) Method Least Square Problems
237,
Direct Methods
Nelder and Mead's simplex method. Simplex in n-dimensional space: a polytope formed by n + 1 vertices. Nelder and Mead's method iterates over simplices that are non-degenerate (i.e. enclosing non-zero hypervolume).
First, n + 1 suitable points are selected for the starting simplex. Among the vertices of the current simplex, identify the worst point x_w, the best point x_b and the second worst point x_s. We need to replace x_w with a good point.
Centre of gravity of the face not containing x_w:
x_c = (1/n) Σ_{i=1, i≠w}^{n+1} x_i.
Reflection of x_w in x_c gives x_r, the default new point.
Multivariate Optimization Direct Methods Steepest Descent (Cauchy) Method Newtons Method Hybrid (Levenberg-Marquardt) Method Least Square Problems
238,
Direct Methods
Default: x_new = x_r. Revision possibilities:
1. For f(x_r) < f(x_b), expansion: x_new = x_c + α(x_c − x_w), α > 1.
2. For f(x_r) ≥ f(x_w), negative contraction: x_new = x_c − β(x_c − x_w), 0 < β < 1.
3. For f(x_s) < f(x_r) < f(x_w), positive contraction: x_new = x_c + β(x_c − x_w), with 0 < β < 1.
[Figure: the default (reflection), expansion, positive contraction and negative contraction moves from x_w through x_c.]
Multivariate Optimization Direct Methods Steepest Descent (Cauchy) Method Newtons Method Hybrid (Levenberg-Marquardt) Method Least Square Problems
239,
From a point x_k, a move through α units in direction d_k:
f(x_k + α d_k) = f(x_k) + α [g(x_k)]^T d_k + O(α²).
Descent direction d_k: for small α > 0, the function decreases if [g(x_k)]^T d_k < 0.
Direction of steepest descent: d_k = −g_k.
Minimize φ(α) = f(x_k + α d_k). Exact line search: φ'(α_k) = [g(x_k + α_k d_k)]^T d_k = 0, i.e. the search direction is tangential to the contour surface at (x_k + α_k d_k).
Note: The next direction d_{k+1} = −g(x_{k+1}) is orthogonal to d_k. [Or use d_k = −g_k / ||g_k||.]
Multivariate Optimization Direct Methods Steepest Descent (Cauchy) Method Newtons Method Hybrid (Levenberg-Marquardt) Method Least Square Problems
240,
1. Select a starting point x_0, set k = 0 and several parameters: tolerance ε_G on the gradient, absolute tolerance ε_A on the reduction in function value, relative tolerance ε_R on the reduction in function value, and maximum number of iterations M.
2. If ||g_k|| ≤ ε_G, STOP. Else d_k = −g_k / ||g_k||.
3. Line search: obtain α_k by minimizing φ(α) = f(x_k + α d_k), α > 0. Update x_{k+1} = x_k + α_k d_k.
4. If |f(x_{k+1}) − f(x_k)| ≤ ε_A + ε_R |f(x_k)|, STOP. Else k ← k + 1.
5. If k > M, STOP. Else go to step 2.
Very good global convergence. But why so many STOPs?
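A condensed sketch of this algorithm (numerical gradient, and a simple sufficient-decrease backtracking standing in for the exact line search of step 3; tolerances and the test function are illustrative).

```python
import numpy as np

def grad(f, x, h=1e-7):
    return np.array([(f(x + h*e) - f(x - h*e)) / (2*h) for e in np.eye(len(x))])

def steepest_descent(f, x0, eps_g=1e-6, eps_a=1e-12, max_iter=500):
    x = np.array(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(f, x)
        gn = np.linalg.norm(g)
        if gn < eps_g:
            break                               # gradient tolerance met
        d = -g / gn                             # unit steepest descent direction
        alpha, fx = 1.0, f(x)
        # Backtracking with a sufficient-decrease (Armijo-type) condition.
        while f(x + alpha * d) > fx - 1e-4 * alpha * gn and alpha > 1e-14:
            alpha *= 0.5
        if abs(f(x + alpha * d) - fx) < eps_a:
            break                               # no useful reduction
        x = x + alpha * d
    return x

f = lambda x: (x[0] - 1.0)**2 + 4.0 * (x[1] - 2.0)**2   # quadratic, minimum at (1, 2)
print(steepest_descent(f, [5.0, -3.0]))                  # close to [1, 2]
```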
Multivariate Optimization Direct Methods Steepest Descent (Cauchy) Method Newtons Method Hybrid (Levenberg-Marquardt) Method Least Square Problems
241,
For the quadratic error E(x) = (1/2)(x − x*)^T A (x − x*), steepest descent satisfies
E(x_{k+1}) ≤ [ (κ(A) − 1) / (κ(A) + 1) ]² E(x_k),
so the convergence ratio is [(κ(A) − 1)/(κ(A) + 1)]².
Uses of the steepest descent method: conceptual understanding, initial iterations in a completely new problem, spacer steps in other sophisticated methods.
Multivariate Optimization Direct Methods Steepest Descent (Cauchy) Method Newtons Method Hybrid (Levenberg-Marquardt) Method Least Square Problems
242,
Newtons Method
Second order approximation of a function:
f(x) ≈ f(x_k) + [g(x_k)]^T (x − x_k) + (1/2)(x − x_k)^T H(x_k)(x − x_k).
Vanishing of the gradient, g(x) ≈ g(x_k) + H(x_k)(x − x_k) = 0, gives the iteration formula
x_{k+1} = x_k − [H(x_k)]^{-1} g(x_k).
Excellent local convergence property: ||x_{k+1} − x*|| ≤ c ||x_k − x*||².
Caution: It does not have global convergence.
If H(x_k) is positive definite, then d_k = −[H(x_k)]^{-1} g(x_k) is a descent direction.
Multivariate Optimization Direct Methods Steepest Descent (Cauchy) Method Newtons Method Hybrid (Levenberg-Marquardt) Method Least Square Problems
243,
Newtons Method
Modified Newton's method:
Replace the Hessian by F_k = H(x_k) + μI. Replace the full Newton step by a line search.
Algorithm:
1. Select x_0, tolerance ε and an initial μ > 0. Set k = 0.
2. Evaluate g_k = g(x_k) and H(x_k). Choose μ, find F_k = H(x_k) + μI, and solve F_k d_k = −g_k for d_k.
3. Line search: obtain α_k to minimize φ(α) = f(x_k + α d_k). Update x_{k+1} = x_k + α_k d_k.
4. Check convergence: if |f(x_{k+1}) − f(x_k)| < ε, STOP. Else k ← k + 1 and go to step 2.
Multivariate Optimization Direct Methods Steepest Descent (Cauchy) Method Newtons Method Hybrid (Levenberg-Marquardt) Method Least Square Problems
244,
x_{k+1} = x_k − α_k M_k g_k, with M_k = [H(x_k) + λ_k I]^{-1}.
Initial value of λ: large enough to favour the steepest descent trend. Improvement in an iteration: λ is reduced by a factor. Increase in function value: the step is rejected and λ increased.
Opportunism systematized! Note: The cost of evaluating the Hessian remains a bottleneck. Useful for problems where Hessian estimates come cheap!
Multivariate Optimization Direct Methods Steepest Descent (Cauchy) Method Newtons Method Hybrid (Levenberg-Marquardt) Method Least Square Problems
245,
Linear least squares: the error in the i-th observation is
e_i = Σ_{k=1}^n x_k φ_k(t_i) − y_i = [φ(t_i)]^T x − y_i.
Multivariate Optimization Direct Methods Steepest Descent (Cauchy) Method Newtons Method Hybrid (Levenberg-Marquardt) Method Least Square Problems
246,
Least square error: E(x) = (1/2) Σ_i [f(t_i, x) − y_i]².
Gradient: g(x) = ∇E(x) = Σ_i [f(t_i, x) − y_i] ∇f(t_i, x) = J^T e.
Hessian: H(x) = ∂²E/∂x² = J^T J + Σ_i e_i ∂²f(t_i, x)/∂x² ≈ J^T J.
Combining a modified form diag(J^T J) Δx = −g(x) of the steepest descent formula with Newton's formula, the
Levenberg-Marquardt step: [J^T J + λ diag(J^T J)] Δx = −g(x).
Multivariate Optimization Direct Methods Steepest Descent (Cauchy) Method Newtons Method Hybrid (Levenberg-Marquardt) Method Least Square Problems
247,
1. Select x_0 and evaluate E(x_0). Select tolerance ε, initial λ and its update factor. Set k = 0.
2. Evaluate g_k and H_k = J^T J + λ diag(J^T J). Solve H_k Δx = −g_k. Evaluate E(x_k + Δx).
3. If |E(x_k + Δx) − E(x_k)| < ε, STOP.
4. If E(x_k + Δx) < E(x_k), then decrease λ, update x_{k+1} = x_k + Δx, k ← k + 1. Else increase λ.
5. Go to step 2.
A professional procedure for nonlinear least square problems, and also for solving systems of nonlinear equations in the form h(x) = 0.
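A minimal sketch of this Levenberg-Marquardt loop for a nonlinear least squares fit; the model y = c_1 exp(c_2 t), the synthetic data and the λ update factors are illustrative assumptions.

```python
import numpy as np

def levenberg_marquardt(residual, jacobian, x0, lam=1e-3, tol=1e-10, max_iter=100):
    x = np.array(x0, dtype=float)
    E = 0.5 * np.sum(residual(x)**2)
    for _ in range(max_iter):
        e, J = residual(x), jacobian(x)
        g = J.T @ e                                   # gradient of E
        H = J.T @ J
        dx = np.linalg.solve(H + lam * np.diag(np.diag(H)), -g)
        E_new = 0.5 * np.sum(residual(x + dx)**2)
        if abs(E_new - E) < tol:
            break
        if E_new < E:
            x, E, lam = x + dx, E_new, lam * 0.5      # accept step, trust Newton more
        else:
            lam *= 2.0                                # reject step, lean on gradient
    return x

# Fit y = c1 * exp(c2 * t) to noiseless data generated with c = (2, -1).
t = np.linspace(0.0, 2.0, 20)
y = 2.0 * np.exp(-1.0 * t)
residual = lambda c: c[0] * np.exp(c[1] * t) - y
jacobian = lambda c: np.column_stack([np.exp(c[1] * t),
                                      c[0] * t * np.exp(c[1] * t)])
print(levenberg_marquardt(residual, jacobian, [1.0, 0.0]))   # ~ [2, -1]
```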
Multivariate Optimization Direct Methods Steepest Descent (Cauchy) Method Newtons Method Hybrid (Levenberg-Marquardt) Method Least Square Problems
248,
Points to note
Simplex method of Nelder and Mead Steepest descent method with its global convergence Newtons method for fast local convergence Levenberg-Marquardt method for equation solving and least squares
249,
Outline
250,
Two vectors d_1 and d_2 are mutually conjugate with respect to a symmetric matrix A if d_1^T A d_2 = 0.
Linear independence of conjugate directions: conjugate directions with respect to a positive definite matrix are linearly independent.
Expanding subspace property: in R^n, with conjugate vectors {d_0, d_1, ..., d_{n−1}} with respect to a symmetric positive definite A, for any x_0 ∈ R^n the sequence {x_0, x_1, x_2, ..., x_n} generated as x_{k+1} = x_k + α_k d_k, with
α_k = −g_k^T d_k / (d_k^T A d_k),
minimizes the quadratic function over the expanding sequence of subspaces spanned by the directions used so far, reaching the minimum in at most n steps.
251,
Question: How to find a set of n conjugate directions? The Gram-Schmidt procedure is a poor option!
Conjugate gradient method: starting from d_0 = −g_0,
d_{k+1} = −g_{k+1} + β_k d_k.
Imposing the condition of conjugacy of d_{k+1} with d_k,
β_k = g_{k+1}^T A d_k / (d_k^T A d_k) = g_{k+1}^T (g_{k+1} − g_k) / (α_k d_k^T A d_k).
252,
Fletcher-Reeves formula: β_k = g_{k+1}^T g_{k+1} / (g_k^T g_k).
253,
Extension to general (non-quadratic) functions: the Hessian A varies, so determine the step size by line search. After n steps, the minimum is not attained. But g_k^T d_k = −g_k^T g_k implies guaranteed descent. Globally convergent, with a superlinear rate of convergence. What to do after n steps? Restart or continue?
Algorithm:
1. Select x_0 and tolerances ε_G, ε_D. Evaluate g_0 = ∇f(x_0).
2. Set k = 0 and d_k = −g_k.
3. Line search: find α_k; update x_{k+1} = x_k + α_k d_k.
4. Evaluate g_{k+1} = ∇f(x_{k+1}). If ||g_{k+1}|| ≤ ε_G, STOP.
5. Find β_k = g_{k+1}^T (g_{k+1} − g_k) / (g_k^T g_k) (Polak-Ribiere) or β_k = g_{k+1}^T g_{k+1} / (g_k^T g_k) (Fletcher-Reeves). Obtain d_{k+1} = −g_{k+1} + β_k d_k, and continue.
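A short sketch of this conjugate gradient algorithm for a general function, using the Fletcher-Reeves β and a backtracking line search in place of an exact one; the test function, restart rule and tolerances are illustrative.

```python
import numpy as np

def grad(f, x, h=1e-7):
    return np.array([(f(x + h*e) - f(x - h*e)) / (2*h) for e in np.eye(len(x))])

def line_search(f, x, d, g):
    alpha, fx, slope = 1.0, f(x), g @ d
    while f(x + alpha * d) > fx + 1e-4 * alpha * slope and alpha > 1e-14:
        alpha *= 0.5                           # backtracking, sufficient decrease
    return alpha

def conjugate_gradient(f, x0, eps_g=1e-6, max_iter=200):
    x = np.array(x0, dtype=float)
    g = grad(f, x)
    d = -g
    for k in range(max_iter):
        if np.linalg.norm(g) < eps_g:
            break
        if g @ d >= 0:
            d = -g                             # safeguard: keep a descent direction
        alpha = line_search(f, x, d, g)
        x = x + alpha * d
        g_new = grad(f, x)
        beta = (g_new @ g_new) / (g @ g)       # Fletcher-Reeves formula
        d = -g_new + beta * d
        g = g_new
        if (k + 1) % len(x) == 0:
            d = -g                             # periodic restart after n steps
    return x

f = lambda x: 2*(x[0] + 1)**2 + (x[1] - 3)**2 + (x[0] + 1)*(x[1] - 3)
print(conjugate_gradient(f, [4.0, -4.0]))      # close to [-1, 3]
```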
254,
Parallel subspace property: in R^n, consider two parallel linear varieties S_1 = v_1 + B_k and S_2 = v_2 + B_k, with B_k = {d_1, d_2, ..., d_k}, k < n. If x_1* and x_2* minimize q(x) = (1/2) x^T A x + b^T x on S_1 and S_2, respectively, then x_2* − x_1* is conjugate to d_1, d_2, ..., d_k.
255,
1. Select x_0, and a set of n linearly independent (preferably normalized) directions d_1, d_2, ..., d_n; possibly d_i = e_i.
3. Perform line searches along d_1, d_2, ..., d_n in sequence to obtain z = x_k + Σ_{j=1}^n α_j d_j.
4. New conjugate direction d = z − x_k. If ||d|| < ε, STOP.
5. Reassign directions d_j ← d_{j+1} for j = 1, 2, ..., (n − 1) and d_n = d / ||d||. (The old d_1 gets discarded at this step.)
6. Line search and update x_{k+1} = z + α d_n; set k ← k + 1 and go to step 3.
256,
x0 -x1 and b -z1 : x1 -z1 is conjugate to b -z1 . b -z1 -x2 and c -d -z2 : c -d , d -z2 and x2 -z2 are mutually conjugate.
[Figure: progress of the direction set method in the plane, generating mutually conjugate directions.]
257,
Quasi-Newton Methods
Variable metric methods attempt to construct the inverse Hessian, B_k ≈ H^{-1}.
With p_k = x_{k+1} − x_k and q_k = g_{k+1} − g_k, we have q_k ≈ H p_k.
With n such steps, B = P Q^{-1}: update and construct B_k → H^{-1}.
Rank one correction: B_{k+1} = B_k + a_k z_k z_k^T ?
Rank two correction: B_{k+1} = B_k + a_k z_k z_k^T + b_k w_k w_k^T.
Davidon-Fletcher-Powell (DFP) method: select x_0, tolerance ε and B_0 = I_n. For k = 0, 1, 2, ...: d_k = −B_k g_k; line search for α_k; update p_k = α_k d_k, x_{k+1} = x_k + p_k, q_k = g_{k+1} − g_k. If ||p_k|| < ε or ||q_k|| < ε, STOP. Update
B_{k+1} = B_k + (p_k p_k^T) / (p_k^T q_k) − (B_k q_k q_k^T B_k) / (q_k^T B_k q_k).
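A condensed sketch of the DFP iteration just described (numerical gradients, a simple sufficient-decrease line search and an arbitrary quadratic test function): search along d = −B g and apply the rank-two update to B.

```python
import numpy as np

def grad(f, x, h=1e-7):
    return np.array([(f(x + h*e) - f(x - h*e)) / (2*h) for e in np.eye(len(x))])

def dfp(f, x0, eps=1e-8, max_iter=100):
    x = np.array(x0, dtype=float)
    B = np.eye(len(x))                       # B0 = I: first step is steepest descent
    g = grad(f, x)
    for _ in range(max_iter):
        d = -B @ g
        alpha, fx = 1.0, f(x)
        # Backtracking line search with a sufficient-decrease condition.
        while f(x + alpha * d) > fx + 1e-4 * alpha * (g @ d) and alpha > 1e-14:
            alpha *= 0.5
        p = alpha * d                        # p_k = x_{k+1} - x_k
        x_new = x + p
        g_new = grad(f, x_new)
        q = g_new - g                        # q_k = g_{k+1} - g_k
        if np.linalg.norm(p) < eps or np.linalg.norm(q) < eps:
            break
        # DFP rank-two update of the inverse Hessian estimate.
        B = B + np.outer(p, p) / (p @ q) - (B @ np.outer(q, q) @ B) / (q @ B @ q)
        x, g = x_new, g_new
    return x

f = lambda x: (x[0] - 3.0)**2 + 2.0 * (x[1] + 1.0)**2 + x[0] * x[1]
print(dfp(f, [0.0, 0.0]))   # close to [4, -2], the minimum of this quadratic
```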
258,
Quasi-Newton Methods
Properties of DFP iterations:
1. If B_k is symmetric and positive definite, then so is B_{k+1}.
2. For a quadratic function with positive definite Hessian H,
p_i^T H p_j = 0 for 0 ≤ i < j ≤ k, and B_{k+1} H p_i = p_i for 0 ≤ i ≤ k.
Implications:
1. Positive definiteness of the inverse Hessian estimate is never lost.
2. Successive search directions are conjugate directions.
3. With B_0 = I, the algorithm is a conjugate gradient method.
4. For a quadratic problem, the inverse Hessian gets completely constructed after n steps.
Variants: the Broyden-Fletcher-Goldfarb-Shanno (BFGS) method and the Broyden family of methods.
259,
Closure
Table 23.1: Summary of performance of optimization methods, comparing Cauchy (steepest descent), Newton, Levenberg-Marquardt (hybrid, deflected gradient), DFP/BFGS (quasi-Newton, variable metric), FR/PR (conjugate gradient) and Powell (direction set) methods in terms of: convergence steps on quadratic problems, evaluations (function, gradient, Hessian) per step, equivalent function evaluations, line searches, storage, performance on general problems, and what each is practically good for.
260,
Points to note
Conjugate directions and the expanding subspace property Conjugate gradient method Powell-Smith direction set method The quasi-Newton concept in professional optimization
Constrained Optimization Constraints Optimality Criteria Sensitivity Duality* Structure of Methods: An Overview*
261,
Outline
Constrained Optimization Constraints Optimality Criteria Sensitivity Duality* Structure of Methods: An Overview*
262,
Constraints
Constrained optimization problem:
Minimize f(x)
subject to g_i(x) ≤ 0 for i = 1, 2, ..., l, and h_j(x) = 0 for j = 1, 2, ..., m;
or, in vector form, g(x) ≤ 0 and h(x) = 0.
Conceptually, minimize f(x) over the feasible domain. Equality constraints reduce the domain to a surface or a manifold, possessing a tangent plane at every point.
Gradient of the vector function h(x) ≡ [h_1(x) h_2(x) ... h_m(x)]^T:
∇h(x) ≡ [∇h_1(x)  ∇h_2(x)  ...  ∇h_m(x)] = [∂h/∂x]^T.
Constrained Optimization Constraints Optimality Criteria Sensitivity Duality* Structure of Methods: An Overview*
263,
Constraints
Constraint qualification: ∇h_1(x), ∇h_2(x), etc. are linearly independent, i.e. ∇h(x) is of full rank. If a feasible point x_0, with h(x_0) = 0, satisfies the constraint qualification condition, we call it a regular point. At a regular feasible point x_0, the tangent plane
M = { y : [∇h(x_0)]^T y = 0 }
gives the collection of feasible directions. Equality constraints reduce the dimension of the problem. Variable elimination?
Constrained Optimization Constraints Optimality Criteria Sensitivity Duality* Structure of Methods: An Overview*
264,
Constraints
Active inequality constraints g_i(x_0) = 0: included among the h_j(x_0) for the tangent plane. Cone of feasible directions:
[∇h(x_0)]^T d = 0 and [∇g_i(x_0)]^T d ≤ 0 for i ∈ I,
where I is the set of indices of active inequality constraints.
Handling inequality constraints:
The active set strategy maintains a list of active constraints, keeps checking at every step for a change of scenario, and updates the list by inclusions and exclusions.
The slack variable strategy replaces all the inequality constraints by equality constraints as g_i(x) + x_{n+i} = 0, with the inclusion of non-negative slack variables x_{n+i}.
Constrained Optimization Constraints Optimality Criteria Sensitivity Duality* Structure of Methods: An Overview*
265,
Optimality Criteria
Suppose x* is a regular point, with g^(a)(x*) denoting the active inequality constraints there.
Columns of ∇h(x*) and ∇g^(a)(x*): a basis for the orthogonal complement of the tangent plane.
Basis of the tangent plane: D = [d_1  d_2  ...  d_k].
Then [D  ∇h(x*)  ∇g^(a)(x*)] is a basis of R^n, and −∇f(x*), being a vector in R^n, can be written as
−∇f(x*) = [D  ∇h(x*)  ∇g^(a)(x*)] [z; λ; μ^(a)].
Constrained Optimization Constraints Optimality Criteria Sensitivity Duality* Structure of Methods: An Overview*
266,
Optimality Criteria
Components of ∇f(x*) in the tangent plane must be zero: z = 0, so
∇f(x*) = -[∇h(x*)]λ - [∇g^(a)(x*)]μ^(a).
For inactive constraints, insisting on μ^(i) = 0,
∇f(x*) + [∇h(x*)]λ + [∇g(x*)]μ = 0,
where g(x) = [g^(a)(x); g^(i)(x)] and μ = [μ^(a); μ^(i)].
Complementarity: μ_i g_i(x*) = 0 for all i, or μ^T g(x*) = 0.
Question: what about the signs of the components of μ^(a)?
Constrained Optimization Constraints Optimality Criteria Sensitivity Duality* Structure of Methods: An Overview*
267,
Optimality Criteria
Answer: The negative gradient -∇f(x*) can have no component towards decreasing g_i(x*), i.e. μ_i^(a) >= 0 for every active constraint. Combining this with μ^(i) = 0: μ >= 0.
First order necessary conditions or Karush-Kuhn-Tucker (KKT) conditions: If x* is a regular point of the constraints and a solution to the NLP problem, then there exist Lagrange multiplier vectors λ and μ such that
Optimality: ∇f(x*) + [∇h(x*)]λ + [∇g(x*)]μ = 0, μ >= 0;
Feasibility: h(x*) = 0, g(x*) <= 0;
Complementarity: μ^T g(x*) = 0.
Convex programming problem: convex objective function f(x) and convex domain (convex g_i(x) and linear h_j(x)): KKT conditions are sufficient as well!
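A quick numerical check of the KKT conditions at a candidate point, in Python; the problem (minimize x1^2 + x2^2 subject to 1 - x1 - x2 <= 0) and the multiplier value are illustrative choices, not from the text.

import numpy as np

x_star = np.array([0.5, 0.5])             # candidate minimizer
mu = 1.0                                   # Lagrange multiplier for the single inequality

grad_f = 2 * x_star                        # gradient of the objective at x*
grad_g = np.array([-1.0, -1.0])            # gradient of g(x) = 1 - x1 - x2
g_val = 1.0 - x_star.sum()                 # constraint value at x*

print("optimality     :", grad_f + mu * grad_g)      # should be ~0
print("feasibility    :", g_val <= 1e-12)             # g(x*) <= 0
print("dual sign      :", mu >= 0)                    # mu >= 0
print("complementarity:", abs(mu * g_val) < 1e-12)    # mu * g(x*) = 0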
Constrained Optimization Constraints Optimality Criteria Sensitivity Duality* Structure of Methods: An Overview*
268,
Optimality Criteria
Lagrangian function: L(x, λ, μ) = f(x) + λ^T h(x) + μ^T g(x).
Second order conditions: consider a curve z(t) in the tangent plane with z(0) = x*. Then
d²/dt² f(z(t)) at t = 0 equals d/dt [∇f(z(t))^T ż(t)] at t = 0, giving
ż(0)^T H(x*) ż(0) + [∇f(x*)]^T z̈(0) >= 0.
Similarly, from h_j(z(t)) = 0,
ż(0)^T Hh_j(x*) ż(0) + [∇h_j(x*)]^T z̈(0) = 0.
Constrained Optimization Constraints Optimality Criteria Sensitivity Duality* Structure of Methods: An Overview*
269,
Optimality Criteria
Combining these with the multipliers,
ż(0)^T HL(x*) ż(0) + [∇_x L(x*)]^T z̈(0) >= 0,
where HL(x*) = ∂²L/∂x² = H(x*) + Σ_j λ_j Hh_j(x*) + Σ_i μ_i Hg_i(x*).
First order necessary condition makes the second term vanish! Second order necessary condition: The Hessian matrix of the Lagrangian function is positive semi-denite on the tangent plane M. Sucient condition: x L = 0 and HL (x ) positive denite on M. Restriction of the mapping HL (x ) : R n R n on subspace M?
Constrained Optimization Constraints Optimality Criteria Sensitivity Duality* Structure of Methods: An Overview*
270,
Optimality Criteria
Restricted mapping LM : M → M
Take y in M, operate HL(x*) on it, project the image back to M.
Question: matrix representation for LM, of size (n - m) x (n - m)?
Select a local orthonormal basis D in R^{n x (n-m)} for M. For arbitrary z in R^{n-m}, map y = Dz in R^n as HL y = HL Dz. Its component along d_i: d_i^T HL Dz. Hence, projection back onto M: LM z = D^T HL Dz.
The (n - m) x (n - m) matrix LM = D^T HL D: the restriction!
Second order necessary/sufficient condition: LM positive semi-definite / positive definite.
Constrained Optimization Constraints Optimality Criteria Sensitivity Duality* Structure of Methods: An Overview*
271,
Sensitivity
f (x , p ), g (x , p ) and h (x , p )
By choosing parameters (p ), we arrive at x . Call it x (p ). Question: How does f (x (p ), p ) depend on p ? Total gradients p f (x (p ), p ) = p x (p )x f (x , p ) + p f (x , p ), p h (x (p ), p ) = p x (p )x h (x , p ) + p h (x , p ) = 0 , and similarly for g (x (p ), p ). In view of x L = 0, from KKT conditions, p f (x (p ), p ) = p f (x , p ) + [p h (x , p )] + [p g (x , p )]
Constrained Optimization Constraints Optimality Criteria Sensitivity Duality* Structure of Methods: An Overview*
272,
Sensitivity
Sensitivity to constraints: In particular, in a revised problem with h(x) = c and g(x) <= d, using p = c: ∇_p f(x*, p) = 0, ∇_p h(x*, p) = -I and ∇_p g(x*, p) = 0, giving
∇_c f(x*(p), p) = -λ.
Similarly, using p = d, we get ∇_d f(x*(p), p) = -μ.
Lagrange multipliers λ and μ signify the costs of pulling the minimum point in order to satisfy the constraints!
Equality constraint: both sides are infeasible; the sign of λ_j identifies one side or the other of the hypersurface.
Inequality constraint: one side is feasible, so there is no cost of pulling from that side; hence μ_i >= 0.
Constrained Optimization Constraints Optimality Criteria Sensitivity Duality* Structure of Methods: An Overview*
273,
Duality*
Dual problem: Reformulation of a problem in terms of the Lagrange multipliers. Suppose x as a local minimum for the problem Minimize f (x ) subject to h (x ) = 0 , with Lagrange multiplier (vector) . f (x ) + [h (x )] = 0
If HL (x ) is positive denite (assumption of local duality), then x is also a local minimum of (x ) = f (x ) + T h (x ). f If we vary around , the minimizer of L(x , ) = f (x ) + T h (x ) varies continuously with .
Constrained Optimization Constraints Optimality Criteria Sensitivity Duality* Structure of Methods: An Overview*
274,
Duality*
() = min L(x , ) = min[f (x ) + T h (x )]. For a pair {x , }, the dual solution is feasible if and only if the primal solution is optimal. Dene x () as the local minimizer of L(x , ). () = L(x (), ) = f (x ()) + T h (x ()) First derivative: () = x ()x L(x (), ) + h (x ()) = h (x ()) For a pair {x , }, the dual solution is optimal if and only if the primal solution is feasible.
Constrained Optimization Constraints Optimality Criteria Sensitivity Duality* Structure of Methods: An Overview*
275,
Duality*
Hessian of the dual function:
H () = x ()x h (x ()) Dierentiating x L(x (), ) = 0 , we have x ()HL (x (), ) + [x h (x ())]T = 0 . Solving for x () and substituting, H () = [x h (x ())]T [HL (x (), )]1 x h (x ()), negative denite! At , x ( ) = x , ( ) = h (x ) = 0 , H ( ) is negative denite and the dual function is maximized. ( ) = L(x , ) = f (x )
Constrained Optimization Constraints Optimality Criteria Sensitivity Duality* Structure of Methods: An Overview*
276,
Duality*
Consolidation (including all constraints)
Assuming local convexity, the dual function: (, ) = min L(x , , ) = min[f (x ) + T h (x ) + T g (x )].
x x
Constraints on the dual: x L(x , , ) = 0 , optimality of the primal. Corresponding to inequality constraints of the primal problem, non-negative variables in the dual problem. First order necessary conditons for the dual optimality: equivalent to the feasibility of the primal problem. The dual function is concave globally! Under suitable conditions, ( ) = L(x , ) = f (x ). The Lagrangian L(x , , ) has a saddle point in the combined space of primal and dual variables: positive curvature along x directions and negative curvature along and directions.
Constrained Optimization Constraints Optimality Criteria Sensitivity Duality* Structure of Methods: An Overview*
277,
For a problem of n variables, with m active constraints, nature and dimension of working spaces
h (x )
2 +1 2 [max(0 , g (x ))] .
Primal methods (R nm ): Work only in feasible domain, restricting steps to the tangent plane. Example: Gradient projection method. Dual methods (R m ): Transform the problem to the space of Lagrange multipliers and maximize the dual. Example: Augmented Lagrangian method. Lagrange methods (R m+n ): Solve equations appearing in the KKT conditions directly. Example: Sequential quadratic programming.
Constrained Optimization Constraints Optimality Criteria Sensitivity Duality* Structure of Methods: An Overview*
278,
Points to note
Constraint qualication KKT conditions Second order conditions Basic ideas for solution strategy
279,
Outline
280,
Linear Programming
Standard form of an LP problem:
Minimize f(x) = c^T x subject to Ax = b, x >= 0; with b >= 0.
Preprocessing to cast a problem to the standard form:
Maximization: minimize the negative of the function.
Variables of unrestricted sign: use two variables.
Inequality constraints: use slack/surplus variables.
Negative RHS: multiply with -1.
Geometry of an LP problem:
Infinite domain: does a minimum exist?
Finite convex polytope: existence guaranteed.
Operating with vertices is sufficient as a strategy.
Extension with slack/surplus variables: the original solution space is a subspace in the extended space, with x >= 0 marking the domain.
Essence of the non-negativity condition of variables.
281,
Linear Programming
The simplex method: suppose x in R^N, b in R^M and A in R^{M x N} full-rank, with M < N, arranged as I_M x_B + A' x_NB = b.
Basic and non-basic variables: x_B in R^M and x_NB in R^{N-M}.
Basic feasible solution: x_B = b >= 0 and x_NB = 0.
At every iteration, selection of a non-basic variable to enter the basis:
edge of travel selected based on maximum rate of descent; no qualifier: current vertex is optimal;
leaving variable based on the first constraint becoming active along the edge; no constraint ahead: function is unbounded.
Two-phase method: inclusion of a pre-processing phase with artificial variables to develop a basic feasible solution.
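For experimenting with small LPs in standard form, SciPy's linprog (HiGHS simplex/interior-point solvers) can be used; the cost vector and constraints below are an illustrative example, not from the text: maximize 2x1 + 3x2 subject to x1 + x2 <= 4 and x1 + 3x2 <= 6, written with slack variables.

import numpy as np
from scipy.optimize import linprog

c = np.array([-2.0, -3.0, 0.0, 0.0])        # minimize -(2x1 + 3x2); slacks cost 0
A_eq = np.array([[1.0, 1.0, 1.0, 0.0],      # x1 +  x2 + s1 = 4
                 [1.0, 3.0, 0.0, 1.0]])     # x1 + 3x2 + s2 = 6
b_eq = np.array([4.0, 6.0])

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 4, method="highs")
print("optimal x:", res.x)                  # expected vertex (3, 1, 0, 0)
print("optimal objective:", -res.fun)       # expected 9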
282,
Linear Programming
General perspective LP problem: Minimize subject to Lagrangian:
y 0.
T L(x , y , , , ) = cT 1 x + c2 y
Optimality conditions:
T c1 + AT 11 + A21 = 0
Substituting back, optimal function value: f = T b1 T b2 f f Sensitivity to the constraints: b1 = and b2 = Dual problem: maximize subject to
T (, ) = bT 1 b2 ; T T T A11 + A21 = c1 , AT 12 + A22 c2 ,
0.
283,
Quadratic Programming
a QP problem.
Equations from the KKT conditions: linear! Lagrange methods are the natural choice! With equality constraints only, Minimize 1 f (x ) = xT Qx + cT x , 2 subject to Ax = b .
The KKT conditions give the linear system
[Q  A^T; A  0] [x; λ] = [-c; b].
Solution of this linear system yields the complete result! Caution: this coefficient matrix is indefinite.
284,
Quadratic Programming
Active set method
Minimize f(x) = (1/2) x^T Q x + c^T x subject to A_1 x = b_1, A_2 x <= b_2.
Start the iterative process from a feasible point. Construct the active set of constraints as Ax = b. From the current point x_k, with x = x_k + d_k,
f(x) = (1/2) (x_k + d_k)^T Q (x_k + d_k) + c^T (x_k + d_k) = (1/2) d_k^T Q d_k + (c + Q x_k)^T d_k + f(x_k).
Since g_k = ∇f(x_k) = c + Q x_k, the subsidiary quadratic program is
minimize (1/2) d_k^T Q d_k + g_k^T d_k subject to A d_k = 0.
Examining the solution d_k and the Lagrange multipliers, decide to terminate, proceed or revise the active set.
285,
Quadratic Programming
Linear complementarity problem (LCP)
Minimize f(x) = (1/2) x^T Q x + c^T x subject to Ax <= b, x >= 0.
With slack variables y (so that Ax + y = b) and the Lagrange multipliers, the KKT conditions reduce to the form
w - Mz = q, w >= 0, z >= 0, w^T z = 0.
286,
Quadratic Programming
If q >= 0, then w = q, z = 0 is a solution!
Lemke's method: artificial variable z_0, with e = [1 1 1 ... 1]^T:
Iw - Mz - e z_0 = q.
With z_0 = max(-q_i), w = q + e z_0 >= 0 and z = 0: a basic feasible solution.
Evolution of the basis is similar to the simplex method. Out of a pair of w and z variables, only one can be in any basis. At every step, one variable is driven out of the basis and its partner called in. The step driving out z_0 flags termination. Very clumsy!!
287,
Points to note
Fundamental issues and general perspective of the linear programming problem The simplex method Quadratic programming
The active set method
Lemke's method via the linear complementarity problem
Interpolation and Approximation Polynomial Interpolation Piecewise Polynomial Interpolation Interpolation of Multivariate Functions A Note on Approximation of Functions Modelling of Curves and Surfaces*
288,
Outline
Interpolation and Approximation Polynomial Interpolation Piecewise Polynomial Interpolation Interpolation of Multivariate Functions A Note on Approximation of Functions Modelling of Curves and Surfaces*
Interpolation and Approximation Polynomial Interpolation Piecewise Polynomial Interpolation Interpolation of Multivariate Functions A Note on Approximation of Functions Modelling of Curves and Surfaces*
289,
Polynomial Interpolation
Problem: To develop an analytical representation of a function from information at discrete data points. Purpose
Evaluation at arbitrary points Dierentiation and/or integration Drawing conclusion regarding the trends or nature sampled data are exactly satised
Polynomial: a convenient class of basis functions For yi = f (xi ) for i = 0, 1, 2, , n with x0 < x1 < x2 < < xn , p (x ) = a0 + a1 x + a2 x 2 + + an x n . Find the coecients such that p (xi ) = f (xi ) for i = 0, 1, 2, , n. Values of p (x ) for x [x0 , xn ] interpolate n + 1 values of f (x ), an outside estimate is extrapolation.
Interpolation and Approximation Polynomial Interpolation Piecewise Polynomial Interpolation Interpolation of Multivariate Functions A Note on Approximation of Functions Modelling of Curves and Surfaces*
290,
Polynomial Interpolation
Vandermonde matrix: invertible, but typically ill-conditioned! Invertibility means existence and uniqueness of the polynomial p(x). Two polynomials p1(x) and p2(x) matching the function f(x) at x_0, x_1, x_2, ..., x_n would imply an n-th degree polynomial p(x) = p1(x) - p2(x) with n + 1 roots!
Hence p(x) is identically zero, i.e. p1(x) = p2(x): the interpolating polynomial is unique.
Interpolation and Approximation Polynomial Interpolation Piecewise Polynomial Interpolation Interpolation of Multivariate Functions A Note on Approximation of Functions Modelling of Curves and Surfaces*
291,
Polynomial Interpolation
Lagrange interpolation
Basis functions:
L_k(x) = [product over j = 0..n, j != k of (x - x_j)] / [product over j = 0..n, j != k of (x_k - x_j)].
Interpolating polynomial:
p(x) = α_0 L_0(x) + α_1 L_1(x) + α_2 L_2(x) + ... + α_n L_n(x).
At the data points, L_k(x_i) = δ_ik. Coefficient matrix identity, and α_i = f(x_i).
Lagrange interpolation formula:
p(x) = Σ_{k=0..n} f(x_k) L_k(x).
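A direct Python implementation of the Lagrange formula (a sketch; fine for small n, but costly and numerically poor for large n, as noted later):

import numpy as np

def lagrange_interpolate(x_nodes, y_nodes, x):
    """Evaluate p(x) = sum_k y_k L_k(x) at a single point x."""
    n = len(x_nodes)
    p = 0.0
    for k in range(n):
        Lk = 1.0
        for j in range(n):
            if j != k:
                Lk *= (x - x_nodes[j]) / (x_nodes[k] - x_nodes[j])
        p += y_nodes[k] * Lk
    return p

# usage: interpolate sin(x) from 5 samples and evaluate between the nodes
xs = np.linspace(0.0, np.pi, 5)
ys = np.sin(xs)
print(lagrange_interpolate(xs, ys, 1.0), "vs exact", np.sin(1.0))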
Interpolation and Approximation Polynomial Interpolation Piecewise Polynomial Interpolation Interpolation of Multivariate Functions A Note on Approximation of Functions Modelling of Curves and Surfaces*
292,
Polynomial Interpolation
Two interpolation formulae:
one costly to determine, but easy to process;
the other trivial to determine, but costly to process.
Newton interpolation, for an intermediate trade-off:
p(x) = c_0 + c_1(x - x_0) + c_2(x - x_0)(x - x_1) + ... + c_n (x - x_0)(x - x_1)...(x - x_{n-1}).
Hermite interpolation uses derivatives as well as function values. At (m + 1) points, a total of n + 1 = Σ_{i=0..m} n_i conditions.
Limitations of single-polynomial interpolation: with a large number of data points, the polynomial degree is high.
Computational cost and numerical imprecision
Lack of representative nature due to oscillations
Interpolation and Approximation Polynomial Interpolation Piecewise Polynomial Interpolation Interpolation of Multivariate Functions A Note on Approximation of Functions Modelling of Curves and Surfaces*
293,
Handy for many uses with dense data. But, not differentiable.
Piecewise cubic interpolation: with function values and derivatives at (n + 1) points, n cubic Hermite segments.
Data for the j-th segment: f(x_{j-1}) = f_{j-1}, f(x_j) = f_j, f'(x_{j-1}) = f'_{j-1} and f'(x_j) = f'_j.
Interpolating polynomial: p_j(x) = a_0 + a_1 x + a_2 x² + a_3 x³.
Coefficients a_0, a_1, a_2, a_3: linear combinations of f_{j-1}, f_j, f'_{j-1}, f'_j.
Composite function C¹ continuous at the knot points.
Interpolation and Approximation Polynomial Interpolation Piecewise Polynomial Interpolation Interpolation of Multivariate Functions A Note on Approximation of Functions Modelling of Curves and Surfaces*
294,
General formulation through normalization of intervals x = xj 1 + t (xj xj 1 ), t [0, 1] With g (t ) = f (x (t )), g (t ) = (xj xj 1 )f (x (t )); Cubic polynomial for the j -th segment: qj (t ) = 0 + 1 t + 2 t 2 + 3 t 3 Modular expression: 1 t qj (t ) = [0 1 2 3 ] t 2 = [g0 g1 g0 g1 ] W t3
Packaging data, interpolation type and variable terms separately! Question: How to supply derivatives? And, why?
1 t t 2 = Gj WT t3
Interpolation and Approximation Polynomial Interpolation Piecewise Polynomial Interpolation Interpolation of Multivariate Functions A Note on Approximation of Functions Modelling of Curves and Surfaces*
295,
Spline: a drafting tool to draw a smooth curve through key points.
Data: f_i = f(x_i), for x_0 < x_1 < x_2 < ... < x_n. If k_j = f'(x_j), then p_j(x) can be determined in terms of f_{j-1}, f_j, k_{j-1}, k_j, and p_{j+1}(x) in terms of f_j, f_{j+1}, k_j, k_{j+1}. Then, matching second derivatives, p''_j(x_j) = p''_{j+1}(x_j): a linear equation in k_{j-1}, k_j and k_{j+1}. From the n - 1 interior knot points, n - 1 linear equations in the derivative values k_0, k_1, ..., k_n.
Prescribing k_0 and k_n: a diagonally dominant tridiagonal system!
A spline is a smooth interpolation, with C² continuity.
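In practice the tridiagonal system is assembled and solved directly, or a library spline is used. A small sketch with SciPy's CubicSpline, prescribing end slopes as above (the sample function and slope values are illustrative):

import numpy as np
from scipy.interpolate import CubicSpline

x = np.linspace(0.0, 2.0 * np.pi, 9)
f = np.sin(x)

# clamped spline: prescribe first derivatives at the two ends
spline = CubicSpline(x, f, bc_type=((1, 1.0), (1, 1.0)))   # sin'(0) = sin'(2*pi) = 1

# second derivatives are continuous across the knots by construction (C2)
xx = 1.2345
print("spline value:", spline(xx), "  exact:", np.sin(xx))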
296,
Polynomial Interpolation Interpolation of Multivariate Functions Piecewise Polynomial Interpolation Interpolation of Multivariate Functions
Grid data x = x_0, x_1, x_2, ..., x_m and y = y_0, y_1, y_2, ..., y_n over the rectangular domain {(x, y) : x_0 <= x <= x_m, y_0 <= y <= y_n}.
For x_{i-1} <= x <= x_i and y_{j-1} <= y <= y_j,
f(x, y) = a_{0,0} + a_{1,0} x + a_{0,1} y + a_{1,1} xy = [1 x] [a_{0,0} a_{0,1}; a_{1,0} a_{1,1}] [1; y].
With data at the four corner points, the coefficient matrix is determined from
[1 x_{i-1}; 1 x_i] [a_{0,0} a_{0,1}; a_{1,0} a_{1,1}] [1 1; y_{j-1} y_j] = [f_{i-1,j-1} f_{i-1,j}; f_{i,j-1} f_{i,j}].
297,
Polynomial Interpolation Interpolation of Multivariate Functions Piecewise Polynomial Interpolation Interpolation of Multivariate Functions
Alternative local formula through reparametrization.
With u = (x - x_{i-1})/(x_i - x_{i-1}) and v = (y - y_{j-1})/(y_j - y_{j-1}), denoting f_{i-1,j-1} = g_{0,0}, f_{i,j-1} = g_{1,0}, f_{i-1,j} = g_{0,1} and f_{i,j} = g_{1,1}, bilinear interpolation:
g(u, v) = [1 u] [β_{0,0} β_{0,1}; β_{1,0} β_{1,1}] [1; v] for u, v in [0, 1].
Values at the four corner points fix the coefficient matrix. Concisely,
g(u, v) = U^T W^T G_{i,j} W V,
in which U = [1 u]^T, V = [1 v]^T, W = [1 -1; 0 1] and G_{i,j} = [g_{0,0} g_{0,1}; g_{1,0} g_{1,1}] = [f_{i-1,j-1} f_{i-1,j}; f_{i,j-1} f_{i,j}].
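A minimal Python sketch of bilinear interpolation on one grid cell in this matrix form; the corner values are illustrative:

import numpy as np

W = np.array([[1.0, -1.0],
              [0.0,  1.0]])

def bilinear(G, u, v):
    """G = [[g00, g01], [g10, g11]] holds the corner values; u, v in [0, 1]."""
    U = np.array([1.0, u])
    V = np.array([1.0, v])
    # equals (1-u)(1-v) g00 + (1-u) v g01 + u (1-v) g10 + u v g11
    return U @ W.T @ G @ W @ V

G = np.array([[1.0, 2.0],     # values at (u, v) = (0, 0) and (0, 1)
              [3.0, 5.0]])    # values at (u, v) = (1, 0) and (1, 1)
print(bilinear(G, 0.0, 0.0), bilinear(G, 1.0, 1.0))   # reproduces the corners: 1.0, 5.0
print(bilinear(G, 0.5, 0.5))                          # cell-centre average: 2.75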
298,
Polynomial Interpolation Interpolation of Multivariate Functions Piecewise Polynomial Interpolation Interpolation of Multivariate Functions
Piecewise bicubic interpolation f f 2f Data: f , x , y and x y over grid points With normalizing parameters u and v ,
g u f = (xi xi 1 ) x, 2g u v g v
f = (yj yj 1 ) y , and
2
f = (xi xi 1 )(yj yj 1 ) x y
In {(x , y ) : xi 1 x xi , yj 1 y yj } or {(u , v ) : u , v [0, 1]}, g (u , v ) = UT WT Gi ,j WV , with U = [1 u u 2 u 3 ]T , g (0, 0) g (1, 0) Gi ,j = gu (0, 0) gu (1, 0) V = [1 v v 2 v 3 ]T , and g (0, 1) gv (0, 0) gv (0, 1) g (1, 1) gv (1, 0) gv (1, 1) . gu (0, 1) guv (0, 0) guv (0, 1) gu (1, 1) guv (1, 0) guv (1, 1)
299,
Polynomial Interpolation A Note on Approximation of Functions Piecewise Polynomial Interpolation Interpolation of Multivariate Functions A Note on Approximation of Functions Modelling of Curves and Surfaces*
express a function as a linear combination of a set of basis functions (which?), and determine coecients based on some criteria (what?).
Criteria: Interpolatory approximation: Exact agreement with sampled data Least square approximation: Minimization of a sum (or integral) of square errors over sampled data Minimax approximation: Limiting the largest deviation Basis functions: polynomials, sinusoids, orthogonal eigenfunctions or eld-specic heuristic choice
Interpolation and Approximation Polynomial Interpolation Piecewise Polynomial Interpolation Interpolation of Multivariate Functions A Note on Approximation of Functions Modelling of Curves and Surfaces*
300,
Points to note
Lagrange, Newton and Hermite interpolations Piecewise polynomial functions and splines Bilinear and bicubic interpolation of bivariate functions
301,
Outline
Newton-Cotes Integration Formulae Richardson Extrapolation and Romberg Integration Further Issues
Basic Methods of Numerical Integration Newton-Cotes Integration Formulae Richardson Extrapolation and Romberg Integration Further Issues
302,
Newton-Cotes Integration Formulae Richardson Extrapolation and Romberg Integration Further Issues
J=
a
f (x )dx
Divide [a, b ] into n sub-intervals with a = x0 < x1 < x2 < < xn 1 < xn = b , where xi xi 1 = h =
n b a n .
= J
i =1
Taking
xi
As n (i.e. h 0), if J1 and J2 approach the same limit, then function f (x ) is integrable over interval [a, b ]. A rectangular rule or a one-point rule Question: Which point to take as xi ?
303,
Newton-Cotes Integration Formulae Richardson Extrapolation and Romberg Integration Further Issues
f (x )dx hf ( xi )
and
a
f (x )dx h
f ( xi ).
i =1
f (x )dx =
x i 1 x i 1
f ( xi ) + f ( xi )(x x i ) + f ( xi )
(x x i )2 + dx 2
= hf ( xi ) +
h5 iv h3 f ( xi ) + f ( xi ) + , 24 1920
f (x )dx h
f ( xi )+
i =1
h3 24
f ( xi ) = h
i =1 i =1
f ( xi )+
h2 (b a)f ( ), 24
304,
Newton-Cotes Integration Formulae Richardson Extrapolation and Romberg Integration Further Issues
Trapezoidal rule:
∫[x_{i-1}, x_i] f(x) dx ≈ (h/2) [f(x_{i-1}) + f(x_i)]
and
∫[a, b] f(x) dx ≈ h [ (1/2) f(x_0) + Σ_{i=1..n-1} f(x_i) + (1/2) f(x_n) ].
305,
Newton-Cotes Integration Formulae Richardson Extrapolation and Romberg Integration Further Issues
With the error terms,
∫[a, b] f(x) dx = h [ (1/2){f(x_0) + f(x_n)} + Σ_{i=1..n-1} f(x_i) ] - (h²/12)(b - a) f''(ξ) + ... .
The same order of accuracy as the mid-point rule! Different sources of merit:
Mid-point rule: use of the mid-point leads to symmetric error cancellation.
Trapezoidal rule: use of end-points allows double utilization of boundary points in adjacent intervals.
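A short Python sketch of the composite mid-point and trapezoidal rules (the test integral of sin over [0, pi] is illustrative); both show the O(h²) behaviour noted above:

import numpy as np

def midpoint_rule(f, a, b, n):
    h = (b - a) / n
    x_mid = a + h * (np.arange(n) + 0.5)          # mid-points of the n sub-intervals
    return h * np.sum(f(x_mid))

def trapezoidal_rule(f, a, b, n):
    h = (b - a) / n
    y = f(np.linspace(a, b, n + 1))
    return h * (0.5 * y[0] + np.sum(y[1:-1]) + 0.5 * y[-1])

exact = 2.0                                        # integral of sin(x) over [0, pi]
for n in (10, 20, 40):
    print(n, midpoint_rule(np.sin, 0, np.pi, n) - exact,
             trapezoidal_rule(np.sin, 0, np.pi, n) - exact)   # errors fall ~4x per doubling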
306,
Newton-Cotes Integration Formulae Richardson Extrapolation and Romberg Integration Further Issues
Simpson's rules: Divide [a, b] into an even number (n = 2m) of intervals. Fit a quadratic polynomial over a panel of two intervals. For this panel of length 2h, two estimates:
M(f) = 2h f(x_i) and T(f) = h [f(x_{i-1}) + f(x_{i+1})].
J = ∫[x_{i-1}, x_{i+1}] f(x) dx = M(f) + (h³/3) f''(x_i) + (h⁵/60) f^iv(x_i) + ...
  = T(f) - (2h³/3) f''(x_i) - (h⁵/15) f^iv(x_i) + ...
Simpson's one-third rule (with error estimate):
J = [2M(f) + T(f)]/3 - (h⁵/90) f^iv(x_i) + ... = (h/3) [f(x_{i-1}) + 4 f(x_i) + f(x_{i+1})] - (h⁵/90) f^iv(x_i) + ...
Fifth (not fourth) order accurate!
A four-point rule: Simpson's three-eighth rule.
Still higher order rules NOT advisable!
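Composite Simpson's one-third rule in Python (a sketch; n must be even, and the test integral is illustrative):

import numpy as np

def simpson(f, a, b, n):
    if n % 2:
        raise ValueError("n must be even")
    x = np.linspace(a, b, n + 1)
    y = f(x)
    h = (b - a) / n
    # weights 1, 4, 2, 4, ..., 2, 4, 1
    return h / 3 * (y[0] + 4 * np.sum(y[1:-1:2]) + 2 * np.sum(y[2:-1:2]) + y[-1])

print(simpson(np.sin, 0.0, np.pi, 8), "vs exact 2")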
307,
Newton-Cotes Integration Formulae Richardson Extrapolation and Romberg Integration Richardson Extrapolation and Romberg Integration Further Issues
To determine a quantity F using a step size h, estimate F(h); error terms: h^p, h^q, h^r etc. (p < q < r). F = lim (h → 0) F(h)? Plot F(h), F(βh), F(β²h) (with β < 1) and extrapolate?
Richardson extrapolation:
F1(h) = [F(βh) - β^p F(h)] / (1 - β^p) = F + O(h^q),
F2(h) = [F1(βh) - β^q F1(h)] / (1 - β^q) = F + O(h^r).
308,
Newton-Cotes Integration Formulae Richardson Extrapolation and Romberg Integration Richardson Extrapolation and Romberg Integration Further Issues
For ∫[a, b] f(x) dx: p = 2, q = 4, r = 6 etc.
T(f) = J + c h² + d h⁴ + e h⁶ + ...
With β = 1/2, half the sum is already available at successive levels.
Romberg integration:
Trapezoidal rule with h = H: find J_11. With h = H/2, find J_12. Then
J_22 = (4 J_12 - J_11)/3.
If |J_22 - J_12| is within tolerance, STOP: accept J ≈ J_22.
Otherwise, with h = H/4, find J_13, then
J_23 = (4 J_13 - J_12)/3 and J_33 = (16 J_23 - J_22)/15, and so on.
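A compact Romberg tableau in Python, built on the trapezoidal rule (a sketch; the tableau indexing below is illustrative):

import numpy as np

def romberg(f, a, b, levels=5):
    J = np.zeros((levels, levels))
    for k in range(levels):                       # trapezoidal estimates with 2**k intervals
        n = 2 ** k
        x = np.linspace(a, b, n + 1)
        h = (b - a) / n
        J[0, k] = h * (0.5 * f(x[0]) + np.sum(f(x[1:-1])) + 0.5 * f(x[-1]))
    for row in range(1, levels):                  # Richardson extrapolation sweeps
        for col in range(levels - row):
            J[row, col] = (4**row * J[row - 1, col + 1] - J[row - 1, col]) / (4**row - 1)
    return J[levels - 1, 0]

print(romberg(np.sin, 0.0, np.pi), "vs exact 2")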
309,
Further Issues
Featured functions: adaptive quadrature
Newton-Cotes Integration Formulae Richardson Extrapolation and Romberg Integration Further Issues
Allocate to each interval an error quota in proportion to its length, (x_i - x_{i-1})/(b - a), of the total tolerance.
For each interval, find two estimates of the integral and estimate the error. If the error estimate is not within the quota, then subdivide.
Only trapezoidal rule applicable? Fit a spline over data points and integrate the segments?
310,
Points to note
Newton-Cotes Integration Formulae Richardson Extrapolation and Romberg Integration Further Issues
Denition of an integral and integrability Closed Newton-Cotes formulae and their error estimates Richardson extrapolation as a general technique Romberg integration Adaptive quadrature
311,
Outline
312,
Gaussian Quadrature
A typical quadrature formula: a weighted sum Σ_{i=0..n} w_i f_i,
f_i: function value at the i-th sampled point; w_i: corresponding weight.
Newton-Cotes formulae: abscissas (x_i's) of sampling prescribed; coefficients or weight values determined to eliminate dominant error terms.
Gaussian quadrature rules: no prescription of quadrature points;
only the number of quadrature points prescribed;
locations as well as weights contribute to the accuracy criteria;
with n integration points, 2n degrees of freedom: can be made exact for polynomials of degree up to 2n - 1;
best locations turn out to be interior points;
open quadrature rules: can handle integrable singularities.
313,
Gaussian Quadrature
Gauss-Legendre quadrature
∫[-1, 1] f(x) dx = w_1 f(x_1) + w_2 f(x_2). Four variables: insist that it is exact for 1, x, x² and x³:
w_1 + w_2 = ∫[-1, 1] dx = 2,
w_1 x_1 + w_2 x_2 = ∫[-1, 1] x dx = 0,
w_1 x_1² + w_2 x_2² = ∫[-1, 1] x² dx = 2/3,
w_1 x_1³ + w_2 x_2³ = ∫[-1, 1] x³ dx = 0.
Hence x_1 = -x_2 and w_1 = w_2, giving w_1 = w_2 = 1, x_1 = -1/√3, x_2 = 1/√3.
314,
Gaussian Quadrature
Two-point Gauss-Legendre quadrature formula:
∫[-1, 1] f(x) dx = f(-1/√3) + f(1/√3).
Exact for any cubic polynomial: parallels Simpson's rule!
Three-point quadrature rule along similar lines:
∫[-1, 1] f(x) dx = (5/9) f(-√(3/5)) + (8/9) f(0) + (5/9) f(√(3/5)).
A large number of formulae: consult mathematical handbooks.
For a domain of integration [a, b],
x = (a + b)/2 + t (b - a)/2 and dx = (b - a)/2 dt, so
∫[a, b] f(x) dx = (b - a)/2 ∫[-1, 1] f[x(t)] dt.
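The two-point rule mapped to a general interval, in Python (a sketch); numpy.polynomial.legendre.leggauss supplies nodes and weights for an arbitrary number of points:

import numpy as np

def gauss2(f, a, b):
    t = np.array([-1.0, 1.0]) / np.sqrt(3.0)        # nodes on [-1, 1]
    w = np.array([1.0, 1.0])                        # weights
    x = 0.5 * (a + b) + 0.5 * (b - a) * t           # map nodes to [a, b]
    return 0.5 * (b - a) * np.sum(w * f(x))

print(gauss2(lambda x: x**3, 0.0, 2.0))             # exact for cubics: 4.0

# general n-point Gauss-Legendre rule via NumPy (here n = 5), applied to sin on [0, pi]
t, w = np.polynomial.legendre.leggauss(5)
print(0.5 * np.pi * np.sum(w * np.sin(0.5 * np.pi * (t + 1.0))))   # ~2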
315,
Gaussian Quadrature
General Framework for n-point formula
p (x ): Lagrange polynomial through the n quadrature points f (x ) p (x ): a (2n 1)-degree polynomial having n of its roots at the quadrature points Then, with (x ) = (x x1 )(x x2 ) (x xn ), f (x ) p (x ) = (x )q (x ). Quotient polynomial: q (x ) = Direct integration:
1 1 n 1 i i =0 i x
f (x ): a polynomial of degree 2n 1
f (x )dx =
1 1
p (x )dx +
1
(x )
n 1 i =0
i x i dx
316,
Gaussian Quadrature
Choose quadrature points x1 , x2 , , xn so that (x ) is orthogonal to all polynomials of degree less than n. Legendre polynomial Gauss-Legendre quadrature 1. Choose Pn (x ), Legendre polynomial of degree n, as (x ). 2. Take its roots x1 , x2 , , xn as the quadrature points. 3. Fit Lagrange polynomial of f (x ), using these n points. p (x ) = L1 (x )f (x1 ) + L2 (x )f (x2 ) + + Ln (x )f (xn ) 4.
1 1 n 1
f (x )dx =
1 1
p (x )dx =
j =1 1 1 Lj (x )dx ,
f (xj )
1
Lj (x )dx
Weight values: wj =
for j = 1, 2, , n
317,
Gaussian Quadrature
Weight functions in Gaussian quadrature What is so great about exact integration of polynomials? Demand something else: generalization Exact integration of polynomials times function W (x ) Given weight function W (x ) and number (n) of quadrature points, work out the locations (xj s) of the n points and the corresponding weights (wj s), so that integral
b n
W (x )f (x )dx =
a j =1
wj f (xj )
318,
Gaussian Quadrature
A family of orthogonal polynomials with increasing degree: quadrature points: roots of n-th member of the family. For dierent kinds of functions and dierent domains,
Several singular functions and innite domains can be handled. A very special case: For W (x ) = 1, Gauss-Legendre quadrature!
319,
Multiple Integrals
S = ∫[a, b] ∫[g_1(x), g_2(x)] f(x, y) dy dx.
Define F(x) = ∫[g_1(x), g_2(x)] f(x, y) dy and S = ∫[a, b] F(x) dx,
with complete flexibility of the individual quadrature methods.
Double integral on a rectangular domain: two-dimensional version of Simpson's one-third rule,
∫[-1, 1] ∫[-1, 1] f(x, y) dx dy = w_0 f(0, 0)
+ w_1 [f(1, 0) + f(-1, 0) + f(0, 1) + f(0, -1)]
+ w_2 [f(1, 1) + f(1, -1) + f(-1, 1) + f(-1, -1)].
320,
Multiple Integrals
Monte Carlo integration: I = ∫_Ω f(x) dV.
Requirements: a region enclosing Ω, of known volume V, that is easy to sample, and a good uniform random number generator.
Define F(x) = f(x) if x is in Ω, and F(x) = 0 otherwise. With N uniformly sampled points x_i in the enclosing region,
I ≈ (V/N) Σ_{i=1..N} F(x_i).
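A Monte Carlo sketch in Python; the domain (the unit disc inside the box [-1, 1] x [-1, 1]) and the integrand are illustrative:

import numpy as np

rng = np.random.default_rng(0)
N = 100_000
pts = rng.uniform(-1.0, 1.0, size=(N, 2))      # uniform samples in the enclosing box
inside = np.sum(pts**2, axis=1) <= 1.0         # indicator of the actual domain (unit disc)
F_vals = np.where(inside, 1.0, 0.0)            # F(x) = f(x) = 1 inside, 0 outside

box_volume = 4.0
estimate = box_volume * F_vals.mean()          # (V/N) * sum F(x_i)
print(estimate, "vs pi =", np.pi)              # error decreases like N**(-1/2)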
321,
Points to note
Basic strategy of Gauss-Legendre quadrature Formulation of a double integral from fundamental principle Monte Carlo integration
322,
Outline
Single-Step Methods Practical Implementation of Single-Step Methods Systems of ODEs Multi-Step Methods*
Numerical Solution of Ordinary Dierential Equations Single-Step Methods Practical Implementation of Single-Step Methods Systems of ODEs Multi-Step Methods*
323,
Single-Step Methods
Single-Step Methods Practical Implementation of Single-Step Methods Systems of ODEs Multi-Step Methods*
Initial value problem (IVP) of a rst order ODE: dy = f (x , y ), y (x0 ) = y0 dx To determine: y (x ) for x [a, b ] with x0 = a. Numerical solution: Start from the point (x0 , y0 ).
Single-step method: Only the current value Multi-step method: History of several recent steps
324,
Single-Step Methods
Euler's method
Single-Step Methods Practical Implementation of Single-Step Methods Systems of ODEs Multi-Step Methods*
dy/dx at (x_n, y_n) equals f(x_n, y_n), so
y_{n+1} = y_n + h f(x_n, y_n).
Repetition of such steps constructs y(x).
First order truncated Taylor's series: expected error O(h²); accumulation over the steps: total error O(h). Euler's method is a first order method.
Question: Total error = sum of errors over the steps? Answer: No, in general.
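Euler's method in a few lines of Python; the test problem y' = -2y, y(0) = 1 is illustrative:

import numpy as np

def euler(f, x0, y0, h, n_steps):
    x, y = x0, y0
    for _ in range(n_steps):
        y = y + h * f(x, y)      # first order update
        x = x + h
    return y

f = lambda x, y: -2.0 * y
for h in (0.2, 0.1, 0.05):
    y_end = euler(f, 0.0, 1.0, h, int(round(2.0 / h)))
    print(h, abs(y_end - np.exp(-4.0)))    # total error falls roughly linearly with h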
325,
Single-Step Methods
Single-Step Methods Practical Implementation of Single-Step Methods Systems of ODEs Multi-Step Methods*
[Figure: geometric view of Euler steps: starting from (x_0, y_0), successive tangent segments through x_1, x_2, x_3 approximate the exact solution curves C, C_1.]
Improved Euler's method or Heun's method:
ȳ_{n+1} = y_n + h f(x_n, y_n),
y_{n+1} = y_n + (h/2) [f(x_n, y_n) + f(x_{n+1}, ȳ_{n+1})].
The order of Heun's method is two.
326,
Single-Step Methods
Runge-Kutta methods
Single-Step Methods Practical Implementation of Single-Step Methods Systems of ODEs Multi-Step Methods*
Second order method:
k_1 = h f(x_n, y_n), k_2 = h f(x_n + αh, y_n + βk_1) and y_{n+1} = y_n + w_1 k_1 + w_2 k_2.
Force agreement up to the second order:
y_{n+1} = y_n + w_1 h f(x_n, y_n) + w_2 h [f(x_n, y_n) + αh f_x(x_n, y_n) + βk_1 f_y(x_n, y_n) + ...]
        = y_n + (w_1 + w_2) h f(x_n, y_n) + h² w_2 [α f_x(x_n, y_n) + β f(x_n, y_n) f_y(x_n, y_n)] + ...
From Taylor's series, using y' = f(x, y) and y'' = f_x + f f_y,
y(x_{n+1}) = y_n + h f(x_n, y_n) + (h²/2) [f_x(x_n, y_n) + f(x_n, y_n) f_y(x_n, y_n)] + ...
Hence w_1 + w_2 = 1 and w_2 α = w_2 β = 1/2, i.e. α = β = 1/(2 w_2), w_1 = 1 - w_2.
327,
Single-Step Methods
With a continuous choice of w_2, a whole family of second order formulae.
Single-Step Methods Practical Implementation of Single-Step Methods Systems of ODEs Multi-Step Methods*
Popular form of RK2, with the choice w_2 = 1:
k_1 = h f(x_n, y_n), k_2 = h f(x_n + h/2, y_n + k_1/2),
x_{n+1} = x_n + h, y_{n+1} = y_n + k_2.
Fourth order Runge-Kutta method (RK4):
k_1 = h f(x_n, y_n)
k_2 = h f(x_n + h/2, y_n + k_1/2)
k_3 = h f(x_n + h/2, y_n + k_2/2)
k_4 = h f(x_n + h, y_n + k_3)
x_{n+1} = x_n + h, y_{n+1} = y_n + (k_1 + 2k_2 + 2k_3 + k_4)/6.
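The classical RK4 step in Python (a sketch), applied to the same illustrative problem y' = -2y, y(0) = 1:

import numpy as np

def rk4_step(f, x, y, h):
    k1 = h * f(x, y)
    k2 = h * f(x + h / 2, y + k1 / 2)
    k3 = h * f(x + h / 2, y + k2 / 2)
    k4 = h * f(x + h, y + k3)
    return y + (k1 + 2 * k2 + 2 * k3 + k4) / 6

f = lambda x, y: -2.0 * y
x, y, h = 0.0, 1.0, 0.1
for _ in range(20):                  # integrate to x = 2
    y = rk4_step(f, x, y, h)
    x += h
print(y, "vs exact", np.exp(-4.0))   # error of order h**4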
328,
Single-Step Methods Practical Implementation of Single-Step Methods Practical Implementation of Single-Step Methods Systems of ODEs
Question: How to decide whether the error is within tolerance?
Additional estimates: a handle to monitor the error, and hence more efficient algorithms.
Runge-Kutta method with adaptive step size: in an interval [x_n, x_n + h],
y_{n+1}^(1) = y_{n+1} + c h⁵ + higher order terms.
Over two steps of size h/2,
y_{n+1}^(2) = y_{n+1} + 2c (h/2)⁵ + ... ,
so y_{n+1}^(2) - y_{n+1}^(1) ≈ -(15/16) c h⁵ serves as an error estimate, and
y_{n+1} ≈ [16 y_{n+1}^(2) - y_{n+1}^(1)] / 15.
329,
Single-Step Methods Practical Implementation of Single-Step Methods Practical Implementation of Single-Step Methods Systems of ODEs Multi-Step Methods*
Evaluation of a step:
error estimate above the tolerance: step size is too large for accuracy; subdivide the interval.
error estimate far below the tolerance: step size is inefficient!
Start with a large step size. Keep subdividing intervals whenever the error exceeds the tolerance. Fast marching over smooth segments and small steps in zones featuring rapid changes in y(x).
Runge-Kutta-Fehlberg method: with six function values, an RK4 formula embedded in an RK5 formula gives two independent estimates and an error estimate! RKF45 in professional implementations.
330,
Systems of ODEs
Methods for a single rst order ODE
Single-Step Methods Practical Implementation of Single-Step Methods Systems of ODEs Multi-Step Methods*
directly applicable to a rst order vector ODE A typical IVP with an ODE system: dy = f (x , y ), y (x0 ) = y0 dx An n-th order ODE: convert into a system of rst order ODEs Dening state vector z (x ) = [y (x ) y (x ) y (n1) (x )]T , work out
dz dx
A system of higher order ODEs with the highest order derivatives of orders n1 , n2 , n3 , , nk
Cast into the state space form with the state vector of dimension n = n1 + n2 + n3 + + nk
331,
Systems of ODEs
Single-Step Methods Practical Implementation of Single-Step Methods Systems of ODEs Multi-Step Methods*
the highest order derivatives can be solved explicitly. The resulting form of the ODEs: normal system of ODEs Example: d 2x 3 dt 2 dy dt e xy State vector: z (t ) = x dx dt
2
+ 2x
dx dt d 2y dt 2
dy dt 3/2
d 2y +4 = 0 dt 2 + 2x + 1 = e t
d 3y y dt 3
dx dt
(t ) = (t ) = z With three trivial derivatives z1 = z4 and z4 5 and the other two obtained from the given ODEs,
T d2y 2 dt (t ) z2 , z3 dz dt
= f (t , z ).
332,
Multi-Step Methods*
Why not try to capture the trend? A typical multi-step formula:
Single-Step Methods Practical Implementation of Single-Step Methods Systems of ODEs Multi-Step Methods*
y_{n+1} = y_n + h [c_0 f(x_{n+1}, y_{n+1}) + c_1 f(x_n, y_n) + c_2 f(x_{n-1}, y_{n-1}) + c_3 f(x_{n-2}, y_{n-2}) + ...]
Determine the coefficients by demanding exactness for the leading polynomial terms.
Explicit methods: c_0 = 0; easy to evaluate, but involves extrapolation.
Implicit methods: c_0 ≠ 0; difficult to evaluate, but better stability.
Predictor-corrector methods. Example: Adams-Bashforth-Moulton method.
333,
Points to note
Single-Step Methods Practical Implementation of Single-Step Methods Systems of ODEs Multi-Step Methods*
Eulers and Runge-Kutta methods Step size adaptation State space formulation of dynamic systems
ODE Solutions: Advanced Issues Stability Analysis Implicit Methods Sti Dierential Equations Boundary Value Problems
334,
Outline
ODE Solutions: Advanced Issues Stability Analysis Implicit Methods Sti Dierential Equations Boundary Value Problems
ODE Solutions: Advanced Issues Stability Analysis Implicit Methods Sti Dierential Equations Boundary Value Problems
335,
Stability Analysis
But, its scope has a limitation.
Focus of explicit methods (such as RK) is accuracy and efficiency; the issue of stability is handled only indirectly.
Stability of explicit methods: for the ODE system y' = f(x, y), Euler's method gives y_{n+1} = y_n + f(x_n, y_n) h, while Taylor's series of the actual solution gives y(x_{n+1}) = y(x_n) + f(x_n, y(x_n)) h + O(h²). Discrepancy or error: ε_n = y_n - y(x_n).
ODE Solutions: Advanced Issues Stability Analysis Implicit Methods Sti Dierential Equations Boundary Value Problems
336,
Stability Analysis
Euler's step magnifies the error by a factor (I + hJ). Using J loosely as the representative Jacobian, ε_{n+1} ≈ (I + hJ)^n ε_1. For stability, ε_{n+1} → 0 as n → ∞.
Eigenvalues of (I + hJ) must fall within the unit circle |z| = 1. By the shift theorem, the eigenvalues of hJ must fall inside the unit circle centred at z_0 = -1:
|1 + λh| < 1, i.e. h < -2 Re(λ) / |λ|².
Note: the same result holds for the single ODE w' = λw, with complex λ.
For the second order Runge-Kutta method, ε_{n+1} = [1 + λh + (λh)²/2] ε_n, so the condition becomes
|1 + z + z²/2| < 1 with z = λh.
ODE Solutions: Advanced Issues Stability Analysis Implicit Methods Sti Dierential Equations Boundary Value Problems
337,
Stability Analysis
[Figure: stability regions of the Euler, RK2 and RK4 methods in the complex λh-plane; the region outside each boundary is unstable.]
Question: What do these stability regions mean with reference to the system eigenvalues? Question: How does the step size adaptation of RK4 operate on a system with eigenvalues on the left half of complex plane? Step size adaptation tackles instability by its symptom!
ODE Solutions: Advanced Issues Stability Analysis Implicit Methods Sti Dierential Equations Boundary Value Problems
338,
Implicit Methods
Backward Euler's method: y_{n+1} = y_n + h f(x_{n+1}, y_{n+1}).
Stability: eigenvalues of (I - hJ) must lie outside the unit circle |z| = 1:
|λh - 1| > 1, i.e. h > 2 Re(λ) / |λ|².
Absolute stability for a stable ODE, i.e. one with Re(λ) < 0.
ODE Solutions: Advanced Issues Stability Analysis Implicit Methods Sti Dierential Equations Boundary Value Problems
339,
Implicit Methods
[Figure: stability region of the backward Euler method in the complex λh-plane: unstable only inside the unit circle centred at λh = 1, stable everywhere outside it.]
How to solve g(y_{n+1}) ≡ y_n + h f(x_{n+1}, y_{n+1}) - y_{n+1} = 0 for y_{n+1}?
Typical Newton's iteration:
y_{n+1}^(k+1) = y_{n+1}^(k) + (I - hJ)⁻¹ [y_n - y_{n+1}^(k) + h f(x_{n+1}, y_{n+1}^(k))].
Semi-implicit Euler's method for the local solution:
y_{n+1} = y_n + h (I - hJ)⁻¹ f(x_{n+1}, y_n).
ODE Solutions: Advanced Issues Stability Analysis Implicit Methods Sti Dierential Equations Boundary Value Problems
340,
[Figure: (a) Case of c = 3, k = 2: response history over t in [0, 1].]
ODE Solutions: Advanced Issues Stability Analysis Implicit Methods Sti Dierential Equations Boundary Value Problems
341,
[Figure: the fast component (e^(-2t) - e^(-300t))/298 of the response, of magnitude of order 10⁻⁴ over t in [0, 1].]
To solve sti ODE systems, use implicit method, preferably with explicit Jacobian.
ODE Solutions: Advanced Issues Stability Analysis Implicit Methods Sti Dierential Equations Boundary Value Problems
342,
A ball is thrown with a particular velocity. What trajectory does the ball follow? How to throw a ball such that it hits a particular window at a neighbouring house after 15 seconds?
Two-point BVP in ODEs: boundary conditions at two values of the independent variable Methods of solution
ODE Solutions: Advanced Issues Stability Analysis Implicit Methods Sti Dierential Equations Boundary Value Problems
343,
The shooting method follows the strategy of adjusting trials so as to hit a target. Consider the 2-point BVP
y' = f(x, y), g_1(y(a)) = 0, g_2(y(b)) = 0,
where g_1 is in R^{n_1}, g_2 is in R^{n_2} and n_1 + n_2 = n.
Parametrize the initial state: y(a) = h(p) with p in R^{n_2}. Guess the n_2 values of p to define the IVP y' = f(x, y), y(a) = h(p).
Solve this IVP over [a, b] and evaluate y(b). Define the error vector E(p) = g_2(y(b)) and drive it to zero by iterating on p.
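A small shooting-method sketch in Python for the illustrative BVP y'' = -y, y(0) = 0, y(1) = 1: the unknown initial slope p is corrected by a secant iteration on the terminal error E(p) = y(1; p) - 1.

import numpy as np

def y_at_1(p, n=1000):
    """RK4 integration of the IVP y'' = -y, y(0) = 0, y'(0) = p; returns y(1)."""
    h = 1.0 / n
    z = np.array([0.0, p])                    # state [y, y']
    f = lambda z: np.array([z[1], -z[0]])
    for _ in range(n):
        k1 = h * f(z)
        k2 = h * f(z + k1 / 2)
        k3 = h * f(z + k2 / 2)
        k4 = h * f(z + k3)
        z = z + (k1 + 2 * k2 + 2 * k3 + k4) / 6
    return z[0]

E = lambda p: y_at_1(p) - 1.0                 # terminal error as a function of the guess
p0, p1 = 0.0, 2.0
for _ in range(20):                           # secant iterations on E(p) = 0
    e0, e1 = E(p0), E(p1)
    if abs(e1) < 1e-12 or e1 == e0:
        break
    p0, p1 = p1, p1 - e1 * (p1 - p0) / (e1 - e0)
print("initial slope found:", p1, "  exact 1/sin(1):", 1.0 / np.sin(1.0))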
ODE Solutions: Advanced Issues Stability Analysis Implicit Methods Sti Dierential Equations Boundary Value Problems
344,
From the current vector p, n_2 perturbations p + Δp e_i give the Jacobian ∂E/∂p by finite differences.
Each Newton's step: solution of n_2 + 1 initial value problems!
ODE Solutions: Advanced Issues Stability Analysis Implicit Methods Sti Dierential Equations Boundary Value Problems
345,
1. Discretize the domain [a, b]: grid of points a = x_0 < x_1 < x_2 < ... < x_{N-1} < x_N = b. Function values y(x_i): n(N + 1) unknowns.
2. Replace the ODE over the intervals by finite difference equations. Considering mid-points, a typical (vector) FDE:
y_i - y_{i-1} - h f( (x_i + x_{i-1})/2, (y_i + y_{i-1})/2 ) = 0, for i = 1, 2, 3, ..., N
nN (scalar) equations 3. Assemble additional n equations from boundary conditions. 4. Starting from a guess solution over the grid, solve this system. (Sparse Jacobian is an advantage.) Iterative schemes for solution of systems of linear equations.
ODE Solutions: Advanced Issues Stability Analysis Implicit Methods Sti Dierential Equations Boundary Value Problems
346,
Points to note
Numerical stability of ODE solution methods Computational cost versus better stability of implicit methods Multiscale responses leading to stiness: failure of explicit methods Implicit methods for sti systems Shooting method for two-point boundary value problems Relaxation method for boundary value problems
Existence and Uniqueness Theory Well-Posedness of Initial Value Problems Uniqueness Theorems Extension to ODE Systems Closure
347,
Outline
Existence and Uniqueness Theory Well-Posedness of Initial Value Problems Uniqueness Theorems Extension to ODE Systems Closure
348,
Well-Posedness of Initial Value Problems Well-Posedness of Initial Value Problems Uniqueness Theorems Extension to ODE Systems Closure Pierre Simon de Laplace (1749-1827): We may regard the present state of the universe as the eect of its past and the cause of its future. An intellect which at a certain moment would know all forces that set nature in motion, and all positions of all items of which nature is composed, if this intellect were also vast enough to submit these data to analysis, it would embrace in a single formula the movements of the greatest bodies of the universe and those of the tiniest atom; for such an intellect nothing would be uncertain and the future just like the past would be present before its eyes.
349,
Well-Posedness of Initial Value Problems Well-Posedness of Initial Value Problems Uniqueness Theorems Extension to ODE Systems
Closure
y = f (x , y ), y (x0 ) = y0 From (x , y ), the trajectory develops according to y = f (x , y ). The new point: (x + x , y + f (x , y )x ) The slope now: f (x + x , y + f (x , y )x ) Question: Was the old direction of approach valid? With x 0, directions appropriate, if
x x
lim f (x , y ) = f ( x , y ( x )),
i.e. if f (x , y ) is continuous. If f (x , y ) = , then y = and trajectory is vertical. For the same value of x, several values of y ! y (x ) not a function, unless f (x , y ) = , i.e. f (x , y ) is bounded.
350,
Well-Posedness of Initial Value Problems Well-Posedness of Initial Value Problems Uniqueness Theorems Extension to ODE Systems
Peanos theorem: If f (x , y ) is continuous Closure and bounded in a rectangle R = {(x , y ) : |x x0 | < h, |y y0 | < k }, with |f (x , y )| M < , then the IVP y = f (x , y ), y (x0 ) = y0 has a solution y (x ) dened in a neighbourhood of x0 .
[Figure: the rectangle R = {|x - x_0| < h, |y - y_0| < k} around (x_0, y_0), with the solution confined within the cone of slopes ±M; panels (a) Mh <= k and (b) Mh >= k.]
Guaranteed neighbourhood: [x_0 - δ, x_0 + δ], where δ = min(h, k/M) > 0.
351,
Well-Posedness of Initial Value Problems Well-Posedness of Initial Value Problems Uniqueness Theorems Extension to ODE Systems
Example: y = Function f (x , y ) =
y 1 x
Closure
y 1 , y (0) = 1 x
Premises of existence theorem not satised. But, premises here are sucient, not necessary! Result inconclusive. The IVP has solutions: y (x ) = 1 + cx for all values of c . The solution is not unique. Example: y 2 = |y |, y (0) = 0 But, there are two solutions: y (x ) = 0 and y (x ) = sgn(x ) x 2 /4.
352,
Well-Posedness of Initial Value Problems Well-Posedness of Initial Value Problems Uniqueness Theorems Extension to ODE Systems Closure
Mathematical model admits of extraneous solution(s)? Physical system itself can exhibit alternative behaviours?
Mathematical model of the system is not complete. The initial value problem is not well-posed.
353,
Well-Posedness of Initial Value Problems Well-Posedness of Initial Value Problems Uniqueness Theorems Extension to ODE Systems Closure
unique solution: y1 (x ).
Applying a small perturbation to the initial condition, the new IVP: y = f (x , y ), y (x0 ) = y0 +
unique solution: y2 (x )
Question: By how much y2 (x ) diers from y1 (x ) for x > x0 ? Large dierence: solution sensitive to initial condition
Well-posed IVP: An initial value problem is said to be well-posed if there exists a solution to it, the solution is unique and it depends continuously on the initial conditions.
Existence and Uniqueness Theory Well-Posedness of Initial Value Problems Uniqueness Theorems Extension to ODE Systems Closure
354,
Uniqueness Theorems
Lipschitz condition:
|f (x , y ) f (x , z )| L|y z |
Theorem: If f (x , y ) is a continuous function satisfying a Lipschitz condition on a strip S = {(x , y ) : a < x < b , < y < }, then for any point (x0 , y0 ) S, the initial value problem of y = f (x , y ), y (x0 ) = y0 is well-posed. Assume y1 (x ) and y2 (x ): solutions of the ODE y = f (x , y ) with initial conditions y (x0 ) = (y1 )0 and y (x0 ) = (y2 )0 Consider E (x ) = [y1 (x ) y2 (x )]2 . Applying Lipschitz condition,
) = 2(y1 y2 )[f (x , y1 ) f (x , y2 )] y2 E (x ) = 2(y1 y2 )(y1
|E (x )| 2L(y1 y2 )2 = 2LE (x ).
Existence and Uniqueness Theory Well-Posedness of Initial Value Problems Uniqueness Theorems Extension to ODE Systems Closure
355,
Uniqueness Theorems
E (x ) 2L E (x )
x x0
E (x ) dx 2L(x x0 ) E (x )
Integrating, E (x ) E (x0 )e 2L(x x0 ) . Hence, |y1 (x ) y2 (x )| e L(x x0 ) |(y1 )0 (y2 )0 |. Since x [a, b ], e L(x x0 ) is nite. |(y1 )0 (y2 )0 | = |y1 (x ) y2 (x )| e L(x x0 ) continuous dependence of the solution on initial condition In particular, (y1 )0 = (y2 )0 = y0 y1 (x ) = y2 (x ) x [a, b ]. The initial value problem is well-posed.
Existence and Uniqueness Theory Well-Posedness of Initial Value Problems Uniqueness Theorems Extension to ODE Systems Closure
356,
Uniqueness Theorems
f Picards theorem: If f (x , y ) and y are continuous and bounded on a rectangle R = {(x , y ) : a < x < b , c < y < d }, then for every (x0 , y0 ) R, the IVP y = f (x , y ), y (x0 ) = y0 has a unique solution in some neighbourhood |x x0 | h.
From the mean value theorem, f (x , y1 ) f (x , y2 ) = With Lipschitz constant L = sup f (x , )(y1 y2 ). y ,
f y
Lipschitz condition is satised lavishly ! Note: All these theorems give only sucient conditions! Hypotheses of Picards theorem Lipschitz condition Well-posedness Existence and uniqueness
Existence and Uniqueness Theory Well-Posedness of Initial Value Problems Uniqueness Theorems Extension to ODE Systems Closure
357,
dy = f (x , y ), y (x0 ) = y0 dx
Lipschitz condition: f (x , y ) f (x , z ) L y z
= (y1 y2 )T (y1 y2 )
f y
Partial derivative
f y
Existence and Uniqueness Theory Well-Posedness of Initial Value Problems Uniqueness Theorems Extension to ODE Systems Closure
358,
y = A (x )y + g (x ), y (x0 ) = y0 Rate function: f (x , y ) = A (x )y + g (x ) Continuity and boundedness of the coecient functions in A (x ) and g (x ) are sucient for well-posedness. An n-th order linear ordinary dierential equation y (n) +P1 (x )y (n1) +P2 (x )y (n2) + +Pn1 (x )y +Pn (x )y = R (x ) State vector: z = [y y y y (n1) ]T = z , z = z , , z With z1 2 3 2 n1 = zn and zn from the ODE,
Existence and Uniqueness Theory Well-Posedness of Initial Value Problems Uniqueness Theorems Extension to ODE Systems Closure
359,
Closure
A sizeable segment of current research: ill-posed problems Dynamics of some nonlinear systems
For boundary value problems, No general criteria for existence and uniqueness Note: Taking clue from the shooting method, a BVP in ODEs can be visualized as a complicated root-nding problem! Multiple solutions or non-existence of solution is no surprise.
Existence and Uniqueness Theory Well-Posedness of Initial Value Problems Uniqueness Theorems Extension to ODE Systems Closure
360,
Points to note
For a solution of initial value problems, questions of existence, uniqueness and continuous dependence on initial condition are of crucial importance. These issues pertain to aspects of practical relevance regarding a physical system and its dynamic simulation Lipschitz condition is the tightest (available) criterion for deciding these questions regarding well-posedness
361,
Outline
Formation of Dierential Equations and Their Solutio Separation of Variables ODEs with Rational Slope Functions Some Special ODEs Exact Dierential Equations and Reduction to the Ex First Order Linear (Leibnitz) ODE and Associated Fo Orthogonal Trajectories Modelling and Simulation
First Order Ordinary Dierential Equations Formation of Dierential Equations and Their Solutions Separation of Variables ODEs with Rational Slope Functions Some Special ODEs Exact Dierential Equations and Reduction to the Exact Form First Order Linear (Leibnitz) ODE and Associated Forms Orthogonal Trajectories Modelling and Simulation
362,
Formation of Dierential Equations and Their Solutio Formation of Dierential Equations and Their Solutions Separation of Variables ODEs with Rational Slope Functions
Order Linear (Leibnitz) ODE and Associated Fo A dierential equation represents a class of First functions. Orthogonal Trajectories
Some Special ODEs Exact Dierential Equations and Reduction to the Ex Modelling and Simulation
Example: y (x ) = cx k With
dy dx
= ckx k 1 and xy
d2y dx 2
= ck (k 1)x k 2 , dy dx
2
d 2y =x dx 2
dy dx
363,
Separation of Variables
ODE form with separable variables: y = f (x , y ) Solution as quadrature: (y )dy =
(x ) dy = or (y )dy = (x )dx dx (y )
Formation of Dierential Equations and Their Solutio Separation of Variables ODEs with Rational Slope Functions Some Special ODEs Exact Dierential Equations and Reduction to the Ex First Order Linear (Leibnitz) ODE and Associated Fo Orthogonal Trajectories Modelling and Simulation
(x )dx + c .
364,
of Dierential Equations and Their Solutio ODEs with Rational Slope FunctionsFormation Separation of Variables ODEs with Rational Slope Functions
First Order Linear (Leibnitz) ODE and Associated Fo f1 (x , y ) Orthogonal Trajectories y = Modelling and Simulation f2 (x , y ) If f1 and f2 are homogeneous functions of n-th degree, then substitution y = ux separates variables x and u . dy 1 (y /x ) du 1 (u ) dx 2 (u ) = u +x = = du dx 2 (y /x ) dx 2 (u ) x 1 (u ) u 2 (u )
For y =
coordinate shift dy dY = dx dX
x = X + h, y = Y + k y = produces
a1 X + b1 Y + (a1 h + b1 k + c1 ) dY = . dX a2 X + b2 Y + (a2 h + b2 k + c2 ) Choose h and k such that a1 h + b1 k + c1 = 0 = a2 h + b2 k + c2 . If the system is inconsistent, then substitute u = a x + b y .
365,
Formation of Dierential Equations and Their Solutio Separation of Variables ODEs with Rational Slope Functions Some Special ODEs Exact Dierential Equations and Reduction to the Ex First Order Linear (Leibnitz) ODE and Associated Fo Orthogonal Trajectories Modelling and Simulation
dp [x + f (p )] = 0 dx
Singular solution is the envelope of the family of straight lines that constitute the general solution.
366,
Formation of Dierential Equations and Their Solutio Separation of Variables ODEs with Rational Slope Functions Some Special ODEs Exact Dierential Equations and Reduction to the Ex First Order Linear (Leibnitz) ODE and Associated Fo Orthogonal Trajectories Modelling and Simulation
Substitute y = p and solve f (x , p , p ) = 0 for p (x ). Second order ODEs with independent variable not appearing explicitly f (y , y , y ) = 0 Use y = p and y = dp dy dp dp dp = =p f (y , p , p ) = 0. dx dy dx dy dy
367,
Formation of Dierential Equations and Their Solutio Exact Dierential Equations and Reduction to the Exact For Separation of Variables ODEs with Rational Slope Functions
and N =
, y
y ,
or,
Some Special ODEs Exact Dierential Equations and Reduction to the Ex First Order Linear (Leibnitz) ODE and Associated Fo Orthogonal Trajectories Modelling and Simulation
M N = y x
M y
N x
368,
Formation of Dierential Equations and Their Solutio Separation of Variables ODEs with Rational Slope Functions Some Special ODEs Exact Dierential Equations and Reduction to the Ex First Order Linear (Leibnitz) ODE and Associated Fo Orthogonal Trajectories Modelling and Simulation
P (x )dx .
Integrating factor: F (x ) = e ye
R P (x )dx
Q (x )e
P (x )dx
dx + C
369,
Formation of Dierential Equations and Their Solutio Separation of Variables ODEs with Rational Slope Functions Some Special ODEs Exact Dierential Equations and Reduction to the Ex First Order Linear (Leibnitz) ODE and Associated Fo Orthogonal Trajectories Modelling and Simulation k
dz + (1 k )P (x )z = (1 k )Q (x ), dx in the Leibnitz form. Riccati equation y = a(x ) + b (x )y + c (x )y 2 If one solution y1 (x ) is known, then propose y (x ) = y1 (x ) + z (x ).
y1 (x ) + z (x ) = a(x ) + b (x )[y1 (x ) + z (x )] + c (x )[y1 (x ) + z (x )]2 (x ) = a(x ) + b (x )y (x ) + c (x )[y (x )]2 , Since y1 1 1
= (1 k )y k dy dx gives
370,
Orthogonal Trajectories
a family of curves
Formation of Dierential Equations and Their Solutio Separation of Variables ODEs with Rational Slope Functions Some Special ODEs Exact Dierential Equations and Reduction to the Ex First Order Linear (Leibnitz) ODE and Associated Fo Orthogonal Trajectories Modelling and Simulation
Dierential equation of the family of curves: dy = f1 (x , y ) dx Slope of curves orthogonal to (x , y , c ) = 0: dy 1 = dx f1 (x , y ) Solving this ODE, another family of curves (x , y , k ) = 0. Orthogonal trajectories If (x , y , c ) = 0 represents the potential lines (contours), then (x , y , k ) = 0 will represent the streamlines!
371,
Points to note
Formation of Dierential Equations and Their Solutio Separation of Variables ODEs with Rational Slope Functions Some Special ODEs Exact Dierential Equations and Reduction to the Ex First Order Linear (Leibnitz) ODE and Associated Fo Orthogonal Trajectories Modelling and Simulation
Meaning and solution of ODEs Separating variables Exact ODEs and integrating factors Linear (Leibnitz) equations Orthogonal families of curves
372,
Outline
Introduction Homogeneous Equations with Constant Coecients Euler-Cauchy Equation Theory of the Homogeneous Equations Basis for Solutions
Second Order Linear Homogeneous ODEs Introduction Homogeneous Equations with Constant Coecients Euler-Cauchy Equation Theory of the Homogeneous Equations Basis for Solutions
373,
Introduction
Second order ODE: f (x , y , y , y ) = 0
Introduction Homogeneous Equations with Constant Coecients Euler-Cauchy Equation Theory of the Homogeneous Equations Basis for Solutions
Special case of a linear (non-homogeneous) ODE: y + P (x )y + Q (x )y = R (x ) Non-homogeneous linear ODE with constant coecients: y + ay + by = R (x ) For R (x ) = 0, linear homogeneous dierential equation y + P (x )y + Q (x )y = 0 and linear homogeneous ODE with constant coecients y + ay + by = 0
374,
Introduction Homogeneous Equations with Constant Coecients Homogeneous Equations with Constant Coecients Euler-Cauchy Equation Theory of the Homogeneous Equations Basis for Solutions
y + ay + by = 0 Assume y = e x Substitution: (2 + a + b )e x = 0 Auxiliary equation: 2 + a + b = 0 Solve for 1 and 2 : Solutions: e 1 x and e 2 x Three cases
y = e x and y = 2 e x .
375,
Introduction Homogeneous Equations with Constant Coecients Homogeneous Equations with Constant Coecients Euler-Cauchy Equation
(a2
= 4b ): 1 = 2 = = a 2
y (x ) = c1 y1 (x ) + c2 y2 (x ) = (c1 + c2 x )e x
a i Complex conjugate (a2 < 4b ): 1,2 = 2
ax
ax
with A = c1 + c2 , B = i (c1 c2 ).
376,
Euler-Cauchy Equation
Introduction Homogeneous Equations with Constant Coecients Euler-Cauchy Equation Theory of the Homogeneous Equations Basis for Solutions
x 2 y + axy + by = 0 Substituting y = x k , auxiliary (or indicial) equation: k 2 + (a 1)k + b = 0 1. Roots real and distinct [(a 1)2 > 4b ]: k1 = k2 . y (x ) = c1 x k1 + c2 x k2 .
1 2. Roots real and equal [(a 1)2 = 4b ]: k1 = k2 = k = a 2 .
y (x ) = (c1 + c2 ln x )x k .
y (x ) = x
a1 2
a1 2
cos( ln x ).
377,
Introduction Theory of the Homogeneous Equations Homogeneous Equations with Constant Coecients Euler-Cauchy Equation Theory of the Homogeneous Equations Basis for Solutions
y + P (x )y + Q (x )y = 0 Well-posedness of its IVP: The initial value problem of the ODE, with arbitrary initial conditions y (x0 ) = Y0 , y (x0 ) = Y1 , has a unique solution, as long as P (x ) and Q (x ) are continuous in the interval under question. At least two linearly independent solutions:
y1 (x ): IVP with initial conditions y (x0 ) = 1, y (x0 ) = 0 y2 (x ): IVP with initial conditions y (x0 ) = 0, y (x0 ) = 1 c1 y1 (x ) + c2 y2 (x ) = 0 c1 = c2 = 0
378,
Introduction Theory of the Homogeneous Equations Homogeneous Equations with Constant Coecients Euler-Cauchy Equation
Solutions y1 and y2 are linearly dependent, if and only if x0 such that W [y1 (x0 ), y2 (x0 )] = 0. W [y1 (x0 ), y2 (x0 )] = 0 W [y1 (x ), y2 (x )] = 0 x . W [y1 (x1 ), y2 (x1 )] = 0 W [y1 (x ), y2 (x )] = 0 x , and y1 (x ) and y2 (x ) are linearly independent solutions. Complete solution: If y1 (x ) and y2 (x ) are two linearly independent solutions, then the general solution is y (x ) = c1 y1 (x ) + c2 y2 (x ).
379,
Introduction Theory of the Homogeneous Equations Homogeneous Equations with Constant Coecients Euler-Cauchy Equation
If y1 (x ) and y2 (x ) are linearly dependent, then y2Solutions = ky1 . Basis for In particular, W [y1 (x0 ), y2 (x0 )] = 0 Conversely, if there is a value x0 , where W [y1 (x0 ), y2 (x0 )] = then for y1 (x0 ) y2 (x0 ) (x ) y (x ) y1 0 2 0 = 0,
y1 (x0 ) y2 (x0 ) c1 = 0, y1 (x0 ) y2 (x0 ) c2 coecient matrix is singular. c1 Choose non-zero and frame y (x ) = c1 y1 + c2 y2 , satisfying c2 IVP y + Py + Qy = 0, y (x0 ) = 0, y (x0 ) = 0.
380,
Pick any candidate solution Y(x), choose a point x0, evaluate the functions y1, y2, Y and their derivatives at that point, frame
  [ y1(x0)   y2(x0)  ] [C1]   [Y(x0) ]
  [ y1′(x0)  y2′(x0) ] [C2] = [Y′(x0)]
and ask for the solution C1, C2.
Unique solution for C1, C2. Hence the particular solution y(x) = C1 y1(x) + C2 y2(x) is the unique solution of the IVP y″ + Py′ + Qy = 0, y(x0) = Y(x0), y′(x0) = Y′(x0). But that is the candidate function Y(x)! Hence Y(x) = y(x).
381,
No guaranteed procedure to identify two basis members! If one solution y1(x) is available, how to find another?
Reduction of order: assume the second solution as y2(x) = u(x)y1(x) and determine u(x) such that y2(x) satisfies the ODE.
  u″y1 + 2u′y1′ + uy1″ + P(u′y1 + uy1′) + Quy1 = 0
  ⇒ u″y1 + (2y1′ + Py1)u′ + u(y1″ + Py1′ + Qy1) = 0
Since y1″ + Py1′ + Qy1 = 0, we have y1 u″ + (2y1′ + Py1)u′ = 0.
382,
Solving this first order ODE for u′:
  u′ = (1/y1²) e^{−∫P dx},
  u = ∫ (1/y1²) e^{−∫P dx} dx,
and hence
  y2(x) = y1(x) ∫ (1/y1²) e^{−∫P dx} dx.
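A short SymPy sketch of reduction of order, on an assumed example x²y″ − 3xy′ + 4y = 0 with the known solution y1 = x² (the example is not from the slides):

    import sympy as sp

    x = sp.symbols('x', positive=True)
    P = -3/x                  # normal form: y'' - (3/x) y' + (4/x^2) y = 0
    y1 = x**2
    u_prime = sp.exp(-sp.integrate(P, x)) / y1**2    # u' = e^{-∫P dx} / y1^2
    y2 = sp.simplify(y1 * sp.integrate(u_prime, x))  # y2 = y1 ∫ u' dx
    print(y2)                 # x**2*log(x), the expected second basis member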
383,
Function space perspective: the operator D means differentiation; it operates on an infinite-dimensional function space as a linear transformation. It maps all constant functions to zero.
The second derivative, or D², is an operator that has a two-dimensional null space, c1 + c2x, with basis {1, x}. Examples of composite operators
Solution of [D² + P(x)D + Q(x)]y = 0: description of the null space, or a basis for it.
Analogous to the solution of Ax = 0, i.e. development of a basis for Null(A).
384,
Points to note
Second order linear homogeneous ODEs
Wronskian and related results
Solution basis
Reduction of order
Null space of a differential operator
Second Order Linear Non-Homogeneous ODEs Linear ODEs and Their Solutions Method of Undetermined Coecients Method of Variation of Parameters Closure
385,
Outline
Second Order Linear Non-Homogeneous ODEs  Linear ODEs and Their Solutions  Method of Undetermined Coefficients  Method of Variation of Parameters  Closure
386,
In ordinary vector space: Ax = b. The system is consistent. A solution x∗. Any alternative solution x: x − x∗ satisfies Ax = 0, i.e. is in the null space of A. Complete solution: x = x∗ + Σi ci (x0)i. Methodology: find the null space of A, i.e. basis members (x0)i; find x∗ and compose.
In infinite-dimensional function space: y″ + Py′ + Qy = R, with P(x), Q(x), R(x) continuous. A solution yp(x). Any alternative solution y(x): y(x) − yp(x) satisfies y″ + Py′ + Qy = 0, i.e. is in the null space of D² + P(x)D + Q(x). Complete solution: yp(x) + Σi ci yi(x). Methodology: find the null space of D² + P(x)D + Q(x), i.e. basis members yi(x); find yp(x) and compose.
387,
Procedure to solve y″ + P(x)y′ + Q(x)y = R(x):
1. First, solve the corresponding homogeneous equation, obtain a basis with two solutions and construct yh(x) = c1 y1(x) + c2 y2(x).
2. Next, find one particular solution yp(x) of the NHE and compose the complete solution y(x) = yh(x) + yp(x) = c1 y1(x) + c2 y2(x) + yp(x).
3. If some initial or boundary conditions are known, they can be imposed now to determine c1 and c2.
Caution: if y1 and y2 are two solutions of the NHE, then do not expect c1 y1 + c2 y2 to satisfy the equation.
Implication of linearity or superposition: with zero initial conditions, if y1 and y2 are responses due to inputs R1(x) and R2(x), respectively, then the response due to input c1 R1 + c2 R2 is c1 y1 + c2 y2.
388,
Method of Undetermined Coefficients
y″ + ay′ + by = R(x)
What kind of function to propose as yp(x) if R(x) = x^n? And what if R(x) = e^{λx}? If R(x) = x^n + e^{λx}, i.e. in the form k1 R1(x) + k2 R2(x)? The principle of superposition (linearity).

Table: Candidate solutions for linear non-homogeneous ODEs

  RHS function R(x)                              Candidate solution yp(x)
  pn(x)                                          qn(x)
  e^{λx}                                         k e^{λx}
  cos ωx or sin ωx                               k1 cos ωx + k2 sin ωx
  e^{λx} cos ωx or e^{λx} sin ωx                 k1 e^{λx} cos ωx + k2 e^{λx} sin ωx
  pn(x) e^{λx}                                   qn(x) e^{λx}
  pn(x) cos ωx or pn(x) sin ωx                   qn(x) cos ωx + rn(x) sin ωx
  pn(x) e^{λx} cos ωx or pn(x) e^{λx} sin ωx     qn(x) e^{λx} cos ωx + rn(x) e^{λx} sin ωx
389,
In each case, the first official proposal: yp = ke^{3x}
(a) y(x) = c1 e^x + c2 e^{5x} − e^{3x}/4
(b) y(x) = c1 e^{2x} + c2 e^{3x} + x e^{3x}
(c) y(x) = c1 e^{3x} + c2 x e^{3x} + (1/2) x² e^{3x}
Modification rule: if the candidate function (ke^{λx}, k1 cos ωx + k2 sin ωx or k1 e^{λx} cos ωx + k2 e^{λx} sin ωx) is a solution of the corresponding HE, with λ, iω or λ ± iω (respectively) satisfying the auxiliary equation, then modify it by multiplying with x. In the case of λ being a double root, i.e. both e^{λx} and xe^{λx} being solutions of the HE, choose yp = kx²e^{λx}.
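A hedged SymPy sketch checking the modification rule. The three ODEs below are assumed examples (right-hand side e^{3x}, with 3 not a root, a simple root, and a double root of the auxiliary equation), chosen to be consistent with the solutions quoted above:

    import sympy as sp

    x = sp.symbols('x')
    y = sp.Function('y')
    odes = [
        sp.Eq(y(x).diff(x, 2) - 6*y(x).diff(x) + 5*y(x), sp.exp(3*x)),  # 3 not a root
        sp.Eq(y(x).diff(x, 2) - 5*y(x).diff(x) + 6*y(x), sp.exp(3*x)),  # 3 a simple root
        sp.Eq(y(x).diff(x, 2) - 6*y(x).diff(x) + 9*y(x), sp.exp(3*x)),  # 3 a double root
    ]
    for ode in odes:
        print(sp.dsolve(ode))
    # expected particular parts: -exp(3x)/4, x*exp(3x), x**2*exp(3x)/2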
390,
yh(x) = c1 y1(x) + c2 y2(x), in which c1 and c2 are constant parameters. For the solution of the NHE, how about variable parameters?
Propose yp(x) = u1(x)y1(x) + u2(x)y2(x) and force yp(x) to satisfy the ODE: a single second order ODE in u1(x) and u2(x). We need one more condition to fix them.
391,
Differentiating, yp′ = u1′y1 + u1y1′ + u2′y2 + u2y2′.
Condition u1′y1 + u2′y2 = 0 gives yp′ = u1y1′ + u2y2′.
Differentiating again, yp″ = u1′y1′ + u2′y2′ + u1y1″ + u2y2″.
Substituting and rearranging,
  u1(y1″ + P(x)y1′ + Q(x)y1) + u2(y2″ + P(x)y2′ + Q(x)y2) + u1′y1′ + u2′y2′ = R(x).
As y1 and y2 satisfy the associated HE, u1′y1′ + u2′y2′ = R(x).
392,
Solving u1′y1 + u2′y2 = 0 and u1′y1′ + u2′y2′ = R for u1′ and u2′ (W being the Wronskian of y1, y2):
  u1′ = −y2R/W and u2′ = y1R/W,
so that u1 = −∫(y2R/W)dx, u2 = ∫(y1R/W)dx and yp = u1y1 + u2y2.
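A minimal SymPy sketch of these formulas on an assumed example, y″ + y = sec x, where the method of undetermined coefficients does not apply:

    import sympy as sp

    x = sp.symbols('x')
    y1, y2, R = sp.cos(x), sp.sin(x), sp.sec(x)       # basis of the HE and the RHS
    W = sp.simplify(y1*y2.diff(x) - y2*y1.diff(x))    # Wronskian (= 1 here)
    u1 = sp.integrate(-y2*R/W, x)                     # u1' = -y2 R / W
    u2 = sp.integrate( y1*R/W, x)                     # u2' =  y1 R / W
    yp = sp.simplify(u1*y1 + u2*y2)
    print(yp)   # expected: cos(x)*log(cos(x)) + x*sin(x)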
In contrast to the method of undetermined coefficients, variation of parameters is general: it is applicable for all continuous functions P(x), Q(x) and R(x).
393,
Points to note
Function space perspective of linear ODEs Method of undetermined coecients Method of variation of parameters
394,
Outline
Higher Order Linear ODEs  Theory of Linear ODEs  Homogeneous Equations with Constant Coefficients  Non-Homogeneous Equations  Euler-Cauchy Equation of Higher Order
395,
Wronskian: for the HE, suppose we have n solutions y1(x), y2(x), ..., yn(x). Assemble the state vectors in the matrix
  Y(x) = [ y1        y2        ...  yn
           y1′       y2′       ...  yn′
           y1″       y2″       ...  yn″
           ...
           y1^(n−1)  y2^(n−1)  ...  yn^(n−1) ]
  W(y1, y2, ..., yn) = det[Y(x)]
396,
Linear dependence of solutions:
  Σ_{i=1}^n ki yi(x) = 0 ⇒ Σ_{i=1}^n ki yi^(j)(x) = 0 for j = 1, 2, 3, ..., (n − 1)
  ⇒ [Y(x)]k = 0 ⇒ [Y(x)] is singular ⇒ W[y1(x), y2(x), ..., yn(x)] = 0.
If the Wronskian is zero at x = x0, then Y(x0) is singular and a non-zero k ∈ Null[Y(x0)] gives Σ_{i=1}^n ki yi(x) = 0, implying y1(x), y2(x), ..., yn(x) to be linearly dependent.
Zero Wronskian at some x = x0 implies zero Wronskian everywhere. Non-zero Wronskian at some x = x1 ensures non-zero Wronskian everywhere and the corresponding solutions are linearly independent.
With n linearly independent solutions y1(x), y2(x), ..., yn(x) of the HE, we have its general solution yh(x) = Σ_{i=1}^n ci yi(x), acting as the complementary function for the NHE.
397,
Homogeneous Equations with Constant Coefficients
Construction of the basis:
1. For every simple real root λ = μ, e^{μx} is a solution.
2. For every simple pair of complex roots λ = μ ± iω, e^{μx} cos ωx and e^{μx} sin ωx are linearly independent solutions.
3. For every real root λ = μ of multiplicity r: e^{μx}, xe^{μx}, x²e^{μx}, ..., x^{r−1}e^{μx} are all linearly independent solutions.
4. For every complex pair of roots λ = μ ± iω of multiplicity r: e^{μx} cos ωx, e^{μx} sin ωx, xe^{μx} cos ωx, xe^{μx} sin ωx, ..., x^{r−1}e^{μx} cos ωx, x^{r−1}e^{μx} sin ωx are the required solutions.
398,
Non-Homogeneous Equations
Method of undetermined coefficients: extension of the second order case.
Method of variation of parameters: propose
  yp(x) = Σ_{i=1}^n ui(x) yi(x)
and impose conditions on the ui(x) so that substitution into the ODE determines the ui′(x).
399,
Non-Homogeneous Equations
Since each yi(x) is a solution of the HE, the conditions reduce to
  Σ_{i=1}^n ui′(x) yi^(n−1)(x) = R(x).
400,
Points to note
Theory of Linear ODEs Homogeneous Equations with Constant Coecients Non-Homogeneous Equations Euler-Cauchy Equation of Higher Order
Laplace Transforms Introduction Basic Properties and Results Application to Dierential Equations Handling Discontinuities Convolution Advanced Issues
401,
Outline
Laplace Transforms  Introduction  Basic Properties and Results  Application to Differential Equations  Handling Discontinuities  Convolution  Advanced Issues
402,
Introduction
Classical perspective
The entire differential equation is known in advance. Go for a complete solution first. Afterwards, use the initial (or other) conditions.
You may drive the plant with different kinds of inputs on different occasions.
Implication
Left-hand side of the ODE and the initial conditions are known a priori. Right-hand side, R (x ), changes from task to task.
403,
Introduction
If there is a sudden voltage uctuation, what happens to the equipment connected to the power line?
Or, does anything happen in the immediate future? Something certainly happens: the IVP has a solution! Laplace transforms provide a tool to find the solution, in spite of the discontinuity of R(x).
Integral transform:
  T[f(t)](s) = ∫_a^b K(s, t) f(t) dt
404,
Introduction
Laplace transform:
  L{f(t)} = F(s) = ∫_0^∞ e^{−st} f(t) dt = lim_{b→∞} ∫_0^b e^{−st} f(t) dt
When this integral exists, f(t) has its Laplace transform.
Sufficient condition: f(t) is piecewise continuous, and it is of exponential order, i.e. |f(t)| < Me^{ct} for some (finite) M and c.
405,
Linearity: L{af(t) + bg(t)} = aL{f(t)} + bL{g(t)}
First shifting property or the frequency shifting rule: L{e^{at} f(t)} = F(s − a)
Laplace transforms of some elementary functions:
  L(1) = ∫_0^∞ e^{−st} dt = 1/s,
  L(t) = 1/s²,
  L(t^n) = n!/s^{n+1} (for non-negative integer n),
  L(t^a) = Γ(a + 1)/s^{a+1} (for a ∈ R+),
  and L(e^{at}) = 1/(s − a).
406,
Laplace transform of a derivative:
  L{f′(t)} = ∫_0^∞ e^{−st} f′(t) dt = [e^{−st} f(t)]_0^∞ + s ∫_0^∞ e^{−st} f(t) dt = sL{f(t)} − f(0)
Using this process recursively,
  L{f^(n)(t)} = s^n L{f(t)} − s^{n−1} f(0) − s^{n−2} f′(0) − ··· − f^{(n−1)}(0).
For the integral g(t) = ∫_0^t f(τ) dτ, g(0) = 0, and L{f(t)} = L{g′(t)} = sL{g(t)} − g(0) = sL{g(t)}, so L{g(t)} = (1/s)L{f(t)}.
407,
Example: initial value problem of a linear constant coefficient ODE
  y″ + ay′ + by = r(t), y(0) = K0, y′(0) = K1
Laplace transforms of both sides of the ODE:
  s²Y(s) − sy(0) − y′(0) + a[sY(s) − y(0)] + bY(s) = R(s)
  ⇒ (s² + as + b)Y(s) = (s + a)K0 + K1 + R(s)
A differential equation in y(t) has been converted to an algebraic equation in Y(s).
Transfer function: ratio of the Laplace transform of the output function y(t) to that of the input function r(t), with zero initial conditions:
  Q(s) = Y(s)/R(s) = 1/(s² + as + b) (in this case)
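A hedged SymPy sketch of this transform-domain procedure for an assumed IVP (a = 3, b = 2, unit-step input, y(0) = 1, y′(0) = 0):

    import sympy as sp

    t, s = sp.symbols('t s', positive=True)
    a, b, K0, K1 = 3, 2, 1, 0
    R = sp.laplace_transform(sp.Heaviside(t), t, s, noconds=True)   # R(s) = 1/s
    Y = ((s + a)*K0 + K1 + R) / (s**2 + a*s + b)                    # algebraic step
    y = sp.inverse_laplace_transform(Y, s, t)
    print(sp.simplify(y))   # expected: 1/2 + exp(-t) - exp(-2*t)/2 (for t > 0)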
408,
Handling Discontinuities
Unit step function:
  u(t − a) = 0 if t < a, 1 if t > a
Its Laplace transform:
  L{u(t − a)} = ∫_0^∞ e^{−st} u(t − a) dt = ∫_0^a 0 dt + ∫_a^∞ e^{−st} dt = e^{−as}/s
For an input f(t) with a time delay,
  f(t − a)u(t − a) = 0 if t < a, f(t − a) if t > a,
which has its Laplace transform as
  L{f(t − a)u(t − a)} = ∫_a^∞ e^{−st} f(t − a) dt = e^{−as} L{f(t)}.
409,
Handling Discontinuities
Define
  fk(t − a) = 1/k for a ≤ t ≤ a + k, 0 otherwise
            = (1/k) u(t − a) − (1/k) u(t − a − k),
so that
  ∫_0^∞ fk(t − a) dt = ∫_a^{a+k} (1/k) dt = 1.
[Figure: (a) the step functions u(t − a)/k and −u(t − a − k)/k, (b) their composition, (c) the resulting rectangular pulse fk(t − a) of height 1/k over [a, a + k].]
410,
Handling Discontinuities
In the limit as k → 0, fk(t − a) tends to the unit impulse or Dirac delta function δ(t − a):
  δ(t − a) = ∞ at t = a, 0 otherwise; with ∫_0^∞ δ(t − a) dt = 1.
  L{δ(t − a)} = lim_{k→0} L{fk(t − a)} = e^{−as}
Through step and impulse functions, Laplace transform method can handle IVPs with discontinuous inputs.
411,
Convolution
A generalized product of two functions:
  h(t) = f(t) ∗ g(t) = ∫_0^t f(τ) g(t − τ) dτ
Taking the Laplace transform and changing the order of integration over the region τ ≤ t < ∞,
  H(s) = ∫_0^∞ e^{−st} ∫_0^t f(τ) g(t − τ) dτ dt = ∫_0^∞ f(τ) ∫_{t=τ}^∞ e^{−st} g(t − τ) dt dτ.
[Figure: the region of integration in the t-τ plane — (a) original order, (b) reversed order.]
412,
Convolution
Through the substitution t′ = t − τ,
  H(s) = ∫_0^∞ f(τ) ∫_0^∞ e^{−s(t′+τ)} g(t′) dt′ dτ = ∫_0^∞ f(τ) e^{−sτ} dτ ∫_0^∞ e^{−st′} g(t′) dt′
  ⇒ H(s) = F(s)G(s)
Convolution theorem: the Laplace transform of the convolution integral of two functions is given by the product of the Laplace transforms of the two functions.
Utilities:
413,
Points to note
A paradigm shift in solution of IVPs Handling discontinuous input functions Extension to ODE systems The idea of integral transforms
ODE Systems
414,
Outline
Fundamental Ideas Linear Homogeneous Systems with Constant Coeci Linear Non-Homogeneous Systems Nonlinear Systems
ODE Systems Fundamental Ideas Linear Homogeneous Systems with Constant Coecients Linear Non-Homogeneous Systems Nonlinear Systems
ODE Systems
415,
Fundamental Ideas
y′ = f(t, y). Solution: a vector function y = h(t). Autonomous system: y′ = f(y)
Linear system: y′ = A(t)y + g(t), giving
  autonomous systems if A and g are constant,
  homogeneous systems if g(t) = 0,
  homogeneous constant coefficient systems if A is constant and g(t) = 0.
ODE Systems
416,
Fundamental Ideas
For a homogeneous system, y′ = A(t)y, n linearly independent solutions y1(t), y2(t), ..., yn(t), assembled as columns of [Y(t)], give a basis.
General solution:
  y(t) = Σ_{i=1}^n ci yi(t) = [Y(t)] c
ODE Systems
417,
Linear Homogeneous Systems with Constant Coefficients
Attempt y = x e^{λt} ⇒ y′ = λx e^{λt}. Substitution: Ax e^{λt} = λx e^{λt} ⇒ Ax = λx
If A is diagonalizable: n linearly independent solutions yi = xi e^{λi t} corresponding to the n eigenpairs.
If A is not diagonalizable? All xi e^{λi t} together will not complete the basis. Try y = x te^{λt}? Substitution leads to x e^{λt} + λx te^{λt} = Ax te^{λt} ⇒ x e^{λt} = 0 ⇒ x = 0. Absurd!
ODE Systems
418,
Propose y = x te^{λt} + u e^{λt} instead. Linear independence here has two implications: in function space AND in ordinary vector space!
Substitution: x e^{λt} + λx te^{λt} + λu e^{λt} = Ax te^{λt} + Au e^{λt} ⇒ (A − λI)u = x
Solve for u, the generalized eigenvector of A. For Jordan blocks of larger sizes,
  y1 = x e^{λt}, y2 = x te^{λt} + u1 e^{λt}, y3 = (1/2) x t²e^{λt} + u1 te^{λt} + u2 e^{λt}, etc.
The Jordan canonical form (JCF) of A provides a set of basis functions to describe the complete solution of the ODE system.
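For the diagonalizable case, a minimal NumPy sketch assembling y(t) = Σ ci xi e^{λi t} from the eigenpairs of an assumed matrix A:

    import numpy as np

    A = np.array([[0.0, 1.0],
                  [-2.0, -3.0]])          # assumed example matrix
    lam, X = np.linalg.eig(A)             # eigenvalues and eigenvectors
    y0 = np.array([1.0, 0.0])             # initial condition y(0)
    c = np.linalg.solve(X, y0)            # y(0) = X c

    def y(t):
        # y(t) = sum_i c_i x_i exp(lambda_i t)
        return (X * np.exp(lam * t)) @ c

    print(y(0.0))   # reproduces y0
    print(y(1.0))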
ODE Systems
419,
Complementary function: yh(t) = Σ_{i=1}^n ci yi(t) = [Y(t)]c
Complete solution: y(t) = yh(t) + yp(t). We need to develop one particular solution yp.
Method of undetermined coefficients: based on g(t), select candidate functions Gk(t) and propose yp = Σ_k uk Gk(t), the vector coefficients uk being determined by substitution.
ODE Systems
420,
If A is a diagonalizable constant matrix, with X^{−1}AX = D, change variables to z = X^{−1}y, so that y = Xz:
  Xz′ = AXz + g(t) ⇒ z′ = X^{−1}AXz + X^{−1}g(t) = Dz + h(t) (say).
Single decoupled Leibnitz equations:
  zk′ = dk zk + hk(t), k = 1, 2, 3, ..., n.
ODE Systems
421,
Method of variation of parameters: if we can supply a basis Y(t) of the complementary function yh(t), then we propose yp(t) = [Y(t)]u(t).
Substitution leads to Y′u + Yu′ = AYu + g. Since Y′ = AY, Yu′ = g, or u′ = [Y]^{−1}g.
Complete solution: y(t) = yh + yp = [Y]c + [Y] ∫ [Y]^{−1}g dt
This method is completely general.
ODE Systems
422,
Points to note
Complementary functions in the case of constant coefficients
Particular solutions for all cases
Necessary Exercises: 1
Stability of Dynamic Systems Second Order Linear Systems Nonlinear Dynamic Systems Lyapunov Stability Analysis
423,
Outline
Stability of Dynamic Systems Second Order Linear Systems Nonlinear Dynamic Systems Lyapunov Stability Analysis
Stability of Dynamic Systems Second Order Linear Systems Nonlinear Dynamic Systems Lyapunov Stability Analysis
424,
In vector form, y′ = Ay.
Phase: a pair of values of y1 and y2
Phase plane: the plane of y1 and y2
Trajectory: a curve showing the evolution of the system for a particular initial value problem
Phase portrait: all trajectories together showing the complete picture of the behaviour of the dynamic system
Allowing only isolated equilibrium points, matrix A is non-singular: the origin is the only equilibrium point.
Eigenvalues of A: λ² − (a11 + a22)λ + (a11a22 − a12a21) = 0
425,
with p = a11 + a22 = λ1 + λ2 and q = a11a22 − a12a21 = λ1λ2.
Discriminant D = p² − 4q and
  λ1,2 = p/2 ± (1/2)√D,  so that q = λ1λ2 = (p/2)² − (√D/2)².
426,
[Figure: phase portraits of the linear system in the y1-y2 plane; the panels include (b) centre and (c) spiral among the critical-point types.]
427,
Table: Critical points of linear systems

  Type           Sub-type                 Eigenvalues                           Position in p-q chart
  saddle point                            real, opposite signs                  q < 0
  centre                                  pure imaginary                        q > 0, p = 0
  spiral                                  complex, both components non-zero     q > 0, p ≠ 0, D = p² − 4q < 0
  node                                    real, same sign                       q > 0, p ≠ 0, D ≥ 0
                 unequal in magnitude                                           D > 0
                 equal, diagonalizable                                          D = 0
                 equal, deficient                                               D = 0

[Figure: the p-q chart of critical points — saddle points for q < 0, centres on the q-axis (p = 0, q > 0), spirals and nodes separated by the parabola p² = 4q, with stable and unstable regions marked.]
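A small Python sketch (illustrative matrices assumed) classifying the critical point of y′ = Ay from p, q and D as in the table above:

    import numpy as np

    def classify(A):
        p = A[0, 0] + A[1, 1]               # trace = lam1 + lam2
        q = np.linalg.det(A)                # det   = lam1 * lam2
        D = p**2 - 4*q                      # discriminant
        if q < 0:
            return "saddle point (unstable)"
        if p == 0:
            return "centre (stable)"
        kind = "spiral" if D < 0 else "node"
        return f"{kind} ({'asymptotically stable' if p < 0 else 'unstable'})"

    print(classify(np.array([[0, 1], [-1, 0]])))    # centre
    print(classify(np.array([[0, 1], [-2, -3]])))   # stable node
    print(classify(np.array([[1, 2], [3, 0]])))     # saddle point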
428,
Determine all the critical points. Linearize the ODE system around each of them as y′ = J(y0)(y − y0).
With z = y − y0, analyze each neighbourhood from z′ = Jz. Assemble the outcomes of the local phase plane analyses.
Features of a dynamic system are typically captured by its critical points and their neighbourhoods.
Limit cycles
429,
Important terms
Stability: if y0 is a critical point of the dynamic system y′ = f(y) and for every ε > 0 there exists δ > 0 such that
  ||y(t0) − y0|| < δ ⇒ ||y(t) − y0|| < ε for all t > t0,
then y0 is a stable critical point. If, further, y(t) → y0 as t → ∞, then y0 is said to be asymptotically stable.
Positive definite function: a function V(y), with V(0) = 0, is called positive definite if V(y) > 0 for all y ≠ 0.
Lyapunov function: a positive definite function V(y), having continuous ∂V/∂yi, with a negative semi-definite rate of change V′ = [∇V(y)]^T f(y).
430,
Theorem: for a system y′ = f(y) with the origin as a critical point, if there exists a Lyapunov function V(y), then the system is stable at the origin, i.e. the origin is a stable critical point. Further, if V′(y) is negative definite, then it is asymptotically stable.
A generalization of the notion of total energy: negativity of its rate corresponds to trajectories tending to decrease this energy.
Note: Lyapunov's method becomes particularly important when a linearized model allows no analysis or when its results are suspect.
Caution: it is a one-way criterion only!
431,
Points to note
Analysis of second order systems Classication of critical points Nonlinear systems and local linearization Phase plane analysis Examples in physics, engineering, economics, biological and social systems Lyapunovs method of stability analysis
432,
Outline
Series Solutions and Special Functions  Power Series Method  Frobenius Method  Special Functions Defined as Integrals  Special Functions Arising as Solutions of ODEs
433,
Methods to solve an ODE in terms of elementary functions: restricted in scope. Theory allows study of the properties of solutions!
When elementary methods fail, gain knowledge about solutions through properties, and for actual evaluation develop infinite series.
Power series:
  y(x) = Σ_{n=0}^∞ an x^n = a0 + a1x + a2x² + a3x³ + a4x⁴ + a5x⁵ + ···
or in powers of (x − x0).
A simple exercise: try developing power series solutions in the above form and study their properties for the differential equations y″ + y = 0 and 4x²y″ = y.
434,
y + P (x )y + Q (x )y = 0 If P (x ) and Q (x ) are analytic at a point x = x0 , i.e. if they possess convergent series expansions in powers of (x x0 ) with some radius of convergence R, then the solution is analytic at x0 , and a power series solution is convergent at least for |x x0 | < R .
n=0 n=0
y (x ) = a0 + a1 (x x0 ) + a2 (x x0 )2 + a3 (x x0 )3 +
and assume y (x ) =
an x n .
435,
For x0 = 0, substituting
  y′(x) = Σ_{n=0}^∞ (n + 1)a_{n+1} x^n and y″(x) = Σ_{n=0}^∞ (n + 2)(n + 1)a_{n+2} x^n,
with the expansions P(x) = Σ_{n=0}^∞ pn x^n and Q(x) = Σ_{n=0}^∞ qn x^n, the products become
  P(x)y′(x) = Σ_{n=0}^∞ [ Σ_{k=0}^n p_{n−k}(k + 1)a_{k+1} ] x^n,
  Q(x)y(x)  = Σ_{n=0}^∞ [ Σ_{k=0}^n q_{n−k} a_k ] x^n.
Collecting coefficients of x^n,
  (n + 2)(n + 1)a_{n+2} + Σ_{k=0}^n p_{n−k}(k + 1)a_{k+1} + Σ_{k=0}^n q_{n−k} a_k = 0,
a recurrence from which a2, a3, a4, ... follow in terms of a0 and a1.
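A minimal sketch of how such a recurrence is used, for the simple special case y″ + y = 0 (so pn = 0 and qn = 0 except q0 = 1), where the recurrence reduces to (n + 2)(n + 1)a_{n+2} + a_n = 0:

    from math import factorial

    def series_coeffs(a0, a1, N):
        # coefficients a_0 ... a_N of the power series solution of y'' + y = 0
        a = [0.0] * (N + 1)
        a[0], a[1] = a0, a1
        for n in range(N - 1):
            a[n + 2] = -a[n] / ((n + 2) * (n + 1))
        return a

    # with a0 = 1, a1 = 0 these are the Maclaurin coefficients of cos x
    print(series_coeffs(1.0, 0.0, 6))
    print([(-1)**k / factorial(2*k) for k in range(4)])  # 1, -1/2, 1/24, -1/720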
436,
Frobenius Method
For the ODE y + P (x )y + Q (x )y = 0, a point x = x0 is ordinary point if P (x ) and Q (x ) are analytic at x = x0 : power series solution is analytic singular point if any of the two is non-analytic (singular) at x = x0 regular singularity: (x x0 )P (x ) and (x x0 )2 Q (x ) are analytic at the point irregular singularity The case of regular singularity For x0 = 0, with P (x ) =
b(x ) x
and Q (x ) =
c (x ) , x2
437,
Frobenius Method
Working steps:
Power Series Method Frobenius Method Special Functions Dened as Integrals Special Functions Arising as Solutions of ODEs
1. Assume the solution in the form y (x ) = x r 2. Dierentiate to get the series expansions for
n n=0 an x . y (x ) and y (x ).
3. Substitute these series for y (x ), y (x ) and y (x ) into the given ODE and collect coecients of x r , x r +1 , x r +2 etc. 4. Equate the coecient of x r to zero to obtain an equation in the index r , called the indicial equation as r (r 1) + b0 r + c0 = 0; allowing a0 to become arbitrary. 5. For each solution r , equate other coecients to obtain a1 , a2 , a3 etc in terms of a0 . Note: The need is to develop two solutions.
438,
Power Series Method Special Functions Dened as Integrals Frobenius Method Special Functions Dened as Integrals Special Functions Arising as Solutions of ODEs
Gamma function: (n) = 0 e x x n1 dx , convergent for n > 0. Recurrence relation (1) = 1, (n + 1) = n(n) allows extension of the denition for the entire real line except for zero and negative integers. (n + 1) = n! for non-negative integers. (A generalization of the factorial function.) Beta function: B (m, n) =
/2
1 m 1 (1 0 x 2m1 2n1
x )n1 dx =
x sin t 0 t dt .
439,
Power Series Method Special Functions Arising as Solutions of ODEs Frobenius Method Special Functions Dened as Integrals Special Functions Arising as Solutions of ODEs
In the study of some important problems in physics, some variable-coecient ODEs appear recurrently, defying analytical solution! Series solutions properties and connections further problems further solutions
Table: Special functions of mathematical physics
Name of the ODE Legendres equation Airys equation Chebyshevs equation Hermites equation Bessels equation Form of the ODE (1 x 2 )y 2xy + k (k + 1)y = 0 y k 2 xy = 0 (1 x 2 )y xy + k 2 y = 0 y 2xy + 2ky = 0 x 2 y + xy + (x 2 k 2 )y = 0 x (1 x )y + [c (a + b + 1)x ]y aby = 0 xy + (1 x )y + ky = 0 Resulting functions Legendre functions Legendre polynomials Airy functions Chebyshev polynomials Hermite functions Hermite polynomials Bessel functions Neumann functions Hankel functions Hypergeometric function Laguerre polynomials
440,
Legendre's equation
  (1 − x²)y″ − 2xy′ + k(k + 1)y = 0
with P(x) = −2x/(1 − x²) and Q(x) = k(k + 1)/(1 − x²) analytic at x = 0 with radius of convergence R = 1.
x = 0 is an ordinary point and a power series solution y(x) = Σ_{n=0}^∞ an x^n is convergent at least for |x| < 1. Applying the power series method:
  a2 = −[k(k + 1)/2!] a0,
  a3 = −[(k + 2)(k − 1)/3!] a1,
  and a_{n+2} = −[(k − n)(k + n + 1)/((n + 2)(n + 1))] an for n ≥ 2.
Solution: y(x) = a0 y1(x) + a1 y2(x)
441,
Legendre functions
y1(x) = 1 − [k(k + 1)/2!] x² + [k(k − 2)(k + 1)(k + 3)/4!] x⁴ − ···
y2(x) = x − [(k − 1)(k + 2)/3!] x³ + [(k − 1)(k − 3)(k + 2)(k + 4)/5!] x⁵ − ···
Special significance: non-negative integral values of k. For each k = 0, 1, 2, 3, ..., one of the series terminates at the term containing x^k.
Polynomial solution: valid for the entire real line!
Recurrence relation in reverse: a_{k−2} = −[k(k − 1)/(2(2k − 1))] a_k
442,
This choice of ak ensures Pk(1) = 1 and implies Pk(−1) = (−1)^k. Initial Legendre polynomials:
  P0(x) = 1, P1(x) = x,
  P2(x) = (3x² − 1)/2,
  P3(x) = (5x³ − 3x)/2,
  P4(x) = (35x⁴ − 30x² + 3)/8, etc.
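A short SymPy sketch generating these polynomials from the standard three-term recurrence (n + 1)P_{n+1}(x) = (2n + 1)xP_n(x) − nP_{n−1}(x) (the recurrence is quoted from standard theory, not from this slide):

    import sympy as sp

    x = sp.symbols('x')
    P = [sp.Integer(1), x]                 # P0, P1
    for n in range(1, 4):
        P.append(sp.expand(((2*n + 1)*x*P[n] - n*P[n - 1]) / (n + 1)))
    for k, Pk in enumerate(P):
        print(k, Pk)   # P2 = 3x**2/2 - 1/2, P3 = 5x**3/2 - 3x/2, ...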
443,
[Figure: graphs of the Legendre polynomials P0(x) through P5(x), Pn(x) plotted against x over [−1, 1].]
All roots of a Legendre polynomial are real and they lie in [−1, 1]. Orthogonality?
444,
Bessel's equation
  x²y″ + xy′ + (x² − k²)y = 0
x = 0 is a regular singular point. Frobenius method: carrying out the early steps, the indicial equation r² − k² = 0 gives r = ±k, and (with r = k) the coefficients satisfy a1 = 0 and [(n + k)² − k²]an + a_{n−2} = 0 for n ≥ 2.
445,
Bessel functions: using n = 2m and selecting a0 = 1/[2^k Γ(k + 1)],
  am = (−1)^m / [2^{k+2m} m! Γ(k + m + 1)]
  Jk(x) = Σ_{m=0}^∞ [(−1)^m / (m! Γ(k + m + 1))] (x/2)^{k+2m}
When k is not an integer, J_{−k}(x) completes the basis. For integer k, J_{−k}(x) = (−1)^k Jk(x): linearly dependent! Reduction of order can be used to find another solution: the Bessel function of the second kind or Neumann function.
446,
Points to note
Solution in power series Ordinary points and singularities Denition of special functions Legendre polynomials Bessel functions
447,
Outline
448,
Preliminary Ideas
A simple boundary value problem:
y″ + 2y = 0, y(0) = 0, y(π) = 0
General solution of the ODE: y(x) = a sin(√2 x) + b cos(√2 x)
Condition y(0) = 0 ⇒ b = 0. Hence y(x) = a sin(√2 x). Then y(π) = 0 ⇒ a = 0. The only solution is y(x) = 0.
Now consider the BVP y″ + 4y = 0, y(0) = 0, y(π) = 0. The same steps give y(x) = a sin(2x), with arbitrary value of a: an infinite number of non-trivial solutions!
449,
Preliminary Ideas
Boundary value problems as eigenvalue problems Explore the possible solutions of the BVP y + ky = 0, y (0) = 0, y ( ) = 0.
With k 0, no hope for a non-trivial solution. Consider k = 2 > 0. Solutions: y = a sin( x ), only for specic values of (or k ): = 0, 1, 2, 3, ; i.e. k = 0, 1, 4, 9, .
Question: For what values of k (eigenvalues), does the given BVP possess non-trivial solutions, and what are the corresponding solutions (eigenfunctions), up to arbitrary scalar multiples? Analogous to the algebraic eigenvalue problem Av = v ! Analogy of a Hermitian matrix: self-adjoint dierential operator.
450,
Preliminary Ideas
Consider the ODE y + P (x )y + Q (x )y = 0. Question: Is it possible to nd functions F (x ) and G (x ) such that F (x )y + F (x )P (x )y + F (x )Q (x )y gets reduced to the derivative of F (x )y + G (x )y ? Comparing with d [F (x )y + G (x )y ] = F (x )y + [F (x ) + G (x )]y + G (x )y , dx F (x ) + G (x ) = F (x )P (x ) Elimination of G (x ): F (x ) P (x )F (x ) + [Q (x ) P (x )]F (x ) = 0 This is the adjoint of the original ODE. and G (x ) = F (x )Q (x ).
451,
Preliminary Ideas
The adjoint ODE
The adjoint of the ODE y + P (x )y + Q (x )y = 0 is F + P1 F + Q1 F = 0, where P1 = P and Q1 = Q P . Then, the adjoint of F + P1 F + Q1 F = 0 is + P2 + Q2 = 0, where P2 = P1 = P and = Q P (P ) = Q . Q2 = Q1 P1
The adjoint of the adjoint of a second order linear homogeneous equation is the original equation itself.
452,
Preliminary Ideas
Second order self-adjoint ODE Question: What is the adjoint of Fy + FPy + FQy = 0? Rephrased question: What is the ODE that (x ) has to satisfy if Fy + FPy + FQy = Comparing terms, d (F ) + (x ) = FP dx Eliminating (x ), we have
d2 (F ) dx 2
d Fy + (x )y ? dx
and
(x ) = FQ .
d dx (FP ).
+ FQ =
F + 2F + F + FQ = FP + (FP ) F + (2F FP ) + F (FP ) + FQ = 0 This is the same as the original ODE, when F (x ) = F (x )P (x )
453,
Preliminary Ideas
Casting a given ODE into the self-adjoint form: Equation y + P (x )y + Q (x )y = 0 is converted to the self-adjoint R form through the multiplication of F (x ) = e P (x )dx . General form of self-adjoint equations: d [F (x )y ] + R (x )y = 0 dx Working rules:
To determine whether a given ODE is in the self-adjoint form, check whether the coecient of y is the derivative of the coecient of y . To convert an ODE into the self-adjoint form, rst obtain the equation in normal form by dividing with the coecient of y . If the coecient of y now R is P (x ), then next multiply the resulting equation with e Pdx .
454,
Sturm-Liouville Problems
Sturm-Liouville equation
[r (x )y ] + [q (x ) + p (x )]y = 0, where p , q , r and r are continuous on [a, b ], with p (x ) > 0 on [a, b ] and r (x ) > 0 on (a, b ). With dierent boundary conditions, Regular S-L problem: a1 y (a) + a2 y (a) = 0 and b1 y (b ) + b2 y (b ) = 0, vectors [a1 a2 ]T and [b1 b2 ]T being non-zero. Periodic S-L problem: With r (a) = r (b ), y (a) = y (b ) and y (a) = y (b ). Singular S-L problem: If r (a) = 0, no boundary condition is needed at x = a. If r (b ) = 0, no boundary condition is needed at x = b . (We just look for bounded solutions over [a, b ].)
455,
Sturm-Liouville Problems
Orthogonality of eigenfunctions
Theorem: If ym (x ) and yn (x ) are eigenfunctions (solutions) of a Sturm-Liouville problem corresponding to distinct eigenvalues m and n respectively, then
b
(ym , yn )
i.e. they are orthogonal with respect to the weight function p (x ). From the hypothesis,
) + (q + m p )ym = 0 (rym ) + (q + n p )yn = 0 (ryn
Subtracting, =
) ym (q + n p )ym yn = (ryn
) yn (q + m p )ym yn = (rym
456,
Sturm-Liouville Problems
Integrating both sides,
b
(m n )
In a regular S-L problem, from the boundary condition at x = a, the homogeneous system (a) ym (a) ym a1 0 = has non-trivial solutions. (a) yn (a) yn a2 0 (a) y (a)y (a) = 0. Therefore, ym (a)yn n m (b ) = 0. Similarly, ym (b )yn (b ) yn (b )ym In a singular S-L problem, zero value of r (x ) at a boundary makes the corresponding term vanish even without a BC. In a periodic S-L problem, the two terms cancel out together. Since m = n , in all cases,
b
457,
Sturm-Liouville Problems
Example: Legendre polynomials over [−1, 1]
Legendre's equation
  d/dx [(1 − x²)y′] + k(k + 1)y = 0
is self-adjoint and defines a singular Sturm-Liouville problem over [−1, 1] with p(x) = 1, q(x) = 0, r(x) = 1 − x² and λ = k(k + 1).
Hence, for m ≠ n (so that (m − n)(m + n + 1) ≠ 0),
  ∫_{−1}^{1} Pm(x)Pn(x)dx = 0.
From the orthogonal decompositions
  1 = P0(x), x = P1(x),
  x² = (2/3)P2(x) + (1/3)P0(x),
  x³ = (2/5)P3(x) + (3/5)P1(x),
  x⁴ = (8/35)P4(x) + (4/7)P2(x) + (1/5)P0(x), etc;
Pk(x) is orthogonal to all polynomials of degree less than k.
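A quick numerical check of this orthogonality (weight p(x) = 1 over [−1, 1]) using SciPy; the specific index pairs are arbitrary illustrative choices:

    import numpy as np
    from numpy.polynomial import legendre as L
    from scipy.integrate import quad

    def P(n, x):
        c = np.zeros(n + 1); c[n] = 1.0
        return L.legval(x, c)

    for m, n in [(1, 2), (2, 3), (3, 3)]:
        val, _ = quad(lambda x: P(m, x) * P(n, x), -1, 1)
        print(m, n, round(val, 6))   # 0 for m != n, 2/(2n+1) for m == n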
458,
Sturm-Liouville Problems
Real eigenvalues
Eigenvalues of a Sturm-Liouville problem are real. Let eigenvalue = + i and eigenfunction y (x ) = u (x ) + iv (x ). Substitution leads to [r (u + iv )] + [q + ( + i )p ](u + iv ) = 0. Separation of real and imaginary parts: [rv ] + (q + p )v + pu = 0 pu 2 = [rv ] u (q + p )uv
Adding together,
p (u 2 + v 2 ) = [ru ] v + [ru ]v [rv ]u [rv ] u = r (uv vu ) Integration and application of boundary conditions leads to
b
459,
Eigenfunction Expansions
Eigenfunctions of Sturm-Liouville problems:
convenient and powerful instruments to represent and manipulate fairly general classes of functions {y0 , y1 , y2 , y3 , }: a family of continuous functions over [a, b ], mutually orthogonal with respect to p (x ). Representation of a function f (x ) on [a, b ]: f (x ) =
m=0
am ym (x ) = a0 y0 (x ) + a1 y1 (x ) + a2 y2 (x ) + a3 y3 (x ) +
Generalized Fourier series Analogous to the representation of a vector as a linear combination of a set of mutually orthogonal vectors. Question: How to determine the coecients (an )?
460,
Eigenfunction Expansions
Inner product:
b
(f , yn ) =
a
p (x )f (x )yn (x )dx
b
= where
m=0
am (ym , yn ) = an yn
a m=0 b
yn =
(yn , yn ) =
a (f ,yn ) yn 2
2 (x )dx p (x )yn
m (x ) =
ym (x ) ym (x )
cm m (x ) = c0 0 (x )+ c1 1 (x )+ c2 2 (x )+ c3 3 (x )+
461,
Eigenfunction Expansions
N
m m (x ) = 0 0 (x )+1 1 (x )+2 2 (x )+ +N N (x ).
2
Error E = f N
2 b N
=
a
p (x ) f (x )
m m (x )
m=0
dx
2p (x ) f (x )
b a
m m (x )
m=0 b
[n (x )]dx = 0
n p (x )2 n (x )dx =
p (x )f (x )n (x )dx .
a
462,
Eigenfunction Expansions
Using the Fourier coecients, error
N N
N 2 cn (n , n ) = f 2
N 2 cn + 2 cn n=0
E = (f , f )2
cn (f , n )+
n=0 n=0
n=0
E = f Bessels inequality:
N n=0 2 f cn
n=0
2 cn 0.
=
a
p (x )f 2 (x )dx
Partial sum sk (x ) =
am m (x )
m=0
Question: Does the sequence of {sk } converge? Answer: The bound in Bessels inequality ensures convergence.
463,
Eigenfunction Expansions
Question: Does it converge to f ?
b k a
lim
p (x )[sk (x ) f (x )]2 dx = 0?
Answer: Depends on the basis used. Convergence in the mean or mean-square convergence: An orthonormal set of functions {k (x )} on an interval a x b is said to be complete in a class of functions, or to form a basis for it, if the corresponding generalized Fourier series for a function converges in the mean to the function, for every function belonging to that class.
2 2 Parsevals identity: n=0 cn = f Eigenfunction expansion: generalized Fourier series in terms of eigenfunctions of a Sturm-Liouville problem
convergent for continuous functions with piecewise continuous derivatives, i.e. they form a basis for this class.
464,
Points to note
Eigenvalue problems in ODEs Self-adjoint dierential operators Sturm-Liouville problems Orthogonal eigenfunctions Eigenfunction expansions
Fourier Series and Integrals Basic Theory of Fourier Series Extensions in Application Fourier Integrals
465,
Outline
Fourier Series and Integrals  Basic Theory of Fourier Series  Extensions in Application  Fourier Integrals
466,
Fourier series of f(x) with period 2L:
  f(x) = a0 + Σ_{n=1}^∞ [ an cos(nπx/L) + bn sin(nπx/L) ],
with the Euler formulae
  a0 = (1/2L) ∫_{−L}^{L} f(x)dx,
  am = (1/L) ∫_{−L}^{L} f(x) cos(mπx/L)dx and bm = (1/L) ∫_{−L}^{L} f(x) sin(mπx/L)dx.
467,
Dirichlet's conditions: if f(x) and its derivative are piecewise continuous on [−L, L] and are periodic with a period 2L, then the Fourier series converges to the mean of the one-sided limits, [f(x+) + f(x−)]/2, at all points.
Note: the interval of integration can be [x0, x0 + 2L] for any x0.
It is valid to integrate the Fourier series term by term.
The Fourier series converges uniformly to f(x) over an interval on which f(x) is continuous. At a jump discontinuity, the convergence to [f(x+) + f(x−)]/2 is not uniform: the mismatch peak shifts with the inclusion of more terms (Gibbs phenomenon).
Term-by-term differentiation of the Fourier series at a point requires f(x) to be smooth at that point.
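A small numerical sketch of the Euler formulae for an assumed example, the square wave f(x) = sgn(x) on [−1, 1] (L = 1), whose odd symmetry gives bn = 4/(nπ) for odd n and 0 otherwise:

    import numpy as np
    from scipy.integrate import quad

    L = 1.0
    f = np.sign
    for n in range(1, 6):
        bn, _ = quad(lambda x: f(x) * np.sin(n*np.pi*x/L), -L, L, points=[0.0])
        bn /= L
        expected = 4/(n*np.pi) if n % 2 else 0.0
        print(n, round(bn, 4), round(expected, 4))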
468,
an f (x ) cos
n x n x + bn f (x ) sin L L
L L
1 2
2 2 (an + bn )=
1 2L
f 2 (x )dx
The Fourier series representation is complete. A periodic function f (x ) is composed of its mean value and several sinusoidal components, or harmonics. Fourier coecients are corresponding amplitudes. Parsevals identity is simply a statement on energy balance! Bessels inequality
2 a0
1 + 2
N n=1 2 2 + bn ) (an
1 f (x ) 2L
469,
Extensions in Application
Original spirit of Fourier series: representation of periodic functions over (−∞, ∞).
Question: what about a function f(x) defined only on [−L, L]?
Answer: extend the function as F(x) = f(x) for −L ≤ x ≤ L, with F(x + 2L) = F(x). The Fourier series of F(x) acts as the Fourier series representation of f(x) in its own domain.
In the Euler formulae, notice that bm = 0 for an even function. The Fourier series of an even function is a Fourier cosine series
  f(x) = a0 + Σ_{n=1}^∞ an cos(nπx/L),
where a0 = (1/L) ∫_0^L f(x)dx and an = (2/L) ∫_0^L f(x) cos(nπx/L)dx.
470,
Extensions in Application
Over [0, L], sometimes we need a series of sine terms only, or cosine terms only!
[Figure: a function f(x) on [0, L], its even periodic extension fc(x) and its odd periodic extension fs(x), sketched over several periods.]
471,
Extensions in Application
Half-range expansions
For Fourier cosine series of a function f (x ) over [0, L], even periodic extension: fc (x ) = f (x ) f (x ) for for 0 x L, L x < 0, and fc (x +2L) = fc (x )
For Fourier sine series of a function f (x ) over [0, L], odd periodic extension: fs (x ) = f (x ) f (x ) for for 0 x L, L x < 0, and fs (x +2L) = fs (x )
To develop the Fourier series of a function, which is available as a set of tabulated values or a black-box library routine, integrals in the Euler formulae are evaluated numerically. Important: Fourier series representation is richer and more powerful compared to interpolatory or least square approximation in many contexts.
Fourier Series and Integrals Basic Theory of Fourier Series Extensions in Application Fourier Integrals
472,
Fourier Integrals
Question: How to apply the idea of Fourier series to a non-periodic function over an innite domain? Answer: Magnify a single period to an innite length. Fourier series of function fL (x ) of period 2L: fL (x ) = a0 + where pn =
n L n=1
1 2L
fL (x )dx
L L L
cos pn x
L
fL (v ) cos pn v dv + sin pn x
L
fL (v ) sin pn v dv p ,
where p = pn+1 pn = L.
Fourier Series and Integrals Basic Theory of Fourier Series Extensions in Application Fourier Integrals
473,
Fourier Integrals
f (x ) = 1
f (v ) cos pv dv + sin px
f (v ) sin pv dv dp
Fourier integral of f (x ): f (x ) =
0
f (v ) cos pv dv and B (p ) =
f (v ) sin pv dv
f (v ) cos p (x v )dv dp .
Fourier Series and Integrals Basic Theory of Fourier Series Extensions in Application Fourier Integrals
474,
Fourier Integrals
Using cos = f (x ) =
e i + e i 2
1 2
With substitution p = q ,
0
f (v )e ip(x v ) dv dp =
f (v )e iq(x v ) dv dq .
f (v )e ip(x v ) dv dp =
C (p )e ipx dp ,
f (v )e ipv dv .
Fourier Series and Integrals Basic Theory of Fourier Series Extensions in Application Fourier Integrals
475,
Points to note
Fourier series arising out of a Sturm-Liouville problem A versatile tool for function representation Fourier integral as the limiting case of Fourier series
Fourier Transforms Denition and Fundamental Properties Important Results on Fourier Transforms Discrete Fourier Transform
476,
Outline
Fourier Transforms Denition and Fundamental Properties Important Results on Fourier Transforms Discrete Fourier Transform
Fourier Transforms
477,
Denition and Fundamental Properties Denition and Fundamental Properties Important Results on Fourier Transforms Discrete Fourier Transform
1 2
f (v )e iwv dv e iwt dw
Composition of an innite number of functions in the e iwt , over a continuous distribution of frequency w . form 2 Fourier transform: Amplitude of a frequency component: (w ) = 1 F (f ) f 2 Function of the frequency variable. Inverse Fourier transform ) f (t ) = 1 F 1 (f 2 recovers the original function.
f (t )e iwt dt
(w )e iwt dw f
Fourier Transforms
478,
Denition and Fundamental Properties Denition and Fundamental Properties Important Results on Fourier Transforms Discrete Fourier Transform
Example: Fourier transform of f (t ) = 1? (w ) = k (w ). Let us nd out the inverse Fourier transform of f ) = 1 f (t ) = F 1 (f 2 k k (w )e iwt dw = 2 F (1) = 2(w )
Linearity of Fourier transforms: F{f1 (t ) + f2 (t )} = f 1 (w ) + f2 (w ) Scaling: F{f (at )} = Shifting rules: 1 w f |a| a w and F 1 f a = |a|f (at )
Fourier Transforms
479,
Denition and Fundamental Properties Important Results on Fourier Transforms Important Results on Fourier Transforms Discrete Fourier Transform
Fourier transform of the derivative of a function: If f (t ) is continuous in every interval and f (t ) is piecewise continuous, |f (t )|dt converges and f (t ) approaches zero as t , then F{f (t )} =
1 f (t )e iwt dt 2 1 1 = f (t )e iwt 2 2 (w ). = iw f
(iw )f (t )e iwt dt
Fourier Transforms
480,
Denition and Fundamental Properties Important Results on Fourier Transforms Important Results on Fourier Transforms Discrete Fourier Transform
Under appropriate premises, (w ) = w 2 f (w ). F{f (t )} = (iw )2 f (w ). In general, F{f (n) (t )} = (iw )n f Fourier transform of an integral: If f (t ) is piecewise continuous on every interval, |f (t )|dt converges and f (0) = 0, then
t
f ( )d
1 f (w ). iw
Derivative of a Fourier transform (with respect to the frequency variable): dn F{t n f (t )} = i n n f (w ), dw if f (t ) is piecewise continuous and
n |t f (t )|dt
converges.
Fourier Transforms
481,
Denition and Fundamental Properties Important Results on Fourier Transforms Important Results on Fourier Transforms Discrete Fourier Transform
f ( )g (t )d
Fourier Transforms
482,
Denition and Fundamental Properties Important Results on Fourier Transforms Important Results on Fourier Transforms Discrete Fourier Transform
f (t )e iwt dt
(w ) f g (w )dw
= = =
1 2
f (t )e iwt dt g (w )dw
1 f (t ) 2 f (t )g (t )dt .
g (w )e iwt dw dt
(w ) 2 dw = f
f (t ) 2 dt ,
equating the total energy content of the frequency spectrum of a wave or a signal to the total energy ow over time.
Fourier Transforms Denition and Fundamental Properties Important Results on Fourier Transforms Discrete Fourier Transform
483,
Consider a signal f (t ) from actual measurement or sampling. We want to analyze its amplitude spectrum (versus frequency). For the FT, how to evaluate the integral over (, )? Windowing: Sample the signal f (t ) over a nite interval. A window function: g (t ) = 1 0 for a t b otherwise
Actual processing takes place on the windowed function f (t )g (t ). Next question: Do we need to evaluate the amplitude for all w (, )? Most useful signals are particularly rich only in their own characteristic frequency bands.
Fourier Transforms Denition and Fundamental Properties Important Results on Fourier Transforms Discrete Fourier Transform
484,
data being collected at t = a, a + , a + 2, , a + (N 1), with N = b a. Nyquist critical frequency Note the duality.
Decision of sampling rate determines the band of frequency content that can be accommodated. Decision of the interval [a, b ) dictates how nely the frequency spectrum can be developed. A band-limited signal can be reconstructed from a nite number of samples.
Fourier Transforms Denition and Fundamental Properties Important Results on Fourier Transforms Discrete Fourier Transform
485,
With discrete data at tk = kΔ for k = 0, 1, 2, 3, ..., N − 1, the discrete Fourier transform approximates
  f̂(wj) ≈ (Δ/√(2π)) Σ_k mj^k f(tk), where mj = e^{−i wj Δ},
[mj^k] being an N × N matrix. A similar discrete version of the inverse Fourier transform exists. Reconstruction: a trigonometric interpolation of the sampled data.
The structure of the Fourier and inverse Fourier transforms reduces the problem from a system of linear equations [O(N³) operations] to a matrix-vector multiplication [O(N²) operations].
The structure of the matrix [mj^k], with patterns of redundancies, opens up a trick (the fast Fourier transform) to reduce it further to O(N log N) operations.
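A minimal NumPy sketch of this discrete transform in practice, recovering the two tones of an assumed sampled signal via the FFT:

    import numpy as np

    N, dt = 200, 0.01                     # N samples, sampling interval Delta
    t = np.arange(N) * dt
    f = np.sin(2*np.pi*5*t) + 0.5*np.sin(2*np.pi*12*t)   # 5 Hz and 12 Hz tones
    F = np.fft.rfft(f)                    # discrete transform (FFT, O(N log N))
    freqs = np.fft.rfftfreq(N, dt)
    peaks = freqs[np.argsort(np.abs(F))[-2:]]
    print(sorted(peaks))                  # [5.0, 12.0]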
486,
DFT representation reliable only if the incoming signal is really band-limited in the interval [wc , wc ]. Frequencies beyond [wc , wc ] distort the spectrum near w = wc by folding back. Aliasing Detection: a posteriori Bandpass ltering: If we expect a signal having components only in certain frequency bands and want to get rid of unwanted noise frequencies, for every band [w1 , w2 ] of our interest, we dene window (w ) with intervals [w2 , w1 ] and [w1 , w2 ]. function (w )f (w ) lters out frequency Windowed Fourier transform components outside this band. For recovery, (w ). convolve raw signal f (t ) with IFT (t ) of
Fourier Transforms Denition and Fundamental Properties Important Results on Fourier Transforms Discrete Fourier Transform
487,
Points to note
Fourier transform as amplitude function in Fourier integral Basic operational tools in Fourier and inverse Fourier transforms Conceptual notions of discrete Fourier transform (DFT)
488,
Outline
Minimax Approximation*
489,
Approximation with Chebyshev polynomials
Chebyshev polynomials Tn(x): solutions of the singular Sturm-Liouville problem
  [√(1 − x²) y′]′ + [n²/√(1 − x²)] y = 0
over −1 ≤ x ≤ 1, with Tn(1) = 1 for all n.
Closed-form expressions: Tn(x) = cos(n cos⁻¹ x); or
  T0(x) = 1, T1(x) = x, T2(x) = 2x² − 1, T3(x) = 4x³ − 3x, ...;
with the three-term recurrence relation T_{k+1}(x) = 2xTk(x) − T_{k−1}(x).
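A short SymPy sketch generating the initial Chebyshev polynomials from this recurrence and checking them against cos(n arccos x) at a sample point:

    import sympy as sp

    x = sp.symbols('x')
    T = [sp.Integer(1), x]                 # T0, T1
    for k in range(1, 4):
        T.append(sp.expand(2*x*T[k] - T[k - 1]))
    print(T)   # [1, x, 2x**2 - 1, 4x**3 - 3x, 8x**4 - 8x**2 + 1]
    print([sp.N(Tk.subs(x, 0.3)) for Tk in T])
    print([sp.N(sp.cos(n*sp.acos(0.3))) for n in range(5)])   # same values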
Minimax Approximation*
490,
Immediate observations
Coefficients in a Chebyshev polynomial are integers. In particular, the leading coefficient of Tn(x) is 2^{n−1}.
For even n, Tn(x) is an even function, while for odd n it is an odd function.
Tn(1) = 1, Tn(−1) = (−1)^n and |Tn(x)| ≤ 1 for −1 ≤ x ≤ 1.
Zeros of a Chebyshev polynomial Tn(x) are real and lie inside the interval [−1, 1] at locations x = cos[(2k − 1)π/(2n)] for k = 1, 2, 3, ..., n. These locations are also called Chebyshev accuracy points. Further, zeros of Tn(x) are interlaced by those of T_{n+1}(x).
Extrema of Tn(x) are of magnitude equal to unity, alternate in sign and occur at x = cos(kπ/n) for k = 0, 1, 2, 3, ..., n.
Orthogonality and norms:
  ∫_{−1}^{1} Tm(x)Tn(x)/√(1 − x²) dx = 0 if m ≠ n, π/2 if m = n ≠ 0, and π if m = n = 0.
Minimax Approximation*
491,
[Figure: (a) the Chebyshev polynomial T3(x) with its zeros and extrema marked; (b) T8(x) compared with P8(x), showing the equal-ripple oscillations of Tn(x) over [−1, 1].]
Being cosines and polynomials at the same time, Chebyshev polynomials possess a wide variety of interesting properties!
Most striking property: equal-ripple oscillations, leading to the minimax property.
Minimax Approximation*
492,
Approximation with Chebyshev polynomials Approximation with Chebyshev polynomials Minimax Polynomial Approximation
Minimax property Theorem: Among all polynomials pn (x ) of degree n > 0 with the leading coecient equal to unity, 21n Tn (x ) deviates least from zero in [1, 1]. That is,
1x 1
then at (n + 1) locations of alternating extrema of 21n Tn (x ), the polynomial qn (x ) = 21n Tn (x ) pn (x ) will have the same sign as 21n Tn (x ). With alternating signs at (n + 1) locations in sequence, qn (x ) will have n intervening zeros, even though it is a polynomial of degree at most (n 1): CONTRADICTION!
Minimax Approximation*
493,
Approximation with Chebyshev polynomials Approximation with Chebyshev polynomials Minimax Polynomial Approximation
f (x )T0 (x ) 2 dx and an = 2 1x
n k =0 ak Tk (x ):
f (x )Tn (x ) dx for n = 1, 2, 3, 1 x2
A truncated series
Chebyshev economization Leading error term an+1 Tn+1 (x ) deviates least from zero over [1, 1] and is qualitatively similar to the error function. Question: How to develop a Chebyshev series approximation? Find out so many Chebyshev polynomials and evaluate coecients?
Minimax Approximation*
494,
Approximation with Chebyshev polynomials Approximation with Chebyshev polynomials Minimax Polynomial Approximation
Remark: The economized series n k =0 ak Tk (x ) gives minimax deviation of the leading error term an+1 Tn+1 (x ). Assuming an+1 Tn+1 (x ) to be the error, at the zeros of Tn+1 (x ), the error will be ocially zero, i.e.
n
For approximating f (t ) over [a, b ], scale the variable as b a b t = a+ 2 + 2 x , with x [1, 1].
Recall: Values of an n-th degree polynomial at n + 1 points uniquely x the entire polynomial.
495,
Situations in which minimax approximation is desirable: Develop the approximation once and keep it for use in future. Requirement: Uniform quality control over the entire domain Minimax approximation: deviation limited by the constant amplitude of ripple Chebyshevs minimax theorem Theorem: Of all polynomials of degree up to n, p (x ) is the minimax polynomial approximation of f (x ), i.e. it minimizes max |f (x ) p (x )|, if and only if there are n + 2 points xi such that a x1 < x2 < x3 < < xn+2 b , where the dierence f (x ) p (x ) takes its extreme values of the same magnitude and alternating signs.
496,
Utilize any gap to reduce the deviation at the other extrema with values at the bound.
y d f(x) p(x) /2
O
p(x) l m w n b x
a /2
Construction of the minimax polynomial: Remez algorithm Note: In the light of this theorem and algorithm, examine how Tn+1 (x ) is qualitatively similar to the complete error function!
497,
Points to note
Unique features of Chebyshev polynomials The equal-ripple and minimax properties Chebyshev series and Chebyshev-Lagrange approximation Fundamental ideas of general minimax approximation
Partial Dierential Equations Introduction Hyperbolic Equations Parabolic Equations Elliptic Equations Two-Dimensional Wave Equation
498,
Outline
Partial Dierential Equations Introduction Hyperbolic Equations Parabolic Equations Elliptic Equations Two-Dimensional Wave Equation
Partial Dierential Equations Introduction Hyperbolic Equations Parabolic Equations Elliptic Equations Two-Dimensional Wave Equation
499,
Introduction
Quasi-linear second order PDEs:
  a ∂²u/∂x² + 2b ∂²u/∂x∂y + c ∂²u/∂y² = F(x, y, u, ux, uy)
  hyperbolic if b² − ac > 0, modelling phenomena which evolve in time perpetually and do not approach a steady state;
  parabolic if b² − ac = 0, modelling phenomena which evolve in time in a transient manner, approaching a steady state;
  elliptic if b² − ac < 0, modelling steady-state configurations, without evolution in time.
If F(x, y, u, ux, uy) = 0: a second order linear homogeneous differential equation.
Principle of superposition: a linear combination of different solutions is also a solution. Solutions are often in the form of infinite series.
Solution techniques in PDEs typically attack the boundary value problem directly.
500,
Introduction
Initial and boundary conditions Time and space variables are qualitatively dierent.
Conditions in time: typically initial conditions. For second order PDEs, u and ut over the entire space domain: Cauchy conditions
Conditions in space: typically boundary conditions. For u (t , x , y ), boundary conditions over the entire curve in the x -y plane that encloses the domain. For second order PDEs,
Dirichlet condition: value of the function Neumann condition: derivative normal to the boundary Mixed (Robin) condition
Partial Dierential Equations Introduction Hyperbolic Equations Parabolic Equations Elliptic Equations Two-Dimensional Wave Equation
501,
Introduction
Method of separation of variables For u (x , y ), propose a solution in the form
u (x , y ) = X (x )Y (y ) and substitute ux = X Y , uy = XY , uxx = X Y , uxy = X Y , uyy = XY to cast the equation into the form (x , X , X , X ) = (y , Y , Y , Y ). If the manoeuvre succeeds then, x and y being independent variables, it implies (x , X , X , X ) = (y , Y , Y , Y ) = k . Nature of the separation constant k is decided based on the context, resulting ODEs are solved in consistency with the boundary conditions and assembled to construct u (x , y ).
Partial Dierential Equations Introduction Hyperbolic Equations Parabolic Equations Elliptic Equations Two-Dimensional Wave Equation
502,
Hyperbolic Equations
Transverse vibrations of a string
[Figure: an element PQ of the string, with tension T at P (inclination α) and T + ΔT at Q (inclination α + Δα), and transverse deflection u.]
Small deflection and slope: cos α ≈ 1, sin α ≈ tan α. Horizontal (longitudinal) forces on PQ balance. From Newton's second law, for the vertical (transverse) deflection u(x, t):
  T sin(α + Δα) − T sin α = ρ Δx ∂²u/∂t²
Partial Dierential Equations Introduction Hyperbolic Equations Parabolic Equations Elliptic Equations Two-Dimensional Wave Equation
503,
Hyperbolic Equations
Under the assumptions, T[(∂u/∂x)_Q − (∂u/∂x)_P] = ρ Δx ∂²u/∂t², and in the limit Δx → 0, denoting c² = T/ρ,
  ∂²u/∂t² = c² ∂²u/∂x²
the one-dimensional wave equation.
Boundary conditions (in this case): u(0, t) = u(L, t) = 0
Initial configuration and initial velocity: u(x, 0) = f(x) and ut(x, 0) = g(x)
Cauchy problem: determine u(x, t) for 0 ≤ x ≤ L, t ≥ 0.
Hyperbolic Equations
Solution by separation of variables
u_tt = c² u_xx, u(0, t) = u(L, t) = 0, u(x, 0) = f(x), u_t(x, 0) = g(x)
Assuming u(x, t) = X(x)T(t), and substituting u_tt = XT'' and u_xx = X''T, variables are separated as
T''/(c²T) = X''/X = −p².
The PDE splits into two ODEs: X'' + p²X = 0 and T'' + c²p²T = 0.
Eigenvalues of the BVP X'' + p²X = 0, X(0) = X(L) = 0 are p = nπ/L, with eigenfunctions
X_n(x) = sin px = sin(nπx/L) for n = 1, 2, 3, ....
Second ODE: T'' + λ_n² T = 0, with λ_n = cnπ/L.
Hyperbolic Equations
Corresponding solution:
T_n(t) = A_n cos λ_n t + B_n sin λ_n t
Then, for n = 1, 2, 3, ...,
u_n(x, t) = X_n(x)T_n(t) = (A_n cos λ_n t + B_n sin λ_n t) sin(nπx/L)
satisfies the PDE and the boundary conditions. Since the PDE and the BCs are homogeneous, by superposition,
u(x, t) = Σ_{n=1}^∞ (A_n cos λ_n t + B_n sin λ_n t) sin(nπx/L).
Question: How to determine the coefficients A_n and B_n?
Answer: By imposing the initial conditions.
Hyperbolic Equations
u(x, 0) = f(x) = Σ_{n=1}^∞ A_n sin(nπx/L), u_t(x, 0) = g(x) = Σ_{n=1}^∞ λ_n B_n sin(nπx/L).
Hence, the coefficients:
A_n = (2/L) ∫₀^L f(x) sin(nπx/L) dx and B_n = [2/(cnπ)] ∫₀^L g(x) sin(nπx/L) dx
Related problems:
Different boundary conditions: other kinds of series
Long wire: infinite domain, continuous frequencies and solution from Fourier integrals
Alternative: Reduce the problem using Fourier transforms.
General wave equation in 3-d: u_tt = c² ∇²u
Membrane equation: u_tt = c² (u_xx + u_yy)
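A quick numerical illustration of the series solution and the coefficient formulas above (a sketch, not part of the original material; the triangular initial shape and all parameter values are assumed for the example): the coefficients A_n and B_n are computed by quadrature and the truncated series is evaluated.

```python
import numpy as np

L, c, N = 1.0, 1.0, 50                 # string length, wave speed, number of modes
x = np.linspace(0.0, L, 401)
f = np.where(x < 0.5, x, 1.0 - x)      # assumed initial shape: triangular pluck
g = np.zeros_like(x)                   # released from rest

def trap(y):                           # simple trapezoidal quadrature on the grid x
    return np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0

n = np.arange(1, N + 1)
A = np.array([2.0 / L * trap(f * np.sin(k * np.pi * x / L)) for k in n])
B = np.array([2.0 / (c * k * np.pi) * trap(g * np.sin(k * np.pi * x / L)) for k in n])

def u(xp, t):
    lam = c * n * np.pi / L            # lambda_n = c n pi / L
    return np.sum((A * np.cos(lam * t) + B * np.sin(lam * t)) * np.sin(n * np.pi * xp / L))

print(u(0.3, 0.0))                     # ~ f(0.3) = 0.3, up to truncation error
print(u(0.3, 2.0 * L / c))             # the solution is 2L/c-periodic in time
```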
Hyperbolic Equations
D'Alembert's solution of the wave equation
Method of characteristics: canonical form
By coordinate transformation from (x, y) to (ξ, η), with U(ξ, η) = u[x(ξ, η), y(ξ, η)]:
hyperbolic equation: U_ξη = Φ
parabolic equation: U_ηη = Φ
elliptic equation: U_ξξ + U_ηη = Φ
in which Φ(ξ, η, U, U_ξ, U_η) is free from second derivatives.
For a hyperbolic equation, the entire domain becomes a network of ξ-η coordinate curves, known as characteristic curves, along which decoupled solutions can be tracked!
Hyperbolic Equations
For a hyperbolic equation in the form
a ∂²u/∂x² + 2b ∂²u/∂x∂y + c ∂²u/∂y² = F(x, y, u, u_x, u_y),
the roots of am² + 2bm + c = 0 are m_{1,2} = (−b ± √(b² − ac))/a, real and distinct. The coordinate transformation
ξ = y + m₁x, η = y + m₂x
leads to U_ξη = Φ(ξ, η, U, U_ξ, U_η).
For the BVP u_tt = c² u_xx, u(0, t) = u(L, t) = 0, u(x, 0) = f(x), u_t(x, 0) = g(x), the canonical coordinate transformation is
ξ = x − ct, η = x + ct, with x = (ξ + η)/2, t = (η − ξ)/(2c).
Hyperbolic Equations
Substitution of derivatives
u_x = U_ξ ξ_x + U_η η_x = U_ξ + U_η, u_xx = U_ξξ + 2U_ξη + U_ηη
u_t = U_ξ ξ_t + U_η η_t = −cU_ξ + cU_η, u_tt = c²U_ξξ − 2c²U_ξη + c²U_ηη
into the PDE u_tt = c² u_xx gives
c²(U_ξξ − 2U_ξη + U_ηη) = c²(U_ξξ + 2U_ξη + U_ηη).
Canonical form: U_ξη = 0
Integration: U_η = ∫ U_ξη dξ + ψ(η) = ψ(η); U(ξ, η) = ∫ ψ(η) dη + f₁(ξ) = f₁(ξ) + f₂(η)
D'Alembert's solution: u(x, t) = f₁(x − ct) + f₂(x + ct)
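For the Cauchy problem on an infinite domain, the two progressive waves take the familiar closed form u(x, t) = ½[f(x − ct) + f(x + ct)] + (1/2c) ∫_{x−ct}^{x+ct} g(s) ds; a minimal sketch (illustrative, with an assumed Gaussian initial profile) evaluates it directly.

```python
import numpy as np
from scipy.integrate import quad

c = 2.0
f = lambda x: np.exp(-x**2)            # assumed initial displacement
g = lambda x: 0.0                      # assumed initial velocity (at rest)

def u(x, t):
    # d'Alembert: average of the two travelling copies of f plus the velocity contribution
    vel, _ = quad(g, x - c * t, x + c * t)
    return 0.5 * (f(x - c * t) + f(x + c * t)) + vel / (2.0 * c)

print(u(0.0, 0.0))                     # equals f(0) = 1
print(u(3.0, 1.5))                     # the right-moving half of the pulse has reached x = 3
```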
Hyperbolic Equations
f₁(x − ct): a progressive wave in the forward direction with speed c
Reflection at a boundary: in a manner depending upon the boundary condition
Reflected wave f₂(x + ct): another progressive wave, this one in the backward direction with speed c
Superposition of the two waves: complete solution (response)
Note: Components of the earlier solution, with λ_n = cnπ/L:
cos λ_n t sin(nπx/L) = ½ [sin(nπ(x + ct)/L) + sin(nπ(x − ct)/L)]
sin λ_n t sin(nπx/L) = ½ [cos(nπ(x − ct)/L) − cos(nπ(x + ct)/L)]
Parabolic Equations
∂u/∂t = c² ∇²u
One-dimensional heat (diffusion) equation: u_t = c² u_xx
Heat conduction in a finite bar: For a thin bar of length L with end-points at zero temperature,
u_t = c² u_xx, u(0, t) = u(L, t) = 0, u(x, 0) = f(x).
The assumption u(x, t) = X(x)T(t) leads to XT' = c² X''T, i.e.
T'/(c²T) = X''/X = −p²,
giving rise to two ODEs: X'' + p²X = 0 and T' + c²p²T = 0.
Parabolic Equations
The BVP in the space coordinate, X'' + p²X = 0, X(0) = X(L) = 0, has solutions
X_n(x) = sin(nπx/L).
With λ_n = cnπ/L, the ODE in T(t) has the corresponding solutions T_n(t) = A_n e^{−λ_n² t}.
By superposition,
u(x, t) = Σ_{n=1}^∞ A_n sin(nπx/L) e^{−λ_n² t},
the coefficients A_n coming from the Fourier sine series u(x, 0) = f(x) = Σ_{n=1}^∞ A_n sin(nπx/L).
Parabolic Equations
u_t = c² u_xx, u(0, t) = u₁, u(L, t) = u₂, u(x, 0) = f(x).
For u₁ ≠ u₂, with u(x, t) = X(x)T(t), the BCs do not separate!
Assume u(x, t) = U(x, t) + u_ss(x), where the component u_ss(x), the steady-state temperature (distribution), does not enter the differential equation:
u_ss''(x) = 0, u_ss(0) = u₁, u_ss(L) = u₂ ⇒ u_ss(x) = u₁ + (u₂ − u₁)x/L.
Substituting into the BVP: U_t = c² U_xx, U(0, t) = U(L, t) = 0, U(x, 0) = f(x) − u_ss(x).
Final solution:
u(x, t) = Σ_{n=1}^∞ B_n sin(nπx/L) e^{−λ_n² t} + u_ss(x),
with B_n from the Fourier sine series of f(x) − u_ss(x).
Parabolic Equations
Heat conduction in an infinite wire
u_t = c² u_xx, u(x, 0) = f(x)
In place of nπ/L, we now have a continuous frequency p. Solution as a superposition of all frequencies:
u(x, t) = ∫₀^∞ u_p(x, t) dp = ∫₀^∞ [A(p) cos px + B(p) sin px] e^{−c²p²t} dp
Initial condition: u(x, 0) = f(x) = ∫₀^∞ [A(p) cos px + B(p) sin px] dp, a Fourier integral, with
A(p) = (1/π) ∫_{−∞}^∞ f(v) cos pv dv and B(p) = (1/π) ∫_{−∞}^∞ f(v) sin pv dv.
Parabolic Equations
Solution using Fourier transforms
u_t = c² u_xx, u(x, 0) = f(x)
Using the derivative formula of Fourier transforms, F(u_t) = c²(iw)² F(u), i.e.
∂û/∂t = −c²w²û,
since the variables x and t are independent. Initial value problem in û(w, t):
∂û/∂t = −c²w²û, û(w, 0) = f̂(w).
Solution: û(w, t) = f̂(w) e^{−c²w²t}
The inverse Fourier transform gives the solution of the original problem as
u(x, t) = F⁻¹{û(w, t)} = (1/√(2π)) ∫_{−∞}^∞ f̂(w) e^{−c²w²t} e^{iwx} dw
        = (1/π) ∫_{−∞}^∞ f(v) ∫₀^∞ cos(wx − wv) e^{−c²w²t} dw dv.
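A discrete analogue of this transform-domain recipe (an illustrative sketch using the FFT on a large periodic grid, which approximates the infinite wire when the initial profile decays rapidly):

```python
import numpy as np

c, t = 1.0, 1.0
Lbox, N = 40.0, 1024                                 # wide periodic box approximating the infinite wire
x = np.linspace(-Lbox / 2, Lbox / 2, N, endpoint=False)
w = 2.0 * np.pi * np.fft.fftfreq(N, d=x[1] - x[0])   # discrete transform variable

f = np.exp(-x**2)                                    # assumed initial temperature profile

# u_hat(w, t) = f_hat(w) exp(-c^2 w^2 t), then invert the transform
u = np.fft.ifft(np.fft.fft(f) * np.exp(-c**2 * w**2 * t)).real

exact = np.exp(-x**2 / (1.0 + 4.0 * c**2 * t)) / np.sqrt(1.0 + 4.0 * c**2 * t)
print(np.max(np.abs(u - exact)))                     # small: matches the known Gaussian solution
```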
Elliptic Equations
∂u/∂t = c² (∂²u/∂x² + ∂²u/∂y²)
Steady-state temperature distribution:
∂²u/∂x² + ∂²u/∂y² = 0: Laplace's equation
Steady-state heat flow in a rectangular plate:
u_xx + u_yy = 0, u(0, y) = u(a, y) = u(x, 0) = 0, u(x, b) = f(x);
a Dirichlet problem over the domain 0 ≤ x ≤ a, 0 ≤ y ≤ b.
The proposal u(x, y) = X(x)Y(y) leads to X''Y + XY'' = 0, i.e.
X''/X = −Y''/Y = −p².
Separated ODEs: X'' + p²X = 0 and Y'' − p²Y = 0.
Elliptic Equations
X_n(x) = sin(nπx/a) and Y_n(y) = A_n cosh(nπy/a) + B_n sinh(nπy/a); the condition Y(0) = 0 leaves
u_n(x, y) = B_n sin(nπx/a) sinh(nπy/a),
and superposition gives
u(x, y) = Σ_{n=1}^∞ B_n sin(nπx/a) sinh(nπy/a).
The last boundary condition u(x, b) = f(x) fixes the coefficients from the Fourier sine series of f(x).
Note: In the example, the BCs on three sides were homogeneous. How did it help? What if there are more non-homogeneous BCs?
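A minimal sketch of this recipe (illustrative; the boundary data f(x) and the plate dimensions are assumed): compute the coefficients B_n from the sine series of f and sum the series.

```python
import numpy as np

a, b, N = 1.0, 0.5, 30
xg = np.linspace(0.0, a, 401)
f = xg * (a - xg)                                  # assumed boundary data on the edge y = b

def trap(y):                                       # trapezoidal quadrature on the grid xg
    return np.sum((y[1:] + y[:-1]) * np.diff(xg)) / 2.0

n = np.arange(1, N + 1)
# u(x, b) = f(x) = sum_n B_n sinh(n pi b / a) sin(n pi x / a)
Bn = np.array([2.0 / a * trap(f * np.sin(k * np.pi * xg / a)) for k in n]) / np.sinh(n * np.pi * b / a)

def u(x, y):
    return np.sum(Bn * np.sin(n * np.pi * x / a) * np.sinh(n * np.pi * y / a))

print(u(0.5, b), 0.5 * (a - 0.5))   # reproduces f(0.5) on the non-homogeneous edge
print(u(0.5, 0.0), u(0.0, 0.25))    # vanishes on the homogeneous edges
```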
Elliptic Equations
Steady-state heat flow with internal heat generation:
∇²u = φ(x, y): Poisson's equation
Separation of variables impossible! Consider the function u(x, y) as
u(x, y) = u_h(x, y) + u_p(x, y)
Sequence of steps:
one particular solution u_p(x, y) that may or may not satisfy some or all of the boundary conditions;
solution of the corresponding homogeneous equation, namely u_xx + u_yy = 0, for u_h(x, y).
A Cauchy problem of the membrane:
u_tt = c² (u_xx + u_yy); u(x, y, 0) = f(x, y), u_t(x, y, 0) = g(x, y);
u(0, y, t) = u(a, y, t) = u(x, 0, t) = u(x, b, t) = 0.
Separate the time variable from the space variables: u(x, y, t) = F(x, y)T(t), giving
T''/(c²T) = (F_xx + F_yy)/F = −ν².
Helmholtz equation: F_xx + F_yy + ν²F = 0
Further separation F(x, y) = X(x)Y(y), with the given BCs, yields X_m(x) = sin(mπx/a), Y_n(y) = sin(nπy/b) and the eigenvalues λ_mn = π²(m²/a² + n²/b²); the time ODE T'' + c²λ_mn T = 0 then gives T_mn(t).
Composing X_m(x), Y_n(y) and T_mn(t) and superposing,
u(x, y, t) = Σ_{m=1}^∞ Σ_{n=1}^∞ [A_mn cos(c√λ_mn t) + B_mn sin(c√λ_mn t)] sin(mπx/a) sin(nπy/b),
the coefficients being determined from the double Fourier series
f(x, y) = Σ_m Σ_n A_mn sin(mπx/a) sin(nπy/b) and
g(x, y) = Σ_m Σ_n c√λ_mn B_mn sin(mπx/a) sin(nπy/b).
BVPs modelled in polar coordinates
For domains of circular symmetry, important in many practical systems, the BVP is conveniently modelled in polar coordinates, the separation of variables quite often producing Bessel's equation, in cylindrical coordinates, and Legendre's equation, in spherical coordinates.
Points to note
PDEs in physically relevant contexts Initial and boundary conditions Separation of variables Examples of boundary value problems with hyperbolic, parabolic and elliptic equations
Cascaded application of separation of variables for problems with more than two independent variables
Outline
A function of a complex variable, w = f(z), gives a rule to associate a unique complex number w = u + iv to every z = x + iy in a set.
Limit: If f(z) is defined in a neighbourhood of z₀ (except possibly at z₀ itself) and there exists l ∈ C such that for every ε > 0 there is a δ > 0 with 0 < |z − z₀| < δ ⇒ |f(z) − l| < ε, then l = lim_{z→z₀} f(z).
Crucial difference from real functions: z can approach z₀ in all possible manners in the complex plane. The definition of the limit is more restrictive.
Continuity: lim_{z→z₀} f(z) = f(z₀)
Continuity in a domain D: continuity at every point in D
Derivative: f'(z₀) = lim_{z→z₀} [f(z) − f(z₀)]/(z − z₀)
When this limit exists, the function f(z) is said to be differentiable. Extremely restrictive definition!
Analytic function: A function f(z) is called analytic in a domain D if it is defined and differentiable at all points in D.
Points to be settled later:
Derivative of an analytic function is also analytic. An analytic function possesses derivatives of all orders.
A great qualitative difference between functions of a real variable and those of a complex variable!
Cauchy-Riemann conditions
If f(z) = u(x, y) + iv(x, y) is analytic, then
f'(z) = lim_{Δx,Δy→0} (Δu + iΔv)/(Δx + iΔy)
along all paths of approach for Δz = Δx + iΔy → 0, i.e. Δx, Δy → 0.
[Figure: different paths of approach to z₀ in the complex plane, including Δz = Δx and Δz = iΔy]
Approaching along Δz = Δx: f'(z) = ∂u/∂x + i ∂v/∂x; along Δz = iΔy: f'(z) = ∂v/∂y − i ∂u/∂y.
Hence the Cauchy-Riemann conditions
∂u/∂x = ∂v/∂y and ∂u/∂y = −∂v/∂x
are necessary for analyticity.
Question: Do the C-R conditions imply analyticity?
Consider u(x, y) and v(x, y) having continuous first order partial derivatives that satisfy the Cauchy-Riemann conditions. By the mean value theorem,
Δu = u(x + Δx, y + Δy) − u(x, y) = Δx ∂u/∂x(x₁, y₁) + Δy ∂u/∂y(x₁, y₁)
with x₁ = x + θ₁Δx, y₁ = y + θ₁Δy for some θ₁ ∈ [0, 1]; and
Δv = v(x + Δx, y + Δy) − v(x, y) = Δx ∂v/∂x(x₂, y₂) + Δy ∂v/∂y(x₂, y₂)
with x₂ = x + θ₂Δx, y₂ = y + θ₂Δy for some θ₂ ∈ [0, 1]. Then,
Δf = Δu + iΔv = Δx [∂u/∂x(x₁, y₁) + i ∂v/∂x(x₂, y₂)] + Δy [∂u/∂y(x₁, y₁) + i ∂v/∂y(x₂, y₂)]
Using the C-R conditions ∂u/∂x = ∂v/∂y and ∂u/∂y = −∂v/∂x,
Δf = (Δx + iΔy)[∂u/∂x(x₁, y₁) + i ∂v/∂x(x₁, y₁)]
   + iΔx [∂v/∂x(x₂, y₂) − ∂v/∂x(x₁, y₁)] + iΔy [∂u/∂x(x₂, y₂) − ∂u/∂x(x₁, y₁)],
so that
Δf/Δz = ∂u/∂x(x₁, y₁) + i ∂v/∂x(x₁, y₁) + i (Δx/Δz)[∂v/∂x(x₂, y₂) − ∂v/∂x(x₁, y₁)] + i (Δy/Δz)[∂u/∂x(x₂, y₂) − ∂u/∂x(x₁, y₁)].
Since |Δx/Δz| ≤ 1 and |Δy/Δz| ≤ 1, and the partial derivatives are continuous, the bracketed differences vanish as Δz → 0: the limit f'(z) = ∂u/∂x + i ∂v/∂x exists.
The C-R conditions, along with continuity of the first order partial derivatives, imply analyticity.
Differentiating ∂u/∂x = ∂v/∂y and ∂u/∂y = −∂v/∂x again,
∂²u/∂x² = ∂²v/∂x∂y, ∂²u/∂y² = −∂²v/∂y∂x, ∂²v/∂x² = −∂²u/∂x∂y, ∂²v/∂y² = ∂²u/∂y∂x;
hence
∂²u/∂x² + ∂²u/∂y² = 0 = ∂²v/∂x² + ∂²v/∂y².
Real and imaginary components of an analytic function are harmonic functions.
Conjugate harmonic function of u(x, y): v(x, y)
Families of curves u(x, y) = c and v(x, y) = k are mutually orthogonal, except possibly at points where f'(z) = 0.
Question: If u(x, y) is given, then how to develop the complete analytic function w = f(z) = u(x, y) + iv(x, y)?
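A small finite-difference check (an illustrative sketch, taking f(z) = e^z, so u = e^x cos y and v = e^x sin y) that the two components satisfy both the C-R conditions and Laplace's equation:

```python
import numpy as np

h = 1e-4
u = lambda x, y: np.exp(x) * np.cos(y)   # Re e^z
v = lambda x, y: np.exp(x) * np.sin(y)   # Im e^z

def dx(g, x, y): return (g(x + h, y) - g(x - h, y)) / (2 * h)
def dy(g, x, y): return (g(x, y + h) - g(x, y - h)) / (2 * h)
def lap(g, x, y):
    return (g(x + h, y) + g(x - h, y) + g(x, y + h) + g(x, y - h) - 4 * g(x, y)) / h**2

x0, y0 = 0.7, -0.4
print(dx(u, x0, y0) - dy(v, x0, y0))     # u_x - v_y ~ 0
print(dy(u, x0, y0) + dx(v, x0, y0))     # u_y + v_x ~ 0
print(lap(u, x0, y0), lap(v, x0, y0))    # both ~ 0: u and v are harmonic
```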
Conformal Mapping
Function: mapping of elements in the domain to their images in the range
Depiction of a complex variable requires a plane with two axes. The mapping of a complex function w = f(z) is shown in two planes.
Example: mapping of a rectangle under the transformation w = e^z
[Figure: a rectangle with vertices O, A, B, C, D in the z-plane (x-y axes) and its image under w = e^z in the w-plane (u-v axes)]
Conformal Mapping
Conformal mapping: a mapping that preserves the angle between any two directions, in magnitude and sense.
Verify: w = e^z defines a conformal mapping.
Through the relative orientations of curves at the points of intersection, the local shape of a figure is preserved.
Take a curve z(t), z(0) = z₀, and its image w(t) = f[z(t)], w₀ = f(z₀). For analytic f(z), w'(0) = f'(z₀) z'(0), implying
|w'(0)| = |f'(z₀)| |z'(0)| and arg w'(0) = arg f'(z₀) + arg z'(0).
For several curves through z₀, the image curves pass through w₀ and all of them turn by the same angle arg f'(z₀).
Cautions
f'(z) varies from point to point. Different scaling and turning effects take place at different points. The global shape changes.
Where f'(z) = 0, the argument is undefined and conformality is lost.
Conformal Mapping
An analytic function defines a conformal mapping except at its critical points, where its derivative vanishes.
Except at critical points, an analytic function is invertible. We can establish an inverse of any conformal mapping.
Examples
Linear function w = az + b (for a ≠ 0)
Linear fractional transformation w = (az + b)/(cz + d), ad − bc ≠ 0
Other elementary functions like zⁿ, e^z etc.
Special significance of conformal mappings: A harmonic function φ(u, v) in the w-plane is also a harmonic function, in the form φ(x, y), in the z-plane, as long as the two planes are related through a conformal mapping.
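Conformality is easy to test numerically (an illustrative sketch with w = e^z and two assumed tangent directions at a point z₀): the angle between the image directions equals the angle between the original directions, in magnitude and sense.

```python
import numpy as np

f = np.exp                                 # the analytic map w = e^z
z0 = 0.3 + 0.5j
d1, d2 = 1.0 + 0.0j, 0.6 + 0.8j            # tangent directions of two curves through z0
h = 1e-6

w1 = (f(z0 + h * d1) - f(z0)) / h          # image tangent directions (numerical derivative)
w2 = (f(z0 + h * d2) - f(z0)) / h

print(np.angle(d2 / d1))                   # angle between the curves in the z-plane
print(np.angle(w2 / w1))                   # the same angle between the images in the w-plane
```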
Potential Theory
Riemann mapping theorem: Let D be a simply connected domain in the z -plane bounded by a closed curve C . Then there exists a conformal mapping that gives a one-to-one correspondence between D and the unit disc |w | < 1 as well as between C and the unit circle |w | = 1, bounding the unit disc. First, establish a conformal mapping between the given domain and a domain of simple geometry. Next, solve the BVP in this simple domain. Finally, using the inverse of the conformal mapping, construct the solution for the given domain.
Potential Theory
Two-dimensional potential flow
A streamline is a curve in the flow field, the tangent to which at any point is along the local velocity vector.
The stream function ψ(x, y) remains constant along a streamline; ψ(x, y) is the conjugate harmonic function of the potential function φ(x, y). The complex potential function w(z) = φ(x, y) + iψ(x, y) defines the flow.
If a flow field encounters a solid boundary of a complicated shape, transform the boundary conformally to a simple boundary to facilitate the study of the flow pattern.
Points to note
Analytic functions and Cauchy-Riemann conditions Conformality of analytic functions Applications in solving BVPs and flow description
Outline
Integrals in the Complex Plane Line Integral Cauchy's Integral Theorem Cauchy's Integral Formula
Line Integral
∫_C f(z)dz = ∫_C (u dx − v dy) + i ∫_C (v dx + u dy).
With a parametrization z(t), a ≤ t ≤ b, of the path C,
∫_C f(z)dz = ∫_a^b f[z(t)] z'(t) dt.
Over a simple closed curve, the contour integral: ∮_C f(z)dz
Example: ∮_C zⁿ dz for integer n, around the circle z = e^{iθ}:
∮_C zⁿ dz = i ∫₀^{2π} e^{i(n+1)θ} dθ = 0 for n ≠ −1, and 2πi for n = −1.
|∫_C f(z)dz| ≤ ∫_C |f(z)| |dz| < M ∫_C |dz| = ML.
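The example ∮_C zⁿ dz can be verified numerically by parametrizing the unit circle (an illustrative sketch):

```python
import numpy as np

def contour_integral(func, n_pts=20000):
    # integral of func(z) dz around the unit circle z = e^{i theta}
    theta = np.linspace(0.0, 2.0 * np.pi, n_pts, endpoint=False)
    z = np.exp(1j * theta)
    dz = 1j * z * (2.0 * np.pi / n_pts)          # dz = i e^{i theta} d theta
    return np.sum(func(z) * dz)

for n in (-2, -1, 0, 1, 3):
    print(n, contour_integral(lambda z: z**n))   # 2*pi*i for n = -1, ~0 otherwise
```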
Contour integral ∮_C f(z)dz = ?
If f'(z) is continuous, then by Green's theorem in the plane,
∮_C f(z)dz = −∬_R (∂v/∂x + ∂u/∂y) dx dy + i ∬_R (∂u/∂x − ∂v/∂y) dx dy,
where R is the region enclosed by C. From the C-R conditions, ∮_C f(z)dz = 0.
Proof by Goursat: without the hypothesis of continuity of f'(z)
Cauchy-Goursat theorem: If f(z) is analytic in a simply connected domain D, then ∮_C f(z)dz = 0 for every simple closed curve C in D.
Importance of Goursat's contribution: continuity of f'(z) appears as a consequence!
Principle of path independence
Two points z₁ and z₂ on a closed curve C: two open paths C₁ and C₂ from z₁ to z₂.
Cauchy's theorem on C, comprising C₁ in the forward direction and C₂ in the reverse direction:
∫_{C₁} f(z)dz − ∫_{C₂} f(z)dz = 0 ⇒ ∫_{z₁}^{z₂} f(z)dz = ∫_{C₁} f(z)dz = ∫_{C₂} f(z)dz
For an analytic function f(z) in a simply connected domain D, ∫_{z₁}^{z₂} f(z)dz is independent of the path and depends only on the end-points, as long as the path is completely contained in D.
Consequence: Definition of the function
F(z) = ∫_{z₀}^{z} f(ζ) dζ
[F(z + Δz) − F(z)]/Δz − f(z) = (1/Δz)[∫_{z₀}^{z+Δz} f(ζ)dζ − ∫_{z₀}^{z} f(ζ)dζ] − f(z)
  = (1/Δz) ∫_z^{z+Δz} [f(ζ) − f(z)] dζ → 0 as Δz → 0, since ∫_z^{z+Δz} dζ = Δz and f is continuous.
If f(z) is analytic in a simply connected domain D, then there exists an analytic function F(z) in D such that F'(z) = f(z) and
∫_{z₁}^{z₂} f(z)dz = F(z₂) − F(z₁).
[Figure: paths C₁, C₂, C₃ between z₁ and z₂ in a domain D, related by continuous deformation]
∫_{C₁} f(z)dz = ∫_{C₂} f(z)dz = ∫_{C₃} f(z)dz
The line integral remains unaltered through a continuous deformation of the path of integration with fixed end-points, as long as the sweep of the deformation includes no point where the integrand is non-analytic.
[Figure: contour C with non-overlapping inner contours C₁, C₂, C₃ connected by cuts]
∮_C f(z)dz − ∮_{C₁} f(z)dz − ∮_{C₂} f(z)dz − ∮_{C₃} f(z)dz = 0.
If f(z) is analytic in a region bounded by the contour C as the outer boundary and non-overlapping contours C₁, C₂, C₃, ..., C_n as inner boundaries, then
∮_C f(z)dz = Σ_{i=1}^{n} ∮_{C_i} f(z)dz.
f(z): an analytic function in a simply connected domain D. For z₀ ∈ D and a simple closed curve C in D,
∮_C f(z)/(z − z₀) dz = 2πi f(z₀).
Consider C as a circle with centre at z₀ and radius δ, with no loss of generality (why?).
∮_C f(z)/(z − z₀) dz = f(z₀) ∮_C dz/(z − z₀) + ∮_C [f(z) − f(z₀)]/(z − z₀) dz
From the continuity of f(z), for any ε there exists δ such that |z − z₀| < δ ⇒ |f(z) − f(z₀)| < ε, and then |[f(z) − f(z₀)]/(z − z₀)| < ε/δ on C; the second integral is bounded by 2πε and can be made arbitrarily small, while the first equals 2πi f(z₀).
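A numerical check of the formula (an illustrative sketch with f(z) = e^z and a circle of assumed radius 2 about z₀):

```python
import numpy as np

def cauchy_value(f, z0, radius=2.0, n_pts=20000):
    # f(z0) = (1 / 2 pi i) * contour integral of f(z) / (z - z0) dz over a circle about z0
    theta = np.linspace(0.0, 2.0 * np.pi, n_pts, endpoint=False)
    z = z0 + radius * np.exp(1j * theta)
    dz = 1j * radius * np.exp(1j * theta) * (2.0 * np.pi / n_pts)
    return np.sum(f(z) / (z - z0) * dz) / (2.0j * np.pi)

z0 = 0.4 - 0.3j
print(cauchy_value(np.exp, z0))   # matches ...
print(np.exp(z0))                 # ... the direct evaluation f(z0)
```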
Evaluation of a function at a point: If finding the integral on the left-hand side is relatively simple, then we use it to evaluate f(z₀). Significant in the solution of boundary value problems!
Example: Poisson's integral formula
u(r, θ) = (1/2π) ∫₀^{2π} (R² − r²) u(R, φ) / [R² − 2Rr cos(θ − φ) + r²] dφ
Evaluation of a contour integral: If g(z) is analytic on the contour and in the enclosed region, then Cauchy's theorem implies ∮_C g(z)dz = 0. If the contour encloses a singularity at z₀, then Cauchy's formula supplies a non-zero contribution to the integral, if f(z) = g(z)(z − z₀) is analytic.
Poisson's integral formula
Taking z₀ = re^{iθ} and z = Re^{iφ} (with r < R) in Cauchy's formula,
2πi f(re^{iθ}) = ∫₀^{2π} f(Re^{iφ}) / (Re^{iφ} − re^{iθ}) · iRe^{iφ} dφ.
How to get rid of imaginary quantities from the expression? Develop a complement. With R²/r in place of r, the singularity falls outside the circle, so
0 = ∫₀^{2π} f(Re^{iφ}) / (Re^{iφ} − (R²/r)e^{iθ}) · iRe^{iφ} dφ = ∫₀^{2π} f(Re^{iφ}) / (re^{iφ} − Re^{iθ}) · ire^{iφ} dφ.
Subtracting,
2πi f(re^{iθ}) = i ∫₀^{2π} f(Re^{iφ}) [Re^{iφ}/(Re^{iφ} − re^{iθ}) − re^{iφ}/(re^{iφ} − Re^{iθ})] dφ
              = i ∫₀^{2π} f(Re^{iφ}) (R² − r²) / [R² − 2Rr cos(φ − θ) + r²] dφ,
f(re^{iθ}) = (1/2π) ∫₀^{2π} f(Re^{iφ}) (R² − r²) / [R² − 2Rr cos(φ − θ) + r²] dφ;
its real part gives Poisson's integral formula for the harmonic function u(r, θ).
Cauchy's integral formula evaluates the contour integral of g(z), if the contour encloses a point z₀ where g(z) is non-analytic but g(z)(z − z₀) is analytic. What if g(z)(z − z₀) is also non-analytic, but g(z)(z − z₀)² is analytic?
f(z₀) = (1/2πi) ∮_C f(z)/(z − z₀) dz,
f'(z₀) = (1/2πi) ∮_C f(z)/(z − z₀)² dz,
f''(z₀) = (2!/2πi) ∮_C f(z)/(z − z₀)³ dz,
...,
f^(n)(z₀) = (n!/2πi) ∮_C f(z)/(z − z₀)^{n+1} dz.
The formal expressions can be established through differentiation under the integral sign.
An analytic function possesses derivatives of all orders at every point in its domain. Analyticity implies much more than mere differentiability!
Points to note
Concept of the line integral in the complex plane Cauchy's integral theorem Consequences of analyticity Cauchy's integral formula Derivatives of arbitrary order for analytic functions
Outline
Singularities of Complex Functions Series Representations of Complex Functions Zeros and Singularities Residues Evaluation of Real Integrals
Taylor's series of f(z) about z₀:
f(z) = Σ_{n=0}^∞ a_n (z − z₀)^n, with a_n = f^(n)(z₀)/n! = (1/2πi) ∮_C f(w)/(w − z₀)^{n+1} dw,
where C is a circle with centre at z₀.
Form of the series and coefficients: similar to real functions
The series representation is convergent within a disc |z − z₀| < R, where the radius of convergence R is the distance of the nearest singularity from z₀.
Note: No valid power series representation around z₀, i.e. in powers of (z − z₀), if f(z) is not analytic at z₀.
Question: In that case, what about a series representation that includes negative powers of (z − z₀) as well?
Laurent's series: If f(z) is analytic on circles C₁ (outer) and C₂ (inner) with centre at z₀, and in the annulus in between, then
f(z) = Σ_{n=−∞}^{∞} a_n (z − z₀)^n = Σ_{m=0}^{∞} b_m (z − z₀)^m + Σ_{m=1}^{∞} c_m/(z − z₀)^m;
with a_n = (1/2πi) ∮_C f(w)/(w − z₀)^{n+1} dw, or,
b_m = (1/2πi) ∮_C f(w)/(w − z₀)^{m+1} dw and c_m = (1/2πi) ∮_C f(w)(w − z₀)^{m−1} dw;
the contour C lying in the annulus and enclosing C₂.
Validity of this series representation: in the annular region obtained by growing C₁ and shrinking C₂ till f(z) ceases to be analytic.
Observation: If f(z) is analytic inside C₂ as well, then c_m = 0 and Laurent's series reduces to Taylor's series.
Proof of Laurent's series
Cauchy's integral formula for any point z in the annulus:
f(z) = (1/2πi) ∮_{C₁} f(w)/(w − z) dw − (1/2πi) ∮_{C₂} f(w)/(w − z) dw.
[Figure: annulus between C₁ and C₂, with z inside it and w on the contours]
Organization of the series:
1/(w − z) = 1/{(w − z₀)[1 − (z − z₀)/(w − z₀)]} over C₁, and
1/(w − z) = −1/{(z − z₀)[1 − (w − z₀)/(z − z₀)]} over C₂.
Using the expression for the sum of a geometric series,
1 + q + q² + ... + q^{n−1} = (1 − qⁿ)/(1 − q), i.e. 1/(1 − q) = 1 + q + q² + ... + q^{n−1} + qⁿ/(1 − q);
we use q = (z − z₀)/(w − z₀) over C₁ and q = (w − z₀)/(z − z₀) over C₂.
With these choices of q,
1/(w − z) = 1/(w − z₀) + (z − z₀)/(w − z₀)² + ... + (z − z₀)^{n−1}/(w − z₀)^n + [(z − z₀)/(w − z₀)]^n · 1/(w − z) over C₁, and
1/(w − z) = −1/(z − z₀) − (w − z₀)/(z − z₀)² − ... − (w − z₀)^{n−1}/(z − z₀)^n − [(w − z₀)/(z − z₀)]^n · 1/(w − z) over C₂.
Substituting in the two contour integrals, the C₁ integral yields the non-negative powers b_m(z − z₀)^m (m = 0, ..., n − 1) plus a remainder, and
−(1/2πi) ∮_{C₂} f(w)/(w − z) dw = c₁(z − z₀)^{−1} + ... + c_n(z − z₀)^{−n} + a remainder.
Assembling the two parts,
f(z) = Σ_k a_k (z − z₀)^k + T_n + T̃_n,
where
T_n = (1/2πi) ∮_{C₁} [(z − z₀)/(w − z₀)]^n f(w)/(w − z) dw and
T̃_n = (1/2πi) ∮_{C₂} [(w − z₀)/(z − z₀)]^n f(w)/(z − w) dw.
Since |(z − z₀)/(w − z₀)| < 1 over C₁ and |(w − z₀)/(z − z₀)| < 1 over C₂, the remainder terms T_n and T̃_n approach zero as n → ∞.
Remark: For actually developing Taylor's or Laurent's series of a function, algebraic manipulation of known facts is employed quite often, rather than evaluating so many contour integrals!
Zeros of an analytic function: points where the function vanishes
If, at a point z₀, a function f(z) vanishes along with the first m − 1 of its derivatives, but f^(m)(z₀) ≠ 0, then z₀ is a zero of f(z) of order m, giving the Taylor's series as f(z) = (z − z₀)^m g(z).
An isolated zero has a neighbourhood containing no other zero.
For an analytic function, not identically zero, every point has a neighbourhood free of zeros of the function, except possibly for that point itself. In particular, zeros of such an analytic function are always isolated.
Implication: If f (z ) has a zero in every neighbourhood around z0 then it cannot be analytic at z0 , unless it is the zero function [i.e. f (z ) = 0 everywhere].
Entire function: A function which is analytic everywhere
Examples: zⁿ (for positive integer n), e^z, sin z etc.
The Taylor's series of an entire function has an infinite radius of convergence.
Singularities: points where a function ceases to be analytic
Removable singularity: If f(z) is not defined at z₀, but has a limit. Example: f(z) = (e^z − 1)/z at z = 0.
Pole: If f(z) has a Laurent's series around z₀ with a finite number of terms with negative powers. If the lowest power present is (z − z₀)^{−m}, i.e. a_n = 0 for n < −m but a_{−m} ≠ 0, then z₀ is a pole of order m, lim_{z→z₀} (z − z₀)^m f(z) being a non-zero finite number. A simple pole: a pole of order one.
Essential singularity: A singularity which is neither a removable singularity nor a pole. If the function has a Laurent's series, then it has infinitely many terms with negative powers. Example: f(z) = e^{1/z} at z = 0.
Poles are necessarily isolated singularities.
A zero of f(z) of order m is a pole of 1/f(z) of the same order, and vice versa.
If f(z) has a zero of order m at z₀ where g(z) has a pole of the same order, then f(z)g(z) is either analytic at z₀ or has a removable singularity there.
Argument theorem: If f(z) is analytic inside and on a simple closed curve C except for a finite number of poles inside, and f(z) ≠ 0 on C, then
(1/2πi) ∮_C f'(z)/f(z) dz = N − P,
where N and P are total numbers of zeros and poles inside C respectively, counting multiplicities (orders).
Residues
Term by term integration of Laurent's series: ∮_C f(z)dz = 2πi a_{−1}
Residue: Res_{z₀} f(z) = a_{−1} = (1/2πi) ∮_C f(z)dz
If f(z) has a pole (of order m) at z₀, then
(z − z₀)^m f(z) = Σ_{n=−m}^{∞} a_n (z − z₀)^{m+n}
is analytic at z₀, and
Res_{z₀} f(z) = a_{−1} = [1/(m − 1)!] lim_{z→z₀} d^{m−1}/dz^{m−1} [(z − z₀)^m f(z)].
Residue theorem: If f(z) is analytic inside and on a simple closed curve C, with singularities at z₁, z₂, z₃, ..., z_k inside C, then
∮_C f(z)dz = 2πi Σ_{i=1}^{k} Res_{z_i} f(z).
General strategy
Identify the required integral as a contour integral of a complex function, or a part thereof.
If the domain of integration is infinite, then extend the contour infinitely, without enclosing new singularities.
Example: I = ∫₀^{2π} φ(cos θ, sin θ) dθ
With z = e^{iθ}, cos θ = (z + 1/z)/2, sin θ = (z − 1/z)/(2i) and dθ = dz/(iz):
I = ∮_C φ((z + 1/z)/2, (z − 1/z)/(2i)) dz/(iz) = ∮_C f(z)dz,
where C is the unit circle centred at the origin. Denoting the poles falling inside the unit circle C as p_j,
I = 2πi Σ_j Res_{p_j} f(z).
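A concrete instance (illustrative; the integrand 1/(a + cos θ) with a = 2 is assumed): the substitution gives f(z) = 2/[i(z² + 2az + 1)], whose only pole inside the unit circle is at z = −a + √(a² − 1); the residue reproduces the known value 2π/√(a² − 1).

```python
import numpy as np

a = 2.0
# f(z) = 2 / (i (z^2 + 2 a z + 1)) = 2 / (i (z - z_plus)(z - z_minus))
z_plus = -a + np.sqrt(a * a - 1.0)                 # inside |z| = 1
z_minus = -a - np.sqrt(a * a - 1.0)                # outside |z| = 1
residue = 2.0 / (1j * (z_plus - z_minus))          # simple-pole residue at z_plus

I_residue = 2.0j * np.pi * residue                 # I = 2 pi i * (sum of residues inside C)

theta = np.linspace(0.0, 2.0 * np.pi, 200000, endpoint=False)
I_direct = np.sum(1.0 / (a + np.cos(theta))) * (2.0 * np.pi / 200000)

print(I_residue.real, I_direct, 2.0 * np.pi / np.sqrt(a * a - 1.0))   # all agree
```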
I = ∫_{−∞}^{∞} f(x)dx, the denominator of f(x) being of degree two higher than the numerator.
Consider the contour C enclosing the semi-circular region |z| ≤ R, y ≥ 0, large enough to enclose all singularities above the x-axis.
[Figure: semicircular contour of radius R in the upper half-plane, enclosing the poles p_j]
∮_C f(z)dz = ∫_{−R}^{R} f(x)dx + ∫_S f(z)dz
With |f(z)| ≤ M/R² on the semicircle S,
|∫_S f(z)dz| ≤ (M/R²) πR = πM/R → 0 as R → ∞.
Hence, as R → ∞,
I = ∫_{−∞}^{∞} f(x)dx = 2πi Σ_j Res_{p_j} f(z).
Fourier-type integrals
A(s) = ∫_{−∞}^{∞} f(x) cos sx dx and B(s) = ∫_{−∞}^{∞} f(x) sin sx dx
are obtained together from ∫_{−∞}^{∞} f(x) e^{isx} dx = A(s) + iB(s).
On the semicircular contour,
∮_C f(z) e^{isz} dz = ∫_{−R}^{R} f(x) e^{isx} dx + ∫_S f(z) e^{isz} dz.
As |e^{isz}| = |e^{isx}| |e^{−sy}| = |e^{−sy}| ≤ 1 for s, y ≥ 0, the integral over S vanishes as R → ∞, and
∫_{−∞}^{∞} f(x) e^{isx} dx = 2πi Σ_j Res_{p_j} [f(z) e^{isz}],
whose real and imaginary parts give A(s) and B(s).
Points to note
Taylor's series and Laurent's series Zeros and poles of analytic functions Residue theorem Evaluation of real integrals through contour integration of suitable complex functions
Outline
Introduction
Consider a particle moving on a smooth surface z = φ(q₁, q₂). With position r = [q₁(t)  q₂(t)  φ(q₁(t), q₂(t))]ᵀ on the surface and velocity ṙ = [q̇₁  q̇₂  (∇φ)ᵀq̇]ᵀ in the tangent plane, the length of the path from q_i = q(t_i) to q_f = q(t_f) is
l = ∫_{t_i}^{t_f} ||ṙ|| dt = ∫_{t_i}^{t_f} [q̇₁² + q̇₂² + ((∇φ)ᵀq̇)²]^{1/2} dt.
For shortest path or geodesic, minimize the path length l . Question: What are the variables of the problem? Answer: The entire curve or function q (t ). Variational problem: Optimization of a function of functions, i.e. a functional.
Introduction
Functionals and their extremization
Suppose that a candidate curve is represented as a sequence of points q_j = q(t_j) at time instants t_i = t₀ < t₁ < t₂ < t₃ < ... < t_{N−1} < t_N = t_f.
Geodesic problem: a multivariate optimization problem with the 2(N − 1) variables in {q_j, 1 ≤ j ≤ N − 1}.
With N → ∞, we obtain the actual function.
First order necessary condition: the functional is stationary with respect to arbitrary small variations in {q_j}. [Equivalent to vanishing of the gradient]
This gives equations for the stationary points. Here, these equations are differential equations!
Introduction
Examples of variational problems
Geodesic path: Minimize l = ∫_a^b ||ṙ(t)|| dt
Minimal surface of revolution: Minimize S = 2π ∫ y ds = 2π ∫_a^b y √(1 + y'²) dx
The brachistochrone problem: To find the curve along which the descent is fastest. Minimize T = ∫ ds/v = ∫_a^b √[(1 + y'²)/(2gy)] dx
Fermat's principle: Light takes the fastest path. Minimize T = ∫_{u₁}^{u₂} √(x'² + y'² + z'²)/c(x, y, z) du
Isoperimetric problem: Largest area in the plane enclosed by a closed curve of given perimeter. By extension, extremize a functional under one or more equality constraints.
Hamilton's principle of least action: Evolution of a dynamic system through the minimization of the action
s = ∫_{t₁}^{t₂} L dt = ∫_{t₁}^{t₂} (K − P) dt
Euler's Equation
To make the functional
I[y(x)] = ∫_{x₁}^{x₂} f[x, y(x), y'(x)] dx
stationary, with boundary conditions y(x₁) = y₁ and y(x₂) = y₂:
consider a variation δy(x) with δy(x₁) = δy(x₂) = 0 and the consistent variation δy'(x).
δI = ∫_{x₁}^{x₂} [∂f/∂y δy + ∂f/∂y' δy'] dx
Integration of the second term by parts:
∫_{x₁}^{x₂} ∂f/∂y' δy' dx = ∫_{x₁}^{x₂} ∂f/∂y' d(δy)/dx dx = [∂f/∂y' δy]_{x₁}^{x₂} − ∫_{x₁}^{x₂} d/dx(∂f/∂y') δy dx = −∫_{x₁}^{x₂} d/dx(∂f/∂y') δy dx.
Hence δI = ∫_{x₁}^{x₂} [∂f/∂y − d/dx(∂f/∂y')] δy dx.
Euler's Equation
For δI to vanish for arbitrary δy(x),
∂f/∂y − d/dx(∂f/∂y') = 0: Euler's equation.
For functionals involving higher derivatives,
I[y(x)] = ∫_{x₁}^{x₂} f(x, y, y', y'', ..., y^(n)) dx,
δI = ∫_{x₁}^{x₂} [∂f/∂y δy + ∂f/∂y' δy' + ∂f/∂y'' δy'' + ... + ∂f/∂y^(n) δy^(n)] dx.
Working rule: Starting from the last term, integrate one term at a time by parts, using consistency of variations and BCs.
Euler's equation:
∂f/∂y − d/dx(∂f/∂y') + d²/dx²(∂f/∂y'') − ... + (−1)ⁿ dⁿ/dxⁿ(∂f/∂y^(n)) = 0,
an ODE of order 2n, in general.
Euler's Equation
Functionals of a vector function
I[r(t)] = ∫_{t₁}^{t₂} f(t, r, ṙ) dt, and
δI = ∫_{t₁}^{t₂} [(∂f/∂r)ᵀ δr + (∂f/∂ṙ)ᵀ δṙ] dt
   = ∫_{t₁}^{t₂} (∂f/∂r)ᵀ δr dt + [(∂f/∂ṙ)ᵀ δr]_{t₁}^{t₂} − ∫_{t₁}^{t₂} [d/dt(∂f/∂ṙ)]ᵀ δr dt
   = ∫_{t₁}^{t₂} [∂f/∂r − d/dt(∂f/∂ṙ)]ᵀ δr dt.
For δI to vanish for arbitrary δr(t): ∂f/∂r − d/dt(∂f/∂ṙ) = 0.
Euler's Equation
Functionals of functions of two variables: I[u(x, y)] = ∬ f(x, y, u, u_x, u_y) dx dy
Euler's equation: ∂f/∂u − ∂/∂x(∂f/∂u_x) − ∂/∂y(∂f/∂u_y) = 0
Moving boundaries
Revision of the basic case: allowing non-zero δy(x₁), δy(x₂).
At an end-point, (∂f/∂y') δy has to vanish for arbitrary δy(x): ∂f/∂y' = 0,
the Euler boundary condition or natural boundary condition.
Equality constraints and isoperimetric problems
Minimize I = ∫_{x₁}^{x₂} f(x, y, y') dx subject to J = ∫_{x₁}^{x₂} g(x, y, y') dx = J₀.
In another level of generalization, a constraint φ(x, y, y') = 0.
Operate with f*(x, y, y', λ) = f(x, y, y') + λ(x) g(x, y, y').
Direct Methods
Finite difference method
With given boundary values y(a) and y(b),
I[y(x)] = ∫_a^b f[x, y(x), y'(x)] dx.
Represent y(x) by its values over x_i = a + ih with i = 0, 1, 2, ..., N, where b − a = Nh. Approximate the functional by
I[y(x)] ≈ Φ(y₁, y₂, y₃, ..., y_{N−1}) = Σ_{i=1}^{N} f(x̄_i, ȳ_i, ȳ'_i) h,
where x̄_i = (x_i + x_{i−1})/2, ȳ_i = (y_i + y_{i−1})/2 and ȳ'_i = (y_i − y_{i−1})/h.
Minimize Φ(y₁, y₂, y₃, ..., y_{N−1}) with respect to the y_i; for example, by solving ∂Φ/∂y_i = 0 for all i. (A sketch follows.)
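A sketch of this recipe on the simplest functional (illustrative; the arc-length integrand and boundary values are assumed): minimize I[y] = ∫₀¹ √(1 + y'²) dx with y(0) = 0, y(1) = 1, whose minimizer should come out as the straight line y = x.

```python
import numpy as np
from scipy.optimize import minimize

N = 40
h = 1.0 / N
y0, yN = 0.0, 1.0                        # boundary values

def Phi(y_int):                          # discretized functional Phi(y_1, ..., y_{N-1})
    y = np.concatenate(([y0], y_int, [yN]))
    dy = np.diff(y) / h                  # y'_i = (y_i - y_{i-1}) / h
    return np.sum(np.sqrt(1.0 + dy**2) * h)

res = minimize(Phi, np.zeros(N - 1))     # arbitrary interior starting guess
x_int = np.linspace(h, 1.0 - h, N - 1)
print(np.max(np.abs(res.x - x_int)))     # ~0: the planar geodesic is y = x
print(res.fun, np.sqrt(2.0))             # minimal length ~ sqrt(2)
```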
Direct Methods
Rayleigh-Ritz method
In terms of suitable basis functions w_i(x), propose the solution as
y(x) = Σ_{i=1}^{N} α_i w_i(x).
Represent the functional I[y(x)] as a multivariate function Φ(α). Optimize Φ(α) to determine the α_i's.
Note: As N → ∞, the numerical solution approaches exactitude. For a particular tolerance, one can truncate appropriately.
Observation: With these direct methods, no need to reduce the variational (optimization) problem to Euler's equation!
Question: Is it possible to reformulate a BVP as a variational problem and then use a direct method?
Direct Methods
The inverse problem: From
I[y(x)] ≈ Φ(α) = ∫_a^b f(x, Σ_{i=1}^N α_i w_i(x), Σ_{i=1}^N α_i w_i'(x)) dx,
∂Φ/∂α_i = ∫_a^b [∂f/∂y(x, Σ_j α_j w_j, Σ_j α_j w_j') w_i(x) + ∂f/∂y'(x, Σ_j α_j w_j, Σ_j α_j w_j') w_i'(x)] dx.
Integrating the second term by parts and using the boundary conditions,
∂Φ/∂α_i = ∫_a^b R[Σ_j α_j w_j(x)] w_i(x) dx,
where R[y] ≡ ∂f/∂y − d/dx(∂f/∂y') = 0 is the Euler's equation of the variational problem.
Def.: R[z(x)]: residual of the differential equation R[y] = 0 operated over the function z(x)
The residual of the Euler's equation of a variational problem, operated upon the solution obtained by the Rayleigh-Ritz method, is orthogonal to the basis functions w_i(x).
Direct Methods
Galerkin method
Question: What if we cannot find a corresponding variational problem for the differential equation?
Answer: Work with the residual directly and demand
∫_a^b R[Σ_j α_j w_j(x)] w_i(x) dx = 0 for each basis function w_i(x).
Freedom to choose two different families of functions as basis functions φ_j(x) and trial functions w_i(x):
∫_a^b R[Σ_j α_j φ_j(x)] w_i(x) dx = 0.
Choosing delta functions, at discrete points, as trial functions: satisfaction of the differential equation exactly at the chosen points, known as collocation points: Collocation method
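A minimal Galerkin sketch (illustrative; the BVP y'' + 1 = 0 with y(0) = y(1) = 0 and the sine basis are assumptions for the example): demand that the residual be orthogonal to each basis function and solve the resulting linear system.

```python
import numpy as np

# BVP: y'' + 1 = 0, y(0) = y(1) = 0; residual R[y] = y'' + 1; exact solution y = x (1 - x) / 2
N = 8
x = np.linspace(0.0, 1.0, 2001)
w = [np.sin((i + 1) * np.pi * x) for i in range(N)]                              # basis functions
w2 = [-((i + 1) * np.pi) ** 2 * np.sin((i + 1) * np.pi * x) for i in range(N)]   # their second derivatives

def trap(f):
    return np.sum((f[1:] + f[:-1]) * np.diff(x)) / 2.0

# Galerkin conditions: int_0^1 (sum_j a_j w_j'' + 1) w_i dx = 0 for each i
K = np.array([[trap(w2[j] * w[i]) for j in range(N)] for i in range(N)])
b = np.array([-trap(w[i]) for i in range(N)])
a = np.linalg.solve(K, b)

y = sum(a[j] * w[j] for j in range(N))
print(np.max(np.abs(y - x * (1.0 - x) / 2.0)))   # small: matches the exact solution
```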
Direct Methods
Finite element methods
discretization of the domain into elements of simple geometry
basis functions of low order polynomials with local scope
design of basis functions so as to achieve enough order of continuity or smoothness across element boundaries
piecewise continuous/smooth basis functions for the entire domain, with a built-in sparse structure
some weighted residual method to frame the algebraic equations
solution gives coefficients which are actually the nodal values
Points to note
Optimization with respect to a function Concept of a functional Euler's equation Rayleigh-Ritz and Galerkin methods Optimization and equation-solving in the infinite-dimensional function space: practical methods and connections
Outline
Epilogue
Epilogue
Source for further information: http://home.iitk.ac.in/ dasgupta/MathBook Destination for feedback: [email protected] Some general courses in immediate continuation
Advanced Mathematical Methods Scientific Computing Advanced Numerical Analysis Optimization Advanced Differential Equations Partial Differential Equations Finite Element Methods
Epilogue
Linear Algebra and Matrix Theory Approximation Theory Variational Calculus and Optimal Control Advanced Mathematical Physics Geometric Modelling Computational Geometry Computer Graphics Signal Processing Image Processing
Outline
Selected References
Selected References I
F. S. Acton. Numerical Methods that usually Work. The Mathematical Association of America (1990).
C. M. Bender and S. A. Orszag. Advanced Mathematical Methods for Scientists and Engineers. Springer-Verlag (1999).
G. Birkhoff and G.-C. Rota. Ordinary Differential Equations. John Wiley and Sons (1989).
G. H. Golub and C. F. Van Loan. Matrix Computations. The Johns Hopkins University Press (1983).
Selected References II
M. T. Heath. Scientific Computing. Tata McGraw-Hill Co. Ltd (2000).
E. Kreyszig. Advanced Engineering Mathematics. John Wiley and Sons (2002).
E. V. Krishnamurthy and S. K. Sen. Numerical Algorithms. Affiliated East-West Press Pvt Ltd (1986).
D. G. Luenberger. Linear and Nonlinear Programming. Addison-Wesley (1984).
P. V. O'Neil. Advanced Engineering Mathematics. Thomson Books (2004).