Numerical Solution of Differential Algebraic Equations
IMM
Contents
Preface

Chapter 1. Introduction
  1.1. Definitions
    1.1.1. Semi-explicit DAEs
    1.1.2. Index
      1.1.2.1. Singular algebraic constraint
      1.1.2.2. Implicit algebraic variable
    1.1.3. Index reduction
  1.2. Singular Perturbations
    1.2.1. An example
  1.3. The Van der Pol equation

Part 1. Methods

Chapter 2. Runge-Kutta Methods for DAE problems
  2.1. Introduction
    2.1.1. Basic Runge-Kutta methods
    2.1.2. Simplifying conditions
    2.1.3. Order reduction, stage order, stiff accuracy
    2.1.4. Collocation methods
  2.2. Runge-Kutta Methods for Problems of Index 1
    2.2.1. State Space Form Method
    2.2.2. The ε-embedding method for problems of index 1
  2.3. Runge-Kutta Methods for high-index problems
    2.3.1. The semi-explicit index-2 system
    2.3.2. The ε-embedding method
    2.3.3. Collocation methods
    2.3.4. Order-table for some methods in the index-2 case
  2.4. Special Runge-Kutta methods
    2.4.1. Explicit Runge-Kutta methods (ERK)
    2.4.2. Fully implicit Runge-Kutta methods (FIRK)
    2.4.3. Diagonally Implicit Runge-Kutta methods (DIRK)
    2.4.4. Design Criteria for Runge-Kutta methods
  2.5. ε-Expansion of the Smooth Solution

Chapter 3. Projection Methods
  3.1. Introduction
    3.1.1. Problem
    3.1.2. Example case
  3.2. Index reduction
    3.2.1. Example on index reduction
    3.2.2. Restriction to manifold
    3.2.3. Implications of reduction
    3.2.4. Drift-off phenomenon
    3.2.5. Example of drift-off
  3.3. Projection
    3.3.1. Coordinate projection
    3.3.2. Another projection method and the projected RK method
  3.4. Special topics
    3.4.1. Systems with invariants
    3.4.2. Overdetermined systems

Chapter 4. BDF-methods
  4.1. Multi Step-Methods in general and BDF-methods
    4.1.1. BDF-methods
  4.2. BDF-methods applied to DAEs
    4.2.1. Semi-Explicit Index-1 Systems
    4.2.2. Fully-Implicit Index-1 Systems
    4.2.3. Semi-Explicit Index-2 Systems
    4.2.4. Index-3 Systems of Hessenberg form
    4.2.5. Summary
    4.2.6. DAE-codes

Chapter 5. Stabilized Multi-step Methods Using β-Blocking
  5.1. Adams methods
  5.2. β-blocking
    5.2.1. Why β-blocking
    5.2.2. Difference correcting
  5.3. Consequences of β-blocking
  5.4. Discretization of the Euler-Lagrange DAE Formulation
    5.4.1. Commutative application of the DCBDF method
    5.4.2. Example: The Trapezoidal Rule
  5.5. Observed relative error

Part 2. Special Issues

Chapter 6. Algebraic Stability
  6.1. General linear stability - AN-stability
  6.2. Non-linear stability

Chapter 7. Singularities in DEs
  7.1. Motivation
  7.2. Description of the method
  7.3. Detailed description of the method
    7.3.1. Singularities in ODEs
    7.3.2. Projection: A little more on charts
    7.3.3. Singularities in DAEs
    7.3.4. Augmentation: Making the singular ODE a nonsingular ODE
  7.4. Implementing the algorithm

Chapter 8. ODEs with invariants
  8.1. Examples of systems with invariants
  8.2. Solving ODEs with invariants
    8.2.1. Methods preserving invariants
    8.2.2. Reformulation as a DAE
    8.2.3. Stabilization of the ODE system
  8.3. Flying to the moon
    8.3.1. Going nowhere
    8.3.2. Round about the Earth
    8.3.3. Lunar landing
    8.3.4. What's out there?
  8.4. Comments on Improvement of the Solution

Part 3. Applications

Chapter 9. Multibody Systems
  9.1. What is a Multibody System?
  9.2. Why this interest in Multibody Systems?
  9.3. A little bit of history repeating
  9.4. The tasks in multibody system analysis
  9.5. Example of a multibody system
    9.5.1. The multibody truck
    9.5.2. The pendulum
    9.5.3. Numerical solution
  9.6. Problems
  9.7. Multibody Software

Chapter 10. DAEs in Energy System Models
  10.1. Models of Energy System Dynamics
  10.2. Example Simulations with DNA
    10.2.1. Numerical methods of DNA
    10.2.2. Example 1: Air flowing through a pipe
    10.2.3. Example 2: Steam temperature control

Bibliography
Index
Preface
These lecture notes have been written as part of a Ph.D. course on the numerical solution of Differential Algebraic Equations. The course was held at IMM in the fall of 1998. The authors of the different chapters have all taken part in the course, and the chapters are written as part of their contribution to the course. We hope that coming courses in the Numerical Solution of DAEs will benefit from the publication of these lecture notes.

The students participating in the course, along with the chapters they have written, are as follows: Astrid Jørdis Kuijers, Chapters 4 and 5 and Section 1.2.1; Anton Antonov Antonov, Chapters 2 and 6; Brian Elmegaard, Chapters 8 and 10; Mikael Zebbelin Poulsen, Chapters 2 and 3; Falko Jens Wagner, Chapters 3 and 9; Erik Østergaard, Chapters 5 and 7.
CHAPTER 1
Introduction
The (modern) theory of numerical solution of ordinary differential equations (ODEs) has been developed since the early part of this century, beginning with Adams, Runge and Kutta. At the present time the theory is well understood and the development of software has reached a state where robust methods are available for a large variety of problems. The theory for Differential Algebraic Equations (DAEs) has not been studied to the same extent; it appeared from early attempts by Gear and Petzold in the early 1970s that not only are the problems harder to solve, but the theory is also harder to understand. The problems that lead to DAEs are found in many applications, of which some are mentioned in the following chapters of these lecture notes. The choice of sources for problems has been influenced by the students following this first-time appearance of the course.

1.1. Definitions

The problems considered are in the most general form a fully implicit equation of the form

(1.1) F(y', y) = 0

where F and y are of dimension n and F is assumed to be sufficiently smooth. This is the autonomous form; a non-autonomous case is defined by

F(x, y', y) = 0

Notice that the non-autonomous equation may be made autonomous by adding the equation x' = 1, and we do therefore not need to consider the non-autonomous form separately¹.

A special case arises when we can solve for the y'-variable, since we (at least formally) can make the equation explicit in this case and obtain a system of ODEs. The condition to be fulfilled is that ∂F/∂y' is nonsingular. When this is not the case the system is commonly known as being differential algebraic, and this
¹This may be subject to debate, since the non-autonomous case can have special features.
will be the topic of these notes. In order to emphasize that the DAE has the general form Eq. 1.1, it is normally called a fully implicit DAE. If F in addition is linear in y (and y'), i.e. has the form

A(x) y' + B(x) y = 0

the DAE is called linear, and if the matrices A and B furthermore are constant we have a constant coefficient linear DAE.

1.1.1. Semi-explicit DAEs. The simplest form of problem is the one where we can write the system in the form

(1.2) y' = f(y, z)
      0 = g(y, z)
and g_z (the partial derivative ∂g/∂z) has a bounded inverse in a neighbourhood of the solution. Assuming we have a set of consistent initial values (y_0, z_0), it follows from the inverse function theorem that z can be found as a function of y. Thus local existence, uniqueness and regularity of the solution follow from the conventional theory of ODEs.

1.1.2. Index. Numerous examples exist where the conditions above do not hold. These cases have general interest, and below we give a couple of examples from applications.

1.1.2.1. Singular algebraic constraint. We consider the problem defined by the system of three equations

y_1' = y_3
0 = y_2 (1 - y_2)
y_1 y_2 + y_3 (1 - y_2) = x
where x is a parameter of our choice. The second equation has two solutions, y_2 = 0 and y_2 = 1, and we may get different situations depending on the choice of initial conditions:

1. If y_2 = 0 we get y_3 = x from the last equation and we can solve the first equation for y_1.
2. Setting y_2 = 1 we get y_1 = x from the last equation, and y_3 = 1 follows from the first equation.

1.1.2.2. Implicit algebraic variable. Consider

(1.3) y' = f(y, z)
      0 = g(y)
In this case we have g_z = 0, and the condition of boundedness of the inverse does not hold. However, if g_y f_z has a bounded inverse, we can do the trick of differentiating the second equation, leading to the system

y' = f(y, z)
0 = g_y(y) f(y, z)
and this can then be treated like the semi-explicit case, Eq. 1.2. This motivates the introduction of the index.

DEFINITION 1.1 (Differential index). For general DAE systems we define the index along the solution path as the minimum number of differentiations of the system, Eq. 1.1, that is required to reduce the system to a set of ODEs for the variable y.

The concept of index has been introduced in order to quantify the level of difficulty that is involved in solving a given DAE. It must be stressed, as we indeed will see later, that numerical methods applicable (i.e. convergent) for DAEs of a given index might not be useful for DAEs of higher index. Complementary to the differential index one can define the perturbation index [HLR89].

DEFINITION 1.2 (Perturbation index). Eq. 1.1 is said to have perturbation index m along a solution y if m is the smallest integer such that, for all functions ŷ having a defect

F(ŷ', ŷ) = δ(x)

there exists an estimate

‖ŷ(x) − y(x)‖ ≤ C ( ‖ŷ(0) − y(0)‖ + max_{0≤t≤x} ‖δ(t)‖ + … + max_{0≤t≤x} ‖δ^(m−1)(t)‖ )
whenever the expression on the right-hand side is sufficiently small. C is a constant that depends only on the function F and the length of the interval.

If we consider the ODE case of Eq. 1.1, the lemma by Gronwall (see [HNW93, page 62]) gives the bound

(1.4) ‖ŷ(x) − y(x)‖ ≤ C ( ‖ŷ(0) − y(0)‖ + ∫_0^x ‖δ(t)‖ dt )
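The counting in Definition 1.1 can be illustrated mechanically. Below is a small sketch (the concrete choices f(y, z) = z and g(y) = y - x² are our own hypothetical example, not from the text) that repeatedly differentiates the constraint of the implicit-algebraic-variable system Eq. 1.3, substituting the differential equation each time, until the derivative of the algebraic variable z appears; two differentiations are needed, so this instance has differential index 2.

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')(x)
z = sp.Function('z')(x)

# Hypothetical instance of Eq. 1.3:  y' = z,  0 = g(y)  with  g(y) = y - x**2
f = z
g = y - x**2

# Differentiate the constraint, replacing y' by f(y, z) each time,
# until z' appears (then the system has been reduced to ODEs).
constraint = g
for k in range(1, 4):
    constraint = constraint.diff(x).subs(y.diff(x), f)
    if constraint.has(z.diff(x)):
        print(f"index = {k}")   # prints: index = 2
        break
```

The same loop applied to a semi-explicit index-1 system (where g depends on z with invertible g_z) would stop after one differentiation.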
1.1.3. Index reduction. A process called index reduction may be applied to a system for lowering the index from an initially high value down to e.g. index one. This reduction is performed by successive differentiation and is often used in theoretical contexts. Let us illustrate it by an example. We look at example 1.1.2.2 and differentiate Eq. 1.3 once, thereby obtaining

y' = f(y, z)
0 = g_y(y) f(y, z)

Differentiating the algebraic equation once more gives

0 = g_yy(y)(f(y, z), f(y, z)) + g_y(y) f_y(y, z) f(y, z) + g_y(y) f_z(y, z) z'

which, since g_y f_z has a bounded inverse, can be solved for z'.
The two differentiations have reduced the system to one of index zero, and we can solve the resulting ODE system using well-known methods. If we want to find the perturbation index we may look at the equations for the perturbed system

ŷ' = f(ŷ, ẑ) + δ_1(x)
0 = g(ŷ) + δ_2(x)

From the estimates above we can now (by using Gronwall's lemma, Eq. 1.4) obtain

‖ŷ(x) − y(x)‖ ≤ C ( ‖ŷ(0) − y(0)‖ + max_{0≤t≤x} ‖δ_1(t)‖ + max_{0≤t≤x} ‖δ_2'(t)‖ )

‖ẑ(x) − z(x)‖ ≤ C ( ‖ŷ(0) − y(0)‖ + max_{0≤t≤x} ‖δ_1(t)‖ + max_{0≤t≤x} ‖δ_2'(t)‖ + max_{0≤t≤x} ‖δ_2''(t)‖ )
1.2. Singular Perturbations

One important source of DAE problems comes from singular perturbations [RO88]. Here we look at systems of the form

(1.5) y' = f(y, z)
      ε z' = g(y, z)

where 0 < ε ≪ 1.
Letting ε go to zero (ε → 0) we obtain an index-one problem in semi-explicit form. This system may be proven to have an ε-expansion where the expansion coefficients are solutions to the system of DAEs that we get in the limit of Eq. 1.5.

1.2.1. An example. In order to show the complex relationship between singular perturbation problems and higher-index DAEs, we study the following problem:

y' = f_1(x, y, z)
ε z' = f_2(x, y, z)

with initial values y(0) = y_0, z(0) = z_0. For ε = 0 this is a DAE system in semi-explicit form, and we can differentiate the second equation with respect to x and obtain

0 = (f_2)_x(x, y, z) + (f_2)_y(x, y, z) y' + (f_2)_z(x, y, z) z'

If (f_2)_z is singular, z' cannot be isolated directly and further differentiations are required.
This shows that, for indices greater than one, the DAE and the stiff problem cannot always be related easily. Singular perturbation problems (SPPs) are treated in more detail in Subsection 2.2.2.

1.3. The Van der Pol equation

A very famous test problem for systems of ODEs is the Van der Pol equation, defined by the second-order differential equation

y'' = μ (1 − y²) y' − y
This equation may be treated in different ways; the most straightforward is to split the equation into two by introducing a new variable for the derivative (y_1 = y, y_2 = y'), which leads to

y_1' = y_2
y_2' = μ (1 − y_1²) y_2 − y_1
The system of two ODEs may be solved by any standard library solver, but the outcome will depend on the solver and on the parameter μ. If we divide the second of the equations by μ, we get an equation that has the character of a singular perturbation problem. Letting μ → ∞, we see that this corresponds to ε → 0 in Eq. 1.5. Several other approaches may show other aspects of the nature of this problem; for example [HW91] introduces the transformation t = x/μ, and after a scaling of y_2 by μ we get, with ε = 1/μ²,

y_1' = y_2
ε y_2' = (1 − y_1²) y_2 − y_1

As explained previously, this problem will approach a DAE as ε → 0. Using terminology from ODEs, the stiffness of the problem increases as ε gets smaller, giving rise to stability problems for numerical ODE solvers that are explicit, while we expect methods that are A- or L-stable to perform better.
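The stiffness of the Van der Pol system is easy to observe numerically. The sketch below (the solver choices and the parameter values are our own illustrative assumptions, not from the text) integrates the two-variable formulation with an explicit Runge-Kutta solver and with an A-/L-stable implicit one from scipy; stability forces the explicit method into far more function evaluations.

```python
from scipy.integrate import solve_ivp

mu = 1000.0  # stiffness grows with mu

def vdp(x, u):
    y1, y2 = u
    return [y2, mu * (1.0 - y1**2) * y2 - y1]

u0 = [2.0, 0.0]
explicit = solve_ivp(vdp, (0.0, 1.0), u0, method='RK45')   # explicit Runge-Kutta
implicit = solve_ivp(vdp, (0.0, 1.0), u0, method='Radau')  # A-/L-stable, implicit

# The explicit method is forced to tiny steps by stability, not by accuracy.
print(explicit.nfev, implicit.nfev)
```

For this value of μ the explicit solver typically needs orders of magnitude more right-hand-side evaluations than the implicit one.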
Part 1
Methods
CHAPTER 2

Runge-Kutta Methods for DAE problems

2.1. Introduction

2.1.1. Basic Runge-Kutta methods. An s-stage Runge-Kutta method for the ODE y' = f(y) computes internal stages Y_{n,i} and the step result y_{n+1} from

(2.1a) Y_{n,i} = y_n + h Σ_{j=1}^s a_ij f(Y_{n,j}),   i = 1, 2, …, s

(2.1b) y_{n+1} = y_n + h Σ_{i=1}^s b_i f(Y_{n,i})
The method is a one-step method (i.e. it does not utilize information from previous steps) and it is specified by the matrix A with the elements a_ij, and the
vector b having the elements b_i. We call the Y_{n,i} the internal stages of the step. In general these equations represent a non-linear system of equations. The classical theory for determining the order of the local and global error is found in a number of books on ODEs. Both the J. C. Butcher and the P. Albrecht approach are shortly described in [Lam91].

2.1.2. Simplifying conditions. The construction of implicit Runge-Kutta methods is often done with respect to some simplifying (order) conditions, see [HW96, p. 71]:

(2.2a) B(p): Σ_{i=1}^s b_i c_i^{q−1} = 1/q,   q = 1, …, p

(2.2b) C(η): Σ_{j=1}^s a_ij c_j^{q−1} = c_i^q / q,   i = 1, …, s,   q = 1, …, η

(2.2c) D(ζ): Σ_{i=1}^s b_i c_i^{q−1} a_ij = (b_j / q)(1 − c_j^q),   j = 1, …, s,   q = 1, …, ζ
Condition B(p) is the quadrature condition, and C(η) could be interpreted as the quadrature condition for the internal stages. A theorem of Butcher now says:

THEOREM 2.1. If the coefficients b_i, c_i and a_ij of a Runge-Kutta method satisfy B(p), C(η) and D(ζ) with p ≤ η + ζ + 1 and p ≤ 2η + 2, then the method is of order p.

Comment: you could say "at least order p", or, to discover the properties of the method, try to find a p as high as possible.

2.1.3. Order reduction, stage order, stiff accuracy. When using an implicit Runge-Kutta method for solving a stiff system, the order of the error can be reduced compared to what the classical theory predicts. This phenomenon is called order reduction. The cause of the classical order failing is that the step size is actually chosen much too large compared to the time scale of the system. We know, though, and our error estimates often tell the same, that we get accurate solutions even when choosing such big step sizes. This has to do with the fact that in a stiff system we are not following the solution curve, but are actually following the stable manifold.

The order we observe is (at least) described by the concept of stage order. The stage order is the minimum of p and q when the Runge-Kutta method satisfies B(p) and C(q) from Eq. 2.2.
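The simplifying conditions are easy to check mechanically for a given tableau. A sketch (using exact rational arithmetic; the helper names are our own) that verifies B(3) and C(2) for the 2-stage Radau IIA method:

```python
from fractions import Fraction as F

# 2-stage Radau IIA tableau
A = [[F(5, 12), F(-1, 12)],
     [F(3, 4),  F(1, 4)]]
b = [F(3, 4), F(1, 4)]
c = [F(1, 3), F(1, 1)]
s = 2

def holds_B(q):
    # quadrature condition: sum_i b_i c_i^(q-1) = 1/q
    return sum(b[i] * c[i]**(q - 1) for i in range(s)) == F(1, q)

def holds_C(q):
    # stage quadrature condition: sum_j a_ij c_j^(q-1) = c_i^q / q
    return all(sum(A[i][j] * c[j]**(q - 1) for j in range(s)) == c[i]**q / q
               for i in range(s))

order_B = max(p for p in range(1, 7) if all(holds_B(q) for q in range(1, p + 1)))
order_C = max(e for e in range(1, 7) if all(holds_C(q) for q in range(1, e + 1)))
print(order_B, order_C)   # prints: 3 2, i.e. B(2s-1) and C(s) as expected for Radau IIA
```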
A class of methods not suffering from order reduction are the methods which are stiffly accurate. Stiff accuracy means that the final point is actually one of the internal stages (typically the last stage), and therefore the method satisfies

b_i = a_si,   i = 1, …, s
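A quick check of this property on two classical tableaus (the coefficients are quoted from the standard Butcher tables, an assumption outside this text):

```python
import math

# 2-stage Radau IIA: last row of A equals b  ->  stiffly accurate
radau_A_last = [3/4, 1/4]
radau_b      = [3/4, 1/4]

# 2-stage Gauss: last row of A differs from b  ->  not stiffly accurate
r3 = math.sqrt(3)
gauss_A_last = [1/4 + r3/6, 1/4]
gauss_b      = [1/2, 1/2]

print(radau_A_last == radau_b)                                          # True
print(all(abs(a - b) < 1e-12 for a, b in zip(gauss_A_last, gauss_b)))   # False
```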
2.1.4. Collocation methods. Collocation is a fundamental numerical method: a function is fitted to have some properties in some chosen collocation points. Normally the collocation function would be a polynomial P(x) of some degree r. The collocation method for integrating a step (like Eq. 2.1a) would be to find coefficients so that

P(x_n) = y_n
P'(x_n + c_i h) = f(x_n + c_i h, P(x_n + c_i h)),   i = 1, 2, …, s
y_{n+1} = P(x_n + h)
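A collocation method on nodes c_i is equivalent to an implicit Runge-Kutta method whose matrix is a_ij = ∫_0^{c_i} ℓ_j(τ) dτ, with ℓ_j the Lagrange basis polynomials on the nodes. A sketch (sympy, with our own helper names) recovering the 2-stage Radau IIA matrix from its nodes c = (1/3, 1):

```python
import sympy as sp

tau = sp.symbols('tau')
c = [sp.Rational(1, 3), sp.Integer(1)]   # Radau IIA nodes for s = 2
s = len(c)

def lagrange(j):
    """Lagrange basis polynomial ell_j on the nodes c."""
    num = sp.Mul(*[(tau - c[k]) for k in range(s) if k != j])
    den = sp.Mul(*[(c[j] - c[k]) for k in range(s) if k != j])
    return num / den

# a_ij = integral of ell_j from 0 to c_i
A = [[sp.integrate(lagrange(j), (tau, 0, c[i])) for j in range(s)]
     for i in range(s)]
print(A)   # [[5/12, -1/12], [3/4, 1/4]]
```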
Collocation methods are a relevant topic when discussing Runge-Kutta methods. It turns out that some special Runge-Kutta methods, like Gauss, Radau IIA and Lobatto IIIA, are in fact collocation methods. In [Lam91, p. 194] it is shown why such a method is a Runge-Kutta method. The principle is that the c_i in the collocation formula match the ones from the Runge-Kutta method, and the internal stages of the Runge-Kutta method match the collocation method as Y_i = P(x_n + c_i h). The parameters of the collocation method are the c_i's, and they therefore define the A-matrix of the identical (implicit) Runge-Kutta method.

2.2. Runge-Kutta Methods for Problems of Index 1

The traditional form of the index-one problem looks like this:

(2.3a) y' = f(y, z)
(2.3b) 0 = g(y, z)

with the assumption that

(2.3c) g_z(y, z) is invertible

for every point at the solution curve. If g_z(y, z) were not invertible, the system would have index at least 2. This implies that Eq. 2.3b has a unique solution z = G(y) by the Implicit Function Theorem. Inserting this into Eq. 2.3a gives

(2.4) y' = f(y, G(y))
- the so-called state space form. We see that if algebraic manipulations could lead to Eq. 2.4, then we could solve the problem as a normal explicit ODE problem. This is in general not possible.

We should at this time mention another form of the implicit ODE system: the form

M y' = f(y)

typically occurs when you can't separate differential and algebraic equations like in Eq. 2.3, and some of the equations contain more than one entry of the primed variables y_i' from y', see e.g. [HW96, p. 376 ff.]. If M is invertible/regular, this system is actually just a normal ODE system (index 0), and traditional methods can be used. If M is singular, the system has index at least one. If the system is of index 1, then a linear transformation of the variables can give the system a form like Eq. 2.3.

2.2.1. State Space Form Method. Because of the possibility of (at least implicitly) rewriting the problem to the state space form Eq. 2.4, we could imagine using the Runge-Kutta method by solving Eq. 2.3b (for z) in each of the internal stages and at the final solution point:

(2.5) Y_{n,i} = y_n + h Σ_{j=1}^s a_ij f(Y_{n,j}, Z_{n,j})
      0 = g(Y_{n,i}, Z_{n,i}),   i = 1, 2, …, s
      y_{n+1} = y_n + h Σ_{i=1}^s b_i f(Y_{n,i}, Z_{n,i})
      0 = g(y_{n+1}, z_{n+1})
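A minimal sketch of the state space form approach (the test problem y' = -z, 0 = z - y³ and all helper names are our own illustrative choices, not from the text): a classical explicit Runge-Kutta step in which z is solved from the algebraic equation at every internal stage.

```python
from scipy.optimize import brentq

def f(y, z):
    return -z

def g(y, z):
    return z - y**3          # index 1: g_z = 1 is invertible

def solve_z(y):
    # solve g(y, z) = 0 for z; here trivially z = y**3, but done numerically anyway
    return brentq(lambda z: g(y, z), -10.0, 10.0)

def rk4_state_space_step(y, h):
    # classical 4-stage ERK; each stage value feeds a fresh algebraic solve
    k1 = f(y, solve_z(y))
    k2 = f(y + h/2*k1, solve_z(y + h/2*k1))
    k3 = f(y + h/2*k2, solve_z(y + h/2*k2))
    k4 = f(y + h*k3, solve_z(y + h*k3))
    return y + h/6*(k1 + 2*k2 + 2*k3 + k4)

y, h = 1.0, 0.01
for _ in range(100):          # integrate to x = 1
    y = rk4_state_space_step(y, h)
# exact solution of y' = -y**3, y(0) = 1 is 1/sqrt(1 + 2x)
print(y)
```

This illustrates why such methods need not be implicit: the only nonlinear solves are the (here scalar) algebraic equations at the stages.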
These methods have the same order as the classical theory would predict for Runge-Kutta methods with the given A and b, and this holds for both y and z. Furthermore, these methods do not have to be implicit. The explicit state space form methods are treated in [ESF98, p. 182], here called half-explicit Runge-Kutta methods.

2.2.2. The ε-embedding method for problems of index 1. Discussions of index-1 problems are often introduced by presenting the singular perturbation problem (SPP) already encountered in Section 1.2. A key idea (see e.g. [HW96, p. 374]) is the following: transform the DAE to a SPP by introducing ε, derive the method for the SPP, and then put ε = 0. This will give a method for a DAE of index 1:
(2.6a) Y_{n,i} = y_n + h Σ_{j=1}^s a_ij f(Y_{n,j}, Z_{n,j}),   i = 1, …, s

(2.6b) ε Z_{n,i} = ε z_n + h Σ_{j=1}^s a_ij g(Y_{n,j}, Z_{n,j}),   i = 1, …, s

(2.6c) y_{n+1} = y_n + h Σ_{i=1}^s b_i f(Y_{n,i}, Z_{n,i})

(2.6d) z_{n+1} = z_n + h Σ_{i=1}^s b_i g(Y_{n,i}, Z_{n,i})
We can express the stage values of g as

h g(Y_{n,i}, Z_{n,i}) = ε Σ_{j=1}^s ω_ij (Z_{n,j} − z_n)

from Eq. 2.6b, where (ω_ij) = A^{-1}. This means that A must be invertible, and therefore the RK methods for ε-embedding must at least be implicit. So let's at last put ε = 0:

(2.7a) Y_{n,i} = y_n + h Σ_{j=1}^s a_ij f(Y_{n,j}, Z_{n,j}),   i = 1, …, s

(2.7b) 0 = g(Y_{n,i}, Z_{n,i}),   i = 1, …, s

(2.7c) y_{n+1} = y_n + h Σ_{i=1}^s b_i f(Y_{n,i}, Z_{n,i})

(2.7d) z_{n+1} = (1 − Σ_{i,j=1}^s b_i ω_ij) z_n + Σ_{i,j=1}^s b_i ω_ij Z_{n,j}

If Eq. 2.7b determines Z_{n,j} = G(Y_{n,j}) and z_{n+1} = G(y_{n+1}), then Eq. 2.7 is identical to the solution of the state space form method with the same RK method. The same would hold if the method were stiffly accurate. In these cases the error is predicted by the classical order theory referred to previously.
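A sketch of Eq. 2.7 in action (2-stage Radau IIA; the test problem y' = -z, 0 = z - y³ and the helper names are our own): each step solves the coupled stage equations 2.7a-b with a black-box nonlinear solver, and since the method is stiffly accurate, the new point is the last stage.

```python
import numpy as np
from scipy.optimize import fsolve

A = np.array([[5/12, -1/12],
              [3/4,   1/4]])   # 2-stage Radau IIA (stiffly accurate)

def f(y, z):
    return -z

def g(y, z):
    return z - y**3

def step(yn, h):
    def residual(u):
        Y, Z = u[:2], u[2:]
        res_y = [Y[i] - yn - h * sum(A[i, j] * f(Y[j], Z[j]) for j in range(2))
                 for i in range(2)]               # Eq. 2.7a
        res_z = [g(Y[i], Z[i]) for i in range(2)]  # Eq. 2.7b
        return res_y + res_z
    u = fsolve(residual, [yn, yn, yn**3, yn**3])
    return u[1], u[3]        # stiffly accurate: last stage is the new point

y, z, h = 1.0, 1.0, 0.05
for _ in range(20):          # integrate to x = 1
    y, z = step(y, h)
print(y, z)                  # y ~ 1/sqrt(1 + 2x) = 1/sqrt(3), z ~ y**3
```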
Generally, the convergence results are listed in [HW96, p. 380]. We should, though, mention the convergence result for the case where the method is not stiffly accurate but is linearly stable at infinity: then the order is the minimum of the classical order and the stage order + 1. We have the following diagram:
[Diagram: the SPP becomes the DAE in the limit ε → 0, and the DAE becomes the ODE via z = G(y). Applying the RK method to the DAE directly (direct approach), or to the SPP followed by ε → 0 (indirect approach), leads to the same solution.]
FIGURE 2.1. The transformation of the index-1 problem

2.3. Runge-Kutta Methods for high-index problems

Not many convergence results exist for high-index DAEs.

2.3.1. The semi-explicit index-2 system. For problems of index 2, results mainly exist for the semi-explicit index-2 problem. It looks as follows:

(2.8a) y' = f(y, z)
(2.8b) 0 = g(y)

with

(2.8c) g_y(y) f_z(y, z) being invertible.

2.3.2. The ε-embedding method. This method also applies to the semi-explicit index-2 case, although in a slightly different form (note that the new variable l_{n,j} is introduced):

(2.9a) Y_{n,i} = y_n + h Σ_{j=1}^s a_ij f(Y_{n,j}, Z_{n,j}),   i = 1, …, s

(2.9b) 0 = g(Y_{n,i}),   i = 1, …, s

(2.9c) Z_{n,i} = z_n + h Σ_{j=1}^s a_ij l_{n,j},   i = 1, …, s

(2.9d) y_{n+1} = y_n + h Σ_{i=1}^s b_i f(Y_{n,i}, Z_{n,i})

(2.9e) z_{n+1} = z_n + h Σ_{i=1}^s b_i l_{n,i}
Convergence results are described in [HW96, p. 495 ff.]. The main convergence results are based on the simplifying conditions B(p) and C(q), see Eq. 2.2. An abstract of the convergence results is that the local error behaves like

LE^y = O(h^{q+1}),   LE^z = O(h^q)

and the global error like

GE^y = O(h^{q+1}) if p > q,   GE^y = O(h^q) if p = q,   GE^z = O(h^q)
Remember that these results deal with the method used for the semi-explicit index-2 problem. The convergence results are, as presented, concentrated useful implications of more general results found in [HW96].

2.3.3. Collocation methods. The collocation method matching an s-stage Runge-Kutta method could be applied to Eq. 2.8 as follows. Let u(x) and v(x) be polynomials of degree s. At the point x_n the polynomials satisfy consistent initial conditions

(2.10a) u(x_n) = y_n,   v(x_n) = z_n

Then the collocation polynomials are found to satisfy

(2.10b) u'(x_n + c_i h) = f(u(x_n + c_i h), v(x_n + c_i h)),   i = 1, 2, …, s
        0 = g(u(x_n + c_i h))

and the numerical solution is

y_{n+1} = u(x_n + h),   z_{n+1} = v(x_n + h)

It can be shown that this method coincides with the ε-embedding method just mentioned, if the coefficients are chosen correctly. The Gauss, Radau IIA, and Lobatto IIIA
coefficients could therefore be used in both methods. These coefficients are quite popular when solving DAEs.

2.3.4. Order-table for some methods in the index-2 case. Here we present a table with some of the order results from using the above methods.

Method              stages   local error (y, z)    global error (y, z)
Gauss               s odd    h^{s+1}, h^s          h^{s+1}, h^{s-1}
Gauss               s even   h^{s+1}, h^s          h^s, h^{s-2}
projected Gauss     s                              h^{2s}, h^{2s}
Radau IA            s        h^s, h^{s-1}          h^s, h^{s-1}
projected Radau IA  s                              h^{2s-1}, h^{2s-1}
Radau IIA           s        h^{2s}, h^s           h^{2s-1}, h^s
Lobatto IIIA        s odd    h^{2s-1}, h^s         h^{2s-2}, h^{s-1}
Lobatto IIIA        s even   h^{2s-1}, h^s         h^{2s-2}, h^s
Lobatto IIIC        s odd    h^{2s-1}, h^{s-1}     h^{2s-2}, h^{s-1}

For results on projection methods we refer to the chapter about projection methods, Chapter 3, or to the text in [HW96, p. 502].

2.4. Special Runge-Kutta methods

This section will give an overview of some special classes of methods. These classes are defined by the structure of their coefficient matrix A. Figure 2.2 gives an overview of these structures. The content is presented with inspiration from [Kv92] and [KNO96].

2.4.1. Explicit Runge-Kutta methods (ERK). These methods have the nice property that the internal stages, one after one, are expressed explicitly from the former stages. The typical structure of the A-matrix would then be

a_ij = 0,   j ≥ i
These methods are very interesting in the explicit ODE case, but lose importance when we introduce implicit ODEs. In this case the computation of the internal stages would in general imply solving a system of nonlinear equations for each stage, and introduce iterations. Additionally, the stability properties of these methods are quite poor. As a measure of the work load we could use the factorization of the Jacobian. Say we have an implicit n-dimensional ODE system. Since the factorization should be done for each stage, the work load for the factorization is of the order s n³.
2.4.2. Fully implicit Runge-Kutta methods (FIRK). These methods have no restriction on the elements of A. An s-stage FIRK method together with an implicit ODE system forms a fully implicit nonlinear equation system with a number of equations of the order ns. The work done in factorization of the Jacobian would be of the order (sn)³ = s³n³.

2.4.3. Diagonally Implicit Runge-Kutta methods (DIRK). In order to avoid the full Jacobian of the FIRK method, one could construct a semi-explicit (close to explicit) method by making the structure of A triangular, and thereby the Jacobian of the complete nonlinear equation system block-triangular. The condition of A being triangular is traditionally expressed as

(2.11) a_ij = 0,   j > i
The work load could again be related to the factorization of the Jacobian. In each stage we have n equations, and the factorization would cost of the order n³ per stage. This means that the total work load would be of the order s n³, just as for the ERK methods when used on the implicit ODE system. But as a major advantage, these methods have better stability properties than ERK methods. On the other hand, they cannot match the order of the FIRK methods with the same number of stages.

As a subclass of the DIRK methods one should notice the singly diagonally implicit Runge-Kutta methods (SDIRK methods), which on top of Eq. 2.11 specify that

a_ii = γ,   i = 1, …, s

Yet another class of methods is the ESDIRK methods, which introduce the start point as the first stage and hereafter look like an SDIRK method: the method is defined by Eq. 2.11 and

a_11 = 0,   a_ii = γ,   i = 2, …, s
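The stability properties referred to above can be inspected through the stability function R(z) = 1 + z bᵀ (I − zA)⁻¹ 𝟙 (the helper below is our own sketch; the implicit Euler and implicit midpoint tableaus serve as examples of L-stable versus merely A-stable behaviour):

```python
import numpy as np

def R(z, A, b):
    """Stability function R(z) = 1 + z * b^T (I - z A)^{-1} * 1 of an RK tableau."""
    A, b = np.atleast_2d(A), np.asarray(b)
    s = len(b)
    return 1.0 + z * b @ np.linalg.solve(np.eye(s) - z * A, np.ones(s))

# implicit Euler: R(z) = 1/(1 - z), so R(z) -> 0 as z -> -infinity (L-stable)
print(R(-1e8, [[1.0]], [1.0]))
# implicit midpoint (1-stage Gauss): R(z) = (1 + z/2)/(1 - z/2), so R -> -1 (A-stable only)
print(R(-1e8, [[0.5]], [1.0]))
```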
FIGURE 2.2. Overview of the structure of A for the different classes of methods (ERK, FIRK, DIRK, SDIRK, ESDIRK)
2.4.4. Design Criteria for Runge-Kutta methods. The DIRK methods are investigated for solving stiff systems, index-1 and index-2 systems in [Kv92]. She uses the ε-expansion shown in the next section to describe the error generated when solving a stiff system. She summarizes a number of design criteria based on the properties found to be of importance, before presenting the DIRK methods. These properties are listed below:

- The advancing method should be stiffly accurate.
- The error estimating method should be stiffly accurate.
- The advancing method should be at least L0-stable.
- The error estimating method should be at least A0-stable.
- The stage order of the method should be as high as possible.
The conditions are presented in preferential order. We will only mention that the first two conditions are meant to ensure that the order of the method and the error estimate is known also for these stiff(-like) problems, where methods otherwise can suffer from order reduction.

2.5. ε-Expansion of the Smooth Solution

Following [HW96, Chapter VI.2], let us consider the singular perturbation problem Eq. 1.5 with the functions f and g sufficiently smooth. The functions f, g and the initial values y(0), z(0) may depend smoothly on ε. The corresponding equation for ε = 0 is the reduced problem, Eq. 2.3a. Again, in order to guarantee solvability of Eq. 2.3a, we assume that g_z(y, z) is invertible. Now it is natural to be interested in smooth solutions of Eq. 1.5 which are of the form

(2.12) y(x) = y_0(x) + ε y_1(x) + ε² y_2(x) + ⋯
(2.13) z(x) = z_0(x) + ε z_1(x) + ε² z_2(x) + ⋯
If we now insert Eq. 2.12 and Eq. 2.13 into Eq. 1.5 and collect equal powers of ε, we end up with coupled systems of equations, the first of which is the system for y_0(x), z_0(x). Since g_z is invertible we can solve the other systems for z_1, z_2 and so forth. This leads to the following: for ε⁰:

(2.14) y_0' = f(y_0, z_0)
(2.15) 0 = g(y_0, z_0)
for ε¹:

(2.16) y_1' = f_y(y_0, z_0) y_1 + f_z(y_0, z_0) z_1
(2.17) z_0' = g_y(y_0, z_0) y_1 + g_z(y_0, z_0) z_1

and for ε²:

(2.18) y_2' = f_y(y_0, z_0) y_2 + f_z(y_0, z_0) z_2 + φ_1(y_0, z_0, y_1, z_1)
(2.19) z_1' = g_y(y_0, z_0) y_2 + g_z(y_0, z_0) z_2 + φ_2(y_0, z_0, y_1, z_1)

where φ_1 and φ_2 collect the second-order derivative terms in y_1 and z_1.
As expected, we see from Eq. 2.14 and Eq. 2.15 that (y_0(x), z_0(x)) is a solution of the reduced system. Since g_z is invertible, the equation Eq. 2.17 can be solved for z_1. By inserting z_1 into Eq. 2.16 we obtain a linear differential equation for y_1(x). Hence y_1(x) and z_1(x) are determined. Similarly, we get y_2(x) and z_2(x) from Eq. 2.18 and Eq. 2.19, etc. This construction of the coefficients of Eq. 2.12 and Eq. 2.13 shows that we can choose the initial values y_j(0) arbitrarily, but that there is no freedom in the choice of z_j(0). Consequently not every solution of Eq. 1.5 can be written in the form Eq. 2.12 and Eq. 2.13.

The same kind of analysis can be made for a Runge-Kutta method. We consider the Runge-Kutta method

(2.21) y_{n+1} = y_n + Σ_{i=1}^s b_i k_{ni},   z_{n+1} = z_n + Σ_{i=1}^s b_i l_{ni}

where

(2.22) k_{ni} = h f(Y_{ni}, Z_{ni}),   ε l_{ni} = h g(Y_{ni}, Z_{ni})

(2.23) Y_{ni} = y_n + Σ_{j=1}^s a_ij k_{nj},   Z_{ni} = z_n + Σ_{j=1}^s a_ij l_{nj}

We formally expand all occurring quantities into powers of ε with ε-independent coefficients

(2.24) y_n = y_n^0 + ε y_n^1 + ε² y_n^2 + ⋯
(2.25) k_{ni} = k_{ni}^0 + ε k_{ni}^1 + ε² k_{ni}^2 + ⋯
(2.26) Y_{ni} = Y_{ni}^0 + ε Y_{ni}^1 + ε² Y_{ni}^2 + ⋯
and similarly for z_n, Z_{ni} and l_{ni}. Since Eq. 2.22 has the same form as Eq. 1.5, we will end up with exactly the same systems of equations as those we had after inserting the ε-expansions of y(x) and z(x) into Eq. 1.5. If we subtract Eq. 2.24 from Eq. 2.12 we get formally

(2.27) y_n − y(x_n) = (y_n^0 − y_0(x_n)) + ε (y_n^1 − y_1(x_n)) + ε² (y_n^2 − y_2(x_n)) + ⋯
(2.28) z_n − z(x_n) = (z_n^0 − z_0(x_n)) + ε (z_n^1 − z_1(x_n)) + ε² (z_n^2 − z_2(x_n)) + ⋯
The next theorem gives a rigorous estimate of the remainder in (2.27)-(2.28).

THEOREM 2.2. [HW96, page 428] Consider the stiff problem Eq. 1.5 satisfying μ(g_z(y, z)) ≤ −1, with the initial values y(0), z(0) admitting a smooth solution. Apply the Runge-Kutta method Eq. 2.21-Eq. 2.23 of classical order p and stage order q (1 ≤ q ≤ p). Assume that the method is A-stable, that the stability function satisfies |R(∞)| < 1, and that the eigenvalues of the coefficient matrix A have positive real part. Then for any fixed constant c > 0 the global error satisfies for ε ≤ ch and q ≥ 1

(2.29) y_n − y(x_n) = (y_n^0 − y_0(x_n)) + ε (y_n^1 − y_1(x_n)) + O(ε²)
(2.30) z_n − z(x_n) = (z_n^0 − z_0(x_n)) + ε (z_n^1 − z_1(x_n)) + O(ε²/h)

Here y_n^ν − y_ν(x_n), z_n^ν − z_ν(x_n) are the global errors of the method applied to the systems derived from Eq. 1.5 and Eq. 2.12, Eq. 2.13. The estimates Eq. 2.29, Eq. 2.30 hold uniformly for h ≤ h_0 and nh ≤ Const.
COROLLARY 2.1. Under the assumptions of Theorem 2.2 the global error of the Runge-Kutta method satisfies

(2.31) y_n − y(x_n) = O(h^p) + O(ε h^{q+1}),   z_n − z(x_n) = O(h^{q+1})

If in addition a_si = b_i for all i, we have

(2.32) z_n − z(x_n) = O(h^p) + O(ε h^q)
The conclusion is that the error observed when solving a stiff system using a Runge-Kutta method can be explained as a combination of errors from using the same Runge-Kutta method on the derived DAEs. The table below shows the theoretical error bounds for some methods.
Method        a_si = b_i   y-comp.                z-comp.
Radau IA      no           h^{2s-1} + ε h^s       h^s
Radau IIA     yes          h^{2s-1} + ε h^{s+1}   h^{2s-1} + ε h^s
Lobatto IIIC  yes          h^{2s-2} + ε h^s       h^{2s-2} + ε h^{s-1}
SDIRK IV(16)  yes          h^4 + ε h^2            h^4 + ε h
SDIRK IV(18)  yes          h^4 + ε h^2            h^2
CHAPTER 3
Projection Methods
Written by Mikael Zebbelin Poulsen & Falko Jens Wagner Keywords: High index DAEs, index reduction, drift-off phenomenon, differential equations on manifolds, coordinate projection, derivative projection, order preserving projection, invariant projection, over determined systems.
3.1. Introduction

When trying to solve a DAE problem of high index with more traditional methods, it often causes instability in some of the variables, and finally leads to breakdown of convergence and integration of the solution. This is nicely shown in [ESF98, p. 152 ff.].

This chapter will introduce projection methods as a way of handling these special problems. It is assumed that we have methods for solving normal ODE systems and index-1 systems.

3.1.1. Problem. The general form of the DAE problem treated in this chapter is

(3.1a) y' = f(y, z)
(3.1b) 0 = g(y)
where f : R^{n+m} → R^n and g : R^n → R^m, which is seen to be a DAE system of differentiation index at least 2 (see [HW96, p. 456]).

3.1.2. Example case. As an example case of this type of DAE system, a model of the pendulum system will be used (see [ESF98, p. 150]):
(3.2) p_x' = v_x
      p_y' = v_y
      v_x' = −2 p_x λ / m
      v_y' = −g − 2 p_y λ / m

(3.3) 0 = p_x² + p_y² − l²
It is seen that this equation system, Eq. 3.2 and Eq. 3.3, is a specialized form of Eq. 3.1. It is further in the form described in [HW96, p. 456 (1.15)], and therefore has differentiation index 3.

3.2. Index reduction

A way of handling the problem of instability is to construct a new equation system by performing index reduction on the original DAE system. To understand index reduction, it can be useful to notice the definition of the differentiation index (see e.g. [HW96, p. 455]). This index gives the number of times m that one will have to differentiate the equation system

(3.4) d^i/dx^i ( y' − f(y, z) ) = 0
      d^i/dx^i g(y) = 0,   i = 0, 1, …, m

with x being the independent variable, to be able to rearrange the equations to what is called the underlying ODE [HW96]. The principle of index reduction is then that one differentiation of the equation system will give a new set of equations, so that a new equation system, with index one lower, can be set up using algebraic manipulations. This reduction can then be continued in successive steps, lowering the index of the system, and enabling the use of methods for lower-index problems.

3.2.1. Example on index reduction. In this example we will see that, in the pendulum case, we can reduce the index by successive differentiation of the algebraic restriction Eq. 3.3 alone. We differentiate on both sides with respect to time, substitute using Eq. 3.2, and get
(3.5)  0 = 2 p_x ṗ_x + 2 p_y ṗ_y = 2 p_x v_x + 2 p_y v_y,   i.e.   0 = p_x v_x + p_y v_y
Using Eq. 3.5 instead of Eq. 3.3 in the equation system changes the problem into an index-2 problem. Further differentiation of Eq. 3.5 gives

(3.6)  0 = v_x^2 + v_y^2 + p_x v̇_x + p_y v̇_y
         = v_x^2 + v_y^2 − (2λ/m)(p_x^2 + p_y^2) − g p_y
         = v_x^2 + v_y^2 − (2 λ l^2)/m − g p_y
Using this equation as the algebraic restriction makes the problem index-1. Note that isolating λ in this equation and substituting into Eq. 3.2 actually gives an explicit ODE system. We want, though, to show how yet another differentiation gives a normal ODE problem, reducing the index to 0. Differentiating Eq. 3.6 and inserting Eq. 3.2 gives

  0 = 2 v_x v̇_x + 2 v_y v̇_y − g ṗ_y − (2 l^2/m) λ̇
    = 2 v_x (−2 p_x λ/m) + 2 v_y (−g − 2 p_y λ/m) − g v_y − (2 l^2/m) λ̇
    = −(4λ/m)(p_x v_x + p_y v_y) − 3 g v_y − (2 l^2/m) λ̇

where the last equality uses Eq. 3.5. Isolating λ̇ we get

(3.7)  λ̇ = −(3 m g v_y) / (2 l^2)
We see that by using Eq. 3.7 instead of the algebraic equation Eq. 3.3 we get an explicit ODE system. The high-index problem is reduced to an index-0 problem. Comment: As it stands, this is not a proof that the original system has index exactly 3, but it is a proof that the system has index 3 or less. In other words, just because we differentiated three times and ended up with an index-0 problem, it does not follow that the original system was index-3; we could simply have been unskilled at rearranging the variables.
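The index-0 formulation can be integrated with any ODE method. A minimal sketch with forward Euler, monitoring the defect of the original constraint Eq. 3.3 (the parameter values m = l = 1, g = 9.81, the step size, and the initial state are my choices, not taken from the text); this also previews the drift-off phenomenon of Section 3.2.4:

```python
# Forward-Euler integration of the index-0 pendulum formulation:
# Eq. 3.2 for (p_x, p_y, v_x, v_y) together with Eq. 3.7 for lambda.
m, g, l, h = 1.0, 9.81, 1.0, 1.0e-3

# Consistent initial state: pendulum horizontal and at rest -> lam = 0.
px, py, vx, vy, lam = 1.0, 0.0, 0.0, 0.0, 0.0

for _ in range(1000):                      # integrate to t = 1
    px, py, vx, vy, lam = (
        px + h * vx,
        py + h * vy,
        vx + h * (-2.0 * px * lam / m),
        vy + h * (-g - 2.0 * py * lam / m),
        lam + h * (-3.0 * m * g * vy / (2.0 * l**2)),
    )

defect = px**2 + py**2 - l**2              # defect of the original Eq. 3.3
print(defect)                              # small, but not zero: drift-off
```

The exact solution keeps the defect at zero; the numerical solution accumulates it as the integral of the local error.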
3.2.2. Restriction to manifold. The index reduction process just exemplified can be understood in another way. Observe that the general problem can be seen as an ODE system, Eq. 3.1a, restricted to a manifold, Eq. 3.1b, by varying z. The right value z* is the value of z for which the solution curve proceeds on the manifold. Locally this means that y' = f(y, z*) should lie in the tangent plane of the manifold (see Figure 3.1). We can express this as

(3.8)  g_y(y) f(y, z) = 0
FIGURE 3.1. The manifold and tangent plane, showing how to pick z = z* so that the solution proceeds on the manifold.
Note that Eq. 3.8 is in fact the differentiation of Eq. 3.1b with respect to the independent variable, and therefore corresponds to Eq. 3.5 in the previous example. If z can be uniquely determined from Eq. 3.8, i.e. if g_y f_z is invertible, then Eq. 3.8 together with Eq. 3.1a forms an index-1 system. If g_y f is a function of y only, Eq. 3.8 specifies a manifold like Eq. 3.1b, and the same scheme can be used once more:
  d/dx (g_y(y) f(y, z)) = g_yy(f, f) + g_y f_y f
or, in more correct notation,

(3.9)  g_yy(y)(f, f) + g_y(y) f_y(y, z) f(y, z) = 0

where g_yy is a Fréchet derivative ([Lam91, p. 158]). Again this is just the differentiation of the equation Eq. 3.8. In general g_y f can be a function of y without z being uniquely determined from Eq. 3.8, i.e. g_y f_z is not invertible (not of full rank). A way to handle this case is described in [HW96, p. 456].

3.2.3. Implications of reduction. An exact solution (curve) of the index-reduced system is not necessarily a solution of the original system, though the opposite is true. Comment: In this sense, differentiation is only a one-way implication. The index-reduced system has more solutions than the original system. To get the solutions of the original system from the reduced system, one has to choose the right initial values/conditions. Such an initial condition is a point on a solution curve of the original system. It is not a trivial question to find such a point; e.g. it is not necessarily sufficient to satisfy the algebraic equations Eq. 3.1b from the original system alone. This is called the problem of finding consistent initial conditions, and it is relevant even when solving the original system, because most algorithms need consistent initial conditions.

3.2.4. Drift-off phenomenon. Numerically there is another consequence of index reduction, related to the problem just mentioned, but not solved by finding consistent initial conditions. Suppose the initial conditions are picked as described above. As the computation of the numerical solution advances, each new point is found subject to a local error condition. This error is measured with respect to the reduced system and its differentiated algebraic restrictions, e.g. Eq. 3.5. The consequence is that the defect of the original algebraic restriction, e.g. Eq. 3.3, develops as the integral of this local error.
In this way the points of the computed solution slowly move away from the manifold specified by the original algebraic restriction.

3.2.5. Example of drift-off. Figure 3.2 illustrates how the solution set of the pendulum model changes due to index reduction. In the pendulum case Eq. 3.2, the original index-3 formulation had the algebraic equation Eq. 3.3,
  0 = p_x^2 + p_y^2 − l^2

specifying the manifold drawn as the solid curve. The index-2 formulation had the algebraic equation Eq. 3.5,

  0 = p_x v_x + p_y v_y

This equation only specifies that the velocity (v_x, v_y) should be orthogonal to the position vector (p_x, p_y). This is illustrated by the dashed curves in Figure 3.2.
FIGURE 3.2. Expanding set of solutions due to index reduction, and illustration of the drift-off phenomenon.
We could say that by doing index reduction we expanded the solution set of the original system to include not only the solutions moving on the solid line, but also all the solutions moving in parallel to it. Also illustrated in Figure 3.2, as the piecewise linear curve, is the drift-off phenomenon. When numerically advancing the solution, the next point is found with respect to the index-2 restriction. The index-2 restriction says that we shall move in parallel with the solid line (along the dashed lines), not, as the index-3 restriction says, on the solid line. Therefore the solution slowly moves away from (drifts off) the solid line, although trying to move in parallel with it. Comment: When solving a system, the consequence of drift-off is not necessarily worse than the normal global error. It is, though, obvious that at least from a cosmetic point of view a solution of the pendulum system with shortening pendulum length looks bad. Other uses of the model might be indifferent to drift-off compared to the global error.
3.3. Projection
The idea is to project the solution points back onto the manifold defined by the original system as the steps are computed. This prevents drift-off, and in special cases it can even increase the order of the method. Suppose (y_{n−1}, z_{n−1}) is a point consistent (or almost consistent) with the original system. We take a step using some method on the reduced equation system and get the point (ỹ_n, z̃_n). This point is then projected onto the manifolds inherent in the original system, giving (y_n, z_n) as the next point on the solution curve. This is the general idea, illustrated in Figure 3.3.
FIGURE 3.3. The general projection method. The solid broken arrow shows the step taken using the reduced system; the dashed arrow shows the projection back onto the manifold.
Different information from the original and reduced systems can be used, giving various projection methods. As an example, it is mentioned in [HW96, p. 471] that there can be an advantage in determining the velocity constraints and projecting onto these instead of projecting onto the position constraints. In general the various methods need not give consistent values, and they do not necessarily preserve the order of the method.

3.3.1. Coordinate projection. Coordinate projection applies a correction after each step, indifferent to the ODE part Eq. 3.1a of the differential equation and concerned only with the manifold Eq. 3.1b and/or the manifolds specified by its derivatives. Generalizing the idea of [ESF98, p. 165], the method uses the index-reduced system to advance the solution from (y_{n−1}, z_{n−1}) to (ỹ_n, z̃_n). Then the projection is defined by
(3.10)  min_{y_n} ‖ỹ_n − y_n‖_2   subject to   g(y_n) = 0
which is a nonlinear constrained least squares problem; because of the norm chosen, it gives the orthogonal projection onto the manifold. In [ESF98, p. 165] the idea is applied to different sets of variables and equations in successive steps. In particular, equations from the index reduction are included, and it is mentioned in [HW96, p. 471] that this can give better results than using the original algebraic restrictions alone. In [ESF98] there are references to proofs of convergence for several of these projection methods.

3.3.2. Another projection method and the projected RK method. This method is explained in [HW96, p. 470] and has some nice convergence properties when applied to index-2 problems. The idea is again to use a method on the index-reduced system to advance the solution one step, finding the value (ỹ_n, z̃_n). The method uses projection along the image of f_z and is defined as
(3.11a)  y_n = ỹ_n + f_z(ỹ_n, z̃_n) μ
(3.11b)  0 = g(y_n)
or some differentiated form of this. Convergence of the method is shown in [HW96, p. 471] for the index-2 case, where g_y has full rank. The proof uses the mean value theorem to conclude that if the error of ỹ_n is O(h^{p+1}), then the error of the projected point y_n is also O(h^{p+1}), and finally, and importantly, that the error of z_n is O(h^{p+1}) as well. This projection is used in the so-called projected Runge-Kutta methods. Such a method simply uses a Runge-Kutta method for the computation of (ỹ_n, z̃_n) and then uses this projection to correct the solution point. The method is described in [HW96, p. 503]. These are not the only projection methods; there are e.g. the symplectic Euler and Derivative Projection methods [Eic92, p. 57], along with other methods for stabilizing the solution of a reduced system.
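For the pendulum, the orthogonal projection of Eq. 3.10 onto the position constraint Eq. 3.3 has a closed form (rescaling the position vector to length l), after which the velocity can be projected onto the tangent plane of Eq. 3.5. A minimal sketch (the function name and the test point are my own):

```python
import math

def project_pendulum(px, py, vx, vy, l=1.0):
    """Coordinate projection for the pendulum (a sketch, not [ESF98]'s code).

    Position: orthogonal projection onto p_x^2 + p_y^2 = l^2 (Eq. 3.3),
    which for this constraint is just a rescaling.
    Velocity: orthogonal projection onto p_x v_x + p_y v_y = 0 (Eq. 3.5).
    """
    r = math.hypot(px, py)
    px, py = l * px / r, l * py / r          # project position
    s = (px * vx + py * vy) / l**2           # normal component of velocity
    vx, vy = vx - s * px, vy - s * py        # project velocity
    return px, py, vx, vy

# A slightly inconsistent point is mapped back onto both manifolds:
px, py, vx, vy = project_pendulum(1.01, 0.02, 0.5, 1.0)
assert abs(px**2 + py**2 - 1.0) < 1e-12      # Eq. 3.3 restored
assert abs(px * vx + py * vy) < 1e-12        # Eq. 3.5 restored
```

For general constraints the projection has no closed form and a Gauss-Newton iteration is used instead.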
3.4. Special topics
3.4.1. Systems with invariants. Some systems can, along with the description of their dynamics as some ODE system, have additional properties that should be modeled. The invariant property applies to a system in which some quantity is preserved, i.e. constant at every point of the solution curve. The well-known example is the conservation of energy in various technical systems. Some projection methods can use this extra information; one of them is described in [ESF98, p. 173]. Assume that y_0 is a (consistent) initial value for the system and let φ denote the invariant. For this value we can compute

(3.12)  γ_0 = φ(y_0)

To compute the next step from some point y_{n−1}, one uses some ODE method. The computed point ỹ_n is then projected as

(3.13)  min_{y_n} ‖ỹ_n − y_n‖_2   subject to   φ(y_n) = γ_0
to get the next point y_n on the solution curve. This is again a nonlinear constrained least squares problem. In [ESF98] there are references to proofs that this method does not disturb convergence, and it is described how such a method can reduce the global error.

3.4.2. Overdetermined systems. In the case of systems with invariants, the systems mentioned are obviously overdetermined. This can also be the case for the DAE system Eq. 3.2 if all the equations Eq. 3.3 - Eq. 3.6 are included in the model. In [HW96, p. 477] the case of overdetermined DAE systems is treated. There are different ways of handling these: a simple suggestion is to solve for the least squares solution of the system, but one could also think of using some kind of pseudo-inverse in a Newton-like process.
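The projection Eq. 3.13 can be approximated by a few Gauss-Newton steps on the constraint φ(y_n) = γ_0. A sketch with a hypothetical quadratic invariant (all names, the invariant, and the iteration count are my own choices):

```python
import numpy as np

def project_to_invariant(y, phi, grad_phi, gamma0, steps=4):
    """Gauss-Newton projection of y onto the manifold {phi(y) = gamma0}.

    Each step solves the linearized problem: minimize ||dy|| subject to
    phi(y) + grad_phi(y)^T dy = gamma0.
    """
    for _ in range(steps):
        g = grad_phi(y)
        y = y - g * (phi(y) - gamma0) / (g @ g)
    return y

# Hypothetical quadratic invariant phi(y) = ||y||^2 with gamma0 = 1.
phi = lambda y: y @ y
grad = lambda y: 2.0 * y
y = project_to_invariant(np.array([1.05, 0.10]), phi, grad, 1.0)
print(abs(phi(y) - 1.0))   # the invariant is restored to round-off level
```

For small defects the iteration converges quadratically, so very few steps are needed after each integration step.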
CHAPTER 4
BDF-methods
Written by Astrid Jørdis Kuijers
Keywords: BDF-methods, multi step-methods, stability properties, A-stability, DASSL, variable step-size.
In this chapter an introduction to BDF-methods is given, along with a little about multi step-methods in general. The second part is about using BDF-methods on DAEs in different cases. It goes into order, stability and convergence of BDF-methods applied to DAEs of different indexes.
4.1. Multi Step-Methods in general and BDF-methods
A general multi step-method may be written as
  (1/h) Σ_{j=0}^k α_j y_{n−j} = Σ_{j=0}^k β_j f(x_{n−j}, y_{n−j})

where h refers to the step-size used, and α_j and β_j are the coefficients of, respectively, the backward steps and the function values at the backward steps. k is called the order of the method and denotes how many backward steps are used. The method is implicit if β_0 ≠ 0, and if so it is necessary to calculate (iteratively) the function value at the point that is to be estimated. An example of a multi step-method is

  y_n = y_{n−2} + (h/3)(f_{n−2} + 4 f_{n−1} + f_n)

which is called Milne's method [NGT93].
4.1.1. BDF-methods. The BDF-methods are a group of multi step-methods in which the function value is calculated only at the point that is to be found, while a variable number of previously computed points can be used. This means that a BDF-method is an implicit multi-step method with β_0 ≠ 0
but β_1 = ... = β_k = 0. BDF stands for backward differentiation formulas, named after the general formula, which can be written as

  h y'_n = Σ_{j=1}^k (1/j) ∇^j y_n

where ∇ is the backward difference operator. Another way to write it, in the multi-step form above, is
  (1/h) Σ_{j=0}^k α_j y_{n−j} = β_0 f(x_n, y_n)

The simplest BDF-method is the implicit Euler method [NGT93],

  y_{n+1} = y_n + h f(x_{n+1}, y_{n+1})

which is of order 1. For BDF-methods the stability when solving ODEs can be investigated through the linear test equation

  y' = λ y,   Re(λ) < 0

For the method to be stable it is required that the roots of the polynomial ρ(ζ) − hλ σ(ζ) lie inside the unit circle, see [ESF98] page 106; ρ and σ are the generating polynomials defined in Section 5.2.
FIGURE 4.1. Stability regions for BDF-methods of orders k = 1, ..., 6.
Figure 4.1 pictures the regions of the complex plane where hλ leads to a stable discretization. BDF-methods are stable for all λ with Re(λ) < 0 and |arg(−hλ)| < α, where α is the angle indicated in Figure 4.1. This is called A-stability or A(α)-stability. [ESF98] It is seen that the region
where the method is stable becomes smaller and smaller for higher orders of the method. A BDF-method can only be stable up to order 6, because for order 7 there is no longer a region of A(α)-stability. Because of their stability properties, BDF-methods are good candidates for solving DAEs. In the early 1970s C. W. Gear started to write about using BDF-methods in connection with such problems. Now BDF-methods are widely used in computer codes that solve ODEs and DAEs, such as the DASSL and LSODI codes for solving DAEs. [AP98] writes: There has been much work on developing convergence results for general multi-step methods. For solving DAEs, the coefficients of the multi-step methods must satisfy a set of order conditions in addition to the order conditions for ODEs. It turns out that these additional order conditions are satisfied by BDF-methods.
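The stability property is easy to observe on the linear test equation. A sketch of fixed-step BDF2 applied to y' = λy with λ = −1000 and h = 0.1, a combination far outside the stability region of any explicit method (the parameter choices are my own):

```python
# BDF2 on y' = lam*y:  y_n - (4/3) y_{n-1} + (1/3) y_{n-2} = (2h/3) lam y_n.
# For this linear problem the implicit equation is solved in closed form.
lam, h = -1000.0, 0.1

y_prev, y = 1.0, 1.0 / (1.0 - h * lam)   # start-up step: implicit Euler
for _ in range(50):
    y_prev, y = y, ((4.0 / 3.0) * y - (1.0 / 3.0) * y_prev) / (1.0 - (2.0 / 3.0) * h * lam)

print(abs(y))   # decays towards 0, as the exact solution does
```

An explicit method with the same step size would overflow within a few steps.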
4.2. BDF-methods applied to DAEs
Applying the BDF formula to the fully implicit DAE Eq. 1.1 leads to

  F(x_n, y_n, (1/h) Σ_{j=0}^k α_j y_{n−j}) = 0

This can be illustrated by the van der Pol equation given in the introduction, with ε = 0, discretized by the implicit Euler method (k = 1):

  (y_{1,n} − y_{1,n−1})/h = y_{2,n}
  0 = (1 − y_{1,n}^2) y_{2,n} − y_{1,n}
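Each such implicit Euler step requires the solution of a small nonlinear system. A sketch of one step with a hand-coded Newton iteration, assuming the semi-explicit form above (the initial point and step size are my choices):

```python
# One implicit Euler step for  y1' = y2,  0 = (1 - y1^2) y2 - y1,
# solved with Newton's method on F(y1, y2) = 0.
h, y1_prev = 0.01, 2.0
y1, y2 = y1_prev, y1_prev / (1.0 - y1_prev**2)   # predictor: consistent z

for _ in range(20):
    F1 = (y1 - y1_prev) / h - y2
    F2 = (1.0 - y1**2) * y2 - y1
    # Jacobian of (F1, F2) with respect to (y1, y2)
    a, b = 1.0 / h, -1.0
    c, d = -2.0 * y1 * y2 - 1.0, 1.0 - y1**2
    det = a * d - b * c
    dy1 = (d * F1 - b * F2) / det            # solve the 2x2 Newton system
    dy2 = (a * F2 - c * F1) / det
    y1, y2 = y1 - dy1, y2 - dy2

res = abs((y1 - y1_prev) / h - y2) + abs((1.0 - y1**2) * y2 - y1)
print(res)   # Newton residual after convergence
```

Production codes such as DASSL do the same thing with a numerically approximated Jacobian and a modified Newton iteration.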
In this case the following theorem holds [BCP89]:

THEOREM 4.1. In general, a BDF-method of order k < 7 with constant step-size applied to a constant coefficient linear DAE of index ν is convergent of order O(h^k) after (ν − 1)k + 1 steps.

This will not be proved here, but an example will be shown. Consider the index-3 system

(4.1)  y'_1 = y_2,   y'_2 = y_3,   0 = y_1 − g(x)
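Theorem 4.1 can be checked numerically on this system: with the implicit Euler method (k = 1, ν = 3), y_1 is reproduced exactly at every step, while y_2 and y_3 settle to O(h) accuracy after a few steps even if started incorrectly. A sketch (g(x) = sin x and the step size are my choices):

```python
import math

# Implicit Euler on the index-3 system (4.1) with g(x) = sin x, so the
# exact solution is y1 = sin x, y2 = cos x, y3 = -sin x.
g, h = math.sin, 1.0e-3
y1_prev, y2_prev = g(0.0), 0.0        # y2 deliberately started incorrectly

for n in range(1, 4):                  # three steps are enough here
    x = n * h
    y1 = g(x)                          # the constraint gives y1 exactly
    y2 = (y1 - y1_prev) / h            # backward difference for y2
    y3 = (y2 - y2_prev) / h            # backward difference for y3
    y1_prev, y2_prev = y1, y2

err2 = abs(y2 - math.cos(x))
err3 = abs(y3 + math.sin(x))
print(err2, err3)                      # both O(h), despite the bad start
```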
If the index-3 system is solved with the implicit Euler method, the solution looks like

  y_{1,n} = g(x_n),   y_{2,n} = (y_{1,n} − y_{1,n−1})/h,   y_{3,n} = (y_{2,n} − y_{2,n−1})/h

It is seen that y_1 is determined exactly at every step, but in the first step y_2 and y_3 are determined incorrectly. After 2 steps y_2 will, however, be determined with an accuracy of order O(h), as will y_3 after 3 steps [BCP89, page 43]. If a variable step-size BDF-method is used, it is necessary to be much more careful with the method used. The accuracy of a BDF-solution is then of order [BCP89]
  O(h_max^q),   q = min(k, k − ν + 2)

where ν is the index of the system to be solved and k is the order of the method used. This means, for example, that a numerical solution of an index-3 system with a BDF-method of order k = 1 has an accuracy of order O(1), and is thus useless. Many computer codes for solving DAEs, such as LSODI and DASSL, start with a BDF-method of order 1 (as no previous steps are available initially), and therefore fail for systems of index 3 or higher.

4.2.1. Semi-Explicit Index-1 Systems. When linear multi-step methods (and therefore also BDF-methods) are applied to semi-explicit index-1 DAEs (see page 2), they are stable and convergent to the same order of accuracy for the DAE as for the underlying ODE. These problems can therefore be solved with any linear multi-step method that is appropriate for the underlying ODE, assuming that the constraints are satisfied in each step. [BCP89]

4.2.2. Fully-Implicit Index-1 Systems. For fully implicit index-1 systems it can be shown that a constant step-size BDF-method of order k < 7 with
the initial values given correctly to order O(h^k) converges with an accuracy of order O(h^k), provided each Newton iteration is solved to an accuracy of order O(h^{k+1}). [BCP89] If a variable step-size BDF-method is used (with step-size restrictions as in the standard ODE case), then it is also convergent for fully implicit index-1 systems. [BCP89]

4.2.3. Semi-Explicit Index-2 Systems. A general semi-explicit index-2 system can be written as

  y' = f(x, y, z)
  0 = g(x, y)
where (g_y f_z)^{−1} exists and is bounded in a neighborhood of the solution. If a semi-explicit index-2 system is solved with a constant step-size BDF-method of order k < 7, with the initial values given correctly to order O(h^k), it converges with an accuracy of order O(h^k) after k + 1 steps, provided each Newton iteration is solved to an accuracy of order O(h^{k+1}). [BCP89] If a variable step-size BDF-method is used, the same as in the index-1 case holds: it is convergent if the method is implemented so that it is stable for standard ODEs. [BCP89]
4.2.4. Index-3 Systems of Hessenberg form. Such a system can be written as

  y' = f_1(x, y, z)
  z' = f_2(x, y, z, u)
  0 = g(x, y)

The system is of index 3 if the matrix product g_y f_{1,z} f_{2,u} is nonsingular. If a constant step-size BDF-method is used with the starting values accurate to order O(h^{k+1}), and the algebraic equations are solved to an accuracy of O(h^{k+2}) if k ≥ 2, or to an accuracy of O(h^{k+3}) if k = 1, then the solution converges after k + 1 steps to order O(h^k). If a variable step-size BDF-method is used, the system might fail to converge. This is seen by solving the equations Eq. 4.1 with the implicit Euler method. After three steps the solution becomes
  y_{1,n+1} = g(x_{n+1})
  y_{2,n+1} = (g(x_{n+1}) − g(x_n)) / h_{n+1}
(4.2)  y_{3,n+1} = (1/h_{n+1}) [ (g(x_{n+1}) − g(x_n))/h_{n+1} − (g(x_n) − g(x_{n−1}))/h_n ]

Expressing Eq. 4.2 as

  y_{3,n+1} = (g(x_{n+1}) − g(x_n)) / h_{n+1}^2 − (g(x_n) − g(x_{n−1})) / (h_{n+1} h_n)

it is seen that if h_{n+1} → 0 with h_n fixed, then only the denominator of the second term is guaranteed to tend to zero, so y_{3,n+1} fails to converge.

4.2.5. Summary. Table 4.1 summarizes the results of applying BDF-methods to the various types of DAEs.
  DAE Type                              Accuracy
  Semi-Explicit Index-1 Systems         the same as for the underlying ODE
  Fully-Implicit Index-1 Systems        O(h^k), see 4.2.2
  Semi-Explicit Index-2 Systems         O(h^k) after k + 1 steps
  Index-3 Systems of Hessenberg form    diff. eq. O(h^k), alg. eq. depend on k

TABLE 4.1. Accuracy obtained when applying BDF-methods to DAEs. Notice that with variable step-size there is no guarantee of convergence for DAEs of index higher than 2.

4.2.6. DAE-codes. As mentioned earlier, there exist codes using BDF-methods for solving DAEs. Most of these codes are designed for solving index-0 or index-1 DAEs. In [BCP89] some of these codes are described and tested, with focus on the DASSL code. DASSL uses BDF-methods with variable step-size and orders up to 5. It is reported that DASSL has successfully solved a wide variety of scientific problems, and that there are possibilities for finding out where it went wrong when it fails. The finite difference calculation of the Jacobian matrix is reported to be the weakest part of the code. It also has problems handling inconsistent initial conditions and discontinuities. A detailed description of DASSL, and a little about the code LSODI, can be found in [BCP89].
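The variable step-size failure described in Section 4.2.4 is easy to reproduce: shrinking h_{n+1} while keeping h_n fixed makes the computed y_3 grow without bound. A sketch of the recursion of Eq. 4.2 (g(x) = sin x and the step sizes are my choices):

```python
import math

def y3(h_prev, h_next, x0=0.0, g=math.sin):
    """Variable-step implicit Euler approximation of y3 for system (4.1)."""
    x1, x2 = x0 + h_prev, x0 + h_prev + h_next
    d2 = (g(x2) - g(x1)) / h_next
    d1 = (g(x1) - g(x0)) / h_prev
    return (d2 - d1) / h_next

a = y3(0.1, 1e-2)
b = y3(0.1, 1e-4)
print(a, b)   # |b| is far larger than |a|: y3 diverges as h_next -> 0
```

The exact value of y_3 here is −sin(x_2) ≈ −0.1, which neither approximation comes close to.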
CHAPTER 5
Stabilized Multi-step Methods Using β-Blocking

5.2. β-blocking
Consider the index-2 Euler-Lagrange DAE

(5.1)  y' = f(y, λ),   0 = g(y)

where f(y, λ) = f_0(y) + f_λ(y) λ depends linearly on λ. A β-blocked multi step-method for this problem could look like

(5.2)  Σ_{j=0}^k α_j y_{n+j} = h Σ_{j=0}^k β_j f(y_{n+j}, λ_{n+j}) − h Σ_{j=0}^{k−1} β̂_j f_λ(y_{n+j}) λ_{n+j}
It is seen that the β-term is modified by a term involving the algebraic variables. In connection with multi-step methods the following generating polynomials are generally defined:

(5.3)  ρ(ζ) = Σ_{j=0}^k α_j ζ^j,   σ(ζ) = Σ_{j=0}^k β_j ζ^j,   β̂(ζ) = Σ_{j=0}^{k−1} β̂_j ζ^j
5.2.1. Why β-blocking. There can be problems using otherwise convergent and good methods for index-2 DAEs. Instead of switching to another method, the method can be changed a little by β-blocking. The root condition states that the roots of the generating polynomial σ should be numerically less than 1; with β-blocking it is the roots of σ − β̂ that have to be numerically less than 1. It is therefore possible to change a method a little, for a given problem, to make it convergent. β-blocking can also be used to optimize a method, by choosing β̂ such that the roots of σ − β̂ become small. This is expressed in the following theorem [HW91]:

THEOREM 5.1. Consider the index-2 problem Eq. 5.1 discretized by Eq. 5.2, with starting values x_j − x(t_j) = O(h^p) and λ_j − λ(t_j) = O(h^k) for j = 0, ..., k − 1, where p = k or p = k + 1 is the order of the pair (ρ, σ). Furthermore, let β̂ be chosen such that σ − β̂ has roots only inside the unit circle and β̂ x(t_n) = O(h^k) for any sufficiently smooth function x. Then the discretization Eq. 5.2 is convergent [CA94], i.e. for nh = t constant,

  x_n − x(t_n) = O(h^p),   λ_n − λ(t_n) = O(h^k)
In β-blocking, only the algebraic variables are used in the correction by β̂. This implies a reduction of the error order for the algebraic equations, but for the differential equations the error is of the same order as for the method without β-blocking.

5.2.2. Difference correcting. Another way of using β-blocking is to reduce the error of the method. This is done simply by using the principal error term of the method to β-block it. A normal ODE can be discretized by a BDF-method as

  (1/h) Σ_{j=1}^k (1/j) ∇^j y_{n+1} = f(y_{n+1})
The error, in principal terms, looks like [CA94]

  (1/(k+1)) ∇^k f(y(t_{n+1}))

Therefore the difference-corrected BDF-method (DCBDF) becomes

  (1/h) Σ_{j=1}^k (1/j) ∇^j y_{n+1} = f(y_{n+1}) − (1/(k+1)) ∇^k f(y_{n+1})
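For k = 1 this correction turns BDF1 (implicit Euler) into the trapezoidal rule, since f(y_{n+1}) − ½∇f(y_{n+1}) = ½(f(y_{n+1}) + f(y_n)). The order gain is easy to check on a linear test problem (a sketch; problem and step sizes are my choices):

```python
import math

def be_error(h):        # backward Euler (BDF1) for y' = -y, y(0) = 1, t = 1
    n, y = round(1.0 / h), 1.0
    for _ in range(n):
        y = y / (1.0 + h)               # solve y_new = y + h*(-y_new)
    return abs(y - math.exp(-1.0))

def dcbdf1_error(h):    # DCBDF with k = 1, i.e. the trapezoidal rule
    n, y = round(1.0 / h), 1.0
    for _ in range(n):
        y = y * (1.0 - h / 2.0) / (1.0 + h / 2.0)
    return abs(y - math.exp(-1.0))

r_be = be_error(0.02) / be_error(0.01)          # ~2  => order 1
r_dc = dcbdf1_error(0.02) / dcbdf1_error(0.01)  # ~4  => order 2
print(r_be, r_dc)
```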
The difference-corrected BDF-method then has an error of order O(h^{k+1}), which is one order higher than the underlying BDF-method.

5.3. Consequences of β-blocking
Some of the advantages and disadvantages of β-blocking are listed below.
- It is possible to use otherwise non-convergent methods, e.g. Adams-methods, for index-2 DAEs.
- There is a convergence order reduction only on the algebraic variables.
- It can improve an existing method.
- It is only possible for k ≤ 3 to obtain a convergent β-blocked Adams method for index-2 DAEs.

5.4. Discretization of the Euler-Lagrange DAE Formulation
Let us again consider the Lagrange system Eq. 5.1. We define a multi-step method using the ρ and σ polynomials as defined in Eq. 5.3. We introduce the forward shift operator E, which shifts the solution one step forward,

(5.4)  y_{n+1} = E y_n

E is invertible, i.e. y_n = E^{−1} y_{n+1}. With this operator we introduce the difference operators ρ_h = E^{−k} ρ(E) and σ_h = E^{−k} σ(E) for notational convenience, i.e.

(5.5)  ρ_h y_n = Σ_{i=0}^k α_{k−i} y_{n−i},   σ_h y_n = Σ_{i=0}^k β_{k−i} y_{n−i}

so that the discretization of y' = f(y) is found as

(5.6)  (1/h) ρ_h y_n = σ_h f(y_n)
Now, since using the difference operators directly introduces instability, we add a stabilizing term, namely the β-blocking term consisting of a β̂ difference operator, according to Section 5.2:

(5.7)  (1/h) ρ_h y_n = σ_h [f_0(y_n) + f_λ(y_n) λ_n] − β̂_h f_λ(y_n) λ_n
        0 = g(y_n)
Using the BDF method with Eq. 5.7 we apply the correction

(5.8)  β̂_h = (1/(k+1)) ∇^k

in order to obtain the difference-corrected BDF method (DCBDF):

(5.9)  (1/h) ρ_h y_n = f_0(y_n) + f_λ(y_n) λ_n − (1/(k+1)) ∇^k f_λ(y_n) λ_n
        0 = g(y_n)
5.4.1. Commutative application of the DCBDF method. Applying the DCBDF method to a Lagrange system can be done in two ways, both with their starting point in the dynamic variables y:

(5.10)  first discretize y' with the DCBDF operators, then insert f = f_0 + f_λ λ and the constraint g
(5.11)  first form the corrected right-hand side from f, then discretize

This seems to be commutative, but method 1 is unstable whereas method 2 is stable. Note also the difference between the outputs Eq. 5.10 and Eq. 5.11: in the unstable case the correction operator takes f and λ as input, whereas it takes only f as input in the stable case.
5.4.2. Example: The Trapezoidal Rule. Let us look at the trapezoidal integration

(5.12)  y_n = y_{n−1} + (h/2)(f_n + f_{n−1})

which has the polynomials

(5.13)  ρ(ζ) = ζ − 1,   σ(ζ) = (ζ + 1)/2

Here the strict root condition is not satisfied, since σ has a root on the unit circle: σ(−1) = 0; hence the method is not stable. Using the notation of the difference operators we get

(5.14)  ρ_h^{TR} = 1 − E^{−1},   σ_h^{TR} = (1 + E^{−1})/2
Application of the trapezoidal rule to the Lagrange system Eq. 5.1 gives

(5.15)  (1/h) ρ_h^{TR} y_n = σ_h^{TR} [f_0(y_n) + f_λ(y_n) λ_n]
         0 = g(y_n)

and we introduce a correction term in agreement with Eq. 5.8: since k = 1, β̂_h = ∇/2, applied to f_λ(y_n) λ_n. This gives (using f_0 = f_0(y_n) and f_λ = f_λ(y_n) to
simplify notation)

  (1/h) ρ_h^{TR} y_n = σ_h^{TR} [f_0 + f_λ λ_n] − (∇/2) f_λ λ_n
                     = σ_h^{TR} f_0 + σ_h^{TR} f_λ λ_n − ½(1 − E^{−1}) f_λ λ_n
                     = σ_h^{TR} f_0 + ½(1 + E^{−1}) f_λ λ_n − ½(1 − E^{−1}) f_λ λ_n
                     = σ_h^{TR} f_0 + E^{−1} f_λ λ_n
                     = σ_h^{TR} f_0 + f_λ(y_{n−1}) λ_{n−1}

This formulation is implicit in all terms. The correction term is a normal difference approximation, which lowers the order by O(h), in agreement with Theorem 5.1; hence the consistency of the discretization drops one order. This primarily affects λ (the constraint variables), since the other variables are already lowered by the discretization.

5.5. Observed relative error
Simulations show that the properties expressed in Theorem 5.1 also hold empirically. Depicting the step-size versus the relative error for the position variable and for the algebraic variable shows a log-log relationship in agreement with the expectations. It is furthermore seen that by application of the DCBDF one obtains an error reduction of order O(h) on the position variable, which was to be expected according to Theorem 5.1.
Part 2
Special Issues
CHAPTER 6
Algebraic Stability
Written by Anton Antonov Antonov
Keywords: Algebraic stability, A-stability, B-stability, AN-stability, BN-stability, B-consistency
This chapter presents the notion of algebraic stability. We follow the treatment in [But87, Chap. 35]. Linear stability theory explores the equation

  y'(x) = q y(x)

where this equation can of course also be considered as a system of equations. From it we can make generalizations in two ways,

(6.1)  y'(x) = q(x) y(x)
(6.2)  y'(x) = f(y(x))

which are both described by

(6.3)  y'(x) = f(x, y(x))

Good stability properties of a method applied to Eq. 6.1 are designated by the name AN-stability; apparently the suffix N stands for non-autonomous. If these properties are good with respect to Eq. 6.2, the method is said to be B-stable. Finally, generally obedient behavior, i.e. nice behavior with respect to Eq. 6.3, is called BN-stability, which confirms the impression that the suffix N stands for non-autonomous.

6.1. General linear stability - AN-stability
For a single step of a Runge-Kutta method from x_{n−1} to x_{n−1} + h, let

  z_i = h q(x_{n−1} + c_i h),   i = 1, 2, ..., s

so that the stages Y_1, Y_2, ..., Y_s and the final result y_n are computed from the given value y_{n−1} by the equations
(6.4)  Y_i = y_{n−1} + Σ_{j=1}^s a_{ij} z_j Y_j,   i = 1, 2, ..., s
(6.5)  y_n = y_{n−1} + Σ_{i=1}^s b_i z_i Y_i

With Z = diag(z_1, z_2, ..., z_s) these can be written compactly as

(6.6)  Y = y_{n−1} e + A Z Y
(6.7)  y_n = y_{n−1} + b^T Z Y

where Y = (Y_1, Y_2, ..., Y_s)^T and e = (1, 1, ..., 1)^T. Assuming that the linear equations in Eq. 6.6 have a solution, which is the case if det(I − AZ) ≠ 0, the value of y_n / y_{n−1} is found to be

  y_n / y_{n−1} = R(Z)

where R(Z) is given by Eq. 6.10 below. R generalizes the scalar stability function

  r(z) = 1 + z b^T (I − zA)^{−1} e

in the sense that r(z) = R(zI).
DEFINITION 6.1. A Runge-Kutta method is said to be AN-stable if, for all z_1, z_2, ..., z_s with non-positive real parts and such that z_i = z_j whenever c_i = c_j,

(6.8)  det(I − AZ) ≠ 0
(6.9)  |R(Z)| ≤ 1

where

(6.10)  R(Z) = 1 + b^T Z (I − AZ)^{−1} e
It can be seen from Definition 6.1 that AN-stability is not necessarily a consequence of A-stability. AN-stability can be characterized in a fairly satisfactory way. We will need the following definition, which generalizes positive definiteness to non-symmetric matrices.

DEFINITION 6.2. An s × s matrix A is positive definite if there exists a diagonal matrix D = diag(d_1, d_2, ..., d_s) with d_1, d_2, ..., d_s > 0 such that DA + A^T D is positive definite.
(1) A method is called A-stable if |r(z)| ≤ 1 whenever Re(z) ≤ 0, where r is the stability function for the problem y' = qy, i.e. y_n = r(z) y_{n−1} with z = hq.
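The gap between A-stability and AN-stability can be seen numerically by evaluating Eq. 6.10. The trapezoidal rule, written as a 2-stage method (Lobatto IIIA), is A-stable, yet |R(Z)| > 1 for suitable distinct z_i in the closed left half-plane, while the implicit Euler method stays contractive. A sketch (the sample points are my choices):

```python
import numpy as np

def R(A, b, zs):
    """Stability function R(Z) = 1 + b^T Z (I - A Z)^{-1} e  (Eq. 6.10)."""
    Z = np.diag(np.asarray(zs, dtype=complex))
    e = np.ones(len(b))
    return 1.0 + b @ Z @ np.linalg.solve(np.eye(len(b)) - A @ Z, e)

# Implicit Euler: s = 1, A = [1], b = [1]; R reduces to 1/(1 - z).
A_ie, b_ie = np.array([[1.0]]), np.array([1.0])
# Trapezoidal rule as a 2-stage method (A-stable):
A_tr, b_tr = np.array([[0.0, 0.0], [0.5, 0.5]]), np.array([0.5, 0.5])

r_ie = abs(R(A_ie, b_ie, [-1.0 + 2.0j]))       # <= 1
r_tr = abs(R(A_tr, b_tr, [10.0j, 0.0]))        # > 1: AN-stability fails
print(r_ie, r_tr)
```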
It is easy to see that a symmetric matrix is positive definite in the sense of this definition iff it is positive definite in the usual sense. It is also easy to see that if A is positive definite then it cannot be singular: if Av = 0 then v^T (DA + A^T D) v = 0. Furthermore, because of the congruence

  (A^{−1})^T (DA + A^T D) A^{−1} = D A^{−1} + (A^{−1})^T D

the matrix A^{−1} is also positive definite. For the characterization of AN-stability, define B = diag(b_1, b_2, ..., b_s), so that b^T = e^T B, and define M by Eq. 6.11 below, so that M is a symmetric matrix with (i, j) element equal to b_i a_ij + b_j a_ji − b_i b_j. Because of its importance in a wider context, M is also known as the algebraic stability matrix of the method. In the theorem which follows, from Burrage and Butcher (1979), we restrict ourselves to non-confluent methods, in the sense that c_i ≠ c_j for i ≠ j.

THEOREM 6.1. If a non-confluent Runge-Kutta method is AN-stable then

(6.11)  M = BA + A^T B − b b^T

and B = diag(b_1, b_2, ..., b_s) are both positive semidefinite matrices.
The proof may be omitted on first reading; it is presented here to explain the otherwise strange appearance of the matrix M. We will refer to the last formula in the proof.

Proof: If b_i < 0 for some i ∈ {1, 2, ..., s}, choose Z with z_j = 0 for j ≠ i and z_i = ε for a real number ε. Then

  R(Z) = 1 + b_i ε + O(ε^2)

which exceeds 1 in modulus for a suitably chosen, sufficiently small ε. Having established that b_1, b_2, ..., b_s ≥ 0, choose Z = diag(iεy_1, ..., iεy_s), where i = √−1, y = (y_1, y_2, ..., y_s)^T is an arbitrary real vector, and ε is a non-zero real number. Expanding (I − AZ)^{−1} as a power series in Eq. 6.10, we find

  R(Z) = 1 + iε b^T y − ε^2 y^T B A y + O(ε^3)

so that

  |R(Z)|^2 = R(Z) R(Z)* = 1 − ε^2 y^T (BA + A^T B − b b^T) y + O(ε^3) = 1 − ε^2 y^T M y + O(ε^3)

which is greater than 1 for sufficiently small ε unless y^T M y ≥ 0.

Since the requirement that B = diag(b_1, b_2, ..., b_s) be positive semidefinite is quite natural, the condition that M be positive semidefinite is the important one. When B is positive definite, the method is consistent in the following sense: b_1, b_2, ..., b_s act as weights for taking
an average of the representative slopes of the derivative, so that the next point can be calculated with this averaged slope. It makes no sense for one or more of these weights to be negative. The first of the two rows of graphics below presents a consistent averaging; the second presents a non-consistent one.(2)
[two rows of slope-averaging diagrams]
6.2. Non-linear stability
We consider the stability of methods for the non-linear test problem

  y'(x) = f(y(x))

in which f fulfills a one-sided Lipschitz condition with one-sided Lipschitz constant ν ≤ 0.

DEFINITION 6.3. The function f : X → X is said to satisfy a one-sided Lipschitz condition if there exists a number ν, known as a one-sided Lipschitz constant for f, such that for all u, v ∈ X,

  (f(u) − f(v))^T (u − v) ≤ ν ‖u − v‖^2

From the definition it can be seen that the flow of f is contractive when ν ≤ 0: if y and z are solutions of the problem y' = f(y) with initial values at x_0, then for ν ≤ 0 we have ‖y(x) − z(x)‖ ≤ ‖y(x_0) − z(x_0)‖ for any x ≥ x_0. We consider stability conditions that guarantee that computed solutions have similar behavior.
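This contractive behavior carries over to B-stable methods. A sketch with the implicit midpoint rule on y' = −y^3, which satisfies the one-sided Lipschitz condition with ν = 0 (the problem, step size and initial values are my choices):

```python
def midpoint_step(y, h):
    """One implicit midpoint step for y' = -y^3, solved by Newton."""
    m = y                                   # midpoint value (y_n + y_{n+1})/2
    for _ in range(50):
        # solve  m = y + (h/2) * (-m^3)  for the midpoint value m
        F = m - y + (h / 2.0) * m**3
        m -= F / (1.0 + 1.5 * h * m**2)
    return 2.0 * m - y                      # y_{n+1}

h, y, z = 0.5, 2.0, -1.0
gaps = []
for _ in range(20):
    y, z = midpoint_step(y, h), midpoint_step(z, h)
    gaps.append(abs(y - z))

# The distance between the two numerical solutions never increases.
assert all(g2 <= g1 + 1e-12 for g1, g2 in zip(gaps, gaps[1:]))
print(gaps[0], gaps[-1])
```

The implicit midpoint rule is algebraically stable, so this holds for any step size h.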
(2) The notion of consistency for a Runge-Kutta method is used when it satisfies the condition Σ_{i=1}^s b_i = 1.
DEFINITION 6.4. A Runge-Kutta method is said to be B-stable if, for any problem y'(x) = f(y(x)) where f is such that (f(u) − f(v))^T (u − v) ≤ 0 for all u and v, it holds that ‖y_n − z_n‖ ≤ ‖y_{n−1} − z_{n−1}‖ for any two solution sequences (y_n) and (z_n).

If the coefficients of the method are

(6.12)
  c_1 | a_11  a_12  ...  a_1s
  c_2 | a_21  a_22  ...  a_2s
   .  |   .     .          .
  c_s | a_s1  a_s2  ...  a_ss
      | b_1   b_2   ...  b_s

then the matrix M defined in Eq. 6.11 has the individual elements

(6.13)  m_ij = b_i a_ij + b_j a_ji − b_i b_j,   i, j = 1, 2, ..., s
DEFINITION 6.5. If, for a method defined by Eq. 6.12, the matrix M given by Eq. 6.13 is positive semidefinite and none of b_1, b_2, ..., b_s is negative, then the method is said to be algebraically stable.

The difference between algebraic stability and AN-stability is that the latter is restricted to non-confluent methods. Since M is symmetric, it can be written in the form M = N^T G N, where G = diag(g_1, g_2, ..., g_s) and N is non-singular. If the method is algebraically stable then

(6.14)  g_1, g_2, ..., g_s ≥ 0
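Checking algebraic stability is a small computation with Eq. 6.13. A sketch comparing the 2-stage Gauss method, for which M vanishes, with the classical explicit RK4 method, for which M is indefinite (the tableaux are standard, but the check itself is my own illustration):

```python
import numpy as np

def algebraic_stability_matrix(A, b):
    """M with entries m_ij = b_i a_ij + b_j a_ji - b_i b_j  (Eq. 6.13)."""
    B = np.diag(b)
    return B @ A + A.T @ B - np.outer(b, b)

s3 = np.sqrt(3.0)
A_gauss = np.array([[0.25, 0.25 - s3 / 6.0],
                    [0.25 + s3 / 6.0, 0.25]])
b_gauss = np.array([0.5, 0.5])

A_rk4 = np.array([[0.0, 0.0, 0.0, 0.0],
                  [0.5, 0.0, 0.0, 0.0],
                  [0.0, 0.5, 0.0, 0.0],
                  [0.0, 0.0, 1.0, 0.0]])
b_rk4 = np.array([1.0, 2.0, 2.0, 1.0]) / 6.0

M_gauss = algebraic_stability_matrix(A_gauss, b_gauss)
M_rk4 = algebraic_stability_matrix(A_rk4, b_rk4)
print(np.abs(M_gauss).max())            # 0: Gauss is algebraically stable
print(np.linalg.eigvalsh(M_rk4).min())  # < 0: RK4 is not
```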
THEOREM 6.2. An algebraically stable Runge-Kutta method is B-stable.

We will skip the rigorous proof, but we will explain why we could expect this. In the proof of Theorem 6.1 we saw that the conditions for algebraic stability imply contractive behaviour of R(z), ‖y_{n+1}‖ ≤ ‖y_n‖, and contractivity should of course imply ‖y_{n+1} − z_{n+1}‖ ≤ ‖y_n − z_n‖, the condition in the B-stability definition. If the definition of B-stability were modified to use the test equation Eq. 6.3, y'(x) = f(x, y(x)), where for all x, u, v: (f(x, u) − f(x, v))^T (u − v) ≤ 0, then, as stated in Theorem 6.3 below, an algebraically stable method still enjoys the modified B-stability property. Since the modified property has a similar relationship to B-stability as AN-stability has to A-stability, it is known as BN-stability.
6. ALGEBRAIC STABILITY
DEFINITION 6.6. A Runge-Kutta method is said to be BN-stable if for any problem y'(x) = f(x, y(x)), where f is such that (f(x, u) − f(x, v))^T (u − v) ≤ 0 for all u and v, two solution sequences {y_n} and {z_n} satisfy ‖y_{n+1} − z_{n+1}‖ ≤ ‖y_n − z_n‖.

THEOREM 6.3. An algebraically stable Runge-Kutta method is BN-stable.
CHAPTER 7
Singularities in DEs
Written by Erik Østergaard

Keywords: Singularity, impasse-point, projection, augmention

This section concerns singularities in DAEs and ODEs and how to handle them. Definitions are given, and a formalism for projecting the ODE part of the DAE onto the algebraic manifold which handles local singularities correctly is presented. The method is presented in two ways: firstly (in section 7.2) almost without algebraic symbols, in order to establish an overview of the approach; secondly (in section 7.3) a detailed description of the method, including all the algebra. Finally a pseudo code (algorithm) for implementing the method is given.

7.1. Motivation

Consider the DAE initial value problem

2 x(t) x'(t) = −1,   x(0) = 1

which has the unique solution

x(t) = √(1 − t),   t ≤ 1

We see that x(1) = 0 and that lim_{t→1} x(t) = x(1). Despite the existence of the solution at t = 1, a DAE solver cannot continue the integration beyond t = 1, since the derivative x'(t) = −1/(2√(1 − t)) has a singularity at t = 1, which makes the step size decrease until the process is stopped. This type of singularity arises often in applications, especially in
7. SINGULARITIES IN DES
electrical engineering, where such points are called impasse points. The nomenclature is fairly descriptive, since these points are impassable for numerical integrators. It is now the goal to derive a method for obtaining the solution both at and beyond the impasse point. The approach is to project the DE part of the DAE onto the manifold defined by its algebraic equation in a neighborhood of the impasse point. This results in an ODE with a singularity, and the second step is then to augment the ODE system to an ODE without singularities. It is possible to integrate this system, e.g. using the RK method. The computed solution is then transformed back through the two steps to the original DAE. See figure 7.1.
FIGURE 7.1. Visualization of the projection/augmention principle

7.2. Description of the method

Consider the autonomous DAE

A(x) x' = g(x)

where x ∈ I ⊆ ℝⁿ, g : I → ℝⁿ, A : I → ℝ^(n×n) and ∀x ∈ I : rank A(x) = r < n. Let us now consider the DAE only on the manifold defined by the algebraic constraint, i.e. we project the DE part of the DAE onto the manifold using a so-called chart (see also paragraph 7.3.2). Since we now consider the DAE on the
manifold alone, we can handle the resulting expressions as being a pure ODE (see paragraph 7.3.3). Thus we now consider the autonomous ODE

B(y) y' = h(y)

where y ∈ I, h : I → ℝⁿ, B : I → ℝ^(n×n). Let us assume that the mass matrix B for the ODE has non-full rank in the point y0, i.e. rank B(y0) < n. The point y0 is then called a singular point of the ODE. The corresponding point belonging to the DAE-space is x0 = φ(y0), and it is called an impasse point for the DAE. In analogy with naming y0 a singularity for the ODE, x0 is called a singularity for the DAE. Now, by choosing suitable normalizations, transformations and orthonormal bases (see paragraph 7.3.4) we are able to form an ODE

dσ(s)/ds = v(σ(s))

which does not have a singularity in the point s0 corresponding to the singular point y0, and which matches the solution of the ODE in an area around the singular point. This process is called augmention. It is now possible to integrate the nice non-singular ODE using standard methods (a RK method is satisfactory) in order to pass the singular point. Finally, we transform the solution back into the ODE containing the singular point and project the solution there onto the manifold of the DAE using the chart.
7.3. Detailed description of the method

7.3.1. Singularities in ODEs. Consider the autonomous ODE

(7.1)  B(y) y' = h(y)

where y ∈ I ⊆ ℝⁿ, h : I → ℝⁿ, B : I → ℝ^(n×n).
DEFINITION 7.1. The point y0 is called a regular point if rank B(y0) = n. The point y0 is called a singular point if rank B(y0) < n and if y0 is a limit point of regular points.

We introduce the operator Λ_y : ℝⁿ × ℝⁿ → ℝ as

(7.2)  Λ_y(u, v) = ⟨h(y), v⟩ ⟨DB(y)(u) u, v⟩

where ⟨·,·⟩ is the well known natural inner product and D is the Jacobian operator. u and v are arbitrarily chosen such that u ∈ ker B(y) \ {0} and v ∈ ker B(y)^T \ {0}. This function can be shown ([PR94a], [PR94b], and [Rhe84]) to be nonzero. We will use Λ_y as shorthand for Λ_y(u, v).
The sign of Λ_{y0} determines whether the singular point y0 is accessible or inaccessible. Accessibility is defined as the existence of a finite t* such that there exists an interval J1 = [−t*, 0) on which the system has a solution y1(t) with y1(t) → y0 for t → 0⁻. Likewise, inaccessibility is defined as the existence of a finite t* such that there exists an interval J2 = (0, t*] on which the system has a solution y2(t) with y2(t) → y0 for t → 0⁺. At the singular point ‖y'(t)‖ → ∞ for t → 0, thus the integrator fails, since the step length tends to zero.

7.3.2. Projection: A little more on charts. Consider the manifold M = {y ∈ ℝⁿ | g(y) = 0} and the vector field v : M → ℝⁿ. We define the ODE on M as

y' = v(y),   y ∈ M
where g : ℝⁿ → ℝ^(n−r). We define the local chart φ_i as a bijective mapping between a patch M_i of the manifold and a Euclidean parameter domain E_i ⊆ ℝʳ.
A solution of (7.3.2) can then be found by the following method:
1. Project v onto E_i via the chart φ_i by multiplying v(y) with Dφ_i(y) (see 7.11).
2. Apply standard methods to the projected field (e.g. a RK solver).
3. Pull the solution back onto the manifold M_i using φ_i^(−1).

7.3.3. Singularities in DAEs. Again, consider the autonomous DAE

(7.4)  A(x) x' = g(x)
where x ∈ I ⊆ ℝⁿ, g : I → ℝⁿ, A : I → ℝ^(n×n) and ∀x ∈ I : rank A(x) = r < n. We introduce the so-called local chart φ^(−1) that defines a bijective projection of the algebraic part W of the DAE onto a manifold with lower dimension. Set W = {x ∈ I : g(x) ∈ rge A(x)}. The chart φ(y) = x maps y ∈ U ⊆ ℝʳ into x ∈ V ∩ W.

DEFINITION 7.3. The point x0 is an accessible or inaccessible impasse point (i.e. singular point) of the DAE if and only if the point y0 = φ^(−1)(x0) is an accessible or inaccessible singular point of the reducing projection of the DAE to an ODE locally at x0.
We introduce the projection operator P_i. The operator is connected to the local chart φ_i, and P_i projects the vector field in ℝⁿ onto the set rge A(x0). This defines the functions

(7.5)  y = φ_i^(−1)(x),   B(y) = P_i A(φ_i(y)) Dφ_i(y),   h(y) = P_i g(φ_i(y))

and hereby the reduced ODE

(7.6)  B(y) y' = h(y)
It can be shown ([PR94a], [PR94b] and [Rhe84]) that the reduction (7.5) of the DAE (7.4) to the ODE (7.6) on the manifold has the same properties (existence, uniqueness, smoothness etc.) as the original DAE.

LEMMA 7.1. An impasse point is a limit point of non-impasse points.

Since the projection is bijective (and even diffeomorphic), and since singular points are limit points of regular points according to definition 7.1, the lemma is easily proved.

7.3.4. Augmention: Making the singular ODE a nonsingular ODE. Given the ODE

(7.7)  B(y) y' = h(y)

with a singular point y0. Let y = y(t) be a C¹ solution to (7.7). Let ρ(s) be a C¹ function with ρ'(s) ≠ 0 that defines a transformation of the independent variable t as t = ρ(s). Finally, let σ(s) = y(t) = y(ρ(s)). By application of the chain rule and from (7.7) we see that σ satisfies

(7.8)  B(σ) dσ/ds = h(σ) dρ/ds
Since ‖dy/dt‖ → ∞ for y → y0, we choose ρ such that dρ/ds → 0 and dσ/ds remains bounded for y → y0. We specify ρ implicitly by the normalization c^T (dσ/ds) = 1, where c ∈ ℝⁿ is chosen to suit the problem. We define the functions v(y) and γ(y) by

(7.9)
[ B(y)  −h(y) ] [ v(y) ]   [ 0 ]
[ c^T     0   ] [ γ(y) ] = [ 1 ]

c ∈ ℝⁿ is chosen such that the matrix in (7.9) evaluated at the singular point y0 is regular. We will demonstrate that this augmention of the original ODE fulfils our purpose.
and set U_c = U(x_c) ∈ ℝ^(n×r). This ensures existence of the chart φ^(−1). We note that the Jacobian of the chart then can be found as

(7.11)  Dφ(y) = U(y) (U_c^T U(y))^(−1)

We form the matrix system
(7.12)
[ B_c    −h_c   e ] [ v1  v2 ]   [ 0  0 ]
[ c1^T     0    0 ] [ w1  w2 ] = [ 1  0 ]
[ c2^T     0    0 ] [ z1  z2 ]   [ 0  1 ]
This system is built using the singular value decomposition (SVD). The system is used in order to ensure that the DAE singularity is transformed to the equilibrium point x_c of the augmented ODE system. Output is the vectors e, c1, c2, which form an orthonormal basis, found by

(7.13)  V_L^T [ B_c  −h_c ] V_R = [ Σ  0 ; 0  0 ]   (from the SVD decomp.)

with V_L ∈ ℝ^(r×r) and V_R ∈ ℝ^((r+1)×(r+1)) orthogonal and Σ = diag(σ1, ..., σ_{r−1}), σ1 ≥ σ2 ≥ ... ≥ σ_{r−1} > 0. Recall that e_k, k = 1, ..., m, denotes the unit basis vectors of ℝ^m. Let

(7.14)  u1 = π1(V_R e_{r+1}),   u2 = π2(V_R e_{r+1})

where π1 extracts the first r components and π2 the last component, which gives us the augmention elements e, c1, c2:

(7.15)  e = V_L e_r,   (c1, c2) = (u1, u2) / ‖(u1, u2)‖

Finally v and γ are evaluated by solving (7.12) and using

(7.16)  v(y) = z2 v1 − z1 v2
(7.17)  γ(y) = z2 w1 − z1 w2
From (7.9) it is seen that v ≠ 0 for all y, that γ ≠ 0 for y ≠ y0, that γ is monotone and that γ(y0) = 0. Hence the problem

(7.18)  dt/ds = γ(y(t)),   t(0) = 0

has a unique, monotonically increasing solution in a neighborhood of t = t0. Also, since γ changes sign when passing the singular point, it can be used as an indicator for whether we have passed the singular point.
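The augmention idea can be demonstrated on a scalar toy problem of the quasilinear form B(y) y' = h(y) (our illustrative choice, in the spirit of section 7.1): B(y) = 2y, h(y) = −1, y(0) = 1, whose solution y(t) = √(1 − t) has an impasse at t = 1, where y' blows up. Requiring B(y) dy/ds = h(y) dt/ds along a new parameter s, one may simply take dy/ds = h(y) and dt/ds = B(y); this augmented system is free of singularities, and the sign change of dt/ds = B(y) flags the passage of the impasse point:

```python
# Augmented, non-singular form of the toy problem 2*y*y' = -1, y(0) = 1:
#     dy/ds = h(y) = -1,     dt/ds = B(y) = 2*y,
# which satisfies B(y) dy/ds = h(y) dt/ds and can be integrated by a plain
# RK step straight through the singular point y = 0 (t = 1).

def rk4_step(f, u, h):
    k1 = f(u)
    k2 = f([ui + 0.5*h*k for ui, k in zip(u, k1)])
    k3 = f([ui + 0.5*h*k for ui, k in zip(u, k2)])
    k4 = f([ui + h*k for ui, k in zip(u, k3)])
    return [ui + h/6.0*(a + 2*b + 2*c + d)
            for ui, a, b, c, d in zip(u, k1, k2, k3, k4)]

def augmented(u):
    y, t = u
    return [-1.0, 2.0*y]          # dy/ds = h(y),  dt/ds = B(y)

u = [1.0, 0.0]                    # (y, t) at s = 0
h = 1e-3
trace = [tuple(u)]
for _ in range(2000):             # integrate to s = 2
    u = rk4_step(augmented, u, h)
    trace.append(tuple(u))

t_max = max(t for _, t in trace)
assert abs(t_max - 1.0) < 1e-6    # the impasse point t = 1 is reached...
assert u[0] < 0.0                 # ...and y passes smoothly through 0
```

Beyond the impasse point t decreases again (dt/ds = 2y < 0 for y < 0), so the solution folds back in t, exactly the behaviour the sign-change indicator γ is designed to detect.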
(7.22)  dσ(s)/ds = v(σ(s))
Now we can use a one-step method, e.g. a RK solver, to integrate (7.22) in order to find σ and thereby find y. The validity of using a RK solver is shown in [KN92].

7.4. Implementing the algorithm

We here present an algorithm (see figure 7.2) for integration beyond impasse points. The method needs a few comments. We assume that we have an algorithm that performs normal DAE integration until an impasse point is found. We then switch to using the functions in figure 7.2 in order to pass the impasse point, whereafter we can use the standard DAE method again. Remember that impasse points in this context are supposed to be limit points of non-impasse points. In the previous sections we established existence of the functions and definitions that form the projection and augmention. These formalisms only exist as symbols and not as accessible objects. This means that every time we need an evaluation of the chart, we need all the previous evaluations as well. Since this among other things consists of QR and SVD factorizations, the price for doing this is rather high, and the factorizations cannot be ensured to be reusable in the next steps. Thus we need to perform the two steps illustrated in figure 7.1 for every RK step we apply. In comparison to the projection/augmention calculations, the RK step is a low price to pay. Also, since the projection/augmention only exist locally, multi-step methods are not applicable. Only one-step methods can be applied.
and denote x_c the center of the local coordinate system belonging to the local chart. x_c is assumed close to the impasse point x0. Finally, for ease of notation we QR-factorize the Jacobian of the algebraic part of the DAE as

(7.24)  Dg2(x_c)^T = [ Q1  Q2 ] [ R ; 0 ]
Implementation of the method

proc SingDAE(x_k, tol)
    x_c := x_k
    Compute QR factorization of (7.24). Output: R, Q1, Q2
    U_c := Q2
    B_c := A(x_c) U_c
    h_c := g1(x_c)
    Compute augmentation vectors in (7.15)
    v := Eval_v(...)
    y1 := RK(y0)
    if sign(γ0) ≠ sign(γ1) then
        We have passed the singular point
    else
        Do it again
    x_{k+1} := Eval_φ(y1)
    return x_{k+1}
end

Evaluation of the chart (the tangential coordinate system)

proc Eval_φ(y, R, Q1)
    x := x_c + U_c y
    while ok = FALSE do
        Solve R^T z = g2(x) for z
        x := x − Q1 z
    od
    return x
end

Evaluation of the augmented system

proc Eval_v(Q1, R, e, c1, c2)
    x := Eval_φ(y)
    Compute Dφ(y) by (7.11)
    B_c := A(x) Dφ(y)
    Solve the augmented system (7.12)
    Compute v(y) and γ(y) from (7.17)
    return v
end
CHAPTER 8
q_i' = ∂H/∂p_i,   p_i' = −∂H/∂q_i,   for i = 1, ..., l
where q and p are position and momentum of the l particles, respectively. H denotes the Hamiltonian, which is the total energy of the system and must be conserved. In spacecraft dynamics it is also important to preserve the energy of the planet-spaceship system. An example of such a system is given in section 8.3. A very basic example of a system with an invariant is the simple pendulum rotating about a fixed point at one end. The length of the rod of the pendulum has
to be constant, but different examples [AP98], [ESF98] show that this is not the case with every applied method.

8.2. Solving ODEs with invariants

8.2.1. Methods preserving invariants. Some methods may inherently preserve special invariants. For example, Runge-Kutta methods will preserve linear constraints. Methods preserving constraints of up to quadratic order exist. If the divergence of f (div f = ∑_{i=1}^n ∂f_i/∂x_i) is zero for a system of ODEs, the system is termed symplectic. One example of a symplectic system is a Hamiltonian system. Symplectic methods preserve the divergence of the system and may be applied in this case.

8.2.2. Reformulation as a DAE. The ODE system with an invariant

(8.3a)  y' = f(y, t)
(8.3b)  0 = c(y)

may be reformulated as the DAE

(8.4a)  y' = f(y, t) − D(y) z
(8.4b)  0 = c(y)

provided CD, where C = ∂c/∂y,

is boundedly invertible.
The choice is often D = C^T, which will define an orthogonal projection of the solution onto the constraint manifold [AP98]. There may for a given system be reasons for choosing other projections. The DAE reformulation approach may, however, not be advantageous, because the DAE will be of index 2 or higher. To solve an index-2 DAE one should be very careful when selecting the solution method.
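The remark in section 8.2.1 that methods preserving quadratic constraints exist can be checked directly: the implicit midpoint rule (the one-stage Gauss method) conserves the quadratic invariant I = q² + p² of the harmonic oscillator q' = p, p' = −q exactly, while the explicit Euler method inflates it. For this linear system the midpoint step can be written in the closed form below (the example and step size are our illustrative choices):

```python
# Implicit midpoint vs explicit Euler on q' = p, p' = -q, invariant
# I = q**2 + p**2. For this linear system the midpoint equations can be
# solved in closed form: the step matrix is an exact rotation.

def midpoint_step(q, p, h):
    a = h / 2.0
    d = 1.0 + a*a
    # u_next = (I - (h/2)A)^(-1) (I + (h/2)A) u  with A = [[0,1],[-1,0]]
    return ((1 - a*a)*q/d + 2*a*p/d, -2*a*q/d + (1 - a*a)*p/d)

def euler_step(q, p, h):
    return (q + h*p, p - h*q)

h, n = 0.1, 1000
qm, pm = 1.0, 0.0          # midpoint trajectory
qe, pe = 1.0, 0.0          # Euler trajectory
for _ in range(n):
    qm, pm = midpoint_step(qm, pm, h)
    qe, pe = euler_step(qe, pe, h)

assert abs(qm*qm + pm*pm - 1.0) < 1e-9   # invariant conserved to roundoff
assert qe*qe + pe*pe > 100.0             # Euler: invariant grows without bound
```

The midpoint step matrix has determinant one, which is why the invariant is conserved to roundoff; explicit Euler multiplies the invariant by 1 + h² every step.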
8.2.3. Stabilization of the ODE system. To avoid the DAE one may apply stabilization of the solution to the ODE system. Poststabilization of the solution is what is done in projection methods (see chapter 3). One may also embed the stabilization in the ODE system by formulating it as:

(8.5)  y' = f(y, t) − γ F(y) c(y)

The solution to the system is stable, i.e. it tends towards vanishing invariant changes from any initial point, provided γ > γ0/λ0 and CF is positive definite. Here λ0 is the smallest eigenvalue of CF, and γ0 is a constant such that ‖C f(y, t)‖ ≤ γ0 ‖c(y)‖. The matrix function F may be chosen as F = D(CD)^(−1), for which λ0 = 1. In many cases γ0 is zero. In such cases the invariant is termed integral.
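A minimal sketch of the embedded stabilization (8.5) on a toy system (our choices: rotation on the unit circle, invariant c(y) = y1² + y2² − 1, F = D(CD)⁻¹ with D = C^T, and γ = 5; here Cf = 0, so the invariant is integral):

```python
# Stabilised flow y' = f(y) - gamma*F(y)*c(y) for a rotation field on the
# unit circle. With C = grad c(y)^T = (2*y)^T and D = C^T one gets
# F = D (C D)^(-1) = y / (2*|y|^2), and along the flow c' = -gamma*c,
# so a perturbed initial state is pulled back to the manifold c = 0.

gamma, h, n = 5.0, 1e-3, 4000

def c(y):
    return y[0]*y[0] + y[1]*y[1] - 1.0

def rhs(y):
    f = [-y[1], y[0]]                      # rotation: C f = 0 (integral case)
    norm2 = y[0]*y[0] + y[1]*y[1]
    F = [y[0]/(2*norm2), y[1]/(2*norm2)]   # F = D (C D)^(-1)
    return [f[i] - gamma*F[i]*c(y) for i in range(2)]

y = [1.2, 0.0]                             # start off the invariant manifold
for _ in range(n):                         # plain explicit Euler (sketch)
    r = rhs(y)
    y = [y[i] + h*r[i] for i in range(2)]

assert abs(c(y)) < 1e-3                    # defect damped from 0.44...
assert abs(c(y)) < 0.01 * 0.44             # ...by more than two decades
```

The defect cannot drop below the discretization floor of the explicit Euler steps (of order h/γ here); a higher-order integrator or poststabilization drives it further down.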
8.3. Flying to the moon

In [DM70] a restricted three-body problem involving motion and interaction of the earth, the moon and a spaceship is presented. The trajectory of the space-
FIGURE 8.1. The restricted three-body problem for earth, moon and spaceship
ship is formulated from Newton's second law in a rotating coordinate system with origin in the center of mass of earth, m1, and moon, m2. The mass ratio is μ = m2/(m1 + m2) = 0.01215. The trajectory is defined by two ordinary differential equations:

(8.6)  y1'' = y1 + 2 y2' − (1 − μ)(y1 + μ)/r1³ − μ (y1 − 1 + μ)/r2³
(8.7)  y2'' = y2 − 2 y1' − (1 − μ) y2/r1³ − μ y2/r2³

where

(8.8)  r1 = ((y1 + μ)² + y2²)^(1/2)
(8.9)  r2 = ((y1 − 1 + μ)² + y2²)^(1/2)

There is an important invariant known as the Jacobi integral in this system:

(8.10)  J = (1/2)(y1'² + y2'²) − (1/2)(y1² + y2²) − (1 − μ)/r1 − μ/r2
Below some simulation results obtained with the ode45 method of Matlab are presented. The simulations are based on a proposed problem in [DM70]: try to make a spaceship positioned near the earth fly to the moon. The graphs presented are initialized at the same position and with the same velocity in the y2-direction. The y1-velocity is increased to produce results where the spacecraft ends at the earth, goes around the earth, reaches a stable point between the two bodies, reaches the moon, and goes to outer space. The most interesting result from the modeling point of view is that the Jacobi integral is not at all constant. This means that the applied method is not applicable for this problem.
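The non-conservation of J under a standard explicit Runge-Kutta integration is easy to reproduce. The sketch below uses the mass ratio μ = 0.01215 from above, a fixed-step classical RK4, and an arbitrary illustrative initial state (not the initial values behind the figures); the drift of the Jacobi integral decreases when the step is refined, but it never vanishes:

```python
import math

# Restricted three-body equations (8.6)-(8.9) in the rotating frame with
# state u = (y1, y2, y1', y2'), and the Jacobi integral (8.10).

MU = 0.01215

def rhs(u):
    y1, y2, v1, v2 = u
    r1 = math.sqrt((y1 + MU)**2 + y2**2)
    r2 = math.sqrt((y1 - 1 + MU)**2 + y2**2)
    a1 = y1 + 2*v2 - (1 - MU)*(y1 + MU)/r1**3 - MU*(y1 - 1 + MU)/r2**3
    a2 = y2 - 2*v1 - (1 - MU)*y2/r1**3 - MU*y2/r2**3
    return [v1, v2, a1, a2]

def jacobi(u):
    y1, y2, v1, v2 = u
    r1 = math.sqrt((y1 + MU)**2 + y2**2)
    r2 = math.sqrt((y1 - 1 + MU)**2 + y2**2)
    return 0.5*(v1*v1 + v2*v2) - 0.5*(y1*y1 + y2*y2) - (1 - MU)/r1 - MU/r2

def rk4(u, h, n):
    for _ in range(n):
        k1 = rhs(u)
        k2 = rhs([ui + 0.5*h*k for ui, k in zip(u, k1)])
        k3 = rhs([ui + 0.5*h*k for ui, k in zip(u, k2)])
        k4 = rhs([ui + h*k for ui, k in zip(u, k3)])
        u = [ui + h/6.0*(a + 2*b + 2*c + d)
             for ui, a, b, c, d in zip(u, k1, k2, k3, k4)]
    return u

u0 = [0.5, 0.5, -0.5, 0.5]       # illustrative state far from both bodies
J0 = jacobi(u0)
drift_coarse = abs(jacobi(rk4(list(u0), 0.02, 50)) - J0)    # to t = 1
drift_fine = abs(jacobi(rk4(list(u0), 0.005, 200)) - J0)    # to t = 1
assert drift_coarse > drift_fine  # J is not conserved; refining only helps
```

This is exactly the behaviour seen with ode45 in the figures: a non-symplectic, non-invariant-preserving method lets J wander, especially near close encounters with the primaries.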
8.3.1. Going nowhere. Figure 8.2 shows the trajectory of the spaceship at low initial speed. The ship returns to the earth, and the trajectory would in reality end there. The value of the Jacobi integral changes a lot over the path, as shown in figure 8.3.
8.3.2. Round about the earth. In figure 8.4 the spacecraft goes around the earth and eventually comes back. Figure 8.5, showing the Jacobi integral values, shows that the changes in the integral value are distinct whenever the ship is near the planet.
8.3.3. Lunar landing. In figure 8.6 the spaceship reaches the moon. It is, however, questionable whether this would also be the case in a real three-body system, due to the change of the integral over the path (figure 8.7).
8.3.4. What's out there? In figure 8.8 the spaceship leaves the earth-moon system and goes into outer space. The values of the integral are shown to be non-constant also for this example, in figure 8.9.
8.4. Comments on Improvement of the Solution

In section 8.2, ways to avoid the poor behaviour seen in the solutions presented above have been described. These approaches may be introduced to overcome the problems
encountered in the three-body system. A few comments on how to apply the improved methods may be in order here:

Methods preserving invariants: are not readily available for this system.

Reformulation as a DAE: will require a higher-index method for an implicit DAE system. Such a method has not been available.

Poststabilization of the ODE system: is the recommended approach according to [AP98]. One problem which has not been discussed in this reference is how to deal with the system if it is implicit in the derivatives and not only a function of the variables in y. This is the case for the system examined, and a way to overcome this complication has not been found in the literature available.
Part 3
Applications
CHAPTER 9
Multibody Systems
Written by Falko Jens Wagner

Keywords: Multibody Systems, mechanical systems, Pendulum example, truck model, rigid bodies, elastic bodies, joints, degrees of freedom, multibody system analysis, suspension model

Multibody systems is one area in which methods for solving DAEs are of special interest. This chapter is about multibody systems: why they result in DAE systems, and what kind of problems can arise when dealing with multibody systems and formulating their corresponding DAE system.
9.1. What is a Multibody System?

A multibody system is a mechanical system that consists of one or more bodies. (Strictly, a multibody system should consist of more than one body, but essentially a system consisting of only one body is treated in the same way, so this is a matter of generalization.) The bodies can either be rigid or elastic. They can have a mass and/or a torque. Somehow they are connected with each other (or they would not be considered to be in the same system). These mass-less connections can either be force elements, like springs, dampers or friction, or non-elastic joints, i.e. translational or rotational joints. Joints give a restriction in the movement and can transfer forces from one body to another. Joints reduce the degrees of freedom of the system [Eic92, page 12]. While the force elements result in explicitly given forces, joints result in implicitly given forces (as before: they only transfer forces from one body to another). The equations necessary for describing a multibody system are the equations of motion (Newton's second law)
9. MULTIBODY SYSTEMS
(9.1)  M(p) p'' = f(p, p')
where M is the mass matrix, p denotes the position and f the forces (both internal and external) acting on the system. The forces in (Eq. 9.1) are the result of external forces acting on the system or of force elements such as springs, dampers or friction. As can be seen in (Eq. 9.1), the forces can be a function of both position and velocity. Friction forces are typically a function of velocity, whereas spring forces are a function of the position (i.e. elongation) of the spring. In addition to the equation of motion, there can be a restriction

(9.2)  g(p) = 0
on the position, caused by the different joints.

9.2. Why this interest in Multibody Systems?

In general, computer simulations are used to investigate systems of the real world that would otherwise not have been possible to analyze, either because it is too expensive, takes too long or is too dangerous, etc. When a physical system has to be simulated, seeing the system as consisting of several bodies that interact with each other in certain ways is a formalism that is very easy to understand and to simulate. For example, the bodies are typically represented only in one point, namely their center of mass. When modeling a physical system using the concept of multiple bodies connected by force elements or joints, the resulting equation system (using Eq. 9.1 and Eq. 9.2) will end up being a differential algebraic equation system. In addition to that, multibody systems are easy to understand and therefore serve as good examples for the concept of differential algebraic equation systems.

9.3. A little bit of history repeating

Solving DAE systems is not a trivial task, and good and stable numerical methods have only been implemented for a short period of time. Up until the sixties the equations of motion for multibody systems were solved by hand on the basis of the Euler-Lagrange formulation [Eic92, page 14]:
(9.3)
d/dt (T_{p'}(p, p')) − T_p(p, p') + G^T(p) λ − Q(p, p') = 0
g(p) = 0

where T is the kinetic energy, p is the position, λ is the Lagrange multiplier, Q represents the external forces acting on the system, G is (here) the Jacobian ∂g/∂p of the constraints, and g are the algebraic constraints on the position. An analytic solution was then looked for. But as the systems got bigger and more complex, it was necessary to automate the process of stating the equations of motion. Programs for this purpose were developed that could analyze the structure of the system and automatically derive the equations of motion on the basis of the description of the system elements. Differentiating (Eq. 9.3) and substituting f = Q − T_{p p'} p' + T_p yields the implicit-linear differential equation

(9.4)  M(p) p'' = f(p, p') − G^T(p) λ

which together with

g(p) = 0
forms a DAE system that is in the form of (Eq. 9.1 and Eq. 9.2). There are different formalisms to solve these systems, which basically are the theme of these notes.

9.4. The tasks in multibody system analysis

The process of transforming a real-world system into a multibody dynamics model is not much unlike the process of general modeling and simulation; see Figure 9.1 for a process diagram. First of all, the mechanical system in question has to be analyzed with respect to the simulation purpose, i.e. what function/behavior of the system do we want to investigate? According to [ESF98, page 13] the following five aspects of investigation can be considered:

Kinematic analysis
Static equilibrium
Dynamic simulation
Linear system analysis
a way so that the simulation objectives can be fulfilled. This is often a highly iterative process.

9.5. Example of a multibody system

The following examples of multibody systems are used to illustrate the structure of multibody systems.
FIGURE 9.2. Example of a multibody system

9.5.1. The multibody truck. This figure (Figure 9.2) shows how a truck is modeled using the multibody concept. A model is always a simplification of the real world, and so is this truck. The truck model consists of five bodies. Body 1 is the chassis of the truck, body 2 is the load area, while body 3 is the cockpit. Bodies 4 and 5 represent the wheels. For analyzing the movement of the truck, the ground has been included in the system as body 6. The wheel suspension is represented by springs and dampers. The same model is used for the contact between the wheels and the road. The load area has a rotational joint at the back, which allows the loading area to be tilted. The other end of the load area, like the cockpit, has a spring-damper suspension. This is the multibody model of the truck.
After defining the bodies, the equations of motion are applied. These equations won't be stated here, but can be found in [ESF98, page 20], from which this example is taken. Instead, a part of the truck model is used as an example for analysis. Figure 9.3 shows the wheel suspension of the truck.
Here m1 is the mass of the truck (or the part of the truck whose load rests on this wheel), m2 is the mass of the wheel, f is the damping coefficient of the damper, k1 is the spring constant of the suspension and k2 is the spring constant of the rubber wheel itself (since the tire is very elastic, its oscillation should be taken into account). The mathematical model for this system is given by Eq. 9.1 and becomes (see also [MKZ92, p.45]):

(9.5)
m1 y1'' = −f (y1' − y2') − k1 (y1 − y2)
m2 y2'' = f (y1' − y2') + k1 (y1 − y2) − k2 (y2 − z)

Solving these equations (Eq. 9.5) for y1'' and y2'' and substituting v1 = y1' and v2 = y2' yields the equation system:

(9.6)
y1' = v1
y2' = v2
v1' = −(f/m1)(v1 − v2) − (k1/m1)(y1 − y2)
v2' = (f/m2)(v1 − v2) + (k1/m2)(y1 − y2) − (k2/m2)(y2 − z)
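A minimal simulation sketch of this system, assuming the standard two-mass (quarter-car) reading of Eq. 9.5, with the parameter and initial values of Figures 9.4 and 9.5 and a fixed-step classical RK4 as an illustrative integrator:

```python
# Wheel-suspension model (9.6): two masses coupled by spring/damper (k1, f),
# wheel coupled to the road profile z through the tyre spring k2.

m1, m2 = 500.0, 50.0          # truck mass on the wheel, wheel mass
k1, k2 = 7500.0, 150000.0     # suspension and tyre spring constants
f, z = 2250.0, 0.0            # damping coefficient, road profile

def rhs(u):
    y1, y2, v1, v2 = u
    a1 = -(f/m1)*(v1 - v2) - (k1/m1)*(y1 - y2)
    a2 = (f/m2)*(v1 - v2) + (k1/m2)*(y1 - y2) - (k2/m2)*(y2 - z)
    return [v1, v2, a1, a2]

def rk4_step(u, h):
    k1_ = rhs(u)
    k2_ = rhs([ui + 0.5*h*k for ui, k in zip(u, k1_)])
    k3_ = rhs([ui + 0.5*h*k for ui, k in zip(u, k2_)])
    k4_ = rhs([ui + h*k for ui, k in zip(u, k3_)])
    return [ui + h/6.0*(a + 2*b + 2*c + d)
            for ui, a, b, c, d in zip(u, k1_, k2_, k3_, k4_)]

u = [-0.05, 0.0, 0.0, 0.0]    # initial elongations and velocities
h = 0.001
for _ in range(5000):         # 5 seconds of simulated time
    u = rk4_step(u, h)

# the system is damped, so the oscillations die out after a few seconds
assert abs(u[0]) < 1e-3 and abs(u[1]) < 1e-3
```

Note the two well-separated time scales (a slow body mode and a fast, heavily damped wheel-hop mode); for stiffer parameter sets an implicit method would be the appropriate choice.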
y1 -0.05 y2 0 v1 0 v2 0 F IGURE 9.5. Initial values for wheel suspension model gives the results printed in Figure 9.6. The graph shows the elongation (deformation) of the two springs in the system (Figure 9.3). Since the system is damped, the oscillations will vanish after a few seconds. 9.5.2. The pendulum. Another multibody example, which is used throughout these lecture notes, is the pendulum (Figure 9.7). The equations of the pendulum are as follows (taken from [ESF98, page 150]):
(9.7a)  p1' = v1
(9.7b)  p2' = v2
(9.7c)  v1' = −λ p1
(9.7d)  v2' = −λ p2 − g
(9.7e)  0 = p1² + p2² − 1
(9.7f)  0 = p1 v1 + p2 v2
(9.7g)  0 = v1² + v2² − λ (p1² + p2²) − g p2
The first four equations (Eq. 9.7a-Eq. 9.7d) represent the equations of motion for the pendulum, written in x, y-coordinates (or p1, p2) and transformed to first-order differential equations. Together with a restriction on the length of the pendulum (Eq. 9.7e), the corresponding DAE system becomes an index-3 formulation. Differentiating the
restriction (Eq. 9.7e) yields the restriction (Eq. 9.7f), which is equal to Eq. 3.5. This is an index-2 formulation, while a further differentiation, giving the restriction (Eq. 9.7g), reduces the resulting DAE system to index 1 (see also Eq. 3.6). Further differentiation of the index-1 problem yields an ordinary differential equation system, or simply index 0. This is also called the State-Space Form (Eq. 9.8). In some literature ([Eic92]) this form is also called the minimal-coordinates form.

(9.8)  φ1' = φ2,   φ2' = −g sin(φ1)
As one can see from Eq. 9.8, this form is an ordinary differential equation system. See section 3.2.1 for the analytical results of index reduction.

9.5.3. Numerical solution. A simulation of the index-0 or State-Space Form is shown in Figure 9.8.
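The drift-off effect is easy to reproduce. Eliminating the constraint via the twice-differentiated restriction (Eq. 9.7g) gives an ODE in (p, v) in which the unit length is only an invariant; the multiplier λ = (v1² + v2² − g p2)/(p1² + p2²) below is our reading of that relation, and explicit Euler with the step sizes shown is an illustrative choice:

```python
# Index-reduced pendulum: p' = v, v' = -lambda*p - (0, g) with lambda from
# Eq. 9.7g. The constraint |p|^2 = 1 is no longer enforced, so the
# numerical solution drifts off the manifold (the drift-off phenomenon).

g = 9.81

def rhs(u):
    p1, p2, v1, v2 = u
    lam = (v1*v1 + v2*v2 - g*p2) / (p1*p1 + p2*p2)
    return [v1, v2, -lam*p1, -lam*p2 - g]

def constraint_defect(h, n):
    """Integrate with explicit Euler and return | |p|^2 - 1 | at the end."""
    u = [1.0, 0.0, 0.0, 0.0]          # consistent: |p| = 1 and p.v = 0
    for _ in range(n):
        r = rhs(u)
        u = [ui + h*ri for ui, ri in zip(u, r)]
    return abs(u[0]*u[0] + u[1]*u[1] - 1.0)

d1 = constraint_defect(1e-3, 10000)   # integrate to t = 10
d2 = constraint_defect(5e-4, 20000)   # same interval, halved step
assert d1 > 1e-6                      # the solution has drifted off |p| = 1
assert d2 < d1                        # the drift shrinks with the step size
```

This is precisely the behaviour shown in Figure 9.9: the index reduction removes the constraint from the equations, so any discretization error in the invariant accumulates instead of being projected away (compare the projection methods of chapter 3).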
FIGURE 9.8. Solution for index-0 formulation.

A simulation of the index-1 formulation is shown in Figure 9.9. For both simulations the Matlab function "ode23tb" (for stiff systems) is used for integration. In the latter example we see the effect of drift-off because of the preceding index reduction (see section 3.2.1). Different numerical methods are more or less efficient when it comes to solving the DAE forms (Eq. 9.7a-Eq. 9.7g). In general it is harder to solve the index-3 formulation than the index-1 formulation.

9.6. Problems

One factor that causes problems for many numerical methods is algebraic loops. When deriving the mathematical model from a model, the structure of the
FIGURE 9.9. Solution for index-1 formulation.

model together with the formalism can result in algebraic loops in the mathematical model. This means that certain variables are over-specified. This can cause the numerical method to break down. There are different algorithms that can eliminate algebraic loops and make the problem solvable.

9.7. Multibody Software

Dymola is a program package that can be used to simulate multibody systems. It is developed by Dynasim in Lund, Sweden, and runs on both UNIX and Windows machines. Its graphical user interface gives good support to the multibody concept.
CHAPTER 10
vice. The constitutive relations are most often empirical relations for transfer of energy, pressure losses and chemical reactions. Another characteristic property of energy systems is the thermodynamic state relations, which anywhere in a plant connect the state variables pressure, temperature, enthalpy, entropy, density and so on. These relations are the reason why only pressure and enthalpy are needed as global variables in a system. Enthalpy is chosen because it is generally applicable, whereas temperature does not supply sufficient information during phase transition, which is of great importance in many systems. The mathematical formulation of the conservation laws is as follows: Conservation of mass is applied for any separate fluid flow through a component.

(10.1)  ∑_flows ṁ_i = dM/dt
ṁ is a mass flow. In this formulation in- and outlet flows have opposite signs. M = ρV is the mass of a volume V of fluid in a component, ρ is an average of the density of the fluid in the component, and t is time. Conservation of energy only accounts for the thermal energy of the fluid. Changes in potential or kinetic energy may often be neglected.

(10.2)  ∑_flows ṁ_i h_i + Q̇ + P + Ẇ = ∑_masses dU_i/dt + dU_S/dt
Q̇, P and Ẇ are heat flow, electrical power and mechanical power, respectively, and h is the enthalpy of a fluid. U_i = M_i u_i indicates the internal energy of a fluid
mass with specific internal energy u_i. U_S is likewise the internal energy of the material of the device. Conservation of momentum is, due to the lumping assumption, subject to a constitutive description by frictional, gravitational and accelerational pressure changes and power input/output:

(10.3)  Δp = Δp(ṁ, ΔH, Ẇ, P)

p is pressure and ΔH is the height difference between inlet and outlet of the component. The resulting system of equations is a semi-explicit index-1 DAE. The equation system could naturally be written in the more familiar, general DAE form

(10.4)  M y' = f(t, y, z)
where M is a constant matrix. However, this formulation does not indicate the exact nature of the system. Each equation may depend on more than one of the derivatives, but the system may be formulated so that each equation is explicit in one of the derivatives. Such a formulation is provided below:

(10.5)
y' = F(t, y, z, y')
0 = G(t, y, z)
y contains the differential variables, y' their time derivatives, and z the algebraic variables. y' on the right-hand side of the differential equations indicates that time derivatives of other variables may be present in the equations. It is worth mentioning that discontinuities are often present in energy systems. One common example is the opening and closing of a valve.

10.2. Example Simulations with DNA

The following examples have been carried out with the program DNA, which is a general energy system simulator [Lor95]. DNA is an abbreviation for Dynamic Network Analysis, implying that the energy system is regarded as a network of components, similar to the way electrical circuits are often modeled.

10.2.1. Numerical methods of DNA. This section describes the numerical methods implemented in DNA. The methods have been carefully selected as the most appropriate for application to the energy system models described above. A BDF-method (Eq. 10.6) has been chosen as integration method, because it is most efficient for the semi-explicit index-1 case. In the scalar case the BDF is

(10.6)  ∑_{i=0}^{k} α_i y_{n−i} = h f(t_n, y_n)

with h being the time step. For further details of BDF-methods consult chapter 4. The order of the method has been limited to four to have a large enough stability region. The method is implemented as a predictor-corrector method with truncation error estimation and time step control. The error estimate may for multi-step methods be determined by Milne's estimate, provided the methods are of the same order:

(10.7)  est = (C_C / (C_P − C_C)) (y_n^C − y_n^P)
where $C_C$ and $C_P$ are the error constants of the corrector and the predictor, respectively, and $y^C$ and $y^P$ are the values of corrector and predictor. As BDF-methods are based on backward differences, time step changes require cumbersome interpolation. For this reason the Nordsieck formulation of the BDF-method has been chosen. The Nordsieck formulation is based on a Taylor expansion back in time of the solution to the equation system. For each differential variable in the system a Nordsieck vector is defined by

(10.8) $\quad Y_i[j] = \dfrac{h^j y_i^{(j)}}{j!} \quad$ for $\quad j = 0, \ldots, k+1$
This simplifies step size control, because each entry of the Nordsieck vector only has to be multiplied by a power of the ratio $\alpha$ between a new time step and a rejected time step:

(10.9) $\quad Y_i[j] = \dfrac{(\alpha h)^j y_i^{(j)}}{j!} \quad$ for $\quad j = 0, \ldots, k+1$
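In code, the rescaling in (10.9) amounts to multiplying the $j$-th entry by $\alpha^j$. A minimal sketch (illustrative, not DNA code; the helper names are made up):

```python
import math

def nordsieck(derivs, h):
    # Nordsieck entries Y[j] = h^j * y^(j) / j!
    return [h**j * d / math.factorial(j) for j, d in enumerate(derivs)]

def rescale(Y, alpha):
    # Change the step from h to alpha*h without any interpolation (10.9)
    return [alpha**j * Yj for j, Yj in enumerate(Y)]

# For y(t) = t^2 at t = 2 the derivatives (y, y', y'') are (4, 4, 2):
Y_old = nordsieck([4.0, 4.0, 2.0], 0.1)   # built with step h = 0.1
Y_new = rescale(Y_old, 2.0)               # rescaled to the new step 0.2
# Y_new coincides (to rounding) with nordsieck([4.0, 4.0, 2.0], 0.2)
```

This is exactly why the Nordsieck formulation was chosen: a step size change touches only the stored vector, not past solution values.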
Also Milne's estimate may be calculated in an easy way due to the Nordsieck formulation. Firstly, observe that the predicted value is given by the corrected value of the previous time step:

(10.10) $\quad Y^P_{i,n+1} = P\, Y^C_{i,n} \quad$ for each variable $i$
where $P$ is the Pascal matrix, with elements defined recursively by

$P_{r,s} = \begin{cases} P_{r-1,s} + P_{r-1,s-1} & \text{for } r > s \text{ and } r > 1 \\ 1 & \text{for } r = s \\ 0 & \text{otherwise} \end{cases}$
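The prediction (10.10) is just a Taylor shift of the Nordsieck vector one step ahead. A sketch using the binomial-coefficient convention $P[j][i] = \binom{i}{j}$ (one common indexing; the text's row/column convention may be the transpose):

```python
import math

def nordsieck(derivs, h):
    # Y[j] = h^j * y^(j) / j!
    return [h**j * d / math.factorial(j) for j, d in enumerate(derivs)]

def predict(Y):
    # One step ahead: Y_P[j] = sum_i C(i, j) * Y[i], i.e. a Taylor shift by h
    n = len(Y)
    return [sum(math.comb(i, j) * Y[i] for i in range(n)) for j in range(n)]

# For a cubic y(t) = t^3 the prediction is exact:
t, h = 1.0, 0.5
Yp = predict(nordsieck([t**3, 3*t**2, 6*t, 6.0], h))   # predicted at t + h
tn = t + h
Yn = nordsieck([tn**3, 3*tn**2, 6*tn, 6.0], h)         # exact at t + h
# Yp agrees with Yn up to rounding error
```

The prediction is exact whenever the solution is a polynomial of degree at most the vector length minus one, which is precisely the accuracy a $k$-th order predictor needs.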
A new variable $w$ is introduced to the system as $w_i = Y^C_i[2] - Y^P_i[2]$, where $Y[2]$ is the entry of the Nordsieck vector equal to $h \dot y_i$. The difference in the error estimate may then be expressed as $w_i\, l$, with $l$ being a constant vector depending on the order of the method. The size of a new time step is calculated as
(10.11)
$\quad h_{n+1} = h_n \left(\dfrac{\varepsilon_{\mathrm{BDF}}}{c\, d}\right)^{1/(k+1)}$
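A step-size controller of the kind in (10.11) is a one-liner; this sketch is a generic variant, not DNA's implementation, and the clamp limits `rmin`/`rmax` are invented for illustration:

```python
def new_step(h, d, eps_bdf, k, c=2.0, rmin=0.1, rmax=2.0):
    """Propose the next step size from the current step h, the truncation
    error norm d, the tolerance eps_bdf and the method order k, with the
    ratio clamped to [rmin, rmax] to avoid erratic step changes."""
    ratio = (eps_bdf / (c * d)) ** (1.0 / (k + 1))
    return h * min(max(ratio, rmin), rmax)

h1 = new_step(0.1, 1e-9, 1e-6, 3, c=1.0)   # tiny error: ratio clamps at rmax
h2 = new_step(0.1, 1e-3, 1e-6, 3, c=1.0)   # large error: step is reduced
```

The exponent $1/(k+1)$ reflects that the local truncation error of an order-$k$ method behaves as $O(h^{k+1})$.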
$\varepsilon_{\mathrm{BDF}}$ is the maximum allowed truncation error, $d$ is a norm of the actual truncation error estimate, and $c$ is a constant that limits the step size change in order to avoid too frequent changes of the sign of the step size ratio. The equation system may be formulated in residual form, including equations for the corrector values and the differences in the error estimate. This forms a non-linear equation system, which may be solved iteratively. In DNA a modified Newton method has been implemented.

Discontinuities should be handled in a robust way. The way to do so is to determine the point in time where the discontinuity is situated, integrate up to this point, and restart the method at the discontinuity [ESF98]. Discontinuities may be divided into time-dependent and variable-dependent ones. The first kind may be explicitly specified in the system. The other kind is more difficult to handle, because it may be defined in a complicated way by differential and/or algebraic variables. To handle such discontinuities, switch variables have to be included in the system. These variables are defined to change sign at the discontinuity. When the integration method encounters a change of sign in a switch, it should by iteration determine the exact location of the discontinuity and hereby efficiently find the correct point to restart at. For efficiency reasons, it is important that the method is restarted at a time instance where the switch has changed its sign. The procedure for obtaining the exact location of the switch is based on knowledge of the previously calculated values of the switch variable. As a basis, two points, one before and one after the switch, are calculated. From these two values of the switch variable a better approximation to the switch time may be determined. This new value may then be applied in an iteration procedure until a precise value of the sign change of the switch variable has been obtained. The approximation may be determined by iteration in the switch variable until
$z_s(t) = 0$. However, a more efficient approach is to interpolate the inverse function $D_{z_s} = z_s^{-1}(t)$. The value of this function at $z_s = 0$ will be an approximation to the switch time.

10.2.2. Example 1: Air flowing through a pipe. This first example is rather simple, but it gives an impression of the equations applied for heat transfer calculations. The model describes the behavior of hot air flowing through a pipe initially at a low temperature. Due to the temperature difference, heat is transferred from the fluid to the wall of the pipe.

Mass balance:

(10.12a) $\quad \dfrac{dM}{dt} = \dot m_i - \dot m_o$

Energy balance:

(10.12b) $\quad \dfrac{dU}{dt} + \dfrac{dU_s}{dt} = \dot m_i h_i - \dot m_o h_o$

Internal energy of the air:

(10.12c) $\quad U = M\, \dfrac{u_i + u_o}{2}$

Total mass of fluid in the pipe:

(10.12d) $\quad M = \rho V$

Heat transfer is governed by a heat transfer coefficient, $k$, and the transferring area, $A$. The main problem of the model is this equation, because the log mean temperature difference may easily end up taking logarithms of negative numbers during Newton iterations:

(10.12e) $\quad \dot m_i h_i - \dot m_o h_o - \dfrac{dU}{dt} = kA\, \dfrac{(T_i - T_s) - (T_o - T_s)}{\ln \dfrac{T_i - T_s}{T_o - T_s}}$

For air flows one could apply ideal gas laws for the fluid property calculations:

$T = f_1(p, h), \qquad \rho = f_2(p, h)$
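The log mean temperature difference in (10.12e) is undefined whenever a Newton iterate makes one of the temperature differences non-positive. Flooring the differences is one pragmatic guard; this is an illustrative choice, not necessarily what DNA does:

```python
import math

def lmtd(Ti, To, Ts, floor=1e-6):
    """Log mean temperature difference between the fluid (Ti -> To)
    and the wall at Ts. The differences are floored at a small positive
    value so the logarithm stays defined during Newton iterations."""
    d1 = max(Ti - Ts, floor)
    d2 = max(To - Ts, floor)
    if abs(d1 - d2) < 1e-12 * max(d1, d2):
        return d1  # limit of the expression as d1 -> d2
    return (d1 - d2) / math.log(d1 / d2)

dT = lmtd(400.0, 350.0, 300.0)   # about 72.1 K for this hot-air example
```

A guard like this keeps the residual evaluation finite for non-physical intermediate iterates, at the cost of a slightly perturbed Jacobian far from the solution.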
Figure 10.2 shows the progress of the mass of air in the pipe and the temperatures of fluid and pipe material. Considering DAEs and the application of numerical methods to them, figure 10.3 may be of more interest. It shows the time step determined as optimal for the fourth order BDF of DNA. The conclusion is that the time step is generally increasing, because the solution approaches a steady state. About
time step 80 the step is kept almost constant. The reason for this is that the Newton method around this point only converges to a value near, but lower than, the allowed maximum.
FIGURE 10.2. Temperature of air in pipe, $T$, air at outlet, $T_o$, pipe wall, $T_s$, and mass of air in pipe, $M$.
FIGURE 10.3. Time step size for the air flow model

10.2.3. Example 2: Steam temperature control. In a power plant boiler the temperature of the steam to the turbine has to be controlled carefully, due to material limitations. This is done by injecting water into the steam leaving the boiler
and hereby lowering the temperature. This process is named attemperation. The following example shows such a temperature control. The purpose of the example is to show the discontinuity handling of the integration routine. Therefore, the temperature of the inlet steam is set to a value outside the proportional gain of the PI-controller. This causes the controller to increase the control signal until it reaches one and then stay at that value. The model is shown in figure 10.4. In the model
FIGURE 10.4. Steam attemperation

mass and energy balances are of course applied. Furthermore, the model involves the following equations.

Attemperator error signal, the difference between the actual temperature and the set point of the temperature:

(10.13a) $\quad e = T_o - T_{SP}$

PI-control: the control signal is the sum of an integral and a proportional part; however, it is limited to be between zero and one:

(10.13b) $\quad z_c = \begin{cases} m_p + m_I & \text{for } 0 \le m_p + m_I \le 1 \\ 1 & \text{for } m_p + m_I > 1 \\ 0 & \text{for } m_p + m_I < 0 \end{cases}$

Proportional control signal:

(10.13c) $\quad m_p = K_p\, e$

Integral control signal:

(10.13d) $\quad m_I = K_I \int e\, dt$

Switch variable:

(10.13e) $\quad z_s = \begin{cases} 1 - (m_p + m_I) & \text{for } m_p + m_I \le 1 \\ (m_p + m_I) - 1 & \text{for } m_p + m_I > 1 \end{cases}$
Control valve: the opening of a valve determines the mass flow of water needed to obtain the correct steam temperature:

(10.13f) $\quad \dot m = C\, z_c \sqrt{\dfrac{p_i - p_o}{v}}$
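The limited PI controller and its switch variable can be sketched in a few lines. This is an illustrative discretization (explicit Euler on the integral part) with invented gains, not the model's actual parameters:

```python
def pi_step(e, m_I, h, Kp, KI):
    """One discrete step of a PI controller limited to [0, 1], in the
    spirit of (10.13b)-(10.13e). Returns the limited control signal z_c,
    the switch variable z_s and the updated integral state m_I."""
    m_I = m_I + KI * e * h          # integral control signal
    m_p = Kp * e                    # proportional control signal
    s = m_p + m_I                   # unlimited control signal
    z_c = min(max(s, 0.0), 1.0)     # limited to be between zero and one
    z_s = 1.0 - s                   # switch variable: crosses zero at the limit
    return z_c, z_s, m_I

# A constant error drives the controller into saturation:
m_I = 0.0
for _ in range(100):
    z_c, z_s, m_I = pi_step(0.5, m_I, 0.1, Kp=0.4, KI=0.2)
# z_c saturates at 1.0 while the unlimited sum keeps growing, so z_s < 0
```

The sign change of `z_s` is exactly what the integration routine watches for in order to locate the discontinuity described above.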
FIGURE 10.5. Temperature at outlet of attemperator

Figure 10.5 shows the temperature of the steam leaving the attemperator. The discontinuity is easily seen from this graph. In figure 10.6 the control signals are shown. It is seen that the integrating signal keeps increasing after the discontinuity, and so does the sum of the proportional and integrating control. However, the controller's output signal cannot exceed one and remains at that value. The graph also shows the switch variable value. It is seen that this variable reaches zero at the discontinuity. After the discontinuity the sign of the switch variable is changed, so it is positive due to the implementation. It then increases as the sum of the control signals. The iterative procedure to determine the location of the discontinuity is displayed in table 10.1. The first entry is the step before the switch changes sign. The second entry is the step that determines that a discontinuity is going to be passed. The three following steps show how the approximation to the switch point becomes more and more precise and eventually is of the order of the wanted precision. Figure 10.7 shows the step size chosen in DNA. It is seen that at the discontinuity the integration is restarted at order one with the initial step size. This
FIGURE 10.6. Control signal, $Z_c$, control signal value demanded by the system, $Z_a$, integrating control signal, $Z_d$, and switch variable value, $Z_s$.
FIGURE 10.7. Step size for the attemperator model

choice of step after the restart may be optimized. For instance, the optimal value found at the first time step, which is of first order, could have been selected to improve the speed of the method.
Time       Switch variable value
211.55      0.19102
264.50     -0.0176
216.9019   -2.0e-5
216.8363    5e-10
216.8363   -1e-15

TABLE 10.1. Iterating to locate the exact switching point.
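The localization strategy behind table 10.1 can be sketched as a secant iteration on the inverse function $z_s^{-1}(t)$. This is an illustrative implementation of the idea, not DNA's code, and the toy switch function is invented:

```python
def locate_switch(zs, t0, t1, tol=1e-12, maxit=50):
    """Find t with zs(t) = 0, given a bracket [t0, t1] across which the
    switch variable zs changes sign. Each step interpolates t as a linear
    function of zs through the two most recent points and evaluates it at
    zs = 0 (the secant method)."""
    f0, f1 = zs(t0), zs(t1)
    for _ in range(maxit):
        t2 = t1 - f1 * (t1 - t0) / (f1 - f0)   # inverse interpolation at zs = 0
        f2 = zs(t2)
        if abs(f2) < tol:
            return t2
        t0, f0, t1, f1 = t1, f1, t2, f2
    return t1

# Toy switch variable with a sign change at t = 216.8363:
t_star = locate_switch(lambda t: t - 216.8363, 211.55, 264.50)
```

As in the table, each iterate gains several digits, so a handful of extra residual evaluations suffice to pin down the restart point.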
Bibliography

[AFS94] C. Arevalo, C. Führer, and G. Söderlind. Stabilized Multistep Methods for Index 2 Euler-Lagrange DAEs. BIT Numerical Mathematics, December 1994.
[AP98] Uri M. Ascher and Linda R. Petzold. Computer Methods for Ordinary Differential Equations and Differential-Algebraic Equations. Society for Industrial and Applied Mathematics, 1998.
[BCP89] K.E. Brenan, S.L. Campbell, and L.R. Petzold. Numerical Solution of Initial-Value Problems in Differential-Algebraic Equations. North-Holland, 1989.
[But87] J. C. Butcher. The Numerical Analysis of Ordinary Differential Equations. John Wiley & Sons, 1987.
[DM70] James W. Daniel and Ramon E. Moore. Computation and Theory in Ordinary Differential Equations. W.H. Freeman and Company, 1970.
[Eic92] Edda Eich. Projizierende Mehrschrittverfahren zur numerischen Lösung von Bewegungsgleichungen technischer Mehrkörpersysteme mit Zwangsbedingungen und Unstetigkeiten. VDI Verlag, 1992.
[ESF98] E. Eich-Soellner and C. Führer. Numerical Methods in Multibody Dynamics. B. G. Teubner, 1998.
[HLR89] E. Hairer, C. Lubich, and M. Roche. The Numerical Solution of Differential-Algebraic Systems by Runge-Kutta Methods. Lecture Notes in Mathematics. Springer-Verlag, 1989.
[HNW93] E. Hairer, S. P. Nørsett, and G. Wanner. Solving Ordinary Differential Equations I. Nonstiff Problems. Springer-Verlag, second edition, 1993.
[HW91] E. Hairer and G. Wanner. Solving Ordinary Differential Equations II. Stiff and Differential-Algebraic Problems. Springer-Verlag, 1991.
[HW96] E. Hairer and G. Wanner. Solving Ordinary Differential Equations II. Stiff and Differential-Algebraic Problems. Springer-Verlag, second revised edition, 1996.
[KN92] K. Nipp and D. Stoffer. Research Report No. 92-14: Invariant manifolds of numerical integration schemes applied to stiff systems of singular perturbation type. Part I: RK-methods. Seminar für Angewandte Mathematik, Eidgenössische Technische Hochschule, Switzerland, December 1992.
[KNO96] Anne Kværnø, Syvert Paul Nørsett, and Brynjulf Owren. Runge-Kutta research in Trondheim. Technical report, Division of Mathematical Science, The Norwegian University of Science and Technology, 1996.
[Kv92] Anne Kværnø. More, and to be hoped, better DIRK methods for the solution of stiff ODEs. Technical report, Division of Mathematical Science, The Norwegian University of Science and Technology, 1992.
[Lam91] John Denholm Lambert. Numerical Methods for Ordinary Differential Systems. John Wiley & Sons, 1991.
[Lor95] B. Lorentzen. Power Plant Simulation. PhD thesis, Laboratory for Energetics, Technical University of Denmark, 1995.
[MKZ92] D. Matko, R. Karba, and B. Zupancic. Simulation and Modelling of Continuous Systems. Prentice Hall, 1992.
[NGT93] H.B. Nielsen and P. Grove Thomsen. Numeriske metoder for sædvanlige differentialligninger, hæfte 66. Numerisk Institut, 1993.
[PR94a] P.J. Rabier and W.C. Rheinboldt. On the Computation of Impasse Points of Quasi-Linear Differential-Algebraic Equations. Math. Comp., 62(205):133-154, January 1994.
[PR94b] P.J. Rabier and W.C. Rheinboldt. On the Computation of Impasse Points of Quasi-Linear Differential-Algebraic Equations. Jour. Math. Anal. App., (181):429-454, 1994.
[Rhe84] W.C. Rheinboldt. Differential-Algebraic Systems as Differential Equations on Manifolds. Math. Comp., 43(168):473-482, October 1984.
[RO88] R.E. O'Malley, jr. On Nonlinear Singularly Perturbed Initial Value Problems. SIAM Review, 30:193-212, 1988.
Index
Adams methods, 39
algebraic constraints, 77
algebraic loops, 83
algebraic stability, 47, 51
algebraic stability matrix, 49
AN-stable, 48
augmentation, 55, 57, 59
autonomous, 1
B-stable, 51
BDF-methods, 33, 87
β-blocking, 39
chart, 54, 56, 58
collocation, 11
component, 85
computer simulations, 76
conservation laws, 86
consistent initial conditions, 27
constant coefficient linear DAE, 2
constitutive relations, 86
control volume, 85
coordinate projection, 29
corrector, 88
dampers, 75
DASSL, 35, 38
DCBDF, 41, 42
  commutative application of, 42
degrees of freedom, 75
difference correction, 41, 42
differential algebraic equation system, 76
differentiation index, 3, 24
DIRK methods, 17
discontinuity, 89
DNA, 87
drift-off, 27, 83
Dymola, 84
Dynamic Network Analysis, 87
Dynasim, 84
elastic bodies, 75
Energy System, 85
energy systems, 85
ε-embedding, 12
ε-expansion, 18
equation of motion, 75
ERK methods, 16
ESDIRK methods, 17
Euler-Lagrange formulation, 76
FIRK methods, 17
Forward Shift Operator, 41
friction, 75
Friction forces, 76
fully implicit DAE, 2
Fully-Implicit Index-1 Systems, 38
generating polynomials, 40
gravitational forces, 77
Hamiltonian systems, 63
heat transfer calculations, 90
impasse point, 54, 59
  accessibility of, 56
index, 2
  differentiation index, 3, 24
  perturbation index, 3
index reduction, 4, 24, 83
  drift-off, 27
  implications, 27
  pendulum example, 24
Index-3 Systems of Hessenberg form, 38
invariant, 31, 63
Jacobi integral, 65
kinetic energy, 77
Lagrange multiplier, 77
Lagrange system, 41, 42
Lagrange's principle, 42, 43
LSODI, 35, 38
lumped model, 85
manifold, 54, 56
  ODE restricted to, 26
  projection to, 29
mass-less connections, 75
matlab, 66
mechanical system, 75
Milne's estimate, 88
minimal-coordinates form, 83
model of pendulum, 81
modified Newton method, 89
molecular dynamics, 63
multi step-method, 33
multibody system analysis, 77
Multibody systems, 75
multibody truck, 79
network, 87
Newton's 2nd law, 75
non-autonomous, 1
non-elastic joints, 75
Nordsieck formulation, 88
one-sided Lipschitz condition, 50
one-sided Lipschitz constant, 50
order, 88
order reduction, 10
over determined systems, 31
pendulum example, 23, 81
perturbation index, 3
physical systems, 76
positive definite, 48
Poststabilization, 64
predictor, 88
predictor-corrector methods, 39, 88
projection, 59
  by use of a chart, 56
projection methods, 23, 29
  coordinate projection, 29
  drift-off, 27
  index reduction, 24
  manifolds, restriction to, 26
  Runge-Kutta methods, 30
quadrature condition, 10
regular point
  definition of, 55
restricted three-body problem, 65
restriction on position, 76
rigid bodies, 75
root condition, 40
rotational joints, 75
Runge-Kutta methods, 9
  collocation, 11, 15
  design criteria, 18
  ε-embedding, 12, 14
  ε-expansion, 18
  index-1, 11
  index-2, 14
  internal stages, 10
  order reduction, 10
  projection, 30
  quadrature condition, 10
  simplifying (order) conditions, 10
  special methods, 16
  stage order, 10
  state space form method, 12
  stiffly accurate, 11
SDIRK methods, 17
Semi-Explicit Index-1 Systems, 38
semi-explicit index-2 problem, 14
Semi-Explicit Index-2 Systems, 38
singular perturbations, 5
singular point
  accessibility of, 56
  definition of, 55
singularity, 53
spring forces, 76
springs, 75
stabilization, 64
stage order, 10
state space form, 12
step size control, 88
stiffly accurate methods, 11
switch variables, 89
Symplectic methods, 64
symplectic system, 64
thermodynamic state relations, 86
time step control, 88
torque, 75
translational joints, 75
trapezoidal rule, 43
truck model, 79
truncation error estimation, 88
Van der Pol equation, 5
wheel suspension, 80