Solving Nonlinear ODE and PDE Problems: Hans Petter Langtangen
2016
Contents

1 Introduction of basic concepts
1.1 Linear versus nonlinear equations
1.2 A simple model problem
1.3 Linearization by explicit time discretization
1.4 Exact solution of nonlinear algebraic equations
1.5 Linearization
1.6 Picard iteration
1.7 Linearization by a geometric mean
1.8 Newton's method
1.9 Relaxation
1.10 Implementation and experiments
1.11 Generalization to a general nonlinear ODE
1.12 Systems of ODEs
6 Exercises
References
Index
ax + b = 0,
for arbitrary real constants a and b. The unknown is a number x. All other
algebraic equations, e.g., $x^2 + ax + b = 0$, are nonlinear. The typical feature in a nonlinear algebraic equation is that the unknown appears in products with itself, like $x^2$ or $e^x = 1 + x + \frac{1}{2}x^2 + \frac{1}{3!}x^3 + \cdots$.
We know how to solve a linear algebraic equation, x = −b/a, but there
are no general methods for finding the exact solutions of nonlinear algebraic
equations, except for very special cases (quadratic equations are a primary
example). A nonlinear algebraic equation may have no solution, one solution, or
many solutions. The tools for solving nonlinear algebraic equations are iterative
methods, where we construct a series of linear equations, which we know how to
solve, and hope that the solutions of the linear equations converge to the solution
of the nonlinear equation we want to solve. Typical methods for nonlinear
algebraic equations are Newton’s method, the Bisection method, and the Secant
method.
is nonlinear because of the term $-u^2$ where the unknown function is multiplied by itself. Also
$$\frac{\partial u}{\partial t} + u\frac{\partial u}{\partial x} = 0,$$
is nonlinear because of the term $uu_x$ where the unknown function appears in a product with itself or one of its derivatives. (Note here that we use different notations for derivatives: $u'$ or $du/dt$ for a function $u(t)$ of one variable, $\partial u/\partial t$ or $u_t$ for a function of more than one variable.)
Another example of a nonlinear equation is
$$u'' + \sin(u) = 0,$$
because sin(u) contains products of u if we expand the function in a Taylor
series:
$$\sin(u) = u - \frac{1}{3!}u^3 + \cdots$$
For example, the term $T(u) = (\sin^2 t)\,u'(t)$ is linear because it contains the unknown $u'$ only to the first power, with a coefficient $\sin^2 t$ that depends on $t$ alone.
Thereafter, we compare the performance of the various approaches. Despite the
simplicity of (1), the conclusions reveal typical features of the various methods
in much more complicated nonlinear PDE problems.
$$\frac{u^{n+1} - u^n}{\Delta t} = u^n(1 - u^n),$$
which is a linear algebraic equation for the unknown value un+1 that we can
easily solve:
$$u^{n+1} = u^n + \Delta t\, u^n(1 - u^n).$$
The nonlinearity in the original equation poses in this case no difficulty in the
discrete algebraic equation. Any other explicit scheme in time will also give only
linear algebraic equations to solve. For example, a typical 2nd-order Runge-Kutta
method for (1) leads to the following formulas:
$$u^* = u^n + \Delta t\, u^n(1 - u^n),$$
$$u^{n+1} = u^n + \frac{1}{2}\Delta t\big(u^n(1 - u^n) + u^*(1 - u^*)\big).$$
The first step is linear in the unknown u∗ . Then u∗ is known in the next step,
which is linear in the unknown un+1 .
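The two explicit schemes above can be sketched in code as follows (a sketch; the function names and time-loop setup are assumptions, not code taken from the text):

```python
import numpy as np

def logistic_fe(u0, dt, T):
    """Forward Euler for u' = u*(1-u): each step is an explicit,
    linear update in the new unknown."""
    N = int(round(T/dt))
    u = np.zeros(N + 1)
    u[0] = u0
    for n in range(N):
        u[n+1] = u[n] + dt*u[n]*(1 - u[n])
    return u

def logistic_rk2(u0, dt, T):
    """Heun's method (a typical 2nd-order Runge-Kutta scheme): the
    predictor u_star and the corrector are both linear in their unknowns."""
    N = int(round(T/dt))
    u = np.zeros(N + 1)
    u[0] = u0
    for n in range(N):
        u_star = u[n] + dt*u[n]*(1 - u[n])
        u[n+1] = u[n] + 0.5*dt*(u[n]*(1 - u[n]) + u_star*(1 - u_star))
    return u
```

No nonlinear equation is ever solved: the nonlinearity only appears in evaluations of the right-hand side at already computed values.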
$$\frac{u^n - u^{n-1}}{\Delta t} = u^n(1 - u^n), \quad (2)$$
results in a nonlinear algebraic equation for the unknown value un . The equation
is of quadratic type:
nonlinear algebraic equations at a given time level. The notation is inspired
by the natural notation (i.e., variable names) used in a program, especially
in more advanced partial differential equation problems. The unknown in the
algebraic equation is denoted by $u$, while $u^{(1)}$ is the value of the unknown at the previous time level (in general, $u^{(\ell)}$ is the value of the unknown $\ell$ levels back in time). The notation will be frequently used in later sections. What is meant
by u should be evident from the context: u may be 1) the exact solution of the
ODE/PDE problem, 2) the numerical approximation to the exact solution, or 3)
the unknown solution at a certain time level.
The quadratic equation for the unknown un in (2) can, with the new notation,
be written
We see that the $r_1$ root, corresponding to a minus sign in front of the square root in (4), behaves as $1/\Delta t$ and will therefore blow up as $\Delta t \rightarrow 0$! Since we know that $u$ takes on finite values (in fact $u \leq 1$), only the $r_2$ root is of relevance in this case: as $\Delta t \rightarrow 0$, $u \rightarrow u^{(1)}$, which is the expected result.
For those who are not well experienced with approximating mathematical formulas by series expansion, an alternative method of investigation is simply to compute the limits of the two roots as $\Delta t \rightarrow 0$ and see if a limit is unreasonable:
>>> print(r1.limit(dt, 0))
-oo
>>> print(r2.limit(dt, 0))
u_1
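The session above can be reproduced with sympy along these lines (a sketch; the symbol setup is an assumption, and the quadratic follows from rearranging (2) to $\Delta t\,u^2 + (1 - \Delta t)u - u^{(1)} = 0$):

```python
import sympy as sp

# dt is the time step; u_1 plays the role of u^(1), the value at the
# previous time level (names assumed to match the session above).
u, dt, u_1 = sp.symbols('u dt u_1', positive=True)

# Backward Euler for the logistic ODE gives the quadratic
# dt*u**2 + (1 - dt)*u - u_1 = 0 for the new unknown u.
roots = sp.solve(dt*u**2 + (1 - dt)*u - u_1, u)
limits = [sp.limit(r, dt, 0) for r in roots]
# One root blows up (-oo); the other tends to the previous value u_1.
```

Note that sympy does not guarantee the ordering of the two roots, so it is safest to inspect both limits rather than rely on which one is called r1.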
1.5 Linearization
When the time integration of an ODE results in a nonlinear algebraic equation,
we must normally find its solution by defining a sequence of linear equations and
hope that the solutions of these linear equations converge to the desired solution
of the nonlinear algebraic equation. Usually, this means solving the linear
equation repeatedly in an iterative fashion. Alternatively, the nonlinear equation
can sometimes be approximated by one linear equation, and consequently there
is no need for iteration.
Constructing a linear equation from a nonlinear one requires linearization
of each nonlinear term. This can be done manually as in Picard iteration, or
fully algorithmically as in Newton’s method. Examples will best illustrate how
to linearize nonlinear problems.
$$F(u) = au^2 + bu + c = 0,$$
with $a = \Delta t$, $b = 1 - \Delta t$, and $c = -u^{(1)}$. Let $u^-$ be an available approximation of the unknown $u$. Then we can linearize the term $u^2$ simply by writing $u^- u$. The resulting equation, $\hat F(u) = 0$, is now linear and hence easy to solve:
$$a u^k u^{k+1} + b u^{k+1} + c = 0 \quad\Rightarrow\quad u^{k+1} = -\frac{c}{a u^k + b}, \quad k = 0, 1, \ldots$$
Since we need to perform the iteration at every time level, the time level counter
is often also included (recall that c = −un−1 ):
$$a u^{n,k} u^{n,k+1} + b u^{n,k+1} - u^{n-1} = 0 \quad\Rightarrow\quad u^{n,k+1} = \frac{u^{n-1}}{a u^{n,k} + b}, \quad k = 0, 1, \ldots,$$
with the start value $u^{n,0} = u^{n-1}$ and the final converged value $u^n = u^{n,k}$ for sufficiently large $k$.
However, we will normally apply a mathematical notation in our final formulas that is as close as possible to what we aim to write in a computer code, and then it becomes natural to use $u$ and $u^-$ instead of $u^{k+1}$ and $u^k$ or $u^{n,k+1}$ and $u^{n,k}$.
$$|u - u^-| \leq \epsilon_u,$$
or when the residual in the equation is sufficiently small ($|F(u)| \leq \epsilon_r$).
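A minimal sketch of the Picard loop for one Backward Euler step of the logistic equation (the function name, the tolerance parameter eps_u, and the iteration cap are assumptions):

```python
def picard_logistic_step(u_prev, dt, eps_u=1e-10, max_iter=100):
    """One Backward Euler step for u' = u*(1-u): solve the quadratic
    a*u**2 + b*u + c = 0 (a=dt, b=1-dt, c=-u_prev) by Picard iteration,
    linearizing u**2 as u_minus*u."""
    a, b, c = dt, 1 - dt, -u_prev
    u_minus = u_prev                  # start value for the iteration
    u = u_minus
    for k in range(max_iter):
        u = -c/(a*u_minus + b)        # solve the linearized equation
        if abs(u - u_minus) <= eps_u: # change-in-solution criterion
            break
        u_minus = u
    return u
```

At convergence the returned value coincides with the relevant root of the quadratic equation.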
$$\frac{u^n - u^{n-1}}{\Delta t} = u^n(1 - u^{n-1}), \quad (5)$$
which is a linear algebraic equation in the unknown un , and therefore we can
easily solve for un , and there is no need for any alternative notation.
We shall later refer to the strategy of taking one Picard step, or equivalently,
linearizing terms with use of the solution at the previous time step, as the Picard1
method. It is a widely used approach in science and technology, but with some
limitations if ∆t is not sufficiently small (as will be illustrated later).
Notice.
Equation (5) does not correspond to a “pure” finite difference method where
the equation is sampled at a point and derivatives replaced by differences
(because the un−1 term on the right-hand side must then be un ). The best
interpretation of the scheme (5) is a Backward Euler difference combined
with a single (perhaps insufficient) Picard iteration at each time level, with
the value at the previous time level as start for the Picard iteration.
$$\frac{u^{n+1} - u^n}{\Delta t} = u^{n+\frac{1}{2}} - (u^{n+\frac{1}{2}})^2. \quad (6)$$
The term $u^{n+\frac{1}{2}}$ is normally approximated by an arithmetic mean,
$$u^{n+\frac{1}{2}} \approx \frac{1}{2}(u^n + u^{n+1}),$$
such that the scheme involves the unknown function only at the time levels where
we actually compute it. The same arithmetic mean applied to the nonlinear
term gives
$$(u^{n+\frac{1}{2}})^2 \approx \frac{1}{4}(u^n + u^{n+1})^2,$$
which is nonlinear in the unknown $u^{n+1}$. However, using a geometric mean for $(u^{n+\frac{1}{2}})^2$ is a way of linearizing the nonlinear term in (6):
$$(u^{n+\frac{1}{2}})^2 \approx u^n u^{n+1}.$$
Using an arithmetic mean on the linear $u^{n+\frac{1}{2}}$ term in (6) and a geometric mean for the second term results in a linearized equation for the unknown $u^{n+1}$:
$$\frac{u^{n+1} - u^n}{\Delta t} = \frac{1}{2}(u^n + u^{n+1}) - u^n u^{n+1},$$
which can readily be solved:
$$u^{n+1} = u^n\,\frac{1 + \frac{1}{2}\Delta t}{1 + \Delta t\, u^n - \frac{1}{2}\Delta t}.$$
This scheme can be coded directly, and since there is no nonlinear algebraic
equation to iterate over, we skip the simplified notation with u for un+1 and
u(1) for un . The technique with using a geometric average is an example of
transforming a nonlinear algebraic equation to a linear one, without any need
for iterations.
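Coding the closed-form update directly might look like this (a sketch; names are assumptions):

```python
import numpy as np

def logistic_cn_gm(u0, dt, T):
    """Crank-Nicolson for u' = u*(1-u) with an arithmetic mean for the
    linear term and a geometric mean u^n*u^(n+1) for the quadratic term,
    giving a linear update with no iteration at all."""
    N = int(round(T/dt))
    u = np.zeros(N + 1)
    u[0] = u0
    for n in range(N):
        u[n+1] = u[n]*(1 + 0.5*dt)/(1 + dt*u[n] - 0.5*dt)
    return u
```

Each step costs a single formula evaluation, in contrast to the iterative solves required by Picard or Newton treatments of the Backward Euler scheme.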
The geometric mean approximation is often very effective for linearizing quadratic nonlinearities. Both the arithmetic and geometric mean approximations have truncation errors of order $\Delta t^2$ and are therefore compatible with the truncation error $O(\Delta t^2)$ of the centered difference approximation for $u'$ in the Crank-Nicolson method.
Applying the operator notation for the means and finite differences, the
linearized Crank-Nicolson scheme for the logistic equation can be compactly
expressed as
$$[D_t u = \overline{u}^{\,t} - \overline{u^2}^{\,t,g}]^{n+\frac{1}{2}}.$$
Remark.
If we use an arithmetic mean instead of a geometric mean for the nonlinear term in (6), we end up with a nonlinear term $(u^{n+1})^2$. This term can be linearized as $u^- u^{n+1}$ in a Picard iteration approach, and in particular as $u^n u^{n+1}$ in a Picard1 iteration approach. The latter gives a scheme almost identical to the one arising from a geometric mean (the difference in $u^{n+1}$ being $\frac{1}{4}\Delta t\, u^n(u^{n+1} - u^n) \approx \frac{1}{4}\Delta t^2\, u' u$, i.e., a difference of $O(\Delta t^2)$).
F (u) = 0 .
Newton’s method linearizes this equation by approximating F (u) with its Taylor
series expansion around a computed value u− and keeping only the linear part:
$$F(u) = F(u^-) + F'(u^-)(u - u^-) + \frac{1}{2}F''(u^-)(u - u^-)^2 + \cdots \approx F(u^-) + F'(u^-)(u - u^-) = \hat F(u).$$
The linear equation $\hat F(u) = 0$ has the solution
$$u = u^- - \frac{F(u^-)}{F'(u^-)}.$$
Expressed with an iteration index in the unknown, Newton’s method takes on
the more familiar mathematical form
$$u^{k+1} = u^k - \frac{F(u^k)}{F'(u^k)}, \quad k = 0, 1, \ldots$$
It can be shown that the error in iteration k + 1 of Newton’s method is the
square of the error in iteration k, a result referred to as quadratic convergence.
This means that for small errors the method converges very fast, and in particular
much faster than Picard iteration and other iteration methods. (The proof of
this result is found in most textbooks on numerical analysis.) However, the
quadratic convergence appears only if uk is sufficiently close to the solution.
Further away from the solution the method can easily converge very slowly or
diverge. The reader is encouraged to do Exercise 3 to get a better understanding of the behavior of the method.
Application of Newton’s method to the logistic equation discretized by the
Backward Euler method is straightforward as we have
$$F'(u) = 2au + b.$$
The iteration method becomes
$$u = u^- - \frac{a(u^-)^2 + bu^- + c}{2au^- + b}, \quad u^- \leftarrow u. \quad (7)$$
At each time level, we start the iteration by setting u− = u(1) . Stopping criteria
as listed for the Picard iteration can be used also for Newton’s method.
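In code, the Newton iteration (7) for one Backward Euler step of the logistic equation may be sketched as (names and tolerances are assumptions):

```python
def newton_logistic_step(u_prev, dt, eps_u=1e-12, max_iter=50):
    """One Backward Euler step for u' = u*(1-u), solved by Newton's method:
    F(u) = a*u**2 + b*u + c with a=dt, b=1-dt, c=-u_prev, F'(u) = 2*a*u + b."""
    a, b, c = dt, 1 - dt, -u_prev
    u_minus = u_prev                      # start value u^- = u^(1)
    u = u_minus
    for k in range(max_iter):
        F = a*u_minus**2 + b*u_minus + c
        dF = 2*a*u_minus + b
        u = u_minus - F/dF                # the Newton update (7)
        if abs(u - u_minus) <= eps_u:
            break
        u_minus = u
    return u
```

Starting from the previous time level usually gives a good initial guess, so only a few iterations are needed per time step.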
An alternative mathematical form, where we write out a, b, and c, and use a
time level counter n and an iteration counter k, takes the form
1.9 Relaxation
One iteration in Newton’s method or Picard iteration consists of solving a linear
problem F̂ (u) = 0. Sometimes convergence problems arise because the new
solution u of F̂ (u) = 0 is “too far away” from the previously computed solution
u− . A remedy is to introduce a relaxation, meaning that we first solve F̂ (u∗ ) = 0
for a suggested value u∗ and then we take u as a weighted mean of what we had,
u− , and what our linearized equation F̂ = 0 suggests, u∗ :
u = ωu∗ + (1 − ω)u− .
The parameter ω is known as a relaxation parameter, and a choice ω < 1 may
prevent divergent iterations.
Relaxation in Newton’s method can be directly incorporated in the basic
iteration formula:
$$u = u^- - \omega\,\frac{F(u^-)}{F'(u^-)}. \quad (9)$$
        u_ = u_ - omega*F(u_)/dF(u_)
        k += 1
    u[n] = u_
    iterations.append(k)
return u, iterations
We may run experiments with the model problem (1) and the different strategies for dealing with nonlinearities as described above. For a quite coarse time resolution, $\Delta t = 0.9$, use of a tolerance $\epsilon_r = 0.1$ in the stopping criterion introduces an iteration error, especially in the Picard iterations, that is visibly much larger than the time discretization error due to a large $\Delta t$. This is illustrated by comparing the upper two plots in Figure 1. The one to the right has a stricter tolerance $\epsilon_r = 10^{-3}$, which leads to all the curves corresponding to Picard and Newton iteration lying on top of each other (and no changes can be visually observed by reducing $\epsilon_r$ further). The reason why Newton's method does much better than Picard iteration in the upper left plot is that Newton's method with one step comes far below the $\epsilon_r$ tolerance, while the Picard iteration needs on average 7 iterations to bring the residual down to $\epsilon_r = 10^{-1}$, which gives insufficient accuracy in the solution of the nonlinear equation. It is obvious that the Picard1 method gives significant errors in addition to the time discretization error unless the time step is as small as in the lower right plot.
The BE exact curve corresponds to using the exact solution of the quadratic equation at each time level, so this curve is only affected by the Backward Euler time discretization. The CN gm curve corresponds to the theoretically more accurate Crank-Nicolson discretization, combined with a geometric mean for linearization. This curve appears more accurate, especially if we take the plot in the lower right, with a small $\Delta t$ and an appropriately small $\epsilon_r$ value, as the exact curve.
When it comes to the need for iterations, Figure 2 displays the number of
iterations required at each time level for Newton’s method and Picard iteration.
The smaller ∆t is, the better starting value we have for the iteration, and the
faster the convergence is. With ∆t = 0.9 Picard iteration requires on average
32 iterations per time step, but this number is dramatically reduced as ∆t is
reduced.
However, introducing relaxation with a parameter $\omega = 0.8$ immediately reduces the average of 32 to 7, indicating that for the large $\Delta t = 0.9$, Picard iteration takes too long steps. An approximately optimal value for $\omega$ in this case is 0.5, which results in an average of only 2 iterations! An even more dramatic impact of $\omega$ appears when $\Delta t = 1$: Picard iteration does not converge in 1000 iterations, but $\omega = 0.5$ again brings the average number of iterations down to 2.
Figure 1: Impact of solution strategy and time step length on the solution. (Four panels of $u$ versus $t$, comparing the curves FE, BE exact, BE Picard, BE Picard1, BE Newton, and CN gm.)
Remark. The simple Crank-Nicolson method with a geometric mean for the quadratic nonlinearity gives visually more accurate solutions than the Backward Euler discretization. Even with a tolerance of $\epsilon_r = 10^{-3}$, all the methods for treating the nonlinearities in the Backward Euler discretization give graphs that cannot be distinguished. So for accuracy in this problem, the time discretization is much more crucial than $\epsilon_r$. Ideally, one should estimate the error in the time discretization, as the solution progresses, and set $\epsilon_r$ accordingly.
[Figure 2: number of iterations at each time level for Picard and Newton iteration. Four panels; the upper two have dt=0.9 with tolerances eps=5E-02 and eps=1E-03, and the lower two use smaller time steps.]
Explicit time discretization. Explicit ODE methods like the Forward Euler scheme, Runge-Kutta methods, and Adams-Bashforth methods all evaluate $f$ at time levels where $u$ is already computed, so nonlinearities in $f$ do not pose any difficulties.
$$u^* = \Delta t\, f(u^-, t_n) + u^{(1)}, \quad u = \omega u^* + (1 - \omega)u^-, \quad u^- \leftarrow u,$$
until a stopping criterion is fulfilled.
$$u = u^- - \omega\,\frac{F(u^-)}{F'(u^-)} = u^- - \omega\,\frac{u^- - \Delta t\, f(u^-, t_n) - u^{(1)}}{1 - \Delta t\,\frac{\partial}{\partial u}f(u^-, t_n)}. \quad (11)$$
$$\frac{u^{n+1} - u^n}{\Delta t} = \frac{1}{2}\big(f(u^{n+1}, t_{n+1}) + f(u^n, t_n)\big).$$
We can write the scheme as a nonlinear algebraic equation
$$F(u) = u - u^{(1)} - \frac{1}{2}\Delta t\, f(u, t_{n+1}) - \frac{1}{2}\Delta t\, f(u^{(1)}, t_n) = 0. \quad (12)$$
A Picard iteration scheme must in general employ the linearization
$$\hat F(u) = u - u^{(1)} - \frac{1}{2}\Delta t\, f(u^-, t_{n+1}) - \frac{1}{2}\Delta t\, f(u^{(1)}, t_n),$$
while Newton’s method can apply the general formula (11) with F (u) given in
(12) and
$$F'(u) = 1 - \frac{1}{2}\Delta t\,\frac{\partial f}{\partial u}(u, t_{n+1}).$$
$$\frac{d}{dt}u_0(t) = f_0(u_0(t), u_1(t), \ldots, u_N(t), t),$$
$$\frac{d}{dt}u_1(t) = f_1(u_0(t), u_1(t), \ldots, u_N(t), t),$$
$$\vdots$$
$$\frac{d}{dt}u_N(t) = f_N(u_0(t), u_1(t), \ldots, u_N(t), t),$$
as
of the system. For example, here is a backward difference scheme applied to
each component,
$$\frac{u_0^n - u_0^{n-1}}{\Delta t} = f_0(u^n, t_n),$$
$$\frac{u_1^n - u_1^{n-1}}{\Delta t} = f_1(u^n, t_n),$$
$$\vdots$$
$$\frac{u_N^n - u_N^{n-1}}{\Delta t} = f_N(u^n, t_n),$$
which can be written more compactly in vector form as
$$\frac{u^n - u^{n-1}}{\Delta t} = f(u^n, t_n).$$
This is a system of algebraic equations,
$$u^n - \Delta t\, f(u^n, t_n) - u^{n-1} = 0,$$
or written out
Example. We shall address the 2×2 ODE system for oscillations of a pendulum
subject to gravity and air drag. The system can be written as
$$\frac{u_0^{n+1} - u_0^n}{\Delta t} = -\sin u_1^{n+\frac{1}{2}} - \beta u_0^{n+\frac{1}{2}}|u_0^{n+\frac{1}{2}}|$$
$$\approx -\sin\Big(\frac{1}{2}(u_1^{n+1} + u_1^n)\Big) - \frac{1}{4}\beta(u_0^{n+1} + u_0^n)|u_0^{n+1} + u_0^n|, \quad (16)$$
$$\frac{u_1^{n+1} - u_1^n}{\Delta t} = u_0^{n+\frac{1}{2}} \approx \frac{1}{2}(u_0^{n+1} + u_0^n). \quad (17)$$
This is a coupled system of two nonlinear algebraic equations in two unknowns $u_0^{n+1}$ and $u_1^{n+1}$.
Using the notation $u_0$ and $u_1$ for the unknowns $u_0^{n+1}$ and $u_1^{n+1}$ in this system, writing $u_0^{(1)}$ and $u_1^{(1)}$ for the previous values $u_0^n$ and $u_1^n$, multiplying by $\Delta t$ and moving the terms to the left-hand sides, gives
$$u_0 - u_0^{(1)} + \Delta t\,\sin\Big(\frac{1}{2}(u_1 + u_1^{(1)})\Big) + \frac{1}{4}\Delta t\,\beta(u_0 + u_0^{(1)})|u_0 + u_0^{(1)}| = 0, \quad (18)$$
$$u_1 - u_1^{(1)} - \frac{1}{2}\Delta t\,(u_0 + u_0^{(1)}) = 0. \quad (19)$$
Obviously, we have a need for solving systems of nonlinear algebraic equations,
which is the topic of the next section.
F (u) = 0,
where u is a vector of unknowns u = (u0 , . . . , uN ), and F is a vector function:
F = (F0 , . . . , FN ). The system at the end of Section 1.12 fits this notation with
N = 2, F0 (u) given by the left-hand side of (18), while F1 (u) is the left-hand
side of (19).
Sometimes the equation system has a special structure because of the under-
lying problem, e.g.,
A(u)u = b(u),
with A(u) as an (N + 1) × (N + 1) matrix function of u and b as a vector function:
b = (b0 , . . . , bN ).
We shall next explain how Picard iteration and Newton’s method can be
applied to systems like F (u) = 0 and A(u)u = b(u). The exposition has a focus
on ideas and practical computations. More theoretical considerations, including
quite general results on convergence properties of these methods, can be found
in Kelley [1].
2.1 Picard iteration
We cannot apply Picard iteration to nonlinear equations unless there is some special structure. For the commonly arising case $A(u)u = b(u)$ we can linearize the product $A(u)u$ to $A(u^-)u$ and $b(u)$ as $b(u^-)$. That is, we use the most recently computed approximation in $A$ and $b$ to arrive at a linear system for $u$:
$$A(u^-)u = b(u^-).$$
A relaxed iteration takes the form
1. solve $A(u^-)u^* = b(u^-)$ with respect to $u^*$
2. $u = \omega u^* + (1 - \omega)u^-$
3. $u^- \leftarrow u$
“Until convergence” means that the iteration is stopped when the change in
the unknown, ||u − u− ||, or the residual ||A(u)u − b(u)||, is sufficiently small, see
Section 2.3 for more details.
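A generic relaxed Picard solver for $A(u)u = b(u)$ can be sketched as follows (A and b are assumed to be user-supplied callables returning a matrix and a vector; all names are assumptions):

```python
import numpy as np

def picard_system(A, b, u0, omega=1.0, eps_u=1e-10, max_iter=100):
    """Solve A(u)u = b(u) by relaxed Picard iteration: evaluate A and b
    at the most recent approximation u_minus and solve the linear system."""
    u_minus = np.asarray(u0, dtype=float)
    for k in range(max_iter):
        u_star = np.linalg.solve(A(u_minus), b(u_minus))
        u = omega*u_star + (1 - omega)*u_minus
        if np.linalg.norm(u - u_minus) <= eps_u:  # change-in-solution test
            return u, k + 1
        u_minus = u
    return u_minus, max_iter
```

Returning the iteration count makes it easy to monitor convergence, mirroring the experiments reported for the scalar model problem.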
F (u− ) + J(u− ) · (u − u− ),
where J is the Jacobian of F , defined by
$$J_{i,j} = \frac{\partial F_i}{\partial u_j}.$$
So, the original nonlinear system is approximated by
F̂ (u) = F (u− ) + J(u− ) · (u − u− ) = 0,
which is linear in $u$ and can be solved in a two-step procedure: first solve $J\delta u = -F(u^-)$ with respect to the vector $\delta u$ and then update $u = u^- + \delta u$. A relaxation parameter can easily be incorporated:
$$u = u^- + \omega\,\delta u.$$
one gets
$$J_{i,j} = A_{i,j} + \sum_k \frac{\partial A_{i,k}}{\partial u_j}u_k - \frac{\partial b_i}{\partial u_j}. \quad (20)$$
$$(A + A'u - b')\,\delta u = -Au + b,$$
or
Here we have inserted a parameter γ such that γ = 0 gives the Picard system
and γ = 1 gives the Newton system. Such a parameter can be handy in software
to easily switch between the methods.
1. solve $(A(u^-) + \gamma(A'(u^-)u^- - b'(u^-)))\,\delta u = -A(u^-)u^- + b(u^-)$ with respect to $\delta u$
2. $u = u^- + \omega\,\delta u$
3. $u^- \leftarrow u$
With the change in solution as criterion we can formulate a combined absolute
and relative measure of the change in the solution:
$$||F(u)|| \leq \epsilon_{rr}||F(u^0)|| + \epsilon_{ra} \quad\text{or}\quad ||\delta u|| \leq \epsilon_{ur}||u^0|| + \epsilon_{ua} \quad\text{or}\quad k > k_{\max}. \quad (23)$$
$$S' = -\beta SI, \quad (24)$$
$$I' = \beta SI - \nu I, \quad (25)$$
where S(t) is the number of people who can get ill (susceptibles) and I(t) is the
number of people who are ill (infected). The constants β > 0 and ν > 0 must be
given along with initial conditions S(0) and I(0).
$$\frac{S^{n+1} - S^n}{\Delta t} = -\beta[SI]^{n+\frac{1}{2}} \approx -\frac{\beta}{2}(S^nI^n + S^{n+1}I^{n+1}), \quad (26)$$
$$\frac{I^{n+1} - I^n}{\Delta t} = \beta[SI]^{n+\frac{1}{2}} - \nu I^{n+\frac{1}{2}} \approx \frac{\beta}{2}(S^nI^n + S^{n+1}I^{n+1}) - \frac{\nu}{2}(I^n + I^{n+1}). \quad (27)$$
Introducing $S$ for $S^{n+1}$, $S^{(1)}$ for $S^n$, $I$ for $I^{n+1}$, and $I^{(1)}$ for $I^n$, we can rewrite the system as
$$F_S(S, I) = S - S^{(1)} + \frac{1}{2}\Delta t\,\beta(S^{(1)}I^{(1)} + SI) = 0, \quad (28)$$
$$F_I(S, I) = I - I^{(1)} - \frac{1}{2}\Delta t\,\beta(S^{(1)}I^{(1)} + SI) + \frac{1}{2}\Delta t\,\nu(I^{(1)} + I) = 0. \quad (29)$$
A Picard iteration, using the most recently computed values $S^-$ and $I^-$ to linearize the product $SI$, gives
$$S = \frac{S^{(1)} - \frac{1}{2}\Delta t\,\beta S^{(1)}I^{(1)}}{1 + \frac{1}{2}\Delta t\,\beta I^-},$$
$$I = \frac{I^{(1)} + \frac{1}{2}\Delta t\,\beta S^{(1)}I^{(1)} - \frac{1}{2}\Delta t\,\nu I^{(1)}}{1 - \frac{1}{2}\Delta t\,\beta S^- + \frac{1}{2}\Delta t\,\nu}.$$
For Newton's method we need the Jacobian of $(F_S, F_I)$:
$$J = \begin{pmatrix} \frac{\partial F_S}{\partial S} & \frac{\partial F_S}{\partial I} \\ \frac{\partial F_I}{\partial S} & \frac{\partial F_I}{\partial I} \end{pmatrix} = \begin{pmatrix} 1 + \frac{1}{2}\Delta t\,\beta I & \frac{1}{2}\Delta t\,\beta S \\ -\frac{1}{2}\Delta t\,\beta I & 1 - \frac{1}{2}\Delta t\,\beta S + \frac{1}{2}\Delta t\,\nu \end{pmatrix}.$$
The Newton system $J(u^-)\delta u = -F(u^-)$ to be solved in each iteration is then
$$\begin{pmatrix} 1 + \frac{1}{2}\Delta t\,\beta I^- & \frac{1}{2}\Delta t\,\beta S^- \\ -\frac{1}{2}\Delta t\,\beta I^- & 1 - \frac{1}{2}\Delta t\,\beta S^- + \frac{1}{2}\Delta t\,\nu \end{pmatrix}\begin{pmatrix} \delta S \\ \delta I \end{pmatrix} = -\begin{pmatrix} S^- - S^{(1)} + \frac{1}{2}\Delta t\,\beta(S^{(1)}I^{(1)} + S^-I^-) \\ I^- - I^{(1)} - \frac{1}{2}\Delta t\,\beta(S^{(1)}I^{(1)} + S^-I^-) + \frac{1}{2}\Delta t\,\nu(I^{(1)} + I^-) \end{pmatrix}.$$
Remark. For this particular system of ODEs, explicit time integration methods work very well. Even a Forward Euler scheme is fine, but the 4th-order Runge-Kutta method is an excellent balance between high accuracy, high efficiency, and simplicity.
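To illustrate the Remark, here is a standard RK4 implementation for (24)-(25) (a sketch; the function name is an assumption, and any parameter values used in experiments would be hypothetical):

```python
import numpy as np

def sir_rk4(S0, I0, beta, nu, dt, T):
    """4th-order Runge-Kutta for S' = -beta*S*I, I' = beta*S*I - nu*I."""
    def f(u):
        S, I = u
        return np.array([-beta*S*I, beta*S*I - nu*I])
    N = int(round(T/dt))
    u = np.zeros((N + 1, 2))
    u[0] = S0, I0
    for n in range(N):
        k1 = f(u[n])
        k2 = f(u[n] + 0.5*dt*k1)
        k3 = f(u[n] + 0.5*dt*k2)
        k4 = f(u[n] + dt*k3)
        u[n+1] = u[n] + (dt/6)*(k1 + 2*k2 + 2*k3 + k4)
    return u
```

Since the nonlinearity is only evaluated, never solved for, the explicit method sidesteps the whole Picard/Newton machinery at the cost of a time step restriction.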
$$\frac{\partial u}{\partial t} = \nabla\cdot(\alpha(u)\nabla u) + f(u), \quad x \in \Omega,\ t \in (0, T], \quad (30)$$
$$-\alpha(u)\frac{\partial u}{\partial n} = g, \quad x \in \partial\Omega_N,\ t \in (0, T], \quad (31)$$
$$u = u_0, \quad x \in \partial\Omega_D,\ t \in (0, T]. \quad (32)$$
In the present section, our aim is to discretize this problem in time and then
present techniques for linearizing the time-discrete PDE problem “at the PDE
level” such that we transform the nonlinear stationary PDE problem at each
time level into a sequence of linear PDE problems, which can be solved using
any method for linear PDEs. This strategy avoids the solution of systems of
nonlinear algebraic equations. In Section 4 we shall take the opposite (and more
common) approach: discretize the nonlinear problem in time and space first,
and then solve the resulting nonlinear algebraic equations at each time level by
the methods of Section 2. Very often, the two approaches are mathematically
identical, so there is no preference from a computational efficiency point of view.
The details of the ideas sketched above will hopefully become clear through the
forthcoming examples.
$$\frac{u^{n+1} - u^n}{\Delta t} = \nabla\cdot(\alpha(u^n)\nabla u^n) + f(u^n),$$
which is a linear equation in the unknown $u^{n+1}$ with solution
$$u^{n+1} = u^n + \Delta t\,\nabla\cdot(\alpha(u^n)\nabla u^n) + \Delta t\, f(u^n).$$
$$\frac{u^n - u^{n-1}}{\Delta t} = \nabla\cdot(\alpha(u^n)\nabla u^n) + f(u^n). \quad (33)$$
This is a nonlinear PDE for the unknown function un (x). Such a PDE can be
viewed as a time-independent PDE where un−1 (x) is a known function.
We introduce a Picard iteration with k as iteration counter. A typical
linearization of the ∇ · (α(un )∇un ) term in iteration k + 1 is to use the previously
computed un,k approximation in the diffusion coefficient: α(un,k ). The nonlinear
source term is treated similarly: f (un,k ). The unknown function un,k+1 then
fulfills the linear PDE
$$\frac{u^{n,k+1} - u^{n-1}}{\Delta t} = \nabla\cdot(\alpha(u^{n,k})\nabla u^{n,k+1}) + f(u^{n,k}). \quad (34)$$
The initial guess for the Picard iteration at this time level can be taken as the
solution at the previous time level: un,0 = un−1 .
We can alternatively apply the implementation-friendly notation where u
corresponds to the unknown we want to solve for, i.e., un,k+1 above, and u−
is the most recently computed value, un,k above. Moreover, u(1) denotes the
unknown function at the previous time level, un−1 above. The PDE to be solved
in a Picard iteration then looks like
$$\frac{u - u^{(1)}}{\Delta t} = \nabla\cdot(\alpha(u^-)\nabla u) + f(u^-). \quad (35)$$
At the beginning of the iteration we start with the value from the previous time
level: u− = u(1) , and after each iteration, u− is updated to u.
Remark on notation.
The previous derivations of the numerical scheme for time discretizations of
PDEs have, strictly speaking, a somewhat sloppy notation, but it is much
used and convenient to read. A more precise notation must distinguish
clearly between the exact solution of the PDE problem, here denoted
ue (x, t), and the exact solution of the spatial problem, arising after time
discretization at each time level, where (33) is an example. The latter is
here represented as un (x) and is an approximation to ue (x, tn ). Then we
have another approximation un,k (x) to un (x) when solving the nonlinear
PDE problem for un by iteration methods, as in (34).
In our notation, $u$ is a synonym for $u^{n,k+1}$ and $u^{(1)}$ is a synonym for $u^{n-1}$, inspired by what are natural variable names in a code. We will usually
state the PDE problem in terms of u and quickly redefine the symbol u to
mean the numerical approximation, while ue is not explicitly introduced
unless we need to talk about the exact solution and the approximate
solution at the same time.
Linearization via Taylor expansions. Let $u^{n,k}$ be the most recently computed approximation to the unknown $u^n$. We seek a better approximation of the form
$$u^n = u^{n,k} + \delta u. \quad (36)$$
The idea is to insert (36) in (33), Taylor expand the nonlinearities and keep
only the terms that are linear in δu (which makes (36) an approximation for un ).
Then we can solve a linear PDE for the correction δu and use (36) to find a new
approximation
un,k+1 = un,k + δu
to un . Repeating this procedure gives a sequence un,k+1 , k = 0, 1, . . . that
hopefully converges to the goal un .
Let us carry out all the mathematical details for the nonlinear diffusion PDE
discretized by the Backward Euler method. Inserting (36) in (33) gives
$$\frac{u^{n,k} + \delta u - u^{n-1}}{\Delta t} = \nabla\cdot(\alpha(u^{n,k} + \delta u)\nabla(u^{n,k} + \delta u)) + f(u^{n,k} + \delta u). \quad (37)$$
We can Taylor expand α(un,k + δu) and f (un,k + δu):
$$\alpha(u^{n,k} + \delta u) = \alpha(u^{n,k}) + \frac{d\alpha}{du}(u^{n,k})\delta u + O(\delta u^2) \approx \alpha(u^{n,k}) + \alpha'(u^{n,k})\delta u,$$
$$f(u^{n,k} + \delta u) = f(u^{n,k}) + \frac{df}{du}(u^{n,k})\delta u + O(\delta u^2) \approx f(u^{n,k}) + f'(u^{n,k})\delta u.$$
Inserting the linear approximations of α and f in (37) results in
$$\frac{u^{n,k} + \delta u - u^{n-1}}{\Delta t} = \nabla\cdot(\alpha(u^{n,k})\nabla u^{n,k}) + f(u^{n,k})\,+$$
$$\nabla\cdot(\alpha(u^{n,k})\nabla\delta u) + \nabla\cdot(\alpha'(u^{n,k})\delta u\nabla u^{n,k})\,+$$
$$\nabla\cdot(\alpha'(u^{n,k})\delta u\nabla\delta u) + f'(u^{n,k})\delta u. \quad (38)$$
The term $\alpha'(u^{n,k})\delta u\nabla\delta u$ is of order $\delta u^2$ and therefore omitted since we expect the correction $\delta u$ to be small ($\delta u^2 \ll \delta u$). Reorganizing the equation gives a PDE for $\delta u$ that we can write in short form as
$$\delta F(\delta u; u^{n,k}) = -F(u^{n,k}),$$
where
$$F(u^{n,k}) = \frac{u^{n,k} - u^{n-1}}{\Delta t} - \nabla\cdot(\alpha(u^{n,k})\nabla u^{n,k}) - f(u^{n,k}), \quad (39)$$
$$\delta F(\delta u; u^{n,k}) = \frac{1}{\Delta t}\delta u - \nabla\cdot(\alpha(u^{n,k})\nabla\delta u) - \nabla\cdot(\alpha'(u^{n,k})\delta u\nabla u^{n,k}) - f'(u^{n,k})\delta u. \quad (40)$$
Note that δF is a linear function of δu, and F contains only terms that are
known, such that the PDE for δu is indeed linear.
Observations.
The notational form δF = −F resembles the Newton system Jδu = −F for
systems of algebraic equations, with δF as Jδu. The unknown vector in a
linear system of algebraic equations enters the system as a linear operator
in terms of a matrix-vector product (Jδu), while at the PDE level we have
a linear differential operator instead (δF ).
$$\frac{u^{n,k+1} - u^{n-1}}{\Delta t} = \nabla\cdot(\alpha(u^{n,k})\nabla u^{n,k+1}) + f(u^{n,k})$$
$$+\ \nabla\cdot(\alpha'(u^{n,k})\delta u\nabla u^{n,k}) + f'(u^{n,k})\delta u. \quad (41)$$
Note that the first line is the same PDE as arises in the Picard iteration, while the remaining terms arise from the differentiations that are an inherent ingredient in Newton's method.
$$F(u^-) = \frac{u^- - u^{(1)}}{\Delta t} - \nabla\cdot(\alpha(u^-)\nabla u^-) - f(u^-), \quad (42)$$
$$\delta F(\delta u; u^-) = \frac{1}{\Delta t}\delta u - \nabla\cdot(\alpha(u^-)\nabla\delta u) - \nabla\cdot(\alpha'(u^-)\delta u\nabla u^-) - f'(u^-)\delta u. \quad (43)$$
The form that orders the PDE as the Picard iteration terms plus the Newton
method’s derivative terms becomes
$$\frac{u - u^{(1)}}{\Delta t} = \nabla\cdot(\alpha(u^-)\nabla u) + f(u^-)\,+$$
$$\gamma\big(\nabla\cdot(\alpha'(u^-)(u - u^-)\nabla u^-) + f'(u^-)(u - u^-)\big). \quad (44)$$
Derivation with alternative notation. Some may prefer to derive the lin-
earized PDE for δu using the more compact notation. We start with inserting
un = u− + δu to get
$$\frac{u^- + \delta u - u^{n-1}}{\Delta t} = \nabla\cdot(\alpha(u^- + \delta u)\nabla(u^- + \delta u)) + f(u^- + \delta u).$$
Taylor expanding, $\alpha(u^- + \delta u) \approx \alpha(u^-) + \alpha'(u^-)\delta u$ and $f(u^- + \delta u) \approx f(u^-) + f'(u^-)\delta u$, and inserting these expressions gives a less cluttered PDE for $\delta u$:
$$\frac{u^- + \delta u - u^{n-1}}{\Delta t} = \nabla\cdot(\alpha(u^-)\nabla u^-) + f(u^-)\,+$$
$$\nabla\cdot(\alpha(u^-)\nabla\delta u) + \nabla\cdot(\alpha'(u^-)\delta u\nabla u^-)\,+$$
$$\nabla\cdot(\alpha'(u^-)\delta u\nabla\delta u) + f'(u^-)\delta u.$$
$$[f(u)]^{n+\frac{1}{2}} \approx f(\tfrac{1}{2}(u^n + u^{n+1})) = [f(\overline{u}^t)]^{n+\frac{1}{2}}, \quad (45)$$
$$[f(u)]^{n+\frac{1}{2}} \approx \tfrac{1}{2}(f(u^n) + f(u^{n+1})) = [\overline{f(u)}^t]^{n+\frac{1}{2}}, \quad (46)$$
$$[\alpha(u)\nabla u]^{n+\frac{1}{2}} \approx \alpha(\tfrac{1}{2}(u^n + u^{n+1}))\nabla(\tfrac{1}{2}(u^n + u^{n+1})) = [\alpha(\overline{u}^t)\nabla\overline{u}^t]^{n+\frac{1}{2}}, \quad (47)$$
$$[\alpha(u)\nabla u]^{n+\frac{1}{2}} \approx \tfrac{1}{2}(\alpha(u^n) + \alpha(u^{n+1}))\nabla(\tfrac{1}{2}(u^n + u^{n+1})) = [\overline{\alpha(u)}^t\nabla\overline{u}^t]^{n+\frac{1}{2}}, \quad (48)$$
$$[\alpha(u)\nabla u]^{n+\frac{1}{2}} \approx \tfrac{1}{2}(\alpha(u^n)\nabla u^n + \alpha(u^{n+1})\nabla u^{n+1}) = [\overline{\alpha(u)\nabla u}^t]^{n+\frac{1}{2}}. \quad (49)$$
A big question is whether there are significant differences in accuracy between
taking the products of arithmetic means or taking the arithmetic mean of
products. Exercise 6 investigates this question, and the answer is that the
approximation is O(∆t2 ) in both cases.
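The $O(\Delta t^2)$ claim can be checked symbolically for a simple case; the sketch below compares the square of the arithmetic mean with the geometric-mean linearization (the use of sympy here is an assumption of this illustration):

```python
import sympy as sp

t, dt = sp.symbols('t dt')
u = sp.Function('u')
up = u(t).diff(t)
upp = u(t).diff(t, 2)

# u^n and u^(n+1), with u^(n+1) Taylor expanded around t_n
un = u(t)
unp1 = u(t) + dt*up + dt**2*upp/2

arith_sq = ((un + unp1)/2)**2    # square of the arithmetic mean
geom = un*unp1                   # geometric-mean linearization u^n u^(n+1)
difference = sp.expand(arith_sq - geom)
# difference has no dt^0 or dt^1 terms: the two choices agree to O(dt^2),
# with leading difference (dt^2/4)*(u')^2.
```

The same expansion technique applies to the mean-of-products versus product-of-means question for the diffusion term.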
4 Discretization of 1D stationary nonlinear differential equations
Section 3 presented methods for linearizing time-discrete PDEs directly prior
to discretization in space. We can alternatively carry out the discretization in
space of the time-discrete nonlinear PDE problem and get a system of nonlinear
algebraic equations, which can be solved by Picard iteration or Newton’s method
as presented in Section 2. This latter approach will now be described in detail.
We shall work with the 1D problem
$$-(\alpha(u)u')' + au = f(u), \quad x \in (0, L), \quad \alpha(u(0))u'(0) = C, \quad u(L) = D. \quad (50)$$
The problem (50) arises from the stationary limit of a diffusion equation,
$$\frac{\partial u}{\partial t} = \frac{\partial}{\partial x}\Big(\alpha(u)\frac{\partial u}{\partial x}\Big) - au + f(u), \quad (51)$$
as t → ∞ and ∂u/∂t → 0. Alternatively, the problem (50) arises at each time
level from implicit time discretization of (51). For example, a Backward Euler
scheme for (51) leads to
$$\frac{u^n - u^{n-1}}{\Delta t} = \frac{d}{dx}\Big(\alpha(u^n)\frac{du^n}{dx}\Big) - au^n + f(u^n). \quad (52)$$
Introducing $u(x)$ for $u^n(x)$, $u^{(1)}$ for $u^{n-1}$, and defining $f(u)$ in (50) to be $f(u)$ in (52) plus $u^{n-1}/\Delta t$, gives (50) with $a$ replaced by $a + 1/\Delta t$.
[−Dx αDx u + au = f ]i .
Writing this out for a uniform mesh with points xi = i∆x, i = 0, . . . , Nx , leads
to
$$-\frac{1}{\Delta x^2}\Big(\alpha_{i+\frac{1}{2}}(u_{i+1} - u_i) - \alpha_{i-\frac{1}{2}}(u_i - u_{i-1})\Big) + au_i = f(u_i). \quad (53)$$
This equation is valid at all the mesh points $i = 0, 1, \ldots, N_x - 1$. At $i = N_x$ we have the Dirichlet condition $u_i = D$. The only difference from the case with $(\alpha(x)u')'$ and $f(x)$ is that now $\alpha$ and $f$ are functions of $u$ and not only of $x$: $(\alpha(u(x))u')'$ and $f(u(x))$.
The quantity $\alpha_{i+\frac{1}{2}}$, evaluated between two mesh points, needs a comment. Since $\alpha$ depends on $u$ and $u$ is only known at the mesh points, we need to express $\alpha_{i+\frac{1}{2}}$ in terms of $u_i$ and $u_{i+1}$. For this purpose we use an arithmetic mean, although a harmonic mean is also common in this context if $\alpha$ features large jumps. There are two choices of arithmetic means:
$$\alpha_{i+\frac{1}{2}} \approx \alpha(\tfrac{1}{2}(u_i + u_{i+1})) = [\alpha(\overline{u}^x)]^{i+\frac{1}{2}}, \quad (54)$$
$$\alpha_{i+\frac{1}{2}} \approx \tfrac{1}{2}(\alpha(u_i) + \alpha(u_{i+1})) = [\overline{\alpha(u)}^x]^{i+\frac{1}{2}}. \quad (55)$$
Inserting (55) in (53) gives
$$-\frac{1}{2\Delta x^2}\Big((\alpha(u_i) + \alpha(u_{i+1}))(u_{i+1} - u_i) - (\alpha(u_{i-1}) + \alpha(u_i))(u_i - u_{i-1})\Big) + au_i = f(u_i), \quad (56)$$
$$[-D_x\,\overline{\alpha}^x D_x u + au = f]_i.$$
At mesh point $i = 0$ we have the boundary condition $\alpha(u)u' = C$, which is discretized by
$$[\alpha(u)D_{2x}u = C]_0,$$
meaning
$$\alpha(u_0)\,\frac{u_1 - u_{-1}}{2\Delta x} = C. \quad (57)$$
The fictitious value u−1 can be eliminated with the aid of (56) for i = 0. Formally,
(56) should be solved with respect to ui−1 and that value (for i = 0) should be
inserted in (57), but it is algebraically much easier to do it the other way around.
Alternatively, one can use a ghost cell [−∆x, 0] and update the u−1 value in
the ghost cell according to (57) after every Picard or Newton iteration. Such
an approach means that we use a known u−1 value in (56) from the previous
iteration.
The nonlinear algebraic system can be written on the form A(u)u = b(u) with

A_{i,i} = (1/(2∆x²))(α(u_{i−1}) + 2α(u_i) + α(u_{i+1})) + a,
A_{i,i−1} = −(1/(2∆x²))(α(u_{i−1}) + α(u_i)),
A_{i,i+1} = −(1/(2∆x²))(α(u_i) + α(u_{i+1})),
b_i = f(u_i) .
The matrix A(u) is tridiagonal: Ai,j = 0 for j > i + 1 and j < i − 1.
The above expressions are valid for internal mesh points 1 ≤ i ≤ Nx − 1. For
i = 0 we need to express ui−1 = u−1 in terms of u1 using (57):
u_{−1} = u_1 − (2∆x/α(u_0)) C .  (58)
This value must be inserted in A0,0 . The expression for Ai,i+1 applies for i = 0,
and Ai,i−1 does not enter the system when i = 0.
Regarding the last equation, its form depends on whether we include the
Dirichlet condition u(L) = D, meaning uNx = D, in the nonlinear algebraic
equation system or not. Suppose we choose (u0 , u1 , . . . , uNx −1 ) as unknowns,
later referred to as systems without Dirichlet conditions. The last equation corre-
sponds to i = Nx −1. It involves the boundary value uNx , which is substituted by
D. If the unknown vector includes the boundary value, (u0 , u1 , . . . , uNx ), later
referred to as system including Dirichlet conditions, the equation for i = Nx − 1
just involves the unknown uNx , and the final equation becomes uNx = D,
corresponding to Ai,i = 1 and bi = D for i = Nx .
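The assembly of A(u⁻) and b(u⁻) and the Picard loop can be sketched in a short program. This is a sketch under our own naming conventions, not code from the text; the arithmetic mean (55) is used for α between nodes, the flux condition is handled by eliminating the fictitious value u_{−1}, and the Dirichlet condition is included in the system:

```python
import numpy as np

def picard_solve(alpha, f, a, C, D, L, Nx, tol=1e-6, max_iter=50):
    """Solve -(alpha(u)u')' + a*u = f(u) on [0, L] with
    alpha(u)*u'(0) = C and u(L) = D by Picard iteration."""
    x = np.linspace(0, L, Nx + 1)
    dx = x[1] - x[0]
    u = np.full(Nx + 1, float(D))            # initial guess
    for _ in range(max_iter):
        u_ = u.copy()                        # most recently computed u^-
        al = alpha(u_)                       # alpha(u^-) at the nodes
        A = np.zeros((Nx + 1, Nx + 1))
        b = np.zeros(Nx + 1)
        for i in range(1, Nx):               # interior equations
            A[i, i-1] = -0.5*(al[i-1] + al[i])/dx**2
            A[i, i+1] = -0.5*(al[i] + al[i+1])/dx**2
            A[i, i] = 0.5*(al[i-1] + 2*al[i] + al[i+1])/dx**2 + a
            b[i] = f(u_[i])
        # i = 0: eliminate the fictitious value u_{-1} = u_1 - 2*dx*C/alpha(u_0)
        alm1 = alpha(u_[1] - 2*dx*C/al[0])   # alpha(u^-_{-1})
        A[0, 0] = 0.5*(alm1 + 2*al[0] + al[1])/dx**2 + a
        A[0, 1] = -0.5*(alm1 + 2*al[0] + al[1])/dx**2
        b[0] = f(u_[0]) - (alm1 + al[0])*C/(al[0]*dx)
        A[Nx, Nx] = 1.0                      # Dirichlet condition u(L) = D
        b[Nx] = D
        u = np.linalg.solve(A, b)
        if np.abs(u - u_).max() < tol:
            break
    return x, u
```

With constant α and a = 0 the problem is linear, and the solver reproduces the exact linear solution u = D + C(x − L) regardless of the mesh, which is a convenient verification case.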
Mesh with two cells. It aids the understanding of the details to write out all the mathematics in a specific case with a small mesh, say just two cells (N_x = 2). We use u⁻_i for the i-th component in u⁻. The starting point is the basic expressions for the nonlinear equations at mesh points i = 0 and i = 1:

A_{0,−1}u_{−1} + A_{0,0}u_0 + A_{0,1}u_1 = b_0,  (59)
A_{1,0}u_0 + A_{1,1}u_1 + A_{1,2}u_2 = b_1.  (60)
Equation (59) written out reads

(1/(2∆x²))( −(α(u_{−1}) + α(u_0))u_{−1} + (α(u_{−1}) + 2α(u_0) + α(u_1))u_0 − (α(u_0) + α(u_1))u_1 ) + au_0 = f(u_0) .

Eliminating u_{−1} by (58) and using the most recently computed values u⁻ in α and f (Picard iteration) gives

(1/(2∆x²))( −(α(u⁻_{−1}) + 2α(u⁻_0) + α(u⁻_1))u_1 + (α(u⁻_{−1}) + 2α(u⁻_0) + α(u⁻_1))u_0 ) + au_0
= f(u⁻_0) − (1/(α(u⁻_0)∆x))(α(u⁻_{−1}) + α(u⁻_0))C,

where

u⁻_{−1} = u⁻_1 − (2∆x/α(u⁻_0)) C .
Equation (60) contains the unknown u_2, for which we have a Dirichlet condition. In case we omit the condition as a separate equation, (60) with Picard iteration becomes

(1/(2∆x²))( −(α(u⁻_0) + α(u⁻_1))u_0 + (α(u⁻_0) + 2α(u⁻_1) + α(u⁻_2))u_1 − (α(u⁻_1) + α(u⁻_2))u_2 ) + au_1 = f(u⁻_1) .

We must now move the u_2 term to the right-hand side and replace all occurrences of u_2 by D:

(1/(2∆x²))( −(α(u⁻_0) + α(u⁻_1))u_0 + (α(u⁻_0) + 2α(u⁻_1) + α(D))u_1 ) + au_1
= f(u⁻_1) + (1/(2∆x²))(α(u⁻_1) + α(D))D .
The two equations can be written as a 2 × 2 system:

| B_{0,0}  B_{0,1} | | u_0 |   | d_0 |
| B_{1,0}  B_{1,1} | | u_1 | = | d_1 | ,
where
B_{0,0} = (1/(2∆x²))(α(u⁻_{−1}) + 2α(u⁻_0) + α(u⁻_1)) + a,  (61)
B_{0,1} = −(1/(2∆x²))(α(u⁻_{−1}) + 2α(u⁻_0) + α(u⁻_1)),  (62)
B_{1,0} = −(1/(2∆x²))(α(u⁻_0) + α(u⁻_1)),  (63)
B_{1,1} = (1/(2∆x²))(α(u⁻_0) + 2α(u⁻_1) + α(D)) + a,  (64)
d_0 = f(u⁻_0) − (1/(α(u⁻_0)∆x))(α(u⁻_{−1}) + α(u⁻_0))C,  (65)
d_1 = f(u⁻_1) + (1/(2∆x²))(α(u⁻_1) + α(D))D .  (66)
The system with the Dirichlet condition becomes

| B_{0,0}  B_{0,1}  0       | | u_0 |   | d_0 |
| B_{1,0}  B_{1,1}  B_{1,2} | | u_1 | = | d_1 | ,
| 0        0        1       | | u_2 |   | D   |

with

B_{1,1} = (1/(2∆x²))(α(u⁻_0) + 2α(u⁻_1) + α(u⁻_2)) + a,  (67)
B_{1,2} = −(1/(2∆x²))(α(u⁻_1) + α(u⁻_2)),  (68)
d_1 = f(u⁻_1) .  (69)
In Newton’s method we need the nonlinear algebraic equations on the form F_i = 0, where

F_i = A_{i,i−1}(u_{i−1}, u_i)u_{i−1} + A_{i,i}(u_{i−1}, u_i, u_{i+1})u_i + A_{i,i+1}(u_i, u_{i+1})u_{i+1} − b_i(u_i) .
∂/∂u_i ( A_{i,i}(u_{i−1}, u_i, u_{i+1})u_i ) = (∂A_{i,i}/∂u_i) u_i + A_{i,i}
= ∂/∂u_i ( (1/(2∆x²))(α(u_{i−1}) + 2α(u_i) + α(u_{i+1})) + a ) u_i
  + (1/(2∆x²))(α(u_{i−1}) + 2α(u_i) + α(u_{i+1})) + a
= (1/(2∆x²))( 2α′(u_i)u_i + α(u_{i−1}) + 2α(u_i) + α(u_{i+1}) ) + a .
Collecting all the terms, the nonlinear equation at an interior mesh point is

F_i = −(1/(2∆x²))( (α(u_i) + α(u_{i+1}))(u_{i+1} − u_i) − (α(u_{i−1}) + α(u_i))(u_i − u_{i−1}) ) + au_i − f(u_i) = 0,  (70)

and the complete Jacobian becomes

J_{i,i} = ∂F_i/∂u_i = (1/(2∆x²))( α′(u_i)(2u_i − u_{i−1} − u_{i+1}) + α(u_{i−1}) + 2α(u_i) + α(u_{i+1}) ) + a − f′(u_i),
J_{i,i−1} = ∂F_i/∂u_{i−1} = (1/(2∆x²))( α′(u_{i−1})(u_i − u_{i−1}) − (α(u_{i−1}) + α(u_i)) ),
J_{i,i+1} = ∂F_i/∂u_{i+1} = −(1/(2∆x²))( α′(u_{i+1})(u_{i+1} − u_i) + (α(u_i) + α(u_{i+1})) ) .
At the boundary point i = 0, u−1 must be replaced using the formula (58).
When the Dirichlet condition at i = N_x is not a part of the equation system, the last equation F_m = 0 for m = N_x − 1 involves the quantity u_{N_x}, which must be replaced by D. If u_{N_x} is treated as an unknown in the system, the last equation F_m = 0 has m = N_x and reads

F_{N_x}(u_0, . . . , u_{N_x}) = u_{N_x} − D = 0 .

The corresponding Jacobian entries are J_{N_x,N_x} = 1 and zero for the other entries in row N_x, provided we assign the right Dirichlet value at the beginning of the iteration (u_{N_x} = D); the Newton update is then zero for i = N_x, i.e., δu_{N_x} = 0. This also forces the right-hand side to be b_i = 0 for i = N_x.
We have seen, and can see from the present example, that the linear system
in Newton’s method contains all the terms present in the system that arises
in the Picard iteration method. The extra terms in Newton’s method can be
multiplied by a factor such that it is easy to program one linear system and set
this factor to 0 or 1 to generate the Picard or Newton system.
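For the scalar Backward Euler equation F(u) = u − ∆t a(u)u − u⁽¹⁾ = 0 from the logistic-type problems earlier, this factor trick can be sketched as follows. The function and variable names are ours; γ = 1 gives Newton’s method, while γ = 0 drops the extra Jacobian terms and the update collapses exactly to Picard iteration:

```python
def combined_iteration(a, da, u1, dt, u0, gamma, tol=1e-12, max_iter=100):
    """Solve F(u) = u - dt*a(u)*u - u1 = 0 for the new time value u.
    gamma = 1: Newton's method.  gamma = 0: the a'(u) term is dropped
    and the update collapses to Picard iteration u = u1/(1 - dt*a(u))."""
    u = u0
    for _ in range(max_iter):
        F = u - dt*a(u)*u - u1
        if abs(F) < tol:
            break
        J = 1 - dt*(a(u) + gamma*da(u)*u)   # gamma switches the method
        u = u - F/J
    return u
```

A short calculation shows why γ = 0 is Picard iteration: with J = 1 − ∆t a(u), the update u − F/J simplifies algebraically to u⁽¹⁾/(1 − ∆t a(u)).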
∫₀ᴸ α(u)u′v′ dx + ∫₀ᴸ auv dx = ∫₀ᴸ f(u)v dx + [α(u)u′v]₀ᴸ , ∀v ∈ V .

The boundary term is

[α(u)u′v]₀ᴸ = α(u(L))u′(L)v(L) − α(u(0))u′(0)v(0) = 0 − Cv(0) = −Cv(0) .
(Recall that since ψi (L) = 0, any linear combination v of the basis functions also
vanishes at x = L: v(L) = 0.) The variational problem is then: find u ∈ V such
that
∫₀ᴸ α(u)u′v′ dx + ∫₀ᴸ auv dx = ∫₀ᴸ f(u)v dx − Cv(0), ∀v ∈ V .  (71)
∑_{j∈I_s} ( ∫₀ᴸ ( α(D + ∑_{k∈I_s} c_kψ_k) ψ′_jψ′_i + aψ_iψ_j ) dx ) c_j
= ∫₀ᴸ f(D + ∑_{k∈I_s} c_kψ_k) ψ_i dx − Cψ_i(0), i ∈ I_s .  (72)
Fundamental integration problem. Methods that use the Galerkin or weighted residual method face a fundamental difficulty in nonlinear problems: how can we integrate terms like ∫₀ᴸ α(∑_k c_kψ_k)ψ′_iψ′_j dx and ∫₀ᴸ f(∑_k c_kψ_k)ψ_i dx when we do not know the c_k coefficients in the argument of the α and f functions? We can resort to numerical integration, provided an approximate ∑_k c_kψ_k can be used for the argument u in f and α. This is the approach used in computer programs.
However, if we want to look more mathematically into the structure of the
algebraic equations generated by the finite element method in nonlinear problems,
and compare such equations with those arising in the finite difference method, we
need techniques that enable integration of expressions like ∫₀ᴸ f(∑_k c_kψ_k)ψ_i dx
by hand. Two such techniques will be shown: the group finite element and
numerical integration based on the nodes only. Both techniques are approximate,
but they allow us to see the difference equations in the finite element method.
The details are worked out in Appendix A. Some readers will prefer to dive into
these symbolic calculations to gain more understanding of nonlinear finite element
equations, while others will prefer to continue with computational algorithms
(in the next two sections) rather than analysis.
This is a linear problem a(u, v) = L(v) with bilinear and linear forms

a(u, v) = ∫₀ᴸ (α(u⁻)u′v′ + auv) dx,  L(v) = ∫₀ᴸ f(u⁻)v dx − Cv(0) .

Make sure to distinguish the coefficient a in auv, which stems from the differential equation, from the a in the abstract bilinear form notation a(·, ·).
The linear system associated with (73) is computed in the standard way. Technically, we are back to solving −(α(x)u′)′ + au = f(x). The unknown u is sought in the form u = B(x) + ∑_{j∈I_s} c_jψ_j , with B(x) = D and ψ_i = ϕ_{ν(i)}, ν(i) = i, and I_s = {0, 1, . . . , N = N_n − 2}.
4.5 Newton’s method defined from the variational form
Application of Newton’s method to the nonlinear variational form (71) arising
from the problem (50) requires identification of the nonlinear algebraic equations
Fi = 0. Although we originally denoted the unknowns in nonlinear algebraic
equations by u0 , . . . , uN , it is in the present context most natural to have the
unknowns as c0 , . . . , cN and write
Fi (c0 , . . . , cN ) = 0, i ∈ Is ,
and define the Jacobian as Ji,j = ∂Fi /∂cj for i, j ∈ Is .
The specific form of the equations Fi = 0 follows from the variational form
∫₀ᴸ (α(u)u′v′ + auv) dx = ∫₀ᴸ f(u)v dx − Cv(0), ∀v ∈ V,

by choosing v = ψ_i, i ∈ I_s, and setting u = ∑_{j∈I_s} c_jψ_j , maybe with a boundary function to incorporate Dirichlet conditions.
With v = ψ_i we get

F_i = ∫₀ᴸ (α(u)u′ψ′_i + auψ_i − f(u)ψ_i) dx + Cψ_i(0) = 0, i ∈ I_s .  (74)
In the differentiations leading to the Jacobian we will frequently use the results

∂u/∂c_j = ∂/∂c_j ∑_k c_kψ_k = ψ_j ,  ∂u′/∂c_j = ∂/∂c_j ∑_k c_kψ′_k = ψ′_j .
J_{i,j} = ∂F_i/∂c_j = ∫₀ᴸ ∂/∂c_j (α(u)u′ψ′_i + auψ_i − f(u)ψ_i) dx
= ∫₀ᴸ ( (α′(u)(∂u/∂c_j)u′ + α(u)(∂u′/∂c_j))ψ′_i + a(∂u/∂c_j)ψ_i − f′(u)(∂u/∂c_j)ψ_i ) dx
= ∫₀ᴸ ( (α′(u)ψ_ju′ + α(u)ψ′_j)ψ′_i + aψ_jψ_i − f′(u)ψ_jψ_i ) dx
= ∫₀ᴸ ( α′(u)u′ψ′_iψ_j + α(u)ψ′_iψ′_j + (a − f′(u))ψ_iψ_j ) dx  (75)
One must be careful about the meaning of the prime symbol: in α′ the differentiation is with respect to u,

α′ = dα/du,

while in u′ the differentiation is with respect to x, so

u′ = du/dx .

Similarly, f is a function of u, so f′ means df/du.
When calculating the right-hand side vector Fi and the coefficient matrix
Ji,j in the linear system to be solved in each Newton iteration, one must use a
previously computed u, denoted by u− , for the symbol u in (74) and (75). With
this notation we have
F_i = ∫₀ᴸ ( α(u⁻)u⁻′ψ′_i + (au⁻ − f(u⁻))ψ_i ) dx + Cψ_i(0), i ∈ I_s,  (76)

J_{i,j} = ∫₀ᴸ ( α′(u⁻)u⁻′ψ′_iψ_j + α(u⁻)ψ′_iψ′_j + (a − f′(u⁻))ψ_iψ_j ) dx, i, j ∈ I_s .  (77)
These expressions can be used for any basis {ψ_i}_{i∈I_s}. Choosing finite element functions for ψ_i, one will normally want to compute the integral contribution cell by cell, working in a reference cell. To this end, we restrict the integration to one cell and transform the cell to [−1, 1]. The most recently computed approximation u⁻ to u becomes ũ⁻ = ∑_t ũ⁻_t ϕ̃_t(X) over the reference element, where ũ⁻_t is the value of u⁻ at global node (or degree of freedom) q(e, t) corresponding to local node (or degree of freedom) t. The formulas (76) and (77) then change to
F̃^{(e)}_r = ∫₋₁¹ ( α(ũ⁻)ũ⁻′ϕ̃′_r + (aũ⁻ − f(ũ⁻))ϕ̃_r ) det J dX + Cϕ̃_r(0),  (78)

J̃^{(e)}_{r,s} = ∫₋₁¹ ( α′(ũ⁻)ũ⁻′ϕ̃′_rϕ̃_s + α(ũ⁻)ϕ̃′_rϕ̃′_s + (a − f′(ũ⁻))ϕ̃_rϕ̃_s ) det J dX,  (79)
δu to the solution. Dirichlet conditions are implemented by inserting them in
the initial guess u− for the Newton iteration and implementing δui = 0 for all
known degrees of freedom. The manipulation of the linear system follows exactly
the algorithm in the linear problems, the only difference being that the known
values are zero.
can in this way be clearly distinguished. However, we prefer to use u for the quantity that is most naturally called u in the code, and that is the most recent approximation to the solution, e.g., named u_h^{n,k} above. This is also the key quantity of interest in mathematical derivations of algorithms as well. Choosing u this way makes the most important mathematical expressions cleaner than the more cluttered notation u_h^{n,k}. We therefore introduce other symbols for other versions of the unknown function. It is most appropriate for us to say that u_e(x, t) is the exact solution, u^n in the equation above is the approximation to u_e(x, t_n) after time discretization, and u is the spatial approximation to u^n from the most recent iteration in a nonlinear iteration method.
∫_Ω ( uv + ∆t α(u)∇u · ∇v − ∆t f(u)v − u^{(1)}v ) dx = 0, ∀v ∈ V .  (81)
F_i ≈ F̂_i = ∫_Ω ( uψ_i + ∆t α(u⁻)∇u · ∇ψ_i − ∆t f(u⁻)ψ_i − u^{(1)}ψ_i ) dx .  (83)
The equations F̂_i = 0 are now linear and we can easily derive a linear system ∑_{j∈I_s} A_{i,j}c_j = b_i, i ∈ I_s, for the unknown coefficients {c_i}_{i∈I_s} by inserting u = ∑_j c_jψ_j. We get

A_{i,j} = ∫_Ω ( ϕ_jψ_i + ∆t α(u⁻)∇ϕ_j · ∇ψ_i ) dx,  b_i = ∫_Ω ( ∆t f(u⁻)ψ_i + u^{(1)}ψ_i ) dx .
In Newton’s method we need to evaluate F_i with the known approximation u⁻:

F_i ≈ F̂_i = ∫_Ω ( u⁻ψ_i + ∆t α(u⁻)∇u⁻ · ∇ψ_i − ∆t f(u⁻)ψ_i − u^{(1)}ψ_i ) dx .  (84)
∂u/∂c_j = ∑_k ∂/∂c_j (c_kψ_k) = ψ_j ,  (85)
∂(∇u)/∂c_j = ∑_k ∂/∂c_j (c_k∇ψ_k) = ∇ψ_j .  (86)
The Jacobian becomes

J_{i,j} = ∂F_i/∂c_j = ∫_Ω ( ψ_jψ_i + ∆t α′(u)ψ_j∇u · ∇ψ_i + ∆t α(u)∇ψ_j · ∇ψ_i − ∆t f′(u)ψ_jψ_i ) dx .  (87)

The evaluation of J_{i,j} as the coefficient matrix in the linear system in Newton’s method applies the known approximation u⁻ for u:

J_{i,j} = ∫_Ω ( ψ_jψ_i + ∆t α′(u⁻)ψ_j∇u⁻ · ∇ψ_i + ∆t α(u⁻)∇ψ_j · ∇ψ_i − ∆t f′(u⁻)ψ_jψ_i ) dx .  (88)
Hopefully, this example also shows how convenient the notation with u and u−
is: the unknown to be computed is always u and linearization by inserting known
(previously computed) values is a matter of adding an underscore. One can take
great advantage of this quick notation in software [2].
The nonlinearity in the α(u) coefficient in the condition (89) therefore does not contribute a nonlinearity to the variational form.
Robin conditions. Heat conduction problems often apply a kind of Newton’s cooling law, also known as a Robin condition, at the boundary:

−α(u) ∂u/∂n = h(u)(u − T_s(t)), x ∈ ∂Ω_R,  (91)

where h(u) is a heat transfer coefficient between the body (Ω) and its surroundings, T_s is the temperature of the surroundings, and ∂Ω_R is a part of the boundary where this Robin condition applies. The boundary integral (90) now becomes

∫_{∂Ω_R} h(u)(u − T_s(t))v ds .
In many physical applications, h(u) can be taken as constant, and then the
boundary term is linear in u, otherwise it is nonlinear and contributes to the
Jacobian in a Newton method. Linearization in a Picard method will typically
use a known value in h, but keep the u in u − Ts as unknown: h(u− )(u − Ts (t)).
Exercise 15 asks you to carry out the details.
u_t = ∇ · (α(u)∇u) + f(u),

can be discretized by (e.g.) a Backward Euler scheme, which in 2D can be written

[D_t⁻ u = D_x \overline{α(u)}^x D_x u + D_y \overline{α(u)}^y D_y u + f(u)]^n_{i,j} .
We do not dive into the details of handling boundary conditions now. Dirichlet and Neumann conditions are handled as in a corresponding linear, variable-coefficient diffusion problem.
Writing the scheme out, putting the unknown values on the left-hand side
and known values on the right-hand side, and introducing ∆x = ∆y = h to save
some writing, one gets
u^n_{i,j} − (∆t/h²)( ½(α(u^n_{i,j}) + α(u^n_{i+1,j}))(u^n_{i+1,j} − u^n_{i,j})
− ½(α(u^n_{i−1,j}) + α(u^n_{i,j}))(u^n_{i,j} − u^n_{i−1,j})
+ ½(α(u^n_{i,j}) + α(u^n_{i,j+1}))(u^n_{i,j+1} − u^n_{i,j})
− ½(α(u^n_{i,j−1}) + α(u^n_{i,j}))(u^n_{i,j} − u^n_{i,j−1}) ) − ∆t f(u^n_{i,j}) = u^{n−1}_{i,j} .
This defines a nonlinear algebraic system on the form A(u)u = b(u).
Picard iteration. The most recently computed values u− of un can be used
in α and f for a Picard iteration, or equivalently, we solve A(u− )u = b(u− ). The
result is a linear system of the same type as arising from ut = ∇ · (α(x)∇u) +
f (x, t).
The Picard iteration scheme can also be expressed in operator notation:
[D_t⁻ u = D_x \overline{α(u⁻)}^x D_x u + D_y \overline{α(u⁻)}^y D_y u + f(u⁻)]^n_{i,j} .
Newton’s method. The nonlinear system at time level n is conveniently written F_{i,j} = 0, with (dropping the superscript n and writing u^{(1)} for u^{n−1})

F_{i,j} = u_{i,j} − (∆t/h²)( ½(α(u_{i,j}) + α(u_{i+1,j}))(u_{i+1,j} − u_{i,j})
− ½(α(u_{i−1,j}) + α(u_{i,j}))(u_{i,j} − u_{i−1,j})
+ ½(α(u_{i,j}) + α(u_{i,j+1}))(u_{i,j+1} − u_{i,j})
− ½(α(u_{i,j−1}) + α(u_{i,j}))(u_{i,j} − u_{i,j−1}) ) − ∆t f(u_{i,j}) − u^{(1)}_{i,j} = 0 .
It is convenient to work with two indices i and j in 2D finite difference discretiza-
tions, but it complicates the derivation of the Jacobian, which then gets four
indices. (Make sure you really understand the 1D version of this problem as
treated in Section 4.1.) The left-hand expression of an equation F_{i,j} = 0 is to be differentiated with respect to each of the unknowns u_{r,s} (recall that this is short notation for u^n_{r,s}), r ∈ I_x, s ∈ I_y:
J_{i,j,r,s} = ∂F_{i,j}/∂u_{r,s} .
The Newton system to be solved in each iteration can be written as

∑_{r∈I_x} ∑_{s∈I_y} J_{i,j,r,s} δu_{r,s} = −F_{i,j},  i ∈ I_x, j ∈ I_y .
Given i and j, only a few r and s indices give nonzero contribution to the Jacobian since F_{i,j} contains u_{i±1,j}, u_{i,j±1}, and u_{i,j}. This means that J_{i,j,r,s} has nonzero contributions only for (r, s) equal to (i−1, j), (i+1, j), (i, j−1), (i, j+1), and (i, j). The corresponding terms in J_{i,j,r,s} are J_{i,j,i−1,j}, J_{i,j,i+1,j}, J_{i,j,i,j−1}, J_{i,j,i,j+1}, and J_{i,j,i,j}. Therefore, the left-hand side of the Newton system, ∑_r ∑_s J_{i,j,r,s}δu_{r,s}, collapses to

J_{i,j,i,j}δu_{i,j} + J_{i,j,i−1,j}δu_{i−1,j} + J_{i,j,i+1,j}δu_{i+1,j} + J_{i,j,i,j−1}δu_{i,j−1} + J_{i,j,i,j+1}δu_{i,j+1} .
The specific derivatives become

J_{i,j,i−1,j} = ∂F_{i,j}/∂u_{i−1,j} = (∆t/(2h²))( α′(u_{i−1,j})(u_{i,j} − u_{i−1,j}) − (α(u_{i−1,j}) + α(u_{i,j})) ),

J_{i,j,i+1,j} = ∂F_{i,j}/∂u_{i+1,j} = (∆t/(2h²))( −α′(u_{i+1,j})(u_{i+1,j} − u_{i,j}) − (α(u_{i,j}) + α(u_{i+1,j})) ),

J_{i,j,i,j−1} = ∂F_{i,j}/∂u_{i,j−1} = (∆t/(2h²))( α′(u_{i,j−1})(u_{i,j} − u_{i,j−1}) − (α(u_{i,j−1}) + α(u_{i,j})) ),

J_{i,j,i,j+1} = ∂F_{i,j}/∂u_{i,j+1} = (∆t/(2h²))( −α′(u_{i,j+1})(u_{i,j+1} − u_{i,j}) − (α(u_{i,j}) + α(u_{i,j+1})) ) .
The Ji,j,i,j entry has a few more terms and is left as an exercise. Inserting
the most recent approximation u− for u in the J and F formulas and then
forming Jδu = −F gives the linear system to be solved in each Newton iteration.
Boundary conditions will affect the formulas when any of the indices coincide
with a boundary value of an index.
−∇ · (‖∇u‖^q ∇u) = f,
which is an equation modeling the flow of a non-Newtonian fluid through a
channel or pipe. For q = 0 we have the Poisson equation (corresponding to a
Newtonian fluid) and the problem is linear. A typical value for pseudo-plastic
fluids may be q_n = −0.8. We can introduce the continuation parameter Λ ∈ [0, 1] such that q = q_nΛ. Let {Λ_ℓ}_{ℓ=0}^n be the sequence of Λ values in [0, 1], with corresponding q values {q_ℓ}_{ℓ=0}^n. We can then solve a sequence of problems
−∇ · (‖∇u^ℓ‖^{q_ℓ} ∇u^ℓ) = f,  ℓ = 0, . . . , n,

where the initial guess for iterating on u^ℓ is the previously computed solution u^{ℓ−1}. If a particular Λ_ℓ leads to convergence problems, one may try a smaller increase in Λ: Λ_* = ½(Λ_{ℓ−1} + Λ_ℓ), and repeat halving the step in Λ until
convergence is reestablished.
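In code, the continuation loop with step halving can be sketched as below; `solve` is a hypothetical function (our assumption, not an API from the text) that runs, e.g., Picard or Newton iteration for a given Λ value from a given initial guess and reports whether it converged:

```python
def continuation_solve(solve, lambdas, u_init, max_halvings=10):
    """Solve a sequence of gradually more nonlinear problems.
    solve(lam, u_guess) is assumed to return (u, converged)."""
    u = u_init
    lam_prev = lambdas[0]
    for lam in lambdas:
        u_new, converged = solve(lam, u)
        halvings = 0
        while not converged and halvings < max_halvings:
            lam = 0.5*(lam_prev + lam)      # halve the step in the parameter
            u_new, converged = solve(lam, u)
            halvings += 1
        if not converged:
            raise RuntimeError('no convergence for lambda=%g' % lam)
        u, lam_prev = u_new, lam
    return u
```

This sketch moves on with the reduced Λ value once convergence is reestablished; a more elaborate driver would afterwards try to climb back toward the target Λ.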
6 Exercises
Problem 1: Determine if equations are nonlinear or not
Classify each term in the following equations as linear or nonlinear. Assume that u (a scalar field), u (a vector field), and p are unknown functions and that all other symbols are known quantities.
1. mu00 + β|u0 |u0 + cu = F (t)
2. ut = αuxx
3. utt = c2 ∇2 u
4. ut = ∇ · (α(u)∇u) + f (x, y)
5. ut + f (u)x = 0
6. ut + u · ∇u = −∇p + r∇2 u, ∇ · u = 0 (u is a vector field)
7. u0 = f (u, t)
8. ∇2 u = λeu
Solution.
1. mu00 is linear; β|u0 |u0 is nonlinear; cu is linear; F (t) does not contain the
unknown u and is hence constant in u, so the term is linear.
2. ut is linear; αuxx is linear.
3. utt is linear; c2 ∇2 u is linear.
4. ut is linear; ∇ · (α(u)∇u) is nonlinear; f (x, y) is constant in u and hence
linear.
5. ut is linear; f (u)x is nonlinear if f is nonlinear in u.
6. ut is linear; u · ∇u is nonlinear; −∇p is linear (in p); r∇2 u is linear; ∇ · u
is linear.
7. u0 is linear; f (u, t) is nonlinear if f is nonlinear in u.
8. ∇2 u is linear; λeu is nonlinear.
Filename: nonlinear_vs_linear.
Exercise 2: Derive and investigate a generalized logistic
model
The logistic model for population growth is derived by assuming a nonlinear growth rate,

u′ = a(u)u, u(0) = I .

A Forward Euler scheme reads

[D_t⁺ u = a(u)u]^n ,

or written out,
(u^{n+1} − u^n)/∆t = a(u^n)u^n .

The scheme is linear in the unknown u^{n+1}. A Backward Euler scheme,

[D_t⁻ u = a(u)u]^n ,

becomes

(u^n − u^{n−1})/∆t = a(u^n)u^n ,

which is a nonlinear equation in the unknown u^n; expressed with u^{n+1} as the unknown,

u^{n+1} − ∆t a(u^{n+1})u^{n+1} = u^n .
The standard Crank-Nicolson scheme,

[D_t u = \overline{a(u)u}^t]^{n+1/2} ,

takes the form

(u^{n+1} − u^n)/∆t = ½ a(u^n)u^n + ½ a(u^{n+1})u^{n+1} .

This is a nonlinear equation in the unknown u^{n+1},

u^{n+1} − ½∆t a(u^{n+1})u^{n+1} = u^n + ½∆t a(u^n)u^n .

However, with the suggested geometric mean, the a(u)u term is linearized:

(u^{n+1} − u^n)/∆t = a(u^n)u^{n+1} ,

leading to a linear equation in u^{n+1}:

(1 − ∆t a(u^n))u^{n+1} = u^n .
b) Formulate Picard and Newton iteration for the Backward Euler scheme in a).

Solution. Picard iteration uses the most recently computed value u⁻ in a(u):

(1 − ∆t a(u⁻))u^{n+1} = u^n .

Alternatively, with an iteration index k,

(1 − ∆t a(u^{n+1,k}))u^{n+1,k+1} = u^n .

Newton’s method starts with identifying the nonlinear equation as F(u) = 0, and here

F(u) = u − ∆t a(u)u − u^n .

The Jacobian is

J(u) = dF/du = 1 − ∆t(a′(u)u + a(u)) .

The key equation in Newton’s method is then

u^{k+1} = u^k − F(u^k)/J(u^k) .
c) Implement the numerical solution methods from a) and b). Use logistic.py³ to compare the case p = 1 and the choice (93).

Solution. We specialize the code for a(u) to (93) since the code was developed from logistic.py. It is convenient to work with a dimensionless form of the problem, choosing a time scale t_c and the scale u_c = M for u:
import numpy as np
def FE_logistic(p, u0, dt, Nt):
u = np.zeros(Nt+1)
u[0] = u0
for n in range(Nt):
u[n+1] = u[n] + dt*(1 - u[n])**p*u[n]
return u
def BE_logistic(p, u0, dt, Nt, choice='Picard',
                eps_r=1E-3, omega=1, max_iter=1000):
    # u[n] = u[n-1] + dt*(1-u[n])**p*u[n]
    # -dt*(1-u[n])**p*u[n] + u[n] = u[n-1]
    if choice == 'Picard1':
        choice = 'Picard'
        max_iter = 1

    u = np.zeros(Nt+1)
    iterations = []
    u[0] = u0
    for n in range(1, Nt+1):
        c = -u[n-1]
        if choice == 'Picard':

            def F(u):
                return -dt*(1-u)**p*u + u + c

            u_ = u[n-1]
            k = 0
            while abs(F(u_)) > eps_r and k < max_iter:
                # Solve u*(1 - dt*(1-u_)**p) + c = 0 with respect to u
                u_ = omega*(-c/(1 - dt*(1-u_)**p)) + (1-omega)*u_
                k += 1
            u[n] = u_
            iterations.append(k)
        elif choice == 'Newton':

            def F(u):
                return -dt*(1-u)**p*u + u + c

            def dF(u):
                return dt*p*(1-u)**(p-1)*u - dt*(1-u)**p + 1

            u_ = u[n-1]
            k = 0
            while abs(F(u_)) > eps_r and k < max_iter:
                u_ = u_ - F(u_)/dF(u_)
                k += 1
            u[n] = u_
            iterations.append(k)
    return u, iterations

³ http://tinyurl.com/nm5587k/nonlin/logistic.py
def CN_logistic(p, u0, dt, Nt):
# u[n+1] = u[n] + dt*(1-u[n])**p*u[n+1]
# (1 - dt*(1-u[n])**p)*u[n+1] = u[n]
u = np.zeros(Nt+1)
u[0] = u0
for n in range(0, Nt):
u[n+1] = u[n]/(1 - dt*(1 - u[n])**p)
return u
Hint. You need to experiment to find what “infinite time” is (increases substan-
tially with p) and what the appropriate tolerance is for testing the asymptotic
limit.
def test_asymptotic_value():
    T = 100
    dt = 0.1
    Nt = int(round(T/float(dt)))
    u0 = 0.1
    p = 1.8
    u_CN = CN_logistic(p, u0, dt, Nt)
    u_BE_Picard, iter_Picard = BE_logistic(
        p, u0, dt, Nt, choice='Picard',
        eps_r=1E-5, omega=1, max_iter=1000)
    u_BE_Newton, iter_Newton = BE_logistic(
        p, u0, dt, Nt, choice='Newton',
        eps_r=1E-5, omega=1, max_iter=1000)
    u_FE = FE_logistic(p, u0, dt, Nt)
    for arr in u_CN, u_BE_Picard, u_BE_Newton, u_FE:
        expected = 1
        computed = arr[-1]
        tol = 0.01
        msg = 'expected=%s, computed=%s' % (expected, computed)
        print(msg)
        assert abs(expected - computed) < tol
It is important to use a sufficiently small tolerance eps_r for the asymptotic value to be accurate (eps_r=1E-3 leads to the value 0.92 at t = T instead of 0.994 with eps_r=1E-5).
e) Perform experiments with Newton and Picard iteration for the models (93)
and (??). See how sensitive the number of iterations is to ∆t and p.
Filename: logistic_p.
Use this program to investigate potential problems with Newton’s method when solving e^{−0.5x²} cos(πx) = 0. Try the starting points x₀ = 0.8 and x₀ = 0.85 and watch the different behavior.
Exercise 6: Find the truncation error of arithmetic mean
of products
In Section 3.4 we introduce alternative arithmetic means of a product. Say the product is P(t)Q(t) evaluated at t = t_{n+1/2}. The exact value is

[PQ]^{n+1/2} = P^{n+1/2}Q^{n+1/2} .

There are two obvious candidates for evaluating [PQ]^{n+1/2} as a mean of values of P and Q at t_n and t_{n+1}. Either we can take the arithmetic mean of each factor P and Q,

[PQ]^{n+1/2} ≈ ½(P^n + P^{n+1}) ½(Q^n + Q^{n+1}),  (95)

or we can take the arithmetic mean of the product PQ:

[PQ]^{n+1/2} ≈ ½(P^nQ^n + P^{n+1}Q^{n+1}) .  (96)

The arithmetic mean approximates the midpoint value with an error of O(∆t²):

P(t_{n+1/2}) = ½(P^n + P^{n+1}) + O(∆t²) .
A fundamental question is whether (95) and (96) have different orders of accuracy in ∆t = t_{n+1} − t_n. To investigate this question, expand the quantities at t_{n+1} and t_n in Taylor series around t_{n+1/2}, and subtract the true value [PQ]^{n+1/2} from the approximations (95) and (96) to see what the order of the error terms is.
Hint. You may explore sympy for carrying out the tedious calculations. A
general Taylor series expansion of P (t + 12 ∆t) around t involving just a general
function P (t) can be created as follows:
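The code listing that belonged here is not preserved in this copy; a minimal sympy sketch of such an expansion might read:

```python
import sympy as sym

t, dt = sym.symbols('t dt')
P = sym.Function('P')

# Taylor expansion of P(t + dt/2) around dt = 0, with error term O(dt**4)
P_half = P(t + dt/2).series(dt, 0, 4)
```

The same construction with `-dt/2`, and with a second general function `Q`, gives all the series needed to compare (95) and (96).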
Use these examples to investigate the error of (95) and (96) for n = 0. (Choosing n = 0 is necessary to keep the expressions manageable for sympy, but there is of course no lack of generality in using n = 0 rather than an arbitrary n; the main point is the product and addition of Taylor series.)
Filename: product_arith_mean.
Problem 10: Finite differences for the 1D Bratu problem
We address the so-called Bratu problem

−u″ = λe^u, x ∈ (0, 1), u(0) = u(1) = 0,  (99)

where λ is a given parameter. The problem has an exact solution

u_e(x) = −2 ln( cosh((x − ½)θ/2) / cosh(θ/4) ),

where θ solves

θ = √(2λ) cosh(θ/4) .

There are two solutions of (99) for 0 < λ < λ_c and no solution for λ > λ_c. For λ = λ_c there is one unique solution. The critical value λ_c solves

1 = ¼ √(2λ_c) sinh(θ(λ_c)/4) .

A numerical value is λ_c = 3.513830719.
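As a check, eliminating √(2λ_c) between the two relations above gives θ tanh(θ/4) = 4, which is easily solved by bisection. The following small sketch (ours, not from the text) reproduces the quoted value of λ_c:

```python
from math import tanh, cosh

def bratu_critical_lambda():
    """Compute lambda_c: solve theta*tanh(theta/4) = 4 by bisection,
    then use lambda_c = theta**2/(2*cosh(theta/4)**2)."""
    g = lambda theta: theta*tanh(theta/4) - 4
    lo, hi = 1.0, 10.0                 # g(lo) < 0 < g(hi)
    for _ in range(100):               # bisection to machine precision
        mid = 0.5*(lo + hi)
        if g(lo)*g(mid) <= 0:
            hi = mid
        else:
            lo = mid
    theta = 0.5*(lo + hi)
    return theta**2/(2*cosh(theta/4)**2)
```
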
a) Discretize (99) by a centered finite difference method.
b) Set up the nonlinear equations Fi (u0 , u1 , . . . , uNx ) = 0 from a). Calculate
the associated Jacobian.
c) Implement a solver that can compute u(x) using Newton’s method. Plot the
error as a function of x in each iteration.
d) Investigate whether Newton’s method gives second-order convergence by computing ‖u_e − u‖/‖u_e − u⁻‖² in each iteration, where u is the solution in the current iteration and u⁻ is the solution in the previous iteration.
Filename: nonlin_1D_Bratu_fd.
where ϕ_i(x) are P1 finite element basis functions and u_k are unknown coefficients, more precisely the values of the unknown function u at the nodes x_k. We introduce a node numbering that goes from left to right and assume that all cells have the same length h. Given i, the integral only gets contributions from [x_{i−1}, x_{i+1}]. On this
interval ϕk (x) = 0 for k < i − 1 and k > i + 1, so only three basis functions will
contribute:
∑_k u_kϕ_k(x) = u_{i−1}ϕ_{i−1}(x) + u_iϕ_i(x) + u_{i+1}ϕ_{i+1}(x) .
k
Split this integral in two integrals over cell L (left), [xi−1 , xi ], and cell R (right),
[xi , xi+1 ]. Over cell L, u simplifies to ui−1 ϕi−1 + ui ϕi (since ϕi+1 = 0 on this
cell), and over cell R, u simplifies to ui ϕi + ui+1 ϕi+1 . Make a sympy program
that can compute the integral and write it out as a difference equation. Give
the f (u) formula on the command line. Try out f (u) = u2 , sin u, exp u.
Hint. Introduce symbols u_i, u_im1, and u_ip1 for ui , ui−1 , and ui+1 , re-
spectively, and similar symbols for xi , xi−1 , and xi+1 . Find formulas for the
basis functions on each of the two cells, make expressions for u on the two cells,
integrate over each cell, expand the answer and simplify. You can ask sympy for LaTeX code and render it either by creating a LaTeX document and compiling it to a PDF document or by using http://latex.codecogs.com to display LaTeX formulas in a web page. Here are some appropriate Python statements for the latter purpose:
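The statements that belonged here are not preserved in this copy; a sketch of what they might look like follows (the expression `expr` is a placeholder for a result computed earlier in the program):

```python
import sympy as sym

x, h = sym.symbols('x h')
u_im1, u_i, u_ip1 = sym.symbols('u_im1 u_i u_ip1')
# Placeholder for an expression computed by the sympy program:
expr = h/6*(u_im1 + 4*u_i + u_ip1)

latex_code = sym.latex(expr)
# Embed the formula in an HTML snippet rendered by latex.codecogs.com:
html = '<img src="http://latex.codecogs.com/gif.latex?%s"/>' % latex_code
```
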
Problem 12: Finite elements for the 1D Bratu problem
We address the same 1D Bratu problem as described in Problem 10.
a) Discretize (99) by a finite element method using a uniform mesh with P1 elements. Use a group finite element method for the e^u term.
b) Set up the nonlinear equations Fi (u0 , u1 , . . . , uNx ) = 0 from a). Calculate
the associated Jacobian.
Filename: nonlin_1D_Bratu_fe.
k(T )Tx |x=0 = h(T )(T − Ts ), −k(T )Tx |x=L = h(T )(T − Ts ),
where h(T ) is a heat transfer coefficient and Ts is the given surrounding temper-
ature.
a) Discretize this PDE in time using either a Backward Euler or Crank-Nicolson
scheme.
b) Formulate a Picard iteration method for the time-discrete problem (i.e., an
iteration method before discretizing in space).
c) Formulate a Newton method for the time-discrete problem in b).
d) Discretize the PDE by a finite difference method in space. Derive the matrix
and right-hand side of a Picard iteration method applied to the space-time
discretized PDE.
e) Derive the matrix and right-hand side of a Newton method applied to the
discretized PDE in d).
Filename: nonlin_1D_heat_FD.
• u_e(x, t) for the exact solution of the PDE problem
• u_e^n(x) for the exact solution after time discretization
• u^n(x) for the spatially discrete solution ∑_j c_jψ_j
Filename: nonlin_heat_FE_usymbols.
Exercise 17: Differentiate a highly nonlinear term

The operator ∇ · (α(u)∇u) with α(u) = |∇u|^q appears in several physical problems, especially flow of non-Newtonian fluids. The expression |∇u| is defined as the Euclidean norm of a vector: |∇u|² = ∇u · ∇u. In a Newton method one has to carry out the differentiation ∂α(u)/∂c_j, for u = ∑_k c_kψ_k. Show that

∂/∂c_j |∇u|^q = q|∇u|^{q−2} ∇u · ∇ψ_j .
Solution.
∂/∂c_j |∇u|^q = ∂/∂c_j (∇u · ∇u)^{q/2} = (q/2)(∇u · ∇u)^{q/2−1} ∂/∂c_j (∇u · ∇u)
= (q/2)|∇u|^{q−2} ( ∂/∂c_j(∇u) · ∇u + ∇u · ∂/∂c_j(∇u) )
= q|∇u|^{q−2} ∇u · ∇(∂u/∂c_j) = q|∇u|^{q−2} ∇u · ∇ψ_j .
Filename: nonlin_differentiate.
Hint. Set up the unknowns that enter the difference equation at a point (i, j)
in 2D or (i, j, k) in 3D, and identify the nonzero entries of the Jacobian that can
arise from such a type of difference equation.
Filename: nonlin_sparsity_Jacobian.
Problem 20: Investigate a 1D problem with a continuation
method
Flow of a pseudo-plastic power-law fluid between two flat plates can be modeled
by
μ₀ d/dx( |du/dx|^{n−1} du/dx ) = −β, u′(0) = 0, u(H) = 0,
where β > 0 and µ0 > 0 are constants. A target value of n may be n = 0.2.
a) Formulate a Picard iteration method directly for the differential equation
problem.
b) Perform a finite difference discretization of the problem in each Picard
iteration. Implement a solver that can compute u on a mesh. Verify that the
solver gives an exact solution for n = 1 on a uniform mesh regardless of the cell
size.
c) Given a sequence of decreasing n values, solve the problem for each n
using the solution for the previous n as initial guess for the Picard iteration.
This is called a continuation method. Experiment with n = (1, 0.6, 0.2) and
n = (1, 0.9, 0.8, . . . , 0.2) and make a table of the number of Picard iterations
versus n.
d) Derive a Newton method at the differential equation level and discretize the
resulting linear equations in each Newton iteration with the finite difference
method.
e) Investigate whether Newton’s method has better convergence properties than Picard iteration, both in combination with a continuation method.
References
[1] C. T. Kelley. Iterative Methods for Linear and Nonlinear Equations. SIAM,
1995.
[2] M. Mortensen, H. P. Langtangen, and G. N. Wells. A FEniCS-based pro-
gramming framework for modeling turbulent flow by the Reynolds-averaged
Navier-Stokes equations. Advances in Water Resources, 34(9), 2011.
from finite difference discretization. We shall dive into this problem here. To see
the structure of the difference equations implied by the finite element method,
we have to find symbolic expressions for the integrals, and this is extremely
difficult since the integrals involve the unknown function in nonlinear problems.
However, there are some techniques that allow us to approximate the integrals
and work out symbolic formulas that can be compared with their finite difference
counterparts.
We shall address the 1D model problem (50) from the beginning of Section 4.
The finite difference discretization is shown in Section 4.1, while the variational
form based on Galerkin’s method is developed in Section 4.3. We build directly
on formulas developed in the latter section.
ψi = ϕν(i) , i ∈ Is ,
where degree of freedom number ν(i) in the mesh corresponds to unknown
number i (ci ). In the present example, we use all the basis functions except the
last at i = Nn − 1, i.e., Is = {0, . . . , Nn − 2}, and ν(j) = j. The expansion of u
can be taken as
u = D + ∑_{j∈I_s} c_jϕ_{ν(j)},
but it is more common in a finite element context to use a boundary function B = ∑_{j∈I_b} U_jϕ_j, where I_b is the set of indices of the degrees of freedom with prescribed Dirichlet values and U_j is the prescribed value of degree of freedom number j. In the present case we can then write

u = Dϕ_{N_n−1} + ∑_{j∈I_s} c_jϕ_{ν(j)},

where c_j = u(x_{ν(j)}). That is, ν(j) maps unknown number j to the corresponding node number ν(j) such that c_j = u(x_{ν(j)}).
where f (xj ) is the value of f at node j. Since f is a function of u, f (xj ) =
f (u(xj )). Introducing uj as a short form for u(xj ), we can write
f(u) ≈ ∑_j f(u_j)ϕ_j .
This approximation is known as the group finite element method or the product
approximation technique. The index j runs over all node numbers in the mesh.
The principal advantages of the group finite element method are two-fold:
Below, we shall explore point 2 to see exactly how the finite element method creates more complex expressions in the resulting linear system (the difference equations) than the finite difference method does. It turns out that it is very difficult to see what kind of terms arise in the difference equations from $\int f(u)\varphi_i\,dx$ without using the group finite element method or numerical integration utilizing the nodes only.
Note, however, that an expression like $\int f(u)\varphi_i\,dx$ causes no problems in a computer program, as the integral is calculated by numerical integration using an existing approximation of $u$ in $f(u)$ such that the integrand can be sampled at any spatial point.
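As a small illustration (a sketch with made-up mesh, nodal values, $f$, and index $i$), the integral $\int f(u)\varphi_i\,dx$ can be computed by sampling a P1 interpolant of $u$ anywhere, here with a simple composite midpoint rule:

```python
import numpy as np

nodes = np.linspace(0, 1, 6)     # hypothetical P1 mesh
h = nodes[1] - nodes[0]
u_nodes = np.cos(nodes)          # existing approximation of u at the nodes
f = lambda u: u**2
i = 2                            # which basis function phi_i (made up)

def u_h(x):
    # P1 (piecewise linear) interpolant of u; can be sampled anywhere
    return np.interp(x, nodes, u_nodes)

def phi(x, i):
    # hat function associated with node i
    return np.maximum(0.0, 1.0 - np.abs(x - nodes[i])/h)

# Composite midpoint rule for int_0^1 f(u) phi_i dx
n = 100000
xm = (np.arange(n) + 0.5)/n
integral = np.sum(f(u_h(xm))*phi(xm, i))/n
```

Any standard quadrature could replace the midpoint rule; the point is only that $f(u_h(x))$ can be sampled at arbitrary $x$.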
Simplified problem. Our aim now is to derive symbolic expressions for the
difference equations arising from the finite element method in nonlinear problems
and compare the expressions with those arising in the finite difference method.
To this end, let us simplify the model problem and set $a=0$, $\alpha=1$, $f(u)=u^2$, and have Neumann conditions at both ends such that we get a very simple nonlinear problem $-u''=u^2$, $u'(0)=1$, $u'(L)=0$. The variational form is then

$$\int_0^L u'v'\,dx = \int_0^L u^2 v\,dx - v(0),\quad\forall v\in V\thinspace.$$

The term with $u'v'$ is well known, so the only new feature is the term $\int u^2 v\,dx$.
Integrating nonlinear functions. Consider the term $\int u^2 v\,dx$ in the variational formulation with $v=\varphi_i$ and $u=\sum_k u_k\varphi_k$:

$$\int_0^L \Big(\sum_k u_k\varphi_k\Big)^2\varphi_i\,dx\thinspace.$$
Evaluating this integral for P1 elements (see Problem 11) results in the expression

$$\frac{h}{12}\left(u_{i-1}^2 + 2u_i(u_{i-1}+u_{i+1}) + 6u_i^2 + u_{i+1}^2\right),$$

to be compared with the simple value $u_i^2$ that would arise in a finite difference discretization when $u^2$ is sampled at mesh point $x_i$. More complicated $f(u)$ functions in the integral $\int_0^L f(u)\varphi_i\,dx$ give rise to much more lengthy expressions, if it is possible to carry out the integral symbolically at all.
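The result above can be reproduced by symbolic integration over the two cells where $\varphi_i\neq 0$. Here is a sketch with sympy, where the local coordinate placement (node $i-1$ at $x=0$, node $i$ at $x=h$) is our own choice:

```python
import sympy as sp

x, h = sp.symbols('x h', positive=True)
um, ui, up = sp.symbols('u_im1 u_i u_ip1')   # u_{i-1}, u_i, u_{i+1}

# Put node i-1 at x=0, node i at x=h, node i+1 at x=2h.
# Left cell [0, h]:   u = um*(1 - x/h) + ui*x/h,        phi_i = x/h
# Right cell [h, 2h]: u = ui*(2 - x/h) + up*(x/h - 1),  phi_i = 2 - x/h
I_left  = sp.integrate((um*(1 - x/h) + ui*x/h)**2 * (x/h), (x, 0, h))
I_right = sp.integrate((ui*(2 - x/h) + up*(x/h - 1))**2 * (2 - x/h), (x, h, 2*h))
total = sp.simplify(I_left + I_right)

# The expression claimed in the text
claimed = h*sp.Rational(1, 12)*(um**2 + 2*ui*(um + up) + 6*ui**2 + up**2)
```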
Application of the group finite element method. Let us use the group finite element method to derive the terms in the difference equation corresponding to $f(u)$ in the differential equation. We have

$$\int_0^L f(u)\varphi_i\,dx \approx \int_0^L\Big(\sum_j \varphi_j f(u_j)\Big)\varphi_i\,dx = \sum_j\Big(\int_0^L \varphi_i\varphi_j\,dx\Big) f(u_j)\thinspace.$$
We recognize this expression as the mass matrix $M$, arising from $\int\varphi_i\varphi_j\,dx$, times the vector $f=(f(u_0),f(u_1),\ldots)$: $Mf$. The associated terms in the difference equations are, for P1 elements,

$$\frac{h}{6}\left(f(u_{i-1}) + 4f(u_i) + f(u_{i+1})\right)\thinspace.$$
Occasionally, we want to interpret this expression in terms of finite differences, and to this end a rewrite of the expression is convenient:

$$\frac{h}{6}\left(f(u_{i-1}) + 4f(u_i) + f(u_{i+1})\right) = h\Big[f(u) + \frac{h^2}{6}D_xD_xf(u)\Big]_i\thinspace.$$

That is, the finite element treatment of $f(u)$ (when using a group finite element method) gives the same term as in a finite difference approach, $f(u_i)$, plus a correction term which is the 2nd-order discretization of $\frac{1}{6}h^2f''(x_i)$.
We may lump the mass matrix through integration with the Trapezoidal rule
so that M becomes diagonal in the finite element method. In that case the f (u)
term in the differential equation gives rise to a single term hf (ui ), just as in the
finite difference method.
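The rewrite above is a pure algebraic identity in the three nodal values: the weights $\frac{h}{6}(1,4,1)$ equal $h$ times the nodal value plus $\frac{h^2}{6}$ times the centered second difference. A few lines of code confirm this (all numbers are arbitrary):

```python
# Arbitrary data for the check
f = lambda u: u**3
h = 0.1
um, ui, up = 1.2, 1.5, 1.9                  # u_{i-1}, u_i, u_{i+1}
fm, fi, fp = f(um), f(ui), f(up)

mass_term = h/6*(fm + 4*fi + fp)            # group finite element (mass matrix) term
DxDxf = (fm - 2*fi + fp)/h**2               # centered second difference of f(u)
fd_form = h*(fi + h**2/6*DxDxf)             # h[f(u) + (h^2/6) DxDx f(u)]_i
```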
A.3 Numerical integration of nonlinear terms by hand
Let us reconsider a term $\int f(u)v\,dx$ as treated in the previous section, but now we want to integrate this term numerically. Such an approach can lead to easy-to-interpret formulas if we apply a numerical integration rule that samples the integrand at the node points $x_i$ only, because at such points, $\varphi_j(x_i)=0$ if $j\neq i$, which leads to great simplifications.
The term in question takes the form

$$\int_0^L f\Big(\sum_k u_k\varphi_k\Big)\varphi_i\,dx\thinspace.$$

Evaluation of the integrand at a node $x_\ell$ leads to a collapse of the sum $\sum_k u_k\varphi_k$ to one term because

$$\sum_k u_k\varphi_k(x_\ell) = u_\ell,$$

and hence

$$f\Big(\sum_k u_k\underbrace{\varphi_k(x_\ell)}_{\delta_{k\ell}}\Big)\underbrace{\varphi_i(x_\ell)}_{\delta_{i\ell}} = f(u_\ell)\delta_{i\ell}\thinspace.$$
The Trapezoidal rule, using the node points only, then gives

$$\int_0^L f\Big(\sum_k u_k\varphi_k\Big)\varphi_i\,dx \approx h\sum_{\ell=0}^{N_n-1} f(u_\ell)\delta_{i\ell} - C = hf(u_i) - C\thinspace.$$

This is the same representation of the $f$ term as in the finite difference method. The term $C$ contains the evaluations of the integrand at the ends with weight $\frac{1}{2}$, needed to make a true Trapezoidal rule:

$$C = \frac{h}{2}f(u_0)\varphi_i(0) + \frac{h}{2}f(u_{N_n-1})\varphi_i(L)\thinspace.$$

The answer $hf(u_i)$ must therefore be multiplied by $\frac{1}{2}$ if $i=0$ or $i=N_n-1$. Note that $C=0$ for $i=1,\ldots,N_n-2$.
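A quick numerical check (the mesh and nodal values are made up): applying the Trapezoidal rule with end weights $\frac{h}{2}$ to the integrand $f(u)\varphi_i$ sampled at the nodes gives $hf(u_i)$ at interior nodes and $\frac{h}{2}f(u_i)$ at the ends.

```python
import numpy as np

Nn = 7
h = 0.25
u = np.linspace(0.3, 1.8, Nn)**2   # arbitrary nodal values of u
f = lambda v: np.exp(v)

def trap_term(i):
    # Trapezoidal rule with weights (h/2, h, ..., h, h/2) applied to
    # f(u)*phi_i sampled at the nodes, where phi_i(x_l) = delta_il
    g = np.zeros(Nn)
    g[i] = f(u[i])                 # the integrand collapses to f(u_i) at node i
    w = np.full(Nn, h)
    w[0] = w[-1] = h/2
    return np.sum(w*g)
```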
One can alternatively use the Trapezoidal rule on the reference cell and
assemble the contributions. It is a bit more labor in this context, but working on
the reference cell is safer as that approach is guaranteed to handle discontinuous
derivatives of finite element functions correctly (not important in this particular
example), while the rule above was derived with the assumption that f is
continuous at the integration points.
The conclusion is that it suffices to use the Trapezoidal rule if one wants
to derive the difference equations in the finite element method and make them
similar to those arising in the finite difference method. The Trapezoidal rule
has sufficient accuracy for P1 elements, but for P2 elements one should turn to
Simpson’s rule.
64
A.4 Finite element discretization of a variable coefficient
Laplace term
Turning back to the model problem (50), it remains to calculate the contribution of the $(\alpha u')'$ and boundary terms to the difference equations. The integral in the variational form corresponding to $(\alpha u')'$ is

$$\int_0^L \alpha\Big(\sum_k c_k\varphi_k\Big)\varphi_i'\varphi_j'\,dx\thinspace.$$
Numerical integration utilizing a value of $\sum_k c_k\varphi_k$ from a previous iteration must in general be used to compute the integral. Now our aim is to integrate symbolically, as much as we can, to obtain some insight into how the finite element method approximates this term. To be able to derive symbolic expressions, we must either turn to the group finite element method or numerical integration in the node points. Finite element basis functions $\varphi_i$ are now used.
Group finite element method. We set $\alpha(u)\approx\sum_k\alpha(u_k)\varphi_k$, and then we write

$$\int_0^L \alpha\Big(\sum_k c_k\varphi_k\Big)\varphi_i'\varphi_j'\,dx \approx \sum_k\Big(\underbrace{\int_0^L \varphi_k\varphi_i'\varphi_j'\,dx}_{L_{i,j,k}}\Big)\alpha(u_k) = \sum_k L_{i,j,k}\,\alpha(u_k)\thinspace.$$
Further calculations are now easiest to carry out in the reference cell. With P1 elements we can compute $L_{i,j,k}$ for the two $k$ values that are relevant on the reference cell. Turning to local indices, one gets

$$L_{r,s,t}^{(e)} = \frac{1}{2h}\begin{pmatrix}1 & -1\\ -1 & 1\end{pmatrix},\quad t=0,1,$$

where $r,s,t=0,1$ are indices over local degrees of freedom in the reference cell ($i=q(e,r)$, $j=q(e,s)$, and $k=q(e,t)$). The sum $\sum_k L_{i,j,k}\alpha(u_k)$ at the cell level becomes $\sum_{t=0}^1 L_{r,s,t}^{(e)}\alpha(\tilde u_t)$, where $\tilde u_t$ is $u(x_{q(e,t)})$, i.e., the value of $u$ at local node number $t$ in cell number $e$. The element matrix becomes

$$\frac{1}{2}\left(\alpha(\tilde u_0) + \alpha(\tilde u_1)\right)\frac{1}{h}\begin{pmatrix}1 & -1\\ -1 & 1\end{pmatrix}\thinspace. \qquad (101)$$
As usual, we employ a left-to-right numbering of cells and nodes. Row number i
in the global matrix gets contributions from the first row of the element matrix
in cell i and the last row of the element matrix in cell i − 1. In cell number
i − 1 the sum α(ũ0 ) + α(ũ1 ) corresponds to α(ui−1 ) + α(ui ). The same sum
becomes α(ui ) + α(ui+1 ) in cell number i. We can with this insight assemble
the contributions to row number i in the global matrix:
$$\frac{1}{2h}\left(-(\alpha(u_{i-1})+\alpha(u_i)),\quad \alpha(u_{i-1})+2\alpha(u_i)+\alpha(u_{i+1}),\quad -(\alpha(u_i)+\alpha(u_{i+1}))\right)\thinspace.$$
Multiplying by the vector of unknowns $u_i$ results in a formula that can be arranged to

$$-\frac{1}{h}\Big(\frac{1}{2}(\alpha(u_i)+\alpha(u_{i+1}))(u_{i+1}-u_i) - \frac{1}{2}(\alpha(u_{i-1})+\alpha(u_i))(u_i-u_{i-1})\Big), \qquad (102)$$

which is nothing but the standard finite difference discretization of $-(\alpha(u)u')'$ with an arithmetic mean of $\alpha(u)$ (and the usual factor $h$ because of the integration in the finite element method).
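A small numerical sanity check (with a hypothetical $\alpha$ and arbitrary values) that the assembled row times $(u_{i-1}, u_i, u_{i+1})$ reproduces the finite difference form (102):

```python
import numpy as np

alpha = lambda u: 1 + u**2         # hypothetical coefficient function
h = 0.2
um, ui, up = 0.7, 1.1, 1.6         # u_{i-1}, u_i, u_{i+1} (arbitrary)
am, ai, ap = alpha(um), alpha(ui), alpha(up)

# Row i of the assembled global matrix, applied to the local unknowns
row = np.array([-(am + ai), am + 2*ai + ap, -(ai + ap)])/(2*h)
fem_term = row @ np.array([um, ui, up])

# Formula (102): finite difference form with arithmetic means of alpha(u)
fd_term = -(1/h)*(0.5*(ai + ap)*(up - ui) - 0.5*(am + ai)*(ui - um))
```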
Alternatively, we can compute the integral numerically by the Trapezoidal rule on the reference cell:

$$\int_{-1}^{1}\alpha\Big(\sum_t \tilde u_t\tilde\varphi_t\Big)\tilde\varphi_r'\tilde\varphi_s'\,\frac{h}{2}\,dX = \int_{-1}^{1}\alpha\Big(\sum_{t=0}^{1}\tilde u_t\tilde\varphi_t\Big)\frac{2}{h}\frac{d\tilde\varphi_r}{dX}\,\frac{2}{h}\frac{d\tilde\varphi_s}{dX}\,\frac{h}{2}\,dX$$
$$= \frac{1}{2h}(-1)^r(-1)^s\int_{-1}^{1}\alpha\Big(\sum_{t=0}^{1}\tilde u_t\tilde\varphi_t(X)\Big)\,dX$$
$$\approx \frac{1}{2h}(-1)^r(-1)^s\left(\alpha\Big(\sum_{t=0}^{1}\tilde\varphi_t(-1)\tilde u_t\Big)+\alpha\Big(\sum_{t=0}^{1}\tilde\varphi_t(1)\tilde u_t\Big)\right)$$
$$= \frac{1}{2h}(-1)^r(-1)^s\left(\alpha(\tilde u_0)+\alpha(\tilde u_1)\right)\thinspace. \qquad (103)$$
The element matrix in (103) is identical to the one in (101), showing that the group finite element method and Trapezoidal integration are equivalent with a standard finite difference discretization of a nonlinear Laplace term $(\alpha(u)u')'$ using an arithmetic mean for $\alpha$: $[D_x\overline{\alpha(u)}^x D_x u]_i$.
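This equivalence can also be verified symbolically; here is a sketch with sympy that applies the Trapezoidal rule $\int_{-1}^1 g\,dX \approx g(-1)+g(1)$ on the reference cell and compares with the element matrix (101):

```python
import sympy as sp

X, h = sp.symbols('X h', positive=True)
u0, u1 = sp.symbols('u0 u1')
alpha = sp.Function('alpha')       # left unspecified on purpose

phit = [(1 - X)/2, (1 + X)/2]      # P1 basis functions on the reference cell [-1, 1]
u_local = u0*phit[0] + u1*phit[1]

elm = sp.zeros(2, 2)
for r in range(2):
    for s in range(2):
        dr = sp.diff(phit[r], X)*2/h              # global derivative of phi_r
        ds = sp.diff(phit[s], X)*2/h
        g = alpha(u_local)*dr*ds*h/2              # integrand, with dx = (h/2) dX
        elm[r, s] = g.subs(X, -1) + g.subs(X, 1)  # Trapezoidal rule on [-1, 1]

# Element matrix claimed in (101)/(103)
claimed = (alpha(u0) + alpha(u1))/(2*h)*sp.Matrix([[1, -1], [-1, 1]])
```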
The term $\int auv\,dx$ in the variational form is linear and gives these terms in the algebraic equations:
$$\frac{ah}{6}(u_{i-1} + 4u_i + u_{i+1}) = ah\Big[u + \frac{h^2}{6}D_xD_xu\Big]_i\thinspace.$$
The final term in the variational form is the Neumann condition at the boundary: $Cv(0)=C\varphi_i(0)$. With a left-to-right numbering, only $i=0$ will give a contribution $Cv(0)=C\delta_{i0}$ (since $\varphi_i(0)\neq 0$ only for $i=0$).
Summary.
For the equation

$$-(\alpha(u)u')' + au = f(u),$$

P1 finite elements result in difference equations where

• the term $-(\alpha(u)u')'$ becomes $-h[D_x\overline{\alpha(u)}^x D_x u]_i$ if the group finite element method or Trapezoidal integration is applied,
• $f(u)$ becomes $hf(u_i)$ with Trapezoidal integration or the "mass matrix" representation $h[f(u) + \frac{h^2}{6}D_xD_xf(u)]_i$ if computed by a group finite element method,
• $au$ leads to the "mass matrix" form $ah[u + \frac{h^2}{6}D_xD_xu]_i$.