
PARTIAL DIFFERENTIAL EQUATIONS I

VERSION: September 12, 2022

ARMIN SCHIKORRA

Contents

References 5
Index 6

Part 1. PDE 1 7
1. Introduction and some basic notation 7
2. Laplace equation 12
2.1. Sort of a physical motivation 12
2.2. Definitions 13
2.3. Fundamental Solution, Newton- and Riesz Potential 14
2.4. Mean Value Property for harmonic functions 20
2.5. Maximum and Comparison Principles 22
2.6. Harnack Principle 26
2.7. Green Functions 27
2.8. Perron's method (illustration) 32
2.9. Weak Solutions, Regularity Theory 40
2.10. Methods from Calculus of Variations – Energy Methods 46
2.11. Linear Elliptic equations 49
2.12. Maximum principles for linear elliptic equations 51
3. Heat equation 59
3.1. Again, sort of a physical motivation 59

3.2. Sort of an optimization motivation 59


3.3. Fundamental solution and Representation 60
3.4. Mean-value formula 62
3.5. Maximum principle and Uniqueness 63
3.6. Harnack’s Principle 68
3.7. Regularity and Cauchy-estimates 69
3.8. Variational Methods 71
4. Wave Equation 72
4.1. Global Solution via Fourier transform 73
4.2. Energy methods 75
5. Black Box – Sobolev Spaces 76
5.1. Approximation by smooth functions 79
5.2. Embedding Theorems 80
5.3. Trace Theorems 83
5.4. Difference Quotients 83
6. Existence and basic regularity theory for Laplace Equation 85
6.1. Existence: Proof of Theorem 6.1 87
6.2. Uniqueness: Proof of Theorem 6.3 88
6.3. Interior regularity theory: Proof of Theorem 6.4 90
6.4. Global/Boundary regularity theory: Proof of Theorem 6.6 96
6.5. An alternative approach to boundary regularity theory: reflection 101
6.6. Extension to more general elliptic equations 102
7. The Role of Harmonic Analysis in PDE – L^p-theory 102
7.1. Short introduction to Calderon-Zygmund Theory 102
7.2. Calderon-Zygmund operators 103
7.3. W^{1,p}-theory for the Laplace equation 106
7.4. W^{1,p}-theory for a constant coefficient linear elliptic equation 109
7.5. W^{1,p}-theory for a Hölder continuous coefficient linear elliptic equation 110
7.6. W^{2,p}-theory 112
8. De Giorgi - Nash - Moser iteration and De Giorgi’s theorem 112
8.1. Boundedness 112

In Analysis
there are no theorems
only proofs

A large part of these notes is based on [Evans, 2010] and lectures by Heiko von der Mosel
(RWTH Aachen).

References
[Adams and Fournier, 2003] Adams, R. A. and Fournier, J. J. F. (2003). Sobolev spaces, volume 140 of
Pure and Applied Mathematics (Amsterdam). Elsevier/Academic Press, Amsterdam, second edition.
[Chen, 1999] Chen, Z.-Q. (1999). Multidimensional symmetric stable processes. Korean J. Comput. Appl.
Math., 6(2):227–266.
[Evans, 2010] Evans, L. C. (2010). Partial differential equations, volume 19 of Graduate Studies in Mathematics. American Mathematical Society, Providence, RI, second edition.
[Evans and Gariepy, 2015] Evans, L. C. and Gariepy, R. F. (2015). Measure theory and fine properties of
functions. Textbooks in Mathematics. CRC Press, Boca Raton, FL, revised edition.
[Gazzola et al., 2010] Gazzola, F., Grunau, H.-C., and Sweers, G. (2010). Polyharmonic boundary value
problems, volume 1991 of Lecture Notes in Mathematics. Springer-Verlag, Berlin. Positivity preserving
and nonlinear higher order elliptic equations in bounded domains.
[Giaquinta and Martinazzi, 2012] Giaquinta, M. and Martinazzi, L. (2012). An introduction to the regularity theory for elliptic systems, harmonic maps and minimal graphs, volume 11 of Appunti. Scuola Normale Superiore di Pisa (Nuova Serie) [Lecture Notes. Scuola Normale Superiore di Pisa (New Series)]. Edizioni della Normale, Pisa, second edition.
[Gilbarg and Trudinger, 2001] Gilbarg, D. and Trudinger, N. S. (2001). Elliptic partial differential equations of second order. Classics in Mathematics. Springer-Verlag, Berlin. Reprint of the 1998 edition.
[Giusti, 2003] Giusti, E. (2003). Direct methods in the calculus of variations. World Scientific Publishing
Co., Inc., River Edge, NJ.
[Iwaniec and Sbordone, 1998] Iwaniec, T. and Sbordone, C. (1998). Riesz transforms and elliptic PDEs
with VMO coefficients. J. Anal. Math., 74:183–212.
[John, 1991] John, F. (1991). Partial differential equations, volume 1 of Applied Mathematical Sciences.
Springer-Verlag, New York, fourth edition.
[Koike, 2004] Koike, S. (2004). A beginner’s guide to the theory of viscosity solutions, volume 13 of MSJ
Memoirs. Mathematical Society of Japan, Tokyo.
[Kuznetsov, 2019] Kuznetsov, N. (2019). Mean value properties of harmonic functions and related topics
(a survey). Preprint, arXiv:1904.08312.
[Littman et al., 1963] Littman, W., Stampacchia, G., and Weinberger, H. F. (1963). Regular points for
elliptic equations with discontinuous coefficients. Ann. Scuola Norm. Sup. Pisa (3), 17:43–77.
[Llorente, 2015] Llorente, J. G. (2015). Mean value properties and unique continuation. Commun. Pure
Appl. Anal., 14(1):185–199.
[Maz’ya, 2011] Maz’ya, V. (2011). Sobolev spaces with applications to elliptic partial differential equations,
volume 342 of Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical
Sciences]. Springer, Heidelberg, augmented edition.
[Talenti, 1976] Talenti, G. (1976). Best constant in Sobolev inequality. Ann. Mat. Pura Appl. (4), 110:353–
372.
[Ziemer, 1989] Ziemer, W. P. (1989). Weakly differentiable functions, volume 120 of Graduate Texts in
Mathematics. Springer-Verlag, New York. Sobolev spaces and functions of bounded variation.
Index
a priori estimates, 41
barrier, 37
bump function, 40
Calderon-Zygmund theory, 86
classical solution, 40
cutoff function, 40
differentiating the equation, 42
Dirichlet principle, 46
distributional, 40
distributional solutions, 39
Einstein summation convention, 84
elliptic, 9, 11, 84
elliptic equation, 84
energy decay, 70
Euler-Lagrange-equations, 46
exterior sphere condition, 37
flattening the boundary, 96
fractional Sobolev space, 82
fully nonlinear, 8
fundamental solution, 13, 15, 60
Gagliardo-seminorm, 82
harmonic, 13
heat ball, 61
heat kernel, 59, 60
homogeneous, 13
homogeneous heat equation, 59
homogeneous Laplace equation, 13
hyperbolic, 9, 11
inhomogeneous, 13, 59
interior sphere condition, 57
inverse Fourier transform, 14
Laplace equation, 12
linear, 8
maximum principles, 21
mean value property, 19
Neumann boundary problem, 57
Newton potential, 15
parabolic, 9, 11
Poisson equation, 13
potential, 15
quasilinear, 8
regularity theory, 41
Riesz potential, 15
Schauder theory, 86
semi-group theory, 59
semilinear, 8
smooth, 78
Sobolev space, 75
Sobolev-Slobodeckij, 82
strictly convex, 88
strong maximum principle, 62
strong solution, 40
subharmonic, 13, 31
subsolution, 13
superharmonic, 13
supersolution, 13
test-functions, 40
trace sense, 76
trace space, 82
Tychonoff, 66
variation, 59
Viscosity solutions, 39
weak maximum principle, 62
weak solution, 39, 47
Wirtinger's inequality, 80

Part 1. PDE 1

1. Introduction and some basic notation

When studying Partial Differential Equations (PDEs) the first question that arises is: what
are partial differential equations?
Let Ω ⊂ Rn be an open set and u : Ω → R be differentiable. The partial derivative ∂1 is
the directional derivative
\[
\partial_1 u(x) \equiv \partial_{x_1} u(x) = \frac{d}{dx_1} u(x) = \frac{d}{dt}\Big|_{t=0} u(x + t e_1),
\]
where e1 = (1, 0, . . . , 0) is the first unit vector. The partial derivatives ∂2 , . . . , ∂n are defined
likewise.
Sometimes it is convenient to use multiindices: an n-multiindex γ is a vector γ = (γ1 , γ2 , . . . , γn )
where γ1 , . . . , γn ∈ {0, 1, 2, . . .}. The order of a multiindex γ is |γ|, defined as
\[
|\gamma| = \sum_{i=1}^{n} \gamma_i.
\]

For a sufficiently often differentiable function u : Ω → R and a multiindex γ we denote by
∂γ u the partial derivative
\[
\partial^\gamma u(x) = \partial_{x_1}^{\gamma_1} \partial_{x_2}^{\gamma_2} \cdots \partial_{x_n}^{\gamma_n} u(x).
\]
For example, for γ = (1, 0, 0, . . . , 0) we have
\[
\partial^\gamma u(x) = \partial_{x_1} u(x),
\]
i.e. a partial derivative of first order; and for γ = (1, 2, 0, . . . , 0) we have
\[
\partial^\gamma u = \partial_{122} u \equiv \partial_1 \partial_2 \partial_2 u,
\]
i.e. a partial derivative of 3rd order.
The collection of all partial derivatives of k-th order of u is usually denoted by D^k u(x) ∈ R^{n^k}
or (the "gradient") ∇^k u. Usually these are written in matrix form, namely
\[
Du(x) = (\partial_1 u(x), \partial_2 u(x), \partial_3 u(x), \ldots, \partial_n u(x))
\]
and
\[
D^2 u(x) = (\partial_{ij} u)_{i,j=1,\ldots,n} \equiv
\begin{pmatrix}
\partial_{11} u(x) & \partial_{12} u(x) & \partial_{13} u(x) & \cdots & \partial_{1n} u(x)\\
\partial_{21} u(x) & \partial_{22} u(x) & \partial_{23} u(x) & \cdots & \partial_{2n} u(x)\\
\vdots & & & & \vdots\\
\partial_{n1} u(x) & \partial_{n2} u(x) & \partial_{n3} u(x) & \cdots & \partial_{nn} u(x)
\end{pmatrix}.
\]
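For example, for u(x) = |x|^2 = \sum_{i=1}^{n} x_i^2 one has Du(x) = 2x = (2x_1, . . . , 2x_n) and D^2 u(x) = 2 Id, where Id denotes the n × n identity matrix.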
Definition 1.1. Let Ω ⊂ Rn be an open set and k ∈ N ∪ {0}. A partial differential equation
(PDE) of k-th order is an expression of the form
(1.1) F(D^k u(x), D^{k-1} u(x), D^{k-2} u(x), . . . , Du(x), u(x), x) = 0 for x ∈ Ω,
where u : Ω → R is the unknown (also the "solution" to the PDE) and F is a given
structure (i.e. a map)
\[
F : \mathbb{R}^{n^k} \times \mathbb{R}^{n^{k-1}} \times \cdots \times \mathbb{R}^{n} \times \mathbb{R} \times \Omega \to \mathbb{R}.
\]

• (1.1) is called linear if F is linear in u, meaning that we can find for every n-multiindex
γ with |γ| ≤ k a function a_γ : Ω → R (independent of u) such that
\[
F(D^k u(x), D^{k-1} u(x), \ldots, Du(x), u(x), x) = \sum_{|\gamma| \leq k} a_\gamma(x)\, \partial^\gamma u(x).
\]
• (1.1) is called semilinear if F is linear with respect to the highest order k, namely if
\[
F(D^k u(x), D^{k-1} u(x), \ldots, Du(x), u(x), x) = \sum_{|\gamma| = k} a_\gamma(x)\, \partial^\gamma u(x) + G\big(D^{k-1} u(x), D^{k-2} u(x), \ldots, Du(x), u(x), x\big).
\]
• (1.1) is called quasilinear if F is linear with respect to the highest order k but the
coefficients of the highest order may depend on the lower order derivatives of u,
namely if we have a representation of the form
\[
F(D^k u(x), D^{k-1} u(x), \ldots, Du(x), u(x), x) = \sum_{|\gamma| = k} a_\gamma\big(D^{k-1} u(x), \ldots, Du(x), u(x), x\big)\, \partial^\gamma u(x) + G\big(D^{k-1} u(x), \ldots, Du(x), u(x), x\big).
\]
• If none of the above apply, then we call F fully nonlinear.

We have a system of partial differential equations of order k if u : Ω → Rm is a vector
and/or the structure function F is vector-valued,
\[
F : \mathbb{R}^{m n^k} \times \mathbb{R}^{m n^{k-1}} \times \cdots \times \mathbb{R}^{m n} \times \mathbb{R}^m \times \Omega \to \mathbb{R}^{\ell}
\]
for m, ℓ ≥ 1.
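For example: the equations in Example 1.2 below are all linear; ∆u = u^3 is semilinear; the p-Laplace equation and the minimal surface equation from Example 1.3 below are quasilinear; and the Monge-Ampère equation det(D^2 u) = 0 as well as the eikonal equation |Du| = 1 are fully nonlinear.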

The goal in PDE is usually (besides modeling which PDE describes which situation) to solve
PDEs, possibly subject to side-conditions (such as prescribed boundary data on ∂Ω).
This is rarely possible explicitly (even in the linear case) – which is a huge contrast to
ODEs. E.g. for
\[
u''(x) = 2u(x), \qquad x \in \mathbb{R},
\]
we know that the solutions are of the form u(x) = A e^{\sqrt{2}\,x} + B e^{-\sqrt{2}\,x}, and we can compute A, B by
prescribing some initial values at x = 0 or similar.
Now if we try that in two dimensions, and consider
\[
\Delta u(x) \equiv \partial_{11} u(x) + \partial_{22} u(x) = 2u(x), \qquad x \in B(0,1) \subset \mathbb{R}^2,
\]
it is really difficult to see what u is (observe that also the amount of initial data – e.g. the
values on ∂B(0, 1) – is not one number, but infinitely many!).
So, in general, the best one can hope for is to address the following main questions for PDEs:

• Is there a solution to a given problem (and if so: in what sense? – we will learn the
distributional/weak sense and the strong sense)?
• Are solutions unique (under reasonable assumptions like prescribed initial data or
boundary data)?
• What are the properties of the solutions (e.g. does the solution depend continuously
on the data of the problem)?

It is important to accept that there are PDEs without (classical) solutions and that there is
no general theory of PDEs. There is, however, theory for several types of PDEs.
Example 1.2 (Some basic linear equations).
• Laplace equation
\[
\Delta u := \sum_{i=1}^{n} u_{x_i x_i} = 0.
\]
• Eigenvalue equation (aka Helmholtz equation)
\[
\Delta u = \lambda u.
\]
• Transport equation
\[
\partial_t u - \sum_{i=1}^{n} b_i\, u_{x_i} = 0.
\]
• Heat equation
\[
\partial_t u - \Delta u = 0.
\]
• Schrödinger equation
\[
i \partial_t u + \Delta u = 0.
\]
• Wave equation
\[
u_{tt} - \Delta u = 0.
\]

Second order linear equations are classified into elliptic, parabolic, and hyperbolic PDEs. Roughly
this is understood as follows. Assume that u depends on x and t; then

• elliptic means the equation is of the form
\[
u_{xx} + u_{tt} = G(x, t, u, u_t, u_x),
\]
• parabolic means
\[
u_{xx} = G(x, t, u, u_t, u_x),
\]
• hyperbolic means
\[
u_{xx} - u_{tt} = G(x, t, u, u_t, u_x)
\quad\text{or}\quad
u_{xt} = G(x, t, u, u_t, u_x).
\]

Let us consider a generic second order linear equation
\[
A u_{xx} + B u_{xy} + C u_{yy} + D u_x + E u_y + F u = g
\]
(for now let us assume that A, B, . . . are constant). We can write the second-order part as
\[
A u_{xx} + B u_{xy} + C u_{yy}
=
\begin{pmatrix} A & \tfrac{1}{2}B \\ \tfrac{1}{2}B & C \end{pmatrix}
:
\begin{pmatrix} \partial_{xx} u & \partial_{yx} u \\ \partial_{xy} u & \partial_{yy} u \end{pmatrix},
\]
where : denotes the matrix scalar product (sometimes: Hilbert-Schmidt product). If AC − B²/4 > 0,
the determinant of the coefficient matrix is positive, i.e. either the matrix has
two positive eigenvalues λ1 > 0 and λ2 > 0 or two negative eigenvalues λ1 < 0 and λ2 < 0,
and there exists an orthogonal matrix P ∈ SO(2) such that
\[
P^T \begin{pmatrix} A & \tfrac{1}{2}B \\ \tfrac{1}{2}B & C \end{pmatrix} P = \mathrm{diag}(\lambda_1, \lambda_2).
\]
Then we have
\[
\begin{pmatrix} A & \tfrac{1}{2}B \\ \tfrac{1}{2}B & C \end{pmatrix}
:
\begin{pmatrix} \partial_{xx} u & \partial_{yx} u \\ \partial_{xy} u & \partial_{yy} u \end{pmatrix}
= \mathrm{diag}(\lambda_1, \lambda_2) : P^T D^2 u\, P.
\]
Now consider ũ(x, y) := u(P(x, y)^t); then by the chain rule,
\[
D^2 \tilde u(x, y) = P^t\, (D^2 u)(P(x,y)^t)\, P,
\]
so that if we set (x̃, ỹ)^t := P(x, y)^t we have
\[
\lambda_1 u_{\tilde x \tilde x} + \lambda_2 u_{\tilde y \tilde y} = G(u, u_x, u_y),
\]
that is, if AC − B²/4 > 0 we can transform our equation into an elliptic equation.
Similarly, if AC − B²/4 < 0, the determinant of the coefficient matrix is negative, so one
eigenvalue is positive and one is negative, say λ1 > 0 > λ2, and we can transform the equation into
\[
\lambda_1 u_{\tilde x \tilde x} - |\lambda_2|\, u_{\tilde y \tilde y} = G(u, u_x, u_y),
\]
i.e. a hyperbolic equation.
And if AC − B²/4 = 0, one of the eigenvalues is zero, so that we have the structure
\[
\lambda_1 u_{\tilde x \tilde x} = G(u, u_x, u_y),
\]
i.e. we are parabolic.
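As a quick sanity check: for the Laplace equation u_xx + u_yy = 0 we have A = C = 1, B = 0, so AC − B²/4 = 1 > 0 (elliptic); for the wave equation u_xx − u_yy = 0 we have A = 1, C = −1, B = 0, so AC − B²/4 = −1 < 0 (hyperbolic); and for the heat equation u_y − u_xx = 0 (with y playing the role of time) we have A = −1, B = C = 0, so AC − B²/4 = 0 (parabolic).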
Whether an equation is elliptic, parabolic, or hyperbolic is not purely an algebraic question – it often
determines the ways we can understand properties of the equation in question. Often we
think of elliptic equations as equilibrium or stationary equations, of parabolic equations as a
flow of an energy, and of hyperbolic equations as wave-like equations – but this is not really always the
case: the Schrödinger equation is parabolic in the previous sense, but it is wave-like.
It generally holds: every type of equation warrants its own methods.

One can extend this theory, of course, to higher dimensions. If
\[
\sum_{i,j=1}^{n} A_{ij}\, \partial_{x_i x_j} u + \sum_{i=1}^{n} B_i\, \partial_{x_i} u + C u = D,
\]
then we may assume that A is symmetric (any antisymmetric part vanishes because
(∂_{x_i x_j} u)_{ij} is symmetric) – and thus we can discuss its eigenvalues.

• The equation is elliptic if all eigenvalues are nonzero and have the same sign.
• The equation is parabolic if exactly one eigenvalue is zero, all others are nonzero
and have the same sign.
• The equation is hyperbolic if no eigenvalue is zero, and n − 1 eigenvalues have the
same sign, and the other one has the opposite sign.

Of course there are more cases (and they may be very challenging to treat). In principle:
elliptic means the second order derivatives “move in the same direction”, parabolic means
“all but one direction move in the same direction and the remaining direction is of first
order only”, and hyperbolic “the second derivatives compete with each other”.
Of course, since in general A and B are nonconstant, the type of equation may change and
depend on the point x (e.g. tuxx + utt = 0).
Example 1.3 (Some basic nonlinear equations).
• Eikonal equation
\[
|Du| = 1.
\]
• p-Laplace equation
\[
\mathrm{div}\,(|Du|^{p-2} Du) \equiv \sum_{i=1}^{n} \partial_i\big(|Du|^{p-2}\, \partial_i u\big) = 0.
\]
• Minimal surface equation
\[
\mathrm{div}\left(\frac{Du}{\sqrt{1 + |Du|^2}}\right) = 0.
\]
• Monge-Ampère equation
\[
\det(D^2 u) = 0.
\]
• Hamilton-Jacobi equation
\[
\partial_t u + H(Du, x) = 0.
\]

The notion of what constitutes a solution is important, as a too weak notion allows for too
many solutions, and a too strong notion of solution may allow for no solutions at all. We
illustrate this for the Eikonal equation:
Exercise 1.4. We consider different notions of solutions for the Eikonal equation:

(1) Consider
\[
(1.2) \qquad |u'(x)| = 1 \ \text{ for } x \in (-1,1), \qquad u(-1) = u(1) = 0.
\]
Show that there is no u ∈ C^0([−1, 1]) ∩ C^1((−1, 1)) such that (1.2) is satisfied.
(2) Consider instead
\[
(1.3) \qquad |u'(x)| = 1 \ \text{ for all but finitely many } x \in (-1,1), \qquad u(-1) = u(1) = 0.
\]
Show that there are infinitely many solutions u ∈ C^0([−1, 1]) that are differentiable
in all but finitely many points in (−1, 1) such that (1.3) is satisfied.
(3) Show that there is a sequence u_k ∈ C^0([−1, 1]) of functions that are differentiable
in all but finitely many points of (−1, 1) and solve (1.3), such that
\[
\sup_{x \in [-1,1]} |u_k(x) - 0| \xrightarrow{k \to \infty} 0.
\]
(4) Consider instead
\[
(1.4) \qquad |u'(x)| = 1 \ \text{ for all but at most one } x \in (-1,1), \qquad u(-1) = u(1) = 0.
\]
Show that there are still two solutions u ∈ C^0([−1, 1]) that are differentiable in all
but at most one point in (−1, 1) such that (1.4) is satisfied.
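(For orientation: in (4) the two candidates are the tent functions u(x) = 1 − |x| and u(x) = |x| − 1; in (2) one obtains infinitely many solutions by gluing together zig-zag functions with slopes ±1 that vanish at x = ±1; and letting the zig-zags become finer and finer produces a sequence as in (3), converging uniformly to 0 – which is not a solution.)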

In this course we will focus on the linear theory (the nonlinear theory is almost always
based on ideas from the linear theory). Almost every one of the linear and nonlinear equations
above warrants its own course, so we will focus on the basics (namely: mainly elliptic equations).

2. Laplace equation

2.1. Sort of a physical motivation. The following is often used to motivate Laplace's
equation.
Assume Ω is an open set in Rn (usually R3), and u describes the density of a fluid or of heat
that is in an equilibrium state, i.e. no fluid is moving in or out, and no heat is exchanged
any more. This means that if we look at any subset Ω′ ⊂ Ω nothing flows out or in that
would change the density, that is,
\[
\int_{\partial \Omega'} \nabla u \cdot \nu = 0.
\]
By Green's divergence theorem this is equivalent to saying
\[
\int_{\Omega'} \mathrm{div}\,(\nabla u) = 0.
\]

Figure 2.1. Solve ∆u = 0 on the annulus (inner radius r = 2 and outer ra-
dius R = 4) with boundary condition g(θ) = 0 if |θ| = 2 and g(θ) = 4 sin(5σ)
for |θ| = 4 – where σ ∈ [0, 2π) is the angle such that (sin(σ), cos(σ)) = θ/|θ|.
Source: Fourthirtytwo/Wikipedia CC-SA 3

Since this happens for all Ω′ we obtain that
\[
\mathrm{div}\,(\nabla u) = 0.
\]
So we call div (∇u) =: ∆u and observe that ∆u = \sum_{i=1}^{n} \partial_{x_i x_i} u = \mathrm{tr}(D^2 u).
Often one thinks of the Laplace equation ∆u = 0 in Ω as describing a heat distribution: take a solid Ω
and apply some heat at its boundary, say the amount g(θ) at θ ∈ ∂Ω. Wait until the heat has had
time to fully propagate. Then the solution u : Ω → R to the Dirichlet boundary problem
\[
\begin{cases}
\Delta u = 0 & \text{in } \Omega,\\
u = g & \text{on } \partial\Omega,
\end{cases}
\]
describes the temperature u(x) at the point x ∈ Ω. Cf. Figure 2.1.

2.2. Definitions. Let Ω ⊂ Rn be an open set (this will always be the case from now on).

• We consider the homogeneous Laplace equation
(2.1) ∆u = 0 in Ω,
where we recall that ∆u = tr(D^2 u) = \sum_{i=1}^{n} \partial_{ii} u.
• The inhomogeneous equation (sometimes: Poisson equation) is, for a given function
f : Ω → R,
(2.2) ∆u = f in Ω.

Two types of boundary problems are common:



• Dirichlet-problem with Dirichlet-data g : ∂Ω → R:
\[
\begin{cases}
\Delta u = f & \text{in } \Omega,\\
u = g & \text{on } \partial\Omega.
\end{cases}
\]
• Neumann-problem with Neumann-data g : ∂Ω → R:
\[
\begin{cases}
\Delta u = f & \text{in } \Omega,\\
\partial_\nu u = g & \text{on } \partial\Omega.
\end{cases}
\]
Here ν : ∂Ω → Rn is the outwards facing unit normal of ∂Ω. (Often this is
combined with a normalizing assumption like \int_{\Omega} u = 0, because u + c is otherwise
a solution whenever u is a solution – i.e. non-uniqueness occurs.)
Definition 2.1. A function u ∈ C^2(Ω) is called harmonic if u pointwise solves
\[
\Delta u(x) = 0 \quad \text{in } \Omega.
\]
We also say that u is a solution to the homogeneous Laplace equation.
We say that u is a subsolution or subharmonic if
\[
\Delta u(x) \geq 0 \quad \text{in } \Omega.
\]
If
\[
\Delta u(x) \leq 0 \quad \text{in } \Omega,
\]
we say that u is a supersolution or superharmonic.
This terminology can be confusing, but it comes from the fact that −∆ is a "positive operator"
(i.e. it has only positive eigenvalues).
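For example, in R^2 the functions u(x_1, x_2) = x_1 x_2 and u(x_1, x_2) = x_1^2 − x_2^2 are harmonic, while u(x) = |x|^2 satisfies ∆u = 2n ≥ 0 and is therefore subharmonic (and −|x|^2 is superharmonic).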

2.3. Fundamental Solution, Newton- and Riesz Potential. There are many trivial
solutions (e.g. polynomials of order 1) of the Laplace equation. But these are not very interesting.
There is a special type of solution which is called the fundamental solution (which, funny
enough, is actually not a solution).
It appears when we want to compute the solution to an equation on the whole space,
(2.3) ∆u(x) = f(x).
For this we make a brief (formal) introduction to the Fourier transform.
The Fourier transform takes a map f : Rn → R and transforms it into Ff ≡ f̂ : Rn → R
as follows:
\[
\hat f(\xi) := \frac{1}{(2\pi)^{\frac{n}{2}}} \int_{\mathbb{R}^n} e^{-i\langle \xi, x\rangle}\, f(x)\, dx.
\]
The inverse Fourier transform f^∨ is defined as
\[
f^{\vee}(\xi) := \frac{1}{(2\pi)^{\frac{n}{2}}} \int_{\mathbb{R}^n} e^{+i\langle \xi, x\rangle}\, f(x)\, dx.
\]

It has the nice property that (f^∧)^∨ = f.
One of the important properties (which we will check in the exercises) is that derivatives
become polynomial factors after Fourier transform:
\[
(\partial_{x_i} g)^{\wedge}(\xi) = i\,\xi_i\, \hat g(\xi).
\]
For the Laplace operator ∆ this implies
\[
(\Delta u)^{\wedge}(\xi) = -|\xi|^2\, \hat u(\xi).
\]
(Side-remark: in this sense −∆ is a positive operator.)
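Indeed, for g decaying sufficiently fast an integration by parts (with no boundary terms) gives
\[
(\partial_{x_i} g)^{\wedge}(\xi)
= \frac{1}{(2\pi)^{\frac{n}{2}}} \int_{\mathbb{R}^n} e^{-i\langle \xi, x\rangle}\, \partial_{x_i} g(x)\, dx
= -\frac{1}{(2\pi)^{\frac{n}{2}}} \int_{\mathbb{R}^n} \partial_{x_i}\big(e^{-i\langle \xi, x\rangle}\big)\, g(x)\, dx
= i\,\xi_i\, \hat g(\xi),
\]
and applying this rule twice in each coordinate direction yields (∆u)^∧(ξ) = \sum_{i=1}^{n} (i\xi_i)^2\, \hat u(\xi) = −|ξ|^2 û(ξ).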
This means that if we look at the equation (2.3) and apply the Fourier transform on both sides
we have
\[
-|\xi|^2\, \hat u(\xi) = \hat f(\xi),
\]
that is,
\[
\hat u(\xi) = -|\xi|^{-2}\, \hat f(\xi).
\]
Inverting the Fourier transform we get an explicit formula for u in terms of the data f:
\[
u(x) = -\left( |\xi|^{-2}\, \hat f(\xi) \right)^{\vee}(x).
\]
This is not a very nice formula, so let us simplify it. Another nice property of the Fourier
transform (and its inverse) is that products become convolutions (up to a dimensional constant,
which we do not track in this formal computation). Namely,
\[
\left( g(\xi)\, f(\xi) \right)^{\vee}(x) = \int_{\mathbb{R}^n} g^{\vee}(x - z)\, f^{\vee}(z)\, dz.
\]
In our case we apply this with g(ξ) = −|ξ|^{−2} and with f̂ in place of f, and since (f̂)^∨ = f we get that
\[
u(x) = \int_{\mathbb{R}^n} g^{\vee}(x - z)\, f(z)\, dz.
\]
Now we need to compute g^∨(x − z), and for this we restrict our attention to the situation
where the dimension is n ≥ 3. In that case, just from the definition of the (inverse) Fourier
transform one can compute that since g is homogeneous of degree −2 (i.e. g(tξ) = t^{−2} g(ξ) for t > 0),
g^∨ is homogeneous of degree 2 − n. In particular
\[
g^{\vee}(x) = |x|^{2-n}\, g^{\vee}(x/|x|).
\]
Now an argument that radial functions stay radial under the Fourier transform leads us to
conclude that
\[
g^{\vee}(x) = c_1\, |x|^{2-n}.
\]
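Indeed, formally, substituting ξ = η/t for t > 0,
\[
g^{\vee}(tx) = \frac{1}{(2\pi)^{\frac{n}{2}}} \int_{\mathbb{R}^n} e^{i\langle tx, \xi\rangle}\, g(\xi)\, d\xi
= \frac{t^{-n}}{(2\pi)^{\frac{n}{2}}} \int_{\mathbb{R}^n} e^{i\langle x, \eta\rangle}\, g(\eta/t)\, d\eta
= t^{2-n}\, g^{\vee}(x),
\]
which is exactly the claimed homogeneity of degree 2 − n; applying this with t = |x| and x replaced by x/|x| gives the formula above.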
That is, we have arrived (by formal computations) at the conclusion that a solution of (2.3) should satisfy
\[
(2.4) \qquad u(x) = c_1 \int_{\mathbb{R}^n} |x - z|^{2-n}\, f(z)\, dz.
\]
The constant c_1 can be computed explicitly, and we will check below that this potential
representation of u really is true. This potential is called the Newton potential (which is
a special case of the so-called Riesz potentials). The kernel of the Newton potential is called
the fundamental solution of the Laplace equation (which, let us stress this again, is not a
solution).

Definition 2.2. The fundamental solution Φ(x) of the Laplace equation, for x ≠ 0, is given
as
\[
\Phi(x) =
\begin{cases}
-\dfrac{1}{2\pi} \log |x| & \text{for } n = 2,\\[6pt]
\dfrac{1}{n(n-2)\omega_n}\, |x|^{2-n} & \text{for } n \geq 3.
\end{cases}
\]
Here ωn is the Lebesgue measure of the unit ball, ωn = |B(0, 1)|.

One can explicitly check that ∆Φ(x) = 0 for x ≠ 0 (indeed, −∆Φ = δ0 where δ0 is the
Dirac measure at the point zero, cf. Remark 2.6).
Exercise 2.3. Show that Φ ∈ C^∞(Rn \ {0}) and compute that ∆Φ(x) = 0 for x ≠ 0.
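A convenient tool for this computation: for a radial function v(x) = w(|x|) one has, away from the origin,
\[
\Delta v(x) = w''(r) + \frac{n-1}{r}\, w'(r), \qquad r = |x|,
\]
and plugging in w(r) = r^{2−n} (resp. w(r) = log r for n = 2) indeed gives 0 for r ≠ 0. If one prefers, the computation can also be double-checked symbolically with a computer algebra system; a minimal, purely illustrative Python/SymPy sketch for n = 3 could look as follows:

import sympy as sp

# coordinates in R^3 and the radial variable r = |x|
x, y, z = sp.symbols('x y z', real=True, positive=True)
r = sp.sqrt(x**2 + y**2 + z**2)

n = 3
omega_n = sp.Rational(4, 3) * sp.pi             # Lebesgue measure of the unit ball in R^3
Phi = 1 / (n * (n - 2) * omega_n) * r**(2 - n)  # fundamental solution for n = 3

# Laplacian of Phi away from the origin
laplacian = sum(sp.diff(Phi, v, 2) for v in (x, y, z))
print(sp.simplify(laplacian))                   # prints 0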

The following statement justifies (somewhat) the notion of fundamental solution: the
fundamental solution Φ(x) can be used to construct all solutions to the inhomogeneous
Laplace equation.
Theorem 2.4. Let u be the Newton potential of f ∈ C^2_c(Rn), that is,
\[
u(x) := \int_{\mathbb{R}^n} \Phi(x - y)\, f(y)\, dy.
\]
Here C^2_c(Rn) denotes all those functions in C^2(Rn) that are constantly zero outside of
some compact set.
We have

• u ∈ C^2(Rn),
• −∆u = f in Rn.

Proof. First we show differentiability of u. By a substitution we may write
\[
u(x) := \int_{\mathbb{R}^n} \Phi(x - y)\, f(y)\, dy = \int_{\mathbb{R}^n} \Phi(z)\, f(x - z)\, dz.
\]
Now if we denote the difference quotient in direction of the i-th unit vector e_i by
\[
\Delta^{e_i}_h u(x) := \frac{u(x + h e_i) - u(x)}{h},
\]
then we obtain readily
\[
\Delta^{e_i}_h u(x) = \int_{\mathbb{R}^n} \Phi(z)\, \frac{f(x + h e_i - z) - f(x - z)}{h}\, dz = \int_{\mathbb{R}^n} \Phi(z)\, (\Delta^{e_i}_h f)(x - z)\, dz.
\]
One checks that Φ is locally integrable (it is not globally integrable!), that is, for every
bounded set Ω ⊂ Rn,
\[
(2.5) \qquad \int_{\Omega} |\Phi| < \infty.
\]


Indeed (we show this for n ≥ 3; the case n = 2 is an exercise): if Ω ⊂ Rn is a bounded
set, then it is contained in some large ball B(0, R), so that
\[
(2.6) \qquad \int_{\Omega} |\Phi| \leq C \int_{B(0,R)} |x|^{2-n}\, dx.
\]
Using Fubini's theorem (i.e. integration in polar coordinates),
\[
\int_{B(0,R)} |x|^{2-n}\, dx
= \int_0^R \int_{\partial B(0,r)} |\theta|^{2-n}\, d\mathcal{H}^{n-1}(\theta)\, dr
= \int_0^R r^{2-n} \int_{\partial B(0,r)} d\mathcal{H}^{n-1}(\theta)\, dr
= c_n \int_0^R r^{2-n}\, r^{n-1}\, dr
= c_n \int_0^R r\, dr
= \frac{c_n}{2}\, R^2 < \infty.
\]
This establishes (2.5).
On the other hand, (∆^{e_i}_h f) still has compact support for every h (contained in a fixed compact
set for, say, |h| ≤ 1). In particular, by dominated convergence we can conclude that
\[
\lim_{h \to 0} \Delta^{e_i}_h u(x) = \int_{\mathbb{R}^n} \Phi(z)\, \lim_{h \to 0} (\Delta^{e_i}_h f)(x - z)\, dz,
\]
that is,
\[
\partial_i u(x) = \int_{\mathbb{R}^n} \Phi(z)\, (\partial_i f)(x - z)\, dz.
\]
In the same way,
\[
\partial_{ij} u(x) = \int_{\mathbb{R}^n} \Phi(z)\, (\partial_{ij} f)(x - z)\, dz.
\]
Now the right-hand side of this equation is continuous in x (again using the compact support
of f). This means that u ∈ C^2(Rn).
To obtain that −∆u = f we first use the above argument to get
\[
\Delta u(x) = \int_{\mathbb{R}^n} \Phi(z)\, (\Delta f)(x - z)\, dz.
\]
Observe that
\[
(\Delta f)(x - z) = \Delta_x \big( f(x - z) \big) = \Delta_z \big( f(x - z) \big).
\]
Now we fix a small ε > 0 (that we later send to zero) and split the integral:
\[
\Delta u(x) = \int_{B(0,\varepsilon)} \Phi(z)\, (\Delta f)(x - z)\, dz + \int_{\mathbb{R}^n \setminus B(0,\varepsilon)} \Phi(z)\, (\Delta f)(x - z)\, dz =: I_\varepsilon + II_\varepsilon.
\]

The term I_ε contains the singularity of Φ, but we observe that
\[
I_\varepsilon \xrightarrow{\varepsilon \to 0} 0.
\]
Indeed, this follows from the absolute continuity of the integral, since Φ is integrable on
B(0, 1):
\[
|I_\varepsilon| \leq \sup_{\mathbb{R}^n} |\Delta f| \int_{B(0,\varepsilon)} |\Phi(z)|\, dz \xrightarrow{\varepsilon \to 0} 0.
\]

The term II_ε does not contain any singularity of Φ, which is smooth on Rn \ B(0, ε), so we
can perform an integration by parts¹ (in the variable z):
\[
II_\varepsilon = \int_{\mathbb{R}^n \setminus B(0,\varepsilon)} \Phi(z)\, (\Delta f)(x - z)\, dz
= \int_{\partial B(0,\varepsilon)} \Phi(z)\, \partial_\nu f(x - z)\, d\mathcal{H}^{n-1}(z)
- \int_{\mathbb{R}^n \setminus B(0,\varepsilon)} \nabla \Phi(z) \cdot \nabla_z f(x - z)\, dz.
\]
Here ν is the outer unit normal of Rn \ B(0, ε) on the sphere ∂B(0, ε), i.e. ν = −z/ε.
By the explicit form of Φ one computes that
\[
\left| \int_{\partial B(0,\varepsilon)} \Phi(z)\, \partial_\nu f(x - z)\, d\mathcal{H}^{n-1}(z) \right|
\leq \sup_{\mathbb{R}^n} |\nabla f| \int_{\partial B(0,\varepsilon)} |\Phi(z)|\, d\mathcal{H}^{n-1}(z)
\xrightarrow{\varepsilon \to 0} 0,
\]
since the last integral is of order ε for n ≥ 3 (and of order ε |log ε| for n = 2).

So we perform another integration by parts and have
\[
II_\varepsilon = o(1) - \int_{\partial B(0,\varepsilon)} \partial_\nu \Phi(z)\, f(x - z)\, d\mathcal{H}^{n-1}(z)
+ \underbrace{\int_{\mathbb{R}^n \setminus B(0,\varepsilon)} \Delta \Phi(z)\, f(x - z)\, dz}_{= 0}
= o(1) - \int_{\partial B(0,\varepsilon)} \partial_\nu \Phi(z)\, f(x - z)\, d\mathcal{H}^{n-1}(z).
\]
Here in the last step we used that ∆Φ = 0 away from the origin, Exercise 2.3.
Now we observe that the unit normal on ∂B(0, ε) is ν(z) = −z/ε and that
\[
D\Phi(z) =
\begin{cases}
-\dfrac{1}{2\pi}\, \dfrac{z}{|z|^2} & n = 2,\\[6pt]
\dfrac{1}{n(n-2)\omega_n}\, (2-n)\, |z|^{1-n}\, \dfrac{z}{|z|} = -\dfrac{1}{n\omega_n}\, \dfrac{z}{|z|^{n}} & n \geq 3.
\end{cases}
\]
Thus, for |z| = ε,
\[
\partial_\nu \Phi(z) = \nu \cdot D\Phi(z) = \frac{1}{n\omega_n}\, \varepsilon^{1-n}.
\]

¹The integration by parts rule:
\[
\int_{\Omega} f\, \partial_i g = \int_{\partial\Omega} f\, g\, \nu^i - \int_{\Omega} \partial_i f\, g,
\]
where ν is the normal of ∂Ω pointing outwards (from the point of view of Ω) and ν^i is the i-th component of
ν. Fun exercise: check this rule in 1D, to see the relation to what we all learned in Calc 1.
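In 1D, with Ω = (a, b), the rule reads
\[
\int_a^b f(x)\, g'(x)\, dx = f(b)\, g(b) - f(a)\, g(a) - \int_a^b f'(x)\, g(x)\, dx,
\]
since the outward normal is ν = +1 at the endpoint b and ν = −1 at the endpoint a – exactly the integration by parts rule from Calc 1.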

Thus we arrive at
\[
II_\varepsilon = o(1) - \int_{\partial B(0,\varepsilon)} \frac{1}{n\omega_n\, \varepsilon^{n-1}}\, f(x - z)\, d\mathcal{H}^{n-1}(z)
= o(1) - \fint_{\partial B(0,\varepsilon)} f(x - z)\, d\mathcal{H}^{n-1}(z)
= o(1) - f(x) + \fint_{\partial B(0,\varepsilon)} \big( f(x) - f(x - z) \big)\, d\mathcal{H}^{n-1}(z).
\]
Here we use the mean value notation
\[
\fint_{\partial B(0,\varepsilon)} := \frac{1}{\mathcal{H}^{n-1}(\partial B(0,\varepsilon))} \int_{\partial B(0,\varepsilon)}.
\]
Now one shows (exercise!) that for continuous f
\[
\lim_{\varepsilon \to 0} \fint_{\partial B(0,\varepsilon)} \big( f(x) - f(x - z) \big)\, d\mathcal{H}^{n-1}(z) = 0.
\]

(Indeed this is essentially Lebesgue's differentiation theorem; for continuous f it follows directly from continuity.) That is,
\[
II_\varepsilon = o(1) - f(x) \qquad \text{as } \varepsilon \to 0,
\]
and thus
\[
\Delta u(x) = I_\varepsilon + II_\varepsilon = -f(x) + o(1),
\]
and letting ε → 0 we have
\[
\Delta u(x) = -f(x),
\]
i.e. −∆u = f, as claimed. □
Exercise 2.5. Show that log |x| is locally integrable, i.e. that for any bounded set Ω ⊂ Rn
we have
\[
\int_{\Omega} \big|\log |x|\big|\, dx < \infty.
\]
Remark 2.6. One can argue (in a distributional sense, which we learn towards the end of
the semester) that
\[
-\Delta \Phi = \delta_0,
\]
where δ0 denotes the Dirac measure at 0, namely the measure such that
\[
\int_{\mathbb{R}^n} f(x)\, d\delta_0 = f(0) \qquad \text{for all } f \in C^0(\mathbb{R}^n).
\]
Observe that δ0 is not a function, only a measure. In this sense one can justify that
\[
-\Delta u(x) = -\Delta \int_{\mathbb{R}^n} \Phi(x - z)\, f(z)\, dz
= \int_{\mathbb{R}^n} (-\Delta \Phi)(x - z)\, f(z)\, dz
= \int_{\mathbb{R}^n} f(z)\, d\delta_x(z)
= f(x),
\]
where δ_x denotes the Dirac measure at the point x.
