INVERSE SOURCE PROBLEMS FOR ELLIPTIC AND PARABOLIC EQUATIONS

by

Qitong Li

Charlotte

2019

Approved by:

© 2019
Qitong Li
ALL RIGHTS RESERVED
ABSTRACT
QITONG LI. Inverse source problems for elliptic and parabolic equations. (Under
the direction of DR. LOC NGUYEN)
In this dissertation, we solve two inverse problems. The first one is the inverse
source problem for the Helmholtz equation that governs the wave propagating in
anisotropic media. The second one is to recover the initial condition for parabolic
equations from the lateral Cauchy data.
Regarding the first problem, we propose a numerical method to compute a source
function from the external measurement of the wave field generated by that source.
We derive an equation which is independent of the unknown source. However, this
equation is not a standard partial differential equation. A method to solve it is not yet
available. By truncating the Fourier series of the wave field with respect to a special
basis, we can approximate that equation by a system of elliptic partial differential
equations. The solution to this “approximate” system directly yields the desired source
function. We solve that system of elliptic equations by the quasi-reversibility method.
The convergence of this method is proved in this dissertation via a new Carleman
estimate.
Regarding the second problem, we find the initial condition for parabolic equations from the lateral Cauchy data of their solutions. We employ a technique similar
to the one mentioned in the previous paragraph. We split our method into two stages.
In the first stage, we establish an additional equation for the solution to the parabolic
equation. Solving this equation is challenging. The theory to solve it is not yet
available. Hence, in the second stage, we approximate this equation by an elliptic
system. This system is solved by the quasi-reversibility method. The convergence
of the quasi-reversibility method as the measurement noise goes to zero is proved.
We present the implementation of our algorithm in detail and verify our method by showing some numerical examples.
ACKNOWLEDGEMENTS
I would like to express my sincere gratitude to my advisor Dr. Loc Nguyen. I would not have been able to get here without his assistance, guidance, and patience.
I would also like to thank Dr. Michael Klibanov for his advice and fruitful discussions. Many thanks to Dr. Shaozhong Deng and Dr. Hwan Lin for their valuable
contributions and comments on my dissertation. I also acknowledge the Mathematics and Statistics Department, which gave me the opportunity to study here and has supported me over the years.
Thanks to my parents for encouraging and supporting me as I moved toward my goals throughout my life.
TABLE OF CONTENTS
CHAPTER 1: INTRODUCTION 1
CHAPTER 2: Preliminaries 10
3.4.3. Tests 44
4.3.2. Tests 61
REFERENCES 67
CHAPTER 1: INTRODUCTION
This dissertation aims to solve two inverse problems. The first one is to reconstruct
a source term from the boundary measurement of the wave field, governed by the
Helmholtz equation. The second one is to recover the initial condition for parabolic
equations from the lateral Cauchy data. These two inverse problems have important real-world applications in electroencephalography, biomedical imaging, brain imaging, photoacoustic tomography, seismic imaging, determining the spatially distributed temperature inside a solid, and identifying pollution on the surface of rivers or lakes [1–5].
In the first problem, we propose a numerical method to solve an inverse source
problem for the Helmholtz equation in the multi-frequency regime. This is the prob-
lem of determining the unknown source from external measurement of the wave field.
Some similar inverse source problems for Helmholtz-like PDEs were studied both
analytically and numerically in [1, 6]. In particular, in works [6, 7] uniqueness and
stability results were proven for a special case and it was also shown that the sta-
bility estimate improves when the frequency grows. The uniqueness of this inverse
source problem was proven in [8] for nonconstant coefficients. To the best of our
knowledge, past numerical methods for these problems are based on various minimizations of least-squares mismatch functionals. Good-quality numerical
solutions are obtained in [2, 8, 9] for high frequencies. However, convergence rates of
minimizers to the exact solution when the noise in the data tends to zero were not
studied in those papers. On the other hand, we refer here to the work [10], in which a
non-iterative method, based on a fresh idea, was proposed to solve the inverse source
problem for a homogeneous medium. Uniqueness and stability results were proven
in [10] and good quality numerical results were presented. In this dissertation we
solve the inverse source problem for inhomogeneous media. We introduce a new numerical method based on the Quasi-Reversibility Method (QRM). The Lipschitz-like convergence rate of the QRM solution to the exact solution, as the noise in the data tends to zero, is proved.
The second inverse problem is a problem of recovering the initial condition of the
parabolic equation from the lateral Cauchy data. This problem has many real-world
applications; for example, determining the spatially distributed temperature inside a solid from the boundary measurement of the heat and heat flux in the time domain [11]; identifying pollution on the surface of rivers or lakes [12]; and effectively monitoring heat-conduction processes in the steel industry, in glass and polymer forming, and in nuclear power stations [13]. Due to its realistic applications, this problem has been studied
intensively. The uniqueness of such problems is well known; see [14]. It can also be deduced from the logarithmic stability results in [11, 13]. The natural approach to solve this problem is the optimal control method; that is, to minimize a mismatch
functional. The proof of the convergence of the optimal control method to the true
solution to these inverse problems is challenging and is omitted. In this dissertation,
we introduce an approximate model, as a coupled linear system of elliptic partial
differential equations. Solution to this model is the vector of Fourier coefficients of
the solutions to the parabolic equation mentioned above. This approximate model
is solved by the quasi-reversibility method. We will prove the convergence for the
quasi-reversibility method as the measurement noise tends to 0. The convergence rate is Lipschitz. We present the implementation of our algorithm in detail and verify our method by showing some numerical examples. More details can be found in Section 1.2.
Below, $x = (x_1, \dots, x_{n-1}, z) \in \mathbb{R}^n$, $n \geq 2$. Let $\Omega$ be the cube $(-R, R)^n \subset \mathbb{R}^n$ for some $R \geq 1$, and
$$\Gamma_+ = \{x \in \partial\Omega : z = R\}. \quad (1.1.1)$$
For $i, j = 1, \dots, n$, let the functions $a_{ij} \in C^1(\mathbb{R}^n)$, $b_j \in C(\mathbb{R}^n)$, $c \in C(\mathbb{R}^n)$ be such that:

1. For all $x \in \mathbb{R}^n$,
$$a_{ij}(x) = a_{ji}(x), \quad 1 \leq i, j \leq n. \quad (1.1.2)$$

2. For all $x \in \mathbb{R}^n$ and $\xi \in \mathbb{R}^n$,
$$\mu_1 |\xi|^2 \leq \sum_{i,j=1}^{n} a_{ij}(x)\, \xi_i \xi_j \leq \mu_2 |\xi|^2. \quad (1.1.3)$$

3. For all $x \in \mathbb{R}^n \setminus \Omega$,
$$a_{ij}(x) = \begin{cases} 1 & \text{if } i = j, \\ 0 & \text{if } i \neq j. \end{cases} \quad (1.1.4)$$

4. For all $x \in \mathbb{R}^n \setminus \Omega$,
$$b_j(x) = c(x) = 0. \quad (1.1.5)$$

Define the operators
$$Lu = \sum_{i,j=1}^{n} a_{ij}(x) u_{x_i x_j} + \sum_{i=1}^{n} b_i(x) u_{x_i} + c(x) u \quad \text{for } u \in H^2(\mathbb{R}^n), \quad (1.1.6)$$
$$L_0 u = \sum_{i,j=1}^{n} a_{ij}(x) u_{x_i x_j}. \quad (1.1.7)$$
Let $k > 0$ be the wave number and $u = u(x, k)$ be the complex-valued wave field of wave number $k$, generated by the source function of the separable form $g(k) f(x)$, where $g \in C^1[0, \infty)$ and $f \in C^1(\mathbb{R}^n)$. The wave field $u(x, k) \in C^2(\mathbb{R}^n)$, $k > 0$, satisfies the equation
$$Lu(x, k) + k^2 n^2(x) u(x, k) = g(k) f(x), \quad x \in \mathbb{R}^n, \quad (1.1.8)$$
together with the radiation condition (1.1.9).
Here, the function n ∈ C 1 (Rn ) is the spatially distributed refractive index. We assume
that
n (x) = 1 for x ∈ Rn \ Ω. (1.1.10)
Condition (1.1.10) means that the refractive index of the background (air or vacuum)
is scaled to be 1. See [15] for the well-posedness of problem (1.1.8)–(1.1.9) in the
case L = ∆. The well-posedness for the general operator L is an assumption in this
dissertation. Given numbers $\underline{k}$ and $\overline{k}$ such that $0 < \underline{k} < \overline{k} < \infty$, we assume that the function
$$g : [\underline{k}, \overline{k}] \to \mathbb{R}$$
does not vanish on $[\underline{k}, \overline{k}]$.
Problem 1 (Inverse source problem with Cauchy data). Reconstruct the functions
$f(x)$, $x \in \Omega$, given the following data:
$$F(x, k) = u(x, k), \quad x \in \partial\Omega,\ k \in (\underline{k}, \overline{k}), \quad (1.1.11)$$
and
$$G(x, k) = \partial_\nu u(x, k), \quad x \in \Gamma_+,\ k \in (\underline{k}, \overline{k}). \quad (1.1.12)$$
Problem 2 (Inverse source problem with Dirichlet data). Reconstruct the functions
$f(x)$, $x \in \Omega$, given only the Dirichlet data
$$F(x, k) = u(x, k), \quad x \in \partial\Omega,\ k \in (\underline{k}, \overline{k}). \quad (1.1.13)$$
The Dirichlet boundary data (1.1.13) implicitly contain the Neumann boundary data for the function $u$ on the entire boundary $\partial\Omega$. Indeed, for each $k \in (\underline{k}, \overline{k})$ one can uniquely solve equation (1.1.8) with the radiation condition (1.1.9) and boundary condition (1.1.13) in the unbounded domain $\mathbb{R}^n \setminus \Omega$. The resulting solution provides the Neumann boundary condition $\partial_\nu u(x, k)$ for $x \in \partial\Omega$, $k \in (\underline{k}, \overline{k})$, where $\nu$ is the unit outward normal vector on $\partial\Omega$.
We propose a new numerical method which enables us to establish the convergence rate of minimizers of a certain functional of the Quasi-Reversibility Method (QRM) to the exact solution as the noise in the data tends to zero. Our method is based on several ingredients:
1. Elimination of the unknown source function $f(x)$ from the original PDE via differentiation with respect to $k$ of the function $u(x, k)/g(k)$.

2. The use of a newly published orthonormal basis in $L^2(\underline{k}, \overline{k})$, see [16], to obtain an overdetermined boundary value problem for a system of coupled elliptic equations.
3. The use of the QRM to find an approximate solution of that boundary value
problem.
4. The formulation and the proof of a new Carleman estimate for the operator L0
in (1.1.7).
5. In the case of Problem 1, the use of this Carleman estimate for establishing the convergence rate of the minimizers of the QRM to the exact solution as the noise in the data tends to zero.
Recently a similar idea was applied to develop a new numerical method for the X-
ray computed tomography with a special case of incomplete data [17] as well as to the
development of a globally convergent numerical method for a 1D coefficient inverse
problem [18]. The above items 1, 4 and 5 have roots in the Bukhgeim-Klibanov
method, which was originally introduced in [19]. Even though there exists now a
significant number of publications on this method, we refer here only to a few of
them [20–23] since this thesis is not about that method. The original goal of [19] was
to prove uniqueness theorems for coefficient inverse problems. Nowadays, however,
ideas of this method are applied for constructions of numerical methods for coefficient
inverse problems and other ill-posed problems, see, e.g. [16, 18, 24, 25].
Given N > 1, we approximate the wave field by its N th partial sum of the Fourier
series with respect to a special orthonormal basis. Consider the Fourier coefficients of
the wave field as new unknowns. We can derive from the partial differential equation
mentioned in the previous paragraph a system for such Fourier coefficients. Our
numerical reconstruction is now based on a numerical solver for this system. Since
this “cut-off" step is not rigorous, the obtained system is just an “approximate", rather
than exact, model. However, we still employ this technique since it is quite efficient
in solving many linear and nonlinear inverse problems [17, 18, 24–28].
Note that the system for Problem 1 is over-determined since the boundary data
involves both Dirichlet and Neumann information of the wave field. We solve the over-determined boundary value problem for the system of PDEs for Problem 1 by the quasi-reversibility method. This method is well known to be a perfect tool for solving overdetermined boundary value problems for PDEs. The quasi-reversibility method
was first introduced by Lattès and Lions [29] for numerical solutions of ill-posed prob-
lems for PDEs. It has been studied intensively since then, see e.g., [23, 30–36].
A survey on this method can be found in [37]. The solutions of the systems above
due to the quasi-reversibility method are called the regularized solutions. The conver-
gence of the regularized solutions to the true ones as the noise tends to 0 was proved
in [24] using Carleman estimates for the case when L = ∆ for spherical domains.
In this thesis, we extend the Carleman estimate to the case where the matrix of principal-part coefficients is not necessarily the identity, and then use it to prove the convergence of the quasi-reversibility method. In contrast, the well-posedness for Problem 2 is only studied numerically in
this dissertation.
We will prove a Carleman estimate in Section 2.2.1. We introduce the algorithms
and the quasi-reversibility method to solve Problems 1 and 2 in Section 3.1. In Section
3.3, we discuss the convergence of the regularized solutions. Then, in Section
3.4, we describe the implementation leading to the numerical results and show several
numerical examples.
$$Lv = \sum_{i,j=1}^{d} a_{ij}(x) v_{x_i x_j} + \sum_{i=1}^{d} b_i(x) v_{x_i} + c(x) v \quad \text{for } v \in C^2(\mathbb{R}^d). \quad (1.2.3)$$
Problem 3 is the problem of recovering the initial condition of the parabolic equation from the lateral Cauchy data. In this dissertation, we employ the technique developed by our own research group. The main point of this technique is to derive an approximate model for the Fourier coefficients of the solution to the governing partial differential equation. This technique was first introduced in [16]. This approximate
model is a system of elliptic equations. It, together with Cauchy boundary data, is
solved by the quasi-reversibility method. This approach was used to solve an inverse
source problem for the Helmholtz equation [40] and to invert the Radon transform with incomplete data [17]. In particular, Klibanov, Li and Zhang [41] used the convexification method, a stronger version of this technique, to compute numerical solutions to the nonlinear problem of electrical impedance tomography with restricted Dirichlet-to-Neumann map data. It is remarkable that the numerical solutions in [41] due to the convexification method are impressive.
As mentioned in the previous paragraph, we employ the quasi-reversibility method
to solve an approximate model for Fourier coefficients of the solution to (1.2.4). This
method was first introduced by Lattès and Lions [29]. It is used to compute numerical solutions to ill-posed problems for partial differential equations. Due to its strength, the quasi-reversibility method has since attracted great attention from the scientific community; see, e.g., [23, 30–35, 42–44]. We refer the reader to [37] for a
survey on this method. The solution of the approximate model in the previous para-
graph due to the quasi-reversibility method is called regularized solution in the theory
of ill-posed problems [45]. A question immediately arises about the convergence of the quasi-reversibility method: whether or not the regularized solutions converge to the true solution of our system of partial differential equations as the noise tends to 0. The affirmative answer to this question is obtained using a general Carleman estimate. Moreover, we employ a Carleman estimate (in Section 2.2.2) to prove that the convergence rate is Lipschitz. It is worth mentioning that in the celebrated paper [19], Bukhgeim and Klibanov discovered the use of Carleman estimates in studying inverse problems for all three main types of partial differential equations.
CHAPTER 2: Preliminaries
For each $n \geq 1$, define $\varphi_n(k) = (k - k_0)^{n-1} \exp(k - k_0)$, $k \in (a, b)$, where $k_0 = (a + b)/2$. The sequence $\{\varphi_n\}_{n=1}^{\infty}$ is complete in $L^2(a, b)$. Applying the Gram–Schmidt orthonormalization procedure to $\{\varphi_n\}_{n=1}^{\infty}$, we obtain an orthonormal basis $\{\Psi_n\}_{n=1}^{\infty}$ of $L^2(a, b)$. We then approximate
$$u(x, k) = \sum_{n=1}^{N} u_n(x) \Psi_n(k), \quad (2.1.1)$$
where
$$u_n(x) = \int_a^b u(x, k) \Psi_n(k)\, dk.$$
In the truncation context (2.1.1), the partial derivative of $u(x, k)$ with respect to $k$ is approximated by
$$\partial_k u(x, k) = \sum_{n=1}^{N} u_n(x) \Psi'_n(k) \quad (2.1.2)$$
for all $x \in \Omega$ and $k \in (a, b)$. To reconstruct the wave field $u(x, k)$, we compute the $u_n(x)$, $1 \leq n \leq N$, via (2.1.1), (2.1.2) and the knowledge of $\Psi_n$ and $\Psi'_n$. We therefore require that no $\Psi'_n$ be identically 0. The sine and cosine basis of the usual Fourier transform does not meet this requirement, while it is not hard to verify that the basis $\{\Psi_n\}_{n=1}^{\infty}$ does.
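The construction of this basis, and the property that no $\Psi'_n$ vanishes identically, can be checked numerically. The sketch below builds the basis by Gram–Schmidt on a uniform grid; the interval $(a, b) = (1, 2)$ and the truncation level $N = 10$ are illustrative choices, not values prescribed by the text.

```python
import numpy as np

# Illustrative interval (a, b) and grid; not values prescribed by the text.
a, b, M = 1.0, 2.0, 2001
k = np.linspace(a, b, M)
dk = k[1] - k[0]
k0 = (a + b) / 2.0

def inner(f, g):
    # L^2(a, b) inner product approximated by the trapezoidal rule.
    h = f * g
    return dk * (h.sum() - 0.5 * (h[0] + h[-1]))

# The complete sequence phi_n(k) = (k - k0)^(n-1) exp(k - k0), n = 1, 2, ...
N = 10
phi = np.array([(k - k0) ** (n - 1) * np.exp(k - k0) for n in range(1, N + 1)])

# Gram-Schmidt orthonormalization yields the basis {Psi_n}.
Psi = np.zeros_like(phi)
for n in range(N):
    w = phi[n].copy()
    for m in range(n):
        w -= inner(w, Psi[m]) * Psi[m]
    Psi[n] = w / np.sqrt(inner(w, w))

# Orthonormality with respect to the discrete inner product ...
G = np.array([[inner(Psi[i], Psi[j]) for j in range(N)] for i in range(N)])
assert np.allclose(G, np.eye(N), atol=1e-6)

# ... and, unlike sines and cosines over a full period, no Psi_n' vanishes identically.
dPsi = np.gradient(Psi, dk, axis=1)
assert all(np.abs(dPsi[n]).max() > 1e-2 for n in range(N))
```

The derivative here is a finite-difference stand-in; in an actual implementation $\Psi'_n$ can be obtained analytically from the Gram–Schmidt coefficients, since each $\varphi'_n$ is known in closed form.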
This basis was introduced in [16]. It has since been successfully used to solve nonlinear coefficient inverse problems [18, 24] and the inverse X-ray tomographic problem with incomplete data [17].
The following result plays an important role in our analysis.
$$D_N = (d_{mr})_{m,r=1}^{N}. \quad (2.1.4)$$

Proposition 2.1.1. The matrix $D_N$ in (2.1.4) is invertible.

The main aim of this section is to prove the Carleman estimate of Theorem 2.2.1.
For brevity, we assume that the function $u$ in Theorem 2.2.1 is real valued. Indeed, the theorem holds true for complex-valued functions $u$; this fact follows directly from the theorem itself. Hence, in this section, we define the space
$$H^2_{0,\#}(\Omega) = \big\{ w \in H^2(\Omega) : w|_{\partial\Omega} = 0,\ \partial_\nu w|_{\Gamma_+} = 0 \big\}$$
in (3.2.7) as the set of all real-valued functions satisfying the same constraints. Recall the uniformly elliptic operator $L_0$ in (1.1.7).
Theorem 2.2.1 (Carleman estimate). Let the number $b > R$. Let the coefficients $a_{ij}(x)$ of the uniformly elliptic operator $L_0$ defined in (1.1.7) satisfy conditions (1.1.2), (1.1.3) and also $a_{ij} \in C^1(\overline{\Omega})$. Then there exist numbers
$$p_0 = p_0\big(\mu_1, \mu_2, b, n, R, \max_{ij} \|a_{ij}\|_{C^1(\overline{\Omega})}\big) > 1$$
and
$$\lambda_0 = \lambda_0\big(\mu_1, \mu_2, b, n, R, \max_{ij} \|a_{ij}\|_{C^1(\overline{\Omega})}\big) \geq 1,$$
both of which depend only on the listed parameters, such that the following Carleman estimate holds:
$$\int_\Omega (L_0 u)^2 \exp[2\lambda(z+b)^p]\, dx \geq C_2 \lambda \int_\Omega \big((\nabla u)^2 + \lambda^2 u^2\big) \exp[2\lambda(z+b)^p]\, dx, \quad (2.2.2)$$
for all $\lambda \geq \lambda_0$, $p \geq p_0$ and $u \in H^2_{0,\#}(\Omega)$. Here, the constant
$$C_2 = C_2\big(\mu_1, \mu_2, b, p, n, R, \max_{ij} \|a_{ij}\|_{C^1(\overline{\Omega})}\big) > 0$$
depends only on the listed parameters.
Proof. Below in this proof, $u \in C^2(\overline{\Omega}) \cap H^2_{0,\#}(\Omega)$; the case $u \in H^2_{0,\#}(\Omega)$ can then be obtained via a density argument. In this proof, $C_2 > 0$ denotes different positive numbers depending only on the parameters listed above. On the other hand, everywhere below $C_3 = C_3\big(\mu_1, \mu_2, b, R, \max_{ij}\|a_{ij}\|_{C^1(\overline{\Omega})}\big) > 0$ also denotes different positive constants depending only on the listed parameters but, unlike $C_2$, independent of $p$. Also, in this proof $O(1/\lambda)$ denotes different functions belonging to $C^1(\overline{\Omega})$ and satisfying the estimate
$$\|O(1/\lambda)\|_{C^1(\overline{\Omega})} \leq \frac{C_2}{\lambda} \quad \text{for all } \lambda, p \geq 1. \quad (2.2.3)$$
Below, $U_1, \dots, U_{14}$ denote vector functions satisfying
$$\int_{\partial\Omega} U_r \cdot \nu\, d\sigma \geq 0, \quad r \in \{1, \dots, 14\}, \quad (2.2.4)$$
where $U_r \cdot \nu$ is the scalar product of the vectors $U_r$ and $\nu$ in $\mathbb{R}^n$; recall that $\nu$ is the outward unit normal vector on $\partial\Omega$. In fact, it follows from the proof that the integrals in (2.2.4) equal zero for $r = 1, 2$, while they are non-negative starting from $r = 3$.
Introduce the new function v (x) = u (x) exp [λ (z + b)p ] . Then
and
$$y_1 = \sum_{i,j=1}^{n-1} a_{ij} v_{x_i x_j} + \sum_{i=1}^{n-1} a_{in} v_{x_i z} + a_{nn} v_{zz}. \quad (2.2.6)$$
Thus we obtain (2.2.10). We now estimate from below each term on the right-hand side of inequality (2.2.10) separately. We do this in several steps.
Step 1. Estimate from below the quantity $2y_1 y_3 (z+b)^{2-p}$. By (2.2.6) and (2.2.7), we have
$$2y_1 y_3 (z+b)^{2-p} = -4\lambda p (z+b)\, a_{nn} v_z \Big( \frac{1}{2} \sum_{i,j=1}^{n-1} \big(a_{ij} v_{x_i x_j} + a_{ij} v_{x_j x_i}\big) + \sum_{i=1}^{n-1} a_{in} v_{x_i z} + a_{nn} v_{zz} \Big). \quad (2.2.11)$$
For the first group of terms,
$$-2\lambda p (z+b)\, a_{nn} v_z \big(a_{ij} v_{x_i x_j} + a_{ij} v_{x_j x_i}\big)$$
$$= -2\lambda p \Big[ \big((z+b) a_{nn} a_{ij} v_z v_{x_i}\big)_{x_j} - (z+b) a_{nn} a_{ij} v_{z x_j} v_{x_i} - (z+b) (a_{nn} a_{ij})_{x_j} v_z v_{x_i} \Big]$$
$$\quad - 2\lambda p \Big[ \big((z+b) a_{nn} a_{ij} v_z v_{x_j}\big)_{x_i} - (z+b) a_{nn} a_{ij} v_{z x_i} v_{x_j} - (z+b) (a_{nn} a_{ij})_{x_i} v_z v_{x_j} \Big]$$
$$= 2\lambda p \big((z+b) a_{nn} a_{ij} v_{x_i} v_{x_j}\big)_z - 2\lambda p \big((z+b) a_{nn} a_{ij}\big)_z v_{x_i} v_{x_j}$$
$$\quad + \big(-2\lambda p (z+b) a_{nn} a_{ij} v_z v_{x_i}\big)_{x_j} + 2\lambda p (z+b) (a_{nn} a_{ij})_{x_j} v_z v_{x_i}$$
$$\quad + \big(-2\lambda p (z+b) a_{nn} a_{ij} v_z v_{x_j}\big)_{x_i} + 2\lambda p (z+b) (a_{nn} a_{ij})_{x_i} v_z v_{x_j}.$$
Hence,
$$-4\lambda p (z+b)\, a_{nn} \sum_{i=1}^{n-1} a_{in} v_z v_{x_i z} = \sum_{i=1}^{n-1} \big(-2\lambda p (z+b) a_{nn} a_{in} v_z^2\big)_{x_i} + \sum_{i=1}^{n-1} 2\lambda p (z+b) (a_{nn} a_{in})_{x_i} v_z^2.$$
Therefore,
$$-4\lambda p (z+b)\, a_{nn} \sum_{i=1}^{n-1} a_{in} v_z v_{x_i z} \geq -C_3 \lambda p\, v_z^2 + \mathrm{div}\, U_2. \quad (2.2.13)$$
Now, $U_2 \cdot \nu = 0$ for $x \in \partial\Omega$ for two reasons: first, because $v_z(x) = 0$ for $x_i = \pm R$ and, second, due to condition (2.2.1). Hence, due to the first reason, we do not actually use condition (2.2.1) here yet.

Next, we estimate the term $-4\lambda p (z+b) a_{nn}^2 v_z v_{zz}$ in (2.2.11). Since, by (2.2.8),
$$v_z^2 = \big(u_z^2 + 2\lambda p (z+b)^{p-1} u_z u + \lambda^2 p^2 (z+b)^{2p-2} u^2\big) \exp[2\lambda(z+b)^p],$$
we have
$$-C_3 \lambda p\, v_z^2 = -C_3 \lambda p \big(u_z^2 + 2\lambda p (z+b)^{p-1} u_z u + \lambda^2 p^2 (z+b)^{2p-2} u^2\big) \exp[2\lambda(z+b)^p],$$
and
$$-4\lambda p (z+b) a_{nn}^2 v_z v_{zz} \geq 2\lambda^3 p^3 (2p-1)(z+b)^{2p-2} \mu_1^2 \Big(1 + \frac{(z+b)(a_{nn}^2)_z}{(2p-1)\mu_1^2}\Big) v^2 + \big(-2\lambda^3 p^3 (z+b)^{2p-1} a_{nn}^2 v^2\big)_z.$$
Choose the number
$$p_0 = p_0\big(\mu_1, \mu_2, b, n, R, \max_{ij} \|a_{ij}\|_{C^1(\overline{\Omega})}\big) > 1$$
such that
$$1 + \frac{(z+b)(a_{nn}^2)_z}{(2p-1)\mu_1^2} \geq \frac{1}{2} \quad \text{for all } p \geq p_0. \quad (2.2.19)$$
Next, we estimate the quantity
$$-2\lambda p (z+b) \Big(\sum_{k=1}^{n-1} a_{kn} v_{x_k}\Big) \Big(\sum_{i,j=1}^{n-1} a_{ij} v_{x_i x_j} + \sum_{i=1}^{n-1} a_{in} v_{x_i z} + a_{nn} v_{zz}\Big). \quad (2.2.21)$$
First,
$$-\lambda p (z+b) a_{kn} v_{x_k} \big(a_{ij} v_{x_i x_j} + a_{ji} v_{x_j x_i}\big) = -\lambda p (z+b) a_{kn} a_{ij} \big(v_{x_k} v_{x_i x_j} + v_{x_k} v_{x_j x_i}\big)$$
$$= \big(-\lambda p (z+b) a_{kn} a_{ij} v_{x_k} v_{x_j}\big)_{x_i} + \lambda p (z+b) a_{kn} a_{ij} v_{x_k x_i} v_{x_j} + \lambda p (z+b) (a_{kn} a_{ij})_{x_i} v_{x_k} v_{x_j}$$
$$\quad + \big(-\lambda p (z+b) a_{kn} a_{ij} v_{x_k} v_{x_i}\big)_{x_j} + \lambda p (z+b) a_{kn} a_{ij} v_{x_k x_j} v_{x_i} + \lambda p (z+b) (a_{kn} a_{ij})_{x_j} v_{x_k} v_{x_i}.$$
Next,
$$-2\lambda p (z+b) \Big(\sum_{k=1}^{n-1} a_{kn} v_{x_k}\Big)\Big(\sum_{i,j=1}^{n-1} a_{ij} v_{x_i x_j}\Big) \geq -C_3 \lambda p (\nabla v)^2 + \mathrm{div}\, U_8. \quad (2.2.24)$$
Considering in (2.2.22) and (2.2.23) the explicit forms of the coordinates of the vector function $U_8$ and using (2.2.1), we conclude that $U_8$ satisfies condition (2.2.4).
We now estimate the term
$$-2\lambda p (z+b) \Big(\sum_{k=1}^{n-1} a_{kn} v_{x_k}\Big)\Big(\sum_{i=1}^{n-1} a_{in} v_{x_i z}\Big). \quad (2.2.25)$$
We have
$$-2\lambda p (z+b) \Big(\sum_{k=1}^{n-1} a_{kn} v_{x_k}\Big)\Big(\sum_{i=1}^{n-1} a_{in} v_{x_i z}\Big) = -\lambda p (z+b) \sum_{i,k=1}^{n-1} a_{kn} a_{in} \big(v_{x_k} v_{x_i z} + v_{x_i} v_{x_k z}\big).$$
We have:
$$-\lambda p (z+b) a_{kn} a_{in} \big(v_{x_k} v_{x_i z} + v_{x_i} v_{x_k z}\big) = \big(-\lambda p (z+b) a_{kn} a_{in} v_{x_i} v_{x_k}\big)_z + \lambda p \big((z+b) a_{kn} a_{in}\big)_z v_{x_i} v_{x_k}.$$
Hence,
$$-2\lambda p (z+b) \Big(\sum_{k=1}^{n-1} a_{kn} v_{x_k}\Big)\Big(\sum_{i=1}^{n-1} a_{in} v_{x_i z}\Big) \geq -C_3 \lambda p (\nabla v)^2 + \mathrm{div}\, U_9, \quad (2.2.26)$$
where (2.2.4) is valid for $U_9$. Further,
$$-2\lambda p (z+b) a_{kn} a_{nn} v_{x_k} v_{zz} = \big(-2\lambda p (z+b) a_{kn} a_{nn} v_{x_k} v_z\big)_z + 2\lambda p \big((z+b) a_{kn} a_{nn}\big)_z v_{x_k} v_z + 2\lambda p (z+b) a_{kn} a_{nn} v_{x_k z} v_z,$$
so that
$$-2\lambda p (z+b) \Big(\sum_{k=1}^{n-1} a_{kn} v_{x_k}\Big) a_{nn} v_{zz} \geq -C_3 \lambda p (\nabla v)^2 + \mathrm{div}\, U_{10}, \quad (2.2.28)$$
where (2.2.4) is valid for $U_{10}$. Summing up (2.2.24), (2.2.26) and (2.2.28), we obtain the desired bound for this group of terms. Next,
$$2y_2 y_4 (z+b)^{2-p} = -2\lambda^3 p^3 (z+b)^{2p-1} \sum_{i=1}^{n-1} a_{in} a_{nn} v_{x_i} v = \sum_{i=1}^{n-1} \big(-\lambda^3 p^3 (z+b)^{2p-1} a_{in} a_{nn} v^2\big)_{x_i} + \sum_{i=1}^{n-1} \lambda^3 p^3 (z+b)^{2p-1} (a_{in} a_{nn})_{x_i} v^2.$$
Comparing this with (2.2.10), (2.2.16), (2.2.19), (2.2.20) and (2.2.29), we obtain
$$(L_0 u)^2 \exp[2\lambda(z+b)^p](z+b)^{2-p} \geq -C_3 \lambda p (\nabla u)^2 \exp[2\lambda(z+b)^p] + \cdots$$
Step 5. Estimate from below the quantity $-(L_0 u)\, u \exp[2\lambda(z+b)^p]$. We have
$$-(L_0 u)\, u \exp[2\lambda(z+b)^p] = -\sum_{i,j=1}^{n-1} a_{ij} u_{x_i x_j} u \exp[2\lambda(z+b)^p] - \sum_{i=1}^{n-1} a_{in} u_{x_i z} u \exp[2\lambda(z+b)^p] - a_{nn} u_{zz} u \exp[2\lambda(z+b)^p]$$
$$= \sum_{i,j=1}^{n-1} \big(-a_{ij} u_{x_j} u \exp[2\lambda(z+b)^p]\big)_{x_i} + \sum_{i,j=1}^{n-1} a_{ij} u_{x_i} u_{x_j} \exp[2\lambda(z+b)^p] + \sum_{i,j=1}^{n-1} (a_{ij})_{x_i} u_{x_j} u \exp[2\lambda(z+b)^p]$$
$$\quad + \sum_{i=1}^{n-1} \big(-a_{in} u_z u \exp[2\lambda(z+b)^p]\big)_{x_i} + \sum_{i=1}^{n-1} (a_{in})_{x_i} u_z u \exp[2\lambda(z+b)^p] + \sum_{i=1}^{n-1} a_{in} u_z u_{x_i} \exp[2\lambda(z+b)^p]$$
$$\quad + \big(-a_{nn} u_z u \exp[2\lambda(z+b)^p]\big)_z + a_{nn} u_z^2 \exp[2\lambda(z+b)^p] + 2\lambda p (z+b)^{p-1} a_{nn} u_z u \exp[2\lambda(z+b)^p] + (a_{nn})_z u_z u \exp[2\lambda(z+b)^p]. \quad (2.2.31)$$
Next,
$$2\lambda p (z+b)^{p-1} a_{nn} u_z u \exp[2\lambda(z+b)^p] = \big(\lambda p (z+b)^{p-1} a_{nn} u^2 \exp[2\lambda(z+b)^p]\big)_z - \lambda p \big((z+b)^{p-1} a_{nn}\big)_z u^2 \exp[2\lambda(z+b)^p] - 2\lambda^2 p^2 (z+b)^{2p-2} a_{nn} u^2 \exp[2\lambda(z+b)^p]. \quad (2.2.32)$$
Combining (2.2.31) with (2.2.32) and taking into account (1.1.3) as well as inequalities like $u_{x_i} u \geq -u_{x_i}^2/(2\lambda) - \lambda u^2/2$, we obtain for $\lambda \geq \lambda_0$
$$-(L_0 u)\, u \exp[2\lambda(z+b)^p] \geq \frac{\mu_1}{2} (\nabla u)^2 \exp[2\lambda(z+b)^p] - 3\lambda^2 p^2 (z+b)^{2p-2} a_{nn} u^2 \exp[2\lambda(z+b)^p] + \mathrm{div}\, U_{13}. \quad (2.2.33)$$
Consider the quantity
$$-4C_3 \lambda p \mu_1^{-1} (L_0 u)\, u \exp[2\lambda(z+b)^p] + (L_0 u)^2 \exp[2\lambda(z+b)^p](z+b)^{2-p};$$
see (2.2.4) for $U_{14}$. We can choose $p_0$ so large that, in addition to (2.2.19),
$$1 - \frac{12 a_{nn}(x)}{p \mu_1} \geq \frac{1}{2}, \quad \forall p \geq p_0. \quad (2.2.35)$$
Combining this with the right hand side of (2.2.34), integrating the obtained pointwise
inequality over the domain Ω and taking into account (2.2.4), (2.2.35) and Gauss’
formula, we obtain the target estimate (2.2.2).
Corollary 2.2.1. Assume that the conditions of Theorem 2.2.1 are satisfied. Since Theorem 2.2.1 requires $b > R$, we choose $b = 3R$ in (2.2.2). Let $p_0 > 1$ and $\lambda_0 > 1$ be the numbers of Theorem 2.2.1. Consider N-dimensional complex-valued vector functions $W(x) \in H^2_{0,\#}(\Omega)$. Then there exists a sufficiently large number $\lambda_1$, depending only on $\mu_1$, $\mu_2$, $n$, $R$, $\max_{ij}\|a_{ij}\|_{C^1(\overline{\Omega})}$, $\max_j \|b_j\|_{C(\overline{\Omega})}$, $\|c\|_{C(\overline{\Omega})}$, $\|n\|_{C(\overline{\Omega})}$, $\underline{k}$, $\overline{k}$ and $N$, such that the following Carleman estimate holds:
$$\int_\Omega \big| L(W(x)) + D_N^{-1} S_N n^2(x) W(x) \big|^2 \exp[2\lambda(z+3R)^{p_0}]\, dx \geq C_3 \lambda \int_\Omega \big(|\nabla W|^2 + \lambda^2 |W|^2\big) \exp[2\lambda(z+3R)^{p_0}]\, dx \quad (2.2.36)$$
for all $\lambda \geq \lambda_1$ and $W \in H^2_{0,\#}(\Omega)$.

This corollary follows immediately from Theorem 2.2.1 and from the well-known fact (see, e.g., Lemma 2.1 in [23]) that a Carleman estimate depends only on the principal part of a PDE operator, while the lower-order terms can be absorbed into the estimate.
Let the matrix A be as in (1.2.1). The main aim of this section is to prove a Carleman estimate in a general domain $\Omega$. Similar versions of the Carleman estimate can be found in [41, Theorem 3.1] and [46, Lemma 5] when $\Omega$ is an annulus, and in [40, Theorem 4.1] when $\Omega$ is a cube. In this dissertation, we will use the following estimate to derive the convergence of the quasi-reversibility method. It can be deduced from [14, Lemma 3, Chapter 4, §1].
Without loss of generality, we can assume that
$$\Omega \subset \Big\{ x = (x_1, x_2, \dots, x_d) : 0 < x_1 + X^{-2} \sum_{i=2}^{d} x_i^2 < 1 \Big\} \quad (2.2.37)$$
and define
$$\psi(x) = x_1 + \frac{1}{2X^2} \sum_{i=2}^{d} x_i^2 + \alpha, \quad 0 < \alpha < 1/2. \quad (2.2.38)$$
Applying Lemma 3 in [30, Chapter 4, §1] to a function $u \in C^2(\overline{\Omega})$ that is independent of the time variable, we can find a constant $\sigma_0$ and a constant $\sigma_1$ (depending only on $\alpha$ and the entries $a_{ij}$, $1 \leq i, j \leq d$, of the matrix A) such that for all $\lambda \geq \sigma_0$ and $p > \sigma_1$ the corresponding pointwise Carleman estimate holds, with a divergence term $U$ satisfying
$$|U| \leq C e^{2\lambda \psi^{-p}(x)} \Big( \frac{\lambda p}{X} |\nabla u|^2 + \frac{\lambda^3 p^3}{X^3} \psi^{-2p-2} u^2 \Big), \quad (2.2.40)$$
where $\nu$ is the outward unit normal vector of $\partial\Omega$. Then there exist positive numbers $\sigma_0$ and $\sigma_1$, depending only on $\alpha$ and A, such that
$$\frac{\lambda p}{X^2} \int_\Omega e^{2\lambda \psi^{-p}(x)} |\nabla u|^2\, dx + \lambda^3 p^4 \int_\Omega \psi^{-2p-2} e^{2\lambda \psi^{-p}(x)} |u|^2\, dx \leq C \int_\Omega \psi^{p+2} e^{2\lambda \psi^{-p}(x)} |\mathrm{Div}(A \nabla u)|^2\, dx. \quad (2.2.42)$$
In fact, assume that $\nabla u(x) \neq 0$ at some point $x \in \partial\Omega$. Since $u(x) = 0$ on $\partial\Omega$, see (2.2.41), $\nabla u(x) \cdot \tau(x) = 0$ where $\tau(x)$ is any tangent vector to $\partial\Omega$ at the point $x$. Thus, $\nabla u(x)$ is perpendicular to $\partial\Omega$ at $x$. In other words, $\nabla u(x) = \theta \nu(x)$ for some nonzero scalar $\theta$. We have $0 = A(x)\nabla u(x) \cdot \nu(x) = \theta A(x)\nu(x) \cdot \nu(x)$, which contradicts (1.2.2).
$$\frac{\lambda p}{X^2} \int_\Omega e^{2\lambda \psi^{-p}(x)} |\nabla u|^2\, dx + \lambda^3 p^4 \int_\Omega \psi^{-2p-2} e^{2\lambda \psi^{-p}(x)} |u|^2\, dx$$
$$\leq -\frac{C\lambda p}{X^2} \int_\Omega e^{2\lambda \psi^{-p}(x)} u\, \mathrm{Div}(A \nabla u)\, dx + C \int_\Omega \psi^{p+2} e^{2\lambda \psi^{-p}(x)} |\mathrm{Div}(A \nabla u)|^2\, dx. \quad (2.2.44)$$
Here, the term $\int_\Omega \mathrm{Div}\, U\, dx$ is dropped because it vanishes due to the divergence theorem, (2.2.41) and (2.2.43). Using the inequality $|ab| \leq \lambda p\, a^2 + \frac{1}{2\lambda p} b^2$, we obtain
$$-\frac{C\lambda p}{X^2} \int_\Omega e^{2\lambda \psi^{-p}(x)} u\, \mathrm{Div}(A \nabla u)\, dx \leq \frac{C\lambda^2 p^2}{X^2} \int_\Omega e^{2\lambda \psi^{-p}(x)} u^2\, dx + \frac{C}{X^2} \int_\Omega e^{2\lambda \psi^{-p}(x)} |\mathrm{Div}(A \nabla u)|^2\, dx. \quad (2.2.45)$$
Hence,
$$\frac{\lambda p}{X^2} \int_\Omega e^{2\lambda \psi^{-p}(x)} |\nabla u|^2\, dx + \lambda^3 p^4 \int_\Omega \psi^{-2p-2} e^{2\lambda \psi^{-p}(x)} |u|^2\, dx \leq C \int_\Omega \psi^{p+2} e^{2\lambda \psi^{-p}(x)} |\mathrm{Div}(A \nabla u)|^2\, dx.$$
Assume that in (1.1.8) $g(k) \neq 0$ for all $k \in [\underline{k}, \overline{k}]$. Introduce the function
$$v(x, k) = \frac{u(x, k)}{g(k)}, \quad x \in \Omega,\ k \in [\underline{k}, \overline{k}]. \quad (3.1.1)$$
To eliminate the unknown right-hand side $f(x)$ from equation (3.1.2), we differentiate it with respect to $k$ and obtain
$$L\big(\partial_k v(x, k)\big) + k^2 n^2(x)\, \partial_k v(x, k) + 2k\, n^2(x)\, v(x, k) = 0, \quad x \in \Omega,\ k \in [\underline{k}, \overline{k}]. \quad (3.1.3)$$
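The elimination step can be written out explicitly; this is a sketch assuming that (3.1.2) has the form $Lv + k^2 n^2(x) v = f(x)$, which is consistent with (1.1.8) and (3.1.1):

```latex
\partial_k\big[\, Lv(x,k) + k^2 n^2(x)\, v(x,k)\,\big] = \partial_k f(x) = 0,
\qquad\text{i.e.}\qquad
L\big(\partial_k v(x,k)\big) + k^2 n^2(x)\,\partial_k v(x,k) + 2k\, n^2(x)\, v(x,k) = 0,
```

since $f$ does not depend on $k$; this is exactly (3.1.3), and the unknown source has disappeared.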
It follows from (1.1.11), (1.1.12) and (3.1.1) that in the case of Problem 1 the function $v$ satisfies the following boundary conditions:
$$v(x, k) = \frac{F(x, k)}{g(k)}, \quad x \in \partial\Omega,\ k \in [\underline{k}, \overline{k}], \quad (3.1.4)$$
$$\partial_z v(x, k) = \frac{G(x, k)}{g(k)}, \quad x \in \Gamma_+,\ k \in [\underline{k}, \overline{k}]. \quad (3.1.5)$$
Write
$$v(x, k) = \sum_{m=1}^{N} v_m(x) \Psi_m(k), \quad x \in \Omega,\ k \in [\underline{k}, \overline{k}], \quad (3.1.6)$$
$$\partial_k v(x, k) = \sum_{m=1}^{N} v_m(x) \Psi'_m(k), \quad x \in \Omega,\ k \in [\underline{k}, \overline{k}], \quad (3.1.7)$$
where
$$v_m(x) = \int_{\underline{k}}^{\overline{k}} v(x, k) \Psi_m(k)\, dk, \quad x \in \Omega,\ m = 1, 2, \dots, N. \quad (3.1.8)$$
Remark 3.1.1. Similarly to [16–18, 24], we assume here that the truncated Fourier series (3.1.6) satisfies equation (3.1.2), and that the truncated Fourier series (3.1.6) and (3.1.7), taken together, satisfy equation (3.1.3). This is our approximate mathematical model. Since we work with a numerical method, we accept this approximation scheme. Our main goal below is to find numerically the Fourier coefficients $v_m(x)$, $m = 1, 2, \dots, N$, of $v(x, k)$; see (3.1.8). Once those Fourier coefficients are approximated, the target unknown function $f(x)$ can be approximated as the right-hand side of (3.1.2).

We now compare numerically the true function $v(x, k)$ with its approximation (3.1.6) and observe that their difference is small; see Figure 3.1 for an illustration.
Figure 3.1: Comparison of the true function $v(\cdot, k = 1.5) = \sum_{m=1}^{\infty} u_m(x)\Psi_m(k)$ and the test function $\sum_{m=1}^{10} v_m(\cdot)\Psi_m(k)$ in Test 5; see Section 2.2.1. In this test, we consider the case $n = 2$ and $\Omega = (-2, 2)^2$. On $\Omega$, we arrange a uniform grid of $121 \times 121$ points, numbered from 1 to $121^2$. Panels (a) and (b) respectively show the real and imaginary parts of the two functions at the 300 points numbered from 7170 to 7470. It is evident that reconstructing the first 10 terms of the Fourier coefficients of $v(x, k)$ is sufficient to solve our inverse source problems.
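The message of Figure 3.1, that a short truncated series already captures $v(\cdot, k)$, can be probed with a toy computation: project a smooth test function of $k$ onto the first $N = 10$ basis functions and measure the residual. The test function $\sin(3k)/k$ and the interval below are illustrative stand-ins, not the data of Test 5.

```python
import numpy as np

# Orthonormal basis {Psi_n} built as in Section 2.1; interval and N are illustrative.
a, b, M, N = 1.0, 2.0, 2001, 10
k = np.linspace(a, b, M)
dk = k[1] - k[0]
k0 = (a + b) / 2.0

def inner(f, g):
    # Trapezoidal-rule L^2(a, b) inner product.
    h = f * g
    return dk * (h.sum() - 0.5 * (h[0] + h[-1]))

phi = np.array([(k - k0) ** (n - 1) * np.exp(k - k0) for n in range(1, N + 1)])
Psi = np.zeros_like(phi)
for n in range(N):
    w = phi[n].copy()
    for m in range(n):
        w -= inner(w, Psi[m]) * Psi[m]
    Psi[n] = w / np.sqrt(inner(w, w))

# A smooth stand-in for k -> v(x, k) at one fixed x (illustrative, not Test 5 data).
v = np.sin(3.0 * k) / k

# Fourier coefficients as in (3.1.8) and the truncated series as in (3.1.6).
coeff = np.array([inner(v, Psi[m]) for m in range(N)])
v_trunc = coeff @ Psi

# Ten terms already reproduce the test function to high accuracy.
assert np.abs(v - v_trunc).max() < 1e-4
```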
Plugging (3.1.6) and (3.1.7) into (3.1.3), we obtain
$$\sum_{r=1}^{N} \big(L v_r(x)\big) \Psi'_r(k) + n^2(x) \sum_{r=1}^{N} v_r(x) \big(k^2 \Psi'_r(k) + 2k \Psi_r(k)\big) = 0, \quad x \in \Omega. \quad (3.1.9)$$
For each $m = 1, \dots, N$, we multiply both sides of (3.1.9) by the function $\Psi_m(k)$ and then integrate the resulting equation with respect to $k \in (\underline{k}, \overline{k})$. We obtain
$$\sum_{r=1}^{N} \big(L v_r(x)\big) \int_{\underline{k}}^{\overline{k}} \Psi'_r(k) \Psi_m(k)\, dk + n^2(x) \sum_{r=1}^{N} v_r(x) \int_{\underline{k}}^{\overline{k}} \big(k^2 \Psi'_r(k) + 2k \Psi_r(k)\big) \Psi_m(k)\, dk = 0. \quad (3.1.10)$$
Define
$$S_N = (s_{mr})_{m,r=1}^{N}, \quad \text{with } s_{mr} = \int_{\underline{k}}^{\overline{k}} \big(k^2 \Psi'_r(k) + 2k \Psi_r(k)\big) \Psi_m(k)\, dk. \quad (3.1.12)$$
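The matrices $S_N$ and $D_N$ reduce to one-dimensional integrals of the basis functions and can be tabulated once by quadrature. In the sketch below, the interval and $N$ are illustrative, $\Psi'_r$ is computed numerically, and the entry formula $d_{mr} = \int \Psi'_r(k)\Psi_m(k)\,dk$ is our assumption about $D_N$, since the text does not fix $d_{mr}$ here; with that choice, (3.1.10) takes the matrix form that leads to (3.2.1) after multiplication by $D_N^{-1}$.

```python
import numpy as np

# Orthonormal basis {Psi_n} on an illustrative interval [k_low, k_high].
k_low, k_high, M, N = 1.0, 2.0, 2001, 5
k = np.linspace(k_low, k_high, M)
dk = k[1] - k[0]
k0 = (k_low + k_high) / 2.0

def inner(f, g):
    h = f * g
    return dk * (h.sum() - 0.5 * (h[0] + h[-1]))

phi = np.array([(k - k0) ** (n - 1) * np.exp(k - k0) for n in range(1, N + 1)])
Psi = np.zeros_like(phi)
for n in range(N):
    w = phi[n].copy()
    for m in range(n):
        w -= inner(w, Psi[m]) * Psi[m]
    Psi[n] = w / np.sqrt(inner(w, w))
dPsi = np.gradient(Psi, dk, axis=1)   # numerical stand-in for Psi_r'

# s_mr from (3.1.12): int (k^2 Psi_r' + 2k Psi_r) Psi_m dk.
S = np.array([[inner(k ** 2 * dPsi[r] + 2.0 * k * Psi[r], Psi[m])
               for r in range(N)] for m in range(N)])

# Assumed entries of D_N: d_mr = int Psi_r' Psi_m dk (our assumption, see above).
D = np.array([[inner(dPsi[r], Psi[m]) for r in range(N)] for m in range(N)])

# Consistent with Proposition 2.1.1, D_N is invertible; check numerically.
assert abs(np.linalg.det(D)) > 0.5
coupling = np.linalg.solve(D, S)      # the matrix D_N^{-1} S_N entering (3.2.1)
```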
Denote
$$\widetilde{F}(x) = \Big( \int_{\underline{k}}^{\overline{k}} \frac{F(x, k)}{g(k)} \Psi_1(k)\, dk,\ \dots,\ \int_{\underline{k}}^{\overline{k}} \frac{F(x, k)}{g(k)} \Psi_N(k)\, dk \Big)^T, \quad x \in \partial\Omega, \quad (3.1.14)$$
$$\widetilde{G}(x) = \Big( \int_{\underline{k}}^{\overline{k}} \frac{G(x, k)}{g(k)} \Psi_1(k)\, dk,\ \dots,\ \int_{\underline{k}}^{\overline{k}} \frac{G(x, k)}{g(k)} \Psi_N(k)\, dk \Big)^T, \quad x \in \Gamma_+. \quad (3.1.15)$$
It follows from (3.1.4) and (3.1.5) that in the case of Problem 1 the vector function $V(x)$ satisfies the following two boundary conditions:
$$V(x) = \widetilde{F}(x), \quad x \in \partial\Omega, \quad (3.1.16)$$
and
$$\partial_\nu V(x) = \widetilde{G}(x), \quad x \in \Gamma_+. \quad (3.1.17)$$
In the next section, we briefly discuss the QRM used in Step 3 of Algorithm 1. We
mention that the QRM is an efficient approach to solve partial differential equations
with over-determined boundary data.
In this section, we present the QRM for the numerical solution of Problem 1. By
saying below that a vector valued function belongs to a Hilbert space, we mean that
each of its components belongs to this space. The norm of this vector valued function
in that Hilbert space is naturally defined as the square root of the sum of squares of
norms of components. Recall that by Proposition 2.1.1 the matrix DN is invertible.
Therefore, by (3.1.13), (3.1.16) and (3.1.17) we need to find an approximate solution
of the following over-determined boundary value problem with respect to the vector function $V(x)$:
$$L(V(x)) + D_N^{-1} S_N n^2(x) V(x) = 0, \quad x \in \Omega, \quad (3.2.1)$$
$$V(x) = \widetilde{F}(x), \quad x \in \partial\Omega, \quad (3.2.2)$$
$$\partial_\nu V(x) = \widetilde{G}(x), \quad x \in \Gamma_+. \quad (3.2.3)$$
To do so, we minimize, over functions $V$ satisfying (3.2.2) and (3.2.3), the functional
$$J_\epsilon(V) = \int_\Omega \big| L(V(x)) + D_N^{-1} S_N n^2(x) V(x) \big|^2 dx + \epsilon\, \|V\|^2_{H^2(\Omega)}. \quad (3.2.4)$$
We assume that the set of vector functions indicated in the formulation of this problem is non-empty; i.e., we assume that there exists an N-dimensional vector valued function $\Phi$ such that
$$\Phi \in H^2(\Omega), \quad \Phi|_{\partial\Omega} = \widetilde{F}(x), \quad \partial_\nu \Phi|_{\Gamma_+} = \widetilde{G}(x). \quad (3.2.5)$$
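The structure of this constrained minimization can be illustrated on a one-dimensional toy analogue of the QRM. The sketch below minimizes a functional of the form (3.2.4) for the over-determined Cauchy problem $v'' + \pi^2 v = 0$ on $(0, 1)$ with data $v(0) = 0$, $v(1) = 0$, $v'(1) = -\pi$, whose solution is $\sin(\pi x)$; the scalar ODE, grid, boundary weights, and regularization parameter are all illustrative choices, not the dissertation's N-dimensional elliptic system.

```python
import numpy as np

# Grid on (0, 1) for the toy over-determined Cauchy problem.
M = 201
x = np.linspace(0.0, 1.0, M)
h = x[1] - x[0]

# Interior residual rows: (v[i-1] - 2 v[i] + v[i+1]) / h^2 + pi^2 v[i] = 0.
A_rows, b_rows = [], []
for i in range(1, M - 1):
    row = np.zeros(M)
    row[i - 1] = row[i + 1] = 1.0 / h ** 2
    row[i] = -2.0 / h ** 2 + np.pi ** 2
    A_rows.append(row); b_rows.append(0.0)

# Cauchy data, imposed as heavily weighted least-squares rows (a discrete
# shortcut; the proof below instead splits V = W + Phi exactly).
w = 1e4
for idx, val in [(0, 0.0), (M - 1, 0.0)]:          # Dirichlet data v(0), v(1)
    row = np.zeros(M); row[idx] = w
    A_rows.append(row); b_rows.append(w * val)
row = np.zeros(M)                                   # Neumann datum v'(1) = -pi
row[M - 1], row[M - 2] = w / h, -w / h
A_rows.append(row); b_rows.append(w * (-np.pi))

# Tikhonov term eps * ||v||^2, the regularizing part of the QRM functional.
eps = 1e-8
A = np.vstack(A_rows + [np.sqrt(eps) * np.eye(M)])
b = np.array(b_rows + [0.0] * M)

v, *_ = np.linalg.lstsq(A, b, rcond=None)
# For small eps, the regularized solution is close to the true sin(pi x).
assert np.max(np.abs(v - np.sin(np.pi * x))) < 1e-2
```

Here only an $L^2$ penalty replaces the $H^2$ norm of (3.2.4), which suffices in one dimension for this smooth example; the essential point is that the over-determined data make the least-squares problem uniquely and stably solvable.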
Theorem 3.2.1. Assume that there exists an N-dimensional vector valued function $\Phi$ satisfying (3.2.5). Then for each $\epsilon > 0$, there exists a unique minimizer $V_{\min,\epsilon} \in H^2(\Omega)$ of the functional $J_\epsilon$ in (3.2.4) that satisfies the boundary conditions (3.2.2) and (3.2.3).
Proof. The proof of Theorem 3.2.1 is based on the variational principle and the Riesz representation theorem. Let $(\cdot, \cdot)$ and $[\cdot, \cdot]$ denote the scalar products of N-dimensional vector valued functions in the Hilbert spaces $L^2(\Omega)$ and $H^2(\Omega)$, respectively. For any vector valued function $V \in H^2(\Omega)$ satisfying the boundary conditions (3.2.2) and (3.2.3), set $W = V - \Phi$. By (3.2.5), $W \in H^2_{0,\#}(\Omega)$, where
$$H^2_{0,\#}(\Omega) = \big\{ w \in H^2(\Omega) : w|_{\partial\Omega} = 0,\ \partial_\nu w|_{\Gamma_+} = 0 \big\}. \quad (3.2.7)$$
Clearly $H^2_{0,\#}(\Omega)$ is a closed subspace of the space $H^2(\Omega)$. Let $V_{\min,\epsilon}$ be any minimizer of $J_\epsilon$ subject to (3.2.2) and (3.2.3), and set $W_{\min,\epsilon} = V_{\min,\epsilon} - \Phi$. By the variational principle,
$$\big(L(W_{\min,\epsilon}(x)) + D_N^{-1} S_N n^2(x) W_{\min,\epsilon}(x),\ L(P(x)) + D_N^{-1} S_N n^2(x) P(x)\big) + \epsilon\,[W_{\min,\epsilon}, P]$$
$$= \big(L(\Phi(x)) + D_N^{-1} S_N n^2(x) \Phi(x),\ L(P(x)) + D_N^{-1} S_N n^2(x) P(x)\big) + \epsilon\,[\Phi, P], \quad (3.2.9)$$
for all $P \in H^2_{0,\#}(\Omega)$. The left-hand side of the identity (3.2.9) generates a new scalar product $\{\cdot, \cdot\}$ in the space $H^2_{0,\#}(\Omega)$, and the corresponding norm $\{\cdot\}$ is equivalent to the standard norm $\|\cdot\|_{H^2(\Omega)}$. Hence, (3.2.9) is equivalent to
$$\{W_{\min,\epsilon}, P\} = \big(L(\Phi(x)) + D_N^{-1} S_N n^2(x) \Phi(x),\ L(P(x)) + D_N^{-1} S_N n^2(x) P(x)\big) + \epsilon\,[\Phi, P] \quad (3.2.10)$$
for all $P \in H^2_{0,\#}(\Omega)$. On the other hand, the right-hand side of (3.2.10) can be estimated as
$$\big| \big(L(\Phi(x)) + D_N^{-1} S_N n^2(x) \Phi(x),\ L(P(x)) + D_N^{-1} S_N n^2(x) P(x)\big) + \epsilon\,[\Phi, P] \big| \leq C_1 \{\Phi\} \{P\},$$
where the number $C_1 = C_1\big(L, D_N^{-1} S_N, n^2, \epsilon\big) > 0$ depends only on the listed parameters. Hence, the right-hand side of (3.2.10) can be considered as a bounded linear functional $l_\Phi(P) : H^2_{0,\#}(\Omega) \to \mathbb{C}$. By the Riesz representation theorem, there exists a unique vector function $Q \in H^2_{0,\#}(\Omega)$ such that
$$\{W_{\min,\epsilon}, P\} = \{Q, P\} \quad \text{for all } P \in H^2_{0,\#}(\Omega),$$
directly yielding the identity (3.2.10). As a consequence, $W_{\min,\epsilon}$ exists and, indeed, $W_{\min,\epsilon} = Q$. Finally, by (3.2.8), $V_{\min,\epsilon} = W_{\min,\epsilon} + \Phi$.
The minimizer V_min,ε of J_ε, subject to the constraints (3.2.2) and (3.2.3), is called the regularized solution of the problem (3.2.1)-(3.2.3). In the theory of ill-posed problems, it is important to prove convergence of regularized solutions to the true one as the noise in the data tends to zero [45]. In the next section, we establish a Carleman estimate for general elliptic operators. This estimate is essential for the proof of that convergence result for our problem, see Section 2.2.1.
While Theorem 3.2.1 ensures the existence and uniqueness of the solution of the Minimization Problem (Problem 4), it does not claim convergence of minimizers, i.e., regularized solutions, to the exact solution as the noise in the data tends to zero. At the same time, such a convergence result is obviously important. However, it is much harder to prove than Theorem 3.2.1. Indeed, while only the variational principle and the Riesz theorem are used in the proof of Theorem 3.2.1, a different apparatus is required in the convergence analysis. This apparatus is based on the Carleman estimate of Theorem 2.2.1. Below we establish the convergence rate of minimizers.
Following one of the main principles of the regularization theory [45], we assume now that the vector functions F̃(x) and G̃(x) in (3.2.2) and (3.2.3) are given with a noise. More precisely, let Φ(x) ∈ H²(Ω) be the function defined in (3.2.5). We assume that it is given with a noise of the level δ ∈ (0, 1), i.e.,

‖Φ − Φ*‖_{H²(Ω)} ≤ δ,  (3.3.1)
where the vector function Φ* ∈ H²(Ω) corresponds to the noiseless data. In the case of noiseless data, we assume the existence of the solution V* ∈ H²(Ω) of the following analog of the problem (3.2.1)-(3.2.3):

L(V*(x)) + D_N⁻¹S_N n²(x)V*(x) = 0,  x ∈ Ω,  (3.3.2)
V*(x) = F̃*(x),  x ∈ ∂Ω,  (3.3.3)
∂_ν V*(x) = G̃*(x),  x ∈ Γ₊.  (3.3.4)

Similarly to (3.2.5), we assume the existence of a vector valued function Φ* such that

Φ* ∈ H²(Ω),  Φ*(x)|_∂Ω = F̃*(x),  ∂_ν Φ*(x)|_{Γ₊} = G̃*(x).  (3.3.5)
Denote

W* = V* − Φ*.  (3.3.6)

Then (3.2.7), (3.3.5) and (3.3.6) imply that W* ∈ H²_{0,#}(Ω). Also, using (3.3.2)-(3.3.5), we obtain

L(W*(x)) + D_N⁻¹S_N n²(x)W*(x) = −L(Φ*(x)) − D_N⁻¹S_N n²(x)Φ*(x),  x ∈ Ω.  (3.3.7)
Theorem 3.3.1 (The convergence rate). Assume that the conditions of Theorem 3.2.1 as well as conditions (3.3.1)-(3.3.6) hold. Let λ₁ be the number of Corollary 2.2.1. Define the number η as

η = 2(4R)^{p₀}.  (3.3.8)

Assume that the number δ₀ ∈ (0, 1) is so small that ln δ₀^{−1/η} > λ₁. Let δ ∈ (0, δ₀). Set ε = ε(δ) = δ². Let V_min,ε(δ) ∈ H²(Ω) be the unique minimizer of the functional (3.2.4) found in Theorem 3.2.1. Then the following convergence rate of regularized solutions holds:

‖V_min,ε(δ) − V*‖_{H¹(Ω)} ≤ C₄(1 + ‖W*‖_{H²(Ω)})√δ,  (3.3.9)

where the constant C₄ > 0 depends only on μ₁, μ₂, n, R, max_{i,j}‖a_ij‖_{C¹(Ω)}, max_j‖b_j‖_{C(Ω)}, ‖c‖_{C(Ω)}, ‖n‖_{C(Ω)}, k, k̄ and N.
Proof. We use in this proof the Carleman estimate of Corollary 2.2.1. Similarly to (3.2.8), let W_min,ε(δ) = V_min,ε(δ) − Φ ∈ H²_{0,#}(Ω). We now rewrite (3.2.9) as

(L(W_min,ε(δ)) + D_N⁻¹S_N n²(x)W_min,ε(δ), L(P) + D_N⁻¹S_N n²(x)P) + ε(δ)[W_min,ε(δ), P]
= −(L(Φ) + D_N⁻¹S_N n²(x)Φ, L(P) + D_N⁻¹S_N n²(x)P) − ε(δ)[Φ, P],  (3.3.10)

for all P ∈ H²_{0,#}(Ω). Also, we rewrite (3.3.7) in an equivalent form,

(L(W*) + D_N⁻¹S_N n²(x)W*, L(P) + D_N⁻¹S_N n²(x)P) + ε(δ)[W*, P]
= −(L(Φ*) + D_N⁻¹S_N n²(x)Φ*, L(P) + D_N⁻¹S_N n²(x)P) + ε(δ)[W*, P],  (3.3.11)

for all P ∈ H²_{0,#}(Ω). Denote

W̃ = W_min,ε(δ) − W* ∈ H²_{0,#}(Ω),  Φ̃ = Φ − Φ*.
Subtracting (3.3.11) from (3.3.10), we obtain

(L(W̃) + D_N⁻¹S_N n²(x)W̃, L(P) + D_N⁻¹S_N n²(x)P) + ε(δ)[W̃, P]
= −(L(Φ̃) + D_N⁻¹S_N n²(x)Φ̃, L(P) + D_N⁻¹S_N n²(x)P) − ε(δ)[Φ, P] − ε(δ)[W*, P],

for all P ∈ H²_{0,#}(Ω). Setting here P = W̃ and using the Cauchy-Schwarz inequality and (3.3.1), we obtain

∫_Ω |L(W̃(x)) + D_N⁻¹S_N n²(x)W̃(x)|² dx ≤ C₄δ²(1 + ‖W*‖²_{H²(Ω)}).  (3.3.12)
Next,

∫_Ω |L(W̃) + D_N⁻¹S_N n²(x)W̃|² dx
= ∫_Ω |L(W̃) + D_N⁻¹S_N n²(x)W̃|² exp(2λ(z + 3R)^{p₀}) exp(−2λ(z + 3R)^{p₀}) dx
≥ exp(−2λ(4R)^{p₀}) ∫_Ω |L(W̃) + D_N⁻¹S_N n²(x)W̃|² exp(2λ(z + 3R)^{p₀}) dx.

Combining this with (3.3.12), we obtain

∫_Ω |L(W̃) + D_N⁻¹S_N n²(x)W̃|² exp(2λ(z + 3R)^{p₀}) dx ≤ C₄δ²(1 + ‖W*‖²_{H²(Ω)}) exp(2λ(4R)^{p₀}).  (3.3.13)
By Corollary 2.2.1, the left hand side of inequality (3.3.13) can be estimated for any λ ≥ λ₁ as

∫_Ω |L(W̃) + D_N⁻¹S_N n²(x)W̃|² exp(2λ(z + 3R)^{p₀}) dx ≥ C₃ ∫_Ω (λ|∇W̃|² + λ³|W̃|²) exp(2λ(z + 3R)^{p₀}) dx.

Hence, (3.3.13) implies

‖W̃‖²_{H¹(Ω)} ≤ C₄δ²(1 + ‖W*‖²_{H²(Ω)}) exp(2λ(4R)^{p₀}).  (3.3.14)
Set ε = δ². Next, choose λ = λ(δ) such that exp(2λ(4R)^{p₀}) = 1/δ. Hence,

λ(δ) = ln δ^{−1/η},  (3.3.15)

where the number η is defined in (3.3.8). This choice is possible since δ ∈ (0, δ₀) and ln δ₀^{−1/η} > λ₁, implying that λ(δ) > λ₁. Thus, (3.3.14) and (3.3.15) imply that

‖W̃‖_{H¹(Ω)} ≤ C₄(1 + ‖W*‖_{H²(Ω)})√δ.  (3.3.16)
Next, by (3.3.16) and (3.3.1),

C₄(1 + ‖W*‖_{H²(Ω)})√δ ≥ ‖W̃‖_{H¹(Ω)} = ‖V_min,ε(δ) − V* − (Φ − Φ*)‖_{H¹(Ω)}
≥ ‖V_min,ε(δ) − V*‖_{H¹(Ω)} − ‖Φ − Φ*‖_{H¹(Ω)} ≥ ‖V_min,ε(δ) − V*‖_{H¹(Ω)} − δ.
Hence,

‖V_min,ε(δ) − V*‖_{H¹(Ω)} ≤ δ + C₄(1 + ‖W*‖_{H²(Ω)})√δ ≤ C₄(1 + ‖W*‖_{H²(Ω)})√δ.  (3.3.17)

The numbers C₄ in the middle and right inequalities of (3.3.17) are different and depend only on μ₁, μ₂, n, R, max_{i,j}‖a_ij‖_{C¹(Ω)}, max_j‖b_j‖_{C(Ω)}, ‖c‖_{C(Ω)}, ‖n‖_{C(Ω)}, k, k̄ and N. The target estimate (3.3.9) of this theorem follows from (3.3.17) immediately.
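The parameter choice in this proof is elementary to verify numerically. In the sketch below, R = 2 is taken from the numerical section, while p₀ and δ are illustrative assumptions (p₀ comes from the Carleman estimate of Chapter 2 and is not reproduced in this chunk):

```python
import math

R, p0, delta = 2.0, 2.0, 1e-3   # p0 and delta are illustrative assumptions

eta = 2 * (4 * R) ** p0                  # eq. (3.3.8)
lam = math.log(delta ** (-1.0 / eta))    # the choice lambda(delta) = ln delta^(-1/eta), eq. (3.3.15)

# with this choice, exp(2*lambda*(4R)^p0) = exp(eta*lambda) equals 1/delta ...
assert math.isclose(math.exp(eta * lam), 1.0 / delta, rel_tol=1e-9)
# ... so delta^2 * exp(2*lambda*(4R)^p0) = delta, which is where sqrt(delta) in (3.3.9) comes from
assert math.isclose(delta ** 2 * math.exp(eta * lam), delta, rel_tol=1e-9)
```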
In this section, we test our method in the 2-D case. The domain Ω is set to be the square

Ω = (−R, R)²,  R = 2.

We set k = 1.5 and k̄ = 4.5. The interval [k, k̄] is uniformly divided into M_k = 150 sub-intervals whose end points are given by k_i = k + i(k̄ − k)/M_k, i = 0, 1, ..., M_k.
We do not specify in this section the operator L and the function n²(x) outside of Ω. For brevity, we consider only the isotropic case, i.e., L = Δ for x ∈ Ω. To show that our method is applicable to the case of non-homogeneous media, we choose the function

n²(x) = 1 + 0.1 sin(3|x|²)/(3|x|² + 1)  for all x ∈ Ω.
Remark 3.4.1 (The choice for the interval of wave numbers). The length of each side of the square Ω is 2R = 4 units. We choose the longest wavelength λ̃_long = 2π/k = 2π/1.5 ≈ 4.19, which is about 4 units. The upper bound of the wave number, k̄ = 4.5, is set so that the shortest wavelength λ̃_short ≈ 1.39 is in the range that is compatible with the maximal l_max and minimal l_min sizes of the tested inclusions. More precisely, we choose λ̃_short ∈ (0.7 l_max, 1.45 l_min) and λ̃_long/λ̃_short ≈ 3.
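The arithmetic of this remark is easy to reproduce; the snippet below only restates the numbers quoted above:

```python
import math

k_low, k_high = 1.5, 4.5            # the bounds k and k̄ of the wave-number interval
lam_long = 2 * math.pi / k_low      # longest wavelength, about 4 units (the side 2R)
lam_short = 2 * math.pi / k_high    # shortest wavelength

assert abs(lam_long - 4.19) < 0.01
assert abs(lam_short - 1.39) < 0.01
assert abs(lam_long / lam_short - 3.0) < 1e-9   # the ratio equals k_high / k_low = 3
```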
assuming that it has a unique solution u(x, k) ∈ C²(Ω̄) for all k ∈ [k, k̄]. We solve problem (3.4.3) by the finite difference method. Having computed the function u(x, k), we extract the noisy data, see (1.1.11), (1.1.12). Here δ ∈ (0, 1) is the noise level and “rand” is a function returning uniformly distributed random numbers in [0, 1]. In this dissertation, we test our method with the noise level δ = 0.05, which corresponds to 5% noise.
Remark 3.4.2. Recall that while in Problem 1 we use both functions F (x, k) and
G(x, k) in (3.4.4), (3.4.5), in Problem 2 we use only the Dirichlet boundary condition
F (x, k), see (1.1.11)-(1.1.13). However, it follows from boundary condition (3.4.3)
that the Neumann boundary condition is ∂ν u(x, k) |∂Ω = ikF (x, k). This explains why
we computationally observe the uniqueness of our numerical solution of Problem 2.
J_ε(V) = ∫_Ω |D_N ΔV + S_N n²(x)V|² dx + ε‖V‖²_{L²(Ω)}.  (3.4.6)
The functional J_ε in (3.4.6) is slightly different from the one in (3.2.4). First, we do not use here the matrix D_N⁻¹. Indeed, this matrix is convenient for the theoretical results above; however, it is inconvenient in computations since it contains large numbers at N = 10. Second, we replace the term ‖V‖²_{H²(Ω)} in (3.2.4) by the term ‖V‖²_{L²(Ω)}, because the L²(Ω)-norm is easier to work with computationally than the H²(Ω)-norm. On the other hand, we have not observed any instabilities, probably because the number 121 × 121 of grid points we use is not too large and all norms in finite dimensional spaces are equivalent. The regularization parameter in our computations was found by a trial and error procedure: ε = 10⁻⁵.
We write the derivatives involved in (3.4.6) via finite differences. Next, we minimize the resulting functional with respect to the values of the vector valued function at the grid points:

J_ε(V) = h_x² Σ_{i,j=2}^{M_x} Σ_{m=1}^{N} { Σ_{r=1}^{N} [ (d_mr/h_x²)(v_r(x_{i−1}, y_j) + v_r(x_{i+1}, y_j) + v_r(x_i, y_{j−1}) + v_r(x_i, y_{j+1}) − 4v_r(x_i, y_j)) + n²(x_i, y_j)s_mr v_r(x_i, y_j) ] }² + ε h_x² Σ_{i,j=1}^{M_x+1} Σ_{m=1}^{N} |v_m(x_i, y_j)|²,

where d_mn and s_mn are the elements of the matrices D_N and S_N in (2.1.3) and (3.1.12), respectively. Introduce the “line up” version of the set {v_n(x_i, y_j) : 1 ≤ i, j ≤ M_x + 1, 1 ≤ n ≤ N} as the (M_x + 1)²N dimensional vector V with

V_m̄ = v_m(x_i, y_j),  1 ≤ i, j ≤ M_x + 1,  1 ≤ m ≤ N,  (3.4.7)

where

m̄ = (i − 1)(M_x + 1)N + (j − 1)N + m.  (3.4.8)
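The “line up” map (3.4.8) and its inverse can be written as a small pair of helper functions. This is only a sketch: the 1-based indexing follows the text, and M_x = 120, N = 10 match the 121 × 121 grid and N = 10 mentioned in this section:

```python
def lineup_index(i, j, m, Mx, N):
    """1-based 'line up' index m-bar of v_m(x_i, y_j), as in (3.4.8)."""
    return (i - 1) * (Mx + 1) * N + (j - 1) * N + m

def inverse_lineup(mbar, Mx, N):
    """Recover (i, j, m) from a 1-based line-up index."""
    q, m0 = divmod(mbar - 1, N)
    i0, j0 = divmod(q, Mx + 1)
    return i0 + 1, j0 + 1, m0 + 1

Mx, N = 120, 10   # the 121 x 121 grid and N = 10 used in this section

# the map is a bijection onto {1, ..., (Mx+1)^2 * N}
assert lineup_index(1, 1, 1, Mx, N) == 1
assert lineup_index(Mx + 1, Mx + 1, N, Mx, N) == (Mx + 1) ** 2 * N
assert inverse_lineup(lineup_index(7, 3, 5, Mx, N), Mx, N) == (7, 3, 5)
```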
where L is the (M_x + 1)²N × (M_x + 1)²N matrix defined as follows. For each m̄ given by (3.4.8):

1. set the entries L_m̄n̄ that correspond to the grid point (x_i, y_j) itself according to the discretized operator above, 1 ≤ n ≤ N;

2. set L_m̄n̄ = 1/h_x² if n̄ = (i ± 1 − 1)(M_x + 1)N + (j − 1)N + n or n̄ = (i − 1)(M_x + 1)N + (j ± 1 − 1)N + n, 1 ≤ n ≤ N.
where m̄ is as in (3.4.8). Hence, let D be the (M_x + 1)²N × (M_x + 1)²N diagonal matrix whose diagonal entries for such indices m̄ take the value 1 while all other entries are 0. The Dirichlet boundary constraint on the vector V becomes

DV = F̃.  (3.4.10)

Here, the vector F̃ is the “line up” vector of the data F_N, formed in the same manner as V, see (3.4.8).
We implement the constraint on V in (3.1.17). This constraint allows us to collect the following information:

(V_m̄ − V_m̄′)/h_x = G̃_N(x_i, y_j, m),  (3.4.11)
NV = G̃,  (3.4.13)

where G̃ is the “line up” version of G̃_N and the matrix N is defined as follows:

1. N_m̄m̄ = 1/h_x and N_m̄m̄′ = −1/h_x for m̄ and m̄′ given by (3.4.8) and (3.4.12) respectively, 1 ≤ i ≤ M_x + 1, j = M_x + 1.
Summarizing the above constraints, we solve the regularized system

(LᵀL + DᵀD + NᵀN + ε Id)V = DᵀF̃ + NᵀG̃  (3.4.14)
for Problem 2. Having the vector V, we can compute the vector V_N via (3.4.7). Then, we follow Steps 4 and 5 of Algorithms 1 and 2 to compute the function v_comp via (3.1.6) and then f_comp by taking the real part of (3.1.2) when k = 1.5.
Remark 3.4.3 (Remark on Problem 2). We use (3.4.15) only for convenience, since we do not want a significant extra programming effort, given that we
3.4.3 Tests
In Tests 1 and 2 below, we apply our method to Problem 1, and in Tests 3-5 we apply our method to Problem 2. Whenever we discuss the accuracy of the values of the positive and negative parts of inclusions, we compare the maximal positive and minimal negative computed values with the true ones. Postprocessing was not applied in any of the tests presented below.
1. Test 1. Problem 1. Two inclusions with different shapes. The function f_true is given by

f_true = 2.5 if max{0.6|x − 0.75|, |y|} < 1.1;  −2 if (x + 0.75)² + y² < 0.55²;  0 otherwise,

and g_true(k) = ik for k ∈ [k, k̄]. We test the reconstructions of the locations, shapes and positive/negative values of the function f for two different inclusions. One of them is a rectangle and the other one is a disk. In this case, the function f_true attains both positive and negative values. The numerical solution for this case is displayed in Figure 3.2.

It is evident that, for this test, our method for Problem 1 provides good numerical results. The reconstructed locations and shapes, as well as the positive/negative values of the function f_comp, are of a good quality.
2. Test 2. Problem 1. Four circular inclusions. We consider the case when the function f_true is given by
[Figure 3.2, panels (a)-(f): (a) the function f_true; (b), (c) the real and imaginary parts of v_true(·, k = 1.5); (d), (e) the real and imaginary parts of v_comp(·, k = 1.5); (f) the function f_comp.]
Figure 3.2: Test 1. The true and reconstructed source functions and the true and
reconstructed functions v(x, k) = u(x, k)/g(k) when k = 1.5. The reconstructed
positive value of the source function is 2.76 (relative error 10.5%). The reconstructed
negative value of the source function is -2.17 (relative error 8.5%).
f_true = 1 if (x − 0.8)² + (y − 0.8)² < 0.55² or (x + 0.8)² + (y − 0.8)² < 0.55²;  −1 if (x − 0.8)² + (y + 0.8)² < 0.55² or (x + 0.8)² + (y + 0.8)² < 0.55²;  0 otherwise,
and g_true(k) = 1 for all k ∈ [k, k̄]. We test the model with four circular inclusions. The source function f = 1 in the two “upper” inclusions and f = −1 in the two “lower” inclusions.
[Figure 3.3, panels (a)-(f): (a) the function f_true; (b), (c) the real and imaginary parts of v_true(·, k = 1.5); (d), (e) the real and imaginary parts of v_comp(·, k = 1.5); (f) the function f_comp.]
Figure 3.3: Test 2. The true and reconstructed source functions and the true and
reconstructed functions v(x, k) = u(x, k)/g(k) when k = 1.5. The reconstructed
positive value of the source function is 1.11 (relative error 11.1%). The reconstructed
negative value of the source function is -1.11 (relative error 11.1%).
3. Test 3. Problem 2. A void in the square. We consider the case when the negative part of the true source function f is surrounded by a square, and f is positive in this square. More precisely,

f_true = 1 if max{|x|, |y|} < 1.2 and x² + y² ≥ 0.48²;  −1 if x² + y² < 0.48²;  0 otherwise.
The true f_true and computed f_comp source functions are displayed in Figure 3.4. We can see that the computed shapes of the “positive” square and the “negative” disk
[Figure 3.4, panels (a)-(f): (a) the function f_true; (b), (c) the real and imaginary parts of v_true(·, k = 1.5); (d), (e) the real and imaginary parts of v_comp(·, k = 1.5); (f) the function f_comp.]
Figure 3.4: Test 3. The true and reconstructed source functions and the true and
reconstructed functions v(x, k) = u(x, k)/g(k) when k = 1.5. The reconstructed
positive value of the source function is 1.09 (relative error 9.0%). The reconstructed
negative value of the source function is -0.89 (relative error 11.0%).
are quite acceptable. Given that the noise in the data is 5%, the errors in the values of the function f_comp are also acceptable.
In Figure 3.5, one can see that the source function is computed rather accurately.
[Figure 3.5, panels (a)-(f): (a) the function f_true; (b), (c) the real and imaginary parts of v_true(·, k = 1.5); (d), (e) the real and imaginary parts of v_comp(·, k = 1.5); (f) the function f_comp.]
Figure 3.5: Test 4. The true and reconstructed source functions and the true and
reconstructed functions v(x, k) = u(x, k)/g(k) when k = 1.5. The reconstructed
positive value of the source function is 1.12 (relative error 12.0%). The reconstructed
negative value of the source function is -1.94 (relative error 3.0%).
The values of both “positive" and “negative" parts of the inclusion are computed
with a good accuracy.
f_true = 3(1 − x)²e^{−x²−(y+1)²} − 10(x/5 − x³ − y⁵)e^{−x²−y²} − (1/3)e^{−(x+1)²−y²},
The numerical results for this test are displayed in Figure 3.6. It is evident that
[Figure 3.6, panels (a)-(f): (a) the function f_true; (b), (c) the real and imaginary parts of v_true(·, k = 1.5); (d), (e) the real and imaginary parts of v_comp(·, k = 1.5); (f) the function f_comp.]
Figure 3.6: Test 5. The true and reconstructed source functions and the true and
reconstructed functions v(x, k) = u(x, k)/g(k) when k = 1.5. The true and recon-
structed maximal positive value of the source function are 8.10 and 7.36 (relative
error 9.1%) respectively. The true and reconstructed minimal negative value of the
source function are -6.55 and -5.48 (relative error 16.0%) respectively.
where

u_n(x) = ∫₀ᵀ u(x, t)Ψ_n(t)dt,  n = 1, 2, ....  (4.1.2)

Fix a positive integer N. We truncate the Fourier series in (4.1.1). The function u(x, t) is approximated by

u(x, t) = Σ_{n=1}^{N} u_n(x)Ψ_n(t),  x ∈ Ω, t ∈ [0, T].  (4.1.3)

Hence,

u_t(x, t) = Σ_{n=1}^{N} u_n(x)Ψ′_n(t),  (4.1.4)

and the governing equation becomes

Σ_{n=1}^{N} u_n(x)Ψ′_n(t) = Σ_{n=1}^{N} Lu_n(x)Ψ_n(t)  (4.1.5)
for all x ∈ Ω and t ∈ [0, T]. For each m ∈ {1, ..., N}, we multiply both sides of (4.1.5) by Ψ_m(t) and integrate with respect to t; we obtain

Σ_{n=1}^{N} (∫₀ᵀ Ψ_m(t)Ψ′_n(t)dt) u_n(x) = Σ_{n=1}^{N} (∫₀ᵀ Ψ_m(t)Ψ_n(t)dt) Lu_n(x).  (4.1.6)

Denote

s_mn = ∫₀ᵀ Ψ_m(t)Ψ′_n(t)dt.  (4.1.7)

Since the basis {Ψ_n} is orthonormal in L²(0, T), we rewrite (4.1.6) as

Σ_{n=1}^{N} s_mn u_n(x) = Lu_m(x),  x ∈ Ω, m = 1, 2, ..., N.  (4.1.9)
Denote

U(x) = (u₁(x), u₂(x), ..., u_N(x))ᵀ.  (4.1.10)

Then, with S = (s_mn)_{m,n=1}^{N}, the system (4.1.9) can be written as

LU(x) = SU(x),  x ∈ Ω.  (4.1.11)

Here, the operator L acting on the vector U(x) is understood in the same manner as it acts on a scalar valued function, see (1.2.3).
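The special basis {Ψ_n} is constructed in Chapter 2 and is not reproduced here. Purely for illustration, the sketch below computes the matrix (s_mn) of (4.1.7) for an assumed orthonormal basis of shifted Legendre polynomials on [0, T], using Gauss-Legendre quadrature, which is exact for these polynomial integrands. The assertions check orthonormality and the integration-by-parts identity s_mn + s_nm = Ψ_m(T)Ψ_n(T) − Ψ_m(0)Ψ_n(0):

```python
import numpy as np
from numpy.polynomial import legendre

T, Nb = 1.0, 5

xg, wg = legendre.leggauss(20)   # exact for polynomial integrands up to degree 39
tg = 0.5 * T * (xg + 1.0)        # quadrature nodes mapped from [-1, 1] to [0, T]

def coef(n):
    c = np.zeros(n)
    c[n - 1] = 1.0
    return c

def Psi(n, t):
    """Orthonormal shifted Legendre polynomial Psi_n on [0, T] (an assumed basis)."""
    return np.sqrt((2 * n - 1) / T) * legendre.legval(2 * t / T - 1.0, coef(n))

def dPsi(n, t):
    """Exact derivative of Psi_n with respect to t."""
    return np.sqrt((2 * n - 1) / T) * legendre.legval(2 * t / T - 1.0, legendre.legder(coef(n))) * 2.0 / T

def integrate(f):
    """Gauss-Legendre quadrature of f over [0, T]."""
    return 0.5 * T * np.sum(wg * f(tg))

# the matrix (s_mn) of (4.1.7): s_mn = int_0^T Psi_m Psi_n' dt
S = np.array([[integrate(lambda t: Psi(m, t) * dPsi(n, t))
               for n in range(1, Nb + 1)] for m in range(1, Nb + 1)])

for m in range(1, Nb + 1):
    for n in range(1, Nb + 1):
        # orthonormality of the basis
        assert abs(integrate(lambda t: Psi(m, t) * Psi(n, t)) - (m == n)) < 1e-9
        # integration by parts: s_mn + s_nm = Psi_m(T)Psi_n(T) - Psi_m(0)Psi_n(0)
        bdry = Psi(m, T) * Psi(n, T) - Psi(m, 0.0) * Psi(n, 0.0)
        assert abs(S[m - 1, n - 1] + S[n - 1, m - 1] - bdry) < 1e-9
```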
On the other hand, due to (4.1.2) and (1.2.5), the vector U(x) satisfies the boundary conditions

U(x) = F̃(x) = (∫₀ᵀ F(x, t)Ψ₁(t)dt, ..., ∫₀ᵀ F(x, t)Ψ_N(t)dt)ᵀ,  (4.1.12)

∂_ν U(x) = G̃(x) = (∫₀ᵀ G(x, t)Ψ₁(t)dt, ..., ∫₀ᵀ G(x, t)Ψ_N(t)dt)ᵀ.  (4.1.13)

This is acceptable since these two functions can be computed directly by the algebraic formulas (4.1.12) and (4.1.13).
Finding a vector U(x) satisfying equation (4.1.11) and constraints (4.1.12) and (4.1.13) is the main point of our numerical method to find the function f(x). In fact, having U(x) = (u₁(x), u₂(x), ..., u_N(x)) in hand, we can compute the function u(x, t) via (4.1.3). The desired function f(x) is given by u(x, t = 0).
Due to the truncation step in (4.1.3), equation (4.1.11) is not exact. We call it an approximate model. Solving it, together with the “over-determined” boundary conditions (4.1.12) and (4.1.13), for the Fourier coefficients (u_n(x))_{n=1}^{N} of u(x, t),
Figure 4.1: The function u(x, t = 0) (dash-dot) and its approximation Σ_{n=1}^{N} u_n(x)Ψ_n(t = 0) (solid) at the points numbered from 900 to 1050. These functions are taken from Test 4 in Section 4.3.2. It is evident that the larger N is, the better the N-th partial sum of the Fourier series in (4.1.1) approximates the function u.
This suggests taking N = 30. It is worth mentioning that when N ≤ 25, the numerical solutions are not satisfactory; when N = 30, the numerical results are quite accurate regardless of the high noise levels; and when N ≥ 35, the computation is time-consuming.
We minimize the functional

J_ε(U) = ∫_Ω |LU(x) − SU(x)|² dx + ε‖U‖²_{H³(Ω)},  (4.1.14)

subject to the constraints (4.1.12) and (4.1.13). Here ε is a positive number serving as a regularization parameter. We impose the condition that the set of admissible data is nonempty, where F̃ and G̃ are our indirect data, see Remark 4.1.1, defined in (4.1.12) and (4.1.13). The result below guarantees the existence and uniqueness of the minimizer of J_ε, ε > 0.
Proposition 4.1.1. Assume that the set of admissible data H, defined in (4.1.15), is nonempty. Then, for all ε > 0, the functional J_ε admits a unique minimizer satisfying (4.1.12) and (4.1.13). This minimizer is called the regularized solution to (4.1.11), (4.1.12) and (4.1.13).

Proof. Proposition 4.1.1 is an analog of [40, Theorem 3.1], whose proof is based on the Riesz representation theorem. An alternative way to prove this proposition is via the standard argument in convex analysis, see e.g. [44, 47].
data for (4.1.12) and (4.1.13), see Remark 4.1.1, respectively. The noisy data are denoted by F̃^δ and G̃^δ. Here δ is the noise level. In this section, assume that there
1. for all x ∈ Ω
holds true.
The assumption about the existence of E satisfying (4.2.1) and (4.2.2) is equivalent
to the condition
In this section, we establish the following result to study the accuracy of the quasi-reversibility method.
Theorem 4.2.1. Assume that U* is the function that satisfies (3.1.10), (3.1.13) and (3.1.14) with F̃ and G̃ replaced by F̃* and G̃* respectively. Fix ε > 0. Let U^δ be the corresponding minimizer of J_ε with the data F̃^δ and G̃^δ respectively. Assume further that there is an “error” function E in H³(Ω)^N satisfying (4.2.1) and (4.2.2). Then

ε‖U^δ − U*‖²_{H³(Ω)^N} ≤ C(δ² + ε‖U*‖²_{H³(Ω)^N}).  (4.2.3)
⟨L(U^δ − U*) − S(U^δ − U*), LΦ − SΦ⟩_{L²(Ω)^N} + ε⟨U^δ − U*, Φ⟩_{H³(Ω)^N} = −ε⟨U*, Φ⟩_{H³(Ω)^N}.
Recall the function ψ in (2.2.38). Fix λ > σ₀ and p > σ₁ as in Lemma 2.2.1. Set

M = max{e^{2λψ^{−p}(x)} : x ∈ Ω̄}  and  m = min{e^{2λψ^{−p}(x)} : x ∈ Ω̄}.
We have

M‖LΦ − SΦ‖²_{L²(Ω)^N} ≥ ‖e^{λψ^{−p}(x)}Div(A(x)∇Φ) + e^{λψ^{−p}(x)}(b(x)·∇Φ + (c(x)Id − S)Φ)‖²_{L²(Ω)^N}.

Hence,

M‖LΦ − SΦ‖²_{L²(Ω)^N} ≥ (1/2)‖e^{λψ^{−p}(x)}Div(A(x)∇Φ)‖²_{L²(Ω)^N} − ‖e^{λψ^{−p}(x)}(b(x)·∇Φ + (c(x)Id − S)Φ)‖²_{L²(Ω)^N}.
Thus, by (2.2.42),

M‖LΦ − SΦ‖²_{L²(Ω)^N} ≥ Cm‖Φ‖²_{H¹(Ω)^N}.  (4.2.7)

It follows that

‖U^δ − U* − E‖²_{H¹(Ω)^N} ≤ C(δ² + ε‖U*‖²_{H³(Ω)^N}).
Corollary 4.2.1. Let f*(x) = u*(x, 0) and f^δ(x) = u^δ(x, 0), where u*(x, t) and u^δ(x, t) are computed from U*(x) and U^δ(x) via (4.1.1) and (4.1.10). Then, by the trace theory,

‖f^δ − f*‖_{L²(Ω)} ≤ C(δ + √ε‖U*‖_{H³(Ω)^N}).
Figure 4.2: The true function c(x) used for all numerical examples in this section.

We numerically test our method when d = 2. The domain Ω is the square (−R, R)². In this section, we write x = (x, y). For the coefficients of the governing equation, we choose, for simplicity, A(x) = Id and b(x) = 0. The function c is set as

c(x, y) = (3(1 − x)²e^{−x²−(y+1)²} − 10(x/5 − x³ − y⁵)e^{−x²−y²} − (1/3)e^{−(x+1)²−y²})/10
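The coefficient c above is the familiar MATLAB “peaks” surface divided by 10 (its extreme values before scaling, about 8.10 and −6.55, also appear as the true source in Test 5 of Chapter 3). A quick evaluation on the square (−2, 2)² — assuming, as in Chapter 3, that R = 2; the grid resolution is an arbitrary choice — confirms the sign pattern, negative on the left and positive on the right, mentioned in the caption of Figure 4.4:

```python
import numpy as np

def c_func(x, y):
    """The coefficient c(x, y): the 'peaks' surface divided by 10."""
    return (3 * (1 - x) ** 2 * np.exp(-x ** 2 - (y + 1) ** 2)
            - 10 * (x / 5 - x ** 3 - y ** 5) * np.exp(-x ** 2 - y ** 2)
            - (1.0 / 3) * np.exp(-(x + 1) ** 2 - y ** 2)) / 10

xs = np.linspace(-2.0, 2.0, 401)
X, Y = np.meshgrid(xs, xs)
C = c_func(X, Y)

assert c_func(-1.0, 0.0) < 0 < c_func(1.0, 0.0)  # negative on the left, positive on the right
assert 0.79 < C.max() < 0.83                     # max of peaks/10 is about 8.10/10
assert -0.67 < C.min() < -0.63                   # min of peaks/10 is about -6.55/10
```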
with step size dt = T/N_T. In our tests, N_T = 250. The forward problem is solved by the finite difference method with an implicit scheme. Denote by u* the solution of the
F(x, t) = u(x, t)(1 + δ(2 rand − 1)),  G(x, t) = ∂_ν u(x, t)(1 + δ(2 rand − 1))

for (x, t) ∈ ∂Ω × [0, T], where rand is the uniformly distributed random function taking values in [0, 1] and δ is the noise level. The noise level δ is given in each numerical test.
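A minimal 1-D analogue of this data generation — backward Euler in time with N_T = 250 steps, as in the text, second differences in space, then multiplicative noise on the lateral Neumann data — is sketched below. The one space dimension, the grid sizes and the final time are simplifications of the actual 2-D setup:

```python
import numpy as np

# 1-D heat equation u_t = u_xx on (0, 1) with zero Dirichlet data and
# u(x, 0) = sin(pi x); the exact solution is u = exp(-pi^2 t) sin(pi x).
Nx, NT, Tfin = 200, 250, 0.1
dx, dt = 1.0 / Nx, Tfin / NT
x = np.linspace(0.0, 1.0, Nx + 1)
u = np.sin(np.pi * x)

# implicit (backward Euler) scheme: (I - dt*A) u^{k+1} = u^k
A = np.zeros((Nx - 1, Nx - 1))
for i in range(Nx - 1):
    A[i, i] = -2.0 / dx ** 2
    if i > 0: A[i, i - 1] = 1.0 / dx ** 2
    if i < Nx - 2: A[i, i + 1] = 1.0 / dx ** 2
M = np.eye(Nx - 1) - dt * A

for _ in range(NT):
    u[1:-1] = np.linalg.solve(M, u[1:-1])

exact = np.exp(-np.pi ** 2 * Tfin) * np.sin(np.pi * x)
assert np.max(np.abs(u - exact)) < 1e-2   # first order in dt, second order in dx

# noisy lateral (Neumann) data at x = 0, as in the formulas above, with 5% noise
delta = 0.05
rng = np.random.default_rng(0)
G_clean = (u[1] - u[0]) / dx                       # one-sided approximation of du/dx at x = 0
G_noisy = G_clean * (1 + delta * (2 * rng.random() - 1))
assert abs(G_noisy - G_clean) <= delta * abs(G_clean)
```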
The main part of this section is to compute the minimizer U of J_ε subject to the constraints (3.1.13) and (3.1.14). The “cut-off” number N is set to be 30, see Remark 4.1.2 for this choice of N. The discretized version of U(x) = (u₁(x), ..., u_N(x))ᵀ, x ∈ Ω, is (u₁(x_i, y_j), ..., u_N(x_i, y_j))_{i,j=1}^{N_x+1}. Hence, J_ε(U), see (4.1.14), is approximated by
J_ε(U) = d_x² Σ_{i,j=2}^{N_x} Σ_{m=1}^{N} | (u_m(x_{i+1}, y_j) + u_m(x_{i−1}, y_j) + u_m(x_i, y_{j+1}) + u_m(x_i, y_{j−1}) − 4u_m(x_i, y_j))/d_x² + c(x_i, y_j)u_m(x_i, y_j) − Σ_{n=1}^{N} s_mn u_n(x_i, y_j) |²
+ ε d_x² Σ_{i,j=2}^{N_x} Σ_{m=1}^{N} |u_m(x_i, y_j)|²
+ ε² d_x² Σ_{i,j=2}^{N_x} Σ_{m=1}^{N} |(u_m(x_{i+1}, y_j) − u_m(x_i, y_j))/d_x|²
+ ε² d_x² Σ_{i,j=2}^{N_x} Σ_{m=1}^{N} |(u_m(x_i, y_{j+1}) − u_m(x_i, y_j))/d_x|².  (4.3.1)
Here, we slightly change the H³ norm of the regularizing term to the H¹ norm. This makes the computational codes less heavy. The numerical results with this change are still acceptable. We also modify the regularization parameter of the term ‖∇U‖²_{L²(Ω)^N} to be ε² instead of ε, since we observe that the obtained numerical results are more accurate with this choice. Expanding the finite differences, we rewrite the functional as
J_ε(U) = d_x² Σ_{i,j=2}^{N_x} Σ_{m=1}^{N} | Σ_{n=1}^{N} [ δ_mn(−4/d_x² + c(x_i, y_j)) − s_mn ] u_n(x_i, y_j) + (δ_mn/d_x²)(u_n(x_{i+1}, y_j) + u_n(x_{i−1}, y_j) + u_n(x_i, y_{j+1}) + u_n(x_i, y_{j−1})) |²
+ ε d_x² Σ_{i,j=2}^{N_x} Σ_{m=1}^{N} |u_m(x_i, y_j)|²
+ ε² d_x² Σ_{i,j=2}^{N_x} Σ_{m=1}^{N} |(u_m(x_{i+1}, y_j) − u_m(x_i, y_j))/d_x|²
+ ε² d_x² Σ_{i,j=2}^{N_x} Σ_{m=1}^{N} |(u_m(x_i, y_{j+1}) − u_m(x_i, y_j))/d_x|².  (4.3.2)
Here, we use the Kronecker symbol δ_mn for the convenience of writing the computational codes. We next identify {u_n(x_i, y_j) : 1 ≤ i, j ≤ N_x + 1, 1 ≤ n ≤ N} with the (N_x + 1)²N dimensional vector U = (u_ī)_{ī=1}^{(N_x+1)²N} according to the rule u_ī = u_n(x_i, y_j), where the index ī is defined as in (3.4.8). In this notation, the functional takes the compact form

J_ε(U) = d_x²|LU|² + ε d_x²|U|² + ε² d_x²|D_x U|² + ε² d_x²|D_y U|².
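Every term of J_ε(U) above is squared, so the functional is a quadratic form in the grid values of U and can be evaluated in a fully vectorized way. The sketch below implements (4.3.1) up to the exact index ranges of the boundary sums and uses an arbitrary stand-in matrix for (s_mn); the quadratic homogeneity J_ε(2U) = 4J_ε(U) provides a cheap correctness check:

```python
import numpy as np

def J(U, c, S, dx, eps):
    """Discrete functional of type (4.3.1) for U with shape (N, Nx+1, Nx+1)."""
    lap = (U[:, 2:, 1:-1] + U[:, :-2, 1:-1] + U[:, 1:-1, 2:] + U[:, 1:-1, :-2]
           - 4 * U[:, 1:-1, 1:-1]) / dx ** 2            # 5-point Laplacian at interior nodes
    SU = np.einsum('mn,nij->mij', S, U)[:, 1:-1, 1:-1]  # (S U)_m at interior nodes
    resid = lap + c[None, 1:-1, 1:-1] * U[:, 1:-1, 1:-1] - SU
    grad_x = (U[:, 1:, :] - U[:, :-1, :]) / dx
    grad_y = (U[:, :, 1:] - U[:, :, :-1]) / dx
    return (dx ** 2 * np.sum(resid ** 2)
            + eps * dx ** 2 * np.sum(U[:, 1:-1, 1:-1] ** 2)
            + eps ** 2 * dx ** 2 * (np.sum(grad_x ** 2) + np.sum(grad_y ** 2)))

rng = np.random.default_rng(1)
N, Nx, dx, eps = 4, 20, 0.2, 1e-5
U = rng.standard_normal((N, Nx + 1, Nx + 1))
c = rng.standard_normal((Nx + 1, Nx + 1))
S = rng.standard_normal((N, N))                 # arbitrary stand-in for the matrix (s_mn)

assert J(np.zeros_like(U), c, S, dx, eps) == 0.0
assert np.isclose(J(2 * U, c, S, dx, eps), 4 * J(U, c, S, dx, eps))
```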
(c) 0 otherwise.
(b) −1/dx if j = i,
(c) 0 otherwise.
(b) −1/dx if j = i,
(c) 0 otherwise.
4.3.2 Tests
1. Test 1. The case of one inclusion. The function f_true is a smooth function supported in the disk with radius 1 centered at the origin. More precisely,

f_true(x) = e^{−1/(1−|x|²)+1} if |x| < 1;  0 otherwise.
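This is the standard compactly supported mollifier, rescaled so that its maximum equals 1 at the origin, consistent with max f_true = 1 in Table 4.1. A direct check:

```python
import numpy as np

def f_true(x, y):
    """Smooth bump e^{-1/(1-|x|^2) + 1} inside the unit disk, 0 outside."""
    r2 = x ** 2 + y ** 2
    out = np.zeros_like(np.asarray(r2, dtype=float))
    inside = r2 < 1.0
    out[inside] = np.exp(-1.0 / (1.0 - r2[inside]) + 1.0)
    return out

xs = np.linspace(-2.0, 2.0, 401)
X, Y = np.meshgrid(xs, xs)
F = f_true(X, Y)

assert np.isclose(F.max(), 1.0)                    # maximum value 1, attained at the origin
assert np.all(F[X ** 2 + Y ** 2 >= 1.0] == 0.0)    # zero outside the unit disk
```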
Figure 4.3: Test 1. The true and computed source functions. Our method still works
well when δ = 100%. It is shown in (e) that the reconstructed value of fcomp with
δ = 75% is quite accurate, even better than in (d), but in contrast, the reconstructed
shape starts to break down.
Figure 4.3 displays the functions f_true and f_comp. Table 4.1 shows the reconstructed values of the function f_comp and the relative errors. The noise levels are δ = 0%, 25%, 50%, 75% and 100%.
It is evident that our method is robust for Test 1 in the sense that the re-
constructed maximal value of the function f and the reconstructed shape and
position of the inclusion are quite accurate.
2. Test 2. The case of two inclusions. The function ftrue is a smooth function
supported in two disks with radius r = 0.8 centered at x1 = (−1, 0) and x2 =
Table 4.1: Test 1. Correct and computed maximal values of source functions. errorrel
denotes the relative error of the reconstructed maximal value. postrue is the true
position of the inclusion; i.e., the maximizer of ftrue . poscomp is the computed position
of the inclusion. diserr is the absolute error of the reconstructed positions.
noise level max ftrue max fcomp errorrel postrue poscomp diserr
0% 1 0.99 1.0% (0, 0) (0, 0) 0
25% 1 0.96 4.0% (0, 0) (−0.05, 0) 0.05
50% 1 1.21 21.0% (0, 0) (−0.05, 0.1) 0.11
75% 1 1.01 1.0% (0, 0) (0.2, −0.1) 0.22
100% 1 1.53 53.0% (0, 0) (0.25, 0.1) 0.27
Figure 4.4 displays the functions f_true and f_comp. Table 4.2 shows the reconstructed values of the function f_comp and the relative errors. The noise levels are δ = 0%, 25%, 50%, 75% and 100%.

The reconstruction in Test 2 is good. In this test, the reconstruction starts to break down when the noise level reaches 75%, although we are still able to detect the inclusions at higher noise levels.
3. Test 3. The case of a non-inclusion and a nonsmooth function. The function f_true is the characteristic function of the letter Y. Figure 4.5 displays the functions f_true and f_comp. The noise levels are δ = 10% and 15%.

We can reconstruct the letter Y, and the reconstructed maximal value of f_comp is good when δ = 10%, but the error is large when the noise level reaches 15%.
4. Test 4. The case of a non-inclusion and a nonsmooth function. The function f_true
Figure 4.4: Test 2. The true and computed source functions. The reconstruction of
the two inclusions are not symmetric probably because the true function c, see Figure
4.2 for its graph, is negative on the left and positive on the right. However, both
inclusions can be seen when the noise level goes up to 100%.
Table 4.2: Test 2. Correct and computed maximal values of the inclusions. maxinc,true
and maxinc,comp are the true and computed, respectively, maximal values of the source
in an inclusion. errorrel denotes the relative error of the reconstructed maximal value.
postrue is the true position of the inclusion; i.e., the maximizer of ftrue . poscomp is the
computed position of the inclusion. diserr is the absolute error of the reconstructed
positions.
(a) The function ftrue (b) fcomp , δ = 10% (c) fcomp , δ = 15%
Figure 4.5: Test 3. The true and computed source functions. The letter Y can
be detected well in this case. The true maximal value of ftrue is 1. The computed
maximal value of fcomp when δ = 10% is 1.09 (relative error 9%). The computed
maximal value of fcomp when δ = 15% is 1.15 (relative error 15%).
(a) The function ftrue (b) fcomp , δ = 10% (c) fcomp , δ = 15%
Figure 4.6: Test 4. The true and computed source functions. The reconstruction of
λ is acceptable. The true maximal value of ftrue is 1. The computed maximal value
of fcomp when δ = 10% is 1.16 (relative error 16%). The computed maximal value of
fcomp when δ = 15% is 1.11 (relative error 11%).
is the characteristic function of the letter λ. Figure 4.6 displays the functions
ftrue and fcomp . The noise levels are δ = 10% and 15%.
The main aim of this thesis is to solve two inverse source problems in the frequency domain. In this work, we derived systems of PDEs whose solutions directly provide the solutions of the inverse source problems under consideration. The main point of the method is to derive approximate models by truncating the Fourier series with respect to a special basis. We solved the approximate models by the quasi-reversibility method. The convergence of this method as the noise tends to 0 was proved. More importantly, numerical examples show that our method is robust, providing accurate reconstructions of the unknown source function from highly noisy data.
REFERENCES
[1] G. Bao, J. Lin, and F. Triki, “A multi-frequency inverse source problem,” Journal
of Differential Equations, vol. 249, no. 12, pp. 3443–3465, 2010.
[2] G. Bao, J. Lin, F. Triki, et al., “Numerical solution of the inverse source problem for the Helmholtz equation with multiple frequency data,” Contemp. Math., vol. 548, pp. 45–60, 2011.
[3] J. Cheng, V. Isakov, and S. Lu, “Increasing stability in the inverse source problem with many frequencies,” Journal of Differential Equations, vol. 260, no. 5, pp. 4786–4804, 2016.
[4] A. El Badia and T. Ha-Duong, “An inverse source problem in potential analysis,”
Inverse Problems, vol. 16, no. 3, p. 651, 2000.
[6] V. Isakov and S. Lu, “Increasing stability in the inverse source problem with
attenuation and many frequencies,” SIAM Journal on Applied Mathematics,
vol. 78, no. 1, pp. 1–18, 2018.
[7] M. N. Entekhabi and V. Isakov, “On increasing stability in the two dimensional
inverse source scattering problem with many frequencies,” Inverse Problems,
vol. 34, no. 5, p. 055005, 2018.
[8] V. Isakov and S. Lu, “Inverse source problems without (pseudo)convexity assumptions,” Inverse Problems & Imaging, vol. 12, no. 4, 2018.
[9] G. Bao, J. Lin, and F. Triki, “An inverse source problem with multiple frequency
data,” Comptes Rendus Mathematique, vol. 349, no. 15-16, pp. 855–859, 2011.
[10] X. Wang, Y. Guo, D. Zhang, and H. Liu, “Fourier method for recovering acoustic
sources from multi-frequency far-field data,” Inverse Problems, vol. 33, no. 3,
p. 035001, 2017.
[12] A. El Badia and T. Ha-Duong, “On an inverse source problem for the heat equation. Application to a pollution detection problem,” Journal of Inverse and Ill-Posed Problems, vol. 10, no. 6, pp. 585–599, 2002.
[13] J. Li, M. Yamamoto, and J. Zou, “Conditional stability and numerical reconstruction of initial temperature,” Communications on Pure & Applied Analysis, vol. 8, no. 1, pp. 361–382, 2009.
[31] L. Bourgeois, “Convergence rates for the quasi-reversibility method to solve the Cauchy problem for Laplace’s equation,” Inverse Problems, vol. 22, no. 2, p. 413, 2006.
[36] L. H. Nguyen, “An inverse source problem for hyperbolic equations and the Lipschitz-like convergence of the quasi-reversibility method,” arXiv preprint arXiv:1806.03921, 2018.
[38] L. C. Evans, “Partial Differential Equations,” 2nd ed., vol. 19 of Graduate Studies in Mathematics, American Mathematical Society, Providence, RI, 2010.
[42] L. Bourgeois, D. Ponomarev, and J. Dardé, “An inverse obstacle problem for the
wave equation in a finite time domain,” 2018.
[44] L. Nguyen, “An inverse space-dependent source problem for hyperbolic equations and the Lipschitz-like convergence of the quasi-reversibility method,” Inverse Problems, 2019.