
790 IEEE TRANSACTIONS ON COMPUTATIONAL IMAGING, VOL. 10, 2024

A Box-Spline Framework for Inverse Problems With Continuous-Domain Sparsity Constraints

Mehrsa Pourya, Graduate Student Member, IEEE, Aleix Boquet-Pujadas, and Michael Unser, Fellow, IEEE

Abstract—The formulation of inverse problems in the continuum eliminates discretization errors and allows for the exact incorporation of priors. In this paper, we formulate a continuous-domain inverse problem over a search space of continuous and piecewise-linear functions parameterized by box splines. We present a numerical framework to solve those inverse problems with total variation (TV) or its Hessian-based extension (HTV) as regularizers. We show that the box-spline basis allows for exact and efficient convolution-based expressions for both TV and HTV. Our optimization strategy relies on a multiresolution scheme whereby we progressively refine the solution until its cost stabilizes. We test our framework on linear inverse problems and demonstrate its ability to effectively reach a stage beyond which the refinement of the search space no longer decreases the optimization cost.

Index Terms—Continuous and piecewise linear, discretization, total variation, Hessian total variation, multiresolution.

I. INTRODUCTION

THE recovery of an unknown signal from a set of noisy measurements—an inverse problem—plays a central role in the field of imaging [1], [2]. The classic solution of inverse problems involves an arbitrary discretization step whereby one expresses the unknown signal in some basis. The pixel basis is popular for its simplicity. Using such a basis, one formulates the inverse problem as

    arg min_{c ∈ R^N} (E(Hc, y) + λ R(c)),   (1)

where the discrete measurements are y = (y_m)_{m=1}^{M} ∈ C^M and c = (c_n)_{n=1}^{N} ∈ R^N are the pixel values. In this formulation, H ∈ C^{M×N} denotes the discretized forward operator (the system matrix), which models the acquisition process. The loss functional E : C^M × C^M → R_+ quantifies data fidelity. The regularizer R : R^N → R_+ incorporates prior knowledge about the unknown signal to ensure the well-posedness of the problem. The hyperparameter λ ∈ R_+ balances the data-fidelity and regularization terms. There is extensive literature devoted to the solution of (1), especially with sparsity-promoting regularizers supported by the theory of compressed sensing [3], [4], [5], [6], [7], [8], [9].

A. Continuous-Domain Inverse Problems

Although the discrete formulation (1) is widespread, a formulation of the inverse problem in the continuum offers several advantages. The continuous formulation eliminates discretization errors and allows for the direct employment of regularizers in the continuum. Moreover, it matches the underlying signal better and allows for its evaluation at any resolution. These reasons have motivated many studies for the understanding of continuous-domain inverse problems [10], [11], [12], [13], [14], [15], [16], [17], [18].

To pose an inverse problem in the continuum, one denotes the unknown signal as the d-dimensional function f : R^d → R and formulates the problem as

    arg min_{f ∈ X} (E(ν(f), y) + λ R(f)),   (2)

where X represents an adequate search space of functions and ν : X → C^M denotes the forward operator. Concretely, ν(f) = (ν_m(f))_{m=1}^{M}, with ν_m : X → C representing the effect of the mth detector of the imaging system. The regularizer R : X → R_+ acts in the continuum on the function f ∈ X.

To make Problem (2) tractable, one needs to identify a suitable search space X. A natural choice for X comes from the finiteness of the regularization term. This induces a native space B_R, which is intuitively the largest function space where R(f) is well-defined [19], [20]. In the best-case scenario, the solutions of (2) for X = B_R are characterized by a representer theorem [21], [22], [23]. This allows one to represent the solution with a finite number of parameters, which turns the problem into a computationally feasible one. Representer theorems depend on the regularizer and often identify solutions through the extreme points of the regularization ball [24]. However, one cannot always rely on representer theorems for discretization, as the solutions they provide sometimes take intractable forms. This urges the need for alternative strategies to tackle those cases.

In this paper, we focus on regularizers such as total variation (TV) and Hessian total variation (HTV), as expressed in the continuum [25], [26]. The pixel-based implementation of these continuously defined regularizers provides strong tools for the solution of inverse problems through compressed-sensing methods [27], [28], [29].

Manuscript received 16 October 2023; revised 2 April 2024; accepted 11 May 2024. Date of publication 17 May 2024; date of current version 30 May 2024. This work was supported in part by the European Research Council (ERC Project FunLearn) under Grant 101020573, in part by the Swiss National Science Foundation under Grant 200020_219356, and in part by Sinergia under Grant CRSII5 198569. The associate editor coordinating the review of this manuscript and approving it for publication was Prof. Alin M. Achim. (Corresponding author: Mehrsa Pourya.)

The authors are with the Biomedical Imaging Group, École polytechnique fédérale de Lausanne, 1015 Lausanne, Switzerland (e-mail: mehrsa.[email protected]; [email protected]; [email protected]).

This article has supplementary downloadable material available at https://doi.org/10.1109/TCI.2024.3402376, provided by the authors.

Digital Object Identifier 10.1109/TCI.2024.3402376
2333-9403 © 2024 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission.
See https://www.ieee.org/publications/rights/index.html for more information.

Authorized licensed use limited to: EPFL LAUSANNE. Downloaded on June 02,2024 at 09:28:02 UTC from IEEE Xplore. Restrictions apply.
The continuous-domain inverse problems over B_TV and B_HTV have been studied theoretically. In essence, TV regularization promotes piecewise-constant solutions [22], [30], [31]. Likewise, HTV regularization promotes continuous and piecewise-linear (CPWL) solutions [32], [33]. Algorithms for the one-dimensional case (d = 1) exist [34], [35], [36]. They rely on B-spline-based parameterizations of the signals on uniform grids [37]. In this paper, we generalize such frameworks to multiple dimensions.

B. Contributions

We develop a framework to solve the continuous-domain problem (2) with TV and HTV regularization in multiple dimensions. We restrict the search space X to the set of CPWL functions defined on uniform grids and construct our framework through three main contributions.

(i) Development of a CPWL search space in arbitrary dimension: To generate a CPWL search space, we use first-order box splines on a Cartesian grid. Contrary to their (more widespread) tensor-product (of linear B-splines) cousins [38], box splines guarantee a CPWL mapping, regardless of the dimensionality and without any compromise in approximation quality [39], [40], [41], [42], [43], [44], [45]. In Section III, we present our CPWL basis and derive its properties.

(ii) Exact computation of continuous-domain TV and HTV: We show that the use of first-order box splines allows for the exact computation of TV and HTV through efficient convolutions (see Theorems 2 and 3). Our continuous-domain computations eliminate the need for the singular-value decomposition (SVD) in HTV, which is the computational bottleneck of pixel-based approaches. This efficiency carries over to proximal-based optimization.

(iii) Multiresolution scheme for optimization: We follow a multiresolution scheme to find a proper grid size for the proposed CPWL model. This scheme progressively adjusts the grid size until the optimization cost stabilizes. It relies on multiscale extensions of our CPWL search space with dyadic grid sizes. We prove that the resulting multiscale search spaces are refinable (see Proposition 1). We present the details of the discretization at each scale and the numerical solvers in Sections IV and V, respectively.

We validate our framework in Section VI, where we present numerical results for the reconstruction of continuous-domain signals from Fourier measurements. We observe that the optimization cost stabilizes with the proper decay of the step size of the grid, which indicates that further refinement of the search space is not productive. Moreover, our proposed multiresolution strategy accelerates the optimization of the problem on fine grids. We also discuss the reconstruction quality and computational complexity of our CPWL-based discretizations of TV and HTV, as compared to their fully discrete counterparts. Notably, we observe that our approach accelerates the computations for HTV regularization.

C. Related Works

A classic approach to the solution of problems with a functional model of the underlying signal is finite-element discretization [46], [47]. The authors of [48] deployed a finite-element discretization on a uniform triangulation to solve a TV-regularized inverse problem. The CPWL parameterization of [48] is similar to that of our box-spline-based approach, which partitions the domain into the simplices of the Freudenthal-Kuhn triangulation. However, the method in [48] is specific to denoising problems with a continuum of measurements. In contrast, we consider a more realistic imaging setup wherein a finite set of measurements is obtained through a forward operator ν. Our spline-based formulation offers several advantages. It allows for closed-form expressions of various forward operators, in combination with TV and HTV regularization. Further, it results in an exact and convolutional upsampling scheme to refine the grid in any dimension.

Works like [49], [50] view TV regularization from the perspective of partial differential equations. They find explicit forms of the solution for denoising problems with a continuum of measurements.

Gridless (or off-the-grid) algorithms provide another alternative for the continuous-domain solution of TV regularization [51]. There, the solution is modeled as a sum of indicator functions over simple sets. Although such approaches are mathematically elegant and work well for images with simple structures, they do not seem to generalize well to natural images [52]. They also rely on heavy optimization techniques. By contrast, multiresolution schemes reduce the artifacts caused by the approximation grid and benefit from the efficient solvers developed for discrete optimization.

II. MATHEMATICAL PRELIMINARIES

A. Continuous and Piecewise-Linear Functions

A function f : R^d → R is continuous and piecewise-linear (CPWL) if
1) it forms a continuous map;
2) its domain R^d can be partitioned into a set {P_k} of non-overlapping polytopes such that R^d = ∪_k P_k and

    f(x) = ∑_k (a_k^T x + b_k) 1_{P_k}(x)   (3)

for some a_k ∈ R^d and b_k ∈ R. The indicator function 1_{P_k} : R^d → {0, 1} maps to 1 when evaluated inside the polytope P_k, and to 0 elsewhere.

By inspection of (3), the gradient of a CPWL function f is

    ∇f(x) = ∑_k a_k 1_{P_k}(x).   (4)

Note that the continuity of f at the boundaries of the polytopes imposes some non-trivial constraints on {a_k} and {b_k} in (3) and (4). Therefore, arbitrary choices of those parameters are not allowed.

B. Box Splines

A box spline B_{Ξ_p} : R^d → R of nonnegative integer order p in the space R^d is defined through vectors ξ_r ∈ R^d, r ∈ {1, ..., d + p}. They are gathered into a matrix Ξ_p = [ξ_1 ⋯ ξ_{d+p}] ∈ R^{d×(d+p)} [53]. The vectors in the set

{ξ_r}_{r=1}^{d} are linearly independent, meaning that Ξ_0 is full-rank. The piecewise-constant box spline B_{Ξ_0} is defined as

    B_{Ξ_0}(x) = 1/|det Ξ_0| if x = ∑_{r=1}^{d} t_r ξ_r for some t_r ∈ [0, 1], and 0 otherwise.   (5)

For p ≥ 1, the following recursion holds:

    B_{Ξ_p}(x) = ∫_0^1 B_{Ξ_{p−1}}(x − t ξ_{d+p}) dt.   (6)

As we shall see, this implies that the first-order box spline B_{Ξ_1} is a CPWL function, irrespective of the dimension d.

C. Freudenthal-Kuhn Triangulation

A simplex is the convex hull of d + 1 affinely independent points in R^d. A triangulation of R^d is its partition into simplices with disjoint interiors.

The Freudenthal-Kuhn triangulation partitions R^d into the T-cubes {[T k_1, T(k_1 + 1)] × ⋯ × [T k_d, T(k_d + 1)]}_{k∈Z^d}, where k = (k_r)_{r=1}^{d} and T > 0 is the grid size. Each T-cube is further subdivided into simplices S_{πq,k,T} that are uniquely determined by the set {v_0, ..., v_d} of their vertices, where

    v_0 = T k,   (7)
    v_r = v_{r−1} + T e_{πq(r)},  r ∈ {1, ..., d}.   (8)

Here, π_q is the qth member of Π—the set of all permutations of {1, ..., d}—and π_q(r) denotes the rth element of this permutation [54]. The elements of {e_1, ..., e_d} are the canonical basis vectors of R^d. An example of the Freudenthal-Kuhn triangulation for d = 2 is illustrated in Fig. 1(a).

Fig. 1. (a) An example of the Freudenthal-Kuhn triangulation in the two-dimensional space: π_1 = (1, 2), π_2 = (2, 1); the identified simplices illustrate the support of a first-order box spline on the Cartesian grid. (b) The basis function ϕ(· − k) is a shifted box spline on the Cartesian grid, positioned at the point k ∈ R^2. Such functions are affine over each simplex of the Freudenthal-Kuhn triangulation.

D. Total Variation

The isotropic total variation (TV) of a differentiable function f : R^d → R is defined as

    TV(f) = ∫_{R^d} ||∇f(x)||_2 dx,   (9)

where ∇f(x) denotes the gradient of f. The TV regularization promotes piecewise-constant solutions whose gradient magnitude ||∇f||_2 is zero almost everywhere. A more general definition of TV that accommodates non-differentiable functions is provided in Appendix A.

E. Hessian Total Variation

The Hessian total variation (HTV) is a generalization of TV that involves second-order differentials. For a twice-differentiable function, we write

    HTV(f) = ∫_{R^d} ||Hf(x)||_{S1} dx = ∫_{R^d} ∑_{r=1}^{d} σ_r(Hf(x)) dx,   (10)

where Hf(x) is the Hessian matrix of f at the location x and ||·||_{S1} is the matrix Schatten-one norm, given by the ℓ1 norm of the singular values σ_r(·) of a matrix. Definition (10) does not encompass CPWL functions; however, one can generalize it to accommodate such functions as

    HTV(f) = ||Hf||_{S1,M},   (11)

where Hf is the Hessian of f in the generalized sense of distributions. Here, ||·||_{S1,M} denotes the Schatten-total-variation mixed norm. Intuitively, HTV is the total-variation norm (M norm) of the mapping x ↦ ||Hf(x)||_{S1}, where the M norm is a generalization of the L1 norm (see [26] for more details). The null space of the HTV seminorm consists of affine mappings, and CPWL functions are dense in the ball of the HTV seminorm, including its extremal points [32]. The HTV regularization favors CPWL functions with few affine pieces.

III. CONTINUOUS-DOMAIN MODEL AND CALCULATIONS

In this section, we first define our multiscale CPWL search spaces and investigate their properties in Section III-A. We then compute their TV and HTV in Section III-B.

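Before we do so, the Freudenthal-Kuhn structure of Section II-C can be made concrete. The simplex S_{πq,k,T} that contains a given point x is found by taking the integer part of x/T (which identifies the T-cube k) and sorting the fractional parts in decreasing order (which identifies the permutation π_q). The following sketch illustrates this; the function name is ours and is not part of the paper:

```python
import numpy as np

def locate_simplex(x, T=1.0):
    """Return (k, pi) such that x lies in the Freudenthal-Kuhn simplex S_{pi,k,T}:
    k indexes the containing T-cube and pi sorts the fractional parts of x/T
    in decreasing order (cf. the vertex construction of eqs. (7)-(8))."""
    x = np.asarray(x, dtype=float)
    k = np.floor(x / T).astype(int)
    t = x / T - k                                                  # fractional parts in [0, 1)
    pi = tuple(int(r) + 1 for r in np.argsort(-t, kind="stable"))  # 1-based, as in (8)
    return k, pi

# The point (1.3, 0.8) lies in the cube k = (1, 0); its fractional parts
# (0.3, 0.8) decrease along the coordinate order (2, 1), i.e., pi2 = (2, 1).
k, pi = locate_simplex(np.array([1.3, 0.8]))
assert (k == np.array([1, 0])).all() and pi == (2, 1)
```

Ties on the fractional parts correspond to points on shared faces, where any of the adjacent simplices is a valid answer; the stable sort simply picks one deterministically.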
A. Multiscale CPWL Search Spaces

We first derive some properties of box splines defined on the Cartesian grid. They are useful for the definition and analysis of our proposed search space. In particular, we build multiscale CPWL search spaces {X_(s) : s ∈ Z} that are refinable (Proposition 1) to support our proposed multiresolution scheme. Our refinement is dyadic, with the grid size of X_(s) being 2^{−s}.

1) Cartesian Box Splines: We set Ξ_0 = I, where I is the identity matrix. From (5), we have that

    B_{Ξ_0}(x) = ∏_{r=1}^{d} box(x_r),   (12)

where x = (x_r)_{r=1}^{d} and the function box : R → R is defined as

    box(x) = 1 if 0 ≤ x ≤ 1, and 0 otherwise.   (13)

To define the piecewise-linear Cartesian box spline B_{Ξ_1} : R^d → R, we choose ξ_{d+1} = ∑_{r=1}^{d} e_r = 1. By (6), we have that

    B_{Ξ_1}(x) = ∫_0^1 B_{Ξ_0}(x − t ξ_{d+1}) dt = ∫_0^1 ∏_{r=1}^{d} box(x_r − t) dt = ∫_{−∞}^{+∞} box(t) ∏_{r=1}^{d} box(x_r − t) dt.   (14)

We then define our basis generator ϕ as

    ϕ(x) = B_{Ξ_1}(x + 1),   (15)

where the shift by 1 centers the basis function around the origin. An example of the basis generator shifted to the position k is shown in Fig. 1(b).
Theorem 1 (Basis-Function Properties): The piecewise-linear box spline ϕ : R^d → R defined by (15) has the following properties.
i) Explicit closed-form formula:

    ϕ(x) = (1 + min(x_1, ..., x_d, 0) − max(x_1, ..., x_d, 0))_+,   (16)

where x = (x_r)_{r=1}^{d} and (x)_+ := max(x, 0).
ii) Interpolatory on the Cartesian grid:

    ∀k ∈ Z^d : ϕ(k) = 1 if k = 0, and 0 otherwise.   (17)

iii) Explicit Fourier transform:

    ϕ̂(ω) = ((e^{j 1^T ω} − 1)/(j 1^T ω)) ∏_{r=1}^{d} ((1 − e^{−j ω_r})/(j ω_r)) = sinc(1^T ω/(2π)) ∏_{r=1}^{d} sinc(ω_r/(2π)),   (18)

where ω = (ω_r) ∈ R^d and sinc(x) = sin(πx)/(πx).
iv) Refinability:

    ϕ(·/2) = (1/2) ∑_{k ∈ {0,1}^d} (ϕ(· + k) + ϕ(· − k)).   (19)

Note that (16) expresses ϕ as the composition of two elementary CPWL functions. Since the CPWL property is preserved through composition [55], this automatically implies that ϕ is CPWL as well. In Fig. 2, we illustrate (19) for the two-dimensional case (d = 2). Specifically, ϕ(·/2), which is a basis generator for a grid of size T = 2, can be constructed exactly by a linear combination of seven shifted versions of ϕ(·), which is a basis generator on a grid of size T = 1. The weights are all 0.5, except for k = 0 in (19), where ϕ(·) gets a double weight.

Fig. 2. Refinability of the box-spline basis function in the dimension d = 2.

Proof of Theorem 1:
i) The geometric interpretation of (14) and (15) implies that

    ϕ(x) = Leb( (∩_{r=1}^{d} supp_t{box(x_r − t + 1)}) ∩ supp_t{box(t)} ),   (20)

where supp_t returns the support of the input function with respect to the variable t, and Leb denotes the Lebesgue measure. From (13),

    ϕ(x) = Leb( (∩_{r=1}^{d} [x_r, x_r + 1]) ∩ [0, 1] ).   (21)

Through Lemma 1 of Appendix A (the appendices are provided in the supplementary materials), we have that

    ϕ(x) = Leb(C_d) if Len(C_d) ≥ 0, and Leb(∅) if Len(C_d) < 0,   (22)

where C_d = [max(x_1, ..., x_d, 0), min(x_1, ..., x_d, 0) + 1] and Len(C_d) = 1 + min(x_1, ..., x_d, 0) − max(x_1, ..., x_d, 0). Therefore,

    ϕ(x) = Len(C_d) if Len(C_d) ≥ 0, and 0 otherwise,   (23)

which implies that

    ϕ(x) = [Len(C_d)]_+   (24)

and completes the proof of (16).
ii) One can verify this property through (16) and the fact that (max(k_1, ..., k_d, 0) − min(k_1, ..., k_d, 0)) ≥ 1 when k = (k_r)_{r=1}^{d} ∈ Z^d \ {0}.
iii) This result has been proven elsewhere (see https://doi.org/10.1016/j.acha.2023.101581).
iv) Let us assume that ϕ(·/2) can be expressed through a linear combination of the shifted versions of ϕ(·) as

    ϕ(·/2) = ∑_{k∈Z^d} u[k] ϕ(· − k).   (25)

To find u such that (25) holds, we take the Fourier transform of both sides, which yields

    U(e^{jω}) = 2^d ϕ̂(2ω)/ϕ̂(ω),   (26)

where U(e^{jω}) = ∑_{k∈Z^d} u[k] e^{−j ω^T k} is the discrete-time Fourier transform of u. By (18) and after some simplifications, we have that

    U(e^{jω}) = (1/2) (e^{j 1^T ω} + 1) ∏_{r=1}^{d} (1 + e^{−j ω_r}).   (27)

We define V(e^{jω}) = ∏_{r=1}^{d} (1 + e^{−j ω_r}) and invoke separability to compute its inverse discrete-time Fourier transform v as

    v[k'] = ∏_{r=1}^{d} (δ[k'_r] + δ[k'_r + 1]) = ∑_{k ∈ {0,1}^d} δ[k' + k].   (28)

From the properties of the Fourier transform, we have that

    u[·] = (1/2) (v[·] + v[· − 1]) = (1/2) ∑_{k ∈ {0,1}^d} (δ[· + k] + δ[· + k − 1]) = (1/2) ∑_{k ∈ {0,1}^d} (δ[· + k] + δ[· − k]),   (29)

where δ[·] represents the Kronecker delta. Equations (29) and (25) lead to (19), which completes the proof.

2) Definition of the Search Space: Equipped with the box spline ϕ : R^d → R, we now define the CPWL search space

    X_T = { ∑_{k∈Z^d} c[k] ϕ(·/T − k) : c ∈ ℓ2(Z^d) },   (30)

where ℓ2(Z^d) denotes the space of discrete signals with finite energy. Each function f : R^d → R in X_T can be written as

    f(·) = ∑_{k∈Z^d} c[k] ϕ(·/T − k),   (31)

where {c[k]}_{k∈Z^d} is the set of its so-called expansion coefficients.

The use of the box splines induces the following properties.
i) Any affine mapping f : R^d → R of the form f(x) = a^T x + b can be produced by ∑_{k∈Z^d} (T a^T k + b) ϕ(·/T − k) for some a ∈ R^d and b ∈ R.
ii) The space X_T has the capacity to approximate any twice-differentiable function [56], [57].
iii) The set {ϕ(·/T − k)}_{k∈Z^d} forms a Riesz basis, which guarantees a unique and stable link between any function f ∈ X_T and its expansion coefficients c [58].
iv) The linear regions of the function f ∈ X_T are exactly the simplices of the Freudenthal-Kuhn triangulation with the grid size T. The sampled value of f at a vertex T k, k ∈ Z^d, of the triangulation is f(T k) = c[k]. On each simplex, f is identified by the only hyperplane that interpolates the function values at the vertices of that simplex [58].

3) Multiscale Search Spaces: We define a series of multiscale search spaces {X_(s) = X_{2^{−s}} : s ∈ Z}. At scale s, X_(s) satisfies (30) with T = 2^{−s}. Therefore, any f_s : R^d → R in X_(s) is given by

    f_s(·) = ∑_{k∈Z^d} c_s[k] ϕ(2^s · − k),   (32)

where {c_s[k]}_{k∈Z^d} is the set of expansion coefficients of f_s at scale s.

Proposition 1 (Refinable Search Space): If f ∈ X_(s) with expansion coefficients c_s, then f is exactly representable in X_(s+1) with expansion coefficients

    c_{s+1}[·] = ∑_{k∈Z^d} c_s[k] u[· − 2k],   (33)

where u is defined in (29).

Proof: If f ∈ X_(s), then from (32), (25), and (29) we have that

    f(·) = ∑_{k∈Z^d} c_s[k] ∑_{k'∈Z^d} u[k'] ϕ(2^{s+1} · − 2k − k') = ∑_{k'∈Z^d} ( ∑_{k∈Z^d} c_s[k] u[k' − 2k] ) ϕ(2^{s+1} · − k').   (34)

This implies that

    c_{s+1}[k'] = ∑_{k∈Z^d} c_s[k] u[k' − 2k]   (35)

and completes the proof.

As a result of Theorem 1, we obtain the embedding ⋯ ⊂ X_(s−1) ⊂ X_(s) ⊂ X_(s+1) ⊂ ⋯. This means that any function at a coarse scale s_1 ≤ s is exactly representable at any finer scale s.

B. Exact and Efficient Computation of the Regularization

Here, we show that the TV and HTV seminorms of f ∈ X_T can be expressed exactly through efficient convolutions. Without loss of generality, our results are applicable to f_s ∈ X_(s).
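The closed form (16), the interpolation property (17), and the refinability relation (19) of Theorem 1 are easy to check numerically. The following sketch does so for d = 2; the function name and the random-point test are ours:

```python
import numpy as np

def phi(x):
    """Closed form (16) of the first-order box spline, vectorized over the last axis."""
    x = np.asarray(x, dtype=float)
    lo = np.minimum(x.min(axis=-1), 0.0)
    hi = np.maximum(x.max(axis=-1), 0.0)
    return np.maximum(1.0 + lo - hi, 0.0)

# Property (ii): phi interpolates the Kronecker delta on the integer grid.
for k in [(0, 0), (1, 0), (0, 1), (1, 1), (-1, 2)]:
    assert phi(np.array(k)) == (1.0 if k == (0, 0) else 0.0)

# Property (iv): phi(x/2) = 1/2 * sum_{k in {0,1}^2} (phi(x+k) + phi(x-k)),
# i.e., seven shifted copies of phi, with a double weight at k = 0.
shifts = [np.array(k, dtype=float) for k in [(0, 0), (0, 1), (1, 0), (1, 1)]]
X = np.random.default_rng(0).uniform(-3, 3, size=(500, 2))
assert np.allclose(phi(X / 2), 0.5 * sum(phi(X + k) + phi(X - k) for k in shifts))
```

Because ϕ is interpolatory, the same routine also evaluates any f ∈ X_T by summing c[k] ϕ(x/T − k) over the finitely many k with ϕ(x/T − k) ≠ 0.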


1) Total Variation: We first calculate the gradient of f ∈ X_T in Lemma 1; we then compute its TV seminorm in Theorem 2.

Lemma 1 (Gradient Calculation): The gradient of the function f ∈ X_T on the interior of the simplices of its domain is

    ∇f(·) = (1/T) ∑_{k∈Z^d} ∑_{q=1}^{d!} ∑_{r=1}^{d} (c ∗ g_{πq,r})[k] 1_{πq,k,T}(·) e_r,   (36)

where ∗ denotes the discrete convolution operator and 1_{πq,k,T} : R^d → {0, 1} is the indicator function of the simplex S_{πq,k,T} of the Freudenthal-Kuhn triangulation. Moreover, we have that

    g_{πq,r}[·] = δ[· + ∑_{n=1}^{τ−1} e_{πq(n)} + e_r] − δ[· + ∑_{n=1}^{τ−1} e_{πq(n)}],   (37)

where the variable τ satisfies

    e_{πq(τ)} = e_r.   (38)

This result tells us that each component of the gradient of f ∈ X_T is a piecewise-constant function. Over the simplices whose orientation follows π_q, one obtains the value of the rth component of the gradient by convolving the expansion coefficients c with the finite-difference filter g_{πq,r}. To further clarify, we illustrate the filters g_{πq,r} for the two-dimensional case in Fig. 3. Each filter g_{πq,r} has only two nonzero values (+1 and −1), regardless of the dimensionality. The number of filters corresponding to each simplex orientation is d, and the number of orientations is d!.

Fig. 3. Discrete filters {g_{πq,r} : q, r ∈ {1, 2}} and the internal variable τ, as used for the calculation of the gradient and TV in d = 2. The filters are zero everywhere except at the diamond and square points, which correspond to the values +1 and −1, respectively. In each plot, the black boundaries identify the simplex over which the rth component of the gradient is computable through the depicted finite difference.

Proof of Lemma 1: We combine Property iv) of the search space and (4) to write

    ∇f(·) = ∑_{k∈Z^d} ∑_{q=1}^{d!} a_{πq,k,T} 1_{πq,k,T}(·).   (39)

To find a_{πq,k,T}, we use the hyperplane equation over the simplex S_{πq,k,T}. The result is

    T P a_{πq,k,T} = [ c[k + e_{πq(1)}] − c[k] ; ⋮ ; c[k + ∑_{n=1}^{d} e_{πq(n)}] − c[k + ∑_{n=1}^{d−1} e_{πq(n)}] ],   (40)

where

    P = [ e_{πq(1)}^T ; ⋮ ; e_{πq(d)}^T ].   (41)

Since P is a permutation matrix, we have that P^T P = I. Therefore, P^{−1} = P^T and

    a_{πq,k,T} = (1/T) P^T [ c[k + e_{πq(1)}] − c[k] ; ⋮ ; c[k + ∑_{n=1}^{d} e_{πq(n)}] − c[k + ∑_{n=1}^{d−1} e_{πq(n)}] ].   (42)

Consequently,

    a_{πq,k,T}[r] = (1/T) ( c[k + ∑_{n=1}^{τ−1} e_{πq(n)} + e_r] − c[k + ∑_{n=1}^{τ−1} e_{πq(n)}] ),   (43)

where τ satisfies (38) and r ∈ {1, ..., d}. Equation (43) allows us to rewrite the gradient as

    ∇f(·) = (1/T) ∑_{k∈Z^d} ∑_{q=1}^{d!} ∑_{r=1}^{d} ( c[k + ∑_{n=1}^{τ−1} e_{πq(n)} + e_r] − c[k + ∑_{n=1}^{τ−1} e_{πq(n)}] ) e_r 1_{πq,k,T}(·).   (44)

After consideration of (37) and the convolution property, the proof of Lemma 1 is complete.

Theorem 2 (Isotropic Total Variation): The isotropic total variation of the function f ∈ X_T is

    TV(f) = (T^{d−1}/d!) ∑_{k∈Z^d} ∑_{q=1}^{d!} ( ∑_{r=1}^{d} ((c ∗ g_{πq,r})[k])^2 )^{1/2},   (45)

where the filter g_{πq,r} satisfies (37).

Theorem 2 hints that, in order to calculate the exact continuous-domain TV of f ∈ X_T, the following steps are needed: (i) for every simplex orientation π_q, convolve the expansion coefficients c with the finite-difference filters g_{πq,r} for r ∈ {1, ..., d} to obtain d values at each point k; (ii) compute the ℓ2

norm of those d values at each point; (iii) repeat the procedure for all the d! orientations and add the results at each point; (iv) sum the obtained values over the whole domain and scale the resulting scalar value by T^{d−1}/d!. The procedure resembles the conventional TV calculation, except for the aggregation across the simplex orientations. For the conventional pixel-based TV, only one arbitrary orientation is considered.
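The steps above reduce to a few array slicings in d = 2 with T = 1, where the two orientations π_1 = (1, 2) and π_2 = (2, 1) each contribute d = 2 finite differences per grid cell. The following sketch implements (45); the function name and the restriction to differences fully supported on the coefficient array are our choices:

```python
import numpy as np

def tv_box_spline(c, T=1.0):
    """Continuous-domain TV (eq. (45)) of the CPWL function with coefficients c
    on a grid of size T, for d = 2; only differences fully inside the array count."""
    d1 = c[1:, :-1] - c[:-1, :-1]   # c[k+e1] - c[k]         (orientation pi1)
    d2 = c[1:, 1:] - c[1:, :-1]     # c[k+(1,1)] - c[k+e1]   (orientation pi1)
    e1 = c[1:, 1:] - c[:-1, 1:]     # c[k+(1,1)] - c[k+e2]   (orientation pi2)
    e2 = c[:-1, 1:] - c[:-1, :-1]   # c[k+e2] - c[k]         (orientation pi2)
    # T^{d-1}/d! = T/2 in d = 2; one l2 norm per simplex, two simplices per cell
    return (T / 2.0) * (np.hypot(d1, d2).sum() + np.hypot(e1, e2).sum())

# Sanity check: on an affine patch f(x) = a^T x, every simplex has gradient a,
# so the TV over the covered cells must equal ||a||_2 times the area (N-1)^2.
N, a = 6, np.array([2.0, -1.0])
k1, k2 = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
c = a[0] * k1 + a[1] * k2
assert np.isclose(tv_box_spline(c), np.linalg.norm(a) * (N - 1) ** 2)
```

Note that both orientations are aggregated, which is precisely what distinguishes (45) from a conventional pixel-based TV that keeps a single orientation.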
Proof of Theorem 2: We first calculate the ℓ2 norm of the gradient as

    ||∇f(·)||_2 = (1/T) ∑_{k∈Z^d} ∑_{q=1}^{d!} ( ∑_{r=1}^{d} ((c ∗ g_{πq,r})[k])^2 )^{1/2} 1_{πq,k,T}(·).   (46)

Then, we obtain that

    TV(f) = (1/T) ∑_{k∈Z^d} ∑_{q=1}^{d!} ( ∑_{r=1}^{d} ((c ∗ g_{πq,r})[k])^2 )^{1/2} ∫_{R^d} 1_{πq,k,T}(x) dx = (T^{d−1}/d!) ∑_{k∈Z^d} ∑_{q=1}^{d!} ( ∑_{r=1}^{d} ((c ∗ g_{πq,r})[k])^2 )^{1/2},   (47)

which completes the proof. Note that the gradient of f is not defined on the junctions of the domain simplices. However, the set of such junctions is of measure zero and does not affect the obtained result.

2) Hessian Total Variation: We first calculate the Hessian of f ∈ X_T in Lemma 2; we then compute its HTV in Theorem 3.

Lemma 2 (Explicit Hessian): Let the function ψ_{p,q} : R^d → R be defined as

    ψ_{p,q}(x) = ∏_{r∈{1,...,d}\{p,q}} box(x_r − x_p),   (48)

and let the functions μ_{1,p} and μ_{2,p,q} be defined as

    μ_{1,p}(x) = δ(x_p) ψ_{p,p}(x),
    μ_{2,p,q}(x) = δ(x_p − x_q) box(x_p) ψ_{p,q}(x),   (49)

where x = (x_r)_{r=1}^{d}. Further, we define the matrix M_{p,q} ∈ R^{d×d} such that it is zero everywhere except for its [p, q]th component, which is equal to one. We define the matrix C_{p,q} as

    C_{p,q} = M_{p,q} + M_{q,p} − M_{p,p} − M_{q,q}.   (50)

Then, the generalized Hessian of f ∈ X_T is

    Hf(·) = (1/T^2) ∑_{k∈Z^d} ( ∑_{p=1}^{d} (c ∗ κ_{1,p})[k] μ_{1,p}(·/T − k) M_{p,p} + ∑_{p=1}^{d} ∑_{q=1}^{p} (c ∗ κ_{2,p,q})[k] μ_{2,p,q}(·/T − k) C_{p,q} ),   (51)

where

    κ_{1,p}[·] = δ[·] − δ[· − e_p] − δ[· + 1] + δ[· + 1 − e_p],
    κ_{2,p,q}[·] = δ[·] − δ[· − e_p] − δ[· − e_q] + δ[· − e_p − e_q].   (52)

In Lemma 2, we see that the Hessian of the CPWL function f ∈ X_T is zero everywhere except on the boundaries (facets) of the simplices of the Freudenthal-Kuhn triangulation. These boundaries are fully identified by the supports of the Dirac fences μ_{1,p}(·/T − k) and μ_{2,p,q}(·/T − k) for k ∈ Z^d (see Fig. 4). The Hessian on these boundaries is equal to the matrix M_{p,p} or C_{p,q} multiplied by a scalar value. To obtain the proper scalar value, one convolves the expansion coefficients c with the filter κ_{1,p} or κ_{2,p,q}. We show these filters for the two-dimensional case in Fig. 4.

Fig. 4. Discrete filters that are used for the calculation of the generalized Hessian and HTV in d = 2. The filters are zero everywhere except at the diamond and square points, which correspond to the values +1 and −1, respectively. From left to right, the black line in each plot depicts the facet over which the difference of gradients contributes to the HTV; it is also the support of μ_{1,1}(· − e_1), μ_{1,2}(· − e_2), and μ_{2,1,1}, respectively.

Proof of Lemma 2: By linearity and the chain rule for differentiation, we obtain that

    Hf(·) = (1/T^2) ∑_{k∈Z^d} c[k] H{ϕ}(·/T − k).   (53)

We compute H{ϕ} (the Hessian of the basis function) in Appendix C. It follows that

    Hf(·) = (1/T^2) ∑_{k∈Z^d} c[k] ( ∑_{p=1}^{d} ( μ_{1,p}(·/T − k + 1) + μ_{1,p}(·/T − k − e_p) − μ_{1,p}(·/T − k − e_p + 1) − μ_{1,p}(·/T − k) ) M_{p,p} − ∑_{p=1}^{d} ∑_{q<p} ( μ_{2,p,q}(·/T − k + 1 − e_q) + μ_{2,p,q}(·/T − k + 1 − e_p) − μ_{2,p,q}(·/T − k + 1) − μ_{2,p,q}(·/T − k + 1 − e_p − e_q) ) C_{p,q} ).   (54)

By a change of variable, by the definition of the filters in (52), 


d
and by the property of convolution, (54) leads to (51), thereby HTV(f ) = T d−2
|(c ∗ κ1,p )[k]|
completing the proof. k∈Zd p=1
Theorem 3 (Hessian Total Variation): The Hessian total vari- 

d 
p
ation of the function f ∈ X_T is

HTV(f) = T^(d−2) ( Σ_{k∈Z^d} Σ_{p=1}^{d} |(c ∗ κ_{1,p})[k]| + 2 Σ_{k∈Z^d} Σ_{p=1}^{d} Σ_{q=1}^{p} |(c ∗ κ_{2,p,q})[k]| ).  (55)

This shows that one can compute the continuous-domain HTV of f ∈ X_T by aggregating the results of the convolution of c with κ_{1,p} and κ_{2,p,q}. Remarkably, this eliminates the need for the SVD that is otherwise required for the traditional computation of the Schatten norm. Each filter only has four nonzero values in any dimension. The number of filters (convolutions) in dimension d is (d+1 choose 2) = d(d+1)/2.

Proof of Theorem 3: From (11), and noting that the individual arguments of the summations in (51) are disjoint Dirac fences, we obtain that

HTV(f) = (1/T^2) [ Σ_{k∈Z^d} Σ_{p=1}^{d} ‖(c ∗ κ_{1,p})[k] μ_{1,p}(·/T − k) M_{p,p}‖_{S1,M}  (56)

+ Σ_{k∈Z^d} Σ_{p=1}^{d} Σ_{q=1}^{p} ‖(c ∗ κ_{2,p,q})[k] μ_{2,p,q}(·/T − k) C_{p,q}‖_{S1,M} ].  (57)

We then invoke the formula for the ‖·‖_{S1,M} norm of a Dirac fence (see [26], Theorem 3.5-2) and obtain

HTV(f) = (1/T^2) Σ_{k∈Z^d} Σ_{p=1}^{d} T |(c ∗ κ_{1,p})[k]| Leb(D) ‖M_{p,p}‖_{S1}
       + (1/T^2) Σ_{k∈Z^d} Σ_{p=1}^{d} Σ_{q=1}^{p} T |(c ∗ κ_{2,p,q})[k]| Leb(E) ‖C_{p,q}‖_{S1},  (58)

where

D = {x ∈ R^(d−1) : box(x/T) = 1},
E = {(x_1, x) ∈ R^(d−1) : box(x_1/T) box((x − x_1 1)/T) = 1},  (59)

and Leb returns the Lebesgue measure (volume) of the input domain. The only nonzero eigenvalues of M_{p,p} and C_{p,q} are the elements of {1} and {1, −1}, respectively. Therefore, we have that ‖M_{p,p}‖_{S1} = 1 and ‖C_{p,q}‖_{S1} = 2. In addition, Leb(D) = T^(d−1) and Leb(E) = T^(d−1). Hence, we obtain that

HTV(f) = T^(d−2) ( Σ_{k∈Z^d} Σ_{p=1}^{d} |(c ∗ κ_{1,p})[k]| + 2 Σ_{k∈Z^d} Σ_{p=1}^{d} Σ_{q=1}^{p} |(c ∗ κ_{2,p,q})[k]| ),  (60)

which completes the proof. □

IV. DISCRETIZATION

In this section, we discretize the continuous-domain problem using our multiscale CPWL search spaces. The representation of the signal f at scale s is denoted by f_s ∈ X^(s) and satisfies (32). Our model allows us to uniquely characterize the function f_s : R^d → R with the discrete expansion coefficients {c_s[k]}_{k∈Z^d}.

Lemma 3: Let Ω = [0, b_1] × ··· × [0, b_d] ⊂ R^d and define N_s = 2^s (max_r b_r) + 1 and

K_s = {0, . . . , N_s − 1} × ··· × {0, . . . , N_s − 1}.  (61)

Then,

Ω ⊆ Hull({2^(−s) k | k ∈ K_s}),  (62)

where Hull returns the convex hull of a set of input points.

We assume that the signal f to be recovered satisfies the zero boundary conditions

f(x) = 0 for x ∉ Ω or x ∈ ∂Ω,  (63)

where Ω = [0, b_1] × ··· × [0, b_d] ⊂ R^d and ∂Ω denotes the boundary of Ω. With this assumption and from Lemma 3, f_s is (exactly) equal to

f_s(·) = Σ_{k∈K_s} c_s[k] ϕ(2^s · − k),  (64)

where K_s satisfies (61) and |K_s| = N_s^d. Since K_s is a finite set, one can define a bijective mapping idx_s : K_s → N, with preimage idx_s^(−1) : N → K_s, to rewrite (31) as

f_s(·) = Σ_{n=1}^{N_s^d} c_{s,n} ϕ_{s,n},  (65)

where c_{s,n} = c_s(idx_s^(−1)[n]) and ϕ_{s,n} = ϕ(2^s · − idx_s^(−1)[n]). In this way, one obtains that c_s = Vec(c_s) = (c_{s,n})_{n=1}^{N_s^d} ∈ R^(N_s^d). This is the vectorized representation of the signal at resolution s, which we refer to as the parameters of our model. As a result of Theorem 1, the relation between the parameters at successive scales is

c_{s+1} = U_s c_s,  (66)

where U_s ∈ R^(N_{s+1}^d × N_s^d) corresponds to the transpose of a Toeplitz-like matrix associated with the convolution (with a stride of two) of c_s with the mirrored version of u defined in (29). More details are provided in Appendix D. Lemma 3 also verifies that N_{s+1} = (2N_s − 1).
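The two-scale relation (66) is easy to exercise in one dimension. The sketch below assumes the refinement mask u = [1/2, 1, 1/2] of the linear B-spline (hat) basis; this is the standard mask for 1-D CPWL bases, whereas the paper's u is defined in its (29), which is not reproduced in this excerpt. The sketch checks both that N_{s+1} = 2N_s − 1 and that the refined coefficients represent the same CPWL function:

```python
import numpy as np

def hat(x):
    # linear B-spline: the 1-D CPWL basis function
    return np.maximum(1.0 - np.abs(x), 0.0)

def refine(c):
    # transpose of a stride-2 convolution with the mirrored mask [1/2, 1, 1/2]
    out = np.zeros(2 * len(c) - 1)
    out[0::2] += c                         # coefficients kept on the old knots
    out[1::2] += 0.5 * (c[:-1] + c[1:])    # averages on the new half-integer knots
    return out

cs = np.array([0.0, 1.0, 3.0, 2.0, 0.0])   # scale-s coefficients (zero boundaries)
cs1 = refine(cs)                           # scale-(s+1) coefficients
assert len(cs1) == 2 * len(cs) - 1         # N_{s+1} = 2 N_s - 1

x = np.linspace(0.0, 4.0, 101)
f_s = sum(ck * hat(x - k) for k, ck in enumerate(cs))
f_s1 = sum(ck * hat(2 * x - k) for k, ck in enumerate(cs1))
assert np.allclose(f_s, f_s1)              # same CPWL function at both scales
```

In d dimensions, the description after (66) suggests that U_s acts in the same way on the vectorized coefficients, as a stride-2 transposed convolution along each direction.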
Authorized licensed use limited to: EPFL LAUSANNE. Downloaded on June 02,2024 at 09:28:02 UTC from IEEE Xplore. Restrictions apply.
A. Exact Discretization of the Forward Operator

Here, we describe how to discretize the forward operator using the parameters c_s. For a linear forward operator ν and at scale s, we have that

v_m(f_s) = ⟨ν_m, f_s⟩ = ⟨ν_m, Σ_{n=1}^{N_s^d} c_{s,n} ϕ_{s,n}⟩ = Σ_{n=1}^{N_s^d} c_{s,n} ⟨ν_m, ϕ_{s,n}⟩ = h_{s,m}^T c_s,  (67)

where m ∈ {1, . . . , M} and h_{s,m} = (⟨ν_m, ϕ_{s,n}⟩)_{n=1}^{N_s^d}. By constructing H_s ∈ R^(M × N_s^d) as

H_s = [h_{s,1}^T ; ··· ; h_{s,M}^T],  (68)

we have that

v(f_s) = H_s c_s.  (69)

The matrix H_s is the exact discretization of the forward operator at scale s and depends on the response of the box splines to the system. As a consequence of Theorem 1, we have that

H_s = H_{s+1} U_s.  (70)

Therefore, we only need to compute the discretization of the forward operator at some fine scale s_fine and use (70) to propagate it to coarser scales. The computation of H_s can be further simplified depending on the properties of the forward operator.

1) Example: If the forward operator corresponds to a sampling in the Fourier domain, we can write that

⟨ν_m, ϕ_{s,n}⟩ = ⟨e^(−j⟨ω_m, ·⟩), ϕ_{s,n}⟩ = ϕ̂_{s,n}(ω_m) = 2^(−sd) ϕ̂_{0,n}(2^(−s) ω_m)  (71)

= 2^(−sd) ϕ̂(2^(−s) ω_m) e^(−j 2^(−s) ω_m^T idx_s^(−1)[n]).  (72)

The term ϕ̂(2^(−s) ω_m) can be computed in accordance with (18). If one then defines w_{s,m} = (e^(−j 2^(−s) ω_m^T idx_s^(−1)[n]))_{n=1}^{N_s^d}, it follows that

h_{s,m} = 2^(−sd) ϕ̂(2^(−s) ω_m) w_{s,m}  (73)

and, therefore, that

H_s = diag(ϕ̂_s) W_s,  (74)

where diag : R^M → R^(M×M) builds a diagonal matrix whose diagonal elements are the entries of the input vector ϕ̂_s = (2^(−sd) ϕ̂(2^(−s) ω_m))_{m=1}^{M} ∈ R^M, and

W_s = [w_{s,1}^T ; ··· ; w_{s,M}^T] ∈ C^(M × N_s^d).  (75)

By choosing ω_m on a proper grid, one can use the d-dimensional discrete Fourier transform (DFT), followed by some vectorization, to evaluate W_s c_s.

B. Exact Discretization of TV and HTV

Here, we present the formulas for the TV and HTV of f_s ∈ X^(s) in terms of the parameters c_s.

1) Total Variation: Through Theorem 2, and considering the vectorization of the expansion coefficients c_s = Vec(c_s), we obtain that

TV(f_s) = (2^(−s(d−1)) / d!) ‖L_{s,TV} ⊗ c_s‖_{2,1},  (76)

where L_{s,TV} ∈ R^(d! N_s^d × d N_s^d) is defined as

L_{s,TV} = [ L_{s,π_1,1} ··· L_{s,π_1,d} ; ⋮ ⋱ ⋮ ; L_{s,π_{d!},1} ··· L_{s,π_{d!},d} ].  (77)

Here, L_{s,π_q,r} ∈ R^(N_s^d × N_s^d) is a Toeplitz-like matrix associated to the vectorization of the convolution of the expansion coefficients c_s with the kernel g_{π_q,r} defined in (37). We present the definition of the linear operator (L ⊗ ·) and its adjoint in Appendix E. The norm ‖·‖_{2,1} is defined as the application of the ℓ2 norm to the rows of the input matrix, followed by that of the ℓ1 norm to the resulting vector.

Our result for the TV in (76) resembles the conventional pixel-based discretization, except that the present finite-difference filters are tailored to our model, which makes our computations exact.

2) Hessian Total Variation: Through Theorem 3, we have that

HTV(f_s) = 2^(−s(d−2)) ‖L_{s,HTV} c_s‖_1.  (78)

The matrix L_{s,HTV} ∈ R^((d(d+1)/2) N_s^d × N_s^d) is defined as

L_{s,HTV} = [ L_{s,1} ; ⋮ ; L_{s,d} ; 2L_{s,2,1} ; ⋮ ; 2L_{s,d,d−1} ],  (79)

where L_{s,p} ∈ R^(N_s^d × N_s^d) is a Toeplitz-like matrix that corresponds to the vectorization of the convolution of c_s with the filter κ_{1,p} for p ∈ {1, . . . , d}, and L_{s,p,q} ∈ R^(N_s^d × N_s^d) is a Toeplitz-like matrix that corresponds to the vectorization of the convolution of c_s with the filter κ_{2,p,q} of (52) for p, q ∈ {1, . . . , d}, q < p.

The use of the Hessian total-variation regularization, defined with different Schatten-p norms, was pioneered by Lefkimmiatis et al. [28]. In their implementation, they estimate the Hessian using second-order finite differences and rely on a Riemann-sum approximation of the integral for the computation of the seminorm. For p = 2, the Schatten-2 norm coincides with the Frobenius norm and allows for a straightforward computation of the seminorm and its proximal operator. However, for p = 1, their computations rely on the SVD [29]. The present contribution is a refined scheme where the computation of HTV with the Schatten-1 norm is not only exact but also does not require the SVD, which speeds up the computations.
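Returning to the example of Section IV-A, the factorization (74) of the Fourier-domain forward matrix can be checked numerically. The one-dimensional sketch below assumes that ϕ is the linear B-spline, whose Fourier transform is the squared sinc (the paper's ϕ̂ is given by its (18), not reproduced here); on a uniform frequency grid, applying W_s then reduces to an FFT of the coefficients:

```python
import numpy as np

def phi_hat(w):
    # Fourier transform of the (centered) linear B-spline; np.sinc is sin(pi x)/(pi x)
    return np.sinc(w / (2 * np.pi)) ** 2

N = 64
c = np.random.default_rng(0).standard_normal(N)
omega = 2 * np.pi * np.arange(N) / N        # DFT frequency grid, as suggested after (75)

# direct evaluation of v_m = phi_hat(w_m) * sum_n c_n exp(-j w_m n)
V_direct = np.array([phi_hat(w) * np.sum(c * np.exp(-1j * w * np.arange(N)))
                     for w in omega])
# FFT-based evaluation: on this grid, W c is exactly the DFT of c
V_fft = phi_hat(omega) * np.fft.fft(c)

assert np.allclose(V_direct, V_fft)
```

The same factorization carries over to d dimensions with a d-dimensional DFT, which is what makes the propagation rule (70) cheap to instantiate for Fourier sampling.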
Algorithm 1: Multiresolution Solver.
1: Initialize: s = s_0 ∈ Z, ε_1 > 0;
2: Initialize: c_{s_0} = 0 ∈ R^(N_{s_0}^d), d_{s_0} = 0 ∈ R^(N_{s_0}^d);
3: Define: Loss_s(c) = (1/2)‖y − H_s{c}‖_2^2 + λ R_s(c);
4: ĉ_{s_0} = Solver_{s_0}(c_{s_0}, d_{s_0});
5: for s ∈ Z and s > s_0 do
6:   ĉ_s = Solver_s(c_s, d_s);
7:   c_{s+1} = U_s ĉ_s;
8:   d_{s+1} = U_s ĉ_s;
9:   if (Loss_{s−1}(ĉ_{s−1}) − Loss_s(ĉ_s)) ≤ ε_1 then
10:    return s, f̂_s = Σ_{k∈K_s} ĉ_s[k] ϕ(2^s · − k)
11:  end if
12: end for

Algorithm 2: Solver_s(c_0, d_0).
1: Initialize: t_0 = 1, α_1 = 1/ct_{H_s}, γ = α_1 λ, ε_2, δ > 0;
2: for k = 0 → Niter1 do
3:   w_k = d_k + α_1 Re(H_{s,adj}{y − H_s{d_k}});
4:   ε_{3,k} = min(δ, 1/(k+1)^3);
5:   c_{k+1} = Prox_{γ,R_s}(w_k, ε_{3,k});
6:   t_{k+1} = (1 + √(4t_k^2 + 1))/2;
7:   d_{k+1} = c_{k+1} + ((t_k − 1)/t_{k+1})(c_{k+1} − c_k);
8:   e_k = |Loss_s(c_k) − Loss_s(c_{k+1})|;
9:   if e_k < ε_2 Loss_s(c_k) then
10:    break
11:  end if
12: end for
13: return c_{k+1}

Algorithm 3: Computation of Prox_{γ,R_s}(z, ε_3).
1: Initialize: u_0 = 0 ∈ R^(N_{L_s}), v_0 = 0 ∈ R^(N_{L_s}), t_0 = 1;
2: Initialize: α_2 = 1/(ct_{R_s} γ);
3: Define: Pr_s(c) = (1/2)‖z − c‖_2^2 + λ R_s(c);
4: Define: Du_s(c) = (1/2)(‖c − proj_{C_s}(c)‖_2^2 − ‖c‖_2^2 + ‖z‖_2^2);
5: for k = 0 → Niter2 do
6:   c_k = z − γ L_{s,adj}{v_k};
7:   cproj_k = proj_{C_s}{c_k};
8:   u_{k+1} = P_{R_s}(v_k + α_2 L_s{cproj_k});
9:   t_{k+1} = (1 + √(4t_k^2 + 1))/2;
10:  v_{k+1} = u_{k+1} + ((t_k − 1)/t_{k+1})(u_{k+1} − u_k);
11:  Gap_k = Pr_s(cproj_k) − Du_s(c_k);
12:  if Gap_k < ε_3 then
13:    break
14:  end if
15: end for
16: return proj_{C_s}{z − γ L_{s,adj}{u_{k+1}}}

TABLE I
LINEAR OPERATORS AND NORMS ASSOCIATED WITH REGULARIZERS

V. OPTIMIZATION

By choosing the mean-squared error as our loss functional and by using our developed search spaces {X^(s)}, we obtain a solution

f̂_s ∈ arg min_{f_s ∈ X^(s)} J(f_s)  (80)

at scale s ∈ Z, where J(f_s) = ‖ν(f_s) − y‖_2^2 + λ R(f_s). Because of the embedding of the search spaces, we necessarily have that J(f̂_{s+1}) ≤ J(f̂_s). Our aim is then to find the scale s_fine for which J(f̂_{s_fine}) ≈ J(f̂_{s_fine+1}). In other words, s_fine is a scale beyond which the refinement of the search space no longer improves the optimization cost.

We present our multiresolution scheme in Algorithm 1. At each scale s, we discretize (80) exactly using (69), (76), and (78) as

ĉ_s ∈ arg min_{c ∈ C_s} Loss_s(c),  (81)

with Loss_s(c) = (1/2)‖y − H_s{c}‖_2^2 + λ R_s(c). There, C_s ⊆ R^(N_s^d), while H_s : R^(N_s^d) → C^M is a linear operator such that H_s{c} = H_s c, with adjoint H_{s,adj}{v} = H_s^H v. Moreover, C_s encodes additional conditions on the range of the function f_s ∈ X^(s). In the absence of such additional constraints, we have that C_s = R^(N_s^d). For our model, due to the interpolatory property of the basis, the non-negativity of the function f_s implies the non-negativity of its coefficients c_s. Under that constraint, we have that C_s = R_{≥0}^(N_s^d). For the TV and HTV regularizers, R_s(c) is of the form

R_s(c) = ‖L_s{c}‖,  (82)

where ‖·‖ : R^(N_{L_s}) → R_+ denotes an appropriate discrete (mixed) norm and L_s : R^(N_s^d) → R^(N_{L_s}) is a linear operator. We present more details in Table I.

Once we have solved (81), we obtain the exact solution to (80) by invoking the fact that f̂_s = Σ_{k∈K_s} ĉ_s[k] ϕ(2^s · − k). We have that J(f̂_s) = Loss_s(ĉ_s) = Loss_{s+1}(U_s ĉ_s). The latter equality guarantees the continuity of the loss with respect to a change of scale. Further, by optimization at scale (s + 1), we have that J(f̂_{s+1}) = Loss_{s+1}(ĉ_{s+1}) ≤ Loss_{s+1}(U_s ĉ_s) = J(f̂_s). Therefore, the sequence of final losses at each scale is non-increasing.

We now develop iterative schemes to solve (81). We present our results in Algorithms 2 and 3. The derivation of the steps of these algorithms involves a dual formulation of the problem similar to [59] and [60]. Since Algorithm 2 requires the evaluation of a proximal operator without a closed form, we use an inexact-proximal method to accelerate the computations [61], [62], [63].
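For intuition, Steps 5-7 of Algorithm 2 are essentially the FISTA momentum recursion. The toy sketch below takes H = I and R(c) = ‖c‖_1, a simplification in which the proximal step has the closed-form soft-threshold solution instead of the inexact evaluation by Algorithm 3; the minimizer of (1/2)‖y − c‖_2^2 + λ‖c‖_1 is then the soft-threshold of y:

```python
import numpy as np

def soft(x, t):
    # proximal operator of t * ||.||_1 (soft-threshold)
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def fista(y, lam, n_iter=200):
    # minimal FISTA for 0.5 ||y - c||^2 + lam ||c||_1 (H = I, so ct_H = 1)
    c = d = np.zeros_like(y)
    t = 1.0
    alpha = 1.0                          # step size 1/ct_H
    for _ in range(n_iter):
        w = d + alpha * (y - d)          # gradient step on the data term
        c_new = soft(w, alpha * lam)     # exact prox (Step 5 of Algorithm 2)
        t_new = (1 + np.sqrt(4 * t**2 + 1)) / 2
        d = c_new + (t - 1) / t_new * (c_new - c)   # momentum (Step 7)
        c, t = c_new, t_new
    return c

y = np.array([3.0, -0.2, 0.5, -4.0])
c_hat = fista(y, lam=1.0)
assert np.allclose(c_hat, soft(y, 1.0), atol=1e-6)
```

In the paper's setting, the soft-threshold is replaced by the duality-gap-controlled inner loop of Algorithm 3, and the step size involves the spectral norm of H_s.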
The last ingredients of our proposed algorithms are the functions Clip : R^K → R^K and Normalize : R^(K×N) → R^(K×N), defined for k ∈ {1, . . . , K} as

[Clip(a)]_k = a_k / max(|a_k|, 1),  (83)

with a = (a_k)_{k=1}^K, and

[Normalize(A)]_k = (1 / max(‖[A]_k‖_2, 1)) [A]_k,  (84)

where [A]_k denotes the kth row of the matrix A ∈ R^(K×N). Moreover, the constants ct_H and ct_R are equal to the square of the spectral norm of the operators H and L, respectively. The function Re returns the real part of its complex input, and ‖·‖_F denotes the Frobenius norm.

Note that the Chambolle-Pock algorithm [64] could offer an alternative to Algorithm 2 for C = R^(N_s^d). One might also consider methods based on primal-dual splitting [65], [66], [67] in the presence of constraints, with the caveat that they would involve additional hyperparameters.

A. Convergence to the Solution Over B_R

In the one-dimensional case, which is equivalent to our framework for (d = 1), the authors in [37] show that the B-spline-based model can approximate the solution of the problem over B_TV and B_HTV.

For d ∈ {2, 3}, the proposed CPWL functions can approximate any integrable function and its TV [48], [68], [69]. This approximation result serves as a strong indication that our proposed framework could approximate the solution of (2) over B_TV. Moreover, CPWL functions can approximate any locally integrable function and its HTV [32]. The construction of those CPWL functions depends on adaptive meshes. While our framework allows for the exact solution of (2) over a search space of CPWL functions on uniform grids, a proof of the convergence over B_TV and B_HTV as the grid size goes to zero would require further rigorous analyses and remains an open theoretical question.

VI. EXPERIMENTS

In this section, we present our numerical experiments. Our code is available on the GitHub repository.³ The reported times for CPU and GPU correspond to the execution of the code on an Intel(R) Xeon(R) Gold 6240 @ 2.60 GHz and on a Tesla V100-SXM2-32 GB, respectively.

A. Multiresolution Reconstruction

Here, we validate our multiresolution scheme to solve inverse problems in two setups: 1) a perfect-reconstruction scenario with no regularization; and 2) a compressed-sensing counterpart with continuous-domain sparsity constraints. We solve all the problems under non-negativity constraints (C_s = R_{≥0}^(N_s^d)). To compute the peak signal-to-noise ratio (PSNR) and to illustrate the continuous-domain signals, we consider samples on a fine (4096 × 4096) grid.

1) Perfect Reconstruction (PR): To test our approach, we intentionally express our continuous-domain ground truth using the box spline ϕ so that it is exactly representable in X^(s) for s ≥ 0. It reads

f_{GT,PR} = Σ_{k_1=0}^{512} Σ_{k_2=0}^{512} c_GT[k] ϕ(· − k),  (85)

where f_GT satisfies zero boundary conditions outside of the domain [0, 512] × [0, 512]. Consequently, the expansion coefficients c_GT are zero on the boundaries. We choose a medical image from [70] as the basis for our ground truth.

We formulate an inverse problem in a regular imaging scenario where the perfect reconstruction of the ground truth is possible. To do so, we take advantage of (74) to calculate the Fourier transform of f_GT at the frequencies

ω ∈ {−π, . . . , −π/256, 0, π/256, . . . , 255π/256} × {−π, . . . , −π/256, 0, π/256, . . . , 255π/256}.  (86)

To reconstruct the signal from the measurements, we use Algorithm 1 with s_0 = (−3) and no regularization (λ = 0). Our conclusion from this experiment is that we can reconstruct the ground truth with high precision (PSNR > 100) as soon as we reach scale s = 0, with J(f̂_0) = 0.

2) Compressed Sensing (CS): In this part, we focus on a compressed-sensing setup where the problem is ill-posed due to the small number of measurements. Our aims are twofold: to compare the effect of the TV versus HTV regularizations; and to compare our approach to the standard pixel-based approach to inverse problems, where an exact computation of TV is also possible (see Appendix F for how we embed pixel-based TV within a multiresolution scheme).

Here, we use a ground truth built upon a two-dimensional cubic-spline basis that is not representable by either the CPWL or the pixel bases. Specifically, we use

f_{GT,CS} = Σ_{k_1=0}^{512} Σ_{k_2=0}^{512} c_GT[k] ψ(· − k)  (87)

with ψ(x) = β³(x_1) β³(x_2), where β³ is a one-dimensional cubic spline [38].

To generate the measurements, we sample the continuous-domain Fourier transform of f_{GT,CS} at the frequencies

ω ∈ {−π/2, . . . , −π/256, 0, π/256, . . . , 127π/256} × {−π/2, . . . , −π/256, 0, π/256, . . . , 127π/256},  (88)

which forms a grid of size (256 × 256). We then apply a radial mask and keep only 30 percent of the frequencies in (88) [71].

³https://github.com/mehrsapo/BoxDis/
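The measurement model behind (87)-(88) amounts to sampling a product of one-dimensional spline spectra against a discrete sum over the coefficients. The sketch below assumes the closed form β̂³(ω) = sinc⁴(ω/(2π)) for the centered cubic B-spline, which is consistent with standard spline references such as [38]; the coefficients and the sanity check at ω = 0 are illustrative only:

```python
import numpy as np

def beta3_hat(w):
    # Fourier transform of the centered cubic B-spline (assumed closed form);
    # np.sinc(x) = sin(pi x)/(pi x)
    return np.sinc(w / (2 * np.pi)) ** 4

def fourier_sample(c, w1, w2):
    # f_hat(w) = beta3_hat(w1) beta3_hat(w2) * sum_k c[k] exp(-j <w, k>),
    # the continuous-domain Fourier transform of the separable expansion (87)
    k1, k2 = np.meshgrid(np.arange(c.shape[0]), np.arange(c.shape[1]),
                         indexing="ij")
    return (beta3_hat(w1) * beta3_hat(w2)
            * np.sum(c * np.exp(-1j * (w1 * k1 + w2 * k2))))

c = np.random.default_rng(0).standard_normal((8, 8))
# at w = 0, the sample reduces to the sum of the coefficients
assert np.isclose(fourier_sample(c, 0.0, 0.0), c.sum())
```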
Fig. 5. Ground-truth image [70] and measurements for the CS setup.

TABLE II
NUMBER OF ITERATIONS OF ALGORITHM 2 WITH ε_2 = 10⁻⁶

TABLE III
GPU TIME (SECONDS) OF ALGORITHM 2 WITH ε_2 = 10⁻⁶

Eventually, we retain only 7.5 percent of the frequencies of the perfect-reconstruction scenario in (86). We illustrate the ground-truth signal and the measurements in Fig. 5.

To solve the reconstruction problems, we use Algorithm 1 with s_0 = (−3), ε_1 = 10⁻⁴, Niter1 = 2000, Niter2 = 500, ε_2 = 10⁻⁶, and δ = 10⁻³. The regularization hyperparameter is λ = 10⁻⁶ in all cases. For our experiments, Algorithm 1 stops at scale s = 2; therefore, it involves the six scales s ∈ {−3, −2, −1, 0, 1, 2}. The corresponding grid sizes are T ∈ {8, 4, 2, 1, 0.5, 0.25}, and the number of grid points at each scale is (N_{s1} × N_{s2}) ∈ {(65 × 65), (129 × 129), (257 × 257), (513 × 513), (1025 × 1025), (2049 × 2049)}.

In Fig. 6, we show the continuous-domain solutions (f̂_2) of our three different approaches: (i) pixel-based TV; (ii) CPWL-based TV; and (iii) CPWL-based HTV. The PSNR values for the solutions are 35.03, 37.89, and 39.50, respectively. The TV-regularization-based solutions are formed of piecewise-constant pieces, which matches the results of the TV representer theorems [31]. Although the locations of the piecewise-constant regions are similar, the boundaries are rectangular for the pixel-based approach and less angular for the CPWL-based approach. By contrast, the HTV solution is locally piecewise-affine. Hence, it can better reproduce the ground-truth structure.

In Fig. 7, we study the same solutions at different scales. At coarser scales, the loss is dominated by the data-fidelity term, so that the regularization effect becomes more evident only at finer scales. We report the optimization loss in Fig. 8 for the different approaches. We observe that the optimization loss is continuous and decreases with the change of scale, as supported by the theory. We also observe that the optimization cost stabilizes once a certain point of refinement has been reached.

To compare the computational efficiency of the various methods, we present the number of iterations of Algorithm 2 and the corresponding computational times to reach the stopping criteria at each scale in Tables II and III, respectively. Note that the relation between the two is not necessarily linear, due to the inexact proximal scheme. We also report the number of iterations and the time for the direct solution of the problem on the finest grid (s = 2) with zero initialization. The total time of our multiresolution scheme with the CPWL basis is less than that of the direct solution on the finest grid with zero initialization, for both TV and HTV regularization. This computational acceleration is due to the better initialization of the problem by the solution at the previous scale.

B. CPWL-Based Discrete Regularization

Our CPWL-based TV and HTV computations are essential for us to develop a multiresolution solver for inverse problems with a continuous-domain perspective. At a fixed scale, the resulting expressions (76) and (78) can be interpreted as discretization schemes for TV and HTV. One could therefore use these expressions as alternative definitions for the TV and HTV of discrete signals. In this section, we compare these discretizations to some of their classic counterparts.

1) Total Variation: For a discrete image c ∈ R^(N_1×N_2), our CPWL-based computations for TV lead to

TV_CPWL(c) = (1/2) Σ_{m=0}^{N_1−1} Σ_{n=0}^{N_2−1} [ ((c[m, n] − c[m+1, n])² + (c[m, n] − c[m, n+1])²)^(1/2) + ((c[m, n] − c[m−1, n])² + (c[m, n] − c[m, n−1])²)^(1/2) ],  (89)

with some proper zero padding. Expression (89) resembles the standard definition of discrete isotropic TV [46]. In effect, TV_CPWL is the result of averaging the standard discrete isotropic TV with backward and forward finite differences. This averaging results in TV_CPWL being invariant to rotations of 180 degrees, but it increases the computational complexity of the optimization. To study this effect, we perform a denoising experiment with a ground-truth image of size (400 × 400). The ground-truth image involves two regions: a piecewise-constant disk and a smooth region that corresponds to the top view of a cone (Fig. 9(a)).
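Expression (89) can be transcribed directly. A minimal sketch, with the same zero padding:

```python
import numpy as np

def tv_cpwl(c):
    # CPWL-based discrete TV of (89): average of the isotropic TVs built from
    # forward and backward finite differences, with zero padding
    p = np.pad(c, 1)
    dxf = p[1:-1, 1:-1] - p[2:, 1:-1]     # c[m, n] - c[m+1, n]
    dyf = p[1:-1, 1:-1] - p[1:-1, 2:]     # c[m, n] - c[m, n+1]
    dxb = p[1:-1, 1:-1] - p[:-2, 1:-1]    # c[m, n] - c[m-1, n]
    dyb = p[1:-1, 1:-1] - p[1:-1, :-2]    # c[m, n] - c[m, n-1]
    return 0.5 * (np.sqrt(dxf**2 + dyf**2) + np.sqrt(dxb**2 + dyb**2)).sum()

imp = np.zeros((3, 3))
imp[1, 1] = 1.0
assert np.isclose(tv_cpwl(imp), 2.0 + np.sqrt(2.0))
```

By construction, the value is unchanged under a 180-degree rotation of the image, since that rotation swaps the forward and backward differences.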
Fig. 6. Solutions of several multiresolution schemes after convergence at the finest scale.

Fig. 7. Solutions at each scale. The region corresponds to the first row of Fig. 6.

TABLE IV
DISCRETE REGULARIZERS

The measurements are obtained by adding zero-mean Gaussian white noise with a standard deviation of 50/255. We compare our approach to the discrete isotropic TV with forward finite differences, and to the upwind TV, a discrete variant of TV that is invariant to ±90- and 180-degree rotations [72], [73]. To solve the optimization problem for all methods, we use a variant of Algorithm 3 with a stopping criterion of 10⁻⁶ on the relative error of the loss objective. We set the regularization hyperparameter to λ = 0.5 for all the methods. We present the PSNRs of the solutions (left half: disk, right half: cone), the number of iterations to reach the stopping criterion, and the computational times in Table IV.

The CPWL-based TV performs marginally better than the standard isotropic TV with a forward finite difference. This is at the cost of CPU time.
Fig. 8. Loss function for each method in terms of the total number of iterations of Algorithm 2 at different scales.

Fig. 9. (a) Discrete ground-truth signal and its noisy version as the measurements. (b) Solutions.

TABLE V
EVALUATION TIME FOR 200 ITERATIONS OF THE PROXIMAL OPERATOR OF HTV REGULARIZATION

The upwind TV outperforms the other methods in the disk region but is not helpful in the cone region, for which HTV is much better suited.

2) Hessian Total Variation: We now repeat the denoising experiment of Section VI.B.1, but with HTV regularization (Fig. 9(b)). On the one hand, we observe that the HTV regularization reproduces the smooth structures much better than the TV-based solution, which shows staircase effects. On the other hand, the edge of the disk is blurred in the HTV solution, and TV regularization performs better on such sharp edges.

We also compare our approach to the finite-difference-based approach for the Hessian-Schatten-1 regularization proposed in [29]. We present the results in Table IV. Our CPWL-based HTV regularization requires fewer iterations to reach the stopping criterion, accelerating the computations compared to the finite-difference-based approach. This is mainly because our exact computations for HTV lead to a minimization problem with an ℓ1-norm constraint, whereas that of [29] involves constraints with the Schatten-1 norm.

The faster convergence in terms of the number of iterations comes with no compromise on the computational cost per iteration. On top of the Hessian approximation in [29], their iterative proximal operator requires a (d × d) SVD for each pixel and iteration. Our CPWL-based proximal operator for HTV requires the same number of convolutions but eliminates the need for SVDs (Algorithm 3). This means that we avoid a cost of O(d³) per coefficient (pixel). To demonstrate this effect, we provide the execution times of 200 iterations of the evaluation of the HTV proximal operator for d = 2, 3 in Table V. There, for d = 2 and d = 3, we use an image and a volume with sizes of (583 × 493) pixels and (583 × 493 × 40) voxels, respectively. For the proximal operator of [28], we follow two different approaches to calculate the SVD: the numerical PyTorch SVD function (SVD) and a closed-form solution (SVD-C). We note that implementing SVD-C for d = 3 was especially complicated.

Our method is faster than the finite-difference-based method in low dimensions. We expect the improvement to be even bigger in higher dimensions because there is no closed form for the SVD for d > 4.

VII. CONCLUSION

In this paper, we have provided a novel framework for the solution of the inverse problem (2) with continuous-domain total-variation (TV) and Hessian-total-variation (HTV) regularization. We presented an exact discretization scheme based on box splines that takes into account the analog nature of the signals and the physical measurement setup. Our formulation is numerically exact and yields discrete convolutional formulas for the continuously defined TV and HTV. It also allows for the straightforward evaluation of various forward operators in imaging. We also proposed a multiresolution scheme that speeds up the computations and selects the optimal resolution for image reconstruction. In all our experiments, we reached a scale beyond which making the grid finer yields no further decrease in the optimization cost.

REFERENCES

[1] M. T. McCann and M. Unser, “Biomedical image reconstruction: From the foundations to deep neural networks,” Found. Trends Signal Process., vol. 13, pp. 283–359, 2019.
[2] M. Bertero, P. Boccacci, and C. De Mol, Introduction to Inverse Problems in Imaging, 2nd ed. Boca Raton, FL, USA: CRC Press, 2021.
[3] G. Steidl, J. Weickert, T. Brox, P. Mrázek, and M. Welk, “On the equiva- [30] K. Bredies and M. Carioni, “Sparsity of solutions for variational inverse
lence of soft wavelet shrinkage, total variation diffusion, total variation reg- problems with finite-dimensional data,” Calculus Variations Partial Differ.
ularization, and sides,” SIAM J. Numer. Anal., vol. 42, no. 2, pp. 686–713, Equ., vol. 59, no. 1, 2020, Art. no. 14.
2004. [31] L. Ambrosio, J.-M. Morel, S. Masnou, and V. Caselles, “Connected com-
[4] D. L. Donoho, “Compressed sensing,” IEEE Trans. Inf. Theory, vol. 52, ponents of sets of finite perimeter and applications to image processing,”
no. 4, pp. 1289–1306, Apr. 2006. J. Eur. Math. Soc., vol. 3, no. 1, pp. 39–92, 2001.
[5] E. Candès and J. Romberg, “Sparsity and incoherence in compressive [32] L. Ambrosio, C. Brena, and S. Conti, “Functions with bounded hessian-
sampling,” Inverse Problems, vol. 23, no. 3, 2007, Art. no. 969. schatten variation: Density, variational and extremality properties,” 2023,
[6] M. A. T. Figueiredo, J. M. Bioucas-Dias, and R. D. Nowak, “Majorization– arXiv:2302.12554.
minimization algorithms for wavelet-based image restoration,” IEEE [33] L. Ambrosio, S. Aziznejad, C. Brena, and M. Unser, “Linear inverse prob-
Trans. Image Process., vol. 16, no. 12, pp. 2980–2991, Dec. 2007. lems with Hessian-schatten total variation,” Calculus Variations Partial
[7] A. M. Bruckstein, D. L. Donoho, and M. Elad, “From sparse solutions Different. Eqs., vol. 63, no. 1, 2024, Art. no. 9.
of systems of equations to sparse modeling of signals and images,” SIAM [34] P. Bohra and M. Unser, “Continuous-domain signal reconstruction us-
Rev., vol. 51, no. 1, pp. 34–81, 2009. ing Lp-norm regularization,” IEEE Trans. Signal Process., vol. 68,
[8] S. Ramani and J. Fessler, “Parallel MR image reconstruction using aug- pp. 4543–4554, 2020.
mented Lagrangian methods,” IEEE Trans. Med. Imag., vol. 30, no. 3, [35] T. Debarre, J. Fageot, H. Gupta, and M. Unser, “Solving continuous-
pp. 694–706, Mar. 2011. domain problems exactly with multiresolution B-splines,” in Proc. IEEE
[9] M. Elad, M. Figueiredo, and Y. Ma, “On the role of sparse and redun- Int. Conf. Acoust. Speech Signal Process., 2019, pp. 5122–5126.
dant representations in image processing,” Proc. IEEE, vol. 98, no. 6, [36] P. Bohra and M. Unser, “Computation of “Best” interpolants in the Lp
pp. 972–982, Jun. 2010. sense,” in Proc. IEEE Int. Conf. Acoust. Speech Signal Process., 2020,
[10] B. Adcock and A. C. Hansen, “Generalized sampling and infinite- pp. 5505–5509.
dimensional compressed sensing,” Found. Comput. Math., vol. 16, no. 5, [37] T. Debarre, J. Fageot, H. Gupta, and M. Unser, “B-spline-based exact
pp. 1263–1323, 2016. discretization of continuous-domain inverse problems with generalized tv
[11] B. Adcock, A. Hansen, B. Roman, and G. Teschke, “Generalized sampling: regularization,” IEEE Trans. Inf. Theory, vol. 65, no. 7, pp. 4457–4470,
Stable reconstructions, inverse problems and compressed sensing over the Jul. 2019.
continuum,” in Proc. Adv. Imag. Electron. Phys., 2014, pp. 187–279. [38] M. Unser, “Splines: A. perfect fit for signal and image processing,” IEEE
[12] A. Flinth and P. Weiss, “Exact solutions of infinite dimensional total- Signal Process. Mag., vol. 16, no. 6, pp. 22–38, Nov. 1999.
variation regularized problems,” Inf. Inference: A J. IMA, vol. 8, no. 3, [39] H. Prautzsch, W. Boehm, and M. Paluszny, “Box splines,” in Bezier B-
pp. 407–443, 2019. Spline Techn., pp. 239–258, 2002.
[13] G. Tang, B. N. Bhaskar, P. Shah, and B. Recht, “Compressed sensing off the [40] J. Campos, S. Aziznejad, and M. Unser, “Learning of continuous and
grid,” IEEE Trans. Inf. Theory, vol. 59, no. 11, pp. 7465–7490, Nov. 2013. piecewise-linear functions with essian total-variation regularization,”
[14] Y. C. Eldar, “Compressed sensing of analog signals in shift-invariant IEEE Open J. Signal Process., vol. 3, pp. 36–48, 2022.
spaces,” IEEE Trans. Signal Process., vol. 57, no. 8, pp. 2986–2997, [41] L. Condat and D. Van De Ville, “Three-directional box-splines: Charac-
Aug. 2009. terization and efficient evaluation,” IEEE Signal Process. Lett., vol. 13,
[15] C. Poon, N. Keriven, and G. Peyré, “The geometry of off-the-grid com- no. 7, pp. 417–420, Jul. 2006.
pressed sensing,” Found. Comput. Math., vol. 23, no. 1, pp. 241–327, 2023. [42] A. Entezari, M. Nilchian, and M. Unser, “A box spline calculus
[16] Y. Eldar and G. Kutyniok, Compressed Sensing: Theory and Applications. for the discretization of computed tomography reconstruction prob-
Cambridge, U.K.: Cambridge Univ. Press, 2012. lems,” IEEE Trans. Med. Imag., vol. 31, no. 8, pp. 1532–1541,
[17] B. Adcock and A. C. Hansen, Compressive Imaging: Structure, Sampling, Aug. 2012.
Learning. Cambridge, U.K.: Cambridge Univ. Press, 2021. [43] A. Entezari, D. Van De Ville, and T. Moller, “Practical box
[18] Y. C. Eldar, Sampling Theory: Beyond Bandlimited Systems. Cambridge, splines for reconstruction on the body centered cubic lattice,”
U.K.: Cambridge Univ. Press, 2015. IEEE Trans. Vis. Comput. Graph., vol. 14, no. 2, pp. 313–328,
[19] H. Gupta, J. Fageot, and M. Unser, “Continuous-domain solutions of linear Mar./Apr. 2008.
inverse problems with Tikhonov versus generalized TV regularization,” [44] A. Entezari and T. Moller, “Extensions of the zwart-powell box spline for
IEEE Trans. Signal Process., vol. 66, no. 17, pp. 4670–4684, Sep. 2018. volumetric data reconstruction on the cartesian lattice,” IEEE Trans. Vis.
[20] M. Unser and J. Fageot, “Native banach spaces for splines and variational Comput. Graph., vol. 12, no. 5, pp. 1337–1344, Sep./Oct. 2006.
inverse problems,” 2019, arXiv:1904.10818. [45] M. Kim and J. Peters, “A practical box spline compendium,” Appl. Math.
[21] M. Unser, J. Fageot, and J. P. Ward, "Splines are universal solutions of linear inverse problems with generalized TV regularization," SIAM Rev., vol. 59, no. 4, pp. 769–793, 2017.
[22] C. Boyer, A. Chambolle, Y. D. Castro, V. Duval, F. de Gournay, and P. Weiss, "On representer theorems and convex regularization," SIAM J. Optim., vol. 29, no. 2, pp. 1260–1281, 2019.
[23] R. Parhi and R. D. Nowak, "Banach space representer theorems for neural networks and ridge splines," J. Mach. Learn. Res., vol. 22, no. 43, pp. 1–40, Jan. 2021.
[24] M. Unser, "A unifying representer theorem for inverse problems and machine learning," Found. Comput. Math., vol. 21, no. 4, pp. 941–960, 2021.
[25] L. I. Rudin, S. Osher, and E. Fatemi, "Nonlinear total variation based noise removal algorithms," Physica D: Nonlinear Phenomena, vol. 60, no. 1–4, pp. 259–268, 1992.
[26] S. Aziznejad, J. Campos, and M. Unser, "Measuring complexity of learning schemes using Hessian-Schatten total variation," SIAM J. Math. Data Sci., vol. 5, no. 2, pp. 422–445, 2023.
[27] A. Beck and M. Teboulle, "Fast gradient-based algorithms for constrained total variation image denoising and deblurring problems," IEEE Trans. Image Process., vol. 18, no. 11, pp. 2419–2434, Nov. 2009.
[28] S. Lefkimmiatis, A. Bourquard, and M. Unser, "Hessian-based norm regularization for image restoration with biomedical applications," IEEE Trans. Image Process., vol. 21, no. 3, pp. 983–995, Mar. 2012.
[29] S. Lefkimmiatis, J. P. Ward, and M. Unser, "Hessian Schatten-norm regularization for linear inverse problems," IEEE Trans. Image Process., vol. 22, no. 5, pp. 1873–1888, May 2013.
[45] … Comput., vol. 464, 2024, Art. no. 128376.
[46] A. Chambolle and T. Pock, "Chapter 6 - Approximating the total variation with finite differences or finite elements," in Geometric Partial Differential Equations - Part II (Handbook of Numerical Analysis Series), vol. 22, A. Bonito and R. H. Nochetto, Eds. Amsterdam, The Netherlands: Elsevier, 2021, pp. 383–417.
[47] M. Herrmann, R. Herzog, S. Schmidt, J. Vidal-Núñez, and G. Wachsmuth, "Discrete total variation with finite elements and applications to imaging," J. Math. Imag. Vis., vol. 61, pp. 411–431, 2019.
[48] S. Bartels, "Total variation minimization with finite elements: Convergence and iterative solution," SIAM J. Numer. Anal., vol. 50, no. 3, pp. 1162–1180, 2012.
[49] G. Bellettini, V. Caselles, and M. Novaga, "The total variation flow in R^N," J. Differ. Equ., vol. 184, no. 2, pp. 475–525, 2002.
[50] V. Caselles, A. Chambolle, and M. Novaga, "Regularity for solutions of the total variation denoising problem," Revista Matemática Iberoamericana, vol. 27, no. 1, pp. 233–252, 2011.
[51] Q. Denoyelle, V. Duval, G. Peyré, and E. Soubies, "The sliding Frank–Wolfe algorithm and its application to super-resolution microscopy," Inverse Problems, vol. 36, no. 1, 2019, Art. no. 014001.
[52] Y. De Castro, V. Duval, and R. Petit, "Towards off-the-grid algorithms for total variation regularized inverse problems," J. Math. Imag. Vis., vol. 65, no. 1, pp. 53–81, 2023.
[53] C. De Boor, K. Höllig, and S. Riemenschneider, Box Splines, vol. 98. Berlin, Germany: Springer, 2013.
[54] I. Koutromanos, Fundamentals of Finite Element Analysis: Linear Finite Element Analysis. Hoboken, NJ, USA: Wiley, 2018.
Authorized licensed use limited to: EPFL LAUSANNE. Downloaded on June 02,2024 at 09:28:02 UTC from IEEE Xplore. Restrictions apply.
POURYA et al.: BOX-SPLINE FRAMEWORK FOR INVERSE PROBLEMS WITH CONTINUOUS-DOMAIN SPARSITY CONSTRAINTS 805
[55] R. Arora, A. Basu, P. Mianjy, and A. Mukherjee, "Understanding deep neural networks with rectified linear units," in Proc. Int. Conf. Learn. Representations, Poster Presentation, 2018.
[56] M. Unser, "Approximation power of biorthogonal wavelet expansions," IEEE Trans. Signal Process., vol. 44, no. 3, pp. 519–527, Mar. 1996.
[57] G. Strang and G. Fix, "A Fourier analysis of the finite element variational method," Constructive Aspects Funct. Anal., vol. 57, pp. 793–840, 2011.
[58] A. Goujon, J. Campos, and M. Unser, "Stable parameterization of continuous and piecewise-linear functions," Appl. Comput. Harmon. Anal., vol. 67, 2023, Art. no. 101581.
[59] A. Beck and M. Teboulle, "A fast iterative shrinkage-thresholding algorithm for linear inverse problems," SIAM J. Imag. Sci., vol. 2, no. 1, pp. 183–202, 2009.
[60] M. Pourya, A. Goujon, and M. Unser, "Delaunay-triangulation-based learning with Hessian total-variation regularization," IEEE Open J. Signal Process., vol. 4, pp. 167–178, 2023.
[61] S. Bonettini, S. Rebegoldi, and V. Ruggiero, "Inertial variable metric techniques for the inexact forward–backward algorithm," SIAM J. Sci. Comput., vol. 40, no. 5, pp. A3180–A3210, 2018.
[62] S. Salzo et al., "Inexact and accelerated proximal point algorithms," J. Convex Anal., vol. 19, no. 4, pp. 1167–1192, 2012.
[63] M. Schmidt, N. Roux, and F. Bach, "Convergence rates of inexact proximal-gradient methods for convex optimization," in Proc. Adv. Neural Inf. Process. Syst., 2011.
[64] A. Chambolle and T. Pock, "A first-order primal-dual algorithm for convex problems with applications to imaging," J. Math. Imag. Vis., vol. 40, pp. 120–145, 2011.
[65] L. Condat, "A primal–dual splitting method for convex optimization involving Lipschitzian, proximable and linear composite terms," J. Optim. Theory Appl., vol. 158, no. 2, pp. 460–479, 2013.
[66] B. C. Vũ, "A splitting algorithm for dual monotone inclusions involving cocoercive operators," Adv. Comput. Math., vol. 38, no. 3, pp. 667–681, 2013.
[67] M. Yan, "A new primal–dual algorithm for minimizing the sum of three functions with a linear operator," J. Sci. Comput., vol. 76, pp. 1698–1717, 2018.
[68] M.-J. Lai, B. Lucier, and J. Wang, "The convergence of a central-difference discretization of Rudin-Osher-Fatemi model for image denoising," in Proc. 2nd Int. Conf. Scale Space Variational Methods Comput. Vis., 2009, pp. 514–526.
[69] J. Wang and B. J. Lucier, "Error bounds for finite-difference methods for Rudin–Osher–Fatemi image smoothing," SIAM J. Numer. Anal., vol. 49, no. 2, pp. 845–868, 2011.
[70] H. Alvis-Miranda, S. Castellar Leones, G. Alcalá-Cerra, and L. Moscote-Salazar, "Cerebral sinus venous thrombosis," J. Rural Pract., vol. 4, pp. 427–438, 2013.
[71] K. Egiazarian, A. Foi, and V. Katkovnik, "Compressed sensing image reconstruction via recursive spatially adaptive filtering," in Proc. IEEE Int. Conf. Image Process., 2007, pp. I-549–I-552.
[72] A. Chambolle, S. E. Levine, and B. J. Lucier, "An upwind finite-difference method for total variation–based image smoothing," SIAM J. Imag. Sci., vol. 4, no. 1, pp. 277–299, 2011.
[73] L. Condat, "Discrete total variation: New definition and minimization," SIAM J. Imag. Sci., vol. 10, no. 3, pp. 1258–1290, 2017.

Mehrsa Pourya (Graduate Student Member, IEEE) received the B.Sc. degree in electrical engineering from the Sharif University of Technology, Tehran, Iran, in 2020. She is currently working toward the Ph.D. degree with the Biomedical Imaging Group, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland, under the direction of Michael Unser. Her research interests include signal processing and machine learning.

Aleix Boquet-Pujadas received the bachelor's degree in mathematics and the bachelor's degree in physics from Universitat Autònoma de Barcelona (first-in-class), Bellaterra, Spain, and the Ph.D. degree from Sorbonne Université (or UPMC, or Paris VI), Paris, France. His doctoral thesis was supported by a Marie Skłodowska-Curie fellowship at Institut Pasteur and distinguished by the French Society of Biomedical Engineering (SFGBM) for its innovation. He is currently a Postdoctoral Researcher with the Biomedical Imaging Group, EPFL.

Michael Unser (Fellow, IEEE) received the M.S. (summa cum laude) and Ph.D. degrees in electrical engineering from Ecole Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland, in 1981 and 1984, respectively. He is currently a Professor and the Director of EPFL's Biomedical Imaging Group, Lausanne. He is the author, with P. Tafti, of the book An Introduction to Sparse Stochastic Processes (Cambridge University Press, 2014). He has authored or coauthored more than 350 journal papers in his research areas, which include sampling theory, wavelets, the use of splines for image processing, stochastic processes, and computational bioimaging. His primary area of investigation is biomedical image processing. From 1985 to 1997, he was with the Biomedical Engineering and Instrumentation Program, National Institutes of Health, Bethesda, USA, conducting research on bioimaging.
He was an Associate Editor-in-Chief for most of the primary journals in his field, including IEEE TRANSACTIONS ON MEDICAL IMAGING during 2003–2005, IEEE TRANSACTIONS ON IMAGE PROCESSING, PROCEEDINGS OF THE IEEE, and SIAM Journal on Imaging Sciences. He is the founding Chair of the Technical Committee on Bio Imaging and Signal Processing of the IEEE Signal Processing Society.
He is a EURASIP Fellow (2009) and a Member of the Swiss Academy of Engineering Sciences. He was the recipient of several international prizes, including five IEEE-SPS Best Paper Awards, two Technical Achievement Awards from the IEEE (2008 SPS and 2010 EMBS), the Technical Achievement Award from EURASIP (2018), and, recently, the Career Achievement Award (IEEE EMBS 2020).