Papers by Laurent Demaret
Publication in the conference proceedings of SampTA, Bremen, Germany, 2013
IEEE Transactions on Medical Imaging, 2019
Three-dimensional freehand imaging techniques are gaining wider adoption due to their flexibility and cost efficiency. Typical examples for such a combination of a tracking system with an imaging device are freehand SPECT or freehand 3D ultrasound. However, the quality of the resulting image data is heavily dependent on the skill of the human operator and on the level of noise of the tracking data. The latter aspect can introduce blur or strong artifacts, which can significantly hamper the interpretation of image data. Unfortunately, the most commonly used tracking systems to date, i.e. optical and electromagnetic, present a trade-off between invading the surgeon's workspace (due to line-of-sight requirements) and higher levels of noise and sensitivity due to the interference of surrounding metallic objects. In this work, we propose a novel approach for total variation regularization of data from tracking systems (which we term pose signals) based on a variational formulation in the manifold of Euclidean transformations. The performance of the proposed approach was evaluated using synthetic data as well as real ultrasound sweeps executed on both a Lego phantom and human anatomy, showing significant improvement in terms of tracking data quality and compounded ultrasound images. Source code can be found at https://github.com/IFL-CAMP/pose_regularization.
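As a rough illustration of the kind of model meant here (a hedged sketch, not taken verbatim from the paper), total-variation regularization of a noisy pose signal f_1, ..., f_n can be posed as a variational problem on the manifold of Euclidean transformations; the exponents p, q and the weight λ below are generic placeholders.

```latex
% Sketch: TV-type regularization of a pose signal f_1,...,f_n in SE(3),
% with d the geodesic distance on the manifold of Euclidean transformations.
% The exponents p, q and the weight \lambda are generic placeholders.
\min_{x_1,\dots,x_n \in \mathrm{SE}(3)} \;
   \sum_{i=1}^{n} d(x_i, f_i)^{p}
   \;+\; \lambda \sum_{i=1}^{n-1} d(x_i, x_{i+1})^{q}
```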
Journal of Mathematical Imaging and Vision, 2016
2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015
This paper introduces the concept of shape signals, i.e., series of shapes which have a natural temporal or spatial ordering, as well as a variational formulation for the regularization of these signals. The proposed formulation can be seen as the shape-valued generalization of the Rudin-Osher-Fatemi (ROF) functional for intensity images. We derive a variant of the classical finite-dimensional representation of Kendall, but our framework is generic in the sense that it can be combined with any shape space. This representation allows for the explicit computation of geodesics and thus facilitates the efficient numerical treatment of the variational formulation by means of the cyclic proximal point algorithm. Similar to the ROF functional, we demonstrate experimentally that ℓ1-type penalties for both the data fidelity term and the regularizer perform best in regularizing shape signals. Finally, we show applications of our method to shape signals obtained from synthetic, photometric, and medical data sets.
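Only as a schematic reminder of the algorithmic tool named in the abstract, in generic notation that is an assumption here rather than the paper's: the cyclic proximal point algorithm splits the target functional into summands and cycles through their proximal mappings, which on a metric (shape) space are defined via the geodesic distance.

```latex
% Generic cyclic proximal point iteration on a metric space (S, d):
% split the functional as F = \sum_j g_j and cycle through the proximal
% mappings of the summands; \mu_k are step sizes, j_k cycles through the indices.
\mathrm{prox}_{\mu g}(x) = \operatorname*{arg\,min}_{y \in S}
   \Bigl\{ g(y) + \tfrac{1}{2\mu}\, d(x, y)^{2} \Bigr\},
\qquad
x^{(k+1)} = \mathrm{prox}_{\mu_k g_{j_k}}\bigl(x^{(k)}\bigr).
```

Convergence typically requires the step sizes μ_k to decay slowly (square-summable but not summable).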
Applied and Computational Harmonic Analysis, 2017
We propose a signal analysis tool based on the sign (or the phase) of complex wavelet coefficients, which we call a signature. The signature is defined as the fine-scale limit of the signs of a signal's complex wavelet coefficients. We show that the signature equals zero at sufficiently regular points of a signal whereas at salient features, such as jumps or cusps, it is non-zero. At such feature points, the orientation of the signature in the complex plane can be interpreted as an indicator of local symmetry and antisymmetry. We establish that the signature rotates in the complex plane under fractional Hilbert transforms. We show that certain random signals, such as white Gaussian noise and Brownian motions, have a vanishing signature. We derive an appropriate discretization and show the applicability to signal analysis.
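As a toy illustration of the idea only (not the paper's construction: the complex Morlet wavelet, the scales and the normalization below are arbitrary choices made here), one can inspect the signs of complex wavelet coefficients of a jump signal at a few fine scales.

```python
import numpy as np
import pywt

# Toy sketch: normalized signs c/|c| of complex wavelet coefficients of a step
# signal at a few fine scales, loosely mimicking the "signature" idea.
t = np.linspace(0.0, 1.0, 1024)
signal = np.where(t < 0.5, 0.0, 1.0)          # a single jump at t = 0.5
scales = [2, 4, 8]                             # fine scales
coeffs, _ = pywt.cwt(signal, scales, 'cmor1.5-1.0')   # complex Morlet CWT
signs = coeffs / np.maximum(np.abs(coeffs), 1e-12)    # sign (phase) of coefficients

# Near the jump the normalized coefficients have modulus close to 1 and a stable
# orientation across scales; away from it the coefficients are numerically
# negligible and their signs carry no information.
print(signs[:, 510:515])
```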
2015 International Conference on Sampling Theory and Applications (SampTA), 2015
Anisotropic triangulations provide efficient methods for sparse image representations. In previous work, we have proposed a locally adaptive algorithm for sparse image approximation, adaptive thinning, which relies on linear splines over anisotropic Delaunay triangulations. In this contribution, we address theoretical and practical aspects concerning image approximation by linear splines over anisotropic conformal triangulations. Our discussion includes asymptotically optimal N-term approximations on relevant classes of target functions, such as horizon functions across α-Hölder smooth boundaries and regular functions of W^{α,p} regularity, for α > 2/p − 1. Moreover, we demonstrate the good performance of our adaptive thinning algorithm by numerical examples and comparisons.
SIAM Journal on Numerical Analysis, 2015
We investigate the nonsmooth and nonconvex L¹-Potts functional in discrete and continuous time. We show Γ-convergence of discrete L¹-Potts functionals toward their continuous counterpart and obtain a convergence statement for the corresponding minimizers as the discretization gets finer. For the discrete L¹-Potts problem, we introduce an O(n²) time and O(n) space algorithm to compute an exact minimizer. We apply L¹-Potts minimization to the problem of recovering piecewise constant signals from noisy measurements f. It turns out that the L¹-Potts functional has a quite interesting blind deconvolution property. In fact, we show that mildly blurred jump-sparse signals are reconstructed by minimizing the L¹-Potts functional. Furthermore, for strongly blurred signals and a known blurring operator, we derive an iterative reconstruction algorithm.
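For illustration only: the discrete L¹-Potts functional penalizes the number of jumps plus the ℓ¹ data fidelity, P_γ(u) = γ · #{i : u_i ≠ u_{i+1}} + Σ_i |u_i − f_i|. The sketch below is a plain dynamic-programming reference implementation of an exact minimizer; it recomputes segment medians from scratch and is therefore slower than the O(n²)-time, O(n)-space algorithm described in the paper.

```python
import numpy as np

def l1_potts(f, gamma):
    """Exact minimizer of the discrete L1-Potts functional
        gamma * #jumps(u) + sum_i |u_i - f_i|
    by dynamic programming (naive reference version)."""
    f = np.asarray(f, dtype=float)
    n = len(f)
    B = np.empty(n + 1)                 # B[r]: optimal value for the prefix f[0:r]
    B[0] = -gamma                       # first segment pays no jump penalty
    left = np.zeros(n + 1, dtype=int)   # left[r]: start of last segment of f[0:r]
    for r in range(1, n + 1):
        best = np.inf
        for l in range(1, r + 1):
            seg = f[l - 1:r]
            dev = np.abs(seg - np.median(seg)).sum()   # best L1 fit by a constant
            val = B[l - 1] + gamma + dev
            if val < best:
                best, left[r] = val, l - 1
        B[r] = best
    u = np.empty(n)                     # backtrack the optimal segmentation
    r = n
    while r > 0:
        l = left[r]
        u[l:r] = np.median(f[l:r])
        r = l
    return u

# Example: recover a piecewise constant signal from noisy samples.
rng = np.random.default_rng(0)
truth = np.repeat([0.0, 2.0, -1.0], 50)
print(l1_potts(truth + rng.laplace(scale=0.3, size=truth.size), gamma=2.0))
```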
This thesis is devoted to the application of triangular meshes to still-image compression. The aim is to demonstrate the efficiency of hierarchical representations for lossy coding. We begin with two studies, dealing respectively with approximation models based on Hermite finite elements and on barycentric subdivision. We then address the main topic: the multiresolution structure offered by hierarchical finite elements. The proposed coding methods exploit the heterogeneous statistical distribution of the large-amplitude coefficients. We further develop original methods for the construction of orthogonal prewavelets based on linear finite elements. Overall, this work demonstrates the potential of mesh-based coding schemes and their good performance when compared with the best current compression standards.
We give a short introduction to wedgelet approximations, and describe some of the features of the implementation available at the website www.wedgelets.de. Here we only give a short account aiming to provide a first understanding of the algorithm and its features, and refer to [1, 2] for details. Wedgelet approximations were introduced by Donoho [1] as a means to efficiently approximate piecewise constant images. Generally speaking, these approximations are obtained by partitioning the image domain adaptively into disjoint sets, followed by computing an approximation of the image on each of these sets. Optimal approximations are defined by means of a certain functional weighing approximation error against the complexity of the decomposition. The optimization can be imagined as a puzzle: the aim is to approximate the image by putting together a number of pieces from a fixed set, possibly using a minimal number of pieces. As can be imagined, the efficient computation of such an optimal approximation is a critical issue, depending on the particular class of partitions under consideration. Donoho proposed to use wedges, and to study the associated wedgelet approximations. For the sake of notational convenience, we fix that images are understood as elements of the function space ℝ^I, where I = {0, …, 2^J − 1} × {0, …, 2^J − 1} for some J. Other image sizes can be treated by suitable adaptation, at the cost of a more complicated notation. The wedgelet approach can be described by a two-step procedure: 1. Decompose the image domain I into a disjoint union of wedge-shaped sets, I = ⋃_{w∈P} w. 2. On each set w ∈ P, approximate the image by a constant.
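The "puzzle" described above is commonly formalized as a complexity-penalized least-squares problem; the generic form below is a sketch, with the penalty weight γ and the admissible class of wedge partitions left unspecified.

```latex
% Generic complexity-penalized form of the wedgelet optimization: choose a
% wedge partition P of the image domain I and a constant c_w per wedge,
% trading approximation error against the number of pieces |P| via gamma > 0.
\min_{P,\;(c_w)_{w \in P}} \;
   \sum_{w \in P} \sum_{x \in w} \bigl( f(x) - c_w \bigr)^{2}
   \;+\; \gamma\, |P|
```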
IEEE Transactions on Signal Processing, 2014
Computational Imaging and Vision, 2012
Natural videos are composed of a superposition of moving objects, usually resulting from anisotropic motions in different directions. By discretization with respect to time, a video may be regarded as a sequence of consecutive natural still images. Alternatively, when considering time as one dimension, a video may be viewed as a 3D scalar field. In this case, customized methods are needed for capturing both the evolution of moving contours along the time axis and the geometrical distortions of the resulting sweep surfaces. Moreover, it is desirable to work with sparse representations. Indeed, already for basic motions (e.g. rotations, translations), customized methods for the construction of well-adapted sparse video data representations are required. To this end, we propose a novel adaptive approximation algorithm for video data. The utilized nonlinear approximation scheme is based on anisotropic tetrahedralizations of the 3D video domain, whose tetrahedra are adapted locally in space (for contour-like singularities) and locally in time (for anisotropic motions). The key ingredients of our approximation method, 3AT, are adaptive thinning, a recursive pixel removal scheme, and least squares approximation by linear splines over anisotropic tetrahedralizations. The approximation algorithm 3AT yields a new concept for the compression of video data. We apply the proposed approximation method first to prototypical geometrical motions, before numerical simulations concerning one natural video are presented.
Mathematics and Visualization
Adaptive thinning algorithms are greedy point removal schemes for bivariate scattered data sets with corresponding function values, where the points are recursively removed according to some data-dependent criterion. Each subset of points, together with its function values, defines a linear spline over its Delaunay triangulation. The basic criterion for the removal of the next point is to minimize the error between the resulting linear spline at the bivariate data points and the original function values. This leads to a hierarchy of linear splines of coarser and coarser resolutions. This paper surveys the various removal strategies developed in our earlier papers, and the application of adaptive thinning to terrain modelling and to image compression. In our image test examples, we found that our thinning scheme, adapted to diminish the least squares error, combined with a postprocessing least squares optimization and a customized coding scheme, often gives results better than or comparable to the wavelet-based scheme SPIHT.
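A minimal, purely conceptual sketch of such a greedy point-removal loop is given below; it recomputes the full Delaunay triangulation and least-squares error after every trial removal, whereas the algorithms surveyed in the paper use local error measures and local retriangulation to remain efficient. The function names and the SciPy-based setup are choices made here for illustration, not the authors' implementation.

```python
import numpy as np
from scipy.spatial import Delaunay
from scipy.interpolate import LinearNDInterpolator

def thinning_error(points, values, kept):
    """Squared error of the linear spline over the Delaunay triangulation of the
    kept points, evaluated at all original data points (0 outside the hull)."""
    tri = Delaunay(points[kept])
    spline = LinearNDInterpolator(tri, values[kept], fill_value=0.0)
    return float(np.sum((spline(points) - values) ** 2))

def adaptive_thinning(points, values, n_keep):
    """Greedy point removal: repeatedly discard the point whose removal
    increases the least-squares spline error the least (naive version)."""
    kept = list(range(len(points)))
    while len(kept) > n_keep:
        errors = [thinning_error(points, values, kept[:i] + kept[i + 1:])
                  for i in range(len(kept))]
        kept.pop(int(np.argmin(errors)))
    return np.array(kept)

# Toy usage: thin 80 random samples of a smooth function down to 20 points.
rng = np.random.default_rng(1)
pts = rng.random((80, 2))
vals = np.sin(4 * pts[:, 0]) * np.cos(3 * pts[:, 1])
print(adaptive_thinning(pts, vals, n_keep=20))
```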
SIAM Journal on Imaging Sciences, 2014