Papers by Roberto Cavicchioli
2019 Fifth International Conference on Image Information Processing (ICIIP)
Error diffusion dithering is a technique used to represent a grey-scale image in a format usable by a printer. At every step, an algorithm converts the grey-scale value of a pixel to a new value within the allowed ones, generating a conversion error. To achieve the illusion of continuous tone, the error is distributed to the neighboring pixels. Among the existing algorithms, the most commonly used is Floyd-Steinberg. However, this algorithm suffers from two issues: artifacts and slowness. Artifacts are textures that can appear after processing, making the result visually different from the original image. To avoid this effect, we use a stochastic version of the Floyd-Steinberg algorithm. To evaluate the results, we apply the Weighted Signal-to-Noise Ratio (WSNR), a visually based model that accounts for the perceptibility of dithered textures. This filter has a low-pass characteristic and, in particular, uses a Contrast Sensitivity Function to evaluate the similarity between the original image and the final image. Our claim is that the new stochastic algorithm performs better on both the WSNR measure and visual analysis. Secondly, we address slowness: we describe a parallel version of the Floyd-Steinberg algorithm that exploits the GPU (Graphics Processing Unit), drastically reducing the computation time. Specifically, we observed that the computational time of the serial version increases quadratically with the input size, while that of the parallel version increases linearly. Both the image quality and the computational performance of the parallel algorithm are evaluated on several large-scale images.
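As a rough illustration of the stochastic variant discussed above, the sketch below perturbs the classical Floyd-Steinberg error-diffusion weights with random noise at every pixel; the perturbation scheme, the `noise` parameter, and all names are illustrative assumptions, not the exact formulation of the paper.

```python
import numpy as np

def stochastic_floyd_steinberg(img, noise=0.2, seed=0):
    """Dither a grey-scale image (values in [0, 1]) to black and white.

    Minimal sketch: the classical Floyd-Steinberg weights (7, 3, 5, 1)/16
    are randomly perturbed at every pixel to break up periodic textures.
    The perturbation scheme is an illustrative assumption.
    """
    rng = np.random.default_rng(seed)
    out = img.astype(np.float64).copy()
    h, w = out.shape
    for y in range(h):
        for x in range(w):
            old = out[y, x]
            new = 1.0 if old >= 0.5 else 0.0   # quantize to the allowed values
            out[y, x] = new
            err = old - new                    # conversion error
            # perturb the diffusion weights, then renormalize
            wgt = np.array([7.0, 3.0, 5.0, 1.0])
            wgt += noise * rng.uniform(-1.0, 1.0, 4) * wgt
            wgt /= wgt.sum()
            if x + 1 < w:
                out[y, x + 1] += err * wgt[0]
            if y + 1 < h:
                if x - 1 >= 0:
                    out[y + 1, x - 1] += err * wgt[1]
                out[y + 1, x] += err * wgt[2]
                if x + 1 < w:
                    out[y + 1, x + 1] += err * wgt[3]
    return out
```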
This paper explores effective algorithms for the solution of numerical nonlinear optimization problems in image restoration. Modern acquisition techniques and devices most often return data of increasing size, so we focus on the Scaled Gradient Projection algorithm, which is well suited to large-scale applications. We present its parallel implementations on different hardware, namely Tesla-series GPUs and clusters of last-generation multi-core CPUs. The algorithm and codes are tested on meaningful 2D and 3D real-world problems.
The deconvolution of astronomical images by the Richardson-Lucy method (RLM) is extended here to the problem of multiple image deconvolution and the reduction of boundary effects. We show the multiple image RLM in its accelerated gradient version SGP (Scaled Gradient Projection). Numerical simulations indicate that the approach can provide excellent results with a considerable reduction of the boundary effects. Also exploiting GPUlib applied to the IDL code, we obtained a remarkable acceleration of up to two orders of magnitude [Prato et al. 2012].

Multiple image deconvolution problem with Poisson data:

$$\min_{\vec{f}\ge 0} \; J_0(\vec{f};\vec{g}) = \sum_{j=1}^{p}\sum_{m\in S}\left\{\vec{g}_j(m)\,\ln\frac{\vec{g}_j(m)}{(A_j\vec{f})(m)+\vec{b}_j(m)} + (A_j\vec{f})(m)+\vec{b}_j(m)-\vec{g}_j(m)\right\}$$
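To make the objective above concrete, here is a minimal sketch of the generalized Kullback-Leibler data-fidelity term summed over the p observed images; the array names and the callable-operator interface (`g_list`, `A_ops`, `b_list`) are illustrative assumptions.

```python
import numpy as np

def multi_image_kl(f, g_list, A_ops, b_list, eps=1e-12):
    """Generalized Kullback-Leibler fidelity J_0(f; g) summed over p images.

    f       : current estimate of the object (nonnegative array)
    g_list  : list of the p observed images g_j
    A_ops   : list of callables computing A_j f (e.g. convolution with PSF j)
    b_list  : list of background terms b_j
    Names and interface are illustrative assumptions.
    """
    total = 0.0
    for g, A, b in zip(g_list, A_ops, b_list):
        den = A(f) + b
        total += np.sum(g * np.log((g + eps) / (den + eps)) + den - g)
    return total
```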
SGP-dec is a Matlab package for the deconvolution of 2D and 3D images corrupted by Poisson noise. Following a maximum likelihood approach, SGP-dec computes a deconvolved image by early stopping an iterative method for the minimization of the generalized Kullback-Leibler divergence. The iterative minimization method implemented by SGP-dec is a Scaled Gradient Projection (SGP) algorithm that can be considered an acceleration of the Expectation Maximization method, also known as the Richardson-Lucy method. The main feature of the SGP algorithm is the combination of inexpensive diagonally scaled gradient directions with adaptive Barzilai-Borwein steplength rules specially designed for these directions; global convergence properties are ensured by exploiting a line-search strategy (monotone or nonmonotone) along the feasible direction. The SGP algorithm is provided to be used as an iterative regularization method; this means that a regularized reconstruction can be obtained by early stopping of the SGP sequence. Several early stopping strategies can be selected, based on different criteria: maximum number of iterations, distance between successive iterations or function values, or the discrepancy principle; the user must choose a stopping criterion and fix suitable values for the parameters involved by the chosen criterion.
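A highly simplified sketch of one SGP-style iteration is given below, assuming a nonnegativity constraint, a diagonal scaling, a Barzilai-Borwein steplength, and a monotone backtracking line search; the actual SGP-dec implementation has further safeguards, scaled BB rules, and options that are omitted here.

```python
import numpy as np

def sgp_step(f, f_prev, grad, grad_prev, obj, scaling,
             alpha_min=1e-5, alpha_max=1e5):
    """One simplified SGP-style step (illustrative sketch, not SGP-dec itself).

    f, f_prev        : current and previous iterates (nonnegative arrays)
    grad, grad_prev  : gradients of the objective at f and f_prev
    obj              : callable returning the objective value
    scaling          : diagonal scaling D (array, same shape as f)
    """
    # Barzilai-Borwein steplength, clipped to [alpha_min, alpha_max]
    s, y = f - f_prev, grad - grad_prev
    sy = np.sum(s * y)
    alpha = np.sum(s * s) / sy if sy > 0 else alpha_max
    alpha = np.clip(alpha, alpha_min, alpha_max)

    # scaled gradient step projected onto the nonnegative orthant
    d = np.maximum(f - alpha * scaling * grad, 0.0) - f   # feasible direction

    # monotone Armijo-type backtracking line search along d
    lam, f_old = 1.0, obj(f)
    while obj(f + lam * d) > f_old + 1e-4 * lam * np.sum(grad * d) and lam > 1e-8:
        lam *= 0.5
    return f + lam * d
```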
Scientific Reports, 2020
Digital Breast Tomosynthesis (DBT) is a modern 3D Computed Tomography X-ray technique for the early detection of breast tumors, which is receiving growing interest in the medical and scientific community. Since DBT performs incomplete sampling of data, image reconstruction approaches based on iterative methods are preferable to classical analytic techniques, such as the Filtered Back Projection algorithm, as they produce fewer artifacts. In this work, we consider a Model-Based Iterative Reconstruction (MBIR) method well suited to describe the DBT data acquisition process and to include prior information on the reconstructed image. We propose a gradient-based solver named Scaled Gradient Projection (SGP) for the solution of the constrained optimization problem arising in the considered MBIR method. Even if the SGP algorithm exhibits fast convergence, the time required on a serial computer for the reconstruction of a real DBT data set is too long for clinical needs. In this paper…
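The constrained optimization problem mentioned above can be sketched, under assumptions, as a regularized least-squares fit to the projection data with a nonnegativity constraint handled by the SGP projection; the smoothed Total Variation prior and all names below are illustrative choices, not necessarily the prior used in the paper.

```python
import numpy as np

def mbir_objective(f, A, g, beta=0.01, delta=1e-3):
    """Regularized least-squares MBIR objective (illustrative sketch).

    f     : 3D reconstructed volume (nonnegativity enforced by the solver)
    A     : callable applying the DBT projection operator to f
    g     : measured projection data
    beta  : regularization weight; the smoothed-TV prior below is an
            illustrative assumption.
    """
    residual = A(f) - g
    data_term = 0.5 * np.sum(residual ** 2)
    gx, gy, gz = np.gradient(f)                       # finite differences
    tv = np.sum(np.sqrt(gx**2 + gy**2 + gz**2 + delta**2))
    return data_term + beta * tv
```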
An Interactive Data Language (IDL) package for the single and multiple deconvolution of 2D images corrupted by Poisson noise, with the optional inclusion of a boundary effect correction. Following a maximum likelihood approach, SGP-IDL computes a deconvolved image by early stopping of the scaled gradient projection (SGP) algorithm for the solution of the optimization problem arising from the minimization of the generalized Kullback-Leibler divergence between the computed image and the observed image. The algorithms have also been implemented for Graphics Processing Units (GPUs).
We consider the numerical solution on modern multicore architectures of large-scale optimization problems arising in image restoration. An efficient solution of these optimization problems is important in several areas, such as medical imaging, microscopy and astronomy, where large-scale imaging is a basic task. To face these challenging problems, a lot of effort has been put into designing effective algorithms, which have largely improved the classical optimization strategies usually applied in image processing. Nevertheless, in many large-scale applications even these improved algorithms do not provide the expected reconstruction in a reasonable time. In these cases, modern multiprocessor architectures represent an important resource for reducing the reconstruction time. Actually, one can consider different possibilities for a parallel computational scenario. One is the use of Graphics Processing Units (GPUs): they were originally designed to perform many simple operations on matrices and vectors with high efficiency and low accuracy (single precision arithmetic), but they have recently seen a huge development of both computational power and accuracy (double precision arithmetic), while still retaining compactness and low price. Another possibility is the use of last-generation multi-core CPUs, where general-purpose, very powerful computational cores are integrated inside the same CPU and several CPUs can be hosted by the same motherboard, sharing a central memory: they can perform completely different and asynchronous tasks, as well as cooperate by suitably distributing the workload of a complex task. Additional opportunities are offered by the more classical clusters of nodes, usually connected in different distributed-memory topologies to form large-scale high-performance machines with tens to hundreds of thousands of processors. Needless to say, various mixes of these architectures (such as clusters of GPUs) are also possible and indeed commercially available. It should be noticed, however, that all the mentioned scenarios can exist even in very small-sized and cheap configurations. This is particularly relevant for GPUs: initially targeted at 3D graphics applications, they have been employed in many other scientific computing areas, such as signal and image reconstruction. Recent applications show that in many cases GPU performance is comparable to that of a medium-sized cluster, at a fraction of its cost. Thus, even small laboratories that cannot afford a cluster can benefit from a substantial reduction of computing time compared to a standard CPU system. Nevertheless, for very large problems, such as 3D imaging in confocal microscopy, the size of the GPU's dedicated on-device memory can become a limit to performance. For this reason, the ability to exploit the scalability of clusters by means of standard MPI implementations is still crucial for facing very large-scale applications. Here, we deal with both the GPU and the MPI implementations of an optimization algorithm, the Scaled Gradient Projection (SGP) method, which applies to several imaging problems. GPU versions of this method have recently been evaluated, while an MPI version is presented in this work for both deblurring and denoising problems. A computational study of the different implementations is reported, to show the enhancements provided by these parallel approaches in solving both 2D and 3D imaging problems.
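As a hedged sketch of the MPI scenario mentioned above, the snippet below distributes the rows of an image across processes with mpi4py and exchanges boundary (halo) rows so that each rank can apply a local convolution-like operator; the halo width, the function name, and the overall scheme are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np
from mpi4py import MPI

def scatter_with_halo(full_image, halo=1):
    """Distribute row blocks of an image over MPI ranks and exchange halo rows.

    full_image : float64 array, only needed on rank 0 (None elsewhere).
    Illustrative sketch only: a real deblurring code would also distribute
    the PSF application and the SGP iterations.
    """
    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    blocks = np.array_split(full_image, size, axis=0) if rank == 0 else None
    local = np.ascontiguousarray(comm.scatter(blocks, root=0), dtype=np.float64)

    # exchange halo rows with the neighbouring ranks (zeros at the borders)
    up, down = rank - 1, rank + 1
    top_halo = np.zeros((halo, local.shape[1]))
    bot_halo = np.zeros((halo, local.shape[1]))
    if up >= 0:
        comm.Sendrecv(local[:halo], dest=up, recvbuf=top_halo, source=up)
    if down < size:
        comm.Sendrecv(local[-halo:], dest=down, recvbuf=bot_halo, source=down)
    return np.vstack([top_halo, local, bot_halo])
```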
Scientific Reports, 2013
Although deconvolution can improve the quality of any type of microscope, the high computational time required has so far limited its widespread adoption. Here we demonstrate the ability of the scaled-gradient-projection (SGP) method to provide accelerated versions of the most used algorithms in microscopy. To achieve further increases in efficiency, we also consider implementations on graphics processing units (GPUs). We test the proposed algorithms on both synthetic and real data from confocal and STED microscopy. Combining the SGP method with the GPU implementation we achieve a speed-up factor from about 25 to 690 with respect to the conventional algorithm. The excellent results obtained on STED microscopy images demonstrate the synergy between super-resolution techniques and image deconvolution. Further, the real-time processing preserves one of the most important properties of STED microscopy, i.e., the ability to provide fast sub-diffraction resolution recordings.

Image deconvolution is a computational technique that mitigates the distortions created by an optical system. Agard first applied image deconvolution to fluorescence microscopy in the early 1980s [1]. In this seminal paper Agard proposed different algorithms for deconvolving images acquired as three-dimensional (3D) stacks using wide-field microscopy (WFM). In a nutshell, the focal plane of the objective lens moves along the thickness of the specimen and for each position the microscope generates a two-dimensional (2D) image. Due to diffraction phenomena, each 2D image, also called an optical section, includes considerable out-of-focus light originating from regions of the specimen above and below the focal plane. Image deconvolution uses information describing how the microscope produces the image (forward model) as the basis of a mathematical transformation that reassigns the out-of-focus light to its points of origin. Later, many new optical methods have been proposed to remove out-of-focus light and to generate true optical sections directly. Without pretending to be exhaustive, we mention confocal laser scanning microscopy (CLSM) [2,3], two-photon excitation microscopy (TPEM) [2,4] and selective plane illumination microscopy (SPIM) [5,6]. All these methods remove out-of-focus light by rejecting such light before it reaches the detector or by precluding its generation. Further hybrid techniques, which remove out-of-focus light by combining optical and computational methods, are 4Pi microscopy [7,8] and structured illumination microscopy (SIM) [9,10]. Since CLSM, TPEM and SPIM have a considerably smaller contribution of out-of-focus light, they are sometimes considered as pure alternatives to the deconvolution and WFM combination. However, it has been shown that these techniques can also strongly benefit from image deconvolution [11-14]. Although out-of-focus background is reduced, the images produced by such systems are still blurred versions of the specimen's structures in the focal plane and are contaminated by noise, so deconvolution can improve their contrast and signal-to-noise ratio. Similarly, a single 2D image can also benefit from deconvolution, especially when obtained from a thin specimen, where out-of-focus background vanishes. More recently, new super-resolution fluorescence microscopy approaches (usually referred to as nanoscopy) have enlarged the portfolio of tools for investigating biological samples [15].
Nanoscopy techniques have effectively broken the diffraction barrier and moved the spatial resolution of fluorescence microscopy down to the nanoscale [16]. Importantly, also in these cases image deconvolution can help to improve the quality of the images. This has been demonstrated both for stimulated emission depletion (STED) microscopy [17], which at the moment can be considered the method of choice among the targeted nanoscopy techniques, and, more…
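For reference, the classical Richardson-Lucy iteration that SGP accelerates can be sketched as below, using FFT-based circular convolution with the PSF; the FFT-based implementation and the parameter names are assumptions for illustration, and boundary handling, backgrounds and GPU execution are omitted.

```python
import numpy as np
from numpy.fft import fftn, ifftn

def richardson_lucy(image, psf, n_iter=50, eps=1e-12):
    """Classical Richardson-Lucy deconvolution (the baseline SGP accelerates).

    image : observed image (nonnegative array)
    psf   : point spread function on the same grid as the image,
            centered and normalized to sum 1 (assumption of this sketch)
    """
    psf_hat = fftn(np.fft.ifftshift(psf))

    def A(x):    # blur with the PSF (circular convolution)
        return np.real(ifftn(fftn(x) * psf_hat))

    def At(x):   # blur with the flipped PSF (adjoint operator)
        return np.real(ifftn(fftn(x) * np.conj(psf_hat)))

    f = np.full_like(image, image.mean(), dtype=np.float64)
    for _ in range(n_iter):
        ratio = image / (A(f) + eps)
        f = f * At(ratio)          # multiplicative RL update
    return f
```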
2013 IEEE International Conference on Acoustics, Speech and Signal Processing, 2013
The image formation model is $g = A u + \eta$, where $A$ is a blurring linear operator, $\eta$ is additive Gaussian white noise $\mathcal{N}(0, \sigma^2 I)$, and $u, g, \eta \in \mathbb{R}^k$ ($k$ = number of pixels). Example: image restoration.
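A minimal example of this degradation model, using a Gaussian blur as a stand-in for the linear operator $A$ (an assumption for illustration, as are the parameter values):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def degrade(u, blur_sigma=2.0, noise_sigma=0.05, seed=0):
    """Generate g = A u + eta: Gaussian blur (stand-in for A) plus white noise."""
    rng = np.random.default_rng(seed)
    g = gaussian_filter(u, sigma=blur_sigma)            # A u
    g += rng.normal(0.0, noise_sigma, size=u.shape)     # eta ~ N(0, sigma^2 I)
    return g
```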