The problem of jointly sparse support recovery is to determine the common support of jointly sparse signal vectors from multiple measurement vectors (MMV) related to the signals by a linear transformation. The fundamental limit of performance has been studied in terms of a so-called algebraic bound, relating the maximum recoverable sparsity level to the spark of the sensing matrix and the rank of the signal matrix. However, while the algebraic bound provides the necessary and sufficient condition for the success of joint sparse recovery, it is restricted to the noiseless case. We derive a sufficient condition for jointly sparse support recovery in the noisy case. We show that essentially the same deterministic condition as in the noiseless case suffices for perfect support recovery at finite signal-to-noise ratio (SNR). Furthermore, we perform an average-case analysis of the recovery problem when the matrix of jointly sparse signal vectors has random left singular vectors, representing the case of signal vectors in general position. In this case, we provide a relaxed deterministic condition on the sensing matrix for support recovery with high probability at finite SNR. Finally, we quantify the improvements for an i.i.d. Gaussian sensing matrix.
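For reference, the algebraic bound mentioned above is commonly stated as follows; this is a sketch in standard MMV notation, and the paper's precise statement should be consulted for the exact conditions.

```latex
% Algebraic bound for noiseless MMV support recovery (standard form;
% notation introduced here for illustration). With measurements
% Y = A X, where A is the sensing matrix and the rows of the signal
% matrix X share the common support S, the support is recoverable iff
\[
  |S| \;<\; \frac{\operatorname{spark}(A) - 1 + \operatorname{rank}(X)}{2}.
\]
% Larger rank(X) (more diverse signal vectors) thus permits recovery
% of larger supports, up to the spark barrier of A.
```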
International Journal of Adaptive Control and Signal Processing, 1989
Recently, there has been considerable interest in parametric estimation of non-Gaussian processes, based on high-order moments. Several researchers have proposed algorithms for estimating the parameters of AR, MA and ARMA processes, based on the ...
Blind deconvolution (BD) arises in many applications. Without assumptions on the signal and the filter, BD does not admit a unique solution. In practice, subspace or sparsity assumptions have been shown to reduce the search space and yield a unique solution. However, existing theoretical analysis of uniqueness in BD is rather limited. In an earlier paper, we provided the first algebraic sample complexities for BD that hold for almost all bases or frames. We showed that for BD of a pair of vectors in C^n, with subspace constraints of dimensions m_1 and m_2, respectively, a sample complexity of n ≥ m_1 m_2 is sufficient. This result is suboptimal, since the number of degrees of freedom is merely m_1 + m_2 − 1. We provided analogous results, with similar suboptimality, for BD with sparsity or mixed subspace and sparsity constraints. In this paper, taking advantage of recent progress on the information-theoretic limits of unique low-rank matrix recovery, we finally bridge this gap, and derive an optimal sample complexity result for BD with generic bases or frames. We show that for BD of an arbitrary pair (resp. all pairs) of vectors in C^n, with sparsity constraints of sparsity levels s_1 and s_2, a sample complexity of n > s_1 + s_2 (resp. n > 2(s_1 + s_2)) is sufficient. We also present analogous results for BD with subspace constraints or mixed constraints, with the subspace dimension replacing the sparsity level. Last but not least, in all the above scenarios, if the bases or frames follow a probabilistic distribution specified in the paper, the recovery is not only unique, but also stable against small perturbations in the measurements, under the same sample complexities. Index terms: uniqueness, sample complexity, bilinear inverse problem, low-rank matrix recovery. This work was supported in part by the National Science Foundation (NSF) under Grants CCF 10-18789 and IIS 14-47879.
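The sample-complexity results quoted in the abstract can be collected in one display (this restates the abstract's claims in symbols; nothing new is added):

```latex
% Optimal sample complexities for BD with generic bases or frames,
% with sparsity levels s_1, s_2 (subspace dimensions m_1, m_2 play
% the same role under subspace constraints):
\begin{align*}
  n &> s_1 + s_2       && \text{(uniqueness for an arbitrary fixed pair)}\\
  n &> 2(s_1 + s_2)    && \text{(uniqueness for all pairs simultaneously)}
\end{align*}
% Since the number of degrees of freedom is m_1 + m_2 - 1, these bounds
% are essentially tight, unlike the earlier n >= m_1 m_2 requirement.
```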
In this correspondence, a corrected version of the convergence analysis given by Lee and Bresler in the above-titled paper (ibid., vol. 56, no. 9, pp. 4402-4416, Sep. 2010) is presented.
Dynamic tomography is an ill-posed inverse problem in which the object evolves during the sequential acquisition of projections. The goal is to reconstruct the object at each time instant; however, a direct reconstruction from this inconsistent set of projections is impossible. In this paper, we propose an object-domain recovery algorithm using a variational formulation that combines a partially separable spatio-temporal prior with a basic total-variation spatial regularization for improved performance, while preserving full interpretability. Numerical experiments on data derived from CT scans of real objects demonstrate the advantages of the proposed algorithm over recent projection-domain and deep-prior-based methods.
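As a rough illustration of the kind of objective such a formulation leads to, one may write the following (the symbols A_t, y_t, U, v_t, and λ are introduced here for illustration; the paper's exact formulation may differ):

```latex
% Partially separable (low-rank) space-time model with TV spatial
% regularization: the object at time t is modeled as U v_t, with U
% holding K spatial basis images (partial separability of rank K).
\[
  \min_{U,\,V}\; \sum_{t} \bigl\| A_t (U v_t) - y_t \bigr\|_2^2
  \;+\; \lambda \sum_{t} \mathrm{TV}(U v_t),
\]
% where A_t is the projection operator at time t and y_t the
% corresponding measured projection.
```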
Scatter due to the interaction of photons with the imaged object is a fundamental problem in X-ray Computed Tomography (CT). It manifests as various artifacts in the reconstruction, making its abatement or correction critical for image quality. Despite success in specific settings, hardware-based methods require modifications to the hardware, or an increase in scan time or dose. This accounts for the great interest in software-based methods, including Monte Carlo-based scatter estimation, analytical-numerical methods, and kernel-based methods, with data-driven learning-based approaches demonstrated recently. In this work, two novel physics-inspired deep-learning-based methods, PhILSCAT and OV-PhILSCAT, are proposed. The methods estimate and correct for the scatter in the acquired projection measurements. Differently from previous work, they incorporate both an initial reconstruction of the object of interest and the scatter-corrupted measurements related to it, and use a deep neural network architecture and cost function that are both specifically tailored to the problem. Numerical experiments with data generated by Monte Carlo simulations of the imaging of phantoms reveal consistent improvement over a recent purely projection-domain deep neural network scatter correction method.
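To make the two-input idea concrete, here is a deliberately schematic PyTorch module with one branch for the scatter-corrupted projections and one for data rendered from an initial reconstruction. This is a toy sketch, not the PhILSCAT or OV-PhILSCAT architecture; all layer choices and names are illustrative assumptions.

```python
# Schematic of a two-branch scatter-estimation network: one branch
# encodes the measured (scatter-corrupted) projections, the other
# encodes projections re-rendered from an initial reconstruction;
# the head predicts a scatter map that is subtracted out.
import torch
import torch.nn as nn

class TwoBranchScatterNet(nn.Module):
    def __init__(self, ch=32):
        super().__init__()
        self.proj_branch = nn.Sequential(      # encodes measured projections
            nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU())
        self.recon_branch = nn.Sequential(     # encodes reprojected recon
            nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU())
        self.head = nn.Conv2d(2 * ch, 1, 3, padding=1)  # scatter estimate

    def forward(self, proj, reproj):
        f = torch.cat([self.proj_branch(proj),
                       self.recon_branch(reproj)], dim=1)
        scatter = self.head(f)
        return proj - scatter        # scatter-corrected projection
```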
With a multi-lab and university team, we propose to develop new methods for a Hydrodynamic and Radiographic Toolbox (HART) that will enable a fuller and more extensive use of experimental radiographic data towards better characterizing and reducing uncertainties in predictive modeling of weapons performance. We will achieve this by leveraging recent developments in computational imaging, statistical and machine learning, and reduced-order modeling of hydrodynamics. In terms of software practices, by partnering with XCP (RIS-TRA Project) we will conform to recent XCP standards that are in line with modern software practices and ensure compatibility and ease of interoperability with existing codes. Three new activities under this proposal include (a) the development and use of deep learning-based surrogates to accelerate reconstruction and variational inference of density fields from radiographs of hydrotests, (b) a model-data fusion strategy that couples deep learning-based density reconstructions with fast hydrodynamics simulators to better constrain the reconstruction, and (c) a method for treating asymmetries using techniques adopted from limited-view tomography. All three new activities will be based on improved treatment of scatter, noise, beam spot movement, detector blur, and flat fielding in the forward model, and on the use of sophisticated priors to aid in the reconstruction. The improvements to the forward model and the improved algorithmic design of the reconstruction, when complete, will be contained in the iterative reconstruction code SHIVA, a code project that we have recently initiated. The many ways in which machine learning can be used in the reconstruction work will be contained in a code, HERMES, that has been initiated with DTRA support. For example, the significant levels of acceleration that will likely be achieved by the use of machine learning techniques will permit us (and are required) to quantify uncertainties in density retrievals. Next, the two-way coupling between density reconstruction and model-based simulation of the hydrodynamics will be contained in the code EREBUS, and will permit a fuller realization of the potential of the data to constrain the hydrodynamic model and better address issues related to asymmetries in the problem. Finally, we anticipate that the better consistency with physics achieved in our reconstructions will allow them to be used by X-Division more than is possible today.
The tomographic reconstruction of objects with spatially localized temporal variation, such as the heart, is considered. Theoretical analysis, which includes tight performance bounds, shows that, by using an optimally scrambled angular sampling order, the required temporal scan rate can be lowered by as much as a factor of four while still preserving image quality. A design procedure is presented for the optimum choice of angular sampling pattern, and demonstrated by simulations. The technique permits graceful degradation of performance over a wide range of temporal sampling rates.
We survey applications of classical and time-sequential sampling theory and some of its recent extensions in two complementary areas: first, to reduce acquisition requirements for dynamic imaging below those predicted by classical theory; and second, to reduce the computation for tomographic reconstruction from O(N³) to O(N² log N) for an N × N image, with similar acceleration for 3-D images. In both areas, the savings demonstrated in practical examples exceed an order of magnitude.
Chemical imaging provides information about the distribution of chemicals within a target. When combined with structural information about the target, in situ chemical imaging opens the door to applications ranging from tissue classification to industrial process monitoring. The combination of infrared spectroscopy and optical microscopy is a powerful tool for chemical imaging of thin targets. Unfortunately, extending this technique to targets with appreciable depth is prohibitively slow. We combine confocal microscopy and infrared spectroscopy to provide chemical imaging in three spatial dimensions. Interferometric measurements are acquired at a small number of focal depths, and images are formed by solving a regularized inverse scattering problem. A low-dimensional signal model is key to this approach: we assume the target comprises a finite number of distinct chemical species. We establish conditions on the constituent spectra and the number of measurements needed for unique recovery of the target. Simulations illustrate imaging of cellular phantoms and sub-wavelength targets from noisy measurements.
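In symbols, the low-dimensional model described above amounts to the following (notation introduced here for illustration):

```latex
% The target comprises M distinct chemical species, so its spectral
% response at position r and frequency omega is a combination of M
% constituent spectra chi_m:
\[
  \chi(\mathbf{r}, \omega) \;=\; \sum_{m=1}^{M} a_m(\mathbf{r})\,\chi_m(\omega),
\]
% reducing the unknown from a full hyperspectral volume to the M
% spatial concentration maps a_m(r).
```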
Compressed Sensing has been demonstrated to be a powerful tool for magnetic resonance imaging (MRI), where it enables accurate recovery of images from highly undersampled k-space measurements by exploiting the sparsity of the images or image patches in a transform domain or dictionary. In this work, we focus on blind compressed sensing, where the underlying sparse signal model is a priori unknown, and propose a framework to simultaneously reconstruct the underlying image as well as the unknown model from highly undersampled measurements. Specifically, our model is that the patches of the underlying MR image(s) are approximately sparse in a transform domain. We also extend this model to a union of transforms model that is better suited to capture the diversity of features in MR images. The proposed block coordinate descent-type algorithms for blind compressed sensing are highly efficient. Our numerical experiments demonstrate the superior performance of the proposed framework for MRI compared to several recent image reconstruction methods. Importantly, the learning of a union of sparsifying transforms leads to better image reconstructions than a single transform.
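A minimal sketch of the block coordinate descent idea for transform-based blind compressed sensing MRI follows. It alternates sparse coding, a transform update, and an image update with k-space data consistency; the function names, the orthonormal-transform restriction, and all parameter values are illustrative assumptions, not the authors' reference implementation.

```python
# Block coordinate descent sketch for blind compressed sensing MRI
# with a learned orthonormal sparsifying transform W acting on
# non-overlapping p x p image patches (image side assumed divisible by p).
import numpy as np

def extract_patches(x, p):
    """All non-overlapping p x p patches of image x, as columns."""
    H, W = x.shape
    cols = [x[i:i+p, j:j+p].ravel()
            for i in range(0, H - p + 1, p)
            for j in range(0, W - p + 1, p)]
    return np.stack(cols, axis=1)

def put_patches(P, shape, p):
    """Inverse of extract_patches for non-overlapping patches."""
    H, W = shape
    x = np.zeros(shape, dtype=P.dtype)
    k = 0
    for i in range(0, H - p + 1, p):
        for j in range(0, W - p + 1, p):
            x[i:i+p, j:j+p] = P[:, k].reshape(p, p)
            k += 1
    return x

def blind_cs_mri(y, mask, p=8, s=10, iters=20):
    """y: undersampled k-space (zeros off the mask); mask: bool pattern."""
    x = np.fft.ifft2(y)                    # zero-filled initialization
    W = np.eye(p * p, dtype=complex)       # identity start; DCT is common
    for _ in range(iters):
        P = extract_patches(x, p)
        # 1) Sparse coding: keep the s largest coefficients per patch.
        Z = W @ P
        thr = -np.sort(-np.abs(Z), axis=0)[s - 1:s, :]
        Z[np.abs(Z) < thr] = 0
        # 2) Transform update (orthonormal case): Procrustes solution.
        U, _, Vh = np.linalg.svd(P @ Z.conj().T)
        W = (U @ Vh).conj().T
        # 3) Image update: patch consistency, then k-space data consistency.
        x = put_patches(W.conj().T @ Z, x.shape, p)
        k = np.fft.fft2(x)
        k[mask] = y[mask]                  # enforce measured samples
        x = np.fft.ifft2(k)
    return x
```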
The problem of fitting a model composed of a number of superimposed signals to noisy data using the maximum likelihood (ML) criterion is considered. A dynamic programming (DP) algorithm which solves the problem efficiently is presented. An asymptotic property of the estimates is derived, and a bound on the bias of the estimates is given. The bound is then computed using perturbation analysis and compared with computer simulation results. The results show that the DP algorithm is a versatile and efficient algorithm for parameter estimation. In practical applications, the estimates can be refined by a local search (e.g., the Gauss-Newton method) of the exact ML criterion, initialized by the DP estimates.
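To convey the flavor of the DP decomposition, here is a toy special case: ML fitting of K non-overlapping copies of a known pulse shape to noisy data, where the DP state is (number of pulses placed, rightmost sample used). This simplified setting and all names are illustrative; the paper treats a much more general superimposed-signal model.

```python
# Toy DP for ML fitting of K non-overlapping pulses of known shape h
# (length L) with unknown delays and amplitudes to data d. Maximizing
# the likelihood here is equivalent to maximizing the total squared-
# error reduction ("gain") of the placed pulses.
import numpy as np

def dp_fit_pulses(d, h, K):
    N, L = len(d), len(h)
    hh = float(h @ h)
    # gain[t]: error reduction from one pulse starting at sample t,
    # with its ML amplitude a = <d[t:t+L], h> / <h, h>.
    gain = np.array([(d[t:t + L] @ h) ** 2 / hh for t in range(N - L + 1)])
    NEG = -np.inf
    best = np.full((K + 1, N + 1), NEG)
    best[0, :] = 0.0                       # zero pulses: zero gain
    for k in range(1, K + 1):
        for t in range(1, N + 1):
            skip = best[k, t - 1]          # sample t-1 not in any pulse
            place = best[k - 1, t - L] + gain[t - L] if t >= L else NEG
            best[k, t] = max(skip, place)  # k-th pulse ends at t-1, or not
    # Backtrack the ML delays.
    delays, k, t = [], K, N
    while k > 0:
        if t >= L and best[k, t] == best[k - 1, t - L] + gain[t - L]:
            delays.append(t - L)
            t -= L
            k -= 1
        else:
            t -= 1
    return sorted(delays)

# Example: two pulses at delays 5 and 40 in noise.
rng = np.random.default_rng(1)
h = np.hanning(8)
d = 0.05 * rng.standard_normal(64)
d[5:13] += 2.0 * h
d[40:48] += 1.5 * h
print(dp_fit_pulses(d, h, K=2))   # -> [5, 40] (up to noise)
```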
We apply the weak membrane model with optimization by mean field annealing to the direct segmentation of tomographic images. We also introduce models based on the minimum description length principle that include penalties for measurement error, boundary length, regions, and means. Outliers are prevented by upper and lower bound constraints on pixel values. Several models are generalized to three-dimensional images. The superiority of our models to convolution backprojection is demonstrated experimentally.
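For reference, the weak membrane energy being minimized has the following standard (Blake-Zisserman) form; this is the generic formulation, and the paper's penalties (boundary length, regions, means) augment it:

```latex
% Weak membrane energy over pixel values f and binary line (edge)
% variables l on neighboring pixel pairs (i,j); standard form, with
% lambda, alpha the smoothness and edge penalties:
\[
  E(f, l) \;=\; \sum_{i} (f_i - d_i)^2
  \;+\; \lambda \sum_{(i,j)} (f_i - f_j)^2\,(1 - l_{ij})
  \;+\; \alpha \sum_{(i,j)} l_{ij},
\]
% where d is the observed image; mean field annealing relaxes the
% binary l_{ij} to continuous values and anneals a temperature
% parameter toward the discrete minimum.
```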
Existing algorithms for exact helical cone-beam (HCB) tomographic reconstruction are computationally infeasible for clinical applications. Their computational cost is dominated by 3-D backprojection, which is generally an O(N⁴) operation. We present a fast hierarchical 3-D backprojection algorithm, generalizing fast 2-D parallel-beam and fan-beam algorithms, which reduces the overall complexity of this step to O(N³ log N), greatly accelerating the reconstruction.
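The source of the speedup can be summarized by the divide-and-conquer recursion below (shown in the 2-D case for brevity; the 3-D count is analogous):

```latex
% Hierarchical backprojection: an N x N image is split into four
% N/2 x N/2 subimages, each backprojected from a sinogram with half
% the number of view angles, plus O(N^2) work for angular decimation:
\[
  T(N) \;=\; 4\,T(N/2) + O(N^2) \;\Longrightarrow\; T(N) = O(N^2 \log N),
\]
% versus O(N^3) for direct 2-D backprojection. The 3-D analogue,
% T(N) = 8 T(N/2) + O(N^3), gives O(N^3 log N) versus O(N^4).
```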
IEEE Transactions on Signal Processing, Feb 1, 2019
The extremal values of multivariate trigonometric polynomials are of interest in fields ranging from control theory to filter design, but finding the extremal values of such a polynomial is generally NP-Hard. In this paper, we develop simple and efficiently computable estimates of the extremal values of a multivariate trigonometric polynomial directly from its samples. We provide an upper bound on the modulus of a complex trigonometric polynomial, and develop upper and lower bounds for real trigonometric polynomials. For a univariate polynomial, these bounds are tighter than existing bounds, and the extension to multivariate polynomials is new. As an application, the lower bound provides a sufficient condition to certify global positivity of a real trigonometric polynomial.
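As a small illustration of estimating extremal values from samples, the sketch below computes a certified lower bound on the peak modulus by dense evaluation with a zero-padded FFT. The function name and oversampling scheme are assumptions for illustration; the paper's sharper sample-based upper and lower bounds are not reproduced here.

```python
# Lower-bound the peak modulus of p(t) = sum_k c_k exp(i 2 pi k t)
# over one period by evaluating it on a dense uniform grid with a
# zero-padded FFT. Any sample of |p| is a valid lower bound on its
# maximum; the bound tightens as the grid is refined.
import numpy as np

def peak_modulus_lower_bound(c, oversample=8):
    """c: coefficients c_0..c_{n-1}. Returns the max of |p| over an
    (oversample * n)-point uniform grid -- a lower bound on max |p|.
    (The FFT sign convention only reorders the grid points; the max
    modulus over a full period is unaffected.)"""
    M = oversample * len(c)
    return np.abs(np.fft.fft(c, n=M)).max()

# Example: the bound grows toward the true maximum as the grid refines.
rng = np.random.default_rng(0)
c = rng.standard_normal(16) + 1j * rng.standard_normal(16)
print(peak_modulus_lower_bound(c, 4), peak_modulus_lower_bound(c, 64))
```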