Optimizing quantitative photoacoustic imaging systems: the Bayesian Cramér-Rao bound approach
Quantitative photoacoustic computed tomography (qPACT) is an emerging medical imaging modality that carries the promise of high-contrast, fine-resolution imaging of clinically relevant quantities like hemoglobin concentration and blood-oxygen saturation. However, qPACT image reconstruction is governed by a multiphysics, partial differential equation (PDE) based inverse problem that is highly non-linear and severely ill-posed. Compounding the difficulty of the problem is the lack of established design standards for qPACT imaging systems, as there is currently a proliferation of qPACT system designs for various applications and it is unknown which ones are optimal or how to best modify the systems under various design constraints. This work introduces a novel computational approach for the optimal experimental design of qPACT imaging systems based on the Bayesian Cramér-Rao bound (CRB). Our approach incorporates several techniques to address challenges associated with forming the bound in the infinite-dimensional function space setting of qPACT, including priors with trace-class covariance operators and the use of the variational adjoint method to compute derivatives of the log-likelihood function needed in the bound computation. The resulting Bayesian CRB based design metric is computationally efficient and independent of the choice of estimator used to solve the inverse problem. The efficacy of the bound in guiding experimental design was demonstrated in a numerical study of qPACT design schemes under a stylized two-dimensional imaging geometry. To the best of our knowledge, this is the first work to propose Bayesian CRB based design for systems governed by PDEs.
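The shape of the Bayesian CRB design metric can be illustrated on a toy finite-dimensional linear-Gaussian model (hypothetical numbers, not the qPACT forward operator): for y = Ax + noise with a Gaussian prior, the bound is the inverse of the data Fisher information plus the prior precision, and its trace gives an estimator-independent design score.

```python
import numpy as np

rng = np.random.default_rng(0)
m, d, sigma = 30, 10, 0.1

A = rng.standard_normal((m, d))              # toy forward operator (the "design")
Gamma = np.diag(1.0 / (1.0 + np.arange(d)))  # prior covariance with decaying spectrum

# Bayesian CRB for a linear-Gaussian model:
# inverse of (data Fisher information + prior precision).
fisher = A.T @ A / sigma**2
crb = np.linalg.inv(fisher + np.linalg.inv(Gamma))

# Estimator-independent design metric: a smaller trace indicates a better design.
metric = np.trace(crb)
```

Comparing `metric` across candidate designs A (e.g. detector layouts or illumination patterns) ranks them without ever running a reconstruction, which is the computational appeal the abstract points to.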
Learning a stable approximation of an existing but unknown inverse mapping: application to the half-time circular Radon transform
Supervised deep learning-based methods have inspired a new wave of image reconstruction methods that implicitly learn effective regularization strategies from a set of training data. While they hold potential for improving image quality, they have also raised concerns regarding their robustness. Instabilities can manifest when learned methods are applied to find approximate solutions to ill-posed image reconstruction problems for which a unique and stable inverse mapping does not exist, which is a typical use case. In this study, we investigate the performance of supervised deep learning-based image reconstruction in an alternate use case in which a stable inverse mapping is known to exist but is not yet analytically available in closed form. For such problems, a deep learning-based method can learn a stable approximation of the unknown inverse mapping that generalizes well to data that differ significantly from the training set. The learned approximation of the inverse mapping eliminates the need to employ an implicit (optimization-based) reconstruction method and can potentially yield insights into the unknown analytic inverse formula. The specific problem addressed is image reconstruction from a particular case of radially truncated circular Radon transform (CRT) data, referred to as 'half-time' measurement data. For the half-time image reconstruction problem, we develop and investigate a learned filtered backprojection method that employs a convolutional neural network to approximate the unknown filtering operation. We demonstrate that this method behaves stably and readily generalizes to data that differ significantly from training data. The developed method may find application to wave-based imaging modalities that include photoacoustic computed tomography.
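The learned-FBP structure described above, filter each projection and then backproject, can be sketched abstractly. In the paper the unknown filtering operation is a trained CNN; below a classical ramp filter stands in purely as a hypothetical placeholder (a numpy sketch, not the authors' code):

```python
import numpy as np

def filter_projections(sino, freq_filter):
    """Apply a 1-D frequency-domain filter to every projection (sinogram row).

    In a learned FBP method this filtering step is replaced by a trained CNN;
    here `freq_filter` is just an array of frequency weights.
    """
    spectrum = np.fft.fft(sino, axis=1)
    return np.real(np.fft.ifft(spectrum * freq_filter, axis=1))

n = 64
ramp = np.abs(np.fft.fftfreq(n))   # classical ramp filter as a stand-in
sino = np.random.default_rng(1).standard_normal((10, n))
filtered = filter_projections(sino, ramp)
# a geometry-dependent backprojection of `filtered` would complete the pipeline
```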
Generalized Bayes approach to inverse problems with model misspecification
We propose a general framework for obtaining probabilistic solutions to PDE-based inverse problems. Bayesian methods are attractive for uncertainty quantification but assume knowledge of the likelihood model or data generation process. This assumption is difficult to justify in many inverse problems, where the specification of the data generation process is not obvious. We adopt a Gibbs posterior framework that directly posits a regularized variational problem on the space of probability distributions of the parameter. We propose a novel model comparison framework that evaluates the optimality of a given loss based on its "predictive performance". We provide cross-validation procedures to calibrate the regularization parameter of the variational objective and compare multiple loss functions. Some novel theoretical properties of Gibbs posteriors are also presented. We illustrate the utility of our framework via a simulated example, motivated by dispersion-based wave models used to characterize arterial vessels in ultrasound vibrometry.
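A minimal scalar sketch of the Gibbs-posterior idea (all numbers hypothetical): a user-chosen loss is exponentiated and combined with a prior, with no likelihood or data-generation model assumed, and the resulting distribution is sampled with random-walk Metropolis. The parameter `eta` plays the role of the regularization parameter the abstract proposes to calibrate.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(2.0, 1.0, size=50)

def loss(theta):
    # user-chosen loss; no likelihood / data-generation model is assumed
    return np.mean((data - theta) ** 2)

def log_gibbs(theta, eta=1.0):
    log_prior = -0.5 * theta**2                  # standard normal prior
    return log_prior - eta * len(data) * loss(theta)

# random-walk Metropolis on the Gibbs posterior
theta, chain = 0.0, []
for _ in range(5000):
    prop = theta + 0.3 * rng.standard_normal()
    if np.log(rng.uniform()) < log_gibbs(prop) - log_gibbs(theta):
        theta = prop
    chain.append(theta)
post_mean = np.mean(chain[1000:])                # discard burn-in
```

Varying `eta` interpolates between prior-dominated inference (`eta` near 0) and loss-dominated inference, which is exactly the trade-off the proposed cross-validation procedures are meant to tune.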
Unsupervised knowledge-transfer for learned image reconstruction
Deep learning-based image reconstruction approaches have demonstrated impressive empirical performance in many imaging modalities. These approaches usually require a large amount of high-quality paired training data, which is often not available in medical imaging. To circumvent this issue, we develop a novel unsupervised knowledge-transfer paradigm for learned reconstruction within a Bayesian framework. The proposed approach learns a reconstruction network in two phases. The first phase trains a reconstruction network with a set of ordered pairs comprising ground-truth images of ellipses and the corresponding simulated measurement data. The second phase fine-tunes the pretrained network to more realistic measurement data without supervision. By construction, the framework is capable of delivering predictive uncertainty information over the reconstructed image. We present extensive experimental results on low-dose and sparse-view computed tomography, showing that the approach is competitive with several state-of-the-art supervised and unsupervised reconstruction techniques. Moreover, for test data distributed differently from the training data, the proposed framework can significantly improve reconstruction quality, not only visually but also quantitatively in terms of PSNR and SSIM, when compared with learned methods trained on the synthetic dataset only.
An Extended Primal-Dual Algorithm Framework for Nonconvex Problems: Application to Image Reconstruction in Spectral CT
Using the convexity of each component of the forward operator, we propose an extended primal-dual algorithm framework for solving a class of nonconvex and possibly nonsmooth optimization problems in spectral CT image reconstruction. Following the proposed algorithm framework, we present six different iterative schemes or algorithms, and then establish their relationship to some existing algorithms. Under appropriate conditions, we prove the convergence of these schemes for the general case. Moreover, when the proposed schemes are applied to solving a specific problem in spectral CT image reconstruction, namely, the total variation regularized nonlinear least-squares problem with a nonnegativity constraint, we also prove convergence for these schemes by exploiting special properties of the problem. Numerical experiments with densely and sparsely sampled data demonstrate the convergence and accuracy of the proposed algorithm framework, both by visual inspection of images of realistic anatomic complexity and by quantitative analysis with the metrics structural similarity, peak signal-to-noise ratio, mean squared error, and maximum pixel difference. We analyze the computational complexity of these schemes, and discuss extended applications of this algorithm framework to other nonlinear imaging problems.
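For orientation, the classic convex primal-dual (Chambolle-Pock) iteration that such frameworks extend can be written in a few lines. This toy solves 1-D total-variation denoising, not the nonconvex spectral CT problem; every number is illustrative, and the step sizes satisfy the standard condition tau*sigma*||D||^2 < 1.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
clean = np.concatenate([np.zeros(50), np.ones(50)])
b = clean + 0.2 * rng.standard_normal(n)      # noisy piecewise-constant data

Dm = np.eye(n, k=1) - np.eye(n)               # forward-difference matrix
Dm[-1] = 0.0                                  # Neumann boundary row

lam, tau, sig = 0.5, 0.4, 0.4                 # tau*sig*||D||^2 <= 0.64 < 1

x = b.copy()
xbar = b.copy()
y = np.zeros(n)
for _ in range(500):
    # dual ascent: prox of the conjugate of lam*||.||_1 is a clip
    y = np.clip(y + sig * (Dm @ xbar), -lam, lam)
    # primal descent: prox of the data term 0.5*||x - b||^2
    x_new = (x - tau * (Dm.T @ y) + tau * b) / (1.0 + tau)
    xbar = 2.0 * x_new - x
    x = x_new

def energy(v):
    return 0.5 * np.sum((v - b) ** 2) + lam * np.sum(np.abs(Dm @ v))
```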
Minimizing L1 over L2 norms on the gradient
In this paper, we study the L1/L2 minimization on the gradient for imaging applications. Several recent works have demonstrated that L1/L2 is better than the L1 norm when approximating the L0 norm to promote sparsity. Consequently, we postulate that applying L1/L2 on the gradient is better than the classic total variation (the L1 norm on the gradient) to enforce the sparsity of the image gradient. Numerically, we design a specific splitting scheme, under which we can prove subsequential and global convergence for the alternating direction method of multipliers (ADMM) under certain conditions. Experimentally, we demonstrate visible improvements of L1/L2 over L1 and other nonconvex regularizations for image recovery from low-frequency measurements and two medical applications of MRI and CT reconstruction. Finally, we reveal some empirical evidence on the superiority of L1/L2 over L1 when recovering piecewise constant signals from low-frequency measurements to shed light on future works.
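The sparsity-promotion claim is easy to verify numerically: two signals can have the same total variation (L1 norm of the gradient) while the L1/L2 ratio clearly prefers the one with the sparser gradient. A tiny demonstration with illustrative signals:

```python
import numpy as np

def l1_over_l2(v):
    return np.sum(np.abs(v)) / np.linalg.norm(v)

# two signals with identical total variation (L1 norm of the gradient equals 1)
step = np.concatenate([np.zeros(50), np.ones(50)])   # one jump: sparsest gradient
ramp = np.linspace(0.0, 1.0, 100)                    # gradient spread over every entry

g_step, g_ramp = np.diff(step), np.diff(ramp)
# L1 cannot distinguish the two, while L1/L2 is much smaller for the sparse gradient
```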
Blood and Breath Alcohol Concentration from Transdermal Alcohol Biosensor Data: Estimation and Uncertainty Quantification via Forward and Inverse Filtering for a Covariate-Dependent, Physics-Informed, Hidden Markov Model
Transdermal alcohol biosensors that do not require active participation of the subject and yield near continuous measurements have the potential to significantly enhance the data collection abilities of alcohol researchers and clinicians who currently rely exclusively on breathalyzers and drinking diaries. Making these devices accessible and practical requires that transdermal alcohol concentration (TAC) be accurately and consistently transformable into the well-accepted measures of intoxication, blood/breath alcohol concentration (BAC/BrAC). A novel approach to estimating BrAC from TAC based on covariate-dependent physics-informed hidden Markov models with two emissions is developed. The hidden Markov chain serves as a forward full-body alcohol model with BrAC and TAC, the two emissions, assumed to be described by a bivariate normal which depends on the hidden Markovian states and person-level and session-level covariates via built-in regression models. An innovative extension of hidden Markov modeling is developed wherein the hidden Markov model framework is regularized by a first-principles PDE model to yield a hybrid that combines prior knowledge of the physics of transdermal ethanol transport with data-based learning. Training, or inverse filtering, is effected via the Baum-Welch algorithm and 256 sets of BrAC and TAC signals and covariate measurements collected in the laboratory. Forward filtering of TAC to obtain estimated BrAC is achieved via a new physics-informed regularized Viterbi algorithm which determines the most likely path through the hidden Markov chain using TAC alone. The Markovian states are decoded and used to yield estimates of BrAC and to quantify the uncertainty in the estimates. Numerical studies are presented and discussed. Overall good agreement between BrAC data and estimates was observed with a median relative peak error of 22% and a median relative area under the curve error of 25% on the test set. 
We also demonstrate that the physics-informed Viterbi algorithm eliminates non-physical artifacts in the BrAC estimates.
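For reference, the standard (unregularized) Viterbi recursion that the physics-informed variant builds on can be sketched in log-space. The transition and emission numbers below are toy values, not the calibrated alcohol model:

```python
import numpy as np

def viterbi(log_pi, log_A, log_B, obs):
    """Most likely hidden-state path of an HMM (log-domain Viterbi).

    log_pi: (S,) initial log-probs; log_A: (S, S) transition log-probs;
    log_B: (S, O) emission log-probs; obs: sequence of observation indices.
    """
    S, T = len(log_pi), len(obs)
    delta = log_pi + log_B[:, obs[0]]
    psi = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + log_A          # scores[i, j]: best path ending i -> j
        psi[t] = np.argmax(scores, axis=0)
        delta = scores[psi[t], np.arange(S)] + log_B[:, obs[t]]
    path = [int(np.argmax(delta))]
    for t in range(T - 1, 0, -1):                # backtrack through stored argmaxes
        path.append(int(psi[t][path[-1]]))
    return path[::-1]

# toy sticky two-state chain with informative emissions (illustrative numbers)
log_pi = np.log(np.array([0.5, 0.5]))
log_A = np.log(np.array([[0.9, 0.1], [0.1, 0.9]]))
log_B = np.log(np.array([[0.9, 0.1], [0.1, 0.9]]))
path = viterbi(log_pi, log_A, log_B, [0, 0, 1, 1, 1])
```

The paper's regularized Viterbi modifies the transition scores with a first-principles PDE term; the recursion itself is unchanged.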
A probabilistic approach to tomography and adjoint state methods, with an application to full waveform inversion in medical ultrasound
Bayesian methods are a popular research direction for inverse problems. There are a variety of techniques available to solve Bayes' equation, each with their own strengths and limitations. Here, we discuss stochastic variational inference (SVI), which solves Bayes' equation using gradient-based methods. This is important for applications which are time-limited (e.g. medical tomography) or where solving the forward problem is expensive (e.g. adjoint methods). To evaluate the use of SVI in both these contexts, we apply it to ultrasound tomography of the brain using full-waveform inversion (FWI). FWI is a computationally expensive adjoint method for solving the ultrasound tomography inverse problem, and we demonstrate that SVI can be used to find a no-cost estimate of the pixel-wise variance of the sound-speed distribution using a mean-field Gaussian approximation. In other words, we show experimentally that it is possible to estimate the pixel-wise uncertainty of the sound-speed reconstruction using SVI and a common approximation which is already implicit in other types of iterative reconstruction. Uncertainty estimates have a variety of uses in adjoint methods and tomography. As an illustrative example, we focus on the use of uncertainty for image quality assessment. This application is not limiting; our variance estimator has effectively no computational cost and we expect that it will have applications in fields such as non-destructive testing or aircraft component design where uncertainties may not be routinely estimated.
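A minimal sketch of mean-field SVI with the reparameterization trick, on a 2-D Gaussian toy posterior whose precision matrix stands in for an FWI Hessian (all values hypothetical). For this target the exact mean-field variances are the reciprocal diagonal of the precision, which the stochastic iteration recovers:

```python
import numpy as np

rng = np.random.default_rng(0)
P = np.array([[2.0, 0.5],
              [0.5, 1.0]])              # toy posterior precision (stand-in for an FWI Hessian)

def grad_log_post(x):
    return -P @ x                       # gradient of the Gaussian log-posterior

m = np.array([1.0, -1.0])               # variational mean
log_s = np.zeros(2)                     # log std-devs of the mean-field Gaussian
lr = 0.01
var_avg = np.zeros(2)
for t in range(30000):
    eps = rng.standard_normal(2)
    s = np.exp(log_s)
    g = grad_log_post(m + s * eps)      # reparameterization trick: x = m + s*eps
    m = m + lr * g                      # stochastic ELBO gradient w.r.t. the mean
    log_s = log_s + lr * (g * eps * s + 1.0)   # w.r.t. log-std; the +1 is the entropy term
    if t >= 20000:
        var_avg += np.exp(2.0 * log_s) / 10000.0
# for this Gaussian target the exact mean-field variances are 1/diag(P)
```

The variance update reuses the same gradient evaluation as the mean update, which is the sense in which the pixel-wise variance estimate comes at effectively no extra cost.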
Deep neural-network based optimization for the design of a multi-element surface magnet for MRI applications
We present a combination of a CNN-based encoder with an analytical forward map for solving inverse problems. We call it an encoder-analytic (EA) hybrid model. It does not require a dedicated training dataset and can train itself from the connected forward map in a direct learning fashion. A separate regularization term is not required either, since the forward map also acts as a regularizer. As it is not a generalization model, it does not suffer from overfitting. We further show that the model can be customized either to find a specific target solution or to follow a given heuristic. As an example, we apply this approach to the design of a multi-element surface magnet for low-field magnetic resonance imaging (MRI). We show that the EA model can outperform the benchmark genetic algorithm currently used for magnet design in MRI, obtaining almost 10 times better results.
Equivariant neural networks for inverse problems
In recent years the use of convolutional layers to encode an inductive bias (translational equivariance) in neural networks has proven to be a very fruitful idea. The successes of this approach have motivated a line of research into incorporating other symmetries into deep learning methods, in the form of group equivariant convolutional neural networks. Much of this work has been focused on roto-translational symmetry of Euclidean space, but other examples are the scaling symmetry of the positive real line and rotational symmetry of the sphere. In this work, we demonstrate that group equivariant convolutional operations can naturally be incorporated into learned reconstruction methods for inverse problems that are motivated by the variational regularisation approach. Indeed, if the regularisation functional is invariant under a group symmetry, the corresponding proximal operator will satisfy an equivariance property with respect to the same group symmetry. As a result of this observation, we design learned iterative methods in which the proximal operators are modelled as group equivariant convolutional neural networks. We use roto-translationally equivariant operations in the proposed methodology and apply it to the problems of low-dose computerised tomography reconstruction and subsampled magnetic resonance imaging reconstruction. The proposed methodology is demonstrated to improve the reconstruction quality of a learned reconstruction method, with a little extra computational cost at training time but without any extra cost at test time.
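The key observation, that an invariant regularizer has an equivariant proximal operator, can be checked directly in a toy setting: the l1 norm of an image is invariant under 90-degree rotation, so its prox (soft-thresholding) commutes with `np.rot90`. This is a sanity check of the principle, not the roto-translational CNN machinery of the paper:

```python
import numpy as np

def prox_l1(x, lam):
    # proximal operator of lam*||x||_1: soft-thresholding
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

rng = np.random.default_rng(0)
img = rng.standard_normal((8, 8))
lam = 0.3

rotate_then_prox = prox_l1(np.rot90(img), lam)
prox_then_rotate = np.rot90(prox_l1(img, lam))
# the two agree: the prox of a rotation-invariant functional is rotation-equivariant
```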
Sparsity-based nonlinear reconstruction of optical parameters in two-photon photoacoustic computed tomography
We present a new nonlinear optimization approach for the sparse reconstruction of single-photon absorption and two-photon absorption coefficients in photoacoustic computed tomography (PACT). This framework comprises minimizing an objective functional involving a least squares fit of the interior pressure field data corresponding to two boundary source functions, where the absorption coefficients and the photon density are related through a semi-linear elliptic partial differential equation (PDE) arising in PACT. Further, the objective functional includes a regularization term that promotes sparsity patterns in the absorption coefficients. The motivation for this framework primarily comes from some recent works on solving inverse problems in acousto-electric tomography and current density impedance tomography. We provide a new proof of existence and uniqueness of a solution to the semi-linear PDE. Further, a proximal method, involving a Picard solver for the semi-linear PDE and its adjoint, is used to solve the optimization problem. Several numerical experiments are presented to demonstrate the effectiveness of the proposed framework.
A direct reconstruction algorithm for the anisotropic inverse conductivity problem based on Calderón's method in the plane
A direct reconstruction algorithm based on Calderón's linearization method for the reconstruction of isotropic conductivities is proposed for anisotropic conductivities in two dimensions. To overcome the non-uniqueness of the anisotropic inverse conductivity problem, the entries of the unperturbed anisotropic tensors are assumed known a priori, and it remains to reconstruct the multiplicative scalar field. The quasi-conformal map in the plane facilitates the Calderón-based approach for anisotropic conductivities. The method is demonstrated on discontinuous radially symmetric conductivities of high and low contrast.
A second order Calderón's method with a correction term and a priori information
Calderón's method is a direct linearized reconstruction method for the inverse conductivity problem with the attribute that it can provide absolute images of both conductivity and permittivity with no need for forward modeling. In this work, an explicit relationship between Calderón's method and the D-bar method is provided, facilitating a "higher-order" Calderón's method in which a correction term is included, derived from the relationship to the D-bar method. Furthermore, a method of including a spatial prior is provided. These advances are demonstrated on simulated data and on tank data collected with the ACE1 EIT system.
Variational regularisation for inverse problems with imperfect forward operators and general noise models
We study variational regularisation methods for inverse problems with imperfect forward operators whose errors can be modelled by order intervals in a partial order of a Banach lattice. We carry out analysis with respect to existence and convex duality for general data fidelity terms and regularisation functionals. Both for a priori and a posteriori parameter choice rules, we obtain convergence rates of the regularised solutions in terms of Bregman distances. Our results apply to fidelity terms such as Wasserstein distances, φ-divergences, norms, as well as sums and infimal convolutions of those.
The D-bar method for electrical impedance tomography-demystified
Electrical impedance tomography (EIT) is an imaging modality where a patient or object is probed using harmless electric currents. The currents are fed through electrodes placed on the surface of the target, and the data consists of voltages measured at the electrodes resulting from a linearly independent set of current injection patterns. EIT aims to recover the internal distribution of electrical conductivity inside the target. The inverse problem underlying the EIT image formation task is nonlinear and severely ill-posed, and hence sensitive to modeling errors and measurement noise. Therefore, the inversion process needs to be regularized. However, traditional variational regularization methods, based on optimization, often suffer from local minima because of nonlinearity. This is what makes regularized direct (non-iterative) methods attractive for EIT. The most developed direct EIT algorithm is the D-bar method, based on Complex Geometric Optics solutions and a nonlinear Fourier transform. Variants and recent developments of D-bar methods are reviewed, and their practical numerical implementation is explained.
Fisher information analysis of list-mode SPECT emission data for joint estimation of activity and attenuation distribution
The potential to perform attenuation and scatter compensation (ASC) in single-photon emission computed tomography (SPECT) imaging without a separate transmission scan is highly significant. In this context, attenuation in SPECT is primarily due to Compton scattering, where the probability of Compton scatter is proportional to the attenuation coefficient of the tissue, and the energy of the scattered photon and the scattering angle are related. Based on this premise, we investigated whether SPECT scattered-photon data acquired in list-mode (LM) format and including the energy information can be used to estimate the attenuation map. For this purpose, we propose a Fisher-information-based method that yields the Cramér-Rao bound (CRB) for the task of jointly estimating the activity and attenuation distributions using only the SPECT emission data. In the process, a path-based formalism to process the LM SPECT emission data, including the scattered-photon data, is proposed. The Fisher information method was implemented on NVIDIA graphics processing units (GPUs) for acceleration. The method was applied to analyze the information content of SPECT LM emission data, which contains up to first-order scattered events, in a simulated SPECT system with parameters modeling a clinical system, using realistic computational studies with 2-D digital synthetic and anthropomorphic phantoms. The method was also applied to LM data containing up to second-order scatter for a synthetic phantom. Experiments with anthropomorphic phantoms simulated myocardial perfusion and dopamine transporter (DaT)-Scan SPECT studies. The results show that the CRB obtained for the attenuation and activity coefficients was typically much lower than the true value of these coefficients. An increase in the number of detected photons yielded a lower CRB for both the attenuation and activity coefficients.
Further, we observed that systems with better energy resolution yielded a lower CRB for the attenuation coefficient. Overall, the results provide evidence that LM SPECT emission data, including the scattered photons, contains information to jointly estimate the activity and attenuation coefficients.
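The core CRB computation has a compact form for Poisson-distributed count data: the Fisher information is Hᵀ diag(1/λ) H for expected counts λ = Hθ, and the CRB is its inverse. A toy two-parameter version (hypothetical system matrix, nothing like a real SPECT model) also reproduces the reported trend that more detected photons lower the bound:

```python
import numpy as np

H = np.array([[0.8, 0.2],
              [0.3, 0.7],
              [0.5, 0.5]])             # toy system matrix (hypothetical, 3 bins x 2 params)
theta = np.array([10.0, 5.0])          # "true" activity-like parameters

def poisson_crb(scale):
    lam = scale * H @ theta            # expected counts in each detector bin
    J = scale * H                      # Jacobian of the mean w.r.t. theta
    fisher = J.T @ np.diag(1.0 / lam) @ J
    return np.linalg.inv(fisher)

crb_low = poisson_crb(1.0)             # few detected photons
crb_high = poisson_crb(10.0)           # 10x more detected photons
```

For this model the bound scales inversely with the photon count, matching the observation that more detected photons yield a lower CRB.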
Non-unique games over compact groups and orientation estimation in cryo-EM
Let G be a compact group and let f_ij ∈ L^2(G). We define the non-unique games (NUG) problem as finding g_1, …, g_n ∈ G to minimize ∑_{i,j} f_ij(g_i g_j^{-1}). We introduce a convex relaxation of the NUG problem to a semidefinite program (SDP) by taking the Fourier transform of f_ij over G. The NUG framework can be seen as a generalization of the little Grothendieck problem over the orthogonal group and the Unique Games problem, and includes many practically relevant problems, such as the maximum likelihood estimator for registering bandlimited functions over the unit sphere in d dimensions and orientation estimation of noisy cryo-electron microscopy (cryo-EM) projection images. We implement an SDP solver for the NUG cryo-EM problem using the alternating direction method of multipliers (ADMM). Numerical studies with synthetic datasets indicate that while our ADMM solver is slower than existing methods, it can estimate the rotations more accurately, especially at low signal-to-noise ratio (SNR).
Hyper-Molecules: on the Representation and Recovery of Dynamical Structures for Applications in Flexible Macro-Molecules in Cryo-EM
Cryo-electron microscopy (cryo-EM), the subject of the 2017 Nobel Prize in Chemistry, is a technology for obtaining 3-D reconstructions of macromolecules from many noisy 2-D projections of instances of these macromolecules, whose orientations and positions are unknown. These molecules are not rigid objects, but flexible objects involved in dynamical processes. The different conformations are exhibited by different instances of the macromolecule observed in a cryo-EM experiment, each of which is recorded as a particle image. The range of conformations and the conformation of each particle are not known a priori; one of the great promises of cryo-EM is to map this conformation space. Remarkable progress has been made in reconstructing rigid molecules based on homogeneous samples of molecules in spite of the unknown orientation of each particle image and significant progress has been made in recovering a few distinct states from mixtures of rather distinct conformations, but more complex heterogeneous samples remain a major challenge. We introduce the "hyper-molecule" theoretical framework for modeling structures across different states of heterogeneous molecules, including continuums of states. The key idea behind this framework is representing heterogeneous macromolecules as high-dimensional objects, with the additional dimensions representing the conformation space. This idea is then refined to model properties such as localized heterogeneity. In addition, we introduce an algorithmic framework for reconstructing such heterogeneous objects from experimental data using a Bayesian formulation of the problem and Markov chain Monte Carlo (MCMC) algorithms to address the computational challenges in recovering these high dimensional hyper-molecules. We demonstrate these ideas in a preliminary prototype implementation, applied to synthetic data.
Where did the tumor start? An inverse solver with sparse localization for tumor growth models
We present a numerical scheme for solving an inverse problem for parameter estimation in tumor growth models for glioblastomas, a form of aggressive primary brain tumor. The growth model is a reaction-diffusion partial differential equation (PDE) for the tumor concentration. We use a PDE-constrained optimization formulation for the inverse problem. The unknown parameters are the reaction coefficient (proliferation), the diffusion coefficient (infiltration), and the initial condition field for the tumor PDE. Segmentations of magnetic resonance imaging (MRI) scans drive the inverse problem, with segmented tumor regions serving as partial observations of the tumor concentration. As in most cases in clinical practice, we use data from a single time snapshot. Moreover, the precise time relative to the initiation of the tumor is unknown, which poses an additional difficulty for inversion. We perform a frozen-coefficient spectral analysis and show that the inverse problem is severely ill-posed. We introduce a biophysically motivated regularization on the structure and magnitude of the tumor initial condition. In particular, we assume that the tumor starts at a few locations (enforced with a sparsity constraint on the initial condition of the tumor) and that the initial condition magnitude in the maximum norm is equal to one. We solve the resulting optimization problem using an inexact quasi-Newton method combined with a compressive sampling algorithm for the sparsity constraint. Our implementation uses the PETSc and AccFFT libraries. We conduct numerical experiments on synthetic and clinical images to highlight the improved performance of our solver over a previously existing solver that uses standard two-norm regularization for the calibration parameters. The existing solver is unable to localize the initial condition. Our new solver can localize the initial condition and recover infiltration and proliferation.
In clinical datasets (for which the ground truth is unknown), our solver results in qualitatively different solutions compared to the two-norm regularized solver.
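The "tumor starts at a few locations" constraint can be illustrated with generic iterative hard thresholding (IHT), which is not the authors' inexact quasi-Newton solver but captures the idea: after each gradient step, keep only the s largest entries of the candidate initial condition. A toy linear recovery problem with made-up dimensions:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, s = 60, 100, 3
A = rng.standard_normal((m, n)) / np.sqrt(m)   # generic observation operator (toy)
x_true = np.zeros(n)
x_true[[5, 40, 77]] = [3.0, -2.5, 2.0]         # sparse "initial condition"
b = A @ x_true                                  # noiseless partial observations

def hard_threshold(v, s):
    out = np.zeros_like(v)
    keep = np.argsort(np.abs(v))[-s:]           # indices of the s largest magnitudes
    out[keep] = v[keep]
    return out

step = 1.0 / np.linalg.norm(A, 2) ** 2          # safe gradient step size
x = np.zeros(n)
for _ in range(1000):
    x = hard_threshold(x + step * A.T @ (b - A @ x), s)
```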
A domain decomposition preconditioning for an inverse volume scattering problem
We propose domain decomposition preconditioners for the solution of an integral equation formulation of the acoustic forward and inverse scattering problems. We study both forward and inverse volume problems and propose preconditioning techniques to accelerate the iterative solvers. For the forward scattering problem, we extend domain decomposition based preconditioning techniques, originally presented for partial differential equations, to integral equations. We combine this domain decomposition preconditioner with an easy-to-construct low-rank correction, forming a new preconditioner. For the inverse scattering problem, we use the forward problem preconditioner as a building block for constructing a preconditioner for the Gauss-Newton Hessian. We present numerical results that demonstrate the performance of both preconditioning strategies.
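The flavor of a low-rank-corrected preconditioner can be shown on a dense toy system (not an integral-equation discretization): preconditioning with the "easy" part T alone leaves a low-rank perturbation for conjugate gradients to handle, while adding a Woodbury-identity correction for UUᵀ removes it and cuts the iteration count.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 300, 5
# "easy" SPD part T (a 1-D Laplacian plus identity) and a low-rank perturbation UU^T
T = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1) + np.eye(n)
U = rng.standard_normal((n, k))
A = T + U @ U.T
b = rng.standard_normal(n)

def pcg(A, b, Minv, tol=1e-8, maxit=500):
    """Preconditioned conjugate gradients; returns the solution and iteration count."""
    x = np.zeros_like(b)
    r = b.copy()
    z = Minv(r)
    p = z.copy()
    it = 0
    while np.linalg.norm(r) > tol and it < maxit:
        Ap = A @ p
        alpha = (r @ z) / (p @ Ap)
        x = x + alpha * p
        r_new = r - alpha * Ap
        z_new = Minv(r_new)
        beta = (r_new @ z_new) / (r @ z)
        p = z_new + beta * p
        r, z = r_new, z_new
        it += 1
    return x, it

Tinv = np.linalg.inv(T)                       # stand-in for the "easy" solver
base = lambda r: Tinv @ r
# low-rank-corrected preconditioner via the Woodbury identity
C = np.linalg.inv(np.eye(k) + U.T @ Tinv @ U)
corrected = lambda r: Tinv @ r - Tinv @ (U @ (C @ (U.T @ (Tinv @ r))))

x1, it1 = pcg(A, b, base)
x2, it2 = pcg(A, b, corrected)
```

Here the correction makes the preconditioner exact, so the gain is dramatic; in the paper's setting the correction is approximate but plays the same role.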
Cryo-EM reconstruction of continuous heterogeneity by Laplacian spectral volumes
Single-particle electron cryomicroscopy is an essential tool for high-resolution 3D reconstruction of proteins and other biological macromolecules. An important challenge in cryo-EM is the reconstruction of non-rigid molecules with parts that move and deform. Traditional reconstruction methods fail in these cases, resulting in smeared reconstructions of the moving parts. This poses a major obstacle for structural biologists, who need high-resolution reconstructions of entire macromolecules, moving parts included. To address this challenge, we present a new method for the reconstruction of macromolecules exhibiting continuous heterogeneity. The proposed method uses projection images from multiple viewing directions to construct a graph Laplacian through which the manifold of three-dimensional conformations is analyzed. The 3D molecular structures are then expanded in a basis of Laplacian eigenvectors, using a novel generalized tomographic reconstruction algorithm to compute the expansion coefficients. These coefficients, which we name spectral volumes, provide a high-resolution visualization of the molecular dynamics. We provide a theoretical analysis and evaluate the method empirically on several simulated data sets.