Table of contents

Volume 36

Number 8, August 2020



Special Issue Papers

084001


Special Issue on Modern Challenges in Imaging

The need to solve discrete ill-posed problems arises in many areas of science and engineering. Solutions of these problems, if they exist, are very sensitive to perturbations in the available data. Regularization replaces the original problem by a nearby regularized problem, whose solution is less sensitive to the error in the data. The regularized problem contains a fidelity term and a regularization term. Recently, the use of a p-norm to measure the fidelity term and a q-norm to measure the regularization term has received considerable attention. The balance between these terms is determined by a regularization parameter. In many applications, such as in image restoration, the desired solution is known to live in a convex set, such as the nonnegative orthant. It is natural to require the computed solution of the regularized problem to satisfy the same constraint(s). This paper shows that this procedure induces a regularization method and describes a modulus-based iterative method for computing a constrained approximate solution of a smoothed version of the regularized problem. Convergence of the iterative method is shown, and numerical examples that illustrate the performance of the proposed method are presented.
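As a concrete illustration of constrained regularization, the sketch below solves the smoothed p = q = 2 case with a nonnegativity constraint by projected gradient descent. This is a simple stand-in for the modulus-based iteration of the paper, not a reproduction of it; the test matrix, noise level, and regularization parameter are invented for the demo.

```python
import numpy as np

def nonneg_tikhonov(A, b, mu, iters=500):
    """Nonnegatively constrained Tikhonov regularization with p = q = 2,
        min 0.5*||Ax - b||^2 + 0.5*mu*||x||^2   subject to  x >= 0,
    solved by projected gradient descent (a simple stand-in for the
    paper's modulus-based iterative method)."""
    x = np.zeros(A.shape[1])
    step = 1.0 / (np.linalg.norm(A, 2) ** 2 + mu)   # 1 / Lipschitz constant
    for _ in range(iters):
        grad = A.T @ (A @ x - b) + mu * x
        x = np.maximum(x - step * grad, 0.0)        # project onto x >= 0
    return x

# Mildly ill-conditioned example with a nonnegative exact solution.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 20)
A = np.vander(t, 5, increasing=True)                # Vandermonde test matrix
x_true = np.array([1.0, 0.0, 2.0, 0.0, 0.5])
b = A @ x_true + 1e-3 * rng.standard_normal(20)
x_reg = nonneg_tikhonov(A, b, mu=1e-4)
```

By construction every iterate, and hence the returned solution, satisfies the constraint exactly, which is the point of imposing the convex set during the iteration rather than projecting once at the end.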

084002


Special Issue on Modern Challenges in Imaging

The potential to perform attenuation and scatter compensation (ASC) in single-photon emission computed tomography (SPECT) without a separate transmission scan is highly significant. In this context, attenuation in SPECT is primarily due to Compton scattering, where the probability of Compton scatter is proportional to the attenuation coefficient of the tissue, and the energy of the scattered photon is related to the scattering angle. Based on this premise, we investigated whether the SPECT emission data, including the scatter-window data, acquired in list-mode (LM) format and including the energy information, can be used to estimate the attenuation map. For this purpose, we propose a Fisher-information-based method that yields the Cramér–Rao bound (CRB) for the task of jointly estimating the activity and attenuation distribution using only the SPECT emission data. In the process, a path-based formalism to process the LM SPECT emission data, including the scattered-photon data, is proposed. The Fisher information method was implemented on NVIDIA graphics processing units (GPUs) for acceleration. The method was applied to quantify the information content of SPECT LM emission data, containing up to first-order scattered events, in a simulated SPECT system whose parameters model a clinical system, using realistic computational studies with 2D digital synthetic and anthropomorphic phantoms. Experiments with anthropomorphic phantoms simulated myocardial perfusion and dopamine transporter (DaT)-Scan SPECT studies. The method was also applied to LM data containing up to second-order scatter for a synthetic phantom. The results show that the CRB obtained for the attenuation and activity coefficients was typically much lower than the true value of these coefficients. An increase in the number of detected photons yielded a lower CRB for both the attenuation and activity coefficients. Further, we observed that systems with better energy resolution yielded a lower CRB for the attenuation coefficient. Overall, the results provide evidence that LM SPECT emission data, including the scatter-window data, contain information to jointly estimate the activity and attenuation coefficients.
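The core computation here, a Cramér–Rao bound obtained from a Fisher information matrix, can be sketched for a toy Poisson measurement model (the standard noise model for photon counting). The two-parameter model, source strengths, and path lengths below are hypothetical illustrations, not the paper's LM SPECT formalism.

```python
import numpy as np

def poisson_fisher_crb(jacobian, mean_counts):
    """Cramér–Rao bound for parameters theta of a Poisson model
    y_i ~ Poisson(m_i(theta)): the Fisher information matrix is
    F = J^T diag(1/m) J, and the CRB is diag(F^{-1}).
    jacobian[i, k] = d m_i / d theta_k at the true theta."""
    F = jacobian.T @ (jacobian / mean_counts[:, None])
    return np.diag(np.linalg.inv(F))

# Hypothetical 2-parameter model: m_i = theta_0 * s_i * exp(-theta_1 * l_i),
# i.e. an "activity" amplitude theta_0 and an "attenuation" rate theta_1.
s = np.array([100.0, 120.0, 90.0, 110.0])   # source strengths (assumed)
l = np.array([0.5, 1.0, 1.5, 2.0])          # path lengths (assumed)
theta = np.array([2.0, 0.3])                # true (activity, attenuation)
m = theta[0] * s * np.exp(-theta[1] * l)
J = np.column_stack([m / theta[0], -l * m]) # exact partial derivatives
crb = poisson_fisher_crb(J, m)
```

Scaling the mean counts up by a factor c scales the Fisher matrix by c and the CRB down by 1/c, which is the "more detected photons, lower CRB" behavior the abstract reports.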

Papers

085001
The following article is Open access


For $\mathcal{O}$ a bounded domain in ${\mathbb{R}}^{d}$ and a given smooth function $g:\mathcal{O}\to \mathbb{R}$, we consider the statistical nonlinear inverse problem of recovering the conductivity f > 0 in the divergence form equation $\nabla \cdot \left(f\nabla u\right)=g \text{ on } \mathcal{O}, \quad u=0 \text{ on } \partial \mathcal{O},$ from N discrete noisy point evaluations of the solution $u = u_{f}$ on $\mathcal{O}$. We study the statistical performance of Bayesian nonparametric procedures based on a flexible class of Gaussian (or hierarchical Gaussian) process priors, whose implementation is feasible by MCMC methods. We show that, as the number N of measurements increases, the resulting posterior distributions concentrate around the true parameter generating the data, and derive a convergence rate $N^{-\lambda}$, λ > 0, for the reconstruction error of the associated posterior means, in ${L}^{2}\left(\mathcal{O}\right)$-distance.
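Posterior sampling under Gaussian priors of the kind discussed here is commonly implemented with the preconditioned Crank–Nicolson (pCN) MCMC method, whose acceptance ratio involves only the data misfit. The sketch below applies pCN to an invented finite-dimensional linear inverse problem with a standard Gaussian prior; it illustrates the sampler, not the paper's PDE setting.

```python
import numpy as np

def pcn_mcmc(phi, dim, n_iter=30000, beta=0.1, rng=None):
    """Preconditioned Crank–Nicolson MCMC for a posterior built from a
    standard Gaussian prior N(0, I) and negative log-likelihood phi.
    The proposal is prior-reversible, so the acceptance probability is
    min(1, exp(phi(u) - phi(v))) with no prior term."""
    rng = rng or np.random.default_rng(0)
    u = np.zeros(dim)
    phi_u = phi(u)
    chain = np.empty((n_iter, dim))
    for i in range(n_iter):
        v = np.sqrt(1.0 - beta**2) * u + beta * rng.standard_normal(dim)
        phi_v = phi(v)
        if rng.uniform() < np.exp(min(0.0, phi_u - phi_v)):
            u, phi_u = v, phi_v
        chain[i] = u
    return chain

# Toy linear inverse problem y = G u + noise (G, noise level invented).
rng = np.random.default_rng(4)
dim = 5
G = np.diag(1.0 / (1.0 + np.arange(dim)))   # smoothing-type forward map
u_true = np.array([1.0, -0.5, 0.8, 0.0, 0.3])
noise = 0.1
y = G @ u_true + noise * rng.standard_normal(dim)
phi = lambda u: 0.5 * np.sum((G @ u - y) ** 2) / noise**2
chain = pcn_mcmc(phi, dim, rng=rng)
post_mean = chain[5000:].mean(axis=0)       # discard burn-in
```

For this Gaussian conjugate toy problem the posterior mean is available in closed form, so the chain's long-run average can be checked against it.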

085002
The following article is Open access


We introduce a framework for the reconstruction and representation of functions in a setting where these objects cannot be directly observed, but only indirect and noisy measurements are available, namely an inverse problem setting. The proposed methodology can be applied either to the analysis of indirectly observed functional images or to the associated covariance operators, representing second-order information, and thus lying on a non-Euclidean space. To deal with the ill-posedness of the inverse problem, we exploit the spatial structure of the sample data by introducing a flexible regularizing term embedded in the model. Thanks to its efficiency, the proposed model is applied to MEG data, leading to a novel approach to the investigation of functional connectivity.

085003
The following article is Open access


In this paper, we are interested in designing and analyzing a finite element data assimilation method for laminar steady flow described by the linearized incompressible Navier–Stokes equation. We propose a weakly consistent stabilized finite element method which reconstructs the whole fluid flow from noisy velocity measurements in a subset of the computational domain. Using the stability of the continuous problem in the form of a three balls inequality, we derive quantitative local error estimates for the velocity. Numerical simulations illustrate these convergence properties and we finally apply our method to the flow reconstruction in a blood vessel.

085004


For dynamical systems of the form $\dot{x} = Ax + c$, henceforth called affine systems, we study the problem of determining the system parameters A, c from data derived from a single trajectory. We establish necessary and sufficient conditions for identifiability of the parameters from either a full trajectory or a set of discrete data points sampled from a single trajectory, expressed as geometric conditions on the trajectory. We describe conditions under which the system has an equilibrium point and examine the problem of identifiability of the equilibrium point from data. We briefly examine analytical upper and lower bounds for the maximal permissible uncertainty that still guarantees an inverse with specified qualitative properties of the resulting system (e.g., stability, spirality, and so on). Finally, we illustrate an application of the theory to parameter estimation for a class of nonlinear systems consisting of linear-in-parameters systems, and demonstrate that affine approximations can yield more accurate estimates of parameter values than those based on finite differences.
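A minimal version of the identification problem can be sketched directly: sample one trajectory of a known affine system and recover A and c by least squares on finite-difference estimates of $\dot{x}$ (the baseline approach the paper compares against). The particular system and sampling grid below are invented for the demo.

```python
import numpy as np

def identify_affine(ts, xs):
    """Least-squares estimate of A, c in xdot = A x + c from samples
    x(t_j) of a single uniformly sampled trajectory, using central
    finite differences. Identifiability requires the visited states
    to affinely span the state space (a geometric condition on the
    trajectory, as in the paper)."""
    dt = ts[1] - ts[0]
    xdot = (xs[2:] - xs[:-2]) / (2.0 * dt)          # central differences
    X = np.hstack([xs[1:-1], np.ones((len(xdot), 1))])
    sol, *_ = np.linalg.lstsq(X, xdot, rcond=None)  # rows: [A^T; c^T]
    return sol[:-1].T, sol[-1]

# Simulate one trajectory of a known 2D affine system with RK4,
# then recover A and c from the samples.
A_true = np.array([[-1.0, 2.0], [-2.0, -1.0]])      # stable spiral
c_true = np.array([1.0, 0.5])
f = lambda x: A_true @ x + c_true
ts = np.linspace(0.0, 5.0, 2001)
h = ts[1] - ts[0]
xs = np.empty((len(ts), 2))
xs[0] = [1.0, 0.0]
for j in range(len(ts) - 1):
    k1 = f(xs[j])
    k2 = f(xs[j] + 0.5 * h * k1)
    k3 = f(xs[j] + 0.5 * h * k2)
    k4 = f(xs[j] + h * k3)
    xs[j + 1] = xs[j] + h / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
A_est, c_est = identify_affine(ts, xs)
```

The spiral trajectory visits affinely independent states, so the regression matrix has full rank and the recovery is essentially exact up to the O(dt²) finite-difference error.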

085005


We are concerned with the inverse scattering problem of recovering an inhomogeneous medium from the associated acoustic wave measurement. We prove that, under certain assumptions, a single far-field pattern determines the values of a perturbation to the refractive index on the corners of its support. These assumptions are satisfied, for example, in the low acoustic frequency regime. As a consequence, if the perturbation is piecewise constant with either a polyhedral nest geometry or a known polyhedral cell geometry, such as a pixel or voxel array, we establish the injectivity of the perturbation-to-far-field map for a fixed incident wave. This is the first unique determinacy result of its type in the literature; all existing results essentially make use of infinitely many measurements.

085006


In this work, we study the inverse spectral stability for the transmission eigenvalue problem from finitely many data with errors. It is shown that when a ⩽ 1, all potentials whose transmission eigenvalues are ɛ-close in some disc centered at the origin are also close.

085007


Inverse scattering problems, in which physical properties of a medium are reconstructed from boundary measurements, are substantially challenging. This work aims to verify, on experimentally collected data, the performance of a newly developed convexification method for a 3D coefficient inverse problem in the case of unknown objects buried in a sandbox. The measured backscatter data are generated by a point source moving along an interval of a straight line at a fixed frequency. Using a special Fourier basis, the method strongly relies on a new derivation of a boundary value problem for a system of coupled quasilinear elliptic equations. This problem, in turn, is solved via the minimization of a Tikhonov-like functional weighted by a Carleman weight function. Unlike in the continuous case, the weighted cost functional in the partial finite difference setting does not need a penalty term for the global convergence analysis to hold. The numerical verification is performed on experimental data, which are raw backscatter data of the electric field collected using a microwave scattering facility at The University of North Carolina at Charlotte.

085008


A convexification-based numerical method for a coefficient inverse problem for a parabolic PDE is presented. The key element of this method is the presence of the so-called Carleman weight function in the numerical scheme. Convergence analysis ensures the global convergence of this method, as opposed to the local convergence of the conventional least squares minimization techniques. Numerical results demonstrate a good performance.

085009

For the large-scale linear discrete ill-posed problem $\mathrm{min}{\Vert}Ax-b{\Vert}$ or Ax = b with b contaminated by white noise, the Golub–Kahan bidiagonalization based LSQR method and its mathematically equivalent CGLS, the conjugate gradient (CG) method applied to ${A}^{T}Ax={A}^{T}b$, are most commonly used. They have intrinsic regularizing effects, where the iteration number k plays the role of regularization parameter. The long-standing fundamental question is: can LSQR and CGLS find two-norm filtering best possible regularized solutions? The author has given definitive answers to this question for severely and moderately ill-posed problems when the singular values of A are simple. This paper extends the results to the multiple singular value case, and studies the approximation accuracy of Krylov subspaces, the quality of low rank approximations generated by Golub–Kahan bidiagonalization, and the convergence properties of Ritz values. For the two kinds of problems, we prove that LSQR finds two-norm filtering best possible regularized solutions at semi-convergence. In particular, we consider some important and untouched issues on best, near best and general rank k approximations to A for the ill-posed problems with the singular values ${\sigma }_{k}=\mathcal{O}\left({k}^{-\alpha }\right)$ with α > 0, and the relationships between them and their nonzero singular values. Numerical experiments confirm our theory. The results on general rank k approximations and the properties of their nonzero singular values apply to several Krylov solvers, including LSQR, CGME, MINRES, MR-II, GMRES and RRGMRES.
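The semi-convergence behavior analyzed here can be observed numerically: the sketch below runs a plain CGLS iteration on a synthetic severely ill-posed problem with geometrically decaying singular values and tracks the error against the exact solution. The test matrix, spectrum, and noise level are invented for the demo; `k_best` marks the semi-convergence point where the error is smallest before noise amplification takes over.

```python
import numpy as np

def cgls(A, b, iters):
    """CGLS: the conjugate gradient method applied to the normal
    equations A^T A x = A^T b, started from x_0 = 0. Returns all
    iterates; the iteration index k acts as the regularization
    parameter."""
    x = np.zeros(A.shape[1])
    r = b.copy()
    s = A.T @ r
    p = s.copy()
    gamma = s @ s
    xs = []
    for _ in range(iters):
        q = A @ p
        alpha = gamma / (q @ q)
        x = x + alpha * p
        r = r - alpha * q
        s = A.T @ r
        gamma_new = s @ s
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
        xs.append(x.copy())
    return xs

# Severely ill-posed test problem: sigma_k decays geometrically.
rng = np.random.default_rng(1)
n = 64
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
sigma = 10.0 ** (-0.2 * np.arange(n))
A = U @ np.diag(sigma) @ V.T
x_true = V @ (1.0 / (1.0 + np.arange(n)))
b = A @ x_true + 1e-4 * rng.standard_normal(n)
iterates = cgls(A, b, 40)
errors = [np.linalg.norm(x - x_true) for x in iterates]
k_best = int(np.argmin(errors))   # error dips here, then grows again
```

Stopping at `k_best` plays the role of choosing the regularization parameter; iterating past it fits the noise in the directions of the small singular values.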

085010


We are concerned with the Calderón problem of determining the unknown conductivity of a body from the associated boundary measurement. We establish a logarithmic-type stability estimate, in terms of the Hausdorff distance, for determining the support of a convex polygonal inclusion from a single partial boundary measurement. We also derive a uniqueness result in a more general scenario where the conductivities are piecewise constants supported in a nested polygonal geometry. Our methods for establishing the stability and uniqueness results involve significant technical novelty and have strong potential to apply to other inverse boundary value problems.

085011


Acousto-optic imaging (AOI) is a hybrid imaging process. By perturbing the tissues to be reconstructed with acoustic waves, one introduces an interaction between the acoustic and optical waves, leading to a more stable reconstruction of the optical properties. The mathematical model was described in [27], with the radiative transfer equation serving as the forward model for the optical transport. In this paper we investigate the stability of the reconstruction. In particular, we are interested in how the stability depends on the Knudsen number, Kn, a quantity that measures the intensity of the scattering effect of photon particles in a medium. Our analysis shows that as Kn decreases to zero, photons scatter more frequently, and since information is lost in the process, the reconstruction becomes harder. To counter this effect, devices need to be constructed so that the laser beam is highly concentrated. We give a quantitative error bound and explicitly show that such concentration has an exponential dependence on Kn. Numerical evidence is provided to verify the proof.

085012


We consider the inverse source problem of thermo- and photoacoustic tomography, with data registered on an open surface partially surrounding the source of acoustic waves. Under the assumption of constant speed of sound we develop an explicit non-iterative reconstruction procedure that recovers the Radon transform of the sought source, up to an infinitely smooth additive error term. The source then can be found by inverting the Radon transform. Our analysis is microlocal in nature and does not provide a norm estimate on the error in the so obtained image. However, numerical simulations show that this error is quite small in practical terms. We also present an asymptotically fast implementation of this procedure for the case when the data are given on a circular arc in 2D.

085013


The authors consider a randomized solution to ill-posed operator equations in Hilbert spaces. In contrast to statistical inverse problems, where randomness appears in the noise, here randomness arises in the low-rank matrix approximation of the forward operator, which results in using a Monte Carlo method to solve the inverse problems. In particular, this approach follows the paradigm of the study N. Halko et al 2011 SIAM Rev. 53 217–288, and hence regularization is performed based on the low-rank matrix approximation. Error bounds for the mean error are obtained which take into account solution smoothness and the inherent noise level. Based on the structure of the error decomposition the authors propose a novel algorithm which guarantees (on the mean) a prescribed error tolerance. Numerical simulations confirm the theoretical findings.
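The low-rank machinery referenced here, the randomized range finder paradigm of Halko et al., can be sketched in a few lines: sample the range of A with a Gaussian test matrix, then compute an exact SVD of the small projected matrix and use its truncation as a regularized inverse. The test problem below (spectrum, noise level, truncation rank) is invented, and the power-iteration refinement is one of several options within that paradigm, not the paper's specific algorithm.

```python
import numpy as np

def randomized_svd(A, rank, oversample=10, n_power=2, rng=None):
    """Randomized truncated SVD in the spirit of Halko et al. (2011):
    sketch range(A) with a Gaussian test matrix (plus a few power
    iterations for accuracy), orthonormalize, and take the exact SVD
    of the small projected matrix Q^T A."""
    rng = rng or np.random.default_rng(0)
    n = A.shape[1]
    Y = A @ rng.standard_normal((n, rank + oversample))
    for _ in range(n_power):
        Y = A @ (A.T @ Y)             # power iterations sharpen the decay
    Q, _ = np.linalg.qr(Y)            # orthonormal basis for approx range(A)
    Uh, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    return (Q @ Uh)[:, :rank], s[:rank], Vt[:rank]

# Ill-posed toy problem regularized by truncating the randomized SVD.
rng = np.random.default_rng(2)
n = 100
sigma = 1.0 / (1 + np.arange(n)) ** 2          # polynomially decaying spectrum
U0, _ = np.linalg.qr(rng.standard_normal((n, n)))
V0, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = U0 @ np.diag(sigma) @ V0.T
x_true = V0[:, 0] + 0.5 * V0[:, 1]             # lives in the leading subspace
b = A @ x_true + 1e-6 * rng.standard_normal(n)
U, s, Vt = randomized_svd(A, rank=5)
x_reg = Vt.T @ ((U.T @ b) / s)                 # TSVD-regularized solution
```

Here the truncation rank plays the role of the regularization parameter, and the randomness enters only through the sketch of the forward operator, as in the Monte Carlo viewpoint of the abstract.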

085014
The following article is Open access


We consider an inverse obstacle scattering problem for the Helmholtz equation with obstacles that carry mixed Dirichlet and Neumann boundary conditions. We discuss far field operators that map superpositions of plane wave incident fields to far field patterns of scattered waves, and we derive monotonicity relations for the eigenvalues of suitable modifications of these operators. These monotonicity relations are then used to establish a novel characterization of the support of mixed obstacles in terms of the corresponding far field operators. We apply this characterization in reconstruction schemes for shape detection and object classification, and we present numerical results to illustrate our theoretical findings.

085015


Data-consistent inversion is a recently developed measure-theoretic framework for solving a stochastic inverse problem involving models of physical systems. The goal is to construct a probability measure on model inputs (i.e., parameters of interest) whose associated push-forward measure matches (i.e., is consistent with) a probability measure on the observable outputs of the model (i.e., quantities of interest). Previous implementations required the map from parameters of interest to quantities of interest to be deterministic. This work generalizes this framework for maps that are stochastic, i.e., contain uncertainties and variation not explainable by variations in uncertain parameters of interest. Generalizations of previous theorems of existence, uniqueness, and stability of the data-consistent solution are provided while new theoretical results address the stability of marginals on parameters of interest. A notable aspect of the algorithmic generalization is the ability to query the solution to generate independent identically distributed samples of the parameters of interest without requiring knowledge of the so-called stochastic parameters. This work therefore extends the applicability of the data-consistent inversion framework to a much wider class of problems. This includes those based on purely experimental and field data where only a subset of conditions are either controllable or can be documented between experiments while the underlying physics, measurement errors, and any additional covariates are either uncertain or not accounted for by the researcher. Numerical examples demonstrate application of this approach to systems with stochastic sources of uncertainties embedded within the modeling of a system and a numerical diagnostic is summarized that is useful for determining if a key assumption is verified among competing choices of stochastic maps.
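For the deterministic-map special case that previous implementations covered, the data-consistent update has a compact sampling form: accept prior samples with probability proportional to the ratio of the observed density to the push-forward density at the mapped value, so the accepted samples' push-forward matches the observed distribution. The sketch below uses an invented one-dimensional map and densities and a histogram estimate of the push-forward; it does not implement the paper's stochastic-map generalization.

```python
import numpy as np

def data_consistent_samples(prior_samples, qmap, observed_pdf, bins=50,
                            rng=None):
    """Rejection-sampling form of (deterministic-map) data-consistent
    inversion: accept each prior sample lam with probability
    proportional to observed_pdf(Q(lam)) / pushforward_pdf(Q(lam)),
    where the push-forward density is estimated by a histogram."""
    rng = rng or np.random.default_rng(0)
    q = qmap(prior_samples)
    hist, edges = np.histogram(q, bins=bins, density=True)
    idx = np.clip(np.digitize(q, edges) - 1, 0, bins - 1)
    push_pdf = hist[idx]
    ratio = observed_pdf(q) / np.maximum(push_pdf, 1e-12)
    accept = rng.uniform(0.0, ratio.max(), size=len(q)) < ratio
    return prior_samples[accept]

# Toy problem: uniform prior on [0, 2], hypothetical map Q(lam) = lam^2,
# observed output density N(1, 0.1^2).
rng = np.random.default_rng(3)
lam = rng.uniform(0.0, 2.0, 200_000)
obs = lambda q: np.exp(-0.5 * ((q - 1.0) / 0.1) ** 2) / (0.1 * np.sqrt(2 * np.pi))
post = data_consistent_samples(lam, lambda l: l ** 2, obs, rng=rng)
```

Pushing the accepted samples back through the map recovers the observed density, which is exactly the consistency condition that defines the solution.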

Erratum