Table of contents

Volume 36

Number 12, December 2020



Topical Review

123001
The following article is Open access


Over the last decades, the total variation (TV) has evolved into one of the most broadly used regularisation functionals for inverse problems, in particular for imaging applications. Soon after TV was first introduced as a regulariser, higher-order generalisations were proposed and studied with increasing interest, which has led to a variety of different approaches being available today. We review several of these approaches, discussing aspects ranging from functional-analytic foundations to regularisation theory for linear inverse problems in Banach space, and provide a unified framework concerning well-posedness and convergence for vanishing noise level for the respective Tikhonov regularisation. This includes general higher orders of TV, additive and infimal-convolution multi-order total variation, total generalised variation, and beyond. Further, numerical optimisation algorithms are developed and discussed that are suitable for solving the Tikhonov minimisation problem for all presented models. Particular focus is placed on covering the whole pipeline, starting at the discretisation of the problem and ending at concrete, implementable iterative procedures. A major part of this review is concerned with presenting examples and applications where higher-order TV approaches have turned out to be beneficial. These applications range from classical inverse problems in imaging, such as denoising, deconvolution, compressed sensing, optical-flow estimation and decompression, to image reconstruction in medical imaging and beyond, including magnetic resonance imaging, computed tomography, magnetic-resonance positron emission tomography, and electron tomography.
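
The review's starting point, first-order TV regularisation, can be caricatured in a few lines: the sketch below (not taken from the review; all parameter values are illustrative assumptions) denoises a 1-D signal by gradient descent on an eps-smoothed TV functional.

```python
import numpy as np

def tv_denoise_1d(y, lam=0.3, eps=0.1, step=0.05, iters=1000):
    """Minimise 0.5*||u - y||^2 + lam * sum_i sqrt((Du)_i^2 + eps^2)
    by plain gradient descent, where (Du)_i = u_{i+1} - u_i.
    The eps-smoothing makes the TV term differentiable."""
    u = y.astype(float).copy()
    for _ in range(iters):
        d = np.diff(u)                       # forward differences Du
        w = d / np.sqrt(d * d + eps * eps)   # derivative of smoothed |t|
        # -D^T w, assembled entry by entry (boundary terms included)
        div = np.concatenate(([w[0]], np.diff(w), [-w[-1]]))
        u -= step * ((u - y) - lam * div)
    return u

# noisy piecewise-constant signal: TV keeps the jump, removes the noise
rng = np.random.default_rng(0)
clean = np.concatenate([np.zeros(50), np.ones(50)])
noisy = clean + 0.1 * rng.standard_normal(100)
denoised = tv_denoise_1d(noisy)
```

The step size is chosen below the stability bound for the smoothed problem; a real solver would use one of the dedicated optimisation algorithms the review develops.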

Special Issue Papers

124001
The following article is Open access


Special Issue on Modern Challenges in Imaging

The classic regularization theory for solving inverse problems is built on the assumption that the forward operator perfectly represents the underlying physical model of the data acquisition. However, in many applications, for instance in microscopy or magnetic particle imaging, this is not the case. Another important example is dynamic inverse problems, in which changes of the searched-for quantity during data collection can be interpreted as model uncertainties. In this article, we propose a regularization strategy for linear inverse problems with an inexact forward operator based on sequential subspace optimization methods (SESOP). In order to account for local modelling errors, we suggest combining SESOP with Kaczmarz's method. We study convergence and regularization properties of the proposed method and discuss several practical realizations. The relevance and performance of our approach are evaluated on simulated data from dynamic computerized tomography with various dynamic scenarios.
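
The Kaczmarz building block that SESOP is combined with is the textbook row-action iteration; a minimal sketch (this is the classical method only, not the paper's SESOP variant, and the 2x2 system is an invented toy):

```python
import numpy as np

def kaczmarz(A, b, sweeps=100):
    """Classical Kaczmarz iteration: cycle through the rows a_i of A and
    project the iterate onto the hyperplane {x : <a_i, x> = b_i}."""
    x = np.zeros(A.shape[1])
    for _ in range(sweeps):
        for i in range(A.shape[0]):
            a = A[i]
            x += (b[i] - a @ x) / (a @ a) * a
    return x

# consistent 2x2 toy system with exact solution (1, 1)
A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([3.0, 4.0])
x = kaczmarz(A, b)
```

For consistent systems the iterates converge to a solution; the paper's contribution is how to regularize such sweeps when each row equation (subproblem) carries its own local model error.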

124002


Special Issue on Modern Challenges in Imaging

We derive a new 3D model for magnetic particle imaging (MPI) that is able to incorporate realistic magnetic fields in the reconstruction process. In real MPI scanners, the generated magnetic fields have distortions that lead to deformed magnetic low-field volumes with the shapes of ellipsoids or bananas instead of ideal field-free points (FFP) or lines (FFL), respectively. Most common model-based reconstruction schemes in MPI, however, use the idealized assumption of an ideal FFP or FFL topology and thus generate artifacts in the reconstruction. Our model-based approach is able to deal with these distortions and can generally be applied to dynamic magnetic fields that are approximately parallel to their velocity field. We show how this new 3D model can be discretized and inverted algebraically in order to recover the magnetic particle concentration. To model and describe the magnetic fields, we use decompositions of the fields into spherical harmonics. We complement the description of the new model with several simulations and experiments, exploring the effects of magnetic field distortions and reconstruction parameters on the reconstruction.

124003
The following article is Open access


Special Issue on Modern Challenges in Imaging

In this paper, we consider the problem of estimating the internal displacement field of an object which is being subjected to a deformation, from optical coherence tomography images taken before and after compression. For the estimation of the internal displacement field we propose a novel algorithm, which utilizes particular speckle information to enhance the quality of the motion estimation. We present numerical results based on both simulated and experimental data in order to demonstrate the usefulness of our approach, in particular for quantitative elastography, where the material parameters are estimated in a second step based on the internal displacement field.

124004
The following article is Open access


Special Issue on Modern Challenges in Imaging

We present a new inner–outer iterative algorithm for edge enhancement in imaging problems. At each outer iteration, we formulate a Tikhonov-regularized problem where the penalization is expressed in the two-norm and involves a regularization operator designed to improve edge resolution as the outer iterations progress, through an adaptive process. An efficient hybrid regularization method is used to project the Tikhonov-regularized problem onto approximation subspaces of increasing dimensions (inner iterations), while conveniently choosing the regularization parameter (by applying well-known techniques, such as the discrepancy principle or the L-curve criterion, to the projected problem). This procedure results in an automated algorithm for edge recovery that does not involve regularization parameter tuning by the user, nor repeated calls to sophisticated optimization algorithms, and is therefore particularly attractive from a computational point of view. A key to the success of the new algorithm is the design of the regularization operator through the use of an adaptive diagonal weighting matrix that effectively enforces smoothness only where needed. We demonstrate the value of our approach on applications in x-ray CT image reconstruction and in image deblurring, and show that it can be computationally much more attractive than other well-known strategies for edge preservation, while providing solutions of greater or equal quality.
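
The adaptive diagonal-weighting idea can be illustrated with a toy inner–outer loop (direct solves in place of the paper's hybrid Krylov projections; all parameter values and the denoising test are invented for illustration): each outer step solves a 2-norm Tikhonov problem exactly, then shrinks the weights where the current iterate has steep gradients, so smoothing is enforced only away from detected edges.

```python
import numpy as np

def edge_weighted_tikhonov(A, b, lam=2.0, outer=10, tau=5e-2):
    """Toy inner-outer scheme for min ||A x - b||^2 + lam ||W D x||^2:
    solve the quadratic problem, then shrink the diagonal of W where
    |D x| is large so edges are no longer penalised."""
    n = A.shape[1]
    D = np.diff(np.eye(n), axis=0)       # forward-difference operator
    w = np.ones(n - 1)                   # diagonal of W
    for _ in range(outer):
        L = D * w[:, None]               # W D (row-scaled D)
        x = np.linalg.solve(A.T @ A + lam * L.T @ L, A.T @ b)
        w = tau / np.maximum(np.abs(D @ x), tau)   # small weight at edges
    return x

# denoising (A = identity) of a noisy step: the edge survives smoothing
rng = np.random.default_rng(1)
n = 80
clean = np.concatenate([np.zeros(40), np.ones(40)])
b = clean + 0.1 * rng.standard_normal(n)
x = edge_weighted_tikhonov(np.eye(n), b)
```

The weight update is one plausible choice; the paper's operator design and automated parameter choice differ in the details.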

124005


Special Issue on Modern Challenges in Imaging

Calderón's method is a direct linearized reconstruction method for the inverse conductivity problem with the attribute that it can provide absolute images of both conductivity and permittivity with no need for forward modeling. In this work, an explicit relationship between Calderón's method and the D-bar method is provided, facilitating a 'higher-order' Calderón's method in which a correction term is included, derived from the relationship to the D-bar method. Furthermore, a method of including a spatial prior is provided. These advances are demonstrated on simulated data and on tank data collected with the ACE1 EIT system.

124006


Special Issue on Modern Challenges in Imaging

One important property of imaging modalities and related applications is the resolution of image reconstructions, which relies on various factors such as instrumentation or data processing. Restrictions in resolution can have manifold origins, e.g., limited resolution of the available data, the noise level in the data, and/or inexact model operators. In this work we investigate a novel data processing approach suited for inexact model operators. Here, two different information sources, high-dimensional model information and high-quality measurements at a lower resolution, are combined in a hybrid approach. The joint reconstruction of a high-resolution image and parameters of the imaging operator is obtained by minimizing a Tikhonov-type functional. The hybrid approach is analyzed for bilinear operator equations with respect to stability, convergence, and convergence rates. We further derive an algorithmic solution exploiting an algebraic reconstruction technique. The study is complemented by numerical results ranging from an academic test case to image reconstruction in magnetic particle imaging.

124007
The following article is Open access


Special Issue on Modern Challenges in Imaging

Photoacoustic computed tomography (PACT) is an emerging computed imaging modality that exploits optical contrast and ultrasonic detection principles to form images of the photoacoustically induced initial pressure distribution within tissue. The PACT reconstruction problem corresponds to a time-domain inverse source problem, where the initial pressure distribution is recovered from the measurements recorded on an aperture outside the support of the source. A major challenge in transcranial PACT brain imaging is to compensate for aberrations in the measured acoustic data that are induced by propagation of the photoacoustic wavefields through the skull. To properly account for these effects, previously proposed image reconstruction methods for transcranial PACT require knowledge of the spatial distribution of the elastic parameters of the skull. However, estimating the spatial distribution of these parameters prior to the PACT experiment remains challenging. To circumvent this issue, in this work a method to jointly reconstruct the initial pressure distribution and a low-dimensional representation of the elastic parameters of the skull is developed and investigated. The joint reconstruction (JR) problem is solved by use of a proximal optimization method that allows constraints and non-smooth regularization terms. The proposed method is evaluated by use of large-scale three-dimensional (3D) computer-simulation studies that mimic transcranial PACT experiments.

124008

Special Issue on Modern Challenges in Imaging

Let $f(x)$, $x\in {\mathbb{R}}^{2}$, be a piecewise smooth function with a jump discontinuity across a smooth surface $\mathcal{S}$. Let $f_{{\Lambda}{\epsilon}}$ denote the Lambda tomography (LT) reconstruction of $f$ from its discrete Radon data $\hat{f}\left({\alpha }_{k},{p}_{j}\right)$. The sampling rate along each variable is $\sim{\epsilon}$. First, we compute the limit ${f}_{0}\left(\check {x}\right)={\mathrm{lim}}_{{\epsilon}\to 0}\enspace {\epsilon}{f}_{{\Lambda}{\epsilon}}\left({x}_{0}+{\epsilon}\check {x}\right)$ for a generic ${x}_{0}\in \mathcal{S}$. Once the limiting function ${f}_{0}\left(\check {x}\right)$ is known (which we call the discrete transition behavior, or DTB for short), the resolution of reconstruction can be easily found. Next, we show that straight segments of $\mathcal{S}$ lead to non-local artifacts in $f_{{\Lambda}{\epsilon}}$, and that these artifacts are of the same strength as the useful singularities of $f_{{\Lambda}{\epsilon}}$. We also show that $f_{{\Lambda}{\epsilon}}(x)$ does not converge to its continuous analogue ${f}_{{\Lambda}}={\left(-{\Delta}\right)}^{1/2}f$ as ${\epsilon}\to 0$ even if $x\notin \mathcal{S}$. Results of numerical experiments presented in the paper confirm these conclusions. We also consider a class of Fourier integral operators $\mathcal{B}$ with the same canonical relation as the classical Radon transform adjoint, and a class of distributions $g\in {\mathcal{E}}^{\prime }\left({Z}_{n}\right)$, ${Z}_{n}{:=}{S}^{n-1}{\times}\mathbb{R}$, and obtain easy-to-use formulas for the DTB when $\mathcal{B}g$ is computed from discrete data $g\left({\alpha }_{\overrightarrow {k}},{p}_{j}\right)$. Exact and LT reconstructions are particular cases of this more general theory.

Papers

125001


High fidelity models used in many science and engineering applications couple multiple physical states and parameters. Inverse problems arise when a model parameter cannot be determined directly, but rather is estimated using (typically sparse and noisy) measurements of the states. The data is usually not sufficient to simultaneously inform all of the parameters. Consequently, the governing model typically contains parameters which are uncertain but must be specified for a complete model characterization necessary to invert for the parameters of interest. We refer to the combination of the additional model parameters (those which are not inverted for) and the measured data states as the 'complementary parameters'. We seek to quantify the relative importance of these complementary parameters to the solution of the inverse problem. To address this, we present a framework based on hyper-differential sensitivity analysis (HDSA). HDSA computes the derivative of the solution of an inverse problem with respect to complementary parameters. We present a mathematical framework for HDSA in large-scale PDE-constrained inverse problems and show how HDSA can be interpreted to give insight about the inverse problem. We demonstrate the effectiveness of the method on an inverse problem by estimating a permeability field, using pressure and concentration measurements, in a porous medium flow application with uncertainty in the boundary conditions, source injection, and diffusion coefficient.

125002
The following article is Open access


We consider the inverse scattering problem of reconstructing a perfect conductor from the far field pattern of a scattered time harmonic electromagnetic wave generated by one incident plane wave. In order to apply iterative regularization schemes for the severely ill-posed problem the first and the second domain derivative of the far field pattern with respect to variations of the domain are established. Characterizations of the derivatives by boundary value problems allow for an application of second degree regularization methods to the inverse problem. A numerical implementation based on integral equations is presented and its performance is illustrated by a selection of examples.

125003


We prove a Hölder-logarithmic stability estimate for the problem of finding a sufficiently regular compactly supported function v on ${\mathbb{R}}^{d}$ from its Fourier transform $\mathcal{F}v$ given on [−r, r]d. This estimate relies on a Hölder stable continuation of $\mathcal{F}v$ from [−r, r]d to a larger domain. The related reconstruction procedures are based on truncated series of Chebyshev polynomials. We also give an explicit example showing optimality of our stability estimates.

125004


Choosing an appropriate regularization term is necessary to obtain a meaningful solution to an ill-posed linear inverse problem contaminated with measurement errors or noise. The $\ell_p$ norm covers a wide range of choices for the regularization term, since its behavior critically depends on the choice of $p$ and since it can easily be combined with a suitable regularization matrix. We develop an efficient algorithm that simultaneously determines the regularization parameter and the corresponding $\ell_p$-regularized solution such that the discrepancy principle is satisfied. We project the problem onto a low-dimensional generalized Krylov subspace and compute the Newton direction for this much smaller problem. We illustrate some interesting properties of the algorithm and compare its performance with other state-of-the-art approaches using a number of numerical experiments, with a special focus on the sparsity-inducing $\ell_1$ norm and edge-preserving total variation regularization.
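
As context for the parameter-choice step, here is the generic discrepancy-principle recipe for standard-form ($p = 2$) Tikhonov regularization: bisect on $\lambda$ until the residual norm matches $\eta\delta$. This is the textbook version, not the paper's generalized Krylov algorithm, and the test problem is an invented toy.

```python
import numpy as np

def tikhonov(A, b, lam):
    """Standard-form Tikhonov solution argmin ||Ax - b||^2 + lam ||x||^2."""
    return np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ b)

def discrepancy_lambda(A, b, delta, eta=1.01, lo=1e-12, hi=1e8, iters=200):
    """Bisect on log(lambda) until ||A x_lam - b|| ~= eta * delta;
    the residual norm is monotonically increasing in lambda."""
    for _ in range(iters):
        mid = np.sqrt(lo * hi)
        if np.linalg.norm(A @ tikhonov(A, b, mid) - b) > eta * delta:
            hi = mid
        else:
            lo = mid
    return np.sqrt(lo * hi)

# mildly ill-conditioned toy problem: discrete integration operator
rng = np.random.default_rng(2)
n = 40
A = np.tril(np.ones((n, n)))
x_true = np.sin(np.linspace(0.0, np.pi, n))
noise = 0.01 * rng.standard_normal(n)
b = A @ x_true + noise
delta = np.linalg.norm(noise)            # noise level, assumed known
lam = discrepancy_lambda(A, b, delta)
```

The point of the paper is that such repeated full-size solves are avoided by applying the discrepancy principle directly to the projected problem.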

125005
The following article is Open access


We consider the identification problem which arises in single photon emission computed tomography (SPECT) of jointly reconstructing both the attenuation a and the source density f. Assuming that a takes only finitely many values and $f\in {C}_{c}^{1}\left({\mathbb{R}}^{2}\right)$, we are able to characterise the singularities appearing in the attenuated Radon transform $R_a f$, which models SPECT data. Using this characterisation we prove that both a and f can be determined in some circumstances from $R_a f$. We also propose a numerical algorithm to jointly compute a and f from $R_a f$ based on a weakly convex regularizer when a only takes values from a known finite list, and show that this algorithm performs well on some synthetic examples.

125006


In this article, we provide a modified argument for proving stability for inverse problems of determining spatially varying functions in evolution equations by Carleman estimates. Our method does not require any cut-off procedures and therefore simplifies the existing proofs. We establish the conditional stability for inverse source problems for a hyperbolic equation and a parabolic equation, and our method is widely applicable to various evolution equations.

125007


The inverse quantum scattering problem for the perturbed Bessel equation is considered. A direct and practical method for solving the problem is proposed. It allows one to reduce the inverse problem to a system of linear algebraic equations, and the potential is recovered from the first component of the solution vector of the system. The approach is based on a special-form Fourier–Jacobi series representation of the transmutation operator kernel and on the Gelfand–Levitan equation, which is used to obtain the system of linear algebraic equations. The convergence and stability of the method are proved, as well as the existence and uniqueness of the solution of the truncated system. Numerical realization of the method is discussed. Results of numerical tests are provided, revealing the remarkable accuracy and stability of the method.

125008


A direct reconstruction algorithm for anisotropic conductivities in two dimensions is proposed, based on Calderón's linearization method for the reconstruction of isotropic conductivities. To overcome the non-uniqueness of the anisotropic inverse conductivity problem, the entries of the unperturbed anisotropic tensors are assumed known a priori, and it remains to reconstruct the multiplicative scalar field. The quasi-conformal map in the plane facilitates the Calderón-based approach for anisotropic conductivities. The method is demonstrated on discontinuous radially symmetric conductivities of high and low contrast.

125009
The following article is Open access


We study linear inverse problems under the premise that the forward operator is not at hand but given indirectly through some input-output training pairs. We demonstrate that regularization by projection and variational regularization can be formulated by using the training data only and without making use of the forward operator. We study convergence and stability of the regularized solutions in view of Seidman (1980 J. Optim. Theory Appl. 30 535), who showed that regularization by projection is not convergent in general, by giving some insight on the generality of Seidman's nonconvergence example. Moreover, we show, analytically and numerically, that regularization by projection is indeed capable of learning linear operators, such as the Radon transform.

125010
The following article is Open access


Ultrasound tomography (UST) has seen a revival of interest in the past decade, especially for breast imaging, due to improvements in both ultrasound and computing hardware. In particular, three-dimensional UST, a fully tomographic method in which the medium to be imaged is surrounded by ultrasound transducers, has become feasible. This has led to renewed attention on UST image reconstruction algorithms. In this paper, a comprehensive derivation and study of a robust framework for large-scale bent-ray UST in 3D for a hemispherical detector array is presented. Two ray-tracing approaches are derived and compared. More significantly, the problem of linking the rays between emitters and receivers, which is challenging in 3D due to the high number of degrees of freedom for the trajectory of rays, is analysed both as a minimisation and as a root-finding problem. The ray-linking problem is parameterised for a convex detection surface and two robust, accurate, and efficient derivative-free ray-linking algorithms are formulated, demonstrated, and compared with a Jacobian-based benchmark approach. To stabilise these methods, novel adaptive-smoothing approaches are proposed that control the conditioning of the update matrices to ensure accurate linking. The nonlinear UST problem of estimating the sound speed was recast as a series of linearised subproblems, each solved using the above algorithms and within a steepest descent scheme. The whole imaging algorithm was demonstrated to be robust and accurate on realistic data simulated using a full-wave acoustic model and an anatomical breast phantom, and incorporating the errors due to time-of-flight (TOF) picking that would be present with measured data. This method can be used to provide low-artefact, quantitatively accurate 3D sound speed maps.
In addition to being useful in their own right, such 3D sound speed maps can be used to initialise full-wave inversion methods, or as an input to photoacoustic tomography reconstructions.

125011


For Bayesian inverse problems with input-to-response maps given by well-posed partial differential equations and subject to uncertain parametric or function space input, we establish (under rather weak conditions on the 'forward', input-to-response maps) the parametric holomorphy of the data-to-QoI map relating observation data δ to the Bayesian estimate for an unknown quantity of interest (QoI). We prove exponential expression rate bounds for this data-to-QoI map by deep neural networks with rectified linear unit activation function, which are uniform with respect to the data δ taking values in a compact subset of ${\mathbb{R}}^{K}$. Similar convergence rates are verified for polynomial and rational approximations of the data-to-QoI map. We discuss the extension to other activation functions, and to mere Lipschitz continuity of the data-to-QoI map.

125012


The non-convex $\alpha {\Vert}\cdot {{\Vert}}_{{\ell }_{1}}-\beta {\Vert}\cdot {{\Vert}}_{{\ell }_{2}}\enspace \left(\alpha {\geqslant}\beta {\geqslant}0\right)$ regularization is a new approach for sparse recovery. A minimizer of the $\alpha {\Vert}\cdot {{\Vert}}_{{\ell }_{1}}-\beta {\Vert}\cdot {{\Vert}}_{{\ell }_{2}}$ regularized function can be computed by applying the ST-(αℓ1βℓ2) algorithm which is similar to the classical iterative soft thresholding algorithm (ISTA). It is known that ISTA converges quite slowly, and a faster alternative to ISTA is the projected gradient (PG) method. However, the conventional PG method is limited to solve problems with the classical 1 sparsity regularization. In this paper, we present two accelerated alternatives to the ST-(αℓ1βℓ2) algorithm by extending the PG method to the non-convex $\alpha {\Vert}\cdot {{\Vert}}_{{\ell }_{1}}-\beta {\Vert}\cdot {{\Vert}}_{{\ell }_{2}}$ sparsity regularization. Moreover, we discuss a strategy to determine the radius R of the 1-ball constraint by Morozov's discrepancy principle. Numerical results are reported to illustrate the efficiency of the proposed approach.
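
For reference, the classical ISTA that the thresholding algorithm above resembles, in its plain $\ell_1$ form (a generic sketch, not the paper's ST-($\alpha\ell_1-\beta\ell_2$) method; the toy data are invented):

```python
import numpy as np

def ista(A, b, lam, iters=200):
    """ISTA for min 0.5*||Ax - b||^2 + lam*||x||_1: a gradient step on the
    quadratic part followed by soft thresholding at level lam/L."""
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of A^T(Ax - b)
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x - A.T @ (A @ x - b) / L    # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # shrink
    return x

# with A = I the method reduces to exact soft thresholding of b
b = np.array([3.0, -0.5, 1.0])
x = ista(np.eye(3), b, lam=1.0)          # -> [2.0, 0.0, 0.0]
```

The ST-($\alpha\ell_1-\beta\ell_2$) algorithm replaces the soft-thresholding step by the proximal step of the non-convex penalty; the paper's PG variants instead project onto an $\ell_1$-ball of radius chosen by the discrepancy principle.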

125013


In this article we propose and study the properties of three distinct algorithms for obtaining stable approximate solutions to systems of ill-posed equations modeled by linear operators acting between Hilbert spaces. Based on Tikhonov-like methods with uniformly convex penalty terms, we develop new versions, with inexact minimization, of both one-step and iterated Tikhonov methods. For the one-step methods, we propose two distinct algorithms, one based on an a priori and one on an a posteriori choice of the regularization parameter. Convergence and stability properties are provided, as well as optimal convergence rates (under appropriate source conditions). The third algorithm is based on a variant of the iterated Tikhonov method with an a posteriori choice of the sequence of penalization parameters. For this algorithm, we prove stability for noisy data and the regularization property.
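
To fix ideas, the exact (rather than inexact) stationary iterated Tikhonov iteration for a single linear equation with the classical quadratic penalty reads $x_{k+1} = \arg\min \|Ax-b\|^2 + \alpha\|x-x_k\|^2$; a minimal sketch with an invented toy system (the paper works with general uniformly convex penalties and inexact minimization):

```python
import numpy as np

def iterated_tikhonov(A, b, alpha=1.0, iters=200):
    """Stationary iterated Tikhonov: x_{k+1} minimises
    ||A x - b||^2 + alpha ||x - x_k||^2, i.e. solves
    (A^T A + alpha I) x = A^T b + alpha x_k.  For consistent data the
    iterates converge to the exact solution as k grows."""
    n = A.shape[1]
    M = A.T @ A + alpha * np.eye(n)
    x = np.zeros(n)
    for _ in range(iters):
        x = np.linalg.solve(M, A.T @ b + alpha * x)
    return x

# consistent toy system: the iteration homes in on the exact solution
A = np.array([[1.0, 1.0], [0.0, 1.0]])
x_exact = np.array([2.0, -1.0])
x = iterated_tikhonov(A, A @ x_exact)
```

For noisy data the iteration must be stopped early (or the penalization sequence chosen a posteriori), which is precisely where the stability analysis of the paper enters.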

125014
The following article is Open access


We study variational regularisation methods for inverse problems with imperfect forward operators whose errors can be modelled by order intervals in a partial order of a Banach lattice. We carry out analysis with respect to existence and convex duality for general data fidelity terms and regularisation functionals. Both for a priori and a posteriori parameter choice rules, we obtain convergence rates of the regularised solutions in terms of Bregman distances. Our results apply to fidelity terms such as Wasserstein distances, φ-divergences, norms, as well as sums and infimal convolutions of those.

125015


We study a constrained optimization problem of stable parameter estimation given some noisy (and possibly incomplete) measurements of the state observation operator. In order to find a solution to this problem, we introduce a hybrid regularized predictor–corrector scheme that builds upon both the all-at-once formulation, recently developed by B. Kaltenbacher and her co-authors, and the so-called traditional route, pioneered by A. Bakushinsky. Similar to the all-at-once approach, our proposed algorithm does not require solving the constraint equation numerically at every step of the iterative process. At the same time, the predictor–corrector framework of the new method avoids the difficulty of dealing with the large solution spaces resulting from the all-at-once formulation, which inevitably leads to oversized Jacobian and Hessian approximations. Therefore our predictor–corrector algorithm (PCA) has the potential to save time and storage, which is critical when multiple runs of the iterative scheme are carried out for uncertainty quantification. To assess the numerical efficiency of the novel PCA, two parameter estimation inverse problems in epidemiology are considered. All experiments are carried out with real data on the COVID-19 pandemic in the Netherlands and Spain.

125016


In this article, for a time-fractional diffusion-wave equation ${\partial }_{t}^{\alpha }u\left(x,t\right)=-Au\left(x,t\right)$, 0 < t < T, with fractional order α ∈ (1, 2), we consider the backward problem in time: determine u(⋅, t), 0 < t < T, from u(⋅, T) and ∂tu(⋅, T). We prove that there exists a countably infinite set Λ ⊂ (0, ∞) with a unique accumulation point 0 such that the backward problem is well-posed for T ∉ Λ.

125017


Non-Lipschitz regularization, which can preserve sharp edges in the restored image, has recently attracted much attention in image restoration for additive noise removal. In this paper, we consider a class of minimization problems with gradient-compounded non-Lipschitz regularization applied to non-additive noise removal, with Poisson and multiplicative noise as examples. The existence of a solution of the general model is discussed. We also extend the recent iterative support shrinkage strategy to give an algorithm to minimize it, where the subproblem at each iteration is allowed to be solved inexactly. Moreover, this paper is the first to give the subdifferential of the gradient-compounded non-Lipschitz regularization term, based on which we are able to establish the global convergence of the iterative sequence to a stationary point of the original objective function. This is, to the best of our knowledge, stronger than all the convergence results for gradient-compounded non-Lipschitz minimization problems in the currently published literature. Numerical experiments show that our proposed method performs well.