Adaptive reconstruction for electrical impedance tomography with a piecewise constant conductivity

In this work we propose and analyze a numerical method for electrical impedance tomography to recover a piecewise constant conductivity from boundary voltage measurements. The method is based on standard Tikhonov regularization with a Modica–Mortola penalty functional, combined with adaptive mesh refinement driven by suitable a posteriori error estimators of residual type (involving the state, the adjoint and the variational inequality in the necessary optimality condition) and a separate marking strategy. We prove the convergence of the adaptive algorithm in the following sense: the sequence of discrete solutions contains a subsequence converging to a solution of the continuous necessary optimality system. Several numerical examples are presented to illustrate the convergence behavior of the algorithm.


Introduction
Electrical impedance tomography (EIT) aims at recovering the electrical conductivity distribution of an object from voltage measurements on the boundary. It has attracted much interest in medical imaging, geophysical prospecting, nondestructive evaluation and pneumatic oil pipeline conveying, etc. A large number of reconstruction algorithms have been proposed; see, e.g., [37,36,26,35,31,30,24,15,21,40,39,18,3,34,33,57,28,54,51] for a rather incomplete list. One prominent idea underpinning many imaging algorithms is regularization, especially Tikhonov regularization [29]. In practice, these algorithms are customarily implemented using the continuous piecewise linear finite element method (FEM) on quasi-uniform meshes, due to its flexibility in handling spatially variable coefficients and general domain geometries. The convergence analysis of finite element approximations was carried out in [22,47,27].
In several practical applications, the physical process is accurately described by the complete electrode model (CEM) [14,50]. It employs nonstandard boundary conditions to capture characteristics of the experiment. In particular, around the electrode edges, the boundary condition changes from the Neumann to the Robin type, which, according to classical elliptic regularity theory [23], induces weak solution singularities around the electrode edges; see, e.g., [45] for an early study. Further, the low regularity of the sought-for conductivity distribution, especially that enforced by a nonsmooth penalty, e.g., total variation, can also induce weak interior singularities of the solution. Thus, a (quasi-)uniform triangulation of the domain can be inefficient for resolving these singularities, and the discretization errors around electrode edges and internal interfaces can potentially compromise the reconstruction accuracy. These observations motivate the use of an adaptive strategy to achieve the desired accuracy and to enhance the overall computational efficiency.
For direct problems, the mathematical theory of adaptive FEM (AFEM), including a posteriori error estimation, convergence and computational complexity, has advanced greatly [1,44,53,12]. A common AFEM consists of successive loops of the following form:

    SOLVE → ESTIMATE → MARK → REFINE.    (1.1)

The module ESTIMATE employs the given problem data and the computed solutions to provide computable quantities for the local errors, and it distinguishes different adaptive algorithms.
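The loop (1.1) can be sketched schematically as follows. This Python skeleton is our own illustration only: the function names solve_fn, estimate_fn, mark_fn and refine_fn are placeholders standing in for the concrete modules of an adaptive algorithm, not the paper's implementation.

```python
def adaptive_loop(mesh, solve_fn, estimate_fn, mark_fn, refine_fn,
                  max_refinements=15):
    """Generic SOLVE -> ESTIMATE -> MARK -> REFINE loop, cf. (1.1)."""
    history = []
    for k in range(max_refinements):
        solution = solve_fn(mesh)                 # SOLVE: discrete problem on mesh
        indicators = estimate_fn(mesh, solution)  # ESTIMATE: per-element indicators
        marked = mark_fn(mesh, indicators)        # MARK: select elements to refine
        history.append((mesh, solution, indicators))
        mesh = refine_fn(mesh, marked)            # REFINE: bisect marked elements
    return history
```

Each concrete adaptive algorithm is obtained by plugging in its own four modules; the ESTIMATE module is where algorithms differ most.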
In this work, we develop an adaptive EIT reconstruction algorithm for a piecewise constant conductivity. In practice, the piecewise constant nature is commonly enforced by a total variation penalty. However, total variation is challenging for AFEM treatment (see, e.g., [5] for image denoising). Thus, we take an indirect approach based on a Modica–Mortola type functional

    F_ε(σ) = ∫_Ω ( ε|∇σ|² + ε⁻¹ W(σ) ) dx,

where the constant ε > 0 is small and W : R → R is the double-well potential, i.e.,

    W(s) = (s − c₀)²(s − c₁)²,    (1.2)

with c₀, c₁ > 0 being the two known values that the conductivity σ can take. The functional F_ε Γ-converges to the total variation semi-norm [42,43,2]. The corresponding regularized least-squares formulation reads

    min_{σ∈A} { J_ε(σ) = ½ |U(σ) − U^δ|² + α F_ε(σ) },    (1.3)

where α > 0 is a regularization parameter; see Section 2 for further details. In this work, we propose a posteriori error estimators and an adaptive reconstruction algorithm of the form (1.1) for (1.3), based on a separate marking using three error indicators in the module MARK; see Algorithm 3.1. Further, we give a convergence analysis of the algorithm, in the sense that the sequence of states, adjoints and conductivities generated by the adaptive algorithm contains a subsequence converging to a solution of the necessary optimality condition. The technical proof consists of two steps: Step 1 shows the subsequential convergence to a solution of a limiting problem, and Step 2 proves that the solution of the limiting problem satisfies the necessary optimality condition. The main technical challenges in the convergence analysis are the nonlinearity of the forward model, the nonconvexity of the double-well potential and the proper treatment of the variational inequality. The latter two are overcome by the pointwise convergence of discrete minimizers together with Lebesgue's dominated convergence theorem, and by AFEM analysis techniques for elliptic obstacle problems, respectively. The adaptive algorithm and its convergence analysis are the main contributions of this work. Last, we situate this work in the existing literature. In recent years, several
adaptive techniques, including AFEM, have been applied to the numerical resolution of inverse problems. In a series of works [6,7,8,9], Beilina et al studied the AFEM in a dual weighted residual framework for parameter identification. Feng et al [20] proposed a residual-based estimator for the state, adjoint and control by assuming convexity of the cost functional and high regularity of the control. Li et al [38] derived a posteriori error estimators for recovering the flux and proved their reliability; see [55] for a plain convergence analysis. Clason et al [17] studied functional a posteriori estimators for convex regularized formulations. Recently, Jin et al [32] proposed a first AFEM for the Tikhonov functional for EIT with an H¹(Ω) penalty, and also provided a convergence analysis. This work extends the approach in [32] to the case of a piecewise constant conductivity. There are a number of major differences between this work and [32]. First, the H¹(Ω) penalty in [32] facilitates deriving the a posteriori estimator on the conductivity σ by completing the squares and a suitable approximation argument, which is not directly available for the Modica–Mortola functional F_ε. Second, we develop a sharper error indicator associated with the crucial variational inequality than that in [32], by means of a novel constraint preserving interpolation operator [13]; see the proof of Theorem 5.5, which represents the main technical novelty of this work. Third, Algorithm 3.1 employs a separate marking for the estimators, instead of the collective marking in [32], which automatically takes care of the different scalings of the estimators.
The rest of this paper is organized as follows. In Section 2, we introduce the complete electrode model and the regularized least-squares formulation. In Section 3, we give the AFEM algorithm. In Section 4, we present extensive numerical results to illustrate its convergence and efficiency. In Section 5, we present the lengthy technical convergence analysis. Throughout, ⟨·,·⟩ and (·,·) denote the inner product on the Euclidean space and on (L²(Ω))^d, respectively, |·| denotes the Euclidean norm, and we occasionally abuse ⟨·,·⟩ for the duality pairing between the Hilbert space H and its dual space. The superscript t denotes the transpose of a vector. The notation c denotes a generic constant which may differ at each occurrence, but is always independent of the mesh size and other quantities of interest.

Regularized approach
This part describes the regularized approach for recovering piecewise constant conductivities.

Complete electrode model (CEM)
Let Ω be an open bounded domain in R^d (d = 2, 3) with a polyhedral boundary ∂Ω. We denote the set of electrodes by {e_l}_{l=1}^L, which are line segments/planar surfaces on ∂Ω satisfying ē_i ∩ ē_k = ∅ if i ≠ k. The applied current on electrode e_l is denoted by I_l, and the current vector I = (I₁, …, I_L)^t satisfies Σ_{l=1}^L I_l = 0 by the law of charge conservation. Then the CEM reads: given the conductivity σ, positive contact impedances {z_l}_{l=1}^L and an input current I, find the potential u and the electrode voltages U = (U₁, …, U_L)^t such that

    −∇·(σ∇u) = 0   in Ω,
    u + z_l σ ∂u/∂n = U_l   on e_l, l = 1, …, L,
    ∫_{e_l} σ ∂u/∂n ds = I_l,   l = 1, …, L,
    σ ∂u/∂n = 0   on ∂Ω \ ∪_{l=1}^L ē_l.    (2.1)

The inverse problem is to recover the conductivity σ from a noisy version U^δ of the electrode voltage U(σ†) (for the exact conductivity σ†) corresponding to one or multiple input currents.
Below the conductivity σ is assumed to be piecewise constant, i.e., to belong to the admissible set

    A = { σ = c₀ + (c₁ − c₀)χ_{Ω₁} },

where the constants c₁ > c₀ > 0 are known, Ω₁ ⊂ Ω is an unknown open set with a Lipschitz boundary and χ_{Ω₁} denotes its characteristic function. We denote by H the space H¹(Ω) ⊗ R^L with its norm given by

    ‖(u, U)‖²_H = ‖u‖²_{H¹(Ω)} + |U|².

A convenient equivalent norm on the space H is given below.
Lemma 2.1. On the space H, the norm ‖·‖_H is equivalent to the norm ‖·‖_{H,*} defined by

    ‖(u, U)‖²_{H,*} = ‖∇u‖²_{L²(Ω)} + Σ_{l=1}^L ‖u − U_l‖²_{L²(e_l)}.

The weak formulation of the model (2.1) reads [50]: find (u, U) ∈ H such that

    a(σ, (u, U), (v, V)) = ⟨I, V⟩   for all (v, V) ∈ H,    (2.2)

where the trilinear form a(σ, ·, ·) is defined by

    a(σ, (u, U), (v, V)) = (σ∇u, ∇v) + Σ_{l=1}^L z_l^{−1} (u − U_l, v − V_l)_{L²(e_l)},

with (·,·)_{L²(e_l)} denoting the L²(e_l) inner product. For any σ ∈ A, {z_l}_{l=1}^L and I ∈ Σ_L, the existence and uniqueness of a solution (u, U) ∈ H to (2.2) follows from Lemma 2.1 and the Lax–Milgram theorem.

Regularized reconstruction
For numerical reconstruction with a piecewise constant conductivity, the total variation (TV) penalty is popular. The conductivity σ is assumed to be in the space BV(Ω) of functions of bounded variation [4,19], i.e.,

    BV(Ω) = { v ∈ L¹(Ω) : |v|_{TV(Ω)} < ∞ },

where |v|_{TV(Ω)} denotes the TV semi-norm. Below we discuss only one dataset, since the case of multiple datasets is similar. Then Tikhonov regularization leads to the following minimization problem:

    min_{σ∈A} { ½ |U(σ) − U^δ|² + α |σ|_{TV(Ω)} }.    (2.3)

The scalar α > 0 is known as a regularization parameter. Problem (2.3) has at least one minimizer [46,22]. Since σ is piecewise constant, by the Lebesgue decomposition theorem [4], the TV term reduces to |σ|_{TV(Ω)} = ∫_{S_σ} |[σ]| dH^{d−1}, where S_σ is the jump set, [σ] = σ⁺ − σ⁻ denotes the jump across S_σ and H^{d−1} refers to the (d − 1)-dimensional Hausdorff measure. The numerical approximation of (2.3) requires simultaneously treating two sets of different Hausdorff dimensions (i.e., Ω and S_σ), which is very challenging. Thus, we replace the TV term |σ|_{TV(Ω)} in (2.3) by a Modica–Mortola type functional [43]

    F_ε(σ) = ∫_Ω ( ε|∇σ|² + ε⁻¹ W(σ) ) dx,

where ε is a small positive constant controlling the width of the transition interface, and W : R → R is the double-well potential given in (1.2). The functional F_ε was first proposed to model the phase transition of two immiscible fluids in [11]. It is connected with the TV semi-norm as follows [43,42,2]; see [10] for an introduction to Γ-convergence.
Then F_ε Γ-converges, up to the multiplicative constant c_W, to the TV semi-norm in L¹(Ω) as ε → 0⁺. Moreover, let {ε_n}_{n≥1} and {v_n}_{n≥1} be given sequences such that ε_n → 0⁺ and sup_n F_{ε_n}(v_n) < ∞; then {v_n}_{n≥1} is precompact in L¹(Ω). The proposed EIT reconstruction method reads

    min_{σ∈A} { J_ε(σ) = ½ |U(σ) − U^δ|² + ᾱ F_ε(σ) },    (2.4)

where ᾱ = α/c_W, and the admissible set A is defined as

    A = { σ ∈ H¹(Ω) : c₀ ≤ σ ≤ c₁ a.e. in Ω }.

Now we recall a useful continuity result for the forward map [22, Lemma 2.2], which gives the continuity of the fidelity term in the functional J_ε. See also [31,18] for related continuity results.
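The Γ-convergence statement can be checked numerically in one dimension. The sketch below is our own illustration and assumes the common double-well choice W(s) = (s − c₀)²(s − c₁)², for which the optimal transition profile between the wells is a logistic curve satisfying the equipartition ε|v′|² = W(v)/ε, and the energy of a single transition approaches 2∫_{c₀}^{c₁}√W(s) ds = (c₁ − c₀)³/3 as ε → 0.

```python
import numpy as np

def modica_mortola_energy(x, v, eps, c0, c1):
    """F_eps(v) = int eps*|v'|^2 + W(v)/eps dx on a 1D grid, with the
    assumed double-well W(s) = (s - c0)^2 * (s - c1)^2 (trapezoid rule)."""
    dv = np.gradient(v, x)
    integrand = eps * dv ** 2 + (v - c0) ** 2 * (v - c1) ** 2 / eps
    return float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(x)))

# Optimal 1D transition profile between the two wells (a logistic curve),
# for which eps*|v'|^2 = W(v)/eps pointwise (equipartition of the energy).
c0, c1, eps = 1.0, 2.0, 1e-2
x = np.linspace(-1.0, 1.0, 200001)
v = c0 + (c1 - c0) / (1.0 + np.exp(-(c1 - c0) * x / eps))
energy = modica_mortola_energy(x, v, eps, c0, c1)
# Interfacial energy of one jump: 2*int_{c0}^{c1} sqrt(W(s)) ds = (c1-c0)^3/3
```

For c₀ = 1, c₁ = 2 the limiting interfacial energy is 1/3, which the computed value approaches already at ε = 1e-2.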
To obtain the necessary optimality system of (2.4), we use the standard adjoint technique. The adjoint problem for (2.2) reads: find (p, P) ∈ H such that

    a(σ, (p, P), (v, V)) = ⟨U(σ) − U^δ, V⟩   for all (v, V) ∈ H.

By straightforward computation, the Gâteaux derivative J′_ε(σ)[μ] of the functional J_ε at σ ∈ A in the direction μ ∈ H¹(Ω) is given by

    J′_ε(σ)[μ] = −(μ∇u, ∇p) + ᾱ ( 2ε(∇σ, ∇μ) + ε⁻¹(W′(σ), μ) ),  with ᾱ = α/c_W.

Then a minimizer σ* of problem (2.4) and the respective state (u*, U*) and adjoint (p*, P*) satisfy the following necessary optimality system:

    a(σ*, (u*, U*), (v, V)) = ⟨I, V⟩   for all (v, V) ∈ H,
    a(σ*, (p*, P*), (v, V)) = ⟨U(σ*) − U^δ, V⟩   for all (v, V) ∈ H,
    J′_ε(σ*)[μ − σ*] ≥ 0   for all μ ∈ A,    (2.8)

where the variational inequality in the last line is due to the box constraint in the admissible set A. The optimality system (2.8) forms the basis of the adaptive algorithm and its convergence analysis.
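The variational inequality in (2.8) is equivalent to a projection characterization: σ* is a fixed point of the projected gradient map σ ↦ P_A(σ − γ J′_ε(σ)[·]) for any γ > 0, where P_A is the pointwise projection onto [c₀, c₁]. The toy sketch below (our own illustration, with a simple quadratic functional standing in for J_ε) checks this characterization numerically.

```python
import numpy as np

def project(sigma, c0, c1):
    """Pointwise projection onto the box constraint [c0, c1]."""
    return np.clip(sigma, c0, c1)

def projected_gradient(grad, sigma0, c0, c1, step=0.1, iters=500):
    """Projected gradient descent for min J over {c0 <= sigma <= c1}."""
    sigma = sigma0.copy()
    for _ in range(iters):
        sigma = project(sigma - step * grad(sigma), c0, c1)
    return sigma

# Toy functional J(sigma) = 0.5*||sigma - target||^2 with target partly outside
# the box; its gradient sigma - target is a stand-in for J_eps'(sigma).
target = np.array([0.5, 1.5, 2.5])
grad = lambda s: s - target
c0, c1 = 1.0, 2.0
sigma_star = projected_gradient(grad, np.full(3, 1.5), c0, c1)

# At the minimizer, the variational inequality grad(sigma*).(mu - sigma*) >= 0
# holds for every feasible mu; check it on random samples from the box.
rng = np.random.default_rng(0)
for _ in range(100):
    mu = rng.uniform(c0, c1, size=3)
    assert grad(sigma_star) @ (mu - sigma_star) >= -1e-8
```

The active components of sigma_star sit on the box boundary with an outward-pointing gradient, which is exactly what the variational inequality encodes.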

Adaptive algorithm
Now we develop an adaptive FEM for problem (2.4). Let T₀ be a shape regular triangulation of Ω into simplicial elements, each intersecting at most one electrode surface e_l, and let T be the set of all possible conforming triangulations of Ω obtained from T₀ by the successive use of bisection. Then T is uniformly shape regular, i.e., the shape-regularity of any mesh T ∈ T is bounded by a constant depending only on T₀ [44,52]. Over any T ∈ T, we define a continuous piecewise linear finite element space

    V_T = { v ∈ C(Ω̄) : v|_T ∈ P₁(T) for all T ∈ T },

where P₁(T) consists of all linear functions on T. The space V_T is used for approximating the state u and the adjoint p, and the discrete admissible set A_T for the conductivity is given by

    A_T = A ∩ V_T = { σ ∈ V_T : c₀ ≤ σ ≤ c₁ in Ω }.

Then we approximate problem (2.4) by minimizing the discrete analogue J_{ε,T} of the functional J_ε over A_T, with the state computed from the discrete version of (2.2) over H_T := V_T ⊗ R^L. (3.2) Then, similar to Theorem 2.2, there exists at least one minimizer σ*_T of (3.2), and the minimizer σ*_T with the related state (u*_T, U*_T) ∈ H_T and adjoint (p*_T, P*_T) ∈ H_T satisfies a discrete analogue (3.3) of the optimality system (2.8). Further, (u*_T, U*_T) and (p*_T, P*_T) depend continuously on the problem data, with the relevant constant c independent of α and ε.
To describe the error estimators, we first recall some useful notation. The collection of all faces (respectively all interior faces) in T ∈ T is denoted by F_T (respectively F^i_T), and its restriction to the electrode ē_l and to ∂Ω \ ∪_{l=1}^L e_l by F^l_T and F^c_T, respectively. A face/edge F has a fixed unit normal vector n_F in Ω, with n_F = n on ∂Ω. The diameters of any T ∈ T and F ∈ F_T are denoted by h_T and h_F, respectively. For the solution (σ*_T, (u*_T, U*_T), (p*_T, P*_T)) of problem (3.3), we define two element residuals for each element T ∈ T and two face residuals for each face F ∈ F_T, where [·] denotes the jump across an interior face F. Then for any element T ∈ T, we define three error estimators η_{T,1}(σ*_T, u*_T, U*_T, T), η_{T,2}(σ*_T, p*_T, P*_T, T) and η_{T,3}(σ*_T, u*_T, p*_T, T). The first estimator differs from that of the direct problem in (2.8) by replacing the conductivity σ* with σ*_T, and is thus a perturbation of the latter case. The perturbation is vanishingly small in the event of the conjectured (subsequential) convergence σ*_T → σ*. The estimator η_{T,2}(σ*_T, p*_T, P*_T, T) admits a similar interpretation. These two estimators are essentially identical with those for the H¹(Ω) penalty in [32], and we refer to [32, Section 3.3] for a detailed heuristic derivation. The estimator η_{T,3}(σ*_T, u*_T, p*_T, T) is related to the variational inequality in the necessary optimality condition (2.8), and roughly provides a quantitative measure of how well it is satisfied. This estimator (including the exponent q) is motivated by the convergence analysis; see the proof of Theorem 5.5 and Remark 5.2 below. It represents the main new ingredient for problem (2.4), and differs from that for the H¹(Ω) penalty in [32].
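To make the notion of a residual-type indicator concrete, the following sketch computes the classical element indicator in a simplified 1D Poisson setting (−u″ = f with P1 elements). This is our own analogy, not the CEM estimators of this section: the indicator combines an interior residual with flux jumps at element boundaries, in the same spirit as the element and face residuals above.

```python
import numpy as np

def residual_indicators(x, u, f):
    """Per-element residual indicators for -u'' = f with P1 elements in 1D:
    eta_T^2 = h_T^2 * ||f||_{L2(T)}^2 + h_T * (flux jumps at element ends)^2 / 2.
    x: node coordinates, u: nodal values of the FE solution, f: callable."""
    h = np.diff(x)
    slopes = np.diff(u) / h
    # flux jumps [u_h'] at interior nodes (no contribution at boundary nodes)
    jumps = np.zeros(len(x))
    jumps[1:-1] = slopes[1:] - slopes[:-1]
    eta2 = np.empty(len(h))
    for i in range(len(h)):
        mid = 0.5 * (x[i] + x[i + 1])
        elem_res = h[i] ** 2 * f(mid) ** 2 * h[i]   # midpoint rule for ||f||^2
        jump_res = 0.5 * h[i] * (jumps[i] ** 2 + jumps[i + 1] ** 2)
        eta2[i] = elem_res + jump_res
    return eta2
```

For a globally linear function with f = 0 all indicators vanish, while a kink in u_h produces large indicators exactly on the neighboring elements, which is what drives refinement toward singularities.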
Remark 3.1. The estimator η_{k,3} improves that in [32] in terms of the exponents on h_T and h_F. This improvement is achieved by a novel constraint preserving interpolation operator defined in (5.13) below.
Now we can formulate an adaptive algorithm for (2.4); see Algorithm 3.1. Below we indicate the dependence on the mesh T_k by the subscript k, e.g., J_{ε,k} for J_{ε,T_k}.

Algorithm 3.1 AFEM for EIT with a piecewise constant conductivity.
1: Specify an initial mesh T₀, and set the maximum number K of refinements.
2: for k = 0, …, K − 1 do
3:   (SOLVE) Solve the discrete optimization problem (3.2) over T_k for (σ*_k, (u*_k, U*_k)), and the discrete adjoint problem for (p*_k, P*_k).
4:   (ESTIMATE) Compute the error estimators η²_{k,1}, η²_{k,2} and η²_{k,3} over the mesh T_k.
5:   (MARK) Mark three subsets M_{k,1}, M_{k,2}, M_{k,3} ⊂ T_k, each containing at least one element with the largest error indicator, according to (3.5), and set M_k = M_{k,1} ∪ M_{k,2} ∪ M_{k,3}.
6:   (REFINE) Refine each element T in M_k by bisection to get T_{k+1}.
7:   Check the stopping criterion.
8: end for

The MARK module selects a collection of elements in the mesh T_k. The condition (3.5) covers several commonly used marking strategies, e.g., maximum, equidistribution, modified equidistribution and Dörfler's strategy [49, p. 962]. Compared with the collective marking in the AFEM of [32], Algorithm 3.1 employs a separate marking to select more elements for refinement in each loop, which leads to fewer iterations of the adaptive process. The error estimators may also be used for coarsening, which is relevant if the recovered inclusions change dramatically during the iteration. However, the convergence analysis below does not carry over to coarsening, and so it is not further explored.
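As a concrete instance of a marking strategy covered by (3.5), Dörfler's (bulk-chasing) criterion can be sketched as follows. This minimal Python illustration is our own; in a separate marking, it would be applied once per estimator and the three marked sets then united.

```python
import numpy as np

def dorfler_mark(eta2, theta=0.7):
    """Select a minimal set M with sum_{T in M} eta_T^2 >= theta * sum_T eta_T^2
    (Dörfler marking). eta2: array of squared per-element indicators."""
    order = np.argsort(eta2)[::-1]            # indicators in decreasing order
    cumulative = np.cumsum(eta2[order])
    n = int(np.searchsorted(cumulative, theta * eta2.sum())) + 1
    return set(order[:n].tolist())
```

Sorting in decreasing order guarantees minimality of the marked set; applying the criterion separately per estimator avoids one estimator's scale drowning out the others, which is the point of the separate marking above.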
Last, we give the main theoretical result: for each fixed ε > 0, the sequence of discrete solutions {(σ*_k, (u*_k, U*_k), (p*_k, P*_k))}_{k≥0} generated by Algorithm 3.1 contains a subsequence converging to a solution of the necessary optimality system (2.8).

Numerical experiments and discussions
Now we present numerical results to illustrate Algorithm 3.1 on a square domain Ω = (−1, 1)². There are sixteen electrodes {e_l}_{l=1}^L (with L = 16) evenly distributed along ∂Ω, each of length 1/4. The contact impedances {z_l}_{l=1}^L are all set to unity. We take ten sinusoidal input currents and, for each voltage U(σ†) ∈ R^L, generate the noisy data U^δ by adding componentwise Gaussian noise scaled by the noise level, where δ is the (relative) noise level and {ξ_l}_{l=1}^L follow the standard normal distribution. Note that δ = 1e-2 refers to a relatively high noise level for EIT. The exact data U(σ†) is computed on a much finer uniform mesh, to avoid the most obvious form of "inverse crime".
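The noisy data generation can be sketched as follows. The exact scaling convention is our assumption (additive Gaussian noise scaled by the maximal exact voltage), a common choice for synthetic EIT data, not a detail confirmed by the text.

```python
import numpy as np

def add_noise(U_exact, delta, rng=None):
    """Return a noisy voltage vector U^delta from the exact electrode voltages.
    delta is the relative noise level; xi are i.i.d. standard normal samples.
    The scaling by max |U_l| is an assumed convention, not from the paper."""
    rng = np.random.default_rng(rng)
    xi = rng.standard_normal(U_exact.shape)
    return U_exact + delta * np.abs(U_exact).max() * xi
```

With delta = 0 the exact data is returned unchanged, and delta = 1e-2 corresponds to the "relatively high" noise level mentioned above.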
In the experiments, we fix K (the number of refinements) at 15, q (the exponent in η^q_{k,3}) at 2, and ε (in the functional F_ε) at 1e-2. The marking strategy (3.5) in the module MARK selects a minimal refinement set with a threshold θ = 0.7. The refinement is performed with one popular refinement strategy, i.e., newest vertex bisection [41]. Specifically, it connects the midpoint x_T of a reference edge, taken as the newest vertex, with the vertex opposite to that edge.

Example 1. (i) The true conductivity σ† is given by σ₀(x) + χ_{B₁}(x) + χ_{B₂}(x), where B₁ and B₂ denote two circles centered at (0, 0.5) and (0, −0.5), respectively, both with a radius 0.3.
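Newest vertex bisection of a single triangle can be sketched as follows. This is a minimal illustration of the refinement rule with our own data layout (a triangle is a vertex triple whose first entry is the newest vertex, and the reference edge is the one opposite to it), not the mesh data structure of the experiments.

```python
def bisect(tri):
    """Bisect a triangle by connecting the midpoint of its reference edge
    (the edge opposite the newest vertex tri[0]) to the newest vertex.
    The midpoint becomes the newest vertex of both children."""
    p_new, p1, p2 = tri
    mid = tuple(0.5 * (a + b) for a, b in zip(p1, p2))
    # each child stores the midpoint first, i.e., as its newest vertex
    return [(mid, p_new, p1), (mid, p2, p_new)]

def area(tri):
    """Unsigned area of a 2D triangle, for sanity checks."""
    (x0, y0), (x1, y1), (x2, y2) = tri
    return 0.5 * abs((x1 - x0) * (y2 - y0) - (x2 - x0) * (y1 - y0))
```

Passing the midpoint on as the newest vertex of both children is what keeps repeated bisections uniformly shape regular.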
The numerical results for Example 1(i) with δ = 1e-3 and δ = 1e-2 are shown in Figs. 1-5, where d.o.f. denotes the number of degrees of freedom of the mesh. It is observed from Fig. 1 that with both uniform and adaptive refinements, the final recoveries have comparable accuracy and capture well the inclusion locations.
Next we examine the adaptive refinement process more closely. In Figs. 2 and 3, we show the meshes T_k during the iteration and the corresponding recoveries σ_k for Example 1(i) at the two noise levels δ = 1e-3 and δ = 1e-2, respectively. On the coarse mesh T₀, the recovery has very large errors and can only identify one component, thus failing to correctly identify the number of inclusions, due to the severe under-resolution of both state and conductivity. Nonetheless, Algorithm 3.1 can correctly recover the two components with reasonable accuracy after several adaptive loops, and accordingly, the support of the recovery is gradually refined with its accuracy improving steadily. In particular, the inclusion locations stabilize after several loops. The estimators η²_{k,1} and η²_{k,2} are most effective for resolving the state and adjoint, whereas η²_{k,3} is effective for detecting internal jumps of the conductivity. The magnitude of η²_{k,2} is much smaller than that of η²_{k,1}, since the boundary data U^δ − U(σ_k) for the adjoint is much smaller than the input current I for the state. Thus, a simple collective marking strategy (i.e., marking with the sum of the three estimators) may miss the correct singularity, due to their drastically different scalings. In contrast, the separate marking in (3.5) takes care of the scaling automatically.
In Fig. 5, we plot the L²(Ω) and L¹(Ω) errors of the recoveries versus the d.o.f. N, where the recovery on the corresponding finest mesh is taken as the reference (since the recoveries by the adaptive and uniform refinements are slightly different; see Fig. 1). Due to the discontinuity of the sought-for conductivity, the L¹(Ω) norm is especially suitable for measuring the convergence. The convergence of the algorithm is clearly observed for both adaptive and uniform refinements. Further, for a fixed number of d.o.f., the AFEM gives more accurate results than uniform refinement in both error metrics. These observations show the computational efficiency of the adaptive algorithm.
Examples 1(ii) and (iii) are variations of Example 1(i), and the results are presented in Figs. 6-9. The proposed approach assumes a piecewise constant conductivity with known lower and upper bounds. Example 1(ii) does not fulfill this assumption, since the true conductivity σ† is not piecewise constant. Thus the algorithm can only produce a piecewise constant approximation to the exact one. Nonetheless, the inclusion support is reasonably identified. When the noise level δ increases from 1e-3 to 1e-2, the reconstruction accuracy deteriorates significantly; see Fig. 6. Example 1(iii) involves high-contrast inclusions, which are well known to be numerically more challenging. This is clearly observed in Fig. 8, where the recovery accuracy is inferior, especially for the noise level δ = 1e-2. However, the adaptive refinement procedure works similarly well as in the preceding examples: the refinement occurs mainly around the electrode edges and the inclusion interfaces; see Figs. 7 and 9 for the details. Now we consider one more challenging example with four inclusions.
The numerical results for Example 2 are given in Figs. 10-12. The results are in excellent agreement with the observations from Example 1: the algorithm converges steadily as the adaptive iteration proceeds, and at a low noise level it can accurately recover all four inclusions, clearly showing the efficiency of the adaptive approach. The refinement is mainly around the electrode edges and the internal interfaces.

Proof of Theorem 3.1
The lengthy and technical proof is divided into two steps: Step 1 shows the convergence to an auxiliary minimization problem over a limiting admissible set in Section 5.1, and Step 2 shows that the solution of the auxiliary problem satisfies the necessary optimality system (2.8) in Section 5.2. The overall proof strategy is similar to that of [32], and hence we omit arguments that carry over directly.

Auxiliary convergence
Since the two sequences {H_k}_{k≥0} and {A_k}_{k≥0} generated by Algorithm 3.1 are nested, we may define

    H_∞ := closure of ∪_{k≥0} H_k in H   and   A_∞ := closure of ∪_{k≥0} A_k in H¹(Ω).

Clearly H_∞ is a closed subspace of H. For the set A_∞, we have the following result [32, Lemma 4.1].
Lemma 5.1.A ∞ is a closed convex subset of A.
Over the limiting set A_∞, we define the auxiliary limiting minimization problem (5.1) of J_ε over A_∞. Taking the test function (v, V) = (u_{k_j} − u*_{k_j}, U_{k_j} − U*_{k_j}) ∈ H_{k_j} in the first lines of (3.3) and (5.4) and then applying the Cauchy–Schwarz inequality leads to (5.5). In view of (5.5), the pointwise convergence in (5.3) and Lebesgue's dominated convergence theorem, the ‖·‖_{H,*} norm of the difference tends to zero. This and Lemma 2.1 imply ‖(u_{k_j} − u*_{k_j}, U_{k_j} − U*_{k_j})‖_H → 0. Then, (5.5) and the triangle inequality imply the desired convergence. Further, we have the following auxiliary convergence.
Proof. The convergence of (u*_{k_j}, U*_{k_j}) was already proved in Theorem 5.1. By (5.6) and (5.7), we obtain the convergence of ∇σ*_{k_j}. Next we consider the convergence of the sequence {(p*_k, P*_k)}_{k≥0}. With a minimizer (σ*_∞, (u*_∞, U*_∞)) of problem (5.1), we define a limiting adjoint problem (5.9): find (p*_∞, P*_∞) ∈ H_∞ solving the adjoint equation with σ*_∞ and (u*_∞, U*_∞). By Lemma 2.1 and the Lax–Milgram theorem, (5.9) is uniquely solvable. We have the following convergence result for (p*_∞, P*_∞). The proof is identical with [32, Theorem 4.5], and hence omitted.

Proof. The proof is identical with [32, Lemma 4.8], using Theorems 5.2-5.3, and hence we only give a brief sketch. By [32, Lemma 3.5], for each T ∈ T_k with its face F (intersecting with e_l), the relevant local estimates hold.

Remark 5.1. The argument of Theorem 5.4 dates back to [49], and the main tools include the Galerkin orthogonality of the residual operator, the Lagrange and Scott–Zhang interpolation operators [16,48], the marking condition (3.5) and a density argument. Further, the error estimators emerge in the proof and are then employed in the module ESTIMATE of Algorithm 3.1.

Next we prove that the limit (σ*_∞, (u*_∞, U*_∞), (p*_∞, P*_∞)) satisfies the variational inequality in (2.8). The proof relies crucially on a constraint preserving interpolation operator. We denote by D_T the union of elements in T with a non-empty intersection with an element T ∈ T, and by ω_F the union of elements in T sharing a common face/edge with F ∈ F_T. Let

    T⁺_k := { T ∈ T_k : T ∈ T_l for all l ≥ k }   and   T⁰_k := T_k \ T⁺_k.

The set T⁺_k consists of all elements not refined after the k-th iteration, and all elements in T⁰_k are refined at least once after the k-th iteration. Clearly, T⁺_l ⊂ T⁺_k for l < k. We also define a mesh-size function h_k almost everywhere on Ω̄ by h_k(x) = h_T for x ∈ T^i, where T^i denotes the interior of an element T ∈ T_k, and h_k(x) = h_F for x ∈ F^i, where F^i denotes the relative interior of a face F ∈ F_k. It has the following property [49, Corollary 3.3]: h_k χ_{T⁰_k} → 0 as k → ∞.

Proof. The inverse estimate and the scaled trace theorem imply local bounds for each T ∈ T_k (with its face F). With the choice q = d/(d − 1), combining these two estimates gives the claimed convergence to zero as j → ∞.
Due to the lack of Galerkin orthogonality for variational inequalities, we employ a local L^r-stable interpolation operator of Clément/Chen–Nochetto type. Let N_k be the set of all interior nodes of T_k and {φ_x}_{x∈N_k} the nodal basis functions of V_k. For each x ∈ N_k, the support of φ_x is denoted by ω_x, i.e., the union of all elements in T_k with a non-empty intersection with x. Then we define Π_k : L¹(Ω) → V_k by

    Π_k v := Σ_{x∈N_k} ( |ω_x|⁻¹ ∫_{ω_x} v dx ) φ_x.    (5.13)

The definition is adapted from [13] (for elliptic obstacle problems) by replacing the maximal ball ∆_x ⊂ ω_x centered at an interior node x by ω_x. The operator Π_k satisfies the following properties; see Appendix B for a proof.
where the last inequality is due to the variational inequality in (3.3) with the test function μ_k = I_k μ.
Step ii: bound the term I. By elementwise integration by parts, Hölder's inequality, the definition of the estimator η_{k,3} and Lemma 5.3 with r = q′ (with q′ being the conjugate exponent of q), we obtain a first bound on I. Thus, for any k > l, by the (discrete) Hölder inequality and the finite overlapping property of the patches D_T, due to the uniform shape regularity of the meshes T_k ∈ T, the bound can be localized, and the error estimate for I_k [16] then applies. By Lemma 5.2, for any small ε > 0, we can choose k₁ > l₁ for some large fixed l₁ such that the resulting bound (5.17) holds whenever k > k₁.

Step iii: bound the term II. For the term II, elementwise integration by parts and Hölder's inequality yield a first bound. By the scaled trace theorem, a local inverse estimate, the L^q-stability of Π_k in Lemma 5.3, local quasi-uniformity and the interpolation error estimate for I_k [16], we deduce that for k > l the term II is bounded by vanishing mesh-dependent factors times ‖μ‖_{W^{2,q}(Ω)}.
Remark 5.2. The computable quantity η_{k,3}(σ*_k, u*_k, p*_k, T) emerges naturally from the proof, i.e., from the upper bounds on I and II, which motivates its use as an a posteriori error estimator in Algorithm 3.1.

B Proof of Lemma 5.3
The proof follows that in [13,25]. By Hölder's inequality and h^d_T ≤ |ω_x| for each node x ∈ T, we obtain (B.1). The desired L^r-stability follows from the estimate ‖φ_x‖_{L^r(T)} ≤ c h^{d/r}_T and the local quasi-uniformity of the mesh. In view of the definition (5.13), Π_k ζ = ζ for any constant ζ ∈ R. By a local inverse estimate, the L^r-stability of Π_k, the standard interpolation error estimate [16] and local quasi-uniformity, we obtain the interpolation error bound. By the scaled trace theorem, for any F ⊂ ∂T, the corresponding face estimate (B.2) holds. Then (B.1) and (B.2) complete the proof of the lemma.
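The averaging construction (5.13) can be illustrated in one dimension. The sketch below is our own simplified analogue of Π_k: nodal values are patch averages, so constants are reproduced (Π_k ζ = ζ) and box constraints c₀ ≤ v ≤ c₁ are preserved, the two properties used in the proof.

```python
import numpy as np

def clement_interpolate(x, v):
    """Averaging (Clément-type) interpolation on a 1D mesh with nodes x:
    the value at a node is the mean over the patch of adjacent elements,
    approximated here by averaging the midpoint values of those elements."""
    mid_vals = 0.5 * (v[:-1] + v[1:])            # element (midpoint) values
    nodal = np.empty_like(v)
    nodal[0] = mid_vals[0]                       # boundary: single-element patch
    nodal[-1] = mid_vals[-1]
    nodal[1:-1] = 0.5 * (mid_vals[:-1] + mid_vals[1:])
    return nodal
```

Since every nodal value is a convex combination of input values, any pointwise bound on v is inherited by the interpolant, which is exactly the constraint preservation exploited for the variational inequality.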

Figure 7: The meshes T_k and recovered conductivities σ_k during the adaptive iteration for Example 1(ii) with δ = 1e-3 and α = 2e-2. The number under each figure refers to the d.o.f.

Figure 9: The meshes T_k and recovered conductivities σ_k during the adaptive iteration for Example 1(iii) with δ = 1e-3 and α = 1e-4. The number under each figure refers to the d.o.f.

Figure 11: The meshes T_k and recovered conductivities σ_k during the adaptive refinement, for Example 2 with δ = 1e-3 and α = 2e-2. The number under each figure refers to the d.o.f.

Figure 12: The L²(Ω) and L¹(Ω) errors versus the number N of degrees of freedom of the mesh, for Example 2, using the adaptive (solid) and uniform (dashed) refinements.