Sparsity prior for electrical impedance tomography with partial data

Pages 524-541 | Received 22 Dec 2014, Accepted 28 Apr 2015, Published online: 26 May 2015

Abstract

This paper focuses on prior information for improved sparsity reconstruction in electrical impedance tomography with partial data, i.e. Cauchy data measured on subsets of the boundary. Sparsity is enforced using an $\ell^1$ norm of the basis coefficients as the penalty term in a Tikhonov functional, and prior information is incorporated through a spatially distributed regularization parameter. The resulting optimization problem allows great flexibility with respect to the choice of measurement subsets of the boundary and the incorporation of prior knowledge; in fact, the measurement subsets can be chosen completely arbitrarily. The problem is solved using a generalized conditional gradient method with soft thresholding. Numerical examples with noisy simulated data show that the addition of prior information in the proposed algorithm gives vastly improved reconstructions, even for the partial data problem. Moreover, the numerical examples show that for the partial data problem a reliable reconstruction can only be obtained close to the measurement subsets. The method is also compared to a total variation approach.

1. Introduction

The inverse problem in electrical impedance tomography (EIT) consists of reconstructing an electrical conductivity distribution in the interior of an object from electrostatic boundary measurements on the surface of the object. EIT is an emerging technology with applications in medical imaging,[Citation1] geophysics [Citation2] and industrial tomography.[Citation3] The underlying mathematical problem is known as the Calderón problem in recognition of Calderón’s seminal paper.[Citation4]

Consider a bounded domain $\Omega\subset\mathbb{R}^n$, $n\geq2$, with smooth boundary $\partial\Omega$. In order to consider partial boundary measurements, we introduce the subsets $\Gamma_N,\Gamma_D\subseteq\partial\Omega$ for the Neumann and Dirichlet data, respectively. Let $\sigma\in L^\infty(\Omega)$ with $0<c\leq\sigma$ a.e. denote the real-valued conductivity distribution in $\Omega$. Applying a boundary current flux $g$ (Neumann condition) through $\Gamma_N\subseteq\partial\Omega$ gives rise to the interior electric potential $u$ characterized as the solution to
$$\nabla\cdot(\sigma\nabla u)=0\ \text{in}\ \Omega,\qquad \sigma\frac{\partial u}{\partial\nu}=g\ \text{on}\ \partial\Omega,\qquad \int_{\Gamma_D}u|_{\partial\Omega}\,ds=0,\tag{1.1}$$

where $\nu$ is an outward unit normal to $\partial\Omega$. The latter condition in (1.1) is a grounding of the total electric potential along the subset $\Gamma_D\subseteq\partial\Omega$. To be precise, we define the spaces
$$L^2_\diamond(\partial\Omega)\equiv\Big\{g\in L^2(\partial\Omega)\ \Big|\ \int_{\partial\Omega}g\,ds=0\Big\},\qquad H^{-1/2}_\diamond(\partial\Omega)\equiv\big\{g\in H^{-1/2}(\partial\Omega)\ \big|\ \langle g,1\rangle=0\big\},$$

consisting of boundary functions with mean zero, and the spaces
$$H^1_{\Gamma_D}(\Omega)\equiv\{u\in H^1(\Omega)\mid u|_{\partial\Omega}\in H^{1/2}_{\Gamma_D}(\partial\Omega)\},\qquad H^{1/2}_{\Gamma_D}(\partial\Omega)\equiv\Big\{f\in H^{1/2}(\partial\Omega)\ \Big|\ \int_{\Gamma_D}f\,ds=0\Big\},$$

consisting of functions with mean zero on $\Gamma_D$, designed to encompass the partial boundary data. Since $\sigma$ is real valued, it is sufficient to consider the above spaces as real vector spaces, corresponding to the use of real-valued Cauchy data in the measurements. Using standard elliptic theory, it follows that (1.1) has a unique solution $u\in H^1_{\Gamma_D}(\Omega)$ for any $g\in H^{-1/2}_\diamond(\partial\Omega)$. This defines the Neumann-to-Dirichlet map (ND-map) $R_\sigma$ as an operator from $H^{-1/2}_\diamond(\partial\Omega)$ into $H^{1/2}_{\Gamma_D}(\partial\Omega)$ by $g\mapsto u|_{\partial\Omega}$, and the partial ND-map as $g\mapsto(R_\sigma g)|_{\Gamma_D}$.
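
The paper does not list implementation details at this point, but a minimal sketch of how (1.1) can be set up is given below, using legacy FEniCS (DOLFIN), the library employed in Section 4. The mesh, conductivity and Neumann datum are illustrative assumptions, and the Lagrange multiplier grounds the potential on the whole boundary; grounding on a marked subset $\Gamma_D$ is handled the same way with a restricted surface measure.

```python
# A minimal sketch (legacy FEniCS/DOLFIN) of the forward problem (1.1):
# -div(sigma grad u) = 0 with Neumann datum g, and the grounding condition
# enforced by a scalar Lagrange multiplier. For simplicity the mesh is a unit
# square and the multiplier grounds on the whole boundary; the paper's
# unit-disk domain and a marked Gamma_D are substituted in the same way.
from dolfin import *

mesh = UnitSquareMesh(32, 32)
P1 = FiniteElement("Lagrange", mesh.ufl_cell(), 1)
R = FiniteElement("Real", mesh.ufl_cell(), 0)    # one global degree of freedom
W = FunctionSpace(mesh, MixedElement([P1, R]))

sigma = Constant(1.0)                             # background conductivity (assumption)
g = Expression("cos(atan2(x[1] - 0.5, x[0] - 0.5))", degree=2)

(u, c), (v, d) = TrialFunctions(W), TestFunctions(W)
# c, d enforce int_boundary u ds = 0 symmetrically.
a = sigma*inner(grad(u), grad(v))*dx + c*v*ds + u*d*ds
L = g*v*ds

w = Function(W)
solve(a == L, w)
u_sol = w.split()[0]                              # the electric potential
```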

The data for the classical Calderón problem is the full operator $R_\sigma$ with $\Gamma_D=\Gamma_N=\partial\Omega$. The problem is well studied and there are numerous publications addressing different aspects of its solution; we mention only a few: the uniqueness and reconstruction problem was solved in [Citation5–Citation10] using the so-called complex geometrical optics (CGO) solutions; for a recent survey see [Citation11]. Stability estimates of log type were obtained in [Citation12, Citation13] and shown to be optimal in [Citation14]; the problem is thus severely ill-posed, and any computational algorithm must rely on regularization. Such computational regularization algorithms following the CGO approach were designed, implemented and analysed in [Citation15–Citation19].

Recently, the partial data Calderón problem has been studied intensively. In 3D, uniqueness has been proved under certain conditions on $\Gamma_D$ and $\Gamma_N$,[Citation20–Citation24] and in 2D the general problem with localized data, i.e. $\Gamma_D=\Gamma_N=\Gamma$ for some, possibly small, subset $\Gamma\subseteq\partial\Omega$, has been shown to possess uniqueness.[Citation25] Stability estimates of log–log type have also been obtained for the partial data problem [Citation26]; this suggests that the partial data problem is even more ill-posed and hence requires more regularization than the full data problem. Recently, a computational algorithm for the partial data problem in 2D was suggested and investigated in [Citation27].

The boundary condition in (1.1) is the continuum model, which is related to the above-mentioned uniqueness results; actual electrode measurements can be seen as an approximation to this model, see for instance [Citation28]. Another more realistic model is the Complete Electrode Model (CEM) introduced in [Citation29]. The approach to the reconstruction problem in EIT considered here can be formulated with the CEM as well, see [Citation30, Citation31].

A general approach to linear inverse problems with sparsity regularization was given in [Citation32], and in [Citation33, Citation34] the method was adapted to non-linear problems using a so-called generalized conditional gradient method. In [Citation30, Citation35, Citation36], the method was applied to the reconstruction problem in EIT with full boundary data. A study of 3D sparse reconstruction was made in [Citation31], with current injection and voltage measurement on disjoint sets of electrodes on a planar EIT device, where it was possible to reconstruct the 2D position of an inclusion close to the measured boundary. In this paper, we seek a general approach with regard to the measurement boundaries $\Gamma_D$ and $\Gamma_N$, and use prior information to improve the reconstruction further away from the measured boundary. In the numerical examples in Section 4, we consider the case of local data, with Cauchy data on the same part of the boundary. For other approaches to EIT using optimization methods, we refer to [Citation37].

In this paper, we focus on the partial data problem, for which we develop a reconstruction algorithm based on a least-squares formulation with sparsity regularization. The results are twofold: first, we extend the full data algorithm of [Citation36] to the case of partial data; second, we show how prior information about the spatial location of the perturbation in the conductivity can be used in the design of a spatially varying regularization parameter. We restrict the treatment to 2D; however, everything extends to 3D with some minor assumptions on the regularity of the Neumann data.[Citation38]

The data considered here consist of a finite number of Cauchy data, corresponding to the number of applied current patterns, taken on the subsets $\Gamma_D$ and $\Gamma_N$, i.e.
$$\{(f_k,g_k)\mid g_k\in H^{-1/2}_\diamond(\partial\Omega),\ \operatorname{supp}(g_k)\subseteq\Gamma_N,\ f_k=(R_\sigma g_k)|_{\Gamma_D}\}_{k=1}^K,\qquad K\in\mathbb{N}.\tag{1.2}$$

We assume that the true conductivity is given as $\sigma=\sigma_0+\delta\sigma$, where $\sigma_0$ is a known background conductivity. Define the closed and convex subset
$$\mathcal{A}_0\equiv\{\delta\gamma\in H^1_0(\Omega)\mid c\leq\sigma_0+\delta\gamma\leq c^{-1}\ \text{a.e. in}\ \Omega\}\tag{1.3}$$

for some $c\in(0,1)$, and $\sigma_0\in H^1(\Omega)$ with $c\leq\sigma_0\leq c^{-1}$. Similarly, define
$$\mathcal{A}\equiv\{\delta\gamma+\sigma_0\mid\delta\gamma\in\mathcal{A}_0\}=\{\gamma\in H^1(\Omega)\mid c\leq\gamma\leq c^{-1}\ \text{a.e. in}\ \Omega,\ \gamma|_{\partial\Omega}=\sigma_0|_{\partial\Omega}\}.$$

The inverse problem is then to approximate $\delta\sigma\in\mathcal{A}_0$ given the data (1.2).

Let $\{\psi_j\}_{j=1}^\infty$ denote a chosen orthonormal basis for $H^1_0(\Omega)$. For sparsity regularization, we approximate $\delta\sigma$ by $\operatorname{argmin}_{\delta\gamma\in\mathcal{A}_0}\Psi(\delta\gamma)$ using the following Tikhonov functional [Citation36]
$$\Psi(\delta\gamma)\equiv\sum_{k=1}^K J_k(\delta\gamma)+P(\delta\gamma),\qquad\delta\gamma\in\mathcal{A}_0,\tag{1.4}$$

with
$$J_k(\delta\gamma)\equiv\frac{1}{2}\|R_{\sigma_0+\delta\gamma}g_k-f_k\|^2_{L^2(\Gamma_D)},\qquad P(\delta\gamma)\equiv\sum_{j=1}^\infty\alpha_j|c_j|,$$

for $c_j\equiv\langle\delta\gamma,\psi_j\rangle_{H^1(\Omega)}$. The regularization parameter $\alpha_j>0$ for the sparsity-promoting $\ell^1$ penalty term $P$ is distributed such that each basis coefficient can be regularized differently; we return to this in Section 3. It should be noted how easily and naturally partial data are incorporated in this way: the discrepancy is simply minimized only on $\Gamma_D$, where the Dirichlet data are known, while the rest of the boundary is ignored.

This paper is organized as follows: in Section 2 we derive the Fréchet derivative of $J_k$ and reformulate the optimization problem, using the generalized conditional gradient method, as a sequence of linearized optimization problems. In Section 3, we explain the idea of the spatially dependent regularization parameter designed for the use of prior information. Then, in Section 4 we show the feasibility of the algorithm by several numerical examples, and finally we conclude in Section 5.

2. Sparse reconstruction

In this section, the sparse reconstruction of $\delta\sigma$ based on the optimization problem (1.4) is investigated for a bounded domain $\Omega\subset\mathbb{R}^2$ with smooth boundary $\partial\Omega$. The penalty term enforces that $\delta\sigma$ should be expanded by only a few basis functions in a given orthonormal basis, and the level of sparsity is controlled by the regularization parameter. Using a distributed regularization parameter, it is furthermore possible to apply prior information about which basis functions should be included in the expansion of $\delta\sigma$. The partial data problem enters in the discrepancy term, in which we only fit the data on part of the boundary. Ultimately, this leads to the method given in Algorithm 1 at the end of this section.

Denote by $F_g(\sigma)$ the unique solution to (1.1), and by $F_g(\sigma)|_{\partial\Omega}$ its trace (note that $R_\sigma g=F_g(\sigma)|_{\partial\Omega}$). Let $\gamma\in\mathcal{A}$ and $g\in L^p(\partial\Omega)\cap H^{-1/2}_\diamond(\partial\Omega)$ for $p>1$; then, following the proofs of Theorem 2.2 and Corollary 2.1 in [Citation35] whilst applying the partial boundary $\Gamma_D$, we have
$$\lim_{\substack{\|\eta\|_{H^1(\Omega)}\to0\\ \gamma+\eta\in\mathcal{A}}}\frac{\big\|F_g(\gamma+\eta)|_{\partial\Omega}-F_g(\gamma)|_{\partial\Omega}-(F_g)'_\gamma\eta\big\|_{H^{1/2}_{\Gamma_D}(\partial\Omega)}}{\|\eta\|_{H^1(\Omega)}}=0.\tag{2.1}$$

Here $(F_g)'_\gamma$ is the linear map that maps $\eta$ to $w|_{\partial\Omega}$, where $w$ is the unique solution to
$$-\nabla\cdot(\gamma\nabla w)=\nabla\cdot(\eta\nabla F_g(\gamma))\ \text{in}\ \Omega,\qquad\gamma\frac{\partial w}{\partial\nu}=0\ \text{on}\ \partial\Omega,\qquad\int_{\Gamma_D}w|_{\partial\Omega}\,ds=0.\tag{2.2}$$

It is noted that $(F_g)'_\gamma$ resembles a Fréchet derivative of $F_g$ evaluated at $\gamma$ due to (2.1); however, $\mathcal{A}$ is not a linear vector space, hence the requirement $\gamma,\gamma+\eta\in\mathcal{A}$.

The first step in minimizing $\Psi$ using a gradient descent-type iterative algorithm is to determine a derivative of the discrepancy terms $J_k$.

Lemma 2.1

Let $\gamma=\sigma_0+\delta\gamma$ for $\delta\gamma\in\mathcal{A}_0$, and let $\chi_{\Gamma_D}$ be the characteristic function on $\Gamma_D$. Then
$$E_k\equiv-\nabla F_{g_k}(\gamma)\cdot\nabla F_{\chi_{\Gamma_D}(R_\gamma g_k-f_k)}(\gamma)\in L^r(\Omega)\subset H^{-1}(\Omega)\tag{2.3}$$

for some $r>1$, and the Fréchet derivative $(J_k)'_{\delta\gamma}$ of $J_k$ on $H^1_0(\Omega)$ evaluated at $\delta\gamma$ is given by
$$(J_k)'_{\delta\gamma}\eta=\int_\Omega E_k\eta\,dx,\qquad\delta\gamma+\eta\in\mathcal{A}_0.\tag{2.4}$$

Proof

For the proof, the index $k$ is suppressed. First it is proved that $E\in L^r(\Omega)$ for some $r>1$, which is shown by estimates on $\nabla F_g(\gamma)$ and $\nabla F_h(\gamma)$, where $h\equiv\chi_{\Gamma_D}(R_\gamma g-f)$. Note that $R_\gamma g\in H^{1/2}_{\Gamma_D}(\partial\Omega)$ and $f\in L^2(\Gamma_D)$, i.e. $h\in L^2_\diamond(\partial\Omega)\subset L^2(\partial\Omega)\cap H^{-1/2}_\diamond(\partial\Omega)$. Now using [Citation35, Theorem 3.1], there exists $Q>2$ such that
$$\|F_h(\gamma)\|_{W^{1,q}(\Omega)}\leq C\|h\|_{L^2(\partial\Omega)},\tag{2.5}$$

where $q\in(2,Q)\cap[2,4]$. Since $F_g(\gamma)\in H^1_{\Gamma_D}(\Omega)$, then $\nabla F_g(\gamma)\in L^2(\Omega)$. It has already been established in (2.5) that $F_h(\gamma)\in W^{1,q}(\Omega)$ for $q\in(2,\min\{Q,4\})$, so $\nabla F_h(\gamma)\in L^q(\Omega)$. By Hölder's generalized inequality
$$E=-\nabla F_g(\gamma)\cdot\nabla F_h(\gamma)\in L^r(\Omega),\qquad\frac{1}{r}=\frac{1}{2}+\frac{1}{q},$$

and as $q>2$ then $r>1$. Let $r'$ be the conjugate exponent to $r$; then $r'\in[1,\infty)$, i.e. the Sobolev imbedding theorem [Citation39] implies that $H^1(\Omega)\subset L^{r'}(\Omega)$ as $\Omega\subset\mathbb{R}^2$. Thus $E\in(L^{r'}(\Omega))'\subset(H^1(\Omega))'\subset(H^1_0(\Omega))'=H^{-1}(\Omega)$.

Now it will be shown that $J'_{\delta\gamma}$ can be identified with $E$. By the chain rule (utilizing that $R_\gamma g=F_g(\gamma)|_{\partial\Omega}$), $J'_{\delta\gamma}\eta$ is given as
$$J'_{\delta\gamma}\eta=\int_{\partial\Omega}\chi_{\Gamma_D}(R_\gamma g-f)(F_g)'_\gamma\eta\,ds,\tag{2.6}$$

where $\chi_{\Gamma_D}$ enforces that the integral is over $\Gamma_D$. The weak formulations of (1.1), with Neumann data $\chi_{\Gamma_D}(R_\gamma g-f)$, and of (2.2) are
$$\int_\Omega\gamma\nabla F_{\chi_{\Gamma_D}(R_\gamma g-f)}(\gamma)\cdot\nabla v\,dx=\int_{\partial\Omega}\chi_{\Gamma_D}(R_\gamma g-f)v|_{\partial\Omega}\,ds,\qquad v\in H^1(\Omega),\tag{2.7}$$
$$\int_\Omega\gamma\nabla w\cdot\nabla v\,dx=-\int_\Omega\eta\nabla F_g(\gamma)\cdot\nabla v\,dx,\qquad v\in H^1(\Omega).\tag{2.8}$$

Now by letting $v\equiv w$ in (2.7) and $v\equiv F_{\chi_{\Gamma_D}(R_\gamma g-f)}(\gamma)$ in (2.8), we obtain, using the definition $w|_{\partial\Omega}=(F_g)'_\gamma\eta$, that
$$J'_{\delta\gamma}\eta=\int_{\partial\Omega}\chi_{\Gamma_D}(R_\gamma g-f)(F_g)'_\gamma\eta\,ds=\int_\Omega\gamma\nabla F_{\chi_{\Gamma_D}(R_\gamma g-f)}(\gamma)\cdot\nabla w\,dx=-\int_\Omega\eta\nabla F_g(\gamma)\cdot\nabla F_{\chi_{\Gamma_D}(R_\gamma g-f)}(\gamma)\,dx=\int_\Omega E\eta\,dx.$$

Remark 2.2

It should be noted that $(J_k)'_{\delta\gamma}$ is related to the Fréchet derivative $R'_\gamma$ of $\gamma\mapsto R_\gamma$ evaluated at $\gamma$ by $(J_k)'_{\delta\gamma}\eta=\int_{\Gamma_D}(R_\gamma g_k-f_k)\,R'_\gamma[\eta]g_k\,ds$.

Define
$$J'_{\delta\gamma}\equiv\sum_{k=1}^K(J_k)'_{\delta\gamma}=-\sum_{k=1}^K\nabla F_{g_k}(\gamma)\cdot\nabla F_{\chi_{\Gamma_D}(R_\gamma g_k-f_k)}(\gamma).$$

For a gradient-type descent method, we seek a direction $\eta$ in which the discrepancy decreases. As $J'_{\delta\gamma}\in H^{-1}(\Omega)$, it is known from Riesz' representation theorem that there exists a unique function in $H^1_0(\Omega)$, denoted by $G(\delta\gamma)$, such that
$$J'_{\delta\gamma}\eta=\langle G(\delta\gamma),\eta\rangle_{H^1(\Omega)},\qquad\eta\in H^1_0(\Omega).\tag{2.9}$$

Now $\eta\equiv-G(\delta\gamma)$ points in the direction of steepest descent among the viable directions. Furthermore, since $G(\delta\gamma)|_{\partial\Omega}=0$, the boundary condition $\delta\sigma|_{\partial\Omega}=0$ is automatically fulfilled by the approximation. In [Citation40], $G(\delta\gamma)$ is called a Sobolev gradient, and it is the unique solution to
$$(-\Delta+1)v=J'_{\delta\gamma}\ \text{in}\ \Omega,\qquad v=0\ \text{on}\ \partial\Omega,$$

for which (2.9) is the weak formulation.
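
A sketch of this computation, under the same legacy-FEniCS assumptions as above and with `E` standing for an assembled Function holding the integrand from Lemma 2.1, could look as follows.

```python
# A sketch of the Sobolev gradient: solve (-Laplace + 1) v = J' in Omega with
# v = 0 on the boundary, whose weak form is exactly (2.9). V is assumed to be
# a P1 space on the same mesh as above.
from dolfin import *

def sobolev_gradient(V, E):
    """Riesz representative in H^1_0(Omega) of eta -> int E*eta dx."""
    v, eta = TrialFunction(V), TestFunction(V)
    a = inner(grad(v), grad(eta))*dx + v*eta*dx
    L = E*eta*dx
    G = Function(V)
    solve(a == L, G, DirichletBC(V, Constant(0.0), "on_boundary"))
    return G
```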

In each iteration step, we need to determine a step size $s_i$ for an algorithm resembling a steepest descent, $\delta\gamma_{i+1}=\delta\gamma_i-s_iG(\delta\gamma_i)$. Here a Barzilai–Borwein step size rule [Citation36, Citation41, Citation42] is applied, for which we determine $s_i$ such that $\frac{1}{s_i}(\delta\gamma_i-\delta\gamma_{i-1})\simeq G(\delta\gamma_i)-G(\delta\gamma_{i-1})$ in the least-squares sense
$$s_i\equiv\operatorname*{argmin}_{s}\big\|s^{-1}(\delta\gamma_i-\delta\gamma_{i-1})-\big(G(\delta\gamma_i)-G(\delta\gamma_{i-1})\big)\big\|^2_{H^1(\Omega)}.\tag{2.10}$$

Assuming that $\langle\delta\gamma_i-\delta\gamma_{i-1},G(\delta\gamma_i)-G(\delta\gamma_{i-1})\rangle_{H^1(\Omega)}\neq0$ yields
$$s_i=\frac{\|\delta\gamma_i-\delta\gamma_{i-1}\|^2_{H^1(\Omega)}}{\langle\delta\gamma_i-\delta\gamma_{i-1},G(\delta\gamma_i)-G(\delta\gamma_{i-1})\rangle_{H^1(\Omega)}}.\tag{2.11}$$

A maximum step size $s_{\max}$ is enforced to avoid the situations where $\langle\delta\gamma_i-\delta\gamma_{i-1},G(\delta\gamma_i)-G(\delta\gamma_{i-1})\rangle_{H^1(\Omega)}\approx0$.

With inspiration from [Citation42], $s_i$ is initialized by (2.11), after which it is thresholded to lie in $[s_{\min},s_{\max}]$ for positive constants $s_{\min}$ and $s_{\max}$. It is noted in [Citation42] that Barzilai–Borwein step rules lead to faster convergence if we do not restrict $\Psi$ to decrease in every iteration; allowing an occasional increase in $\Psi$ helps avoid regions where the method would otherwise have to take many small steps to ensure a decrease of $\Psi$. Therefore, one makes sure that the following so-called weak monotonicity is satisfied, which compares $\Psi(\delta\gamma_{i+1})$ with the most recent $M$ steps. Let $\tau\in(0,1)$ and $M\in\mathbb{N}$; then $s_i$ is said to satisfy the weak monotonicity with respect to $M$ and $\tau$ if the following holds [Citation42]:
$$\Psi(\delta\gamma_{i+1})\leq\max_{i-M+1\leq j\leq i}\Psi(\delta\gamma_j)-\frac{\tau}{2s_i}\|\delta\gamma_{i+1}-\delta\gamma_i\|^2_{H^1(\Omega)}.\tag{2.12}$$
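
A sketch of this step-size logic, with the inner product, functional and update step as placeholder callables, could look as follows.

```python
# A sketch of the step-size rule: Barzilai-Borwein initialization (2.11),
# thresholding to [s_min, s_max], and halving of s_i until the weak
# monotonicity condition (2.12) holds. `ip` (an H^1 inner product on NumPy
# coefficient vectors), `Psi` (the functional (1.4)) and `update` (soft
# thresholding plus projection, cf. (2.13)) are placeholders for the
# FEM-based quantities of the algorithm.
def bb_step(dg, dg_prev, G, G_prev, ip, s_min=1.0, s_max=1000.0):
    y = dg - dg_prev
    denom = ip(y, G - G_prev)
    s = ip(y, y) / denom if denom > 0 else s_max   # guard a vanishing denominator
    return min(max(s, s_min), s_max)

def weakly_monotone_update(s, dg, G, Psi, history, update, ip, tau=1e-5):
    """Reduce s until Psi(new) <= max over the last M stored Psi values
    minus tau/(2s) * ||new - old||^2, cf. (2.12)."""
    while True:
        dg_new = update(dg - s * G)
        step = dg_new - dg
        if Psi(dg_new) <= max(history) - tau / (2.0 * s) * ip(step, step):
            return s, dg_new
        s *= 0.5
```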

If (2.12) is not satisfied, the step size $s_i$ is reduced until this is the case. To solve the non-linear minimization problem (1.4), we iteratively solve the following linearized problem:
$$\zeta_{i+1}\equiv\operatorname*{argmin}_{\delta\gamma\in H^1_0(\Omega)}\Big(\frac{1}{2}\|\delta\gamma-(\delta\gamma_i-s_iG(\delta\gamma_i))\|^2_{H^1(\Omega)}+s_i\sum_{j=1}^\infty\alpha_j|c_j|\Big),\qquad\delta\gamma_{i+1}\equiv P_{\mathcal{A}_0}(\zeta_{i+1}).\tag{2.13}$$

Here $\{\psi_j\}_{j=1}^\infty$ is an orthonormal basis for $H^1_0(\Omega)$ in the $H^1$-metric, and $P_{\mathcal{A}_0}$ is the projection of $H^1_0(\Omega)$ onto $\mathcal{A}_0$, which ensures that (1.1) is solvable (note that $H^1_0(\Omega)$ does not imbed into $L^\infty(\Omega)$, i.e. $\zeta_{i+1}$ may be unbounded). By use of the map $S_\beta:\mathbb{R}\to\mathbb{R}$ defined below, known as the soft shrinkage/thresholding map with threshold $\beta>0$,
$$S_\beta(x)\equiv\operatorname{sgn}(x)\max\{|x|-\beta,0\},\qquad x\in\mathbb{R},\tag{2.14}$$

the solution to (2.13) is easy to find directly (see also [Citation32, Section 1.5]):
$$\zeta_{i+1}=\sum_{j=1}^\infty S_{s_i\alpha_j}(d_j)\psi_j,\tag{2.15}$$

where $d_j\equiv\langle\delta\gamma_i-s_iG(\delta\gamma_i),\psi_j\rangle_{H^1(\Omega)}$ are the basis coefficients for $\delta\gamma_i-s_iG(\delta\gamma_i)$.
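
In coefficient form, the update (2.14)–(2.15) is a componentwise soft thresholding; a minimal sketch, assuming the coefficients are collected in NumPy vectors, is:

```python
# A sketch of the coefficient update (2.14)-(2.15): soft thresholding applied
# componentwise to the basis coefficients d_j of dgamma_i - s_i*G(dgamma_i);
# `alpha` is the vector of distributed regularization parameters alpha_j.
import numpy as np

def soft_threshold(d, beta):
    """S_beta(x) = sgn(x) * max(|x| - beta, 0), applied componentwise."""
    return np.sign(d) * np.maximum(np.abs(d) - beta, 0.0)

# In coefficients, (2.15) reads: zeta = soft_threshold(d, s_i * alpha).
```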

Remark 2.3

The soft thresholding in (2.15) assumes that the coefficients $\{d_j\}_{j=1}^\infty$ are real valued, as is the case here with real-valued conductivity $\sigma$ and Cauchy data. For a complex-valued admittivity $\sigma$, appropriate use of complex conjugation is required for the derivative in Lemma 2.1, and a minimizer for (2.13) can be found in [Citation42].

The projection $P_{\mathcal{A}_0}:H^1_0(\Omega)\to\mathcal{A}_0$ is defined as
$$P_{\mathcal{A}_0}(v)\equiv T_c(\sigma_0+v)-\sigma_0,\qquad v\in H^1_0(\Omega),\tag{2.16}$$

where $T_c$ is the following truncation, which depends on the constant $c\in(0,1)$ in (1.3):
$$T_c(v)\equiv\begin{cases}c & \text{where}\ v<c\ \text{a.e.},\\ c^{-1} & \text{where}\ v>c^{-1}\ \text{a.e.},\\ v & \text{else}.\end{cases}$$

Since $\sigma_0\in H^1(\Omega)$ and $c\leq\sigma_0\leq c^{-1}$, it follows directly from [Citation43, Lemma 1.2] that $T_c$ and $P_{\mathcal{A}_0}$ are well defined, and it is easy to see that $P_{\mathcal{A}_0}$ is a projection.
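
On nodal coefficient vectors, a minimal sketch of this projection is:

```python
# A sketch of the projection (2.16) on nodal coefficient vectors: truncate
# sigma_0 + v to [c, 1/c] and subtract sigma_0 again.
import numpy as np

def project_A0(v, sigma0, c):
    return np.clip(sigma0 + v, c, 1.0 / c) - sigma0
```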

Remark 2.4

In the numerical examples in Section 4, the value of $c$ is chosen small enough that the truncation in $P_{\mathcal{A}_0}$ does not occur during the iterations. If tight bounds on $\sigma$ are known a priori, the truncation may lead to an improved rate of convergence and gives reconstructions in the correct dynamic range.

It should also be noted that $0\in\mathcal{A}_0$ since $c\leq\sigma_0\leq c^{-1}$; thus we may choose $\delta\gamma_0\equiv0$ as the initial guess in the algorithm. The algorithm is summarized in Algorithm 1. In this paper, the stopping criterion is that the step size $s_i$ falls below a threshold $s_{\text{stop}}$.

Remark 2.5

Note that $\sum_j\langle\delta\gamma_i-s_iG(\delta\gamma_i),\psi_j\rangle_{H^1(\Omega)}\psi_j$ corresponds to only having the discrepancy term in (2.13), while the penalty term $s_i\sum_{j=1}^\infty\alpha_j|\langle\delta\gamma,\psi_j\rangle_{H^1(\Omega)}|$ leads to the soft thresholding in (2.15).

Remark 2.6

The functional $\Psi$ in (1.4) is non-convex; thus, the best we can hope for is to find a local minimum. The quality of the reconstruction depends on the initial guess, and as we expect the reconstruction to be sparse, $\delta\gamma_0\equiv0$ is a reasonable initial guess.

Remark 2.7

The main computational cost lies in computing $J'_{\delta\gamma_i}$, which involves solving $2K$ well-posed PDEs (note that $F_{g_k}(\gamma_i)$ can be reused from the evaluation of $\Psi$), where $K$ is the number of current patterns used in the measurements. It should be noted that each of the $2K$ problems consists of solving the same equation with different boundary conditions, so the finite element method (FEM) matrix only has to be assembled and factorized once per iteration.
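
A sketch of this reuse, assuming a SciPy sparse system matrix, is:

```python
# A sketch of the cost saving: the 2K linearized problems share one system
# matrix, so it is factorized once per iteration and reused for all
# right-hand sides. `A` (a SciPy sparse matrix) and `rhs_list` are assumed
# to come from the FEM discretization.
import scipy.sparse.linalg as spla

def solve_all(A, rhs_list):
    lu = spla.splu(A.tocsc())               # single sparse LU factorization
    return [lu.solve(b) for b in rhs_list]  # 2K cheap back-substitutions
```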

3. Prior information

Prior information is typically introduced in the penalty term $P$ for Tikhonov-like functionals, and here the regularization parameter determines how strongly this prior information is enforced. In the case of sparsity regularization, this implies knowledge of how sparse we expect the solution to be in general. Instead of applying the same prior information for each basis function, a distributed parameter is applied. Let
$$\alpha_j\equiv\alpha\mu_j,$$

where $\alpha$ is the usual regularization parameter, corresponding to the case where no prior information is considered about specific basis functions. The $\mu_j\in(0,1]$ will be used to weight the penalty depending on whether a specific basis function should be included in the expansion of $\delta\sigma$. The $\mu_j$ are chosen as
$$\mu_j=\begin{cases}1, & \text{no prior on }c_j,\\ \text{close to }0, & \text{prior that }c_j\neq0,\end{cases}$$

i.e. if we know that a coefficient in the expansion of $\delta\sigma$ should be non-zero, we can choose to penalize that coefficient less. Using a basis with localized support, the sparsity assumption translates to an assumption on the support of the inclusion.
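
As a sketch, with a Boolean mask marking the basis functions in the assumed support (the value $10^{-2}$ anticipates Section 4):

```python
# A sketch of the distributed parameter alpha_j = alpha * mu_j: coefficients
# inside the assumed support are penalized less (mu_j = 1e-2 is the value used
# in Section 4), all others receive the full weight mu_j = 1.
import numpy as np

def distributed_alpha(alpha, prior_mask, mu_prior=1e-2):
    """prior_mask[j] is True when basis function j lies in the assumed support."""
    return alpha * np.where(prior_mask, mu_prior, 1.0)
```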

3.1. Applying the FEM basis

In order to improve the sparsity solution for finding small inclusions, it seems appropriate to include prior information about the support of the inclusions. There are different methods available for obtaining such information, assuming piecewise constant conductivity [Citation44, Citation45] or real-analytic conductivity.[Citation46] An example of the reconstruction of $\operatorname{supp}\delta\sigma$ is shown in Figure 1, where it is observed that numerically it is possible to reconstruct a reasonable convex approximation to the support. Thus, it is possible to acquire estimates of $\operatorname{supp}\delta\sigma$ 'for free', in the sense that they are obtained directly from the data without further assumptions on the location.

Figure 1. (a) Phantom with kite-shaped piecewise constant inclusion $\delta\sigma$. (b) Reconstruction of $\operatorname{supp}\delta\sigma$ using monotonicity relations from the approach in [Citation45] by use of simulated noiseless data.

Another approach is to consider other reconstruction methods such as total variation (TV) regularization, which tends to give good approximations to the support but has issues with reconstructing the contrast if the amplitude of $\delta\sigma$ is large, as seen in Section 4.3. The idea is to apply such information in the sparsity algorithm in order to obtain good contrast in the reconstruction while maintaining the correct support, even for the partial data problem.

Suppose that as a basis we consider a FEM basis $\{\psi_j\}_{j=1}^N$ for the subspace $V_h\subset H^1_0(\Omega)$ of piecewise affine elements. This basis comprises functions that are piecewise affine with degrees of freedom at the mesh nodes, i.e. $\psi_j(x_k)=\delta_{j,k}$ at mesh node $x_k$ in the applied mesh. Let $\delta\sigma\in V_h$; then $\delta\sigma(x)=\sum_j\delta\sigma(x_j)\psi_j(x)$, i.e. for each node there is a basis function whose coefficient contains local information about the expanded function. This is convenient when applying prior information about the support of an inclusion. Note that the FEM basis functions are not mutually orthogonal, since basis functions corresponding to neighbouring nodes are non-negative and have overlapping support; however, for any non-neighbouring pair of nodes the corresponding basis functions are orthogonal.

When applying the FEM basis for mesh nodes $\{x_j\}_{j=1}^N$, the corresponding functional is
$$\Psi(\delta\gamma)=\frac{1}{2}\sum_{k=1}^K\|R_{\sigma_0+\delta\gamma}g_k-f_k\|^2_{L^2(\Gamma_D)}+\sum_{j=1}^N\alpha_j|\delta\gamma(x_j)|.$$

It is evident that the penalty corresponds to determining inclusions with small support, and prior information on the sparsity corresponds to prior information on the support of $\delta\sigma$. We cannot directly utilize (2.15), since the FEM basis is not an orthonormal basis for $H^1_0(\Omega)$; instead we suggest the following iteration step:
$$\zeta_{i+1}(x_j)=S_{s_i\alpha_j/\|\psi_j\|_{L^1(\Omega)}}\big(\delta\gamma_i(x_j)-s_iG(\delta\gamma_i)(x_j)\big),\quad j=1,2,\dots,N,\qquad\delta\gamma_{i+1}=P_{\mathcal{A}_0}(\zeta_{i+1}).\tag{3.1}$$

Note that the regularization parameter will depend quite heavily on the discretization of the mesh, i.e. for the same domain a good regularization parameter $\alpha$ will be much larger on a coarse mesh than on a fine mesh. This is quite inconvenient, and instead we can weight the regularization parameter according to the mesh cells by setting $\alpha_j\equiv\alpha\beta_j\mu_j$. This leads to a discretization of a weighted $L^1$-norm penalty term:
$$\alpha\int_\Omega f_\mu|\delta\gamma|\,dx\simeq\alpha\sum_{j=1}^N\beta_j\mu_j|\delta\gamma(x_j)|,$$

where $f_\mu:\Omega\to(0,1]$ is continuous with $f_\mu(x_j)=\mu_j$. For a triangulated mesh, the weights $\beta_j$ consist of the node areas, computed in 2D as $1/3$ of the area of $\operatorname{supp}\psi_j$. This corresponds to splitting each cell's area evenly amongst its nodes, and it will not lead to instability on a regular mesh. This makes the choice of $\alpha$ almost independent of the mesh, and it is used in the numerical examples in the following section.
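
A sketch of computing these weights from a triangulation, with assumed array layouts for nodes and triangles, is:

```python
# A sketch of the node-area weights beta_j: each node receives 1/3 of the area
# of every triangle containing it, i.e. beta_j = |supp psi_j| / 3. `nodes` is
# an (N, 2) coordinate array and `tris` an (M, 3) connectivity array.
import numpy as np

def node_areas(nodes, tris):
    p = nodes[tris]                                        # (M, 3, 2) vertices
    tri_area = 0.5 * np.abs(np.cross(p[:, 1] - p[:, 0], p[:, 2] - p[:, 0]))
    beta = np.zeros(len(nodes))
    np.add.at(beta, tris, (tri_area / 3.0)[:, None])       # split area evenly
    return beta
```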

Remark 3.1

The corresponding algorithm with the FEM basis is the same as Algorithm 1, except that the update is applied via (3.1) instead of (2.13).

4. Numerical examples

In this section, we illustrate the numerical algorithm through several examples; it is implemented using the finite element library FEniCS.[Citation47] We first consider the full data case $\Gamma_D=\Gamma_N=\partial\Omega$, without and then with prior information, and afterwards we do the same for the partial data case. Finally, a brief comparison is made with another sparsity-promoting method based on TV.

For the following examples, $\Omega$ is the unit disk in $\mathbb{R}^2$ (however, the algorithm can be applied to any bounded domain with smooth boundary). The regularization parameter $\alpha$ is chosen manually by trial and error. The other parameters are $\sigma_0\equiv1$, $M=5$, $\tau=10^{-5}$, $s_{\min}=1$, $s_{\max}=1000$, and the stopping criterion is that the step size falls below $s_{\text{stop}}=10^{-3}$. We take $K=10$ with applied Neumann data of the form $g_n^c(\theta)\equiv\cos(n\theta)$ and $g_n^s(\theta)\equiv\sin(n\theta)$ for $n=1,\dots,5$, with $\theta$ being the angular variable. Such trigonometric current patterns are realistic approximations to electrode measurements, see for instance [Citation28]. For the partial data, an interval $\Gamma=\Gamma_N=\Gamma_D=\{\theta\in(\theta_1,\theta_2)\}$ is considered, and $g_n^c$ and $g_n^s$ are scaled and translated such that they have $n$ periods in the interval.
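
A sketch of generating these patterns, with the arc endpoints as parameters, is:

```python
# A sketch of the Neumann patterns: on the full boundary g_n^c = cos(n*theta)
# and g_n^s = sin(n*theta); on an arc (theta1, theta2) they are scaled and
# translated to have n periods in the arc and to vanish outside it (which also
# keeps the mean over the boundary zero for integer n).
import numpy as np

def current_pattern(theta, n, theta1=0.0, theta2=2*np.pi, kind="cos"):
    t = 2*np.pi * n * (theta - theta1) / (theta2 - theta1)  # n periods on the arc
    f = np.cos(t) if kind == "cos" else np.sin(t)
    return np.where((theta > theta1) & (theta < theta2), f, 0.0)
```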

When applying prior information, the coefficients $\mu_j$ are chosen as $10^{-2}$ where the support of $\delta\sigma$ is assumed, and as $1$ elsewhere. It should be noted that, in order to get fast transitions at sharp edges when prior information is applied, a local mesh refinement is used during the iterations to refine the mesh where $\delta\gamma_i$ is large.

For the simulated Dirichlet data, the forward problem is computed on a very fine mesh and afterwards interpolated onto a different, much coarser mesh (with roughly 2800 triangular elements) in order to avoid inverse crimes. White Gaussian noise has been added to the Dirichlet data $\{f_k\}_{k=1}^K$ at the discrete nodes on the boundary of the mesh. The standard deviation of the noise is chosen as $\epsilon\max_k\max_{x_j\in\Gamma_D}|f_k(x_j)|$ as in [Citation36], where the noise level is fixed at $\epsilon=10^{-2}$ (corresponding to 1% noise) unless otherwise stated.
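
A sketch of this noise model, assuming the Dirichlet data are collected in a NumPy array, is:

```python
# A sketch of the noise model: white Gaussian noise added to the Dirichlet data
# at the boundary nodes, with standard deviation eps * max_k max_j |f_k(x_j)|
# and eps = 1e-2 for the 1% level used here. `F` is a (K, J) array of noiseless
# Dirichlet data at the Gamma_D nodes (an assumption on the data layout).
import numpy as np

def add_noise(F, eps=1e-2, seed=0):
    rng = np.random.default_rng(seed)
    std = eps * np.max(np.abs(F))
    return F + rng.normal(0.0, std, size=F.shape)
```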

Figure 2. Numerical phantoms. (a) Circular piecewise constant inclusion. (b) Kite-shaped piecewise constant inclusion. (c) Multiple $C^2$ inclusions.

Figure 2 shows the numerical phantoms: one is a simple circular inclusion, another is the non-convex kite-shaped phantom, and finally there is the case of multiple smoother inclusions, which we investigate more briefly.

4.1. Full boundary data

For $\Gamma_D=\Gamma_N=\partial\Omega$, it is possible to get quite good reconstructions of both shape and contrast for the convex inclusions, as seen in Figure 3, and for the case with multiple inclusions there is a reasonable separation of the inclusions.

Figure 3. Full data sparse reconstruction of the phantoms in Figure 2 without prior information on the support of the inclusions.

For the kite-shaped phantom, we only get what appears to be a convex approximation of the shape. It is seen in [Citation36] that the algorithm is able to reconstruct some types of non-convex inclusions, such as the hole in a ring-shaped phantom; however, those inclusions are much larger, which makes them easier to distinguish from similar convex inclusions.

We note that the method is very robust to noise. Figure 4 demonstrates that even unreasonably large amounts of noise lead to only small deformations in the shape of the reconstructed inclusion.

Figure 4. Left: Dirichlet data corresponding to $g=\cos(\theta)$ for the phantom in Figure 2(a), with 10% and 50% noise levels. Middle: full data reconstruction for a 10% noise level. Right: full data reconstruction for a 50% noise level.

Figure 5. Full data sparse reconstruction of the phantom in Figure 2(a), varying the assumed support given by a dilation $\delta r$. The colour bar is truncated at $[1,6]$. For $\delta r<0$, the contrast in the reconstruction is higher than in the phantom.

In order to investigate the use of prior information, we consider the phantom in Figure 2(a), and let $B(r)$ denote a ball centred at the correct inclusion with radius $r$. We can then investigate reconstructions with prior information assuming that the support of the inclusion is $\overline{B((1+\delta r)r^*)}$, where $r^*$ is the correct radius of the inclusion. Figure 5 shows that an underestimated support ($\delta r<0$) is heavily enforced, and the contrast is vastly overestimated in the reconstruction, as shown in Figure 6 (note that this cannot be seen in Figure 5, as the colour scale of the phantom is applied).

Figure 6. Behaviour of the full data sparsity reconstruction based on the phantom in Figure 2(a) when varying $\delta r$, which characterizes the assumed support. The correct support of the inclusion is a ball $\overline{B(r^*)}$, while the assumed support is $\overline{B((1+\delta r)r^*)}$. $\sigma_B$ is the average of the reconstruction $\sigma$ over $B(r^*)$ and $\sigma_{B^C}$ is the average over the complement of $B(r^*)$; $\sigma_{\max}$ is the maximum of $\sigma$ over the mesh nodes.

Interestingly, when the support is overestimated, the contrast and support of the reconstructed inclusion do not suffer much. Intuitively, increasing $\delta r$ until the assumed support of $\delta\sigma$ contains the entire domain $\Omega$ corresponds to the case with no prior information. Denote by $\sigma_B\equiv|B(r^*)|^{-1}\int_{B(r^*)}\sigma\,dx$ the average of the reconstruction $\sigma$ over $\overline{B(r^*)}$, the correct support of the inclusion, and by $\sigma_{B^C}$ the average of $\sigma$ over the complement of $B(r^*)$. Denote by $\sigma_{\max}\equiv\max_j\sigma(x_j)$ the maximum of $\sigma$ over the mesh nodes. Figure 6 gives a good indication of the aforementioned intuition: around $\delta r=0$, both $\sigma_B$ and $\sigma_{\max}$ level off around the correct contrast of the inclusion (the red line) and stay there for $\delta r>0$. It should be noted that even a 25% overestimation of the support leads to a better contrast in the reconstruction than if no prior information is applied, cf. Figure 3.

Assuming an overestimate of the support of $\delta\sigma$ also seems reasonable. In the extreme case of no prior information, $\operatorname{supp}\delta\sigma$ is assumed to be all of $\Omega$. If the estimate comes from another method such as TV regularization, then the support is typically slightly overestimated while the contrast suffers.[Citation48] Thus, we can use the overestimated support to obtain good localization and contrast in the reconstruction simultaneously.

Figure 7 shows how the reconstruction of the kite-shaped phantom can be vastly improved. Note that not only is $\operatorname{supp}\delta\sigma$ better approximated, but the contrast is also greatly improved. It is not surprising that we can achieve an almost perfect reconstruction if $\operatorname{supp}\delta\sigma$ is known exactly; however, it is a good benchmark for the cases with overestimated support, as it shows how well the method can possibly do.

4.2. Partial boundary data

For the partial data problem, we choose $\Gamma=\Gamma_D=\Gamma_N=\{\theta\in(\theta_1,\theta_2)\}$ for $0\leq\theta_1<\theta_2\leq2\pi$.

In Figure 8 we observe that, with data on the top half of the unit circle, it is actually possible to get very good contrast and also reasonable localization of the two large inclusions. There is still a clear separation of the inclusions, while the small inclusion is not reconstructed at all. With data on the bottom half, the small inclusion is reconstructed almost as well as with full boundary data, but the larger inclusions are only vaguely visible. This is the kind of behaviour that is expected from partial data EIT, and in practice it implies that we can only expect a reasonable reconstruction close to where the measurements are taken.

In the rightmost reconstruction in Figure 8, there is a small artefact to the left, in a region where a reliable reconstruction is expected. This may be due to the partial data problem being very sensitive to noise, combined with the low number of current patterns ($K=10$).

Figure 7. Full data sparse reconstruction of the phantom in Figure 2(b). The applied prior information for the overestimated support is a 10% dilation of the correct shape.

Figure 8. Sparse reconstruction of the phantom in Figure 2(c). Left: $\Gamma=\partial\Omega$. Middle: $(\theta_1,\theta_2)=(0,\pi)$. Right: $(\theta_1,\theta_2)=(\pi,2\pi)$.

Figure 9. Sparse reconstruction of the phantom with a ball inclusion in Figure 2(a). (a) 50% boundary data, no prior. (b) 50% boundary data with 5% overestimated support. (c) 25% boundary data, no prior. (d) 25% boundary data with 5% overestimated support.

Figure 10. Sparse reconstruction of the phantom with a kite-shaped inclusion in Figure 2(b). (a) 50% boundary data, no prior. (b) 50% boundary data with 10% overestimated support. (c) 25% boundary data, no prior. (d) 25% boundary data with 10% overestimated support.

In panels (a) and (c) of Figures 9 and 10, it is observed that as the length of $\Gamma$ decreases, the reconstructed shape of the inclusion is rapidly deformed. By including prior information about the support of $\delta\sigma$, it is possible to rectify the deformation of the shape and obtain reconstructions with almost the correct shape, at a slightly worse contrast compared to the full boundary data reconstructions. This is observed for the ball- and kite-shaped inclusions in Figures 9 and 10, respectively.

4.3. Comparison with TV regularization

Another sparsity-promoting method is TV regularization, which promotes a sparse gradient in the solution. This can be achieved by minimizing the functional
$$\Psi_{\mathrm{TV}}(\delta\gamma)\equiv\sum_{k=1}^K J_k(\delta\gamma)+P_{\mathrm{TV}}(\delta\gamma),\qquad\delta\gamma\in\mathcal{A}_0,\tag{4.1}$$

where the discrepancy terms $J_k$ remain the same as in (1.4), but the penalty term is now given by
$$P_{\mathrm{TV}}(\delta\gamma)\equiv\alpha\int_\Omega\sqrt{|\nabla\delta\gamma|^2+b}\,dx.\tag{4.2}$$

Here $b>0$ is a constant that makes $P_{\mathrm{TV}}$ differentiable; it is chosen small so that $P_{\mathrm{TV}}$ approximates $\alpha\int_\Omega|\nabla\delta\gamma|\,dx$. The computational cost of determining the derivative of $\Psi_{\mathrm{TV}}$ is the same as for the sparsity regularization in (1.4), as it is dominated by the same $2K$ PDEs (see Remark 2.7). The minimization of (4.1) is implemented numerically using a plain steepest descent method.
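
A finite-difference sketch of evaluating this smoothed penalty (as a stand-in for the FEM setting) is:

```python
# A sketch of the smoothed TV penalty (4.2) on a nodal field, using forward
# differences on a regular grid with spacing h as a stand-in for the FEM
# setting: P_TV = alpha * int sqrt(|grad u|^2 + b) dx, differentiable for b > 0.
import numpy as np

def tv_smoothed(u, alpha, b=1e-5, h=1.0):
    ux = np.diff(u, axis=0, append=u[-1:, :]) / h    # forward differences
    uy = np.diff(u, axis=1, append=u[:, -1:]) / h
    return alpha * np.sum(np.sqrt(ux**2 + uy**2 + b)) * h**2
```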

For the numerical examples, the piecewise constant phantoms in Figure 2, panels (a) and (b), are used with the same noise level as in the previous sections. The value $b=10^{-5}$ is used for the penalty term in all the examples.

It should be noted that the colour scale in the following examples is not the same as for the phantoms, unlike in the previous reconstructions. This is because the TV reconstructions have a significantly lower contrast, in particular for partial data, and would be visually difficult to distinguish from the background conductivity on the correct colour scale.

Figure 11. TV reconstruction of the phantom with a ball inclusion in Figure 2(a). (a) Full boundary data. (b) 50% boundary data. (c) 25% boundary data.

Figure 12. TV reconstruction of the phantom with a kite-shaped inclusion in Figure 2(b). (a) Full boundary data. (b) 50% boundary data. (c) 25% boundary data.

As seen from Figures 11 and 12, the support of the inclusion is slightly overestimated in the case of full boundary data, and for the partial data cases the support is slightly larger than for the counterparts in Figures 9 and 10. It is also noticed that the TV reconstructions have a much lower contrast than the $\ell^1$ sparsity reconstructions, and that the contrast for the TV reconstructions is severely reduced when partial data are used. Moreover, the same type of shape deformation occurs for both methods in the case of partial data.

A typical feature of TV regularization is piecewise constant reconstructions; however, the reconstructions seen here have constant contrast levels with smooth transitions between them. This is partly due to the slight smoothing of the penalty term by the parameter $b>0$ to make it differentiable, but mostly because the discrepancy terms are non-convex, which may lead to local minima. The same kind of smooth transitions is also observed in TV-based methods for EIT in [Citation48].

5. Conclusions

We have extended the algorithm developed in [Citation36] for sparse reconstruction in electrical impedance tomography to the case of partial data. Furthermore, we have shown how a distributed regularization parameter can be applied to utilize spatial prior information. This led to numerical results showing simultaneously improved reconstruction of the support of the inclusions and of the contrast. The use of the distributed regularization parameter enables sharper edges in the reconstruction and vastly reduces the deformation of the inclusions in the partial data problem, even when the prior is overestimated.

The optimization problem is non-convex and the suggested algorithm is therefore only expected to find a local minimum. Initializing with the background conductivity in the numerical examples yields acceptable reconstructions for both full boundary data and partial boundary data.

The algorithm can be generalized to 3D reconstruction, under further assumptions on the boundary conditions $\{g_k\}_{k=1}^K$ and the amplitude of the perturbation $\delta\sigma$. This will be considered in a forthcoming paper [Citation38].

Additional information

Funding

This research is supported by the European Research Council [advanced grant number 291405 HD-Tomo].

Notes

No potential conflict of interest was reported by the authors.

References

  • Holder DS. Electrical impedance tomography: methods, history and applications. Bristol: IOP Publishing Ltd; 2005.
  • Abubakar A, Habashy TM, Li M, Liu J. Inversion algorithms for large-scale geophysical electromagnetic measurements. Inverse Prob. 2009;25:30 p. Article ID 123012.
  • York TA. Status of electrical tomography in industrial applications. Proc. SPIE Int. Soc. Opt. Eng. 2001;4188:175–190.
  • Calderón AP. On an inverse boundary value problem. In: Seminar on numerical analysis and its applications to continuum physics (Rio de Janeiro, 1980). Rio de Janeiro: Sociedade Brasiliera de Matemática; 1980. p. 65–73.
  • Sylvester J, Uhlmann G. A global uniqueness theorem for an inverse boundary value problem. Ann. Math. (2). 1987;125:153–169.
  • Nachman AI. Reconstructions from boundary measurements. Ann. Math. (2). 1988;128:531–576.
  • Novikov RG. A multidimensional inverse spectral problem for the equation $-\Delta\psi+(v(x)-Eu(x))\psi=0$. Funktsional. Anal. i Prilozhen. 1988;22:11–22, 96.
  • Nachman AI. Global uniqueness for a two-dimensional inverse boundary value problem. Ann. Math. (2). 1996;143:71–96.
  • Astala K, Päivärinta L. Calderón’s inverse conductivity problem in the plane. Ann. Math. (2). 2006;163:265–299.
  • Haberman B, Tataru D. Uniqueness in Calderón’s problem with Lipschitz conductivities. Duke Math. J. 2013;162:496–516.
  • Uhlmann G. Electrical impedance tomography and Calderón’s problem. Inverse Prob. 2009;25:39 p. Article ID 123011.
  • Alessandrini G. Stable determination of conductivity by boundary measurements. Appl. Anal. 1988;27:153–172.
  • Alessandrini G. Singular solutions of elliptic equations and the determination of conductivity by boundary measurements. J. Differ. Equ. 1990;84:252–272.
  • Mandache N. Exponential instability in an inverse problem for the Schrödinger equation. Inverse Prob. 2001;17:1435–1444.
  • Siltanen S, Mueller J, Isaacson D. An implementation of the reconstruction algorithm of A. Nachman for the 2D inverse conductivity problem. Inverse Prob. 2000;16:681–699.
  • Knudsen K, Lassas M, Mueller JL, Siltanen S. Regularized D-bar method for the inverse conductivity problem. Inverse Prob. Imaging. 2009;3:599–624.
  • Bikowski J, Knudsen K, Mueller JL. Direct numerical reconstruction of conductivities in three dimensions using scattering transforms. Inverse Prob. 2011;27:015002, 19.
  • Delbary F, Hansen PC, Knudsen K. Electrical impedance tomography: 3D reconstructions using scattering transforms. Appl. Anal. 2012;91:737–755.
  • Delbary F, Knudsen K. Numerical nonlinear complex geometrical optics algorithm for the 3D Calderón problem. Inverse Prob. Imaging. 2014;8:991–1012.
  • Bukhgeim AL, Uhlmann G. Recovering a potential from partial Cauchy data. Comm. Partial Differ. Equ. 2002;27:653–668.
  • Kenig CE, Sjöstrand J, Uhlmann G. The Calderón problem with partial data. Ann. Math (2). 2007;165:567–591.
  • Knudsen K. The Calderón problem with partial data for less smooth conductivities. Comm. Partial Differ. Equ. 2006;31:57–71.
  • Zhang G. Uniqueness in the Calderón problem with partial data for less smooth conductivities. Inverse Prob. 2012;28:105008, 18.
  • Isakov V. On uniqueness in the inverse conductivity problem with local data. Inverse Prob. Imaging. 2007;1:95–105.
  • Imanuvilov OY, Uhlmann G, Yamamoto M. The Calderón problem with partial data in two dimensions. J. Am. Math. Soc. 2010;23:655–691.
  • Heck H, Wang JN. Stability estimates for the inverse boundary value problem by partial Cauchy data. Inverse Prob. 2006;22:1787–1796.
  • Hamilton SJ, Siltanen S. Nonlinear inversion from partial EIT data: computational experiments. In: Stefanov P, Zworski M, editors. Inverse problems and applications. Vol. 615, Contemporary mathematics. Providence (RI): American Mathematical Society; 2014. p. 105–129.
  • Mueller JL, Siltanen S. Linear and nonlinear inverse problems with practical applications. Vol. 10, Computational science & engineering. Philadelphia (PA): Society for Industrial and Applied Mathematics (SIAM); 2012.
  • Somersalo E, Cheney M, Isaacson D. Existence and uniqueness for electrode models for electric current computed tomography. SIAM J. Appl. Math. 1992;52:1023–1040.
  • Gehre M, Kluth T, Lipponen A, Jin B, Seppänen A, Kaipio JP, Maass P. Sparsity reconstruction in electrical impedance tomography: an experimental evaluation. J. Comput. Appl. Math. 2012;236:2126–2136.
  • Gehre M, Kluth T, Sebu C, Maass P. Sparse 3D reconstructions in electrical impedance tomography using real data. Inverse Prob. Sci. Eng. 2014;22:31–44.
  • Daubechies I, Defrise M, De Mol C. An iterative thresholding algorithm for linear inverse problems with a sparsity constraint. Comm. Pure Appl. Math. 2004;57:1413–1457.
  • Bredies K, Lorenz DA, Maass P. A generalized conditional gradient method and its connection to an iterative shrinkage method. Comput. Optim. Appl. 2009;42:173–193.
  • Bonesky T, Bredies K, Lorenz DA, Maass P. A generalized conditional gradient method for nonlinear operator equations with sparsity constraints. Inverse Prob. 2007;23:2041–2058.
  • Jin B, Maass P. An analysis of electrical impedance tomography with applications to Tikhonov regularization. ESAIM: Control Optim. Calcul. Varia. 2012;18:1027–1048.
  • Jin B, Khan T, Maass P. A reconstruction algorithm for electrical impedance tomography based on sparsity regularization. Int. J. Numer. Methods Eng. 2012;89:337–353.
  • Borcea L. Electrical impedance tomography. Inverse Prob. 2002;18:R99–R136.
  • Garde H, Knudsen K. 3D reconstruction for partial data electrical impedance tomography using a sparsity prior. Submitted, 2014. Available from: http://arxiv.org/abs/1412.6288
  • Adams RA, Fournier JJF. Sobolev spaces. 2nd ed. Vol. 140, Pure and applied mathematics (Amsterdam). Amsterdam: Elsevier/Academic Press; 2003.
  • Neuberger JW. Sobolev gradients and differential equations. 2nd ed. Vol. 1670, Lecture notes in mathematics. Berlin: Springer-Verlag; 2010.
  • Barzilai J, Borwein JM. Two-point step size gradient methods. IMA J. Numer. Anal. 1988;8:141–148.
  • Wright SJ, Nowak RD, Figueiredo MAT. Sparse reconstruction by separable approximation. IEEE Trans. Signal Process. 2009;57:2479–2493.
  • Stampacchia G. Le problème de Dirichlet pour les équations elliptiques du second ordre à coefficients discontinus. Ann. Inst. Fourier. 1965;15:189–257.
  • Kirsch A, Grinberg N. The factorization method for inverse problems. Vol. 36, Oxford lecture series in mathematics and its applications. Oxford: Oxford University Press; 2008.
  • von Harrach B, Ullrich M. Monotonicity-based shape reconstruction in electrical impedance tomography. SIAM J. Math. Anal. 2013;45:3382–3403.
  • von Harrach B, Seo JK. Exact shape-reconstruction by one-step linearization in electrical impedance tomography. SIAM J. Math. Anal. 2010;42:1505–1518.
  • Logg A, Mardal KA, Wells GN. Automated solution of differential equations by the finite element method. Vol. 84, Lecture notes in computational science and engineering. Heidelberg: Springer; 2012; the FEniCS book.
  • Borsic A, Graham BM, Adler A, Lionheart W. In vivo impedance imaging with total variation regularization. IEEE Trans. Med. Imaging. 2010;29:44–54.
