Research Article

A Bayesian method for an inverse transmission scattering problem in acoustics

Pages 2274-2287 | Received 29 Dec 2020, Accepted 28 Mar 2021, Published online: 09 Apr 2021

Abstract

In this paper, we study an inverse transmission scattering problem for a time-harmonic acoustic wave from the viewpoint of Bayesian statistics. In Bayesian inversion, the solution of the inverse problem is the posterior distribution of the unknown parameters conditioned on the observational data. The shape of the scatterer is reconstructed from full-aperture and limited-aperture far-field measurement data. We first prove a well-posedness result for the posterior distribution in the sense of the Hellinger metric. Then, we employ a Markov chain Monte Carlo method based on the preconditioned Crank-Nicolson algorithm to extract information from the posterior distribution. Numerical results are given to demonstrate the effectiveness of the proposed method.

2010 Mathematics Subject Classifications:

1. Introduction

Transmission problems have important applications in many areas of physics, engineering, and mathematical science [Citation1]. Such problems have been extensively studied in the scattering of acoustic, elastic, and electromagnetic waves. One crucial problem that has attracted considerable interest is the inverse scattering problem of determining the physical and geometric properties of the scatterer. In this paper, we are concerned with an inverse transmission scattering problem for time-harmonic acoustic waves.

In the literature, there are many deterministic methods for solving the inverse acoustic transmission scattering problem numerically. These methods can be divided into two categories: iterative methods (see [Citation2–4]) and non-iterative methods (see [Citation5–8]). One of the main difficulties of iterative methods is computing global minimizers in the presence of local minima. Non-iterative methods such as the linear sampling method [Citation9] need to evaluate scattered fields at many sampling points in order to determine the scatterer.

Recently, many researchers have studied inverse problems from the viewpoint of Bayesian statistics [Citation10–12], and Bayesian inference has been developed to solve inverse scattering problems [Citation13–19]. In the Bayesian framework, all quantities are viewed as random variables, and the inverse problem is reformulated as a problem of statistical inference. Compared with deterministic approaches, which only provide a point estimate of the solution, the Bayesian method yields point estimates, such as the maximum a posteriori estimate and the conditional mean estimate, and is also able to quantify the uncertainty of the solution, for example through credible intervals.

The primary purpose of this paper is to use the Bayesian approach to reconstruct the shape of the scatterer from full-aperture and limited-aperture far-field measurement data. We choose Gaussian measures as the prior distribution, as they are tractable for theoretical analysis and computation in infinite-dimensional settings [Citation12]. Based on such priors [Citation12], we prove the well-posedness of the posterior distribution for this problem. By Bayes' formula, the solution of the inverse problem is the posterior distribution, which is analytically intractable due to the high dimensionality of the integrals and the non-linearity of the forward model. There are two primary techniques for extracting information from the posterior distribution. One is sampling by Markov chain Monte Carlo (MCMC) methods [Citation20–23]. The other is the variational Gaussian approximation, which approximates the posterior distribution with respect to the Kullback-Leibler divergence [Citation24–26]. In this paper, we sample the posterior distribution with an MCMC method based on the preconditioned Crank-Nicolson (pCN) algorithm, which is robust with respect to the discretization dimension [Citation27].

The rest of the paper is organized as follows. In Section 2, we present the forward transmission scattering problem, which is reduced to a system of coupled boundary integral equations. In Section 3, we apply the Bayesian approach to the inverse transmission problem and prove the well-posedness of the posterior distribution. In Section 4, we describe the Gaussian prior and how samples are drawn from it. Numerical results are provided in Section 5 to demonstrate the effectiveness of the proposed method.

2. Direct and inverse scattering problem

In this section, we describe the direct transmission scattering problem, whose solution is based on boundary integral equation methods, and formulate the inverse transmission scattering problem.

We assume that $\Omega \subset \mathbb{R}^2$ is a bounded, simply connected domain with $C^{2,\alpha}$-boundary $\Gamma$ ($0 < \alpha \le 1$). The direct scattering problem is to find the total field $u$ and the scattered field $u^s$ satisfying
$$\Delta u + k_1^2 u = 0 \quad \text{in } \Omega, \tag{1}$$
$$\Delta u^s + k_2^2 u^s = 0 \quad \text{in } \mathbb{R}^2 \setminus \overline{\Omega}, \tag{2}$$
and the transmission conditions
$$u = u^s + u^i \quad \text{on } \Gamma, \tag{3}$$
$$\frac{\partial u}{\partial \nu} = \frac{\partial u^s}{\partial \nu} + \frac{\partial u^i}{\partial \nu} \quad \text{on } \Gamma, \tag{4}$$
where $k_j > 0$, $j = 1, 2$, are the wave numbers and $\nu$ denotes the unit outward normal to $\Gamma$. In addition, the scattered field $u^s$ is required to satisfy the Sommerfeld radiation condition
$$\lim_{r \to \infty} r^{1/2} \left( \frac{\partial u^s}{\partial r} - i k_2 u^s \right) = 0, \quad r = |x|, \tag{5}$$
uniformly with respect to all $\hat{x} \in \mathbb{S} := \{ x \in \mathbb{R}^2 : |x| = 1 \}$. According to the Green representation formula [Citation1], we seek a solution in the form
$$u(x) = \int_\Gamma \Phi_1(x,y) \frac{\partial u^-}{\partial \nu}(y) \, ds(y) - \int_\Gamma \frac{\partial \Phi_1(x,y)}{\partial \nu(y)} u^-(y) \, ds(y), \quad x \in \Omega, \tag{6}$$
$$u^s(x) = \int_\Gamma \frac{\partial \Phi_2(x,y)}{\partial \nu(y)} u^{s,+}(y) \, ds(y) - \int_\Gamma \Phi_2(x,y) \frac{\partial u^{s,+}}{\partial \nu}(y) \, ds(y), \quad x \in \mathbb{R}^2 \setminus \overline{\Omega}, \tag{7}$$
where the superscripts $+$ and $-$ denote the limits (traces) on $\Gamma$ taken from the outside and inside of $\Gamma$, respectively. The function $\Phi_j(x,y) := \frac{i}{4} H_0^{(1)}(k_j |x - y|)$ is the fundamental solution of the two-dimensional Helmholtz equation with wave number $k_j$, $j = 1, 2$, where $H_0^{(1)}(\cdot)$ is the Hankel function of the first kind of order zero.
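As a small numerical aside (not part of the original derivation), the fundamental solution $\Phi_j$ can be evaluated directly with SciPy's Hankel function; the wave number and evaluation points below are arbitrary illustrative choices.

```python
import numpy as np
from scipy.special import hankel1

def fundamental_solution(k, x, y):
    """Phi(x, y) = (i/4) H_0^(1)(k |x - y|): the fundamental solution of the
    two-dimensional Helmholtz equation with wave number k (requires x != y)."""
    r = np.linalg.norm(np.asarray(x, dtype=float) - np.asarray(y, dtype=float))
    return 0.25j * hankel1(0, k * r)

# Evaluate Phi_1 for k_1 = 1 at two points a unit distance apart.
phi = fundamental_solution(1.0, [1.0, 0.0], [0.0, 0.0])
```

Since $H_0^{(1)}$ has a logarithmic singularity at the origin, quadrature rules for the boundary integral operators below must treat the diagonal $x = y$ specially.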

Define the boundary integral operators $S_j$, $K_j$, $K_j'$ and $T_j$, for $x \in \Gamma$:
$$(S_j \varphi)(x) := \int_\Gamma \Phi_j(x,y)\, \varphi(y) \, ds(y),$$
$$(K_j \varphi)(x) := \int_\Gamma \frac{\partial \Phi_j(x,y)}{\partial \nu(y)}\, \varphi(y) \, ds(y),$$
$$(K_j' \varphi)(x) := \int_\Gamma \frac{\partial \Phi_j(x,y)}{\partial \nu(x)}\, \varphi(y) \, ds(y),$$
$$(T_j \varphi)(x) := \frac{\partial}{\partial \nu(x)} \int_\Gamma \frac{\partial \Phi_j(x,y)}{\partial \nu(y)}\, \varphi(y) \, ds(y),$$
with density $\varphi$. Then, from the jump relations, $u$ and $u^s$ defined by (6) and (7) solve the boundary value problem (1)–(5), provided $u^-$, $\partial u^-/\partial \nu$, $u^{s,+}$ and $\partial u^{s,+}/\partial \nu$ satisfy the system of integral equations [Citation28]
$$\left( K_2 - \tfrac{1}{2} I \right) u^{s,+} - S_2 \frac{\partial u^{s,+}}{\partial \nu} = 0, \quad x \in \Gamma,$$
$$\left( K_1' - \tfrac{1}{2} I \right) \frac{\partial u^-}{\partial \nu} - T_1 u^- = 0, \quad x \in \Gamma.$$
Combining these with the transmission conditions (3) and (4), we have
$$\left( K_2 - \tfrac{1}{2} I \right) u^- - S_2 \frac{\partial u^{s,+}}{\partial \nu} = \left( K_2 - \tfrac{1}{2} I \right) u^i, \tag{8}$$
$$\left( K_1' - \tfrac{1}{2} I \right) \frac{\partial u^{s,+}}{\partial \nu} - T_1 u^- = -\left( K_1' - \tfrac{1}{2} I \right) \frac{\partial u^i}{\partial \nu}. \tag{9}$$
The uniqueness and existence results for the system of boundary integral equations (8) and (9) have been established in [Citation28]. The direct transmission scattering problem can then be solved by substituting the solutions of (8) and (9) into the integral representations (6) and (7) to obtain $u$ and $u^s$.

Meanwhile, the scattered field $u^s$ has the asymptotic representation (see [Citation1])
$$u^s(x) = \frac{e^{i k_2 |x|}}{\sqrt{|x|}} \left( u_\infty(\hat{x}) + O\!\left( \frac{1}{|x|} \right) \right), \quad |x| \to \infty,$$
uniformly in all directions $\hat{x}$. The far-field pattern $u_\infty$ of the scattered field $u^s$ is then given by
$$u_\infty(\hat{x}) = \frac{e^{i\pi/4}}{\sqrt{8 \pi k_2}} \int_\Gamma e^{-i k_2 \hat{x} \cdot y} \left( -i k_2\, \hat{x} \cdot \nu(y)\, u^{s,+}(y) - \frac{\partial u^{s,+}}{\partial \nu}(y) \right) ds(y). \tag{10}$$
Define the forward mapping $F$ from the boundary $\Gamma$ to the far-field pattern $u_\infty$, i.e.
$$u_\infty = F(\Gamma),$$
where the operator $F$ is determined by the integral formulations (8), (9) and (10). This relation makes it possible to reconstruct the scatterer's boundary from the far-field pattern, which often contains non-negligible errors and uncertainties.

Given the wave numbers $k_j$ ($j = 1, 2$) and the incident wave $u^i$, the inverse transmission scattering problem can be formulated as
$$M \circ F(\Gamma) + \eta = y, \tag{11}$$
where $\Gamma$ is the unknown, $M$ is the measurement operator, $M \circ F(\Gamma) := (u_\infty(\hat{x}_1), \ldots, u_\infty(\hat{x}_N))$ denotes the mapping from the boundary $\Gamma$ to the noise-free observations, $y \in Y := \mathbb{C}^N$ is the vector of noisy observations of $u_\infty$, and $\eta$ is an $N$-dimensional zero-mean Gaussian noise with covariance matrix $\Sigma \in \mathbb{R}^{N \times N}$, i.e. $\eta \sim N(0, \Sigma)$.
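To make the data model (11) concrete, the following sketch draws a synthetic observation vector $y$; the "noise-free" values here are placeholders for the far-field values $u_\infty(\hat{x}_j)$ that the boundary integral solver would produce, and the number of directions and noise level are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 16  # number of observation directions x_hat_1, ..., x_hat_N

# Placeholder for the noise-free observations M(F(Gamma)): in practice these
# are far-field values from the forward solver; random values are used here
# purely to illustrate the additive-noise data model.
clean = rng.standard_normal(N) + 1j * rng.standard_normal(N)

# Zero-mean Gaussian noise eta ~ N(0, Sigma) with Sigma = delta^2 * I,
# added to the real and imaginary parts independently.
delta = 0.01
eta = delta * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
y = clean + eta  # noisy observations, y in C^N
```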

3. Bayesian approach

In this section, we reformulate the inverse problem as a problem of statistical inference and establish the well-posedness of the posterior distribution.

Assume that the scatterer $\Omega$ is starlike with respect to the origin. The boundary $\Gamma$ of the scatterer $\Omega$ can then be parameterized as
$$\Gamma := \{ r(\theta) (\cos\theta, \sin\theta) = \exp(q(\theta)) (\cos\theta, \sin\theta) : \theta \in [0, 2\pi) \},$$
where $q(\theta) = \ln r(\theta)$ and $0 < r(\theta) < r_{\max}$. Set $F := M \circ F$; using the above parameterization, we can rewrite (11) as the statistical model
$$F(q) + \eta = y, \tag{12}$$
where $q \in X$ for a suitable Banach space $X$. In this paper, we choose $X$ to be the Hölder space $X := C^{2,\alpha}([0, 2\pi])$, $\alpha \in (0, 1]$, with the norm
$$\|q\|_X := \|q\|_\infty + \|q'\|_\infty + \|q''\|_\infty + \sup_{\substack{\theta, \tilde{\theta} \in [0,2\pi] \\ \theta \neq \tilde{\theta}}} \frac{|q''(\theta) - q''(\tilde{\theta})|}{|\theta - \tilde{\theta}|^\alpha}.$$
We view the quantities in (12) as random variables. The Bayesian formulation recasts the inverse problem as a problem of statistical inference: the solution of the inverse problem is the posterior distribution $\mu^y$ of the random variable $q \mid y$. Denote the prior distribution of $q$ by $\mu_0$. Since the noise is additive Gaussian, i.e. $\eta \sim N(0, \Sigma)$, the distribution of $y$ conditional on $q$ is
$$\pi(y \mid q) \propto \exp(-\phi(q; y)), \tag{13}$$
where the potential $\phi(q; y)$ is defined by
$$\phi(q; y) := \frac{1}{2} |F(q) - y|_\Sigma^2 = \frac{1}{2} \left| \Sigma^{-1/2} (F(q) - y) \right|^2,$$
with $|\cdot|$ denoting the Euclidean norm. Bayes' theorem in infinite dimensions is expressed through the Radon-Nikodym derivative of the posterior measure $\mu^y$ with respect to the prior measure $\mu_0$:
$$\frac{d\mu^y}{d\mu_0} \propto \exp(-\phi(q; y)). \tag{14}$$
In the rest of this section, we discuss the well-posedness of the posterior distribution in the sense of the Hellinger metric.
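The potential $\phi(q; y)$ defined above is straightforward to implement once the forward map has been evaluated; a minimal sketch with a generic covariance matrix, not tied to the paper's solver:

```python
import numpy as np

def potential(Fq, y, Sigma):
    """phi(q; y) = 0.5 * |Sigma^{-1/2} (F(q) - y)|^2, the negative
    log-likelihood for additive Gaussian noise eta ~ N(0, Sigma).

    Fq is the (possibly complex) vector of predicted observations F(q)."""
    L = np.linalg.cholesky(Sigma)   # Sigma = L L^T
    r = np.linalg.solve(L, Fq - y)  # r = L^{-1}(F(q) - y), so |r|^2 = |Sigma^{-1/2}(F(q) - y)|^2
    return 0.5 * float(np.real(np.vdot(r, r)))
```

For $\Sigma = \delta^2 I$, as used in the numerical experiments, this reduces to $\phi = |F(q) - y|^2 / (2\delta^2)$.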

Lemma 3.1

For every $\epsilon > 0$, there exists $M = M(\epsilon)$ such that
$$|F(q)|_\Sigma \le \exp\left( \epsilon \|q\|_X^2 + M \right), \tag{15}$$
for all $q \in X$.

Proof.

From the definition of the operator $F$, it suffices to prove
$$|u_\infty(\hat{x}_j)| \le \exp\left( \epsilon \|q\|_X^2 + M \right), \quad j = 1, \ldots, N, \tag{16}$$
that is,
$$\left| \frac{e^{i\pi/4}}{\sqrt{8\pi k_2}} \int_\Gamma e^{-i k_2 \hat{x}_j \cdot y} \left( -i k_2\, \hat{x}_j \cdot \nu(y)\, u^{s,+}(y) - \frac{\partial u^{s,+}}{\partial \nu}(y) \right) ds(y) \right| \le \exp\left( \epsilon \|q\|_X^2 + M \right). \tag{17}$$
Set $\Gamma = \{ \exp(q(\theta)) (\cos\theta, \sin\theta) \} := \{ e^q \hat{r} \}$, $\hat{r} = (\cos\theta, \sin\theta)$, $\theta \in [0, 2\pi)$. Parameterizing the integral in (17), we arrive at the estimate, for $j = 1, \ldots, N$,
$$|u_\infty(\hat{x}_j)| = \left| \frac{e^{i\pi/4}}{\sqrt{8\pi k_2}} \int_0^{2\pi} e^{-i k_2 \hat{x}_j \cdot (e^q \hat{r})} \left( -i k_2\, \hat{x}_j \cdot \nu(\theta)\, u^{s,+}(e^q \hat{r}) - \frac{\partial u^{s,+}}{\partial \nu}(e^q \hat{r}) \right) e^{q} \sqrt{1 + (q')^2} \, d\theta \right|$$
$$\le 2\pi \left| \frac{e^{i\pi/4}}{\sqrt{8\pi k_2}} \right| \left\| e^{-i k_2 \hat{x}_j \cdot (e^q \hat{r})} \right\|_\infty \left( \left\| i k_2\, \hat{x}_j \cdot \nu(\theta)\, u^{s,+}(e^q \hat{r}) \right\|_\infty + \left\| \frac{\partial u^{s,+}}{\partial \nu}(e^q \hat{r}) \right\|_\infty \right) \left\| e^{q} \sqrt{1 + (q')^2} \right\|_\infty. \tag{18}$$
It is known that
$$\left| e^{-i k_2 \hat{x}_j \cdot (e^q \hat{r})} \right| \le C, \quad \text{for } e^q \hat{r} \in \Gamma.$$
Due to the well-posedness of equations (8) and (9) [Citation29], $u^{s,+}$ and $\partial u^{s,+}/\partial \nu$ are bounded, which means
$$\left| i k_2\, \hat{x}_j \cdot \nu(\theta)\, u^{s,+}(e^q \hat{r}) \right| + \left| \frac{\partial u^{s,+}}{\partial \nu}(e^q \hat{r}) \right| \le C.$$
Thus, we obtain
$$|u_\infty(\hat{x}_j)| \le C \sqrt{1 + \|q'\|_\infty^2}\, \exp(\|q\|_\infty), \quad j = 1, \ldots, N. \tag{19}$$
When $\|q'\|_\infty \le 1$, it is clear that
$$\sqrt{1 + \|q'\|_\infty^2} \le C. \tag{20}$$
When $\|q'\|_\infty \ge 1$, by Young's inequality we have
$$\sqrt{1 + \|q'\|_\infty^2} \le \sqrt{2}\, \|q'\|_\infty \le \sqrt{2} \exp(\|q\|_X) \le \sqrt{2} \exp\left( \frac{1}{2\epsilon} + \frac{\epsilon \|q\|_X^2}{2} \right). \tag{21}$$
In addition, we have
$$\exp(\|q\|_\infty) \le \exp(\|q\|_X) \le \exp\left( \frac{1}{2\epsilon} + \frac{\epsilon \|q\|_X^2}{2} \right). \tag{22}$$
Combining the estimates (20), (21) and (22) yields
$$|F(q)|_\Sigma \le \exp\left( \epsilon \|q\|_X^2 + M \right),$$
which completes the proof.

Remark

Unless otherwise stated, the constant C at different places may have different values and depend on different variables.

The Lipschitz continuity of the operator $F$ is given in the following lemma [Citation15].

Lemma 3.2

For every $\tau > 0$, there exists $L = L(\tau) > 0$ such that
$$|F(q_1) - F(q_2)|_\Sigma \le L \|q_1 - q_2\|_X, \tag{23}$$
for all $q_1, q_2 \in X$ with $\max\{\|q_1\|_X, \|q_2\|_X\} < \tau$.

Theorem 3.1

Consider the problem
$$y = F(q) + \eta,$$
with the terms defined as above. Assume that $\mu_0$ is a Gaussian measure satisfying $\mu_0(X) = 1$. Then the posterior distribution given by (14) is well-defined, and $\mu^y$ is absolutely continuous with respect to $\mu_0$ ($\mu^y \ll \mu_0$). Furthermore, the posterior measure $\mu^y$ is Lipschitz continuous with respect to the data $y$ in the Hellinger distance: there exists $\tilde{L} = \tilde{L}(\tau) > 0$ such that, for all $y, y'$ with $\max\{|y|_\Sigma, |y'|_\Sigma\} < \tau$,
$$d_H(\mu^y, \mu^{y'}) \le \tilde{L}(\tau)\, |y - y'|_\Sigma. \tag{24}$$

Proof.

With the boundedness and continuity of the forward mapping $F : X \to Y$ proved in Lemmas 3.1 and 3.2, we can verify that the associated potential $\phi$ satisfies Assumption 2.6 in [Citation12]. The first result then follows from an application of Theorem 6.29 in [Citation12]. Furthermore, the Lipschitz continuity of the posterior $\mu^y$ with respect to the data $y$ follows from Theorem 4.2 in [Citation12].

Remark

Let $\nu_0$ be a reference measure such that $\mu_1$ and $\mu_2$ are both absolutely continuous with respect to $\nu_0$. The Hellinger distance is then defined by
$$d_H(\mu_1, \mu_2) = \left( \frac{1}{2} \int \left( \sqrt{\frac{d\mu_1}{d\nu_0}} - \sqrt{\frac{d\mu_2}{d\nu_0}} \right)^2 d\nu_0 \right)^{1/2}.$$

4. Samples from the prior

It is well known that the prior distribution $\mu_0$ plays an important role in Bayesian inference. In this section, we take a Gaussian distribution as the prior.

One often assumes a Gaussian prior with mean $m_0$ and covariance operator $\mathcal{C}_0$, i.e. $\mu_0 = N(m_0, \mathcal{C}_0)$, where $\mathcal{C}_0 := A^{-s}$ is symmetric, positive and of trace class for $s > \frac{1}{2}$. Here the operator $A$ is assumed to be Laplacian-like in the sense of Assumption 2.9 in [Citation12]. This choice of covariance operator is motivated by the fact that $A^{-s}$ is a trace-class operator on $L^2[0, 2\pi]$, which guarantees bounded variance and almost surely pointwise well-defined samples, so that $\mu_0(X) = 1$ holds (see [Citation12]).

We specify a Gaussian prior $\mu_0$ consistent with the above theory [Citation13]:
$$q''(\theta) \sim N(0, A^{-1}),$$
where $A := -d^2/d\theta^2$ with domain
$$D(A) := \left\{ v \in H^2[0, 2\pi] : \int_0^{2\pi} v(\theta) \, d\theta = 0 \right\}.$$
Here $H^2[0, 2\pi]$ is the standard Sobolev space with periodic boundary conditions, i.e. if $v \in H^2[0, 2\pi]$, then
$$\|v\|_{H^2[0,2\pi]}^2 := \sum_{n \in \mathbb{Z}} (1 + n^2)^2 |c_n|^2 < +\infty,$$
where the $c_n$ are the Fourier coefficients of $v$. The eigenvalues of $A$ are $n^2$, $n \in \mathbb{N}$, with corresponding eigenfunctions $\phi_{2n-1} := \sin(n\theta)/\sqrt{\pi}$ and $\phi_{2n} := \cos(n\theta)/\sqrt{\pi}$ (we take $\phi_0 := 1/\sqrt{2\pi}$ [Citation13]). According to the Karhunen-Loève (KL) expansion [Citation12], we have
$$q''(\theta) = \sum_{n=1}^{\infty} \left( \frac{a_n}{n} \frac{\cos(n\theta)}{\sqrt{\pi}} + \frac{b_n}{n} \frac{\sin(n\theta)}{\sqrt{\pi}} \right), \tag{25}$$
where $(a_n)$ and $(b_n)$ are independent and identically distributed (i.i.d.) sequences with $a_n, b_n \sim N(0,1)$. Integrating $q''(\theta)$ twice, we obtain
$$q(\theta) = \frac{a_0}{\sqrt{2\pi}} + \sum_{n=1}^{\infty} \left( \frac{a_n}{n^3} \frac{\cos(n\theta)}{\sqrt{\pi}} + \frac{b_n}{n^3} \frac{\sin(n\theta)}{\sqrt{\pi}} \right), \tag{26}$$
where $a_0 \sim N(0,1)$ is a Gaussian random variable. The prior distribution of $q$ is indeed Gaussian, by the linearity and continuity of the integral operator (see [Citation30]). Moreover, $q$ is $C^{2,\alpha}$-Hölder continuous almost surely, since $q'' \in C^{0,\alpha}([0, 2\pi])$ almost surely under the Gaussian measure [Citation13]. In the numerical implementation, we approximate samples from the prior by the truncated KL expansion
$$q(\theta) = \frac{a_0}{\sqrt{2\pi}} + \sum_{n=1}^{N} \left( \frac{a_n}{n^3} \frac{\cos(n\theta)}{\sqrt{\pi}} + \frac{b_n}{n^3} \frac{\sin(n\theta)}{\sqrt{\pi}} \right). \tag{27}$$
Before discussing the numerical implementation of Bayesian inference with this Gaussian prior, we briefly describe the sampling method used to draw samples from the posterior distribution. We adopt the pCN-MCMC algorithm [Citation27], whose proposal is
$$q_{\mathrm{pro}} = \sqrt{1 - \beta^2}\, q + \beta \omega,$$
where $q$ is the current state, $q_{\mathrm{pro}}$ is the proposed state, $\omega$ is a draw from the prior, and $\beta \in [0, 1]$ is the jump parameter controlling the step size.
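A minimal sketch of prior sampling and the pCN proposal, working in the whitened coefficient space where the prior on $(a_0, a_1, \ldots, a_N, b_1, \ldots, b_N)$ is i.i.d. $N(0,1)$; the grid size and truncation level here are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(1)
N_kl = 6                                    # truncation level of the KL expansion
theta = np.linspace(0.0, 2.0 * np.pi, 128, endpoint=False)

def kl_to_q(coeffs, theta):
    """Map whitened KL coefficients (a0, a1..aN, b1..bN), each N(0,1) under
    the prior, to q(theta) via the truncated expansion (27)."""
    a0, a, b = coeffs[0], coeffs[1:N_kl + 1], coeffs[N_kl + 1:]
    q = np.full_like(theta, a0 / np.sqrt(2.0 * np.pi))
    for n in range(1, N_kl + 1):
        q += (a[n - 1] * np.cos(n * theta)
              + b[n - 1] * np.sin(n * theta)) / (n**3 * np.sqrt(np.pi))
    return q

def pcn_proposal(coeffs, beta, rng):
    """pCN proposal q_pro = sqrt(1 - beta^2) q + beta omega, omega ~ prior.
    In whitened coordinates, omega is simply a standard normal vector."""
    omega = rng.standard_normal(coeffs.shape)
    return np.sqrt(1.0 - beta**2) * coeffs + beta * omega

coeffs = rng.standard_normal(2 * N_kl + 1)  # one draw from the prior
q = kl_to_q(coeffs, theta)
```

Working with the whitened coefficients rather than the function values keeps the prior exactly invariant under the proposal, which is the property that makes pCN dimension-robust.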

5. Numerical results

In the following, we present numerical results to show the performance of the proposed method. We take the Gaussian distribution discussed above as the prior and truncate the series in (27) at $N = 6$, where $a_0$, $a_n$ and $b_n$ are i.i.d. $N(0,1)$. We fix the wave numbers $k_1 = 1$, $k_2 = 2$, and the incident field $u^i = e^{i k_2 x \cdot d}$ with incident direction $d = (1, 0)$. The synthetic data are generated by solving the forward scattering problem (1)–(5) with the boundary element method [Citation28] and adding Gaussian noise to the resulting solution, where the noise covariance is $\Sigma = \delta^2 I$. The pCN-MCMC algorithm is run for $J = 10^4$ iterations, with the first $2 \times 10^3$ samples discarded as burn-in. The step size $\beta$ is tuned so that the acceptance rate is about 20%. Define the observation apertures
$$\gamma_1 := [0, 2\pi), \quad \gamma_2 := \left[0, \tfrac{3\pi}{2}\right), \quad \gamma_3 := [0, \pi).$$
The far-field pattern data are given by
$$\{ u_\infty(\hat{x}, d) : \hat{x} = (\cos\theta, \sin\theta),\ \theta \in \gamma_i \}, \quad i = 1, 2, 3.$$
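A complete pCN accept/reject loop matching the run length and burn-in above might look as follows; the potential is a toy stand-in, since the true data misfit requires the boundary element forward solver.

```python
import numpy as np

rng = np.random.default_rng(2)

def phi(q):
    # Toy stand-in for the data misfit 0.5 |Sigma^{-1/2}(F(q) - y)|^2;
    # the actual potential requires evaluating the forward solver.
    return 0.5 * np.sum(q**2)

beta = 0.2                           # jump parameter (tuned for ~20% acceptance)
J, burn_in = 10_000, 2_000
q = rng.standard_normal(13)          # whitened KL coefficients (N = 6)
chain, accepted = [], 0
for _ in range(J):
    q_pro = np.sqrt(1.0 - beta**2) * q + beta * rng.standard_normal(q.shape)
    # pCN acceptance probability: min(1, exp(phi(q) - phi(q_pro))).
    if np.log(rng.uniform()) < phi(q) - phi(q_pro):
        q, accepted = q_pro, accepted + 1
    chain.append(q)
samples = np.array(chain[burn_in:])  # posterior samples after burn-in
rate = accepted / J                  # monitor this while tuning beta
```

Note that the pCN acceptance ratio involves only the potential $\phi$, not the prior: the prior contribution cancels exactly because the proposal preserves the Gaussian prior.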

Example 5.1

A kite-shaped obstacle:
$$\Gamma_{\mathrm{kite}} = (\cos\theta + 0.65 \cos 2\theta - 0.65,\ 1.5 \sin\theta), \quad \theta \in [0, 2\pi).$$
In Figure 1, the conditional mean estimates of the posterior distribution are displayed for full-aperture observations with noise levels $\delta = 0.005, 0.01, 0.05$. These numerical results show that the proposed method is effective in recovering the shape of the scatterer. Moreover, the noise level in the data is a crucial factor in the quality of the reconstruction: unsurprisingly, the results improve as the noise level $\delta$ decreases. Fixing the noise level $\delta = 0.01$, we present the reconstructions from different limited-aperture data in Figure 2. The results show the effect of the observation aperture on the posterior distribution: the reconstructions deteriorate as the aperture shrinks. Moreover, as one would expect, the segment of the scatterer's boundary close to the observation directions is captured well. Finally, some statistical information is presented. Without loss of generality, we show the Markov chains of the coefficients $a_0$, $a_1$ and $b_1$ in Figure 3 and omit the other coefficients. As shown in Figure 3, the chains with different noise levels and different apertures all reach stationarity. Figure 4 shows the trace plots of the negative log-likelihood $\phi(q)$ (the data misfit) and the corresponding autocorrelation functions for different noise levels and apertures, respectively. After a few iterations, the pCN chain reaches its stationary regime, and the corresponding autocorrelation functions decay to zero.
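The kite boundary used in this example is easy to reproduce; a short sketch (the number of boundary points is an arbitrary choice):

```python
import numpy as np

theta = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)
# Kite-shaped boundary: Gamma_kite = (cos t + 0.65 cos 2t - 0.65, 1.5 sin t).
x_pts = np.cos(theta) + 0.65 * np.cos(2.0 * theta) - 0.65
y_pts = 1.5 * np.sin(theta)
```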

Figure 1. Reconstruction for the kite-shaped scatterer with δ=0.005,0.01,0.05.


Figure 2. Reconstruction for the kite-shaped scatterer with γ1,γ2,γ3, δ=0.01.


Figure 3. The first row denotes the Markov chains of the coefficients a0, a1 and b1 with δ=0.005,0.01,0.05. The second row denotes the Markov chains of the coefficients a0, a1 and b1 with γ1,γ2,γ3, δ=0.01.


Figure 4. The traceplots of the negative log-likelihood ϕ(q) and the corresponding auto-correlation functions for the kite-shaped scatterer, respectively.


Example 5.2

A pear-shaped obstacle:
$$\Gamma_{\mathrm{pear}} = (r(\theta) \cos\theta,\ r(\theta) \sin\theta), \quad \theta \in [0, 2\pi), \quad \text{where } r(\theta) = \frac{5 + \sin 3\theta}{6}.$$
In Figure 5, we present the reconstructions from full-aperture data with noise levels $\delta = 0.005, 0.01, 0.05$. The proposed method effectively reconstructs the shape of the scatterer at all noise levels, and the reconstruction is better when the data are polluted by a smaller noise level. Figure 6 shows the reconstructed scatterer from different limited-aperture data; the proposed method is also able to recover the boundary of the scatterer from limited-aperture data. In Figure 7, the Markov chains of the coefficients $a_0$, $a_1$ and $b_1$ are displayed for the different noise levels and limited apertures, respectively. As shown in Figure 7, these chains essentially reach a stationary state. Figure 8 displays the trace plots of the negative log-likelihood $\phi(q)$ and the corresponding autocorrelation functions; we observe that the autocorrelation functions decay to zero more quickly for larger noise levels or smaller apertures.
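Similarly, the pear-shaped boundary can be generated as follows (again with an arbitrary number of boundary points):

```python
import numpy as np

theta = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)
# Pear-shaped boundary: r(t) = (5 + sin 3t) / 6, a starlike radial function.
r = (5.0 + np.sin(3.0 * theta)) / 6.0
x_pts, y_pts = r * np.cos(theta), r * np.sin(theta)
```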

Figure 5. Reconstruction for the pear-shaped scatterer with δ=0.005,0.01,0.05.


Figure 6. Reconstruction for the pear-shaped scatterer with γ1,γ2,γ3, δ=0.01.


Figure 7. The first row denotes the Markov chains of the coefficients a0, a1 and b1 with δ=0.005,0.01,0.05. The second row denotes the Markov chains of the coefficients a0, a1 and b1 with γ1,γ2,γ3, δ=0.01.


Figure 8. The traceplots of the negative log-likelihood ϕ(q) and the corresponding auto-correlation functions for the pear-shaped scatterer, respectively.


6. Conclusion

In this work, we considered Bayesian inference for reconstructing the shape of a scatterer. We chose a Gaussian measure as the prior and proved a well-posedness result for the posterior distribution. The numerical results show the effectiveness of the proposed method. In future work, we would like to use the variational Gaussian approximation instead of sampling methods to extract information from the posterior distribution. Moreover, the Bayesian method can be developed for more complex models, such as transmission eigenvalue problems and elastic and electromagnetic scattering problems.

Acknowledgments

This work is supported by the National Natural Science Foundation of China under grants No.11771068 and No.11501087.

Disclosure statement

No potential conflict of interest was reported by the author(s).


References

  • Colton D, Kress R. Inverse acoustic and electromagnetic scattering theory. 2nd ed. Berlin: Springer-Verlag; 1998. (Applied mathematical sciences).
  • Hohage T, Schormann C. A Newton-type method for a transmission problem in inverse scattering. Inverse Probl. 1998;14:1207–1227.
  • Lee KM. Inverse transmission scattering problem via a Dirichlet-to-Neumann map. Eng Anal Bound Elem. 2019;101:214–220.
  • Ghosh Roy DN, Warner J, Couchman LS, et al. Inverse obstacle transmission problem in acoustics. Inverse Probl. 1998;14:903–929.
  • Colton D, Coyle J, Monk P. Recent developments in inverse acoustic scattering theory. SIAM Rev. 2000;42:369–414.
  • Colton D, Kirsch A. A simple method for solving inverse scattering problems in the resonance region. Inverse Probl. 1996;12:383–393.
  • Colton D, Piana M, Potthast R. A simple method using Morozov's discrepancy principle for solving inverse scattering problems. Inverse Probl. 1997;13:1477–1493.
  • Yang J, Zhang B, Zhang R. A sampling method for the inverse transmission problem for periodic media. Inverse Probl. 2012;28:Article ID: 035004.
  • Colton D, Haddar H, Piana M. The linear sampling method in inverse electromagnetic scattering theory. Inverse Probl. 2003;19:S105–S137.
  • Dashti M, Stuart AM. The Bayesian approach to inverse problems. In: Handbook of uncertainty quantification. Cham: Springer; 2017.
  • Kaipio JP, Somersalo E. Statistical and computational inverse problems. New York: Springer-Verlag; 2005. (Applied mathematical sciences).
  • Stuart AM. Inverse problems: A Bayesian perspective. Acta Numer. 2010;19:451–559.
  • Bui-Thanh T, Ghattas O. An analysis of infinite dimensional Bayesian inverse shape acoustic scattering and its numerical approximation. SIAM/ASA J Uncertain Quantification. 2014;2:203–222.
  • Huang J, Deng Z, Xu L. Bayesian approach for inverse interior scattering problems with limited aperture. Appl Anal. 2020;1–14. https://doi.org/10.1080/00036811.2020.1781828.
  • Li Z, Deng Z, Sun J. Extended-sampling-Bayesian method for limited aperture inverse scattering problems. SIAM J Imag Sci. 2020;13:422–444.
  • Li Z, Liu Y, Sun J, et al. Quality-Bayesian approach to inverse acoustic source problems with partial data. SIAM J Sci Comput. 2021;43:A1062–A1080.
  • Liu J, Liu Y, Sun J. An inverse medium problem using Stekloff eigenvalues and a Bayesian approach. Inverse Probl. 2019;35:Article ID: 94004.
  • Wang Y. Seismic inversion: theory and applications. Hoboken: John Wiley and Sons; 2016.
  • Wang Y, Ma F, Zheng E. Bayesian method for shape reconstruction in the inverse interior scattering problem. Math Probl Eng. 2015;2:1–12.
  • Bui-Thanh T, Ghattas O, Martin J, et al. A computational framework for infinite-dimensional Bayesian inverse problems part I: the linearized case, with application to global seismic inversion. SIAM J Sci Comput. 2013;35:A2494–A2523.
  • Cui T, Law KJH, Marzouk YM. Dimension-independent likelihood-informed MCMC. J Comput Phys. 2016;304:109–137.
  • Martin J, Wilcox LC, Burstedde C, et al. A stochastic Newton MCMC method for large-scale statistical inverse problems with application to seismic inversion. SIAM J Sci Comput. 2012;34:A1460–A1487.
  • Petra N, Martin J, Stadler G, et al. A computational framework for infinite-dimensional Bayesian inverse problems, part II: Stochastic Newton MCMC with application to ice sheet flow inverse problems. SIAM J Sci Comput. 2014;36:A1525–A1555.
  • Archambeau C, Cornford D, Opper M, et al. Gaussian process approximations of stochastic differential equations. In: JMLR: Workshop and Conference Proceedings, Bletchley Park, UK; 2007; 1:1–16.
  • Arridge SR, Ito K, Jin B, et al. Variational Gaussian approximation for Poisson data. Inverse Probl. 2018;34:Article ID: 025005.
  • Challis E, Barber D. Gaussian Kullback-Leibler approximate inference. J Mach Learn Res. 2013;14:2239–2286.
  • Cotter SL, Roberts GO, Stuart AM, et al. MCMC methods for functions: modifying old algorithms to make them faster. Statist Sci. 2013;28:424–446.
  • Hsiao GC, Xu L. A system of boundary integral equations for the transmission problem in acoustics. Appl Numer Math. 2011;61:1017–1029.
  • Hsiao GC, Wendland WL. Boundary integral equations. Berlin: Springer-Verlag; 2008.
  • Hairer M. Introduction to stochastic PDEs. 2009. arXiv:0907.4178.
