
A total variation regularization method for an inverse problem of recovering an unknown diffusion coefficient in a parabolic equation

Pages 1453-1473 | Received 27 Aug 2019, Accepted 18 Jan 2020, Published online: 16 Feb 2020

Abstract

This paper studies an inverse problem of recovering an unknown diffusion coefficient in a parabolic equation. We adopt a total variation regularization method to deal with the ill-posedness. This method has the advantage of handling problems in which the solution is non-smooth or discontinuous. By transforming the problem into an optimal control problem, we derive a necessary condition for the minimizer of the control functional. Through some a priori estimates for the direct problem, the uniqueness and stability of the minimizer are obtained. In the numerical part, a Gauss–Jacobi iteration scheme is used to deal with the non-linear term. Some numerical examples are presented to illustrate the performance of the proposed algorithm.


1. Introduction

The mathematical models describing diffusion processes are of great importance in many scientific fields, such as heat conduction, diffusion, groundwater flow, oil reservoir simulation, etc. The aim of the direct problem is to determine the diffusion field from the system parameters. However, in many practical situations, the diffusion parameters may not be known. The purpose of the inverse problem is to reconstruct these parameters from observation data of the diffusion field. In this paper, we consider a simple model based on the following parabolic equation
$$\begin{cases} u_t-(k(x)u_x)_x=f(x,t), & (x,t)\in D_T,\\ u(x,0)=\varphi(x), & x\in[0,l],\\ u(0,t)=u(l,t)=0, & t\in[0,T], \end{cases}\tag{1}$$
where $D_T=(0,l)\times(0,T]$, $f\in L^2(D_T)$ and $\varphi\in L^2[0,l]$ are two given functions, and $k(x)$ is an unknown diffusion coefficient. Assume that the final observation data, i.e.
$$u(x,T)=s(x),\quad x\in[0,l],\tag{2}$$
is given. The goal of the inverse problem is to reconstruct $k(x)$ in Equation (1). The mathematical model (1) arises in many engineering and physical settings, for example, heat conduction in a rod. In contrast to the corresponding direct problem, the inverse problem is ill posed [Citation1]. In many practical applications, the observation data $s(x)$ in (2) is obtained by measurement. A small noise in $s(x)$ may cause a large change in the reconstruction of $k(x)$, which may lead to a meaningless result [Citation2,Citation3].

There have been many works related to this inverse problem. In [Citation4], a semigroup method is used to analyse the existence of the solution of the inverse problem, and a local representation of the unknown diffusion coefficient is obtained at the end points. In [Citation5] the uniqueness and conditional stability of the inverse problem are studied by transforming the problem into a non-linear operator equation; the numerical solution is obtained by an iteratively regularized Gauss–Newton method. In [Citation6] optimization schemes with both $H^1$-regularization and BV-regularization are discussed; the existence and convergence of the approximate solution are obtained in the framework of finite element analysis, and the Armijo algorithm is applied for the numerical solution. In [Citation7] the problem is transformed into an optimal control problem, and the uniqueness and stability of the minimizer of the control functional are proved; a stable numerical solution is obtained by a gradient iteration method. There are also some other works, see [Citation8–16] and the references therein.

The optimization method is a powerful tool for dealing with ill-posed problems [Citation17–20]. The basic idea is to restrict the problem to some compact set, and then obtain the solution by minimizing the following functional [Citation21]
$$\min_{x\in X}\ \|F(x)-y\|^2+\lambda\hat J(x),\qquad y\in Y,$$
where $F:X\to Y$ is the forward map, $\lambda>0$ is a regularization parameter and $\hat J(x)$ is a penalty term. According to prior knowledge about the inversion solution, $\hat J(x)$ can be chosen differently. The well-known Tikhonov regularization method [Citation22] uses the $L^2$ norm as the penalty term, and this method usually works well for continuous models. However, in many practical applications, the solution of the problem may not be continuous. For example, in image processing, a common problem of Tikhonov regularization is that it smooths out the sharp edges of an image and cannot preserve detailed information very well [Citation23]. In [Citation24] a total variation regularization method was proposed by Rudin et al., and good results were achieved in the field of image processing. The definition of the total variation [Citation25,Citation26] can be expressed as
$$TV(k):=\int_\Omega|\nabla k|\,dx=\sup\Big\{\int_\Omega k\,\mathrm{div}(v)\,dx:\ v\in C_0^1(\Omega),\ |v(x)|\le1\Big\}.$$
The total variation of $k$ is also known as the bounded variation of $k$. The space of all functions in $L^1(\Omega)$ with bounded variation is denoted by
$$BV(\Omega)=\Big\{k\in L^1(\Omega):\int_\Omega|\nabla k|\,dx<\infty\Big\},$$
and the corresponding BV-norm is given by
$$\|k\|_{BV}=\|k\|_{L^1}+\int_\Omega|\nabla k|\,dx.$$
It can be proved that $BV(\Omega)$ is complete under the BV-norm and is thus a Banach space. In addition, if $\Omega$ is a bounded open set with Lipschitz boundary in $\mathbb{R}^n$ $(n\ge1)$, then $W^{1,1}\subset BV$ (see [Citation27] for more details). Total variation regularization is one of the most effective tools for solving problems whose solution is non-smooth or discontinuous.
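To make the distinction concrete, the following small numerical illustration (ours, not from the paper; the grid and the test function are arbitrary) compares the discrete total variation $\sum_i|k_{i+1}-k_i|$ of a unit jump with the corresponding discrete $H^1$ seminorm: under grid refinement the total variation of a discontinuous coefficient stays bounded, while the quadratic (Tikhonov-type) penalty blows up like $1/h$, which is why TV penalties tolerate jumps that quadratic penalties do not.

```python
import numpy as np

def discrete_tv(k):
    """Discrete total variation: sum_i |k_{i+1} - k_i|."""
    return np.sum(np.abs(np.diff(k)))

def discrete_h1_seminorm_sq(k, h):
    """Discrete squared H^1 seminorm: sum_i ((k_{i+1} - k_i)/h)^2 * h."""
    return np.sum((np.diff(k) / h) ** 2) * h

for n in (100, 1000, 10000):
    x = np.linspace(0.0, 1.0, n + 1)
    h = x[1] - x[0]
    k_step = np.where(x < 0.5, 1.0, 2.0)   # coefficient with a unit jump
    print(n, discrete_tv(k_step), discrete_h1_seminorm_sq(k_step, h))
# The TV stays equal to 1 under refinement, while the H^1 seminorm grows like 1/h.
```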

Compared with the intensive study of the total variation regularization method in image processing, its applications to reconstructing an unknown coefficient of a partial differential equation are much less common, see [Citation28–34]. The total variation regularization method also presents some difficulties, both theoretically and numerically [Citation35,Citation36]. For example, the non-differentiability of the TV-term brings great difficulties to the theoretical analysis; in numerical calculations, its nonlinearity and degeneracy can also make the algorithm more complex. In this paper, from the point of view of theoretical analysis, we adopt an optimal control framework to discuss this problem. By introducing a smoothed TV-term, we derive a necessary condition for the minimizer of the control functional. The necessary condition is a variational inequality coupled with a parabolic equation. Differently from the previous work in [Citation7], the variational inequality in this paper includes a non-linear term. In order to obtain the uniqueness and stability results, we also need an additional condition
$$\max|k'|\le\kappa,\tag{3}$$
where $\kappa$ is a positive constant. In the numerical part, we modify this necessary condition into an equality form and adopt a Gauss–Jacobi iteration scheme to deal with the nonlinear term. The numerical results show that this modification is effective and can reconstruct non-smooth solutions very well.

The organization of this paper is as follows: Section 2 transforms the problem into an optimal control problem and derives a necessary condition for the control functional. Section 3 studies the well-posedness of the optimal control problem based on the necessary condition. Section 4 discretizes the system and gives a numerical algorithm. Section 5 presents some numerical examples to illustrate the effectiveness of the algorithm.

2. Optimal control problem

It is well known that the original problem is ill-posed and cannot be solved directly. One of the most widely used methods for solving such problems is the Tikhonov regularization method. However, the Tikhonov regularization method often requires some smoothness of the solution, while in many practical applications the solution of the problem may not be continuous. To overcome this difficulty, we consider the following total variation regularization-based optimal control problem.

Problem P. Define an admissible set
$$K:=\{k\in L^1(0,l):\ \|k\|_{BV}<\infty\ \text{and}\ 0<\zeta_1\le k(x)\le\zeta_2\ \text{a.e. in}\ (0,l)\},\tag{4}$$
where $\zeta_1,\zeta_2$ are two given positive constants. Assume that the observation data $s(x)$ in (2) satisfies
$$s(x)\in L^2[0,l].\tag{5}$$
The optimal control problem in this study is to find $k^*\in K$ such that
$$J(k^*)=\min_{k\in K}J(k),\tag{6}$$
$$J(k)=\frac12\int_0^l|u(x,T;k)-s(x)|^2\,dx+\lambda\hat J(k),\tag{7}$$
where $\hat J(k)=\int_0^l|k'|\,dx$ (in order to avoid possible confusion caused by the divergence notation, we write $|k'|$ instead of $|\nabla k|$ in the TV-penalty term, because our problem is one-dimensional), $u(x,T;k)$ is the solution of Equation (1) corresponding to $k(x)$, and $\lambda>0$ is a regularization parameter.

Following [Citation36], we modify the penalty term $\hat J(k)$ as
$$\hat J_\beta(k)=\int_0^l\sqrt{|k'|^2+\beta}\,dx,\tag{8}$$
where $\beta$ is a small positive parameter, taken in this paper as $\beta=10^{-4}$. This modification provides certain computational advantages, such as the differentiability of $\hat J_\beta$ when $k'(x)=0$. According to (8), the optimal control problem (6) and (7) can be approximated by
$$J(k^*)=\min_{k\in K}J(k),\tag{9}$$
$$J(k)=\frac12\int_0^l|u(x,T;k)-s(x)|^2\,dx+\lambda\hat J_\beta(k).\tag{10}$$
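As a reading aid, here is a minimal sketch (our own; the uniform grid, the one-sided difference for $k'$ and the rectangle rule are assumptions, not the paper's prescription) of how the smoothed penalty (8) can be evaluated with $\beta=10^{-4}$; for a piecewise constant coefficient the value is approximately the jump height plus a small $\sqrt{\beta}$-background.

```python
import numpy as np

def smoothed_tv_penalty(k, h, beta=1e-4):
    """Approximate J_beta(k) = integral of sqrt(|k'|^2 + beta) over (0, l).

    k : nodal values of the coefficient on a uniform grid, h : spatial step.
    k' is approximated by one-sided differences on each cell and the
    integral by the rectangle rule.
    """
    kprime = np.diff(k) / h                      # k' on each cell
    return np.sum(np.sqrt(kprime ** 2 + beta)) * h

# Example: a piecewise constant coefficient on [0, 30], as in Example 5.2 below.
x = np.linspace(0.0, 30.0, 301)
h = x[1] - x[0]
k = np.where(x < 20.0, 2.0, 1.0)
print(smoothed_tv_penalty(k, h))   # about |jump| + sqrt(beta)*l = 1 + 0.3
```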

Definition 2.1

cf.[Citation37]

A function $u\in L^2((0,T),H_0^1(0,l))$, with $u_t\in L^2((0,T),H^{-1}(0,l))$, is called a weak solution of Equation (1) if
$$\int_0^lu_t\phi\,dx+\int_0^lku_x\phi_x\,dx=\int_0^lf\phi\,dx\tag{11}$$
for any test function $\phi(x)\in H_0^1(0,l)$ and a.e. time $0\le t\le T$, and $u(x,0)=\varphi(x)$.

Lemma 2.2

cf.[Citation38,Citation39]

For Equation (1) we have the following estimate
$$\max_{0\le t\le T}\int_0^lu^2\,dx+\int_0^T\!\!\int_0^l|u_x|^2\,dx\,dt\le C\Big[\int_0^l\varphi^2\,dx+\int_0^T\!\!\int_0^lf^2\,dx\,dt\Big],\tag{12}$$
where $C$ is independent of $\varphi$ and $f$.

Now we are going to prove the existence of minimizers of problem P. In order to obtain this result, we first establish the continuity of $J(k)$ through the following lemma.

Lemma 2.3

Assume that $\{k_n,\ n=1,2,\ldots\}\subset K$, $k\in K$, and $\|k_n-k\|_{L^1[0,l]}\to0$ as $n\to\infty$. Then we have
$$\lim_{n\to\infty}\int_0^l|u(x,T;k_n)-s(x)|^2\,dx=\int_0^l|u(x,T;k)-s(x)|^2\,dx.\tag{13}$$

Proof.

Let $k=k_n$ and choose the test function $\phi$ in (11) as $u(k_n)(x,t)$ (for abbreviation, we denote it by $u(k_n)$); we obtain
$$\int_0^t\!\!\int_0^lu_t(k_n)u(k_n)\,dx\,dt+\int_0^t\!\!\int_0^lk_n|u_x(k_n)|^2\,dx\,dt=\int_0^t\!\!\int_0^lf\,u(k_n)\,dx\,dt.$$
According to Gronwall's inequality and Cauchy's inequality, we have
$$\int_0^l|u(k_n)|^2\,dx+\int_0^t\!\!\int_0^lk_n|u_x(k_n)|^2\,dx\,dt\le C\int_0^l\varphi^2\,dx+C\int_0^t\!\!\int_0^lf^2\,dx\,dt.\tag{14}$$
From (14), we know that $u(k_n)$ is uniformly bounded in $L^2((0,T);H_0^1(0,l))$ for all $k_n\in K$. Therefore, we can extract a subsequence of $u(k_n)$, still denoted by $u(k_n)$, such that
$$u(x,t;k_n)\rightharpoonup u^*(x,t)\quad\text{weakly in }L^2((0,T);H_0^1(0,l)).\tag{15}$$
Now we will show that $u^*=u(k)$. Multiplying both sides of (11) by a function $\xi(t)\in C^1[0,T]$ with $\xi(T)=0$ and taking $k=k_n$, we have
$$\int_0^T\!\!\int_0^lu_t(k_n)\phi\,\xi(t)\,dx\,dt+\int_0^T\!\!\int_0^lk_nu_x(k_n)\phi_x\xi(t)\,dx\,dt=\int_0^T\!\!\int_0^lf\phi\,\xi(t)\,dx\,dt.$$
Integrating by parts, we obtain
$$\int_0^l\varphi(x)\phi\,\xi(0)\,dx=-\int_0^T\!\!\int_0^lu(k_n)\phi\,\xi_t(t)\,dx\,dt+\int_0^T\!\!\int_0^l(k_n-k)u_x(k_n)\phi_x\xi(t)\,dx\,dt+\int_0^T\!\!\int_0^lk\,u_x(k_n)\phi_x\xi(t)\,dx\,dt-\int_0^T\!\!\int_0^lf\phi\,\xi(t)\,dx\,dt.\tag{16}$$
From (15), letting $n\to\infty$ in (16) we deduce
$$\int_0^l\varphi(x)\phi\,\xi(0)\,dx=-\int_0^T\!\!\int_0^l\big[u^*\phi\,\xi_t(t)-ku^*_x\phi_x\xi(t)\big]\,dx\,dt-\int_0^T\!\!\int_0^lf\phi\,\xi(t)\,dx\,dt.\tag{17}$$
Notice that (17) is valid for any $\xi(t)\in C^1[0,T]$ with $\xi(T)=0$, so we have
$$\int_0^lu^*_t\phi\,dx+\int_0^lku^*_x\phi_x\,dx=\int_0^lf\phi\,dx,\quad\forall\phi\in H_0^1(0,l),$$
and $u^*(x,0)=\varphi(x)$. Hence, $u^*=u(k)$ by the definition of $u(k)$.

Having the weak convergence $u(k_n)\rightharpoonup u(k)$, we next show the convergence of $u(k_n)$ in the $L^2$ topology. Taking $k=k_n$ and $\phi=u(k_n)-s$ in (11), we have
$$\frac12\frac{d}{dt}\|u(k_n)-s\|^2_{L^2(0,l)}+\int_0^lk_n|(u(k_n)-s)_x|^2\,dx=\int_0^lf(u(k_n)-s)\,dx-\int_0^lk_ns_x(u(k_n)-s)_x\,dx.\tag{18}$$
By a similar argument for $u(k)$, we have
$$\frac12\frac{d}{dt}\|u(k)-s\|^2_{L^2(0,l)}+\int_0^lk|(u(k)-s)_x|^2\,dx=\int_0^lf(u(k)-s)\,dx-\int_0^lks_x(u(k)-s)_x\,dx.\tag{19}$$
Combining (18) and (19), we obtain
$$\frac12\frac{d}{dt}\|u(k_n)-u(k)\|^2_{L^2(0,l)}+\int_0^lk_n|(u(k_n)-s)_x|^2\,dx-\int_0^lk|(u(k)-s)_x|^2\,dx=\int_0^lf(u(k_n)-u(k))\,dx+\int_0^lks_x(u(k)-u(k_n))_x\,dx+\int_0^l(k-k_n)s_x(u(k_n)-s)_x\,dx-\int_0^l\frac{d}{dt}\big[(u(k)-s)(u(k_n)-u(k))\big]\,dx.\tag{20}$$
Furthermore, by using Equation (11) for $u(k_n)$ and $u(k)$, we deduce
$$\int_0^l\frac{d}{dt}\big[(u(k)-s)(u(k_n)-u(k))\big]\,dx=\int_0^lf(u(k_n)-u(k))\,dx+\int_0^lk(u(k)-u(k_n))_x(2u(k)-s)_x\,dx+\int_0^l(k-k_n)u_x(k_n)(u(k)-s)_x\,dx.\tag{21}$$
Substituting (21) into (20) yields
$$\frac12\frac{d}{dt}\|u(k_n)-u(k)\|^2_{L^2(0,l)}+\Big\{\int_0^lk_n|(u(k_n)-s)_x|^2\,dx-\int_0^lk|(u(k)-s)_x|^2\,dx\Big\}=\int_0^l(k-k_n)\big[s_x(u(k_n)-s)_x+u_x(k_n)(s-u(k))_x\big]\,dx+2\int_0^lk(u(k)-s)_x(u(k_n)-u(k))_x\,dx=:A_n.\tag{22}$$
Rewriting the second term on the left-hand side of (22), after a simple calculation we have
$$\frac12\frac{d}{dt}\|u(k_n)-u(k)\|^2_{L^2(0,l)}+\int_0^lk_n|(u(k_n)-u(k))_x|^2\,dx=A_n+\int_0^l(k-k_n)|(u(k)-s)_x|^2\,dx-2\int_0^lk_n(u(k_n)-u(k))_x(u(k)-s)_x\,dx=:A_n+B_n.\tag{23}$$
Integrating (23) with respect to $t$, we obtain
$$\frac12\|u(x,t;k_n)-u(x,t;k)\|^2_{L^2(0,l)}\le\int_0^T|A_n+B_n|\,dt.\tag{24}$$
By the weak convergence of $\{u(k_n)\}$ and the convergence of $\{k_n\}$, it is easy to see that $\int_0^T|A_n+B_n|\,dt\to0$ as $n\to\infty$. Hence
$$\max_{t\in[0,T]}\|u(x,t;k_n)-u(x,t;k)\|^2_{L^2(0,l)}\to0,\quad\text{as }n\to\infty.\tag{25}$$
In addition, according to Hölder's inequality, we obtain
$$\Big|\int_0^l|u(k_n)-s|^2\,dx-\int_0^l|u(k)-s|^2\,dx\Big|\le\int_0^l|u(k_n)-u(k)|\,|u(k_n)+u(k)-2s|\,dx\le\|u(k_n)-u(k)\|_{L^2(0,l)}\|u(k_n)+u(k)-2s\|_{L^2(0,l)}.\tag{26}$$
Then from (5), (14), (25) and (26), we derive the final result
$$\lim_{n\to\infty}\int_0^l|u(x,T;k_n)-s(x)|^2\,dx=\int_0^l|u(x,T;k)-s(x)|^2\,dx.\tag{27}$$
This completes the proof.

From Lemma 2.3, we can easily get the following theorem.

Theorem 2.4

There exists a minimizer $k^*\in K$ of $J(k)$, i.e. $J(k^*)=\min_{k\in K}J(k)$.

Proof.

It is easy to see that $J(k)$ is non-negative, and thus $J(k)$ has a greatest lower bound $\inf_{k\in K}J(k)$. Let $\{k_n\}$ be a minimizing sequence, namely
$$\inf_{k\in K}J(k)\le J(k_n)\le\inf_{k\in K}J(k)+\frac1n,\quad n=1,2,\ldots.$$
Since $J(k_n)\le C$ and thanks to the particular structure of $J$, we have $\|k_n'\|_{L^1(0,l)}\le C$ ($C$ is independent of $n$). Noticing the boundedness of $\{k_n\}$, we also have $\|k_n\|_{W^{1,1}(0,l)}\le C$. Therefore, we can extract a subsequence, still denoted by $\{k_n\}$, such that
$$k_n(x)\rightharpoonup k^*(x)\quad\text{in }W^{1,1}(0,l),\ \text{as }n\to\infty.$$
By the Sobolev embedding theorem, we obtain
$$\|k_n-k^*\|_{L^1(0,l)}\to0,\quad\text{as }n\to\infty.$$
Then, from the Lebesgue dominated convergence theorem and Lemma 2.3, we deduce
$$J(k^*)\le\liminf_{n\to\infty}J(k_n)=\min_{k\in K}J(k).$$
Hence, $J(k^*)=\min_{k\in K}J(k)$. This completes the proof.

Theorem 2.4 shows the existence of the minimizer of the optimal control problem. Next, we will establish a necessary condition for the control functional. On the basis of the necessary condition, we will prove the uniqueness and stability of the optimal control problem in the next section.

Theorem 2.5

Let $k$ be the solution of problem P. Then for any $r\in K$, the following variational inequality holds:
$$-\int_0^T\!\!\int_0^l(r-k)u_xw_x\,dx\,dt+\lambda\int_0^lZ(k)(r-k)'\,dx\ge0,\tag{28}$$
where $Z(k)=k'/\sqrt{|k'|^2+\beta}$ and $w$ satisfies the following adjoint equation
$$\begin{cases}-w_t-(k(x)w_x)_x=0,&(x,t)\in D_T,\\ w(x,T)=u(x,T)-s(x),&x\in[0,l],\\ w(0,t)=w(l,t)=0,&t\in[0,T].\end{cases}\tag{29}$$

Proof.

For any $r\in K$ and $0\le\eta\le1$, let $k_\eta=(1-\eta)k+\eta r\in K$. Then from (10) we have
$$J(k_\eta)=\frac12\int_0^l|u(x,T;k_\eta)-s(x)|^2\,dx+\lambda\int_0^l\sqrt{|k_\eta'|^2+\beta}\,dx.\tag{30}$$
Differentiating both sides of (30) with respect to $\eta$, we have
$$\frac{dJ(k_\eta)}{d\eta}\bigg|_{\eta=0}=\int_0^l[u(x,T;k)-s(x)]\frac{\partial u_\eta}{\partial\eta}\bigg|_{\eta=0}\,dx+\lambda\int_0^l\frac{k'(r-k)'}{\sqrt{|k'|^2+\beta}}\,dx\ge0,\tag{31}$$
where $u_\eta$ is the solution of Equation (1) corresponding to $k_\eta$. Let $\partial u_\eta/\partial\eta|_{\eta=0}=\mu$; a direct calculation from (1) gives
$$\begin{cases}\mu_t-(k(x)\mu_x)_x=((r-k)u_x)_x,&(x,t)\in D_T,\\ \mu(x,0)=0,&x\in[0,l],\\ \mu(0,t)=\mu(l,t)=0,&t\in[0,T].\end{cases}\tag{32}$$
Let $L\mu=\mu_t-(k(x)\mu_x)_x$ and suppose $w$ satisfies the following equation
$$\begin{cases}L^*w=-w_t-(k(x)w_x)_x=0,&(x,t)\in D_T,\\ w(x,T)=u(x,T)-s(x),&x\in[0,l],\\ w(0,t)=w(l,t)=0,&t\in[0,T],\end{cases}\tag{33}$$
where $L^*$ is the adjoint operator of $L$. Then from (33) we obtain
$$\int_0^T\!\!\int_0^l\mu L^*w\,dx\,dt=0.\tag{34}$$
From (32) we have
$$\int_0^T\!\!\int_0^lwL\mu\,dx\,dt=\int_0^T\!\!\int_0^lw((r-k)u_x)_x\,dx\,dt=-\int_0^T\!\!\int_0^l(r-k)u_xw_x\,dx\,dt.\tag{35}$$
Furthermore, by using Green's formula we have
$$\int_0^T\!\!\int_0^l(wL\mu-\mu L^*w)\,dx\,dt=\int_0^T\!\!\int_0^l\big[(w\mu)_t+(\mu kw_x-wk\mu_x)_x\big]\,dx\,dt=\int_0^l[u(x,T)-s(x)]\mu(x,T)\,dx.\tag{36}$$
Substitution of (34) and (35) into (36) gives
$$\int_0^l[u(x,T)-s(x)]\mu(x,T)\,dx=-\int_0^T\!\!\int_0^l(r-k)u_xw_x\,dx\,dt.\tag{37}$$
Let $Z(k)=k'/\sqrt{|k'|^2+\beta}$; then combining (31) and (37), we obtain the final result
$$-\int_0^T\!\!\int_0^l(r-k)u_xw_x\,dx\,dt+\lambda\int_0^lZ(k)(r-k)'\,dx\ge0.\tag{38}$$
This completes the proof.
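For completeness, the pointwise identity behind the Green's formula step (36) can be verified directly (a routine check, recorded here only as a reading aid):
$$wL\mu-\mu L^*w=w\big[\mu_t-(k\mu_x)_x\big]-\mu\big[-w_t-(kw_x)_x\big]=(w\mu_t+\mu w_t)+\big[\mu(kw_x)_x-w(k\mu_x)_x\big]=(w\mu)_t+(\mu kw_x-wk\mu_x)_x,$$
since $\partial_x(\mu kw_x-wk\mu_x)=\mu_xkw_x+\mu(kw_x)_x-w_xk\mu_x-w(k\mu_x)_x=\mu(kw_x)_x-w(k\mu_x)_x$. Integrating this identity over $D_T$ and using $\mu(x,0)=0$, $w(x,T)=u(x,T)-s(x)$ and the homogeneous boundary conditions yields (36).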

3. Local uniqueness and stability

This section studies the uniqueness and stability of the optimal control problem P. Since the optimal control problem P is non-convex, one may not expect to get the global uniqueness. However, under the assumption that T is relatively small, we can prove that the minimizer of problem P is locally unique. This is also a main contribution of this study.

In the following lemmas and theorems, we will use the symbol $C$ (independent of $T$) to denote different constants unless stated otherwise.

Lemma 3.1

cf.[Citation38,Citation39]

For Equation (29) the following estimate holds
$$\max_{0\le t\le T}\int_0^lw^2\,dx+\int_0^T\!\!\int_0^l|w_x|^2\,dx\,dt\le C\int_0^l|u(x,T)-s(x)|^2\,dx.\tag{39}$$

Lemma 3.2

For any bounded continuous function $h(x)\in C[0,l]$ with $h'\in L^2[0,l]$, we have
$$\|h(x)\|_{C[0,l]}\le|h(x_0)|+\sqrt{l}\,\|h'(x)\|_{L^2[0,l]},\tag{40}$$
where $0\le x_0\le l$ is a fixed point.

Proof.

For $0\le x_0\le l$, we have
$$|h(x)|\le|h(x_0)|+\Big|\int_{x_0}^xh'(x)\,dx\Big|\le|h(x_0)|+\Big(\int_0^l1\,dx\Big)^{1/2}\Big(\int_0^l|h'(x)|^2\,dx\Big)^{1/2}=|h(x_0)|+\sqrt{l}\,\|h'(x)\|_{L^2[0,l]}.$$
This completes the proof.

Suppose $s_i\ (i=1,2)$ are two observation data that satisfy (2) and
$$\|s_1-s_2\|_{L^2[0,l]}\le\sigma,\tag{41}$$
where $\sigma>0$ is an error bound. Let $k_i\ (i=1,2)$ be two minimizers of the optimal control problem with
$$\max|k_i'|\le\kappa,\quad i=1,2,\tag{42}$$
where $\kappa$ is a positive constant, and let $u_i\ (i=1,2)$ and $w_i\ (i=1,2)$ be the solutions of Equations (1) and (29) corresponding to $k_i\ (i=1,2)$, respectively.

Setting $u_1-u_2=U$ and $w_1-w_2=W$, from (1) and (29) we obtain
$$\begin{cases}U_t-(k_1U_x)_x=((k_1-k_2)u_{2x})_x,&(x,t)\in D_T,\\ U(x,0)=0,&x\in[0,l],\\ U(0,t)=U(l,t)=0,&t\in[0,T],\end{cases}\tag{43}$$
and
$$\begin{cases}-W_t-(k_1W_x)_x=((k_1-k_2)w_{2x})_x,&(x,t)\in D_T,\\ W(x,T)=U(x,T)-(s_1-s_2),&x\in[0,l],\\ W(0,t)=W(l,t)=0,&t\in[0,T].\end{cases}\tag{44}$$

Lemma 3.3

For Equation (43), we have the following estimate
$$\max_{0\le t\le T}\int_0^lU^2\,dx+\int_0^T\!\!\int_0^l|U_x|^2\,dx\,dt\le C\max_{x\in[0,l]}|k_1-k_2|^2\int_0^T\!\!\int_0^l|u_{2x}|^2\,dx\,dt.\tag{45}$$

Proof.

From (43) we obtain
$$\int_0^t\!\!\int_0^lU_tU\,dx\,dt-\int_0^t\!\!\int_0^l(k_1U_x)_xU\,dx\,dt=\int_0^t\!\!\int_0^l((k_1-k_2)u_{2x})_xU\,dx\,dt.$$
Integrating by parts, we have
$$\int_0^t\!\!\int_0^l\Big(\frac{U^2}{2}\Big)_t\,dx\,dt+\int_0^t\!\!\int_0^lk_1|U_x|^2\,dx\,dt=\int_0^t\!\!\int_0^l(k_2-k_1)u_{2x}U_x\,dx\,dt.$$
According to Young's inequality and noticing (4), we have
$$\frac12\int_0^lU^2\,dx+\int_0^t\!\!\int_0^lk_1|U_x|^2\,dx\,dt\le\frac{\zeta_1}{2}\int_0^t\!\!\int_0^l|U_x|^2\,dx\,dt+C\int_0^t\!\!\int_0^l|(k_2-k_1)u_{2x}|^2\,dx\,dt\le\frac{\zeta_1}{2}\int_0^t\!\!\int_0^l|U_x|^2\,dx\,dt+C\max_{x\in[0,l]}|k_1-k_2|^2\int_0^t\!\!\int_0^l|u_{2x}|^2\,dx\,dt,$$
thus
$$\int_0^lU^2\,dx+\int_0^t\!\!\int_0^l|U_x|^2\,dx\,dt\le C\max_{x\in[0,l]}|k_1-k_2|^2\int_0^t\!\!\int_0^l|u_{2x}|^2\,dx\,dt.$$
This completes the proof.

Lemma 3.4

For Equation (44), we have the following estimate
$$\max_{0\le t\le T}\int_0^lW^2\,dx+\int_0^T\!\!\int_0^l|W_x|^2\,dx\,dt\le C\max_{x\in[0,l]}|k_1-k_2|^2\int_0^T\!\!\int_0^l\big(|u_{2x}|^2+|w_{2x}|^2\big)\,dx\,dt+C\int_0^l|s_1-s_2|^2\,dx.\tag{46}$$

Proof.

From (44) we have
$$-\int_t^T\!\!\int_0^lW_tW\,dx\,dt-\int_t^T\!\!\int_0^l(k_1W_x)_xW\,dx\,dt=\int_t^T\!\!\int_0^l((k_1-k_2)w_{2x})_xW\,dx\,dt.$$
This yields
$$\frac12\int_0^lW^2\,dx+\int_t^T\!\!\int_0^lk_1|W_x|^2\,dx\,dt=\int_t^T\!\!\int_0^l(k_2-k_1)w_{2x}W_x\,dx\,dt+\frac12\int_0^l|U(x,T)-(s_1-s_2)|^2\,dx.$$
From (45) and using Young's inequality, we obtain
$$\frac12\int_0^lW^2\,dx+\int_t^T\!\!\int_0^lk_1|W_x|^2\,dx\,dt\le\frac{\zeta_1}{2}\int_t^T\!\!\int_0^l|W_x|^2\,dx\,dt+C\max_{x\in[0,l]}|k_1-k_2|^2\int_t^T\!\!\int_0^l|w_{2x}|^2\,dx\,dt+\int_0^l|U(x,T)|^2\,dx+\int_0^l|s_1-s_2|^2\,dx\le\frac{\zeta_1}{2}\int_t^T\!\!\int_0^l|W_x|^2\,dx\,dt+C\max_{x\in[0,l]}|k_1-k_2|^2\int_0^T\!\!\int_0^l\big(|u_{2x}|^2+|w_{2x}|^2\big)\,dx\,dt+\int_0^l|s_1-s_2|^2\,dx,$$
thus
$$\int_0^lW^2\,dx+\int_t^T\!\!\int_0^l|W_x|^2\,dx\,dt\le C\max_{x\in[0,l]}|k_1-k_2|^2\int_0^T\!\!\int_0^l\big(|u_{2x}|^2+|w_{2x}|^2\big)\,dx\,dt+C\int_0^l|s_1-s_2|^2\,dx.$$
This completes the proof.

Theorem 3.5

Suppose $s_i\ (i=1,2)$ are two observation data, and $k_i\ (i=1,2)$ are the corresponding minimizers of problem P. If there exists a point $x_0\in[0,l]$ such that
$$k_1(x_0)=k_2(x_0),\tag{47}$$
then for $T\ll1$ we have the following estimate
$$\max_{x\in[0,l]}|k_1-k_2|\le\frac{C}{\sqrt{\lambda}}\|s_1-s_2\|_{L^2[0,l]},$$
where $C$ is independent of $\lambda$ and $T$.

Proof.

Taking $r=k_2$, $k=k_1$ and then $r=k_1$, $k=k_2$ in (28), we have
$$-\int_0^T\!\!\int_0^l(k_2-k_1)u_{1x}w_{1x}\,dx\,dt+\lambda\int_0^lZ(k_1)(k_2-k_1)'\,dx\ge0,\tag{48}$$
$$-\int_0^T\!\!\int_0^l(k_1-k_2)u_{2x}w_{2x}\,dx\,dt+\lambda\int_0^lZ(k_2)(k_1-k_2)'\,dx\ge0.\tag{49}$$
Combining (48) and (49), we get
$$\lambda\int_0^l[Z(k_1)-Z(k_2)](k_1-k_2)'\,dx\le\int_0^T\!\!\int_0^l(k_1-k_2)(u_{1x}w_{1x}-u_{2x}w_{2x})\,dx\,dt=\int_0^T\!\!\int_0^l(k_1-k_2)(W_xu_{1x}+U_xw_{2x})\,dx\,dt.\tag{50}$$
According to the mean value theorem (applied to the function $\xi\mapsto\xi/\sqrt{\xi^2+\beta}$), we have
$$Z(k_1)-Z(k_2)=Z'(\tilde k)(k_1-k_2)',\tag{51}$$
where $\tilde k$ lies between $k_1'$ and $k_2'$. Noticing (42), we obtain
$$Z'(\kappa)=\frac{\beta}{(\kappa^2+\beta)^{3/2}}\le\frac{\beta}{(\tilde k^2+\beta)^{3/2}}=Z'(\tilde k).\tag{52}$$
From (50)–(52) and using Cauchy's inequality, we have
$$\lambda Z'(\kappa)\int_0^l|(k_1-k_2)'|^2\,dx\le\int_0^T\!\!\int_0^l(k_1-k_2)(W_xu_{1x}+U_xw_{2x})\,dx\,dt\le\frac12\int_0^T\!\!\int_0^l|k_1-k_2|^2\big(|u_{1x}|^2+|w_{2x}|^2\big)\,dx\,dt+\frac12\int_0^T\!\!\int_0^l\big(|U_x|^2+|W_x|^2\big)\,dx\,dt\le C\max_{x\in[0,l]}|k_1-k_2|^2\int_0^T\!\!\int_0^l\big(|u_{1x}|^2+|u_{2x}|^2+|w_{2x}|^2\big)\,dx\,dt+C\int_0^l|s_1-s_2|^2\,dx,\tag{53}$$
where we have used the estimates in Lemmas 3.3 and 3.4. From (12), (39) and noticing (5), we have
$$\int_0^T\!\!\int_0^l|u_{1x}|^2\,dx\,dt\le CT,\quad\int_0^T\!\!\int_0^l|u_{2x}|^2\,dx\,dt\le CT,\quad\int_0^T\!\!\int_0^l|w_{2x}|^2\,dx\,dt\le CT.\tag{54}$$
Combining (53) and (54), we have
$$\lambda\int_0^l|(k_1-k_2)'|^2\,dx\le CT\max_{x\in[0,l]}|k_1-k_2|^2+C\int_0^l|s_1-s_2|^2\,dx.\tag{55}$$
According to Lemma 3.2 and assumption (47), we derive
$$\lambda\max_{x\in[0,l]}|k_1-k_2|^2\le CT\max_{x\in[0,l]}|k_1-k_2|^2+C\int_0^l|s_1-s_2|^2\,dx.\tag{56}$$
Choose $T$ such that
$$\frac{CT}{\lambda}=\frac12;\tag{57}$$
then combining (56) and (57), we can easily obtain
$$\max_{x\in[0,l]}|k_1-k_2|\le\frac{C}{\sqrt{\lambda}}\|s_1-s_2\|_{L^2[0,l]}.$$
This completes the proof.
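The last step can be spelled out explicitly: with the choice (57), inequality (56) gives
$$\lambda\max_{x\in[0,l]}|k_1-k_2|^2\le\frac{\lambda}{2}\max_{x\in[0,l]}|k_1-k_2|^2+C\int_0^l|s_1-s_2|^2\,dx,$$
hence
$$\max_{x\in[0,l]}|k_1-k_2|^2\le\frac{2C}{\lambda}\|s_1-s_2\|^2_{L^2[0,l]}\quad\text{and}\quad\max_{x\in[0,l]}|k_1-k_2|\le\frac{C}{\sqrt{\lambda}}\|s_1-s_2\|_{L^2[0,l]},$$
after renaming $\sqrt{2C}$ as $C$.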

Remark 3.6

It should be pointed out that the uniqueness and stability of the optimal solution are established on the basis of condition (42). Theoretically speaking, we cannot take $\max|k_i'|\ (i=1,2)$ too large when we solve problem (9) by numerical methods. However, we do not use this condition in the numerical simulation. In our numerical experiments, it seems that the condition $\max|k_i'|\le\kappa\ (i=1,2)$ is automatically satisfied when we discretize the system by the central difference scheme, as can be seen in the numerical simulations in the following sections.

4. Discretization and numerical algorithm

We have already proved the well-posedness of the optimization problem (9). In this section, we adopt a Gauss–Jacobi iterative algorithm to minimize the cost functional. The key ingredient of the algorithm is the following theorem.

Theorem 4.1

Let $k$ be the solution of the optimal control problem P. Then from Equations (1) and (29), we have
$$\int_0^Tu_xw_x\,dt+\lambda\,\nabla\cdot Z(k)=0.\tag{58}$$

Remark 4.2

The proof of this theorem can be obtained by a discussion similar to that of Theorem 2.5; we omit it here.

Next, we adopt a Gauss–Jacobi iteration scheme to approximate the solution of (58). The advantage of this algorithm is that it looks for the steady solution directly and is easy to implement. Now, let us discretize the term $\nabla\cdot Z(k)$. As shown in Figure 1, at a given point $o$, let $E$ and $W$ denote the two adjacent points, and let $e$ and $w$ be the corresponding two midway points.

Figure 1. A target node o and its neighbours, the spatial step size is h.


Define $|\cdot|_\beta=\sqrt{|\cdot|^2+\beta}$; then $Z(k)$ can be written as $Z(k)=\nabla k/|\nabla k|_\beta$. By applying the central difference scheme to $\nabla\cdot Z$, we have
$$\nabla\cdot Z\approx(Z_e-Z_w)/h,\tag{59}$$
where $h$ is the spatial step size. Next, we generate further approximations for $Z$ at the two midway points, $Z_e\approx\nabla k_e/|\nabla k_e|_\beta$ and $Z_w\approx\nabla k_w/|\nabla k_w|_\beta$, where $\nabla k_e$ and $\nabla k_w$ are approximated by
$$\nabla k_e\approx(k_E-k_o)/h\quad\text{and}\quad\nabla k_w\approx(k_o-k_W)/h.$$

Then, at a target point $o$, $\nabla\cdot Z(k)$ can be discretized as
$$\nabla\cdot Z(k)\approx-\frac{1}{|\nabla k_e|_\beta}\frac{k_o-k_E}{h^2}-\frac{1}{|\nabla k_w|_\beta}\frac{k_o-k_W}{h^2}.\tag{60}$$
Furthermore, by substituting (60) into (58), we obtain
$$-\lambda\Big[\frac{1}{|\nabla k_e|_\beta}\frac{k_o-k_E}{h^2}+\frac{1}{|\nabla k_w|_\beta}\frac{k_o-k_W}{h^2}\Big]+\int_0^Tu_xw_x\,dt=0.\tag{61}$$
Let $A_e=\dfrac{1}{|\nabla k_e|_\beta}$, $A_w=\dfrac{1}{|\nabla k_w|_\beta}$; then (61) is equivalent to
$$k_o=\frac{A_e}{A_e+A_w}k_E+\frac{A_w}{A_e+A_w}k_W+\frac{h^2}{\lambda}\,\frac{\int_0^Tu_xw_x\,dt}{A_e+A_w}.\tag{62}$$
Let $B=A_e+A_w$; substituting this into (62), we have
$$k_o=\frac{A_e}{B}k_E+\frac{A_w}{B}k_W+\frac{h^2}{\lambda B}\int_0^Tu_xw_x\,dt.\tag{63}$$
The Gauss–Jacobi iteration scheme [Citation23] can then be applied to (63). At each step $n$, we update $k^{(n)}$ to $k^{(n+1)}$ by
$$k_o^{(n+1)}=\frac{A_e^{(n)}}{B^{(n)}}k_E^{(n)}+\frac{A_w^{(n)}}{B^{(n)}}k_W^{(n)}+\frac{h^2}{\lambda B^{(n)}}\int_0^Tu_x^{(n)}w_x^{(n)}\,dt,\tag{64}$$
where the superscript $(n)$ denotes the $n$th iterate. The integral term $\int_0^Tu_xw_x\,dt$ on the right-hand side can be computed by Simpson's rule or the trapezoidal rule.
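A compact sketch of the update (64) in code (our own illustrative implementation with hypothetical array names; `g` denotes the precomputed nodal values of $\int_0^Tu_xw_x\,dt$):

```python
import numpy as np

def jacobi_update(k, g, h, lam, beta=1e-4):
    """One Gauss-Jacobi sweep for the update formula (64).

    k   : current nodal values of the coefficient, shape (m+1,)
    g   : nodal values of the time integral of u_x * w_x, same shape as k
    h   : spatial step size, lam : regularization parameter lambda.
    Boundary nodes are handled with a homogeneous Neumann condition
    (copied from their neighbours), as in step 4 of the algorithm below.
    """
    kE, kW = k[2:], k[:-2]                 # east / west neighbours of the interior nodes
    ko = k[1:-1]
    Ae = 1.0 / np.sqrt(((kE - ko) / h) ** 2 + beta)   # A_e = 1 / |grad k_e|_beta
    Aw = 1.0 / np.sqrt(((ko - kW) / h) ** 2 + beta)   # A_w = 1 / |grad k_w|_beta
    B = Ae + Aw
    k_new = k.copy()
    k_new[1:-1] = (Ae * kE + Aw * kW) / B + (h ** 2 / (lam * B)) * g[1:-1]
    k_new[0], k_new[-1] = k_new[1], k_new[-2]         # Neumann handling of the endpoints
    return k_new
```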

From the analysis above, we propose the following algorithm:

  1. Choose a function $k(x)=k^{(0)}(x)$ as the initial value of the iteration.

  2. Solve Equation (1) with $k(x)=k^{(n)}(x)$ to obtain $u^{(n)}(x,t)$.

  3. Solve Equation (29) with $w^{(n)}(x,T)=u^{(n)}(x,T)-s(x)$ to obtain $w^{(n)}(x,t)$, and compute $\int_0^Tu_x^{(n)}w_x^{(n)}\,dt$.

  4. Compute $k_o^{(n+1)}=\dfrac{A_e^{(n)}}{B^{(n)}}k_E^{(n)}+\dfrac{A_w^{(n)}}{B^{(n)}}k_W^{(n)}+\dfrac{h^2}{\lambda B^{(n)}}\int_0^Tu_x^{(n)}w_x^{(n)}\,dt$ to obtain the next iterate, where the boundary points of $k$ are handled by the Neumann boundary condition.

  5. Select an arbitrarily small positive constant $\alpha$, compute $\|k^{(n+1)}(x)-k^{(n)}(x)\|$ and compare it with $\alpha$. If $\|k^{(n+1)}(x)-k^{(n)}(x)\|<\alpha$, stop and take $\bar k(x)=k^{(n)}(x)$. Otherwise, set $n=n+1$, go to step 2 and continue by induction. A code sketch of this loop is given below.
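The five steps above can be organized as in the following sketch (schematic only: `solve_forward` and `solve_adjoint` are placeholders for any discretization of Equations (1) and (29), e.g. the finite volume solvers used in Section 5, and `jacobi_update` is the routine sketched after (64)):

```python
import numpy as np

def reconstruct_k(k0, s, f, phi, h, tau, lam, alpha=1e-6, max_iter=300):
    """Gauss-Jacobi iteration for the coefficient k (steps 1-5 above).

    solve_forward(k, f, phi, h, tau) -> u : grid solution of Equation (1)
    solve_adjoint(k, w_T, h, tau)    -> w : grid solution of Equation (29)
    Both are assumed to return arrays of shape (nt+1, nx+1); they are
    user-supplied PDE solvers and are not defined here.
    """
    k = k0.copy()
    for n in range(max_iter):
        u = solve_forward(k, f, phi, h, tau)          # step 2
        w = solve_adjoint(k, u[-1] - s, h, tau)       # step 3: terminal data u(., T) - s
        ux = np.gradient(u, h, axis=1)
        wx = np.gradient(w, h, axis=1)
        g = np.trapz(ux * wx, dx=tau, axis=0)         # int_0^T u_x w_x dt at every node
        k_new = jacobi_update(k, g, h, lam)           # step 4
        if np.max(np.abs(k_new - k)) < alpha:         # step 5: stopping test
            return k_new
        k = k_new
    return k
```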

5. Numerical experiments

In this section we carry out some numerical experiments to examine the theory and the algorithm presented in the preceding sections.

The forward problem (1) and the adjoint problem (29) are solved by the finite volume method. In all the numerical experiments, we take $T=1$, $\beta=0.0001$, $\tau=T/100$, $h=l/100$, $\delta=0.01$, where $\tau$ is the time step size, $h$ is the spatial step size and $\delta$ is the noise level. The regularization parameters used in this paper are chosen by the Morozov discrepancy principle (see, e.g. [Citation1,Citation40]). The synthetic data is generated in the following form
$$s^\delta(x)=u^\delta(x,T)=u(x,T)\,[1+\delta\times\mathrm{random}(x)],$$
where $\mathrm{random}(x)$ denotes a random number uniformly distributed in $[-1,1]$. The source term $f(x,t)$ and the initial term $\varphi(x)$ in Equation (1) are taken as
$$f(x,t)=(\sin x-2\sin x\cos2x+\sin3x)\,e^{-t}\quad\text{and}\quad\varphi(x)=\sin x,$$
respectively.
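For reference, one possible realization of the data-generation step (not the authors' code: the scheme below is a plain backward-Euler/central-difference discretization rather than the finite volume method used in the paper, and it matches the `solve_forward` placeholder assumed in the Section 4 sketch):

```python
import numpy as np

def solve_forward(k, f, phi, h, tau, T=1.0):
    """Backward Euler in time, central differences in space for Equation (1).

    k, phi : nodal arrays on [0, l];  f(x, t) : callable source term.
    Returns u of shape (nt+1, nx+1) with u[0] = phi and zero Dirichlet BCs.
    """
    nx = k.size - 1
    nt = int(round(T / tau))
    x = np.linspace(0.0, nx * h, nx + 1)
    k_half = 0.5 * (k[:-1] + k[1:])          # arithmetic average of k at cell interfaces
    main = 1.0 + tau / h**2 * (k_half[:-1] + k_half[1:])
    lower = -tau / h**2 * k_half[:-1]
    upper = -tau / h**2 * k_half[1:]
    A = np.diag(main) + np.diag(lower[1:], -1) + np.diag(upper[:-1], 1)
    u = np.zeros((nt + 1, nx + 1))
    u[0] = phi
    for m in range(nt):
        rhs = u[m, 1:-1] + tau * f(x[1:-1], (m + 1) * tau)
        u[m + 1, 1:-1] = np.linalg.solve(A, rhs)   # interior nodes; boundary values stay 0
    return u

def make_noisy_data(u, delta=0.01, seed=0):
    """Noisy final-time data s_delta(x) = u(x, T) * (1 + delta * random(x))."""
    rng = np.random.default_rng(seed)
    return u[-1] * (1.0 + delta * rng.uniform(-1.0, 1.0, size=u.shape[1]))
```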

Example 5.1

In the first example, we take
$$k(x)=\frac12|\sin x|+\frac12,\quad x\in[0,2\pi].$$
The initial value of the iteration is $k^{(0)}=1$ and the regularization parameter is $\lambda=0.005$.

Figure 2 shows the reconstruction of $k(x)$, where the '–o–' line is the exact coefficient and the '–x–' line is the inverted coefficient. It can be seen that the diffusion coefficient $k(x)$ is non-smooth at $x=\pi$, and the inversion algorithm gives a satisfactory result.

Figure 2. Reconstructions of the coefficient k(x) in example 1, k(0)(x)=1, λ=0.005.


Example 5.2

In the second example, we take
$$k(x)=\begin{cases}2,&0\le x<20,\\ 1,&20\le x\le30.\end{cases}$$
The initial value of the iteration is $k^{(0)}=1$ and the regularization parameter is $\lambda=0.05$.

In this case, the change of $k(x)$ is concentrated at the discontinuity point. In order to show the inversion process of the total variation algorithm, we compare the numerical results after different numbers of iterations. As shown in Figure 3, the '–o–' line is the exact coefficient, and the '–•–' line and the '–x–' line are the inverted coefficients after 50 iterations and 300 iterations, respectively. From the numerical results, we can see that our algorithm works well in the discontinuous case.

Figure 3. Reconstructions of the coefficient k(x) in example 2, k(0)(x)=1, λ=0.05.


Example 5.3

In the third example, we take
$$k(x)=\begin{cases}\dfrac{1}{10}x+1,&0\le x<10,\\ 2,&10\le x\le30.\end{cases}$$
The initial value of the iteration is $k^{(0)}=1$ and the regularization parameter is $\lambda=0.01$.

Figure 4 shows the numerical results after different numbers of iterations. The '–o–' line is the exact coefficient $k(x)$. The '–•–' line is the inverted coefficient $\bar k(x)$ after 50 iterations. The '–x–' line is the inverted coefficient $\bar k(x)$ after 300 iterations. It can be seen that after 300 iterations the coefficient $k(x)$ is recovered very well.

Figure 4. Reconstructions of the coefficient k(x) in example 3, k(0)(x)=1, λ=0.01.


6. Conclusions

In this paper, we investigated an inverse problem of identifying an unknown diffusion coefficient in a parabolic equation. A total variation regularization method is proposed to deal with the ill-posedness. By transforming the problem into an optimal control problem, the existence of, a necessary condition for, and the stability of the minimizer of the control functional are established. By modifying the necessary condition into an equality form, we designed a Gauss–Jacobi iteration scheme to obtain the numerical solution. The numerical experiments show that this algorithm is effective and stable in reconstructing non-smooth solutions.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Additional information

Funding

This work was supported by the NNSF of China (National Natural Science Foundation of China) [grant number 11771068].

References

  • Kirsch A. An introduction to the mathematical theory of inverse problems. New York: Springer; 2011.
  • Deng Z, Yang L, Yu J, et al. An inverse problem of identifying the coefficient in a nonlinear parabolic equation. Nonlinear Anal Theory Methods Appl. 2009;71(12):6212–6221. doi: 10.1016/j.na.2009.06.014
  • Sun J. An eigenvalue method using multiple frequency data for inverse scattering problems. Inverse Problems. 2012;28(2):025012. doi: 10.1088/0266-5611/28/2/025012
  • Demir A, Hasanov A. Identification of the unknown diffusion coefficient in a linear parabolic equation by the semigroup approach. J Math Anal Appl. 2008;340(1):5–15. doi: 10.1016/j.jmaa.2007.08.004
  • Isakov V, Kindermann S. Identification of the diffusion coefficient in a one-dimensional parabolic equation. Inverse Problems. 2000;16(3):665–680. doi: 10.1088/0266-5611/16/3/309
  • Keung YL, Zou J. Numerical identifications of parameters in parabolic systems. Inverse Problems. 1998;14(1):83–100. doi: 10.1088/0266-5611/14/1/009
  • Deng Z, Yang L, Yu J, et al. Identifying the diffusion coefficient by optimization from the final observation. Appl Math Comput. 2013;219(9):4410–4422.
  • Chen Q, Liu JJ. Solving an inverse parabolic problem by optimization from final measurement data. J Comput Appl Math. 2006;193(1):183–203. doi: 10.1016/j.cam.2005.06.003
  • Cheng J, Liu JJ. A quasi Tikhonov regularization for a two-dimensional backward heat problem by a fundamental solution. Inverse Problems. 2008;24(6):065012. doi: 10.1088/0266-5611/24/6/065012
  • Deng Z, Yang L. An inverse problem of identifying the radiative coefficient in a degenerate parabolic equation. Chin Ann Math Ser B. 2014;35(1):355–382. doi: 10.1007/s11401-014-0836-x
  • Egger H, Engl HW. Tikhonov regularization applied to the inverse problem of option pricing: convergence analysis and rates. Inverse Problems. 2005;21(3):1027–1045. doi: 10.1088/0266-5611/21/3/014
  • Yang L, Dehghan M, Yu J, et al. Inverse problem of time-dependent heat sources numerical reconstruction. Math Comput Simul. 2011;81(8):1656–1672. doi: 10.1016/j.matcom.2011.01.001
  • Yang L, Yu JN, Deng Z. An inverse problem of identifying the coefficient of parabolic equation. Appl Math Model. 2008;32(10):1984–1995. doi: 10.1016/j.apm.2007.06.025
  • Deng Z, Yang L, Yao B, et al. An inverse problem of determining the shape of rotating body by temperature measurements. Appl Math Model. 2018;59:464–482. doi: 10.1016/j.apm.2018.02.002
  • Yang L, Deng Z, Hon Y. Simultaneous identification of unknown initial temperature and heat source. Dyn Syst Appl. 2016;25(1–4):583–602.
  • Yamamoto M, Zou J. Simultaneous reconstruction of the initial temperature and heat radiative coefficient. Inverse Problems. 2001;17(4):1181–1202. doi: 10.1088/0266-5611/17/4/340
  • Isakov V. Inverse parabolic problems with the final overdetermination. Commun Pure Appl Math. 1991;44(2):185–209. doi: 10.1002/cpa.3160440203
  • Jiang D, Feng H, Zou J. Convergence rates of Tikhonov regularizations for parameter identification in a parabolic-elliptic system. Inverse Problems. 2012;28(10):104002. doi: 10.1088/0266-5611/28/10/104002
  • Deng Z, Hon Y, Yang L. An optimal control method for nonlinear inverse diffusion coefficient problem. J Optim Theory Appl. 2014;160(3):890–910. doi: 10.1007/s10957-013-0302-z
  • Deng Z, Yu J, Yang L. Identifying the coefficient of first-order in parabolic equation from final measurement data. Math Comput Simulat. 2008;77(4):421–435. doi: 10.1016/j.matcom.2008.01.002
  • Isakov V. Inverse problems for partial differential equations. New York: Springer; 2006.
  • Tikhonov A, Goncharsky AV, Stepanov VV, et al. Numerical methods for the solution of ill-posed problems. Dordrecht: Springer; 1995.
  • Shen J, Chan T. Mathematical models for local nontexture inpaintings. SIAM J Appl Math. 2002;62(3):1019–1043. doi: 10.1137/S0036139900368844
  • Rudin LI, Osher S, Fatemi E. Nonlinear total variation based noise removal algorithms. Phys D. 1992;60(1):259–268. doi: 10.1016/0167-2789(92)90242-F
  • Hào DN, Quyen TN. Convergence rates for total variation regularization of coefficient identification problems in elliptic equations I. Inverse Problems. 2011;27(7):075008. doi: 10.1088/0266-5611/27/7/075008
  • Wunderli T. A partial regularity result for an anisotropic smoothing functional for image restoration in BV-space. J Math Anal Appl. 2008;339(2):1169–1178. doi: 10.1016/j.jmaa.2007.07.013
  • Giusti E. Minimal surfaces and functions of bounded variation. Boston: Birkhäuser; 1984.
  • Bachmayr M, Burger M. Iterative total variation schemes for nonlinear inverse problems. Inverse Problems. 2009;25(10):105004. doi: 10.1088/0266-5611/25/10/105004
  • Chan TF, Tai X. Identification of discontinuous coefficients in elliptic problems using total variation regularization. SIAM J Sci Comput. 2003;25(3):881–904. doi: 10.1137/S1064827599326020
  • Dou Y, Han B. Total variation regularization for the reconstruction of a mountain topography. Appl Numer Math. 2012;62(1):1–20. doi: 10.1016/j.apnum.2011.09.004
  • Hào DN, Quyen TN. Convergence rates for total variation regularization of coefficient identification problems in elliptic equations II. J Math Anal Appl. 2012;388(1):593–616. doi: 10.1016/j.jmaa.2011.11.008
  • Li L, Han B. A new iteratively total variational regularization for nonlinear inverse problems. J Comput Appl Math. 2016;298:40–52. doi: 10.1016/j.cam.2015.11.033
  • Pan B, Xu D, Xu Y, et al. TV-like regularization for backward parabolic problems. Math Methods Appl Sci. 2017;40(4):957–969. doi: 10.1002/mma.4027
  • Wang L, Liu JJ. Total variation regularization for a backward time-fractional diffusion problem. Inverse Problems. 2013;29(11):115013. doi: 10.1088/0266-5611/29/11/115013
  • Osher S, Burger M, Goldfarb D, et al. An iterative regularization method for total variation-based image restoration. Multiscale Model Simul. 2005;4(2):460–489. doi: 10.1137/040605412
  • Acar R, Vogel CR. Analysis of bounded variation penalty methods for ill-posed problems. Inverse Problems. 1994;10(6):1217–1229. doi: 10.1088/0266-5611/10/6/003
  • Evans L. Partial differential equations. 2nd ed. Graduate Studies in Mathematics; Providence: American Mathematical Society; 2010.
  • Friedman A. Partial differential equations of parabolic type. Englewood Cliffs, NJ: Prentice-Hall; 1964.
  • Ladyzhenskaia OA, Solonnikov VA, Ural'ceva NN. Linear and quasi-linear equations of parabolic type. Providence (RI): American Mathematical Society; 1968.
  • Xiao T, Yu S, Wang Y. Numerical solution for inverse problems. Beijing: Science Press; 2003.
