
Determination of the initial condition in parabolic equations from integral observations

Pages 1138-1167 | Received 26 Jun 2016, Accepted 19 Aug 2016, Published online: 16 Sep 2016

Abstract

We study the problem of determining the initial condition in parabolic equations with time-dependent coefficients from integral observations, which can be regarded as generalizations of point-wise interior observations. Our approach is new in the sense that, for determining the initial condition, we do not assume that the data are available in the whole space domain at the final moment or in a subset of the space domain during a certain time interval, but only some integral observations during a time interval. We propose a variational method in combination with Tikhonov regularization for solving the problem and then discretize it by finite difference splitting methods. The discretized minimization problem is solved by the conjugate gradient method and tested numerically to show its efficiency. As a by-product of the variational method, we also propose a numerical scheme for estimating the degree of ill-posedness of the problem.


1. Introduction and problem setting

The problem of determining the initial condition in parabolic equations is very important in practice. It arises in data assimilation,[Citation1,Citation2] in image processing,[Citation3] in heat conduction processes,[Citation4,Citation26,Citation27] etc. However, for determining the initial condition, most authors assume that the data are available either in the whole space domain at the final moment,[Citation1,Citation2,Citation5,Citation6] or in a subset of the space domain during a certain time interval.[Citation7] In this paper, we look at the problem from a practical point of view: the data can be measured only at certain points of the space domain and only during a certain time interval; furthermore, since the data are obtained by instruments, they are averaged, i.e. of some integral form. Thus, from this point of view, our problem is that of determining the initial condition in parabolic equations with possibly time-dependent coefficients from some integral observations during a certain time interval. The mathematical formulation of the problem is as follows:

Let $\Omega$ be an open bounded domain in $\mathbb{R}^n$. Denote by $\partial\Omega$ the boundary of $\Omega$, $Q := \Omega \times (0,T]$, and $S := \partial\Omega \times (0,T]$. Suppose that
$$a_{ij}, b \in L^\infty(Q), \quad (1.1)$$
$$a_{ij} = a_{ji}, \quad i,j \in \{1,2,\dots,n\}, \quad (1.2)$$
$$\lambda\|\xi\|_{\mathbb{R}^n}^2 \le \sum_{i,j=1}^n a_{ij}(x,t)\xi_i\xi_j \le \Lambda\|\xi\|_{\mathbb{R}^n}^2, \quad \forall \xi \in \mathbb{R}^n, \quad (1.3)$$
$$0 \le b(x,t) \le \mu \quad \text{a.e. in } Q, \quad (1.4)$$

with $\lambda$, $\Lambda$ and $\mu$ being given positive constants. Consider the problem
$$\frac{\partial u}{\partial t} - \sum_{i,j=1}^n \frac{\partial}{\partial x_i}\Big(a_{ij}(x,t)\frac{\partial u}{\partial x_j}\Big) + b(x,t)u = f \ \text{in } Q, \quad u = 0 \ \text{on } S, \quad u|_{t=0} = v \ \text{in } \Omega \quad (1.5)$$

with $f \in L^2(Q)$ and $v \in L^2(\Omega)$.

To introduce the notion of a weak solution to this problem, we use the standard Sobolev spaces $H^1(\Omega)$, $H^{1,0}(Q)$ and $H^{1,1}(Q)$ (see e.g. [Citation8, p.111]). Furthermore, for a Banach space $B$, we define
$$L^2(0,T;B) = \{u : u(t) \in B \ \text{a.e. } t \in (0,T) \ \text{and} \ \|u\|_{L^2(0,T;B)} < \infty\},$$

with the norm
$$\|u\|_{L^2(0,T;B)}^2 = \int_0^T \|u(t)\|_B^2\, dt.$$

In the sequel, we shall use the space $W(0,T)$ defined as
$$W(0,T) = \{u : u \in L^2(0,T;H_0^1(\Omega)),\ u_t \in L^2(0,T;(H_0^1(\Omega))')\},$$

equipped with the norm
$$\|u\|_{W(0,T)}^2 = \|u\|_{L^2(0,T;H^1(\Omega))}^2 + \|u_t\|_{L^2(0,T;(H_0^1(\Omega))')}^2.$$

We note that $(H_0^1(\Omega))' = H^{-1}(\Omega)$.

The solution of the problem (1.5) is understood in the weak sense as follows: a solution in $W(0,T)$ of the problem (1.5) is a function $u(x,t) \in W(0,T)$ satisfying the identity
$$\int_0^T \langle u_t, \eta\rangle_{H^{-1}(\Omega),H_0^1(\Omega)}\, dt + \int_Q \Big(\sum_{i,j=1}^n a_{ij}(x,t)u_{x_i}\eta_{x_j} + b(x,t)u\eta - f\eta\Big)\, dx\, dt = 0 \quad \forall \eta \in W(0,T) \quad (1.6)$$

and
$$u|_{t=0} = v. \quad (1.7)$$

It can be proved that under the above conditions there exists a unique solution in $W(0,T)$ to the problem (1.5) [Citation4,Citation8–Citation10]. Furthermore, there exists a constant $c$ depending only on the coefficients $a_{ij}$, $b$ and $\Omega$ such that
$$\|u\|_{W(0,T)} \le c\big(\|v\|_{L^2(\Omega)} + \|f\|_{L^2(Q)}\big). \quad (1.8)$$

If we assume that $a_{ij}, b \in C^1(\bar Q)$ and $v \in H_0^1(\Omega)$, then it can be proved that $u_t \in L^2(0,T;L^2(\Omega))$ [Citation9,Citation10].

A standard inverse problem related to (1.5) is the backward parabolic equation: given $u(x,T)$, find $v$. This problem is the subject of a great deal of research; see the recent survey [Citation5], or [Citation6,Citation11–Citation13] and the references therein. However, in this setting one has to observe the solution in the whole space domain, which is hardly realizable in practice. Recently, the authors of [Citation7] reconstructed the initial condition $v$ from measurements of $u$ in $\omega \times (\tau,T)$, where $\omega$ is a subdomain of $\Omega$ and $\tau > 0$ is a constant. They proved some stability estimates and solved the problem by Tikhonov regularization. In this paper, we will reconstruct the initial condition $v(x)$ when we can observe $u(x,t)$ through the integral observation operators $l_i$, $i=1,\dots,N$, during a certain time interval, say $(\tau,T]$. Namely, suppose that we are given $N$ nonnegative weight functions $\omega_i \in L^1(\Omega)$ with $\int_\Omega \omega_i(x)\,dx > 0$, and we observe $u$ by
$$l_i u(x,t) = \int_\Omega \omega_i(x)u(x,t)\, dx = h_i(t), \quad t \in (\tau,T),\ i=1,\dots,N. \quad (1.9)$$

The problem is to reconstruct the initial condition v from these observations.

Before going further, let us discuss the observations (1.9). First, any measurement is an averaging process, that is, of the form (1.9). Second, such observations are a generalization of point observations. Indeed, let $x_i \in \Omega$, $i=1,\dots,N$, be given, let $S_i$ be a neighbourhood of $x_i$, and choose $\omega_i(x)$ as
$$\omega_i(x) = \begin{cases} \dfrac{1}{|S_i|}, & \text{if } x \in S_i,\\[1ex] 0, & \text{otherwise}, \end{cases} \quad (1.10)$$

where $|S_i|$ is the volume of $S_i$. We see that $|S_i|$ plays the role of the width of the measuring instrument, and when we let $|S_i|$ tend to zero we recover the point observation. It should also be noted that, as the solution to (1.5) is understood in the weak sense, its value at a given point does not always have a meaning, but it does in the above averaged sense. Third, with this kind of observation, the data need not be available in the whole space domain or at all times. Thus, our problem setting is more practical than the related ones.
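For intuition, here is a small numerical sketch (a hypothetical smooth profile $u$ is assumed, not the solution of (1.5)) showing that the averaged observation (1.9) with the indicator weight (1.10) tends to the point observation as $|S_i| \to 0$:

```python
import numpy as np

# u is a fixed smooth test profile (an assumption for illustration only,
# not the solution of the parabolic problem).
def u(x):
    return np.sin(2 * np.pi * x)

def integral_observation(xi, eps, n=20001):
    # l_i u = (1/|S_i|) * integral of u over S_i = (x_i - eps, x_i + eps)
    x = np.linspace(xi - eps, xi + eps, n)
    return np.trapz(u(x), x) / (2 * eps)

xi = 0.3
# shrinking the window |S_i| = 2*eps drives the averaged value to u(x_i)
errors = [abs(integral_observation(xi, eps) - u(xi))
          for eps in (0.1, 0.01, 0.001)]
```

For smooth $u$ the error decreases roughly like $\epsilon^2$, since the symmetric average behaves like a midpoint rule.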

We will solve the problem by a variational method in combination with Tikhonov regularization and will use the conjugate gradient method to implement the algorithm numerically; this is the subject of Section 2. The problem is discretized by the splitting finite difference method, although the finite element method could be used instead. The reason is that the finite difference method is easy to apply to problems with time-dependent coefficients, and the splitting method decomposes multi-dimensional problems into sequences of one-dimensional ones, which is very fast. The difficulty with the finite difference method applied to our problem is that we work with weak solutions. A full theory for this is presented in Section 2.3, where we prove the convergence of the solution of the discretized variational problem to the solution of the continuous one. Furthermore, we prove that the inverse problem of determining $v$ from the observations (1.9) is ill-posed and propose a simple technique to approximate its degree of ill-posedness. Numerical results are given in Section 3.

2. Variational method

2.1. Variational method for the continuous problem

First, let us note that the inverse problem of reconstructing the initial condition $v$ from the observations $l_i(u)$, $i=1,2,\dots,N$, is ill-posed.

Remark 2.1:

If $v \in H_0^1(\Omega)$, $a_{ij}, b \in C^1([0,T];L^\infty(\Omega))$, $i,j=1,\dots,n$, and there exists a constant $\mu_1$ such that $|\partial a_{ij}/\partial t|, |\partial b/\partial t| \le \mu_1$, then the operator $C$ mapping $v \in L^2(\Omega)$ to $(l_1 u(x,t;v),\dots,l_N u(x,t;v))$ is compact from $L^2(\Omega)$ to $(L^2(\tau,T))^N$. Hence the problem of reconstructing $v$ from $Cv = (h_1,\dots,h_N) \in (L^2(\tau,T))^N$ is ill-posed.

Indeed, if the conditions of the remark are satisfied, then $u \in H_0^{1,1}(Q)$. Therefore, $(l_1 u(x,t;v),\dots,l_N u(x,t;v)) \in (H^1(\tau,T))^N$, which is compactly embedded in $(L^2(\tau,T))^N$. Hence the operator $C$ from $L^2(\Omega)$ to $(L^2(\tau,T))^N$ is compact.

Now we analyse the degree of ill-posedness of this problem. We denote by $u^o$ the solution of the problem
$$\frac{\partial u}{\partial t} - \sum_{i,j=1}^n \frac{\partial}{\partial x_i}\Big(a_{ij}(x,t)\frac{\partial u}{\partial x_j}\Big) + b(x,t)u = f \ \text{in } Q, \quad u = 0 \ \text{on } S, \quad u|_{t=0} = 0 \ \text{in } \Omega, \quad (2.1)$$

and by $u[v]$ the solution of the problem
$$\frac{\partial u}{\partial t} - \sum_{i,j=1}^n \frac{\partial}{\partial x_i}\Big(a_{ij}(x,t)\frac{\partial u}{\partial x_j}\Big) + b(x,t)u = 0 \ \text{in } Q, \quad u = 0 \ \text{on } S, \quad u|_{t=0} = v \ \text{in } \Omega. \quad (2.2)$$

Then $u(v) = u[v] + u^o$. Hence
$$l_i u(v) = l_i u[v] + l_i u^o := C_i v + l_i u^o, \quad (2.3)$$

where the $C_i$, $i=1,\dots,N$, mapping $v$ to $l_i u[v]$ are bounded linear operators from $L^2(\Omega)$ to $L^2(\tau,T)$.

Define the linear operator
$$C = (C_1,\dots,C_N) : L^2(\Omega) \to (L^2(\tau,T))^N, \qquad Cv = (C_1 v,\dots,C_N v) \ \text{for } v \in L^2(\Omega),$$

where the scalar product in $(L^2(\tau,T))^N$ is defined as follows: if $\hbar^1 = (\hbar_1^1,\dots,\hbar_N^1)$ and $\hbar^2 = (\hbar_1^2,\dots,\hbar_N^2)$ are in $(L^2(\tau,T))^N$, then $(\hbar^1,\hbar^2)_{(L^2(\tau,T))^N} = \sum_{i=1}^N (\hbar_i^1,\hbar_i^2)_{L^2(\tau,T)}$. Hence the norm in $(L^2(\tau,T))^N$ is given by $\|\hbar\|_{(L^2(\tau,T))^N}^2 = \sum_{i=1}^N \|\hbar_i\|_{L^2(\tau,T)}^2$ for $\hbar = (\hbar_1,\dots,\hbar_N) \in (L^2(\tau,T))^N$. Set $\hbar_i = h_i - l_i u^o$, $i=1,\dots,N$, and $\hbar = (\hbar_1,\dots,\hbar_N)$. Then the problem of reconstructing the initial condition $v$ in (1.5) from the observations (1.9) takes the form
$$Cv = \hbar. \quad (2.4)$$

To characterize the ill-posedness of this problem, we have to estimate the singular values of $C$, i.e. the square roots of the eigenvalues of $C^*C$. To this end, we now introduce a variational method for solving the reconstruction problem.

To emphasize the dependence of the solution $u(x,t)$ of (1.5) on the initial condition $v$, we sometimes write $u(x,t;v)$ or $u(v)$ if there is no confusion. A natural method for reconstructing $v$ from the observations (1.9) is to minimize the misfit functional
$$J_0(v) = \frac{1}{2}\sum_{i=1}^N \|l_i u(v) - h_i\|_{L^2(\tau,T)}^2 \quad (2.5)$$

with respect to $v \in L^2(\Omega)$. Because the solution to the reconstruction problem is not unique, some a priori information about it is needed. Let $v^*$ be an a priori guess of $v$. We now combine the above least squares problem with Tikhonov regularization as follows: minimize the functional
$$J_\gamma(v) = J_0(v) + \frac{\gamma}{2}\|v - v^*\|_{L^2(\Omega)}^2 \quad (2.6)$$

with respect to $v \in L^2(\Omega)$, with $\gamma$ being a positive Tikhonov regularization parameter.

Since $\gamma > 0$, it is easily seen that the problem of minimizing $J_\gamma(v)$ over $L^2(\Omega)$ has a unique solution. We now prove that $J_\gamma(v)$ is Fréchet differentiable and derive a formula for its gradient. To this end, consider the adjoint problem
$$-\frac{\partial p}{\partial t} - \sum_{i,j=1}^n \frac{\partial}{\partial x_i}\Big(a_{ij}(x,t)\frac{\partial p}{\partial x_j}\Big) + b(x,t)p = \sum_{i=1}^N \omega_i(l_i u - h_i)\chi_{(\tau,T)}(t) \ \text{in } Q, \quad p = 0 \ \text{on } S, \quad p(x,T) = 0 \ \text{in } \Omega, \quad (2.7)$$

where $\chi_{(\tau,T)}(t) = 1$ if $t \in (\tau,T)$ and $\chi_{(\tau,T)}(t) = 0$ otherwise. The solution to (2.7) is a function $p \in W(0,T)$ satisfying the identity
$$-\int_0^T \langle p_t, \eta\rangle_{H^{-1}(\Omega),H_0^1(\Omega)}\, dt + \int_Q \Big[\sum_{i,j=1}^n a_{ij}(x,t)p_{x_i}\eta_{x_j} + b(x,t)p\eta - \sum_{i=1}^N \omega_i(l_i u - h_i)\chi_{(\tau,T)}(t)\eta\Big]\, dx\, dt = 0 \quad \forall \eta \in W(0,T)$$

and
$$p|_{t=T} = 0.$$

Note that if we reverse the time direction in (2.7), we obtain a problem of the same form as (1.5). Therefore, there exists a unique solution $p \in W(0,T)$ to (2.7).

Theorem 2.2:

The gradient $\nabla J_0(v)$ of the objective functional $J_0$ at $v$ is given by
$$\nabla J_0(v) = p(x,0), \quad (2.8)$$

where $p(x,t)$ is the solution to the adjoint problem (2.7).

For a small variation $\delta v$ of $v$, we have
$$J_0(v+\delta v) - J_0(v) = \sum_{i=1}^N\frac{1}{2}\|l_i u(v+\delta v) - h_i\|_{L^2(\tau,T)}^2 - \sum_{i=1}^N\frac{1}{2}\|l_i u(v) - h_i\|_{L^2(\tau,T)}^2 = \sum_{i=1}^N\frac{1}{2}\|l_i\delta u(v)\|_{L^2(\tau,T)}^2 + \sum_{i=1}^N\langle l_i\delta u(v), l_i u(v) - h_i\rangle_{L^2(\tau,T)} = \sum_{i=1}^N\langle l_i\delta u(v), l_i u(v) - h_i\rangle_{L^2(\tau,T)} + o(\|\delta v\|_{L^2(\Omega)}),$$

where $\delta u(v)$ is the solution to the problem
$$\frac{\partial \delta u}{\partial t} - \sum_{i,j=1}^n \frac{\partial}{\partial x_i}\Big(a_{ij}(x,t)\frac{\partial \delta u}{\partial x_j}\Big) + b(x,t)\delta u = 0 \ \text{in } Q, \quad \delta u(x,t) = 0 \ \text{on } S, \quad \delta u(x,0) = \delta v \ \text{in } \Omega. \quad (2.9)$$

From (1.9), we have
$$J_0(v+\delta v) - J_0(v) = \sum_{i=1}^N\langle l_i\delta u, l_i u - h_i\rangle_{L^2(\tau,T)} + o(\|\delta v\|_{L^2(\Omega)}) = \sum_{i=1}^N\int_\tau^T\Big(\int_\Omega \omega_i\delta u\, dx\Big)(l_i u - h_i)\, dt + o(\|\delta v\|_{L^2(\Omega)}) = \sum_{i=1}^N\int_\tau^T\int_\Omega \delta u\,\omega_i(l_i u - h_i)\, dx\, dt + o(\|\delta v\|_{L^2(\Omega)}) = \int_0^T\int_\Omega \delta u\sum_{i=1}^N \omega_i(l_i u - h_i)\chi_{(\tau,T)}(t)\, dx\, dt + o(\|\delta v\|_{L^2(\Omega)}).$$

Using Green's formula (Theorem 3.18 of [Citation8]) for (2.7) and (2.9), we have
$$\int_0^T\int_\Omega \delta u\sum_{i=1}^N \omega_i(l_i u - h_i)\chi_{(\tau,T)}(t)\, dx\, dt = \int_\Omega \delta v\, p(x,0)\, dx.$$

Hence
$$J_0(v+\delta v) - J_0(v) = \int_\Omega \delta v\, p(x,0)\, dx + o(\|\delta v\|_{L^2(\Omega)}) = \langle p(x,0), \delta v\rangle_{L^2(\Omega)} + o(\|\delta v\|_{L^2(\Omega)}).$$

Consequently, $J_0$ is Fréchet differentiable and its gradient has the form
$$\nabla J_0(v) = p(x,0).$$

The proof is complete.

From this result, we see that the functional $J_\gamma(v)$ is also Fréchet differentiable and its gradient has the form
$$\nabla J_\gamma(v) = p(x,0) + \gamma(v - v^*), \quad (2.10)$$

where $p(x,t)$ is the solution to the adjoint problem (2.7).

Now we return to the question of estimating the degree of ill-posedness of our reconstruction problem. Since the $C_i$, $i=1,\dots,N$, are defined by (2.3), from the above theorem we have $C_i^* g = p_i(x,0)$, where $p_i$ is the solution to the adjoint problem
$$-\frac{\partial p_i}{\partial t} - \sum_{j=1}^n \frac{\partial}{\partial x_j}\Big(a_j(x,t)\frac{\partial p_i}{\partial x_j}\Big) + b(x,t)p_i = \omega_i(x)\tilde g(t) \ \text{in } Q, \quad p_i = 0 \ \text{on } S, \quad p_i(x,T) = 0 \ \text{in } \Omega \quad (2.11)$$

with $g \in L^2(\tau,T)$, where $\tilde g(t) = g(t)$ for $t \in (\tau,T)$ and $\tilde g(t) = 0$ otherwise.

From (2.3), we have
$$J_0(v) = \frac{1}{2}\sum_{i=1}^N \|l_i u(v) - h_i\|_{L^2(\tau,T)}^2 = \frac{1}{2}\sum_{i=1}^N \|C_i v - (h_i - l_i u^o)\|_{L^2(\tau,T)}^2 = \frac{1}{2}\|Cv - \hbar\|_{(L^2(\tau,T))^N}^2.$$

Hence
$$\nabla J_0(v) = C^*(Cv - \hbar) = \sum_{i=1}^N C_i^*(C_i v - \hbar_i).$$

If we take $h_i$ such that $h_i = l_i u^o$, then
$$\nabla J_0(v) = C^*Cv = \sum_{i=1}^N C_i^*C_i v.$$

Due to Theorem 2.2, $C^*Cv = \sum_{i=1}^N C_i^*C_i v = p^0(x,0)$, where $p^0$ is the solution of the adjoint problem
$$-\frac{\partial p^0}{\partial t} - \sum_{i,j=1}^n \frac{\partial}{\partial x_i}\Big(a_{ij}(x,t)\frac{\partial p^0}{\partial x_j}\Big) + b(x,t)p^0 = \sum_{i=1}^N \omega_i\, l_i u[v]\,\chi_{(\tau,T)}(t) \ \text{in } Q, \quad p^0 = 0 \ \text{on } S, \quad p^0(x,T) = 0 \ \text{in } \Omega. \quad (2.12)$$

Thus, if $v \in L^2(\Omega)$ is given, we can evaluate $C^*Cv = p^0(x,0)$. However, an explicit form of $C^*C$ is not available for analysing its eigenvalues and eigenfunctions. Nevertheless, when we discretize the problem, the finite-dimensional approximations $C_h$ of $C$ are matrices, and so we can apply the Lanczos algorithm [Citation14] to estimate the eigenvalues of $C_h^*C_h$ from the matrix–vector products $C_h^*C_h v_h$. This approach appears to be applicable to many problems, and we will test it on our inverse problem as we did in [Citation15]. The numerical simulation of this approach is given in Section 3.
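The Lanczos idea can be sketched as follows (a hypothetical toy operator stands in for $C_h^*C_h$ here; in the actual computation each matrix–vector product costs one forward solve and one adjoint solve, as described above):

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, eigsh

n = 200
# stand-in for C_h* C_h: a symmetric PSD operator with a rapidly decaying
# spectrum, mimicking the severe ill-posedness observed in Section 3
decay = np.exp(-np.arange(n, dtype=float))
Q, _ = np.linalg.qr(np.random.default_rng(0).normal(size=(n, n)))

def matvec(v):
    # applies C_h* C_h to v without ever forming the matrix explicitly
    return Q @ (decay * (Q.T @ v))

A = LinearOperator((n, n), matvec=matvec, dtype=float)
# Lanczos (via ARPACK) estimates the largest eigenvalues from matvecs alone
eigvals = eigsh(A, k=6, which='LM', return_eigenvectors=False)
singular_values = np.sqrt(np.sort(eigvals)[::-1])   # sigma_j = sqrt(lambda_j)
```

Only the action of the operator is needed, which is exactly the situation for $C_h^*C_h$ when $C_h$ is defined implicitly through the forward and adjoint solvers.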

2.2. Conjugate gradient method

We now use the conjugate gradient (CG) algorithm to approximate the initial condition $v$ as follows [Citation16,Citation28]:
$$v^{k+1} = v^k + \alpha_k d^k, \qquad d^k = \begin{cases} -\nabla J_\gamma(v^k), & \text{if } k = 0,\\ -\nabla J_\gamma(v^k) + \beta_k d^{k-1}, & \text{if } k > 0, \end{cases} \quad (2.13)$$

where
$$\beta_k = \frac{\|\nabla J_\gamma(v^k)\|^2}{\|\nabla J_\gamma(v^{k-1})\|^2}, \qquad \alpha_k = \arg\min_{\alpha \ge 0} J_\gamma(v^k + \alpha d^k). \quad (2.14)$$

We now calculate $\alpha_k$, the solution of the problem $\alpha_k = \arg\min_{\alpha\ge0} J_\gamma(v^k + \alpha d^k)$. We have
$$J_\gamma(v^k + \alpha d^k) = \sum_{i=1}^N\frac{1}{2}\|l_i u(v^k + \alpha d^k) - h_i\|_{L^2(\tau,T)}^2 + \frac{\gamma}{2}\|v^k + \alpha d^k - v^*\|_{L^2(\Omega)}^2 = \sum_{i=1}^N\frac{1}{2}\|C_i(v^k + \alpha d^k) + l_i u^o - h_i\|_{L^2(\tau,T)}^2 + \frac{\gamma}{2}\|\alpha d^k + v^k - v^*\|_{L^2(\Omega)}^2 = \sum_{i=1}^N\frac{1}{2}\|\alpha C_i d^k + l_i u(v^k) - h_i\|_{L^2(\tau,T)}^2 + \frac{\gamma}{2}\|\alpha d^k + v^k - v^*\|_{L^2(\Omega)}^2.$$

Hence
$$\frac{\partial J_\gamma(v^k + \alpha d^k)}{\partial\alpha} = \alpha\sum_{i=1}^N\|C_i d^k\|_{L^2(\tau,T)}^2 + \sum_{i=1}^N\langle C_i d^k, l_i u(v^k) - h_i\rangle_{L^2(\tau,T)} + \gamma\alpha\|d^k\|_{L^2(\Omega)}^2 + \gamma\langle d^k, v^k - v^*\rangle_{L^2(\Omega)}.$$

Setting $\partial J_\gamma(v^k + \alpha d^k)/\partial\alpha = 0$, we obtain
$$\alpha_k = -\frac{\sum_{i=1}^N\langle C_i d^k, l_i u(v^k) - h_i\rangle_{L^2(\tau,T)} + \gamma\langle d^k, v^k - v^*\rangle_{L^2(\Omega)}}{\sum_{i=1}^N\|C_i d^k\|_{L^2(\tau,T)}^2 + \gamma\|d^k\|_{L^2(\Omega)}^2} = -\frac{\big\langle d^k, \sum_{i=1}^N C_i^*(l_i u(v^k) - h_i) + \gamma(v^k - v^*)\big\rangle_{L^2(\Omega)}}{\sum_{i=1}^N\|C_i d^k\|_{L^2(\tau,T)}^2 + \gamma\|d^k\|_{L^2(\Omega)}^2} = -\frac{\langle \nabla J_\gamma(v^k), d^k\rangle_{L^2(\Omega)}}{\sum_{i=1}^N\|C_i d^k\|_{L^2(\tau,T)}^2 + \gamma\|d^k\|_{L^2(\Omega)}^2}. \quad (2.15)$$

From (2.13), $\alpha_k$ can be rewritten as
$$\alpha_k = \begin{cases} -\dfrac{\langle \nabla J_\gamma(v^k), -\nabla J_\gamma(v^k)\rangle_{L^2(\Omega)}}{\sum_{i=1}^N\|C_i d^k\|_{L^2(\tau,T)}^2 + \gamma\|d^k\|_{L^2(\Omega)}^2}, & \text{if } k = 0,\\[2ex] -\dfrac{\langle \nabla J_\gamma(v^k), -\nabla J_\gamma(v^k) + \beta_k d^{k-1}\rangle_{L^2(\Omega)}}{\sum_{i=1}^N\|C_i d^k\|_{L^2(\tau,T)}^2 + \gamma\|d^k\|_{L^2(\Omega)}^2}, & \text{if } k > 0. \end{cases} \quad (2.16)$$

Therefore, since $\langle \nabla J_\gamma(v^k), d^{k-1}\rangle_{L^2(\Omega)} = 0$,
$$\alpha_k = \frac{\|\nabla J_\gamma(v^k)\|_{L^2(\Omega)}^2}{\sum_{i=1}^N\|C_i d^k\|_{L^2(\tau,T)}^2 + \gamma\|d^k\|_{L^2(\Omega)}^2}, \quad k = 0,1,2,\dots \quad (2.17)$$

2.3. Variational method for the discretized problem

The minimization problem can be discretized by the standard finite element method, and the convergence of the scheme can be proved by standard arguments [Citation17]; therefore, we do not present this here. In what follows, we discretize the direct problem by the splitting method and present the variational method for the discretized problem. The reason we use this method instead of the finite element method is that the finite difference method is easy to implement. Furthermore, the splitting finite difference method decomposes a multi-dimensional problem into a sequence of one-dimensional ones, and hence it is very fast [Citation18–Citation20] (see also [Citation21–Citation23]). In doing so, we have to impose some restrictions on the domain and the coefficients.

First, we suppose that $\Omega$ is the open parallelepiped $(0,L_1)\times(0,L_2)\times\dots\times(0,L_n)$ in $\mathbb{R}^n$. Second, in (1.5) we suppose that $a_{ij} = 0$ if $i \ne j$, and for simplicity from now on we denote $a_{ii}$ by $a_i$. Thus, the system (1.5) has the form
$$\frac{\partial u}{\partial t} - \sum_{i=1}^n \frac{\partial}{\partial x_i}\Big(a_i(x,t)\frac{\partial u}{\partial x_i}\Big) + b(x,t)u = f \ \text{in } Q, \quad u = 0 \ \text{on } S, \quad u|_{t=0} = v \ \text{in } \Omega, \quad (2.18)$$

and (1.6) becomes
$$\int_0^T \langle u_t, \eta\rangle_{H^{-1}(\Omega),H_0^1(\Omega)}\, dt + \int_Q \Big(\sum_{i=1}^n a_i(x,t)u_{x_i}\eta_{x_i} + b(x,t)u\eta - f\eta\Big)\, dx\, dt = 0 \quad \forall \eta \in W(0,T). \quad (2.19)$$

Following [Citation24], we subdivide the domain $\Omega$ into small cells by the rectangular uniform grid specified by
$$0 = x_i^0 < x_i^1 = h_i < \dots < x_i^{N_i} = L_i, \quad i = 1,\dots,n.$$

Here $h_i = L_i/N_i$ is the grid size in the $x_i$-direction, $i=1,\dots,n$. The grid vertices are denoted by $x^k := (x_1^{k_1},\dots,x_n^{k_n})$, where $k := (k_1,\dots,k_n)$, $0 \le k_i \le N_i$. We also denote by $h := (h_1,\dots,h_n)$ the vector of spatial grid sizes and $\Delta h := h_1\cdots h_n$. Let $e_i$ be the unit vector in the $x_i$-direction, $i=1,\dots,n$, i.e. $e_1 = (1,0,\dots,0)$ and so on. Denote
$$\omega(k) := \{x \in \Omega : (k_i - 0.5)h_i \le x_i \le (k_i + 0.5)h_i,\ i=1,\dots,n\},$$
$$\omega_i^+(k) := \{x \in \Omega : k_i h_i \le x_i \le (k_i+1)h_i,\ (k_j - 0.5)h_j \le x_j \le (k_j + 0.5)h_j,\ j \ne i\}. \quad (2.20)$$

In the following, we denote the set of indices of internal grid points by $\Omega_h$, i.e.
$$\bar\Omega_h := \{k = (k_1,\dots,k_n) : 0 \le k_i \le N_i,\ i=1,\dots,n\}, \quad \Omega_h := \{k = (k_1,\dots,k_n) : 1 \le k_i \le N_i - 1,\ i=1,\dots,n\}, \quad \Pi_h := \bar\Omega_h \setminus \Omega_h. \quad (2.21)$$

We also make use of the sets
$$\Omega_h^i := \{k = (k_1,\dots,k_n) : 0 \le k_i \le N_i - 1,\ 0 \le k_j \le N_j,\ j \ne i\} \quad (2.22)$$

for $i=1,\dots,n$. For a function $u(x,t)$ defined in $Q$, we denote by $u^k(t)$ its approximate value at $(x^k,t)$, and if there is no confusion we sometimes write $u^k$.

Suppose that $\bar u = \{u^k, k \in \bar\Omega_h\}$ is a grid function defined in $Q_h^T := \bar\Omega_h \times (0,T)$. We define the norm
$$\|\bar u\|_{H^{1,0}(Q_h^T)}^2 := \Delta h\int_0^T \Big(\sum_{k\in\bar\Omega_h}|\bar u^k(t)|^2 + \sum_{i=1}^n\sum_{k\in\Omega_h^i}|\bar u_{x_i}^k(t)|^2\Big)\, dt \quad (2.23)$$

and if $\bar u$ has a weak derivative with respect to $t$, we define
$$\|\bar u\|_{H^{1,1}(Q_h^T)}^2 := \Delta h\int_0^T \Big(\sum_{k\in\bar\Omega_h}\big(|\bar u^k(t)|^2 + |\bar u_t^k(t)|^2\big) + \sum_{i=1}^n\sum_{k\in\Omega_h^i}|\bar u_{x_i}^k(t)|^2\Big)\, dt \quad (2.24)$$

with
$$\bar u_{x_i}^k(t) := \frac{\bar u^{k+e_i}(t) - \bar u^k(t)}{h_i}$$

being the forward difference quotient of the grid function $\bar u$. We also define the backward difference quotient of $\bar u$ by
$$\bar u_{\bar x_i}^k(t) := \frac{\bar u^k(t) - \bar u^{k-e_i}(t)}{h_i}.$$

For the grid function u¯ we define the following interpolations in Q:

(1)

Piecewise constant:
$$\tilde{\bar u}(x,t) := \bar u^k(t), \quad (x,t) \in \omega(k)\times(0,T). \quad (2.25)$$

(2)

Multilinear:
$$\hat{\bar u}(x,t) := \bar u^k(t) + \sum_{i=1}^n \bar u_{x_i}^k(t)(x_i - k_i h_i) + \sum_{1\le i<j\le n}\bar u_{x_i x_j}^k(t)(x_i - k_i h_i)(x_j - k_j h_j) + \dots + \bar u_{x_1 x_2\dots x_n}^k(t)\prod_{i=1}^n(x_i - k_i h_i), \quad (x,t)\in\omega^+(k)\times(0,T). \quad (2.26)$$
From this formula, with $n = 2$ we have
$$\hat{\bar u}(x,t) := \bar u^k(t) + \bar u_{x_1}^k(t)(x_1 - k_1 h_1) + \bar u_{x_2}^k(t)(x_2 - k_2 h_2) + \bar u_{x_1 x_2}^k(t)(x_1 - k_1 h_1)(x_2 - k_2 h_2), \quad (2.27)$$
and with $n = 3$:
$$\hat{\bar u}(x,t) := \bar u^k(t) + \sum_{i=1}^3 \bar u_{x_i}^k(t)(x_i - k_i h_i) + \sum_{1\le i<j\le 3}\bar u_{x_i x_j}^k(t)(x_i - k_i h_i)(x_j - k_j h_j) + \bar u_{x_1 x_2 x_3}^k(t)(x_1 - k_1 h_1)(x_2 - k_2 h_2)(x_3 - k_3 h_3). \quad (2.28)$$
It is clear from (2.25) and (2.26) that the interpolation $\tilde{\bar u}$ is constant in each subdomain $\omega(k)$, while $\hat{\bar u}$ is multilinear in $\omega^+(k)$. Moreover, $\hat{\bar u}$ belongs to $H^{1,1}(Q_h^T)$. The multilinear interpolation has the following properties.

Based on Chapter 6 of [Citation24, p.229], since the domain Ω is a parallelepiped, we have the following results.

Lemma 2.3:

Suppose that the grid function $\bar u$ satisfies the inequality
$$\|\bar u\|_{H^{1,0}(Q_h^T)} \le c, \quad (2.29)$$

with $c$ being a constant independent of $h$. Then the multilinear interpolation (2.26) of $\bar u$ is bounded in $H^{1,0}(Q)$.

Lemma 2.4:

Suppose that the grid function $\bar u$ satisfies the inequality
$$\|\bar u\|_{H^{1,1}(Q_h^T)} \le c, \quad (2.30)$$

with $c$ being a constant independent of $h$. Then the multilinear interpolation (2.26) of $\bar u$ is bounded in $H^{1,1}(Q)$.

Asymptotic relationships between the two interpolations as the grid size h tends to zero are stated in the following lemmas.

Lemma 2.5:

Suppose that the hypothesis of Lemma 2.3 is fulfilled. If $\{\hat{\bar u}(x,t)\}_h$ converges weakly to a function $u(x,t)$ in $L^2(Q)$ as the grid size $h$ tends to zero, then the sequence $\{\tilde{\bar u}(x,t)\}_h$ also converges weakly to $u(x,t)$ in $L^2(Q)$.

Lemma 2.6:

Suppose that the hypothesis of Lemma 2.4 is fulfilled. If $\{\hat{\bar u}(x,t)\}_h$ converges strongly to a function $u(x,t)$ in $L^2(Q)$ as the grid size $h$ tends to zero, then the sequence $\{\tilde{\bar u}(x,t)\}_h$ also converges strongly to $u(x,t)$ in $L^2(Q)$.

Lemma 2.7:

Suppose that the hypothesis of Lemma 2.4 is fulfilled and let $i \in \{1,2,3\}$. If the sequence of derivatives $\{\hat{\bar u}_{x_i}(x,t)\}_h$ converges weakly to a function $u(x,t)$ in $L^2(Q)$ as $h$ tends to zero, then the sequence $\{\tilde{\bar u}_{x_i}(x,t)\}_h$ also converges weakly to $u(x,t)$ in $L^2(Q)$, where
$$\tilde{\bar u}_{x_i}(x,t) := \bar u_{x_i}^k(t), \quad x \in \omega_i^+(k).$$

For a function $z$ defined in $\Omega$, we define its average by
$$z^k = \frac{1}{|\omega(k)|}\int_{\omega(k)} z(x)\, dx = \frac{1}{\Delta h}\int_{\omega(k)} z(x)\, dx. \quad (2.31)$$

Here, if $k$ belongs to the boundary of $\Omega$, we put $z^k = 0$. We approximate the integrals in (2.19) as follows:
$$\int_Q u_t\eta\, dx\, dt \approx \Delta h\int_0^T\sum_{k\in\bar\Omega_h}\frac{d\bar u^k(t)}{dt}\bar\eta^k(t)\, dt, \quad (2.32)$$
$$\int_Q a_i(x,t)u_{x_i}\eta_{x_i}\, dx\, dt \approx \Delta h\int_0^T\sum_{k\in\Omega_h^i}\bar a_i^k(t)\bar u_{x_i}^k(t)\bar\eta_{x_i}^k(t)\, dt, \quad (2.33)$$
$$\int_Q b(x,t)u\eta\, dx\, dt \approx \Delta h\int_0^T\sum_{k\in\bar\Omega_h}\bar b^k(t)\bar u^k(t)\bar\eta^k(t)\, dt, \quad (2.34)$$
$$\int_Q f\eta\, dx\, dt \approx \Delta h\int_0^T\sum_{k\in\bar\Omega_h}\bar f^k(t)\bar\eta^k(t)\, dt. \quad (2.35)$$

Substituting the approximations (2.32)–(2.35) into (2.19), we obtain the equality (dropping the time variable $t$ for a moment)
$$\Delta h\int_0^T\Big[\sum_{k\in\bar\Omega_h}\Big(\frac{d\bar u^k}{dt} + \bar b^k\bar u^k - \bar f^k\Big)\bar\eta^k + \sum_{i=1}^n\sum_{k\in\Omega_h^i}\bar a_i^k\bar u_{x_i}^k\bar\eta_{x_i}^k\Big]\, dt = 0. \quad (2.36)$$

Using the summation by parts formula combined with the condition $\bar u^k = \bar\eta^k = 0$ when $k_i = 0$ and $k_i = N_i$, we have
$$\sum_{k\in\Omega_h^i}\bar a_i^k\bar u_{x_i}^k\bar\eta_{x_i}^k = \sum_{k\in\Omega_h^i}\bar a_i^k\frac{\bar u^{k+e_i} - \bar u^k}{h_i}\cdot\frac{\bar\eta^{k+e_i} - \bar\eta^k}{h_i} = \sum_{k\in\Omega_h}\bar a_i^{-k}\frac{\bar u^k - \bar u^{k-e_i}}{h_i^2}\bar\eta^k - \sum_{k\in\Omega_h}\bar a_i^k\frac{\bar u^{k+e_i} - \bar u^k}{h_i^2}\bar\eta^k = \sum_{k\in\Omega_h}\Big(\bar a_i^{-k}\frac{\bar u^k - \bar u^{k-e_i}}{h_i^2} - \bar a_i^k\frac{\bar u^{k+e_i} - \bar u^k}{h_i^2}\Big)\bar\eta^k, \quad (2.37)$$
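The summation-by-parts identity (2.37) is easy to verify numerically; the following one-dimensional check (assumed random grid functions vanishing at $k = 0$ and $k = N$, with a positive coefficient) confirms that both sides agree:

```python
import numpy as np

rng = np.random.default_rng(1)
N, h = 50, 0.02
u = np.zeros(N + 1)
eta = np.zeros(N + 1)
u[1:N] = rng.normal(size=N - 1)     # u^k, vanishing at the boundary
eta[1:N] = rng.normal(size=N - 1)   # eta^k, vanishing at the boundary
a = 1.0 + rng.random(N + 1)         # positive coefficient a_i^k

# left-hand side of (2.37): sum over k = 0..N-1 of a^k u_x^k eta_x^k
lhs = np.sum(a[:N] * (u[1:] - u[:N]) * (eta[1:] - eta[:N]) / h**2)

# right-hand side: the second-difference form acting on eta^k, interior k
rhs = np.sum((a[:N - 1] * (u[1:N] - u[:N - 1])
              - a[1:N] * (u[2:] - u[1:N])) / h**2 * eta[1:N])
```

The agreement of `lhs` and `rhs` up to round-off is exactly the discrete integration by parts used to derive the operators $\Lambda_i$ below.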

with $\bar a_i^{-k} = \bar a_i^{k-e_i}$. Substituting into (2.36) and approximating the initial condition by $\bar u^k(0) = \bar v^k$, $k \in \bar\Omega_h$, we obtain the following system approximating the original problem (2.19), (1.7):
$$\frac{d\bar u}{dt} + (\Lambda_1 + \dots + \Lambda_n)\bar u - \bar F = 0, \qquad \bar u(0) = \bar v, \quad (2.38)$$

with $\bar u = \{u^k, k \in \bar\Omega_h\}$, $\bar v = \{v^k, k \in \bar\Omega_h\}$ being a grid function approximating the initial condition $v$, and $\bar F = \{\bar f^k, k \in \Omega_h\}$, where $\bar f^k$ is defined by formula (2.31). The coefficient matrix $\Lambda_i$ has the form
$$(\Lambda_i\bar u)^k = \frac{\bar b^k\bar u^k}{n} + \begin{cases} \dfrac{\bar a_i^{-k}}{h_i^2}(\bar u^k - \bar u^{k-e_i}) - \dfrac{\bar a_i^k}{h_i^2}(\bar u^{k+e_i} - \bar u^k), & 2 \le k_i \le N_i - 2,\\[1.5ex] \dfrac{\bar a_i^{-k}}{h_i^2}\bar u^k - \dfrac{\bar a_i^k}{h_i^2}(\bar u^{k+e_i} - \bar u^k), & k_i = 1,\\[1.5ex] \dfrac{\bar a_i^{-k}}{h_i^2}(\bar u^k - \bar u^{k-e_i}) + \dfrac{\bar a_i^k}{h_i^2}\bar u^k, & k_i = N_i - 1. \end{cases} \quad (2.39)$$

We note that the coefficient matrices $\Lambda_i$ defined by (2.39) are positive semi-definite (see e.g. [Citation23]).
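In the simplest setting (one dimension, constant coefficient $a_i \equiv 1$, $b \equiv 0$ — an assumed toy case), $\Lambda_i$ in (2.39) reduces to the familiar tridiagonal second-difference matrix, whose symmetry and positive semi-definiteness can be checked directly:

```python
import numpy as np

N, L = 20, 1.0
h = L / N
n_int = N - 1   # interior nodes k = 1..N-1

# (2.39) with a = 1, b = 0: 2/h^2 on the diagonal, -1/h^2 off the diagonal
Lam = (2.0 * np.eye(n_int)
       - np.eye(n_int, k=1)
       - np.eye(n_int, k=-1)) / h**2

eigs = np.linalg.eigvalsh(Lam)   # all eigenvalues are real and nonnegative
```

For homogeneous Dirichlet data the matrix is in fact positive definite; positive semi-definiteness is all the splitting scheme below requires.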

By the same scheme as presented in [Citation24], or in more detail in [Citation15] for the finite difference method applied to the Neumann problem, we can prove the following results.

Lemma 2.8:

Let u¯ be a solution to the Cauchy problem (Equation2.38).

(a)

If $v \in L^2(\Omega)$, then there exists a constant $c$ independent of $h$ and the coefficients of Equation (1.5) such that
$$\max_{t\in[0,T]}\Delta h\sum_{k\in\bar\Omega_h}|\bar u^k(t)|^2 + \Delta h\int_0^T\sum_{i=1}^n\sum_{k\in\Omega_h^i}|\bar u_{x_i}^k|^2\, dt \le c\Big(\Delta h\sum_{k\in\bar\Omega_h}|\bar v^k|^2 + \Delta h\int_0^T\sum_{k\in\bar\Omega_h}|\bar f^k|^2\, dt\Big). \quad (2.40)$$

(b)

If $a_i, b \in C^1([0,T];L^\infty(\Omega))$ with $|\partial a_i/\partial t|, |\partial b/\partial t| \le \mu_2 < \infty$, then
$$\max_{t\in[0,T]}\Delta h\sum_{k\in\bar\Omega_h}|\bar u^k(t)|^2 + \Delta h\int_0^T\Big(\sum_{k\in\bar\Omega_h}|\bar u_t^k(t)|^2 + \sum_{i=1}^n\sum_{k\in\Omega_h^i}|\bar u_{x_i}^k|^2\Big)\, dt \le c\Big(\Delta h\sum_{k\in\bar\Omega_h}|\bar v^k|^2 + \Delta h\sum_{i=1}^n\sum_{k\in\Omega_h^i}|\bar v_{x_i}^k|^2 + \Delta h\int_0^T\sum_{k\in\bar\Omega_h}|\bar f^k|^2\, dt\Big). \quad (2.41)$$

From these estimates, following [Citation24] and [Citation15] exactly, we can prove the convergence of the solution of the difference-differential problem (2.38) to the solution of the Dirichlet problem (1.5).

Theorem 2.9:

(1)

If $v \in L^2(\Omega)$, then the multilinear interpolation (2.26) $\hat u^h$ of the solution of the difference-differential problem (2.38) converges weakly in $L^2(Q)$ to the solution $u \in H^{1,0}(Q)$ of the Dirichlet problem (1.5), and its derivatives with respect to $x_i$, $i=1,\dots,n$, converge weakly in $L^2(Q)$ to $u_{x_i}$.

(2)

If $a_i, b \in C^1([0,T];L^\infty(\Omega))$ with $|\partial a_i/\partial t|, |\partial b/\partial t| \le \mu_2 < \infty$, and $v \in H^1(\Omega)$, then $\hat u^h$ converges strongly in $L^2(Q)$ to $u$.

We now turn to approximating the minimization problem (2.6). Using the representation in Section 2.1, the first-order optimality condition for this problem reads
$$\nabla J_\gamma(v) = C^*(Cv - \hbar) + \gamma(v - v^*) = p(x,0) + \gamma(v - v^*) = 0. \quad (2.42)$$

For the discretized problem (Equation2.38) we approximate the functional Jγ as follows.

First, we approximate the functionals $l_i u$ by
$$l_i^h\bar u(\bar v) = \Delta h\sum_{k\in\Omega_h}\bar\omega_i^k\bar u^k(\bar v), \quad i=1,\dots,N. \quad (2.43)$$

As in the continuous problem, we define $\bar u^o$ as the solution to the Cauchy problem
$$\frac{d\bar u}{dt} + (\Lambda_1 + \dots + \Lambda_n)\bar u - \bar F = 0, \qquad \bar u(0) = 0, \quad (2.44)$$

where $\bar F$ is defined as in (2.38), and $\bar u[\bar v]$ as the solution to the Cauchy problem
$$\frac{d\bar u}{dt} + (\Lambda_1 + \dots + \Lambda_n)\bar u = 0, \qquad \bar u(0) = \bar v. \quad (2.45)$$

Then
$$l_i^h\bar u(\bar v) = l_i^h\bar u[\bar v] + l_i^h\bar u^o = \Delta h\sum_{k\in\Omega_h}\bar\omega_i^k\bar u^k[\bar v] + \Delta h\sum_{k\in\Omega_h}\bar\omega_i^k\bar u^{o,k}, \quad i=1,\dots,N. \quad (2.46)$$

Hence the operator $C_i^h$ defined by $C_i^h\bar v = l_i^h\bar u[\bar v]$ is linear and bounded from $L^2(\Omega)$ into $L^2(\tau,T)$ for $i=1,\dots,N$. Furthermore, if $\bar p$ is a solution to the Cauchy problem
$$\frac{d\bar p}{dt} + (\Lambda_1 + \dots + \Lambda_n)\bar p - \bar G = 0, \qquad \bar p(T) = 0, \quad (2.47)$$

with $\bar G = \{\bar\omega_i^k g(t), k \in \Omega_h\}$, where $\bar\omega_i^k$ is defined by formula (2.31), then $(C_i^h)^* g = \tilde{\bar p}(x,0)$.

Thus, the discretized version of $C$ has the form $C^h := (C_1^h,\dots,C_N^h)$ and the functional $J_\gamma$ is approximated by
$$J_\gamma^h(\bar v) := \frac{1}{2}\|C^h\bar v - \hbar\|_{(L^2(\tau,T))^N}^2 + \frac{\gamma}{2}\|\bar v - \bar v^*\|_{L^2(\Omega)}^2 = \frac{1}{2}\sum_{i=1}^N\|C_i^h\bar v - \hbar_i\|_{L^2(\tau,T)}^2 + \frac{\gamma}{2}\|\bar v - \bar v^*\|_{L^2(\Omega)}^2. \quad (2.48)$$

Here, for simplicity of notation, we again set $\hbar_i = h_i - l_i^h\bar u^o$. For this discretized optimization problem we have the first-order optimality condition
$$(C^h)^*(C^h\bar v - \hbar) + \gamma(\bar v - \bar v^*) = 0. \quad (2.49)$$

If condition (2) of Theorem 2.9 is satisfied, then $\|C^h\bar v - Cv\|_{(L^2(\tau,T))^N}$ and $\|(C^h)^*g - C^*g\|_{L^2(\Omega)}$ tend to zero as $h$ tends to zero. Following [Citation25], we can prove the following result.

Theorem 2.10:

Let condition (2) of Theorem 2.9 be satisfied. Then the interpolation $\hat{\bar v}_h^\gamma$ of the solution $\bar v_h^\gamma$ of problem (2.48) converges in $L^2(\Omega)$ to the solution $v^\gamma$ of problem (2.6) as $h$ tends to zero.

2.3.1. Discretization in time

To obtain the finite difference scheme for Equation (2.38), we divide the time interval $[0,T]$ into $M$ sub-intervals by the points $t_m$, $m=0,\dots,M$, $t_0 = 0$, $t_1 = \Delta t$, ..., $t_M = M\Delta t = T$. To simplify the notation, we set $u^{k,m} := u^k(t_m)$, and similarly $F^{k,m} := F^k(t_m)$ and $\Lambda_i^m := \Lambda_i(t_m)$, $m=0,\dots,M$. In the following, we drop the spatial index when no confusion can occur. For the one-dimensional case, we use the standard Crank–Nicolson method; for multi-dimensional cases, we proceed as follows.
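In the one-dimensional case the Crank–Nicolson step reads $(E + \frac{\Delta t}{2}\Lambda)u^{m+1} = (E - \frac{\Delta t}{2}\Lambda)u^m + \Delta t F^m$; a minimal sketch (an assumed toy problem $u_t = u_{xx}$ with $f = 0$ and homogeneous Dirichlet data, not the variable-coefficient problem of the paper) is:

```python
import numpy as np

N, M, T = 50, 100, 0.1
h, dt = 1.0 / N, T / M
n_int = N - 1
x = np.linspace(0.0, 1.0, N + 1)[1:-1]   # interior nodes

# second-difference matrix (a = 1, b = 0)
Lam = (2.0 * np.eye(n_int) - np.eye(n_int, k=1) - np.eye(n_int, k=-1)) / h**2
E = np.eye(n_int)
left = E + 0.5 * dt * Lam
right = E - 0.5 * dt * Lam

u = np.sin(np.pi * x)                    # initial condition v
for _ in range(M):                       # march from t = 0 to t = T
    u = np.linalg.solve(left, right @ u)

# exact solution of the heat equation with this initial condition
exact = np.exp(-np.pi**2 * T) * np.sin(np.pi * x)
err = np.max(np.abs(u - exact))
```

The scheme is second-order in both $h$ and $\Delta t$, which is why it is adequate for generating the synthetic data in Section 3.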

2.3.2. Splitting method

To obtain a splitting scheme for the Cauchy problem (2.38), we discretize the time interval in the same way as in the finite difference method. We denote $u^{m+\delta} := \bar u(t_m + \delta\Delta t)$ and $\Lambda_i^m := \Lambda_i(t_m + \Delta t/2)$. We introduce the following implicit two-circle component-by-component splitting scheme [Citation18]:
$$\frac{u^{m+\frac{i}{2n}} - u^{m+\frac{i-1}{2n}}}{\Delta t} + \Lambda_i^m\frac{u^{m+\frac{i}{2n}} + u^{m+\frac{i-1}{2n}}}{4} = 0, \quad i=1,2,\dots,n-1,$$
$$\frac{u^{m+\frac{1}{2}} - u^{m+\frac{n-1}{2n}}}{\Delta t} + \Lambda_n^m\frac{u^{m+\frac{1}{2}} + u^{m+\frac{n-1}{2n}}}{4} = \frac{F^m}{2} + \frac{\Delta t}{8}\Lambda_n^m F^m,$$
$$\frac{u^{m+\frac{n+1}{2n}} - u^{m+\frac{1}{2}}}{\Delta t} + \Lambda_n^m\frac{u^{m+\frac{n+1}{2n}} + u^{m+\frac{1}{2}}}{4} = \frac{F^m}{2} - \frac{\Delta t}{8}\Lambda_n^m F^m,$$
$$\frac{u^{m+1-\frac{i-1}{2n}} - u^{m+1-\frac{i}{2n}}}{\Delta t} + \Lambda_i^m\frac{u^{m+1-\frac{i-1}{2n}} + u^{m+1-\frac{i}{2n}}}{4} = 0, \quad i=n-1,n-2,\dots,1,$$
$$u^0 = \bar v. \quad (2.50)$$

Equivalently,
$$\Big(E_i + \frac{\Delta t}{4}\Lambda_i^m\Big)u^{m+\frac{i}{2n}} = \Big(E_i - \frac{\Delta t}{4}\Lambda_i^m\Big)u^{m+\frac{i-1}{2n}}, \quad i=1,2,\dots,n-1,$$
$$\Big(E_n + \frac{\Delta t}{4}\Lambda_n^m\Big)\Big(u^{m+\frac{1}{2}} - \frac{\Delta t}{2}F^m\Big) = \Big(E_n - \frac{\Delta t}{4}\Lambda_n^m\Big)u^{m+\frac{n-1}{2n}},$$
$$\Big(E_n + \frac{\Delta t}{4}\Lambda_n^m\Big)u^{m+\frac{n+1}{2n}} = \Big(E_n - \frac{\Delta t}{4}\Lambda_n^m\Big)\Big(u^{m+\frac{1}{2}} + \frac{\Delta t}{2}F^m\Big),$$
$$\Big(E_i + \frac{\Delta t}{4}\Lambda_i^m\Big)u^{m+1-\frac{i-1}{2n}} = \Big(E_i - \frac{\Delta t}{4}\Lambda_i^m\Big)u^{m+1-\frac{i}{2n}}, \quad i=n-1,n-2,\dots,1,$$
$$u^0 = \bar v, \quad (2.51)$$

where $E_i$ is the identity matrix corresponding to $\Lambda_i$, $i=1,\dots,n$. The splitting scheme (2.51) can be rewritten in the compact form
$$u^{m+1} = A^m u^m + \Delta t B^m F^m, \quad m=0,\dots,M-1, \qquad u^0 = \bar v, \quad (2.52)$$

with
$$A^m = A_1^m\cdots A_n^m A_n^m\cdots A_1^m, \qquad B^m = A_1^m\cdots A_n^m, \quad (2.53)$$

where $A_i^m := \big(E_i + \frac{\Delta t}{4}\Lambda_i^m\big)^{-1}\big(E_i - \frac{\Delta t}{4}\Lambda_i^m\big)$, $i=1,\dots,n$.

2.3.3. Variational method for the multi-dimensional case

To complete the variational method for the multi-dimensional case, we use the splitting method for the forward problem and minimize
$$J_0^{h,\Delta t}(\bar v) := \frac{1}{2}\sum_{i=1}^N\sum_{m=m_\tau}^{M}\Big(\sum_{k\in\Omega_h}\omega_i^k u^{k,m}(\bar v) - h_i^m\Big)^2, \quad (2.54)$$

where $m_\tau$ is the first index for which $m_\tau\Delta t > \tau$, and $u^{k,m}(\bar v)$ indicates the dependence of the solution on the initial condition $\bar v$, with $m$ the index of the grid points on the time axis. The notation $\omega_i^k = \omega_i(x^k)$ denotes the approximation of the function $\omega_i(x)$ in $\Omega_h$ at the points $x^k$; normally, we take its average over the cell containing $x^k$, as defined by (2.31). We note that in the definition of $J_0^{h,\Delta t}$ the multiplier $\Delta t$ has been dropped, as it plays no role.

To solve problem (2.54) by the conjugate gradient method, we first calculate the gradient of the objective function $J_0^{h,\Delta t}(\bar v)$; it is given by the following theorem.

Theorem 2.11:

The gradient $\nabla J_0^{h,\Delta t}(\bar v)$ of the objective function $J_0^{h,\Delta t}$ at $\bar v$ is given by
$$\nabla J_0^{h,\Delta t}(\bar v) = (A^0)^*(A^1)^*\cdots(A^{m_\tau-1})^*\eta^{m_\tau-1}, \quad (2.55)$$

where $\eta$ satisfies the adjoint problem
$$\eta^m = (A^{m+1})^*\eta^{m+1} + \sum_{i=1}^N\omega_i\Big(\sum_{k\in\Omega_h}\omega_i^k u^{k,m+1}(\bar v) - h_i^{m+1}\Big), \quad m = M-1, M-2, \dots, m_\tau - 1, \qquad \eta^M = 0. \quad (2.56)$$

Here the matrix $(A^m)^*$ is given by
$$(A^m)^* = \Big(E_1 - \frac{\Delta t}{4}\Lambda_1^m\Big)\Big(E_1 + \frac{\Delta t}{4}\Lambda_1^m\Big)^{-1}\cdots\Big(E_n - \frac{\Delta t}{4}\Lambda_n^m\Big)\Big(E_n + \frac{\Delta t}{4}\Lambda_n^m\Big)^{-1}\times\Big(E_n - \frac{\Delta t}{4}\Lambda_n^m\Big)\Big(E_n + \frac{\Delta t}{4}\Lambda_n^m\Big)^{-1}\cdots\Big(E_1 - \frac{\Delta t}{4}\Lambda_1^m\Big)\Big(E_1 + \frac{\Delta t}{4}\Lambda_1^m\Big)^{-1}. \quad (2.57)$$

For a small variation $\delta\bar v$ of $\bar v$, we have from (2.54) that
$$J_0^{h,\Delta t}(\bar v + \delta\bar v) - J_0^{h,\Delta t}(\bar v) = \frac{1}{2}\sum_{i=1}^N\sum_{m=m_\tau}^{M}\Big(\sum_{k\in\Omega_h}\omega_i^k u^{k,m}(\bar v + \delta\bar v) - h_i^m\Big)^2 - \frac{1}{2}\sum_{i=1}^N\sum_{m=m_\tau}^{M}\Big(\sum_{k\in\Omega_h}\omega_i^k u^{k,m}(\bar v) - h_i^m\Big)^2 = \frac{1}{2}\sum_{i=1}^N\sum_{m=m_\tau}^{M}\Big(\sum_{k\in\Omega_h}\omega_i^k w^{k,m}\Big)^2 + \sum_{i=1}^N\sum_{m=m_\tau}^{M}\Big(\sum_{k\in\Omega_h}\omega_i^k w^{k,m}\Big)\psi_i^m = \frac{1}{2}\sum_{i=1}^N\sum_{m=m_\tau}^{M}\Big(\sum_{k\in\Omega_h}\omega_i^k w^{k,m}\Big)^2 + \sum_{i=1}^N\sum_{m=m_\tau}^{M}(w^m, \omega_i\psi_i^m), \quad (2.58)$$

where $w^{k,m} := u^{k,m}(\bar v + \delta\bar v) - u^{k,m}(\bar v)$, $\psi_i^m := \sum_{k\in\Omega_h}\omega_i^k u^{k,m}(\bar v) - h_i^m$, and the inner product is that of $\mathbb{R}^{N_1\times\dots\times N_n}$.

It follows from (2.52) that $w$ is the solution to the problem
$$w^{m+1} = A^m w^m, \quad m=0,\dots,M-1, \qquad w^0 = \delta\bar v. \quad (2.59)$$

Taking the inner product of both sides of the $m$th equation of (2.59) with an arbitrary vector $\eta^m \in \mathbb{R}^{N_1\times\dots\times N_n}$ and summing the results over $m = m_\tau-1,\dots,M-1$, we obtain
$$\sum_{m=m_\tau-1}^{M-1}(w^{m+1},\eta^m) = \sum_{m=m_\tau-1}^{M-1}(A^m w^m,\eta^m) = \sum_{m=m_\tau-1}^{M-1}(w^m,(A^m)^*\eta^m). \quad (2.60)$$

Here $(A^m)^*$ is the adjoint matrix of $A^m$. Consider the adjoint problem
$$\eta^m = (A^{m+1})^*\eta^{m+1} + \sum_{i=1}^N\omega_i\psi_i^{m+1}, \quad m = M-1, M-2,\dots,m_\tau-1, \qquad \eta^M = 0. \quad (2.61)$$

Taking the inner product of both sides of the first equation of (2.61) with $w^{m+1}$ and summing the results over $m = m_\tau-1,\dots,M-1$, we obtain
$$\sum_{m=m_\tau-1}^{M-1}(w^{m+1},\eta^m) = \sum_{m=m_\tau-1}^{M-1}(w^{m+1},(A^{m+1})^*\eta^{m+1}) + \sum_{i=1}^N\sum_{m=m_\tau-1}^{M-1}(w^{m+1},\omega_i\psi_i^{m+1}) = \sum_{m=m_\tau}^{M}(w^m,(A^m)^*\eta^m) + \sum_{i=1}^N\sum_{m=m_\tau}^{M}(w^m,\omega_i\psi_i^m). \quad (2.62)$$

From (2.60) and (2.62), we have
$$\sum_{i=1}^N\sum_{m=m_\tau}^{M}(w^m,\omega_i\psi_i^m) + (w^M,(A^M)^*\eta^M) = (w^{m_\tau-1},(A^{m_\tau-1})^*\eta^{m_\tau-1}) = (\delta\bar v,(A^0)^*(A^1)^*\cdots(A^{m_\tau-1})^*\eta^{m_\tau-1}). \quad (2.63)$$

On the other hand, it can be proved by induction that $\sum_{i=1}^N\sum_{m=m_\tau}^{M}\big(\sum_{k\in\Omega_h}\omega_i^k w^{k,m}\big)^2 = o(\|\delta\bar v\|)$. Hence, from the condition $\eta^M = 0$, (2.58) and (2.63), it follows that
$$J_0^{h,\Delta t}(\bar v + \delta\bar v) - J_0^{h,\Delta t}(\bar v) = (\delta\bar v,(A^0)^*(A^1)^*\cdots(A^{m_\tau-1})^*\eta^{m_\tau-1}) + o(\|\delta\bar v\|). \quad (2.64)$$

Consequently, the gradient of the objective function $J_0^{h,\Delta t}$ can be written as
$$\nabla J_0^{h,\Delta t}(\bar v) = (A^0)^*(A^1)^*\cdots(A^{m_\tau-1})^*\eta^{m_\tau-1}. \quad (2.65)$$

Note that, since the coefficient matrices $\Lambda_i^m$, $i=1,\dots,n$, $m=0,\dots,M-1$, are symmetric, we have
$$(A_i^m)^* = \Big(\Big(E_i + \frac{\Delta t}{4}\Lambda_i^m\Big)^{-1}\Big(E_i - \frac{\Delta t}{4}\Lambda_i^m\Big)\Big)^* = \Big(E_i - \frac{\Delta t}{4}\Lambda_i^m\Big)\Big(E_i + \frac{\Delta t}{4}\Lambda_i^m\Big)^{-1}.$$

Hence
$$(A^m)^* = (A_1^m\cdots A_n^m A_n^m\cdots A_1^m)^* = (A_1^m)^*\cdots(A_n^m)^*(A_n^m)^*\cdots(A_1^m)^* = \Big(E_1 - \frac{\Delta t}{4}\Lambda_1^m\Big)\Big(E_1 + \frac{\Delta t}{4}\Lambda_1^m\Big)^{-1}\cdots\Big(E_n - \frac{\Delta t}{4}\Lambda_n^m\Big)\Big(E_n + \frac{\Delta t}{4}\Lambda_n^m\Big)^{-1}\times\Big(E_n - \frac{\Delta t}{4}\Lambda_n^m\Big)\Big(E_n + \frac{\Delta t}{4}\Lambda_n^m\Big)^{-1}\cdots\Big(E_1 - \frac{\Delta t}{4}\Lambda_1^m\Big)\Big(E_1 + \frac{\Delta t}{4}\Lambda_1^m\Big)^{-1}. \quad (2.66)$$

The proof is complete.

The conjugate gradient method for the discretized functional (2.54) proceeds by the following steps:

Step 1. Given an initial approximation $v^0$, calculate the residual $\hat r^0 = \sum_{i=1}^N[l_i u(v^0) - h_i]$ by solving the splitting scheme (2.50) with $\bar v$ replaced by the initial approximation $v^0$, and set $k = 0$.

Step 2. Calculate the gradient $r^0 = -\nabla J_\gamma(v^0)$ given in (2.55) by solving the adjoint problem (2.56). Then set $d^0 = r^0$.

Step 3. Calculate
$$\alpha_0 = \frac{\|r^0\|^2}{\sum_{i=1}^N\|l_i d^0\|^2 + \gamma\|d^0\|^2},$$
where $l_i d^0$ can be calculated from the splitting scheme (2.50) with $\bar v$ replaced by $d^0$ and $F = 0$. Then set $v^1 = v^0 + \alpha_0 d^0$.

Step 4. For $k = 1,2,\dots$, calculate $r^k = -\nabla J_\gamma(v^k)$ and $d^k = r^k + \beta_k d^{k-1}$, where
$$\beta_k = \frac{\|r^k\|^2}{\|r^{k-1}\|^2}.$$

Step 5. Calculate
$$\alpha_k = \frac{\|r^k\|^2}{\sum_{i=1}^N\|l_i d^k\|^2 + \gamma\|d^k\|^2},$$
where $l_i d^k$ is calculated from the splitting scheme (2.50) with $\bar v$ replaced by $d^k$ and $F = 0$. Then set $v^{k+1} = v^k + \alpha_k d^k$.
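The loop above can be sketched for a toy discrete problem in which the observation operator is a plain matrix $C$ (an assumption for illustration; in the paper $C d^k$ is evaluated by the splitting scheme and $-\nabla J_\gamma$ by the adjoint scheme):

```python
import numpy as np

rng = np.random.default_rng(3)
m_obs, n, gamma = 40, 20, 1e-4
C = rng.normal(size=(m_obs, n)) / np.sqrt(m_obs)   # toy observation operator
v_true = rng.normal(size=n)
h_data = C @ v_true                                # noise-free data
v_star = np.zeros(n)                               # a priori guess v*

def grad(v):
    # gradient of J_gamma: C*(Cv - h) + gamma (v - v*)
    return C.T @ (C @ v - h_data) + gamma * (v - v_star)

v = np.zeros(n)                                    # initial approximation v^0
r = -grad(v)
d = r.copy()
for k in range(200):
    Cd = C @ d
    # step size (2.17): ||r^k||^2 / (sum ||C_i d^k||^2 + gamma ||d^k||^2)
    alpha = np.dot(r, r) / (np.dot(Cd, Cd) + gamma * np.dot(d, d))
    v = v + alpha * d
    r_new = -grad(v)
    if np.linalg.norm(r_new) < 1e-12:
        r = r_new
        break
    beta = np.dot(r_new, r_new) / np.dot(r, r)     # (2.14)
    d = r_new + beta * d
    r = r_new
```

Because $J_\gamma$ is a strictly convex quadratic in this setting, the iteration drives the gradient to zero, yielding the regularized solution of (2.49).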

3. Numerical simulation

To illustrate the efficiency of the proposed algorithm, we present in this section some numerical tests. Results for the one-dimensional case, with approximate singular values, are presented in Examples 1–4; results for two-dimensional cases are given in Examples 5–7. In all examples we test our algorithm for (1) very smooth, (2) continuous but not smooth, and (3) discontinuous initial conditions; thus, the degree of difficulty increases from example to example. We note that although in the theory we only prove the convergence of our difference scheme for smooth initial conditions, it also works in the other cases, as our numerical examples show. We also vary the observation points as well as the time interval of observations.

3.1. One-dimensional numerical examples

Set $\Omega = (0,1)$ and the final time $T = 1$. The one-dimensional system has the form
$$\frac{\partial u}{\partial t} - \big(a(x,t)u_x\big)_x + b(x,t)u = f(x,t) \ \text{in } Q, \quad u(0,t) = u(1,t) = 0, \ t\in(0,T], \quad u(x,0) = v(x) \ \text{in } \Omega. \quad (3.1)$$

In all tests for this case, we set the coefficients $a(x,t) = x^2t + 2xt + 1$ and $b(x,t) = 0$. For the discretization, we take the grid size to be 0.02, and the random noise level is 0.01.

We take three observations of the form (1.9) at $x_1 = 0.1$, $x_2 = 0.5$ and $x_3 = 0.9$. The weight functions $\omega_i(x)$, $i=1,2,3$, are chosen as
$$\omega_i(x) = \begin{cases} \dfrac{1}{2\epsilon}, & \text{if } x \in (x_i - \epsilon, x_i + \epsilon),\\[1ex] 0, & \text{otherwise}, \end{cases} \qquad \text{with } \epsilon = 0.01. \quad (3.2)$$
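On the grid of size 0.02 used here, the window $(x_i - \epsilon, x_i + \epsilon)$ with $\epsilon = 0.01$ contains exactly one node, so each discrete observation reduces to the nodal value at $x_i$. A short sketch (with an assumed sample profile $u$, for illustration only):

```python
import numpy as np

h, eps = 0.02, 0.01
x = np.linspace(0.0, 1.0, 51)              # grid of size h = 0.02
centers = [0.1, 0.5, 0.9]                  # x_1, x_2, x_3

def omega(xi):
    # the weight (3.2): 1/(2*eps) inside the window, 0 outside
    return np.where(np.abs(x - xi) < eps, 1.0 / (2.0 * eps), 0.0)

u = np.sin(2.0 * np.pi * x)                # a sample state profile (assumed)
# discrete observation: h * sum_k omega_i^k u^k
observations = [h * np.sum(omega(xi) * u) for xi in centers]
```

With $\epsilon < h/2$ each weighted sum picks out one grid value, so the integral observations here behave exactly like point observations at $x_1$, $x_2$, $x_3$.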

Thus, the reconstruction problem is to determine $v$ from the observations $(l_1 u, l_2 u, l_3 u)$ for $t \in (\tau,T)$ with $0 \le \tau < T$. Following Section 2.1, we denote the solution to the system (3.1) with $v \equiv 0$ by $u^o$. Then the operator $C$ defined by $Cv = (l_1(u - u^o), l_2(u - u^o), l_3(u - u^o))$ is bounded and linear from $L^2(\Omega)$ to $(L^2(\tau,T))^3$. To characterize the degree of ill-posedness of the reconstruction problem, we analyse the singular values of the operator $C$.

The approximate singular values of the solution operator, obtained by the method described in Section 2.1 for different values of $\tau$, are given in Figure 1. From these singular values we see that our inverse problem is severely ill-posed and that the ill-posedness depends on the time interval of observations: the shorter the observation time, the more ill-posed the problem.

Figure 1. Singular values: three observations and various time intervals of observations.


Now we show numerical results for reconstructing different initial conditions with different time intervals of observations and different positions for observations.

Example 1:

We take the exact solution to (3.1) to be $u_{\text{exact}} = e^t\sin(2\pi x)$. Hence the exact initial condition is $v(x) = \sin(2\pi x)$ and the right-hand side is $f(x,t) = \big(1 + 4\pi^2(x^2t + 2xt + 1)\big)e^t\sin(2\pi x) - 2\pi e^t(2xt + 2t)\cos(2\pi x)$. The a priori estimate is $v^* = 1/2$, $\tau = 0$, and the regularization parameter is chosen as $\gamma = 5\times10^{-3}$. The initial guess of the CG method is $v^0 = 0$.

The reconstruction result and the exact initial condition v for various positions of the observations are shown in Figure 2. The numerical results show that the quality of the reconstruction depends on the position of the observations: the result is better when the observations are distributed uniformly in Ω.

 

Figure 2. Example 1: Reconstruction results for (a) 3 uniform observation points in (0, 0.5), error in L²-norm 0.006116; (b) 3 uniform observation points in (0.5, 1), error in L²-norm 0.006133; (c) 3 uniform observation points in (0.25, 0.75), error in L²-norm 0.0060894; (d) 3 uniform observation points in Ω, error in L²-norm 0.0057764.


Example 2:

In this example, the initial condition and the a priori estimate v* = 1/2 are the same as in Example 1, and the regularization parameter is again γ = 5×10⁻³, but the starting time of observation is taken to be τ = 0.01, τ = 0.05, τ = 0.1 and τ = 0.3. The reconstruction results are compared in Figure 3.

Table 1 shows the relation between the starting point of observation τ and the error in the L²-norm when the number of observations is N = 3 and the regularization parameter is γ = 5×10⁻³.

We now test the algorithm for nonsmooth initial conditions. In Examples 3 and 4, we choose v, set f = 0, and solve the direct problem by the Crank–Nicolson method to find an approximation to the exact solution u; these data are then used for reconstructing the exact initial condition v. Here, we use different mesh sizes for the direct solver and the inversion to avoid an 'inverse crime'.
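The Crank–Nicolson direct solver can be sketched as follows for the simplest constant-coefficient case a ≡ 1, b = 0, f = 0 of (3.1); this is a simplification for illustration (variable coefficients add only a modified stencil, and in practice the dense solve would be replaced by the Thomas algorithm for the tridiagonal system):

```python
import numpy as np

def crank_nicolson_heat(v, h, dt, n_steps):
    """Crank-Nicolson for u_t = u_xx on (0, 1) with u(0, t) = u(1, t) = 0.
    v holds the values of the initial condition at the interior grid points."""
    n = len(v)
    r = dt / (2.0 * h * h)
    lap = np.eye(n, k=1) - 2.0 * np.eye(n) + np.eye(n, k=-1)  # 1D Laplacian stencil
    A = np.eye(n) - r * lap   # implicit half-step
    B = np.eye(n) + r * lap   # explicit half-step
    u = v.copy()
    for _ in range(n_steps):
        u = np.linalg.solve(A, B @ u)  # second order in both time and space
    return u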

 

Figure 3. Reconstruction result of Example 2: (a) τ=0.01; (b) τ=0.05; (c) τ=0.1; (d) τ=0.3.


Table 1. Example 2: Behaviour of the algorithm with different starting points of observation τ.

Example 3:

In this example, the initial condition is given by

v(x) = 2x  if x ≤ 0.5,  and  v(x) = 2(1 − x)  otherwise,

and v* = 1/2. The regularization parameter is chosen to be the same as in Example 1, and the starting times of observation τ are the same as in Example 2. The comparison of the reconstruction results is displayed in Figure 4.

 

Figure 4. Reconstruction result of Example 3: (a) τ=0.01; (b) τ=0.05; (c) τ=0.1; (d) τ=0.3.


Example 4:

In this example, the initial condition is given by

v(x) = 1  if 0.25 ≤ x ≤ 0.75,  and  v(x) = 0  otherwise,

and v* = 1/2. The regularization parameter is chosen to be the same as in Example 1, and the starting times of observation τ are the same as in Example 2. The comparison of the reconstruction results is displayed in Figure 5.

 

Figure 5. Reconstruction result of Example 4: (a) τ=0.01; (b) τ=0.05; (c) τ=0.1; (d) τ=0.3.


3.2. Two-dimensional numerical examples

Set Ω = (0,1)×(0,1), T = 1, and write x = (x₁, x₂). The Dirichlet problem has the form

(3.3)   u_t − (a₁(x₁,x₂,t) u_{x₁})_{x₁} − (a₂(x₁,x₂,t) u_{x₂})_{x₂} + b(x₁,x₂,t) u = f(x₁,x₂,t)   in Q,
        u(x₁,0,t) = u(x₁,1,t) = u(0,x₂,t) = u(1,x₂,t) = 0,   0 ≤ x₁, x₂ ≤ 1,   t ∈ (0,T],
        u(x₁,x₂,0) = v(x₁,x₂)   in Ω.

For the tests we take a₁(x,t) = a₂(x,t) = 0.5[1 − (1/2)(1 − t) cos(3πx₁) cos(3πx₂)] and b(x,t) = x₁² + x₂² + 2x₁t + 1.

For the discretization, we take the grid sizes 0.02 and the level of the random noise 0.01. The weight functions ω_i(x) are chosen as follows:

(3.4)   ω_i(x) = 1/(2ε)  if x ∈ (x₁^i − ε, x₁^i + ε) × (x₂^i − ε, x₂^i + ε),  and  ω_i(x) = 0  otherwise,  with ε = 0.01.

As noted in the introduction, the operators l_i defined by (1.9) with these weight functions can be regarded as 'practical' point-wise observations. We test our algorithm for different distributions of the points x^i: they are taken either in the whole domain Ω or in one of its quarters (0,0.5)×(0,0.5), (0.5,1)×(0,0.5), (0.5,1)×(0.5,1) or (0,0.5)×(0.5,1). The number of observations N is taken to be either 4 or 9.

 

Figure 6. Example 5. Reconstruction results: (a) exact initial condition v; (b) reconstruction of v; (c) point-wise error; (d) comparison of v restricted to x₁ = 1/2 and its reconstruction.


Table 2. Example 5: Behaviour of the algorithm when the number of observations and the positions of observations vary.

Table 3. Example 5: Behaviour of the algorithm when the number of observations and the positions of observations vary.

Figure 7. Example 6. Reconstruction results: (a) exact initial condition v; (b) reconstruction of v; (c) point-wise error; (d) comparison of v restricted to x₁ = 1/2 and its reconstruction.


Figure 8. Example 7. Reconstruction results: (a) exact initial condition v; (b) reconstruction of v; (c) point-wise error; (d) comparison of v restricted to x₁ = 1/2 and its reconstruction.


Example 5:

We test the algorithm for the smooth initial condition v(x) = sin(πx₁) sin(πx₂), with exact solution u(x,t) = (1 − t) v(x). The source term f is then given by

f(x,t) = −v(x) (1 − (1 − t)π² + 0.5π² cos(3πx₁) cos(3πx₂) − (x₁² + x₂² + 2x₁t + 1)) + 0.75π²(1 − t)² (sin(3πx₁) cos(3πx₂) cos(πx₁) sin(πx₂) + cos(3πx₁) sin(3πx₂) sin(πx₁) cos(πx₂)).

The measurements are taken at 9 points, and the regularization parameter is set to γ = 5×10⁻³. We test the algorithm with the a priori estimate v* = 1/2. The reconstruction result and the exact initial condition v are shown in Figure 6. The algorithm stops after 38 iterations, the computational time is 87.496271 s, and the error in the L²-norm is 0.02346. The behaviour of the algorithm when the observation regions and the number of observations vary is shown in Tables 2 and 3. We see that the more observations we take, the more accurate the reconstruction.
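The minimization behind these reconstructions, of ‖Cv − measured data‖² + γ‖v − v*‖², can be sketched generically as conjugate gradients applied to the regularized normal equations. The sketch below uses an explicit matrix A in place of the PDE-defined operator C; this is our own simplification for illustration (in the paper each application of C and its adjoint is a forward or adjoint solve):

```python
import numpy as np

def cg_tikhonov(A, data, gamma, v_star, n_iter=200, tol=1e-12):
    """CG for (A^T A + gamma I) v = A^T data + gamma v_star, the normal
    equations of min_v ||A v - data||^2 + gamma ||v - v_star||^2."""
    op = lambda w: A.T @ (A @ w) + gamma * w   # SPD operator of the normal equations
    rhs = A.T @ data + gamma * v_star
    v = np.zeros(A.shape[1])
    r = rhs - op(v)
    p = r.copy()
    rs = r @ r
    for _ in range(n_iter):
        Ap = op(p)
        alpha = rs / (p @ Ap)
        v += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if rs_new < tol ** 2:   # stop once the residual norm is below tol
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return v
```

For γ → 0 and exact data this recovers the unregularized least-squares solution; for ill-posed C, γ > 0 trades data fit against closeness to the a priori estimate v*.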

We now test the algorithm for nonsmooth initial conditions. In Examples 6 and 7, we choose v and set f = 0, then use the splitting method to find an approximation to the solution u. The data so obtained are then used to test our algorithm.
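The splitting idea referred to here replaces one two-dimensional implicit solve per time step by a sequence of one-dimensional sweeps. A minimal fractional-step sketch for the model case u_t = u_{x₁x₁} + u_{x₂x₂} with backward-Euler sweeps (illustrative only; the paper's scheme handles variable coefficients and differs in detail):

```python
import numpy as np

def splitting_step(u, h, dt):
    """One fractional step for u_t = u_xx + u_yy with zero Dirichlet data:
    an implicit 1D sweep in x, then an implicit 1D sweep in y.
    u is the (n x n) array of interior grid values."""
    n = u.shape[0]
    r = dt / (h * h)
    A = ((1 + 2 * r) * np.eye(n)
         - r * np.eye(n, k=1) - r * np.eye(n, k=-1))  # implicit 1D operator
    u = np.linalg.solve(A, u)      # sweep along x: all columns solved at once
    u = np.linalg.solve(A, u.T).T  # sweep along y: same operator on the rows
    return u
```

Each 1D sweep is unconditionally stable, so the time step is not restricted by the mesh size; the price of this simple splitting is first-order accuracy in time.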

Example 6:

In this example, we choose 9 observations, set the regularization parameter γ = 5×10⁻³, and take v to be the multi-linear (pyramid) function given by

v(x) = 2x₂  if x₂ ≤ 1/2, x₂ ≤ x₁ and x₁ ≤ 1 − x₂,
v(x) = 2(1 − x₂)  if x₂ ≥ 1/2, x₂ ≥ x₁ and x₁ ≥ 1 − x₂,
v(x) = 2x₁  if x₁ ≤ 1/2, x₁ ≤ x₂ and x₂ ≤ 1 − x₁,
v(x) = 2(1 − x₁)  otherwise,

and v* = 1/2. The reconstruction results and the exact initial condition v are shown in Figure 7. The algorithm stops after 45 iterations, the computational time is 103.394200 s, and the error in the L²-norm is 0.024959.

Example 7:

We test the algorithm with the piecewise constant initial condition

v(x) = 1  if 1/4 ≤ x₁ ≤ 3/4 and 1/4 ≤ x₂ ≤ 3/4,  and  v(x) = 0  otherwise,

and v* = 1/2. The algorithm stops after 100 iterations, the computational time is 109.423362 s, and the error in the L²-norm is 0.033148 (see Figure 8).

Acknowledgements

The constructive comments of the reviewers are kindly acknowledged.

Additional information

Funding

This research was supported by Vietnam National Foundation for Science and Technology Development (NAFOSTED) [grant number 101.02-2014.54].

Notes

No potential conflict of interest was reported by the authors.

References

  • Agoshkov VI. Optimal control methods and the method of adjoint equations in problems of mathematical physics. Moscow: Russian Academy of Sciences, Institute for Numerical Mathematics; 2003. 257pp. Russian.
  • Shutyaev VP. Control operators and iterative algorithms in variational data assimilation problems. Moscow: Nauka; 2001. 240pp. Russian.
  • Aubert G, Kornprobst P. Mathematical problems in image processing. New York (NY): Springer; 2006.
  • Hào DN. Methods for inverse heat conduction problems. Frankfurt/Main, Bern, New York (NY), Paris: Peter Lang Verlag; 1998.
  • Boussetila N, Rebbani F. Optimal regularization method for ill-posed Cauchy problems. Electron. J. Differ. Equ. 2006;147:1–15.
  • Hào DN, Duc NV. Stability results for backward parabolic equations with time dependent coefficients. Inverse Prob. 2011;27:025003, 20pp.
  • Li J, Yamamoto M, Zou J. Conditional stability and numerical reconstruction of initial temperature. Commun. Pure Appl. Anal. 2009;8:361–382.
  • Tröltzsch F. Optimal control of partial differential equations: theory, methods and applications. Providence (RI): American Mathematical Society; 2010.
  • Ladyzhenskaya OA, Solonnikov VA, Ural’ceva NN. Linear and quasilinear equations of parabolic type. Providence (RI): American Mathematical Society; 1968.
  • Wloka J. Partial differential equations. Cambridge: Cambridge University Press; 1987.
  • Klibanov MV. Carleman estimates for the regularization of ill-posed Cauchy problems. Appl. Numer. Math. 2015;94:46–74.
  • Klibanov MV, Kuzhuget AV, Golubnichiy KV. An ill-posed problem for the Black-Scholes equation for a profitable forecast of prices of stock options on real market data. Inverse Prob. 2016;32:015010, 16pp.
  • Oanh NTN. A splitting method for a backward parabolic equation with time-dependent coefficients. Comput. Math. Appl. 2013;65:17–28.
  • Trefethen LN, Bau D III. Numerical linear algebra. Philadelphia: SIAM; 1997.
  • Hào DN, Oanh NTN. Determination of the initial condition in parabolic equations from boundary observations. J. Inverse Ill-Posed Prob. 2016;24:195–220.
  • Nocedal J, Wright SJ. Numerical optimization. 2nd ed. New York (NY): Springer; 2006.
  • Hinze M. A variational discretization concept in control constrained optimization: the linear-quadratic case. Comput. Optimiz. Appl. 2005;30:45–61.
  • Marchuk GI. Methods of numerical mathematics. New York (NY): Springer-Verlag; 1975.
  • Marchuk GI. Splitting and alternating direction methods. In Ciarlet PG, Lions JL, editors. Handbook of numerical mathematics. Volume 1: finite difference methods. North-Holland, Amsterdam: Elsevier Science Publisher B.V.; 1990.
  • Yanenko NN. The method of fractional steps. Berlin: Springer-Verlag; 1971.
  • Hào DN, Thành NT, Sahli H. Splitting-based gradient method for multi-dimensional inverse conduction problems. J. Comput. Appl. Math. 2009;232:361–377.
  • Oanh NTN, Huong BV. Determination of a time-dependent term in the right hand side of linear parabolic equations. Acta Math. Vietnam. 2016;41:313–335.
  • Thành NT. Infrared thermography for the detection and characterization of buried objects [PhD thesis]. Brussels, Belgium: Vrije Universiteit Brussel; 2007.
  • Ladyzhenskaya OA. The boundary value problems of mathematical physics. New York (NY): Springer-Verlag; 1985.
  • Hào DN, Thanh PX, Lesnic D, Johansson BT. A boundary element method for a multi-dimensional inverse heat conduction problem. Int. J. Comput. Math. 2012;89:1540–1554.
  • Hào DN. A noncharacteristic Cauchy problem for linear parabolic equations II: a variational method. Numer. Funct. Anal. Optim. 1992;13:541–564.
  • Hào DN. A noncharacteristic Cauchy problem for linear parabolic equations III: a variational method and its approximation schemes. Numer. Funct. Anal. Optim. 1992;13:565–583.
  • Nemirovskii AS. The regularizing properties of the adjoint gradient method in ill-posed problems. Zh. vychisl. Mat. Mat. Fiz. 1986;26:332–347; Engl. Transl. in U.S.S.R. Comput. Maths. Math. Phys. 1986;26:7–16.
