
New approach to identify the initial condition in degenerate hyperbolic equation

Pages 484-512 | Received 23 Jun 2017, Accepted 14 Apr 2018, Published online: 16 May 2018

ABSTRACT

This paper deals with the determination of an initial condition in a degenerate hyperbolic equation from final observations. With the aim of reducing the execution time, this inverse problem is solved using an approach based on a double regularization: a Tikhonov regularization and a regularization of the equation by viscoelasticity. We thus obtain a sequence of weak solutions of degenerate linear viscoelastic problems. Firstly, we prove the existence and uniqueness of each term of this sequence. Secondly, we prove the convergence of this sequence to the weak solution of the initial problem. We also present some numerical experiments to show the performance of this approach.


1. Introduction and main results

The inverse problems of finding unknown coefficients of a PDE from partial knowledge of the system over a limited time interval are of interest in many applications: cosmology [Citation1], medicine [Citation2], data assimilation [Citation3] (used for numerical weather forecasting, ocean circulation and environmental prediction), geophysics [Citation4], remote sensing [Citation5] and many other areas.

Recently, such problems have received much attention, in particular for degenerate parabolic and hyperbolic equations. Indeed, many problems coming from physics (models of Kolmogorov type in [Citation6], boundary layer models in [Citation7], . . . ), economics (Black–Merton–Scholes equations in [Citation8]) and biology (Fleming–Viot models in [Citation9] and Wright–Fisher models in [Citation10]) are described by degenerate parabolic or hyperbolic equations [Citation11]. This work is a continuation of [Citation12] and [Citation13], in which we identified the initial condition and studied numerically the null controllability of a degenerate/singular parabolic problem. In this paper, we are interested in estimating the initial state of the degenerate hyperbolic problem (1.1) below from final observations $\psi_{obs}$. Solving this inverse problem using only Tikhonov regularization takes considerable execution time; to reduce it, we propose a new approach based on a double regularization: Tikhonov regularization and a regularization of the equation by viscoelasticity.

Consider the degenerate hyperbolic equation
$$\partial_{tt}\psi-\partial_x(x^{\alpha}\partial_x\psi)=f,\quad (x,t)\in\Omega\times\,]0,T[,$$
$$\psi(0,t)=\psi(1,t)=0,\quad t\in\,]0,T[,$$
$$\partial_t\psi(x,0)=v_0(x),\quad x\in\Omega,$$
$$\psi(x,0)=\psi_0(x),\quad x\in\Omega,\tag{1.1}$$

where $\Omega=\,]0,1[$, $f\in L^2(\Omega\times\,]0,T[)$ and $\alpha\in\,]0,1[$.

Let $(t_j)_{j\in\{0,1,\dots,M+1\}}$ be a discretization of $[0,T]$ with $t_j=j\Delta t$, where $\Delta t$ is the step in time and $T=(M+1)\Delta t$.

We suppose that an observation $\psi_{obs}$ of $\psi$ is known at some points in the vicinity of $T$: there exists $N\in\{0,1,\dots,M+1\}$ such that $\psi_{obs}$ is known at the times $t_j$ with $j\in I=\{N,N+1,\dots,M+1\}$.

The problem can be stated as follows:
$$\text{find } u=(\psi_0,v_0)\in A_{ad}\ \text{such that}\ E(u)=\min_{w\in A_{ad}}E(w),\tag{1.2}$$

where the cost function $E$ is defined by
$$E(u)=\frac{1}{2}\sum_{j\in I}\big\|\psi(t_j)-\psi_j^{obs}\big\|^2_{L^2(\Omega)}.\tag{1.3}$$

Here $\psi$ is the weak solution of the hyperbolic problem (1.1) with initial state $u$, and the space $A_{ad}$ is the set of admissible initial states (to be defined later).
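On a discrete grid this misfit is simply a half-sum of squared discrete $L^2$ norms over the observed time indices. A minimal sketch of its evaluation (the uniform-grid approximation $\|w\|^2_{L^2}\approx h\sum_i w_i^2$ and all names here are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def misfit(psi, psi_obs, obs_idx, h):
    """Discrete version of E(u) = 1/2 * sum_{j in I} ||psi(t_j) - psi_obs_j||_{L2}^2.

    psi:     array of shape (M+2, N) -- computed states at all time levels
    psi_obs: array of shape (M+2, N) -- observations (only rows in obs_idx are used)
    obs_idx: the index set I of observed time levels
    h:       space step, so ||w||_{L2}^2 is approximated by h * sum(w**2)
    """
    return 0.5 * sum(h * np.sum((psi[j] - psi_obs[j]) ** 2) for j in obs_idx)
```
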

To solve this inverse problem, we propose an approach based on a double regularization. The first is a regularization by viscoelasticity, which gives the following degenerate linear viscoelastic problem
$$\partial_{tt}\psi-\partial_x(x^{\alpha}\partial_x\psi)-\lambda\,\partial_{xx}\partial_t\psi=f,\quad\lambda>0,\ (x,t)\in\Omega\times\,]0,T[,$$
$$\psi(0,t)=\psi(1,t)=0,\quad t\in\,]0,T[,$$
$$\partial_t\psi(x,0)=v_0(x),\quad x\in\Omega,$$
$$\psi(x,0)=\psi_0(x),\quad x\in\Omega.\tag{1.4}$$

The second is Tikhonov regularization. Since problem (1.2) is ill-posed in the sense of Hadamard, some regularization technique is needed to guarantee the numerical stability of the computational procedure, even with noisy input data. The problem thus consists in minimizing a functional of the form
$$J(u)=\frac{1}{2}\sum_{j\in I}\big\|\psi(t_j)-\psi_{obs}(t_j)\big\|^2_{L^2(\Omega)}+\frac{\varepsilon}{2}\big\|u-\bar u\big\|^2_{L^2(\Omega)\times L^2(\Omega)},\tag{1.5}$$

subject to $\psi$ being the weak solution of problem (1.4) with initial state $u$.

Here, the last term in (1.5) is the so-called Tikhonov-type regularization ([Citation14,Citation15]), $\varepsilon$ being a small regularizing coefficient that provides extra convexity to the functional $J$, and $\bar u$ an a priori estimate of the exact initial condition of problem (1.1).

Firstly, we present a new theorem giving the existence and uniqueness of the weak solution of problem (1.4) for each $\lambda>0$. Secondly, we prove that, as $\lambda\to0$, the sequence $(\psi_\lambda)_{\lambda>0}$ of weak solutions of problem (1.4) converges to the weak solution of problem (1.1). Thirdly, in order to show that the minimization problem and the direct problem are well-posed, we prove that the behaviour of the solution changes continuously with the initial conditions; to this end we prove the continuity of the map $\varphi:(\psi_0,v_0)\mapsto\psi$, where $\psi$ is the weak solution of (1.4) with initial state $(\psi_0,v_0)$. Finally, we prove the differentiability of the functional $J$, which gives the existence of the gradient of $J$; this gradient is computed using the adjoint state method. We also present some numerical experiments to show the performance of this approach.

We now specify some notations. Let us introduce the functional spaces (see [Citation16–Citation18])
$$H^1_{\alpha,0}=\big\{u\in H^1_\alpha \mid u(0)=u(1)=0\big\},\qquad H^1_\alpha=\big\{u\in L^2(\Omega)\cap H^1_{loc}(]0,1])\ \big|\ x^{\alpha/2}u_x\in L^2(\Omega)\big\},$$

with the inner product
$$\langle u,v\rangle_{H^1_\alpha}=\int_\Omega\big(uv+x^\alpha u_x v_x\big)\,dx.$$

The weak formulation of problem (1.4) is:
$$\int_\Omega \partial_{tt}\psi\, v\,dx+\int_\Omega x^\alpha\,\partial_x\psi\,\partial_x v\,dx+\lambda\int_\Omega \partial_x\partial_t\psi\,\partial_x v\,dx=\int_\Omega f\, v\,dx,\quad\forall v\in H^1_0(\Omega).\tag{1.6}$$

Let
$$A_{ad}=\big\{(u,v)\in H^1_\alpha(\Omega)\times L^2(\Omega)\ ;\ \|u\|_{H^1_\alpha(\Omega)}\le r\ \text{and}\ \|v\|_{L^2(\Omega)}\le r\big\},\quad\text{where } r\in\mathbb{R}^+.\tag{1.7}$$

We have the following results

Theorem 1.1:

[Citation19] For any $f\in L^1(0,T;L^2(\Omega))$ and $(\psi_0,v_0)\in H^1_\alpha(\Omega)\times L^2(\Omega)$, there exists a unique weak solution of problem (1.1), with
$$\psi\in C^0\big([0,T];H^1_\alpha(\Omega)\big)\cap C^1\big([0,T];L^2(\Omega)\big).$$

Moreover, there exists a constant $C=C(T,\alpha)$ such that for any solution of (1.1)
$$\|\psi\|_{L^\infty(0,T;L^2(\Omega))}+\|\partial_t\psi\|_{L^\infty(0,T;L^2(\Omega))}+\big\|x^\alpha(\partial_x\psi)^2\big\|_{L^\infty(0,T;L^1(\Omega))}\le C\Big(\|\psi_0\|_{H^1_\alpha(\Omega)}+\|f\|_{L^1(0,T;L^2(\Omega))}+\|v_0\|_{L^2(\Omega)}\Big).\tag{1.8}$$

Theorem 1.2:

Let $\Omega$ be a bounded regular open subset of $\mathbb{R}$. Assume that $v_0\in L^2(\Omega)$ and $\psi_0\in H^1_\alpha(\Omega)$; then problem (1.4) has a unique weak solution such that
$$\psi\in L^2\big(0,T;H^1_\alpha(\Omega)\big)\cap L^\infty\big(0,T;L^2(\Omega)\big),\qquad \partial_t\psi\in L^2\big(0,T;L^2(\Omega)\big),\tag{1.9}$$

and we have the estimate
$$\sup_{t\in[0,T]}\|\psi(t)\|^2_{L^2(\Omega)}+\|\partial_t\psi\|^2_{L^2(0,T;L^2(\Omega))}+\big\|x^{\alpha/2}\partial_x\psi\big\|^2_{L^2(0,T;L^2(\Omega))}+\lambda\big\|\partial_x\partial_t\psi\big\|^2_{L^2(0,T;L^2(\Omega))}\le C\Big(\|\psi_0\|^2_{H^1_\alpha(\Omega)}+\|v_0\|^2_{L^2(\Omega)}+\|f\|^2_{L^2(0,T;L^2(\Omega))}\Big).\tag{1.10}$$

The constant $C$ depends on $\Omega$ and $T$.

Theorem 1.3:

As $\lambda\to0$, the sequence of weak solutions of problem (1.4) converges to the weak solution of problem (1.1).

Theorem 1.4:

Under the assumptions of Theorem 1.2, the functional J is continuous on Aad.

Theorem 1.5:

Under the assumptions of Theorem 1.2, the functional $J$ is Gâteaux differentiable on $A_{ad}$.

2. Proofs

Proof of Theorem 1.2:

For any positive integer $n$, consider the nondegenerate wave equation
$$\partial_{tt}\psi_n-\partial_x\Big(\big(x^\alpha+\tfrac{1}{n}\big)\partial_x\psi_n\Big)-\lambda\,\partial_{xx}\partial_t\psi_n=f(x,t),$$
$$\psi_n(0,t)=\psi_n(1,t)=0,\quad t\in\,]0,T[,$$
$$\psi_n(x,0)=\psi_0(x),\quad x\in\Omega,$$
$$\partial_t\psi_n(x,0)=v_0(x),\quad x\in\Omega.\tag{2.1}$$

By the classical theory of wave equations, system (2.1) admits a unique classical solution $\psi_n$. Multiplying both sides of the first equation of (2.1) by $\partial_t\psi_n$ and integrating over $\Omega$, we obtain
$$\int_\Omega\partial_{tt}\psi_n\,\partial_t\psi_n\,dx+\int_\Omega\big(x^\alpha+\tfrac1n\big)\partial_x\psi_n\,\partial_x\partial_t\psi_n\,dx+\lambda\int_\Omega(\partial_x\partial_t\psi_n)^2\,dx=\int_\Omega f\,\partial_t\psi_n\,dx.\tag{2.2}$$
Since $\Omega$ is independent of $t$, Green's formula yields
$$\frac12\frac{d}{dt}\int_\Omega(\partial_t\psi_n)^2\,dx+\frac12\frac{d}{dt}\int_\Omega\big(x^\alpha+\tfrac1n\big)(\partial_x\psi_n)^2\,dx+\lambda\int_\Omega(\partial_x\partial_t\psi_n)^2\,dx\le\int_\Omega f\,\partial_t\psi_n\,dx.\tag{2.3}$$

Integrating between $0$ and $t$, with $t\in[0,T]$, gives
$$\|\partial_t\psi_n(t)\|^2_{L^2(\Omega)}+\Big\|\sqrt{x^\alpha+\tfrac1n}\,\partial_x\psi_n(t)\Big\|^2_{L^2(\Omega)}\le\|\partial_t\psi_n(0)\|^2_{L^2(\Omega)}+\Big\|\sqrt{x^\alpha+\tfrac1n}\,\partial_x\psi_0\Big\|^2_{L^2(\Omega)}+\|f\|^2_{L^2(0,T;L^2(\Omega))}+\int_0^t\|\partial_t\psi_n(s)\|^2_{L^2(\Omega)}\,ds.\tag{2.4}$$

We have $x^\alpha\le1$ for all $x\in\,]0,1[$ and $\tfrac1n<1$, hence
$$x^\alpha+\tfrac1n\le2,\quad\forall x\in\Omega.\tag{2.5}$$

Set
$$C_1=\|v_0\|^2_{L^2(\Omega)}+2\|\psi_0\|^2_{H^1(\Omega)}+\|f\|^2_{L^2(0,T;L^2(\Omega))}.\tag{2.6}$$

Therefore
$$\|\partial_t\psi_n(t)\|^2_{L^2(\Omega)}+\Big\|\sqrt{x^\alpha+\tfrac1n}\,\partial_x\psi_n(t)\Big\|^2_{L^2(\Omega)}\le C_1+\int_0^t\|\partial_t\psi_n(s)\|^2_{L^2(\Omega)}\,ds.\tag{2.7}$$

We have
$$\psi_n(x,t)=\int_0^t\partial_t\psi_n(x,s)\,ds+\psi_n(x,0),\tag{2.8}$$

hence
$$\psi_n(x,t)^2=\Big(\int_0^t\partial_t\psi_n(x,s)\,ds\Big)^2+\psi_n(x,0)^2+2\,\psi_n(x,0)\int_0^t\partial_t\psi_n(x,s)\,ds.\tag{2.9}$$

By the Hölder inequality,
$$\Big|\int_0^t\partial_t\psi_n(x,s)\,ds\Big|\le\sqrt{t}\,\Big(\int_0^t\partial_t\psi_n(x,s)^2\,ds\Big)^{1/2},\tag{2.10}$$

so that
$$\Big(\int_0^t\partial_t\psi_n(x,s)\,ds\Big)^2\le T\int_0^t\partial_t\psi_n(x,s)^2\,ds.\tag{2.11}$$

In addition, since $2ab\le a^2+b^2$,
$$2\,\psi_n(x,0)\int_0^t\partial_t\psi_n(x,s)\,ds\le\psi_n(x,0)^2+\Big(\int_0^t\partial_t\psi_n(x,s)\,ds\Big)^2\le\psi_n(x,0)^2+T\int_0^t\partial_t\psi_n(x,s)^2\,ds.\tag{2.12}$$

From (2.9), (2.11) and (2.12), we arrive at
$$\|\psi_n(t)\|^2_{L^2(\Omega)}\le2\|\psi_0\|^2_{L^2(\Omega)}+2T\int_0^t\|\partial_t\psi_n(s)\|^2_{L^2(\Omega)}\,ds.\tag{2.13}$$

Let
$$C_2=C_1+2\|\psi_0\|^2_{L^2(\Omega)}\quad\text{and}\quad M=2T+1,\tag{2.14}$$

and
$$y(t)=\|\psi_n(t)\|^2_{L^2(\Omega)}+\|\partial_t\psi_n(t)\|^2_{L^2(\Omega)}+\Big\|\sqrt{x^\alpha+\tfrac1n}\,\partial_x\psi_n(t)\Big\|^2_{L^2(\Omega)}.\tag{2.15}$$

From (2.7) and (2.13), we arrive at
$$y(t)\le C_2+M\int_0^t y(s)\,ds.\tag{2.16}$$

Gronwall's lemma gives
$$y(t)\le C_2\exp(MT)\quad\forall t\in[0,T].\tag{2.17}$$
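For reference, the integral form of Gronwall's lemma invoked in this step can be stated as follows:

```latex
\textbf{Lemma (Gronwall, integral form).}\quad
\text{If } y \text{ is nonnegative and } y(t)\le C_2 + M\int_0^t y(s)\,ds
\ \text{ for all } t\in[0,T],\ \text{ with } C_2,M\ge 0,
\text{ then } y(t)\le C_2\,e^{Mt}\le C_2\,e^{MT}\ \text{ for all } t\in[0,T].
```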

Let $M_1=C_2\exp(MT)$, which gives
$$\|\psi_n(t)\|^2_{L^2(\Omega)}+\|\partial_t\psi_n(t)\|^2_{L^2(\Omega)}+\Big\|\sqrt{x^\alpha+\tfrac1n}\,\partial_x\psi_n(t)\Big\|^2_{L^2(\Omega)}\le M_1,\quad\forall t\in[0,T].\tag{2.18}$$

Hence,
$$\big\|x^{\alpha/2}\partial_x\psi_n(t)\big\|^2_{L^2(\Omega)}\le M_1,\quad\forall t\in[0,T],\tag{2.19}$$

and there exist a subsequence $\{\psi_{n_j}\}$ of $\{\psi_n\}$ and a function $\psi\in L^\infty(0,T;L^2(\Omega))\cap L^2\big(0,T;H^1_\alpha(\Omega)\big)$ with $\partial_t\psi\in L^2(0,T;L^2(\Omega))$, such that as $n_j\to\infty$
$$\psi_{n_j}\rightharpoonup\psi\quad\text{weakly in }L^\infty(0,T;L^2(\Omega)),$$
$$\psi_{n_j}\rightharpoonup\psi\quad\text{weakly in }L^2(0,T;H^1_\alpha(\Omega)),$$
$$\partial_t\psi_{n_j}\rightharpoonup\partial_t\psi\quad\text{weakly in }L^2(0,T;L^2(\Omega)),$$
$$\big(x^\alpha+\tfrac1{n_j}\big)\partial_x\psi_{n_j}\rightharpoonup x^\alpha\partial_x\psi\quad\text{weakly in }L^2(0,T;L^2(\Omega)).\tag{2.20}$$

Since $\psi_{n_j}$ is the solution of system (2.1), we have $\psi_{n_j}(x,0)=\psi_0$ and $\partial_t\psi_{n_j}(x,0)=v_0$.

Returning to Equation (2.3) and integrating between $0$ and $T$, we obtain
$$\frac12\int_\Omega(\partial_t\psi_{n_j})^2(T)\,dx-\frac12\int_\Omega(\partial_t\psi_{n_j})^2(0)\,dx+\frac12\int_\Omega\big(x^\alpha+\tfrac1{n_j}\big)(\partial_x\psi_{n_j})^2(T)\,dx-\frac12\int_\Omega\big(x^\alpha+\tfrac1{n_j}\big)(\partial_x\psi_{n_j})^2(0)\,dx+\lambda\int_0^T\!\!\int_\Omega(\partial_x\partial_t\psi_{n_j})^2\,dx\,dt\le\int_0^T\!\!\int_\Omega f\,\partial_t\psi_{n_j}\,dx\,dt.\tag{2.21}$$

Hence
$$\lambda\int_0^T\!\!\int_\Omega(\partial_x\partial_t\psi_{n_j})^2\le\int_0^T\!\!\int_\Omega|f\,\partial_t\psi_{n_j}|+\frac12\int_\Omega(\partial_t\psi_{n_j})^2(t=0)+\int_\Omega\big(x^\alpha+\tfrac1{n_j}\big)(\partial_x\psi_{n_j})^2(t=0).\tag{2.22}$$

From (2.18), we obtain
$$\lambda\big\|\partial_x\partial_t\psi_{n_j}\big\|^2_{L^2(0,T;L^2(\Omega))}\le\frac12\|f\|^2_{L^2(0,T;L^2(\Omega))}+\frac12\big\|\partial_t\psi_{n_j}\big\|^2_{L^2(0,T;L^2(\Omega))}+\frac12 M_1,\tag{2.23}$$

and
$$\big\|\partial_t\psi_{n_j}\big\|^2_{L^2(0,T;L^2(\Omega))}\le T M_1.\tag{2.24}$$

Then
$$\lambda\big\|\partial_x\partial_t\psi_{n_j}\big\|^2_{L^2(0,T;L^2(\Omega))}\le\frac12\|f\|^2_{L^2(0,T;L^2(\Omega))}+\Big(\frac12+\frac T2\Big)M_1.\tag{2.25}$$

Consequently,
$$\lambda\,\partial_x\partial_t\psi_{n_j}\rightharpoonup\lambda\,\partial_x\partial_t\psi\quad\text{weakly in }L^2(0,T;L^2(\Omega)).\tag{2.26}$$

The weak formulation of (2.1) is
$$\int_\Omega\partial_{tt}\psi_n\,v\,dx+\int_\Omega\big(x^\alpha+\tfrac1n\big)\partial_x\psi_n\,\partial_x v\,dx+\lambda\int_\Omega\partial_x\partial_t\psi_n\,\partial_x v\,dx=\int_\Omega f\,v\,dx,\quad\forall v\in H^1_0(\Omega).\tag{2.27}$$

Taking $v\in H^1_0(\Omega)$ with $\|v\|_{H^1_0(\Omega)}\le1$, we obtain
$$\big|\langle\partial_{tt}\psi_{n_j},v\rangle\big|\le\sqrt2\,\Big\|\sqrt{x^\alpha+\tfrac1n}\,\partial_x\psi_{n_j}\Big\|_{L^2(\Omega)}\|v\|_{H^1_0(\Omega)}+\lambda\big\|\partial_x\partial_t\psi_{n_j}\big\|_{L^2(\Omega)}\|v\|_{H^1_0(\Omega)}+\|f\|_{L^2(\Omega)}\|v\|_{H^1_0(\Omega)},\tag{2.28}$$
$$\big\|\partial_{tt}\psi_{n_j}\big\|_{H^{-1}(\Omega)}\le\sqrt2\,\Big\|\sqrt{x^\alpha+\tfrac1n}\,\partial_x\psi_{n_j}\Big\|_{L^2(\Omega)}+\lambda\big\|\partial_x\partial_t\psi_{n_j}\big\|_{L^2(\Omega)}+\|f\|_{L^2(\Omega)}.\tag{2.29}$$

From (2.18) and (2.25), we obtain
$$\big\|\partial_{tt}\psi_{n_j}\big\|_{L^2(0,T;H^{-1}(\Omega))}<\infty.\tag{2.30}$$

Hence,
$$\partial_{tt}\psi_{n_j}\ \text{is bounded in }L^2(0,T;H^{-1}(\Omega)).\tag{2.31}$$

We conclude that
$$\psi_{n_j}\rightharpoonup\psi\quad\text{weakly in }L^\infty(0,T;L^2(\Omega)),$$
$$\psi_{n_j}\rightharpoonup\psi\quad\text{weakly in }L^2(0,T;H^1_\alpha(\Omega)),$$
$$\partial_t\psi_{n_j}\rightharpoonup\partial_t\psi\quad\text{weakly in }L^2(0,T;L^2(\Omega)),$$
$$\lambda\,\partial_x\partial_t\psi_{n_j}\rightharpoonup\lambda\,\partial_x\partial_t\psi\quad\text{weakly in }L^2(0,T;L^2(\Omega)),$$
$$\big(x^\alpha+\tfrac1{n_j}\big)\partial_x\psi_{n_j}\rightharpoonup x^\alpha\partial_x\psi\quad\text{weakly in }L^2(0,T;L^2(\Omega)),$$
$$\partial_{tt}\psi_{n_j}\rightharpoonup\partial_{tt}\psi\quad\text{weakly in }L^2(0,T;H^{-1}(\Omega)).\tag{2.32}$$

Passing to the weak limits, we find that
$$\int_\Omega\partial_{tt}\psi\,v+\int_\Omega x^\alpha\,\partial_x\psi\,\partial_x v+\lambda\int_\Omega\partial_x\partial_t\psi\,\partial_x v\,dx=\int_\Omega f\,v,\quad\forall v\in H^1_0(\Omega),\ \text{a.e. }t\in\,]0,T[.\tag{2.33}$$

We obtain that $\psi$ is the weak solution of
$$\partial_{tt}\psi-\partial_x(x^\alpha\partial_x\psi)-\lambda\,\partial_{xx}\partial_t\psi=f(x,t),$$
$$\psi(0,t)=\psi(1,t)=0,\quad t\in\,]0,T[,$$
$$\psi(x,0)=\psi_0(x),\quad x\in\Omega,$$
$$\partial_t\psi(x,0)=v_0(x),\quad x\in\Omega.\tag{2.34}$$

Next, we prove the existence of weak solutions of (1.4) for any $(\psi_0,v_0)\in H^1_\alpha(\Omega)\times L^2(\Omega)$ and $f\in L^2(0,T;L^2(\Omega))$. Let $\psi_0^m$, $v_0^m$ and $f^m$ be Cauchy sequences of smooth functions such that, as $m\to\infty$,

$\psi_0^m\to\psi_0$ in $H^1_\alpha(\Omega)$, $v_0^m\to v_0$ in $L^2(\Omega)$ and $f^m\to f$ in $L^2(0,T;L^2(\Omega))$.

Denote by $\psi^m$ the solution of (1.4) associated with $(\psi_0^m,v_0^m)$ and $f^m$, and by $\psi^n$ the solution of (1.4) associated with $(\psi_0^n,v_0^n)$ and $f^n$.

We have the following variational problem
$$\int_\Omega\partial_{tt}(\psi^n-\psi^m)\,v\,dx+\int_\Omega x^\alpha\,\partial_x(\psi^n-\psi^m)\,\partial_x v\,dx+\lambda\int_\Omega\partial_x\partial_t(\psi^n-\psi^m)\,\partial_x v\,dx=\int_\Omega(f^n-f^m)\,v\,dx,\quad\forall v\in H^1_0(\Omega),$$
$$(\psi^n-\psi^m)(x,t)=0,\quad x\in\partial\Omega,\ t\in\,]0,T[,$$
$$(\psi^n-\psi^m)(x,0)=\psi_0^n-\psi_0^m,\quad x\in\Omega,$$
$$\partial_t(\psi^n-\psi^m)(x,0)=v_0^n-v_0^m,\quad x\in\Omega.\tag{2.35}$$

Similarly to (2.18) and (2.19), let $M=2T+1$ and
$$M_1=4\exp(MT)\Big(\|v_0^n-v_0^m\|^2_{L^2(\Omega)}+\|\psi_0^n-\psi_0^m\|^2_{H^1(\Omega)}+\|f^n-f^m\|^2_{L^2(0,T;L^2(\Omega))}\Big).$$

We obtain
$$\big\|\partial_t\psi^n-\partial_t\psi^m\big\|^2_{L^2(0,T;L^2(\Omega))}\le M_1,\tag{2.36}$$

$$\big\|\psi^n-\psi^m\big\|^2_{L^\infty(0,T;L^2(\Omega))}\le M_1,\tag{2.37}$$

and
$$\big\|\psi^n-\psi^m\big\|^2_{L^2(0,T;H^1_\alpha(\Omega))}\le M_1.\tag{2.38}$$

Therefore, there exists $\psi\in L^2(0,T;H^1_\alpha(\Omega))\cap L^\infty(0,T;L^2(\Omega))$ with $\partial_t\psi\in L^2(0,T;L^2(\Omega))$, such that as $m\to\infty$
$$\psi^m\to\psi\ \text{in }L^\infty(0,T;L^2(\Omega)),\qquad\partial_t\psi^m\to\partial_t\psi\ \text{in }L^2(0,T;L^2(\Omega)),\qquad\psi^m\to\psi\ \text{in }L^2(0,T;H^1_\alpha(\Omega)).\tag{2.39}$$

Now, we prove that the weak solution of problem (1.4) is unique.

Let $\psi_1$ and $\psi_2$ be two weak solutions of problem (1.4), and let $D\psi=\psi_1-\psi_2$. Then $D\psi$ satisfies
$$\int_\Omega\partial_{tt}D\psi\,v\,dx+\int_\Omega x^\alpha\,\partial_x D\psi\,\partial_x v\,dx+\lambda\int_\Omega\partial_x\partial_t D\psi\,\partial_x v\,dx=0,\quad\forall v\in H^1_0(\Omega),$$
$$D\psi(x,t)=0,\quad x\in\partial\Omega,\ t\in\,]0,T[,$$
$$D\psi(x,0)=0,\quad x\in\Omega,$$
$$\partial_t D\psi(x,0)=0,\quad x\in\Omega.\tag{2.40}$$

In the same way as for (2.18), we get
$$\|D\psi\|^2_{L^2(0,T;H^1_\alpha(\Omega))}=0,\tag{2.41}$$

hence
$$\psi_1=\psi_2\quad\text{a.e. }t\in[0,T].$$

Proof of Theorem 1.3:

Consider the problem
$$\partial_{tt}\psi_\lambda-\partial_x(x^\alpha\partial_x\psi_\lambda)-\lambda\,\partial_{xx}\partial_t\psi_\lambda=f(x,t),$$
$$\psi_\lambda(0,t)=\psi_\lambda(1,t)=0,\quad t\in\,]0,T[,$$
$$\psi_\lambda(x,0)=\psi_0(x),\quad x\in\Omega,$$
$$\partial_t\psi_\lambda(x,0)=v_0(x),\quad x\in\Omega.\tag{2.42}$$

Multiplying both sides of the first equation of (2.42) by $\partial_t\psi_\lambda$ and integrating over $\Omega$, we obtain
$$\int_\Omega\partial_{tt}\psi_\lambda\,\partial_t\psi_\lambda\,dx+\int_\Omega x^\alpha\,\partial_x\psi_\lambda\,\partial_x\partial_t\psi_\lambda\,dx+\lambda\int_\Omega(\partial_x\partial_t\psi_\lambda)^2\,dx=\int_\Omega f\,\partial_t\psi_\lambda\,dx.\tag{2.43}$$
Since $\Omega$ is independent of $t$, Green's formula yields
$$\frac12\frac{d}{dt}\int_\Omega(\partial_t\psi_\lambda)^2\,dx+\frac12\frac{d}{dt}\int_\Omega x^\alpha(\partial_x\psi_\lambda)^2\,dx+\lambda\int_\Omega(\partial_x\partial_t\psi_\lambda)^2\,dx\le\int_\Omega f\,\partial_t\psi_\lambda\,dx.\tag{2.44}$$

Integrating between $0$ and $t$, with $t\in[0,T]$, gives
$$\|\partial_t\psi_\lambda(t)\|^2_{L^2(\Omega)}+\big\|x^{\alpha/2}\partial_x\psi_\lambda(t)\big\|^2_{L^2(\Omega)}\le\|v_0\|^2_{L^2(\Omega)}+\|\psi_0\|^2_{H^1_\alpha(\Omega)}+\|f\|^2_{L^2(0,T;L^2(\Omega))}+\int_0^t\|\partial_t\psi_\lambda(s)\|^2_{L^2(\Omega)}\,ds.\tag{2.45}$$

Set
$$C_1=\|v_0\|^2_{L^2(\Omega)}+\|\psi_0\|^2_{H^1_\alpha(\Omega)}+\|f\|^2_{L^2(0,T;L^2(\Omega))}.\tag{2.46}$$

Therefore
$$\|\partial_t\psi_\lambda(t)\|^2_{L^2(\Omega)}+\big\|x^{\alpha/2}\partial_x\psi_\lambda(t)\big\|^2_{L^2(\Omega)}\le C_1+\int_0^t\|\partial_t\psi_\lambda(s)\|^2_{L^2(\Omega)}\,ds.\tag{2.47}$$

We have
$$\psi_\lambda(x,t)=\int_0^t\partial_t\psi_\lambda(x,s)\,ds+\psi_\lambda(x,0).\tag{2.48}$$

In the same way as in (2.8)–(2.13), we arrive at
$$\|\psi_\lambda(t)\|^2_{L^2(\Omega)}\le2\|\psi_0\|^2_{L^2(\Omega)}+2T\int_0^t\|\partial_t\psi_\lambda(s)\|^2_{L^2(\Omega)}\,ds.\tag{2.49}$$

Let $C_2=C_1+2\|\psi_0\|^2_{L^2(\Omega)}$ and $M=1+2T$, and
$$y(t)=\|\psi_\lambda(t)\|^2_{L^2(\Omega)}+\|\partial_t\psi_\lambda(t)\|^2_{L^2(\Omega)}+\big\|x^{\alpha/2}\partial_x\psi_\lambda(t)\big\|^2_{L^2(\Omega)}.\tag{2.50}$$

From (2.47) and (2.49), we arrive at
$$y(t)\le C_2+M\int_0^t y(s)\,ds.\tag{2.51}$$

Gronwall's lemma gives
$$y(t)\le C_2\exp(MT)\quad\forall t\in[0,T].\tag{2.52}$$

Let $M_1=C_2\exp(MT)$, which gives
$$\|\psi_\lambda(t)\|^2_{L^2(\Omega)}+\|\partial_t\psi_\lambda(t)\|^2_{L^2(\Omega)}+\big\|x^{\alpha/2}\partial_x\psi_\lambda(t)\big\|^2_{L^2(\Omega)}\le M_1,\quad\forall t\in[0,T].\tag{2.53}$$

Hence,
$$\partial_t\psi_\lambda\ \text{is bounded in }L^\infty(0,T;L^2(\Omega)),\tag{2.54}$$

$$\psi_\lambda\ \text{is bounded in }L^\infty(0,T;L^2(\Omega)),\tag{2.55}$$

and
$$\psi_\lambda\ \text{is bounded in }L^2(0,T;H^1_\alpha(\Omega)).\tag{2.56}$$

Returning to Equation (2.44) and integrating between $0$ and $T$, we obtain
$$\frac12\int_\Omega(\partial_t\psi_\lambda)^2(T)+\frac12\int_\Omega x^\alpha(\partial_x\psi_\lambda)^2(T)\,dx+\lambda\int_0^T\!\!\int_\Omega(\partial_x\partial_t\psi_\lambda)^2\le\int_0^T\!\!\int_\Omega f\,\partial_t\psi_\lambda+\frac12\int_\Omega(\partial_t\psi_\lambda)^2(0)+\frac12\int_\Omega x^\alpha(\partial_x\psi_\lambda)^2(0)\,dx.\tag{2.57}$$

Hence
$$\lambda\int_0^T\!\!\int_\Omega(\partial_x\partial_t\psi_\lambda)^2\le\int_0^T\!\!\int_\Omega|f\,\partial_t\psi_\lambda|+\frac12\int_\Omega(\partial_t\psi_\lambda)^2(0)+\frac12\int_\Omega x^\alpha(\partial_x\psi_\lambda)^2(0)\,dx.\tag{2.58}$$

Since $\lambda\ll1$,
$$\int_0^T\!\!\int_\Omega(\lambda\,\partial_x\partial_t\psi_\lambda)^2\le\int_0^T\!\!\int_\Omega\lambda\,(\partial_x\partial_t\psi_\lambda)^2.\tag{2.59}$$

Hence
$$\big\|\lambda\,\partial_x\partial_t\psi_\lambda\big\|^2_{L^2(0,T;L^2(\Omega))}\le\frac12\|f\|^2_{L^2(0,T;L^2(\Omega))}+\frac12\big\|\partial_t\psi_\lambda\big\|^2_{L^2(0,T;L^2(\Omega))}+\|\partial_t\psi_\lambda(0)\|^2_{L^2(\Omega)}+\big\|x^{\alpha/2}\partial_x\psi_\lambda(0)\big\|^2_{L^2(\Omega)}.\tag{2.60}$$

From Equation (2.53),
$$\big\|\lambda\,\partial_x\partial_t\psi_\lambda\big\|^2_{L^2(0,T;L^2(\Omega))}\le\frac12\|f\|^2_{L^2(0,T;L^2(\Omega))}+\frac12 M_1T+2M_1.\tag{2.61}$$

Consequently,
$$\lambda\,\partial_t\partial_x\psi_\lambda\ \text{is bounded in }L^2(0,T;L^2(\Omega)).\tag{2.62}$$

The weak formulation of (2.42) is
$$\int_\Omega\partial_{tt}\psi_\lambda\,v\,dx+\int_\Omega x^\alpha\,\partial_x\psi_\lambda\,\partial_x v\,dx+\lambda\int_\Omega\partial_x\partial_t\psi_\lambda\,\partial_x v\,dx=\int_\Omega f\,v\,dx,\quad\forall v\in H^1_0(\Omega).\tag{2.63}$$

Taking $v\in H^1_0(\Omega)$ with $\|v\|_{H^1_0(\Omega)}\le1$, we obtain
$$\big|\langle\partial_{tt}\psi_\lambda,v\rangle\big|\le\big\|x^\alpha\partial_x\psi_\lambda\big\|_{L^2(\Omega)}\|v\|_{H^1_0(\Omega)}+\lambda\big\|\partial_x\partial_t\psi_\lambda\big\|_{L^2(\Omega)}\|v\|_{H^1_0(\Omega)}+\|f\|_{L^2(\Omega)}\|v\|_{H^1_0(\Omega)}.\tag{2.64}$$

Since $x^\alpha\le x^{\alpha/2}$ for all $x\in\Omega$, it follows that
$$\big\|\partial_{tt}\psi_\lambda\big\|_{H^{-1}(\Omega)}\le\big\|x^{\alpha/2}\partial_x\psi_\lambda\big\|_{L^2(\Omega)}+\lambda\big\|\partial_x\partial_t\psi_\lambda\big\|_{L^2(\Omega)}+\|f\|_{L^2(\Omega)}.\tag{2.65}$$

From Equations (2.53) and (2.61), we have
$$\big\|\partial_{tt}\psi_\lambda\big\|_{L^2(0,T;H^{-1}(\Omega))}<\infty.\tag{2.66}$$

Hence,
$$\psi_\lambda\rightharpoonup\psi\quad\text{weakly in }L^\infty(0,T;L^2(\Omega)),$$
$$\psi_\lambda\rightharpoonup\psi\quad\text{weakly in }L^2(0,T;H^1_\alpha(\Omega)),$$
$$\partial_t\psi_\lambda\rightharpoonup\partial_t\psi\quad\text{weakly in }L^2(0,T;L^2(\Omega)),$$
$$\partial_{tt}\psi_\lambda\rightharpoonup\partial_{tt}\psi\quad\text{weakly in }L^2(0,T;H^{-1}(\Omega)),$$
$$\lambda\,\partial_x\partial_t\psi_\lambda\rightharpoonup0\quad\text{weakly in }L^2(0,T;L^2(\Omega)).\tag{2.67}$$

Passing to the weak limits, we find that
$$\int_\Omega\partial_{tt}\psi\,v+\int_\Omega x^\alpha\,\partial_x\psi\,\partial_x v=\int_\Omega f\,v,\quad\forall v\in H^1_0(\Omega).\tag{2.68}$$

Hence, $\psi$ is the weak solution of problem (1.1).

Proof of Theorem 1.4:

The continuity of the functional $J$ follows from the continuity of the function $\varphi:(\psi_0,v_0)\mapsto\psi$.

We have the following lemma

Lemma 2.1:

Let $\psi$ be the weak solution of (1.4) with initial state $(\psi_0,v_0)$. The function
$$\varphi:H^1_\alpha(\Omega)\times L^2(\Omega)\longrightarrow L^2\big(0,T;H^1_\alpha(\Omega)\big)\cap L^\infty\big(0,T;L^2(\Omega)\big),\qquad(\psi_0,v_0)\longmapsto\psi\tag{2.69}$$

is continuous.

Proof of Lemma 2.1:

Let $(\delta\psi_0,\delta v_0)\in H^1_\alpha(\Omega)\times L^2(\Omega)$ be a small variation such that $(\psi_0+\delta\psi_0,v_0+\delta v_0)\in A_{ad}$.

Consider $\delta\psi=\psi^\delta-\psi$, where $\psi$ is the weak solution of (1.4) with initial state $(\psi_0,v_0)$ and $\psi^\delta$ is the weak solution of (1.4) with initial state $(\psi_0+\delta\psi_0,v_0+\delta v_0)$. Consequently, $\delta\psi$ is the solution of the following variational problem
$$\int_\Omega\partial_{tt}\delta\psi\,v\,dx+\int_\Omega x^\alpha\,\partial_x\delta\psi\,\partial_x v\,dx+\lambda\int_\Omega\partial_x\partial_t\delta\psi\,\partial_x v\,dx=0,\quad\forall v\in H^1_0(\Omega),$$
$$\delta\psi(x,t)=0,\quad x\in\partial\Omega,\ t\in\,]0,T[,$$
$$\delta\psi(t=0)=\delta\psi_0(x),\quad x\in\Omega,$$
$$\partial_t\delta\psi(t=0)=\delta v_0(x),\quad x\in\Omega.\tag{2.70}$$

Hence, $\delta\psi$ is the weak solution of (1.4) with $f=0$. Applying the estimate of Theorem 1.2, we obtain
$$\sup_{t\in[0,T]}\|\delta\psi(t)\|^2_{L^2(\Omega)}+\|\partial_t\delta\psi\|^2_{L^2(0,T;L^2(\Omega))}+\big\|x^{\alpha/2}\partial_x\delta\psi\big\|^2_{L^2(0,T;L^2(\Omega))}+\lambda\big\|\partial_x\partial_t\delta\psi\big\|^2_{L^2(0,T;L^2(\Omega))}\le C\Big(\|\delta\psi_0\|^2_{H^1_\alpha(\Omega)}+\|\delta v_0\|^2_{L^2(\Omega)}\Big).\tag{2.71}$$

It follows that
$$\|\delta\psi\|^2_{L^\infty(0,T;L^2(\Omega))}\le C\Big(\|\delta\psi_0\|^2_{H^1_\alpha(\Omega)}+\|\delta v_0\|^2_{L^2(\Omega)}\Big),\tag{2.72}$$

and
$$\|\delta\psi\|^2_{L^2(0,T;H^1_\alpha(\Omega))}\le C\Big(\|\delta\psi_0\|^2_{H^1_\alpha(\Omega)}+\|\delta v_0\|^2_{L^2(\Omega)}\Big),\tag{2.73}$$

which implies the continuity of the function
$$\varphi:H^1_\alpha(\Omega)\times L^2(\Omega)\longrightarrow L^2\big(0,T;H^1_\alpha(\Omega)\big)\cap L^\infty\big(0,T;L^2(\Omega)\big),\qquad(\psi_0,v_0)\longmapsto\psi.\tag{2.74}$$

Hence, the cost function J is continuous on Aad.

Proof of Theorem 1.5:

The differentiability of the functional $J$ is deduced from the differentiability of the function $\varphi:(\psi_0,v_0)\mapsto\psi$,

where $\psi$ is the weak solution of problem (1.4) with initial condition $(\psi_0,v_0)$. We have the following result.

Lemma 2.2:

Let $\psi$ be the weak solution of (1.4) with initial state $(\psi_0,v_0)$. The function
$$\varphi:H^1_\alpha(\Omega)\times L^2(\Omega)\longrightarrow L^2\big(0,T;H^1_\alpha(\Omega)\big)\cap L^\infty\big(0,T;L^2(\Omega)\big),\qquad(\psi_0,v_0)\longmapsto\psi\tag{2.75}$$

is Gâteaux differentiable.

Proof of Lemma 2.2:

Let $b=(\psi_0,v_0)\in A_{ad}$ and let $\delta b=(\delta\psi_0,\delta v_0)$ be a small variation such that $b+\delta b\in A_{ad}$. We define the function
$$\varphi'(b):\delta b\in A_{ad}\longmapsto\delta\psi,\tag{2.76}$$

where $\delta\psi$ is the solution of the following variational problem
$$\int_\Omega\partial_{tt}\delta\psi\,v\,dx+\int_\Omega x^\alpha\,\partial_x\delta\psi\,\partial_x v\,dx+\lambda\int_\Omega\partial_x\partial_t\delta\psi\,\partial_x v\,dx=0,\quad\forall v\in H^1_0(\Omega),$$
$$\delta\psi(0,t)=\delta\psi(1,t)=0,\quad t\in\,]0,T[,$$
$$\delta\psi(x,0)=\delta\psi_0,\quad x\in\Omega,$$
$$\partial_t\delta\psi(x,0)=\delta v_0,\quad x\in\Omega.\tag{2.77}$$

Set
$$\phi(b)=\varphi(b+\delta b)-\varphi(b)-\varphi'(b)\,\delta b.\tag{2.78}$$

We want to show that
$$\phi(b)=o(\delta b).\tag{2.79}$$

We easily verify that the function $\phi$ is a solution of the following variational problem
$$\int_\Omega\partial_{tt}\phi\,v\,dx+\int_\Omega x^\alpha\,\partial_x\phi\,\partial_x v\,dx+\lambda\int_\Omega\partial_x\partial_t\phi\,\partial_x v\,dx=0,\quad\forall v\in H^1_0(\Omega),$$
$$\phi(0,t)=\phi(1,t)=0,\quad t\in\,]0,T[,$$
$$\phi(x,0)=\delta\psi_0-\delta\psi_0^2-\delta v_0^2,\quad x\in\Omega,$$
$$\partial_t\phi(x,0)=\delta v_0,\quad x\in\Omega.\tag{2.80}$$

In the same way as in the proof of continuity, we deduce
$$\|\phi\|^2_{L^\infty(0,T;L^2(\Omega))}\le C\Big(\big\|\delta\psi_0-\delta\psi_0^2-\delta v_0^2\big\|^2_{H^1_\alpha(\Omega)}+\|\delta v_0\|^2_{L^2(\Omega)}\Big),\tag{2.81}$$

and
$$\|\phi\|^2_{L^2(0,T;H^1_\alpha(\Omega))}\le C\Big(\big\|\delta\psi_0-\delta\psi_0^2-\delta v_0^2\big\|^2_{H^1_\alpha(\Omega)}+\|\delta v_0\|^2_{L^2(\Omega)}\Big).\tag{2.82}$$

Hence, the function $\varphi:(\psi_0,v_0)\mapsto\psi$ is Gâteaux differentiable, and we deduce that $J$ is Gâteaux differentiable on $A_{ad}$.

Now, we compute the gradient of J using the adjoint state method.

3. Adjoint state method

We define the Gâteaux derivative of $\psi$ at $k=(\psi_0,v_0)$ in the direction $h=(h_1,h_2)\in L^2(\Omega)\times L^2(\Omega)$ by
$$\hat\psi=\lim_{s\to0}\frac{\psi(k+sh)-\psi(k)}{s},\tag{3.1}$$
where $\psi(k+sh)$ is the solution of (1.4) with initial state $k+sh$, and $\psi(k)$ is the solution of (1.4) with initial state $k$.

We compute the Gâteaux (directional) derivative of (1.4) at $k$ in some direction $h$, and we get the so-called tangent linear model
$$\partial_t^2\hat\psi-\partial_x(x^\alpha\partial_x\hat\psi)-\partial_x\big(\lambda\,\partial_x\partial_t\hat\psi\big)=0,$$
$$\hat\psi(0,t)=\hat\psi(1,t)=0,\quad t\in\,]0,T[,$$
$$\hat\psi(x,0)=h_1,\quad x\in\,]0,1[,$$
$$\partial_t\hat\psi(x,0)=h_2,\quad x\in\,]0,1[.\tag{3.2}$$

We introduce the adjoint variable $P$, and we integrate
$$\int_0^1\!\!\int_0^T\partial_{tt}\hat\psi\,P-\int_0^1\!\!\int_0^T\partial_x(x^\alpha\partial_x\hat\psi)\,P-\int_0^1\!\!\int_0^T\partial_x\big(\lambda\,\partial_x\partial_t\hat\psi\big)\,P=0.\tag{3.3}$$

We have
$$\int_0^T\!\!\int_0^1\partial_x\big(\lambda\,\partial_x\partial_t\hat\psi\big)\,P=\int_0^T\Big[\lambda\,\partial_x\partial_t\hat\psi\,P\Big]_0^1-\int_0^T\!\!\int_0^1\lambda\,\partial_x\partial_t\hat\psi\,\partial_x P.\tag{3.4}$$

We impose $P(x=0)=P(x=1)=0$; then we obtain
$$\int_0^T\!\!\int_0^1\partial_x\big(\lambda\,\partial_x\partial_t\hat\psi\big)\,P=-\int_0^T\Big[\lambda\,\partial_t\hat\psi\,\partial_x P\Big]_0^1+\int_0^T\!\!\int_0^1\lambda\,\partial_t\hat\psi\,\partial_x^2 P.\tag{3.5}$$

Since $\hat\psi(0,t)=\hat\psi(1,t)=0$ for all $t\in\,]0,T[$, we have $\partial_t\hat\psi(0,t)=\partial_t\hat\psi(1,t)=0$ for all $t\in\,]0,T[$.

Hence
$$\int_0^T\!\!\int_0^1\partial_x\big(\lambda\,\partial_x\partial_t\hat\psi\big)\,P=\Big[\int_0^1\lambda\,\hat\psi\,\partial_x^2P\Big]_0^T-\int_0^T\!\!\int_0^1\lambda\,\hat\psi\,\partial_{xx}\partial_t P.\tag{3.6}$$

Since $P(x,t=T)=0$ for all $x\in\Omega$, we have $\partial_xP(x,T)=0$ and $\partial_x^2P(x,T)=0$. In addition, $\hat\psi(x,0)=h_1$ for all $x\in\Omega$, which gives
$$\int_0^T\!\!\int_0^1\partial_x\big(\lambda\,\partial_x\partial_t\hat\psi\big)\,P=-\int_0^1\lambda\,h_1\,\partial_x^2P(t=0)-\int_0^T\!\!\int_0^1\lambda\,\hat\psi\,\partial_{xx}\partial_t P.\tag{3.7}$$

Based on the calculations done in the previous paragraph, we have
$$\int_0^1\!\!\int_0^T\partial_{tt}\hat\psi\,P-\int_0^1\!\!\int_0^T\partial_x(x^\alpha\partial_x\hat\psi)\,P=\big\langle h_1,-\partial_tP(t=0)\big\rangle_{L^2(\Omega)}+\big\langle h_2,P(0)\big\rangle_{L^2(\Omega)}.\tag{3.8}$$

This gives
$$\int_0^T\big\langle\partial_{tt}P-\partial_x(x^\alpha\partial_xP)+\lambda\,\partial_{xx}\partial_tP,\ \hat\psi\big\rangle_{L^2(\Omega)}=\big\langle h_1,-\partial_tP(t=0)-\lambda\,\partial_x^2P(t=0)\big\rangle_{L^2(\Omega)}+\big\langle h_2,P(0)\big\rangle_{L^2(\Omega)},$$
$$P(x=0)=P(x=1)=0,\qquad P(T)=0,\qquad\partial_tP(T)=0.\tag{3.9}$$

The discretization in time of (3.9), using the rectangle rule, gives
$$\sum_{j=0}^{M+1}\big\langle\hat\psi(t_j),\ \partial_{tt}P(t_j)-\partial_x\big(x^\alpha\partial_xP(t_j)\big)+\lambda\,\partial_{xx}\partial_tP(t_j)\big\rangle_{L^2(\Omega)}\,\Delta t=\big\langle h_1,-\partial_tP(t=0)-\lambda\,\partial_x^2P(t=0)\big\rangle_{L^2(\Omega)}+\big\langle h_2,P(0)\big\rangle_{L^2(\Omega)},$$
$$P(x=0)=P(x=1)=0,\qquad P(T)=0,\qquad\partial_tP(T)=0.\tag{3.10}$$

Set $\bar u=(\bar\psi_0,\bar v_0)$. The Gâteaux derivative of the cost function
$$J(\psi_0,v_0)=\frac{\varepsilon}{2}\Big(\|\psi_0-\bar\psi_0\|^2_{L^2(\Omega)}+\|v_0-\bar v_0\|^2_{L^2(\Omega)}\Big)+\frac12\sum_{j\in I}\|\psi(t_j)-\psi_{obs}(t_j)\|^2_{L^2(\Omega)}$$

at $k=(\psi_0,v_0)$ in the direction $h=(h_1,h_2)\in L^2(\Omega)\times L^2(\Omega)$ is given by
$$\hat J(h)=\lim_{s\to0}\frac{J(k+sh)-J(k)}{s}.$$

After some computation, we arrive at
$$\hat J(h)=\varepsilon\big\langle\psi_0-\bar\psi_0,h_1\big\rangle_{L^2(\Omega)}+\varepsilon\big\langle v_0-\bar v_0,h_2\big\rangle_{L^2(\Omega)}+\sum_{j\in I}\big\langle\psi(t_j)-\psi_{obs}(t_j),\hat\psi(t_j)\big\rangle_{L^2(\Omega)}.\tag{3.11}$$

The adjoint problem is
$$\partial_{tt}P(t_j)-\partial_x\big(x^\alpha\partial_xP(t_j)\big)+\partial_x\big(\lambda\,\partial_x\partial_tP(t_j)\big)=\frac1{\Delta t}\big(\psi(t_j)-\psi_{obs}(t_j)\big),\quad j\in I,$$
$$\partial_{tt}P(t_j)-\partial_x\big(x^\alpha\partial_xP(t_j)\big)+\partial_x\big(\lambda\,\partial_x\partial_tP(t_j)\big)=0,\quad j\notin I,$$
$$P(x=0)=P(x=1)=0,\qquad P(T)=0,\qquad\partial_tP(T)=0.\tag{3.12}$$

Problem (3.12) is backward in time; after the change of variable $t\mapsto T-t$, the gradient of $J$ becomes
$$\nabla_{\psi_0}J=\partial_tP(T)+\varepsilon(\psi_0-\bar\psi_0)-\lambda\,\partial_x^2P(T),\qquad\nabla_{v_0}J=P(T)+\varepsilon(v_0-\bar v_0).\tag{3.13}$$

To compute the gradient of $J$, we therefore solve two problems: the direct problem (1.4) and the adjoint problem (3.12) with the change of variable $t\mapsto T-t$.
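In practice, a gradient obtained from an adjoint computation is usually validated against a finite-difference approximation of the functional before being fed to the descent method. A minimal sketch of such a check on a stand-in quadratic functional (the functional `J_toy` and its analytic gradient are illustrative assumptions playing the role of the adjoint-computed gradient, not the paper's $J$):

```python
import numpy as np

def J_toy(u, u_bar, eps):
    # stand-in for a regularized functional: 0.5*||u||^2 + (eps/2)*||u - u_bar||^2
    return 0.5 * np.dot(u, u) + 0.5 * eps * np.dot(u - u_bar, u - u_bar)

def grad_J_toy(u, u_bar, eps):
    # analytic gradient, playing the role of the adjoint-computed gradient
    return u + eps * (u - u_bar)

def fd_gradient(J, u, delta=1e-6):
    # centered finite differences, component by component
    g = np.zeros_like(u)
    for i in range(u.size):
        e = np.zeros_like(u)
        e[i] = delta
        g[i] = (J(u + e) - J(u - e)) / (2 * delta)
    return g

u_bar = np.array([0.3, -0.1, 0.7])
u = np.array([1.0, 2.0, -0.5])
eps = 1e-2
g_adj = grad_J_toy(u, u_bar, eps)
g_fd = fd_gradient(lambda w: J_toy(w, u_bar, eps), u)
assert np.allclose(g_adj, g_fd, atol=1e-5)  # the two gradients agree
```
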

4. Discretization of problem

Step 1. Full discretization

Discrete approximations of these problems must be made for numerical implementation. To solve the direct problem and the adjoint problem, we use the $\theta$-scheme in time. This method is unconditionally stable for $\tfrac12\le\theta<1$.
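The unconditional stability for $\theta\ge\tfrac12$ can be illustrated on the scalar test equation $y'=-ay$, for which the $\theta$-scheme amplification factor is $\big(1-(1-\theta)a\Delta t\big)/\big(1+\theta a\Delta t\big)$, with modulus at most $1$ for $\theta\ge\tfrac12$ whatever the step. This scalar model is only an illustrative stand-in for the paper's wave system:

```python
def amplification(theta, a, dt):
    # theta-scheme applied to y' = -a*y:
    # (y_{n+1} - y_n)/dt = -a*(theta*y_{n+1} + (1-theta)*y_n)
    return (1 - (1 - theta) * a * dt) / (1 + theta * a * dt)

# large step: explicit Euler (theta=0) amplifies, theta=1/2 and theta=1 do not
a, dt = 10.0, 1.0
assert abs(amplification(0.0, a, dt)) > 1
assert abs(amplification(0.5, a, dt)) <= 1
assert abs(amplification(1.0, a, dt)) <= 1
```
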

Let $h$ be the step in space and $\Delta t$ the step in time. Let

$x_i=ih$ with $i\in\{0,1,\dots,N+1\}$,

$t_j=j\Delta t$ with $j\in\{0,1,\dots,M+1\}$,

$f_i^j=f(x_i,t_j)$.

Let $\psi_i^j=\psi(x_i,t_j)$, $a(x)=x^\alpha$, $b(x_i)=a(x_i)$ and
$$\phi(x_i,t_j)=\phi_i^j=b(x_i)\,\partial_x\psi(x_i,t_j).\tag{4.1}$$

We have
$$\frac{\partial^2\psi}{\partial t^2}(x_i,t_j)\approx\frac{\psi_i^{j+1}-2\psi_i^j+\psi_i^{j-1}}{\Delta t^2},\tag{4.2}$$
$$\partial_x\big(b\,\partial_x\psi^j\big)(x_i)\approx\frac{1-\theta}{h}\big(\phi_{i+1}^j-\phi_i^j\big)+\frac{\theta}{2h}\big(\phi_{i+1}^{j+1}-\phi_i^{j+1}\big)+\frac{\theta}{2h}\big(\phi_{i+1}^{j-1}-\phi_i^{j-1}\big).\tag{4.3}$$

Using a backward affine approximation of $\psi$ at $x_i$, we get
$$\psi(x_i-h,t_j)\approx\psi(x_i,t_j)-h\,\partial_x\psi(x_i,t_j),\tag{4.4}$$
$$\partial_x\psi(x_i,t_j)\approx\frac{\psi(x_i,t_j)-\psi(x_{i-1},t_j)}{h},\tag{4.5}$$

which gives
$$\frac{\partial\psi^j}{\partial x}(x_i)\approx\frac1h\big(\psi_i^j-\psi_{i-1}^j\big);\tag{4.6}$$

in the same way, a backward affine approximation of $\psi$ at $x_{i+1}$ gives
$$\frac{\partial\psi^j}{\partial x}(x_{i+1})\approx\frac1h\big(\psi_{i+1}^j-\psi_i^j\big).\tag{4.7}$$

Hence
$$\frac1h\big(\phi_{i+1}^j-\phi_i^j\big)\approx\frac{b(x_{i+1})}{h}\,\frac1h\big(\psi_{i+1}^j-\psi_i^j\big)-\frac{b(x_i)}{h}\,\frac1h\big(\psi_i^j-\psi_{i-1}^j\big),\tag{4.8}$$
and, adding and subtracting $\frac{b(x_i)}{h}\,\frac1h(\psi_{i+1}^j-\psi_i^j)$,
$$\frac1h\big(\phi_{i+1}^j-\phi_i^j\big)\approx\frac{b(x_{i+1})-b(x_i)}{h}\,\frac1h\big(\psi_{i+1}^j-\psi_i^j\big)+b(x_i)\,\frac{\psi_{i+1}^j-2\psi_i^j+\psi_{i-1}^j}{h^2}.\tag{4.9}$$

Let
$$da(x_i)=\frac{b(x_{i+1})-b(x_i)}{h},\tag{4.10}$$

and we have
$$\lambda\,\partial_t\partial_{xx}\psi\approx\frac{\lambda}{\Delta t\,h^2}\Big[\psi_{i+1}^{j+1}-2\psi_i^{j+1}+\psi_{i-1}^{j+1}-\big(\psi_{i+1}^j-2\psi_i^j+\psi_{i-1}^j\big)\Big].$$

Hence, $\partial_{tt}\psi-A\psi=f$ is approximated by
$$g_1(x_i)\psi_{i-1}^{j+1}+g_2(x_i)\psi_i^{j+1}+g_3(x_i)\psi_{i+1}^{j+1}=k_1(x_i)\psi_{i-1}^j+k_2(x_i)\psi_i^j+k_3(x_i)\psi_{i+1}^j+h_1(x_i)\psi_{i-1}^{j-1}+h_2(x_i)\psi_i^{j-1}+h_3(x_i)\psi_{i+1}^{j-1}+\Delta t^2\Big[(1-\theta)f_i^j+\frac\theta2\big(f_i^{j+1}+f_i^{j-1}\big)\Big],\tag{4.11}$$

where
$$g_1(x_i)=-\frac{\theta\Delta t^2}{2h^2}\,b(x_i)-\frac{\lambda\Delta t}{h^2},\tag{4.12}$$
$$g_2(x_i)=1+\frac{\theta\Delta t^2}{h^2}\,b(x_i)+\frac{\theta\Delta t^2}{2h}\,da(x_i)+\frac{2\lambda\Delta t}{h^2},\tag{4.13}$$
$$g_3(x_i)=-\frac{\theta\Delta t^2}{2h^2}\,b(x_i)-\frac{\theta\Delta t^2}{2h}\,da(x_i)-\frac{\lambda\Delta t}{h^2},\tag{4.14}$$
$$k_1(x_i)=\frac{(1-\theta)\Delta t^2}{h^2}\,b(x_i)-\frac{\lambda\Delta t}{h^2},\tag{4.15}$$
$$k_2(x_i)=2-\frac{(1-\theta)\Delta t^2}{h}\,da(x_i)-\frac{2(1-\theta)\Delta t^2}{h^2}\,b(x_i)+\frac{2\lambda\Delta t}{h^2},\tag{4.16}$$
$$k_3(x_i)=\frac{(1-\theta)\Delta t^2}{h}\,da(x_i)+\frac{(1-\theta)\Delta t^2}{h^2}\,b(x_i)-\frac{\lambda\Delta t}{h^2},\tag{4.17}$$
$$h_1(x_i)=\frac{\theta\Delta t^2}{2h^2}\,b(x_i),\tag{4.18}$$
$$h_2(x_i)=-1-\frac{\theta\Delta t^2}{h^2}\,b(x_i)-\frac{\theta\Delta t^2}{2h}\,da(x_i),\tag{4.19}$$
$$h_3(x_i)=\frac{\theta\Delta t^2}{2h^2}\,b(x_i)+\frac{\theta\Delta t^2}{2h}\,da(x_i).\tag{4.20}$$

We have
$$g_1(x_i)\psi_{i-1}^{j+1}+g_2(x_i)\psi_i^{j+1}+g_3(x_i)\psi_{i+1}^{j+1}=k_1(x_i)\psi_{i-1}^j+k_2(x_i)\psi_i^j+k_3(x_i)\psi_{i+1}^j+h_1(x_i)\psi_{i-1}^{j-1}+h_2(x_i)\psi_i^{j-1}+h_3(x_i)\psi_{i+1}^{j-1}+\Delta t^2\Big[(1-\theta)f_i^j+\frac\theta2\big(f_i^{j+1}+f_i^{j-1}\big)\Big],\tag{4.21}$$

so with $\psi^j=(\psi_i^j)_{i\in\{1,\dots,N\}}$ we obtain
$$K\psi^{j+1}=B\psi^j+C\psi^{j-1}+V^j\quad\text{for }j\in\{1,\dots,M\},$$
$$\psi^0=\big(\psi_0(ih)\big)_{i\in\{1,\dots,N\}},\qquad\psi^1=\big(\psi^1(ih)\big)_{i\in\{1,\dots,N\}},\tag{4.22}$$

where
$$\psi^1(ih)\approx\psi_0(ih)+\Delta t\,v_0(ih)+\frac{\Delta t^2}{2}\,\partial_{tt}\psi(ih,t=0),\tag{4.23}$$
$$\psi^1(ih)\approx\psi_0(ih)+\Delta t\,v_0(ih)+\frac{\Delta t^2}{2}\Big(\partial_x\big(a\,\partial_x\psi\big)(ih,t=0)+f(ih,t=0)\Big)+\frac{\lambda\Delta t^2}{2}\,\partial_t\Delta\psi(ih,t=0).\tag{4.24}$$

We have $\Delta(\partial_t\psi)(ih,t=0)=\Delta v_0(ih)$, hence
$$\psi^1(ih)\approx\psi_0(ih)+\Delta t\,v_0(ih)+\frac{\Delta t^2}{2}\Big(\partial_x\big(a\,\partial_x\psi\big)(ih,t=0)+f(ih,t=0)\Big)+\frac{\lambda\Delta t^2}{2}\,\frac{v_0\big((i+1)h\big)-2v_0(ih)+v_0\big((i-1)h\big)}{h^2},$$

and
$$K=\begin{pmatrix}g_2(x_1)&g_3(x_1)&&&0\\g_1(x_2)&g_2(x_2)&g_3(x_2)&&\\&\ddots&\ddots&\ddots&\\&&g_1(x_{N-1})&g_2(x_{N-1})&g_3(x_{N-1})\\0&&&g_1(x_N)&g_2(x_N)\end{pmatrix},\qquad B=\begin{pmatrix}k_2(x_1)&k_3(x_1)&&&0\\k_1(x_2)&k_2(x_2)&k_3(x_2)&&\\&\ddots&\ddots&\ddots&\\&&k_1(x_{N-1})&k_2(x_{N-1})&k_3(x_{N-1})\\0&&&k_1(x_N)&k_2(x_N)\end{pmatrix},$$
$$C=\begin{pmatrix}h_2(x_1)&h_3(x_1)&&&0\\h_1(x_2)&h_2(x_2)&h_3(x_2)&&\\&\ddots&\ddots&\ddots&\\&&h_1(x_{N-1})&h_2(x_{N-1})&h_3(x_{N-1})\\0&&&h_1(x_N)&h_2(x_N)\end{pmatrix},\qquad V^j=\Delta t^2\begin{pmatrix}(1-\theta)f_1^j+\frac\theta2\big(f_1^{j+1}+f_1^{j-1}\big)\\\vdots\\(1-\theta)f_N^j+\frac\theta2\big(f_N^{j+1}+f_N^{j-1}\big)\end{pmatrix}.$$

Step 2. Discretization of the functional
$$J(u)=\frac\varepsilon2\Big(\int_0^1(\psi_0-\bar\psi_0)^2\,dx+\int_0^1(v_0-\bar v_0)^2\,dx\Big)+\frac12\sum_{j\in I}\|\psi(t_j)-\psi_{obs}(t_j)\|^2_{L^2(\Omega)}.\tag{4.25}$$

We recall the composite Simpson rule for computing an integral:
$$\int_a^b f(x)\,dx\approx\frac h3\Big[f(x_0)+2\sum_{i=1}^{(N+1)/2-1}f(x_{2i})+4\sum_{i=1}^{(N+1)/2}f(x_{2i-1})+f(x_{N+1})\Big],$$

with $x_0=a$, $x_{N+1}=b$ and $x_i=a+ih$, $i\in\{1,\dots,N+1\}$.
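The composite Simpson rule recalled above can be checked on a polynomial it integrates exactly (a minimal sketch; the function name is an illustrative assumption):

```python
def simpson(f, a, b, n):
    """Composite Simpson rule with n subintervals (n must be even)."""
    if n % 2:
        raise ValueError("n must be even")
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + i * h) for i in range(1, n, 2))  # odd interior nodes
    s += 2 * sum(f(a + i * h) for i in range(2, n, 2))  # even interior nodes
    return h * s / 3

# Simpson is exact for cubics: int_0^1 x^3 dx = 1/4
assert abs(simpson(lambda x: x**3, 0.0, 1.0, 10) - 0.25) < 1e-12
```
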

Consider the functions
$$W(x)=(\psi_0-\bar\psi_0)^2(x),\quad x\in\Omega,\tag{4.26}$$
$$S(x)=(v_0-\bar v_0)^2(x),\quad x\in\Omega,\tag{4.27}$$

and
$$\varphi(x,t_j)=(\psi-\psi_{obs})^2(x,t_j),\quad j\in I,\ x\in\Omega.\tag{4.28}$$

We have
$$\int_0^1W(x)\,dx\approx\frac h3\Big[W(0)+2\sum_{i=1}^{(N+1)/2-1}W(x_{2i})+4\sum_{i=1}^{(N+1)/2}W(x_{2i-1})+W(1)\Big],$$

and
$$\int_0^1S(x)\,dx\approx\frac h3\Big[S(0)+2\sum_{i=1}^{(N+1)/2-1}S(x_{2i})+4\sum_{i=1}^{(N+1)/2}S(x_{2i-1})+S(1)\Big].$$

Let
$$\ell(t)=\int_0^1\varphi(x,t)\,dx.$$

This gives
$$\ell(t)\approx\frac h3\Big[\varphi(0,t)+2\sum_{i=1}^{(N+1)/2-1}\varphi(x_{2i},t)+4\sum_{i=1}^{(N+1)/2}\varphi(x_{2i-1},t)+\varphi(1,t)\Big],$$

hence
$$\frac12\sum_{j\in I}\|\psi(t_j)-\psi_{obs}(t_j)\|^2_{L^2(\Omega)}\approx\frac12\sum_{j\in I}\ell(t_j).$$

Therefore
$$J(u)\approx\frac{\varepsilon h}{6}\Big[W(0)+2\sum_{i=1}^{(N+1)/2-1}W(x_{2i})+4\sum_{i=1}^{(N+1)/2}W(x_{2i-1})+W(1)\Big]+\frac{\varepsilon h}{6}\Big[S(0)+2\sum_{i=1}^{(N+1)/2-1}S(x_{2i})+4\sum_{i=1}^{(N+1)/2}S(x_{2i-1})+S(1)\Big]+\frac12\sum_{j\in I}\ell(t_j).\tag{4.29}$$

The main steps of the descent method at each iteration $k$ are:

  • Calculate $\psi_k$, the solution of (1.4) with initial condition $(\psi_0,v_0)_k$;

  • Calculate $P_k$, the solution of the adjoint problem;

  • Calculate the descent direction $d_k=-\nabla J\big((\psi_0,v_0)_k\big)$;

  • Find $t_k=\operatorname{argmin}_{t>0}J\big((\psi_0,v_0)_k+t\,d_k\big)$;

  • Update the variable: $(\psi_0,v_0)_{k+1}=(\psi_0,v_0)_k+t_k\,d_k$.

The algorithm ends when $\|\nabla J(\psi_0,v_0)\|<\mu$, where $\mu$ is a given precision.
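The outer descent loop has the following shape; as a minimal sketch, it is run here on a stand-in quadratic functional with a known gradient and a fixed step size (the line-search rule described next is one way to choose the step adaptively instead; all names are illustrative assumptions):

```python
import numpy as np

def descent(grad, u0, step=0.1, mu=1e-8, max_iter=10_000):
    """Gradient descent: d_k = -grad(u_k), stop when ||grad(u_k)|| < mu."""
    u = u0.astype(float)
    for _ in range(max_iter):
        g = grad(u)
        if np.linalg.norm(g) < mu:  # stopping criterion ||grad J|| < mu
            break
        u = u - step * g            # update along the descent direction
    return u

# stand-in quadratic J(u) = 0.5*||u - target||^2, minimized at target
target = np.array([1.0, -2.0, 0.5])
u_min = descent(lambda u: u - target, np.zeros(3))
assert np.allclose(u_min, target, atol=1e-6)
```
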

The value $t_k$ is chosen by an inexact line search with the Armijo–Goldstein rule, as follows:

  • let $\rho,\beta\in\,]0,1[$ and $\alpha_0>0$ (for example $\rho=\tfrac12$ and $\beta=0.9$);

  • if $J\big((\psi_0,v_0)_k+\alpha_i d_k\big)\le J\big((\psi_0,v_0)_k\big)+\beta\,\alpha_i\,\nabla J\big((\psi_0,v_0)_k\big)^{T}d_k$,

  • then take $t_k=\alpha_i$ and stop;

  • otherwise, set $\alpha_{i+1}=\rho\,\alpha_i$.
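A sketch of this backtracking rule, written for the descent direction $d_k=-\nabla J(u_k)$ (the shrink factor and constants follow the example values above; the function names and the toy quadratic used to exercise it are illustrative assumptions):

```python
import numpy as np

def armijo_step(J, grad_J, u, beta=0.9, alpha0=0.5, shrink=0.5, max_tries=50):
    """Backtracking line search: accept the first alpha_i with
    J(u + alpha_i*d) <= J(u) + beta*alpha_i*grad_J(u)^T d, where d = -grad_J(u)."""
    d = -grad_J(u)
    slope = np.dot(grad_J(u), d)   # = -||grad J(u)||^2 < 0
    alpha = alpha0
    for _ in range(max_tries):
        if J(u + alpha * d) <= J(u) + beta * alpha * slope:
            return alpha           # sufficient decrease achieved
        alpha *= shrink            # otherwise shrink the step
    return alpha

# toy quadratic J(u) = 0.5*||u||^2 with gradient u
J = lambda u: 0.5 * np.dot(u, u)
gJ = lambda u: u
u = np.array([2.0, -1.0])
alpha = armijo_step(J, gJ, u)
d = -gJ(u)
# the accepted step satisfies the sufficient-decrease condition
assert J(u + alpha * d) <= J(u) + 0.9 * alpha * np.dot(gJ(u), d)
```
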

5. Numerical experiments

In this section, we do three tests:

In the first test, we show numerically that, as $\lambda\to0$, the sequence $(\psi_\lambda)_{\lambda>0}$ of weak solutions of problem (1.4) converges to the weak solution of problem (1.1), and we compare the proposed method with the method based only on Tikhonov regularization.

In the second test, we study the impact of the percentage of final knowledge of $\psi_{obs}$ on the reconstruction of the initial state.

In the third test, we study the noise resistance of the proposed method.

Remark: It is known in the literature that $\varepsilon$ is very difficult to determine ([Citation20,Citation21]). In order to choose $\varepsilon$, we run each test for several values of $\varepsilon$ and keep the value that gives the best result.

We ran all the tests on a PC with the following configuration: Intel Core i3 CPU 2.27 GHz; RAM = 4 GB (2.93 usable). We take the step in space $h=\tfrac1{60}$ and the step in time $\Delta t=\tfrac1{60}$.

5.1. Convergence of the sequence $(\psi_\lambda)_{\lambda>0}$ as $\lambda\to0$

Let $\psi_0^{exact}=x(1-x)T$ and $v_0^{exact}=-x(1-x)T^2$; $(\psi_0^{exact},v_0^{exact})$ is the true state, the state to be estimated.

In the figures below, the true initial state is drawn in red and the rebuilt initial state of problem (1.4) in blue.

Figure 1. Test with $\lambda=10^{-2}$. The value of $\lambda$ is not small enough; this figure shows that we cannot rebuild the solution.

Figure 2. Test with $\lambda=10^{-4}$. The value of $\lambda$ is not small enough and we cannot rebuild the true state, but the reconstructed initial condition starts to approach it.

Figure 3. Test with $\lambda=10^{-6}$. The reconstructed initial condition is close to the true state.

Figure 4. Test with $\lambda=10^{-8}$. This figure shows that we can rebuild the solution.

Table 1. Minimum value of $J$ as a function of $\lambda$. To obtain a better value of $J$, $\lambda$ must be chosen small enough.

Table 1 shows the values of $J$ obtained for different values of $\lambda$. These tests show numerically that, as $\lambda\to0$, the sequence $(\psi_\lambda)_{\lambda>0}$ of weak solutions of problem (1.4) converges to the weak solution of problem (1.1) (Figures 1–4). The rebuilt initial state begins to approach the true initial state as soon as $\lambda$ is lower than $10^{-6}$ (Figures 3 and 4).

To validate this result, we ran another test with $\lambda=10^{-6}$, reconstructing the initial state $(\psi_0^{exact},v_0^{exact})$ with $\psi_0^{exact}=x(e-e^x)$ and $v_0^{exact}=e-e^x-xe^x$; we obtain the following result.

Figure 5. This figure shows that we can rebuild $\psi_0$ (left) and $v_0$ (right).

This test (Figure 5) shows that we can rebuild the initial state $(\psi_0^{exact},v_0^{exact})$ with $\lambda=10^{-6}$. As in the first test, we find that we can rebuild the initial state with $\lambda\le10^{-6}$, and that as $\lambda\to0$ the sequence $(\psi_\lambda)_{\lambda>0}$ of weak solutions of problem (1.4) converges to the weak solution of problem (1.1).

In order to compare the proposed double-regularization method with the Tikhonov method, we ran another test based only on Tikhonov regularization, reconstructing the initial state $(\psi_0^{exact},v_0^{exact})$ with $\psi_0^{exact}=x(1-x)T$ and $v_0^{exact}=-x(1-x)T^2$. The algorithm found the solution with a minimum value of $J$ equal to $8.78\times10^{-5}$ in 72 h 54 min. Compared with the results in Table 1, the double-regularization method yields a 50% gain in execution time. This shows the effectiveness of the method in reducing the time needed to find the initial condition of problem (1.1).

5.2. The impact of the percentage of final knowledge of $\psi_{obs}$ on the reconstruction of the initial state

In these tests, we study the percentage of final knowledge of $\psi_{obs}$ necessary to estimate the initial state. In all tests we take $\lambda=10^{-8}$.

Figure 6. Test with 100% of final knowledge of $\psi_{obs}$. This figure shows that we can rebuild $\psi_0$ (left) and $v_0$ (right).

Figure 7. Test with 50% of final knowledge of $\psi_{obs}$. This figure shows that we can rebuild $\psi_0$ (left) and $v_0$ (right).

Figure 8. Test with 40% of final knowledge of $\psi_{obs}$. This figure shows that we can rebuild $\psi_0$ (left) and $v_0$ (right).

Figure 9. Test with 30% of final knowledge of $\psi_{obs}$. This figure shows that we can rebuild $\psi_0$ (left), but we cannot rebuild $v_0$ (right).

Figure 10. Test with 20% of final knowledge of $\psi_{obs}$. This figure shows that we can rebuild $\psi_0$ (left), but we cannot rebuild $v_0$ (right).

These tests show that we can rebuild $\psi_0$ with 20% of final knowledge of $\psi_{obs}$ (Figures 6–10), but we lose $v_0$ as soon as the percentage of final knowledge drops below 40% (Figures 9 and 10). Remark: Figure 11 shows that we cannot rebuild the initial state in the case $\varepsilon=0$ (test without regularization).

Figure 11. Test without regularization. This figure shows that we cannot rebuild $\psi_0$ (left) or $v_0$ (right).

Figure 12. Test with $\mathrm{err}=5\%\,\|(\psi_0^{exact},v_0^{exact})\|_2$. This figure shows that we can rebuild $\psi_0$ (left), and $v_0$ is close to $v_0^{exact}$, $\|v_0-v_0^{exact}\|_2=0.1344$ (right).

Figure 13. Test with $\mathrm{err}=10\%\,\|(\psi_0^{exact},v_0^{exact})\|_2$. This figure shows that we can rebuild $\psi_0$ (left), and $v_0$ begins to move away from $v_0^{exact}$, $\|v_0-v_0^{exact}\|_2=0.1938$ (right).

Figure 14. Test with $\mathrm{err}=\|(\psi_0^{exact},v_0^{exact})\|_2$. This figure shows that we cannot rebuild $\psi_0$ (left), and $v_0$ is far from $v_0^{exact}$ (right).

Figure 15. Test with $\mathrm{err}_{obs}=5\%\,\|\psi_{exact}\|_2$. This figure shows that we can rebuild $\psi_0$ (left) and $v_0$ (right).

Figure 16. Test with $\mathrm{err}_{obs}=10\%\,\|\psi_{exact}\|_2$. This figure shows that we can rebuild $\psi_0$ (left), and $v_0$ begins to move away from $v_0^{exact}$ (right).

Figure 17. Test with $\mathrm{err}_{obs}=20\%\,\|\psi_{exact}\|_2$. This figure shows that we can rebuild $\psi_0$ (left), and $v_0$ is far from $v_0^{exact}$ (right).

Figure 18. Test with $\mathrm{err}_{obs}=30\%\,\|\psi_{exact}\|_2$. This figure shows that $\psi_0$ begins to move away from $\psi_0^{exact}$ (left) and $v_0$ is far from $v_0^{exact}$ (right).

5.3. The noise resistance of the proposed method

The data $\bar u=(\bar\psi_0,\bar v_0)$ and $\psi_{obs}$ are assumed to be corrupted by measurement errors, which we refer to as noise. In particular, we suppose that $\bar u=u_{exact}+e$ and $\psi_{obs}=\psi_{exact}+e_{obs}$. Let $\mathrm{err}=\|e\|_2$ and $\mathrm{err}_{obs}=\|e_{obs}\|_2$. We consider that we have 50% of knowledge of the final observation, and we take $\lambda=10^{-8}$. We performed two tests:

In the first, we suppose $\mathrm{err}_{obs}=0$ and study the impact of $\mathrm{err}$ on the reconstruction of the initial state. In the second, we suppose $\mathrm{err}=0$ and study the impact of $\mathrm{err}_{obs}$ on the reconstruction of the initial state.

5.3.1. Impact of err on the construction of the initial state

These tests (Figures 13 and 14) show that the proposed algorithm is uniformly stable with respect to noise, and we conclude that the tolerable noise level for rebuilding the initial state is err = 5%‖(ψ0exact,v0exact)‖2.

5.3.2. Impact of errobs on the construction of the initial state

These tests (Figures 15–18) show that the proposed algorithm is uniformly stable with respect to noise. We can rebuild ψ0 as long as errobs ≤ 20% (Figures 15–17), but we lose v0 as soon as errobs exceeds 5% (Figures 16–18).
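The verdicts in the figures ("rebuilt", "begins to move away", "far from") rest on the L2 distance between the rebuilt and exact states, such as the reported ‖v0−v0exact‖2 = 0.1938. A minimal helper, with names of our own choosing and synthetic data (not the paper's vectors), makes this metric explicit:

```python
import numpy as np

def reconstruction_error(u_rebuilt, u_exact):
    """Absolute and relative L2 distances between a rebuilt
    initial state and the exact one."""
    abs_err = np.linalg.norm(u_rebuilt - u_exact)
    rel_err = abs_err / np.linalg.norm(u_exact)
    return abs_err, rel_err

# Illustrative check on synthetic data.
v0_exact = np.cos(np.linspace(0.0, 1.0, 50))
v0_rebuilt = v0_exact + 0.01  # pretend reconstruction with a small bias
abs_err, rel_err = reconstruction_error(v0_rebuilt, v0_exact)
```

Reporting the relative error alongside the absolute one lets reconstructions of states with different magnitudes be compared on the same scale.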

6. Conclusion

In this paper we have presented a new method based on double regularization, applied to the determination of an initial state of a degenerate hyperbolic problem from final observations. This approach yields a sequence of weak solutions of degenerate linear viscoelastic problems. First, we proved the existence and uniqueness of each term of this sequence, and that the sequence converges to the weak solution of the initial problem. Second, we proved that the behaviour of the solution changes continuously with the initial conditions. In the numerical part, we studied the noise resistance of the proposed method and showed that it is effective in reducing the execution time.

Disclosure statement

No potential conflict of interest was reported by the authors.

References

  • Frisch U , Matarrese S , Mohayaee R , Sobolevski A . A reconstruction of the initial conditions of the universe by optimal mass transportation. Nature. 2002;417(6886):260–262.
  • Kuchment P , Kunyansky L . Mathematics of thermoacoustic tomography. Eur J Appl Math. 2008;19:191–224.
  • Kalnay E . Atmospheric modeling, data assimilation and predictability. 2nd ed. New York (NY): Cambridge University Press; 2003.
  • Gouveia WP , Scales JA . Bayesian seismic waveform inversion: parameter estimation and uncertainty analysis. J Geophys Res. 1998;103:2759–2779.
  • Rodgers CD . Inverse methods for atmospheric sounding. London: World Scientific Press; 2000.
  • Beauchard K , Zuazua E . Some controllability results for the 2D Kolmogorov equation. Ann Inst H Poincaré Anal Non Linéaire. 2009;26(5):1793–1815. DOI:10.1016/j.anihpc.2008.12.005. MR2566710 (2011b:93007).
  • Buchot J-M , Raymond J-P . A linearized model for boundary layer equations. In: Optimal control of complex structures (Oberwolfach, 2000). International series of numerical mathematics. vol. 139. Basel: Birkhäuser; 2002. p. 31–42. MR1901628 (2003g:35013).
  • Emamirad H , Goldstein GR , Goldstein JA . Chaotic solution for the Black-Scholes equation. Proc Am Math Soc. 2012;140(6):2043–2052. DOI:10.1090/S0002-9939-2011-11069-4. MR2888192.
  • Fleming WH , Viot M . Some measure-valued Markov processes in population genetics theory. Indiana Univ Math J. 1979;28(5):817–843. DOI:10.1512/iumj.1979.28.28058. MR542340 (81a:60059).
  • Shimakura N . Partial differential operators of elliptic type, Translations of mathematical monographs. vol. 99. Providence (RI): American Mathematical Society; 1992. Translated and revised from the 1978 Japanese original by the author. MR1168472 (93h:35002).
  • Fragnelli G , Mugnai D . Carleman estimates for singular parabolic equations with interior degeneracy and non smooth coefficients. Adv Nonlinear Anal. 2016. DOI:10.1515/anona-2015-0163.
  • Atifi K , Essoufi E-H . Data assimilation and null controllability of degenerate/singular parabolic problems. Electron J Differ Equ. 2017;2017(135):1–17. ISSN: 1072-6691. Available from: http://ejde.math.txstate.edu or http://ejde.math.unt.edu
  • Atifi K , Essoufi E-H , Khouiti B , et al . Identifying initial condition in degenerate parabolic equation with singular potential. Int J Differ Equ. 2017;17. DOI:10.1155/2017/1467049. Article ID 1467049.
  • Bonnet M . Problèmes inverses. Master recherche, École Centrale de Paris, Mention Matière, Structures, Fluides, Rayonnement, Spécialité Dynamique des Structures et Systèmes Couplés; October 2008.
  • Flemming J . Generalized Tikhonov regularization: basic theory and comprehensive results on convergence rates [dissertation]. Chemnitz: Fakultät für Mathematik, Technische Universität Chemnitz; October 2011.
  • Alabau-Boussouira F , Cannarsa P , Fragnelli G . Carleman estimates for degenerate parabolic operators with applications to null controllability. J Evol Equ. 2006;6:161–204.
  • Ait Ben Hassi EM , Ammar-Khodja F , Hajjaj A , et al . Carleman estimates and null controllability of degenerate parabolic systems. J Evol Equ Control Theory. 2013;2:441–459.
  • Cannarsa P , Fragnelli G . Null controllability of semilinear degenerate parabolic equations in bounded domains. Electron J Differ Equ. 2006;136:1–20.
  • Zhang M , Gao H . Null controllability of some degenerate wave equations. J Syst Sci Complex. 2016. DOI:10.1007/s11424-016-5281-3.
  • Delbos F . Problèmes d'optimisation non linéaire avec contraintes en tomographie de réflexion 3D. Thèse. 2004.
  • Hansen PC . Analysis of discrete ill-posed problems by means of the L-curve. SIAM Rev. 1992;34:561–580.
