Research Article

Unknown source identification problem for space-time fractional diffusion equation: optimal error bound analysis and regularization method

Pages 2040-2084 | Received 13 Oct 2020, Accepted 24 Feb 2021, Published online: 19 Mar 2021

Abstract

In this paper, the problem of identifying an unknown source for the space-time fractional diffusion equation is studied. The time fractional derivative used in this equation is a relatively new one, namely the Caputo-Fabrizio fractional derivative. We show that this identification problem is ill-posed. Under an a priori bound assumption, we derive the optimal error bound for the problem under the source condition. Moreover, we use a modified quasi-boundary regularization method and the Landweber iterative regularization method to solve this ill-posed problem. Under both a priori and a posteriori regularization parameter selection rules, the corresponding convergence error estimates of the two regularization methods are obtained. Compared with the modified quasi-boundary regularization method, the convergence error estimate of the Landweber iterative regularization method is order-optimal. Finally, the advantages, stability and effectiveness of the two regularization methods are illustrated by examples with different properties.

2010 Mathematics Subject Classifications:

1. Introduction

In the past few decades, the fractional diffusion equation has attracted much attention from many scholars. Different fractional derivatives are used in different fractional diffusion equations; among the many kinds are the Caputo fractional derivative, the Riesz fractional derivative, the Grünwald-Letnikov fractional derivative and the Riemann-Liouville fractional derivative. In order to overcome the singular kernel of the classical fractional derivatives, Caputo and Fabrizio introduced in 2015 a new fractional derivative with a smooth, non-singular kernel, called the fractional Caputo-Fabrizio derivative (FCFD) [Citation1]. It can describe the inhomogeneity of materials and fluctuations on different scales. There are many research results on the direct problem for equations with the FCFD. Losada and Nieto studied some properties of the FCFD [Citation2]. Several well-posedness questions for differential equations with the FCFD, such as the Cauchy problem and the existence and uniqueness of solutions, have been studied. In [Citation3], the authors studied the Cauchy problem for the space-time fractional diffusion equation in which the time fractional derivative is the Caputo-Fabrizio fractional derivative. First, the expression of the solution of the linear diffusion equation was obtained by the Laplace transform, which proves the uniqueness of the solution. Second, under a global Lipschitz assumption, the regularity of weak solutions in the linear and nonlinear cases was studied, and the existence of local mild solutions was established under a local Lipschitz assumption. In [Citation4], Xiangcheng Zheng and Hong Wang studied a class of nonlinear variable-order FCFD ordinary differential equations and proved that the problem is well-posed.
The method was further extended to prove well-posedness results for linear partial differential equations. In [Citation5], Al-Salti considered a class of boundary value problems for time fractional heat conduction equations with the FCFD. The existence and uniqueness of the solution were proved by the method of separation of variables together with results for initial value problems of fractional differential equations; the Caputo-Fabrizio fractional heat conduction equation was transformed into a Volterra integral equation to be solved, and different solutions were given for different parameters. As a differential operator, the FCFD has a wide range of applications in many fields, such as dynamical systems, biomedicine, electromagnetics and so on. In [Citation6], Baleanu proposed a new FCFD model with an exponential kernel; the uniqueness of the solution was proved and the convergence was analysed, and numerical experiments showed that the new fractional order model performs better than the integer order model with the ordinary time derivative. In [Citation7], Al-khedhairi proposed a financial model with a non-smooth FCFD; the existence and uniqueness of the solution of the model were proved, the local stability was analysed and the key dynamic properties of the model were studied. In [Citation8], the authors considered linear fractional differential equations of the form \[{}^{CF}D_{a,t}^{\alpha}u(t)-\lambda u(t)=f(t),\quad t\ge a,\] where $1<\alpha<2$, $a\in(-\infty,t)$ and \[{}^{CF}D_{a,t}^{\alpha}\nu(t)=\frac{1}{2-\alpha}\int_a^t \exp\left(-\frac{\alpha-1}{2-\alpha}(t-s)\right)\frac{\partial^2\nu(s)}{\partial s^2}\,ds,\] and studied the explicit form of the general solution by a fixed point argument. On this basis, the uniqueness of the solution of the initial value problem for nonlinear differential equations was studied. In [Citation9,Citation10], numerical solutions and algorithms for differential equations with the FCFD were the main subject of study.
In [Citation9], the authors introduced a second-order scheme for the probability distribution function of anomalous particle motion and, based on a continuous time random walk model, derived the space fractional diffusion equation with time FCFD \[{}_{0}^{CF}D_t^{\gamma}u(x,y,t)=\frac{\partial^{\alpha}u(x,y,t)}{\partial|x|^{\alpha}}+\frac{\partial^{\beta}u(x,y,t)}{\partial|y|^{\beta}}+f(x,y,t).\] It was shown that the scheme is unconditionally stable; through theoretical proof and numerical verification, the stability of the scheme was established and the optimal estimate was obtained. In [Citation10], based on the FCFD, the authors introduced a Crank-Nicolson difference scheme for solving the fractional Cattaneo equation \[\frac{\partial u(x,t)}{\partial t}+\frac{\partial^{\alpha}u(x,t)}{\partial t^{\alpha}}=\frac{\partial^{2}u(x,t)}{\partial x^{2}}+f(x,t),\] where $(x,t)\in[0,L]\times[0,T]$, $1<\alpha<2$, $f\in C[0,T]$. On a uniform mesh, an a priori estimate of the discrete $L^{\infty}(L^{2})$ error with optimal convergence order $O(\tau^{2}+h^{2})$ was established. Numerical experiments verified the applicability and accuracy of the scheme and support the theoretical analysis.

In recent years, the Caputo fractional derivative and the Riemann-Liouville fractional derivative have been the most common fractional derivatives in the literature on inverse problems for fractional differential equations. Moreover, most fractional differential equations studied there are fractional only in time, not in space. To our knowledge, few papers have so far discussed inverse problems for the Caputo-Fabrizio fractional derivative, and no work has used a regularization method to deal with ill-posed problems for differential equations containing the Caputo-Fabrizio fractional derivative.

Nowadays, dealing with ill-posed problems by regularization methods is a hot topic in the field of inverse problems, and many regularization methods are available. In the early days, most scholars used the Tikhonov regularization method [Citation11,Citation12], one of the oldest regularization methods, to deal with inverse problems. Later, on this basis, some scholars studied the modified Tikhonov regularization method [Citation13], the fractional Tikhonov regularization method [Citation14,Citation15] and the simplified Tikhonov regularization method [Citation16]. For inverse problems on bounded domains, the regularization methods include the quasi-boundary regularization method [Citation17–19], the truncation regularization method [Citation20,Citation21], a modified quasi-boundary regularization method [Citation22], a mollification regularization method [Citation23], the quasi-reversibility regularization method [Citation24,Citation25], the Landweber iterative regularization method [Citation26–28], etc. For inverse problems on unbounded domains, a common choice is the Fourier truncation regularization method [Citation29–32]. In [Citation33], Liu and Feng considered a backward problem for a space fractional diffusion equation and constructed a regularization method based on a modified 'kernel' idea, namely the modified kernel method. The modified kernel method can also deal with other ill-posed problems [Citation34–37].

In this paper, we first prove the uniqueness of the solution of problem (Equation1) and illustrate the ill-posedness of the inverse problem (Equation1); in other words, the solution does not depend continuously on the measured data. Secondly, we use the modified quasi-boundary regularization method to solve the ill-posed problem (Equation1), and under a priori and a posteriori regularization parameter selection rules we give the convergence error estimates between the exact solution and the regularized solution. However, the convergence error estimates obtained by the modified quasi-boundary regularization method suffer from a saturation phenomenon. In addition, we propose a second regularization method, the Landweber iterative regularization method, to solve this ill-posed problem. We find that the error estimates obtained by the Landweber iterative regularization method do not saturate, and the convergence error estimates under the a priori and a posteriori regularization parameter choice rules are order-optimal.

In this paper, we consider the following problem: \[(1)\quad \begin{cases} {}^{CF}D_{0,t}^{\alpha}u(x,t)+(-L)^{s}u(x,t)=f(x)q(t), & \text{in }\Omega\times(0,T),\\ u(x,t)=0, & \text{on }\partial\Omega\times(0,T),\\ u(x,0)=\varphi(x), & \text{in }\Omega,\\ u(x,T)=g(x), & x\in\Omega, \end{cases}\] where $0<\alpha<1$, $0<s\le 1$, and $\Omega$ is a bounded domain in $\mathbb{R}^{d}$ ($d=1,2,3$) with sufficiently smooth boundary $\partial\Omega$. Here ${}^{CF}D_{0,t}^{\alpha}$ is the Caputo-Fabrizio fractional derivative operator of order $\alpha$, defined as follows: assume $a>0$, $\nu\in H^{1}(0,a)$ and $\alpha\in(0,1)$; for a function $\nu$ with a first derivative, its Caputo-Fabrizio fractional derivative is [Citation5] \[{}^{CF}D_{0,t}^{\alpha}\nu(t)=\frac{1}{1-\alpha}\int_0^{t}\exp\left(-\frac{\alpha}{1-\alpha}(t-z)\right)\frac{\partial\nu(z)}{\partial z}\,dz,\quad t\ge 0.\] In problem (Equation1), when $f(x)$ is known, the problem is a well-posed direct problem. If $f(x)$ is unknown, the additional condition $u(x,T)=g(x)$ can be used to recover the space-dependent source term $f(x)$. In practice, however, $g(x)$ can only be obtained by measurement, so there will be errors between the measured data and the exact data.
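To make the definition above concrete, the Caputo-Fabrizio integral can be approximated by simple quadrature. The sketch below is our own illustration (the function names are ours, not from the paper); it checks a trapezoidal approximation against the closed form ${}^{CF}D_{0,t}^{\alpha}t=\frac{1}{\alpha}\left(1-e^{-\frac{\alpha t}{1-\alpha}}\right)$, which follows directly from the definition when $\nu(t)=t$:

```python
import math

def cf_derivative(nu_prime, t, alpha, n=20000):
    # Trapezoidal quadrature of the Caputo-Fabrizio integral
    # (1/(1-alpha)) * int_0^t exp(-alpha/(1-alpha)*(t-z)) * nu'(z) dz
    h = t / n
    total = 0.0
    for k in range(n + 1):
        z = k * h
        w = 0.5 if k in (0, n) else 1.0
        total += w * math.exp(-alpha / (1 - alpha) * (t - z)) * nu_prime(z)
    return h * total / (1 - alpha)

# sanity check against the closed form for nu(t) = t, nu'(t) = 1
alpha, t = 0.5, 1.0
approx = cf_derivative(lambda z: 1.0, t, alpha)
exact = (1 - math.exp(-alpha * t / (1 - alpha))) / alpha
```

Note how the exponential kernel is bounded and smooth at $z=t$, in contrast with the singular kernel $(t-z)^{-\alpha}$ of the Caputo derivative; this is what makes plain quadrature adequate here.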

Assumption 1.1

We assume that the measured data function $g^{\delta}(x)$ and the exact data function $g(x)$ satisfy \[(2)\quad \|g^{\delta}(\cdot)-g(\cdot)\|\le\delta,\] where the constant $\delta>0$ represents the noise level and $\|\cdot\|$ denotes the $L^{2}(\Omega)$ norm.

Definition 1.1

Without loss of generality, let $0<\lambda_1\le\lambda_2\le\cdots\le\lambda_n\le\cdots$, with $\lim_{n\to\infty}\lambda_n=+\infty$, be a family of eigenvalues whose eigenfunctions $\{X_n\}_{n=1}^{\infty}$ constitute an orthonormal basis of $L^{2}(\Omega)$.

Denote the eigenvalues of the fractional power operator $(-L)^{s}$ by $\lambda_n^{s}$ and the corresponding eigenfunctions by $X_n(x)\in H^{2}(\Omega)\cap H_0^{1}(\Omega)$; then \[(-L)^{s}X_n(x)=\lambda_n^{s}X_n(x)\ \text{in }\Omega,\quad 0<s\le 1,\qquad X_n(x)=0\ \text{on }\partial\Omega.\] According to the result in [Citation3], the solution of problem (Equation1) is \[(3)\quad u(x,t)=\sum_{n=1}^{\infty}\left[Q_n(t)\varphi_n+f_n\int_0^{t}q(z)Q_n(t-z)\,dz\right]X_n(x),\] where \[Q_n(t)=\frac{1}{1+\lambda_n^{s}(1-\alpha)}\exp\left(-\frac{\alpha\lambda_n^{s}t}{1+\lambda_n^{s}(1-\alpha)}\right),\] and $\varphi_n=(\varphi(x),X_n(x))$ and $f_n=(f(x),X_n(x))$ are the Fourier coefficients. Applying the additional condition $u(x,T)=g(x)$ and combining with formula (Equation3), we get \[(4)\quad g(x)=\sum_{n=1}^{\infty}\left[Q_n(T)\varphi_n+f_n\int_0^{T}q(z)Q_n(T-z)\,dz\right]X_n(x).\] Moreover, we have \[(5)\quad g_n=Q_n(T)\varphi_n+f_n\int_0^{T}q(z)Q_n(T-z)\,dz,\] where $g_n=(g(x),X_n(x))$ is the Fourier coefficient.

Let $h(x)=g(x)-\sum_{n=1}^{\infty}Q_n(T)\varphi_nX_n(x)$, so that $h_n=g_n-Q_n(T)\varphi_n$. Therefore, (Equation4) and (Equation5) can be rewritten as \[(6)\quad h(x)=\sum_{n=1}^{\infty}f_n\int_0^{T}q(z)Q_n(T-z)\,dz\,X_n(x),\] \[(7)\quad h_n=f_n\int_0^{T}q(z)Q_n(T-z)\,dz,\] where $h_n=(h(x),X_n(x))$ is the Fourier coefficient.
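The relation (Equation7) makes the forward map completely explicit mode by mode. In the 1-D case $\Omega=(0,\pi)$ with $\lambda_n=n^{2}$ and $q\equiv 1$ (our illustrative assumptions, not fixed by the paper), the multiplier $\int_0^{T}q(z)Q_n(T-z)\,dz$ has the closed form $(1-e^{-cT})/(\alpha\lambda_n^{s})$ with $c=\alpha\lambda_n^{s}/(1+\lambda_n^{s}(1-\alpha))$, so both the forward map and its formal inversion are one line each:

```python
import math

def k_n(lam_n, alpha, s, T):
    # int_0^T Q_n(T-z) dz for q(z) = 1 (closed form), where
    # Q_n(t) = exp(-alpha*lam^s*t/(1+lam^s*(1-alpha))) / (1+lam^s*(1-alpha))
    ls = lam_n ** s
    c = alpha * ls / (1 + ls * (1 - alpha))
    return (1 - math.exp(-c * T)) / (alpha * ls)

alpha, s, T = 0.5, 1.0, 1.0
lams = [float(n ** 2) for n in range(1, 6)]   # assumed 1-D Laplacian eigenvalues
f = [1.0 / n for n in range(1, 6)]            # chosen source coefficients f_n
h = [fn * k_n(l, alpha, s, T) for fn, l in zip(f, lams)]       # forward map (7)
f_rec = [hn / k_n(l, alpha, s, T) for hn, l in zip(h, lams)]   # formal inversion
```

With exact data the inversion is perfect; the trouble, discussed below, is that $k_n$ decays like $\lambda_n^{-s}$, so dividing noisy high modes by it is unstable.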

Consequently, we obtain the exact solution of the source term as \[(8)\quad f(x)=\sum_{n=1}^{\infty}\frac{h_n}{\int_0^{T}q(z)Q_n(T-z)\,dz}X_n(x).\] Since $h_n=g_n-Q_n(T)\varphi_n$ and $h_n^{\delta}=g_n^{\delta}-Q_n(T)\varphi_n$, we have $h_n^{\delta}-h_n=g_n^{\delta}-g_n$. Therefore, according to (Equation2), \[(9)\quad \|h^{\delta}(\cdot)-h(\cdot)\|\le\delta.\] The organizational structure of this paper is as follows. In Section 2, we analyse the uniqueness of the solution and the ill-posedness of problem (Equation1); the conditional stability result is also given. In Section 3, we first give some preliminary results and then use them to obtain the optimal error bound for problem (Equation1). In Section 4, we use the modified quasi-boundary regularization method to solve the inverse problem (Equation1) and obtain the convergence error estimates between the regularized solution and the exact solution under a priori and a posteriori regularization parameter selection rules. In Section 5, we present a second regularization method, the Landweber iterative regularization method, to deal with this ill-posed problem. In Section 6, we compare the error estimates obtained by the two regularization methods for the inverse problem (Equation1). In Section 7, three numerical examples with different properties are given. In Section 8, we give a brief summary of the whole paper.

2. Conditional stability results, uniqueness and ill-posed analysis for inverse problem (1)

In this section, we mainly discuss the uniqueness of the solution, the ill-posedness and the conditional stability of the unknown source identification problem (Equation1).

Definition 2.1

For any $\gamma>0$ and $0<s\le 1$, define \[(10)\quad D(((-L)^{s})^{\gamma})=\left\{\phi\in L^{2}(\Omega)\ \Big|\ \sum_{n=1}^{\infty}(\lambda_n^{s})^{2\gamma}|(\phi,X_n)|^{2}<\infty\right\},\] where $(\cdot,\cdot)$ is the inner product in $L^{2}(\Omega)$. Then $D(((-L)^{s})^{\gamma})$ is a Hilbert space with the norm \[(11)\quad \|\phi\|_{D(((-L)^{s})^{\gamma})}:=\left(\sum_{n=1}^{\infty}(\lambda_n^{s})^{2\gamma}|(\phi,X_n)|^{2}\right)^{\frac{1}{2}}.\]

The lemmas and theorems given below provide the conditional stability result and some related conclusions, which help us analyse the ill-posedness of the inverse problem (Equation1). They will be used throughout the paper.

Lemma 2.1

For any $0<\alpha<1$ and $0<s\le 1$, we have \[(12)\quad (1+\lambda_n^{s}(1-\alpha))\exp\left(\frac{\alpha\lambda_n^{s}T}{1+\lambda_n^{s}(1-\alpha)}\right)\le C_0\exp\left(\frac{\alpha T}{1-\alpha}\right)\lambda_n^{s},\] where $C_0=\max\{\overline{C},\underline{C}\}$.

Proof.

Since \[\frac{\alpha\lambda_n^{s}T}{1+\lambda_n^{s}(1-\alpha)}<\frac{\alpha\lambda_n^{s}T}{(1-\alpha)\lambda_n^{s}}=\frac{\alpha T}{1-\alpha}\] and $\exp(z)$ is an increasing function, we get \[(13)\quad \exp\left(\frac{\alpha\lambda_n^{s}T}{1+\lambda_n^{s}(1-\alpha)}\right)\le\exp\left(\frac{\alpha T}{1-\alpha}\right).\] Combining (Equation13) and $0<\alpha<1$, we obtain \[(1+\lambda_n^{s}(1-\alpha))\exp\left(\frac{\alpha\lambda_n^{s}T}{1+\lambda_n^{s}(1-\alpha)}\right)\le(1+\lambda_n^{s})\exp\left(\frac{\alpha T}{1-\alpha}\right).\] If $0<\lambda_n^{s}<1$, we have \[(1+\lambda_n^{s})\exp\left(\frac{\alpha T}{1-\alpha}\right)\le 2\exp\left(\frac{\alpha T}{1-\alpha}\right)=\frac{2}{\lambda_n^{s}}\exp\left(\frac{\alpha T}{1-\alpha}\right)\lambda_n^{s}\le\frac{2}{\lambda_1^{s}}\exp\left(\frac{\alpha T}{1-\alpha}\right)\lambda_n^{s}:=\overline{C}\exp\left(\frac{\alpha T}{1-\alpha}\right)\lambda_n^{s},\] where $\overline{C}=\frac{2}{\lambda_1^{s}}$.

If $\lambda_n^{s}\ge 1$, we have \[(1+\lambda_n^{s}(1-\alpha))\exp\left(\frac{\alpha\lambda_n^{s}T}{1+\lambda_n^{s}(1-\alpha)}\right)\le 2\exp\left(\frac{\alpha T}{1-\alpha}\right)\lambda_n^{s}:=\underline{C}\exp\left(\frac{\alpha T}{1-\alpha}\right)\lambda_n^{s},\] where $\underline{C}=2$.

Therefore, we can get \[(1+\lambda_n^{s}(1-\alpha))\exp\left(\frac{\alpha\lambda_n^{s}T}{1+\lambda_n^{s}(1-\alpha)}\right)\le C_0\exp\left(\frac{\alpha T}{1-\alpha}\right)\lambda_n^{s},\] where $C_0=\max\{\overline{C},\underline{C}\}$.

The proof of Lemma 2.1 is completed.
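The inequality of Lemma 2.1 is easy to sanity-check numerically. The sketch below is our own check, with $\lambda_n=n^{2}$ and $\lambda_1=1$ as an assumed example; it evaluates both sides of (Equation12) over many eigenvalues:

```python
import math

def lhs(lam, alpha, s, T):
    # (1 + lam^s*(1-alpha)) * exp(alpha*lam^s*T / (1 + lam^s*(1-alpha)))
    ls = lam ** s
    return (1 + ls * (1 - alpha)) * math.exp(alpha * ls * T / (1 + ls * (1 - alpha)))

def rhs(lam, lam1, alpha, s, T):
    # C0 * exp(alpha*T/(1-alpha)) * lam^s  with  C0 = max(2/lam1^s, 2)
    ls = lam ** s
    C0 = max(2.0 / lam1 ** s, 2.0)
    return C0 * math.exp(alpha * T / (1 - alpha)) * ls

alpha, s, T, lam1 = 0.7, 0.8, 1.0, 1.0
checks = [lhs(float(n ** 2), alpha, s, T) <= rhs(float(n ** 2), lam1, alpha, s, T)
          for n in range(1, 200)]
```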

Theorem 2.1

Let $q(t)\in C[0,T]$ satisfy $q(t)\ge q_0>0$ for all $t\in[0,T]$. Then the solution $u(x,t)$ and the source $f(x)$ of the inverse problem (Equation1) are unique.

Proof.

Since $z\le T$, we have $\exp\left(\frac{\alpha\lambda_n^{s}(z-T)}{1+\lambda_n^{s}(1-\alpha)}\right)>0$. According to Lemma 2.1, \[(14)\quad \int_0^{T}q(z)Q_n(T-z)\,dz=\int_0^{T}q(z)\frac{1}{1+\lambda_n^{s}(1-\alpha)}\exp\left(-\frac{\alpha\lambda_n^{s}(T-z)}{1+\lambda_n^{s}(1-\alpha)}\right)dz\ge q_0\int_0^{T}\frac{1}{1+\lambda_n^{s}(1-\alpha)}\exp\left(-\frac{\alpha\lambda_n^{s}(T-z)}{1+\lambda_n^{s}(1-\alpha)}\right)dz\ge\frac{q_0}{C_0\lambda_n^{s}}\int_0^{T}\exp\left(-\frac{\alpha(T-z)}{1-\alpha}\right)dz=\frac{q_0(1-\alpha)}{C_0\alpha\lambda_n^{s}}\left(1-\exp\left(-\frac{\alpha T}{1-\alpha}\right)\right).\] Since $q_0>0$, $0<\alpha<1$, $\lambda_n>0$ and $\exp\left(-\frac{\alpha T}{1-\alpha}\right)<1$, we know \[(15)\quad \int_0^{T}q(z)Q_n(T-z)\,dz\ge\frac{q_0(1-\alpha)}{C_0\alpha\lambda_n^{s}}\left(1-\exp\left(-\frac{\alpha T}{1-\alpha}\right)\right):=C_1\frac{1}{\lambda_n^{s}}>0,\] where $C_1=\frac{q_0(1-\alpha)}{C_0\alpha}\left(1-\exp\left(-\frac{\alpha T}{1-\alpha}\right)\right)$.

From (Equation7), if $h(x)=0$ (that is, $g(x)=\varphi(x)=0$), then $f_n=0$ for all $n$, so $f(x)=0$. Further, according to (Equation3), $u(x,t)=0$.

The proof of Theorem 2.1 is completed.
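The lower bound (Equation15) shows that $\int_0^{T}q(z)Q_n(T-z)\,dz$ decays like $\lambda_n^{-s}$, so the multiplier $1/\int_0^{T}q(z)Q_n(T-z)\,dz$ applied to the data in (Equation8) grows without bound. The following sketch (our own illustration with $q\equiv 1$ and assumed eigenvalues $\lambda_n=n^{2}$) quantifies how a fixed data error $\delta$ in mode $n$ is amplified:

```python
import math

def amplification(lam_n, alpha, s, T):
    # 1 / int_0^T Q_n(T-z) dz  for q = 1: the factor multiplying a data
    # error in mode n when inverting via (Equation 8)
    ls = lam_n ** s
    c = alpha * ls / (1 + ls * (1 - alpha))
    return alpha * ls / (1 - math.exp(-c * T))

alpha, s, T, delta = 0.5, 1.0, 1.0, 1e-3
# a data perturbation of size delta in mode n produces a source error of
# roughly delta * amplification(n^2, ...) -- unbounded as n grows
errors = [delta * amplification(float(n ** 2), alpha, s, T) for n in (1, 10, 100)]
```

Even a $10^{-3}$ perturbation in a single high mode already produces an $O(1)$ error in the recovered source, which is exactly the instability the regularization methods below are designed to suppress.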

Since $\lambda_n\to\infty$ as $n\to\infty$, the factor $\frac{1}{\int_0^{T}q(z)Q_n(T-z)\,dz}\to\infty$. It then follows from formula (Equation8) that a small perturbation of $g(x)$ can cause an arbitrarily large change in $f(x)$, i.e. the problem is ill-posed. Next, we give the conditional stability result for the source term $f(x)$. In order to discuss the convergence of errors, we make the following assumption:

Assumption 2.1

It is assumed that $f(x)\in D(((-L)^{s})^{\frac{p}{2}})$ satisfies the following a priori bound condition: \[(16)\quad \|f(\cdot)\|_{D(((-L)^{s})^{\frac{p}{2}})}=\left(\sum_{n=1}^{\infty}(\lambda_n^{s})^{p}|(f,X_n)|^{2}\right)^{\frac{1}{2}}\le E,\] where $0<s\le 1$ and both $E$ and $p$ are positive constants.

Theorem 2.2

Suppose $f(x)\in D(((-L)^{s})^{\frac{p}{2}})$ satisfies the a priori bound condition (Equation16). Then \[(17)\quad \|f(\cdot)\|\le C_2E^{\frac{2}{p+2}}\|h(\cdot)\|^{\frac{p}{p+2}},\quad p>0,\] where $C_2=C_1^{-\frac{p}{p+2}}$.

Proof.

According to the formula for the source term $f(x)$ and the Hölder inequality, we get \[(18)\quad \|f(\cdot)\|^{2}=\sum_{n=1}^{\infty}f_n^{2}=\sum_{n=1}^{\infty}\left(\int_0^{T}q(z)Q_n(T-z)\,dz\right)^{-2}h_n^{2}=\sum_{n=1}^{\infty}\left(\frac{h_n^{2}}{\left(\int_0^{T}q(z)Q_n(T-z)\,dz\right)^{p+2}}\right)^{\frac{2}{p+2}}\left(h_n^{2}\right)^{\frac{p}{p+2}}\le\left(\sum_{n=1}^{\infty}\frac{h_n^{2}}{\left(\int_0^{T}q(z)Q_n(T-z)\,dz\right)^{p+2}}\right)^{\frac{2}{p+2}}\left(\sum_{n=1}^{\infty}h_n^{2}\right)^{\frac{p}{p+2}}.\] According to Lemma 2.1, (Equation15) and (Equation16), we obtain \[(19)\quad \sum_{n=1}^{\infty}\frac{h_n^{2}}{\left(\int_0^{T}q(z)Q_n(T-z)\,dz\right)^{p+2}}=\sum_{n=1}^{\infty}\left(\int_0^{T}q(z)Q_n(T-z)\,dz\right)^{-p}\frac{h_n^{2}}{\left(\int_0^{T}q(z)Q_n(T-z)\,dz\right)^{2}}\le C_1^{-p}\sum_{n=1}^{\infty}f_n^{2}(\lambda_n^{s})^{p}\le C_1^{-p}E^{2}.\] Substituting (Equation19) into (Equation18), we have \[\|f(\cdot)\|^{2}\le C_1^{-\frac{2p}{p+2}}E^{\frac{4}{p+2}}\left(\sum_{n=1}^{\infty}h_n^{2}\right)^{\frac{p}{p+2}}.\] Therefore, \[\|f(\cdot)\|\le C_2E^{\frac{2}{p+2}}\|h(\cdot)\|^{\frac{p}{p+2}},\quad p>0,\] where $C_2=C_1^{-\frac{p}{p+2}}$. We have finished the proof of Theorem 2.2.
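The conditional stability estimate can be checked on a synthetic example. The sketch below is our own illustration (assumptions: 1-D eigenvalues $\lambda_n=n^{2}$, $q\equiv 1$ so $q_0=1$, and $C_0=2$ since $\lambda_1=1$); it evaluates both sides of (Equation17) for $p=2$:

```python
import math

def k_n(lam, alpha, s, T):
    # int_0^T Q_n(T-z) dz for q = 1 (closed form)
    ls = lam ** s
    c = alpha * ls / (1 + ls * (1 - alpha))
    return (1 - math.exp(-c * T)) / (alpha * ls)

alpha, s, T, p = 0.5, 1.0, 1.0, 2.0
lams = [float(n ** 2) for n in range(1, 100)]
f = [1.0 / n ** 3 for n in range(1, 100)]
h = [fn * k_n(l, alpha, s, T) for fn, l in zip(f, lams)]

f_norm = math.sqrt(sum(fn ** 2 for fn in f))
h_norm = math.sqrt(sum(hn ** 2 for hn in h))
E = math.sqrt(sum((l ** s) ** p * fn ** 2 for l, fn in zip(lams, f)))

# C_1 and C_2 = C_1^{-p/(p+2)} as in Theorem 2.2, with q_0 = 1, C_0 = 2
q0, C0 = 1.0, 2.0
C1 = q0 * (1 - alpha) / (C0 * alpha) * (1 - math.exp(-alpha * T / (1 - alpha)))
C2 = C1 ** (-p / (p + 2))
bound = C2 * E ** (2 / (p + 2)) * h_norm ** (p / (p + 2))
```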

3. Preliminary results and optimal error bound of the problem (1)

3.1. Preliminaries

Suppose that $X$ and $Y$ are infinite-dimensional Hilbert spaces and that $K:X\to Y$ is a linear injective operator with non-closed range $R(K)$.

Consider the following operator equation [Citation38–42]: \[(20)\quad Kx=y.\] Any operator $R:Y\to X$ can be seen as a particular method for solving the inverse problem (Equation20); the approximate solution of (Equation20) is expressed by $R(y^{\delta})$. Assume that $y^{\delta}\in Y$ is the noisy data obtained by measurement and satisfies \[(21)\quad \|y^{\delta}(\cdot)-y(\cdot)\|\le\delta,\] where $\delta$ is the noise level.

Let $M\subset X$ be a bounded set. Based on assumption (Equation21), the worst case error $\Delta(\delta,R)$ for identifying $x\in M$ from $y^{\delta}$ is defined as follows [Citation39–41,Citation43]: \[(22)\quad \Delta(\delta,R):=\sup\{\|Ry^{\delta}-x\|\ |\ x\in M,\ y^{\delta}\in Y,\ \|Kx-y^{\delta}\|\le\delta\}.\] The optimal error bound (or best possible error bound) is the infimum over all mappings $R:Y\to X$: \[(23)\quad \omega(\delta):=\inf_{R}\Delta(\delta,R).\] Next, let us review some known optimality results.

Let $M=M_{\varphi,E}$ be the set of elements satisfying a source condition, i.e. \[(24)\quad M_{\varphi,E}=\{x\in X\ |\ x=[\varphi(K^{*}K)]^{\frac{1}{2}}v,\ \|v\|\le E\},\] where the operator function $\varphi(K^{*}K)$ is defined via the spectral representation [Citation41,Citation44,Citation45] \[(25)\quad \varphi(K^{*}K)=\int_0^{a}\varphi(\lambda)\,dE_{\lambda}.\] In (Equation25), $K^{*}K=\int_0^{a}\lambda\,dE_{\lambda}$ is the spectral decomposition of the operator $K^{*}K$, $\{E_{\lambda}\}$ is the spectral family of $K^{*}K$, and $a$ is a constant with $\|K^{*}K\|\le a$. Here $\varphi(\lambda)$ is a strictly convex function with $\varphi\in C(\sigma(K^{*}K))$ and $\varphi(0)=0$, and $\sigma(K^{*}K)$ denotes the spectrum of the operator $K^{*}K$.

A method R0 is called [Citation38,Citation46]:

  1. Optimal on the set $M$ if $\Delta(\delta,R_0)=\omega(\delta,E)$.

  2. Order-optimal on the set $M$ if $\Delta(\delta,R_0)\le C\omega(\delta,E)$ with $C\ge 1$.

In order to obtain the optimal error bound (or the best possible error bound) of error Δ(δ,R) in the worst case defined in formula (Equation22), we need to make some assumptions for the function φ. Suppose the function φ in formula (Equation24) satisfies the following assumptions:

Assumption 3.1

([Citation38,Citation41,Citation45]) The function $\varphi(\lambda):(0,a]\to(0,\infty)$ in formula (Equation24) is continuous and has the following properties:

  1. $\lim_{\lambda\to 0}\varphi(\lambda)=0$.

  2. φ is strictly monotonically increasing on (0,a].

  3. $\rho(\lambda)=\lambda\varphi^{-1}(\lambda):(0,\varphi(a)]\to(0,a\varphi(a)]$ is convex.

According to the above assumptions, the following theorem estimates give the general formula for finding the optimal error bound.

Theorem 3.1

[Citation38,Citation41,Citation43,Citation45]

Let $M_{\varphi,E}$ be given by formula (Equation24), let Assumption 3.1 hold, and let $\frac{\delta^{2}}{E^{2}}\in\sigma(K^{*}K\varphi(K^{*}K))$. Then \[(26)\quad \omega(\delta,M_{\varphi,E})=E\sqrt{\rho^{-1}\left(\frac{\delta^{2}}{E^{2}}\right)}.\]

From Theorem 3.1 we can get the best possible error bound. This is the sharpest result, but two difficulties may arise when applying it to a concrete problem. First, the convexity of $\rho$ is hard to verify and is sometimes violated. Second, even for small $\delta$, $\frac{\delta^{2}}{E^{2}}$ may not belong to $\sigma(K^{*}K\varphi(K^{*}K))$. We will use the following two results to deal with these difficulties.

Lemma 3.1

[Citation47]

If ρ is not necessarily convex, we have

  1. $E\sqrt{\rho^{-1}\left(\frac{\delta^{2}}{E^{2}}\right)}\le\omega(\delta,M_{\varphi,E})\le 2E\sqrt{\rho^{-1}\left(\frac{\delta^{2}}{E^{2}}\right)}$ for $\frac{\delta^{2}}{E^{2}}\in\sigma(K^{*}K\varphi(K^{*}K))$.

  2. $\omega(\delta,M_{\varphi,E})\le 2E\sqrt{\rho^{-1}\left(\frac{\delta^{2}}{E^{2}}\right)}$ for $\frac{\delta^{2}}{E^{2}}\notin\sigma(K^{*}K\varphi(K^{*}K))$.

Lemma 3.2

[Citation47]

Let $K^{*}K$ be compact and let $\lambda_1>\lambda_2>\cdots$ be the ordered eigenvalues of $K^{*}K$. If there exists a constant $k>0$ such that $\varphi(\lambda_{i+1})\ge k\varphi(\lambda_i)$ for all $i\in\mathbb{N}$, then \[\omega(\delta,M_{\varphi,E})\ge\sqrt{k}\,E\sqrt{\rho^{-1}\left(\frac{\delta^{2}}{E^{2}}\right)}\quad\text{for }\delta\in(0,\delta_1],\] where $\delta_1=E\sqrt{\lambda_1\varphi(\lambda_1)}$.

3.2. Optimal error bound for problem (1)

In this section, we mainly study the optimal error bound for problem (Equation1). We only discuss identifying the space-dependent source term $f(x)$, where $f(x)\in M_{p,E}$ with \[(27)\quad M_{p,E}=\left\{f\in L^{2}(\Omega)\ \Big|\ \|f\|_{D(((-L)^{s})^{\frac{p}{2}})}\le E,\ p>0\right\},\] where $\|f\|_{D(((-L)^{s})^{\frac{p}{2}})}=\left(\sum_{n=1}^{\infty}(\lambda_n^{s})^{p}(f,X_n)^{2}\right)^{\frac{1}{2}}$ is a norm of Sobolev type.

Problem (Equation1) is transformed into an operator equation with $K:L^{2}(\Omega)\to L^{2}(\Omega)$: \[(28)\quad Kf(x)=h(x).\] From (Equation6) we have \[(29)\quad Kf(x)=\sum_{n=1}^{\infty}\left(\int_0^{T}q(z)\frac{1}{1+\lambda_n^{s}(1-\alpha)}\exp\left(-\frac{\alpha\lambda_n^{s}(T-z)}{1+\lambda_n^{s}(1-\alpha)}\right)dz\right)f_nX_n(x),\] where $f_n=(f(x),X_n(\cdot))$.

So, on the $n$th mode, $K$ acts as multiplication by $\int_0^{T}q(z)\frac{1}{1+\lambda_n^{s}(1-\alpha)}\exp\left(-\frac{\alpha\lambda_n^{s}(T-z)}{1+\lambda_n^{s}(1-\alpha)}\right)dz$. For the convenience of the subsequent discussion, in this section we let $q(z)\equiv 1$, so the eigenvalues of $K:L^{2}(\Omega)\to L^{2}(\Omega)$ are \[(30)\quad \int_0^{T}\frac{1}{1+\lambda_n^{s}(1-\alpha)}\exp\left(-\frac{\alpha\lambda_n^{s}(T-z)}{1+\lambda_n^{s}(1-\alpha)}\right)dz,\] where $\lambda_n$ are the eigenvalues of the operator $-L$.

Since $K$ is self-adjoint, on the $n$th mode \[(31)\quad K^{*}K=KK^{*}=\left(\int_0^{T}\frac{1}{1+\lambda_n^{s}(1-\alpha)}\exp\left(-\frac{\alpha\lambda_n^{s}(T-z)}{1+\lambda_n^{s}(1-\alpha)}\right)dz\right)^{2}.\] Now we rewrite (Equation27) in the form (Equation24) and obtain the following proposition:

Proposition 3.1

Considering the operator equation (Equation28), the set $M_{p,E}$ given in formula (Equation27) is equivalent to the general source set \[(32)\quad M_{p,E}=\left\{f\in L^{2}(\Omega)\ \Big|\ \left\|[\varphi(K^{*}K)]^{-\frac{1}{2}}f\right\|\le E\right\},\] where $\varphi=\varphi(\lambda)$ is given in the parametric form \[(33)\quad \lambda(l)=\left(\int_0^{T}\frac{1}{1+l^{s}(1-\alpha)}\exp\left(-\frac{\alpha l^{s}(T-z)}{1+l^{s}(1-\alpha)}\right)dz\right)^{2},\quad \varphi(l)=(l^{s})^{-p},\quad 1\le l<\infty.\]

Proof.

By the a priori bound condition $\|f(x)\|_{D(((-L)^{s})^{\frac{p}{2}})}\le E$, we get the inequality \[(34)\quad \left\|((-L)^{s})^{\frac{p}{2}}f(x)\right\|\le E.\] From (Equation32) and (Equation34), we obtain the expression of the operator function $\varphi(K^{*}K)$: \[(35)\quad \varphi(K^{*}K)=((-L)^{s})^{-p}.\] Combining with (Equation31), we obtain the function $\varphi(\lambda)$ in (Equation33).

We have finished the proof of Proposition 3.1.

Proposition 3.2

The function φ(λ) defined by (Equation33) is continuous and has the following properties:

  1. $\lim_{\lambda\to 0}\varphi(\lambda)=0$.

  2. φ is a strictly monotonically increasing function.

  3. $\rho(\lambda)=\lambda\varphi^{-1}(\lambda)$ is a strictly monotonically increasing function and is expressed in the parametric form \[(36)\quad \lambda(l)=(l^{s})^{-p},\quad \rho(l)=(l^{s})^{-p}\left(\int_0^{T}\frac{1}{1+l^{s}(1-\alpha)}\exp\left(-\frac{\alpha l^{s}(T-z)}{1+l^{s}(1-\alpha)}\right)dz\right)^{2},\quad 1\le l<\infty.\]

  4. $\rho^{-1}(\lambda)$ is a strictly monotonically increasing function and is expressed in the parametric form \[(37)\quad \lambda(l)=(l^{s})^{-p}\left(\int_0^{T}\frac{1}{1+l^{s}(1-\alpha)}\exp\left(-\frac{\alpha l^{s}(T-z)}{1+l^{s}(1-\alpha)}\right)dz\right)^{2},\quad \rho^{-1}(l)=(l^{s})^{-p},\quad 1\le l<\infty.\]

  5. The inverse function $\rho^{-1}$ satisfies \[(38)\quad \rho^{-1}(\lambda)\sim(C_{\alpha,T})^{\frac{2p}{p+2}}\lambda^{\frac{p}{p+2}}\quad\text{as }\lambda\to 0,\] where $C_{\alpha,T}=\frac{\alpha}{1-\exp\left(-\frac{\alpha T}{1-\alpha}\right)}$.

Proof.

Conclusions (i), (ii), (iii) and (iv) are obvious, and we only prove conclusion (v). Formula (Equation38) follows from $\lim_{\lambda\to 0}F(\lambda)=1$, where \[F(\lambda)=\frac{\rho^{-1}(\lambda)}{(C_{\alpha,T})^{\frac{2p}{p+2}}\lambda^{\frac{p}{p+2}}}.\] Using the parametric form (Equation37), noting that $\lambda\to 0$ corresponds to $l\to\infty$ and that \[\int_0^{T}\frac{1}{1+l^{s}(1-\alpha)}\exp\left(-\frac{\alpha l^{s}(T-z)}{1+l^{s}(1-\alpha)}\right)dz=\frac{1}{\alpha l^{s}}\left(1-\exp\left(-\frac{\alpha l^{s}T}{1+l^{s}(1-\alpha)}\right)\right),\] we compute \[\lim_{\lambda\to 0}F(\lambda)=\lim_{l\to\infty}\frac{(l^{s})^{-p}}{(C_{\alpha,T})^{\frac{2p}{p+2}}\left[(l^{s})^{-p}\left(\int_0^{T}\frac{1}{1+l^{s}(1-\alpha)}\exp\left(-\frac{\alpha l^{s}(T-z)}{1+l^{s}(1-\alpha)}\right)dz\right)^{2}\right]^{\frac{p}{p+2}}}=\lim_{l\to\infty}(C_{\alpha,T})^{-\frac{2p}{p+2}}\left[l^{s}\int_0^{T}\frac{1}{1+l^{s}(1-\alpha)}\exp\left(-\frac{\alpha l^{s}(T-z)}{1+l^{s}(1-\alpha)}\right)dz\right]^{-\frac{2p}{p+2}}=\lim_{l\to\infty}(C_{\alpha,T})^{-\frac{2p}{p+2}}\left[\frac{1}{\alpha}\left(1-\exp\left(-\frac{\alpha T}{l^{-s}+1-\alpha}\right)\right)\right]^{-\frac{2p}{p+2}}=(C_{\alpha,T})^{-\frac{2p}{p+2}}\left[\frac{1}{\alpha}\left(1-\exp\left(-\frac{\alpha T}{1-\alpha}\right)\right)\right]^{-\frac{2p}{p+2}}=1.\] We have finished the proof of Proposition 3.2.

Theorem 3.2

Suppose the noise assumption (Equation9) and (Equation27) hold. Then the optimal error bound of the inverse problem (Equation1) is as follows:

  1. If $\frac{\delta^{2}}{E^{2}}\in\sigma(K^{*}K\varphi(K^{*}K))$ and $\delta\to 0$, we obtain \[(39)\quad (C_{\alpha,T})^{\frac{p}{p+2}}\delta^{\frac{p}{p+2}}E^{\frac{2}{p+2}}\le\omega(\delta,M_{p,E})\le 2(C_{\alpha,T})^{\frac{p}{p+2}}\delta^{\frac{p}{p+2}}E^{\frac{2}{p+2}}.\]

  2. If $\frac{\delta^{2}}{E^{2}}\notin\sigma(K^{*}K\varphi(K^{*}K))$ and $\delta\to 0$, we obtain \[(40)\quad \frac{1}{4^{\frac{p}{2}}}(C_{\alpha,T})^{\frac{p}{p+2}}\delta^{\frac{p}{p+2}}E^{\frac{2}{p+2}}\le\omega(\delta,M_{p,E})\le 2(C_{\alpha,T})^{\frac{p}{p+2}}\delta^{\frac{p}{p+2}}E^{\frac{2}{p+2}}.\]

Proof.

From (Equation38), as $\delta\to 0$ we have \[(41)\quad E\sqrt{\rho^{-1}\left(\frac{\delta^{2}}{E^{2}}\right)}\sim(C_{\alpha,T})^{\frac{p}{p+2}}\delta^{\frac{p}{p+2}}E^{\frac{2}{p+2}}.\] Consequently, according to Lemma 3.1, we obtain conclusion (i).

According to (Equation33), with the one-dimensional eigenvalues $\lambda_i=i^{2}$, we have \[(42)\quad \frac{\varphi(\lambda_{i+1})}{\varphi(\lambda_i)}=\left(\frac{i^{2}}{(i+1)^{2}}\right)^{sp}\ge\frac{1}{4^{p}}.\] Taking $k=\left(\frac{1}{4}\right)^{p}$ in Lemma 3.2, we get conclusion (ii). We have finished the proof of Theorem 3.2.

In the next section, we mainly introduce the first regularization method to deal with inverse problem (Equation1), namely the modified quasi-boundary regularization method.

4. A modified quasi-boundary regularization method and convergence estimation

In this section, we first give the regularization solution of a modified quasi-boundary regularization method corresponding to the exact solution f(x) of the inverse problem (Equation1). Secondly, the a priori and a posteriori Hölder-type error convergence estimates are given by using a priori and a posteriori regularization parameter selection rule.

We can transform the ill-posed problem (Equation1) into the following integral equation for analysis: \[(Kf)(x):=\int_{\Omega}k(x,\xi)f(\xi)\,d\xi=h(x),\quad\text{where }k(x,\xi)=\sum_{n=1}^{\infty}\left(\int_0^{T}q(z)Q_n(T-z)\,dz\right)X_n(x)X_n(\xi).\] Obviously the kernel function satisfies $k(x,\xi)=k(\xi,x)$, so $K$ is a self-adjoint operator.

Since $\{X_n(x)\}$ is an orthonormal basis of $L^{2}(\Omega)$, it is easy to see that \[(43)\quad \sigma_n=\int_0^{T}q(z)\frac{1}{1+\lambda_n^{s}(1-\alpha)}\exp\left(-\frac{\alpha\lambda_n^{s}(T-z)}{1+\lambda_n^{s}(1-\alpha)}\right)dz=\int_0^{T}q(z)Q_n(T-z)\,dz,\quad n=1,2,3,\ldots,\] are the singular values of the operator $K$.

The main idea of the standard quasi-boundary regularization method is to add a penalty term to the final data, i.e. to replace it by $u(x,T)+\mu f(x)=g^{\delta}(x)$, from which a regularized solution of the inverse problem can be obtained. In this paper, in order to solve the inverse problem (Equation1), we use a modified quasi-boundary regularization method, i.e. we replace $u(x,T)=g(x)$ with $u_{\mu}^{\delta}(x,T)+\mu(-L)^{s}f_{\mu}^{\delta}(x)=g^{\delta}(x)$. The regularized solution $f_{\mu}^{\delta}(x)$ is obtained from the following system of equations: \[(44)\quad \begin{cases} {}^{CF}D_{0,t}^{\alpha}u_{\mu}^{\delta}(x,t)+(-L)^{s}u_{\mu}^{\delta}(x,t)=f_{\mu}^{\delta}(x)q(t), & \text{in }\Omega\times(0,T),\\ u_{\mu}^{\delta}(x,t)=0, & \text{on }\partial\Omega\times(0,T),\\ u_{\mu}^{\delta}(x,0)=\varphi(x), & \text{in }\Omega,\\ u_{\mu}^{\delta}(x,T)+\mu(-L)^{s}f_{\mu}^{\delta}(x)=g^{\delta}(x), & x\in\Omega, \end{cases}\] where $\mu>0$ is the regularization parameter.

Next, we show that $f_{\mu}^{\delta}(x)$ is a regularized approximation of the source term $f(x)$ in problem (Equation1). By separation of variables, $u_{\mu}^{\delta}(x,t)$ has the form \[(45)\quad u_{\mu}^{\delta}(x,t)=\sum_{n=1}^{\infty}\left[Q_n(t)\varphi_n+(f_{\mu}^{\delta})_n\int_0^{t}q(z)Q_n(t-z)\,dz\right]X_n(x),\] where $(f_{\mu}^{\delta})_n=(f_{\mu}^{\delta}(x),X_n(x))$. According to $u_{\mu}^{\delta}(x,T)+\mu(-L)^{s}f_{\mu}^{\delta}(x)=g^{\delta}(x)$, we obtain \[(46)\quad Q_n(T)\varphi_n+(f_{\mu}^{\delta})_n\int_0^{T}q(z)Q_n(T-z)\,dz+\mu\lambda_n^{s}(f_{\mu}^{\delta})_n=g_n^{\delta},\] where $g_n^{\delta}=(g^{\delta}(x),X_n(x))$.

Thus, using $h_n^{\delta}=g_n^{\delta}-Q_n(T)\varphi_n$ and (Equation46), we have \[(47)\quad (f_{\mu}^{\delta})_n=\frac{h_n^{\delta}}{\int_0^{T}q(z)Q_n(T-z)\,dz+\mu\lambda_n^{s}},\] where $h_n^{\delta}=(h^{\delta}(x),X_n(x))$.

Consequently, we have \[(48)\quad f_{\mu}^{\delta}(x)=\sum_{n=1}^{\infty}\frac{h_n^{\delta}}{\int_0^{T}q(z)Q_n(T-z)\,dz+\mu\lambda_n^{s}}X_n(x).\] The regularized solution with exact data is \[(49)\quad f_{\mu}(x)=\sum_{n=1}^{\infty}\frac{h_n}{\int_0^{T}q(z)Q_n(T-z)\,dz+\mu\lambda_n^{s}}X_n(x).\]
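Mode by mode, (Equation48) is a one-line filter on the data coefficients: the term $\mu\lambda_n^{s}$ in the denominator prevents division by the vanishing multiplier at high modes. A sketch of the filter, under our illustrative assumptions $q\equiv 1$ and 1-D eigenvalues $\lambda_n=n^{2}$ (exact data here, just to show that the regularized coefficients are damped versions of the true ones and recover them as $\mu\to 0$):

```python
import math

def k_n(lam, alpha, s, T):
    # int_0^T Q_n(T-z) dz for q = 1 (closed form)
    ls = lam ** s
    c = alpha * ls / (1 + ls * (1 - alpha))
    return (1 - math.exp(-c * T)) / (alpha * ls)

def quasi_boundary_coeffs(h_data, lams, mu, alpha, s, T):
    # (f_mu)_n = h_n / (k_n + mu * lam_n^s), cf. (Equation 48)
    return [hn / (k_n(l, alpha, s, T) + mu * l ** s)
            for hn, l in zip(h_data, lams)]

alpha, s, T, mu = 0.5, 1.0, 1.0, 1e-2
lams = [float(n ** 2) for n in range(1, 50)]
f_true = [1.0 / n ** 2 for n in range(1, 50)]
h = [fn * k_n(l, alpha, s, T) for fn, l in zip(f_true, lams)]
f_reg = quasi_boundary_coeffs(h, lams, mu, alpha, s, T)
f_reg_small = quasi_boundary_coeffs(h, lams, 1e-8, alpha, s, T)
```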

4.1. The error convergent estimate with an a priori regularization parameter choice rule

Theorem 4.1

Suppose $q(t)\in C[0,T]$ satisfies $q(t)\ge q_0>0$ for $t\in[0,T]$, and assume that the a priori bound condition (Equation16) and the noise assumption (Equation9) hold. Let the exact solution of problem (Equation1) be given by (Equation8) and the regularized solution of the modified quasi-boundary regularization method by (Equation48). Then the following Hölder-type convergence error estimates hold:

  1. If $0<p<4$ and we choose $\mu=\left(\frac{\delta}{E}\right)^{\frac{4}{p+2}}$, we obtain the error estimate \[(50)\quad \|f_{\mu}^{\delta}(\cdot)-f(\cdot)\|\le\left(\frac{1}{2\sqrt{C_1}}+C_3\right)E^{\frac{2}{p+2}}\delta^{\frac{p}{p+2}}.\]

  2. If $p\ge 4$ and we choose $\mu=\left(\frac{\delta}{E}\right)^{\frac{2}{3}}$, we obtain the error estimate \[(51)\quad \|f_{\mu}^{\delta}(\cdot)-f(\cdot)\|\le\left(\frac{1}{2\sqrt{C_1}}+C_4\right)E^{\frac{1}{3}}\delta^{\frac{2}{3}},\]

where $C_3=\frac{p}{4}C_1^{-\frac{p}{4}}\left(\frac{4}{p}-1\right)^{1-\frac{p}{4}}$ and $C_4=C_1^{-1}\lambda_1^{s\left(2-\frac{p}{2}\right)}$.

Proof.

By the triangle inequality, \[(52)\quad \|f_{\mu}^{\delta}(\cdot)-f(\cdot)\|\le\|f_{\mu}^{\delta}(\cdot)-f_{\mu}(\cdot)\|+\|f_{\mu}(\cdot)-f(\cdot)\|.\] We first estimate the first term in inequality (Equation52). According to (Equation48), (Equation49) and (Equation9), \[(53)\quad \|f_{\mu}^{\delta}(\cdot)-f_{\mu}(\cdot)\|^{2}=\sum_{n=1}^{\infty}\left(\frac{h_n^{\delta}-h_n}{\int_0^{T}q(z)Q_n(T-z)\,dz+\mu\lambda_n^{s}}\right)^{2}\le\sup_{n\ge 1}(A_1(n))^{2}\delta^{2},\] where $A_1(n)=\frac{1}{\int_0^{T}q(z)Q_n(T-z)\,dz+\mu\lambda_n^{s}}$.

Using Lemma 2.1 and (Equation15), \[A_1(n)\le\frac{1}{C_1\lambda_n^{-s}+\mu\lambda_n^{s}}\le\frac{1}{2\sqrt{C_1\mu}}.\] Consequently, according to (Equation9), \[(54)\quad \|f_{\mu}^{\delta}(\cdot)-f_{\mu}(\cdot)\|\le\frac{1}{2\sqrt{C_1}}\mu^{-\frac{1}{2}}\delta.\] Next, we estimate the second term in formula (Equation52).

From (Equation49) we can deduce that \[(55)\quad \|f_{\mu}(\cdot)-f(\cdot)\|^{2}=\sum_{n=1}^{\infty}\left(\frac{h_n}{\int_0^{T}q(z)Q_n(T-z)\,dz}\cdot\frac{\mu\lambda_n^{s}}{\int_0^{T}q(z)Q_n(T-z)\,dz+\mu\lambda_n^{s}}\right)^{2}.\] By the a priori bound condition (Equation16) and $h_n=f_n\int_0^{T}q(z)Q_n(T-z)\,dz$, \[\|f_{\mu}(\cdot)-f(\cdot)\|^{2}=\sum_{n=1}^{\infty}f_n^{2}(\lambda_n^{s})^{p}\left(\frac{\mu\lambda_n^{s}}{\int_0^{T}q(z)Q_n(T-z)\,dz+\mu\lambda_n^{s}}\right)^{2}(\lambda_n^{s})^{-p}\le\sup_{n\ge 1}(A_2(n))^{2}E^{2},\] where $A_2(n)=\frac{\mu\lambda_n^{s}}{\int_0^{T}q(z)Q_n(T-z)\,dz+\mu\lambda_n^{s}}(\lambda_n^{s})^{-\frac{p}{2}}$.

Using Lemma 2.1 and (Equation15), we can get \[A_2(n)\le\frac{\mu\lambda_n^{s}}{\mu\lambda_n^{s}+C_1\lambda_n^{-s}}\lambda_n^{-\frac{sp}{2}}=\frac{\mu\lambda_n^{s\left(2-\frac{p}{2}\right)}}{\mu\lambda_n^{2s}+C_1}.\] Let \[H(y)=\frac{\mu y^{2-\frac{p}{2}}}{\mu y^{2}+C_1},\quad\text{where }y=\lambda_n^{s}.\]

If $0<p<4$, then $\lim_{y\to 0}H(y)=0$ and $\lim_{y\to\infty}H(y)=0$.

Consequently, \[H(y)\le\sup_{y\in(0,+\infty)}H(y)\le H(y_0),\] where $y_0\in(0,+\infty)$ satisfies $H'(y_0)=0$.

From the expression of $H(y)$, \[H'(y)=\frac{\left(2-\frac{p}{2}\right)\mu y^{1-\frac{p}{2}}\left(\mu y^{2}+C_1\right)-2\mu y\cdot\mu y^{2-\frac{p}{2}}}{\left(\mu y^{2}+C_1\right)^{2}},\] i.e. $y_0=\left(\frac{C_1(4-p)}{\mu p}\right)^{\frac{1}{2}}$. Then \[(56)\quad H(y)\le H(y_0)=\frac{p}{4}C_1^{-\frac{p}{4}}\left(\frac{4}{p}-1\right)^{1-\frac{p}{4}}\mu^{\frac{p}{4}}.\] If $p\ge 4$, then for $y=\lambda_n^{s}\ge\lambda_1^{s}>0$, \[(57)\quad H(y)=\frac{\mu}{\mu y^{2}+C_1}\cdot y^{2-\frac{p}{2}}\le\frac{\mu}{C_1}\lambda_1^{s\left(2-\frac{p}{2}\right)}=C_1^{-1}\lambda_1^{s\left(2-\frac{p}{2}\right)}\mu.\] According to (Equation56) and (Equation57), we obtain \[(58)\quad A_2(n)\le\begin{cases}C_3\mu^{\frac{p}{4}}, & 0<p<4,\\ C_4\mu, & p\ge 4,\end{cases}\] where $C_3=\frac{p}{4}C_1^{-\frac{p}{4}}\left(\frac{4}{p}-1\right)^{1-\frac{p}{4}}$ and $C_4=C_1^{-1}\lambda_1^{s\left(2-\frac{p}{2}\right)}$.

Consequently, \[(59)\quad \|f_{\mu}(\cdot)-f(\cdot)\|\le\begin{cases}C_3E\mu^{\frac{p}{4}}, & 0<p<4,\\ C_4E\mu, & p\ge 4.\end{cases}\] The regularization parameter $\mu$ is chosen as \[(60)\quad \mu=\begin{cases}\left(\frac{\delta}{E}\right)^{\frac{4}{p+2}}, & 0<p<4,\\ \left(\frac{\delta}{E}\right)^{\frac{2}{3}}, & p\ge 4.\end{cases}\] Combining (Equation52), (Equation54), (Equation59) and (Equation60), we get the convergence error estimate \[(61)\quad \|f_{\mu}^{\delta}(\cdot)-f(\cdot)\|\le\begin{cases}\left(\frac{1}{2\sqrt{C_1}}+C_3\right)E^{\frac{2}{p+2}}\delta^{\frac{p}{p+2}}, & 0<p<4,\\ \left(\frac{1}{2\sqrt{C_1}}+C_4\right)E^{\frac{1}{3}}\delta^{\frac{2}{3}}, & p\ge 4.\end{cases}\] We have finished the proof of Theorem 4.1.
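The behaviour predicted by Theorem 4.1 can be observed numerically. The sketch below is our own illustration (assumptions: $q\equiv 1$, eigenvalues $\lambda_n=n^{2}$, $p=2$, and a simple alternating-sign perturbation of $L^{2}$ norm $\delta$); it applies the a priori rule $\mu=(\delta/E)^{4/(p+2)}$ and records the reconstruction error for decreasing $\delta$:

```python
import math

def k_n(lam, alpha, s, T):
    # int_0^T Q_n(T-z) dz for q = 1 (closed form)
    ls = lam ** s
    c = alpha * ls / (1 + ls * (1 - alpha))
    return (1 - math.exp(-c * T)) / (alpha * ls)

def reconstruct(h_data, lams, mu, alpha, s, T):
    # modified quasi-boundary filter, cf. (Equation 48)
    return [hn / (k_n(l, alpha, s, T) + mu * l ** s)
            for hn, l in zip(h_data, lams)]

alpha, s, T, p, E = 0.5, 1.0, 1.0, 2.0, 1.0
N = 100
lams = [float(n ** 2) for n in range(1, N + 1)]
f_true = [1.0 / n ** 3 for n in range(1, N + 1)]
h = [fn * k_n(l, alpha, s, T) for fn, l in zip(f_true, lams)]

errs = []
for delta in (1e-2, 1e-4, 1e-6):
    h_delta = [hn + delta / math.sqrt(N) * (1 if i % 2 else -1)
               for i, hn in enumerate(h)]          # perturbation of norm delta
    mu = (delta / E) ** (4.0 / (p + 2.0))          # a priori rule of Theorem 4.1
    f_reg = reconstruct(h_delta, lams, mu, alpha, s, T)
    errs.append(math.sqrt(sum((a - b) ** 2 for a, b in zip(f_reg, f_true))))
```

The error decreases as $\delta$ does, consistent with the Hölder rate $\delta^{p/(p+2)}$ (here $\delta^{1/2}$).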

4.2. The error convergent estimate with an a posteriori parameter choice rule

In this section, we give a selection rule for the a posteriori regularization parameter; that is, we choose the regularization parameter $\mu$ in the regularized solution by Morozov's discrepancy principle [Citation43]. Using the conditional stability result of Theorem 2.2, we can then obtain the convergence error estimate between the regularized solution (Equation48) and the exact solution.

Morozov's discrepancy principle in our case is to find $\mu$ such that \[(62)\quad \|Kf_{\mu}^{\delta}(\cdot)-h^{\delta}(\cdot)\|=\tau\delta,\] where $\tau>1$ is a constant.

Lemma 4.1

Let $\rho(\mu)=\|Kf_{\mu}^{\delta}(\cdot)-h^{\delta}(\cdot)\|$; then the following properties hold:

  1. ρ(μ) is a continuous function;

  2. $\lim_{\mu\to 0}\rho(\mu)=0$;

  3. $\lim_{\mu\to+\infty}\rho(\mu)=\|h^{\delta}(\cdot)\|$;

  4. ρ(μ) is a strictly increasing function over μ(0,+).

Proof.

According to \[\rho(\mu)=\left(\sum_{n=1}^{\infty}\left(\frac{\mu\lambda_n^{s}}{\int_0^{T}q(z)Q_n(T-z)\,dz+\mu\lambda_n^{s}}\right)^{2}(h_n^{\delta})^{2}\right)^{\frac{1}{2}},\] these four properties are obvious.

Remark 4.1

By Lemma 4.1, if $0<\tau\delta<\|h^{\delta}(\cdot)\|$, the parameter $\mu$ selected by formula (Equation62) exists and is unique.
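Because $\rho(\mu)$ is continuous and strictly increasing (Lemma 4.1), the discrepancy equation (Equation62) can be solved by simple bisection. A sketch of our own, with $q\equiv 1$, assumed eigenvalues $\lambda_n=n^{2}$ and made-up data coefficients:

```python
import math

def k_n(lam, alpha, s, T):
    # int_0^T Q_n(T-z) dz for q = 1 (closed form)
    ls = lam ** s
    c = alpha * ls / (1 + ls * (1 - alpha))
    return (1 - math.exp(-c * T)) / (alpha * ls)

def discrepancy(mu, h_delta, lams, alpha, s, T):
    # rho(mu) = || K f_mu^delta - h^delta ||, computed mode by mode
    total = 0.0
    for hn, l in zip(h_delta, lams):
        kn, ml = k_n(l, alpha, s, T), mu * l ** s
        total += (ml / (kn + ml)) ** 2 * hn ** 2
    return math.sqrt(total)

def morozov_mu(h_delta, lams, tau, delta, alpha, s, T, iters=200):
    # geometric bisection: valid because rho is strictly increasing in mu
    lo, hi = 1e-16, 1e16
    for _ in range(iters):
        mid = math.sqrt(lo * hi)
        if discrepancy(mid, h_delta, lams, alpha, s, T) < tau * delta:
            lo = mid
        else:
            hi = mid
    return math.sqrt(lo * hi)

alpha, s, T, tau, delta = 0.5, 1.0, 1.0, 1.1, 1e-3
lams = [float(n ** 2) for n in range(1, 30)]
h_delta = [1.0 / n ** 2 for n in range(1, 30)]   # assumed noisy data coefficients
mu_star = morozov_mu(h_delta, lams, tau, delta, alpha, s, T)
res = discrepancy(mu_star, h_delta, lams, alpha, s, T)
```

Note that $\|h^{\delta}\|>\tau\delta$ here, so a root is guaranteed to exist in the search bracket.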

Lemma 4.2

If $0<\mu<1$, we can get \[(63)\quad \frac{1}{\mu}\le\begin{cases}\left(\frac{C_5}{\tau-1}\right)^{\frac{4}{p+2}}\left(\frac{E}{\delta}\right)^{\frac{4}{p+2}}, & 0<p<2,\\ \frac{C_6}{\tau-1}\cdot\frac{E}{\delta}, & p\ge 2,\end{cases}\]

where $C_5=\frac{q_1}{\alpha}C_1^{-\left(\frac{p}{4}+\frac{1}{2}\right)}\frac{(p+2)}{4}\left(\frac{2-p}{p+2}\right)^{\frac{1}{2}-\frac{p}{4}}$ and $C_6=\frac{q_1}{\alpha}C_1^{-1}\lambda_1^{s\left(1-\frac{p}{2}\right)}$.

Proof.

According to (Equation62) and (Equation9), \[\tau\delta=\|Kf_{\mu}^{\delta}(\cdot)-h^{\delta}(\cdot)\|=\left\|\sum_{n=1}^{\infty}\frac{\mu\lambda_n^{s}}{\int_0^{T}q(z)Q_n(T-z)\,dz+\mu\lambda_n^{s}}h_n^{\delta}X_n(x)\right\|\le\left\|\sum_{n=1}^{\infty}\frac{\mu\lambda_n^{s}}{\int_0^{T}q(z)Q_n(T-z)\,dz+\mu\lambda_n^{s}}(h_n^{\delta}-h_n)X_n(x)\right\|+\left\|\sum_{n=1}^{\infty}\frac{\mu\lambda_n^{s}}{\int_0^{T}q(z)Q_n(T-z)\,dz+\mu\lambda_n^{s}}h_nX_n(x)\right\|\le\delta+I.\] By the a priori bound condition (Equation16) and $h_n=f_n\int_0^{T}q(z)Q_n(T-z)\,dz$, \[I=\left\|\sum_{n=1}^{\infty}\frac{\mu\lambda_n^{s}\int_0^{T}q(z)Q_n(T-z)\,dz}{\int_0^{T}q(z)Q_n(T-z)\,dz+\mu\lambda_n^{s}}(\lambda_n^{s})^{-\frac{p}{2}}\cdot f_n(\lambda_n^{s})^{\frac{p}{2}}X_n(x)\right\|\le E\sup_{n\ge 1}A_3(n),\] where $A_3(n)=\frac{\mu\lambda_n^{s}\int_0^{T}q(z)Q_n(T-z)\,dz}{\int_0^{T}q(z)Q_n(T-z)\,dz+\mu\lambda_n^{s}}(\lambda_n^{s})^{-\frac{p}{2}}$.

According to $Q_n(t)=\frac{1}{1+\lambda_n^{s}(1-\alpha)}\exp\left(-\frac{\alpha\lambda_n^{s}t}{1+\lambda_n^{s}(1-\alpha)}\right)$, we have \[(64)\quad \int_0^{T}q(z)Q_n(T-z)\,dz\le\|q\|_{C[0,T]}\frac{1}{1+\lambda_n^{s}(1-\alpha)}\int_0^{T}\exp\left(-\frac{\alpha\lambda_n^{s}(T-z)}{1+\lambda_n^{s}(1-\alpha)}\right)dz=\|q\|_{C[0,T]}\frac{1}{\alpha\lambda_n^{s}}\left(1-\exp\left(-\frac{\alpha\lambda_n^{s}T}{1+\lambda_n^{s}(1-\alpha)}\right)\right)\le\|q\|_{C[0,T]}\frac{1}{\alpha\lambda_n^{s}}:=q_1\frac{1}{\alpha\lambda_n^{s}},\] where $q_1=\|q\|_{C[0,T]}$.

Using Lemma 2.1 and (Equation64) to bound $A_3(n)$, we obtain \[A_3(n)\le\frac{\mu\lambda_n^{s}\cdot q_1\frac{1}{\alpha\lambda_n^{s}}}{C_1\lambda_n^{-s}+\mu\lambda_n^{s}}(\lambda_n^{s})^{-\frac{p}{2}}=\frac{q_1}{\alpha}\cdot\frac{\mu(\lambda_n^{s})^{1-\frac{p}{2}}}{C_1+\mu(\lambda_n^{s})^{2}}.\] Let \[H(r)=\frac{q_1}{\alpha}\cdot\frac{\mu r^{1-\frac{p}{2}}}{C_1+\mu r^{2}},\quad\text{where }r=\lambda_n^{s}.\]

If $0<p<2$, let $r_0$ satisfy $H'(r_0)=0$.

From the expression of $H(r)$, \[H'(r)=\frac{q_1}{\alpha}\cdot\frac{\left(1-\frac{p}{2}\right)\mu r^{-\frac{p}{2}}\left(\mu r^{2}+C_1\right)-2\mu r\cdot\mu r^{1-\frac{p}{2}}}{\left(\mu r^{2}+C_1\right)^{2}}.\] Setting $H'(r_0)=0$, we get $r_0=\left(\frac{2-p}{p+2}\cdot\frac{C_1}{\mu}\right)^{\frac{1}{2}}$.

Then \[(65)\quad H(r)\le H(r_0)=\frac{q_1}{\alpha}C_1^{-\left(\frac{p}{4}+\frac{1}{2}\right)}\frac{(p+2)}{4}\left(\frac{2-p}{p+2}\right)^{\frac{1}{2}-\frac{p}{4}}\mu^{\frac{p+2}{4}}.\] If $p\ge 2$, then \[H(r)=\frac{q_1}{\alpha}\cdot\frac{\mu}{C_1+\mu r^{2}}\cdot\frac{1}{r^{\frac{p}{2}-1}},\] and since $r=\lambda_n^{s}\ge\lambda_1^{s}$, \[(66)\quad H(r)\le\frac{q_1}{\alpha}\cdot\frac{\mu}{C_1}\lambda_1^{-s\left(\frac{p}{2}-1\right)}=\frac{q_1}{\alpha}C_1^{-1}\lambda_1^{s\left(1-\frac{p}{2}\right)}\mu.\] Consequently, \[(67)\quad A_3(n)\le\begin{cases}C_5\mu^{\frac{p+2}{4}}, & 0<p<2,\\ C_6\mu, & p\ge 2.\end{cases}\] Then \[(\tau-1)\delta\le\begin{cases}C_5E\mu^{\frac{p+2}{4}}, & 0<p<2,\\ C_6E\mu, & p\ge 2,\end{cases}\] and therefore \[(68)\quad \frac{1}{\mu}\le\begin{cases}\left(\frac{C_5}{\tau-1}\right)^{\frac{4}{p+2}}\left(\frac{E}{\delta}\right)^{\frac{4}{p+2}}, & 0<p<2,\\ \frac{C_6}{\tau-1}\cdot\frac{E}{\delta}, & p\ge 2,\end{cases}\] where $C_5=\frac{q_1}{\alpha}C_1^{-\left(\frac{p}{4}+\frac{1}{2}\right)}\frac{(p+2)}{4}\left(\frac{2-p}{p+2}\right)^{\frac{1}{2}-\frac{p}{4}}$ and $C_6=\frac{q_1}{\alpha}C_1^{-1}\lambda_1^{s\left(1-\frac{p}{2}\right)}$.

We have finished the proof of Lemma 4.2.

Theorem 4.2

Suppose $q(t)\in C[0,T]$ satisfies $q(t)\ge q_0>0$ for $t\in[0,T]$, let the a posteriori regularization parameter be chosen by Morozov's discrepancy principle (Equation62), and let the a priori bound condition (Equation16) and the noise assumption (Equation9) hold. Then the following two results hold:

  1. If $0<p<2$, we obtain the error estimate
$$\|f_\mu^\delta(\cdot)-f(\cdot)\|\le\left[\frac{1}{2\sqrt{C_1}}\left(\frac{C_5}{\tau-1}\right)^{\frac{2}{p+2}}+C_2(\tau+1)^{\frac{p}{p+2}}\right]E^{\frac{2}{p+2}}\delta^{\frac{p}{p+2}}.\tag{69}$$

  2. If $p\ge2$, we obtain the error estimate
$$\|f_\mu^\delta(\cdot)-f(\cdot)\|\le\left[\frac{1}{2\sqrt{C_1}}\left(\frac{C_6}{\tau-1}\right)^{\frac12}+C_2(\tau+1)^{\frac12}\right]E^{\frac12}\delta^{\frac12}.\tag{70}$$

Proof.

We prove the conclusion through the following three steps:

Firstly, by triangle inequality, we have (71) fμδ()f()fμδ()fμ()+fμ()f().(71) Secondly, we give an estimate for the second term in formula (Equation71). From (Equation55), we can get K(fμ(x)f(x))=n=1hnμλns0Tq(z)Qn(Tz)dz+μλnsXn(x)=n=1(hnhnδ)μλns0Tq(z)Qn(Tz)dz+μλnsXn(x)+n=1hnδμλns0Tq(z)Qn(Tz)dz+μλnsXn(x).From (Equation9) and (Equation62), we have K(fμ()f())=n=1hnμλns0Tq(z)Qn(Tz)dz+μλnsXn(x)=n=1(hnhnδ)μλns0Tq(z)Qn(Tz)dz+μλnsXn(x)+n=1hnδμλns0Tq(z)Qn(Tz)dz+μλnsXn(x)n=1(hnhnδ)μλns0Tq(z)Qn(Tz)dz+μλnsXn(x)+n=1hnδμλns0Tq(z)Qn(Tz)dz+μλnsXn(x)δ+τδ=(τ+1)δ.By a priori bound condition (Equation16), we know fμ()f()D((Ls)p2)=n=1hn0Tq(z)Qn(Tz)dzμλns0Tq(z)Qn(Tz)dz+μλns(λns)p2212n=1hn0Tq(z)Qn(Tz)dz2(λns)p12=n=1fn2(λns)p12E.By Theorem 2.2, we deduce that (72) fμ()f()C2(τ+1)pp+2E2p+2δpp+2.(72) Finally, we give an estimate for the first term in formula (Equation71). According to the proof result of the first term in Theorem 4.1, we have (73) fμδ()fμ()12C1μ12δ.(73) According to Lemma 4.2, we can get (74) fμδ()fμ()12C1C5τ12p+2E2p+2δpp+2,0<p<2,12C1C6τ1E12δ12,p2.(74) From (Equation71), (Equation72) and (Equation74), we have (75) fμδ()f()C2(τ+1)pp+2E2p+2δpp+2+12C1C5τ12p+2E2p+2δpp+2,0<p<212C1C6τ1E12δ12,p2=12C1C5τ12p+2+C2(τ+1)pp+2E2p+2δpp+2,0<p<2,12C1C6τ1+C2(τ+1)12E12δ12,p2.(75) We have finished the proof of Theorem 4.2.

In the next section, we propose the second regularization method for solving inverse problem (Equation1), namely, Landweber iterative regularization method. The a priori and a posteriori convergence error estimates are given under the selection rules of a priori and a posteriori regularization parameter.

5. Landweber iterative regularization method and convergence analysis

Now let us derive the regularization solution $f^{m,\delta}(x)$ of the Landweber iterative regularization method for inverse problem (1). Replacing the equation $Kf=h$ by the equivalent operator equation $f=(I-aK^*K)f+aK^*h$, we obtain the iterative scheme
$$f^{0,\delta}(x)=0,\qquad f^{m,\delta}(x)=(I-aK^*K)f^{m-1,\delta}(x)+aK^*h^\delta(x),\quad m=1,2,3,\ldots,\tag{76}$$
where $a$ is the relaxation factor satisfying $0<a<\frac{1}{\|K\|^2}$, $I$ is the identity operator, and the iteration step number $m$ plays the role of the regularization parameter.
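As a minimal illustration of the iterative scheme (76), the following sketch runs the Landweber sweep for a small toy matrix $K$ with exact data; `K`, `f_true` and the iteration count are assumptions chosen for the example, not the paper's forward operator.

```python
import numpy as np

# Minimal matrix sketch of the Landweber scheme (76):
#   f^0 = 0,  f^m = (I - a K^T K) f^{m-1} + a K^T h,  0 < a < 1/||K||^2,
# written below in the equivalent residual form f <- f + a K^T (h - K f).

K = np.array([[2.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
f_true = np.array([1.0, -2.0])
h = K @ f_true                          # exact (noise-free) data

a = 0.9 / np.linalg.norm(K, 2) ** 2     # relaxation factor below 1/||K||^2
f = np.zeros(2)
for m in range(200):                    # m plays the role of regularization parameter
    f = f + a * K.T @ (h - K @ f)       # one Landweber sweep

residual = np.linalg.norm(K @ f - h)
```

With exact data and an injective $K$ the iterates converge to the exact solution; with noisy data the iteration must be stopped early, which is exactly the role of the parameter choice rules discussed next.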

Define the operator $R_m:L^2(\Omega)\to L^2(\Omega)$ by
$$R_m=a\sum_{k=0}^{m-1}(I-aK^*K)^kK^*,\quad m=1,2,3,\ldots$$
A direct calculation gives $f^{m,\delta}(x)=R_mh^\delta(x)=a\sum_{k=0}^{m-1}(I-aK^*K)^kK^*h^\delta(x)$. Using the singular values (43) of the operator $K$ and (76), we obtain the Landweber iterative regularization solution
$$f^{m,\delta}(x)=\sum_{n=1}^\infty\frac{1-\left(1-a\left(\int_0^Tq(z)Q_n(T-z)\,dz\right)^2\right)^m}{\int_0^Tq(z)Q_n(T-z)\,dz}\,h_n^\delta X_n(x),\tag{77}$$
where $h_n^\delta=(h^\delta(x),X_n(x))$.

The regularization solution with exact data is
$$f^{m}(x)=\sum_{n=1}^\infty\frac{1-\left(1-a\left(\int_0^Tq(z)Q_n(T-z)\,dz\right)^2\right)^m}{\int_0^Tq(z)Q_n(T-z)\,dz}\,h_nX_n(x).\tag{78}$$
Next, we discuss the a priori and a posteriori convergence error estimates of the Landweber iterative regularization method; the iteration step number $m$ must be specified. Based on the two regularization parameter selection rules, we give Hölder-type convergence estimates between the regularization solution and the exact solution, respectively.

5.1. The convergent error estimate with an a priori parameter choice rule

Theorem 5.1

Suppose $q(t)\in C[0,T]$ satisfies $q(t)>q_0>0$ for $t\in[0,T]$. Assume that the a priori condition (16) and the noise assumption (9) hold, and that the exact solution of problem (1) is given by formula (6). The corresponding Landweber iterative regularization solution $f^{m,\delta}(x)$ is given by formula (77). If we select the regularization parameter $m=\lfloor b\rfloor$, where
$$b=\left(\frac{E}{\delta}\right)^{\frac{4}{p+2}},\tag{79}$$
then we obtain the convergence error estimate
$$\|f^{m,\delta}(\cdot)-f(\cdot)\|\le C_7E^{\frac{2}{p+2}}\delta^{\frac{p}{p+2}},\tag{80}$$
where $\lfloor b\rfloor$ denotes the largest integer less than or equal to $b$ and $C_7=\sqrt{a}+\left(\frac{p}{aC_1^2}\right)^{\frac{p}{4}}$ is a positive constant.

Proof.

By the triangle inequality, we have (81) fm,δ()f()fm,δ()fm()+fm()f().(81) On the one hand, from (Equation77), (Equation78), and (Equation9), we obtain fm,δ()fm()2=n=11(1a(0Tq(z)Qn(Tz)dz)2)m0Tq(z)Qn(Tz)dzhnδXn(x)n=11(1a(0Tq(z)Qn(Tz)dz)2)m0Tq(z)Qn(Tz)dzhnXn(x)2=n=11(1a(0Tq(z)Qn(Tz)dz)2)m0Tq(z)Qn(Tz)dz(hnδhn)Xn(x)2n=1(1(1a(0Tq(z)Qn(Tz)dz)2)m)2(0Tq(z)Qn(Tz)dz)2δ2supn1(B1(n))2δ2,where B1(n):=1(1a(0Tq(z)Qn(Tz)dz)2)m0Tq(z)Qn(Tz)dz.

Because $\sigma_n=\int_0^Tq(z)Q_n(T-z)\,dz$ is a singular value of the operator $K$ and $0<a<\frac{1}{\|K\|^2}$, we obtain $0<a\left(\int_0^Tq(z)Q_n(T-z)\,dz\right)^2<1$.

If $0<x<1$, we have the inequality $x<\sqrt{x}$. Consequently,
$$1-\left(1-a\left(\int_0^Tq(z)Q_n(T-z)\,dz\right)^2\right)^m\le\sqrt{1-\left(1-a\left(\int_0^Tq(z)Q_n(T-z)\,dz\right)^2\right)^m}.$$
Applying the Bernoulli inequality $(1-x)^h\ge1-hx$, we get
$$\sqrt{1-\left(1-a\left(\int_0^Tq(z)Q_n(T-z)\,dz\right)^2\right)^m}\le\sqrt{am}\int_0^Tq(z)Q_n(T-z)\,dz,$$
i.e. $B_1(n)\le\sqrt{am}$. So we have
$$\|f^{m,\delta}(\cdot)-f^m(\cdot)\|\le\sup_{n\ge1}B_1(n)\,\delta\le\sqrt{am}\,\delta.\tag{82}$$

According to Lemma 2.1 and (Equation15), we have B2(n)1aC1λns2m(λns)p2.Let F(r):=(1a(C1r)2)m(r)p2, r:=λns.

Assume r0 satisfies F(r0)=0, according to the F(r) expression, we can get F(r)=p2(r)p211aC1r2m+m1aC1r2m12aC12(r)3p2.Let F(r)=0, we have r0=aC12(4m+p)p12.So, F(r)F(r0)1p4m+pmpaC12p4(m+1)p4paC12p4(m+1)p4,then, B2(n)paC12p4(m+1)p4.Therefore, (83) fm()f()paC12p4(m+1)p4E.(83) Combining (Equation79), (Equation81), (Equation82) and (Equation83), we obtain fm,δ()f()a+paC12p4E2p+2δpp+2:=C7E2p+2δpp+2,where C7=a+(paC12)p4. The proof of Theorem 5.1 is finished.
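The a priori choice (79) is straightforward to implement. The following sketch computes $m=\lfloor(E/\delta)^{4/(p+2)}\rfloor$ for hypothetical values of $E$, $\delta$ and $p$.

```python
import math

# A priori parameter choice (79): m = floor((E/delta)^(4/(p+2))),
# where E is the a priori bound, delta the noise level and p the
# smoothness index of the source condition.

def a_priori_steps(E, delta, p):
    return math.floor((E / delta) ** (4.0 / (p + 2.0)))

m = a_priori_steps(E=1.0, delta=0.25, p=2.0)   # illustrative values
```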

5.2. The convergent error estimate with an a posteriori parameter choice rule

In this section, based on the Morozov discrepancy principle [Citation43], we give the a posteriori regularization parameter selection rule and obtain the convergence error estimate under the a posteriori regularization parameter selection rule.

Let $\tau>1$ be a given constant. The a posteriori regularization parameter $m=m(\delta)\in\mathbb{N}_0$ is selected as the first iteration index for which
$$\|Kf^{m,\delta}(\cdot)-h^\delta(\cdot)\|\le\tau\delta,\tag{84}$$
where $\|h^\delta\|\ge\tau\delta$ is assumed.
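The stopping rule (84) can be sketched as follows: run the Landweber sweep until the residual first drops below $\tau\delta$. The matrix `K`, the noise vector and `delta` below are illustrative stand-ins, not the paper's operator or data.

```python
import numpy as np

# Sketch of the discrepancy principle (84): iterate Landweber until
# ||K f^{m,delta} - h^delta|| <= tau*delta for the first time.

K = np.array([[2.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
h = K @ np.array([1.0, -2.0])                       # exact data
noise = np.array([1.0, -1.0, 1.0]) / np.sqrt(3.0)   # unit-norm noise direction
delta = 1e-2
h_delta = h + delta * noise                          # ||h_delta - h|| = delta

tau = 1.01
a = 0.9 / np.linalg.norm(K, 2) ** 2
f = np.zeros(2)
m = 0
while np.linalg.norm(K @ f - h_delta) > tau * delta and m < 10000:
    f = f + a * K.T @ (h_delta - K @ f)
    m += 1                                           # m(delta): first index meeting (84)
```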

Lemma 5.1

Let $\rho(m)=\|Kf^{m,\delta}(\cdot)-h^\delta(\cdot)\|$. Then $\rho(m)$ has the following properties:

  1. $\rho(m)$ is a continuous function;

  2. $\lim_{m\to0}\rho(m)=\|h^\delta\|$;

  3. $\lim_{m\to+\infty}\rho(m)=0$;

  4. $\rho(m)$ is a strictly decreasing function on $(0,+\infty)$.

Proof.

These properties follow directly from the formula
$$\rho(m)=\left(\sum_{n=1}^\infty\left(1-a\left(\int_0^Tq(z)Q_n(T-z)\,dz\right)^2\right)^{2m}(h_n^\delta)^2\right)^{\frac12}.$$

Remark 5.1

By Lemma 5.1, the regularization parameter $m$ selected by formula (84) exists and is unique.

Lemma 5.2

For a given constant $\tau>1$, assume that the a priori bound condition (16) and the noise assumption (9) hold. If the regularization parameter is chosen by Morozov's discrepancy principle (84), then the regularization parameter $m=m(\delta)$ satisfies
$$m\le\left(\frac{q_1}{\alpha(\tau-1)}\right)^{\frac{4}{p+2}}\frac{p+2}{aC_1^2}\left(\frac{E}{\delta}\right)^{\frac{4}{p+2}}.\tag{85}$$

Proof.

Owing to |1a(0Tq(z)Qn(Tz)dz)2|<1, combining (Equation9) and (Equation77), we obtain (86) τδKfm1,δ()hδ()=n=1(1a(0Tq(z)Qn(Tz)dz)2)m1hnδXn(x)n=1(1a(0Tq(z)Qn(Tz)dz)2)m1(hnδhn)Xn(x)+n=1(1a(0Tq(z)Qn(Tz)dz)2)m1hnXn(x)δ+n=1(1a(0Tq(z)Qn(Tz)dz)2)m1hnXn(x).(86) According to (Equation16), we have n=11a0Tq(z)Qn(Tz)dz2m1hnXn(x)=n=11a0Tq(z)Qn(Tz)dz2m1×0Tq(z)Qn(Tz)dz(λns)p2(λns)p2fnXn(x)1a0Tq(z)Qn(Tz)dz2m1=n=11a0Tq(z)Qn(Tz)dz2m1×0Tq(z)Qn(Tz)dz(λns)p22(λns)p2fn1a0Tq(z)Qn(Tz)dz2m1212supn1B3(n)n=1(λns)p2fn212supn1B3(n)E,where B3(n):=(1a(0Tq(z)Qn(Tz)dz)2)m10Tq(z)Qn(Tz)dz(λns)p2.

According to Lemma 2.1, (Equation15) and (Equation64), we have (87) B3(n)1aC1λns2m1(λns)p2q11αλnsq1α1aC1λns2m1(λns)p21.(87) Let F(h):=q1α(1a(C1h)2)m1(h)p21, h:=λns.

Assume h satisfies F(h)=0, according to the F(h) expression, we have F(h)=q1α(m1)1aC1λns2m22aC12(h)p24+q1αp21(h)p221aC1λns2m1.Let F(h)=0, we obtain h=aC12(4m+p2)(p+2)12.So F(h)F(h)=q1α1aC12(p+2)aC12(4m+p2)m1aC12(4m+p2)(p+2)p+24q1αp+2aC12p+24mp+24.Consequently, B3(n)q1αp+2aC12p+24mp+24.Therefore, we obtain (88) n=11a0Tq(z)Qn(Tz)dz2m1hnXn(x)supn1B3(n)Eq1αp+2aC12p+24mp+24E.(88) Combining (Equation86) and (Equation88), we obtain (τ1)δq1αp+2aC12p+24mp+24E,i.e. mq1α(τ1)4p+2p+2aC12Eδ4p+2.The proof of Lemma 5.2 is completed.

Lemma 5.3

According to (9) and (84), we have
$$\|K(f^{m}(\cdot)-f(\cdot))\|\le(\tau+1)\delta.\tag{89}$$

Proof.

We note K(fm()f())=n=11a0Tq(z)Qn(Tz)dz2mhnXn(x)=n=11a0Tq(z)Qn(Tz)dz2m(hnhnδ)Xn(x)+n=11a0Tq(z)Qn(Tz)dz2mhnδXn(x),i.e. (90) K(fm()f())=n=11a0Tq(z)Qn(Tz)dz2mhnXn(x)=n=11a0Tq(z)Qn(Tz)dz2m(hnhnδ)Xn(x)+n=11a0Tq(z)Qn(Tz)dz2mhnδXn(x)n=11a0Tq(z)Qn(Tz)dz2m(hnhnδ)Xn(x)+n=11a0Tq(z)Qn(Tz)dz2mhnδXn(x)(τ+1)δ.(90) The proof of Lemma 5.3 is completed.

Theorem 5.2

Assume $q(t)\in C[0,T]$ satisfies $q(t)>q_0>0$ for $t\in[0,T]$. Let $f^{m,\delta}(x)$ be the Landweber iterative regularization solution corresponding to the exact solution $f(x)$ of inverse problem (1). Assume that the noise assumption (9) and the a priori condition (16) hold, and that the regularization parameter $m=m(\delta)$ satisfies the iteration stopping criterion (84). Then we obtain the convergence error estimate
$$\|f^{m,\delta}(\cdot)-f(\cdot)\|\le C_8E^{\frac{2}{p+2}}\delta^{\frac{p}{p+2}},\tag{91}$$
where $C_8:=\left(\frac{p+2}{C_1^2}\right)^{\frac12}\left(\frac{q_1}{\alpha(\tau-1)}\right)^{\frac{2}{p+2}}+C_2(\tau+1)^{\frac{p}{p+2}}$ is a positive constant.

Proof.

According to the triangle inequality, we obtain (92) fm,δ()f()fm,δ()fm()+fm()f().(92) Using Lemma 5.2 and (Equation82), we have (93) fm,δ()fm()amδap+2aC1212q1α(τ1)2p+2E2p+2δpp+2=p+2C1212q1α(τ1)2p+2E2p+2δpp+2.(93) According to (Equation16), we have fm()f()D((Ls))p2=n=1(1a(0Tq(z)Qn(Tz)dz)2)2m(0Tq(z)Qn(Tz)dz)2hn2(λns)p12n=1hn2(0Tq(z)Qn(Tz)dz)2(λns)p12=n=1fn2(λns)p12E.Moreover, according to Theorem 2.2 and Lemma 5.3, we have (94) fm()f()C2(τ+1)pp+2E2p+2δpp+2.(94) Combining (Equation92), (Equation93) and (Equation94), we obtain fm,δ()f()C8E2p+2δpp+2,where C8:=(p+2C12)12(q1α(τ1))2p+2+C2(τ+1)pp+2.

The proof of Theorem 5.2 is completed.

6. Analysis and comparison of the optimal approximation

In Sections 4 and 5, we used the modified quasi-boundary regularization method and the Landweber iterative regularization method, respectively, to obtain regularization solutions of problem (1). Based on the a priori bound condition (16), the a priori and a posteriori error estimates were obtained under different regularization parameter selection rules. Next, we compare the convergence error estimates of the two regularization methods.

For some regularization methods, the convergence error estimates obtained do not hold for all $p>0$, but only for some $0<p<p_0$. We call this phenomenon the saturation effect. If a regularization method exhibits a saturation effect, the order of the error estimate between the regularized solution and the exact solution cannot increase further as the smoothness assumption on the solution is strengthened.

We choose $\|f(\cdot)\|_{D((L^s)^{p/2})}=\left(\sum_{n=1}^\infty(\lambda_n^s)^p|(f,X_n)|^2\right)^{\frac12}\le E$ (with $p>0$) as the a priori bound condition. From Tables 1 and 2 and Theorem 3.2, for both the a priori and the a posteriori parameter choices, the modified quasi-boundary regularization method produces a saturation effect, whereas the Landweber iterative regularization method does not: compared with the modified quasi-boundary regularization method, the error estimates obtained by the Landweber iterative regularization method are order-optimal.

Table 1. A priori convergence error estimates of two regularization methods.

Table 2. A posteriori convergence error estimates of two regularization methods.

From the above analysis, we know that under the a priori bound condition (Equation16), the error estimation result obtained by Landweber iterative regularization method is order-optimal.

7. Numerical experiments

In this section, we illustrate the stability and effectiveness of the two regularization methods and the superiority of the Landweber iterative regularization method through several numerical examples with different properties.

The analytic solution of problem (1) is difficult to obtain. Therefore, we construct the final data $g(x)$ by solving the following well-posed direct problem with the given data $f(x)$, $q(t)$, $\varphi(x)$:
$$\begin{cases}{}^{CF}\!D_{0,t}^\alpha u(x,t)+L^su(x,t)=f(x)q(t),&\text{in }\Omega\times(0,T),\\u(x,t)=0,&\text{on }\partial\Omega\times(0,T),\\u(x,0)=\varphi(x),&\text{in }\Omega.\end{cases}\tag{95}$$

In general, we take $d=1$, $\Omega=(0,1)$ and $\varphi(x)=0$. In the finite difference algorithm, the grid sizes of the space and time variables are $\Delta x=\frac{1}{M}$ and $\Delta t=\frac{T}{N}$, respectively. In the space interval $[0,1]$, the grid points are $x_i=i\Delta x$, $i=0,1,\ldots,M$, and in the time interval $[0,T]$, the grid points are $t_n=n\Delta t$, $n=0,1,\ldots,N$. The value of $u$ at a grid point is approximated by $u_i^n\approx u(x_i,t_n)$.

To simplify the calculation, let Lu=uxx. Then the space has the following discrete form: (96) (Ls)u(xi,tn)1(Δx)2s(ui+1n2uin+ui1n).(96) The Caputo-Fabrizio fractional derivative is approximated by [Citation48]: (97) CFD0,tαu(xi,tn)1αΔtj=1n(uinj+1uinj)ωj,(97) where i=1,2,,M1, n=1,2,,N and ωj=exp(jα1α)exp((j1)α1α). In (Equation95), according to initial condition and boundary condition, we can get a numerical solution for well-posed problem (Equation95) from the finite difference scheme (98) 1αΔtj=1n(uinj+1uinj)ωj=1(Δx)2s(ui+1n2uin+ui1n)+q(tn)f(xi).(98) Denote Un=(u1n,u2n,,uM1n)T,F=(f(x1),f(x2),,f(xM1))T, then according to scheme (Equation98), we obtain the following iterative scheme AU1=q(t1)F,AUn=σ(β1Un1+β2Un2++βn1U1)+q(tn)F,where σ=1αΔt, βj=ωjωj+1.
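As an illustration of a first-order discretization of the Caputo-Fabrizio derivative of the form (97), the following sketch computes exponential weights by integrating the kernel $\exp(-\alpha(t-s)/(1-\alpha))$ exactly on each time slab. The sign and normalization convention here is our own assumption and need not coincide literally with the paper's $\omega_j$.

```python
import numpy as np

# Illustrative weights for a scheme of the form
#   CF-D^alpha u(t_n) ~ (1/(alpha*dt)) * sum_{j=1..n} (u^{n-j+1} - u^{n-j}) * w_j,
# obtained by integrating the Caputo-Fabrizio kernel exp(-alpha*(t-s)/(1-alpha))
# exactly over each time slab.  Convention chosen for illustration only.

def cf_weights(alpha, dt, n):
    j = np.arange(1, n + 1)
    c = alpha * dt / (1.0 - alpha)
    # w_j = exp(-(j-1)c) - exp(-jc): positive, decreasing, telescoping sum
    return np.exp(-(j - 1) * c) - np.exp(-j * c)

alpha, dt, n = 0.5, 0.025, 40
w = cf_weights(alpha, dt, n)
```

The weights are positive and decreasing, and their sum telescopes to $1-\exp(-n\,\alpha\Delta t/(1-\alpha))$, which is a convenient consistency check for an implementation.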

$A$ is the $(M-1)\times(M-1)$ tridiagonal matrix
$$A=\begin{pmatrix}\frac{2}{(\Delta x)^{2s}}+\sigma&-\frac{1}{(\Delta x)^{2s}}&&\\-\frac{1}{(\Delta x)^{2s}}&\frac{2}{(\Delta x)^{2s}}+\sigma&-\frac{1}{(\Delta x)^{2s}}&\\&\ddots&\ddots&\ddots\\&&-\frac{1}{(\Delta x)^{2s}}&\frac{2}{(\Delta x)^{2s}}+\sigma\end{pmatrix}.$$
For the regularization problem (44) corresponding to the modified quasi-boundary regularization method, we discretize the first equation in (44) with the same finite difference scheme and obtain
$$AV^1=q(t_1)F_\mu^\delta,\qquad AV^n=\sigma\left(\beta_1V^{n-1}+\beta_2V^{n-2}+\cdots+\beta_{n-1}V^1\right)+q(t_n)F_\mu^\delta,$$
where $V^n=(v_1^n,v_2^n,\ldots,v_{M-1}^n)^T$ with $v_i^n$ the approximate value of $u_\mu^\delta(x_i,t_n)$, and $F_\mu^\delta=(f_1^\delta,f_2^\delta,\ldots,f_{M-1}^\delta)^T$ with $f_i^\delta$ the approximate value of the regularization solution $f_\mu^\delta(x_i)$.
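The system matrix can be assembled as below, assuming the standard sign convention for a second-difference operator (positive diagonal $\frac{2}{(\Delta x)^{2s}}+\sigma$, negative off-diagonals); the printed text drops signs, so this reading is an assumption.

```python
import numpy as np

# Assemble the (M-1)x(M-1) tridiagonal matrix of the implicit scheme,
# with diagonal 2/dx^{2s} + sigma and off-diagonal -1/dx^{2s}
# (signs assumed; the extracted text does not show them).

def assemble_A(M, s, sigma):
    dx = 1.0 / M
    d = 2.0 / dx ** (2 * s) + sigma      # diagonal entry
    e = -1.0 / dx ** (2 * s)             # off-diagonal entry
    return (np.diag(np.full(M - 1, d))
            + np.diag(np.full(M - 2, e), 1)
            + np.diag(np.full(M - 2, e), -1))

M, s = 100, 0.5
sigma = 1.0 / (0.5 * (1.0 / 40))         # sigma = 1/(alpha*dt), here alpha=0.5, dt=1/40
A = assemble_A(M, s, sigma)
```

The matrix is symmetric and strictly diagonally dominant, hence positive definite, so each implicit time step is a well-posed linear solve.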

Moreover, we can infer the relationship
$$V^N=DF_\mu^\delta,\tag{99}$$
where $D$ is a matrix determined by the above iteration.

In addition, according to the fourth formula in (Equation44), we have (100) viN=gδ(xi)+μ1(Δx)2s(fi+1δ2fiδ+fi1δ),  i=1,2,,M1.(100) To transform (Equation100) into matrix form, we have (101) VN=GδμYFμδ,(101) where Gδ=(gδ(x1),gδ(x2,,gδ(xM1), and Y is a tridiagonal matrix given by Y(M1)×(M1)=2(Δx)2s1(Δx)2s1(Δx)2s2(Δx)2s1(Δx)2s1(Δx)2s2(Δx)2s1(Δx)2s1(Δx)2s2(Δx)2s.From (Equation99) and (Equation101), we obtain the unknow source vector satisfies the following format (102) (D+μY)Fμδ=Gδ.(102) For the Landweber iterative regularization method, similar to the modified quasi-boundary regularization method, in solving the numerical problem of inverse problem (Equation1), we need to find a matrix K to make Kf=UN=gδ hold. The matrix K satisfies the following conditions: K1=Aˆ,K2=Aˆσj=0n2(ωj+1ωj)K3j,Kn=Aˆσj=0n2(ωj+1ωj)Knj+1,n=3,,N,K=KN,where Aˆ(M+1)×(M+1)=0A(M1)×(M1)10.Therefore, the regularization solution is obtained by the following formula: fm,δ=ak=0m1(IaKK)kKgδ.By adding random perturbation to noise data g(x), the data with errors are obtained, gδ=g+ϵ(rand(size(g))),where rand() can produce a random number with a mean value of 0 and a variance of 1, and ε represents the relative error level. The absolute error levels are as follows: (103) δ=1(M+1)i=1M+1(gigiδ)2.(103) A priori regularization parameter is based on the smooth condition of the exact solution, which is difficult to obtain in general. For the two regularization methods, the a posteriori regularization parameter selection rule is selected as an example to prove the stability and effectiveness of the two regularization methods. By direct calculation, we obtain λn=n2 and Xn(x)=2πsinnx for n=1,2,. Let T = 1, p(t)=et, τ=1.01, p = 2, s=12. Choosing M = 100, N = 40, we give three examples of different properties.
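The noisy data and the discrete noise level (103) can be generated as in the following sketch. We use standard normal noise (mean 0, variance 1), which matches the mean/variance description in the text (i.e. MATLAB's `randn` rather than `rand`); `g` below is merely a stand-in for the computed final data.

```python
import numpy as np

# Generate perturbed data g_delta = g + eps * noise and evaluate the
# discrete noise level (103), read here as a root-mean-square difference.
# The vector g is an illustrative stand-in for the computed final data.

rng = np.random.default_rng(42)
M = 100
x = np.linspace(0.0, 1.0, M + 1)
g = np.sin(7 * np.pi * x) * np.exp(x)        # stand-in final data
eps = 1e-3                                    # relative error level
g_delta = g + eps * rng.standard_normal(g.shape)

# discrete noise level (103)
delta = np.sqrt(np.mean((g - g_delta) ** 2))
```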

Example 7.1

Consider smooth function f(x)=sin(7πx)ex,x[0,1].

Example 7.2

Consider piecewise smooth function f(x)=0,x[0,14),4(x14),x[14,12),4(x34),x[12,34),0,x[34,1].

Example 7.3

Consider non-smooth function f(x)=0,x[0,0.2],1,x(0.2,0.4],0,x(0.4,0.6],1,x(0.6,0.8],0,x(0.8,1].
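For reference, the three test sources of Examples 7.1-7.3 can be written as vectorized functions. We read the third branch of Example 7.2 as $-4(x-\tfrac34)$ so that the function is continuous, and keep both steps of Example 7.3 equal to 1 as printed; the extraction drops minus signs, so these sign choices are assumptions.

```python
import numpy as np

# The three test sources of Examples 7.1-7.3 (sign conventions assumed).

def f1(x):                                    # smooth (Example 7.1)
    return np.sin(7 * np.pi * x) * np.exp(x)

def f2(x):                                    # piecewise smooth hat (Example 7.2)
    return np.where(x < 0.25, 0.0,
           np.where(x < 0.5, 4.0 * (x - 0.25),
           np.where(x < 0.75, -4.0 * (x - 0.75), 0.0)))

def f3(x):                                    # non-smooth step (Example 7.3)
    return np.where((x > 0.2) & (x <= 0.4), 1.0,
           np.where((x > 0.6) & (x <= 0.8), 1.0, 0.0))
```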

Figures 1 and 2 show the exact solution and its approximations under the modified quasi-boundary regularization method and the Landweber iterative regularization method for Example 7.1 with α=0.2 and α=0.9, respectively, at the error levels ϵ=0.001, 0.0005, 0.0001. Table 3 compares the absolute errors of the two regularization methods for Example 7.1 for different α with ϵ=0.001, 0.0005, 0.0001, and Table 4 compares their CPU times.

Figure 1. The exact solution and the approximate solutions of the two regularization methods for Example 7.1 with α=0.2: (a) ϵ=0.001, (b) ϵ=0.0005, (c) ϵ=0.0001.


Figure 2. The exact solution and the approximate solutions of the two regularization methods for Example 7.1 with α=0.9: (a) ϵ=0.001, (b) ϵ=0.0005, (c) ϵ=0.0001.


Table 3. Absolute error of two regularization methods of Example 7.1 for different α with ϵ=0.001,0.0005,0.0001.

Table 4. The CPU time of two regularization methods of Example 7.1 for different α with ϵ=0.001,0.0005,0.0001.

For the two regularization methods, Figures 1 and 2 show that the smaller the error level ε, the better the fitting effect. Moreover, the fitting effect of the Landweber iterative regularization method is better than that of the modified quasi-boundary regularization method. From Table 3, we find that the smaller α and ε are, the smaller the absolute error is; again, the error of the Landweber iterative regularization method is smaller. From Table 4, we find that the larger α and the smaller ε are, the longer the CPU time is; in addition, the CPU time of the modified quasi-boundary regularization method is shorter than that of the Landweber iterative regularization method.

Figures 3 and 4 show the exact solution and its approximations under the modified quasi-boundary regularization method and the Landweber iterative regularization method for Example 7.2 with α=0.2 and α=0.9, respectively, at the error levels ϵ=0.001, 0.0005, 0.0001. Table 5 compares the absolute errors of the two regularization methods for Example 7.2 for different α with ϵ=0.001, 0.0005, 0.0001, and Table 6 compares their CPU times.

Figure 3. The exact solution and the approximate solutions of the two regularization methods for Example 7.2 with α=0.2: (a) ϵ=0.001, (b) ϵ=0.0005, (c) ϵ=0.0001.


Figure 4. The exact solution and the approximate solutions of the two regularization methods for Example 7.2 with α=0.9: (a) ϵ=0.001, (b) ϵ=0.0005, (c) ϵ=0.0001.


Table 5. Absolute error of two regularization methods of Example 7.2 for different α with ϵ=0.001,0.0005,0.0001.

Table 6. The CPU time of two regularization methods of Example 7.2 for different α with ϵ=0.001,0.0005,0.0001.

From Figures 3 and 4, we find that the smaller ε is, the better the fitting effect is, and the Landweber iterative regularization method is more effective than the modified quasi-boundary regularization method. From Table 5, the smaller α and ε are, the smaller the absolute error is; again, the error of the Landweber iterative regularization method is smaller. From Table 6, the larger α and the smaller ε are, the longer the CPU time is; moreover, the CPU time of the modified quasi-boundary regularization method is shorter than that of the Landweber iterative regularization method.

Figures 5 and 6 show the exact solution and its approximations under the modified quasi-boundary regularization method and the Landweber iterative regularization method for Example 7.3 with α=0.2 and α=0.9, respectively, at the error levels ϵ=0.001, 0.0005, 0.0001. Table 7 compares the absolute errors of the two regularization methods for Example 7.3 for different α with ϵ=0.001, 0.0005, 0.0001, and Table 8 compares their CPU times.

Figure 5. The exact solution and the approximate solutions of the two regularization methods for Example 7.3 with α=0.2: (a) ϵ=0.001, (b) ϵ=0.0005, (c) ϵ=0.0001.


Figure 6. The exact solution and the approximate solutions of the two regularization methods for Example 7.3 with α=0.9: (a) ϵ=0.001, (b) ϵ=0.0005, (c) ϵ=0.0001.


Table 7. Absolute error of two regularization methods of Example 7.3 for different α with ϵ=0.001,0.0005,0.0001.

Table 8. The CPU time of two regularization methods of Example 7.3 for different α with ϵ=0.001,0.0005,0.0001.

From Figures 5 and 6, we find that the smaller ε is, the better the fitting effect is, and the Landweber iterative regularization method fits better than the modified quasi-boundary regularization method. From Table 7, the absolute error decreases as α increases and as ε decreases; again, the error of the Landweber iterative regularization method is smaller. From Table 8, the larger α and the smaller ε are, the longer the CPU time is, and the CPU time of the modified quasi-boundary regularization method is shorter than that of the Landweber iterative regularization method.

The three examples with different properties show that, for given α and ε, the absolute error obtained by the Landweber iterative regularization method is smaller than that of the modified quasi-boundary regularization method. Moreover, Figures 1-6 show that the fitting effect of the Landweber iterative regularization method is better. Therefore, the Landweber iterative regularization method is more effective than the modified quasi-boundary regularization method, although its CPU time is longer.

8. Conclusion

In this paper, the problem of unknown source identification for the space-time fractional diffusion equation is studied, where the time fractional derivative is the Caputo-Fabrizio fractional derivative. We prove that this problem is ill-posed. Based on an a priori assumption, the optimal error bound of the problem under the source condition is obtained. In addition, we solve the inverse problem (1) by two different regularization methods, and the convergence error estimates of both methods are obtained under a priori and a posteriori regularization parameter selection rules. Compared with the modified quasi-boundary regularization method, the convergence error estimates obtained by the Landweber iterative regularization method are order-optimal. Finally, numerical examples with different properties demonstrate the stability and effectiveness of the two regularization methods and their relative advantages in dealing with the inverse problem.

Disclosure statement

No potential conflict of interest was reported by the authors.

Additional information

Funding

The project is supported by the National Natural Science Foundation of China [grant number 11961044] and the Doctoral Fund of Lanzhou University of Technology.

References

  • Caputo M, Fabrizio M. A new definition of fractional derivative without singular kernel. Progr Fract Differ Appl. 2015;1(2):73–85.
  • Losada J, Nieto JJ. Properties of a new fractional derivative without singular kernel. Progr Fract Differ Appl. 2015;1(2):87–92.
  • Tuan NH, Zhou Y. Well-posedness of an initial value problem for fractional diffusion equation with Caputo-Fabrizio derivative. Chaos Soliton Fract. 2020;138:109966.
  • Zheng XC, Wang H, Fu HF. Well-posedness of fractional differential equations with variable-order Caputo-Fabrizio derivative. Chaos Soliton Fract. 2020;138:109966.
  • Al-Salti N, Karimov E, Kerbal S. Boundary-value problems for fractional heat equation involving Caputo-Fabrizio derivative. New Trends Math Sci. 2016;4(4):79–89.
  • Baleanu D, Jajarmi A, Mohammadi H, et al. A new study on the mathematical modelling of human liver with Caputo-Fabrizio fractional derivative. Chaos Soliton Fract. 2020;134:109705.
  • Al-khedhairi A. Dynamical analysis and chaos synchronization of a fractional-order novel financial model based on Caputo-Fabrizio derivative. Eur Phys J Plus. 2019;134(10):532.
  • Al-Salti N, Karimov E, Sadarangani K. On a differential equation with Caputo-Fabrizio fractional derivative of order 1<β≤2 and application to mass-spring-damper system. Progr Fract Differ Appl. 2016;2(4):257–263.
  • Shi JK, Chen MH. A second-order accurate scheme for two-dimensional space fractional diffusion equations with time Caputo-Fabrizio fractional derivative. Appl Numer Math. 2020;151:246–262.
  • Liu ZG, Cheng AJ, Li XL. A second order Crank-Nicolson scheme for fractional Cattaneo equation based on new fractional derivative. Appl Math Comput. 2017;311:361–374.
  • Wang JG, Wei T, Zhou YB. Tikhonov regularization method for a backward problem for the time-fractional diffusion equation. Appl Math Model. 2013;37(18–19):8518–8532.
  • Yang F, Zhang P, Li XX, et al. Tikhonov regularization method for identifying the space-dependent source for time-fractional diffusion equation on a columnar symmetric domain. Adv Differ Equ. 2020;2020:128.
  • Yang F, Fu CL, Li XX. A modified tikhonov regularization method for the cauchy problem of Laplace equation. Acta Math Sci. 2015;35(6):1339–1348.
  • Xiong XT, Xue X. A fractional Tikhonov regularization method for identifying a space-dependent source in the time-fractional diffusion equation. Appl Math Comput. 2019;349:292–303.
  • Yang F, Pu Q, Li XX. The fractional Tikhonov regularization methods for identifying the initial value problem for a time-fractional diffusion equation. J Comput Appl Math. 2020;380:112998.
  • Wang JG, Wei T, Zhou Y. Optimal error bound and simplified Tikhonov regularization method for a backward problem for the time-fractional diffusion equation. J Comput Appl Math. 2015;279(18–19):277–292.
  • Feng XL, Eldén L. Solving a Cauchy problem for a 3D elliptic PDE with variable coefficients by a quasi-boundary-value method. Inverse Probl. 2014;30(1):015005.
  • Yang F, Wang N, Li XX, et al. A quasi-boundary regularization method for identifying the initial value of time-fractional diffusion equation on spherically symmetric domain. Inverse Ill-posed Probl. 2019;27(5):609–621.
  • Yang F, Sun YR, Li XX, et al. The quasi-boundary regularization value method for identifying the initial value of heat equation on a columnar symmetric domain. Numer Algor. 2019;82(2):623–639.
  • Yang F, Zhang P, Li XX. The truncation method for the Cauchy problem of the inhomogeneous Helmholtz equation. Appl Anal. 2019;98(5):991–1004.
  • Yang F, Pu Q, Li XX, et al. The truncation regularization method for identifying the initial value on non-homogeneous time-fractional diffusion-wave equations. Mathematics. 2019;7:1007.
  • Wei T, Wang JG. A modified quasi-boundary value method for an inverse source problem of the time-fractional diffusion equation. Appl Numer Math. 2014;78:95–111.
  • Yang F, Fu CL, Li XX. A mollification regularization method for unknown source in time-fractional diffusion equation. Int J Comput Math. 2014;91(7):1516–1534.
  • Zhang HW, Qin HH, Wei T. A quasi-reversibility regularization method for the Cauchy problem of the Helmholtz equation. Int J Comput Math. 2011;88(4):839–850.
  • Yang F, Fu CL. The quasi-reversibility regularization method for identifying the unknown source for time fractional diffusion equation. Appl Math Model. 2015;39(5–6):1500–1512.
  • Yang F, Ren YP, Li XX. Landweber iteration regularization method for identifying unknown source on a columnar symmetric domain. Inverse Probl Sci Eng. 2018;26(8):1109–1129.
  • Yang F, Liu X, Li XX. Landweber iteration regularization method for identifying unknown source of the modified Helmholtz equation. Bound Value Probl. 2017;2017(1):1–16.
  • Yang F, Zhang Y, Li XX. Landweber iteration regularization method for identifying the initial value problem of the time-space fractional diffusion-wave equation. Numer Algor. 2020;83:1509–1530.
  • Xiong XT, Fu CL, Li HF. Fourier regularization method of a sideways heat equation for determining surface heat flux. J Math Anal Appl. 2006;317(1):331–348.
  • Li XX, Lei JL, Yang F. An a posteriori fourier regularization method for identifying the unknown source of the space-fractional diffusion equation. J Inequal Appl. 2014;2014(1):1–13.
  • Yang F, Fu CL, Li XX, et al. The Fourier regularization method for identifying the unknown source for the modified Helmholtz equation. Acta Math Sci. 2014;34(4):1040–1047.
  • Yang F, Fan P, Li XX, et al. Fourier truncation regularization method for a time-fractional backward diffusion problem with a nonlinear source. Mathematics. 2019;7(9):865.
  • Liu SS, Feng LX. A posteriori regularization parameter choice rule for a modified kernel method for a time-fractional inverse diffusion problem. J Comput Appl Math. 2019;353:355–366.
  • Liu SS, Feng LX. Optimal error bound and modified kernel method for a space-fractional backward diffusion problem. Adv Differ Equ. 2018;2018:268.
  • Liu SS, Feng LX. A modified kernel method for a time-fractional inverse diffusion problem. Adv Differ Equ. 2015;2015:342.
  • Yang F, Fu JL, Li XX. A potential-free field inverse Schrödinger problem: optimal error bound analysis and regularization method. Inverse Probl Sci Eng. 2020;28(9):1209–1252.
  • Zhang ZQ, Ma YJ. A modified kernel method for numerical analytic continuation. Inverse Probl Sci Eng. 2013;21(5):840–853.
  • Tautenhahn U. Optimality for ill-posed problems under general source conditions. Numer Funct Anal Optim. 1998;19(3–4):377–398.
  • Ivanchov MI. The inverse problem of determining the heat source power for a parabolic equation under arbitrary boundary conditions. J Math Sci. 1998;88(3):432–436.
  • Schröter T, Tautenhahn U. On the optimal regularization methods for solving linear ill-posed problems. Z Anal Anwend. 1994;13(4):697–710.
  • Tautenhahn U. Optimal stable approximations for the sideways heat equation. J Inverse Ill-Posed Probl. 1997;5(3):287–307.
  • Vainikko G. On the optimality of methods for ill-posed problems. Z Anal Anwend. 1987;6(4):351–362.
  • Hohage T. Regularization of exponentially ill-posed problems. Numer Funct Anal Optim. 2000;21(3–4):439–464.
  • Engl HW, Hanke M, Neubauer A. Regularization of inverse problems. Dordrecht: Kluwer Academic Publishers; 1996.
  • Tautenhahn U, Gorenflo R. On optimal regularization methods for fractional differentiation. Z Anal Anwend. 1999;18(2):449–467.
  • Tautenhahn U. Optimal stable solution of Cauchy problems for elliptic equations. Z Anal Anwend. 1996;15:961–984.
  • Hofmann B, Tautenhahn U, Hämarik U. Conditional stability estimates for ill-posed PDE problems by using interpolation. Preprint 2011-16; 2011. Available from: http://nbn-resolving.de/urn:nbn:de:bsz:ch1-qucosa-72654.
  • Atangana A, Alqahtani RT. Numerical approximation of the space-time Caputo-Fabrizio fractional derivative and application to groundwater pollution equation. Adv Differ Equ. 2016;2016(1):156.
