Research Article

Three Landweber iterative methods for solving the initial value problem of time-fractional diffusion-wave equation on spherically symmetric domain

Pages 2306-2356 | Received 06 Apr 2020, Accepted 28 Mar 2021, Published online: 17 Apr 2021

Abstract

In this paper, the inverse problem of identifying the initial value of a time-fractional diffusion-wave equation on a spherically symmetric region is considered. The exact solution of this problem is obtained by the method of separation of variables and properties of the Mittag–Leffler functions. The problem is ill-posed, i.e. the solution (if it exists) does not depend continuously on the measured data. Three different kinds of Landweber iterative methods are used to solve this problem. Under both the a priori and the a posteriori regularization parameter choice rules, error estimates between the exact solution and the regularization solutions are obtained. Several numerical examples are given to demonstrate the effectiveness of these regularization methods.


1. Introduction

Nowadays, it has been found that fractional derivatives have many advantages in solving practical problems, for example in medical engineering [Citation1], chemistry and biochemistry [Citation2], finance and economics [Citation3–6], and inverse scattering [Citation7]. Up to now, many results have been obtained for the direct problems [Citation8–12] of fractional differential equations. However, when solving practical problems, the initial value, source term, diffusion coefficient, or part of the boundary value [Citation13,Citation14] may be unknown, and it is necessary to recover them from measurement data; this gives rise to inverse problems for fractional diffusion equations. The study of the fractional diffusion equation is still at an early stage; its direct problem has been studied in [Citation15–17].

For the inverse problem of the time-fractional diffusion equation with 0<α<1, there are many research results. For identifying an unknown source, see [Citation18–26]. For the backward heat conduction problem, see [Citation27–34]. For identifying the initial value, see [Citation35–37]. For an inverse unknown coefficient problem of a time-fractional equation, see [Citation38,Citation39]. For the simultaneous identification of the source term and initial data of the time-fractional diffusion equation, see [Citation40,Citation41]. For identifying some unknown parameters in the time-fractional diffusion equation, see [Citation42]. For the inverse problem of the heat equation on a columnar axis-symmetric domain, see [Citation43–48]. In [Citation43–45], a Landweber regularization method, a simplified Tikhonov regularization method and a spectral method are used to identify a source term on a columnar axis-symmetric domain. In [Citation46–48], the authors considered a backward problem on a columnar axis-symmetric domain. In [Citation46], Yang et al. used the quasi-boundary value regularization method to determine the initial value of the heat equation with an inhomogeneous source on a columnar symmetric domain; the error estimate between the regularized solution and the exact solution under the corresponding regularization parameter selection rules is obtained, and a numerical example verifies that the regularization method is very effective for this inverse problem. In [Citation47], Cheng et al. used a modified Tikhonov regularization method to treat the inverse time problem for an axisymmetric heat equation, and a Hölder-type error estimate between the approximate solution and the exact solution is obtained. In [Citation48], Djerrar et al. used the standard Tikhonov regularization method to deal with an axisymmetric inverse problem for the heat equation inside a cylinder, and numerical examples show that the method is effective and stable. For the inverse problem of the time-fractional diffusion equation with 0<α<1 on columnar and spherically symmetric domains, see [Citation49,Citation50]. In [Citation49], Xiong proposed a backward problem model for a time-fractional diffusion-heat equation on a columnar axis-symmetric domain. Yang et al. in [Citation50] used the Landweber iterative regularization method to identify the initial value of the time-fractional diffusion equation on a spherically symmetric domain. Compared with the inverse problem of the time-fractional diffusion equation with 0<α<1, there are few research results for the inverse problem of the time-fractional diffusion-wave equation with 1<α<2. Šišková et al. in [Citation51] used a regularization method to solve the inverse source problem of the time-fractional diffusion-wave equation. Liao et al. in [Citation52] used the conjugate gradient method combined with Morozov's discrepancy principle to identify the unknown source for the time-fractional diffusion-wave equation. Šišková et al. in [Citation53] used a regularization method to deal with an inverse source problem for a time-fractional wave equation. Gong et al. in [Citation54] used a generalized Tikhonov method to identify the time-dependent source term in a time-fractional diffusion-wave equation. In recent years, in physical oceanography and global meteorology, the inversion of initial boundary value problems has always been a hot topic.
To increase the accuracy of numerical weather prediction, the initial boundary values are usually inverted by combining the model with observation data, which provides a reasonable initial field for the numerical weather prediction model. At present, the initialization of many domestic and foreign ocean circulation models, atmospheric general circulation models, numerical weather prediction models and torrential rain forecasting models involves the inversion of initial boundary value problems, so the application prospects of such problems in scientific research are very broad. Yang et al. in [Citation55] used the truncated regularization method to solve the inverse initial value problem of the time-fractional inhomogeneous diffusion-wave equation. Yang et al. in [Citation56] used the Landweber iterative regularization method to identify the initial value of a space-time-fractional diffusion-wave equation. Wei et al. in [Citation57] used the Tikhonov regularization method to solve the inverse initial value problem of the time-fractional diffusion-wave equation. Wei et al. in [Citation58] used the conjugate gradient algorithm combined with the Tikhonov regularization method to identify the initial value of the time-fractional diffusion-wave equation. Until now, there are few papers on the inverse problem of the time-fractional diffusion-wave equation on columnar axis-symmetric and spherically symmetric domains. In [Citation59], the authors used the Landweber iterative method to solve an inverse source problem of the time-fractional diffusion-wave equation on a spherically symmetric domain. It is assumed that the grain has a spherically symmetric diffusion geometry, which is consistent with laboratory measurements of helium diffusion from apatite from a physical point of view. As a consequence of radiogenic production and diffusive loss, u(r,t), which depends only on the spherical radius r and the time t, denotes the concentration of helium. For the inverse problem of recovering the initial value on a spherically symmetric region, there are few research results at present. Therefore, in this paper, we consider the inverse problem of identifying the initial value of the time-fractional diffusion-wave equation on a spherically symmetric region, and we apply three regularization methods to this inverse problem in order to find an effective regularization method.

In this paper, we consider the following problem:
\[
\begin{cases}
D_t^\alpha u(r,t)-\dfrac{2}{r}u_r(r,t)-u_{rr}(r,t)=f(r), & 0<r<r_0,\ 0<t<T,\ 1<\alpha<2,\\
u(r_0,t)=0, & 0\le t\le T,\\
u(r,0)=\varphi(r), & 0\le r\le r_0,\\
u_t(r,0)=\psi(r), & 0\le r\le r_0,\\
\lim_{r\to0}u(r,t)\ \text{bounded}, & 0\le t\le T,\\
u(r,T)=g(r), & 0\le r\le r_0,
\end{cases}\tag{1}
\]
where $r_0$ is the radius and $D_t^\alpha u(r,t)$ is the Caputo fractional derivative of order $1<\alpha<2$, defined as
\[
D_t^\alpha u(r,t)=\frac{1}{\Gamma(2-\alpha)}\int_0^t\frac{\partial^2u(r,s)}{\partial s^2}\,\frac{\mathrm{d}s}{(t-s)^{\alpha-1}},\qquad 1<\alpha<2.\tag{2}
\]
The existence and uniqueness of the solution of the direct problem have been proved in [Citation60]. The inverse problem is to use the measured data $g(r)$ and the known function $f(r)$ to identify the unknown initial data $\varphi(r)$, $\psi(r)$. The inverse initial value problem can be divided into two cases:

(IVP1):

Assuming ψ(r) is known, we use the final value data g(r) and the known function f(r) to invert the initial value φ(r).

(IVP2):

Assuming φ(r) is known, we use the final value data g(r) and the known function f(r) to invert the initial value ψ(r).

Because the measurements are error-prone, we denote the noisy measurements by $f^\delta$ and $g^\delta$, which satisfy
\[
\|f^\delta-f\|_{L^2[0,r_0;r^2]}\le\delta,\tag{3}
\]
\[
\|g^\delta(r)-g(r)\|_{L^2[0,r_0;r^2]}\le\delta,\tag{4}
\]
where $\delta>0$ is the noise level. Throughout this paper, $L^2[0,r_0;r^2]$ denotes the Hilbert space of Lebesgue measurable functions on the interval $[0,r_0]$ with weight $r^2$; $(\cdot,\cdot)$ and $\|\cdot\|$ denote the inner product and norm of $L^2[0,r_0;r^2]$, respectively, where
\[
\|\cdot\|=\Big(\int_0^{r_0}r^2|\cdot|^2\,\mathrm{d}r\Big)^{\frac12}.\tag{5}
\]
This paper is organized as follows. In Section 2, we recall and state some preliminary results. In Section 3, we analyse the ill-posedness of the problems (IVP1) and (IVP2) and give conditional stability results. In Section 4, we give the corresponding a priori and a posteriori error estimates for the three regularization methods. In Section 5, we conduct some numerical tests to show the validity of the proposed regularization methods; since most solutions of fractional partial differential equations involve special functions (Mittag–Leffler functions) whose evaluation is quite difficult, these difficulties are overcome following [Citation61,Citation62]. Finally, we give some concluding remarks.
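The evaluation of the Mittag–Leffler functions mentioned above is the main computational difficulty in this setting. As a hedged illustration (not the authors' implementation, which follows [Citation61,Citation62]), the Python sketch below evaluates $E_{\alpha,\beta}(z)$ on the negative real axis — the only arguments needed here, namely $-\lambda_nT^\alpha$ — by combining the defining power series with the asymptotic expansion recalled in Lemma 2.2 below. The function name, the truncation lengths and the switching threshold are our own choices.

```python
# Minimal sketch (not the authors' code) for E_{alpha,beta}(z) with real z <= 0.
# Small |z|: defining power series; large |z|: asymptotic expansion (Lemma 2.2).
import numpy as np
from scipy.special import rgamma  # reciprocal gamma, finite at the poles of Gamma


def mittag_leffler_neg(alpha, beta, z, n_terms=60, switch=10.0):
    """Approximate E_{alpha,beta}(z) for real z <= 0 (moderate accuracy only)."""
    z = float(z)
    if z > 0:
        raise ValueError("this sketch only handles z <= 0")
    if abs(z) <= switch:
        # power series: sum_{k>=0} z^k / Gamma(alpha*k + beta)
        k = np.arange(n_terms)
        return float(np.sum(z**k * rgamma(alpha * k + beta)))
    # asymptotic expansion for z -> -infinity:
    # E_{alpha,beta}(z) ~ -sum_{k>=1} z^{-k} / Gamma(beta - alpha*k)
    k = np.arange(1, 10)
    return float(-np.sum((1.0 / z) ** k * rgamma(beta - alpha * k)))


if __name__ == "__main__":
    alpha, beta, T, r0 = 1.5, 1.0, 1.0, np.pi
    lam = (np.arange(1, 6) * np.pi / r0) ** 2                 # eigenvalues lambda_n
    vals = [mittag_leffler_neg(alpha, beta, -l * T**alpha) for l in lam]
    print(vals)   # E_{alpha,1}(-lambda_n T^alpha), used as singular values of K1 below
```

More robust global algorithms (for example those in [Citation61,Citation62]) are preferable near the switching point; the sketch is only meant to make the quantities appearing in the formulas below computable.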

2. Preliminary results

In this section, we give some important Lemmas.

Lemma 2.1

[Citation57]

Let $0<\alpha<2$ and let $\beta\in\mathbb{R}$ be arbitrary. Suppose $\mu$ satisfies $\frac{\pi\alpha}{2}<\mu<\min\{\pi,\pi\alpha\}$. Then there exists a constant $C_1=C_1(\alpha,\beta,\mu)>0$ such that
\[
|E_{\alpha,\beta}(z)|\le\frac{C_1}{1+|z|},\qquad \mu\le|\arg(z)|\le\pi.\tag{6}
\]

Lemma 2.2

[Citation57]

For $1<\alpha<2$, $\beta\in\mathbb{R}$ and $\eta>0$, we have
\[
E_{\alpha,\beta}(-\eta)=\frac{1}{\Gamma(\beta-\alpha)\eta}+O\Big(\frac{1}{\eta^{2}}\Big),\qquad \eta\to\infty.\tag{7}
\]

Lemma 2.3

For $\frac12<\gamma<1$, relaxation factors $a_1$, $a_2$ satisfying $0<a_1E_{\alpha,1}^{2}(-\lambda_nT^\alpha)<1$ and $0<a_2T^{2}E_{\alpha,2}^{2}(-\lambda_nT^\alpha)<1$, and $m_1\ge1$, $m_2\ge1$, we have
\[
\sup_{\lambda_n>0}\frac{\big[1-\big(1-a_1E_{\alpha,1}^{2}(-\lambda_nT^\alpha)\big)^{m_1}\big]^{\gamma}}{E_{\alpha,1}(-\lambda_nT^\alpha)}\le\sqrt{a_1m_1},\tag{8}
\]
\[
\sup_{\lambda_n>0}\frac{\big[1-\big(1-a_2T^{2}E_{\alpha,2}^{2}(-\lambda_nT^\alpha)\big)^{m_2}\big]^{\gamma}}{T\,E_{\alpha,2}(-\lambda_nT^\alpha)}\le\sqrt{a_2m_2}.\tag{9}
\]

Proof.

Refer to the appendix for the details of the proof.

Lemma 2.4

For $\frac12<\gamma<1$, $m_1\ge1$, $m_2\ge1$, $\lambda_n=(\frac{n\pi}{r_0})^{2}>0$, $0<a_1E_{\alpha,1}^{2}(-\lambda_nT^\alpha)<1$ and $0<a_2T^{2}E_{\alpha,2}^{2}(-\lambda_nT^\alpha)<1$, we have
\[
\sup_{\lambda_n>0}\big(1-a_1E_{\alpha,1}^{2}(-\lambda_nT^\alpha)\big)^{m_1}E_{\alpha,1}^{\frac p2}(-\lambda_nT^\alpha)\le c(a_1,p)\,m_1^{-\frac p4},\tag{10}
\]
\[
\sup_{\lambda_n>0}\big(1-a_2T^{2}E_{\alpha,2}^{2}(-\lambda_nT^\alpha)\big)^{m_2}T^{\frac p2}E_{\alpha,2}^{\frac p2}(-\lambda_nT^\alpha)\le c(a_2,p)\,m_2^{-\frac p4},\tag{11}
\]
where $c(a_1,p)=\big(\frac{p}{4a_1}\big)^{\frac p4}$ and $c(a_2,p)=\big(\frac{p}{4a_2}\big)^{\frac p4}$.

Proof.

Refer to the appendix for the details of the proof.

Lemma 2.5

For $1<\alpha<2$ and any fixed $T>0$, there is at most a finite index set $I_1=\{n_1,n_2,\ldots,n_N\}$ such that $E_{\alpha,1}(-(\frac{n\pi}{r_0})^{2}T^{\alpha})=0$ for $n\in I_1$ and $E_{\alpha,1}(-(\frac{n\pi}{r_0})^{2}T^{\alpha})\ne0$ for $n\notin I_1$. Meanwhile, there is at most a finite index set $I_2=\{m_1,m_2,\ldots,m_M\}$ such that $E_{\alpha,2}(-(\frac{n\pi}{r_0})^{2}T^{\alpha})=0$ for $n\in I_2$ and $E_{\alpha,2}(-(\frac{n\pi}{r_0})^{2}T^{\alpha})\ne0$ for $n\notin I_2$.

Proof.

From Lemma 2.2, we know that there exists $L_0>0$ such that
\[
E_{\alpha,1}\Big(-\Big(\frac{n\pi}{r_0}\Big)^{2}T^{\alpha}\Big)\le\frac{1}{2\Gamma(1-\alpha)(\frac{n\pi}{r_0})^{2}T^{\alpha}}<0,\qquad \Big(\frac{n\pi}{r_0}\Big)^{2}T^{\alpha}>L_0,
\]
for $1<\alpha<2$ (note that $\Gamma(1-\alpha)<0$). Thus $E_{\alpha,1}(-(\frac{n\pi}{r_0})^{2}T^{\alpha})=0$ is possible only if $(\frac{n\pi}{r_0})^{2}T^{\alpha}\le L_0$. Since $\lim_{n\to+\infty}(\frac{n\pi}{r_0})^{2}=+\infty$, only finitely many $(\frac{n\pi}{r_0})^{2}$ satisfy $(\frac{n\pi}{r_0})^{2}T^{\alpha}\le L_0$. The proof for $E_{\alpha,2}(-(\frac{n\pi}{r_0})^{2}T^{\alpha})$ is similar.

Remark 2.1

The index sets $I_1$ and $I_2$ may be empty, in which case the singular values of the operators $K_1$ and $K_2$ have no zeros. Here and below, all results for $I_1=\emptyset$ and $I_2=\emptyset$ are regarded as special cases.

Lemma 2.6

[Citation57]

For $1<\alpha<2$ and $\lambda_n=(\frac{n\pi}{r_0})^{2}$, there exist positive constants $\underline{C}$, $\overline{C}$ depending on $\alpha$, $T$ such that
\[
\frac{\underline{C}}{\lambda_n}\le\big|E_{\alpha,1}(-\lambda_nT^{\alpha})\big|\le\frac{\overline{C}}{\lambda_n},\qquad n\notin I_1,\tag{12}
\]
\[
\frac{\underline{C}}{\lambda_n}\le\big|E_{\alpha,2}(-\lambda_nT^{\alpha})\big|\le\frac{\overline{C}}{\lambda_n},\qquad n\notin I_2.\tag{13}
\]

Lemma 2.7

For $\frac12<\gamma<1$, $m_3\ge1$, $m_4\ge1$, $\lambda_n=(\frac{n\pi}{r_0})^{2}>0$, $0<a_1E_{\alpha,1}^{2}(-\lambda_nT^{\alpha})<1$ and $0<a_2T^{2}E_{\alpha,2}^{2}(-\lambda_nT^{\alpha})<1$, we have
\[
\Big(1-a_1E_{\alpha,1}^{2}\Big(-\Big(\frac{n\pi}{r_0}\Big)^{2}T^{\alpha}\Big)\Big)^{m_3}(1+n^{2})^{-\frac p2}\le C_3(m_3+1)^{-\frac p4},\tag{14}
\]
\[
\Big(1-a_2T^{2}E_{\alpha,2}^{2}\Big(-\Big(\frac{n\pi}{r_0}\Big)^{2}T^{\alpha}\Big)\Big)^{m_4}(1+n^{2})^{-\frac p2}\le C_4(m_4+1)^{-\frac p4},\tag{15}
\]
where $C_3=\big(\frac{p\pi^{4}}{a_1\underline{C}^{2}r_0^{4}}\big)^{\frac p4}$ and $C_4=\big(\frac{p\pi^{4}}{a_2T^{2}\underline{C}^{2}r_0^{4}}\big)^{\frac p4}$.

Proof.

Refer to the appendix for the details of the proof.

Lemma 2.8

For $\frac12<\gamma<1$, $m_5\ge1$, $m_6\ge1$, $\lambda_n>0$, $0<a_1E_{\alpha,1}^{2}(-(\frac{n\pi}{r_0})^{2}T^{\alpha})<1$ and $0<a_2T^{2}E_{\alpha,2}^{2}(-(\frac{n\pi}{r_0})^{2}T^{\alpha})<1$, we have
\[
\Big(1-a_1E_{\alpha,1}^{\gamma+1}\Big(-\Big(\frac{n\pi}{r_0}\Big)^{2}T^{\alpha}\Big)\Big)^{m_5}(1+n^{2})^{-\frac p2}\le C_5\,m_5^{-\frac{p}{2(\gamma+1)}},\tag{16}
\]
\[
\Big(1-a_2T^{\gamma+1}E_{\alpha,2}^{\gamma+1}\Big(-\Big(\frac{n\pi}{r_0}\Big)^{2}T^{\alpha}\Big)\Big)^{m_6}(1+n^{2})^{-\frac p2}\le C_6\,m_6^{-\frac{p}{2(\gamma+1)}},\tag{17}
\]
where $C_5=\big(\frac{\pi^{2}}{\underline{C}r_0^{2}}\big)^{\frac p2}\big(\frac{p}{2a_1(\gamma+1)}\big)^{\frac{p}{2(\gamma+1)}}$ and $C_6=\big(\frac{\pi^{2}}{T\underline{C}r_0^{2}}\big)^{\frac p2}\big(\frac{p}{2a_2(\gamma+1)}\big)^{\frac{p}{2(\gamma+1)}}$.

Proof.

Refer to the appendix for the details of the proof.

Lemma 2.9

For $0<\gamma\le1$, $m_5\ge1$, $m_6\ge1$, $0<a_1E_{\alpha,1}^{\gamma+1}(-(\frac{n\pi}{r_0})^{2}T^{\alpha})<1$ and $0<a_2T^{\gamma+1}E_{\alpha,2}^{\gamma+1}(-(\frac{n\pi}{r_0})^{2}T^{\alpha})<1$, we have
\[
\frac{1-\Big(1-a_1E_{\alpha,1}^{\gamma+1}\big(-(\frac{n\pi}{r_0})^{2}T^{\alpha}\big)\Big)^{m_5}}{E_{\alpha,1}\big(-(\frac{n\pi}{r_0})^{2}T^{\alpha}\big)}\le(a_1m_5)^{\frac{1}{\gamma+1}},\tag{18}
\]
\[
\frac{1-\Big(1-a_2T^{\gamma+1}E_{\alpha,2}^{\gamma+1}\big(-(\frac{n\pi}{r_0})^{2}T^{\alpha}\big)\Big)^{m_6}}{T\,E_{\alpha,2}\big(-(\frac{n\pi}{r_0})^{2}T^{\alpha}\big)}\le(a_2m_6)^{\frac{1}{\gamma+1}}.\tag{19}
\]

Proof.

Refer to the appendix for the details of the proof.

Lemma 2.10

For $a_1>0$, $a_2>0$, $p>0$, $m_1>0$, $m_2>0$, we have
\[
F(x)=\frac{\overline{C}r_0^{2}}{\pi^{2}}\Big(1-\frac{a_1\underline{C}^{2}r_0^{4}}{x^{2}\pi^{4}}\Big)^{m_1-1}\frac{1}{x^{\frac p2+1}}\le C_7(m_1+1)^{-\frac{p+2}{4}},\tag{20}
\]
\[
G(x)=\frac{T\overline{C}r_0^{2}}{\pi^{2}}\Big(1-\frac{a_2T^{2}\underline{C}^{2}r_0^{4}}{x^{2}\pi^{4}}\Big)^{m_2-1}\frac{1}{x^{\frac p2+1}}\le C_8(m_2+1)^{-\frac{p+2}{4}},\tag{21}
\]
where $C_7=\frac{\overline{C}r_0^{2}}{\pi^{2}}\big(\frac{(p+2)\pi^{4}}{a_1\underline{C}r_0^{4}}\big)^{\frac{p+2}{4}}$ and $C_8=\frac{T\overline{C}r_0^{2}}{\pi^{2}}\big(\frac{(p+2)\pi^{4}}{a_2T\underline{C}r_0^{4}}\big)^{\frac{p+2}{4}}$.

Proof.

Refer to the appendix for the details of the proof.

Lemma 2.11

For $a_1>0$, $a_2>0$, $p>0$, $m_5\ge1$, $m_6\ge1$, we have
\[
F(x)=\frac{\overline{C}r_0^{2}}{\pi^{2}}\Big(1-a_1\Big(\frac{\underline{C}r_0^{2}}{x\pi^{2}}\Big)^{\gamma+1}\Big)^{m_5-1}\frac{1}{x^{\frac p2+1}}\le C_9(m_5+1)^{-\frac{p+2}{2(\gamma+1)}},\tag{22}
\]
\[
G(x)=\frac{T\overline{C}r_0^{2}}{\pi^{2}}\Big(1-a_2T^{\gamma+1}\Big(\frac{\underline{C}r_0^{2}}{x\pi^{2}}\Big)^{\gamma+1}\Big)^{m_6-1}\frac{1}{x^{\frac p2+1}}\le C_{10}(m_6+1)^{-\frac{p+2}{2(\gamma+1)}},\tag{23}
\]
where $C_9=\frac{\overline{C}r_0^{2}}{\pi^{2}}\big(\frac{\underline{C}r_0^{2}}{\pi^{2}}\big)^{-\frac p2-1}\big(\frac{p+2}{a_1}\big)^{\frac{p+2}{2(\gamma+1)}}$ and $C_{10}=\frac{T\overline{C}r_0^{2}}{\pi^{2}}\big(\frac{T\underline{C}r_0^{2}}{\pi^{2}}\big)^{-\frac p2-1}\big(\frac{p+2}{a_2}\big)^{\frac{p+2}{2(\gamma+1)}}$.

Proof.

Refer to the appendix for the details of the proof.

3. The ill-posedness and the conditional stability

Define
\[
H^{p}=\Big\{f\in L^{2}[0,r_0;r^{2}]:\ \sum_{n=1}^{\infty}(1+n^{2})^{p}\,(f,R_n(r))^{2}<\infty\Big\},\tag{24}
\]
where $(\cdot,\cdot)$ is the inner product in $L^{2}[0,r_0;r^{2}]$. Then $H^{p}$ is a Hilbert space with the norm
\[
\|\varphi(\cdot)\|_{H^{p}}:=\Big(\sum_{n=1}^{\infty}(1+n^{2})^{p}\,(\varphi,R_n(r))^{2}\Big)^{\frac12},\qquad
\|\psi(\cdot)\|_{H^{p}}:=\Big(\sum_{n=1}^{\infty}(1+n^{2})^{p}\,(\psi,R_n(r))^{2}\Big)^{\frac12}.
\]

Theorem 3.1

Let $\varphi(r),\psi(r)\in L^{2}[0,r_0;r^{2}]$. Then problem (1) has a unique weak solution, given by
\[
u(r,t)=\sum_{n=1}^{\infty}\Big[t^{\alpha-1}E_{\alpha,\alpha}\Big(-\Big(\frac{n\pi}{r_0}\Big)^{2}t^{\alpha}\Big)(f(r),R_n(r))+E_{\alpha,1}\Big(-\Big(\frac{n\pi}{r_0}\Big)^{2}t^{\alpha}\Big)(\varphi(r),R_n(r))+tE_{\alpha,2}\Big(-\Big(\frac{n\pi}{r_0}\Big)^{2}t^{\alpha}\Big)(\psi(r),R_n(r))\Big]R_n(r),\tag{25}
\]
where $(\varphi(r),R_n(r))$ and $(\psi(r),R_n(r))$ are the Fourier coefficients.

Let $t=T$ in (25); then
\[
u(r,T)=\sum_{n=1}^{\infty}\Big[T^{\alpha-1}E_{\alpha,\alpha}\Big(-\Big(\frac{n\pi}{r_0}\Big)^{2}T^{\alpha}\Big)(f(r),R_n(r))+E_{\alpha,1}\Big(-\Big(\frac{n\pi}{r_0}\Big)^{2}T^{\alpha}\Big)(\varphi(r),R_n(r))+TE_{\alpha,2}\Big(-\Big(\frac{n\pi}{r_0}\Big)^{2}T^{\alpha}\Big)(\psi(r),R_n(r))\Big]R_n(r)=g(r).
\]
Denote
\[
g_1(r):=g(r)-\sum_{n=1}^{\infty}\Big[T^{\alpha-1}E_{\alpha,\alpha}\Big(-\Big(\frac{n\pi}{r_0}\Big)^{2}T^{\alpha}\Big)f_n+TE_{\alpha,2}\Big(-\Big(\frac{n\pi}{r_0}\Big)^{2}T^{\alpha}\Big)\psi_n\Big]R_n(r),
\]
and
\[
g_2(r):=g(r)-\sum_{n=1}^{\infty}\Big[T^{\alpha-1}E_{\alpha,\alpha}\Big(-\Big(\frac{n\pi}{r_0}\Big)^{2}T^{\alpha}\Big)f_n+E_{\alpha,1}\Big(-\Big(\frac{n\pi}{r_0}\Big)^{2}T^{\alpha}\Big)\varphi_n\Big]R_n(r).
\]
Then we have
\[
g_1(r)=\sum_{n=1}^{\infty}\varphi_nE_{\alpha,1}\Big(-\Big(\frac{n\pi}{r_0}\Big)^{2}T^{\alpha}\Big)R_n(r),\tag{26}
\]
and
\[
g_2(r)=\sum_{n=1}^{\infty}\psi_nTE_{\alpha,2}\Big(-\Big(\frac{n\pi}{r_0}\Big)^{2}T^{\alpha}\Big)R_n(r).\tag{27}
\]
Substituting the definitions of $\varphi_n$ and $\psi_n$ into (26) and (27), the problems (IVP1) and (IVP2) become the following integral equations:
\[
(K_1\varphi)(r)=\int_0^{r_0}\kappa_1(r,\xi)\varphi(\xi)\,\mathrm{d}\xi=g_1(r),\tag{28}
\]
where the integral kernel is
\[
\kappa_1(r,\xi)=\sum_{n=1}^{\infty}E_{\alpha,1}\Big(-\Big(\frac{n\pi}{r_0}\Big)^{2}T^{\alpha}\Big)R_n(r)R_n(\xi),\tag{29}
\]
and
\[
(K_2\psi)(r)=\int_0^{r_0}\kappa_2(r,\xi)\psi(\xi)\,\mathrm{d}\xi=g_2(r),\tag{30}
\]
where the integral kernel is
\[
\kappa_2(r,\xi)=\sum_{n=1}^{\infty}TE_{\alpha,2}\Big(-\Big(\frac{n\pi}{r_0}\Big)^{2}T^{\alpha}\Big)R_n(r)R_n(\xi).\tag{31}
\]
Due to [Citation59], the linear operators $K_1$ and $K_2$ are compact from $L^{2}[0,r_0;r^{2}]$ to $L^{2}[0,r_0;r^{2}]$, so the problems (IVP1) and (IVP2) are ill-posed.

Let $K_1^{*}$ be the adjoint of $K_1$ and $K_2^{*}$ be the adjoint of $K_2$. Since $R_n(r)=\frac{\sqrt{2}\,n\pi}{\sqrt{r_0^{3}}}\,\frac{\sin(\frac{n\pi r}{r_0})}{\frac{n\pi r}{r_0}}$ is an orthonormal system with weight $r^{2}$ in $L^{2}[0,r_0;r^{2}]$, it is easy to verify that
\[
K_1^{*}K_1R_n(\xi)=E_{\alpha,1}^{2}\Big(-\Big(\frac{n\pi}{r_0}\Big)^{2}T^{\alpha}\Big)R_n(\xi),\qquad
K_2^{*}K_2R_n(\xi)=T^{2}E_{\alpha,2}^{2}\Big(-\Big(\frac{n\pi}{r_0}\Big)^{2}T^{\alpha}\Big)R_n(\xi).
\]
Hence, the singular values of $K_1$ are $\sigma_{1n}^{(1)}=|E_{\alpha,1}(-(\frac{n\pi}{r_0})^{2}T^{\alpha})|$. Define
\[
\psi_n^{(1)}(r)=\begin{cases}R_n(r), & E_{\alpha,1}\big(-(\frac{n\pi}{r_0})^{2}T^{\alpha}\big)\ge0,\\[2pt]-R_n(r), & E_{\alpha,1}\big(-(\frac{n\pi}{r_0})^{2}T^{\alpha}\big)<0.\end{cases}\tag{32}
\]
It is clear that $\{\psi_n^{(1)}\}_{n=1}^{\infty}$ are orthonormal in $L^{2}[0,r_0;r^{2}]$, and one can verify
\[
K_1R_n(\xi)=\sigma_{1n}^{(1)}\psi_n^{(1)}(r)=E_{\alpha,1}\Big(-\Big(\frac{n\pi}{r_0}\Big)^{2}T^{\alpha}\Big)R_n(r),\tag{33}
\]
\[
K_1^{*}\psi_n^{(1)}(r)=\sigma_{1n}^{(1)}R_n(\xi)=E_{\alpha,1}\Big(-\Big(\frac{n\pi}{r_0}\Big)^{2}T^{\alpha}\Big)\psi_n^{(1)}(\xi).\tag{34}
\]
Therefore, the singular system of $K_1$ is $(\sigma_{1n}^{(1)};R_n,\psi_n^{(1)})$.
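To make the spectral objects just derived concrete, the following small Python sketch (our own illustration; the quadrature and grid choices are assumptions) evaluates the weighted inner product on $L^{2}[0,r_0;r^{2}]$, the orthonormal basis $R_n$, the Fourier coefficients $(g,R_n)$, and the singular values $\sigma_{1n}^{(1)}$; `mittag_leffler_neg` is the helper sketched in Section 1.

```python
# Sketch of the weighted inner product, the basis R_n, Fourier coefficients and
# the singular values of K1.  Not the authors' code; quadrature is a simple
# rectangle rule on a uniform grid.
import numpy as np


def weighted_inner(u, v, r):
    """(u, v) in L^2[0, r0; r^2], rectangle-rule approximation on a uniform grid."""
    return float(np.sum(r**2 * u * v) * (r[1] - r[0]))


def basis(n, r, r0):
    """Orthonormal radial eigenfunction R_n with respect to the weight r^2."""
    return np.sqrt(2.0 / r0) * np.sin(n * np.pi * r / r0) / r


def fourier_coefficients(g, r, r0, N):
    """First N coefficients g_n = (g, R_n) of a sampled function g(r)."""
    return np.array([weighted_inner(g, basis(n, r, r0), r) for n in range(1, N + 1)])


if __name__ == "__main__":
    r0, T, alpha, N = np.pi, 1.0, 1.5, 20
    r = np.linspace(1e-6, r0, 400)    # avoid r = 0, where R_n has a removable singularity
    g = np.sin(r)                      # placeholder data
    gn = fourier_coefficients(g, r, r0, N)
    lam = (np.arange(1, N + 1) * np.pi / r0) ** 2
    sigma1 = np.abs([mittag_leffler_neg(alpha, 1.0, -l * T**alpha) for l in lam])
    print(gn[:5], sigma1[:5])          # coefficients and singular values of K1
```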

By a similar verification, the singular system of $K_2$ is $(\sigma_{2n}^{(2)};R_n,\psi_n^{(2)})$, where $\sigma_{2n}^{(2)}=|TE_{\alpha,2}(-(\frac{n\pi}{r_0})^{2}T^{\alpha})|$ and
\[
\psi_n^{(2)}(r)=\begin{cases}R_n(r), & E_{\alpha,2}\big(-(\frac{n\pi}{r_0})^{2}T^{\alpha}\big)\ge0,\\[2pt]-R_n(r), & E_{\alpha,2}\big(-(\frac{n\pi}{r_0})^{2}T^{\alpha}\big)<0.\end{cases}\tag{35}
\]
In the following, the integral kernels given in (29) and (31) are rewritten as
\[
\kappa_1(r,\xi)=\sum_{n=1,n\notin I_1}^{\infty}E_{\alpha,1}\Big(-\Big(\frac{n\pi}{r_0}\Big)^{2}T^{\alpha}\Big)R_n(r)R_n(\xi),\tag{36}
\]
\[
\kappa_2(r,\xi)=\sum_{n=1,n\notin I_2}^{\infty}TE_{\alpha,2}\Big(-\Big(\frac{n\pi}{r_0}\Big)^{2}T^{\alpha}\Big)R_n(r)R_n(\xi).\tag{37}
\]
It is not hard to prove that the null spaces of the operators $K_1$ and $K_2$ are
\[
N(K_1)=\operatorname{span}\{R_n;\ n\in I_1\}\ \text{for}\ I_1\ne\emptyset;\qquad N(K_1)=\{0\}\ \text{for}\ I_1=\emptyset,
\]
\[
N(K_2)=\operatorname{span}\{R_n;\ n\in I_2\}\ \text{for}\ I_2\ne\emptyset;\qquad N(K_2)=\{0\}\ \text{for}\ I_2=\emptyset,
\]
and the ranges of the operators $K_1$ and $K_2$ are
\[
R(K_1)=\Big\{g_1\in L^{2}[0,r_0;r^{2}]\ \Big|\ (g_1,R_n)=0,\ n\in I_1;\ \sum_{n=1,n\notin I_1}^{\infty}\Big(\frac{(g_1,R_n)}{E_{\alpha,1}(-(\frac{n\pi}{r_0})^{2}T^{\alpha})}\Big)^{2}<+\infty\Big\},
\]
\[
R(K_2)=\Big\{g_2\in L^{2}[0,r_0;r^{2}]\ \Big|\ (g_2,R_n)=0,\ n\in I_2;\ \sum_{n=1,n\notin I_2}^{\infty}\Big(\frac{(g_2,R_n)}{TE_{\alpha,2}(-(\frac{n\pi}{r_0})^{2}T^{\alpha})}\Big)^{2}<+\infty\Big\}.
\]
Therefore, we have the following existence results for the solutions of the integral equations.

Theorem 3.2

If $I_1=\emptyset$, then for any $g_1\in R(K_1)$ there exists a unique solution in $L^{2}[0,r_0;r^{2}]$ of the integral equation (28), given by
\[
\varphi(r)=\sum_{n=1}^{\infty}\frac{g_{1n}}{E_{\alpha,1}(-(\frac{n\pi}{r_0})^{2}T^{\alpha})}R_n(r).\tag{38}
\]
If $I_1\ne\emptyset$, then for any $g_1\in R(K_1)$ there exist infinitely many solutions of the integral equation (28), but only one best approximate solution in $L^{2}[0,r_0;r^{2}]$, namely
\[
\varphi(r)=\sum_{n=1,n\notin I_1}^{\infty}\frac{g_{1n}}{E_{\alpha,1}(-(\frac{n\pi}{r_0})^{2}T^{\alpha})}R_n(r).\tag{39}
\]

Proof.

Suppose $\varphi(\xi)=\sum_{n=1}^{\infty}\varphi_nR_n(\xi)$ and put $g_1=\sum_{n=1,n\notin I_1}^{\infty}g_{1n}R_n(r)$ into (28); by the orthonormality of $\{R_n\}$, it is not hard to obtain the results.

Theorem 3.3

If $I_2=\emptyset$, then for any $g_2\in R(K_2)$ there exists a unique solution in $L^{2}[0,r_0;r^{2}]$ of the integral equation (30), given by
\[
\psi(r)=\sum_{n=1}^{\infty}\frac{g_{2n}}{TE_{\alpha,2}(-(\frac{n\pi}{r_0})^{2}T^{\alpha})}R_n(r).\tag{40}
\]
If $I_2\ne\emptyset$, then for any $g_2\in R(K_2)$ there exist infinitely many solutions of the integral equation (30), but only one best approximate solution, namely
\[
\psi(r)=\sum_{n=1,n\notin I_2}^{\infty}\frac{g_{2n}}{TE_{\alpha,2}(-(\frac{n\pi}{r_0})^{2}T^{\alpha})}R_n(r).\tag{41}
\]

Proof.

The proof is similar to the Theorem 3.2.

We have the following theorem on conditional stability:

Theorem 3.4

When $\varphi(r)$ satisfies the a-priori bound condition
\[
\|\varphi(r)\|_{H^{p}}\le E_1,\qquad p>0,\tag{42}
\]
where $E_1$ and $p$ are positive constants, we have
\[
\|\varphi(r)\|\le C_{11}E_1^{\frac{2}{p+2}}\|g_1\|^{\frac{p}{p+2}},\tag{43}
\]
where $C_{11}=\big(\frac{\pi^{2}}{\underline{C}r_0^{2}}\big)^{\frac{p}{p+2}}$ is a constant.

Proof.

Due to (39) and the Hölder inequality, we have
\[
\|\varphi(r)\|^{2}=\sum_{n=1,n\notin I_1}^{\infty}\varphi_n^{2}
=\sum_{n=1,n\notin I_1}^{\infty}\frac{g_{1n}^{2}}{E_{\alpha,1}^{2}(-(\frac{n\pi}{r_0})^{2}T^{\alpha})}
=\sum_{n=1,n\notin I_1}^{\infty}\frac{g_{1n}^{\frac{4}{p+2}}}{E_{\alpha,1}^{2}(-(\frac{n\pi}{r_0})^{2}T^{\alpha})}\,g_{1n}^{\frac{2p}{p+2}}
\le\Big(\sum_{n=1,n\notin I_1}^{\infty}\frac{g_{1n}^{2}}{E_{\alpha,1}^{p+2}(-(\frac{n\pi}{r_0})^{2}T^{\alpha})}\Big)^{\frac{2}{p+2}}\Big(\sum_{n=1,n\notin I_1}^{\infty}g_{1n}^{2}\Big)^{\frac{p}{p+2}}.\tag{44}
\]
Applying Lemma 2.6 and (39), we obtain
\[
\sum_{n=1,n\notin I_1}^{\infty}\frac{g_{1n}^{2}}{E_{\alpha,1}^{p+2}(-(\frac{n\pi}{r_0})^{2}T^{\alpha})}
\le\sum_{n=1,n\notin I_1}^{\infty}\frac{g_{1n}^{2}}{E_{\alpha,1}^{2}(-(\frac{n\pi}{r_0})^{2}T^{\alpha})}\Big(\frac{\lambda_n}{\underline{C}}\Big)^{p}
=\sum_{n=1,n\notin I_1}^{\infty}\varphi_n^{2}\Big(\frac{n^{2}\pi^{2}}{\underline{C}r_0^{2}}\Big)^{p}
\le\Big(\frac{\pi^{2}}{\underline{C}r_0^{2}}\Big)^{p}\|\varphi\|_{H^{p}}^{2}.\tag{45}
\]
Combining (44) and (45), we get $\|\varphi(r)\|\le C_{11}\|\varphi\|_{H^{p}}^{\frac{2}{p+2}}\|g_1\|^{\frac{p}{p+2}}$.

Theorem 3.5

When $\psi(r)$ satisfies the a-priori bound condition
\[
\|\psi(r)\|_{H^{p}}\le E_2,\qquad p>0,\tag{46}
\]
where $E_2$ and $p$ are positive constants, then
\[
\|\psi(r)\|\le C_{12}E_2^{\frac{2}{p+2}}\|g_2\|^{\frac{p}{p+2}},\tag{47}
\]
where $C_{12}=\big(\frac{\pi^{2}}{T\underline{C}r_0^{2}}\big)^{\frac{p}{p+2}}$ is a constant.

Proof.

The proof is similar to the Theorem 3.4, so it is omitted.

4. Regularization method and error estimation

By referring to [Citation59,Citation64,Citation65], we find a classical Landweber regularization method and two fractional Landweber iterative regularization methods, but it has not been established which of these methods is better. In this section, we therefore use the two kinds of fractional Landweber regularization methods and the classical Landweber regularization method to solve the problems (IVP1) and (IVP2). The error estimates between the exact solution and the corresponding regularized solutions are given, respectively. By applying the three regularization methods to the same problem, the best-performing regularization method can be identified.

By [Citation64], the fractional Landweber regularization solutions are given as follows:
\[
\varphi^{m_1,\delta}=\sum_{n=1,n\notin I_1}^{\infty}\frac{\big[1-\big(1-a_1E_{\alpha,1}^{2}(-(\frac{n\pi}{r_0})^{2}T^{\alpha})\big)^{m_1}\big]^{\gamma}}{E_{\alpha,1}(-(\frac{n\pi}{r_0})^{2}T^{\alpha})}\,g_{1n}^{\delta}R_n(r),\tag{48}
\]
and
\[
\psi^{m_2,\delta}=\sum_{n=1,n\notin I_2}^{\infty}\frac{\big[1-\big(1-a_2T^{2}E_{\alpha,2}^{2}(-(\frac{n\pi}{r_0})^{2}T^{\alpha})\big)^{m_2}\big]^{\gamma}}{TE_{\alpha,2}(-(\frac{n\pi}{r_0})^{2}T^{\alpha})}\,g_{2n}^{\delta}R_n(r).\tag{49}
\]
By [Citation59], the Landweber regularization solutions are given as follows:
\[
\varphi^{m_3,\delta}=\sum_{n=1,n\notin I_1}^{\infty}\frac{1-\big(1-a_1E_{\alpha,1}^{2}(-(\frac{n\pi}{r_0})^{2}T^{\alpha})\big)^{m_3}}{E_{\alpha,1}(-(\frac{n\pi}{r_0})^{2}T^{\alpha})}\,g_{1n}^{\delta}R_n(r),\tag{50}
\]
and
\[
\psi^{m_4,\delta}=\sum_{n=1,n\notin I_2}^{\infty}\frac{1-\big(1-a_2T^{2}E_{\alpha,2}^{2}(-(\frac{n\pi}{r_0})^{2}T^{\alpha})\big)^{m_4}}{TE_{\alpha,2}(-(\frac{n\pi}{r_0})^{2}T^{\alpha})}\,g_{2n}^{\delta}R_n(r).\tag{51}
\]
By [Citation65], the modified iterative regularization solutions are given as follows:
\[
\varphi^{m_5,\delta}=\sum_{n=1,n\notin I_1}^{\infty}\frac{1-\big(1-a_1E_{\alpha,1}^{\gamma+1}(-(\frac{n\pi}{r_0})^{2}T^{\alpha})\big)^{m_5}}{E_{\alpha,1}(-(\frac{n\pi}{r_0})^{2}T^{\alpha})}\,g_{1n}^{\delta}R_n(r),\tag{52}
\]
and
\[
\psi^{m_6,\delta}=\sum_{n=1,n\notin I_2}^{\infty}\frac{1-\big(1-a_2T^{\gamma+1}E_{\alpha,2}^{\gamma+1}(-(\frac{n\pi}{r_0})^{2}T^{\alpha})\big)^{m_6}}{TE_{\alpha,2}(-(\frac{n\pi}{r_0})^{2}T^{\alpha})}\,g_{2n}^{\delta}R_n(r),\tag{53}
\]
where $a_1$, $a_2$ are relaxation factors satisfying $a_1,a_2>0$ and $\frac12<\gamma\le1$.
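For illustration, the three regularized solutions for (IVP1) can be realized as spectral filters acting on the noisy Fourier coefficients $g_{1n}^{\delta}$. The Python sketch below is our own transcription of (48), (50) and (52); it is written in terms of the singular values $\sigma_n=|E_{\alpha,1}(-\lambda_nT^{\alpha})|$, assuming the sign of $E_{\alpha,1}$ has been absorbed into the coefficients via $\psi_n^{(1)}$ and that indices $n\in I_1$ have already been removed.

```python
# Spectral-filter form of the three regularization methods for (IVP1).
# Requirement: 0 < a1 * sigma1**2 < 1 componentwise (relaxation condition).
import numpy as np


def landweber(g1n_delta, sigma1, a1, m):
    """Classical Landweber filter, cf. Eq. (50)."""
    w = (1.0 - (1.0 - a1 * sigma1**2) ** m) / sigma1
    return w * g1n_delta


def fractional_landweber(g1n_delta, sigma1, a1, m, gamma):
    """Fractional Landweber filter, cf. Eq. (48), with 1/2 < gamma <= 1."""
    w = (1.0 - (1.0 - a1 * sigma1**2) ** m) ** gamma / sigma1
    return w * g1n_delta


def modified_iterative(g1n_delta, sigma1, a1, m, gamma):
    """Modified iterative filter, cf. Eq. (52)."""
    w = (1.0 - (1.0 - a1 * sigma1 ** (gamma + 1)) ** m) / sigma1
    return w * g1n_delta
```

The returned arrays are the coefficients of the regularized solution in the basis $\{R_n\}$; the analogous filters for (IVP2) replace $\sigma_n$ by $T|E_{\alpha,2}(-\lambda_nT^{\alpha})|$ and $a_1$ by $a_2$.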

4.1. The priori error estimate

Lemma 4.1

Suppose $f(r)$ and $g(r)$ belong to $L^{2}[0,r_0;r^{2}]$. Then there is a constant $M_1$ such that
\[
\|Q(r)\|^{2}=\Big\|g(r)-\sum_{n=1}^{\infty}T^{\alpha-1}E_{\alpha,\alpha}\Big(-\Big(\frac{n\pi}{r_0}\Big)^{2}T^{\alpha}\Big)f_nR_n(r)\Big\|^{2}\le2\big[\|g\|_{L^{2}[0,r_0;r^{2}]}^{2}+M_1^{2}\|f\|_{L^{2}[0,r_0;r^{2}]}^{2}\big],
\]
where $M_1=\frac{C_1r_0^{2}}{\pi^{2}T}$.

Proof.

From Lemma 2.1, we have
\[
\Big|E_{\alpha,\alpha}\Big(-\Big(\frac{n\pi}{r_0}\Big)^{2}T^{\alpha}\Big)\Big|\le\frac{C_1}{(\frac{n\pi}{r_0})^{2}T^{\alpha}}=\frac{C_1r_0^{2}}{n^{2}\pi^{2}T^{\alpha}}.
\]
Thus,
\[
\|Q(r)\|^{2}=\Big\|g(r)-\sum_{n=1}^{\infty}T^{\alpha-1}E_{\alpha,\alpha}\Big(-\Big(\frac{n\pi}{r_0}\Big)^{2}T^{\alpha}\Big)f_nR_n(r)\Big\|^{2}
=\sum_{n=1}^{\infty}\Big(g_n-T^{\alpha-1}E_{\alpha,\alpha}\Big(-\Big(\frac{n\pi}{r_0}\Big)^{2}T^{\alpha}\Big)f_n\Big)^{2}
\le2\sum_{n=1}^{\infty}g_n^{2}+2\sum_{n=1}^{\infty}T^{2\alpha-2}E_{\alpha,\alpha}^{2}\Big(-\Big(\frac{n\pi}{r_0}\Big)^{2}T^{\alpha}\Big)f_n^{2}
\le2\big[\|g\|_{L^{2}[0,r_0;r^{2}]}^{2}+M_1^{2}\|f\|_{L^{2}[0,r_0;r^{2}]}^{2}\big],
\]
since $T^{\alpha-1}|E_{\alpha,\alpha}(-(\frac{n\pi}{r_0})^{2}T^{\alpha})|\le\frac{C_1r_0^{2}}{n^{2}\pi^{2}T}\le\frac{C_1r_0^{2}}{\pi^{2}T}=M_1$. We complete the proof of Lemma 4.1.

Remark 4.1

Suppose
\[
Q^{\delta}(r)=g^{\delta}(r)-\sum_{n=1}^{\infty}T^{\alpha-1}E_{\alpha,\alpha}\Big(-\Big(\frac{n\pi}{r_0}\Big)^{2}T^{\alpha}\Big)f_n^{\delta}R_n(r).
\]
From Lemma 4.1, (3) and (4), we easily obtain $\|Q(r)-Q^{\delta}(r)\|\le\sqrt{2(M_1^{2}+1)}\,\delta$.

Define an orthogonal projection operator $Q_1:L^{2}[0,r_0;r^{2}]\to\overline{R(K_1)}$. Combining (3) and (4), we have
\[
\|Q_1g_1^{\delta}-Q_1g_1\|\le\|g_1^{\delta}-g_1\|=\|Q^{\delta}-Q\|\le\sqrt{2(M_1^{2}+1)}\,\delta.
\]
Define an orthogonal projection operator $Q_2:L^{2}[0,r_0;r^{2}]\to\overline{R(K_2)}$. Combining (3) and (4), we have
\[
\|Q_2g_2^{\delta}-Q_2g_2\|\le\|g_2^{\delta}-g_2\|=\|Q^{\delta}-Q\|\le\sqrt{2(M_1^{2}+1)}\,\delta.
\]

Theorem 4.1

Let $m_1=\big[(\frac{E_1}{\delta})^{\frac{4}{p+2}}\big]$. If the a-priori condition (42) and Lemma 4.1 hold, we have the following convergence estimate:
\[
\|\varphi^{m_1,\delta}(r)-\varphi(r)\|\le C_{13}E_1^{\frac{2}{p+2}}\delta^{\frac{p}{p+2}},\tag{54}
\]
where $C_{13}=c(a_1,p)\big(\frac{\pi^{2}}{\underline{C}r_0^{2}}\big)^{\frac p2}+\sqrt{2a_1(M_1^{2}+1)}$, and $[x]$ denotes the largest integer smaller than or equal to $x$.

Proof.

Using the triangle inequality, we have φm1,δ(r)φ(r)φm1,δ(r)φm1(r)+φ(r)φm1(r)=I1+I2.From (Equation8) and Remark 4.1, we have I12=φm1,δ(r)φm1(r)2=n=1,nI111a1Eα,12nπr02Tαm1γEα,1nπr02Tα(g1nδg1n)Rn(r)2supn1,nI111a1Eα,12nπr02Tαm12γEα,12nπr02Tαn=1,nI1(g1nδg1n)22a1m1(M12+1)δ2.Then we can get (55) φm1,δ(r)φm1(r)2a1m1(M12+1)δ.(55) For the second part, (56) I22=φ(r)φm1(r)2=n=1,nI1111a1Eα,12nπr02Tαm1γEα,1nπr02Tαg1nRn(r)2n=1,nI11a1Eα,12nπr02Tαm1Eα,1nπr02Tαg1nRn(r)2=n=1,nI11a1Eα,12nπr02Tα2m1Eα,12nπr02Tαg1n2=n=1,nI11a1Eα,12nπr02Tα2m1φn2(r)=n=1,nI11a1Eα,12nπr02Tα2m1(1+n2)p(1+n2)pφn2(r)E12supn1(B(n))2,(56) where B(n):=(1a1Eα,12((nπr0)2Tα))m1(1+n2)p2.

From (Equation10), we have (57) B(n)1a1Eα,12nπr02Tαm1(n2)p21a1Eα,12nπr02Tαm1C_r02π2p2Eα,1p2nπr02TαC_r02π2p2c(a1,p)m1p4.(57) Combining (Equation55), (Equation56) and (Equation57), Theorem 4.1 is proved.

Theorem 4.2

Let $m_2=\big[(\frac{E_2}{\delta})^{\frac{4}{p+2}}\big]$. If the a-priori condition (46) and Lemma 4.1 hold, we have the following convergence estimate:
\[
\|\psi^{m_2,\delta}(r)-\psi(r)\|\le C_{14}E_2^{\frac{2}{p+2}}\delta^{\frac{p}{p+2}},\tag{58}
\]
where $C_{14}=c(a_2,p)\big(\frac{\pi^{2}}{T\underline{C}r_0^{2}}\big)^{\frac p2}+\sqrt{2a_2(M_1^{2}+1)}$, and $[x]$ denotes the largest integer smaller than or equal to $x$.

Proof.

The proof process is similar to Theorem 4.1, so it is omitted.

Theorem 4.3

Let $m_3=\big[(\frac{E_1}{\delta})^{\frac{4}{p+2}}\big]$. If the a-priori condition (42) and Lemma 4.1 hold, we have the following convergence estimate:
\[
\|\varphi^{m_3,\delta}(r)-\varphi(r)\|\le C_{15}E_1^{\frac{2}{p+2}}\delta^{\frac{p}{p+2}},\tag{59}
\]
where $C_{15}=\sqrt{2a_1(M_1^{2}+1)}+C_3$, and $[x]$ denotes the largest integer smaller than or equal to $x$.

Proof.

By the triangle inequality, we have φm3,δ(r)φ(r)φm3,δ(r)φm3(r)+φm3(r)φ(r)=I1+I2.On the one hand, from Lemma 4.1, we have I12=φm3,δ(r)φm3(r)2=n=1,nI111a1Eα,12nπr02Tαm3Eα,12nπr02Tα(g1nδg1n)Rn(r)22(M12+1)supn1(A(n))2δ2,where A(n):=1(1a1Eα,12((nπr0)2Tα))m3Eα,1((nπr0)2Tα).

From Bernoulli inequality, we can deduce that 11a1Eα,12nπr02Tαm3a1m3Eα,1nπr02Tα.Thus (60) φm3,δ(r)φm3(r)2a1m3(M12+1)δ.(60) On the other hand, using the a-priori bound condition, we can deduce that (61) I22=φm3(r)φ(r)2=n=1,nI111a1Eα,12nπr02Tαm31Eα,1nπr02Tαg1nRn(r)2=n=1,nI1(1a1Eα,12nπr0)2Tα2m3Eα,12nπr02Tαg1n2=n=1,nI1(1a1Eα,12nπr0)2Tα2m3φn2(r)=n=1,nI1(1a1Eα,12nπr0)2Tα2m3(1+n2)p(1+n2)pφn2(r)supn1(B(n))2E12,(61) where B(n):=(1a1Eα,12((nπr0)2Tα))m3(1+n2)p2.

From (Equation16), we have (62) B(n)C3(m3+1)p4.(62) Combining (Equation60), (Equation61) and (Equation62), Theorem 4.3 is proved.

Theorem 4.4

Let $m_4=\big[(\frac{E_2}{\delta})^{\frac{4}{p+2}}\big]$. If the a-priori condition (46) and Lemma 4.1 hold, we have the following convergence estimate:
\[
\|\psi^{m_4,\delta}(r)-\psi(r)\|\le C_{16}E_2^{\frac{2}{p+2}}\delta^{\frac{p}{p+2}},\tag{63}
\]
where $C_{16}=\sqrt{2a_2(M_1^{2}+1)}+C_4$, and $[x]$ denotes the largest integer smaller than or equal to $x$.

Proof.

The proof process is similar to Theorem 4.3, so it is omitted.

Theorem 4.5

Let $m_5=\big[(\frac{E_1}{\delta})^{\frac{2(\gamma+1)}{p+2}}\big]$. If the a-priori condition (42) and Lemma 4.1 hold, we have the following convergence estimate:
\[
\|\varphi^{m_5,\delta}(r)-\varphi(r)\|\le C_{17}E_1^{\frac{2}{p+2}}\delta^{\frac{p}{p+2}},\tag{64}
\]
where $C_{17}=a_1^{\frac{1}{\gamma+1}}\sqrt{2(M_1^{2}+1)}+C_5$, and $[x]$ denotes the largest integer smaller than or equal to $x$.

Proof.

By the triangle inequality, we have (65) φm5,δ(r)φ(r)φm5,δ(r)φm5(r)+φm5(r)φ(r)=I1+I2.(65) On the one hand, we have I1=n=1,nI111a1Eα,1γ+1nπr02Tαm5Eα,1nπr02Tα(g1nδg1n)Rn(r).Due to (Equation18) and Lemma 4.1, we have (66) I1(a1m5)1γ+12(M12+1)δ.(66) On the other hand, we have I2=n=1,nI11a1Eα,1γ+1nπr02Tαm5Eα,1nπr02Tαg1nRn(r)=n=1,nI11a1Eα,1γ+1nπr02Tαm5φnRn(r)=n=1,nI11a1Eα,1γ+1nπr02Tαm5(n2+1)p2(n2+1)p2φnRn(r)E1supn11a1Eα,1γ+1nπr02Tαm5(n2+1)p2.From (Equation16), we have (67) I2C5m5p2(γ+1)E1.(67) Combining (Equation65), (Equation66) and (Equation67), Theorem 4.5. is proved.

Theorem 4.6

Let $m_6=\big[(\frac{E_2}{\delta})^{\frac{2(\gamma+1)}{p+2}}\big]$. If the a-priori condition (46) and Lemma 4.1 hold, we have the following convergence estimate:
\[
\|\psi^{m_6,\delta}(r)-\psi(r)\|\le C_{18}E_2^{\frac{2}{p+2}}\delta^{\frac{p}{p+2}},\tag{68}
\]
where $C_{18}=a_2^{\frac{1}{\gamma+1}}\sqrt{2(M_1^{2}+1)}+C_6$, and $[x]$ denotes the largest integer smaller than or equal to $x$.

Proof.

The proof process is similar to Theorem 4.5, so it is omitted.

4.2. The posteriori error estimate

Let $\tau_1>\sqrt{2(M_1^{2}+1)}$ be a given constant. The regularization parameter $m_1=m_1(\delta)\in\mathbb{N}_0$ is chosen as the first iteration index for which
\[
\|K_1\varphi^{m_1,\delta}(r)-g_1^{\delta}(r)\|\le\tau_1\delta\tag{69}
\]
holds, and the iteration then stops; here it is assumed that $\|g_1^{\delta}(r)\|>\tau_1\delta$.
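In the spectral representation, the residual in (69) has an explicit form, so this stopping rule can be implemented by simply increasing $m_1$ until the discrepancy first falls below $\tau_1\delta$. The following Python sketch (our own, reusing the helper names from the previous sketches) illustrates this for the fractional Landweber filter (48).

```python
# Discrepancy-principle stopping rule (69) for the fractional Landweber method.
# 'sigma1' and 'g1n_delta' are the singular values and noisy coefficients from
# the earlier sketches; the filter requires 0 < a1 * sigma1**2 < 1.
import numpy as np


def residual(m, g1n_delta, sigma1, a1, gamma):
    """||K1 phi^{m,delta} - g1^delta|| computed via Parseval's identity."""
    factor = 1.0 - (1.0 - (1.0 - a1 * sigma1**2) ** m) ** gamma
    return np.sqrt(np.sum((factor * g1n_delta) ** 2))


def stop_by_discrepancy(g1n_delta, sigma1, a1, gamma, tau1, delta, m_max=10**6):
    """Smallest m1 with residual <= tau1 * delta (Morozov discrepancy principle)."""
    for m in range(1, m_max + 1):
        if residual(m, g1n_delta, sigma1, a1, gamma) <= tau1 * delta:
            return m
    return m_max
```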

Lemma 4.2

Assume $\rho(m_1)=\|K_1\varphi^{m_1,\delta}(r)-g_1^{\delta}(r)\|$; then

  1. $\rho(m_1)$ is a continuous function;

  2. $\lim_{m_1\to0}\rho(m_1)=\|g_1^{\delta}(r)\|$;

  3. $\lim_{m_1\to\infty}\rho(m_1)=0$;

  4. $\rho(m_1)$ is strictly decreasing on $(0,+\infty)$.

Theorem 4.7

If (3) and (4) hold, then the regularization parameter $m_1=m_1(\delta)\in\mathbb{N}_0$ satisfies
\[
m_1\le C_{19}\Big(\frac{E_1}{\delta}\Big)^{\frac{4}{p+2}},\tag{70}
\]
where $C_{19}=\big(\frac{C_7}{\tau_1-\sqrt{2(M_1^{2}+1)}}\big)^{\frac{4}{p+2}}$.

Proof.

From (Equation48), we have Rm1g1=n=1,nI111a1Eα,12nπr02Tαm1γEα,1nπr02Tαg1nRn(r).For g1H2[0,r0;r2], we have K1Rm1g1g12=n=1,nI1111a1Eα,12nπr02Tαm1γg1nRn(r)2n=1,nI11a1Eα,12nπr02Tαm1g1nRn(r)2.Since |1a1Eα,12((nπr0)2Tα)|<1, we have K1Rm1I1.

From the Morozov's discrepancy principle, we obtain K1Rm11g1g1K1Rm11g1δg1δ(K1Rm11I)(g1g1δ)(τ12(M12+1))δ.Using the a-prior bound condition, we have K1Rm11g1g1=n=1,nI111a1Eα,12nπr02Tαm11γ1g1nRn(r)n=1,nI11a1Eα,12nπr02Tαm11Eα,1nπr02Tα(1+n2)p2φn(1+n2)p2supn1H(n)E1,where H(n):=(1a1Eα,12((nπr0)2Tα))m11|Eα,1((nπr0)2Tα)|(1+n2)p2.

So we have H(n)E1(τ12(M12+1))δ. From (Equation20), we obtain H(n)C7(m1+1)p+24.So we have C7(m1+1)p+24E1(τ12(M12+1))δ.Then we obtain m1C7τ12(M12+1)4p+2E1δ4p+2.The convergence result is given in the following theorem.

Theorem 4.8

Assuming that Lemma 4.1 and (39) are valid, and the regularization parameter is given by (70), then
\[
\|\varphi^{m_1,\delta}(r)-\varphi(r)\|\le C_{20}E_1^{\frac{2}{p+2}}\delta^{\frac{p}{p+2}},\tag{71}
\]
where $C_{20}=C_{11}\big(\tau_1+\sqrt{2(M_1^{2}+1)}\big)^{\frac{p}{p+2}}+\sqrt{2a_1C_{19}(M_1^{2}+1)}$.

Proof.

Using the triangle inequality, we have (72) φm1,δ(r)φ(r)φm1,δ(r)φm1(r)+φm1(r)φ(r).(72) From Theorem 4.1, we have (73) φm1,δ(r)φm1(r)2a1m1(M12+1)δ2a1C19(M12+1)E1δ2p+2δ.(73) For the second part, combining (Equation33) and 0<1(1a1Eα,12((nπr0)2Tα))m1]<1, we have K1(φm1(r)φ(r))=n=1,nI1111a1Eα,12nπr02Tαm1γg1nRn(r)n=1,nI11a1Eα,12nπr02Tαm1g1nRn(r)n=1,nI11a1Eα,12nπr02Tαm1(g1ng1nδ)Rn(r)+n=1,nI11a1Eα,12nπr02Tαm1g1nδRn(r).From (Equation3), (Equation4) and the Morozov's discrepancy principle, we have K1φm1(r)g1(r)τ1+2(M12+1)δ.Since φm1(r)φ(r)Hpn=1,nI1111a1Eα,12nπr02Tαm1γEα,1nπr02Tαg1nRn(r)Hpn=1,nI11a1Eα,12nπr02Tαm1Eα,1nπr02Tαg1nRn(r)Hpn=1,nI1g1n2Eα,12nπr02Tα(1+n2)p12n=1,nI1(1+n2)pφn212E1.Combining Theorem 3.4 and (Equation70), we have (74) φm1(r)φ(r)C11τ1+2(M12+1)pp+2E12p+2δpp+2.(74) Combining (Equation72), (Equation73) and (Equation74), we obtain the convergence estimate.

Let $\tau_2>\sqrt{2(M_1^{2}+1)}$ be a given constant. The regularization parameter $m_2=m_2(\delta)\in\mathbb{N}_0$ is chosen as the first iteration index for which
\[
\|K_2\psi^{m_2,\delta}(r)-g_2^{\delta}(r)\|\le\tau_2\delta\tag{75}
\]
holds, and the iteration then stops; here it is assumed that $\|g_2^{\delta}(r)\|>\tau_2\delta$.

Lemma 4.3

Assume $\rho(m_2)=\|K_2\psi^{m_2,\delta}(r)-g_2^{\delta}(r)\|$; then

  1. $\rho(m_2)$ is a continuous function;

  2. $\lim_{m_2\to0}\rho(m_2)=\|g_2^{\delta}\|$;

  3. $\lim_{m_2\to\infty}\rho(m_2)=0$;

  4. $\rho(m_2)$ is strictly decreasing on $(0,+\infty)$.

Theorem 4.9

If (3) and (4) hold, then the regularization parameter $m_2=m_2(\delta)\in\mathbb{N}_0$ satisfies
\[
m_2\le C_{21}\Big(\frac{E_2}{\delta}\Big)^{\frac{4}{p+2}},\tag{76}
\]
where $C_{21}=\big(\frac{C_8}{\tau_2-\sqrt{2(M_1^{2}+1)}}\big)^{\frac{4}{p+2}}$.

Proof.

The proof process is similar to Theorem 4.7, so it is omitted.

Theorem 4.10

Assuming that (3), (4) and (41) are valid, and the regularization parameter is given by (76), then
\[
\|\psi^{m_2,\delta}(r)-\psi(r)\|\le C_{22}E_2^{\frac{2}{p+2}}\delta^{\frac{p}{p+2}},\tag{77}
\]
where $C_{22}=\sqrt{2a_2C_{21}(M_1^{2}+1)}+C_{12}\big(\tau_2+\sqrt{2(M_1^{2}+1)}\big)^{\frac{p}{p+2}}$.

Proof.

The proof process is similar to Theorem 4.8, so it is omitted.

Let $\tau_3>\sqrt{2(M_1^{2}+1)}$ be a given constant. The regularization parameter $m_3=m_3(\delta)\in\mathbb{N}_0$ is chosen as the first iteration index for which
\[
\|K_1\varphi^{m_3,\delta}(r)-g_1^{\delta}(r)\|\le\tau_3\delta\tag{78}
\]
holds, and the iteration then stops; here it is assumed that $\|g_1^{\delta}(r)\|>\tau_3\delta$.

Lemma 4.4

Assume $\rho(m_3)=\|K_1\varphi^{m_3,\delta}(r)-g_1^{\delta}(r)\|$; then

  1. $\rho(m_3)$ is a continuous function;

  2. $\lim_{m_3\to0}\rho(m_3)=\|g_1^{\delta}\|$;

  3. $\lim_{m_3\to\infty}\rho(m_3)=0$;

  4. $\rho(m_3)$ is strictly decreasing on $(0,+\infty)$.

Theorem 4.11

If (3) and (4) hold, then the regularization parameter $m_3=m_3(\delta)\in\mathbb{N}_0$ satisfies
\[
m_3\le C_{23}\Big(\frac{E_1}{\delta}\Big)^{\frac{4}{p+2}},\tag{79}
\]
where $C_{23}=\big(\frac{C_7}{\tau_3-\sqrt{2(M_1^{2}+1)}}\big)^{\frac{4}{p+2}}$.

Proof.

From (Equation50), we have Rm3g1=n=1,nI111a1Eα,12nπr0Tαm3Eα,1nπr02Tαg1nRn(r).For g1H2[0,r0;r2], we have K1Rm3g1g12=n=1,nI111a1Eα,12nπr02Tαm3g1nRn(r)n=1,nI1g1nRn(r)2=n=1,nI11a1Eα,12nπr02Tα2m3g1n2.Since |1a1Eα,12((nπr0)2Tα)|<1, we have K1Rm3I1.

On the one hand, we have K1Rm31g1g1K1Rm31g1δg1δ(K1Rm31I)(g1g1δ)(τ32(M12+1))δ.On the other hand, from the a-priori bound condition, we know K1Rm31g1g1=n=1,nI11a1Eα,12nπr02Tαm31g1nRn(r)=n=1,nI11a1Eα,12nπr02Tαm31Eα,1nπr02Tαn=1,nI1(1+n2)p2φn(1+n2)p2supn1H(n)E1,where H(n):=(1a1Eα,12((nπr0)2Tα))m31|Eα,1((nπr0)2Tα)|(1+n2)p2.

So, we have H(n)E1(τ32(M12+1))δ. Thus we have C7(m3+1)p+24E1τ32(M12+1)δ,and m3C7τ32(M12+1)4p+2E1δ4p+2.

The convergence result is given in the following theorem.

Theorem 4.12

Assuming that (3), (4) and (39) are valid, and the regularization parameter is given by (79), then
\[
\|\varphi^{m_3,\delta}(r)-\varphi(r)\|\le C_{24}E_1^{\frac{2}{p+2}}\delta^{\frac{p}{p+2}},\tag{80}
\]
where $C_{24}=C_{11}\big(\tau_3+\sqrt{2(M_1^{2}+1)}\big)^{\frac{p}{p+2}}+\sqrt{2a_1C_{23}(M_1^{2}+1)}$.

Proof.

By the triangle inequality, we have (81) φm3,δ(r)φ(r)φm3,δ(r)φm3(r)+φm3(r)φ(r)=I1+I2.(81) For the first part, combining Theorem 4.3 and (Equation79), we have (82) I1=φm3,δ(r)φm3(r)2a1m3(M12+1)δ2a1C23(M12+1)E1δ2p+2δ.(82) For the second part, we have K1(φm3(r)φ(r))=n=1,nI111a1Eα,12nπr02Tαm3g1nRn(r)n=1,nI1g1nRn(r)=n=1,nI11a1Eα,12nπr02Tαm3(g1ng1nδ)Rn(r)+n=1,nI11a1Eα,12nπr02Tαm3g1nδRn(r).From (Equation3) and(Equation4) and the Morozov's discrepancy principle, we have K1φm3(r)g1(r)τ3+2(M12+1)δ.Since φm3(r)φ(r)Hp=n=1,nI111a1Eα,12nπr02Tαm3Eα,1nπr02Tαg1nRn(r)n=1,nI11Eα,1nπr02Tαg1nRn(r)Hpn=1,nI1g1n2Eα,12nπr02Tα(1+n2)p12n=1,nI1(1+n2)pφn212E1.Under the a-prior bound condition of φHpE1(p>0), we obtain (83) φm3(r)φ(r)C11τ3+2(M12+1)pp+2E12p+2δpp+2.(83) Combining (Equation81), (Equation82) and (Equation83), we obtain the convergence estimate.

Let $\tau_4>\sqrt{2(M_1^{2}+1)}$ be a given constant. The regularization parameter $m_4=m_4(\delta)\in\mathbb{N}_0$ is chosen as the first iteration index for which
\[
\|K_2\psi^{m_4,\delta}(r)-g_2^{\delta}(r)\|\le\tau_4\delta\tag{84}
\]
holds, and the iteration then stops; here it is assumed that $\|g_2^{\delta}(r)\|>\tau_4\delta$.

Lemma 4.5

Assume $\rho(m_4)=\|K_2\psi^{m_4,\delta}(r)-g_2^{\delta}(r)\|$; then

  1. $\rho(m_4)$ is a continuous function;

  2. $\lim_{m_4\to0}\rho(m_4)=\|g_2^{\delta}\|$;

  3. $\lim_{m_4\to\infty}\rho(m_4)=0$;

  4. $\rho(m_4)$ is strictly decreasing on $(0,+\infty)$.

Theorem 4.13

If (3) and (4) hold, then the regularization parameter $m_4=m_4(\delta)\in\mathbb{N}_0$ satisfies
\[
m_4\le C_{25}\Big(\frac{E_2}{\delta}\Big)^{\frac{4}{p+2}},\tag{85}
\]
where $C_{25}=\big(\frac{C_8}{\tau_4-\sqrt{2(M_1^{2}+1)}}\big)^{\frac{4}{p+2}}$.

Proof.

The proof process is similar to Theorem 4.11, so it is omitted.

Theorem 4.14

Assuming that (3), (4) and (41) are valid, and the regularization parameter is given by (85), then
\[
\|\psi^{m_4,\delta}(r)-\psi(r)\|\le C_{26}E_2^{\frac{2}{p+2}}\delta^{\frac{p}{p+2}},\tag{86}
\]
where $C_{26}=\sqrt{2a_2C_{25}(M_1^{2}+1)}+C_{12}\big(\tau_4+\sqrt{2(M_1^{2}+1)}\big)^{\frac{p}{p+2}}$.

Proof.

The proof process is similar to Theorem 4.12, so it is omitted.

Let $\tau_5>\sqrt{2(M_1^{2}+1)}$ be a given constant. The regularization parameter $m_5=m_5(\delta)\in\mathbb{N}_0$ is chosen as the first iteration index for which
\[
\|K_1\varphi^{m_5,\delta}(r)-g_1^{\delta}(r)\|\le\tau_5\delta\tag{87}
\]
holds, and the iteration then stops; here it is assumed that $\|g_1^{\delta}(r)\|>\tau_5\delta$.

Lemma 4.6

Assume $\rho(m_5)=\|K_1\varphi^{m_5,\delta}(r)-g_1^{\delta}(r)\|$; then

  1. $\rho(m_5)$ is a continuous function;

  2. $\lim_{m_5\to0}\rho(m_5)=\|g_1^{\delta}\|$;

  3. $\lim_{m_5\to\infty}\rho(m_5)=0$;

  4. $\rho(m_5)$ is strictly decreasing on $(0,+\infty)$.

Theorem 4.15

For $0<\gamma\le1$, $m_5\ge1$ and $0<a_1E_{\alpha,1}^{\gamma+1}(-(\frac{n\pi}{r_0})^{2}T^{\alpha})<1$, the regularization parameter $m_5=m_5(\delta)$ chosen by (87) satisfies
\[
m_5\le C_{27}\Big(\frac{E_1}{\delta}\Big)^{\frac{2(\gamma+1)}{p+2}},\tag{88}
\]
where $C_{27}=\big(\frac{C_9}{\tau_5-\sqrt{2(M_1^{2}+1)}}\big)^{\frac{2(\gamma+1)}{p+2}}$.

Proof.

From (Equation52), we have Rm5g1=n=1,nI111a1Eα,1γ+1nπr02Tαm5Eα,1nπr02Tαg1nRn(r),then K1Rm5g1g12=K1Rm5g1K1φ2=n=1,nI111a1Eα,1γ+1nπr02Tαm5g1nRn(r)n=1,nI1g1nRn(r)2=n=1,nI11a1Eα,1γ+1nπr02Tαm5g1nRn(r)2,which implies K1Rm5I1.

On the one hand, we have K1Rm51g1g1K1Rm51g1δg1δ(K1Rm51I)(g1g1δ)τ5δ(K1Rm51I)2(M12+1)δτ52(M12+1)δ.On the other hand, we obtain K1Rm51g1g12=n=1,nI1(1a1Eα,1γ+1nπr0)2Tαm51g1n2=n=1,nI11a1Eα,1γ+1nπr02Tαm51Eα,1nπr02Tαn=1,nI1(1+n2)p2(1+n2)p2φn2supn1(Q(n))2E12,where Q(n):=(1a1Eα,1γ+1((nπr0)2Tα))m51Eα,1((nπr0)2Tα)(1+n2)p2.

From Lemma 2.6, we have Q(n)1a1C_r02n2π2γ+1m51C¯r02n2π2(1+n2)p2.From (Equation22), we know Q(n)C9(m5+1)p+22(γ+1).Then we can deduce that C9(m5+1)p+22(γ+1)E1τ52(M12+1)δ.

Theorem 4.16

Assuming that (3), (4) and (39) are valid, and the regularization parameter is given by (88), then
\[
\|\varphi^{m_5,\delta}(r)-\varphi(r)\|\le C_{28}E_1^{\frac{2}{p+2}}\delta^{\frac{p}{p+2}},\tag{89}
\]
where $C_{28}=(a_1C_{27})^{\frac{1}{\gamma+1}}\sqrt{2(M_1^{2}+1)}+C_{11}\big(\tau_5+\sqrt{2(M_1^{2}+1)}\big)^{\frac{p}{p+2}}$.

Proof.

By the triangle inequality, we have (90) φm5,δ(r)φ(r)φm5,δ(r)φm5(r)+φm5(r)φ(r)=I1+I2.(90) For the first part, combining Theorem 4.5, we have (91) I1=φm5,δ(r)φm5(r)(a1m5)1γ+12(M12+1)δ(a1C27)1γ+12(M12+1)E1δ2p+2δ.(91) For the second part, we have K1(φm5(r)φ(r))=n=1,nI111a1Eα,1γ+1nπr02Tαm5g1nRn(r)n=1,nI1g1nRn(r)=n=1,nI11a1Eα,1γ+1nπr02Tαm5(g1ng1nδ)Rn(r)+n=1,nI11a1Eα,1γ+1nπr02Tαm5g1nδRn(r).From (Equation3), (Equation4) and the Morozov's discrepancy principle, we have K1φm5(r)g1(r)τ5+2(M12+1)δ.Since φm5(r)φ(r)Hp=n=1,nI111a1Eα,1γ+1nπr02Tαm5Eα,1nπr02Tαg1nRn(r)n=1,nI11Eα,1nπr02Tαg1nRn(r)Hpn=1,nI1g1n2Eα,12nπr02Tα(1+n2)p12n=1,nI1(1+n2)pφn212E1.Under the a-prior bound condition of φHpE1(p>0), we have (92) φm5(r)φ(r)C11τ5+2(M12+1)pp+2E12p+2δpp+2.(92) Combining (Equation90), (Equation91) and (Equation92), we obtain the convergence estimate.

Let $\tau_6>\sqrt{2(M_1^{2}+1)}$ be a given constant. The regularization parameter $m_6=m_6(\delta)\in\mathbb{N}_0$ is chosen as the first iteration index for which
\[
\|K_2\psi^{m_6,\delta}(r)-g_2^{\delta}(r)\|\le\tau_6\delta\tag{93}
\]
holds, and the iteration then stops; here it is assumed that $\|g_2^{\delta}(r)\|>\tau_6\delta$.

Lemma 4.7

Assume $\rho(m_6)=\|K_2\psi^{m_6,\delta}(r)-g_2^{\delta}(r)\|$; then

  1. $\rho(m_6)$ is a continuous function;

  2. $\lim_{m_6\to0}\rho(m_6)=\|g_2^{\delta}\|$;

  3. $\lim_{m_6\to\infty}\rho(m_6)=0$;

  4. $\rho(m_6)$ is strictly decreasing on $(0,+\infty)$.

Theorem 4.17

If (3) and (4) hold, then the regularization parameter $m_6=m_6(\delta)\in\mathbb{N}_0$ satisfies
\[
m_6\le C_{29}\Big(\frac{E_2}{\delta}\Big)^{\frac{2(\gamma+1)}{p+2}},\tag{94}
\]
where $C_{29}=\big(\frac{C_{10}}{\tau_6-\sqrt{2(M_1^{2}+1)}}\big)^{\frac{2(\gamma+1)}{p+2}}$.

Proof.

The proof process is similar to Theorem 4.15, so it is omitted.

Theorem 4.18

Assuming that (3), (4) and (41) are valid, and the regularization parameter is given by (94), then
\[
\|\psi^{m_6,\delta}(r)-\psi(r)\|\le C_{30}E_2^{\frac{2}{p+2}}\delta^{\frac{p}{p+2}},\tag{95}
\]
where $C_{30}=(a_2C_{29})^{\frac{1}{\gamma+1}}\sqrt{2(M_1^{2}+1)}+C_{12}\big(\tau_6+\sqrt{2(M_1^{2}+1)}\big)^{\frac{p}{p+2}}$.

Proof.

The proof process is similar to Theorem 4.16, so it is omitted.

5. Numerical implementation and numerical examples

In this section, we present numerical examples obtained by using Matlab language [Citation70,Citation71] to illustrate the usefulness of the proposed method. Since the analytic solution of (Equation1) is difficult to obtain, we construct the final data g(r) by solving the forward problem with the given data φ(r) and ψ(r) by the finite difference method.

We choose T = 1, r0=π. Let Δt=T/N and Δr=π/M be the step sizes for time and space variables, respectively. The grid points in the time interval are labelled tk=kΔt, k=1,2,,N, the grid points in the space interval are ri=iΔr, i=1,2,,M, and set uik=u(ri,tk).

Based on the ideas in [Citation61–63,Citation66–69], we approximate the time-fractional derivative by
\[
D_t^{\alpha}u(r_i,t_k)\approx\frac{(\Delta t)^{1-\alpha}}{\Gamma(3-\alpha)}\Big[\frac{b_0}{\Delta t}\big(u_i^{k}-u_i^{k-1}\big)-\sum_{j=1}^{k-1}(b_{k-j-1}-b_{k-j})\frac{u_i^{j}-u_i^{j-1}}{\Delta t}-b_{k-1}\psi(r_i)\Big],\tag{96}
\]
where $i=1,2,\ldots,M-1$, $k=1,2,\ldots,N$ and $b_j=(j+1)^{2-\alpha}-j^{2-\alpha}$.

We approximate the space derivatives by
\[
u_r(r_i,t_k)\approx\frac{u_{i+1}^{k}-u_i^{k}}{\Delta r},\tag{97}
\]
\[
u_{rr}(r_i,t_k)\approx\frac{u_{i+1}^{k}-2u_i^{k}+u_{i-1}^{k}}{(\Delta r)^{2}}.\tag{98}
\]
Next, the measured data are generated in the following random form:
\[
f^{\delta}(r)=f(r)+\delta f(r)\,\big(2\,\mathrm{rand}(\mathrm{size}(f(r)))-1\big),\tag{99}
\]
\[
g^{\delta}(r)=g(r)+\delta g(r)\,\big(2\,\mathrm{rand}(\mathrm{size}(g(r)))-1\big).\tag{100}
\]
To carry out the sensitivity analysis of the numerical results, we calculate the $L^{2}[0,r_0;r^{2}]$ error by
\[
e(\varphi)=\sqrt{\frac{1}{M+1}\sum\big(\varphi(r)-\varphi^{m_i,\delta}(r)\big)^{2}},\qquad i=1,\ldots,6.\tag{101}
\]
The relative $L^{2}[0,r_0;r^{2}]$ error is defined by
\[
e_R(\varphi)=\sqrt{\frac{\sum\big(\varphi(r)-\varphi^{m_i,\delta}(r)\big)^{2}}{\sum\varphi^{2}(r)}},\qquad i=1,\ldots,6.\tag{102}
\]
Similarly,
\[
e(\psi)=\sqrt{\frac{1}{M+1}\sum\big(\psi(r)-\psi^{m_i,\delta}(r)\big)^{2}},\qquad i=1,\ldots,6,\tag{103}
\]
and the relative $L^{2}[0,r_0;r^{2}]$ error is
\[
e_R(\psi)=\sqrt{\frac{\sum\big(\psi(r)-\psi^{m_i,\delta}(r)\big)^{2}}{\sum\psi^{2}(r)}},\qquad i=1,\ldots,6.\tag{104}
\]
Denote $U^{k}=(u_1^{k},u_2^{k},\ldots,u_{M-1}^{k})^{T}$ and $f=(f(r_1),f(r_2),\ldots,f(r_{M-1}))^{T}$. Then we obtain the following iterative scheme:
\[
\begin{cases}
AU^{1}=f,\\
AU^{k}=h(\omega_1U^{k-1}+\omega_2U^{k-2}+\cdots+\omega_{k-1}U^{1})+f, & k=2,3,\ldots,N,
\end{cases}\tag{105}
\]
where $h=\frac{(\Delta t)^{1-\alpha}}{\Gamma(3-\alpha)}$, $\omega_i=b_{i-1}-b_i$, and $A=(a_{i,j})_{(M-1)\times(M-1)}$ is the tridiagonal matrix
\[
A=\begin{pmatrix}
d_1 & -\frac{1}{(\Delta r)^{2}}a_{\frac32} & & \\
-\frac{1}{(\Delta r)^{2}} & d_2 & -\frac{1}{(\Delta r)^{2}}a_{\frac52} & \\
 & \ddots & \ddots & \ddots\\
 & & -\frac{1}{(\Delta r)^{2}} & d_{M-1}
\end{pmatrix},
\]
where $d_i=hb_0+\frac{1}{(\Delta r)^{2}}\big(2+\frac{2}{i}\big)$, $i=1,2,\ldots,M-1$, and $a_{j+\frac12}=1+\frac{2}{j}$, $j=1,2,\ldots,M-2$.

Thus we can obtain g(r)=UN by iterative scheme (Equation105).
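As an illustration of this forward solve, the Python sketch below assembles the approximations (96)–(98) into a time-stepping scheme and returns $g(r)=u(r,T)$. The treatment of the boundedness condition at $r=0$ (a discrete zero-gradient condition $u_0^{k}=u_1^{k}$) and the assembly details are our own assumptions, not the authors' exact implementation of (105).

```python
# Sketch of a forward solver for problem (1) based on (96)-(98); dense linear
# algebra is used for simplicity (M ~ 100 as in the paper).
import numpy as np
from math import gamma


def solve_forward(alpha, T, r0, M, N, f, phi, psi):
    """Return the interior grid r_i and u(r_i, T) for D_t^a u - u_rr - (2/r)u_r = f."""
    dr, dt = r0 / M, T / N
    r = np.linspace(0.0, r0, M + 1)
    h = dt ** (-alpha) / gamma(3.0 - alpha)            # weight of the leading difference
    j = np.arange(N + 1)
    b = (j + 1.0) ** (2.0 - alpha) - j ** (2.0 - alpha)

    # spatial operator -u_rr - (2/r) u_r on interior nodes i = 1..M-1
    i = np.arange(1, M)
    main = h * b[0] + 2.0 / dr**2 + 2.0 / (r[i] * dr)
    upper = -(1.0 / dr**2 + 2.0 / (r[i[:-1]] * dr))
    lower = -np.full(M - 2, 1.0 / dr**2)
    A = np.diag(main) + np.diag(upper, 1) + np.diag(lower, -1)
    A[0, 0] -= 1.0 / dr**2                              # u_0 = u_1: boundedness at r = 0
    # u_M = 0 (Dirichlet at r = r0) is enforced by the matrix size.

    U = np.zeros((N + 1, M - 1))
    U[0] = phi(r[i])
    for k in range(1, N + 1):
        hist = h * b[0] * U[k - 1]
        for jj in range(1, k):                          # memory terms of (96)
            hist += h * (b[k - jj - 1] - b[k - jj]) * (U[jj] - U[jj - 1])
        rhs = f(r[i]) + hist + dt ** (1.0 - alpha) / gamma(3.0 - alpha) * b[k - 1] * psi(r[i])
        U[k] = np.linalg.solve(A, rhs)
    return r[i], U[N]                                   # g(r) = u(r, T) on the interior grid


# example call (placeholder data, mirroring Example 5.1 with psi = 0, f = 0):
# r, gT = solve_forward(1.5, 1.0, np.pi, 100, 50,
#                       lambda r: np.zeros_like(r), np.sin, lambda r: np.zeros_like(r))
```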

For the regularized problem, we can also use the finite difference scheme to discretize Equation (Equation48).

In the computational procedure, we take p = 1. In the discrete setting, we take M = 100, N = 50 to compute the direct problem. We use the dichotomy (bisection) method to solve (69) and obtain the a posteriori regularization parameter, where τ=1.1.
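A hedged sketch of this dichotomy search: since $\rho(m)$ is strictly decreasing (Lemma 4.2), one can bracket the first $m$ with $\rho(m)\le\tau\delta$ by doubling and then bisect; `residual` is the helper from the sketch following the stopping rule (69).

```python
# Bisection ("dichotomy") search for the a posteriori regularization parameter.
def choose_m_bisection(g1n_delta, sigma1, a1, gamma, tau1, delta):
    """Smallest integer m with residual(m) <= tau1 * delta, found by bisection."""
    if residual(1, g1n_delta, sigma1, a1, gamma) <= tau1 * delta:
        return 1
    lo, hi = 1, 2
    while residual(hi, g1n_delta, sigma1, a1, gamma) > tau1 * delta:
        lo, hi = hi, 2 * hi                 # bracket: rho(lo) > tau*delta >= rho(hi)
    while lo + 1 < hi:
        mid = (lo + hi) // 2
        if residual(mid, g1n_delta, sigma1, a1, gamma) <= tau1 * delta:
            hi = mid
        else:
            lo = mid
    return hi
```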

Example 5.1

Take function φ(r)=sin(r).

In Figures 1–3, we give numerical results of Example 5.1 under the a posteriori parameter choice rule for various noise levels ϵ=0.01,0.008,0.005 in the cases α=1.2,1.5,1.8. It can be seen that the numerical error decreases as the noise is reduced. Moreover, the smaller α is, the better the approximation.

Figure 1. The exact solution and regular solution of Landweber regularization method by using the a posteriori parameter choice rule for Example 5.1. (a) α=1.2, (b) α=1.5, (c) α=1.8.

Figure 1. The exact solution and regular solution of Landweber regularization method by using the a posteriori parameter choice rule for Example 5.1. (a) α=1.2, (b) α=1.5, (c) α=1.8.

Figure 2. The exact solution and regular solution of fractional Landweber regularization method by using the a posteriori parameter choice rule for Example 5.1. (a) α=1.2, (b) α=1.5, (c) α=1.8.

Figure 2. The exact solution and regular solution of fractional Landweber regularization method by using the a posteriori parameter choice rule for Example 5.1. (a) α=1.2, (b) α=1.5, (c) α=1.8.

Figure 3. The exact solution and regular solution of modified iterative regularization method by using the a posteriori parameter choice rule for Example 5.1. (a) α=1.2, (b) α=1.5, (c) α=1.8.

Figure 3. The exact solution and regular solution of modified iterative regularization method by using the a posteriori parameter choice rule for Example 5.1. (a) α=1.2, (b) α=1.5, (c) α=1.8.

Example 5.2

Take the function
\[
\varphi(r)=\begin{cases}2r, & 0<r<\frac{\pi}{2},\\ 2(\pi-r), & \frac{\pi}{2}\le r<\pi.\end{cases}
\]
In Figures 4–6, we give numerical results of Example 5.2 under the a posteriori parameter choice rule for various noise levels ϵ=0.01,0.008,0.005 in the cases α=1.2,1.5,1.8. It can be seen that the numerical error decreases as the noise is reduced. Moreover, the smaller α is, the better the approximation.

Figure 4. The exact solution and regular solution of Landweber regularization method by using the a posteriori parameter choice rule for Example 5.2. (a) α=1.2, (b) α=1.5, (c) α=1.8.

Figure 4. The exact solution and regular solution of Landweber regularization method by using the a posteriori parameter choice rule for Example 5.2. (a) α=1.2, (b) α=1.5, (c) α=1.8.

Figure 5. The exact solution and regular solution of fractional Landweber regularization method by using the a posteriori parameter choice rule for Example 5.2. (a) α=1.2, (b) α=1.5, (c) α=1.8.

Figure 5. The exact solution and regular solution of fractional Landweber regularization method by using the a posteriori parameter choice rule for Example 5.2. (a) α=1.2, (b) α=1.5, (c) α=1.8.

Figure 6. The exact solution and regular solution of modified iterative regularization method by using the a posteriori parameter choice rule for Example 5.2. (a) α=1.2, (b) α=1.5, (c) α=1.8.

Figure 6. The exact solution and regular solution of modified iterative regularization method by using the a posteriori parameter choice rule for Example 5.2. (a) α=1.2, (b) α=1.5, (c) α=1.8.

We fix ϵ=0.01. From Table 1, when α=1.2, the iteration steps of the Landweber regularization method, the fractional Landweber regularization method and the modified iterative method are 597, 75 and 137, respectively. When α=1.5, they are 272, 41 and 67. When α=1.8, they are 262, 42 and 67. We can deduce that, with ϵ=0.01 fixed, the fractional Landweber regularization method needs the fewest iteration steps. For ϵ=0.008 and ϵ=0.005, the same conclusion is obtained.

Table 1. The iteration steps of Example 5.1 for different regularization method.

We fix α=1.2. When ϵ=0.01, the iteration steps of the Landweber regularization method, the fractional Landweber regularization method and the modified iterative method are 597, 75 and 137, respectively. When ϵ=0.008, they are 766, 93 and 170. When ϵ=0.005, they are 1362, 138 and 261. We can deduce that, with α=1.2 fixed, the fractional Landweber regularization method needs the fewest iteration steps. For α=1.5 and α=1.8, the same conclusion is obtained.

In summary, the fractional Landweber regularization method requires the fewest iteration steps.

In Table 2, we use a computer with an Intel(R) Core(TM) i5-6200U CPU @ 2.30 GHz 2.40 GHz and 4.00 GB of RAM to measure the CPU time. The specific analysis is as follows. We fix α=1.2. From Table 2, when ϵ=0.01, the CPU times of the Landweber regularization method, the fractional Landweber regularization method and the modified iterative method are 11.9304 s, 1.7742 s and 2.8694 s, respectively. When ϵ=0.008, they are 15.4281 s, 1.5056 s and 3.4975 s. When ϵ=0.005, they are 27.4445 s, 2.4426 s and 5.4024 s. We can deduce that, with α=1.2 fixed, the fractional Landweber regularization method requires the least CPU time. For α=1.5 and α=1.8, the same conclusion is obtained.

Table 2. The CPU time of Example 5.1 for different regularization method.

We fix ϵ=0.01. When α=1.2, the CPU times of the Landweber regularization method, the fractional Landweber regularization method and the modified iterative method are 11.9304 s, 1.7742 s and 2.8694 s, respectively. When α=1.5, they are 5.8485 s, 0.6786 s and 1.6987 s. When α=1.8, they are 5.7179 s, 0.7475 s and 1.7893 s. We can deduce that, with ϵ=0.01 fixed, the fractional Landweber regularization method requires the least CPU time. For ϵ=0.008 and ϵ=0.005, the same conclusion is obtained. In summary, the fractional Landweber regularization method requires the least CPU time.

From Table 3, we can deduce that, for fixed α, the errors between the exact solution and the approximate solution become smaller as the measurement error decreases. We also infer that, for fixed α and ϵ, the error between the exact solution and the approximate solution of the fractional Landweber method is smaller than those obtained by the Landweber regularization method and the modified iterative method.

Table 3. Error behaviour of Example 5.1 for different α with ϵ=0.01,0.005.

From Table 4, we can see that the error between the exact solution and the approximate solution of the fractional Landweber method is smaller than those obtained by the Landweber regularization method and the modified iterative method for fixed α and ϵ.

Table 4. Error behaviour of Example 5.2 for different α with ϵ=0.01,0.005.



In Figures 7–9, we give numerical results of Example 5.3 under the a posteriori parameter choice rule for various noise levels ϵ=0.01,0.008,0.005 in the cases α=1.2,1.5,1.8. It can be seen that the numerical error decreases as the noise is reduced. Moreover, the smaller α is, the better the approximation.

Figure 7. The exact solution and regular solution of Landweber regularization method by using the a posteriori parameter choice rule for Example 5.3. (a) α=1.2, (b) α=1.5, (c) α=1.8.

Figure 7. The exact solution and regular solution of Landweber regularization method by using the a posteriori parameter choice rule for Example 5.3. (a) α=1.2, (b) α=1.5, (c) α=1.8.

Figure 8. The exact solution and regular solution of fractional Landweber regularization method by using the a posteriori parameter choice rule for Example 5.3. (a) α=1.2, (b) α=1.5, (c) α=1.8.

Figure 8. The exact solution and regular solution of fractional Landweber regularization method by using the a posteriori parameter choice rule for Example 5.3. (a) α=1.2, (b) α=1.5, (c) α=1.8.

Figure 9. The exact solution and regular solution of modified iterative regularization method by using the a posteriori parameter choice rule for Example 5.3. (a) α=1.2, (b) α=1.5, (c) α=1.8.

Figure 9. The exact solution and regular solution of modified iterative regularization method by using the a posteriori parameter choice rule for Example 5.3. (a) α=1.2, (b) α=1.5, (c) α=1.8.

Example 5.3

Take the function
\[
\varphi(r)=\begin{cases}0, & 0<r\le\frac{\pi}{2},\\ 1, & \frac{\pi}{2}<r<\pi.\end{cases}
\]

In Figures 10–12, we give numerical results of Example 5.4 under the a posteriori parameter choice rule for various noise levels ϵ=0.01,0.008,0.005 in the cases α=1.2,1.5,1.8. It can be seen that the numerical error decreases as the noise is reduced. Moreover, the smaller α is, the better the approximation.

Figure 10. The exact solution and regular solution of Landweber regularization method by using the a posteriori parameter choice rule for Example 5.4. (a) α=1.2, (b) α=1.5, (c) α=1.8.

Figure 10. The exact solution and regular solution of Landweber regularization method by using the a posteriori parameter choice rule for Example 5.4. (a) α=1.2, (b) α=1.5, (c) α=1.8.

Figure 11. The exact solution and regular solution of fractional Landweber regularization method by using the a posteriori parameter choice rule for Example 5.4. (a) α=1.2, (b) α=1.5, (c) α=1.8.

Figure 11. The exact solution and regular solution of fractional Landweber regularization method by using the a posteriori parameter choice rule for Example 5.4. (a) α=1.2, (b) α=1.5, (c) α=1.8.

Figure 12. The exact solution and regular solution of modified iterative regularization method by using the a posteriori parameter choice rule for Example 5.4. (a) α=1.2, (b) α=1.5, (c) α=1.8.

Figure 12. The exact solution and regular solution of modified iterative regularization method by using the a posteriori parameter choice rule for Example 5.4. (a) α=1.2, (b) α=1.5, (c) α=1.8.

In Figures 13–15, we give numerical results of Example 5.5 under the a posteriori parameter choice rule for various noise levels ϵ=0.01,0.008,0.005 in the cases α=1.2,1.5,1.8. It can be seen that the numerical error decreases as the noise is reduced. Moreover, the smaller α is, the better the approximation.

Figure 13. The exact solution and regular solution of Landweber regularization method by using the a posteriori parameter choice rule for Example 5.5. (a) α=1.2, (b) α=1.5, (c) α=1.8.

Figure 13. The exact solution and regular solution of Landweber regularization method by using the a posteriori parameter choice rule for Example 5.5. (a) α=1.2, (b) α=1.5, (c) α=1.8.

Figure 14. The exact solution and regular solution of fractional Landweber regularization method by using the a posteriori parameter choice rule for Example 5.5. (a) α=1.2, (b) α=1.5, (c) α=1.8.

Figure 14. The exact solution and regular solution of fractional Landweber regularization method by using the a posteriori parameter choice rule for Example 5.5. (a) α=1.2, (b) α=1.5, (c) α=1.8.

Figure 15. The exact solution and regular solution of modified iterative regularization method by using the a posteriori parameter choice rule for Example 5.5. (a) α=1.2, (b) α=1.5, (c) α=1.8.

Figure 15. The exact solution and regular solution of modified iterative regularization method by using the a posteriori parameter choice rule for Example 5.5. (a) α=1.2, (b) α=1.5, (c) α=1.8.

In Figures 16–18, we give numerical results of Example 5.6 under the a posteriori parameter choice rule for various noise levels ϵ=0.01,0.008,0.005 in the cases α=1.2,1.5,1.8. It can be seen that the numerical error decreases as the noise is reduced. Moreover, the smaller α is, the better the approximation.

Figure 16. The exact solution and regularization solution of Landweber regularization method by using the a posteriori parameter choice rule for Example 5.6. (a) α=1.2, (b) α=1.5, (c) α=1.8.

Figure 16. The exact solution and regularization solution of Landweber regularization method by using the a posteriori parameter choice rule for Example 5.6. (a) α=1.2, (b) α=1.5, (c) α=1.8.

Figure 17. The exact solution and regularization solution of fractional Landweber regularization method by using the a posteriori parameter choice rule for Example 5.6. (a) α=1.2, (b) α=1.5, (c) α=1.8.

Figure 17. The exact solution and regularization solution of fractional Landweber regularization method by using the a posteriori parameter choice rule for Example 5.6. (a) α=1.2, (b) α=1.5, (c) α=1.8.

Figure 18. The exact solution and regularization solution of modified iterative method by using the a posteriori parameter choice rule for Example 5.6. (a) α=1.2, (b) α=1.5, (c) α=1.8.

Figure 18. The exact solution and regularization solution of modified iterative method by using the a posteriori parameter choice rule for Example 5.6. (a) α=1.2, (b) α=1.5, (c) α=1.8.

Example 5.4

Take function ψ(r)=αcos(r).

Example 5.5

Take the function
\[
\psi(r)=\begin{cases}-2r+\pi, & 0<r\le\frac{\pi}{2},\\ 2r-\pi, & \frac{\pi}{2}\le r<\pi.\end{cases}
\]

Example 5.6

Take the function
\[
\psi(r)=\begin{cases}1, & 0<r\le\frac{\pi}{3},\\ 0, & \frac{\pi}{3}<r\le\frac{2\pi}{3},\\ -1, & \frac{2\pi}{3}<r<\pi.\end{cases}
\]
From the above conclusions, we know that the fractional Landweber regularization method is the most efficient and accurate. Next, the errors between the regularized solution and the exact solution under the a posteriori regularization parameter choice rule are shown in Figure 19 for ϵ=0.01,0.05,0.08. We can see that, as the noise level becomes larger, the error between the exact solution and the regularized solution under the a posteriori parameter choice rule also becomes larger.

Figure 19. The exact solution and regular solution of fractional Landweber regularization method by using the a posteriori parameter choice rule for Example 5.6. (a) α=1.2, (b) α=1.5, (c) α=1.8.

Figure 19. The exact solution and regular solution of fractional Landweber regularization method by using the a posteriori parameter choice rule for Example 5.6. (a) α=1.2, (b) α=1.5, (c) α=1.8.

We can deduce that when ε and α are larger, the error between the exact solution and the regular solution by using the a posteriori parameter choice rule is greater.

6. Conclusion

An inverse problem for the time-fractional diffusion-wave equation on a spherically symmetric domain is considered. Based on the conditional stability, we propose the Landweber regularization method, the fractional Landweber regularization method and the modified iterative method to deal with the problem, and we derive the a-priori and a posteriori convergence estimates. In addition, the numerical examples verify that the fractional Landweber iterative regularization method is more efficient and accurate. In future work, we will continue to study inverse problems for equations on special regions, such as cylindrically symmetric or spherically symmetric regions, and solve them with appropriate regularization methods.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Additional information

Funding

The project is supported by the National Natural Science Foundation of China [No. 11961044], the Doctor Fund of Lan Zhou University of Technology.

References

  • Hall MG, Barrick TR. From diffusion-weighted MRI to anomalous diffusion imaging. Magnet Reson Med. 2008;59(3):447–455.
  • Yuste SB, Acedo L, Lindenberg K. Reaction front in an A+B→C reaction-subdiffusion process. Phys Rev E. 2004;69(3):036126.
  • Scalas E, Gorenflo R, Mainardi F. Fractional calculus and continuous-time finance. Phys A. 2000;284(1):376–384.
  • Kohlmann M, Tang T. Fractional calculus and continuous-time finance III: the diffusion limit. In: Mathematical finance; 2001. p. 171–180, Birkhauser Verlag, Basel-Boton-Berlin.
  • Meerschaert MM, Scalas E. Coupled continuous time random walks in finance. Phys A. 2006;370(1):114–118.
  • Mainardi F, Raberto M, Gorenflo R, et al. Fractional calculus and continuous-time finance II: the waiting-time distribution. Phys A. 2000;287(3-4):468–481.
  • Li ZX. An iterative ensemble Kalman method for an inverse scattering problem in acoustics. Mod Phys Lett B. 2020;34(28):12.
  • Luchko Y. Initial-boundary-value problems for the generalized multi-term time-fractional diffusion equation. J Math Anal Appl. 2011;374(2):538–548.
  • Luchko Y. Some uniqueness and existence results for the initial-boundary value problems for the generalized time-fractional diffusion equation. Comput Math Appl. 2010;59(5):1766–1772.
  • Luchko Y. Initial-boundary-value problems for the one-dimensional time-fractional diffusion equation. Fract Calc Appl Anal. 2012;15(1):141–160.
  • Luchko Y. Maximum principle for the generalized time-fractional diffusion equation. J Math Anal Appl. 2009;351(1):218–223.
  • Li ZY, Luchko Y, Yamamoto M. Asymptotic estimates of solutions to initial-boundary-value problems for distributed order time-fractional diffusion equations. Fract Calc Appl Anal. 2014;17(4):1114–1136.
  • Hossein J, Afshin B, Seddigheh B. A novel approach for solving an inverse reaction-diffusion-convection problem. J Optim Theory Appl. 2019;183(2):688–704.
  • Afshin B, Seddigheh B. Reconstructing unknown nonlinear boundary conditions in a time-fractional inverse reaction-diffusion-convection problem. Numer Meth Part D E. 2019;35(3):976–992.
  • Luchko Y. Boundary value problems for the generalized time-fractional diffusion equation of distributed order. Fract Calc Appl Anal. 2009;12(4):409–422.
  • Bai ZB, Qiu TT. Existence of positive solution for singular fractional differential equation. Appl Math Comput. 2008;215(7):2761–2767.
  • Kemppainen J. Existence and uniqueness of the solution for a time-fractional diffusion equation. Fract Calc Appl Anal. 2011;14(3):411–417.
  • Wang JG, Wei T. Quasi-reversibility method to identify a space-dependent source for the time-fractional diffusion equation. Appl Math Model. 2015;39(20):6139–6149.
  • Zhang ZQ, Wei T. Identifying an unknown source in time-fractional diffusion equation by a truncation method. Appl Math Comput. 2013;219(11):5972–5983.
  • Wang JG, Zhou YB, Wei T. Two regularization methods to identify a space-dependent source for the time-fractional diffusion equation. Appl Numer Math. 2013;68(68):39–57.
  • Yang F, Fu CL, Li XX. A quasi-boundary value regularization method for determining the heat source. Math Method Appl Sci. 2015;37(18):3026–3035.
  • Yang F, Fu CL. The quasi-reversibility regularization method for identifying the unknown source for time fractional diffusion equation. Appl Math Model. 2015;39(5-6):1500–1512.
  • Yang F, Fu CL, Li XX. Identifying an unknown source in space-fractional diffusion equation. Acta Math Sci. 2014;34(4):1012–1024.
  • Yang F, Fu CL, Li XX. The inverse source problem for time fractional diffusion equation: stability analysis and regularization. Inverse Probl Sci Eng. 2015;23(6):969–996.
  • Yang F, Ren YP, Li XX, et al. Landweber iterative method for identifying a space-dependent source for the time-fractional diffusion equation. Bound Value Probl. 2017;2017(1):163.
  • Yang F, Liu X, Li XX. Landweber iterative regularization method for identifying the unknown source of the time-fractional diffusion equation. Adv Differ Equ. 2017;2017(1):388.
  • Zheng GH, Wei T. Two regularization methods for solving a Riesz–Feller space-fractional backward diffusion problem. Inverse Probl. 2010;26:115017(22pp).
  • Cheng H, Fu CL. An iteration regularization for a time-fractional inverse diffusion problem. Appl Math Model. 2012;36(11):5642–5649.
  • Liu JJ, Yamamoto M. A backward problem for the time-fractional diffusion equation. Appl Anal. 2010;89(11):1769–1788.
  • Yang F, Ren YP, Li XX. The quasi-reversibility method for a final value problem of the time-fractional diffusion equation with inhomogeneous source. Math Method Appl Sci. 2018;41(5):1774–1795.
  • Wang LY, Liu JJ. Data regularization for a backward time-fractional diffusion problem. Comput Math Appl. 2012;64(11):3613–3626.
  • Wang JG, Zhou YB, Wei T. A posteriori regularization parameter choice rule for the quasi-boundary value method for the backward time-fractional diffusion problem. Appl Math Lett. 2013;26(7):741–747.
  • Tuan NH, Long LD, Nguyen VT, et al. On a final value problem for the time-fractional diffusion equation with inhomogeneous source. Inverse Probl Sci Eng. 2017;25(9):1367–1395.
  • Taghavi A, Babaei A, Mohammadpour A. A stable numerical scheme for a time fractional inverse parabolic equation. Inverse Probl Sci Eng. 2017;25(10):1474–1491.
  • Yang F, Zhang Y, Li XX. The quasi-boundary value regularization method for identifying the initial value with discrete random noise. Bound Value Probl. 2018;2018(1):108.
  • Yang F, Zhang Y, Liu X, et al. The quasi-boundary value method for identifying the initial value of the space-time fractional diffusion equation. Acta Math Sci. 2020;40B(3):641–658.
  • Yang F, Pu Q, Li XX. The fractional Tikhonov regularization methods for identifying the initial value problem for a time-fractional diffusion equation. J Comput Appl Math. 2020;380:112998.
  • Ozbilge E, Demir A. Inverse problem for a time-fractional parabolic equation. J Inequal Appl. 2015;2015(1):81.
  • Babaei A, Banihashemi S. A stable numerical approach to solve a time-fractional inverse heat conduction problem. Iran J Sci Technol A. 2018;42(4):2225–2236.
  • Ruan ZS, Yang J, Lu XL. Tikhonov regularisation method for simultaneous inversion of the source term and initial data in a time-fractional diffusion equation. E Asian J Appl Math. 2015;5(3):273–300.
  • Yang F, Zhang P, Li XX, et al. Tikhonov regularization method for identifying the space-dependent source for time-fractional diffusion equation on a columnar symmetric domain. Adv Differ Equ. 2020;2020:128.
  • Li GS, Zhang DL, Jia XZ, et al. Simultaneous inversion for the space-dependent diffusion coefficient and the fractional order in the time-fractional diffusion equation. Inverse Probl. 2013;29:065014.
  • Yang F, Ren YP, Li XX. Landweber iteration regularization method for identifying unknown source on a columnar symmetric domain. Inverse Probl Sci Eng. 2018;26(8):1109–1129.
  • Cheng W, Zhao LL, Fu CL. Source term identification for an axisymmetric inverse heat conduction problem. Comput Math Appl. 2010;59(1):142–148.
  • Cheng W, Ma YJ, Fu CL. Identifying an unknown source term in radial heat conduction. Inverse Probl Sci Eng. 2012;20(3):335–349.
  • Yang F, Sun YR, Li XX. The quasi-boundary value method for identifying the initial value of heat equation on a columnar symmetric domain. Numer Algor. 2019;82(2):623–639.
  • Cheng W, Fu CL. A modified Tikhonov regularization method for an axisymmetric backward heat equation. Acta Math Sin. 2010;26(11):2157–2164.
  • Djerrar I, Alem L, Chorfi L. Regularization method for the radially symmetric inverse heat conduction problem. Bound Value Probl. 2017;2017(1):159.
  • Xiong XT, Ma XJ. A backward identifying problem for an axis-symmetric fractional diffusion equation. Math Model Anal. 2017;22(3):311–320.
  • Yang F, Wang N, Li XX. A quasi-boundary regularization method for identifying the initial value of time-fractional diffusion equation on spherically symmetric domain. J Inverse Ill-Posed Probl. 2019;27(5):609–621.
  • Šišková K, Slodička M. Identification of a source in a fractional wave equation from a boundary measurement. J Comput Appl Math. 2019;349:172–186.
  • Liao KF, Li YS, Wei T. The identification of the time-dependent source term in time-fractional diffusion-wave equations. E Asian J Appl Math. 2019;9(2):330–354.
  • Šišková K, Slodička M. Recognition of a time-dependent source in a time-fractional wave equation. Appl Numer Math. 2017;121:1–17.
  • Gong XH, Wei T. Reconstruction of a time-dependent source term in a time-fractional diffusion-wave equation. Inverse Probl Sci Eng. 2019;27(11):1577–1594.
  • Yang F, Pu Q, Li XX, et al. The truncation regularization method for identifying the initial value on non-homogeneous time-fractional diffusion-wave equations. Mathematics. 2019;7(11):1007.
  • Yang F, Zhang Y, Li XX. Landweber iterative method for identifying the initial value problem of the time-space fractional diffusion-wave equation. Numer Algor. 2020;83(4):1509–1530.
  • Wei T, Zhang Y. The backward problem for a time-fractional diffusion-wave equation in a bounded domain. Comput Math Appl. 2018;75(10):3632–3648.
  • Xian J, Wei T. Determination of the initial data in a time-fractional diffusion-wave problem by a final time data. Comput Math Appl. 2019;78(8):2525–2540.
  • Yang F, Wang N, Li XX. Landweber iterative method for an inverse source problem of time-fractional diffusion-wave equation on spherically symmetric domain. J Appl Anal Comput. 2020;10(2):514–529.
  • Sakamoto K, Yamamoto M. Initial value/boundary value problems for fractional diffusion-wave equations and applications to some inverse problems. J Math Anal Appl. 2011;382(1):426–447.
  • Murio DA. Implicit finite difference approximation for time fractional diffusion equations. Comput Math Appl. 2008;56(4):1138–1145.
  • Zhuang P, Liu F. Implicit difference approximation for the time fractional diffusion equation. J Appl Math Comput. 2006;22(3):87–99.
  • Klann E, Ramlau R. Regularization by fractional filter methods and data smoothing. Inverse Probl. 2008;24(2):025018.
  • Han YZ, Xiong XT, Xue XM. A fractional Landweber method for solving backward time-fractional diffusion problem. Comput Math Appl. 2019;78(1):81–91.
  • Xiong XT, Xue XM, Qian Z. A modified iterative regularization method for ill-posed problems. Appl Numer Math. 2017;122:108–128.
  • Abbaszadeh M, Dehghan M. Numerical and analytical investigations for neutral delay fractional damped diffusion-wave equation based on the stabilized interpolating element free Galerkin(IEFG) method. Appl Numer Math. 2019;145:488–506.
  • Dehghan M, Abbaszadeh M. A finite difference/finite element technique with error estimate for space fractional tempered diffusion-wave equation. Comput Math Appl. 2018;75(8):2903–2914.
  • Dehghan M, Abbaszadeh M. A Legendre spectral element method (SEM) based on the modified bases for solving neutral delay distributed-order fractional damped diffusion-wave equation. Math Method Appl Sci. 2018;41(9):3476–3494.
  • Abbaszadeh M, Dehghan M. An improved meshless method for solving two-dimensional distributed order time-fractional diffusion-wave equation with error estimate. Numer Algor. 2016;75(1):173–211.
  • Irfan T. Fundamentals of MATLAB language; 2019.
  • Irfan T. Introduction to MATLAB; 2019.

Appendix

Proof of Lemma 2.3

We define two functions of $\tau_1^2:=a_1E_{\alpha,1}^2(-\lambda_nT^{\alpha})$:
$$\varphi(\tau_1)=\frac{a_1\big[1-(1-\tau_1^2)^{m_1}\big]^{2\gamma}}{\tau_1^2}\quad\text{and}\quad\phi(\tau_1)=\frac{\big[1-(1-\tau_1^2)^{m_1}\big]^{2\gamma}}{\tau_1^2}.$$
Obviously $\varphi(\tau_1)=a_1\phi(\tau_1)$; since $E_{\alpha,1}^2(-\lambda_nT^{\alpha})=\tau_1^2/a_1$, $\varphi(\tau_1)$ is exactly the square of the quantity to be estimated. These two functions are continuous for $\tau_1\in(0,1)$.

For $\frac12<\gamma<1$ and $\tau_1\in(0,1)$, using Lemma 3.3 in [Citation63], we have $\phi(\tau_1)\le m_1$. Therefore
$$\sup_{\lambda_n>0}\big[1-\big(1-a_1E_{\alpha,1}^2(-\lambda_nT^{\alpha})\big)^{m_1}\big]^{\gamma}\,\frac{1}{E_{\alpha,1}(-\lambda_nT^{\alpha})}\le\sqrt{a_1m_1}.$$
The proof of (9) is similar, so it is omitted.
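The key bound $\phi(\tau_1)\le m_1$ can also be checked numerically. The short Python sketch below is not part of the original argument; it assumes NumPy is available, and the values of $m_1$ and $\gamma$ are arbitrary test choices.

```python
# Minimal numerical sanity check of the bound
#   phi(tau) = [1 - (1 - tau^2)^m]^(2*gamma) / tau^2 <= m
# for 1/2 < gamma < 1 and tau in (0, 1).  The m and gamma values below
# are arbitrary test choices, not quantities fixed by the paper.
import numpy as np

def phi(tau, m, gamma):
    return (1.0 - (1.0 - tau**2)**m)**(2.0 * gamma) / tau**2

tau = np.linspace(1e-6, 1.0 - 1e-6, 200_000)
for m in (1, 5, 50, 500):
    for gamma in (0.55, 0.75, 0.95):
        assert phi(tau, m, gamma).max() <= m + 1e-10
print("phi(tau) <= m holds on the test grid")
```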

Proof of Lemma 2.4

We introduce a new variable $x:=E_{\alpha,1}^2(-\lambda_nT^{\alpha})<1/a_1$ and define the function $h(x)=(1-a_1x)^{m_1}x^{\frac p4}$. It is easy to verify that there exists a unique $x_0=\frac{z}{a_1(z+m_1)}$ with $z=\frac p4$ such that $h'(x_0)=0$; we then have
$$h(x)\le h(x_0)=\Big(1-\frac{z}{z+m_1}\Big)^{m_1}\Big(\frac{z}{a_1(z+m_1)}\Big)^{z}<\Big(\frac{z}{a_1}\Big)^{z}\Big(\frac{1}{z+m_1}\Big)^{z}<\Big(\frac{z}{a_1}\Big)^{z}\Big(\frac{1}{m_1}\Big)^{z}=:c(a_1,p)\,m_1^{-\frac p4}.$$
The proof of (11) is similar, so it is omitted.
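As a quick numerical illustration (not part of the original proof), the following Python sketch verifies $h(x)\le c(a_1,p)\,m_1^{-p/4}$ with $c(a_1,p)=(p/(4a_1))^{p/4}$ on a grid of $x\in(0,1/a_1)$; NumPy is assumed, and $a_1$, $p$ and the $m_1$ values are arbitrary test choices.

```python
# Minimal numerical sanity check of the bound
#   h(x) = (1 - a1*x)^m1 * x^(p/4) <= (p/(4*a1))^(p/4) * m1^(-p/4)
# on 0 < x < 1/a1.  The values of a1, p and m1 are arbitrary test choices.
import numpy as np

a1, p = 0.8, 3.0
x = np.linspace(1e-9, 1.0 / a1 - 1e-9, 200_000)
for m1 in (1, 10, 100, 1000):
    h = (1.0 - a1 * x)**m1 * x**(p / 4.0)
    bound = (p / (4.0 * a1))**(p / 4.0) * m1**(-p / 4.0)
    assert h.max() <= bound
print("h(x) <= c(a1, p) * m1^(-p/4) holds on the test grid")
```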

Proof of Lemma 2.7

From Lemma 2.6, we have
$$\Big(1-a_1E_{\alpha,1}^2\Big(-\Big(\frac{n\pi}{r_0}\Big)^2T^{\alpha}\Big)\Big)^{m_3}(1+n^2)^{-\frac p2}\le\Big(1-\frac{a_1\underline{C}^2r_0^4}{n^4\pi^4}\Big)^{m_3}(n^2)^{-\frac p2}.$$
Define $x:=n^2$ and
$$F(x):=\Big(1-\frac{a_1\underline{C}^2r_0^4}{x^2\pi^4}\Big)^{m_3}x^{-\frac p2};$$
it is easy to find the unique point $x_0=\Big(\frac{a_1\underline{C}^2r_0^4}{\pi^4}\cdot\frac{4m_3+p}{p}\Big)^{\frac12}$ such that $F'(x_0)=0$. Therefore
$$F(x)\le F(x_0)=\Big(1-\frac{a_1\underline{C}^2r_0^4}{\pi^4}\cdot\frac{p\pi^4}{a_1\underline{C}^2r_0^4(4m_3+p)}\Big)^{m_3}\Big(\frac{p\pi^4}{a_1\underline{C}^2r_0^4(4m_3+p)}\Big)^{\frac p4}\le\Big(\frac{p\pi^4}{a_1\underline{C}^2r_0^4}\Big)^{\frac p4}(m_3+1)^{-\frac p4}.$$
The proof of (15) is similar, so it is omitted.
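The final estimate can be checked numerically. The Python sketch below is not part of the original proof: it evaluates $F$ as reconstructed above on a grid where the bracket is nonnegative, with C standing for the lower bound $\underline{C}$ from Lemma 2.6; NumPy is assumed, and the values of $a_1$, C, $r_0$, $p$ and $m_3$ are arbitrary test choices.

```python
# Minimal numerical sanity check of the bound
#   F(x) = (1 - a1*C**2*r0**4/(x**2*pi**4))**m3 * x**(-p/2)
#       <= (p*pi**4/(a1*C**2*r0**4))**(p/4) * (m3 + 1)**(-p/4).
# The values of a1, C, r0, p and m3 are arbitrary test choices.
import numpy as np

a1, C, r0, p = 0.5, 0.9, 1.0, 2.0
c = a1 * C**2 * r0**4 / np.pi**4
x = np.linspace(np.sqrt(c), 200.0, 400_000)   # bracket is nonnegative here
for m3 in (1, 10, 100):
    F = (1.0 - c / x**2)**m3 * x**(-p / 2.0)
    bound = (p * np.pi**4 / (a1 * C**2 * r0**4))**(p / 4.0) * (m3 + 1)**(-p / 4.0)
    assert F.max() <= bound
print("Lemma 2.7 bound holds on the test grid")
```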

Proof of Lemma 2.8

For the first inequality, we have the following result. From Lemma 2.6, we have
$$\Big(1-a_1E_{\alpha,1}^{\gamma+1}\Big(-\Big(\frac{n\pi}{r_0}\Big)^2T^{\alpha}\Big)\Big)^{m_5}(1+n^2)^{-\frac p2}\le\Big(1-a_1\Big(\frac{\underline{C}r_0^2}{n^2\pi^2}\Big)^{\gamma+1}\Big)^{m_5}(n^2)^{-\frac p2}.$$
Define $x:=n^2$ and
$$F(x):=\Big(1-a_1\Big(\frac{\underline{C}r_0^2}{x\pi^2}\Big)^{\gamma+1}\Big)^{m_5}x^{-\frac p2};$$
it is easy to find the unique point $x_0=\frac{\underline{C}r_0^2}{\pi^2}\Big(\frac{2a_1m_5(\gamma+1)}{p}+a_1\Big)^{\frac{1}{\gamma+1}}$ such that $F'(x_0)=0$. Therefore
$$F(x)\le F(x_0)\le\Big(\frac{\underline{C}r_0^2}{\pi^2}\Big)^{-\frac p2}\Big(\frac{2a_1m_5(\gamma+1)}{p}+a_1\Big)^{-\frac{p}{2(\gamma+1)}}\le\Big(\frac{\underline{C}r_0^2}{\pi^2}\Big)^{-\frac p2}\Big(\frac{2a_1(\gamma+1)}{p}\Big)^{-\frac{p}{2(\gamma+1)}}m_5^{-\frac{p}{2(\gamma+1)}}.$$

For the second inequality, we proceed in the same way. From Lemma 2.6, we have
$$\Big(1-a_2T^{\gamma+1}E_{\alpha,2}^{\gamma+1}\Big(-\Big(\frac{n\pi}{r_0}\Big)^2T^{\alpha}\Big)\Big)^{m_6}(1+n^2)^{-\frac p2}\le\Big(1-a_2\Big(\frac{\underline{C}_Tr_0^2}{n^2\pi^2}\Big)^{\gamma+1}\Big)^{m_6}(n^2)^{-\frac p2}.$$
Define $y:=n^2$ and
$$G(y):=\Big(1-a_2\Big(\frac{\underline{C}_Tr_0^2}{y\pi^2}\Big)^{\gamma+1}\Big)^{m_6}y^{-\frac p2};$$
it is easy to find the unique point $y_0=\frac{\underline{C}_Tr_0^2}{\pi^2}\Big(\frac{2a_2m_6(\gamma+1)}{p}+a_2\Big)^{\frac{1}{\gamma+1}}$ such that $G'(y_0)=0$. Therefore
$$G(y)\le G(y_0)\le\Big(\frac{\underline{C}_Tr_0^2}{\pi^2}\Big)^{-\frac p2}\Big(\frac{2a_2m_6(\gamma+1)}{p}+a_2\Big)^{-\frac{p}{2(\gamma+1)}}\le\Big(\frac{\underline{C}_Tr_0^2}{\pi^2}\Big)^{-\frac p2}\Big(\frac{2a_2(\gamma+1)}{p}\Big)^{-\frac{p}{2(\gamma+1)}}m_6^{-\frac{p}{2(\gamma+1)}}.$$
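The first bound of Lemma 2.8 can likewise be checked numerically. The Python sketch below is not part of the original proof; it uses the reconstructed form of $F$ above, with C standing for $\underline{C}$ from Lemma 2.6, NumPy assumed, and all numerical values being arbitrary test choices.

```python
# Minimal numerical sanity check of the first bound of Lemma 2.8:
#   F(x) = (1 - a1*(C*r0**2/(x*pi**2))**(gamma+1))**m5 * x**(-p/2)
#       <= (C*r0**2/pi**2)**(-p/2) * (2*a1*(gamma+1)/p)**(-p/(2*(gamma+1)))
#          * m5**(-p/(2*(gamma+1))).
# The values of a1, C, r0, gamma, p and m5 are arbitrary test choices.
import numpy as np

a1, C, r0, gamma, p = 0.5, 0.9, 1.0, 0.7, 2.0
d = C * r0**2 / np.pi**2
x = np.linspace(d * a1**(1.0 / (gamma + 1.0)), 200.0, 400_000)  # bracket >= 0 here
for m5 in (1, 10, 100):
    F = (1.0 - a1 * (d / x)**(gamma + 1.0))**m5 * x**(-p / 2.0)
    bound = d**(-p / 2.0) * (2.0 * a1 * (gamma + 1.0) / p)**(-p / (2.0 * (gamma + 1.0))) \
            * m5**(-p / (2.0 * (gamma + 1.0)))
    assert F.max() <= bound
print("Lemma 2.8 first bound holds on the test grid")
```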

Proof of Lemma 2.9

Let $f(\sigma_{1n}):=\frac{1-(1-a_1\sigma_{1n}^{\gamma+1})^{m_5}}{\sigma_{1n}}$ with $\sigma_{1n}:=E_{\alpha,1}\big(-\big(\frac{n\pi}{r_0}\big)^2T^{\alpha}\big)$, and set $\beta_1:=a_1\sigma_{1n}^{\gamma+1}$; then $\sigma_{1n}=\big(\frac{\beta_1}{a_1}\big)^{\frac{1}{\gamma+1}}$ and $f(\beta_1)=\frac{1-(1-\beta_1)^{m_5}}{(\beta_1/a_1)^{\frac{1}{\gamma+1}}}$.

Since $0<a_1E_{\alpha,1}^{\gamma+1}\big(-\big(\frac{n\pi}{r_0}\big)^2T^{\alpha}\big)<1$, we have $0<\beta_1<1$. Hence, from $0<\frac{1}{\gamma+1}<1$ and $1-(1-\beta_1)^{m_5}<1$, the following inequality holds:
$$f(\beta_1)=\frac{1-(1-\beta_1)^{m_5}}{(\beta_1/a_1)^{\frac{1}{\gamma+1}}}\le\frac{\big(1-(1-\beta_1)^{m_5}\big)^{\frac{1}{\gamma+1}}}{(\beta_1/a_1)^{\frac{1}{\gamma+1}}}=\Big(\frac{1-(1-\beta_1)^{m_5}}{\beta_1}\Big)^{\frac{1}{\gamma+1}}a_1^{\frac{1}{\gamma+1}}.\tag{A1}$$
Since, for $n\ge0$ and $x\ge-1$, the Bernoulli inequality
$$(1+x)^n\ge1+nx\tag{A2}$$
holds, combining (A1) and (A2) gives $1-(1-\beta_1)^{m_5}\le m_5\beta_1$, and hence
$$f(\beta_1)\le(a_1m_5)^{\frac{1}{\gamma+1}}.$$
The proof of (19) is similar, so it is omitted.
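The estimate $f(\beta_1)\le(a_1m_5)^{1/(\gamma+1)}$ admits a simple numerical check. The Python sketch below is not part of the original proof; NumPy is assumed, and the values of $a_1$, $\gamma$ and $m_5$ are arbitrary test choices.

```python
# Minimal numerical sanity check of the bound
#   f(beta) = (1 - (1 - beta)^m5) / (beta/a1)^(1/(gamma+1)) <= (a1*m5)^(1/(gamma+1))
# for 0 < beta < 1.  The values of a1, gamma and m5 are arbitrary test choices.
import numpy as np

beta = np.linspace(1e-9, 1.0 - 1e-9, 200_000)
a1 = 0.8
for gamma in (0.55, 0.75, 0.95):
    for m5 in (1, 10, 100, 1000):
        f = (1.0 - (1.0 - beta)**m5) / (beta / a1)**(1.0 / (gamma + 1.0))
        assert f.max() <= (a1 * m5)**(1.0 / (gamma + 1.0)) + 1e-10
print("f(beta) <= (a1*m5)^(1/(gamma+1)) holds on the test grid")
```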

Proof of Lemma 2.10

If $x_0$ satisfies $F'(x_0)=0$, we obtain
$$x_0=\Big(\frac{a_1\underline{C}^2r_0^4}{\pi^4}\cdot\frac{4m_1+p-2}{p+2}\Big)^{\frac12};$$
therefore, we have
$$F(x)\le F(x_0)\le\frac{\bar{C}r_0^2}{\pi^2}\Big(\frac{(p+2)\pi^4}{a_1\underline{C}^2r_0^4}\Big)^{\frac{p+2}{4}}(m_1+1)^{-\frac{p+2}{4}}.$$
The proof of (21) is similar, so it is omitted.

Proof of Lemma 2.11

If $x_0$ satisfies $F'(x_0)=0$, we obtain
$$x_0=\frac{\underline{C}r_0^2}{\pi^2}\Big(\frac{a_1}{p+2}\Big)^{\frac{1}{\gamma+1}}\big((2m_5-2)(\gamma+1)+p+2\big)^{\frac{1}{\gamma+1}};$$
therefore, we have
$$F(x)\le F(x_0)\le\frac{\bar{C}r_0^2}{\pi^2}\Big(\frac{\underline{C}r_0^2}{\pi^2}\Big)^{-\frac p2-1}\Big(\frac{p+2}{a_1}\Big)^{\frac{p+2}{2(\gamma+1)}}(m_5+1)^{-\frac{p+2}{2(\gamma+1)}}.$$
The proof of (23) is similar, so it is omitted.
