Abstract
We study the problem of determining the initial condition in parabolic equations with time-dependent coefficients from integral observations, which can be regarded as generalizations of point-wise interior observations. Our approach is new in the sense that, for determining the initial condition, we do not assume that the data are available in the whole space domain at the final moment or in a subset of the space domain during a certain time interval, but only through some integral observations during a time interval. We propose a variational method in combination with Tikhonov regularization for solving the problem and then discretize it by finite difference splitting methods. The discretized minimization problem is solved by the conjugate gradient method and numerically tested to show its efficiency. As a by-product of the variational method, we also propose a numerical scheme for estimating the degree of ill-posedness of the problem.
1. Introduction and Problem setting
The problem of determining the initial condition in parabolic equations is very important in practice. It arises in data assimilation,[Citation1,Citation2] in image processing,[Citation3] in heat conduction processes,[Citation4,Citation26,Citation27] etc. However, for determining the initial condition, most authors assume that either the data are available in the whole space domain at the final moment,[Citation1,Citation2,Citation5,Citation6] or in a subset of the space domain during a certain time interval [Citation7]. In this paper, we look at the problem from a practical point of view: the data can be measured at certain points of the space domain and only during a certain time interval; furthermore, since the data are obtained by instruments, they are averaged, i.e. of integral form. From this point of view, our problem is that of determining the initial condition in parabolic equations with possibly time-dependent coefficients from some integral observations during a certain time interval. The mathematical formulation of the problem is as follows:
Let be an open bounded domain in . Denote by the boundary of , and . Suppose that conditions (1.1)–(1.4) hold,
with , and being given positive constants. Consider the problem (1.5)
with and .
To introduce the notion of weak solution to this problem, we use the standard Sobolev spaces , and (see e.g. [Citation8, p.111]). Furthermore, for a Banach space B, we define
with the norm
In the sequel, we shall use the space W(0, T) defined as
equipped with the norm
We note that .
The solution of the problem (1.5) is understood in the weak sense as follows: a solution in W(0, T) of the problem (1.5) is a function satisfying the identity (1.6)
and (1.7)
It can be proved that under the above conditions there exists a unique solution in W(0, T) to the problem (1.5) [Citation4,Citation8–Citation10]. Furthermore, there exists a constant c depending only on the coefficients , b and such that (1.8)
If we assume that and , then it can be proved that [Citation9,Citation10].
A standard inverse problem related to (1.5) is the backward parabolic equation: given u(x, T), find v. This problem has been the subject of much research; see the recent survey [Citation5], or [Citation6,Citation11–Citation13] and the references therein. However, in this setting we have to observe the solution in the whole space domain, which is hardly realizable in practice. Recently, the authors of [Citation7] tried to reconstruct the initial condition v from the measurements of u in , where is a subdomain of and is a constant. They proved some stability estimates and solved the problem by Tikhonov regularization. In this paper, we will reconstruct the initial condition v(x) when we can observe u(x, t) through integral observation operators during a certain time interval, say, . Namely, suppose that we are given N nonnegative weight functions with and we observe u by (1.9)
The problem is to reconstruct the initial condition v from these observations.
Before going further, let us discuss the observations (1.9). First, any measurement is an averaging process, that is, of the form (1.9). Second, such observations are a generalization of point observations. Indeed, let be given, be a neighbourhood of and be chosen as (1.10)
where is the volume of . We see that is similar to the width of the instrument, and when we let tend to zero we obtain the point observation. It should also be noted that, as the solution to (1.5) is understood in the weak sense, its value at a given point is not always meaningful, but it is in the above averaged sense. Third, with this kind of observations, the data need not be available in the whole space domain or at all times. Thus, our problem setting is more practical than the related ones.
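As an illustration of the weight (1.10), the following sketch (a uniform grid and a smooth sample function are assumed; all names are illustrative) computes the averaged observation over a neighbourhood of a point and shows that it approaches the point value as the width tends to zero:

```python
import numpy as np

def averaged_observation(u, x, x0, eps):
    """Average of the grid function u over the neighbourhood
    [x0 - eps, x0 + eps], i.e. the observation (1.9) with the weight
    (1.10): the scaled indicator of the neighbourhood. On a uniform
    grid this average is just the mean over the nodes in the cell."""
    mask = np.abs(x - x0) <= eps
    return u[mask].mean()

x = np.linspace(0.0, 1.0, 2001)      # uniform grid on [0, 1]
u = np.sin(np.pi * x)                # a sample "solution" at a fixed time
for eps in (0.1, 0.01, 0.001):
    print(eps, averaged_observation(u, x, 0.3, eps))
# as eps -> 0 the averages approach the point value sin(0.3*pi) ~ 0.8090
```

This also illustrates the remark above: the average is well defined even when the point value of a weak solution is not.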
We will solve the problem by a variational method in combination with Tikhonov regularization and will use the conjugate gradient method to implement the algorithm on computers; this is the subject of Section 2. The problem is discretized by the splitting finite difference method, although it could be done by the finite element method instead. The reason is that the finite difference method is easy to apply to problems with time-dependent coefficients, and the splitting method splits multi-dimensional problems into a sequence of one-dimensional ones, which is very fast. The difficulty with the finite difference method applied to our problem is that we work with weak solutions. A full theory for this will be presented in Section 2.3. There we prove the convergence of the solution of the discretized variational problem to the solution of the continuous variational one. Furthermore, we prove that the inverse problem of determining v from the observations (1.9) is ill-posed and propose a simple technique to approximate its degree of ill-posedness. The numerical results will be given in Section 3.
2. Variational method
2.1. Variational method for the continuous problem
First, let us note that the inverse problem of reconstructing the initial condition v from the observations is ill-posed.
Remark 2.1:
If , and there exists a constant such that , then the operator C mapping to is compact from to . Hence the problem of reconstructing v from is ill-posed.
Indeed, if these conditions are satisfied, then . Therefore, , which is compactly embedded in . Hence, the operator C from to is compact.
Now we analyse the degree of ill-posedness of this problem. We denote by the solution of the problem (2.1)
and by u[v] the solution of the problem (2.2)
Then, . Hence (2.3)
where the mappings from v to are bounded linear operators from to .
Define the linear operator
where the scalar product in is defined by: if and are in , then . Hence the norm in is defined by for . Set and . Then the problem of reconstructing the initial condition v in (Equation1.5(1.5) (1.5) ) from the observations (Equation1.9(1.9) (1.9) ) has the form(2.4) (2.4)
To characterize the ill-posedness of this problem we have to estimate the singular values of (eigenvalues of ). To this goal, we introduce now a variational method for solving the reconstruction problem.
To emphasize the dependence of the solution u(x, t) to (Equation1.5(1.5) (1.5) ) on the initial condition v we sometimes write u(x, t; v) or u(v) if there is no confusion. A natural method for reconstructing v from the observations (Equation1.9(1.9) (1.9) ) is to minimize the misfit functional(2.5) (2.5)
with respect to . Because the solution to the reconstruction problem is not unique, we should have an estimation to it. Let be an a priori guess of v. We now combine the above least squares problem with Tikhonov regularization as follows: minimize the functional(2.6) (2.6)
with respect to with being positive Tikhonov regularization parameter.
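In finite dimensions, the regularized functional (2.6) and its gradient (cf. the gradient formula (2.10) below) take the following generic form; the matrix A is only a stand-in for the observation operator, and all names are illustrative:

```python
import numpy as np

def tikhonov_value(A, v, omega, gamma, v_star):
    """J_gamma(v) = 0.5*||A v - omega||^2 + 0.5*gamma*||v - v_star||^2."""
    r = A @ v - omega
    d = v - v_star
    return 0.5 * (r @ r) + 0.5 * gamma * (d @ d)

def tikhonov_gradient(A, v, omega, gamma, v_star):
    """grad J_gamma(v) = A^T (A v - omega) + gamma*(v - v_star)."""
    return A.T @ (A @ v - omega) + gamma * (v - v_star)

# quick finite-difference check of the gradient formula
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 4))
v, omega, v_star = rng.standard_normal(4), rng.standard_normal(6), np.zeros(4)
g = tikhonov_gradient(A, v, omega, 0.1, v_star)
e = np.zeros(4); e[0] = 1e-6
fd = (tikhonov_value(A, v + e, omega, 0.1, v_star)
      - tikhonov_value(A, v - e, omega, 0.1, v_star)) / 2e-6
print(abs(fd - g[0]))   # small: the directional derivative matches
```

In the continuous problem, the product A^T(Av - omega) is evaluated by one forward and one adjoint PDE solve, as derived next.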
Since , it is easily seen that the problem of minimizing over has a unique solution. Now we prove that is Fréchet differentiable and derive a formula for its gradient. In doing so, consider the adjoint problem: (2.7)
where if and otherwise. The solution to (2.7) is a function satisfying the identity
and
Note that if we reverse the time direction in (2.7), we get the same kind of problem as (1.5). Therefore, there exists a unique solution to (2.7).
Theorem 2.2:
The gradient of the objective functional at v is given by (2.8)
where p(x, t) is the solution to the adjoint problem (2.7).
For a small variation of v, we have
where is the solution to the problem (2.9)
From (1.9), we have
Using Green's formula ([Citation8], Theorem 3.18) for (2.7) and (2.9), we have
Hence
Consequently, is Fréchet differentiable and its gradient can be written in the form
The proof is complete.
From this result, we see that the functional is also Fréchet differentiable and its gradient has the form (2.10)
where p(x, t) is the solution to the adjoint problem (2.7).
Now we return to the question of estimating the degree of ill-posedness of our reconstruction problem. Since are defined by (2.3), from the above theorem we have , where is the solution to the adjoint problem (2.11)
with and for and otherwise.
From (2.3), we have
Hence
If we take such that , then
Due to Theorem 2.2, , where is the solution of the adjoint problem (2.12)
Thus, if is given, we have the value . However, an explicit form of is not available for analysing its eigenvalues and eigenfunctions. Nevertheless, when we discretize the problem, the finite-dimensional approximations to are matrices, and so we can apply the Lanczos algorithm [Citation14] to estimate the eigenvalues of based on the values . This approach appears applicable to other problems as well, and we will test it for our inverse problem as we did in [Citation15]. The numerical simulation of this approach will be given in Section 3.
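The suggestion above can be realized matrix-free: the Lanczos method needs only products v ↦ A*Av, each of which costs one forward and one adjoint solve. A small sketch, using SciPy's Lanczos-based `eigsh` and a synthetic operator in place of the PDE solves (names and the singular-value profile are illustrative):

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, eigsh

# synthetic "solution operator" with rapidly decaying singular values,
# standing in for the discretized observation operator A
rng = np.random.default_rng(0)
n = 60
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = 10.0 ** (-np.arange(n) / 4.0)        # sigma_k = 10^(-k/4)
A = U @ np.diag(s) @ V.T

# matrix-free normal operator v -> A* A v; only matvecs are needed
normal_op = LinearOperator((n, n), matvec=lambda v: A.T @ (A @ v),
                           dtype=float)

# largest eigenvalues of A*A = squares of the largest singular values of A
vals = eigsh(normal_op, k=6, which='LM', return_eigenvectors=False)
print(np.sort(vals)[::-1])               # approximately s[:6]**2
```

The rapid decay of the recovered values is exactly the signature of severe ill-posedness discussed in Section 3.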
2.2. Conjugate gradient method
We now use the conjugate gradient algorithm to approximate the initial condition v as follows [Citation16,Citation28]: (2.13)
where (2.14)
We now calculate , which is the solution to the problem . We have
Hence
Letting , we obtain (2.15)
From (2.13), can be rewritten as (2.16)
Therefore, (2.17)
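For the finite-dimensional quadratic problem, the iteration (2.13)–(2.17) amounts to a conjugate gradient loop with a Fletcher–Reeves factor and an exact line search. A sketch (the matrix A again stands in for the observation operator; names are illustrative):

```python
import numpy as np

def cg_tikhonov(A, omega, gamma, v_star, iters):
    """Conjugate gradient iteration mirroring (2.13)-(2.17) applied to
    J_gamma(v) = 0.5*||A v - omega||^2 + 0.5*gamma*||v - v_star||^2:
    search direction d_k = -grad J + beta_k d_{k-1}, Fletcher-Reeves
    factor beta_k = ||g_k||^2 / ||g_{k-1}||^2, exact step length alpha_k."""
    v = np.zeros(A.shape[1])
    g = A.T @ (A @ v - omega) + gamma * (v - v_star)   # gradient
    d = -g
    for _ in range(iters):
        Hd = A.T @ (A @ d) + gamma * d                 # Hessian times d
        alpha = (g @ g) / (d @ Hd)                     # exact line search
        v = v + alpha * d
        g_new = A.T @ (A @ v - omega) + gamma * (v - v_star)
        beta = (g_new @ g_new) / (g @ g)               # Fletcher-Reeves
        d, g = -g_new + beta * d, g_new
    return v

rng = np.random.default_rng(1)
A = rng.standard_normal((8, 5))
omega, v_star, gamma = rng.standard_normal(8), np.zeros(5), 1e-2
v_cg = cg_tikhonov(A, omega, gamma, v_star, iters=5)   # n = 5 steps suffice
v_direct = np.linalg.solve(A.T @ A + gamma * np.eye(5), A.T @ omega)
print(np.max(np.abs(v_cg - v_direct)))                 # near machine precision
```

In the PDE setting, each product with A and A^T in the loop is replaced by one forward solve of (1.5) and one adjoint solve of (2.7).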
2.3. Variational method for the discretized problem
The minimization problem can be discretized by the standard finite element method, and the convergence of the scheme can be proved by standard arguments,[Citation17] so we do not present it here. In what follows, we will discretize the direct problem by the splitting method and present the variational method for the discretized problem. The reason why we use this method instead of the finite element method is that the finite difference method is easy to implement. Furthermore, the splitting finite difference method splits a multi-dimensional problem into a sequence of one-dimensional ones, hence it is very fast [Citation18–Citation20] (see also [Citation21–Citation23]). In doing so, we have to impose some restrictions on the domain and the coefficients.
First, we suppose that is an open parallelepiped in . Second, in (1.5), we suppose that if , and for simplicity from now on we denote by . Thus, the system (1.5) has the form (2.18)
and (1.6) becomes (2.19)
Following [Citation24], we subdivide the domain into small cells by the rectangular uniform grid specified by
Here is the grid size in the -direction, . The grid vertices are denoted by , where , . We also denote by the vector of spatial grid sizes and . Let be the unit vector in the -direction, , i.e. and so on. Denote by (2.20)
In the following, we denote the set of indices of internal grid points by , i.e. (2.21)
We also make use of the following sets (2.22)
for . For a function u(x, t) defined in Q, we denote by its approximate value at , and if there is no confusion we sometimes denote it by .
Suppose that is a grid function defined in . We define the norm (2.23)
and if has the weak derivative with respect to t, we define (2.24)
with
being the forward difference quotient of the grid function . We also define the backward difference quotient of u by
For the grid function we define the following interpolations in Q:
(1) Piecewise constant: (2.25)
(2) Multilinear: (2.26)
From these formulas, with we have (2.27)
with : (2.28)
It is clear from (2.25) and (2.26) that the interpolation is constant in each subdomain , while is multi-linear in . Moreover, belongs to . We have the following properties of the multi-linear interpolation.
Lemma 2.3:
Suppose that the grid function satisfies the inequality (2.29)
with c being a constant independent of h. Then, the multi-linear interpolation (2.26) of is bounded in .
Lemma 2.4:
Suppose that the grid function satisfies the inequality (2.30)
with c being a constant independent of h. Then, the multi-linear interpolation (2.26) of is bounded in .
Asymptotic relationships between the two interpolations as the grid size h tends to zero are stated in the following lemmas.
Lemma 2.5:
Suppose that the hypothesis of Lemma 2.3 is fulfilled. Then if weakly converges to a function u(x, t) in as the grid size h tends to zero, the sequence also weakly converges to u(x, t) in the same manner.
Lemma 2.6:
Suppose that the hypothesis of Lemma 2.4 is fulfilled. Then if converges strongly to a function u(x, t) in as the grid size h tends to zero, the sequence also converges to u(x, t) in the same manner.
Lemma 2.7:
Suppose that the hypothesis of Lemma 2.4 is fulfilled and . Then if the sequence of derivatives converges weakly to a function u(x, t) in as h tends to zero, the sequence also converges to u(x, t) in the same manner, where
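In dimension d = 1, the two interpolations (2.25) and (2.26) reduce to piecewise-constant and piecewise-linear interpolation of the grid values; a small sketch with illustrative names:

```python
import numpy as np

def interp_const(xg, ug, x):
    """Piecewise-constant interpolation (the d = 1 case of (2.25)):
    on each cell [x_k, x_{k+1}) the value at the left grid node is used."""
    idx = np.clip(np.searchsorted(xg, x, side='right') - 1, 0, len(xg) - 2)
    return ug[idx]

def interp_linear(xg, ug, x):
    """Piecewise-linear interpolation (the d = 1 case of (2.26));
    continuous with a weak derivative, hence H^1-conforming."""
    return np.interp(x, xg, ug)

xg = np.linspace(0.0, 1.0, 11)          # grid with step h = 0.1
ug = xg ** 2                            # grid values of u(x) = x^2
print(interp_const(xg, ug, 0.55))       # value at the left node 0.5, i.e. 0.25
print(interp_linear(xg, ug, 0.55))      # midpoint of 0.25 and 0.36, i.e. 0.305
```

Lemmas 2.5–2.7 say, in this language, that the two interpolants share the same limit as h tends to zero, while only the linear one carries derivative information.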
For a function z defined in , we define its average by (2.31)
Here, if k belongs to the boundary of , we put . We approximate the integrals in (2.19) as follows: (2.32)–(2.35)
Substituting the integrals (2.32)–(2.35) into (2.19), we have the following equality (dropping the time variable t for a moment): (2.36)
Using the summation-by-parts formula combined with the condition when and , we have (2.37)
with . Substituting this into (2.36) and approximating the initial condition , we obtain the following system approximating the original problem (2.19), (1.7): (2.38)
with , the function being a grid function approximating the initial condition v, where is defined by formula (2.31). The coefficient matrix has the form (2.39)
We note that the coefficient matrices defined by (2.39) are positive semi-definite (see e.g. [Citation23]).
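In d = 1 with a constant coefficient, the matrix in (2.39) reduces to the Dirichlet second-difference matrix, whose definiteness can be checked directly; a sketch with illustrative parameters:

```python
import numpy as np

# Dirichlet second-difference matrix (1/h^2) * tridiag(-1, 2, -1) on the
# interior nodes of a uniform grid on [0, 1]: the d = 1, constant-coefficient
# instance of (2.39). Its eigenvalues are (4/h^2) * sin^2(k*pi*h/2) > 0.
n = 9                                   # interior nodes, so h = 1/(n + 1)
h = 1.0 / (n + 1)
L = (np.diag(2.0 * np.ones(n))
     - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2
eigs = np.linalg.eigvalsh(L)
k = np.arange(1, n + 1)
exact = 4.0 / h**2 * np.sin(k * np.pi * h / 2.0) ** 2
print(eigs.min() > 0.0)                               # positive definite
print(np.allclose(np.sort(eigs), np.sort(exact)))     # matches the formula
```

This positivity is what makes the implicit factors of the splitting scheme below uniquely solvable and stable.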
By the same scheme as presented in [Citation24], or in more detail in [Citation15] for the finite difference method for the Neumann problem, we can prove the following results.
Lemma 2.8:
Let be a solution to the Cauchy problem (2.38).
(a) If , then there exists a constant c independent of h and the coefficients of Equation (1.5) such that (2.40)
(b) If with , then (2.41)
From these estimates, following [Citation24] and [Citation15] exactly, we can prove the convergence of the difference-differential problem (2.38) to the solution of the Dirichlet problem (1.5).
Theorem 2.9:
(1) If , then the multi-linear interpolation (2.26) of the solution of the difference-differential problem (2.38) in converges weakly in to the solution of the Dirichlet problem (1.5), and its derivatives with respect to , , converge weakly in to .
(2) If with , and , then strongly converges to u.
We now turn to approximating the minimization problem (2.6). Using the representation in Section 2.1, we have the first-order optimality condition for this problem as follows: (2.42)
For the discretized problem (2.38), we approximate the functional as follows.
First, we approximate the functional by (2.43)
As in the continuous problem, we define as the solution to the Cauchy problem (2.44)
where is defined as in (2.38), and as the solution to the Cauchy problem (2.45)
Then, (2.46)
Hence the operator is linear and bounded from into , for . Furthermore, if is a solution to the Cauchy problem (2.47)
with , where is defined in formula (2.31), then
Thus, the discretized version of has the form , and the functional is now approximated as follows: (2.48)
Here, for simplicity of notation, we again set . For this discretized optimization problem, we have the first-order optimality condition (2.49)
If we suppose that condition (2) of Theorem 2.9 is satisfied, then and tend to zero as h tends to zero. Following [Citation25], we can prove the following result.
Theorem 2.10:
Let condition (2) of Theorem 2.9 be satisfied. Then the interpolation of the solution of the problem (2.48) converges to the solution of the problem (2.6) in as h tends to zero.
2.3.1. Discretization in time
To obtain the finite difference scheme for Equation (2.38), we divide the time interval [0, T] into M sub-intervals by the points . To simplify the notation, we set . We also denote by and , . In the following, we drop the spatial index to simplify the notation. For the one-dimensional case, we use the standard Crank–Nicolson method; for multi-dimensional cases, we proceed as follows.
2.3.2. Splitting method
In order to obtain a splitting scheme for the Cauchy problem (2.38), we also discretize the time interval in the same way as in the finite difference method. We denote . We introduce the following implicit two-cycle component-by-component splitting scheme [Citation18]: (2.50)
Equivalently, (2.51)
where is the identity matrix corresponding to . The splitting scheme (2.51) can be rewritten in the following compact form: (2.52)
with (2.53)
where
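One full time step of a two-cycle splitting of this kind can be sketched in two dimensions as follows (dense solves are used in place of the tridiagonal Thomas algorithm, the source term is omitted, and all names are illustrative); since the one-dimensional operators are positive semi-definite, each fractional step is non-expansive, so the sweep is unconditionally stable:

```python
import numpy as np

def second_diff(n, h):
    """1-D Dirichlet second-difference operator (1/h^2)*tridiag(-1, 2, -1)."""
    return (np.diag(2.0 * np.ones(n))
            - np.diag(np.ones(n - 1), 1)
            - np.diag(np.ones(n - 1), -1)) / h**2

def quarter_step(U, L, tau):
    """Apply the Crank-Nicolson factor (I + tau/4 L)^{-1}(I - tau/4 L)
    along axis 0 of U: one fractional step of the two-cycle scheme."""
    I = np.eye(U.shape[0])
    return np.linalg.solve(I + tau / 4.0 * L, (I - tau / 4.0 * L) @ U)

n, tau = 19, 0.01
h = 1.0 / (n + 1)
L1 = L2 = second_diff(n, h)
U = np.random.default_rng(2).standard_normal((n, n))   # current time layer
norm_before = np.linalg.norm(U)
U = quarter_step(U, L1, tau)          # sweep x1 ...
U = quarter_step(U.T, L2, tau).T      # ... then x2,
U = quarter_step(U.T, L2, tau).T      # then x2 again ...
U = quarter_step(U, L1, tau)          # ... and x1: the reverse sweep
print(np.linalg.norm(U) <= norm_before)   # non-expansive step: True
```

Each fractional step only couples unknowns along one coordinate direction, which is why the multi-dimensional problem decomposes into fast one-dimensional solves.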
2.3.3. Variational method for the multi-dimensional case
To complete the variational method for multi-dimensional cases, we use the splitting method for the forward problem and (2.54)
where is the first index for which , and indicates its dependence on the initial condition , with m being the index of grid points on the time axis. The notation denotes the approximation of the function in at the points . Normally, we take its average over the cell where is located, as defined by (2.31). We note that in the definition of , the multiplier has been dropped as it plays no role.
To solve the problem (2.54) by the conjugate gradient method, we first calculate the gradient of the objective function; it is given by the following theorem.
Theorem 2.11:
The gradient of the objective function at is given by (2.55)
where satisfies the adjoint problem (2.56)
Here the matrix is given by (2.57)
For a small variation of , we have from (2.54) that (2.58)
where and , and the inner product is that of .
It follows from (2.52) that w is the solution to the problem (2.59)
Taking the inner product of both sides of the mth equation of (2.59) with an arbitrary vector and summing the results over , we obtain (2.60)
Here is the adjoint matrix of . Consider the adjoint problem (2.61)
Taking the inner product of both sides of the first equation of (2.61) with an arbitrary vector and summing the results over , we obtain (2.62)
From (2.60) and (2.62), we have (2.63)
On the other hand, it can be proved by induction that . Hence, it follows from the condition , (2.58) and (2.63) that (2.64)
Consequently, the gradient of the objective function can be written as (2.65)
Note that, since the coefficient matrices are symmetric, we have
Hence (2.66)
The proof is complete.
The conjugate gradient method for the discretized functional (2.54) proceeds by the following steps:
Step 1. Given an initial approximation , calculate the residual by solving the splitting scheme (2.50) with replaced by the initial approximation, and set .
Step 2. Calculate the gradient given in (2.55) by solving the adjoint problem (2.56). Then set
Step 3. Calculate
where can be calculated from the splitting scheme (2.50) with replaced by and . Then, set
Step 4. For , calculate where
Step 5. Calculate
where can be calculated from the splitting scheme (2.50) with replaced by and . Then, set
3. Numerical simulation
To illustrate the efficiency of the proposed algorithm, we present in this section some numerical tests. The results for the one-dimensional case, with approximate singular values, are presented in Examples 1–4. The results for two-dimensional cases are given in Examples 5–7. In all examples, we test our algorithm for (1) very smooth, (2) continuous but not smooth, and (3) discontinuous initial conditions. Thus, the degree of difficulty increases with the examples. We note that although in theory we only prove the convergence of our difference scheme for smooth initial conditions, it also works in the other cases, as our numerical examples show. We also vary the observation points as well as the time interval of observations.
3.1. One-dimensional numerical examples
Set , with final time . The one-dimensional system has the form: (3.1)
In all tests for this case, we set the coefficients and . For the discretization, we take the grid size to be 0.02 and the random noise level to be 0.01.
We take 3 observations of the form (1.9) at and . The weight functions are chosen as follows: (3.2)
Thus, the reconstruction problem is to determine v from the observations for with . Following Section 2.1, we denote the solution to the system (3.1) with by . Then the operator defined by is linear and bounded from to . To characterize the degree of ill-posedness of the reconstruction problem, we have to analyse the singular values of the operator .
The approximate singular values of the solution operator obtained by our method described in Section 2.1 for different are given in Figure . From these singular values, we see that our inverse problem is severely ill-posed, and the ill-posedness depends on the time interval of observations: the shorter the observation time, the more ill-posed the problem.
Now we show numerical results for reconstructing different initial conditions with different time intervals of observations and different positions for observations.
Example 1:
We set the exact solution to (3.1) to be . Hence, the exact initial condition is given by , and the right-hand side is . The a priori guess is , , and the regularization parameter is chosen to be . The first iteration of the CG method is .
The reconstruction result and the exact initial condition v for various positions of observations are shown in Figure . The numerical results show that the quality of the reconstruction depends on the position of the observations; the result is better when the observations are distributed uniformly in .
Example 2:
In this example, the initial condition is chosen to be the same as in Example 1, , and the regularization parameter is also , but the starting time for observation is taken to be and . The comparison of the reconstruction results is displayed in Figure .
Table shows the relation between the starting point of observation and the error in the -norm of the algorithm when the number of observations is and the regularization parameter is .
We now test the algorithm for nonsmooth initial conditions. In Examples 3 and 4, we choose v, set , and solve the direct problem by the Crank–Nicolson method to find an approximation to the exact solution u; we then use these data for reconstructing the exact initial condition v. Here, we use different mesh sizes for the direct solvers to avoid an 'inverse crime'.
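A minimal Crank–Nicolson direct solver of the kind used here to generate synthetic data can be sketched as follows (homogeneous Dirichlet conditions, constant coefficients, zero source; grid parameters are illustrative):

```python
import numpy as np

def crank_nicolson(v, h, tau, steps):
    """March u_t = u_xx on (0, 1) with zero Dirichlet data from the initial
    condition v (values at the interior nodes), using Crank-Nicolson."""
    n = len(v)
    L = (np.diag(2.0 * np.ones(n))
         - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2
    I = np.eye(n)
    A = I + tau / 2.0 * L                 # implicit factor
    B = I - tau / 2.0 * L                 # explicit factor
    u = v.copy()
    for _ in range(steps):
        u = np.linalg.solve(A, B @ u)
    return u

h, tau = 1.0 / 50.0, 1.0 / 1000.0
x = np.linspace(h, 1.0 - h, 49)           # interior nodes
v = np.sin(np.pi * x)                     # initial condition
u_T = crank_nicolson(v, h, tau, 1000)     # numerical solution at T = 1
u_exact = np.exp(-np.pi ** 2) * v         # exact solution of u_t = u_xx
print(np.max(np.abs(u_T - u_exact)))      # small discretization error
```

Using a different mesh size for the direct solver than for the inversion, as done in the examples, is the standard safeguard against the inverse crime.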
Example 3:
In this example, the initial condition is given by
and . The regularization parameter and the starting time are chosen to be the same as in Example 1. The comparison of the reconstruction results is displayed in Figure .
Example 4:
In this example, the initial condition is given by
and . The regularization parameter and the starting time are chosen to be the same as in Example 1. The comparison of the reconstruction results is displayed in Figure .
3.2. Two-dimensional numerical examples
Set . Set further . The Dirichlet problem has the form (3.3)
For the tests, we take and .
For the discretization, we take the grid sizes to be 0.02 and the random noise level to be 0.01. The weight functions are chosen as follows: (3.4)
As said in the introduction, the operators defined by (1.9) with these weight functions can be regarded as 'practical' point-wise observations. We shall test our algorithm for different distributions of : they will be taken either in the whole or in a part of it: , , or . The number of observations N will be either 4 or 9.
Example 5:
We test the algorithm for the smooth initial condition given by and . The source term f is thus given by
The measurement is taken at 9 points, and the regularization parameter is set to . We test the algorithm with the initial guess . The reconstruction result and the exact initial condition v are shown in Figure . The algorithm stops after 38 iterations; the computational time is 87.496271 s and the error in the -norm is 0.02346. The behaviour of the algorithm when the observation regions and the number of observations vary is shown in Tables and . We can see that the more observations we take, the better the accuracy.
Now we test the algorithm for nonsmooth initial conditions. In Examples 6 and 7, we choose v, set , and then use the splitting method to find an approximation to the solution u. After that, we use the obtained data to test our algorithm.
Example 6:
In this example, we choose 9 observations, set the regularization parameter , and take v to be a multi-linear function given by
and . The reconstruction results and the exact initial condition v are shown in Figure . The algorithm stops after 45 iterations; the computational time is 103.394200 s and the error in the -norm is 0.024959.
Example 7:
We test the algorithm with the piecewise constant initial condition
and . The algorithm stops after 100 iterations; the computational time is 109.423362 s and the error in the -norm is 0.033148 (see Figure ).
Acknowledgements
The constructive comments of the reviewers are kindly acknowledged.
No potential conflict of interest was reported by the authors.
References
- Agoshkov VI. Optimal control methods and the method of adjoint equations in problems of mathematical physics. Moscow: Russian Academy of Sciences, Institute for Numerical Mathematics; 2003. 257pp. Russian.
- Shutyaev VP. Control operators and iterative algorithms in variational data assimilation problems. Moscow: Nauka; 2001. 240pp. Russian.
- Aubert G, Kornprobst P. Mathematical problems in image processing. New York (NY): Springer; 2006.
- Hào DN. Methods for inverse heat conduction problems. Frankfurt/Main, Bern, New York (NY), Paris: Peter Lang Verlag; 1998.
- Boussetila N, Rebbani F. Optimal regularization method for ill-posed Cauchy problems. Electron. J. Differ. Equ. 2006;147:1–15.
- Hào DN, Duc NV. Stability results for backward parabolic equations with time dependent coefficients. Inverse Prob. 2011;27:025003, 20pp.
- Li J, Yamamoto M, Zou J. Conditional stability and numerical reconstruction of initial temperature. Commun. Pure Appl. Anal. 2009;8:361–382.
- Tröltzsch F. Optimal control of partial differential equations: theory, methods and applications. Providence (RI): American Mathematical Society; 2010.
- Ladyzhenskaya OA, Solonnikov VA, Ural’ceva NN. Linear and quasilinear equations of parabolic type. Providence (RI): American Mathematical Society; 1968.
- Wloka J. Partial differential equations. Cambridge: Cambridge University Press; 1987.
- Klibanov MV. Carleman estimates for the regularization of ill-posed Cauchy problems. Appl. Numer. Math. 2015;94:46–74.
- Klibanov MV, Kuzhuget AV, Golubnichiy KV. An ill-posed problem for the Black-Scholes equation for a profitable forecast of prices of stock options on real market data. Inverse Prob. 2016;32:015010, 16pp.
- Oanh NTN. A splitting method for a backward parabolic equation with time-dependent coefficients. Comput. Math. Appl. 2013;65:17–28.
- Trefethen LN, Bau D III. Numerical linear algebra. Philadelphia: SIAM; 1997.
- Hào DN, Oanh NTN. Determination of the initial condition in parabolic equations from boundary observations. J. Inverse Ill-Posed Prob. 2016;24:195–220.
- Nocedal J, Wright SJ. Numerical optimization. 2nd ed. New York (NY): Springer; 2006.
- Hinze M. A variational discretization concept in control constrained optimization: the linear-quadratic case. Comput. Optimiz. Appl. 2005;30:45–61.
- Marchuk GI. Methods of numerical mathematics. New York (NY): Springer-Verlag; 1975.
- Marchuk GI. Splitting and alternating direction methods. In Ciarlet PG, Lions JL, editors. Handbook of numerical mathematics. Volume 1: finite difference methods. North-Holland, Amsterdam: Elsevier Science Publisher B.V.; 1990.
- Yanenko NN. The method of fractional steps. Berlin: Springer-Verlag; 1971.
- Hào DN, Thành NT, Sahli H. Splitting-based gradient method for multi-dimensional inverse conduction problems. J. Comput. Appl. Math. 2009;232:361–377.
- Oanh NTN, Huong BV. Determination of a time-dependent term in the right hand side of linear parabolic equations. Acta Math. Vietnam. 2016;41:313–335.
- Thành NT. Infrared thermography for the detection and characterization of buried objects [PhD thesis]. Brussels, Belgium: Vrije Universiteit Brussel; 2007.
- Ladyzhenskaya OA. The boundary value problems of mathematical physics. New York (NY): Springer-Verlag; 1985.
- Hào DN, Thanh PX, Lesnic D, Johansson BT. A boundary element method for a multi-dimensional inverse heat conduction problem. Int. J. Comput. Math. 2012;89:1540–1554.
- Hào DN. A noncharacteristic Cauchy problem for linear parabolic equations II: a variational method. Numer. Funct. Anal. Optim. 1992;13:541–564.
- Hào DN. A noncharacteristic Cauchy problem for linear parabolic equations III: a variational method and its approximation schemes. Numer. Funct. Anal. Optim. 1992;13:565–583.
- Nemirovskii AS. The regularizing properties of the adjoint gradient method in ill-posed problems. Zh. vychisl. Mat. Mat. Fiz. 1986;26:332–347; Engl. Transl. in U.S.S.R. Comput. Maths. Math. Phys. 1986;26:7–16.