Abstract
This paper presents a numerical regularization approach to the simultaneous determination of multiplicative space- and time-dependent source functions in a nonlinear inverse heat conduction problem with homogeneous Neumann boundary conditions, together with specified interior and final time temperature measurements. Under these conditions a unique solution is known to exist. However, the inverse problem is still ill-posed, since small errors in the input interior temperature data cause large errors in the output heat source solution. For the numerical discretisation, the boundary element method combined with a regularized nonlinear optimization is utilized. Results obtained from several numerical tests are provided in order to illustrate the efficiency of the adopted computational methodology.
1. Introduction
Solving inverse source problems for the parabolic heat equation has been the focus of many works, see [Citation1–Citation7] to mention only a few, due to their ever increasing applications in problems related to pollution, unknown heat source generation, or the control of oil wells and their production rates. Almost all previous studies assumed that the unknown heat source is independent of one or more of the independent variables in order to ensure uniqueness of the solution. On the other hand, only a few studies looked at the general case,[Citation8] but the classes of functions in which the source is to lie were further restricted by constraints which are difficult to satisfy in practice (both experimentally and computationally). Recently, the authors investigated the reconstruction of an additive source of the form r(t) + s(x), see [Citation9]. In this work, we extend the investigation to the reconstruction of a multiplicative source of the form r(t)s(x), in which both r(t) and s(x) are unknown functions. In contrast to the previously investigated linear reconstruction of the additive source, this new inverse source problem formulation is more difficult to solve because it becomes nonlinear. Moreover, its ill-posedness, with small errors in the input data being blown up in the output source solution, adds further difficulty.
The plan of the paper is as follows. In Section 2, we give the mathematical formulation of the inverse multiplicative source problem and state its unique solvability. In Section 3, we describe the numerical discretisation of the problem based on the boundary element method (BEM), whilst in Section 4 we introduce the method for obtaining the solution based on a nonlinear least-squares minimization. Section 5 presents and discusses numerical results and illustrates the need for employing regularization in order to stabilise the solution. Finally, Section 6 presents the conclusions of the paper.
2. Mathematical formulation
Let $L>0$ and $T>0$ be fixed numbers and denote $\Omega_T:=(0,L)\times(0,T)$. Consider the following inverse initial-boundary value problem of finding the temperature $u(x,t)$ and the multiplicatively separable source function $r(t)s(x)$ which satisfy the equation
(2.1) $u_t = u_{xx} + r(t)s(x)$ in $\Omega_T$,
subject to the initial condition
(2.2) $u(x,0) = u_0(x)$, $x\in[0,L]$,
the homogeneous Neumann boundary conditions
(2.3) $u_x(0,t) = u_x(L,t) = 0$, $t\in(0,T]$,
and the additional temperature measurement
(2.4) $u(X_0,t) = \beta(t)$, $t\in[0,T]$,
at a fixed sensor location $X_0\in(0,L)$, and
(2.5) $u(x,T) = \mu(x)$, $x\in[0,L]$,
at the ‘upper-base’ final time $t=T$. Conditions (2.3) express that the ends $x=0$ and $x=L$ of the finite slab $(0,L)$ are insulated. In order to avoid the trivial non-uniqueness represented by the identity $r(t)s(x)=(c\,r(t))(s(x)/c)$, with $c$ an arbitrary non-zero constant, we impose a fixing condition, say
(2.6)
In the above setting, the functions $u_0$, $\beta$, $\mu$ and the constant in the fixing condition (2.6) are given, whilst the functions $r$, $s$ and $u(x,t)$ are unknown. One can remark that the specified interior and final time temperature measurements (2.4) and (2.5) are time- and space-dependent functions $\beta(t)$ and $\mu(x)$ and, in principle, they are meant to supply the missing information to identify the time- and space-dependent components r(t) and s(x), respectively. However, because of the nonlinearity represented by the product r(t)s(x), we cannot decouple the system of Equations (2.1)–(2.6) and identify separately r(t) from (2.4) and s(x) from (2.5).
We further assume that the conditions (2.2)–(2.5) are consistent, i.e. the following compatibility conditions are satisfied:
(2.7)
The unique solvability, i.e. the existence and uniqueness of the solution of the inverse problem (2.1)–(2.6), was established in [Citation10]. With some slight corrections, this theorem reads as follows.
Theorem 1:
Suppose that $\beta$ and $\mu$ satisfy (2.7). Also, assume that:
(i) …
(ii) …
(iii) …
Then the inverse problem given by Equations (2.1)–(2.6) has a unique solution $u(x,t)$ and $(r(t),s(x))$.
Remarks:
(i) In the above theorem, …
(ii) The higher regularity of the solution u(x, t) in …
(iii) In deriving the expressions for …
Although the inverse problem (2.1)–(2.6) has a unique solution, it is still ill-posed because it violates the continuous dependence of the solution upon the input data (2.4) and (2.5). This can easily be seen from the following example of instability.
Let … and, for …, take
(2.8)
which satisfies the heat Equation (2.1) with the sources
(2.9)
We also have that the initial condition (2.2) and the Neumann boundary conditions (2.3) are all homogeneous, and the measured data (2.4)–(2.6) are given by
(2.10)
as …. However, the sources r(t) and s(x) given by (2.9) become unbounded as …, in any reasonable norm.
3. Boundary element method
In this section, we explain the numerical procedure for discretising the inverse problem (2.1)–(2.6) by using the BEM. First of all, let us introduce the fundamental solution $G$ of the one-dimensional heat equation,
$$G(x,t;y,\tau)=\frac{H(t-\tau)}{\sqrt{4\pi(t-\tau)}}\,\exp\!\left(-\frac{(x-y)^{2}}{4(t-\tau)}\right),$$
where $H$ is the Heaviside step function. On multiplying Equation (2.1) by this fundamental solution and using Green’s identity, we obtain the following boundary integral equation, see e.g. [Citation2]:
(3.1)
where …, and $n$ is the outward unit normal to the space boundary $\{0,L\}$. For discretising (3.1), we divide the boundaries $\{0\}\times(0,T)$ and $\{L\}\times(0,T)$ into $N$ small time-intervals $[t_{i-1},t_i]$, $i=\overline{1,N}$, with $t_i=iT/N$, whilst the initial domain $(0,L)\times\{0\}$ is divided into $N_0$ small cells $[x_{k-1},x_k]$, $k=\overline{1,N_0}$, with $x_k=kL/N_0$. Over each boundary element, the temperature $u$ is assumed to be constant and to take its value at the midpoint $\tilde t_i=(t_{i-1}+t_i)/2$. Similarly, in each cell, the initial temperature $u(x,0)$ is assumed to be constant and to take its value at the midpoint $\tilde x_k=(x_{k-1}+x_k)/2$. The source functions r(t) and s(x) are also approximated by piecewise constant functions, $r(t)\approx r_i$ for $t\in(t_{i-1},t_i]$, $i=\overline{1,N}$, and $s(x)\approx s_k$ for $x\in(x_{k-1},x_k]$, $k=\overline{1,N_0}$.
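For concreteness, the fundamental solution introduced above can be evaluated directly. This is a minimal sketch, assuming unit thermal diffusivity as in the governing equation; the function name is ours:

```python
import numpy as np

def heat_kernel(x, t, y, tau):
    """Fundamental solution G(x, t; y, tau) of u_t = u_xx:
    H(t - tau) / sqrt(4*pi*(t - tau)) * exp(-(x - y)^2 / (4*(t - tau))),
    with H the Heaviside function, so G vanishes for t <= tau (causality)."""
    dt = t - tau
    if dt <= 0.0:
        return 0.0
    return float(np.exp(-(x - y) ** 2 / (4.0 * dt)) / np.sqrt(4.0 * np.pi * dt))

# Sanity check: for fixed t > tau, the kernel carries unit heat content,
# i.e. its integral over y is 1 (approximated here by a Riemann sum).
ys = np.linspace(-30.0, 30.0, 60001)
mass = float(np.sum([heat_kernel(0.0, 1.0, y, 0.0) for y in ys]) * (ys[1] - ys[0]))
```

The causality property (G = 0 for t ≤ τ) is what restricts the integral equation (3.1) to the time levels preceding the current one.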
Applying the homogeneous Neumann boundary condition (2.3) and using the constant BEM interpolations above, the boundary integral Equation (3.1) becomes
(3.2)
where the coefficients are given by
(3.3)
(3.4)
(3.5)
The integrals in (3.3) and (3.4) can be evaluated analytically and are given in [Citation2], whereas the double integral source term in (3.5) is given analytically in terms of the error function
$$\operatorname{erf}(x)=\frac{2}{\sqrt{\pi}}\int_{0}^{x}e^{-\sigma^{2}}\,\mathrm d\sigma.$$
By applying (3.2) at the boundary element nodes $(0,\tilde t_i)$ and $(L,\tilde t_i)$ for $i=\overline{1,N}$, we obtain the system of 2N equations
(3.6)
where … and $\delta$ is the Kronecker delta symbol.
For the direct problem, we can now find the boundary temperatures $u(0,\tilde t_i)$ and $u(L,\tilde t_i)$ from (3.6) as
(3.7)
Furthermore, the temperatures $u(X_0,\tilde t_i)$ for $i=\overline{1,N}$ and $u(\tilde x_k,T)$ for $k=\overline{1,N_0}$ are explicitly given from (3.2) as
(3.8)
(3.9)
where …
4. Solution of inverse problem
In this section, we wish to obtain simultaneously the unknown components r(t) and s(x) of the multiplicative source term in the inverse problem (2.1)–(2.6) by using the BEM together with a classical minimization process which is commonly used in the theory of inverse problems.[Citation13] The conditions (2.4)–(2.6) are imposed by minimizing the nonlinear least-squares function
(4.1)
Here, the approximated temperatures $u(X_0,t)$ and $u(x,T)$, as introduced earlier in (3.8) and (3.9), respectively, are employed in the above objective function, with initial guesses … and … for the functions r and s, respectively, whereas … is approximated as
(4.2)
where … is the largest number for which …. Then, introducing (3.7)–(3.9) and (4.2) into (4.1) yields
(4.3)
where … and the norm $\|\cdot\|$ is the Euclidean norm of a vector. The minimization of (4.3) is performed using the lsqnonlin routine from the MATLAB Optimization Toolbox. This routine attempts to find the minimum of a sum of squares, starting from some arbitrary initial guesses for r and s, respectively. Note that we have run this routine with the following default parameters:
Algorithm = Trust-Region-Reflective.
Maximum number of objective function evaluations, ‘MaxFunEvals’ = ….
Maximum number of iterations, ‘MaxIter’ = 400.
Termination tolerance on the function value, ‘TolFun’ = … to ….
Termination tolerance on the variables, ‘TolX’ = … to ….
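For readers without MATLAB, SciPy's least_squares routine is a close analogue of lsqnonlin (it also implements the trust-region-reflective algorithm, and ftol/xtol play the roles of TolFun/TolX). The sketch below fits stacked piecewise-constant nodal values, mirroring how (4.3) stacks the two unknown vectors; the residual here is a deliberately simple stand-in, since reproducing the BEM matrices of Section 3 is beyond a few lines, and all names are ours:

```python
import numpy as np
from scipy.optimize import least_squares

N, N0 = 20, 20
t = np.linspace(0.025, 0.975, N)    # stand-in midpoints t~_i
x = np.linspace(0.025, 0.975, N0)   # stand-in midpoints x~_k

# Stand-in "targets" for a known separable source r(t) = 1 + t, s(x) = x^2.
r_true, s_true = 1.0 + t, x ** 2

def residual(v):
    # v stacks the nodal values r_1..r_N followed by s_1..s_N0, mirroring
    # how (4.3) stacks the two unknown piecewise-constant vectors.
    r, s = v[:N], v[N:]
    return np.concatenate([r - r_true, s - s_true])

v0 = np.ones(N + N0)                # initial guess, as lsqnonlin requires
sol = least_squares(residual, v0, method="trf", ftol=1e-12, xtol=1e-12)
```

In the real problem the residual would evaluate (3.8)–(3.9) for the current nodal values of r and s and subtract the measured data, exactly as (4.3) prescribes.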
5. Numerical results and discussion
This section presents three benchmark test examples in order to test the accuracy and stability of the numerical methods introduced in Sections 3 and 4. The following root mean square errors (RMSE) are used to evaluate the accuracy of the numerical results:
(5.1) $\mathrm{RMSE}_t=\sqrt{\dfrac{1}{N}\sum_{i=1}^{N}\big(\mathrm{Exact}(\tilde t_i)-\mathrm{Numerical}(\tilde t_i)\big)^{2}},$
(5.2) $\mathrm{RMSE}_x=\sqrt{\dfrac{1}{N_0}\sum_{k=1}^{N_0}\big(\mathrm{Exact}(\tilde x_k)-\mathrm{Numerical}(\tilde x_k)\big)^{2}}.$
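The two RMSE measures (5.1) and (5.2) differ only in the grid over which they average, so a single helper covers both (a small sketch; the function name is ours):

```python
import numpy as np

def rmse(exact, numerical):
    """Root mean square error as in (5.1)-(5.2):
    sqrt( (1/M) * sum_j (exact_j - numerical_j)^2 )."""
    exact = np.asarray(exact, dtype=float)
    numerical = np.asarray(numerical, dtype=float)
    return float(np.sqrt(np.mean((exact - numerical) ** 2)))

err = rmse([1.0, 2.0, 3.0], [1.0, 2.0, 5.0])  # only the last entry is off, by 2
```

Here err equals sqrt(4/3), since the single error of 2 is squared and averaged over the three grid points.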
5.1. Example 1
We consider a benchmark test example with …, and the initial data (2.2) given by
(5.3)
For the direct problem (2.1)–(2.3) we also need the input source data
(5.4)
In order to test the BEM accuracy for the direct problem given by Equation (2.1), with the source given by the product of the functions in (5.4), subject to (2.3) and (5.3), the numerical results are compared with the exact solution given by
(5.5)
The exact expressions for the quantities (2.4)–(2.6) are given by
(5.6)
We took …, which is small because, as defined in Theorem 1, we then have … and …, which satisfy all the conditions (i)–(iii) for the existence and uniqueness of the solution.
As the specified boundary conditions (2.3) are of Neumann type, the boundary unknowns in the BEM are represented by the Dirichlet data u(0, t) and u(L, t), as given by (3.7). Once all the boundary data have been obtained accurately, Equations (3.8) and (3.9) can be employed to obtain explicitly the temperatures (2.4) and (2.5), respectively. The RMSE of the direct problem results are shown in Table 1, and it can be concluded that the BEM numerical solutions converge to their corresponding exact values as the number of boundary elements increases. Figure 1 displays the analytical and numerical results for $\beta(t)$ and $\mu(x)$, and very good agreement can be observed.
Table 1. The RMSE (5.1) and (5.2) for $\beta(t)$ and $\mu(x)$, obtained using the BEM for the direct problem with $N=N_0\in\{10,20,40\}$, for Example 1.
Figure 1. The analytical (—–) and numerical results for (a) β(t) and (b) μ(x) obtained using the BEM for the direct problem with N=N₀∈{10(-·-), 20(⋯), 40(---)}, for Example 1.
![Figure 1. The analytical (—–) and numerical results for (a) β(t) and (b) μ(x) obtained using the BEM for the direct problem with N=N0∈{10(-·-),20(⋯),40(---)}, for Example 1.](/cms/asset/00ac3ed8-bab5-479d-b110-3658ebee3195/gipe_a_1130041_f0001_b.gif)
Next, we consider the inverse problem given by Equations (2.1), (2.3), (2.6) with … specified, (5.3) and (5.6). The numerical solution is obtained, as described in Section 4, by minimizing the objective function (4.1). Preliminary numerical investigations showed that the initial guesses for r and s cannot be completely arbitrary if the minimization process is to converge. After many trials, we decided to illustrate the numerical results obtained by considering the initial guess as
(5.7)
where … and … are given by (5.4), and … and … are random variables generated by the MATLAB command for normal distributions with mean zero and standard deviations … and …, respectively, given by
(5.8)
where … is a percentage of perturbation. The intention here is that the initial guesses (5.7) are understood as general priors for the unknowns, presumed to have been obtained experimentally from some simple but rather inaccurate practical measurement. Hereafter, unless otherwise specified, we present results obtained with a $p_0=100\%$ perturbed initial guess (which is quite far from the exact solution (5.4)), and use the MATLAB optimization toolbox lsqnonlin with TolFun = TolX = … to solve the inverse problem.
Figure 2. (a) The objective function $F_0$ and the numerical results for (b) r(t), (c) s(x), (d) u(0, t), (e) u(0.1, t) obtained with no regularization (-·-), for exact data, for Example 1. The corresponding analytical solutions are shown by continuous line (—–) in (b)–(e) and the $p_0=100\%$ perturbed initial guesses are shown by dotted line (⋯) in (b) and (c).
![Figure 2. (a) The objective function F0 and the numerical results for (b) r(t), (c) s(x), (d) u(0, t), (e) u(0.1, t) obtained with no regularization (-·-), for exact data for Example 1. The corresponding analytical solutions are shown by continuous line (—–) in (b)–(e) and the p0=100% perturbed initial guesses are shown by dotted line (⋯) in (b) and (c).](/cms/asset/a30512cc-82eb-4066-859d-3151c605eedf/gipe_a_1130041_f0002_b.gif)
Figure 3. The numerical results for (a) r(t), (b) s(x), (c) u(0, t), (d) u(0.1, t) obtained with the first-order regularization (⋯) and the second-order regularization (---) with regularization parameter $\lambda=10^{-5}$, for exact data, for Example 1. The corresponding analytical solutions are shown by continuous line (—–).
![Figure 3. The numerical results for (a) r(t), (b) s(x), (c) u(0, t), (d) u(0.1, t) obtained with the first-order regularization (⋯) and the second-order regularization (---) with regularization parameter λ=10-5, for exact data for Example 1. The corresponding analytical solutions are shown by continuous line (—–).](/cms/asset/c1d6810a-b52a-429f-bc3e-c09d4d21db93/gipe_a_1130041_f0003_b.gif)
Figure 2(a) shows the unregularized objective function $F_0$, which converges in 39 iterations; the corresponding numerical results for r(t), s(x), u(0, t) and u(0.1, t) are displayed in Figures 2(b)–(e), respectively. As can be seen in these figures, the numerical results are inaccurate, and partially unstable in Figure 2(c).
In order to improve the accuracy and stability, we apply a Tikhonov regularization process based on minimizing the penalised objective function
(5.9) $F_\lambda(\underline r,\underline s)=F_0(\underline r,\underline s)+\lambda\big\|R\,(\underline r,\underline s)^{\mathrm T}\big\|^{2},$
where $\lambda>0$ is a regularization parameter to be prescribed and R is a (differential) regularizing matrix. Initially, we have applied the first- and second-order regularizations, based on minimizing the objective function (5.9) as
(5.10) $F_\lambda(\underline r,\underline s)=F_0(\underline r,\underline s)+\lambda\Big(\sum_{i=1}^{N-1}(r_{i+1}-r_i)^{2}+\sum_{k=1}^{N_0-1}(s_{k+1}-s_k)^{2}\Big),$
(5.11) $F_\lambda(\underline r,\underline s)=F_0(\underline r,\underline s)+\lambda\Big(\sum_{i=2}^{N-1}(r_{i+1}-2r_i+r_{i-1})^{2}+\sum_{k=2}^{N_0-1}(s_{k+1}-2s_k+s_{k-1})^{2}\Big),$
respectively. By trial and error, among various regularization parameters $\lambda$, we have found, as best illustrative results, those obtained with $\lambda=10^{-5}$, which are shown in Figure 3. As can be seen in this figure, applying the first- or second-order regularizations (5.10) or (5.11) yields stable, but rather inaccurate results, especially near the endpoints of the intervals of definition of the functions involved. In order to improve on these inaccuracies, we have then investigated a hybrid combination of first- and second-order regularizations, given by
(5.12) $F_\lambda(\underline r,\underline s)=F_0(\underline r,\underline s)+\lambda\Big((r_1-r_2)^{2}+(r_N-r_{N-1})^{2}+\sum_{i=2}^{N-1}(r_{i+1}-2r_i+r_{i-1})^{2}+(s_1-s_2)^{2}+(s_{N_0}-s_{N_0-1})^{2}+\sum_{k=2}^{N_0-1}(s_{k+1}-2s_k+s_{k-1})^{2}\Big).$
According to (5.9) and (5.12), the differential regularization matrix R is given by
(5.13)
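The hybrid penalty in (5.12) applies first-order differences at the two endpoints of each unknown vector and second-order differences in the interior. The block of the matrix R in (5.13) acting on one vector can therefore be sketched as follows (variable names are ours):

```python
import numpy as np

def hybrid_reg_matrix(n):
    """One diagonal block of R, per (5.12): first-order differences at the
    two endpoints, second-order differences at the interior nodes."""
    R = np.zeros((n, n))
    R[0, 0], R[0, 1] = 1.0, -1.0          # (v_1 - v_2)
    R[-1, -2], R[-1, -1] = -1.0, 1.0      # (v_n - v_{n-1})
    for i in range(1, n - 1):             # (-v_{i+1} + 2 v_i - v_{i-1})
        R[i, i - 1], R[i, i], R[i, i + 1] = -1.0, 2.0, -1.0
    return R

def penalty(r, s, lam):
    """The added term of (5.12): lam * ( ||R_r r||^2 + ||R_s s||^2 )."""
    Rr, Rs = hybrid_reg_matrix(len(r)), hybrid_reg_matrix(len(s))
    return lam * (np.sum((Rr @ r) ** 2) + np.sum((Rs @ s) ** 2))

# A linear vector has zero second differences, so only the two endpoint
# first-difference terms contribute: 0.1^2 + 0.1^2 = 0.02 for steps of 0.1.
r_lin = np.linspace(0.0, 1.0, 11)
p = penalty(r_lin, np.zeros(5), 1.0)
```

This endpoint treatment is what keeps linear trends near the interval ends cheap for the penalty while still smoothing the interior, consistent with the accuracy gains near the endpoints reported below.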
In the regularization process, we need to choose an appropriate regularization parameter $\lambda$ which balances accuracy and stability. Here, we use the L-curve method to find it. Figure 4(a) shows the L-curve obtained by plotting the solution norm against the residual norm for various values of $\lambda$ when R is given by (5.13). From this figure it can be seen that the corner of the L-curve occurs near $\lambda=10^{-5}$, with other appropriate values in the wide range … to …. With this value of the regularization parameter, the numerical results are shown in Figures 4(b)–(f). From Figure 4(b) it can be seen that convergence of the regularized objective function $F_\lambda$ is achieved within 15 iterations. Also, in comparison with the previous unregularized results of Figure 2, very good agreement between the exact and the regularized numerical solutions is now obtained, as illustrated in Figures 4(c)–(f). All results are summarized in terms of the RMSE (5.1) and (5.2) in Table 2. Various initial guesses (5.7), with … in (5.8), have been investigated in order to test the robustness of the minimization procedure with respect to the initial guess. From Table 2 it can be seen that, whilst the choice of the initial guess seems to matter for the accuracy of the unregularized solution, this restriction disappears when regularization with $\lambda=10^{-5}$ is imposed. This shows that the numerical regularization method employed is robust with respect to the choice of initial guess.
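The L-curve method itself is generic, so it can be illustrated on a toy ill-conditioned linear least-squares problem with zeroth-order Tikhonov regularization (the paper instead penalises with the hybrid matrix R of (5.13), and its residual comes from the nonlinear BEM functional; every name below is ours):

```python
import numpy as np

# Toy ill-conditioned problem: minimise ||A v - b||^2 + lam * ||v||^2.
rng = np.random.default_rng(0)
n = 50
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = U @ np.diag(10.0 ** np.linspace(0.0, -5.0, n)) @ V.T  # decaying spectrum
v_true = V[:, 0] + V[:, 1]
b = A @ v_true + 1e-4 * rng.standard_normal(n)            # noisy data

lams = 10.0 ** np.arange(-9.0, 1.0)                       # 1e-9 ... 1e0
residual_norms, solution_norms = [], []
for lam in lams:
    # Normal equations of the penalised functional, cf. (5.9).
    v = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)
    residual_norms.append(float(np.linalg.norm(A @ v - b)))
    solution_norms.append(float(np.linalg.norm(v)))
# Plotting log(solution_norms) against log(residual_norms) traces the
# L-curve; the regularization parameter is read off at its corner.
```

As λ increases, the residual norm grows while the solution norm shrinks; the corner of the resulting log-log curve marks the compromise between data fit and stability.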
Table 2. The RMSE (5.1) and (5.2) for r(t) and s(x), for exact data, for Example 1.
Figure 4. (a) The L-curve criterion, (b) the objective function $F_\lambda$, and the numerical results (-∘-) for (c) r(t), (d) s(x), (e) u(0, t), (f) u(0.1, t) obtained with the hybrid-order regularization (5.12) with regularization parameter $\lambda=10^{-5}$ suggested by the L-curve, for exact data, for Example 1. The corresponding analytical solutions are shown by continuous line (—–) in (c)–(f).
![Figure 4. (a) The L-curve criterion, (b) the objective function Fλ, and the numerical results (-∘-)for (c) r(t), (d) s(x), (e) u(0, t), (f) u(0.1, t) obtained with the hybrid-order regularization (Equation5.12(5.12) Fλ(r̲,s̲)=F0(r̲,s̲)+λ((r1-r2)2+(-rN-1+rN)2+∑i=2N-1(-ri+1+2ri-ri-1)2+(s1-s2)2+(-sN0-1+sN0)2+∑k=2N0-1(-sk+1+2sk-sk-1)2).(5.12) ) with regularization parameter λ=10-5 suggested by L-curve, for exact data for Example 1. The corresponding analytical solutions are shown by continuous line (—–) in (c)–(f).](/cms/asset/543292ca-c51f-495b-88e7-fb6261f8919f/gipe_a_1130041_f0004_b.gif)
Figure 5. (a) The objective function $F_\lambda$ and the numerical results for (b) r(t), (c) s(x), (d) u(0, t), (e) u(0.1, t) obtained with the hybrid-order regularization (5.12) with regularization parameter $\lambda=10^{-5}$, for $P\in\{1(-·-),3(⋯),5(---)\}\%$ noisy data, for Example 1. The corresponding analytical solutions are shown by continuous line (—–) in (b)–(e).
![Figure 5. (a) The objective function Fλ and the numerical results for (b) r(t), (c) s(x), (d) u(0, t), (e) u(0.1, t) obtained with the hybrid-order regularization (Equation5.12(5.12) Fλ(r̲,s̲)=F0(r̲,s̲)+λ((r1-r2)2+(-rN-1+rN)2+∑i=2N-1(-ri+1+2ri-ri-1)2+(s1-s2)2+(-sN0-1+sN0)2+∑k=2N0-1(-sk+1+2sk-sk-1)2).(5.12) ) with regularization parameter λ=10-5 for P∈{1(-·-),3(⋯),5(---)}% noisy data for Example 1. The corresponding analytical solutions are shown by continuous line (—–) in (b)–(e).](/cms/asset/4f6ce3fd-344b-4acd-9b17-4b8c080d1e82/gipe_a_1130041_f0005_b.gif)
To test the stability of the BEM combined with the nonlinear regularization, we solve the inverse problem when random noises, defined as
(5.14)
are added to the input functions $\beta(t)$ and $\mu(x)$, respectively, where P represents the percentage of noise with which the measured data (2.4) and (2.5) are contaminated. In (5.14), the standard deviations … and … are given by
(5.15)
Then $\beta$ and $\mu$ in (4.3) are replaced by their noisy perturbations
(5.16)
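The additive perturbation of (5.14)–(5.16) can be mimicked as follows. This is a sketch only: we take the standard deviation to be P% of the largest absolute data value, which is one common reading of (5.15), but the paper's exact expression was not recoverable, and the function name is ours:

```python
import numpy as np

rng = np.random.default_rng(1)

def add_noise(data, P, rng=rng):
    """Additive Gaussian noise in the spirit of (5.14): data + eps, with
    eps ~ N(0, sigma^2) and sigma taken as P% of max|data| (assumed form)."""
    data = np.asarray(data, dtype=float)
    sigma = (P / 100.0) * float(np.max(np.abs(data)))
    return data + rng.normal(0.0, sigma, size=data.shape), sigma

beta = np.cos(np.linspace(0.0, 1.0, 200))     # stand-in measured data
beta_noisy, sigma = add_noise(beta, P=5)      # 5% noise
```

The noisy vectors produced this way are what replace β and μ in the residual of (4.3) during the stability tests.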
The numerical results obtained with $\lambda=10^{-5}$ (suggested by the L-curve criterion) are illustrated in Figure 5. From Figure 5(a) it can be seen that convergence of the objective functional (5.12) is rapidly achieved, within 15–16 iterations, for $P\in\{1,3,5\}\%$. Furthermore, Figures 5(b)–(e) show that stable and accurate numerical results are obtained for all amounts of noise P. Also, as expected, the numerical solutions become more accurate as the amount of noise P decreases.
5.2. Example 2
In Example 1, all the conditions for the existence and uniqueness of Theorem 1 were satisfied. We now consider an example which has the analytical solution,[Citation11]
(5.17)
(5.18) $r(t)=e^{3t},\quad s(x)=4\cos(x),\quad t\in(0,0.3),\quad x\in(0,\pi),$
where …. One can easily check that the homogeneous Neumann conditions (2.3) are satisfied and that the initial condition (2.2) is homogeneous and given by (5.3). Taking also …, we obtain that the input data (2.4)–(2.6) are given by
(5.19)
From (5.3) and (5.19) we have …. One can then observe that conditions (i) and (ii) of Theorem 1 are satisfied, but condition (iii) is violated. Whilst a solution obviously exists, as given by Equations (5.17) and (5.18), one cannot yet guarantee that this solution is unique.
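As a consistency check on (5.18): with r(t) = e^{3t}, s(x) = 4 cos x, insulated ends at x = 0, π and zero initial data, one closed form satisfying (2.1)–(2.3) is u(x,t) = (e^{3t} − e^{−t}) cos x. This is our own candidate for the unrecovered expression (5.17), verified here only by finite differences, not quoted from the paper:

```python
import numpy as np

r = lambda t: np.exp(3.0 * t)
s = lambda x: 4.0 * np.cos(x)
u = lambda x, t: (np.exp(3.0 * t) - np.exp(-t)) * np.cos(x)

# Central-difference check of u_t - u_xx = r(t) * s(x) at an interior point.
h = 1e-4
x0, t0 = 1.2, 0.2
u_t = (u(x0, t0 + h) - u(x0, t0 - h)) / (2.0 * h)
u_xx = (u(x0 + h, t0) - 2.0 * u(x0, t0) + u(x0 - h, t0)) / h ** 2
pde_residual = u_t - u_xx - r(t0) * s(x0)

# Homogeneous data: u(x, 0) = 0, and u_x = -(e^{3t} - e^{-t}) sin x
# vanishes at both ends x = 0 and x = pi (Neumann conditions (2.3)).
u_x_left = (u(h, t0) - u(-h, t0)) / (2.0 * h)
u_x_right = (u(np.pi + h, t0) - u(np.pi - h, t0)) / (2.0 * h)
```

Analytically, u_t − u_xx = (3e^{3t} + e^{−t} + e^{3t} − e^{−t}) cos x = 4e^{3t} cos x = r(t)s(x), which is what the finite-difference residual confirms.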
Figure 6. The analytical (—–) and numerical results for (a) β(t) and (b) μ(x) obtained using the BEM for the direct problem with N=N₀∈{5(-·-), 10(⋯), 20(---)}, for Example 2.
![Figure 6. The analytical (—–) and numerical results for (a) β(t) and (b) μ(x) obtained using the BEM for the direct problem with N=N0∈{5(-·-),10(⋯),20(---)}, for Example 2.](/cms/asset/1d3c7716-c855-4d7f-8253-677dbf6b8b54/gipe_a_1130041_f0006_b.gif)
We have solved first the direct problem given by Equations (2.1) (with r and s given by (5.18)), (2.3) and (5.3), using the BEM with various numbers of boundary elements $N=N_0\in\{5,10,20\}$. The numerical results for $\beta(t)$ and $\mu(x)$ presented in Figure 6 show rapid convergence and excellent agreement with the analytical solution (5.19). Afterwards, we have solved the inverse problem given by Equations (2.1), (2.3), (5.3) and (5.19), in order to retrieve the temperature u(x, t) and the heat source components r(t) and s(x) given analytically by Equations (5.17) and (5.18), respectively. We have taken … boundary elements and the arbitrary initial guesses … and ….
We first consider the case of exact data. The convergence of the unregularized objective function $F_0$, achieved within 56 iterations using the lsqnonlin routine with TolFun = TolX = …, is illustrated in Figure 7(a). Also, the RMSEs (5.1) and (5.2) of the solutions r and s are shown in Figure 7(b). The numerical solutions for r and s obtained after 56 iterations are shown by dash-dot line (-·-) in Figures 7(c) and (d), respectively. Very good agreement between the numerical and analytical solutions for s can be observed, whilst the numerical solution for r is stable but slightly away from the analytical solution. Looking more closely at Figure 7(b), we observe that the minimum of the RMSEs is attained at iteration 31 instead of 56. Therefore, we have tried solving the inverse problem with the number of iterations fixed at 31, and the numerical results become more accurate, as illustrated by the circle markers (∘∘∘) in Figures 7(c) and (d). Further, we have applied the hybrid-order regularization procedure (5.12) with the regularization parameter $\lambda=2\times10^{-4}$ (chosen by trial and error) and the results are shown in Figure 8. Figure 8(a) displays the convergence of the regularized functional (5.12), achieved within 28 iterations. Also, the RMSEs and the solutions for r and s are shown in Figures 8(b)–(d). From Figure 8(b) one can see that the minimum of the RMSEs occurs after 23 iterations. By comparing Figures 7 and 8, one can conclude that the inclusion of some small regularization yields slightly more accurate and stable results.
Table 3. The RMSEs (5.1) and (5.2) for r(t) and s(x), for the noise levels $\delta\in\{0,0.01,0.1\}$, for Example 2.
Figure 7. (a) The objective function $F_0$, (b) the RMSEs (5.1) and (5.2) for r(t) (---) and s(x) (⋯) obtained with no regularization for exact data, and the numerical results for (c) r(t) and (d) s(x) obtained using the minimization process after 56 unfixed iterations (-·-), and 31 fixed iterations (∘∘∘), for Example 2. The corresponding analytical solutions (5.18) are shown by continuous line (—–) in (c) and (d).
![Figure 7. (a) The objective function F0, (b) the RMSEs (Equation5.1(5.1) RMSEt=1N∑i=1NExact(t~i)-Numerical(t~i)2,(5.1) ) and (Equation5.2(5.2) RMSEx=1N0∑k=1N0Exact(x~k)-Numerical(x~k)2.(5.2) ) for r(t)(---) and s(x)(⋯) obtained with no regularization for exact data, and the numerical results for (c) r(t) and (d) s(x) obtained using the minimization process after 56 unfixed iterations (-·-), and 31 fixed iterations (∘∘∘), for Example 2. The corresponding analytical solutions (Equation5.18(5.18) r(t)=e3t,s(x)=4cos(x),t∈(0,0.3),x∈(0,π),(5.18) ) are shown by continuous line (—–) in (c) and (d).](/cms/asset/b01d3907-9402-44f0-b73e-6da2bec68e5b/gipe_a_1130041_f0007_b.gif)
Figure 8. (a) The objective function $F_\lambda$, (b) the RMSEs (5.1) and (5.2) for r(t) (---) and s(x) (⋯) obtained using the hybrid-order regularization (5.12) with regularization parameter $\lambda=2\times10^{-4}$ for exact data, and the numerical results for (c) r(t) and (d) s(x) obtained using the minimization process after 28 unfixed iterations (-·-), and 23 fixed iterations (∘∘∘), for Example 2. The corresponding analytical solutions (5.18) are shown by continuous line (—–) in (c) and (d).
![Figure 8. (a) The objective function Fλ, (b) the RMSEs (Equation5.1(5.1) RMSEt=1N∑i=1NExact(t~i)-Numerical(t~i)2,(5.1) ) and (Equation5.2(5.2) RMSEx=1N0∑k=1N0Exact(x~k)-Numerical(x~k)2.(5.2) ) for r(t)(---) and s(x)(⋯) obtained using the hybrid-order regularization (Equation5.12(5.12) Fλ(r̲,s̲)=F0(r̲,s̲)+λ((r1-r2)2+(-rN-1+rN)2+∑i=2N-1(-ri+1+2ri-ri-1)2+(s1-s2)2+(-sN0-1+sN0)2+∑k=2N0-1(-sk+1+2sk-sk-1)2).(5.12) ) with regularization parameter λ=2×10-4 for exact data, and the numerical results for (c) r(t) and (d) s(x) obtained using minimization process after 28 unfixed iterations (-·-), and 23 fixed iterations (∘∘∘), for Example 2. The corresponding analytical solutions (Equation5.18(5.18) r(t)=e3t,s(x)=4cos(x),t∈(0,0.3),x∈(0,π),(5.18) ) are shown by continuous line (—–) in (c) and (d).](/cms/asset/6c73dae8-5d8d-43c4-9dfe-d959f3d1db93/gipe_a_1130041_f0008_b.gif)
Next, we consider the stability of the numerical solution when noise is present in the input data (2.4) and (2.5). As in [Citation11], the noise was defined by
(5.20)
where … is a random variable generated by the MATLAB command from a normal distribution with mean zero and unit standard deviation, and $\delta$ represents the noise level. Remark that the noise (5.20) is multiplicative, whilst the noise in (5.16) is additive. For $\delta=0.01$, Figure 9 illustrates the results obtained using the hybrid-order regularization (5.12) with regularization parameter $\lambda=4\times10^{-4}$ (found by trial and error). The convergence of the regularized objective function, achieved within 27 iterations, is shown in Figure 9(a), whilst the minimum RMSEs of r and s occur after 21 iterations, as can be seen in Figure 9(b). The numerical solutions for r and s obtained after 27 (unfixed) and 21 (fixed) iterations are displayed in Figures 9(c) and (d), respectively. As expected, the conclusions from Figure 9, obtained for a low level of noise $\delta=0.01$, are very much the same as those from Figure 8, obtained for no noise $\delta=0$. From Figures 8(c), (d) and 9(c), (d) one can observe that the numerical results are accurate and stable. Furthermore, there is little difference in the results whether we stop (fix) the iteration process at the minimum of the RMSEs shown in Figures 8(b) and 9(b), or let the iteration process run until convergence of the regularized objective function is achieved.
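The multiplicative perturbation can be contrasted with the additive one as follows (a sketch in the spirit of (5.20): each value is scaled by 1 + δρ with ρ standard normal; the function name is ours):

```python
import numpy as np

rng = np.random.default_rng(2)

def multiplicative_noise(data, delta, rng=rng):
    """Multiplicative noise in the spirit of (5.20): data * (1 + delta * rho),
    with rho ~ N(0, 1) and delta the noise level (delta = 0.01 for 1%)."""
    data = np.asarray(data, dtype=float)
    rho = rng.standard_normal(data.shape)
    return data * (1.0 + delta * rho)

mu = 4.0 * np.cos(np.linspace(0.0, np.pi, 100))   # stand-in data
mu_noisy = multiplicative_noise(mu, delta=0.01)
rel_err = np.abs(mu_noisy - mu) / np.maximum(np.abs(mu), 1e-12)
```

Unlike the additive case, the perturbation here scales with the local data magnitude, so the relative error is δ|ρ| everywhere, including where the data are small.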
Figure 9. (a) The objective function $F_\lambda$, (b) the RMSEs (5.1) and (5.2) for r(t) (---) and s(x) (⋯) obtained using the hybrid-order regularization (5.12) with regularization parameter $\lambda=4\times10^{-4}$ for noise level $\delta=0.01$, and the numerical results for (c) r(t) and (d) s(x) obtained using the minimization process after 27 unfixed iterations (-·-), and 21 fixed iterations (∘∘∘), for Example 2. The corresponding analytical solutions (5.18) are shown by continuous line (—–) in (c) and (d).
![Figure 9. (a) The objective function Fλ, (b) the RMSEs (Equation5.1(5.1) RMSEt=1N∑i=1NExact(t~i)-Numerical(t~i)2,(5.1) ) and (Equation5.2(5.2) RMSEx=1N0∑k=1N0Exact(x~k)-Numerical(x~k)2.(5.2) ) for r(t)(---) and s(x)(⋯) obtained using the hybrid-order regularization (Equation5.12(5.12) Fλ(r̲,s̲)=F0(r̲,s̲)+λ((r1-r2)2+(-rN-1+rN)2+∑i=2N-1(-ri+1+2ri-ri-1)2+(s1-s2)2+(-sN0-1+sN0)2+∑k=2N0-1(-sk+1+2sk-sk-1)2).(5.12) ) with regularization parameter λ=4×10-4 for noise level δ=0.01, and the numerical results for (c) r(t) and (d) s(x) obtained using the minimization process after 27 unfixed iterations (-·-), and 21 fixed iterations (∘∘∘), for Example 2. The corresponding analytical solutions (Equation5.18(5.18) r(t)=e3t,s(x)=4cos(x),t∈(0,0.3),x∈(0,π),(5.18) ) are shown by continuous line (—–) in (c) and (d).](/cms/asset/56517533-9cf1-40c4-a22f-6cefbd28269d/gipe_a_1130041_f0009_b.gif)
Figure 10. (a) The objective function $F_\lambda$, (b) the RMSEs (5.1) and (5.2) for r(t) (---) and s(x) (⋯) obtained using the hybrid-order regularization (5.12) with regularization parameter $\lambda=2$ for noise level $\delta=0.1$, and the numerical results (-·-) for (c) r(t) and (d) s(x) obtained using the minimization process after 17 (unfixed) iterations, for Example 2. The corresponding analytical solutions (5.18) are shown by continuous line (—–) in (c) and (d).
![Figure 10. (a) The objective function Fλ, (b) the RMSEs (Equation5.1(5.1) RMSEt=1N∑i=1NExact(t~i)-Numerical(t~i)2,(5.1) ) and (Equation5.2(5.2) RMSEx=1N0∑k=1N0Exact(x~k)-Numerical(x~k)2.(5.2) ) for r(t)(---) and s(x)(⋯) obtained using the hybrid-order regularization (Equation5.12(5.12) Fλ(r̲,s̲)=F0(r̲,s̲)+λ((r1-r2)2+(-rN-1+rN)2+∑i=2N-1(-ri+1+2ri-ri-1)2+(s1-s2)2+(-sN0-1+sN0)2+∑k=2N0-1(-sk+1+2sk-sk-1)2).(5.12) ) with regularization parameter λ=2 for noise level δ=0.1, and the numerical results (-·-) for (c) r(t) and (d) s(x) obtained using the minimization process after 17 (unfixed) iterations, for Example 2. The corresponding analytical solutions (Equation5.18(5.18) r(t)=e3t,s(x)=4cos(x),t∈(0,0.3),x∈(0,π),(5.18) ) are shown by continuous line (—–) in (c) and (d).](/cms/asset/c248bd52-10cc-4ee0-98a7-8febe3b3564d/gipe_a_1130041_f0010_b.gif)
Next, we consider a large amount of noise, such as δ=0.1, included in (5.20), and the numerical results are shown in Figure 10. First, one can observe from Figure 10(a) that the convergence of the objective function (5.12) is rapidly achieved within 17 iterations, and that the monotonic decreasing curve has a somewhat different shape than those recorded for no noise and for the low amount of noise δ=0.01 in Figure 9(a). Also, interestingly, unlike the noise-free case and Figure 9(b), where the RMSEs (5.1) and (5.2) show a minimum before the iteration process has finished, in Figure 10(b) no such minimum occurs. Therefore, in Figures 10(c) and (d) we present only the numerical results for r and s, respectively, obtained after 17 (unfixed) iterations.
From these figures it can be seen that the numerical solutions are stable, with an unexpectedly high accuracy in predicting the s component in Figure 10(d). For completeness and clarity, the RMSEs (5.1) and (5.2) of panels (b) of these figures are given in numbers in the accompanying table. From this table, and also from the corresponding panels (b), it can be seen that for exact data the component s(x) is predicted more accurately than the r(t) component, whilst for noisy data, in Figures 9(b) and 10(b), this prediction is reversed.
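As a side remark on how the accuracy figures above are obtained, the discrete RMSE measures (5.1) and (5.2) are plain root-mean-square differences between the exact and numerical source components evaluated on the time and space grids. A minimal sketch follows; the grid, the 3t exponent of the exact r(t) from (5.18), and the function name `rmse` are taken from the text, while the perturbed copy is purely illustrative.

```python
import numpy as np

def rmse(exact, numerical):
    # Discrete root-mean-square error, as in (5.1) and (5.2):
    # RMSE = sqrt( (1/N) * sum_i (Exact_i - Numerical_i)^2 )
    exact = np.asarray(exact, dtype=float)
    numerical = np.asarray(numerical, dtype=float)
    return float(np.sqrt(np.mean((exact - numerical) ** 2)))

# Illustration with r(t) = e^{3t} on (0, 0.3), as in (5.18),
# against a copy shifted by a constant 0.01.
t = np.linspace(0.0, 0.3, 41)
exact_r = np.exp(3.0 * t)
noisy_r = exact_r + 0.01
print(rmse(exact_r, noisy_r))  # constant 0.01 shift -> RMSE = 0.01
```

A constant pointwise error passes through the RMSE unchanged, which makes the tabulated values directly comparable across the different noise levels δ.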
Figure 11. The numerical results for (a) β(t) and (b) μ(x) obtained using the BEM for the direct problem with N=N0∈{10(-·-), 20(⋯), 40(---)}, for Example 3.
![Figure 11. The numerical results for (a) β(t) and (b) μ(x) obtained using the BEM for the direct problem with N=N0∈{10(-·-),20(⋯),40(---)}, for Example 3.](/cms/asset/65855cf1-85d2-453f-8ddd-5bb5e46ad8a0/gipe_a_1130041_f0011_b.gif)
Figure 12. (a) The objective function Fλ and the numerical results for (b) r(t) and (c) s(x) obtained with the hybrid-order regularization (5.12) with regularization parameter λ=2×10⁻⁴ for P=1% (-·-), P=5% (⋯) and P=10% (---) noisy data, for Example 3. The corresponding analytical solutions (5.21) are shown by continuous line (—–) in (b) and (c).
![Figure 12. (a) The objective function Fλ and the numerical results for (b) r(t) and (c) s(x) obtained with the hybrid-order regularization (Equation5.12(5.12) Fλ(r̲,s̲)=F0(r̲,s̲)+λ((r1-r2)2+(-rN-1+rN)2+∑i=2N-1(-ri+1+2ri-ri-1)2+(s1-s2)2+(-sN0-1+sN0)2+∑k=2N0-1(-sk+1+2sk-sk-1)2).(5.12) ) with regularization parameter λ=2×10-4 for P=1%(-·-),P=5%(⋯) and P=10%(---) noisy data for Example 3. The corresponding analytical solutions (Equation5.21(5.21) r(t)=t,0≤t≤1/21-t,1/2<t≤1=T,s(x)=x,0≤x≤1/200.1-x,1/20<x≤1/10=L,(5.21) ) are shown by continuous line (—–) in (b) and (c).](/cms/asset/1fa824b9-a375-412d-a72c-52c2eac92916/gipe_a_1130041_f0012_b.gif)
Finally, we report that the numerical results presented in this subsection for Example 2 are comparable in terms of accuracy and stability with the numerical results obtained recently in [Citation11] using a different method of successive approximants previously developed in [Citation10].
5.3. Example 3
The previous examples possessed an analytical (smooth) solution available explicitly, and they were tested in order to verify the accuracy and stability of the numerical method employed. In this subsection, we consider a severe test example represented by the non-smooth source components
(5.21) r(t) = t for 0 ≤ t ≤ 1/2, r(t) = 1−t for 1/2 < t ≤ 1 = T; s(x) = x for 0 ≤ x ≤ 1/20, s(x) = 0.1−x for 1/20 < x ≤ 1/10 = L,
where T = 1 and L = 0.1. We also take the homogeneous initial temperature (5.3). This example does not have an analytical solution for the temperature u(x, t) readily available. Therefore, in such a situation the data (2.4) and (2.5) are simulated numerically by solving the direct problem (2.1) with the multiplicative source given by the product of the functions in (5.21), subject to the homogeneous boundary and initial conditions (2.3) and (5.3). The BEM numerical solutions for the data β(t) and μ(x) are shown in Figure 11 for various numbers of boundary elements N = N0 ∈ {10, 20, 40}. From this figure, the convergence of the numerical solution as the number of boundary elements increases can be observed.
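The non-smooth components (5.21) and their multiplicative product are straightforward to evaluate on a grid when generating the synthetic data. The sketch below, assuming uniform grids and the helper names `r_exact`, `s_exact` and `source` (which are not from the paper), illustrates the piecewise "hat" shapes and the resulting source F(x, t) = r(t)s(x).

```python
import numpy as np

T, L = 1.0, 0.1  # final time and slab length for Example 3, per (5.21)

def r_exact(t):
    # Piecewise-linear hat in time, per (5.21): t on [0, 1/2], 1 - t on (1/2, 1]
    t = np.asarray(t, dtype=float)
    return np.where(t <= 0.5, t, 1.0 - t)

def s_exact(x):
    # Piecewise-linear hat in space: x on [0, 1/20], 0.1 - x on (1/20, 1/10]
    x = np.asarray(x, dtype=float)
    return np.where(x <= 0.05, x, 0.1 - x)

def source(x, t):
    # Multiplicative source F(x, t) = r(t) s(x), assembled by outer product
    return r_exact(t)[:, None] * s_exact(x)[None, :]

x = np.linspace(0.0, L, 5)
t = np.linspace(0.0, T, 5)
F = source(x, t)
print(F.shape)  # (5, 5)
print(F.max())  # peak at t = 1/2, x = 1/20: 0.5 * 0.05 = 0.025
```

Both components have a kink at the interior peak, which is what makes this a severe test for the reconstruction: the hybrid-order penalty in (5.12) must smooth the noise without flattening the corners.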
Next, we consider the inverse problem given by Equations (2.1), (2.3) and (2.6), with the initial condition (5.3), and the additional measured data (2.4) and (2.5), which have been simulated numerically in Figure 11. We pick from Figure 11 the numerical BEM solutions, and we further perturb these data with noise, as in (5.16). Prescribing initial guesses for r and s, we initiated the iterative minimization process of the hybrid regularization functional (5.12), as described in Example 1. The numerical results obtained with λ = 2×10⁻⁴ (found by trial and error) are shown in Figure 12 for P ∈ {1%, 5%, 10%} noise generated as in (5.16). From Figure 12(a) it can be seen that the convergence of the functional (5.12) is rapidly achieved within 7–8 iterations using the lsqnonlin routine, with the tolerances TolFun and TolX set to the same small value. Also, Figures 12(b) and (c) show that stable and accurate numerical solutions for both r(t) and s(x) are obtained for all the amounts of noise P.
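The structure of the regularization term in (5.12) — first-order differences at the two endpoints of each unknown vector and second-order (curvature) differences in the interior — can be sketched as follows. This is a minimal illustration only: the data misfit F0 is left as a placeholder callable, since the BEM-based residual is not reproduced here, and the names `hybrid_penalty` and `F_lambda` are our own.

```python
import numpy as np

def hybrid_penalty(v):
    # Hybrid-order smoothing term from (5.12): first-order differences at the
    # two endpoints, second-order (curvature) differences in the interior.
    v = np.asarray(v, dtype=float)
    endpoints = (v[0] - v[1]) ** 2 + (v[-1] - v[-2]) ** 2
    interior = np.sum((v[2:] - 2.0 * v[1:-1] + v[:-2]) ** 2)
    return endpoints + interior

def F_lambda(r, s, misfit, lam):
    # Regularized objective in the spirit of (5.12): data misfit plus the
    # penalty applied to both unknown vectors r and s. `misfit` stands in for
    # the least-squares functional F0, which is not reproduced here.
    return misfit(r, s) + lam * (hybrid_penalty(r) + hybrid_penalty(s))

# A linear profile has zero interior curvature, so only the two endpoint
# first-order terms contribute: (0-1)^2 + (3-2)^2 = 2.
r = np.array([0.0, 1.0, 2.0, 3.0])
print(hybrid_penalty(r))  # 2.0
```

In the paper the resulting nonlinear least-squares problem is minimized with MATLAB's lsqnonlin; in Python, scipy.optimize.least_squares plays an analogous role.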
In closing, although not illustrated, we report that the same good performance was recorded when attempting to reconstruct even discontinuous source components.
6. Conclusions
In this paper, inverse source problems with homogeneous Neumann boundary conditions, together with specified interior and final time temperature measurements, have been considered in order to find the space- and time-dependent components of a multiplicative source function. The numerical discretization was based on the BEM combined with a Tikhonov regularization procedure. For a wide range of test examples, the obtained results indicate that stable and accurate numerical solutions have been achieved. The identification of sources containing both multiplicative r(t)s(x) and additive space- and time-dependent components can also be considered,[Citation10] but its numerical implementation is deferred to future work.
Acknowledgements
A. Hazanee would like to acknowledge the financial support received from the Ministry of Science and Technology of Thailand, and Prince of Songkla University Thailand, for pursuing her PhD at the University of Leeds.
Notes
No potential conflict of interest was reported by the authors.
References
- Erdem A, Lesnic D, Hasanov A. Identification of a spacewise dependent heat source. Appl. Math. Model. 2013;37:10231–10244.
- Farcas A, Lesnic D. The boundary element method for the determination of a heat source dependent on one variable. J. Eng. Math. 2006;54:375–388.
- Hasanov A. Identification of spacewise and time dependent source terms in 1D heat conduction equation from temperature measurement at final time. Int. J. Heat Mass Transfer. 2012;55:2069–2080.
- Hasanov A, Slodicka M. An analysis of inverse source problems with final measured output data for the heat conduction equation: a semigroup approach. Appl. Math. Lett. 2013;26:207–214.
- Johansson BT, Lesnic D. A variational method for identifying a spacewise dependent heat source. IMA J. Appl. Math. 2007;72:748–760.
- Prilepko AI, Solov’ev VV. Solvability theorems and Rothe’s method for inverse problems for a parabolic equation. II. Differ. Equ. 1987;23:1341–1349.
- Solov’ev VV. Solvability of the inverse problem of finding a source, using overdetermination of the upper base for a parabolic equation. Differ. Equ. 1990;25:1114–1119.
- Bushuyev I. Global uniqueness for inverse parabolic problems with final observation. Inverse Prob. 1995;11:L11–L16.
- Hazanee A, Lesnic D. Reconstruction of an additive space- and time-dependent heat source. Eur. J. Comput. Mech. 2013;22:304–329.
- Savateev EG. On problems of determining the source function in a parabolic equation. J. Inverse Ill-Posed Prob. 1995;3:83–102.
- Shi C, Wang C, Wei T. Numerical solution for an inverse heat source problem by an iterative method. Appl. Math. Comput. 2014;244:577–597.
- Dym H, McKean H. Fourier series and integrals. New York (NY): Academic Press; 1985.
- Engl HW, Hanke M, Neubauer A. Regularization of inverse problems. Dordrecht: Kluwer Academic; 2000.