Abstract
In this paper, a modulating functions-based method is proposed for estimating space–time-dependent unknowns in one-dimensional partial differential equations. The proposed method reduces the problem to a system of algebraic equations that is linear in the unknown parameters. The well-posedness of the modulating functions-based solution is proved. The wave equation and the fifth-order KdV equation are used as examples to show the effectiveness of the proposed method in both noise-free and noisy cases.
1. Introduction
Inverse coefficient and inverse source problems for partial differential equations (PDEs) are important topics in many applications such as medical imaging, seismic imaging, oil exploration and computed tomography.[Citation1–Citation4] Various methods have been proposed to solve these problems. The typical procedure consists of minimizing an appropriate cost function that compares the measured data with the corresponding computed data. However, due to ill-posedness issues, these methods often require regularization techniques such as the well-known Tikhonov regularization,[Citation5] the quasi-reversibility method [Citation6] and the energy regularization approach.[Citation7] The performance of these regularization techniques depends on regularization parameters; in addition, they are usually computationally heavy, especially in the case of a large number of parameters. Stochastic inversion techniques such as Bayesian approaches are also used to estimate the parameters of a PDE.[Citation8] These techniques require knowledge of a prior distribution of the unknown, which is not always available.
Another class of methods proposed for inverse coefficient or inverse source problems consists of recursive observer-based approaches.[Citation9–Citation11] These methods were initially designed for state estimation of finite-dimensional dynamic systems and have recently been extended to infinite-dimensional systems.[Citation12] However, when the PDE is discretized, these methods often suffer from numerical issues that cause a loss of observability, a necessary condition for observer design.[Citation13]
Responding to the growing interest in developing efficient and robust algorithms for parameter estimation in PDEs, we propose in this paper a method based on the so-called modulating functions. The modulating functions-based (MFB) method was introduced in the early 1950s [Citation14,Citation15] and has been used for parameter identification in ordinary differential equations (ODEs). In 1966, Perdreauville and Goodson [Citation16] extended the method to the identification of constant and space-varying parameters in PDEs using distributed measurements in continuous time. After that, Fairman and Shen [Citation17] modified the approach of Perdreauville and Goodson using a finite difference scheme to approximate the spatial derivatives. In 1997, Ungarala and Co [Citation18] adapted the method for real-time parameter identification in ODEs. Recently, the modulating function method has been extended to estimate the unknown input and the state variables of a linear dynamic system, as well as the fractional-order derivatives of the output, in [Citation19–Citation22]. Several types of modulating functions have been proposed and used, including sinusoidal functions,[Citation15,Citation16] Hermite functions,[Citation23] spline-type functions,[Citation24] Poisson moment functionals [Citation25] and Hartley modulating functions.[Citation26]
The MFB method has several advantages. It is computationally inexpensive and robust against noise. In addition, it requires neither initial nor boundary conditions, and it does not require solving the direct problem. Further, it avoids approximating the derivatives of the measurements, which are usually noisy.
In this paper, we study the well-posedness of the MFB solution and investigate the effect of the number of modulating functions which, as we will show, plays a significant role. To the best of our knowledge, the number of modulating functions has been studied only by Pearson and Lee,[Citation27] who provided guidelines on how to choose it. However, their study focused on sinusoidal modulating functions only; moreover, it requires a priori knowledge of the system bandwidth, which is usually unknown in identification problems.
The main contributions of this paper are the following. First, we extend the MFB method to estimate space–time-dependent parameters and sources, separately and simultaneously, using a finite number of measurements in both noisy and noise-free cases. Second, the well-posedness of the MFB solution is proved. Then, a mathematical analysis of the estimation error is performed. Finally, the influence of the number of modulating functions is investigated and discussed independently of the choice of modulating function type.
The paper is organized as follows. In Section 2, the considered problem is formulated, the MFB method is presented and the well-posedness (existence, uniqueness and stability) of the MFB solution is proved. Section 3 studies source and velocity estimation in the wave equation (a linear PDE) and provides numerical simulations illustrating the effectiveness and robustness of the proposed method; the noise error contribution is also analysed. Parameter estimation for the fifth-order KdV equation (a non-linear PDE) is studied in Section 4 together with numerical simulations. Discussion and concluding remarks are presented in Sections 5 and 6, respectively.
2. Modulating functions-based method
In this section, the estimation problem is stated; then, the definition of modulating functions is introduced along with the procedure of applying the MFB method for estimation objectives; and finally, the well-posedness study is presented.
2.1. Problem statement
Consider the following one-dimensional partial differential equation (1D-PDE) of order n, defined in the space–time domain Ω = (0, L) × (0, T):

(1) 𝒯u(x, t) + 𝒮u(x, t) = f(x, t), (x, t) ∈ Ω,

together with boundary and initial conditions (2), where x is the space variable, t is the time variable, L is the end point and T is the final time. ℬ and 𝓘 are the boundary and initial condition operators, respectively. 𝒯 and 𝒮 are temporal and spatial partial differential operators such that

𝒮u(x, t) = Σ_{i=0}^{n} θ_i(x, t) ∂^i u/∂x^i,

where θ_i(x, t), i = 0, …, n, are the coefficients, and f(x, t) is the source term. Both the coefficients and the source are assumed to be sufficiently smooth. These regularity requirements ensure the existence of all the solution's derivatives in the classical sense. Depending on the considered PDE, additional conditions may be required to ensure the existence and uniqueness of the solution. These requirements can be slightly relaxed when the equation is formulated in the weak sense.
At the two end points, 0 and L, the functions u(0, t), u(L, t) and their spatial derivatives up to order n − 1 are assumed to be bounded.
The following problems are studied in this paper:
IP1: Estimation of the source f(x, t);
IP2: Estimation of the coefficients;
IP3: Joint estimation of the source f(x) and the coefficients.
2.2. Procedure
Definition 1:
A function φ(x), l times continuously differentiable on [0, L], is called a modulating function of order l if it satisfies:

(3) φ^(p)(0) = φ^(p)(L) = 0, for p = 0, 1, …, l − 1,

where φ^(p) denotes the p-th derivative of φ.
The use of the MFB method to solve the previous estimation problems requires the order l of the modulating functions to be greater than or equal to the order of the differential equation; more precisely, if space-dependent modulating functions are used, as in this paper, then l should be greater than or equal to the order of the spatial differential operator, i.e. l ≥ n (this is necessary to transfer the derivatives from the measurements to the modulating functions; see STEP 3 in this subsection). The basic steps of the MFB method for solving the three inverse problems IP1, IP2 and IP3 from the measured data are the following:
STEP 1: Fix the time t = t* in Equation (1);
STEP 2: Multiply both sides by a modulating function and integrate over the space interval [0, L];
STEP 3: Apply the integration by parts formula to the second integral in (5); this transfers the spatial derivatives from the measurements to the modulating function.
Since the unknown is identified at a fixed time t*, it can also be identified at any other fixed time; it can then be recovered over the whole domain Ω by interpolation.
2.3. Well-posedness of the MFB solution
In this section, we discuss the well-posedness of the MFB solution. First, since all the functions in (6) are sufficiently smooth, and the product of two smooth functions has a finite integral over a bounded interval, all the integrals in (6) converge for each modulating function. Hence, the MFB solution of (6) exists. The uniqueness and stability of the MFB solution for the three inverse problems are guaranteed, under some assumptions and conditions, by the next theorems.
Theorem 1 (Uniqueness of the MFB solution):
Assume that the measurements exist and are sufficiently smooth; then IP1 has a unique solution satisfying (6). Moreover, if the unknowns in IP2 (respectively IP3) are space-dependent functions, then IP2 (resp. IP3) has a unique solution satisfying (6) if and only if the functions multiplying the unknowns in (6) (resp. these functions together with the non-vanishing terms associated with the source) are linearly independent as functions of t on a dense subset of [0, L].
IP1: Assume that two sources f and f̃ satisfy
(7)
for all modulating functions of order l. Then,
(8)
for all modulating functions satisfying (3); hence, from Lemma 2.1 in [Citation30], f = f̃.
IP2: Assume that two sets of coefficients satisfy
(9)
for any l-order modulating function. Hence,
(10)
for all l-order modulating functions. By applying successive integration by parts, one can obtain:
(11)
for all l-order modulating functions. Hence (using Lemma 2.1 in [Citation30]),
(12)
and, since the functions involved are linearly independent as functions of t, the two sets of coefficients coincide.
IP3: Here, we prove the uniqueness of estimating f(x) and the coefficients jointly. Let two candidate sets of unknowns satisfy
(13)
for all modulating functions of order l. Then,
(14)
Applying successive integration by parts to the second compound term in the previous equation gives
(15)
for all l-order modulating functions. Therefore (using the same lemma),
(16)
and the linear independence, in time, of the functions involved completes the proof.
Remark 1:
The uniqueness of a single space–time-dependent coefficient in IP2 is also guaranteed by Theorem 1 (IP2 case). More precisely, for a fixed time index s, the coefficient has a unique solution satisfying (6) if and only if the corresponding function does not vanish identically.
Theorem 2 (Stability of the MFB solution):
The MFB solution of IP1, IP2 and IP3 is stable in the following sense. IP1: let f and f^ε be the solutions of (6) with respect to the noise-free data and the noisy data, respectively, where
(17)
and the noise functions are quadratically integrable and bounded. Then, f^ε → f as the noise level tends to zero. Similarly for IP2 and IP3.
Let f and f^ε be two inputs for (1) with respect to the noise-free and the noisy data, respectively; that is,
(18)
By applying the three steps of Section 2.2 to the equations in (18), we end up with
(19)
Therefore, f and f^ε are solutions of (6) with respect to their corresponding data, and, from the uniqueness theorem (Theorem 1), each is the unique such solution. Hence, the difference, in norm, between the two equations in (18) is:
(20)
Under the assumption that the noise is small enough, it follows from (20) that f^ε → f as the noise level tends to zero, which proves the stability, in terms of the norm, of the MFB solution of IP1.
Similarly, one can prove the stability of the MFB solutions of IP2 and IP3.
As a main feature of the MFB method, estimating the unknowns in Equation (6) reduces to solving a system of algebraic equations that is linear in the unknowns. The details are presented in the next proposition.
Proposition 1:
Let the unknown, the source f or a coefficient, be approximated at a fixed time t* by a basis expansion with I basis functions and I unknown basis coefficients. Let {φ_m}_{m=1}^{M} be a class of M linearly independent modulating functions of order l (l ≥ n). Then, the unknown basis coefficients can be estimated by solving:
(21)
which can be written as a system of the form:
(22)
where the elements of the matrix and of the vector K are given in (23a), and the remaining vector collects the unknown basis coefficients.
By applying the steps used to derive (6), but with respect to the basis expansion, one obtains (21); generating the necessary number of equations from (21) then gives the system form (22).
Remark 2:
The linear independence of the modulating functions in Proposition 1 is a necessary condition for the solvability of system (22).
Remark 3:
The solvability and the numerical stability of system (22) can be affected by the nature and the number of the modulating and basis functions, but it is not difficult to choose them appropriately so that the matrix has full column rank and the system is numerically stable.
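As a hedged numerical illustration of Remark 3 (our own sketch, not the paper's code), the snippet below assembles the matrix of Proposition 1 for a hypothetical IP1-style setup, using an assumed sine basis and an assumed polynomial modulating-function family of the form x^(q+m)(L−x)^(q+M−m+1), and checks that the matrix has full column rank:

```python
import numpy as np
from numpy.polynomial import Polynomial as P

L, N, M, I, q = 1.0, 1001, 8, 3, 2
x = np.linspace(0.0, L, N)
trap = lambda f: float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x)))

# Hypothetical basis for the unknown: the first I sine harmonics.
basis = [np.sin((i + 1) * np.pi * x) for i in range(I)]

A = np.zeros((M, I))
for m in range(1, M + 1):
    # Assumed polynomial modulating function, normalized to unit maximum.
    phi = P([0.0, 1.0]) ** (q + m) * P([L, -1.0]) ** (q + M - m + 1)
    phi = phi / np.max(np.abs(phi(x)))
    A[m - 1] = [trap(phi(x) * g) for g in basis]

rank, cond = np.linalg.matrix_rank(A), np.linalg.cond(A)
print(rank, cond)   # full column rank means rank == I
```

With smooth, well-separated modulating functions and a low-order basis, the rank condition is easy to satisfy; `np.linalg.cond` quantifies how far the system is from being numerically singular.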
3. Source and velocity estimation for the wave equation
3.1. Method
Consider the following one-dimensional wave equation in the domain Ω = (0, L) × (0, T):

(26) u_tt(x, t) = c(x, t) u_xx(x, t) + f(x, t),

subject to boundary and initial conditions which are not assumed to be known. The source function is denoted by f(x, t). c(x, t) is the square of the velocity of wave propagation at point x and time t, and it is assumed to be positive and bounded. All the functions are assumed to be sufficiently regular.
The goal of this example is to illustrate the application of the modulating functions-based method to solve IP1, IP2 and IP3. Here, the measurements are the displacement u(x, t*) and the acceleration u_tt(x, t*) at a fixed time instant t*. The next corollaries study these different problems.
Corollary 1:
Let the unknown, source (IP1) or velocity (IP2), in (26) be approximated at a fixed time t* by a basis expansion with basis functions and basis coefficients. Let {φ_m}_{m=1}^{M} be a class of at least second-order linearly independent modulating functions. Then, the unknown basis coefficients can be estimated by solving the system:
(27)
where the components of the matrix are given by:
(28a)
(28b)
the components of the right-hand side vector are:
(29a)
(29b)
and the remaining vector collects the unknown basis coefficients.
The proof follows the steps described previously in Section 2.2; see Appendix 1.
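To make STEPs 1–3 concrete for IP1, here is a minimal self-contained sketch (our own, with a manufactured solution and an assumed polynomial modulating-function family; it is not the paper's Matlab code). With c = 1 and u(x, t) = sin(πx) sin(t), the wave equation u_tt = c u_xx + f gives f(x, t) = (π² − 1) sin(πx) sin(t); at t* = π/2 the source is recovered from u and u_tt alone:

```python
import numpy as np
from numpy.polynomial import Polynomial as P

L, N = 1.0, 2001
x = np.linspace(0.0, L, N)
trap = lambda f: float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x)))

# Manufactured measurements at t* = pi/2: displacement and acceleration only.
c = 1.0
u    = np.sin(np.pi * x)
u_tt = -np.sin(np.pi * x)

I, M, q = 3, 8, 2
basis = [np.sin((i + 1) * np.pi * x) for i in range(I)]   # f ≈ sum λ_i g_i

A = np.zeros((M, I)); K = np.zeros(M)
for m in range(1, M + 1):
    phi = P([0.0, 1.0]) ** (q + m) * P([L, -1.0]) ** (q + M - m + 1)
    phi = phi / np.max(np.abs(phi(x)))
    # STEP 2-3: modulate, integrate, and move both x-derivatives onto phi
    # (the boundary terms vanish because phi and phi' vanish at 0 and L).
    K[m - 1] = trap(phi(x) * u_tt) - c * trap(phi.deriv(2)(x) * u)
    A[m - 1] = [trap(phi(x) * g) for g in basis]

lam = np.linalg.lstsq(A, K, rcond=None)[0]
print(lam)   # lam[0] should approach pi^2 - 1; the other entries, 0
```

Note that no spatial derivative of the measurement is ever computed; only the analytically known derivatives of φ_m appear.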
Remark 4:
Corollary 1 can be directly applied to estimate a space-dependent velocity c(x) or a space–time-dependent velocity c(x, t). In the constant velocity case, the method is simpler and more accurate since there is no need to approximate the constant c by a basis expansion or any other approximation. In this case, the velocity can be computed as follows:
(30)
where the components of the two vectors involved are:
(31)
and
(32)
respectively. The notation tr refers to the transpose.
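For the constant-velocity case, the scalar least-squares estimate of Remark 4 can be sketched in a few lines (again a manufactured example with an assumed modulating-function family, not the paper's code). With the source-free solution u = sin(πx) cos(2πt), which satisfies u_tt = c u_xx for c = 4, the measurements at t* = 0 suffice:

```python
import numpy as np
from numpy.polynomial import Polynomial as P

L, N = 1.0, 2001
x = np.linspace(0.0, L, N)
trap = lambda f: float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x)))

u    = np.sin(np.pi * x)                       # u(x, 0)
u_tt = -4.0 * np.pi**2 * np.sin(np.pi * x)     # u_tt(x, 0)

M, q = 6, 2
a = np.zeros(M); k = np.zeros(M)
for m in range(1, M + 1):
    phi = P([0.0, 1.0]) ** (q + m) * P([L, -1.0]) ** (q + M - m + 1)
    phi = phi / np.max(np.abs(phi(x)))
    a[m - 1] = trap(phi.deriv(2)(x) * u)       # equals ∫ phi u_xx dx
    k[m - 1] = trap(phi(x) * u_tt)

c_hat = float(a @ k / (a @ a))                 # scalar normal equation
print(c_hat)                                   # close to the true value 4
```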
In the next corollary, the joint estimation problem (IP3) is studied.
Corollary 2:
Let the unknown source and velocity be approximated by basis expansions with basis functions and basis coefficients. Let {φ_m} be a class of linearly independent modulating functions of sufficient order and number. Then, the unknown basis coefficients can be estimated by solving the system:
(33)
where the system matrix is composed of two blocks, one with components
(34)
and the other with components
(35)
The components of the vector K are:
(36)
and the vector of the unknowns is
(37)
where the displacement u(x, t*) and the acceleration u_tt(x, t*) are the measurements.
Appendix 2 provides the steps of the proof.
3.2. Error analysis
We have shown in the previous subsection that the studied inverse problems reduce to solving a system of linear algebraic equations. Here, we study the effect on the MFB solution of small perturbations of the system matrix and of the vector K arising from the noisy measurements.
Let the measurement noises be quadratically integrable and bounded, such that:
(38)
The next proposition provides an absolute error bound for the MFB solution.
Proposition 2:
Let the MFB solutions of the noise-free system and of the noisy system be given. Then, the absolute estimation error satisfies
(39)
where the bound takes a specific form for each of IP1, IP2 and IP3.
First, the noisy matrix and right-hand side can be written as their noise-free counterparts plus perturbations E and e, respectively, produced by the measurement noises. Since the noise-free and noisy solutions satisfy their respective systems, we have
(40)
and
(41)
Subtracting (40) from (41) yields
(42)
By subtracting the same term from both sides of the previous equation, one can obtain
(43)
Taking the infinity norm of the previous equation leads to
(44)
Computing the matrix and vector norms of the perturbations from the corresponding equations and applying the Hölder inequality complete the proof.
Inequality (39) suggests constructing the matrix so that the norm of its generalized inverse is not too large. This can be achieved by an appropriate choice of the nature and the number of the basis and modulating functions. In addition, the error bound is affected by the norm of the modulating functions, which suggests normalizing them to avoid numerical instability.
In order to show the efficiency of the presented method, IP1, IP2 and IP3 are simulated numerically in the next subsection.
3.3. Numerical simulations
To perform the numerical simulations, system (26) has first been discretized using a finite difference scheme, and a set of synthetic data has been generated on a uniform space–time grid. The method has been implemented in Matlab and applied in both noise-free and noisy cases. In the noise-corrupted case, 1, 3, 5 and 10% white Gaussian random noise with zero mean has been added to the data, where the noise level is evaluated relative to the data norm; see Figure 1. The trapezoidal numerical integration method has been used to approximate the integrals.
Figure 1. Green solid lines represent the exact measurements in the IP1 example, while the red and blue lines represent the noisy measurements with 5 and 10% of noise, respectively. Sub-figure (a) shows the displacement u(x, t*) and (b) the acceleration u_tt(x, t*), both at the fixed time t*.
Regarding the choice of the modulating functions, as mentioned before, many functions satisfy (3); see, e.g. [Citation23–Citation27,Citation31]. In this paper, normalized polynomial-type modulating functions have been used for their simplicity. They have the following form:
(45)
where m = 1, …, M, M is the number of modulating functions and q is a degree of freedom which should be chosen such that φ_m is of at least second order for all m; see Figure 2. Notice that φ_m is normalized. For the basis functions, polynomial-type functions have been used.
3.3.1. IP 1
The exact source used to generate the data is a smooth function of x and t. At a fixed time, the exact source and the estimated one vs. different noise levels are presented in Figure 3, and the corresponding relative errors are shown in Table 1.
Figure 3. (a) Exact source (blue dashed) and estimated ones (coloured) w.r.t different noise levels: 0, 1, 3, 5 and 10% of noise, where q = 2, I = 11 and M = 115. (b) Estimation errors for the results in (a).
Table 1. Relative errors of the estimated source at the fixed time vs. different noise levels.
Figure 4 exhibits the exact source and the estimated one at three different times. These results, obtained at only three time instants, are interpolated to estimate the source f(x, t), as shown in Figures 5 and 6. Different methods can be used to interpolate (estimate) new data points from known ones; see, for example, [Citation32]. Here, the polynomial interpolation method has been used for illustration. The relative errors for estimating f(x, t) against different noise levels are presented in Table 2. As we can see, even with 10% of noise, the estimation error is acceptable.
Table 2. Relative errors of the estimated f(x, t) vs. different noise levels.
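The interpolation step across time instants can be sketched as follows; the three fixed-time estimates and the separable source are hypothetical stand-ins for the MFB outputs (a quadratic-in-t source makes the quadratic fit exact):

```python
import numpy as np

x = np.linspace(0.0, 1.0, 101)
ts = np.array([0.0, 0.5, 1.0])
# Hypothetical MFB estimates f_hat(x, t_j), here generated from an assumed
# separable source f(x, t) = sin(pi x) (1 + t + t^2).
f_hat = np.array([np.sin(np.pi * x) * (1 + t + t**2) for t in ts])

# Fit a quadratic in t at every x (np.polyfit handles all columns at once),
# then evaluate the fit at a new time instant.
coef = np.polyfit(ts, f_hat, deg=2)                  # shape (3, 101)
t_new = 0.25
f_new = coef[0] * t_new**2 + coef[1] * t_new + coef[2]
```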
3.3.1.1. IP 1 with measurements interpolation
In this part, we assume that the measurements are available at only a few points; i.e. we have the displacement and the acceleration at a small set of sample points. In this case, we first interpolate the available measurements in order to approximate them over the whole space domain [0, L]; then we apply the MFB method to estimate the unknown. The relative errors corresponding to different noise levels are shown in Table 3, where only 1.7% of the grid points and spline interpolation have been used.
Table 3. Relative errors of the estimated source vs. different noise levels when only 1.7% of the measurements are used (measurement interpolation).
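A minimal sketch of the measurement-interpolation step, with 35 assumed sample locations out of 2001 grid points (about 1.7%); `np.interp` performs linear interpolation to keep the example dependency-free, whereas the paper uses splines:

```python
import numpy as np

x_full = np.linspace(0.0, 1.0, 2001)
idx = np.linspace(0, 2000, 35).astype(int)       # 35 sample locations (~1.7%)
u_sparse = np.sin(np.pi * x_full[idx])           # sparse displacement samples

# Rebuild the measurement on the full grid before applying the MFB method.
u_rec = np.interp(x_full, x_full[idx], u_sparse)
err = float(np.max(np.abs(u_rec - np.sin(np.pi * x_full))))
print(err)
```

For a smooth profile like this one, even piecewise-linear reconstruction keeps the error small; splines would reduce it further.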
3.3.2. IP 2
Three cases have been tested: constant velocity, space-varying velocity and space–time-dependent velocity, with the exact velocity fixed in advance in each case to generate the synthetic data. Table 4 shows the estimated values and the relative errors of the constant velocity coefficient with respect to different noise levels. The results of estimating c(x) are presented in Figure 7 and Table 5. Figures 8 and 9 and Table 6 show the results of estimating the space–time-dependent velocity c(x, t) after estimating and then interpolating the velocity at six time instants.
Table 4. The estimated results for the constant velocity coefficient with respect to different noise levels.
Figure 7. (a) Exact space-dependent velocity c(x) (blue dashed) and estimated ones ĉ(x) (coloured solid) corresponding to different noise levels. (b) Estimation errors for the results in (a).
Table 5. Relative errors of ĉ(x) vs. different noise levels.
Figure 8. Exact velocity c(x, t) (left) and estimated one ĉ(x, t) (right) after applying the MFB algorithm and doing interpolation; noise level = 5%.
Table 6. Relative errors of ĉ(x, t) vs. different noise levels.
3.3.3. IP 3
For the joint estimation, the exact source and velocity have been fixed to generate the data. The estimated source and velocity in this joint estimation are shown in Figures 10 and 11, respectively. The corresponding relative errors are presented in Table 7.
Figure 10. (a) Exact source f(x) (blue dashed) and estimated ones f̂(x, t*) in the joint estimation (coloured solid) corresponding to different noise levels. (b) Estimation errors for the results in (a).
Figure 11. (a) Exact velocity c(x) (blue dashed) and the estimated ones ĉ(x) in the joint estimation (coloured solid) corresponding to different noise levels. (b) Estimation errors for the results in (a).
Table 7. Relative errors of estimating f(x) and c(x) in the joint estimation vs. different noise levels.
The presented figures and tables show that the estimated unknowns are in good agreement with the exact ones; the MFB method is therefore an efficient and robust method for parameter and source estimation in linear PDEs. The method can easily be extended to identify parameters of non-linear PDEs, as shown in the next section.
4. Parameter estimation for the fifth-order KdV equation
4.1. Method
The fifth-order non-linear KdV equation has the following form:

(46) u_t + α1 u u_x + α2 u_xxx − α3 u_xxxxx = 0,

where α1, α2 and α3 are positive parameters. This equation is used to model different phenomena such as capillary–gravity water waves, chains of coupled oscillators and magneto-acoustic waves in plasma.[Citation33–Citation35] The parameters α1, α2 and α3 are related to properties of the physical medium under consideration. Thus, estimating these parameters can be used to validate the applicability of this equation for particular media.[Citation36] Hence, the following inverse problem can be defined:
Given u(x, t*) and u_t(x, t*) at a fixed time t*, find α1, α2 and α3.
The following corollary gives a solution to this inverse problem using the MFB method.
Corollary 3:
Let α1, α2 and α3 be the unknown parameters in (46), and let {φ_m}_{m=1}^{M} be a class of at least fifth-order modulating functions. Then, the unknown parameters can be estimated by solving the following linear system:
(47)
where the rows of the matrix, the elements of the vector K and the vector of unknowns are given by
(48)
(49)
and
(50)
respectively.
Appendix 3 provides details of the proof.
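To illustrate the mechanism behind Corollary 3, the sketch below estimates the three parameters of a generic fifth-order equation u_t = α1 u u_x + α2 u_xxx + α3 u_xxxxx (the signs, the fields and the modulating-function family are our assumptions; u_t is synthesized from analytic derivatives, but the estimator itself touches only u and u_t):

```python
import numpy as np
from numpy.polynomial import Polynomial as P

L, N = 2.0 * np.pi, 8001
x = np.linspace(0.0, L, N)
trap = lambda f: float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x)))

# Manufactured measurements at a fixed time: u and u_t only.
u   = np.sin(x) + 0.5 * np.sin(2 * x)
u_x = np.cos(x) + np.cos(2 * x)              # analytic, used only to build u_t
u3  = -np.cos(x) - 4.0 * np.cos(2 * x)
u5  = np.cos(x) + 16.0 * np.cos(2 * x)
alpha_true = np.array([2.0, 0.5, 1.5])
u_t = alpha_true[0] * u * u_x + alpha_true[1] * u3 + alpha_true[2] * u5

M, q = 8, 5                                   # fifth-order modulating functions
A = np.zeros((M, 3)); K = np.zeros(M)
for m in range(1, M + 1):
    phi = P([0.0, 1.0]) ** (q + m) * P([L, -1.0]) ** (q + M - m + 1)
    phi = phi / np.max(np.abs(phi(x)))
    d1, d3, d5 = phi.deriv(1)(x), phi.deriv(3)(x), phi.deriv(5)(x)
    # Integration by parts: ∫phi u u_x = -1/2 ∫phi' u^2, ∫phi u_xxx = -∫phi''' u,
    # ∫phi u_xxxxx = -∫phi^(5) u (all boundary terms vanish by property (3)).
    A[m - 1] = [-0.5 * trap(d1 * u**2), -trap(d3 * u), -trap(d5 * u)]
    K[m - 1] = trap(phi(x) * u_t)

alpha_hat = np.linalg.lstsq(A, K, rcond=None)[0]
print(alpha_hat)
```

All five spatial derivatives are transferred onto the fifth-order modulating functions, so no spatial derivative of the data is ever approximated.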
4.2. Numerical simulations
For numerical simulations, let ; then, (Equation46
(48)
(48) ) is a Kawahara equation. Kawahara equation with an initial condition
has the following exact solution [Citation37]:
(51)
(51)
Figure and Table exhibit the estimated parameters values and the relative errors, respectively, where ,
,
and the degree of freedom q in (Equation45
(47)
(47) ) is chosen such that
is at least of order five. These results show that the identification of the parameters is successful in both noise-free and noisy cases.
Figure 12. Blue, red and green bars represent the estimated values of the parameters α1, α2 and α3 in the Kawahara equation, respectively. The exact values are α1 = α2 = α3 = 1 and the estimation is done w.r.t different noise levels; q = 9 and M = 27.
Table 8. The relative errors of estimating the parameters α1, α2 and α3 in the Kawahara equation w.r.t different noise levels.
For constant unknowns, as in this example, it can be enough to have the measurements on a suitable sub-domain [0, L*], L* ≤ L. The estimation of the three parameters in the Kawahara equation for different values of L* is shown in Figure 13. From this figure, we observe that, in the noise-free case, the error is small even when the data are available only in the first third of the whole interval. In the noisy case, this sub-domain should be enlarged to obtain an acceptable estimation error. In both cases, the error generally decreases as L* increases.
Figure 13. Relative errors (in %) for estimating α1, α2 and α3 simultaneously in the Kawahara equation w.r.t different L*: (a) in the noise-free case and (b) with 5% of noise on the measurements.
Figure 14. Number of modulating functions vs. the relative errors for (a) IP1, (b) IP2, (c) IP3 source estimation and (d) IP3 velocity estimation. In these figures, 5% of the noise was added to the measurements.
Figure 15. Number of modulating functions vs. the relative error in IP1 w.r.t different noise levels and unknown source. Two types of modulating functions have been applied: polynomial modulating functions and sinusoidal modulating functions.
Similarly to the procedure developed for the wave equation example, one can estimate space-dependent coefficients in the fifth-order KdV equation. In addition, the MFB method can be applied to estimate parameters in other high-order non-linear PDEs, such as the sixth-order Boussinesq equation, taking advantage of the properties of modulating functions to transfer the spatial derivatives onto the modulating functions, whose derivatives can be computed analytically.
5. Discussion
The numerical experiments in this paper confirm the efficiency of the modulating functions-based method and its simple implementation. In addition, the obtained results show its good performance even with high levels of noise.
The number of modulating functions, M, plays an important role in the performance of the method; see Figure 14, which exhibits the number of modulating functions vs. the relative error for IP1, IP2 and IP3. Interestingly, it shows that the accuracy of the estimation can be improved by increasing M, especially in the noisy case. It also shows that there exists an optimal number of modulating functions in the studied examples, although the estimation remains good over a relatively large interval of M, and that this optimal number depends on the considered problem. From this observation, it is worth knowing, for a specific parameter identification problem and a chosen type of modulating functions, how the relative error, as a function of M, is affected by the noise level and the nature of the unknown functions. Figure 15 illustrates this behaviour for the unknown input case in the presence of noise: the relative error, as a function of the number of modulating functions, is invariant with respect to the noise level and the type of the unknown. This offers a way to select an optimal number of modulating functions in real applications: one can set a synthetic function for the unknown input, apply the method, compute the error for different numbers of modulating functions to find the optimal one, and use it as a good guess for the real inverse problem. This result is expected for the unknown input problem (IP1) because the system matrix is independent of the unknown and of the measurements (see Equation (23a)).
The choice of an appropriate number of basis functions used to expand the space-varying unknowns is also important. Choosing too large a number may lead to ill-conditioning issues, while a number significantly smaller than the appropriate one loses accuracy. Hence, this number must be selected such that both the numerical stability and the accuracy are relatively good.
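The ill-conditioning caused by an oversized basis can be observed directly in a hypothetical setup (assumed sine basis and assumed polynomial modulating functions): high-frequency basis functions integrate to nearly zero against the smooth modulating functions, producing almost-null columns and a rapidly growing condition number:

```python
import numpy as np
from numpy.polynomial import Polynomial as P

L, N, M, q = 1.0, 2001, 20, 2
x = np.linspace(0.0, L, N)
trap = lambda f: float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x)))

# Pre-build the assumed polynomial modulating functions, normalized.
phis = []
for m in range(1, M + 1):
    phi = P([0.0, 1.0]) ** (q + m) * P([L, -1.0]) ** (q + M - m + 1)
    phis.append(phi / np.max(np.abs(phi(x))))

# Condition number of the system matrix for increasing basis sizes I.
conds = {}
for I in (3, 8, 15):
    A = np.array([[trap(phi(x) * np.sin((i + 1) * np.pi * x))
                   for i in range(I)] for phi in phis])
    conds[I] = float(np.linalg.cond(A))
print(conds)
```

In this sketch the condition number deteriorates sharply between I = 3 and I = 15, which is the numerical counterpart of the trade-off discussed above.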
The MFB method can also be applied when the measurements are available at fixed points in space instead of at fixed time instants. However, time-dependent modulating functions must be used in this case.
In a forthcoming study, we will consider the case of a reduced number of measurements and study the solution of the inverse problems with the MFB method. An example of this study includes solving the previous inverse problems using field measurements only.
6. Conclusion
In this paper, the MFB method for solving inverse problems for 1D-PDEs has been proposed. The well-posedness of the MFB solution has been studied. As illustrative examples, the method has been applied to the wave equation (a linear 1D-PDE) and to the fifth-order KdV equation (a non-linear 1D-PDE) to estimate different unknowns. By applying the MFB method, the problem has been converted into a system of algebraic equations which is linear in the unknowns. These unknowns have then been estimated using least-squares algorithms. Numerical simulations in both noise-free and noise-corrupted cases have shown the good performance and robustness of this method. The noise error contribution has also been studied, and an upper bound for it has been derived.
Future work will investigate the choice of the number of basis functions in order to propose an efficient and systematic method for selecting this number. In addition, we will extend the MFB method to the estimation of discontinuous space–time-dependent unknowns, which are more realistic in real applications.
Acknowledgements
The authors would like to thank the anonymous reviewers for their valuable comments and suggestions that have led to an improved version of this paper.
No potential conflict of interest was reported by the authors.
References
- Robinson EA. Predictive decomposition of time series with application to seismic exploration. Geophysics. 1967;32:418–484.
- Cameron M, Fomel S, Sethian J. Inverse problem in seismic imaging. PAMM. 2007;7:1024803–1024804.
- Fear E, Stuchly M. Microwave detection of breast cancer. IEEE Trans. Microwave Theory Tech. 2000;48:1854–1863.
- Kirsch A. Characterization of the shape of a scattering obstacle using the spectral data of the far field operator. Inverse Prob. 1998;14:1489–1512.
- Muniz W, Ramos F, de Campos Velho H. Entropy- and Tikhonov-based regularization techniques applied to the backwards heat equation. Comput. Math. Appl. 2000;40:1071–1084.
- Clason C, Klibanov MV. The quasi-reversibility method for thermoacoustic tomography in a heterogeneous medium. SIAM J. Sci. Comput. 2007;30:1–23.
- Han H, Ling L, Takeuchi T. An energy regularization for Cauchy problems of Laplace equation in annulus domain. Commun. Comput. Phys. 2011;9:878–896.
- Poli L, Oliveri G, Rocca P, et al. Bayesian compressive sensing approaches for the reconstruction of two-dimensional sparse scatterers under TE illuminations. IEEE Trans. Geosci. Remote Sens. 2013;51:2920–2936.
- Moireau P, Chapelle D, Le Tallec P. Joint state and parameter estimation for distributed mechanical systems. Comput. Methods Appl. Mech. Eng. 2008;197:659–677.
- Ramdani K, Tucsnak M, Weiss G. Recovering the initial state of an infinite-dimensional system using observers. Automatica. 2010;46:1616–1625.
- Asiri S, Laleg-Kirati T, Zayane C. Inverse source problem for a one-dimensional wave equation using observers. In: 11th International Conference on Mathematical and Numerical Aspects of Waves; Tunisia; 2013. p. 149–150.
- Tucsnak M, Weiss G. From exact observability to identification of singular sources. Math. Control Signals Syst. 2014;27:1–21.
- Zuazua E. Propagation, observation, control and numerical approximation of waves. SIAM Rev. 2005;47:197–243.
- Shinbrot M. On the analysis of linear and nonlinear dynamical systems from transient-response data. National Advisory Committee for Aeronautics NACA; 1954.
- Shinbrot M. On the analysis of linear and nonlinear systems. Trans. ASME. 1957;79:547–552.
- Perdreauville FJ, Goodson R. Identification of systems described by partial differential equations. J. Fluids Eng. 1966;88:463–468.
- Fairman F, Shen D. Parameter identification for a class of distributed systems. Int. J. Control. 1970;11:929–940.
- Co TB, Ungarala S. Batch scheme recursive parameter estimation of continuous-time systems using the modulating functions method. Automatica. 1997;33:1185–1191.
- Liu DY, Laleg-Kirati TM, Perruquetti W, et al. Non-asymptotic state estimation for a class of linear time-varying systems with unknown inputs. IFAC P. Vol. 2014;47:3732–3738.
- Liu DY, Laleg-Kirati TM. Robust fractional order differentiators using generalized modulating functions method. Signal Process. 2015;107:395–406.
- Liu DY, Tian Y, Boutat D, et al. An algebraic fractional order differentiator for a class of signals satisfying a linear differential equation. Signal Process. 2015;116:78–90.
- Aldoghaither A, Liu D, Laleg-Kirati T. A novel approach for parameter and differentiation order estimation for a space fractional advection dispersion equation. SIAM J. Sci. Comput. 2015;37:A2813–A2839.
- Takaya K. The use of Hermite functions for system identification. IEEE Trans. Autom. Control. 1968;13:446–447.
- Preisig H, Rippin D. Theory and application of the modulating function method – I. Review and theory of the method and theory of the spline-type modulating functions. Comput. Chem. Eng. 1993;17:1–16.
- Saha DC, Rao BP, Rao GP. Structure and parameter identification in linear continuous lumped systems: the Poisson moment functional approach. Int. J. Control. 1982;36:477–491.
- Patra A, Unbehauen H. Identification of a class of nonlinear continuous-time systems using Hartley modulating functions. Int. J. Control. 1995;62:1431–1451.
- Pearson A, Lee F. Parameter identification of linear differential systems via Fourier-based modulating functions. Control-Theory Adv. Technol. 1985;1:239–266.
- Fliess M. Analyse non standard du bruit. C.R. Math. 2006;342:797–802.
- Fliess M. Critique du rapport signal à bruit en communications numériques [Questioning the signal-to-noise ratio in digital communications]. ARIMA (Revue africaine d'informatique et de Mathématiques appliquées). 2008;9:419–429.
- Liberzon D. Calculus of variations and optimal control theory: a concise introduction. New Jersey: Princeton University Press; 2012.
- Liu DY, Laleg-Kirati TM, Gibaru O, et al. Identification of fractional order systems using modulating functions method. In: American Control Conference (ACC), 2013. IEEE, Washington, DC; 2013. p. 1679–1684.
- Rabiner LR. Multirate digital signal processing. New Jersey: Prentice Hall PTR; 1996.
- Benney D. General theory for interactions between short and long waves. Stud. Appl. Math. 1977;56:81–94.
- Gorshkov K, Ostrovsky L. Interactions of solitons in nonintegrable systems: direct perturbation method and applications. Phys. D: Nonlinear Phenom. 1981;3:428–438.
- Bridges T, Derks G. Linear instability of solitary wave solutions of the Kawahara equation and its generalizations. SIAM J. Math. Anal. 2002;33:1356–1378.
- Janno J, Šeletski A. Reconstruction of coefficients of higher order nonlinear wave equations by measuring solitary waves. Wave Motion. 2014;52:15–25.
- Kaya D. An explicit and numerical solutions of some fifth-order KdV equation by decomposition method. Appl. Math. Comput. 2003;144:353–363.
Appendix 1
Proof of Corollary (1)
IP1
First, at a fixed time, the PDE in Equation (28a) is multiplied by the modulating function and integrated over [0, L], which gives Equation (A1). Then, integration by parts is applied twice to the second integral in (A1), which yields (A2). Finally, by writing the unknown in its basis expansion, system (28b) is obtained, with components as in (28a) and (29a).
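The double integration by parts used above can be written out explicitly. A sketch, assuming a generic modulating function $\phi_m$ that vanishes together with its first derivative at $x=0$ and $x=L$ (the defining property of modulating functions):

```latex
\int_0^L \phi_m(x)\, u_{xx}(x,t)\, dx
  = \underbrace{\Big[\phi_m(x)\, u_x(x,t) - \phi_m'(x)\, u(x,t)\Big]_0^L}_{=\,0}
  + \int_0^L \phi_m''(x)\, u(x,t)\, dx .
```

The boundary bracket vanishes by the choice of $\phi_m$, so the two spatial derivatives are transferred from the solution $u$ onto the known function $\phi_m$; this is what removes the need for derivative measurements.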
IP2
Equation (28a) is first multiplied by the modulating function and integrated over the space interval [0, L], which gives Equation (A3). After that, the second term on the left-hand side is integrated by parts twice. Then, by writing c(x) in its basis expansion, system (28b) is obtained, with components as in (28b) and (29b).
Appendix 2
Proof of Corollary (2)
System (35) can be obtained by following the same steps as in Appendix 1: Equation (28a) is first multiplied by the modulating function and integrated over the interval [0, L]; then, integration by parts is applied to the second integral; and finally, the unknowns are written in their basis expansions.
Appendix 3
Proof of Corollary (3)
STEP 1: Fix the time in Equation (48) at a given instant, and then multiply the equation by the modulating functions, which gives Equation (C1).
STEP 2: Integrate over the space interval, which gives Equation (C2).
STEP 3: Apply the integration by parts formula once to the second integral, three times to the third integral and five times to the fourth integral in (C2) to obtain Equation (C3). The first integral in (C3) represents the mth row of K, as in (51), while the second, third and fourth integrals form the mth row of the corresponding matrix multiplied by the vector of unknowns; see (50) and (52).
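The repeated integrations by parts in STEP 3 all follow one pattern. A sketch, assuming modulating functions $\phi_m$ whose derivatives up to order four vanish at $x=0$ and $x=L$ so that all boundary terms drop:

```latex
\int_0^L \phi_m\, u_x \, dx = -\int_0^L \phi_m'\, u \, dx, \qquad
\int_0^L \phi_m\, u_{xxx} \, dx = -\int_0^L \phi_m'''\, u \, dx, \qquad
\int_0^L \phi_m\, u_{xxxxx} \, dx = -\int_0^L \phi_m^{(5)}\, u \, dx .
```

Each integration by parts contributes a factor of $-1$, so the odd derivative orders 1, 3 and 5 all end with a minus sign, and every spatial derivative is moved from the unmeasured solution onto the known modulating function.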