Abstract
A Newton-type method is proposed for the recovery of the unknown coefficient function in the canonical Sturm-Liouville differential equation from two spectral data. Specifically, the two spectral data are used to produce two Cauchy data, which in turn serve as the input to a non-linear equation whose unknown is the coefficient function in the canonical Sturm-Liouville differential equation. This non-linear equation is solved numerically by the Newton method. Each Newton iterate requires that a Goursat-Cauchy boundary value problem be solved numerically. The Fréchet differentiability of the non-linear map is also discussed in the present paper. The numerical implementation of the Newton method for this inverse Sturm-Liouville problem is illustrated with examples, and it is compared with a quasi-Newton method and with a variational method discussed in previous literature. The numerical examples show that the Newton method needs fewer iterates to recover the true coefficient function than the quasi-Newton method. The Newton method is comparable with the variational method in terms of accuracy and number of iterates when the boundary parameters are given, and it requires a much smaller number of iterates than the variational method when the boundary parameters are reconstructed from the available spectra.
1 Introduction
The goal of the paper is to develop a Newton-type method for solving numerically a class of inverse Sturm-Liouville problems: reconstruction of the potential (coefficient) function in the canonical Sturm-Liouville differential equation from two spectra. The potential reconstruction from two spectra is further beneficial to other inverse spectral problems (e.g. solving inverse Sturm-Liouville problems by multiple spectra). For solving numerically inverse Sturm-Liouville problems by three spectra, see for instance [Citation1–Citation3].
In practice, the Sturm-Liouville differential equation arises from the infinitesimal, vertical vibration of an elastic string of negligible weight and the constraints on the string give rise to the boundary conditions that accompany the differential equation (e.g. [Citation3, Chapter 2 and the beginning of Section 5.2]).
While a quasi-Newton method was proposed in [Citation4], the present work distinguishes itself from previous literature in the following aspects. We introduce intuitively the formula for the Fréchet derivative of the map in the non-linear equation we wish to solve. We then prove rigorously that the proposed formula satisfies the requirements of the Fréchet derivative. The non-linear map that we propose is different from that of [Citation4, p.166]. Another essential difference between the present paper and [Citation4] is that our method calls for updating the inverse of the Fréchet derivative operator at each step in the process, whereas the quasi-Newton method in [Citation4] does not update the inverse of the Fréchet derivative, but rather uses the inverse of the Fréchet derivative operator at zero in each stage of the iteration process. Using the inverse of the Fréchet derivative operator at the current iterate to produce the next iterate (i.e. the Newton method) results in fewer iterates needed to reconstruct the true potential function than the quasi-Newton method, as the examples in Section 6 indicate.
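The practical effect of updating versus freezing the derivative can be seen already in one dimension. The sketch below is a hypothetical toy model, not the paper's operator equation: it contrasts a Newton iteration, which re-evaluates the derivative at the current iterate, with a quasi-Newton iteration that keeps the derivative frozen at the initial guess, and counts the iterates each one needs.

```python
# Toy comparison (illustrative only): Newton with an updated derivative versus
# a quasi-Newton iteration that reuses the derivative at the initial guess.
# The function f and the starting point are arbitrary demo choices.

def solve(f, df, x0, frozen, tol=1e-12, max_iter=200):
    """Return (root, iteration count). If `frozen`, reuse df(x0) throughout."""
    x, d0 = x0, df(x0)
    for k in range(1, max_iter + 1):
        step = f(x) / (d0 if frozen else df(x))
        x -= step
        if abs(step) <= tol * max(1.0, abs(x)):
            return x, k
    return x, max_iter

f  = lambda x: x**3 - 2.0          # root: 2**(1/3)
df = lambda x: 3.0 * x**2

root_n, it_newton = solve(f, df, x0=2.0, frozen=False)
root_q, it_quasi  = solve(f, df, x0=2.0, frozen=True)
print(it_newton, it_quasi)
```

Both variants converge, but the frozen-derivative iteration is only linearly convergent, so it needs many more steps; this mirrors, in miniature, the iterate counts reported in Section 6.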
Our Newton-type method is also different from the variational method of [Citation5, Citation6]. In our approach, the two sequences of eigenvalues are used to produce two Cauchy data, and these data are the information the Newton method uses directly; the approach of [Citation5, Citation6], by contrast, is to use the two sequences of eigenvalues directly. Specifically, the method in [Citation5, Citation6] constructs a weighted least-squares functional that quantifies the distance between the two given sequences of eigenvalues of a potential function that is to be reconstructed and the two counterpart sequences of eigenvalues of a trial potential. It is shown in [Citation5, Citation6] that the only extrema of the functional are global minimum points, and thus the steepest descent method used to minimize the least-squares functional will generate an estimate of the global minimum point, i.e. an estimate of the true potential function. In [Citation5], the parameters in the boundary conditions are given, and they are used in the construction of the functional gradient. In [Citation6], the boundary parameters are reconstructed along with the potential function, which is a more realistic case, but the method therein requires a large number of iterates (hundreds and even thousands) to converge. The Newton method we propose works well (i.e. a low number of iterates and a small relative/absolute error) in both cases, given Robin boundary parameter H and estimated H, because the only place these parameters are needed is in the construction of the Cauchy data, which is done outside of the Newton algorithm. In both cases, the mean of the true potential function that is needed in the construction of the Cauchy data is estimated from the eigenvalues.
In the remainder of the paper, we shall provide the background of the inverse Sturm-Liouville problem by two spectra (Section 2), the description and details of our Newton-type method (Sections 3–5), the numerical results (Section 6), and the proof of an auxiliary result (Appendix 1).
2 Preliminaries
The canonical Sturm-Liouville equation on the interval [0,a] is the second-order linear ordinary differential equation:

-y''(x) + q(x) y(x) = λ y(x), 0 < x < a. (2.1)

Here, the function q is called the coefficient function or potential function. Equation (2.1) is accompanied by two boundary conditions (one per end-point). Then two types of problems arise:
the direct Sturm-Liouville problem: given q, find the eigenpairs of (2.1) with the accompanying boundary conditions. It has been proved in the literature that if q is real-valued, then the eigenvalues are real, distinct, countably many, tending to infinity, and satisfy an asymptotic formula specific to the type of boundary conditions under consideration, and that the normalized eigenfunctions are real-valued and form an orthonormal basis of L²(0,a) (i.e. an orthonormal and complete sequence). See Section 4.3 of [Citation7], and also [Citation8–Citation10].
the inverse Sturm-Liouville problem: given two spectral data, find the coefficient function q. Note that the two spectral data are not the set of eigenpairs, since otherwise, even with one eigenpair, the problem would become trivial (i.e. knowing one eigenpair, the coefficient q would be immediately found from (2.1)). Rather, the two spectral data can be either two sequences of eigenvalues (one sequence corresponding to one pair of boundary conditions, and the other sequence corresponding to another pair of boundary conditions of a different type than the first pair, or of the same type but with different boundary parameters), or one sequence of eigenvalues and one sequence of norming constants, or one sequence of eigenvalues and partial information about the potential function (e.g. symmetric with respect to the midpoint of the interval [0,a]).
Figure 1. The domain Δ₀ of the problem (2.8):

K_xx(x,t;q) − K_tt(x,t;q) − q(x) K(x,t;q) = 0, (x,t) ∈ Δ₀,
K(x,x;q) = (1/2) ∫₀ˣ q(s) ds, 0 ≤ x ≤ a,
K(x,0;q) = 0, 0 ≤ x ≤ a. (2.8)
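To make the role of the Goursat problem (2.8) concrete, here is a minimal numerical sketch under our own illustrative assumptions (constant potential q ≡ c on [0,1], uniform grid): after the odd extension in t and the change to characteristic variables ξ = (x+t)/2, η = (x−t)/2, the problem becomes the integral equation u(ξ,η) = (c/2)(ξ−η) + ∫₀^ξ∫₀^η q·u, which we solve by Picard iteration. For constant q the Picard series sums in closed form, u = (c/2)(ξ−η)·S(cξη) with S(w) = Σ_{n≥0} wⁿ/(n!(n+1)!) (a modified-Bessel-type series), which serves as an accuracy check; none of this is the paper's own implementation.

```python
# Illustrative Picard solver for (2.8) in characteristic variables, assuming
# a CONSTANT potential q(x) = c (demo choice, not the paper's example).
import numpy as np

def cumtrap2(F, h):
    # double cumulative trapezoid: G[i, j] ~ int_0^{xi_i} int_0^{eta_j} F
    G = np.zeros_like(F)
    G[1:, :] = np.cumsum(0.5 * (F[:-1, :] + F[1:, :]), axis=0) * h
    G2 = np.zeros_like(F)
    G2[:, 1:] = np.cumsum(0.5 * (G[:, :-1] + G[:, 1:]), axis=1) * h
    return G2

def goursat_constant_q(c, a=1.0, N=81, tol=1e-13, max_iter=200):
    # fixed point of u = u0 + int int c*u, u0 from the odd-extension data
    h = a / (N - 1)
    s = np.linspace(0.0, a, N)
    xi, eta = np.meshgrid(s, s, indexing="ij")
    u0 = 0.5 * c * (xi - eta)
    u = u0.copy()
    for _ in range(max_iter):
        unew = u0 + cumtrap2(c * u, h)
        if np.max(np.abs(unew - u)) < tol:
            u = unew
            break
        u = unew
    return s, u

def S(w, terms=30):
    # sum_{n>=0} w^n / (n! (n+1)!), summed termwise (converges fast)
    out = np.zeros_like(w)
    term = np.ones_like(w)
    for n in range(terms):
        out = out + term
        term = term * w / ((n + 1) * (n + 2))
    return out

c, a, N = 1.0, 1.0, 81
s, u = goursat_constant_q(c, a, N)
xi, eta = np.meshgrid(s, s, indexing="ij")
exact = 0.5 * c * (xi - eta) * S(c * xi * eta)   # closed-form Picard sum
mask = (xi >= eta) & (xi + eta <= a)             # image of 0 <= t <= x <= a
err = np.max(np.abs((u - exact)[mask]))
print(err)
```

The kernel is recovered as K(x,t) = u((x+t)/2, (x−t)/2); the solver reproduces the closed form to the quadrature accuracy O(h²), and the oddness in t shows up as antisymmetry of u under swapping ξ and η.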
Differentiating (2.7) with respect to x and using the first boundary condition of (2.8), we obtain:

(2.9)

Based on (2.7) and (2.9), we can write the systems of Equations (2.4) and (2.5) in the equivalent form:
(2.10)

and respectively

(2.11)

The unknowns in (2.10) and (2.11) are the boundary functions of the kernel from which the Cauchy data are built. Expanding them in Fourier series and inserting these Fourier series into (2.10) and (2.11), respectively, we obtain two linear systems with the Fourier coefficients being the unknowns. Hence, the unknown functions will be determined through their Fourier coefficients, and then the Cauchy data will be immediately obtained. We want to add here that the basis functions of the Fourier series representations must be chosen carefully to ensure numerical accuracy in the Cauchy data when constructing them from the eigenvalues. An idea of how to choose the basis functions is provided by (2.10) and (2.11), the known properties of the kernel, and the asymptotic formulas of the eigenvalues λ's and μ's.
It is known in the literature (see [Citation8, p.256], or [Citation10, Theorem 1 (p.29), Theorem 2 (p.30), Theorem 4 (p.35)], or [Citation7, Lemma 4.7 (p.135–136) and formula (4.25) on p.139], and rescale to [0,a] by a linear change of variables) that the Dirichlet spectrum {λ_n} of the potential function q is a sequence of real, simple, increasing eigenvalues satisfying the asymptotic formula:

λ_n = (nπ/a)² + C + c_n, n ≥ 1, (2.12)

where (c_n) is an ℓ² sequence of real numbers (i.e. Σ_n c_n² < ∞).
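A quick numerical sanity check of the asymptotics (2.12) can be made with the illustrative choice q ≡ C constant on [0,a] (our assumption, not an example from the paper), for which the Dirichlet eigenvalues of −y″ + qy = λy are exactly (nπ/a)² + C; a standard finite-difference discretization recovers this:

```python
# Sketch: for the constant potential q(x) = C with Dirichlet boundary
# conditions, lambda_n - (n*pi/a)^2 is exactly C, consistent with (2.12).
# The values a, C and the grid size are arbitrary demo choices.
import numpy as np

a, C, N = 1.0, 3.0, 2000
h = a / N
main = np.full(N - 1, 2.0 / h**2 + C)       # -y'' discretized + potential
off = np.full(N - 2, -1.0 / h**2)
T = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
lam = np.sort(np.linalg.eigvalsh(T))

for n in range(1, 6):
    print(n, lam[n - 1] - (n * np.pi / a) ** 2)   # each close to C
```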
It is also known (see [Citation8, p.255] and rescale to [0,a]) that the Dirichlet–Robin spectrum {μ_n} of the potential function q and boundary parameter H is a sequence of real, simple, increasing eigenvalues satisfying the asymptotic formula:

μ_n = ((n − 1/2)π/a)² + C + 2H/a + d_n, n ≥ 1, (2.13)

where (d_n) is an ℓ² sequence of real numbers. It follows from (2.12) and (2.13) that
(2.14)

from which we infer that, for n sufficiently large,

(2.15)

Noting from (2.10) and (2.11) how the unknown functions enter those systems, formula (2.15) suggests considering the basis functions:

(2.16)
They are orthogonal bases of L²(0,a), as seen from [Citation12, Example 2(d1), p.308–309] and the Appendix of [Citation1], respectively. The orthogonality of the functions in (2.16), along with (2.15), makes it easier for (2.10) and (2.11) to be solved numerically, as their coefficient matrices will be almost diagonal. Another reason to choose the basis functions (2.16) is that the unknown functions are zero at t = 0, as inferred from the second boundary condition of (2.8); the same is true for the functions in (2.16) at t = 0. Also, the second unknown function is not zero at t = a, and neither are the corresponding basis functions. From the first boundary condition of (2.8) and the discussion above, we see that the best Fourier series representations are:
(2.17)

(2.18)
We mention here that we do not need the remaining Cauchy datum for our numerical method/algorithm. However, it can be obtained numerically in one of the following two ways. The first way is to apply a divided difference scheme to the just-calculated kernel values; the step size is dictated by the number of partition points in the x-direction. The second way is to take advantage of the integration by parts formula in (2.10) and the two boundary conditions of (2.8):

(2.19)
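The first of the two routes above, the divided difference at the right end-point, can be sketched as follows; the array name K, the grid, and the test function are our own illustrative assumptions:

```python
# Hypothetical sketch of the divided-difference route: given kernel values
# K[i, j] ~ K(x_i, t_j) on a uniform grid with M subintervals in the
# x-direction, approximate the x-derivative at x = a by a one-sided
# second-order backward difference.
import numpy as np

def x_derivative_at_a(K, a):
    M = K.shape[0] - 1                 # number of subintervals in x
    hx = a / M
    # (3*K_M - 4*K_{M-1} + K_{M-2}) / (2*hx), second-order accurate
    return (3.0 * K[-1, :] - 4.0 * K[-2, :] + K[-3, :]) / (2.0 * hx)

# quick check on a smooth stand-in function K(x, t) = sin(x) * t
a, M, P = 1.0, 200, 4
x = np.linspace(0.0, a, M + 1)
t = np.linspace(0.0, a, P + 1)
K = np.sin(x)[:, None] * t[None, :]
approx = x_derivative_at_a(K, a)
print(np.max(np.abs(approx - np.cos(a) * t)))   # O(hx^2) error
```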
From (2.19) and (2.14), and the fact that the chosen family is an orthogonal basis of L²(0,a) (see [Citation12, Example 2(c1), p.308–309]), we can write the Fourier series representation of this datum as

(2.20)

Note that the basis functions sin(jπt/a) are not beneficial here, because they are zero at both t = 0 and t = a, whereas the function being represented need not be.
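The end-point behaviour that drives the basis choice can be illustrated numerically. In the sketch below (test function and truncation level are our own demo choices), a function vanishing at t = 0 but not at t = a is projected both onto the half-integer sines sin((j − 1/2)πt/a), which do not vanish at t = a, and onto the full sines sin(jπt/a), which force the value 0 there:

```python
# Demo of the basis-choice point: half-integer sines versus full sines for a
# function with f(0) = 0 and f(a) != 0. The test function is a demo choice.
import numpy as np

a, n_terms, n_pts = 1.0, 40, 2001
t = np.linspace(0.0, a, n_pts)
h = t[1] - t[0]
f = t * np.exp(t)                      # f(0) = 0, f(a) = a*e != 0

def trap(y):
    # composite trapezoid rule on the uniform grid
    return h * (np.sum(y) - 0.5 * (y[0] + y[-1]))

def project(basis_fns):
    approx = np.zeros_like(t)
    for phi in basis_fns:
        approx += (trap(f * phi) / trap(phi * phi)) * phi
    return approx

half = [np.sin((j - 0.5) * np.pi * t / a) for j in range(1, n_terms + 1)]
full = [np.sin(j * np.pi * t / a) for j in range(1, n_terms + 1)]

err_half = abs(project(half)[-1] - f[-1])   # endpoint error at t = a
err_full = abs(project(full)[-1] - f[-1])   # stuck at f(a): sines vanish there
print(err_half, err_full)
```

The full-sine projection can never recover the value at t = a, whereas the half-integer sines approach it as the truncation level grows.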
3 The Newton method for the inverse two spectra problem
Now we can describe the proposed Newton method for recovering the potential function from two spectra. Let us take for illustration two sequences of eigenvalues. To be specific, we take λ_n (the Dirichlet eigenvalues) and μ_n (the Dirichlet–Robin eigenvalues). Note that for another pair of spectral data, the difference in the way q is reconstructed resides only in the manner in which the pair of Cauchy data is extracted from the given pair of spectral data, the Newton method remaining basically the same. Note also that in practice, only finitely many of the λ's and μ's are available. So, we can assume that only the first N1 λ's and the first N2 μ's are known. Write Fourier series expansions for the two unknown functions and insert them, respectively, into the systems:
(3.1)

(3.2)

where C and H are real numbers that can be estimated from the eigenvalues λ's and μ's. Precisely, due to the asymptotic formulas of the eigenvalues (2.12) and (2.13), we can write:

(3.3)

(3.4)
Since this estimate is expected to match C, we take

C = lim_{n→∞} [ λ_n − (nπ/a)² ], (3.5)

and hence

aC/2 + H = (a/2) · lim_{n→∞} [ μ_n − ((n − 1/2)π/a)² ]. (3.6)
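A small numerical sketch of (3.5) and (3.6): synthetic eigenvalue sequences with a decaying correction term stand in for the output of a Sturm-Liouville solver (the values of a, C, H and the 1/n correction are demo assumptions), and the tail terms of the two sequences yield the estimates of C and H:

```python
# Demo of estimating C and H from the spectral tails via (3.5)-(3.6).
# The synthetic sequences mimic the asymptotics (2.12)-(2.13).
import numpy as np

a, C, H = 1.0, 2.0, 0.5
n = np.arange(1, 41)
lam = (n * np.pi / a) ** 2 + C + 1.0 / n                        # mimics (2.12)
mu = ((n - 0.5) * np.pi / a) ** 2 + C + 2 * H / a + 1.0 / n     # mimics (2.13)

C_est = lam[-1] - (n[-1] * np.pi / a) ** 2                      # (3.5), tail term
H_est = (a / 2) * (mu[-1] - ((n[-1] - 0.5) * np.pi / a) ** 2) - a * C_est / 2  # (3.6)
print(C_est, H_est)
```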
Bearing in mind (2.17) and (2.18), we write the Fourier series (actually, truncated Fourier series) of the two unknown functions as follows:

(3.7)

h̃(t) = Σ_{j=1}^{N2} β_j sin((j − 1/2)πt/a). (3.8)

Solve the newly obtained linear systems (3.1) and (3.2) for the Fourier coefficients of the two expansions, respectively. Then we need to determine q
such that the corresponding kernel traces match these Cauchy data, where K solves the Goursat problem (2.8). In other words, we need to solve the non-linear equation

(3.9)

for the unknown function q, where the non-linear map in (3.9) is defined by

(3.10)

with K being the weak solution to the Goursat problem (2.8). The non-linear Equation (3.9) is equivalent to the equation

(3.11)
With this notation, the Newton method applied to (3.11) reads as the usual iteration, which can be written equivalently, based on (3.10), as

(3.12)

In order for (3.12) to be usable, the map must be proven to be Fréchet differentiable, and the formula for its Fréchet derivative must be found. We shall also need to find the inverse of the Fréchet derivative operator.
Prior to working towards these goals, we mention that the remaining Cauchy datum, although not needed for our algorithm (but needed for quasi-Newton), can be constructed numerically by following the model described earlier (see the paragraph between (2.18) and (2.20)). Specifically, either apply a divided difference scheme to the just-calculated kernel values, or else write

(3.13)

and insert it into the system

(3.14)

to solve for its Fourier coefficients.
4 The Fréchet differentiability of the map
For a fixed potential, the Fréchet derivative operator must be a linear bounded operator between the appropriate function spaces such that:

(4.1)

We shall work step-by-step to find a natural choice for the derivative; then we shall prove that the proposed formula satisfies (4.1), and also that it is a linear and bounded operator between the previously indicated spaces of functions. Due to (3.10), we have that

(4.2)

where K solves (2.8), and the perturbed kernel solves (2.8) with the perturbed potential in place of q. Letting their difference be the new unknown, we see that it satisfies

(4.3)
Due to the continuous dependence of K on q (see Theorem 4.15(b) in [Citation7, p.147]), and based on (4.3) and (4.2), a natural choice for the Fréchet derivative is

(4.4)

where the function appearing in (4.4) satisfies

(4.5)

The proposed formula (4.4) would be correct if we prove that (4.1) is satisfied. Letting

(4.6)

formula (4.1) will be equivalent to:

(4.7)

Based on (4.3) and (4.5), the remainder satisfies the boundary value problem:

(4.8)
In preparation for dealing with (4.7) we make some remarks. Let the kernel K be extended oddly in the t-variable from its original domain to the reflected one (see the corresponding figure). Then, based on (2.8) and the oddness of the extension, one can easily show that the extension will satisfy the boundary value problem:

(4.9)
Conversely, if a function satisfies (4.9), then so does its odd reflection in t, as one can easily check. Due to the uniqueness of the solution to the boundary value problem (4.9), the two solutions coincide, so the solution of (4.9) is odd in t; its restriction to the original domain then satisfies (2.8) and, since the solution to (2.8) is unique, it coincides with K. Thus, we proved that (2.8) and (4.9) are equivalent. Therefore, we shall regard K as its own odd extension in the variable t. Thus, K is an odd function in t and satisfies (4.9). The same will be true for the perturbed kernel, with the perturbed potential replacing q in (4.9). Then the difference functions introduced above can also be regarded as their own odd extensions in the t-variable. They will satisfy, respectively,

(4.10)

(4.11)

(4.12)
Making the change of independent variables (x,t) ↦ (ξ,η), given by

ξ = (x + t)/2, η = (x − t)/2, (4.13)

and letting the transformed functions keep their names, we have that (ξ,η) ranges over a domain D (see the corresponding figure), and (4.12) is transformed into:

(4.14)

One can show that the problem (4.14) is equivalent to:

(4.15)

Now we can define an integral operator given by

(Aw)(ξ, η) = ∫₀^ξ ∫₀^η Q(ξ′, η′) w(ξ′, η′) dξ′ dη′, w ∈ C(D̄). (4.16)
The fact that this operator is well defined (i.e. it maps C(D̄) into C(D̄)) can be verified by the Lebesgue Dominated Convergence theorem (see [Citation13, Theorem 16, p.91]) applied to the sequence of functions

( Q(ξ′, η′) w(ξ′, η′) χ_{[0,ξ_n]×[0,η_n]}(ξ′, η′) )_{n≥1}, (4.17)

where χ_{[0,ξ_n]×[0,η_n]} is the characteristic function of the rectangle [0,ξ_n]×[0,η_n], and (ξ_n, η_n) → (ξ, η) in D̄ as n → ∞. See Figure 4 for some details of the proof.
Figure 4. The position of (ξ′,η′) relative to the quadrants formed by the horizontal and vertical lines through (ξ,η) determines the value 0 or 1 of χ_{[0,ξ_n]×[0,η_n]}(ξ′,η′) and χ_{[0,ξ]×[0,η]}(ξ′,η′). Specifically, Upper-Left: χ_{[0,ξ_n]×[0,η_n]}(ξ′,η′) = 0 = χ_{[0,ξ]×[0,η]}(ξ′,η′), for all n large enough; Upper-Right: χ_{[0,ξ_n]×[0,η_n]}(ξ′,η′) = 0 = χ_{[0,ξ]×[0,η]}(ξ′,η′), for all n large enough; Lower-Left: χ_{[0,ξ_n]×[0,η_n]}(ξ′,η′) = 1 = χ_{[0,ξ]×[0,η]}(ξ′,η′), for all n large enough; Lower-Right: χ_{[0,ξ_n]×[0,η_n]}(ξ′,η′) = 0 = χ_{[0,ξ]×[0,η]}(ξ′,η′), for all n large enough. This is so because (ξ_n,η_n) → (ξ,η) as n → ∞, making ξ_n stay much closer to ξ than ξ′ does, and η_n stay much closer to η than η′ does, for all n large enough. So |χ_{[0,ξ_n]×[0,η_n]}(ξ′,η′) − χ_{[0,ξ]×[0,η]}(ξ′,η′)| = 0, for all n ≥ N = N(ξ,η,ξ′,η′), and all (ξ′,η′) ∈ D̄, except maybe for those (ξ′,η′) in D̄ located on the horizontal and vertical line segments through (ξ,η). But these line segments have Lebesgue measure zero, and so the sequence (4.17) converges to Q(ξ′,η′) w(ξ′,η′) χ_{[0,ξ]×[0,η]}(ξ′,η′) a.e. in (ξ′,η′) ∈ D̄. Also, all terms of (4.17) are dominated by |Q(ξ′,η′)| · ‖w‖_{C(D̄)}, which can easily be proven to belong to L¹(D).
The linearity of the operator follows from the linearity of the double integral. Its boundedness is proven as follows. From (4.16) and (4.13), and the Cauchy–Schwarz inequality, we have

(4.18)

Taking the supremum in (4.18) we obtain:

(4.19)

Thus, the boundedness of the operator is established.
Next, we shall find an upper bound for the norms of the powers of the operator. Using (4.16), (4.18) and (4.13), and the Cauchy–Schwarz inequality, we find:

(4.20)

Based on (4.18) and (4.20), we propose the following estimate, which can be proven by induction with calculations similar to the ones in (4.20):

(4.21)

Taking the supremum in (4.21), we have that

(4.22)

and so

(4.23)

Formula (4.23) implies that the n-th root of the norm of the n-th power of the operator tends to zero as n → ∞; in particular, for some n large enough, the norm of the n-th power is smaller than 1. Thus, by Lemma 1 of Appendix 1, we infer that the identity minus the integral operator is invertible on C(D̄), with a bounded inverse. This fact, together with (4.15) and calculations similar to those in (4.18), implies that:

(4.24)
Then, using the definition of the norm, the oddness in t, formula (4.24), and the Cauchy–Schwarz inequality, we obtain:

(4.25)

By (4.13) we have:

(4.26)
Due to (4.26), the odd/even symmetries in t of the functions involved, and the facts recalled above, we write:

(4.27)

(4.28)

Using the PDE of (4.14) along with the first and, respectively, the second boundary condition of (4.14), we obtain:

(4.29)

(4.30)
Next, for an arbitrary but fixed point of the domain, we have, based on (4.29), (4.24) and (4.13), the definitions given right after (4.13), and the Cauchy–Schwarz inequality, that:

(4.31)

Thus,

(4.32)

and similarly,

(4.33)

By (4.25), (4.27), (4.28), (4.32) and (4.33), we have:

(4.34)
In (4.34), all the remaining constants have been collected into a single quantity. Thus, (4.34) shows that (4.7) is satisfied, because the right-hand side tends to zero as the perturbation of the potential tends to zero (see Theorem 4.15(b) in [Citation7, p.147]). Hence, (4.1) is satisfied because, as mentioned previously, (4.7) is equivalent to (4.1).
In order to completely validate formula (4.4) as the definition of the Fréchet derivative of the map, in addition to proving (4.1) we also need to show that, for a fixed q, the proposed derivative is linear and bounded in the direction of differentiation. The linearity follows from the fact that the boundary value problem (4.5) can easily be proven to be linear in that direction, via the uniqueness of its solution.
For the boundedness we proceed as follows. For a fixed q, the kernel K will be determined from (2.8), so it will be known. Then recall that (4.5) is equivalent to (4.11), with the understanding that the functions in (4.11) are actually the odd extensions in the variable t of the originals. By the change of independent variables (4.13), and letting

(4.35)

the problem (4.11) takes the equivalent form:

(4.36)

In turn, (4.36) is equivalent to

(4.37)

We also have:

(4.38)

(4.39)
By (4.16) and (4.37), and the fact that the identity minus the integral operator is invertible, as shown previously, we write:

(4.40)

where

(4.41)

The right-hand side of (4.40) is well defined because the first term in (4.41) can be proven to be continuous by the Lebesgue Dominated Convergence theorem (by arguments similar to those with which we justified that the right-hand side of (4.16) lies in C(D̄)), and the second and last terms in (4.41) are smooth enough, and so continuous as well. Also, from (4.41) and (4.13), whose Jacobian is constant, and the Cauchy–Schwarz inequality, we get:

(4.42)
Formulas (4.40) and (4.42) imply that

(4.43)

From (4.4), (4.35) and (4.13), and the odd/even symmetries in the t-variable, we write:

(4.44)

In (4.44), the notation is as introduced above. By (4.43) we have:

(4.45)
Then, by (4.38), and with the notation understood as above, we write:

(4.46)

Next, by suitable changes of variable in the first and third terms and in the second term of (4.46), and using (4.43), we write:

(4.47)

The constant in (4.47) is:

(4.48)

With similar calculations, but using (4.39) in place of (4.38), we obtain:

(4.49)

Finally, using (4.45), (4.47) and (4.49) in (4.44), we obtain:

(4.50)

Thus, the boundedness of the Fréchet derivative is proven.
5 The inverse of the Fréchet derivative operator
The inverse of the Fréchet derivative is needed in (3.12) to calculate the Newton iterates. An analytical formula for the inverse is difficult to find. However, we only need to solve (3.12) numerically. Specifically, we shall do this as follows:

(1) Obtain the Cauchy data from the two given spectra, as described in Sections 2 and 3.
(2) Pick an initial guess for the potential.
(3) With the current iterate, solve the Goursat problem (2.8) numerically.
(4) Solve the linearized equation (3.12) for the Newton update.
(5) Set the next iterate by applying the update to the current one.
(6) Repeat steps 3, 4, 5 until the relative error between two successive iterates falls below a prescribed tolerance.
(7) The last Newton iterate will be a numerical estimate of the true potential function.
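The steps above can be sketched schematically. The stand-in below is hypothetical: the paper's map from the potential to the Cauchy-data mismatch is replaced by an arbitrary smooth finite-dimensional map, and the Fréchet derivative by a finite-difference Jacobian; only the shape of the iteration (linearized solve, update, relative-error stopping) mirrors steps (2)-(7):

```python
# Schematic Newton loop on a finite-dimensional stand-in map F (demo choice).
import numpy as np

def newton(F, q0, tol=1e-10, max_iter=50, fd_step=1e-7):
    q = np.asarray(q0, dtype=float)
    for k in range(1, max_iter + 1):
        Fq = F(q)
        J = np.empty((Fq.size, q.size))
        for j in range(q.size):                 # finite-difference Jacobian
            e = np.zeros_like(q)
            e[j] = fd_step
            J[:, j] = (F(q + e) - Fq) / fd_step
        step = np.linalg.solve(J, -Fq)          # step 4: linearized solve
        q = q + step                            # step 5: update the iterate
        rel_err = np.linalg.norm(step) / max(np.linalg.norm(q), 1e-30)
        if rel_err < tol:                       # step 6: stopping criterion
            return q, k
    return q, max_iter

# demo stand-in map with known root (1, 2)
F = lambda q: np.array([q[0] ** 2 - 1.0, q[0] * q[1] - 2.0])
q, iters = newton(F, [3.0, 3.0])
print(q, iters)
```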
6 Numerical results and discussion
To observe how efficient our Newton-type method is in recovering the coefficient function from two sequences of eigenvalues, we started with a known q(x) on a chosen interval and a known Robin parameter H, passed them to MATSLISE [Citation14], and obtained the first N1 Dirichlet eigenvalues and the first N2 Dirichlet–Robin eigenvalues, for the desired values of N1 and N2. We then continued as outlined in Section 5. Note that Sections 2 and 3 not only provide a way of constructing the three Cauchy data from the eigenvalues and H, but also indicate how the mean of q and H can be estimated from the two eigenvalue sequences (see (3.5) and (3.6)). The numerical solution obtained via the implementation of the Newton algorithm in MATLAB code is plotted together with the original q(x) on the same plot, for comparison. We also passed the two eigenvalue sequences to the quasi-Newton MATLAB code and displayed the original q(x) along with the numerical one on the same plot. The two types of plots (i.e. quasi-Newton and Newton) are featured side-by-side in the figures for the examples of q(x) given below. We included on each picture:

the number of iterates performed by the method used to produce that plot;

the relative error between the last two iterates (as defined in item 6 of Section 5); the stopping criterion for each method was a relative error smaller than a prescribed tolerance between two successive iterates;

the norm-error of the last iterate with respect to the original (true) q(x).
Figure 5. Reconstruction of q(x) in Example 1; quasi-Newton (left) – Newton (right); exact Cauchy data supplied (top) – estimated Cauchy data supplied (bottom). When estimating the Cauchy data, we did not assume H, [q], or q(a) known. The parameters H and [q] were recovered from the two eigenvalue sequences, and q(a) was not estimated by [q]. Rather, we used (3.8), h̃(t) = Σ_{j=1}^{N2} β_j sin((j − 1/2)πt/a), and not (6.7), h̃(t) = C̃t/a + Σ_{j=1}^{N2} δ_j sin(jπt/a).
Figure 6. The comparison of the analytical Cauchy data to the numerical Cauchy data corresponding to q(x) of Example 1. The numerical Cauchy data were produced from the first N1 = 21 Dirichlet eigenvalues (λ's) and the first N2 = 12 Dirichlet–Robin eigenvalues (μ's). The impedance parameter in the Robin boundary condition is H = 2. To produce these three panels, the exact value of q(a) was neither passed as an input argument nor estimated by [q] (a possible gross estimate, though the only one available), and the exact value of H was only used by MATSLISE to yield the two eigenvalue sequences. Once these two sequences became available, they were used to estimate the mean of q(x) and H (see (3.5) and (3.6)), which in turn served for the construction of the three Cauchy data, as presented in Sections 2 and 3.
Example 1
The potential function of this example, which depends on a real constant, is special in the sense that we are able to solve analytically the Goursat problem (2.8) by separating the variables x and t, i.e. by writing K(x,t;q) = X(x)T(t); we obtained the kernel in closed form. Thus, we have the Cauchy data that correspond to this q in analytical form. Applying the Newton method to the non-linear equation (3.9) with the Cauchy data

(6.1)

the hope is that the numerical solution is close to the actual solution. And it is indeed so, whether the quasi-Newton or the Newton method is employed, as seen in the top two panels of Figure 5. However, the Newton method requires fewer iterates to converge than the quasi-Newton method. Note that the quasi-Newton method uses the Cauchy data

(6.2)

instead of (6.1). We also ran the two MATLAB codes (quasi-Newton and Newton) with the appropriate Cauchy data for each code, but now constructed from the two sets of eigenvalues rather than being the exact Cauchy data. The results can be seen in the bottom two panels of Figure 5. We also include the figures that display the comparison of the analytical and the numerical (i.e. constructed from the two spectra) Cauchy data corresponding to the above-mentioned q. See Figure 6.
Example 1, exact Cauchy data: the comparison of Newton versus quasi-Newton. We supplied to both MATLAB codes (i.e. Newton and quasi-Newton) the exact (i.e. analytical) Cauchy data.

Example 1, estimated Cauchy data: the comparison of Newton versus quasi-Newton. We constructed the two Cauchy data required by each method from the eigenvalues. We used the first N1 = 21 Dirichlet eigenvalues (λ's) and the first N2 = 12 Dirichlet–Robin eigenvalues (μ's). In MATSLISE, we prescribed tolerances for calculating the Dirichlet and the Dirichlet–Robin eigenvalues, respectively.
Secondly, we see in these figures that when the exact Cauchy data were supplied to the Newton or quasi-Newton MATLAB codes, the reconstruction of the true potential function of Example 1 is very good: the norm-error between the last iterate and the true q is of a very small order in both the Newton and the quasi-Newton cases. When the estimated Cauchy data were supplied, the reconstruction was possible, but not with as high accuracy as for the exact Cauchy data. This is an indication that both codes are efficient, and that the efforts must be focused on obtaining the Cauchy data more accurately.
The top and bottom panels of Figure indicate that the Cauchy data and
(they are expected to match
, and respectively
- see Section 3) are effectively constructed from the two eigenvalue sequences. The middle panel of Figure features the Cauchy datum
(expected to match
– see Section 3). We see that at the left end
of the interval
, it fits the true
, but at the right end
is inaccurate. This is reminiscent of the Fourier series representation (2.18) we had for
Recall from Section 2 that (2.18) was our best choice within the available information (i.e. the two eigenvalue sequences). However, if, in addition to the two eigenvalue sequences, the value
were known, we would have chosen to represent the function (6.3) (with
chosen conveniently – see below), relative to the orthogonal basis of functions in (6.4). This is a better representation than (2.18) (and it turns out that it eliminates the discrepancy at
for
– see the bottom two panels of Figure ), because both the function (6.3) and the basis functions (6.4) are equal to zero at
, and
, by the second boundary condition of (2.8) and by our choice of
, respectively. We choose
to make the function (6.3) zero at
. That means (6.5). Using the first boundary condition of (2.8), we can manipulate (6.5) further to obtain (6.6), because, as mentioned in Section 3,
is expected to be
and
is expected to match
. Thus, when
is known, we shall calculate numerically
and
as outlined in Section 3, obtain
via (6.6), then write a truncated Fourier series for (6.3) relative to the orthogonal basis (6.4). Bearing in mind that
is expected to match
(see Section 3), we shall obtain (6.7). Then continue as outlined in Section 3. If
is unknown, we can either estimate it by
, a possibly gross estimate but the only one available, and then follow the steps above, or we can go ahead with the procedure described in Section 3 to construct numerically the three Cauchy data.
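The truncated Fourier expansion described above can be sketched numerically. Below is a minimal Python illustration (the paper's own codes are in MATLAB); the basis sin(nπx/a) on [0, a] is an assumed stand-in with the required vanishing-endpoint property, not necessarily the paper's (6.4):

```python
import math

def fourier_sine_coeffs(f, a, n_terms, n_grid=2000):
    """Coefficients of f relative to the orthogonal basis sin(n*pi*x/a),
    n = 1..n_terms, on [0, a], computed with the composite trapezoid rule."""
    h = a / n_grid
    coeffs = []
    for n in range(1, n_terms + 1):
        s = 0.0
        for k in range(n_grid + 1):
            x = k * h
            w = 0.5 if k in (0, n_grid) else 1.0  # trapezoid end weights
            s += w * f(x) * math.sin(n * math.pi * x / a)
        coeffs.append((2.0 / a) * h * s)
    return coeffs

def truncated_series(coeffs, a, x):
    """Evaluate the truncated expansion; it vanishes at x = 0 and x = a
    by construction, matching the endpoint behaviour discussed in the text."""
    return sum(c * math.sin((n + 1) * math.pi * x / a)
               for n, c in enumerate(coeffs))
```

For a function such as f(x) = x(1 - x) on [0, 1], twenty terms already reproduce the mid-interval value to three decimal places; the point of the construction in the text is that matching the endpoint behaviour of the basis removes the discrepancy at the right end.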
As mentioned at the end of Section 1, the Newton method (and also the quasi-Newton method) works as well in the case of estimated H as in the case of known H (see Figures and ), because the only place H is needed is in the construction of the Cauchy data, and this is done outside of the Newton (and quasi-Newton) algorithm. The case of constructing the Cauchy data when H is estimated from the eigenvalues was presented in Section 3. Therein, an estimate of H was derived from (3.6), and an estimate of
from (3.5). Specifically, we took numerically
(6.8) and (6.9), where the first
Dirichlet eigenvalues
’s, and the first
Dirichlet–Robin eigenvalues
’s are available. If H is known, (6.9) gives us a second estimate for
: (6.10). Which of (6.8) or (6.10) to take for an estimate of
, when H is known? There are some options:
either (6.8) or (6.10);
a weighted average of (6.8) and (6.10): (6.11);
take (6.8), if
; take (6.10), if
; and take (6.11), if
.
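The estimation and averaging steps above can be sketched as follows. This is a hedged Python illustration, not the paper's MATLAB code: the mean-of-the-potential formula below relies on the standard Dirichlet asymptotics λ_n ≈ (nπ/a)² + [q] as a stand-in for the exact (6.8), and the weight w in the average mirroring option (6.11) is a hypothetical choice:

```python
import math

def mean_q_from_dirichlet(eigs, a):
    """Estimate the mean [q] of the potential from the largest available
    Dirichlet eigenvalue, using the textbook asymptotics
    lam_n ~ (n*pi/a)**2 + [q]; an assumed stand-in for (6.8)."""
    n = len(eigs)
    return eigs[-1] - (n * math.pi / a) ** 2

def combine_estimates(m1, m2, w=0.5):
    """Weighted average of two estimates of [q], in the spirit of (6.11);
    the weight w is a hypothetical choice."""
    return w * m1 + (1.0 - w) * m2
```

For a constant potential q ≡ c the Dirichlet eigenvalues are exactly (nπ/a)² + c, so the first estimator recovers c exactly; for a general potential it is accurate up to the decaying remainder in the asymptotics.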
To conclude the discussion about the important parameters for the Cauchy data, we need to say a few words about the combination . In addition to
needed in (3.1) to produce the Cauchy datum
, we also need the combination
in (3.2) to produce the auxiliary datum
. If H is unknown, we would use (6.9), derivable from (3.4), as there is nothing else available. If H is known, we can use directly
, by plugging in (6.8) for
, and the given value for H. It turned out through numerical experimentation that this latter estimate gives better accuracy in the Cauchy datum
than (6.9). The panels of Figure display the comparison between the exact (true) and the numerical Cauchy datum
for all possible combinations
:
H estimated; H known
neither known nor estimated;
estimated with
;
known.
Figures and feature the comparison between the exact (true) and the numerical potential function when the Cauchy data constructed in each case of
from the two eigenvalue sequences were supplied to the quasi-Newton and Newton codes. Note that for Figure when H is estimated, the panels quasi-Newton and Newton for the case
unknown and not estimated that we obtained by running our MATLAB codes are the same as the bottom two panels of Figure , for which reason we have not included them in Figure . Also, for Figure when H is given, the panels quasi-Newton and Newton for the case of given
that we obtained by running our MATLAB codes are similar to the top two panels of Figure , for which reason we have not displayed them in Figure . The only significant difference between these two pairs of panels resides in the errors of the numerical potential (i.e. the last iterate) with respect to the original (true)
. These errors are of order
for the quasi-Newton and Newton panels when H and
are given, and of order
,
as seen in the top two panels of Figure . These computer-code outputs demonstrate the power of our Newton-type method (and of the quasi-Newton method), as the coefficient function
was highly accurately reconstructed when the inputs (here the two Cauchy data) were supplied more accurately to the Newton (respectively, quasi-Newton) MATLAB codes. Indeed, as the panels of Figures and illustrate, the best accuracy in the Cauchy data
,
,
was achieved when H and
were known.
In most cases (i.e. for most
’s), the Goursat problem (2.8) cannot be solved in closed form, so analytical formulas for the Cauchy data are not available. Thus, we must construct the Cauchy data numerically from the available eigenvalues, and then pass them to the quasi-Newton or Newton codes to produce the numerical potential functions. We also include the following examples to illustrate the effectiveness of the Newton method as compared to the quasi-Newton method [Citation4] and to the variational method [Citation5, Citation6].
Figure 7. The comparison analytical – numerical Cauchy datum Kx(a,t) corresponding to q(x) of Example 1. The Robin boundary parameter H=2 is: estimated from the two spectra (left) – given (right). The exact value q(a) of the true potential at the right end of the interval is: unknown and not estimated (first row); estimated with [q] (second row); given (third row).
![Figure 7. The comparison analytical – numerical Cauchy datum Kx(a,t) corresponding to q(x) of Example 1. The Robin boundary parameter H=2 is: estimated from the two spectra (left) – given (right). The exact value q(a) of the true potential at the right end of the interval is: unknown and not estimated (first row); estimated with [q] (second row); given (third row).](/cms/asset/ad367a7a-8e1d-4497-bf93-abf72d4cf4ca/gipe_a_947478_f0007_b.gif)
Figure 8. Reconstruction of q(x) in Example 1, estimated Cauchy data. Here H=2 is estimated from the two spectra; quasi-Newton (left) – Newton (right); unknown and not estimated q(a) (first row) – estimated q(a) with [q] (second row) – given q(a) (third row). Note that the two panels that were supposed to be on the first row (i.e. unknown and not estimated q(a)) are the same as the bottom two panels of Figure 5, and so they were not displayed here again.
![Figure 8. Reconstruction of q(x) in Example 1, estimated Cauchy data. Here H=2 is estimated from the two spectra; quasi-Newton (left) – Newton (right); unknown and not estimated q(a) (first row) – estimated q(a) with [q] (second row) – given q(a) (third row). Note that the two panels that were supposed to be on the first row (i.e. unknown and not estimated q(a)) are the same as the bottom two panels of Figure 5, and so they were not displayed here again.](/cms/asset/9380698a-d323-4653-a32b-9c6f579c6e32/gipe_a_947478_f0008_b.gif)
Figure 9. Reconstruction of q(x) in Example 1, estimated Cauchy data. Here H=2 is given; quasi-Newton (left) – Newton (right); unknown and not estimated q(a) (first row) – estimated q(a) with [q] (second row) – given q(a) (third row). Note that the two panels that were supposed to be on the third row (i.e. given q(a)) are similar to the top two panels of Figure 5 – see Section 6 for details – for which reason they were not displayed here.
![Figure 9. Reconstruction of q(x) in Example 1, estimated Cauchy data. Here H=2 is given; quasi-Newton (left) – Newton (right); unknown and not estimated q(a) (first row) - estimated q(a) with [q] (second row) – given q(a) (third row). Note that the two panels that were supposed to be on the third row (i.e. given q(a)) are similar to the top two panels of Figure 5 – see Section 6 for details – for which reason they were not displayed here.](/cms/asset/e71cdd27-292a-4631-9d27-13d1fbd91d96/gipe_a_947478_f0009_b.gif)
Figure 10. Reconstruction of q(x) in Example 2, estimated Cauchy data. Here H=-1/3 is estimated from the two spectra; quasi-Newton (left) – Newton (right); unknown and not estimated q(a) (first row) – estimated q(a) with [q] (second row) – given q(a) (third row). The clustering of the iterates in the Newton panels is an indication that the Newton method converges faster than the quasi-Newton method.
![Figure 10. Reconstruction of q(x) in Example 2, estimated Cauchy data. Here H=-1/3 is estimated from the two spectra; quasi-Newton (left) – Newton (right); unknown and not estimated q(a) (first row) – estimated q(a) with [q] (second row) – given q(a) (third row). The clustering of the iterates for the Newton panels is an indication that the Newton method converges faster than the quasi-Newton method.](/cms/asset/213cfe08-7e36-4b12-86b8-705fd6351964/gipe_a_947478_f0010_b.gif)
Figure 11. Reconstruction of q(x) in Example 2, estimated Cauchy data. Here H=-1/3 is given; quasi-Newton (left) – Newton (right); unknown and not estimated q(a) (first row) – estimated q(a) with [q] (second row) – given q(a) (third row). The clustering of the iterates in the Newton panels is an indication that the Newton method converges faster than the quasi-Newton method.
![Figure 11. Reconstruction of q(x) in Example 2, estimated Cauchy data. Here H=-1/3 is given; quasi-Newton (left) – Newton (right); unknown and not estimated q(a) (first row) – estimated q(a) with [q] (second row) – given q(a) (third row). The clustering of the iterates for the Newton panels is an indication that the Newton method converges faster than the quasi-Newton method.](/cms/asset/f5ef686f-c455-4ef4-bbd6-0f3ae95bb7e1/gipe_a_947478_f0011_b.gif)
Figure 12. Reconstruction of q(x) in Example 3, estimated Cauchy data. Here H=0 is estimated from the two spectra; quasi-Newton (left) – Newton (right); unknown and not estimated q(a) (first row) – estimated q(a) with [q] (second row) – given q(a) (third row). We mention that the counterpart panels for the case of given H that we obtained by running the quasi-Newton and Newton MATLAB codes are similar to those presented here in terms of plot shape, number of iterates, and errors, for which reason we have not displayed them in a new figure. This fact indicates that the Newton method works, in terms of the number of iterates and errors, equally well when the boundary parameter H is estimated as when it is given. The same is true for the quasi-Newton method.
![Figure 12. Reconstruction of q(x) in Example 3, estimated Cauchy data. Here H=0 is estimated from the two spectra; quasi-Newton (left) – Newton (right); unknown and not estimated q(a) (first row) – estimated q(a) with [q] (second row) – given q(a) (third row). We mention that the counterpart panels for the case of given H that we obtained by running the quasi-Newton and Newton MATLAB codes are similar to these presented here in terms of plot-shape, number of iterates, and errors, for which reason we have not displayed them in a new figure. This fact indicates that the Newton method works, in terms of the number of iterates and errors, equally well when the boundary parameter H is estimated, as when it is given. The same is true for the quasi-Newton method.](/cms/asset/322b47cd-f693-4db8-af27-4d866d49ddf1/gipe_a_947478_f0012_b.gif)
Example 2
Reconstruction of the more complicated potential (6.12). The impedance parameter in the Robin boundary condition is
. The Cauchy data are estimated from the first
Dirichlet eigenvalues (
’s), and the first
Dirichlet–Robin eigenvalues (
’s). The MATSLISE tolerance was set at
and
, respectively, when calculating the two types of eigenvalues. As Figures and illustrate, the quasi-Newton method requires over 60 iterates to converge, whereas the proposed Newton method requires only 6 iterates, an order of magnitude fewer.
Example 3
Reconstruction of (6.13). The impedance parameter in the Robin boundary condition is
. The Cauchy data are estimated from the first
Dirichlet eigenvalues (
’s), and the first
Dirichlet–Robin eigenvalues (
’s). The MATSLISE tolerance was set at
and
, respectively, when calculating the two types of eigenvalues. See Figure . This example was also considered in [Citation4, Citation5]. We took
,
and the first
eigenvalues of each type, in order to match the default setting of [Citation5], where
,
. Our Newton-type method recovered
in
iterates in both situations: estimated Robin boundary parameter H, and given H. The least squares functional method [Citation5] recovered
in
iterates for the case of given boundary parameters
and
. However, results are not available in [Citation5], nor in [Citation6] about the case when the boundary parameters
,
are not supplied as input data. The
-error between the last iterate and the exact (true)
is
and
for the Newton method when
is neither known nor estimated and H is estimated (upper-right panel of Figure ) and respectively when H is given (panel not included as it is similar to its counterpart for the case H estimated), and it is
for the least squares functional method when the boundary parameters
are given. Overall, the Newton method and the least squares functional method are comparable, in the sense that the number of iterates and the absolute errors have the same order of magnitude. Within the same order of magnitude, the least squares functional method holds a slight advantage in this example.
The counterpart panels of those in Figure for the case of given H that we obtained by running the quasi-Newton and Newton MATLAB codes are similar (in plot-shape, number of iterates, errors) to the panels in Figure , and so they were not displayed. This fact indicates that the Newton (and also the quasi-Newton) method works, in terms of the number of iterates and errors, as well when the boundary parameter H is estimated as when H is given.
We note here that a variant of the least squares functional method [Citation5] is discussed in [Citation6], where, in addition to the potential function, the boundary parameters are recovered as well. However, the default setting in [Citation5]: ,
, where
are the parameters in the two pairs of boundary conditions (6.14) and, respectively, (6.15), does not fall into the setting of [Citation6], where the two pairs of boundary conditions considered are (6.16) and, respectively, (6.17). In the least squares functional method, the reconstruction of (6.13) from the first
eigenvalues of (6.16)-type with
, and the first
eigenvalues of (6.17)-type with
,
, along with the reconstruction of the boundary parameters
,
,
was achieved in
iterates with an accuracy of
in the potential function. As mentioned before, the Newton method needs only four iterates to achieve an accuracy of
, although the reconstructed boundary parameters are of a different type. Also, to increase the speed of convergence in the method of [Citation6], the trial potential must be reset to zero several times and the process restarted. It is concluded in [Citation6] that determining at which iterates the trial potential must be reset to zero is an open theoretical matter.
In all of the above examples, we required that both quasi-Newton and Newton methods stop when a relative error smaller than is attained between two successive iterates. Also, for both methods, we used a partition of
with
equally spaced nodes in
-direction, and therefore
equally spaced nodes in
-direction.
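The stopping rule just described, halting when the relative change between two successive iterates is small enough, can be sketched on a scalar stand-in for the non-linear map. This is a hedged Python illustration, not the paper's MATLAB implementation; the map F, its derivative dF, and the tolerance rtol are placeholders:

```python
def newton_with_relative_stop(F, dF, x0, rtol=1e-6, max_iter=50):
    """Generic Newton iteration that stops when the relative change between
    two successive iterates drops below rtol, mirroring the stopping rule
    used for both the quasi-Newton and Newton codes. F and dF are scalar
    stand-ins for the non-linear map and its Frechet derivative."""
    x = x0
    for k in range(1, max_iter + 1):
        x_new = x - F(x) / dF(x)
        # relative-error test between two successive iterates
        if abs(x_new - x) <= rtol * max(abs(x_new), 1e-30):
            return x_new, k
        x = x_new
    return x, max_iter
```

For F(x) = x² - 2 and x0 = 1, the iteration reaches √2 in a handful of steps; this quadratic convergence is what motivates preferring the Newton over the quasi-Newton method in the examples above.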
7 Summary and conclusions
This paper investigates a different method for solving inverse Sturm-Liouville problems by two spectra. The method is Newton-type, in the sense that a non-linear equation that involves the solution of the inverse spectral problem is solved numerically by the Newton iterative method. The non-linear map in question is a functional that maps
into the pair of functions
, where
is the solution in a triangular domain to the Goursat problem that corresponds to
. (
is also known as the Gelfand-Levitan-Marchenko kernel that corresponds to
.) Each Newton iterate requires that a Goursat-Cauchy boundary value problem be solved numerically, the details of which can be found in [Citation11]. The Fréchet differentiability of the non-linear map we used was proven rigorously in Section 4.
We illustrated with examples how the Newton method works in the case of the two spectra being two sequences of eigenvalues: Dirichlet type and Dirichlet–Robin type. The advantages of this method are that it recovers the true potential function of the canonical Sturm-Liouville equation with about the same accuracy as the quasi-Newton and the least squares methods, in fewer iterates than the quasi-Newton method, and in a number of iterates comparable to that of the least squares method when the boundary parameter is given as an input datum. However, the Newton method recovers the true potential function with about the same efficiency (i.e. a low number of iterates and approximately the same accuracy) in both situations, given Robin boundary parameter H and H estimated from the available spectra, whereas the least squares functional method needs a large number of iterates (hundreds or even thousands) in the more realistic case when the boundary parameters are not given and must be recovered. For the Newton method, the mean of the true potential function was estimated from the eigenvalues in both situations: given H, and estimated H.
Acknowledgements
I would like to thank Dr. Paul E. Sacks of Iowa State University for helpful discussions on the general topic of numerical reconstructions for inverse Sturm-Liouville problems.
References
- Drignei MC. Numerical reconstruction in a three-spectra inverse Sturm-Liouville problem with mixed boundary conditions. Inverse Probl. Sci. Eng. 2013;21:1368–1391.
- Drignei MC. Constructibility of an L2ℝ(0, a) solution to an inverse Sturm-Liouville problem using three Dirichlet spectra. Inverse Probl. 2010;26:025003. 29 p. doi:10.1088/0266-5611/26/2/025003.
- Drignei MC. Inverse Sturm-Liouville problems using multiple spectra [PhD thesis]. Ames (IA): Iowa State University; 2008.
- Rundell W, Sacks PE. Reconstruction techniques for classical inverse Sturm-Liouville problems. Math. Comput. 1992;58:161–183.
- Roehrl N. A least squares functional for solving inverse Sturm-Liouville problems. Inverse Probl. 2005;21:2009–2017.
- Roehrl N. Recovering boundary conditions in inverse Sturm-Liouville problems. In: Recent advances in differential equations and mathematical physics. Vol. 412, Contemporary mathematics. Providence (RI): Amer. Math. Soc; 2006. p. 263–270. arXiv:math.NA/0601031.
- Kirsch A. An introduction to the mathematical theory of inverse problems. Vol. 120, Applied mathematical sciences. New York (NY): Springer-Verlag; 1996.
- Dahlberg BEJ, Trubowitz E. The inverse Sturm-Liouville problem III. Commun. Pure Appl. Math. 1984;37:255–267.
- Isaacson EL, McKean HP, Trubowitz E. The inverse Sturm-Liouville problem II. Commun. Pure Appl. Math. 1984;37:1–11.
- Pöschel J, Trubowitz E. Inverse spectral theory. Vol. 130, Pure and applied mathematics. Boston (MA): Academic Press; 1987.
- Drignei MC. A numerical method for solving a Goursat-Cauchy boundary value problem. Appl. Math. Comput. 2013;220:123–141.
- Stakgold I. Green’s functions and boundary value problems. Pure and applied mathematics. 2nd ed. New York (NY): Wiley; 1998.
- Royden HL. Real analysis. 3rd ed. New York (NY): Macmillan Publishing Company; 1988.
- MATSLISE. Available from: http://users.ugent.be/vledoux/MATSLISE/.
Appendix 1
We conclude with the following result used in Section 4.
Lemma 1
Let (i.e.
is a bounded linear operator on the Banach space
over
or
) be such that
, for some
. Then the series
converges in
,
is invertible, and
(A1)
Proof
Let be a real number such that
(possible since
). Then
. Let us denote by
(A2) the partial sum and, respectively, the remainder sum of order
(
) of the series
. Choose
and fix it. By the remainder theorem we can write
, for some unique
and
. Then,
(A3). From (A3), using
;
, for
, and
:
(A4). If we let
in (A4), so
because
, we obtain that
. Thus, we proved that the series
is convergent in
. Let
It follows by (A2) and
that,
(A5). Letting
, so
since
in (A5), we obtain that
. This implies that
. Similarly, one can show that
. Hence we proved that
is invertible and
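As a numerical sanity check of the lemma in the simplest finite-dimensional setting (X = R² with the operator a 2x2 matrix of norm less than one), the partial sums of the Neumann series (A1) can be compared against the closed-form inverse of I - A. The following is a Python sketch, not part of the paper:

```python
def mat_mul(X, Y):
    """Product of two 2x2 matrices stored as nested lists."""
    return [[X[0][0] * Y[0][0] + X[0][1] * Y[1][0],
             X[0][0] * Y[0][1] + X[0][1] * Y[1][1]],
            [X[1][0] * Y[0][0] + X[1][1] * Y[1][0],
             X[1][0] * Y[0][1] + X[1][1] * Y[1][1]]]

def neumann_partial_sum(A, n_terms):
    """Partial sum I + A + A**2 + ... + A**n_terms of the Neumann series."""
    S = [[1.0, 0.0], [0.0, 1.0]]  # running sum, starts at A**0 = I
    P = [[1.0, 0.0], [0.0, 1.0]]  # running power of A
    for _ in range(n_terms):
        P = mat_mul(P, A)
        for i in range(2):
            for j in range(2):
                S[i][j] += P[i][j]
    return S

def inv_2x2(M):
    """Closed-form inverse of a 2x2 matrix."""
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [[M[1][1] / det, -M[0][1] / det],
            [-M[1][0] / det, M[0][0] / det]]
```

For A = [[0.2, 0.1], [0.0, 0.3]], whose norm is well below one, some eighty terms of the series agree with the inverse of I - A to at least nine decimal places, as the geometric bound in the proof predicts.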