
Constructive solution of the inverse spectral problem for the matrix Sturm–Liouville operator

Pages 1307-1330 | Received 11 Sep 2019, Accepted 23 Dec 2019, Published online: 24 Mar 2020

Abstract

An inverse spectral problem is studied for the matrix Sturm–Liouville operator on a finite interval with the general self-adjoint boundary condition. We obtain a constructive solution of the considered inverse problem, based on the method of spectral mappings. The nonlinear inverse problem is reduced to a linear equation in a special Banach space of infinite matrix sequences. In addition, we apply our results to the Sturm–Liouville operator on a star-shaped graph.


1. Introduction

The main aim of this paper is to provide a constructive solution of the inverse spectral problem for the matrix Sturm–Liouville operator with the general self-adjoint boundary condition. The operator under consideration corresponds to the following eigenvalue problem $L = L(Q(x), T, H)$:
$$-Y'' + Q(x) Y = \lambda Y, \quad x \in (0, \pi), \tag{1}$$
$$Y(0) = 0, \qquad V(Y) := T \left( Y'(\pi) - H Y(\pi) \right) - T^{\perp} Y(\pi) = 0. \tag{2}$$
Here, $Y(x) = [y_j(x)]_{j=1}^m$ is an $m$-element vector function; $Q(x) = [q_{jk}(x)]_{j,k=1}^m$ is an $m \times m$ matrix function, called the potential, $q_{jk} \in L_2(0, \pi)$; and $\lambda$ is the spectral parameter. Denote by $\mathbb{C}^m$ and $\mathbb{C}^{m \times m}$ the spaces of complex-valued $m$-element column vectors and $m \times m$ matrices, respectively. It is supposed that $T \in \mathbb{C}^{m \times m}$ is an orthogonal projector in $\mathbb{C}^m$, $T^{\perp} = I_m - T$, where $I_m \in \mathbb{C}^{m \times m}$ is the unit matrix, and $H = THT$. The matrices $Q(x)$ and $H$ are assumed to be Hermitian, i.e. $Q(x) = Q^{\dagger}(x)$ a.e. on $(0, \pi)$ and $H = H^{\dagger}$, where the symbol $\dagger$ denotes the conjugate transpose. Under these restrictions, the boundary value problem $L$ is self-adjoint.
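To fix the objects appearing in (1)–(2) concretely, the following numpy sketch (with illustrative matrices of our own choosing, not taken from the paper) builds an orthogonal projector $T$ of rank $p = 2$ in $\mathbb{C}^3$, the complementary projector $T^{\perp} = I_m - T$, and a Hermitian $H$ with $H = THT$, and verifies the stated constraints:

```python
import numpy as np

m = 3
rng = np.random.default_rng(0)
# Illustrative orthogonal projector T onto a 2-dimensional subspace:
# T = U U^dagger for an orthonormal basis U of the subspace.
A = rng.standard_normal((m, 2)) + 1j * rng.standard_normal((m, 2))
U, _ = np.linalg.qr(A)            # orthonormal basis of a 2-dim subspace
T = U @ U.conj().T                # orthogonal projector, rank 2
T_perp = np.eye(m) - T            # complementary projector

# An arbitrary Hermitian matrix, compressed so that H = THT holds.
B = rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))
H = T @ (B + B.conj().T) @ T

assert np.allclose(T @ T, T)                      # T is a projector
assert np.allclose(T, T.conj().T)                 # ... an orthogonal one
assert np.allclose(T @ T_perp, np.zeros((m, m)))  # mutually orthogonal
assert np.allclose(H, H.conj().T)                 # H is Hermitian
assert np.allclose(H, T @ H @ T)                  # H = THT
print("projector and H satisfy the constraints of (2)")
```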

We have imposed the boundary conditions (2), since the problem (1)–(2) generalizes eigenvalue problems for Sturm–Liouville operators on a star-shaped graph (see, e.g. [Citation1,Citation2]). Differential operators on geometrical graphs, also called quantum graphs, have applications in mechanics, organic chemistry, mesoscopic physics, nanotechnology, the theory of waveguides and other branches of science and engineering (see Refs. [Citation3–7] and references therein).

Inverse problems of spectral analysis consist in the reconstruction of operators from their spectral information. The most complete results in inverse problem theory have been obtained for scalar Sturm–Liouville operators $\ell y = -y'' + q(x) y$ (see the monographs [Citation8–11]). Analysis of an inverse spectral problem usually includes the following steps:

  1. Uniqueness theorem.

  2. Constructive solution.

  3. Necessary and sufficient conditions of solvability.

  4. Local solvability and stability.

  5. Numerical methods.

Uniqueness of solution is in most cases the simplest issue in inverse problem theory. A constructive solution usually means the reduction of a nonlinear inverse problem to a linear equation in a Banach space. In particular, the Gelfand–Levitan method [Citation9] reduces inverse problems to Fredholm integral equations of the second kind. In the method of spectral mappings [Citation11,Citation12], the main role is played by a linear equation in a space of infinite bounded sequences. Relying on these constructive methods, necessary and sufficient conditions of solvability have been obtained, local solvability and stability have been proved, and numerical techniques for solution [Citation13,Citation14] have been developed for various classes of inverse spectral problems. We also have to mention the historically first method of Borg [Citation11,Citation15]. Initially, this method was local. Later, it was developed by Pöschel and Trubowitz [Citation10] and used for investigating the global solvability of inverse problems.

Matrix Sturm–Liouville operators in the form $\ell Y = -Y'' + Q(x) Y$, where $Q(x)$ is a matrix function, have proved to be more difficult to investigate. The main difficulties are caused by the complicated structure of the spectral characteristics. Uniqueness issues of inverse problem theory for matrix Sturm–Liouville operators on finite intervals have been studied in Refs. [Citation16–21]. In Refs. [Citation22,Citation23], a constructive solution, based on the method of spectral mappings, has been developed for the inverse problem for Equation (1) under the Robin boundary conditions
$$Y'(0) - h Y(0) = 0, \qquad Y'(\pi) + H Y(\pi) = 0, \tag{3}$$
where $h, H \in \mathbb{C}^{m \times m}$, $h = h^{\dagger}$, and $H = H^{\dagger}$. Chelkak and Korotyaev [Citation24] have given a characterization of the spectral data for the problem (1), (3) (in other words, necessary and sufficient conditions for the inverse problem solvability) in the case of an asymptotically simple spectrum. In Refs. [Citation23,Citation25], spectral data characterization has been obtained in the general case, with no restrictions on the spectrum, for self-adjoint and non-self-adjoint eigenvalue problems in the form (1), (3). Analogous results for the Dirichlet boundary conditions $Y(0) = Y(\pi) = 0$ are provided in Ref. [Citation26]. Mykytyuk and Trush [Citation27] have given spectral data characterization for the matrix Sturm–Liouville operator with a singular potential from the class $W_2^{-1}$.

There are significantly fewer results on inverse matrix Sturm–Liouville problems with the general self-adjoint boundary conditions in the form (2). In Ref. [Citation21], several uniqueness theorems have been proved for such inverse problems on a finite interval. Harmer [Citation28,Citation29] has studied inverse scattering on the half-line for the matrix Sturm–Liouville operator with the general self-adjoint boundary condition at the origin. However, we find inverse problems for matrix Sturm–Liouville operators on the half-line [Citation28–32] and on the line [Citation33–37] to be, in some sense, easier for investigation than inverse problems on a finite interval. Namely, the spectrum of a matrix Sturm–Liouville operator on a finite interval can contain infinitely many groups of multiple or asymptotically multiple eigenvalues, while operators on infinite domains usually have a bounded set of eigenvalues.

The main goal of this paper is to develop a constructive algorithm for solving an inverse spectral problem for the matrix Sturm–Liouville operator (1)–(2). In future research, we plan to use this algorithm to obtain a characterization of the spectral data and to investigate local solvability and stability of the considered inverse problem. As far as we know, these issues have not been studied before for the operator in the form (1)–(2). One can also develop numerical methods based on our algorithm. In addition, we intend to apply our results to inverse problems for differential operators on graphs [Citation38,Citation39].

Let us define the spectral data used for the reconstruction of the considered operator. Denote $p := \operatorname{rank}(T)$ and assume that $1 \le p < m$. Then, $\operatorname{rank}(T^{\perp}) = m - p$. In Ref. [Citation2], asymptotic properties of the spectral characteristics of the problem (1)–(2) have been studied. In particular, it has been proved that the spectrum of $L$ is a countable set of real eigenvalues $\{\lambda_{nk}\}_{n \in \mathbb{N},\, k = \overline{1,m}}$, such that
$$\sqrt{\lambda_{nk}} = n - \tfrac{1}{2} + O(n^{-1}), \quad k = \overline{1,p}, \qquad \sqrt{\lambda_{nk}} = n + O(n^{-1}), \quad k = \overline{p+1,m}, \quad n \in \mathbb{N}.$$
The more detailed eigenvalue asymptotics are provided in Section 2 of our paper. Note that multiple eigenvalues are possible, and they occur in the sequence $\{\lambda_{nk}\}_{n \in \mathbb{N},\, k = \overline{1,m}}$ multiple times according to their multiplicities. One can also assume that $\lambda_{n_1,k_1} \le \lambda_{n_2,k_2}$ if $(n_1,k_1) < (n_2,k_2)$, i.e. $n_1 < n_2$ or $n_1 = n_2$, $k_1 < k_2$.

Let $\Phi(x, \lambda)$ be the $m \times m$ matrix solution of Equation (1), satisfying the boundary conditions $\Phi(0, \lambda) = I_m$, $V(\Phi) = 0$. Define $M(\lambda) := \Phi'(0, \lambda)$. The matrix functions $\Phi(x, \lambda)$ and $M(\lambda)$ are called the Weyl solution and the Weyl matrix of $L$, respectively. The notion of the Weyl matrix generalizes the notion of the Weyl function for scalar Sturm–Liouville operators. Weyl functions and their generalizations play an important role in inverse problem theory for various classes of differential operators (see Refs. [Citation8,Citation11,Citation22,Citation25]). It can easily be shown that $M(\lambda)$ is a meromorphic matrix function. All the singularities of $M(\lambda)$ are simple poles, which coincide with the eigenvalues $\{\lambda_{nk}\}_{n \in \mathbb{N},\, k = \overline{1,m}}$. Define the weight matrices
$$\alpha_{nk} = \operatorname*{Res}_{\lambda = \lambda_{nk}} M(\lambda). \tag{4}$$
The collection $\{\lambda_{nk}, \alpha_{nk}\}_{n \in \mathbb{N},\, k = \overline{1,m}}$ is called the spectral data of $L$. This paper is devoted to the following inverse problem.

Inverse Problem 1.1

Given the spectral data $\{\lambda_{nk}, \alpha_{nk}\}_{n \in \mathbb{N},\, k = \overline{1,m}}$, recover the problem $L$, i.e. find the potential $Q(x)$ and the matrices $T$, $H$.

The statement of Inverse Problem 1.1 generalizes the classical statement by Marchenko (see Refs. [Citation8,Citation11]). Denote by $\{\lambda_n\}_{n \ge 1}$ the eigenvalues of the boundary value problem $-y'' + q(x) y = \lambda y$, $y(0) = y(\pi) = 0$, and by $\{y_n(x)\}_{n \ge 1}$ the corresponding eigenfunctions, normalized by the condition $y_n'(0) = 1$. The inverse problem by Marchenko consists in recovering the potential $q(x)$ from the spectral data $\{\lambda_n, \alpha_n\}_{n \ge 1}$, where $\alpha_n := \int_0^{\pi} y_n^2(x)\,dx$ are the so-called weight numbers or norming constants. Spectral data similar to $\{\lambda_{nk}, \alpha_{nk}\}_{n \in \mathbb{N},\, k = \overline{1,m}}$ have been used for the reconstruction of matrix Sturm–Liouville operators in Refs. [Citation22–25,Citation27] and other papers.
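As a quick sanity check of Marchenko's weight numbers in the simplest case $q \equiv 0$: the Dirichlet eigenfunctions normalized by $y_n'(0) = 1$ are $y_n(x) = \sin(nx)/n$, so $\alpha_n = \int_0^{\pi} \sin^2(nx)/n^2\,dx = \pi/(2n^2)$. A short numerical confirmation by trapezoidal quadrature:

```python
import numpy as np

x = np.linspace(0.0, np.pi, 20001)
for n in range(1, 6):
    y = np.sin(n * x) / n                       # y_n(0) = 0, y_n'(0) = 1
    f = y ** 2
    # trapezoidal rule for alpha_n = integral of y_n^2 over (0, pi)
    alpha = np.sum((f[:-1] + f[1:]) / 2 * np.diff(x))
    assert abs(alpha - np.pi / (2 * n ** 2)) < 1e-6
print("alpha_n = pi/(2 n^2) for q = 0, n = 1..5")
```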

In order to solve Inverse Problem 1.1, we develop the ideas of the method of spectral mappings [Citation11,Citation12], in particular, of its modification for matrix inverse Sturm–Liouville problems [Citation22,Citation23,Citation25,Citation40]. This method is based on the contour integration in the complex plane of the spectral parameter, so it is convenient for working with multiple and asymptotically multiple eigenvalues. Our key idea is to group the eigenvalues by asymptotics and to use the sums of the weight matrices for each group. That allows us to construct a special Banach space of infinite bounded matrix sequences and to reduce Inverse Problem 1.1 to a linear equation in this Banach space.

The paper is organized as follows. In Section 2, we provide preliminaries. In Section 3, the main equation of Inverse Problem 1.1 is derived, its unique solvability is proved, and a constructive algorithm for solving the inverse problem is developed. In Section 3, we only outline the main idea of our method, while the technical details of the proofs are provided in Section 4. In Section 5, we apply our results for the matrix Sturm–Liouville problem in the form (1)–(2) to Sturm–Liouville operators on a graph. In Section 6, our algorithm for solving Inverse Problem 1.1 is illustrated by a numerical example.

2. Preliminaries

In this section, we provide asymptotic formulas for the eigenvalues and the weight matrices, obtained in Ref. [Citation2]. We also state the uniqueness theorem for the solution of Inverse Problem 1.1.

First, we need some notation. Define the matrix
$$\Omega := \frac{1}{2} \int_0^{\pi} Q(x)\,dx, \tag{5}$$
and the polynomials
$$P_1(z) = z^{p - m} \det\left( z I_m - T (\Omega - H) T \right), \qquad P_2(z) = z^{-p} \det\left( z I_m - T^{\perp} \Omega T^{\perp} \right).$$
One can easily show that $P_1(z)$ and $P_2(z)$ are polynomials of the degrees $p$ and $(m - p)$, respectively. Note that the matrices $T (\Omega - H) T$ and $T^{\perp} \Omega T^{\perp}$ are Hermitian. Consequently, they have real eigenvalues, part of which coincide with the roots of $P_j(z)$, $j = 1, 2$. Denote the roots of $P_1(z)$ by $\{z_k\}_{k=1}^p$ and the roots of $P_2(z)$ by $\{z_k\}_{k=p+1}^m$, counted with multiplicities and in nondecreasing order: $z_k \le z_{k+1}$ for $k = \overline{1, p-1}$ and $k = \overline{p+1, m-1}$.
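Since $T(\Omega - H)T$ vanishes on $\operatorname{ran}(T^{\perp})$, the roots of $P_1$ are exactly the eigenvalues of the compression of $\Omega - H$ to $\operatorname{ran}(T)$, and similarly for $P_2$; this gives a determinant-free way to compute the $z_k$. A numpy sketch with illustrative (real symmetric) data, not taken from any actual problem $L$:

```python
import numpy as np

m, p = 3, 2
rng = np.random.default_rng(1)

# Illustrative data: a rank-p orthogonal projector T, symmetric Omega,
# and a symmetric H with H = THT.
U, _ = np.linalg.qr(rng.standard_normal((m, p)))      # basis of ran(T)
T = U @ U.T
T_perp = np.eye(m) - T
Omega = rng.standard_normal((m, m)); Omega = (Omega + Omega.T) / 2
H = 0.3 * T @ Omega @ T                               # satisfies H = THT

# Roots of P1: eigenvalues of (Omega - H) compressed to ran(T).
z_first = np.sort(np.linalg.eigvalsh(U.T @ (Omega - H) @ U))
# Roots of P2: eigenvalues of Omega compressed to ran(T_perp).
W, _ = np.linalg.qr(T_perp @ rng.standard_normal((m, m - p)))
z_second = np.sort(np.linalg.eigvalsh(W.T @ Omega @ W))

# Each computed z is indeed a root of the corresponding determinant.
for z in z_first:
    assert abs(np.linalg.det(z * np.eye(m) - T @ (Omega - H) @ T)) < 1e-8
for z in z_second:
    assert abs(np.linalg.det(z * np.eye(m) - T_perp @ Omega @ T_perp)) < 1e-8
print("z_1..z_p:", z_first, " z_{p+1}..z_m:", z_second)
```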

Denote $\rho_{nk} := \sqrt{\lambda_{nk}}$, $n \in \mathbb{N}$, $k = \overline{1,m}$. Without loss of generality, we assume that $\lambda_{nk} \ge 0$. This condition can be achieved by a shift of the spectrum. Then, all the numbers $\rho_{nk}$ are real.

Below we use the matrix norm in $\mathbb{C}^{m \times m}$ induced by the Euclidean norm in $\mathbb{C}^m$, i.e. $\|A\| = \sqrt{\lambda_{\max}(A^{\dagger} A)}$, where $\lambda_{\max}$ is the maximal eigenvalue of a matrix. The symbol $C$ denotes various constants. The notation $\{\varkappa_n\}$ is used for various sequences from $l_2$. The notation $\{K_n\}$ is used for various matrix sequences such that $\{\|K_n\|\} \in l_2$.

The assertion of Ref. [Citation2, Theorem 2.1] implies the following asymptotics for the eigenvalues.

Proposition 2.1

The eigenvalues $\{\lambda_{nk}\}_{n \in \mathbb{N},\, k = \overline{1,m}}$ satisfy the asymptotic relations
$$\rho_{nk} = n - \frac{1}{2} + \frac{z_k}{\pi (n - 1/2)} + \frac{\varkappa_n}{n}, \quad k = \overline{1,p}, \qquad \rho_{nk} = n + \frac{z_k}{\pi n} + \frac{\varkappa_n}{n}, \quad k = \overline{p+1,m}, \quad n \in \mathbb{N} \tag{6}$$
(the sequence $\{\varkappa_n\}$ may be different for different $k$).

In order to provide asymptotics for the weight matrices $\{\alpha_{nk}\}$, we need additional notation. In view of the definition (4), if $\lambda_{n_1,k_1} = \lambda_{n_2,k_2}$, then $\alpha_{n_1,k_1} = \alpha_{n_2,k_2}$. Further we do not need to count such equal weight matrices multiple times. Therefore, for every group of multiple eigenvalues $\lambda_{n_1,k_1} = \lambda_{n_2,k_2} = \dots = \lambda_{n_l,k_l}$, $(n_1,k_1) < (n_2,k_2) < \dots < (n_l,k_l)$, we define $\alpha'_{n_1,k_1} = \alpha_{n_1,k_1}$, $\alpha'_{n_j,k_j} = 0$, $j = \overline{2,l}$. It is supposed that there are exactly $l$ eigenvalues among $\{\lambda_{nk}\}_{n \in \mathbb{N},\, k = \overline{1,m}}$ equal to $\lambda_{n_1,k_1}$. Define the sums
$$\alpha_n^{(s)} = \sum_{\substack{k = \overline{1,p} \\ z_k = z_s}} \alpha'_{nk}, \quad s = \overline{1,p}, \qquad \alpha_n^{(s)} = \sum_{\substack{k = \overline{p+1,m} \\ z_k = z_s}} \alpha'_{nk}, \quad s = \overline{p+1,m},$$
$$\alpha_n^{\mathrm{I}} = \sum_{k=1}^{p} \alpha'_{nk}, \qquad \alpha_n^{\mathrm{II}} = \sum_{k=p+1}^{m} \alpha'_{nk}.$$

The following proposition combines the results of Theorems 3.1 and 3.4 from Ref. [Citation2].

Proposition 2.2

The following asymptotic relations are valid:
$$\alpha_n^{\mathrm{I}} = \frac{2 (n - 1/2)^2}{\pi} \left( T + \frac{K_n}{n} \right), \qquad \alpha_n^{\mathrm{II}} = \frac{2 n^2}{\pi} \left( T^{\perp} + \frac{K_n}{n} \right), \tag{7}$$
$$\alpha_n^{(s)} = \frac{2 n^2}{\pi} \left( A^{(s)} + K_n \right), \quad s = \overline{1,m}, \tag{8}$$
where $n \in \mathbb{N}$ and the matrices $A^{(s)} \in \mathbb{C}^{m \times m}$, $s = \overline{1,m}$, are uniquely specified by $T$, $\Omega$ and $H$.

In Proposition 2.2, explicit formulas for the matrices $\{A^{(s)}\}_{s=1}^m$ are not provided, since they are unnecessary for the purposes of this paper. However, by virtue of Bondarenko [Citation2, Corollary 3.3], the following relation holds:
$$\sum_{s \in S} z_s A^{(s)} = T (\Omega - H) T + T^{\perp} \Omega T^{\perp} =: \Theta, \tag{9}$$
where $S := \{ s = \overline{1,p} : s = 1 \text{ or } z_s \ne z_{s-1} \} \cup \{ s = \overline{p+1,m} : s = p+1 \text{ or } z_s \ne z_{s-1} \}$. By virtue of Xu [Citation21, Theorem 3.1], the weight matrices are Hermitian and non-negative definite: $\alpha_{nk} = \alpha_{nk}^{\dagger} \ge 0$, $n \in \mathbb{N}$, $k = \overline{1,m}$. This fact together with Proposition 2.2 yields the estimate
$$\|\alpha_{nk}\| = O(n^2), \quad n \in \mathbb{N}, \quad k = \overline{1,m}. \tag{10}$$
Along with the problem $L$, we consider the problem $\tilde{L} = L(\tilde{Q}(x), \tilde{T}, \tilde{H})$ of the same form (1)–(2) as $L$, but with different coefficients $\tilde{Q}(x)$, $\tilde{T}$ and $\tilde{H}$. We agree that if a symbol $\gamma$ denotes an object related to $L$, the symbol $\tilde{\gamma}$ with tilde denotes the similar object related to $\tilde{L}$.

Proposition 2.3

Let two problems $L$ and $\tilde{L}$ be such that $T = \tilde{T}$ and $\Theta = \tilde{\Theta}$. Then,
$$\rho_{nk} = \tilde{\rho}_{nk} + \frac{\varkappa_n}{n}, \quad n \in \mathbb{N}, \quad k = \overline{1,m}, \quad \{\varkappa_n\} \in l_2,$$
$$\alpha_n^{\mathrm{I}} = \tilde{\alpha}_n^{\mathrm{I}} + n K_n, \qquad \alpha_n^{\mathrm{II}} = \tilde{\alpha}_n^{\mathrm{II}} + n K_n, \qquad \alpha_n^{(s)} = \tilde{\alpha}_n^{(s)} + n^2 K_n, \quad s = \overline{1,m}, \quad n \in \mathbb{N}.$$

The next proposition is the uniqueness theorem for Inverse Problem 1.1.

Proposition 2.4

Suppose that $\lambda_{nk} = \tilde{\lambda}_{nk}$ and $\alpha_{nk} = \tilde{\alpha}_{nk}$ for all $n \in \mathbb{N}$, $k = \overline{1,m}$. Then, $Q(x) = \tilde{Q}(x)$ for a.e. $x \in (0, \pi)$, $T = \tilde{T}$, $H = \tilde{H}$.

Proposition 2.4 can be derived from the uniqueness results of Xu [Citation21] for the matrix Sturm–Liouville operator with the two boundary conditions in the general self-adjoint form. Another way to prove Proposition 2.4 is presented in Section 3. There we show that every solution of Inverse Problem 1.1 corresponds to a solution of the main equation (22). Furthermore, the unique solvability of the main equation is proved, which implies Proposition 2.4.

3. Inverse problem solution

In this section, a constructive solution of Inverse Problem 1.1 is obtained. We start with the choice of a model problem $\tilde{L}$ of the same form as $L$, but with different coefficients. Then, Inverse Problem 1.1 is reduced to the linear equation (12) by using contour integration in the $\lambda$-plane (see Lemma 3.2). Next, we group the eigenvalues by asymptotics and introduce a special Banach space $B$. It is shown that the linear equation (12) can be represented as Equation (22) in $B$. We then prove the unique solvability of the main equation (22) (see Theorem 3.4) and use its solution for constructing $Q(x)$ and $H$ (see Theorem 3.6). Finally, we arrive at Algorithm 3.7 for solving Inverse Problem 1.1. The results in this section are presented schematically. We provide auxiliary lemmas and the proofs in the next section.

Let the spectral data $\{\lambda_{nk}, \alpha_{nk}\}_{n \in \mathbb{N},\, k = \overline{1,m}}$ of some unknown boundary value problem $L = L(Q(x), T, H)$ be given. Our goal is to construct the solution of Inverse Problem 1.1, i.e. to find $Q(x)$, $T$ and $H$.

Further, we need a model problem $\tilde{L} = L(\tilde{Q}(x), \tilde{T}, \tilde{H})$ satisfying the conditions $T = \tilde{T}$ and $\Theta = \tilde{\Theta}$. One can construct such a model problem using the following algorithm.

Algorithm 3.1

Let the spectral data $\{\lambda_{nk}, \alpha_{nk}\}_{n \in \mathbb{N},\, k = \overline{1,m}}$ be given. We have to construct the model problem $\tilde{L}$.

  1. Find $p$, relying on the eigenvalue asymptotics (6).

  2. Construct the matrices $\{\alpha'_{nk}\}_{n \in \mathbb{N},\, k = \overline{1,m}}$, $\{\alpha_n^{(s)}\}_{n \in \mathbb{N},\, s = \overline{1,m}}$ and $\{\alpha_n^{\mathrm{I}}\}_{n \in \mathbb{N}}$, by their definitions in Section 2.

  3. Construct the matrices $T := \lim_{n \to \infty} \frac{\pi}{2 n^2} \alpha_n^{\mathrm{I}}$, $T^{\perp} := I_m - T$.

  4. Find the numbers
$$z_k = \lim_{n \to \infty} \left( \sqrt{\lambda_{nk}} - \left( n - \tfrac{1}{2} \right) \right) \pi \left( n - \tfrac{1}{2} \right), \quad k = \overline{1,p}, \qquad z_k = \lim_{n \to \infty} \left( \sqrt{\lambda_{nk}} - n \right) \pi n, \quad k = \overline{p+1,m}.$$

  5. Construct the matrices $A^{(s)} = \lim_{n \to \infty} \frac{\pi}{2 n^2} \alpha_n^{(s)}$, $s \in S$.

  6. Calculate $\Theta := \sum_{s \in S} z_s A^{(s)}$.

  7. Define $\tilde{Q}(x) := \frac{2}{\pi} \Theta$, $x \in (0, \pi)$, $\tilde{T} := T$, $\tilde{H} := 0$, $\tilde{L} := L(\tilde{Q}(x), \tilde{T}, \tilde{H})$.
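The limit computations in steps 3 and 4 can be sanity-checked on synthetic data generated directly from the leading terms of the asymptotics (6) and (7); the projector $T$ and the numbers $z_k$ below are illustrative choices, not spectral data of an actual problem. For large $n$, the finite-$n$ expressions are already close to their limits:

```python
import numpy as np

T = np.diag([1.0, 0.0])     # illustrative rank-1 projector (m = 2, p = 1)
z1, z2 = 0.7, -0.4          # illustrative z_1, z_2

n = 10 ** 4                 # large index emulating n -> infinity
# Synthetic data built from the leading terms of (6) and (7):
rho_n1 = (n - 0.5) + z1 / (np.pi * (n - 0.5))    # first spectral family
rho_n2 = n + z2 / (np.pi * n)                    # second spectral family
alpha_I = 2 * (n - 0.5) ** 2 / np.pi * T         # leading term of alpha_n^I

# Step 3: T = lim (pi / (2 n^2)) alpha_n^I.
T_rec = np.pi / (2 * n ** 2) * alpha_I
# Step 4: z_k recovered from the shifted, rescaled square roots.
z1_rec = (rho_n1 - (n - 0.5)) * np.pi * (n - 0.5)
z2_rec = (rho_n2 - n) * np.pi * n

print(np.round(T_rec, 4), round(z1_rec, 4), round(z2_rec, 4))
```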

The relation (9) guarantees that $\Theta = \tilde{\Theta}$. Consequently, the asymptotic relations of Proposition 2.3 hold for $L$ and $\tilde{L}$.

Let us proceed to the derivation of the main equation. Denote by $S(x, \lambda)$ the matrix solution of Equation (1) under the initial conditions $S(0, \lambda) = 0$, $S'(0, \lambda) = I_m$. The following notation will be used for the matrix Wronskian: $\langle Z, Y \rangle = Z Y' - Z' Y$. Define
$$D(x, \lambda, \mu) = \frac{\langle S^{\dagger}(x, \bar{\lambda}), S(x, \mu) \rangle}{\lambda - \mu}. \tag{11}$$
Introduce the notations
$$\lambda_{nk0} = \lambda_{nk}, \quad \lambda_{nk1} = \tilde{\lambda}_{nk}, \quad \rho_{nk0} = \rho_{nk}, \quad \rho_{nk1} = \tilde{\rho}_{nk}, \quad \alpha_{nk0} = \alpha_{nk}, \quad \alpha_{nk1} = \tilde{\alpha}_{nk}, \quad \alpha'_{nk0} = \alpha'_{nk}, \quad \alpha'_{nk1} = \tilde{\alpha}'_{nk},$$
$$S_{nk0}(x) = S(x, \lambda_{nk0}), \quad S_{nk1}(x) = S(x, \lambda_{nk1}), \quad \tilde{S}_{nk0}(x) = \tilde{S}(x, \lambda_{nk0}), \quad \tilde{S}_{nk1}(x) = \tilde{S}(x, \lambda_{nk1}).$$

Using the method of spectral mappings [Citation11,Citation12], we prove the following lemma:

Lemma 3.2

The following relations hold for $x \in [0, \pi]$, $n, r \in \mathbb{N}$, $k, q = \overline{1,m}$, $s, \eta = 0, 1$:
$$\tilde{S}_{nks}(x) = S_{nks}(x) + \sum_{l=1}^{\infty} \sum_{j=1}^{m} \left( S_{lj0}(x)\, \alpha'_{lj0}\, \tilde{D}(x, \lambda_{lj0}, \lambda_{nks}) - S_{lj1}(x)\, \alpha'_{lj1}\, \tilde{D}(x, \lambda_{lj1}, \lambda_{nks}) \right), \tag{12}$$
$$\alpha'_{rq\eta} \tilde{D}(x, \lambda_{rq\eta}, \lambda_{nks}) - \alpha'_{rq\eta} D(x, \lambda_{rq\eta}, \lambda_{nks}) = \sum_{l=1}^{\infty} \sum_{j=1}^{m} \left( \alpha'_{rq\eta} D(x, \lambda_{rq\eta}, \lambda_{lj0})\, \alpha'_{lj0}\, \tilde{D}(x, \lambda_{lj0}, \lambda_{nks}) - \alpha'_{rq\eta} D(x, \lambda_{rq\eta}, \lambda_{lj1})\, \alpha'_{lj1}\, \tilde{D}(x, \lambda_{lj1}, \lambda_{nks}) \right). \tag{13}$$

Proof.

Repeating the standard arguments of the proofs of Refs. [Citation11, Lemma 1.6.3] and [Citation22, Lemma 1], we derive the relation
$$\tilde{S}(x, \lambda) = S(x, \lambda) + \frac{1}{2 \pi i} \oint_{\gamma} S(x, \mu) \hat{M}(\mu) \tilde{D}(x, \mu, \lambda)\,d\mu, \qquad \hat{M}(\mu) := M(\mu) - \tilde{M}(\mu), \tag{14}$$
where $\gamma$ is the boundary of the region $X := \{ \lambda : \operatorname{Re} \lambda > -\delta,\ |\operatorname{Im} \lambda| < \delta \}$ with the counter-clockwise circuit, and $\delta > 0$ is a fixed number. Clearly, under our assumptions, all the eigenvalues $\{\lambda_{nk}\}_{n \in \mathbb{N},\, k = \overline{1,m}}$ and $\{\tilde{\lambda}_{nk}\}_{n \in \mathbb{N},\, k = \overline{1,m}}$ lie in $X$. The integral in (14) converges for $\lambda \in \mathbb{C} \setminus \overline{X}$ in the sense $\oint_{\gamma} := \lim_{N \to \infty} \int_{\gamma_N}$, $\gamma_N := \{ \lambda \in \gamma : |\lambda| \le (N + 1/4)^2 \}$. Calculating the integral by the residue theorem, we obtain the relation
$$\tilde{S}(x, \lambda) = S(x, \lambda) + \sum_{l=1}^{\infty} \sum_{j=1}^{m} \left( S_{lj0}(x)\, \alpha'_{lj0}\, \tilde{D}(x, \lambda_{lj0}, \lambda) - S_{lj1}(x)\, \alpha'_{lj1}\, \tilde{D}(x, \lambda_{lj1}, \lambda) \right),$$
where the series converges uniformly with respect to $x \in [0, \pi]$ and $\lambda$ on compact sets. Substituting $\lambda = \lambda_{nks}$, we arrive at (12).

Similarly, following the proofs of Refs. [Citation11, Lemma 1.6.3] and [Citation22, Theorem 4], we derive the relation
$$\tilde{D}(x, \lambda, \mu) - D(x, \lambda, \mu) = \frac{1}{2 \pi i} \oint_{\gamma} D(x, \lambda, \xi) \hat{M}(\xi) \tilde{D}(x, \xi, \mu)\,d\xi,$$
which implies (13).

For each fixed $x \in [0, \pi]$, the relation (12) can be considered as a system of linear equations with respect to $S_{nks}(x)$, $n \in \mathbb{N}$, $k = \overline{1,m}$, $s = 0, 1$. But the series in (12) converge only conditionally, in the sense $\lim_{N \to \infty} \sum_{l=1}^{N} \sum_{j=1}^{m} (\dots)$, so it is inconvenient to use (12) as a system of main equations of the inverse problem. Below we transform (12) into an equation in a specially constructed Banach space of bounded infinite sequences.

We divide the square roots of the eigenvalues $\{\rho_{nks}\}$ into collections by asymptotics. Put
$$G_1 = \{\rho_{nks}\}_{n = \overline{1, n_0},\, k = \overline{1,m},\, s = 0, 1}, \qquad G_{2j} = \{\rho_{n_0 + j, k s}\}_{k = \overline{1,p},\, s = 0, 1}, \qquad G_{2j+1} = \{\rho_{n_0 + j, k s}\}_{k = \overline{p+1,m},\, s = 0, 1}, \tag{15}$$
where $j \in \mathbb{N}$, and an integer $n_0$ is chosen so that $G_n \cap G_k = \emptyset$ for $n \ne k$. Such $n_0$ exists because of the asymptotic relations (6). Each collection $G_n$ may contain multiple elements.

Consider a collection $G = \{g_i\}_{i=1}^r$ of (possibly multiple) complex numbers. Denote by $B(G)$ the finite-dimensional space of all the matrix functions $f \colon G \to \mathbb{C}^{m \times m}$, such that $g_i = g_j$ implies $f(g_i) = f(g_j)$, with the norm
$$\|f\|_{B(G)} = \max \left\{ \max_{i = \overline{1,r}} \|f(g_i)\|,\ \max_{\substack{g_i \ne g_j \\ i, j = \overline{1,r}}} |g_i - g_j|^{-1} \|f(g_i) - f(g_j)\| \right\}. \tag{16}$$
Introduce the Banach space $B$ of infinite sequences:
$$B = \left\{ f = \{f_n\}_{n \ge 1} : f_n \in B(G_n),\ \|f\|_B := \sup_{n \ge 1} \left( n \|f_n\|_{B(G_n)} \right) < \infty \right\}. \tag{17}$$
For $x \in [0, \pi]$, we define the sequence
$$\psi(x) = \{\psi_n(x)\}_{n \ge 1}, \qquad \psi_n(x) = \{\psi_n(x, \rho_{ljs}),\ (l, j, s) \colon \rho_{ljs} \in G_n\}, \tag{18}$$
where $\psi_n(x, \rho) = S(x, \rho^2)$ for all $\rho \in G_n$. Note that $S(x, \rho^2) \sim \frac{\sin \rho x}{\rho} I_m$ as $|\rho| \to \infty$. Using Schwarz's lemma similarly to Ref. [Citation11, Section 1.6.1], we get the estimates
$$\|\psi_n(x, \rho)\| \le \frac{C}{n}, \qquad \|\psi_n(x, \rho) - \psi_n(x, \theta)\| \le \frac{C |\rho - \theta|}{n}, \quad n \in \mathbb{N}, \quad \rho, \theta \in G_n. \tag{19}$$
Hence, $\|\psi_n(x)\|_{B(G_n)} \le C/n$ for $n \in \mathbb{N}$, so $\psi(x) \in B$ for each $x \in [0, \pi]$.
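The norm (16) is the maximum of a sup-norm and a difference-quotient (Lipschitz) seminorm over the collection. A direct transcription for a finite collection (the nodes and matrices below are illustrative):

```python
import numpy as np

def bg_norm(g, f):
    """Norm (16) on B(G): g is a list of complex nodes, f a list of
    matrices, with f[i] == f[j] assumed whenever g[i] == g[j]."""
    r = len(g)
    sup = max(np.linalg.norm(f[i], 2) for i in range(r))   # spectral norms
    lip = 0.0
    for i in range(r):
        for j in range(r):
            if g[i] != g[j]:
                lip = max(lip,
                          np.linalg.norm(f[i] - f[j], 2) / abs(g[i] - g[j]))
    return max(sup, lip)

# Two close nodes: a small matrix difference still yields a large
# Lipschitz contribution, which is exactly what (16) is designed to see.
g = [1.0, 1.0001]
f = [np.eye(2), np.eye(2) * 1.001]
print(bg_norm(g, f))   # dominated by 0.001 / 0.0001 = 10
```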

For each fixed $x \in [0, \pi]$, we define the linear operator $R(x) \colon B \to B$, acting on any element $f = \{f_n\}_{n \ge 1} \in B$ in the following way:
$$(f R(x))_n = \sum_{k=1}^{\infty} f_k R_{k,n}(x), \qquad R_{k,n}(x) \colon B(G_k) \to B(G_n), \tag{20}$$
$$(f_k R_{k,n}(x))(\rho) = \sum_{(l,j) \colon \rho_{lj0}, \rho_{lj1} \in G_k} \left( f_k(\rho_{lj0})\, \alpha'_{lj0}\, D(x, \rho_{lj0}^2, \rho^2) - f_k(\rho_{lj1})\, \alpha'_{lj1}\, D(x, \rho_{lj1}^2, \rho^2) \right), \quad \rho \in G_n. \tag{21}$$

Thus, the action of the operator $R(x)$ is a multiplication of an infinite row vector $f$ of $m \times m$ matrices by an infinite matrix. In (20) and (21), we put operators to the right of their operands, in order to keep the correct order of matrix multiplication.

Theorem 3.3

The series in (20) converges in the $B(G_n)$-norm. For each fixed $x \in [0, \pi]$, the operator $R(x)$ is bounded and, moreover, compact on $B$.

The proof of Theorem 3.3 is provided in Section 4.

Define the element $\tilde{\psi}(x) \in B$ and the operator $\tilde{R}(x) \colon B \to B$ similarly to $\psi(x)$ and $R(x)$, respectively, with $\tilde{S}$, $\tilde{D}$ instead of $S$, $D$. Obviously, the relation (12) can be rewritten in the form
$$\tilde{\psi}(x) = \psi(x) \left( I + \tilde{R}(x) \right), \tag{22}$$
where $I$ is the identity operator in $B$. Clearly, the assertion of Theorem 3.3 is valid for $\tilde{R}(x)$, i.e. for each fixed $x \in [0, \pi]$, the linear operator $\tilde{R}(x)$ is compact on $B$. We call Equation (22) in the Banach space $B$ the main equation of Inverse Problem 1.1.

Theorem 3.4

For each fixed $x \in [0, \pi]$, the main equation (22) is uniquely solvable in the Banach space $B$.

Proof.

Using (13), (20) and (21), we derive the relation $(I - R(x))(I + \tilde{R}(x)) = I$, $x \in [0, \pi]$. Symmetrically, one can obtain that $(I + \tilde{R}(x))(I - R(x)) = I$, $x \in [0, \pi]$. Therefore, the operator $(I + \tilde{R}(x))^{-1}$ exists and equals $(I - R(x))$. By virtue of Theorem 3.3, the latter operator is bounded, so Equation (22) is uniquely solvable.
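The algebraic core of this proof — a two-sided inverse — can be imitated with finite matrices: if both products $(I - R)(I + \tilde{R})$ and $(I + \tilde{R})(I - R)$ equal $I$, then $I + \tilde{R}$ is invertible with inverse $I - R$. A toy finite-dimensional stand-in (random matrices, not the actual operators $R(x)$, $\tilde{R}(x)$):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 6
I = np.eye(N)
R_tilde = 0.1 * rng.standard_normal((N, N))   # toy "small" perturbation
R = I - np.linalg.inv(I + R_tilde)            # chosen so both identities hold

assert np.allclose((I - R) @ (I + R_tilde), I)   # left identity
assert np.allclose((I + R_tilde) @ (I - R), I)   # right identity
print("(I + R~)^(-1) = I - R")
```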

By using the solution $\psi(x)$ of the main equation (22), one can construct the solution of Inverse Problem 1.1. Indeed, recall the definition (18) of $\psi(x)$. The known $\psi(x)$, in fact, gives us the sequence of the matrix functions $\{S_{nks}(x)\}_{n \in \mathbb{N},\, k = \overline{1,m},\, s = 0, 1}$. These matrix functions satisfy Equation (1) for $\lambda = \lambda_{nks}$, so one can construct the potential matrix by the formula
$$Q(x) = S''_{nks}(x)\, S_{nks}^{-1}(x) + \lambda_{nks} I_m.$$
Then, one can find $\Omega$ by (5) and determine $H$ from (9):
$$H = T \Omega T + T^{\perp} \Omega T^{\perp} - \Theta,$$
since the matrices $\Theta$ and $T$ are already known (see Algorithm 3.1).
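The reconstruction formula $Q(x) = S''_{nks}(x) S_{nks}^{-1}(x) + \lambda_{nks} I_m$ is easy to verify numerically in the simplest situation $m = 1$, $Q \equiv 0$, where $S(x, \lambda) = \sin(\rho x)/\rho$ with $\lambda = \rho^2$ is explicit (a sketch using a central finite difference for $S''$; the sample points are chosen away from the zeros of $S$):

```python
import numpy as np

lam = 9.0                                # lambda = rho^2, rho = 3
rho = 3.0
S = lambda t: np.sin(rho * t) / rho      # S(x, lam) for Q = 0: S(0)=0, S'(0)=1

h = 1e-4
x = np.array([0.4, 0.8, 1.2, 2.5])       # points away from zeros of S
S2 = (S(x + h) - 2 * S(x) + S(x - h)) / h ** 2   # central second difference
Q_rec = S2 / S(x) + lam                  # Q(x) = S'' S^{-1} + lam (scalar case)

print(np.max(np.abs(Q_rec)))             # should be close to 0
```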

Below we describe another way to find Q(x) and H. The following method is more convenient for further investigation of Inverse Problem 1.1, in particular, for characterization of the spectral data for the problem L.

Define the matrix functions
$$\varepsilon_0(x) = \sum_{n=1}^{\infty} \sum_{k=1}^{m} \left( S_{nk0}(x)\, \alpha'_{nk0}\, \tilde{S}^{\dagger}_{nk0}(x) - S_{nk1}(x)\, \alpha'_{nk1}\, \tilde{S}^{\dagger}_{nk1}(x) \right), \qquad \varepsilon(x) = -2 \varepsilon_0'(x). \tag{23}$$

Lemma 3.5

The series (23) converges uniformly with respect to $x \in [0, \pi]$. Moreover, the matrix function $\varepsilon_0(x)$ is absolutely continuous on $[0, \pi]$, and the elements of $\varepsilon(x)$ belong to $L_2(0, \pi)$.

Theorem 3.6

The following relations hold:
$$Q(x) = \tilde{Q}(x) + \varepsilon(x), \qquad H = \tilde{H} - T \varepsilon_0(\pi) T. \tag{24}$$

The proofs of Lemma 3.5 and Theorem 3.6 are provided in Section 4. Finally, we arrive at the following algorithm for solving Inverse Problem 1.1.

Algorithm 3.7

Let the spectral data $\{\lambda_{nk}, \alpha_{nk}\}_{n \in \mathbb{N},\, k = \overline{1,m}}$ be given. We have to construct $Q(x)$ and $H$.

  1. Construct the model problem $\tilde{L}$, using Algorithm 3.1. At this step, we also determine the matrices $T$ and $\Theta$.

  2. Find the matrix functions $\{\tilde{S}_{nks}(x)\}_{n \in \mathbb{N},\, k = \overline{1,m},\, s = 0, 1}$ as the solutions of the initial value problems for Equation (1) with the potential $\tilde{Q}(x)$ and $\lambda = \lambda_{nks}$.

  3. Using (11), construct the functions $\tilde{D}(x, \lambda_{lj\eta}, \lambda_{nks})$ for $n, l \in \mathbb{N}$, $j, k = \overline{1,m}$, $\eta, s = 0, 1$.

  4. Form the collections $\{G_k\}_{k \ge 1}$ by (15).

  5. Using the matrix functions $\tilde{S}_{nks}(x)$ and $\tilde{D}(x, \lambda_{lj\eta}, \lambda_{nks})$, form the element $\tilde{\psi}(x) \in B$ and the operator $\tilde{R}(x) \colon B \to B$ (see (18), (20) and (21)).

  6. Solve the main equation (22) and find $\psi(x) \in B$, i.e. obtain $\{S_{nks}(x)\}_{n \in \mathbb{N},\, k = \overline{1,m},\, s = 0, 1}$.

  7. Construct $\varepsilon_0(x)$ and $\varepsilon(x)$ by (23).

  8. Find $Q(x)$ and $H$ by the formulas (24).

Algorithm 3.7 is theoretical. Relying on this algorithm, one can develop a numerical technique for solving Inverse Problem 1.1. For the scalar Sturm–Liouville equation (m = 1), the numerical algorithm, based on the method of spectral mappings, is provided in Ref. [Citation14]. Similarly, one can obtain a numerical method for the matrix case, but this issue requires an additional investigation. In this paper, we illustrate the work of Algorithm 3.7 by a simple finite-dimensional example in Section 6.
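In such a numerical realization, the main equation (22) is truncated to finitely many groups and becomes a finite linear system in which the unknown row vector multiplies the operator matrix from the left. With numpy, a system of the form $\psi (I + \tilde{R}) = \tilde{\psi}$ is solved by transposition; the matrices below are toy placeholders, not actual discretized operators:

```python
import numpy as np

rng = np.random.default_rng(3)
N = 8
R_tilde = 0.2 * rng.standard_normal((N, N))   # toy truncated operator
I = np.eye(N)
psi_tilde = rng.standard_normal(N)            # toy right-hand side

# psi (I + R_tilde) = psi_tilde  <=>  (I + R_tilde)^T psi^T = psi_tilde^T
psi = np.linalg.solve((I + R_tilde).T, psi_tilde)

assert np.allclose(psi @ (I + R_tilde), psi_tilde)
print("truncated main equation solved")
```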

4. Proofs

In this section, the proofs of the assertions from Section 3 are provided. Our methods develop the approach of Bondarenko [Citation25,Citation40] and are based on the grouping (15) of the eigenvalues by their asymptotics.

Lemma 4.1

For the collections $G_k$, $k \ge 1$, there are partitions into smaller collections
$$G_k = \bigcup_{i=1}^{p_k} G_{ki}, \qquad G_{ki} \cap G_{kj} = \emptyset, \quad i \ne j, \tag{25}$$
such that
$$\sum_{k=1}^{\infty} (k \xi_k)^2 < \infty, \tag{26}$$
$$\xi_k := \sum_{i=1}^{p_k} \sum_{\rho, \theta \in G_{ki}} |\rho - \theta| + \frac{1}{k^3} \sum_{i=1}^{p_k} \|\alpha(G_{ki}) - \tilde{\alpha}(G_{ki})\| + \frac{1}{k^2} \|\alpha(G_k) - \tilde{\alpha}(G_k)\|. \tag{27}$$
Here, $\alpha(G) := \sum_{(l,j) \colon \rho_{lj0} \in G} \alpha'_{lj0}$, $\tilde{\alpha}(G) := \sum_{(l,j) \colon \rho_{lj1} \in G} \alpha'_{lj1}$ for any collection $G$ of the form described in Section 3.

Proof.

The assertion of the lemma immediately follows from Propositions 2.1 and 2.3. For $k = 1$, the partition is trivial: $p_1 = 1$, $G_{11} = G_1$. For $k > 1$, each collection $G_{ki}$ is composed of the values with equal coefficients $z_s$ in the asymptotics (6).

Lemma 4.2

For $n, k \in \mathbb{N}$, $\rho \in G_n$, $\theta, \chi \in G_k$, $x \in [0, \pi]$, the following estimates hold:
$$\|D(x, \theta^2, \rho^2)\| \le \frac{C}{n k (|n - k| + 1)}, \qquad \|D(x, \theta^2, \rho^2) - D(x, \chi^2, \rho^2)\| \le \frac{C |\theta - \chi|}{n k (|n - k| + 1)},$$
where the constant $C$ does not depend on $n$, $k$, $\rho$, $\theta$, $\chi$ and $x$.

Proof.

This lemma is proved by the standard approach based on Schwarz's lemma (see Ref. [Citation11, Section 1.6.1]).
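For $Q \equiv 0$ one has $S(x, \rho^2) = \sin(\rho x)/\rho$, so $D$ from (11) is explicit in the scalar case, and the first estimate of Lemma 4.2 can be probed numerically ($\theta \approx k$ and $\rho \approx n$ are shifted off the integers so that $\theta \ne \rho$, loosely mimicking the two spectral families):

```python
import numpy as np

def D(x, theta, rho):
    # D(x, theta^2, rho^2) = <S(., theta^2), S(., rho^2)> / (theta^2 - rho^2)
    # for Q = 0, with S(x, rho^2) = sin(rho x) / rho.
    num = (np.sin(theta * x) * np.cos(rho * x) / theta
           - np.cos(theta * x) * np.sin(rho * x) / rho)
    return num / (theta ** 2 - rho ** 2)

x = np.linspace(0.0, np.pi, 201)
worst = 0.0
for n in range(1, 31):
    for k in range(1, 31):
        theta, rho = k + 0.25, n - 0.25   # separated "square roots"
        val = np.max(np.abs(D(x, theta, rho)))
        worst = max(worst, val * n * k * (abs(n - k) + 1))
print(worst)   # stays bounded, in line with the first estimate above
```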

Lemma 4.3

For $n, k \in \mathbb{N}$, $x \in [0, \pi]$, the following estimate holds:
$$\|R_{k,n}(x)\|_{B(G_k) \to B(G_n)} \le \frac{C k \xi_k}{n (|n - k| + 1)}, \tag{28}$$
where the constant $C$ does not depend on $n$, $k$ and $x$.

Proof.

Fix $n, k \in \mathbb{N}$ and $x \in [0, \pi]$. Let $h$ be an arbitrary element of $B(G_k)$. Put $\eta_{k,n} := h R_{k,n}(x)$. Clearly, $\eta_{k,n} \in B(G_n)$. Let us prove that
$$\|\eta_{k,n}\|_{B(G_n)} \le \frac{C k \xi_k \|h\|_{B(G_k)}}{n (|n - k| + 1)}, \tag{29}$$
where the constant $C$ does not depend on $n$, $k$, $x$ and $h$. Obviously, the estimate (29) implies (28).

According to (16), we have
$$\|\eta_{k,n}\|_{B(G_n)} = \max \left\{ \max_{\rho \in G_n} \|\eta_{k,n}(\rho)\|,\ \max_{\substack{\rho \ne \theta \\ \rho, \theta \in G_n}} |\rho - \theta|^{-1} \|\eta_{k,n}(\rho) - \eta_{k,n}(\theta)\| \right\}. \tag{30}$$
First, we prove the estimate
$$\|\eta_{k,n}(\rho)\| \le \frac{C k \xi_k \|h\|_{B(G_k)}}{n (|n - k| + 1)}, \quad \rho \in G_n. \tag{31}$$
In view of the definition (21) of $R_{k,n}(x)$, we have
$$\eta_{k,n}(\rho) = \sum_{(l,j) \colon \rho_{lj0}, \rho_{lj1} \in G_k} \left( h(\rho_{lj0})\, \alpha'_{lj0}\, D(x, \rho_{lj0}^2, \rho^2) - h(\rho_{lj1})\, \alpha'_{lj1}\, D(x, \rho_{lj1}^2, \rho^2) \right).$$
We derive that $\eta_{k,n}(\rho) = J_1 + J_2 + J_3$,
$$J_1 := \sum_{(l,j) \colon \rho_{lj0}, \rho_{lj1} \in G_k} \left( h(\rho_{lj0}) - h(\rho_{lj1}) \right) \alpha'_{lj0}\, D(x, \rho_{lj0}^2, \rho^2),$$
$$J_2 := \sum_{(l,j) \colon \rho_{lj0}, \rho_{lj1} \in G_k} h(\rho_{lj1})\, \alpha'_{lj0} \left( D(x, \rho_{lj0}^2, \rho^2) - D(x, \rho_{lj1}^2, \rho^2) \right),$$
$$J_3 := \sum_{(l,j) \colon \rho_{lj0}, \rho_{lj1} \in G_k} h(\rho_{lj1}) \left( \alpha'_{lj0} - \alpha'_{lj1} \right) D(x, \rho_{lj1}^2, \rho^2).$$
Using (16) for $h \in B(G_k)$ together with (27), we obtain the estimates
$$\|h(\rho_{lj1})\| \le \|h\|_{B(G_k)}, \qquad \|h(\rho_{lj0}) - h(\rho_{lj1})\| \le \xi_k \|h\|_{B(G_k)}, \quad \rho_{lj0}, \rho_{lj1} \in G_k. \tag{32}$$
Lemma 4.2 implies
$$\|D(x, \rho_{lj0}^2, \rho^2)\| \le \frac{C}{n k (|n - k| + 1)}, \qquad \|D(x, \rho_{lj0}^2, \rho^2) - D(x, \rho_{lj1}^2, \rho^2)\| \le \frac{C \xi_k}{n k (|n - k| + 1)} \tag{33}$$
for all $\rho \in G_n$, $\rho_{lj0}, \rho_{lj1} \in G_k$. It follows from (10) that
$$\|\alpha'_{ljs}\| \le C k^2, \quad \rho_{ljs} \in G_k. \tag{34}$$
Using (32), (33) and (34), we obtain the estimate
$$\|J_s\| \le \frac{C k \xi_k \|h\|_{B(G_k)}}{n (|n - k| + 1)} \tag{35}$$
for $s = 1, 2$. It remains to prove (35) for $J_3$.

Consider the partition (25) of the collection $G_k$. For every $G_{ki}$, denote by $\rho(G_{ki})$ a fixed value $\rho_{lj1} \in G_{ki}$. We represent $J_3$ in the following form: $J_3 = J_4 + J_5 + J_6$,
$$J_4 := \sum_{i=1}^{p_k} \sum_{(l,j) \colon \rho_{lj1} \in G_{ki}} \left( h(\rho_{lj1}) - h(\rho(G_{ki})) \right) \left( \alpha'_{lj0} - \alpha'_{lj1} \right) D(x, \rho_{lj1}^2, \rho^2),$$
$$J_5 := \sum_{i=1}^{p_k} \sum_{(l,j) \colon \rho_{lj1} \in G_{ki}} h(\rho(G_{ki})) \left( \alpha'_{lj0} - \alpha'_{lj1} \right) \left( D(x, \rho_{lj1}^2, \rho^2) - D(x, \rho^2(G_{ki}), \rho^2) \right),$$
$$J_6 := \sum_{i=1}^{p_k} h(\rho(G_{ki})) \left( \alpha(G_{ki}) - \tilde{\alpha}(G_{ki}) \right) D(x, \rho^2(G_{ki}), \rho^2).$$
Using (16) and (27), we get
$$\|h(\rho(G_{ki}))\| \le \|h\|_{B(G_k)}, \qquad \|h(\rho_{lj1}) - h(\rho(G_{ki}))\| \le \xi_k \|h\|_{B(G_k)}, \quad \rho_{lj1} \in G_{ki}, \quad i = \overline{1, p_k}. \tag{36}$$
Lemma 4.2 implies
$$\|D(x, \rho_{lj1}^2, \rho^2)\| \le \frac{C}{n k (|n - k| + 1)}, \tag{37}$$
$$\|D(x, \rho_{lj1}^2, \rho^2) - D(x, \rho^2(G_{ki}), \rho^2)\| \le \frac{C \xi_k}{n k (|n - k| + 1)} \tag{38}$$
for all $\rho \in G_n$, $\rho_{lj1} \in G_{ki}$, $i = \overline{1, p_k}$. Combining (34)–(38), we conclude that (35) holds for $s = 4, 5$.

In order to prove (35) for $J_6$, we use the representation $J_6 = J_7 + J_8 + J_9$,
$$J_7 := \sum_{i=1}^{p_k} \left( h(\rho(G_{ki})) - h(\rho(G_{k1})) \right) \left( \alpha(G_{ki}) - \tilde{\alpha}(G_{ki}) \right) D(x, \rho^2(G_{ki}), \rho^2),$$
$$J_8 := \sum_{i=1}^{p_k} h(\rho(G_{k1})) \left( \alpha(G_{ki}) - \tilde{\alpha}(G_{ki}) \right) \left( D(x, \rho^2(G_{ki}), \rho^2) - D(x, \rho^2(G_{k1}), \rho^2) \right),$$
$$J_9 := h(\rho(G_{k1})) \left( \alpha(G_k) - \tilde{\alpha}(G_k) \right) D(x, \rho^2(G_{k1}), \rho^2).$$
Using (6) and Lemma 4.2, we obtain
$$\|h(\rho(G_{ki})) - h(\rho(G_{k1}))\| \le |\rho(G_{ki}) - \rho(G_{k1})| \, \|h\|_{B(G_k)} \le \frac{C}{k} \|h\|_{B(G_k)}, \tag{39}$$
$$\|D(x, \rho^2(G_{ki}), \rho^2) - D(x, \rho^2(G_{k1}), \rho^2)\| \le \frac{C |\rho(G_{ki}) - \rho(G_{k1})|}{n k (|n - k| + 1)} \le \frac{C}{n k^2 (|n - k| + 1)} \tag{40}$$
for $\rho \in G_n$, $i = \overline{1, p_k}$. Furthermore, it follows from (27) that
$$\|\alpha(G_{ki}) - \tilde{\alpha}(G_{ki})\| \le k^3 \xi_k, \quad i = \overline{1, p_k}, \qquad \|\alpha(G_k) - \tilde{\alpha}(G_k)\| \le k^2 \xi_k. \tag{41}$$
The estimates (36), (37), (39)–(41) together yield (35) for $s = 7, 8, 9$. Consequently, the estimate (31) is valid.

Proof

Proof of Theorem 3.3.

Fix $x \in [0, \pi]$ and suppose that $f = \{f_n\}_{n \ge 1}$ is an arbitrary element of $B$. By virtue of (17), we have
$$\|f_k\|_{B(G_k)} \le \frac{1}{k} \|f\|_B, \quad k \ge 1. \tag{42}$$
The estimates (28) and (42) imply
$$\|f_k R_{k,n}(x)\|_{B(G_n)} \le \frac{C \xi_k \|f\|_B}{n (|n - k| + 1)}, \quad n, k \ge 1.$$
Using the latter estimate together with the definition (20), we conclude that the series in (20) converges in $B(G_n)$ and
$$\|(f R(x))_n\|_{B(G_n)} \le \sum_{k=1}^{\infty} \|f_k R_{k,n}(x)\|_{B(G_n)} \le \frac{C \|f\|_B}{n} \sum_{k=1}^{\infty} \xi_k. \tag{43}$$
In view of (26), we obtain $\|(f R(x))_n\|_{B(G_n)} \le \frac{C \|f\|_B}{n}$, $n \ge 1$. According to (17), we get $\|f R(x)\|_B \le C \|f\|_B$, i.e. the operator $R(x)$ is bounded on $B$.

Let us show that the operator $R(x)$ can be approximated by a sequence of finite-dimensional operators. For $s \ge 1$, define the operator $R^s(x) \colon B \to B$ as follows:
$$R^s(x) = [R^s_{k,n}(x)]_{n,k=1}^{\infty}, \qquad R^s_{k,n}(x) = \begin{cases} R_{k,n}(x), & k = \overline{1,s}, \\ 0, & k > s, \end{cases} \qquad n \ge 1.$$
Using (43), one can easily show that $\lim_{s \to \infty} \|R^s(x) - R(x)\|_{B \to B} = 0$. Thus, the operator $R(x)$ is compact.

Remark 4.4

Note that all the constants C in the proof of Theorem 3.3 do not depend on x.

Corollary 4.5

Define $\Lambda := \left( \sum_{k=1}^{\infty} (k \xi_k)^2 \right)^{1/2}$ and fix $\Lambda_0 > 0$. If $\Lambda \le \Lambda_0$, the estimate $\|R(x)\|_{B \to B} \le C \Lambda$ holds, where the constant $C$ depends only on $\tilde{L}$ and $\Lambda_0$.

Proof

Proof of Lemma 3.5.

The definition (23) of $\varepsilon_0(x)$ can be rewritten in the following form:
$$\varepsilon_0(x) = \sum_{k=1}^{\infty} E_k(x), \qquad E_k(x) := \sum_{(l,j) \colon \rho_{lj0}, \rho_{lj1} \in G_k} \left( S_{lj0}(x)\, \alpha'_{lj0}\, \tilde{S}^{\dagger}_{lj0}(x) - S_{lj1}(x)\, \alpha'_{lj1}\, \tilde{S}^{\dagger}_{lj1}(x) \right). \tag{44}$$
The further arguments resemble the proof of Lemma 4.3. Recall that every collection $G_k$ is divided into smaller collections $\{G_{ki}\}_{i=1}^{p_k}$ (see Lemma 4.1). For every collection $G_{ki}$, we have chosen an arbitrary element $\rho_{lj1}$ and denoted it by $\rho(G_{ki})$. For brevity, denote the corresponding matrix function $S_{lj1}(x)$ by $S(x, G_{ki})$. Then, we derive the relation
$$E_k(x) = \sum_{(l,j) \colon \rho_{lj0}, \rho_{lj1} \in G_k} \left( S_{lj0}(x) - S_{lj1}(x) \right) \alpha'_{lj0}\, \tilde{S}^{\dagger}_{lj0}(x) + \sum_{(l,j) \colon \rho_{lj0}, \rho_{lj1} \in G_k} S_{lj1}(x)\, \alpha'_{lj0} \left( \tilde{S}^{\dagger}_{lj0}(x) - \tilde{S}^{\dagger}_{lj1}(x) \right)$$
$$+ \sum_{i=1}^{p_k} \sum_{(l,j) \colon \rho_{lj1} \in G_{ki}} \left( S_{lj1}(x) - S(x, G_{ki}) \right) \left( \alpha'_{lj0} - \alpha'_{lj1} \right) \tilde{S}^{\dagger}_{lj1}(x) + \sum_{i=1}^{p_k} \sum_{(l,j) \colon \rho_{lj1} \in G_{ki}} S(x, G_{ki}) \left( \alpha'_{lj0} - \alpha'_{lj1} \right) \left( \tilde{S}^{\dagger}_{lj1}(x) - \tilde{S}^{\dagger}(x, G_{ki}) \right)$$
$$+ \sum_{i=1}^{p_k} \left( S(x, G_{ki}) - S(x, G_{k1}) \right) \left( \alpha(G_{ki}) - \tilde{\alpha}(G_{ki}) \right) \tilde{S}^{\dagger}(x, G_{ki}) + \sum_{i=1}^{p_k} S(x, G_{k1}) \left( \alpha(G_{ki}) - \tilde{\alpha}(G_{ki}) \right) \left( \tilde{S}^{\dagger}(x, G_{ki}) - \tilde{S}^{\dagger}(x, G_{k1}) \right)$$
$$+ S(x, G_{k1}) \left( \alpha(G_k) - \tilde{\alpha}(G_k) \right) \tilde{S}^{\dagger}(x, G_{k1}). \tag{45}$$
The relations (19) and (27) yield
$$\|S_{ljs}(x)\| \le \frac{C}{k}, \qquad \|S_{lj0}(x) - S_{lj1}(x)\| \le \frac{C}{k} |\rho_{lj0} - \rho_{lj1}| \le \frac{C \xi_k}{k}, \quad \rho_{ljs} \in G_k,$$
$$\|S_{lj1}(x) - S(x, G_{ki})\| \le \frac{C \xi_k}{k}, \quad \rho_{lj1} \in G_{ki}, \quad i = \overline{1, p_k}, \qquad \|S(x, G_{ki}) - S(x, G_{k1})\| \le \frac{C}{k^2}, \quad i = \overline{1, p_k}, \tag{46}$$
where the constant $C$ depends neither on $k \in \mathbb{N}$ nor on $x \in [0, \pi]$. The similar estimates are also valid for $\tilde{S}$.

Using (34), (41), (45) and (46), we conclude that $\|E_k(x)\| \le C \xi_k$, $k \in \mathbb{N}$, $x \in [0, \pi]$. Consequently, the series (44) of continuous functions converges absolutely and uniformly with respect to $x \in [0, \pi]$, and $\|\varepsilon_0(x)\| \le C \sum_{k=1}^{\infty} \xi_k \le C \Lambda$. Next, we show that the series $\sum_{k=1}^{\infty} E_k'(x)$ converges in the $L_2$-norm. For definiteness, consider the first sum in (45):
$$Z_{1,k}(x) := \sum_{(l,j) \colon \rho_{lj0}, \rho_{lj1} \in G_k} \left( S_{lj0}(x) - S_{lj1}(x) \right) \alpha'_{lj0}\, \tilde{S}^{\dagger}_{lj0}(x).$$
Differentiation yields
$$Z'_{1,k}(x) = \sum_{(l,j) \colon \rho_{lj0}, \rho_{lj1} \in G_k} \left( \left( S'_{lj0}(x) - S'_{lj1}(x) \right) \alpha'_{lj0}\, \tilde{S}^{\dagger}_{lj0}(x) + \left( S_{lj0}(x) - S_{lj1}(x) \right) \alpha'_{lj0}\, (\tilde{S}^{\dagger}_{lj0})'(x) \right).$$
Furthermore, we use the asymptotic expressions
$$S'_{lj0}(x) - S'_{lj1}(x) = -(\rho_{lj0} - \rho_{lj1})\, x \sin(\rho_{lj0} x)\, I_m + O\!\left( \frac{\xi_k}{k} \right), \qquad \tilde{S}^{\dagger}_{lj0}(x) = \frac{\sin(\rho_{lj0} x)}{\rho_{lj0}}\, I_m + O\!\left( \frac{1}{k^2} \right),$$
$$S_{lj0}(x) - S_{lj1}(x) = (\rho_{lj0} - \rho_{lj1})\, \frac{x \cos(\rho_{lj0} x)}{\rho_{lj0}}\, I_m + O\!\left( \frac{\xi_k}{k^2} \right), \qquad (\tilde{S}^{\dagger}_{lj0})'(x) = \cos(\rho_{lj0} x)\, I_m + O\!\left( \frac{1}{k} \right)$$
for $\rho_{lj0}, \rho_{lj1} \in G_k$. Here, the $O$-estimates are uniform with respect to $x \in [0, \pi]$. Taking the grouping (15) into account, we define
$$n_1 = 0, \qquad n_{2j} = n_0 + j - \tfrac{1}{2}, \qquad n_{2j+1} = n_0 + j, \quad j \in \mathbb{N}.$$
Clearly, $\rho_{lj0} = n_k + O\!\left( \frac{1}{k} \right)$ for $\rho_{lj0} \in G_k$, i.e. $n_k$ is the main part in the asymptotics (6) of the values from the collection $G_k$. Finally, we get
$$Z'_{1,k}(x) = \Gamma_k\, x \cos(2 n_k x) + O(\xi_k), \qquad \Gamma_k \in \mathbb{C}^{m \times m}, \quad k \in \mathbb{N}, \quad x \in [0, \pi], \quad \{\|\Gamma_k\|\} \in l_2.$$
Consequently, the elements of the matrix series $\sum_{k=1}^{\infty} Z'_{1,k}(x)$ converge in $L_2(0, \pi)$. The similar technique can be applied to all the other terms in (45). Thus, the elements of the matrix function $\varepsilon(x)$ belong to $L_2(0, \pi)$.

Proof

Proof of Theorem 3.6

Step 1. First, we derive the relation for Q(x). Using (Equation1) and (Equation11), we obtain D(x,λ,μ)=S(x,λ¯)S(x,μ). Then, formally differentiating (Equation14) twice with respect to x, we get S~(x,λ)=S(x,λ)+12πiγS(x,μ)Mˆ(μ)D~(x,μ,λ)dμ+1πiγS(x,μ)Mˆ(μ)S~(x,μ¯)dμS~(x,λ)+12πiγS(x,μ)Mˆ(μ)ddx(S~(x,μ¯)S~(x,λ))dμ. Furthermore, we express the second derivatives from (Equation1) and use (Equation11), so we obtain (Q~(x)λ)S~(x,λ)=(Q(x)λ)S(x,λ)+12πiγ(Q(x)μ)S(x,μ)Mˆ(μ)D~(x,μ,λ)dμ+2ddx12πiγS(x,μ)Mˆ(μ)S~(x,μ¯)dμS~(x,λ)+12πiγS(x,μ)Mˆ(μ)(μλ)D~(x,μ,λ)dμ. By residue theorem, the integral in the square brackets [] equals ε0(x), defined by (Equation23). Consequently, taking (Equation14) into account, we get the relation (Q~(x)Q(x))S~(x,λ)=2ε0(x)S~(x,λ), which implies Q(x)=Q~(x)+ε(x).

Step 2. Let us derive the relation for H in (Equation24). Similarly to (Equation14), one can obtain the relation for the Weyl solution (47) Φ~(x,λ)=Φ(x,λ)+12πiγS(x,μ)Mˆ(μ)E~(x,μ,λ)dx,(47) where (48) E~(x,μ,λ):=S~(x,μ¯),Φ~(x,λ)μλ,E~(x,μ,λ)=S~(x,μ¯)Φ~(x,λ).(48) Using (Equation47) and (Equation48), we derive (49) V(Φ~)=V(Φ)+12πiγV(S(x,μ))Mˆ(μ)E~(π,μ,λ)dμ+T12πiγS(π,μ)Mˆ(μ)S~(π,μ¯)dμΦ~(π,λ).(49) Recall that V(Φ)=0. Furthermore, it is shown that the first integral in (Equation49) also equals zero. Indeed, the definition (Equation48) yields (50) E~(π,μ,λ)=S~(π,μ¯)(T+T)Φ~(π,λ)(S~)(π,μ¯)(T+T)Φ~(π,λ).(50) The projectors T and T are mutually orthogonal, so it follows from the condition V~(Φ~)=0 that TΦ~(π,λ)=0,TΦ~(π,λ)=TH~Φ~(π,λ). Consequently, the relation (Equation50) implies (51) E~(π,μ,λ)=S~(π,μ¯)TH~Φ~(π,λ)+S~(π,μ¯)TΦ~(π,λ)(S~)(π,μ¯)TΦ~(π,λ).(51) Consider the linear form V~(S~(x,μ¯))=((S~)(π,μ¯)S~(π,μ¯)H~)TS~(π,μ¯)T. One can show that the matrix functions V(S(x,μ))M(μ) and M~(μ)V~(S~(x,μ¯)) are entire. Consequently, the matrix functions M~(μ)((S~)(π,μ¯)S~(π,μ¯)H~)T and M~(μ)S~(π,μ¯)T are also entire. Therefore, in view of (Equation51), the first integral in (Equation49) vanishes, so we get V(Φ~)=Tε0(π)Φ~(π,λ).

Obviously, V(Φ~)=V~(Φ~)+T(HH~)Φ~(π,λ). Since V~(Φ~)=0, we obtain that T(H~Hε0(π))Φ~(π,λ)=0. Using the asymptotic formula for the Weyl solution: Φ~(π,τ2)=2Texp(τπ)(1+O(τ1)),τ+, we conclude that H=H~Tε0(π)T.
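Collecting the two steps, the reconstruction formulas of (Equation24) can be summarized as follows. This is a summary sketch: the relation ε(x) = −2ε0′(x) is the standard one in the method of spectral mappings and is how the conclusion of Step 1 should be read.

```latex
Q(x) = \widetilde{Q}(x) + \varepsilon(x), \qquad
\varepsilon(x) = -2\,\varepsilon_0'(x), \qquad
H = \widetilde{H} - T\,\varepsilon_0(\pi)\,T.
```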

5. Inverse problem on the star-shaped graph

In this section, we apply our results to the Sturm–Liouville eigenvalue problem on the star-shaped graph [Citation1,Citation2] in the form (52) yj+qj(x)yj=λyj,x(0,π),j=1,m¯,(52) with the standard matching conditions (53) y1(π)=yj(π),j=2,m¯,j=1myj(π)=0,yj(0)=0,j=1,m¯,(53) where {qj}j=1m are real-valued functions from L2(0,π). Clearly, the boundary value problem (Equation52)–(Equation53) can be rewritten in the form (Equation1)–(Equation2) with the diagonal matrix potential Q(x)=diag{qj(x)}j=1m, H = 0 and T=[Tjk]j,k=1m, Tjk=1m, j,k=1,m¯.
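The matrix T with all entries equal to 1/m is the orthogonal projector onto the span of (1, …, 1); this is easy to confirm numerically. A minimal sketch (the size m = 3 is an arbitrary illustrative choice):

```python
import numpy as np

m = 3
# T has all entries 1/m: the orthogonal projector onto span{(1, ..., 1)}
T = np.full((m, m), 1.0 / m)
T_perp = np.eye(m) - T  # the complementary projector T^perp = I_m - T

# Projector properties: idempotent, Hermitian, orthogonal to T_perp
assert np.allclose(T @ T, T)
assert np.allclose(T, T.T.conj())
assert np.allclose(T @ T_perp, np.zeros((m, m)))
# T projects onto a one-dimensional subspace
assert np.isclose(np.trace(T), 1.0)
```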

In this section, we use the notation A(ii)=aii, i=1,m¯, for the diagonal elements of a matrix A=[ajk]j,k=1m. Yurko [Citation38] proved a uniqueness theorem and suggested an approach to the solution of the following inverse problem.


Inverse Problem 5.1

Given the eigenvalues {λnk}nN,k=1,m¯ and the elements {αnk(ii)}nN,k=1,m¯,i=1,m1¯ of the weight matrices, construct {qj}j=1m.

Thus, since the potential Q(x) is diagonal, it is sufficient to use only the diagonal elements of the weight matrices, excluding the last elements {αnk(mm)}nN,k=1,m¯. The method of Yurko consists of two steps.

Algorithm 5.2

Let the data {λnk}nN,k=1,m¯ and {αnk(ii)}nN,k=1,m¯,i=1,m1¯ be given. We have to construct {qi}i=1m.

  1. Solving local inverse problems. For each i=1,m1¯, find qi(x) by using the data {λnk,αnk(ii)}nN,k=1,m¯.

  2. Returning procedure. Using the given data and the already constructed potentials {qi}i=1m1, find qm.

For the first step of Algorithm 5.2, Yurko suggested deriving the main equations in appropriate Banach spaces by the method of spectral mappings. Now we can easily obtain such main equations and prove their unique solvability, relying on the results of Section 3.

Diagonality of the matrix potential Q(x) implies that the matrix functions S(x,λ), S~(x,λ) and D~(x,λ,μ) are also diagonal. Consequently, taking only the main diagonal in the system (Equation12), we arrive at the scalar equations (54) S~nks(ii)(x)=Snks(ii)(x)+l=1j=1m(Slj0(ii)αlj0(ii)D~(ii)(x,λlj0,λnks)Slj1(ii)(x)αlj1(ii)D~(ii)(x,λlj1,λnks)),(54) where nN, k=1,m¯, s = 0, 1. Equations (Equation54) can be considered separately for each i=1,m¯.

Denote the Banach space B for m = 1 by B1, i.e. B1 is a space of scalar infinite sequences. For any element fB and each i=1,m¯, we can choose in every matrix component of f the diagonal element at the position (i,i) and obtain the element f(ii)B1, by combining these diagonal elements. Taking equation (Equation54) into account, one can construct the compact linear operators R~(ii)(x):B1B1, i=1,m¯, analogous to R~(x) and such that (55) ψ~(ii)(x)=ψ(ii)(x)(I1+R~(ii)(x)),i=1,m¯,x[0,π],(55) where I1 is the identity operator in B1. Analogously to Theorem 3.4, we obtain the following result.

Theorem 5.3

For every x[0,π] and each i=1,m¯, Equation (Equation55) is uniquely solvable in the Banach space B1.

The main equation (Equation55) can be used at step 1 of Algorithm 5.2 for solving the local inverse problems for i=1,m1¯. Theorem 5.3 justifies this step. Step 2 of Algorithm 5.2 is described in Ref. [Citation38], so we do not elaborate on that issue in the present paper.
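The main equation (Equation55) is a linear equation ψ~ = ψ(I₁ + R~) in B₁, and numerically such equations are typically solved by truncation to a finite section. The sketch below is purely illustrative: R is a random small-norm matrix standing in for a finite section of R~(ii)(x), not the actual operator constructed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 50                                      # truncation size (illustrative)
R = 0.1 * rng.standard_normal((N, N)) / N   # stand-in for a finite section of R~(x)
psi_tilde = rng.standard_normal(N)          # stand-in for the known element psi~(x)

# Solve psi (I + R) = psi_tilde, i.e. (I + R)^T psi = psi_tilde as a column system
psi = np.linalg.solve((np.eye(N) + R).T, psi_tilde)

# The solution satisfies the truncated main equation
assert np.allclose(psi @ (np.eye(N) + R), psi_tilde)
```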

6. Example

In this section, we solve Inverse Problem 1.1 for the following example. Put m = 3. Consider the model problem L~=L(Q~(x),T~,H~) with Q~(x)≡0, H~=0 and

(56) \tilde T = \frac{1}{3}\begin{pmatrix} 1 & 1 & 1 \\ 1 & 1 & 1 \\ 1 & 1 & 1 \end{pmatrix}, \qquad \tilde T^{\perp} = I_3 - \tilde T = \frac{1}{3}\begin{pmatrix} 2 & -1 & -1 \\ -1 & 2 & -1 \\ -1 & -1 & 2 \end{pmatrix}.

This matrix Sturm–Liouville problem L~ is equivalent to the Sturm–Liouville eigenvalue problem on the star-shaped graph (Equation52)–(Equation53) with m = 3 and qj(x)≡0, j=1,3¯. It is easy to check that this problem has the eigenvalues λ~n1 = (n − 1/2)², λ~n2 = λ~n3 = n², n ∈ N, and the weight matrices α~n1 = (2/π)(n − 1/2)² T~, α~n2 = α~n3 = (2/π) n² T~⊥, n ∈ N. Suppose that the spectral data {λnk,αnk}nN,k=1,3¯ of the problem L differ from {λ~nk,α~nk}nN,k=1,3¯ only by the first eigenvalue: (57) λ11 = a², a ∈ [0, 1), a ≠ 1/2; λnk = λ~nk, (n,k) ≠ (1,1); αnk = α~nk, n ∈ N, k=1,3¯.(57)
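For the model problem (Q~≡0, H~=0) the eigenvalues split between the two projector subspaces of the boundary condition (Equation2): the T~-component of a solution satisfies y′(π)=0, giving ρ = n − 1/2, while the T~⊥-component satisfies y(π)=0, giving ρ = n. A quick numerical confirmation:

```python
import math

for n in range(1, 6):
    rho1 = n - 0.5    # sqrt of the eigenvalue lambda~_{n1} = (n - 1/2)^2
    rho2 = float(n)   # sqrt of lambda~_{n2} = lambda~_{n3} = n^2
    # y(x) = sin(rho x)/rho solves -y'' = rho^2 y with y(0) = 0
    assert abs(math.cos(rho1 * math.pi)) < 1e-12  # y'(pi) = 0 on the T~ subspace
    assert abs(math.sin(rho2 * math.pi)) < 1e-12  # y(pi) = 0 on the T~-perp subspace
```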

Let us recover the potential matrix Q(x) and the coefficient H of the problem L from its spectral data {λnk,αnk}nN,k=1,3¯. Since the spectral data of the problems L and L~ coincide for n ≥ 2, their asymptotics coincide, and therefore we can use the problem L~ as the model problem for the reconstruction of L by the methods of Section 3. Moreover, we have T=T~, T⊥=T~⊥.

Consider the relation (Equation12). Note that Slj0(x)Slj1(x), if λlj0=λlj1. Consequently, we get that Snks(x)S~nks(x) for all (n,k)(1,1), s = 0, 1. Hence, we obtain from (Equation12) the system of two equations with respect to S110(x) and S111(x): (58) S~110(x)=S110(x)+S110(x)α11D~(x,λ110,λ110)S111(x)α~11D~(x,λ111,λ110),S~111(x)=S111(x)+S110(x)α11D~(x,λ110,λ111)S111(x)α~11D~(x,λ111,λ111).(58) For our example, we have λ110=a2,λ111=14,α11=α~11=12πT,S~110(x)=sin(ax)aI3,S~111(x)=2sinx2I3,D~(x,λ110,λ110)=12a2xsin2ax2aI3,D~(x,λ111,λ111)=2(xsinx)I3,D~(x,λ110,λ111)=D~(x,λ111,λ110)=1asin(a12)xa12sin(a+12)xa+12I3. Consequently, the system (Equation58) takes the form (59) S110(x)(f11(x)T+T)S111(x)f12(x)T=sin(ax)aI,S110(x)f12(x)T+S111(x)(f22(x)T+T)=2sinx2I,(59) where f11(x)=1+14πa2xsin(2ax)2a,f22(x)=11π(xsinx),f12(x)=12πasin((a12)x)a12sin((a+12)x)a+12. Solving the system (Equation59), we obtain S110(x)=Δ1(x)Δ0(x)T+sin(ax)aT,S111(x)=Δ2(x)Δ0(x)T+2sinx2T,Δ0(x):=f11(x)f12(x)f12(x)f22(x),Δ1(x):=sin(ax)af12(x)2sinx2f22(x),Δ2(x):=f11(x)sin(ax)af12(x)2sinx2.
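On the T-subspace, the system (Equation59) reduces to two scalar linear equations for the T-components u, v of S110(x) and S111(x), and the determinant formulas for Δ0, Δ1, Δ2 can be checked numerically by Cramer's rule. In the sketch below, the sign pattern (u f11 − v f12 = sin(ax)/a, u f12 + v f22 = 2 sin(x/2)) and the expanded determinants Δ0 = f11 f22 + f12², Δ1 = f22 sin(ax)/a + 2 f12 sin(x/2), Δ2 = 2 f11 sin(x/2) − f12 sin(ax)/a are my reading of the flattened display:

```python
import math
import numpy as np

a = 0.3  # the perturbed eigenvalue parameter from (57)

def f11(x):
    return 1 + (x - math.sin(2 * a * x) / (2 * a)) / (4 * math.pi * a**2)

def f22(x):
    return 1 - (x - math.sin(x)) / math.pi

def f12(x):
    return (math.sin((a - 0.5) * x) / (a - 0.5)
            - math.sin((a + 0.5) * x) / (a + 0.5)) / (2 * math.pi * a)

for x in (0.5, 1.0, 2.0, math.pi):
    # 2x2 system for the T-components u, v of S_11^0(x), S_11^1(x)
    A = np.array([[f11(x), -f12(x)], [f12(x), f22(x)]])
    b = np.array([math.sin(a * x) / a, 2 * math.sin(x / 2)])
    u, v = np.linalg.solve(A, b)
    # Cramer's-rule determinants
    d0 = f11(x) * f22(x) + f12(x)**2
    d1 = (math.sin(a * x) / a) * f22(x) + 2 * math.sin(x / 2) * f12(x)
    d2 = f11(x) * 2 * math.sin(x / 2) - f12(x) * (math.sin(a * x) / a)
    assert np.isclose(u, d1 / d0) and np.isclose(v, d2 / d0)
```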

It follows from (Equation23), (Equation57) and the above calculations that (60) ε0(x)=S110(x)α11S~110(x)S111(x)α~11S~111(x)=12πΔ0(x)Δ1(x)sin(ax)a2Δ2(x)sin(x2)T.(60) Then it is easy to find Q(x) and H by the formula (Equation24).

In particular, for a = 0.3, we have Q(x)=q(x)T, H = hT, where h approximately equals 0.361838. The plot of q(x) is presented in Figure 1.

Figure 1. Plot of q(x), x∈[0,π].


Let us check our calculations by finding the eigenvalues of the problem L. The solution S(x,λ) has the form S(x,λ) = T s(x,λ) + T⊥ ρ⁻¹ sin(ρx), where s(x,λ) solves the scalar initial value problem with the constructed potential q(x): −s″(x,λ) + q(x)s(x,λ) = λ s(x,λ), s(0,λ) = 0, s′(0,λ) = 1. The eigenvalues of the problem L coincide with the zeros of its characteristic function det V(S(x,λ)) = (s′(π,λ) − h s(π,λ)) sin²(ρπ)/ρ². We have calculated the zeros of (s′(π,λ) − h s(π,λ)) numerically, using the fourth-order Runge–Kutta method with the step π/1000. The zeros in the interval [0,50] are presented in the following table.

Clearly, λ11 = a², λn1 = (n − 1/2)², n ≥ 2. The multiplier sin²(ρπ)/ρ² has the zeros λn2 = λn3 = n², n ∈ N. Thus, the eigenvalues of the constructed problem L coincide with the initially given values, and our method works correctly for this example.
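The numerical check described above can be sketched as a standard shooting computation. Here q(x) ≡ 0 is used as a stand-in for the constructed potential (so, with h = 0, the first zero of s′(π,λ) − h s(π,λ) lies at λ = 1/4, the unperturbed value); with the actual q and h ≈ 0.361838, the same routine would recover λ = a².

```python
import math

def shoot(lam, q=lambda x: 0.0, n=1000):
    """Integrate -s'' + q(x) s = lam s, s(0)=0, s'(0)=1 on [0, pi] by classical RK4."""
    h = math.pi / n

    def f(x, y):  # y = (s, s'); s'' = (q(x) - lam) * s
        return (y[1], (q(x) - lam) * y[0])

    s, ds, x = 0.0, 1.0, 0.0
    for _ in range(n):
        k1 = f(x, (s, ds))
        k2 = f(x + h / 2, (s + h / 2 * k1[0], ds + h / 2 * k1[1]))
        k3 = f(x + h / 2, (s + h / 2 * k2[0], ds + h / 2 * k2[1]))
        k4 = f(x + h, (s + h * k3[0], ds + h * k3[1]))
        s += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        ds += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
        x += h
    return s, ds

# For q = 0: s(x, lam) = sin(rho x)/rho, so s'(pi, 1/4) = cos(pi/2) = 0
s_pi, ds_pi = shoot(0.25)
assert abs(ds_pi) < 1e-6
assert abs(s_pi - math.sin(0.5 * math.pi) / 0.5) < 1e-6  # s(pi) = 2
```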

It is clear from (Equation60) that for a∈[0,1/2)∪(1/2,1) the matrices ε0(x) and Q(x) are nondiagonal, so the problem L is not a Sturm–Liouville problem on the star-shaped graph. This example shows that a simple perturbation of one eigenvalue can take an operator out of the class of differential operators on graphs. In order to remain in this class, perturbations of the spectral data have to be connected with each other by additional conditions. Obtaining such conditions is a challenging topic for future research.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Additional information

Funding

This work was supported by Grant 19-71-00009 of the Russian Science Foundation.

References

  • Pivovarchik VN. Inverse problem for the Sturm–Liouville equation on a star-shaped graph. Math Nachr. 2007;280:1595–1619. doi: 10.1002/mana.200410567
  • Bondarenko NP. Spectral analysis of the matrix Sturm–Liouville operator. Bound Value Probl. 2019;2019:178. doi: 10.1186/s13661-019-1292-z
  • Nicaise S. Some results on spectral theory over networks, applied to nerve impulse transmission. Berlin: Springer; 1985. p. 532–541 (Lecture notes in mathematics; 1171).
  • Lagnese J, Leugering G, Schmidt J. Modelling, analysis and control of dynamic elastic multi-link structures. Boston, MA: Birkhäuser; 1994.
  • Kuchment P. Graph models for waves in thin structures. Waves Random Media. 2002;12(4):R1–R24. doi: 10.1088/0959-7174/12/4/201
  • Pokornyi YuV, Pryadiev VL. Some problems of the qualitative Sturm–Liouville theory on a spatial network. Russ Math Surv. 2004;59(3):515–552. doi: 10.1070/RM2004v059n03ABEH000738
  • Berkolaiko G, Carlson R, Fulling S, et al. Quantum graphs and their applications. Providence, RI: American Mathematical Society; 2006 (Contemp. Math; 415).
  • Marchenko VA. Sturm–Liouville operators and their applications. Kiev: Naukova Dumka; 1977; Russian; English transl., Birkhauser; 1986.
  • Levitan BM. Inverse Sturm–Liouville problems. Moscow: Nauka; 1984; Russian; English transl., Utrecht: VNU Sci. Press; 1987.
  • Pöschel J, Trubowitz E. Inverse spectral theory. New York: Academic Press; 1987.
  • Freiling G, Yurko V. Inverse Sturm–Liouville problems and their applications. Huntington, NY: Nova Science Publishers; 2001.
  • Yurko VA. Method of spectral mappings in the inverse problem theory. Utrecht: VNU Science; 2002 (Inverse and ill-posed problems series).
  • Rundell W, Sacks PE. Reconstruction techniques for classical inverse Sturm–Liouville problems. Math Comput. 1992;58(197):161–183. doi: 10.1090/S0025-5718-1992-1106979-0
  • Ignatiev M, Yurko V. Numerical methods for solving inverse Sturm–Liouville problems. Results Math. 2008;52:63–74. doi: 10.1007/s00025-007-0276-y
  • Borg G. Eine Umkehrung der Sturm–Liouvilleschen Eigenwertaufgabe. Acta Math. 1946;78:1–96 (in German). doi: 10.1007/BF02421600
  • Carlson R. An inverse problem for the matrix Schrödinger equation. J Math Anal Appl. 2002;267:564–575. doi: 10.1006/jmaa.2001.7792
  • Malamud MM. Uniqueness of the matrix Sturm–Liouville equation given a part of the monodromy matrix and Borg type results. Basel: Birkhäuser; 2005. p. 237–270 (Sturm–Liouville theory).
  • Chabanov VM. Recovering the M-channel Sturm–Liouville operator from M+1 spectra. J Math Phys. 2004;45(11):4255–4260. doi: 10.1063/1.1794844
  • Yurko VA. Inverse problems for matrix Sturm–Liouville operators. Russ J Math Phys. 2006;13(1):111–118. doi: 10.1134/S1061920806010110
  • Shieh C-T. Isospectral sets and inverse problems for vector-valued Sturm–Liouville equations. Inverse Probl. 2007;23:2457–2468. doi: 10.1088/0266-5611/23/6/011
  • Xu X-C. Inverse spectral problem for the matrix Sturm–Liouville operator with the general separated self-adjoint boundary conditions. Tamkang J Math. 2019;50(3):321–336. doi: 10.5556/j.tkjm.50.2019.3360
  • Yurko V. Inverse problems for the matrix Sturm–Liouville equation on a finite interval. Inverse Probl. 2006;22:1139–1149. doi: 10.1088/0266-5611/22/4/002
  • Bondarenko N. Spectral analysis for the matrix Sturm–Liouville operator on a finite interval. Tamkang J Math. 2011;42(3):305–327. doi: 10.5556/j.tkjm.42.2011.756
  • Chelkak D, Korotyaev E. Weyl–Titchmarsh functions of vector-valued Sturm–Liouville operators on the unit interval. J Func Anal. 2009;257:1546–1588. doi: 10.1016/j.jfa.2009.05.010
  • Bondarenko NP. An inverse problem for the non-self-adjoint matrix Sturm–Liouville operator. Tamkang J Math. 2019;50(1):71–102. doi: 10.5556/j.tkjm.50.2019.2735
  • Bondarenko NP. Necessary and sufficient conditions for the solvability of the inverse problem for the matrix Sturm–Liouville operator. Funct Anal Appl. 2012;46(1):53–57. doi: 10.1007/s10688-012-0006-4
  • Mykytyuk YaV, Trush NS. Inverse spectral problems for Sturm–Liouville operators with matrix-valued potentials. Inverse Probl. 2010;26:015009. doi: 10.1088/0266-5611/26/1/015009
  • Harmer M. Inverse scattering for the matrix Schrödinger operator and Schrödinger operator on graphs with general self-adjoint boundary conditions. ANZIAM J. 2002;43:1–8.
  • Harmer M. Inverse scattering on matrices with boundary conditions. J Phys A. 2005;38(22):4875–4885. doi: 10.1088/0305-4470/38/22/012
  • Agranovich ZS, Marchenko VA. The inverse problem of scattering theory. New York: Gordon and Breach; 1963.
  • Freiling G, Yurko V. An inverse problem for the non-selfadjoint matrix Sturm–Liouville equation on the half-line. J Inv Ill-Posed Probl. 2007;15:785–798.
  • Bondarenko N. An inverse spectral problem for the matrix Sturm–Liouville operator on the half-line. Bound Value Probl. 2015;2015:6688. doi: 10.1186/s13661-014-0275-3
  • Calogero F, Degasperis A. Nonlinear evolution equations solvable by the inverse spectral transform II. Nuovo Cimento B. 1977;39(1):1–54. doi: 10.1007/BF02738174
  • Wadati M. Generalized matrix form of the inverse scattering method. In: Bullough RK, Caudry PJ, editors. Solitons, topics in current physics. Vol. 17. Berlin: Springer; 1980. p. 287–299.
  • Olmedilla E. Inverse scattering transform for general matrix Schrödinger operators and the related symplectic structure. Inverse Probl. 1985;1:219–236. doi: 10.1088/0266-5611/1/3/007
  • Alpay D, Gohberg I. Inverse problem for Sturm–Liouville operators with rational reflection coefficient. Integr Equ Oper Theory. 1998;30:317–325. doi: 10.1007/BF01195586
  • Bondarenko N. Inverse scattering on the line for the matrix Sturm–Liouville equation. J Diff Equ. 2017;262(3):2073–2105. doi: 10.1016/j.jde.2016.10.040
  • Yurko VA. Inverse spectral problems for Sturm–Liouville operators on graphs. Inverse Probl. 2005;21:1075–1086. doi: 10.1088/0266-5611/21/3/017
  • Yurko VA. Inverse spectral problems for differential operators on spatial networks. Russ Math Surv. 2016;71(3):539–584. doi: 10.1070/RM9709
  • Bondarenko N. Recovery of the matrix quadratic differential pencil from the spectral data. J Inv Ill-Posed Probl. 2016;24(3):245–263.
