Abstract
An algorithm for the reconstruction of a symmetric matrix from the eigenvalues of finite dimensional perturbations is given. A sufficient condition for determining such finite symmetric matrices is derived.
1. Introduction
Let be an
real valued symmetric matrix. Let
be the spectrum of
. Consider the inverse eigenvalue problem for real valued symmetric matrices A which are finite dimensional perturbations of
, expressed by
(1) where
are unit column vectors in
and
are p real numbers. Let
be the spectrum of A. Let
be the spectrum of
(2) for
. We intend to reconstruct the matrices A from the spectra
,
.
There are a number of results on the inverse eigenvalue problem for real valued symmetric matrices (see, for example, [Citation1–8] and references therein). Our main motivation comes from the inverse two spectra problem for Jacobi matrices in [Citation4,Citation5]. In [Citation4], Nylen and Nylen considered the reconstruction of an Jacobi matrix T from eigenvalues of
where
signifies the matrix with a one in the j, j position and zero everywhere else. In [Citation5], they considered the reconstruction of an
Jacobi matrix T from eigenvalues of
which are the roots of the equation
. From a different perspective, we study the inverse eigenvalue problem for real valued symmetric matrices and give a resolution of the problems in [Citation4,Citation5]. In this paper, we consider finite dimensional perturbations of real valued symmetric matrices, while the analysis in [Citation4,Citation5] was concentrated on rank one perturbations of Jacobi matrices.
The main result of this paper asserts that, given the matrix and p sequences of real numbers
satisfying the interlacing properties
(3) or
(4), there exist finite
real valued symmetric matrices A such that the spectrum of A is
and the spectrum of
is
for
. The proof of this result is achieved by giving an explicit algorithm for reconstructing all such matrices A.
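The interlacing conditions of the type in (Equation3) are easy to test numerically. The following sketch (a hypothetical helper for illustration, not part of the paper's algorithm) checks whether one increasing sequence strictly interlaces another from above:

```python
def interlaces(lam, mu):
    """Check lam[0] < mu[0] < lam[1] < mu[1] < ... < lam[n-1] < mu[n-1],
    i.e. mu strictly interlaces lam from above."""
    assert len(lam) == len(mu)
    merged = [x for pair in zip(lam, mu) for x in pair]
    return all(a < b for a, b in zip(merged, merged[1:]))

print(interlaces([1.0, 3.0, 5.0], [2.0, 4.0, 6.0]))  # True
print(interlaces([1.0, 3.0, 5.0], [0.5, 2.0, 4.0]))  # False
```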
In Section 3, we apply our results to the inverse eigenvalue problem of Jacobi matrices and resolve the problem in [Citation4,Citation5].
2. The main theorem
To state our main result, we give two propositions and a preliminary theorem.
Proposition 2.1
Each real valued symmetric matrix
can be expressed by
(5) where
are unit column vectors in
and
are n real numbers.
Proof.
By Theorem 2.9 of Chapter 7 in [Citation9], there exists an real valued invertible matrix X such that
(6) where
. Let the vector
be the ith row vector of matrix X. By (Equation6), we get
Let
for
, then we have (Equation5).
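Numerically, the decomposition (Equation5) is just the spectral theorem: a real symmetric matrix is the sum of rank-one terms built from an orthonormal eigenbasis, weighted by the eigenvalues. A minimal sketch (the matrix below is an arbitrary example of mine):

```python
import numpy as np

B = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 1.0],
              [0.0, 1.0, 2.0]])
a, U = np.linalg.eigh(B)  # eigenvalues a_i, orthonormal eigenvectors u_i

# B = sum_i a_i * u_i u_i^T, the form asserted in (Equation5)
recon = sum(ai * np.outer(ui, ui) for ai, ui in zip(a, U.T))
print(np.allclose(B, recon))  # True
```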
Corollary 2.2
Let be an
real valued symmetric matrix with rank r, then
can be expressed by (Equation5). If
are linearly independent, then the number of nonzero real numbers
is r.
Proof.
According to Proposition 2.1, can be expressed by (Equation5). Let
and
. Then
(7) If
are linearly independent, then the matrix X is invertible. By Property 2.9 of Chapter 4 in [Citation9], the rank of matrix B is r. That means the number of nonzero real numbers
is r. The proof is complete.
Proposition 2.3
Let be a real valued symmetric matrix,
be a unit column vector in
and a be a real number. Assume that two sequences of real numbers
arranged in increasing order, represent the spectra of
and
(8) respectively. Then we have
(9) or
(10)
Proof.
We first find the characteristic polynomial of A. Let be a nontrivial solution of
(11) We can find a set of vectors
such that
is the eigenvector of
corresponding to the eigenvalue
and
forms an orthonormal basis in
. Then the vectors
and
can be expressed by
(12) where
are real. Substituting (Equation12) into (Equation11) yields
Since the eigenvectors
are orthogonal,
that is
(13) Then (Equation11) has a nontrivial solution if and only if the determinant of the coefficient matrix of the system (Equation13) is zero. Denote
(14) then
is the characteristic polynomial of matrix A and the zeros set of
is
. So
can also be expressed by
Let
. By (Equation14),
Hence
(15) for
. We will prove the conclusion in two cases:
If a is a positive real number, then by a classical fact [Citation1] the coefficients of the partial fraction decomposition of
are nonpositive if and only if the roots of
interlace those of
. Thus (Equation9) holds.
If a is a negative real number, then the coefficients of the partial fraction decomposition of
are nonnegative. By [Citation1], (Equation10) holds. This finishes the proof.
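Proposition 2.3 is easy to verify numerically: for a rank-one perturbation with a positive coupling a, the eigenvalues of the perturbed matrix interlace those of the original from above, as in (Equation9). A sketch with a randomly generated symmetric matrix (an illustration of the statement, not part of the proof):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
M = rng.standard_normal((n, n))
B = (M + M.T) / 2                      # real valued symmetric matrix
u = rng.standard_normal(n)
u /= np.linalg.norm(u)                 # unit column vector
a = 1.5                                # positive real number: case (Equation9)

lam = np.linalg.eigvalsh(B)            # spectrum of B, increasing order
mu = np.linalg.eigvalsh(B + a * np.outer(u, u))  # spectrum of the perturbation

# interlacing from above: lam[i] <= mu[i] <= lam[i+1]
print(np.all(lam <= mu), np.all(mu[:-1] <= lam[1:]))  # True True
```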
Corollary 2.4
Let be a real valued symmetric matrix, a be a real number and
be a column vector in
. Assume that two sequences of real numbers
arranged in increasing order, represent the spectra of
and A defined by (Equation8) respectively. If
is the eigenvector of
corresponding to
then we have
Proof.
Let be the eigenvector of
corresponding to
. Noting that
and
the proof is complete.
Theorem 2.5
Let be an
real valued symmetric matrix and denote its spectrum by
. Suppose that
is a sequence of real numbers satisfying (Equation9) or (Equation10). Then there exists a unit column vector
in
and a real number a such that the spectrum of A defined by (Equation8) is
.
Let
Denote by
the greatest common divisor of
and
. Let
where
It is possible to arrange the roots of
and
such that
(16) or
(17) Moreover, the number of such matrices A is
at most.
Proof of Theorem 2.5
We consider two cases.
(i) Suppose that (Equation9) holds. Denote by I the following subset of
:
By (Equation16),
is a Herglotz function. By the Herglotz representation theorem [Citation3, p. 4],
can be expressed by
(18) where
. We can find a set of vectors
such that
is the eigenvector of
corresponding to the eigenvalue
and
forms an orthonormal basis in
. Let
(19) and
(20) where
. By (Equation18),
(21) Next we claim that the spectrum of the matrix A defined by (Equation8) is
. In fact, define
According to (Equation15), it follows that
is the characteristic polynomial of A. Comparing with (Equation21
(21)
(21) ), we obtain
i.e.
This proves that the spectrum of A is
. Moreover, by (Equation20) the number of the vectors
is
at most.
(ii) Suppose that (Equation10) holds; then we have (Equation18) with
. Let the vector
and the real number a be defined by (Equation20) and (Equation19). By the proof of case (i), we see that the spectrum of A is
and the number of the vectors
is
at most. Then the proof is complete.
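The constructive step in the proof amounts to reading off a from a trace comparison and the squared components of u (in the eigenbasis of the unperturbed matrix) from the residues of the ratio of the two characteristic polynomials. The following is a numerical sketch under the standard rank-one update A = B + a uu^T; the function name and the example data are mine, chosen so that the target spectrum interlaces from above:

```python
import numpy as np

def reconstruct_rank_one(B, mu):
    """Find a and a unit vector u with spec(B + a*u u^T) = mu, given that mu
    interlaces spec(B) from above, via the residues of p_A(z)/p_B(z)."""
    lam, V = np.linalg.eigh(B)            # B = V diag(lam) V^T
    a = mu.sum() - lam.sum()              # comparing traces fixes a
    w2 = np.empty_like(lam)
    for i in range(len(lam)):
        # residue of p_A/p_B at lam[i] equals -a * w_i^2, where w = V^T u
        num = np.prod(lam[i] - mu)
        den = np.prod(np.delete(lam[i] - lam, i))
        w2[i] = -num / den / a
    u = V @ np.sqrt(w2)                   # back to the original basis
    return a, u

B = np.diag([1.0, 3.0, 5.0])
mu = np.array([2.0, 4.0, 6.0])            # interlaces spec(B) from above
a, u = reconstruct_rank_one(B, mu)
A = B + a * np.outer(u, u)
print(np.allclose(np.linalg.eigvalsh(A), mu))  # True
```

Since the squared components w2 sum to 1, the recovered u is automatically a unit vector, matching the normalisation (Equation20) in the proof.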
Now we state our main result.
Theorem 2.6
Let be an
real valued symmetric matrix and denote its spectrum by
. Suppose that
are p sequences of real numbers satisfying (Equation3) or (Equation4). Then there exist finitely many matrices A defined by (Equation1) such that the spectrum of A is
and the spectrum of
defined by (Equation2) is
.
Proof.
We prove the theorem by induction. Denote by the spectrum of
,
.
For k = 1, by Theorem 2.5 there exists the vector and the real number
such that
and the number of
is finite.
For k>1, if there exist k−1 vectors and k−1 real numbers
such that
for
, then by Theorem 2.5 there exists the vector
and the real number
such that
and the number of
is finite. That is, there exist k vectors
and k real numbers
such that
for
and the number of
is finite.
From the above, there exist finitely many matrices A such that the spectrum of A is and the spectrum of
is
for
. The proof is complete.
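The induction above can be mirrored numerically: each intermediate spectrum is imposed by one application of the rank-one step of Theorem 2.5 to the previously constructed matrix. A sketch reusing the residue computation from that proof (the helper name and example spectra are mine; each target interlaces the previous spectrum from above):

```python
import numpy as np

def rank_one_step(B, mu):
    """One application of Theorem 2.5: (a, u) with spec(B + a*u u^T) = mu."""
    lam, V = np.linalg.eigh(B)
    a = mu.sum() - lam.sum()              # trace comparison
    w2 = np.array([-np.prod(lam[i] - mu)
                   / np.prod(np.delete(lam[i] - lam, i)) / a
                   for i in range(len(lam))])
    return a, V @ np.sqrt(w2)

A = np.diag([0.0, 2.0, 4.0])                       # unperturbed matrix
targets = [np.array([1.0, 3.0, 5.0]),              # first target spectrum
           np.array([2.0, 4.0, 6.0])]              # second target spectrum
for mu in targets:
    a, u = rank_one_step(A, mu)
    A = A + a * np.outer(u, u)
    print(np.allclose(np.linalg.eigvalsh(A), mu))  # True at every step
```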
Remark 2.7
By Proposition 2.1, the matrices A reconstructed in Theorem 2.6 satisfy no relations other than the interlacing property (Equation3) or (Equation4).
3. Application
We can apply our results to the inverse eigenvalue problem of Jacobi matrices defined by
(22) where
. Let
be a unit column vector in
and a be a positive real number. Let
be the spectra of
and
respectively. Then the following inverse eigenvalue problem is stated: reconstruct the Jacobi matrix
and the positive real number a from the spectra
and the vector
.
We first mention some preliminaries which will be needed subsequently. Let
where
is the spectrum of
. By Theorem 2.5, there exists the column vector
, where
(23) such that the spectrum of
is
.
For fundamental aspects of spectral theory for Jacobi matrices, we refer to [Citation10]. Let be a nontrivial solution of
(24) Noting that
can be written as
where
is a real valued function of λ. By (Equation24), we get
For
, the matrix
denotes the
submatrix of A contained in the rows and columns of A indexed by
. Then
(25) where
is the determinant of
and
.
Let
(26) and
. Then V is an orthogonal matrix and
. If
(27) then the matrix
is similar to
and the spectrum of
is
.
Let be the standard basis in
. Next we will treat two cases:
;
.
Case (i). Let and
be two sequences of real numbers satisfying
(28) Then we can reconstruct the unique Jacobi matrix
and the positive real number a such that the spectrum of
is
and the spectrum of A is
, where
.
Calculate by (Equation23). By Theorem 2 in [Citation5], there exists a unique Jacobi matrix
. Suppose that the orthogonal matrix V defined by (Equation26) satisfies (Equation27), i.e.
(29) Due to
, it follows that
, i.e.
(30) According to (Equation25),
(31) Putting (Equation30) and (Equation31) into (Equation29), we get
that is
(32) By (Equation28), (Equation32) has a unique solution
(33) Let
Therefore, it is possible to determine
by solving the Vandermonde system
(34) Since
and
is a continuous function in
, there is at least one zero in
. However,
has only n−1 zeros. Hence the zero set of
interlaces
. By Theorem 1 and the Remark in [Citation4, p. 414], we can exactly reconstruct the unique matrix
from
and the zeros of
.
Example 3.1
Find all Jacobi matrices
and the positive number a such that the spectrum of
is
and the spectrum of A is
where
and
.
The explicit algorithm is:
Calculate
by (Equation23);
Let
;
Calculate
and a by (Equation33);
Calculate
by (Equation34);
Calculate the zeros of
, denoted by
;
Reconstruct the unique matrix
from
and
by
This yields the unique matrix , i.e.
This computation was performed using MATLAB.
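The final reconstruction step, building a Jacobi matrix from a spectrum together with the first components of its normalised eigenvectors, is classically carried out by the Lanczos process. This is one standard route to the reconstruction invoked from [Citation4], sketched here with a hypothetical round-trip example rather than the paper's data:

```python
import numpy as np

def jacobi_from_spectral_data(lam, w):
    """Reconstruct the Jacobi matrix with eigenvalues lam whose normalised
    eigenvectors have (positive) first components w, via the Lanczos process
    applied to diag(lam) with starting vector w."""
    n = len(lam)
    alpha = np.zeros(n)      # diagonal entries
    beta = np.zeros(n - 1)   # off-diagonal entries
    q = w / np.linalg.norm(w)
    q_prev = np.zeros(n)
    b_prev = 0.0
    for j in range(n):
        alpha[j] = q @ (lam * q)
        if j < n - 1:
            r = lam * q - alpha[j] * q - b_prev * q_prev
            beta[j] = np.linalg.norm(r)
            q_prev, q = q, r / beta[j]
            b_prev = beta[j]
    return np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)

# round-trip check on a known Jacobi matrix
T = np.diag([2.0, 2.0, 2.0]) + np.diag([1.0, 1.0], 1) + np.diag([1.0, 1.0], -1)
lam, U = np.linalg.eigh(T)
w = np.abs(U[0, :])                       # first components of eigenvectors
print(np.allclose(jacobi_from_spectral_data(lam, w), T))  # True
```

Taking absolute values fixes the sign ambiguity of the eigenvectors, which corresponds to the normalisation producing positive off-diagonal entries.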
Case (ii). Let and
be two sequences of real numbers satisfying (Equation28). Then we can reconstruct the Jacobi matrix
and a positive real number a such that the spectrum of
is
and the spectrum of A is
, where
.
Calculate by (Equation23). By Theorem 2 in [Citation5], there exist finitely many
Jacobi matrices
. Suppose that the orthogonal matrix V defined by (Equation26) satisfies (Equation27), i.e.
(35) By
, it follows that
i.e.
(36) According to (Equation25),
(37) Putting (Equation36) and (Equation37) into (Equation35), we have
that is
(38) Combining with
, we obtain
Then, it is easily shown that
(39) By (Equation28), (Equation39) has a unique solution
(40) Let
Therefore, it is possible to determine
by solving the Vandermonde system
(41) Since
and
is a continuous function in
, there is at least one zero in
. However,
has only n−1 zeros. Hence the zero set of
interlaces
. By Theorem 3 in [Citation4], we can exactly reconstruct the finitely many
matrices
from
and the zeros of
.
Example 3.2
Find all Jacobi matrices
and the positive number a such that the spectrum of
is
and the spectrum of A is
where
and
.
The explicit algorithm is:
Calculate
by (Equation23);
Let
;
Calculate
and a by (Equation40);
Calculate
by (Equation41);
Calculate the zeros of
, denoted by
;
Reconstruct the matrices
from
and
by
This yields two matrices , i.e.
This computation was performed using MATLAB.
Disclosure statement
No potential conflict of interest was reported by the author(s).
References
- Duarte AL. Construction of acyclic matrices from spectral data. Linear Algebra Appl. 1989;113:173–182. doi: 10.1016/0024-3795(89)90295-4
- Friedland S. The reconstruction of a symmetric matrix from the spectral data. J Math Anal Appl. 1979;71:412–422. doi: 10.1016/0022-247X(79)90201-4
- Gesztesy F, Simon B. On the determination of a potential from three spectra. Trans Am Math Soc. 1999;2:85–92.
- Nylen P, Nylen F. Inverse eigenvalue problems associated with spring-mass systems. Linear Algebra Appl. 1997;254:409–425. doi: 10.1016/S0024-3795(96)00316-3
- Nylen P, Nylen F. Inverse eigenvalue problems: existence of special spring-mass systems. Inverse Probl. 1997;13:1071–1081. doi: 10.1088/0266-5611/13/4/012
- Ram YM. Inverse eigenvalue problem for a modified vibrating system. SIAM J Appl Math. 1993;53:1763–1775. doi: 10.1137/0153082
- Rio RD, Kudryavtsev M, Silva L. Rank one perturbations of Jacobi matrices with mixed spectra. Math Nachr. 2006;279:502–512. doi: 10.1002/mana.200310375
- Rio RD, Kudryavtsev M. Inverse problems for Jacobi operators I: interior mass-spring perturbations in finite systems. Inverse Probl. 2011;28:055007.
- Artin M. Algebra. New Jersey: Prentice Hall; 2010.
- Gladwell ML. Inverse problems in vibration. Dordrecht: Martinus Nijhoff; 1986.