
Distinct eigenvalues are realizable with generic eigenvectors

Pages 2054-2068 | Received 21 Jul 2021, Accepted 08 Jun 2023, Published online: 10 Jul 2023

Abstract

Motivated by applications in matrix constructions used in the inverse eigenvalue problem for graphs, we study a concept of genericity for eigenvectors associated with a given spectrum and a connected graph. This concept generalizes the established notion of a nowhere-zero eigenbasis. Given any simple graph G on n vertices and any spectrum with no multiple eigenvalues, we show that the family of eigenbases for symmetric matrices with this graph and spectrum is generic, strengthening a result of Monfared and Shader. We illustrate applications of this result by constructing new achievable ordered multiplicity lists for partial joins of graphs and providing several families of joins of graphs that are realizable by a matrix with only two distinct eigenvalues.


1. Introduction

Let $G$ be a simple graph with vertex set $V(G)=\{1,\dots,n\}$ and edge set $E(G)$, and consider $S(G)$, the set of all real symmetric $n\times n$ matrices $A=(a_{ij})$ such that, for $i\neq j$, $a_{ij}\neq0$ if and only if $\{i,j\}\in E(G)$, with no restriction on the diagonal entries of $A$. The inverse eigenvalue problem for graphs (IEP-G) seeks to characterize all possible spectra $\sigma(A)=\{\lambda_1,\dots,\lambda_n\}$ of matrices $A\in S(G)$. The IEP-G is solved for a few families of graphs (complete graphs, paths, generalized stars [Citation1], cycles [Citation2,Citation3], generalized barbell graphs [Citation4], linear trees [Citation5]) and for graphs of order at most 5 [Citation6]; the problem remains open in many other cases.

The closely related ordered multiplicity inverse eigenvalue problem for graphs seeks to characterize all possible ordered multiplicities of eigenvalues of matrices in $S(G)$, i.e. to characterize the ordered lists of nonnegative integers $(m_1,\dots,m_r)$ for which there exist a matrix $A\in S(G)$ and $\lambda_1<\cdots<\lambda_r$, such that $\sigma(A)=\{\lambda_1^{(m_1)},\dots,\lambda_r^{(m_r)}\}$, where $\lambda_i^{(m_i)}$ denotes $m_i$ copies of $\lambda_i$. We call such an ordered list $(m_1,\dots,m_r)$ an ordered multiplicity vector of $G$. The ordered multiplicity inverse eigenvalue problem has been resolved for all connected graphs of order at most 6 [Citation7]. For some specific families of connected graphs, several ordered multiplicity vectors have been determined (see, e.g. [Citation4,Citation6,Citation8]). Moreover, Monfared and Shader proved the following theorem in [Citation9], showing that $(1,1,\dots,1)$ is an ordered multiplicity vector of any connected graph $G$, which can be realized with a nowhere-zero eigenbasis, that is, a basis of eigenvectors, each containing no zero entry.

Theorem 1.1

[Citation9,Theorem 4.3]

For any connected graph $G$ on $n$ vertices and distinct real numbers $\lambda_1,\dots,\lambda_n$, there exists $A\in S(G)$ with spectrum $\sigma(A)=\{\lambda_1,\dots,\lambda_n\}$ and an eigenbasis consisting of nowhere-zero vectors.

Our main result, Theorem 2.4, is a strengthening of Monfared and Shader's theorem. We show that we can make considerably more stringent genericity requirements of the eigenbasis than simply asking that it is nowhere-zero; in the language of [Citation10], we show that any spectrum with only simple eigenvalues is generically realizable, for any connected graph G.

Recall that a matrix is nowhere-zero if each of its entries is nonzero. Let $\sigma=\{\lambda_1,\dots,\lambda_n\}$ be a multiset of real numbers. For a connected graph $G$, we say the multiset $\sigma$ is realizable for $G$ if $\sigma=\sigma(A)$ for some $A\in S(G)$; we say $\sigma$ is generically realizable for $G$ if, moreover, for any finite set $Y\subseteq\mathbb{R}^n\setminus\{0\}$ there is an orthogonal matrix $U$ so that $UDU^\top\in S(G)$, where $D$ is a diagonal matrix with spectrum $\sigma$, and $Uy$ is nowhere-zero for all $y\in Y$. (As noted above, this condition is stronger than the realizability of $\sigma$ in $S(G)$ with a nowhere-zero eigenbasis: consider $Y=\{e_1,\dots,e_n\}$.) Let $\mathbb{N}$ be the set of strictly positive integers. An ordered multiplicity vector $\mathbf m=(m_1,\dots,m_r)\in\mathbb{N}^r$ is spectrally arbitrary for $G$ if for any real numbers $\lambda_1<\cdots<\lambda_r$, the multiset $\sigma=\{\lambda_1^{(m_1)},\dots,\lambda_r^{(m_r)}\}$ is realizable for $G$. Further, we say $\mathbf m$ is generically realizable for $G$ if it is spectrally arbitrary for $G$ and every assignment of eigenvalues results in a generically realizable multiset for $G$, as defined above.
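As a purely illustrative aside, the two conditions in this definition are easy to test numerically for a given orthogonal matrix $U$, spectrum and finite set $Y$. The following numpy sketch is ours (the helper names are not from the literature); it only checks a single candidate $U$, whereas the definition asks for the existence of such a $U$ for every finite $Y$.

```python
import numpy as np

def in_S_of_G(M, edges, tol=1e-9):
    """Check that M lies in S(G): M is symmetric and, for i != j, M[i, j] is
    nonzero exactly when {i, j} is an edge of G (the diagonal is unrestricted)."""
    n = M.shape[0]
    if not np.allclose(M, M.T, atol=tol):
        return False
    E = {frozenset(e) for e in edges}
    for i in range(n):
        for j in range(i + 1, n):
            if (abs(M[i, j]) > tol) != (frozenset((i, j)) in E):
                return False
    return True

def witnesses_generic_realizability(U, spectrum, edges, Y, tol=1e-9):
    """Check the two conditions of the definition for a given orthogonal U:
    U diag(spectrum) U^T lies in S(G), and U @ y is nowhere-zero for every y in Y."""
    A = U @ np.diag(spectrum) @ U.T
    return in_S_of_G(A, edges, tol) and all(np.all(np.abs(U @ y) > tol) for y in Y)

# Toy usage on the path P3 with vertices 0 - 1 - 2.
edges = [(0, 1), (1, 2)]
A = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])          # a matrix in S(P3) with simple spectrum
spectrum, U = np.linalg.eigh(A)           # columns of U are eigenvectors
Y_good = [np.eye(3)[:, 0], np.eye(3)[:, 2]]
Y_bad = [np.eye(3)[:, 1]]                 # maps to the 0-eigenvector, which has a zero entry
print(witnesses_generic_realizability(U, spectrum, edges, Y_good))   # True
print(witnesses_generic_realizability(U, spectrum, edges, Y_bad))    # False
```

The second call returning False illustrates that a particular realization may fail the nowhere-zero test for some choices of $Y$, which is exactly why generic realizability is a stronger requirement than realizability.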

We will also need to deal with disconnected graphs. The notion of the multiplicity matrix of a disconnected graph was introduced in [Citation10] as a generalization of the ordered multiplicity vector of a connected graph, as follows. Let $G$ be a graph with $k\in\mathbb{N}$ connected components $G_1,\dots,G_k$. For $r\in\mathbb{N}$, an $r\times k$ matrix $V$ with non-negative integer entries is said to be a multiplicity matrix for $G$ if, for $1\le i\le k$, the $i$th column of $V$ is an ordered multiplicity list realized by a matrix in $S(G_i)$. Note that a trivial necessary condition for $V$ to be a multiplicity matrix of $G$ is that the orders of the connected components of $G$ are the same as the column sums of $V$; we abbreviate this by saying that $V$ fits $G$. If every column of a multiplicity matrix $V$ for a graph $G$ is generically realizable for the corresponding connected component of $G$, then we say $V$ is generically realizable for $G$.
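For instance, if $G=P_3\cup K_2$, then $V=\begin{pmatrix}1&1\\1&1\\1&0\end{pmatrix}$ fits $G$ (the column sums are 3 and 2), and it is a multiplicity matrix for $G$, since $(1,1,1)$ and $(1,1,0)$ are ordered multiplicity lists realized by matrices in $S(P_3)$ and $S(K_2)$, respectively.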

The methods used in [Citation6,Citation11] to resolve the IEP-G and the ordered multiplicity IEP-G for small graphs make essential use of strong properties. In particular, a symmetric $n\times n$ matrix $A$ is said to have the strong spectral property (the SSP) if the zero matrix is the only symmetric matrix $X$ satisfying $AX=XA$ and $A\circ X=I_n\circ X=0$, where $\circ$ denotes the Hadamard product. The SSP was first defined in [Citation11]. One of its key features is the following perturbation result.
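The SSP is a finite linear-algebra condition: the constraints $AX=XA$ and $A\circ X=I_n\circ X=0$ are linear in the entries of the symmetric matrix $X$, so $A$ has the SSP exactly when the corresponding homogeneous linear system has only the trivial solution. The following numpy sketch (ours, for illustration only) carries out this check directly.

```python
import numpy as np
from itertools import combinations

def has_ssp(A, tol=1e-8):
    """Numerically test the strong spectral property of a symmetric matrix A.

    X ranges over symmetric matrices with A o X = I o X = 0, i.e. X is supported
    on the off-diagonal zero pattern of A.  A has the SSP iff the only such X
    with AX = XA is X = 0, i.e. iff the commutator map has trivial kernel there.
    """
    n = A.shape[0]
    # Basis of admissible X's: one symmetric unit matrix per non-edge {i, j}.
    basis = []
    for i, j in combinations(range(n), 2):
        if abs(A[i, j]) <= tol:            # {i, j} is not an edge of the graph of A
            X = np.zeros((n, n))
            X[i, j] = X[j, i] = 1.0
            basis.append(X)
    if not basis:                          # e.g. complete graphs: SSP holds trivially
        return True
    # Columns are the vectorized commutators A X_b - X_b A.
    K = np.column_stack([(A @ X - X @ A).ravel() for X in basis])
    return np.linalg.matrix_rank(K, tol=tol) == len(basis)

# Example: an irreducible tridiagonal matrix (graph P3).
A = np.diag([1.0, 2.0, 3.0]) + np.diag([1.0, 1.0], 1) + np.diag([1.0, 1.0], -1)
print(has_ssp(A))   # prints True for this matrix
```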

Theorem 1.2

[Citation11,Theorem 10]

Let $G'$ be a spanning subgraph of a graph $G$. If $A'\in S(G')$ is a matrix with the SSP, then for any $\epsilon>0$ there is a matrix $A\in S(G)$ with the SSP such that $A$ and $A'$ have the same eigenvalues and $\|A-A'\|<\epsilon$.

To establish Theorem 2.4, we first prove it for trees, and then apply the SSP to pass from a spanning tree of $G$ to $G$ itself. A 0–1 matrix contains only 0s and 1s; casting Theorem 2.4 in this language, we obtain Theorem 2.5: every 0–1 multiplicity matrix which fits a graph $G$ is generically realizable for $G$. In particular, this implies that every multiplicity matrix which fits a path $P_n$ is generically realizable for $P_n$. In the terminology of [Citation10], we say that the graph $P_n$ is generically realizable.

Figure 1. Sketch of the partial join $(P_7,X)\vee H$, where the pendant vertices $X$ of $P_7$ are coloured grey. The edges of $H$ are not drawn.

Figure 2. Sketch of the partial join $K_1\vee(3K_1\cup H)$, where the high degree vertex $v$ is coloured grey. The edges of $H$ are not drawn.

We provide some applications in Section 3. In particular, we are interested in finding examples of joins of graphs that allow a small number of distinct eigenvalues. The minimum number of distinct eigenvalues of a graph, $q(G)=\min\{q(A):A\in S(G)\}$, where $q(A)$ denotes the number of distinct eigenvalues of a square matrix $A$, is one of the parameters motivated by the IEP-G. The study of $q(G)$ was initiated by Leal-Duarte and Johnson in [Citation12], and it has been broadly studied since then (see e.g. [Citation9–11,Citation13–17]). Monfared and Shader [Citation9, Theorem 5.2] proved that $q(G\vee H)=2$ if $G$ and $H$ are connected graphs with $|G|=|H|$. A consequence of our result is the following generalization (Theorem 3.4): if $G$ and $H$ are arbitrary graphs, each with $k$ connected components $G_1,\dots,G_k$ and $H_1,\dots,H_k$, so that $\big||G_i|-|H_i|\big|\le2$ for each $i$, then we still have $q(G\vee H)=2$.

In this paper, we use the following notation. For an integer $n$, we denote $[n]=\{1,2,\dots,n\}$, and we also write $k+[n]:=\{k+1,k+2,\dots,k+n\}$. Column vectors are typically written using boldface; for example, $\mathbf{1}_n$ denotes the column vector of ones in $\mathbb{R}^n$, and $e_i:=(0,\dots,0,1,0,\dots,0)^\top\in\mathbb{R}^n$ is the vector with the 1 in the $i$th entry and 0s elsewhere. We write $I_k$ for the $k\times k$ identity matrix, $0_{m\times n}$ for the zero $m\times n$ matrix, and we also write $0_m:=0_{m\times m}$ and $\mathbf{0}_m:=0_{m\times1}$. Where the context allows, we may omit these subscripts altogether.

For a vector $x\in\mathbb{R}^n$ we denote by $x(i)\in\mathbb{R}^{n-1}$ the vector $x$ with its $i$th component removed, and for a matrix $A$ we let $A(i)$ denote the principal submatrix obtained by removing the $i$th row and column of $A$.

All graphs $G=(V(G),E(G))$ considered are simple undirected graphs with non-empty vertex sets $V(G)$. The order of $G$ is $|G|=|V(G)|$ and we often assume (without loss of generality) that $V(G)=[|V(G)|]$. For a connected graph $G$, the distance between a pair of vertices is the number of edges in a shortest path between them, and the diameter $\mathrm{diam}(G)$ is the largest distance between any pair of vertices. A subgraph $H$ is a spanning subgraph of $G$ if $V(H)=V(G)$. The join $G\vee H$ of two graphs $G$ and $H$ is the disjoint union $G\cup H$ together with all the possible edges joining the vertices in $G$ to the vertices in $H$. We abbreviate the disjoint union of $k$ copies of the same graph $G$ by $kG:=G\cup\cdots\cup G$. For a subgraph $H$ of $G$ and a matrix $A\in S(G)$, we define the matrix $A[H]$ as the principal submatrix of $A$ whose rows and columns are indexed by the vertices of $H$. We write $P_n$, $C_n$ and $K_n$ for the path, the cycle and the complete graph on $n$ vertices, respectively, and we denote the complete bipartite graph on two disjoint sets of cardinalities $m$ and $n$ by $K_{m,n}:=mK_1\vee nK_1$.

2. Generic realizability of 0–1 matrices

In this section, we will prove that any 0–1 multiplicity matrix that fits a graph G is generically realizable for G.

Throughout this section, we fix $m\in\mathbb{N}$, real numbers $\lambda_1<\lambda_2<\cdots<\lambda_m$, and the diagonal matrix $\Lambda:=\mathrm{diag}(\lambda_1,\dots,\lambda_m)$. Further, let $E_\Lambda$ denote the smooth manifold of all $m\times m$ symmetric matrices with eigenvalues $\lambda_1,\dots,\lambda_m$. This manifold was studied in [Citation11], where the following lemma was proven in the special case that $f(\{i,j\})=1$ for all $i,j$. A nearly identical argument yields the more general result we require, and we give the details for completeness. (In fact, yet further generalizations may be proven, although we will not need those here.)

Lemma 2.1

Let $G$ be a connected graph of order $m$. Given a function $f:E(G)\to\mathbb{N}$ and $t\in\mathbb{R}$, consider the family of manifolds given by
(1)   $M_{f,G}(t):=\{A=(a_{ij})\in S(G): a_{ij}=t^{f(\{i,j\})}\ \text{for }\{i,j\}\in E(G)\}.$
For every $\epsilon>0$, there exist $t_0>0$ and a matrix $A\in M_{f,G}(t_0)\cap E_\Lambda$ with the SSP so that $\|A-\Lambda\|<\epsilon$.

Proof.

In this proof, we use definitions and notations from [Citation11]. In particular, $N_{\mathcal M}.X$ denotes the normal space to a smooth submanifold $\mathcal M$ of the $m\times m$ symmetric matrices at some $X\in\mathcal M$, and two such manifolds $\mathcal M$ and $\mathcal M'$ intersect transversally at $X\in\mathcal M\cap\mathcal M'$ if $N_{\mathcal M}.X\cap N_{\mathcal M'}.X=\{0\}$.

Let $M(t):=M_{f,G}(t)$ be the smooth family of manifolds of $m\times m$ symmetric matrices defined in (1). Note that $M(0)$ is the set of diagonal matrices. By [Citation11], we have $N_{E_\Lambda}.\Lambda=\{X: X=X\circ I_m\}$ and $N_{M(0)}.\Lambda=\{X: X\circ I_m=0\}$. Since these two normal spaces have trivial intersection, the manifolds $M(0)$ and $E_\Lambda$ intersect transversally at $\Lambda$.

By [Citation11, Theorem 3], there exist $r>0$ and a continuous function $F:(-r,r)\to E_\Lambda$ such that $F(0)=\Lambda$ and, for $t\in(-r,r)$, the manifolds $E_\Lambda$ and $M(t)$ intersect transversally at $F(t)$. Hence, for any $\epsilon>0$ and sufficiently small $t_0>0$, the matrix $A:=F(t_0)$ satisfies $\|A-\Lambda\|<\epsilon$ and $A\in M(t_0)\cap E_\Lambda$. To see that $A$ has the SSP, we have to prove that $S(G)$ and $E_\Lambda$ intersect transversally at $A$ (see [Citation11, page 11]). Since $M(t_0)\subseteq S(G)$, it follows that $N_{S(G)}.A\subseteq N_{M(t_0)}.A$. Further, since $E_\Lambda$ and $M(t_0)$ intersect transversally, we have $N_{S(G)}.A\cap N_{E_\Lambda}.A\subseteq N_{M(t_0)}.A\cap N_{E_\Lambda}.A=\{0\}$, proving that $S(G)$ and $E_\Lambda$ intersect transversally at $A$.

Lemma 2.2

For $n\in\mathbb{N}$, let $A_n\in E_\Lambda$ and let $U_n$ be an orthogonal matrix with non-negative diagonal entries satisfying $U_n^\top A_nU_n=\Lambda$. If $A_n\to\Lambda$ as $n\to\infty$, then $U_n\to I_m$ as $n\to\infty$.

Proof.

For each $k\in[m]$, let $p_k$ be a polynomial with $p_k(\lambda_\ell)=\delta_{k,\ell}$ for $\ell\in[m]$. Then $p_k(A_n)$ is the orthogonal projection onto the $\lambda_k$-eigenspace of $A_n$. Hence
$p_k(A_n)=p_k(U_n\Lambda U_n^\top)=U_np_k(\Lambda)U_n^\top=(U_ne_k)(U_ne_k)^\top,$
while $p_k(\Lambda)=e_ke_k^\top$. In particular, $(U_ne_k)_k(U_ne_k)_\ell=p_k(A_n)_{k,\ell}\to p_k(\Lambda)_{k,\ell}=\delta_{k,\ell}$. Since $(U_ne_k)_k\ge0$, this implies that $(U_ne_k)_k\to1$, which implies in turn that $(U_ne_k)_\ell\to0$ if $\ell\neq k$. Hence $U_ne_k\to e_k$, proving $U_n\to I_m$.
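As a quick numerical illustration of Lemma 2.2 (ours, not part of the argument): conjugating $\Lambda$ by a small orthogonal matrix produces $A_n\in E_\Lambda$ close to $\Lambda$, and re-diagonalizing with the non-negative-diagonal sign convention yields $U_n$ close to the identity.

```python
import numpy as np

rng = np.random.default_rng(0)
lam = np.array([1.0, 2.0, 3.5, 5.0])      # distinct eigenvalues lambda_1 < ... < lambda_4
Lam = np.diag(lam)

def diagonalizer_nonneg_diag(A):
    """Return an orthogonal U with non-negative diagonal entries such that
    U.T @ A @ U is diagonal with the eigenvalues in increasing order."""
    _, U = np.linalg.eigh(A)
    signs = np.sign(np.where(np.diag(U) == 0, 1.0, np.diag(U)))
    return U * signs                        # flip eigenvector signs columnwise

for eps in [1e-1, 1e-2, 1e-3]:
    S = rng.standard_normal((4, 4))
    K = eps * (S - S.T)                     # small skew-symmetric matrix
    Q = (np.eye(4) - K) @ np.linalg.inv(np.eye(4) + K)   # Cayley transform: orthogonal, near I
    A_n = Q @ Lam @ Q.T                     # A_n lies in E_Lambda and tends to Lambda
    U_n = diagonalizer_nonneg_diag(A_n)
    print(eps, np.linalg.norm(U_n - np.eye(4)))   # distance to I shrinks with eps
```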

In the previous lemma, we showed that the off-diagonal elements of $U_n$ decay to zero. We now show that when we also have $A_n\in M_{f,G}(t_n)$ with $t_n\to0$, it is sometimes possible to precisely determine the rate of decay of the off-diagonal elements of $U_n$.

Lemma 2.3

Let $G$ be a tree of order $m$. Given $g:E(G)\to\mathbb{N}$, let $N_0\in\mathbb{N}$ with $N_0>\max\{g(e):e\in E(G)\}\cdot\mathrm{diam}(G)$ and let $f:E(G)\to\mathbb{N}$, $f(e):=N_0+g(e)$.

For $i,j\in V(G)$, let $P(i,j)$ be the subgraph of $G$ consisting of the shortest path from $i$ to $j$, let $c(i,j):=\prod_{k\in V(P(i,j))\setminus\{j\}}(\lambda_j-\lambda_k)^{-1}$, and let $s(i,j):=\sum_{e\in E(P(i,j))}f(e)$. (In particular, $c(i,j)\neq0$, $c(j,j):=1$ and $s(j,j):=0$.)

Suppose $t_n>0$ with $t_n\to0$, and $A_n\in M_{f,G}(t_n)\cap E_\Lambda$ with $A_n\to\Lambda$ as $n\to\infty$. For $n\in\mathbb{N}$, let $U_n=(u_n(i,j))$ be an orthogonal matrix with non-negative diagonal entries so that $U_n^\top A_nU_n=\Lambda$. Then $u_n(i,j)\,t_n^{-s(i,j)}\to c(i,j)$ as $n\to\infty$, for all $i,j\in V(G)$.

Proof.

Assuming $V(G)=[m]$ and $A_n=(a_n(i,j))\in M_{f,G}(t_n)$, it follows that $a_n(i,j)=t_n^{f(\{i,j\})}$ for each $\{i,j\}\in E(G)$. Since $0<t_n\to0$ and $A_n\to\Lambda$, we may assume (by taking $n$ sufficiently large) that $0<t_n<1$ and, for $i\neq j$, $a_n(i,i)-\lambda_j$ is bounded away from zero.

Fix $j_0\in[m]$ and consider the vector $U_ne_{j_0}=(u_n(i,j_0))_{i\in[m]}$. This is a normalized eigenvector of $A_n$ with eigenvalue $\lambda_{j_0}$. Equivalently, for $i\in[m]$, we have
(2)   $(\lambda_{j_0}-a_n(i,i))\,u_n(i,j_0)=\sum_{k\in N_G(i)}t_n^{f(\{i,k\})}u_n(k,j_0),$
where $N_G(i)$ is the set of neighbours of $i$ in $G$.

For $i\in V(G)$, let $d(i,j_0):=|E(P(i,j_0))|\ge0$ denote the distance in $G$ from $i$ to $j_0$. We claim that for $0\le x\le\mathrm{diam}(G)$:

  (a) if $i\in V(G)$ with $d(i,j_0)>x$, then $u_n(i,j_0)\,t_n^{-xN_0}\to0$ as $n\to\infty$, and

  (b) if $i\in V(G)$ with $d(i,j_0)=x$, then $u_n(i,j_0)\,t_n^{-s(i,j_0)}\to c(i,j_0)$ as $n\to\infty$.

We will establish this claim by induction on x.

Since $A_n$ converges to $\Lambda$, the $j_0$th column of $U_n$ converges to $e_{j_0}$ by Lemma 2.2. This implies that the claim holds for $x=0$.

Now assume inductively that the claim holds for all $x$ with $0\le x\le x_0<\mathrm{diam}(G)$. We proceed to prove that it holds for $x=x_0+1$ as well. First consider claim (a), and suppose $i\in V(G)$ with $d(i,j_0)>x_0+1$. Rearranging (2), we obtain
$u_n(i,j_0)\,t_n^{-(x_0+1)N_0}=\frac{1}{\lambda_{j_0}-a_n(i,i)}\sum_{k\in N_G(i)}u_n(k,j_0)\,t_n^{f(\{i,k\})-(x_0+1)N_0}.$
For any $e\in E(G)$, we have $f(e)>N_0$, so $(x_0+1)N_0-f(e)<x_0N_0$, and
$\big|u_n(i,j_0)\,t_n^{-(x_0+1)N_0}\big|\le\frac{1}{|\lambda_{j_0}-a_n(i,i)|}\sum_{k\in N_G(i)}\big|u_n(k,j_0)\,t_n^{f(\{i,k\})-(x_0+1)N_0}\big|\le\frac{1}{|\lambda_{j_0}-a_n(i,i)|}\sum_{k\in N_G(i)}\big|u_n(k,j_0)\,t_n^{-x_0N_0}\big|.$
Since $d(i,j_0)>x_0+1$, we have $d(k,j_0)>x_0$ for all $k\in N_G(i)$, hence by part (a) of the claim for $x=x_0$, each term in the sum above converges to 0. Since we also have $\lambda_{j_0}-a_n(i,i)\to\lambda_{j_0}-\lambda_i\neq0$, part (a) of the claim holds for $x=x_0+1$. This proves (a) for all $x$ with $0\le x\le\mathrm{diam}(G)$.

Now consider part (b). Suppose $d(i,j_0)=x_0+1$. Since $G$ is a tree, there exists precisely one $k_0\in N_G(i)$ with $d(k_0,j_0)=x_0$, and by the definition of $s$, we have $s(i,j_0)=f(\{i,k_0\})+s(k_0,j_0)$. We have $i\neq j_0$, so $\lambda_{j_0}-a_n(i,i)\neq0$ and we can rearrange (2) as follows:
(3)   $u_n(i,j_0)\,t_n^{-s(i,j_0)}=\frac{1}{\lambda_{j_0}-a_n(i,i)}\Big(u_n(k_0,j_0)\,t_n^{-s(k_0,j_0)}+\sum_{k\in N_G(i)\setminus\{k_0\}}u_n(k,j_0)\,t_n^{f(\{i,k\})-s(i,j_0)}\Big).$
For every $k\in N_G(i)\setminus\{k_0\}$, we have
$s(i,j_0)-f(\{i,k\})=\Big(\sum_{e\in E(P(i,j_0))}f(e)\Big)-f(\{i,k\})=(x_0+1)N_0+\Big(\sum_{e\in E(P(i,j_0))}g(e)\Big)-N_0-g(\{i,k\})<x_0N_0+(x_0+1)\max\{g(e):e\in E(G)\}<(x_0+1)N_0.$
Hence, by part (a) of the claim, all the terms under the sum in (3) converge to zero. Taking limits and using part (b) of the claim for $x=x_0$ shows that $u_n(i,j_0)\,t_n^{-s(i,j_0)}\to(\lambda_{j_0}-\lambda_i)^{-1}c(k_0,j_0)=c(i,j_0)$.
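The combinatorial data $s(i,j)$ and $c(i,j)$ appearing in Lemma 2.3 are straightforward to tabulate. The following sketch (ours, with an arbitrary small choice of tree, weights $g$ and eigenvalues) computes them for a path on four vertices.

```python
import numpy as np

# The path 0-1-2-3 as a small tree, with g-values on the edges and f = N0 + g.
edges = [(0, 1), (1, 2), (2, 3)]
g = {frozenset(e): w for e, w in zip(edges, [1, 2, 1])}
diam = 3
N0 = max(g.values()) * diam + 1                 # ensures N0 > max(g) * diam(G)
f = {e: N0 + w for e, w in g.items()}
lam = [1.0, 2.0, 4.0, 7.0]                      # distinct eigenvalues, one per vertex

adj = {v: set() for v in range(4)}
for a, b in edges:
    adj[a].add(b)
    adj[b].add(a)

def tree_path(i, j):
    """Vertex list of the unique i-j path in the tree (BFS from i)."""
    parent, queue = {i: None}, [i]
    while queue:
        v = queue.pop(0)
        for w in adj[v]:
            if w not in parent:
                parent[w] = v
                queue.append(w)
    path = [j]
    while path[-1] != i:
        path.append(parent[path[-1]])
    return path[::-1]

def s(i, j):
    p = tree_path(i, j)
    return sum(f[frozenset((a, b))] for a, b in zip(p, p[1:]))

def c(i, j):
    p = tree_path(i, j)                         # product over the path, excluding j itself
    return float(np.prod([1.0 / (lam[j] - lam[k]) for k in p[:-1]]))

for i in range(4):
    print([(s(i, j), round(c(i, j), 5)) for j in range(4)])
```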

Theorem 2.4

Let $G$ be a connected graph of order $m$, and choose a finite set $Y\subseteq\mathbb{R}^m\setminus\{0\}$. There exists a matrix $A\in S(G)\cap E_\Lambda$ with the SSP and an orthogonal matrix $U$ with $U^\top AU=\Lambda$ so that $Uv$ is nowhere-zero for all $v\in Y$.

Proof.

First we prove the statement for trees, so let $G$ be a tree. Suppose that the statement is false for $G$. Choose $g:E(G)\to\mathbb{N}$ and $f:=N_0+g$ as in Lemma 2.3 so that the corresponding function $s:V(G)\times V(G)\to\mathbb{N}_0$ satisfies $s(i,j)=s(k,\ell)$ if and only if $\{i,j\}=\{k,\ell\}$. (This may be guaranteed by a suitable choice of $g$.) Further, let $A_n\in M_{f,G}(t_n)\cap E_\Lambda$ also be as in Lemma 2.3, with the SSP (which we can guarantee by Lemma 2.1), and let $U_n$ be orthogonal matrices with non-negative diagonal satisfying $U_n^\top A_nU_n=\Lambda$. Since we suppose the statement is false for $G$, for every $n$ there exists $v\in Y$ so that some entry of $U_nv$ is zero. Passing to a subsequence, we may assume that there is some fixed vector $v=(v_1,\dots,v_m)\in Y$ and a fixed index $k\in[m]$ so that $(U_nv)_k=0$ for every $n\in\mathbb{N}$. Let $u_n=(u_n(k,1)\ u_n(k,2)\ \cdots\ u_n(k,m))$ denote the $k$th row of $U_n$, hence we are assuming $u_nv=0$ for all $n\in\mathbb{N}$. We aim to arrive at a contradiction by proving $v=0$. From Lemma 2.3 we know that $U_n\to I_m$, hence $v_k=0$. Let $i_1,i_2,\dots,i_{m-1}\in[m]\setminus\{k\}$ be such that $s(k,i_1)<s(k,i_2)<\cdots<s(k,i_{m-1})$, and let $\ell\in[m-1]$ be minimal with $v_{i_\ell}\neq0$. Now
$0=\frac{1}{t_n^{s(k,i_\ell)}}\,u_nv=\sum_{r=\ell}^{m-1}u_n(k,i_r)\,t_n^{-s(k,i_\ell)}\,v_{i_r}.$
By Lemma 2.3, we know that $u_n(k,i_\ell)\,t_n^{-s(k,i_\ell)}$ converges to a nonzero constant and $u_n(k,i_r)\,t_n^{-s(k,i_\ell)}$ converges to zero for $r>\ell$. Hence $v_{i_\ell}=0$, a contradiction, showing that the statement holds for any tree $G$.

Now, let $G$ be a connected graph, and let $G'$ be a spanning tree of $G$. Since the claim holds for $G'$, there exists $A'\in S(G')$ with the SSP so that $U'^\top A'U'=\Lambda$, $U'^\top U'=I_m$, and $U'v$ is nowhere-zero for all $v\in Y$. Since $A'$ has the SSP, by Theorem 1.2 there exists $A\in S(G)$ with the SSP arbitrarily close to $A'$ and with the same eigenvalues as $A'$. Since the eigenvalues are distinct, the $\lambda_k$-eigenspaces of $A$ and $A'$ are close, so for $A$ and $A'$ sufficiently close, there is an orthogonal matrix $U$ diagonalizing $A$ which is sufficiently close to $U'$ to ensure that $Uv$ is also nowhere-zero for all $v\in Y$.

Theorem 2.5

If G is any graph, then every 0–1 multiplicity matrix which fits G is generically realizable for G.

Proof.

Given such a matrix $V$, label the connected components $G_1,\dots,G_k$ of $G$ so that the multiplicity vector $Ve_i$ fits $G_i$. Every entry of $Ve_i$ is 0 or 1, so by Theorem 2.4, $Ve_i$ is generically realizable for $G_i$. Hence, $V$ is generically realizable for $G$.

If every multiplicity matrix for a graph $G$ is generically realizable for $G$, then we say $G$ is generically realizable. This is a strong requirement for a graph; in particular, it implies that every multiplicity matrix that can be realized by $G$ is spectrally arbitrary for $G$. In fact, the only families of generically realizable graphs previously known are unions of complete graphs [Citation10]. We can now extend this to include paths.

Corollary 2.6

  1. The path $P_n$ is a generically realizable graph, for every $n\in\mathbb{N}$.

  2. If every connected component of a graph G is either a complete graph or a path, then G is generically realizable.

Proof.

The maximum multiplicity of an eigenvalue of a path is 1 [Citation18], so every multiplicity vector for $P_n$ is a 0–1 vector, hence it is generically realizable by Theorem 2.5. Complete graphs are also generically realizable [Citation10], so the second assertion follows immediately.

3. Applications to the inverse eigenvalue problem

In [Citation10], the authors developed an approach to study $q(G)$, where $G$ is the join of two graphs. We now give some applications of the preceding results to this problem.

First we briefly review the setup from [Citation10]. For a matrix $X$ with at least 3 rows, we write $\tilde X$ for the matrix obtained by deleting the first row and the last row of $X$. Let $S^{m\times n}$ denote the set of $m\times n$ matrices with entries in a set $S$, and let $\mathbb{N}_0$ be the set of non-negative integers. Given $r,k,\ell\in\mathbb{N}$ with $r\ge3$, the matrices $V\in\mathbb{N}_0^{r\times k}$ and $W\in\mathbb{N}_0^{r\times\ell}$ are said to be compatible if $\tilde V\mathbf{1}_k=\tilde W\mathbf{1}_\ell$ and $\tilde V^\top\tilde W$ is nowhere-zero. We say the graphs $G$ and $H$ have compatible multiplicity matrices if there exist compatible matrices $V$ and $W$ where $V$ is a multiplicity matrix for $G$ and $W$ is a multiplicity matrix for $H$.
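These conditions are easy to test mechanically. The following sketch (ours; it encodes the fitting and compatibility conditions exactly as stated above, with hypothetical helper names) checks two candidate 0–1 matrices.

```python
import numpy as np

def tilde(X):
    """Delete the first and last rows (X must have at least 3 rows)."""
    return X[1:-1, :]

def fits(V, component_orders):
    """Column sums of V agree with the orders of the connected components."""
    return list(V.sum(axis=0)) == list(component_orders)

def compatible(V, W):
    """Compatibility of multiplicity matrices, following the conditions above."""
    Vt, Wt = tilde(V), tilde(W)
    row_sums_match = np.array_equal(Vt.sum(axis=1), Wt.sum(axis=1))
    overlap_everywhere = np.all(Vt.T @ Wt > 0)
    return bool(row_sums_match and overlap_everywhere)

# Toy check: G = P3 u P3 (orders 3, 3) and H = P4 u P2 (orders 4, 2).
V = np.array([[1, 1],
              [1, 1],
              [1, 1],
              [0, 0]])
W = np.array([[1, 0],
              [1, 1],
              [1, 1],
              [1, 0]])
print(fits(V, [3, 3]), fits(W, [4, 2]), compatible(V, W))   # True True True
```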

Which graphs $G$ and $H$ have $q(G\vee H)=2$? The answer is closely related to the existence of compatible multiplicity matrices.

Theorem 3.1

[Citation10,Theorems 2.5 and 2.14]

Let $G$ and $H$ be two graphs. If $q(G\vee H)=2$, then $G$ and $H$ necessarily have compatible multiplicity matrices.

Moreover, if there exist compatible generically realizable multiplicity matrices $V$ and $W$ for $G$ and $H$, then $q(G\vee H)=2$.

By Corollary 2.6, in the case of unions of paths or complete graphs we can strengthen this to a necessary and sufficient condition.

Corollary 3.2

If $G$ and $H$ are unions of paths or complete graphs, then $q(G\vee H)=2$ if and only if $G$ and $H$ have a pair of compatible multiplicity matrices.

For general graphs, by combining Theorem 2.5 and Theorem 3.1, we obtain the following purely combinatorial sufficient condition for $q(G\vee H)$ to be 2.

Corollary 3.3

Let $G$ and $H$ be two graphs. If there exist compatible 0–1 matrices $V$ and $W$ that fit $G$ and $H$, respectively, then $q(G\vee H)=2$.

In general, deciding whether there exist compatible 0–1 matrices with prescribed column sums seems to be a difficult combinatorial question, which we plan to examine further in upcoming work.

Monfared and Shader [Citation9, Theorem 5.2] proved that $q(G\vee H)=2$ if $G$ and $H$ are connected graphs with $|G|=|H|$. Using Corollary 3.3, it is now simple to generalize their result to the case $\big||G|-|H|\big|\le2$, and also to pairs of disconnected graphs with equal numbers of connected components.

Theorem 3.4

Let $k\in\mathbb{N}$ and suppose $G$ and $H$ are graphs that both have $k$ connected components. If we can label these components of $G$ and $H$ as $G_1,\dots,G_k$ and $H_1,\dots,H_k$, respectively, so that $\big||G_i|-|H_i|\big|\le2$ for each $i\in[k]$, then
$q\Big(\bigcup_{i=1}^kG_i\vee\bigcup_{i=1}^kH_i\Big)=2.$
Moreover, if $m_i+2\ge|H_i|$ for all $i\in[k]$, then $q\big(\bigcup_{i=1}^kK_{m_i}\vee\bigcup_{i=1}^kH_i\big)=2$.

Proof.

Let $p_i:=\min\{|G_i|,|H_i|\}$ for $i\in[k]$, $p:=\max\{p_i:i\in[k]\}$ and $q_i:=\sum_{j\in[p_i]}e_j\in\{0,1\}^p$. Let $E:=(q_1\ q_2\ \cdots\ q_k)\in\{0,1\}^{p\times k}$ and consider
(4)   $V:=\begin{pmatrix}v_1\\E\\v_{p+2}\end{pmatrix}\in\{0,1\}^{(p+2)\times k}\quad\text{and}\quad W:=\begin{pmatrix}w_1\\E\\w_{p+2}\end{pmatrix}\in\{0,1\}^{(p+2)\times k},$
where $v_1,v_{p+2},w_1,w_{p+2}\in\{0,1\}^{1\times k}$ are chosen so that $\mathbf{1}_{p+2}^\top Ve_i=|G_i|$ and $\mathbf{1}_{p+2}^\top We_i=|H_i|$ for all $i\in[k]$. (The existence of such vectors is assured by the condition $\big||G_i|-|H_i|\big|\le2$.) Since $\tilde V=\tilde W=E$ and the first row of $E$ is nowhere-zero, $V$ and $W$ are compatible 0–1 matrices fitting $\bigcup_{i=1}^kG_i$ and $\bigcup_{i=1}^kH_i$, respectively, hence $q\big(\bigcup_{i=1}^kG_i\vee\bigcup_{i=1}^kH_i\big)=2$ by Corollary 3.3.

If $G=\bigcup_{i=1}^kK_{m_i}$, then the assumption $m_i\ge|H_i|-2$ for all $i\in[k]$ implies the existence of compatible matrices $V$ and $W$ as in (4), except that we do not require the first and last rows of $V$ to be 0–1 vectors, i.e. $v_1,v_{p+2}\in\mathbb{N}_0^{1\times k}$. By [Citation10] and Theorem 2.5, $V$ and $W$ are generically realizable multiplicity matrices for $G$ and $H$, respectively, so $q(G\vee H)=2$ by Theorem 3.1.
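The matrices $E$, $V$ and $W$ of display (4) can be produced mechanically from the component orders. The following sketch (ours) carries out the construction used in the first part of the proof, under the assumption $\big||G_i|-|H_i|\big|\le2$.

```python
import numpy as np

def build_compatible_pair(orders_G, orders_H):
    """Build the 0-1 matrices V, W of display (4) from the component orders,
    assuming | |G_i| - |H_i| | <= 2 for every i (a sketch of the proof's construction)."""
    k = len(orders_G)
    p_i = [min(g, h) for g, h in zip(orders_G, orders_H)]
    p = max(p_i)
    E = np.zeros((p, k), dtype=int)
    for i, pi in enumerate(p_i):
        E[:pi, i] = 1                      # q_i = e_1 + ... + e_{p_i}
    def pad(orders):
        top = np.zeros(k, dtype=int)
        bottom = np.zeros(k, dtype=int)
        for i, (order, pi) in enumerate(zip(orders, p_i)):
            extra = order - pi             # 0, 1 or 2 by assumption
            top[i] = 1 if extra >= 1 else 0
            bottom[i] = 1 if extra == 2 else 0
        return np.vstack([top, E, bottom])
    return pad(orders_G), pad(orders_H)

# Components of G have orders (3, 5); components of H have orders (4, 3).
V, W = build_compatible_pair([3, 5], [4, 3])
print(V)
print(W)
print(V.sum(axis=0), W.sum(axis=0))       # column sums recover the component orders
```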

For connected graphs $G$ and $H$, we have just seen that $q(G\vee H)=2$ if $\big||G|-|H|\big|\le2$. We now show that we generally cannot relax this inequality.

Example 3.5

Take $H=P_n$ above, and suppose $G$ is a connected graph with $|G|\le n$. In this case, let us prove that the condition $\big||G|-|H|\big|\le2$, i.e. $|G|\ge n-2$, is also necessary for $q(G\vee H)$ to be equal to 2.

If $q(G\vee P_n)=2$, then by Theorem 3.1 there exist compatible multiplicity matrices $V\in\mathbb{N}_0^{(r+2)\times1}$ and $W\in\mathbb{N}_0^{(r+2)\times1}$ for $G$ and $P_n$, respectively. The maximum multiplicity of an eigenvalue of a path is 1, so $W\in\{0,1\}^{(r+2)\times1}$. By compatibility, we have $\tilde V=\tilde W$ and, without loss of generality, we can delete any zeros in these column vectors to reduce to the case
$V=\begin{pmatrix}v_1\\\mathbf{1}_r\\v_{r+2}\end{pmatrix}\quad\text{and}\quad W=\begin{pmatrix}w_1\\\mathbf{1}_r\\w_{r+2}\end{pmatrix},$
where $v_1,v_{r+2}\in\mathbb{N}_0$ and $w_1,w_{r+2}\in\{0,1\}$. Since these matrices fit $G$ and $P_n$, respectively, we have $r=n-(w_1+w_{r+2})\ge n-2$ and $|G|\ge r\ge n-2$, as required.

Recall that if $T$ is a tree and $A\in S(T)$, then the extreme eigenvalues of $A$ have multiplicity one. Hence, by a similar argument to that of the previous paragraph, we have $q(T_1\vee T_2)=2$ if and only if $\big||T_1|-|T_2|\big|\le2$, for any trees $T_1,T_2$.

We now turn to another class of applications of Theorem 2.5, in which we obtain certain achievable spectra of partial joins. We require additional terminology given below.

Recall that if $G$ is a graph and $W\subseteq V(G)$, then the vertex boundary of $W$ in $G$ is the set of all vertices in $V(G)\setminus W$ which are joined to some vertex of $W$ by an edge of $G$.

Suppose that $G_1$ and $G_2$ are vertex-disjoint graphs and let $V_i\subseteq V(G_i)$ for $i=1,2$. The partial join of the graphs $G_1$ and $G_2$ (with respect to $V_1$ and $V_2$) is the graph $(G_1,V_1)\vee(G_2,V_2)$ formed from $G_1\cup G_2$ by joining each vertex of $V_1$ to each vertex of $V_2$. If $V_2=V(G_2)$, then we write $(G_1,V_1)\vee G_2:=(G_1,V_1)\vee(G_2,V(G_2))$.

Suppose $G$ and $G'$ are graphs so that $G'$ is obtained from $G$ by deleting $s$ vertices and adding an arbitrary number of edges (in particular, $|G|=|G'|+s$). If a matrix $A'\in S(G')$ has the SSP, then by the Minor Monotonicity Theorem [Citation6, Theorem 6.13] there exists a matrix $A\in S(G)$ with the SSP, such that $\sigma(A)=\sigma(A')\cup\{\mu_1,\dots,\mu_s\}$, where $\mu_i\neq\mu_j$ for distinct $i,j\in[s]$ and $\mu_i\notin\sigma(A')$ for $i\in[s]$. In the next result, we provide a statement of a similar flavour that does not depend on the SSP. The construction given in the lemma was first developed in the context of the nonnegative inverse eigenvalue problem [Citation19,Citation20].

Lemma 3.6

Let $G$ be a graph, $V(G)=V_1\cup V_2$ a partition of $V(G)$, and let $X\subseteq V_1$ be the vertex boundary of $V_2$ in $G$. Suppose $H$ is any connected graph with $|H|\ge|V_2|$ and consider the graph $G':=(G[V_1],X)\vee H$.

Let $M\in S(G)$ and suppose that $\sigma$ is a multiset of real numbers so that $\sigma(M[V_2])\cup\sigma$ is generically realizable for $H$. Then there exists $N\in S(G')$ with spectrum $\sigma(N)=\sigma(M)\cup\sigma$.

Proof.

Let us denote $m_i:=|V_i|$ for $i=1,2$ and $k:=|X|$. Assume without loss of generality that $V(G)=[m_1+m_2]$, $V_1=[m_1]$ and $X=m_1-k+[k]$. We have $M=\begin{pmatrix}A & B^\top\\ B & C\end{pmatrix}\in S(G)$, with $A=M[V_1]$, $C=M[V_2]$, and $\sigma(C)=\{\lambda_1,\dots,\lambda_{m_2}\}$. Let $Q\in\mathbb{R}^{m_2\times m_2}$ be an orthogonal matrix such that $QCQ^\top=\Lambda_0:=\mathrm{diag}(\lambda_1,\dots,\lambda_{m_2})$. Write $\sigma=\{\mu_1,\dots,\mu_t\}$, and let $\Lambda':=\mathrm{diag}(\mu_1,\dots,\mu_t)$. Moreover, let $N':=M\oplus\Lambda'$, a matrix with eigenvalues $\sigma(M)\cup\sigma$.

Since $X=m_1-k+[k]$ is the vertex boundary of $V_2$ in $G$, we have $B=\begin{pmatrix}0_{m_2\times(m_1-k)} & B_0\end{pmatrix}\in\mathbb{R}^{m_2\times m_1}$, where no column of $B_0\in\mathbb{R}^{m_2\times k}$ is zero. Let us denote the set of columns of $QB_0$ by $\mathcal{X}\subseteq\mathbb{R}^{m_2}$, and let $Y\subseteq\mathbb{R}^{m_2+t}$ denote the set of vectors obtained from elements of $\mathcal{X}$ by appending $t$ zeros. Then $|\mathcal{X}|=|Y|=k$. Since $Q$ is orthogonal, all vectors in $Y$ are nonzero.

Since $\sigma(M[V_2])\cup\sigma$ is generically realizable for $H$, there exist a matrix $C'\in S(H)$ with $\sigma(C')=\sigma(C)\cup\sigma$ and an orthogonal matrix $U\in\mathbb{R}^{(m_2+t)\times(m_2+t)}$, such that $U^\top C'U=\Lambda_0\oplus\Lambda'$ and $Uy$ is a nowhere-zero vector for all $y\in Y$.

Let
$U^*:=I_{m_1}\oplus\big(U(Q\oplus I_t)\big)\quad\text{and}\quad N:=U^*N'U^{*\top}=\begin{pmatrix}A & Z^\top\\ Z & C'\end{pmatrix},\quad\text{where}\quad Z=\begin{pmatrix}0_{(m_2+t)\times(m_1-k)} & U\begin{pmatrix}QB_0\\ 0_{t\times k}\end{pmatrix}\end{pmatrix}.$
Since $Uy$ is a nowhere-zero vector for every $y\in Y$, the matrix $U\begin{pmatrix}QB_0\\ 0_{t\times k}\end{pmatrix}$ is nowhere-zero. Hence, $N\in S(G')$, and $\sigma(N)=\sigma(N')=\sigma(M)\cup\sigma$ as desired.

Lemma 3.6 can be used to build explicit examples or to provide more general results. Implementing it faces two issues: first, we need some information on $\sigma(M[V_2])$, and second, we need to prove generic realizability of $\sigma(M[V_2])\cup\sigma$ for $H$. In our first application, we rely on Theorem 2.5 for generic realizability.

Theorem 3.7

Let $G$ be a graph, $V(G)=V_1\cup V_2$ a partition of $V(G)$, and let $X\subseteq V_1$ be the vertex boundary of $V_2$ in $G$. Suppose $H$ is any connected graph with $|H|\ge|V_2|$ and consider the graph $G':=(G[V_1],X)\vee H$.

If there exists $M\in S(G)$ so that $M[V_2]$ has distinct eigenvalues $\lambda_1,\dots,\lambda_{|V_2|}$, then there exists $N\in S(G')$ with spectrum $\sigma(N)=\sigma(M)\cup\{\mu_1,\dots,\mu_t\}$, where $t=|H|-|V_2|$ and $\mu_1,\dots,\mu_t$ are any distinct real numbers satisfying $\mu_i\neq\lambda_j$ for all $i\in[t]$, $j\in[|V_2|]$.

Theorem 3.7 allows us, in some sense, to replace a submatrix with distinct eigenvalues with a matrix corresponding to an arbitrary connected graph of equal or bigger size. Note that the technical requirement that $M[V_2]$ has distinct eigenvalues is automatically satisfied when $G[V_2]$ is a path. This observation allows us to improve an upper bound for the number of distinct eigenvalues of joins of graphs that is given in [Citation15, Section 4.2], where it is proved that for connected graphs $G$ and $H$, $q(G\vee H)$ is bounded above by $2+\big||G|-|H|\big|$.

Corollary 3.8

For any $n\in\mathbb{N}_0$ and a graph $H$ with $|H|\ge n$ we have $q(G\vee H)\le q(G\vee P_n)+|H|-n$. In particular, if $G$ and $H$ are both connected, then $q(G\vee H)\le\max\{2,\big||G|-|H|\big|\}$.

Proof.

The first part follows easily from Theorem 3.7. Assume now that both $G$ and $H$ are connected. If $\big||H|-|G|\big|\le2$, then $q(G\vee H)=2$ by Theorem 3.4. To cover the case $|H|\ge|G|+3$, we note that Theorem 3.4 implies $q(G\vee P_{|G|+2})=2$ and hence we have $q(G\vee H)\le2+|H|-(|G|+2)=|H|-|G|$. By symmetry the statement follows.

Next we explore applications of Lemma 3.6 in the special case where $G$ is a cycle. Cycles are a nice family to use for this purpose, since every proper connected induced subgraph of a cycle is a path, and the IEP-G for cycles is known to have the following solution.

Lemma 3.9

[Citation3,Theorem 3.3]

Let $\lambda_1\le\lambda_2\le\cdots\le\lambda_n$ be a list of real numbers, $n\ge3$. Then $\{\lambda_1,\dots,\lambda_n\}$ is the multiset of eigenvalues of a matrix in $S(C_n)$ if and only if
$\lambda_1\le\lambda_2<\lambda_3\le\lambda_4<\cdots\quad\text{or}\quad\lambda_1<\lambda_2\le\lambda_3<\lambda_4\le\cdots.$

Example 3.10

Fix $n\ge2$ and let $X$ be the set of degree one vertices of $P_n$. Given a connected graph $H$, consider $J:=(P_n,X)\vee H$ (see Figure 1). We claim that if $s\in\mathbb{N}_0$ and $2s\le|J|=n+|H|$, then $\{2^{(s)},1^{(|J|-2s)}\}$ is an unordered multiplicity list for some matrix in $S(J)$. Let $G=C_{n+|H|}$. By Lemma 3.9, there is a matrix in $S(G)$ with unordered multiplicity list $\{2^{(s)},1^{(|J|-2s)}\}$. Partition the vertices of $G$ as $V_1\cup V_2$ so that $P_n=G[V_1]$ and $P_{|H|}=G[V_2]$, and apply Theorem 3.7.

Note that Lemma 3.6 allows us to increase the multiplicities of eigenvalues of $M$, provided the technical conditions of the lemma are satisfied. For example, the eigenvalues $\{\mu_1,\dots,\mu_t\}$ that we add in Lemma 3.6 can be chosen to agree with eigenvalues in $\sigma(M)$, hence increasing their multiplicity, provided they are not also eigenvalues of $M[V_2]$. When this condition is not satisfied, Theorem 2.5 cannot be applied to assure generic realizability. Examples of both eventualities are given in the two examples below. In the first example, we exhibit ordered multiplicity lists that achieve $q(G)$, where $G$ is a wheel graph of order $4m-1$.

Example 3.11

Let us apply Lemma 3.6 to $G=C_{2m}$, $X=V_1=\{2m\}$ and $V_2=[2m-1]$. Choose any real numbers $\lambda_1<\lambda_2<\cdots<\lambda_m$. By Lemma 3.9, there exists a matrix $M\in S(C_{2m})$ with $\sigma(M)=\{\lambda_1^{(2)},\lambda_2^{(2)},\dots,\lambda_m^{(2)}\}$. Let $B=M(2m)\in S(P_{2m-1})$ be the leading principal $(2m-1)\times(2m-1)$ submatrix of $M$. By interlacing and [Citation18], we have $\sigma(B)=\{\lambda_1,\mu_1,\lambda_2,\mu_2,\dots,\mu_{m-1},\lambda_m\}$, where $\mu_i\in(\lambda_i,\lambda_{i+1})$ for $i\in[m-1]$.

Let $H=C_{4m-2}$ and let $\sigma=\sigma(B)$. By Lemma 3.9, there exists $A\in S(C_{4m-2})$ with $\sigma(A)=\sigma(B)\cup\sigma=\{\lambda_1^{(2)},\mu_1^{(2)},\lambda_2^{(2)},\mu_2^{(2)},\dots,\mu_{m-1}^{(2)},\lambda_m^{(2)}\}$. Let us prove that we can choose a nowhere-zero eigenbasis for $A$. Every eigenspace $V$ of $A$ is spanned by two linearly independent vectors $v=(v_i)$ and $w=(w_i)$. If $v_i=w_i=0$ for some $i\in[4m-2]$, then $v(i)$ and $w(i)$ are linearly independent eigenvectors of $A(i)\in S(P_{4m-3})$ corresponding to the same eigenvalue, which contradicts the fact that every matrix corresponding to a path has simple eigenvalues [Citation18]. Hence for each $i$ either $v_i\neq0$ or $w_i\neq0$, so we can choose their linear combinations to be nowhere-zero, and hence we can choose a nowhere-zero eigenbasis for $A$.

Since $A$ has a nowhere-zero eigenbasis and no simple eigenvalues, the proof of [Citation10, Proposition 3.1] shows that $\sigma(A)=\sigma(B)\cup\sigma$ is generically realizable for $C_{4m-2}$. By Lemma 3.6, there exists a matrix $N\in S(K_1\vee C_{4m-2})$ with spectrum $\sigma(M)\cup\sigma=\{\lambda_1^{(3)},\mu_1,\lambda_2^{(3)},\mu_2,\dots,\mu_{m-1},\lambda_m^{(3)}\}$. Hence the ordered multiplicity list $(3,1,3,1,\dots,3)$ is realizable for $K_1\vee C_{4m-2}$, and the graphs $K_1\vee C_{4m-2}$ are wheel graphs.

Note that Lemma 3.9 prohibits an odd number of eigenvalues between any two double eigenvalues of a matrix corresponding to a cycle. Hence, by interlacing, a matrix corresponding to $K_1\vee C_{4m-2}$ cannot have an even number (including zero) of eigenvalues between any two triple eigenvalues. Therefore, with the above construction we have found a matrix corresponding to $K_1\vee C_{4m-2}$ with the maximal multiplicity of an eigenvalue and the minimal number of distinct eigenvalues. In particular, $q(K_1\vee C_{4m-2})=2m-1$.

Example 3.12

If $m\ge2$ and $H$ is any connected graph with $3m-2$ vertices, then the multiplicity list $\{3^{(m)}\}$ is spectrally arbitrary for $K_2\vee H$ and, in particular, $q(K_2\vee H)\le m$. To see this, take $G=C_{2m}$, $X=V_1=\{2m,2m-1\}$ and $V_2=[2m-2]$. For any $\lambda_1<\cdots<\lambda_m$, let $M\in S(C_{2m})$ with $\sigma(M)=\{\lambda_1^{(2)},\lambda_2^{(2)},\dots,\lambda_m^{(2)}\}$. Then $\sigma(M(2m))=\{\lambda_1,\mu_1,\lambda_2,\mu_2,\dots,\mu_{m-1},\lambda_m\}$, where $\mu_i\in(\lambda_i,\lambda_{i+1})$ for $i\in[m-1]$ (since a path has only simple eigenvalues). Now, $M[V_2]$ is a principal submatrix of $M(2m)$, so by [Citation21, Problem 4.3.P17], the eigenvalues of $M[V_2]$ strictly interlace those of $M(2m)$. In particular, the eigenvalues of $M[V_2]$ do not intersect $\sigma:=\{\lambda_1,\dots,\lambda_m\}$. Applying Theorem 3.7 we see that the spectrum $\sigma(M)\cup\sigma=\{\lambda_1^{(3)},\lambda_2^{(3)},\dots,\lambda_m^{(3)}\}$ is realized by a matrix in $S(K_2\vee H)$, as required.

Remark 3.13

In particular, by interlacing, the maximal multiplicity of an eigenvalue of a matrix corresponding to $P_n\vee K_2$ is equal to 3, and hence $q(P_{3m-2}\vee K_2)=m$. This result complements [Citation15, Example 4.5], where it is proved that $q(P_s\vee K_1)=\left\lceil\frac{s+1}{2}\right\rceil$. It would be interesting to determine $q(P_s\vee K_t)$ in general.

In Theorem 3.7, we can make use of graphs for which the IEP-G is solved (or better understood) to construct new realizable multiplicity lists for partial joins; we illustrate this idea also for generalized stars [Citation1].

Example 3.14

Let $k\in\mathbb{N}$ and let $H$ be an arbitrary connected graph. If $\underline m=\{m_1,m_2,\dots,m_\ell,1^{(n)}\}$ is such that $m_i\ge2$, $\ell<n$, and $\sum_{i\in[\ell]}m_i=|H|-n+k+1\le k+\ell$, then $\underline m$ is an unordered multiplicity list for $K_1\vee(kK_1\cup H)$ (see Figure 2).

To prove this, let $G=GS_{1,\dots,1,|H|}$ be a generalized star with $k$ arm lengths equal to 1 and one arm length equal to $|H|$, so that $G[V_1]=K_{1,k}$ and $G[V_2]=P_{|H|}$, and let $v\in V(G)$ be the high degree vertex of $G$. By [Citation1, Theorems 14 and 15], any matrix in $S(G)$ has an unordered multiplicity list $\underline m$ of the above form with $(m_1,\dots,m_\ell)$ majorized by $(k+1,1^{(|H|-1)})$, hence $\sum_{i\in[\ell]}m_i\le k+\ell$; conversely, every list $\underline m$ satisfying the conditions above is realized by some matrix in $S(G)$. By Theorem 3.7, there exists a matrix in $S\big((K_{1,k},\{v\})\vee H\big)=S\big(K_1\vee(kK_1\cup H)\big)$ with the same eigenvalues as a given matrix in $S(G)$.

Acknowledgements

Polona Oblak acknowledges the financial support from the Slovenian Research Agency (research core funding no. P1-0222).

Disclosure statement

No potential conflict of interest was reported by the author(s).

Additional information

Funding

This work was supported by Javna Agencija za Raziskovalno Dejavnost RS[P1-0222].

References

  • Johnson CR, Leal-Duarte A, Saiago CM. Inverse eigenvalue problems and lists of multiplicities of eigenvalues for matrices whose graph is a tree: the case of generalized stars and double generalized stars. Linear Algebra Appl. 2003;373:311–330. doi: 10.1016/S0024-3795(03)00582-2
  • Ferguson Jr WE. The construction of Jacobi and periodic Jacobi matrices with prescribed spectra. Math Comp. 1980;35(152):1203–1220. doi:10.1090/S0025-5718-1980-0583498-3.
  • Fernandes R, da Fonseca CM. The inverse eigenvalue problem for Hermitian matrices whose graphs are cycles. Linear Multilinear Algebra. 2009;57(7):673–682. doi: 10.1080/03081080802187870
  • Lin JC-H, Oblak P, Šmigoc H. On the inverse eigenvalue problem for block graphs. Linear Algebra Appl. 2021;631:379–397. doi: 10.1016/j.laa.2021.09.008
  • Johnson CR, Wakhare T. The inverse eigenvalue problem for linear trees. Discrete Math. 2022;345(4):112737. doi: 10.1016/j.disc.2021.112737
  • Barrett W, Butler S, Fallat SM, et al. The inverse eigenvalue problem of a graph: multiplicities and minors. J Combin Theory Ser B. 2020;142:276–306. doi: 10.1016/j.jctb.2019.10.005
  • Ahn J, Alar C, Bjorkman B, et al. Ordered multiplicity inverse eigenvalue problem for graphs on six vertices. Electron J Linear Algebra. 2021;37:316–358. doi: 10.13001/ela.2021.5005
  • Adm M, Fallat S, Meagher K, et al. Achievable multiplicity partitions in the inverse eigenvalue problem of a graph. Spec Matrices. 2019;7:276–290. doi: 10.1515/spma-2019-0022
  • Monfared KH, Shader BL. The nowhere-zero eigenbasis problem for a graph. Linear Algebra Appl. 2016;505:296–312. doi: 10.1016/j.laa.2016.04.028
  • Levene RH, Oblak P, Šmigoc H. Orthogonal symmetric matrices and joins of graphs. Linear Algebra Appl. 2022;652:213–238. doi: 10.1016/j.laa.2022.07.007
  • Barrett W, Fallat S, Hall HT, et al. Generalizations of the strong Arnold property and the minimum number of distinct eigenvalues of a graph. Electron J Combin. 2017;24(2): Paper 2.40. doi: 10.37236/5725
  • Leal-Duarte A, Johnson CR. On the minimum number of distinct eigenvalues for a symmetric matrix whose graph is a given tree. Math Inequal Appl. 2002;5(2):175–180. doi: 10.7153/mia-05-19
  • Adm M, Fallat S, Meagher K, et al. Corrigendum to ‘Achievable multiplicity partitions in the inverse eigenvalue problem of a graph’ [Spec Matrices. 2019;7:276–290]. Spec Matrices. 2020;8(1):235–241. doi: 10.1515/spma-2020-0117
  • Ahmadi B, Alinaghipour F, Cavers MS, et al. Minimum number of distinct eigenvalues of graphs. Electron J Linear Algebra. 2013;26:673–691. doi: 10.13001/1081-3810.1679
  • Bjorkman B, Hogben L, Ponce S, et al. Applications of analysis to the determination of the minimum number of distinct eigenvalues of a graph. Pure Appl Funct Anal. 2018;3(4):537–563.
  • da Fonseca CM. A lower bound for the number of distinct eigenvalues of some real symmetric matrices. Electron J Linear Algebra. 2010;21:3–11. doi: 10.13001/1081-3810.1409
  • Levene RH, Oblak P, Šmigoc H. A Nordhaus–Gaddum conjecture for the minimum number of distinct eigenvalues of a graph. Linear Algebra Appl. 2019;564:236–263. doi: 10.1016/j.laa.2018.12.001
  • Fiedler M. A characterization of tridiagonal matrices. Linear Algebra Appl. 1969;2:191–197. doi: 10.1016/0024-3795(69)90027-5
  • Laffey TJ, Šmigoc H. Spectra of principal submatrices of nonnegative matrices. Linear Algebra Appl. 2008;428(1):230–238. doi: 10.1016/j.laa.2007.06.029
  • Šmigoc H. Construction of nonnegative matrices and the inverse eigenvalue problem. Linear Multilinear Algebra. 2005;53(2):85–96. doi: 10.1080/03081080500054281
  • Horn RA, Johnson CR. Matrix analysis. 2nd ed., Cambridge: Cambridge University Press; 2013. doi: 10.1017/CBO9781139020411