
The solution of the Loewy–Radwan conjecture

Received 21 Sep 2022, Accepted 14 Oct 2023, Published online: 30 Nov 2023

Abstract

A seminal result of Gerstenhaber gives the maximal dimension of a linear space of nilpotent matrices. It also exhibits the structure of such a space when the maximal dimension is attained. Extensions of this result in the direction of linear spaces of matrices with a bounded number of eigenvalues have been studied. In this paper, we answer what is perhaps the most general problem of this kind, as proposed by Loewy and Radwan, by solving their conjecture in the affirmative. We give the maximal dimension of a vector space of $n\times n$ matrices with at most $k<n$ distinct eigenvalues. We also exhibit the structure of the spaces for which this dimension is attained.

1. Introduction

This paper presents the positive solution to the Loewy–Radwan conjecture, which has been open for more than twenty years (Theorem 1.1). It belongs to a line of research that started over 60 years ago with the famous result of Gerstenhaber [Citation1] on linear spaces of nilpotent matrices of maximal dimension. The interested reader may also wish to consult some recent results in the area, such as [Citation2] by Kokol Bukovšek and Omladič, and [Citation3–5] by de Seguins Pazzis. It seems that spaces of matrices satisfying more general conditions on eigenvalues were first studied by Omladič and Šemrl in [Citation6]. We first give a brief history of this theme.

Throughout the paper, we fix positive integers n and k<n and suppose V is a linear subspace of the space $M_n(\mathbb{C})$ of $n\times n$ complex matrices with the property that each member of V has at most k distinct eigenvalues. Here $\mathbb{C}$ can be replaced by any algebraically closed field of characteristic zero. We are interested in how large the dimension of such a space can be. The case when all the matrices are assumed nilpotent dates back to Gerstenhaber [Citation1], who proved that the dimension of such a space is at most $\binom{n}{2}$. Actually, Gerstenhaber proved the result for all fields with at least n elements, and this assumption was later removed (cf. Serežkin [Citation7], and Mathes, Omladič, Radjavi [Citation8]). Moreover, Gerstenhaber showed that when the maximal dimension is attained the space is simultaneously similar to the space of all strictly upper triangular matrices. Consequently, any space of matrices with only one eigenvalue has dimension at most $\binom{n}{2}+1$, and in case of equality such a space is simultaneously similar to the space of upper triangular matrices with equal diagonal entries, see [Citation6] and also [Citation9] for a generalization of these results to other fields. The article [Citation6] also contains the maximal possible dimension for a vector space of matrices with at most k distinct eigenvalues when k = 2 and n is odd and when k = n−1 under some additional assumptions. In these two cases the spaces of maximal dimension were also classified in [Citation6].
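Gerstenhaber's extremal example can be tried out numerically. The following sketch (ours, not from the paper, with an arbitrary illustrative size n = 6) checks that a generic strictly upper triangular matrix is nilpotent and records the dimension $n(n-1)/2$ of the space of all such matrices.

```python
import numpy as np

n = 6
rng = np.random.default_rng(0)

# A generic element of Gerstenhaber's extremal space: a strictly upper
# triangular matrix, with n(n-1)/2 free entries above the diagonal.
A = np.triu(rng.standard_normal((n, n)), k=1)

# Every strictly upper triangular matrix is nilpotent: A^n = 0.
print(np.allclose(np.linalg.matrix_power(A, n), 0))  # True
print(n * (n - 1) // 2)                              # 15 free entries
```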

Later, Loewy and Radwan [Citation10] removed the assumptions needed in [Citation6] and showed that the dimension of a vector space of matrices with at most 2, respectively n−1, distinct eigenvalues is at most $\binom{n}{2}+2$, respectively $\binom{n}{2}+\binom{n-1}{2}+1$. They also showed that for k = 3 the corresponding upper bound for the dimension is $\binom{n}{2}+4$ and conjectured that the upper bound is $\binom{n}{2}+\binom{k}{2}+1$ for every k<n. On the other hand, de Seguins Pazzis [Citation11] classified $\bigl(\binom{n}{2}+2\bigr)$-dimensional spaces of matrices having at most 2 distinct eigenvalues and extended the results to other fields. We also note that Jordan algebras of matrices with few eigenvalues were studied in [Citation12] and that Gerstenhaber's theorem was generalized to semisimple Lie algebras [Citation13, Citation14], which was further used to give another proof of the Erdős–Ko–Rado theorem in combinatorics [Citation15].

The aim of this paper is to prove the Loewy–Radwan conjecture and to classify the spaces of matrices with at most k distinct eigenvalues which have the maximal possible dimension among such spaces. More precisely, we are going to show the following.

Theorem 1.1

Let n and k<n be positive integers and let V be a linear subspace of $M_n(\mathbb{C})$ with the property that each member of V has at most k distinct eigenvalues. Then $\dim V\le\binom{n}{2}+\binom{k}{2}+1$. Moreover, if the equality holds and $k\ge 3$, then there exists $p\in\{0,1,\dots,n-k+1\}$ such that V is simultaneously similar to the space of all matrices of the form $$\begin{pmatrix}A&B&C\\0&D&E\\0&0&F\end{pmatrix}\tag{1}$$ where $B\in M_{p\times(k-1)}(\mathbb{C})$, $D\in M_{k-1}(\mathbb{C})$ and $E\in M_{(k-1)\times(n-k-p+1)}(\mathbb{C})$ are arbitrary and $\begin{pmatrix}A&C\\0&F\end{pmatrix}$ is an arbitrary upper triangular matrix with equal diagonal entries.
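The dimension count behind the equality case can be checked directly: adding up the free entries of the blocks in (1) gives the bound of the theorem for every admissible p. A small sketch (our illustration; the sizes n = 9, k = 4 are arbitrary):

```python
from math import comb

# Free-parameter count for the block form (1) in Theorem 1.1:
# B is p x (k-1), D is (k-1) x (k-1), E is (k-1) x (n-k-p+1), and the
# (A C; 0 F) part is an (n-k+1) x (n-k+1) upper triangular matrix with
# all diagonal entries equal (one shared parameter).
def dim_block_space(n, k, p):
    q = n - k - p + 1              # width of the third block column
    m = n - k + 1                  # size of the (A C; 0 F) part
    return p * (k - 1) + (k - 1) ** 2 + (k - 1) * q + m * (m - 1) // 2 + 1

# The count matches the bound of Theorem 1.1 for every admissible p.
n, k = 9, 4                        # illustrative sizes with 3 <= k < n
for p in range(0, n - k + 2):
    assert dim_block_space(n, k, p) == comb(n, 2) + comb(k, 2) + 1
print("dim =", comb(n, 2) + comb(k, 2) + 1)  # dim = 43
```

Note that the count is independent of p: the three arbitrary blocks B, D, E together contribute $(k-1)n$ entries no matter how the sizes p and $n-k-p+1$ are split.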

The proof of this theorem is inspired by the proof of the generalization of Gerstenhaber's theorem to semisimple Lie algebras given in [Citation13]. Although our proof is technically more challenging, it is possible to adapt some of the main ideas from [Citation13] to our situation. Let us briefly explain these main ideas. To show that $\dim V\le\binom{n}{2}+\binom{k}{2}+1$, we first observe that V belongs to some (projective) subvariety of a Grassmannian variety, which is invariant under the action by conjugation of the (solvable) group of invertible upper triangular matrices. The Borel fixed point theorem then enables us to reduce the problem to linear subspaces that are invariant under conjugation by invertible upper triangular matrices. Such subspaces are spanned by diagonal matrices and matrix units, so their dimensions and the number of eigenvalues of their members can be estimated in a straightforward way.

To classify the subspaces of maximal dimension, we use induction on k. We first show that our space must contain a nonderogatory (or cyclic) matrix with k−1 simple eigenvalues. With no loss of generality we assume that this matrix is in Jordan canonical form. The rest of the proof is based on the following idea. We define a group homomorphism $\phi\colon t\mapsto\begin{pmatrix}t&0\\0&I_{n-1}\end{pmatrix}$ and consider the spaces $V_0=\lim_{t\to 0}\phi(t)V\phi(t)^{-1}$ and $V_\infty=\lim_{t\to 0}\phi(t)^{-1}V\phi(t)$. These spaces are invariant under the action $(t,W)\mapsto\phi(t)W\phi(t)^{-1}$ of the group $\mathbb{C}^*=(\mathbb{C}\setminus\{0\},\cdot)$, they have the same dimension as V, and their members have at most k distinct eigenvalues. We first consider the structure of such spaces W. Using the structure of $\mathbb{C}^*$-modules and the condition on the number of eigenvalues, we can compute that the space of lower-right $(n-1)\times(n-1)$ corners of such a W has dimension exactly $\binom{n-1}{2}+\binom{k-1}{2}+1$ and its members have at most k−1 distinct eigenvalues. For $k\ge 4$ we use the inductive assumption, while for k = 3 we use the main result of [Citation11] together with the existence of a nonderogatory matrix with k−1 simple eigenvalues in W to obtain the structure of its lower-right corner. After that, we show that each such W is of an appropriate form. We apply this result to $V_0$ and $V_\infty$, and finally we show that $V_0=V_\infty$, which implies that V is equal to these two spaces and concludes the proof of the theorem. The above argument uses some results from representation theory, but we make it accessible to the reader by defining $V_0$ and $V_\infty$ in an equivalent way and then using only methods from linear algebra.

Section 2 consists of the proof of the first part of Theorem 1.1, and the rest of the paper is devoted to the second part. After showing some preliminary results in Section 3, in Section 4 we prove the second part of Theorem 1.1 under the additional assumption that $V=\phi(t)V\phi(t)^{-1}$ for all $t\neq 0$. The general case is proved in Section 5.

2. The upper bound on the dimension

In this section we prove the first part of Theorem 1.1, i.e. we show that a linear subspace of $M_n(\mathbb{C})$ whose members have at most k distinct eigenvalues has dimension at most $\binom{n}{2}+\binom{k}{2}+1$. We will prove this with the help of algebraic geometry, so we first show that certain conditions on matrices are open in the Zariski topology. We call an $n\times n$ matrix regular (or cyclic, or nonderogatory) whenever its centralizer is n-dimensional. It is well known (see e.g. [Citation16, Section 3.2.4]) that this condition is equivalent to the condition that the characteristic and minimal polynomials of the matrix coincide, or that the Jordan canonical form of the matrix has only one Jordan block for each eigenvalue.

Lemma 2.1

Let A be an n×n complex matrix and k be a nonnegative integer. The following conditions are open in the Zariski topology.

(a)

A is regular.

(b)

A has more than k distinct eigenvalues.

(c)

A has more than k simple eigenvalues (i.e. eigenvalues of algebraic multiplicity 1).

Proof.

(a) It is well known that C(A), the centralizer of A, has dimension at least n (see e.g. [Citation16, Section 3.2.4]), so we may restate the defining condition of regularity as $\dim C(A)\le n$, meaning that $\dim\ker\operatorname{ad}_A\le n$, or equivalently $\operatorname{rank}\operatorname{ad}_A\ge n^2-n$, where $\operatorname{ad}_A\colon M_n(\mathbb{C})\to M_n(\mathbb{C})$ is defined in the usual way by $\operatorname{ad}_A\colon B\mapsto[A,B]=AB-BA$. This amounts to requiring that at least one of the $(n^2-n)$-minors of the matrix of the transformation $\operatorname{ad}_A$ in a fixed basis of $M_n(\mathbb{C})$ is different from zero – clearly an open condition.

(b) Let $p_A$ be the characteristic polynomial of A. Then it is well known and not hard to see that condition (b) is fulfilled if and only if the degree of $\gcd(p_A,p_A')$ is smaller than $n-k$. It is also a classical result (see e.g. [Citation17, Section 2.1]) that this degree equals $2n-1-\operatorname{rank}S_{p_A,p_A'}$, where $S_{p_A,p_A'}$ is the Sylvester matrix of the polynomials $p_A$ and $p_A'$. Let us recall that (in a block partition made of n−1 respectively n rows)
$$S_{p_A,p_A'}=\begin{pmatrix}
a_n&a_{n-1}&\cdots&a_0&&&\\
&a_n&a_{n-1}&\cdots&a_0&&\\
&&\ddots&&&\ddots&\\
&&&a_n&a_{n-1}&\cdots&a_0\\
b_{n-1}&\cdots&b_0&&&&\\
&b_{n-1}&\cdots&b_0&&&\\
&&\ddots&&&\ddots&\\
&&&b_{n-1}&\cdots&b_0&
\end{pmatrix}.$$
Here, the $a_i$'s denote the coefficients of $p_A$ and the $b_j$'s the coefficients of its derivative. So, the matrix A has more than k distinct eigenvalues if and only if the $(n+k)\times(n+k)$ minors of the matrix $S_{p_A,p_A'}$ are not all zero, which is clearly an open condition.
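The rank computation in part (b) is easy to try out numerically. The helper below is our sketch, with a small hand-picked polynomial: it builds the Sylvester matrix of $p_A$ and $p_A'$ and recovers the number of distinct roots from its rank.

```python
import numpy as np

def sylvester(p, q):
    # Sylvester matrix of polynomials p (degree m) and q (degree d),
    # given as coefficient lists with the highest degree first.
    m, d = len(p) - 1, len(q) - 1
    S = np.zeros((m + d, m + d))
    for i in range(d):                     # d shifted copies of p
        S[i, i:i + m + 1] = p
    for i in range(m):                     # m shifted copies of q
        S[d + i, i:i + d + 1] = q
    return S

# p(t) = (t-1)^2 (t-2) = t^3 - 4t^2 + 5t - 2 has 2 distinct roots.
p = [1.0, -4.0, 5.0, -2.0]
dp = [3.0, -8.0, 5.0]                      # p'(t)
n = 3
S = sylvester(p, dp)                       # (2n-1) x (2n-1)
# deg gcd(p, p') = 2n - 1 - rank(S), and the number of distinct
# roots of p is n - deg gcd(p, p').
distinct = n - (2 * n - 1 - np.linalg.matrix_rank(S))
print(distinct)  # 2
```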

(c) We will show that the condition that $p_A$ has at most k simple roots is closed. This condition is equivalent to the condition that the matrix $p_A'(A)$ has at most k nonzero eigenvalues (counted with algebraic multiplicities), which is further equivalent to $\operatorname{rank}\bigl(p_A'(A)\bigr)^n\le n-k$, a closed condition.
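Part (c) can also be checked numerically. In the ad hoc example below (our illustration), A has a double eigenvalue 0 sitting in a Jordan block together with simple eigenvalues 2 and 3, and the rank of $(p_A'(A))^n$ indeed counts the simple ones.

```python
import numpy as np

# A = J_2(0) + [2] + [3] (block diagonal): eigenvalue 0 is double,
# while 2 and 3 are simple.
A = np.zeros((4, 4))
A[0, 1] = 1.0
A[2, 2] = 2.0
A[3, 3] = 3.0
n = 4

coeffs = np.poly(A)            # characteristic polynomial p_A
dcoeffs = np.polyder(coeffs)   # its derivative p_A'

B = np.zeros_like(A)           # evaluate p_A'(A) by Horner's rule
for c in dcoeffs:
    B = B @ A + c * np.eye(n)

# rank((p_A'(A))^n) counts the nonzero eigenvalues of p_A'(A) with
# multiplicity, i.e. the simple eigenvalues of A.
simple = np.linalg.matrix_rank(np.linalg.matrix_power(B, n))
print(simple)  # 2
```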

Here we follow some concepts used in [Citation18, Section 3], so we will give only the general ideas and omit some of the details. Fix an integer k, $1\le k\le n-1$, and define $$X=\{A\in M_n(\mathbb{C});\ A\text{ has no more than }k\text{ distinct eigenvalues}\}.$$ We will denote by m the maximal possible dimension of a vector space V such that $V\subseteq X$. By Lemma 2.1(b) the set X is closed in the Zariski topology and it is clearly homogeneous (i.e. if $A\in X$ and $\alpha\in\mathbb{C}$, then $\alpha A\in X$), so we may view it as a projective variety. Following [Citation19, Example 6.19] we introduce the Fano variety $$F_m(X)=\{V\subseteq X;\ V\text{ a vector space},\ \dim V=m\},$$ which is a closed subset of the Grassmannian variety $\operatorname{Gr}(m,n^2)$ of all m-dimensional subspaces of $\mathbb{C}^{n^2}$, considered as a projective variety via the Plücker embedding that sends a vector space V with a basis $\{v_1,v_2,\dots,v_m\}$ to $[v_1\wedge v_2\wedge\cdots\wedge v_m]\in\mathbb{P}\bigl(\bigwedge^m(\mathbb{C}^{n^2})\bigr)$. The variety $F_m(X)$ is non-empty by the definition of the number m. Let $T_n$ be the solvable algebraic group of all invertible upper triangular matrices and define an action of $T_n$ on $\operatorname{Gr}(m,n^2)$ by $$T_n\times\operatorname{Gr}(m,n^2)\to\operatorname{Gr}(m,n^2),\qquad(P,V)\mapsto PVP^{-1}=\{PAP^{-1};\ A\in V\}.$$ It is obvious that X is invariant under conjugation, and consequently $F_m(X)$ is invariant under the above action, which is given by regular (rational) maps: $$(P,[A_1\wedge A_2\wedge\cdots\wedge A_m])\mapsto[PA_1P^{-1}\wedge PA_2P^{-1}\wedge\cdots\wedge PA_mP^{-1}].$$ So, we can apply the following theorem to $F_m(X)$:

Borel fixed point theorem [Citation20, Theorem 10.4]: Let Z be a non-empty projective variety and G be a connected solvable algebraic group acting on it via regular maps. Then this action has a fixed point in Z.

Using this result we conclude the following.

Lemma 2.2

There exists a linear subspace $V\in F_m(X)$ such that $PAP^{-1}\in V$ for all $A\in V$ and every invertible upper triangular matrix P.

We now investigate the properties of a space satisfying the previous lemma. In the following lemma we denote the set $\{1,\dots,n\}$ by $[n]$. The proof of this lemma may be found in [Citation18, Lemma 9]. We present all of it for the sake of completeness; some of it will be used in Lemma 2.4, and the rest may be of independent interest.

Lemma 2.3

Let $V\subseteq M_n(\mathbb{C})$ be a vector space such that $PAP^{-1}\in V$ for all $A\in V$ and all invertible upper triangular $P\in M_n(\mathbb{C})$. Then:

(a)

If for some $A\in V$ and $i,j\in[n]$ with $i<j$ we have $a_{ji}\neq 0$, then $E_{ij}\in V$.

(b)

If for some $i,j\in[n]$ with $i<j$ we have $E_{ij}\in V$, then $E_{pq}\in V$ for all $p,q\in[n]$ with $p\le i$ and $q\ge j$. If we have $E_{ij}\in V$ for some $i,j\in[n]$ with $i>j$, then the commutator $[E_{pq},E_{ij}]$ belongs to V for all $p,q\in[n]$ with $p<q$.

(c)

If V contains a matrix of the form $A=\begin{pmatrix}\alpha&b^T\\c&D\end{pmatrix}$ in the block partition determined by the dimensions $(1,n-1)$, then it also contains the matrices $\begin{pmatrix}0&b^T\\0&0\end{pmatrix}$ and $\begin{pmatrix}0&0\\c&0\end{pmatrix}$.

(d)

Claim (c) remains valid if we replace the first column and row with the i-th column and row for any $i\in[n]$.

(e)

If for some $A\in V$ and $i,j\in[n]$ with $i\neq j$ we have $a_{ij}\neq 0$, then $E_{ij}\in V$.

(f)

If for some $A\in V$ and $i,j\in[n]$ with $i<j$ we have $a_{ii}\neq a_{jj}$, then $E_{ij}\in V$.

Using Lemma 2.3 we will now show that a subspace of Mn(C) which satisfies Lemma 2.2 has a very special form.

Lemma 2.4

Let V be a subspace of $M_n(\mathbb{C})$ which satisfies the conditions of Lemma 2.2. Then there exists a sequence of standard subspaces $U_{i_1},U_{i_2},\dots,U_{i_s}$ of $\mathbb{C}^n$ of dimension at least 2 with trivial intersections such that for each $t\in\{i_1,i_2,\dots,i_s\}$ the span $\operatorname{Span}\{e_i;\ \text{there exists }e_j\in U_t\text{ for some }j\ge i\}$ is invariant under all members of V, and:

(a)

For any index $t\in\{i_1,i_2,\dots,i_s\}$ and any standard basis vectors $e_i,e_j\in U_t$ we have $E_{ij}\in V$.

(b)

All $E_{ij}$ belong to V for $1\le i<j\le n$.

Proof.

Recall that V is of maximal possible dimension among spaces of matrices having at most k distinct eigenvalues. Let $W_1\subset W_2\subset\cdots\subset W_r$ be a maximal chain of subspaces of $\mathbb{C}^n$ which are invariant under all members of V and spanned by some of the first vectors among the $e_i$. If $\dim W_{t-1}<\dim W_t-1$, let $U_t$ be spanned by the standard basis vectors $e_i\in W_t$ such that $e_i\notin W_{t-1}$. Note that the inequality $\dim U_t\ge 2$ follows immediately.

Let $i\in\{1,2,\dots,n\}$ be an arbitrary index such that $e_i\in U_t$ and $e_{i-1}\in U_t$, and suppose that $a_{ij}=0$ for all $A\in V$ and all indices $j<i$. Lemma 2.3(e) and the second part of Lemma 2.3(b) then imply that $a_{pq}=0$ for all $A\in V$ and all $p\ge i$ and $q<i$. (Indeed, if $a_{pq}\neq 0$ for some $q<i\le p$, then $E_{pq}\in V$ by Lemma 2.3(e) and then $E_{iq}=[E_{ip},E_{pq}]\in V$ by Lemma 2.3(b).) However, then the linear span of $e_1,e_2,\dots,e_{i-1}$ is a V-invariant subspace of $\mathbb{C}^n$ which is strictly contained in $W_t$ (since it does not contain $e_i$) and strictly contains $W_{t-1}$ (since $e_{i-1}\notin W_{t-1}$), which contradicts the maximality of the chain $W_1\subset W_2\subset\cdots\subset W_r$.

It follows that there is an index $j<i$ (which necessarily satisfies $e_j\in U_t$) such that $a_{ij}\neq 0$ for some $A\in V$. But then, using the second part of Lemma 2.3(b), we conclude in particular that the diagonal matrix $E_{jj}-E_{ii}=[E_{ji},E_{ij}]$ is a member of V. Since i was an arbitrary index such that $e_i\in U_t$ and $e_{i-1}\in U_t$, it follows that V contains all those diagonal matrices with trace zero whose only possible nonzero entries correspond to the subspace $U_t$. Generic members of the corresponding diagonal block have the maximal number of eigenvalues, equal to $n_t=\dim U_t$. Let $A\in V$ be arbitrary and let $A_1,A_2,\dots,A_r$ be the diagonal blocks of A corresponding to the chain $W_1\subset W_2\subset\cdots\subset W_r$. We will show that the matrix $A_1\oplus\cdots\oplus A_{t-1}\oplus A_{t+1}\oplus\cdots\oplus A_r$ has at most $k-n_t$ distinct eigenvalues. Assume the contrary. We have shown above that there exists a diagonal matrix $D\in V$ such that its t-th diagonal block has trace zero and $n_t$ distinct eigenvalues, while all the other blocks are zero. A generic linear combination of A and D then has at least k+1 distinct eigenvalues, contradicting our assumption. So, the matrix $A_1\oplus\cdots\oplus A_{t-1}\oplus A_{t+1}\oplus\cdots\oplus A_r$ has at most $k-n_t$ distinct eigenvalues. Then it is clear that any linear combination of A and $E_{ij}$, where $e_i,e_j\in U_t$, has at most k distinct eigenvalues. By the maximality of the dimension of V it follows that $E_{ij}\in V$, concluding property (a).

Property (b) now follows easily by maximality of the dimension of V.

In the situation of Lemma 2.4 let $n_t=\dim U_t$ for those t such that $\dim W_{t-1}<\dim W_t-1$, and let $n_t=0$ otherwise. Furthermore, let l be the dimension of the space of those diagonal members of V whose nonzero entries correspond to the standard basis vectors which do not belong to any of the subspaces $U_t$. Note also that V is invariant under the projection onto the diagonal by Lemma 2.4.

Corollary 2.5

With the notation introduced above, $k\ge l+\sum_{t=1}^{r}n_t$.

Proof.

The maximal number of distinct eigenvalues of a member of V is clearly the sum of all the $n_t$ and the maximal number of distinct eigenvalues of a matrix from the space of diagonal members of V introduced above. The latter number cannot be smaller than l, as that space is l-dimensional.

The proof of the main result of this section will be based on the above corollary and the following lemma.

Lemma 2.6

Let k, l, r be positive integers and $n_1,\dots,n_r$ nonnegative integers satisfying $k\ge l+\sum_{t=1}^{r}n_t$. Then $$l+\sum_{t=1}^{r}\binom{n_t+1}{2}\le\binom{k}{2}+1.$$ The equality holds if and only if $k=l+n_t$ for some t and either l = 1 or k = l = 2.

Proof.

The second term on the left-hand side, multiplied by 2, can be estimated as $$\sum_{t=1}^{r}n_t+\sum_{t=1}^{r}n_t^2\le\sum_{t=1}^{r}n_t\Bigl(1+\sum_{t=1}^{r}n_t\Bigr)\le(k-l)(k-l+1),$$ and it is clear that the difference between the right-hand side and the left-hand side of the first inequality above equals $\sum_{s,t=1,\,s\neq t}^{r}n_sn_t$, which is zero only if no more than one of the numbers $n_t$ is nonzero. Moreover, the equality in the second inequality holds only if $k=l+\sum_{t=1}^{r}n_t$. It follows that $$l+\sum_{t=1}^{r}\binom{n_t+1}{2}\le\tfrac12\bigl(k^2-2kl+l^2+k+l\bigr)=\binom{k}{2}-k(l-1)+\frac{l(l+1)}{2}\le\binom{k}{2}-\tfrac12l^2+\tfrac32l\le\binom{k}{2}+1,$$ where we used the fact that $1\le l\le k$. We have equality in the last two inequalities only if l = 1 or k = l = 2.
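Lemma 2.6 is elementary enough to be verified exhaustively for small parameters. The following sketch (ours, with the arbitrary cut-offs k ≤ 7 and r ≤ 3) checks both the inequality and the stated equality cases.

```python
from math import comb
from itertools import product

def lemma26_holds(kmax):
    # Exhaustive check of Lemma 2.6 for k <= kmax and r <= 3.
    for k in range(1, kmax + 1):
        for l in range(1, k + 1):
            for r in range(1, 4):
                for ns in product(range(k + 1), repeat=r):
                    if l + sum(ns) > k:
                        continue
                    lhs = l + sum(comb(nt + 1, 2) for nt in ns)
                    rhs = comb(k, 2) + 1
                    if lhs > rhs:
                        return False
                    if lhs == rhs:
                        # Equality forces at most one nonzero n_t,
                        # k = l + n_t, and l = 1 or k = l = 2.
                        if sum(1 for nt in ns if nt > 0) > 1:
                            return False
                        if l + sum(ns) != k:
                            return False
                        if not (l == 1 or (k == 2 and l == 2)):
                            return False
    return True

print(lemma26_holds(7))  # True
```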

In the following theorem, we prove that $m=\binom{n}{2}+\binom{k}{2}+1$, which proves the first part of Theorem 1.1 and solves [Citation10, Conjecture 1.2]. In the theorem we also characterize the subspaces for which equality holds in Theorem 1.1 and which additionally satisfy the conditions of Lemma 2.2 when $k\ge 3$. We will use this characterization in Lemma 3.2, which is one of the key steps in the characterization of all subspaces for which the upper bound in Theorem 1.1 is achieved.

Theorem 2.7

Let $1\le k\le n-1$ and let m be the maximal possible dimension of a subspace of $M_n(\mathbb{C})$ all of whose elements have at most k distinct eigenvalues. Then $m=\binom{n}{2}+\binom{k}{2}+1$. Moreover, if $k\ge 3$ and $V\in F_m(X)$ is a subspace satisfying the conditions of Lemma 2.2, then there exists $p\in\{0,1,\dots,n-k+1\}$ such that V consists of all matrices $$\begin{pmatrix}A&B&C\\0&D&E\\0&0&F\end{pmatrix}$$ where $B\in M_{p\times(k-1)}(\mathbb{C})$, $D\in M_{k-1}(\mathbb{C})$ and $E\in M_{(k-1)\times(n-k-p+1)}(\mathbb{C})$ are arbitrary and $\begin{pmatrix}A&C\\0&F\end{pmatrix}$ is an arbitrary upper triangular matrix with equal diagonal entries.

Proof.

The spaces of matrices described in the theorem are clearly invariant under conjugation by invertible upper triangular matrices, they have dimension $\binom{n}{2}+\binom{k}{2}+1$, and for each $k\ge 1$ they consist of matrices with at most k distinct eigenvalues, so $m\ge\binom{n}{2}+\binom{k}{2}+1$. To prove the converse, by the Borel fixed point theorem it suffices to show that spaces satisfying the conditions of Lemma 2.2 have dimension at most $\binom{n}{2}+\binom{k}{2}+1$. Let V be such a space and let l and $n_t$ be defined as before Corollary 2.5. Note that l is positive, as scalar matrices are in V by the maximality of the dimension of V. We compute $$\dim V=l+\binom{n}{2}+\sum_{t=1}^{r}\binom{n_t+1}{2},$$ where the first term on the right-hand side counts the diagonal elements of V corresponding to standard basis vectors that do not belong to any of the subspaces $U_t$, the second counts the entries strictly above the diagonal, and the terms of the third count the entries on and below the diagonal corresponding to each of the subspaces $U_t$. The first part of the theorem now immediately follows from Lemma 2.6. Moreover, the equality in the inequality of the lemma holds only if r = 1, $k=l+n_t$ for some t and either l = 1 or k = l = 2. In particular, if $k\ge 3$, then l = 1, r = 1 and $n_t=k-1\ge 2$, which gives us the possibilities for V described in the theorem.

3. Preliminaries for the structure result

In our considerations, we will often refer to properties that hold generically. As usual in algebraic geometry, this means that the property holds on an open dense subset (in the Zariski topology). Most often generic conditions will be considered on some line; in this case, a property holds generically if it holds for all but finitely many points of the line.

In the proof of the second part of Theorem 1.1 we will need the following condition more than once.

Two zeros condition: Let n be a positive integer, let s be a polynomial of degree at most n, and let $\lambda\in\mathbb{C}\setminus\{0\}$ be fixed. For generic $\mu\in\mathbb{C}$ the polynomial $r_\mu(t)=t^{n+2}-\lambda t^{n+1}-\mu s(t)$ has no more than two distinct zeros.

Lemma 3.1

The two zeros condition implies that s = 0.

Proof.

Let $s(t)=a_nt^n+a_{n-1}t^{n-1}+\cdots+a_0$. By the condition under consideration, $r_\mu$ has at most two distinct zeros for all but finitely many scalars μ. Besides, for μ = 0 this polynomial has a simple zero. According to Lemma 2.1(c) (applied e.g. to the companion matrix of a polynomial) the condition that a monic polynomial of given degree has a simple zero is open, so $r_\mu$ has a simple zero generically, i.e. for all but finitely many $\mu\in\mathbb{C}$. In the rest of the proof, we consider those μ for which $r_\mu$ has a simple zero and at most two distinct zeros. For all these values of μ we can write $r_\mu(t)=(t-\alpha)(t-\beta)^{n+1}$, where α and β may depend on μ. The first Vieta formula determines α as a linear function of β, namely $\alpha=\lambda-(n+1)\beta$. Inserting this expression into the second and the third Vieta formulas we get two polynomial conditions
$$\binom{n+2}{2}\beta^2-(n+1)\lambda\beta-\mu a_n=0\tag{2}$$
and
$$2\binom{n+2}{3}\beta^3-\binom{n+1}{2}\lambda\beta^2+\mu a_{n-1}=0\tag{3}$$
in β and μ. For a fixed $\mu\in\mathbb{C}$ as chosen above the polynomial equations (2) and (3) have a common solution β, so the resultant (i.e. the determinant of the Sylvester matrix)
$$\begin{vmatrix}
\binom{n+2}{2}&-(n+1)\lambda&-\mu a_n&0&0\\
0&\binom{n+2}{2}&-(n+1)\lambda&-\mu a_n&0\\
0&0&\binom{n+2}{2}&-(n+1)\lambda&-\mu a_n\\
2\binom{n+2}{3}&-\binom{n+1}{2}\lambda&0&\mu a_{n-1}&0\\
0&2\binom{n+2}{3}&-\binom{n+1}{2}\lambda&0&\mu a_{n-1}
\end{vmatrix}$$
of these two polynomials has to be zero. (Note that this is a special case of the result from [Citation17] used in Lemma 2.1, or see a standard textbook on algebraic geometry such as [Citation21, Section 3.5].) Since this condition is satisfied for generic μ as considered above, all the coefficients at powers of μ in the obtained resultant must be zero. Since μ appears only in the last three columns of the above determinant, the degree of the resultant as a polynomial in μ is at most 3, and its constant term is zero, since all entries of the last column of the determinant are multiples of μ.
Now it is clear that the coefficient at μ equals
$$\begin{vmatrix}
\binom{n+2}{2}&-(n+1)\lambda&0&0&0\\
0&\binom{n+2}{2}&-(n+1)\lambda&0&0\\
0&0&\binom{n+2}{2}&-(n+1)\lambda&-a_n\\
2\binom{n+2}{3}&-\binom{n+1}{2}\lambda&0&0&0\\
0&2\binom{n+2}{3}&-\binom{n+1}{2}\lambda&0&a_{n-1}
\end{vmatrix}=\frac12\binom{n+2}{3}(n+1)^3\lambda^3a_{n-1}$$
and the coefficient at $\mu^3$ equals
$$\begin{vmatrix}
\binom{n+2}{2}&-(n+1)\lambda&-a_n&0&0\\
0&\binom{n+2}{2}&0&-a_n&0\\
0&0&0&0&-a_n\\
2\binom{n+2}{3}&-\binom{n+1}{2}\lambda&0&a_{n-1}&0\\
0&2\binom{n+2}{3}&0&0&a_{n-1}
\end{vmatrix}=-4\binom{n+2}{3}^2a_n^3.$$
It follows that $a_n=0$ and $a_{n-1}=0$. Using (2) and (3) one then concludes that $\beta=0$ and $\alpha=\lambda$ independently of the chosen μ, and hence s = 0.
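To see the contrapositive of Lemma 3.1 numerically: when s is nonzero, $r_\mu$ generically has more than two distinct zeros. A small sketch (our illustration, with the arbitrary choices n = 2, λ = 1, s(t) = 1 and a sample μ):

```python
import numpy as np

# r_mu(t) = t^4 - t^3 - mu for n = 2, lambda = 1, s(t) = 1.
# For a sample mu the polynomial has four distinct zeros, so the two
# zeros condition fails, in line with Lemma 3.1 (s is nonzero here).
mu = 0.3
roots = np.roots([1.0, -1.0, 0.0, 0.0, -mu])

distinct = []                      # cluster numerically equal roots
for z in roots:
    if all(abs(z - w) > 1e-6 for w in distinct):
        distinct.append(z)
print(len(distinct))  # 4
```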

In the proof of Theorem 1.1 we will also need the fact that for $k\ge 3$ a space V of maximal dimension contains a regular matrix that has exactly k−1 simple eigenvalues, i.e. is similar to
$$\begin{pmatrix}
\lambda_1&&&&&&\\
&\ddots&&&&&\\
&&\lambda_{k-1}&&&&\\
&&&\lambda_k&1&&\\
&&&&\lambda_k&\ddots&\\
&&&&&\ddots&1\\
&&&&&&\lambda_k
\end{pmatrix}\tag{4}$$
where all the empty entries of this matrix are zeros and $\lambda_1,\lambda_2,\dots,\lambda_k$ are pairwise distinct.

Lemma 3.2

Let $3\le k<n$ and let V be a subspace of $M_n(\mathbb{C})$ of dimension $m=\binom{n}{2}+\binom{k}{2}+1$ whose members have at most k distinct eigenvalues. Then V contains a regular element that has k−1 simple eigenvalues.

Proof.

Assume the contrary. Let Y be the set of all matrices that are not regular, and let Z be the set of all matrices with at most k−2 simple eigenvalues. By Lemma 2.1 the sets Y and Z are both closed in the Zariski topology, and so is their union. Moreover, the union $Y\cup Z$ is clearly invariant under the action of the group $T_n$ of all invertible upper triangular matrices by conjugation. Furthermore, the Zariski closed set $Y\cup Z$ is clearly homogeneous, so we may view it as a projective variety. Hence, we can introduce the Fano variety $$F_m(Y\cup Z)=\{W\in\operatorname{Gr}(m,n^2);\ W\subseteq Y\cup Z\}$$ of the union $Y\cup Z$. The assumption that V does not contain a regular element with k−1 simple eigenvalues implies that the intersection $F_m(X)\cap F_m(Y\cup Z)$ is not empty. This intersection is invariant under $T_n$, so by the Borel fixed point theorem it has a fixed point V′. However, since $k\ge 3$, V′ is then one of the spaces described in Theorem 2.7, and it contains a regular element with k−1 simple eigenvalues, which is similar (via a similarity that swaps the first two block rows and columns of (1)) to a matrix of the form (4) with the $\lambda_i$ pairwise distinct. This contradicts $V'\subseteq Y\cup Z$.

4. Structure of some special spaces

Throughout the rest of the paper let $k\ge 3$. We will prove the main result by induction on k. Let V be a space of maximal dimension $m=\binom{n}{2}+\binom{k}{2}+1$ satisfying the conditions of Theorem 1.1. Note that by maximality we may assume that V contains all scalar matrices. First we make a reduction based on Lemma 3.2. Each regular $n\times n$ matrix with k−1 simple eigenvalues which has at most k distinct eigenvalues is similar to a matrix of the form (4), so we may conjugate the space V by an appropriate invertible matrix and assume that V contains a matrix of the form (4) for some distinct $\lambda_1,\dots,\lambda_k\in\mathbb{C}$.

In this section, we describe the structure of the space considered under the following additional assumption.

AA: If a matrix $\begin{pmatrix}\alpha&b^T\\c&D\end{pmatrix}$ with blocks of respective sizes 1 and n−1 belongs to V, then the matrices $\begin{pmatrix}0&b^T\\0&0\end{pmatrix}$, $\begin{pmatrix}0&0\\c&0\end{pmatrix}$, and $\begin{pmatrix}\alpha&0\\0&D\end{pmatrix}$ belong to V.

This additional assumption is motivated by representation theory. Denote by $\mathbb{C}^*$ the multiplicative group $(\mathbb{C}\setminus\{0\},\cdot)$. Let $\phi\colon\mathbb{C}^*\to GL_n$ be the group homomorphism defined by $t\mapsto\begin{pmatrix}t&0\\0&I_{n-1}\end{pmatrix}$. For each $t\in\mathbb{C}^*$ the space $\phi(t)V\phi(t)^{-1}$ is m-dimensional and its elements have at most k distinct eigenvalues, so it belongs to $F_m(X)$.

Lemma 4.1

Condition AA is equivalent to $\phi(t)V\phi(t)^{-1}=V$ for all $t\in\mathbb{C}^*$.

Proof.

Condition AA clearly implies $\phi(t)V\phi(t)^{-1}=V$. Conversely, a space satisfying this equality is a $\mathbb{C}^*$-module for the action $(t,A)\mapsto\phi(t)A\phi(t)^{-1}$. Now we use the fact that every $\mathbb{C}^*$-module is a direct sum of weight spaces (see e.g. [Citation22, Proposition 22.5.2(iii)]) to see that V can be written as $$V=\bigoplus_{j\in\mathbb{Z}}V(j),\qquad\text{where}\quad V(j)=\{A\in V;\ \phi(t)A\phi(t)^{-1}=t^jA\ \text{for all }t\neq 0\}.$$ Write a matrix $A\in V(j)$ as $A=\begin{pmatrix}\alpha&b^T\\c&D\end{pmatrix}$, where $\alpha\in\mathbb{C}$, $b,c\in\mathbb{C}^{n-1}$ and $D\in M_{n-1}(\mathbb{C})$. It follows easily that the members of V(j) can be nonzero only in the cases j = 0, j = 1, or j = −1. A straightforward computation shows that the elements of V(0) are of the form $A=\begin{pmatrix}\alpha&0\\0&D\end{pmatrix}$, the elements of V(1) are of the form $A=\begin{pmatrix}0&b^T\\0&0\end{pmatrix}$, and the elements of V(−1) are of the form $A=\begin{pmatrix}0&0\\c&0\end{pmatrix}$.
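The weight decomposition in the proof above can be seen concretely: conjugation by $\phi(t)$ scales the first row of a matrix by t, its first column by 1/t, and fixes the rest. A quick numerical sketch (ours, with arbitrary n and t):

```python
import numpy as np

n, t = 4, 2.0
phi = np.eye(n)
phi[0, 0] = t                      # phi(t) = diag(t, 1, ..., 1)

rng = np.random.default_rng(3)
A = rng.standard_normal((n, n))
B = phi @ A @ np.linalg.inv(phi)

# Conjugation by phi(t) scales the first row by t (weight +1), the
# first column by 1/t (weight -1), and fixes the rest (weight 0).
assert np.allclose(B[0, 1:], t * A[0, 1:])
assert np.allclose(B[1:, 0], A[1:, 0] / t)
assert np.allclose(B[1:, 1:], A[1:, 1:])
assert np.isclose(B[0, 0], A[0, 0])
print("weight decomposition: j = +1, -1, 0")
```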

Our next step is to estimate the dimensions of V(j) for $j=\pm1$. We need an additional piece of notation. Let V(1)′ be the set of all matrices of the form $\begin{pmatrix}0&b^T\\0&0\end{pmatrix}\in V(1)$ such that the first k−2 entries of b are equal to zero. We define V(−1)′ similarly.

Lemma 4.2

Let $l\ge k$ be the smallest index with the property that some row $b^T=(0\ \cdots\ 0\ b_k\ \cdots\ b_n)$ with $b_l\neq 0$ equals the upper-right corner of a member of V(1)′ (with the convention l = n+1 if b is always zero). Furthermore, let $c=(c_2\ \cdots\ c_n)$ be an arbitrary lower-left corner of a member of V(−1). Then $c_q=0$ for all $q\ge l$.

Proof.

Clearly we may assume $l\le n$. Choose an arbitrary $\begin{pmatrix}0&b^T\\0&0\end{pmatrix}\in V(1)'$ with $b_l\neq 0$ and an arbitrary $\begin{pmatrix}0&0\\c&0\end{pmatrix}\in V(-1)$. As explained at the beginning of this section, V contains a matrix of the form (4) for some distinct $\lambda_1,\dots,\lambda_k\in\mathbb{C}$, so that it contains
$$A(\mu)=\begin{pmatrix}
\lambda_1&0&\cdots&0&\mu b_k&\cdots&\mu b_n\\
c_2&\lambda_2&&&&&\\
\vdots&&\ddots&&&&\\
c_{k-1}&&&\lambda_{k-1}&&&\\
c_k&&&&\lambda_k&1&\\
\vdots&&&&&\ddots&1\\
c_n&&&&&&\lambda_k
\end{pmatrix}$$
for arbitrary $\mu\in\mathbb{C}$. Our assumptions imply that this matrix has at most k distinct eigenvalues, so that its characteristic polynomial $\Delta(t)$, which is computed in Lemma 4.3 below, has at most k distinct zeros for arbitrary $\mu\in\mathbb{C}$. As shown in Lemma 4.3, the polynomial $\Delta(t)$ is the product of $(\lambda_2-t)\cdots(\lambda_{k-1}-t)$ and a polynomial of the form $r_\mu(t)=(\lambda_1-t)(\lambda_k-t)^{n-k+1}+\mu s(t)$ for some polynomial s. By the assumption, the numbers $\lambda_2,\dots,\lambda_{k-1}$ are not zeros of the polynomial $r_0$, hence they are not zeros of $r_\mu$ for generic $\mu\in\mathbb{C}$. Consequently, the polynomial $r_\mu$ has at most two distinct zeros for generic μ. Now we replace t by $t+\lambda_k$ and divide by $(-1)^{n-k}$ to get the following: for generic $\mu\in\mathbb{C}$ the polynomial $$t^{n-k+2}-(\lambda_1-\lambda_k)t^{n-k+1}-\mu\sum_{i=0}^{n-k}\Bigl(\sum_{p=k}^{k+i}b_pc_{n-k-i+p}\Bigr)t^i$$ satisfies the two zeros condition. Using Lemma 3.1 we conclude that $$\sum_{p=k}^{k+i}b_pc_{n-k-i+p}=0\tag{5}$$ for all $i=0,1,\dots,n-k$. Recalling that l is the smallest index with $b_l\neq 0$, the equalities (5) imply that $c_q=0$ for all $q\ge l$, as desired.

Lemma 4.3

$$\Delta(t)=\det(A(\mu)-tI)=(\lambda_2-t)\cdots(\lambda_{k-1}-t)\Bigl((\lambda_1-t)(\lambda_k-t)^{n-k+1}+\mu\sum_{i=0}^{n-k}(-1)^{n-k+i+1}(\lambda_k-t)^i\sum_{p=k}^{k+i}b_pc_{n-k-i+p}\Bigr).$$

Proof.

First observe that the columns indexed by $i=2,\dots,k-1$ contain only one nonzero entry, which equals $\lambda_i-t$. Let us perform the usual column expansions along all of these columns consecutively to conclude that $\Delta(t)=(\lambda_2-t)\cdots(\lambda_{k-1}-t)\Delta_1(t)$, where
$$\Delta_1(t)=\begin{vmatrix}
\lambda_1-t&\mu b_k&\cdots&\cdots&\mu b_n\\
c_k&\lambda_k-t&1&&\\
\vdots&&\lambda_k-t&\ddots&\\
\vdots&&&\ddots&1\\
c_n&&&&\lambda_k-t
\end{vmatrix}.$$
We compute $\Delta_1(t)$ by expanding it first along the first row and then along the first column. The final minor is possibly nonzero only in the case of the $(p-k+2)$-th column and the $(q-k+2)$-th row, for $p,q=k,\dots,n$ such that $p\le q$, in which case it equals $(\lambda_k-t)^{(n-k)-(q-p)}$. So,
$$\Delta_1(t)=(\lambda_1-t)(\lambda_k-t)^{n-k+1}+\mu\sum_{k\le p\le q\le n}b_pc_q(-1)^{p+q+1}(\lambda_k-t)^{n-k+p-q},$$
which gives the desired result after a small computation.

Corollary 4.4

$\dim V(1)'+\dim V(-1)\le n-1$.

Proof.

The conclusion of Lemma 4.2 easily implies the desired estimate. Indeed, by the definition of l the upper-right corners of members of V(1)′ can be nonzero only in the entries indexed by $l,\dots,n$, so $\dim V(1)'\le n-l+1$, while by Lemma 4.2 the lower-left corners of members of V(−1) can be nonzero only in the entries indexed by $2,\dots,l-1$, so $\dim V(-1)\le l-2$.

Corollary 4.5

$\dim V(1)+\dim V(-1)\le n+k-3$.

Proof.

This follows immediately from Corollary 4.4. Indeed, $\dim V(1)\le\dim V(1)'+k-2$, and the desired inequality follows.

We now recall that $\dim V=\binom{n}{2}+\binom{k}{2}+1$. It follows from Corollary 4.5 that $$\dim V(0)\ge\binom{n}{2}+\binom{k}{2}+1-n-k+3=\binom{n-1}{2}+\binom{k-1}{2}+2.$$ Recall that V(0) is a linear space of matrices of the form $\begin{pmatrix}\alpha&0\\0&D\end{pmatrix}$, whose lower-right corners form a space, which we denote by W, of dimension no smaller than $\binom{n-1}{2}+\binom{k-1}{2}+1$. On the other hand, the members of W have no more than k−1 distinct eigenvalues. Indeed, if $D\in W$ had k distinct eigenvalues, then some linear combination of the corresponding matrix $\begin{pmatrix}\alpha&0\\0&D\end{pmatrix}\in V$ and some matrix of the form (4) would lie in V and have at least k+1 distinct eigenvalues, a contradiction. Using Theorem 2.7 we can therefore conclude that the dimension of W is exactly $\binom{n-1}{2}+\binom{k-1}{2}+1$. This fact implies that all the above inequalities up to and including Corollary 4.4 are in fact equalities; more precisely:

Corollary 4.6

(a)

$\dim V(0)=\binom{n-1}{2}+\binom{k-1}{2}+2$,

(b)

$\dim V(1)=n-l+k-1$,

(c)

$\dim V(-1)=l-2$.
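The binomial bookkeeping used to pass from Corollary 4.5 to the dimension of V(0) rests on the identity $\binom{n}{2}+\binom{k}{2}+1-(n+k-3)=\binom{n-1}{2}+\binom{k-1}{2}+2$. A quick check (ours, over an arbitrary range of n and k):

```python
from math import comb

# Subtracting the bound dim V(1) + dim V(-1) <= n + k - 3 of
# Corollary 4.5 from dim V = C(n,2) + C(k,2) + 1 leaves exactly
# C(n-1,2) + C(k-1,2) + 2, for all n and k.
for n in range(4, 30):
    for k in range(3, n):
        assert comb(n, 2) + comb(k, 2) + 1 - (n + k - 3) \
               == comb(n - 1, 2) + comb(k - 1, 2) + 2
print("identity verified")
```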

This proves that the space of lower-left corners of members of V(−1) equals the span of $\{e_1,\dots,e_{l-2}\}\subseteq\mathbb{C}^{n-1}$. Using this result we now show a version of Lemma 4.2 in which the roles of b and c are interchanged.

Lemma 4.7

An arbitrary upper-right corner of a member of V(1) is of the form $b^T=(b_2\ \cdots\ b_{k-1}\ 0\ \cdots\ 0\ b_l\ \cdots\ b_n)$.

Proof.

If $l>k$ and $\begin{pmatrix}0&b^T\\0&0\end{pmatrix}\in V(1)$ is arbitrary, then the matrix
$$\begin{pmatrix}
\lambda_1&b_2&\cdots&b_{k-1}&b_k&\cdots&b_{l-1}&b_l&\cdots&b_n\\
&\lambda_2&&&&&&&&\\
&&\ddots&&&&&&&\\
&&&\lambda_{k-1}&&&&&&\\
&&&&0&1&&&&\\
&&&&&\ddots&\ddots&&&\\
\mu&&&&&&0&1&&\\
&&&&&&&\ddots&\ddots&\\
&&&&&&&&0&1\\
&&&&&&&&&0
\end{pmatrix},$$
where $\lambda_1,\dots,\lambda_{k-1}$ are nonzero and pairwise distinct and μ appears in the first column of the $(l-1)$-st row, belongs to V for all $\mu\in\mathbb{C}$. The characteristic polynomial $\Delta(t)$ of this matrix equals $\Delta(t)=(\lambda_2-t)\cdots(\lambda_{k-1}-t)(-t)^{n-l+1}\Delta_1(t)$, where
$$\Delta_1(t)=\begin{vmatrix}
\lambda_1-t&b_k&\cdots&b_{l-1}\\
0&-t&1&\\
\vdots&&\ddots&1\\
\mu&&&-t
\end{vmatrix}=(\lambda_1-t)(-t)^{l-k}+(-1)^{l-k}\mu\sum_{i=0}^{l-k-1}b_{i+k}t^i.$$
Since the matrix defined above belongs to V, it has at most k distinct eigenvalues, and as in Lemma 4.2 we conclude that the polynomial $(-t)^{n-l+1}\Delta_1(t)$ has at most two distinct zeros for generic $\mu\in\mathbb{C}$. Lemma 3.1 (applied to $(-t)^{n-l+1}\Delta_1(t)$) then implies that $\Delta_1(t)=(\lambda_1-t)(-t)^{l-k}$, so $b_i=0$ for $i=k,\dots,l-1$, as desired.

Recall that $W\subseteq M_{n-1}(\mathbb{C})$ is a subspace of dimension $\binom{n-1}{2}+\binom{k-1}{2}+1$ whose members have at most k−1 distinct eigenvalues. If $k\ge 4$, it now follows from the inductive hypothesis that there exists $p\in\{0,1,\dots,n-k+1\}$ such that the members of W are simultaneously similar to matrices of the form (1), i.e. $$\begin{pmatrix}A&B&C\\0&D&E\\0&0&F\end{pmatrix},$$ with blocks of respective sizes p, k−2, n−p−k+1, where $\begin{pmatrix}A&C\\0&F\end{pmatrix}$ is upper triangular with constant diagonal, but otherwise the nonzero blocks are arbitrary. If k = 3, then there are more similarity classes of $\bigl(\binom{n-1}{2}+2\bigr)$-dimensional spaces of $(n-1)\times(n-1)$ matrices having at most 2 distinct eigenvalues, see [Citation11, Theorem 1.8]. However, the space W contains the $(n-1)\times(n-1)$ lower-right corner of the matrix given by (4), which has a simple eigenvalue. This additional information together with [Citation11, Theorem 1.8] implies that the members of W are simultaneously similar to matrices of the form (1) even if k = 3.

The next step is to prove that W is actually equal to the space of matrices obtained from (Equation1) by interchanging the first two block rows and columns.

Lemma 4.8

The space W of all $(n-1)\times(n-1)$ lower-right corners of V(0) is equal to the space of all matrices of the form $$\begin{pmatrix}D&0&E\\B&A&C\\0&0&F\end{pmatrix}$$ with blocks of respective sizes k−2, p and n−p−k+1 for some $p\in\{0,1,\dots,n-k+1\}$, where $\begin{pmatrix}A&C\\0&F\end{pmatrix}$ is the sum of a scalar matrix and a strictly upper triangular matrix, and all the other nonzero blocks are arbitrary.

Proof.

Recall that the space $W$ is simultaneously similar to the space of matrices described in the lemma. Let $P$ be an invertible matrix that provides this similarity. The space $W$ contains the lower-right $(n-1)\times(n-1)$ corner of a regular matrix of the form (Equation4), so that
$$P\begin{pmatrix}\lambda_2&&&&&\\&\ddots&&&&\\&&\lambda_{k-1}&&&\\&&&\lambda_k&1&\\&&&&\ddots&1\\&&&&&\lambda_k\end{pmatrix}=\begin{pmatrix}D&0&E\\B&A&C\\0&0&F\end{pmatrix}P.$$
Here, $A$ and $F$ are upper triangular with the same constant, say $\lambda$, on the diagonal. Denote the matrix on the left by $L$ and the $3\times3$ block matrix on the right by $M$. Since the two matrices are similar, $\lambda_k$ is the only multiple eigenvalue of $L$, and $\lambda$ is a multiple eigenvalue of $M$, we have $\lambda_k=\lambda$. Consequently, the eigenvalues of $D$ are $\lambda_2,\dots,\lambda_{k-1}$. Write $P$ with blocks of respective sizes $k-2$, $p$, $n-p-k+1$ as
$$P=\begin{pmatrix}Q&R&S\\N&U&T\\X&Y&Z\end{pmatrix}$$
to get
$$\begin{pmatrix}Q&R&S\\N&U&T\\X&Y&Z\end{pmatrix}\begin{pmatrix}D'&0&0\\0&\lambda I+J_1&E_{p1}\\0&0&\lambda I+J_2\end{pmatrix}=\begin{pmatrix}D&0&E\\B&A&C\\0&0&F\end{pmatrix}\begin{pmatrix}Q&R&S\\N&U&T\\X&Y&Z\end{pmatrix},\qquad(6)$$
where $D'=\mathrm{Diag}(\lambda_2,\dots,\lambda_{k-1})$ and $J_1,J_2$ are nilpotent Jordan blocks of appropriate sizes. The $(3,1)$-block of Equation (6) reads $XD'=FX$. Since the spectra of $D'$ and $F$ are disjoint, we conclude that $X=0$ [Citation23, Section VIII.1]. We rewrite the $(3,2)$- and $(3,3)$-blocks of Equation (6) as
$$\begin{pmatrix}Y&Z\end{pmatrix}J=(F-\lambda I)\begin{pmatrix}Y&Z\end{pmatrix},$$
where $J$ is a nilpotent Jordan block of appropriate size. It follows by induction on $l$ that $\begin{pmatrix}Y&Z\end{pmatrix}J^l=(F-\lambda I)^l\begin{pmatrix}Y&Z\end{pmatrix}$. So the matrix $\begin{pmatrix}Y&Z\end{pmatrix}$ maps $\operatorname{Im}J^l$ into $\operatorname{Im}(F-\lambda I)^l$, which is contained in $\operatorname{Span}\{e_1,\dots,e_{n-p-k+1-l}\}$ for all positive integers $l$. This readily yields that $Y=0$ and that $Z$ is upper triangular.
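The vanishing argument used here (an equation of the form $XD=FX$, with the spectra of the two coefficient matrices disjoint, forces $X=0$) is the standard uniqueness statement for Sylvester-type equations: the operator $X\mapsto FX-XD$ is invertible exactly when the spectra are disjoint. A minimal numeric illustration with hypothetical $2\times2$ blocks:

```python
import numpy as np

# Hypothetical blocks: D with eigenvalues {1, 2}, F upper triangular with
# the single eigenvalue 3, so the two spectra are disjoint.
D = np.diag([1.0, 2.0])
F = np.array([[3.0, 1.0],
              [0.0, 3.0]])

# vec(F X - X D) = (I (x) F - D^T (x) I) vec(X); this operator is invertible
# precisely when the spectra of D and F are disjoint, so X = 0 is the only
# solution of F X - X D = 0.
S = np.kron(np.eye(2), F) - np.kron(D.T, np.eye(2))
print(abs(np.linalg.det(S)) > 1e-9)  # True: only the trivial solution X = 0
```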

Next, we consider the $(1,2)$-block of Equation (6) to get $R(\lambda I+J_1)=DR$. Since the spectrum of $D$ does not contain $\lambda$, we conclude that $R=0$. The $(2,2)$-block equation now implies $UJ_1=(A-\lambda I)U$. As above we deduce that $U$ is upper triangular. Finally, we conclude that $W$ is the space of all matrices of the form
$$\begin{pmatrix}D&0&E\\B&\lambda I+A&C\\0&0&\lambda I+F\end{pmatrix},$$
where $A$ and $F$ are strictly upper triangular. Indeed, we know that the space $W$ is simultaneously similar to the space of matrices of this form, while we have proved here that a similarity matrix is itself of this block form with upper triangular $(2,2)$- and $(3,3)$-blocks.

Let us now write matrices with respect to the block partition of respective sizes 1, $k-2$, $p$ and $n-p-k+1$. Then it follows from the above lemma and the equality $\dim V(0)=\binom{n-1}{2}+\binom{k-1}{2}+2=\dim W+1$ that $V(0)$ consists of all matrices of the form
$$\begin{pmatrix}\alpha&0&0&0\\0&D&0&E\\0&B&\lambda I+A&C\\0&0&0&\lambda I+F\end{pmatrix},\qquad(7)$$
where $\alpha,\lambda,D,E,B,C$ are arbitrary and $A,F$ are strictly upper triangular. Also, $V(1)$ respectively $V(-1)$ consists of some matrices of the form
$$\begin{pmatrix}0&b'^T&b''^T&b'''^T\\0&0&0&0\\0&0&0&0\\0&0&0&0\end{pmatrix}\quad\text{respectively}\quad\begin{pmatrix}0&0&0&0\\c'&0&0&0\\c''&0&0&0\\c'''&0&0&0\end{pmatrix}.$$
We now determine the structure of the spaces $V(1)$ and $V(-1)$.

Lemma 4.9

For any matrix in $V$ the blocks $b''$ and $c'''$ are zero.

Proof.

Here is a simplified notation for the first row and column that will be useful:
$$b=\begin{pmatrix}b'\\b''\\b'''\end{pmatrix}\quad\text{and}\quad c=\begin{pmatrix}c'\\c''\\c'''\end{pmatrix}.$$
As in Lemma 4.2 let $l\ge k$ be the smallest index such that some row $b^T=(0\;\cdots\;0\;b_k\;\cdots\;b_n)$ with $b_l\ne0$ equals the upper-right corner of a member of $V(1)$, with the convention $l=n+1$ if $V(1)$ is trivial. By that lemma the entries of any lower-left corner $c=(c_2\;\cdots\;c_n)^T$ of a member of $V(-1)$ satisfy $c_q=0$ for all $q\ge l$. First, we want to show that
$$l\ge k+p.\qquad(\ast)$$
Towards a contradiction, we assume that $l<k+p$. In particular, we have $p>0$ and $l\le n$. Choose a matrix of the form (7) with $\alpha=\lambda_1$, $D=\mathrm{Diag}(\lambda_2,\dots,\lambda_{k-1})$, where the $\lambda_i$'s are nonzero and distinct, $B=E_{l-k+1,s-1}$ for some $s$ with $2\le s\le k-1$, and all the other blocks zero. We add $c$ and $\mu b$ described above to this matrix and compute the resulting characteristic polynomial $\Delta(t)$: it is the determinant of an $n\times n$ matrix whose first row is $(\lambda_1-t,0,\dots,0,\mu b_l,\dots,\mu b_n)$, whose first column is $(\lambda_1-t,c_2,\dots,c_{l-1},0,\dots,0)^T$, whose diagonal is $(\lambda_1-t,\lambda_2-t,\dots,\lambda_{k-1}-t,-t,\dots,-t)$, and whose only other nonzero entry is an isolated 1 in the $l$-th row and the $s$-th column; observe that the matrix under this determinant belongs to $V$. We expand the determinant along all rows and columns that contain only one nonzero entry:
$$\Delta(t)=(-t)^{n-k}\prod_{\substack{i=2\\i\ne s}}^{k-1}(\lambda_i-t)\begin{vmatrix}\lambda_1-t&0&\mu b_l\\c_s&\lambda_s-t&0\\0&1&-t\end{vmatrix}=(-t)^{n-k}q_\mu(t),$$
where we introduce
$$q_\mu(t)=\prod_{\substack{i=2\\i\ne s}}^{k-1}(\lambda_i-t)\,\bigl((\lambda_1-t)(\lambda_s-t)(-t)+\mu b_lc_s\bigr).$$
So, if $\mu=0$, then $q_\mu$ has $k$ distinct zeros. Consequently, by Lemma 2.1(b) the polynomial $q_\mu$ has $k$ distinct zeros for generic $\mu$. If $q_\mu(0)\ne0$ for such $\mu$, then the polynomial $\Delta$ has $k+1$ distinct zeros, a contradiction with the standing assumption on $V$. Therefore, generically the polynomial $q_\mu$ has $k$ distinct roots, one of which is zero. This implies that $b_lc_s=0$, and since $b_l\ne0$ we have $c_s=0$. Now, in this consideration $s$ was chosen arbitrarily from the set $\{2,\dots,k-1\}$, so $c_2,\dots,c_{k-1}$ are all equal to zero.
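The $3\times3$ determinant appearing in the expansion of $\Delta(t)$ can be verified directly; a short sympy check (symbols only, independent of the surrounding reduction):

```python
import sympy as sp

t, mu, lam1, lams, bl, cs = sp.symbols('t mu lambda1 lambda_s b_l c_s')

# The 3x3 determinant left after expanding away all single-entry rows/columns.
M = sp.Matrix([
    [lam1 - t, 0,        mu*bl],
    [cs,       lams - t, 0    ],
    [0,        1,        -t   ],
])

# Claimed value: (lambda1 - t)(lambda_s - t)(-t) + mu*b_l*c_s
claimed = (lam1 - t)*(lams - t)*(-t) + mu*bl*cs
print(sp.simplify(M.det() - claimed))  # 0
```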

We have shown that any first column of a member of $V(-1)$ is of the form $c=(0\;\cdots\;0\;c_k\;\cdots\;c_{l-1}\;0\;\cdots\;0)^T$, so that $\dim V(-1)\le l-k$, contradicting Corollary 4.6. This shows that condition $(\ast)$, i.e. $l\ge k+p$, holds.

Next, we want to show that $l=k+p$. Choose $c$ with 1 in the $(l-2)$-nd position (i.e. $c_{l-1}=1$) and zeros elsewhere. Assume towards a contradiction that $l>k+p$ (and hence $p<n-k+1$ and $l>k$) and repeat the above arguments with the roles of $b$ and $c$ interchanged. Consider a member of $V$ of the form (7) with $\alpha=\lambda_1$ and $D=\mathrm{Diag}(\lambda_2,\dots,\lambda_{k-1})$, where $\lambda_1,\dots,\lambda_{k-1}$ are nonzero and pairwise distinct, with a 1 in the $(l-1)$-st column and the $s$-th row for some $s\in\{2,\dots,k-1\}$, and with zeros everywhere else. Add to this matrix $c$ and $\mu b^T$, where $b^T$ is an arbitrary upper-right corner of a member of $V(1)$ and $c$ is as above. Recall that $b_k=\cdots=b_{l-1}=0$ by Lemma 4.7. Computations as above reveal that the characteristic polynomial of this matrix equals
$$\Delta(t)=(-t)^{n-k}\prod_{\substack{i=2\\i\ne s}}^{k-1}(\lambda_i-t)\begin{vmatrix}\lambda_1-t&\mu b_s&0\\0&\lambda_s-t&1\\1&0&-t\end{vmatrix},$$
where the last determinant on the right-hand side equals $(\lambda_1-t)(\lambda_s-t)(-t)+\mu b_s$. As before we conclude that $b_s=0$ for all possible $s\in\{2,\dots,k-1\}$, which implies $\dim V\le\binom{n}{2}+\binom{k}{2}-k+3$. The contradiction so obtained brings us to the fact that $l=k+p$, which proves the lemma.

The above lemma concludes the proof that a space $V\in F_m(X)$ containing a matrix of type (Equation4) and satisfying Condition AA consists of all matrices of the form
$$\begin{pmatrix}D&0&E\\B&A&C\\0&0&F\end{pmatrix}$$
with blocks of respective sizes $k-1$, $p$ and $n-k-p+1$, where $\begin{pmatrix}A&C\\0&F\end{pmatrix}$ is an upper triangular matrix with equal diagonal entries and all the other nonzero blocks are arbitrary.

5. Structure of the spaces of maximal dimension

In this section, we will prove the second part of Theorem 1.1, i.e. the following theorem.

Theorem 5.1

Let $3\le k<n$ and let $V$ be a subspace of $M_n(\mathbb{C})$ of dimension $m=\binom{n}{2}+\binom{k}{2}+1$ such that each member of $V$ has at most $k$ distinct eigenvalues. Then there exists $p\in\{0,1,\dots,n-k+1\}$ such that $V$ is simultaneously similar to the space of all matrices of the form (Equation1), where $B\in M_{p\times(k-1)}(\mathbb{C})$, $D\in M_{k-1}(\mathbb{C})$ and $E\in M_{(k-1)\times(n-k-p+1)}(\mathbb{C})$ are arbitrary and $\begin{pmatrix}A&C\\0&F\end{pmatrix}$ is an arbitrary upper triangular matrix with equal diagonal entries.

Let $V$ be a space satisfying the conditions of the theorem. Recall from the beginning of the previous section that we may assume that $V$ contains a matrix of the form (Equation4) for some distinct $\lambda_1,\dots,\lambda_k\in\mathbb{C}$. Consider the block partition of $V$ with respect to dimensions 1 and $n-1$. We define the projection $\pi_0$ from $V$ to the lower-left corner as
$$\pi_0\colon\begin{pmatrix}\alpha&b^T\\c&D\end{pmatrix}\mapsto\begin{pmatrix}0&0\\c&0\end{pmatrix}.$$
Clearly, $\ker\pi_0$ consists of all members of $V$ of the form $\begin{pmatrix}\alpha&b^T\\0&D\end{pmatrix}$. Next we define the projection $\pi_0'$ from $\ker\pi_0$ to the diagonal blocks as
$$\pi_0'\colon\begin{pmatrix}\alpha&b^T\\0&D\end{pmatrix}\mapsto\begin{pmatrix}\alpha&0\\0&D\end{pmatrix}.$$
Now, $\ker\pi_0'$ consists of all members of $V$ of the form $\begin{pmatrix}0&b^T\\0&0\end{pmatrix}$. Let $V_0=\operatorname{im}\pi_0\oplus\operatorname{im}\pi_0'\oplus\ker\pi_0'$. It is clear that $\dim V_0=\dim V$.

Lemma 5.2

Elements of V0 have at most k distinct eigenvalues.

Proof.

Let $\begin{pmatrix}\alpha&b^T\\c&D\end{pmatrix}$ be an arbitrary matrix in $V_0$. Then $\begin{pmatrix}0&b^T\\0&0\end{pmatrix}\in\ker\pi_0'\subseteq V$ and $\begin{pmatrix}\alpha&0\\0&D\end{pmatrix}\in\operatorname{im}\pi_0'$. So there exists $b'\in\mathbb{C}^{n-1}$ such that $\begin{pmatrix}\alpha&b'^T\\0&D\end{pmatrix}\in\ker\pi_0\subseteq V$. Finally, $\begin{pmatrix}0&0\\c&0\end{pmatrix}\in\operatorname{im}\pi_0$, so there exist $\alpha'\in\mathbb{C}$, $b''\in\mathbb{C}^{n-1}$, and $D'\in M_{n-1}(\mathbb{C})$ such that $\begin{pmatrix}\alpha'&b''^T\\c&D'\end{pmatrix}\in V$. Members of $V$ have at most $k$ distinct eigenvalues, therefore the matrix
$$\begin{pmatrix}t&0\\0&I\end{pmatrix}\left(t\begin{pmatrix}\alpha'&b''^T\\c&D'\end{pmatrix}+\begin{pmatrix}\alpha&b'^T\\0&D\end{pmatrix}+t^{-1}\begin{pmatrix}0&b^T\\0&0\end{pmatrix}\right)\begin{pmatrix}t^{-1}&0\\0&I\end{pmatrix}=\begin{pmatrix}\alpha+t\alpha'&b^T+tb'^T+t^2b''^T\\c&D+tD'\end{pmatrix}$$
has at most $k$ distinct eigenvalues for each $t\ne0$. Consequently, the starting matrix has at most $k$ distinct eigenvalues by Lemma 2.1(b).
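The conjugation identity in this proof can be checked symbolically with $1\times1$ blocks, so $2\times2$ matrices overall; the scalars below stand in for the blocks of the three summands (an illustration only, not the general case):

```python
import sympy as sp

t = sp.symbols('t', nonzero=True)
a, b, c, d = sp.symbols('alpha b c D')    # target matrix in V_0
a1, b1, d1 = sp.symbols('alpha1 b1 D1')   # member carrying the corner c
b2 = sp.symbols('b2')                     # member of ker(pi_0)

phi = sp.Matrix([[t, 0], [0, 1]])

# t*(corner member) + (ker(pi_0) member) + t^{-1}*(corner-only member)
inner = t*sp.Matrix([[a1, b1], [c, d1]]) \
        + sp.Matrix([[a, b], [0, d]]) \
        + (1/t)*sp.Matrix([[0, b2], [0, 0]])

result = sp.simplify(phi * inner * phi.inv())
# Claimed right-hand side of the identity.
expected = sp.Matrix([[a + t*a1, b2 + t*b + t**2*b1],
                      [c,        d + t*d1]])
print(sp.simplify(result - expected))  # zero matrix
```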

Next, we define the projection $\pi_\infty$ from $V$ to the upper-right corner as
$$\pi_\infty\colon\begin{pmatrix}\alpha&b^T\\c&D\end{pmatrix}\mapsto\begin{pmatrix}0&b^T\\0&0\end{pmatrix}.$$
Clearly, $\ker\pi_\infty$ consists of all members of $V$ of the form $\begin{pmatrix}\alpha&0\\c&D\end{pmatrix}$. We define the projection $\pi_\infty'$ from $\ker\pi_\infty$ to the diagonal blocks as
$$\pi_\infty'\colon\begin{pmatrix}\alpha&0\\c&D\end{pmatrix}\mapsto\begin{pmatrix}\alpha&0\\0&D\end{pmatrix}.$$
Now, $\ker\pi_\infty'$ consists of all members of $V$ of the form $\begin{pmatrix}0&0\\c&0\end{pmatrix}$. Let $V_\infty=\operatorname{im}\pi_\infty\oplus\operatorname{im}\pi_\infty'\oplus\ker\pi_\infty'$, so that again $\dim V_\infty=\dim V$. Similar arguments as in the proof of Lemma 5.2 show that all members of $V_\infty$ have at most $k$ distinct eigenvalues. Let us point out that the so-defined spaces $V_0$ and $V_\infty$ satisfy Condition AA from the beginning of Section 4.

Remark 5.3

Motivation for the definition of the spaces $V_0$ and $V_\infty$ comes from algebraic geometry and representation theory. Recall the group homomorphism $\phi\colon\mathbb{C}^*\to\mathrm{GL}_n$ defined by $t\mapsto\begin{pmatrix}t&0\\0&I\end{pmatrix}$. For each $t\in\mathbb{C}^*$ the space $\phi(t)V\phi(t)^{-1}$ is $m$-dimensional and its elements have at most $k$ distinct eigenvalues, so it belongs to the Fano variety $F_m(X)$. However, $F_m(X)$ is a projective variety, hence the limits
$$V_0=\lim_{t\to0}\phi(t)V\phi(t)^{-1}\quad\text{and}\quad V_\infty=\lim_{t\to0}\phi(t)^{-1}V\phi(t)$$
exist in $F_m(X)$. A short computation reveals that $V_0$ is $\phi$-stable; indeed,
$$\phi(s)V_0\phi(s)^{-1}=\phi(s)\lim_{t\to0}\phi(t)V\phi(t)^{-1}\phi(s)^{-1}=\lim_{t\to0}\phi(st)V\phi(st)^{-1}=V_0.$$
The same considerations apply to $V_\infty$. Therefore, both spaces are $\mathbb{C}^*$-modules for the action $(t,A)\mapsto\phi(t)A\phi(t)^{-1}$. Observe that this is equivalent to Condition AA.

Remark 5.4

To show that the two definitions of $V_0$ are equivalent, choose the following basis of the space $V$:
$$\begin{pmatrix}\alpha_1&b_1^T\\c_1&D_1\end{pmatrix},\dots,\begin{pmatrix}\alpha_r&b_r^T\\c_r&D_r\end{pmatrix},\ \begin{pmatrix}\alpha_{r+1}&b_{r+1}^T\\0&D_{r+1}\end{pmatrix},\dots,\begin{pmatrix}\alpha_{r+s}&b_{r+s}^T\\0&D_{r+s}\end{pmatrix},\ \begin{pmatrix}0&b_{r+s+1}^T\\0&0\end{pmatrix},\dots,\begin{pmatrix}0&b_m^T\\0&0\end{pmatrix}.$$
Here, $r$ and $s$ are chosen consecutively as maximal possible so that $c_1,\dots,c_r$ are linearly independent and that $\begin{pmatrix}\alpha_{r+1}&0\\0&D_{r+1}\end{pmatrix},\dots,\begin{pmatrix}\alpha_{r+s}&0\\0&D_{r+s}\end{pmatrix}$ are linearly independent. We denote the basis elements by $B_1,\dots,B_m$. Recall that $V$ is represented in $\mathrm{Gr}(m,n^2)\subseteq\mathbb{P}\bigl(\bigwedge^m(\mathbb{C}^{n^2})\bigr)$ by the class $[B_1\wedge\cdots\wedge B_m]$, which is independent of the choice of the basis (cf. [Citation19, Chapter 6]). In order to get a basis of the space $V_0$, we compute the limit of the classes within the Grassmannian determined by exterior products of basis elements:
$$[V_0]=\lim_{t\to0}\Bigl[\bigwedge_{i=1}^m\phi(t)B_i\phi(t)^{-1}\Bigr]=\lim_{t\to0}\Bigl[\bigwedge_{i=1}^m\begin{pmatrix}\alpha_i&tb_i^T\\t^{-1}c_i&D_i\end{pmatrix}\Bigr]=\lim_{t\to0}\Bigl[\bigwedge_{i=1}^r\begin{pmatrix}t\alpha_i&t^2b_i^T\\c_i&tD_i\end{pmatrix}\wedge\bigwedge_{i=r+1}^{r+s}\begin{pmatrix}\alpha_i&tb_i^T\\0&D_i\end{pmatrix}\wedge\bigwedge_{i=r+s+1}^m\begin{pmatrix}0&b_i^T\\0&0\end{pmatrix}\Bigr]=\Bigl[\bigwedge_{i=1}^r\begin{pmatrix}0&0\\c_i&0\end{pmatrix}\wedge\bigwedge_{i=r+1}^{r+s}\begin{pmatrix}\alpha_i&0\\0&D_i\end{pmatrix}\wedge\bigwedge_{i=r+s+1}^m\begin{pmatrix}0&b_i^T\\0&0\end{pmatrix}\Bigr].$$

Note that the factors of the above exterior product are indeed linearly independent, so a basis of $V_0$ is given by
$$\begin{pmatrix}0&0\\c_1&0\end{pmatrix},\dots,\begin{pmatrix}0&0\\c_r&0\end{pmatrix},\ \begin{pmatrix}\alpha_{r+1}&0\\0&D_{r+1}\end{pmatrix},\dots,\begin{pmatrix}\alpha_{r+s}&0\\0&D_{r+s}\end{pmatrix},\ \begin{pmatrix}0&b_{r+s+1}^T\\0&0\end{pmatrix},\dots,\begin{pmatrix}0&b_m^T\\0&0\end{pmatrix}.$$
Note that the first $r$ elements form a basis of $\operatorname{im}\pi_0$, the next $s$ elements form a basis of $\operatorname{im}\pi_0'$, and the remaining elements form a basis of $\ker\pi_0'$. So, the two definitions of $V_0$ are equivalent. The same considerations apply to $V_\infty$.

Recall that the spaces $V_0$ and $V_\infty$ satisfy Condition AA, so we may apply the results of Section 4. In the block partition with respect to blocks of sizes 1, $k-2$, $p$ and $n-p-k+1$ the upper-right corner of the linear space $V_0(1)$ is made of vectors of the form $(b'^T\;0\;b'''^T)$, and the lower-left corner of the linear space $V_0(-1)$ is made of all vectors of the form $(c'^T\;c''^T\;0)^T$. So, the linear space $V_0$ consists of all matrices of the form
$$\begin{pmatrix}\alpha&b'^T&0&b'''^T\\c'&D&0&E\\c''&B&\lambda I+A&C\\0&0&0&\lambda I+F\end{pmatrix},$$
where $A$ and $F$ are strictly upper triangular. The case of the space $V_\infty$ goes in the same way; however, the block partition there may be based on a different index, denoted by $q$ instead of $p$. We now want to show that these indices are equal and that consequently $V_0=V_\infty$.

Proposition 5.5

$V=V_0=V_\infty$.

Proof.

Recall the definitions of the projections $\pi_0,\pi_0',\pi_\infty$, and $\pi_\infty'$. Then
$$\operatorname{im}\pi_0=V_0(-1),\quad\operatorname{im}\pi_0'=V_0(0),\quad\ker\pi_0'=V_0(1),\quad\operatorname{im}\pi_\infty=V_\infty(1),\quad\operatorname{im}\pi_\infty'=V_\infty(0),\quad\text{and}\quad\ker\pi_\infty'=V_\infty(-1).$$
Note that $\ker\pi_0'\subseteq V$ and that $\pi_\infty$ is injective on $\ker\pi_0'$. Consequently, $\dim\ker\pi_0'\le\dim\operatorname{im}\pi_\infty$, or equivalently $\dim V_0(1)\le\dim V_\infty(1)$. It was shown before the proposition that $\dim V_0(1)=n-p-1$ and $\dim V_\infty(1)=n-q-1$, so $p\ge q$. We want to show that equality holds.

Write the elements of $V$, $V_0$ and $V_\infty$ with respect to the block partition of respective sizes 1, $k-2$, $q$, $p-q$ and $n-k-p+1$ (where some of these numbers may be zero). Then the spaces $V_0$ and $V_\infty$ consist of the matrices of the form
$$\begin{pmatrix}\alpha&b'^T&0&0&b'''^T\\c'&D&0&0&E\\c_1&B_1&\lambda I+A_1&A_2&C_1\\c_2&B_2&0&\lambda I+A_3&C_2\\0&0&0&0&\lambda I+F\end{pmatrix}\quad\text{respectively}\quad\begin{pmatrix}\alpha&b'^T&0&b_1^T&b_2^T\\c'&D&0&E_1&E_2\\c''&B&\lambda I+A&C_1&C_2\\0&0&0&\lambda I+F_1&F_2\\0&0&0&0&\lambda I+F_3\end{pmatrix},$$
where $A$, $A_1$, $A_3$, $F$, $F_1$, and $F_3$ are strictly upper triangular. So, members of $V_0(0)\cap V_\infty(0)$ are of the form
$$\begin{pmatrix}\alpha&0&0&0&0\\0&D&0&0&E\\0&B&\lambda I+A_1&A_2&C\\0&0&0&\lambda I+F_1&F_2\\0&0&0&0&\lambda I+F_3\end{pmatrix},\qquad(8)$$
where $A_1$, $F_1$, and $F_3$ are strictly upper triangular. An easy computation reveals that
$$\dim\bigl(V_0(0)\cap V_\infty(0)\bigr)=\binom{n-1}{2}+\binom{k-1}{2}+2-(p-q)(k-2).\qquad(9)$$
Let $\varphi\colon\ker\pi_0\to\mathbb{C}^{p-q}$ be the projection defined by $\begin{pmatrix}\alpha&b^T\\0&D\end{pmatrix}\mapsto b_1$, where $b^T=(b'^T\;b''^T\;b_1^T\;b_2^T)$; the blocks of the first matrix are of sizes 1 and $n-1$, and the blocks of $b^T$ are of sizes $k-2$, $q$, $p-q$, and $n-k-p+1$. It is obvious that $\ker\pi_0'=V_0(1)\subseteq\ker\varphi$. So, $\varphi$ induces a linear map $\bar\varphi\colon V_0(0)=\operatorname{im}\pi_0'\cong\ker\pi_0/\ker\pi_0'\to\mathbb{C}^{p-q}$. Let $A=\begin{pmatrix}\alpha&0\\0&D\end{pmatrix}\in V_0(0)=\operatorname{im}\pi_0'$ be arbitrary. Then there exists $b$ such that $\tilde A=\begin{pmatrix}\alpha&b^T\\0&D\end{pmatrix}\in\ker\pi_0\subseteq V$. Write $b^T=(b'^T\;b''^T\;b_1^T\;b_2^T)$. Then $b_1=\varphi(\tilde A)=\bar\varphi(A)$. Recall that $\pi_\infty$ is the projection from $V$ to the upper-right corner. The structure of $V_\infty(1)=\operatorname{im}\pi_\infty$ implies that all $(1,3)$-blocks of matrices from $V$ are zero. In particular, $b''=0$. If we subtract $\tilde A$ from the matrix
$$A+\begin{pmatrix}0&0&0&\bar\varphi(A)^T&0\\0&0&0&0&0\\0&0&0&0&0\\0&0&0&0&0\\0&0&0&0&0\end{pmatrix},\qquad(10)$$
the difference lies in $V_0(1)=\ker\pi_0'\subseteq V$. It follows that for each $A\in V_0(0)$ the matrix (10) lies in $V$. In particular, $\ker\bar\varphi\subseteq V$. If $A\in V_0(0)\cap V$, then $A\in\ker\pi_\infty$, so $\pi_\infty'(A)\in\operatorname{im}\pi_\infty'=V_\infty(0)$. Since $\pi_\infty'$ is the identity on $V_0(0)\cap V$, it follows that $V_0(0)\cap V\subseteq V_0(0)\cap V_\infty(0)$. Consequently, $\ker\bar\varphi\subseteq V_0(0)\cap V_\infty(0)$ and therefore
$$\dim\bigl(V_0(0)\cap V_\infty(0)\bigr)\ge\dim\ker\bar\varphi\ge\dim V_0(0)-(p-q)=\binom{n-1}{2}+\binom{k-1}{2}+2-(p-q).$$
Combining this inequality with (9) we get $p-q\ge(k-2)(p-q)$.
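The dimension count (9) is a straightforward tally of the free blocks in a matrix of the form (8); the following sketch checks it for one concrete (purely illustrative) choice of sizes:

```python
from math import comb

# Concrete sizes, for illustration only: n, k and indices p > q.
n, k, p, q = 9, 4, 3, 1
K, m, r = k - 2, p - q, n - k - p + 1   # block sizes k-2, p-q, n-k-p+1

# Free parameters in a matrix of the form (8):
dim = (1                                  # alpha
       + 1                                # the scalar lambda
       + K*K + K*r                        # D and E
       + q*K + q*(q-1)//2 + q*m + q*r     # B, strictly upper A1, A2, C
       + m*(m-1)//2 + m*r                 # strictly upper F1 and F2
       + r*(r-1)//2)                      # strictly upper F3

# Compare with formula (9).
print(dim == comb(n-1, 2) + comb(k-1, 2) + 2 - (p - q)*(k - 2))  # True
```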

If $k\ge4$, it now immediately follows that $p=q$ and hence $V_0=V_\infty$. In particular, $\operatorname{im}\pi_\infty=\ker\pi_0'$ and $\operatorname{im}\pi_0=\ker\pi_\infty'$, which implies that $V$ contains all upper-right and all lower-left corners of its elements with respect to the block partition $(1,n-1)$. Therefore $V=V_0=V_\infty$.

It remains to obtain a contradiction in the case $k=3$ when $p>q$. In this case, all the above dimension inequalities become equalities. This means that $\bar\varphi$ is surjective and its kernel is $V_0(0)\cap V_\infty(0)$, so the induced map
$$\bar{\bar\varphi}\colon\mathbb{C}^{p-q}\cong V_0(0)/\bigl(V_0(0)\cap V_\infty(0)\bigr)\to\mathbb{C}^{p-q}$$
is an isomorphism. It follows that for each $y\in\mathbb{C}^{p-q}$ the matrix
$$\begin{pmatrix}0&0&0&\bar{\bar\varphi}(y)^T&0\\0&0&0&0&0\\0&0&0&0&0\\0&y&0&0&0\\0&0&0&0&0\end{pmatrix}$$
belongs to $V$.

To obtain a contradiction we now adjust the ideas of Lemmas 4.2 and 4.9. Assume first that $p-q\ge2$. Fix distinct nonzero numbers $\lambda_1$ and $\lambda_2$. In the above observation take $y=e_{p-q}$ and write $\bar{\bar\varphi}(e_{p-q})^T=x^T=(x_1\;x_2\;\cdots\;x_{p-q})$. Note that $V_0(-1)\cap V_\infty(-1)\subseteq\ker\pi_\infty'\subseteq V$ and $V_0(0)\cap V_\infty(0)=\ker\bar\varphi\subseteq V$. It follows that for an arbitrary $i\in\{1,\dots,p-q-1\}$ the matrix
$$\begin{pmatrix}\lambda_1&0&0&x^T&0\\\mu&\lambda_2&0&0&0\\0&0&0&0&0\\0&e_{p-q}&0&E_{i,p-q}&0\\0&0&0&0&0\end{pmatrix}$$
belongs to $V$ for each $\mu\in\mathbb{C}$. Hence it has at most three distinct eigenvalues. Its characteristic polynomial is equal to
$$\Delta(t)=(-t)^{n+q-p-2}\begin{vmatrix}\lambda_1-t&0&x^T\\\mu&\lambda_2-t&0\\0&e_{p-q}&E_{i,p-q}-tI\end{vmatrix}=(-t)^{n-2}(\lambda_1-t)(\lambda_2-t)-\mu(-t)^{n+q-p-2}\begin{vmatrix}0&x^T\\e_{p-q}&E_{i,p-q}-tI\end{vmatrix}=(-t)^{n-4}\bigl(t^2(\lambda_1-t)(\lambda_2-t)-\mu(x_i+tx_{p-q})\bigr).$$
By the assumption on $V$ the quartic polynomial in the parentheses has a multiple zero for each $\mu\in\mathbb{C}$, so for each $\mu\in\mathbb{C}$ its discriminant, written as the $7\times7$ Sylvester-type determinant
$$\begin{vmatrix}1&-(\lambda_1+\lambda_2)&\lambda_1\lambda_2&-\mu x_{p-q}&-\mu x_i&0&0\\0&1&-(\lambda_1+\lambda_2)&\lambda_1\lambda_2&-\mu x_{p-q}&-\mu x_i&0\\0&0&1&-(\lambda_1+\lambda_2)&\lambda_1\lambda_2&-\mu x_{p-q}&-\mu x_i\\4&-3(\lambda_1+\lambda_2)&2\lambda_1\lambda_2&-\mu x_{p-q}&0&0&0\\0&4&-3(\lambda_1+\lambda_2)&2\lambda_1\lambda_2&-\mu x_{p-q}&0&0\\0&0&4&-3(\lambda_1+\lambda_2)&2\lambda_1\lambda_2&-\mu x_{p-q}&0\\0&0&0&4&-3(\lambda_1+\lambda_2)&2\lambda_1\lambda_2&-\mu x_{p-q}\end{vmatrix},$$
is zero. The above discriminant is a polynomial of degree 4 in $\mu$ with constant term zero. Collecting powers of $\mu$, the coefficient of $\mu$ equals $4\lambda_1^3\lambda_2^3(\lambda_1-\lambda_2)^2x_i$ and the coefficient of $\mu^4$ equals $-27(x_{p-q})^4$. As both coefficients have to be zero, we get $x_{p-q}=x_i=0$. Since $i$ was an arbitrary element of $\{1,\dots,p-q-1\}$, we get $x=0$, a contradiction with the fact that $x=\bar{\bar\varphi}(e_{p-q})$ and $\bar{\bar\varphi}$ is an isomorphism.
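The two discriminant coefficients can be double-checked with sympy (note that the overall signs of individual coefficients depend on the chosen row arrangement of the discriminant determinant; the values below follow sympy's standard convention). The sketch uses the concrete illustrative values $\lambda_1=1$, $\lambda_2=2$, so $4\lambda_1^3\lambda_2^3(\lambda_1-\lambda_2)^2=32$, and keeps $x_i$, $x_{p-q}$ symbolic:

```python
import sympy as sp

t, mu, xi, xp = sp.symbols('t mu x_i x_pq')
lam1, lam2 = 1, 2  # concrete distinct nonzero eigenvalues, for illustration

# The quartic from the characteristic polynomial computation.
f = t**2*(lam1 - t)*(lam2 - t) - mu*(xi + t*xp)

disc = sp.Poly(sp.discriminant(f, t), mu)

# Degree 4 in mu with zero constant term, as stated in the text.
print(disc.degree(), disc.coeff_monomial(1))     # 4 0
# Coefficient of mu: 4*lam1^3*lam2^3*(lam1 - lam2)^2 * x_i = 32*x_i here.
print(sp.expand(disc.coeff_monomial(mu)))        # 32*x_i
# Coefficient of mu^4: -27*x_pq^4 in this convention.
print(sp.expand(disc.coeff_monomial(mu**4)))     # -27*x_pq**4
```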

On the other hand, if $p-q=1$, then we consider a similar matrix as above, with the only difference that the $(4,4)$-block is taken to be zero. The characteristic polynomial of this matrix equals
$$\Delta(t)=(-t)^{n-4}\bigl(t^2(\lambda_1-t)(\lambda_2-t)-\mu x_{p-q}t\bigr).$$
The same argument as above shows that $x_{p-q}=0$, i.e. $x=0$, which again yields a contradiction. This concludes the proof of the proposition.

Finally, we can finish the proof of Theorem 1.1. By Proposition 5.5 the space $V$ is equal to $V_0$. Consequently, by the argument just before Proposition 5.5 there exists $p\in\{0,1,\dots,n-k+1\}$ such that with respect to the block partition of respective sizes $k-1$, $p$, $n-k-p+1$ the space $V$ consists of all matrices of the form
$$\begin{pmatrix}D&0&E\\B&A+\lambda I&C\\0&0&F+\lambda I\end{pmatrix},$$
where $A$ and $F$ are strictly upper triangular and all other nonzero blocks are arbitrary. A similarity that interchanges the first two block rows and columns now brings the space $V$ into the form (Equation1).

Acknowledgments

The authors are indebted to an anonymous referee for numerous suggestions that helped us substantially improve the presentation and organization of this manuscript.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Additional information

Funding

MO&KŠ acknowledge financial support from the Slovenian Research Agency [research core funding number P1-0222]. KŠ also acknowledges [grant numbers N1-0103 and J1-3004] from the Slovenian Research Agency.

References

  • Gerstenhaber M. On nilalgebras and linear varieties of nilpotent matrices I. Am J Math. 1958;80:614–622. doi: 10.2307/2372773
  • Kokol Bukovšek D, Omladič M. Linear spaces of symmetric nilpotent matrices. Linear Algebra Appl. 2017;530:384–404. doi: 10.1016/j.laa.2017.05.030
  • de Seguins Pazzis C. The structured Gerstenhaber problem (I). Linear Algebra Appl. 2019;567:263–298. doi: 10.1016/j.laa.2018.08.015
  • de Seguins Pazzis C. The structured Gerstenhaber problem (II). Linear Algebra Appl. 2019;569:113–145. doi: 10.1016/j.laa.2018.11.033
  • de Seguins Pazzis C. The structured Gerstenhaber problem (III). Linear Algebra Appl. 2020;601:134–169. doi: 10.1016/j.laa.2020.04.021
  • Omladič M, Šemrl P. Matrix spaces with bounded number of eigenvalues. Linear Algebra Appl. 1996;249:29–46. doi: 10.1016/0024-3795(95)00253-7
  • Serežkin VN. Linear transformations preserving nilpotency. Vesti Akad Navuk BSSR Ser Fiz Mat Navuk. 1985;5:46–50.
  • Mathes B, Omladič M, Radjavi H. Linear spaces of nilpotent matrices. Linear Algebra Appl. 1991;149:215–225. doi: 10.1016/0024-3795(91)90335-T
  • de Seguins Pazzis C. Spaces of matrices with a sole eigenvalue. Linear Multilinear Algebra. 2012;60:1165–1190. doi: 10.1080/03081087.2011.654118
  • Loewy R, Radwan N. On spaces of matrices with a bounded number of eigenvalues. Electron J Linear Algebra. 1998;3:142–152. doi: 10.13001/1081-3810.1020
  • de Seguins Pazzis C. Spaces of matrices with few eigenvalues. Linear Algebra Appl. 2014;449:210–311. doi: 10.1016/j.laa.2014.02.015
  • Grunenfelder L, Košir T, Omladič M, et al. Maximal Jordan algebras of matrices with bounded number of eigenvalues. Isr J Math. 2002;128:53–75. doi: 10.1007/BF02785418
  • Draisma J, Kraft H, Kuttler J. Nilpotent subspaces of maximal dimension in semi-simple Lie algebras. Compos Math. 2006;142:464–476. doi: 10.1112/S0010437X05001855
  • Meshulam R, Radwan N. On linear subspaces of nilpotent elements in a Lie algebra. Linear Algebra Appl. 1998;279:195–199. doi: 10.1016/S0024-3795(98)10010-1
  • Woodroofe R. An algebraic groups perspective on Erdős–Ko–Rado. Linear Multilinear Algebra. 2022;70:7825–7835. doi: 10.1080/03081087.2021.2013428
  • Horn RA, Johnson CR. Matrix analysis. 2nd ed. New York: Cambridge University Press; 2013.
  • Krein MG, Naimark MA. The method of symmetric and Hermitian forms in the theory of the separation of the roots of algebraic equations. Linear Multilinear Algebra. 1981;10:265–308. doi: 10.1080/03081088108817420. English translation from the Russian of the paper originally published in Kharkov (1936).
  • Omladič M, Radjavi H, Šivic K. On approximate commutativity of spaces of matrices. Linear Algebra Appl. 2023;676:251–266. doi: 10.1016/j.laa.2023.06.026
  • Harris J. Algebraic geometry: a first course. New York: Springer-Verlag; 1992. (Graduate texts in mathematics; Vol. 133).
  • Borel A. Linear algebraic groups. New York: Springer-Verlag; 1991. (Graduate Texts in Mathematics; Vol. 126).
  • Cox D, Little J, O'Shea D. Ideals, varieties, and algorithms: an introduction to computational algebraic geometry and commutative algebra. 3rd ed. New York: Springer; 2007. (Undergraduate texts in mathematics).
  • Tauvel P, Yu RWT. Lie algebras and algebraic groups. Berlin, Heidelberg: Springer-Verlag; 2005. (Springer monographs in mathematics).
  • Gantmacher FR. The theory of matrices. Providence (RI): AMS Chelsea Publishing; 1959. (Vol. 1).