ABSTRACT
We characterize all real matrix semigroups, indexed by the non-negative reals, which satisfy a mild boundedness assumption, without assuming continuity. Besides the continuous solutions of the semigroup functional equation, we give a description of solutions arising from non-measurable solutions of Cauchy’s functional equation. To do so, we discuss the primary decomposition and the Jordan–Chevalley decomposition of a matrix semigroup. Our motivation stems from a characterization of all multi-dimensional self-similar Gaussian Markov processes, which is given in a companion paper.
1. Introduction
It is a classical fact that all continuous matrix-valued functions g: [0, ∞) → ℝ^{d×d} satisfying the semigroup property

g(x)g(y) = g(x + y), x, y ≥ 0, g(0) = I, (1.1)

where I is the d × d identity matrix, are given by the maps g(x) = e^{xA}, where the matrix A is arbitrary. See, e.g., Problem 2.1 in (Engel & Nagel, 2000) and the subsequent discussion. To the best of our knowledge, few authors have considered non-continuous solutions of (1.1) in the multidimensional case d > 1. Kuczma and Zajtz (1966) determine all measurable matrix semigroups. Zajtz (1971) characterizes the matrix semigroups indexed by rational numbers. The one-dimensional case, which is equivalent to Cauchy’s functional equation,

ν(x + y) = ν(x) + ν(y), x, y ≥ 0, (1.3)
is well studied, on the other hand (see Aczél, 1966; Bingham et al., 1987). Our motivation to investigate non-continuous matrix semigroups stems from probability theory: in a companion paper, we develop a characterization of all self-similar Gaussian Markov processes. By self-similarity, the bivariate covariance function of such a multi-dimensional stochastic process can be transformed to a matrix function of a single argument, which must satisfy (1.1). See Section 2 for more details. Our main result (Theorem 2.6) determines all solutions of (1.1) which satisfy a mild boundedness assumption. In Section 3, we prepare the proof by providing a decomposition of the vector space into invariant subspaces, and establish some useful properties of the decomposition. The main proofs (in Section 5) are preceded by a discussion of the Jordan–Chevalley decomposition of a semigroup, in Section 4. In Appendix A, we give two auxiliary results on Cauchy’s functional equation.
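As a quick numerical sanity check of this classical fact (a sketch assuming NumPy; the truncated Taylor series stands in for a library matrix exponential), the maps g(x) = e^{xA} indeed satisfy the semigroup property (1.1):

```python
import numpy as np

def expm(A, terms=40):
    """Naive matrix exponential via truncated Taylor series (adequate for small ||A||)."""
    result = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ A / k
        result = result + term
    return result

# A generates rotations: exp(xA) is the rotation matrix by angle x.
A = np.array([[0.0, -1.0], [1.0, 0.0]])
g = lambda x: expm(x * A)

x, y = 0.7, 1.3
assert np.allclose(g(x) @ g(y), g(x + y))  # semigroup property (1.1)
assert np.allclose(g(0.0), np.eye(2))      # g(0) is the identity
```

The interest of the present paper lies precisely in the solutions that such continuous examples miss.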
2. Preliminaries and main result
To motivate our investigation, we first recall some facts about Gaussian stochastic processes (see, e.g., Lifshits, 2012; Nourdin, 2012, for more information). Let X = (X(t))_{t ≥ 0} be a d-dimensional real centered Gaussian process. The covariance of X is a matrix-valued function satisfying

r(s, t) = E[X(s)X(t)^⊤], s, t ≥ 0,

and uniquely characterizes the law of the process. Suppose that X is self-similar, and that r(1, 1) is the identity matrix. In terms of the covariance function, self-similarity means that r(as, at) = a^{2H} r(s, t) for a > 0 and some self-similarity parameter H > 0. By a classical criterion (Doob, 1953, Theorem V.8.1), X is a Markov process if and only if its covariance function satisfies

r(s, u) = r(s, t) r(t, t)^{-1} r(t, u), 0 < s ≤ t ≤ u. (2.1)
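To make the Markov criterion concrete, here is a minimal check (plain Python, using the one-dimensional form r(s, u) = r(s, t) r(t, u)/r(t, t) for s ≤ t ≤ u) for standard Brownian motion, whose covariance is r(s, t) = min(s, t):

```python
import itertools

# One-dimensional Doob criterion, checked for Brownian motion: r(s,t) = min(s,t).
r = lambda s, t: min(s, t)

for s, t, u in itertools.combinations([0.3, 0.9, 1.7, 2.5], 3):  # s <= t <= u
    assert abs(r(s, u) - r(s, t) * r(t, u) / r(t, t)) < 1e-12
```

Covariances that fail this identity (e.g. that of fractional Brownian motion with H ≠ 1/2) belong to non-Markov processes, which is the classical route to the non-Markov results cited below.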
Upon introducing a suitable matrix function of a single argument, self-similarity allows one to reduce (2.1) to the semigroup Equation (1.1).
This observation has been used in dimension d = 1 to prove that certain Gaussian processes do not have the Markov property (see, e.g., Nourdin, 2012, Theorem 2.3). Unifying and generalizing these isolated results in a companion paper, we obtain a classification of all d-dimensional self-similar Gaussian Markov processes. A full classification requires finding all solutions of (1.1), without assuming continuity. The boundedness assumption (Assumption 2.5 below) causes no problems, though. To state our main result, define the rotation matrix

R(t) := [ cos t  −sin t ; sin t  cos t ], t ∈ ℝ, (2.2)
and, for even k and a function ν, the block-diagonal k × k matrix

diag(R(ν(x)), …, R(ν(x)))

consisting of k/2 rotation matrices. In our statements, ν will denote some non-measurable (equivalently, non-continuous) solution of (1.3). The following assumption is in force throughout the paper.
Assumption 2.1.
In the following, V denotes a real d-dimensional vector space equipped with an inner product ⟨·, ·⟩.
Definition 2.2.
We write L(V) for the set of linear maps from V to itself. The operator norm induced by the inner product is denoted by ‖·‖. If the basis is clear from the context, we will identify elements of L(V) and d × d matrices.
Definition 2.3.
We say that g: [0, ∞) → L(V) is a semigroup if g(0) = id and

g(x)g(y) = g(x + y)

for all x, y ≥ 0. Semigroups acting on the complexification of V are defined by the same property.
Definition 2.4.
We say that a semigroup in L(V) is elementary if there exists an orthonormal basis such that
for some non-continuous ν satisfying Cauchy’s Equation (1.3) and some matrix.
Assumption 2.5.
The semigroup g satisfies ‖g(x)‖ ≤ f(x) for x ≥ 0, where the function f is locally bounded, right-continuous at 0 and satisfies f(0) = 1.
We can now state our main theorem, which will be proven at the end of Section 5.
Theorem 2.6.
Let g be a semigroup satisfying Assumption 2.5. Then, there exists an orthogonal decomposition V = V₁ ⊕ ⋯ ⊕ V_n such that each Vi is invariant under g(x), and either g(x) is elementary on Vi or g(x) = 0 on Vi for x > 0.
We call g degenerate on Vi if g(x) = 0 on Vi for x > 0.
Example 2.7.
Let V = ℝ², and let ν be a non-continuous solution of Cauchy’s functional Equation (1.3). Then, the semigroup given by

g(x) = R(ν(x))

is an example of a semigroup covered by Theorem 2.6 (by putting d = 2 in Definition 2.4), but not by the previous results mentioned at the beginning of the introduction. It illustrates that, in contrast to dimension one, a two-dimensional locally bounded semigroup need not be continuous.
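A non-measurable ν cannot be realized numerically, but the algebra behind Example 2.7 uses only additivity of ν. The following sketch (assuming NumPy, with a linear stand-in for ν) checks the semigroup property of g(x) = R(ν(x)):

```python
import numpy as np

def R(t):
    """Rotation matrix by angle t, as in (2.2)."""
    return np.array([[np.cos(t), -np.sin(t)],
                     [np.sin(t),  np.cos(t)]])

# Any additive nu yields a semigroup; non-measurable solutions cannot be
# sampled on a computer, so a linear stand-in is used here.
nu = lambda x: 2.0 * x
g = lambda x: R(nu(x))

x, y = 0.4, 1.1
assert np.allclose(g(x) @ g(y), g(x + y))  # semigroup property (1.1)
assert np.allclose(g(0.0), np.eye(2))
```

The check only uses ν(x) + ν(y) = ν(x + y), so it goes through verbatim for any (including non-measurable) solution of (1.3).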
3. Primary decomposition for semigroups
In this section, we discuss a decomposition of V into subspaces which are invariant for the given semigroup.
Definition 3.1.
Let S be a linear operator on the vector space V. A subspace U ⊆ V is called S-invariant if S maps U into U.
The primary decomposition theorem from linear algebra (O’Meara et al., 2011, Theorem 1.5.1) decomposes a vector space into invariant subspaces for a given operator. On each subspace, the operator has a single real eigenvalue or a pair of conjugate complex eigenvalues. Instead of a single operator, we need the following version for semigroups.
Theorem 3.2.
(Primary decomposition). For any semigroup of linear maps acting on V, there exists a decomposition V = V₁ ⊕ ⋯ ⊕ V_n such that each Vi is g(x)-invariant for all x ≥ 0, and for all i one of the following holds:

(1) For all x ≥ 0, g(x) has one eigenvalue on Vi;
(2) For all x ≥ 0, g(x) has a pair of complex conjugate eigenvalues on Vi. They may coincide (and thus be real) for some values of x, but not for all x ≥ 0.
Proof.
It is known that the primary decomposition extends to commuting sets of matrices. Indeed, the decomposition into invariant subspaces follows from Theorem 5 on p. 40 in (Jacobson, 1962), applied to the span of the semigroup. Thus, we only need to argue why (1) or (2) follows from the semigroup property. Assume that each g(x) has only one eigenvalue λ(x) on some Vi. Since the g(x) commute, they share a common eigenvector vi, and thus we have g(x)vi = λ(x)vi for x ≥ 0. Suppose λ(x₀) ∉ ℝ for some x₀. Then, we have λ(x₀) ≠ λ̄(x₀), and since g(x₀) is real, the conjugate λ̄(x₀) is also an eigenvalue of g(x₀) on Vi, contradicting the assumption that g(x) has only one eigenvalue on Vi for all x. Hence λ(x) ∈ ℝ for all x ≥ 0.
Theorem 3.3.
Consider a semigroup acting on the complexification of V. Then, there exists a decomposition V^ℂ = V₁ ⊕ ⋯ ⊕ V_m such that, for each i and all x ≥ 0, the space Vi is g(x)-invariant, and g(x) has only one eigenvalue on Vi.
Proof.
This is an immediate consequence of Theorem 5 on p. 40 in (Jacobson, 1962) (cf. the preceding proof).
Definition 3.4.
We call the decomposition from Theorem 3.2 the simultaneous real primary decomposition (SRPD) of V, omitting “w.r.t. g” if the semigroup is clear from the context. The component Vi is of first type in case (1), and of second type in case (2). Similarly, the simultaneous primary decomposition (SPD) is the decomposition from Theorem 3.3.
Lemma 3.5.
Let g be a semigroup acting on V, and let Vi be a subspace of first type from the SRPD of V. Then, there exists a common eigenvector v ∈ Vi, i.e. g(x)v ∈ span{v} for all x > 0.
Proof.
We present an algorithm which yields the subspace of common eigenvectors. If Vi itself is this subspace, then we are done. Otherwise there exists x₁ > 0 such that the eigenspace U₁ of g(x₁) is a strict subspace of Vi. For any y > 0 and any v ∈ U₁ we have

g(x₁)g(y)v = g(y)g(x₁)v = λ(x₁)g(y)v.

It follows that U₁ is g(x)-invariant for all x > 0. Now either U₁ consists of common eigenvectors, or we can again find x₂ > 0 such that the eigenspace U₂ of g(x₂) in U₁ is a strict subspace of U₁. Repeating this argument yields a sequence of nontrivial subspaces whose dimensions are strictly decreasing, hence it has to terminate. Clearly, the final vector space in this sequence is the space of common eigenvectors of the semigroup.
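The mechanism behind this proof, that commuting operators leave each other's eigenspaces invariant and hence admit a common eigenvector, can be illustrated numerically (a sketch assuming NumPy; the matrices are hypothetical examples, with B a polynomial in A so that the pair commutes):

```python
import numpy as np

A = np.array([[1.0, 2.0], [0.0, 3.0]])
B = A @ A + np.eye(2)             # a polynomial in A, hence commutes with A
assert np.allclose(A @ B, B @ A)

v = np.array([1.0, 0.0])          # common eigenvector of A and B
for M in (A, B):
    Mv = M @ v
    lam = (Mv @ v) / (v @ v)      # Rayleigh quotient recovers the eigenvalue
    assert np.allclose(Mv, lam * v)
```

Here v spans a one-dimensional subspace invariant under both operators, exactly the kind of subspace the algorithm above terminates with.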
Lemma 3.6.
Let g be a semigroup acting on V, and let Vi be a subspace from the SRPD of V such that there exists x₀ > 0 with det(g(x₀)|Vi) = 0. Then, we have det(g(x)|Vi) = 0 for all x > 0. Furthermore g(x) = 0 on Vi for all x > 0.
Proof.
By the previous lemma, there exists a common eigenvector v ∈ Vi. (Since commuting matrices are simultaneously triangularizable, it follows that they share a common eigenvector also when Vi is of second type.) We have

λ(x)λ(y)v = g(x)g(y)v = g(x + y)v = λ(x + y)v.

Hence λ satisfies λ(x)λ(y) = λ(x + y). Since λ(x₀) = 0, we have λ(x) = 0 for all x ≥ x₀, since λ(x) = λ(x₀)λ(x − x₀). Let y > 0; then there exists k ∈ ℕ such that ky ≥ x₀. We obtain

λ(y)^k = λ(ky) = 0, and hence λ(y) = 0.

Since λ(y) = 0 for all y > 0, it follows that the characteristic polynomial of g(y) on Vi satisfies χ(z) = z^{di}, where di = dim Vi, and hence, by the Cayley–Hamilton theorem, (g(y)|Vi)^{di} = 0. For x > 0 we obtain

g(x) = g(x/di)^{di} = 0 on Vi.
Corollary 3.7.
Let g be a semigroup acting on V, with SRPD V = V₁ ⊕ ⋯ ⊕ V_n. For any x > 0 we have

ker g(x) = ⊕_{i ∈ I} Vi, where I = {i : g(y) = 0 on Vi for all y > 0}.
Proof.
If i ∈ I, then g(x) = 0 on Vi for all x > 0 and hence Vi is of type 1. By the previous lemma, we have

⊕_{i ∈ I} Vi ⊆ ker g(x).

If the inclusion were strict, then there would exist Vj with j ∉ I such that ker g(x) ∩ Vj ≠ {0}. But since g(x) is invertible on Vj this gives a contradiction, hence we have equality.
Corollary 3.8.
Let g be a semigroup acting on V. Then, there exists a decomposition V = V₁ ⊕ V₂ such that g(x) is invertible on V₁ for all x > 0 and g(x) = 0 on V₂ for x > 0.
4. Multiplicative Jordan–Chevalley decomposition of semigroups
Due to Corollary 3.8, from now on we assume in most of our statements that g(x) is invertible for all x > 0. A standard result from linear algebra, the Jordan–Chevalley decomposition, asserts that any matrix A can be uniquely decomposed as A = D + N, where D is diagonalizable, N is nilpotent, and D and N commute. If A is invertible, then we can express it as A = DT with T unipotent and commuting with D.
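For concreteness, here is a small worked example (a sketch assuming NumPy; the matrix is a hypothetical example, not one from the paper): for an invertible A with the single eigenvalue 2, the diagonalizable factor is D = 2I and T = D⁻¹A is unipotent:

```python
import numpy as np

A = np.array([[2.0, 1.0], [0.0, 2.0]])  # single eigenvalue 2, not diagonalizable
D = 2.0 * np.eye(2)                     # diagonalizable (here scalar) factor
T = np.linalg.inv(D) @ A                # T = D^{-1} A = [[1, 0.5], [0, 1]]

assert np.allclose(D @ T, A)            # A = D T
assert np.allclose(D @ T, T @ D)        # the factors commute
N = T - np.eye(2)
assert np.allclose(N @ N, 0.0)          # N nilpotent, so T is unipotent
```

The additive and multiplicative forms are linked exactly as in the text: here A = D + N' with N' = D @ N nilpotent and commuting with D.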
Definition 4.1.
For an invertible linear map A, the multiplicative decomposition A = DT into commuting factors, with D diagonalizable and T unipotent, is called the multiplicative Jordan–Chevalley decomposition.
For background on the (multiplicative) Jordan–Chevalley decomposition, we refer to Section 15.1 in (Humphreys, 1975). We now analyze the structure of the multiplicative Jordan–Chevalley decomposition of a semigroup.
Theorem 4.2.
Let g be a semigroup of invertible linear maps acting on the complexification of V, and let g(x) = D(x)T(x) be the multiplicative Jordan–Chevalley decomposition of each g(x). Then (D(x))_{x ≥ 0} and (T(x))_{x ≥ 0} each form a semigroup, and the two families commute with each other, i.e. D(x)T(y) = T(y)D(x) for all x, y ≥ 0.
Proof.
We can w.l.o.g. assume that g(0) = id, since by uniqueness of the Jordan–Chevalley decomposition D(0) = id and T(0) = id. Take the SPD from Theorem 3.3, so that each g(x) has only one eigenvalue on Vi. Denote by g(x) = Di(x)Ti(x) the multiplicative Jordan–Chevalley decomposition of g(x) restricted to Vi, and denote by λi(x) the eigenvalue of g(x) on Vi. Clearly, Di(x) = λi(x) id. Since the g(x) are a commuting family of matrices, they share a common eigenvector vi. We have

λi(x)λi(y)vi = g(x)g(y)vi = g(x + y)vi = λi(x + y)vi,

and hence (Di(x))_{x ≥ 0} is a semigroup. Since each Di(x) is a multiple of the identity, it commutes with every linear map and hence

Ti(x)Ti(y) = Di(x)^{-1} g(x) Di(y)^{-1} g(y) = Di(x + y)^{-1} g(x + y) = Ti(x + y),

which shows that (Ti(x))_{x ≥ 0} is also a semigroup. The result then follows, since by uniqueness the direct sums of the Di(x) and of the Ti(x) give the multiplicative Jordan–Chevalley decomposition of g(x).
Theorem 4.3.
Let g be a semigroup of invertible linear maps acting on V and let g(x) = D(x)T(x) be its multiplicative Jordan–Chevalley decomposition. Then, there exist commuting real diagonalizable linear maps J(x) and commuting real nilpotent linear maps N(x) satisfying

(1) J(x + y) = J(x) + J(y),
(2) N(x + y) = N(x) + N(y),
(3) J(x)N(y) = N(y)J(x) for all x, y ≥ 0,

such that

D(x) = e^{J(x)}

and

T(x) = e^{N(x)}.
Proof.
Let V = V₁ ⊕ ⋯ ⊕ V_n be the SRPD and let x > 0. Assume first that Vi is of first type and λ(x) is the single positive eigenvalue of g(x) on Vi. Then, the multiplicative Jordan–Chevalley decomposition on Vi is g(x) = λ(x)T(x). Define N(x) := log T(x). Since N(x) is nilpotent, we have

T(x) = e^{N(x)}.

Notice that since the logarithm converges for all unipotent matrices, the exponential map between the Lie algebra of nilpotent matrices and the Lie group of unipotent matrices is bijective (see p. 35 in (Goodman & Wallach, 1998)). Since (T(x)) is a semigroup we have

e^{N(x)} e^{N(y)} = e^{N(x + y)}.

Since the T(x) commute and the N(x) are polynomials in them, the N(x) commute as well, and we may rewrite this as

e^{N(x) + N(y)} = e^{N(x + y)}.

Since N(x) + N(y) is nilpotent and exp is bijective, we have

N(x + y) = N(x) + N(y).
For any set , which is invertible for . It is clear that
Applying the same idea again yields
Again by uniqueness we obtain
or equivalently
By commutativity we obtain
and hence, again by uniqueness, we have . It follows that and have the desired properties.
In the second case Vi is of second type. By Theorem 3.3, and since g(x) is real, Vi decomposes over ℂ as U₁ ⊕ U₂, where g(x) has only one eigenvalue on each Uj and U₁ is isomorphic to U₂ with isomorphism given by complex conjugation. By taking the principal branch of the logarithm, in the same manner as in the real case, we obtain commuting maps on U₁ and on U₂ satisfying Cauchy’s equation. Notice that ν satisfies Cauchy’s functional equation a priori only on an interval. By Lemma A.1, we can lift any such solution to a solution on the half-line in a way that preserves linearity; from now on, denote this lift again by ν. Hence, if we choose any basis of U₁ and its complex conjugate on U₂, we obtain that g(x) is similar to
Taking the similarity transform with the matrix
where , we obtain
Since matrix similarity over ℂ is equivalent to matrix similarity over ℝ for two real matrices, there exists a real matrix Bi on Vi such that
Setting
and
we see that Ji and Ni satisfy the desired conditions, and they are real by uniqueness of the Jordan–Chevalley decomposition. The direct sums of the Ji and of the Ni give the required matrices. Furthermore, in the case where there exists x > 0 for which g(x) has a complex eigenvalue, we have
Recall the matrix R(t) defined in (2.2). By changing the order of the basis, we have that Ui is similar to the block diagonal matrix
Hence
where the similarity is given by the composition of Bi with some permutation matrix P.
5. Proof of the main result
After providing some final preparatory results, this section ends with the proof of Theorem 2.6. Consider again the SRPD V = V₁ ⊕ ⋯ ⊕ V_n from Theorem 3.2. Now on each Vi, g(x) has either one positive eigenvalue or two complex conjugate eigenvalues λi(x) and λ̄i(x), where it is possible that λi(x) ∈ ℝ for some values of x. If Vi is of first type, set νi := 0. Each νi is a solution to Cauchy’s functional Equation (1.3). Consider then the set {ν₁, …, ν_n} and partition it into equivalent solutions, according to Definition A.2. This then gives a partition of the index set in the following manner: If νi is equivalent to νj or to −νj, then i, j are in the same subset of the partition. This is well-defined, since if νi is equivalent to νj, then −νi is equivalent to −νj. Set Wk := ⊕ Vi, the sum ranging over the k-th subset of the partition.
Definition 5.1.
We call the decomposition V = W₁ ⊕ ⋯ ⊕ W_m the partitioned SRPD of V.
Furthermore, associate with each Wj one solution ηj of Cauchy’s equation such that ηj is equivalent to the νi occurring in Wj. If ηj is linear we always take ηj = 0. Notice that for i ≠ j the solutions ηi and ηj are non-equivalent, hence there can be at most one Wi with ηi = 0, and furthermore this is the only Wi which can have odd dimension, since it contains all Vj of type 1 (recall Definition 3.4).
Theorem 5.2.
Let g be a semigroup of invertible linear maps acting on V, and denote by V = W₁ ⊕ ⋯ ⊕ W_m its partitioned SRPD. Denote by ηi the solution associated with Wi. If ηi is non-continuous, then there exists a change of basis Ai on Wi such that
where , and
for all . If , then
In both cases is a commuting family of matrices on Wi satisfying Cauchy’s functional equation. Furthermore, if v is a common eigenvector of such that , then ν is linear in x.
Proof.
Assume first that ηi, the associated solution of Cauchy’s functional equation, is non-continuous. For each Wi we have the decomposition Wi = ⊕j Vj, where the Vj are subspaces from the SRPD. Since ηi is non-continuous, each Vj in the direct sum has to be of second type, hence by (4.3) on each Vj we have
By definition of ηi we have and hence there exists such that . Let
with
where Cj is of dimension . Then, we have
Set . Then
satisfy
By construction the common eigenvalues of satisfy . If , then by Theorem 4.3 there exists such that . By the definition of Wi, it follows that the imaginary parts of the eigenvalues of have to be equivalent to , hence they have to be linear.
Theorem 5.3.
Let g be a non-degenerate semigroup acting on V which satisfies Assumption 2.5. Then, there exists a matrix M and a semigroup (S(x))_{x ≥ 0} with values in SO(d), where SO(d) is the set of special orthogonal matrices, such that

g(x) = e^{xM} S(x), x ≥ 0,

with M commuting with S(x).
Proof.
Consider the SRPD from Theorem 3.2, and define . Clearly, . For each eigenvalue on Vi, we have
Since , we have . Moreover, , and so for and . It follows that µi is locally bounded and hence that µi(x) = ci x for some ci ∈ ℝ. As in the proof of Theorem 4.3, on Vi we have g(x) = Di(x)Ti(x), where Di(x) is diagonalizable with eigenvalues λi(x), and thus
As in Theorem 4.3, set
Then, since , we obtain
where F(x) is again locally bounded. Since the operator norm in finite dimensions is equivalent to the maximum norm, it follows that each entry of N(x) is locally bounded and satisfies Cauchy’s functional equation. Thus, there exists a nilpotent linear map Pi such that N(x) = xPi. Let V = W₁ ⊕ ⋯ ⊕ W_m be the partitioned SRPD of V. Assume first that the solution ηi associated with Wi is linear, so that ηi = 0. The real part of the eigenvalues is linear in x, and by Theorem 5.2 also the imaginary parts have to be linear since ηi = 0. Since the diagonalizable part is diagonalizable and all its eigenvalues are continuous in x, it is continuous in x; setting Si(x) := id then yields the desired representation on Wi. By Theorem 5.2 and (5.1), we have for Wi with ηi non-continuous
with
Hence, setting , we have . Thus
Next we show that is an isometry on Wi. Since , we have
for any . Hence, for we have
Fix an arbitrary , and set . The graph of ηi is dense in the plane (see Appendix A), and so there is a sequence of positive reals xn with xn → 0 such that . Then, for we obtain
where since f is right-continuous at 0. Choose such that and . Then, since , we obtain
since . Hence, we obtain
which implies that is an isometry on Wi. Next we show that all Wi are pairwise orthogonal. Let and for i ≠ j. By Lemma A.3, w.l.o.g. there is a sequence such that , and . Hence and with . We have the identity . Applying this to we obtain
Hence we have . Set . Then , and applying (5.3) to and taking the limit along yields
where the last equality follows from the fact that each is an isometry on Wi and Wj. Since the inequality above also holds for , we obtain the equality
Since were arbitrary, this shows orthogonality. Hence, the decomposition is orthogonal, and since is an isometry on Wi, it follows that is in SO(d). Setting , we obtain
Corollary 5.4.
Assume that g(x) = e^{xA}S(x) is a semigroup, with A an invertible matrix such that S(x) is an isometry for each x ≥ 0. Then, there exists an orthogonal matrix U such that
Proof.
One can easily verify the identities and
Choose x > 0 such that . Such an x clearly exists since ν is linear on . Choose any of unit length and set
The definition of u is invariant under the choice of x as long as . This can be seen by noticing that
Hence we obtain . We have
where the last equality follows from
Similarly, we can show that u is also of unit length. Clearly, H is invariant under the semigroup, and hence so is its orthogonal complement, since each S(x) is an isometry. In this manner, we can construct an orthonormal basis, and we denote by U the matrix associated with this change of basis. Then, we have
Lemma 5.5.
Suppose that the semigroup g, acting on V, satisfies Assumption 2.5. Let be the decomposition of Corollary 3.8 such that is invertible. Then, for any we have
Proof.
By Theorem 5.3, we have . Since , we obtain
which is continuous in x. Hence
Corollary 5.6.
Suppose that the semigroup g, acting on V, satisfies Assumption 2.5. Then, the decomposition from Corollary 3.8 is orthogonal.
Proof.
Let V = V₁ ⊕ V₂ with g(x) non-degenerate on V₁ and g(x) = 0 on V₂ for x > 0. Assume V₁ is not orthogonal to V₂. Then, there exists v ∈ V₁ such that Pv ≠ 0, where P denotes the orthogonal projection onto V₂. By Lemma 5.5, we have
Calculating the same limit for we obtain
Since and , we have . Hence
but this contradicts the limit computed above. Hence V₁ ⊥ V₂.
Proof of Theorem 2.6.
By Corollary 5.6, we have the orthogonal decomposition V = V₁ ⊕ V₂ with g(x) = 0 on V₂ for x > 0 and g(x) non-degenerate on V₁. Applying Theorem 5.3 to the restriction of g to V₁ yields the result.
Disclosure statement
No potential conflict of interest was reported by the author(s).
Supplemental data
Supplemental data for this article can be accessed online at https://doi.org/10.1080/27684830.2023.2289203.
References
- Aczél, J. (1966). Lectures on functional equations and their applications. Mathematics in Science and Engineering, Vol. 19. Academic Press.
- Bingham, N. H., Goldie, C. M., & Teugels, J. L. (1987). Regular variation. Encyclopedia of Mathematics and its Applications, Vol. 27. Cambridge University Press.
- Doob, J. L. (1953). Stochastic processes. John Wiley and Sons, Inc.
- Engel, K.-J., & Nagel, R. (2000). One-parameter semigroups for linear evolution equations. Graduate Texts in Mathematics, Vol. 194. Springer-Verlag. (With contributions by S. Brendle, M. Campiti, T. Hahn, G. Metafune, G. Nickel, D. Pallara, C. Perazzoli, A. Rhandi, S. Romanelli and R. Schnaubelt.)
- Goodman, R., & Wallach, N. R. (1998). Representations and invariants of the classical groups. Encyclopedia of Mathematics and its Applications, Vol. 68. Cambridge University Press.
- Humphreys, J. E. (1975). Linear algebraic groups. Graduate Texts in Mathematics, No. 21. Springer-Verlag.
- Jacobson, N. (1962). Lie algebras. Interscience Tracts in Pure and Applied Mathematics, No. 10. Interscience Publishers (a division of John Wiley and Sons, Inc.).
- Kuczma, M., & Zajtz, A. (1966). On the form of real solutions of the matrix functional equation Φ(x)Φ(y) = Φ(xy) for non-singular matrices Φ. Publicationes Mathematicae Debrecen, 13(1–4), 257–262. https://doi.org/10.5486/PMD.1966.13.1-4.31
- Lifshits, M. (2012). Lectures on Gaussian processes. Springer Briefs in Mathematics. Springer.
- Nourdin, I. (2012). Selected aspects of fractional Brownian motion. Bocconi & Springer Series, Vol. 4. Springer, Milan; Bocconi University Press.
- O’Meara, K. C., Clark, J., & Vinsonhaler, C. I. (2011). Advanced topics in linear algebra. Oxford University Press.
- Zajtz, A. (1971). On semigroups of linear operators. Zeszyty Nauk. Uniw. Jagielloń. Prace Mat., 181–184.
Appendix A.
Cauchy’s functional equation
It is classical that all continuous solutions of Equation (1.3), ν(x + y) = ν(x) + ν(y), are linear, and that the non-linear solutions are not continuous, not even Lebesgue measurable, and have dense graphs. For this, and further references, we refer to (Bingham et al., 1987, Section 1.1). In this section, we provide two auxiliary results on Cauchy’s equation. They concern lifting solutions from an interval to the real line, resp. the joint behavior of two solutions that differ by a non-linear function.
Lemma A.1.
Let f be a solution to Cauchy’s functional Equation (1.3) on [0, a] with a > 0. Then, there exists a solution ν of (1.3) on the real line such that ν = f on [0, a]. The solution ν is linear if and only if f is linear.
Proof.
Take a Hamel basis H of ℝ over ℚ such that r ∈ (0, a) for every r ∈ H. This is clearly possible by rescaling every basis element if necessary. For any x ∈ ℝ there exists a finite subset {r₁, …, r_k} ⊆ H and q₁, …, q_k ∈ ℚ such that x = q₁r₁ + ⋯ + q_k r_k. The function ν(x) := q₁f(r₁) + ⋯ + q_k f(r_k) satisfies (1.3). If f is linear, then clearly ν is linear as well. If f is not linear, then there exist two basis elements r₁ and r₂ such that f(r₁)/r₁ ≠ f(r₂)/r₂, and hence ν is also not linear.
Definition A.2.
We say that two solutions ν and η of Cauchy’s functional equation are equivalent if ν − η is linear.
Lemma A.3.
Let f, g be two non-equivalent solutions of Cauchy’s functional equation. Then there exists a sequence (xn) in (0, ∞), converging to 0, such that either f(xn) → 0 and g(xn) → c for some c ≠ 0, or vice versa.
Proof.
Choose y₁, y₂ > 0 such that the two vectors v₁ = (y₁, (f − g)(y₁)) and v₂ = (y₂, (f − g)(y₂)) are linearly independent. This is possible, since

(y, (f − g)(y)) ∈ span{v₁}
for all would imply that f − g is linear. Assume w.l.o.g. that . Since f and g are both linear on and and v1 and v2 are linearly independent, there exist sequences and such that for every n, and . We show that has the desired property. Clearly and
Hence