Abstract
We discuss the spectral decomposition of the hypergeometric differential operators on the line; such operators arise in the problem of decomposing tensor products of unitary representations of the universal covering of the group . Our main purpose is a search for natural bases in generalized eigenspaces and for variants of the inversion formula.
1. Introduction
1.1. The hypergeometric operator on the line
It is well known that the classical hypergeometric systems of orthogonal polynomials are eigenfunctions of certain differential or difference operators (see, e.g. [1], [2, Sect. 10.8–10.13, 10.21–10.22], [3, Sect. 6.10, Ex. 6.29–6.37]). On the other hand, many classical integral transforms, such as the Hankel transform, the Kontorovich–Lebedev transform, the Wimp transform, the Jacobi transform (synonyms: the Olevskiï transform, the generalized Mehler–Fock transform), etc., can be obtained as spectral decompositions of certain differential or difference operators with continuous spectra (see collections of examples with differential operators in [4, Ch. 4], [5, Sect. XIII.8]).
We consider the following differential operator: (1.1) in . The parameters α, β are real. Clearly, replacing  by  does not change the operator, so we can assume .
As an algebraic expression,  is a hypergeometric differential operator; spectral expansions of similar operators produce the Jacobi polynomials (see, e.g. [1, Sect. 9.8]) and the 'Jacobi integral transform' (see [4,6], [7, Sect. 4.16], [5, Sect. XIII.8, Theorem on p. 1526], [8–10]). One more counterpart of  was considered in [11].
The operator  has continuous spectrum on the half-line  with multiplicity two and a finite number of discrete points in the domain . The explicit spectral decomposition of  was obtained in [12]. Since the spectrum has multiplicity, there arises a question about possible choices of natural bases in the spaces of solutions of the equation  for .
1.2. Notation
Denote . By  we denote generalized hypergeometric functions; here  is the Pochhammer symbol. By  we denote bilateral hypergeometric series, see, e.g. [13, Ch. 6]. If , then  vanishes and , so we get a hypergeometric function . We prefer another normalization of the bilateral series: . All summands of the latter sum are well defined (the singularities are removable). The series converges absolutely on the circle  if . If , then the series converges conditionally for , . The sum admits a continuation (see Note 1) to a function real analytic in  and meromorphic in ,…, , .
The Dougall formula (see [14, (1.4.1)] or [13, (6.1.2.1)]) gives (1.2)
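As a sanity check, the Dougall summation can be verified numerically. The sketch below assumes the standard normalization of the bilateral series, ₂H₂(a, b; c, d; 1) = Σ_{n∈ℤ} (a)_n (b)_n / ((c)_n (d)_n) with (x)_n = Γ(x+n)/Γ(x) (an assumption on our part, since the display (1.2) itself is in the text), and compares a truncated sum with the closed Γ-expression of [13, (6.1.2.1)].

```python
from mpmath import mp, rf, gamma

mp.dps = 30  # working precision

def dougall_lhs(a, b, c, d, N=1000):
    # Truncated bilateral sum over n = -N..N; mpmath's rf(x, n) is the
    # Pochhammer symbol Gamma(x + n)/Gamma(x), valid for negative n too.
    return sum(rf(a, n) * rf(b, n) / (rf(c, n) * rf(d, n))
               for n in range(-N, N + 1))

def dougall_rhs(a, b, c, d):
    # Dougall's closed form; requires Re(c + d - a - b) > 1 for convergence.
    return (gamma(1 - a) * gamma(1 - b) * gamma(c) * gamma(d)
            * gamma(c + d - a - b - 1)
            / (gamma(c - a) * gamma(c - b) * gamma(d - a) * gamma(d - b)))

# generic non-integer parameters with c + d - a - b = 2.5 > 1
a, b, c, d = map(mp.mpf, ('0.3', '0.4', '1.5', '1.7'))
lhs, rhs = dougall_lhs(a, b, c, d), dougall_rhs(a, b, c, d)
```

The terms decay like |n|^(a+b−c−d), so the truncation error here is of order N^(−3/2), and agreement to several digits is already visible at N = 1000.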
1.3. The integral transform and the inversion formula
For any , we define a function  on  by (1.3), where the branches of the power functions in (1.3) are defined by the condition . We have , so for each σ we have a family of functions depending on a complex parameter t in a two-dimensional space of solutions. For  we have ; in this case the  are generalized eigenfunctions of  (see [15, Sect. 2.2]).
For we define a function depending on , . The -lim is a limit of the family of functions in the sense of .
If  is compactly supported, then  is well defined for all σ, , so we get a function on , holomorphic (see Note 2) in , .
We started with a function of one variable x and obtained a function of two variables , . These data are redundant: to reconstruct f it suffices to know the values of  for two values of t for each σ.
Theorem 1.1
Let . Consider two measurable maps , defined for , where . Assume that a.s. Then
(a) For  we have (1.4), where the matrix spectral density R is (1.5). We understand the integral in (1.4) as a -limit as  of the integrals  (see Note 3).
(b) For ,  we have the Plancherel formula (1.6).
The theorem is proved in Section 2.
Remark
It is more or less obvious that the matrix R admits an explicit expression in terms of Γ-functions. But the multiplicative structure of its matrix elements is the result of a long calculation and comes as a happy ending; see, e.g. transformation (2.25) below.
Remark
Let . In this case the  are hypergeometric functions up to simple functional factors; for this case our statement is formulated separately in Proposition 2.2.
1.4. The case and Romanovski polynomials
This subsection contains nothing new compared with [12]; however, it is important for understanding our topic. For  the operator  also has a finite family of -eigenfunctions , where k ranges over the integers satisfying the condition (1.7) (so for  such functions are absent). The functions  are the Romanovski polynomials [16] (we use a nonstandard normalization); they are orthogonal on the line with respect to the weight . This weight decays at infinity as a power; for this reason we have only a finite family of orthogonal polynomials. The -norms are given by (see [17]) (1.8). For  there arise additional terms in the right-hand side of the inversion formula (1.4): , where the summation is taken over k satisfying (1.7).
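The mechanism behind the finiteness — a weight with only power decay has only finitely many finite moments — can be illustrated numerically. The weight used below, w(x) = (1+x²)^(−p) e^(q·arctan x), is the classical Romanovski weight in one common normalization; it is our assumption for illustration only, since the weight and parametrization of the present paper are given in the displayed formulas above. For p = 5 only polynomials of degree < p − 1/2 have finite norms, and Gram–Schmidt on monomials produces an orthogonal family of exactly that size.

```python
import numpy as np
from scipy.integrate import quad

p, q = 5.0, 1.0  # illustrative parameters; w decays like |x|^(-2p)

def w(x):
    return (1.0 + x * x) ** (-p) * np.exp(q * np.arctan(x))

def inner(f, g):
    # L^2 inner product with weight w over the whole line
    val, _ = quad(lambda x: f(x) * g(x) * w(x), -np.inf, np.inf)
    return val

# Gram-Schmidt on monomials 1, x, ..., x^4: all degrees < p - 1/2 = 4.5,
# so every integral below converges (x^8 * w ~ |x|^(-2) at infinity).
polys = []
for n in range(5):
    coeffs = np.zeros(n + 1)
    coeffs[n] = 1.0
    f = np.polynomial.Polynomial(coeffs)  # monic x^n
    for g in polys:
        f = f - (inner(f, g) / inner(g, g)) * g
    polys.append(f)

gram = np.array([[inner(f, g) for g in polys] for f in polys])
off_diag = gram - np.diag(np.diag(gram))
```

For degree 5 the norm integrand already behaves like x^10 · |x|^(−10) = O(1) at infinity and diverges; that is precisely the finite-family phenomenon described above.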
1.5. Difference operators
Next, we find the image of the operator of multiplication by x under the transformation .
Theorem 1.2
Let  be a compactly supported integrable function on . Let the operator  send  to . Then  sends  to the function (1.9).
Remark
Recall that in the inversion formula and in the Plancherel formula the integrations are taken over the imaginary axis . So  is a difference operator in the direction transversal to the contour of integration. Similar facts hold for other classical index integral transforms, such as the Jacobi transform (see [18, Th. 2.1]), the Kontorovich–Lebedev transform (see [19, Th. 3.2, Prop. 3.3]) and the Wimp transform (see [19, Th. 4.2]). Moreover, Cherednik showed that the multi-dimensional Harish-Chandra transform sends a certain algebra of multiplication operators to an algebra of difference operators (see [20,21]).
1.6. The further structure of the paper
The proof of Theorem 1.1 is contained in Section 2; that section also contains two other variants of the inversion formula, see Proposition 2.2 and Section 2.5. Theorem 1.2 is proved in Section 3. The final Section 4 contains evaluations of the transform  for some functions.
1.7. Purposes of this work
The operator  appears in a natural way in the problem of decomposing tensor products of unitary representations of the group  and its universal covering group, see [12]. Tensor products of unitary representations of  were the topic of many papers, in particular [22–25]. However, the appearance of a multiplicity makes the topic inflexible for further development; the same obstacles arise in more serious forms for numerous spectral problems of non-commutative harmonic analysis with multiplicities. An informal purpose of the present paper is the search for an approach to such problems. In particular, this gives at least additional hope for harmonic analysis related to  and other rank-one classical groups. On the other hand, there arises a question about bilateral analogs of some other hypergeometric integral transforms.
2. Proof of the inversion formula
2.1. A reduction of to a Schrödinger operator
We consider a unitary operator  given by the formula (2.1). It sends the operator  to the operator  (cf. [4, Sect. 4.16]). We get a Schrödinger operator with a rapidly decaying potential , and we can apply general statements about such operators, see, e.g. [15, Sect. II.6]. The operator  defined on the space of smooth compactly supported functions is essentially self-adjoint in , see [15, Th. II.1.1] (therefore  is also essentially self-adjoint). The space  splits into a direct sum of two -invariant subspaces  corresponding to the discrete and continuous spectrum. The subspace  is finite-dimensional; its eigenfunctions are -solutions of the equation , s>0, and they have asymptotics of the form . Next, let . Consider the two-dimensional space  consisting of solutions of the equation ; they have asymptotics of the form , so these functions are not in . We define an inner product in  by (2.2). Next, we define two special canonical solutions of ; they have asymptotics of the form (2.3) and (2.4). It can be shown that the scattering matrix  is unitary and symmetric (see [15, Sect. II.6], [26, Sect. 36]). For this reason, ,  form an orthogonal basis in  with respect to the inner product (2.2).
Next, consider two operators, given by  and . Then , . The operator I is a unitary operator and J is the inverse operator . See [15, Th. 6.2].
We will use the statement in the following form. Let us choose (in a measurable way) a basis ,  in each . Consider the corresponding Gram matrix (2.5). Denote (2.6). Consider the space  equipped with the inner product (2.7); denote by  the completion of  with respect to this inner product. Then the operator  is a unitary operator from  to .
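The spectral picture used in this subsection — finitely many L²-eigenfunctions below 0 plus continuous spectrum on the half-line — can be observed numerically for any rapidly decaying potential. The sketch below uses the Pöschl–Teller well V(x) = −2/cosh²x as an illustrative substitute (it is not the potential arising from our operator); this well is known to have exactly one bound state, with eigenvalue −1.

```python
import numpy as np

# Finite-difference discretization of -d^2/dx^2 + V(x) on [-L, L]
# with Dirichlet boundary conditions.
L, n = 16.0, 1601
x = np.linspace(-L, L, n)
h = x[1] - x[0]
V = -2.0 / np.cosh(x) ** 2  # Poschl-Teller well: one bound state at E = -1

H = (np.diag(2.0 / h**2 + V)
     + np.diag(-np.ones(n - 1) / h**2, 1)
     + np.diag(-np.ones(n - 1) / h**2, -1))
evals = np.linalg.eigvalsh(H)

# discrete spectrum: eigenvalues clearly below the continuous threshold 0
bound = evals[evals < -1e-3]
```

The eigenvalues above 0 fill the half-line densely as L grows: they discretize the continuous spectrum, while the single negative eigenvalue converges to the true bound state.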
2.2. A reduction of to a hypergeometric differential operator
We set  and pass to the differential operator . Next, we pass to a complex variable  and come to a new operator . The equation for eigenfunctions  becomes a special case of the hypergeometric differential equation  with (2.8). We write two Kummer solutions [14, (2.9.3), (2.9.20)] of the hypergeometric equation . Substituting (2.8) and multiplying by , we get the following two solutions of the equation : (2.9), (2.10). These solutions are obtained from one another by the substitution ; this substitution does not change the operator .
Remark
At this point we must assume . Otherwise  and  coincide, and we come to the logarithmic case of the hypergeometric differential equation, see [14, Sect. 2.3], [3, Sect. 2.3].
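Both Kummer solutions can be checked directly against the hypergeometric equation. The sketch below, with generic sample parameters (not the special values (2.8), and with the third parameter non-integer to avoid the logarithmic case), verifies numerically that u₁ = ₂F₁(a,b;c;z) and u₂ = z^(1−c) ₂F₁(a−c+1, b−c+1; 2−c; z) are annihilated by the operator z(1−z) d²/dz² + (c−(a+b+1)z) d/dz − ab.

```python
from mpmath import mp, hyp2f1, diff, mpf

mp.dps = 30
a, b, c = mpf('0.3'), mpf('0.7'), mpf('1.4')  # generic, c not an integer

def residual(u, z):
    # hypergeometric differential operator applied to u at the point z
    return (z * (1 - z) * diff(u, z, 2)
            + (c - (a + b + 1) * z) * diff(u, z)
            - a * b * u(z))

u1 = lambda z: hyp2f1(a, b, c, z)
u2 = lambda z: z ** (1 - c) * hyp2f1(a - c + 1, b - c + 1, 2 - c, z)

z0 = mpf('0.37')
r1, r2 = residual(u1, z0), residual(u2, z0)
```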
We need the asymptotics of these functions as . In this case the argument of the hypergeometric function tends to 1, and we apply the formulas [14, (2.10.1), (2.10.5)]. We have . Denote (2.11), (2.12). Then (2.13), (2.14). For  we have a similar expression with  replaced by . The formulas hold for any . For  both summands of the asymptotics have the same order (and are almost -functions); for  one summand dominates the other.
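The asymptotics as the argument tends to 1 rest on the standard connection formula, which we recall here in the form found in common references (our transcription, to be compared with [14, (2.10.1)]): for c−a−b not an integer, ₂F₁(a,b;c;z) = A·₂F₁(a,b;a+b−c+1;1−z) + B·(1−z)^(c−a−b)·₂F₁(c−a,c−b;c−a−b+1;1−z), with A = Γ(c)Γ(c−a−b)/(Γ(c−a)Γ(c−b)) and B = Γ(c)Γ(a+b−c)/(Γ(a)Γ(b)). A quick numerical check:

```python
from mpmath import mp, hyp2f1, gamma, mpf

mp.dps = 30
a, b, c, z = mpf('0.3'), mpf('0.5'), mpf('1.2'), mpf('0.9')  # c-a-b = 0.4

# connection coefficients of the expansion around z = 1
A = gamma(c) * gamma(c - a - b) / (gamma(c - a) * gamma(c - b))
B = gamma(c) * gamma(a + b - c) / (gamma(a) * gamma(b))

lhs = hyp2f1(a, b, c, z)
rhs = (A * hyp2f1(a, b, a + b - c + 1, 1 - z)
       + B * (1 - z) ** (c - a - b)
           * hyp2f1(c - a, c - b, c - a - b + 1, 1 - z))
```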
To adapt the general reasoning of Section 2.1 we must apply the unitary operator (2.1); then ,  transform as (2.15)–(2.16).
2.3. The Gram matrix for the hypergeometric eigenfunctions
Let . Our next purpose is to evaluate the matrices Δ and  (see (2.5)) for the eigenfunctions ,  given by (2.9), (2.10).
Lemma 2.1
(a) The matrix elements of the Gram matrix Δ for the eigenfunctions ,  are
(b) The determinant of Δ is
Proof.
(a) We present the calculation of : (2.17). Let us evaluate the first summand. We have  and . We see that the first summand in (2.17) is even in σ. Therefore it equals the second summand, and we come to the final expression.
Evaluations of other matrix elements are similar.
(b) Evaluating  we meet the following subexpressions: . Applying these transformations, we get the following expression for : . Simplifying the expression in the curly brackets, we get , and we come to the final expression.
Now we write out the matrix  in a straightforward way and get the following statement, see [12]:
Proposition 2.2
Let . Then for the eigenfunctions ,  given by (2.9)–(2.10) the spectral matrix Ξ in (2.7) is given by (see Note 4)
2.4. Bilateral hypergeometric functions
It is easy to see that the functions  satisfy the differential equation  (cf. [13, (2.1.2.1)]). The functions  differ from  by constant factors. Moreover, for any  the function  satisfies the same differential equation (we assume that ). Therefore any three functions , ,  are linearly dependent, i.e. (2.18) for some , , . In fact (see [27]), (2.19)
Remark
These coefficients , ,  of the linear dependence can be derived in the following way. The Dougall formula (1.2) provides an explicit value of any  at z = 1. We substitute  and  and get two equations for the coefficients.
Setting  in (2.19), we get an expression of an arbitrary function  in terms of Gauss hypergeometric functions.
In particular, we get an expression for the functions  defined in Section 1.3. Namely, (2.20), where (2.21), (2.22).
Lemma 2.3
(2.23) where
Proof.
Let Δ be the Gram matrix of the eigenfunctions , , see Lemma 2.1. Then , where . Let us verify the identity for the first matrix element: . The product of the three Γ-factors is , and we come to the desired expression.
Now we are ready to evaluate (2.24). The expression in the curly brackets is (2.25); this implies the statement of the lemma. The last identity is not obvious, but once written, it admits a straightforward verification.
Proof of Theorem 1.1
Thus, the Gram matrix of  and  is . The inverse matrix is , and we come to formula (1.6).
2.5. A generalized orthogonal system
For completeness we present formulas for the eigenfunctions , , see (2.3)–(2.4). Set , where . Then ,  have the following asymptotics at infinity (see (2.15)–(2.16)):  and , where the elements of the scattering matrix are given by  and . For such functions ,  the matrix Ξ in (2.7) is , but we pay for this with longer and less flexible expressions for the eigenfunctions.
2.6. The case ,
For this case the calculations of this section are not valid, but we can easily apply continuity arguments. Our final formula (1.5) follows from (2.23). To extend the latter formula to our case, it is sufficient to show that the coefficients of the leading terms of the asymptotics of  at infinity are continuous at the point  for , where . These coefficients can easily be written explicitly using formulas (2.20), (2.13)–(2.14). For instance, the coefficient in front of  as  is (2.26), where , ,  are defined by formulas (2.11)–(2.12), (2.21). Substitute  into the bracket . It is easy to see that similar identities also hold for , . Therefore for  the expression  is zero. So the singularity on the surface  in (2.26) is removable and the whole expression is continuous.
3. The difference operator
The topic of this section is the proof of Theorem 1.2. In fact, we must show that the kernel  satisfies the equation (3.1) (the variable t does not appear in the coefficients).
We use the expression (2.20) for Φ; it is sufficient to show that the two terms ,  satisfy the same difference equation. We write  as , see [14, (2.9.1), (2.9.3)]. The expression for  is obtained by replacing
By [18, (2.3)], the Gauss hypergeometric function satisfies the following contiguous relation: . Therefore  satisfies the difference equation (3.2). The expression in the curly brackets can be transformed as . If  satisfies the difference equation (3.2), then  satisfies equation (3.1). So  satisfies (3.1). Since expression (3.1) is invariant with respect to the transformation , the summand  also satisfies the difference equation.
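The specific relation (2.3) of [18] is displayed above; as an illustration of the mechanism — a three-term relation shifting a parameter rather than the argument — one can take the classical contiguous relation (c−a)·F(a−1,b;c;z) + (2a−c+(b−a)z)·F(a,b;c;z) + a(z−1)·F(a+1,b;c;z) = 0 (not necessarily the relation used in the proof) and check it numerically:

```python
from mpmath import mp, hyp2f1, mpf

mp.dps = 25
a, b, c, z = mpf('0.7'), mpf('1.3'), mpf('2.1'), mpf('0.4')

# contiguous relation shifting the first parameter of 2F1;
# the combination below should vanish identically
lhs = ((c - a) * hyp2f1(a - 1, b, c, z)
       + (2 * a - c + (b - a) * z) * hyp2f1(a, b, c, z)
       + a * (z - 1) * hyp2f1(a + 1, b, c, z))
```

A relation of this type, applied in the parameter that plays the role of the spectral variable, is exactly what produces a difference equation such as (3.2).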
4. Some evaluations
There are many explicit evaluations for the Jacobi transform, which allows one to use it as a tool for obtaining non-trivial properties of special functions, see [9,28,29] (see also [30] for the complex analog of the Jacobi transform). It is interesting to find a collection of evaluations of  for some functions f. This section contains a few examples.
The transform  sends the function  to the function (4.1). To verify this, we must evaluate the integral
We expand  into a series. Integrating term-wise with the formula , we come to (4.1).
Remark
The functions (4.1) are bilateral versions of the Hahn functions, which were considered in [24].
Next, for the cases  and  expression (4.1) can be simplified. For instance, the transformation  sends a function  to . To establish this statement, we substitute  into (4.1). Then we get a function of the form  and apply the Dougall formula (1.2).
In a similar way, we set  and observe that our transform sends  to . The two remaining cases are similar.
Disclosure statement
No potential conflict of interest was reported by the author(s).
Notes
1 In any case, the coefficients of the series have polynomial growth; therefore the series always converges in the sense of distributions on the circle .
2 We have  as , see (2.20), (2.13)–(2.14). The logarithm arises since formulas (2.13)–(2.14) are valid only if . For integer σ we come to logarithmic solutions of the hypergeometric differential equation, see [14, Subsect. 2.3.1].
3 See the general statement about self-adjoint differential operators in [5, Th. XIII.5.1, Cor. XIII.5.2].
4 We write each matrix element as a two-line formula; this allows us to obtain a readable expression.
References
- Koekoek R, Swarttouw RF. The Askey-scheme of hypergeometric orthogonal polynomials and its q-analogue. Delft University of Technology, Faculty of Information Technology and Systems, Department of Technical Mathematics and Informatics; 1998. (Report no. 98-17). Preprint. Available from: https://arXiv.org/pdf/math/9602214.pdf.
- Erdélyi A, Magnus W, Oberhettinger F, et al. Higher transcendental functions. Vol. II, Based, in part, on notes left by Harry Bateman. New York: McGraw-Hill; 1953.
- Andrews GE, Askey R, Roy R. Special functions. Cambridge: Cambridge University Press; 1999.
- Titchmarsh EC. Eigenfunction expansions with second-order differential operators. Oxford: Clarendon Press; 1946.
- Dunford N, Schwartz JT. Linear operators. Part II: spectral theory. Self adjoint operators in Hilbert space. New York: John Wiley & Sons; 1963.
- Weyl H. Über gewöhnliche lineare Differentialgleichungen mit singulären Stellen und ihre Eigenfunktionen (2. Note). Nachr Königl Ges Wiss Göttingen Math-Phys Kl. 1910;1910:442–467. Reprinted in: Weyl H. Gesammelte Abhandlungen. Vol. 1. Berlin: Springer-Verlag; 1968. p. 222–247.
- Olevskiï MN. On the representation of an arbitrary function in the form of an integral with a kernel containing a hypergeometric function (Russian). Dokl AN SSSR. 1949;69:11–14.
- Koornwinder TH. Jacobi functions and analysis on noncompact symmetric spaces. In: Askey RA, Koornwinder TH, Schempp W, editors. Special functions: group theoretical aspects and applications. Dordrecht: Reidel; 1984. p. 1–85.
- Koornwinder TH. Special orthogonal polynomial systems mapping to each other by the Fourier-Jacobi transform. Lecture Notes Math. 1985;1171:174–183.
- Yakubovich SB. Index transforms. River Edge (NJ): World Scientific; 1996.
- Molchanov VF, Neretin YuA. A pair of commuting hypergeometric operators on the complex plane and bispectrality. J Spectr Theory. 2021;11:509–586.
- Neretin YuA. Some continuous analogs of the expansion in Jacobi polynomials and vector-valued orthogonal bases. Funct Anal Appl. 2005;39(2):106–119.
- Slater LJ. Generalized hypergeometric functions. Cambridge: Cambridge University Press; 1966.
- Erdélyi A, Magnus W, Oberhettinger F, et al. Higher transcendental functions. Vol. I, Based, in part, on notes left by Harry Bateman. New York: McGraw-Hill; 1953.
- Berezin FA, Shubin MA. The Schrödinger equation. Dordrecht: Kluwer; 1991.
- Romanovski V. Sur quelques classes nouvelles de polynômes orthogonaux. C R Acad Sci Paris. 1929;188:1023–1025.
- Askey R. An integral of Ramanujan and orthogonal polynomials. J Indian Math Soc. 1987;51:27–36.
- Neretin YuA. Index hypergeometric transform and imitation of analysis of Berezin kernels on hyperbolic spaces. Sb Math. 2001;192:403–432.
- Neretin YuA. Difference Sturm-Liouville problems in the imaginary direction. J Spectr Theory. 2013;3:237–269.
- Cherednik I. Inverse Harish-Chandra transform and difference operators. Internat Math Res Notices. 1997;1997:733–750.
- van Diejen JF, Emsiz E. Difference equation for the Heckman–Opdam hypergeometric function and its confluent Whittaker limit. Adv Math. 2015;285:1225–1240.
- Pukanszky L. On the Kronecker products of irreducible unitary representations of the 2×2 real unimodular group. Trans Amer Math Soc. 1961;100:116–152.
- Molchanov VF. Tensor products of unitary representations of the three-dimensional Lorentz group. Math USSR-Izv. 1980;15(1):113–143.
- Groenevelt W, Koelink E, Rosengren H. Continuous Hahn functions as Clebsch-Gordan coefficients. In: Ismail M, Koelink E, editors. Theory and applications of special functions. Papers from the Special Session of the American Mathematical Society Annual Meeting held in Baltimore, MD; January 15–18, 2003. New York: Springer; 2005. p. 221–284.
- Groenevelt W. Wilson function transforms related to Racah coefficients. Acta Appl Math. 2006;91(2):133–191.
- Faddeev LD, Yakubovskiï OA. Lectures on quantum mechanics for mathematics students. Providence (RI): Amer Math Soc; 2009.
- Agarwal RP. General transformations of bilateral cognate trigonometrical series of ordinary hypergeometric type. Canad J Math. 1953;5:544–553.
- Neretin YuA. Beta-integrals and finite orthogonal systems of Wilson polynomials. Sb Math. 2002;193:1071–1089.
- Neretin YuA. Index hypergeometric integral transform. Addendum to Russian translation of Andrews GE, Askey R, Roy R. Special functions. Moscow: MCCME; 2013: p. 607–624; English version: Preprint. arXiv:1208.3342.
- Neretin YuA. An analog of the Dougall formula and of the de Branges-Wilson integral. Ramanujan J. 2021;54:93–106.