Original Articles

Complex Methods for Bounds on the Number of Periodic Solutions with an Application to a Neural Model

Pages 133-150 | Received 20 Jun 2019, Accepted 23 Aug 2021, Published online: 31 Jan 2022

Abstract

The qualitative behavior of the differential equation x′ = f(t, x), where x is real and f is periodic in t, is determined by the attracting periodic solutions. This article gives an exposition of a method of Ilyashenko that uses complex analysis to determine an upper bound on the number of periodic solutions. The method is then applied to a well-known model of a single neuron or pool of neurons.

1 Introduction

The value of many real-valued quantities at time t can often be modeled by a differential equation of the form x′ = f(t, x). A simple example is logistic population growth with periodic harvesting [Citation3]. If f(t, x) depends only on x, the equation is called autonomous. If the time dependence has period T, then interest focuses on the periodic solutions with period T, because these solutions include the attractors that exhibit the long-term behavior of the system. For small periodic perturbations of an autonomous equation, general conclusions can be drawn about the number of periodic solutions. For one such method, see [Citation3] and [Citation11]. However, for large perturbations, until recently there was no general method for placing upper bounds on that number.

Yulij Ilyashenko has found such upper bounds by using one-variable complex analysis, showing once again the power of complex analysis in real variable problems. In [Citation15, Citation17], his approach is developed and then applied to special cases of Hilbert’s still partly unsolved sixteenth problem. We hope to encourage wider knowledge and broader application of this lovely work.

This article begins with an exposition of Ilyashenko’s method, continues with a description of a well-known neural model [25, pp. 78–80], and then applies Ilyashenko’s method to the neural model. The elements of Ilyashenko’s method are summarized after Theorem 2 in Section 3. In [Citation15], Ilyashenko applied his method to functions f(t, x) that are polynomial in x. We show how it also can be applied to transcendental functions such as those used in the neural model. The article concludes with questions and directions for further research. Those primarily interested in the neural application can begin with Section 4 and refer as necessary to previous sections.

After an introduction to differential equations and to complex variables, students may wish to attempt some of the problems posed in this article. For example, the exercise suggested after Proposition 10 (Section 6) only uses complex variables. Some special cases of Question 2 (Section 7) involve patching together solutions of first-order linear ordinary differential equations. Furthermore, methods involving nullclines (Section 4) and computer-assisted numerical computation can be used in the search for periodic solutions.

2 The Poincaré Map, Jensen’s Formula, and the Poincaré Metric

This section discusses the Poincaré map, Jensen’s formula, and the Poincaré metric. The first is central to the study of periodically forced differential equations. The other two topics can be approached after an undergraduate course in complex analysis. They are used in Section 3 to develop Ilyashenko’s method. As is common, in describing functions on subsets of the complex numbers, the terms analytic, complex analytic, and holomorphic will be used synonymously.

2.1 The Poincaré map

Consider a periodically forced differential equation x′ = f(t, x), where for all t, f(t + T, x) = f(t, x). For convenience, we assume that the period T is 1. Therefore, a periodic solution is a solution x(t) of the differential equation such that for all t, x(t + 1) = x(t). The time-one map P(x), usually referred to as the Poincaré map, is defined as follows [3, pp. 209–211], [11, p. 116]. Suppose x(t) is a solution with initial condition x(0) = x0. Then P(x0) = x(1), supposing that x(1) is defined. We will refer to F(x) = P(x) − x as the displacement map. Observe that periodic solutions of the differential equation are in one-to-one correspondence with fixed points of the Poincaré map P(x) and thus with zeros of the displacement map F(x). For example, the initial value problem x′ = x(1 + cos(2πt)) with x(0) = x0 has the solution x(t) = x0 exp(t + (1/(2π)) sin(2πt)), with x(1) = x0 exp(1) = e·x0. The Poincaré map is P(x) = ex and the displacement map is F(x) = ex − x. The only periodic solution is x(t) = 0, which corresponds to the fixed point x = 0 of the Poincaré map.
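The Poincaré map of this example can also be computed numerically, which is useful when no closed-form solution is available. Below is a minimal sketch in plain Python (the function names and the RK4 step count are arbitrary choices, not from the article):

```python
import math

def f(t, x):
    # right-hand side of x' = x(1 + cos(2*pi*t))
    return x * (1 + math.cos(2 * math.pi * t))

def poincare_map(x0, steps=1000):
    """Integrate from t = 0 to t = 1 with classical RK4 and return x(1)."""
    h = 1.0 / steps
    t, x = 0.0, x0
    for _ in range(steps):
        k1 = f(t, x)
        k2 = f(t + h / 2, x + h * k1 / 2)
        k3 = f(t + h / 2, x + h * k2 / 2)
        k4 = f(t + h, x + h * k3)
        x += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        t += h
    return x

# P(x0) should agree with e * x0; the displacement map is F(x0) = P(x0) - x0.
print(poincare_map(1.0))   # approximately e = 2.71828...
```

The computed value agrees with the closed-form Poincaré map P(x) = ex to the accuracy of the integrator.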

Let V ⊂ W, where V is a finite real closed interval and W is an open neighborhood of V in the complex numbers C. Suppose the continuous function f : R × W → C is complex analytic for each fixed t and that f(t, z) restricts on R × V to a function f(t, x) determining a real-valued differential equation x′ = f(t, x). If the Poincaré map P(x) and thus the displacement map F(x) are defined for all x in V, it follows that the displacement map F(z) for the differential equation z′ = f(t, z) extends F(x) and that F(z) is defined and analytic on some ϵ-neighborhood U of V in C [7, p. 36], [22, p. 44] (see Figure 1). Ilyashenko applies to the displacement map F(z) his general theorem that gives an upper bound for the number of zeros on V of any function that is complex analytic in an ϵ-neighborhood U of V in C (Theorem 2).

Fig. 1 ϵ-neighborhood U of the closed interval V.


2.2 Jensen’s formula

Ilyashenko’s method is based on a novel application of Jensen’s formula, a result that was published in 1899 in Acta Mathematica as a letter from Jensen to Mittag-Leffler, the founder and editor of that distinguished journal [Citation19]. This formula relates the position of the zeros of a function within a disk to the logarithm of its absolute value on the boundary of the disk. Accounts of how the foundations of complex analysis were understood at the time are found in [Citation6, Citation10].

Johan Jensen was a Danish electrical engineer and amateur mathematician who never held an academic position. It is surprising and also gratifying that this outsider was able to conjecture and prove in a few lines an important result that had eluded some of the greatest mathematicians of the nineteenth century. We are informed in a biographical article on Jensen by Børge Jessen, himself a distinguished Danish mathematician, that Jensen was an admirer of Weierstrass [Citation20]. This is evident in Jensen’s own proof of his formula [Citation19], a proof based on power series methods. Such methods were preferred by Weierstrass and his followers. Jensen uses the term holomorphic, a term introduced by followers of Cauchy, but the term was sometimes also used by followers of Weierstrass [6, p. 718]. The result is stated and proved elegantly and correctly, as one would expect from Jessen’s remark that “his papers are patterns of exact and precise exposition” [Citation20]. There is a great charm to Jensen’s letter. He begins by addressing Monsieur le Professeur and then chattily notes his attendance at some lectures recently given by the addressee. There is a buoyant enthusiasm throughout, from the title that announces a new and important result, to the hopes expressed at the end that his result will enable progress to be made on the Riemann conjecture regarding the zeros of the zeta function. While these hopes were not realized, Jensen’s formula was used extensively in succeeding investigations of entire functions; that is, functions holomorphic on the whole complex plane [1, p. 208].

Jensen’s Formula ([1, p. 207], [28, p. 181])

Suppose f is holomorphic in the closed disk of radius ρ, f(0) is nonzero, and f has n zeros in the interior of the disk, counted with multiplicities, and these zeros occur at points z = ai (see Figure 2). Then

ln|f(0)| = ∑_{i=1}^{n} ln(|ai|/ρ) + (1/(2π)) ∫_0^{2π} ln|f(ρe^{iθ})| dθ.

Fig. 2 Jensen’s formula: n = 5 zeros of f(z). Zeros located at z=ai.


Proofs of Jensen’s formula rely on variations of the following procedure that cancels the zeros of f in the interior of the disk of radius ρ. Let

F(z) = f(z) ∏_{i=1}^{n} (ρ² − ai¯z)/(ρ(z − ai)).

Note that F(0) ≠ 0. On the circle |z| = ρ, let z = ρe^{iθ}, and observe that for each term (ρ² − ai¯z)/(ρ(z − ai)) the numerator and denominator have the same absolute value and so |(ρ² − ai¯z)/(ρ(z − ai))| = 1. Therefore |F(z)| = |f(z)| on the circle |z| = ρ. We then take a branch of the complex logarithm of F(z). Jensen works with a power series representation that he integrates term by term about the circle [Citation19], Saks and Zygmund directly apply the Cauchy integral theorem for a circle [Citation28], while Ahlfors uses the result that ln|F(z)| is a well-defined harmonic function on R² and thus satisfies the mean value property for such functions [1, p. 165], a property that he previously establishes using the Cauchy integral theorem.
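Jensen’s formula can be checked numerically for a polynomial whose zeros are known. The sketch below (plain Python; the radius, the zeros, and the quadrature resolution are arbitrary choices) compares ln|f(0)| with the sum over the zeros plus the boundary average:

```python
import cmath
import math

rho = 2.0
zeros = [0.5 + 0.5j, -0.3j, 1.2]   # zeros of f, all inside |z| < rho

def f(z):
    p = 1.0 + 0j
    for a in zeros:
        p *= z - a
    return p

# Boundary term: the average of ln|f| over the circle |z| = rho.
n = 20000
boundary_avg = sum(
    math.log(abs(f(rho * cmath.exp(2j * math.pi * j / n)))) for j in range(n)
) / n

lhs = math.log(abs(f(0)))
rhs = sum(math.log(abs(a) / rho) for a in zeros) + boundary_avg
print(lhs, rhs)   # the two sides of Jensen's formula agree
```

Because the integrand is smooth and periodic, the equally spaced Riemann sum converges very rapidly, and the two printed values agree to high accuracy.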

Ilyashenko relies on the following corollary to Jensen’s formula. As there are some subtle points, a proof of the corollary is given below.

Corollary to Jensen’s Formula

Suppose f is holomorphic in the open disk D1 of radius 1 and continuous in the closed disk D1¯ of radius 1, and f(0) is nonzero. Let M = max{|f(z)| : z ∈ D1¯}. Let E be a compact subset of the open disk D1 that is contained in a closed disk Dr¯ of radius r, with 0 < r < 1. Let N be the number of zeros, counted with multiplicity, of f restricted to E (see Figure 3). Then:

Fig. 3 Corollary to Jensen’s formula: n = 5 zeros of f(z) at points marked by *; N = 3 zeros in compact subset E of Dr¯.

  1. N ≤ (1/ln(1/r)) · ln(M/|f(0)|).

  2. If 0 ∈ E, m = max{|f(z)| : z ∈ E}, and m = |f(0)|, then N ≤ (1/ln(1/r)) · ln(M/m).

Proof.

(a) We can choose ρ arbitrarily close to 1 with r < ρ < 1; f is holomorphic on the closed disk of radius ρ. Suppose f has n zeros in the interior of the disk of radius ρ, counted with multiplicity, and these zeros occur at points z = ai. For all θ, ρe^{iθ} is contained in D1¯ and thus |f(ρe^{iθ})| ≤ M. By Jensen’s formula, ln|f(0)| ≤ ∑_{i=1}^{n} ln(|ai|/ρ) + ln(M), and therefore ∑_{i=1}^{n} ln(ρ/|ai|) ≤ ln(M/|f(0)|). For each zero ai of f that is contained in E ⊆ Dr¯, we have |ai| ≤ r and thus ln(ρ/r) ≤ ln(ρ/|ai|). So N · ln(ρ/r) ≤ ln(M/|f(0)|), for all ρ with r < ρ < 1. Letting ρ approach 1, we obtain the desired conclusion.

(b) Part (b) follows from part (a) by taking m = |f(0)|. □
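As a concrete illustration of part (a), consider f(z) = z² − 0.09, which has exactly N = 2 zeros (at ±0.3) in a set E contained in Dr¯ with r = 1/2, while |f(0)| = 0.09 and M = 1.09 on the closed unit disk. A quick computation (plain Python; the polynomial is an arbitrary choice) confirms that the bound is respected:

```python
import math

# f(z) = z^2 - 0.09: zeros at +/-0.3, both inside the closed disk of radius r = 0.5.
r = 0.5
N = 2        # zeros of f in E, counted with multiplicity
M = 1.09     # max of |z^2 - 0.09| on |z| = 1, attained at z = +/-i
f0 = 0.09    # |f(0)|

bound = math.log(M / f0) / math.log(1 / r)
print(N, bound)   # N = 2 is at most the bound, which is approximately 3.6
```

The bound is not sharp here, but it is finite and depends only on r, M, and |f(0)|, which is what Ilyashenko’s method exploits.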

2.3 Poincaré metric

The Poincaré metric is well-suited to studying complex analytic functions on the open unit disk D1. A detailed treatment can be found in [Citation21].

For the Poincaré metric, a differentiable curve C in the disk D1, given by z(t) = x(t) + iy(t) with a ≤ t ≤ b, has length determined by the integral

∫_a^b [2·√((x′(t))² + (y′(t))²)] / [1 − (x(t))² − (y(t))²] dt,

an integral that is often written invariantly as ∫_C 2 ds/(1 − |z|²). The distance d(z1, z2) between two points in D1 is defined by the infimum of the lengths of all curves connecting z1 and z2. The term Poincaré metric refers both to the distance d(z1, z2) and to the function Δ(z) = 2/(1 − |z|²). This function is sometimes called the metric density. Note that when defining the Poincaré metric some writers, such as those of [Citation21], divide by 2 the distance function and the metric density that are defined above.

The curve-lengths and distances are invariant under the fractional linear transformations T(z) = (az + b)/(cz + d) that map the disk D1 homeomorphically to itself. These fractional linear transformations are the isometries of the Poincaré metric.

By using the Riemann mapping theorem, the Poincaré metric can also be defined on any open set U properly contained in C provided that U is homeomorphic to D1. According to that justly celebrated theorem, the homeomorphism from U to D1 can be chosen to be a complex analytic function ϕ. The distance dU(w, z) between two points in U can then be defined as d(ϕ(w), ϕ(z)), using the previous definition of the Poincaré metric on D1 (see Figure 4). The metric on U is well-defined because any two such complex analytic homeomorphisms ϕ1 and ϕ2 from U to D1 differ by a fractional linear transformation T of D1 (that is, ϕ1 = T ∘ ϕ2), and, as mentioned above, such transformations T are isometries of the Poincaré metric on D1. Similarly, the Riemann map ϕ from U to D1 determines a positive real metric density ΔU(z) = |ϕ′(z)| · Δ(ϕ(z)) at each point z in U, and the length of any curve C in U is given by ∫_C ΔU(z) ds [Citation21]. If the curve is given parametrically by z(t) = x(t) + iy(t) with a ≤ t ≤ b, then the length of C is determined by the integral ∫_a^b ΔU(z(t)) · √((x′(t))² + (y′(t))²) dt.

Fig. 4 Poincaré metric on an open set U homeomorphic to D1.


The inclusion principle is a fundamental property of the Poincaré metric. Let dU and dW be the Poincaré metrics as defined above on U and W, respectively, where U is contained in W, and let ΔU and ΔW be the respective metric densities. Then for all z1 and z2 in U, we have dU(z1, z2) ≥ dW(z1, z2). Intuitively, z1 and z2 are closer to the boundary of U than they are to the boundary of W, so their Poincaré distance in U is greater than their Poincaré distance in W (see Figure 5). Similarly, at all points z in U, we have ΔU(z) ≥ ΔW(z).

Fig. 5 The inclusion principle: dU(z1,z2) ≥ dW(z1,z2).


The Poincaré metric on an open disk of radius ϵ is an important special case for what follows. For U = Dϵ, the open disk of radius ϵ centered at the origin, the Riemann map from U to D1 is defined by ϕ(z) = z/ϵ. So ΔU(z) = |ϕ′(z)| · Δ(ϕ(z)) = (1/ϵ) · 2/(1 − (|z|/ϵ)²) = 2ϵ/(ϵ² − |z|²). Similarly, for any open disk U of radius ϵ centered at a point p in C, we obtain ΔU(z) = 2ϵ/(ϵ² − |z − p|²).
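The closed form d(0, r) = ln((1 + r)/(1 − r)) for the Poincaré distance from the origin in D1, which reappears later in the proof of Theorem 1, can be recovered by numerically integrating the density 2/(1 − x²) along a radius. A sketch (plain Python, midpoint rule; the radius and the resolution are arbitrary choices):

```python
import math

def poincare_distance_from_origin(r, n=50000):
    """Midpoint-rule approximation of the integral of 2/(1 - x^2) from 0 to r."""
    h = r / n
    return sum(2.0 / (1.0 - (h * (j + 0.5)) ** 2) for j in range(n)) * h

r = 0.75
print(poincare_distance_from_origin(r), math.log((1 + r) / (1 - r)))   # both near ln(7)
```

As r approaches 1 the distance diverges, reflecting the fact that the boundary of the disk is infinitely far away in the Poincaré metric.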

3 Ilyashenko’s Theorems on Zeros and Periodic Solutions

In this section, we draw two theorems from Ilyashenko’s article [Citation15]. These results place upper bounds on the number of zeros of a complex analytic function f on a compact set K contained in a domain U homeomorphic to the open unit disk. The section concludes with an application of Ilyashenko’s method to a differential equation arising from Hilbert’s sixteenth problem. Formula (6) in [Citation15] should have a factor of 2 inside the exponential function and thus read γ(U,K) ≤ exp(2Kϵ^{−1}). The appropriate changes have been made throughout this article, including in the statement of Theorem 3.

Theorem 1

([Citation15]). For some domain U, assume there is a function ϕ analytic on U that maps the closure U¯ homeomorphically onto the closed disk D1¯. These conditions are satisfied if U is the interior of a Jordan curve. Let f be holomorphic on U and continuous on its closure U¯, with M = max{|f(z)| : z ∈ U¯}. Let K be a compact subset of U, with m = max{|f(z)| : z ∈ K} such that m ≠ 0. Let N be the number of zeros, counted with multiplicity, of f(z) restricted to K, and let σ be the diameter of K in the Poincaré metric on U. Then:

  1. For any r < 1 such that ϕ(K) ⊆ Dr¯, we have N ≤ (1/ln(1/r)) · ln(M/m).

  2. N ≤ (e^σ − 1) · ln(M/m) for σ ≥ ln(2), and N < ln(M/m) for σ < ln(2).

  3. N ≤ e^σ · ln(M/m).

Proof.

(a) For a proof that the interior of a Jordan curve satisfies the hypothesis of Theorem 1, see [Citation30]. Choose a point P in K such that |f(P)| = m = max{|f(z)| : z ∈ K} ≠ 0. By composing if necessary with a fractional linear transformation T on D1¯, we can assume that ϕ(P) = 0. The composition g = f ∘ ϕ⁻¹ is holomorphic on the open disk D1 of radius 1 and continuous on the closed disk D1¯ (see Figure 6). Then M = max{|g(z)| : z ∈ D1¯}, m = max{|g(z)| : z ∈ ϕ(K)}, m = |g(0)| ≠ 0, and N is the number of zeros, counted with multiplicity, of g restricted to ϕ(K). The result follows by applying the corollary to Jensen’s formula to the function g and the compact set ϕ(K).

Fig. 6 Theorem 1: P is in K, a compact subset of U¯. ϕ(K) ⊆ Dr*¯ ⊆ D1¯. ϕ(P) = 0.


(b) If σ = 0, it follows that K contains only one point and that N = 0. The theorem follows trivially. Therefore, we will assume that σ>0.

As above, choose a point P in K such that ϕ(P) = 0 and |f(P)| = m = max{|f(z)| : z ∈ K} ≠ 0. Because ϕ is an isometry between the Poincaré metric on U and the Poincaré metric on D1, σ is also the diameter of ϕ(K) in the Poincaré metric on D1. By the compactness of ϕ(K), there is some closed disk with center 0 that contains ϕ(K) and has a minimal radius r* with 0 < r* < 1; that is, r* = min{ν : ϕ(K) ⊆ Dν¯} (see Figure 6). By part (a), N ≤ (1/ln(1/r*)) · ln(M/m). It remains to determine r* in terms of σ.

We observed above that 0 is in ϕ(K). Furthermore, by the definition of r*, there is a point Q in ϕ(K) that lies on the boundary of the disk of radius r*. Therefore, using the Poincaré metric on D1 and its isotropy property,

σ = diameter of ϕ(K) ≥ d(0, Q) = d(0, r*) = ∫_0^{r*} 2/(1 − x²) dx = ln((1 + r*)/(1 − r*)).

By elementary algebra and properties of the logarithm, for all r* with 0 < r* < 1, the inequality σ ≥ ln((1 + r*)/(1 − r*)) is equivalent to the inequality ln((e^σ + 1)/(e^σ − 1)) ≤ ln(1/r*). Therefore, N ≤ [1/ln((e^σ + 1)/(e^σ − 1))] · ln(M/m).

The inequality σ ≥ ln(2) implies 0 < 1/(e^σ − 1) ≤ 1. Using calculus, ln(1 + 2x) ≥ x for all x with 0 ≤ x ≤ 1. Therefore, for σ ≥ ln(2), ln((e^σ + 1)/(e^σ − 1)) = ln(1 + 2/(e^σ − 1)) ≥ 1/(e^σ − 1), and so N ≤ (e^σ − 1) · ln(M/m).

The inequality 0 < σ < ln(2) implies ln((e^σ + 1)/(e^σ − 1)) > ln(3) > 1 and thus N < ln(M/m).

(c) Part (c) follows directly from part (b). ■

Theorem 2

([Citation15]). Let the “stadium” U be the open ϵ-neighborhood of a finite closed real interval V = [a, b] of length L in the usual Euclidean metric and with diameter σ in the Poincaré metric on U (see Figure 7). Let f be holomorphic on U and continuous on its closure U¯, with M = max{|f(z)| : z ∈ U¯} and m = max{|f(z)| : z ∈ V} ≠ 0. Let N be the number of zeros, counted with multiplicity, of f(z) restricted to V. Then:

Fig. 7 Theorem 2: The diameter σ of V in the Poincaré metric on U equals dU(P,Q). As shown in the proof of Theorem 2, dU(P,Q) ≤ 2L/ϵ.

  1. σ ≤ 2L/ϵ.

  2. N ≤ exp(2L/ϵ) · ln(M/m).

Proof.

(a) Since V is compact, the diameter σ of V equals the distance dU(P, Q) of two points P and Q in V (see Figure 7). This distance is at most equal to the Poincaré arclength along V between P and Q, which is at most equal to the length LU of V in the Poincaré metric on U: LU = ∫_a^b ΔU(x) dx, where ΔU(x) is the metric density of U evaluated at x. For each x between a and b, let Dϵ(x) be the open disk of radius ϵ centered at x, and let Δϵ,x(z) = 2ϵ/(ϵ² − |z − x|²) be the metric density for the Poincaré metric on Dϵ(x). Since Dϵ(x) is contained in U, by the inclusion principle we have ΔU(z) ≤ Δϵ,x(z) for all z in Dϵ(x). Letting z = x, we have ΔU(x) ≤ Δϵ,x(x) = 2/ϵ. Therefore,

LU = ∫_a^b ΔU(x) dx ≤ ∫_a^b (2/ϵ) dx = 2L/ϵ,

where L = b − a is the usual Euclidean length of the interval V. So σ ≤ LU ≤ 2L/ϵ.

(b) Part (b) follows from part (a) and Theorem 1c. ■

3.1 Ilyashenko’s method

Consider a periodically forced differential equation x′ = f(t, x), where for all t, f(t + T, x) = f(t, x). For convenience, we assume that the period T = 1. Upper bounds on the number of real periodic solutions of x′ = f(t, x) are determined by upper bounds on the number of zeros of the displacement map F. Using Theorem 2b, the method requires

  1. a closed interval V containing the initial points of all real periodic solutions,

  2. an ϵ-neighborhood U of V with closure U¯ on which the displacement map F is defined and analytic,

  3. the maximum M of |F(z)| on the closure of U, and

  4. the maximum m of |F(z)| on V such that m0.

Using this method, Ilyashenko has obtained results pertinent to aspects of Hilbert’s sixteenth problem. For a survey, see [Citation16]. Hilbert’s problem involves upper bounds on the number of limit cycles of polynomial planar differential equations, but a simplified version involves a one-dimensional equation of the form x=f(t,x) that is polynomial in x and periodic in t. Recall that a limit cycle is an attracting or repelling periodic solution. Charles Pugh inquired whether there was an upper bound on the number n of limit cycles that depended only on the degree d of the polynomial.

For d = 2, there are no more than two limit cycles. Furthermore, in the cubic case, for x′ = a3(t)x³ + a2(t)x² + a1(t)x + a0(t) with the coefficient a3(t) of constant sign for all t, it was known that there could be no more than three limit cycles [Citation3]. An important negative result was obtained by Lins Neto, who showed that for any degree d ≥ 3 and any n > 0, there is such an equation having at least n limit cycles [Citation23]. In particular, in the cubic case d = 3, the constant sign condition on a3(t) is necessary to obtain the upper bound n = 3. However, it remained to determine upper bounds on n using not only the degree d but also properties of the coefficient functions of the polynomial.

Theorem 3

([Citation15]). Suppose f(t, x) is of period 1 in t and polynomial of degree d ≥ 4 in x, with the x^d term having coefficient equal to 1, and where C > 1 is greater than the maximum of the absolute values of the coefficients of f. Then for the differential equation x′ = f(t, x) there is some number N(C, d), depending only on d and C, such that the number of limit cycles is at most N(C, d). Moreover, N(C, d) ≤ 8 exp{(6C + 4) exp[(3/2)(2C + 3)^d]}.

Stephen Smale has directed attention to the Liénard equation, another special case of Hilbert’s sixteenth problem [Citation29]. By reducing the problem to a one-dimensional periodically forced equation and using a technique more elaborate but similar to that in the previous theorem, Ilyashenko and Panov obtained the following result:

Theorem 4

([Citation17]). Consider the Liénard equation

x′ = y − F(x), y′ = −x, F(x) = x^n + ∑_{j=1}^{n−1} aj x^j,

where |aj| ≤ C, C ≥ 4, n ≥ 5, and n is odd. Then the number of limit cycles L(n, C) ≤ exp(exp(C^{14n})).

The reason for the double or triple exponential character of these estimates will be suggested by the proofs of similar results in Section 5, results that apply Ilyashenko’s method to certain equations x′ = f(t, x) that arise from the neural model of Section 4 and for which f is transcendental in x.

Returning to the constant sign cubic case, suppose f(t, x) is qualitatively similar to functions a3(t)x³ + a2(t)x² + a1(t)x + a0(t) with a3(t) of constant sign for all t; that is, for all t there is one inflection point and no more than two turning points, and either for all t, lim_{x→±∞} f(t, x) = ±∞, or for all t, lim_{x→±∞} f(t, x) = ∓∞. It was conjectured that, as in the constant sign cubic case, the differential equation x′ = f(t, x) would have no more than three limit cycles. This conjecture was recently answered in the negative by Decker and Noonburg, with examples drawn from the neural model of Section 4 exhibiting an arbitrarily large number of limit cycles [Citation8, Citation9].

4 Periodic Solutions of a Neural Model

4.1 Sigmoid response functions

In the simplest models, a neuron gives an all-or-nothing response to a stimulus; that is, the response is gated by a translate of the unit step function u(x), which equals 0 for x < 0 and 1 for x ≥ 0. However, to use the resources of calculus and differential equations, it is preferable to have a response function S that rises smoothly from 0 to 1 as x passes from −∞ to ∞. We normalize the model as follows.

Definition.

A sigmoid response function S is a twice continuously differentiable increasing function of the real numbers R that is concave up on (−∞, 0) and concave down on (0, ∞), with S(x) + S(−x) = 1, S′(0) = 1, lim_{x→−∞} S(x) = 0, and lim_{x→∞} S(x) = 1.

In neural modeling, sigmoid response functions are often implemented using elementary functions. Four common examples are

  1. S(x) = 1/(1 + exp(−4x)),

  2. S(x) = (1/π) arctan(πx) + 1/2,

  3. S(x) = (1/2) tanh(2x) + 1/2,

  4. S(x) = (1/2) erf(√π x) + 1/2, where the error function erf(x) = (2/√π) ∫_0^x exp(−t²) dt.
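Each of the four examples can be checked numerically against the definition: the symmetry S(x) + S(−x) = 1 and the normalized maximum slope S′(0) = 1. A sketch using central differences (plain Python; math.erf is available in the standard library):

```python
import math

sigmoids = [
    lambda x: 1 / (1 + math.exp(-4 * x)),
    lambda x: math.atan(math.pi * x) / math.pi + 0.5,
    lambda x: math.tanh(2 * x) / 2 + 0.5,
    lambda x: math.erf(math.sqrt(math.pi) * x) / 2 + 0.5,
]

h = 1e-5
for S in sigmoids:
    assert abs(S(0.7) + S(-0.7) - 1) < 1e-12   # symmetry S(x) + S(-x) = 1
    slope_at_0 = (S(h) - S(-h)) / (2 * h)       # central difference for S'(0)
    print(round(slope_at_0, 6))                 # 1.0 for each example
```

The normalization S′(0) = 1 explains the scaling constants 4, π, 2, and √π appearing in the four formulas.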

We plot S(x) = 1/(1 + exp(−4x)) together with its bell-shaped derivative S′(x) (see Figure 8).

Fig. 8 S(x) = 1/(1 + exp(−4x)).


For any number r and for any k > 0, a transformation of S(x) yields the function g(x) = S(k(x − r)), with g′(x) = kS′(k(x − r)). The maximum value k of the derivative of this function g(x) occurs at its threshold x = r. For fixed r, as k approaches ∞, we see that g(x) approaches u(x − r). This behavior is illustrated in the plot of S(5(x − 1)), where S(x) = 1/(1 + exp(−4x)).
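The role of the gain k can be illustrated numerically: the slope of g(x) = S(k(x − r)) at the threshold is k, and as k grows the values of g away from the threshold approach 0 and 1, the step-function limit. A sketch (plain Python; the choices r = 1 and k in {5, 50, 500} are arbitrary):

```python
import math

def S(x):
    return 1 / (1 + math.exp(-4 * x))

def g(x, k, r=1.0):
    # steepened and shifted sigmoid S(k(x - r))
    return S(k * (x - r))

h = 1e-6
for k in (5, 50, 500):
    slope = (g(1 + h, k) - g(1 - h, k)) / (2 * h)   # equals k * S'(0) = k
    print(k, round(slope, 2), g(0.9, k), g(1.1, k))
```

For k = 500 the values at x = 0.9 and x = 1.1 are already indistinguishable from 0 and 1 at machine precision.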

4.2 Neural models

The usual model for a single neuron is the four-dimensional Hodgkin–Huxley ordinary differential equation that led to the 1963 Nobel Prize in Physiology or Medicine for its discoverers. However, simpler models are easier to manipulate and may reveal important qualitative features of neural behavior. The two-dimensional Fitzhugh–Nagumo ordinary differential equation is such a simplification [Citation24]. A further reduction yields an experimentally tested one-dimensional model derived from electrophysiological considerations [18, pp. 37, 55, 89, 106]. The model considered in this article shares qualitative features with this electrophysiological model [Citation14], [25, pp. 78–80]. Furthermore, the model in this article has been applied successfully to the collective behavior of a single pool of neurons and of interacting pools of neurons including the much studied Wilson–Cowan oscillator [Citation26, Citation31]. It is also used in differentiable artificial neural networks [Citation13].

Let x represent the activation level of a neuron. Then x would decay exponentially if it were not for direct or indirect self-stimulation that is gated through a sigmoid response function S with threshold r and maximum derivative k. The resulting one-dimensional autonomous differential equation is x′ = f(x) = −x + S(k(x − r)). We first discuss the qualitative features of this autonomous equation. We then discuss the periodically forced equation where the threshold r varies periodically.

For any sigmoid response function S, if f(x) = −x + S(k(x − r)) is the right-hand side of the differential equation, we obtain f′(x) = 0 if and only if S′(k(x − r)) = 1/k. The bell-shaped properties of S′ imply that the equation f′(x) = 0 has two solutions for k > 1, one solution for k = 1, and no solutions for k < 1. Therefore, f has at most three zeros, and the differential equation x′ = f(x) has at most three equilibria. As an example, we graph f using S(x) = 1/(1 + exp(−4x)), k = 1.5, and r = 1/2.
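The three equilibria in this example can be located numerically by scanning f for sign changes and refining each bracket by bisection. A sketch (plain Python; the grid offset and tolerance are arbitrary choices):

```python
import math

def S(x):
    return 1 / (1 + math.exp(-4 * x))

def f(x, k=1.5, r=0.5):
    # right-hand side of the autonomous neural equation x' = -x + S(k(x - r))
    return -x + S(k * (x - r))

def bisect(a, b, tol=1e-12):
    # assumes f(a) and f(b) have opposite signs
    while b - a > tol:
        m = (a + b) / 2
        if f(a) * f(m) <= 0:
            b = m
        else:
            a = m
    return (a + b) / 2

grid = [-0.503 + 0.01 * i for i in range(201)]   # offset chosen to avoid landing on x = 0.5
equilibria = [bisect(a, b) for a, b in zip(grid, grid[1:]) if f(a) * f(b) < 0]
print(equilibria)   # three equilibria; the middle one is x = 0.5
```

By the symmetry S(x) + S(−x) = 1, the outer two equilibria are placed symmetrically about the middle one at x = 1/2.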

For the example above, at the equilibrium with the smallest x value and at the equilibrium with the largest x value, the derivative f′ < 0, implying that these equilibria are sinks or attractors; that is, solutions with initial conditions sufficiently close to such an equilibrium converge to the equilibrium as time t increases. At the middle equilibrium, the derivative f′ > 0, implying that this equilibrium is a source or repeller; that is, solutions with initial conditions sufficiently close to the equilibrium converge to the equilibrium as time t decreases and move away from the equilibrium as time t increases. It is this behavior that makes this equation a useful neural model, as neurons and neuron populations are often in either a low-activation state or in a high-activation state. Furthermore, because the derivative f′ is nonzero at each of the three equilibria, it follows for sufficiently small perturbations of f and f′ that the equilibria persist, respectively, as attractors and as a repeller [Citation3], [5, pp. 84–89], [Citation11]. In the language of dynamical systems theory, an equilibrium at which f′ is nonzero is called hyperbolic, and a differential equation x′ = f(x) such that the number and type of equilibria persist under sufficiently small perturbations of f and f′ is called structurally stable [Citation12].

4.3 Periodic stimulation and periodic solutions

Abnormal oscillatory behavior of regions of the brain is characteristic both of epilepsy and Parkinson’s disease and is much studied and modeled [Citation2]. Periodic stimulation of parts of the brain is increasingly used in the treatment of those cases of epilepsy that do not respond to medication [Citation27]. Periodic stimulation of a single neuron or of a pool of neurons may come from other parts of the brain or from external sources. Periodic input into the one-variable neuron model is equivalent to a periodic change in the threshold r. Therefore, we consider the periodically stimulated nonautonomous one-dimensional ordinary differential equation model x′ = f(t, x) = −x + S(k(x − r(t))), where S is a sigmoid response function, k > 0, and r is continuous with period 1.

To be specific at this point, we consider a periodic perturbation of the autonomous equation discussed above: x′ = f(t, x) = −x + S(k(x − r(t))), where S(x) = 1/(1 + exp(−4x)), k = 1.5, and r(t) = r0 + r1(t), where r0 = 1/2 and r1 is continuous with period 1.

When the equilibria are hyperbolic, as in the unperturbed example, then for a sufficiently small magnitude perturbation r1 about the threshold r0, the three equilibria persist as three periodic solutions. The low and high excitation solutions are attractors and the middle solution is a repeller [Citation3, Citation11], [26, pp. 1595–1598].

More generally, consider for this example the nullclines f(t, x) = 0 and the generalized nullclines f_x(t, x) = 0 [3, pp. 205–208]. For the unperturbed example, the nullclines are three horizontal lines x = x0, x = x1, and x = x2 corresponding to the three equilibria x0, x1, and x2, while the generalized nullclines are two horizontal lines x = c1 and x = c2 corresponding to the two critical points c1 and c2. For small enough perturbations, the nullclines persist as curves x0(t), x1(t), and x2(t) in the (t, x)-plane that are not solutions of the differential equation, but rather indicate where the slope field of the differential equation is horizontal. If constants d1 and d2 exist with x0(t) < d1 < x1(t) < d2 < x2(t) for all t, then the differential equation has at least three periodic solutions. Furthermore, if f_x(t, x) < 0 for x < d1 and x > d2, and f_x(t, x) > 0 for d1 < x < d2, then there are precisely three periodic solutions, with the low and high excitation solutions attractors and the middle solution a repeller [Citation3, Citation11, Citation26].

These methods break down when the values of k and r become sufficiently large so that the nullclines are no longer separated by constants or no longer persist as three separate curves but rather have branches that join and separate in the interval 0 ≤ t ≤ 1. In such cases, the number of periodic solutions may be less than, equal to, or greater than three [Citation3, Citation11, Citation26].

To be specific, consider the following example: x′ = f(t, x) = −x + S(k(x − r(t))), where S(x) = 1/(1 + exp(−4x)), k = 1.5, and r(t) = r0 + r1(t), where r0 = 1/2 and r1(t) = A sin(2πt). As shown in Figure 9, for A = 0.06 the three nullclines are separated by constants and there are precisely three periodic solutions. For A = 0.2, the nullclines no longer persist as separate curves, but there are still precisely three periodic solutions. For A = 0.4, the nullclines no longer persist as separate curves and there is only one periodic solution.

Fig. 9 Nullclines f(t,x)=0 (dashes), slope field, and periodic solutions (solid lines) for k = 1.5, r = 1/2, and A = 0.06, A = 0.2, and A = 0.4.

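For the case A = 0.06, the count of three periodic solutions can be explored numerically by combining the ideas above: compute the Poincaré map by integrating over one period, then count sign changes of the displacement map F(x0) = P(x0) − x0. A sketch (plain Python with classical RK4; the grid and step count are arbitrary choices):

```python
import math

def S(x):
    return 1 / (1 + math.exp(-4 * x))

def f(t, x, k=1.5, r0=0.5, A=0.06):
    # periodically forced neural equation x' = -x + S(k(x - r(t)))
    return -x + S(k * (x - (r0 + A * math.sin(2 * math.pi * t))))

def displacement(x0, steps=400):
    """F(x0) = P(x0) - x0, with the Poincare map P computed by RK4 over one period."""
    h = 1.0 / steps
    t, x = 0.0, x0
    for _ in range(steps):
        k1 = f(t, x)
        k2 = f(t + h / 2, x + h * k1 / 2)
        k3 = f(t + h / 2, x + h * k2 / 2)
        k4 = f(t + h, x + h * k3)
        x += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        t += h
    return x - x0

grid = [-0.2 + 0.01 * i for i in range(141)]
vals = [displacement(x0) for x0 in grid]
sign_changes = sum(1 for u, v in zip(vals, vals[1:]) if u * v < 0)
print(sign_changes)   # 3: one zero of F per periodic solution
```

The sign pattern of F along the grid (positive, negative, positive, negative) also shows that the low and high solutions attract while the middle one repels.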

For the example above, the function f(t, x) seems qualitatively cubic-like (see the last paragraph of Section 3); that is, similar to a function a3(t)x³ + a2(t)x² + a1(t)x + a0(t) with the coefficient a3(t) negative for all t, and it was known that differential equations of the form x′ = a3(t)x³ + a2(t)x² + a1(t)x + a0(t) with a3(t) of constant sign have no more than three periodic solutions [Citation3]. So it was conjectured that differential equations from the neural model of the form x′ = f(t, x) also have no more than three periodic solutions.

This conjecture was answered in the negative by Decker and Noonburg who, using a careful bifurcation analysis, explored an example with five periodic solutions [Citation8]. Such an example is provided by the equation x′ = f(t, x) = −x + S(k(x − r(t))), with S(x) = 1/(1 + exp(−4x)), k = 3, r(t) = r0 + r1(t), r0 = 1/2, r1(t) = A sin(2πt), and A = 0.37.

Furthermore, by first studying examples where k = ∞ and r(t) is piecewise linear, and then smoothing these examples, Decker and Noonburg demonstrated that for any n > 0, there exist k > 0 and a continuous periodic function r such that there are at least n periodic solutions for the equation x′ = f(t, x) = −x + S(k(x − r(t))) [Citation9].

Many questions remain. One such question is to find an upper bound N for the number of periodic solutions that depends on k and on S, but is independent of the perturbation r. In Section 5, we answer such questions using complex analytic techniques developed by Yulij Ilyashenko.

5 Upper Bounds for Number of Periodic Solutions of Neural Models

Ilyashenko’s method finds upper bounds on the number of real periodic solutions of x′ = f(t, x) by finding upper bounds on the number of zeros of the displacement map F (see Section 2). The method requires (see Theorem 2)

  1. a closed interval V containing the initial points of all real periodic solutions,

  2. an ϵ-neighborhood U of V with closure U¯ on which the displacement map F(z) is defined and analytic,

  3. the maximum M of |F(z)| on the closure of U, and

  4. the maximum m of |F(z)| on V, with m ≠ 0.

For the periodically forced neuron of Section 4, these elements are established in Proposition 5. They are then used in Theorems 9 and 11 to obtain results on the number of real periodic solutions.

Recall that a function on a subset of C is defined to be complex analytic if it is complex analytic in some neighborhood of that subset [Citation1]. Note that F may fail to be defined at a point z₀ in U if f is undefined at z₀, or if f fails to be complex analytic at z₀, or if the solution z(t) starting at z₀ fails to be defined for all t less than or equal to the period of the differential equation.

Definition.

A complex sigmoid response function S is a sigmoid response function that is complex analytic on the real line.

Definition.

A complex sigmoid response function S has (H, B) bounded derivative if H > 0, B > 1, S is complex analytic in the closed H-neighborhood of the real axis, and for all z in that H-neighborhood we have |S′(z)| ≤ B (see Figure 10).

Fig. 10 Definition: S′ is (H, B) bounded if |S′(z)| ≤ B in the closed H-neighborhood of the real axis.


Proposition 5.

Let S be a complex sigmoid response function with (H, B) bounded derivative. Consider the differential equation z′ = f(t, z) = −z + S(k(z − r(t))), where k ≥ 1 and r(t) is real-valued and continuous with period 1. Let V be the interval [−1, 1]. Suppose ϵ = H/(k e^(1+Bk)), U is the ϵ-neighborhood of V, A is the closed H/k-neighborhood of V, P is the time-one (i.e., Poincaré) map, F(z) = P(z) − z is the displacement function, M = max{|F(z)| : z ∈ U̅}, and m = max{|F(z)| : z ∈ V}. Then:

  (a) V contains the initial points of all real periodic solutions of the differential equation.

  (b) On the closure of U, the Poincaré map P and the displacement function F are defined and analytic, and P takes values in A.

  (c) M ≤ 2(1 + H/k).

  (d) m ≥ 1 − 1/e.

Proof.

Note that the three lemmas included in the proof of Proposition 5 are only used for the proof of part (b).

  (a) Consider a solution x(t) with initial condition x₀. The bounds on S, 0 < S(k(x − r(t))) < 1, imply f(t, x) > 0 for x < 0 and f(t, x) < 0 for x > 1. If x₀ < 0, then there is a neighborhood of x₀ such that, for all t, x(t) is increasing while in that neighborhood, showing that x(t) is not a periodic solution. Similarly, x(t) is not a periodic solution whenever x₀ > 1. Therefore, the closed unit interval, and hence V, contains the initial points of all periodic solutions.

  (b) Note that U̅ ⊂ A and that, by the assumption k ≥ 1, A is a subset of the closed H-neighborhood of the real axis. Therefore the functions f(t, z) and f_z(t, z) are defined for all (t, z) with z in A, as is the differential equation z′ = f(t, z) = −z + S(k(z − r(t))). Suppose z₀ is in the closure of U and let z(t) be the solution of this equation with initial condition z(0) = z₀. We need to show that z(t) is defined and in A for all t ≤ 1 (see Figure 11). Suppose not. We use the following three lemmas to derive the contradiction H/k < H/k. Lemma 7 is a corollary of Gronwall’s inequality and is much used in the theory of differential equations.

    Fig. 11 Proposition 5b: On the closure of U, both the Poincaré map P and the displacement function F are defined and the Poincaré map takes values in A. The proof is by contradiction.


Lemma 6.

Let D = max{|f_z(t, z)| : t ∈ R, z ∈ A}. For all fixed t and all points P and Q in A, |f(t, P) − f(t, Q)| ≤ D|P − Q|; i.e., for all fixed t, D is a Lipschitz constant in z for f(t, z) on A.

Proof.

Note that the maximum D exists since f is periodic in t and A is compact. Since A is convex, the straight-line path C from P to Q lies in A. Then |f(t, P) − f(t, Q)| = |∫_C f_z(t, z) dz| ≤ ∫_C |f_z(t, z)| |dz| ≤ D|P − Q|.

See [1, p. 104]. □

Lemma 7.

([4, p. 103], [12, pp. 169, 297]). For some T ≥ 0, suppose z₁(t) and z₂(t) are solutions of z′ = f(t, z) that are contained in A for all t with 0 ≤ t ≤ T, and that for all fixed t, D is a Lipschitz constant in z for f(t, z) in A. Then |z₂(t) − z₁(t)| ≤ |z₂(0) − z₁(0)| e^(Dt) for all t with 0 ≤ t ≤ T.
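Lemma 7 can be illustrated numerically. The sketch below (not from the article; the forcing r(t), initial conditions, and step size are illustrative) integrates two nearby real solutions of the neural equation with k = 1 and checks that their separation never exceeds the Gronwall bound |z₂(0) − z₁(0)| e^(Dt), using the Lipschitz constant D ≤ 1 + Bk from Lemma 8 with the bound B = 2 that Proposition 10 establishes for the sigmoid of Section 4:

```python
import math

def S(x):
    # sigmoid S(x) = 1/(1 + e^(-4x))
    return 1.0 / (1.0 + math.exp(-4.0 * x))

def f(t, x, k=1.0):
    # z' = -z + S(k(z - r(t))) with an illustrative forcing r(t)
    r = 0.5 + 0.25 * math.sin(2.0 * math.pi * t)
    return -x + S(k * (x - r))

def solve(x0, steps=1000):
    # RK4 trajectory samples on [0, 1]
    h = 1.0 / steps
    t, x = 0.0, x0
    out = [x]
    for _ in range(steps):
        k1 = f(t, x)
        k2 = f(t + h / 2, x + h * k1 / 2)
        k3 = f(t + h / 2, x + h * k2 / 2)
        k4 = f(t + h, x + h * k3)
        x += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        t += h
        out.append(x)
    return out

z1, z2 = solve(0.30), solve(0.31)
D = 3.0  # Lipschitz bound D <= 1 + Bk from Lemma 8 (B = 2, k = 1)
gap0 = abs(z2[0] - z1[0])
n = len(z1) - 1
gronwall_ok = all(abs(b - a) <= gap0 * math.exp(D * i / n) + 1e-12
                  for i, (a, b) in enumerate(zip(z1, z2)))
```

On the real line the two solutions in fact contract toward each other, so the exponential Gronwall envelope holds with plenty of room.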

Lemma 8.

Let D = max{|f_z(t, z)| : t ∈ R, z ∈ A}. Then D ≤ 1 + Bk.

Proof.

If z is in the closed H/k-neighborhood A of V, then |Im(z)| ≤ H/k. Moreover, |Im(z − r(t))| ≤ H/k because r is real-valued. So |Im(k(z − r(t)))| ≤ H, which implies |S′(k(z − r(t)))| ≤ B, because S has an (H, B) bounded derivative. Therefore, for all z in A, |f_z(t, z)| = |−1 + kS′(k(z − r(t)))| ≤ 1 + |kS′(k(z − r(t)))| ≤ 1 + Bk. ■

Now suppose that there exists some t ≤ 1 such that either z(t) is not defined or z(t) is defined but is not in A. In either case there exists t₁ < 1 such that z(t₁) is on the boundary of A and z(t) is in A for all t ≤ t₁. The distance from z(t₁) to V is H/k (Figure 11).

Choose a point w₀ in V such that |z₀ − w₀| ≤ ϵ and let w(t) be the solution of z′ = f(t, z) = −z + S(k(z − r(t))) with initial condition w(0) = w₀ (Figure 11). As argued in the proof of part (a), f(t, x) > 0 for x < 0 and f(t, x) < 0 for x > 1. Therefore, for all t ≥ 0, w(t) is defined and in V, and consequently in A. Since z(t₁) is on the boundary of the H/k-neighborhood A of V, we have H/k ≤ |z(t₁) − w(t₁)|.

Since t₁ < 1, it follows by Lemmas 6, 7, and 8 that H/k ≤ |z(t₁) − w(t₁)| ≤ |z(0) − w(0)| e^(Dt₁) < |z(0) − w(0)| e^D ≤ |z(0) − w(0)| e^(1+Bk).

By the choice of w(0), |z(0) − w(0)| ≤ ϵ = H/(k e^(1+Bk)), implying the contradiction H/k < H/k. Therefore, for all t ≤ 1, z(t) is defined and in A. In particular, for all z₀ in the closure of U, the Poincaré map P(z₀) = z(1) is defined and in A.

It remains to show that the Poincaré map P and the displacement function F are complex analytic on the closure of U. The assumption k ≥ 1 implies that S, and thus also S′, is complex analytic on an open neighborhood W of A. Therefore, for all fixed t, the function f(t, z) = −z + S(k(z − r(t))) is complex analytic on W. We have just shown that solutions with initial conditions in the closure of U remain in A, and thus in W, for all t ≤ 1. It follows from classical results on the analyticity of solutions with respect to initial conditions that the Poincaré map P, and thus also the displacement function F(z) = P(z) − z, is complex analytic on the closure of U [22, p. 44].

  (c) By part (b), z in U̅ implies P(z) in A. The real line is invariant for P, so either z and P(z) are both in the closed upper half of A or they are both in the closed lower half of A. Therefore, M is at most the diameter of the closed upper half of A. This region is formed by a rectangle of length 2 and height H/k together with quarter circles of radius H/k at each end. The region has diameter 2(1 + H/k).

  (d) To find a lower bound for m we estimate |x(1) − x(0)| from below, where we choose x(t) to be the solution of the differential equation x′ = f(t, x) = −x + S(k(x − r(t))) with initial condition x(0) = −1. For comparison, consider the exponential differential equation x′ = h(x) = −x, with initial condition x(0) = −1 and solution −e^(−t). Since for all x < 0, f(t, x) > h(x) > 0, it follows that for all t ≥ 0, x(t) > −e^(−t). Therefore m ≥ |x(1) − x(0)| = x(1) − x(0) > −e^(−1) − (−1) = 1 − 1/e. ■
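The comparison argument in part (d) can be spot-checked numerically. The sketch below (not from the article; the values k = 3 and r(t) = 1/2 + 0.37 sin(2πt) are borrowed from the five-solution example of Section 4) integrates from x(0) = −1 and checks that x(1) > −1/e, so that x(1) − x(0) > 1 − 1/e:

```python
import math

def S(x):
    return 1.0 / (1.0 + math.exp(-4.0 * x))

def f(t, x, k=3.0):
    # x' = -x + S(k(x - r(t))) with r(t) = 1/2 + 0.37 sin(2*pi*t)
    r = 0.5 + 0.37 * math.sin(2.0 * math.pi * t)
    return -x + S(k * (x - r))

def x_at_time_one(x0=-1.0, steps=2000):
    # classical RK4 integration over one period
    h = 1.0 / steps
    t, x = 0.0, x0
    for _ in range(steps):
        k1 = f(t, x)
        k2 = f(t + h / 2, x + h * k1 / 2)
        k3 = f(t + h / 2, x + h * k2 / 2)
        k4 = f(t + h, x + h * k3)
        x += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        t += h
    return x

x1 = x_at_time_one()
# the comparison solution of x' = -x from x(0) = -1 is -e^(-t),
# so x(1) > -1/e and the displacement exceeds 1 - 1/e ≈ 0.632
```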

We now apply Ilyashenko’s Theorem 2 using the estimates in Proposition 5.

Theorem 9.

Let S be a complex sigmoid response function with (H, B) bounded derivative. Let N be the number of real periodic solutions of the differential equation z′ = f(t, z) = −z + S(k(z − r(t))), where k > 0 and r(t) is real-valued and continuous with period 1. Then:

  (a) N ≤ exp(4k e^(1+Bk)/H) · ln(2e(1 + H/k)/(e − 1)).

  (b) If k ≥ H, then N ≤ 2 exp(Ck e^(Bk)), where C = 4e/H.

Proof.

(a) Suppose k ≥ 1. By Proposition 5a, N equals the number of zeros of the displacement function F(z) on the interval V = [−1, 1]. Under the assumptions of Theorem 2, N ≤ exp(2L/ϵ) · ln(M/m), where L = 2 is the length of V. By Proposition 5b, these assumptions are satisfied with ϵ = H/(k e^(1+Bk)). By parts (c) and (d) of Proposition 5, we have M ≤ 2(1 + H/k) and m ≥ 1 − 1/e. Substituting in N ≤ exp(2L/ϵ) · ln(M/m), we obtain the result.

If 0 < k < 1, then for real x, f_x(t, x) < 0 for all t, which yields the bound 0 < P′(x) < 1 for the derivative of the Poincaré map P [3, pp. 109–110], [11, p. 130]. Therefore the displacement map F(x) = P(x) − x is monotone decreasing and has at most one zero, implying that there is at most one periodic solution.

(b) Suppose k ≥ H; then we have N ≤ exp(4k e^(1+Bk)/H) · ln(4e/(e − 1)) < 2 exp((4e/H) k e^(Bk)). ■
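Because the bound in Theorem 9a is a double exponential in k, it overflows floating point even for modest k; the following sketch (illustrative, not from the article) therefore evaluates its natural logarithm:

```python
import math

def log_bound_9a(k, H, B):
    # natural log of the Theorem 9(a) bound
    #   N <= exp(4k e^(1+Bk)/H) * ln(2e(1 + H/k)/(e - 1))
    expo = 4.0 * k * math.exp(1.0 + B * k) / H
    factor = math.log(2.0 * math.e * (1.0 + H / k) / (math.e - 1.0))
    return expo + math.log(factor)

# for the sigmoid of Section 4, Proposition 10 gives H = pi/8 and B = 2
lb1 = log_bound_9a(1.0, math.pi / 8.0, 2.0)
lb2 = log_bound_9a(2.0, math.pi / 8.0, 2.0)
```

Even for k = 1 the logarithm of the bound already exceeds 200, so the bound itself is astronomically large, a point taken up again in Section 7.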

6 Explicit Upper Bound on the Number of Periodic Solutions

In this section, we consider the complex sigmoid response function S(z) = 1/(1 + exp(−4z)) from Section 4 and, letting B = 2, determine an explicit value of H such that the derivative S′(z) is (H, B) bounded. We then use Theorem 9 to put an upper bound on the number N of real periodic solutions of the differential equation z′ = f(t, z) = −z + S(k(z − r(t))), where k > 0 and r is real-valued and continuous with period 1. The value of H is determined by the magnitude of S′ near the real axis and is therefore influenced by the location of the poles of S near the real axis.

Proposition 10.

S(z) = 1/(1 + exp(−4z)) has (H, B) bounded derivative S′(z), where H = π/8 and B = 2.

Proof.

The derivative S′(z) = 4e^(−4z)/(1 + e^(−4z))² = 4/(e^(2z) + e^(−2z))². The following statements are equivalent:

  1. |S′(z)| ≤ 2,

  2. |(e^(2z) + e^(−2z))²| ≥ 2,

  3. |cosh²(2z)| ≥ 1/2, and

  4. cosh²(2x) − sin²(2y) ≥ 1/2, where z = x + iy.

This last inequality is satisfied for all x and y with |y| ≤ π/8. ■
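Proposition 10 can be spot-checked on a grid. The sketch below (not from the article; the grid extents are illustrative) evaluates |S′(z)| = 1/|cosh(2z)|² over the strip |Im z| ≤ π/8 and confirms that the maximum is B = 2, attained at z = ±iπ/8:

```python
import cmath
import math

def Sprime(z):
    # S'(z) = 4e^(-4z)/(1 + e^(-4z))^2 = 1/cosh(2z)^2
    return 1.0 / cmath.cosh(2.0 * z) ** 2

H = math.pi / 8.0
# grid over the strip |Im z| <= H, |Re z| <= 3 (|S'| decays as |Re z| grows)
worst = max(abs(Sprime(complex(i * 0.01, j * H / 50.0)))
            for i in range(-300, 301) for j in range(-50, 51))
# worst is (up to rounding) exactly B = 2, attained at z = ±i*pi/8
```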

As an exercise, using properties of elementary complex analytic functions, similar estimates can be obtained for the other examples of sigmoid response functions that are listed in Section 4.

Exercise. S has (H, B) bounded derivative with:

B = 2 and H = √2/(2π), for S(z) = (1/π) arctan(πz) + 1/2,

B = 2 and H = π/8, for S(z) = (1/2) tanh(2z) + 1/2, and

B = 2 and H = √(ln(2)/π), for S(z) = (1/2) erf(√π z) + 1/2.
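The arctan case can be spot-checked the same way. For S(z) = (1/π) arctan(πz) + 1/2 the derivative is S′(z) = 1/(1 + π²z²), and the sketch below (not from the article; the grid is an illustrative choice) confirms |S′(z)| ≤ 2 on the strip |Im z| ≤ √2/(2π):

```python
import math

def abs_Sprime(x, y):
    # for S(z) = (1/pi) arctan(pi z) + 1/2, S'(z) = 1/(1 + pi^2 z^2)
    z = complex(x, y)
    return abs(1.0 / (1.0 + math.pi ** 2 * z * z))

H = math.sqrt(2.0) / (2.0 * math.pi)
worst = max(abs_Sprime(i * 0.01, j * H / 50.0)
            for i in range(-500, 501) for j in range(-50, 51))
# the maximum is B = 2, attained at z = ±iH, since there
# |1 + pi^2 z^2| = |1 - 1/2| = 1/2
```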

Theorem 11.

Let N be the number of real periodic solutions of the differential equation z′ = f(t, z) = −z + S(k(z − r(t))), where k > 0, r is real-valued and continuous with period 1, and S(z) = 1/(1 + exp(−4z)). Then N < 2 exp(Ck e^(2k)), where C = 28 is the least integer larger than 32e/π.

Proof.

If k ≥ H = π/8, Theorem 11 follows by applying Theorem 9b with B = 2 and the value H = π/8 determined in Proposition 10. If k < H = π/8 < 1, then for real x, f_x(t, x) < 0 for all t, which yields the bound 0 < P′(x) < 1 for the derivative of the Poincaré map P [3, pp. 109–110], [11, p. 130]. Therefore the displacement map F(x) = P(x) − x is monotone decreasing and has at most one zero, implying that there is at most one periodic solution. ■
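A quick computation (illustrative, not from the article) confirms that 28 is the least integer larger than 4e/H = 32e/π, and evaluates the logarithm of the Theorem 11 bound, which overflows double precision once k ≥ 2:

```python
import math

H = math.pi / 8.0
C_exact = 4.0 * math.e / H   # = 32e/pi, about 27.7
C = math.ceil(C_exact)       # the integer constant of Theorem 11

def log_bound_11(k):
    # natural log of the Theorem 11 bound N < 2 exp(C k e^(2k))
    return math.log(2.0) + C * k * math.exp(2.0 * k)
```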

7 Conclusions and Open Questions

After an introductory course in differential equations and an introductory course in complex analysis, the method of Ilyashenko can be used to study other accessible and illuminating problems involving upper bounds on the number of real periodic solutions of equations of the form x′ = f(t, x), where f is periodic in t and complex analytic in x. Delicate work is required to understand the particular function f and its derivative f_x, and how these determine the elements of the approach (see Section 5).

For example, undergraduates may wish to consider the autonomous population model x′ = f(x) = rx(1 − x/L)(x/M − 1) − H, where r is the intrinsic growth/decay rate, L is the carrying capacity, M is the minimal sustainable population in the absence of harvesting, H is the harvesting rate, and where r, L, and M are assumed to be positive with M < L [3, p. 217]. If H = 0, there are three equilibria: x = 0, x = L, and x = M. For small continuous periodic perturbations of the constants, the three equilibria persist as periodic solutions. Furthermore, whatever their size, as long as the periodic perturbations r(t), L(t), and M(t) remain positive, the coefficient of x³ remains of constant negative sign, and this implies that there can be no more than three periodic solutions. If r is allowed to vary in sign, one may search for examples with more than three periodic solutions, and then use Ilyashenko’s method to determine upper bounds on the number of such solutions.
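For the unperturbed model, the three equilibria can be counted by sign changes of f on a grid. The sketch below (illustrative parameter values r = 1, L = 3, M = 1, H = 0; not from the article) does this:

```python
def f(x, r=1.0, L=3.0, M=1.0, H=0.0):
    # autonomous population model x' = r x (1 - x/L)(x/M - 1) - H
    return r * x * (1.0 - x / L) * (x / M - 1.0) - H

# grid chosen to straddle, but never hit, the roots x = 0, M, L
xs = [-0.55 + 0.1 * i for i in range(42)]
vals = [f(x) for x in xs]
equilibria = sum(1 for a, b in zip(vals, vals[1:]) if a * b < 0)
# with H = 0 the equilibria are x = 0, x = M = 1, and x = L = 3
```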

Even for the neural models studied in this article, better estimates might be obtained by setting V = [−c, 1 + c], for c > 0, and allowing B to approach 1 and c to approach 0.

Ilyashenko believes that the double or triple exponential estimates are likely to be far too big [Citation15, Citation17]. He quotes Smale’s conjecture for the Liénard equation: that the number of periodic solutions is likely to be polynomial in the degree d and independent of the magnitude C of the coefficients. Ilyashenko suggests that sharper estimates might be found by complexifying time t and studying the resulting foliations that have Riemann surfaces as leaves.

The estimate from Theorem 11 that N < 2 exp(Ck e^(2k)), with C = 28, allows a very large number of periodic solutions that could not possibly be detected in either natural or artificial neural networks. However, this upper bound is independent of the periodic stimulation r and suggests the existence of a more plausible upper bound that cannot yet be theoretically demonstrated. Furthermore, seemingly impractical mathematical idealizations often find unexpected applications.

We conclude with three questions suggested by these studies. The last question is vague and requires further precision.

Question 1. Suppose a complex sigmoid response function S is complex analytic in some neighborhood of fixed width about the real axis. Does such an S have an (H, B) bounded derivative for some H > 0 and B > 1?

Question 2. The piecewise linear function L(x) defined by L(x) = 0 for x < −1/2, L(x) = x + 1/2 for −1/2 ≤ x ≤ 1/2, and L(x) = 1 for x > 1/2 has derivative L′(0) = 1. Replace S(x) by L(x) in the neural model we have studied. Using real variable methods, find an upper bound N for the number of periodic solutions of x′ = f(t, x) = −x + L(k(x − r(t))), where k > 0 and r is continuous and has period 1 in t. Using such methods, can one obtain an estimate comparable to or better than the estimates in Theorem 11? To begin, consider the case r(t) = 1/2 + A sin(2nπt) for n an integer, starting with n = 1.
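As a numerical starting point for Question 2, the sketch below (not from the article; k, A, and the grid are illustrative choices) integrates the piecewise linear model and counts sign changes of its displacement map. Since f(t, x) > 0 for x < 0 and f(t, x) < 0 for x > 1, the displacement map changes sign an odd number of times:

```python
import math

def L(x):
    # piecewise linear sigmoid: 0 below -1/2, x + 1/2 between, 1 above 1/2
    return min(1.0, max(0.0, x + 0.5))

def f(t, x, k=3.0, A=0.3, n=1):
    r = 0.5 + A * math.sin(2.0 * n * math.pi * t)
    return -x + L(k * (x - r))

def displacement(x0, steps=500):
    # RK4 approximation of F(x0) = P(x0) - x0 over one period
    h = 1.0 / steps
    t, x = 0.0, x0
    for _ in range(steps):
        k1 = f(t, x)
        k2 = f(t + h / 2, x + h * k1 / 2)
        k3 = f(t + h / 2, x + h * k2 / 2)
        k4 = f(t + h, x + h * k3)
        x += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        t += h
    return x - x0

xs = [-0.2 + 0.01 * i for i in range(141)]
F = [displacement(x) for x in xs]
count = sum(1 for a, b in zip(F, F[1:]) if a * b < 0)
# F > 0 at the left end and F < 0 at the right end, so count is odd
```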

Question 3. For x′ = f(t, x) = −x + S(k(x − r(t))), for a fixed number k > 0 and a fixed complex sigmoid response function S, let N(k, S) be the supremum of the number of periodic solutions, where the supremum is taken over the set of continuous functions r with period 1. Note that Theorem 11 gives an estimate for N(k, S), where S(x) = 1/(1 + exp(−4x)). Let N(k) be the infimum of N(k, S), where the infimum is taken over the set of complex sigmoid response functions S. What is the “best” upper estimate for N(k) that can be obtained using Ilyashenko’s method and (H, B) bounded derivatives as in Section 5?

ACKNOWLEDGMENT

This article was inspired by discussions I had with my University of Hartford colleagues Robert Decker, Virginia W. Noonburg, and Ben Pollina. Valuable suggestions were offered by Linda Keen, Malgorzata Marciniak, John Mitchell, and Zbigniew Nitecki. Nine of the figures were rendered by Andrew Starnes, who used the LaTeX package PGFPlots; the others were generated with Matlab. The meticulous work of the editors and reviewers was of great help and is much appreciated.

Additional information

Notes on contributors

Diego M. Benardete

Diego Mair Benardete received his doctorate in mathematics from the City University of New York in 1985 and has ever since researched pure and applied aspects of dynamical systems and differential equations. At the University of Hartford, he enjoys teaching undergraduate mathematics. He also has an amateur’s interest in history, literature, philosophy, and religion that manifests itself professionally in the study of the history and philosophy of mathematics.

References

  • Ahlfors, L. (1979). Complex Analysis: An Introduction to the Theory of Analytic Functions of One Complex Variable, 3rd ed. New York, NY: McGraw-Hill.
  • Ashwin, P., Coombes, S., Nicks, R. (2016). Mathematical frameworks for oscillatory network dynamics in neuroscience. J. Math. Neurosci. 6: Art. 2, 92 pp.
  • Benardete, D., Noonburg, V. W., Pollina, B. (2008). Qualitative tools for studying periodic solutions and bifurcations as applied to the periodically harvested logistic equation. Amer. Math. Monthly. 115(3): 202–219.
  • Birkhoff, G., Rota, G-C. (1962). Ordinary Differential Equations. Waltham, MA: Ginn and Company.
  • Blanchard, P., Devaney, R. L., Hall, G. R. (1998). Differential Equations. Pacific Grove, CA: Brooks/Cole.
  • Bottazzini, U., Gray, J. (2013). Hidden Harmony—Geometric Fantasies: The Rise of Complex Function Theory. New York: Springer.
  • Coddington, E. A., Levinson, N. (1985). Theory of Ordinary Differential Equations. Malabar, FL: Krieger. Reprint of original edition, (1955). New York, NY: McGraw-Hill.
  • Decker, R., Noonburg, V. W. (2012). A periodically forced Wilson–Cowan system with multiple attractors. SIAM J. Math. Anal. 44(2): 887–905.
  • Decker, R., Noonburg, V. W. A single neuron equation with multiple periodic cycles. In preparation.
  • Gray, J. (2015). The Real and the Complex: A History of Analysis in the 19th Century. New York, NY: Springer.
  • Hale, J. K., Kocak, H. (1991). Dynamics and Bifurcations. New York, NY: Springer.
  • Hirsch, M. W., Smale, S. (1974). Differential Equations, Dynamical Systems, and Linear Algebra. Orlando, FL: Academic Press.
  • Hopfield, J. J. (1982). Neural networks and physical systems with emergent collective computational abilities. Proc. Nat. Acad. Sci. 79: 2554–2558. DOI: https://doi.org/10.1073/pnas.79.8.2554.
  • Hoppensteadt, F. C., Izhikevich, E. (1997). Weakly Connected Neural Networks. New York, NY: Springer.
  • Ilyashenko, Yu. (2000). Hilbert-type numbers for Abel equations, growth and zeros of holomorphic functions. Nonlinearity. 13(4): 1337–1342. DOI: https://doi.org/10.1088/0951-7715/13/4/319.
  • Ilyashenko, Yu. (2002). Centennial history of Hilbert’s 16th problem. Bull. Amer. Math. Soc. (N.S.). 39(3): 301–354.
  • Ilyashenko, Yu., Panov, A. (2001). Some upper estimates of the number of limit cycles of planar vector fields with an application to the Liénard equations. Moscow Math. J. 1(4): 583–599. DOI: https://doi.org/10.17323/1609-4514-2001-1-4-583-599.
  • Izhikevich, E. (2007). Dynamical Systems in Neuroscience: The Geometry of Excitability and Bursting. Cambridge, MA: MIT Press.
  • Jensen, J. (1899). Sur un nouvel et important théorème de la théorie des fonctions. Acta Math. 22: 359–364. DOI: https://doi.org/10.1007/BF02417878.
  • Jessen, B. (1973). Jensen, Johan Ludwig William Valdemar. In: Gillispie, C. C., ed. Dictionary of Scientific Biography, Vol. 7. New York: Scribner, p. 101.
  • Keen, L., Lakic, N. (2007). Hyperbolic Geometry from a Local Viewpoint. New York: Cambridge Univ. Press.
  • Lefschetz, S. (1963). Differential Equations: Geometric Theory, 2nd ed. New York: John Wiley.
  • Lins Neto, A. (1980). On the number of solutions of the equation dx/dt = Σ_{j=0}^{n} a_j(t)x^j, 0 ≤ t ≤ 1, for which x(0) = x(1). Invent. Math. 59(1): 67–76.
  • Murray, J. D. (1993). Mathematical Biology, 2nd ed. New York, NY: Springer.
  • Noonburg, V. W. (2019). Differential Equations: From Calculus to Dynamical Systems, 2nd ed. Providence, RI: MAA Press.
  • Noonburg, V. W., Benardete, D., Pollina, B. (2003). A periodically forced Wilson–Cowan system. SIAM J. Appl. Math. 63(5): 1585–1603.
  • Rolston, J. D., Englot, D. J., Wang, D. D., Shih, T., Chang, E. F. (2012). Comparison of seizure control outcomes and the safety of vagus nerve, thalamic deep brain, and responsive neurostimulation: evidence from randomized control trials. Neurosurg. Focus. 32(3): E14.
  • Saks, S., Zygmund, A. (1971). Analytic Functions. Amsterdam: Elsevier.
  • Smale, S., (1998). Mathematical problems for the next century. Math. Intelligencer. 20: 7–15. DOI: https://doi.org/10.1007/BF03025291.
  • Veech, W. A. (1967). A Second Course in Complex Analysis. New York: W. A. Benjamin.
  • Wilson, H. R., Cowan, J. D. (1972). Excitatory and inhibitory interactions in localized populations of model neurons. Biophys. J. 12: 1–24. DOI: https://doi.org/10.1016/S0006-3495(72)86068-5.