Research Article

Dissipativity and passivity analysis for discrete-time complex-valued neural networks with time-varying delay

Article: 1048580 | Received 12 Nov 2014, Accepted 29 Apr 2015, Published online: 10 Jun 2015

Abstract

In this paper, we consider the problem of dissipativity and passivity analysis for discrete-time complex-valued neural networks with time-varying delays. Based on an appropriate Lyapunov–Krasovskii functional and the free-weighting matrix method, a sufficient condition is established to ensure that the neural network under consideration is strictly (Q,S,R)-dissipative. The derived conditions are presented in terms of linear matrix inequalities. A numerical example is presented to illustrate the effectiveness of the proposed results.

Public Interest Statement

The passivity approach for the interconnection of passive systems provides a useful tool for controlling a large class of nonlinear systems, including delayed neural networks (DNNs); its potential applications have been found in stability and stabilization schemes for electrical networks and in the control of teleoperators.

1. Introduction

Over the past several decades, neural networks have attracted considerable attention as important nonlinear circuit networks because of their wide applications in various fields such as associative memory, signal processing, data compression, system control (Hirose, Citation2003), optimization problems, and so on (Bastinec, Diblik, & Smarda, Citation2010; Diblik, Schmeidel, & Ruzickova, Citation2010; Liang, Wang, & Liu, Citation2009; Liu, Wang, Liang, & Liu, Citation2009; Wang, Ho, Liu, & Liu, Citation2009). Recently, neural networks have been implemented electronically and used in real-time applications. However, in the electronic implementation of neural networks, some essential parameters, such as the release rate of neurons, the connection weights between neurons, and the transmission delays, may be subject to deviations due to the tolerances of the electronic components employed in the design (Aizenberg, Paliy, Zurada, & Astola, Citation2008; Hu & Wang, Citation2012; Mostafa, Teich, & Lindner, Citation2013; Wang, Xue, Fei, & Li, Citation2013; Wu, Shi, Su, & Chu, Citation2011). As is well known, time delays commonly exist in neural networks because of network traffic congestion and the finite speed of information transmission, so the study of their dynamic properties in the presence of time delay is of great significance. However, most of the networks studied so far are real-valued. Recently, in order to investigate complex-valued phenomena, several complex-valued network models have been proposed.

Figure 1. State trajectories of the real part of the two-neuron complex-valued neural network for τ1(k)=2.5+0.5sin(0.5kπ) and τ2(k)=4.5+0.5sin(0.5kπ) with initial state x11=2+2j.

Figure 2. State trajectories of the imaginary part of the two-neuron complex-valued neural network for τ1(k)=2.5+0.5sin(0.5kπ) and τ2(k)=4.5+0.5sin(0.5kπ) with initial state x12=-1-j.

Figure 3. State trajectories of the real part of the two-neuron complex-valued neural network for τ1(k)=3.5+0.5sin(0.5kπ) and τ2(k)=5.5+0.5sin(0.5kπ) with initial state x11=2+2j.

Figure 4. State trajectories of the imaginary part of the two-neuron complex-valued neural network for τ1(k)=3.5+0.5sin(0.5kπ) and τ2(k)=5.5+0.5sin(0.5kπ) with initial state x12=-1-j.

The most notable feature of complex-valued neural networks (CVNNs) is their compatibility with wave phenomena and wave information related to, for example, electromagnetic waves, light waves, electron waves, and sonic waves (Hirose, Citation2011). Furthermore, CVNNs are widely applied in coherent electromagnetic-wave signal processing. They are mainly used in the adaptive processing of interferometric synthetic aperture radar (InSAR) images captured by satellite or airplane to observe the land surface (Suksmono & Hirose, Citation2002; Yamaki & Hirose, Citation2009). Another important application field is sonic and ultrasonic processing, where pioneering work has been done in various directions (Zhang & Ma, Citation1997). In communication systems, CVNNs can be regarded as an extension of adaptive complex filters, i.e. a modular, multiple-stage, and nonlinear version; from this viewpoint, several groups have worked on time-sequential signal processing (Goh & Mandic, Citation2005, Citation2007). Furthermore, there are many ideas based on CVNNs in image processing; an example is adaptive processing for blur compensation by identifying the point spread function in the frequency domain (Aizenberg et al., Citation2008). Recently, many mathematicians and scientists have paid more attention to this field of research. Besides, CVNNs have different and more complicated properties than their real-valued counterparts, so it is necessary to study their dynamic behaviors deeply. Over the past decades, some work has been done on the dynamic behavior of the equilibrium points of various CVNNs. In Mostafa et al. (Citation2013), a local stability analysis of discrete-time, continuous-state, complex-valued recurrent neural networks with inner state feedback was presented. In Zhou and Song (Citation2013), the authors studied the boundedness and complete stability of complex-valued neural networks with time delay by using free-weighting matrices.

It is well known that dissipativity theory gives a framework for the design and analysis of control systems using an input–output description based on energy-related considerations (Jing, Yao, & Shen, Citation2014; Wu, Shi, Su, & Chu, Citation2013; Wu, Yang, & Lam, Citation2014), and it has become a powerful tool for characterizing important system behaviors such as stability. Passivity theory, being an effective tool for analyzing the stability of systems, has been applied in complexity analysis (Zhao, Song, & He, Citation2014) and signal processing, especially for high-order systems, and the passivity approach has long been used to deal with control problems (Chua, Citation1999). However, to the best of our knowledge, no results have been reported on the dissipativity and passivity analysis of discrete-time complex-valued neural networks with time-varying delay, which motivates the present study.

In this paper, we consider the problem of dissipativity and passivity analysis for discrete-time complex-valued neural networks with time-varying delay. Based on the lemma proposed in Zhou and Song (Citation2013), a condition is derived for the strict (Q,S,R)-dissipativity and passivity of the considered neural networks, which depends only on the discrete delay. For the established model, delay-dependent dissipativity and passivity conditions are derived, and the obtained linear matrix inequalities (LMIs) can be checked numerically using the effective LMI toolbox in MATLAB. The effectiveness of the proposed design is finally demonstrated by a numerical example.

The rest of this paper is organized as follows. The model description and preliminaries are given in Section 2. The dissipativity and passivity analysis for discrete-time complex-valued neural networks with time-varying delay is presented in Section 3. An illustrative example and its simulation results for the dissipativity conditions are given in Section 4, and conclusions are drawn in Section 5.

Notations: $\mathbb{C}^n$ and $\mathbb{R}^n$ denote, respectively, the $n$-dimensional complex space and Euclidean space. $z(k)=x(k)+iy(k)$ denotes a complex-valued function, where $x(k),y(k)\in\mathbb{R}^n$. $\mathbb{R}^{n\times m}$ is the set of real $n\times m$ matrices, and $I$ is the identity matrix of appropriate dimension. For any matrix $P$, $P>0$ ($P<0$) means $P$ is a positive definite (negative definite) matrix. The superscript $*$ denotes the matrix complex conjugate transpose, and $\mathrm{diag}\{\cdots\}$ stands for a block-diagonal matrix. Let $C([-\tau_2,0],D)$ be the Banach space of continuous functions mapping $[-\tau_2,0]$ into $D\subseteq\mathbb{C}^n$. For integers $a$ and $b$ with $a<b$, let $N[a,b]=\{a,a+1,\ldots,b-1,b\}$. $X^T$ represents the transpose of a matrix $X$, and $\Delta V(k)$ denotes the difference of a function $V(k)$, given by $\Delta V(k)=V(k+1)-V(k)$.

2. Model description and preliminaries

Consider the following discrete-time complex-valued neural network with time-varying delay:

(1) $z(k+1)=Af(z(k))+Bf(z(k-\tau(k)))+u(k),\qquad y(k)=f(z(k))$

where $z(k)=[z_1(k),z_2(k),\ldots,z_n(k)]^T\in\mathbb{C}^n$ is the neuron state vector; $A\in\mathbb{C}^{n\times n}$ and $B\in\mathbb{C}^{n\times n}$ are the connection weight matrix and the delayed connection weight matrix, respectively; $y(k)$ is the output of neural network (1); $u(k)=[u_1(k),u_2(k),\ldots,u_n(k)]^T\in\mathbb{C}^n$ is the input vector; the time delay $\tau(k)$ satisfies $\tau_1\le\tau(k)\le\tau_2$; and $f(z(k))=[f_1(z_1(k)),\ldots,f_n(z_n(k))]^T\in\mathbb{C}^n$ and $f(z(k-\tau(k)))=[f_1(z_1(k-\tau(k))),\ldots,f_n(z_n(k-\tau(k)))]^T\in\mathbb{C}^n$ are the complex-valued neuron activation functions without and with time delay. The initial conditions of the CVNN (1) are given by

$z_i(s)=\phi_i(s),\qquad s\in[-\tau_2,0],\qquad i=1,2,\ldots,n$

where $\phi_i\in C([-\tau_2,0],D)$ are continuous. The complex-valued parameters in the neural network can be represented as $a_{ij}=a_{ij}^R+ia_{ij}^I$ and $b_{ij}=b_{ij}^R+ib_{ij}^I$. Then (1) can be written as

(2) $z_i^R(k+1)=\sum_{j=1}^{n}a_{ij}^Rf(z_j^R(k))-\sum_{j=1}^{n}a_{ij}^If(z_j^I(k))+\sum_{j=1}^{n}b_{ij}^Rf(z_j^R(k-\tau(k)))-\sum_{j=1}^{n}b_{ij}^If(z_j^I(k-\tau(k)))+u_i^R(k)$
$\phantom{(2)}\ z_i^I(k+1)=\sum_{j=1}^{n}a_{ij}^Rf(z_j^I(k))+\sum_{j=1}^{n}a_{ij}^If(z_j^R(k))+\sum_{j=1}^{n}b_{ij}^Rf(z_j^I(k-\tau(k)))+\sum_{j=1}^{n}b_{ij}^If(z_j^R(k-\tau(k)))+u_i^I(k)$

where $z_i^R$ and $z_i^I$ are the real and imaginary parts of the variable $z_i$, respectively; $a_{ij}^R$ and $a_{ij}^I$ are the real and imaginary parts of the connection weight $a_{ij}$; $b_{ij}^R$ and $b_{ij}^I$ are the real and imaginary parts of the delayed connection weight $b_{ij}$; and $u_i^R(k)$ and $u_i^I(k)$ are the real and imaginary parts of $u_i(k)$. The connection weight matrices are represented as $A^R=(a_{ij}^R)_{n\times n}\in\mathbb{R}^{n\times n}$, $A^I=(a_{ij}^I)_{n\times n}\in\mathbb{R}^{n\times n}$, $B^R=(b_{ij}^R)_{n\times n}\in\mathbb{R}^{n\times n}$, and $B^I=(b_{ij}^I)_{n\times n}\in\mathbb{R}^{n\times n}$. Then, we have

$z^R(k+1)=A^Rf^R(k)-A^If^I(k)+B^Rf^R(k-\tau(k))-B^If^I(k-\tau(k))+u^R(k)$
$z^I(k+1)=A^Rf^I(k)+A^If^R(k)+B^Rf^I(k-\tau(k))+B^If^R(k-\tau(k))+u^I(k)$
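To make the recursion concrete, the following minimal Python sketch simulates model (1) by iterating the state update directly on complex vectors. The activation, delay sequence, input, and initial history are illustrative assumptions of ours (the connection matrices are borrowed from Example 4.1 below); this is not the authors' simulation setup.

```python
import numpy as np

# Connection weight matrices from Example 4.1 (Section 4).
A = np.array([[0.2 + 0.1j, -0.2 + 0.2j],
              [-0.1 + 0.1j, 0.2 + 0.0j]])
B = np.array([[-0.2 - 0.2j, -0.1 + 0.0j],
              [0.1 + 0.3j, -0.1 + 0.2j]])
n = 2

def f(z):
    # Activation split into real and imaginary parts, as in Assumption 2.1.
    return np.tanh(z.real) + 1j * np.tanh(z.imag)

tau1, tau2 = 2, 5
tau = lambda k: tau1 + (k % (tau2 - tau1 + 1))     # any integer delay in [tau1, tau2]
u = lambda k: 0.1 * np.exp(0.1j * k) * np.ones(n)  # an illustrative bounded input

K = 100
z = np.zeros((K + tau2 + 1, n), dtype=complex)
z[:tau2 + 1] = 1.0 + 1.0j                          # constant initial history phi_i

for k in range(tau2, K + tau2):
    # State update of model (1): z(k+1) = A f(z(k)) + B f(z(k - tau(k))) + u(k)
    z[k + 1] = A @ f(z[k]) + B @ f(z[k - tau(k)]) + u(k)

y = f(z)                                           # output y(k) = f(z(k))
```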

To derive the main results, we will introduce the following assumptions, definitions, and lemmas.

Assumption 2.1

The activation function $f_j(\cdot)$ can be separated into real and imaginary parts of the complex number $z$. It follows that $f_j(z)$ is expressed by

$f_j(z)=f_j^R(\mathrm{Re}(z))+if_j^I(\mathrm{Im}(z))$

where $f_j^R(\cdot),f_j^I(\cdot):\mathbb{R}\to\mathbb{R}$ for all $j=1,2,\ldots,n$. Then,

$l_j^{R-}\le\dfrac{f_j^R(\alpha_1)-f_j^R(\alpha_2)}{\alpha_1-\alpha_2}\le l_j^{R+},\qquad l_j^{I-}\le\dfrac{f_j^I(\alpha_1)-f_j^I(\alpha_2)}{\alpha_1-\alpha_2}\le l_j^{I+},\qquad\forall\,\alpha_1,\alpha_2\in\mathbb{R},\ \alpha_1\ne\alpha_2$
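As a quick illustration of Assumption 2.1 (ours, not the paper's), the split activation $f(z)=\tanh(\mathrm{Re}\,z)+i\tanh(\mathrm{Im}\,z)$ satisfies the difference-quotient bounds with $l_j^{R-}=l_j^{I-}=0$ and $l_j^{R+}=l_j^{I+}=1$, which the following check confirms numerically:

```python
import numpy as np

# Every difference quotient of tanh lies in [0, 1] (tanh is increasing
# and 1-Lipschitz), so Assumption 2.1 holds with l^- = 0 and l^+ = 1.
rng = np.random.default_rng(0)
a1, a2 = rng.standard_normal(10000), rng.standard_normal(10000)
mask = a1 != a2
q = (np.tanh(a1[mask]) - np.tanh(a2[mask])) / (a1[mask] - a2[mask])
assert np.all(q >= 0.0) and np.all(q <= 1.0)   # l^- <= quotient <= l^+
```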

Definition 2.1

Zhou and Song (Citation2013): The neural network (1) is said to be $(Q,S,R)$-dissipative if the dissipation inequality

(3) $\sum_{k=0}^{k_p}r(u(k),y(k))\ge 0,\qquad\forall\,k_p\ge 0$

holds under the zero initial condition for any nonzero input $u\in l_2[0,+\infty)$. Furthermore, if for some scalar $\gamma>0$ the dissipation inequality

(4) $\sum_{k=0}^{k_p}r(u(k),y(k))\ge\gamma\sum_{k=0}^{k_p}u^T(k)u(k),\qquad\forall\,k_p\ge 0$

holds under the zero initial condition for any nonzero input $u\in l_2[0,+\infty)$, then the neural network (1) is said to be strictly $(Q,S,R)$-$\gamma$-dissipative. In this paper, we define the quadratic supply rate $r(u,y)$ associated with neural network (1) as follows:

(5) $r(u,y)=y^TQy+2y^TSu+u^TRu$

where $Q\le 0$, $S$, and $R$ are real symmetric matrices of appropriate dimensions.

Definition 2.2

Wu et al. (Citation2011): The neural network (1) is said to be passive if there exists a scalar $\gamma>0$ such that, for all $k_0\ge 0$,

$2\sum_{j=0}^{k_0}y^T(j)u(j)\ge-\gamma\sum_{j=0}^{k_0}u^T(j)u(j)$

under the zero initial condition.
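Definitions 2.1 and 2.2 can be checked empirically along simulated trajectories. The sketch below is our illustration: `u_seq` and `y_seq` are assumed to come from a simulation such as the one in Section 2, and for complex-valued signals we take the real part of the quadratic forms. Note that a trajectory test of inequality (4) is only a necessary check, not a proof of dissipativity.

```python
import numpy as np

def supply_sum(u_seq, y_seq, Q, S, R):
    # sum_{k=0}^{kp} r(u(k), y(k)) with the supply rate r from (5).
    total = 0.0
    for u, y in zip(u_seq, y_seq):
        total += np.real(y.conj() @ Q @ y + 2 * y.conj() @ S @ u
                         + u.conj() @ R @ u)
    return total

def strictly_dissipative_along(u_seq, y_seq, Q, S, R, gamma):
    # Trajectory test of inequality (4): necessary, not sufficient.
    rhs = gamma * sum(np.real(u.conj() @ u) for u in u_seq)
    return supply_sum(u_seq, y_seq, Q, S, R) >= rhs

def passive_along(u_seq, y_seq, gamma):
    # Definition 2.2, i.e. the special case Q = 0, S = I, R = 2*gamma*I.
    lhs = 2 * sum(np.real(y.conj() @ u) for u, y in zip(u_seq, y_seq))
    return lhs >= -gamma * sum(np.real(u.conj() @ u) for u in u_seq)
```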

Lemma 2.1

Liu et al. (Citation2009): Let $X$ and $Y$ be any $n$-dimensional real vectors, and let $P$ be an $n\times n$ positive semidefinite matrix. Then the following inequality holds:

$2X^TPY\le X^TPX+Y^TPY$

Lemma 2.2

For any constant matrix $M\in\mathbb{R}^{n\times n}$ with $M=M^T>0$, integers $r_1$ and $r_2$ satisfying $r_2>r_1$, and a vector function $\omega:N[r_1,r_2]\to\mathbb{R}^n$ such that the sums concerned are well defined,

(6) $-(r_2-r_1+1)\sum_{j=r_1}^{r_2}\omega^T(j)M\omega(j)\le-\Big(\sum_{j=r_1}^{r_2}\omega(j)\Big)^TM\Big(\sum_{j=r_1}^{r_2}\omega(j)\Big)$

Proof

$-(r_2-r_1+1)\sum_{j=r_1}^{r_2}\omega^T(j)M\omega(j)$
$\quad=-\frac{1}{2}(r_2-r_1+1)\sum_{j=r_1}^{r_2}\omega^T(j)M\omega(j)-\frac{1}{2}(r_2-r_1+1)\sum_{i=r_1}^{r_2}\omega^T(i)M\omega(i)$
$\quad=-\frac{1}{2}\sum_{i=r_1}^{r_2}\sum_{j=r_1}^{r_2}\big[\omega^T(j)M\omega(j)+\omega^T(i)M\omega(i)\big]$
$\quad\le-\sum_{j=r_1}^{r_2}\sum_{i=r_1}^{r_2}\omega^T(j)M\omega(i)\qquad\text{[using Lemma 2.1]}$
$\quad=-\Big(\sum_{j=r_1}^{r_2}\omega^T(j)\Big)M\Big(\sum_{i=r_1}^{r_2}\omega(i)\Big)$
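As a sanity check (ours, not the paper's), inequality (6) can be verified numerically on random data:

```python
import numpy as np

rng = np.random.default_rng(0)
n, r1, r2 = 3, 2, 7
X = rng.standard_normal((n, n))
M = X @ X.T + n * np.eye(n)                # symmetric positive definite
w = rng.standard_normal((r2 - r1 + 1, n))  # omega(r1), ..., omega(r2)

lhs = -(r2 - r1 + 1) * sum(wi @ M @ wi for wi in w)
s = w.sum(axis=0)
rhs = -(s @ M @ s)
assert lhs <= rhs                          # inequality (6)
```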

Lemma 2.3

Let $M\in\mathbb{R}^{n\times n}$ be a positive semidefinite matrix, $\xi_j\in\mathbb{R}^n$, and let the scalar constants satisfy $a_j\ge 0$ $(j=1,2,\ldots)$. If the series concerned are convergent, then the following inequality holds:

(7) $\Big(\sum_{j=1}^{\infty}a_j\xi_j\Big)^TM\Big(\sum_{j=1}^{\infty}a_j\xi_j\Big)\le\Big(\sum_{j=1}^{\infty}a_j\Big)\sum_{j=1}^{\infty}a_j\xi_j^TM\xi_j$

Proof

Letting $m$ be a positive integer, we have

$\Big(\sum_{j=1}^{m}a_j\xi_j\Big)^TM\Big(\sum_{j=1}^{m}a_j\xi_j\Big)=\Big(\sum_{j=1}^{m}a_j\xi_j\Big)^TM\Big(\sum_{i=1}^{m}a_i\xi_i\Big)=\sum_{j=1}^{m}\sum_{i=1}^{m}a_ja_i\,\xi_j^TM\xi_i\le\sum_{j=1}^{m}\sum_{i=1}^{m}\frac{1}{2}a_ja_i\big(\xi_j^TM\xi_j+\xi_i^TM\xi_i\big)=\Big(\sum_{j=1}^{m}a_j\Big)\sum_{j=1}^{m}a_j\xi_j^TM\xi_j\qquad\text{(by Lemma 2.1)}$

and then (7) follows directly by letting $m\to\infty$, which completes the proof.
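Inequality (7) can be checked the same way on a truncated series (again our illustration, with $m$ finite):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 3, 10
X = rng.standard_normal((n, n))
M = X @ X.T                                # positive semidefinite
a = rng.random(m)                          # a_j >= 0
xi = rng.standard_normal((m, n))

v = sum(aj * xj for aj, xj in zip(a, xi))
lhs = v @ M @ v
rhs = a.sum() * sum(aj * (xj @ M @ xj) for aj, xj in zip(a, xi))
assert lhs <= rhs + 1e-9                   # inequality (7), up to rounding
```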

Lemma 2.4

Given a Hermitian matrix $Q$, the inequality $Q<0$ is equivalent to

$\begin{bmatrix}Q^R & -Q^I\\ Q^I & Q^R\end{bmatrix}<0$

where $Q^R=\mathrm{Re}(Q)$ and $Q^I=\mathrm{Im}(Q)$.
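Lemma 2.4 is what allows the complex condition (8) below to be tested with real-valued solvers. A small numerical confirmation on a random Hermitian matrix (our illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3
Z = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
Q = -(Z @ Z.conj().T) - np.eye(n)          # Hermitian, negative definite

QR, QI = Q.real, Q.imag
aug = np.block([[QR, -QI],
                [QI,  QR]])                # real representation of Q

assert np.all(np.linalg.eigvalsh(Q) < 0)   # Q < 0 ...
assert np.all(np.linalg.eigvalsh(aug) < 0) # ... iff the augmentation is < 0
```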

3. Main results

In this section, we derive the dissipativity criterion for the discrete-time complex-valued neural network (1) with time-varying delay using the Lyapunov functional method combined with the LMI approach. For convenience, we use the following notations:

$\psi(k)=\frac{1}{\alpha(k)}\sum_{i=k-\tau(k)}^{k-\tau_1-1}z(i),\qquad\phi(k)=\frac{1}{\beta(k)}\sum_{i=k-\tau_2}^{k-\tau(k)-1}z(i),\qquad M=\frac{\tau_1^4}{4}G+\frac{(\tau_2-\tau_1)^2}{4}H,\qquad\alpha(k)=\tau(k)-\tau_1,\qquad\beta(k)=\tau_2-\tau(k).$

Table 1 describes the matrices, along with their dimensions, that are used in the following Theorem 3.1.

Theorem 3.1

Assume that Assumption 2.1 holds. Then the complex-valued neural network (1) is strictly $(Q,S,R)$-$\gamma$-dissipative if there exist positive definite Hermitian matrices $P=P_1+iP_2$, $Q=Q_1+iQ_2$, $R=R_1+iR_2$, $S=S_1+iS_2$, $T=T_1+iT_2$, $U=U_1+iU_2$, $V=V_1+iV_2$, $W=W_1+iW_2$, $G=G_1+iG_2$, $H=H_1+iH_2$, two positive diagonal matrices $F_1>0$, $F_2>0$, and a scalar $\gamma>0$ such that the following LMI holds:

(8) $\Theta=\begin{bmatrix}\Theta^R & -\Theta^I\\ \Theta^I & \Theta^R\end{bmatrix}<0$

where

(9) $\Theta^R=\begin{bmatrix}\Theta^R_{1,1}&0&0&0&\Theta^R_{1,5}&\Theta^R_{1,6}&\Theta^R_{1,7}&0&0&0&\Theta^R_{1,11}\\ *&\Theta^R_{2,2}&0&0&0&0&0&0&\Theta^R_{2,9}&0&0\\ *&*&\Theta^R_{3,3}&0&0&0&0&0&\Theta^R_{3,9}&0&0\\ *&*&*&\Theta^R_{4,4}&0&0&0&0&0&0&0\\ *&*&*&*&\Theta^R_{5,5}&\Theta^R_{5,6}&0&0&0&0&\Theta^R_{5,11}\\ *&*&*&*&*&\Theta^R_{6,6}&0&0&0&0&\Theta^R_{6,11}\\ *&*&*&*&*&*&\Theta^R_{7,7}&0&0&0&0\\ *&*&*&*&*&*&*&\Theta^R_{8,8}&0&0&0\\ *&*&*&*&*&*&*&*&\Theta^R_{9,9}&0&0\\ *&*&*&*&*&*&*&*&*&\Theta^R_{10,10}&0\\ *&*&*&*&*&*&*&*&*&*&\Theta^R_{11,11}\end{bmatrix}$

and

(10) $\Theta^I=\begin{bmatrix}\Theta^I_{1,1}&0&0&0&\Theta^I_{1,5}&\Theta^I_{1,6}&\Theta^I_{1,7}&0&0&0&\Theta^I_{1,11}\\ *&\Theta^I_{2,2}&0&0&0&0&0&0&\Theta^I_{2,9}&0&0\\ *&*&\Theta^I_{3,3}&0&0&0&0&0&\Theta^I_{3,9}&0&0\\ *&*&*&\Theta^I_{4,4}&0&0&0&0&0&0&0\\ *&*&*&*&\Theta^I_{5,5}&\Theta^I_{5,6}&0&0&0&0&\Theta^I_{5,11}\\ *&*&*&*&*&\Theta^I_{6,6}&0&0&0&0&\Theta^I_{6,11}\\ *&*&*&*&*&*&\Theta^I_{7,7}&0&0&0&0\\ *&*&*&*&*&*&*&\Theta^I_{8,8}&0&0&0\\ *&*&*&*&*&*&*&*&\Theta^I_{9,9}&0&0\\ *&*&*&*&*&*&*&*&*&\Theta^I_{10,10}&0\\ *&*&*&*&*&*&*&*&*&*&\Theta^I_{11,11}\end{bmatrix}$

in which the blocks marked $*$ are induced by $\Theta$ being Hermitian ($\Theta^R$ symmetric and $\Theta^I$ skew-symmetric as block matrices),

with
$\Theta^R_{1,1}=-P_1+Q_1+R_1+T_1+\tau_1^2U_1+\tau_2^2V_1+M_1-\tau_1^2G_1+F_1\Gamma$
$\Theta^R_{1,5}=-A_1^TM_1-A_2^TM_2,\quad\Theta^R_{1,6}=-B_1^TM_1-B_2^TM_2,\quad\Theta^R_{1,7}=-\tau_1G_1,\quad\Theta^R_{1,11}=-M_1$
$\Theta^R_{2,2}=-Q_1+S_1-H_1,\quad\Theta^R_{2,9}=H_1,\quad\Theta^R_{3,3}=-H_1-R_1-S_1,\quad\Theta^R_{3,9}=H_1$
$\Theta^R_{4,4}=-T_1+B_1^TP_1B_1-B_1^TP_2B_2+B_2^TP_2B_1+B_2^TP_1B_2-F_2\Gamma$
$\Theta^R_{5,5}=A_1^TP_1A_1-A_1^TP_2A_2+A_2^TP_2A_1+A_2^TP_1A_2+A_1^TM_1A_1-A_1^TM_2A_2+A_2^TM_2A_1+A_2^TM_1A_2-Q_1-F_1$
$\Theta^R_{5,6}=A_1^TP_1B_1-A_1^TP_2B_2+A_2^TP_2B_1+A_2^TP_1B_2$
$\Theta^R_{5,11}=P_1A_1-P_2A_2+M_1A_1-M_2A_2-I-S_1$
$\Theta^R_{6,6}=B_1^TM_1B_1-B_1^TM_2B_2+B_2^TM_2B_1+B_2^TP_1B_2-F_2,\quad\Theta^R_{6,11}=B_1^TP_1+B_2^TP_2$
$\Theta^R_{7,7}=-U_1+G_1,\quad\Theta^R_{8,8}=-V_1,\quad\Theta^R_{9,9}=-H_1,\quad\Theta^R_{10,10}=-H_1,\quad\Theta^R_{11,11}=-R_1+\gamma I+P_1+M_1$
$\Theta^I_{1,1}=-P_2+Q_2+R_2+T_2+\tau_1^2U_2+\tau_2^2V_2+M_2-\tau_1^2G_2$
$\Theta^I_{1,5}=-A_1^TM_2-A_2^TM_1,\quad\Theta^I_{1,6}=-B_1^TM_2-B_2^TM_1,\quad\Theta^I_{1,7}=-\tau_1G_2,\quad\Theta^I_{1,11}=-M_2$
$\Theta^I_{2,2}=-Q_2+S_2-H_2,\quad\Theta^I_{2,9}=H_2,\quad\Theta^I_{3,3}=-H_2-R_2-S_2,\quad\Theta^I_{3,9}=H_2$
$\Theta^I_{4,4}=-T_2+B_1^TP_2B_1+B_1^TP_1B_2-B_2^TP_1B_1+B_2^TP_2B_2$
$\Theta^I_{5,5}=A_1^TP_2A_1+A_1^TP_1A_2-A_2^TP_1A_1+A_2^TP_2A_2+A_1^TM_2A_1+A_1^TM_1A_2-A_2^TM_1A_1+A_2^TM_2A_2-Q_2$
$\Theta^I_{5,6}=A_1^TP_2B_1+A_1^TP_1B_2-A_2^TP_1B_1+A_2^TP_2B_2$
$\Theta^I_{5,11}=P_1A_2-P_2A_1+M_1A_2-M_2A_1-S_2$
$\Theta^I_{6,6}=B_1^TM_2B_1+B_1^TM_1B_2-B_2^TM_1B_1+B_2^TP_2B_2,\quad\Theta^I_{6,11}=B_1^TP_2-B_2^TP_1$
$\Theta^I_{7,7}=-U_2+G_2,\quad\Theta^I_{8,8}=-V_2,\quad\Theta^I_{9,9}=-H_2,\quad\Theta^I_{10,10}=-H_2,\quad\Theta^I_{11,11}=-R_2+P_2+M_2$

Proof

Defining $\eta(k)=z(k+1)-z(k)$, we consider the following Lyapunov–Krasovskii functional for neural network (1):

$V(k)=\sum_{i=1}^{5}V_i(k)$

where
$V_1(k)=z^*(k)Pz(k)$
$V_2(k)=\sum_{i=k-\tau_1}^{k-1}z^*(i)Qz(i)+\sum_{i=k-\tau_2}^{k-1}z^*(i)Rz(i)+\sum_{i=k-\tau_2}^{k-\tau_1-1}z^*(i)Sz(i)+\sum_{i=k-\tau(k)}^{k-1}z^*(i)Tz(i)$
$V_3(k)=\tau_1\sum_{j=-\tau_1}^{-1}\sum_{i=k+j}^{k-1}z^*(i)Uz(i)+\tau_2\sum_{j=-\tau_2}^{-1}\sum_{i=k+j}^{k-1}z^*(i)Vz(i)$
$V_4(k)=(\tau_2-\tau_1)\sum_{j=-\tau_2}^{-\tau_1-1}\sum_{i=k+j}^{k-1}z^*(i)Wz(i)+\sum_{j=\tau_1+1}^{\tau_2}\sum_{i=k-j}^{k-1}z^*(i)Tz(i)$
$V_5(k)=\frac{\tau_1^2}{2}\sum_{i=-\tau_1}^{-1}\sum_{j=i}^{0}\sum_{l=k+j}^{k-1}\eta^*(l)G\eta(l)+\sum_{i=-\tau_2}^{-\tau_1-1}\sum_{j=i}^{0}\sum_{l=k+j}^{k-1}\eta^*(l)H\eta(l)$

Letting $\Delta V(k)=V(k+1)-V(k)$, along the solution of neural network (1) we have

(11)
$\Delta V_1(k)=z^*(k+1)Pz(k+1)-z^*(k)Pz(k)=f^*(z(k))A^*PAf(z(k))+f^*(z(k-\tau(k)))B^*PBf(z(k-\tau(k)))+u^*(k)Pu(k)+2f^*(z(k))A^*PBf(z(k-\tau(k)))+2f^*(z(k-\tau(k)))B^*Pu(k)+2u^*(k)PAf(z(k))-z^*(k)Pz(k)$
$\Delta V_2(k)=z^*(k)(Q+R+T)z(k)+z^*(k-\tau_1)(-Q+S)z(k-\tau_1)+z^*(k-\tau_2)(-R-S)z(k-\tau_2)-z^*(k-\tau(k))Tz(k-\tau(k))+\sum_{i=k+1-\tau(k+1)}^{k-\tau(k)}z^*(i)Tz(i)$
$\Delta V_3(k)\le z^*(k)(\tau_1^2U+\tau_2^2V)z(k)-\Big[\sum_{i=k-\tau_1}^{k-1}z(i)\Big]^*U\Big[\sum_{i=k-\tau_1}^{k-1}z(i)\Big]-\Big[\sum_{i=k-\tau_2}^{k-1}z(i)\Big]^*V\Big[\sum_{i=k-\tau_2}^{k-1}z(i)\Big]$
$\Delta V_4(k)\le z^*(k)\big[(\tau_2-\tau_1)^2W+(\tau_2-\tau_1)T\big]z(k)-\Big[\sum_{i=k-\tau_2}^{k-\tau_1-1}z(i)\Big]^*W\Big[\sum_{i=k-\tau_2}^{k-\tau_1-1}z(i)\Big]-\sum_{i=k+1-\tau_2}^{k-\tau_1}z^*(i)Tz(i)$
$\Delta V_5(k)\le z^*(k+1)\Big[\frac{\tau_1^4}{4}G+\frac{(\tau_2-\tau_1)^2}{4}H\Big]z(k+1)+z^*(k)\Big[\frac{\tau_1^4}{4}G+\frac{(\tau_2-\tau_1)^2}{4}H\Big]z(k)-2z^*(k+1)\Big[\frac{\tau_1^4}{4}G+\frac{(\tau_2-\tau_1)^2}{4}H\Big]z(k)-\Big[\tau_1z(k)-\sum_{i=k-\tau_1}^{k-1}z(i)\Big]^*G\Big[\tau_1z(k)-\sum_{i=k-\tau_1}^{k-1}z(i)\Big]-\big(z(k-\tau_1)-\psi(k)\big)^*H\big(z(k-\tau_1)-\psi(k)\big)-\big(z(k-\tau_2)-\phi(k)\big)^*H\big(z(k-\tau_2)-\phi(k)\big)$

Expanding the right-hand side along (1), with $M=\frac{\tau_1^4}{4}G+\frac{(\tau_2-\tau_1)^2}{4}H$, gives

$\Delta V_5(k)\le f^*(z(k))A^*MAf(z(k))+f^*(z(k-\tau(k)))B^*MBf(z(k-\tau(k)))+u^*(k)Mu(k)+2f^*(z(k))A^*MBf(z(k-\tau(k)))+2f^*(z(k-\tau(k)))B^*Mu(k)+2u^*(k)MAf(z(k))+z^*(k)Mz(k)-2f^*(z(k))A^*Mz(k)-2f^*(z(k-\tau(k)))B^*Mz(k)-2u^*(k)Mz(k)-\tau_1^2z^*(k)Gz(k)-\Big[\sum_{i=k-\tau_1}^{k-1}z(i)\Big]^*G\Big[\sum_{i=k-\tau_1}^{k-1}z(i)\Big]-2\tau_1z^*(k)G\Big[\sum_{i=k-\tau_1}^{k-1}z(i)\Big]-z^*(k-\tau_1)Hz(k-\tau_1)-z^*(k-\tau_2)Hz(k-\tau_2)+2z^*(k-\tau_1)H\psi(k)+2z^*(k-\tau_2)H\phi(k)-\psi^*(k)H\psi(k)-\phi^*(k)H\phi(k)$

Furthermore, from Assumption 2.1, the activation function $f_j(\cdot)$ of (1) satisfies $l_j^{R-}\le\frac{f_j^R(x_j(k))}{x_j(k)}\le l_j^{R+}$ and $l_j^{I-}\le\frac{f_j^I(y_j(k))}{y_j(k)}\le l_j^{I+}$ for all $j=1,2,\ldots,n$. Hence, we have

(12) $|f_j^R(x_j(k))|\le g_j^R|x_j(k)|,\qquad |f_j^I(y_j(k))|\le g_j^I|y_j(k)|$

where $g_j^R=\max\{|l_j^{R-}|,|l_j^{R+}|\}$ and $g_j^I=\max\{|l_j^{I-}|,|l_j^{I+}|\}$ for all $j=1,2,\ldots,n$.

From (12), we get

(13) $s_jf_j^*(z_j(k))f_j(z_j(k))\le s_jg_j^2z_j^*(k)z_j(k)$

where $g_j=\max\{g_j^R,g_j^I\}$ and $s_j$ is a positive constant for all $j=1,2,\ldots,n$. Therefore, we can write the vector form of (13) as follows:

(14) $f^*(z(k))F_1f(z(k))\le z^*(k)F_1\Gamma z(k)$, that is, $f^*(z(k))F_1f(z(k))-z^*(k)F_1\Gamma z(k)\le 0$

where $F_1=\mathrm{diag}\{s_1,s_2,\ldots,s_n\}$.

Similarly,

(15) $f^*(z(k-\tau(k)))F_2f(z(k-\tau(k)))\le z^*(k-\tau(k))F_2\Gamma z(k-\tau(k))$, that is, $f^*(z(k-\tau(k)))F_2f(z(k-\tau(k)))-z^*(k-\tau(k))F_2\Gamma z(k-\tau(k))\le 0$

where $F_2=\mathrm{diag}\{s_1,s_2,\ldots,s_n\}$ and $\Gamma=\mathrm{diag}\{g_1^2,g_2^2,\ldots,g_n^2\}$.

Now, $\Delta V(k)=\Delta V_1(k)+\Delta V_2(k)+\Delta V_3(k)+\Delta V_4(k)+\Delta V_5(k)+0+0$, where the two zero terms stand for the nonpositive left-hand sides of (14) and (15) added to the bound.

Substituting the expressions in (11) into $\Delta V(k)$ and using the inequalities (14) and (15) on the right-hand side of $\Delta V(k)$, we get

(16) $\Delta V(k)-y^*(k)Qy(k)-2y^*(k)Su(k)-u^*(k)(R-\gamma I)u(k)\le\xi^*(k)\Theta\xi(k)$

where

$\xi(k)=\mathrm{col}\Big[z(k),\ z(k-\tau_1),\ z(k-\tau_2),\ z(k-\tau(k)),\ f(z(k)),\ f(z(k-\tau(k))),\ \sum_{i=k-\tau_1}^{k-1}z(i),\ \sum_{i=k-\tau_2}^{k-1}z(i),\ \psi(k),\ \phi(k),\ u(k)\Big]$.

Thus,

(17) $\sum_{k=0}^{k_p}\Big[\Delta V(k)-y^*(k)Qy(k)-2u^*(k)Sy(k)-u^*(k)(R-\gamma I)u(k)\Big]\le\sum_{k=0}^{k_p}\xi^*(k)\Theta\xi(k)$

for all kpN.

Suppose $\Theta<0$; then (17) yields

$\sum_{k=0}^{k_p}\Delta V(k)\le\sum_{k=0}^{k_p}\big[y^*(k)Qy(k)+2u^*(k)Sy(k)+u^*(k)(R-\gamma I)u(k)\big]$

that is,

$V(z(k_p+1))-V(z(0))\le\sum_{k=0}^{k_p}\big[y^*(k)Qy(k)+2u^*(k)Sy(k)+u^*(k)(R-\gamma I)u(k)\big]$

for all kpN.

Thus (4) holds under the zero initial condition. Therefore, according to Definition 2.1, neural network (1) is strictly $(Q,S,R)$-$\gamma$-dissipative. This completes the proof.

The LMI obtained in Theorem 3.1 ensures the $(Q,S,R)$-$\gamma$-dissipativity of the discrete-time complex-valued neural network (1). Further, we specialize Theorem 3.1 to obtain passivity conditions for system (1) by setting $Q=0$, $S=I$, and $R=2\gamma I$. The derived passivity conditions are presented in the following corollary.
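In practice, a condition like (8) is checked by declaring the real and imaginary parts of the Hermitian unknowns as real matrix variables and imposing the augmented real LMI of Lemma 2.4. The sketch below illustrates these mechanics in Python with cvxpy (our choice of solver stack; the paper uses the MATLAB LMI toolbox) on a deliberately simplified stand-in LMI, the discrete Lyapunov inequality $A^*PA-P<0$ for the complex $A$ of Example 4.1; Theorem 3.1's $11\times 11$ block matrix would be assembled entry by entry in exactly the same way.

```python
import cvxpy as cp
import numpy as np

A = np.array([[0.2 + 0.1j, -0.2 + 0.2j],
              [-0.1 + 0.1j, 0.2 + 0.0j]])
n = A.shape[0]
AR, AI = A.real, A.imag
Aaug = np.block([[AR, -AI], [AI, AR]])       # real augmentation of A (Lemma 2.4)

# Real augmentation of the Hermitian unknown P = P1 + i*P2:
# Paug = [[P1, -P2], [P2, P1]] with P1 symmetric and P2 skew-symmetric.
Paug = cp.Variable((2 * n, 2 * n), symmetric=True)
structure = [Paug[:n, :n] == Paug[n:, n:],   # both diagonal blocks equal P1
             Paug[:n, n:] == -Paug[n:, :n]]  # off-diagonal blocks are -P2, P2

eps = 1e-6
lmi = [Paug >> eps * np.eye(2 * n),                          # P > 0
       Aaug.T @ Paug @ Aaug - Paug << -eps * np.eye(2 * n)]  # A*PA - P < 0

prob = cp.Problem(cp.Minimize(0), structure + lmi)
prob.solve(solver=cp.SCS)
print(prob.status)                           # 'optimal' => stand-in LMI feasible
P1 = Paug.value[:n, :n]                      # recover Re(P)
P2 = Paug.value[n:, :n]                      # recover Im(P)
```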

Corollary 3.2

Assume that Assumption 2.1 holds. Then the complex-valued neural network (1) is passive if there exist positive definite Hermitian matrices $P=P_1+iP_2$, $Q=Q_1+iQ_2$, $R=R_1+iR_2$, $S=S_1+iS_2$, $T=T_1+iT_2$, $U=U_1+iU_2$, $V=V_1+iV_2$, $W=W_1+iW_2$, $G=G_1+iG_2$, $H=H_1+iH_2$, two positive diagonal matrices $F_1>0$, $F_2>0$, and a scalar $\gamma>0$ such that the following LMI holds:

(18) $\Sigma=\begin{bmatrix}\Sigma^R & -\Sigma^I\\ \Sigma^I & \Sigma^R\end{bmatrix}<0$

where

(19) $\Sigma^R=\begin{bmatrix}\Sigma^R_{1,1}&0&0&0&\Sigma^R_{1,5}&\Sigma^R_{1,6}&\Sigma^R_{1,7}&0&0&0&\Sigma^R_{1,11}\\ *&\Sigma^R_{2,2}&0&0&0&0&0&0&\Sigma^R_{2,9}&0&0\\ *&*&\Sigma^R_{3,3}&0&0&0&0&0&\Sigma^R_{3,9}&0&0\\ *&*&*&\Sigma^R_{4,4}&0&0&0&0&0&0&0\\ *&*&*&*&\Sigma^R_{5,5}&\Sigma^R_{5,6}&0&0&0&0&\Sigma^R_{5,11}\\ *&*&*&*&*&\Sigma^R_{6,6}&0&0&0&0&\Sigma^R_{6,11}\\ *&*&*&*&*&*&\Sigma^R_{7,7}&0&0&0&0\\ *&*&*&*&*&*&*&\Sigma^R_{8,8}&0&0&0\\ *&*&*&*&*&*&*&*&\Sigma^R_{9,9}&0&0\\ *&*&*&*&*&*&*&*&*&\Sigma^R_{10,10}&0\\ *&*&*&*&*&*&*&*&*&*&\Sigma^R_{11,11}\end{bmatrix}$

and

(20) $\Sigma^I=\begin{bmatrix}\Sigma^I_{1,1}&0&0&0&\Sigma^I_{1,5}&\Sigma^I_{1,6}&\Sigma^I_{1,7}&0&0&0&\Sigma^I_{1,11}\\ *&\Sigma^I_{2,2}&0&0&0&0&0&0&\Sigma^I_{2,9}&0&0\\ *&*&\Sigma^I_{3,3}&0&0&0&0&0&\Sigma^I_{3,9}&0&0\\ *&*&*&\Sigma^I_{4,4}&0&0&0&0&0&0&0\\ *&*&*&*&\Sigma^I_{5,5}&\Sigma^I_{5,6}&0&0&0&0&\Sigma^I_{5,11}\\ *&*&*&*&*&\Sigma^I_{6,6}&0&0&0&0&\Sigma^I_{6,11}\\ *&*&*&*&*&*&\Sigma^I_{7,7}&0&0&0&0\\ *&*&*&*&*&*&*&\Sigma^I_{8,8}&0&0&0\\ *&*&*&*&*&*&*&*&\Sigma^I_{9,9}&0&0\\ *&*&*&*&*&*&*&*&*&\Sigma^I_{10,10}&0\\ *&*&*&*&*&*&*&*&*&*&\Sigma^I_{11,11}\end{bmatrix}$

in which the blocks marked $*$ are defined as in (9) and (10),

with
$\Sigma^R_{1,1}=\Theta^R_{1,1},\ \Sigma^R_{1,5}=\Theta^R_{1,5},\ \Sigma^R_{1,6}=\Theta^R_{1,6},\ \Sigma^R_{1,7}=\Theta^R_{1,7},\ \Sigma^R_{1,11}=\Theta^R_{1,11},\ \Sigma^R_{2,2}=\Theta^R_{2,2},\ \Sigma^R_{2,9}=\Theta^R_{2,9}$
$\Sigma^R_{3,3}=\Theta^R_{3,3},\ \Sigma^R_{3,9}=\Theta^R_{3,9},\ \Sigma^R_{4,4}=\Theta^R_{4,4},\ \Sigma^R_{5,6}=\Theta^R_{5,6},\ \Sigma^R_{6,6}=\Theta^R_{6,6},\ \Sigma^R_{6,11}=\Theta^R_{6,11},\ \Sigma^R_{7,7}=\Theta^R_{7,7}$
$\Sigma^R_{8,8}=\Theta^R_{8,8},\ \Sigma^R_{9,9}=\Theta^R_{9,9},\ \Sigma^R_{10,10}=\Theta^R_{10,10}$
$\Sigma^I_{1,1}=\Theta^I_{1,1},\ \Sigma^I_{1,5}=\Theta^I_{1,5},\ \Sigma^I_{1,6}=\Theta^I_{1,6},\ \Sigma^I_{1,7}=\Theta^I_{1,7},\ \Sigma^I_{1,11}=\Theta^I_{1,11},\ \Sigma^I_{2,2}=\Theta^I_{2,2},\ \Sigma^I_{2,9}=\Theta^I_{2,9}$
$\Sigma^I_{3,3}=\Theta^I_{3,3},\ \Sigma^I_{3,9}=\Theta^I_{3,9},\ \Sigma^I_{4,4}=\Theta^I_{4,4},\ \Sigma^I_{6,6}=\Theta^I_{6,6},\ \Sigma^I_{6,11}=\Theta^I_{6,11},\ \Sigma^I_{7,7}=\Theta^I_{7,7},\ \Sigma^I_{8,8}=\Theta^I_{8,8},\ \Sigma^I_{9,9}=\Theta^I_{9,9},\ \Sigma^I_{10,10}=\Theta^I_{10,10}$
$\Sigma^R_{5,5}=A_1^TP_1A_1-A_1^TP_2A_2+A_2^TP_2A_1+A_2^TP_1A_2+A_1^TM_1A_1-A_1^TM_2A_2+A_2^TM_2A_1+A_2^TM_1A_2-F_1$
$\Sigma^R_{5,11}=P_1A_1-P_2A_2+M_1A_1-M_2A_2-I,\qquad\Sigma^R_{11,11}=-\gamma I+P_1+M_1$
$\Sigma^I_{5,5}=A_1^TP_2A_1+A_1^TP_1A_2-A_2^TP_1A_1+A_2^TP_2A_2+A_1^TM_2A_1+A_1^TM_1A_2-A_2^TM_1A_1+A_2^TM_2A_2$
$\Sigma^I_{5,6}=A_1^TP_2B_1+A_1^TP_1B_2-A_2^TP_1B_1+A_2^TP_2B_2,\qquad\Sigma^I_{5,11}=P_1A_2-P_2A_1+M_1A_2-M_2A_1-I$
$\Sigma^I_{11,11}=P_2+M_2$

Proof

The proof is similar to that of Theorem 3.1 and is hence omitted.

4. Numerical examples

In this section, we give an example showing the effectiveness of the established theory.

Example 4.1

Consider the discrete-time complex-valued neural network (1) with the interconnection matrices

$A=\begin{bmatrix}0.2+0.1j & -0.2+0.2j\\ -0.1+0.1j & 0.2\end{bmatrix},\qquad B=\begin{bmatrix}-0.2-0.2j & -0.1\\ 0.1+0.3j & -0.1+0.2j\end{bmatrix}$
$Q=\begin{bmatrix}5 & 2.2+1.4i\\ 2.2-1.4i & 3\end{bmatrix},\qquad R=\begin{bmatrix}4.5 & -0.5-i\\ -0.5+i & 2.5\end{bmatrix},\qquad S=\begin{bmatrix}0.3+0.5i & -0.6-0.2i\\ 0.4-0.6i & 0.5+0.3i\end{bmatrix}$

Here, the activation functions are taken as $f(x)=\frac{1-e^{-x}}{1+e^{x}}$ and $f(y)=\frac{1}{1+e^{y}}$, with $F_1=\mathrm{diag}\{0.01,0.01\}$, $F_2=\mathrm{diag}\{0.1,0.1\}$, and $\Gamma=\mathrm{diag}\{0.1,0.1\}$. Taking $\tau_1(k)=2.5+0.5\sin(0.5k\pi)$ and $\tau_2(k)=4.5+0.5\sin(0.5k\pi)$ and using the MATLAB LMI control toolbox for LMI (8), feasible matrices are found as

$P_1=\begin{bmatrix}17.1077 & 3.3520\\ 3.3520 & 20.3736\end{bmatrix},\ P_2=\begin{bmatrix}3.4822 & 6.9148\\ 6.9148 & 18.1144\end{bmatrix},\ Q_1=\begin{bmatrix}2.2241 & 0.8264\\ 0.8264 & 3.1205\end{bmatrix},\ Q_2=\begin{bmatrix}1.7735 & 0.6018\\ 0.6018 & 3.0145\end{bmatrix}$
$R_1=\begin{bmatrix}0.9142 & 0.3672\\ 0.3672 & 1.3126\end{bmatrix},\ R_2=\begin{bmatrix}0.7175 & 0.2633\\ 0.2633 & 1.2604\end{bmatrix},\ S_1=\begin{bmatrix}1.3086 & 0.4592\\ 0.4592 & 1.8068\end{bmatrix},\ S_2=\begin{bmatrix}1.0548 & 0.3385\\ 0.3385 & 1.7529\end{bmatrix}$
$T_1=\begin{bmatrix}1.9012 & 0.6074\\ 0.6074 & 2.5871\end{bmatrix},\ T_2=\begin{bmatrix}2.3327 & 1.1674\\ 1.1674 & 3.9043\end{bmatrix},\ U_1=\begin{bmatrix}0.3527 & 0.1549\\ 0.1549 & 0.5209\end{bmatrix},\ U_2=\begin{bmatrix}0.2547 & 0.1052\\ 0.1052 & 0.4923\end{bmatrix}$
$V_1=\begin{bmatrix}0.0626 & 0.0309\\ 0.0309 & 0.0985\end{bmatrix},\ V_2=\begin{bmatrix}0.0587 & 0.0240\\ 0.0240 & 0.0951\end{bmatrix},\ W_1=\begin{bmatrix}0.1692 & 0.0778\\ 0.0778 & 0.2535\end{bmatrix},\ W_2=\begin{bmatrix}0.1288 & 0.0539\\ 0.0539 & 0.2400\end{bmatrix}$
$G_1=\begin{bmatrix}0.0338 & 0.0231\\ 0.0231 & 0.0585\end{bmatrix},\ G_2=\begin{bmatrix}0.0615 & 0.0236\\ 0.0236 & 0.0608\end{bmatrix},\ H_1=\begin{bmatrix}0.1549 & 0.1186\\ 0.1186 & 0.2980\end{bmatrix},\ H_2=\begin{bmatrix}0.4757 & 0.1626\\ 0.1626 & 0.2913\end{bmatrix}$

Setting the initial states as $x_{11}=2+2j$ and $x_{12}=-1-j$, Figures 1 and 2 show that model (1) with the parameters given above is dissipative in the sense of Definition 2.1 with $\gamma=58.9167$; the state curves for the real and imaginary parts of the discrete-time complex-valued neural network (1) are given there. When $\tau_1(k)=3.5+0.5\sin(0.5k\pi)$ and $\tau_2(k)=5.5+0.5\sin(0.5k\pi)$, the LMI (8) in Theorem 3.1 is not feasible and hence the CVNN (1) is not $(Q,S,R)$-dissipative; in this case, Figures 3 and 4 describe the unstable behavior of the trajectories of the CVNN (1).
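For reference, the following Python sketch (our reconstruction, not the authors' code) reproduces the qualitative behavior of Figures 1 and 2 by simulating Example 4.1 with $u(k)=0$. Since the sinusoidal delays are non-integer, we round them to the nearest integer step, and the two delays $\tau_1(k)$, $\tau_2(k)$ are applied componentwise, as the figure captions suggest; both are assumptions of this sketch.

```python
import numpy as np
import matplotlib.pyplot as plt

A = np.array([[0.2 + 0.1j, -0.2 + 0.2j],
              [-0.1 + 0.1j, 0.2 + 0.0j]])
B = np.array([[-0.2 - 0.2j, -0.1 + 0.0j],
              [0.1 + 0.3j, -0.1 + 0.2j]])

def f(z):
    # Activation from Example 4.1, applied to real and imaginary parts.
    return (1 - np.exp(-z.real)) / (1 + np.exp(z.real)) + 1j / (1 + np.exp(z.imag))

def delays(k):
    # tau_1(k) and tau_2(k) from Example 4.1, rounded to integer steps.
    d = np.array([2.5 + 0.5 * np.sin(0.5 * k * np.pi),
                  4.5 + 0.5 * np.sin(0.5 * k * np.pi)])
    return np.rint(d).astype(int)

K, hist = 60, 6                                   # hist exceeds the largest delay
z = np.zeros((K + hist, 2), dtype=complex)
z[:hist] = np.array([2 + 2j, -1 - 1j])            # initial states x11, x12

for k in range(hist, K + hist - 1):
    d = delays(k)
    zd = np.array([z[k - d[0], 0], z[k - d[1], 1]])  # componentwise delays
    z[k + 1] = A @ f(z[k]) + B @ f(zd)               # u(k) = 0 for the plots

t = np.arange(K)
plt.plot(t, z[hist:, 0].real, label='Re z1')
plt.plot(t, z[hist:, 1].real, label='Re z2')
plt.plot(t, z[hist:, 0].imag, '--', label='Im z1')
plt.plot(t, z[hist:, 1].imag, '--', label='Im z2')
plt.xlabel('k'); plt.legend(); plt.show()
```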

Table 1. Dimensions of matrices concerned in Theorem 3.1

Remark 4.1

Different from the Lyapunov functional $V(k)$ given in Zhang, Wang, Lin, and Liu (Citation2014), in this paper we have constructed an appropriate Lyapunov functional involving the terms

$V_4(k)=(\tau_2-\tau_1)\sum_{j=-\tau_2}^{-\tau_1-1}\sum_{i=k+j}^{k-1}z^*(i)Wz(i)+\sum_{j=\tau_1+1}^{\tau_2}\sum_{i=k-j}^{k-1}z^*(i)Tz(i)$
$V_5(k)=\frac{\tau_1^2}{2}\sum_{i=-\tau_1}^{-1}\sum_{j=i}^{0}\sum_{l=k+j}^{k-1}\eta^*(l)G\eta(l)+\sum_{i=-\tau_2}^{-\tau_1-1}\sum_{j=i}^{0}\sum_{l=k+j}^{k-1}\eta^*(l)H\eta(l)$

Further, Lemma 2.3 is used to reduce the triple summation terms in $\Delta V_5(k)$. In Zhang et al. (Citation2014), the maximum values of the upper bounds are obtained as $\tau_1=1$ and $\tau_2=2$, whereas the results proposed in our paper yield $\tau_1=3$ and $\tau_2=5$. Hence, the results proposed in Theorem 3.1 are less conservative than those obtained in Zhang et al. (Citation2014).

5. Conclusions

In this paper, the dissipativity and passivity analysis for discrete-time complex-valued neural networks with time-varying delays has been studied. A delay-dependent condition has been provided to ensure that the considered neural network is strictly (Q,S,R)-γ-dissipative, and an effective LMI approach has been proposed to derive the dissipativity criterion. Based on a new bounding technique and an appropriate type of Lyapunov functional, a sufficient condition for the solvability of this problem has been established. A numerical example has been given to show the effectiveness of the established results. We would like to point out that it is possible to generalize our main results to more complex systems, such as neural networks with parameter uncertainties, stochastic perturbations, and Markovian jumping parameters.

Additional information

Funding

The work of authors was supported by UGC-BSR Research Start-Up-Grant, New Delhi, India, under the sanctioned No. F. 20-1/2012 (BSR)/20-5(13)/2012(BSR).

Notes on contributors

G. Nagamani

G. Nagamani served as a lecturer in Mathematics at Mahendra Arts and Science College, Namakkal, Tamilnadu, India, during 2001–2008. She has been working as an assistant professor in the Department of Mathematics, Gandhigram Rural University (Deemed University), Gandhigram, Tamilnadu, India, since June 2011. She has published more than 15 research papers in various SCI journals with impact factors. She also serves as a reviewer for a few SCI journals. Her research interests are in the fields of modeling of stochastic differential equations, neural networks, and dissipativity and passivity analysis.

The author's research area is the passivity approach for dynamical systems and for various types of neural networks, such as Markovian jumping neural networks, Takagi–Sugeno fuzzy stochastic neural networks, and Cohen–Grossberg neural networks. The author has published 14 research articles in reputed SCI journals in the thrust area of the project during the past six years.

References

  • Aizenberg, I., Paliy, D. V., Zurada, J. M., & Astola, J. T. (2008). Blur identification by multilayer neural network based on multivalued neurons. IEEE Transactions on Neural Networks, 19, 883–898.
  • Bastinec, J., Diblik, J., & Smarda, Z. (2010). Existence of positive solutions of discrete linear equations with a single delay. Journal of Difference Equations and Applications, 16, 1165–1177.
  • Chua, L. O. (1999). Passivity and complexity. IEEE Transactions on Circuit and Systems, 46, 71–82.
  • Diblik, J., Schmeidel, E., & Ruzickova, M. (2010). Asymptotically periodic solutions of Volterra system of difference equations. Computers and Mathematics with Applications, 59, 2854–2867.
  • Goh, S. L., & Mandic, D. P. (2007). An augmented extended Kalman filter algorithm for complex valued recurrent neural networks. Neural Computation, 19(4), 1–17.
  • Goh, S. L., & Mandic, D. P. (2005). Nonlinear adaptive prediction of complex valued nonstationary signals. IEEE Transactions on Signal Processing, 53, 1827–1836.
  • Hirose, A. (2003). Complex-valued neural networks: Theory and applications. Vol. 5, Series on innovative intelligence. River Edge, NJ: World Scientific.
  • Hirose, A. (2011). Nature of complex number and complex valued neural networks. Frontiers of Electrical and Electronic Engineering in China, 6, 171–180.
  • Hu, J., & Wang, J. (2012). Global stability of complex-valued recurrent neural networks with time-delays. IEEE Transactions on Neural Networks and Learning Systems, 23, 853–865.
  • Jing, W., Yao, F., & Shen, H. (2014). Dissipativity-based state estimation for Markov jump discrete-time neural networks with unreliable communication links. Neurocomputing, 139, 107–113.
  • Liang, J., Wang, Z., & Liu, X. (2009). State estimation for coupled uncertain stochastic networks with missing measurements and time-varying delays: The discrete-time case. IEEE Transactions on Neural Networks, 20, 781–793.
  • Liu, Y., Wang, Z., Liang, J., & Liu, X. (2009). Stability and synchronization of discrete-time Markovian jumping neural networks with mixed mode-dependent time delays. IEEE Transactions on Neural Networks, 20, 1102–1116.
  • Mostafa, M., Teich, W. G., & Lindner, J. (2013). Local stability analysis of discrete-time, continuous-state, complex-valued recurrent neural networks with inner state feedback. IEEE Transactions on Neural Networks and Learning Systems, 25, 830–836. doi:10.1109/TNNLS.2013.2281217
  • Suksmono, A. B., & Hirose, A. (2002). Adaptive noise reduction of InSAR image based on complex-valued MRF model and its application to phase unwrapping problem. IEEE Transactions on Geoscience and Remote Sensing, 40, 699–709.
  • Wang, T., Xue, M., Fei, S., & Li, T. (2013). Triple Lyapunov functional technique on delay-dependent stability for discrete-time dynamical networks. Neurocomputing, 122, 221–228.
  • Wang, Z., Ho, D. W. C., Liu, Y., & Liu, X. (2009). Robust H∞ control for a class of nonlinear discrete time-delay stochastic systems with missing measurements. Automatica, 45, 684–691.
  • Wu, L., Yang, X., & Lam, H. K. (2014). Dissipativity analysis and synthesis for discrete-time T-S fuzzy stochastic systems with time-varying delay. IEEE Transactions on Fuzzy Systems, 22, 380–394.
  • Wu, Z. G., Shi, P., Su, H., & Chu, J. (2011). Passivity analysis for discrete-time stochastic Markovian jump neural networks with mixed time delays. IEEE Transactions on Neural Networks, 22, 1566–1575.
  • Wu, Z. G., Shi, P., Su, H., & Chu, J. (2013). Dissipativity analysis for discrete-time stochastic neural networks with time-varying delays. IEEE Transactions on Neural Networks, 22, 345–355.
  • Yamaki, R., & Hirose, A. (2009). Singular unit restoration in interferograms based on complex valued Markov random field model for phase unwrapping. IEEE Geoscience and Remote Sensing Letters, 6, 18–22.
  • Zhang, H., Wang, X. Y., Lin, X. H., & Liu, C. X. (2014). Stability and synchronization for discrete-time complex-valued neural networks with time-varying delays. PLoS ONE, 9, e93838. doi:10.1371/journal.pone.0093838
  • Zhang, Y., & Ma, Y. (1997). CGHA for principal component extraction in the complex domain. IEEE Transactions on Neural Networks, 8, 1031–1036.
  • Zhao, Z., Song, Q., & He, S. (2014). Passivity analysis of stochastic neural networks with time-varying delays and leakage delay. Neurocomputing, 125, 22–27.
  • Zhou, B., & Song, Q. (2013). Boundedness and complete stability of complex-valued neural networks with time delay. IEEE Transactions on Neural Networks and Learning Systems, 24, 1227–1238.