Abstract
In this paper, we consider the problem of dissipativity and passivity analysis for complex-valued discrete-time neural networks with time-varying delays. Based on an appropriate Lyapunov–Krasovskii functional and a free-weighting matrix method, a sufficient condition is established to ensure that the neural network under consideration is strictly (Q, S, R)-dissipative. The derived conditions are presented in terms of linear matrix inequalities. A numerical example is presented to illustrate the effectiveness of the proposed results.
Public Interest Statement
The passivity approach for the interconnection of passive systems provides an effective tool for controlling a large class of nonlinear systems and DNNs, and its potential applications have been found in the stability and stabilization schemes of electrical networks and in the control of teleoperators.
1. Introduction
Over the past several decades, neural networks have attracted considerable attention as important nonlinear circuit networks because of their wide applications in various fields such as associative memory, signal processing, data compression, system control (Hirose, 2003), and optimization (Bastinec, Diblik, & Smarda, 2010; Diblik, Schmeidel, & Ruzickova, 2010; Liang, Wang, & Liu, 2009; Liu, Wang, Liang, & Liu, 2009; Wang, Ho, Liu, & Liu, 2009). Recently, neural networks have been implemented electronically and used in real-time applications. In such electronic implementations, however, some essential parameters of neural networks, such as the release rate of neurons, the connection weights between neurons, and the transmission delays, may be subject to deviations due to the tolerances of the electronic components employed in the design (Aizenberg, Paliy, Zurada, & Astola, 2008; Hu & Wang, 2012; Mostafa, Teich, & Lindner, 2013; Wang, Xue, Fei, & Li, 2013; Wu, Shi, Su, & Chu, 2011). As is well known, time delays commonly exist in neural networks because of network traffic congestion and the finite speed of information transmission. The study of dynamic properties in the presence of time delays is therefore of great significance. However, most of the networks studied so far are real-valued. Recently, in order to investigate the properties of complex-valued systems, several complex-valued network models have been proposed.
Figure 1. State trajectories of real part of two-neuron complex-valued neural networks for τ1(k) = 2.5 + 0.5sin(0.5kπ) and τ2(k) = 4.5 + 0.5sin(0.5kπ) with initial state x11 = 2 + 2j.

![Figure 1. State trajectories of real part of two-neuron complex-valued neural networks for τ1(k)=2.5+0.5sin(0.5kπ) and τ2(k)=4.5+0.5sin(0.5kπ) with initial states x11=2+2j.](/cms/asset/e960497c-e9f2-48ee-9d44-e40b2d61ef4d/oama_a_1048580_f0001_oc.gif)
Figure 2. State trajectories of imaginary part of two-neuron complex-valued neural networks for τ1(k) = 2.5 + 0.5sin(0.5kπ) and τ2(k) = 4.5 + 0.5sin(0.5kπ) with initial state x12 = −1 − j.

![Figure 2. State trajectories of imaginary part of two-neuron complex-valued neural networks for τ1(k)=2.5+0.5sin(0.5kπ) and τ2(k)=4.5+0.5sin(0.5kπ) with initial states x12=-1-j.](/cms/asset/86f1d3dd-7c46-440e-a88b-c1402a151374/oama_a_1048580_f0002_oc.gif)
Figure 3. State trajectories of real part of two-neuron complex-valued neural networks for τ1(k) = 3.5 + 0.5sin(0.5kπ) and τ2(k) = 5.5 + 0.5sin(0.5kπ) with initial state x11 = 2 + 2j.

![Figure 3. State trajectories of real part of two-neuron complex-valued neural networks for τ1(k)=3.5+0.5sin(0.5kπ) and τ2(k)=5.5+0.5sin(0.5kπ) with initial states x11=2+2j.](/cms/asset/8e5663c2-6fd8-4523-a7a8-43aac76c33ba/oama_a_1048580_f0003_oc.gif)
Figure 4. State trajectories of imaginary part of two-neuron complex-valued neural networks for τ1(k) = 3.5 + 0.5sin(0.5kπ) and τ2(k) = 5.5 + 0.5sin(0.5kπ) with initial state x12 = −1 − j.

![Figure 4. State trajectories of imaginary part of two-neuron complex-valued neural networks for τ1(k)=3.5+0.5sin(0.5kπ) and τ2(k)=5.5+0.5sin(0.5kπ) with initial states x12=-1-j.](/cms/asset/c034ce9d-9fcc-4696-9061-2bdb18a7125a/oama_a_1048580_f0004_oc.gif)
The most notable feature of complex-valued neural networks (CVNNs) is their compatibility with wave phenomena and wave information related to, for example, electromagnetic waves, light waves, electron waves, and sonic waves (Hirose, 2011). Furthermore, CVNNs are widely applied in coherent electromagnetic wave signal processing. They are mainly used in the adaptive processing of interferometric synthetic aperture radar (InSAR) images captured by satellite or airplane to observe the land surface (Suksmono & Hirose, 2002; Yamaki & Hirose, 2009). Another important application field is sonic and ultrasonic processing, where pioneering work has been done in various directions (Zhang & Ma, 1997). In communication systems, CVNNs can be regarded as an extension of adaptive complex filters, i.e. a modular, multiple-stage, and nonlinear version. From this viewpoint, several groups have worked on time-sequential signal processing (Goh & Mandic, 2005, 2007). Furthermore, there are many ideas based on CVNNs in image processing; an example is the adaptive processing for blur compensation by identifying the point spread function in the frequency domain (Aizenberg et al., 2008). Recently, many mathematicians and scientists have paid more attention to this field of research. Besides, CVNNs have different and more complicated properties than their real-valued counterparts; therefore, it is necessary to study the dynamic behaviors of these systems deeply. Over the past decades, some work has been done to analyze the dynamic behavior of the equilibrium points of various CVNNs. In Mostafa et al. (2013), a local stability analysis of discrete-time, continuous-state, complex-valued recurrent neural networks with inner state feedback was presented. In Zhou and Song (2013), the authors studied the boundedness and complete stability of complex-valued neural networks with time delay by using free-weighting matrices.
It is well known that dissipativity theory gives a framework for the design and analysis of control systems using an input–output description based on energy-related considerations (Jing, Yao, & Shen, 2014; Wu, Shi, Su, & Chu, 2013; Wu, Yang, & Lam, 2014), and it has become a powerful tool in characterizing important system behaviors such as stability. Passivity theory, an effective tool for analyzing the stability of systems, has been applied in complexity analysis (Zhao, Song, & He, 2014) and signal processing, especially for high-order systems; thus, the passivity analysis approach has long been used to deal with control problems (Chua, 1999). However, to the best of our knowledge, no result has been reported on the dissipativity and passivity analysis of discrete-time complex-valued neural networks with time-varying delay, which motivates the present study.
In this paper, we consider the problem of dissipativity and passivity analysis for discrete-time complex-valued neural networks with time-varying delay. Based on the lemma proposed in Zhou and Song (2013), a condition is derived for the strict (Q, S, R)-dissipativity and passivity of the considered neural networks, which depends only on the discrete delay. For the established model, delay-dependent dissipativity and passivity conditions are derived, and the obtained linear matrix inequalities (LMIs) can be checked numerically using the effective LMI toolbox in MATLAB. The effectiveness of the proposed design is finally demonstrated by a numerical example.
The rest of this paper is organized as follows. The model description and preliminaries are given in Section 2. The dissipativity and passivity analysis for discrete-time complex-valued neural networks with time-varying delay is presented in Section 3. An illustrative example and its simulation results for the dissipativity conditions are given in Section 4. Finally, conclusions are drawn in Section 5.
Notations: C^n and R^n denote, respectively, the n-dimensional complex space and Euclidean space. R^{n×m} is the set of n × m real matrices, and I is the identity matrix of appropriate dimension. For any matrix P, P > 0 (P < 0) means that P is a positive definite (negative definite) matrix. The superscript * denotes the matrix complex conjugate transpose, and diag{·} stands for a block-diagonal matrix. C denotes the Banach space of continuous functions mapping the delay interval into C^n. For integers a and b with a < b, let N[a, b] = {a, a + 1, …, b}. A^T represents the transpose of matrix A, and Δf(k) denotes the difference of function f(k) given by Δf(k) = f(k + 1) − f(k).
2. Model description and preliminaries
Consider the following discrete-time complex-valued neural networks with time-varying delays:

(1)

where is the neuron state vector; , are the connection weight matrix and the delayed connection weight matrix, respectively; is the output of neural network (1); is the input vector; the time delay ranges from to as ; and are the complex-valued neuron activation functions without and with time delays. The initial conditions of the CVNNs (1) are given by
where are continuous. Complex-valued parameters in the neural network can be represented as , . Then (1) can be written as

(2)

where and are the real and imaginary parts of variable , respectively; and are the real and imaginary parts of connection weight ; and are the real and imaginary parts of delayed connection weight ; and and are the real and imaginary parts of . Connection weight matrices are represented as , , , and . Then, we have
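The equivalence between the complex-valued model (1) and its real–imaginary decomposition (2) can be checked numerically. The following sketch uses hypothetical two-neuron weights (the paper's actual matrices are not reproduced in the text) and a separable tanh activation, and verifies that one step of the complex dynamics matches the decomposed real dynamics.

```python
import numpy as np

# Hypothetical two-neuron parameters (illustrative only): A is a
# self-feedback matrix, B a complex-valued connection weight matrix.
A = np.diag([0.2 + 0.0j, 0.3 + 0.0j])
B = np.array([[0.1 + 0.2j, -0.1 + 0.1j],
              [0.2 - 0.1j, 0.1 + 0.1j]])

def f(x):
    # Separable activation in the spirit of Assumption 2.1:
    # real and imaginary parts are passed through tanh independently.
    return np.tanh(x.real) + 1j * np.tanh(x.imag)

def step_complex(x):
    # One step of the delay-free part of model (1): A x(k) + B f(x(k)).
    return A @ x + B @ f(x)

def step_real(xr, xi):
    # Equivalent real-imaginary decomposition as in model (2):
    # with A = AR + j*AI and B = BR + j*BI, propagate two real vectors.
    AR, AI = A.real, A.imag
    BR, BI = B.real, B.imag
    fr, fi = np.tanh(xr), np.tanh(xi)
    yr = AR @ xr - AI @ xi + BR @ fr - BI @ fi
    yi = AR @ xi + AI @ xr + BR @ fi + BI @ fr
    return yr, yi

x0 = np.array([2 + 2j, -1 - 1j])   # initial states used in Section 4
xc = step_complex(x0)
yr, yi = step_real(x0.real, x0.imag)
```

The real and imaginary parts of the complex step coincide with the two stacked real steps, which is exactly the rewriting that turns (1) into (2).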
To derive the main results, we will introduce the following assumptions, definitions, and lemmas.
Assumption 2.1
The activation function can be separated into real and imaginary parts of the complex numbers . It follows that is expressed by

where , for all . Then,
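The separation in Assumption 2.1 is commonly written as follows (a standard form with symbols chosen here for illustration, since the paper's displayed formula is not reproduced):

```latex
f_j(x_j) = f_j^{R}\!\left(x_j^{R}, x_j^{I}\right) + i\, f_j^{I}\!\left(x_j^{R}, x_j^{I}\right), \qquad j = 1, \ldots, n,
```

where f_j^R and f_j^I are real-valued functions of the real part x_j^R and the imaginary part x_j^I of the state.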
Definition 2.1
Zhou and Song (2013): The neural network (1) is said to be (Q, S, R)-dissipative if the following dissipation inequality

(3)

holds under the zero initial condition for any nonzero input. Furthermore, if, for some positive scalar, the dissipation inequality

(4)

holds under the zero initial condition for any nonzero input, then the neural network (1) is said to be strictly (Q, S, R)-dissipative. In this paper, we define a quadratic supply rate associated with neural network (1) as follows:

(5)

where Q, S, and R are real symmetric matrices of appropriate dimensions.
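Writing the three real symmetric matrices as Q, S, and R, a common explicit form of the quadratic supply rate (5) in the complex-valued setting is (sketched here under the standard convention; the paper's displayed formula is not reproduced):

```latex
r\big(u(k), y(k)\big) \;=\; y^{*}(k)\, Q\, y(k) \;+\; 2\,\mathrm{Re}\!\left\{\, y^{*}(k)\, S\, u(k) \,\right\} \;+\; u^{*}(k)\, R\, u(k).
```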
Definition 2.2
Wu et al. (2011): The neural network (1) is said to be passive if there exists a scalar such that, for all , under the zero initial condition.
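In discrete time, the passivity requirement of Definition 2.2 is typically the existence of a scalar γ ≥ 0 such that (a standard statement, with symbols chosen here for illustration):

```latex
2 \sum_{k=0}^{T} \mathrm{Re}\!\left\{\, y^{*}(k)\, u(k) \,\right\} \;\ge\; -\,\gamma \sum_{k=0}^{T} u^{*}(k)\, u(k), \qquad \forall\, T \ge 0.
```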
Lemma 2.1
Liu et al. (2009): Let and be any -dimensional real vectors, and let be an positive semidefinite matrix. Then, the following inequality holds:
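A standard inequality of this type, for real n-dimensional vectors a, b and an n × n positive semidefinite matrix M (symbols chosen here for illustration), is:

```latex
2\, a^{T} M\, b \;\le\; a^{T} M\, a \;+\; b^{T} M\, b,
```

which follows from expanding (a − b)^T M (a − b) ≥ 0.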
Lemma 2.2
For any constant matrix , , integers and satisfying , and vector function such that the sums concerned are well defined, the following inequality holds:

(6)
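A standard discrete Jensen-type inequality of this form, for a positive definite matrix M, integers r₁ ≤ r₂, and a vector function x(i) (symbols chosen here for illustration), reads:

```latex
\left( \sum_{i=r_1}^{r_2} x(i) \right)^{\!T} M \left( \sum_{i=r_1}^{r_2} x(i) \right) \;\le\; (r_2 - r_1 + 1) \sum_{i=r_1}^{r_2} x^{T}(i)\, M\, x(i).
```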
Proof
Lemma 2.3
Let be a positive semidefinite matrix, , and a scalar constant . If the series concerned is convergent, then the following inequality holds:

(7)
Proof
Letting m be a positive integer, we have

and then (7) follows directly by letting m → ∞, which completes the proof.
Lemma 2.4
Given a Hermitian matrix , the inequality

is equivalent to

where and .
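Lemma 2.4 is the Hermitian (complex) Schur complement; in a standard form, with block symbols chosen here for illustration:

```latex
\Omega = \begin{bmatrix} \Omega_{11} & \Omega_{12} \\ \Omega_{12}^{*} & \Omega_{22} \end{bmatrix} < 0
\quad \Longleftrightarrow \quad
\Omega_{22} < 0 \ \text{ and } \ \Omega_{11} - \Omega_{12}\, \Omega_{22}^{-1}\, \Omega_{12}^{*} < 0.
```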
3. Main results
In this section, we derive the dissipativity criterion for the discrete-time complex-valued neural networks (1) with time-varying delays by using the Lyapunov functional method combined with the LMI approach. For convenience, we use the following notations: . Table 1 describes the matrices, along with their dimensions, that are used in the following Theorem 3.1.
Theorem 3.1
Assume that Assumption 2.1 holds. Then the complex-valued neural networks (1) are dissipative if there exist positive Hermitian matrices, two positive diagonal matrices, and a scalar such that the following LMI holds:

(8)

where

(9)

and

(10)

with
Proof
Defining , we consider the following Lyapunov–Krasovskii functional for the neural network (1):

where

Letting , along the solution of the neural network (1), we have

(11)

Furthermore, from Assumption 2.1, the activation function of (1) can be written as , for all . Hence, we have

(12)

where , for all . From (12), we get

(13)

where and is a positive constant for all . Therefore, we can write the vector form of (13) as follows:

(14)

where = .

Similarly,

(15)

where = and = .

Now, . Substituting the above relations in and using the inequalities (14) and (15) in the RHS, we get

(16)

where .

Thus,

(17)

for all . Suppose , then (17) yields

for all .

Thus (4) holds under the zero initial condition. Therefore, according to Definition 2.1, the neural network (1) is strictly (Q, S, R)-dissipative. This completes the proof.
The LMIs obtained in Theorem 3.1 ensure the (Q, S, R)-dissipativity of the discrete-time complex-valued neural network (1). Further, we specialize Theorem 3.1 to obtain the passivity conditions for system (1) by assuming , and . The derived passivity conditions are presented in the following corollary.
Corollary 3.2
Assume that Assumption 2.1 holds. Then the complex-valued neural networks (1) are passive if there exist positive Hermitian matrices, two positive diagonal matrices, and a scalar such that the following LMI holds:

(18)

where

(19)

and

(20)

with
Proof
The proof is similar to that of Theorem 3.1 and hence is omitted.
4. Numerical examples
In this section, we give an example to show the effectiveness of the established results.
Example 4.1
Consider the discrete-time complex-valued neural networks (1), where the interconnected matrices are, respectively,

Here, the activation functions are assumed to be , with , , and . Taking τ1(k) = 2.5 + 0.5sin(0.5kπ) and τ2(k) = 4.5 + 0.5sin(0.5kπ), and using the Matlab LMI control toolbox for the LMI (8), the feasible matrices are sought as

Setting the initial states as x11 = 2 + 2j and x12 = −1 − j, Figures 1 and 2 show that the model (1) with the above given parameters is dissipative in the sense of Definition 2.1 with . Further, the state curves for the real and imaginary parts of the discrete-time complex-valued neural networks (1) are given in Figures 1 and 2. When τ1(k) = 3.5 + 0.5sin(0.5kπ) and τ2(k) = 5.5 + 0.5sin(0.5kπ), the LMI (8) in Theorem 3.1 is not feasible and hence the CVNNs (1) are not (Q, S, R)-dissipative. In this case, Figures 3 and 4 describe the unstable behavior of the trajectories of the CVNNs (1).
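Checking feasibility of LMIs such as (8) ultimately reduces to verifying that candidate Hermitian matrices are positive (or negative) definite. A minimal numerical sketch of such a check, using eigenvalues and illustrative matrices (not the feasible solutions reported by the toolbox, which are not reproduced in the text), is:

```python
import numpy as np

def is_positive_definite(P, tol=1e-10):
    # A Hermitian matrix is positive definite iff all of its (real)
    # eigenvalues are strictly positive; eigvalsh assumes Hermitian input.
    return bool(np.all(np.linalg.eigvalsh(P) > tol))

# Illustrative Hermitian matrices only.
P_good = np.array([[2.0, 0.5 + 0.5j],
                   [0.5 - 0.5j, 1.0]])   # det = 1.5 > 0, trace > 0
P_bad = np.array([[1.0, 2.0 + 0.0j],
                  [2.0 - 0.0j, 1.0]])    # eigenvalues 3 and -1
```

A dedicated LMI solver (such as MATLAB's LMI control toolbox used in this example) performs this kind of test internally while searching over the unknown matrix variables.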
Table 1. Dimensions of matrices concerned in Theorem 3.1
Remark 4.1
Different from the Lyapunov functional given in Zhang, Wang, Lin, and Liu (2014), in our paper we have constructed an appropriate Lyapunov functional involving the terms

Further, Lemma 2.3 is used to reduce the triple summation terms in . In Zhang et al. (2014), the maximum values of the upper bounds are obtained as and , whereas the proposed results in our paper yield and . Hence, the results proposed in Theorem 3.1 are less conservative than those obtained in Zhang et al. (2014).
5. Conclusions
In this paper, the dissipativity and passivity analysis for discrete-time complex-valued neural networks with time-varying delays was studied. A delay-dependent condition has been provided to ensure that the considered neural network is strictly (Q, S, R)-dissipative. An effective LMI approach has been proposed to derive the dissipativity criterion. Based on a new bounding technique and an appropriate Lyapunov functional, a sufficient condition for the solvability of this problem has been established. A numerical example is given to show the effectiveness of the established results. We would like to point out that it is possible to generalize our main results to more complex systems, such as neural networks with parameter uncertainties, stochastic perturbations, and Markovian jumping parameters.
Notes on contributors
G. Nagamani
G. Nagamani served as a lecturer in Mathematics at Mahendra Arts and Science College, Namakkal, Tamilnadu, India, during 2001–2008. Since June 2011, she has been working as an assistant professor in the Department of Mathematics, Gandhigram Rural University (Deemed University), Gandhigram, Tamilnadu, India. She has published more than 15 research papers in various SCI journals with impact factors. She also serves as a reviewer for a few SCI journals. Her research interests are in the fields of modeling of stochastic differential equations, neural networks, and dissipativity and passivity analysis.
The author's research area is based on the passivity approach for dynamical systems and for various types of neural networks, such as Markovian jumping neural networks, Takagi–Sugeno fuzzy stochastic neural networks, and Cohen–Grossberg neural networks. The author has published 14 research articles in reputed SCI journals in the thrust area of the project during the past six years.
References
- Aizenberg, I., Paliy, D. V., Zurada, J. M., & Astola, J. T. (2008). Blur identification by multilayer neural network based on multivalued neurons. IEEE Transactions on Neural Networks, 19, 883–898.
- Bastinec, J., Diblik, J., & Smarda, Z. (2010). Existence of positive solutions of discrete linear equations with a single delay. Journal of Difference Equations and Applications, 16, 1165–1177.
- Chua, L. O. (1999). Passivity and complexity. IEEE Transactions on Circuits and Systems, 46, 71–82.
- Diblik, J., Schmeidel, E., & Ruzickova, M. (2010). Asymptotically periodic solutions of Volterra system of difference equations. Computers and Mathematics with Applications, 59, 2854–2867.
- Goh, S. L., & Mandic, D. P. (2005). Nonlinear adaptive prediction of complex valued nonstationary signals. IEEE Transactions on Signal Processing, 53, 1827–1836.
- Goh, S. L., & Mandic, D. P. (2007). An augmented extended Kalman filter algorithm for complex valued recurrent neural networks. Neural Computation, 19(4), 1–17.
- Hirose, A. (2003). Complex-valued neural networks: Theory and applications. Vol. 5, Series on innovative intelligence. River Edge, NJ: World Scientific.
- Hirose, A. (2011). Nature of complex number and complex valued neural networks. Frontiers of Electrical and Electronic Engineering in China, 6, 171–180.
- Hu, J., & Wang, J. (2012). Global stability of complex-valued recurrent neural networks with time-delays. IEEE Transactions on Neural Networks and Learning Systems, 23, 853–865.
- Jing, W., Yao, F., & Shen, H. (2014). Dissipativity-based state estimation for Markov jump discrete-time neural networks with unreliable communication links. Neurocomputing, 139, 107–113.
- Liang, J., Wang, Z., & Liu, X. (2009). State estimation for coupled uncertain stochastic networks with missing measurements and time-varying delays: The discrete-time case. IEEE Transactions on Neural Networks, 20, 781–793.
- Liu, Y., Wang, Z., Liang, J., & Liu, X. (2009). Stability and synchronization of discrete-time Markovian jumping neural networks with mixed mode-dependent time delays. IEEE Transactions on Neural Networks, 20, 1102–1116.
- Mostafa, M., Teich, W. G., & Lindner, J. (2013). Local stability analysis of discrete-time, continuous-state, complex-valued recurrent neural networks with inner state feedback. IEEE Transactions on Neural Networks and Learning Systems, 25, 830–836. doi:10.1109/TNNLS.2013.2281217
- Suksmono, A. B., & Hirose, A. (2002). Adaptive noise reduction of InSAR image based on complex-valued MRF model and its application to phase unwrapping problem. IEEE Transactions on Geoscience and Remote Sensing, 40, 699–709.
- Wang, T., Xue, M., Fei, S., & Li, T. (2013). Triple Lyapunov functional technique on delay-dependent stability for discrete-time dynamical networks. Neurocomputing, 122, 221–228.
- Wang, Z., Ho, D. W. C., Liu, Y., & Liu, X. (2009). Robust H∞ control for a class of nonlinear discrete time delay stochastic systems with missing measurements. Automatica, 45, 684–691.
- Wu, L., Yang, X., & Lam, H. K. (2014). Dissipativity analysis and synthesis for discrete-time T-S fuzzy stochastic systems with time-varying delay. IEEE Transactions on Fuzzy Systems, 22, 380–394.
- Wu, Z. G., Shi, P., Su, H., & Chu, J. (2011). Passivity analysis for discrete-time stochastic Markovian jump neural networks with mixed time delays. IEEE Transactions on Neural Networks, 22, 1566–1575.
- Wu, Z. G., Shi, P., Su, H., & Chu, J. (2013). Dissipativity analysis for discrete-time stochastic neural networks with time-varying delays. IEEE Transactions on Neural Networks, 22, 345–355.
- Yamaki, R., & Hirose, A. (2009). Singular unit restoration in interferograms based on complex valued Markov random field model for phase unwrapping. IEEE Geoscience and Remote Sensing Letters, 6, 18–22.
- Zhang, H., Wang, X. Y., Lin, X. H., & Liu, C. X. (2014). Stability and synchronization for discrete-time complex-valued neural networks with time-varying delays. PLoS ONE, 9, e93838. doi:10.1371/journal.pone.0093838
- Zhang, Y., & Ma, Y. (1997). CGHA for principal component extraction in the complex domain. IEEE Transactions on Neural Networks, 8, 1031–1036.
- Zhao, Z., Song, Q., & He, S. (2014). Passivity analysis of stochastic neural networks with time-varying delays and leakage delay. Neurocomputing, 125, 22–27.
- Zhou, B., & Song, Q. (2013). Boundedness and complete stability of complex-valued neural networks with time delay. IEEE Transactions on Neural Networks and Learning Systems, 24, 1227–1238.