Automatika
Journal for Control, Measurement, Electronics, Computing and Communications
Volume 60, 2019 - Issue 4
Regular Papers

A gradient-based iterative algorithm for solving coupled Lyapunov equations of continuous-time Markovian jump systems

Pages 510-518 | Received 10 Oct 2018, Accepted 26 Jul 2019, Published online: 05 Sep 2019

ABSTRACT

In this paper, a new gradient-based iterative algorithm is proposed to solve the coupled Lyapunov matrix equations associated with continuous-time Markovian jump linear systems. A necessary and sufficient condition is established for the proposed gradient-based iterative algorithm to be convergent. In addition, the optimal value of the tunable parameter achieving the fastest convergence rate of the proposed algorithm is given explicitly. Finally, some numerical simulations are given to validate the obtained theoretical results.

1. Introduction

Continuous-time Markovian jump linear systems have been widely used to describe practical plants. Stability is fundamental to the investigation of any control system. For the stability analysis of Markovian jump linear systems, the coupled Lyapunov matrix equation is an important tool. Over the past decades, coupled continuous Lyapunov matrix equations have been broadly used in the stability analysis of continuous-time Markovian jump linear systems. In [Citation1], the existence of the unique positive definite solution of the coupled Lyapunov matrix equations was used to check the moment stability of Markovian jump linear systems. In [Citation2], the authors showed that a Markovian jump linear system is stochastically stable if and only if the associated coupled Lyapunov matrix equation has a unique positive definite solution. The solution of the coupled Lyapunov matrix equations was used in [Citation3] to check the stochastic stability of nonhomogeneous Markovian jump systems. In [Citation4], the coupled Lyapunov equations were applied to the stabilization of stochastic linear systems.

According to the above descriptions, the coupled Lyapunov matrix equation plays an important role in the stability analysis of Markovian jump linear systems. Thus, finding solutions of this kind of matrix equation has attracted much attention over the past decade. The exact solution of the coupled continuous Lyapunov matrix equations was obtained in [Citation5] by using the Kronecker product and matrix inversion. However, this traditional computational method runs into difficulties because excessive computer memory is required for high-dimensional matrices. Due to this, iterative approaches have been widely used to solve coupled matrix equations in recent years, and some effective iterative algorithms have been developed. For example, some gradient-based iterative algorithms were presented in [Citation6,Citation7] to solve general coupled matrix equations, including the continuous coupled Lyapunov matrix equations. A reduced-rank gradient-based algorithm was developed in [Citation8] for solving coupled Sylvester matrix equations. By using the properties of symmetric positive definite matrices, a gradient-based iterative algorithm was presented in [Citation9] for a type of coupled matrix equations. As a matter of fact, the above-mentioned gradient-based algorithms in [Citation8,Citation9] can also be extended to solve the coupled Lyapunov matrix equations. In addition, some explicit iterative algorithms have been proposed to solve the coupled Lyapunov matrix equation. In [Citation10], a conjugate direction method was given for solving the coupled Lyapunov matrix equations. In [Citation11], a classical Smith iterative algorithm was constructed by using the fixed point iterative approach. A simple iterative technique was introduced in [Citation12] to solve the coupled Lyapunov matrix equation. Recently, some new gradient-based iterative algorithms were presented for solving several kinds of matrix equations [Citation13,Citation14]. These algorithms can also be extended to solve the coupled Lyapunov matrix equations.

Besides, an implicit iterative algorithm was first proposed in [Citation15] by taking the special structure of the coupled continuous-time Lyapunov matrix equations into consideration. In this algorithm, some standard continuous Lyapunov matrix equations need to be solved at each iteration step. Based on the idea in [Citation15], two modified implicit iterative algorithms were developed in [Citation16,Citation17] for solving coupled continuous Lyapunov matrix equations. Recently, a new implicit iterative algorithm was constructed in [Citation18] for solving the coupled Lyapunov matrix equation by using successive over-relaxation. More results on the solution of such matrix equations can be found in [Citation19–22]. However, some standard continuous Lyapunov matrix equations need to be solved within all of the aforementioned implicit iterative algorithms. In addition, a main disadvantage of the existing gradient-based iterative algorithms is that their convergence rates are slow. Inspired by the above analysis, in this paper we aim to develop a new gradient-based iterative algorithm, which does not need to solve standard Lyapunov equations and has a faster convergence rate than the existing iterative algorithms.

In this paper, the gradient-based iterative technique is investigated for solving the continuous coupled Lyapunov matrix equations. First, a novel and simple gradient-based iterative algorithm is constructed and analysed. It is proven that the proposed gradient-based iterative algorithm converges to the unique solution of the coupled Lyapunov matrix equations if and only if the tunable parameter satisfies a certain inequality. Moreover, the optimal value of the tunable parameter, which achieves the fastest convergence rate of the algorithm, is obtained. Finally, the correctness of the convergence results is verified by some simulation results.

Notation: Throughout this paper, the notation $\otimes$ represents the Kronecker product of two matrices. $I$ represents an identity matrix of appropriate dimensions. For a real matrix $A$, we use $A^{T}$, $\|A\|_{2}$, $\|A\|_{F}$, $\lambda(A)$, $\lambda_{\min}(A)$, $\lambda_{\max}(A)$, and $\rho(A)$ to denote the transpose, the 2-norm, the F-norm, the eigenvalues, the minimal eigenvalue, the maximal eigenvalue and the spectral radius of the matrix $A$, respectively. For two integers $m$ and $l$ with $m \le l$, $\mathbf{I}[m,l]$ denotes the set $\{m, m+1, \ldots, l\}$. For two square matrices $E$ and $A$, let us define $\lambda(E,A)=\{s \mid \det(sE-A)=0\}$. For any matrix $X=[x_1\ x_2\ \cdots\ x_n]\in\mathbb{R}^{m\times n}$, the stretching function is defined as $\operatorname{vec}(X)=[x_1^{T}\ x_2^{T}\ \cdots\ x_n^{T}]^{T}$.

2. Preliminaries

Consider the following continuous-time Markovian jump linear system
(1) $\mathrm{d}x(t) = A_{r(t)}\,x(t)\,\mathrm{d}t,$
where $x(t)\in\mathbb{R}^{n}$ is the system state, and $r(t)$ is a continuous-time discrete-state Markovian process taking values in a finite set $S = \mathbf{I}[1,N]$. For the Markovian jump linear system (1), the system matrices of the $N$ subsystems are $A_i\in\mathbb{R}^{n\times n}$, $i\in\mathbf{I}[1,N]$. The stationary transition probabilities of the Markovian process $r(t)$ are given by
(2) $\operatorname{Prob}\{r(t+h)=j \mid r(t)=i\} = \begin{cases}\pi_{ij}h + o(h), & i\ne j,\\ 1+\pi_{ii}h + o(h), & i=j,\end{cases}$
where $h>0$, $\lim_{h\to 0} o(h)/h = 0$, and $\pi_{ij}$ ($j\ne i$) is the transition rate from mode $i$ at time $t$ to mode $j$ at time $t+h$. All the transition rates $\pi_{ij}$, $i,j\in\mathbf{I}[1,N]$, can be collected into a transition rate matrix
$$\Pi = \begin{bmatrix}\pi_{11} & \pi_{12} & \cdots & \pi_{1N}\\ \pi_{21} & \pi_{22} & \cdots & \pi_{2N}\\ \vdots & \vdots & \ddots & \vdots\\ \pi_{N1} & \pi_{N2} & \cdots & \pi_{NN}\end{bmatrix},$$
which has the following properties: $\pi_{ij}\ge 0$ for $i\ne j$, $\pi_{ii}<0$, and $\sum_{j=1}^{N}\pi_{ij}=0$. Let the initial condition for the system (1)–(2) be $x(0)=x_0$ and $r(0)=r_0$. The definition of asymptotic mean square stability for the system (1)–(2) can be stated as follows:

Definition 2.1

[Citation3]

The continuous-time Markovian jump linear system (1)–(2) is asymptotically mean square stable if for any $x_0\in\mathbb{R}^{n}$, there holds $\lim_{t\to\infty}\mathrm{E}\{\|x(t)\|^{2}\}=0$, where $\mathrm{E}\{\cdot\}$ denotes the mathematical expectation.

It is well known that the preceding mean square stability of the continuous-time Markovian jump linear system (1)–(2) is closely related to the following continuous coupled Lyapunov matrix equations (CLMEs)
(3) $A_i^{T}X_i + X_iA_i + \sum_{j=1}^{N}\pi_{ij}X_j + Q_i = 0,\quad i\in\mathbf{I}[1,N],$
where $Q_i\in\mathbb{R}^{n\times n}$, $i\in\mathbf{I}[1,N]$, are arbitrarily given positive definite matrices, and $X_i$, $i\in\mathbf{I}[1,N]$, are the unknown matrices to be determined. Regarding the mean square stability of the system (1)–(2), the following result is introduced.

Lemma 2.1

[Citation3]

The continuous-time Markovian jump linear system (1)–(2) is mean square stable if and only if the continuous CLMEs (3) have a unique solution $X=(X_1,X_2,\ldots,X_N)$ with $X_i>0$, $i\in\mathbf{I}[1,N]$, for any given $Q=(Q_1,Q_2,\ldots,Q_N)$ with $Q_i>0$, $i\in\mathbf{I}[1,N]$.
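For small problems, the solution referred to in Lemma 2.1 can be computed exactly via the Kronecker-product (vectorization) approach of [Citation5]. The following Python/NumPy sketch (the helper name and code organization are ours, not the authors') illustrates the idea; its $Nn^{2}\times Nn^{2}$ coefficient matrix is precisely the memory bottleneck noted in the introduction.

```python
import numpy as np

def solve_clme_direct(A, Pi, Q):
    """Exact solution of the CLMEs (3) by vectorization, as in [5].

    A and Q are lists of the N system/weight matrices; Pi is the
    N x N transition rate matrix.  A sketch, not the authors' code."""
    N, n = len(A), A[0].shape[0]
    In2 = np.eye(n * n)
    M = np.zeros((N * n * n, N * n * n))
    rhs = np.zeros(N * n * n)
    for i in range(N):
        ri = slice(i * n * n, (i + 1) * n * n)
        # vec(A_i^T X_i + X_i A_i) = (I (x) A_i^T + A_i^T (x) I) vec(X_i)
        M[ri, ri] += np.kron(np.eye(n), A[i].T) + np.kron(A[i].T, np.eye(n))
        for j in range(N):
            cj = slice(j * n * n, (j + 1) * n * n)
            M[ri, cj] += Pi[i, j] * In2        # coupling term pi_ij X_j
        rhs[ri] = -Q[i].reshape(-1, order='F')  # move Q_i to the right side
    x = np.linalg.solve(M, rhs)
    return [x[i * n * n:(i + 1) * n * n].reshape(n, n, order='F')
            for i in range(N)]
```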

Due to the significance of the continuous CLMEs in the stability analysis of Markovian jump linear systems, much attention has been paid to the solution of the continuous CLMEs (3). In the next section, we will investigate the iterative technique for solving the continuous CLMEs (3). Before giving the main results, we first review some classical iterative algorithms.

Algorithm 2.1

[Citation16]

(4) $\left(A_i + \frac{\pi_{ii}}{2}I\right)^{T}X_i(k+1) + X_i(k+1)\left(A_i + \frac{\pi_{ii}}{2}I\right) = -\sum_{j=1}^{i-1}\pi_{ij}X_j(k+1) - \sum_{j=i+1}^{N}\pi_{ij}X_j(k) - Q_i,\quad i\in\mathbf{I}[1,N].$

Algorithm 2.2

[Citation17]

(5) $\left(A_i + \frac{\pi_{ii}}{2}I - \frac{\omega_i}{2}I\right)^{T}X_i(k+1) + X_i(k+1)\left(A_i + \frac{\pi_{ii}}{2}I - \frac{\omega_i}{2}I\right) = -\sum_{j=1}^{i-1}\pi_{ij}\sum_{t=0}^{s_{ij}}\gamma_{ijt}X_j(k+1-t) - \omega_i\sum_{t=0}^{s_{ii}}\gamma_{iit}X_i(k-t) - \sum_{j=i+1}^{N}\pi_{ij}\sum_{t=0}^{s_{ij}}\gamma_{ijt}X_j(k-t) - Q_i,\quad i\in\mathbf{I}[1,N],$
where $\gamma_{ijt}$, $t\in\mathbf{I}[0,s_{ij}]$, $i,j\in\mathbf{I}[1,N]$, and $\omega_i$, $i\in\mathbf{I}[1,N]$, are some tunable parameters, which satisfy some given conditions.

In this paper, k denotes the iterative step of the iterative algorithms.

Remark 2.1

For Algorithms 2.1 and 2.2, at each iteration step, one needs to solve $N$ standard continuous Lyapunov matrix equations of the form $A^{T}X + XA = Q$. Due to this, some additional operations are required when using Algorithms 2.1 and 2.2 for solving the continuous CLMEs (3). Thus, they are implicit iterative algorithms.
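To make the remark concrete, one sweep of the implicit Algorithm 2.1 might look as follows in Python, with SciPy's solve_continuous_lyapunov standing in for the MATLAB function 'lyap'. The function name and calling convention are our own sketch, not the authors' code; Algorithm 2.2 would be organized analogously with its extra weighted terms.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def implicit_sweep(A, Pi, Q, X):
    """One sweep of the implicit iteration (4), updating all N blocks."""
    N, n = len(A), A[0].shape[0]
    X_new = [None] * N
    for i in range(N):
        M = A[i] + 0.5 * Pi[i, i] * np.eye(n)
        R = -Q[i]
        for j in range(i):              # already-updated iterates
            R = R - Pi[i, j] * X_new[j]
        for j in range(i + 1, N):       # iterates from the previous sweep
            R = R - Pi[i, j] * X[j]
        # solve M^T X_i + X_i M = R, i.e.  a Y + Y a^T = q  with a = M^T
        X_new[i] = solve_continuous_lyapunov(M.T, R)
    return X_new
```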

In addition, the gradient-based iterative algorithms proposed in [Citation7,Citation8] can also be used to solve the continuous CLMEs (3). These results are stated as follows:

Algorithm 2.3

[Citation7]

(6) $X_i(k+1) = X_i(k) - \mu\left[A_i^{T}T_i(X) + T_i(X)A_i + \sum_{j=1}^{N}\pi_{ji}T_j(X)\right],$
with
(7) $T_i(X) = A_i^{T}X_i(k) + X_i(k)A_i + \sum_{j=1}^{N}\pi_{ij}X_j(k) + Q_i,\quad i\in\mathbf{I}[1,N].$

Algorithm 2.4

[Citation8]

(8) $Y_i(k+1) = Y_i(k) - \mu\left[\sum_{j=1}^{N}\pi_{ij}X_j(Y(k)) + Q_i\right]\left[A_i^{T}X_i(Y(k)) + X_i(Y(k))A_i + \sum_{j=1}^{N}\pi_{ij}X_j(Y(k)) + Q_i\right],\quad i\in\mathbf{I}[1,N],$
where $X_j(Y(k)) = A_j^{T}Y_j(k) + Y_j(k)A_j + \sum_{l=1}^{N}\pi_{jl}Y_l(k)$, $j\in\mathbf{I}[1,N]$.

Remark 2.2

It is easily noted that the gradient-based iterative Algorithms 2.3 and 2.4 involve the intermediate variables $T_i(X)$ and $X_i(Y(k))$ at each iterative step. Thus, the computational cost of these two gradient-based iterative algorithms is high.

In this section, two explicit iterative algorithms and two implicit iterative algorithms for solving the continuous CLMEs (3) have been reviewed. In order to improve the convergence rate, a new gradient-based iterative algorithm for solving the continuous CLMEs (3) will be proposed in the next section.

3. A new gradient-based iterative algorithm

The basic idea of the gradient-based iterative method is to search for an optimal matrix $X=(X_1,X_2,\ldots,X_N)$ such that a given objective function is minimized. In this case, some simplified objective functions can be given as follows:
(9) $J_i(X) = \frac{1}{2}\left\|A_i^{T}X_i + X_iA_i + \sum_{j=1}^{N}\pi_{ij}X_j + Q_i\right\|_{F}^{2},\quad i\in\mathbf{I}[1,N].$
To construct the gradient-based iterative algorithm, we should first calculate the gradient of $J_i(X)$ with respect to $X_i$, $i\in\mathbf{I}[1,N]$. In fact, this gradient can be derived easily as
(10) $\frac{\partial J_i(X)}{\partial X_i} = A_i^{T}T_i + T_iA_i + \pi_{ii}T_i,$
where $T_i$, $i\in\mathbf{I}[1,N]$, are given by
(11) $T_i = A_i^{T}X_i + X_iA_i + \sum_{j=1}^{N}\pi_{ij}X_j + Q_i.$
Now, based on (10), a new gradient-based iterative algorithm can be constructed to search for the solution of the continuous CLMEs (3).

Algorithm 3.1

(12) $X_i(k+1) = X_i(k) - \mu\left[A_i^{T}T_i(X) + T_i(X)A_i + \pi_{ii}T_i(X)\right],\quad i\in\mathbf{I}[1,N],$
where $T_i(X)$, $i\in\mathbf{I}[1,N]$, are defined in (7).
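A minimal Python/NumPy sketch of the iteration (12) is given below. The function name, the stopping rule (based on the residual measure $\delta(k)$ defined in Section 4) and the iteration cap are our own choices, not part of the original algorithm statement.

```python
import numpy as np

def gradient_clme(A, Pi, Q, X0, mu, tol=1e-15, max_iter=20000):
    """Sketch of Algorithm 3.1 for the CLMEs (3).

    A, Q, X0 are lists of N square matrices; Pi is the N x N
    transition rate matrix; mu is the tunable step size."""
    N = len(A)
    X = [Xi.astype(float).copy() for Xi in X0]
    for k in range(max_iter):
        # T_i(X) in (7): the residual of the i-th coupled equation
        T = [A[i].T @ X[i] + X[i] @ A[i]
             + sum(Pi[i, j] * X[j] for j in range(N)) + Q[i]
             for i in range(N)]
        # delta(k): squared residual norm used in Section 4
        if sum(np.linalg.norm(Ti, 'fro') ** 2 for Ti in T) <= tol:
            return X, k
        # update (12): only the pi_ii T_i(X) coupling term is kept
        X = [X[i] - mu * (A[i].T @ T[i] + T[i] @ A[i] + Pi[i, i] * T[i])
             for i in range(N)]
    return X, max_iter
```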

Remark 3.1

In comparison with the existing Algorithms 2.1 and 2.2, the proposed Algorithm 3.1 is explicit, and it does not need to solve standard Lyapunov matrix equations by applying the MATLAB function 'lyap'. Thus, the computational cost of Algorithm 3.1 should be lower than that of the implicit iterative Algorithms 2.1 and 2.2.

Remark 3.2

In comparison with the existing Algorithms 2.3 and 2.4, it can be observed that the coupling term $\sum_{j=1,\,j\ne i}^{N}\pi_{ji}T_j(X)$ is not included in Algorithm 3.1 at each iterative step. In this case, the computational complexity of this gradient-based algorithm is much lower than that of the existing iterative algorithms. Therefore, the proposed Algorithm 3.1 would have a faster convergence rate than Algorithms 2.3 and 2.4, as evidenced in the simulations.

In the following, we will give some convergence results of the Algorithm 3.1. To this end, we first introduce the following useful lemma, which has an important role in the derivation of the main results.

Lemma 3.1

[Citation23]

If $A\in\mathbb{R}^{m\times n}$, $B\in\mathbb{R}^{p\times q}$, and $X\in\mathbb{R}^{n\times p}$, then we have $\operatorname{vec}(AXB) = (B^{T}\otimes A)\operatorname{vec}(X)$.
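Lemma 3.1 is easy to confirm numerically. The following small check (our own, with $\operatorname{vec}(\cdot)$ realized as a column-major reshape) illustrates the identity.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))   # A in R^{m x n}
X = rng.standard_normal((4, 5))   # X in R^{n x p}
B = rng.standard_normal((5, 2))   # B in R^{p x q}

vec = lambda M: M.reshape(-1, order='F')   # stack the columns
assert np.allclose(vec(A @ X @ B), np.kron(B.T, A) @ vec(X))
```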

In addition, it is denoted that
(13) $\Omega = \begin{bmatrix}\Psi_1^{2} & \pi_{12}\Psi_1 & \cdots & \pi_{1N}\Psi_1\\ \pi_{21}\Psi_2 & \Psi_2^{2} & \cdots & \pi_{2N}\Psi_2\\ \vdots & \vdots & \ddots & \vdots\\ \pi_{N1}\Psi_N & \cdots & \pi_{N,N-1}\Psi_N & \Psi_N^{2}\end{bmatrix},$
where
(14) $\Psi_i = I\otimes\left(A_i + \frac{\pi_{ii}}{2}I\right)^{T} + \left(A_i + \frac{\pi_{ii}}{2}I\right)^{T}\otimes I,\quad i\in\mathbf{I}[1,N],$
and let
(15) $\lambda(\Omega) = \left\{c_1 + d_1\mathrm{i},\; c_2 + d_2\mathrm{i},\;\ldots,\; c_{n^{2}N} + d_{n^{2}N}\mathrm{i}\right\},$
where $c_i, d_i\in\mathbb{R}$, $i\in\mathbf{I}[1,n^{2}N]$. On the basis of the above notations (13) and (15), a necessary and sufficient condition for the convergence of the proposed Algorithm 3.1 can be given in the following theorem.

Theorem 3.1

Assume that the continuous CLMEs (3) have a unique solution $X=(X_1,X_2,\ldots,X_N)$.

If
(16) $c_i > 0,\quad i\in\mathbf{I}[1,n^{2}N],$
then the sequence $X(k)=(X_1(k),X_2(k),\ldots,X_N(k))$ obtained by Algorithm 3.1 with an arbitrary initial condition converges to the unique solution of the CLMEs (3) if and only if
(17) $0 < \mu < \frac{2c_i}{c_i^{2}+d_i^{2}},\quad i\in\mathbf{I}[1,n^{2}N].$
If
(18) $c_i < 0,\quad i\in\mathbf{I}[1,n^{2}N],$
then the sequence $X(k)=(X_1(k),X_2(k),\ldots,X_N(k))$ obtained by Algorithm 3.1 with an arbitrary initial condition converges to the unique solution of the CLMEs (3) if and only if
(19) $\frac{2c_i}{c_i^{2}+d_i^{2}} < \mu < 0,\quad i\in\mathbf{I}[1,n^{2}N].$
If neither condition (16) nor condition (18) is satisfied, then Algorithm 3.1 is divergent.

Proof.

Define the iterative error matrices
(20) $\widetilde{X}_i(k) = X_i(k) - X_i,\quad i\in\mathbf{I}[1,N].$
Substituting
(21) $Q_i = -A_i^{T}X_i - X_iA_i - \sum_{j=1}^{N}\pi_{ij}X_j,\quad i\in\mathbf{I}[1,N],$
into Algorithm 3.1 and subtracting $X_i$, $i\in\mathbf{I}[1,N]$, from both sides of (12) yields
(22) $\widetilde{X}_i(k+1) = \widetilde{X}_i(k) - \mu\left[\left(A_i+\frac{\pi_{ii}}{2}I\right)^{T}T_i(\widetilde{X}) + T_i(\widetilde{X})\left(A_i+\frac{\pi_{ii}}{2}I\right)\right],\quad i\in\mathbf{I}[1,N],$
with
$T_i(\widetilde{X}) = \left(A_i+\frac{\pi_{ii}}{2}I\right)^{T}\widetilde{X}_i(k) + \widetilde{X}_i(k)\left(A_i+\frac{\pi_{ii}}{2}I\right) + \sum_{j=1,\,j\ne i}^{N}\pi_{ij}\widetilde{X}_j(k),\quad i\in\mathbf{I}[1,N],$
where $\widetilde{X}_i(k)$, $i\in\mathbf{I}[1,N]$, are defined in (20). Next, by performing the vectorization operation and using Lemma 3.1, the expressions (22) can be equivalently written as
$\operatorname{vec}(\widetilde{X}_i(k+1)) = \operatorname{vec}(\widetilde{X}_i(k)) - \mu\left[\left(I\otimes\left(A_i+\frac{\pi_{ii}}{2}I\right)^{T}\right)\operatorname{vec}(T_i(\widetilde{X})) + \left(\left(A_i+\frac{\pi_{ii}}{2}I\right)^{T}\otimes I\right)\operatorname{vec}(T_i(\widetilde{X}))\right],\quad i\in\mathbf{I}[1,N],$
with
$\operatorname{vec}(T_i(\widetilde{X})) = \left(I\otimes\left(A_i+\frac{\pi_{ii}}{2}I\right)^{T}\right)\operatorname{vec}(\widetilde{X}_i(k)) + \left(\left(A_i+\frac{\pi_{ii}}{2}I\right)^{T}\otimes I\right)\operatorname{vec}(\widetilde{X}_i(k)) + \sum_{j=1,\,j\ne i}^{N}\pi_{ij}\operatorname{vec}(\widetilde{X}_j(k)),\quad i\in\mathbf{I}[1,N].$
Further, by using the notation (14), it can be obtained from the above equations that
$\operatorname{vec}(\widetilde{X}_i(k+1)) = \operatorname{vec}(\widetilde{X}_i(k)) - \mu\Psi_i\operatorname{vec}(T_i(\widetilde{X})),\quad i\in\mathbf{I}[1,N],$
with
$\operatorname{vec}(T_i(\widetilde{X})) = \Psi_i\operatorname{vec}(\widetilde{X}_i(k)) + \sum_{j=1,\,j\ne i}^{N}\pi_{ij}\operatorname{vec}(\widetilde{X}_j(k)),\quad i\in\mathbf{I}[1,N],$
where $\Psi_i$, $i\in\mathbf{I}[1,N]$, are defined in (14). From the above equations, one can obtain
$\operatorname{vec}(\widetilde{X}_i(k+1)) = \operatorname{vec}(\widetilde{X}_i(k)) - \mu\left[\Psi_i^{2}\operatorname{vec}(\widetilde{X}_i(k)) + \Psi_i\sum_{j=1,\,j\ne i}^{N}\pi_{ij}\operatorname{vec}(\widetilde{X}_j(k))\right].$
Then, the above relations can be compactly written as
(23) $\tilde{\eta}(k+1) = (I-\mu\Omega)\tilde{\eta}(k),$
where $\Omega$ is defined in (13) and
(24) $\tilde{\eta}(k) = \left[\operatorname{vec}(\widetilde{X}_1(k))^{T}\ \operatorname{vec}(\widetilde{X}_2(k))^{T}\ \cdots\ \operatorname{vec}(\widetilde{X}_N(k))^{T}\right]^{T}.$
This relation implies that $\lim_{k\to\infty}\widetilde{X}_i(k)=0$, $i\in\mathbf{I}[1,N]$, for arbitrary initial conditions if and only if $I-\mu\Omega$ is Schur stable. Further, it is well known that $I-\mu\Omega$ is Schur stable if and only if its eigenvalues satisfy $|1-\mu(c_i+d_i\mathrm{i})|<1$, $i\in\mathbf{I}[1,n^{2}N]$. From this relation, it can be obtained that
(25) $\mu^{2}(c_i^{2}+d_i^{2}) - 2\mu c_i < 0,\quad i\in\mathbf{I}[1,n^{2}N].$
Obviously, the convergence condition (17) of the algorithm (12) can be obtained from relation (25). The proof of this theorem is thus completed.

Remark 3.3

In Theorem 3.1, the convergence results of Algorithm 3.1 are given under the constraint conditions (16) and (18). In future work, we will investigate removing these conditions by applying some other techniques.

Corollary 3.1

Assume that all the eigenvalues $\lambda_i(\Omega)\in\mathbb{R}$, $i\in\mathbf{I}[1,n^{2}N]$, and that
(26) $\lambda_1(\Omega) > \lambda_2(\Omega) > \cdots > \lambda_{n^{2}N}(\Omega).$
If $\lambda_i(\Omega)>0$, $i\in\mathbf{I}[1,n^{2}N]$, then the sequence $X(k)=(X_1(k),X_2(k),\ldots,X_N(k))$ obtained by Algorithm 3.1 with an arbitrary initial condition converges to the unique solution of the CLMEs (3) if and only if $0<\mu<\frac{2}{\lambda_1(\Omega)}$. If $\lambda_i(\Omega)<0$, $i\in\mathbf{I}[1,n^{2}N]$, then the sequence $X(k)=(X_1(k),X_2(k),\ldots,X_N(k))$ obtained by Algorithm 3.1 with an arbitrary initial condition converges to the unique solution of the CLMEs (3) if and only if $\frac{2}{\lambda_{n^{2}N}(\Omega)}<\mu<0$.

Remark 3.4

In Corollary 3.1, the convergence conditions of the proposed Algorithm 3.1 are provided for two special cases: all the eigenvalues of the matrix $\Omega$ are greater than zero, or all the eigenvalues of the matrix $\Omega$ are less than zero. For the case where the matrix $\Omega$ has both positive and negative eigenvalues, it is difficult to obtain an explicit expression of the convergence conditions in the current paper. This is left for future work.

Theorem 3.2

Assume that the continuous CLMEs (3) have a unique solution $X=(X_1,X_2,\ldots,X_N)$. With $\Omega$ defined in (13) and any initial condition $X(0)=(X_1(0),X_2(0),\ldots,X_N(0))$, the following relation holds:
(27) $\|X(k+1)-X\|_{F} \le \rho(I-\mu\Omega)^{k}\,\|X(0)-X\|_{F}.$
Moreover, if all the eigenvalues $\lambda_i(\Omega)\in\mathbb{R}$, $i\in\mathbf{I}[1,n^{2}N]$, then the convergence rate of Algorithm 3.1 is maximized when
$\mu = \mu_{\mathrm{opt}} = \frac{2}{\lambda_{\max}(\Omega)+\lambda_{\min}(\Omega)}.$

Proof.

It is known that the relation $\|A\|_{F} = \|\operatorname{vec}(A)\|_{2}$ holds for any matrix $A$; hence it follows from (23) that
$\|X(k+1)-X\|_{F} = \|\tilde{\eta}(k+1)\|_{2} = \|(I-\mu\Omega)\tilde{\eta}(k)\|_{2} \le \|I-\mu\Omega\|_{2}\,\|\tilde{\eta}(k)\|_{2} = \rho(I-\mu\Omega)\,\|\tilde{\eta}(k)\|_{2} \le \cdots \le \rho(I-\mu\Omega)^{k}\,\|\tilde{\eta}(0)\|_{2} = \rho(I-\mu\Omega)^{k}\,\|X(0)-X\|_{F}.$
From the above relation, we conclude that the relation (27) holds.

It can be seen from relation (27) that $\rho(I-\mu\Omega)$ can be used to measure the convergence rate of Algorithm 3.1. Moreover, relation (27) shows that the smaller $\rho(I-\mu\Omega)$ is, the faster the proposed Algorithm 3.1 converges. In other words, the convergence rate of Algorithm 3.1 is maximized if $\rho(I-\mu\Omega)$ is minimized. According to the above analysis, the iteration in (12) is convergent if and only if the eigenvalues of the matrix $I-\mu\Omega$ satisfy $|1-\mu\lambda_i(\Omega)|<1$. In this case, we can further assume that relation (26) holds. The optimal convergence factor $\mu_{\mathrm{opt}}$ should satisfy
$\min_{\mu}\max\left\{|1-\mu\lambda_1(\Omega)|,\,|1-\mu\lambda_2(\Omega)|,\,\ldots,\,|1-\mu\lambda_{n^{2}N}(\Omega)|\right\} = \min_{\mu}\max\left\{|1-\mu\lambda_1(\Omega)|,\,|1-\mu\lambda_{n^{2}N}(\Omega)|\right\},$
which means that $|1-\mu\lambda_1(\Omega)| = |1-\mu\lambda_{n^{2}N}(\Omega)|$ has a non-trivial solution. From the above analysis, the optimal choice of $\mu$ is given by
$\mu_{\mathrm{opt}} = \frac{2}{\lambda_1(\Omega)+\lambda_{n^{2}N}(\Omega)} = \frac{2}{\lambda_{\max}(\Omega)+\lambda_{\min}(\Omega)}.$
Thus, the proof of this theorem is completed.
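All quantities appearing in Theorem 3.1, Corollary 3.1 and Theorem 3.2 are computable from the problem data. The following sketch (the helper names are ours) assembles $\Omega$ from (13)–(14), verifies condition (16), and returns the step-size bound from (17) together with $\mu_{\mathrm{opt}}$ from Theorem 3.2; the $\mu_{\mathrm{opt}}$ formula is applied under the theorem's assumption of a real spectrum, using the real parts of the eigenvalues.

```python
import numpy as np

def build_omega(A, Pi):
    """Assemble Omega in (13) with Psi_i from (14)."""
    N, n = len(A), A[0].shape[0]
    In = np.eye(n)
    Psi = []
    for i in range(N):
        M = (A[i] + 0.5 * Pi[i, i] * In).T
        Psi.append(np.kron(In, M) + np.kron(M, In))
    return np.block([[Psi[i] @ Psi[i] if i == j else Pi[i, j] * Psi[i]
                      for j in range(N)] for i in range(N)])

def step_sizes(Omega):
    """Upper bound of (17) and mu_opt of Theorem 3.2.

    Requires condition (16); mu_opt additionally assumes a real
    spectrum, as in Corollary 3.1 and Theorem 3.2."""
    lam = np.linalg.eigvals(Omega)
    c, d = lam.real, lam.imag
    if not np.all(c > 0):
        raise ValueError("condition (16) fails: some Re lambda_i(Omega) <= 0")
    mu_bound = np.min(2.0 * c / (c**2 + d**2))   # condition (17)
    mu_opt = 2.0 / (c.max() + c.min())           # Theorem 3.2 (real case)
    return mu_bound, mu_opt
```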

4. Illustrative examples

In this section, we give two examples to illustrate the effectiveness of the iterative Algorithms 2.1–3.1 and to validate the theoretical results on the optimal convergence. For a fair comparison of the different iterative algorithms, we define the iterative error at iteration step $k$ as $\log_{10}\delta(k)$, where

$\delta(k) = \sum_{i=1}^{N}\left\|A_i^{T}X_i(k) + X_i(k)A_i + Q_i + \sum_{j=1}^{N}\pi_{ij}X_j(k)\right\|_{F}^{2}.$

Example 4.1

Consider the continuous CLMEs in the form of (3) with the following system matrices
$$A_1 = \begin{bmatrix} 1.3232 & 1.1582 & 1.0290\\ 0.1229 & 2.0737 & 0.2234\\ 0.6075 & 1.1656 & 3.1031 \end{bmatrix},\quad A_2 = \begin{bmatrix} 2.479 & 1.3537 & 0.5717\\ 0.8246 & 1.8727 & 0.4868\\ 1.0958 & 0.9525 & 0.6483 \end{bmatrix},\quad A_3 = \begin{bmatrix} 2.7604 & 0.5164 & 0.0381\\ 0.5067 & 2.6064 & 0.399\\ 0.528 & 0.2465 & 2.1332 \end{bmatrix},$$
and transition rate matrix
$$\Pi = \begin{bmatrix} -3 & 2 & 1\\ 1.5 & -2 & 0.5\\ 0.75 & 0.75 & -1.5 \end{bmatrix}.$$
It can be seen that the number of subsystems is $N=3$, and the dimension of the above system is $n=3$. This example was once used in [Citation15]. Assume that the positive definite matrices $Q_i$, $i\in\mathbf{I}[1,N]$, are all chosen as identity matrices. Following [Citation15], the initial values of the iterative algorithms are given by
(28) $X_1(0) = \begin{bmatrix} 1 & 0 & 0.5\\ 0 & 0 & 1.2\\ 2 & 3 & 0.8 \end{bmatrix},\quad X_2(0) = \begin{bmatrix} 1 & 0.5 & 0.7\\ 1 & 0 & 0.9\\ 0 & 2.1 & 1 \end{bmatrix},\quad X_3(0) = \begin{bmatrix} 0.8 & 0.5 & 1.6\\ 0.15 & 2.3 & 0.7\\ 0.3 & 2.1 & 1.5 \end{bmatrix}.$
Next, several simulations are given to show the different advantages of the proposed Algorithm 3.1.

Simulation 1: In this simulation, we verify the convergence results of the developed gradient-based iterative Algorithm 3.1 with the initial conditions (28). First, we use Algorithm 3.1 to solve the continuous CLMEs (3) for different tuning parameters $\mu$. According to Theorem 3.1, the iterative Algorithm 3.1 converges if and only if the tunable parameter satisfies $0<\mu<0.0239$. When Algorithm 3.1 is applied, by using the method in Theorem 3.2, the best tuning parameter is given by $\mu=\mu_{\mathrm{opt}}=0.0210$. The iterative errors $\log_{10}\delta(k)$ against $k$ for different tuning parameters are shown in Figure 1.

Figure 1. The convergence performance of the proposed Algorithm 3.1 with different tuning parameters.


From Figure 1, it can be seen that Algorithm 3.1 is convergent when the tunable parameter lies in the interval $0<\mu<0.0239$. In addition, the optimal tuning parameter leads to the fastest convergence rate.
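For reference, Simulation 1 can be reproduced along the following lines, assuming A1, A2, A3 and the initial values X10, X20, X30 hold the Example 4.1 data printed above as NumPy arrays; the snippet uses the helpers sketched in the earlier code blocks.

```python
import numpy as np

# A1, A2, A3, X10, X20, X30: Example 4.1 matrices, assumed already defined.
Pi = np.array([[-3.0,  2.0,  1.0],
               [ 1.5, -2.0,  0.5],
               [ 0.75, 0.75, -1.5]])
Q = [np.eye(3) for _ in range(3)]          # Q_i chosen as identities

Omega = build_omega([A1, A2, A3], Pi)      # helper sketched in Section 3
mu_bound, mu_opt = step_sizes(Omega)       # compare with 0.0239 and 0.0210
X, iters = gradient_clme([A1, A2, A3], Pi, Q, [X10, X20, X30], mu_opt)
```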

Simulation 2: In this simulation, we compare the convergence performance of the proposed gradient-based iterative Algorithm 3.1 with the existing iterative Algorithms 2.1, 2.2, 2.3 and 2.4 in terms of computational time. By applying the method in [Citation7], we choose the step size $\mu=\mu_{\mathrm{opt}}=0.0214$ so that Algorithm 2.3 with the initial conditions (28) has the maximal convergence rate. For Algorithm 2.4, the fastest convergence rate is obtained when the step size is chosen as $\mu=\mu_{\mathrm{opt}}=0.0161$. By using the method in Theorem 3.2, it can be derived that Algorithm 3.1 has the best convergence performance if $\mu=\mu_{\mathrm{opt}}=0.0210$. Using these iterative algorithms to solve the continuous CLMEs (3), we obtain the computational times and iterative errors shown in Table 1 for the precision $\delta(k)\le 10^{-15}$.

From Table 1, one can see that Algorithm 3.1 takes less computational time than the previous iterative Algorithms 2.1, 2.2, 2.3 and 2.4. In addition, the iteration errors $\log_{10}\delta(k)$ versus $k$ for Algorithms 2.3, 2.4 and 3.1 are shown in Figure 2.

Figure 2. The convergence performance of the different gradient-based iterative algorithms.


Table 1. Comparison of the convergence performance for different iterative algorithms.

It can be seen from Figure 2 that the proposed Algorithm 3.1 is more effective and converges much faster than the previous gradient-based iterative Algorithms 2.3 and 2.4. The total iteration numbers of Algorithms 2.3, 2.4 and 3.1 with the same cutoff error $10^{-14}$ are 300, 260 and 120, respectively. Thus, the convergence rate of the proposed Algorithm 3.1 is much faster than that of the existing Algorithms 2.3 and 2.4.

Example 4.2

In this example, we show the effectiveness of the proposed gradient-based Algorithm 3.1 for the continuous CLMEs (3) with high-dimensional system matrices. Consider the following continuous CLMEs in the form of (3) with $N=2$. The system matrices are as follows:

$$A_1 = \begin{bmatrix}
3.3313 & 3.1173 & 3.7911 & 3.4597 & 4.2978 & 3.5612 & 3.1514 & 4.4412 & 4.9275 & 3.1113\\
3.1173 & 6.0572 & 4.2081 & 6.0757 & 6.9933 & 5.8278 & 3.8478 & 6.0925 & 6.3057 & 5.4731\\
3.7911 & 4.2081 & 5.6183 & 4.3528 & 4.5035 & 4.9983 & 4.1194 & 5.3772 & 5.4906 & 4.8210\\
3.4597 & 6.0757 & 4.3528 & 7.7505 & 8.3844 & 5.4953 & 4.2787 & 5.7269 & 6.5043 & 6.1859\\
4.2978 & 6.9933 & 4.5035 & 8.3844 & 10.2603 & 5.7841 & 4.4345 & 7.4483 & 8.0805 & 6.6060\\
3.5612 & 5.8278 & 4.9983 & 5.4953 & 5.7841 & 8.2986 & 5.5146 & 6.4328 & 7.0797 & 5.1943\\
3.1514 & 3.8478 & 4.1194 & 4.2787 & 4.4345 & 5.5146 & 4.4088 & 5.0494 & 5.5947 & 4.0898\\
4.4412 & 6.0925 & 5.3772 & 5.7269 & 7.4483 & 6.4328 & 5.0494 & 9.0362 & 8.2483 & 6.5311\\
4.9275 & 6.3057 & 5.4906 & 6.5043 & 8.0805 & 7.0797 & 5.5947 & 8.2483 & 9.6603 & 6.2078\\
3.1113 & 5.4731 & 4.8210 & 6.1859 & 6.6060 & 5.1943 & 4.0898 & 6.5311 & 6.2078 & 7.3382
\end{bmatrix},$$

$$A_2 = \begin{bmatrix}
5.6400 & 4.6558 & 5.8251 & 4.8026 & 6.6079 & 4.8687 & 4.4065 & 6.5460 & 8.1805 & 3.8238\\
4.6558 & 6.9513 & 5.4297 & 6.8532 & 8.2161 & 7.0980 & 4.6756 & 6.3913 & 8.3178 & 4.8481\\
5.8251 & 5.4297 & 7.2981 & 5.1086 & 6.3680 & 5.7261 & 4.9133 & 7.3166 & 8.2418 & 5.0372\\
4.8026 & 6.8532 & 5.1086 & 9.2468 & 10.2573 & 6.4203 & 5.0568 & 5.6261 & 8.4116 & 5.9118\\
6.6079 & 8.2161 & 6.3680 & 10.2573 & 12.8308 & 7.2576 & 5.7195 & 8.1378 & 10.9120 & 6.5402\\
4.8687 & 7.0980 & 5.7261 & 6.4203 & 7.2576 & 10.1240 & 6.5006 & 7.0499 & 9.5459 & 4.7922\\
4.4065 & 4.6756 & 4.9133 & 5.0568 & 5.7195 & 6.5006 & 5.2548 & 5.9297 & 7.7278 & 4.1116\\
6.5460 & 6.3913 & 7.3166 & 5.6261 & 8.1378 & 7.0499 & 5.9297 & 10.2524 & 10.2895 & 5.7193\\
8.1805 & 8.3178 & 8.2418 & 8.4116 & 10.9120 & 9.5459 & 7.7278 & 10.2895 & 14.6022 & 6.6714\\
3.8238 & 4.8481 & 5.0372 & 5.9118 & 6.5402 & 4.7922 & 4.1116 & 5.7193 & 6.6714 & 6.0503
\end{bmatrix},$$

and the transition rate matrix is
$$\Pi = \begin{bmatrix} -0.3 & 0.3\\ 0.5 & -0.5 \end{bmatrix}.$$
In this example, the dimension of the above system is $n=10$, and the positive definite matrices $Q_i$, $i=1,2$, are both chosen as identity matrices. Next, we compare the convergence performance of the different algorithms with zero initial conditions. For this example, by applying the method in [Citation7], we choose the step size $\mu=\mu_{\mathrm{opt}}=0.0082$ so that Algorithm 2.3 with zero initial conditions has the maximal convergence rate. For Algorithm 2.4, the fastest convergence rate is obtained when the step size is chosen as $\mu=\mu_{\mathrm{opt}}=0.0051$. By using the method in Theorem 3.2, it can be derived that Algorithm 3.1 has the best convergence performance if $\mu=\mu_{\mathrm{opt}}=0.0069$.

First, the convergence curves for Algorithms 2.3–3.1 are given in Figure 3.

Figure 3. The convergence performance of the different gradient-based iterative algorithms with high-dimensional system matrices.


For the continuous CLMEs (3) with high-dimensional system matrices, it can be seen from Figure 3 that the proposed gradient-based Algorithm 3.1 with zero initial conditions has a faster convergence rate than the previous Algorithms 2.3–2.4 if the tuning parameter is appropriately chosen.

For this example with high-dimensional system matrices, we next compare the proposed Algorithm 3.1 with the existing Algorithms 2.1–2.4 in terms of computational time. Using the different iterative algorithms to solve the coupled Lyapunov matrix equations (3) with zero initial conditions, the computational times for the cutoff iterative error $\delta(k)\le 10^{-14}$ are listed in Table 2.

From Table 2, it can be seen that if the parameter $\mu$ is properly chosen, then the proposed gradient-based iterative Algorithm 3.1 takes less computational time than the existing Algorithms 2.1–2.4. From these results, it can be concluded that the proposed Algorithm 3.1 has a much lower computational cost than the existing Algorithms 2.1–2.4, even when the coupled Lyapunov matrix equations (3) have high-dimensional system matrices.

Table 2. Comparison of the convergence performance of the Algorithms 2.1–3.1 with high-dimensional system matrices.

5. Conclusions

In this paper, a gradient-based iterative algorithm has been given for solving the continuous CLMEs (3), which arise in continuous-time Markovian jump linear systems. The structure of this new gradient-based iterative algorithm is much simpler than that of the existing gradient-based iterative algorithms. It has been proved that the sequence generated by the proposed algorithm converges to the unique positive definite solution of the continuous CLMEs (3), with a faster convergence rate in comparison with the existing algorithms. Moreover, the optimal tunable parameter achieving the fastest convergence rate is explicitly provided.

In the future, it is expected that these ideas and methods can be further used to solve other matrix equations, such as coupled Riccati matrix equations and discrete coupled Lyapunov equations.

Disclosure statement

No potential conflict of interest was reported by the authors.

Additional information

Funding

This work was supported by the National Natural Science Foundation of China [grant numbers 31370565 and 51278227].

References

  • Mariton M. Almost sure and moments stability of jump linear systems. Syst Control Lett. 1988;11(5):393–397. doi: 10.1016/0167-6911(88)90098-9
  • Ji Y, Chizeck HJ. Controllability, stabilizability, and continuous-time Markovian jump linear quadratic control. IEEE Trans Automat Contr. 1990;35(7):777–788. doi: 10.1109/9.57016
  • Sun HJ, Wu AG, Zhang Y. Stability and stabilisation of Ito stochastic systems with piecewise homogeneous Markov jumps. Int J Syst Sci. 2018;50(2):307–319. doi: 10.1080/00207721.2018.1551977
  • Luo TJ. Stability analysis of stochastic pantograph multi-group models with dispersal driven by G-Brownian motion. Appl Math Comput. 2019;354:396–410.
  • Jodar L, Mariton M. Explicit solutions for a system of coupled Lyapunov differential matrix equations. Proc Edinburgh Math Soc. 1987;30(2):427–434. doi: 10.1017/S0013091500026821
  • Ge ZW, Ding F, Xu L. Gradient-based iterative identification method for multivariate equation-error autoregressive moving average systems using the decomposition technique. J Franklin Inst. 2019;356(3):1658–1676. doi: 10.1016/j.jfranklin.2018.12.002
  • Zhang HM. Quasi gradient-based inversion-free iterative algorithm for solving a class of the nonlinear matrix equations. Comput Math Appl. 2019;77(5):1233–1244. doi: 10.1016/j.camwa.2018.11.006
  • Zhang H. Reduced-rank gradient-based algorithms for generalized coupled Sylvester matrix equations and its applications. Comput Math Appl. 2015;70(8):2049–2062. doi: 10.1016/j.camwa.2015.08.013
  • Ding F, Zhang H. Gradient-based iterative algorithm for a class of the coupled matrix equations related to control systems. IET Control Theory Appl. 2014;8(15):1588–1595. doi: 10.1049/iet-cta.2013.1044
  • Hajarian M. On the convergence of conjugate direction algorithm for solving coupled Sylvester matrix equations. Comput Appl Math. 2018;37(3):3077–3092. doi: 10.1007/s40314-017-0497-y
  • Sun HJ, Zhang Y, Fu YM. Accelerated Smith iterative algorithms for coupled Lyapunov matrix equations. J Franklin Inst. 2017;354(15):6877–6893. doi: 10.1016/j.jfranklin.2017.07.007
  • Sun HJ, Liu W, Teng Y. Explicit iterative algorithms for solving coupled discrete-time Lyapunov matrix equations. IET Control Theory Appl. 2016;10(18):2565–2573. doi: 10.1049/iet-cta.2016.0437
  • Huang BH, Ma CF. Gradient-based iterative algorithms for generalized coupled Sylvester-conjugate matrix equations. Comput Math Appl. 2017;75(7):2295–2310. doi: 10.1016/j.camwa.2017.12.011
  • Zhang H, Yin H. New proof of the gradient-based iterative algorithm for the Sylvester conjugate matrix equation. Comput Math Appl. 2017;74(12):3260–3270. doi: 10.1016/j.camwa.2017.08.017
  • Borno I. Parallel computation of the solutions of coupled algebraic Lyapunov equations. Automatica. 1995;31(9):1345–1347. doi: 10.1016/0005-1098(95)00037-W
  • Qian YY, Pang WJ. An implicit sequential algorithm for solving coupled Lyapunov equations of continuous-time Markovian jump systems. Automatica. 2015;60:245–250. doi: 10.1016/j.automatica.2015.07.011
  • Wu AG, Duan GR, Liu W. Implicit iterative algorithms for continuous Markovian jump Lyapunov equations. IEEE Trans Automat Contr. 2016;61(10):3183–3189. doi: 10.1109/TAC.2015.2508884
  • Wu AG, Sun HJ, Zhang Y. An SOR implicit iterative algorithm for coupled Lyapunov equations. Automatica. 2018;97:38–47. doi: 10.1016/j.automatica.2018.07.021
  • Damm T, Sato K, Vierling A. Numerical solution of Lyapunov equations related to Markov jump linear systems. Numer Linear Algebra Appl. 2017;25(6):13–14.
  • Hajarian M. Convergence properties of BCR method for generalized Sylvester matrix equation over generalized reflexive and anti-reflexive matrices. Linear Multilinear Algebra. 2018;66(10):1975–1990. doi: 10.1080/03081087.2017.1382441
  • Zhang H. A finite iterative algorithm for solving the complex generalized coupled Sylvester matrix equations by using the linear operators. J Franklin Inst. 2017;354(4):1856–1874. doi: 10.1016/j.jfranklin.2016.12.011
  • Tian ZL, Fan CM, Deng YJ, et al. New explicit iteration algorithms for solving coupled continuous Markovian jump Lyapunov matrix equations. J Franklin Inst. 2018;355(17):8346–8372. doi: 10.1016/j.jfranklin.2018.09.027
  • Brewer J. Kronecker products and matrix calculus in system theory. IEEE Trans Circuits Syst. 1978;25(9):772–781. doi: 10.1109/TCS.1978.1084534