Mathematical and Computer Modelling of Dynamical Systems
Methods, Tools and Applications in Engineering and Related Sciences
Volume 24, 2018 - Issue 6
Original Articles

H2 optimal model order reduction on the Stiefel manifold for the MIMO discrete system by the cross Gramian

Pages 610-625 | Received 19 Jun 2017, Accepted 02 Sep 2018, Published online: 19 Sep 2018

ABSTRACT

In this paper, the H2 optimal model order reduction method for the large-scale multiple-input multiple-output (MIMO) discrete system is investigated. First, the MIMO discrete system is resolved into a number of single-input single-output (SISO) subsystems, and the H2 norm of the original MIMO discrete system is expressed by the cross Gramian of each subsystem. Then, the retraction and the vector transport on the Stiefel manifold are introduced, and the geometric conjugate gradient model order reduction method is proposed. The reduced system of the original MIMO discrete system is generated by using the proposed method. Finally, two numerical examples show the efficiency of the proposed method.

1. Introduction

Many engineering systems, such as control systems, electric power systems and network systems, are often modeled by large-scale differential equations [Citation1–Citation3]. The direct simulation of these systems requires a large amount of computer memory and time. In order to save computational resources and simulation time, model order reduction methods use a low-order system to approximate the original high-order system. The low-order system provides a good approximation of the original system and preserves some of its properties, such as stability. Model order reduction methods have already been used in many practical applications, including the design of very large-scale integrated circuits and weather prediction [Citation4–Citation6]. For details of model order reduction, one can refer to [Citation7,Citation8].

Discrete systems arise in many fields of application. For a simple example, consider the discretization of parabolic partial differential equations such as the heat equation or the linear convection-diffusion equation. It is well known that a semi-discretization using the method of lines can lead to computational instability if the order of the resulting system of ordinary differential equations is very high. Hence, one often prefers a full discretization via a Crank-Nicolson scheme, which yields a discrete system; see [Citation9,Citation10]. There are many feasible model order reduction methods for large-scale discrete systems, such as balanced truncation (BT) methods [Citation11,Citation12], Krylov subspace model order reduction methods [Citation13,Citation14] and Laguerre-based model order reduction methods [Citation15]. Specifically, the BT methods construct a balancing transformation and truncate the smaller Hankel singular values to establish the reduced system, which preserves stability and admits an a priori error bound. The Krylov subspace model order reduction methods can generate the reduced system efficiently, since only matrix-vector multiplications are needed. In addition, the Laguerre-based model order reduction method provides an approximation of the original system in the time domain by Laguerre polynomials, and the references [Citation16–Citation18] provide further details on model order reduction methods.
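As a small, self-contained illustration of this point (not part of the paper; the grid size, time step and input/output choices below are arbitrary), the following Python sketch applies a Crank-Nicolson step to the one-dimensional heat equation and obtains a stable discrete system of the form considered in Section 2.

```python
# Illustrative sketch (not from the paper): a Crank-Nicolson full discretization
# of the 1-D heat equation yields a discrete system x(k+1) = A x(k) + B u(k),
# y(k) = C x(k).  The input term is handled with a simple explicit evaluation.
import numpy as np

N, h, dt = 50, 1.0 / 51, 1e-3
L = (np.diag(-2.0 * np.ones(N)) + np.diag(np.ones(N - 1), 1)
     + np.diag(np.ones(N - 1), -1)) / h**2                 # 1-D Laplacian
M = np.linalg.inv(np.eye(N) - 0.5 * dt * L)
A = M @ (np.eye(N) + 0.5 * dt * L)                         # discrete state matrix
B = dt * M[:, :1]                                          # heat source at the first node
C = np.ones((1, N)) / N                                    # output: mean temperature
print(max(abs(np.linalg.eigvals(A))) < 1.0)                # True: stable discrete system
```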

Since the H2 norm is differentiable, H2 optimal model order reduction methods have been developed to reduce large-scale systems. The first-order necessary conditions for H2 optimality can be found in [Citation19]. The work [Citation20] shows the equivalence among the structured orthogonality conditions, the first-order necessary conditions and the interpolation-based conditions for H2 optimality, and then presents an iterative rational Krylov algorithm (IRKA) for continuous-time and discrete-time systems. The paper [Citation21] formulates the H2 optimal model order reduction problem as a minimization problem over the coefficient matrices of the reduced system, and shows that the stationary points of this minimization problem can be described via tangential interpolation. For discrete systems with simple poles, a MIMO iterative rational interpolation method (MIRIAm) was obtained independently in [Citation22]. It is worth noting that [Citation23] treats the H2 optimal model order reduction problem of large-scale continuous-time systems as a minimization problem over the Stiefel manifold. Exploiting the global convergence and locally superlinear convergence properties of trust-region methods, [Citation24] gives trust-region model order reduction methods on the Stiefel and Grassmann manifolds based on the controllability and observability Gramians. However, the controllability (observability) Gramian only carries the controllability (observability) information of the related system, and two Lyapunov equations need to be solved to obtain these Gramians, which may lead to a heavy computational burden. The cross Gramian avoids this issue, because it provides the controllability and observability information at the same time. These properties of the cross Gramian afford it wide application in model order reduction. We can also see in [Citation20] that the H2 norm of a SISO system can be computed by using the cross Gramian, which provides another inspiration for solving the H2 optimal problem. Recently, an H2 iterative algorithm based on the cross Gramian for the general MIMO continuous system was given in [Citation25]. In addition, the cross Gramian of the non-symmetric continuous system was defined in [Citation26].

In this paper, we mainly focus on the H2 optimal model order reduction of MIMO discrete systems. The considered discrete system is not required to be SISO or symmetric. First, we divide the original MIMO discrete system into a number of SISO discrete subsystems, and the H2 norm of the MIMO discrete system is formulated via the cross Gramian of each subsystem. Then, the H2 optimal model order reduction problem of the MIMO discrete system with orthogonality constraints is regarded as an unconstrained optimization problem on the Stiefel manifold. The geometric conjugate gradient model order reduction method is established, and the reduced system of the original MIMO discrete system is generated. It should be pointed out that the H2 optimal model order reduction problem discussed in [Citation21] is posed on a Euclidean space, in which the cost function is viewed as a real-valued function of the coefficient matrices of the reduced system. In this paper, we use the cross Gramian to reduce the MIMO discrete system on the Stiefel manifold instead of the controllability or observability Gramian used in [Citation23]. In addition, [Citation25] constructs the reduced system by reducing each subsystem, which requires computing several transformation matrices simultaneously, whereas only one transformation matrix is needed in our method.

This paper is organized as follows. In Section 2, some basic knowledge about discrete systems and model order reduction is reviewed. The decomposition of the MIMO discrete system and the H2 norm of the original MIMO discrete system based on the cross Gramian are explored in Section 3. In Section 4, the Stiefel manifold is introduced, and the H2 optimal model reduction problem is formulated. In Section 5, a geometric conjugate gradient model order reduction method on the Stiefel manifold is proposed to solve the H2 optimal model order reduction problem of the MIMO discrete system. In Section 6, two numerical examples are employed to illustrate the efficiency of the proposed model order reduction method. Some conclusions are drawn in Section 7.

2. Preliminaries

In this paper, we consider the following MIMO discrete system

$$\Sigma:\ \begin{cases}x(k+1)=Ax(k)+Bu(k),\\ y(k)=Cx(k),\end{cases}\qquad(1)$$

where $A\in\mathbb{R}^{n\times n}$ is the state matrix, and $B\in\mathbb{R}^{n\times m}$ and $C\in\mathbb{R}^{p\times n}$ are the input and output matrices, respectively. $x(k)\in\mathbb{R}^{n}$, $y(k)\in\mathbb{R}^{p}$ and $u(k)\in\mathbb{R}^{m}$ represent the state, the output and the input of the system (1). When $p=m=1$, the system (1) reduces to a SISO discrete system. Assume that the initial state is $x(0)=0$. Applying the Z-transformation to the system (1), its transfer function is represented as

$$H(z)=C(zI_{n}-A)^{-1}B,\qquad(2)$$

where $I_{n}$ denotes the $n\times n$ identity matrix. We further assume that the system (1) is stable, i.e., all eigenvalues of the matrix $A$ lie inside the unit circle. The $H_{2}$ norm of the system (1) is expressed as [Citation22]

$$\|H(z)\|_{H_{2}}^{2}=\frac{1}{2\pi}\int_{0}^{2\pi}\operatorname{trace}\!\left([H(e^{i\theta})]^{*}[H(e^{i\theta})]\right)\mathrm{d}\theta,\qquad(3)$$

in which $(\cdot)^{*}$ denotes the conjugate transpose. The cross Gramian of the discrete system (1) in the SISO case is defined as [Citation27,Citation28]

$$R=\sum_{k=0}^{\infty}A^{k}BCA^{k},$$

and satisfies the following Stein equation

$$ARA-R+BC=0.$$
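For concreteness, the following minimal Python sketch (illustrative only; the matrices are randomly generated and are not from the paper) solves this Stein equation by Kronecker vectorization and checks the result against the truncated series that defines the cross Gramian.

```python
# A minimal numerical sketch: solve A R A - R + B C = 0 for a small random,
# stable SISO discrete system and compare with the truncated series sum_k A^k B C A^k.
import numpy as np

rng = np.random.default_rng(0)
n = 6
A = rng.standard_normal((n, n))
A *= 0.8 / np.linalg.norm(A, 2)             # rescale so that ||A||_2 < 1 (stable)
B = rng.standard_normal((n, 1))             # single input
C = rng.standard_normal((1, n))             # single output

# vec(A R A) = (A^T kron A) vec(R) with column-major vec, so
# (I - A^T kron A) vec(R) = vec(B C).
lhs = np.eye(n * n) - np.kron(A.T, A)
R = np.linalg.solve(lhs, (B @ C).flatten(order="F")).reshape((n, n), order="F")

# Check against the series definition, truncated at a large K.
R_series = sum(np.linalg.matrix_power(A, k) @ B @ C @ np.linalg.matrix_power(A, k)
               for k in range(150))
print(np.allclose(R, R_series))                              # True
print(np.allclose(A @ R @ A - R + B @ C, 0, atol=1e-10))     # Stein residual ~ 0
```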

$H_{2}$ optimal model order reduction in this paper aims at finding the following low-order system

$$\hat{\Sigma}:\ \begin{cases}\hat{x}(k+1)=\hat{A}\hat{x}(k)+\hat{B}u(k),\\ \hat{y}(k)=\hat{C}\hat{x}(k),\end{cases}\qquad(4)$$

with $\hat{A}=V^{T}AV\in\mathbb{R}^{r\times r}$, $\hat{B}=V^{T}B\in\mathbb{R}^{r\times m}$ and $\hat{C}=CV\in\mathbb{R}^{p\times r}$ under the constraint $V^{T}V=I_{r}$, such that the output $y(k)$ of the system (1) is approximated well by $\hat{y}(k)$ and the $H_{2}$ norm of $H(z)-\hat{H}(z)$ is as small as possible, in which

$$\hat{H}(z)=\hat{C}(zI_{r}-\hat{A})^{-1}\hat{B}$$

denotes the transfer function of the reduced system (4) with the initial state $\hat{x}(0)=V^{T}x(0)$.

It is easy to see that the reduced system (4) is determined by the transformation matrix $V$; thus $\|H(z)-\hat{H}(z)\|_{H_{2}}^{2}$ is a real-valued function of $V$. Define the cost function

$$J(V):=\|H(z)-\hat{H}(z)\|_{H_{2}}^{2}.$$

The $H_{2}$ optimal model order reduction problem of the system (1) is converted into the following optimization problem with orthogonality constraints

$$\min_{V\in\mathbb{R}^{n\times r},\ V^{T}V=I_{r}}J(V).\qquad(5)$$

In this paper, $J(V)$ is computed by means of the cross Gramian, and the optimization problem (5) under the constraint $V^{T}V=I_{r}$ is recast as an unconstrained optimization problem on the Stiefel manifold. Then, some optimization techniques are employed to solve it.

3. The decomposition of the discrete system

In order to construct the reduced system (4) by using the cross Gramian, we decompose the coefficient matrices B and C of the system (1) into

$$B=[B_{1},B_{2},\ldots,B_{m}],\qquad C=[C_{1}^{T},C_{2}^{T},\ldots,C_{p}^{T}]^{T},$$

where $B_{i}\in\mathbb{R}^{n\times1}$ and $C_{j}\in\mathbb{R}^{1\times n}$ are the $i$-th column and the $j$-th row of the matrices $B$ and $C$, respectively. Then the transfer function $H(z)$ of the system (1) can be rewritten as

$$H(z)=\begin{bmatrix}H_{11}(z)&H_{12}(z)&\cdots&H_{1m}(z)\\ H_{21}(z)&H_{22}(z)&\cdots&H_{2m}(z)\\ \vdots&\vdots&&\vdots\\ H_{p1}(z)&H_{p2}(z)&\cdots&H_{pm}(z)\end{bmatrix},$$

with $H_{ji}(z)=C_{j}(zI_{n}-A)^{-1}B_{i}$, which corresponds to the following SISO discrete subsystem

$$\Sigma_{ji}:\ \begin{cases}x_{i}(k+1)=Ax_{i}(k)+B_{i}u_{i}(k),\\ y_{ji}(k)=C_{j}x_{i}(k),\end{cases}\qquad(6)$$

where $u_{i}(k)$ $(i=1,\ldots,m)$ is the $i$-th component of $u(k)$. The $j$-th $(j=1,\ldots,p)$ component $y_{j}(k)$ of the output $y(k)$ is expressed as $y_{j}(k)=\sum_{i=1}^{m}y_{ji}(k)$. It is obvious that the subsystem (6) is stable if the system (1) is stable. Let $R_{ji}$ denote the cross Gramian of the subsystem (6); then $R_{ji}$ can be written as

$$R_{ji}=\sum_{k=0}^{\infty}A^{k}B_{i}C_{j}A^{k},$$

which satisfies the Stein equation

$$AR_{ji}A-R_{ji}+B_{i}C_{j}=0.\qquad(7)$$

The $H_{2}$ norm of the subsystem (6) based on $R_{ji}$ can be represented as [Citation29]

$$\|H_{ji}(z)\|_{H_{2}}^{2}=\operatorname{trace}(C_{j}R_{ji}B_{i}).\qquad(8)$$

The relation between the $H_{2}$ norms of the transfer functions $H(z)$ and $H_{ji}(z)$ is given by the following lemma.

Lemma 3.1 ([Citation22]). Let $H(z)$ and $H_{ji}(z)$ denote the transfer functions of the systems (1) and (6), respectively. Then it holds that

$$\|H(z)\|_{H_{2}}^{2}=\sum_{j=1}^{p}\sum_{i=1}^{m}\|H_{ji}(z)\|_{H_{2}}^{2}.\qquad(9)$$

Combining equation (8) and Lemma 3.1, the $H_{2}$ norm of the system (1) can be written as

$$\|H(z)\|_{H_{2}}^{2}=\sum_{j=1}^{p}\sum_{i=1}^{m}\operatorname{trace}(C_{j}R_{ji}B_{i}).\qquad(10)$$
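The identity (10) is easy to check numerically. The sketch below (random stable test matrices, not the paper's benchmarks) solves the Stein equation (7) for every pair $(j,i)$ and compares the resulting value with a truncated impulse-response sum of the $H_{2}$ norm.

```python
# Sketch: ||H||_{H2}^2 = sum_{j,i} trace(C_j R_ji B_i), with R_ji from (7),
# checked against the truncated sum of squared Markov parameters.
import numpy as np

def stein(A, Q):
    """Solve A X A - X + Q = 0 by Kronecker vectorization (small n only)."""
    n = A.shape[0]
    x = np.linalg.solve(np.eye(n * n) - np.kron(A.T, A), Q.flatten(order="F"))
    return x.reshape((n, n), order="F")

rng = np.random.default_rng(1)
n, m, p = 8, 2, 3
A = rng.standard_normal((n, n))
A *= 0.85 / np.linalg.norm(A, 2)            # make the test system stable
B = rng.standard_normal((n, m))
C = rng.standard_normal((p, n))

h2_cross = sum(np.trace(C[j:j+1, :] @ stein(A, B[:, i:i+1] @ C[j:j+1, :]) @ B[:, i:i+1])
               for j in range(p) for i in range(m))
h2_series = sum(np.linalg.norm(C @ np.linalg.matrix_power(A, k) @ B, "fro") ** 2
                for k in range(300))
print(h2_cross, h2_series)   # the two values agree up to truncation error
```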

Similarly, we decompose the reduced system (4) into the subsystem

$$\hat{\Sigma}_{ji}:\ \begin{cases}\hat{x}_{i}(k+1)=\hat{A}\hat{x}_{i}(k)+\hat{B}_{i}u_{i}(k),\\ \hat{y}_{ji}(k)=\hat{C}_{j}\hat{x}_{i}(k),\end{cases}\qquad(11)$$

where $\hat{B}_{i}$ and $\hat{C}_{j}$ are the $i$-th column and the $j$-th row of the matrices $\hat{B}$ and $\hat{C}$, respectively. The transfer function of (11) is $\hat{H}_{ji}(z)=\hat{C}_{j}(zI_{r}-\hat{A})^{-1}\hat{B}_{i}$. According to Lemma 3.1, the cost function $J(V)$ is further expressed as

$$J(V)=\sum_{j=1}^{p}\sum_{i=1}^{m}\|H_{ji}(z)-\hat{H}_{ji}(z)\|_{H_{2}}^{2}.$$

Define the error transfer function $H_{ji}^{e}(z):=H_{ji}(z)-\hat{H}_{ji}(z)$, which can be described by the following error system

$$\Sigma_{ji}^{e}:\ \begin{cases}x_{ji}^{e}(k+1)=A^{e}x_{ji}^{e}(k)+B_{i}^{e}u_{i}(k),\\ y_{ji}^{e}(k)=C_{j}^{e}x_{ji}^{e}(k),\end{cases}$$

with

$$A^{e}=\begin{bmatrix}A&0\\0&\hat{A}\end{bmatrix},\qquad B_{i}^{e}=\big[B_{i}^{T},\,\hat{B}_{i}^{T}\big]^{T},\qquad C_{j}^{e}=\big[C_{j},\,-\hat{C}_{j}\big].$$

Let $R_{ji}^{e}$ denote the cross Gramian of the error subsystem. If the subsystem (6) and its corresponding reduced subsystem are stable, then $R_{ji}^{e}$ satisfies

$$A^{e}R_{ji}^{e}A^{e}-R_{ji}^{e}+B_{i}^{e}C_{j}^{e}=0.\qquad(12)$$

Combining the equations (8) and (10), the cost function J(V) is finally calculated as

$$J(V)=\sum_{j=1}^{p}\sum_{i=1}^{m}\operatorname{trace}(C_{j}^{e}R_{ji}^{e}B_{i}^{e}).\qquad(13)$$

In order to obtain the expression of $J(V)$ with respect to the transformation matrix $V$, we partition $R_{ji}^{e}$ as

$$R_{ji}^{e}=\begin{bmatrix}R_{ji}&X_{ji}\\ Y_{ji}&\hat{R}_{ji}\end{bmatrix},\qquad(14)$$

with $X_{ji},\,Y_{ji}^{T}\in\mathbb{R}^{n\times r}$. Substituting (14) into (12) yields

$$AX_{ji}\hat{A}-X_{ji}-B_{i}\hat{C}_{j}=0,\qquad(15)$$
$$\hat{A}Y_{ji}A-Y_{ji}+\hat{B}_{i}C_{j}=0,\qquad(16)$$
$$\hat{A}\hat{R}_{ji}\hat{A}-\hat{R}_{ji}-\hat{B}_{i}\hat{C}_{j}=0.\qquad(17)$$

Moreover, plugging the block forms of the matrices $B_{i}^{e}$, $C_{j}^{e}$ and $R_{ji}^{e}$ into (13), $J(V)$ can be expressed as

$$J(V)=\sum_{j=1}^{p}\sum_{i=1}^{m}\operatorname{trace}\!\left(C_{j}R_{ji}B_{i}-\hat{C}_{j}Y_{ji}B_{i}+C_{j}X_{ji}\hat{B}_{i}-\hat{C}_{j}\hat{R}_{ji}\hat{B}_{i}\right).\qquad(18)$$

From the expression (18), the $H_{2}$ optimal model reduction problem (5) can be specialized as

$$\min_{V\in\mathbb{R}^{n\times r},\ V^{T}V=I_{r}}J(V)=\sum_{j=1}^{p}\sum_{i=1}^{m}\operatorname{trace}\!\left(C_{j}R_{ji}B_{i}-\hat{C}_{j}Y_{ji}B_{i}+C_{j}X_{ji}\hat{B}_{i}-\hat{C}_{j}\hat{R}_{ji}\hat{B}_{i}\right).\qquad(19)$$
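Before moving to the manifold formulation, the following sketch (random test data; the helper `stein` and the chosen sizes are illustrative) evaluates the cost $J(V)$ of (18) for a given $V$ by solving the Stein equations (7) and (15)-(17), and checks it against a truncated impulse-response sum for the error system.

```python
# Sketch: evaluate J(V) from (18) by solving the Stein equations for each (j, i)
# pair; a truncated impulse-response sum of the error system serves as a check
# that J(V) = ||H - H_hat||_{H2}^2.
import numpy as np

def stein(M, N, Q):
    """Solve M X N - X + Q = 0 via Kronecker vectorization (small sizes only)."""
    x = np.linalg.solve(np.eye(M.shape[0] * N.shape[0]) - np.kron(N.T, M),
                        Q.flatten(order="F"))
    return x.reshape((M.shape[0], N.shape[1]), order="F")

def cost_J(A, B, C, V):
    """Equation (18): sum of the four trace terms over all (j, i)."""
    Ah, Bh, Ch = V.T @ A @ V, V.T @ B, C @ V
    J = 0.0
    for j in range(C.shape[0]):
        for i in range(B.shape[1]):
            Bi, Cj = B[:, i:i+1], C[j:j+1, :]
            Bhi, Chj = Bh[:, i:i+1], Ch[j:j+1, :]
            Rji = stein(A, A, Bi @ Cj)          # (7)
            Xji = stein(A, Ah, -Bi @ Chj)       # (15)
            Yji = stein(Ah, A, Bhi @ Cj)        # (16)
            Rhji = stein(Ah, Ah, -Bhi @ Chj)    # (17)
            J += np.trace(Cj @ Rji @ Bi - Chj @ Yji @ Bi
                          + Cj @ Xji @ Bhi - Chj @ Rhji @ Bhi)
    return float(J)

rng = np.random.default_rng(2)
n, m, p, r = 10, 2, 2, 4
A = rng.standard_normal((n, n)); A *= 0.8 / np.linalg.norm(A, 2)
B = rng.standard_normal((n, m)); C = rng.standard_normal((p, n))
V, _ = np.linalg.qr(rng.standard_normal((n, r)))        # a point on St(n, r)

Ah, Bh, Ch = V.T @ A @ V, V.T @ B, C @ V
err = sum(np.linalg.norm(C @ np.linalg.matrix_power(A, k) @ B
                         - Ch @ np.linalg.matrix_power(Ah, k) @ Bh, "fro") ** 2
          for k in range(200))
print(cost_J(A, B, C, V), err)   # both values agree up to truncation error
```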

4. The optimization on the Stiefel manifold

In this section, we first introduce the Stiefel manifold, such that the H2 optimal model reduction problem is converted to an unconstrained optimization problem on the Stiefel manifold. Then some related geometric definitions are presented.

The Stiefel manifold St(n,r) is defined as [Citation30]

$$St(n,r)=\{V\in\mathbb{R}^{n\times r}\mid V^{T}V=I_{r}\}.$$

Thus, the $H_{2}$ optimal model reduction problem (19) can be formulated as the following minimization problem

$$\min_{V\in St(n,r)}J(V)=\sum_{j=1}^{p}\sum_{i=1}^{m}\operatorname{trace}\!\left(C_{j}R_{ji}B_{i}-\hat{C}_{j}Y_{ji}B_{i}+C_{j}X_{ji}\hat{B}_{i}-\hat{C}_{j}\hat{R}_{ji}\hat{B}_{i}\right).\qquad(20)$$

The tangent space at a point $V\in St(n,r)$ is

$$T_{V}St(n,r)=\{\Delta\in\mathbb{R}^{n\times r}\mid V^{T}\Delta+\Delta^{T}V=0\}.$$

Endowing every tangent space with the metric

$$g_{e}(\Delta_{1},\Delta_{2})=\operatorname{trace}(\Delta_{1}^{T}\Delta_{2}),\qquad(21)$$

the Stiefel manifold becomes a Riemannian manifold [Citation30].

According to the definition of the gradient on the Riemannian manifold, the gradient of the cost function J(V) defined on the Stiefel manifold is described as below.

Definition 4.1 ([Citation31]). Given a smooth scalar field $f$ on a Riemannian manifold $M$, the gradient of $f$ at $x$, denoted by $\nabla f(x)$, is defined as the unique element of $T_{x}M$ that satisfies

$$g_{e}(\nabla f(x),\xi)=\mathrm{D}f(x)[\xi],\qquad\forall\,\xi\in T_{x}M.$$

Define the orthogonal projection onto the tangent space $T_{V}St(n,r)$ as [Citation31]

$$P_{V}(D)=D-\frac{1}{2}V(V^{T}D+D^{T}V),\qquad D\in\mathbb{R}^{n\times r}.\qquad(22)$$

Let $\bar{J}(V)$ be a cost function defined on the Euclidean space $\mathbb{R}^{n\times r}$, and let $J(V)$ be the restriction of $\bar{J}(V)$ to $St(n,r)$. The gradient of the cost function $J(V)$ is equal to the projection of the gradient of $\bar{J}(V)$ onto $T_{V}St(n,r)$, i.e.,

$$\nabla J(V)=P_{V}(\nabla\bar{J}(V)).\qquad(23)$$

Next, we introduce the retraction and the vector transport on the Stiefel manifold, which are adopted by the geometric conjugate gradient method to solve the optimization problem.

Definition 4.2 ([Citation32]). A retraction on a manifold $M$ is a smooth mapping $R$ from the tangent bundle $TM$ onto $M$ with the following properties, where $R_{x}$ denotes the restriction of $R$ to $T_{x}M$.

(i) $R_{x}(0_{x})=x$, where $0_{x}$ denotes the zero element of $T_{x}M$.

(ii) With the canonical identification $T_{0_{x}}T_{x}M\simeq T_{x}M$, $R_{x}$ satisfies

$$\mathrm{D}R_{x}(0_{x})=\mathrm{id}_{T_{x}M},$$

where $\mathrm{id}_{T_{x}M}$ denotes the identity mapping on $T_{x}M$.

Based on Definition 4.2, the retraction on the Stiefel manifold $St(n,r)$ can be taken as [Citation33,Citation34]

$$R_{V}(\eta_{V})=\operatorname{qf}(V+\eta_{V}),\qquad(24)$$

where $\operatorname{qf}(N)$ denotes the $Q$ factor of the decomposition of $N\in\mathbb{R}^{n\times r}$ as $N=QR$, in which $Q$ belongs to $St(n,r)$ and $R$ is an upper triangular $r\times r$ matrix with strictly positive diagonal elements. In order to use the conjugate gradient method, the following vector transport is required.
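A minimal sketch of this retraction in Python is given below (the function name `qf_retraction` is illustrative); the sign correction enforces the positive-diagonal convention of $\operatorname{qf}(\cdot)$, and the tangent vector is obtained with the projection (22).

```python
# Sketch of the retraction (24): R_V(eta) = qf(V + eta), i.e. the Q factor of
# the thin QR decomposition with a positive-diagonal R factor.
import numpy as np

def qf_retraction(V, eta):
    """Retract the tangent vector eta at V back onto the Stiefel manifold St(n, r)."""
    Q, R = np.linalg.qr(V + eta)
    signs = np.sign(np.diag(R))
    signs[signs == 0] = 1.0
    return Q * signs            # flips columns of Q so that diag(R) > 0

rng = np.random.default_rng(3)
n, r = 7, 3
V, _ = np.linalg.qr(rng.standard_normal((n, r)))
D = rng.standard_normal((n, r))
eta = D - 0.5 * V @ (V.T @ D + D.T @ V)       # project D onto T_V St(n, r), cf. (22)
Y = qf_retraction(V, 0.1 * eta)
print(np.allclose(Y.T @ Y, np.eye(r)))        # True: Y lies on St(n, r)
```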

Definition 4.3 ([Citation33]). A vector transport on a manifold $M$ is a smooth mapping

$$TM\oplus TM\to TM:\ (\eta_{x},\xi_{x})\mapsto\mathcal{T}_{\eta_{x}}(\xi_{x})\in TM,$$

satisfying the following properties for all $x\in M$:

(i) There exists a retraction $R$, called the retraction associated with $\mathcal{T}$, such that

$$\pi\big(\mathcal{T}_{\eta_{x}}(\xi_{x})\big)=R_{x}(\eta_{x}),$$

where $\pi\big(\mathcal{T}_{\eta_{x}}(\xi_{x})\big)$ denotes the foot of the tangent vector $\mathcal{T}_{\eta_{x}}(\xi_{x})$.

(ii) $\mathcal{T}_{0_{x}}\xi_{x}=\xi_{x}$ for all $\xi_{x}\in T_{x}M$.

(iii) $\mathcal{T}_{\eta_{x}}(a\xi_{x}+b\zeta_{x})=a\mathcal{T}_{\eta_{x}}(\xi_{x})+b\mathcal{T}_{\eta_{x}}(\zeta_{x})$.

A special vector transport $\mathcal{T}_{\eta_{V}}\xi_{V}$ on $St(n,r)$ can be found in [Citation31], that is,

$$\mathcal{T}_{\eta_{V}}\xi_{V}=(I_{n}-YY^{T})\xi_{V}+Y\operatorname{skew}(Y^{T}\xi_{V}),\qquad(25)$$

where $Y=R_{V}(\eta_{V})$ and $\xi_{V},\eta_{V}\in T_{V}St(n,r)$. It is easy to verify that $\mathcal{T}_{\eta_{V}}\xi_{V}$ satisfies all the conditions of Definition 4.3. In the next section, we employ some optimization techniques to solve the unconstrained optimization problem (20).
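The following short sketch (illustrative names and random data) implements the transport (25) and verifies that the transported vector lies in the tangent space at $Y$.

```python
# Sketch of the vector transport (25): xi at V is moved to T_Y St(n, r),
# where Y stands in for R_V(eta).
import numpy as np

def skew(M):
    return 0.5 * (M - M.T)

def transport(Y, xi):
    """Vector transport (25) of xi into the tangent space at Y."""
    return (np.eye(Y.shape[0]) - Y @ Y.T) @ xi + Y @ skew(Y.T @ xi)

rng = np.random.default_rng(4)
n, r = 7, 3
Y, _ = np.linalg.qr(rng.standard_normal((n, r)))   # stands in for R_V(eta)
xi = rng.standard_normal((n, r))
t_xi = transport(Y, xi)
# The transported vector is tangent at Y: Y^T t_xi + t_xi^T Y = 0.
print(np.allclose(Y.T @ t_xi + t_xi.T @ Y, 0))     # True
```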

5. The geometric conjugate gradient method on the Stiefel manifold

In this section, we first derive the gradient of the cost function J(V). Then the geometric conjugate gradient model order reduction method is presented to reduce the order of the original system.

We define the extension $\bar{J}(V)$ of $J(V)$ to the Euclidean space $\mathbb{R}^{n\times r}$ as

$$\bar{J}(V)=\sum_{j=1}^{p}\sum_{i=1}^{m}\operatorname{trace}\!\left(C_{j}R_{ji}B_{i}-\hat{C}_{j}Y_{ji}B_{i}+C_{j}X_{ji}\hat{B}_{i}-\hat{C}_{j}\hat{R}_{ji}\hat{B}_{i}\right).$$

It is obvious that $\bar{J}|_{St(n,r)}=J$. In the following, we present the specific expression of the partial derivative $\partial\bar{J}/\partial V$. First, the following lemma is introduced.

Lemma 5.1. If $P$ and $Q$ satisfy $AQB-Q+Y=0$ and $BPA-P+X=0$, then it holds that $\operatorname{trace}(YP)=\operatorname{trace}(XQ)$.

Proof. From the conditions, we have

$$AQBP-QP+YP=0,\qquad BPAQ-PQ+XQ=0.$$

Utilizing the properties of the trace, we have

$$\operatorname{trace}(YP)=\operatorname{trace}(QP)-\operatorname{trace}(AQBP)=\operatorname{trace}(PQ)-\operatorname{trace}(BPAQ)=\operatorname{trace}(XQ).$$

Thus, the proof of this lemma is accomplished.

The partial derivative $\partial\bar{J}/\partial V$ is given by the following theorem.

Theorem 5.2. Consider the system (1) and the reduced system (4), and let $X_{ji}$, $Y_{ji}$ and $\hat{R}_{ji}$ be the solutions of the Stein equations (15), (16) and (17), respectively. Then the partial derivative $\partial\bar{J}/\partial V$ of $\bar{J}(V)$ is given by

$$\frac{\partial\bar{J}}{\partial V}=\sum_{j=1}^{p}\sum_{i=1}^{m}\left(-2C_{j}^{T}B_{i}^{T}Y_{ji}^{T}+2B_{i}C_{j}X_{ji}-2C_{j}^{T}\hat{B}_{i}^{T}\hat{R}_{ji}^{T}-2B_{i}\hat{C}_{j}\hat{R}_{ji}+2AVY_{ji}AX_{ji}\right)+\sum_{j=1}^{p}\sum_{i=1}^{m}\left(2A^{T}VX_{ji}^{T}A^{T}Y_{ji}^{T}+2AV\hat{R}_{ji}\hat{A}\hat{R}_{ji}+2A^{T}V\hat{R}_{ji}^{T}\hat{A}^{T}\hat{R}_{ji}^{T}\right).\qquad(26)$$

Proof. Since $\hat{B}=V^{T}[B_{1},\ldots,B_{m}]$, we have $\hat{B}_{i}=V^{T}B_{i}$. Similarly, it holds that $\hat{C}_{j}=C_{j}V$. Thus, the differential $\Delta\bar{J}$ of $\bar{J}$ with respect to $\Delta V$ is given by

$$\Delta\bar{J}=\sum_{j=1}^{p}\sum_{i=1}^{m}\operatorname{trace}\!\left(\left(-Y_{ji}B_{i}C_{j}+X_{ji}^{T}C_{j}^{T}B_{i}^{T}-\hat{R}_{ji}\hat{B}_{i}C_{j}-\hat{R}_{ji}^{T}\hat{C}_{j}^{T}B_{i}^{T}\right)\Delta V\right)+\sum_{j=1}^{p}\sum_{i=1}^{m}\operatorname{trace}\!\left(-\Delta Y_{ji}B_{i}C_{j}V+\hat{B}_{i}C_{j}\Delta X_{ji}-\hat{B}_{i}\hat{C}_{j}\Delta\hat{R}_{ji}\right),$$

where $\Delta X_{ji}$, $\Delta Y_{ji}$ and $\Delta\hat{R}_{ji}$ depend on $\Delta V$ via the following equations

$$A\Delta X_{ji}\hat{A}-\Delta X_{ji}+M_{\Delta V}=0,\qquad(27)$$
$$\hat{A}\Delta Y_{ji}A-\Delta Y_{ji}+N_{\Delta V}=0,\qquad(28)$$
$$\hat{A}\Delta\hat{R}_{ji}\hat{A}-\Delta\hat{R}_{ji}+F_{\Delta V}=0,\qquad(29)$$

with

$$M_{\Delta V}=AX_{ji}\Delta V^{T}AV+AX_{ji}V^{T}A\Delta V-B_{i}C_{j}\Delta V,\qquad N_{\Delta V}=\Delta V^{T}AVY_{ji}A+V^{T}A\Delta VY_{ji}A+\Delta V^{T}B_{i}C_{j},$$
$$F_{\Delta V}=\Delta V^{T}AV\hat{R}_{ji}\hat{A}+V^{T}A\Delta V\hat{R}_{ji}\hat{A}+\hat{A}\hat{R}_{ji}\Delta V^{T}AV+\hat{A}\hat{R}_{ji}V^{T}A\Delta V-\Delta V^{T}B_{i}\hat{C}_{j}-\hat{B}_{i}C_{j}\Delta V.$$

Applying Lemma 5.1 to (27) and (16), we have

$$\sum_{j=1}^{p}\sum_{i=1}^{m}\operatorname{trace}(\hat{B}_{i}C_{j}\Delta X_{ji})=\sum_{j=1}^{p}\sum_{i=1}^{m}\operatorname{trace}(M_{\Delta V}Y_{ji})=\sum_{j=1}^{p}\sum_{i=1}^{m}\operatorname{trace}\!\left(\left(X_{ji}^{T}A^{T}Y_{ji}^{T}V^{T}A^{T}+Y_{ji}AX_{ji}V^{T}A\right)\Delta V\right)-\sum_{j=1}^{p}\sum_{i=1}^{m}\operatorname{trace}\!\left(\left(Y_{ji}B_{i}C_{j}\right)\Delta V\right).$$

Similarly, by (15) and (28), we can obtain

$$-\sum_{j=1}^{p}\sum_{i=1}^{m}\operatorname{trace}(B_{i}\hat{C}_{j}\Delta Y_{ji})=\sum_{j=1}^{p}\sum_{i=1}^{m}\operatorname{trace}(N_{\Delta V}X_{ji})=\sum_{j=1}^{p}\sum_{i=1}^{m}\operatorname{trace}\!\left(\left(X_{ji}^{T}A^{T}Y_{ji}^{T}V^{T}A^{T}+Y_{ji}AX_{ji}V^{T}A\right)\Delta V\right)+\sum_{j=1}^{p}\sum_{i=1}^{m}\operatorname{trace}\!\left(\left(X_{ji}^{T}C_{j}^{T}B_{i}^{T}\right)\Delta V\right).$$

Further, it follows from the equations (17) and (29) that

$$-\sum_{j=1}^{p}\sum_{i=1}^{m}\operatorname{trace}(\hat{B}_{i}\hat{C}_{j}\Delta\hat{R}_{ji})=\sum_{j=1}^{p}\sum_{i=1}^{m}\operatorname{trace}(F_{\Delta V}\hat{R}_{ji})=\sum_{j=1}^{p}\sum_{i=1}^{m}\operatorname{trace}\!\left(\left(2\hat{R}_{ji}^{T}\hat{A}^{T}\hat{R}_{ji}^{T}V^{T}A^{T}+2\hat{R}_{ji}\hat{A}\hat{R}_{ji}V^{T}A\right)\Delta V\right)-\sum_{j=1}^{p}\sum_{i=1}^{m}\operatorname{trace}\!\left(\left(\hat{R}_{ji}^{T}\hat{C}_{j}^{T}B_{i}^{T}+\hat{R}_{ji}\hat{B}_{i}C_{j}\right)\Delta V\right).$$

According to the results shown in the above formulas, we can obtain

$$\Delta\bar{J}=2\sum_{j=1}^{p}\sum_{i=1}^{m}\operatorname{trace}\!\left(\left(-Y_{ji}B_{i}C_{j}+X_{ji}^{T}C_{j}^{T}B_{i}^{T}-\hat{R}_{ji}\hat{B}_{i}C_{j}-\hat{R}_{ji}^{T}\hat{C}_{j}^{T}B_{i}^{T}+X_{ji}^{T}A^{T}Y_{ji}^{T}V^{T}A^{T}+Y_{ji}AX_{ji}V^{T}A+\hat{R}_{ji}^{T}\hat{A}^{T}\hat{R}_{ji}^{T}V^{T}A^{T}+\hat{R}_{ji}\hat{A}\hat{R}_{ji}V^{T}A\right)\Delta V\right).$$

On the other hand, the partial derivative $\partial\bar{J}/\partial V$ satisfies $\Delta\bar{J}=\operatorname{trace}\!\left(\left(\frac{\partial\bar{J}}{\partial V}\right)^{T}\Delta V\right)$, and it follows that

$$\frac{\partial\bar{J}}{\partial V}=\sum_{j=1}^{p}\sum_{i=1}^{m}\left(-2C_{j}^{T}B_{i}^{T}Y_{ji}^{T}+2B_{i}C_{j}X_{ji}-2C_{j}^{T}\hat{B}_{i}^{T}\hat{R}_{ji}^{T}-2B_{i}\hat{C}_{j}\hat{R}_{ji}+2AVY_{ji}AX_{ji}\right)+\sum_{j=1}^{p}\sum_{i=1}^{m}\left(2A^{T}VX_{ji}^{T}A^{T}Y_{ji}^{T}+2AV\hat{R}_{ji}\hat{A}\hat{R}_{ji}+2A^{T}V\hat{R}_{ji}^{T}\hat{A}^{T}\hat{R}_{ji}^{T}\right).$$

Thus, the proof of this theorem is accomplished.

In the following, we discuss the gradient of the cost function $J(V)$ on $St(n,r)$. The explicit expression of the gradient is given by the following theorem.

Theorem 5.3. The gradient of the cost function $J(V)$ at $V\in St(n,r)$, denoted by $\nabla_{V}J$, is given by

$$\nabla_{V}J=\frac{\partial\bar{J}}{\partial V}-\frac{1}{2}\left(VV^{T}\frac{\partial\bar{J}}{\partial V}+V\left(\frac{\partial\bar{J}}{\partial V}\right)^{T}V\right).\qquad(30)$$

Proof. Since $St(n,r)$ is a submanifold of $\mathbb{R}^{n\times r}$, it follows from (23) that

$$\nabla_{V}J=P_{V}\!\left(\frac{\partial\bar{J}}{\partial V}\right).$$

Specializing the above form by (22), it holds that

$$\nabla_{V}J=\frac{\partial\bar{J}}{\partial V}-\frac{1}{2}\left(VV^{T}\frac{\partial\bar{J}}{\partial V}+V\left(\frac{\partial\bar{J}}{\partial V}\right)^{T}V\right).$$

Thus, the proof of this theorem is accomplished.
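As a quick numerical check of (30) (with arbitrary test data; the matrix `G` merely stands in for $\partial\bar{J}/\partial V$), the sketch below projects a Euclidean derivative onto the tangent space and confirms that the result is a tangent vector.

```python
# Sketch of Equation (30): the Riemannian gradient is the projection of the
# Euclidean partial derivative onto T_V St(n, r).
import numpy as np

def riemannian_grad(V, G):
    """Project the Euclidean gradient G onto the tangent space at V, cf. (30)."""
    return G - 0.5 * (V @ V.T @ G + V @ G.T @ V)

rng = np.random.default_rng(5)
n, r = 9, 4
V, _ = np.linalg.qr(rng.standard_normal((n, r)))
G = rng.standard_normal((n, r))         # stands in for dJbar/dV
g = riemannian_grad(V, G)
print(np.allclose(V.T @ g + g.T @ V, 0))   # True: g is a tangent vector
```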

Based on the gradient of the cost function, we propose the geometric conjugate gradient model order reduction method. Suppose that the search direction $\xi_{k}$ at the $k$-th step has been obtained. Then the transformation matrix $V^{(k+1)}$ of the next iteration is obtained by

$$V^{(k+1)}=R_{V^{(k)}}(t_{k}\xi_{k}),$$

where $t_{k}$ denotes the Armijo step size, which is computed by $t_{k}=\gamma\omega^{j}$, and $j$ is the smallest nonnegative integer such that

$$J(V^{(k)})-J\!\left(R_{V^{(k)}}(\omega^{j}\gamma\xi_{k})\right)\geq-\lambda\omega^{j}\gamma\,g_{e}\!\left(\nabla_{V^{(k)}}J,\xi_{k}\right),\qquad(31)$$

for some given constants $\omega,\lambda\in(0,1)$ and $\gamma>0$. According to the vector transport (25) with $Y=V^{(k+1)}=R_{V^{(k)}}(t_{k}\xi_{k})$, we have

$$\mathcal{T}_{t_{k}\xi_{k}}\xi_{k}=\left(I_{n}-V^{(k+1)}(V^{(k+1)})^{T}\right)\xi_{k}+V^{(k+1)}\operatorname{skew}\!\left((V^{(k+1)})^{T}\xi_{k}\right).$$

Then the $(k+1)$-th search direction $\xi_{k+1}$ can be generated by

$$\xi_{k+1}=-\nabla_{V^{(k+1)}}J+\beta_{k+1}\mathcal{T}_{t_{k}\xi_{k}}\xi_{k},$$

with [Citation35]

$$\beta_{k+1}=\frac{g_{e}\!\left(\nabla_{V^{(k+1)}}J,\,\nabla_{V^{(k+1)}}J\right)}{g_{e}\!\left(\nabla_{V^{(k+1)}}J,\,\mathcal{T}_{t_{k}\xi_{k}}\xi_{k}\right)-g_{e}\!\left(\nabla_{V^{(k)}}J,\,\xi_{k}\right)},\qquad(32)$$

for $k=0,1,\ldots$.

Remark 1. We point out that the stability of the reduced system is guaranteed by the proposed algorithm at each iteration. Since $t_{k}$ is the Armijo step size and $V^{(k+1)}$ is computed by (24), we have $J(V^{(k+1)})\leq J(V^{(k)})$ for $k=0,1,\ldots$. Note that $J(V^{(k)})=\|H(z)-\hat{H}_{k}(z)\|_{H_{2}}^{2}$ is infinite if $\hat{H}_{k}(z)$ is unstable. From the assumption that $\hat{H}_{0}(z)$ is stable, it holds that $J(V^{(k+1)})\leq J(V^{(0)})<\infty$. It is clear that $\hat{H}_{k}(z)$ is also stable for any $k$.

Based on the above analysis, the specific iterative algorithm is presented as follows.

Algorithm 1 Geometric conjugate gradient model order reduction method on St(n,r) (GCGM).

Require: The coefficient matrices of the system (1) and the reduced order $r$.

Ensure: The coefficient matrices of the reduced system (4).

1: Obtain the initial transformation matrix $V^{(0)}$.

2: Compute $\nabla_{V^{(0)}}J$ by (30), and set $\xi_{0}=-\nabla_{V^{(0)}}J$.

3: for $k=0,1,\ldots$ do

4: Compute the Armijo step size $t_{k}$ by (31).

5: Compute the retraction $R_{V^{(k)}}(t_{k}\xi_{k})$ by (24), and set $V^{(k+1)}=R_{V^{(k)}}(t_{k}\xi_{k})$.

6: Compute $\beta_{k+1}$ by (32), and set $\xi_{k+1}=-\nabla_{V^{(k+1)}}J+\beta_{k+1}\mathcal{T}_{t_{k}\xi_{k}}\xi_{k}$.

7: end for

8: return $\hat{A}=V^{T}AV$, $\hat{B}=V^{T}B$, $\hat{C}=CV$.

Remark 2. Since the Stiefel manifold $St(n,r)$ is compact, the sequence generated by Algorithm 1 for solving the $H_{2}$ optimal model order reduction problem (20) converges to a local optimal solution for any initial $V^{(0)}$. This means that the initial $V^{(0)}$ can be chosen randomly, so the initial point of Algorithm 1 is not required to be produced by other model order reduction methods. However, to ensure that the sequence generated by Algorithm 1 converges quickly to a local optimal solution of the problem (20), we suggest that readers use existing model order reduction methods to determine the initial $V^{(0)}$.
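To make the structure of Algorithm 1 concrete, the following self-contained Python sketch runs the same loop (gradient projection as in (30), Armijo backtracking (31), the qf retraction (24), the vector transport (25) and the beta formula (32)) on a simple toy cost $\bar{J}(V)=-\operatorname{trace}(V^{T}MV)$ instead of the paper's $H_{2}$ cost; all constants are illustrative choices.

```python
# Illustrative Riemannian conjugate gradient loop on St(n, r) with a toy cost;
# its minimisers span the dominant eigenspace of the symmetric matrix M.
import numpy as np

def retract(V, eta):                      # qf retraction, cf. (24)
    Q, R = np.linalg.qr(V + eta)
    s = np.sign(np.diag(R))
    s[s == 0] = 1.0
    return Q * s

def project(V, G):                        # Riemannian gradient via (30)
    return G - 0.5 * (V @ V.T @ G + V @ G.T @ V)

def transport(Y, xi):                     # vector transport, cf. (25)
    return (xi - Y @ (Y.T @ xi)) + Y @ (0.5 * (Y.T @ xi - xi.T @ Y))

rng = np.random.default_rng(6)
n, r = 30, 3
M = rng.standard_normal((n, n))
M = M + M.T                               # symmetric test matrix
J = lambda V: -np.trace(V.T @ M @ V)      # toy cost standing in for (20)
egrad = lambda V: -2.0 * M @ V            # its Euclidean derivative

V = np.linalg.qr(rng.standard_normal((n, r)))[0]
g = project(V, egrad(V))
xi = -g
gamma, omega, lam = 1.0, 0.5, 1e-4        # illustrative Armijo constants, cf. (31)

for k in range(200):
    if np.sum(g * xi) >= 0:               # safeguard: restart with steepest descent
        xi = -g
    t = gamma
    for _ in range(60):                   # Armijo backtracking line search (31)
        if J(V) - J(retract(V, t * xi)) >= -lam * t * np.sum(g * xi):
            break
        t *= omega
    V_new = retract(V, t * xi)
    g_new = project(V_new, egrad(V_new))
    t_xi = transport(V_new, xi)
    denom = np.sum(g_new * t_xi) - np.sum(g * xi)
    beta = np.sum(g_new * g_new) / denom if abs(denom) > 1e-12 else 0.0  # (32)
    xi = -g_new + beta * t_xi
    V, g = V_new, g_new
    if np.linalg.norm(g) < 1e-8:
        break

# trace(V^T M V) should approach the sum of the r largest eigenvalues of M.
print(np.trace(V.T @ M @ V), np.sort(np.linalg.eigvalsh(M))[-r:].sum())
```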

We are now in a position to analyze the computational cost of the GCGM method in terms of floating point operations (flops). In each search step for the minimization problem (20), we need to solve the equations (15), (16) and (17). Equations (15) and (16) can be solved efficiently with $O(rN_{l})$ flops [Citation11], where $N_{l}$ denotes the number of flops required to solve the linear system $(A-\rho I)x=b$, in which $\rho$ is not an eigenvalue of $A$ and $b\in\mathbb{R}^{n}$. Since $\hat{R}_{ji}$ is an $r\times r$ matrix and $r$ is small, we can solve (17) by standard solvers, which requires $O(r^{3})$ flops. Let $l$ denote the maximum number of iterations of the GCGM method. Then the GCGM method requires $O(l(rN_{l}+r^{3}))$ flops.

6. Numerical examples

In this section, two numerical examples are employed to demonstrate the efficiency of the proposed algorithm. All the experiments are performed in MATLAB Version 8.1.0 (R2013a) on a PC with an Intel(R) Core(TM) i5-4570 (4 cores) CPU at 3.20 GHz and 8 GB RAM. To observe the approximation quality of the reduced system, we check the relative $H_{2}$ error $\|\Sigma-\hat{\Sigma}\|_{H_{2}}/\|\Sigma\|_{H_{2}}$.

Example 6.1. In the first example, we consider the FOM model. This example is from [Citation36] and can be described by the following continuous-time system

$$\Sigma:\ \begin{cases}\dot{x}(t)=A_{c}x(t)+b_{c}u(t),\\ y(t)=c_{c}x(t),\end{cases}$$

with the coefficient matrices

$$A_{c}=\operatorname{diag}(A_{1},A_{2},A_{3},A_{4}),\qquad b_{c}^{T}=(10,\ldots,10,1,\ldots,1),\qquad c_{c}=b_{c}^{T},$$

where the first six entries of $b_{c}$ are equal to 10 and the remaining entries are equal to 1, $A_{1}=\begin{bmatrix}-1&100\\-100&-1\end{bmatrix}$, $A_{2}=\begin{bmatrix}-1&200\\-200&-1\end{bmatrix}$, $A_{3}=\begin{bmatrix}-1&400\\-400&-1\end{bmatrix}$ and $A_{4}=\operatorname{diag}(-1,-2,\ldots,-1000)$. By performing the semi-implicit Euler discretization with the step size $\Delta t=0.01$ on the above continuous system, a discrete system with the following coefficient matrices is obtained

$$A=(I-\Delta tA_{c})^{-1},\qquad b=\Delta t(I-\Delta tA_{c})^{-1}b_{c},\qquad c=c_{c}.$$
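A short sketch of this set-up (using the benchmark data as reconstructed above; the implementation details are ours, not the paper's MATLAB code) is as follows.

```python
# Sketch: build the FOM benchmark matrices and apply the discretization
# A = (I - dt*Ac)^{-1}, b = dt*(I - dt*Ac)^{-1} bc, c = cc with dt = 0.01.
import numpy as np
from scipy.linalg import block_diag, lu_factor, lu_solve

blocks = [np.array([[-1.0, f], [-f, -1.0]]) for f in (100.0, 200.0, 400.0)]
Ac = block_diag(*blocks, np.diag(-np.arange(1.0, 1001.0)))   # n = 1006
bc = np.concatenate([10.0 * np.ones(6), np.ones(1000)])[:, None]
cc = bc.T.copy()

dt = 0.01
lu = lu_factor(np.eye(1006) - dt * Ac)
A = lu_solve(lu, np.eye(1006))            # A = (I - dt*Ac)^{-1}
b = dt * lu_solve(lu, bc)                 # b = dt*(I - dt*Ac)^{-1} bc
c = cc
print(max(abs(np.linalg.eigvals(A))) < 1.0)   # True: the discrete system is stable
```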

To verify the efficiency of Algorithm 1, we randomly produce the initial $V^{(0)}$ and then apply Algorithm 1 to reduce this discrete model. The order of the reduced system is taken as $r=15$. The convergence condition of Algorithm 1 is that the maximum change of the eigenvalues of $\hat{A}$ between two successive iterations is less than the threshold $\varepsilon=10^{-3}$. The input function of all systems is chosen as $u(k)=0.1\sin(0.5k)$. The output responses and the corresponding relative error are shown in Figure 1.

Figure 1. Left: the output responses y(k). Right: the corresponding relative error in Example 6.1.


Figure 1 shows that the output behavior of the original system can be well approximated by the reduced system. Furthermore, the relative $H_{2}$ error and the computational time of Algorithm 1 are listed in Table 1.

Table 1. The relative H2 errors and the computational time for r=15.

Example 6.2. In this example, we investigate the CD player model, which can be described as a MIMO continuous system with $n=120$ and $p=m=2$. It is obtained from the Oberwolfach model reduction benchmark collection [Citation36] and has been studied in [Citation22,Citation25]. We obtain the corresponding two-input, two-output discrete system by using the semi-implicit Euler discretization with $\Delta t=0.03$.

For this model, we use the BT method to generate the initial $V^{(0)}$, and we use the same convergence condition as in Example 6.1. Then, we execute the GCGM algorithm and the MIRIAm algorithm for this model and reduce the order to $r=5$.

Figures 2 and 3 depict the output responses of all the systems and the corresponding relative errors for the input $u(k)=0.001[\sin(0.12k),\cos(0.16k)]^{T}$. It can be observed from Figures 2 and 3 that our algorithm performs as well as the MIRIAm algorithm. Moreover, we list the relative $H_{2}$ errors and the computational time of these two algorithms in Table 2.

Table 2. The relative H2 errors and the computational time for r=5.

Figure 2. Left: the output responses y1(k). Right: the corresponding relative errors in Example 6.2.


Figure 3. Left: the output responses y2(k). Right: the corresponding relative errors in Example 6.2.


We can observe from Table 2 that the relative $H_{2}$ errors produced by both methods have the same order of magnitude, and that less time is needed by the GCGM method. In brief, both methods have their own advantages. Thus, these two examples illustrate the efficiency of Algorithm 1.

7. Conclusions

We have proposed an efficient model order reduction method for the MIMO discrete system. First, we decompose the original MIMO discrete system into several SISO subsystems, and the $H_{2}$ norm is derived based on the cross Gramian of each subsystem. Next, the Stiefel manifold is introduced, and the $H_{2}$ optimal model order reduction problem of the MIMO discrete system is converted into an unconstrained optimization problem defined on the Stiefel manifold. Then, the GCGM model order reduction method is proposed. Finally, the proposed method is applied to two numerical examples to verify its feasibility.


Disclosure statement

No potential conflict of interest was reported by the authors.

Additional information

Funding

This work is supported by the National Natural Science Foundation of China (NSFC) under Grants 11871393 and 61663043.

References

  • Y.L. Jiang, Model Order Reduction Methods, Science Press, Beijing, 2010. ISBN 9787030274373.
  • Z.Z. Qi, Y.L. Jiang, and Z.H. Xiao, Structure-preserving model order reduction based on Laguerre-SVD for coupled systems, Math. Comput. Model. Dyn. Syst. 21 (6) (2015), pp. 573–590. doi:10.1080/13873954.2015.1065279.
  • T. Ishizaki and K. Kashima, Model reduction and clusterization of large-scale bidirectional network, IEEE Trans. Autom. Contr. 59 (1) (2014), pp. 48–63. doi:10.1109/TAC.2013.2275891.
  • X.L. Wang and Y.L. Jiang, Two-sided projection methods for model reduction of MIMO bilinear systems, Math. Comput. Model. Dyn. Syst. 19 (6) (2013), pp. 575–592. doi:10.1080/13873954.2013.805145.
  • K. Milner, T.W. Becker, and L. Boschi, New software framework to share research tools, EOS. Trans. AGU. 90 (12) (2013), pp. 104. doi:10.1029/2009EO120005.
  • B.F. Farrell and P.J. Ioannou, State estimation using a reduced-order Kalman filter, J. Atmos. Sci. 58 (23) (2001), pp. 3666–3680. doi:10.1175/1520-0469(2001)058<3666:SEUARO>2.0.CO;2.
  • Z.H. Xiao and Y.L. Jiang, Dimension reduction for second-order systems by general orthogonal polynomials, Math. Comput. Model. Dyn. Syst. 20 (4) (2014), pp. 414–432. doi:10.1080/13873954.2013.867274.
  • Y.L. Jiang and H.B. Chen, Time domain model order reduction of general orthogonal polynomials for linear input-output systems, IEEE Trans. Autom. Contr. 55 (99) (2012), pp. 330–343. doi:10.1109/TAC.2011.2161839.
  • J.W. Lee and G.E. Dullerud, Optimal disturbance attenuation for discrete-time switched and Markovian jump linear systems, SIAM J. Control Optim. 45 (4) (2006), pp. 1329–1358. doi:10.1137/050627538.
  • J.W. Lee and P.P. Khargonekar, Optimal output regulation for discrete-time switched and Markovian jump linear systems, SIAM J. Control Optim. 47 (1) (2008), pp. 40–72. doi:10.1137/060662290.
  • A.C. Antoulas, Approximation of Large-Scale Dynamical Systems, SIAM, Philadelphia, 2005.
  • L.Q. Zhang, J. Lam, B. Huang, and G.H. Yang, On Gramians and balanced truncation of discrete-time bilinear systems, Int. J. Contr. 76 (6) (2003), pp. 414–427. doi:10.1080/0020717031000082540.
  • R.W. Freund, Krylov-subspace methods for reduced-order modeling in circuit simulation, J. Comput. Appl. Math. 123 (1) (2000), pp. 395–421. doi:10.1016/S0377-0427(00)00396-4.
  • K. Gallivan and P.V. Dooren, A rational Lanczos algorithm for model reduction, Numer. Algorithms 12 (1) (1996), pp. 33–63. doi:10.1007/BF02141740.
  • X.L. Wang and Y.L. Jiang, Model reduction of discrete-time bilinear systems by a Laguerre expansion technique, Appl. Math. Model. 40 (14) (2016), pp. 6650–6662. doi:10.1016/j.apm.2016.02.015.
  • S. Rahrovani, M.K. Vakilzadeh, and T. Abrahamsson, On Gramian-Based Techniques for Minimal Realization of Large-Scale Mechanical Systems, Proc. 31st IMAC, Conf. Structural Dyn. 7 (2013), pp. 797–805.
  • U. Al-Saggaf and G. Franklin, An error bound for a discrete reduced order model of a linear multivariable system, IEEE Trans. Autom. Contr. 32 (9) (1987), pp. 815–819. doi:10.1109/TAC.1987.1104712.
  • P. Kerfriden, O. Goury, T. Rabczuk, et al., A partitioned model order reduction approach to rationalise computational expenses in nonlinear fracture mechanics, Comput. Methods Appl. Mech. Eng. 256 (5) (2013), pp. 169–188. doi:10.1016/j.cma.2012.12.004.
  • D.A. Wilson, Optimum solution of model reduction problem, Proc. IEEE 117 (1970), pp. 1161–1165.
  • S. Gugercin, A.C. Antoulas, and C. Beattie, H2 model reduction for large-scale linear dynamical systems, SIAM J. Matrix Anal. Appl. 30 (2) (2008), pp. 609–638. doi:10.1137/060666123.
  • P.V. Dooren, K.A. Gallivan, and P.A. Absil, H2 optimal model reduction of MIMO systems, Appl. Math. Lett. 21 (12) (2008), pp. 1267–1273. doi:10.1016/j.aml.2007.09.015.
  • A. Bunse-Gerstner, D. Kubalinska, G. Vossen, and D. Wilczek, H2-norm optimal model reduction for large scale discrete dynamical MIMO systems, J. Comput. Appl. Math. 233 (5) (2010), pp. 1202–1216. doi:10.1016/j.cam.2008.12.029.
  • W.Y. Yan and J. Lam, An approximate approach to H2 optimal model reduction, IEEE Trans. Autom. Contr. 40 (1999), pp. 1515–1521.
  • H. Sato and K. Sato, Riemannian trust-region methods for H2 optimal model reduction, IEEE 54th Annual CDC. (2015), pp. 4648–4655.
  • Y.L. Jiang and K.L. Xu, H2 optimal reduced models of general MIMO LTI systems via the cross Gramian on the Stiefel manifold, J. Frankl. Inst. 354 (8) (2017), pp. 3210–3224. doi:10.1016/j.jfranklin.2017.02.019.
  • C. Himpe and M. Ohlberger, A note on the cross Gramian for non-symmetric systems, Syst. Sci. Control Eng. 4 (1) (2015), pp. 199–208. doi:10.1080/21642583.2016.1215273.
  • K.V. Fernando and H. Nicholson, Minimality of SISO linear systems, Proc. IEEE 70 (10) (1982), pp. 1241–1242. doi:10.1109/PROC.1982.12460.
  • R.W. Aldhaheri and A.H. Bamani, Balanced realization of non-minimal MIMO discrete-time systems, Int. J. Syst. Sci. 25 (1) (1994), pp. 173–182. doi:10.1080/00207729408928951.
  • Y. Chahlaoui, A posteriori error bounds for discrete balanced truncation, Linear Algebra Appl. 436 (8) (2012), pp. 2744–2763. doi:10.1016/j.laa.2011.07.025.
  • A. Edelman, T.A. Arias, and S.T. Smith, The geometry of algorithms with orthogonality constraints, SIAM J. Matrix Anal. Appl. 20 (2) (1999), pp. 303–353. doi:10.1137/S0895479895290954.
  • W. Huang and P.A. Absil, A Riemannian symmetric rank-one trust-region method, Math. Program. 150 (2) (2015), pp. 179–216. doi:10.1007/s10107-014-0765-1.
  • P.A. Absil and C.G. Baker, Trust-region methods on Riemannian manifolds, Found. Comput. Math 7 (3) (2007), pp. 303–330. doi:10.1007/s10208-005-0179-9.
  • P.A. Absil, R. Mahony, and R. Sepulchre, Optimization Algorithms on Matrix Manifolds, Princeton University Press, Princeton, 2008.
  • X. Zhu, A Riemannian conjugate gradient method for optimization on the Stiefel manifold, Comput. Optim. Appl. 67 (1) (2017), pp. 73–110. doi:10.1007/s10589-016-9883-4.
  • H. Sato, A Dai-Yuan-type Riemannian conjugate gradient method with the weak Wolfe conditions, Comput. Optim. Appl. 64 (1) (2016), pp. 101–118. doi:10.1007/s10589-015-9801-1.
  • Y. Chahlaoui and P.V. Dooren, A collection of benchmark examples for model reduction of linear time invariant dynamical systems, SLICOT Working Note 2002-2, 2002.
