Mathematical and Computer Modelling of Dynamical Systems
Methods, Tools and Applications in Engineering and Related Sciences
Volume 20, 2014 - Issue 4
Original Articles

Dimension reduction for second-order systems by general orthogonal polynomials

Pages 414-432 | Received 18 Jan 2013, Accepted 15 Nov 2013, Published online: 12 Dec 2013

Abstract

In this article, we discuss time-domain dimension reduction methods for second-order systems by general orthogonal polynomials, and present a structure-preserving dimension reduction method for second-order systems. The resulting reduced systems not only preserve the second-order structure but also guarantee stability under certain conditions. An error estimate for the reduced models is also given. The effectiveness of the proposed methods is demonstrated by three test examples.

1. Introduction

A variety of problems lead to large second-order systems, such as circuit simulation, structural mechanical systems, computational electromagnetics and microelectromechanical systems (MEMS) [Citation1–Citation3]. In many areas of engineering, second-order systems have the following form:

(1) $M\ddot{x}(t) + D\dot{x}(t) + Kx(t) = Bu(t), \quad y(t) = L^{T}x(t),$ (1)

with initial conditions $x(t_0) = x_0$ and $\dot{x}(t_0) = \dot{x}_0$. We assume that $u(t) \in \mathbb{R}^{p}$, $x(t) \in \mathbb{R}^{N}$, $y(t) \in \mathbb{R}^{q}$, $B \in \mathbb{R}^{N \times p}$, $L \in \mathbb{R}^{N \times q}$ and $M, D, K \in \mathbb{R}^{N \times N}$ with $M$ invertible. $M$, $D$ and $K$ represent the mass, damping and stiffness matrices as known in structural dynamics; $B$ and $L$ are the input distribution and output measurement matrices, respectively. Equivalently, the second-order model (1) can be written as the following first-order time-invariant linear system with the state vector $z(t) = \begin{bmatrix} x(t) \\ \dot{x}(t) \end{bmatrix}$:

(2) $E\dot{z}(t) + Az(t) = Bu(t), \quad y(t) = Cz(t),$ (2)

where
$E = \begin{bmatrix} F & 0 \\ 0 & M \end{bmatrix}, \quad A = \begin{bmatrix} 0 & -F \\ K & D \end{bmatrix}, \quad B = \begin{bmatrix} 0 \\ B \end{bmatrix}, \quad C = \begin{bmatrix} L^{T} & 0 \end{bmatrix},$
and $F \in \mathbb{R}^{N \times N}$ is an arbitrary nonsingular matrix. In general, $F$ is chosen to be the matrix $K$ or an identity matrix of the appropriate dimension.

To reduce computational and resource demands, and to compute solutions and controls in acceptable time, dimension reduction techniques are applied. In particular, the structure-preserving dimension reduction of second-order systems [Citation4,Citation5] has been an active topic during recent years. We can essentially follow two paths during the computation of the reduced-order model (ROM) of second-order systems. We can rewrite the system (1) in first-order representation (2), and apply dimension reduction methods to the equivalent first-order model. However, this approach has a major disadvantage: it ignores the physical meaning of the original system matrices, and the reduced-order system is no longer in a second-order form generally.

Alternatively, we can preserve the second-order structure of the system and compute a second-order reduced model of the form:

$\tilde{M}\ddot{\tilde{x}}(t) + \tilde{D}\dot{\tilde{x}}(t) + \tilde{K}\tilde{x}(t) = \tilde{B}u(t), \quad \tilde{y}(t) = \tilde{L}^{T}\tilde{x}(t),$
where $\tilde{x}(t) \in \mathbb{R}^{m}$, $\tilde{M}, \tilde{D}, \tilde{K} \in \mathbb{R}^{m \times m}$, $\tilde{B} \in \mathbb{R}^{m \times p}$ and $\tilde{L} \in \mathbb{R}^{m \times q}$ with $m \ll N$.

During the last decades, a number of dimension reduction techniques have been proposed; overviews of the different techniques are given in [Citation6–Citation8] and the references therein. Among these methods, the main dimension reduction techniques for second-order systems are based on balanced truncation (BT) [Citation9–Citation13] on the one hand, and on Krylov subspaces [Citation2,Citation14–Citation16] on the other. In the BT methods, solving a Lyapunov equation (or a Sylvester equation) is the key step. Direct solution of the Lyapunov equation is only possible for medium-scale models, because it requires $O(n^3)$ operations and $O(n^2)$ storage, where $n$ is the dimension of the system. Several methods have been proposed to extend the range of applicability of BT to large-scale systems. One approach is to use algorithms such as the LR-ADI or Smith algorithm, which solve large Lyapunov or Sylvester equations approximately and yield low-rank Gramians; these low-rank Gramians are then used for approximate balanced truncation [Citation9,Citation17–Citation21]. Krylov subspace methods are numerically efficient and cheap to compute but, in general, the stability of the original system may be lost, and there is no general error bound similar to that of BT except under some special conditions [Citation22].

Apart from the above-mentioned dimension reduction methods, a number of methods based on orthogonal polynomials, such as the Chebyshev polynomial method [Citation23,Citation24], the Laguerre-SVD method [Citation25–Citation27] and the general orthogonal polynomials method [Citation28,Citation29], have also received attention for the dimension reduction of large-scale systems.

In this article, we discuss the time-domain dimension reduction methods based on orthogonal polynomials for second-order systems. A structure-preserving dimension reduction method is presented. This method expands the state and output variables in the space spanned by orthogonal polynomials, and then a structure-preserving reduced model is obtained by a projection transformation. The resulting reduced system not only preserves the second-order structure but also guarantees the stability under certain conditions. Additionally, an error estimate is given, and the effectiveness of this new method is shown by three test examples.

The remainder of this article is organized as follows. In Section 2, we briefly introduce some preliminary properties of general orthogonal polynomials. In Section 3, we present the dimension reduction methods based on orthogonal polynomials, including dimension reduction of the equivalent first-order system and the method of second-order system. The error and stability of the reduced system are also discussed in this section. Three test examples are given in Section 4. Conclusions are presented in Section 5.

Throughout this article, the following notation is used. The set of real numbers is denoted by $\mathbb{R}$. The $n \times n$ identity matrix is denoted by $I_n$ and the zero matrix by $0$. If the dimension of $I_n$ is apparent from the context, we simply write $I$; the dimension of $0$ will always be apparent from the context. For a square matrix $A \in \mathbb{R}^{n \times n}$, $A \succeq 0$ denotes that $A$ is nonnegative definite, i.e. $x^TAx \ge 0$ for every $x \in \mathbb{R}^n$, and $A \succ 0$ denotes that $A$ is positive definite, i.e. $x^TAx > 0$ for all vectors $x \ne 0$.

2. General orthogonal polynomials

The orthogonal polynomials $\phi_i(t)$, which span the orthogonal polynomial space $L^2_{\omega(t)}([a,b]) = \mathrm{span}\{\phi_0(t), \phi_1(t), \ldots, \phi_i(t), \ldots\}$ with respect to the weight function $\omega(t)$ over the interval $a \le t \le b$, satisfy the condition

$\langle \phi_i(t), \phi_j(t) \rangle = \int_a^b \omega(t)\phi_i(t)\phi_j(t)\,dt = \begin{cases} \xi_i, & i = j, \\ 0, & i \ne j, \end{cases}$

where $\langle \phi_i(t), \phi_j(t) \rangle$ denotes the inner product in the orthogonal polynomial space $L^2_{\omega(t)}([a,b])$, the weight function $\omega(t)$ is nonnegative and measurable in Lebesgue's sense with $\int_a^b \omega(t)\,dt > 0$, and $\xi_i$ is a nonzero constant.
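As a quick numerical illustration (not part of the original derivation), the orthogonality condition can be checked for the Legendre polynomials, for which the weight is $\omega(t) = 1$ on $[-1,1]$ and $\xi_i = 2/(2i+1)$, using Gauss–Legendre quadrature:

```python
import numpy as np
from numpy.polynomial.legendre import leggauss, Legendre

# 10-point Gauss-Legendre quadrature integrates polynomials of degree <= 19
# exactly on [-1, 1], which covers the products below.
nodes, weights = leggauss(10)

P2 = Legendre.basis(2)  # phi_2(t) = (3t^2 - 1)/2
P3 = Legendre.basis(3)  # phi_3(t) = (5t^3 - 3t)/2

inner_23 = np.sum(weights * P2(nodes) * P3(nodes))  # <phi_2, phi_3> -> 0
inner_22 = np.sum(weights * P2(nodes) * P2(nodes))  # xi_2 = 2/(2*2+1) = 0.4

assert abs(inner_23) < 1e-12
assert abs(inner_22 - 0.4) < 1e-12
```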

Note that the Chebyshev polynomials, Legendre polynomials, Laguerre polynomials, Hermite polynomials and Jacobi polynomials are all derived from the following recurrence relation:

$\phi_{i+1}(t) = (a_i t + b_i)\phi_i(t) + c_i\phi_{i-1}(t), \quad i = 1, 2, \ldots,$
with $\phi_0(t) = 1$ and $\phi_1(t) = a_0 t + b_0$, where $a_i$, $b_i$ and $c_i$ are the recurrence coefficients, whose values depend on the particular classical orthogonal polynomial family.
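For example, the Chebyshev polynomials of the first kind arise from this recurrence with $a_i = 2$, $b_i = 0$, $c_i = -1$ (and $a_0 = 1$, $b_0 = 0$). A minimal sketch, evaluating the recurrence and cross-checking the closed form $T_n(t) = \cos(n \arccos t)$:

```python
import numpy as np

def chebyshev_T(n, t):
    """Evaluate T_n(t) via the three-term recurrence
    T_{i+1}(t) = 2*t*T_i(t) - T_{i-1}(t), with T_0 = 1, T_1 = t."""
    prev, cur = 1.0, t
    if n == 0:
        return prev
    for _ in range(n - 1):
        prev, cur = cur, 2.0 * t * cur - prev
    return cur

# Cross-check against the closed form on [-1, 1]
t = 0.3
for n in range(6):
    assert abs(chebyshev_T(n, t) - np.cos(n * np.arccos(t))) < 1e-12
```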

An important property of orthogonal polynomials is the differential recurrence relation of the form:

(3) $\phi_i(t) = \alpha_i\dot{\phi}_{i+1}(t) + \beta_i\dot{\phi}_i(t) + \gamma_i\dot{\phi}_{i-1}(t), \quad i = 1, 2, \ldots,$ (3)
where $\alpha_i$, $\beta_i$ and $\gamma_i$ are the differential recurrence coefficients [Citation30]; their values for the classical orthogonal polynomials are listed in Table 1.
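As an illustrative sketch (Table 1 is not reproduced here), for the Legendre polynomials the classical identity $(2i+1)P_i(t) = \dot{P}_{i+1}(t) - \dot{P}_{i-1}(t)$ gives $\alpha_i = 1/(2i+1)$, $\beta_i = 0$ and $\gamma_i = -1/(2i+1)$; this can be verified numerically:

```python
import numpy as np
from numpy.polynomial.legendre import Legendre

def check_diff_recurrence(i):
    """Check phi_i = alpha_i*phi'_{i+1} + beta_i*phi'_i + gamma_i*phi'_{i-1}
    for the Legendre polynomials."""
    alpha, beta, gamma = 1.0 / (2 * i + 1), 0.0, -1.0 / (2 * i + 1)
    lhs = Legendre.basis(i)
    rhs = (alpha * Legendre.basis(i + 1).deriv()
           + beta * Legendre.basis(i).deriv()
           + gamma * Legendre.basis(i - 1).deriv())
    t = np.linspace(-1, 1, 7)
    return np.allclose(lhs(t), rhs(t))

assert all(check_diff_recurrence(i) for i in range(1, 8))
```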

Table 1. Differential recurrence coefficients for classical orthogonal polynomials.

Orthogonal polynomials have been successfully used to reduce very large systems by many researchers; see [Citation24,Citation25,Citation28,Citation29] and the references therein. It should be pointed out that the Chebyshev, Legendre and Jacobi polynomials are suitable for dimension reduction on the domain $[-1, 1]$, while the Laguerre polynomials are suitable on $[0, +\infty)$ and the Hermite polynomials on $(-\infty, +\infty)$. Suitable transformations are therefore required to map the intervals on which the orthogonal polynomials are defined onto those of specific problems. Using the affine transformation $t = \frac{(b-a)t' + ab' - a'b}{b' - a'}$, we can map the domain $[a', b']$ onto the domain $[a, b]$.
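A minimal sketch of such an interval transformation (written here with explicit endpoints rather than the paper's symbols):

```python
def affine_map(tp, a_std, b_std, a, b):
    """Map t' in the polynomials' natural interval [a_std, b_std]
    onto t in the problem interval [a, b]."""
    return a + (b - a) * (tp - a_std) / (b_std - a_std)

# e.g. map Chebyshev's natural interval [-1, 1] onto a simulation window [0, 10]
assert affine_map(-1.0, -1.0, 1.0, 0.0, 10.0) == 0.0
assert affine_map(1.0, -1.0, 1.0, 0.0, 10.0) == 10.0
assert affine_map(0.0, -1.0, 1.0, 0.0, 10.0) == 5.0
```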

The convergence behaviour should be considered when a given function is expanded in the space spanned by orthogonal polynomials. Because the zeros of the Laguerre and Hermite polynomials are widely spread over $[0, +\infty)$ and $(-\infty, +\infty)$, respectively, the Chebyshev and Legendre polynomials have better convergence properties than the Laguerre and Hermite polynomials [Citation31]. Which of these orthogonal polynomial families is best for reduction depends on the specific problem.

In the following, we briefly present two lemmas for dimension reduction based on general orthogonal polynomials.

Lemma 2.1. ([Citation31]) If $\deg(p(t)) < \deg(\phi_i(t))$ $(i = 0, 1, \ldots)$ for a polynomial $p(t)$, where $\deg(p(t))$ denotes the degree of $p(t)$, then $\langle p(t), \phi_i(t) \rangle = 0$.

Lemma 2.2. If $\phi_i(t)$ $(i = 0, 1, \ldots)$ are orthogonal polynomials, then

$\mathrm{span}\{\phi_0(t), \phi_1(t), \ldots, \phi_i(t), \ldots\} = \mathrm{span}\{\dot{\phi}_1(t), \dot{\phi}_2(t), \ldots, \dot{\phi}_i(t), \ldots\} = \mathrm{span}\{\ddot{\phi}_2(t), \ddot{\phi}_3(t), \ldots, \ddot{\phi}_i(t), \ldots\}.$

Proof. The proof of the first equality can be found in [Citation28]; we only need to prove the second equality. By the degree properties of the involved polynomials, it suffices to show that the functions in $\{\ddot{\phi}_2(t), \ddot{\phi}_3(t), \ldots, \ddot{\phi}_i(t), \ldots\}$ are linearly independent. Note that $\deg(\ddot{\phi}_i(t)) = \deg(\ddot{\phi}_{i-1}(t)) + 1$, $i = 3, 4, \ldots$. If $\eta_2\ddot{\phi}_2(t) + \eta_3\ddot{\phi}_3(t) + \cdots + \eta_i\ddot{\phi}_i(t) + \cdots = 0$, then $\eta_2 = \eta_3 = \cdots = \eta_i = \cdots = 0$. This implies that the conclusion holds. □

3. Dimension reduction of second-order systems

In this section, we discuss two dimension reduction methods of second-order systems by general orthogonal polynomials: dimension reduction of equivalent first-order systems, and dimension reduction of second-order systems by direct projection.

3.1. Dimension reduction of equivalent first-order systems

In this method, we first transform the second-order system to an equivalent first-order model, and then reduce its dimension by orthogonal polynomials.

Now we present the concrete dimension reduction procedure. First, we expand the state vector z(t) and the input variable u(t) of the equivalent system (2) as approximate forms

(4) $z(t) \approx z_m(t) = \sum_{i=0}^{m-1} v_i\phi_i(t), \quad u(t) \approx u_m(t) = \sum_{i=1}^{m-1} u_i\dot{\phi}_i(t),$ (4)

where $v_i \in \mathbb{R}^{2N}$ and $u_i \in \mathbb{R}^{p}$, $i = 0, 1, \ldots, m-1$ (note that $u_0 = 0$), are coefficient vectors. Substituting (4) and (3) into (2) leads to

$E\left(\sum_{i=0}^{m-1} v_i\dot{\phi}_i(t)\right) + Av_0\phi_0(t) + A\sum_{i=1}^{m-1} v_i\left(\alpha_i\dot{\phi}_{i+1}(t) + \beta_i\dot{\phi}_i(t) + \gamma_i\dot{\phi}_{i-1}(t)\right) = B\sum_{i=1}^{m-1} u_i\dot{\phi}_i(t).$

After rearranging the above equation, we get

$\sum_{i=1}^{m-1}(E + \beta_iA)v_i\dot{\phi}_i(t) + Av_0\phi_0(t) + \sum_{i=2}^{m}\alpha_{i-1}Av_{i-1}\dot{\phi}_i(t) + \sum_{i=1}^{m-2}\gamma_{i+1}Av_{i+1}\dot{\phi}_i(t) = \sum_{i=1}^{m-1}Bu_i\dot{\phi}_i(t).$

Comparing the constant terms and the coefficients of the same powers of ϕ˙i(t)(i=2,3,,m1) on both sides, we have

(5) $\begin{cases} G_1v_1\dot{\phi}_1(t) + Av_0\phi_0(t) + \gamma_2Av_2\dot{\phi}_1(t) = Bu_1\dot{\phi}_1(t), \\ G_iv_i + \alpha_{i-1}Av_{i-1} + \gamma_{i+1}Av_{i+1} = Bu_i, \quad i = 2, 3, \ldots, m-2, \\ G_{m-1}v_{m-1} + \alpha_{m-2}Av_{m-2} = Bu_{m-1}, \end{cases}$ (5)

where $G_i = E + \beta_iA$, $i = 1, 2, \ldots, m-1$. By the initial condition $z(t_0) = z_0$ and (4), we have $\sum_{i=0}^{m-1} v_i\phi_i(t_0) = z_0$.

Then, we can get the following linear equation

(6) HV=F,(6)

where $V = [v_0^T, v_1^T, \ldots, v_{m-1}^T]^T$, $F = [z_0^T, (Bu_1\dot{\phi}_1(t))^T, (Bu_2)^T, \ldots, (Bu_{m-1})^T]^T$, and

$H = \begin{bmatrix}
\phi_0(t_0)I & \phi_1(t_0)I & \phi_2(t_0)I & 0 & \cdots & \phi_{m-1}(t_0)I \\
\phi_0(t)A & \dot{\phi}_1(t)G_1 & \gamma_2\dot{\phi}_1(t)A & 0 & \cdots & 0 \\
0 & \alpha_1A & G_2 & \gamma_3A & & \vdots \\
\vdots & & \ddots & \ddots & \ddots & \\
& & & \alpha_{m-3}A & G_{m-2} & \gamma_{m-1}A \\
0 & \cdots & & 0 & \alpha_{m-2}A & G_{m-1}
\end{bmatrix}$

(note that $\phi_0(t)$ and $\dot{\phi}_1(t)$ are constants).

The coefficient vectors v0,v1,,vm1 can be computed by solving the linear Equation (6). Then we can construct the projection matrix Q\Rt2N×m by the modified Gram–Schmidt scheme or the QR decomposition, and it holds that QTQ=I. Let z(t)=Qz˜(t). Then, we can get a first-order reduced system:

(7) $\tilde{E}\dot{\tilde{z}}(t) + \tilde{A}\tilde{z}(t) = \tilde{B}u(t), \quad \tilde{y}(t) = \tilde{C}\tilde{z}(t),$ (7)

where $\tilde{z}(t) \in \mathbb{R}^{m}$, $\tilde{E} = Q^TEQ \in \mathbb{R}^{m \times m}$, $\tilde{A} = Q^TAQ \in \mathbb{R}^{m \times m}$, $\tilde{B} = Q^TB \in \mathbb{R}^{m \times p}$ and $\tilde{C} = CQ \in \mathbb{R}^{q \times m}$. The dimension reduction procedure is stated as Algorithm 1 below.

Algorithm 1.

Dimension reduction algorithm for first-order systems

  • 1: Compute the coefficient matrix Vˉ=[v0,v1,,vm1] by solving (6);

  • 2: Construct the projection matrix Q such that colspan{Vˉ}colspan{Q} and QTQ=I;

  • 3: Compute the coefficient matrices of the reduced system by E˜=QTEQ,A˜=QTAQ,B˜=QTB,C˜=CQ;

  • 4: Output E˜,A˜,B˜,C˜. 
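Steps 2–4 of Algorithm 1 can be sketched as follows; note that the random matrices below are illustrative stand-ins only (in the algorithm, the coefficient matrix V̄ comes from solving (6), and E, A, B, C are the system matrices of (2)):

```python
import numpy as np

rng = np.random.default_rng(0)
N2, m, p, q = 40, 6, 2, 2  # 2N = 40 state dimensions, reduced order m = 6

# Illustrative stand-ins for the matrices of the first-order system (2)
# and for Vbar = [v_0, ..., v_{m-1}] from step 1.
E, A = rng.standard_normal((N2, N2)), rng.standard_normal((N2, N2))
B, C = rng.standard_normal((N2, p)), rng.standard_normal((q, N2))
Vbar = rng.standard_normal((N2, m))

Q, _ = np.linalg.qr(Vbar)              # step 2: orthonormal basis, Q^T Q = I
E_r, A_r = Q.T @ E @ Q, Q.T @ A @ Q    # step 3: projected coefficient matrices
B_r, C_r = Q.T @ B, C @ Q

assert np.allclose(Q.T @ Q, np.eye(m))
assert E_r.shape == (m, m) and B_r.shape == (m, p) and C_r.shape == (q, m)
```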

3.2. Dimension reduction of second-order systems

Algorithm 1 reduces the equivalent first-order system, i.e. it is a linearization approach. There are two major drawbacks to such an approach: the corresponding linear system (2) has a state space of double dimension, which increases memory requirements, and the reduced system (7) is in first-order form, so the second-order structure of the original system is not preserved. For engineering design and control, it is extremely desirable to have a structure-preserving reduced-order model.

In this section, we apply the orthogonal polynomials to the second-order system directly, and get a structure-preserving dimension reduction algorithm for second-order systems in the time domain.

We expand the state vector x(t) and the input variable u(t) of the second-order system (1) as approximate forms

(8) $x(t) \approx x_m(t) = \sum_{i=0}^{m-1} h_i\phi_i(t), \quad u(t) \approx u_m(t) = \sum_{i=2}^{m-1} w_i\ddot{\phi}_i(t),$ (8)

where $h_i \in \mathbb{R}^{N}$ $(i = 0, 1, \ldots, m-1)$ and $w_i \in \mathbb{R}^{p}$ $(i = 2, 3, \ldots, m-1)$ are coefficient vectors. Substituting (8) and (3) into (1) leads to

(9) $M\sum_{i=0}^{m-1} h_i\ddot{\phi}_i(t) + D\sum_{i=0}^{m-1} h_i\dot{\phi}_i(t) + K\sum_{i=0}^{m-1} h_i\phi_i(t) = B\sum_{i=2}^{m-1} w_i\ddot{\phi}_i(t).$ (9)

Substituting (3) into (9) and applying it repeatedly, we have

$M\sum_{i=2}^{m-1} h_i\ddot{\phi}_i(t) + D\sum_{i=2}^{m} \alpha_{i-1}h_{i-1}\ddot{\phi}_i(t) + D\sum_{i=2}^{m-1} \beta_ih_i\ddot{\phi}_i(t) + D\sum_{i=2}^{m-2} \gamma_{i+1}h_{i+1}\ddot{\phi}_i(t) + Kh_0\phi_0(t) + K\sum_{i=3}^{m+1} \alpha_{i-2}\alpha_{i-1}h_{i-2}\ddot{\phi}_i(t) + K\sum_{i=2}^{m} (\alpha_{i-1}\beta_i + \alpha_{i-1}\beta_{i-1})h_{i-1}\ddot{\phi}_i(t) + K\sum_{i=2}^{m-1} (\alpha_i\gamma_{i+1} + \beta_i^2 + \alpha_{i-1}\gamma_i)h_i\ddot{\phi}_i(t) + K\sum_{i=2}^{m-2} (\beta_{i+1}\gamma_{i+1} + \beta_i\gamma_{i+1})h_{i+1}\ddot{\phi}_i(t) + K\sum_{i=2}^{m-3} \gamma_{i+1}\gamma_{i+2}h_{i+2}\ddot{\phi}_i(t) = B\sum_{i=2}^{m-1} w_i\ddot{\phi}_i(t).$

Comparing the constant terms and the coefficients of the same powers of ϕ¨i(t)(i=3,4,,m1) on both sides, we have

(10) $\begin{cases} Kh_0\phi_0(t) + \left(S_2h_1 + P_2h_2 + Q_2h_3 + \gamma_3\gamma_4Kh_4\right)\ddot{\phi}_2(t) = Bw_2\ddot{\phi}_2(t), \\ K_ih_{i-2} + S_ih_{i-1} + P_ih_i + Q_ih_{i+1} + \gamma_{i+1}\gamma_{i+2}Kh_{i+2} = Bw_i, \quad 3 \le i \le m-3, \\ K_{m-2}h_{m-4} + S_{m-2}h_{m-3} + P_{m-2}h_{m-2} + Q_{m-2}h_{m-1} = Bw_{m-2}, \\ K_{m-1}h_{m-3} + S_{m-1}h_{m-2} + P_{m-1}h_{m-1} = Bw_{m-1}, \end{cases}$ (10)

where $S_i = \alpha_{i-1}D + (\alpha_{i-1}\beta_i + \alpha_{i-1}\beta_{i-1})K$, $P_i = M + \beta_iD + (\alpha_i\gamma_{i+1} + \beta_i^2 + \alpha_{i-1}\gamma_i)K$, $Q_i = \gamma_{i+1}D + (\beta_{i+1}\gamma_{i+1} + \beta_i\gamma_{i+1})K$ and $K_i = \alpha_{i-2}\alpha_{i-1}K$, $i = 2, 3, \ldots$. From the initial conditions $x(t_0) = x_0$, $\dot{x}(t_0) = \dot{x}_0$ and (8), we have $\sum_{i=0}^{m-1} h_i\phi_i(t_0) = x_0$ and $\sum_{i=1}^{m-1} h_i\dot{\phi}_i(t_0) = \dot{x}_0$. Then, we obtain the following linear equation

(11) AH=F,(11)

where $H = [h_0^T, h_1^T, \ldots, h_{m-1}^T]^T$, $F = [x_0^T, \dot{x}_0^T, (Bw_2)^T, \ldots, (Bw_{m-1})^T]^T$ and

(12) $A = \begin{bmatrix}
\phi_0(t_0)I & \phi_1(t_0)I & \phi_2(t_0)I & \phi_3(t_0)I & \cdots & \phi_{m-1}(t_0)I \\
0 & \dot{\phi}_1(t_0)I & \dot{\phi}_2(t_0)I & \dot{\phi}_3(t_0)I & \cdots & \dot{\phi}_{m-1}(t_0)I \\
\frac{\phi_0(t)}{\ddot{\phi}_2(t)}K & S_2 & P_2 & Q_2 & \gamma_3\gamma_4K & \\
& K_3 & S_3 & P_3 & Q_3 & \gamma_4\gamma_5K \\
& & \ddots & \ddots & \ddots & \ddots \\
& & K_{m-3} & S_{m-3} & P_{m-3} & Q_{m-3} & \gamma_{m-2}\gamma_{m-1}K \\
& & & K_{m-2} & S_{m-2} & P_{m-2} & Q_{m-2} \\
& & & & K_{m-1} & S_{m-1} & P_{m-1}
\end{bmatrix}$ (12)

(note that $\phi_0(t)$ and $\ddot{\phi}_2(t)$ are constants).

Solving the linear Equation (11), we obtain the coefficient vectors $h_0, h_1, \ldots, h_{m-1}$. Then the projection matrix $W \in \mathbb{R}^{N \times m}$ can be constructed by the QR decomposition or the modified Gram–Schmidt scheme, so that $W^TW = I$. Finally, using the orthogonal projection transformation $x(t) = W\tilde{x}(t)$, where $\tilde{x}(t) \in \mathbb{R}^{m}$ and $m \ll N$, we obtain the reduced second-order system:

(13) $\tilde{M}\ddot{\tilde{x}}(t) + \tilde{D}\dot{\tilde{x}}(t) + \tilde{K}\tilde{x}(t) = \tilde{B}u(t), \quad \tilde{y}(t) = \tilde{L}^T\tilde{x}(t),$ (13)

where $\tilde{M} = W^TMW$, $\tilde{D} = W^TDW$, $\tilde{K} = W^TKW$, $\tilde{B} = W^TB$ and $\tilde{L} = W^TL$. The procedure is described by Algorithm 2.

Algorithm 2.

Dimension reduction algorithm for second-order systems

  • 1: Compute the coefficient matrix Hˉ=[h0,h1,,hm1] by solving (11);

  • 2: Construct the projection matrix W such that colspan{Hˉ}colspan{W} and WTW=I;

  • 3: Compute the coefficient matrices of the reduced system by M˜=WTMW,D˜=WTDW,K˜=WTKW,B˜=WTB and L˜=WTL;

  • 4: Output M˜,D˜,K˜,B˜ and L˜. 
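Steps 2–4 of Algorithm 2 can be sketched as follows; the random matrices are illustrative stand-ins only (in the algorithm, H̄ comes from solving (11), and M, D, K, B, L are the matrices of the original system (1)):

```python
import numpy as np

rng = np.random.default_rng(1)
N, m, p = 50, 8, 1

def spd(n):
    """A symmetric positive definite stand-in matrix."""
    X = rng.standard_normal((n, n))
    return X @ X.T + n * np.eye(n)

M, D, K = spd(N), spd(N), spd(N)
B, L = rng.standard_normal((N, p)), rng.standard_normal((N, p))

# Hbar stands in for the coefficient matrix [h_0, ..., h_{m-1}] from step 1
Hbar = rng.standard_normal((N, m))
W, _ = np.linalg.qr(Hbar)  # step 2: W^T W = I

# Step 3: the projected model keeps the second-order form, and the
# symmetry of M, D, K survives the congruence transformation.
M_r, D_r, K_r = W.T @ M @ W, W.T @ D @ W, W.T @ K @ W
B_r, L_r = W.T @ B, W.T @ L

assert np.allclose(M_r, M_r.T) and np.allclose(K_r, K_r.T)
assert M_r.shape == (m, m)
```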

In Algorithm 2, the main computational expense is in solving the linear Equation (11) (or (6) in Algorithm 1). A direct method for this equation requires $O(n^3)$ flops, where $n$ is $mN$ or $2mN$. For dense operations, the computational complexity is of the same order as that of the balanced truncation method [Citation6]. In general, the work involved in performing operations on a sparse matrix is about $O(c^2n)$ [Citation31], where $c$ denotes the average number of nonzero elements per row/column of the sparse matrix. In practice, it is important to balance the accuracy and the computational complexity of the reduced system. Iterative solvers, such as the generalized minimal residual (GMRES) method (with restarts) [Citation32,Citation33] and the biconjugate gradients stabilized (Bi-CGSTAB) method [Citation34], can be used to solve these equations. The numerical results in Section 4 show that, for time-domain simulation of the test examples, our approach may be superior to other model reduction methods in some situations.
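As an illustration of the iterative-solver option, the sketch below solves a sparse, diagonally dominant stand-in system with SciPy's GMRES (the tridiagonal matrix is hypothetical and only plays the role of the block-banded matrix in (11)):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import gmres

# A sparse, diagonally dominant stand-in for the block-banded system matrix
n = 2000
A = sp.diags([np.full(n - 1, -1.0), np.full(n, 4.0), np.full(n - 1, -1.0)],
             offsets=[-1, 0, 1], format="csr")
f = np.ones(n)

h, info = gmres(A, f)  # info == 0 signals convergence
assert info == 0
assert np.linalg.norm(A @ h - f) / np.linalg.norm(f) < 1e-4
```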

Next, we discuss some properties of the reduced-order model obtained by Algorithm 2, including coefficients matching, error estimation and stability.

3.2.1. Coefficients matching

First, we approximately expand the state variable x(t) and the output variable y(t) of the original system (1) and those (x˜(t), y˜(t)) of the reduced-order model (13) in the space spanned by orthogonal polynomials as follows:

(14) $x(t) \approx x_m(t) = \sum_{i=0}^{m-1} h_i\phi_i(t), \quad \tilde{x}(t) \approx \tilde{x}_m(t) = \sum_{i=0}^{m-1} \tilde{h}_i\phi_i(t),$ (14)
(15) $y(t) \approx y_m(t) = \sum_{i=0}^{m-1} g_i\phi_i(t), \quad \tilde{y}(t) \approx \tilde{y}_m(t) = \sum_{i=0}^{m-1} \tilde{g}_i\phi_i(t),$ (15)

where $h_i \in \mathbb{R}^{N}$, $\tilde{h}_i \in \mathbb{R}^{m}$ and $g_i, \tilde{g}_i \in \mathbb{R}^{q}$.

Lemma 3.1. The coefficients hi and h˜i in (14) satisfy the following relation: hi=Wh˜i for i=0,1,,m1, where W is the projection matrix obtained by Algorithm 2.

Proof. From Algorithm 2, we have $\mathrm{span}\{h_0, h_1, \ldots, h_{m-1}\} \subseteq \mathrm{span}\{W\}$. Then there exist vectors $\hat{h}_i \in \mathbb{R}^{m}$ such that $h_i = W\hat{h}_i$, $i = 0, 1, \ldots, m-1$. From (11), we have $A\,W_m(W)\hat{H} = F$, where $W_m(W) = \mathrm{diag}(W, W, \ldots, W)$ ($m$ diagonal blocks) and $\hat{H} = [\hat{h}_0^T, \hat{h}_1^T, \ldots, \hat{h}_{m-1}^T]^T$. Premultiplying both sides by $W_m(W^T)$, we obtain

(16) AˆHˆ=Fˆ,(16)

where Aˆ=Wm(WT)AWm(W) and Fˆ=Wm(WT)F. Considering the reduced-order system (13) with the initial conditions x˜0=WTx0 and x˜˙0=WTx˙0, we also have

(17) A˜H˜=F˜,(17)

where H˜=[h˜0T,h˜1T,,h˜m1T]T, F˜=[x˜0T,x˜˙0T,(B˜w2)T,,(B˜wm1)T]T,

$\tilde{A} = \begin{bmatrix}
\phi_0(t_0)I & \phi_1(t_0)I & \phi_2(t_0)I & \phi_3(t_0)I & \cdots & \phi_{m-1}(t_0)I \\
0 & \dot{\phi}_1(t_0)I & \dot{\phi}_2(t_0)I & \dot{\phi}_3(t_0)I & \cdots & \dot{\phi}_{m-1}(t_0)I \\
\frac{\phi_0(t)}{\ddot{\phi}_2(t)}\tilde{K} & \tilde{S}_2 & \tilde{P}_2 & \tilde{Q}_2 & \gamma_3\gamma_4\tilde{K} & \\
& \tilde{K}_3 & \tilde{S}_3 & \tilde{P}_3 & \tilde{Q}_3 & \gamma_4\gamma_5\tilde{K} \\
& & \ddots & \ddots & \ddots & \ddots \\
& & \tilde{K}_{m-3} & \tilde{S}_{m-3} & \tilde{P}_{m-3} & \tilde{Q}_{m-3} & \gamma_{m-2}\gamma_{m-1}\tilde{K} \\
& & & \tilde{K}_{m-2} & \tilde{S}_{m-2} & \tilde{P}_{m-2} & \tilde{Q}_{m-2} \\
& & & & \tilde{K}_{m-1} & \tilde{S}_{m-1} & \tilde{P}_{m-1}
\end{bmatrix},$

and $\tilde{S}_i = \alpha_{i-1}\tilde{D} + (\alpha_{i-1}\beta_i + \alpha_{i-1}\beta_{i-1})\tilde{K}$, $\tilde{P}_i = \tilde{M} + \beta_i\tilde{D} + (\alpha_i\gamma_{i+1} + \beta_i^2 + \alpha_{i-1}\gamma_i)\tilde{K}$, $\tilde{Q}_i = \gamma_{i+1}\tilde{D} + (\beta_{i+1}\gamma_{i+1} + \beta_i\gamma_{i+1})\tilde{K}$, $\tilde{K}_i = \alpha_{i-2}\alpha_{i-1}\tilde{K}$, $i = 2, 3, \ldots$.

Note that $\tilde{M} = W^TMW$, $\tilde{D} = W^TDW$, $\tilde{K} = W^TKW$, $\tilde{B} = W^TB$, $\tilde{L} = W^TL$ and $W^TW = I$. Then Equations (16) and (17) are equivalent, that is, $\tilde{H} = \hat{H}$. Therefore, we have $h_i = W\tilde{h}_i$, $i = 0, 1, \ldots, m-1$. □

Theorem 3.2. The coefficients gi and g˜i in (15) satisfy the following relation: gi=g˜i for i=0,1,,m1.

Proof. From (14) and (15), we have $g_i = L^Th_i$ and $\tilde{g}_i = \tilde{L}^T\tilde{h}_i$. By Lemma 3.1 and $\tilde{L} = W^TL$, we have $\tilde{g}_i = \tilde{L}^T\tilde{h}_i = L^TW\tilde{h}_i = L^Th_i = g_i$ for $i = 0, 1, \ldots, m-1$. □

Theorem 3.2 illustrates that the output variable y˜(t) of the reduced-order system (13) matches the first m expansion coefficients of the output variable y(t) of the original system (1) in the orthogonal polynomial space.

3.2.2. Error and stability analysis

In this section, we take into account the approximation error between the original system (1) and its reduced-order system (13).

First, for arbitrary fixed nonnegative integer k, we approximately expand the state variable x(t), the output variable y(t) of the original system (1) and those (x˜(t),y˜(t)) of the reduced-order model (13) as follows:

$x(t) \approx x_{m+k}(t) = \sum_{i=0}^{m+k-1} h_i\phi_i(t), \quad \tilde{x}(t) \approx \tilde{x}_{m+k}(t) = \sum_{i=0}^{m+k-1} \tilde{h}_i\phi_i(t),$
$y(t) \approx y_{m+k}(t) = \sum_{i=0}^{m+k-1} g_i\phi_i(t), \quad \tilde{y}(t) \approx \tilde{y}_{m+k}(t) = \sum_{i=0}^{m+k-1} \tilde{g}_i\phi_i(t),$

and u(t) has the following approximate expansion:

$u(t) \approx u_{m+k}(t) = \sum_{i=2}^{m+k-1} w_i\ddot{\phi}_i(t).$

We compare the approximate output variables ym+k(t) with y˜m+k(t). Similar to (11), we can obtain the following linear equations:

(18) $\begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix}\begin{bmatrix} H_1 \\ H_2 \end{bmatrix} = \begin{bmatrix} F_1 \\ F_2 \end{bmatrix},$ (18)

where

$H_1 = \begin{bmatrix} h_0 \\ h_1 \\ \vdots \\ h_{m-1} \end{bmatrix}, \quad H_2 = \begin{bmatrix} h_m \\ h_{m+1} \\ \vdots \\ h_{m+k-1} \end{bmatrix}, \quad F_1 = \begin{bmatrix} x_0 \\ \dot{x}_0 \\ Bw_2 \\ \vdots \\ Bw_{m-1} \end{bmatrix}, \quad F_2 = \begin{bmatrix} Bw_m \\ Bw_{m+1} \\ \vdots \\ Bw_{m+k-1} \end{bmatrix},$

and A11=A (see (11)),

$A_{12} = \begin{bmatrix}
\phi_m(t_0)I & \phi_{m+1}(t_0)I & \cdots & \phi_{m+k-1}(t_0)I \\
\dot{\phi}_m(t_0)I & \dot{\phi}_{m+1}(t_0)I & \cdots & \dot{\phi}_{m+k-1}(t_0)I \\
0 & 0 & \cdots & 0 \\
\vdots & & & \vdots \\
\gamma_{m-1}\gamma_mK & 0 & \cdots & 0 \\
Q_{m-1} & \gamma_m\gamma_{m+1}K & 0 & \cdots
\end{bmatrix}, \quad
A_{21} = \begin{bmatrix}
0 & \cdots & 0 & K_m & S_m \\
0 & \cdots & 0 & 0 & K_{m+1} \\
0 & \cdots & & & 0 \\
\vdots & & & & \vdots \\
0 & \cdots & & & 0
\end{bmatrix},$

and $A_{22}$ is the corresponding banded block built from $K_i$, $S_i$, $P_i$, $Q_i$ and $\gamma_{i+1}\gamma_{i+2}K$ for $i = m, \ldots, m+k-1$, with the same band structure as (12).

Similarly, the coefficients h˜i(i=0,1,,m+k1) satisfy the following linear equations:

(19) $\begin{bmatrix} \tilde{A}_{11} & \tilde{A}_{12} \\ \tilde{A}_{21} & \tilde{A}_{22} \end{bmatrix}\begin{bmatrix} \tilde{H}_1 \\ \tilde{H}_2 \end{bmatrix} = \begin{bmatrix} \tilde{F}_1 \\ \tilde{F}_2 \end{bmatrix},$ (19)

where

$\tilde{A}_{11} = W_m(W^T)A_{11}W_m(W), \quad \tilde{A}_{12} = W_m(W^T)A_{12}W_k(W),$
$\tilde{A}_{21} = W_k(W^T)A_{21}W_m(W), \quad \tilde{A}_{22} = W_k(W^T)A_{22}W_k(W),$

and

$\tilde{H}_1 = \begin{bmatrix} \tilde{h}_0 \\ \tilde{h}_1 \\ \vdots \\ \tilde{h}_{m-1} \end{bmatrix}, \quad \tilde{H}_2 = \begin{bmatrix} \tilde{h}_m \\ \tilde{h}_{m+1} \\ \vdots \\ \tilde{h}_{m+k-1} \end{bmatrix}, \quad \tilde{F}_1 = \begin{bmatrix} \tilde{x}_0 \\ \dot{\tilde{x}}_0 \\ \tilde{B}w_2 \\ \vdots \\ \tilde{B}w_{m-1} \end{bmatrix}, \quad \tilde{F}_2 = \begin{bmatrix} \tilde{B}w_m \\ \tilde{B}w_{m+1} \\ \vdots \\ \tilde{B}w_{m+k-1} \end{bmatrix}.$

Theorem 3.3. Let $T_1 = W_k(W)(\tilde{A}_{12}^T\tilde{A}_{12})^{-1}\tilde{A}_{12}^TW_m(W^T)A_{12} - I$ and $T_2 = W_k(W)\tilde{A}_{22}^{-1}W_k(W^T)A_{22} - I$. Then

(1) $\|W_m(W)\tilde{H}_1 - H_1\| = 0$;

(2) $\|W_k(W)\tilde{H}_2 - H_2\|_2 \le \min\{\|T_1\|_2, \|T_2\|_2\}\,\|H_2\|_2$.

Proof. The conclusion (1) holds by Lemma 3.1. The proof of the conclusion (2) can be found in [Citation28], and we omit it here. □

Theorem 3.4. For the approximate output variables $y_{m+k}(t)$ of the original system (1) and $\tilde{y}_{m+k}(t)$ of the reduced-order model (13), it holds that

$\|y_{m+k}(t) - \tilde{y}_{m+k}(t)\|_2 \le \min\{\|T_1\|_2, \|T_2\|_2\}\,\|L^T\|_2\,\|H_2\|_2\sqrt{\sum_{i=m}^{m+k-1}\phi_i^2(t)}.$

Proof. First, from the conclusion (1) of Theorem 3.3, we have

$\|x_{m+k}(t) - W\tilde{x}_{m+k}(t)\|_2 = \left\|\sum_{i=0}^{m+k-1}(h_i - W\tilde{h}_i)\phi_i(t)\right\|_2 = \left\|\begin{bmatrix}\phi_0(t)I_N & \phi_1(t)I_N & \cdots & \phi_{m+k-1}(t)I_N\end{bmatrix}\begin{bmatrix} H_1 - W_m(W)\tilde{H}_1 \\ H_2 - W_k(W)\tilde{H}_2 \end{bmatrix}\right\|_2 \le \|H_2 - W_k(W)\tilde{H}_2\|_2\left\|\begin{bmatrix}\phi_m(t)I_N & \phi_{m+1}(t)I_N & \cdots & \phi_{m+k-1}(t)I_N\end{bmatrix}\right\|_2.$

Further, from the conclusion (2) of Theorem 3.3, it has

$\|x_{m+k}(t) - W\tilde{x}_{m+k}(t)\|_2 \le \min\{\|T_1\|_2, \|T_2\|_2\}\,\|H_2\|_2\sqrt{\sum_{i=m}^{m+k-1}\phi_i^2(t)}.$

Also since y(t)=LTx(t) and y˜(t)=LTWx˜(t), then we can obtain

$\|y_{m+k}(t) - \tilde{y}_{m+k}(t)\|_2 \le \min\{\|T_1\|_2, \|T_2\|_2\}\,\|L^T\|_2\,\|H_2\|_2\sqrt{\sum_{i=m}^{m+k-1}\phi_i^2(t)}.$ □

Next, we discuss the stability of the reduced-order system (13).

Lemma 3.5. ([Citation35]) The linear system (2) is stable if $A + A^T \succeq 0$ and $E = E^T \succeq 0$.

Lemma 3.6. ([Citation5]) The second-order system (1) is stable if $D + D^T \succeq 0$, $M = M^T \succeq 0$ and $K = K^T \succ 0$.

Theorem 3.7. The reduced-order system (13) is stable if $D + D^T \succeq 0$, $M = M^T \succeq 0$ and $K = K^T \succ 0$.

The stability of the reduced-order system (13) is guaranteed because the symmetry and (semi)definiteness of the matrices M, D and K are invariant under the congruence transformation induced by W.
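This congruence argument can be illustrated numerically (a sketch with random data, not from the paper): for a symmetric positive definite K and any matrix W with orthonormal columns, the projected matrix WᵀKW stays symmetric positive definite, by Sylvester's law of inertia.

```python
import numpy as np

rng = np.random.default_rng(2)
N, m = 30, 5

# A symmetric positive definite stand-in for the stiffness matrix K
X = rng.standard_normal((N, N))
K = X @ X.T + N * np.eye(N)

# W has orthonormal columns (full column rank), W^T W = I
W, _ = np.linalg.qr(rng.standard_normal((N, m)))
K_r = W.T @ K @ W  # congruence transformation

# K_r inherits symmetry and positive definiteness
assert np.allclose(K_r, K_r.T)
assert np.all(np.linalg.eigvalsh(K_r) > 0)
```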

4. Numerical examples

In order to demonstrate the accuracy and effectiveness of the proposed algorithms, three test examples are given in this section. All numerical experiments were run in MATLAB (R2008a; The MathWorks, Inc., Natick, MA, USA), and we used ode15s to solve differential equations.

Example 1. Clamped beam model: this system is a second-order model of dimension N=174 with one input and one output [Citation7]. The input represents the force applied to the structure at the free end, and the output is the resulting displacement. M is a diagonal matrix, and D and K are dense matrices. In this example, we take the initial conditions to be x(0)=x˙(0)=0.

Two kinds of approaches are applied to this system:

  1. Applying Algorithm 1, the balanced truncation reduction (BTR) method in [Citation36] and the Krylov subspace (Arnoldi) method [Citation35] to the equivalent first-order system of order 348;

  2. Applying Algorithm 2, the second-order balanced truncation (SOBT) method in [Citation10] and the second-order Arnoldi method (SOAR) in [Citation14] to the original second-order system directly.

Figure 1(a) shows the transient responses of the equivalent first-order system and of the reduced models obtained by the BTR method, the Arnoldi method and Algorithm 1 based on the Chebyshev polynomials of the first kind, the Chebyshev polynomials of the second kind and the Legendre polynomials. The corresponding relative errors are shown in Figure 1(b). To achieve the same accuracy, the order of the reduced model obtained by the Arnoldi method is taken as 40, while the others are taken as 20.

Figure 1. Transient responses (a) and relative errors (b) of the reduced models obtained by Algorithm 1, the BTR method and the Arnoldi method in Example 1.


Figure 2 shows the transient responses and relative errors of the reduced models obtained by the SOBT method, the SOAR method and Algorithm 2 based on the Chebyshev polynomials of the first kind, the Chebyshev polynomials of the second kind and the Legendre polynomials. To generate a reduced model of order 26, the SOBT method and Algorithm 2 require O(10^9) flops. The computational times of obtaining the reduced models are listed in Table 2.

Figure 2. Transient responses (a) and relative errors (b) of the reduced models obtained by Algorithm 2, the SOBT method and the SOAR method in Example 1.


Table 2. Computational times of obtaining reduced models for Example 1.

The simulation results show that Algorithm 1 is effective for the dimension reduction of the second-order system, although it does not preserve the second-order structure; Algorithm 2 not only preserves the second-order structure but also yields a very good approximation.

Example 2. International Space Station model (ISS): it is a structural MIMO (multi-input and multi-output) model of component 1r (Russian service module) of the International Space Station (ISS) [Citation7]. This system is a second-order model of dimension N=135 with 3 inputs and 3 outputs. The matrices M, D and K are diagonal.

In this example, we use Algorithm 2, the SOBT method in [Citation10] and the SOAR method [Citation14] with the initial conditions x(0)=x˙(0)=0. The output y(t) includes three components y1(t), y2(t) and y3(t). Our numerical simulation shows that the orthogonal polynomial methods can approximate these components of the output very well if the reduced dimension is set to 27. Figure 3 shows the transient responses of the component y3(t) of the original system and of the reduced-order systems obtained by the SOBT method, the SOAR method and Algorithm 2 based on the Chebyshev polynomials of the first kind, the Chebyshev polynomials of the second kind and the Legendre polynomials, respectively. The relative errors are also shown in this figure. The computational times of obtaining the reduced models are listed in Table 3. The simulation results demonstrate that the MIMO second-order reduced models obtained by Algorithm 2 give an accurate approximation.

Figure 3. Transient responses (a) and relative errors (b) of the output y3(t) of the reduced models obtained by Algorithm 2, the SOBT method and the SOAR method in Example 2.


Table 3. Computational times of obtaining reduced models for Example 2.

Example 3. We consider a second-order model of dimension N=1074, which comes from dynamic analysis in structural engineering. The sparse matrix K is the stiffness matrix and the diagonal matrix M is the mass matrix from the dynamic modelling of structures, and D=αM+βK, where we take α=0.675 and β=0.00315. B and C are both N×1 vectors with all elements equal to one. The data were extracted from http://math.nist.gov/MatrixMarket/data/Harwell-Boeing/bcsstruc1/bcsstk08.html.

In this example, we use the efficient SOBT method proposed in [Citation9], which presented a way to compute the reduced-order system efficiently by exploiting the sparsity and second-order structure of the original system. Here, we calculated 20 parameters following the heuristic parameter choice proposed by Penzl [Citation37]. Numerical simulation shows that the orthogonal polynomial methods perform very well. We compare our algorithm with the SOBT method [Citation9] and the SOAR method [Citation14]. The transient responses and relative errors of the five methods, namely the Chebyshev polynomials of the first kind, the Chebyshev polynomials of the second kind, the Legendre polynomials, the SOBT method and the SOAR method, are shown in Figure 4. The computational times of obtaining the reduced models of order 18 are listed in Table 4. The above results illustrate the effectiveness of our method for this example.

Figure 4. Transient responses (a) and relative errors (b) of the reduced models obtained by Algorithm 2, the SOBT method and the SOAR method in Example 3.


Table 4. Computational times of obtaining reduced models for Example 3.

5. Conclusions

In this article, we have considered time-domain dimension reduction methods for second-order systems based on general orthogonal polynomials, and presented a structure-preserving dimension reduction method for second-order systems. This method expands the state and output variables in the space spanned by orthogonal polynomials, and a structure-preserving reduced model is then obtained by a projection transformation. The resulting reduced model not only preserves the second-order structure but also guarantees stability under certain conditions. Numerical examples demonstrate the effectiveness of the proposed methods. To make this method more efficient and reliable, it is necessary to derive numerical methods for solving linear equations of dimension $n > 10^7$. How to solve such large-scale sparse linear equations effectively will be the focus of our future research.

Funding

The work was supported by the Natural Science Foundation of China [grant number 11071192], [grant number 11371287]; the International Science and Technology Cooperation Program of China [grant number 2010DFA14700].

References

  • R.R. Craig Jr, Structural Dynamics: An Introduction to Computer Methods, John Wiley and Sons, New York, 1981.
  • R.W. Freund, Padé-type model reduction of second-order and higher-order linear dynamical systems, in Dimension Reduction of Large-Scale Systems, P. Benner, V. Mehrmann, and D. Sorensen, eds., Lecture Notes in Computational Science and Engineering Vol. 45, Springer-Verlag, Berlin, 2005, pp. 191–223.
  • J. Lienemann, E.B. Rudnyi, and J.G. Korvink, MST MEMS model order reduction: requirements and benchmarks. Linear Algebra Appl. 415 (2–3) (2006), pp. 469–498.
  • Y. Chahlaoui, K.A. Gallivan, A. Vandendorpe, and P. Van Dooren, Model reduction of second-order systems, in Dimension Reduction of Large-Scale Systems, P. Benner, V. Mehrmann, and D. Sorensen, eds., Lecture Notes in Computational Science and Engineering Vol. 45, Springer-Verlag, Berlin, 2005, pp. 149–172.
  • B. Salimbahrami, Structure preserving order reduction of large scale second order models, Ph.D. Thesis, Technische Universitaet Muenchen, 2005.
  • A.C. Antoulas, Approximation of Large-Scale Dynamical Systems, SIAM, Philadelphia, 2005.
  • A.C. Antoulas, D.C. Sorensen, and S. Gugercin, A survey of model reduction methods for large-scale systems. Contemp. Math. 280 (2001), pp. 193–219
  • T. Ersal, H.K. Fathy, L.S. Louca, D.G. Rideout, and J.L. Stein, A review of proper modeling techniques. J. Dyn. Syst. Meas. Control 130 (6) (2008), p. 061008.
  • P. Benner and J. Saak, Efficient balancing-based MOR for large-scale second-order systems. Math. Comput. Model. Dyn. Syst. 17 (2) (2011), pp. 123–143.
  • Y. Chahlaoui, D. Lemonnier, A. Vandendorpe, and P. Van Dooren, Second-order balanced truncation. Linear Algebra Appl. 415 (2–3) (2006), pp. 373–384.
  • M. Lehner and P. Eberhard, A two-step approach for model reduction in flexible multibody dynamics. Multibody Syst. Dyn. 17 (2–3) (2007), pp. 157–176.
  • D.G. Meyer and S. Srinivasan, Balancing and model reduction for second-order form linear systems. IEEE Trans Autom. Control 41 (11) (1996), pp. 1632–1644.
  • T. Reis and T. Stykel, Balanced truncation model reduction of second-order systems. Math. Comput. Model. Dyn. Syst. 14 (5) (2008), pp. 391–406.
  • Z.J. Bai and Y.F. Su, Dimension reduction of large-scale second-order dynamical systems via a second-order Arnoldi method. SIAM J. Sci. Comput. 26 (5) (2005), pp. 1692–1709.
  • M. Lehner and P. Eberhard, On the use of moment-matching to build reduced order models in flexible multibody dynamics. Multibody Syst. Dyn. 16 (2) (2006), pp. 191–211.
  • B. Salimbahrami and B. Lohmann, Order reduction of large scale second-order systems using Krylov subspace methods. Linear Algebra Appl. 415 (2–3) (2006), pp. 385–405.
  • S. Gugercin, D.C. Sorensen, and A.C. Antoulas, A modified low-rank Smith method for large-scale Lyapunov equations. Numer. Algorithms 32 (1) (2003), pp. 27–55.
  • P. Heydari and M. Pedram, Model-order reduction using variational balanced truncation with spectral shaping, IEEE Trans Circuits Syst I: Regular Papers 53 (2006), pp. 879–891.
  • J.R. Li and J. White, Efficient model reduction of interconnect via approximate system Gramians, in Proceedings of the 1999 IEEE/ACM International Conference on Computer-Aided Design, Digest of Technical Papers, San Jose, CA, 7–11 November, pp. 380–384.
  • C.W. Rowley, Model reduction for fluids, using balanced proper orthogonal decomposition. Internat. J. Bifur. Chaos 15 (3) (2005), pp. 997–1013.
  • D.C. Sorensen and A.C. Antoulas, The Sylvester equation and approximate balanced reduction. Linear Algebra Appl. 351 (2002), pp. 671–700
  • C.A. Beattie and S. Gugercin, Krylov-based minimization for optimal H2 model reduction, in 46th IEEE Conference on Decision and Control, New Orleans, LA, 12–14 December 2007, pp. 4385–4390.
  • J.M. Wang, C.C. Chu, Q.J. Yu, and E.S. Kuh, On projection-based algorithms for model-order reduction of interconnects. IEEE Trans Circuits Syst. I: Fundam. Theory Appl. 49 (11) (2002), pp. 1563–1585.
  • J.M. Wang, Q.J. Yu, and E.S. Kuh, Passive model order reduction algorithm based on Chebyshev expansion of impulse response of interconnect networks, in Proceedings of the 37th Annual Design Automation Conference, Los Angeles, CA, June 2000, pp. 520–525.
  • Y. Chen, V. Balakrishnan, C. Koh, and K. Roy, Model reduction in the time-domain using Laguerre polynomials and Krylov methods, in Proceeding of the 2002 Design, Automation and Test in Europe Conference and Exposition, Paris, 4–8 March 2002, pp. 931–935.
  • L. Knockaert and D. De Zutter, Laguerre-SVD reduced-order modeling. IEEE Trans. Microw. Theory Tech. 48 (9) (2000), pp. 1469–1475.
  • L. Knockaert and D. De Zutter, Stable Laguerre-SVD reduced-order modeling. IEEE Trans. Circuits Syst. I: Fundam. Theory Appl. 50 (4) (2003), 576–579.
  • Y.L. Jiang and H.B. Chen, Time domain model order reduction of general orthogonal polynomials for linear input-output systems. IEEE Trans. Automat. Control 57 (2) (2012), pp. 330–343.
  • Y.L. Jiang and H.B. Chen, Application of general orthogonal polynomials to fast simulation of nonlinear descriptor systems through piecewise-linear approximation. IEEE Trans. Comput. Aided Design Integrated Circuits Syst. 31 (5) (2012), pp. 804–808.
  • S.C. Tsay and T.T. Lee, Analysis and optimal control of linear time-varying systems via general orthogonal polynomials. Internat. J. Systems Sci. 18 (8) (1987), pp. 1579–1594.
  • G. Szegö, Orthogonal Polynomials, American Mathematical Society, New York, 1939.
  • Y. Saad, Iterative Methods for Sparse Linear Systems, SIAM Publications, Philadelphia, PA, 2003.
  • Y. Saad and M.H. Schultz, GMRES: A generalized minimal residual algorithm for solving nonsymmetric linear systems. SIAM J. Sci. Statist. Comput. 7 (3) (1986), pp. 856–869.
  • H.A. van der Vorst, Bi-CGSTAB: A fast and smoothly converging variant of Bi-CG for the solution of nonsymmetric linear systems. SIAM J. Sci. Statist. Comput. 13 (2) (1992), pp. 631–644.
  • R.W. Freund, Krylov-subspace methods for reduced-order modeling in circuit simulation. J. Comput. Appl. Math. 123 (1–2) (2000), pp. 395–421.
  • S. Gugercin and A.C. Antoulas, A survey of model reduction by balanced truncation and some new results. Internat. J. Control. 77 (8) (2004), pp. 748–766.
  • T. Penzl, A cyclic low-rank Smith method for large sparse Lyapunov equations. SIAM J. Sci. Comput. 21 (4) (2000), pp. 1401–1418.
