Mathematical and Computer Modelling of Dynamical Systems
Methods, Tools and Applications in Engineering and Related Sciences
Volume 20, 2014 - Issue 4

Structure-preserving approximation of distributed-parameter second-order systems using Krylov subspaces

Pages 395-413 | Received 05 Jun 2013, Accepted 07 Aug 2013, Published online: 06 Sep 2013

Abstract

In this article, the approximation of linear second-order distributed-parameter systems (DPS) is considered using a Galerkin approach. The resulting finite-dimensional approximation model also has a second-order structure and preserves the stability as well as the passivity. Furthermore, by extending the Krylov subspace methods for finite-dimensional systems of second order to DPS, the basis vectors of the Galerkin projection are determined such that the transfer behaviour of the DPS can be approximated by using moment matching. The structure-preserving approximation of an Euler–Bernoulli beam with Kelvin–Voigt damping demonstrates the results of the article.

1. Introduction

The modelling of mechanical structures with distributed parameters leads to second-order partial differential equations (PDE) with respect to time, i.e. to distributed-parameter systems (DPS) of second order. Their simulation in general, and in many cases also their control, is based on a suitable finite-dimensional approximation. Thereby, the second-order structure of the original system should be preserved such that the reduced system has an interpretation as a finite-dimensional mechanical system. A common approach to this approximation problem is to model the flexible system using FEM and then to apply an order reduction method to the resulting finite-dimensional system of very high order. For this purpose, structure-preserving balancing and truncation methods as well as a modal reduction were proposed in [Citation1–4]. Another important method for model order reduction is the application of Krylov subspaces, which leads to numerically efficient approximation procedures (see e.g. [Citation5–7]). Therefore, extensions of Krylov subspace methods for the structure-preserving order reduction of finite-dimensional second-order systems were developed in [Citation8–12]. However, if a description of the mechanical structure in the form of a PDE is available, it is reasonable to determine an approximation directly on the basis of the PDE without recourse to an FE-modelling. This approach is also possible for complex geometries arising in practical applications, if the corresponding boundary conditions admit a formulation of the mechanical structure as a well-posed initial-boundary-value problem. This approach is considered in [Citation13] for the balanced truncation of PDE models. Thereby, it is demonstrated that the direct application of a model order reduction method to the PDE offers the advantage that adaptive meshes can be used in the discretization of the boundary value problems arising in the approximation procedure. As a consequence, the resulting computational effort can be decreased. Furthermore, in order to conserve system properties of the DPS, such as stability or passivity, in the finite-dimensional approximation model, only the approximation of the PDE has to be structure-preserving. For the classical FE modelling approach, however, this requires that both the FE modelling and the order reduction method share this property.

In this article, a structure-preserving approximation of second-order DPS with distributed inputs and outputs is presented that works directly on the DPS. The proposed approximation is derived by extending the recently introduced Krylov subspace methods in [Citation14] for DPS in state space form to the approximation of second-order DPS. Thereby, the Krylov subspace methods of [Citation8,Citation15] for finite-dimensional second-order systems are generalized. Different from [Citation14], where nonorthogonal (i.e. Petrov–Galerkin) projections are used, only orthogonal (i.e. Galerkin) projections are applied for the approximation (see e.g. [Citation16]). This has the advantage that the stability and passivity of the considered DPS are preserved in the approximation. Thus, a stable system simulation is possible and passivity-based controller design methods are applicable to the reduced-order system. The degrees of freedom contained in the choice of the basis vectors of the Galerkin projection are used to match certain moments of the DPS and its approximation. This means that some of the coefficients of the Taylor series expansion of the transfer matrices of the DPS and its approximation coincide for given expansion points. Consequently, an approximation of the transfer behaviour of the DPS in prescribed frequency ranges is possible. This is assured if the basis vectors span certain Krylov subspaces of second order. Simplifications of the presented approximation procedure are discussed that arise for systems with collocated inputs and outputs as well as for proportionally and structurally damped systems.

The next section introduces the considered class of DPS and analyses their properties. Then, the Galerkin approach is briefly reviewed in Section 3 and its structure-preservation properties are verified in Section 4. The calculation of the basis vectors assuring moment matching is presented in Section 5. Thereby, possible simplifications of the approximation procedure are investigated. The proposed structure-preserving approximation procedure is illustrated in Section 6 by means of an Euler–Bernoulli beam with Kelvin–Voigt damping.

2. A class of second-order systems

Consider the linear DPS

(1) ẅ(t) + Dẇ(t) + Kw(t) = Gu(t), t > 0
(2) y(t) = H_0 w(t) + H_1 ẇ(t), t ≥ 0
of second order with the input u(t) ∈ ℝ^p and the output y(t) ∈ ℝ^m. This model can be used to describe mechanical structures with the displacement w defined on the closure of the spatial domain Ω in the n-dimensional Euclidean space ℝ^n, n ∈ {1, 2, 3}. The initial conditions of the system are w(0) = w_0 and ẇ(0) = w_{0t}. The stiffness of flexible structures, e.g. of beams or flexible plates, or the restoring forces in non-flexible systems, e.g. heavy ropes, are characterized by the densely defined operator K: D(K) ⊂ H → H on the complex Hilbert space H with the inner product ⟨·,·⟩_H. It is assumed that this operator is self-adjoint and coercive, i.e. there exists an ε > 0 such that
(3) ⟨Kh, h⟩_H ≥ ε‖h‖²_H, h ∈ D(K).

In Equation (1), the operator D: D(D) ⊂ H → H is the damping operator. The following damping operators are studied in this article:

  1. proportional damping D = αI + βK, α ≥ 0, β ≥ 0, which includes viscous damping for α > 0 and β = 0 as well as Kelvin–Voigt damping for α = 0 and β > 0, and

  2. structural damping D = γK^{1/2}, γ > 0, where the self-adjoint square root K^{1/2}: D(K^{1/2}) ⊂ H → H exists uniquely and is positive, since K is a positive operator (see e.g. [Citation17]).

For D ≠ 0, these damping operators are self-adjoint, densely defined and positive, i.e.

(4) ⟨Dh, h⟩_H > 0, h ∈ D(D), h ≠ 0,
holds. The input operator G: ℂ^p → H and the output operators H_0: H → ℂ^m as well as H_1: H → ℂ^m are bounded linear operators, meaning that distributed actuation and measurements as well as pointwise deflection measurements are considered.

A state space representation of the second-order system (Equations (1)–(2)) can be obtained by introducing the states

(5) x_1(t) = w(t) and x_2(t) = ẇ(t).
When defining the energy inner product
(6) ⟨[x_1; x_2], [y_1; y_2]⟩ = ⟨K^{1/2}x_1, K^{1/2}y_1⟩_H + ⟨x_2, y_2⟩_H
for x_1, y_1 ∈ D(K^{1/2}) and x_2, y_2 ∈ H, a suitable state space for Equation (5) is X = D(K^{1/2}) × H. This yields the state space model
(7) ẋ(t) = Ax(t) + Bu(t), t > 0, x(0) = x_0 ∈ X
(8) y(t) = Cx(t), t ≥ 0
of Equations (1)–(2) with the state x(t) = [x_1(t) x_2(t)]^T and the system operator
(9) A = [0 I; −K −D]
(10) D(A) = {[h_1; h_2] ∈ D(K^{1/2}) × D(K^{1/2}) | K(h_1 + K^{-1}Dh_2) ∈ H} ⊂ X,
as well as the input and output operators
(11) B = [0; G] and C = [H_0  H_1].

The next lemma shows that the abstract initial value problem (Equations (7)–(8)) is well-posed, which is equivalent to A generating a C0-semigroup because B and C are bounded linear operators (see [Citation17]).

Lemma 2.1:

(Well-posedness) The system operator A in Equation (9) is the infinitesimal generator of a C0-semigroup of contractions on X.

Proof:

Since K is coercive and thus boundedly invertible, there exists the algebraic inverse

(12) A^{-1} = [−K^{-1}D  −K^{-1}; I  0]
that is a bounded linear operator on X for all considered damping operators D. As a consequence, A is a closed linear operator (see e.g. [Citation17, Theorem A.3.46]). Furthermore, A is densely defined since the domains of the damping operator and the operator K are dense in H. By using (4), the self-adjointness of K^{1/2} and h = [h_1; h_2], one has Re⟨Ah, h⟩ = Re(⟨K^{1/2}h_2, K^{1/2}h_1⟩_H + ⟨−Kh_1 − Dh_2, h_2⟩_H) = −⟨Dh_2, h_2⟩_H ≤ 0, h ∈ D(A), so that A is a dissipative operator (see e.g. [Citation18]). Similarly, the result Re⟨A^*h, h⟩ ≤ 0, h ∈ D(A^*), can readily be shown for the adjoint operator
(13) A^* = [0  −I; K  −D]
(14) D(A^*) = {[h_1; h_2] ∈ D(K^{1/2}) × D(K^{1/2}) | K(h_1 − K^{-1}Dh_2) ∈ H} ⊂ X
meaning that A^* is also dissipative. Hence, by Corollary 2.2.3 in [Citation17], the system operator A is the infinitesimal generator of a C_0-semigroup of contractions on X.□

Since the C_0-semigroup T_A(t) generated by A is a contraction semigroup, it satisfies ‖T_A(t)‖ ≤ 1, t ≥ 0. Thus, the solution of the homogeneous system Equation (7) has the property

(15) ‖x(t)‖ ≤ ‖T_A(t)‖‖x(0)‖ ≤ ‖x(0)‖, t ≥ 0, x(0) ∈ X
meaning that the homogeneous system Equation (7) is stable in the sense of Lyapunov. In [Citation19,Citation20] it is shown that the damped homogeneous system Equation (7) is also exponentially stable for Kelvin–Voigt damping, i.e. α = 0, β > 0, and for structural damping, i.e. γ > 0.

Another important property of Equations (7)–(8) can be shown by assuming that the output and input are collocated, i.e. H_0 = 0 and H_1 = G^*. The storage functional of the second-order system is given by

(16) V(x) = ½⟨x, x⟩
that characterizes the stored energy since it is proportional to the sum of the kinetic and potential energy in the case of a mechanical system. The time derivative of Equation (16) reads
(17) V̇(x) = ½⟨ẋ, x⟩ + ½⟨x, ẋ⟩ = Re⟨ẋ, x⟩.

Thus, inserting Equation (7) yields the time derivative

(18) V̇(x) = Re⟨Ax, x⟩ + Re⟨Bu, x⟩ = Re⟨Ax, x⟩ + Re⟨u, B^*x⟩_{ℂ^p}
along the solutions x of Equation (7). Since A is a dissipative operator (see proof of Lemma 2.1) and B^* = C holds for collocated inputs and outputs, the differential passivity inequality
(19) V̇(x) ≤ u^T y
results. This shows that the energy increase in the system Equations (7)–(8) is lower than or equal to the supplied power u^T y. Thus, in this sense, the system Equations (7)–(8) with collocated inputs and outputs is passive.

In the next sections, it is shown that these structural properties of the DPS of second order are preserved when applying a Galerkin approximation.

3. Structure-preserving Galerkin approach

In the following, an approximation ŵ of w is determined in the form

(20) ŵ(t) = Vw_r(t)
with the bounded linear operator V: ℂ^f → H, f ∈ ℕ, defined by
(21) Vh = Σ_{i=1}^f v_i h_i, v_i ∈ D(K), h ∈ ℂ^f,
since for the considered damping operators, D(D) ∩ D(K) = D(K) holds. Furthermore, the time derivative of w is approximated by
(22) ŵ̇(t) = Vẇ_r(t),
i.e. with the same basis vectors v_i contained in V as in Equation (20). This assures that the approximation of the velocity ẇ coincides with the time derivative of the approximated displacement ŵ, meaning that the physical relationship between displacement and velocity is preserved. Substituting Equations (20) and (22) in Equation (1) yields the residual
(23) r(t) = Vẅ_r(t) + DVẇ_r(t) + KVw_r(t) − Gu(t),
which, in general, does not vanish due to the approximations (20) and (22). However, it can be made small for given V by a suitable choice of w_r. To this end, the residual r is expanded in terms of the basis vectors v_i by using the orthogonal projection P: H → ran V of H onto ran V defined by
(24) P = VV^*
with
(25) V^*h = [⟨h, v_1⟩_H … ⟨h, v_f⟩_H]^T, h ∈ H.
Thereby,
(26) V^*V = I
is assumed, which means that the basis vectors v_i are orthonormal. Consequently, the approximation
(27) r̂(t) = Pr(t) = PVẅ_r(t) + PDVẇ_r(t) + PKVw_r(t) − PGu(t)
of r has the property that ‖r − r̂‖_H is minimal compared to all other r̂ ∈ ran V. Therefore, a small residual r can be obtained by choosing w_r such that r̂ ≡ 0. This leads to
(28) VV^*Vẅ_r(t) + VV^*DVẇ_r(t) + VV^*KVw_r(t) = VV^*Gu(t)
in view of Equation (24). Thus, equating coefficients with respect to the basis vectors v_i yields the finite-dimensional second-order system
(29) ẅ_r(t) + D_rẇ_r(t) + K_rw_r(t) = G_ru(t)
with
(30) D_r = V^*DV ∈ ℂ^{f×f}, K_r = V^*KV ∈ ℂ^{f×f} and G_r = V^*G ∈ ℂ^{f×p}
in view of Equation (26). This shows that the Galerkin approach applied to Equation (1) preserves the second-order structure. By substituting Equations (20) and (22) in Equation (2), the approximation of the output y reads
(31) ŷ(t) = H_{r0}w_r(t) + H_{r1}ẇ_r(t)
in which
(32) H_{r0} = H_0V and H_{r1} = H_1V.
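To make the projection formulas (30) and (32) concrete, the following sketch shows the corresponding matrix computations under the assumption that the operators D, K, G, H_0 and H_1 have already been discretized (e.g. sampled on a fine grid). The article itself works with the infinite-dimensional operators, so this is only an illustration of the algebra, not the proposed procedure; all names are chosen for this sketch.

```python
import numpy as np

def galerkin_reduce(V, D, K, G, H0, H1):
    """Galerkin projection (30), (32): V is an (N, f) matrix with orthonormal columns."""
    Dr  = V.conj().T @ D @ V          # D_r = V* D V
    Kr  = V.conj().T @ K @ V          # K_r = V* K V
    Gr  = V.conj().T @ G              # G_r = V* G
    Hr0 = H0 @ V                      # H_r0 = H_0 V
    Hr1 = H1 @ V                      # H_r1 = H_1 V
    return Dr, Kr, Gr, Hr0, Hr1

# Small self-contained check with a random symmetric positive definite stiffness matrix.
rng = np.random.default_rng(0)
N, f, p, m = 200, 3, 1, 1
K = rng.standard_normal((N, N)); K = K.T @ K + N * np.eye(N)
D = 1e-3 * K                                      # proportional (Kelvin-Voigt-like) damping
G = rng.standard_normal((N, p))
H0 = rng.standard_normal((m, N)); H1 = np.zeros((m, N))
V, _ = np.linalg.qr(rng.standard_normal((N, f))）if False else np.linalg.qr(rng.standard_normal((N, f)))  # orthonormal basis, V*V = I
Dr, Kr, Gr, Hr0, Hr1 = galerkin_reduce(V, D, K, G, H0, H1)
print(np.allclose(Kr, Kr.conj().T), np.all(np.linalg.eigvalsh(Kr) > 0))   # Hermitian and positive definite, cf. Lemma 3.1
```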

The next lemma states that the matrices of the Galerkin approximation Equations (29)–(32) have properties similar to the infinite-dimensional second-order system.

Lemma 3.1:

(Properties of the Galerkin approximation) The damping matrix D_r in Equation (30) and the matrix K_r in Equation (30) are Hermitian and positive definite. For collocated inputs and outputs, i.e. H_0 = 0 and H_1 = G^*, the relation H_{r1} = G_r^* holds.

Proof:

As D is self-adjoint, one has D_r^* = (V^*DV)^* = V^*DV = D_r, showing that the matrix D_r is Hermitian. Furthermore, ⟨D_rh, h⟩_{ℂ^f} = ⟨V^*DVh, h⟩_{ℂ^f} = ⟨DVh, Vh⟩_H > 0 for all nonzero h ∈ ℂ^f because of (4), which shows that D_r > 0. Similarly, with K self-adjoint, the result K_r^* = (V^*KV)^* = V^*KV = K_r is obtained, and ⟨K_rh, h⟩_{ℂ^f} = ⟨V^*KVh, h⟩_{ℂ^f} = ⟨KVh, Vh⟩_H > 0 for all nonzero h ∈ ℂ^f, implied by Equation (3), proves K_r > 0. Finally, H_{r1} = H_1V = G^*V = G_r^* completes the proof.

4. Stability and passivity preservation

Further properties of the Galerkin approximation Equations (29)–(32) become apparent when investigating its state space model. To this end, introduce the states

(33) x_{1r}(t) = w_r(t) and x_{2r}(t) = ẇ_r(t)
and define the energy inner product
(34) ⟨[x_1; x_2], [y_1; y_2]⟩ = ⟨K_r^{1/2}x_1, K_r^{1/2}y_1⟩_{ℂ^f} + ⟨x_2, y_2⟩_{ℂ^f},
where x_1, y_1, x_2, y_2 ∈ ℂ^f, on the state space X_r = ℂ^{2f}. Then, Equations (29) and (31) have the finite-dimensional state space model
(35) ẋ_r(t) = A_rx_r(t) + B_ru(t), t > 0, x_r(0) = x_{r0} ∈ X_r
(36) ŷ(t) = C_rx_r(t), t ≥ 0
with the state x_r(t) = [x_{1r}^T(t) x_{2r}^T(t)]^T and the matrices
(37) A_r = [0 I; −K_r −D_r], B_r = [0; G_r] and C_r = [H_{r0}  H_{r1}].
The next theorem clarifies the preservation of stability and passivity by the Galerkin approximation.

Theorem 4.1:

(Stability and passivity preservation) The homogeneous Galerkin approximation Equation (35) of Equation (7) is asymptotically stable. If Equations (7)–(8) have collocated inputs and outputs, i.e. C = B^*, then the finite-dimensional approximation Equations (35)–(36) is passive. With V(x_r) = ½⟨x_r, x_r⟩ this means that

(38) V̇(x_r) ≤ u^T ŷ.

Proof:

Observe that the result Re⟨A_rh, h⟩ = −⟨D_rh_2, h_2⟩_{ℂ^f} ≤ 0 holds for all h ∈ ℂ^{2f} with h = [h_1^T h_2^T]^T, because of D_r > 0 (see Lemma 3.1) and K_r^{1/2} = (K_r^{1/2})^*. Thus, by choosing the Lyapunov function V(x_r) = ½⟨x_r, x_r⟩, one gets the time derivative V̇(x_r) = Re⟨A_rx_r, x_r⟩ + Re⟨B_ru, x_r⟩ ≤ Re⟨B_ru, x_r⟩ = Re⟨u, B_r^*x_r⟩ = Re⟨u, C_rx_r⟩ = u^Tŷ, since H_{r0} = 0 and H_{r1} = G_r^* imply C_r = B_r^* (see Lemma 3.1). Consequently, the finite-dimensional approximation Equations (35)–(36) is passive. Since D_r in Equation (30) is a positive definite matrix (see proof of Lemma 3.1), the Kelvin–Tait–Chetaev theorem implies that the homogeneous Galerkin approximation Equation (35) is asymptotically stable (see [Citation21]).□

These structure-preserving results are independent of the choice of the basis vectors v_i, which only have to be orthonormal. In the next section, this degree of freedom is used to additionally approximate the transfer behaviour of the DPS of second order by using moment matching.

5. Moment matching

5.1. Second-order Krylov subspaces

The degrees of freedom contained in choosing the basis vectors vi can be used to assure that the transfer matrix of Equations (1)–(2) coincides with the transfer matrix of its approximation Equations (29)–(32) at certain points. For finite-dimensional systems, this moment matching is achieved by using Krylov subspaces (see e.g. [Citation5–7]). In [Citation14], this approximation method was also formulated for infinite-dimensional systems, which, however, does not preserve the structure of second-order systems. Therefore, this result is extended in the following to obtain a structure-preserving moment matching. Since Equations (7)–(8) is a state linear system (see [Citation17]),

(39) F(s) = C(sI − A)^{-1}B, s ∈ ρ(A)
coincides with the transfer matrix of Equations (7)–(8) or equivalently of Equations (1)–(2), if s ∈ ρ_∞(A) ⊂ ρ(A), where ρ_∞(A) is the component of the resolvent set ρ(A) that contains an interval [α, ∞), α ∈ ℝ (see [Citation17, Lemma 4.3.6]). The resolvent operator (sI − A)^{-1} in Equation (39) has the expansion
(40) (sI − A)^{-1} = −Σ_{i=0}^∞ (A − s_0I)^{-(i+1)}(s − s_0)^i
about s_0 ∈ ρ(A), which is convergent for |s − s_0| < ‖(A − s_0I)^{-1}‖^{-1} (see [Citation22], Sections I.5.2 and III.6.1). Consequently, F(s) has the representation
(41) F(s) = Σ_{i=0}^∞ M_i^{s_0}(s − s_0)^i
in which the coefficient matrices
(42) M_i^{s_0} = −C(A − s_0I)^{-(i+1)}B, i ∈ ℕ_0
coincide with the moments of F(s) at s_0 (see e.g. [Citation5]). Since Equation (41) is a Taylor series, its coefficient matrices can be computed from
(43) M_i^{s_0} = (1/i!) d^iF/ds^i(s_0), i ∈ ℕ_0,
which shows that if the moments of the original system Equations (7)–(8) and the moments M̂_i^{s_0} of its structure-preserving Galerkin approximation Equations (29)–(32) match, i.e.
(44) M̂_i^{s_0} = −C_r(A_r − s_0I)^{-(i+1)}B_r = M_i^{s_0},
then the corresponding transfer matrices nearly coincide in a neighbourhood of s = s_0, where s_0 ∈ ρ(A_r) is assumed. In particular, if s_0 = 0, then the steady-state transfer behaviour of the system Equations (7)–(8) coincides with that of its Galerkin approximation. If the so-called second-order Krylov subspaces are a subset of the space ran V defining the Galerkin projection, then this moment matching is assured. The next definition introduces the second-order Krylov subspaces for DPS of second order.
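In other words, writing F̂ for the transfer matrix of the Galerkin approximation (a notation introduced here only for brevity), Equation (44) for i = 0, 1, …, q−1 means by Equation (41) that

```latex
F(s) - \hat{F}(s) \;=\; \sum_{i \ge q} \bigl(M_i^{s_0} - \hat{M}_i^{s_0}\bigr)(s - s_0)^i \;=\; \mathcal{O}\bigl((s - s_0)^{q}\bigr) \quad \text{as } s \to s_0 ,
```

so the reduced model reproduces the transfer behaviour the better, the closer s lies to the expansion point and the more moments are matched.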

Definition 5.1:

(Second-order Krylov subspaces) The second-order Krylov subspace is defined by

(45) K_q(A_1, A_2, G_1) = Σ_{i=0}^{q−1} ran P_i, q ∈ ℕ,

where

(46) P_0 = G_1, P_1 = A_1P_0, P_i = A_1P_{i−1} + A_2P_{i−2}, i = 2, 3, …, q−1
and A_1: H → H, A_2: H → H and G_1: ℂ^p → H are bounded linear operators.

Thereby, the sum in Equation (45) denotes the internal sum of vector spaces. Furthermore, note that, since D(P_i) = ℂ^p, i = 0, 1, …, q−1, the operators P_i have finite rank, and thus the defined second-order Krylov subspace is finite-dimensional. By using this definition, the next theorem presents the condition for the Galerkin approximation to match certain moments about s = 0 of the original system, i.e. to achieve a vanishing steady-state approximation error.

Theorem 5.2:

(Moment matching) If the input second-order Krylov subspace K_q(−K^{-1}D, −K^{-1}, K^{-1}G) satisfies

(47) K_q(−K^{-1}D, −K^{-1}, K^{-1}G) ⊂ ran V,
then the Galerkin approximation Equations (29)–(32) of Equations (1)–(2) has the property

(48) M̂_i^0 = M_i^0, i = 0, 1, …, q−1.

For the proof see Appendix A. Note that M_i^0 exists since 0 ∈ ρ(A) holds because A in Equation (9) is boundedly invertible (see proof of Lemma 2.1). Similarly, M̂_i^0 exists in view of 0 ∈ ρ(A_r), because K_r is positive definite (see Lemma 3.1). As a consequence of this theorem, the order of the corresponding Galerkin approximation Equations (35)–(36) has to be greater than or equal to the dimension of the corresponding Krylov subspace.
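As a simple illustration of the case i = 0 in Equation (48): setting ẇ = ẅ = 0 in Equations (1)–(2) and ẇ_r = ẅ_r = 0 in Equations (29)–(31) gives the static gains of the two models, so matching the zeroth moment at s_0 = 0 forces the reduced model to reproduce the steady-state response exactly:

```latex
M_0^{0} = F(0) = H_0 K^{-1} G
\qquad\text{and}\qquad
\hat{M}_0^{0} = H_{r0} K_r^{-1} G_r ,
\qquad\text{so (48) with } q \ge 1 \text{ yields } H_{r0} K_r^{-1} G_r = H_0 K^{-1} G .
```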

5.2. Rational interpolation

In order to approximate the frequency response of the DPS Equations (1)–(2) in a prescribed frequency range, moment matching at different points unequal to zero is necessary. This approach is called rational interpolation (see [Citation5,Citation7]) and is subsequently formulated for DPS of second order. To this end, consider

(49) F(s + s_0) = C((s + s_0)I − A)^{-1}B = (H_0 + (s + s_0)H_1)((s + s_0)²I + (s + s_0)D + K)^{-1}G = (H_0 + (s + s_0)H_1)(s²I + s(D + 2s_0I) + K + s_0D + s_0²I)^{-1}G,
for s + s_0 ∈ ρ(A), which directly results from a Laplace transform of Equations (1)–(2). The moments of F(s + s_0) about s = 0 are equal to the moments of F(s) at s = s_0. Consequently, comparing Equation (49) with F(s) = (H_0 + sH_1)(s²I + sD + K)^{-1}G shows that substituting
(50) D̃(s_0) = D + 2s_0I and K̃(s_0) = K + s_0D + s_0²I
in the input second-order Krylov subspace and using Theorem 5.2 yields the condition for moment matching at s_0. The next corollary generalizes this result to moment matching at different points s_1, …, s_k and is a direct consequence of Theorem 5.2. Thereby, this theorem is applied to the different points s_i by making use of the operators D̃(s_i) and K̃(s_i).

Corollary 5.3:

(Rational interpolation) Suppose s_i ∈ ρ(A) and s_i ∈ ρ(A_r), i = 1, 2, …, k. If

(51) Σ_{i=1}^k K_{q_i}(−K̃^{-1}(s_i)D̃(s_i), −K̃^{-1}(s_i), K̃^{-1}(s_i)G) ⊂ ran V,
then the Galerkin approximation Equations (29)–(32) of Equations (1)–(2) has the property
(52) M̂_i^{s_j} = M_i^{s_j}, i = 0, 1, …, q_j−1, j = 1, 2, …, k.

5.3. Numerical implementation

In order to implement the proposed approximation method, one needs to calculate the Krylov subspaces K_q(A_1, A_2, G_1). Since the basis vectors v_i are assumed to be orthonormal (see Equation (26)), an orthonormal basis for the second-order Krylov subspaces has to be determined. This also leads to a numerically reliable implementation of the approximation procedure. To this end, the vectors ṽ_i spanning the second-order Krylov subspace have to be determined by solving boundary value problems. For example, in the case of K_q(−K^{-1}D, −K^{-1}, K^{-1}G), Equation (46) leads to the boundary value problems

(53) Kṽ_i = Ge_i and Kṽ_{i+p} = −DP_0e_i, ṽ_i, ṽ_{i+p} ∈ D(K), i = 1, 2, …, p
in which e_i ∈ ℝ^p is the ith unit vector, and
(54) Kv̄_j = −DP_{i−1}e_j, v̄_j ∈ D(K), j = 1, 2, …, p
as well as
(55) Kl̄_j = −P_{i−2}e_j, l̄_j ∈ D(K), j = 1, 2, …, p
to determine ṽ_k = v̄_j + l̄_j, k > 2p. In simple cases, the boundary value problems Equations (53)–(55) are solvable analytically, but in general a solution has to be determined numerically. Then, a Gram–Schmidt orthogonalization with deflation (i.e. linearly dependent vectors are eliminated) is applied to determine an orthonormal basis for the considered Krylov subspace, resulting in the basis vectors v_i (see also [Citation8,Citation15]). An implementation of such an algorithm can be directly based on the algorithm for calculating classical Krylov subspaces for DPS in [Citation14]. If several interpolation points are considered in the approximation, then all vectors of the corresponding Krylov subspaces have to be taken into account for the orthogonalization.
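A minimal numerical sketch of this procedure for the expansion point s_0 = 0 is given below. It assumes that the operators K and D have already been discretized as matrices, so the boundary value problems (53)–(55) turn into linear solves; the Gram–Schmidt loop with deflation follows the description above. All names are illustrative and not part of the article.

```python
import numpy as np

def second_order_krylov_basis(K, D, G, q, tol=1e-10):
    """Orthonormal basis of K_q(-K^{-1}D, -K^{-1}, K^{-1}G) for matrix-valued K, D, G."""
    solve_K = np.linalg.solve                     # stands in for the BVP solves (53)-(55)
    P_prev_prev = None
    P_prev = solve_K(K, G)                        # P_0 = K^{-1} G
    blocks = [P_prev]
    for i in range(1, q):
        Pi = -solve_K(K, D @ P_prev)              # -K^{-1} D P_{i-1}
        if P_prev_prev is not None:
            Pi -= solve_K(K, P_prev_prev)         # ... - K^{-1} P_{i-2}
        blocks.append(Pi)
        P_prev_prev, P_prev = P_prev, Pi
    # Gram-Schmidt with deflation: drop directions that are (numerically) linearly dependent.
    basis = []
    for col in np.hstack(blocks).T:
        v = col.copy()
        for b in basis:
            v -= (b.conj() @ v) * b
        norm = np.linalg.norm(v)
        if norm > tol:                            # deflation of dependent vectors
            basis.append(v / norm)
    return np.column_stack(basis)                 # columns are orthonormal, V*V = I
```

For expansion points s_0 ≠ 0, the same routine can be called with the shifted operators D̃(s_0) and K̃(s_0) from Equation (50) in place of D and K.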

5.4. Systems with collocated inputs and outputs

If the system Equations (1)–(2) has collocated inputs and outputs, i.e. H_0 = 0 and H_1 = G^* hold, then 2q moments match instead of the q moments assured by Theorem 5.2. The next theorem presents this result.

Theorem 5.4:

(Collocated inputs and outputs) If H_0 = 0 and H_1 = G^* hold and the input second-order Krylov subspace satisfies

(56) K_q(−K^{-1}D, −K^{-1}, K^{-1}G) ⊂ ran V,
then the Galerkin approximation Equations (29)–(32) of Equations (1)–(2) has the property

(57) M̂_i^0 = M_i^0, i = 0, 1, …, 2q−1.

For the proof see Appendix B. Unfortunately, this property is lost when interpolation points s_0 ≠ 0 are considered.

5.5. Proportionally damped systems

For systems with proportional damping, only classical Krylov subspaces have to be determined in the approximation procedure. These subspaces are characterized by the next definition and were introduced in [Citation14].

Definition 5.5:

(Krylov subspaces) The (classical) Krylov subspace is defined by

(58) K_q^class(A, G) = Σ_{i=0}^{q−1} ran P̃_i, q ∈ ℕ,

where

(59) P̃_0 = G, P̃_i = AP̃_{i−1}, i = 1, 2, …, q−1

and A: H → H and G: ℂ^p → H are bounded linear operators.

The next theorem presents the relation between the second-order and the classical Krylov subspaces for the special case of Kelvin–Voigt damping and undamped systems, i.e. D = βK, β ≥ 0. Then, the order of the approximation can also be reduced, since the classical Krylov subspace needed for the matching of q moments has, in general, a lower dimension.

Theorem 5.6:

(Kelvin–Voigt damping) If D = βK, β ≥ 0, then

(60) K_q(−K^{-1}D, −K^{-1}, K^{-1}G) ⊂ K_{q/2}^class(−K^{-1}, K^{-1}G) for q even and K_q(−K^{-1}D, −K^{-1}, K^{-1}G) ⊂ K_{(q+1)/2}^class(−K^{-1}, K^{-1}G) for q odd.

The proof of this theorem is given in Appendix C. The classical Krylov subspaces only require solving one boundary value problem for each basis vector, whereas for second-order Krylov subspaces two boundary value problems have to be solved (see Equations (54)–(55)). Thus, the approximation procedure is simplified when using classical Krylov subspaces. The same is also true for the general case of proportionally damped systems for expansion points s_0 ∈ ρ(A) ∩ ρ(A_r) with s_0β ≠ −1. To this end, observe that the operators of the shifted second-order system have the form of Equation (50), so that one can represent the damping operator D̃(s_0) according to

(61) D̃(s_0) = α̃I + β̃K̃(s_0)

with

(62) α̃ = α + 2s_0 − β(s_0α + s_0²)/(1 + s_0β) and β̃ = β/(1 + s_0β), s_0β ≠ −1,

which results from a simple calculation:
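by Equation (50) and D = αI + βK one has

```latex
\tilde K(s_0) = K + s_0(\alpha I + \beta K) + s_0^2 I = (1 + s_0\beta)\,K + (s_0\alpha + s_0^2)\,I ,
\qquad\text{so}\qquad
K = \frac{\tilde K(s_0) - (s_0\alpha + s_0^2)I}{1 + s_0\beta} \quad (s_0\beta \neq -1),

\tilde D(s_0) = D + 2 s_0 I = (\alpha + 2 s_0)I + \beta K
 = \underbrace{\Bigl(\alpha + 2 s_0 - \frac{\beta(s_0\alpha + s_0^2)}{1 + s_0\beta}\Bigr)}_{\tilde\alpha}\, I
 + \underbrace{\frac{\beta}{1 + s_0\beta}}_{\tilde\beta}\,\tilde K(s_0) .
```

This leads to the following theorem.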

Theorem 5.7:

(Proportional damping) If s_0β ≠ −1, then

(63) K_q(−K̃^{-1}(s_0)D̃(s_0), −K̃^{-1}(s_0), K̃^{-1}(s_0)G) ⊂ K_q^class(−K̃^{-1}(s_0), K̃^{-1}(s_0)G).

For the proof see Appendix D. For undamped systems and s_0 ≠ 0, one has D̃(s_0) = 2s_0I in view of Equation (50), i.e. proportional damping results. Thus, one can use the result of Theorem 5.7 to simplify the calculation of the corresponding Krylov subspace.

5.6. Structurally damped systems

Theorem 5.7 implies that for proportionally damped systems, only classical Krylov subspaces are needed to determine the moment-matching approximation. The same is also true for systems with structural damping, i.e. if D = γK^{1/2}, γ > 0, holds in Equation (1), provided that s_0 = 0. This is the result of the next theorem.

Theorem 5.8:

(Structural damping) If D = γK^{1/2}, γ > 0, then

(64) K_q(−K^{-1}D, −K^{-1}, K^{-1}G) ⊂ K_q^class(−K^{-1/2}, K^{-1}G).

The proof is given in Appendix E.

6. Example

The presented approximation approach is applied to a flexible beam with Kelvin–Voigt damping of normalized length 1. The transverse displacement w of the beam along the spatial coordinate z ∈ [0, 1] is described by the Euler–Bernoulli beam model

(65) ∂_t²w(z,t) + β∂_z⁴∂_tw(z,t) + ∂_z⁴w(z,t) = g(z)u(t), t > 0, z ∈ (0, 1),

wherein β = 10^{-3} is the constant of the Kelvin–Voigt damping. The spatial distribution of the actuator force u is described by g(z) = 1_{[0.15,0.25]}(z), in which 1_{[a,b]}(z) denotes the characteristic function of the interval [a, b], i.e. 1_{[a,b]}(z) is 1 for z ∈ [a, b] and 0 otherwise. The boundary conditions

(66) w(0,t) = ∂_z²w(0,t) = w(1,t) = ∂_z²w(1,t) = 0, t > 0

result from the assumption that the beam is simply supported. The initial conditions are w(z,0) = w_0(z) and ẇ(z,0) = w_{0t}(z), z ∈ Ω, in which w_0(z) satisfies Equation (66) for t = 0. When setting H = L²(0,1) with the standard inner product ⟨·,·⟩_{L²}, the considered output

(67) y(t) = ⟨w(t), h_0⟩_{L²}, t ≥ 0,

with h_0(z) = 1_{[0.75,0.85]}(z) has the meaning of the average deflection on the interval [0.75, 0.85]. Comparing Equations (65)–(67) with Equations (1)–(2) shows that the stiffness operator K and the damping operator D are given by

(68) Kh = d_z⁴h, h ∈ D(K)
(69) D(K) = {h ∈ H | h, d_zh, d_z²h, d_z³h are absolutely continuous, d_z⁴h ∈ H, h(0) = d_z²h(0) = h(1) = d_z²h(1) = 0}

and

(70) D = βK.

The input and output operators G, H0 and H1 are defined by

(71) Gν = gν, ν ∈ ℂ, and H_0h = ⟨h, h_0⟩_{L²}, h ∈ H

as well as

(72) H_1 = 0.

It is easy to verify that these operators satisfy the corresponding conditions introduced in Section 2. In the following, an approximation of the beam is determined such that the first two moments at the expansion point s_1 = 0 and the first moments at s_2 = jπ² and s_3 = −jπ² are matched, i.e.

(73) M̂_i^{s_1} = M_i^{s_1}, i = 0, 1, and M̂_0^{s_j} = M_0^{s_j}, j = 2, 3,

has to hold. The choice of the first expansion point s_1 assures a vanishing steady-state approximation error. The other expansion points s_2 and s_3 lead to an approximation of the first resonance peak of the beam. The expansion points s_2 and s_3 are chosen as a complex conjugate pair in order to assure that the approximation is real, i.e. its transfer function has real coefficients. These matches are obtained by the Galerkin approach if the vectors v_i in Equation (21) span the sum of the classical Krylov subspaces K_1^class(−K̃^{-1}(s_i), K̃^{-1}(s_i)G), i = 1, 2, 3, with D̃(s_i) and K̃(s_i) according to (50), i.e.

(74) ran V = Σ_{i=1}^3 K_1^class(−K̃^{-1}(s_i), K̃^{-1}(s_i)G)

has to hold (see Theorems 5.6 and 5.7). Therefore, ran V = span{ṽ_1, ṽ_2, ṽ_3} with ṽ_i = K̃^{-1}(s_i)g, i = 1, 2, 3. For a numerically reliable implementation it is, however, advantageous to use the real-valued basis {v̂_1, v̂_2, v̂_3} with v̂_1 = ṽ_1, v̂_2 = Re ṽ_2 = ½(ṽ_2 + ṽ_3) and v̂_3 = Im ṽ_2 = (1/(2j))(ṽ_2 − ṽ_3), where ṽ_3 = conj(ṽ_2) is taken into account because s_3 = conj(s_2). By applying the Arnoldi algorithm for infinite-dimensional systems in [Citation14], a real-valued orthonormal basis for span{v̂_1, v̂_2, v̂_3} is computed analytically, leading to the operator V in (21) satisfying V^*V = I. Then, Equations (30) and (32) yield the structure-preserving second-order approximation Equations (29) and (31) with f = 3 and

K_r = [115.22   173.55   146.82;
       173.55   1789.04  1354.62;
       146.82   1354.62  9075.81],   D_r = 10^{-3} K_r,   G_r = [0.98429; 1.48258; 1.25426]

H_r0 = [0.71407  1.17237  1.31220],   H_r1 = 0.
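For illustration, this construction can also be reproduced approximately by a purely numerical procedure (not used in the article, which computes the basis analytically): the beam operators are discretized by a standard finite-difference approximation of d_z⁴ with simply supported ends, the vectors ṽ_i are obtained from linear solves with K̃(s_i), and the reduced matrices follow from (30) and (32). Grid size, quadrature weights and the basis returned by the QR factorization are choices of this sketch, so the resulting numbers need not coincide exactly with the values reported above.

```python
import numpy as np

N = 400                                    # interior grid points of (0, 1)
h = 1.0 / (N + 1)
z = np.linspace(h, 1.0 - h, N)

# finite-difference approximation of d_z^4 with simply supported ends (w = w'' = 0)
K = (6.0 * np.eye(N) - 4.0 * np.eye(N, k=1) - 4.0 * np.eye(N, k=-1)
     + np.eye(N, k=2) + np.eye(N, k=-2)) / h**4
K[0, 0] = K[-1, -1] = 5.0 / h**4           # accounts for w''(0) = w''(1) = 0

beta = 1e-3                                # Kelvin-Voigt damping constant
D = beta * K                               # Equation (70)
g = ((z >= 0.15) & (z <= 0.25)).astype(float)    # input distribution g(z)
h0 = ((z >= 0.75) & (z <= 0.85)).astype(float)   # output weight h0(z)

def K_shift(s):                            # K~(s) = K + s D + s^2 I, cf. Equation (50)
    return K + s * D + s * s * np.eye(N)

s1, s2 = 0.0, 1j * np.pi**2                # expansion points s_1 = 0, s_2 = j*pi^2 (s_3 = conj(s_2))
v1 = np.linalg.solve(K_shift(s1), g)       # v~_1 = K~(s_1)^{-1} g
v2 = np.linalg.solve(K_shift(s2), g)       # v~_2 = K~(s_2)^{-1} g
V, _ = np.linalg.qr(np.column_stack([v1, v2.real, v2.imag]))   # real orthonormal basis

# reduced matrices (30), (32); the sqrt(h) factor accounts (approximately) for the
# L2-normalization of the discrete basis functions
Kr = V.T @ K @ V
Dr = beta * Kr
Gr = np.sqrt(h) * (V.T @ g)
Hr0 = np.sqrt(h) * (h0 @ V)
```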

Note that these matrices and vectors are real-valued because a real-valued basis for ran V has been used. The matrix K_r is symmetric and positive definite, and consequently D_r has these properties as well, which confirms Lemma 3.1. This means that the approximation has an interpretation as a finite-dimensional mechanical system. Then, by the Kelvin–Tait–Chetaev theorem, the approximation is asymptotically stable (see [Citation21]), which is in accordance with Theorem 4.1 since D = βK is a positive operator. Figure 1 shows the frequency responses of the infinite-dimensional beam and its Krylov approximation. It is interesting to note that the second resonance peak in the Bode plot is also approximated accurately, though this is not assured by the choice of the expansion points.
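The dashed curve in Figure 1 can be generated directly from the second-order form of the reduced model, since its transfer matrix is F_r(s) = (H_{r0} + sH_{r1})(s²I + sD_r + K_r)^{-1}G_r. A short sketch (with illustrative names) of this evaluation is:

```python
import numpy as np

def reduced_frequency_response(Kr, Dr, Gr, Hr0, Hr1, omegas):
    """Evaluate F_r(jw) = (H_r0 + jw H_r1)(-w^2 I + jw D_r + K_r)^{-1} G_r over a frequency grid."""
    f = Kr.shape[0]
    F = []
    for w in omegas:
        s = 1j * w
        F.append((Hr0 + s * Hr1) @ np.linalg.solve(s * s * np.eye(f) + s * Dr + Kr, Gr))
    return np.array(F)

# e.g. with the matrices computed above (H_r1 = 0) over a frequency range of interest:
# omegas = np.logspace(0, 3, 1000)
# mag_db = 20 * np.log10(np.abs(reduced_frequency_response(Kr, Dr, Gr, Hr0, np.zeros(3), omegas)))
```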

Figure 1. Bode plots of the infinite-dimensional beam (solid line) and its Krylov approximation of the order 6 (dashed line).

7. Concluding remarks

An interesting direction for further research is the approximation of the state space model of the DPS and then the conversion of the resulting finite-dimensional state space system back into a second-order representation. For finite-dimensional systems, this method has the advantage that more moments match when compared to the direct structure-preserving projection of the second-order system. Many DPS in technical applications exhibit boundary actuation. Therefore, an extension of the presented results to this class of systems is of interest. The modelling of Timoshenko beams and Mindlin–Timoshenko plates leads to DPS of second order with a vector displacement and hence to matrix-valued operators in the second-order PDE. It should be possible to extend the results of this article to this system class as well. Beyond that, future work will investigate the numerical implementation of the proposed results in comparison to the classical FE-modelling approach.

References

  • D. Meyer and S. Srinivasan, Balancing and model reduction for second-order form linear systems, IEEE Trans. Autom. Control 41 (1996), pp. 1632–1644.
  • Y. Chahlaoui, D. Lemonnier, K. Meerbergen, A. Vandendorpe, and P.V. Dooren, Model Reduction of Second-order Systems, Proc. Int. Symp. on Mathematical Theory of Networks and Systems, University of Notre Dame, USA, 2002.
  • T. Reis and T. Stykel, Balanced truncation model reduction of second-order systems, Math. Comp. Modelling Dyn. Sys. 14 (2008), pp. 391–406.
  • P. Benner, P. Kürschner, and J. Saak, An Improved Numerical Method for Balanced Truncation for Symmetric Second-Order Systems, Max Planck Institute, Magdeburg, preprints MPIMD/12-20. 2012.
  • A.C. Antoulas, Approximation of Large-scale Dynamical Systems, SIAM Press, Philadelphia, PA, 2005.
  • R.W. Freund, Model reduction methods based on Krylov subspaces, Acta Numerica 12 (2003), pp. 267–319.
  • E.J. Grimme, Krylov projection methods for model reduction, Ph.D. Thesis, ECE Department, University of Illinois, Urbana-Champaign, 1997.
  • B. Salimbahrami and B. Lohmann, Order reduction of large scale second-order systems using Krylov subspace methods, Lin. Alg. Appl. 415 (2006), pp. 385–405.
  • R. Eid, B. Salimbahrami, and B. Lohmann, Parametric order reduction of proportionally damped second-order systems, Technical Reports on Automatic Control 1 (2006), Available at: www.rt.mw.tum.de .
  • Z. Bai and Y. Su, Dimension reduction of large-scale second-order dynamical systems via a second-order Arnoldi method, SIAM J. Sci. Comput. 26 (2005), pp. 1692–1709.
  • C. Beattie and S. Gugercin, Krylov-based model reduction of second-order systems with proportional damping, Proc. CDC, Seville, Spain (2005), pp. 2278–2283.
  • C. Beattie and S. Gugercin, Interpolatory projection methods for structure-preserving model reduction, Syst. Control Lett. 58 (2009), pp. 225–232.
  • T. Reis and W. Wollner, Finite-rank ADI iteration for operator Lyapunov equations, Hamburger Beiträge zur Angewandten Mathematik, Nr. 2012–09, 2012.
  • C. Harkort and J. Deutscher, Krylov subspace methods for linear infinite-dimensional systems, IEEE Trans. Autom. Control 56 (2011), pp. 441–447.
  • B. Salimbahrami, Structure-preserving order reduction of large scale second-order models, Doctoral thesis, Lehrstuhl für Regelungstechnik, Universität München, Germany, 2005.
  • C. Fletcher, Computational Galerkin Methods, Springer-Verlag, New York, 1984.
  • R.F. Curtain and H.J. Zwart, An Introduction to Infinite-Dimensional Linear Systems Theory, Springer-Verlag, New York, 1995.
  • Z.H. Luo, B.Z. Guo, and Ö. Morgül, Stability and Stabilization of Infinite Dimensional Systems with Applications, Springer-Verlag, London, 1999.
  • F. Huang, On the mathematical model for linear elastic systems with analytic damping, SIAM J. Control Optim. 26 (1988), pp. 714–724.
  • R. Griniv and A. Shkalikov, Exponential stability of semigroups related to operator models in mechanics, Math. Notes 73 (2003), pp. 618–624.
  • E.E. Zajac, Comments on “Stability of damped mechanical systems” and a further extension, AIAA J. 3 (1965), pp. 1749–1750.
  • T. Kato, Perturbation Theory for Linear Operators, Springer-Verlag, Berlin, 1995.

Appendix A. Proof of Theorem 5.2

It is easy to verify that A_1 = −K^{-1}D, A_2 = −K^{-1} and G_1 = K^{-1}G defining the input second-order Krylov subspace satisfy the conditions of Definition 5.1. Next, it is shown that the operators P_{r,i}, i = 0, 1, …, q−1, related to the input second-order Krylov subspace K_q(−K_r^{-1}D_r, −K_r^{-1}, K_r^{-1}G_r) for finite-dimensional systems (written here as P_{r,i} to distinguish them from the operators P_i of Definition 5.1), i.e. P_{r,0} = K_r^{-1}G_r, P_{r,1} = −K_r^{-1}D_rP_{r,0}, P_{r,i} = −K_r^{-1}D_rP_{r,i−1} − K_r^{-1}P_{r,i−2}, i = 2, 3, …, q−1, of the Galerkin approximation Equations (29)–(32) satisfy

(A1) VP_{r,i} = P_i, i = 0, 1, …, q−1.

To this end, use Equation (30) as well as Equation (46) and consider

(A2) VP_{r,0} = VK_r^{-1}G_r = V(V^*KV)^{-1}V^*G = V(V^*KV)^{-1}V^*K(K^{-1}G) = V(V^*KV)^{-1}V^*KP_0,

in which K_r^{-1} exists since K_r > 0 (see Lemma 3.1). In view of ran P_0 ⊂ ran V, there exists a matrix R_0 ∈ ℂ^{f×p} such that P_0 = VR_0. Then, VP_{r,0} = V(V^*KV)^{-1}V^*KVR_0 = VR_0 = P_0. In the next step, this result is used to compute

(A3) VP_{r,1} = −VK_r^{-1}D_rP_{r,0} = −V(V^*KV)^{-1}V^*DVP_{r,0} = V(V^*KV)^{-1}V^*K(−K^{-1}DP_0) = V(V^*KV)^{-1}V^*KP_1.

Since ran P_1 ⊂ ran V, there exists a matrix R_1 ∈ ℂ^{f×p} such that P_1 = VR_1. Consequently, VP_{r,1} = V(V^*KV)^{-1}V^*KVR_1 = VR_1 = P_1. In order to prove that Equation (A1) is true for i = 2, 3, …, q−1, assume that it holds for i = j−1 and i = j−2. Then,

(A4) VP_{r,j} = V(−K_r^{-1}D_rP_{r,j−1} − K_r^{-1}P_{r,j−2}) = V(−(V^*KV)^{-1}V^*DVP_{r,j−1} − (V^*KV)^{-1}V^*VP_{r,j−2}) = V(−(V^*KV)^{-1}V^*KK^{-1}DP_{j−1} − (V^*KV)^{-1}V^*KK^{-1}P_{j−2}) = V(V^*KV)^{-1}V^*K(−K^{-1}DP_{j−1} − K^{-1}P_{j−2}) = V(V^*KV)^{-1}V^*KP_j,

in which Equations (26) and (46) were used. Because of ran P_j ⊂ ran V, there exists a matrix R_j ∈ ℂ^{f×p} such that P_j = VR_j. Thus, VP_{r,j} = V(V^*KV)^{-1}V^*KVR_j = VR_j = P_j. This proves by induction that Equation (A1) is true for all i = 0, 1, …, q−1, since ran P_i ⊂ ran V, i = 0, 1, …, q−1. In order to complete the proof, it is shown that the moments M_i^0 satisfy

(A5) M_i^0 = H_0P_i + H_1P_{i−1}, i = 0, 1, …, q−1,

in which P_{−1} = 0 is assumed. The moments in question are M_i^0 = −CA^{-(i+1)}B (see Equation (42)), which gives with Equations (9)–(12)

(A6) M_i^0 = −[H_0  H_1][0 I; −K −D]^{-(i+1)}[0; G] = [H_0  H_1][−K^{-1}D −K^{-1}; I 0]^i[K^{-1}G; 0].

Observe that

(A7) [P_i; P_{i−1}] = [A_1 A_2; I 0]^i[P_0; 0] = [−K^{-1}D −K^{-1}; I 0]^i[K^{-1}G; 0]

is implied by Equation (46), showing Equation (A5). Consequently, Equations (A1), (A6) and (A7) lead to

(A8) M_i^0 = H_0P_i + H_1P_{i−1} = H_0VP_{r,i} + H_1VP_{r,i−1} = H_{r0}P_{r,i} + H_{r1}P_{r,i−1}

for i = 0, 1, …, q−1 in view of Equation (32) and by setting P_{r,−1} = 0. By using the same reasoning as for the DPS, one can readily verify M̂_i^0 = H_{r0}P_{r,i} + H_{r1}P_{r,i−1}, so that Equation (A8) yields M_i^0 = M̂_i^0, i = 0, 1, …, q−1, thus proving the theorem.□

Appendix B. Proof of Theorem 5.4

In view of Equations (A1) and (A8), the result M_i^0 = H_1P_{i−1} = H_1VP_{r,i−1} = M̂_i^0, i = 0, 1, …, q−1, holds, meaning that the first q moments match. In order to show that also M_i^0 = M̂_i^0, i = q, q+1, …, 2q−1, is satisfied, consider

(B1) M_i^0 = [0  H_1][−K^{-1}D −K^{-1}; I 0]^{i−q+1}[P_{q−1}; P_{q−2}] = (([−K^{-1}D −K^{-1}; I 0]^*)^{i−q+1}[0; G])^*[P_{q−1}; P_{q−2}]

in view of Equations (A6) and (A7) and H_1 = G^*. It is straightforward to verify that

(B2) ([−K^{-1}D −K^{-1}; I 0]^*)^{i−q+1} = (−1)^{i−q+1}[K^{-1}D −K^{-1}; I 0]^{i−q+1}

holds, so that Equation (B1) becomes with Equation (A7)

(B3) M_i^0 = (−1)^{i−q+1}([−K^{-1}D −K^{-1}; I 0]^{i−q}[K^{-1}G; 0])^*[P_{q−1}; P_{q−2}] = (−1)^{i−q+1}(P_{i−q}^*P_{q−1} + P_{i−q−1}^*P_{q−2}).

By using Equations (26) and (A1), this gives

(B4) M_i^0 = (−1)^{i−q+1}(P_{r,i−q}^*V^*VP_{r,q−1} + P_{r,i−q−1}^*V^*VP_{r,q−2}) = (−1)^{i−q+1}(P_{r,i−q}^*P_{r,q−1} + P_{r,i−q−1}^*P_{r,q−2}),

for i = q, q+1, …, 2q−1. Since M̂_i^0 = (−1)^{i−q+1}(P_{r,i−q}^*P_{r,q−1} + P_{r,i−q−1}^*P_{r,q−2}), i = q, q+1, …, 2q−1, results from applying the same reasoning to the Galerkin approximation Equations (29)–(32), the result Equation (B4) implies M_i^0 = M̂_i^0, i = q, q+1, …, 2q−1, thus proving the theorem.□

Appendix C. Proof of Theorem 5.6

Definitions 5.1 and 5.5 directly imply that P_0 = P̃_0 and thus P_1 = −βP̃_0. Assume that for i ≥ 1 the results P_{2i−1} = Σ_{j=0}^{i−1} c_jP̃_j and P_{2i−2} = Σ_{j=0}^{i−1} d_jP̃_j hold with c_j ∈ ℝ and d_j ∈ ℝ. Then P_{2i} = −βP_{2i−1} − K^{-1}P_{2i−2} = −βΣ_{j=0}^{i−1} c_jP̃_j − K^{-1}Σ_{j=0}^{i−1} d_jP̃_j = −βΣ_{j=0}^{i−1} c_jP̃_j + Σ_{j=1}^{i} d_{j−1}P̃_j and P_{2i+1} = −βP_{2i} − K^{-1}P_{2i−1} = −β(−βΣ_{j=0}^{i−1} c_jP̃_j + Σ_{j=1}^{i} d_{j−1}P̃_j) − K^{-1}Σ_{j=0}^{i−1} c_jP̃_j = β²Σ_{j=0}^{i−1} c_jP̃_j − βΣ_{j=1}^{i} d_{j−1}P̃_j + Σ_{j=1}^{i} c_{j−1}P̃_j. Thus, the assumption is also valid for P_{2(i+1)−1} = P_{2i+1} and P_{2(i+1)−2} = P_{2i}. Hence, induction yields Σ_{i=0}^{q−1} ran P_i ⊂ Σ_{i=0}^{q/2−1} ran P̃_i for q even and Σ_{i=0}^{q−1} ran P_i ⊂ Σ_{i=0}^{(q+1)/2−1} ran P̃_i for q odd, which proves the theorem.□

Appendix D. Proof of Theorem 5.7

At first, the theorem is proven for s_0 = 0. Definitions 5.1 and 5.5 directly imply that P_0 = P̃_0. As a consequence, P_1 = −K^{-1}DP_0 = −K^{-1}(αI + βK)P̃_0 = −αK^{-1}P̃_0 − βP̃_0 = αP̃_1 − βP̃_0 holds. Assume that for i ≥ 1 the results P_{i−1} = Σ_{j=0}^{i−1} c_jP̃_j and P_{i−2} = Σ_{j=0}^{i−2} d_jP̃_j hold. Then

(D1) P_i = −K^{-1}DP_{i−1} − K^{-1}P_{i−2} = −K^{-1}(αI + βK)P_{i−1} − K^{-1}P_{i−2} = −αK^{-1}P_{i−1} − βP_{i−1} − K^{-1}P_{i−2} = αΣ_{j=1}^{i} c_{j−1}P̃_j − βΣ_{j=0}^{i−1} c_jP̃_j + Σ_{j=1}^{i−1} d_{j−1}P̃_j,

showing that the assumption is also valid for P_{(i+1)−1} = P_i. Consequently, by induction, Σ_{i=0}^{q−1} ran P_i ⊂ Σ_{i=0}^{q−1} ran P̃_i, showing that the theorem is true for s_0 = 0. In view of Equation (61), this result also proves Theorem 5.7.□

Appendix E. Proof of Theorem 5.8

For the second-order Krylov subspace K_q(−K^{-1}D, −K^{-1}, K^{-1}G), Definition 5.1 of P_0 and P_1 in Equation (46) leads with D = γK^{1/2} to the result P_0 = K^{-1}G and P_1 = −γK^{-1/2}P_0. By using Definition 5.5 for the classical Krylov subspace K_q^class(−K^{-1/2}, K^{-1}G), this yields

(E1) P_0 = P̃_0 and P_1 = γP̃_1.

In view of Equation (46), one obtains for the Krylov subspace of second order

(E2) P_i = −γK^{-1/2}P_{i−1} − K^{-1}P_{i−2}, i = 2, 3, …, q−1.

If P_{i−1} = Σ_{j=0}^{i−1} a_jP̃_j and P_{i−2} = Σ_{j=0}^{i−2} b_jP̃_j with a_j ∈ ℝ and b_j ∈ ℝ hold, then Equations (E2) and (59) imply

(E3) P_i = −γΣ_{j=0}^{i−1} a_jK^{-1/2}P̃_j − Σ_{j=0}^{i−2} b_jK^{-1}P̃_j = Σ_{j=1}^{i} c_jP̃_j

with suitable constants c_j ∈ ℝ for i = 2, 3, …, q−1 in view of Equation (59). Thus, by making use of Equations (E1) and (E3), the result ran P_i ⊂ Σ_{j=0}^{i} ran P̃_j, i = 0, 1, …, q−1, follows by induction. Hence, Equation (64) is proved when taking Equations (45) and (58) into account.□
