Research Article

Two-sided bounds on some output-related quantities in linear stochastically excited vibration systems with application of the differential calculus of norms

Article: 1147932 | Received 21 Sep 2015, Accepted 25 Jan 2016, Published online: 02 Mar 2016

Abstract

A linear stochastic vibration model in state-space form, $\dot{x}(t)=Ax(t)+b(t)$, $x(0)=x_0$, with output equation $x_S(t)=Sx(t)$ is investigated, where $A$ is the system matrix and $b(t)$ is the white noise excitation. The output equation can be viewed as a transformation of the state vector $x(t)$, which is mapped by the rectangular matrix $S$ into the output vector $x_S(t)$. It is known that, under certain conditions, the solution $x(t)$ is a random vector that can be completely described by its mean vector $m_x(t):=m_{x(t)}$ and its covariance matrix $P_x(t):=P_{x(t)}$. If matrix $A$ is asymptotically stable, then $m_x(t)\to 0\;(t\to\infty)$ and $P_{x_S}(t)\to P_S\;(t\to\infty)$, where $P_S$ is a positive (semi-)definite matrix. Similar results are derived for some output-related quantities. The obtained results are of special interest to applied mathematicians and engineers.

AMS Subject Classifications:

Public Interest Statement

When a dynamical system with solution vector $x$ of length $n$ describes an engineering problem, only a few components of $x$ are needed, as a rule. Nevertheless, the whole pertinent initial value problem must be solved. In order to obtain only the components of interest, one defines an output matrix, say $S$, that selects them from $x$ by defining the new output vector $x_S:=Sx$; this shows its importance. This equation is called the output equation. For example, if the engineer wants to apply only the first, second, and $n$th component, then one defines $S$ as $S=[e_1,e_2,e_n]^T$, where $e_i$ denotes the $i$th unit vector for $i=1,2,n$.

In the present paper, new two-sided estimates on the mean vector and covariance matrix pertinent to the output vector xS in linear stochastically excited vibration systems are derived that parallel those associated with x obtained recently.

1. Introduction

In order to make the paper more easily readable for a large readership, we first introduce the notions of output vector and output equation common to engineers. When a dynamical system with solution vector $x$ of length $n$ describes an engineering problem, only a few components of $x$ are needed, as a rule. Nevertheless, the whole pertinent initial value problem must be solved. In order to obtain only the components of interest, one defines an output or transformation matrix, say $S$, that selects them from $x$ by defining the output $x_S:=Sx$. This equation is called the output equation. For example, if the engineer wants to use only the first, second, and $n$th component, then one defines $S$ as $S=[e_1,e_2,e_n]^T$, where $e_i$ denotes the $i$th unit vector for $i=1,2,n$. In other words, by employing the output equation $x_S=Sx$, a subset of components can be selected from the whole set of degrees of freedom, which is usually necessary in practice; a small numerical illustration is given below. Of course, one can also define $S$ such that it forms linear combinations of components of $x$. Whereas, in the preceding paper Kohaupt (2015b), the whole vector $x$ was analyzed, in the present paper, it is replaced by the output $x_S$. The given comments on $x_S$ show why this is important.
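As a small illustration of such a selection matrix, the following MATLAB sketch builds $S$ from the unit vectors and applies it; the dimension n = 5 and the vector x are assumptions chosen only for the example.

    n  = 5;                  % example dimension (assumption)
    En = eye(n);
    S  = En([1, 2, n], :);   % rows e1', e2', en', i.e. S = [e1, e2, en]^T
    x  = (1:n)';             % some state vector (assumption)
    xS = S*x;                % output vector; here xS = [1; 2; 5]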

In this paper, a linear stochastic vibration model of the form $\dot{x}(t)=Ax(t)+b(t)$, $x(0)=x_0$, with output equation $x_S(t)=Sx(t)$ is investigated, where $A$ is a real system matrix, $b(t)$ white noise excitation, and $x_0$ an initial vector that can be completely characterized by its mean vector $m_0$ and its covariance matrix $P_0$. Likewise, the solution $x(t)$, also called the response, is a random vector that can be described by its mean vector $m_x(t):=m_{x(t)}$ and its covariance matrix $P_x(t):=P_{x(t)}$. For asymptotically stable matrices $A$, it is known that $m_x(t)\to 0\;(t\to\infty)$ and $P_x(t)\to P\;(t\to\infty)$, where $P$ is a positive (semi-)definite matrix. Similarly, for the output or transformed quantity $x_S(t)$, one has $m_{x_S}(t)\to 0\;(t\to\infty)$ and $P_{x_S}(t)\to P_S\;(t\to\infty)$ with a positive (semi-)definite matrix $P_S$. The asymptotic behavior of $m_x(t)$ and $P_x(t)-P$ was studied in Kohaupt (2015b).

In this paper, we investigate the asymptotic behavior of $m_{x_S}(t)$ and $P_{x_S}(t)-P_S$. As appropriate norms for the investigation of this problem, again the Euclidean norm for $m_{x_S}(t)$ and the spectral norm for $P_{x_S}(t)-P_S$ are the respective natural choices; both norms are denoted by $\|\cdot\|_2$.

The main new points of the paper are

  • the determination of two-sided bounds on $m_{x_S}(t)$ and $P_{x_S}(t)-P_S$,

  • the derivation of formulas for the right norm derivatives $D_+^k\|P_{x_S}(t)-P_S\|_2$, $k=0,1,2$, and

  • the application of these results to the computation of the best constants in the two-sided bounds.

  • Special attention is paid to conditions ensuring the positiveness of the constants in the lower bounds when $S$ is only rectangular and not square regular.

The paper is structured as follows.

In Section 2, the linear stochastically excited vibration model with output equation is presented. Then, in Section 3, the transformed quantities $m_{x_S}(t)$ and $P_{x_S}(t)$ are determined from $m_x(t)$ and $P_x(t)$, respectively, by appropriate use of the output matrix $S$ as transformation matrix. Section 4 derives two-sided bounds on $x_S(t)=Sx(t)$ with $\dot{x}(t)=Ax(t)$, $x(0)=x_0$, as a preparation to derive two-sided bounds on $m_{x_S}(t)$ in Section 6. Section 5 determines two-sided bounds on $\Phi_S(t):=S\Phi(t)$ with $\dot{\Phi}(t)=A\Phi(t)$, $\Phi(0)=E$, as a preparation to derive two-sided bounds on $P_{x_S}(t)-P_S$ in Section 7. Section 8 studies the local regularity of $\|P_{x_S}(t)-P_S\|_2$. Then, in Section 9, as the main result, formulas for the right norm derivatives $D_+^k\|P_{x_S}(t)-P_S\|_2$, $k=0,1,2$, are obtained. Section 10, for the specified data in the stochastically excited model, presents applications, where the differential calculus of norms is employed by computing the best constants in the new two-sided bounds on $m_{x_S}(t)$ and $P_{x_S}(t)-P_S$. In Section 11, conclusions are drawn. Appendix A contains sufficient algebraic conditions that ensure the positiveness of the constants in the lower bounds when $S$ is only rectangular and not square regular. Finally, we comment on the References. The author's papers on the differential calculus of norms are contained in Kohaupt (1999, 2001, 2002, 2003, 2004a, 2004b, 2005, 2006, 2007a, 2007b, 2007c, 2008a, 2008b, 2008c, 2008d, 2009a, 2009b, 2009c, 2010a, 2010b, 2011, 2012, 2013, 2015a, 2015b). The articles Bhatia and Elsner (2003), Benner, Denißen, and Kohaupt (2013, 2016), and Whidborne and Amer (2011) refer to some of the author's works. The publications Coppel (1965), Dahlquist (1959), Desoer and Haneda (1972), Hairer, Nørsett, and Wanner (1993), Higueras and García-Celayeta (1999, 2000), Hu and Hu (2000), Lozinskiǐ (1958), Pao (1973a, 1973b), Söderlind and Mattheij (1985), and Ström (1972, 1975) contain subjects on the logarithmic norm, which was the starting point of the author's development of the differential calculus of norms. The references Bickley and McNamee (1960), Kučera (1974), and Ma (1966) were important for the author's article on the equation $VA+A^\ast V=\mu V$ in Kohaupt (2008a). The publications Achieser and Glasman (1968), Heuser (1975), Kantorovich and Akilov (1982), Kato (1966), and Taylor (1958) are textbooks on functional analysis, useful, for instance, in the proofs of the theorems in Section 5. The books Golub and van Loan (1989), Niemeyer and Wermuth (1987), and Stummel and Hainer (1980) contain chapters on matrix theory and numerical mathematics valuable in connection with the subject of the present paper. The books Müller and Schiehlen (1985), Thomson and Dahleh (1998), and Waller and Krings (1975) are on engineering dynamical systems. In the paper Guyan (1965), a reduction method for stiffness and mass matrices is discussed, a method that is still in use nowadays. Last, but not least, Kloeden and Platen (1992) is a standard book on the numerical solution of stochastic differential equations.

2. The linear stochastically excited vibration system with output equation

In order to make the paper as far as possible self-contained, we summarize the known facts on linear stochastically excited systems. In the presentation, we closely follow the line of Müller and Schiehlen (1985, Sections 9.1 and 9.2).

So, let us depart from the deterministic model in state-space form
(1) $\dot{x}(t)=Ax(t)+b(t),\quad t>0,\quad x(0)=x_0,$
(2) $x_S(t)=Sx(t)$

with system matrix $A\in\mathbb{R}^{n\times n}$, the state vector $x(t)\in\mathbb{R}^n$ and the excitation vector $b(t)\in\mathbb{R}^n$, $t\ge 0$, the output matrix $S\in\mathbb{R}^{l\times n}$, and the output vector $x_S(t)$. We call (2) the output equation. It can be understood as a transformation that turns $x(t)$ into the transformed quantity $x_S(t)$ by applying the transformation matrix $S$ to $x(t)$.

Now, we replace the deterministic excitation $b(t)$ by a stochastic excitation in the form of white noise. Thus, $b(t)$ can be completely described by the mean vector $m_b(t)$ and the central correlation matrix $N_b(t,\tau)$ with
(3) $m_b(t)=0,\quad N_b(t,\tau)=Q\,\delta(t-\tau),$

where $Q=Q_b$ is the $n\times n$ intensity matrix of the excitation and $\delta(t-\tau)$ the $\delta$-function (more precisely, the $\delta$-functional).

From the central correlation matrix, for $\tau=t$ one obtains the positive semi-definite covariance matrix
(4) $P_b(t):=N_b(t,t).$

At this point, we mention that the definition of a real positive semi-definite matrix includes its symmetry.

When the excitation is white noise, the deterministic initial value problem (1) can be formally maintained, as the theory of linear stochastic differential equations shows. However, the initial state $x_0$ must be introduced as a Gaussian random vector,
(5) $x_0\sim(m_0,P_0),$

which is to be independent of the excitation; here, the sign $\sim$ means that the initial state $x_0$ is completely described by its mean vector $m_0$ and its covariance matrix $P_0$. More precisely: $x_0$ is a Gaussian random vector whose density function is completely determined by $m_0$ and $P_0$ alone.

The stochastic response of the system (1) is formally given by
(6) $x(t)=\Phi(t)x_0+\int_0^t\Phi(t-\tau)\,b(\tau)\,d\tau,$

where, besides the fundamental matrix $\Phi(t)=e^{At}$ and the initial vector $x_0$, a stochastic integral occurs.

It can be shown that the stochastic response $x(t)$ is a non-stationary Gauss–Markov process that can be described by the mean vector $m_x(t):=m_{x(t)}$ and the correlation matrix $N_x(t,\tau):=N(x(t),x(\tau))$. For $\tau=t$, we get the covariance matrix $P_x(t):=P_{x(t)}$.

If the system is asymptotically stable, the properties of first and second order of the stochastic response $x(t)$ that we need are given by
(7) $m_x(t)=\Phi(t)\,m_0,$
(8) $P_x(t)=\Phi(t)(P_0-P)\Phi^T(t)+P,$

where the positive semi-definite $n\times n$ matrix $P$ satisfies the Lyapunov matrix equation $AP+PA^T+Q=0$.

This is a special case of the matrix equation $AX+XB=C$, whose solution can be obtained by a method of Ma (1966). For the special case of diagonalizable matrices $A$ and $B$, this is shortly described in Kohaupt (2015b, Appendix A.1).
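For readers who want to reproduce the computations numerically, the Lyapunov matrix equation can also be solved directly; the following is a minimal MATLAB sketch using the routine lyap from the Control System Toolbox rather than Ma's series method, with A and Q chosen only for illustration.

    A = [0, 1; -2, -1];           % asymptotically stable example matrix (assumption)
    Q = diag([0, 0.01]);          % example intensity matrix (assumption)
    P = lyap(A, Q);               % solves A*P + P*A' + Q = 0
    r = norm(A*P + P*A' + Q);     % residual check; near machine precision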

For asymptotically stable matrices $A$, one has $\lim_{t\to\infty}\Phi(t)=0$ and thus, by (7) and (8),
(9) $\lim_{t\to\infty}m_x(t)=0$

and
(10) $\lim_{t\to\infty}P_x(t)=P.$

In Kohaupt (2015b), we have investigated the asymptotic behavior of $m_x(t)$ and $P_x(t)-P$.

In this paper, we want to derive formulas for $m_{x_S}(t)$ and $P_{x_S}(t)$ corresponding to those of (7) and (8) and study their asymptotic behavior. This will be done in the next five sections, that is, in Sections 3–7.

3. The output-related quantities $m_{x_S}(t)$ and $P_{x_S}(t)$

In this section, we determine the output-related quantities $m_{x_S}(t)$ and $P_{x_S}(t)$ from the corresponding quantities $m_x(t)$ and $P_x(t)$ by appropriate use of the output matrix $S$ as the transformation matrix.

The results of this section are known to mechanical engineers, but are added for the sake of completeness, especially for mathematicians.

One obtains the following lemma.

Lemma 1

(Formulas for $m_{x_S}(t)$ and $P_{x_S}(t)$)

Let $S\in\mathbb{R}^{l\times n}$ and let $x(t)$ be the solution vector of (1). Further, let $x_S(t)$ be given by (2), i.e. $x_S(t)=Sx(t)$.

Then, one has
(11) $m_{x_S}(t)=\Phi_S(t)\,m_0,$
(12) $P_{x_S}(t)=SP_x(t)S^T=\Phi_S(t)(P_0-P)\Phi_S^T(t)+P_S,$

with
(13) $\Phi_S(t):=S\Phi(t)$

and
(14) $P_S:=SPS^T.$

Proof

(i)

One has $m_{x_S}(t)=E(x_S(t))=E(Sx(t))=S\,E(x(t))=S\,m_x(t)$, where $E$ denotes the expectation of a random vector. Using (7), this leads to (11).

(ii)

Next, we show that, for the central correlation matrices $N_x(t,\tau)$ and $N_{x_S}(t,\tau)$, the identity
(15) $N_{x_S}(t,\tau)=SN_x(t,\tau)S^T$
holds. This is because $N_{x_S}(t,\tau)=E\{[x_S(t)-m_{x_S}(t)][x_S(\tau)-m_{x_S}(\tau)]^T\}=E\{S[x(t)-m_x(t)][S(x(\tau)-m_x(\tau))]^T\}=E\{S[x(t)-m_x(t)][x(\tau)-m_x(\tau)]^TS^T\}=S\,E\{[x(t)-m_x(t)][x(\tau)-m_x(\tau)]^T\}\,S^T=SN_x(t,\tau)S^T.$

Thus, (15) is proven. Setting $\tau=t$, this implies
(16) $P_{x_S}(t)=SP_x(t)S^T.$

Taking into account (8), this leads to (12).

Remark Let the system matrix $A$ be asymptotically stable. Then, $\Phi(t)\to 0\;(t\to\infty)$ and thus, from (12) and (13), $\lim_{t\to\infty}\Phi_S(t)=0$

as well as $\lim_{t\to\infty}P_{x_S}(t)=P_S$.
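The quantities of Lemma 1 can be evaluated directly from formulas (11)-(14); the following MATLAB sketch does this for small example data (all matrices are assumptions chosen only so that the code is self-contained).

    A  = [0, 1; -2, -1];  S = [1, 0];            % example data (assumptions)
    m0 = [1; -1];  P0 = 0.01*eye(2);  Q = diag([0, 0.01]);
    P    = lyap(A, Q);                           % A*P + P*A' + Q = 0
    PhiS = @(t) S*expm(A*t);                     % Phi_S(t) = S*Phi(t), Eq. (13)
    PS   = S*P*S';                               % P_S = S*P*S',        Eq. (14)
    mxS  = @(t) PhiS(t)*m0;                      % m_xS(t),             Eq. (11)
    PxS  = @(t) PhiS(t)*(P0 - P)*PhiS(t)' + PS;  % P_xS(t),             Eq. (12)

For asymptotically stable $A$, evaluating mxS and PxS for growing $t$ illustrates the limits stated in the above Remark.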

4. Two-sided bounds on $x_S(t)=Sx(t)$ with $\dot{x}(t)=Ax(t)$, $x(t_0)=x_0$

In this section, we discuss the deterministic case $x_S(t)=Sx(t)$ with $\dot{x}(t)=Ax(t)$, $x(t_0)=x_0$, as a preparation for Section 6. There, two-sided bounds on $m_{x_S}(t)$ will be given based on those for $x_S(t)$ here.

For the positiveness of the constants in the lower bounds, we discuss two cases: the special case when matrix $A$ is diagonalizable and the case of a general square matrix $A$.

Let $S\in\mathbb{C}^{l\times n}$ and
(17) $x_S(t)=Sx(t).$

We obtain

Theorem 1

(Two-sided bound on $\|x_S(t)\|=\|Sx(t)\|$ by $e^{\nu_{x_0}[A](t-t_0)}$)

Let $A\in\mathbb{C}^{n\times n}$, $0\ne x_0\in\mathbb{C}^n$, and let $x(t)$ be the solution of the initial value problem $\dot{x}(t)=Ax(t)$, $x(t_0)=x_0$. Let $\|\cdot\|$ be any vector norm.

Then, there exists a constant $X_{S,0}\ge 0$ and for every $\varepsilon>0$ a constant $X_{S,1}(\varepsilon)>0$ such that
(18) $X_{S,0}\,e^{\nu_{x_0}[A](t-t_0)}\le\|x_S(t)\|\le X_{S,1}(\varepsilon)\,e^{(\nu_{x_0}[A]+\varepsilon)(t-t_0)},\quad t\ge t_0,$

where $\nu_{x_0}[A]$ is the spectral abscissa of $A$ with respect to $x_0$.

If $A$ is diagonalizable, then $\varepsilon=0$ may be chosen, and we write $X_{S,1}$ instead of $X_{S,1}(\varepsilon=0)$.

If $S$ is square and regular, then $X_{S,0}>0$.

Proof

One has
(19) $0\cdot\|x(t)\|\le\|x_S(t)\|\le\|S\|\,\|x(t)\|.$

Further, according to Kohaupt (2006, Theorem 8), there exists a constant $X_0>0$ and for every $\varepsilon>0$ a constant $X_1(\varepsilon)>0$ such that
(20) $X_0\,e^{\nu_{x_0}[A](t-t_0)}\le\|x(t)\|\le X_1(\varepsilon)\,e^{(\nu_{x_0}[A]+\varepsilon)(t-t_0)},\quad t\ge t_0.$

Combining (19) and (20) leads to (18) with $X_{S,0}=0$.

Further, if $S$ is square and regular, then instead of (19) we get
(21) $\frac{1}{\|S^{-1}\|}\,\|x(t)\|\le\|x_S(t)\|\le\|S\|\,\|x(t)\|.$

Thus, evidently, $X_{S,0}=X_0/\|S^{-1}\|>0$ can be chosen in (18).

An interesting and important question is under what conditions the constant $X_{S,0}$ is positive when $S$ is only rectangular, but not necessarily square and regular. To assert that $X_{S,0}$ is positive, additional conditions have to be imposed. We consider two cases.

Case 1: Diagonalizable matrix $A$. In this case, we need the following hypotheses on $A$ from Kohaupt (2011, Section 3.1).

(H1) $m=2n$ and $A\in\mathbb{R}^{m\times m}$,

(H2) $T^{-1}AT=J=\operatorname{diag}(\lambda_k)_{k=1,\dots,m}$, where $\lambda_k=\lambda_k(A)$, $k=1,\dots,m$, are the eigenvalues of $A$,

(H3) $\lambda_i=\lambda_i(A)\ne 0$, $i=1,\dots,m$,

(H4) $\lambda_i\ne\lambda_j$, $i\ne j$, $i,j=1,\dots,m$,

(HS) the eigenvectors $p_1,\dots,p_n;\bar{p}_1,\dots,\bar{p}_n$ form a basis of $\mathbb{C}^m$.

As a preparation for the subsequent derivations, we collect some definitions resp. notations and representations for the solution vector $x(t)$ from Kohaupt (2006, 2011).

Representation of the basis $x_k^{(r)}(t)$, $x_k^{(i)}(t)$, $k=1,\dots,n$

Under the hypotheses (H1), (H2), and (HS), from Kohaupt (2011), we obtain the following real basis functions for the ODE $\dot{x}=Ax$:
(22) $x_k^{(r)}(t)=e^{\lambda_k^{(r)}(t-t_0)}\bigl[\cos\lambda_k^{(i)}(t-t_0)\,p_k^{(r)}-\sin\lambda_k^{(i)}(t-t_0)\,p_k^{(i)}\bigr],\qquad x_k^{(i)}(t)=e^{\lambda_k^{(r)}(t-t_0)}\bigl[\sin\lambda_k^{(i)}(t-t_0)\,p_k^{(r)}+\cos\lambda_k^{(i)}(t-t_0)\,p_k^{(i)}\bigr],$
$k=1,\dots,n$, where $\lambda_k=\lambda_k^{(r)}+i\lambda_k^{(i)}=\operatorname{Re}\lambda_k+i\operatorname{Im}\lambda_k$ and $p_k=p_k^{(r)}+ip_k^{(i)}=\operatorname{Re}p_k+i\operatorname{Im}p_k$, $k=1,\dots,m=2n$, are the decompositions of $\lambda_k$ and $p_k$ into their real and imaginary parts. As in Kohaupt (2011), the indices are chosen such that $\lambda_{n+k}=\bar{\lambda}_k$, $p_{n+k}=\bar{p}_k$, $k=1,\dots,n$.

The spectral abscissa of $A$ with respect to the initial vector $x_0\in\mathbb{R}^n$. Let $u_k$, $k=1,\dots,m=2n$, be the eigenvectors of $A^\ast$ corresponding to the eigenvalues $\bar{\lambda}_k$, $k=1,\dots,m=2n$. Under (H1), (H2), and (HS), the solution $x(t)$ of (1) has the form
(23) $x(t)=\sum_{k=1}^{m=2n}c_{1k}\,p_k\,e^{\lambda_k(t-t_0)}=\sum_{k=1}^{n}\bigl[c_{1k}\,p_k\,e^{\lambda_k(t-t_0)}+c_{2k}\,\bar{p}_k\,e^{\bar{\lambda}_k(t-t_0)}\bigr]$

with uniquely determined coefficients $c_{1k}$, $k=1,\dots,m=2n$. Using the relations
(24) $c_{2k}=c_{1,n+k}=\bar{c}_{1k},\quad k=1,\dots,n$
(see Kohaupt, 2011, Section 3.1 for the last relation), then according to Kohaupt (2011), the spectral abscissa of $A$ with respect to the initial vector $x_0\in\mathbb{R}^n$ is given by
(25) $\nu_0:=\nu_{x_0}[A]:=\max_{k=1,\dots,m=2n}\{\lambda_k^{(r)}(A)\mid x_0\not\perp u_k\}=\max_{k=1,\dots,m=2n}\{\lambda_k^{(r)}(A)\mid c_{1k}\ne 0\}=\max_{k=1,\dots,n}\{\lambda_k^{(r)}(A)\mid c_{1k}\ne 0\}=\max_{k=1,\dots,n}\{\lambda_k^{(r)}(A)\mid x_0\not\perp u_k\}.$

Index sets. In the sequel, we need the following index sets:
(26) $J_{\nu_0}:=\{k_0\in\mathbb{N}\mid 1\le k_0\le n\ \text{and}\ \lambda_{k_0}^{(r)}(A)=\nu_0\}$

and
(27) $J_{\nu_0}^{-}:=\{1,\dots,n\}\setminus J_{\nu_0}=\{k_0^{-}\in\mathbb{N}\mid 1\le k_0^{-}\le n\ \text{and}\ \lambda_{k_0^{-}}^{(r)}(A)<\nu_0\}.$

Appropriate representation of $x(t)$. We have
(28) $x(t)=\sum_{k=1}^{n}\bigl[c_k^{(r)}x_k^{(r)}(t)+c_k^{(i)}x_k^{(i)}(t)\bigr]$

with $c_k^{(r)}=2\operatorname{Re}c_{1k}$, $c_k^{(i)}=-2\operatorname{Im}c_{1k}$, $k=1,\dots,n$ (cf. Kohaupt, 2011). Thus, due to (28) and (22),
(29) $x(t)=\sum_{k=1}^{n}e^{\lambda_k^{(r)}(t-t_0)}f_k(t)$

with
(30) $f_k(t):=c_k^{(r)}\bigl[\cos\lambda_k^{(i)}(t-t_0)\,p_k^{(r)}-\sin\lambda_k^{(i)}(t-t_0)\,p_k^{(i)}\bigr]+c_k^{(i)}\bigl[\sin\lambda_k^{(i)}(t-t_0)\,p_k^{(r)}+\cos\lambda_k^{(i)}(t-t_0)\,p_k^{(i)}\bigr],$
$k=1,\dots,n$.

Appropriate representation of $y(t)$ and $\dot{y}(t)$ (needed in the Appendix). Let
(31) $p_k:=\begin{bmatrix}q_k\\ r_k\end{bmatrix},\quad p_k^{(r)}:=\begin{bmatrix}q_k^{(r)}\\ r_k^{(r)}\end{bmatrix},\quad p_k^{(i)}:=\begin{bmatrix}q_k^{(i)}\\ r_k^{(i)}\end{bmatrix},$

with $q_k,r_k\in\mathbb{C}^n$, $q_k^{(r)},r_k^{(r)},q_k^{(i)},r_k^{(i)}\in\mathbb{R}^n$, $k=1,\dots,m=2n$. Then, from (29), (30),
(32) $y(t)=\sum_{k=1}^{n}e^{\lambda_k^{(r)}(t-t_0)}g_k(t)$

with
(33) $g_k(t):=c_k^{(r)}\bigl[\cos\lambda_k^{(i)}(t-t_0)\,q_k^{(r)}-\sin\lambda_k^{(i)}(t-t_0)\,q_k^{(i)}\bigr]+c_k^{(i)}\bigl[\sin\lambda_k^{(i)}(t-t_0)\,q_k^{(r)}+\cos\lambda_k^{(i)}(t-t_0)\,q_k^{(i)}\bigr],$
$k=1,\dots,n$, as well as
(34) $z(t)=\dot{y}(t)=\sum_{k=1}^{n}e^{\lambda_k^{(r)}(t-t_0)}h_k(t)$

with
(35) $h_k(t):=c_k^{(r)}\bigl[\cos\lambda_k^{(i)}(t-t_0)\,r_k^{(r)}-\sin\lambda_k^{(i)}(t-t_0)\,r_k^{(i)}\bigr]+c_k^{(i)}\bigl[\sin\lambda_k^{(i)}(t-t_0)\,r_k^{(r)}+\cos\lambda_k^{(i)}(t-t_0)\,r_k^{(i)}\bigr],$
$k=1,\dots,n$.

Herewith, one obtains
(36) $\|x_S(t)\|=\|Sx(t)\|=\Bigl\|\sum_{k=1}^{n}e^{\lambda_k^{(r)}(t-t_0)}Sf_k(t)\Bigr\|\ge\Bigl\|\sum_{k\in J_{\nu_0}}e^{\lambda_k^{(r)}(t-t_0)}Sf_k(t)\Bigr\|-\Bigl\|\sum_{k\in J_{\nu_0}^{-}}e^{\lambda_k^{(r)}(t-t_0)}Sf_k(t)\Bigr\|=\Bigl\|\sum_{k\in J_{\nu_0}}Sf_k(t)\Bigr\|\,e^{\nu_0(t-t_0)}-\Bigl\|\sum_{k\in J_{\nu_0}^{-}}e^{\lambda_k^{(r)}(t-t_0)}Sf_k(t)\Bigr\|,\quad t\ge t_0;$

for a corresponding estimate on $x(t)$, compare Kohaupt (2011, (10)).

Now, let
(37) $\sum_{k\in J_{\nu_0}}Sf_k(t)\ne 0,\quad t\ge t_0.$

Then, similarly as in Kohaupt (2011, (12)),
(38) $\Bigl\|\sum_{k\in J_{\nu_0}}Sf_k(t)\Bigr\|\ge\inf_{t\ge t_0}\Bigl\|\sum_{k\in J_{\nu_0}}Sf_k(t)\Bigr\|=:X_{S,\nu_0}>0,\quad t\ge t_0.$

Together with (36), this entails
(39) $\|x_S(t)\|\ge X_{S,0}\,e^{\nu_0(t-t_0)},\quad t\ge t_1\ge t_0,$

with $X_{S,0}:=X_{S,\nu_0}/2>0$

for sufficiently large $t_1$. Thus, we obtain

Theorem 2

(Positiveness of the constant $X_{S,0}$ in the lower bound if $A$ is diagonalizable)

Let the hypotheses (H1), (H2), and (HS) for $A$ be fulfilled, $0\ne x_0\in\mathbb{R}^m$, $S\in\mathbb{R}^{l\times m}$, let $A$ be diagonalizable, and let condition (37) be satisfied.

Then, there exists a positive constant $X_{S,0}$ such that
(40) $X_{S,0}\,e^{\nu_{x_0}[A](t-t_0)}\le\|x_S(t)\|,\quad t\ge t_1\ge t_0,$

for sufficiently large $t_1\ge t_0$.

If $x_S(t)\ne 0$, $t\ge t_0$, then $t_1=t_0$ can be chosen.

Proof

The last statement is proven similarly as in the proof of Kohaupt (2011, Theorem 2).

Remarks

  • As opposed to (37), the relation $\sum_{k\in J_{\nu_0}}f_k(t)\ne 0$, $t\ge t_0$, in Kohaupt (2011, (11)) could be proven there and thus did not need to be assumed.

  • We mention that the quantities $f_k(t)$ depend on the initial vector $x_0$ through their coefficients $c_k^{(r)}$, $c_k^{(i)}$ (Kohaupt, 2011, (8)). To stress this fact, one can write $f_k(t)=f_k(t,x_0)$ or $f_k(t)=f_{k,x_0}(t)$.

Case 2: General square matrix $A$. In this case, we need the following hypotheses on $A$ from Kohaupt (2011, Section 3.2).

(H1′) $m=2n$ and $A\in\mathbb{R}^{m\times m}$,

(H2′) $T^{-1}AT=J=\operatorname{diag}(J_i(\lambda_i))_{i=1,\dots,r}$, where $J_i(\lambda_i)\in\mathbb{C}^{m_i\times m_i}$ are the canonical Jordan forms,

(H3′) $\lambda_i=\lambda_i(A)\ne 0$, $i=1,\dots,r$,

(H4′) $\lambda_i\ne\lambda_j$, $i\ne j$, $i,j=1,\dots,r$,

(HS′) $r=2\rho$, and the principal vectors $p_1^{(1)},\dots,p_{m_1}^{(1)};\dots;p_1^{(\rho)},\dots,p_{m_\rho}^{(\rho)};\bar{p}_1^{(1)},\dots,\bar{p}_{m_1}^{(1)};\dots;\bar{p}_1^{(\rho)},\dots,\bar{p}_{m_\rho}^{(\rho)}$ form a basis of $\mathbb{C}^m$.

In the case of a general square matrix $A$, we also have to collect some definitions resp. notations and representations of $x(t)$ from Kohaupt (2006, 2011).

Representation of the basis $x_k^{(l,r)}(t)$, $x_k^{(l,i)}(t)$, $k=1,\dots,m_l$, $l=1,\dots,\rho$

Under the hypotheses (H1′), (H2′), and (HS′), from Kohaupt (2011) we obtain the following real basis functions for the ODE $\dot{x}=Ax$:
(41) $x_k^{(l,r)}(t)=e^{\lambda_l^{(r)}(t-t_0)}\Bigl\{\cos\lambda_l^{(i)}(t-t_0)\Bigl[p_1^{(l,r)}\tfrac{(t-t_0)^{k-1}}{(k-1)!}+\dots+p_{k-1}^{(l,r)}(t-t_0)+p_k^{(l,r)}\Bigr]-\sin\lambda_l^{(i)}(t-t_0)\Bigl[p_1^{(l,i)}\tfrac{(t-t_0)^{k-1}}{(k-1)!}+\dots+p_{k-1}^{(l,i)}(t-t_0)+p_k^{(l,i)}\Bigr]\Bigr\},$
$x_k^{(l,i)}(t)=e^{\lambda_l^{(r)}(t-t_0)}\Bigl\{\sin\lambda_l^{(i)}(t-t_0)\Bigl[p_1^{(l,r)}\tfrac{(t-t_0)^{k-1}}{(k-1)!}+\dots+p_{k-1}^{(l,r)}(t-t_0)+p_k^{(l,r)}\Bigr]+\cos\lambda_l^{(i)}(t-t_0)\Bigl[p_1^{(l,i)}\tfrac{(t-t_0)^{k-1}}{(k-1)!}+\dots+p_{k-1}^{(l,i)}(t-t_0)+p_k^{(l,i)}\Bigr]\Bigr\},$
$k=1,\dots,m_l$, $l=1,\dots,\rho$, where $p_k^{(l)}=p_k^{(l,r)}+ip_k^{(l,i)}$

is the decomposition of $p_k^{(l)}$ into its real and imaginary parts.

The spectral abscissa of $A$ with respect to the initial vector $x_0\in\mathbb{R}^n$. Let $u_k^{(l)}$, $k=1,\dots,m_l$, be the principal vectors of stage $k$ of $A^\ast$ corresponding to the eigenvalue $\bar{\lambda}_l$, $l=1,\dots,r=2\rho$. Under (H1′), (H2′), and (HS′), the solution $x(t)$ of (1) has the form
(42) $x(t)=\sum_{l=1}^{r=2\rho}\sum_{k=1}^{m_l}c_{1k}^{(l)}x_k^{(l)}(t)=\sum_{l=1}^{\rho}\sum_{k=1}^{m_l}\bigl[c_{1k}^{(l)}x_k^{(l)}(t)+c_{2k}^{(l)}\bar{x}_k^{(l)}(t)\bigr]$

with uniquely determined coefficients $c_{1k}^{(l)}$, $k=1,\dots,m_l$, $l=1,\dots,r=2\rho$. Using the relations
(43) $c_{1k}^{(l)}=(x_0,u_k^{(l)}),\quad k=1,\dots,m_l,\;l=1,\dots,\rho,\qquad c_{2k}^{(l)}=c_{1k}^{(\rho+l)}=\bar{c}_{1k}^{(l)},\quad l=1,\dots,\rho$
(see Kohaupt, 2011, Section 3.2 for the last relation), then the spectral abscissa of $A$ with respect to the initial vector $x_0\in\mathbb{R}^n$ is
(44) $\nu_0:=\nu_{x_0}[A]:=\max_{l=1,\dots,r=2\rho}\{\lambda_l^{(r)}(A)\mid x_0\not\perp M_{\bar{\lambda}_l}(A):=[u_1^{(l)},\dots,u_{m_l}^{(l)}]\}=\max_{l=1,\dots,r=2\rho}\{\lambda_l^{(r)}(A)\mid c_{1k}^{(l)}\ne 0\ \text{for at least one}\ k\in\{1,\dots,m_l\}\}=\max_{l=1,\dots,\rho}\{\lambda_l^{(r)}(A)\mid c_{1k}^{(l)}\ne 0\ \text{for at least one}\ k\in\{1,\dots,m_l\}\}=\max_{l=1,\dots,\rho}\{\lambda_l^{(r)}(A)\mid x_0\not\perp M_{\bar{\lambda}_l}(A)=[u_1^{(l)},\dots,u_{m_l}^{(l)}]\}.$

Index sets. For the sequel, we need the following index sets:
(45) $J_{\nu_0}:=\{l_0\in\mathbb{N}\mid 1\le l_0\le\rho\ \text{and}\ \lambda_{l_0}^{(r)}(A)=\nu_0\}$

and
(46) $J_{\nu_0}^{-}:=\{1,\dots,\rho\}\setminus J_{\nu_0}=\{l_0^{-}\in\mathbb{N}\mid 1\le l_0^{-}\le\rho\ \text{and}\ \lambda_{l_0^{-}}^{(r)}(A)<\nu_0\}.$

Appropriate representation of $x(t)$. We have
(47) $x(t)=\sum_{l=1}^{\rho}\sum_{k=1}^{m_l}\bigl[c_k^{(l,r)}x_k^{(l,r)}(t)+c_k^{(l,i)}x_k^{(l,i)}(t)\bigr]$

with $c_k^{(l,r)}=2\operatorname{Re}c_{1k}^{(l)}$, $c_k^{(l,i)}=-2\operatorname{Im}c_{1k}^{(l)}$, $k=1,\dots,m_l$, $l=1,\dots,\rho$ (cf. Kohaupt, 2011). Thus, due to (47),
(48) $x(t)=\sum_{l=1}^{\rho}e^{\lambda_l^{(r)}(t-t_0)}\sum_{k=1}^{m_l}f_k^{(l)}(t)$

with
(49) $f_k^{(l)}(t):=c_k^{(l,r)}\Bigl\{\cos\lambda_l^{(i)}(t-t_0)\Bigl[p_1^{(l,r)}\tfrac{(t-t_0)^{k-1}}{(k-1)!}+\dots+p_{k-1}^{(l,r)}(t-t_0)+p_k^{(l,r)}\Bigr]-\sin\lambda_l^{(i)}(t-t_0)\Bigl[p_1^{(l,i)}\tfrac{(t-t_0)^{k-1}}{(k-1)!}+\dots+p_{k-1}^{(l,i)}(t-t_0)+p_k^{(l,i)}\Bigr]\Bigr\}+c_k^{(l,i)}\Bigl\{\sin\lambda_l^{(i)}(t-t_0)\Bigl[p_1^{(l,r)}\tfrac{(t-t_0)^{k-1}}{(k-1)!}+\dots+p_{k-1}^{(l,r)}(t-t_0)+p_k^{(l,r)}\Bigr]+\cos\lambda_l^{(i)}(t-t_0)\Bigl[p_1^{(l,i)}\tfrac{(t-t_0)^{k-1}}{(k-1)!}+\dots+p_{k-1}^{(l,i)}(t-t_0)+p_k^{(l,i)}\Bigr]\Bigr\},$
$k=1,\dots,m_l$, $l=1,\dots,\rho$.

Appropriate representation of $y(t)$ and $\dot{y}(t)$ (needed in the Appendix). Set
(50) $p_k^{(l)}:=\begin{bmatrix}q_k^{(l)}\\ r_k^{(l)}\end{bmatrix},\quad p_k^{(l,r)}:=\begin{bmatrix}q_k^{(l,r)}\\ r_k^{(l,r)}\end{bmatrix},\quad p_k^{(l,i)}:=\begin{bmatrix}q_k^{(l,i)}\\ r_k^{(l,i)}\end{bmatrix},$

with $q_k^{(l)},r_k^{(l)}\in\mathbb{C}^n$, $q_k^{(l,r)},r_k^{(l,r)},q_k^{(l,i)},r_k^{(l,i)}\in\mathbb{R}^n$, $k=1,\dots,m_l$, $l=1,\dots,\rho$.

Then, from (48), (49),
(51) $y(t)=\sum_{l=1}^{\rho}e^{\lambda_l^{(r)}(t-t_0)}\sum_{k=1}^{m_l}g_k^{(l)}(t)$

with
(52) $g_k^{(l)}(t):=c_k^{(l,r)}\Bigl\{\cos\lambda_l^{(i)}(t-t_0)\Bigl[q_1^{(l,r)}\tfrac{(t-t_0)^{k-1}}{(k-1)!}+\dots+q_{k-1}^{(l,r)}(t-t_0)+q_k^{(l,r)}\Bigr]-\sin\lambda_l^{(i)}(t-t_0)\Bigl[q_1^{(l,i)}\tfrac{(t-t_0)^{k-1}}{(k-1)!}+\dots+q_{k-1}^{(l,i)}(t-t_0)+q_k^{(l,i)}\Bigr]\Bigr\}+c_k^{(l,i)}\Bigl\{\sin\lambda_l^{(i)}(t-t_0)\Bigl[q_1^{(l,r)}\tfrac{(t-t_0)^{k-1}}{(k-1)!}+\dots+q_{k-1}^{(l,r)}(t-t_0)+q_k^{(l,r)}\Bigr]+\cos\lambda_l^{(i)}(t-t_0)\Bigl[q_1^{(l,i)}\tfrac{(t-t_0)^{k-1}}{(k-1)!}+\dots+q_{k-1}^{(l,i)}(t-t_0)+q_k^{(l,i)}\Bigr]\Bigr\},$
$k=1,\dots,m_l$, $l=1,\dots,\rho$,

as well as
(53) $z(t)=\dot{y}(t)=\sum_{l=1}^{\rho}e^{\lambda_l^{(r)}(t-t_0)}\sum_{k=1}^{m_l}h_k^{(l)}(t)$

with
(54) $h_k^{(l)}(t):=c_k^{(l,r)}\Bigl\{\cos\lambda_l^{(i)}(t-t_0)\Bigl[r_1^{(l,r)}\tfrac{(t-t_0)^{k-1}}{(k-1)!}+\dots+r_{k-1}^{(l,r)}(t-t_0)+r_k^{(l,r)}\Bigr]-\sin\lambda_l^{(i)}(t-t_0)\Bigl[r_1^{(l,i)}\tfrac{(t-t_0)^{k-1}}{(k-1)!}+\dots+r_{k-1}^{(l,i)}(t-t_0)+r_k^{(l,i)}\Bigr]\Bigr\}+c_k^{(l,i)}\Bigl\{\sin\lambda_l^{(i)}(t-t_0)\Bigl[r_1^{(l,r)}\tfrac{(t-t_0)^{k-1}}{(k-1)!}+\dots+r_{k-1}^{(l,r)}(t-t_0)+r_k^{(l,r)}\Bigr]+\cos\lambda_l^{(i)}(t-t_0)\Bigl[r_1^{(l,i)}\tfrac{(t-t_0)^{k-1}}{(k-1)!}+\dots+r_{k-1}^{(l,i)}(t-t_0)+r_k^{(l,i)}\Bigr]\Bigr\},$
$k=1,\dots,m_l$, $l=1,\dots,\rho$.

Now, let
(55) $\sum_{l\in J_{\nu_0}}\sum_{k=1}^{m_l}Sf_k^{(l)}(t)\ne 0,\quad t\ge t_0.$

Then, similarly as in Kohaupt (2011, Section 3.2), there exists a constant $X_{S,0}>0$ such that
(56) $\|x_S(t)\|\ge X_{S,0}\,e^{\nu_0(t-t_0)},\quad t\ge t_1\ge t_0,$

for sufficiently large $t_1\ge t_0$.

Thus, we obtain

Theorem 3

(Positiveness of the constant $X_{S,0}$ in the lower bound if $A$ is a general square matrix)

Let the hypotheses (H1′), (H2′), and (HS′) for $A$ be fulfilled, $0\ne x_0\in\mathbb{R}^m$, $S\in\mathbb{R}^{l\times m}$, let $A$ be a general square matrix, and let condition (55) be satisfied.

Then, there exists a positive constant $X_{S,0}$ such that
(57) $X_{S,0}\,e^{\nu_{x_0}[A](t-t_0)}\le\|x_S(t)\|,\quad t\ge t_1\ge t_0,$

for sufficiently large $t_1\ge t_0$.

If $x_S(t)\ne 0$, $t\ge t_0$, then $t_1=t_0$ can be chosen.

Remark Sufficient algebraic conditions for (37) resp. (55) will be given in the Appendix; they are independent of the initial vector $x_0$ and the time $t$.

5. Two-sided bounds on $\Phi_S(t)=S\Phi(t)$ with $\dot{\Phi}(t)=A\Phi(t)$, $\Phi(0)=E$

In this section, we discuss the deterministic case $\Phi_S(t)=S\Phi(t)$ with $\dot{\Phi}(t)=A\Phi(t)$, $\Phi(0)=E$, as a preparation for Section 7. There, two-sided bounds on $P_{x_S}(t)-P_S=\Phi_S(t)(P_0-P)\Phi_S^T(t)$ will be given based on those for $\Phi_S(t)$ here.

Moreover, for the positiveness of the constant in the lower bound, we discuss two cases: the special case when matrix A is diagonalizable and the case when A is general square.

We obtain

Theorem 4

(Two-sided bound on $\|P_{x_S}(t)-P_S\|_2$ based on $\|\Phi_S(t)\|_2^2$)

Let $A\in\mathbb{R}^{n\times n}$, and let $\Phi(t)=e^{At}$ be the associated fundamental matrix with $\Phi(0)=E$, where $E$ is the identity matrix. Further, let $P_0,P\in\mathbb{R}^{n\times n}$ be the covariance matrices from Section 2.

Then,
(58) $q_0\,\|\Phi_S(t)\|_2^2\le\|P_{x_S}(t)-P_S\|_2\le q_1\,\|\Phi_S(t)\|_2^2,\quad t\ge 0,$

where
(59) $q_0=\inf_{\|v\|_2=1}\bigl|\bigl((P_0-P)v,v\bigr)\bigr|$

and
(60) $q_1=\sup_{\|v\|_2=1}\bigl|\bigl((P_0-P)v,v\bigr)\bigr|=\|P_0-P\|_2.$

If $P_0\ne P$, then $q_1>0$. If $P_0-P$ is regular, then
(61) $q_0=\|(P_0-P)^{-1}\|_2^{-1}>0.$

Proof

The proof follows from Kohaupt (2015b, Lemmas 1 and 2) with $C=P_0-P$ and with $\Phi_S(t)$, $\Phi_S^T(t)$ in place of $\Phi(t)$, $\Phi^T(t)$, as well as $\|\Phi_S^T(t)\|_2=\|\Phi_S(t)\|_2$.

Next, we have to derive two-sided bounds on $\|\Phi_S(t)\|_2$. For this, we write
(62) $S\Phi(t)=S[\varphi_1(t),\dots,\varphi_n(t)]=[S\varphi_1(t),\dots,S\varphi_n(t)],$

where $\varphi_1(t),\dots,\varphi_n(t)$ are the columns of the fundamental matrix $\Phi(t)$, i.e. $\Phi(t)=[\varphi_1(t),\dots,\varphi_n(t)]$.

Now, $\dot{\Phi}(t)=A\Phi(t)$, $\Phi(0)=E$, is equivalent to
(63) $\dot{\varphi}_j(t)=A\varphi_j(t),\quad\varphi_j(0)=e_j,\quad j=1,\dots,n,$

where $e_j$ is the $j$th unit column vector.

The two-sided bounds on $\Phi_S(t)=S\Phi(t)$ can be derived in any norm. Let the matrix norm $|\cdot|$ be given by $|B|=\max_{i,j=1,\dots,n}|B_{i,j}|$, $B=(B_{i,j})\in\mathbb{C}^{n\times n}$.

Then,
(64) $|\Phi_S(t)|=|S\Phi(t)|=\max_{j=1,\dots,n}\|S\varphi_j(t)\|_\infty.$

Thus, the two-sided bound on $\Phi_S(t)$ has been reduced to two-sided bounds on $S\varphi_j(t)$, $j=1,\dots,n$.

Similarly to Theorem 1, we obtain

Theorem 5

(Two-sided bound on $\Phi_S(t)=S\Phi(t)$ by $e^{\nu[A]t}$) Let $A\in\mathbb{C}^{n\times n}$ and let $\Phi(t)$ be the fundamental matrix of $A$ with $\Phi(0)=E$, i.e. let $\Phi(t)$ be the solution of the initial value problem $\dot{\Phi}(t)=A\Phi(t)$, $\Phi(0)=E$, and $\Phi_S(t)=S\Phi(t)$.

Then, there exists a constant $\varphi_{S,0}\ge 0$ and for every $\varepsilon>0$ a constant $\varphi_{S,1}(\varepsilon)>0$ such that
(65) $\varphi_{S,0}\,e^{\nu[A]t}\le\|\Phi_S(t)\|\le\varphi_{S,1}(\varepsilon)\,e^{(\nu[A]+\varepsilon)t},\quad t\ge 0,$

where $\nu[A]$ is the spectral abscissa of $A$.

If $A$ is diagonalizable, then $\varepsilon=0$ may be chosen, and we write $\varphi_{S,1}$ instead of $\varphi_{S,1}(\varepsilon=0)$.

If $S$ is square and regular, then $\varphi_{S,0}>0$.

Proof

From (18) and (63), there exist constants $\varphi_{S,0,j}\ge 0$ and for every $\varepsilon>0$ constants $\varphi_{S,1,j}(\varepsilon)>0$ such that
(66) $\varphi_{S,0,j}\,e^{\nu_{e_j}[A]t}\le\|S\varphi_j(t)\|\le\varphi_{S,1,j}(\varepsilon)\,e^{(\nu_{e_j}[A]+\varepsilon)t},\quad t\ge 0.$

Define $\varphi_{S,0}:=\min_{j=1,\dots,n}\varphi_{S,0,j}$

and $\varphi_{S,1}(\varepsilon):=\max_{j=1,\dots,n}\varphi_{S,1,j}(\varepsilon)$.

Then, taking into account (64) and the relation
(67) $\max_{j=1,\dots,n}\nu_{e_j}[A]=\nu[A]$
(cf. Kohaupt, 2006, Proof of Theorem 7) as well as the equivalence of norms in finite-dimensional spaces, the two-sided bound (65) follows. The rest is clear from Theorem 1.

Corresponding to Theorems 2 and 3, we obtain the following two theorems.

Theorem 6

(Positiveness of the constant $\varphi_{S,0}$ in the lower bound if $A$ is diagonalizable) Let the hypotheses (H1), (H2), and (HS) for $A$ be fulfilled, $S\in\mathbb{R}^{l\times m}$, let $A$ be diagonalizable, and let condition (37) be satisfied with $f_k(t)=f_{k,e_j}(t)$, $j=1,\dots,m$.

Then, there exists a positive constant $\varphi_{S,0}$ such that
(68) $\varphi_{S,0}\,e^{\nu[A]t}\le\|\Phi_S(t)\|,\quad t\ge t_1\ge t_0,$

for sufficiently large $t_1\ge t_0$.

If $\Phi_S(t)\ne 0$, $t\ge t_0$, then $t_1=t_0$ can be chosen.

Theorem 7

(Positiveness of the constant $\varphi_{S,0}$ in the lower bound if $A$ is a general square matrix) Let the hypotheses (H1′), (H2′), and (HS′) for $A$ be fulfilled, $S\in\mathbb{R}^{l\times m}$, let $A$ be a general square matrix, and let condition (55) be satisfied.

Then, there exists a positive constant $\varphi_{S,0}$ such that
(69) $\varphi_{S,0}\,e^{\nu[A]t}\le\|\Phi_S(t)\|,\quad t\ge t_1\ge t_0,$

for sufficiently large $t_1\ge t_0$.

If $\Phi_S(t)\ne 0$, $t\ge t_0$, then $t_1=t_0$ can be chosen.

6. Two-sided bounds on $m_{x_S}(t)$

According to Equation (11), we have $m_{x_S}(t)=\Phi_S(t)\,m_0$, $t\ge 0$.

Now, $x_S(t)=Sx(t)=S\Phi(t)x_0=\Phi_S(t)x_0$ in Theorem 1. Assuming $m_0\ne 0$ and choosing the Euclidean norm $\|\cdot\|=\|\cdot\|_2$ as well as $m_0$ instead of $x_0$, we therefore obtain from Theorem 1 for every $\varepsilon>0$ the two-sided bound
(70) $\mu_{S,0}\,e^{\nu_{m_0}[A]t}\le\|m_{x_S}(t)\|_2\le\mu_{S,1}(\varepsilon)\,e^{(\nu_{m_0}[A]+\varepsilon)t},\quad t\ge 0,$

for constants $\mu_{S,0}\ge 0$ and $\mu_{S,1}(\varepsilon)>0$. Sufficient conditions for $\mu_{S,0}>0$ are obtained from Theorems 2 and 3 when replacing $x_0$ there by $m_0$.

7. Two-sided bounds on $P_{x_S}(t)-P_S=\Phi_S(t)(P_0-P)\Phi_S^T(t)$

Based on Theorems 4 and 5, we obtain

Corollary 1

(Two-sided bounds on $\|P_{x_S}(t)-P_S\|_2$ based on $e^{2\nu[A]t}$) Let $A\in\mathbb{R}^{n\times n}$, let $\Phi(t)=e^{At}$ be the associated fundamental matrix with $\Phi(0)=E$, where $E$ is the identity matrix, as well as $S\in\mathbb{R}^{l\times n}$ and $\Phi_S(t)=S\Phi(t)$. Further, let $P_0,P\in\mathbb{R}^{n\times n}$ be the covariance matrices from Section 2.

Then, there exists a constant $p_{S,0}\ge 0$ and for every $\varepsilon>0$ a constant $p_{S,1}(\varepsilon)>0$ such that
(71) $p_{S,0}\,e^{2\nu[A]t}\le\|P_{x_S}(t)-P_S\|_2\le p_{S,1}(\varepsilon)\,e^{2(\nu[A]+\varepsilon)t},\quad t\ge 0.$

If $P_0-P$ and $S$ are regular, then $p_{S,0}>0$.

Remark If $S\in\mathbb{R}^{l\times n}$ is not square regular, it can also be asserted, under the additional conditions stated in Theorems 6 and 7, that $p_{S,0}>0$.

8. Local regularity of the function $\|P_{x_S}(t)-P_S\|_2$

We have the following lemma, which states, loosely speaking, that for every $t_0\ge 0$, the function $t\mapsto\|\Delta P_{x_S}(t)\|_2:=\|P_{x_S}(t)-P_S\|_2=\|\Phi_S(t)(P_0-P)\Phi_S^T(t)\|_2$ is real analytic in some right neighborhood $[t_0,t_0+\Delta t_0]$.

Lemma 2

(Real analyticity of $t\mapsto\|P_{x_S}(t)-P_S\|_2$ on $[t_0,t_0+\Delta t_0]$) Let $t_0\in\mathbb{R}_0^+$. Then, there exist a number $\Delta t_0>0$ and a function $t\mapsto\widehat{\Delta P_{x_S}}(t)$, which is real analytic on $[t_0,t_0+\Delta t_0]$, such that $\widehat{\Delta P_{x_S}}(t)=\|\Delta P_{x_S}(t)\|_2=\|P_{x_S}(t)-P_S\|_2=\|\Phi_S(t)(P_0-P)\Phi_S^T(t)\|_2$, $t\in[t_0,t_0+\Delta t_0]$.

Proof

Based on $\|\Delta P_{x_S}(t)\|_2=\max\{|\lambda_{\max}(\Delta P_{x_S}(t))|,|\lambda_{\min}(\Delta P_{x_S}(t))|\}$, the proof is similar to that of Kohaupt (2002, Lemma 1). The details are left to the reader.
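Since $\Delta P_{x_S}(t)$ is symmetric, the identity used in this proof can also be checked numerically; a minimal MATLAB sketch (with an arbitrary symmetric example matrix) reads:

    DP  = [0.02, -0.01; -0.01, 0.005];       % symmetric example matrix (assumption)
    lam = eig(DP);                           % real eigenvalues of the symmetric DP
    nrm = max(abs([max(lam), min(lam)]));    % max{|lambda_max|, |lambda_min|}
    % nrm coincides with norm(DP, 2), the spectral norm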

9. Formulas for the norm derivatives $D_+^k\|P_{x_S}(t)-P_S\|_2$, $k=0,1,2$

Let $A\in\mathbb{C}^{n\times n}$ and $C\in\mathbb{C}^{n\times n}$ with $C=C^\ast$. As in Kohaupt (2015b, Section 7), we set $\Psi(t):=\Phi(t)\,C\,\Phi^\ast(t)$, $t\ge 0$.

Let $S\in\mathbb{C}^{l\times n}$ and define $\Psi_S(t):=S\Psi(t)S^\ast$, $t\ge 0$.

Then, $\Psi_S(t)=S\Phi(t)\,C\,\Phi^\ast(t)S^\ast$, $t\ge 0$.

Similarly as in Kohaupt (2015b, Section 7), for $t_0\in\mathbb{R}_0^+$, $\Psi_S(t)=S\Phi(t)\,C\,\Phi^\ast(t)S^\ast=\sum_{j=0}^{\infty}S\Phi(t_0)B_j\Phi^\ast(t_0)S^\ast\,\frac{(t-t_0)^j}{j!}$

with $B_j=\sum_{k=0}^{j}\binom{j}{k}A^{j-k}C(A^\ast)^k$, $j=0,1,2,\dots$, and thus $\Psi_S(t)=T_S^{(0)}+T_S^{(1)}(t-t_0)+T_S^{(2)}(t-t_0)^2+\cdots$

with $T_S^{(k)}=S\,T^{(k)}S^\ast$, $k=0,1,2,\dots$,

where the quantities $T^{(k)}$, $k=0,1,2,\dots$, are defined in Kohaupt (2015b, Section 7).

Consequently, one obtains the formulas for $D_+^k\|P_{x_S}(t)-P_S\|_2$, $k=0,1,2$,

from those for $D_+^k\|P_x(t)-P\|_2$, $k=0,1,2$, when replacing $T^{(k)}$ by $T_S^{(k)}$, $k=0,1,2,\dots$
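A minimal MATLAB sketch of the coefficient matrices $B_j$ follows; the matrices A and C = C' are assumptions chosen for illustration, and the quantities $T^{(k)}$ themselves, being defined in Kohaupt (2015b), are not reproduced here.

    A = [0, 1; -2, -1];  C = [0.01, 0; 0, -0.02];   % example data (assumptions)
    jmax = 4;  B = cell(jmax + 1, 1);
    for j = 0:jmax
        Bj = zeros(size(A));
        for k = 0:j
            Bj = Bj + nchoosek(j, k) * A^(j-k) * C * (A')^k;
        end
        B{j+1} = Bj;    % B_0 = C, B_1 = A*C + C*A', ...
    end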

10. Applications

In this section, we apply the new two-sided bounds on $\|P_{x_S}(t)-P_S\|_2$ obtained in Section 7, as well as the differential calculus of norms developed in Sections 8 and 9, to a linear stochastic vibration model with output equation for an asymptotically stable system matrix and a white noise excitation vector.

In Section 10.1, the stochastic vibration model as well as its state-space form is given; in Section 10.2, the transformation matrix $S$ is chosen; and in Section 10.3, the data are specified. In Section 10.4, the positiveness of the constants $X_{S,0}$ and $\varphi_{S,0}$ in the lower bounds is verified. In Section 10.5, computations with the chosen data are carried out, such as the computation of $P$ and $P_0-P$ as well as the computation of the curves $y=D_+^k\|P_{x_S}(t)-P_S\|_2$, $k=0,1,2$, and of the curve $y=\|P_{x_S}(t)-P_S\|_2$ along with its best upper and lower bounds for the two ranges $t\in[0;5]$ and $t\in[5;25]$. In Section 10.6, computational aspects are shortly discussed.

10.1. The stochastic vibration model and its state-space form

Consider the multi-mass vibration model in Figure 1.

Figure 1. Multi-mass vibration model.

The associated initial-value problem is given by $M\ddot{y}+B\dot{y}+Ky=f(t)$, $y(0)=y_0$, $\dot{y}(0)=\dot{y}_0$,

where $y=[y_1,\dots,y_n]^T$ and $f(t)=[f_1(t),\dots,f_n(t)]^T$ as well as
$M=\operatorname{diag}(m_1,m_2,m_3,\dots,m_n),$
$B=\begin{bmatrix}b_1+b_2&-b_2&&&&\\ -b_2&b_2+b_3&-b_3&&&\\ &-b_3&b_3+b_4&-b_4&&\\ &&\ddots&\ddots&\ddots&\\ &&&-b_{n-1}&b_{n-1}+b_n&-b_n\\ &&&&-b_n&b_n+b_{n+1}\end{bmatrix},\qquad K=\begin{bmatrix}k_1+k_2&-k_2&&&&\\ -k_2&k_2+k_3&-k_3&&&\\ &-k_3&k_3+k_4&-k_4&&\\ &&\ddots&\ddots&\ddots&\\ &&&-k_{n-1}&k_{n-1}+k_n&-k_n\\ &&&&-k_n&k_n+k_{n+1}\end{bmatrix}.$

Here, $y$ is the displacement vector, $f(t)$ the applied force, and $M$, $B$, and $K$ are the mass, damping, and stiffness matrices, respectively.

In the state-space description, one obtains $\dot{x}(t)=Ax(t)+b(t)$, $x(0)=x_0$,

with $x=[y^T,z^T]^T$, $z=\dot{y}$, and $x_0=[y_0^T,z_0^T]^T$, $z_0=\dot{y}_0$, where the initial vector $x_0=[y_0^T,z_0^T]^T$ is characterized by the mean vector $m_0$ and the covariance matrix $P_0$.

The system matrix $A$ and the excitation vector $b(t)$ are given by
$A=\begin{bmatrix}0&E\\ -M^{-1}K&-M^{-1}B\end{bmatrix},\qquad b(t)=\begin{bmatrix}0\\ M^{-1}f(t)\end{bmatrix},$

respectively. The vector $x(t)$ is called the state vector.

The (symmetric positive semi-definite) intensity matrix $Q=Q_b$ is obtained from the (symmetric positive semi-definite) intensity matrix $Q_f$ by
$Q=Q_b=\begin{bmatrix}0&0\\ 0&M^{-1}Q_fM^{-T}\end{bmatrix};$

see Müller and Schiehlen (1985, (9.65)) and the derivation of this relation in Kohaupt (2015b, Appendix A.5).

10.2. The transformation matrix $S$ and the output equation $x_S(t)=Sx(t)$

We depart from the equation of motion in vector form, namely $M\ddot{y}+B\dot{y}+Ky=f(t)$, and rewrite it as $\ddot{y}_a(t):=\ddot{y}-M^{-1}f(t)=-M^{-1}Ky(t)-M^{-1}B\dot{y}(t).$

Following Müller and Schiehlen (1985, (9.56), (9.57)), for a one-mass model with base excitation, we call $\ddot{y}_a$ the absolute acceleration of our vibration system; it can be written as $\ddot{y}_a(t)=[-M^{-1}K,-M^{-1}B]\begin{bmatrix}y(t)\\ \dot{y}(t)\end{bmatrix}=Sx(t)=:x_S(t),\quad t\ge 0,$

with the transformation matrix $S=[-M^{-1}K,-M^{-1}B]$.

Our output equation therefore is $x_S(t)=Sx(t),$

where here $S\in\mathbb{R}^{n\times 2n}$ is a rectangular, but not a square regular matrix.

10.3. Data for the model

As of now, we specify the values as $m_j=1$, $j=1,\dots,n$, $k_j=1$, $j=1,\dots,n+1$,

and $b_j=\begin{cases}1/2,& j\ \text{even},\\ 1/4,& j\ \text{odd}.\end{cases}$

Then, $M=E$, $B=\begin{bmatrix}\tfrac34&-\tfrac12&&&\\ -\tfrac12&\tfrac34&-\tfrac14&&\\ &-\tfrac14&\tfrac34&-\tfrac12&\\ &&\ddots&\ddots&\ddots\\ &&-\tfrac14&\tfrac34&-\tfrac12\\ &&&-\tfrac12&\tfrac34\end{bmatrix}$ (if $n$ is even), and $K=\begin{bmatrix}2&-1&&&\\ -1&2&-1&&\\ &-1&2&-1&\\ &&\ddots&\ddots&\ddots\\ &&-1&2&-1\\ &&&-1&2\end{bmatrix}.$

We choose $n=5$ in this paper so that the state-space vector $x(t)$ has the dimension $m=2n=10$.

For $m_0$, we take $m_0=[m_{y_0}^T,m_{z_0}^T]^T$

with $m_{y_0}=[-1,1,-1,1,-1]^T$

and $m_{z_0}=\begin{cases}[0,0,0,0,0]^T&\text{(Case I)}\\ [-1,-1,-1,-1,-1]^T&\text{(Case II)}\end{cases}$

similarly as in Kohaupt (2002) for $y_0$ and $\dot{y}_0$. For the $10\times 10$ matrix $P_0$, we choose $P_0=0.01\,E$.

The white-noise force vector $f(t)$ is specified as $f(t)=[0,\dots,0,f_n(t)]^T$

so that its intensity matrix $Q_f\in\mathbb{R}^{n\times n}$ with $q_{f,nn}=:q$ has the form $Q_f=\operatorname{diag}(0,\dots,0,q).$

We choose $q=0.01$.

With $M=E$, this leads to (see Kohaupt, 2015b, Appendix A.5) $Q=Q_b=\begin{bmatrix}0&0\\ 0&Q_f\end{bmatrix}.$
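A minimal MATLAB sketch assembling the specified data ($n=5$) together with the state-space and output matrices; the block forms of A and Q follow Section 2 and the relation above:

    n  = 5;
    M  = eye(n);
    kv = ones(n+1, 1);                            % k_j = 1
    bv = 0.25*ones(n+1, 1);  bv(2:2:end) = 0.5;   % b_j = 1/2 (j even), 1/4 (j odd)
    B  = diag(bv(1:n) + bv(2:n+1)) - diag(bv(2:n), 1) - diag(bv(2:n), -1);
    K  = diag(kv(1:n) + kv(2:n+1)) - diag(kv(2:n), 1) - diag(kv(2:n), -1);
    A  = [zeros(n), eye(n); -M\K, -M\B];          % system matrix
    S  = [-M\K, -M\B];                            % output matrix of Section 10.2
    q  = 0.01;
    Qf = zeros(n);  Qf(n, n) = q;                 % intensity matrix of f(t)
    Q  = blkdiag(zeros(n), (M\Qf)/M');            % Q_b; with M = E, this is diag(0,...,0,q)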

10.4. Positiveness of the constants $X_{S,0}$ and $\varphi_{S,0}$ (resp. $p_{S,0}$)

The eigenvalues of matrix $A$ come in conjugate pairs, $\lambda_{5+i}=\bar{\lambda}_i$, $i=1,\dots,5$, where the numbering is chosen such that $\operatorname{Im}\lambda_i>0$, $i=1,\dots,5$. So, $\lambda_i\ne\lambda_j$, $i\ne j$, $i,j=1,\dots,10$.

Thus, matrix $A$ is diagonalizable. Further, conditions (H1)–(H4) and (HS) are fulfilled. Moreover, we have $J_{\nu_0}=\{5\}$, and $q_5$ is obtained as the upper half of the eigenvector $p_5$ according to (31).

Computation shows that $q_5$, $\bar{q}_5$ are linearly independent. Thus, by Lemma A.1 and Theorem 2 resp. Theorem 6, the constants $X_{S,0}$ and $\varphi_{S,0}$ are positive. Therefore, the constant $p_{S,0}$ is also positive.
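Continuing the assembly sketch of Section 10.3, the linear independence of $q_5$, $\bar{q}_5$ can be checked numerically via the rank criterion of Lemma A.1, equivalence (A7); the eigenvector selection below is a plain illustration and does not reproduce the numbering convention of the text.

    [V, D]  = eig(A);                       % A from the assembly sketch above
    [~, i5] = max(real(diag(D)));           % an eigenvalue with maximal real part
    q5      = V(1:n, i5);                   % upper block q_5 of the eigenvector, cf. (31)
    rk      = rank([real(q5), imag(q5)]);   % rk = 2 means q_5, conj(q5) are lin. independent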

10.5. Computations with the specified data

(i)

Bounds on $y=\|\Phi_S(t)m_0\|$ in the vector norm $\|\cdot\|_2$. Upper bounds on $y=\|\Phi(t)m_0\|$ in the vector norm $\|\cdot\|_2$ for the two cases (I) and (II) of $m_0$ are given in Kohaupt (2002, Figures 2 and 3). There, we had a deterministic problem with $f(t)=0$ and the solution vector $x(t)=\Phi(t)x_0$, where $x_0$ there had the same data as $m_0$ here. We mention that, for the specified data, $\nu_{m_0}[A]=\nu[A]=\alpha$ in both cases; see Kohaupt (2006, p. 154) for a method to prove this. For the sake of brevity, we do not compute or plot the lower or upper bounds, and thus the two-sided bounds, on $y=\|\Phi_S(t)m_0\|$, but leave this to the reader.

(ii)

Computation of $P$ and $P_0-P$. The computation of these matrices was already done in Kohaupt (2015b, Subsection 3). There, we saw that $P$ is symmetric and $P_0-P$ symmetric and regular (but not positive definite). Matrix $P_0-P$ is needed to compute the curve $y=\|P_{x_S}(t)-P_S\|_2=\|\Phi_S(t)(P_0-P)\Phi_S^T(t)\|_2$.

(iii)

Computation of the curves $y=D_+^k\|P_{x_S}(t)-P_S\|_2=D_+^k\|\Phi_S(t)(P_0-P)\Phi_S^T(t)\|_2$, $k=0,1,2$. The computation of $y=D_+^k\|P_{x_S}(t)-P_S\|_2$, $k=0,1,2$, for the given data is done according to Section 9 with $C=P_0-P$. The pertinent curves are illustrated in Figures 2–4. We have checked the results numerically by difference quotients. More precisely, setting $\Delta P_{x_S}(t):=P_{x_S}(t)-P_S$, $t>0$, and $g(t):=\|\Delta P_{x_S}(t)\|_2=\|P_{x_S}(t)-P_S\|_2$, $t>0$, we have investigated the approximations $\delta_h g(t):=\frac{g(t+h)-g(t-h)}{2h}\approx D_+g(t)$, $t-h\ge 0$, and $\delta_{h/2}^2g(t):=\delta_{h/2}(\delta_{h/2}g(t))=\frac{g(t+h)-2g(t)+g(t-h)}{h^2}\approx D_+^2g(t)$, $t-h\ge 0$, as well as $\delta_hDg(t):=\frac{Dg(t+h)-Dg(t-h)}{2h}\approx D_+^2g(t)$, $t-h\ge 0$. For, e.g. $t=2.5$, $h=10^{-5}$, we obtain $D_+g(t)=D_+\|P_{x_S}(t)-P_S\|_2=-0.00789413206599$, $\delta_hg(t)=\delta_h\|P_{x_S}(t)-P_S\|_2=-0.00789413206470$, as well as $D_+^2g(t)=D^2\|P_{x_S}(t)-P_S\|_2=0.00180394234645$, $\delta_{h/2}^2g(t)=\delta_{h/2}^2\|P_{x_S}(t)-P_S\|_2=0.00180268994177$, and $D_+^2g(t)=D^2\|P_{x_S}(t)-P_S\|_2=0.00180394234645$, $\delta_hDg(t)=\delta_hD\|P_{x_S}(t)-P_S\|_2=0.00180394235409$, so that the computational results for $y=D_+^k\|P_{x_S}(t)-P_S\|_2$, $k=0,1,2$, with $t=2.5$ are well underpinned by the difference quotients. As we see, the approximation of $D_+^2g(t)=D_+^2\|P_{x_S}(t)-P_S\|_2$ by $\delta_hDg(t)$ is much better than by $\delta_{h/2}^2g(t)$, which was to be expected, of course.
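A minimal MATLAB sketch of these difference-quotient checks, assuming the handles PxS and PS are available as in the sketch after Lemma 1, but built with the model data of Section 10.3:

    g  = @(t) norm(PxS(t) - PS, 2);          % g(t) = ||P_xS(t) - P_S||_2
    t  = 2.5;  h = 1e-5;
    d1 = (g(t+h) - g(t-h)) / (2*h);          % delta_h g(t)       ~ D_+ g(t)
    d2 = (g(t+h) - 2*g(t) + g(t-h)) / h^2;   % delta_{h/2}^2 g(t) ~ D_+^2 g(t)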

(iv)

Bounds on $y=\|P_{x_S}(t)-P_S\|_2=\|\Phi_S(t)(P_0-P)\Phi_S^T(t)\|_2$ in the spectral norm $\|\cdot\|_2$. Let $\alpha:=\nu[A]$ be the spectral abscissa of the system matrix $A$. With the given data, we obtain $\alpha:=\nu[A]=-0.05023936121946<0$, so that the system matrix $A$ is asymptotically stable. The upper bound on $y=\|P_{x_S}(t)-P_S\|_2$ is given by $y=p_{S,1}(\varepsilon)\,e^{2(\alpha+\varepsilon)t}$, $t\ge 0$. Here, $\varepsilon=0$ can be chosen since matrix $A$ is diagonalizable. But, in the programs, we have chosen the machine precision $\varepsilon=\mathrm{eps}=2^{-52}\approx 2.2204\times 10^{-16}$ of MATLAB in order not to be bothered by this question. With $\varphi_{1,\varepsilon}(t):=p_{S,1}(\varepsilon)\,e^{2(\alpha+\varepsilon)t}$, $t\ge 0$, the optimal constant $p_{S,1}(\varepsilon)$ in the upper bound is obtained from the two conditions $\|P_{x_S}(t_c)-P_S\|_2=\varphi_{1,\varepsilon}(t_c)=p_{S,1}(\varepsilon)\,e^{2(\alpha+\varepsilon)t_c}$ and $D_+\|P_{x_S}(t_c)-P_S\|_2=\varphi_{1,\varepsilon}'(t_c)=2(\alpha+\varepsilon)\varphi_{1,\varepsilon}(t_c)$, where $t_c$ is the place of contact between the curves. This is a system of two nonlinear equations in the two unknowns $t_c$ and $p_{S,1}(\varepsilon)$. By eliminating $\varphi_{1,\varepsilon}(t_c)$, this system is reduced to the determination of the zero of $D_+\|P_{x_S}(t_c)-P_S\|_2-2(\alpha+\varepsilon)\|P_{x_S}(t_c)-P_S\|_2=0$, which is a single nonlinear equation in the single unknown $t_c$. For this, the MATLAB routine fsolve was used. After $t_c$ has been computed from the above equation, the best constant $p_{S,1}(\varepsilon)$ is obtained from $p_{S,1}(\varepsilon)=\|P_{x_S}(t_c)-P_S\|_2\,e^{-2(\alpha+\varepsilon)t_c}$.
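A minimal MATLAB sketch of this contact-point computation; here g denotes $t\mapsto\|P_{x_S}(t)-P_S\|_2$ as above, and Dg a handle for its right derivative (an assumption; Dg could also be replaced by the difference quotient of item (iii)):

    alpha = max(real(eig(A)));                     % spectral abscissa nu[A]
    e0    = eps;                                   % machine precision, as in the text
    F     = @(tc) Dg(tc) - 2*(alpha + e0)*g(tc);   % single nonlinear equation in tc
    tc    = fsolve(F, 1.4);                        % initial guess t_{c,0} = 1.4
    pS1   = g(tc) * exp(-2*(alpha + e0)*tc);       % best constant p_{S,1}(eps)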

Figure 2. Curve $y=\|P_{x_S}(t)-P_S\|_2$, $0\le t\le 5$, $\Delta t=0.01$.

Figure 3. Right norm derivative $y=D_+\|P_{x_S}(t)-P_S\|_2$, $0\le t\le 5$, $\Delta t=0.01$.

Figure 4. Second right norm derivative $y=D_+^2\|P_{x_S}(t)-P_S\|_2$, $0\le t\le 5$, $\Delta t=0.01$.

Numerical values for the range $[0;5]$. First, we consider the range $[0;5]$. From the initial guess $t_{c,0}=1.4$, the computations deliver the values $t_c=1.355984$, $p_{S,1}(\varepsilon)=0.024642$.

To compute the lower bound, we have to notice that the curve $y=\|P_{x_S}(t)-P_S\|_2$ has kinks, like the function $|t|$ at $t=0$. This is not seen in Figure 5 in the range $[0;5]$, but in Figure 6 in the range $[5;25]$. Therefore, the point of contact $t_s$ between the lower bound $y=p_{S,0}\,e^{2\alpha t}$ and the curve $y=\|P_{x_S}(t)-P_S\|_2$ cannot be determined by the calculus of norms, but must be computed from $t_s=\min_{j=1,2,\dots}\|P_{x_S}(t_j)-P_S\|_2,$

where $t_j$, $j=1,2,\dots$, are the local minima of $y=\|P_{x_S}(t)-P_S\|_2$. In this way, with the initial guess $t_{s,0}=3.0$, the results are $t_s=17.152006$, $p_{S,0}=9.403735\times 10^{-5}$.

In Figure 5, the curve $y=\|P_{x_S}(t)-P_S\|_2$ along with the best upper and lower bounds is illustrated with stepsize $\Delta t=0.01$. The upper bound is valid for $t\ge t_1\approx 1.172$.

Numerical values for the range $[5;25]$. On the range $[5;25]$, the two-sided bounds can be better adapted to the curve $y=\|P_{x_S}(t)-P_S\|_2$. From the initial guess $t_{c,0}=14$, the computations deliver $t_c=14.876956$, $p_{S,1}(\varepsilon)=0.00164024$.

Further, with the initial guess $t_{s,0}=25$, we obtain $t_s=24.860534$, $p_{S,0}=3.548384$.

Figure 5. $y=\|P_{x_S}(t)-P_S\|_2$ along with the best upper and lower bounds on the range $[0;5]$.

In Figure 6, the curve $y=\|P_{x_S}(t)-P_S\|_2$ along with the best upper and lower bounds is illustrated with stepsize $\Delta t=0.01$. The upper bound is valid for $t\ge t_1\approx 6.031$.

Figure 6. $y=\|P_{x_S}(t)-P_S\|_2$ along with the best upper and lower bounds on the range $[5;25]$.

10.6. Computational aspects

In this subsection, we say something about the computer equipment and the computation time for some operations.

(i)

As to the computer equipment, the following hardware was available: an Intel Pentium D (3.20 GHz, 800 MHz front-side bus, 2×2 MB DDR2-SDRAM with 533 MHz high-speed memory). As software package, we used MATLAB, Version 6.5.

(ii)

The computation time $t$ of an operation was determined by the command sequence ti=clock; operation; t=etime(clock,ti); it is put out in seconds, rounded to two decimal places, by MATLAB. For the computation of the eigenvalues of matrix $A$, we used the command [XA,DA]=eig(A); the pertinent computation time is less than 0.01 s. To determine $\Phi(t)=e^{At}$, we employed the MATLAB routine expm. For the computation of the 501 values $t$, $y$, $y_u$, $y_l$ in Figure 5, it took $t(\text{Table for Figure 5})=1.17$ s. Here, $t$ stands for the time value running from $t_0=0$ to $t_e=25$ with stepsize $\Delta t=0.1$; $y$ stands for the value of $\|P_{x_S}(t)-P_S\|_2$, $y_u$ for the value of the best upper bound $p_{S,1}(\varepsilon)e^{2(\alpha+\varepsilon)t}$, and $y_l$ for the value of the lower bound $p_{S,0}e^{2\alpha t}$. For the computation of the 2501 values $t$, $y$, $y_u$, $y_l$ in Figure 6, it took $t(\text{Table for Figure 6})=6.35$ s.

11. Conclusion

In the present paper, a linear stochastic vibration system of the form $\dot{x}(t)=Ax(t)+b(t)$, $x(0)=x_0$, with output equation $x_S(t)=Sx(t)$ was investigated, where $A$ is the system matrix and $b(t)$ white noise excitation. The output equation $x_S(t)=Sx(t)$ is viewed as a transformation of the state vector $x(t)$ mapped by the rectangular matrix $S$ into the output vector $x_S(t)$. If the system matrix $A$ is asymptotically stable, then the mean vector $m_{x_S}(t)$ and the covariance matrix $P_{x_S}(t)$ both converge, with $m_{x_S}(t)\to 0\;(t\to\infty)$ and $P_{x_S}(t)\to P_S\;(t\to\infty)$ for some symmetric positive (semi-)definite matrix $P_S$. This raises the question of the asymptotic behavior of both quantities. The pertinent investigations are made in the Euclidean norm $\|\cdot\|_2$ for $m_{x_S}(t)$ and in the spectral norm, also denoted by $\|\cdot\|_2$, for $P_{x_S}(t)-P_S$. The main new points are the derivation of two-sided bounds on both quantities, the determination of the right norm derivatives $D_+^k\|P_{x_S}(t)-P_S\|_2$, $k=0,1,2$, and, as application, the computation of the best constants in the bounds. In the presentation, the author exhibits the relations between the quantities $m_x(t)$, $P_x(t)-P$, and the formulas for $D_+^k\|P_x(t)-P\|_2$, on the one hand, and the corresponding output-related quantities $m_{x_S}(t)$, $P_{x_S}(t)-P_S$, and $D_+^k\|P_{x_S}(t)-P_S\|_2$, on the other hand. As a result, we obtain that there is a close relationship between these quantities. Special attention is paid to the positiveness of the constants in the lower bounds if the transformation matrix is only rectangular and not necessarily square and regular. In the Appendix, a sufficient algebraic condition for the positiveness of the constants in the lower bounds is derived that is independent of the initial vector and the time variable. To make sure that the (new) formulas for $D_+^k\|P_{x_S}(t)-P_S\|_2$ are correct, we have checked them by various difference quotients. They underpin the correctness of the numerical values for the specified data.

The computation time to generate the last figure with a 10×10 matrix A is about 6 seconds. Of course, in engineering practice, much larger models occur. As in earlier papers, we mention that in this case engineers usually employ a method called condensation to reduce the size of the matrices.

We have shifted the details of the positiveness of the constants in the lower bounds to the Appendix in order to make the paper easier to comprehend.

The numerical values were given in order that the reader can check the results.

Altogether, the results of the paper should be of interest to applied mathematicians and particularly to engineers.

Acknowledgements

The author would like to give thanks to the anonymous referees for evaluating the paper and for comments that led to a better presentation of the paper.

Additional information

Funding

The author received no direct funding for this research.

Notes on contributors

L. Kohaupt

Ludwig Kohaupt received the equivalent to the master’s degree (Diplom-Mathematiker) in Mathematics in 1971 and the equivalent to the PhD (Dr.phil.nat.) in 1973 from the University of Frankfurt/Main. From 1974 until 1979, Kohaupt was a teacher in Mathematics and Physics at a Secondary School. During that time (from 1977 until 1979), he was also an auditor at the Technical University of Darmstadt in Engineering Subjects, such as Mechanics, and especially Dynamics. From 1979 until 1990, he joined the Mercedes-Benz car company in Stuttgart as a Computational Engineer, where he worked in areas such as Dynamics (vibration of car models), Cam Design, Gearing, and Engine Design. Then, in 1990, Dr. Kohaupt combined his preceding experiences by taking over a professorship at the Beuth University of Technology Berlin (formerly known as TFH Berlin). He retired on April 01, 2014.

References

  • Achieser, N. I., & Glasman, I. M. (1968). Theorie der linearen Operatoren im Hilbert-Raum [Theory of linear operators in Hilbert space]. Berlin: Akademie-Verlag.
  • Bhatia, R., & Elsner, L. (2003). Higher order logarithmic derivatives of matrices in the spectral norm. SIAM Journal on Matrix Analysis and Applications, 25, 662–668.
  • Benner, P., Denißen, J., & Kohaupt, L. (2013). Bounds on the solution of linear time-periodic systems. Proceedings in Applied Mathematics and Mechanics, 13, 447–448.
  • Benner, P., Denißen, J., & Kohaupt, L. (2016, January 12). Trigonometric spline and spectral bounds for the solution of linear time-periodic systems (Preprint MPIMD/16-1). Magdeburg: Max Planck Institute for Dynamics of Complex Technical Systems. Retrieved from https://protect-us.mimecast.com/s/oXA1BRUQvnpoto
  • Bickley, W. G., & McNamee, J. (1960). Matrix and other direct methods for the solution of linear difference equations. Philosophical Transactions of the Royal Society of London: Series A, 252, 69–131.
  • Coppel, W. A. (1965). Stability and asymptotic behavior of differential equations. Boston, MA: D.C. Heath.
  • Dahlquist, G. (1959). Stability and error bounds in the numerical integration of ordinary differential equations. Transactions of the Royal Institute of Technology, Stockholm, No. 130. Uppsala: Almqvist and Wiksells Boktryckeri AB.
  • Desoer, Ch. A., & Haneda, H. (1972). The measure of a matrix as a tool to analyse computer algorithms for circuit analysis. IEEE Transaction on Circuit Theory, 19, 480–486.
  • Golub, G. H., & van Loan, Ch. F. (1989). Matrix computations. Baltimore, MD: Johns Hopkins University Press.
  • Guyan, R. J. (1965). Reduction of stiffness and mass matrices. AIAA Journal, 3, 380.
  • Hairer, E., Nørsett, S. P., & Wanner, G. (1993). Solving ordinary differential equations I. Berlin: Springer-Verlag.
  • Heuser, H. (1975). Funktionalanalysis [Functional analysis]. Stuttgart: B.G. Teubner.
  • Higueras, I., & García-Celayeta, B. (1999). Logarithmic norms for matrix pencils. SIAM Journal on Matrix Analysis and Applications, 20, 646–666.
  • Higueras, I., & García-Celayeta, B. (2000). How close can the logarithmic norm of a matrix pencil come to the spectral abscissa? SIAM Journal on Matrix Analysis and Applications, 22, 472–478.
  • Hu, G.-D., & Hu, G.-D. (2000). A relation between the weighted logarithmic norm of a matrix and the Lyapunov equation. BIT, 40, 606–610.
  • Kantorovich, L. V., & Akilov, G. P. (1982). Functional analysis. Oxford: Pergamon Press.
  • Kato, T. (1966). Perturbation theory for linear operators. New York, NY: Springer.
  • Kloeden, P., & Platen, E. (1992). Numerical solution of stochastic differential equations. Springer-Verlag.
  • Kohaupt, L. (1999). Second logarithmic derivative of a complex matrix in the Chebyshev norm. SIAM Journal on Matrix Analysis and Applications, 21, 382–389.
  • Kohaupt, L. (2001). Differential calculus for some p-norms of the fundamental matrix with applications. Journal of Computational and Applied Mathematics, 135, 1–21.
  • Kohaupt, L. (2002). Differential calculus for p-norms of complex-valued vector functions with applications. Journal of Computational and Applied Mathematics, 145, 425–457.
  • Kohaupt, L. (2003). Extension and further development of the differential calculus for matrix norms with applications. Journal of Computational and Applied Mathematics, 156, 433–456.
  • Kohaupt, L. (2004a). Differential calculus for the matrix norms and with applications to asymptotic bounds for periodic linear systems. International Journal of Computer Mathematics, 81, 81–101.
  • Kohaupt, L. (2004b). New upper bounds for free linear and nonlinear vibration systems with applications of the differential calculus of norms. Applied Mathematical Modelling, 28, 367–388.
  • Kohaupt, L. (2005). Illustration of the logarithmic derivatives by examples suitable for classroom teaching. Rocky Mountain Journal of Mathematics, 35, 1595–1629.
  • Kohaupt, L. (2006). Computation of optimal two-sided bounds for the asymptotic behavior of free linear dynamical systems with application of the differential calculus of norms. Journal of Computational Mathematics and Optimization, 2, 127–173.
  • Kohaupt, L. (2007a). New upper bounds for excited vibration systems with applications of the differential calculus of norms. International Journal of Computer Mathematics, 84, 1035–1057.
  • Kohaupt, L. (2007b). Short overview on the development of a differential calculus of norms with applications to vibration problems (in Russian). Information Science and Control Systems, 13, 21–32. ISSN 1814-2400.
  • Kohaupt, L. (2007c). Construction of a biorthogonal system of principal vectors of the matrices A and A* with applications to the initial value problem. Journal of Computational Mathematics and Optimization, 3, 163–192.
  • Kohaupt, L. (2008a). Solution of the matrix eigenvalue problem with applications to the study of free linear systems. Journal of Computational and Applied Mathematics, 213, 142–165.
  • Kohaupt, L. (2008b). Biorthogonalization of the principal vectors for the matrices and with application to the computation of the explicit representation of the solution of. Applied Mathematical Sciences, 2, 961–974.
  • Kohaupt, L. (2008c). Solution of the vibration problem without the hypothesis or. Applied Mathematical Sciences, 2, 1989–2024.
  • Kohaupt, L. (2008d). Two-sided bounds on the difference between the continuous and discrete evolution as well as on with application of the differential calculus of norms. In M. P. Álvarez (Ed.), Leading-edge applied mathematical modeling research (pp. 319–340). Nova Science. ISBN 978-1-60021-977-1.
  • Kohaupt, L. (2009a, July 10–13). A short overview on the development of a differential calculus for norms with applications to vibration problems. In The 2nd International Multi-Conference on Engineering and Technological Innovation (Vol. II). Orlando, FL. ISBN-10: 1-934272-69-8, ISBN-13: 978-1-934272-69-5.
  • Kohaupt, L. (2009b). On an invariance property of the spectral abscissa with respect to a vector. Journal of Computational Mathematics and Optimization, 5, 175–180.
  • Kohaupt, L. (2009c). Contributions to the determination of optimal bounds on the solution of ordinary differential equations with vibration behavior (Habilitation Thesis, TU Freiberg, 91 pp.). Aachen: Shaker-Verlag.
  • Kohaupt, L. (2010a). Two-sided bounds for the asymptotic behavior of free nonlinear vibration systems with application of the differential calculus of norms. International Journal of Computer Mathematics, 87, 653–667.
  • Kohaupt, L. (2010b, January 27–29). Phase diagram for norms of the solution vector of dynamical multi-degree-of-freedom systems. In Lagakos et al. (Eds.), Recent advances in applied mathematics (pp. 69–74). American Conference on Applied Mathematics (American Math10), WSEAS Conference in Cambridge/USA at Harvard University. WSEAS Press. ISBN 978-960-474-150-2, ISSN 1790-2769.
  • Kohaupt, L. (2011). Two-sided bounds on the displacement and the velocity of the vibration problem with application of the differential calculus of norms. The Open Applied Mathematics Journal, 5, 1–18.
  • Kohaupt, L. (2012). Further investigations on phase diagrams for norms of the solution vector of multi-degree-of-freedom systems. Applied Mathematical Sciences, 6, 5453–5482. ISSN: 1312–885X.
  • Kohaupt, L. (2013). On the vibration-suppression property and monotonicity behavior of a special weighted norm for dynamical systems. Applied Mathematics and Computation, 222, 307–330.
  • Kohaupt, L. (2015a). On norm equivalence between the displacement and velocity vectors for free linear dynamical systems. Cogent Mathematics, 2, 1095699 (33 p.).
  • Kohaupt, L. (2015b). Two-sided bounds on the mean vector and covariance matrix in linear stochastically excited vibration systems with application of the differential calculus of norms. Cogent Mathematics, 2, 1021603 (26 p.).
  • Kučera, V. (1974). The matrix equation A X+X B = C. SIAM Journal on Applied Mathematics, 26, 15–25.
  • Lozinskiǐ, S. M. (1958). Error estimates for the numerical integration of ordinary differential equations I (in Russian). Izv. Vysš. Učebn. Zaved. Matematika, 5, 52–90.
  • Ma, E.-Ch. (1966). A finite series solution of the matrix equation A X - X B = C. SIAM Journal of Applied Mathematics, 14, 490–495.
  • Müller, P. C., & Schiehlen, W. O. (1985). Linear vibrations. Dordrecht: Martinus Nijhoff.
  • Niemeyer, H., & Wermuth, E. (1987). Lineare algebra [Linear algebra]. Braunschweig: Vieweg.
  • Pao, C. V. (1973a). Logarithmic derivatives of a square matrix. Linear Algebra and its Applications, 6, 159–164.
  • Pao, C. V. (1973b). A further remark on the logarithmic derivatives of a square matrix. Linear Algebra and its Applications, 7, 275–278.
  • Söderlind, G., & Mattheij, R. M. M. (1985). Stability and asymptotic estimates in nonautonomous linear differential systems. SIAM Journal on Mathematical Analysis, 16, 69–92.
  • Ström, T. (1972). Minimization of norms and logarithmic norms by diagonal similarities. Computing, 10, 1–7.
  • Ström, T. (1975). On logarithmic norms. SIAM Journal on Numerical Analysis, 10, 741–753.
  • Stummel, F., & Hainer, K. (1980). Introduction to numerical analysis. Edinburgh: Scottish Academic Press.
  • Taylor, A. E. (1958). Introduction to functional analysis. New York, NY: Wiley.
  • Thomson, W. T., & Dahleh, M. D. (1998). Theory of vibration with applications. Upper Saddle River, NJ: Prentice-Hall.
  • Waller, H., & Krings, W. (1975). Matrizenmethoden in der Maschinen- und Bauwerksdynamik [Matrix methods in machine and building dynamics]. Mannheim: Bibliographisches Institut.
  • Whidborne, J. F., & Amer, N. (2011). Computing the maximum transient energy growth. BIT Numerical Mathematics, 51, 447–457.

Appendix 1

Algebraic conditions ensuring the positiveness of $X_{S,0}$ resp. $\varphi_{S,0}$ for $S=[-M^{-1}K,-M^{-1}B]$.

We discuss two cases, namely the case of a diagonalizable matrix A and the case of a general square matrix A.

The corresponding Lemmas A.1 and A.2 will deliver sufficient algebraic criteria for the positiveness of $X_{S,0}$ and $\varphi_{S,0}$, as the case may be, that are independent of the initial condition and the time, which is the important point.

The results are of interest on their own.

Case 1: Diagonalizable matrix A

Let the hypotheses (H1), (H2), and (HS) be fulfilled. Further, according to (37), we assume
(A1) $\sum_{k\in J_{\nu_0}}Sf_k(t)\ne 0,\quad t\ge t_0,$

with $f_k(t)=\begin{bmatrix}g_k(t)\\ h_k(t)\end{bmatrix}$

so that $Sf_k(t)=-M^{-1}Kg_k(t)-M^{-1}Bh_k(t)=-M^{-1}[Kg_k(t)+Bh_k(t)]$, $k\in J_{\nu_0}$.

Thus, (37) resp. (A1) is equivalent to
(A2) $\sum_{k\in J_{\nu_0}}[Kg_k(t)+Bh_k(t)]\ne 0,\quad t\ge t_0.$

We have the following

Sufficient condition for $\sum_{k\in J_{\nu_0}}[Kg_k(t)+Bh_k(t)]\ne 0$, $t\ge t_0$:
(A3) $Kq_k^{(r)}+Br_k^{(r)},\;Kq_k^{(i)}+Br_k^{(i)},\;k\in J_{\nu_0},\ \text{linearly independent}$
(see Kohaupt, 2011, Section 5.1 for a similar case).

Lemma A.1

(Some equivalences of the sufficient algebraic condition)

Let the conditions (H1), (H2), (H3), and (HS) be fulfilled. Further, let $M,B,K\in\mathbb{R}^{n\times n}$ and let $M$ be regular.

Then, the following equivalences are valid:
(A4) $Kq_k^{(r)}+Br_k^{(r)},\;Kq_k^{(i)}+Br_k^{(i)},\;k\in J_{\nu_0},\ \text{linearly independent}$
$\Longleftrightarrow$ (A5) $Kq_k+Br_k,\;K\bar{q}_k+B\bar{r}_k,\;k\in J_{\nu_0},\ \text{linearly independent}$
$\Longleftrightarrow$ (A6) $q_k,\;\bar{q}_k,\;k\in J_{\nu_0},\ \text{linearly independent}$
$\Longleftrightarrow$ (A7) $q_k^{(r)},\;q_k^{(i)},\;k\in J_{\nu_0},\ \text{linearly independent.}$

Proof

The equivalences (A4) $\Leftrightarrow$ (A5) and (A6) $\Leftrightarrow$ (A7) are clear. Further, since
(A8) $r_k=\lambda_kq_k,$
(A5) is equivalent to
(A9) $Kq_k+\lambda_kBq_k,\;K\bar{q}_k+\bar{\lambda}_kB\bar{q}_k,\;k\in J_{\nu_0},\ \text{linearly independent.}$

Since $\lambda_k^2Mq_k+\lambda_kBq_k+Kq_k=0$ and $\bar{\lambda}_k^2M\bar{q}_k+\bar{\lambda}_kB\bar{q}_k+K\bar{q}_k=0$, $k\in J_{\nu_0}$, (A9) is equivalent to
(A10) $-\lambda_k^2Mq_k,\;-\bar{\lambda}_k^2M\bar{q}_k,\;k\in J_{\nu_0},\ \text{linearly independent.}$

Since $\lambda_k\ne 0$ due to (H3) and $M$ is regular by assumption, (A10) is equivalent to (A6).

Case 2: General square matrix $A$. Let the hypotheses (H1′), (H2′), (H3′), and (HS′) be fulfilled. Further, according to (55), we assume
(A11) $\sum_{l\in J_{\nu_0}}\sum_{k=1}^{m_l}Sf_k^{(l)}(t)\ne 0,\quad t\ge t_0,$

with $f_k^{(l)}(t)=\begin{bmatrix}g_k^{(l)}(t)\\ h_k^{(l)}(t)\end{bmatrix}$

so that $Sf_k^{(l)}(t)=-M^{-1}Kg_k^{(l)}(t)-M^{-1}Bh_k^{(l)}(t)=-M^{-1}[Kg_k^{(l)}(t)+Bh_k^{(l)}(t)]$, $k=1,\dots,m_l$, $l\in J_{\nu_0}$. Thus, (55) resp. (A11) is equivalent to
(A12) $\sum_{l\in J_{\nu_0}}\sum_{k=1}^{m_l}[Kg_k^{(l)}(t)+Bh_k^{(l)}(t)]\ne 0,\quad t\ge t_0.$

We have the following

Sufficient condition for $\sum_{l\in J_{\nu_0}}\sum_{k=1}^{m_l}[Kg_k^{(l)}(t)+Bh_k^{(l)}(t)]\ne 0$, $t\ge t_0$:
(A13) $Kq_k^{(l,r)}+Br_k^{(l,r)},\;Kq_k^{(l,i)}+Br_k^{(l,i)},\;k=1,\dots,m_l,\;l\in J_{\nu_0},\ \text{linearly independent}$
(see Kohaupt, 2011, Section 5.2 for a similar case).

Lemma A.2

(Some equivalences of the sufficient algebraic condition) Let the hypotheses (H1′), (H2′), (H3′), and (HS′) be fulfilled. Further, let $M,B,K\in\mathbb{R}^{n\times n}$, and let $M$ be regular.

Then, the following equivalences are valid:
(A14) $Kq_k^{(l,r)}+Br_k^{(l,r)},\;Kq_k^{(l,i)}+Br_k^{(l,i)},\;k=1,\dots,m_l,\;l\in J_{\nu_0},\ \text{linearly independent}$
$\Longleftrightarrow$ (A15) $Kq_k^{(l)}+Br_k^{(l)},\;K\bar{q}_k^{(l)}+B\bar{r}_k^{(l)},\;k=1,\dots,m_l,\;l\in J_{\nu_0},\ \text{linearly independent}$
$\Longleftrightarrow$ (A16) $q_k^{(l)},\;\bar{q}_k^{(l)},\;k=1,\dots,m_l,\;l\in J_{\nu_0},\ \text{linearly independent}$
$\Longleftrightarrow$ (A17) $q_k^{(l,r)},\;q_k^{(l,i)},\;k=1,\dots,m_l,\;l\in J_{\nu_0},\ \text{linearly independent.}$

Proof

The equivalences (A14) $\Leftrightarrow$ (A15) and (A16) $\Leftrightarrow$ (A17) are clear.

Now we prove the equivalence of (A15) and (A16). From the relations
(A18) $(A-\lambda_lE)p_k^{(l)}=p_{k-1}^{(l)}$

with
(A19) $p_k^{(l)}=\begin{bmatrix}q_k^{(l)}\\ r_k^{(l)}\end{bmatrix},$
$k=1,\dots,m_l$, $l\in J_{\nu_0}$, in a first step, we derive associated relations for $q_k^{(l)}$ and $r_k^{(l)}$, $k=1,\dots,m_l$, $l\in J_{\nu_0}$. Now, written out in block components, (A18) means

(A20) $r_k^{(l)}=\lambda_lq_k^{(l)}+q_{k-1}^{(l)}$ and $-M^{-1}Kq_k^{(l)}-M^{-1}Br_k^{(l)}=\lambda_lr_k^{(l)}+r_{k-1}^{(l)}$

or
(A21) $Kq_k^{(l)}+Br_k^{(l)}=-M[\lambda_lr_k^{(l)}+r_{k-1}^{(l)}],$
$k=1,\dots,m_l$, $l\in J_{\nu_0}$. Based on (A21) and the assumed regularity of $M$, we see that (A15) is equivalent to
(A22) $\lambda_lr_k^{(l)}+r_{k-1}^{(l)},\;\bar{\lambda}_l\bar{r}_k^{(l)}+\bar{r}_{k-1}^{(l)},\;k=1,\dots,m_l,\;l\in J_{\nu_0},\ \text{linearly independent.}$
In the second step, we show the equivalence of (A22) with the following condition:
(A23) $r_k^{(l)},\;\bar{r}_k^{(l)},\;k=1,\dots,m_l,\;l\in J_{\nu_0},\ \text{linearly independent,}$

and then, in the third step, the equivalence of (A23) with (A16).

Equivalence of (A22) and (A23):

(A22) $\Rightarrow$ (A23): So, let (A22) be fulfilled. We write $r_k^{(l)}$ in the form
(A24) $r_k^{(l)}=\lambda_l\tilde{r}_k^{(l)}+\tilde{r}_{k-1}^{(l)},$
$k=1,\dots,m_l$, $l\in J_{\nu_0}$, where the $\tilde{r}_k^{(l)}$ are also principal vectors of stage $k$ corresponding to the eigenvalue $\lambda_l$. This is done as follows: $r_1^{(l)}=\lambda_l\tilde{r}_1^{(l)}+\tilde{r}_0^{(l)},$

where $\tilde{r}_1^{(l)}=\frac{1}{\lambda_l}r_1^{(l)}$ is a principal vector of stage 1 (or eigenvector) and $\tilde{r}_0^{(l)}=0$. Similarly, $r_2^{(l)}=\lambda_l\tilde{r}_2^{(l)}+\tilde{r}_1^{(l)},$

where $\tilde{r}_2^{(l)}=\frac{1}{\lambda_l}(r_2^{(l)}-\tilde{r}_1^{(l)})$ is a principal vector of stage 2 corresponding to the eigenvalue $\lambda_l$.

Proceeding in this way, using induction, (A24) is proven.

Therefore, apart from (A22), also the following property must hold:
(A25) $r_k^{(l)}=\lambda_l\tilde{r}_k^{(l)}+\tilde{r}_{k-1}^{(l)},\;\bar{r}_k^{(l)}=\bar{\lambda}_l\bar{\tilde{r}}_k^{(l)}+\bar{\tilde{r}}_{k-1}^{(l)},\quad k=1,\dots,m_l,\;l\in J_{\nu_0},\ \text{linearly independent,}$

which proves (A23).

(A23) $\Rightarrow$ (A22): Let (A23) be fulfilled and
$\sum_{l\in J_{\nu_0}}\sum_{k=1}^{m_l}\bigl[\alpha_k^{(l)}(\lambda_lr_k^{(l)}+r_{k-1}^{(l)})+\beta_k^{(l)}(\bar{\lambda}_l\bar{r}_k^{(l)}+\bar{r}_{k-1}^{(l)})\bigr]=0.$

Fully written, we obtain, with $r_0^{(l)}=0$, $l\in J_{\nu_0}$,
$\sum_{l\in J_{\nu_0}}\bigl[\alpha_1^{(l)}(\lambda_lr_1^{(l)}+r_0^{(l)})+\beta_1^{(l)}(\bar{\lambda}_l\bar{r}_1^{(l)}+\bar{r}_0^{(l)})+\alpha_2^{(l)}(\lambda_lr_2^{(l)}+r_1^{(l)})+\beta_2^{(l)}(\bar{\lambda}_l\bar{r}_2^{(l)}+\bar{r}_1^{(l)})+\alpha_3^{(l)}(\lambda_lr_3^{(l)}+r_2^{(l)})+\beta_3^{(l)}(\bar{\lambda}_l\bar{r}_3^{(l)}+\bar{r}_2^{(l)})+\dots+\alpha_{m_l}^{(l)}(\lambda_lr_{m_l}^{(l)}+r_{m_l-1}^{(l)})+\beta_{m_l}^{(l)}(\bar{\lambda}_l\bar{r}_{m_l}^{(l)}+\bar{r}_{m_l-1}^{(l)})\bigr]=0$

or, regrouping,
$\sum_{l\in J_{\nu_0}}\bigl[(\alpha_1^{(l)}\lambda_l+\alpha_2^{(l)})r_1^{(l)}+(\beta_1^{(l)}\bar{\lambda}_l+\beta_2^{(l)})\bar{r}_1^{(l)}+(\alpha_2^{(l)}\lambda_l+\alpha_3^{(l)})r_2^{(l)}+(\beta_2^{(l)}\bar{\lambda}_l+\beta_3^{(l)})\bar{r}_2^{(l)}+\dots+(\alpha_{m_l-1}^{(l)}\lambda_l+\alpha_{m_l}^{(l)})r_{m_l-1}^{(l)}+(\beta_{m_l-1}^{(l)}\bar{\lambda}_l+\beta_{m_l}^{(l)})\bar{r}_{m_l-1}^{(l)}+\alpha_{m_l}^{(l)}\lambda_l\,r_{m_l}^{(l)}+\beta_{m_l}^{(l)}\bar{\lambda}_l\,\bar{r}_{m_l}^{(l)}\bigr]=0.$

By assumption, from the last line, we get $\alpha_{m_l}^{(l)}\lambda_l=0$, $\beta_{m_l}^{(l)}\bar{\lambda}_l=0$

or $\alpha_{m_l}^{(l)}=\beta_{m_l}^{(l)}=0$;

further, $\alpha_{m_l-1}^{(l)}\lambda_l+\alpha_{m_l}^{(l)}=0$, $\beta_{m_l-1}^{(l)}\bar{\lambda}_l+\beta_{m_l}^{(l)}=0$,

leading to $\alpha_{m_l-1}^{(l)}=\beta_{m_l-1}^{(l)}=0$.

Continuing in this way, we ultimately obtain $\alpha_k^{(l)}=\beta_k^{(l)}=0$, $k=1,\dots,m_l$, $l\in J_{\nu_0}$,

so that (A22) is proven.

Further, because of the representation (A20), (A23) is equivalent to
(A26) $\lambda_lq_k^{(l)}+q_{k-1}^{(l)},\;\bar{\lambda}_l\bar{q}_k^{(l)}+\bar{q}_{k-1}^{(l)},\;k=1,\dots,m_l,\;l\in J_{\nu_0},\ \text{linearly independent.}$

Similarly as above, this is equivalent to (A16).

On the whole, Lemma A.2 is proven.

Alternative proof of the positiveness of $X_{S,0}$ resp. $\varphi_{S,0}$ when $S=[-M^{-1}K,-M^{-1}B]$

In the special case of $S=[-M^{-1}K,-M^{-1}B]$, there is an alternative proof of the positiveness of $X_{S,0}$. This alternative proof is simpler, but the foregoing one is applicable to more general forms of $S$.

We employ the vector resp. matrix norm $\|\cdot\|_2$ to obtain
$\|x_S(t)\|_2^2=\|Sx(t)\|_2^2=\|-M^{-1}Ky(t)\|_2^2+\|-M^{-1}B\dot{y}(t)\|_2^2\ge\|K^{-1}M\|_2^{-2}\,\|y(t)\|_2^2+\|B^{-1}M\|_2^{-2}\,\|\dot{y}(t)\|_2^2\ge X_{S,2,0}^2\,\|x(t)\|_2^2$

with $X_{S,2,0}^2=\min\{\|K^{-1}M\|_2^{-2},\|B^{-1}M\|_2^{-2}\}$

so that $\|x_S(t)\|_2\ge X_{S,2,0}\,\|x(t)\|_2$.

Due to the equivalence of norms in finite-dimensional spaces, this entails that, for every vector norm $\|\cdot\|$, one has $\|x_S(t)\|\ge X_{S,0}\,\|x(t)\|$

with a positive constant $X_{S,0}$.

A similar proof of the positiveness of $\varphi_{S,0}$ is possible. This is left to the reader.
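A minimal MATLAB sketch of the alternative constant (with M, B, K from the assembly sketch of Section 10.3):

    M = eye(5);                                  % with B, K as assembled above (assumption)
    XS20 = min(1/norm(K\M, 2), 1/norm(B\M, 2));  % X_{S,2,0} = min{||inv(K)*M||^{-1}, ||inv(B)*M||^{-1}}
    % then ||x_S(t)||_2 >= XS20*||x(t)||_2, t >= t_0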